Job Description
The Opportunity
We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products deployed to some of the world’s largest organizations.
Security testing is only part of enterprise AI assurance.
We are seeking an AI Risk & Responsible AI Lead to design and operationalize structured evaluation frameworks across safety, bias, robustness, explainability, and data governance.
This role ensures our AI systems are secure, trustworthy, measurable, and enterprise-ready.
What You’ll Do
Design and implement model evaluation frameworks across AI products
Develop methodologies for:
Bias and fairness testing
Hallucination and reliability assessment
Robustness and stress testing
Safety benchmarking
Evaluate training data governance practices
Review RAG systems for retrieval accuracy and exposure risks
Establish measurable risk metrics across AI deployments
Align evaluation outputs with:
NIST AI Risk Management Framework
ISO 27701 privacy requirements
Enterprise governance standards
Produce structured, executive-ready documentation
Partner with product and engineering teams to integrate risk mitigation strategies
This role bridges AI engineering, governance, risk quantification, and enterprise accountability.
Requirements
What We’re Looking For
Core AI Evaluation Expertise
Experience designing model evaluation frameworks
Familiarity with bias detection methodologies
Understanding of hallucination testing and reliability measurement
Experience stress-testing LLM-based systems
Strong Python skills and hands-on experimentation capability
Governance & Risk Fluency
Knowledge of NIST AI RMF
Familiarity with privacy-by-design principles
Experience operating within ISO 27001 / SOC 2 environments
Understanding of enterprise AI risk posture expectations
Analytical & Communication Strength
Ability to translate model risk into business impact
Strong documentation skills
Comfortable presenting findings to executive audiences
Systems-thinking mindset
Who You Are
Structured and methodical
Deeply curious about model behavior
Pragmatic about risk
Comfortable challenging assumptions
Independent and decisive
Able to operate at executive altitude without losing technical depth
You understand that AI risk is not just about being hacked; it's about predictability, fairness, resilience, and trust.
Benefits
Comprehensive Private Medical Coverage
Support for Mental Health Expenses
Life Insurance Options
Attractive Compensation Package
Ready to Apply?
Don't miss this opportunity! Apply now and join our team.
Job Details
Posted Date:
February 28, 2026
Job Type:
Finance and Insurance
Location:
Canada
Company:
C-Serv