Qureos


Risk Management Specialist

AI Governance Specialist (Remote)

Required Qualifications

  • 8+ years of experience in AI/ML engineering, cloud architecture, IT governance, or a closely related field, with at least 3 years focused specifically on AI governance, compliance, or risk management.
  • Deep understanding of enterprise cloud platforms (Azure, AWS, Google Cloud Platform) and on-premises AI infrastructure, including GPU compute, networking, storage, and security considerations.
  • Demonstrated expertise in AI regulatory frameworks and standards such as NIST AI RMF, the EU AI Act, ISO/IEC 42001, SOC 2, and GDPR as they pertain to AI systems.
  • Proficiency in AI-specific threat modeling frameworks, particularly MAESTRO for agentic AI threat analysis across its seven-layer reference architecture (foundation models, data operations, agent frameworks, deployment infrastructure, evaluation and observability, security and compliance, agent ecosystem) and LINDDUN for systematic privacy threat identification. Ability to extend and complement traditional threat modeling approaches (STRIDE, PASTA) with these AI-focused methodologies.
  • Hands-on experience with AI/ML lifecycle management tools, model registries, and monitoring platforms.
  • Strong understanding of data governance principles, including data classification, lineage, sovereignty, and privacy-preserving techniques (federated learning, differential privacy).
  • Proven ability to engage C-level executives, translate complex technical concepts into business-aligned recommendations, and drive consensus in enterprise settings.
  • Excellent written and verbal communication skills, with experience producing governance documentation, executive presentations, and compliance reports.

Preferred Qualifications

  • Relevant certifications: CISA, CRISC, CGEIT, AWS/Azure/Google Cloud Platform AI or Solutions Architect certifications, or IAPP privacy certifications (CIPP, CIPM).
  • Experience advising regulated industries such as financial services, healthcare, government, or energy.
  • Familiarity with responsible AI toolkits (Microsoft Responsible AI Toolkit, IBM AI Fairness 360, Google What-If Tool).
  • Background in developing or auditing AI systems for fairness, explainability, and accountability.
  • Familiarity with OWASP GenAI Security Project resources (including the Multi-Agentic System Threat Modeling Guide) and Cloud Security Alliance (CSA) AI security publications.

Education

Bachelor's degree in Computer Science, Information Security, Data Science, Engineering, or a related field required. Master's degree in a relevant discipline preferred. Equivalent professional experience and certifications will be considered in lieu of formal education.

For applications and inquiries, contact: hirings@openkyber.com

© 2026 Qureos. All rights reserved.