Job Requirements
AI Application Security Specialist
Role Summary
The AI Application Security Specialist will be responsible for evaluating, securing, and continuously monitoring the organization's AI, GPT, and agentic systems against emerging threats and vulnerabilities.
The role involves performing risk assessments, threat modeling, vulnerability analysis, and control validation for AI/ML models, data pipelines, APIs, and supporting infrastructure.
The ideal candidate will have a strong understanding of AI system architectures, LLM-based applications, OWASP Top 10 for AI, and general information security principles. They will work collaboratively with Data Science, Engineering, and IT teams to embed security and privacy across the AI lifecycle — from model design and training to deployment and monitoring.
Location: Corporate Office (work-from-office role)
Experience: 10-15 years in IT and information security, including 3-4 years in AI/ML
Qualification: Master's degree in IT, Computer Science, Cybersecurity, or a related technical stream, with strong demonstrated experience in AI implementations
Certifications
- AI certifications (e.g., AI Security Foundation, ML, CertNexus CAIS, Microsoft Responsible AI).
- Information security certifications: ISO 27001 LA/LI, CEH, CompTIA CySA+, or CISSP (Associate).
- Cloud or privacy certifications: Azure certifications, CCSK, CCSP, or CDPSE.
- Preferred: project management certifications.
Key Responsibilities
AI/LLM System Security Assessment
- Conduct information security assessments of AI models, GPT-based systems, and agentic applications to identify architectural and operational risks.
- Evaluate data ingestion, model training, inference pipelines, and API endpoints for exposure to adversarial attacks, data leakage, and prompt injection risks.
- Review third-party AI services and APIs for compliance with security, privacy, and data residency requirements.
- Perform static and dynamic testing of AI systems, validating the implementation of authentication, authorization, and data handling controls.

Threat Modeling and Risk Analysis
- Conduct threat modeling for AI/ML pipelines and LLM-based workflows using STRIDE, MITRE ATLAS, or custom AI threat frameworks.
- Identify risks associated with model poisoning, prompt manipulation, data exfiltration, and hallucination-based misuse.
- Assess business impact and recommend risk mitigation measures, control improvements, or compensating safeguards.

AI Security Control Implementation
- Design and recommend security controls for AI data pipelines, APIs, and model storage, including access management, data anonymization, and model integrity validation.
- Collaborate with engineering and MLOps teams to embed security into the AI lifecycle (AI-SDLC).
- Contribute to policy and standards development for AI system security, privacy, and responsible use.
- Monitor AI deployments for anomalies and potential misuse through integrated security logging and behavior analysis tools.

Collaboration and Governance
- Work closely with Data Science, Cloud Security, Privacy, and Legal teams to ensure responsible and compliant AI deployments.
- Support AI incident response activities, including vulnerability triage, root cause analysis, and corrective action tracking.
- Participate in security reviews of AI vendor products and APIs, ensuring contractual and technical due diligence.

Continuous Learning and Research
- Stay current on evolving AI security standards, including the OWASP Top 10 for LLMs, NIST AI RMF, ISO/IEC 42001, and regulatory trends.
- Research new adversarial attack methods and defensive countermeasures in AI/LLM ecosystems.
- Contribute to internal knowledge sharing and capability-building initiatives on AI security.
Key Attributes
- Analytical and curious mindset, with the ability to connect technical vulnerabilities to business risks.
- Self-driven and capable of working independently on complex AI assessments.
- Excellent communication skills to interface with data scientists, developers, and senior leadership.
- Demonstrated ability to balance innovation with security and promote a culture of responsible AI adoption.
Work Experience
Required Skills and Experience
- 6-8 years of experience in Information Security, Application Security, or AI/ML Security roles.
- Strong understanding of AI model architectures, LLM frameworks (e.g., OpenAI, Anthropic, Hugging Face, LangChain), and agentic AI implementations.
- Hands-on experience with AI implementation and security assessment techniques, including data validation, model evaluation, and prompt injection testing.
- Familiarity with the OWASP Top 10 for LLMs, MITRE ATLAS, and the NIST AI Risk Management Framework (RMF).
- Experience in threat modeling (STRIDE, DREAD) and risk analysis for AI/ML pipelines.
- Working knowledge of API and application security, cloud security controls (AWS, Azure, GCP), and data protection mechanisms.
- Strong technical writing and reporting skills, with the ability to translate findings into actionable risk language for stakeholders.