We are seeking a highly motivated and hands-on AI Engineer with 2–3 years of experience in building, deploying, and maintaining AI-powered systems and workflows. This role focuses on practical implementation across Generative AI, Agentic Automation, Voice AI, and Open-Source Model Deployment.
The ideal candidate will have experience designing intelligent automation pipelines, deploying AI models in cloud environments, and rapidly adapting to new AI tools, frameworks, and emerging technologies. You will collaborate closely with cross-functional teams to deliver scalable, production-ready AI solutions.
Key Responsibilities
Agentic AI & Workflow Automation
- Design and build agentic AI workflows and automation pipelines using frameworks such as LangChain, LangGraph, and similar tools.
- Architect multi-agent systems where multiple AI agents collaborate, delegate tasks, and execute complex workflows autonomously.
- Develop AI-powered automation systems using no-code/low-code platforms including n8n, Make, Flowise, alongside custom Python development.
Voice AI Development
- Build and deploy voice agents using tools such as Vapi, ElevenLabs, or comparable platforms.
- Integrate speech-to-text (STT), text-to-speech (TTS), and conversational voice capabilities into AI solutions.
Model Deployment & Infrastructure
- Deploy, host, and maintain open-source AI models for chat, voice, image, and video generation on cloud/GPU infrastructure.
- Fine-tune open-source models using LoRA / QLoRA for domain-specific applications.
- Manage cloud environments including provisioning, scaling, monitoring, and cost optimization.
RAG / Knowledge Base Management
- Design and maintain knowledge bases and data pipelines, including:
  - Document ingestion and processing
  - Chunking strategies
  - Embedding pipelines
  - RAG source management and freshness
Prompt Engineering & AI Architecture
- Craft production-grade prompts and system prompts for reliable AI outputs.
- Design multi-turn conversational flows and optimize context window management.
- Select and orchestrate appropriate AI models across providers (OpenAI, Anthropic, Google, Open Source) based on use-case requirements.
Testing, Evaluation & Monitoring
- Build evaluation/testing pipelines for AI outputs, including:
  - LLM-as-Judge frameworks
  - Human evaluation workflows
  - Prompt regression testing
- Set up observability/logging systems using tools such as LangSmith, Helicone, or custom monitoring solutions.
- Monitor agent performance, trace failures, and improve output quality over time.
Security & Optimization
- Implement AI security guardrails, including:
  - Prompt injection mitigation
  - Output filtering
  - PII protection
  - Safe-by-default AI architecture
- Optimize inference costs using:
  - Token management
  - Caching strategies
  - Smart model routing
Engineering Best Practices
- Contribute to CI/CD pipelines and maintain clean, scalable, production-ready codebases.
- Collaborate with teams to integrate AI solutions into products and business processes.
- Continuously explore and integrate emerging AI tools, models, and frameworks.
Required Skills & Experience
Experience
- 2–3 years of experience in AI/ML systems, AI-focused software engineering, or related technical roles.
- Proven experience building and deploying AI-powered systems in production.
AI & Automation
- Hands-on experience with agentic AI workflows, AI automation pipelines, multi-agent architectures, and task delegation patterns.
Programming & Frameworks
- Strong proficiency in Python.
- Experience with LangChain, LangGraph, or similar orchestration frameworks.
- Experience with REST APIs, webhooks, and microservices.
- Familiarity with MCP (Model Context Protocol) for tool/service integrations.
Voice AI
- Experience with Vapi, ElevenLabs, or other voice/conversational AI platforms.
Deployment & Infrastructure
- Ability to deploy and serve open-source LLMs, image/video generation models, and TTS/STT models.
- Experience with Docker, basic Kubernetes, and GPU/cloud hosting environments.
Fine-Tuning / Optimization
- Familiarity with LoRA / QLoRA and other model customization techniques.
Evaluation & Testing
- Experience building AI evaluation pipelines and prompt testing/regression systems.
Security
- Understanding of prompt injection defense, output moderation/filtering, and PII handling.
Cloud Platforms
- Experience with Azure, AWS, or GCP.
- Understanding of hosting, scaling, monitoring, and cost control.
Observability
- Familiarity with LangSmith, Helicone, or other AI tracing/debugging tools.
Knowledge Base / RAG
- Strong understanding of embedding pipelines, vector search, chunking and document strategies, and RAG maintenance.
Nice to Have
- Experience with vector databases: Pinecone, FAISS, Weaviate, Qdrant.
- Familiarity with Claude Code, Cursor, or other AI-assisted development tools.
- Experience with ComfyUI, Stable Diffusion, or open-source image/video generation.
- Knowledge of WebSockets, real-time streaming, and telephony integrations.
- Exposure to Azure ML, Databricks, distributed training, or mixed-precision (fp16) training.
- Understanding of responsible AI and governance practices.
Education
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field
- OR equivalent practical experience
Job Type: Full-time
Work Location: In person