
AI/ML Engineer


Job Information

    Date Opened: 11/19/2025
    City: Chennai
    State/Province: Tamil Nadu
    Country: India
    Zip/Postal Code: 600096
    Job Role: AI/ML Engineering
    Industry: IT Services
    Job Type: Full time

Job Description

Face-to-Face Walk-in Drive: AI/ML Developer, AI Engineer

Date: 29th Nov 2025
Time: 10 AM to 3 PM
Venue: World Trade Center, 1st Floor, Tower B, 5/142, Rajiv Gandhi Salai, Perungudi, Chennai, Tamil Nadu 600096
Years of experience: 3–10 years

Introduction to the Role:


Are you passionate about building intelligent systems that learn, adapt, and deliver real-world value? Join our high-impact AI & Machine Learning Engineering team and be a key contributor in shaping the next generation of intelligent applications. As an AI/ML Engineer, you’ll have the unique opportunity to develop, deploy, and scale advanced ML and Generative AI (GenAI) solutions in production environments, leveraging cutting-edge technologies, frameworks, and cloud platforms.


In this role, you will collaborate with cross-functional teams including data engineers, product managers, MLOps engineers, and architects to design and implement production-grade AI solutions across domains. If you're looking to work at the intersection of deep learning, GenAI, cloud computing, and MLOps — this is the role for you.


Accountabilities:


  • Design, develop, train, and deploy production-grade ML and GenAI models across use cases including NLP, computer vision, and structured data modeling.


  • Leverage frameworks such as TensorFlow, Keras, PyTorch, and LangChain to build scalable deep learning and LLM-based solutions.


  • Develop and maintain end-to-end ML pipelines with reusable, modular components for data ingestion, feature engineering, model training, and deployment.


  • Implement and manage models on cloud platforms such as AWS, GCP, or Azure using services like SageMaker, Vertex AI, or Azure ML.


  • Apply MLOps best practices using tools like MLflow, Kubeflow, Weights & Biases, Airflow, DVC, and Prefect to ensure scalable and reliable ML delivery.


  • Incorporate CI/CD pipelines (using Jenkins, GitHub Actions, or similar) to automate testing, packaging, and deployment of ML workloads.


  • Containerize applications using Docker and orchestrate scalable deployments via Kubernetes.


  • Integrate LLMs with APIs and external systems using LangChain, Vector Databases (e.g., FAISS, Pinecone), and prompt engineering best practices.


  • Collaborate closely with data engineers to access, prepare, and transform large-scale structured and unstructured datasets for ML pipelines.


  • Build monitoring and retraining workflows to ensure models remain performant and robust in production.


  • Evaluate and integrate third-party GenAI APIs or foundational models where appropriate to accelerate delivery.


  • Maintain rigorous experiment tracking, hyperparameter tuning, and model versioning.


  • Champion industry standards and evolving practices in ML lifecycle management, cloud-native AI architectures, and responsible AI.


  • Work across global, multi-functional teams, including architects, principal engineers, and domain experts.


Essential Skills / Experience:


  • 3–10 years of hands-on experience in developing, training, and deploying ML/DL/GenAI models.


  • Strong programming expertise in Python with proficiency in machine learning, data manipulation, and scripting.


  • Demonstrated experience working with Generative AI models and Large Language Models (LLMs) such as GPT, LLaMA, Claude, or similar.


  • Hands-on experience with deep learning frameworks like TensorFlow, Keras, or PyTorch.


  • Experience in LangChain or similar frameworks for LLM-based app orchestration.


  • Proven ability to implement and scale CI/CD pipelines for ML workflows using tools like Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.


  • Familiarity with containerization (Docker) and orchestration tools like Kubernetes.


  • Experience working with cloud platforms (AWS, Azure, GCP) and relevant AI/ML services such as SageMaker, Vertex AI, or Azure ML Studio.


  • Knowledge of MLOps tools such as MLflow, Kubeflow, DVC, Weights & Biases, Airflow, and Prefect.


  • Strong understanding of data engineering concepts, including batch/streaming pipelines, data lakes, and real-time processing (e.g., Kafka).


  • Solid grasp of statistical modeling, machine learning algorithms, and evaluation metrics.


  • Experience with version control systems (Git) and collaborative development workflows.


  • Ability to translate complex business needs into scalable ML architectures and systems.


Why Join Us?


  • Build and deploy cutting-edge LLM and GenAI applications that solve real-world problems


  • Collaborate with thought leaders across engineering, product, and data science


  • Work in a dynamic, cloud-native, and automation-driven AI environment


  • Accelerate your growth through certification programs and continuous learning


  • Be part of an innovation-first team that values openness, agility, and integrity


About Agilisium:


  • Agilisium is an AWS Advanced Consulting Partner that enables companies to accelerate their "Data-to-Insights" leap.
  • With $50+ million in annual revenue and over 30% year-over-year growth, Agilisium is one of the fastest-growing IT solution providers in Southern California.
  • Our most important asset? People.
  • Talent management plays a vital role in our business strategy.
