Job Requirements
Hires in: Not specified
Employment Type: Not specified
Company Location: Not specified
Salary: Not specified
Responsibilities
Design, train, and evaluate ML models for classification, anomaly detection, forecasting, and natural language understanding tasks.
Build and fine-tune deep learning models, including RNNs, GRUs, LSTMs, and Transformer architectures (e.g., BERT, T5, GPT).
Develop and deploy Generative AI solutions, including RAG pipelines for applications such as document search, Q&A, and summarization.
Apply model optimization techniques, including quantization, to improve latency and reduce memory/compute overhead in production (a minimal sketch follows this list).
Fine-tune large language models (LLMs) using Supervised Fine-Tuning (SFT) and Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA or QLoRA (optional).
Define, track, and report relevant evaluation metrics; monitor model drift and retrain models as required.
Collaborate with cross-functional teams (data engineering, backend, DevOps) to productionize ML models using CI/CD pipelines.
Maintain clean, reproducible code with proper documentation and versioning of experiments.
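The quantization responsibility above can be illustrated with a short, hedged sketch. The following is a minimal example of post-training dynamic INT8 quantization in PyTorch, assuming a CPU inference setting; the toy two-layer model is a placeholder for demonstration, not part of the posting.

# Minimal sketch: post-training dynamic INT8 quantization in PyTorch.
# The toy model is a stand-in for a trained network (illustrative assumption).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Dynamic quantization stores Linear weights as INT8 and quantizes
# activations on the fly at inference time, cutting memory and CPU latency.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Both models accept the same inputs; outputs should be close but not identical.
x = torch.randn(4, 128)
with torch.no_grad():
    print(model(x).shape, quantized(x).shape)

Static quantization would follow a similar flow but additionally requires calibration data and an explicit backend configuration.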
Required Skills & Qualifications
4–5 years of hands-on experience in machine learning, deep learning, or data science roles.
Proficiency in Python and ML/DL libraries: scikit-learn, pandas, PyTorch, TensorFlow.
Strong understanding of traditional ML and deep learning, particularly for sequence and NLP tasks.
Experience with Transformer models and open-source LLMs (e.g., Hugging Face Transformers).
Familiarity with Generative AI tools and RAG frameworks (e.g., LangChain, LlamaIndex).
Experience in model quantization (dynamic/static, INT8) and deploying models in resource-constrained environments.
Knowledge of vector stores (e.g., FAISS, Pinecone, Azure AI Search), embeddings, and retrieval techniques (a minimal retrieval sketch follows this list).
Proficiency in evaluating models using statistical and business metrics.
Experience with model deployment, monitoring, and performance tuning in production.
Familiarity with Docker, MLflow, and CI/CD practices.
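The vector-store and retrieval skills listed above can be sketched in a few lines. The example below assumes the faiss-cpu package is installed and uses random float32 vectors as stand-ins for real document and query embeddings from an embedding model; it is an illustration, not a prescribed stack.

# Minimal sketch: exact nearest-neighbour retrieval with FAISS.
# Random vectors stand in for embeddings produced by a real embedding model.
import faiss
import numpy as np

dim = 64  # embedding dimension (illustrative assumption)
rng = np.random.default_rng(0)

doc_vectors = rng.standard_normal((1000, dim)).astype("float32")
index = faiss.IndexFlatL2(dim)   # exact L2 search; approximate indexes scale better
index.add(doc_vectors)

query = rng.standard_normal((1, dim)).astype("float32")
distances, ids = index.search(query, 5)   # 5 nearest stored vectors
print(ids[0], distances[0])

In a RAG pipeline, the retrieved ids would map back to text chunks that are passed to an LLM as context; managed stores such as Pinecone or Azure AI Search play the same role behind a service API.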
Preferred Qualifications
Experience fine-tuning LLMs (SFT, LoRA, QLoRA) on domain-specific datasets (a minimal LoRA sketch follows this list).
Exposure to MLOps platforms (e.g., SageMaker, Vertex AI, Kubeflow).
Familiarity with distributed data processing frameworks (e.g., Spark) and orchestration tools (e.g., Airflow).
Contributions to research papers, blogs, or open-source projects in ML/NLP/Generative AI.
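The preferred LoRA fine-tuning experience can be illustrated with a short, hedged sketch using Hugging Face PEFT. The base model (gpt2), adapter rank, and target module below are assumptions chosen for brevity, not a setup the posting specifies.

# Minimal sketch: attaching a LoRA adapter to a causal LM with Hugging Face PEFT.
# gpt2 and the "c_attn" target module are illustrative choices only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # adapter rank (assumption)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

The wrapped model can then be trained on a domain-specific dataset with a standard SFT loop; QLoRA follows the same pattern with the base model loaded in 4-bit precision.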