About the Role:
We are seeking a highly skilled Machine Learning Engineer to design, build, and deploy scalable ML models and end-to-end AI solutions. The ideal candidate will have hands-on experience across the ML lifecycle — from data preprocessing to model training, fine-tuning, evaluation, deployment, and monitoring. You’ll collaborate with cross-functional teams to translate business problems into data-driven solutions and work with modern MLOps frameworks to ensure efficiency, reproducibility, and scalability.
Key Responsibilities:
- Develop and implement machine learning models for structured and unstructured data.
- Perform data preprocessing, feature engineering, and exploratory data analysis using Pandas and NumPy.
- Design and maintain end-to-end ML pipelines for training, validation, deployment, and monitoring.
- Apply and fine-tune ML algorithms using Scikit-learn, TensorFlow, and PyTorch.
- Utilize PySpark for large-scale data processing and distributed ML workloads.
- Implement and manage model deployment using AWS SageMaker, Azure ML, or GCP Vertex AI.
- Use MLflow or similar tools for experiment tracking, versioning, and reproducibility.
- Monitor and optimize models for performance, drift, and scalability in production environments.
- Work with Large Language Models (LLMs) such as OpenAI’s GPT models and open-source models via Hugging Face Transformers for advanced NLP and generative AI use cases.
- Collaborate with Data Scientists, Engineers, and Product teams to integrate ML solutions into production systems.
- Contribute to MLOps practices, ensuring automation and efficiency across the model lifecycle.
- Stay up to date with emerging trends in ML, AI frameworks, and cloud-based ML solutions.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
- 4–5 years of hands-on experience in Machine Learning Engineering or a similar role.
- Strong programming skills in Python with proficiency in Pandas, NumPy, and Scikit-learn.
- Expertise in TensorFlow, PyTorch, and PySpark.
- Experience building and deploying end-to-end ML pipelines.
- Strong understanding of model evaluation techniques, fine-tuning, and optimization.
- Experience with MLOps tools such as MLflow, Kubeflow, or DVC.
- Familiarity with the OpenAI API, Hugging Face Transformers, and LLM architectures.
- Proficiency with cloud ML platforms like AWS SageMaker, Azure ML, or GCP Vertex AI.
- Solid understanding of model lifecycle management, versioning, and experiment reproducibility.
- Excellent analytical thinking, problem-solving, and communication skills.
- Proven ability to work effectively in cross-functional and collaborative environments.
Nice to Have:
- Experience with data versioning tools (e.g., DVC, Delta Lake).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Exposure to generative AI applications and prompt engineering.
Why Join Us:
- Opportunity to work on cutting-edge AI/ML and LLM-based projects.
- Collaborative, growth-driven environment.
- Access to the latest AI tools and cloud ML infrastructure.
- Competitive compensation and professional development opportunities.
Job Type: Full-time
Pay: ₹100,000.00 - ₹120,000.00 per month
Work Location: Hybrid remote in Pune, Maharashtra