Role: MLOps Engineer
Experience: 6 to 12 years
Keywords / Skillset:
- AWS SageMaker, Azure ML Studio, GCP Vertex AI
- PySpark, Azure Databricks
- MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
- Kubernetes, AKS, Terraform, FastAPI
Responsibilities:
- Model Deployment, Model Monitoring, Model Retraining
- Deployment pipeline, Inference pipeline, Monitoring pipeline, Retraining pipeline
- Drift Detection, Data Drift, Model Drift
- Experiment Tracking
- MLOps Architecture
- REST API publishing
Job Responsibilities:
- Research and implement MLOps tools, frameworks and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile and automated approach to Data Science.
- Conduct internal training sessions and presentations on the benefits and usage of MLOps tools.
Required experience and qualifications:
- Extensive experience with Kubernetes.
- Experience operationalizing Data Science projects (MLOps) using at least one popular framework or platform (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts, with hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks; good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms, preferably AWS, is an advantage.