JOB DESCRIPTION
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III at JPMorgan Chase within the Employee Platforms team, you will serve as a seasoned Data Engineer with strong experience in AWS, Python, and MLOps, designing, building, and maintaining scalable data and machine learning infrastructure. You will collaborate with data scientists and software engineers to enable efficient data processing, model deployment, and monitoring in cloud environments.
Job responsibilities
- Design, implement, and optimize ETL/ELT pipelines using Python and AWS services (e.g., Glue, Lambda, S3, Redshift).
- Support the deployment, monitoring, and maintenance of machine learning models in production, leveraging MLOps best practices and tools.
- Build and manage scalable data architectures on AWS, ensuring reliability, security, and cost-effectiveness.
- Collaborate closely with data scientists, ML engineers, and business stakeholders to understand requirements and deliver robust solutions.
- Develop automated workflows for data ingestion, transformation, and model deployment using CI/CD pipelines.
- Monitor data pipelines and ML models for performance, data drift, and system health; implement improvements as needed.
- Document data processes, architectures, and model workflows; ensure compliance with internal and regulatory standards.
- Optimize data workflows and architectures for efficiency and scalability.
- Integrate new data sources and technologies into existing data infrastructure.
- Troubleshoot and resolve issues in data pipelines and machine learning operations.
- Ensure adherence to best practices in data engineering, security, and compliance.
Required qualifications, capabilities, and skills
Preferred qualifications, capabilities, and skills
- Leverage infrastructure-as-code tools (Terraform, CloudFormation) for cloud resource management.
- Apply machine learning frameworks (TensorFlow, PyTorch, Scikit-learn) in model development and deployment.
- Utilize data engineering concepts and tools (Spark, Kafka, etc.) for advanced data processing.
- Implement model governance and explainability frameworks in ML workflows.
ABOUT US