Experience: 5–7 Years
Openings: 3
Location: Hyderabad
We are seeking a skilled Data Engineer with strong expertise in Python, PySpark, and the AWS Data Engineering stack to design, develop, and optimize large-scale ETL pipelines. The ideal candidate will have a solid foundation in data engineering best practices, automation, and modern cloud-based architectures, along with a passion for leveraging Generative AI tools to boost productivity and code quality.
Core Must-Have Skills:
- Strong proficiency in Python and PySpark for developing and optimizing ETL pipelines.
- In-depth understanding of data engineering best practices, including data validation, transformation logic, and performance optimization.
- Experience working with large-scale data processing and distributed computing environments.
Good-to-Have / Preferred Skills:
- Working knowledge of Scala programming, particularly for Spark-based use cases.
- Familiarity with AI-assisted development tools such as GitHub Copilot to enhance productivity and code quality.
AWS Data Engineering Stack Expertise:
- Hands-on experience with AWS Glue, Lambda Functions, EventBridge, SQS, SNS, DynamoDB, and Streams for building serverless and event-driven data pipelines.
- Proficiency in CloudWatch for creating dashboards, setting up alarms, and monitoring pipeline health.
- Basic working knowledge of AWS CDK and CloudFormation for infrastructure automation and deployment.
Python, PySpark, Data Validation, Transformation Logic
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.