Overview:
You will play a key role in automating and streamlining the data processing pipeline for AMESA markets.
Responsibilities:
- Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability and ongoing metrics.
- Ensure that we build high-quality software by reviewing peer code check-ins
- Define best practices for product development, engineering, and coding as part of a world-class engineering team in Hyderabad
- Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with QA engineers
- Develop software in short iterations to quickly add business value
- Introduce new tools and practices to improve data and code quality; this includes researching and sourcing third-party tools and libraries, as well as developing in-house tools to improve workflow and quality for all data engineers
Qualifications:
- 4-6 years of experience developing data pipelines / ETLs
- Deep understanding of database design and engineering
- Strong familiarity with automated test frameworks
- Current skills in the following technologies:
  - Python
  - Airflow, Luigi, or similar orchestration platforms
  - Relational databases – Postgres, MySQL, or similar
  - AWS, Azure, or similar cloud platforms
  - GitHub or similar source control
  - Automated build processes and tools
- Fluent with Agile processes and tools such as Jira or Pivotal Tracker; must have experience running Agile teams, continuous integration, automated testing, and test-driven development