Description:
This role focuses on supporting data modernization and cloud migration efforts. The position centers on working with SQL-based systems, building and maintaining ETL pipelines, and helping transition legacy data environments into modern cloud-based architectures.
The ideal candidate has strong experience with database technologies such as PostgreSQL and Oracle, hands-on experience with data transformation and migration, and is comfortable working across both on-prem and cloud environments.
Key Responsibilities:
- Design, develop, and optimize SQL queries for performance and scalability
- Build and maintain ETL pipelines for data ingestion, transformation, and migration
- Support data modernization efforts, including legacy database re-engineering
- Develop data solutions using Python or PySpark within AWS environments
- Assist in migrating and transforming data across database platforms (e.g., Oracle to PostgreSQL)
- Collaborate with stakeholders to improve data quality, accessibility, and performance
- Support both cloud and on-prem data environments
- Troubleshoot data-related issues and implement long-term solutions
Requirements:
- 3+ years of experience working with SQL (PostgreSQL and Oracle preferred)
- 3+ years of experience building or maintaining ETL pipelines
- Experience with data migration and transformation efforts
- Experience working in AWS environments
- Proficiency in Python or a similar scripting language
- Experience supporting both on-prem and cloud-based systems
- Bachelor’s degree in Information Technology, Engineering, or a related field (or equivalent experience)
Preferred:
- Experience with PySpark, Databricks, or similar data processing tools
- Familiarity with AWS data services (S3, Glue, Redshift, Athena)
- Experience with workflow orchestration tools (e.g., Airflow)
- Exposure to data modeling and data lake architectures