Job Requirements
Hires in: Not specified
Employment Type: Not specified
Company Location: Not specified
Salary: Not specified
Role Proficiency:
This role requires proficiency in data pipeline development, including coding and testing pipelines that ingest, wrangle, transform, and join data from various sources. The engineer must be adept with ETL tools such as Informatica, Glue, Databricks, and DataProc, and have coding skills in Python, PySpark, and SQL. The role works independently, demonstrates proficiency in at least one data-related domain, and brings a solid understanding of slowly changing dimension (SCD) concepts and data warehousing principles.
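For illustration only (not part of the posting): a minimal PySpark sketch of an SCD Type 2 update, the kind of ingest-join-transform work the paragraph above describes. All table shapes, column names, and values are hypothetical, and brand-new business keys are omitted for brevity.

```python
# Illustrative SCD Type 2 sketch in PySpark. All schemas, names, and values
# are hypothetical; new business keys (present in source only) are omitted.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Hypothetical dimension table: one current row per business key.
dim = spark.createDataFrame(
    [(1, "alice@old.example", "2020-01-01", None, True)],
    "customer_id INT, email STRING, valid_from STRING, valid_to STRING, is_current BOOLEAN",
)

# Hypothetical incoming source rows.
src = spark.createDataFrame(
    [(1, "alice@new.example")],
    "customer_id INT, email STRING",
)

current = dim.filter("is_current")
history = dim.filter("NOT is_current")

# Business keys whose tracked attribute changed.
changed = (
    current.join(src.withColumnRenamed("email", "src_email"), "customer_id")
    .filter("email <> src_email")
    .select("customer_id")
)

# Expire the old version of each changed key.
expired = (
    current.join(changed, "customer_id", "left_semi")
    .withColumn("valid_to", F.current_date().cast("string"))
    .withColumn("is_current", F.lit(False))
)

# Append the new version with open-ended validity.
new_rows = src.join(changed, "customer_id", "left_semi").select(
    "customer_id",
    "email",
    F.current_date().cast("string").alias("valid_from"),
    F.lit(None).cast("string").alias("valid_to"),
    F.lit(True).alias("is_current"),
)

still_current = current.join(changed, "customer_id", "left_anti")
result = history.unionByName(still_current).unionByName(expired).unionByName(new_rows)
result.show()
```

The same close-old-row-and-insert-new-row pattern is what Databricks expresses declaratively with MERGE INTO on Delta tables.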
Additional Comments:
Data Engineer (Contractor)

Role Overview:
We are seeking a highly skilled Data Engineer (Contractor) to join the APJ Data Engineering Team. This role involves leading and delivering data engineering projects, optimizing data solutions, and collaborating with cross-functional teams to enhance data infrastructure and analytics capabilities.

Key Responsibilities:
• Lead and deliver projects within the APJ Data Engineering Team.
• Perform source-to-target data analysis and mappings.
• Collaborate with admins, developers, data engineers, and analysts to ensure successful functionality delivery.
• Conduct requirement analysis and coordinate with project managers and the development team to drive the delivery cycle.
• Work closely with the scrum master on product backlogs and sprint planning.
• Improve and support solutions using PySpark/EMR, SQL, AWS Athena, S3, Redshift, Lambda, and ADF.
• Write and optimize complex queries to implement ETL and data solutions (see the illustrative sketch after this section).
• Identify, design, and implement internal process improvements, including automating manual processes, optimizing data delivery, and redesigning infrastructure for scalability.

Experience:
• 3+ years of experience in Databricks, PySpark, Data Analytics, and SQL.
• Full life cycle project implementation experience using AWS services (Databricks, PySpark/EMR, Athena, S3, Redshift, Lambda).
• Experience working with agile development methodologies and implementing DevOps, DataOps, and DevSecOps practices.
• Familiarity with JIRA for task management and Confluence for documentation.
• Strong analytical experience with databases, including writing complex queries, query optimization, debugging, user-defined functions, views, and indexes.

Knowledge, Skills, and Abilities:
• Excellent written, verbal, and interpersonal communication skills.
• Strong prioritization and problem-solving skills.
• Ability to learn and to train other team members.
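As a hedged illustration of the ETL and data-delivery responsibilities above (PySpark over S3 with Athena on top), the sketch below reads raw events from S3, aggregates them, and writes date-partitioned Parquet for an Athena external table. The bucket names, paths, and the event_ts/event_type columns are hypothetical, not taken from the posting.

```python
# Illustrative ETL step: read raw events from S3, aggregate, and write
# date-partitioned Parquet that an Athena external table can query cheaply.
# Bucket names, paths, and the event_ts/event_type columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count(F.lit(1)).alias("event_count"))
)

# Partitioning by event_date lets Athena prune partitions on date filters
# instead of scanning the whole dataset -- one common way to optimize data delivery.
(
    daily.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/daily_event_counts/")
)
```

Columnar Parquet plus date partitioning is the usual pairing for Athena because it cuts both the bytes scanned and the per-query cost.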
Skills: Databricks, PySpark, SQL, Agile Methodologies, Data Analytics, Life Cycle Project Implementation