Job Requirements
Hires in: Not specified
Employment Type: Not specified
Company Location: Not specified
Salary: Not specified
Role Proficiency:
This role requires proficiency in data pipeline development, including coding and testing pipelines that ingest, wrangle, transform, and join data from various sources. Must be adept with ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding skills in Python, PySpark, and SQL. Works independently and demonstrates proficiency in at least one data-related domain, with a solid understanding of slowly changing dimension (SCD) concepts and data warehousing principles.
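To ground the pipeline skills described above, here is a minimal PySpark sketch of the ingest, wrangle, join, and load pattern the role calls for. It is an illustration only, not part of the posting: the S3 paths, column names, and output location are hypothetical assumptions.

# Minimal PySpark sketch: ingest, wrangle, join, and load.
# All paths, column names, and locations below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-wrangle-join").getOrCreate()

# Ingest: read two raw CSV sources (schema inference kept simple for brevity).
orders = spark.read.option("header", True).csv("s3://raw-bucket/orders/")
customers = spark.read.option("header", True).csv("s3://raw-bucket/customers/")

# Wrangle: cast types, normalize strings, drop rows missing the join key.
orders = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["customer_id"])
)
customers = customers.withColumn("email", F.trim(F.lower(F.col("email"))))

# Transform and join: enrich orders with customer attributes.
enriched = (
    orders.join(customers, on="customer_id", how="left")
          .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream consumers.
enriched.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders_enriched/"
)

The same pattern extends to the SCD handling the role mentions, for example by merging the enriched output into a dimension table that tracks effective dates.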
Outputs Expected:
• Code Development
• Documentation
• Configuration
• Testing
• Domain Relevance
• Defect Management
• Estimation
• Knowledge Management
• Design Understanding
• Certifications
Additional Comments:
Job Title: Data Services Engineer – AWS & Snowflake

Job Summary:
As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake. You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance. Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.

Responsibilities:
• Design and implement scalable and secure data pipelines on AWS and Snowflake (star/snowflake schema).
• Optimize query performance using clustering keys, materialized views, and caching.
• Develop and maintain Snowflake data warehouses and data marts.
• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks).
• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica).
• Utilize Snowpark and Python/Java for complex transformations (a minimal sketch follows this listing).
• Implement RBAC, data masking, and row-level security.
• Optimize data storage and retrieval for performance and cost efficiency.
• Collaborate with stakeholders to gather data requirements and deliver solutions.
• Ensure data quality, governance, and compliance with industry standards.
• Monitor, troubleshoot, and resolve data pipeline and performance issues.
• Document data architecture, processes, and best practices.
• Support data migration and integration from various sources.

Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 3 to 4 years of hands-on experience in data engineering or data services.
• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).
• Strong expertise in Snowflake architecture, development, and optimization.
• Proficiency in SQL and Python for data manipulation and scripting.
• Solid understanding of ETL/ELT processes and data modeling.
• Experience with data integration tools and orchestration frameworks.
• Excellent analytical, problem-solving, and communication skills.

Preferred Skills:
• AWS Glue, AWS Lambda, Amazon Redshift
• Snowflake Data Warehouse
• SQL & Python
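The Snowpark item in the Responsibilities list can be made concrete with a short Python sketch that runs a transformation inside Snowflake. This is a hedged illustration, not part of the posting: the connection parameters and the table names STG_ORDERS and RPT_DAILY_REVENUE are placeholders.

# Minimal Snowpark (Python) sketch: transform a staging table inside Snowflake.
# Connection parameters and table names are hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# Read a staging table, keep completed orders, and aggregate daily revenue.
orders = session.table("STG_ORDERS")
daily_revenue = (
    orders.filter(col("STATUS") == "COMPLETE")
          .group_by(col("ORDER_DATE"))
          .agg(sum_(col("AMOUNT")).alias("REVENUE"))
)

# Persist the result as a reporting table; the compute stays in Snowflake.
daily_revenue.write.mode("overwrite").save_as_table("RPT_DAILY_REVENUE")
session.close()

Because Snowpark compiles DataFrame operations to SQL executed in Snowflake, a pipeline written this way also benefits from the clustering keys and caching called out in the performance bullets.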
Skills: AWS Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse