Qureos


Data Engineer (AWS + PySpark)

India

7+ years of overall IT experience, including hands-on experience in Big Data technologies.

• Mandatory: hands-on experience in Python and PySpark.

• Has built PySpark applications using Spark DataFrames in Python, working in Jupyter notebooks and PyCharm (IDE).

• Experience optimizing Spark jobs that process large volumes of data.

• Hands-on experience with version control tools such as Git.

• Experience with AWS analytics services such as Amazon EMR, Amazon Athena, and AWS Glue.

• Experience with AWS compute services such as AWS Lambda and Amazon EC2, storage services such as Amazon S3, and other services such as Amazon SNS.

• Experience with bash/shell scripting is a plus.

• Has built ETL processes that ingest, copy, and structurally transform data across a wide variety of formats, including CSV, TSV, XML, and JSON.

• Experience working with fixed-width, delimited, and multi-record file formats.

• Good to have: knowledge of data warehousing concepts, such as dimensions, facts, and star/snowflake schemas.

• Has worked with columnar storage formats such as Parquet and ORC, and serialization formats such as Avro. Well versed in compression codecs such as Snappy and Gzip.

• Good to have: knowledge of at least one AWS database service, e.g. Aurora, RDS, Redshift, ElastiCache, or DynamoDB.

• Hands-on experience with tools such as Jenkins to build, test, and deploy applications.

• Awareness of DevOps concepts and the ability to work in an automated release-pipeline environment.

• Excellent debugging skills.
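To illustrate the fixed-width file handling mentioned above, here is a minimal stdlib-only sketch. The record layout (name, city, amount and their widths) is a hypothetical example for illustration, not something specified by this posting; in a real PySpark job the same slicing logic would typically run inside a DataFrame column expression or UDF.

```python
# Hypothetical fixed-width layout: (field name, width in characters).
# This layout is an illustrative assumption, not from the job description.
LAYOUT = [("name", 10), ("city", 8), ("amount", 6)]

def parse_fixed_width(line: str) -> dict:
    """Slice one fixed-width line into named fields, stripping pad spaces."""
    fields = {}
    pos = 0
    for name, width in LAYOUT:
        fields[name] = line[pos:pos + width].strip()
        pos += width
    return fields

# Adjacent string literals make the 10/8/6-character fields visible.
record = parse_fixed_width("Jane Doe  " "Mumbai  " "123.45")
# record == {"name": "Jane Doe", "city": "Mumbai", "amount": "123.45"}
```

The same approach extends to multi-record files by first dispatching on a record-type indicator (often the first few characters of each line) and selecting the matching layout.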


Preferred Qualifications

  • Experience working with US clients and business partners.
  • Knowledge of front-end frameworks.
  • Exposure to the BFSI (banking, financial services, and insurance) domain is good to have.
  • Hands-on experience with an API gateway and management platform.


© 2025 Qureos. All rights reserved.