Qureos


Big Data - Spark/Scala

JOB_REQUIREMENTS

Hires in: Not specified

Employment Type: Not specified

Company Location: Not specified

Salary: Not specified

Overview:


We are looking for a highly skilled Big Data Engineer with extensive experience in Spark and Scala to join our team. The ideal candidate will play a crucial role in designing, developing, and optimizing large-scale data processing systems. You will work closely with data scientists, analysts, and other stakeholders to deliver high-quality data solutions.

Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache Spark and Scala.
  • Collaborate with cross-functional teams to understand data requirements and deliver data solutions that meet business needs.
  • Optimize Spark jobs for performance and cost-efficiency in a distributed computing environment.
  • Implement best practices for data modeling, ETL processes, and data governance.
  • Monitor and troubleshoot data processing workflows to ensure data integrity and availability.
  • Work with cloud platforms (AWS, Azure, or GCP) to implement big data solutions.
  • Stay up to date with industry trends and emerging technologies in big data and analytics.
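For illustration, a minimal sketch of the kind of pipeline described in the responsibilities above (the input/output paths, column names, and aggregation are hypothetical examples, and a Spark 3.x runtime is assumed):

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object EventPipeline {
  def main(args: Array[String]): Unit = {
    // Hypothetical job: aggregate daily event counts per user.
    val spark = SparkSession.builder()
      .appName("event-pipeline")
      .getOrCreate()

    // Read raw events (hypothetical S3 path and schema).
    val events = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/events/")

    // Derive a date column and aggregate per user per day.
    val daily = events
      .withColumn("event_date", F.to_date(F.col("timestamp")))
      .groupBy("user_id", "event_date")
      .agg(F.count("*").as("event_count"))

    // Write partitioned Parquet; partitioning by date helps
    // downstream readers prune files and keeps jobs cost-efficient.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/daily_counts/")

    spark.stop()
  }
}
```

In practice a job like this would be tuned (shuffle partitions, file sizes, caching) and monitored as described above; this sketch only shows the basic read-transform-write shape.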

Requirements:
  • 7-10 years of experience in Big Data technologies, with a strong focus on Apache Spark and Scala.
  • Proficiency in data processing frameworks (Hadoop, Spark) and languages (Scala, Java).
  • Experience with data warehousing solutions (Snowflake, Redshift, etc.) and SQL.
  • Knowledge of data modeling, ETL processes, and data visualization tools (Tableau, Power BI).
  • Familiarity with cloud services (AWS, Azure, Google Cloud) and containerization (Docker, Kubernetes).
  • Strong analytical skills and the ability to work with large datasets.
  • Excellent communication and teamwork skills.

© 2025 Qureos. All rights reserved.