Data Engineer/Spark Engineer

Who are we?

Opus Technologies is a technology company focused on shaping the future of payments. With experience building highly innovative solutions and products, we combine deep technology proficiency with unmatched domain expertise in payments and fintech, enabling us to deliver unparalleled quality and value in everything we do. For the last 27+ years, our team has worked with a diverse global customer base, ranging from start-ups to Fortune 500 financial leaders, all focused on digital transformation and driving innovation in payments. We are headquartered in Alpharetta, Georgia, USA, and our offshore software development center, including all corporate teams, operates out of an office in Pune, India.

Job Description

Job: Data Engineer/Spark Engineer
Location: Pune
Position: Full Time
Qualification: 2+ years in data engineering with proven experience in building data platforms from scratch.

Key Responsibilities

  • Data Architecture & Design: Design and implement scalable data infrastructure including OLTP databases, data lakes, and cloud-based data warehouses.
  • Spark-Based Data Engineering: Build and optimize data pipelines using Apache Spark (PySpark/Scala) for batch and real-time processing.
  • Cloud Data Solutions: Leverage GCP services such as BigQuery, Dataflow, Dataproc, and Cloud Storage to build robust data solutions.
  • ETL/ELT Development: Develop efficient ETL/ELT workflows to ingest, transform, and load data from diverse sources.
  • Data Modeling & Governance: Define data models, enforce data quality standards, and implement governance practices across the data lifecycle.


Required Skills & Qualifications

  • Experience: 2+ years in data engineering with proven experience in building data platforms from scratch.
  • Spark Expertise: Hands-on experience with Apache Spark (PySpark or Scala) in production environments.
  • Cloud Proficiency: Strong experience with Google Cloud Platform data services (BigQuery, Dataflow, Dataproc, Cloud Storage).
  • Big Data Ecosystem: Solid understanding of Hadoop, Hive, and distributed data processing.
  • Databases: Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
  • Data Warehousing: Familiarity with data warehouse design principles (Kimball/Inmon) and tools like BigQuery, Snowflake, or Redshift.
