We're seeking a Senior Data Integration Engineer to design, develop, and manage enterprise-grade data integration solutions. The ideal candidate will have experience with ETL/ELT processes, API-driven integrations, and enterprise data platforms, combining strong technical skills with the ability to collaborate across teams.
Please apply if you are passionate about best practices and want to work in an environment where challenges are the norm, individual brilliance is valued and goes hand in hand with team performance, and being proactive is how we do things!
Successful applicants should be able to demonstrate the following:
Key responsibilities
- Architect, design, and optimize scalable big data solutions for batch and real-time processing.
- Develop and maintain ETL/ELT pipelines to ingest, transform, and synchronize data from diverse sources (see the sketch after this list).
- Integrate data from cloud applications, on-prem systems, APIs, and streaming platforms into centralized data repositories.
- Implement and manage data lake and data warehouse solutions on cloud infrastructure.
- Ensure data consistency, quality, and compliance with governance and security standards.
- Collaborate with data architects, data engineers, and business stakeholders to align integration solutions with organizational needs.
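
To give a flavor of the day-to-day pipeline work, here is a minimal batch ETL sketch in Python. The API endpoint, table, and connection string are hypothetical placeholders, not references to our actual systems:

```python
"""Minimal batch ETL sketch: extract from a REST API, transform, load to PostgreSQL.

The endpoint, table, and DSN below are hypothetical placeholders.
"""
import requests
import psycopg2

API_URL = "https://api.example.com/v1/orders"   # hypothetical source endpoint
DSN = "dbname=warehouse user=etl"               # hypothetical target DSN

def extract():
    # Pull raw records from the upstream API.
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()

def transform(records):
    # Normalize field types and drop incomplete rows.
    return [
        (r["id"], r["customer_id"], float(r["amount"]))
        for r in records
        if r.get("amount") is not None
    ]

def load(rows):
    # Idempotent upsert into the target table so reruns are safe.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO orders (id, customer_id, amount) VALUES (%s, %s, %s) "
            "ON CONFLICT (id) DO UPDATE SET amount = EXCLUDED.amount",
            rows,
        )

if __name__ == "__main__":
    load(transform(extract()))
```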
Core qualifications
- Programming Languages: Proficiency in Python, Java, or Scala for big data processing.
- Big Data Frameworks: Strong expertise in Apache Spark, Hadoop, Hive, Flink, or Kafka.
- Data Modeling & Storage: Hands-on experience with data modeling, data lakes (Delta Lake, Iceberg, Hudi), and data warehouses (Snowflake, Redshift, BigQuery).
- ETL/ELT Development: Expertise with tools like Informatica, Talend, SSIS, Apache NiFi, dbt, or custom Python-based frameworks.
- APIs & Integration: Strong hands-on experience with REST, SOAP, GraphQL APIs, and integration platforms (MuleSoft, Dell Boomi, SnapLogic).
- Data Pipelines: Proficiency in batch and real-time integration (Kafka, AWS Kinesis, Azure Event Hubs, GCP Pub/Sub); a minimal streaming sketch follows this list.
- Databases: Deep knowledge of SQL (Oracle, PostgreSQL, SQL Server) and NoSQL (MongoDB, Cassandra, DynamoDB) systems.
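
As an illustration of the real-time integration skills listed above, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and lands it as Parquet. The broker address, topic, and paths are assumptions for illustration only:

```python
"""Minimal real-time ingestion sketch: Kafka to Spark Structured Streaming to Parquet.

Broker address, topic, and output paths are hypothetical placeholders.
"""
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a Kafka topic; requires the spark-sql-kafka connector on the classpath.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
)

# Kafka delivers the payload as binary; cast it to a string for downstream parsing.
payload = events.select(col("value").cast("string").alias("json"))

# Land micro-batches as Parquet with a checkpoint so the stream can recover after failure.
query = (
    payload.writeStream.format("parquet")
    .option("path", "/data/landing/orders")            # hypothetical lake path
    .option("checkpointLocation", "/data/checkpoints/orders")
    .start()
)
query.awaitTermination()
```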
Preferred experience
- Expertise with at least one major cloud platform (AWS, Azure, GCP).
- Experience with managed data services such as AWS EMR/Glue, GCP Dataflow/Dataproc, or Azure Data Factory (see the sketch after this list).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of CI/CD pipelines for data engineering.
- Experience with Oracle Cloud Infrastructure (OCI) and Oracle Database (including JSON/REST support and sharding) and/or Oracle microservices tooling.
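
For a sense of the managed-service orchestration mentioned above, here is a minimal boto3 sketch that starts an AWS Glue job and polls it to completion. The job name is a hypothetical placeholder, and credentials are assumed to come from the environment:

```python
"""Minimal sketch: trigger an AWS Glue job and poll until it reaches a terminal state.

The job name is a hypothetical placeholder; AWS credentials come from the environment.
"""
import time
import boto3

glue = boto3.client("glue")

# Kick off a run of a pre-defined Glue job.
run = glue.start_job_run(JobName="nightly-orders-etl")  # hypothetical job name
run_id = run["JobRunId"]

# Poll until the run finishes, one way or another.
while True:
    state = glue.get_job_run(JobName="nightly-orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Glue run {run_id} finished with state {state}")
        break
    time.sleep(30)
```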