Powering the world’s payments ecosystem
ACI powers the payments ecosystem – globally, and you power ACI. You'll innovate, collaborate, and grow – in an energetic technology culture with decades of proven success. ACIers – at all roles and levels – are truly your colleagues, and many are your friends. Our size and reach allow you to see the global impact of your work. You are visible, your talents are valued, and you are empowered to shape the future of payments.
As a Senior Data Engineer in Bangalore, you will join a diverse, passionate team, dedicated to powering the world’s payments ecosystem!
The purpose of this role is:
The Senior Data Engineer plays a critical role in enabling ACI's data-driven decision-making and analytics capabilities by designing, building, and maintaining the organization's Databricks-based Data Lake infrastructure. This position is responsible for creating scalable, high-performance data pipelines that transform raw data into valuable insights, supporting data scientists, analysts, and business stakeholders across the enterprise. By implementing robust data engineering solutions and best practices, this role ensures that ACI has access to reliable, timely, and high-quality data to support strategic initiatives, product innovation, and operational excellence. The Senior Data Engineer contributes directly to ACI's mission by empowering teams with the data foundation they need to reach their full potential and deliver exceptional value to our customers.
Essential Duties and Responsibilities
Design, develop, and maintain the Data Lake architecture, building scalable data pipelines and ETL/ELT processes using Databricks workflows, Delta Lake, and Apache Spark to ensure efficient data movement and storage across the organization.
Leverage Databricks platform capabilities including jobs, clusters, and Unity Catalog to create robust data processing workflows that handle large-scale datasets (terabytes to petabytes), ensuring high performance, reliability, and data quality throughout the data lifecycle.
Implement data ingestion and transformation pipelines within Databricks, utilizing Spark SQL and PySpark to process streaming and batch data from various sources into the Data Lake.
Build and optimize change data capture (CDC) solutions using Debezium and other technologies to enable real-time data synchronization and streaming between heterogeneous database systems and the Databricks Data Lake.
Collaborate with data scientists, analysts, and business stakeholders on a daily basis to understand data requirements and translate business needs into technical data solutions within the Databricks environment that support analytics and decision-making.
Design and implement data models, schemas, and Delta Lake table structures to support efficient data storage, retrieval, and analysis, following medallion architecture patterns (bronze, silver, gold layers) within Databricks.
Monitor, troubleshoot, and optimize Databricks cluster performance and data pipeline efficiency, proactively identifying and resolving data quality issues, bottlenecks, and system failures to maintain SLA requirements.
Establish and enforce data engineering best practices, coding standards, and documentation for Databricks development to ensure maintainability, scalability, and knowledge transfer across the team.
Mentor junior team members on engineering practices and the technology stack.
Participate in architecture decisions and technology evaluations, recommending Databricks features, configurations, and integration approaches that align with organizational data strategy and technical roadmap.
Implement data security and governance measures within Databricks and the Data Lake to ensure compliance with data privacy regulations and organizational policies, including access controls, encryption, and audit logging.
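One of the duties above is building change data capture (CDC) pipelines with Debezium to keep the Databricks Data Lake in sync with source databases. As a minimal, framework-free sketch of the core merge logic such a pipeline applies (in Databricks this would typically be expressed as a Delta Lake MERGE; plain Python dicts stand in for tables here so the example is self-contained, and the event shape is a hypothetical simplification that mirrors Debezium's `op` codes):

```python
# Hypothetical sketch: applying Debezium-style CDC events to a table snapshot.
# Debezium emits change events with an "op" field: "c" (create), "u" (update),
# "d" (delete). A CDC sink replays these, keyed by primary key, to keep the
# target (e.g. a bronze/silver Delta table) consistent with the source.

def apply_cdc_events(table: dict, events: list) -> dict:
    """Apply an ordered stream of CDC events to a table snapshot in place."""
    for event in events:
        key = event["key"]
        op = event["op"]
        if op in ("c", "u"):
            # Create/update carry the full "after" row image: upsert it.
            table[key] = event["after"]
        elif op == "d":
            # Delete: drop the row if present ("after" is null for deletes).
            table.pop(key, None)
    return table

# Usage: replay a small change stream onto an empty table.
events = [
    {"op": "c", "key": 1, "after": {"id": 1, "status": "new"}},
    {"op": "u", "key": 1, "after": {"id": 1, "status": "paid"}},
    {"op": "c", "key": 2, "after": {"id": 2, "status": "new"}},
    {"op": "d", "key": 2, "after": None},
]
snapshot = apply_cdc_events({}, events)
print(snapshot)  # {1: {'id': 1, 'status': 'paid'}}
```

In a production Databricks pipeline the same upsert/delete semantics would be handled by a streaming read of the CDC topic followed by a Delta Lake merge, with ordering and deduplication guarantees supplied by the platform rather than hand-rolled as above.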
Qualifications:
Bachelor's degree in Computer Science, Information Technology, Data Science, Engineering, or a related technical field, or equivalent professional experience.
Minimum 3 years of professional experience in data engineering, software engineering, or related technical roles working with large-scale data systems.
Proven experience designing, building, and maintaining ETL/ELT pipelines that process large volumes of data (multi-terabyte to petabyte scale).
2 years of hands-on experience with Databricks platform, including Delta Lake, Databricks workflows, notebooks, and cluster management.
Strong proficiency in Apache Spark and distributed data processing, with demonstrated experience in PySpark and/or Scala for building data pipelines.
Preferred Qualifications:
Solid understanding of data modeling principles, database design, and experience with both SQL and NoSQL database technologies.
Experience with change data capture (CDC) technologies such as Debezium, or similar streaming data integration tools.
Proficiency in SQL and at least one programming language commonly used in data engineering (Python, Scala, or Java).
Strong understanding of data lake architectures, including medallion architecture patterns (bronze, silver, gold layers).
Experience with version control systems (Git) and CI/CD practices for data pipelines.
Excellent communication skills with the ability to collaborate effectively with cross-functional teams including data scientists, analysts, and business stakeholders.
Demonstrated ability to work independently, manage multiple priorities, and deliver results in a fast-paced environment.
Core Capabilities:
We seek colleagues who embody our core capabilities — these shape our culture and enable us to make a meaningful impact together:
Ensure Accountability: holding self and others accountable to meet commitments.
Drives Results: consistently achieving results, even under tough circumstances.
Customer Focus: building strong customer relationships and delivering customer-centric solutions.
Cultivate Innovation: creating new and better ways for the organization to be successful.
Collaborates: building partnerships and working collaboratively with others.
Courage: stepping up to address difficult issues, saying what needs to be said.
In return for your expertise, we offer opportunities for growth, career development, and a competitive compensation and benefits package, all within an innovative and collaborative work environment.
Are you ready to help us transform the payments ecosystem? To learn more about ACI Worldwide, visit our website at www.aciworldwide.com.
#LI-AP
ACI Worldwide is an AA/EEO employer in the United States, which includes providing equal opportunity for protected veterans and individuals with disabilities, and an EEO employer globally.