Alphabridge is a dynamic tech company focused on empowering startups and mid-sized businesses with innovative solutions that drive growth and scalability. We specialize in providing cutting-edge software, strategic consulting, and technology infrastructure designed to streamline operations, enhance productivity, and foster sustainable expansion. With a commitment to delivering tailored solutions, Alphabridge helps businesses optimize their processes and succeed in a competitive digital landscape.
Location: DHA Phase 5
Timings: 6:00 PM - 3:00 AM
About the Role
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and analytics platforms across cloud environments. The ideal candidate will have hands-on experience with AWS, Ab Initio, Snowflake, SnapLogic, Databricks, Terraform, and Azure, and will play a key role in delivering reliable, high-quality data solutions for analytics and business intelligence.
Core Responsibilities:
- Design, develop, and maintain end-to-end data pipelines using Ab Initio, SnapLogic, and cloud-native services.
- Build and optimize data ingestion, transformation, and orchestration workflows across multiple data sources.
- Develop and manage data warehouses and data lakes using Snowflake, AWS, and Azure.
- Implement scalable data processing solutions using Databricks (Apache Spark).
- Optimize data models, queries, and performance for large-scale datasets.
- Use Terraform to provision and manage cloud infrastructure following Infrastructure as Code (IaC) best practices.
- Ensure data quality, reliability, and security across all data pipelines.
- Collaborate with analytics, BI, and application teams to support reporting and advanced analytics.
- Monitor, troubleshoot, and resolve pipeline failures and performance issues.
- Document data flows, architectures, and operational processes.
Required Qualifications:
- 3+ years of experience as a Data Engineer or in a similar role.
- Strong hands-on experience with AWS services (e.g., S3, EC2, Glue, Lambda, Redshift).
- Experience with Ab Initio for enterprise data integration and transformation.
- Proficiency in Snowflake (data modeling, performance tuning, security).
- Hands-on experience with SnapLogic for iPaaS-based integrations.
- Strong experience with Databricks / Apache Spark.
- Experience with Terraform for cloud infrastructure automation.
- Working knowledge of Azure data services (e.g., ADLS, Azure Data Factory, Synapse).
- Proficiency in SQL and at least one programming language (Python preferred).
- Understanding of data warehousing concepts, ETL/ELT, and cloud architectures.
Nice to Have:
- Experience with dbt or additional IaC tooling.
- Experience working in multi-cloud environments (AWS + Azure).
- Knowledge of CI/CD for data pipelines.
- Familiarity with data governance, lineage, and metadata management tools.
- Experience with streaming platforms (Kafka, Kinesis, or Event Hubs).