About the Role
We are seeking an experienced Senior/Lead Data Engineer to design, build, and maintain scalable, reliable, high-performance data systems.
In this role, you will take ownership of data platforms, lead cross-functional initiatives, and enable analytics, machine learning, and business intelligence across the organization. You will play a key role in shaping data architecture, ensuring data quality and governance, and supporting data-driven decision-making.
Key Responsibilities
Data Architecture & Modeling
- Design and implement scalable data models (star schema, fact/dimension tables)
- Develop conceptual, logical, and physical data models
- Build enterprise data warehouse solutions with high performance and scalability
- Define and enforce data modeling standards and best practices
Data Engineering & Pipeline Development
- Build and maintain end-to-end data pipelines (ingestion → transformation → serving)
- Develop ETL/ELT processes using modern tools and frameworks
- Implement data validation pipelines and staging layers
- Optimize SQL queries, indexing, and database performance
Data Platform Reliability & Observability
- Ensure reliability and integrity of data pipelines
- Build monitoring dashboards and alerting systems
- Implement data quality checks and schema enforcement
- Lead incident management and root cause analysis
Data Governance & Quality
- Establish data governance frameworks and policies
- Ensure data integrity, lineage, and compliance
- Manage access controls and data security
- Maintain separation between core datasets and analytics layers
Cloud & Big Data Integration
- Integrate systems with data lakes and large-scale platforms
- Work with cloud environments (AWS preferred)
- Utilize tools such as AWS Glue, Lambda, Redshift, or Snowflake
- Design scalable and cost-efficient data solutions
Analytics Engineering & Data Products
- Build and deliver data products for analysts and data scientists
- Develop reusable frameworks for transformation and orchestration
- Support advanced analytics and reporting
CI/CD & DevOps for Data
- Implement CI/CD pipelines for data workflows
- Automate testing, deployment, and validation
- Use version control tools (Git/GitHub)
- Ensure high availability and disaster recovery
Programming & Tools
- Strong expertise in SQL (advanced level)
- Proficiency in Python for automation and data processing
- Experience with tools such as dbt, DBeaver, and orchestration tools (e.g., Dagster)
Leadership & Collaboration
- Lead technical initiatives and mentor junior engineers
- Conduct code reviews and architecture discussions
- Collaborate with engineering, product, and business teams
- Translate business requirements into scalable data solutions
Requirements
- Bachelor’s degree in Computer Science, Engineering, IT, or related field
- 4–5+ years of experience in Data Engineering or related roles
- Strong expertise in data modeling, SQL, and ETL/ELT pipelines
- Experience with data warehousing and data lakes
- Hands-on experience with cloud platforms (AWS preferred)
- Strong understanding of system design and scalable architecture
- Excellent communication and stakeholder management skills
- Ability to work in fast-paced, agile environments
Preferred Qualifications
- Experience with AWS services (Glue, Lambda, etc.)
- Experience with Snowflake or Redshift
- Strong hands-on experience with dbt
- Knowledge of machine learning data pipelines
- Certifications (SQL Server, CDMP, MCSA, or similar)
- Experience in regulated industries (Healthcare, Defense, etc.)
- Familiarity with CI/CD, data governance, and FinOps practices
Job Type: Full-time
Pay: Rs250,000.00 - Rs400,000.00 per month
Work Location: In person