What Ibexlabs Does
Ibexlabs is an AWS Advanced Tier Consulting Partner with multiple competencies, including Security, DevOps, Healthcare, and Managed Services. Our team of dedicated and highly skilled engineers is passionate about helping customers accelerate their cloud transformation while ensuring security and compliance with industry best practices. As a rapidly growing company, we are seeking talented individuals to join us and contribute to our continued success.
Data Engineer Responsibilities
- Develop and maintain scalable data pipelines using Databricks and Apache Spark.
- Write efficient, reusable, and optimized code in Python (PySpark preferred).
- Design and implement ETL/ELT workflows for structured and unstructured data.
- Perform data transformation, validation, and cleansing.
- Optimize Spark jobs for performance and scalability.
- Work with large-scale distributed data processing systems.
- Integrate data from various sources, including databases and cloud platforms.
- Troubleshoot and resolve data pipeline issues.
- Collaborate with cross-functional teams, including analysts and architects.
- Follow coding standards, documentation, and version control best practices.
Lead Data Engineer Responsibilities
- Lead the design and architecture of scalable data platforms using Databricks.
- Architect and oversee complex ETL/ELT data pipelines.
- Provide technical leadership and mentor Data Engineers.
- Ensure best practices in data governance, performance optimization, and security.
- Review code and ensure adherence to quality standards.
- Work closely with business stakeholders to translate requirements into technical solutions.
- Drive data migration, modernization, and integration initiatives (including Teradata, where applicable).
- Participate in project estimation, planning, and delivery.
- Monitor pipeline performance and implement continuous improvements.
Data Engineer Requirements
- 3–5 years of experience in Data Engineering.
- Strong hands-on experience with Databricks.
- Strong expertise in Apache Spark (PySpark preferred).
- Proficiency in Python.
- Experience in building ETL/ELT data pipelines.
- Good knowledge of SQL and data warehousing concepts.
- Experience with cloud platforms (Azure/AWS/GCP preferred).
- Teradata experience is a plus.
- Strong analytical and problem-solving skills.
- Good communication skills.
Lead Data Engineer Requirements
- 10+ years of experience in Data Engineering or Data Platform development.
- Strong expertise in Databricks and Spark.
- Extensive experience designing enterprise ETL pipelines.
- Strong programming skills in Python.
- Deep understanding of data architecture and big data ecosystems.
- Experience with cloud data platforms.
- Teradata migration experience preferred.
- Strong leadership and stakeholder management skills.
- Excellent communication and mentoring abilities.
Why should you be interested in this opportunity?
- Freedom and the opportunity to grow rapidly in your career. You will be fully empowered with the tools and knowledge to advance your own career and to help your team members grow.
- A culture of respect, humility, growth mindset, and fun in the team.
- Learn from other engineers on the development team.
- Get rewarded and recognized for your work and effort.
- Training and career development benefits.
- Life insurance, paid parental leave, and vacation days.