JOB_REQUIREMENTS
Employment Type
Not specified
Company Location
Not specified
Job Description
We are looking for a highly skilled and experienced Senior Data Engineer to join our growing data team. This role demands a hands-on expert in data architecture, pipeline development, and cloud-based data solutions. The ideal candidate will have a passion for designing robust and scalable systems, transforming data into actionable insights, and collaborating with cross-functional teams to build data-driven solutions that empower decision-making and product innovation.
Roles & Responsibilities
Design, build, and manage scalable data pipelines and data infrastructure to support data analytics and reporting.
Architect and implement end-to-end data solutions using modern cloud platforms (AWS, Azure, or GCP).
Develop and optimize ETL workflows, including both batch and real-time data ingestion processes.
Collaborate with Data Scientists, Analysts, and Software Engineers to support data needs across the organization.
Ensure high levels of data quality, integrity, and governance.
Monitor performance and troubleshoot data-related issues in production systems.
Drive data warehousing strategies and implement best practices for storage, access, and security.
Mentor junior team members and lead key data engineering initiatives.
Stay updated with emerging technologies and trends in data engineering and cloud computing.
Skills
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
10–15 years of professional experience in data engineering, big data, or data platform development.
Proven experience in building data pipelines using tools like Apache Spark, Kafka, Airflow, or similar.
Strong hands-on expertise with SQL, Python, and data modeling.
Experience with cloud services such as AWS (Redshift, S3, Glue), Azure (Data Factory, Synapse), or GCP (BigQuery, Dataflow).
Deep understanding of data architecture principles, warehousing concepts, and performance optimization.
Experience in CI/CD and DevOps practices for data infrastructure.
Solid knowledge of version control systems (Git) and containerization (Docker); experience with orchestration tools (Kubernetes) is good to have.
Familiarity with data governance and data security practices.
Good to have
Experience working with BI tools (Tableau, Power BI).
Exposure to ML model deployment pipelines and collaboration with Data Science teams.
Experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB.
Understanding of DataOps and modern data stack tools.
Experience
10–15 Years
Location: Bhilai, Indore