Job Title: Data Engineer
Location: Ahmedabad
Key Responsibilities:
- Design, develop, and maintain ETL pipelines: Efficiently extract, transform, and load large datasets from various data sources into the data warehouse or data lakes.
- Build and optimize data transformation workflows: Work closely with data scientists, analysts, and software engineers to design data pipelines that handle large volumes of structured and unstructured data.
- Data representation and UI integration: Implement data visualization solutions (e.g., using GraphQL or similar query APIs) to power a drag-and-drop interface, enabling non-technical users to access and manipulate data.
- Collaborate with cross-functional teams: Work with front-end engineers, backend developers, and product managers to ensure smooth integration of data systems into customer-facing applications.
- Monitor and maintain data pipeline performance: Continuously improve data flow, ensuring high availability, consistency, and integrity.
- Ensure data security and compliance: Implement security best practices and ensure compliance with data privacy regulations.
Key Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience working with ETL pipelines, data transformation, and processing large datasets.
- Proficiency in ETL tools (e.g., Apache Airflow, Talend, Informatica) and strong experience with SQL and NoSQL databases.
- Experience with cloud platforms (AWS, GCP, Azure) for building and scaling data pipelines.
- Experience with GraphQL or similar API query languages.
- Proficiency in data visualization tools and libraries.
- Familiarity with drag-and-drop UI/UX principles and ability to integrate data-driven interfaces for users.
- Strong programming skills in Python, Java, Scala, or similar.
- Solid understanding of data warehousing and big data technologies.
- Good problem-solving skills and the ability to think critically about data flow and architecture.
- Ability to work independently and as part of a team in a fast-paced environment.
Preferred Qualifications:
- Experience with machine learning or data science workflows.
- Familiarity with real-time data processing (e.g., Apache Kafka, Flink).
- Knowledge of CI/CD practices and DevOps.
Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,400,000.00 per year
Ability to commute/relocate:
- Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Application Question(s):
- Current CTC
- Expected CTC
- Notice Period
- Reason for Change