Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines and data integration workflows.
- Build and optimize data warehouse and data lake architectures to support analytics and business intelligence.
- Develop clean, reusable, and efficient Python scripts for data extraction, transformation, and loading.
- Work closely with data analysts and stakeholders to ensure data accuracy, consistency, and reliability.
- Implement data validation, error handling, and monitoring frameworks.
- Collaborate with cross-functional teams to integrate data from multiple sources and cloud platforms.
- Optimize SQL queries and database performance for high-volume datasets.
- Uphold data governance and security standards.
Required Skills & Experience:
- 5+ years of experience as a Data Engineer or in a similar role.
- Strong proficiency in Python and related data libraries (Pandas, PySpark, etc.).
- Hands-on experience with ETL tools and frameworks.
- Excellent understanding of SQL and relational databases.
- Experience with data warehousing concepts and technologies (Snowflake, Redshift, BigQuery, etc.).
- Working knowledge of cloud platforms (AWS, Azure, or GCP).
- Familiarity with CI/CD pipelines and version control (Git).
- Strong problem-solving and debugging skills.
Preferred Qualifications:
- Experience with streaming data frameworks (Kafka, Spark Streaming, etc.).
- Exposure to data governance, quality, and lineage tools.
- Knowledge of API integrations and RESTful services.
Why Join Aarvy Technologies?
- Work with a dynamic and growing IT team.
- Opportunity to work on challenging, data-driven projects.
- Flexible and collaborative work environment.
Job Types: Full-time, Permanent
Pay: ₹50,000.00 - ₹70,000.00 per month
Work Location: In person