Job Requirements
Design, develop, and maintain data ingestion and transformation pipelines in an Azure-based environment.
Work with Kafka for real-time data streaming and message queuing.
Build scalable ETL/ELT pipelines using Databricks (PySpark/Scala) and integrate with Snowflake for analytics workloads (see the illustrative sketch after this list).
Develop and optimize SQL queries and stored procedures for performance and data accuracy.
Utilize Azure Data Factory (ADF) to orchestrate workflows and integrate with various data sources.
Work with structured and semi-structured data formats such as XML, JSON, Parquet, and CSV.
Collaborate with data architects, analysts, and business teams to deliver high-quality, reliable data pipelines.
Ensure code quality, version control, and data security best practices are followed.
Contribute to data architecture design discussions and provide technical recommendations for system improvements.
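
For illustration, a minimal sketch of the kind of Kafka-to-Databricks ingestion this role involves, written with PySpark Structured Streaming. The broker address, topic name, payload schema, and output paths below are hypothetical placeholders, not details from this posting; a Snowflake sink via the Spark connector could replace the Parquet write.

    # Minimal sketch, assuming PySpark on Databricks; broker, topic, schema,
    # and paths are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import (DoubleType, StringType, StructField,
                                   StructType, TimestampType)

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    # Assumed shape of the JSON payload carried in each Kafka message value.
    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_ts", TimestampType()),
    ])

    # Subscribe to the topic; Kafka delivers key/value as binary columns.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
           .option("subscribe", "events")                     # placeholder topic
           .load())

    # Parse the value column into typed fields.
    events = (raw
              .select(from_json(col("value").cast("string"),
                                event_schema).alias("e"))
              .select("e.*"))

    # Land parsed events as Parquet; the checkpoint makes the file sink
    # fault-tolerant across restarts.
    query = (events.writeStream
             .format("parquet")
             .option("path", "/mnt/raw/events")                # placeholder path
             .option("checkpointLocation", "/mnt/chk/events")  # placeholder path
             .start())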
Key Responsibilities
Drive end-to-end development of data acquisition solutions for operational and analytical systems.
Implement data transformation logic using Spark and Python (see the illustrative sketch after this list).
Optimize performance of large-scale data pipelines in Databricks/Snowflake.
Handle real-time and batch data ingestion via Kafka and ADF.
Conduct data validation, profiling, and documentation of technical specifications.
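
Below, a minimal batch transformation and validation sketch in PySpark, continuing from the landing path assumed in the ingestion example above; the column names and quality rules are hypothetical.

    # Minimal sketch, assuming PySpark; input path carries over from the
    # ingestion example, and the validation rules are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, count, when

    spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

    events = spark.read.parquet("/mnt/raw/events")  # placeholder path

    # Transformation: drop malformed rows, then derive a daily event count.
    clean = events.filter(col("event_id").isNotNull() & (col("amount") >= 0))
    daily = (clean
             .groupBy(col("event_ts").cast("date").alias("event_date"))
             .count()
             .withColumnRenamed("count", "event_count"))

    # Profiling: per-column null counts, a common first data-quality check.
    null_counts = clean.select(
        [count(when(col(c).isNull(), c)).alias(c) for c in clean.columns])

    daily.show()
    null_counts.show()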
Qualifications & Certifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Microsoft Azure or Snowflake certification (preferred).
Strong understanding of modern data engineering best practices.
Good to Have
Experience with DevOps tools and CI/CD pipelines for data deployment.
Exposure to data modeling and data warehouse design.
Familiarity with other streaming platforms or data orchestration tools.
Experience Required: 5+ years
Location: Remote
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Work Location: In person