Job Requirements
Hires in: Not specified
Employment Type: Not specified
Company Location: Not specified
Salary: Not specified
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education
Summary:
Seeking a forward-thinking professional with an AI-first mindset to design, develop, and deploy enterprise-grade solutions using Generative and Agentic AI frameworks that drive innovation, efficiency, and business transformation. As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting and optimizing existing data workflows to enhance performance and reliability.

Roles & Responsibilities:
- Lead AI-driven solution design and delivery by applying Generative and Agentic AI to address complex business challenges, automate processes, and integrate intelligent insights into enterprise workflows for measurable impact.
- Perform independently and grow into a subject-matter expert (SME).
- Participate actively in and contribute to team discussions.
- Contribute to solving work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Strong grasp of Generative and Agentic AI, prompt engineering, and AI evaluation frameworks, with the ability to align AI capabilities with business objectives while ensuring scalability, responsible use, and tangible value realization.
- Must-have skills: proficiency in PySpark.
- Strong understanding of data processing frameworks and ETL methodologies.
- Experience with data warehousing concepts and tools.
- Familiarity with cloud platforms and services for data storage and processing.
- Knowledge of data quality and governance best practices.
Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
© 2025 Qureos. All rights reserved.