Data Engineer
Timing: 05:00 PM to 02:00 AM
Job Type: Full-Time (On-site)
Location: Johar Town
About the Role
We are looking for a highly skilled and versatile Data Engineer to join our team. The ideal candidate will have expertise in building and maintaining scalable data infrastructure, focusing on data pipelines, cloud technologies, and data processing systems. This role requires hands-on experience in designing and deploying reliable, high-performance data solutions that meet the demands of large-scale operations.
Key Responsibilities:
- Data Infrastructure & Pipeline Development:
- Design, build, and maintain efficient, scalable, and robust data pipelines for data extraction, transformation, and loading (ETL).
- Develop and optimize SQL queries for complex data processing and ensure high data quality.
- Implement and manage automated data workflows using orchestration tools like Airflow.
- Cloud Data Platforms:
- Work extensively with cloud-based platforms like AWS, Azure, or GCP to build and manage data infrastructure.
- Utilize services such as Azure Data Factory, AWS Glue, BigQuery, or similar tools for data ingestion, transformation, and storage.
- Implement monitoring, logging, and alerting to ensure continuous data pipeline performance and reliability.
- Data Modeling & Warehousing:
- Design and implement dimensional models and maintain data warehouses to support business intelligence and analytics.
- Develop and manage data schemas, ensuring consistency and integrity across systems.
- Optimize data storage, querying, and retrieval processes to meet business needs.
- Data Processing & Stream Handling:
- Build real-time data pipelines using tools like Apache Kafka, Spark, or similar technologies for data stream processing.
- Handle large datasets and complex transformations to support analytical workflows and reporting.
Key Skills & Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
- Proficiency in SQL, with hands-on experience in complex queries, joins, and optimizations.
- Strong experience with cloud platforms (AWS, Azure, GCP), with tools like Azure Data Factory, AWS Glue, BigQuery, or similar.
- Expertise in building and maintaining scalable ETL pipelines using tools like Airflow, dbt, or Spark.
- Experience with data modeling and warehousing, including dimensional modeling, schema design, and data integrity management.
- Familiarity with stream processing technologies like Kafka and Spark.
- Hands-on experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Strong problem-solving skills and the ability to troubleshoot and optimize complex data workflows.
- Experience with data security, governance, and compliance standards.
- Strong communication skills to collaborate effectively with technical and non-technical teams.
Preferred Skills:
- Experience with data visualization tools (e.g., Power BI, Tableau) for creating dashboards and reporting.
- Familiarity with Python for data transformations, automation, and scripting.
- Experience with metadata management and data cataloguing tools.
- Familiarity with monitoring tools like Datadog, Prometheus, or Grafana.
Benefits
- Commissions – Earn commissions on the successful closing of projects.
- Meal Facility – Enjoy delicious meals at the office.
- Gym Facility – Stay active with access to our partner gym.
- Health Support – Get free medical consultations when needed.
- Restaurant Perks – Enjoy special discounts and offers through our restaurant collaborations.
Pay: Rs150,000.00 - Rs250,000.00 per month