Role & Responsibilities:
- Design and implement scalable ETL/ELT pipelines for data ingestion, transformation, and storage.
- Optimize data workflows for performance and scalability.
- Integrate data from multiple sources into centralized data platforms.
- Architect and maintain databases, data lakes, and data warehouses for both structured and unstructured datasets.
- Monitor and validate data to ensure accuracy, consistency, and completeness.
- Implement data security, governance, and compliance controls aligned with industry standards.
- Collaborate with data analysts, AI engineers, and cross-functional teams to gather and define data requirements.
- Provide technical support for analytics, BI, and reporting platforms.
- Evaluate new tools and technologies to enhance data architecture and performance.
- Maintain detailed documentation and update technical guides regularly.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related discipline.
- 3–4 years of hands-on experience designing, building, and managing data pipelines and data architectures.
- Strong knowledge of SQL and NoSQL databases, data modeling concepts, and schema design.
- Hands-on expertise with data processing frameworks such as Apache Spark or Hadoop.
- Experience working with major cloud platforms (AWS, Azure, or Google Cloud).
- Proficiency in programming languages such as Python, Scala, or Java.
Preferred Skills:
- Experience with data visualization tools (e.g., Power BI, Tableau).
- Knowledge of streaming and real-time data technologies (e.g., Kafka, Flink).
- Professional certifications in cloud data services (e.g., AWS Certified Data Analytics, Azure Data Engineer).
Shift Timings: 9 AM – 6 PM (PKT)
Work Location: Onsite for Karachi & Islamabad; remote for other cities.