Primary Skill: Palantir Foundry platform expertise
Programming: Python, PySpark
Data Handling: SQL, ETL/ELT workflows, data modeling, validation
Job Description:
Responsibilities:
- Develop and maintain data pipelines using Python, PySpark, and SQL for data transformations and workflows in Foundry.
- Build user interfaces, dashboards, and visualizations within Foundry’s Workshop application for data analysis and reporting.
- Collaborate with stakeholders, including data engineers, business analysts, and business users, to gather requirements, design solutions, and ensure project success.
- Ensure data quality and performance by implementing validation, testing, and monitoring processes.
- Contribute to the Foundry ecosystem through code reviews, documentation, and shared best practices to strengthen overall platform adoption.
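As an illustration of the data-quality work described above, here is a minimal sketch of a validation step of the kind a pipeline might run before publishing a dataset. It is written in plain Python so it stands alone; in practice the same checks would typically be expressed over a PySpark DataFrame, and the column names and threshold here are hypothetical:

```python
def validate_rows(rows, required_columns, max_null_fraction=0.1):
    """Return a list of data-quality errors: missing columns or columns
    whose null fraction exceeds max_null_fraction. Empty list = pass."""
    if not rows:
        raise ValueError("dataset is empty")
    errors = []
    for col in required_columns:
        if col not in rows[0]:
            errors.append(f"missing column: {col}")
            continue
        nulls = sum(1 for r in rows if r.get(col) is None)
        if nulls / len(rows) > max_null_fraction:
            errors.append(f"column {col!r} has too many nulls ({nulls}/{len(rows)})")
    return errors

# Usage with a toy dataset (hypothetical columns):
data = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": 4.50},
]
print(validate_rows(data, ["order_id", "amount"], max_null_fraction=0.5))  # []
print(validate_rows(data, ["order_id", "region"]))  # ['missing column: region']
```

A check like this is usually wired into the pipeline so a failing dataset blocks downstream consumers rather than silently propagating bad data.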
Skills and Qualifications:
- Bachelor’s degree in Computer Science, Data Science, or a related field; advanced degree preferred.
- Palantir Foundry Expertise: Hands-on experience with Foundry components such as Workshop, Code Repositories, Pipeline Builder, and Ontology.
- Programming Skills: Proficiency in Python and PySpark for data manipulation and pipeline development.
- Database Knowledge: Strong SQL skills for data extraction, transformation, and query optimization.
- Data Engineering Background: Experience with ETL/ELT workflows, data modeling, and validation techniques.
- Cloud Platform Familiarity: Exposure to GCP, AWS, or Azure preferred.
- Collaboration and Communication: Strong interpersonal and communication skills for working effectively with technical and non-technical stakeholders.
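The SQL skills listed above (extraction, transformation, query optimization) can be sketched with Python’s standard-library sqlite3 module; the table and column names below are purely illustrative:

```python
import sqlite3

# Toy orders table and an aggregation query of the kind a data engineer
# might write and tune. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 10.0), (2, "EU", 5.0), (3, "US", 7.5)],
)
# Indexing the grouping/filtering column is a common optimization step.
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")

rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 15.0), ('US', 7.5)]
```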