Our client is looking for highly experienced and motivated Data Engineers to join their team. This role is ideal for someone with a strong background in PySpark, Python, and SQL and deep expertise in Palantir’s Foundry data platform.
You will be responsible for designing, developing, and optimizing scalable data pipelines to support critical business operations and analytics.
Responsibilities:
Develop and maintain robust data pipelines using Palantir Foundry.
Ingest data from diverse sources (on-premises, cloud, REST APIs, files) into Foundry via Magritte.
Author and optimize SQL queries for relational databases.
Prepare, wrangle, visualize, and report data using Foundry tools (Code Workbook, Contour, Fusion, Spreadsheet).
Troubleshoot and resolve pipeline-related issues to ensure performance and scalability.
Collaborate with business analysts and subject matter experts to scope and analyze new integration use cases.
Design data models and perform data cleansing and transformation.
Build and manage product backlogs and user stories aligned with agile methodologies.
Create and manage project deliverables across the application development lifecycle.
Requirements:
Palantir Foundry certification.
5+ years of hands-on experience in data engineering with PySpark and Python.
Strong understanding of Palantir Foundry’s architecture and development tools.
Proficiency in SQL and relational database management.
Experience ingesting data via Magritte and optimizing large-scale pipelines.
Familiarity with agile project methodologies and backlog management.
Excellent problem-solving skills and ability to work cross-functionally.