Palantir Foundry / PySpark Expert – Full remote!

Job Overview

Palantir Foundry Consultant (100% Remote)

For our prestigious client, we are currently looking for a Palantir Foundry consultant for a two-month contract (with the possibility of extension).

Project background: The pricing initiatives are part of the client's overall pricing process. Due to a heavy workload, the client needs support from a Palantir Foundry consultant. The project setup is agile: each sprint lasts two weeks, and there is a daily 15-minute jour fixe in which the team discusses current requirements and issues.


  • Enhance the existing advanced data pipelines and low-code applications for the pricing initiatives according to the product backlog. Documentation of the existing pipelines will be provided by the client in advance.
  • Consult on the price generation process so that it can be expanded to different levels of the value creation process.
  • Maintain and extend PySpark pipelines, Low Code Environment applications, and object modification processes.
  • Provide functional and technical consulting for business analysis and facilitate the documentation of business requirements, which will be validated and approved by the client.
  • Independently implement the required business functions (as described above) in the PySpark pipelines, Low Code Environment applications, and object modification processes, including the related deployment activities.
  • Create CRUD (create, read, update, delete) operations for predefined objects (as defined during the business analysis), which will be validated and approved by the client.
  • Participate in Scrum meetings, including presenting results in sprint reviews.
  • Develop the new functions (as defined during the business analysis) for expanding the price generation process.
  • Build a testing environment for potential new changes.
  • Execute code tests and data quality checks according to the specifications of the product backlog.
  • Document software code inline; continuously document introduced or modified processes and templates in Azure DevOps and the wiki. The client will review quality as part of the sprint review process.
  • Hand over to the client or another external company in several online sessions, during which the documentation will be checked and enhanced where necessary.


  • Palantir Foundry tools: Workshop, Slate, Object Explorer, Ontology
  • 1.5+ years of experience using Palantir Foundry
  • PySpark (experienced)
  • SQL
  • Spark (distributed computing)
  • JavaScript and/or TypeScript
  • Low Code Application Development
  • Languages: English (must have), German (nice to have)

If you are interested in this project, please email an updated CV to [email protected]. We look forward to speaking with you!
