Big Data Engineer Required

We have an exciting opportunity for an experienced Big Data Engineer with Hadoop and Kafka expertise to join an award-winning, multinational business for an initial 3-month contract (with potential for extension) in Berlin, Germany.

Key Responsibilities

  • Design and develop data applications using selected tools and frameworks.
  • Read, extract, transform, stage and load data to selected tools and frameworks.
  • Perform tasks such as writing scripts, scraping the web, calling APIs, and writing SQL queries.
  • Process unstructured data into a form suitable for analysis.
  • Support business decisions with ad hoc analysis as needed.
  • Monitor data performance and modify infrastructure as needed.

Desirable Skills

  • Two or more years’ experience in the Big Data space.
  • Proficiency in programming languages such as Python and Scala.
  • Good knowledge of distributed systems, such as Spark and Hadoop.
  • Data warehousing experience in a large-scale data environment.
  • Experience with open-source stream-processing platforms such as Kafka and Apache Spark.
  • Written and spoken English is a must; other European languages are a plus.

If you would like to apply for this position, please upload your resume using the form below.
