
DataOps Engineer (job offer)

CZ - Prague
Analysis

Description

  • Design, build, and optimize data pipelines that extract data from multiple sources
  • Provide first-level support for end users' requests concerning data pipeline issues
  • Develop ETL/ELT (batch/stream) from multiple sources using Spark and/or Kafka
  • Operate the data pipelines to ensure key SLAs are met across a wide range of producers and consumers
  • Support various components of the data pipelines, including ingestion, validation, cleansing and curation
  • Promote data collaboration, orchestration, quality, security, access, and ease of use
  • Gather data requirements from analytics and business departments
  • Write and maintain operational and technical documentation, and perform tasks within an Agile methodology

Requirements

  • Hands-on experience with cloud-native technologies (Azure/GCP)
  • Direct experience building data pipelines with tools such as Data Factory, Data Fusion, or Apache Airflow
  • Understanding of a wide range of big data technologies and data architectures
  • Familiarity with data platforms and technologies such as Kafka, Delta Lake, and Spark; Databricks is a plus
  • Demonstrated experience in one or more programming languages, preferably Python
  • Good knowledge of version control and of CI/CD tools such as Jenkins and GitHub Actions
  • Experience with monitoring tools such as Prometheus, Grafana, or the ELK stack is a plus
  • Ability to bring both engineering and operational (DevOps) mindset to the role
  • Strong team player willing to cooperate with colleagues across office locations and functions
  • Very strong English language skills are a must




