Data Engineer
London
Contract
Role overview
We are seeking a highly skilled Data Engineer to design, build, and maintain scalable data pipelines and systems that support our organization’s data-driven goals. In this role, you will collaborate with cross-functional teams to ensure the efficient collection, processing, and transformation of data into usable formats. You’ll contribute to the architecture and optimization of our data infrastructure while implementing best practices for data quality, governance, and performance.
Responsibilities
Design, build, and maintain robust, scalable ETL/ELT pipelines that extract, transform, and load data from diverse sources.
Optimize data workflows for performance, reliability, and scalability.
Collaborate with analysts, data scientists, and engineers to understand data requirements and implement solutions.
Ensure the integrity, availability, and security of data through robust governance practices.
Build and maintain data models, schemas, and databases to support analytical and operational needs.
Integrate and manage large-scale datasets across cloud and on-premises platforms.
Stay current with emerging data engineering technologies and best practices, and apply them to continuously improve our systems.
Troubleshoot and resolve data-related issues promptly.
Required qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
3+ years of experience in data engineering or a related role.
Proficiency in programming languages such as Python, Scala, or Java for data processing.
Strong understanding of SQL and experience with relational (e.g., PostgreSQL) and non-relational databases (e.g., MongoDB, Cassandra).
Hands-on experience with big data tools like Apache Spark, Hadoop, or Kafka.
Familiarity with cloud platforms such as AWS (e.g., Redshift, S3), Azure, or Google Cloud.
Experience with data warehouse solutions (e.g., Snowflake, BigQuery) and ETL/ELT tools.
Knowledge of data modeling, schema design, and best practices for data pipelines.
Experience with CI/CD practices and tools for data engineering pipelines.
Strong problem-solving and debugging skills with a proactive attitude.
What we offer
Competitive salary with performance-based bonuses.
Comprehensive health, dental, and vision insurance.
Flexible working hours and options for remote work.
Professional development budget for certifications, training, and conferences.
Generous paid time off and company holidays.
Collaborative and inclusive workplace culture.