SIBLING

www.siblingrecruitment.com

2 Jobs

3 Employees

About the Company

We are Sibling, a modern IT and technology recruitment agency serving clients across Europe and the USA. We provide permanent, contract, and retained recruitment services. With over 20 years of combined recruitment experience, we keep our finger on the pulse of modern technology.

Our Sectors:
Data Science/Data Engineering
DevOps/SecOps/MLOps
Cyber Security
Software Development
Embedded Software

Our Candidate Services:
Information on market salaries/rates and skills/technical trends.
Upfront salary and package details.
Detailed job descriptions, company info, and further technical insights.
Insights into working environment, company culture, and career prospects.
CV and interview advice.

Speak to us:
+44 203 865 7392
hello@siblingrecruitment.com

Find out more:
www.siblingrecruitment.com

Listed Jobs

Company Name: SIBLING
Job Title: Data Engineer
Job Description:
**Job Title:** Data Engineer (Data & MLOps Engineer)

**Role Summary:** Build and maintain scalable data pipelines, operationalize machine-learning models, and enable cross-departmental data usage through reliable MLOps practices.

**Expectations:**
- Own and evolve the MLOps stack used by data scientists.
- Deliver robust, automated data workflows that integrate with analytics processes.
- Set and enforce coding standards, version control, and comprehensive documentation.
- Serve as the liaison between business stakeholders, data teams, and IT.

**Key Responsibilities:**
- Design, implement, and maintain data pipelines from diverse sources using Spark/PySpark on Azure Databricks.
- Deploy and manage ML models in production environments, ensuring performance, scalability, and traceability.
- Automate end-to-end data workflows and analytics pipelines to reduce manual effort.
- Establish and document coding standards, branching strategies, and CI/CD pipelines.
- Collaborate with business analysts, data scientists, and IT to translate requirements into technical solutions.
- Monitor and troubleshoot production pipelines and model performance.

**Required Skills:**
- Extensive experience with MLOps frameworks and best practices.
- Deep knowledge of data modeling, ETL design, and data engineering principles.
- Proficiency in Azure cloud services (Data Factory, Databricks, Azure ML).
- Strong command of Spark/PySpark for batch and streaming data processing.
- Expertise with GitHub, Git workflows, and CI/CD automation.
- Minimum 5 years of professional experience in Big Data, Machine Learning, or Data Science environments.

**Required Education & Certifications:**
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related technical field.
- Preferred certifications: Microsoft Certified: Azure Data Engineer Associate, Azure Machine Learning Fundamentals, or Databricks Certified Data Engineer.
Brussels, Belgium
On site
05-01-2026
Company Name: SIBLING
Job Title: ETL Developer
Job Description:
Job Title: ETL Developer

Role Summary: Design, develop, and maintain large-scale ETL/ELT pipelines for data migration projects, ensuring high data quality and performance across cloud data warehouses.

Expectations:
• 3+ years of professional experience in ETL/Data Engineering, with a focus on large-scale data migration.
• Proven ability to build efficient, scalable pipelines using modern cloud platforms.
• Strong analytical and problem-solving skills, with an emphasis on data integrity and performance tuning.

Key Responsibilities:
• Architect and implement end-to-end ETL workflows for complex migration projects.
• Write and optimize SQL and Python scripts for data extraction, transformation, and loading.
• Design data quality checks, validation rules, and monitoring solutions.
• Collaborate with data architects and stakeholders to define data models and business requirements.
• Document pipeline designs, configuration settings, and operational procedures.
• Troubleshoot production issues and ensure minimal downtime.

Required Skills:
• Expert SQL proficiency for data extraction, transformation, and performance tuning.
• Advanced Python programming for data processing and automation.
• Hands-on experience with cloud data warehouses, specifically Snowflake (mandatory).
• Knowledge of data pipeline orchestration tools (e.g., dbt, Airflow, or similar).
• Familiarity with data modelling concepts and best practices.
• Strong debugging, testing, and documentation skills.

Required Education & Certifications:
• Bachelor's degree in Computer Science, Information Systems, or related field.
• Relevant certifications in data engineering (e.g., Snowflake SnowPro Core, Google Cloud Data Engineer, AWS Certified Data Analytics) preferred but not mandatory.
Brussels, Belgium
On site
Junior
05-01-2026