Hays

Data Modeler

Hybrid

Toronto, Canada

Freelance

05-02-2026


Skills

Communication, Python, Java, SQL, NoSQL, Data Engineering, MongoDB, Cassandra, GitHub, CI/CD, DevOps, Docker, Kubernetes, Azure Data Factory, Problem-solving, Databases, Azure, Pandas, Spark, PySpark, Databricks

Job Specifications

Summary

We are seeking a skilled Data Modeler with strong hands‑on experience in data modeling, cloud-based data engineering, and modern ETL/ELT frameworks. The ideal candidate will have deep expertise in Java, SQL, and Python, along with experience working on Azure cloud services and distributed data platforms. This role involves designing scalable data models, building efficient data pipelines, and collaborating with cross-functional teams to support analytical and operational use cases.

Key Responsibilities

Design, develop, and maintain conceptual, logical, and physical data models to support business and technical requirements.
Build and optimize ETL/ELT pipelines using Python libraries (e.g., pandas, Polars, PySpark, Ibis) and orchestration tools.
Develop high-performance SQL queries, stored procedures, and data transformations.
Collaborate with data engineers, architects, and analysts to ensure data solutions meet quality, governance, and performance standards.
Work extensively with Azure Cloud PaaS services, including Databricks, Azure Data Factory, and Azure Data Lake Storage (ADLS).
Implement data ingestion, transformation, and processing workflows using modern ETL tooling (NiFi, Griffin, Hamilton, Airflow, etc.).
Manage and maintain version-controlled data artifacts using GitHub.
Deploy and maintain containerized workloads using Docker and Kubernetes.
Support DevOps-driven CI/CD processes for automated build and deployment pipelines.
Design data storage patterns and optimize access for NoSQL databases such as Cassandra and MongoDB.
Ensure that data models comply with enterprise governance, privacy, and security standards.

Required Skills & Qualifications

Strong proficiency in Java, SQL, and Python for scripting and data engineering tasks.
Hands-on experience with Azure Cloud services: Databricks, ADF, ADLS, and related PaaS components.
Expertise in ETL/ELT frameworks, orchestration tools, and workflow automation (Airflow, NiFi, Griffin, Hamilton, etc.).
Solid understanding of data modeling techniques, including normalization, dimensional modeling, and schema design.
Proficiency in working with NoSQL databases such as Cassandra and MongoDB.
Experience with containerization (Docker) and orchestration (Kubernetes).
Working knowledge of GitHub and CI/CD pipelines.
Strong understanding of distributed computing and large-scale data processing frameworks (Spark/PySpark).
Excellent problem-solving abilities and communication skills.

About the Company

We are leaders in specialist recruitment and workforce solutions, offering advisory services such as learning and skill development, career transitions, and employer brand positioning. As the Leadership Partner to our customers, we invest in lifelong partnerships that empower people and businesses to succeed. We help you achieve your career goals and deliver your business needs by combining meaningful innovation with our global scale and insights. Last year we helped over 280,000 people find their next career.