StitcherAI

stitcher.ai

2 Jobs

5 Employees

About the Company

StitcherAI provides an essential system of record for enterprise IT Finance teams striving to maximize the value of their IT investments. To tackle today’s IT Finance challenges, organizations require accurate, actionable, business-aligned data and an engagement model that enables alignment and action across the enterprise. Traditional FinOps and IT Finance tools often fail to deliver results, prompting many companies to build their own solutions, which carry their own risks and limitations. StitcherAI addresses these gaps with its AI-powered system of record for IT Finance: it creates business-aligned IT Finance datasets and delivers critical data directly to stakeholders, tools, and business processes, enabling meaningful action. Connect with us to discover the future of IT Finance!

Listed Jobs

Company Name
StitcherAI
Job Title
Back End Developer
Job Description
**Job Title**
Back End Developer (Staff Data Engineer)

**Role Summary**
Design, develop, and operate high‑performance, cloud‑native backend services and data pipelines for an AI‑powered low‑code cost analytics platform. Deliver scalable, performant solutions that integrate across multi‑cloud environments, SaaS APIs, and diverse storage systems, while providing reliable REST APIs and supporting the product lifecycle from design to deployment.

**Expectations**
- Act with a “founder” mindset: proactive, end‑to‑end ownership, and a strong work ethic.
- Consistently deliver measurable results and help scale the product and team.
- Excel at cross‑functional collaboration and communication, and adapt to shifting priorities in a startup culture.

**Key Responsibilities**
1. Build and maintain enterprise‑scale data pipelines, ensuring reliability, performance, and cost‑efficiency.
2. Design backend services for high throughput, low latency, and scalability using open‑source technologies.
3. Orchestrate data workflows with Temporal, Airflow, or equivalent systems.
4. Integrate platform components with multiple cloud providers (AWS, Azure, GCP), SaaS APIs, and various storage formats (Parquet, CSV, Avro).
5. Develop, test, and deploy cloud‑native microservices (REST APIs) in Docker/Kubernetes clusters.
6. Implement monitoring, logging, metrics, and CI/CD pipelines.
7. Collaborate with data scientists and product teams to expose analytics features through the low‑code interface.
8. Tune the performance of distributed aggregations, transformations, clustering, partitioning, and storage strategies.

**Required Skills**
- 5+ years building and maintaining large‑scale data platforms.
- 3+ years of Python and Rust proficiency in cloud‑native development.
- Expertise with Pandas, Polars, and performance‑critical data processing.
- Hands‑on experience with distributed data technologies: Hadoop, Hive, Spark, EMR.
- Proven ability to orchestrate pipelines using Temporal, Airflow, or similar.
- Cloud integration: AWS, Azure, GCP, SaaS provider APIs, and storage systems.
- Backend development: REST APIs, JSON, gRPC.
- DevOps: Kubernetes, Docker, CI/CD, logging, metrics, cloud‑native best practices.
- Optional: AI/ML forecasting, anomaly detection, GenAI model training/serving; familiarity with FinOps concepts.

**Required Education & Certifications**
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or equivalent professional experience.
- Relevant certifications (e.g., AWS Certified Solutions Architect, Certified Kubernetes Administrator) are advantageous but not mandatory.
Toronto, Canada
Remote
Mid level
18-12-2025
Company Name
StitcherAI
Job Title
Principal Engineer
Job Description
Job Title: Principal Engineer

Role Summary: Lead data engineering for an AI‑native FinOps platform, architecting scalable data pipelines and cloud‑native services that deliver cost insights to enterprise users.

Expectations:
• Own problems end to end, from identification to solution, with a founder’s mindset
• Consistently deliver high‑quality results while scaling both the product and the engineering team
• Communicate across disciplines and adapt swiftly to evolving priorities in a fast‑moving startup

Key Responsibilities:
• Design and build high‑performance data systems using Python/Rust and modern open‑source tools
• Architect, develop, test, and deploy data pipelines with frameworks such as Polars, Temporal, and Airflow
• Integrate data from multiple sources (cloud platforms, SaaS APIs, storage formats) into a unified, actionable system
• Implement RESTful APIs, Docker/Kubernetes deployments, CI/CD pipelines, and observability for production environments
• Optimize performance through distributed aggregations, partitioning, clustering, and storage tuning

Required Skills:
• 5+ years building enterprise‑scale data platforms
• 3+ years of hands‑on experience with Python and/or Rust for cloud‑native systems
• Deep knowledge of big data technologies (Spark, Hadoop, Hive) and data transformation libraries (Pandas, Polars)
• Expertise in data pipeline orchestration (Temporal, Airflow) and cloud integration (AWS/GCP/Azure, SaaS APIs)
• Strong backend fundamentals: REST APIs, Docker, Kubernetes, CI/CD, observability, monitoring
• Optional: AI/ML experience (forecasting, anomaly detection, GenAI training/deployment), authentication/authorization knowledge, FinOps or cost‑analytics background

Required Education & Certifications: None specified.
Toronto, Canada
On site
Senior
04-02-2026