What is Equisoft?

Equisoft is a global provider of digital solutions for insurance and investment, recognized by over 250 of the world's leading financial institutions. We offer a comprehensive ecosystem of scalable solutions that help our customers meet the challenges of this era of digital transformation, thanks to our business-driven approach, in-depth industry knowledge, cutting-edge technologies, and multicultural team of experts based in North America, the Caribbean, Latin America, Europe, Africa, Asia, and Australia.

Why Choose Equisoft?

With 950+ employees, we are a stable organization that offers career advancement opportunities and fosters a stimulating environment. If that's not enough, check out these other perks:

- Hiring location: Canada (Montreal or Quebec City)
- Hybrid work in a collaborative workspace
- Internal job title: Senior Data Developer
- Full-time, permanent position
- Benefits available from day 1: medical, dental, retirement plan, telemedicine program, employee assistance program, etc.
- Flexible hours
- Number of hours per week: 40
- Educational support (LinkedIn Learning, LOMA courses, and Equisoft University)

Role:

The Senior Data Engineer reports to the AVP, Core Insurance and works closely with AI/ML teams, product managers, and software engineering teams. The incumbent will be responsible for designing, building, and maintaining robust data infrastructure and pipelines that support Equisoft's AI and ML initiatives, with a focus on scalable data processing and real-time analytics for insurance and investment solutions.
Below is a brief description of the product the candidate will be working on:

Equisoft/amplify

Our AI-powered insurance ecosystem leverages advanced integration technologies to connect intelligent agents with core business systems. The platform uses Model Context Protocol (MCP) servers and agentic workflow orchestration to enable seamless data exchange, automated decision-making, and intelligent process automation across policy management, claims processing, and customer service applications.

Your Day with Equisoft:

- Build and maintain scalable Databricks pipelines for ML workflows, including data ingestion, transformation, and feature engineering for machine learning models
- Implement and optimize synthetic data generation infrastructure to support ML training while ensuring privacy compliance and data quality standards
- Create sophisticated data augmentation pipelines designed for insurance scenarios, including policy data, claims processing, and risk assessment use cases
- Optimize data storage and retrieval systems for training efficiency, implementing partitioning strategies, caching mechanisms, and performance tuning
- Develop and maintain real-time stream processing capabilities using Apache Spark Structured Streaming, Kafka, and other modern streaming technologies
- Ensure comprehensive data quality and compliance for ML training datasets, implementing validation frameworks, monitoring systems, and governance policies
- Design and implement ETL/ELT pipelines using modern data stack tools, including Apache Airflow, dbt (data build tool), and cloud-native services
- Collaborate with ML teams to establish data versioning, lineage tracking, and reproducibility for model training datasets
- Monitor and troubleshoot data pipeline performance, implementing automated alerting and recovery mechanisms
- Work with cloud platforms (AWS, Azure, GCP) to architect scalable data solutions and optimize costs
- Implement data security best practices, including encryption, access controls, and audit logging

Requirements:

Technical

- Bachelor's degree in Computer Science, Data, Software, or a related field, or a college diploma combined with 4+ years of relevant experience
- Minimum of 3 years' experience in data engineering, with demonstrated expertise in building production data pipelines, and 7 years of total experience in the data field
- Extensive hands-on experience with the Databricks platform, including Apache Spark, Delta Lake, and Databricks workflows
- Proficiency in Python and SQL for data processing, transformation, and pipeline development
- Strong experience with cloud data platforms (AWS, Azure, or Google Cloud) and their data services (S3, Redshift, BigQuery, etc.)
- Experience with real-time stream processing frameworks (Apache Kafka, Spark Structured Streaming, Apache Flink)
- Knowledge of data orchestration tools such as Apache Airflow, Prefect, or similar workflow management systems
- Understanding of data modeling concepts, including dimensional modeling, data vault, and lakehouse architecture
- Experience with infrastructure as code (Terraform, CloudFormation) and containerization (Docker, Kubernetes)
- Familiarity with version control systems (Git) and CI/CD practices for data pipelines
- Excellent knowledge of French and English (spoken and written)

Soft skills

- Strong analytical and problem-solving abilities with attention to detail
- Excellent communication skills for presenting complex technical concepts to diverse stakeholders
- Ability to work effectively in cross-functional teams and manage multiple projects simultaneously
- Detail-oriented approach to data governance, security, and compliance requirements
- Adaptability to rapidly evolving data technologies and best practices
- Self-motivation, strong organizational skills, and the ability to work autonomously
- Team spirit, collaboration, and a knowledge-sharing mindset

Nice to Haves:

- Cloud certifications (AWS Data Engineer, Azure Data Engineer, Google Cloud Data Engineer)
- Databricks certification (Associate or Professional Data Engineer)
- Knowledge of machine learning workflows and MLOps practices
- Experience with data mesh architecture and domain-driven data design
- Experience with Apache Iceberg, Hudi, or other open table formats
- Experience with data quality frameworks (Great Expectations, Deequ, Monte Carlo)
- Knowledge of privacy-preserving technologies (differential privacy, federated learning)
- Experience with dbt (data build tool) for analytics engineering
- Understanding of DataOps practices and data pipeline testing strategies
- Experience in the insurance or financial services domain and with its regulatory requirements

Equisoft is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Introduce yourself to our recruiters and we'll get in touch if there's a role that seems like a good match.