Job Description

What is the opportunity?

Are you a talented, creative, and results-driven professional who thrives on delivering high-performing applications? Come join us!

Global Functions Technology (GFT) is part of RBC’s Technology and Operations division. GFT’s impact is far-reaching as we collaborate with partners from across the company to deliver innovative and transformative IT solutions. Our clients represent Risk, Finance, HR, CAO, Audit, Legal, Compliance, Financial Crime, Capital Markets, Personal and Commercial Banking, and Wealth Management. We also lead the development of digital tools and platforms to enhance collaboration.

We are looking for a highly skilled MLOps Engineer to help design and build a production-grade machine learning pipeline for financial risk model training and inference. The pipeline will support model training, testing, and inference using Python and PySpark on public cloud (AWS) and on-premises infrastructure.

This role is ideal for an engineer who combines strong Python and cloud engineering skills with a solid understanding of machine learning model lifecycle management, from data preparation through training, validation, registration, and operational inference. You’ll collaborate closely with data scientists, DevOps, and risk IT teams to build a reliable, automated, and auditable MLOps platform that meets enterprise standards for security, governance, and scalability.

What will you do?

- Design and implement end-to-end MLOps pipelines to train, test, register, and deploy credit risk machine learning models.
- Build and automate model lifecycle management workflows, including versioning, promotion, approval, and deprecation.
- Develop and integrate a model registry (e.g., MLflow, SageMaker Model Registry, or a custom solution) to manage model metadata, lineage, and reproducibility.
- Orchestrate data and training workflows using tools such as Airflow, AWS Step Functions, Stonebranch, or Prefect.
- Implement CI/CD pipelines using GitHub Actions, Jenkins, or AWS CodePipeline, ensuring consistent and automated deployment processes.
- Build data preparation and training scripts in Python and PySpark, optimized for performance and scalability on AWS EMR or similar clusters.
- Manage model artifacts, dependencies, and environments across AWS and on-premises contexts.
- Ensure strong observability and auditability through structured logging, metrics, and model performance tracking.
- Collaborate with DevOps and data engineering teams to ensure secure integration, data governance, and production readiness.

What do you need to succeed?

Must Have:

- Hands-on expertise with AWS data and ML services, e.g., S3, EMR, Lambda, Step Functions, ECS/EKS, SageMaker, CloudWatch, and IAM.
- Experience building and maintaining model registries, versioning systems, and artifact repositories (e.g., MLflow, SageMaker, DVC).
- Solid understanding of model lifecycle management, from training and testing to deployment, monitoring, and retraining.
- Strong grasp of CI/CD practices, using tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with hybrid deployment environments (AWS and on-prem) and related networking/security considerations.
- Proficiency in Python for production-quality scripting, automation, and ML workflow integration.
- Strong experience with PySpark for distributed data processing and model training.

Required Experience:

- 5+ years of experience in software engineering, data engineering, or MLOps in enterprise-scale or regulated environments.
- Proven track record of building ML pipelines in production, preferably in financial services or other data-sensitive domains.
- Experience managing model artifacts and metadata for auditability and compliance.
- Practical knowledge of containerization (Docker) and infrastructure automation (Terraform, CloudFormation).
- Strong background in Linux-based systems, shell scripting, and environment management.
- Experience collaborating with data scientists and model validators to operationalize, monitor, and maintain models.
- Understanding of data governance and regulatory requirements (e.g., model audit trails, reproducibility).

Required Certifications (or equivalent experience):

- AWS Certified Solutions Architect Associate or higher (must have).
- AWS Certified Machine Learning Engineer Associate or AWS Certified Machine Learning Specialty (strongly preferred).
- AWS Certified DevOps Engineer Professional (preferred).
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related quantitative and technical field.
- AWS CloudOps/SysOps Engineer Associate (nice to have).
- Databricks Certified Data Engineer Associate/Professional or equivalent PySpark certification (nice to have).
- Python PCAP or Terraform Associate certification (nice to have).

Nice to Have:

- Experience implementing model monitoring and drift detection.
- Familiarity with distributed training and parallel compute frameworks (Ray, Spark, Dask).
- Experience with feature stores, data lineage, or metadata tracking systems.
- Exposure to financial risk modeling workflows.
- Working knowledge of container orchestration (Kubernetes, OpenShift) and hybrid deployments.
- Familiarity with secure data exchange patterns between cloud and on-prem environments.
- Exposure to observability stacks (ELK, Prometheus, Grafana, CloudWatch).

What’s in it for you?

We thrive on the challenge to be our best, progressive thinking to keep growing, and working together to deliver trusted advice to help our clients thrive and communities prosper. We care about each other, reaching our potential, making a difference to our communities, and achieving success that is mutual.
- A comprehensive Total Rewards Program, including bonuses and flexible benefits, competitive compensation, commissions, and stock where applicable
- Leaders who support your development through coaching and managing opportunities
- Ability to make a difference and lasting impact
- Work in a dynamic, collaborative, progressive, and high-performing team
- A world-class training program in financial services
- Flexible work/life balance options
- Opportunities to do challenging work

#LI-Post #TechPJ

Job Skills

Amazon Sagemaker, Apache Airflow, Apache Spark, AWS Architecture, Big Data Management, Big Data Platforms, Big Data Solutions, Big Data Tools, Cloud Computing, Credit Risk Management, Database Development, Data Mining, Data Warehousing (DW), Distributed Computing, ETL Development, Generative AI, Kubernetes, Liquidity Risk, Machine Learning Model Management, Machine Learning Operations, Market Risk Management, MLflow, Pandas Python Library, PySpark {+ 6 more}

Additional Job Details

Address: 410 Georgia St W, Floor 3, Vancouver
City: Vancouver
Country: Canada
Work hours/week: 37.5
Employment Type: Full time
Platform: TECHNOLOGY AND OPERATIONS
Job Type: Regular
Pay Type: Salaried
Posted Date: 2025-07-16
Application Deadline: 2025-11-30

Note: Applications will be accepted until 11:59 PM on the day prior to the application deadline date above.

Inclusion and Equal Opportunity Employment

At RBC, we believe an inclusive workplace that has diverse perspectives is core to our continued growth as one of the largest and most successful banks in the world. Maintaining a workplace where our employees feel supported to perform at their best, effectively collaborate, drive innovation, and grow professionally helps to bring our Purpose to life and create value for our clients and communities. RBC strives to deliver this through policies and programs intended to foster a workplace based on respect, belonging and opportunity for all.

Royal Bank of Canada is a global financial institution with a purpose-driven, principles-led approach to delivering leading performance. Our success comes from the 84,000+ employees who bring our vision, values and strategy to life so we can help our clients thrive and communities prosper. As Canada’s biggest bank, and one of the largest in the world based on market capitalization, we have a diversified business model with a focus on innovation and providing exceptional experiences to more than 16 million clients in Canada, the U.S. and 34 other countries. Learn more at rbc.com.

We are proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at rbc.com/community-social-impact.