12 month contract
1-2x per week on-site in Toronto
Pay: $35-42/hr T4
Must haves:
- 3+ years of experience in software development
- Strong programming background with hands-on experience in Python and PySpark
- Experience working with Azure Cloud and related services (ADLS, Data Factory, DevOps, etc.)
- Strong ETL and SQL experience; ability to work with various data formats and large datasets
- Experience with tools like GitHub, CI/CD pipelines, version control systems
Nice to haves:
- Experience with Kafka, Hive, Hadoop, Databricks, or Oracle
- Experience working in the banking or financial services industry
- Azure certifications
- Experience with PowerShell scripting
Job description:
The successful candidate will join a large, collaborative team working on multiple high-impact projects, including migrations and the development of a new streaming service. Each project handles different types of data, and this role will involve both ingesting data into the organization's platform and building new components for complex data pipelines. The engineer will work under a tech lead, contributing to new development and deployment efforts, and is expected to manage multiple deliverables at once. This is a heads-down, hands-on programming role that requires strong technical skills and a proactive, go-getter attitude.
Responsibilities:
- Collaborate with the team to build and maintain scalable data pipelines
- Ingest data from various sources into internal platforms
- Participate in the development of a new streaming service
- Leverage DevOps tools to manage deployments and maintain CI/CD pipelines
- Write efficient and reusable code in Python, PySpark, and PowerShell
- Work with Azure cloud services to store, transform, and analyze data
- Stay adaptable to new tools and technologies as the projects evolve