Data Engineering Analyst II — Data and Infrastructure Department
Posting Closes On: May 4, 2025
Employment Status: Full-Time, Permanent, Remote
Position Overview:
- Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity
- Collaborates with stakeholders including data analytics, accounting, operations, data science, and IT teams to address data-related technical issues
- Utilizes programming skills, ETL tools, and data virtualization solutions to develop and manage integration tools, APIs, pipelines, and data platform ecosystems
- Designs and builds infrastructure for optimal extraction, transformation, and loading of data from various sources, including structured, unstructured, and big data
- Creates data tools for analytics and data science teams to support product optimization and strategic objectives
- Prepares data for predictive and prescriptive modeling and assists in deploying analytics programs, machine learning, and predictive models
- Builds analytics tools to provide actionable insights into customer acquisition, operational efficiency, and key business performance metrics
- Implements processes to monitor data quality, ensuring accuracy and availability for stakeholders
- Contributes to engineering documentation and tests data ecosystem reliability
- Leads projects to ensure timely completion and adherence to expectations, following DevOps procedures
- Ensures alignment with the strategy and direction set by leadership, committing to a direction and driving operations to completion
- Analyzes data engineering trends to ensure alignment with industry best practices and to continuously improve Servus's techniques for meeting its information needs
- Contributes to Servus culture and data maturity through effective communication with peers across other areas of Servus
- Monitors data protection controls, identifies gaps, and recommends solutions in collaboration with Security, Privacy, and Risk Management groups
- Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability
Requirements
- Bachelor's degree or diploma in Computer Science, Engineering, Management Information Systems, or a related field
- Experience with cloud data platforms such as Azure, Databricks, or Synapse
- ETL/ELT pipeline experience with ADF, Synapse Pipelines, dbt, or Databricks DLT
- Advanced SQL and intermediate Python experience (5 years)
- Data modeling experience with schema design and dimensional data models (4 years)
- Experience with Agile Software Development methodologies
- Experience with ML libraries and frameworks
- Strong understanding of data science concepts and advanced analytics
Benefits
- Training & Development Opportunities
- Career Advancement Potential
- Flexible work options
- Competitive Compensation including performance-based incentive pay
- Meaningful work towards individual and corporate goals
- Opportunities to get involved and give back through an employee volunteer program
What happens next?
Discover a sense of belonging amongst a team of unique, authentic individuals working together to reimagine financial fitness. We value and celebrate the richness that diverse backgrounds and experiences bring to our community. Your skills, passion, and curiosity may be exactly what we're looking for, so even if you don't check every box, we encourage you to apply!