Haptiq Technology and Solutions, a leader in uniting companies and their investors through innovative technology, is looking for a Cloud Data Engineer to join our dynamic team in Canada. Haptiq excels in transforming potential into tangible performance, offering a suite of meticulously designed software solutions that cater to a wide range of business needs. With a global footprint and a team of over 200 skilled professionals, we stand as a strategic partner for businesses navigating the complexities of the digital landscape.
The Opportunity
We are seeking a highly motivated, self-driven data engineer for our growing data team who can deliver both independently and as part of a team. In this role, you will play a crucial part in designing, building, and maintaining our ETL infrastructure and data pipelines.
Key Responsibilities
- This position calls for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.
- Adhere to established coding principles and standards.
- Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
- Design, develop, and deploy Python scripts and ETL processes in an Azure Data Factory (ADF) environment to process and analyze varying volumes of data.
- Experience with data warehousing (DWH), data integration, cloud platforms, solution design, and data modeling.
- Proficiency in developing programs in Python and SQL.
- Experience with dimensional data modeling for data warehouses.
- Work with event-based/streaming technologies to ingest and process data.
- Work with structured, semi-structured, and unstructured data.
- Optimize ETL jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Experience designing and developing enterprise data warehouse solutions.
- Proficiency in writing SQL queries and programs, including stored procedures, and in reverse-engineering existing processes.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Check in and check out code, peer-review changes, and merge pull requests into the Git repository.
- Deploy packages and migrate code to staging and production environments via CI/CD pipelines.
Skills
- 3+ years of Python coding experience.
- 5+ years of SQL Server-based development on large datasets.
- 5+ years of experience developing and deploying ETL pipelines using Databricks and PySpark.
- Experience with a cloud data platform such as Synapse, ADF, Redshift, or Snowflake.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Previous experience leading an enterprise-wide Cloud Data Platform migration with strong architectural and design skills.
- Experience with Cloud based data architectures, messaging, and analytics.
- Nice to have: experience with Airflow, AWS Lambda, AWS Glue, or Step Functions.
Seniority level
Mid-Senior level
Employment type
Full-time
Job function
Information Technology
Industries
Software Development