Are you ready to power the World's connections?
We are seeking a Data Platform Engineer to join our team. In this role, you will
design, develop, and maintain scalable data pipelines and systems, leveraging
modern data engineering tools and techniques. You will collaborate with
cross-functional teams to ensure data is accessible, reliable, and optimized for
analytics and decision-making processes.
This position requires deep expertise in handling large-scale data systems,
including Snowflake, Kafka, dbt, Airflow, and other modern ELT/Reverse ETL
technologies.
What you'll be doing:
- Design & Build Scalable Data Pipelines: Develop and maintain real-time and
batch data pipelines using tools like Kafka, dbt, and Airflow/Snowpark.
- Data Modeling: Implement and optimize data models in Snowflake to support
analytics, reporting, and downstream applications.
- Implement ELT Processes: Build efficient ELT pipelines for transforming raw
data into structured, queryable formats.
- Reverse ETL Solutions: Enable operational analytics by implementing Reverse
ETL workflows to sync processed data back into operational tools and
platforms.
- Data Integration: Work with APIs, third-party tools, and custom integrations
to ingest, process, and manage data flows across multiple systems.
- Automation: Leverage orchestration tools like Apache Airflow or Snowpark to
automate workflows and improve operational efficiency.
- Collaboration: Partner with Data Scientists, Analysts, and Product teams to
understand business requirements and deliver actionable data insights.
- Governance & Security: Implement and maintain data governance policies and
ensure compliance with data security best practices.
What you'll bring:
- Technical Expertise: Experience with Snowflake (design, optimization, and
  query performance tuning); hands-on experience with Apache Kafka for
  streaming data; proficiency in dbt for transforming data and creating
  reusable models; expertise in Apache Airflow or similar orchestration tools;
  knowledge of ELT and Reverse ETL principles.
- Programming: Strong proficiency in Python, Java, and SQL.
- Data Systems: Experience working with modern data ecosystems, including
cloud-based architectures (AWS, Azure, GCP).
- Data Modeling: Experience building and managing data warehouses and
dimensional modeling.
- Problem-Solving: Strong analytical and debugging skills to tackle complex
data engineering challenges.
- Collaboration: Excellent communication skills to collaborate with technical
and non-technical stakeholders.
Kong has different base pay ranges for different work locations within the
United States, which allows us to pay employees competitively and consistently
in different geographic markets. Compensation varies depending on a wide array
of factors, including but not limited to specific candidate location, role,
skill set and level of experience. Certain roles are eligible for additional
rewards, including sales incentives depending on the terms of the applicable
plan and role. Benefits may vary depending on location. The typical base pay
range for this role is CAD 123,025.00 - 147,677.50.
About Kong
Kong Inc., a leading developer of cloud API technologies, is on a mission to
enable companies around the world to become “API-first” and securely accelerate
AI adoption. Kong helps organizations globally — from startups to Fortune 500
enterprises — unleash developer productivity, build securely, and accelerate
time to market. For more information about Kong, please visit www.konghq.com
or follow us on X @thekonginc.