Tenstorrent is leading the industry in cutting-edge AI technology,
revolutionizing performance expectations, ease of use, and cost efficiency. With
AI redefining the computing paradigm, solutions must evolve to unify innovations
in software models, compilers, platforms, networking, and semiconductors. Our
diverse team of technologists has developed a high-performance RISC-V CPU from
scratch and shares a passion for AI and a deep desire to build the best AI
platform possible. We value collaboration, curiosity, and a commitment to
solving hard problems. We are growing our team and looking for contributors of
all seniorities.
Join the team revolutionizing AI computing at Tenstorrent. In this role you will
lead development on TT-Forge, our MLIR-based compiler, and manage a team focused
on scaling graph transformations, lowering passes, and kernel-level
optimizations. You’ll help shape the future of AI computing through compiler
technology that is fast, flexible, and built for real-world models.
This role is hybrid, based out of Toronto, ON.
We welcome candidates at various experience levels for this role. During the
interview process, candidates will be assessed for the appropriate level, and
offers will align with that level, which may differ from the one in this
posting.
Who You Are
- Extensive experience building compilers or similar systems, with strong
fluency in C++ and Python.
- Strong knowledge of MLIR, LLVM, or related infrastructure, with hands-on
work in dialect design or optimization passes.
- Strong understanding of modern AI frameworks such as PyTorch, TensorFlow,
or JAX, including how models are transformed and executed.
- A background in working across teams to ship reliable, scalable tools in
fast-paced engineering environments.
What We Need
- Leadership to drive compiler development across multiple layers of the stack,
including custom dialects, transformation passes, and runtime integration.
- Alignment with hardware, software, and ML teams to ensure the compiler
supports realistic performance and deployment needs.
- Technical ownership of roadmap priorities, planning, and mentorship for a
high-performing engineering team.
- Insight to identify bottlenecks, debug performance issues, and improve
developer workflows across the stack.
What You Will Learn
- How to build compiler infrastructure that supports both training and
inference at scale across custom chip architectures.
- New approaches for human-in-the-loop optimization using TT-Explorer and other
internal tooling.
- Best practices for evolving MLIR dialects like TTIR, TTNN, and TTKernel to
support next-generation AI workloads.
- How Tenstorrent delivers open, performant, and developer-friendly AI
infrastructure that scales with industry demands.
Compensation for all engineers at Tenstorrent ranges from $100k to $500k,
including base and variable compensation targets. Experience, skills, education,
background, and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and
we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to
access U.S. export-controlled technology. Due to U.S. export laws, including
those codified in the U.S. Export Administration Regulations (EAR), the Company
is required to ensure compliance with these laws when transferring technology to
nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2).
These requirements apply to persons located in the U.S. and all countries
outside the U.S. As the position offered will have direct and/or indirect
access to information, systems, or technologies subject to these laws, the offer
may be contingent upon your citizenship/permanent residency status or ability to
obtain prior license approval from the U.S. Commerce Department or applicable
federal agency. If employment is not possible due to U.S. export laws, any
offer of employment will be rescinded.