The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the
software development kit used to accelerate deep learning and GenAI workloads on
Amazon’s custom machine learning accelerators, Inferentia and Trainium.
The Product: The AWS machine learning accelerators (Inferentia and Trainium)
offer unparalleled ML inference and training performance. They are enabled by a
state-of-the-art software stack - the AWS Neuron Software Development Kit (SDK).
This SDK comprises an ML compiler, runtime, and application framework, which
seamlessly integrate into popular ML frameworks like PyTorch. AWS Neuron,
running on Inferentia and Trainium, is trusted and used by leading customers
such as Snap, Autodesk, and Amazon Alexa.
The Team: Annapurna Labs was a startup acquired by AWS in 2015 and is now fully
integrated. If AWS is an infrastructure company, think of Annapurna Labs as the
infrastructure provider of AWS. Our org covers multiple disciplines
including silicon engineering, hardware design and verification, software, and
operations. AWS Nitro, ENA, EFA, Graviton and F1 EC2 Instances, AWS Neuron,
Inferentia and Trainium ML Accelerators, and in storage with scalable NVMe, are
some of the products we have delivered over the last few years.
Within this ecosystem, the Neuron Compiler team is developing a deep learning
compiler stack that takes state-of-the-art LLM, vision, and multi-modal models
created in frameworks such as TensorFlow, PyTorch, and JAX, and makes them run
performantly on our accelerators. The team comprises some of the brightest
minds in the engineering, research, and product communities, focused on the
ambitious goal of creating a toolchain that will provide a quantum leap in
performance.
The Neuron team is hiring systems and compiler engineers to solve our
customers' toughest problems. Specifically, the performance team in Toronto is
focused on analysis and optimization of system-level performance of machine
learning models on AWS ML accelerators. The team conducts in-depth profiling and
works across multiple layers of the technology stack - from frameworks and
compilers to runtime and collectives - to meet and exceed customer requirements
while maintaining a competitive edge in the market. As part of the Neuron
Compiler organization, the team not only identifies and implements performance
optimizations but also works to crystallize these improvements into the
compiler, automating optimizations for broader customer benefit.
This is an opportunity to work on products at the intersection of
machine-learning, high-performance computing, and distributed architectures. You
will architect and implement business-critical features, publish research, and
mentor a brilliant team of experienced engineers. We operate in spaces that are
very large, yet our teams remain small and agile. There is no blueprint. We're
inventing. We're experimenting. It is a unique learning culture. The team
works closely with customers on their model enablement, providing direct support
and optimization expertise to ensure their machine learning workloads achieve
optimal performance on AWS ML accelerators.
Explore the product and our history!
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://aws.amazon.com/machine-learning/neuron/
https://github.com/aws/aws-neuron-sdk
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success
Key job responsibilities
Our performance engineers collaborate across compiler, runtime, and framework
teams to optimize machine learning workloads for our global customer base.
Working at the intersection of machine learning, high-performance computing,
and distributed systems, you'll bring a passion for performance analysis and
optimization. In this role, you will:
- Analyze and optimize system-level performance of machine learning models
across the entire technology stack, from frameworks to runtime
- Conduct detailed performance analysis and profiling of ML workloads,
identifying and resolving bottlenecks in large-scale ML systems
- Work directly with customers to enable and optimize their ML models on AWS
accelerators, understanding their specific requirements and use cases
- Design and implement compiler optimizations, transforming manual performance
improvements into automated compiler passes
- Collaborate across teams to develop innovative optimization techniques that
enhance AWS Neuron SDK's performance capabilities
- Work in a startup-like development environment, where you’re always working on
the most important problems
About the team
1. Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the
qualifications and skills listed in the job description, we encourage candidates
to apply. If your career is just starting, hasn’t followed a traditional path,
or includes alternative experiences, don’t let it stop you from applying.
2. Why AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted
cloud platform. We pioneered cloud computing and never stopped innovating —
that’s why customers from the most successful startups to Global 500 companies
trust our robust suite of products and services to power their businesses.
3. Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our
culture of inclusion. We have ten employee-led affinity groups, reaching 40,000
employees in over 190 chapters globally. We have innovative benefit offerings,
and host annual and ongoing learning experiences, including our Conversations on
Race and Ethnicity (CORE) and AmazeCon conferences. Amazon’s culture of
inclusion is reinforced within our 16 Leadership Principles, which remind team
members to seek diverse perspectives, learn and be curious, and earn trust.
4. Work/Life Balance
Our team puts a high value on work-life balance. It isn’t about how many hours
you spend at home or at work; it’s about the flow you establish that brings
energy to both parts of your life. We believe striking the right balance between
your personal and professional life is critical to life-long happiness and
fulfillment. We offer flexibility in working hours and encourage you to find
your own balance between your work and personal lives.
5. Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of
experience levels and tenures, and we’re building an environment that celebrates
knowledge sharing and mentorship. We care about your career growth and strive to
assign projects based on what will help each team member develop into a
better-rounded professional and enable them to take on more complex tasks in the
future.
Basic Qualifications:
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns,
reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language
Preferred Qualifications:
- 3+ years of full software development life cycle experience, including coding
standards, code reviews, source control management, build processes, testing,
and operations
- Bachelor's degree in computer science or equivalent
- Experience in compiler design for CPUs, GPUs, vector engines, or ML
accelerators
- Experience with system-level performance analysis and optimization
- Experience with LLVM and/or MLIR
- Experience with the following technologies: PyTorch, OpenXLA, StableHLO, JAX,
TVM, and deep learning models and algorithms
Amazon is an equal opportunity employer and does not discriminate on the basis
of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our
customers. If you have a disability and need a workplace accommodation or
adjustment during the application and hiring process, including support for the
interview or onboarding process, please visit
https://amazon.jobs/content/en/how-we-hire/accommodations for more
information. If the country/region you’re applying in isn’t listed, please
contact your Recruiting Partner.