About the Role: 100% REMOTE
We are looking for a seasoned Machine Learning Operations (MLOps) Architect to
build and optimize our ML inference platform. The role demands an individual with
significant expertise in Machine Learning engineering and infrastructure, with
an emphasis on building Machine Learning inference systems. Proven experience in
building and scaling ML inference platforms in a production environment is
crucial. This remote position calls for exceptional communication skills and a
knack for independently tackling complex challenges with innovative solutions.
What you will be doing:
- Architect and optimize our existing data infrastructure to support
cutting-edge machine learning and deep learning models.
- Collaborate closely with cross-functional teams to translate business
objectives into robust engineering solutions.
- Own the end-to-end development and operation of high-performance,
cost-effective inference systems for a diverse range of models, including
state-of-the-art LLMs.
- Provide technical leadership and mentorship to foster a high-performing
engineering team.
- Develop CI/CD workflows for ML models and data pipelines using tools like
Cloud Build, GitHub Actions, or Jenkins.
- Automate model training, validation, and deployment across development,
staging, and production environments.
- Monitor and maintain ML models in production using Vertex AI Model
Monitoring, logging (Cloud Logging), and performance metrics.
- Ensure reproducibility and traceability of experiments using ML metadata
tracking tools like Vertex AI Experiments or MLflow.
- Manage model versioning and rollbacks using Vertex AI Model Registry or
custom model management solutions.
- Collaborate with data scientists and software engineers to translate model
requirements into robust and scalable ML systems.
- Optimize model inference infrastructure for latency, throughput, and cost
  efficiency using GCP services such as Cloud Run, Google Kubernetes Engine (GKE),
  or custom serving frameworks.
- Implement data and model governance policies, including auditability,
security, and access control using IAM and Cloud DLP.
- Stay current with evolving GCP MLOps practices, tools, and frameworks to
continuously improve system reliability and automation.
Requirements:
- Proven track record in designing and implementing cost-effective and scalable
ML inference systems.
- Hands-on experience with leading deep learning frameworks and libraries such as
  TensorFlow, PyTorch, Hugging Face, and LangChain.
- Solid foundation in machine learning algorithms, natural language processing,
and statistical modeling.
- Strong grasp of fundamental computer science concepts including algorithms,
distributed systems, data structures, and database management.
- Ability to tackle complex challenges and devise effective solutions, applying
  critical thinking to approach problems from various angles and propose
  innovative solutions.
- Experience working effectively in a remote setting, with strong written and
  verbal communication skills, collaborating with team members and stakeholders
  to ensure a clear understanding of technical requirements and project goals.
- Expertise in public cloud services, particularly in GCP and Vertex AI.
Must have:
- Proven experience in building and scaling Agentic AI systems in a production
environment.
- In-depth understanding of LLM architectures, parameter scaling, and
deployment trade-offs.
- Technical degree: Bachelor's degree in Computer Science with at least 10 years
  of relevant industry experience, or
- A Master's degree in Computer Science with at least 8 years of relevant
  industry experience.
- A specialization in Machine Learning is preferred.
Travel
- Travel as needed per business requirements
Sponsorship
- This role is not eligible for sponsorship
- Candidates must be legally authorized to work in the US for any employer
$138,000 - $224,000 a year
About Rackspace Technology
We are the multicloud solutions experts. We combine our expertise with the
world’s leading technologies — across applications, data and security — to
deliver end-to-end solutions. We have a proven record of advising customers
based on their business challenges, designing solutions that scale, building and
managing those solutions, and optimizing returns into the future. Named a best
place to work, year after year according to Fortune, Forbes and Glassdoor, we
attract and develop world-class talent. Join us on our mission to embrace
technology, empower customers and deliver the future.
More on Rackspace Technology
Though we’re all different, Rackers thrive through our connection to a central
goal: to be a valued member of a winning team on an inspiring mission. We bring
our whole selves to work every day. And we embrace the notion that unique
perspectives fuel innovation and enable us to best serve our customers and
communities around the globe. We welcome you to apply today and want you to know
that we are committed to offering equal employment opportunity without regard to
age, color, disability, gender reassignment or identity or expression, genetic
information, marital or civil partner status, pregnancy or maternity status,
military or veteran status, nationality, ethnic or national origin, race,
religion or belief, sexual orientation, or any legally protected characteristic.
If you have a disability or special need that requires accommodation, please let
us know.