About Alexa Translations
Alexa Translations provides translation services in the legal, financial, and
securities sectors by leveraging proprietary A.I. technology and a team of
highly specialized linguistic experts. Unmatched in speed and quality, our
machine translation engine is best-in-class and specifically trained for the
French-Canadian market. If that wasn’t enough, our technology is backed by two
decades of award-winning client service.
About the Role
We are looking for a Generative AI Engineer to develop our next-generation
intelligent translation and translation-related service engine using Generative
AI (GenAI) and Large Language Model (LLM) technologies. You will report to the
GenAI team lead, develop and implement state-of-the-art algorithms through
rapid prototyping, and collaborate with the software team to deploy models. We
expect our Generative AI Engineer to stay current with cutting-edge
developments, apply LLMs and GenAI to machine translation following industry
best practices, and bring a solid background and hands-on experience in deep
learning, machine learning, natural language processing, and big data.
Responsibilities
- Research and implement state-of-the-art LLM techniques, including continued
  pre-training, instruction fine-tuning, preference alignment, prompt
  engineering, and LLM deployment, as well as GenAI methods more broadly.
- Work closely with machine learning engineers and data scientists to design,
build, and test models.
- Contribute to technological innovation by staying current with cutting-edge
  GenAI and LLM advances from industry and academia.
- Develop efficient and scalable algorithms for training and inference of
generative models, leveraging deep learning frameworks such as TensorFlow or
PyTorch and optimizing performance on diverse hardware platforms.
- Train and evaluate generative models using appropriate metrics and
benchmarks, fine-tuning model parameters, architectures, and hyperparameters
to optimize performance, stability, and generalization.
- Work closely with software and DevOps engineers to deploy GenAI models.
- Document code, algorithms, and experimental results, following best practices
for reproducibility, version control, and software engineering, and
contribute to internal knowledge sharing and continuous improvement
initiatives.
Requirements
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence,
Machine Learning, or related field.
- 1+ years of industry experience developing GenAI and LLM applications is
preferred.
- 2+ years of professional experience as a software engineer is required.
- Proficiency in Python programming and software development practices, with
experience in building and maintaining scalable, production-grade software
systems.
- Working knowledge and a project-based track record in all of the following:
  prompt tuning, retrieval-augmented generation (RAG), and in-context learning
  (ICL).
- Working knowledge and a project-based track record in at least one of the
  following is a plus: continued pre-training, instruction fine-tuning, or LLM
  agents.
- Strong problem-solving skills, attention to detail, and the ability to work
independently and collaboratively in a fast-paced environment.
- Hands-on experience with Hugging Face APIs or Amazon Bedrock.
- Expert Python skills, including libraries such as PyTorch, TensorFlow, and
  Pandas.
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Self-driven and self-motivated, with excellent time management skills.
- Excellent communication skills, with the ability to convey complex technical
concepts clearly and effectively to both technical and non-technical
stakeholders.
- Familiarity with GPU programming and optimization techniques for accelerating
deep learning computations.
- Ability to adapt to shifting priorities without compromising deadlines and
momentum.
- Prior experience in generative AI research, projects, or internships, with
contributions to open-source projects or publications in relevant conferences
or journals.