Senior AI/ML Engineer -- Generative AI & Autonomous Agents
Introduction
The Extreme Networks AI Competence Center is seeking a Senior AI/ML Engineer to work on Generative AI and multi-agent systems. The role centers on LLM-based application development and AI-native systems for network design, optimization, security, and support.
Key Responsibilities
- Design and implement the business logic and modeling that governs agent behavior, including decision-making workflows, tool usage, and interaction policies.
- Develop and refine LLM-driven agents using prompt engineering, retrieval-augmented generation (RAG), fine-tuning, or function calling (see the sketch after this list).
- Understand and model the domain knowledge behind each agent: engage with network engineers, learn the operational context, and encode this understanding into effective agent behavior.
- Apply traditional ML modeling techniques (classification, regression, clustering, anomaly detection) to enrich agent capabilities.
- Contribute to the data engineering pipeline that feeds agents, including data extraction, transformation, and semantic chunking.
- Build modular, reusable AI components and integrate them with backend APIs, vector stores, and network telemetry pipelines.
- Collaborate with other AI engineers to create multi-agent workflows, including planning, refinement, execution, and escalation steps.
- Translate GenAI prototypes into production-grade, scalable, and testable services in collaboration with platform and engineering teams.
- Work with frontend developers to design agent experiences and contribute to UX interactions with human-in-the-loop feedback.
- Stay up to date on trends in LLM architectures, agent frameworks, evaluation strategies, and GenAI standards.
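To illustrate the kind of decision-making workflow and tool use described in the responsibilities above, here is a minimal, hypothetical sketch of an agent loop with function calling. It is not an Extreme Networks implementation: the call_llm placeholder, the tool registry, and the get_device_health telemetry tool are assumptions made purely for illustration.

```python
# Hypothetical sketch of an LLM agent loop with tool use (function calling).
# `call_llm` stands in for any chat-completion client; the tool shown is an
# illustrative stand-in for a network-telemetry lookup.
import json
from typing import Callable, Dict


def get_device_health(device_id: str) -> dict:
    """Placeholder tool: a real system would query a telemetry pipeline."""
    return {"device_id": device_id, "cpu": 0.42, "status": "healthy"}


TOOLS: Dict[str, Callable[..., dict]] = {"get_device_health": get_device_health}


def call_llm(messages: list) -> dict:
    """Placeholder for a model call returning either a tool request or a final answer."""
    if messages and messages[-1]["role"] == "tool":
        return {"final": f"Device report: {messages[-1]['content']}"}
    return {"tool": "get_device_health", "args": {"device_id": "sw-01"}}


def run_agent(user_query: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "final" in decision:                    # model chose to answer
            return decision["final"]
        tool = TOOLS[decision["tool"]]             # model requested a tool
        result = tool(**decision["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Escalating to a human operator."       # human-in-the-loop fallback


if __name__ == "__main__":
    print(run_agent("Is switch sw-01 healthy?"))
```

The escalation fallback in the loop mirrors the planning, refinement, execution, and escalation steps mentioned above; in practice each step would be handled by a dedicated agent or framework component.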
Qualifications
- Master's or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- 5 years of experience in ML/AI engineering, including 2 years working with transformer models or LLM systems.
- Strong knowledge of ML fundamentals, with hands-on experience building and deploying traditional ML models.
- Solid programming skills in Python, with experience integrating AI modules into cloud-native microservices.
- Experience with LLM frameworks (e.g., LangChain, AutoGen, Semantic Kernel, Haystack), and vector databases (e.g., FAISS, Weaviate, Pinecone).
- Familiarity with prompt engineering techniques for system design, memory management, instruction tuning, and tool-use chaining.
- Strong understanding of RAG architectures, including semantic chunking, metadata design, and hybrid retrieval (a minimal retrieval sketch follows this list).
- Hands-on experience with data preprocessing, ETL workflows, and embedding generation.
- Proven ability to work with cloud platforms like AWS or Azure for model deployment, data storage, and orchestration.
- Excellent collaboration and communication skills, including cross-functional work with product managers, network engineers, and backend teams.
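As a hedged illustration of the RAG-style retrieval and embedding-generation skills listed above, the sketch below chunks a toy corpus, embeds the chunks with sentence-transformers, and retrieves the nearest neighbour with FAISS. The model name, chunk size, and corpus are assumptions chosen for the example, not part of the role description.

```python
# Minimal RAG retrieval sketch: chunking + embedding generation + FAISS search.
# Assumes `sentence-transformers` and `faiss-cpu` are installed; the corpus,
# model name, and chunk size are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer


def chunk(text: str, max_words: int = 50) -> list:
    """Naive word-window chunking; production systems would chunk semantically."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


corpus = [
    "BGP sessions flap when the hold timer expires before a keepalive arrives.",
    "High CPU on an access switch is often caused by broadcast storms.",
]
chunks = [c for doc in corpus for c in chunk(doc)]

model = SentenceTransformer("all-MiniLM-L6-v2")        # small embedding model
embeddings = model.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])         # inner product = cosine on normalized vectors
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["Why is the switch CPU spiking?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 1)
print(chunks[ids[0][0]], scores[0][0])
```

In a production pipeline the flat index would typically be replaced by a managed vector database and combined with keyword search for hybrid retrieval, as the qualification above implies.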
Nice To Have
- Experience with LLMOps tools, open-source agent frameworks, or orchestration libraries.
- Familiarity with Docker, Docker Compose, and container-based development environments.
- Background in enterprise networking, SD-WAN, or network observability tools.
- Contributions to open-source AI or GenAI libraries.
We encourage people from underrepresented groups to apply. Come Advance with us! In keeping with our values, no employee or applicant will face discrimination/harassment based on: race, color, ancestry, national origin, religion, age, gender, marital/domestic partner status, sexual orientation, gender identity, disability status, or veteran status. Above and beyond discrimination/harassment based on "protected categories," Extreme Networks also strives to prevent other, subtler forms of inappropriate behavior (e.g., stereotyping) from ever gaining a foothold in our organization. Whether blatant or hidden, barriers to success have no place at Extreme Networks.