Job description
Job Title: Senior Machine Learning Engineer - Responsible AI
Job Type: Permanent
Job Location: Waterford, Ireland
Company Overview:
Our client is the world's leading provider of enterprise open-source software solutions, utilizing a community-driven approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. With a presence in over 40 countries, our associates have the flexibility to choose a work environment that best suits their needs, ranging from in-office to fully remote options.
The Role:
In this role, you will serve as a technical expert in the areas of explainable AI and fairness, focusing on the responsible AI features of the open-source Open Data Hub project. Your primary responsibilities will involve active participation in key open-source communities, including KServe, TrustyAI, Kubeflow, and others.
You will work as an integral member of a dynamic development team, contributing to the rapid design, security, development, testing, and deployment of model-serving capabilities, trustworthy AI solutions, and model registry functionalities.
Job Responsibilities:
- Be a leader in Explainable AI, Fairness & Bias related open-source communities to help build an active ML open-source ecosystem for Open Data Hub and OpenShift AI.
- Contribute to developing and integrating model fairness and bias metrics and explainable AI algorithms into the OpenShift AI product.
- Act as an Explainable AI SME within Red Hat by supporting customer-facing discussions, presenting at technical conferences, and evangelizing OpenShift AI within the internal communities of practice.
- Research and design new features for open-source MLOps communities such as KServe and TrustyAI.
- Collaborate with our product management and customer engineering teams to identify and expand product functionalities.
- Mentor, influence, and coach a team of distributed engineers.
Requirements:
- Strong research and development experience in Explainable Artificial Intelligence (XAI), with a focus on Large Language Models (LLMs), model-agnostic interpretability methods, bias detection and mitigation, and metrics for assessing fairness, transparency, and interpretability in complex AI models.
- Recent hands-on experience deploying and maintaining machine learning models in production environments, with a focus on explainable AI.
- Technical leadership acumen
- Passion for writing and maintaining reliable code
- Hands-on experience with Kubernetes
- Comfortable working in a distributed remote team environment
- Excellent written and verbal communication skills, including a good command of English
Qualifications:
- Bachelor's degree in statistics, mathematics, computer science, or a related quantitative field, or equivalent expertise; a Master's or PhD in Machine Learning or NLP is a big plus.
- Experience in engineering, consulting, or another field related to model serving and monitoring, model registry, explainable AI, or deep neural networks, in a customer environment or supporting a data science team.
- Highly experienced in Kubernetes and/or OpenShift.
- Advanced knowledge of and experience with Python, Java, or Go.
- Familiarity with popular Python machine learning libraries such as PyTorch, TensorFlow, scikit-learn, and Hugging Face.
If you are interested in this role or would like to discuss it further, please contact Nidhi at +353 1 645 5244 or email [email protected].