Quantiphi

Senior MLOps Engineer

Sorry, this job was removed at 09:39 p.m. (IST) on Thursday, Mar 27, 2025

In-Office
3 Locations


While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people and take pride in cultivating a culture built on transparency, diversity, integrity, learning, and growth.
If working in an environment that encourages you to innovate and excel, not just professionally but personally, interests you, you would enjoy your career with Quantiphi!

Role: Senior Platform Engineer (MLOps)

Experience Level: 3 to 6 Years 

Location: Mumbai/Bangalore (Hybrid)

 

Overview:

We are seeking an experienced Platform Engineer with expertise in MLOps/LLMOps and in operating distributed systems, particularly Kubernetes and Slurm, along with a strong background in managing Multi-GPU, Multi-Node Deep Learning job scheduling. The role requires proficiency in Linux (Ubuntu) systems, the ability to write intricate shell scripts, solid experience with configuration management tools, and a working understanding of deep learning workflows.

Roles and Responsibilities:

  • Design, deploy, and maintain distributed systems using Kubernetes and Slurm for optimal resource utilization and workload management.

  • Lead the configuration and optimization of Multi-GPU, Multi-Node Deep Learning job scheduling, ensuring efficient computation and data processing.

  • Collaborate with cross-functional teams to understand project requirements and translate them into technical solutions.

  • Develop and maintain complex shell scripts for various system automation tasks, enhancing efficiency and reducing manual intervention.

  • Monitor system performance, identify bottlenecks, and implement necessary adjustments to ensure high availability and reliability.

  • Troubleshoot and resolve technical issues related to the distributed system, job scheduling, and deep learning processes.

  • Stay updated with industry trends and emerging technologies in distributed systems, deep learning, and automation.
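
As a concrete illustration of the Multi-Node, Multi-GPU scheduling described above, a Slurm batch script for a distributed PyTorch training job might look like the sketch below. This is a generic example, not Quantiphi tooling; the resource counts, rendezvous port, and `train.py` entry point are all hypothetical placeholders.

```shell
#!/bin/bash
#SBATCH --job-name=ddp-train       # hypothetical job name
#SBATCH --nodes=2                  # multi-node allocation
#SBATCH --ntasks-per-node=1        # one launcher process per node
#SBATCH --gpus-per-node=8          # multi-GPU per node
#SBATCH --time=04:00:00

# srun starts one torchrun per node; torchrun spawns one worker per GPU
# and coordinates them through a rendezvous on the first allocated node.
srun torchrun \
    --nnodes=$SLURM_NNODES \
    --nproc_per_node=8 \
    --rdzv_backend=c10d \
    --rdzv_endpoint=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n1):29500 \
    train.py
```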

Skill Set Needed: 

  • Hands-on experience with MLOps platforms such as Azure ML (preferred), MLflow, Kubeflow, and AutoML.

  • Good to have: working knowledge of at least one ML framework, such as PyTorch or TensorFlow.

  • Strong Python skills, with experience in shell/Linux scripting.

  • Good understanding of logical networks. 

  • Proven experience in designing, deploying, and managing distributed systems, with a focus on Kubernetes and Slurm.

  • Sufficient understanding of AI model training and deployment, with a strong background in Multi-GPU, Multi-Node Deep Learning job scheduling and resource management.

  • Proficiency in Linux systems, particularly Ubuntu, and the ability to navigate and troubleshoot related issues.

  • Extensive experience creating complex shell scripts for automation and system orchestration.

  • Familiarity with continuous integration and deployment (CI/CD) processes.

  • Excellent problem-solving skills and the ability to diagnose and resolve technical issues promptly.

  • Strong communication and collaboration skills to work effectively within a cross-functional team.
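
To make the resource-management skills above concrete, here is a minimal illustrative sketch (not Quantiphi's actual tooling, and far simpler than what Slurm or Kubernetes do) of the kind of placement logic a GPU scheduler performs: greedily assigning multi-GPU jobs to nodes with enough free GPUs. The node names and job names are made up for the example.

```python
def place_jobs(jobs, nodes):
    """Greedy best-fit placement of GPU jobs onto cluster nodes.

    jobs: list of (job_name, gpus_needed) tuples.
    nodes: dict mapping node name -> free GPU count (mutated in place).
    Returns a dict mapping job_name -> node; jobs that do not fit are skipped.
    """
    placement = {}
    for name, gpus in sorted(jobs, key=lambda j: -j[1]):  # largest jobs first
        # best fit: the node with the fewest free GPUs that can still host the job
        candidates = [n for n, free in nodes.items() if free >= gpus]
        if not candidates:
            continue  # job stays queued
        best = min(candidates, key=lambda n: nodes[n])
        nodes[best] -= gpus
        placement[name] = best
    return placement

nodes = {"node-a": 8, "node-b": 4}
jobs = [("train-llm", 8), ("finetune", 4), ("eval", 2)]
print(place_jobs(jobs, nodes))
```

Placing the largest jobs first reduces fragmentation: an 8-GPU job is hardest to fit, so it gets first pick of the free capacity.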

Good to Have:

  • Prior experience with, or solid awareness of, the NVIDIA ecosystem (Triton Inference Server, CUDA), and experience working with on-prem NVIDIA GPU servers.

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

