
GE Healthcare

Senior Forward Deployed AI Engineer

In-Office
Bengaluru, Bengaluru Urban, Karnataka, IND
Senior level
Job Description Summary

We are seeking a Senior Forward Deployed AI Engineer who can partner directly with customers to define problems, architect GenAI systems, and ship production-grade solutions. You’ll own end-to-end delivery: from problem decomposition and data pipelines to LLM workflow design, RAG, Knowledge Graph, and Graph RAG architectures, agentic orchestration, LLM evaluation, deployment, and ongoing optimization. You’ll also integrate and extend the GEHC STO AI Fabric to accelerate real-world impact.
GE Healthcare is a leading global medical technology and digital solutions innovator. Our mission is to improve lives in the moments that matter. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.

Job Description

Roles and Responsibilities:

  • Own the end-to-end GenAI strategy within customer environments—from discovery and problem decomposition to production deployment and adoption.

  • Define solution architectures across RAG, agentic AI, knowledge bases, and multi-turn conversational systems with memory.

  • Establish robust evaluation frameworks (e.g., groundedness, hallucination rate, factuality, latency, cost, and safety) and drive iterative improvement.

  • Design and deploy large-scale LLM workflows, RAG systems (vector/hybrid, reranking), Graph RAG (entity/relation extraction, query planning), and autonomous/agentic systems (task decomposition, tool use, planning/feedback loops).

  • Implement conversation memory (short-term context buffers, long-term vector/graph stores, entity memories) to support multi-turn experiences.

  • Build conversational chatbots and search strategies (BM25 + dense retrieval, hybrid retrieval, query rewriting/expansion) for high-precision answers.

  • Apply chunking strategies (semantic/hierarchical, overlap/sliding window, summarization-based chunking) to optimize retrieval quality and cost.

  • Leverage GEHC STO AI Fabric components to build robust data pipelines, define/align ontologies, and integrate ML/LLM models into operational tools and workflows.

  • Develop and register components (transforms, ontology services, pipelines), ensuring traceability, observability, and reusability within the AI Fabric.

  • Contribute field learnings, patterns, and edge cases back to the AI Fabric product teams to mature the platform and accelerate customer outcomes.

  • Work closely with customer executives, SMEs, and end users to understand mission-critical needs and translate them into technical solutions and delivery plans.

  • Communicate tradeoffs (quality, latency, cost, safety) and guide stakeholders to data-driven decisions.

  • Implement CI/CD, containerization (Docker), model packaging, deployment, and monitoring across environments (dev/stage/prod).

  • Build LLMOps observability: tracing, prompt/version management, cost/latency dashboards, eval pipelines, A/B testing, safety guardrails, and human-in-the-loop review where required.

  • Operate within AWS (SageMaker for training/inference, Bedrock for managed foundation models); manage infrastructure-as-code and environment hardening.

  • Build scalable pipelines using Databricks, Spark/PySpark, and SQL to ingest, clean, and feature/signal engineer multi-modal data.

  • Architect and manage knowledge bases (document stores, vector DBs, knowledge graphs) and retrieval services for production traffic.

  • Maintain a bias for shipping high-quality, maintainable software over chasing academic benchmarks.

  • Write clean, testable code; enforce code quality, documentation, and reusability.

  • Mentor peers; elevate engineering best practices for GenAI at scale.
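Several of the retrieval responsibilities above (sliding-window chunking, BM25 plus dense retrieval, reciprocal-rank fusion) can be sketched in plain Python. This is an illustrative sketch, not GEHC code: the chunk sizes, BM25 constants, and fusion constant `k` are conventional defaults, and a real system would use a proper tokenizer and an embedding model for the dense leg.

```python
import math
import re
from collections import Counter

def chunk_sliding(text, size=40, overlap=10):
    """Sliding-window chunking: overlapping word windows over a document."""
    words = text.split()
    chunks, step = [], size - overlap
    for i in range(0, max(len(words) - overlap, 1), step):
        chunks.append(" ".join(words[i:i + size]))
    return chunks

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with textbook BM25 (sparse leg)."""
    tokenized = [re.findall(r"\w+", d.lower()) for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    scores = []
    for toks in tokenized:
        tf, score = Counter(toks), 0.0
        for term in re.findall(r"\w+", query.lower()):
            df = sum(1 for t in tokenized if term in t)
            if df == 0:
                continue
            idf = math.log(1 + (len(docs) - df + 0.5) / (df + 0.5))
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(score)
    return scores

def rrf_fuse(rankings, k=60):
    """Reciprocal-rank fusion: merge ranked lists (e.g. BM25 + dense) into one."""
    fused = Counter()
    for ranking in rankings:
        for rank, idx in enumerate(ranking):
            fused[idx] += 1.0 / (k + rank + 1)
    return [idx for idx, _ in fused.most_common()]
```

In a production stack, the second ranking passed to `rrf_fuse` would come from a vector store over the chunk embeddings, and a cross-encoder reranker would re-score the fused top-k before generation.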

Qualifications:

  • Bachelor's degree in Computer Science or a STEM major (Science, Technology, Engineering, or Math) with a minimum of 5 years in AI/ML engineering, data science, or applied research; 1.5+ years building production LLM/GenAI systems.

  • Excellent coding proficiency in Python and either Java or TypeScript; solid software engineering fundamentals (testing, modularity, design patterns).

  • Deep understanding of LLMs and agentic AI (planning, tool use, function calling, multi-agent collaboration).

  • Hands-on with RAG (vector/hybrid retrieval, reranking, query planning), Graph RAG (entities/relations, graph queries), knowledge bases, and chunking strategies.

  • NLP expertise (retrieval, generation, summarization, information extraction); working knowledge of computer vision and GANs for augmentation or synthesis.

  • Proven experience with conversational AI (multi-turn with memory, grounding, disambiguation) and search strategies (BM25, hybrid, semantic reranking).

  • Databricks (Delta, Unity Catalog, Jobs/Workflows), Spark/PySpark, and SQL at scale.

  • MLOps, LLMOps, Docker, Kubernetes, CI/CD, and AWS (SageMaker, Bedrock) for model training, evaluation, and deployment.

  • Practical experience with LangChain, LangGraph, CrewAI, n8n (or similar orchestrators) and using OpenAI/Anthropic models in production.

  • Good customer-facing communication skills and the ability to operate on-site with customers when needed.

  • Experience in regulated domains (healthcare, life sciences) and ontology-driven systems.

  • Knowledge graphs (e.g., Neo4j), vector stores, and hybrid retrieval with cross-encoder reranking.

  • Prompt engineering and LLM evaluation frameworks; safety guardrails (PII/PHI redaction, toxicity filters).

  • Familiarity with Kubernetes (EKS/ECS), feature stores, experiment tracking, and model registries.

  • Performance optimization for long-context models, batching, streaming, and cost/latency tuning.
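As one concrete example of the safety-guardrail qualification above, PII/PHI redaction can begin as pattern substitution applied before and after model calls. The patterns below are illustrative only; a regulated healthcare deployment would rely on a validated detector (for example, Microsoft Presidio or AWS Comprehend PII detection) rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only: real PHI/PII coverage (names, MRNs, dates of
# birth) requires a trained detector, not regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-\s]?\d{3}[-\s]?\d{4}\b"),
}

def redact_pii(text):
    """Replace each detected PII span with a typed placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Hooked in as a pre- and post-processing step around the LLM call, this keeps raw identifiers out of prompts, traces, and logs.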

Inclusion and Diversity

GE Healthcare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.

We expect all employees to live and breathe our behaviors: act with humility and build trust; lead with transparency; deliver with focus; and drive ownership – always with unyielding integrity.

Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you’d expect from an organization with global strength and scale, and you’ll be surrounded by career opportunities in a culture that fosters care, collaboration and support.


Additional Information

Relocation Assistance Provided: No

Top Skills

Anthropic
AWS Bedrock
AWS SageMaker
BM25
CI/CD
CrewAI
Cross-Encoder Reranking
Databricks
Dense Retrieval
Docker
Experiment Tracking
Feature Stores
GANs
Graph RAG
Hybrid Retrieval
Java
Knowledge Graphs
Kubernetes
LangChain
LangGraph
LLMOps
MLOps
Model Registries
n8n
Neo4j
OpenAI
PySpark
Python
RAG
Spark
SQL
TypeScript
Vector Databases


