
dunnhumby

Senior Data Engineer

Posted 24 Days Ago
In-Office
Gurgaon, Gurugram, Haryana
Senior level

dunnhumby is the global leader in Customer Data Science, partnering with the world’s most ambitious retailers and brands to put the customer at the heart of every decision. We combine deep insight, advanced technology, and close collaboration to help our clients grow, innovate, and deliver measurable value for their customers. 

dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Nestlé, Unilever and Metro.

 

Retail Media is transforming how advertisers connect with consumers through personalized and targeted campaigns across retailers' digital and physical touchpoints. Retail Media Measurement plays a pivotal role in ensuring the effectiveness of these campaigns, driving value for advertisers, retailers, and consumers alike.

This role focuses on designing, building, and scaling solutions that enable the accurate measurement of retail media campaigns across various channels. By providing actionable insights, it empowers stakeholders to optimize media investments, improve ROI, and enhance the overall customer experience.

Job Title: Senior Data Engineer

Job Summary

We are looking for a talented and motivated Senior Data Engineer to contribute to the design, development, and optimization of real-time and batch data processing pipelines for our retail media measurement solution. In this role, you will work with tools such as Python, Apache Spark, and streaming frameworks to process and analyze data, supporting near-real-time decision-making for critical business applications in the retail media space.

You will collaborate with cross-functional teams, including Data Scientists, Analysts, and fellow Engineers, to build robust and efficient data solutions. As a Senior Data Engineer, you will focus on implementing scalable data pipelines while deepening your hands-on experience with streaming and batch processing systems. Your contributions will help ensure the reliability and performance of our data infrastructure, driving impactful insights for the business. This role offers an excellent opportunity to grow your expertise in modern data engineering practices while working on cutting-edge technologies.
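To give a flavour of the work described above, here is a minimal pure-Python sketch of the kind of batch aggregation a measurement pipeline performs. The field names ("campaign_id", "attributed_sales") are illustrative assumptions, not dunnhumby's actual schema, and in production this logic would typically run as a PySpark job over much larger datasets.

```python
from collections import defaultdict

def attributed_sales_by_campaign(events):
    """Sum attributed sales per campaign from purchase-attribution events."""
    totals = defaultdict(float)
    for event in events:
        totals[event["campaign_id"]] += event["attributed_sales"]
    return dict(totals)

# Illustrative events; a real pipeline would read these from a data store.
events = [
    {"campaign_id": "c1", "attributed_sales": 12.50},
    {"campaign_id": "c2", "attributed_sales": 4.00},
    {"campaign_id": "c1", "attributed_sales": 7.50},
]
print(attributed_sales_by_campaign(events))  # {'c1': 20.0, 'c2': 4.0}
```

The same shape of computation maps directly onto a Spark `groupBy("campaign_id").sum("attributed_sales")` when scaled out.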

What We Expect From You:

  • Experience:
    • 7–9 years of experience as a Data Engineer.
    • Prior experience working with scalable architecture and distributed data processing systems.
  • Technical Expertise:
    • Strong programming skills in SQL and PySpark.
    • Proficiency in big data solutions such as Apache Spark and Hive.
    • Experience with big data workflow orchestrators like Argo Workflows.
    • Hands-on experience with cloud-based data stores like Redshift or BigQuery (preferred).
    • Familiarity with cloud platforms, preferably GCP or Azure.
  • Development Practices:
    • Strong programming skills in Python, with experience in frameworks like FastAPI or similar API frameworks.
    • Proficiency in unit testing and ensuring code quality.
    • Hands-on experience with version control tools like Git.
  • Optimization & Problem Solving:
    • Ability to analyze complex data pipelines, identify performance bottlenecks, and suggest optimization strategies.
    • Work collaboratively with infrastructure teams to ensure a robust and scalable platform for data science workflows.
  • Collaboration & Communication:
    • Excellent problem-solving skills and the ability to work effectively in a team environment.
    • Strong communication skills to collaborate across teams and share technical insights.
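As a sketch of the unit-testing practice called for above: keep transformations as small, pure functions free of I/O so they can be tested in isolation. The ROI calculation below is a hypothetical example, not dunnhumby's actual measurement logic.

```python
import unittest

def roi(attributed_sales: float, ad_spend: float) -> float:
    """Return return-on-investment as a fraction; rejects non-positive spend."""
    if ad_spend <= 0:
        raise ValueError("ad_spend must be positive")
    return (attributed_sales - ad_spend) / ad_spend

class RoiTests(unittest.TestCase):
    def test_positive_return(self):
        # 150 in attributed sales on 100 of spend is a 50% return.
        self.assertAlmostEqual(roi(150.0, 100.0), 0.5)

    def test_zero_spend_rejected(self):
        with self.assertRaises(ValueError):
            roi(150.0, 0.0)
```

Run with `python -m unittest` (or pytest); wiring checks like these into CI is the usual way to "ensure code quality" as the bullet above puts it.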

Nice To Have:

  • Experience with microservices architecture, containerization using Docker, and orchestration tools like Kubernetes.
  • Exposure to MLOps practices or machine learning workflows using Spark.
  • Understanding of logging, monitoring, and alerting for production-grade big data pipelines.
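To illustrate the logging/monitoring/alerting point above, here is a minimal sketch of a data-quality check: compare a run's record count against a hypothetical expected minimum and log a warning when it falls short. The function name and threshold are assumptions for illustration; production pipelines would typically export such signals to a monitoring system (e.g. Prometheus or a cloud-native equivalent) rather than rely on logs alone.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def check_row_count(run_id: str, rows_processed: int, min_expected: int) -> bool:
    """Return True if the run looks healthy; log a warning otherwise."""
    if rows_processed < min_expected:
        log.warning("run %s processed %d rows, expected >= %d",
                    run_id, rows_processed, min_expected)
        return False
    log.info("run %s healthy: %d rows processed", run_id, rows_processed)
    return True
```

A failed check would normally trigger an alert and block downstream consumers rather than let a partial load propagate.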

This role is ideal for someone eager to grow their expertise in modern data engineering practices while contributing to impactful projects in a collaborative environment.



What you can expect from us

We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect.  Plus, thoughtful perks, like flexible working hours and your birthday off.

You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn.

And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you.

Our approach to Flexible Working

At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work.

We believe that you will do your best at work if you have a healthy work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise it with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information, please see our Privacy Notice.

Top Skills

Spark
Argo Workflows
Azure
BigQuery
Docker
FastAPI
GCP
Git
Hive
Kubernetes
PySpark
Python
Redshift
SQL


