Job Description
Retail Media is transforming how advertisers connect with consumers through personalized and targeted campaigns across retailers' digital and physical touchpoints. Retail Media Measurement plays a pivotal role in ensuring the effectiveness of these campaigns, driving value for advertisers, retailers, and consumers alike.
This role focuses on designing, building, and scaling solutions that enable the accurate measurement of retail media campaigns across various channels. By providing actionable insights, it empowers stakeholders to optimize media investments, improve ROI, and enhance the overall customer experience.
Job Title: Senior Data Engineer
Job Summary
We are looking for a talented and motivated Senior Data Engineer to contribute to the design, development, and optimization of real-time and batch data processing pipelines for our retail media measurement solution. In this role, you will work with tools such as Python, Apache Spark, and streaming frameworks to process and analyze data, supporting near-real-time decision-making for critical business applications in the retail media space.
You will collaborate with cross-functional teams, including Data Scientists, Analysts, and Senior Engineers, to build robust and efficient data solutions. As a Senior Data Engineer, you will design and implement scalable data pipelines and work hands-on with both streaming and batch processing systems. Your contributions will help ensure the reliability and performance of our data infrastructure, driving impactful insights for the business. This role offers an excellent opportunity to deepen your expertise in modern data engineering practices while working with cutting-edge technologies.
What We Expect From You:
Experience:
7–9 years of experience as a Data Engineer.
Prior experience working with scalable architectures and distributed data processing systems.
Technical Expertise:
Strong programming skills in SQL and PySpark.
Proficiency in big data solutions such as Apache Spark and Hive.
Experience with big data workflow orchestrators like Argo Workflows.
Hands-on experience with cloud-based data stores like Redshift or BigQuery (preferred).
Familiarity with cloud platforms, preferably GCP or Azure.
Development Practices:
Strong programming skills in Python, with experience in API frameworks such as FastAPI.
Proficiency in unit testing and ensuring code quality.
Hands-on experience with version control tools like Git.
Optimization & Problem Solving:
Ability to analyze complex data pipelines, identify performance bottlenecks, and suggest optimization strategies.
Ability to work collaboratively with infrastructure teams to ensure a robust and scalable platform for data science workflows.
Collaboration & Communication:
Excellent problem-solving skills and the ability to work effectively in a team environment.
Strong communication skills to collaborate across teams and share technical insights.
Nice To Have:
Experience with microservices architecture, containerization using Docker, and orchestration tools like Kubernetes.
Exposure to MLOps practices or machine learning workflows using Spark.
Understanding of logging, monitoring, and alerting for production-grade big data pipelines.
This role is ideal for someone eager to grow their expertise in modern data engineering practices while contributing to impactful projects in a collaborative environment.
Job Details
Posted Date: November 27, 2025
Job Type: Technology
Location: India
Company: dunnhumby
Ready to Apply?
Don't miss this opportunity! Apply now and join our team.