Job Description
About the Role:
Join our data engineering team to build and maintain large-scale data pipelines that power analytics across our products. In this role, you will process large volumes of data to deliver actionable insights for product teams and executives.
What You'll Do:
● Develop Apache Airflow DAGs and PySpark ETL pipelines for high-volume data processing.
● Write optimized SQL queries for data transformation and aggregation.
● Build data products serving business processes, executive KPIs, and product analytics.
● Implement data quality and monitoring solutions.
● Optimize pipeline performance and troubleshoot production issues.
● Collaborate with cross-functional teams.
● Monitor production pipelines and provide keep-the-lights-on (KLO) support.
Qualifications:
Required Skills
● 10+ years of data engineering experience, with a minimum of 7 years on the big data stack.
● Expertise in Python and PySpark (DataFrame API, Spark SQL).
● Advanced SQL skills (window functions, complex queries).
● Production experience with Apache Airflow.
● Solid background in data warehousing and dimensional modelling.
Preferred Skills
● Experience with Trino and Apache Iceberg.
● Knowledge of Tableau CRM/Cloud and Salesforce platforms.
● Experience with AWS or other cloud data services.
Job Details
Posted Date: February 27, 2026
Job Type: Technology
Location: India
Company: Relanto
Ready to Apply?
Don't miss this opportunity! Apply now and join our team.