Job Description
RED is now looking for a hybrid Data Engineer + Data Scientist who can build scalable data pipelines and deliver advanced analytics/ML solutions. You'll own end-to-end data workflows using Spark, Databricks, Glue and Snowflake.
Job Details:
Duration – 6 Months + Extension
Location – Remote, India
Capacity – 5 Days/Week, 8 Hours/day – CET Hours
Start – April 2026
Key Responsibilities
Build and maintain Spark/Databricks data pipelines
Develop ETL/ELT workflows (Glue / Databricks)
Model and optimise data in Snowflake
Build and deploy ML models (Python, scikit-learn, MLflow)
Perform feature engineering, model validation, and monitoring
Deliver analytical insights and work closely with stakeholders
Ensure data quality, governance and documentation
Required Skills
Strong PySpark / Spark
Hands-on Databricks experience
Advanced SQL (Snowflake preferred)
Experience with ML modelling + deployment (MLflow)
Strong Python and data engineering fundamentals
Experience with large-scale data (Delta Lake / Lakehouse)
Airflow / CI/CD experience
Data quality tools (e.g., Great Expectations)
Streaming (Kafka/Kinesis)
Dashboarding (Power BI/Tableau)
If you would like to learn more about this role, please send me your updated CV, or share it with someone who is currently looking.
Thanks,
Nisha
Ready to Apply?
Don't miss this opportunity! Apply now and join our team.
Job Details
Posted Date:
March 12, 2026
Job Type:
Technology
Location:
India
Company:
RED Global