Job Description
Schindler is a global leader in elevator, escalator, and moving walkway solutions, operating in over 100 countries. Every day, our systems move more than a billion people, helping shape the flow of urban life. At Schindler, we don't just build mobility solutions — we build careers, innovation, and impact.
Join Us as a Data Engineer (m/f/d)
Location: Milan, Lombardy, Italy (Hybrid: 1–2 days/week working from home)
Company: Schindler Group
Ready to Lead the Future of Mobility?
We are seeking a Senior Data Engineer to design and build data pipelines, ETL workflows, and data platforms that power large-scale AI and IoT applications. You will collaborate with Data Science teams and MLOps specialists to enable seamless data-driven intelligence across global applications.
Why Schindler?
Shape Data Foundations: Architect and optimize real-time/streaming data platforms for next-generation AI and IoT systems.
Hands-On Impact: Work on complex data engineering challenges at scale.
Collaborate with Experts: Partner with top talent in AI, MLOps, and cloud technologies.
Continuous Innovation: Stay ahead with emerging trends in big data, cloud-native solutions, and automation.
Your Responsibilities
Design, build, and maintain scalable real-time and batch data pipelines using PySpark and SQL on Databricks.
Develop and optimize ETL/ELT workflows on Delta Lake, ensuring data quality and reliability.
Manage Databricks environments autonomously: clusters, Jobs, Workflows, and Unity Catalog.
Collaborate closely with Data Scientists to deliver clean, well-modeled data for ML and analytics use cases.
Contribute to data modeling and architecture decisions within the team.
Monitor pipeline performance and drive continuous optimization on Azure cloud infrastructure.
What You Bring
Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
5+ years of experience in data engineering roles.
Strong hands-on expertise in PySpark and SQL — production-grade, not just academic.
Solid experience with Databricks: Delta Lake, Jobs, Workflows, cluster configuration.
Confident managing data catalogs (Unity Catalog) across multiple stakeholders, enforcing governance and access control without being a bottleneck.
Good Python skills for data transformation and pipeline logic.
Proficiency with git-based workflows (branching, pull requests, code review).
Experience working in small, cross-functional teams with autonomy and ownership.
Pragmatic approach to Agile — we borrow from Scrum what actually works for us, and skip the rest.
Fluent in English.
Qualifications That Will Strengthen Your Profile
Comfortable working on Azure without requiring dedicated DevOps support (storage, IAM, secrets management).
Knowledge of MLflow or exposure to ML pipelines.
Familiarity with dbt or similar transformation frameworks.
Basic containerization skills (Docker) — able to work with containers without building infrastructure from scratch.
Ready to Elevate Your Career?
If you're excited to design and optimize data platforms for next-generation AI and IoT systems, we want to hear from you. Join Schindler and be part of a global company that values innovation, collaboration, and professional growth.
Apply Today and Make an Impact with Schindler!
Diversity & Inclusion
At Schindler Group we value inclusion and diversity, and practice equity to create equal opportunities for all. We strive to ensure that all qualified applicants receive consideration for employment without regard to age, race, ethnic background, color, religious affiliation, union affiliation, gender, gender identity, sexual orientation, marital status, national origin, nationality, genetics, and health or disability.