Job Description
About the Role/Team
We’re building the semantic backbone of WEX’s Data-as-a-Service (DaaS) platform—an extensible data layer that turns raw data into trusted, reusable, and business-aligned assets.
As a Mid-level Software Engineer on the Semantic Data team, you will be at the forefront of designing and scaling the data foundation that powers analytics, AI, and operational decisions across all WEX domains. This role will be central to enabling a consistent, governed, and reusable definition of metrics, dimensions, and business logic that can be consumed across tools, platforms, and teams. Your work will be critical in creating a single source of truth, consolidating fragmented data sources into unified, reconciled views.
How you’ll make an impact
Design and implement modular, versioned semantic models and transformation pipelines that serve as reusable building blocks.
Translate complex business definitions into scalable, interpretable, and trustworthy data entities used across analytics, AI/ML, APIs, and operational workflows.
Implement business rules, calculations, and aggregations in the semantic layer.
Build modular, testable, and versioned transformation pipelines with a strong focus on readability, maintainability, and long-term scalability.
Establish data governance principles so that metric definitions are consistent, standardized, and compliant.
Define and implement robust data modeling solutions, ensuring data quality, consistency, and interoperability across the organization.
Partner closely with the data products team to understand business requirements and ensure semantic models align with their needs.
Participate in code reviews, design sessions, and incident resolution—promoting high standards for code quality and operational reliability.
Experience you’ll bring
Strong experience as a software engineer, ideally in high-volume or distributed systems environments.
A systems-thinking mindset—you treat data as a platform, not just a pipeline.
Solid understanding of data quality practices—including validation, enrichment, schema enforcement, and business rule encoding.
Strong programming skills in Python, Java, or another backend language for data services.
Solid grasp of engineering fundamentals, including version control, modular design, testing, and performance tuning.
Proven experience with at least one modern cloud data platform (Snowflake, BigQuery, Databricks).
Experience building highly available, scalable, and secure data platforms that support diverse processing paradigms (e.g., streaming, batch, warehousing, data lakes) is a strong plus.
A collaborative mindset—comfortable working across domains, products, and infrastructure layers.
A strong sense of ownership and accountability—you care deeply about building systems that last.