
Snowflake Data Engineer

📍 Hyderabad, India

Technology SPADTEK SOLUTIONS

Job Description

About the Role

Role: Azure–Snowflake Data Engineer
Location: Hyderabad
Job Type: Contract (may convert to full time per client need)

We are looking for an experienced Azure–Snowflake Data Engineer to design and implement scalable cloud data models, optimize performance, and enable seamless data access for analytics and reporting. The ideal candidate will have a strong background in Azure data services, enterprise data warehousing, and Snowflake architecture, with hands-on experience integrating with Power BI and modern analytics tools.

Key Responsibilities

Azure & Snowflake Data Architecture
- Design and develop logical and physical data models using Snowflake on Azure to support analytical and reporting needs
- Architect and implement Azure-based data warehousing solutions using services such as:
  - Azure Data Lake Storage (ADLS)
  - Azure Synapse Analytics (SQL Pools, Pipelines)
  - Azure Data Factory
  - Azure Functions / Logic Apps

Semantic Modeling & BI Integration
- Build and maintain semantic data layers for smooth integration with Power BI and other visualization tools
- Implement best practices for DAX modeling, dataset optimization, and DirectQuery/Import strategies

Performance Optimization
- Perform query tuning, warehouse sizing, clustering, and workload optimization within Snowflake
- Optimize end-to-end Azure data pipelines for performance, cost efficiency, and reliability

Security, Governance & Compliance
- Define and enforce data security, RBAC, access controls, and governance across Azure and Snowflake
- Implement data quality, lineage, and cataloging using tools such as Microsoft Purview

Collaboration & Solution Delivery
- Work closely with business, analytics, and engineering teams to translate requirements into scalable Azure–Snowflake architecture
- Provide best practices for Snowflake development, Azure data modeling, and ELT/ETL pipeline optimization
- Support ML and advanced analytics workloads by enabling reliable, governed data pipelines

Requirements
- 12+ years of progressive experience in data engineering, data warehousing, and cloud data architecture
- 8+ years in client-facing roles designing and delivering large-scale data migration or modernization projects
- Strong experience in conceptual, logical, and physical data modeling within Snowflake
- Deep expertise in the Azure data ecosystem, including:
  - Azure Data Factory
  - Azure Synapse Analytics
  - ADLS Gen2
  - Azure Functions / Logic Apps
  - Azure Key Vault
- Strong understanding of ETL/ELT frameworks, data warehousing methodologies, and distributed data processing
- Hands-on experience with Snowflake performance tuning, clustering, security setup, and cost optimization
- Working knowledge of Power BI, including semantic modeling and dataset optimization
- Proficiency in SQL and a strong understanding of data governance and security principles
- Experience with orchestration tools such as Airflow, Azure Data Factory, or dbt
- Ability to thrive in fast iteration cycles and adapt to evolving project requirements
- Experience collaborating with cross-functional Agile teams to build and enhance data-driven and ML-enabled solutions
- Familiarity with MLOps concepts, including CI/CD pipelines, version control, model versioning, monitoring, and automated deployment
- Ability to work effectively with global teams and communicate complex problem contexts clearly

Ready to Apply?

Don't miss this opportunity! Apply now and join our team.

Job Details

Posted Date: February 24, 2026
Job Type: Technology
Location: Hyderabad, India
Company: SPADTEK SOLUTIONS
