Job Description
Data Engineering Architect
Experience: 8 to 12+ years
Location: Kolkata / Bangalore
Work Mode: Hybrid
Position Summary:
Own enterprise data architecture across OLTP, analytics, and GenAI or agentic AI workloads, in the cloud or on-prem. Define the lakehouse and warehouse target state, standards, and governance for secure, scalable, and cost-efficient solutions. Partner with data engineering and AI teams to deliver trusted, retrieval-ready datasets, metadata, and lineage.
Preferred experience with Azure (Databricks, Synapse, Purview) or AWS (Snowflake, Redshift, Glue), with exposure to the other stack. Experience with ServiceNow CMDB, ITSM, or ITOM as a governed knowledge source is a plus.
Key Responsibilities
Define roadmap for lakehouse, warehouse, MDM and streaming layers.
Architect solutions on Azure or AWS using Databricks or Synapse or Snowflake or Redshift with orchestration via ADF, Glue or Airflow.
Model data using dimensional, Data Vault 2.0, and domain-driven design (DDD) patterns, and define canonical and semantic layers.
Build ELT or ETL pipelines using SQL and Python or PySpark, and batch or streaming pipelines using Kafka or Kinesis with CDC and a schema registry.
Design vectorization pipelines for chunking and embeddings, and operate vector stores such as Azure AI Search, Pinecone, or FAISS.
Enable data quality, metadata and lineage using Great Expectations, Deequ, Purview, Collibra, Alation and OpenLineage.
Drive security, privacy, and FinOps, covering IAM, RBAC or ABAC, encryption, PII handling, GDPR, ISO compliance, and cost monitoring.
Integrate ServiceNow via Table or REST APIs, align with CSDM and stream events to agents using Kafka.
Must Have Skills
Expert SQL and strong Python or PySpark. ELT or ETL and orchestration using Airflow, ADF, or Glue.
Lakehouse and warehouse platforms such as Databricks or Delta, Snowflake, Redshift or BigQuery.
Data modeling using Kimball and Data Vault.
Streaming using Kafka or Kinesis, CDC, and formats such as Avro or Protobuf.
Cloud experience on Azure or AWS, including security (IAM, TLS, KMS), disaster recovery, and performance tuning.
Governance using Purview, Collibra, Alation and data quality using Great Expectations or Deequ with strong PII controls.
Good to Have Skills
Agentic and RAG architectures using LangChain or LangGraph, with vector stores such as Azure AI Search, Pinecone, or FAISS.
Experience integrating ServiceNow data using Table or REST APIs and aligning with CSDM or CMDB.
Knowledge of cost and performance tuning for GenAI data flows, including Delta, Iceberg, or Hudi table formats, Databricks or Snowflake FinOps, and real-time features via Kafka.
Education
BE, BTech, ME, or MTech in Computer Science, IT, or Data.
Preferred Certifications
Azure or AWS Data Engineer or Architect, Databricks, or Snowflake.
Ready to Apply?
Don't miss this opportunity! Apply now and join our team.
Job Details
Posted Date:
February 24, 2026
Location:
Bangalore, India
Company:
ITC Infotech