Job Description
We have two positions open: one for Big Data Lead (8+ years) and one for Big Data Engineer – Kafka (4+ years).
1. Data Engineer – Kafka (4+ Years)
Location: Pune/Nagpur (WFO)
Experience Required: 4+ Years
Job Summary:
We are looking for a Data Engineer with strong experience in Kafka and real-time data processing. The candidate will be responsible for building and maintaining scalable streaming pipelines and ensuring reliable data flow across systems.
Key Responsibilities:
- Design, develop, and maintain real-time data pipelines using Kafka.
- Work on Kafka producers, consumers, and topic management.
- Integrate Kafka with Big Data ecosystems like Spark / Hadoop.
- Ensure data reliability, fault tolerance, and scalability in streaming systems.
- Monitor and troubleshoot data pipeline and streaming issues.
- Collaborate with cross-functional teams in an Agile environment.
- Optimize performance of streaming data pipelines.
Required Skills:
- Strong hands-on experience with Apache Kafka.
- Good understanding of real-time data streaming concepts.
- Experience with Spark Streaming / Kafka Streams is preferred.
- Strong SQL and data handling skills.
- Basic understanding of Big Data ecosystem (Hadoop, Hive, etc.).
- Good problem-solving and debugging skills.
Good to Have:
- Experience with Airflow or other orchestration tools.
- Exposure to cloud platforms (AWS/GCP/Azure).
- Knowledge of DevOps / CI-CD practices.
2. Big Data Lead (8+ Years)
Location: Pune/Nagpur (WFO)
Experience Required: 8+ Years
Job Summary:
We are looking for a Big Data Lead with strong expertise in PySpark and Big Data ecosystems. The candidate will be responsible for designing scalable data solutions, leading teams, and ensuring high performance and reliability of data platforms.
Key Responsibilities:
- Design and develop scalable data pipelines using PySpark.
- Lead implementation across Hadoop ecosystem (HDFS, Hive, Sqoop, etc.).
- Drive architecture, design, and best practices for data engineering solutions.
- Perform performance tuning and optimization of distributed systems.
- Collaborate with business, delivery, and cross-functional teams.
- Manage and mentor a team of data engineers.
- Ensure data quality, reliability, and governance standards.
- Manage workflow orchestration using Airflow/Oozie.
Required Skills:
- Strong experience in Big Data Engineering with PySpark.
- Deep knowledge of HDFS, Hive, Sqoop, and Hadoop ecosystem.
- Strong expertise in SQL/HiveQL for large datasets.
- Proven experience in performance tuning and optimization.
- Experience working in Agile environments.
- Strong leadership, problem-solving, and communication skills.
Good to Have:
- Experience with Kafka / Spark Streaming.
- Knowledge of data modeling and data warehousing.
- Exposure to DevOps and CI/CD pipelines.
- Experience working on cloud-based data platforms.
Ready to Apply?
Don't miss this opportunity! Apply now and join our team.
Job Details
Posted Date: March 18, 2026
Job Type: Technology
Location: India
Company: Nice Software Solutions Pvt. Ltd.