Job Description
Who we are
Bitrock is a high-end consulting and system integration company, strongly committed to offering cutting-edge and innovative solutions. Our tailored consulting services enable our clients to preserve the value of legacy investments while migrating to more efficient systems and infrastructure. We take a holistic approach to technology: we consider each system in its totality, as a set of interconnected elements that work together to meet business needs.
We thrive on overcoming challenges to help our clients reach their goals, by supporting them in the following areas: Data, AI & ML Engineering; Back-end Engineering; Platform Engineering; Front-end Engineering; Product Design & UX Engineering; Mobile App Development; Quality Assurance; FinOps; Governance.
The effectiveness of our solutions also stems from partnerships with key technology vendors, such as HashiCorp, Confluent, Lightbend, Databricks, and Meterian.
Who we are looking for
Are you a seasoned DevOps professional with a passion for automating everything and deep expertise in cloud infrastructure and event streaming? We are looking for a highly motivated and skilled Senior DevOps Engineer to join our team. In this role, you will play a crucial part in designing, implementing, and maintaining our scalable, reliable, and secure infrastructure, our deployment pipelines, and our Confluent Kafka platform.
Key Responsibilities
Design, implement, and manage highly available and scalable cloud infrastructure (e.g., AWS, Azure, GCP) using Infrastructure as Code (IaC) principles.
Develop, configure, and maintain automation scripts and playbooks using Ansible for configuration management and application deployment.
Implement and manage infrastructure provisioning using Terraform to ensure consistent and reproducible environments across development, staging, and production.
Design, deploy, and operate production-grade event streaming platforms, with a focus on Confluent Kafka (or Apache Kafka).
Manage Confluent Kafka clusters, including configuration, security (ACLs, TLS/mTLS), scaling, monitoring, and performance tuning (brokers, topics, producers, and consumers).
Build and maintain robust CI/CD pipelines (e.g., Jenkins, GitLab CI) to facilitate rapid and reliable software releases, including automated deployment of Kafka resources (topics, connectors).
Manage and administer container orchestration platforms, leveraging experience with CNCF tools (e.g., Kubernetes, Prometheus, Grafana).
Monitor system health, troubleshoot issues, and implement proactive measures to enhance operational stability.
Maintain strong general Linux knowledge for system administration, scripting, and troubleshooting.
Contribute to the continuous improvement of our DevOps practices, tools, and processes.
Required Qualifications & Skills
Minimum of 5 years of experience in a DevOps, Infrastructure, or SRE role.
Proven expertise with at least one major cloud provider (AWS, Azure, or GCP).
Strong, hands-on experience administering and tuning production Kafka clusters (Apache or Confluent Kafka).
Expert-level proficiency in configuration management, specifically Ansible.
Deep practical experience with Terraform for provisioning complex infrastructure.
Solid foundation in general Linux administration and scripting (Bash, Python).
Hands-on experience with Docker and Kubernetes, or other relevant CNCF tools.
Strong understanding of networking, security (especially around event streaming), and monitoring in a cloud-native environment.
Excellent problem-solving, analytical, and communication skills.
Nice-to-Have Skills
Familiarity with the Confluent ecosystem components (e.g., Schema Registry, Kafka Connect, KSQL/ksqlDB, Control Center).
Experience with MLOps tools such as MLflow, Kubeflow, or similar platforms used for managing the machine learning lifecycle.
Experience with AI monitoring.
Familiarity with advanced security practices (e.g., security scanning, secrets management).
Relevant professional certifications (e.g., Confluent Certified Administrator for Apache Kafka (CCAAK), AWS Certified DevOps Engineer - Professional, CKA/CKAD).
Recruitment process:
Our recruitment process has 3 stages:
A short discovery interview with our HR team
Technical interview with our Team Leaders
Final interview with our Head of Area
How to apply:
You can apply via LinkedIn or send your CV to hr@fortitudegroup.it