Company Description
Datavid provides enterprise-ready software solutions designed to help businesses extract, enrich, and utilize their most valuable knowledge. We focus on delivering cutting-edge data engineering and AI services to global enterprises leading in research, development, customer experience, and innovation. With a dedicated team of more than 120 professionals, including software developers, data engineers, machine learning specialists, AI engineers, and project managers, Datavid is committed to helping organizations unlock the full potential of their data.
Role Description
We are seeking a Senior DevOps Engineer to join our global engineering team. This is a full-time remote role responsible for designing, building, and automating cloud infrastructure, managing infrastructure as code (IaC), and enabling seamless CI/CD and deployment workflows.
The ideal candidate will have strong AWS experience, hands-on expertise with Terraform and Kubernetes, and a passion for building reliable, secure, and scalable systems.
Day-to-day responsibilities include owning CI/CD pipelines, managing production-grade Kubernetes workloads, monitoring system performance, optimizing cloud cost and security, and collaborating with product, data, and development teams.
Responsibilities
Cloud & Infrastructure Engineering
Design, implement, and manage AWS infrastructure across EKS, ECS (EC2/Fargate), ALB, ASG, RDS, ECR, S3, IAM, MSK/Kafka, Glue, Lambda, and related services.
Build secure and scalable networking including VPCs, subnets, routing, NACLs, security groups, VPNs, Route53, ACM certificates, and HA/DR patterns.
Automate infrastructure provisioning using Terraform (primary) and CloudFormation (secondary).
Kubernetes & Containerization
Deploy and manage Kubernetes workloads on EKS using Helm, GitOps workflows, and automated pipelines.
Maintain Docker registries and image pipelines, and ensure optimized cluster performance and security.
CI/CD & Automation
Build and maintain CI/CD pipelines using GitHub Actions, AWS CodePipeline/CodeBuild, Concourse, or equivalent tools.
Automate deployment workflows using Shell and Python (Boto3).
Implement automated AMI builds, patching workflows, snapshots, and infrastructure monitoring pipelines.
Build and support data workflows using S3, MWAA/Airflow, Kafka/MSK, Glue Jobs, ECS, ECR, and Lambda.
Monitoring, Security & Reliability
Implement robust observability using CloudWatch, Datadog, Prometheus/Grafana, or similar tools.
Enforce security best practices across IAM, KMS, Secrets Manager, encryption, and multi-account AWS governance.
Lead cloud cost optimization, capacity planning, and reliability engineering practices.
Technical Leadership
Mentor junior engineers on AWS, IaC, Kubernetes, and CI/CD best practices.
Lead DevOps initiatives within agile teams and collaborate closely with engineering, QA, data, and product functions.
Qualifications
Required
Strong experience with AWS cloud services at production scale.
Hands-on expertise in Infrastructure as Code (Terraform).
Deep knowledge of Kubernetes (EKS), Docker, and container orchestration.
Proficiency with CI/CD tooling (GitHub Actions, CodePipeline, CodeBuild, etc.).
Strong Linux system administration background.
Experience with automation using Python and Shell scripting.
Solid understanding of networking (VPC, routing, security groups, VPN).
Excellent analytical, troubleshooting, communication, and collaboration skills.
Preferred
Bachelor's degree in Computer Science, Engineering, or equivalent experience.
Certifications such as AWS Certified DevOps Engineer, Terraform Associate, CKAD/CKA.
Experience with observability tools such as Datadog or Prometheus/Grafana.