Job Description
Overview
Engineering/Development | Brisbane/Melbourne/Sydney, Australia
Insighture is a leading technology consultancy that drives digital transformation for businesses worldwide. With a team of over 85 expert consultants, the company delivers tailored, high-impact strategies and solutions, enabling scalable product engineering. As an AWS partner, Insighture excels in delivering integrated cloud services. It has collaborated with more than 50 clients globally, guiding them through cloud adoption, DevOps transformation, enterprise modernisation, and more.
The team’s expertise spans Cloud-Native Development, Solutions Architecture, UI/UX, Quality Engineering, Data Engineering, AI/ML, and DevSecOps. These capabilities empower businesses to achieve impactful and innovative outcomes.
In 2024, Insighture achieved ISO certification and was recognised as a Great Place to Work, earning three prestigious awards: Best Workplace in Sri Lanka, Best Workplace for Technology, and Best Workplace for Young People. Insighture's technology and expertise are embedded in the work of internationally recognised care providers, global freight operations, child protection systems, and health tech platforms across Australia, the UK, and Singapore.
We are seeking a motivated and detail-oriented Azure Data Factory Engineer/Senior Engineer for a 6-month contract (extendable) to join our growing team.
Security Clearance Requirement (Mandatory): Baseline Security Clearance or above (NV1/NV2 preferred)
Qualifications
Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
Hands-on experience with Azure PaaS services, with a specific focus on data integration and storage services.
Expert-level proficiency in designing, developing, and maintaining ETL/ELT pipelines in Azure Data Factory, including dynamic expressions, control flow, data flows, and pipeline orchestration.
Strong experience in Databricks for data engineering tasks, including cluster management, notebook development, and job scheduling.
Proficiency in PySpark for distributed data processing and transformation within the Databricks environment (a minimal sketch follows this list).
Experience with SSIS (SQL Server Integration Services), particularly for lift-and-shift migrations to ADF or hybrid integration scenarios.
Knowledge of SQL, Python, or Scala is highly desirable.
Solid understanding of DevOps principles, including experience implementing CI/CD pipelines for data artifacts using Azure DevOps (Repos, Pipelines, Release Gates) or GitHub Actions.
Familiarity with Azure Synapse Analytics or dedicated SQL pools is a plus.
Excellent analytical and problem-solving skills.
Strong communication and stakeholder management abilities.
Ability to work independently and to mentor junior team members (at the Senior Engineer level).
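For illustration, here is a minimal PySpark sketch of the kind of distributed transformation work described above. All paths, container and account names, and column names are hypothetical placeholders, and Delta Lake is assumed to be available on the cluster (as it is on Databricks runtimes).

```python
# Minimal PySpark transformation sketch: read raw CSV from ADLS Gen2,
# cleanse and aggregate it, and write a curated Delta table.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-totals").getOrCreate()

# Read raw files landed by an ADF copy activity (placeholder path).
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Typical cleansing steps: de-duplicate on the business key, derive a date column.
daily_totals = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("order_id").alias("order_count"),
    )
)

# Write the curated output as Delta, partitioned for downstream query performance.
(
    daily_totals.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/daily_order_totals/")
)
```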
Responsibilities
Design, develop, and optimise complex ETL/ELT pipelines using Azure Data Factory (ADF) to ingest data from various on-premises and cloud sources into Azure data lakes and warehouses.
Develop and maintain data transformation logic using Databricks notebooks and PySpark, and orchestrate Databricks jobs and activities within ADF pipelines to ensure seamless data flow (see the notebook sketch after this list).
Manage existing SSIS packages, refactoring them into ADF where appropriate or maintaining hybrid integration scenarios between on-premises SSIS and Azure PaaS solutions.
Implement and manage Azure PaaS resources (Azure Data Lake Storage Gen2, Storage Accounts, Key Vault), ensuring secure, scalable, and cost-effective data solutions.
Implement DevOps best practices, including source control (Git), branch policies, and automated build/release pipelines to deploy data factory code and Databricks artifacts across development, test, and production environments.
Monitor and optimise the performance of data pipelines and transformations, troubleshooting bottlenecks in ADF activity runs, Databricks clusters, and PySpark code.
Work closely with data architects, data scientists, and business analysts to understand requirements and translate them into technical specifications.
Create and maintain technical documentation for data pipelines, data dictionaries, and system configurations. Enforce coding standards and data governance policies.
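To illustrate the Databricks-from-ADF orchestration pattern mentioned above, the sketch below shows a notebook written to be invoked by an ADF Databricks Notebook activity: it receives parameters through widgets and returns a value the pipeline can branch on. Widget names, paths, and column names are hypothetical; `spark` and `dbutils` are provided by the Databricks notebook runtime.

```python
# Databricks notebook sketch, parameterised for invocation from ADF.
# ADF passes values for these widgets via the activity's baseParameters.
from pyspark.sql import functions as F

dbutils.widgets.text("run_date", "")      # e.g. "2024-07-01" (hypothetical parameter)
dbutils.widgets.text("source_path", "")   # e.g. an abfss:// path (hypothetical parameter)
run_date = dbutils.widgets.get("run_date")
source_path = dbutils.widgets.get("source_path")

# Read only the slice for this run; "ingest_date" is a placeholder column.
df = (
    spark.read.format("parquet").load(source_path)
    .filter(F.col("ingest_date") == run_date)
)

# ... transformation logic would go here ...

# Return a value to ADF; it appears in the activity's runOutput, so
# downstream pipeline activities can inspect it and branch accordingly.
dbutils.notebook.exit(str(df.count()))
```

In ADF, the calling pipeline would map its own parameters into the activity's baseParameters, which keeps the notebook reusable across development, test, and production environments.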
Life at Insighture
We are looking for the right people who are ready to take on interesting challenges and help grow our clients' businesses.