Job Description
Position Summary:
We are seeking an experienced and versatile Data Engineer to join our Data & Analytics team. This role focuses on designing and maintaining data pipelines that integrate Oracle Cloud applications and other operational systems with both AWS Data Lake and Microsoft Fabric environments. The successful candidate will play a critical role in enabling enterprise reporting and advanced analytics through robust data ingestion, transformation, and integration frameworks.
Key Responsibilities:
- Build and maintain data pipelines that extract and move data between Oracle Cloud (ERP, HCM, SCM) and other operational systems, as well as between AWS Data Lake (S3, Glue, Redshift) and Microsoft Fabric (OneLake, Lakehouse, Data Factory). A minimal pipeline sketch follows this list.
- Design and optimize ETL/ELT processes for large-scale, multi-source data ingestion and transformation.
- Integrate with external cloud-based systems (e.g., Salesforce, ServiceNow, MS Business Central) using APIs, flat files, or middleware.
- Utilize Oracle Integration Cloud (OIC), FBDI, BIP, and REST/SOAP APIs for data extraction and automation.
- Leverage Microsoft Fabric components, including Data Factory, Lakehouse, Synapse-style notebooks, and KQL databases, to enable structured data availability.
- Collaborate with BI developers to enable Power BI semantic models, apps, and enterprise-wide reporting.
- Implement monitoring, logging, and error-handling strategies to ensure the reliability and performance of data pipelines.
- Adhere to best practices in data governance, security, lineage, and documentation.
- Partner with data architects, analysts, and business stakeholders to translate business needs into scalable data solutions.
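
To illustrate the kind of pipeline work described above, here is a minimal Python sketch that pulls a report over a REST API and lands the raw file in S3 with basic error handling. It is a sketch only: the endpoint URL, credentials, bucket, and object key are hypothetical placeholders, not a specific Oracle Cloud or AWS configuration.

    import boto3
    import requests

    # Hypothetical placeholders -- substitute values from your own environment.
    REPORT_URL = "https://example.oraclecloud.com/reports/daily_extract"  # hypothetical endpoint
    BUCKET = "example-data-lake-raw"                                      # hypothetical bucket
    KEY = "oracle/hcm/daily_extract.csv"                                  # hypothetical object key

    def extract_to_s3():
        """Pull a report over REST and land the raw bytes in the data lake."""
        resp = requests.get(REPORT_URL, auth=("svc_user", "secret"), timeout=300)
        resp.raise_for_status()  # fail loudly so the orchestrator can retry or alert

        s3 = boto3.client("s3")
        s3.put_object(Bucket=BUCKET, Key=KEY, Body=resp.content)

    if __name__ == "__main__":
        extract_to_s3()

In practice, credentials would come from a secrets manager rather than being hard-coded, and the landed file would feed downstream Glue or Fabric transformations.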
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field.
- 3+ years of experience in data engineering, including cloud data integrations and enterprise data pipeline development.
- Experience with Oracle Cloud (ERP, HCM, or SCM) and its integration mechanisms (FBDI, BIP, REST APIs, OIC).
- Familiarity with AWS Data Lake architecture: S3, Glue, Redshift, Athena, Lambda, etc.
- Hands-on experience in the Microsoft Fabric ecosystem, including Data Factory (Fabric), OneLake, Lakehouse, Notebooks, and integration with Power BI.
- Proficiency in SQL and Python, and experience with ETL orchestration tools (e.g., Airflow, Step Functions); a minimal DAG sketch follows this list.
- Strong knowledge of data modeling, data quality, and pipeline optimization.
- Experience with Power BI datasets and reporting enablement, particularly in semantic model design.
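
As a sketch of the orchestration experience listed above, the following is a minimal Airflow 2.x DAG that schedules one daily task with retries; the DAG id and the task body are hypothetical stand-ins for a real extract job.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_daily_extract():
        # Hypothetical stand-in for the real extract/load logic.
        print("extract complete")

    with DAG(
        dag_id="oracle_to_lake_daily",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",              # "schedule_interval" on Airflow < 2.4
        catchup=False,                  # do not backfill past runs
    ) as dag:
        PythonOperator(
            task_id="daily_extract",
            python_callable=run_daily_extract,
            retries=2,                  # retry transient failures before alerting
        )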
Preferred/Desirable Skills:
- Familiarity with streaming data tools (e.g., Kafka, AWS Kinesis, Fabric Real-Time Analytics); a minimal streaming sketch follows this list.
- Experience with Git-based version control, CI/CD for data pipelines, and infrastructure as code (e.g., Terraform, CloudFormation).
- Knowledge of metadata management, data lineage tracking, and data governance frameworks.
- Cloud certifications (e.g., AWS Certified Data Analytics, Microsoft Certified: Fabric Analytics Engineer, Oracle Cloud Certified) are a strong plus.
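
For the streaming item above, here is a minimal boto3 sketch that writes a single record to an AWS Kinesis data stream; the stream name and event payload are hypothetical.

    import json

    import boto3

    kinesis = boto3.client("kinesis")

    # Hypothetical event and stream -- illustrative only.
    event = {"order_id": 123, "status": "shipped"}
    kinesis.put_record(
        StreamName="example-events",             # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["order_id"]),     # determines shard routing
    )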
Soft Skills:
- Strong problem-solving and analytical thinking.
- Excellent communication skills, with the ability to collaborate across business and technical teams.
- Highly organized, detail-oriented, and self-motivated.
- Comfortable in fast-paced environments with shifting priorities.