
Closed
Posted
Description: We’re looking for an experienced Data Engineer, preferably based in Dubai, to help build and manage data pipelines for a global platform. Most work is in Azure, using Azure Data Factory, ADLS, and Databricks.
What you’ll do:
• Build and manage PySpark/Spark pipelines in Databricks
• Schedule and monitor pipelines in Azure Data Factory
• Optimize Databricks for better performance
• Keep code and documentation organized and clear
Requirements:
• Experience with Azure cloud and Databricks
• Strong PySpark/Spark skills
• Experience building scalable, reliable data pipelines
Details:
• Project-based, with potential to move to full-time
• Ideal for engineers who like building cloud-native pipelines
Project ID: 40255226
20 proposals
Remote project
Active 11 days ago
20 freelancers are bidding an average of $12 USD/hour for this project

With over a decade of hands-on experience in data management, I bring great value as a data engineer, particularly versed in Azure Databricks. My expertise in PySpark/Spark is attested by a successful history of building and managing sophisticated pipelines on global platforms. I am proficient in optimizing Databricks for performance and in scheduling and monitoring pipelines via Azure Data Factory, while maintaining well-organized code and documentation for easy maintenance. Having worked across industries such as finance, healthcare, e-commerce, and SaaS, I have developed a problem-solving knack, coupled with insight-extraction skills, that complements the demands of your project. Leveraging cloud-native infrastructure has been my modus operandi throughout my career, using tools such as Apache Airflow, Talend, Azure Data Factory, and AWS Glue, among others. As your "Data Storyteller", I'll not only build functional pipelines but also tell powerful stories from your datasets, fuelled by your vision. Join me on this journey to unlock the full potential of your data!
$8 USD in 40 days
4.1

Hello, thanks for posting this project. Your need for an experienced Data Engineer with Azure, Databricks, and PySpark background strongly aligns with my expertise. I have successfully built, managed, and optimized cloud-native data pipelines at scale, ensuring both performance and reliability. My approach emphasizes clean, well-documented code alongside efficient workflow monitoring and management in Azure environments. Could you please share more about the current pipeline architecture or specific optimization challenges you are facing?
$20 USD in 1 day
0.0

Hello, I’m a Senior Data & Cloud Engineer with 15+ years of experience designing and operating scalable data platforms on Azure, including extensive hands-on work with Azure Data Factory, ADLS, and Databricks (PySpark/Spark). I regularly build production-grade pipelines that:
• Ingest, transform, and validate large datasets using PySpark
• Orchestrate workflows in Azure Data Factory with monitoring and alerting
• Optimize Databricks clusters for performance and cost efficiency
• Maintain clean, version-controlled code with clear documentation
I’ve delivered reliable cloud-native data platforms for analytics, reporting, and ML workloads. Happy to discuss your current architecture and help you scale pipelines cleanly and efficiently.
Best regards, Rahul
$12 USD in 40 days
0.0

Hey — saw your post about needing an Azure Databricks Data Pipeline Engineer. A lot of these projects get stuck not on coding, but on making all the Azure pieces (Data Lake, Databricks, ADF) work together cleanly and reliably. Quick question before I suggest an approach: Are your data sources already landing in Azure (e.g. Blob/Data Lake), or do we also need to design the ingestion layer from scratch? I’ve built and maintained Databricks-based pipelines on Azure for analytics and reporting, including orchestration with ADF and optimizing Spark jobs for cost and performance. If you share your current architecture diagram or a short spec of what you have so far, I can review it and tell you what’s realistic for the next phase.
$11.50 USD in 7 days
0.0

I have hands-on experience with Azure Data Factory, ADLS, and Databricks, building and optimizing PySpark-based pipelines for scalable data platforms. I focus on clean, well-documented code and performance tuning within Azure environments. Open to project-based work with potential long-term engagement.
$12 USD in 10 days
0.0

Experienced in Azure cloud-native data engineering with strong PySpark/Spark expertise in Databricks. I’ve built and maintained reliable ADF-orchestrated pipelines with performance optimization and structured documentation practices. Comfortable with project-based collaboration and scaling into full-time support.
$8 USD in 8 days
0.0

Hello, I’m a Data Engineer with hands-on experience building and managing cloud-native pipelines on Azure using Databricks, ADLS, and Azure Data Factory. I can support your platform by:
• Developing scalable PySpark pipelines in Databricks for ingestion, transformation, and output staging
• Orchestrating and monitoring workflows using Azure Data Factory (ADF triggers, linked services, failure alerts)
• Optimizing Spark jobs for performance (partitioning, caching, broadcast joins, cluster tuning)
• Ensuring reliable, idempotent pipeline runs with structured logging
• Maintaining clean, modular code and documentation for easy handover
My recent work includes ETL pipelines on Azure with:
• ADLS-based data lakes
• Databricks notebooks + job clusters
• Parameterized PySpark transformations
• ADF scheduling & dependency chains
• Performance tuning for large datasets
I enjoy building stable, production-ready pipelines with clear governance and monitoring in place.
Availability:
• Immediate start
• Open to project-based engagement
• Flexible to scale toward long-term/full-time
Happy to discuss your current pipeline setup and optimization goals.
Best regards, Manu Jain
$12 USD in 40 days
0.0
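Several bids mention scheduling pipelines with ADF triggers; for context, this is a minimal sketch of a daily ADF schedule trigger definition. The trigger, pipeline, and parameter names here are illustrative, not taken from the posting:

```json
{
  "name": "DailyIngestTrigger",
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2024-01-01T02:00:00Z",
        "timeZone": "UTC"
      }
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "IngestDailyPipeline",
          "type": "PipelineReference"
        },
        "parameters": {
          "runDate": "@trigger().scheduledTime"
        }
      }
    ]
  }
}
```

A definition like this would typically be deployed through ARM templates or the Azure CLI and then started, after which ADF runs the referenced pipeline on the schedule and surfaces run status in its monitoring view.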

I have worked extensively with Azure Databricks and Azure Data Factory. I have used PySpark and Spark SQL for data transformation, Unity Catalog for governance within the Azure ecosystem, and have also done cost optimisation.
$8 USD in 40 days
0.0

Hi there! I am an experienced Azure Data Engineer specializing in designing and maintaining robust end-to-end data solutions across cloud and on-premises environments. I have strong hands-on expertise in Azure Data Factory, Azure SQL, Azure Databricks, and SAP S/4HANA integrations (both on-prem and cloud environments), including working with OData CDS views and ODBC connectivity. My experience includes building scalable ETL/ELT pipelines, implementing complex T-SQL logic (CTEs, window functions, multi-week business rules, upsert strategies), handling snapshot-based reporting challenges, and optimizing data models for Power BI. I focus on translating complex business requirements into efficient, maintainable, and high-performing data solutions that deliver accurate and actionable insights. Want to have a word? We can figure it out at the end of our discussion. Thanks, and looking forward.
$12 USD in 15 days
0.0

Hello, I’m very interested in your Azure Databricks Data Pipeline Engineer project. I have hands-on experience working with Azure Data Factory, Azure Data Lake Storage (ADLS), and Databricks, along with strong expertise in PySpark and Spark-based data processing. I’ve built scalable data pipelines that handle transformation, optimization, and reliable orchestration in cloud-native environments.
What I can help you with:
1) Designing and building efficient PySpark pipelines in Databricks
2) Orchestrating and scheduling workflows using Azure Data Factory
3) Optimizing Spark jobs for performance and cost efficiency
4) Implementing structured logging, monitoring, and error handling
5) Writing clean, modular, and well-documented code
I focus on writing production-ready pipelines that are scalable, maintainable, and easy to monitor. I also ensure proper partitioning strategies, performance tuning, and resource optimization in Databricks environments.
I’m available up to 40 hours per week and can start immediately. I’m open to a long-term collaboration if the project scales. Looking forward to discussing your platform architecture and pipeline requirements. Thanks
$8 USD in 40 days
0.0
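The "structured logging" this bid mentions usually means emitting one-line, machine-parseable JSON records per pipeline stage, so a log shipper or Azure Monitor can index fields without regex parsing. A minimal sketch, with a hypothetical helper name and field set:

```python
import json
import time

def log_event(stage, status, **details):
    # Build a one-line JSON record for a pipeline stage; sorted keys
    # keep the output stable and easy to diff across runs.
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "stage": stage,
        "status": status,
        **details,
    }
    return json.dumps(record, sort_keys=True)

# e.g. print(log_event("ingest", "ok", rows=120000, source="adls://raw/orders"))
```

Each pipeline step would print one such line at start, success, and failure, giving monitoring tools a consistent schema to alert on.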

Hello, I am an experienced Data Engineer with 4+ years of experience building scalable data pipelines using Azure Databricks, PySpark, Azure Data Factory, and ADLS. I have worked extensively on designing and maintaining end-to-end ETL pipelines, optimizing Spark jobs, and ensuring reliable data processing for enterprise projects. In my recent projects, I have:
✔ Built PySpark-based ETL pipelines in Databricks for large-scale data processing
✔ Designed incremental and full-load frameworks for efficient data ingestion
✔ Scheduled and monitored pipelines using Azure Data Factory and workflow tools
✔ Optimized Spark jobs using partitioning, caching, and broadcast joins
✔ Implemented data validation and error handling frameworks
✔ Worked with Delta tables and cloud storage (ADLS/S3)
✔ Maintained proper documentation and clean code standards
I am confident that I can help build and maintain reliable and scalable data pipelines for your platform.
$12 USD in 30 days
0.0
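Partition tuning of the kind this bid lists usually starts from the common Spark guideline of roughly 128 MB per partition. A minimal sketch of that sizing arithmetic (the helper name is illustrative; the result would feed `df.repartition(n)` or the `spark.sql.shuffle.partitions` setting):

```python
import math

def target_partitions(dataset_bytes, partition_mb=128):
    # Aim for roughly 128 MB per partition: too few partitions underuses
    # the cluster, too many adds per-task scheduling overhead.
    return max(1, math.ceil(dataset_bytes / (partition_mb * 1024 * 1024)))

# A 10 GiB dataset at the 128 MB default works out to 80 partitions.
```

The guideline is a starting point, not a rule; skewed keys or wide rows would still call for profiling the actual job.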

Hi, I’m a Data Engineer with strong hands-on experience building Azure-native data pipelines using Azure Data Factory, ADLS, and Databricks. I’ve designed and optimized PySpark/Spark pipelines for scalable ingestion, transformation, and analytics workloads, with a focus on performance, reliability, and clean code practices. Open to project-based work with long-term collaboration potential. Happy to discuss details.
$12 USD in 20 days
0.0

Hello, I’m a Senior Data Engineer with strong hands-on experience in Azure Data Factory, ADLS Gen2, and Azure Databricks, building scalable, production-grade data pipelines for cloud-native platforms.
Why I’m a strong fit:
✔ Built and optimized PySpark/Spark pipelines in Databricks for large-scale datasets (TB-level processing)
✔ Designed and scheduled robust pipelines using Azure Data Factory with monitoring & alerting
✔ Experienced with Delta Lake optimizations (partitioning, Z-ordering, caching, parallel writes)
✔ Implemented CI/CD workflows and performance tuning in Azure environments
✔ Strong focus on clean, modular code and well-structured documentation
I have worked on end-to-end Azure data platforms, including ingestion, transformation, orchestration, optimization, and production deployment. I understand the importance of reliability, performance, and maintainability in global-scale systems. I’m comfortable working independently, documenting clearly, and delivering production-ready solutions.
Would be happy to discuss your platform architecture and how I can contribute immediately. Looking forward to collaborating.
Best regards, Toshit
$15 USD in 20 days
0.0

I have good hands-on experience writing notebooks to ingest data from different sources and creating workflows and pipelines in the Databricks workspace. On one project I optimized Databricks jobs so that total cost came down by 40%. Please initiate a chat to discuss further.
$12 USD in 40 days
0.0

I have experience building AI solutions on Azure, including agent-based workflows and governed data retrieval systems. For this POC, I can design an orchestrator agent that routes user queries to domain-specific agents (Financial, Clinical, etc.) using Semantic Kernel or similar tools. To ensure consistent “Source of Truth” responses, the demo will include citation-based answers (linking back to Databricks tables), a simple deployment setup, and a clear architecture diagram. I’m comfortable working in regulated environments and explaining technical decisions clearly.
$12 USD in 40 days
0.0
