
Closed
Published
We are looking for an experienced Azure Data Engineer to support and enhance our existing data platform on an ongoing basis.

You should be strong in:
- Azure Data Factory (ADF) for building and maintaining ETL/ELT pipelines
- Azure Databricks and PySpark for large-scale data processing
- Python for data engineering utilities, automation, and integration
- Delta Lake/Lakehouse concepts, performance optimization, and troubleshooting
- Working with SQL-based data sources, data warehousing, and BI integrations

Responsibilities:
- Design, build, and optimize data pipelines in Azure ADF and Databricks
- Develop and maintain PySpark and Python jobs for batch and near real-time workloads
- Implement best practices for data quality, observability, and monitoring
- Collaborate with our internal team, follow existing standards, and document your work
- Support and improve existing pipelines, diagnose issues, and propose scalable solutions

Nice to have:
- Experience with Azure Synapse or similar MPP data warehouses
- CI/CD for data pipelines (Git, Azure DevOps, etc.)
- Basic understanding of data modeling and BI/reporting needs

Engagement details:
- Remote, long-term engagement
- 20–30 hrs/week
- Working hours with some overlap with 6 PM – 2 AM PKT preferred
- Budget: 2–8 USD/hour, depending on experience and fit

When you apply, please answer briefly:
1. Relevant Azure ADF / Databricks projects you've done (1–2 examples).
2. Your hourly rate within 2–8 USD/hour.
3. Your typical availability per week and time zone.
4. A short note on how you approach debugging and optimizing slow pipelines.
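As an illustration of the "Python for data engineering utilities" skill the posting asks for, a minimal data-quality gate might look like the sketch below. The function name, thresholds, and record shape are hypothetical, not taken from the posting.

```python
# Minimal sketch of a data-quality check for a pipeline stage.
# All names and thresholds here are illustrative assumptions.

def check_quality(rows, required_cols, max_null_rate=0.05, min_rows=1):
    """Validate a batch of dict records; return (ok, list_of_issues)."""
    issues = []
    if len(rows) < min_rows:
        issues.append(f"too few rows: {len(rows)} < {min_rows}")
        return False, issues
    for col in required_cols:
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            issues.append(f"{col}: null rate {rate:.2%} exceeds {max_null_rate:.2%}")
    return (not issues), issues
```

In practice a gate like this would run as a validation step after ingestion, failing the pipeline (or raising an alert) instead of silently loading bad data downstream.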
Project ID: 40251357
34 proposals
Remote project
Active 10 days ago
34 freelancers are bidding an average of $11 USD/hour for this project

Hi, I have good working experience with Azure, Python, PySpark, databases, Django, and Machine Learning. I am available to discuss further. Looking forward to an early and positive response. Regards, Shalu
$6 USD in 40 days
6.0

Hi, I have carefully read your project description and I am very interested in working on it. I am a senior software engineer with over 10 years of experience, and I have successfully completed several projects similar to yours. I’ve worked on building and maintaining complex data pipelines using Azure Data Factory and Azure Databricks, including PySpark jobs for large-scale ETL workflows and data lake management with Delta Lake concepts. My approach to debugging slow pipelines involves detailed monitoring, analyzing job logs, and iteratively optimizing resource allocation to improve performance and reliability. I can start immediately and am confident I can meet your estimated deadline and budget. I would appreciate the opportunity to collaborate with you. Thank you.
$5 USD in 40 days
5.9

Hi, SolutionzHere deploys Azure Data Engineers for Lakehouse builds—recent: ADF/Databricks PySpark pipelines processing 10TB+ SQL sources to Delta, +20% perf gains via partitioning/optimizations; another optimized Synapse BI feeds. Rate: $25/hr (top expertise/value). 25-30 hrs/wk, Kolkata IST (good overlap 6PM-2AM PKT), flexible. Debugging: Logs/metrics first (ADF monitors/Databricks jobs), profile Spark UI for bottlenecks, tune partitions/cache, test iteratively. Long-term partner ready!
$25 USD in 40 days
6.0

As an experienced Senior Web and Mobile Application Developer with a specialization in Azure and Python, I believe my skill set aligns perfectly with your requirements for an Azure Data Engineer. During my 10+ years of professional experience, I have consistently crafted high-performance data solutions for various clients by leveraging robust platforms like Azure. My hourly rate falls within your budget range and I am available 20-30 hours per week with the flexibility you need for 6 PM - 2 AM PKT. Working remotely on a long-term basis is indeed my preferred mode of engagement. Debugging and optimizing slow pipelines are skills I've honed throughout my career. My approach involves detailed logging and observability to identify bottlenecks, followed by thorough performance analysis using tools like Spark UI and Databricks monitoring. By carefully tracking data flow transformation steps and pinpointing inefficiencies, I'm able to propose scalable solutions and implement necessary optimizations, ensuring your pipelines operate at peak efficiency. With my strong track record in delivering timely, high-quality solutions geared towards supporting real business growth like yours, I'm confident in my ability to exceed your expectations as an Azure Data Engineer. Let's discuss how I can contribute substantially to your ongoing needs! With Regards!
$10 USD in 40 days
5.5

Hello, Thank you for sharing this opportunity. With 7+ years in Azure Data Factory, Databricks, and PySpark, I’ve built and optimized batch and near real‑time pipelines on Delta Lakehouse architectures, integrating SQL and BI workloads at scale. I follow strict standards: reusable ADF templates, parameterized pipelines, robust logging, and alerting for observability and data quality. For slow pipelines, I profile bottlenecks (partitioning, skew, shuffle), optimize Spark configs and queries, and harden them via CI/CD in Azure DevOps. Brief answers: 1) I’ve led end‑to‑end ADF + Databricks projects for enterprise reporting and near real‑time ingestion; 2) my hourly rate is 30 USD; 3) I can commit 20–25 hrs/week from IST with overlap to 6 PM–2 AM PKT; 4) debugging focuses on logs, lineage, and metrics first, then code and infrastructure tuning. What are the top two pain points you want solved first in your current Azure data platform? Happy to start with a short audit of your existing pipelines and then move into long‑term ownership of enhancements and support. Regards, Sahanaj
$30 USD in 40 days
4.7

Hello! I’m excited about the opportunity to help with your project. Based on your requirements, I believe my expertise in Python aligns perfectly with your needs. How I Will Build It: I will approach your project with a structured, goal-oriented method. Using my experience in Python, Azure, NoSQL (Couch & Mongo), QlikView, Data Warehousing, Elasticsearch, ETL, and PySpark, I’ll deliver a solution that not only meets your expectations but is also scalable, efficient, and cleanly coded. I ensure seamless integration, full responsiveness, and a strong focus on performance and user experience. Why Choose Me: - 10 years of experience delivering high-quality web and software projects - Deep understanding of Python and related technologies - Strong communication and collaboration skills - A proven track record — check out my freelancer portfolio - Available for a call to discuss your project in more detail - Committed to delivering results on time, every time. Availability: I can start immediately and complete this task within the expected timeframe. Looking forward to working with you! Best regards, Ali Zahid, United Arab Emirates
$5 USD in 40 days
4.4

Hello! I'd like to introduce myself. I'm Toriqul Islam, an engineer born and raised in Bangladesh. I speak and write English fluently. I hold a B.Sc. in Computer Science & Engineering from Rajshahi University of Engineering & Technology (RUET). I love working on web design and development projects. Web Design & Development: I am a full-stack web developer with more than 10 years of experience. My design approach is always modern and simple, which draws people in. I have built websites for a wide variety of industries, worked with many companies, and my clients consistently leave good reviews. Client satisfaction is my first priority. Technologies I use for custom website development (full stack): HTML5, CSS3, Bootstrap 4, jQuery, JavaScript, AngularJS, React JS, Node JS, WordPress, PHP, Ruby on Rails, MySQL, Laravel, .NET, CodeIgniter, React Native, SQL, mobile app development, Python, MongoDB. What you'll get: • Fully responsive website on all devices • Reusable components • Quick responses • Clean, tested, and documented code • Deadlines and requirements fully met • Clear communication. You are cordially welcome to discuss your project. Thank you! Best regards, Toriqul Islam
$5 USD in 40 days
4.2

✅✅✅✅✅ Only Perfection — 100%; Even 99.99% Isn’t Enough For Me. ✅✅✅✅✅ With over half a decade of professional experience, my skills in Python, data engineering utilities, and automation align well with your needs for this project. My capability in building and optimizing large-scale data pipelines using Azure Data Factory (ADF) and Azure Databricks with PySpark has been demonstrated on several projects. I am also proficient with SQL-based data sources, data warehousing, and BI integrations — competencies integral to your operations. Beyond technical know-how, my track record as a problem solver and clear communicator will be invaluable in working effectively with your team. With me, you're getting more than just an engineer: I offer long-term engagement while keeping budget sustainability in mind. My approach revolves around high-quality, maintainable code, clear documentation, and continuous support and system optimization — the components necessary for long-term success. In short, my eight years of software development experience fit your requirements for an experienced Azure Data Engineer who can deliver fast, secure solutions. I look forward to discussing how my skills can support your ongoing work and exceed your expectations.
$15 USD in 40 days
2.4

Hello, I’ve reviewed your Azure data platform needs and can step in immediately to support and enhance your ADF/Databricks pipelines. I have hands‑on experience building and optimizing ADF ETL/ELT flows, authoring PySpark jobs in Databricks, and applying Delta Lake best practices to improve throughput and reliability. I write maintainable Python utilities for automation and CI/CD, and I’m comfortable troubleshooting SQL sources, Synapse integrations and BI handoffs. My approach is pragmatic: I follow existing standards, add observability (metrics, logging, alerts), profile slow jobs, and implement targeted optimizations (partitioning, caching, shuffle reduction). I’ll document changes and collaborate closely with your team. I can commit to a long‑term, part‑time engagement with overlap in your preferred PKT window. Can you share the current daily data volume, main pain points you’re seeing (slow jobs, failures, cost), and one pipeline you’d like me to prioritize first? Sincerely, Daniel
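The observability pattern this bidder describes (metrics, logging, alerts around pipeline steps) can be sketched with only the standard library. The step name, retry counts, and wrapper function are illustrative assumptions, not this bidder's actual tooling.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_step(name, fn, retries=2, delay_s=0.0):
    """Run one pipeline step with timing, structured logging, and retries."""
    for attempt in range(1, retries + 2):
        start = time.monotonic()
        try:
            result = fn()
            log.info("%s ok in %.3fs (attempt %d)",
                     name, time.monotonic() - start, attempt)
            return result
        except Exception as exc:
            log.warning("%s failed on attempt %d: %s", name, attempt, exc)
            if attempt > retries:
                raise  # exhausted retries; let the orchestrator alert
            time.sleep(delay_s)
```

In ADF this role is usually played by the built-in activity retry policy plus Azure Monitor alerts; a wrapper like this is the in-code equivalent for custom Python utilities.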
$50 USD in 17 days
2.2

I understand that you are looking for an Azure Data Engineer to enhance your existing data platform, focusing on Azure Data Factory for ETL/ELT pipelines, Azure Databricks with PySpark for large-scale processing, and SQL-based data sources. Your emphasis on optimizing performance and ensuring data quality aligns perfectly with my experience.
$2 USD in 7 days
2.1

Hi there, As an experienced Azure Data Engineer with a solid background in Azure Data Factory, Databricks, and Python, I am excited about the opportunity to support and enhance your existing data platform. With over 12 years in web development and digital solutions, I have successfully built and maintained ETL/ELT pipelines using Azure Data Factory, developed PySpark and Python jobs for large-scale data processing, and implemented performance optimization techniques. I am well-versed in Delta Lakes/Lakehouse concepts, data warehousing, and BI integrations, enabling me to design, build, and optimize data pipelines efficiently. My approach focuses on data quality, observability, and monitoring best practices, ensuring scalability and reliability. Moving forward, I look forward to collaborating with your team, following existing standards, and offering innovative solutions to enhance your data workflows. How can we proceed with our collaboration? Best regards,
$8 USD in 37 days
2.0

Hello, I have good experience in Data Engineering. Please start chat so I can share my resume. Regards, Moizam Hussain.
$8 USD in 40 days
1.4

Hello, thanks for posting this project. Your requirements for Azure Data Factory, Databricks, PySpark, and Python align closely with my background in building modern data platforms on Azure. I have deep hands-on experience designing robust ETL/ELT pipelines in ADF, optimizing Delta Lake architectures for performance, and developing scalable PySpark and Python solutions for various data workloads. My work has included complex integrations with SQL-based data sources and supporting downstream BI/reporting needs. I follow a methodical, metrics-driven debugging approach: leveraging logging, data lineage tools, and Spark/ADF monitoring to pinpoint bottlenecks, applying performance tuning both at code and infrastructure levels, and iteratively validating improvements. This ensures your pipelines are efficient, observable, and reliable. Could you share more about your current data stack and the key pain points or priorities you’d like tackled in the initial phase?
$20 USD in 1 day
1.1

Hi, you need ongoing support for Azure ADF + Databricks (PySpark) pipelines: improving performance, maintaining Delta/Lakehouse layers, and debugging slow or failing jobs, 20–30 hrs/week with PKT overlap. I can design, optimize, and support batch and near real-time workloads within your standards. My strongest skill is PySpark performance tuning. I’ve built ADF → Databricks → Delta pipelines with partition tuning, job orchestration, and BI-ready models. Example projects: 1. ADF + Databricks lakehouse for retail analytics (Delta, incremental loads, monitoring). 2. PySpark optimization for slow joins and large shuffles in Azure (cost and runtime reduction). Rate: 8 USD/hour. Availability: 25 hrs/week, overlap with 6 PM–2 AM PKT. Debug approach: check the Spark UI and execution plans, review partitioning and shuffle, validate data skew, tune cluster configs, then add logging and alerts in ADF. One question: are you using job clusters or all-purpose clusters in Databricks? Thank you, Emmanuel
$8 USD in 40 days
1.0

Hello, I am an experienced Azure Data Engineer with hands-on expertise in Azure Data Factory, Databricks, PySpark, and Python for building and optimizing scalable ETL/ELT pipelines. I’ve worked on projects involving ADF-triggered pipelines feeding Databricks Lakehouse architectures with Delta tables, focusing on performance tuning, data quality checks, and reliable monitoring. My approach to debugging slow pipelines includes analyzing ADF activity runtimes, Spark execution plans, partitioning strategies, and Delta optimization (OPTIMIZE/Z-ORDER), followed by targeted fixes. I am available 20–30 hours per week, can overlap with 6 PM – 2 AM PKT, and my hourly rate is $6/hour. I communicate clearly, follow existing standards, and document all work thoroughly for long-term maintainability. Best regards, Zahid
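The Delta Lake maintenance step mentioned here (OPTIMIZE with Z-ORDER) is issued as SQL against the workspace. A small helper that assembles the statement might look like the sketch below; the table and column names are illustrative, and the helper itself is an assumption, not part of any bidder's toolkit.

```python
def optimize_sql(table, zorder_cols=None, where=None):
    """Build a Delta Lake OPTIMIZE statement, optionally filtered
    and with ZORDER BY for data-skipping on the given columns."""
    parts = [f"OPTIMIZE {table}"]
    if where:
        parts.append(f"WHERE {where}")  # restrict compaction to recent partitions
    if zorder_cols:
        parts.append("ZORDER BY (" + ", ".join(zorder_cols) + ")")
    return " ".join(parts)
```

The resulting string would be run via `spark.sql(...)` in a scheduled maintenance job, typically scoped with a `WHERE` clause to recent partitions to keep the compaction cheap.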
$5 USD in 40 days
0.6

Hello, I'm interested in learning more about your Azure Data Engineer project. Could you provide more details on the current challenges you are facing with your data platform, particularly in terms of Azure ADF, Databricks, and PySpark? I can offer insights on troubleshooting and optimizing slow pipelines based on my experience in data engineering. In handling your Azure Data Engineer requirements, I plan to design efficient data pipelines using Azure ADF and Databricks, develop PySpark and Python jobs for various workloads, and ensure data quality and monitoring best practices are implemented. Core Deliverables: - Design and optimize data pipelines in Azure ADF and Databricks - Develop PySpark and Python jobs for batch and near real-time workloads - Implement best practices for data quality, observability, and monitoring - Collaborate with your team, follow standards, and document work Expertise & Portfolio: I'll share my portfolio with you in the DM. Kindly ping me there. My experience with Azure ADF, Databricks, PySpark, and data engineering ensures quality, consistency, and smooth delivery. I'd be happy to discuss your project further and answer any questions. Best regards,
$5 USD in 40 days
0.0

Hi there, I am an experienced Azure Data Engineer with a strong background in Azure Data Factory (ADF), Azure Databricks, PySpark, and Python. I have successfully designed and optimized data pipelines in Azure ADF and Databricks, developed PySpark and Python jobs for various workloads, and implemented best practices for data quality and monitoring. My experience with SQL-based data sources and data warehousing, along with my ability to collaborate with internal teams and propose scalable solutions, makes me a valuable asset for ongoing data platform support and enhancement. I have worked on projects involving Azure Synapse, CI/CD for data pipelines, and data modeling. My approach to debugging and optimizing slow pipelines involves thorough analysis, identifying bottlenecks, and implementing performance enhancements. I am excited about the opportunity to contribute to your project and look forward to discussing further details. Regards, Matheus
$6 USD in 40 days
0.0

As an experienced Azure Data Engineer with a strong background in Azure Data Factory, Databricks, PySpark, and Delta Lakes, I am well-suited to support and enhance your data platform. I have designed and optimized ETL/ELT pipelines in ADF and developed PySpark jobs for large-scale data processing, including a Dockerized end-to-end claims pipeline and real-time sentiment analysis solutions. My rate is 6 USD/hour, with typical availability of 25-30 hours per week, ensuring overlap with 6 PM – 2 AM PKT. When debugging slow pipelines, I systematically analyze ADF monitoring logs, Databricks Spark UI metrics, and source system performance, then optimize with techniques like data partitioning, Delta Lake Z-ordering, and PySpark broadcast joins.
$5 USD in 40 days
0.0

Hi there, We understand you need ongoing Azure Data Engineering support to maintain and optimize ADF pipelines, Databricks/PySpark workloads, Delta Lake architecture, and SQL-based warehouse integrations. SEO Global Team has built and optimized large-scale ETL/ELT systems, Spark jobs processing millions of records, and production-grade Azure data platforms with CI/CD and monitoring standards. We will enhance your existing pipelines through performance-tuned PySpark, structured ADF orchestration, Delta optimization, proactive monitoring, and systematic root-cause debugging to ensure scalable and reliable data flows. **Relevant Projects:** ADF + Databricks ERP ingestion to Delta Lake; PySpark optimization reducing runtime 45%. **Rate:** 8 USD/hour **Availability:** 25–30 hrs/week, UTC-5 with PKT overlap **Debugging:** Log review, Spark plan analysis, partition tuning, data skew checks, monitoring alerts. Best regards, SEO Global Team
$5 USD in 40 days
0.0

I have been working as a database developer for almost 4 years, handling warehousing and ETL processing as well as data verification and validation across millions of records.
$7 USD in 30 days
0.0

New Delhi, United Arab Emirates
Payment method verified
Joined Oct 8, 2020
$2-8 USD / hour