Web Scraping is the process of extracting data or information from an online source such as a website, database, application, etc. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so the customer can make better decisions more quickly without any manual labour. If you are looking for a talented professional to build a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
Our customers rated Web Scraping Specialists an average of 4.9 / 5 stars across 363,937 reviews.
I need a single Google Sheet that lets me type a Home Depot or Lowe’s model number in one column and, without any extra clicks, instantly fills the rest of the row with: • Brand • Product title • Full description • Current price on Home Depot • Current price on Lowe’s • Main image URL I’d like the sheet to refresh these fields automatically once every day so I always see up-to-date pricing. I’m happy to use the HASData API (or another service if you can show a better option), and I’ll cover the subscription cost myself; I just need you to wire everything up in Apps Script or another reliable method so the calls stay within API limits and don’t break if the catalog grows. Deliverables • A Google Sheet t...
I already have a fully working bot whose main task is to monitor the listings available on OLX and automatically find the ones that meet specified criteria. I want to register this application on the Polish version of the OLX developer portal, but my first application was rejected over a “Data Scraping/Competitor Monitoring” concern. I therefore need a fresh description, compliant with OLX policy, that: • Meets all of the portal’s terms-of-service requirements and passes the verification process without objections. I expect a short, clear text (PL + optionally ENG), ready to paste into the registration form, and, if possible, guidance on the rest of the application process. I care about the precision of the lang...
Title: Senior Python Developer for US Data Pipeline and iOS Verification System (Phase 1) Project Description Suggestion: Overview: > We are looking for a senior Python developer to build an automated data scraping and iOS verification pipeline based in the US. The goal for Phase 1 is to acquire over 10,000 verified leads per day. Core Tasks: 1. Data Scraping: Extract data (name, phone number, age, gender, carrier) from US people search websites. 2. Anti-detection: Must integrate the API and set render=true and super=true. 3. Data Filtering: Implement automatic filtering by wireless/phone number and age range (50-90 years old). 4. Data Verification: Integrate the LoopLookup API to verify iMessage activation status. 5. Data Export: Automatically sort and export data to tagged .t...
I need someone to set up my web app and iOS and Android apps with Apple OAuth.
Hey! I’m looking to hire an experienced developer to build a universal product-detail scraping pipeline that takes a product URL (any website) and returns a complete structured product record. This is not a “simple HTML parse.” Many target sites are React/Next/Vue, load content via XHR/GraphQL, hide details behind tabs/accordions/modals, and lazy-load images/PDFs. The solution needs to reliably extract everything a human can see on the page, plus the underlying data used to render it. What the scraper must do (high level) Given a product URL, the pipeline should: Load the page like a real user (handle cookies/overlays). Capture all content from multiple sources (DOM + network + interactions). Use GPT API strategically to increase accuracy (field mapping, variant ext...
Hello, We are looking for a "hacker-level" Python scraper specialist who excels at cutting-edge data extraction techniques. We are specifically looking for an expert who can scrape a wide range of data, including emails, from , fetching data efficiently while avoiding credit costs. {If you read this completely, start with mj.} If you have the technical skills to overcome these limitations and deliver high-volume results, then you are the expert we are looking for. Let's discuss the project in detail.
I need help collecting a clean, well-structured list of Twitter accounts that consistently post about AI, possibly broken down by AI category (open source, ML, AI, general AI). Instead of handing you a fixed list, I’ll define the selection rules (for example: minimum follower count, specific AI-related keywords, recent activity, etc.) - minimum follower count 5,000 and at least several posts with 100+ likes/retweets. Once those criteria are agreed on, you’ll locate the matching profiles and extract two data points per account: • the public profile bio • the direct profile link (around 1M+ profiles) Please return everything in a single CSV file, one row per influencer. Feel free to use Python, Tweepy, Twitter API v2, ScraperAPI, or another reliable method—as long...
AI Automation for Finance Analytics AI / Machine Learning DO NOT BID IF BIDDING FOR 40-HOUR WORK WEEK WE ARE LOOKING FOR A CONSULTANT / BUILDER / TUTOR TO WORK WITH OUR TEAM 3-5 HOURS A WEEK TO BUILD THE SYSTEM JOINTLY DO NOT BID FOR LONGER THAN THOSE HOURS. DO NOT BID FOR FULL-TIME WORK DETAILS OF WHAT I NEED HELP WITH I run a real estate private equity and hotel development platform. We want to replace manual analysis and reporting with a practical AI workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: -Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal ...
I have a single Instagram Reel that was publicly available for roughly a year before being removed or placed in archive. I saved every trace I could—direct links, full-length screen recordings, and the search-engine cache hits that still reference the post. What I now need is a technical reconstruction of its viewership data. Your objective is to extract and corroborate: • Number of views over time (ideally plotted or tabled) • Any available demographic clues about who watched it • Engagement rates the Reel achieved while live Because the original URL now returns a 404, I expect most of the intel will come from open-source techniques: exploring Web archives (Wayback Machine snapshots, Google cache, ), digging into any residual JSON, and cross-referencing with Ins...
I need an experienced Python trading-bot developer to optimize and refactor a live async trading bot connected to REST & WebSocket APIs, which currently slows under load and misses ticks/orders. The task includes profiling bottlenecks, improving async/WebSocket performance, optimizing pandas & SQLite usage, and ensuring real-time execution. Goal: <200 ms tick-to-order latency, zero missed ticks, clean refactored code, tests, and one-command VPS setup.
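The low-latency requirement in the posting above usually comes down to decoupling the market-data reader from order placement so a slow order never blocks tick intake. A minimal sketch of that pattern, assuming a hypothetical tick payload (the field names and queue shape are illustrative, not the Kalshi API):

```python
import asyncio
import time

async def tick_feed(queue, n_ticks):
    """Simulate a WebSocket feed by pushing timestamped ticks into a queue."""
    for i in range(n_ticks):
        await queue.put({"tick_id": i, "recv_ns": time.perf_counter_ns()})
        await asyncio.sleep(0)  # yield control, as a real socket reader would

async def order_worker(queue, orders):
    """Drain ticks and 'place orders', recording tick-to-order latency."""
    while True:
        tick = await queue.get()
        if tick is None:  # sentinel: feed closed
            break
        latency_ms = (time.perf_counter_ns() - tick["recv_ns"]) / 1e6
        orders.append((tick["tick_id"], latency_ms))

async def main(n_ticks=100):
    queue, orders = asyncio.Queue(), []
    feed = asyncio.create_task(tick_feed(queue, n_ticks))
    worker = asyncio.create_task(order_worker(queue, orders))
    await feed
    await queue.put(None)
    await worker
    return orders

orders = asyncio.run(main())
print(f"processed {len(orders)} ticks, max latency {max(l for _, l in orders):.3f} ms")
```

Because the queue buffers bursts, a momentary stall in the worker delays orders but never drops ticks; profiling then focuses on keeping per-tick work (pandas/SQLite writes) off this hot path.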
I need every product on copied into my existing WooCommerce shop so the catalog mirrors theirs one-to-one. That means grabbing each item’s title, plain-text description and all associated images, then pushing them into WordPress with the correct categories, colour swatches and size variations exactly as they appear on Furnx. Only product details are required right now—reviews and live stock counts can wait for a later phase—so the job is focused on clean data capture and a flawless import workflow. Descriptions must remain in plain text; no extra HTML markup. Images should arrive attached to the right variation, including separate gallery shots where available, and the colour options need to show as clickable swatches in WooCommerce, not just text labels. I’m...
I need to automate queries against the following SRI page: When a RUC or national ID is entered, I must retrieve and save these exact fields to a JSON file: • Taxpayer status • Legal name • “Ghost taxpayer” (Contribuyente fantasma) indicator • Main economic activity Technical requirements: – The script must run on demand, i.e. I can execute it manually whenever needed, with a list of RUCs as input. – The output must be one JSON file per queried ID. – Include a brief README with installation and usage instructions. Acceptance criteria: 2. The average time per query must not exceed a reasonable limit, to avoid bl...
I have two source spreadsheets that I need merged and enriched through automated scraping: • “File 1” – 170 k Spanish local businesses with emails • “File 2” – 65 k additional businesses with websites only Phase 1 – Email extraction Using a Python script and well-known libraries (requests, BeautifulSoup, Scrapy or similar), scan every site listed in File 2, capture all working email addresses you can locate, then append them to the corresponding rows so I can produce a unified “File 3”. Phase 2 – Offer harvesting Next, visit each live site in File 3. Where an offer, deal or promotion is publicly displayed, record the details in a fresh Excel sheet with these exact columns: Business ID | Business Name | Offer...
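The Phase 1 email-extraction step described above is essentially a regex sweep over each fetched page. A minimal sketch, using only the standard library and an inline HTML sample standing in for a fetched File 2 site (in the real job, each URL would be fetched with requests and passed through the same function):

```python
import re

# Simple address pattern; intentionally permissive, since the goal is harvesting.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html: str) -> list[str]:
    """Return unique email addresses found in a page, preserving first-seen order."""
    seen, out = set(), []
    for match in EMAIL_RE.findall(html):
        addr = match.lower()  # dedupe case-insensitively
        if addr not in seen:
            seen.add(addr)
            out.append(addr)
    return out

# Inline sample page (hypothetical addresses).
sample = '<a href="mailto:info@example.es">Contacto</a> ventas@example.es info@EXAMPLE.es'
emails = extract_emails(sample)
print(emails)  # → ['info@example.es', 'ventas@example.es']
```

Appending the results to File 2's rows to produce File 3 is then a straightforward spreadsheet join on the website column.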
Summary We’re a growing digital marketing agency looking for an experienced automation specialist to help us design and implement scalable internal workflows. We’re moving into a more automated operating model and want to work with someone who can both advise on best practices and build the systems with us. What We Need We’re looking to automate processes such as: Automatically scraping a new client’s website and relevant public social profiles upon signup Structuring and exporting that data into organized files (Google Drive/Docs/Sheets) Creating standardized client folder structures in Google Drive Connecting onboarding forms to project management tools Automating internal task creation for our team Integrating AI tools (e.g. GPT workflows) into onboardin...
I need a data scraping expert to help generate leads from a list of websites. Requirements: - Scrape contact information, product listings, or user reviews (to be specified). - Work from a provided list of URLs. Ideal Skills: - Experience with data scraping tools and techniques. - Ability to handle multiple URLs and extract data accurately. - Attention to detail and reliability. Please share your portfolio and relevant experience.
We seek collaboration from responsible professionals to develop a tool for our Controls, process automation use () that will be used in conjunction with our websites by our sales folks. We shall own all the work you will develop or use for development. The attachment provides the detailed scope for you. Include some reasonable time for training our folks about your work (so we can maintain) & warranty for 3 months after final acceptance, deployment. Submit relevant response to our JD only with relevant examples of work done in the past by you for satisfied clients. If no relevant examples are submitted, we shall assume you have not done this kind of work (that will not disqualify you). If you intend to use any 3rd party tools (specify now), get approval & provide their source code...
My goal is to boost overall search accuracy across web, conversational, and voice-based platforms, and I need a small team that can run continuous quality checks on three fronts: • First, you will rate the relevance of live web search results against real user queries, flagging mismatches and edge cases. • Second, you will review AI-generated snippets, answers, and summaries, highlighting factual errors, bias, or tone problems and suggesting concise fixes. • Third, you will test voice recognition output by speaking prescribed prompts, noting transcription errors, pronunciation gaps, and language-variant issues. I will supply detailed guidelines, evaluation rubrics, and annotation tools; you simply log in, follow the task queue, and record findings inside the platform. ...
I have a collection of websites that contain the text I need organised in a single Excel spreadsheet. Your task is straightforward: visit each assigned site, copy the required text exactly as it appears, and paste it into the correct columns and rows of the workbook I supply. Accuracy and consistency matter more than speed. Please keep original spellings, line breaks, and capitalisation, and double-check that no unseen characters or extra spaces slip in. The spreadsheet already has headers; all you do is populate the empty cells beneath them. I will share: • The list of URLs • A short field-by-field guide so you know which snippet of text belongs in which column • The blank .xlsx file You return: • The completed Excel file, ready for me to import into our system ...
I have an existing Excel sheet of aged Instagram leads and I need a fresh, human-eyed review of every single profile on that list. Your task is simple but detail-oriented: open each handle, confirm it is still active by checking whether they have posted anything hair-related in the last month, and note the information I specify below. Here is exactly what I want verified on every profile: • Recent post activity – record the date of the most recent post so I can see at a glance who is active and who is dormant. • Availability of contact information – confirm whether an email, phone number, or “email” button is visible in the bio or contact section. No bots or scraping tools, please; I want a manual check for accuracy. Deliverables • The original Excel file re...
Industrial Automation Product Data Extraction, Deduplication & Structured Image Collection Project Overview We are an industrial automation parts distributor building a structured product database to support inbound enquiries and SEO growth. We require an experienced data extraction specialist to: Extract structured product data from major industrial / electronic component distributor websites Identify duplicate manufacturer part numbers across multiple sources Merge all unique information into a single consolidated dataset Extract and organise all available product images per part number Deliver a clean, deduplicated, production-ready dataset This project includes: Data extraction Normalization Deduplication Intelligent merging Structured image collection and organisation...
I have a collection of web pages that I need turned into clean, original copy and then loaded into my system. The raw material is plain-text extracted directly from those pages—no numerical or mixed data involved—so the entire job revolves around handling text content only. Here is the workflow I have in mind: first you’ll grab the plain text from each specified URL, strip away anything that isn’t core content, and feed that text into your preferred rewriting engine (OpenAI, GPT-based, or another high-quality NLP tool). The goal is a fluent, human-sounding rewrite that preserves meaning while clearing any potential plagiarism checks. Once the rewrite is approved, you will insert the new text back into the destination I provide (CSV template or the web form in my CM...
I need a sharp, Excel-savvy researcher to turn scattered developer brochures and website data into one clean, filter-ready spreadsheet. Your task is to compile every pre-selling or RFO project you can find from the major developers that operate in my target markets—primarily Metro Manila (with emphasis on Quezon City, Manila, Pasig and Valenzuela) plus key growth hubs in North Luzon. For each project, capture the essentials I use when pitching to buyers: • Developer name and exact project name • Precise location/address • Project type (condo, house-and-lot, lot only) • Highlight amenities offered • Complete payment terms and a sample computation straight from the developer’s price sheet • Contract price range and reservation fee Pleas...
I have a predefined list of topics and I need a methodical web-researcher to comb through the internet, identify credible organization sites related to each item, and capture every relevant online asset they host. My end goal is a clean, well-structured spreadsheet that I can tap for future research. Here’s what I expect: • For every topic on my list, locate organization websites that speak directly to it. • Record the full site URL, the specific page URL where the asset lives, the page title, and a one-line summary of why that page is useful. • If the page offers downloadable material (reports, documents, images, videos, or any other internet asset), note the direct download link. No need to download the files yourself—just give me reliable links. •...
Project Description: Find school districts and charter schools who use a specific vendor for a large list of domains. I am seeking an experienced web scraping specialist to improve our Python script to analyze a large list of school district websites (approximately 4000+ URLs) and identify the ones who show a specific link on any page found in their sitemap. The primary method of identification must be to scan the website's for specific, known vendor links. Deliverables Required 1. A Production-Ready Python Script (.py file): The script must be commented, easily configurable, and capable of reading the provided CSV list, performing the scan, and generating the output CSV. It should handle timeouts and basic error handling gracefully. 2. The Final Results (CSV/Excel File): A c...
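The core of the vendor-detection job above is two small steps: enumerate page URLs from each district's sitemap.xml, then check each page's HTML for the known vendor link. A minimal standard-library sketch with inline sample data (the vendor domain and URLs are hypothetical placeholders):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace; <loc> elements live under it.
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def urls_from_sitemap(xml_text: str) -> list[str]:
    """Pull every <loc> URL out of a standard sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", SITEMAP_NS)]

def page_uses_vendor(page_html: str, vendor_domain: str) -> bool:
    """Flag a page whose HTML links to the vendor's domain."""
    return vendor_domain.lower() in page_html.lower()

sitemap = """<?xml version="1.0"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://district.example/index.html</loc></url>
  <url><loc>https://district.example/staff.html</loc></url>
</urlset>"""

urls = urls_from_sitemap(sitemap)
flag = page_uses_vendor('<a href="https://portal.vendorname.com/login">Login</a>', "vendorname.com")
print(urls, flag)
```

A production version would wrap the fetches in timeouts and try/except per URL (as the deliverables require) and stream results into the output CSV as it goes, so a crash mid-run on 4,000+ sites loses nothing.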
I need an .xlsm workbook whose VBA macro fetches product data from both and lowes.com. When I type a valid item or model number into a row, the code should automatically pull back: product name, full description, regular price, sale price (if available), brand, product type/category, and the main image (inserted into the sheet or stored in an Image column). I work comfortably with VBA, so a concise, well-commented routine is all I need—no step-by-step user guide. The workbook must stay self-contained, relying only on standard references such as Microsoft XML, HTML, or WinHTTP libraries; please avoid external add-ins or Python bridges. Deliverables: • Finished macro-enabled Excel file (.xlsm) ready to test with my own SKU list • Clearly commented VBA code so I can...
Project: Comprehensive Management System. Sector: stationery, toys, and computer supplies (5,000 SKUs). A. Main Objectives Develop or implement a management platform (ERP/POS) that speeds up high-volume counter sales, keeps stock under control (there is currently none), and automates cost comparison against suppliers for fast, intelligent purchasing, or at least eases the purchasing process with an alert system in case the scraping turns out not to be feasible. B. Specific Modules Required 1. Purchasing and Price-Intelligence Module: • Web Scraping: the system must connect to the main suppliers' URLs to extract cost prices in real time. • Compar...
I need assistance merging my current football dataset with a new one. This new dataset will be sourced from online scraping of weather and expected goals (Xg) data. Requirements: - Scrape data from official weather and football statistics websites. - Integrate the following weather data: temperature, humidity, and precipitation. - Work with datasets in Excel format. - Correlate this new data with historical football match data in my existing dataset. Ideal Skills and Experience: - Proficiency in data scraping and data manipulation. - Experience with Excel and handling large datasets. - Familiarity with weather and football data. - Strong analytical skills to ensure accurate correlation of datasets. Looking forward to your proposals!
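The correlation step in the posting above is, at its core, a left join of weather readings onto match rows by date. A minimal sketch with hypothetical rows (the column names are illustrative, not the poster's actual schema; with Excel files the same join is typically done via pandas `read_excel` + `merge`):

```python
matches = [
    {"match_id": "M1", "date": "2024-03-02", "home": "A", "away": "B", "xg_home": 1.7},
    {"match_id": "M2", "date": "2024-03-03", "home": "C", "away": "D", "xg_home": 0.9},
]
weather = [
    {"date": "2024-03-02", "temp_c": 11.0, "humidity_pct": 78, "precip_mm": 0.4},
    {"date": "2024-03-03", "temp_c": 6.5, "humidity_pct": 85, "precip_mm": 2.1},
]

def merge_on_date(matches, weather):
    """Left-join weather readings onto matches by calendar date."""
    by_date = {w["date"]: w for w in weather}
    merged = []
    for m in matches:
        w = by_date.get(m["date"], {})       # missing weather -> row kept, fields absent
        row = dict(m)
        row.update({k: v for k, v in w.items() if k != "date"})
        merged.append(row)
    return merged

rows = merge_on_date(matches, weather)
print(rows[0])
```

For stadium-level accuracy the join key would need to include location as well as date, which is worth agreeing on before scraping begins.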
I need a Selenium-based solution that runs reliably on Windows and opens Google Chrome to simulate human visits to LinkedIn (and occasionally other) profile URLs listed in a Google Sheet. For each URL the program should: • Pull the next unused link from the sheet • Load the page in Chrome, wait a random time between 20 seconds and 3 minutes • Apply truly randomized scrolling patterns while the profile is open so behaviour looks organic • Fire a webhook the moment the visit completes, passing back any ID or payload I define so our CRM reflects the touch instantly Configuration items such as Google Sheet ID, webhook endpoint, minimum/maximum dwell time, and daily visit caps should live in a simple file I can edit without touching code. A short README on installi...
I need a freelancer outside the USA to gather some data and provide me with a code snippet. Ideal skills and experience: - Experience in data gathering - Familiarity with coding in Python, JavaScript, or Ruby - Ability to work independently and deliver accurate results Please provide details on your data collection methods and coding expertise in your bids.
We are building a full internal marketplace analytics web system, not just a reporting script. The system is designed to combine competitive intelligence with internal sales and stock analytics in a single interface. Functional Requirements The system must provide the following capabilities: 1. Product and SKU structure - Each product must be split into individual SKUs based on flavor and volume. - All analytics and reports are built at the SKU level. 2. Our product analytics (primary focus) - Current stock levels (total and per SKU). - Sales volume for selected periods (daily / weekly / monthly). - Reorder recommendations based on stock thresholds and sales dynamics. - Revenue calculations per product and per SKU with period filtering. 3. Competitive analytics - Automated collection o...
I need a lightweight Windows-based application that can interact with a specific website entirely in the background—no browser window or UI should ever be visible. The software must: • Log in with a stored username and password • Navigate through the site, click the necessary elements, submit forms, and collect the returned data • Solve any CAPTCHA the site presents automatically (an API such as 2Captcha, Anti-Captcha, or a comparable service is acceptable) • Return the scraped information in JSON or CSV so it can be consumed by another process A simple tray icon, CLI, or service is fine; the key requirement is headless operation with reliable error handling. Source code and a compiled executable are both expected so I can run the tool on multiple machines...
1. CONTEXTO Y DESAFÍO REAL Proyecto del sector de la trefilería y el galvanizado con más de 40 líneas de producción activas. El desafío no es la falta de información, sino que el conocimiento crítico es volátil: reside en la experiencia de supervisores y operarios veteranos y se transmite de forma verbal. Cuando surge una solución técnica en planta, esta no se documenta y se pierde para el siguiente turno. Buscamos desarrollar un ecosistema de IA que no solo responda preguntas, sino que capture, valide, estructure y democratice el conocimiento técnico que surge en el día a día, creando una infraestructura de inteligencia industrial sostenible a largo plazo. 2. LA SOLUCIÓN: "THE ...
I have a sizable dump of customer records—names, contact numbers, email addresses, and a few extra fields—that must be transferred into a single, well-organized Excel workbook. I will send you the exact header template, so every column you create must match it precisely. Your task involves: • Importing the raw files into Excel (or Power Query, if you prefer) and mapping each entry to the columns I supply: Names, Contact Numbers, Email Addresses, and the additional fields. • Removing every duplicate without losing valid information. • Applying basic data-validation rules (drop-downs, text length limits, email format checks, etc.) so the sheet remains clean long after this project ends. • Consistently formatting phone numbers and email addresses, fixing...
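The dedup-and-validate workflow above maps cleanly onto a small cleaning pass before the data ever reaches Excel. A minimal sketch with hypothetical records (the validation rules shown are simplified stand-ins for the drop-downs and format checks the client would configure in the sheet):

```python
import re

# Deliberately simple email check; Excel data validation would enforce it ongoing.
EMAIL_OK = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

def normalize_phone(raw: str) -> str:
    """Keep digits only; assumes no international prefix handling is needed."""
    return re.sub(r"\D", "", raw)

def clean_records(records):
    """Deduplicate on (name, email), normalize phones, drop invalid emails."""
    seen, out = set(), []
    for r in records:
        email = r["email"].strip().lower()
        if not EMAIL_OK.match(email):
            continue                      # invalid email -> reject the row
        key = (r["name"].strip().lower(), email)
        if key in seen:
            continue                      # duplicate -> keep first occurrence
        seen.add(key)
        out.append({"name": r["name"].strip(),
                    "phone": normalize_phone(r["phone"]),
                    "email": email})
    return out

raw = [
    {"name": "Ada Lovelace", "phone": "(555) 010-2030", "email": "ada@example.com"},
    {"name": "ada lovelace", "phone": "555-010-2030", "email": "ADA@example.com"},  # duplicate
    {"name": "No Email", "phone": "555", "email": "not-an-email"},                  # invalid
]
clean = clean_records(raw)
print(clean)
```

The cleaned rows can then be written out with openpyxl or Power Query against the client's exact header template.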
I have about 500 genuine customer testimonials sitting on another well-known review platform, and they belong on my Google Business profile instead. Every word has already been approved by the original authors, so no rewriting or polishing is required—I want them posted exactly as they appear now. Here is what I need from you: pull each review from the source link I’ll provide, publish it to my Google Business page without altering a single character, then give me proof that every post has gone live (a simple spreadsheet with the review text and a direct Google URL or timestamped screenshot is fine). Accuracy is crucial; I will cross-check that nothing has been omitted or modified. If you already manage multiple Google accounts or have an efficient, policy-compliant workflow ...
Complete Lottery Prediction and Betting Automation System (Focused on Loterías y Apuestas del Estado - Spain) 2. System Features 2.1. Historical Data Collection and Update The system must automatically download complete historical results (drawn numbers, draw dates, prize breakdowns by category, accumulated jackpots) from the first draw of each lottery, directly from or reliable associated sources. Specific sources: Euromillones: (since Feb 13, 2004) La Primitiva: (since Oct 17, 1985 – modern version) El Gordo de la Primitiva: (since Oct 31, 1993) Updates automatic at exactly 00:02 the day after each draw, using ethical scraping (BeautifulSoup/Scrapy) with proper user-agent headers to mimic human behavior. Store data in PostgreSQL (structured) or MongoDB (flex...
I'M TRYING TO RUN THE ATTACHED JUPYTER NOTEBOOK SCRIPT TO GET INFO FROM A WEBSITE, BUT I CAN'T UNDERSTAND WHY IT DOESN'T WORK. I NEED THIS SCRIPT FIXED + PAGINATION ADDED TO FETCH AROUND 2400 RECORDS FROM YELLOWPAGES. I ONLY USE JUPYTER.
I’m looking for a data engineer who can take full ownership of a daily web-scraping workflow aimed at ongoing market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load the daily output into ...
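The parsing-and-cleansing stage listed above (dedupe, standardize, timestamp) is the part most pipelines get wrong first. A minimal sketch over hypothetical raw rows (field names are illustrative; the fetch stage via Playwright/Selenium is assumed to have already produced these dicts):

```python
from datetime import datetime, timezone

def cleanse(raw_rows):
    """Deduplicate on SKU, coerce price to float, stamp each row with UTC time."""
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    seen, clean = set(), []
    for row in raw_rows:
        sku = row["sku"].strip().upper()      # standardize before comparing
        if sku in seen:
            continue                          # duplicate listing -> skip
        seen.add(sku)
        clean.append({
            "product": row["name"].strip(),
            "price": float(str(row["price"]).replace("$", "").replace(",", "")),
            "sku": sku,
            "available": bool(row["available"]),
            "scraped_at": now,
        })
    return clean

raw = [
    {"name": " Widget ", "price": "$1,299.00", "sku": "ab-1", "available": True},
    {"name": "Widget",   "price": "1299",      "sku": "AB-1", "available": True},  # dup SKU
]
daily = cleanse(raw)
print(daily)
```

Scheduling this every 24 hours is then just cron (or a cloud scheduler) invoking the script and appending the output to the delivery store.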
I have a working Python script that talks to the Kalshi prediction-market API, pulls live data, and fires off trades automatically through simple web-request helpers. Functionally it looks solid from my end, but I’m not a developer and would like an expert eye on it before I trust it with larger positions. The review should cover every critical angle—accuracy of the trading logic, efficiency of each call or loop, and robust error-handling so a bad response or network hiccup never leaves an order hanging. Because the script relies heavily on APIs and a small amount of web-scraping, please verify that authentication, rate-limit handling, and data parsing follow best practices and won’t put the account at risk. Deliverables • A line-by-line code review (commented or...
I'm seeking a versatile virtual assistant to join my team for 15+ hours per week. The role involves a mix of marketing and admin-related support tasks. The ideal candidate should be skilled in creating pitch decks and PowerPoint presentations, branding and design using Figma, and video editing. Additionally, the role includes web scraping, bookkeeping specific to Australia, and tasks requiring excellent written English. Key Requirements: - Proficiency in Figma for branding and design - Experience in creating engaging pitch decks and PowerPoint presentations - Video editing skills - Ability to perform web scraping tasks efficiently - Knowledge of Australian bookkeeping practices - Strong written English for various tasks Ideal Skills and Experience: - Previous experience as a virtual...
I have three specific school-website links that list all current teachers and administrators. From each page I need a clean scrape of every staff member’s name, role, email address, plus the city/town and the school name, compiled into a single Excel workbook. Alongside that, I already hold an Excel sheet that contains a roster of Tow and roadside drivers. The sheet has their names and the URLs of the companies they work for, but no contact details. Please crawl those company sites, locate each driver’s email address, and append the results to the same workbook, using matching columns so everything stays consistent. Key points to keep in mind: • Final deliverable: one Excel file ready for copy-and-paste outreach. • Source material: my three school websites and...
I am looking for a Python developer to create a simple and focused scraper script for Facebook Marketplace. Project Idea: The script will open a single Facebook Marketplace seller page and: • Extract all product links belonging to that seller only • Ignore any other data (no names, no prices, no images) • The final output should be a list of links only • Each product link on a separate line (link under link) Exact Requirements: • Input: Facebook Marketplace seller page URL • Output: • A file containing all product URLs for that seller • File format: TXT or CSV • Handle infinite scrolling to load all products Technical Requirements: • Python • Selenium or Playwright • Experience with dynamic websites • Clean, ...
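The infinite-scroll requirement above boils down to one loop: scroll to the bottom, wait for new content, and stop when the page height stops growing. A minimal sketch of that loop, abstracted over a page object so it can run without a browser (with Selenium, `height()` and `scroll_to_bottom()` would map onto `execute_script` calls; the FakePage below is a stand-in for testing the logic):

```python
def collect_all_links(page, max_rounds=50):
    """Scroll until page height stops growing, then return the seller's links."""
    last_height = -1
    for _ in range(max_rounds):          # hard cap so a glitchy page can't loop forever
        if page.height() == last_height:
            break                        # no new content loaded -> done
        last_height = page.height()
        page.scroll_to_bottom()
    return list(dict.fromkeys(page.links()))  # dedupe, keep order: one URL per line

class FakePage:
    """Simulates lazy loading: each scroll reveals one more batch of listings."""
    def __init__(self, batches):
        self.batches, self.loaded = batches, 1
    def height(self):
        return self.loaded
    def scroll_to_bottom(self):
        if self.loaded < len(self.batches):
            self.loaded += 1
    def links(self):
        return [u for b in self.batches[: self.loaded] for u in b]

page = FakePage([["https://fb.example/item/1"], ["https://fb.example/item/2"],
                 ["https://fb.example/item/2", "https://fb.example/item/3"]])
links = collect_all_links(page)
print("\n".join(links))
```

Writing `links` out line by line to a TXT file then satisfies the "link under link" deliverable directly.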
I have a set of voter-list PDFs released by the election commission. The layout across all files is identical, so positional parsing is reliable. Right now I simply need the current batch converted, but long-term I want a reusable Python utility that pulls the following six columns straight into Excel: • Name • FathersName • Age • Gender • VoterID • SerialNumber, along with Section Name, Polling Station Name, etc. Scope of work 1. Run the first extraction and hand me the .xlsx file so I can verify accuracy. 2. Package the underlying code (Python 3.x) with clear instructions and any dependencies so I can repeat the conversion on future lists without further help. Technical notes – Consistent layout means you can lean on libraries like pdfplumber, camelo...
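Because the layout is consistent, the conversion can lean on pdfplumber's table extraction. The sketch below assumes the voter tables are detected as ruled tables and that the column order matches the list above; both assumptions would need checking against a real file before the first delivery:

```python
# Sketch: pdfplumber tables -> one .xlsx. Column order is an assumption
# about the real voter-list layout; adjust after inspecting page one.
COLUMNS = ["Name", "FathersName", "Age", "Gender", "VoterID", "SerialNumber"]

def rows_to_records(rows):
    """Turn raw extracted table rows into dicts, skipping header and blank rows."""
    records = []
    for row in rows:
        cells = [(c or "").strip() for c in row]
        if len(cells) < len(COLUMNS) or cells[0] in ("", "Name"):
            continue
        records.append(dict(zip(COLUMNS, cells)))
    return records

def convert(pdf_path, xlsx_path):
    # Requires: pip install pdfplumber openpyxl
    import pdfplumber
    from openpyxl import Workbook
    records = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            for table in page.extract_tables():
                records.extend(rows_to_records(table))
    wb = Workbook()
    ws = wb.active
    ws.append(COLUMNS)
    for r in records:
        ws.append([r[c] for c in COLUMNS])
    wb.save(xlsx_path)
```

Packaging this with a `requirements.txt` and a one-line usage note (`python convert.py input.pdf output.xlsx`) would satisfy the repeat-use requirement in point 2.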
AI Automation for Finance Analytics AI / Machine Learning DO NOT BID IF BIDDING FOR A 40-HOUR WORK WEEK. WE ARE LOOKING FOR A CONSULTANT / BUILDER / TUTOR TO WORK WITH OUR TEAM 3-10 HOURS A WEEK TO BUILD THE SYSTEM JOINTLY. DO NOT BID FOR LONGER THAN THOSE HOURS. DO NOT BID FOR FULL-TIME WORK. DETAILS OF WHAT I NEED HELP WITH: I run a real estate private equity and hotel development platform. We want to replace manual analysis and reporting with a practical AI workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: - Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal...
I am currently using Apify at $1.5 per 1,000 leads. I need this at scale - around 50k emails - so I need a cost-effective solution. Bid on this proposal and I shall DM you; I need to know the cost for: 1. Apollo emails 2. LinkedIn emails
Hindi and Indonesian Safety Hardening and Safety Dataset - Annotation 1. Annotation Requirement Description This annotation task aims to construct safety datasets for Hindi and Indonesian through manual annotation. 1.1 Basic Task Information Task Summary: Annotate five types of raw data (sensitive words, text samples, image samples, "image-text" pairs, "video-text" pairs) in Hindi and Indonesian according to requirements. Deliverable Types and Formats: a. Sensitive Words: Words, phrases. Delivered in Excel and JSONL formats only. b. Text Samples: Sentences, paragraphs. Delivered in Excel and JSONL formats only. c. Image Samples: Images in JPG or PNG format, stored in folders. Delivered in Excel and JSONL formats, plus the corresponding attachment folders. d. "Image-Text" Pairs...
I need a one-time, UK-wide scrape that captures every wedding-related business you can find across England, Wales, Scotland and Northern Ireland - no single-directory limitation, so feel free to pull from any public site that meets the brief. Deliverable • A single Excel file containing the following columns: URL, Business Name, Full Address, Post Code, Telephone, and every email address that appears on the site (not just the first one you find). • The sheet should be neatly de-duplicated and ready to filter/sort. Business types to include • Wedding & Bridal Wear • Wedding Planners / Services • Wedding Cars, Horse & Carriages • Wedding Venues • Photographers & Videographers • Florists & Wedding Flowers •...
I need a small automation script that periodically checks item availability on the Bigbasket website and pings me on Telegram the moment any of the tracked products come back in stock. You are free to choose the underlying tech stack (Python + Requests/BeautifulSoup, Selenium, Playwright, or a headless browser of your choice) as long as it works reliably with Bigbasket’s current site layout and protects my account from rate-limit blocks or captchas. The flow I have in mind is straightforward: I feed the bot a list of product URLs (or SKUs). It runs on a schedule I can change—every few minutes during peak shortages, maybe every hour otherwise—grabs the stock status, and fires a concise Telegram message whenever the status flips from “Out of Stock” to “Av...
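The flow described above can be sketched in a few lines using only the standard library. Everything page-specific is an assumption: the "Out of Stock" marker text stands in for whatever Bigbasket's current markup actually shows, and `bot_token`/`chat_id` are the usual Telegram Bot API placeholders a freelancer would wire up via @BotFather:

```python
# Sketch of the poll-and-alert loop. The stock marker and selectors are
# assumptions about Bigbasket's current pages; verify before relying on it.
import time
import urllib.parse
import urllib.request

def flips_to_available(prev_status, curr_status):
    """True only on the "Out of Stock" -> "Available" transition."""
    return prev_status == "Out of Stock" and curr_status == "Available"

def check_status(url):
    # Placeholder check: fetch the page and look for an out-of-stock marker.
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "ignore")
    return "Out of Stock" if "Out of Stock" in html else "Available"

def notify(bot_token, chat_id, text):
    # Telegram Bot API sendMessage endpoint.
    api = "https://api.telegram.org/bot%s/sendMessage" % bot_token
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    urllib.request.urlopen(api, data=data, timeout=30)

def run(urls, bot_token, chat_id, interval_s=300):
    last = {u: "Out of Stock" for u in urls}  # so the first Available sighting alerts
    while True:
        for u in urls:
            status = check_status(u)
            if flips_to_available(last[u], status):
                notify(bot_token, chat_id, "Back in stock: " + u)
            last[u] = status
        time.sleep(interval_s)
```

Making `interval_s` a config value covers the "every few minutes during peak shortages, every hour otherwise" schedule; the rate-limit and captcha protection the client asks for would sit on top of `check_status`.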
I need every public phone number that appears on the site gathered into a single, well-structured Excel workbook. Please crawl the entire site, not just a few sections, and return each number alongside the key profile details that make the data usable at a glance - name, profile URL, and any other easily captured identifiers shown next to the number. A clean .xlsx with one row per profile, no duplicates, and clearly labelled columns is the only deliverable I'm expecting. If you prefer Python, Scrapy, Selenium, Beautiful Soup or a comparable stack, go ahead; I'm interested in results, not the specific toolset, as long as the script can be rerun later should the site content change. Before delivery, double-check that: • every row contains a valid phone number and URL • n...
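The pre-delivery checks at the end of that brief are easy to automate. A small sketch, assuming scraped rows arrive as dicts with hypothetical `phone` and `profile_url` keys; the 7-15-digit rule is a loose sanity bound rather than a full E.164 validator:

```python
# Sketch: validate and de-duplicate scraped rows before writing the .xlsx.
import re

PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,}$")

def valid_phone(s):
    """Loose check: phone-like characters only, 7-15 digits overall."""
    digits = re.sub(r"\D", "", s or "")
    return bool(PHONE_RE.match((s or "").strip())) and 7 <= len(digits) <= 15

def clean_rows(rows):
    """Keep rows with a valid phone and URL; drop duplicate (url, number) pairs."""
    seen, out = set(), []
    for row in rows:
        phone, url = row.get("phone", ""), row.get("profile_url", "")
        if not valid_phone(phone) or not url.startswith("http"):
            continue
        key = (url, re.sub(r"\D", "", phone))
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out
```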
I need a reliable scraper that monitors every basketball league listed on Bet365. The script must do two separate pulls for each game: Objective 1 • Run #1 - as soon as Bet365 publishes the starting lineup. • Run #2 - again on game day, no later than one hour before tip-off. For each run, capture teams and scores, all published lineups and odds, plus the Q1 Total and full quarter and half statistics as soon as they appear. The goal is to analyse how the line and odds move between the first and second snapshot, feeding a broader betting-strategy model, so accuracy and time-stamping are essential. Store everything in a structured database of your choice (PostgreSQL or MySQL are fine). The tables must let me query: • first-pull values • second-pull val...
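Once both timestamped snapshots are in the database, the movement analysis itself is a per-field diff. A sketch, assuming each pull is loaded as a dict; the field names below (`q1_total`, `home_odds`, `away_odds`) are placeholders for whatever the agreed schema actually calls them:

```python
# Sketch: per-field delta between the lineup-release pull and the
# game-day pull. Field names are assumed, not Bet365's own.
def line_movement(first_pull, second_pull,
                  fields=("q1_total", "home_odds", "away_odds")):
    """Return second-minus-first movement for each tracked field."""
    return {f: round(second_pull[f] - first_pull[f], 3) for f in fields}
```

Running this per game and storing the deltas alongside the raw snapshots gives the strategy model exactly the first-pull vs second-pull comparison the brief asks to query.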
I need help streamlining a small questionnaire that captures only open-ended answers. Respondents will be typing directly into a web form, and I simply want each answer stored and exported as clean, plain-text strings—no JSON, CSV, or additional metadata layers. Your task is to: • Set up the formatting logic so every submission is saved exactly as entered, preserving paragraph breaks but stripping any extra HTML or special characters the form might inject. • Provide a straightforward way for me to download or copy that text in bulk once the survey closes. If you prefer, a lightweight script or form-handler (PHP, Python, or JavaScript are all fine) that writes the responses into a flat .txt file or an equivalent plain-text store will meet the requirement. Please keep th...
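The formatting logic above fits in a short form handler. A minimal Python sketch, assuming submissions may carry stray HTML from the form widget; the separator line between entries is an arbitrary choice, not something the brief specifies:

```python
# Sketch: strip injected HTML but keep paragraph breaks, then append each
# answer to a flat .txt store for bulk download later.
import html
import re

def clean_answer(raw):
    """Save the answer exactly as typed, minus any markup the form injects."""
    text = html.unescape(raw or "")
    text = re.sub(r"<br\s*/?>", "\n", text, flags=re.I)   # keep line breaks
    text = re.sub(r"</p\s*>", "\n\n", text, flags=re.I)   # keep paragraph breaks
    text = re.sub(r"<[^>]+>", "", text)                    # drop remaining tags
    text = re.sub(r"[ \t]+", " ", text)                    # collapse stray spaces
    return text.strip()

def save_answer(raw, path="responses.txt"):
    with open(path, "a", encoding="utf-8") as f:
        f.write(clean_answer(raw) + "\n---\n")  # plain separator between entries
```

Since everything lands in one flat file, "download or copy in bulk once the survey closes" is just fetching `responses.txt`.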
Learn how to find and work with a top-rated Google Chrome Developer for your project today!
Learn how to find and work with a skilled Geolocation Developer for your project. Tips and tricks to ensure successful collaboration.