
Closed
Published
Paid on delivery
I want to see a working proof-of-concept that can watch a live webcam feed in an indoor setting and reliably decide whether someone is merely holding a phone or actively using it. The prototype must process the video stream in real time, recognise the presence of a smartphone, then look for behavioural cues—hand placement, posture and, ideally, gaze direction—to confirm active usage. Whenever the model judges that the phone is being used, it should trigger an audible or visible alarm on the host machine instantly; no other logging or alert channels are required for this first iteration.

I am happy for you to choose your preferred computer-vision stack (e.g. OpenCV, MediaPipe, PyTorch, TensorFlow, ONNX) as long as the end result runs on a typical workstation without specialised hardware. Pre-trained networks are welcome, but please include any fine-tuning scripts so I can reproduce the results. If additional datasets are needed, point me to openly licensed sources or provide clear collection guidelines.

Deliverables
• Source code with clear setup instructions
• A short demo video or live call showing the system detecting phone usage and firing the alarm in real time
• Brief technical notes explaining the model architecture, input preprocessing and the logic you use to distinguish “holding” from “using”

I will test by pointing a webcam at volunteers in an office, so accuracy in ordinary indoor lighting is critical. Let me know how quickly you can turn around an initial build and what dependencies I should have in place.
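The brief leaves the “holding” vs “using” decision open; as a hedged sketch (function name and rule set are illustrative, not part of the posting), the three behavioural cues could be fused per frame like this:

```python
def classify_phone_state(phone_detected: bool,
                         hand_on_phone: bool,
                         gaze_toward_phone: bool) -> str:
    """Combine per-frame cues into one of 'none', 'holding', 'using'.

    Hypothetical rule set: a phone in hand with gaze directed at it
    counts as active use; a phone in hand without gaze is mere holding.
    """
    if not phone_detected:
        return "none"
    if hand_on_phone and gaze_toward_phone:
        return "using"
    if hand_on_phone:
        return "holding"
    return "none"
```

For example, a detected phone with a hand on it but no gaze toward it would classify as "holding"; real deployments would feed this from per-frame detector and landmark outputs.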
Project ID: 40293524
37 proposals
Remote project
Active 28 days ago
37 freelancers are bidding on average €22 EUR for this project

Hello Sir, Imagine seeing a real-time demo of our solution before officially committing—I'm excited to offer that to you. Our active phone usage detection system uses advanced computer vision techniques to accurately assess user behavior, ensuring reliable detection in various indoor settings. Let’s discuss a detailed plan and schedule this demo, which promises to improve your operational efficiency immediately. Regards, Smith
€19 EUR in 7 days
5.7

Hello, I read your project about building a proof-of-concept that watches a live webcam feed and determines whether someone is simply holding a phone or actively using it, then triggers an alarm in real time. I’ve worked on computer vision prototypes using Python with OpenCV, MediaPipe, and PyTorch for real-time webcam processing. A practical approach would be detecting the phone object first, then combining pose/hand landmarks and head or gaze direction to classify “holding” vs “actively using.” This can run locally on a normal workstation and trigger a visual or sound alert instantly when usage is detected.

A few quick questions so I can design the POC correctly:
– Should the system track only one person at a time or multiple people in the frame?
– Is a simple rule-based classifier acceptable for the first iteration, or do you prefer a small trained model?
– Would you like the alarm as a sound alert, an on-screen warning, or both?
– What operating system will the test machine run (Windows, Linux, Mac)?

I can provide clean source code, setup instructions, a demo video or live walkthrough, and short technical notes explaining the detection logic and preprocessing. Best Regards,
€19 EUR in 7 days
4.2

Hello there, I can build a **real-time computer vision proof-of-concept** that detects when a person is actively using a phone versus simply holding it, using only a standard webcam and CPU-friendly models.

Approach:
• Use **OpenCV** for real-time webcam capture and frame processing
• Detect smartphones using a **pre-trained object detection model (YOLO/ONNX)**
• Track **hands and posture via MediaPipe** to see if the phone is being interacted with
• Estimate **head/gaze direction** to determine attention toward the phone
• Combine these signals with a lightweight classifier to distinguish **holding vs active use**
• Trigger an **instant audible or visual alarm** when active usage is detected

Deliverables:
• Full **source code with setup instructions**
• Scripts for optional **fine-tuning or dataset extension**
• Demo video or live session showing **real-time detection and alarm trigger**
• Short technical notes explaining **model architecture, preprocessing, and decision logic**

The system will be optimized for **real-time performance on a typical workstation in indoor lighting** and structured so you can easily retrain or extend it later. Happy to discuss datasets and deliver an initial prototype quickly.
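The hand-tracking step in the approach above reduces to simple geometry once MediaPipe-style normalized landmarks and a detected phone bounding box are available; a minimal sketch (function names, margin, and hit-count threshold are assumptions, not the bidder's actual code):

```python
def point_in_box(x, y, box, margin=0.05):
    """Check whether a normalized (x, y) point lies inside `box`
    (x_min, y_min, x_max, y_max), expanded by `margin` on each side."""
    x_min, y_min, x_max, y_max = box
    return (x_min - margin <= x <= x_max + margin and
            y_min - margin <= y <= y_max + margin)

def hand_on_phone(fingertips, phone_box, min_hits=2):
    """Declare hand-phone contact when at least `min_hits` fingertip
    landmarks overlap the (expanded) phone bounding box."""
    hits = sum(point_in_box(x, y, phone_box) for x, y in fingertips)
    return hits >= min_hits
```

In practice the fingertip coordinates would come from MediaPipe's normalized hand landmarks and the box from the object detector, both in the same [0, 1] image coordinate frame.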
€19 EUR in 7 days
4.3

Hello, I hope you're doing well. I understand you're seeking a proof-of-concept system that can detect active phone usage through a live webcam feed. This project is challenging and engaging, focusing on real-time processing, recognizing smartphones, and interpreting behavioral cues such as hand placement and gaze direction. I plan to implement a solution using OpenCV and possibly integrate pre-trained models with fine-tuning scripts to optimize for indoor settings. The alarm trigger will be instantaneous upon detection, ensuring real-time feedback. I will also include detailed setup instructions, source code, and a demo video to showcase the system in action. Given the need for accuracy in typical office lighting, I’ll ensure the model is robust for your testing scenario. How soon would you like to see the initial build? I'd like to have a chat with you at least so I can demonstrate my abilities and prove that I'm the best fit for this project. Warm regards, Natan.
€25 EUR in 1 day
2.4

Hi there, I can build this kind of real-time phone-usage detection quickly with a CPU-friendly OpenCV/MediaPipe stack that runs on a typical workstation. I’ll deliver a working PoC that streams your webcam, detects a smartphone, and analyzes hand placement, posture, and gaze cues to decide between simply holding and active use, triggering an audible/visual alarm on the host immediately when active use is detected. Deliverables include clean source code with setup instructions, a short demo video or live call showing alarm triggering, and concise technical notes on model architecture, preprocessing, and the rule logic for distinguishing holding from using. I’ll provide pre-trained models plus fine-tuning scripts and explicit data-collection guidelines if you want to tailor the system to your office lighting. I can target a fast initial build, roughly 5 days for a PoC and a clear path to a fuller iteration, using widely used libraries (OpenCV, MediaPipe, PyTorch/TensorFlow optional) and standard webcam hardware. The approach remains CPU-first, with optional acceleration if you later add CUDA-capable GPUs. Best regards,
€25 EUR in 7 days
2.3

Dear Client, I am excited to submit a proposal for your project "Active Phone Usage Detection Via Webcam." With my expertise in C Programming, Python, CUDA, Machine Learning, C++ Programming, OpenCV, and Computer Vision, I am confident in delivering a working proof-of-concept that meets your requirements. Using a computer vision stack such as OpenCV or TensorFlow, I will develop a model that can accurately detect phone usage in real time based on behavioral cues. I will ensure that the system triggers an alarm promptly when phone usage is detected, without the need for additional alert channels. My experience in developing complex applications and my proficiency in Machine Learning make me a strong candidate for this project. I guarantee clear setup instructions and a demo video showcasing real-time detection and alarm triggering.
€21 EUR in 3 days
3.4

Hi there, I am excited about the opportunity to work on your project involving real-time detection of phone usage from a live webcam feed in an indoor setting. With my expertise in computer vision and machine learning, I am confident in my ability to deliver a working proof-of-concept that meets your requirements. I propose to utilize a combination of OpenCV and TensorFlow for this project, leveraging pre-trained models and fine-tuning scripts to achieve accurate results. By analyzing hand placement, posture, and gaze direction, I will develop a robust algorithm to distinguish between "holding" and "using" a phone, triggering alerts promptly when active usage is detected. My approach will focus on optimizing the model for accuracy in ordinary indoor lighting conditions, ensuring reliable performance during testing with volunteers in an office environment. I will provide clear setup instructions, a demo video showcasing real-time detection and alarm triggering, and detailed technical notes on the model architecture and logic used. I am committed to delivering high-quality results within a quick turnaround time. Let's collaborate to bring this innovative concept to life. I am ready to get started and look forward to discussing further details with you. Ihsan Faridi
€19 EUR in 7 days
2.0

Hello, I can build a real-time proof-of-concept system to detect active phone usage from a live webcam feed in indoor settings. The system will:
• Detect a smartphone in the user’s hand using a pre-trained object detection model (e.g., MediaPipe or YOLO).
• Analyze hand position, posture, and gaze direction to distinguish “holding” from “actively using.”
• Trigger an instant audible or visible alert on the host machine whenever usage is detected.

Stack recommendation: Python with OpenCV + MediaPipe for pose/hand tracking, optionally PyTorch for a fine-tuned smartphone-use classifier. The system will run on standard workstations without specialized hardware.

Deliverables:
• Fully commented source code with setup instructions
• Demo video or live walkthrough showing detection and alarm
• Technical notes detailing model architecture, preprocessing, and logic for usage detection

I can provide guidance on datasets, including openly licensed hand and object datasets, and instructions for fine-tuning to improve accuracy in indoor lighting. Ready to start immediately; I can deliver a working prototype within 1–2 weeks, depending on webcam access and test subjects. Best regards, Shabahat Habib
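As a sketch of the gaze-direction cue mentioned above: given a head-pose estimate in degrees (which would in practice come from face landmarks plus a PnP solve such as OpenCV's `cv2.solvePnP`, not shown here), a crude "looking down at a hand-held phone" test could be written like this (all angle thresholds are assumptions):

```python
def gaze_toward_phone(head_yaw_deg: float, head_pitch_deg: float,
                      yaw_tol: float = 25.0,
                      pitch_min: float = -60.0,
                      pitch_max: float = -10.0) -> bool:
    """Heuristic: the user is likely looking at a hand-held phone when
    the head is roughly frontal in yaw and pitched downward (negative
    pitch) toward the hands."""
    return (abs(head_yaw_deg) <= yaw_tol
            and pitch_min <= head_pitch_deg <= pitch_max)
```

A production version would also consider the phone's position in the frame rather than assuming it sits below the face.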
€19 EUR in 7 days
3.9

With a keen eye on mobile phone detection and experience in developing comprehensive AI-driven solutions, I am an ideal fit for your project. Having previously worked with OpenCV and PyTorch, I guarantee that I can provide you with a reliable system that effectively distinguishes between the act of "holding" a phone and "using" it. Drawing on my expertise in Computer Vision, Machine Learning, and my recent work on SaaS dashboards, I am confident in developing a real-time application that accurately identifies hand placement, posture, and even gaze direction to detect active phone usage. Since timing is crucial to you, I assure you fast turnaround without prejudice to quality. My strong command over python and penchant for detail allows me to deliver meticulously crafted systems consistently meeting clients' needs. My understanding of database optimization and full-stack development ensures that the end product will be scalable, efficient and user-friendly for your testing set up scenario. To sum up, expect optimized Python code accompanied by concise documentation showcasing the logic and preprocessing methods behind the model architecture. Additionally, I'd be glad to share my insights into the datasets used. To tap into my proven record of building smart systems specifically suited for businesses' requirements and tailored for success, choose me for this task. Let's work together to create technology that addresses real-world challenges while maximizing efficiency!
€25 EUR in 2 days
1.7

Hello, please message me so we can discuss. I would love to talk about this project. Regards, Arusha
€19 EUR in 7 days
0.0

Hi there, detecting active phone usage via webcam is a challenge because distinguishing between holding a phone and actually using it is nuanced. I've spent the last 4 years developing real-time computer vision solutions that tackle similar problems. For your project, I will leverage a combination of OpenCV and pre-trained deep learning models to analyze live webcam feeds. The system will process the video stream to detect smartphone presence and observe behavioral cues like hand placement and gaze direction. When active usage is detected, an alarm will be triggered promptly. I'll ensure that the setup works seamlessly on a standard workstation, including providing fine-tuning scripts and sourcing any required datasets.
€21 EUR in 1 day
0.0

Hi there, I read that you want a real-time proof-of-concept that watches a webcam feed and distinguishes holding a phone from actively using it, with an instant alarm on the host. A common pitfall is over-reliance on single cues; you’ll need a small multi-cue model (hand position, posture, gaze) to reduce false alarms in office lighting.

Action plan:
- Build a lightweight OpenCV+MediaPipe pipeline with a small ML head for action cues.
- Add real-time alert logic and test in varied indoor lighting.
- Provide fine-tuning scripts and a simple setup for reproducibility.

A similar project I delivered used a hand-posture model to trigger alerts with 95% accuracy in controlled lighting; the live demo matched that result.

Relevant work examples:
- https://www.freelancer.com/portfolio-items/11243527-python-ocr-program-development
- https://www.freelancer.com/portfolio-items/11243474-wizard-101-player-bot-using-python

Would you like me to target a specific frame rate or hardware baseline to lock the scope? Best regards, Thando
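The false-alarm concern raised above is typically addressed with temporal smoothing on top of the per-frame classifier; a minimal sketch (class name and frame counts are illustrative, not from any bidder's code) that fires only after several consecutive positive frames and releases after a run of negatives:

```python
class UsageDebouncer:
    """Fire an alarm only after `trigger_frames` consecutive positive
    frames, and clear it after `release_frames` consecutive negatives,
    so single-frame detector glitches cannot toggle the alarm."""

    def __init__(self, trigger_frames: int = 15, release_frames: int = 30):
        self.trigger_frames = trigger_frames
        self.release_frames = release_frames
        self.pos = 0   # run length of consecutive "using" frames
        self.neg = 0   # run length of consecutive non-"using" frames
        self.alarm = False

    def update(self, using_this_frame: bool) -> bool:
        if using_this_frame:
            self.pos += 1
            self.neg = 0
            if self.pos >= self.trigger_frames:
                self.alarm = True
        else:
            self.neg += 1
            self.pos = 0
            if self.neg >= self.release_frames:
                self.alarm = False
        return self.alarm
```

At a typical 30 fps webcam rate, the defaults above would correspond to roughly half a second of sustained usage before the alarm fires.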
€25 EUR in 1 day
0.0

Hello there! What criteria will you prioritize to differentiate between holding a phone and actively using it in various indoor lighting conditions? How do you want the system to handle ambiguous hand placements or partial occlusions? The challenge lies in interpreting subtle behavioral cues in real-time video streams, which can vary greatly with lighting and positioning; accurate detection while minimizing false alarms is critical to this project's success. I will build a prototype using reliable computer vision frameworks optimized for typical workstations to ensure responsiveness and accuracy. The system will trigger immediate audible or visible alarms upon detecting active phone usage. I can deliver the initial version promptly, with clear setup instructions and a demo showcasing the detection mechanism and alarm functionality. I look forward to collaborating on this project. Best regards, Dorofii
€8 EUR in 3 days
0.0

Hello, I’m excited to propose a practical, production-ready approach to your Active Phone Usage Detection project. As a Senior Full-Stack Developer with a strong focus on MERN and SaaS, I bring hands-on experience architecting and delivering real-time computer-vision solutions that run efficiently on standard workstations. For this PoC I recommend a Python-based stack leveraging OpenCV for video capture, MediaPipe or PyTorch/ONNX for pose and gaze cues, and a lightweight state-machine to decide between 'holding' vs 'using' a smartphone. The system will process frames in real time, detect a smartphone presence, analyze hand placement, posture, and gaze direction, and trigger an audible/visual alarm on the host when active use is confirmed. Deliverables include clean source code with setup instructions, a short live demo or video, and concise technical notes on model architecture, input preprocessing, and decision logic. I will structure the project for portability and reproducibility: containerized dependencies (Docker), a minimal CUDA-accelerated path if available, and clear scripts to fine-tune on provided datasets or openly licensed sources. I’ll include fine-tuning scripts and dataset collection guidelines so you can reproduce results with different office environments and lighting conditions. The solution will run on typical workstations without specialized hardware, and I’ll provide a baseline accuracy assessment and guidance to improve robustness under indoor lighting
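The "lightweight state-machine" idea above could be sketched with hysteresis thresholds on a fused per-frame usage score, so the reported state does not flicker at the decision boundary (class name and threshold values are assumptions, not the bidder's actual design):

```python
class PhoneUseStateMachine:
    """Three states ('idle', 'holding', 'using') driven by a fused
    per-frame usage score in [0, 1]. Entering 'using' requires a higher
    score (enter_using) than is needed to stay in it (exit_using)."""

    def __init__(self, enter_using: float = 0.7,
                 exit_using: float = 0.4,
                 enter_holding: float = 0.2):
        self.enter_using = enter_using
        self.exit_using = exit_using
        self.enter_holding = enter_holding
        self.state = "idle"

    def step(self, score: float) -> str:
        if self.state == "using":
            # Hysteresis: only leave 'using' when the score drops well below
            # the entry threshold.
            if score < self.exit_using:
                self.state = "holding" if score >= self.enter_holding else "idle"
        else:
            if score >= self.enter_using:
                self.state = "using"
            elif score >= self.enter_holding:
                self.state = "holding"
            else:
                self.state = "idle"
        return self.state
```

The score itself would be some weighted combination of the phone-detection confidence, hand proximity, and gaze cues; the gap between the enter and exit thresholds is what suppresses flicker.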
€8 EUR in 3 days
0.0

Hi there! I can start immediately; I'm ready. I will build a real-time webcam prototype using Python, OpenCV and a lightweight object detection model (YOLO/ONNX) to detect smartphones, combined with MediaPipe hand/pose/face landmarks to analyze hand position, posture and gaze direction to distinguish “holding” vs “actively using.” The system will process the live stream, trigger an instant visual/audio alarm when usage is detected, and include reproducible setup scripts, a fine-tuning workflow, and dataset guidance for indoor environments. I can deliver the initial working prototype and demo within a few days; send access and I’ll start building the proof-of-concept immediately. Best regards, Oleksandr
€19 EUR in 7 days
0.0

Hi there, THE CHALLENGE is developing a real-time computer vision model that accurately differentiates between someone holding a phone and actively using it solely based on webcam feed analysis. The technical difficulty lies in designing a robust algorithm that can detect subtle behavioral cues like hand placement and posture to trigger an alarm effectively. Handling this involves selecting the appropriate computer vision stack, fine-tuning pre-trained networks, ensuring compatibility with standard workstations, and optimizing for accuracy in varying indoor lighting conditions. I plan to address these challenges by leveraging my expertise in computer vision, carefully choosing the right tools, and conducting thorough testing to refine the model's performance. Regards, Matheus
€8 EUR in 7 days
0.0

Hello, As an AI-powered Full Stack Developer, I am equipped with the necessary skills and technical knowledge to successfully complete this project for you. With expertise in computer vision libraries such as OpenCV, MediaPipe, PyTorch, TensorFlow, and ONNX, I have a deep understanding of how to build AI-powered models for object detection and recognizing human behaviors. In addition to my proficiency in the required computer-vision stacks, I am also well-versed in mobile and web application development, which allows me to deliver a coherent and fully functional solution that not only processes video in real time but also comes with clear setup instructions. In fact, I have built numerous apps with similar functionality requiring real-time object detection and classification. Accuracy is of utmost importance here, which is why being meticulous is part of my approach. From understanding your unique challenges to designing a solution that incorporates the right technologies for the job, you can rest assured that my approach is never one-size-fits-all. From setting up your typical workstation so no specialized hardware is needed, to providing clear collection guidelines if additional datasets are required, I am all about clear communication and productivity. In terms of delivery time, I'm confident I can produce an initial build within your given timeframe. Thanks!
€8 EUR in 4 days
0.0

Hi! This is a fascinating computer vision challenge and a solid use case for real-time behavior detection. Building a lightweight proof-of-concept that distinguishes **holding a phone vs actively using it** using posture, hand placement and gaze cues is definitely achievable. I am willing to meet your estimated time and cost expectations.

My plan:
* Build a real-time webcam pipeline using OpenCV
* Detect smartphones using a pretrained object detection model
* Track hands, pose and head direction with MediaPipe
* Combine these signals to classify “holding” vs “active usage”
* Trigger an instant visual or audio alarm when usage is detected
* Provide clean source code, setup instructions and reproducible scripts
* Deliver a demo video or live walkthrough of the system working

The prototype will run on a normal workstation without special hardware. Best regards
€19 EUR in 7 days
0.0

With profound knowledge and years of experience in C and C++ programming, I assure you a high-performance solution for your project. I understand the need for a resource-friendly model that can deliver accurate and real-time results without any specialized hardware. My specialized skills could be harnessed to develop an efficient computer-vision stack using OpenCV libraries which will run smoothly on typical workstations. Moreover, I have hands-on experience with machine learning frameworks such as PyTorch and TensorFlow that would be integral to fine-tuning the pre-trained networks for your unique task specifications. I am confident we can select the most suitable architecture, carry out proper input preprocessing, and design logical mechanisms to precisely distinguish "holding" from "using". Rest assured, I will gladly provide comprehensive and organized source code along with setup instructions, a demo video or live call of the system in action and concise technical notes elucidating our approach. In order to ensure that my solution aligns perfectly with your requirements, openness is key. I promise to keep you informed at all stages of development and iterate promptly if required. By trusting me with this project, you'd gain not just proficiency but also reliability; I am fully committed to setting up a solution that responds accurately even in ordinary indoor lighting. Let's work together to create an innovative solution to benefit your office environment!
€19 EUR in 7 days
0.0

Hi there! I know how frustrating it can be when a system cannot clearly tell if someone is just holding a phone or actually using it. False alarms in indoor lighting can be a big pain. I have worked on real-time computer vision projects using OpenCV, MediaPipe, and PyTorch that detect hand gestures, posture, and objects accurately. I’ve also developed lightweight models that run smoothly on standard workstations without needing specialized hardware. Additionally, I have experience triggering instant alerts based on visual cues in live video streams. My approach will process the webcam feed in real-time, detect smartphones, analyze hand placement and posture, and fire an immediate audible or visual alarm. I will provide full setup instructions, a short demo video, and technical notes so you can reproduce the results easily. Check our work https://www.freelancer.com/u/ayesha86664 Do you prefer the alarm to be visual, audible, or both? Let me know if you’re interested & we can discuss it. Best Regards Ayesha
€25 EUR in 5 days
0.0

SANDIGLIANO, Italy
Payment method verified
Joined September 24, 2017