
DriveAlive


A drowsiness detection system powered by machine learning and convolutional neural networks, designed for rideshare drivers.

Context

How can we effectively identify, control, and prevent risk factors in rideshare services?

Team

Dave (🙋🏻)

Hadley Dalton

Ivery Chen

Healey Koch

Timeline

A 3-week project, May 2024

Skills / Tools

Computer Vision, Python, Figma, JavaScript

Drowsy driving ranks among the top causes of traffic accidents, significantly affecting road safety. Providing early warnings to sleepy drivers has the potential to prevent numerous accidents. According to data from the U.S. Department of Transportation over the past decade, car accidents, including both injuries and fatalities, are on the rise. It is crucial that we, as software engineers, leverage our skills to promote safer driving. In particular, with the increasing reliance on rideshare services like Uber and Lyft, it is essential to implement non-intrusive safety measures that prevent drowsy driving among the drivers providing these services.

My role in the team involved reviewing existing research on similar topics, gathering relevant datasets, and implementing a Convolutional Neural Network with data augmentation. Additionally, I took the lead in designing the detection algorithm. For this project, we aimed to develop a proof-of-concept site that lets users upload or stream a video, which is then analyzed to suggest whether the person is driving drowsy.

Research & System Design

How do we determine what makes someone 'drowsy'?

Research from American University reveals an unexpected finding: contrary to common belief, a drowsy individual blinks less frequently than the norm. This is because each instance of eye closure tends to persist for a longer duration. At DriveAlive, we term these prolonged eye closures "drowsy blinks," and they are indicative of fatigue.

We determine how many "drowsy blinks" a driver exhibits in a short, seconds-long clip. First, the system identifies the face area in every frame of the video. Then, it pinpoints the eye area using a facial landmarks detector. Next, it calculates and analyzes the eye aspect ratio for each frame. Afterward, classifiers such as a linear SVM and a sequential neural network are used to enhance accuracy, and each frame is classified as eyes open or eyes closed. The system flags eyes that remain closed for a set period, tallying drowsy blinks that persist within or span across seconds, and the number of drowsy blinks per minute is then quantified. Even a single drowsy blink should prompt the driver to reconsider driving.
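As a rough illustration of the per-frame measurement, the sketch below computes the eye aspect ratio from facial landmarks. It is a minimal example assuming dlib's 68-point landmark predictor and the standard EAR formula; the classifier stage and the exact thresholds used in DriveAlive are not reproduced here.

```python
# Minimal eye-aspect-ratio (EAR) sketch. Assumes dlib's 68-point landmark
# model file is available locally; names and conventions are illustrative,
# not DriveAlive's exact implementation.
from scipy.spatial import distance as dist
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points around one eye
    a = dist.euclidean(eye[1], eye[5])   # vertical distance 1
    b = dist.euclidean(eye[2], eye[4])   # vertical distance 2
    c = dist.euclidean(eye[0], eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)           # a small EAR suggests a closed eye

def landmark_points(shape, start, end):
    return [(shape.part(i).x, shape.part(i).y) for i in range(start, end)]

def frame_ear(gray_frame):
    """Return the mean EAR of the first detected face, or None if no face."""
    faces = detector(gray_frame, 0)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    right_eye = landmark_points(shape, 36, 42)   # dlib indices for the right eye
    left_eye = landmark_points(shape, 42, 48)    # dlib indices for the left eye
    return (eye_aspect_ratio(right_eye) + eye_aspect_ratio(left_eye)) / 2.0
```

A per-frame EAR series like this is what the downstream open/closed classifiers operate on.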

Data & Methodology

The primary dataset used for this project is the Drowsiness Detection Dataset from Kaggle (a brief loading sketch follows the list):

  • Labeled based on whether eyes are closed or opened

  • Generated using MRL and Closed Eyes in Wild (CEW) dataset

  • 2,500 images for closed and open eyes
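To make the preprocessing concrete, here is a minimal, hedged sketch of loading and augmenting such an image set with Keras. The directory layout, image size, and augmentation parameters are assumptions for illustration rather than the project's exact configuration.

```python
# Assumed layout: data/eyes/closed/*.png and data/eyes/open/*.png
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=10,        # light augmentation for better generalizability
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    validation_split=0.2,     # hold out a slice for validation
)

train_data = datagen.flow_from_directory(
    "data/eyes", target_size=(64, 64), color_mode="grayscale",
    class_mode="binary", subset="training")
val_data = datagen.flow_from_directory(
    "data/eyes", target_size=(64, 64), color_mode="grayscale",
    class_mode="binary", subset="validation")
```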

Model Architecture Illustration
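As a textual companion to the illustration, the sketch below shows one possible open/closed-eye CNN in TensorFlow/Keras. The layer counts and sizes are assumptions and may differ from the architecture actually used.

```python
# A generic binary eye-state CNN; every layer size here is an assumption.
from tensorflow.keras import layers, models

def build_eye_state_cnn(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),                    # regularization alongside augmentation
        layers.Dense(1, activation="sigmoid"),  # P(eye open)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Training on the augmented generators sketched earlier:
# model = build_eye_state_cnn()
# model.fit(train_data, validation_data=val_data, epochs=5)
```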

The implementation process consisted of three main steps (a code sketch follows the list):

  1. Training a Convolutional Neural Network (CNN) on images of open and closed eyes.

  2. Utilizing Haar Cascades to detect the face and isolate the eyes, which were then fed into the CNN for classification.

  3. Identifying drowsiness by counting instances where the eyes remained closed for 15 frames (approximately half a second), classifying it as a “drowsy” blink.
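A hedged sketch of steps 2 and 3 follows: OpenCV's bundled Haar cascade isolates eye regions, the trained CNN classifies each region, and a drowsy blink is counted whenever the eyes stay closed for 15 consecutive frames. The model file name, input size, and the roughly 30 fps assumption behind the 15-frame rule are illustrative.

```python
import cv2
from tensorflow.keras.models import load_model

# The Haar cascade ships with OpenCV; the CNN weights file name is assumed.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
model = load_model("eye_state_cnn.h5")   # assumed: outputs P(eye open)

CLOSED_FRAMES_FOR_BLINK = 15             # ~0.5 s of continuous closure at ~30 fps

def count_drowsy_blinks(video_path):
    """Count prolonged eye closures ("drowsy blinks") in a video clip."""
    cap = cv2.VideoCapture(video_path)
    closed_run, drowsy_blinks = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        any_open = False
        for (x, y, w, h) in eyes:
            patch = cv2.resize(gray[y:y + h, x:x + w], (64, 64)) / 255.0
            prob_open = model.predict(patch[None, :, :, None], verbose=0)[0][0]
            if prob_open > 0.5:
                any_open = True
        if any_open:
            closed_run = 0               # an open eye resets the closure streak
        else:
            closed_run += 1              # frames with no open eye extend the streak
            if closed_run == CLOSED_FRAMES_FOR_BLINK:
                drowsy_blinks += 1       # one prolonged closure = one drowsy blink
    cap.release()
    return drowsy_blinks
```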

Solution

We developed a web page with a user interface as a proof-of-concept product!

A score of 1 indicates 0 drowsy blinks per second, a score of 2 represents between 0 and 0.25 drowsy blinks per second, and a score of 3 signifies over 0.25 drowsy blinks per second. After the 5th epoch, the model achieved nearly 98% accuracy in classifying open and closed eyes on the preprocessed dataset, enhanced with data augmentation for improved generalizability.
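The score thresholds translate directly into a small helper; the function name below is illustrative.

```python
def drowsiness_score(drowsy_blinks_per_second: float) -> int:
    """Map drowsy blinks per second to the 1-3 score described above."""
    if drowsy_blinks_per_second == 0:
        return 1          # no drowsy blinks detected
    if drowsy_blinks_per_second <= 0.25:
        return 2          # between 0 and 0.25 drowsy blinks per second
    return 3              # over 0.25 drowsy blinks per second
```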

Reflection

Developing the proof-of-concept prototype for the drowsiness detection algorithm taught me valuable lessons as I worked through the challenges we faced during the project. These included:

  • Learning how to preprocess and gather the necessary datasets from various sources.

  • Finding creative workarounds to constraints, such as addressing the lack of fine-grained datasets by developing the ‘drowsy blinks’ method, which accounts for blink duration to detect drowsiness.

  • Designing and implementing Convolutional Neural Networks (CNNs) using TensorFlow.

Additionally, this project was a meaningful and memorable experience, allowing me to collaborate with an incredible group of students. We began with research on societal challenges and ended up developing the DriveAlive system together :)

Updated: Fall 2024

From the moment I formed the team to work on Hander (originally named ‘Bearbaters’), it has been an incredible journey—from transforming an idea into a high-fidelity prototype to conducting usability testing, gathering feedback, and developing fully functional iOS applications for the Brown and RISD communities.

Hander is more than just an app; it represents a mission my co-founders and I embraced to make our student community, both on and off campus, more sustainable. Our goal is to create a lasting impact by providing accessible web and iOS applications.

Today, Hander is growing! We’ve expanded to a team of 10 and are actively working on maintaining and scaling our reach by collaborating with school departments and student organizations. Developing Hander from concept to a tangible product has been a rewarding experience, especially navigating the challenges of collaborating with designers and engineers.

Along the way, I tackled key challenges, including:


  • Conducting usability testing and user surveys with Figma prototypes.

  • Prioritizing and backlogging user feedback and bugs while balancing design QA and development velocity.

  • Bridging the gap between technical concepts for designers and creative ideas for engineers.

This journey has taught me the importance of adaptability, teamwork, and a user-first approach to building impactful products.

© 2025 Dave Song. All rights reserved. Made with 🍞 and 🧈
