Swimming Pool Drown Detection and Rescue System (FYP)
A system that detects drowning incidents with a wearable device, pinpoints the swimmer's location using computer vision, and deploys a robotic float for active rescue. I contributed to the concept, research, and WebApp, integrated the computation, wearable, and robotic components, and assisted in wearable development.
- Finalist, President's Cup 2022
- Nominated for ASM Technology Award 2022
Why LEDs, and why 135 ms
Across the projects we examined, the most common challenges were positioning accuracy and signal attenuation. There are many ways to transfer data and determine a device's position wirelessly: one of our teammates had previously worked on a UWB positioning system, and we could also have adopted a more advanced RTK-based method. So why stick with something as simple as LEDs? The answer becomes clear in the following figure:

RTK uses the UHF band, which attenuates quickly underwater; Bluetooth fares better, but is still far from ideal. I tried putting a sports camera under the water, and its signal was lost within a meter. Visible light penetrates water far better than either (around 100x better than Bluetooth) and is the easiest to drive at high output power, so we chose visible light as our transmission method.
Interestingly, the band that penetrates water best happens to be visible light, and perhaps that is no coincidence: I suspect humans evolved to perceive this band, rather than other frequencies, precisely because it penetrates the best.
We use a red–green blinking signal to indicate drowning. This simple scheme is easy to implement, lets both computers and humans locate the signal quickly, and alerts nearby people effectively.
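As a rough illustration, the alternating beacon could be driven along these lines; this is only a sketch, assuming a MicroPython-capable board, and the pin numbers and helper name are hypothetical rather than the actual wearable firmware.

```python
# Minimal sketch of the red-green drowning beacon (assumption: a MicroPython
# board drives the wearable's LEDs; pin numbers are illustrative).
from machine import Pin
import time

RED = Pin(12, Pin.OUT)     # hypothetical pin for the red LED
GREEN = Pin(13, Pin.OUT)   # hypothetical pin for the green LED
HALF_PERIOD_MS = 135       # time spent on each colour; 135 ms is justified below

def drowning_beacon():
    """Alternate red and green so cameras and bystanders can spot the signal."""
    while True:
        RED.on()
        GREEN.off()
        time.sleep_ms(HALF_PERIOD_MS)
        RED.off()
        GREEN.on()
        time.sleep_ms(HALF_PERIOD_MS)
```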
The yellow frame problem
We tested blinking durations of 33 ms, 67 ms, 70 ms, and 133 ms, recording each at 30 fps. When analyzing the footage frame by frame, the red and green light sometimes mix within a single frame and appear yellow.
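To see why this happens, here is a toy back-of-the-envelope simulation (not the project's analysis code): it assumes the shutter stays open for the whole frame interval, which is a simplification, and simply counts frames whose exposure window straddles a red/green transition.

```python
# Toy estimate of mixed ("yellow") frames: a frame is counted as mixed if a
# red/green transition falls inside its exposure window. Assumes the shutter
# is open for the full frame interval, which is a simplification.
def mixed_frame_ratio(half_period_ms, fps=30.0, n_frames=3000):
    frame_interval = 1000.0 / fps          # ~33.33 ms at 30 fps
    mixed = 0
    for i in range(n_frames):
        start = i * frame_interval
        end = start + frame_interval
        # The first colour transition after the exposure starts:
        next_transition = (start // half_period_ms + 1) * half_period_ms
        if next_transition < end:
            mixed += 1
    return mixed / n_frames

for p in (33, 67, 70, 133, 135):
    print(f"{p:>3} ms: ~{mixed_frame_ratio(p):.0%} of frames span a transition")
```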

Yellow frame occurrence
By plotting all frames into a hue histogram, we can visualize and quantify how often yellow frames occur. The two extremes of the histogram represent pure red and pure green frames, while the values in between correspond to varying degrees of yellow. We found that a blinking duration below the Nyquist limit for 30 fps recording (e.g., 33 ms, against a minimum of roughly 67 ms) performs noticeably worse, and that performance improves as the duration increases.
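The histogram itself can be produced along the following lines; this is only a sketch assuming OpenCV and NumPy, and the brightness threshold, bin count, and file name are chosen for illustration rather than taken from the project.

```python
# Sketch of the hue-histogram analysis (assumptions: OpenCV reads the clip,
# the LED is isolated by a simple brightness threshold, file name illustrative).
import cv2
import numpy as np

def hue_histogram(video_path, bins=36):
    """Accumulate a hue histogram of bright (LED) pixels over every frame."""
    cap = cv2.VideoCapture(video_path)
    hist = np.zeros(bins)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h, _, v = cv2.split(hsv)
        led_pixels = h[v > 200]                 # keep only bright pixels
        hist += np.histogram(led_pixels, bins=bins, range=(0, 180))[0]
    cap.release()
    return hist / max(hist.sum(), 1.0)

# In OpenCV's 0-179 hue scale, red sits near 0, green near 60 and yellow near
# 30, so mass between the red and green bins indicates mixed frames.
print(hue_histogram("blink_133ms.mp4"))
```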

Yellow frame distribution
By plotting the hue error over time, we can see when yellow frames occur; each spike marks a yellow frame. Although 67 ms and 70 ms differ by only 3 ms, the 70 ms sample shows a much more even distribution of yellow frames. This is likely because 67 ms corresponds to about 15 Hz, so the 30 fps frame rate is an exact multiple of the blink rate and the yellow frames cluster together. The effect is even more apparent in the 133 ms sample.
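A per-frame hue-error trace of this kind could be computed roughly as follows; "hue error" is assumed here to mean each frame's distance from the nearer of pure red or pure green, which may differ from the exact metric used in the project.

```python
# Sketch of a per-frame hue-error trace (assumption: error = mean LED hue's
# distance to the nearer of pure red or pure green in OpenCV's 0-179 scale).
import cv2
import numpy as np
import matplotlib.pyplot as plt

def hue_error_trace(video_path):
    cap = cv2.VideoCapture(video_path)
    errors = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        h, _, v = cv2.split(hsv)
        led = h[v > 200]                            # bright LED pixels only
        if led.size == 0:
            errors.append(0.0)
            continue
        mean_hue = float(led.mean())
        dist_red = min(mean_hue, 180.0 - mean_hue)  # red wraps around 0/180
        dist_green = abs(mean_hue - 60.0)           # green sits near 60
        errors.append(min(dist_red, dist_green))    # spike => yellow-ish frame
    cap.release()
    return errors

plt.plot(hue_error_trace("blink_70ms.mp4"))         # file name is illustrative
plt.xlabel("frame")
plt.ylabel("hue error")
plt.show()
```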

Results of 135 ms
In Experiment 1, 133 ms was the best performer. In Experiment 2, we found that adding a small offset of a few milliseconds prevents the yellow frames from clustering. We therefore selected 135 ms as our signal period, as it performed best across both tests.
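The arithmetic behind the clustering argument can be checked directly: at 30 fps the frame interval is about 33.33 ms, and what matters is how far each red/green transition drifts against that grid from one blink to the next.

```python
# How much each red/green transition drifts against the 30 fps frame grid.
# Periods close to a whole number of frame intervals barely drift, so their
# yellow frames land in the same place and cluster; a small offset spreads them.
FRAME_MS = 1000.0 / 30.0                    # ~33.33 ms per frame

for period in (67, 70, 133, 135):
    drift = period % FRAME_MS               # phase shift per blink
    drift = min(drift, FRAME_MS - drift)    # measure to the nearest frame edge
    print(f"{period:>3} ms: drifts {drift:.2f} ms per blink against the frame grid")
```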
Looking back, I think testing a prime-number period such as 137 ms would be interesting and would likely produce even better results.
