From Single‑Point LiDAR to 3D Depth Sensing: Why EverBowl Is Upgrading Its Vision

Introduction

In pet‑tech, the difference between “working” and “truly reliable” often comes down to how well your hardware understands the real world. At Hoomanely, we’ve spent months refining EverBowl’s proximity pipeline – the system that helps our smart bowl track a dog’s face position, maintain safe distances, and measure temperature accurately. Until now, we’ve relied on a single‑point LiDAR sensor. It did the job, but as our accuracy targets rose, its limitations became unavoidable.

This sparked a major upgrade: moving from 1D single‑point LiDAR to a 3D Time‑of‑Flight (ToF) depth sensor that outputs a dense depth map. This shift fundamentally improves how we interpret dog face geometry, estimate distances to landmarks, and fuse thermal + RGB data - enabling more stable temperature readings and more robust eating detection.

This post explains why the upgrade matters, how 3D depth sensing works, and what it unlocks for next‑generation pet health hardware.


1. Why Single‑Point LiDAR Was Holding Us Back

A single‑point LiDAR measures only one distance at a time - like shining a laser pointer at a single spot and asking it to describe the whole room. For a moving dog head with complex contours, that’s simply not enough.

Core Limitations

  • Only 1D proximity → No sense of shape or orientation
  • Bowl tilt issues → The point may hit the wrong surface
  • Occlusions → Ears, fur, or bowl walls block the beam
  • Face geometry variability → Long snouts vs flat faces give wildly different readings

We compensated with heuristics, multi‑ROI scanning, smoothing, and filtering - but the gap between the accuracy we needed and what single‑point sensing could actually see kept widening.


2. Enter 3D Time‑of‑Flight Sensing: A Depth Map Instead of a Dot

Unlike single‑point LiDAR, which gives us a single distance, a ToF sensor provides a full depth image. The ToF sensor outputs a grid of more than 4,000 depth pixels, each representing the distance to a different part of the scene.

How It Works (Simple Version)

  • The sensor emits modulated infrared light.
  • Light bounces off the dog’s face.
  • The phase shift on return tells us the distance.
  • The whole sensor grid measures this simultaneously.

The result: a low‑resolution but extremely informative 3D map of the dog’s face.
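
To make the phase‑measurement step concrete, here is a minimal Python sketch of the standard continuous‑wave ToF conversion. The 20 MHz modulation frequency and the phase value in the example are illustrative, not our production settings:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Convert a measured phase shift to distance for a CW ToF sensor.

    The emitted light travels to the target and back, so the round trip
    covers 2*d; a full 2*pi phase shift corresponds to one modulation
    wavelength (c / f_mod), which also sets the unambiguous range.
    """
    return (C * phase_shift_rad) / (4.0 * math.pi * f_mod_hz)

# Example: a 90-degree phase shift at a 20 MHz modulation frequency
d = phase_to_distance(math.pi / 2, 20e6)
print(f"{d:.3f} m")  # ~1.874 m
```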


3. Why 3D Depth Changes Everything for EverBowl

3.1 Stable Distance Estimation for Landmarks

For accurate temperature estimation, especially around the eyes, we must know exactly how far each pixel is from our camera. Depth maps let us:

  • Compute distance to eyes, nose, muzzle independently.
  • Stabilize readings across different breeds.
  • Correct for bowl tilt and off‑axis entry.
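
As a rough illustration, here is how a landmark distance can be read robustly from a depth frame: sample a small window around the landmark pixel and take a median, so a few noisy or occluded pixels don’t skew the reading. The frame size, landmark coordinates, and window size below are hypothetical:

```python
import numpy as np

def landmark_distance_mm(depth: np.ndarray, px: int, py: int, win: int = 2) -> float:
    """Robust distance to a landmark: median over a small window,
    ignoring invalid (zero) depth pixels."""
    h, w = depth.shape
    patch = depth[max(0, py - win):min(h, py + win + 1),
                  max(0, px - win):min(w, px + win + 1)]
    valid = patch[patch > 0]  # 0 == no return / invalid pixel
    return float(np.median(valid)) if valid.size else float("nan")

# Illustrative 64x64 depth frame (mm), eye landmark at pixel (20, 14)
depth = np.full((64, 64), 250, dtype=np.uint16)
print(landmark_distance_mm(depth, px=20, py=14))  # 250.0
```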

3.2 Better Thermal–RGB Fusion

With 3D depth, we can align thermal data with RGB landmarks more precisely.

This reduces parallax errors and improves temperature estimation consistency.
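
A minimal sketch of the idea, assuming a simple pinhole model with hypothetical intrinsics and extrinsics: back‑project the RGB pixel into 3D using its depth, move that point into the thermal camera frame, and project it again. Without per‑pixel depth, this mapping is only exact at one assumed distance, which is precisely the parallax error depth removes:

```python
import numpy as np

def reproject_to_thermal(u, v, depth_m, K_rgb, K_th, R, t):
    """Map an RGB pixel (u, v) with known depth into thermal image coords.

    Back-project through the RGB intrinsics K_rgb, transform the 3D point
    into the thermal frame with extrinsics (R, t), then project through
    the thermal intrinsics K_th.
    """
    p_rgb = depth_m * (np.linalg.inv(K_rgb) @ np.array([u, v, 1.0]))
    p_th = R @ p_rgb + t
    uv = K_th @ (p_th / p_th[2])
    return uv[0], uv[1]

# Hypothetical cameras: shared intrinsics, thermal offset 15 mm sideways
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([-0.015, 0.0, 0.0])
print(reproject_to_thermal(320, 240, 0.25, K, K, R, t))  # ~(290.0, 240.0)
```

At 0.25 m the example pixel lands 30 px away from the naive (depth‑free) alignment, which is the kind of error that previously smeared thermal readings across the wrong landmark.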

3.3 More Robust Eating Detection

Depth lets us distinguish:

  • Snout entering the bowl
  • Tongue movement
  • Real “eating” vs sniffing or random proximity

This reduces false positives and allows smoother activation of the audio pipeline.
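
Here is a simplified sketch of the kind of depth gating this enables: flag “eating” only when a sustained intrusion sits inside the bowl volume. The depth band, intrusion fraction, and persistence window are hypothetical placeholders, and a real pipeline would add motion and shape cues on top:

```python
import numpy as np
from collections import deque

# Hypothetical thresholds: depth band (mm) spanning the bowl interior,
# fraction of pixels that must fall in it, frames of persistence.
BOWL_NEAR_MM, BOWL_FAR_MM = 60, 140
INTRUSION_FRAC = 0.15
PERSIST_FRAMES = 10

history = deque(maxlen=PERSIST_FRAMES)

def is_eating(depth_frame: np.ndarray) -> bool:
    """Flag 'eating' only when a snout-sized intrusion persists inside
    the bowl volume; brief sniffs don't persist long enough."""
    valid = depth_frame > 0
    in_bowl = (depth_frame > BOWL_NEAR_MM) & (depth_frame < BOWL_FAR_MM)
    frac = in_bowl.sum() / max(valid.sum(), 1)
    history.append(frac > INTRUSION_FRAC)
    return len(history) == PERSIST_FRAMES and all(history)
```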

3.4 Better Occlusion Handling

If fur, bowl edges, or lighting create occlusions, the depth map still provides thousands of alternate rays to rely on - something single‑point LiDAR cannot do.
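
A small sketch of what that fallback looks like in practice, with a hypothetical minimum‑support threshold:

```python
import numpy as np

def robust_face_distance(depth: np.ndarray, face_mask: np.ndarray) -> float:
    """Estimate face distance from whichever rays survived occlusion.

    Unlike a single beam, a depth map can lose a large share of its
    pixels (fur, bowl rim, saturation) and still yield an estimate
    from the remaining valid ones.
    """
    valid = depth[face_mask & (depth > 0)]
    if valid.size < 10:          # hypothetical minimum-support threshold
        return float("nan")      # too occluded; skip this frame
    return float(np.percentile(valid, 25))  # bias toward nearest surface

# Illustrative frame: half the face region occluded (zeros), rest at 230 mm
depth = np.zeros((64, 64), np.uint16)
depth[20:40, 20:40] = 230
mask = np.zeros((64, 64), bool)
mask[16:44, 16:44] = True
print(robust_face_distance(depth, mask))  # 230.0
```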


4. Design Considerations & Challenges

4.1 IR Reflectivity of Fur

Different coat colors and textures reflect IR differently, making calibration important.
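
Many ToF sensors expose a per‑pixel signal amplitude (or confidence) alongside depth, and a simple mitigation is to gate depth on that amplitude. This is a generic sketch with a placeholder threshold, not our calibration procedure:

```python
import numpy as np

def gate_by_amplitude(depth: np.ndarray, amplitude: np.ndarray,
                      min_amp: float = 40.0) -> np.ndarray:
    """Invalidate depth pixels whose return signal is too weak to trust.

    Dark or glossy coats reflect less IR, so their depth estimates are
    noisier; gating on amplitude trades coverage for reliability.
    """
    gated = depth.copy()
    gated[amplitude < min_amp] = 0  # 0 == invalid, consistent with above
    return gated
```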

4.2 Power & Thermal Budget

ToF sensors draw more power and need more processing and thermal management than a simple single‑point LiDAR.

4.3 Data Rate & On‑Device Processing

We process depth frames onboard our CM4, so optimizing pipeline latency is key.
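
One generic latency lever is decimating frames before the heavier stages run. A sketch (not our actual pipeline) using block‑min pooling, so the nearest surface in each block survives; it assumes invalid zero pixels were masked earlier:

```python
import time
import numpy as np

def downsample(depth: np.ndarray, k: int = 2) -> np.ndarray:
    """k x k block-min decimation: cuts the pixel count per frame
    while keeping the nearest surface in each block."""
    h, w = depth.shape
    blocks = depth[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    return blocks.min(axis=(1, 3))

frame = np.random.randint(50, 500, (64, 64)).astype(np.uint16)
t0 = time.perf_counter()
small = downsample(frame)
print(small.shape, f"{(time.perf_counter() - t0) * 1e3:.2f} ms")
```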

Despite these challenges, the upgrade has shown clear improvements in our internal testing.


5. What This Unlocks for Future Dog Health Features

Beyond just proximity sensing, 3D depth opens doors for:

  • More accurate fever detection (distance‑aware thermal correction)
  • Breed‑agnostic face models using depth instead of templates
  • Motion segmentation for richer behavior understanding
  • Better safety systems (detecting unsafe proximity or fast impact‑like motions)

This aligns directly with Hoomanely’s mission: building advanced yet affordable pet‑care systems that help parents understand their dog’s health reliably and non‑intrusively.


Key Takeaways

  • Single‑point LiDAR gave us only one dimension of information.
  • 3D ToF provides a dense depth map that captures full face geometry.
  • This upgrade dramatically improves landmark detection, thermal fusion, and eating detection.
  • Depth sensing enables more accurate, breed‑agnostic, and stable temperature estimation.
  • It paves the way for safer, smarter, and more reliable pet health hardware.
