
Google Is Using Radar to Help Computers Read and React to Your Body Language




Technology has quickly infiltrated almost every aspect of our lives, but the way we interface with our devices is still less than ideal. From hunching over our computer screens (because if you’re anything like me, it’s pretty much impossible to maintain good posture for more than a few minutes at a time) to constantly looking down at our phones (often while walking, driving, or otherwise being in motion), the way our bodies and brains interact with technology is not exactly seamless. Just how seamless we want it to become is debatable, but a Google project is exploring those boundaries.

Google’s Advanced Technology and Projects lab—ATAP—focuses on developing hardware to “change the way we relate to technology.” Its Project Jacquard developed conductive yarn to weave into clothing so people could interact with devices by, say, tapping their forearms—sort of like an elementary, fabric-based version of the Apple Watch. The lab has also been working on a project called Soli, which uses radar to give computers spatial awareness, enabling them to interact with people non-verbally.

In other words, the project is trying to allow computers to recognize and respond to physical cues from their users, not unlike how we take in and respond to body language. “We are inspired by how people interact with one another,” said Leonardo Giusti, ATAP’s Head of Design. “As humans, we understand each other intuitively, without saying a single word. We pick up on social cues, subtle gestures that we innately understand and react to. What if computers understood us this way?”

Examples include a computer automatically powering up when you get within a certain distance of it or pausing a video when you look away from the screen.
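To make those examples concrete, here’s a toy sketch in Python of the kind of rule they imply. The distance threshold, the Device class, and its fields are assumptions made purely for illustration; Google hasn’t published how Soli-driven behaviors are actually implemented.

```python
# A toy sketch of the behaviors described above: wake the device when someone
# comes within range, pause a video when they look away. The threshold, the
# Device class, and its fields are illustrative assumptions only.
from dataclasses import dataclass

WAKE_DISTANCE_M = 1.5  # hypothetical proximity threshold

@dataclass
class Device:
    awake: bool = False
    video_playing: bool = False

def react(device: Device, distance_m: float, looking_at_screen: bool) -> Device:
    """Update device state from one frame of presence/gaze information."""
    if distance_m <= WAKE_DISTANCE_M:
        device.awake = True                # power up as the user approaches
    if device.video_playing and not looking_at_screen:
        device.video_playing = False       # pause playback when they look away
    return device

# Someone walks up to within a meter of the screen but glances away mid-video:
print(react(Device(video_playing=True), distance_m=0.9, looking_at_screen=False))
```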

The sensor works by sending out electromagnetic waves in a broad beam, which are intercepted and reflected back to the radar antenna by objects—or people—in their path. The reflected waves are analyzed for properties like energy, time delay, and frequency shift, which give clues about the reflector’s size, shape, and distance from the sensor. Parsing the data even further using a machine learning algorithm enables the sensor to determine things like an object’s orientation, its distance from the device, and the speed of its movements.
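For a rough sense of the math involved, here’s a minimal sketch of how two of those properties—time delay and frequency shift—translate into distance and radial speed. The 60 GHz carrier is Soli’s published operating band, but the calculation below is textbook radar physics, not Google’s actual processing pipeline, and the example numbers are hypothetical.

```python
# Illustrative sketch (not Google's implementation): how time delay and
# Doppler frequency shift map to distance and radial speed for a 60 GHz radar.

SPEED_OF_LIGHT = 3.0e8   # meters per second
CARRIER_FREQ = 60.0e9    # Hz; Soli operates in the 60 GHz band

def range_from_delay(round_trip_delay_s: float) -> float:
    """Distance to the reflector, from the round-trip time delay."""
    return SPEED_OF_LIGHT * round_trip_delay_s / 2.0

def radial_speed_from_doppler(freq_shift_hz: float) -> float:
    """Radial speed of the reflector, from the Doppler frequency shift."""
    return freq_shift_hz * SPEED_OF_LIGHT / (2.0 * CARRIER_FREQ)

if __name__ == "__main__":
    # A reflection arriving ~6.7 nanoseconds after transmission is ~1 m away.
    print(f"range: {range_from_delay(6.7e-9):.2f} m")
    # A +400 Hz Doppler shift at 60 GHz corresponds to ~1 m/s of approach.
    print(f"speed: {radial_speed_from_doppler(400.0):.2f} m/s")
```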

The ATAP team trained Soli’s algorithm themselves by performing a series of movements while being tracked by cameras and radar sensors. The movements they focused on were ones typically involved in interacting with digital devices, like turning toward or away from a screen, approaching or leaving a space or device, glancing at a screen, and so on. The ultimate goal is for the sensor to be able to anticipate a user’s next move and serve up a corresponding response, facilitating the human-device interaction by enabling devices to “understand the social context around them,” as ATAP’s Human-Computer Interaction Lead Eiji Hayashi put it.
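As a rough illustration of what that kind of training could look like, the sketch below fits an off-the-shelf classifier to a handful of hand-labeled radar-feature frames. The feature set, the labels, and the model choice are all assumptions for the sake of the example; the article doesn’t describe ATAP’s actual features, labels, or model.

```python
# A minimal sketch of learning movement categories from radar-derived features.
# Features, labels, and model are hypothetical, not ATAP's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-frame features derived from the reflected signal:
# [distance_m, radial_speed_mps, reflected_energy, body_orientation_deg]
X_train = np.array([
    [1.8, -0.6, 0.42, 10.0],   # approaching the device
    [0.5,  0.0, 0.90,  5.0],   # dwelling in front of the screen
    [0.6,  0.1, 0.35, 80.0],   # turned away from the screen
    [2.5,  0.7, 0.20, 40.0],   # leaving the space
])
y_train = ["approach", "engage", "turn_away", "leave"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify a new frame: close, slowly approaching, facing the screen.
print(clf.predict([[0.9, -0.2, 0.75, 8.0]]))
```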

Improving the way we interact with our now-ubiquitous devices isn’t a new idea. Jody Medich, principal design researcher at Microsoft and CEO of Superhuman-X, has long been advocating for what she calls human-centered technology, maintaining that our interfaces “are killing our ability to think” by overloading our working memory (which is short-term and task-based) with constant interruptions.

In 2017 Medich predicted the rise of perceptual computing, in which machines recognize what’s happening around them and act accordingly. “This will cause the dematerialization curve to dramatically accelerate while we use technology in even more unexpected locations,” she wrote. “This means technology will be everywhere, and so will interface.”

It seems she wasn’t wrong, but this raises a couple of important questions.

First, do we really need our computers to “understand” and respond to our movements and gestures? Is this a necessary tweak to how we use technology, or a new apex of human laziness? Pressing pause on a video before getting up to walk away takes a split second, as does pressing the power button to turn a device on or off. And what about those times we want the computer to stay on or the video to keep playing even when we’re not right in front of the screen?

Second, what might the privacy implications of these sensor-laden devices be? The ATAP team emphasizes that Soli uses radar precisely because it protects users’ privacy far more than, say, cameras: radar can’t distinguish between different people’s faces or bodies; it can only tell that there’s a person in its space. Also, data from the Soli sensor in Google’s Nest Hub isn’t sent to the cloud; it’s processed locally on users’ devices, and the assumption is that a product made for laptops or other devices would work the same way.

People may initially be creeped out by their devices being able to anticipate and respond to their movements. Like most other technology we initially find off-putting for privacy reasons, though, it seems we ultimately end up valuing the convenience these products give us more than we value our privacy; it all comes down to utilitarianism.

Whether or not we want our devices to eventually become more like extensions of our bodies, it’s likely the technology will move in that direction. Analyses from 2019 through this year estimate we check our phones anywhere from 96 to 344 times per day. That’s a lot of glances, and a lot of interruptions to whatever we’re doing so we can look at the tiny screens that now essentially run our lives.

Is there a better way? Hopefully. Is this it? TBD.

Image Credit: Google ATAP


