
AI needs ‘edge computing’ to make everyday devices smarter

Edge computing will allow artificial intelligence to be taken to traffic lights, security cameras, home appliances—or even the odd space probe.


Tokyo: Since Japan launched its first deep space probe in 1985, its spacecraft have taken photographs in a relatively low-tech way: by pointing cameras at objects in the cosmos and letting them run. Whatever is captured gets sent back to Earth, where people cull the material for the most beautiful shots.
Problem is, this dragnet approach uses up precious bandwidth and batteries. So Japan’s space agency is experimenting with a camera that’s more discriminating: It decides which pics have the best light, angle and composition, and beams back only those. Using artificial intelligence on powerful, large computers? That’s no big deal. But it’s a lot harder on a tiny spacecraft with its serious energy constraints.

Enter LeapMind Inc., a Tokyo company specializing in “edge computing,” or running computations not on a central server or even a PC, but on remote devices with limited processing power and no internet connection. The idea is to bring AI to traffic lights, security cameras, home appliances—or even the odd space probe.

Artificial intelligence can do amazing things, but it’s still rare because the math takes enormous computing power and loads of electricity. This means driverless cars must be something akin to data centers on wheels, with dozens of processors that can get hot enough to boil water. Edge computing promises to make AI work inside the thousands of smaller gadgets and machines in people’s homes and offices.




“The hurdles to putting AI in actual products are really high,” said LeapMind’s founder, Soichi Matsuda, 36. “There are all kinds of severe limitations: price, power-consumption, dealing with exhaust heat.”

LeapMind is just one of dozens of companies working on edge computing. Google and Amazon.com have led the way, but last year venture capital firms invested about $750 million in startups working in the field, according to CB Insights, up 26% from the previous year. In 2017, a group led by Intel Capital invested $10 million in LeapMind.

They’re all seeking to drastically simplify the way AI works, so that everyday devices can take voice commands, respond to gestures and see the world around them. The technology would enable a home security camera to tell family members from strangers or allow sensors sewn into clothing to track your health, without sending private information to the cloud.

“Edge AI is becoming more and more important, especially in areas where latency, power consumption, connectivity and security matter,” said Anthony Lin, senior managing director at LeapMind investor Intel Capital.

In order to understand what makes edge computing so difficult, it helps to recall your high school algebra. Each variable in a typical AI algorithm is built out of a 32-digit string of ones and zeros that allows for 4.3 billion possible combinations. (They look like this: 00100010010000000101001000010110.) The detail is what gives AI its predictive power.

The trick in edge computing is shaving down the numbers so they can be processed by smaller chips, but without losing too much precision. It’s a challenge because every digit that’s lopped off halves the number of values a variable can represent, an exponential loss in expressiveness.
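As a rough illustration of that trade-off, and not any particular company’s production method, the sketch below squeezes a batch of 32-bit floating-point “weights” onto a handful of evenly spaced levels and measures how much precision is lost as the bit count shrinks. The data and the simple linear quantization scheme are stand-ins chosen for clarity.

```python
# Minimal sketch of linear quantization: mapping 32-bit floating-point
# values onto 2**bits evenly spaced levels, then measuring the error.
# Illustrative only; real edge-AI toolchains use more elaborate schemes.
import numpy as np

def quantize(values, bits):
    """Round each value to the nearest of 2**bits levels, then map it back."""
    levels = 2 ** bits - 1
    lo, hi = values.min(), values.max()
    step = (hi - lo) / levels
    return np.round((values - lo) / step) * step + lo

rng = np.random.default_rng(0)
weights = rng.normal(size=1_000).astype(np.float32)  # stand-in for model weights

for bits in (8, 4, 2, 1):
    error = np.abs(weights - quantize(weights, bits)).mean()
    print(f"{bits}-bit: {2 ** bits:>3} distinct values, mean error {error:.4f}")
```

Running it shows the average error growing as the bit count falls from 8 toward 1, which is the loss of expressiveness described above.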

This is why even the smallest achievements in edge computing are treated as major victories. In March, for example, when Google announced it finally managed to get a speech-recognition function to run offline on its smartphones, at least one Google engineer called the effort “heroic.” For most users the difference is probably unnoticeable, but making it work without being connected to the internet required chopping the program’s variables down from 32 to 8 digits, or bits.

LeapMind is working at a level that’s orders of magnitude smaller, just 1 or 2 bits, according to Matsuda. It’s the computer science equivalent of boiling the English language down to just four words and somehow still being able to convey a lot of meaning.
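LeapMind has not published its exact technique here, but binarized neural networks give a flavor of what 1-bit weights can look like: each weight keeps only its sign, and a single shared scaling factor per layer preserves the overall magnitude. The sketch below illustrates that generic idea, not the company’s implementation.

```python
# Generic illustration of 1-bit ("binarized") weights, not LeapMind's method:
# each weight is reduced to its sign, with one shared scale per layer.
import numpy as np

def binarize(weights):
    """Replace every weight with +alpha or -alpha (alpha = mean magnitude)."""
    alpha = np.abs(weights).mean()
    return alpha * np.sign(weights), alpha

rng = np.random.default_rng(1)
layer = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)

binary_layer, alpha = binarize(layer)
print("distinct values per weight:", np.unique(binary_layer).size)  # 2
print("storage: 1 bit per weight plus one scale, instead of 32 bits each")
```

A 2-bit version works the same way with four levels per weight instead of two, which is where the four-word analogy comes from.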

“When we started working on this in 2015, we knew that someone like Google would eventually do 8 bits,” Matsuda said. “So we had to aim higher.”

The techniques developed by LeapMind are sufficient for many, but not all tasks. You couldn’t run a driverless car with them, for example. But they’d be useful for driver-assist functions or other less-exacting work.

Which brings us back to the space agency’s photography problem.

As it stands, JAXA’s probes can’t devote scarce computing power to getting visually pleasing photos, no matter how valuable they are for PR purposes, because it would come at the expense of doing actual science. The photos the public sees now were taken in the course of other work, not specifically for their beauty.




That’s why JAXA researcher Takayuki Ishida decided to try using LeapMind’s tools to build a smart camera. He started by taking 10,000 photos of planet and probe models, shot from different angles and with different juxtapositions, and ranking them in terms of aesthetic appeal.

Then he used LeapMind’s software to create a pattern-matching algorithm that learned to differentiate between good photos and bad ones. The code was compact enough to run on a single chip that consumed no more electricity than a 10-watt light bulb.
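Ishida’s model and data are not public beyond the broad description above, so the sketch below is purely hypothetical: a tiny logistic-regression scorer trained to separate “good” from “bad” thumbnails, with synthetic stand-ins for the 10,000 ranked photos. It is meant only to show the kind of compact pattern-matching step described, small enough to live on a low-power chip.

```python
# Hypothetical sketch of a compact good-vs-bad photo scorer; Ishida's actual
# model and training data are not public here. The 16x16 "thumbnails" and
# labels below are synthetic stand-ins for the 10,000 ranked photos.
import numpy as np

rng = np.random.default_rng(42)
n, side = 10_000, 16
pixels = rng.random((n, side * side)).astype(np.float32) - 0.5  # centered pixels
labels = (pixels.mean(axis=1) > 0).astype(np.float32)           # stand-in "good" flag

w = np.zeros(side * side, dtype=np.float32)   # one weight per pixel
b = 0.0
lr = 0.5
for _ in range(300):                          # plain batch gradient descent
    z = np.clip(pixels @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))              # predicted probability of "good"
    grad = p - labels
    w -= lr * (pixels.T @ grad) / n
    b -= lr * grad.mean()

print(f"training accuracy: {((p > 0.5) == labels).mean():.1%}")
```

In a real pipeline the learned weights would then be quantized down to a few bits, in the spirit of the techniques described earlier, before being loaded onto the probe’s chip.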

JAXA declined to comment, but Ishida described his experiments in a paper he presented at a February conference held by LeapMind.

If the system works, it will be a small step forward for space exploration and perhaps a big leap for everyday devices closer to home.

“If you want to add AI to a television or a laptop or any other existing product, you have to re-design things from scratch because most of the power supply is already spoken for,” said LeapMind’s Matsuda. “Power is the big constraint.”



Source: Bloomberg
