Federated Learning Explained Simply

Having held Google (GOOG) shares since shortly after their IPO, we’ve always taken a “set it and forget it” approach to our investment in the company, which acts as a gatekeeper to the world’s information. Lately though, we can’t help but notice Google is letting a minuscule number of employees dictate who they ought to be doing business with. Instead of focusing on doing their jobs, some of Google’s employees think they’re being paid to be activists. It’s a sad state of affairs. Google is no longer the great company it used to be since they started letting the stench of politics permeate their organization. Focus seems to have switched from execution to pacification. Everything seems to be getting dumber by the day.

Maybe you’ve heard the new term being thrown around lately by tech pundits – “federated learning.” Instead of just having one of their engineers explain the concept in a simple and concise manner, Google decided to squander our future dividend payments on hiring an “Adventure Cartoonist” who some poor engineer was forced to work with. The resulting comic strip is supposed to explain the concept of “federated learning,” yet it mainly just succeeds in placating Gwyneth in Human Resources who thinks inclusive comic strips are what engineering needs more of. Today, we’re going to explain federated learning to the adults in the room with help from some of the adults left at Google who published a great blog post titled “Federated Learning: Collaborative Machine Learning without Centralized Training Data.”

How Machines Learn

Standard machine learning models become intelligent when you train them using big data. Typically, this involves centralizing the training data on one machine or distributing the data evenly across multiple machines in a datacenter. Give an algorithm enough cat pictures and it will soon be able to identify what a cat looks like, what breed a cat is, and perhaps even the cat’s age. The traditional process of training a model meant that all the big data would be uploaded to a single location, a process that’s bandwidth-intensive and comes with certain privacy implications. If you consider an Internet of Things (IoT) use case such as creating a digital twin, then all that big data would come from sensors out in the field that would all need to maintain connectivity with the cloud. This need to send all your data to a central location so that your algorithms can be trained is cumbersome. Now, there’s a new way of doing things that doesn’t require all the data to be transferred to a central location.
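To make the contrast concrete, here is a minimal sketch of that traditional approach, written in Python with a made-up toy dataset and a toy linear model. Every example travels to the server first, and training only happens once all the data has been pooled.

```python
import numpy as np

# Hypothetical toy dataset: every raw example had to be uploaded
# from a device to the server before training could begin.
device_data = [
    (np.array([1.0, 0.0]), 1.0),  # uploaded from device A
    (np.array([0.0, 1.0]), 2.0),  # uploaded from device B
]

# Ordinary centralized training: plain gradient descent on the
# pooled data for a toy linear model y = w . x with squared error.
w = np.zeros(2)
for _ in range(100):
    for x, y in device_data:
        w -= 0.1 * (w @ x - y) * x  # gradient step on one example
```

The point is not the model, which is deliberately trivial, but the data flow: the server sees every raw example, which is exactly the bandwidth and privacy cost federated learning avoids.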

Federated Learning vs. Distributed Machine Learning

Distributed machine learning is the notion of breaking down the training workload across multiple machines, with each machine handling around the same amount of training data. Federated learning is a special case of distributed machine learning in which each device trains on whatever data it happens to hold – typically a different amount on every device – and the raw data never leaves the device. It’s an approach that decouples the ability to do machine learning from the need to store the data in the cloud. From Google’s blog post:

It works like this: your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.
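The round described in the quote above can be sketched in a few lines of Python. This is a minimal illustration of federated averaging with a toy linear model, not Google’s production system; the function names, data, and learning rate are all made up for the example.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """On-device step: improve the downloaded model using only the
    data on this phone, then summarize the change as a small update."""
    w = global_weights.copy()
    for x, y in local_data:        # one gradient step per local example
        w -= lr * (w @ x - y) * x  # toy linear model y = w . x
    return w - global_weights      # only this delta leaves the device

def federated_average(global_weights, updates, sizes):
    """Server step: average the user updates, weighted by how many
    examples each device holds, and fold them into the shared model."""
    total = sum(sizes)
    avg = sum(n / total * u for u, n in zip(updates, sizes))
    return global_weights + avg

# One round with two devices holding different amounts of data
w = np.zeros(2)
data_a = [(np.array([1.0, 0.0]), 1.0)]                               # 1 example
data_b = [(np.array([0.0, 1.0]), 2.0), (np.array([0.0, 1.0]), 2.0)]  # 2 examples
updates = [local_update(w, data_a), local_update(w, data_b)]
w = federated_average(w, updates, [len(data_a), len(data_b)])
```

Note that the server only ever sees the two small update vectors, never the underlying examples, and the weighted average accounts for devices holding different amounts of data. The real system adds encrypted communication and secure aggregation on top of this basic loop.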

It’s not just about running AI algorithms “on the edge”; it’s about training them “on the edge.” In a previous article, we talked about Fog Computing vs. Cloud Computing vs. Edge Computing. Essentially, edge computing is just about moving portions of your cloud-based applications closer to the devices that use them. Our recent article on Ambarella talked about how they’ve developed better AI chips that consume less power and can now enable security cameras with computer vision capabilities. While security cameras could be edge devices that use federated learning, the more common edge devices for federated learning are the smartphones we carry around with us. There are a few implications to using smartphones as edge devices for federated learning.

Smartphones as Edge Devices

In most countries around the world, internet bandwidth – or data as it’s often called – is expensive and unreliable. In order to not piss people off, federated learning needs to consume as little bandwidth as possible when communicating with the cloud and at the same time handle the unpredictability of connectivity. “Federated learning applies best in situations where the on-device data is more relevant than the data that exists on servers (e.g., the devices generate the data in the first place), is privacy-sensitive, or otherwise undesirable or infeasible to transmit to servers,” says Google. They’re now using it for things like content suggestions for on-device keyboards. In a March 2019 paper, Google researchers stated:

We have reached a state of maturity sufficient to deploy the system in production and solve applied learning problems over tens of millions of real-world devices; we anticipate uses where the number of devices reaches billions.

One of the things developers find most exciting about federated learning is that it addresses the fundamental problems of privacy, ownership, and locality of data. Let’s say you developed a smartphone app for distinguishing between cancerous and benign skin spots. Using federated learning means you won’t have to send the user’s image to the cloud. More importantly, you’ll be able to improve your algorithms without actually needing to look at all the images being analyzed across your smartphone population. The appeal to smartphone makers is evident in Apple’s recent decision to acquire some federated learning technology for their own devices.

Federated Learning Startups

A few weeks ago, Apple acquired a Seattle startup called Xnor.ai which has developed a platform that “allows companies to run complex deep learning algorithms, formerly restricted to the cloud, locally on a range of devices including mobile phones, drones, and wearables.” An article by GeekWire discusses some of Xnor’s notable accomplishments in 2019, like a standalone AI chip that could run on solar power for years and an edge-based person recognition technology built into low-cost security cameras, which is reminiscent of the technology Ambarella is deploying right now.

There are also other startups out there bringing federated learning to the masses. Snips is developing federated learning for voice platforms, XAIN is applying it to automated invoice processing, Owkin is working on federated learning for medical research, and S20 is talking about how multiple third parties – like banks – can work together to train algorithms for applications like fraud detection without having to exchange data. In future articles, we might dive deeper into some of these applications to try and separate the hype from the substance.

Conclusion

Federated learning, edge computing, distributed machine learning, fog computing – it’s hard to keep up with the constant barrage of nomenclature from the tech world. For decades now, engineers have been moving software and hardware further apart, then closer together, then further apart. The advantages of moving everything to a central server are high processing speeds and effectively unlimited power; the tradeoff is that all the data has to travel there first. When you do things on smartphone edge devices, you are limited by battery power, bandwidth, and processing speed. Each of these three constraints is being eased as we develop better lithium batteries and more sophisticated AI processors for mobile, and as 5G ushers in an era of unprecedented bandwidth. That supercomputer you keep at your side every waking hour is becoming an indispensable extension of your physical body. And it just got a whole lot smarter.

Here at Nanalyze, we hold the lion's share of our investing dollars in a portfolio of 30 dividend growth stocks. Find out which ones in the Quantigence report freely available to Nanalyze subscribers. 

