A team of Texas A&M University researchers is analyzing how a network of localized nodes can implement machine-learning applications, such as object recognition, in a distributed fashion. The research team includes Dr. Alfredo Garcia, professor in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering, and Dr. Jeff Huang, associate professor in the Department of Computer Science and Engineering.
This proposed methodology stands as an alternative to the widely used federated learning approach. Federated learning is a machine-learning technique for training models across multiple decentralized edge devices or servers that hold local data samples without exchanging them. Since its inception, federated learning has proven more effective than traditional centralized machine-learning techniques, in which all of the local data sets are uploaded to one server.
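To make the pattern concrete, here is a minimal sketch of a federated averaging round under simple assumptions (a least-squares local loss and an unweighted server average); the function names are illustrative, not the team's implementation.

```python
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """One illustrative local training step: a gradient step on a
    least-squares loss over this device's private data."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, devices):
    """Each device trains on its own data; only the resulting
    parameters (never the raw data) are sent to the server,
    which averages them into a new global model."""
    updates = [local_update(global_weights.copy(), d) for d in devices]
    return np.mean(updates, axis=0)
```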
“What’s really exciting in this research is that it shows a robust learning approach for learning models from heterogeneous data streams, which are becoming ubiquitous in the real world,” Huang said.
This research focuses on a more robust alternative to federated learning, in which each node periodically updates its own model based on local data and a network regularization penalty. In effect, each node "checks in" with neighboring nodes every so often to make sure its own model does not drift too far from those of its neighbors.
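A minimal sketch of what one such node update might look like, assuming a quadratic penalty of the form (rho/2) * sum over neighbors of ||w_i - w_j||^2 pulling each node toward its neighbors; the penalty form and step sizes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def node_update(w_i, local_grad, neighbor_weights, lr=0.01, rho=0.1):
    """One illustrative update at node i: descend the local loss,
    plus the gradient of a network regularization penalty
    (rho/2) * sum_j ||w_i - w_j||^2 that keeps w_i from drifting
    too far from its neighbors' parameters."""
    penalty_grad = sum(w_i - w_j for w_j in neighbor_weights)
    return w_i - lr * (local_grad + rho * penalty_grad)
```

Under this kind of scheme, no central server is required: each node only ever sees its own data and its neighbors' current parameters.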
A node is a device in the network responsible for training a model. To put this into perspective, millions of nodes can be processing information at the same time within a matter of seconds. Nodes will share model updates with the server but either can't or won't share their raw data with other nodes.
In a federated learning implementation, participating devices need only periodically communicate parameter updates to a central node where the model parameters are stored. However, when data streams are heterogeneous in both rate and quality, the model identified by federated learning may not be of the highest quality.
When the data streams with higher data rates also have lower precision, the nodes producing parameter updates at the fastest pace do not necessarily produce the highest-quality updates. The model also runs the risk of being exposed to bad data, or noise, coming from faulty nodes.
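A toy simulation (an illustrative assumption, not the team's experiment) shows the effect: if a server naively averages whatever updates arrive, a fast but noisy node contributes most of them, and the high variance of its estimates swamps the precise estimates from the slower node.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = 1.0

# Fast node: 90 noisy parameter estimates; slow node: 10 precise ones.
fast_updates = true_w + rng.normal(0.0, 2.0, size=90)
slow_updates = true_w + rng.normal(0.0, 0.1, size=10)

naive = np.mean(np.concatenate([fast_updates, slow_updates]))
print(f"naive average over all arrivals: {naive:.3f}")  # dominated by the noisy node
print(f"slow, precise node alone:        {np.mean(slow_updates):.3f}")
```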
For example, photos coming from the latest iPhone model with a high-quality camera will have different data quality than photos coming from an iPhone 5.
Federated learning is useful when streaming data is housed on devices in different geographic locations. However, there is a downside when communication overhead is significant and data cannot be transferred to a single location in a timely fashion. This is notably the case for high-resolution video.
In this particular scenario, assembling a diverse batch of data points in a central processing location to update a model involves significant latency and may ultimately not be practical.
In follow-up work with his team, Garcia is examining the application of the network approach to multitask learning, where different nodes do not share the same learning objective or task. Exchanging local models exposes similarities between different tasks, which can lead to better learning outcomes.
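One plausible reading of this idea, sketched under stated assumptions (a parameter vector split into a shared part and a task-specific part, with the network penalty applied only to the shared part), is the following; the split and penalty form are purely illustrative, not the published method.

```python
import numpy as np

def multitask_node_update(w_shared, w_task, grads, neighbor_shared,
                          lr=0.01, rho=0.1):
    """Illustrative node update for multitask learning: the
    task-specific parameters follow only the local gradient, while
    the shared parameters are also pulled toward neighboring nodes'
    shared parameters via the penalty (rho/2) * sum_j ||w - w_j||^2,
    letting related tasks inform one another."""
    g_shared, g_task = grads
    penalty_grad = sum(w_shared - w_j for w_j in neighbor_shared)
    return (w_shared - lr * (g_shared + rho * penalty_grad),
            w_task - lr * g_task)
```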