A new type of neural network built from memristors can dramatically improve the efficiency of teaching machines to think like humans, according to the scientists who developed it. The network, called a reservoir computing system, could predict words before they are said during conversation and help forecast future outcomes based on present data.
Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have previously been built with bulkier optical components. Researchers from the University of Michigan, led by Wei Lu, professor of electrical engineering and computer science, created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.
Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separately from memory modules. The researchers used a special kind of memristor that retains a memory only of recent events, its response to older inputs fading away.
Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes. To train a neural network for a task, the network is fed a large set of questions along with the answers to those questions. In this process, called supervised learning, the connections between nodes are weighted more heavily or more lightly to minimise the error in arriving at the correct answer.
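The weight-adjustment loop described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper’s network: a single node learns a linear rule by nudging its weight and bias to shrink the error on each question–answer pair.

```python
# Minimal supervised-learning sketch (illustrative only): one node with a
# weight w and bias b, trained by gradient descent on squared error.

def train(samples, epochs=2000, lr=0.1):
    w, b = 0.0, 0.0  # connections start untrained
    for _ in range(epochs):
        for x, answer in samples:
            y = w * x + b       # the network's current answer
            err = y - answer    # how far off it is
            w -= lr * err * x   # weight the connection more or less heavily
            b -= lr * err
    return w, b

# "Questions and answers": points drawn from the rule y = 2x + 1
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)
```

After training, `w` and `b` settle close to the rule that generated the data; the only thing the procedure ever changes is the weighting of the connections.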
“A lot of times, it takes days or months to train a network. It is very expensive,” said Lu. Image recognition is a relatively simple problem, he added, as it does not require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context, requiring neural networks to have knowledge of what has just occurred or what has just been said. “When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” said Lu.
This requires a recurrent neural network, which incorporates loops within the network that give it a memory effect. Training these recurrent neural networks, however, is especially expensive, Lu said. Reservoir computing systems built with memristors can skip most of that expensive training process and still give the network the capability to remember, because the most critical component of the system – the reservoir – does not require training.
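The loop-with-memory idea can be shown with a single recurrent node. This toy sketch is not the paper’s circuit, and the weights (`w_in`, `w_rec`) are arbitrary illustration values: the node’s previous output is fed back in as an input, so one input pulse leaves a trace that fades over later steps rather than vanishing at once.

```python
import math

def recurrent_step(prev_state, x, w_in=0.5, w_rec=0.9):
    # The loop: the node's own previous output re-enters as an input,
    # mixing the fresh input with a fading trace of the past.
    return math.tanh(w_in * x + w_rec * prev_state)

state = 0.0
history = []
for x in [1.0, 0.0, 0.0, 0.0]:   # a single pulse, then silence
    state = recurrent_step(state, x)
    history.append(state)
# history shows the pulse's trace shrinking step by step, not disappearing
```

This fading trace is the “memory effect” the loops provide, and it is what lets the network’s output at one moment depend on what was said or shown moments earlier.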
When a set of data is fed into the reservoir, the reservoir identifies important time-related features of the data and hands them off, in a simpler format, to a second network. Only this second network then needs training, in the manner of simpler neural networks: the weights on the features the reservoir passes on are adjusted until the network achieves an acceptable level of error. “The beauty of reservoir computing is that while we design it, we don’t have to train it,” Lu said.
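A software analogue of this two-stage split makes it concrete. The sketch below is a minimal echo-state-style toy, not the memristor hardware: a fixed, randomly wired, fading-memory reservoir is never trained, and only the linear readout weights are adjusted – here, on the toy task of recalling the previous input in a stream.

```python
import random

random.seed(0)

# Stage 1: a fixed reservoir of fading-memory nodes. Its parameters are
# set once at random and never trained.
N = 20
decay = [random.uniform(0.1, 0.9) for _ in range(N)]   # fixed, untrained
w_in  = [random.uniform(-1.0, 1.0) for _ in range(N)]  # fixed, untrained

def run_reservoir(inputs):
    """Feed the input stream through the fixed reservoir; collect its states."""
    s = [0.0] * N
    states = []
    for x in inputs:
        s = [decay[i] * s[i] + w_in[i] * x for i in range(N)]
        states.append(s)
    return states

# Toy task: at each step, recall the input seen one step earlier.
stream = [random.uniform(-1.0, 1.0) for _ in range(300)]
target = [0.0] + stream[:-1]
states = run_reservoir(stream)

# Stage 2: only the readout weights are trained, by simple error-driven
# updates on the features the reservoir passed on.
w_out = [0.0] * N
for _ in range(500):
    for s, t in zip(states, target):
        y = sum(wi * si for wi, si in zip(w_out, s))
        err = y - t
        for i in range(N):
            w_out[i] -= 0.01 * err * s[i]
```

Only `w_out` ever changes; `decay` and `w_in` – the reservoir – stay exactly as initialised, which is what lets this scheme skip most of the training cost while the readout error falls to an acceptable level.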
The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Using only 88 memristors, compared to a conventional network that would require thousands for the task, the reservoir achieved 91 per cent accuracy.
The work, “Reservoir computing using dynamic memristors for temporal information processing”, was published in Nature Communications.