U.S. Patent Awarded For Influence Learning Algorithm Used In Neural Networks


The First of Two New Learning Algorithms Introduced In Netlab, Influence Learning Permits Neural Networks To Be Constructed With Unlimited Layering and Feedback Configurations.

One of two new learning algorithms described in the book “Netlab Loligo” has been awarded a patent by the United States Patent and Trademark Office. The patent, which issued on October 12 of this year, is officially titled:

"Feedback-Tolerant Method and Device Producing Weight-Adjustment Factors For Pre-Synaptic Neurons In Artificial Neural Networks".

This wordy title is provided as an aid to future inventors, who must search hundreds, or even thousands, of patents to ensure that their great idea has not already been patented. Simply put, the patent covers a learning method that works in neural networks designed with any type and amount of feedback.

Traditionally, learning algorithms either disallow feedback or strictly limit how feedback may be structured in the networks where they are used. The learning algorithm disclosed in the patent, however, works in networks constructed with arbitrarily connected signal-feedback paths. It works in traditional feed-forward-only networks, where back-propagation of errors is normally used, but it also works in networks in which neurons connect back to their pre-synaptic neurons in any configuration or topology the network's designer chooses.

. . . . . . .
How It Works

In layman's terms, one way to understand how the new learning algorithm adapts weights in pre-synaptic neurons (traditionally called “hidden” layer neurons) is to consider how humans and other animals “learn” from those around them. Essentially, they place the most stock in the behavior of those who are exercising the most influence, and use their actions as a model for modifying their own behavior. This same principle can be applied to neurons. A neuron connects to many other neurons and, in any given situation, may or may not be exercising a great deal of influence over the neurons to which it connects. If it is exercising a lot of influence, it will be seen as “attractive” by its pre-synaptic neurons, and they will form stronger connections to it. Though simplified, this is the essence of Influence Learning.

It is also possible that, depending on the situation, pre-synaptic neurons may instead be repulsed by the exercise of influence, for example when clear indications coming in from other inputs show that things are going badly for the organism.
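To make this concrete, below is a minimal sketch, in Python, of what an influence-style weight update could look like. It is a toy reading of the two paragraphs above, not the patented algorithm: the influence measure, the valence signal, and the function names (influence_of, update_weights) are all assumptions introduced purely for illustration.

    def influence_of(neuron_id, activations, weights):
        # Rough proxy for a neuron's current influence: the total
        # weighted signal it sends to its post-synaptic targets.
        # (Hypothetical measure, not the one defined in the patent.)
        return sum(abs(w) * abs(activations[neuron_id])
                   for (pre, _post), w in weights.items()
                   if pre == neuron_id)

    def update_weights(weights, activations, valence, rate=0.01):
        # Strengthen each connection in proportion to how influential
        # its post-synaptic target currently is.
        #   valence > 0: influence is "attractive" and connections grow.
        #   valence < 0: influence repels instead, e.g. when other
        #                inputs signal that things are going badly.
        # Each update uses only locally available quantities, so
        # feedback loops in the topology pose no problem.
        return {(pre, post): w + rate * valence
                               * influence_of(post, activations, weights)
                               * activations[pre]
                for (pre, post), w in weights.items()}

    # Tiny network that includes a feedback loop: 0 -> 1 -> 2 -> 0.
    weights = {(0, 1): 0.5, (1, 2): 0.8, (2, 0): 0.3}
    activations = {0: 0.9, 1: 0.4, 2: 0.7}
    weights = update_weights(weights, activations, valence=+1.0)
    print(weights)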

Think of the individuals in a ship's crew as neurons in a neural network. The captain sees directly where the ship is heading and what it is doing, and adapts his behavior accordingly. If everything is going well, people do not need direct access to the captain, or to what he sees, in order to learn how to respond to a given situation. An inner circle may interact directly with the captain, but many others will have access only to that inner circle and will never speak directly with the captain. Still others will have direct interactions only with people even further down the command structure. This will normally be enough, even at the lowest levels, for crew members to understand how to adapt their behavior to better respond to a novel situation.

The ship metaphor also works well for the negative example. If things are going poorly for the ship, the captain's actions, and the actions of his subordinates, will be less trusted, and the tendency will be for the situation to decay into “every man for himself.”

. . . . . . .
Benefits

There are other benefits besides allowing free-form feedback. The ability to tolerate networks constructed with feedback is a direct result of the fact that there is no longer a separate processing pass that must be performed, in reverse order, over the entire network. Because of this, huge networks running on many processors are no longer bound by the serial constraints of such a pass.
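As a rough sketch of why removing that pass matters for parallel hardware: if each neuron's adjustment depends only on signals already available at that neuron, every update can be dispatched at once, with no layer-by-layer ordering. The function name, state layout, and numbers below are illustrative assumptions, not the book's implementation.

    from concurrent.futures import ThreadPoolExecutor

    def local_adjustment(local_state):
        # Hypothetical per-neuron computation that uses only signals
        # already present at the neuron, so no update has to wait on a
        # "later" layer's result.
        activation, influence_seen = local_state
        return 0.01 * influence_seen * activation

    # All updates can be dispatched concurrently -- contrast with
    # backpropagation, where layer N's error must wait for layer N+1's.
    local_states = [(0.9, 0.45), (0.4, 0.32), (0.7, 0.21)]
    with ThreadPoolExecutor() as pool:
        adjustments = list(pool.map(local_adjustment, local_states))
    print(adjustments)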

Likewise, the traditional backpropagation computation, which begins at the output layer of neurons and works its way sequentially back, tends to produce error values that become smaller and smaller as each layer is traversed. This is not a problem in Influence Learning, because the factors affecting any given neuron's adaptations are all close by from a connection standpoint.
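For intuition about those shrinking error values: each layer of backpropagation multiplies the error signal by a weight and an activation-function derivative, factors that are frequently less than one, so the signal decays geometrically with depth. A toy calculation with made-up numbers:

    # Toy illustration of the vanishing error signal in backpropagation.
    # Assume each layer scales the error by about 0.25 (e.g. a sigmoid
    # derivative, which peaks at 0.25, times a modest weight).
    error = 1.0
    for layer in range(10):
        error *= 0.25
    print(f"error signal after 10 layers: {error:.1e}")  # ~9.5e-07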

As stated, this is one of two learning methods described in the book. When used together, these two algorithms should make it possible to implement systems that are capable of continuously learning and adapting within a complex milieu.

These and other benefits are discussed in a brief blog entry at the book's site, titled “Introducing Influence Learning” (http://standoutpublishing.com/Blog/archives/53-Introducing-Influence-Learning.html).

. . . . . . .
More Information

More exploration of some of the concepts discussed in the book can be found at the author's blog site for the book (http://standoutpublishing.com/Blog/).

For a complete description of the algorithm, the book “Netlab Loligo” is the most detailed resource. It can be purchased from Amazon.com or from the purchase page at http://standoutpublishing.com/Prod/Book/Netlabv03a/

    Title:     Netlab Loligo: New Approaches To Neural Network Simulation
    ISBN:      978-0984425600
    Paperback: 332 pages
    Visit:     http://StandOutPublishing.com/Prod/Book/Netlabv03a/

###


Contact Author

John Repici