… and now, for something completely different (Monty Python)
3. Flexible recognition in machine learning
Tsvi Achler. Neuroscience (PhD), Medicine (MD), Electrical Engineering (BS-EECS Berkeley) — Optimizing Mind
Our brains are ‘computationally flexible’: we can immediately learn and use new patterns as we encounter them in the environment.
We actually ‘like’ developing those patterns, unleashing our curiosity to see and try new things for the sake of enjoyment.
Learning, tasting and traveling feed our brains with new patterns. Riding a hover-wheel, flying a drone, speaking to an Amazon Echo or playing a new game are all behaviors where our brains confront and develop new patterns for different uses and purposes.
Now, let’s look at it from a machine learning perspective:
(disclaimer 2: as said at the beginning of this post, I’m just a nerd without credentials trying to convey the message, standing on the shoulders of giants in what you’re about to read)
Tsvi Achler has been studying the brain from multidisciplinary perspectives, looking for a single, compact network: new machine learning algorithms and models that can display brain phenomena as seen in electrode recordings while performing flexible recognition.
The majority of popular brain models and machine learning algorithms remain feedforward, and the problem is that even when they recognize well, they are not optimal for recall, symbolic reasoning or analysis.
For example, you can ask a 4-year-old why they recognized something the way they did, or what they expect a bicycle to look like. It is difficult to do the same with current machine learning algorithms. Let’s take the example of recognizing a bicycle over a dataset of pictures. A bicycle, from a pattern perspective, consists of two wheels of the same or similar size, a handle, and some sort of supporting triangular structure.
In feedforward models the weights are optimised for successful recognition over the dataset (of bicycles, in our example). Feedforward methods will learn what is unique to a bicycle compared to all other items in the training set and learn to ignore what is not unique. The problem is that afterwards it is not easy to recall the original components (two wheels of the same or similar size, a handle, a supporting triangular structure), which may or may not be shared with other items.
Moreover, when something new must be learned, feedforward models have to figure out what is unique to the new item compared to the bicycle and all the other items they already know how to recognize. This requires re-doing the learning, rehearsing over the whole dataset all over again.
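To make the rehearsal problem concrete, here is a minimal sketch. The data and class labels are invented, and scikit-learn’s LogisticRegression simply stands in for any feedforward model: when a new class shows up, the only way to absorb it is to re-fit on everything.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature vectors for two known classes (say, bicycles and cars);
# the data here is invented purely for illustration.
X_bike = rng.normal(loc=0.0, size=(100, 10))
X_car = rng.normal(loc=2.0, size=(100, 10))
X_old = np.vstack([X_bike, X_car])
y_old = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression(max_iter=1000).fit(X_old, y_old)

# A new class arrives (say, hover-wheels). Because the feedforward weights
# encode "what is unique versus everything else seen so far", we cannot
# simply append a new pattern -- we must rehearse over old + new data:
X_hover = rng.normal(loc=-2.0, size=(50, 10))
X_all = np.vstack([X_old, X_hover])
y_all = np.concatenate([y_old, np.full(50, 2)])

clf = LogisticRegression(max_iter=1000).fit(X_all, y_all)  # full re-fit from scratch
```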
What Tsvi suggests is a feedforward-feedback machine learning model that estimates uniqueness during recognition, by performing optimization on the current pattern being recognised to determine neuron activation (note: this is optimization during recognition, NOT optimization to learn weights).
With this distinct model the weights are no longer feedforward; learning is more flexible and can be much faster, as there is no need to rehearse over the whole dataset.
In other words, this model is closer to how our brain actually works, as we don’t need to rehearse a whole dataset of samples to recognize new things.
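Here is a minimal sketch of that idea. To be clear, this is not Tsvi’s actual algorithm: it uses plain gradient descent on a reconstruction error, and the FeedbackRecognizer class and its methods are invented for illustration. What it does show is the key property: learning a new pattern is just storing it (no rehearsal), and recognition itself is an iterative optimization of neuron activations against fixed weights.

```python
import numpy as np

class FeedbackRecognizer:
    """Sketch of optimization-during-recognition (not Achler's exact model)."""

    def __init__(self):
        self.prototypes = []  # one stored pattern per known class

    def learn(self, pattern):
        # Learning is just storing the pattern -- no rehearsal over old data.
        self.prototypes.append(np.asarray(pattern, dtype=float))

    def recognize(self, x, steps=100, lr=0.1):
        W = np.stack(self.prototypes)      # (classes, features)
        y = np.full(len(W), 1.0 / len(W))  # neuron activations, optimized below
        for _ in range(steps):             # fewer steps = faster but less settled
            residual = x - W.T @ y         # feedback: input vs. its reconstruction
            y += lr * (W @ residual)       # adjust activations, not weights
            y = np.clip(y, 0.0, None)      # keep activations non-negative
        return y                           # one activation per known class

rec = FeedbackRecognizer()
rec.learn([1, 1, 0, 0])  # toy "bicycle" pattern
rec.learn([0, 0, 1, 1])  # toy second pattern
print(rec.recognize(np.array([1.0, 1.0, 0.0, 0.0])))  # ~[1, 0]
```

Note that adding a third class here is a single call to learn(), with no re-fit over the earlier patterns.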
Think about it: how many of the much-hyped hover-wheels did you need to see before recognizing the next one on the street? Same for a bicycle.
And, most importantly, with feedforward-feedback models learning happens with significantly less data.
Much less data required to learn, and much faster learning.
Optimization during recognition also displays properties observed in brain behaviour and cognitive experiments: predicting, oscillations, initial bursting with unrecognized patterns (followed by a more gradual return to the original activation) and, even more importantly, a speed-accuracy trade-off (so here is your catch, if you were looking for one).
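In a sketch like the one above, that trade-off is visible directly: stopping the recognition loop after fewer iterations gives a faster but less settled answer, while letting it run longer sharpens the activations at the cost of time.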