Hierarchical Temporal Memory (HTM) Theory and
Sparse Distributed Representation (SDR)


An alternative machine learning framework called 'Hierarchical Temporal Memory (HTM)' is claimed to be a much better abstraction of the human brain.
For your reference, the theory I will present can be revisited by reading these papers: Paper 0, Paper 1, Paper 2, or by watching this playlist:

In a nutshell:
* It doesn't use backpropagation.
* It learns through a sensory-*motor* model, the way we humans do.
* HTM uses only binary sparse input representations (unlike classical NNs), called Sparse Distributed Representations (SDRs); see the first sketch after this list.
* Sparsity is the key to learning, mirroring the sparse activation patterns of our brains.
* Learning takes place through reinforcement.
* Each node also accepts inputs from nodes in the same layer and from the layer above (unlike 'classical' DL).
* A node can be in one of three states: active, inactive, or predictive; see the second sketch after this list.
* HTM ended up implementing ideas 'similar' to those in Hinton's Capsule Networks.
* Its results are not yet on par with those of the deep learning approach.
* Implementations (NuPIC) are open source https://github.com/numenta/nupic and are currently maintained by Numenta: https://numenta.org/code/
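
Since SDRs are the central data structure here, the following is a minimal sketch of the idea in plain NumPy. The sizes (2048 bits, 40 active, i.e. ~2% sparsity) are typical figures from the HTM literature, and the helper names are illustrative, not NuPIC's actual API.

```python
import numpy as np

N = 2048  # total number of bits in the SDR
W = 40    # number of active bits (~2% sparsity)

def random_sdr(rng):
    """Return a binary vector with exactly W of its N bits set."""
    sdr = np.zeros(N, dtype=np.uint8)
    sdr[rng.choice(N, size=W, replace=False)] = 1
    return sdr

def overlap(a, b):
    """Similarity between two SDRs = number of shared active bits."""
    return int(np.count_nonzero(a & b))

rng = np.random.default_rng(0)
a, b = random_sdr(rng), random_sdr(rng)

# Two unrelated random SDRs share almost no active bits ...
print(overlap(a, b))      # typically 0-2 out of 40

# ... while a noisy copy of `a` (5 active bits dropped) still overlaps
# heavily, which is what makes SDRs so robust to noise.
noisy = a.copy()
noisy[rng.choice(np.flatnonzero(a), size=5, replace=False)] = 0
print(overlap(a, noisy))  # 35 out of 40
```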

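Similarly, here is a toy illustration of how the three node states interact in one activation step of HTM's temporal memory. The function and the (column, cell) encoding are drastic simplifications for illustration, not Numenta's implementation.

```python
def activate_cells(active_columns, predictive_cells, cells_per_column=4):
    """Toy HTM activation rule: in each feedforward-active column, cells
    that were in the predictive state become active (their prediction was
    confirmed); if no cell in the column was predicted, every cell fires
    at once ('bursting'), which signals novel, unanticipated input."""
    active_cells = set()
    for col in active_columns:
        predicted = {c for c in predictive_cells if c[0] == col}
        if predicted:
            active_cells |= predicted
        else:
            active_cells |= {(col, i) for i in range(cells_per_column)}
    return active_cells

# Column 3 had a predicted cell, so only that cell fires;
# column 7 had none, so it bursts with all 4 of its cells.
print(activate_cells(active_columns={3, 7}, predictive_cells={(3, 2)}))
```
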
Title: Hierarchical Temporal Memory (HTM) Theory and Sparse Distributed Representation (SDR)
Lecturer: Vedhas Pandit
Date: 11 AM on 06-02-2018
Building/Room: Eichleitnerstraße 30 / 207