10 July 2008

Predictatron

The other day I built a computer that predicts its input. I call it the Predictatron. Here’s a picture:

[Photo: the Predictatron's front panel.]
As you can see, the front panel has a row of switches and a row of lights, labeled “Observation” and “Prediction” respectively. The switches are the inputs, and the lights display the computer’s predictions of those inputs. The extra switch on the bottom-left is the clock. It has two positions, marked “Predict” and “Observe”. It works like this:
  • First you switch the clock to Predict. Straight away, the lights display the computer’s prediction for the states of the inputs that you are about to set on the switches. You can write this prediction down if you want.
  • Next you flip the input switches to whatever you want, and switch the clock to Observe. The computer observes the states of the inputs at that moment, and stores them away. It remembers the entire sequence of inputs since you turned the power on. It also updates some internal data structures, ready to make better predictions in the future.
  • Then you flip the clock back to Predict and so on, repeating the cycle over and over as you step your way through the sequence of inputs. Predict, Observe. Predict, Observe. (There's a small code sketch of this cycle just after the list.)
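Here's a rough sketch of that cycle in Python, just to pin down the shape of the loop. The Predictor class and its predict()/observe() methods are my own stand-ins, not the machine's actual internals (and certainly not Blerpl); the toy rule here just guesses that the next observation will repeat the last one.

    class Predictor:
        """Toy predictor: guesses the next observation repeats the last one."""

        def __init__(self, num_inputs):
            self.last = [False] * num_inputs   # last observed switch states
            self.history = []                  # every observation since power-on

        def predict(self):
            # Clock -> Predict: the lights show a guess for the switches
            # you are about to set.
            return list(self.last)

        def observe(self, switches):
            # Clock -> Observe: store the switch states and update whatever
            # internal data structures the predictor keeps.
            self.history.append(list(switches))
            self.last = list(switches)

    # Stepping through a sequence of inputs: Predict, Observe, Predict, Observe...
    p = Predictor(num_inputs=8)
    for switches in ([True] * 8, [False] * 8, [True] * 8):
        guess = p.predict()      # read the Prediction lights
        p.observe(switches)      # set the switches, flip the clock to Observe
        print(guess, switches)

Whatever happens inside observe() is where the real learning lives; the outer loop is just Predict, Observe, Predict, Observe.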
To predict further into the future than one clock cycle, you loop the Prediction outputs back into the Observation inputs. At each step you're asking, "OK, what if you're right? What happens then?" This way you can predict as far into the future as you want.
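Continuing the toy sketch above, looping the predictions back in looks like this (roll_forward is my name for it, nothing official):

    def roll_forward(p, steps):
        """Predict several clock cycles ahead by feeding each prediction
        back in as if it had been observed."""
        future = []
        for _ in range(steps):
            guess = p.predict()   # read the Prediction lights
            p.observe(guess)      # loop them back into the Observation switches
            future.append(guess)
        return future

    p = Predictor(num_inputs=8)
    p.observe([True, False] * 4)       # give it something to go on
    print(roll_forward(p, steps=3))    # three cycles into the imagined future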

If you just set the input switches randomly, you never get very good predictions. But if the input sequence contains any kind of pattern, structure, correlation, order, or language, even a noisy one, the Predictatron will do a decent job of learning the rules and making better predictions. The learning algorithm it uses is called Blerpl, which I will describe in detail in a future article.
