o The presentation step can be divided easily among processors, since the distance from each weight to the current input sequence is independent for every weight-sequence pair.
o Batch learning is used to reduce interdependence between updates.
o Instead, a local record is kept of which neuron won each sequence.
o At the end of each epoch this record is synchronized across all processors.
o Each processor then updates its local weights using the learning rule and the global record of which weight to update for each sequence.
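The steps above can be sketched as follows. This is a minimal single-machine simulation, not the original implementation: processors are imitated by partitioning the sequence set, the "learning rule" is assumed to be a simple batch move toward the mean of each neuron's winning sequences, and the function names (`find_winners`, `batch_epoch`) and the learning rate are illustrative choices.

```python
import numpy as np

def find_winners(weights, sequences):
    """Presentation step for one 'processor': for its share of sequences,
    find the closest weight (winning neuron). Distances are independent
    per weight-sequence pair, so this partitions cleanly."""
    # d has shape (n_sequences, n_neurons)
    d = np.linalg.norm(sequences[:, None, :] - weights[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def batch_epoch(weights, sequences, n_procs=4, lr=0.1):
    """One epoch of batch learning. Each simulated processor keeps a
    local record of winners; at the end of the epoch the records are
    synchronized into a global record, and every copy of the weights
    is updated from that record (an assumed mean-based batch rule)."""
    parts = np.array_split(np.arange(len(sequences)), n_procs)
    # local records, one per simulated processor
    local = [find_winners(weights, sequences[idx]) for idx in parts]
    # synchronize: assemble the global record of winning neuron per sequence
    global_record = np.empty(len(sequences), dtype=int)
    global_record[np.concatenate(parts)] = np.concatenate(local)
    # batch update: move each winning weight toward its assigned sequences
    new_w = weights.copy()
    for j in range(len(weights)):
        assigned = sequences[global_record == j]
        if len(assigned):
            new_w[j] += lr * (assigned.mean(axis=0) - weights[j])
    return new_w, global_record
```

Because the weights are frozen during the presentation step, the partitioned winner records agree exactly with a serial pass, which is what makes the per-epoch synchronization sufficient.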