X-Message-Number: 26992
From: 
Date: Sat, 10 Sep 2005 02:15:55 EDT
Subject: Uploading technology (1.iv.2) The column level.

Uploading technology (1.iv.2) The column level.

The cerebral cortex may be divided into 50 or so cortical areas or, on
a finer scale, into one thousand maps, each with 300 columns (300,000
columns in all). There may be five million cortical bands and 300
million microcolumns, each with 100 neurons or so.
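
For bookkeeping, these counts multiply out as follows (a plain Python
restatement of the figures above, nothing more):

    maps = 1_000                  # finer-scale cortical maps
    columns = maps * 300          # 300 columns per map
    microcolumns = 300_000_000    # microcolumns in the whole cortex
    neurons = microcolumns * 100  # ~100 neurons per microcolumn
    print(columns)                # 300000 columns
    print(neurons)                # 30000000000, i.e. ~3e10 neurons
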
Most channels have a refractory period, so there can't be a new pulse
for a given time. A typical refractory time is 100 ms; in that
duration, an action potential travels one or two centimeters in a
non-myelinated axon or a dendrite. With a density of 60,000
neurons/sq. mm, nearly all neurons in a microcolumn can be contacted
in less than that time. The network dynamics is then limited by the
refractory period; this is the Glauber dynamics. It may be good up to
the column level.
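
As a back-of-the-envelope check of that reach argument (the ~0.15 m/s
conduction velocity is my assumption, picked to match the 1-2 cm
figure above):

    import math

    refractory_s = 0.100   # 100 ms refractory time
    velocity_m_s = 0.15    # assumed slow, unmyelinated conduction
    reach_mm = velocity_m_s * refractory_s * 1000
    print(reach_mm)        # 15.0 mm, i.e. 1.5 cm per refractory period

    # Neurons inside a disc of that radius at 60,000 neurons/sq. mm:
    print(int(math.pi * reach_mm**2 * 60_000))   # ~4.2e7 neurons

That is vastly more than the ~100 neurons of a microcolumn, which is
the point: within one refractory time, every neuron of the
microcolumn is reachable.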

Beyond that scale, the transmission time is of the same order as the
refractory time and there is another dynamics, the Little one.

What is interesting about this dynamical change is how neurons may
communicate globally. For a given neuron, each nearby neuron in the
same column is an individual one; "nearby" here means within the
Glauber area. Beyond that, the neuron sees only other Glauber blocks,
that is, pools of 100,000 or so neurons. Each neuron pool works as a
black box for the rest of the brain.

This opens up a possibility for uploading: assume a simulation is run
on a single column; it could be seen as self-contained. If all the
possible inputs are tested and all outputs registered or deduced in
some way, a black box built from a very different technology could be
substituted for the neural network and nevertheless give the same
outputs.
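
As a toy illustration of that substitution idea (the class and its
4-bit patterns are mine, purely for illustration; capturing a real
column would of course be far harder):

    class ColumnBlackBox:
        """A column reduced to a lookup table from inputs to outputs."""
        def __init__(self):
            self.table = {}

        def record(self, stimulus, response):
            # Register one input/output pair during the test phase.
            self.table[stimulus] = response

        def respond(self, stimulus):
            # Replay the registered behaviour; the substrate no
            # longer matters.
            return self.table[stimulus]

    box = ColumnBlackBox()
    box.record(stimulus=0b1011, response=0b0110)
    print(bin(box.respond(0b1011)))   # 0b110
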

Here is a possibility: the main information between neurons seems to
be carried by firing patterns. Mathematical analysis demonstrates
that the pattern number P can't exceed the number N of neurons. Here,
"neurons" stands for the simplified W. S. McCulloch and W. Pitts
model from 1943. Even as a far simplified version of real neurons,
they are interesting because they are universal; that is, they can
simulate a true neuron if a sufficient number of them is assembled
correctly. A real neuron may need one McCulloch-Pitts unit at each
synapse and each dendrite branching point. This would call for
something like 100,000 McCulloch-Pitts neurons for each biological
pyramidal one.
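
For reference, a McCulloch-Pitts unit is tiny: binary inputs, fixed
weights, a threshold, binary output. A minimal rendering:

    def mcculloch_pitts(inputs, weights, threshold):
        # Fires (1) when the weighted input sum reaches the threshold.
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # A two-input AND gate as a single unit:
    print(mcculloch_pitts([1, 1], [1, 1], threshold=2))   # 1
    print(mcculloch_pitts([1, 0], [1, 1], threshold=2))   # 0
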

One column would thus need ten billion elementary neurons. It would
not be cost effective to build a brain that way, but it is a
theoretical possibility. It tells us that a column can't display more
than ten billion different outputs to the outside world. This does
not seem a very interesting result; nevertheless, it brightens when
seen another way. Assume the maximum firing rate of a neuron is one
pulse every ten milliseconds and a pattern has a maximum duration
near one second, so it may contain one hundred pulses. Each such
pulse may be present or not in an actual pattern, so there are 2^100
possibilities, or about 10^30 possible patterns. Reducing that value
to 10^10 is an interesting result.

That assumes only one kind of action potential; if strong and long
APs are allowed as well, so that each pulse slot can take four states
rather than two, the theoretical possibilities jump to 4^100, about
10^60. A 10^10 limit is then even more interesting.
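
The counting is easy to check directly (reading "strong and long" APs
as four states per pulse slot is my interpretation of the 10^60
figure):

    slots = 100          # one 10 ms slot over a one-second pattern
    print(2 ** slots)    # ~1.27e30: pulse present/absent per slot
    print(4 ** slots)    # ~1.61e60: four AP kinds per slot
    print(10 ** 10)      # the McCulloch-Pitts bound on column outputs
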

One way to interpret these values would be to say that a pattern
can't extend for a full second. If it was limited to 150 ms or so,
all possibilities would be used. In fact, it seems more natural to
lengthen the pattern time and allow a decreasing density of realised
possibilities with time. So, short patterns would use nearly all
possible combinations and long ones would be only a small subsample
of all possibilities.
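
Solving for the pattern length that exhausts the 10^10 budget, under
the same counting assumptions as above:

    import math

    budget = 1e10
    print(math.log2(budget))      # ~33.2 slots -> ~330 ms at 1 bit/slot
    print(math.log(budget, 4))    # ~16.6 slots -> ~166 ms at 2 bits/slot
    # The two-bit case lands near the 150 ms limit suggested above.
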

G-protein gated channels would presumably control the pattern length,
and second-messenger currents would shift from one pattern to
another. There may be more involved combinations between currents,
even for fast chemical channels.

Given the noisy environment of neurons, it seems a pattern must be
recognizable even if it has undergone a lot of alteration. This
implies that not all possible alterations are legitimate patterns,
and so the true repertory of used patterns must be a small subsample
of all possibilities. Out of the 10^10 possible AP combinations,
maybe one out of one thousand would be used. Put another way, a given
column would use only 10^7 patterns. Of those, maybe some thousands
would be in regular use.

Now, what is a pattern? It is a set of action potentials; 64 would be
a good maximum limit. Each would be defined by a two-bit code, so a
pattern is defined by a 128-bit string, or 16 bytes. Ten million
patterns would take 160 MB. If there are 100,000 columns in the
cortex, the full repertory would take 16 TB, that is, 32 hard disks,
each with a 500 GB capacity. Using 16 PCs, each with 2 HDs, would fit
the bill.
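
The storage arithmetic, spelled out (this just repackages the figures
above):

    pattern_bytes = (2 * 64) // 8               # 2 bits per AP, 64 APs
    column_bytes = 10_000_000 * pattern_bytes   # 10^7 patterns/column
    total_bytes = 100_000 * column_bytes        # 100,000 columns
    print(column_bytes / 1e6, "MB per column")      # 160.0
    print(total_bytes / 1e12, "TB in all")          # 16.0
    print(total_bytes / 500e9, "disks of 500 GB")   # 32.0
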

This brings into play a new concept: a brain could be simulated as a
set comprising 100,000 "black boxes", each simulated using time
sharing on a set of PCs. The individual-neuron simulation would be
limited to a single column with something like 100 to 300 thousand
neurons. Using time sharing, this would fit on approximately 10 - 15
current generation FPGAs. The same material device would simulate
each column, one after the other, and give out its pattern repertory
and how the patterns are displayed to the outside given all possible
inputs.
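
In outline, the time-shared schedule would look like this; the engine
interface is a placeholder of my own, not a real FPGA toolchain API:

    def extract_repertories(columns, engine, test_inputs):
        # One shared simulation engine visits each column in turn and
        # records its input/output repertory (its black box).
        repertories = {}
        for column_id, column_state in columns.items():
            engine.load(column_state)   # reconfigure the shared engine
            repertories[column_id] = {
                stim: engine.run(stim) for stim in test_inputs
            }
        return repertories
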

This brain simulation would be harder to evolve than a full
implementation of each individual neuron, but it would be a valuable
first step, using only off-the-shelf electronic components without
recourse to specialised ASIC neuromorphic chips.

Yvan Bozzonetti.




