X-Message-Number: 21413
From: randy <>
Subject: math and the end of the universe
Date: Mon, 17 Mar 2003 08:22:20 -0600
References: <>

On 17 Mar 2003 10:00:01 -0000, Pat Clancy wrote:

>So, let's see, the human brain has on the order of 10^15
>synapses. Even if each synapse could have only 2 states (which is not
>the case), that's 2^(10^15) states, which is slightly larger than
>10^10^14 (that's a 1 followed by 10^14 zeroes).  So the size of the
>training data set is on the order of this size. A neural net will need
>on the order of thousands of training iterations; let's assume just
>1000. Now assuming each training iteration took just a nanosecond
>(highly unrealistic), that's (10^3)*(10^(10^14))*(10^(-9)) =
>10^((10^14)-6) seconds, still approx. 10^10^14 seconds. 

Hmmm. I think I disagree.  Assuming a minimum order of 10*(10^14) ==
10^15 training examples, I get (10^3)*(10^15)*(10^(-9)) == 10^(3+15-9)
== 10^9 seconds required to train the network.

>Assuming the age
>of the universe is 10^10 years, 

OK (but I have serious doubts about current theories of the universe,
and, therefore, for planning purposes, I always just assume that the
universe will go on forever.  But I digress...)

>the time to train the network is about
>10^12 times the age of the universe. 

Hmmm, I think I disagree.  I get 60 sec/min
(please correct me if I am wrong :-),
and 3600 sec/hr, and 8.64*10^4 sec/day, and that gives
~3.15*10^7 sec/year.

Therefore we have (3.15*10^7) sec/yr * (10^10 yrs) == ~3*10^17
seconds available to train the network.

Therefore, the ~3*10^17 seconds available to train the network is >>
the 10^9 seconds required to train the network. In fact, the network
appears to require only about 30 years to train.
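
The calendar arithmetic, in the same style of sketch (again, Python is
just a checking tool here, nothing beyond the numbers above):

    seconds_per_year = 60 * 60 * 24 * 365          # ~3.15e7
    seconds_available = seconds_per_year * 10**10  # ~3e17 over 10^10 years
    years_required = 10**9 / seconds_per_year      # ~31.7 years
    print(seconds_available, years_required)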

Plus, we may be able to break down the network into component parts,
thus enabling parallel training.
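
A purely speculative sketch of what that would buy us (the clean k-way
split is my assumption; real components would presumably be coupled):

    def parallel_training_seconds(serial_seconds, k):
        # Idealized: k independent components, no communication cost.
        return serial_seconds / k

    print(parallel_training_seconds(10**9, 1000))  # ~1e6 sec, under two weeks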
