On Tue, Jul 2, 2024, 4:00 PM John Clark <johnkcl...@gmail.com> wrote:

> On Tue, Jul 2, 2024 at 12:52 PM Jason Resch <jasonre...@gmail.com> wrote:
>
> *> I also see it as surprising that through hardware improvements alone,
>> and without specific breakthroughs in algorithms, we should see such great
>> strides in AI.*
>
>
> I was not surprised because the entire human genome only has the capacity
> to hold 750 MB of information; that's about the amount of information you
> could fit on an old-fashioned CD, not a DVD, just a CD. The true number
> must be considerably less than that because that's the recipe for building
> an entire human being, not just the brain, and the genome contains a huge
> amount of redundancy; 750 MB is just an upper bound.
>

That the initial code to write a "seed AI" algorithm could take less than
750 MB is, as you say, not surprising.

My comment was more to reflect the fact that there has been no great
breakthrough in solving how human neurons learn. We're still using the same
method of backpropagation invented in the 1970s, with the same neuron
model from the 1960s. Yet simply scaling this same approach up, with more
training data and training time, and with more neurons arranged in more
layers, has produced all the advances we've seen: image and video
generators, voice cloning, language models, and master-level players of Go,
poker, chess, Atari games, StarCraft, etc.



>
>> *> Humans no longer write the algorithms these neural networks derive;
>> the training process comes up with them. And much like the algorithms
>> implemented in the human brain, they are in a representation so opaque
>> that they escape our capacity to understand. So I would argue, there have
>> been massive breakthroughs in the algorithms that underlie the advances in
>> AI; we just don't know what those breakthroughs are.*
>
>
> That is a very interesting way to look at it, and I think you are
> basically correct.
>

Thank you. I thought you might appreciate it. ☺️



>
>> *> I think the human brain, with its 600T connections might signal an
>> upper bound for how many are required, but the brain does a lot of other
>> things too, so the bound could be lower.*
>>
>
> The human brain has about 86 billion neurons with 7*10^14 synaptic
> connections (a more generous estimate than yours), but the largest
> supercomputer in the world,
>

I think that figure comes from multiplying ~100 billion neurons by an
average of 7,000 synaptic connections per neuron. If you multiply your 86
billion figure by 7,000 synapses per neuron, you get my figure.
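As a quick back-of-envelope check in Python (using only the figures quoted in this thread; nothing here is a new estimate):

```python
# Synapse-count check: 86 billion neurons x ~7,000 synapses per neuron,
# the two figures quoted above.
neurons = 86e9
synapses_per_neuron = 7_000
total_synapses = neurons * synapses_per_neuron

print(f"{total_synapses:.2e}")  # 6.02e+14, i.e. ~600 trillion connections
```

Which lands at ~6*10^14, consistent with both the 600T figure and the ~7*10^14 figure above.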


> the Frontier Computer at Oak Ridge, has 2.5*10^15 transistors, over three
> times as many. And we know from experiments that a typical synapse in the
> human brain "fires" between 1 and 50 times per second, but a typical
> transistor in a computer "fires" about 4 billion times a second (4*10^9).
> It also has 9.2*10^15 bytes of fast memory. That's why the Frontier
> Computer can perform 1.1*10^18 double precision floating point
> calculations per second and why the human brain cannot.
>

The human brain's computational capacity is estimated to be in the exaop
range (assuming ~10^15 synapses firing at an upper bound of 1,000 times per
second). So I agree with your point: we have the computation necessary; the
question now is whether we have the software. Some assumed we would have to
upload a brain to reverse engineer its mechanisms, but it now seems the
techniques of machine learning will reproduce these algorithms well before
we apply the resources necessary to scan a human brain at synaptic
resolution.
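Plugging the thread's numbers into a rough comparison (a sketch only; the synapse count and firing rate are the upper-bound estimates above, and "synaptic ops" and FLOPS are not really commensurable units):

```python
# Brain vs. Frontier, using the upper-bound figures from this thread.
synapses = 1e15          # upper-end synapse count used above
max_firing_hz = 1_000    # assumed upper-bound firing rate
brain_ops_per_sec = synapses * max_firing_hz   # ~1e18 "synaptic ops"/s

frontier_flops = 1.1e18  # Frontier's double-precision peak, quoted above

print(f"brain ~{brain_ops_per_sec:.1e} ops/s, Frontier {frontier_flops:.1e} FLOPS")
```

By this crude measure the two are within a factor of two of each other.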


> By way of comparison, Ray Kurzweil estimates that the hardware needed to
> emulate a human mind would need to be able to perform 10^16 calculations
> per second and have 10^12 bytes of memory.
>

Those numbers assume the brain is about 100-1000 times less efficient than
it could be. It very well might be that much less efficient, but we should
treat those estimates as optimistic lower bounds.
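For what it's worth, the 100x end of that range falls directly out of the two estimates in this thread (the 1000x end would come from assuming lower firing rates; that extension is my reading, not something stated above):

```python
# Ratio of the exaop-range brain estimate above to Kurzweil's figure.
brain_upper_ops = 1e18   # ~10^15 synapses x 1,000 Hz, from above
kurzweil_ops = 1e16      # Kurzweil's calculations-per-second estimate

print(brain_upper_ops / kurzweil_ops)  # 100.0
```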

And the calculations would not need to be 64 bit double precision floating
> point, 8 bit or perhaps even 4 bit precision would be sufficient. So in
> the quest to develop a superintelligence, insufficient hardware is no
> longer a barrier.
>

There are various kinds of superintelligence, as defined by Bostrom: depth
of thinking, speed of thinking, and breadth of knowledge. I think current
language models are on the precipice (if not past it) of superintelligence
in terms of speed and breadth of knowledge. But it seems to me that AI is
still behind humans in terms of depth of thinking (e.g., how deeply it can
follow a sequence of logical inferences). This may be limited by the
existing architecture of LLMs, whose neural networks have only so many
layers.

Jason


John K Clark    See what's on my new list at  Extropolis
> <https://groups.google.com/g/extropolis>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to everything-list+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/CAJPayv0FmETnmRQ2VK_EKxJ%2BmyBjkaetVY6swTT7QRoK_ofqOw%40mail.gmail.com
> <https://groups.google.com/d/msgid/everything-list/CAJPayv0FmETnmRQ2VK_EKxJ%2BmyBjkaetVY6swTT7QRoK_ofqOw%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
