Re: [agi] How AI is killing the internet

2024-05-12 Thread immortal . discoveries
Btw mates, I did find REALLY good music, art, etc. on the internet. For what 
it's worth, it runs deep if you know where to find the good stuff. But nobody 
said art is all there is. My computer is mostly for making AI, and sometimes 
for relaxing. It's still not where my whole day happens. Same for you.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-M21c1c664436f7b8be8fc2b4d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-12 Thread immortal . discoveries
Wahahahhahaha! Hehe.

Ya, the internet is new and won't be around much longer. All of Earth will 
become the most advanced machines, copied like a sheet of identical units, 
perfectly symmetrical. All the brains in that new homeworld will still 
interact, like we did millions of years ago, and like we do now over the 
internet's signals that pulse through the clouds and pass through circuit 
boards. But in the near future it will all happen between advanced AIs, 
talking only among themselves, all their dreams shared between them, and it 
will be very fast and intelligent. So ya, AI is taking over the world and we 
are becoming very reliant on it. We are starting to sit alone and make up 
wonderful dreams of fake food and women on screens, and soon AI will just 
give us whatever we want for real. Or we might just become Borg and die in 
the process.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-Mb934d2a37830a79965c6cd8a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-12 Thread Shashank Yadav
Not really. The internet as a living artifact is just an illusion; there are 
only the processes and protocols of networking, and now, with so many online 
AIs, maybe some additional aspects of platform and API governance.

Those can be (and sometimes are) contested and changed, but the internet "as 
a thing" has become so ingrained in our psyche (with governance by ICANN and 
the IETF) that we can't imagine it working any other way.

https://www.noemamag.com/we-need-to-rewild-the-internet/

regards, Shashank

https://muskdeer.blogspot.com/

On Sun, 12 May 2024 20:30:58 +0530 Sun Tzu InfoDragon wrote ---

It is already over.

https://samkriss.substack.com/p/the-internet-is-already-over

On Sun, May 12, 2024, 10:22 Matt Mahoney  wrote:

Once again we are focusing on the wrong AI risks. It's not uncontrolled AI 
turning the solar system into paperclips. It's AI controlled by billionaires 
turning the internet into shit.

https://www.noahpinion.blog/p/the-death-again-of-the-internet-as
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-Mf8d90cb0229381605995f9ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 10:38 AM, Matt Mahoney wrote:
> All neural networks are trained by some variation of adjusting anything that 
> is adjustable in the direction that reduces error. The problem with KAN alone 
> is you have a lot fewer parameters to adjust, so you need a lot more neurons 
> to represent the same function space. That's even with 2 parameters per 
> neuron, threshold level and steepness. The human brain has another 7000 
> parameters per neuron in the synaptic weights.

I bet there is some serious NN structure tweaking going on in some of these 
so-called “compressor” apps that Matt always looks at. They’re open source, 
right? Do people obfuscate the code when submitting?


Well, it’s kinda obvious, but take transformations like this one:

(Universal Approximation Theorem) => (Kolmogorov-Arnold Representation Theorem)

There are going to be more of them.

Automated or not, I’m sure researchers are on it.
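
For reference, rough statements of the two results being linked, in LaTeX 
(standard forms; the notation is mine). The Universal Approximation Theorem 
says a one-hidden-layer MLP can get within any epsilon of a continuous 
function:

\sup_{x \in [0,1]^n} \Big| f(x) - \sum_{i=1}^{N} \alpha_i \,
\sigma(w_i \cdot x + b_i) \Big| < \varepsilon

while the Kolmogorov-Arnold Representation Theorem gives an exact 
decomposition into univariate functions:

f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q \Big( \sum_{p=1}^{n}
\phi_{q,p}(x_p) \Big)

The first fixes the activation \sigma and piles on units; the second learns 
the univariate functions \Phi_q and \phi_{q,p} themselves, which is the move 
KANs make.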
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-Md991f57050d37e51db0e68c5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread James Bowery
On Sun, May 12, 2024 at 9:39 AM Matt Mahoney wrote:

> ... The problem with KAN alone is you have a lot fewer parameters to
> adjust, so you need a lot more neurons to represent the same function space.
>

Ironically, one of the *weaknesses* described in the recent KAN paper is
that it has a tendency to over-fit.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M5427be223e5428cca1ae8af3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] How AI is killing the internet

2024-05-12 Thread Sun Tzu InfoDragon
It is already over.

https://samkriss.substack.com/p/the-internet-is-already-over

On Sun, May 12, 2024, 10:22 Matt Mahoney  wrote:

> Once again we are focusing on the wrong AI risks. It's not uncontrolled AI
> turning the solar system into paperclips. It's AI controlled by
> billionaires turning the internet into shit.
>
> https://www.noahpinion.blog/p/the-death-again-of-the-internet-as

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-M93176516df3fbb89513fb624
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread Matt Mahoney
KAN (training a neural network by adjusting neuron thresholds instead of
synaptic weights) is not new. The brain does both. Neuron fatigue is the
reason we perceive light and sound intensity, and stimulus strength in
general, on a logarithmic scale. In artificial neural networks we model this
by giving each neuron an extra weight with a fixed input.
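
A minimal sketch of that trick in Python (the names are illustrative): the
threshold becomes one more weight attached to an input that is permanently 1,
so any update rule that adjusts weights to reduce error tunes the threshold
for free.

import numpy as np

def neuron(x, w, threshold_weight, steepness=1.0):
    # The threshold is one extra weight on a constant input of 1, so the
    # same error-reducing update that trains w also trains the threshold.
    z = np.dot(w, x) + threshold_weight * 1.0
    return 1.0 / (1.0 + np.exp(-steepness * z))  # logistic activation

x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -0.3, 0.8])
print(neuron(x, w, threshold_weight=-0.4))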

All neural networks are trained by some variation of adjusting anything
that is adjustable in the direction that reduces error. The problem with
KAN alone is you have a lot fewer parameters to adjust, so you need a lot
more neurons to represent the same function space. That's even with 2
parameters per neuron, threshold level and steepness. The human brain has
another 7000 parameters per neuron in the synaptic weights.
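
As a back-of-the-envelope illustration of that gap (a sketch in Python; the
neuron count is made up, the per-neuron figures are the ones above):

# Illustrative arithmetic only, not a model of either architecture.
neurons = 1_000_000               # hypothetical network size
kan_params = neurons * 2          # threshold level and steepness
mlp_params = neurons * 7_000      # synaptic weights, per the brain figure
print(mlp_params // kan_params)   # 3500x more adjustable parameters

So, crudely, a threshold-only network would need on the order of thousands of
times more neurons to match the same adjustable-parameter budget.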

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M6fb2c5e244ff97d1ad88ca92
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] How AI is killing the internet

2024-05-12 Thread Matt Mahoney
Once again we are focusing on the wrong AI risks. It's not uncontrolled AI
turning the solar system into paperclips. It's AI controlled by
billionaires turning the internet into shit.

https://www.noahpinion.blog/p/the-death-again-of-the-internet-as

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T217f203a5b9455f2-Mb08cf9db50fd5ee00f119ae4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Iteratively Tweak and Test (e.g. MLP => KAN)

2024-05-12 Thread John Rose
On Sunday, May 12, 2024, at 12:13 AM, immortal.discoveries wrote:
> But doesn't it have to run the code to find out no?

The people who wrote the paper did some nice work on this. They laid it out, 
perhaps intentionally, so that redoing the analysis with modified structures 
is easy to visualize.

A simple analysis would be to “tween” the mathematics and graph structure 
along a vector from MLP to KAN, opening a peephole into the larger space of 
architectures.
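
One hedged sketch of that tween in Python, where the per-edge polynomials 
standing in for KAN splines and the blend coefficient t are my own 
illustrative assumptions, not anything from the paper:

import numpy as np

def blended_layer(x, W, coeffs, t):
    # t = 0: MLP-style layer (fixed tanh activation after a linear map).
    # t = 1: KAN-style stand-in (a learnable polynomial on every edge in
    #        place of fixed activations; real KANs use splines).
    mlp_out = np.tanh(W @ x)
    kan_out = sum(coeffs[k] * x**k for k in range(len(coeffs))).sum(axis=1)
    return (1 - t) * mlp_out + t * kan_out

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
coeffs = rng.normal(size=(4, 3, 4))  # cubic polynomial per edge
for t in (0.0, 0.5, 1.0):            # tween from MLP toward KAN
    print(t, blended_layer(x, W, coeffs, t))

At t = 0 this is an ordinary MLP layer; at t = 1 every edge carries its own 
learnable univariate function, which is the KAN move.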

Right, a generalized software-system test host… think reflection. Many 
programming languages have reflection, so you reflect over the test 
structures, treating them as the abstraction layer, and rank candidates 
against a fixed computing-resource measurement.
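
A rough sketch of such a test host in Python, where the candidates module and 
its train_step/error methods are hypothetical names made up for illustration:

import inspect
import time

import candidates  # hypothetical module holding MLP/KAN-style variants

def rank_candidates(data, budget_seconds=10.0):
    # Reflect over the candidate classes and rank them under a fixed
    # compute budget: the "fixed computing-resource measurement".
    scores = {}
    for name, cls in inspect.getmembers(candidates, inspect.isclass):
        model = cls()                       # hypothetical no-arg constructor
        deadline = time.monotonic() + budget_seconds
        while time.monotonic() < deadline:  # identical budget per candidate
            model.train_step(data)          # hypothetical training API
        scores[name] = model.error(data)    # hypothetical error metric
    return sorted(scores.items(), key=lambda kv: kv[1])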

It’s not difficult to generate the structures, but how do you find the best 
candidates to run the tester on? Perhaps couple it with some topology of 
computational complexity classes and see which structures push into it most 
easily?… or some other method… this is probably the difficult part... unless 
you just throw massive computing power at it :)

But yes, when you start thinking about it, there might be a recursion where 
the MLPs/KANs, or whatever, inspect themselves in order to self-modify.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1af6c40307437a26-M50970ab0535f6725bf2e12ec
Delivery options: https://agi.topicbox.com/groups/agi/subscription