[agi] GPT-4o also cooler:

2024-05-17 Thread immortal . discoveries
https://twitter.com/SmokeAwayyy/status/1791307090197356708
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tad57ac32c6d24962-M3ca475d3cdbb4270c0813704
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Heads up

2024-05-17 Thread Alan Grimes via AGI

This is a PSA.

Just wanna tell you guys to lock your tray tables in their upright 
position, make sure your seatbelt is cinched tight and, um, assume the 
crash position.


Yeah, it's time.

Silver is at $31.43, which means it is decisively above the red line of 
$30, which means the party has started. The pattern the price riggers 
have established is that their standard market interventions are always 
done on Wednesdays, and emergency adjustments are sometimes done over the 
weekends. There is a small chance that they will, again, be able to push 
the price down into the $2X range. If we are still above $30 by midday 
Monday, then consider this signal confirmed. This WILL bring down the 
entire financial system, all of it: banks, derivatives, currencies, 
equities, debt instruments, all of it. Judging from the feel of things, 
I expect a CBDC to be introduced roughly the first week of June and then 
fail, along with the government itself, by the end of September.


https://silverprice.org/

Times will be tough, as changes will be both rapid and dramatic. The 
chance of a Boogaloo, as I was worrying about several years ago, is low at 
this point, though a great many people will have severe and irrational 
emotional reactions that would normally be quite out of character for 
them. Be prepared for this! Yes, there are guilty people out there; THEY 
MUST BE BROUGHT TO TRIAL!!! WE NEED TO DOCUMENT EVERYTHING THAT HAS BEEN 
DONE TO US, DON'T LET THEM TAKE THEIR SECRETS TO THEIR GRAVES!!! That is 
the only way we can restore human civilization to a healthy state. YOU 
will want to run out and lynch every single one of them; don't! We need 
information a hundred times more than vengeance! The public executions 
can begin the day after we are sure we have found all of the 
conspirators, with none remaining in any position to hurt us again.


Once again, I am expecting 4-5 months of very limited food availability.

--
You can't out-crazy a Democrat.
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T92315ac1d2bf90d9-M03c28320d9d69bcb4e055ca5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread immortal . discoveries
Matt,

On my hard puzzle, GPT-4o still thinks it can say to use a spoon to push the 
truck, even though the truck drives and I told it to follow physics. No human 
would make this mistake, lol.

And no, Matt, GPT-4o also cannot do long-horizon tasks, part of what we WILL 
need to get AGI! Sure, Windows 12 would not be a day's worth of work, but 
humans can still work on one thing for months, even years.

And yes, it has no body; and yes, self-driving cars are similar, but no, they 
aren't human bodies doing human labor, nor are our toy machines in factories 
human in form or ability.

Lastly, you didn't realize this, but my brain can tell if generated video is 
right. What about feeding Sora "a deer that grows longer and wider, its toes 
extending down its mouth and out its bum, then separating into four and 
growing big at the ends, while the deer is half-separating into two, all while 
bunnies are trying to stitch the parts together and the deer is trying to 
dance, while upgrading into a blue radiating genie"? I could add 20 more 
things, and I could tell if it was all correct. In some way I can also see it 
all if I think hard.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M7cdbfc08951ff25893b4f388
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Friday, May 17, 2024, at 10:07 AM, Sun Tzu InfoDragon wrote:

>> the AI just really a regurgitation engine that smooths everything over
>> and appears smart.
> 
> No you!

I agree. Humans are like memetic switches, information repeaters, reservoirs. 
The intelligence is in the collective; we're just individual host nodes. Though 
some originate intelligence more than others.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M5c3feeec4fa21dc6b3116830
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread Sun Tzu InfoDragon
>  the AI just really a regurgitation engine that smooths everything over
> and appears smart.

No you!

On Fri, May 17, 2024 at 8:20 AM John Rose  wrote:

> On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
>
> Yet another demonstration of how Alan Turing poisoned the future with his
> damnable "test" that places mimicry of humans over truth.
>
>
> This unintentional result of Turing’s idea is an intentional component of
> some religions. The elder wise men wanted to retain control over science as
> science spun off from religion, since they knew humans might become
> irrelevant. So they attempted to control the future and slow things down;
> thus Galileo gets burned. Perhaps they saw it as a small sacrifice for the
> larger whole.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Maac759608724bb729a89d86d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread Matt Mahoney
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
>
> Yet another demonstration of how Alan Turing poisoned the future with his 
> damnable "test" that places mimicry of humans over truth.

What Turing actually said in 1950:
https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

The question was "Can machines think?"  Turing carefully defined his
terms, both what a computer is (it could be a human following an
algorithm using pencil and paper) and what it means to "think". I find
it interesting that he proposed the same method Ivan Moony proposed:
program a learning algorithm and raise it like a child. Alternatively,
he estimated the amount of code needed as 60 developers working 50
years at the rate of 1000 bits per day, on a computer with 10^9 bits of
memory using components no faster than what was already available in
1950. (Mechanical relays are as fast as neurons, and vacuum tubes are
1000 times faster.) Turing anticipated objections to the idea of
thinking machines and answered them, including objections based on
consciousness, religion, and extrasensory perception.
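
As a quick sanity check on those numbers (a minimal sketch in Python; the
365 working days per year is my assumption, since Turing doesn't specify):

workers = 60
years = 50
bits_per_day = 1000
days_per_year = 365  # assumption; Turing does not give a figure

total_bits = workers * years * days_per_year * bits_per_day
print(f"{total_bits:.2e} bits")  # ~1.10e+09, on the order of the 10^9-bit storage estimate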

-- 
-- Matt Mahoney, mattmahone...@gmail.com

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-M0f549e56fecc0ee391bbadd4
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-17 Thread Nanograte Knowledge Technologies
Mostly agreed, but it depends on your definition of NN. NN is (supposed to be) 
equivalent to mutation. If we applied it in that sense, then NN could support 
other schemas of mutation rather than diminish in functional value. 
Ultimately, I think we're heading towards a biochemical model for AGI, even if 
it is a synthetic one.

Synthetic means not naturally made. It doesn't mean that a synthetic machine 
cannot function as a fully recursive machine, one that demonstrates, of its 
own intuition, an ability to make conscious decisions of the highest order.

The concern with AGI has often been in the region of autonomous decision 
making. Who would predict exactly which moral, or 
strategic-tactical-operational, or "necessary" decision a powerful, autonomous 
machine could come to?

Which tribe would it conclude it belonged to and where would it position its 
sense of fealty? Would it be as fickle as humans on belonging and issues of 
loyalty to greater society? Altruism, would it get it? Would it develop a good 
and bad inclination, and structure society to favor either one of those 
"instincts" it may deem most logically indicated?


Mostly, would it be inclined towards "criminal" behavior, or even "terrorism" 
by any name? And if it decided to turn to rage in a relationship, would it feel 
justified in overpowering a weaker sex?

In that sense, success! We would have duplicated the complications of humanity!

From: John Rose 
Sent: Friday, 17 May 2024 13:48
To: AGI 
Subject: Re: [agi] Can symbolic approach entirely replace NN approach?

On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote:
What should symbolic approach include to entirely replace neural networks 
approach in creating true AI?

Symbology will compress NN monstrosities… right?  Or I should say, increasing 
efficiency via emerging symbolic activity for complexity reduction. Then less 
NN will be required, since the “intelligence” will have been formed. But we 
still need sensory…

There is much room for innovation in mathematics… some of us have been working 
on that for a while.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M844f85d23b2020dafbaecc77
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 11:21 AM, James Bowery wrote:
> Yet another demonstration of how Alan Turing poisoned the future with his 
> damnable "test" that places mimicry of humans over truth.

This unintentional result of Turing’s idea is an intentional component of some 
religions. The elder wise men wanted to retain control over science as science 
spun off from religion, since they knew humans might become irrelevant. So they 
attempted to control the future and slow things down; thus Galileo gets burned. 
Perhaps they saw it as a small sacrifice for the larger whole.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Ma35aaeb8de27a4ee42f6e993
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4o

2024-05-17 Thread John Rose
On Tuesday, May 14, 2024, at 10:27 AM, Matt Mahoney wrote:
> Does everyone agree this is AGI?

Ya, is the AI just really a regurgitation engine that smooths everything over 
and appears smart? Kinda like a p-zombie: poke it, prod it, it sounds generally 
intelligent!  But… artificial is what everyone is going for, it seems. Is 
there a difference?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T17fa3f27f63a882a-Ma141cb8a667972f0df709a6b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-17 Thread John Rose
On Thursday, May 16, 2024, at 11:26 AM, ivan.moony wrote:
> What should symbolic approach include to entirely replace neural networks 
> approach in creating true AI?

Symbology will compress NN monstrosities… right?  Or I should say, increasing 
efficiency via emerging symbolic activity for complexity reduction. Then less 
NN will be required, since the “intelligence” will have been formed. But we 
still need sensory…
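
As a minimal sketch of one concrete reading of that idea (purely illustrative; 
the dataset, sizes, and the decision-tree surrogate are my assumptions, not 
anything proposed in this thread): train a network, then distill it into a 
small symbolic surrogate whose rules you can actually read.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data and a small network standing in for the "NN monstrosity".
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, y)

# Fit a shallow tree to the network's predictions, not the raw labels,
# so the tree acts as a compressed, human-readable symbolic surrogate.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X, net.predict(X))

print("agreement with the net:", (tree.predict(X) == net.predict(X)).mean())
print(export_text(tree))  # the extracted symbolic rules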

There is much room for innovation in mathematics… some of us have been working 
on that for a while.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M5b45da5fff085a720d8ea765
Delivery options: https://agi.topicbox.com/groups/agi/subscription