Re: [agi] Internal Time-Consciousness Machine (ITCM)

2024-06-17 Thread Mike Archbold
They say "understanding" a lot but don't really define it (perhaps
implicitly).

It seems like a reasonable basis to start from. I don't really see how it
relates to consciousness, except that they emphasize a real-time aspect and a
flow of time, which is good.

On Sat, Jun 15, 2024 at 5:10 PM John Rose  wrote:

> For those of us pursuing consciousness-based AGI this is an interesting
> paper that gets more practical... LLM agent based but still v. interesting:
>
> https://arxiv.org/abs/2403.20097

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-M171f6fb6a848f10479bb6970
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf

I know, I know, we could construct a test that breaks the p-zombie barrier. 
Using text alone, though? Maybe not. Unless we could somehow make our brains 
not serialize language but simultaneously multi-stream symbols... gotta be a 
way :)

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Madd96d99e30a08326350c050
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread James Bowery
https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf

On Mon, Jun 17, 2024 at 1:35 PM Mike Archbold  wrote:

> Now time for the usual goal post movers
>
> On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
> wrote:
>
>> It's official now. GPT-4 was judged to be human 54% of the time, compared
>> to 22% for ELIZA and 50% for GPT-3.5.
>> https://arxiv.org/abs/2405.08007
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M8435ecf177a92da2801bdd94
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread John Rose
On Monday, June 17, 2024, at 2:33 PM, Mike Archbold wrote:
> Now time for the usual goal post movers

A few years ago this would have been a big thing, though I remember chatbots 
from the BBS days in the early '90s that were pretty convincing. Some of those 
bots were hybrids, part human and part bot, so one person could chat with many 
people simultaneously and the bot would fill in.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M65080914031e453816a81215
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] GPT-4 passes the Turing test

2024-06-17 Thread Mike Archbold
Now time for the usual goal post movers

On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney 
wrote:

> It's official now. GPT-4 was judged to be human 54% of the time, compared
> to 22% for ELIZA and 50% for GPT-3.5.
> https://arxiv.org/abs/2405.08007

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M22278adf124b60cd30fd51fc
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] GPT-4 passes the Turing test

2024-06-17 Thread Matt Mahoney
It's official now. GPT-4 was judged to be human 54% of the time, compared
to 22% for ELIZA and 50% for GPT-3.5.
https://arxiv.org/abs/2405.08007

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-Mf4e3db6fe1581164afa7176c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Re: Internal Time-Consciousness Machine (ITCM)

2024-06-17 Thread John Rose
On Sunday, June 16, 2024, at 6:49 PM, Matt Mahoney wrote:
> Any LLM that passes the Turing test is conscious as far as you can tell, as 
> long as you assume that humans are conscious too. But this proves that there 
> is nothing more to consciousness than text prediction. Good prediction 
> requires a model of the world, which can be learned given enough text and 
> computing power, but can also be sped up by hard coding some basic knowledge 
> about how objects move, as the paper shows.

ITCMA is the agent; see Appendix B (below the citations) for phenomenological 
evidence for ITCM. "An agent is not just a prediction algorithm": in a noisy, 
uncertain, and competitive environment, mere prediction does not suffice for 
agent success.
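
To make that contrast concrete, here is a minimal sketch of my own (not code
from the ITCMA paper; all names are hypothetical) showing the difference
between a bare next-symbol predictor and an agent that wraps the same
predictor in a perceive-act-learn loop, where action choice, exploration
under noise, and reward enter in a way that prediction accuracy alone does
not capture:

import random

# Illustrative sketch only (hypothetical names, not from the ITCMA paper):
# a bare predictor versus an agent that must also choose actions under
# noise and track reward.

class BigramPredictor:
    """Predicts the most likely next symbol from observed pair counts."""
    def __init__(self):
        self.counts = {}

    def update(self, prev, nxt):
        self.counts.setdefault(prev, {}).setdefault(nxt, 0)
        self.counts[prev][nxt] += 1

    def predict(self, prev):
        options = self.counts.get(prev)
        return max(options, key=options.get) if options else None

class SimpleAgent:
    """Uses the predictor as a world model, but additionally selects
    actions and accumulates reward, which prediction alone does not do."""
    def __init__(self, actions, explore=0.1):
        self.model = BigramPredictor()
        self.actions = actions
        self.explore = explore
        self.reward = 0.0

    def act(self, observation):
        expected = self.model.predict(observation)
        if expected in self.actions and random.random() > self.explore:
            return expected                     # exploit the learned model
        return random.choice(self.actions)      # explore under uncertainty

    def learn(self, observation, outcome, reward):
        self.model.update(observation, outcome)
        self.reward += reward

The point is only that the agent loop has degrees of freedom (which action to
take, when to explore, what reward it accumulates) that a pure predictor does
not.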

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T32a7a90b43c665ae-Ma6a45321d00ecc7584ecc3e9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-17 Thread Rob Freeman
On Mon, Jun 17, 2024 at 3:22 PM Quan Tesla  wrote:
>
> Rob, basically you're reiterating what I've been saying here all along: that
> we need to increase contextualization and instill robustness in the LLM
> systemic hierarchies, and further, that this seems to be critically lacking
> in current approaches.
>
> However, I think this is fast changing, and soon enough, I expect 
> breakthroughs in this regard. Neural linking could be one of those solutions.
>
> While it may not be exactly the same as your hypothesis (?), is it because 
> it's part of your PhD that you're not willing to acknowledge that this 
> theoretical work may have been completed by another researcher more than 17 
> years ago, even submitted for review and subsequently approved? The market, 
> especially Japan, grabbed this research as fast as they could. It's the West 
> that turned out to be all "snooty" about its meaningfulness, yet, it was the 
> West that reviewed and approved of it. Instead of serious collaboration, is 
> research not perhaps being hamstrung by the NIH (Not Invented Here) syndrome, 
> acting like a stuck handbrake?

You intrigue me. "Contextualization ... in LLM systemic hierarchies"
was completed and approved 17 years ago?

"Contextualization" is a pretty broad word. I think the fact that
Bengio retreated to distributed representation with "Neural Language
Models" around... 2003(?) might be seen as one acceptance of... if not
contextualization, at least indeterminacy (I see Bengio refers to "the
curse of dimensionality".) But I see nothing about structure until
Coecke et co. around 2007. And even they (and antecedents going back
to the early '90s with Smolensky?) I'm increasingly appreciating seem
trapped in their tensor formalisms.

The Bengio thread, to the extent it went anywhere, stayed stuck on
structure until deep learning rescued it with LSTMs. And then "attention".

Anyway, the influence of Coecke seems to be tiny, and basically
misconstrued. I think Linas Vepstas followed it, but only saw
encouragement to seek other mathematical abstractions of grammar. And
OpenCog wasted a decade trying to learn those grammars.

Otherwise, I've been pretty clear that I think there are hints of what
I'm arguing in linguistics and maths going back decades, and in
philosophy going back centuries. The linguistics ones have been
specifically ignored by machine learning.

But that any of this, or anything like it, was "grabbed ... as fast as
they could" by the market in Japan is a puzzle to me (17 years ago?
Specifically 17?)

As is the idea that the West failed to use it, even having "reviewed
and approved it", because it was "snooty" about... Japan's market
having grabbed it first?

Sadly, Japanese research in AI, to my knowledge, has been dead since
their big push in the 1980s. Dead, right through their "lost" economic
decades. I met the same team I had known working on symbolic machine
translation grammars in 1989-91 at a conference in China in 2002, and as
far as I know they were still working on refinements to the same
symbolic grammar. Ten more years. Same team. Same tech. Just one of the
係長 (section chiefs) had become a 課長 (section manager).

What is this event from 17 years ago?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M2187d1c831913c8c67e1fc9c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-06-17 Thread Quan Tesla
Rob, basically you're reiterating what I've been saying here all along: that
we need to increase contextualization and instill robustness in the LLM
systemic hierarchies, and further, that this seems to be critically lacking in
current approaches.

However, I think this is fast changing, and soon enough, I expect
breakthroughs in this regard. Neural linking could be one of those
solutions.

While it may not be exactly the same as your hypothesis (?), is it because
it's part of your PhD that you're not willing to acknowledge that this
theoretical work may have been completed by another researcher more than 17
years ago, even submitted for review and subsequently approved? The market,
especially Japan, grabbed this research as fast as they could. It's the
West that turned out to be all "snooty" about its meaningfulness, yet, it
was the West that reviewed and approved of it. Instead of serious
collaboration, is research not perhaps being hamstrung by the NIH (Not
Invented Here) syndrome, acting like a stuck handbrake?

IMO, this is exactly why progress in the West has been so damned slow.
Everyone is competing for the honor of discovering something great, looking
out for number one. Write a book. Grab a TV show, become a hero, or
whatever.

Here's another person on this list who also jetted off into his own space
with a PhD, the fruits of whose labor we have never seen. His theory quickly
became as irrelevant as the time it takes to code newer applications. As you
must be acutely aware: Valid doesn't equate to Reliable, Reliable doesn't
equate to Relevance (Attention), and Relevance doesn't equate to Intel. Seems
to me, pundits of LLMs (used to) think it does. Once relevance within LLMs is
resolved (soon), the real battle for Intel will begin. An epic battle well
worth watching.

Meanwhile, in all probability, whatever we could think of/invent has
already been thought of by another person, somewhere else in the world.
Sometimes, centuries ago. This seems very similar to what Karl Mannheim was
referring to in his view on competition for knowledge as a factor for human
survival, within the context of the sociology of knowledge. I think we
should add this as a mitigating factor to culturally-based knowledge
systems and embed it in LLMs. My 2 cents' worth.

Predictably, at a certain "size", LLMs - on their own - would wander off
into ambiguity. There are multiple reasons for this, one of them being
exponential complexity. That's the point - I'll predict - at which ~99.7%
of the LLM-dev market is going to be left behind, scrambling for
marketing-related contracts/jobs in order to support AI-based sales and
trading efforts.

The remaining ~0.3% are going to emerge as the AI-driven Intel industry.
I'll rate the AI-Intel market segment as a future trillion-dollar
industry.

Just some thoughts I had. I could be completely wrong. Only time will
tell.

On Sat, Jun 15, 2024 at 9:42 AM Rob Freeman 
wrote:

> On Sat, Jun 15, 2024 at 1:29 AM twenkid  wrote:
> >
> > ...
> > 2. Yes, the tokenization in current LLMs is usually "wrong", ... it
> should be on concepts and world models: ... it should predict the
> *physical* future of the virtual worlds
> 
> Thanks for comments. I can see you've done a lot of thinking, and see
> similarities in many places, not least Jeff Hawkins, HTM, and
> Friston's Active Inference.
> 
> But I read what you are suggesting as a solution to the current
> "token" problem for LLMs, like that of a lot of people currently,
> LeCun prominently, to be that we need to ground representation more
> deeply in the real world.
> 
> I find this immediate retreat to other sources of data kind of funny,
> actually. It's like... studying the language problem has worked really
> well, so the solution to move forward is to stop studying the language
> problem!
> 
> We completely ignore why studying the language problem has caused such
> an advance. And blindly, immediately throw away our success and look
> elsewhere.
> 
> I say look more closely at the language problem. Understand why it has
> caused such an advance before you look elsewhere.
> 
> I think the reason language models have led us to such an advance is
> that the patterns language prompts us to learn are inherently better.
> "Embeddings", gap fillers, substitution groupings, are just closer to
> the way the brain works. And language has led us to them.
> 
> So OK, if "embeddings" have been the advance, replacing both fixed
> labeled objects in supervised learning, and fixed objects based on
> internal similarities in "unsupervised" learning, instead leading us
> to open ended categories based on external relations, why do we still
> have problems? Why can't we structure better than "tokens"? Why does
> it seem like they've led us the other way, to no structure at all?
> 
> My thesis is actually pretty simple. It is that these open ended
> categories of "embeddings" are good, but they contradict. These "open"
> categories can have a whole new level of