They say "understanding" a lot but don't really define it (except perhaps
implicitly). It seems like a reasonable basis to start from. I don't see how it
relates to consciousness, really, except that they emphasize a real-time aspect
and a flow of time, which is good.
On Monday, June 17, 2024, at 4:14 PM, James Bowery wrote:
> https://gwern.net/doc/cs/algorithm/information/compression/1999-mahoney.pdf
I know, I know that we could construct a test that breaks the p-zombie barrier.
Using text alone, though? Maybe not. Unless we could somehow make our brains
On Mon, Jun 17, 2024 at 1:35 PM Mike Archbold wrote:
> Now time for the usual goal post movers
On Monday, June 17, 2024, at 2:33 PM, Mike Archbold wrote:
> Now time for the usual goal post movers
A few years ago it would have been a big thing. Though I remember chatbots from
the BBS days in the early '90s that were pretty convincing. Some of those bots
were hybrids, part human, part bot, so
Now time for the usual goal post movers
On Mon, Jun 17, 2024 at 7:49 AM Matt Mahoney
wrote:
> It's official now. GPT-4 was judged to be human 54% of the time, compared
> to 22% for ELIZA and 50% for GPT-3.5.
> https://arxiv.org/abs/2405.08007
It's official now. GPT-4 was judged to be human 54% of the time, compared
to 22% for ELIZA and 50% for GPT-3.5.
https://arxiv.org/abs/2405.08007
--
Artificial General Intelligence List: AGI
On Sunday, June 16, 2024, at 6:49 PM, Matt Mahoney wrote:
> Any LLM that passes the Turing test is conscious as far as you can tell, as
> long as you assume that humans are conscious too. But this proves that there
> is nothing more to consciousness than text prediction. Good prediction
>
On Mon, Jun 17, 2024 at 3:22 PM Quan Tesla wrote:
Rob, basically you're reiterating what I've been saying here all along: that we
need to increase contextualization and instill robustness in the LLM systemic
hierarchies, and that this seems to be critically lacking in current
approaches.
However, I think this is fast changing, and soon enough, I