On Thu, Jul 6, 2023 at 12:00 PM Matt Mahoney <mattmahone...@gmail.com>
wrote:

> On Thu, Jul 6, 2023, 2:09 AM Rob Freeman <chaotic.langu...@gmail.com>
> wrote:
>
>>
>> Did the Hutter Prize move the field? Well, I was drawn to it as a rare
>> data based benchmark.
>>
>
> Not much. I disagreed with Hutter on the contest rules, but he was funding
> the prize. (I once applied for an NSF grant but it was rejected like 90% of
> applications). My large text benchmark has no limits on CPU time or memory,
> but I understand the need for these when there is prize money.
>

There are really two aspects to the "failure" of the Hutter Prize to "move
the field".

The first and most important aspect is that one must compare the levelized
cost of the Hutter Prize, roughly $100/month since its inception, to the sums
invested in "the field".  Arguments over the resource limits imposed, and even
over the dataset to be compressed, are secondary.  This brings up the second
aspect of its "failure": it remains unrecognized that there is virtually no
risk in funding the Hutter Prize purse, and there are rigorous grounds for
believing it to be the best we can do for addressing *precisely* the kinds of
language model failures we're seeing exposed in the LLMs -- not just their
enormous costs but also their inability to factor out various kinds of
internal inconsistency, "noise", and outright lies in the source data.
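
For scale, a back-of-the-envelope comparison (a sketch assuming the prize
launched in 2006 and the $100/month figure above; the LLM spending figure is
a loose order-of-magnitude placeholder, not a measured number):

    # Rough comparison of cumulative cost; constants are illustrative.
    months = (2023 - 2006) * 12         # ~17 years of the prize's existence
    prize_cost = 100 * months           # roughly $20,400 in total
    llm_training_spend = 1_000_000_000  # assumed: billions spent across the field
    print(f"Hutter Prize ~ ${prize_cost:,} vs. field ~ ${llm_training_spend:,}+")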

One can argue, as you have, that the resource limits are too strict to move
the needle because the task is just *too hard*.  But one must recognize that
money is paid out *only* as entries approach the *hard* asymptotic limit set
by the Kolmogorov complexity of the dataset.  So why *not* fund it at the
level of a billion dollars?  There's no downside and an *enormous* upside!
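
To make the payout structure concrete, here is a minimal sketch of a rule of
the same *form* the prize uses (an award proportional to the relative
improvement over the previous record, subject to a minimum-improvement
threshold); the constants and byte counts are illustrative placeholders, not
the official terms:

    # Illustrative payout rule: proportional to relative improvement over the
    # previous record, with a minimum threshold.  Constants are placeholders,
    # not the official prize terms.
    PURSE = 500_000          # assumed total purse
    MIN_IMPROVEMENT = 0.01   # assumed minimum relative improvement to qualify

    def payout(previous_record_bytes: int, new_size_bytes: int) -> float:
        """Award for beating the previous record on the benchmark."""
        improvement = (previous_record_bytes - new_size_bytes) / previous_record_bytes
        return PURSE * improvement if improvement >= MIN_IMPROVEMENT else 0.0

    # The sponsor's total liability is bounded by PURSE no matter how many
    # records fall, and each increment shrinks as entries approach the
    # (unknowable) Kolmogorov complexity of the dataset.
    print(payout(115_000_000, 110_000_000))  # ~4.3% improvement, partial award

Whatever the exact constants, the cap on total payouts is what makes the "no
downside" claim rigorous: the purse bounds the sponsor's exposure, while the
benchmark's hardness bounds how quickly it can be claimed.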

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T42db51de471cbcb9-M62df78dea340bae690ad896f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
