One problem with non-monetary prediction markets is the lack of seriousness
not only in judging but in posing the questions.  This is of a piece with
their primary weakness: their inability to appropriately scale rewards (&
punishments) in the regime of exceedingly unlikely events with enormous
outcomes -- i.e. "Black Swans" -- the regime relevant to public policy,
where, for example, the Biden administration has stated that $10T/year of
expenditure to deal with global warming is an appropriate present
investment.  Since Metaculus, in particular, was founded on the premise
that public policy thinktanks (including intelligence agencies) had a poor
track record due to lack of accountability for forecasts, this is a
critique that goes to its raison d'être.  Metaculus has a policy of
rejecting claims whose probability in the social proof miasma
<https://en.wikipedia.org/wiki/Social_proof> is so low that entertaining
them would discredit Metaculus as serious.  However, this rationale proves,
not on grounds of social acceptance but on simple investment math, that it
can't be taken seriously by those waving their $10T/year dicks around at
the world.

Let's take global warming as an example of how Metaculus fails to deal with
the "Black Swan":

There are any number of "preposterous" energy technologies that would
purportedly terminate the emission of greenhouse gases and draw CO2 out of
the atmosphere within a very short period of time
<https://jimbowery.blogspot.com/2014/05/introduction-extinction-of-human-race.html>
-- technologies that are so "preposterous" that nothing, nada, zero, zip is
invested in even bothering to "debunk" them, because they are a priori
"debunked" by the simple expedient of social proof's estimated probability
approaching 0.  However, when we're dealing with the NPV of a $10T/year
cash stream, discounted at the 30-year interest rate available to the US
government of 3.91%, we're dealing with over $200T, and the question
becomes: exactly how close to the social proof probability of "0" do we
have to come in order to justify investing nothing at all in even
"debunking" those technologies?  Let's say a good, honest "debunk" by a
technosocialist empire that believes in bureaucratically incentivized
"science" takes $10M (remember, you have to buy off Congress to get
*anything* appropriated, so I'm being kind to the poor bureaucrats).
That's $10M/$200T = 5e-8: the probability threshold above which you can no
longer simply wave the imperial hand of social proof and dismiss the $10M
appropriation.
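
A back-of-the-envelope sketch of that arithmetic, in Python (treating the
$10T/year stream either as a 30-year annuity or as a perpetuity at the
3.91% rate; the $10M "debunk" cost is just the assumption above):

    # NPV of a $10T/year cost stream at the US government's 30-year rate
    rate = 0.0391
    annual_cost = 10e12                                      # $10T/year

    npv_30yr = annual_cost * (1 - (1 + rate) ** -30) / rate  # ~$175T as a 30-year annuity
    npv_perp = annual_cost / rate                            # ~$256T as a perpetuity

    # Cost of one honest "debunk" of a "preposterous" technology
    debunk_cost = 10e6                                       # $10M

    # Probability above which ignoring the technology loses money in
    # expectation (using the round ~$200T figure from the text)
    threshold = debunk_cost / 200e12                         # 5e-8, i.e. 1 in 20 million

    print(npv_30yr, npv_perp, threshold)

Either way you cut the discounting, the stream is worth a couple hundred
trillion dollars, and the break-even probability for a $10M "debunk" sits
around 5e-8.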

Can Metaculus tolerate the loss of credibility it would suffer by
entertaining claims with one-in-twenty-million odds of rendering the
whole problem moot?  To ask is to answer the question.

Moreover, even in the event that Metaculus grew some hair on its balls and
started entertaining Black Swans, its scoring system cannot provide
appropriate rewards to those predicting them, because of nonlinear effects
as the probabilities approach 0% or 100%.
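
A minimal sketch of that compression, assuming a plain logarithmic scoring
rule (Metaculus's actual point formula differs in detail, and the 1e-3 and
0.05 forecasts below are made-up illustrations):

    import math

    def log_score(p, happened):
        # Log score for a binary forecast p, given how the event resolved
        return math.log(p) if happened else math.log(1 - p)

    crowd, contrarian = 1e-3, 0.05   # hypothetical forecasts on a "preposterous" technology

    for happened, label in [(False, "does not happen"), (True, "happens")]:
        gap = log_score(contrarian, happened) - log_score(crowd, happened)
        print(f"event {label}: contrarian gains {gap:+.3f} over the crowd")

    # -> about -0.05 in the (nearly certain) "does not happen" case and
    #    about +3.9 if it does happen: the most the contrarian can ever be
    #    paid in score is a few points, nowhere near the twenty-million-to-one
    #    scaling a real-money payoff on a 5e-8 event would provide.

So a forecaster who sticks his neck out on the Black Swan risks almost
nothing but can also win only a handful of points; the scoring rule
compresses exactly the tail where the stakes are largest.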

In this regard, it is particularly relevant that the now-defunct *real
money* prediction market, Intrade <https://en.wikipedia.org/wiki/Intrade>,
had a claim about "cold fusion" that was very similar to the one at Robin
Hanson's Ideosphere, but which was judged in the opposite direction.  The
only reason it was judged differently is that real money was involved.  The
judgement?  Whatever the claims about "cold fusion" regarding nuclear
products, etc., the only relevant claim about it is "True": excess heat has
been produced at levels that exceed those allowed by current
interpretations of standard physical theory.





On Thu, Jul 20, 2023 at 12:04 PM Matt Mahoney <mattmahone...@gmail.com>
wrote:

> This article on Astral Codex Ten discusses the extinction tournament,
> which brought together AI domain experts and super forecasters to estimate
> the risk of human extinction by AI or other causes. They and Metaculus
> estimate roughly 1% to 5% chance of extinction by 2100, with AI being the
> highest risk, followed by nuclear war and genetically engineered pathogens.
> There is a higher risk of catastrophic but non extinction events. Also, we
> are nearly certain to have AGI by 2100.
>
> https://astralcodexten.substack.com/p/the-extinction-tournament
>
> I mostly agree with the risk estimates. What about you?
