Effective world knowledge is grounded in practical advancements, and most
practical advancements cannot be made in pure simulations (at least not
ones that could outpace advancements in the real world). But something
like a triple abstraction principle in mathematics, along with the
transformational algorithms that would go with it, could be gained in a
simulation. So a P=NP algorithm, if one is feasible, might be found in a
simulation like this. And it might go unnoticed by the human operators
of the simulation.
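
(As a rough, hypothetical illustration of what "a P=NP algorithm" would
mean, not something from the argument above: checking a proposed solution
to an NP problem such as Boolean satisfiability is fast, but the only
generally known way to find one is exhaustive search. A P=NP algorithm,
if feasible, would replace the brute-force step in the Python sketch
below with a polynomial-time procedure. The function names and the tiny
example formula are made up for illustration.)

from itertools import product

# A CNF formula is a list of clauses; each clause is a list of literals.
# Literal +i means "variable i is true", -i means "variable i is false".

def verify(formula, assignment):
    # Polynomial-time check: every clause has at least one satisfied literal.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula, num_vars):
    # Exponential-time search over all 2^n truth assignments.
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3)
example = [[1, -2], [2, 3]]
print(brute_force_sat(example, 3))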
Jim Bromer


On Sun, Jul 8, 2018 at 1:56 PM, Stefan Reich via AGI
<agi@agi.topicbox.com> wrote:
> Where's the relation there?
>
> Maybe our simulation is run on supercomputers of NP power.
>
> On Tue, 26 Jun 2018 at 07:52, Shashank Yadav <shashank@asatae.foundation>
> wrote:
>>
>> If we are living in a simulation, then P equals NP, I think.
>>
>> -
>> Shashank
>>
>>
>> ---- On Tue, 26 Jun 2018 08:53:31 +0530 Mark Nuzz via AGI
>> <agi@agi.topicbox.com> wrote ----
>>
>>
>>
>> On Mon, Jun 25, 2018 at 8:15 PM, Matt Mahoney via AGI
>> <agi@agi.topicbox.com> wrote:
>>
>> Recursive self improvement in a closed environment is not possible because
>> intelligence depends on knowledge and computing power. These can only come
>> from outside the simulation.
>>
>>
>> I generally agree with this. But let's go into the esoteric world for a
>> moment and consider: Suppose we ourselves are living in a simulation, then
>> what implications does this have?
>>
>>
>>
>
>
> --
> Stefan Reich
> BotCompany.de // Java-based operating systems

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T81817474dba9a838-Me0100f542204177a8baade1f
Delivery options: https://agi.topicbox.com/groups
