Re: [agi] The Job market.

2019-10-02 Thread Steve Richfield
James,

You appear to be a bit new here, having missed several earthshaking
discoveries - as the world just yawns.

OK, so let's see if I can explain RRAA (reverse reductio ad absurdum) in a
short posting here.

If two logical people reach an impasse after understanding each other's
arguments, then one or both MUST base their valid logic on an invalid
assumption - but how could this possibly happen with someone else CAREFULLY
examining the logic? Simple - they BOTH share the SAME invalid
assumption(s), so the problem is invisible to them both.
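
That much is a small theorem of propositional logic, and a toy sketch makes it
concrete (my own illustrative encoding in Python, not anything specific to
RRAA): if both arguments are valid but the conclusions contradict, the combined
premise set is unsatisfiable, so at least one premise is false - and if each
side has already vetted the other's private premises, the shared premise is the
natural suspect.

from itertools import product

VARS = ["p", "q"]  # toy propositional atoms

def assignments():
    for values in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, values))

def entails(premises, conclusion):
    # premises semantically entail conclusion (brute-force truth-table check)
    return all(conclusion(e) for e in assignments() if all(f(e) for f in premises))

def satisfiable(formulas):
    # some assignment makes every formula true
    return any(all(f(e) for f in formulas) for e in assignments())

# Party A: from p and (p -> q), validly concludes q.
A_premises   = [lambda e: e["p"], lambda e: (not e["p"]) or e["q"]]
A_conclusion = lambda e: e["q"]

# Party B: from p and (p -> not q), validly concludes not q.
B_premises   = [lambda e: e["p"], lambda e: (not e["p"]) or (not e["q"])]
B_conclusion = lambda e: not e["q"]

assert entails(A_premises, A_conclusion)         # A's logic is valid
assert entails(B_premises, B_conclusion)         # B's logic is valid
assert not satisfiable(A_premises + B_premises)  # the combined premises cannot all be true
assert satisfiable(A_premises[1:] + B_premises[1:])  # drop the shared premise p and consistency returns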

Now you (or the ultimate AGI) listen to their craziness and point out the
invalid assumption. Right? WRONG!!! The ONE thing they CAN agree on is that
YOU (or the ultimate AGI) are wrong, and the dispute continues in the face
of perfect logic!!!

So, what is the success path? Simple - you must NOT show the solution to
the parties until AFTER you have taught them how RRAA works - and taught
them that they absolutely MUST share at least one invalid assumption to be
having their dispute at all. Then, if they don't like your candidate for the
invalid assumption, they should either propose a better one or accept yours.

Take the abortion debate, whose best invalid-assumption candidate is the
de facto government lottery in which women are randomly selected and
presented with newborn babies - and are prohibited from selling, trading, or
otherwise finding other women who actually want the newborn baby. Of course,
the random selection here is based on the failure of birth control.

Allowing these women to sell or trade their babies solves this, because
babies are worth a LOT of money. Who would be crazy enough to just throw
away $20-50k by getting an abortion? Everyone would win, including the
babies, who would then have mothers who actually WANT them.

This same approach works on (nearly?) all disputes that aren't based on
might making right, but it takes a really fresh look at things to see the
invalid assumptions.

So there it is - the basis of nearly all human strife, wars, etc., laid
bare with accompanying solution.

Unfortunately, though arguably worth more than an AGI because it lets you
and me do things that transcend even what is currently expected from a
successful AGI, it has no wealthy sponsors, so here it lies on this forum,
soon to be forgotten - yet again. Meanwhile, the wars continue...

Thoughts?

Steve Richfield

On Wed, Oct 2, 2019, 8:04 PM James Bowery  wrote:

> If you have a proof of how to resolve disputes, you have a proof of how to
> select the better of two models of the world induced from the same set of
> data.
>
> Have you published this earth-shaking discovery with clear implications
> for machine learning?  Or are you afraid if you publish, someone will
> implement it and turn us all into paperclips?
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Me5e8b720bbcee7986a562878
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread James Bowery
If you have a proof of how to resolve disputes, you have a proof of how to 
select the better of two models of the world induced from the same set of data.
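
For concreteness, one standard way to cash that out is a two-part MDL
comparison. Here is a toy sketch (my own illustration, with made-up models and
data, nothing from this thread): each model pays for its parameters plus for
the data given those parameters, and the shorter total code is the better
model.

import math

def nll_bits(probs):
    # negative log-likelihood in bits; probs are per-symbol predictive probabilities
    return sum(-math.log2(max(p, 1e-12)) for p in probs)

def bernoulli_code_length(bits):
    # two-part code: one parameter (p) plus the data given p
    n = len(bits)
    p = sum(bits) / n
    probs = [p if b else 1 - p for b in bits]
    return 0.5 * math.log2(n) + nll_bits(probs)

def markov_code_length(bits):
    # two-part code: two parameters, P(1|0) and P(1|1), plus the data given them
    n = len(bits)
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for prev, cur in zip(bits, bits[1:]):
        counts[(prev, cur)] += 1
    def p1_given(prev):
        total = counts[(prev, 0)] + counts[(prev, 1)]
        return counts[(prev, 1)] / total if total else 0.5
    probs = [0.5]  # first symbol, no context
    probs += [p1_given(prev) if cur else 1 - p1_given(prev)
              for prev, cur in zip(bits, bits[1:])]
    return 2 * 0.5 * math.log2(n) + nll_bits(probs)

data = [0, 1] * 50  # strongly alternating: structure the Bernoulli model misses
print("Bernoulli:", round(bernoulli_code_length(data), 1), "bits")
print("Markov:   ", round(markov_code_length(data), 1), "bits")

On this alternating string the Markov model wins by a wide margin; on a
genuinely fair-coin string the extra parameters would typically fail to pay for
themselves and the Bernoulli model would come out shorter.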

Have you published this earth-shaking discovery with clear implications for 
machine learning?  Or are you afraid if you publish, someone will implement it 
and turn us all into paperclips?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-M326976eecd4acd8013c875da
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread Steve Richfield
James and John,

I have posted here in the past how reverse reductio ad absurdum (RRAA) logic
handily resolves most disputes to everyone's satisfaction - and used the
abortion debate as a "simple" example. The catch is that all parties must
first understand how RRAA works for it to work. This is MUCH more powerful
than lossless compression, because there is a Boolean PROOF that it works,
so if it doesn't seem to be working, then attention to the logical
structure of the dispute is pretty much guaranteed to crack it.

Steve Richfield

On Wed, Oct 2, 2019, 8:55 AM Nanograte Knowledge Technologies <
nano...@live.com> wrote:

> Agreed Steve. These comments are deeply disturbing in their intentional
> destructiveness and seeming insanity. With AI/AGI, absolute insanity would
> prevail, absolutely.
>
> This is no surprise to me. My ongoing research supports a phenomenal
> increase across the world in this kind of demonstrable mental imbalance.
>
> Recent, significant evidence thereof was the clandestine social
> engineering of fear and hatred by Cambridge Analytica to swing the US
> elections in Trump's favor and to secure a Brexit vote.
>
> Furthermore, by their own admission, and as confirmed by hearings in the
> UK and the USA, they did the same social engineering with Facebook-supported
> data and technology across many nations in the world.
>
> Clearly, the genie's out of the bottle. The bird has already flown. It is
> what it is. For most it's a simple case of: "It pays the bills!"
>
> However, for researchers/developers in AI/AGI with a global conscience, I
> can only imagine them increasingly having to face ethical crossroads.
>
> --
> *From:* Steve Richfield 
> *Sent:* Wednesday, 02 October 2019 02:35
> *To:* AGI 
> *Subject:* Re: [agi] The Job market.
>
> This thread is an existence proof that people working on AGI have NO clue
> how much damage their creations would do in the hands of the power elite.
> If AI has made things THIS bad, then the damage that AGI would do is
> unimaginable - but that never even entered the conversation.
>
> Forgive them, for they know not what they do? Hell no. You guys recklessly
> threaten the world's population without even looking where this is going.
>
> The Terminator sequel considered the ethics of killing people like those
> on this forum - and decided it was OK.
>
> How does this not fully meet the definition of insanity - of being a
> danger to yourselves and others?
>
> Steve
> On Mon, Sep 30, 2019, 5:42 PM  wrote:
>
>
>
>
>
> Thanks Stefan.
>
>
>
>
>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Mf14d7ca0953aa1593b244b69
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Heat can up-propagate into symbol and replicate out of there. Energy converts 
to informational transmission and de-entropizes; it's gotta go somewhere, 
right? Even backwards in time, as we're predicting.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8baff210f7f8fb59-M65ba6bdae96165cfd2c1e54b
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread Nanograte Knowledge Technologies
Agreed Steve. These comments are deeply disturbing in their intentional 
destructiveness and seeming insanity. With AI/AGI, absolute insanity would 
prevail, absolutely.

This is no surprise to me. My ongoing research supports a phenomenal increase 
across the world in this kind of demonstrable mental imbalance.

Recent, significant evidence thereof was the clandestine social engineering of 
fear and hatred by Cambridge Analytica to swing the US elections in Trump's 
favor and to secure a Brexit vote.

Furthermore, by their own admission, and as confirmed by hearings in the UK and 
the USA, they did the same social engineering with Facebook-supported data and 
technology across many nations in the world.

Clearly, the genie's out of the bottle. The bird has already flown. It is what 
it is. For most it's a simple case of: "It pays the bills!"

However, for researchers/developers in AI/AGI with a global conscience, I can 
only imagine them increasingly having to face ethical crossroads.


From: Steve Richfield 
Sent: Wednesday, 02 October 2019 02:35
To: AGI 
Subject: Re: [agi] The Job market.

This thread is an existence proof that people working on AGI have NO clue how 
much damage their creations would do in the hands of the power elite. If AI has 
made things THIS bad, then the damage that AGI would do is unimaginable - but 
that never even entered the conversation.

Forgive them, for they know not what they do? Hell no. You guys recklessly 
threaten the world's population without even looking where this is going.

The Terminator sequel considered the ethics of killing people like those on 
this forum - and decided it was OK.

How does this not fully meet the definition of insanity - of being a danger to 
yourselves and others?

Steve
On Mon, Sep 30, 2019, 5:42 PM keghnf...@gmail.com wrote:




Thanks Stefan.






--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-M78044af3ba922eea52034be7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread James Bowery
On Wed, Oct 2, 2019 at 6:07 AM John Rose  wrote:

> On Wednesday, October 02, 2019, at 1:05 AM, James Bowery wrote:
>
> Harvard University's Jonathan Haidt is so terrified of the truth coming
> out that he's actually come out against Occam's Razor
> .
>
>
> There are situations where the simplest explanation is to chuck Occam's
> Razor :)
>

ANY situation can be one where the most viable _decision_ is to stop the
search for the simplest explanation and _act_ on the simplest explanation
you have found _thus far_.  This is a consequence of the incomputability of
Solomonoff Induction in the face of limited resources.
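
As a toy illustration of acting on the simplest explanation found thus far (my
own sketch; a real Solomonoff mixture runs over all programs and is
incomputable): restrict hypotheses to repeating bit patterns, weight each one
by 2^-length, and cap the search at whatever pattern length the budget allows.
Whatever the truncated mixture predicts at the cutoff is what you act on.

from itertools import product

def predict_next_bit(data, max_len=8):
    # Truncated mixture over repeating-pattern hypotheses, each weighted 2^-length.
    # max_len is the resource budget: a bigger budget refines the approximation,
    # but at some point you stop and act on whatever the mixture says.
    weight = {"0": 0.0, "1": 0.0}
    for length in range(1, max_len + 1):
        for pattern in product("01", repeat=length):
            pattern = "".join(pattern)
            generated = pattern * (len(data) // length + 2)
            if generated.startswith(data):              # hypothesis reproduces the data
                weight[generated[len(data)]] += 2.0 ** -length
    if weight["0"] == 0.0 and weight["1"] == 0.0:
        return None                                     # nothing fits within the budget
    return max(weight, key=weight.get)

print(predict_next_bit("010101010"))  # -> '1' under this toy hypothesis class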

> There is an over reliance.

"Mistakes were made."


> Though implementors do need to go from complex to simple.
>
> But there are issues with rationality.
>

There is an explore/exploit tradeoff. See the prior "issue" with
"computability" and then compound that with the "irrationality" of the
valuation function applied during sequential decision theory.  How do you
justify that, outside of the "exploration" provided by evolution?
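
For readers who want the textbook version of that tradeoff, here is a minimal
epsilon-greedy bandit sketch (my own illustration; nothing in it is specific to
the argument above):

import random

def epsilon_greedy(pull, n_arms, steps=1000, epsilon=0.1):
    # Classic explore/exploit compromise: mostly exploit the best-looking arm,
    # occasionally explore another arm at random.  pull(arm) returns a reward.
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                    # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]     # running mean of rewards
        total += reward
    return total, means

# Hypothetical two-armed bandit: arm 1 pays off more often than arm 0.
payoffs = [0.3, 0.6]
total, means = epsilon_greedy(lambda a: float(random.random() < payoffs[a]), n_arms=2)
print(round(total, 1), [round(m, 2) for m in means])

The epsilon here is exactly the kind of valuation knob the agent cannot derive
from the data alone; it has to be imposed from outside the loop.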


> There are issues with scientific objectivism.
>

There are issues with communication between members of a polity.  See prior
issues with "computability" and "value function".


> Aren't Occam and Gödel at odds with each other in some ways?
>

Not in the way that theologians posing as "social scientists" would have us
believe.  For example, choosing a universal Turing machine as the basis for
Solomonoff Induction can be, and has been, blown up into an argument to
abandon induction entirely, simply by defining one's UTM as that which
outputs all observations up to the present.  The benefit of such theology
posing as "social science" is that the theologian, serving his political
masters, can "scientifically justify" anything they want to do to you.

Hell, just pay Bill Nye the Science Guy to hold a march with a bunch of
kids screaming "HOW DARE YOU!!!" as they ransack the village hunting down
those who deny "SCIENCE", including their own parents who do not "consent"
to whatever the powers that be want to do to them.  Resist "the children"?
HOW DARE YOU bully the innocents!  You might not even have to pay Bill very
much if you give him access to the kids in a _very_ private setting.

> Especially in virtual worlds hosted by computers where there is a
> disconnect between thermodynamic and information theoretic.
>

Again, you're simply invoking resource limitations/computability/value
functions.


> And NKS (Wolfram) does squeeze in there somewhat between Occam and Gödel….
> Didn’t gain much traction yet AFAIK.
>

Wolfram!  Well!  Perhaps you should take this up with Hector Zenil:

I am also the Managing Editor of Complex Systems, the first journal in the
field founded by Stephen Wolfram in 1987. I am member of the Editorial
Board of publications and book series such as the Springer series on
Emergence, Complexity and Computation, the journals Philosophies and
Frontiers in Robotics and AI for its Computational Intelligence section,
among other journals. I also serve as consultant/advisor for labs and
organisations such as Wolfram Research, the Living Systems Lab, Intuition
Machine, Veda Data, and the Lifetime Foundation.


I seem to recall an ancient paper of his called "Causal deconvolution by
algorithmic generative models" or some such that Wolfram might take seriously.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Mfcffbeb8eb97e391130495db
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread immortal . discoveries
>https://www.upwork.com/ppc/landing/?gclid=CjwKCAjwldHsBRAoEiwAd0JybXHg8xW3OKkea83DjB7T7G4K76HpCGI6R0GALI-a8lb4LwR23aH5kxoCNOEQAvD_BwE

There are many, many thousands of programmers here from all places on Earth 
that each get paid on average 30 USD per hour! Some make 100 USD an hour if 
they are skilled as hell (and artists, music producers, and more). There are 
thousands of jobs for Deep Learning, ML, data science, data structures, 
tutoring, knowledge representation, you name it. When you say you are out of a 
job, you are super-duper-zooper wrong.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-M3358e500b1fff35e046d4b69
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Re: Hydrating Representation Potential Backoff

2019-10-02 Thread immortal . discoveries
Well, after our galaxy is turned into the final state, 4 galaxies nearby will 
become transformed... then 16... then 64... then 248... until heat death 
if no heat life.

https://ibb.co/RHTKPq2
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8baff210f7f8fb59-Md9ae819e9b2b3091600a42ee
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] Hydrating Representation Potential Backoff

2019-10-02 Thread John Rose
Time makes us think that humans are willfully creating AGI, as if it is in the 
future, like the immanentizing of the singularity eschaton. Will scientific 
advances occur at an ever-increasing rate? It would have to slow down at a 
certain point. It has to, right? As we approach max compression of all 
knowledge into K-complexity delineation… 
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8baff210f7f8fb59-M1a5c2bf1fc91f04bdbf9369a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] The Job market.

2019-10-02 Thread John Rose
On Wednesday, October 02, 2019, at 1:05 AM, James Bowery wrote:
> Harvard University's Jonathan Haidt is so terrified of the truth coming out 
> that he's actually come out against Occam's Razor 
> .

There are situations where the simplest explanation is to chuck Occam's Razor 
:) 

There is an over reliance. Though implementors do need to go from complex to 
simple.

But there are issues with rationality. There are issues with scientific 
objectivism.

Aren't Occam and Gödel at odds with each other in some ways? Especially in 
virtual worlds hosted by computers where there is a disconnect between 
thermodynamic and information theoretic.

And NKS (Wolfram) does squeeze in there somewhat between Occam and Gödel…. 
Didn’t gain much traction yet AFAIK.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T8eabd59f2f06cc50-Md9f2407bfa3e517b3ace41a1
Delivery options: https://agi.topicbox.com/groups/agi/subscription