Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2017-05-02 Thread Marcus Daniels
https://www.youtube.com/watch?v=svIXTDeZzDg

-Original Message-
From: Marcus Daniels 
Sent: Monday, June 13, 2016 4:31 PM
To: friam@redfish.com
Subject: RE: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

I remember seeing a movie when I was in grade school about a computer that 
counselled citizens about their everyday challenges.  In the end the computer 
basically commits suicide.  It wasn't a mainstream movie but something (I 
suspect) recommended to teachers at the time (late 70s, though maybe it's an 
older film).  Teletypes and flashing lights, I think.  I can't find it with 
Google.  Anyone know what it is?  I tried some obvious sci-fi authors but came 
up with nothing.  It wasn't a particularly sci-fi movie, but more about not 
giving up responsibility to others, in this case a machine.

Anyway, that's a cheery tune.  I would pity the sentient machine that had to 
read stupid human tricks on Facebook or make sense of Amazon purchasing habits 
and credit histories all day.   Mind numbing.

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Carl
Sent: Friday, June 10, 2016 10:02 PM
To: friam@redfish.com
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

"Now we will build you an endlessly upward world..."

https://youtu.be/F7P2ViCRObs (written from the POV of an AI (if that's even possible))

Speaking of robot overlords, after listening to this I'm starting to think that 
trade agreements are less about trade than about big data.

C

On 6/10/16 3:21 PM, Marcus Daniels wrote:
> s/white guys playing basketball/scientists without engineers around/
>
> http://www.thewrap.com/snoop-dogg-explains-the-hizzistory-of-bizzasketball-to-jimmy-kimmel-viewers-video/
>
> -Original Message-
> From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
> Sent: Friday, June 10, 2016 3:18 PM
> To: friam@redfish.com
> Subject: Re: [FRIAM] Fascinating article on how AI is driving change 
> in SEO, categories of AI and the Law of Accelerating Returns
>
> On 06/10/2016 11:22 AM, Marcus Daniels wrote:
>> Bah.  I'll see your "You kids get off my lawn" and raise you a "Save it just 
>> keep it off my wave" ..  In particular David Brooks can save it..
> That's kinda how I feel when I go to museums.  My postmodernist homunculi 
> start thrashing around demanding to know why I'm looking at all this useless 
> and meaningless stuff ... drives my nihilist homunculi crrraaazy.
>
> --
> ☣ glen
>
> 
> FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe 
> at St. John's College to unsubscribe 
> http://redfish.com/mailman/listinfo/friam_redfish.com



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-14 Thread glen ☣
On 06/13/2016 03:31 PM, Marcus Daniels wrote:
> Anyone know what it is?  I tried some obvious sci-fi authors but came up with 
> nothing.   It wasn't a particularly sci-fi movie, but more about not giving 
> up responsibility to others, in this case a machine.

The only thing I could think of was https://en.wikipedia.org/wiki/THX_1138 ... 
but I suppose that wasn't really a therapist so much as a manipulator.

> Anyway, that's a cheery tune.  I would pity the sentient machine that had to 
> read stupid human tricks on Facebook or make sense of Amazon purchasing 
> habits and credit histories all day.   Mind numbing.

I marvel at what others find marvelous and vice versa.  When I expressed to my 
dad (when I was 15 maybe) that I thought his work as a tax assessor looked 
pretty tedious, he responded with a typical "you get what you put in" argument. 
 His favorite saying was: "If you're bored, then you're boring."  I get it 
intellectually ... but sheesh.  Taxes?  Really?

> On 06/10/2016 09:01 PM, Carl wrote:
>> "Now we will build you an endlessly upward world..."
>>
>> https://youtu.be/F7P2ViCRObs (written from the POV of an AI (if that's even possible))

Another video of her says it's from the perspective of a database.  Perhaps I'm 
perverse, but considering the difference between an AI and a database reasoner 
is interesting.  Going back to Marcus' point, perhaps keeping track of and 
reasoning over Facebook posts or Amazon purchasing habits would be deeply 
interesting to a database, but mind-numbing to a general AI.

-- 
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-13 Thread Marcus Daniels
I remember seeing a movie when I was in grade school about a computer that 
counselled citizens about their everyday challenges.  In the end the computer 
basically commits suicide.  It wasn't a mainstream movie but something (I 
suspect) recommended to teachers at the time (late 70s, though maybe it's an 
older film).  Teletypes and flashing lights, I think.  I can't find it with 
Google.  Anyone know what it is?  I tried some obvious sci-fi authors but came 
up with nothing.  It wasn't a particularly sci-fi movie, but more about not 
giving up responsibility to others, in this case a machine.

Anyway, that's a cheery tune.  I would pity the sentient machine that had to 
read stupid human tricks on Facebook or make sense of Amazon purchasing habits 
and credit histories all day.   Mind numbing.

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Carl
Sent: Friday, June 10, 2016 10:02 PM
To: friam@redfish.com
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

"Now we will build you an endlessly upward world..."

https://youtu.be/F7P2ViCRObs (written from the POV of an AI (if that's even possible))

Speaking of robot overlords, after listening to this I'm starting to think that 
trade agreements are less about trade than about big data.

C

On 6/10/16 3:21 PM, Marcus Daniels wrote:
> s/white guys playing basketball/scientists without engineers around/
>
> http://www.thewrap.com/snoop-dogg-explains-the-hizzistory-of-bizzasketball-to-jimmy-kimmel-viewers-video/
>
> -Original Message-
> From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
> Sent: Friday, June 10, 2016 3:18 PM
> To: friam@redfish.com
> Subject: Re: [FRIAM] Fascinating article on how AI is driving change 
> in SEO, categories of AI and the Law of Accelerating Returns
>
> On 06/10/2016 11:22 AM, Marcus Daniels wrote:
>> Bah.  I'll see your "You kids get off my lawn" and raise you a "Save it just 
>> keep it off my wave" ..  In particular David Brooks can save it..
> That's kinda how I feel when I go to museums.  My postmodernist homunculi 
> start thrashing around demanding to know why I'm looking at all this useless 
> and meaningless stuff ... drives my nihilist homunculi crrraaazy.
>
> --
> ☣ glen
>
> 
> FRIAM Applied Complexity Group listserv Meets Fridays 9a-11:30 at cafe 
> at St. John's College to unsubscribe 
> http://redfish.com/mailman/listinfo/friam_redfish.com



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread Carl

"Now we will build you an endlessly upward world..."

https://youtu.be/F7P2ViCRObs (written from the POV of an AI (if that's even possible))


Speaking of robot overlords, after listening to this I'm starting to think 
that trade agreements are less about trade than about big data.


C

On 6/10/16 3:21 PM, Marcus Daniels wrote:

s/white guys playing basketball/scientists without engineers around/

http://www.thewrap.com/snoop-dogg-explains-the-hizzistory-of-bizzasketball-to-jimmy-kimmel-viewers-video/

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
Sent: Friday, June 10, 2016 3:18 PM
To: friam@redfish.com
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

On 06/10/2016 11:22 AM, Marcus Daniels wrote:

Bah.  I'll see your "You kids get off my lawn" and raise you a "Save it just keep it 
off my wave" ..  In particular David Brooks can save it..

That's kinda how I feel when I go to museums.  My postmodernist homunculi start 
thrashing around demanding to know why I'm looking at all this useless and 
meaningless stuff ... drives my nihilist homunculi crrraaazy.

--
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College to unsubscribe 
http://redfish.com/mailman/listinfo/friam_redfish.com





FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread Marcus Daniels
s/white guys playing basketball/scientists without engineers around/

http://www.thewrap.com/snoop-dogg-explains-the-hizzistory-of-bizzasketball-to-jimmy-kimmel-viewers-video/
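(For anyone unfamiliar with the notation, the first line above is sed's substitute syntax, `s/pattern/replacement/`; a minimal shell sketch, with an illustrative input line of my own invention:)

```shell
# s/pattern/replacement/ rewrites the first match of pattern on each input line.
echo "white guys playing basketball explain the game" \
  | sed 's/white guys playing basketball/scientists without engineers around/'
# prints: scientists without engineers around explain the game
```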

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ☣
Sent: Friday, June 10, 2016 3:18 PM
To: friam@redfish.com
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

On 06/10/2016 11:22 AM, Marcus Daniels wrote:
> Bah.  I'll see your "You kids get off my lawn" and raise you a "Save it just 
> keep it off my wave" ..  In particular David Brooks can save it..

That's kinda how I feel when I go to museums.  My postmodernist homunculi start 
thrashing around demanding to know why I'm looking at all this useless and 
meaningless stuff ... drives my nihilist homunculi crrraaazy.

--
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread glen ☣

On 06/10/2016 11:22 AM, Marcus Daniels wrote:

Bah.  I'll see your "You kids get off my lawn" and raise you a "Save it just keep it 
off my wave" ..  In particular David Brooks can save it..


That's kinda how I feel when I go to museums.  My postmodernist homunculi start 
thrashing around demanding to know why I'm looking at all this useless and 
meaningless stuff ... drives my nihilist homunculi crrraaazy.

--
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread Marcus Daniels

`` I have no idea, which is why I called it "faith" and hand-waved toward the 
inadequate closures of our current machines. ''

Bah.  I'll see your "You kids get off my lawn" and raise you a "Save it just 
keep it off my wave" ..  In particular David Brooks can save it..

Marcus

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread glen ☣


Heh, I'd forgotten about Golgafrincham.  It's funny because it's true!

The problem lies with the permeable and dynamic boundaries of all these things.  And 
"symbiont" captures the fuzziness of the boundaries quite well.  As we've argued till 
we're blue, _general_ intelligence may well be illusory.  It's possible (if not likely) that the 
only general intelligence we can build will be just as symbiotic with the milieu as we are.  Maybe 
the AI won't rely directly on gut microbes.  Maybe it will rely on some other huge population of 
nanomachines that requires an entire earth to maintain ... perhaps the robot overlords will need 
promechanic pills to keep their gut nanomachines in healthy proportions.  I have no idea, which is 
why I called it "faith" and hand-waved toward the inadequate closures of our current 
machines.

Yes, I used "wonky" in order to prevent my email text from ballooning out of control.  
But "pathology" has (almost) a worse type of ambiguity to it, because it implies an 
assumed state of health or normality that "wonky" doesn't.  It's fine to adapt to wonky 
things if one is adaptable enough, like learning to ride a backwards brain bicycle 
(http://www.instructables.com/id/Reverse-steering-bike/).  Pathology is almost 
universally considered bad.


On 06/10/2016 10:12 AM, Marcus Daniels wrote:

If some subset of humanity builds a general artificial intelligence, and that 
intelligence takes over, or leaves, I don't see what gut biomes or ISIS matter. 
Nor do I see why wonkiness (w.r.t. Glen's last e-mail) must occur within a 
(sub)population of cybernetic or genetically engineered super-intelligent 
humans that separate themselves from (or control) a legacy human population -- 
either for biological or sociological reasons.  Sure, it could occur.  Why 
must it occur?  (Here I am assuming that `wonky' isn't just a word with a 
purposely ambiguous meaning, but is meant to suggest some sort of systemic 
pathology.)

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Carl
Sent: Friday, June 10, 2016 10:59 AM

I was thinking of symbiont in terms of mitochondria, gut biomes, HERVs,
etc.   I'm also rather increasingly fond of 1G, so if I am to give that
up, it doesn't seem to me that some long-term fractional G is going to be worth 
it.

You are of course familiar with Golgafrincham?



--
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread Marcus Daniels
If some subset of humanity builds a general artificial intelligence, and that 
intelligence takes over, or leaves, I don't see what gut biomes or ISIS matter. 
Nor do I see why wonkiness (w.r.t. Glen's last e-mail) must occur within a 
(sub)population of cybernetic or genetically engineered super-intelligent 
humans that separate themselves from (or control) a legacy human population -- 
either for biological or sociological reasons.  Sure, it could occur.  Why 
must it occur?  (Here I am assuming that `wonky' isn't just a word with a 
purposely ambiguous meaning, but is meant to suggest some sort of systemic 
pathology.)

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Carl
Sent: Friday, June 10, 2016 10:59 AM
To: friam@redfish.com
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

I was thinking of symbiont in terms of mitochondria, gut biomes, HERVs, 
etc.   I'm also rather increasingly fond of 1G, so if I am to give that 
up, it doesn't seem to me that some long-term fractional G is going to be worth 
it.

You are of course familiar with Golgafrincham?

On 6/10/16 9:23 AM, glen ☣ wrote:
> On 06/09/2016 08:26 PM, Carl wrote:
>> One might do well to remember that we are symbionts (a Good Thing), so, 
>> transcendence for who or what?
> Excellent question!  It's pretty easy to trash faith in various contexts.  I 
> do my best to hunt it down and eradicate it in my own world view.  But one 
> article of faith I'm having a hard time killing is that if _we_ go anywhere 
> (including across some abstract singularity as well as to Mars), we'll _all_ 
> have to go, or at least some kernel of us with a chance of growing into a 
> robust ecosystem.
>
> One of the better senses of the concept of "machine" comes (basically) down 
> to this: a machine is that which can be adequately sliced out of its environment.  
> Life cannot be so sliced out ... or at least I have yet to eliminate my faith 
> in our systemic/social nature.  We are a film: a lumpy, gooey, sticky mess.
>
>> On 6/9/16 6:50 PM, Steven A Smith wrote:
>>> The question I suppose, that I feel is in the air, is whether we are 
>>> accelerating toward an extinction event of our own making and whether 
>>> backing off on the accelerator will help reduce the chances of it being 
>>> total or if, as with the source domain of the metaphor,  will backing off 
>>> too fast actually *cause* a spinout?  Or perhaps the best strategy is to 
>>> punch on through?   Kurzweil is voting for "pedal to the metal" (achieve 
>>> transhuman transcendence in time for him to erh... transcend personally?) 
>>> and I suppose I'm suggesting "back off on the pedal gently but with strong 
>>> intent" with some vague loyalty and identity with "humans as we are"...
> You already know I agree with you.  But it helps to repeat it.  The "pedal to 
> the metal" guys sound the same (to me) as climate change deniers.  There are 
> 2 types: 1) people who believe the universe is open enough, extensible 
> enough, adaptive enough, to accommodate our "pedal to the metal" and settle 
> into a (beneficial to us) stability afterwards and 2) those who think we (or 
> the coming Robot Overlords) will be smart enough to intentionally regulate 
> stability.
>
> It's not fear that suggests an agile foot.  It's open-minded speculation 
> across all the possibilities.  But the metaphor falls apart.  It's not 
> out-driving our headlights so much as barely stable bubbles of chemicals, 
> which is what we are.  And it only takes a slight change in, say, medium pH 
> to burst all of us bubbles ... like wiping your finger on your face and 
> sticking it into the head on your beer ... add a little skin oil and it all 
> comes crashing down.
>
>>> so who am I to argue with the end of an individual life, culture or species?
> Hear, hear.  Besides, death is a process.  And it may well feel good:
>
>
> http://www.nature.com/scitable/blog/brain-metrics/could_a_final_surge_in



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread glen ☣
On 06/10/2016 09:05 AM, Marcus Daniels wrote:
> They wouldn't do a Mars One (one-way) trip.  They are thriving in this 
> environment.  Only `weird' people would do that.  There are other options 
> for people who are willing to take risks.  But in the Elysium case, yes.

That's a good point.  But it gets a bit muddied when considering other forms of 
"leaving", like installing more memory in your head, cognitive enhancing drugs, 
designer babies, etc.  "Organic" food is similar.  I suspect the Trumps, 
Thiels, etc. _will_ do everything they can to leave the rest of us behind, 
because they see us as parasitic parts of the "we".  Even if some of them 
(Gates, Musk, Branson) have a more generous bent, their attention is limited in 
the same way everyone else's is.  They simply won't spend the time required to 
understand, say, the role an oxy-addicted instagram addict plays in the "we".  
My main point with the machine vs. life severability concept was that, in any 
of these types of "leaving", if we don't take the whole system, then it will go 
wonky.

A great example of "taking all of us when we go" is ISIS.  Social media has (I 
think) transformed us quite a bit.  And we brought ISIS right along with us on 
the transition.  The alt-right and neo-reactionaries are the same.  What would 
otherwise be an obvious (small) collection of morons without social media has 
become part of the existential threat (in part, provided with a recruitment 
pathway to/through Trump):

  http://thinkprogress.org/politics/2016/06/09/3786370/students-for-trump-psu/

Like it or not, those jerks are part of us and we will take them with us as we 
evolve.

-- 
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread Marcus Daniels
`` I'll not only consider them.  I'll be in the front of the line ... as long 
as they let lower middle class morons like me in the line at all.  I suspect 
it'll be packed with Trumps, Musks, Thiels, and Bransons. ''

They wouldn't do a Mars One (one-way) trip.  They are thriving in this 
environment.  Only `weird' people would do that.  There are other options for 
people who are willing to take risks.  But in the Elysium case, yes.

Marcus

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread glen ☣
On 06/10/2016 08:41 AM, Marcus Daniels wrote:
> That "we" entered into the discussion is arbitrary (Steve started with that, 
> I think), and further the statement is tautological.

Heh, no, it's not tautological.  It relies on the ambiguity of the word 
(perhaps concept) "we".  You're right that it's technically fallacious.  But 
the fallacy isn't that it's tautological.  Fallacy can be used to good effect 
in the same way paradox can.

> For example, I'm quite confident I don't need the Trump or ISIS people in my 
> life at all.  I am not a willing symbiont.  If there were other ways to 
> live / other forms to take / other planets or non-terrestrial locations to 
> inhabit, and life were longer than it is, I would certainly consider them.

I think we'll be surprised.  I'll not only consider them.  I'll be in the front 
of the line ... as long as they let lower middle class morons like me in the 
line at all.  I suspect it'll be packed with Trumps, Musks, Thiels, and 
Bransons.

-- 
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread Marcus Daniels
``But one article of faith I'm having a hard time killing is that if _we_ go 
anywhere (including across some abstract singularity as well as to Mars), we'll 
_all_ have to go, or at least some kernel of us with a chance of growing into a 
robust ecosystem.''

That "we" entered into the discussion is arbitrary (Steve started with that, I 
think), and further the statement is tautological.
For example, I'm quite confident I don't need the Trump or ISIS people in my 
life at all.  I am not a willing symbiont.  If there were other ways to live 
/ other forms to take / other planets or non-terrestrial locations to inhabit, 
and life were longer than it is, I would certainly consider them.

Marcus

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com


Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-10 Thread glen ☣
On 06/09/2016 08:26 PM, Carl wrote:
> One might do well to remember that we are symbionts (a Good Thing), so, 
> transcendence for who or what?

Excellent question!  It's pretty easy to trash faith in various contexts.  I do 
my best to hunt it down and eradicate it in my own world view.  But one article 
of faith I'm having a hard time killing is that if _we_ go anywhere (including 
across some abstract singularity as well as to Mars), we'll _all_ have to go, 
or at least some kernel of us with a chance of growing into a robust ecosystem.

One of the better senses of the concept of "machine" comes (basically) down to 
this: a machine is that which can be adequately sliced out of its environment.  
Life cannot be so sliced out ... or at least I have yet to eliminate my faith in 
our systemic/social nature.  We are a film: a lumpy, gooey, sticky mess.

> On 6/9/16 6:50 PM, Steven A Smith wrote:
>> The question I suppose, that I feel is in the air, is whether we are 
>> accelerating toward an extinction event of our own making and whether 
>> backing off on the accelerator will help reduce the chances of it being 
>> total or if, as with the source domain of the metaphor,  will backing off 
>> too fast actually *cause* a spinout?  Or perhaps the best strategy is to 
>> punch on through?   Kurzweil is voting for "pedal to the metal" (achieve 
>> transhuman transcendence in time for him to erh... transcend personally?) 
>> and I suppose I'm suggesting "back off on the pedal gently but with strong 
>> intent" with some vague loyalty and identity with "humans as we are"...

You already know I agree with you.  But it helps to repeat it.  The "pedal to 
the metal" guys sound the same (to me) as climate change deniers.  There are 2 
types: 1) people who believe the universe is open enough, extensible enough, 
adaptive enough, to accommodate our "pedal to the metal" and settle into a 
(beneficial to us) stability afterwards and 2) those who think we (or the 
coming Robot Overlords) will be smart enough to intentionally regulate 
stability.

It's not fear that suggests an agile foot.  It's open-minded speculation across 
all the possibilities.  But the metaphor falls apart.  It's not out-driving our 
headlights so much as barely stable bubbles of chemicals, which is what we are. 
 And it only takes a slight change in, say, medium pH to burst all of us 
bubbles ... like wiping your finger on your face and sticking it into the head 
on your beer ... add a little skin oil and it all comes crashing down.

>> so who am I to argue with the end of an individual life, culture or species?

Hear, hear.  Besides, death is a process.  And it may well feel good:

  http://www.nature.com/scitable/blog/brain-metrics/could_a_final_surge_in


-- 
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-09 Thread Carl
Hmm, ok, there's the "gene drive" issue.  You could, say, get rid of 
mosquitoes, but there may be side effects, e.g. malaria finds a different, 
more effective vector.  One could also imagine other nasty things one 
could do to the microbiome (or other fast-reproducing, short-lived bits 
of our Context) of people with whom one presumed to disagree.  So yes, 
headlights.


http://phys.org/news/2016-06-fast-moving-science-gene.html

One might do well to remember that we are symbionts (a Good Thing), so, 
transcendence for who or what?


On 6/9/16 6:50 PM, Steven A Smith wrote:

Glen -

I do believe we *will* and *have been* outdriving our headlights, and 
it is part of the "manifest destiny" of being human, maybe 
mammal/warm-blooded/vertebrate/fauna/life?   It *might be* a necessary 
property of evolved life to innovate "grandly"... where "grandly" is a 
relative term.   The question I suppose, that I feel is in the air, is 
whether we are accelerating toward an extinction event of our own 
making and whether backing off on the accelerator will help reduce the 
chances of it being total or if, as with the source domain of the 
metaphor,  will backing off too fast actually *cause* a spinout?  Or 
perhaps the best strategy is to punch on through?   Kurzweil is voting 
for "pedal to the metal" (achieve transhuman transcendence in time for 
him to erh... transcend personally?) and I suppose I'm suggesting 
"back off on the pedal gently but with strong intent" with some vague 
loyalty and identity with "humans as we are"...


I also agree that Science is a sub-discipline of Engineering in the 
sense you mean it...  I think it is mostly a moot distinction.  I 
happen to have been trained in Science but practiced primarily in 
Engineering, so am familiar with the common view (at least of 
Scientists) of the reverse.   I think this point is a nice 
conundrum...  as a mutual friend of many of us uses for his tagline: 
"The Universe is Flux, All else is Opinion".   It is the nature of 
"life" to evolve which (so far?) requires a finite lifetime for the 
individual...   so who am I to argue with the end of an individual 
life, culture or species?



Flux on!

 - Steve

On 6/9/16 12:20 PM, Pamela McCorduck wrote:
I like this idea, Glen. Don't necessarily agree, but it's worth 
examining.


Sent from my iPhone


On Jun 9, 2016, at 9:53 AM, glen ☣  wrote:


On 06/08/2016 11:27 AM, Marcus Daniels wrote:
`` I'm pretty much a luddite myself, or at least "conservative" in 
the sense of believing that we are outdriving our headlights on 
many fronts.''


Experiments can be risky but sometimes they pay off..
The deeper point, I think, is that we not only _must_ outdrive our 
headlights, we've been doing it for billions of years. I've been 
trying to find some spare time to explore the idea that science is a 
sub-discipline of engineering. It's counter to our normal paradigm 
where we think engineering is applied science.  But I find it an 
attractive idea that you can't learn or understand anything without 
violently destroying/reorganizing some small part of the universe 
first.  Hence, all knowledge comes through engineering first. We 
have to force the ambience through our intentional filter before we 
can do anything with it ... like playdough through a stencil ... 
cast some liquid reality into the mold that is your mind, as it were.


--
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-09 Thread Steven A Smith

Glen -

I do believe we *will* and *have been* outdriving our headlights, and it 
is part of the "manifest destiny" of being human, maybe 
mammal/warm-blooded/vertebrate/fauna/life?   It *might be* a necessary 
property of evolved life to innovate "grandly"... where "grandly" is a 
relative term.   The question in the air, I suppose, is whether we are 
accelerating toward an extinction event of our own making, whether 
backing off on the accelerator will help reduce the chances of it being 
total, or if, as in the source domain of the metaphor, backing off too 
fast will actually *cause* a spinout?  Or perhaps the best strategy is 
to punch on through?   Kurzweil is voting for "pedal to the metal" 
(achieve transhuman transcendence in time for him to erh... transcend 
personally?) and I suppose I'm suggesting "back off on the pedal gently 
but with strong intent" with some vague loyalty to and identity with 
"humans as we are"...


I also agree that Science is a sub-discipline of Engineering in the 
sense you mean it...  I think it is mostly a moot distinction.  I happen 
to have been trained in Science but practiced primarily in Engineering, 
so am familiar with the common view (at least of Scientists) of the 
reverse.   I think this point is a nice conundrum...  as a mutual friend 
of many of us uses for his tagline: "The Universe is Flux, All else is 
Opinion".   It is the nature of "life" to evolve which (so far?) 
requires a finite lifetime for the individual...   so who am I to argue 
with the end of an individual life, culture or species?



Flux on!

 - Steve

On 6/9/16 12:20 PM, Pamela McCorduck wrote:

I like this idea, Glen. Don't necessarily agree, but it's worth examining.

Sent from my iPhone


On Jun 9, 2016, at 9:53 AM, glen ☣  wrote:


On 06/08/2016 11:27 AM, Marcus Daniels wrote:
`` I'm pretty much a luddite myself, or at least "conservative" in the sense of 
believing that we are outdriving our headlights on many fronts.''

Experiments can be risky but sometimes they pay off..

The deeper point, I think, is that we not only _must_ outdrive our headlights, 
we've been doing it for billions of years.  I've been trying to find some spare 
time to explore the idea that science is a sub-discipline of engineering. It's 
counter to our normal paradigm where we think engineering is applied science.  
But I find it an attractive idea that you can't learn or understand anything 
without violently destroying/reorganizing some small part of the universe 
first.  Hence, all knowledge comes through engineering first.  We have to force 
the ambience through our intentional filter before we can do anything with it 
... like playdough through a stencil ... cast some liquid reality into the mold 
that is your mind, as it were.

--
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-09 Thread Marcus Daniels
``In major metropolises like New York City, the introduction of the internal 
combustion engine cleared the majority of the "disease-causing"  horse manure 
buildup in the streets (daily!) and it took 50 or more years for the 
replacement consequences to come home to roost.''

Speaking of which..

http://science.sciencemag.org/content/352/6291/1312.full




Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-09 Thread Steven A Smith



On 6/8/16 12:27 PM, Marcus Daniels wrote:

`` I'm pretty much a luddite myself, or at least "conservative" in the sense of 
believing that we are outdriving our headlights on many fronts.''

Experiments can be risky but sometimes they pay off..

http://discovermagazine.com/2010/mar/07-dr-drank-broth-gave-ulcer-solved-medical-mystery
I agree, and believe that Homo Sapiens has been as "successful" as we 
have been *because* of the diversity of our "experimentation"... my 
issue probably has something to do with the stakes... I don't think 
humanity has been in an "all in" situation before the last 50 years or so?


 I'm not sure what an all-out nuclear exchange in the 60's would have 
looked like... probably not global extinction... just a lot of the 
northern hemisphere?


Nuclear Armageddon is still possible, but it seems like we've (mostly) 
missed that window and instead are facing a plethora of unintended 
consequences from our less obviously warlike "progress"... of course, 
we often race headlong forward, escalating our tech responses to 
mitigate the consequences of the last round.


In major metropolises like New York City, the introduction of the 
internal combustion engine cleared the majority of the "disease-causing" 
horse manure buildup in the streets (daily!) and it took 50 or more 
years for the replacement consequences to come home to roost.


- Steve





Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-09 Thread Pamela McCorduck
I like this idea, Glen. Don't necessarily agree, but it's worth examining. 

Sent from my iPhone

> On Jun 9, 2016, at 9:53 AM, glen ☣  wrote:
> 
>> On 06/08/2016 11:27 AM, Marcus Daniels wrote:
>> `` I'm pretty much a luddite myself, or at least "conservative" in the sense 
>> of believing that we are outdriving our headlights on many fronts.''
>> 
>> Experiments can be risky but sometimes they pay off..
> 
> The deeper point, I think, is that we not only _must_ outdrive our 
> headlights, we've been doing it for billions of years.  I've been trying to 
> find some spare time to explore the idea that science is a sub-discipline of 
> engineering. It's counter to our normal paradigm where we think engineering 
> is applied science.  But I find it an attractive idea that you can't learn or 
> understand anything without violently destroying/reorganizing some small part 
> of the universe first.  Hence, all knowledge comes through engineering first. 
>  We have to force the ambience through our intentional filter before we can 
> do anything with it ... like playdough through a stencil ... cast some liquid 
> reality into the mold that is your mind, as it were.
> 
> -- 
> ☣ glen
> 
> 



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-09 Thread glen ☣
On 06/08/2016 11:27 AM, Marcus Daniels wrote:
> `` I'm pretty much a luddite myself, or at least "conservative" in the sense 
> of believing that we are outdriving our headlights on many fronts.''
> 
> Experiments can be risky but sometimes they pay off..

The deeper point, I think, is that we not only _must_ outdrive our headlights, 
we've been doing it for billions of years.  I've been trying to find some spare 
time to explore the idea that science is a sub-discipline of engineering. It's 
counter to our normal paradigm where we think engineering is applied science.  
But I find it an attractive idea that you can't learn or understand anything 
without violently destroying/reorganizing some small part of the universe 
first.  Hence, all knowledge comes through engineering first.  We have to force 
the ambience through our intentional filter before we can do anything with it 
... like playdough through a stencil ... cast some liquid reality into the mold 
that is your mind, as it were.

-- 
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-08 Thread Marcus Daniels
`` I'm pretty much a luddite myself, or at least "conservative" in the sense of 
believing that we are outdriving our headlights on many fronts.''

Experiments can be risky but sometimes they pay off..

http://discovermagazine.com/2010/mar/07-dr-drank-broth-gave-ulcer-solved-medical-mystery




Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-07 Thread Marcus Daniels
``I can't help but wonder about our conceptual need for "digital" 
abstractions.''

For example, quantum calculations can be performed on a digital computer, or by 
an artificial system made up of superconducting Josephson junctions, or 
observed in crystal structures.  There are tradeoffs between precision, 
realism, and scale to make.  I think large scale computational science will 
increasingly depend on observing phenomena (e.g. natural or constructed atomic 
systems) in controlled settings and less on digital computers, simply because 
the Avogadro scales involved won't be possible even with gigawatt-scale digital 
computers.

Marcus
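The scale point above can be made concrete with a back-of-envelope sketch. This is a minimal illustration, not from the thread: it assumes a dense state-vector simulation with 16 bytes (complex128) per amplitude, which is the usual worst-case accounting for simulating n qubits digitally.

```python
# Back-of-envelope check: a dense state-vector simulation of n qubits
# stores 2**n complex amplitudes. At 16 bytes per amplitude (complex128),
# memory doubles with every added qubit, so digital simulation hits a
# wall long before anything like Avogadro-scale systems.

def statevector_bytes(n_qubits: int) -> int:
    """Memory (bytes) for a dense complex128 state vector of n qubits."""
    return (2 ** n_qubits) * 16

for n in (30, 50, 80):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:.3g} GiB")
```

At 30 qubits the vector already needs 16 GiB; at 50 it needs roughly 16 million GiB, which is why observing a controlled physical system can beat simulating it.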



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-07 Thread glen ☣


If you meant to say that our conception of programming (as opposed to 
understanding of programming).  Along the same lines, I just ran across this:

http://www.erights.org/elib/capability/ode/overview.html

"Just as the digital logic gate abstraction allows digital circuit designers to 
create large analog circuits without doing analog circuit design, we present 
cryptographic capabilities as an abstraction allowing a similar economy of engineering 
effort in creating smart contracts."

I can't help but wonder about our conceptual need for "digital" abstractions.  It seems 
similar to the transition across sequential thinking vs parallel thinking, across procedural vs 
functional ... or classical vs quantum ... reals vs hyperreals ... proof vs types, etc.  I'm 
reminded of Steve Smith's reported explanation for the fire-knock-out physics of "Dies the 
Fire".  If I remember right, the idea was that the solar system had been somehow transported 
to another region of the universe, where the laws of physics were different.  Does the Mormon god 
(over there on Kolob) find Haskell or Prolog more intuitively natural?  Or what about the 
programmers prior to the last Big Crunch?  Were they burdened by discretization problems?
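The "digital logic gate abstraction" quoted above can be sketched in a few lines. This is my own toy illustration (the function names and noise bound are assumptions, not anything from the thread): an underlying analog value is continuous and noisy, but thresholding restores clean discrete levels, so each stage can ignore the analog detail of the stage before it.

```python
# Toy sketch of the digital abstraction over an analog substrate:
# bounded analog noise is absorbed by a threshold, so discrete bits
# survive intact from stage to stage.
import random

def noisy_analog(bit: int, noise: float = 0.3) -> float:
    """An 'analog' voltage for a logic level, corrupted by bounded noise."""
    return bit + random.uniform(-noise, noise)

def restore(voltage: float, threshold: float = 0.5) -> int:
    """The digital abstraction: everything above threshold reads as 1."""
    return 1 if voltage > threshold else 0

bits = [0, 1, 1, 0, 1]
analog = [noisy_analog(b) for b in bits]
assert [restore(v) for v in analog] == bits  # noise absorbed, bits survive
```

Because the noise bound (0.3) is smaller than the threshold margin (0.5), restoration is always exact; that guaranteed margin is what lets designers "create large analog circuits without doing analog circuit design."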

On 06/07/2016 11:39 AM, Marcus Daniels wrote:

"The problem is this unjustified dichotomy between machine and biology."

There isn't engineering practice in place for developing programmable nanomachines in the 
way there is for fabricating circuits, but   biology demonstrates it is possible.  It 
could be we work from the bottom, learning how to build extremely simple machines, atom 
by atom, and also work from the top, rationalizing how to manipulate proteins in 
arbitrary ways.  I think we'll find out that our understanding of 
"programming" is impoverished compared to what living things achieve.

http://www.pnas.org/content/109/23/8884.abstract



--
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-07 Thread Marcus Daniels
"The problem is this unjustified dichotomy between machine and biology."

There isn't engineering practice in place for developing programmable 
nanomachines in the way there is for fabricating circuits, but   biology 
demonstrates it is possible.  It could be we work from the bottom, learning how 
to build extremely simple machines, atom by atom, and also work from the top, 
rationalizing how to manipulate proteins in arbitrary ways.  I think we'll 
find out that our understanding of "programming" is impoverished compared to 
what living things achieve.  

http://www.pnas.org/content/109/23/8884.abstract

Marcus



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-07 Thread sasmyth
What a Panoply of responses (or is it more of a Plethora?) on this topic here.

I can't begin to respond to the many very interesting and thoughtful points
made here.   This general topic (the existential implications of the
co-evolution of humans and technology, the "extended phenotype" as Dawkins
calls it) is a rich one.

Heidegger's (1977?) essay on the topic as provided is quite interesting and
deserves a complete reading, as do several other references here!   

Too bad my queue is overfull and my own extended phenotype (mostly my primary
use laptop) is over-extended.  I'm trying to extend my extended phenotype more
into the cloud (typing this in a webmail client, which I normally loathe!)
while considering a second backup of my system on Google Drive (not just my
in-house Time Capsule)... 

Our own local player in the game of Singularity, Steven Kotler, puts a lot of
interesting ideas out there in his recent books such as "Abundance" and "The
Rise of Superman"...  I'm pretty much a luddite myself, or at least
"conservative" in the sense of believing that we are outdriving our headlights
on many fronts.

That said, I think it is inevitable... short of a global shift in
consciousness, or perhaps at least in the first world (where most of this tech
development is driven by rampant capitalistic consumerism).

To counter this pessimism, I am reminded that many natural processes follow
neither a linear nor an exponential growth curve but rather more of a sigmoid,
which admits into the situation the idea of saturation.   The long term growth
of many things is less than its local growth at optimum, as the growth is
characterized by a series of piecewise sigmoidal curves, each with perhaps a
higher slope at optimum than the last, but never the implied exponential when
apprehended before the saturation element takes over.
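The exponential-versus-sigmoid point above can be seen in a few lines of code. This is a minimal sketch (the growth rate, initial value, and carrying capacity K are arbitrary assumptions for illustration): the two curves are nearly indistinguishable early on, but the logistic one saturates.

```python
# Exponential vs logistic (sigmoid) growth: identical at first glance,
# wildly different once the saturation term kicks in.
import math

def exponential(t, x0=1.0, r=0.5):
    """Unbounded exponential growth x0 * e^(r t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Closed-form solution of dx/dt = r x (1 - x/K): saturates at K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 20, 40):
    print(f"t={t:>2}  exp={exponential(t):14.1f}  logistic={logistic(t):7.1f}")
```

Sampled before saturation (small t), the logistic curve "looks exponential," which is exactly the trap in extrapolating a law of accelerating returns from early data.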

I think the existential threat of loss of meaning is very acute, as many who
lost their "livelihood" in the 2001 dot.bomb or the 2008 banking/real-estate
debacles can attest.  Many of these people (self included by some measure) have
had to reinvent, not only a career, but an identity.  Formal retirement (much
of the list here) has the same challenges except that it is socially integrated
and something we "plan for".   As for myself, while I'm keeping the wolf from
the door financially, I can imagine how hard it is for others to keep not only
financial integrity but also identity integrity.  If I had not started a
business larger than myself and had a hand in sfX "back in the day", I might
have experienced much more dis-integration of self than I actually experience
today.

I like the idea of universal support, modulo the issues so aptly pointed out by
REC and others.   I like the idea of leaving people *room* to (re)invent
themselves as creative human beings without the current (archaic?) constraints
of being productive in a consumerist society.   *SO* many things have to
change roughly at the same time for this to come about that I am not confident
we will get there quickly or efficiently.

This brings me to a point about "efficiency".   Evolution has never been
"efficient" by our standards; it seems always to use mass extinctions and
frighteningly short life-spans to drive its own engine of creativity
(whatever that means)... so I'm not swayed by arguments that suggest we *can*
evolve without outrageous cost to most of the participants.

Not intended to be a bummer here, just appreciating the complexity of this
discussion as well as (I think) of these times!

- Steve

> On 06/06/2016 02:22 PM, Roger Critchlow wrote:
> > https://medium.com/utopia-for-realists/why-do-the-poor-make-such-poor-decisions-f05d84c44f1a
> > was interesting, vis a vis what happens when you just give poor people
> > money.
> 
> Excerpt:
> > So in concrete terms, just how much dumber does poverty make you?
> > 
> > "Our effects correspond to between 13 and 14 IQ points," Shafir says.
> > "That’s comparable to losing a night’s sleep or the effects of
> > alcoholism." What’s remarkable is that we could have figured all this out
> > 30 years ago. Shafir and Mullainathan weren’t relying on anything so
> > complicated as brain scans. "Economists have been studying poverty for years
> > and psychologists have been studying cognitive limitations for years,”
> > Shafir explains. “We just put two and two together."
> 
> That is a good read.  Thanks.
> 
> > On Mon, Jun 6, 2016 at 4:54 PM, Marcus Daniels  wrote:
> > 
> >> A problem with the
> >> "day jobber" approach is the narrowing of substantial things to what
> >> happens to be in the interest of dominant organizations.  Even in silicon
> >> valley, that's a harsh narrowing of the possible.   So I would say do it to
> >> make the world interesting and not just for humanitarian reasons.
> 
> Yep.  We can't be arrogant enough to think we don't need those large hubs of
intention, though.  I can imagine if there's any truth to the scale-free
network concept, then lots of people 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-07 Thread glen ☣
On 06/05/2016 02:22 PM, Robert Wall wrote:
> This one, titled "Where do minds belong?" (Mar 2016), discusses the
> technological roadblocks in an insightful, highly speculative, but
> entertaining manner.

"Those early intelligences could have long ago reached the point where they 
decided to transition back from machines to biology."

The gist of this essay is a perfect example of trying to answer an ill-formed 
question.  It's entirely based on an unjustified distinction between machine 
and biology.  I'm all for justifying such a distinction.  And invoking von 
Neumann, energetics, and "neuromorphic architectures" exhibits a bit of context 
most others don't manage.  But discussing a move to machine intelligence and 
then a potential move back to biological intelligence without giving even a 
hand-waving mention of the difference between the two is conflating cart and 
horse.  And to beat around the bush so much is maddening.

Maybe there's currently a dearth of click-bait value left in the "what is life" 
genre.  So, perhaps Scharf and Aeon are exhibiting their awareness of a 
buzzphilic audience.

It would have been responsible, as long as you're going to mention 
Church-Turing and von Neumann anyway, to point out that both von Neumann and 
Turing went quite a ways in demonstrating that biology and machines are not 
very different.  To me, the _problem_ isn't one of AI.  The problem is this 
unjustified dichotomy between machine and biology.  A correlate problem is the 
(again probably false) distinction between life and intelligence.

-- 
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-07 Thread glen ☣
On 06/06/2016 02:22 PM, Roger Critchlow wrote:
> https://medium.com/utopia-for-realists/why-do-the-poor-make-such-poor-decisions-f05d84c44f1a
> was interesting, vis a vis what happens when you just give poor people
> money.

Excerpt:
> So in concrete terms, just how much dumber does poverty make you?
> 
> "Our effects correspond to between 13 and 14 IQ points," Shafir says. "That’s 
> comparable to losing a night’s sleep or the effects of alcoholism." What’s  
> remarkable is that we could have figured all this out 30 years ago. Shafir 
> and Mullainathan weren’t relying on anything so complicated as brain scans. 
> "Economists have been studying poverty for years and psychologists have been 
> studying cognitive limitations for years,” Shafir explains. “We just put two 
> and two together."

That is a good read.  Thanks.

> On Mon, Jun 6, 2016 at 4:54 PM, Marcus Daniels  wrote:
> 
>> A problem with the
>> "day jobber" approach is the narrowing of substantial things to what
>> happens to be in the interest of dominant organizations.  Even in silicon
>> valley, that's a harsh narrowing of the possible.   So I would say do it to
>> make the world interesting and not just for humanitarian reasons.

Yep.  We can't be arrogant enough to think we don't need those large hubs of 
intention, though.  I can imagine if there's any truth to the scale-free 
network concept, then lots of people _should_ sign over their labor to the 
interests of some large organization.  But that's a far cry from the current 
thinking that everybody should have a "job" and that oversimplifies around 
unemployment stats.  When I hear politicians say things like "job creator" or 
talk about how the people want jobs, I get a little nauseous.  The word "job" 
has always had an obligatory tone to it.  Objective-oriented people, in my 
experience, tend to talk about things like career paths or in terms of dreams, 
roles, achievements, etc.  If they talk about jobs, it's usually in the context 
of using a job as a stepping stone toward their objective.  Jobs are tools, 
means to an end, not ends in themselves.

I suppose it's kinda like those motorcycle commercials that say things like 
"The journey is the destination".  No, the destination is the destination and 
the journey is the journey.  Sheesh.  Of course, that doesn't mean you can't 
have fun while using your tool.  And some tools are way more fun than others.  
But anyone who talks about creating tools just for the sake of the tool, is ... 
well, a bit of a tool.

-- 
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Marcus Daniels
“Transhumanism is a great Sci-Fi narrative, but not a good bet for us in the 
long run.”

Well,

http://www.nature.com/articles/srep22555
http://science.sciencemag.org/content/early/2016/06/01/science.aaf6850.full



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Robert Wall
Hi Marcus or Robot Overlord,

Tongue in cheek:  How about early "retirement" packages to benefit the
surviving families?  I certainly may have to consider this myself for my
kids' and grandkids' survival  if the "offer" comes about.  But I am
retired and not displaced ... but I may still seem like a resource consumer
with no "apparent" ROI [except for what gets posted here, of course. :-)]

Still, given the knowledge I currently represent and embody that will waste
away with my death as you have said, I may still be more of an optimist in
these matters.  As naive as this may sound, if, for the sake of improving
humanity, we all paid just a bit more attention to achieving this uptick
through our own conscious evolution than through technological evolution
[and not through religion], we would have far fewer worries here. Improve
the conscious states even if through "advanced medicine and genetic
enhancements" or better and closer, more rational social politics.

This is the way to improve humanity in a *meaningful* way. No sixth
extinction event marking the end of the Anthropocene and the beginning of
the posthuman era.  No SkyNet.  No *I Robot* [the movie, not the novel].
Just the conquering of what seems to be in the way of our survival at the
moment, irrespective of any ANI or AQI robots: our immediate impact on the
ecosystem. In that respect, we should do what is right for us collectively
and right for a planet upon which we desperately will need for a long time
to come. No way we are going to be able to leave this rock. Transhumanism
is a great Sci-Fi narrative, but not a good bet for us in the long run.

I recommend reading Martin Heidegger's essay *The Question Concerning
Technology* (1954).  We *are* enframed.  But, the escape is ... well, *poetry*.
Okay, I know ... but you have to read this essay to understand. 

Best regards,

Robert

On Mon, Jun 6, 2016 at 8:03 PM, Marcus Daniels  wrote:

> If I were a robot overlord, and I didn’t want to look after 7 billion
> humans as pets, I’d start offering advanced medicine and genetic
> enhancements to “early users”, esp. the rich and powerful.   The results of
> these could be things like open-ended lifespan (ongoing repairs to aging
> bodies) and improved IQ, and perhaps even nicely-packaged cybernetic
> enhancements for emergency `soul preservation’ or high-speed
>  communication.  Humans are good at ignoring suffering outside of their
> tribe, and this would just be a new kind of social stratification.  Don’t
> need Skynet, just an incentive structure…
>
>
>
>
>
> *From:* Friam [mailto:friam-boun...@redfish.com] *On Behalf Of *Robert
> Wall
> *Sent:* Monday, June 06, 2016 7:16 PM
>
> *To:* The Friday Morning Applied Complexity Coffee Group <
> friam@redfish.com>
> *Subject:* Re: [FRIAM] Fascinating article on how AI is driving change in
> SEO, categories of AI and the Law of Accelerating Returns
>
>
>
> Getting back to Tom's original theme about how AI is driving change, let's
> examine that further, but now integrating in some of the other thoughts in
> this thread such as: on the hegemonic nature of AI-- proprietary or open
> source; or the societal impact of AI on the workforce--requisite skills
> increasing the value of the surviving human work; or on the existential
> risk of AI to humanity.  Certainly, it would be very relevant to also
> consider AI in the context of technological unemployment.  IMHO, this is
> the immediate existential threat, the threat to human-performed work.  Work
> is the thing that gives most of us something to organize our lives around
> ... giving us meaning to our existence. This threat is not naive.  It is
> real, palpable, and more fearsome than mortal death or physical extinction.
>
>
>
> We talked about the difference between ANI [Artificial Narrow
> Intelligence] and AGI [Artificial General Intelligence], with the former
> being the most prevalent--actually, the only type currently achieved.
> Current factory robots are of the ANI-type and are already replacing human
> workers by the millions here and abroad.  As their cost [ ~ $20,000]
> continues to decline through manufacturing efficiencies these robots will
> be able to replace even more workers, simultaneously putting downward
> pressure on the official, sustainable minimum wage.
>
>
>
> Even if the average rate of increase in "IQ" of these ANI robots remains
> at a modest steady pace or accelerates in pace with the supposed law of
> accelerating returns, then these ANI robots will start to make progress in
> the higher-paying jobs AND will tend to obviate the often stated political
> bromide of education as a solution; that is, human progress through a
> relatively slow 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Marcus Daniels
If I were a robot overlord, and I didn’t want to look after 7 billion humans as 
pets, I’d start offering advanced medicine and genetic enhancements to “early 
users”, esp. the rich and powerful.   The results of these could be things like 
open-ended lifespan (ongoing repairs to aging bodies) and improved IQ, and 
perhaps even nicely-packaged cybernetic enhancements for emergency `soul 
preservation’ or high-speed  communication.  Humans are good at ignoring 
suffering outside of their tribe, and this would just be a new kind of social 
stratification.  Don’t need Skynet, just an incentive structure…


From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Robert Wall
Sent: Monday, June 06, 2016 7:16 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

Getting back to Tom's original theme about how AI is driving change, let's 
examine that further, but now integrating in some of the other thoughts in this 
thread such as: on the hegemonic nature of AI-- proprietary or open source; or 
the societal impact of AI on the workforce--requisite skills increasing the 
value of the surviving human work; or on the existential risk of AI to 
humanity.  Certainly, it would be very relevant to also consider AI in the 
context of technological unemployment.  IMHO, this is the immediate existential 
threat, the threat to human-performed work.  Work is the thing that gives most 
of us something to organize our lives around ... giving us meaning to our 
existence. This threat is not naive.  It is real, palpable, and more fearsome 
than mortal death or physical extinction.

We talked about the difference between ANI [Artificial Narrow Intelligence] and 
AGI [Artificial General Intelligence], with the former being the most 
prevalent--actually, the only type currently achieved. Current factory robots 
are of the ANI-type and are already replacing human workers by the millions 
here and abroad.  As their cost [ ~ $20,000] continues to decline through 
manufacturing efficiencies these robots will be able to replace even more 
workers, simultaneously putting downward pressure on the official, sustainable 
minimum wage.

Even if the average rate of increase in "IQ" of these ANI robots remains at a 
modest steady pace or accelerates in line with the supposed law of accelerating 
returns, these ANI robots will start to make inroads into the higher-paying 
jobs AND will tend to obviate the often-stated political bromide of education 
as a solution; that is, human progress through a relatively slow educational 
process will not be able to keep up.
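
The "education can't keep up" point is, at bottom, a claim about growth rates. 
A toy calculation makes the arithmetic visible (every rate below is an 
invented assumption for illustration, not a measurement): even a modest 
compounding improvement eventually overtakes a steady linear gain.

```python
# Toy comparison: steady linear human skill growth vs. exponentially
# improving machine capability.  All numbers are illustrative assumptions.

def crossover_year(human_start=100.0, human_gain_per_year=1.0,
                   machine_start=10.0, machine_growth=1.25, horizon=100):
    """Return the first year the machine curve overtakes the human curve."""
    human, machine = human_start, machine_start
    for year in range(horizon):
        if machine >= human:
            return year
        human += human_gain_per_year   # slow, roughly linear schooling
        machine *= machine_growth      # compounding "accelerating returns"
    return None                        # never overtakes within the horizon

print(crossover_year())                # -> 11 under these assumed rates
```

The crossover point moves with the parameters, but the shape of the result 
does not: any growth factor above 1 eventually beats a linear schedule, which 
is the whole force of the argument in the paragraph above.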

Nor will we be "just a media for representing knowledge." Because situational, 
actionable knowledge will be derived at the edges of the network by way of 
sousveillance replacing the current news sources and repurposing them for 
command and control of, well, the situation.  "And it is difficult to imagine 
how such a sluggish government system could keep up with such a rapid rate of 
change when it can barely do so now" (quoting the linked article below).

This situation was anticipated years ago, for example in the Harvard Business 
Review article "What Happens to Society When Robots Replace Workers?" (Dec 2014):

"Ultimately, we need a new, individualized, cultural, approach to the meaning 
of work and the purpose of life. Otherwise, people will  find a solution – 
human beings always do – but it may not be the one for which we began this 
technological revolution."

Here's the rub and maybe the signal to keep all this in check:  Under such a 
dystopian scenario--where labor is transformed into capital--our capitalistic 
system would eventually collapse.  Experts say that when unemployment reaches 
35%, or thereabouts, the whole economic system collapses into chaos. 
Essentially there would be no consumers left in our consumer society. Perhaps, 
the only recourse would be for the capitalists who own the robots [the new 
workforce] to provide for a universal basic income to the technologically 
unemployed in order to maintain social order.
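
The "no consumers left" argument can be made concrete with a back-of-the-envelope 
sketch. The population, wage, and UBI figures below are invented purely to 
illustrate the bookkeeping, not estimates of anything:

```python
# Back-of-the-envelope aggregate demand for the "no consumers left" scenario.
# Population, wage, and UBI figures are made up for illustration.

def aggregate_demand(unemployment, wage=40_000, ubi=0, population=1_000_000):
    """Total consumer spending power: wages for the employed, UBI for everyone."""
    employed = population * (1 - unemployment)
    return employed * wage + population * ubi

baseline  = aggregate_demand(0.05)              # near-full employment
automated = aggregate_demand(0.35)              # the feared 35% threshold
with_ubi  = aggregate_demand(0.35, ubi=12_000)  # robot owners fund a UBI

print(automated / baseline)   # ~0.68: roughly a third of demand gone
print(with_ubi / baseline)    # ~1.0: the UBI restores demand here
```

With these particular made-up numbers, 35% unemployment wipes out roughly a 
third of aggregate spending power and a $12,000 UBI happens to restore it; the 
point is the structure of the accounting, not the specific figures.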

BUT, without a reason to get up in the morning, I doubt that this could last 
for long.

Dystopian indeed. I know.  Under such a scenario, we really won't need those 
SEO workers because there will be fewer and fewer consumers looking for stuff 
except for free entertainment.  So Facebook should become the new paragon 
website under most search categories, but Amazon, not so much.  The Google 
search algorithms will need to be recalibrated ... oh, wait a minute... no SEO 
workers. Facebook will become the new Google. Brave new world.

Cheers

On Mon, Jun 6, 2016 at 3:22 PM, Roger Critchlow wrote:

>
> https://medium.com/utopia-for-realists/why-do-the-poor-make-such-poor-decisions-f05d84c44f1a
> was interesting, vis a vis what happens when you just give poor people
> money.
>
> -- rec --
>
> On Mon, Jun 6, 2016 at 4:54 PM, Marcus Daniels 
> wrote:
>
>> I suspect a universal basic income is a requirement for people to _not_
>> seek an idle life.If people can't count on food, shelter, and health
>> care, they probably can't engage in anything in a substantial way.On
>> the other hand, saving the people that could do substantial things (and by
>> "substantial" I mean artistic or scientific discovery or synthesis),  could
>> come at a prohibitive cost of saving those that won't.   A problem with the
>> "day jobber" approach is the narrowing of substantial things to what
>> happens to be in the interest of dominant organizations.Even in silicon
>> valley, that's a harsh narrowing of the possible.   So I would say do it to
>> make the world interesting and not just for humanitarian reasons.
>>
>> 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Roger Critchlow
https://medium.com/utopia-for-realists/why-do-the-poor-make-such-poor-decisions-f05d84c44f1a
was interesting, vis a vis what happens when you just give poor people
money.

-- rec --

On Mon, Jun 6, 2016 at 4:54 PM, Marcus Daniels  wrote:

> I suspect a universal basic income is a requirement for people to _not_
> seek an idle life.If people can't count on food, shelter, and health
> care, they probably can't engage in anything in a substantial way.On
> the other hand, saving the people that could do substantial things (and by
> "substantial" I mean artistic or scientific discovery or synthesis),  could
> come at a prohibitive cost of saving those that won't.   A problem with the
> "day jobber" approach is the narrowing of substantial things to what
> happens to be in the interest of dominant organizations.Even in silicon
> valley, that's a harsh narrowing of the possible.   So I would say do it to
> make the world interesting and not just for humanitarian reasons.
>
> -Original Message-
> From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ?
> Sent: Monday, June 06, 2016 1:36 PM
> To: The Friday Morning Applied Complexity Coffee Group 
> Subject: Re: [FRIAM] Fascinating article on how AI is driving change in
> SEO, categories of AI and the Law of Accelerating Returns
>
>
> On that note, I found this article interesting:
>
> A Universal Basic Income Is a Poor Tool to Fight Poverty
>
> http://www.nytimes.com/2016/06/01/business/economy/universal-basic-income-poverty.html?_r=0
>
> One of the interesting dynamics I've noticed is when I argue about the
> basic income with people who have day jobs (mostly venture funded, but some
> megacorps like Intel), they tend to object strongly; and when I have
> similar conversations with people who struggle on a continual basis to find
> and execute _projects_ (mostly DIY people who do a lot of freelance work
> from hardware prototyping to fixing motorcycles), they tend to be for the
> idea (if not the practicals of how to pay for it).
>
> I can't help thinking it has to do with the (somewhat false) dichotomy
> between those who think people are basically good, productive, energetic,
> useful versus those who think (most) people are basically lazy,
> unproductive, parasites.  The DIYers surround themselves with similarly
> creative people, whereas the day-job people are either themselves or
> surrounded by, people they feel don't pull their weight.  (I know I've
> often felt like a "third wheel" when working on large teams... and I end up
> having to fend for myself and forcibly squeeze some task out so that I can
> be productive.  These day-jobbers might feel similarly at various times.
> Or they're simply narcissists and don't recognize the contributions of
> their team members.)
>
> It also seems coincident with "great man" worship... The day-jobbers tend
> to put more stock in famous people (like Musk or Hawking or whoever),
> whereas the DIYers seem to be open to or tolerant of ideas (or even ways of
> life) in which they may initially see zero benefit.
>
>
> On 06/06/2016 11:24 AM, Pamela McCorduck wrote:
> >
> > Finally, and this is where my anger really boils: they sound to me like
> the worst kind of patronizing, privileged white guys imaginable. There’s no
> sense in their aggrieved messages that billions of people around the globe
> are struggling, and have lives that could be vastly improved with AI.
> Maybe it behooves them to imagine the good AI can do for those people,
> instead of stamping their feet because AI is going to upset their personal
> world. Which it will. It must be very hard to be the smartest guy on the
> block for so long, and then here comes something even smarter.
>
> --
> ☣ glen
>
> 
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College to unsubscribe
> http://redfish.com/mailman/listinfo/friam_redfish.com
> 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Marcus Daniels
I suspect a universal basic income is a requirement for people to _not_ seek an 
idle life.  If people can't count on food, shelter, and health care, they 
probably can't engage in anything in a substantial way.  On the other hand, 
saving the people that could do substantial things (and by "substantial" I mean 
artistic or scientific discovery or synthesis) could come at a prohibitive 
cost of saving those that won't.  A problem with the "day jobber" approach is 
the narrowing of substantial things to what happens to be in the interest of 
dominant organizations.  Even in silicon valley, that's a harsh narrowing of 
the possible.  So I would say do it to make the world interesting and not just 
for humanitarian reasons.

-Original Message-
From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of glen ?
Sent: Monday, June 06, 2016 1:36 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns


On that note, I found this article interesting:

A Universal Basic Income Is a Poor Tool to Fight Poverty
http://www.nytimes.com/2016/06/01/business/economy/universal-basic-income-poverty.html?_r=0

One of the interesting dynamics I've noticed is when I argue about the basic 
income with people who have day jobs (mostly venture funded, but some megacorps 
like Intel), they tend to object strongly; and when I have similar 
conversations with people who struggle on a continual basis to find and execute 
_projects_ (mostly DIY people who do a lot of freelance work from hardware 
prototyping to fixing motorcycles), they tend to be for the idea (if not the 
practicals of how to pay for it).

I can't help thinking it has to do with the (somewhat false) dichotomy between 
those who think people are basically good, productive, energetic, useful versus 
those who think (most) people are basically lazy, unproductive, parasites.  The 
DIYers surround themselves with similarly creative people, whereas the day-job 
people are either themselves or surrounded by, people they feel don't pull 
their weight.  (I know I've often felt like a "third wheel" when working on 
large teams... and I end up having to fend for myself and forcibly squeeze some 
task out so that I can be productive.  These day-jobbers might feel similarly 
at various times.  Or they're simply narcissists and don't recognize the 
contributions of their team members.)

It also seems coincident with "great man" worship... The day-jobbers tend to 
put more stock in famous people (like Musk or Hawking or whoever), whereas the 
DIYers seem to be open to or tolerant of ideas (or even ways of life) in which 
they may initially see zero benefit.


On 06/06/2016 11:24 AM, Pamela McCorduck wrote:
> 
> Finally, and this is where my anger really boils: they sound to me like the 
> worst kind of patronizing, privileged white guys imaginable. There’s no sense 
> in their aggrieved messages that billions of people around the globe are 
> struggling, and have lives that could be vastly improved with AI.  Maybe it 
> behooves them to imagine the good AI can do for those people, instead of 
> stamping their feet because AI is going to upset their personal world. Which 
> it will. It must be very hard to be the smartest guy on the block for so 
> long, and then here comes something even smarter.

--
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread glen ☣

On that note, I found this article interesting:

A Universal Basic Income Is a Poor Tool to Fight Poverty
http://www.nytimes.com/2016/06/01/business/economy/universal-basic-income-poverty.html?_r=0

One of the interesting dynamics I've noticed is when I argue about the basic 
income with people who have day jobs (mostly venture funded, but some megacorps 
like Intel), they tend to object strongly; and when I have similar 
conversations with people who struggle on a continual basis to find and execute 
_projects_ (mostly DIY people who do a lot of freelance work from hardware 
prototyping to fixing motorcycles), they tend to be for the idea (if not the 
practicals of how to pay for it).

I can't help thinking it has to do with the (somewhat false) dichotomy between 
those who think people are basically good, productive, energetic, useful versus 
those who think (most) people are basically lazy, unproductive, parasites.  The 
DIYers surround themselves with similarly creative people, whereas the day-job 
people are either themselves or surrounded by, people they feel don't pull 
their weight.  (I know I've often felt like a "third wheel" when working on 
large teams... and I end up having to fend for myself and forcibly squeeze some 
task out so that I can be productive.  These day-jobbers might feel similarly 
at various times.  Or they're simply narcissists and don't recognize the 
contributions of their team members.)

It also seems coincident with "great man" worship... The day-jobbers tend to 
put more stock in famous people (like Musk or Hawking or whoever), whereas the 
DIYers seem to be open to or tolerant of ideas (or even ways of life) in which 
they may initially see zero benefit.


On 06/06/2016 11:24 AM, Pamela McCorduck wrote:
> 
> Finally, and this is where my anger really boils: they sound to me like the 
> worst kind of patronizing, privileged white guys imaginable. There’s no sense 
> in their aggrieved messages that billions of people around the globe are 
> struggling, and have lives that could be vastly improved with AI.  Maybe it 
> behooves them to imagine the good AI can do for those people, instead of 
> stamping their feet because AI is going to upset their personal world. Which 
> it will. It must be very hard to be the smartest guy on the block for so 
> long, and then here comes something even smarter.

-- 
☣ glen



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Stephen Guerin
Hi Pamela,

While open source gives some transparency, our direction is to move toward
more distributed AI where our data is not given to a centralized authority
before the AI is applied. Rather, we think that the AI should be more out
at the edge of the network with our sensors/cameras/microphones. The
derived information from the raw data could then be shared via agents
transacting on our behalf for collective action while maximizing privacy. A
Santa Fe Approach if you will :-)

We've been using Steve Mann's term Sousveillance (in opposition to 
Surveillance) as a shorthand for this idea, along with the serverless p2p 
solutions we're calling Acequia - a more grounded social structure and 
water distribution system, in opposition to a faceless centralized Cloud, e.g. 
water vapor in the sky :-)
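
A minimal sketch of that edge/sousveillance data flow (the function names, 
readings, and threshold below are all invented for illustration; this is not 
anyone's actual stack): each node reduces its raw sensor stream to a derived 
summary locally, and only the summaries are shared for collective action.

```python
# Hypothetical sketch: raw data stays on the device; peers share only
# derived summaries.  Names and numbers are made up for illustration.

def derive_summary(raw_readings, threshold=30.0):
    """Runs ON the device: the raw stream never leaves the node."""
    return {
        "mean": sum(raw_readings) / len(raw_readings),
        "alert": max(raw_readings) > threshold,   # e.g. a heat/smoke event
    }

def collective_view(summaries):
    """Peer aggregation: acts on shared summaries, never on raw streams."""
    return {
        "nodes_alerting": sum(s["alert"] for s in summaries),
        "regional_mean": sum(s["mean"] for s in summaries) / len(summaries),
    }

node_a = derive_summary([20.1, 22.3, 35.2])   # raw readings stay on node A
node_b = derive_summary([18.0, 19.5, 21.0])   # raw readings stay on node B
print(collective_view([node_a, node_b]))
```

The privacy property falls out of the data flow: `collective_view` never sees 
`raw_readings`, only whatever per-node summary each device chose to publish.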

-S

___
stephen.gue...@simtable.com 
CEO, Simtable  http://www.simtable.com
1600 Lena St #D1, Santa Fe, NM 87505
office: (505)995-0206 mobile: (505)577-5828
twitter: @simtable

On Sun, Jun 5, 2016 at 4:04 PM, Pamela McCorduck  wrote:

> I have some grave concerns about AI being concentrated in the hands of a
> few big firms—Google, FaceBook, Amazon, and so on. Elon Musk says the
> answer is open sourcing, but I’m skeptical. That said, I’d be interested in
> hearing other people’s solutions. Then again, you may not think it’s a
> problem.
>
>
> On Jun 5, 2016, at 3:22 PM, Robert Wall  wrote:
>
> Hi Tom,
>
> Interesting article about Google and their foray [actually a Blitzkrieg,
> as they are buying up all of the brain trust in this area] into the world
> of machine learning presumably to improve the search customer experience.
> Could their efforts actually have unintended consequences for both the
> search customer and the marketing efforts of the website owners? It is
> interesting to consider. For example, for the former case, Google picking
> WebMD as the paragon website for the healthcare industry flies in the face
> of my own experience and, say, this *New York Times Magazine* article: A
> Prescription for Fear (Feb 2011).  Will this actually make WebMD the *de facto* paragon in the minds
> of the searchers?  For the latter, successful web marketing becomes
> increasingly subject to the latest Google search algorithms instead of the
> previously more expert in-house marketing departments. Of course, this is
> the nature of SEO--to game the algorithms to attract better rankings.  But,
> it seems those in-house marketing departments will need to up their game:
>
> In other ways, things are a bit harder. The field of SEO will continue to
>> become extremely technical. Analytics and big data are the order of the
>> day, and any SEO that isn’t familiar with these approaches has a lot of
>> catching up to do. Those of you who have these skills can look forward to a
>> big payday.
>
>
> Also, with respect to those charts anticipating exponential growth for AGI
> technology--even eclipsing human intelligence by mid-century--there is much
> reason to see this as overly optimistic [see, for example, Hubert
> Dreyfus' critique of Good Old-Fashioned AI: "What Computers Can't Do"].
> These charts kind of remind me of the "ultraviolet catastrophe" around the
> end of the 19th century. There are physical limitations that may well tamp
> progress and keep it to ANI.  With respect to AGI, there have been some
> pointed challenges to this "Law of Accelerating Returns."
>
> On this point, I thought this article in *AEON *titled "Creative Blocks:
> The very laws of physics imply that artificial intelligence must be
> possible. What’s holding us up? (Oct 2012)" is on point concerning the
> philosophical and epistemological road blocks.  This one, titled "Where do
> minds belong? (Mar 2016)" discusses the technological roadblocks in an insightful, highly
> speculative, but entertaining manner.
>
> Nonetheless, this whole discussion is quite intriguing, no matter your
> stance, hopes, or fears. [image: ]
>
> Cheers,
>
> Robert
>
> On Sat, Jun 4, 2016 at 4:26 PM, Tom Johnson  wrote:
>
>> See
>> http://techcrunch.com/2016/06/04/artificial-intelligence-is-changing-seo-faster-than-you-think/?ncid=tcdaily
>> 
>>
>> Among other points: "...why doing regression analysis over every site,
>> without having the context of the search result that it is in, is supremely
>> flawed."
>> TJ
>>
>> 
>> Tom Johnson
>> Institute for Analytic 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Nick Thompson
Roger,

 

Can artificial flowers learn?  

 

Are you on your boat, yet?  Beautiful day on Massachusetts bay!  See 
http://www.ssd.noaa.gov/goes/east/eaus/flash-vis.html

 

By the way, in support of your aphorism “layers of the atmosphere don’t mix” 
which I have been chewing on ever since you offered it:  look on the extreme 
right of the satellite loop to see the upper half of the atmosphere sliding out 
over the cold maritime layer without any interaction whatsoever.  Cool!  

 

Nick 

 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

  
http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Roger Critchlow
Sent: Sunday, June 05, 2016 7:03 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

 

"Artificial intelligence has the same relation to intelligence as artificial 
flowers have to flowers."  -- David Parnas

Which is even funnier now than it was in the '70s or '80s when it was first 
said, because artificial flowers have become more and more amazing over the decades.

 

-- rec --

 

On Sun, Jun 5, 2016 at 6:09 PM, Tom Johnson  > wrote:

Robert:

Thanks for the pointers at the end of your remarks to the interesting articles. 
 I wonder, too, if someone could come up with parallel "paragon websites."  
That is, here's WebMD.  and displayed alongside the "best" critics or 
alternatives to that site.

 

TJ

Tom Johnson
Institute for Analytic Journalism   -- Santa Fe, NM USA
505.577.6482  (c)
505.473.9646  (h)
Society of Professional Journalists - Region 9 Director
Check out It's The People's Data
http://www.jtjohnson.com
t...@jtjohnson.com


 

On Sun, Jun 5, 2016 at 3:22 PM, Robert Wall  > wrote:

Hi Tom,

 

Interesting article about Google and their foray [actually a Blitzkrieg, as 
they are buying up all of the brain trust in this area] into the world of 
machine learning presumably to improve the search customer experience.  Could 
their efforts actually have unintended consequences for both the search 
customer and the marketing efforts of the website owners? It is interesting to 
consider. For example, for the former case, Google picking WebMD as the paragon 
website for the healthcare industry flies in the face of my own experience and, 
say, this New York Times Magazine article: A Prescription for Fear (Feb 2011). 
Will this actually make WebMD the de facto paragon in the minds 
of the searchers?  For the latter, successful web marketing becomes 
increasingly subject to the latest Google search algorithms instead of the 
previously more expert in-house marketing departments. Of course, this is the 
nature of SEO--to game the algorithms to attract better rankings.  But, it 
seems those in-house marketing departments will need to up their game:

 

In other ways, things are a bit harder. The field of SEO will continue to 
become extremely technical. Analytics and big data are the order of the day, 
and any SEO that isn’t familiar with these approaches has a lot of catching up 
to do. Those of you who have these skills can look forward to a big payday.

 

Also, with respect to those charts anticipating exponential growth for AGI 
technology--even eclipsing human intelligence by mid-century--there is much 
reason to see this as overly optimistic [see, for example, Hubert Dreyfus' 
critique of Good Old-Fashioned AI: "What Computers Can't Do"].  These charts kind 
of remind me of the "ultraviolet catastrophe" around the end of the 19th 
century. There are physical limitations that may well tamp progress and keep it 
to ANI.  With respect to AGI, there have been some pointed challenges to this 
"Law of Accelerating Returns."

 

On this point, I thought this article in AEON titled "Creative Blocks: The very 
laws of physics imply that artificial intelligence must be possible. What’s 
holding us up? (Oct 2012)" is on point concerning the philosophical and 
epistemological road blocks.  This one, titled "Where do minds belong?
Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Pamela McCorduck
The field of AI itself has had a major project underway for a couple of years 
to address these issues. It’s called AI 100, and is funded by one of the 
wealthy founders of the field, Eric Horvitz at Microsoft. The headquarters of 
this project are at Stanford University.

It is funded for a century. Yes, that’s a century, on the grounds that whatever 
is decided in five years or ten years will need to be revisited five years or 
ten years on, again and again. Its staff consists of leading members of the 
field (who really know what the field can and cannot do), and they will be 
joined by ethicists, economists, philosophers and others (maybe already are) as 
the project moves along. (Their first report was rather scathing about Ray 
Kurzweil and the singularity, but that’s another issue.)

Musk, Hawking et al., are very good at getting publicity, but their first great 
solution last summer was to send a petition to the U.N., which they did with 
great fanfare. Of course nothing happened, and nothing could. This is the level 
of naivete (and sorry, self-importance) these men exhibit. 

I also find them more than a bit hypocritical. Musk is not giving up his 
smartphone, and Hawking concedes that he loves what AI has done for him 
personally (in terms of vocal communication) but maybe others shouldn’t be 
allowed to handle this…

Finally, and this is where my anger really boils: they sound to me like the 
worst kind of patronizing, privileged white guys imaginable. There’s no sense 
in their aggrieved messages that billions of people around the globe are 
struggling, and have lives that could be vastly improved with AI.  Maybe it 
behooves them to imagine the good AI can do for those people, instead of 
stamping their feet because AI is going to upset their personal world. Which it 
will. It must be very hard to be the smartest guy on the block for so long, and 
then here comes something even smarter.

Pamela


> On Jun 6, 2016, at 11:42 AM, Edward Angel  wrote:
> 
> There is a large group of distinguished people including Elon Musk, Stephen 
> Hawking, Bill Joy and Martin Rees, who believe that AI is an existential 
> threat and the probability of the human race surviving another 100 years is 
> less than 50/50.  Stephen Hawking has said he has no idea what to do about it. 
> Bill Joy’s (non) solution is better ethical education for workers in the 
> area. I can’t see how open source will prevent the dangers they worry about. 
> Martin Rees has an Institute at Cambridge that worries about these things. 
> 
> Ed
> ___
> 
> Ed Angel
> 
> Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
> Professor Emeritus of Computer Science, University of New Mexico
> 
> 1017 Sierra Pinon
> Santa Fe, NM 87501
> 505-984-0136 (home)   an...@cs.unm.edu 
> 
> 505-453-4944 (cell)   http://www.cs.unm.edu/~angel 
> 
> 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Marcus Daniels
Nah, we’re just a medium for representing knowledge.   Not obviously a very 
efficient one, either.   I mean, wasting all that time in school, only to 
forget much of it and then hopefully become a professional expert in some tiny 
area.   And a lot of people won’t even accomplish that, but nonetheless 
participate in filling the atmosphere with CO2 and CH4 and using up vast 
fossil fuel reserves.  After a few decades comes retirement, and much of that 
expertise is lost to society.   It’s all quite wasteful.  Something better 
sounds like a good idea.  It’s not extinction, it’s evolution.

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Edward Angel
Sent: Monday, June 06, 2016 11:42 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Fascinating article on how AI is driving change in SEO, 
categories of AI and the Law of Accelerating Returns

There is a large group of distinguished people including Elon Musk, Stephen 
Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat 
and the probability of the human race surviving another 100 years is less than 
50/50.  Stephen Hawking has said he has no idea what to do about it. Bill Joy’s 
(non) solution is better ethical education for workers in the area. I can’t see 
how open source will prevent the dangers they worry about. Martin Rees has an 
Institute at Cambridge that worries about these things.

Ed
___

Ed Angel
Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico

1017 Sierra Pinon
Santa Fe, NM 87501
505-984-0136 (home) 
an...@cs.unm.edu
505-453-4944 (cell) 
http://www.cs.unm.edu/~angel


Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread glen ☣

Well, my interpretation of Pamela's concern would have more to do with 
[bio]diversity than it does some form of naive extinction threat.  In previous 
posts, I've outlined my skepticism that (complicated) open source is any less 
opaque to understanding than proprietary sources because the skills and effort 
it takes to suss out the content can be prohibitive.  Regardless, it's true 
that open sourcing facilitates copying and forking (with or without 
understanding).  And that sort of thing definitely contributes to _diversity_.

So, if diversity in AI might make for a more robust system (including 
interaction with the already somewhat diverse naturally intelligent systems), 
then there's a clear path by which open source would help prevent an 
extinction event.

The people who believe in things like "group think" should predictably 
recognize that argument.

On 06/06/2016 10:42 AM, Edward Angel wrote:
> There is a large group of distinguished people including Elon Musk, Stephen 
> Hawking, Bill Joy and Martin Rees, who believe that AI is an existential 
> threat and the probability of the human race surviving another 100 years is 
> less than 50/50.  Stephen Hawking has said he has no idea what to do about it. 
> Bill Joy’s (non) solution is better ethical education for workers in the 
> area. I can’t see how open source will prevent the dangers they worry about. 
> Martin Rees has an Institute at Cambridge that worries about these things. 
> 
> Ed


-- 
☣ glen


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-06 Thread Edward Angel
There is a large group of distinguished people including Elon Musk, Stephen 
Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat 
and the probability of the human race surviving another 100 years is less than 
50/50.  Stephen Hawking has said he has no idea what to do about it. Bill Joy’s 
(non) solution is better ethical education for workers in the area. I can’t see 
how open source will prevent the dangers they worry about. Martin Rees has an 
Institute at Cambridge that worries about these things. 

Ed
___

Ed Angel

Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico

1017 Sierra Pinon
Santa Fe, NM 87501
505-984-0136 (home) an...@cs.unm.edu 

505-453-4944 (cell) http://www.cs.unm.edu/~angel 


> On Jun 5, 2016, at 4:04 PM, Pamela McCorduck  wrote:
> 
> I have some grave concerns about AI being concentrated in the hands of a few 
> big firms—Google, FaceBook, Amazon, and so on. Elon Musk says the answer is 
> open sourcing, but I’m skeptical. That said, I’d be interested in hearing 
> other people’s solutions. Then again, you may not think it’s a problem.
> 
> 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-05 Thread Pamela McCorduck
Parnas was on the faculty with my husband at CMU back in the day. He was known 
as “the department’s conscience.” Except, Joe said, how can you be considered 
the conscience when you’re against everything? Everything whatsoever?

He was eventually let go, went to, uh, Dortmund as I recall, then to Canada 
(or maybe the other way around). He was the compleat contrarian. That doesn’t 
mean he was always wrong. He was right about Brilliant Pebbles, or Star Wars, 
or whatever Reagan’s brainchild was: the software had to work right the very 
first time. Wasn’t going to happen, he said, and he was right. But basically, 
he was a chronic malcontent.


> On Jun 5, 2016, at 5:03 PM, Roger Critchlow  wrote:
> 
> "Artificial intelligence has the same relation to intelligence as artificial 
> flowers have to flowers."  -- David Parnas
> Which is even funnier now than it was in 70's or 80's when first said, 
> because artificial flowers have become more and more amazing over the decades.
> 
> -- rec --
> 

Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-05 Thread Roger Critchlow
"Artificial intelligence has the same relation to intelligence as
artificial flowers have to flowers."  -- David Parnas
Which is even funnier now than it was in 70's or 80's when first
said, because artificial flowers have become more and more amazing over the
decades.

-- rec --


Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-05 Thread Tom Johnson
Robert:
Thanks for the pointers at the end of your remarks to the interesting
articles.  I wonder, too, if someone could come up with parallel "paragon
websites."  That is, here's WebMD.  and displayed alongside the "best"
critics or alternatives to that site.

TJ






Tom Johnson
Institute for Analytic Journalism -- Santa Fe, NM USA
505.577.6482 (c)   505.473.9646 (h)
Society of Professional Journalists - Region 9 Director
Check out It's The People's Data
http://www.jtjohnson.com   t...@jtjohnson.com



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-05 Thread Pamela McCorduck
I have some grave concerns about AI being concentrated in the hands of a few 
big firms—Google, FaceBook, Amazon, and so on. Elon Musk says the answer is 
open sourcing, but I’m skeptical. That said, I’d be interested in hearing other 
people’s solutions. Then again, you may not think it’s a problem.



Re: [FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-05 Thread Robert Wall
Hi Tom,

Interesting article about Google and their foray [actually a Blitzkrieg, as
they are buying up all the brain trust in this area] into the world of
machine learning, presumably to improve the search customer experience.
Could their efforts actually have unintended consequences for both the
search customer and the marketing efforts of website owners? It is
interesting to consider. For example, in the former case, Google picking
WebMD as the paragon website for the healthcare industry flies in the face
of my own experience and, say, this *New York Times Magazine* article: A
Prescription for Fear (Feb 2011).  Will this actually make WebMD the *de
facto* paragon in the minds of searchers?  In the latter case, successful
web marketing becomes increasingly subject to the latest Google search
algorithms instead of the previously more expert in-house marketing
departments. Of course, this is the nature of SEO--to game the algorithms
to attract better rankings.  But it seems those in-house marketing
departments will need to up their game:

> In other ways, things are a bit harder. The field of SEO will continue to
> become extremely technical. Analytics and big data are the order of the
> day, and any SEO that isn’t familiar with these approaches has a lot of
> catching up to do. Those of you who have these skills can look forward to
> a big payday.

Also, with respect to those charts anticipating exponential growth for AGI
technology--even eclipsing human intelligence by mid-century--there is good
reason to see this as overly optimistic [see, for example, Hubert Dreyfus'
critique of Good Old Fashioned AI: "What Computers Can't Do"].  These
charts remind me of the "ultraviolet catastrophe" around the end of the
19th century. There are physical limitations that may well tamp down
progress and keep it to ANI.  With respect to AGI, there have been some
pointed challenges to this "Law of Accelerating Returns."

On this point, I thought this article in *AEON*, titled "Creative Blocks:
The very laws of physics imply that artificial intelligence must be
possible. What’s holding us up?" (Oct 2012), is on point concerning the
philosophical and epistemological roadblocks.  Another, titled "Where do
minds belong?" (Mar 2016), discusses the technological roadblocks in an
insightful, highly speculative, but entertaining manner.

Nonetheless, this whole discussion is quite intriguing, no matter your
stance, hopes, or fears.

Cheers,

Robert



[FRIAM] Fascinating article on how AI is driving change in SEO, categories of AI and the Law of Accelerating Returns

2016-06-04 Thread Tom Johnson
See
http://techcrunch.com/2016/06/04/artificial-intelligence-is-changing-seo-faster-than-you-think/?ncid=tcdaily


Among other points: "...why doing regression analysis over every site,
without having the context of the search result that it is in, is supremely
flawed."
TJ


Tom Johnson
Institute for Analytic Journalism -- Santa Fe, NM USA
505.577.6482 (c)   505.473.9646 (h)
Society of Professional Journalists - Region 9 Director
Check out It's The People's Data
http://www.jtjohnson.com   t...@jtjohnson.com




