Re: [Vo]:Skynet Advances

2013-03-23 Thread Terry Blanton
Thinly disguised as Rapyuta, or the RoboEarth Cloud Engine, Skynet has arrived:

http://www.roboearth.org



Re: [Vo]:Skynet Advances

2013-03-09 Thread Terry Blanton
I think the thing that really sets me off about this robotic cheetah
is not that it could outrun me, but that it seems to be
running backward with NO HEAD!

http://www.youtube.com/watch?v=chPanW0QWhA&feature=player_embedded



Re: [Vo]:Skynet Advances

2013-02-06 Thread Terry Blanton
The Rise of the Drones

http://www.pbs.org/wgbh/nova/military/rise-of-the-drones.html

An upcoming Nova program featuring, among other amazing things, Argus:

http://www.youtube.com/watch?v=0p4BQ1XzwDg



Re: [Vo]:Skynet Advances

2013-02-02 Thread Terry Blanton
More from Boston Dynamics:

http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning

Big Dog actually does better on the ice than I.

Here's where it gets scary:

"They’ve 3-D printed an advanced quadruped robot called Aracna, to
further examine evolved gaits. The next step is to evolve larger, more
modular brains that will hopefully approach natural brains in
complexity opening up the possibility of creating an entirely new
breed of robots."

Did they have to name it after a spider?  I hate spiders.  Except
those from the sea, with drawn butter.

Self-replication is just around the corner.



Re: [Vo]:Skynet Advances

2013-01-23 Thread Rob Dingemans

Hi,

On 24-1-2013 0:56, Terry Blanton wrote:

On Wed, Jan 23, 2013 at 6:26 PM, Jones Beene  wrote:


And why stop at one?

There should be tons of thetans Cruising around.  ;-)


And there are otherwise plenty of birds with Grease on their wings ;-)

Kind regards,

Rob



Re: [Vo]:Skynet Advances

2013-01-23 Thread Terry Blanton
On Wed, Jan 23, 2013 at 6:26 PM, Jones Beene  wrote:

> And why stop at one?

There should be tons of thetans Cruising around.  ;-)



RE: [Vo]:Skynet Advances

2013-01-23 Thread Jones Beene
-Original Message-
From: Rob Dingemans 

Terry Blanton wrote:
> Who could have predicted it would be Kurzweil and Google:

>
http://www.technologyreview.com/view/510121/ray-kurzweil-plans-to-create-a-mind-at-google-and-have-it-serve-you/

Ok, I'm now just playing Hal's advocate: "and who is going to give it a 
soul?"

Kind regards, Rob


Well, methinks there would be no shortage of "soul donations" - if you
looked in the right place for volunteers.

Check around any assisted living facility for old scientists ... many of
whom would gladly assume the risk of a technological glitch against the
prospect of staying around for an extended consulting gig :)

This is the "All of Me" scenario ... deriving from the Steve Martin, Lily
Tomlin movie of that name ... but with a bioengineered host - a wet
computer tied into the control module of the A.I. ... kinda like the neural
gel packs on the USS Voyager.

In fact, Kurzweil would probably put his name at the top of the list. 

And why stop at one? No reason that the Google version could not have
hundreds of carefully chosen 'soul mates', since there is a lot to watch
over in cyber space. 

That's the kind of immortality you can stick a fork into...




Re: [Vo]:Skynet Advances

2013-01-23 Thread Rob Dingemans

Hi,

On 22-1-2013 16:57, Terry Blanton wrote:

Who could have predicted it would be Kurzweil and Google:

http://www.technologyreview.com/view/510121/ray-kurzweil-plans-to-create-a-mind-at-google-and-have-it-serve-you/

?


Ok, I'm now just playing Hal's advocate: "and who is going to give it a 
soul?"


Kind regards,

Rob



Re: [Vo]:Skynet Advances

2013-01-22 Thread Terry Blanton
Who could have predicted it would be Kurzweil and Google:

http://www.technologyreview.com/view/510121/ray-kurzweil-plans-to-create-a-mind-at-google-and-have-it-serve-you/

?



Re: [Vo]:Skynet Advances

2013-01-15 Thread Terry Blanton
On Tue, Jan 15, 2013 at 8:55 AM, Terry Blanton  wrote:

> It was an intriguing read for MacDorman, who was building
> hyperrealistic androids at the time. It warned that when artificial
> beings have a close human likeness, people will be repulsed. He and
> his colleagues worked up a quick English translation, dubbing the
> phenomenon the "uncanny valley" (see diagram).

Hyperlink to diagram:

http://www.newscientist.com/data/images/archive/2899/28992101.jpg



Re: [Vo]:Skynet Advances

2013-01-15 Thread Terry Blanton
This article is for private use of this mailing list only.  I have
included the entire article so that you do not have to register to
read it.  Registration is free, however, if you wish to do so.

http://www.newscientist.com/article/mg21728992.100-freaky-feeling-why-androids-make-us-uneasy.html

Freaky feeling: Why androids make us uneasy

15 January 2013 by Joe Kloc
Magazine issue 2899.

EIGHT years ago, Karl MacDorman was working late at Osaka University
in Japan when, around 1 am, his fax machine sputtered into life. Out
came a 35-year-old essay, written in Japanese, sent by a colleague.

It was an intriguing read for MacDorman, who was building
hyperrealistic androids at the time. It warned that when artificial
beings have a close human likeness, people will be repulsed. He and
his colleagues worked up a quick English translation, dubbing the
phenomenon the "uncanny valley" (see diagram).

They assumed their rough draft of this obscure essay would only
circulate among roboticists, but it caught the popular imagination.
Journalists used the uncanny valley to explain the lacklustre box
office performance of movies like Polar Express, in which audiences
were creeped out by the computer-generated stars. It was also blamed
for the failure of humanoid robots to catch on. Finding an explanation
for why the uncanny valley occurs, it seemed, would be worth millions
of dollars to Hollywood and the robotics industry. Yet when
researchers began to study the phenomenon, citing MacDorman's
translation as the definitive text, answers eluded them.

MacDorman now believes we have been looking at the uncanny valley too
simplistically, and he partly blames his own rushed translation. He
and others are converging on an explanation for what's actually going
on in the brain when you get that uncanny feeling. If correct, the
phenomenon is more complex than anyone realised, encompassing not only
our relationship with new technologies but also with each other.

While it's well known that abnormal facial and body features can make
people shun others, some researchers believe that human-like creations
unnerve us in a specific way. The essay that MacDorman read was
published in 1970 by roboticist Masahiro Mori. Entitled "Bukimi No
Tani" - or The Valley of Eeriness - it proposed that humanoid robots
can provoke a uniquely uncomfortable emotion that their mechanical
cousins do not.

For decades, few outside Japan were aware of Mori's theory. After
MacDorman's translation brought it to wider attention, his ideas were
extended to computer-generated human figures, and research began in
earnest into the uncanny valley's possible causes.

MacDorman's first paper on the subject examined an idea proposed by
Mori: that we feel uncomfortable because almost-human robots appear
dead, and thus remind us of our own mortality. To test this, MacDorman
used something called terror management theory. This suggests that
reminders of death govern much of our behaviour - including making us
cling more strongly to aspects of our own world view, such as
religious belief.

So MacDorman asked volunteers to fill in a questionnaire about their
world views after showing them photos of human-like robots. Sure
enough, those who had seen the robots were more defensive of their
view of the world than those who had not, hinting that the robots were
reminding people of death.

This explanation makes intuitive sense, given that some animated
characters and robots appear corpse-like. But even at the time it was
clear to MacDorman that the theory had its limits: reminding someone
of their own demise does not, on its own, elicit the uncanny response
people describe. A gravestone reminds us of death, for example, but it
doesn't make us feel the same specific emotion.

Competing theories soon emerged. Some researchers blamed our
evolutionary roots; we have always been primed to shun unattractive
mates, after all. Others blamed the established idea that we evolved
feelings of disgust to protect us from pathogens. Christian Keysers of
the University of Groningen in the Netherlands pointed out that
irregularities in an almost-human form make it look sick. Since
uncanny robots look very similar to us, he argued, we may
subconsciously believe we are at a higher risk of catching a disease
from them.

Again, both these theories are incomplete: many disgusting and
unattractive things do not, by themselves, elicit that specific
uncanny feeling. We know that somebody sneezing on the subway exposes
us to potentially dangerous pathogens, yet a subway ride is not an
uncanny experience. "There are too many theories," says MacDorman.
"The field is getting messy, further away from science."

The first clue there was something more complex going on came when
neuroscientists began to explore what might be happening in the brain.
In 2007, Thierry Chaminade of the Advanced Telecommunications Research
Institute in Kyoto, Japan, and colleagues presented people with a
series of computer-gener…

Re: [Vo]:Skynet Advances

2013-01-15 Thread Terry Blanton
On Mon, Jan 14, 2013 at 10:23 PM, Eric Walker  wrote:

> But
> then it occurs to you that he looks very much like a little kid pretending
> to have all of those emotions, which is impressive in its own way.

And, if you go to the TED site mentioned in the video caption, you
will see that they are programming these robots to form expressions
based on their observations of the facial expressions of their human
partner!



Re: [Vo]:Skynet Advances

2013-01-14 Thread Eric Walker
On Mon, Jan 14, 2013 at 10:47 AM, Terry Blanton  wrote:

> If the stumbling bovine android (ice cream sandwich) did not make you
> flinch, this certainly will:
>
> http://www.youtube.com/watch?v=knRyDcnUc4U&feature=youtu.be


That is very interesting.  Quite a wide range of emotions.

On one level, it is not all that impressive.  There seems to be something
fake in the cyborg kid's facial expressions, like he's only acting.  But
then it occurs to you that he looks very much like a little kid pretending
to have all of those emotions, which is impressive in its own way.

Eric


Re: [Vo]:Skynet Advances

2013-01-14 Thread Terry Blanton
If the stumbling bovine android (ice cream sandwich) did not make you
flinch, this certainly will:

http://www.youtube.com/watch?v=knRyDcnUc4U&feature=youtu.be

On Sat, Dec 22, 2012 at 12:40 PM, Terry Blanton  wrote:
> LS3 makes me feel creepier as it advances.  Falls down and recovers:
>
> http://www.youtube.com/watch?feature=player_embedded&v=hNUeSUXOc-w
>
> And if that didn't peg your creep-o-meter:
>
> http://www.geekologie.com/2012/12/nope-robot-with-human-skeleton-and-muscl.php
>



Re: [Vo]:Skynet Advances

2012-12-24 Thread Terry Blanton
On Mon, Dec 24, 2012 at 2:10 PM, David Roberson  wrote:
> That thing reminds me of a crab that can fly.  It is impressive that it
> seems to be so stable.

It is actually a chimera, combining two different products and their controls.



Re: [Vo]:Skynet Advances

2012-12-24 Thread David Roberson
That thing reminds me of a crab that can fly.  It is impressive that it seems 
to be so stable.


Dave



-Original Message-
From: Terry Blanton 
To: vortex-l 
Sent: Mon, Dec 24, 2012 11:02 am
Subject: Re: [Vo]:Skynet Advances


This one looks like a big flying bug and it takes two humans to control it:

http://www.youtube.com/watch?feature=player_embedded&v=yRvsm1W-5Ck


 


Re: [Vo]:Skynet Advances

2012-12-24 Thread Terry Blanton
This one looks like a big flying bug and it takes two humans to control it:

http://www.youtube.com/watch?feature=player_embedded&v=yRvsm1W-5Ck



Re: [Vo]:Skynet Advances

2012-12-23 Thread Harry Veeder
On Sun, Dec 23, 2012 at 8:55 PM, Abd ul-Rahman Lomax
 wrote:
> At 03:45 PM 12/22/2012, Harry Veeder wrote:
>
>
>> yeah, you're right, a leash works on beings with feelings...this thing
>> doesn't feel.
>
>
> It could easily be made to feel a leash. But you also want autonomy. I.e.,
> the leader, even if using a leash, which would be connected to a sensor that
> detects the leash pulling and how it pulls, can't be involved with every
> footstep.

By feelings, I mean it doesn't experience emotions or desires that
might cause it to wander. Over the centuries a leash has proven
an effective means of directing an animal's spirit.

If a robot fails to perform according to plan, the designers and/or
user must locate the cause of failure in their own ignorance.
They can never claim a failure is a consequence of the robot having
plans diverging from their own.


Harry



Re: [Vo]:Skynet Advances

2012-12-23 Thread Abd ul-Rahman Lomax

At 04:25 PM 12/22/2012, Jed Rothwell wrote:
Imagine having a cold fusion powered version of one of these things 
coming after you in the woods. Inexorably. What a nightmare! Worse 
than a real dog.


Well, not yet. Whack that rotating thing on its face with a big 
stick. But, yes, eventually this thing will be designed to be not 
quite so fragile.


(Actually, I don't know that the thing on the face really does 
anything. It just looks like it does.) 



Re: [Vo]:Skynet Advances

2012-12-23 Thread Abd ul-Rahman Lomax

At 03:45 PM 12/22/2012, Harry Veeder wrote:



yeah, you're right, a leash works on beings with feelings...this thing
doesn't feel.


It could easily be made to feel a leash. But you also want autonomy.
I.e., the leader, even if using a leash, which would be connected to
a sensor that detects the leash pulling and how it pulls, can't be
involved with every footstep.


"Follow close" was actually quite an interesting test. It looks like 
it has voice recognition software installed. The terms have been 
programmed, I assume, but I can ask Siri for "Directions to XXX's 
house," where XXX is in my address book, and it ("she," of the 
"obsequious voice") works just fine, telling me exactly where to turn, etc. 



Re: [Vo]:Skynet Advances

2012-12-23 Thread Abd ul-Rahman Lomax

At 01:34 PM 12/22/2012, Jed Rothwell wrote:

Harry Veeder <hveeder...@gmail.com> wrote:

If it responded to input from a leash it would not have to be so
navigationally smart.


I think the point is: you can tell it to go from point A to point B 
in a battlefield, and it goes by itself. Leading it on a leash would 
defeat the purpose.


It is creepy!


Yeah, but it could be extremely useful. Leading it on a leash would 
not only be inferior, but would distract the leader from being aware 
of battlefield threats. Dangerous. This looks very much like a 
prototype, cobbled together in some ways. It's a "proof of concept,"
and it works for that. 



Re: [Vo]:Skynet Advances

2012-12-22 Thread Eric Walker
On Sat, Dec 22, 2012 at 9:40 AM, Terry Blanton  wrote:

> LS3 makes me feel creepier as it advances.  Falls down and recovers:
>
> http://www.youtube.com/watch?feature=player_embedded&v=hNUeSUXOc-w


It reminded me of a disoriented wildebeest with loud servos attached to it.
Five or ten iterations from now, it could be quiet and formidable.

Eric


Re: [Vo]:Skynet Advances

2012-12-22 Thread Jed Rothwell
Vorl Bek  wrote:


> A blot on his memory to have cold-bloodedly killed harmless animals
> in pursuit of a mere attempt to reach the south pole, animals which
> were not only harmless but loyal and helpful to him and his crew.
>
> Truly disgraceful.
>

That is what newspaper readers said in 1912, as they reached for a second
helping of ham and eggs.

You have to realize that for polar explorers, or soldiers, Inuit or
Mongols, dogs, horses and other animals are expendable. You use them to
carry supplies or you eat them, as needed. You can't survive in these
places with any other attitude. Furthermore, dogs are no more innocent or
helpful than cows, pigs or the other animals we eat. Unless you are
a vegetarian you can't criticize.

- Jed


Re: [Vo]:Skynet Advances

2012-12-22 Thread Vorl Bek
Jed Rothwell  wrote:

> Amundsen
> to reach the south pole. He was an excellent explorer, and very methodical.
> He got there and back on schedule without undue risk. But the public never
> felt good about him because his plan involved shooting dogs along the way,
> and feeding them to other dogs.

Outrageous. He should have been punished instead of feted. 

A blot on his memory to have cold-bloodedly killed harmless animals
in pursuit of a mere attempt to reach the south pole, animals which
were not only harmless but loyal and helpful to him and his crew.

Truly disgraceful.



Re: [Vo]:Skynet Advances

2012-12-22 Thread Jed Rothwell
Daniel Rocha  wrote:

The battery that powers the terminators lasts decades and it is quite small.
>

Yes. Well, it is fictional, too. But that is the potential of cold fusion.

This robot sounds like it is powered with gasoline ICEs. Range must be
limited.

A real horse is self-powered in much of the world. It eats grass. However,
a horse used in war needs to be fed oats or other food crops. It cannot
forage enough. In the U.S. east coast during the Civil War, in 1864 and 65,
Confederate horses were in bad shape for lack of food.

Ancient armies such as Alexander's had sharp limits to the distance they could
advance across barren or desert areas. Each horse or other pack animal had
to carry its own food as well as the goods it was carrying. There were
complex schemes and mathematical problems worked out for staging
supplies in the desert, or for starting off with large numbers of animals and
killing off some as you went. This was the scheme used by Amundsen
to reach the south pole. He was an excellent explorer, and very methodical.
He got there and back on schedule without undue risk. But the public never
felt good about him because his plan involved shooting dogs along the way,
and feeding them to other dogs.

- Jed


Re: [Vo]:Skynet Advances

2012-12-22 Thread Daniel Rocha
The battery that powers the terminators lasts decades and it is quite small.


2012/12/22 Jed Rothwell 

> Imagine having a cold fusion powered version of one of these things coming
> after you in the woods. Inexorably. What a nightmare! Worse than a real dog.
>
> - Jed
>
>



-- 
Daniel Rocha - RJ
danieldi...@gmail.com


Re: [Vo]:Skynet Advances

2012-12-22 Thread Jed Rothwell
Imagine having a cold fusion powered version of one of these things coming
after you in the woods. Inexorably. What a nightmare! Worse than a real dog.

- Jed


Re: [Vo]:Skynet Advances

2012-12-22 Thread Terry Blanton
On Sat, Dec 22, 2012 at 3:55 PM, Harry Veeder  wrote:
> early Big Dog quadruped robot testing ;-)
>
> http://www.youtube.com/watch?v=mXI4WWhPn-U&feature=endscreen

Same company, Massive Dynamic, er, rather Boston Dynamics.  (Not run
by Nina Sharp.)

http://www.bostondynamics.com/

not

http://www.massivedynamic.com/



Re: [Vo]:Skynet Advances

2012-12-22 Thread Harry Veeder
early Big Dog quadruped robot testing ;-)

http://www.youtube.com/watch?v=mXI4WWhPn-U&feature=endscreen


harry



Re: [Vo]:Skynet Advances

2012-12-22 Thread Harry Veeder
On Sat, Dec 22, 2012 at 3:31 PM, Jed Rothwell  wrote:
> Harry Veeder  wrote:
>
>>
>> > I think the point is: you can tell it to go from point A to point B in a
>> > battlefield, and it goes by itself. Leading it on a leash would defeat
>> > the
>> > purpose.
>>
>>
>> Maybe that is the long term goal, but in this video they have the
>> robot following someone around.
>
>
> That is another useful skill in war. The two amount to the same thing from a
> robotics point of view. Autonomous operation in both cases. When you order
> it from A to B the goal is fixed. When you order it to follow, the starting
> point A is fixed, and B keeps changing. You can see how it works in the
> right hand window of the robot's mapping operation. You can see it select
> and then modify a path, as it bumps into trees and whatnot.
>

yeah, you're right, a leash works on beings with feelings...this thing
doesn't feel.

Harry



Re: [Vo]:Skynet Advances

2012-12-22 Thread Jed Rothwell
Harry Veeder  wrote:


> > I think the point is: you can tell it to go from point A to point B in a
> > battlefield, and it goes by itself. Leading it on a leash would defeat
> the
> > purpose.
>
>
> Maybe that is the long term goal, but in this video they have the
> robot following someone around.
>

That is another useful skill in war. The two amount to the same thing from
a robotics point of view. Autonomous operation in both cases. When you
order it from A to B the goal is fixed. When you order it to follow, the
starting point A is fixed, and B keeps changing. You can see how it works
in the right hand window of the robot's mapping operation. You can see it
select and then modify a path, as it bumps into trees and whatnot.
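Jed's point - that "go from A to B" and "follow" are the same problem, with the goal either fixed or moving - can be sketched in a few lines of Python. This is a toy grid planner for illustration only, not Boston Dynamics' actual software (which is not public): the robot replans from scratch every cycle, so obstacles and a moving goal are handled the same way.

```python
from collections import deque

def plan(grid, start, goal):
    """Shortest path on a 4-connected grid via BFS; 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # walk back to recover the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None

def step_toward(grid, pos, goal):
    """Replan every cycle and take one step; the goal may have moved."""
    path = plan(grid, pos, goal)
    return path[1] if path and len(path) > 1 else pos

grid = [[0, 0, 0],
        [1, 1, 0],      # a wall the robot must route around
        [0, 0, 0]]
pos, goal = (0, 0), (2, 0)
while pos != goal:
    pos = step_toward(grid, pos, goal)
print(pos)  # (2, 0)
```

In "follow" mode the only change is that `goal` is reassigned on each pass through the loop before replanning, so B keeps changing while A stays wherever the robot currently is.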

- Jed


Re: [Vo]:Skynet Advances

2012-12-22 Thread Harry Veeder
On Sat, Dec 22, 2012 at 1:34 PM, Jed Rothwell  wrote:
> Harry Veeder  wrote:
>
>>
>> If it responded to input from a leash it would not have to be so
>> navigationally smart.
>
>
> I think the point is: you can tell it to go from point A to point B in a
> battlefield, and it goes by itself. Leading it on a leash would defeat the
> purpose.


Maybe that is the long term goal, but in this video they have the
robot following someone around.

"LS3 follow tight"

harry



Re: [Vo]:Skynet Advances

2012-12-22 Thread Ruby


What's up with all those shipping containers with windows at the end?

Is that where they keep all the old iRobots?



On 12/22/12 9:40 AM, Terry Blanton wrote:

LS3 makes me feel creepier as it advances.  Falls down and recovers:

http://www.youtube.com/watch?feature=player_embedded&v=hNUeSUXOc-w

And if that didn't peg your creep-o-meter:

http://www.geekologie.com/2012/12/nope-robot-with-human-skeleton-and-muscl.php







--
Ruby Carat
r...@coldfusionnow.org 
United States 1-707-616-4894
Skype ruby-carat
www.coldfusionnow.org 



Re: [Vo]:Skynet Advances

2012-12-22 Thread Jed Rothwell
Harry Veeder  wrote:


> If it responded to input from a leash it would not have to be so
> navigationally smart.
>

I think the point is: you can tell it to go from point A to point B in a
battlefield, and it goes by itself. Leading it on a leash would defeat the
purpose.

It is creepy!

- Jed


Re: [Vo]:Skynet Advances

2012-12-22 Thread Harry Veeder
On Sat, Dec 22, 2012 at 12:40 PM, Terry Blanton  wrote:
> LS3 makes me feel creepier as it advances.  Falls down and recovers:
>
> http://www.youtube.com/watch?feature=player_embedded&v=hNUeSUXOc-w


If it responded to input from a leash it would not have to be so
navigationally smart.

Harry


> And if that didn't peg your creep-o-meter:
>
> http://www.geekologie.com/2012/12/nope-robot-with-human-skeleton-and-muscl.php
>



RE: [Vo]:Skynet Advances - Autonomous Drone

2012-01-28 Thread Robert Leguillon
Terry is dead-on. The autonomous Unmanned Combat Air Vehicles (UCAVs) can take
off by themselves, fly a grid pattern looking for targets, ask "permission to
engage" when they find targets, then return home and land themselves.

It's been seven years since Boeing demonstrated autonomous vehicle
decision-making.

According to Wikipedia:
" On February 4, 2005, on their 50th flight, the two X-45As took off into a 
patrol pattern and were then alerted to the presence of a target. The X-45As 
then autonomously determined which vehicle held the optimum position, weapons 
(notional), and fuel load to properly attack the target. After making that 
decision, one of the X-45As changed course and the pilot-operator allowed it to 
attack the simulated antiaircraft emplacement. Following a successful strike, 
another simulated threat, this time disguised, emerged and was subsequently 
destroyed by the second X-45A. [2] This demonstrated the ability of these 
vehicles to work autonomously as a team and manage their resources, as well as 
to engage previously-undetected targets, which is significantly harder than 
following a predetermined attack path."
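The selection step in that account - each aircraft weighing position, (notional) weapons, and fuel - amounts to picking the best-scoring vehicle for the engagement. A toy sketch of that idea; the scoring function, weights, and field names here are invented for illustration, since the real X-45A logic is not public:

```python
def pick_attacker(vehicles, target):
    """Score each UCAV on range to target, weapon readiness, and fuel;
    the best-scoring vehicle takes the engagement (weights are made up)."""
    def score(v):
        dist = ((v["x"] - target[0]) ** 2 + (v["y"] - target[1]) ** 2) ** 0.5
        return -dist + 2.0 * v["weapons"] + 0.5 * v["fuel"]
    return max(vehicles, key=score)

ucavs = [
    {"id": "X-45A-1", "x": 0, "y": 0, "weapons": 1, "fuel": 40},
    {"id": "X-45A-2", "x": 8, "y": 6, "weapons": 1, "fuel": 60},
]
target = (10, 6)
print(pick_attacker(ucavs, target)["id"])  # X-45A-2
```

The second threat in the Wikipedia account simply re-runs the same selection over the remaining aircraft, which is how "managing their resources as a team" falls out of a per-target score.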

> Date: Sat, 28 Jan 2012 12:44:24 -0500
> Subject: Re: [Vo]:Skynet Advances - Autonomous Drone
> From: hohlr...@gmail.com
> To: vortex-l@eskimo.com
> 
> On Sat, Jan 28, 2012 at 12:21 PM, Jones Beene  wrote:
> 
> > Instead, the
> > designated scapegoat will most likely be a console jockey on the control
> > deck of a billion dollar vessel, staffed mostly by high-school dropouts -
> > with joy-stick in hand.
> 
> Yeah, but this is even mo-better.  There's no need to blame a human.
> It was a software glitch.  An undocumented feature.  What will we do,
> pinpoint and blame the code writer?  No, now we can blame it on a
> robot.
> 
> T
> 
  

Re: [Vo]:Skynet Advances - Autonomous Drone

2012-01-28 Thread Terry Blanton
On Sat, Jan 28, 2012 at 12:21 PM, Jones Beene  wrote:

> Instead, the
> designated scapegoat will most likely be a console jockey on the control
> deck of a billion dollar vessel, staffed mostly by high-school dropouts -
> with joy-stick in hand.

Yeah, but this is even mo-better.  There's no need to blame a human.
It was a software glitch.  An undocumented feature.  What will we do,
pinpoint and blame the code writer?  No, now we can blame it on a
robot.

T



RE: [Vo]:Skynet Advances - Autonomous Drone

2012-01-28 Thread Jones Beene
Even worse, Terry - imagine when someone is arguably accountable (in the
military sense) ... such as in the widespread practice of putting a PFC in
places where he/she should never have been assigned without intense
supervision. Does the name "Bradley Manning" come to mind in this context?
(BTW this is not the question of whether 'he' is a 'she' :) 

After all - this is the USN, so there will always be someone to blame for
each and every error, no matter how many layers of command are isolated, and
it will not be an officer, even if the error is egregious! Instead, the
designated scapegoat will most likely be a console jockey on the control
deck of a billion dollar vessel, staffed mostly by high-school dropouts -
with joy-stick in hand.

Even worse, and especially when PFCs are operating the controls, the
scapegoat will have been selected based on skills developed from spending
most of their adult life (last two years) at the game arcade center instead
of study hall. And even worse, when their 'patriotism' is coached by idiot
commentators from Fox Network like Palin, then anyone on the left has a big
bulls-eye target painted on their back (or house).

Imagine a far-right version of Bradley Manning, a big supporter of Rush Bimbo
- at the controls of one of these very fast, probably stealthy, yet
'unarmed' drones - as it is coming in to land on the Carl Vinson, just out
of Norfolk (after coming back from six months of tedium in the Persian Gulf).

Yep, five minutes to impact at the White House, home to a hated 'liberal',
and the drone is essentially unstoppable, armed or not.



-Original Message-
From: Terry Blanton 

http://www.latimes.com/business/la-fi-auto-drone-20120126,0,740306.story

New drone has no pilot anywhere, so who's accountable?

The Navy is testing an autonomous plane that will land on an aircraft
carrier. The prospect of heavily armed aircraft screaming through the
skies without direct human control is unnerving to many.


