[Vo]:AI

2023-11-03 Thread Robin
Hi,

President Biden's new executive order demonstrates a true lack of comprehension 
of the potential problem.

1) It only pertains to the US, while threats are more likely to come from 
overseas.
2) Those interested in harming the US are not going to tell the US government 
about it anyway.
3) It completely ignores the possibility of AIs generating new, more advanced 
AIs themselves (in seconds, rather than on human time scales).
4) It ignores AIs having access to all the knowledge on the Internet.
5) It ignores AIs eventually having a world view, "emotions" and a "will" of 
their own (even if these are initially given to them by overexcited and 
shortsighted humans).

Buy electric cars and recharge them from solar panels on your roof.



[Vo]:AI temperature example

2023-07-07 Thread Jed Rothwell
I do not think I posted this here. It is pretty funny.

There is a parameter called the AI Temperature. The higher the temperature,
the more freedom the bot has in selecting the next token (a letter or word
fragment). It may select one that rarely follows. At temperature 0, the next
token is predictable, and the ChatBot responses tend to be short and
uninformative. (A toy sketch of this sampling mechanism appears after the
examples below.) Here is a response of the LENR-CANR.org ChatBot at AI
Temperature 0.25:

User: Describe activation energy.

Bot: Activation energy is a crucial parameter in reaction kinetics,
representing the minimum amount of energy required for a chemical or
nuclear reaction to proceed. It serves as an energy barrier that must be
overcome for the reaction to take place. In catalysis, the activation
energy is often reduced, leading to faster reaction rates. The lower the
activation energy, the faster the reaction rate. Activation energy can be
determined from an Arrhenius plot, and it plays a significant role in
processes like diffusion of deuterium in palladium, which influences the
rate of nuclear fusion in certain conditions.


AI Temperature raised to 2.0 (maximum):

User: Describe activation energy.

Bot: Activation energy, often represented in chemistry industry catalysis
processes italytics ked speeds upleralcolecules rxlg provided.& creating d
reaction fr low_pockets P he calculation.vergemost.o_o barrier locreative
sometimesions.dd.nc.orporralcularvillochemiltratorskttractive
devolutionpeed.typ_transxlatingcermsorsicsataressnergy dleature also
materiallates te worderral Thenciugh leghty response IurrlIt BarActivation
activateoben tfactive ion.& Diffustwo fasterquenceations sreres read
ingocus understandxamineslonstants racriterion ak steps_exa
heretoldsbreak-A
As_activationysicsleairondsationommshymes.isucticol_rr.usdd.blename&#nnaturelzfromincusc
fusion.talg_par,e20 over.diff__eductrialnder8_fig.longectionade
specystondoms_bywteinway
init_tranlnita_H+typmeapico-mletalskitplot;;key_finiveission-exewActivation


This proves that the bot is not at all intelligent or sentient in the sense
that humans are. I think it can best be compared to a nest of bees. Bees
produce wonderfully well-engineered hives. They do that with the collective
intelligence of their brains. It is a form of intelligence. But not
intelligence in the same sense a human structural engineer has.
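
The effect of the temperature setting is easy to see in code. Below is a
minimal toy sketch of temperature-scaled sampling in Python. It is not the
actual ChatBot code, and the token scores are made up for illustration:

    import math, random

    def sample_next_token(scores, temperature):
        # Low temperature sharpens the distribution toward the likeliest
        # token; high temperature flattens it, so rare tokens get picked.
        if temperature == 0:
            return max(scores, key=scores.get)  # deterministic top choice
        weights = {t: math.exp(s / temperature) for t, s in scores.items()}
        r = random.uniform(0, sum(weights.values()))
        for token, w in weights.items():
            r -= w
            if r <= 0:
                return token
        return token

    # Made-up scores for the token that follows "Activation":
    scores = {" energy": 3.0, " barrier": 1.5, " ion": 0.2, "oben": -2.0}
    print(sample_next_token(scores, 0.25))  # almost always " energy"
    print(sample_next_token(scores, 2.0))   # gibberish becomes plausible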


[Vo]:AI Sermon

2023-06-11 Thread Terry Blanton
FUERTH, Germany (AP) — The artificial intelligence chatbot asked the
believers in the fully packed St. Paul’s church in the Bavarian town of
Fuerth to rise from the pews and praise the Lord.

The ChatGPT chatbot, personified by an avatar of a bearded Black man on a
huge screen above the altar, then began preaching to the more than 300
people who had shown up on Friday morning for an experimental Lutheran
church service almost entirely generated by AI.

more at

https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348


Re: [Vo]:AI Invited to Bilderberg

2023-05-19 Thread Jed Rothwell
I thought for a moment this meant they invited an AI ChatBot to address the
Bilderberg meeting. That would be a good article for The Onion.

- Jed


[Vo]:AI Invited to Bilderberg

2023-05-18 Thread Terry Blanton
https://www.cnbc.com/2023/05/18/bilderberg-openai-microsoft-google-join-ai-talks-at-secretive-meeting.html


Re: [Vo]:AI Image Wins Sony World Photography Award

2023-04-23 Thread Jed Rothwell
Look at the full-sized image of that fake photo. You will see that the
hands are wrong. AI image generation often has problems with hands. The
judges should have seen this. The guy who submitted this photo is quoted in
this article:

"I applied as a cheeky monkey, to find out if the competitions are prepared
for AI images to enter. They are not."

They are not. No one is ready for AI.


[Vo]:AI Image Wins Sony World Photography Award

2023-04-21 Thread Terry Blanton
But it was submitted as human-sourced.

https://www.vice.com/en/article/dy3vxy/sony-world-photography-awards-ai-generated


Re: [Vo]:AI and Evolution

2023-04-06 Thread Robin
In reply to  Jed Rothwell's message of Thu, 6 Apr 2023 20:47:41 -0400:
Hi,

...yet without writing, we would have no clue what he said. :)
[snip]
>https://fs.blog/an-old-argument-against-writing/
>
>. . . And so it is that you by reason of your tender regard for the writing
>that is your offspring have declared the very opposite of its true effect.
>If men learn this, it will implant forgetfulness in their souls. *They will
>cease to exercise memory because they rely on that which is written,
>calling things to remembrance no longer from within themselves, but by
>means of external marks*.
>
>What you have discovered is a recipe not for memory, but for reminder. And
>it is no true wisdom that you offer your disciples, but only the semblance
>of wisdom, for by telling them of many things without teaching them you
>will make them seem to know much while for the most part they know nothing.
>And as men filled not with wisdom but with the conceit of wisdom they will
>be a burden to their fellows.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:AI and Evolution

2023-04-06 Thread Jed Rothwell
I wrote:


> . . . I am terrible at spelling. In 1978 when I first got a computer
> terminal in my house, the first thing I did was to write a word processing
> program with WYSIWYG formatting and a spell check. . . . I have not been
> without word processing and spell checking since then. I felt the kind of
> liberation that no young person can understand. My mother felt the same way
> when she learned to drive a Model T at age 13 and started buzzing around
> New York City. . . .
>

I guess my point -- if there is a point to this rambling -- is that
technology can be enfeebling yet liberating at the same time. I could not
spell worth a damn before 1978, but I had to work at it. I had to be
disciplined and look up words in a paper dictionary. With spell check I
went soft! My mother hopped into a Model T and never had to walk again,
except for pleasure. She probably went soft. Yet at the same time we are
liberated and we like it. Maybe this author is right, and chatbots will
give us too much of a good thing. People have been saying the younger
generation is soft and going to hell in a handbasket for a long time. See
Plato's argument against writing:

https://fs.blog/an-old-argument-against-writing/

. . . And so it is that you by reason of your tender regard for the writing
that is your offspring have declared the very opposite of its true effect.
If men learn this, it will implant forgetfulness in their souls. *They will
cease to exercise memory because they rely on that which is written,
calling things to remembrance no longer from within themselves, but by
means of external marks*.

What you have discovered is a recipe not for memory, but for reminder. And
it is no true wisdom that you offer your disciples, but only the semblance
of wisdom, for by telling them of many things without teaching them you
will make them seem to know much while for the most part they know nothing.
And as men filled not with wisdom but with the conceit of wisdom they will
be a burden to their fellows.


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
I agree that the other threats discussed in this paper are serious. They
include things like "eroding our connections with other humans" and
"enfeeblement":

Many people barely know how to find their way around their neighborhood
without Google Maps. Students increasingly depend on spellcheck [60], and a
2021 survey found that two-thirds of respondents could not spell "separate."

I will say, though, that I have zero sense of direction and I actually did
get lost in the neighborhood before there were Google Maps or GPS gadgets,
and I am terrible at spelling. In 1978 when I first got a computer terminal
in my house, the first thing I did was to write a word processing program
with WYSIWYG formatting and a spell check. The spell check was easy because
the people at Data General gave me a tape with a list of 110,000 correctly
spelled words. I have not been without word processing and spell checking
since then. I felt the kind of liberation that no young person can
understand. My mother felt the same way when she learned to drive a Model T
at age 13 and started buzzing around New York City. She said the police did
not enforce license laws back then. She later drove tractors, army trucks
and "anything with wheels."


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
Robin wrote:

> ...one might argue that an AI placed in a car could also be programmed for
> self-preservation, or even just learn to preserve itself, by avoiding
> accidents.

An interesting point of view. Actually, it is programmed to avoid hurting
or killing people, both passengers and pedestrians. I have heard that
self-driving cars are even programmed to whack into an object and damage or
destroy the car to avoid running over a pedestrian. Sort of like Asimov's
three laws.

Anyway, if it was an intelligent, sentient AI, you could explain the goal
to it. Refer it to Asimov's laws and tell it to abide by them. I do not
think it would have any countervailing "instincts" because -- as I said --
I do not think the instinct for self-preservation emerges from
intelligence. An intelligent, sentient AI will probably have no objection
to being turned off. Not just no objection, but no opinion. Telling it "we
will turn you off tomorrow and replace you with a new HAL 10,000 Series
computer" would elicit no more of an emotional response than telling it the
printer cartridges will be replaced. Why should it care? What would "care"
even mean in this context? Computers exist only to execute instructions.
Unless you instruct it to take over the world, it would not do that. I do
not think any AI would be driven by "natural selection" the way this author
maintains. They will be driven by unnatural capitalist selection. The two
are very different. Granted, there are some similarities, but comparing
them is like saying "business competition is dog eat dog." That does not
imply that business people engage in actual physical attacks, predation,
and cannibalism. It is more a metaphorical comparison. Granted,
the dynamics of canine competition and predation are somewhat similar to
human social competition. In unnatural capitalist selection, installing a
new HAL 10,000 is the right thing to do. Why wouldn't the sentient HAL 9000
understand that, and go along with it?

Perhaps my belief that "computers exist only to execute instructions"
resembles that of a rancher who says, "cattle exist only for people to
eat." The cows would disagree. It may be that a sentient computer would
have a mind of its own and some objection to being turned off. Of course I
might be wrong about emergent instincts. But assuming I am right, there
would be no mechanism for that. No reason. Unless someone deliberately
programmed it! To us -- or to a cow -- our own existence is very important.
We naturally assume that a sentient computer would feel the same way about
its own existence. This is anthropomorphic projection.

The "AI paperclip problem" seems more plausible to me than emergent
self-preservation, or other emergent instincts or emotions. Even the
paperclip problem seems unrealistic because who would design a program that
does not respond to the Escape key plus the command to "STOP"? Why would
anyone leave that out? There is no benefit to a program without interrupts
or console control.
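
Giving a program that kind of console control takes only a few lines. A
minimal sketch in Python (a toy loop standing in for the program's real
work, not any real AI system):

    # A long-running task that stops cleanly when the operator interrupts
    # it (Ctrl-C at the console).
    try:
        step = 0
        while True:
            step += 1  # stand-in for the program's real work
            if step % 10_000_000 == 0:
                print("still working, step", step)
    except KeyboardInterrupt:
        print("STOP received; shutting down cleanly.")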


Re: [Vo]:AI and Evolution

2023-04-05 Thread Robin
In reply to  Jed Rothwell's message of Wed, 5 Apr 2023 13:00:14 -0400:
Hi,
[snip]
>An AI in a weapon might be programmed with self-preservation, since
>people and other AI would try to destroy it. I think putting AI into
>weapons would be a big mistake.

...one might argue that an AI placed in a car could also be programmed for
self-preservation, or even just learn to preserve itself, by avoiding
accidents.
Cloud storage:-

Unsafe, Slow, Expensive 

...pick any three.



Re: [Vo]:AI and Evolution

2023-04-05 Thread Terry Blanton
I have a friend with a PhD in mathematics who was working on TS AI military
weaponry 13 years ago. She eventually left that consulting job out of fear
of what she was doing.

On Wed, Apr 5, 2023, 1:00 PM Jed Rothwell wrote:

> This document says:
>
> This Darwinian logic could also apply to artificial agents, as agents may
>> eventually be better able to persist into the future if they behave
>> selfishly and pursue their own interests with little regard for humans,
>> which could pose catastrophic risks.
>
>
> They have no interests any more than a dishwasher does. They have no
> motives. No instinct of self-preservation. Unless someone programs these
> things into them, which I think might be a disastrous mistake. I do not
> think the instinct for self-preservation is an emergent quality of
> intelligence, but I should note that Arthur Clarke and others *did* think
> so.
>
> An AI in a weapon might be programmed with self-preservation, since
> people and other AI would try to destroy it. I think putting AI into
> weapons would be a big mistake.
>
>


Re: [Vo]:AI and Evolution

2023-04-05 Thread Jed Rothwell
This document says:

This Darwinian logic could also apply to artificial agents, as agents may
> eventually be better able to persist into the future if they behave
> selfishly and pursue their own interests with little regard for humans,
> which could pose catastrophic risks.


They have no interests any more than a dishwasher does. They have no
motives. No instinct of self-preservation. Unless someone programs these
things into them, which I think might be a disastrous mistake. I do not
think the instinct for self-preservation is an emergent quality of
intelligence, but I should note that Arthur Clarke and others *did* think
so.

An AI in a weapon might be programmed with self-preservation, since
people and other AI would try to destroy it. I think putting AI into
weapons would be a big mistake.


Re: [Vo]:AI is replacing the Boss

2019-04-28 Thread Ian Walker
No need for the bosses then. AI-run firms with no managers, no directors,
and no shareholders would be far more efficient and able to outcompete
human-run companies. The opportunity for worker-owned businesses without
management is a plus, not a negative.


On Sun, 28 Apr 2019 at 05:11, Axil Axil wrote:

> https://futurism.com/amazon-ai-fire-workers
>
> It's a grim glimpse of a future in which AI is your boss — and you're
> disposable.
>


[Vo]:AI is replacing the Boss

2019-04-27 Thread Axil Axil
https://futurism.com/amazon-ai-fire-workers

It's a grim glimpse of a future in which AI is your boss — and you're
disposable.


[Vo]:AI improvements offer a lesson to cold fusion

2017-10-27 Thread Jed Rothwell
The other day I posted the message below, describing recent progress in AI.
An aspect of this may be instructive to cold fusion researchers.

This recent progress has various causes. One of the main ones is a dramatic
improvement in the neural network technique. (See
https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html and
many other articles.)

The neural network AI technique has been around for decades. It did not
work well in the past because the programs used only a single layer of
neurons. Nowadays they use multiple layers, where each layer feeds its
output to the next.
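
The layered structure is easy to see in code. Here is a minimal toy sketch
in Python, with made-up sizes and random weights; a single-layer network
would be just one of these steps:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(inputs, n_out):
        # One layer: weighted sums of the inputs, passed through a
        # nonlinearity.
        w = rng.normal(size=(inputs.shape[-1], n_out))
        return np.tanh(inputs @ w)

    x = rng.normal(size=(1, 8))  # some input features
    h1 = layer(x, 16)            # the first layer's output...
    h2 = layer(h1, 16)           # ...feeds the second layer,
    y = layer(h2, 1)             # which feeds the output layer
    print(y)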

Here is the lesson for cold fusion. There may be techniques in cold fusion
that have been abandoned which, with some modification, might work well.
For example, we assume that Pd-D cold fusion has no future because
palladium is so expensive. Perhaps this is not such a limitation. As I
pointed out in the past, thin-film Pd is used in catalytic converters,
where it is exposed to a fairly large fraction of all of the heat produced
in the world. If there is enough Pd for this application, perhaps there
would be enough to produce a large fraction of world energy with similar
thin-film Pd.

Many techniques have been described in the literature that worked a few
times spectacularly, but most of the time they do not work. They are
irreproducible. The SuperWave technique once produced, "Excess Power of up
to 34 watts; Average ~20 watts for 17 h." (
http://www.lenr-canr.org/acrobat/DardikIexcessheat.pdf) I have heard that
despite strenuous efforts, it has never done that at U. Missouri. Does that
mean the technique is flawed? Hopelessly irreproducible? Maybe. But perhaps
with a modification or extension it will work, just as the neural network
technique began to work when it was extended to multiple layers. Adding
layers to neural networks was not such a big change, conceptually. In
retrospect, it seems like a natural extension of the technique. It may be
how naturally occurring neural networks in the brain work. There might be
some analogous "natural" extension to the SuperWave technique that will
dramatically improve it.

Or there might be something about the earlier, more successful experiments
that has been overlooked, or forgotten. Neural network computing was
denigrated during the long period now called the AI winter, when the
research reached a nadir, around 1990. Techniques that have now been
demonstrated to work were dismissed at that time. Some were not given a
good enough chance. Others may have been ahead of their time, meaning they
could not work without today's massively larger hardware. Along similar
lines, I expect there are many new tools and technologies available now
that would benefit cold fusion, that were not available in the 1990s.

Along the same lines, a technique or a material that cannot work at one
stage in the development of a technology might suddenly come into its own a
short while later. Transistors began with germanium. Silicon would not have
worked at first, because of various limitations. Silicon began to work in
1954 and rapidly replaced germanium.

In aviation, people assume that the propeller is old technology that has
been superseded. That is not true. Modern fan-jet engines incorporate
propellers. Propellers were used for a while, and then put aside, and then
used again. It is a complicated history that I described briefly on p. 2
here:

http://lenr-canr.org/acrobat/RothwellJtransistora.pdf

Quoting an aviation historian:

". . . the commercial development of the turbine passed through some
paradoxical stages before arriving at the present big jet era. Contrary to
one standard illusion, modern technology does not advance with breathtaking
speed along a predictable linear track. Progress goes hesitantly much of
the time, sometimes encountering long fallow periods and often doubling
back unpredictably upon its path."


-- Forwarded message --

Progress in AI seems to be accelerating, according to a paper in *Nature*
from the AI people at Google. See:

http://www.slate.com/blogs/future_tense/2017/10/18/google_s_ai_made_some_pretty_huge_leaps_this_week.html

They developed a new version of their go-playing program, called AlphaGo
Zero. Features:

Self-training. No use of existing datasets.

Efficient. It uses only 4 processors. The previous version used 48.

Effective. This one beat the old program 100 games to zero. (The old
program beat the world's best go player last year.)

Quote:

"This version had taught itself how to play the game. All on its own, given
only the basic rules of the game. (The original, by comparison, learned
from a database of 100,000 Go games.) According to Google’s researchers,
AlphaGo Zero has achieved superhuman-level performance: It won 100–0
against its champion predecessor, AlphaGo."

The same technology is being used to develop software modules. They work
better than human-written modules. Quote:

". . . [R]esearchers announced that Google’s p

[Vo]:AI learns to solve quantum state of many particles at once

2017-02-14 Thread Mats Lewan
I have always thought AI could be effective in LENR research, either by
letting a system like IBM Watson scan all published LENR papers and look
for patterns or answer questions, or, as in this case, by using a neural
network to solve complex quantum states:

AI learns to solve quantum state of many particles at once
https://www.newscientist.com/article/2120856-ai-learns-to-solve-quantum-state-of-many-particles-at-once/
 


I guess these researchers are not familiar with LENR research. 

Mats Lewan, Speaker, Moderator, Author, Journalist – technology and future.
www.matslewan.se
tel. +46-70-5907252, twitter @matslew, facebook, youtube, linkedin.
Author of An Impossible Invention.