Re: [singularity] pattern definition

2008-05-06 Thread Vladimir Nesov
On Mon, May 5, 2008 at 7:50 PM,  [EMAIL PROTECTED] wrote:
 Hello

  I am writing a literature review on AGI and I am mentioning the definition
 of pattern as explained by Ben in his work.

  A pattern is a representation of an object on a simpler scale. For
 example, a pattern in a drawing of a mathematical curve could be a program
 that can compute the curve from a formula (Looks et al. 2004). My supervisor
 told me that she doesn't see how this can be simpler than the actual
 drawing.

  Any other definition I could use in the same context to explain to a
 non-technical audience?


Hi,

See the Wikipedia article on Kolmogorov complexity:
http://en.wikipedia.org/wiki/Kolmogorov_complexity

It has a nice example with the Mandelbrot set: you get this arbitrarily
detailed image from a one-line algorithm.
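
As a toy sketch of that point (the resolution, escape bound and characters
below are arbitrary choices of mine, not anything from the article), a few
lines of Python already draw a crude rendering of the set; the Kolmogorov
complexity of the picture is bounded by the size of this little program
rather than by the size of the bitmap:

for row in range(24):
    line = ""
    for col in range(78):
        # map screen coordinates to a window of the complex plane
        c = complex(-2.0 + 2.6 * col / 77, -1.2 + 2.4 * row / 23)
        z = 0
        for _ in range(50):          # escape-time iteration z -> z*z + c
            z = z * z + c
            if abs(z) > 2:
                line += " "          # escaped: point is outside the set
                break
        else:
            line += "#"              # stayed bounded: (approximately) inside
    print(line)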

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Vladimir Nesov
On Fri, Apr 11, 2008 at 10:50 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

  If the problem is so simple, why don't you just solve it?
  http://www.securitystats.com/
  http://en.wikipedia.org/wiki/Storm_botnet

  There is a trend toward using (narrow) AI for security.  It seems to be one of
  its biggest applications.  Unfortunately, the knowledge needed to secure
  computers is almost exactly the same kind of knowledge needed to attack them.


Matt, this issue was already raised a couple of times. It's a
technical problem that can be solved perfectly, but isn't in practice,
because it's too costly. Formal verification, specifically aided by
languages with rich type systems that can express proofs of
correctness for complex properties, can give you perfectly safe
systems. It's just very difficult to specify all the details.

These AIs for network security that you are talking about are a
cost-effective hack that happens to work sometimes. It's not a
low-budget vision of future super-hacks.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-11 Thread Vladimir Nesov
On Sat, Apr 12, 2008 at 12:34 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

  Actually it cannot be solved even theoretically.  A formal specification of a
  program is itself a program.  It is undecidable whether two programs are
  equivalent.  (It is equivalent to the halting problem).

  Converting natural language to a formal specification is AI-hard, or perhaps
  harder, because people can't get it right either.  If we could write software
  without bugs, we would solve a big part of the security problem.


You just evoked the Halting Problem Fallacy. You can't check whether a given
arbitrary program terminates, but you can write a program that provably
terminates. Likewise, you can't check the correctness of an arbitrary
program, but you can write a provably correct program. Yes, it is possible to
write software without bugs, starting from a formal definition of what a
'bug' is (for example, 'it must never crash', 'it must never directly leak
sensitive data' or 'it must remain able to perform function A properly' are
OK). The cause of this whole security problem is that it is presently very
hard to write provably safe programs, so almost no one does it. The
functional programming research community is working on this problem, but I
doubt there will ever be tools that enable the average Joe programmer to
meaningfully write verified code.
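
To make the asymmetry concrete, here is a small sketch (mine, not a tool or
method from this thread): no general procedure decides termination for
arbitrary programs, yet this particular loop provably terminates, because the
variant hi - lo is a non-negative integer that strictly decreases on every
iteration.

def binary_search(xs, target):
    """Return an index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs)
    while lo < hi:                  # loop variant: hi - lo >= 1 here
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1            # variant strictly decreases
        elif xs[mid] > target:
            hi = mid                # variant strictly decreases
        else:
            return mid
    return -1

assert binary_search([1, 3, 5, 7], 5) == 2
assert binary_search([1, 3, 5, 7], 4) == -1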

Understanding a natural language specification and converting it to code is
what programming is about. I certainly didn't imply that 'programming is
unnecessary, perfectly secure code can just write itself'. That won't happen
until we have AGI.


   These AIs for network security that you are talking about are a
   cost-effective hack that happens to work sometimes. It's not a
   low-budget vision of future super-hacks.

  Not at present because we don't have AI.

I responded assuming that you were talking about the following sort of
thing, and its presumed further development to higher levels and
subtler rules:

http://en.wikipedia.org/wiki/Intrusion-prevention_system

Rate based IPS (RBIPS) are primarily intended to prevent Denial of
Service and Distributed Denial of Service attacks. They work by
monitoring and learning normal network behaviors. Through real-time
traffic monitoring and comparison with stored statistics, RBIPS can
identify abnormal rates for certain types of traffic e.g. TCP, UDP or
ARP packets, connections per second, packets per connection, packets
to specific ports etc. Attacks are detected when thresholds are
exceeded. The thresholds are dynamically adjusted based on time of
day, day of the week etc., drawing on stored traffic statistics.
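
A toy sketch of that rate-based idea in Python (an illustration only; the
traffic types, decay factor and threshold multiplier are invented, not taken
from any actual IPS):

from collections import defaultdict

class RateBaseline:
    def __init__(self, decay=0.9, multiplier=5.0):
        self.avg = defaultdict(float)   # learned "normal" count per traffic type
        self.decay = decay
        self.multiplier = multiplier

    def observe(self, kind, count):
        """Update the baseline for kind; report whether count looks abnormal."""
        baseline = self.avg[kind]
        abnormal = baseline > 0 and count > self.multiplier * baseline
        # an exponential moving average stands in for "stored traffic statistics"
        self.avg[kind] = self.decay * baseline + (1 - self.decay) * count
        return abnormal

ips = RateBaseline()
for count in [100, 110, 95, 105]:       # learn a normal rate for TCP SYN packets
    ips.observe("tcp_syn", count)
print(ips.observe("tcp_syn", 5000))     # a sudden flood exceeds the threshold: True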

  We rely on humans to find
  vulnerabilities in software.  We would like for machines to do this
  automatically.  Unfortunately such machines would also be useful to hackers.
  Such double-edged tools already exist.  For example, tools like SATAN, NESSUS,
  and NMAP can quickly test a system by probing it to look for thousands of
  known or published vulnerabilities.  Attackers use the same tools to break
  into systems.  www.virustotal.com allows you to upload a file and scan it with
  32 different virus detectors.  This is a useful tool for virus writers who
  want to make sure their programs evade detection.  I suggest it will be very
  difficult to develop any security tool that you could keep out of the hands of
  the bad guys.


All automatic tools already work from formally specified things that they
try to find in the system. If you write the code so that these things are not
there, and ascertain that by automatic verification based on, e.g., a
sufficiently rich type system, these tools won't find anything either. And
yes, if you don't make the code clearer, finding vulnerabilities in it is a
very difficult problem, and the smarter and more resourceful you are, the
more you'll be able to find.
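
To illustrate what 'working from formally specified things' means, a toy
sketch (the patterns and sample snippets are invented for illustration): such
a scanner is essentially a fixed list of known bad patterns matched against
the code, so code that simply never contains them comes back clean.

# a deliberately tiny "vulnerability scanner": nothing but known patterns
KNOWN_BAD = ["strcpy(", "gets(", "system(user_input"]

def scan(source: str):
    """Return the known bad patterns found in the given source text."""
    return [p for p in KNOWN_BAD if p in source]

print(scan("strncpy(dst, src, sizeof dst);"))   # [] -- none of the known patterns
print(scan("gets(buffer);"))                    # ['gets(']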

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [singularity] Vista/AGI

2008-03-17 Thread Vladimir Nesov
On Mon, Mar 17, 2008 at 4:48 PM, John G. Rose [EMAIL PROTECTED] wrote:

 I think though that particular proof of concepts may not need more than a
 few people. Putting it all together would require more than a few. Then the
 resources needed to make it interact with various systems in the world would
 make the number of people needed grow exponentially.


Then what's the point? We have this problem with existing software already,
and it's precisely the magic bullet of AGI that should give us the free lunch
of automatically interfacing with real-world issues...

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Vladimir Nesov
On Sun, Feb 24, 2008 at 2:51 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

  Consider Arithmetical Functionalism: the theory that a calculation is
  multiply realisable, in any device that has the right functional
  organisation. But this might mean that somewhere in the vastness of
  the universe, a calculation such as 2 + 2 = 4 might be being
  implemented purely by chance: in the causal relationship between atoms
  in an interstellar gas cloud, for example. This is clearly ridiculous,
  so *either* Arithmetical Functionalism is false *or* it is impossible
  that a calculation will be implemented accidentally. Right?


I feel a little uncomfortable when people say things like 'because
Occam's razor is true' or 'otherwise computationalism is false' or
'consciousness doesn't exist'. As these notions are usually quite
loaded and ambiguous, and the main issues with them may revolve around the
question of what they actually mean, it's far from clear what is being
asserted when they are declared to be 'true' or 'false'.

Does 2+2=4 make a sound when there is no one around?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Vladimir Nesov
On Sun, Feb 24, 2008 at 4:06 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:
 On 24/02/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:

Does 2+2=4 make a sound when there is no one around?

  Yes, but it is of no consequence since no one can hear it. However, if
  we believe that computation can result in consciousness, then by
  definition there *is* someone to hear it: itself.


But it's still of no 'consequence', no?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Vladimir Nesov
On Feb 20, 2008 6:13 AM, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 The possibility of mind uploading to computers strictly depends on
 functionalism being true; if it isn't then you may as well shoot
 yourself in the head as undergo a destructive upload. Functionalism
 (invented, and later repudiated, by Hilary Putnam) is philosophy of
 mind if anything is philosophy of mind, and the majority of cognitive
 scientists are functionalists. Are you still happy asserting that it's
 all bunk?


Philosophy is in most cases very inefficient, hence wasteful. It puts a great
deal into building its theoretical constructions, few of which are useful for
understanding reality. It might be fun for those who like this kind of thing,
but it is a bad tool.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-29 Thread Vladimir Nesov
On Jan 29, 2008 11:49 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  OK, but why can't they all be dumped in a single 'normal' multiverse?
  If traveling between them is accommodated by 'decisions', there is a
  finite number of them for any given time, so it shouldn't pose
  structural problems.

 The whacko, speculative SF hypothesis is that lateral movement btw
 Yverses is conducted according to ordinary laws of physics, whereas
 vertical movement btw Yverses is conducted via extraphysical psychic
 actions ;-)'


What differentiates psychic actions from non-psychic so that they
can't be considered ordinary? If I can do both, why aren't they both
equally ordinary to me (and everyone else)?..

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=91036630-4898ad


Re: [singularity] Multi-Multi-....-Multiverse

2008-01-28 Thread Vladimir Nesov
On Jan 28, 2008 2:17 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Can you define what you mean by decision more precisely, please?

That's difficult; I don't have it formalized. Something like the application
of knowledge about the world; it's likely to end up being an
intelligence-definition-complete problem...

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=90505077-ab77a2


Re: [singularity] Multi-Multi-....-Multiverse

2008-01-27 Thread Vladimir Nesov
On Jan 27, 2008 9:29 PM, John K Clark [EMAIL PROTECTED] wrote:
 Ben Goertzel [EMAIL PROTECTED]

  we can think about a multi-multiverse, i.e. a collection of multiverses,
  with a certain probability distribution over them.

 A probability distribution of what?


Exactly. It needs stressing that probability is a tool for
decision-making and it has no semantics when no decision enters the
picture.

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=90386232-2d2891


Re: [singularity] Wrong focus?

2008-01-26 Thread Vladimir Nesov
On Jan 26, 2008 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
 On Saturday 26 January 2008, Mike Tintner wrote:
  Why does discussion never (unless I've missed something - in which
  case apologies) focus on the more realistic future
  threats/possibilities -   future artificial species as opposed to
  future computer simulations?

 This is bias in the community. The majority of the information from
 SingInst, for example, focuses on digital ai and not the potential ai
 that we can get from the bio sector, like through synbio and
 gengineering and tissue engineering of neurons and brains.


I guess the limitations of a biological substrate are too strict, and there
is not much to hope for from this side. Maybe we'd be able to construct a
genetically engineered scientist with a huge brain who will develop AGI
before we crack the problem ourselves ;-)

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=90259355-e04be3


Re: [singularity] EidolonTLP

2008-01-22 Thread Vladimir Nesov
On Jan 23, 2008 1:06 AM, Daniel Allen [EMAIL PROTECTED] wrote:
 It is entertaining.

 I love the greeting -- Greetings, little people -- and the graphics along
 with the ambient and almost haunting background music.

But speech is so boring that it must be a GOFAI...

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=88693780-508c2f


Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Vladimir Nesov
On Jan 20, 2008 3:06 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Sorry if you've all read this:

 http://www.goertzel.org/benzine/extropians.htm

 But I found it a v. well written sympathetic critique of extropianism and
 highly recommend it. What do people think of its call for a humanist
 transhumanism?


Thanks, Mike, for highlighting this informative essay.

I think that first and foremost we must not embrace mystery. Ben argues
against oversimplifying, but are we honest in adding details that we don't
sufficiently understand? Each irresponsibly added detail takes us further
from reality. Preferring a fabulous wrong impression over a simple speck of
truth is not virtuous.

Humans don't have a stable morality. They learn, they go mad. What is it
about evolutionarily preprogrammed reinforcers that makes them exceptional
compared to other random concoctions? They have a good position of power;
many people obey them. If one argues for personal moral freedom, it's not
about enforcing freedom on others, it's about liberating oneself from the
influence of others. There is no point in choosing a moral stance if you
don't know what effect it will have. Seek understanding if you want to hold
back an existing moral plague, including the part of it you embody yourself.


-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=87886040-d08b59


Re: [singularity] I feel your pain

2007-11-05 Thread Vladimir Nesov
Isn't empathy a failure to distinguish between yourself and the other?
A deficiency, not a strength?

On 11/6/07, Don Detrich [EMAIL PROTECTED] wrote:

 Will an AGI with no bio-heritage be able to feel our pain, have empathy? If
 not, will that make it less conscious and more dangerous?

 http://www.salon.com/news/feature/2007/11/05/mirror_neurons/

 Don Detrich


-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=61462586-3970db


Re: [singularity] Uploaded p-zombies

2007-09-10 Thread Vladimir Nesov
Monday, September 10, 2007, Matt Mahoney wrote:

MM --- Vladimir Nesov [EMAIL PROTECTED] wrote:

 I intentionally don't want to define exactly what S is, as it describes a
 vaguely-defined 'subjective experience generator'. I instead leave it at
 the description level.

MM If you can't define what subjective experience is, then how do you know it
MM exists?

It exists in the same sense as anything else exists. All objective world
theories can be regarded as invariants of subjective experience.
Objective world theories are portable between agents of the same
world.

MM  If it does exist, then is it a property of the computation, or does
MM it depend on the physical implementation of the computer?  How do you test
MM for it?

It certainly corresponds to the physical implementation (the brain), and it
is a property of relations between its parts (atoms/neurons). Whether it's a
property of computation is what I'm trying to find out.

MM Do you claim that the human brain cannot be emulated by a Turing machine?

A functionally equivalent implementation can be built. But the physical
world doesn't know the system's design, so it cannot find that certain
relations between certain states in the emulating computer correspond to
relations between neurons in the original brain. The main thesis is that
subjective experience is a property of the physical implementation, not of an
arbitrary mathematical model of that implementation. The two can be the same
if that mathematical model is derivable purely from the physical
implementation, though.

-- 
 Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=40046899-85db61


Re: [singularity] Uploaded p-zombies

2007-09-09 Thread Vladimir Nesov
Sunday, September 9, 2007, Matt Mahoney wrote:

MM Also, Chalmers argues that a machine copy of your brain must be conscious.
MM But he has the same instinct to believe in consciousness as everyone else.  My
MM claim is broader: that either a machine can be conscious or that consciousness
MM does not exist.

While I'm not yet ready to continue my discussion on essentially the
same topic with Stathis on SL4, let me define this problem here.

Let's replace discussion of consciousness with the simpler notion of
'subjective experience'. So, there is a host universe in which there's an
implementation of a mind (a brain or any other such thing) which we, as a
starting point, assume to have this subjective experience.

Subjective experience exists as relations in the mind's implementation in
the host universe (or the process of their modification in time). From this
it supposedly follows that subjective experience exists only as that
relation, and if that relation is instantiated in a different implementation,
the same subjective experience should also exist.

Let X be the original implementation of the mind (X defines the state of the
matter in the host universe that comprises the 'brain'), and S be the system
of relations implemented by X (the mind). There is a simple correspondence
between X and S, let's say S=F(X). As the brain can be slightly modified
without significantly affecting the mind (an additional assumption), F can
also be modification-tolerant; that is, for example, if you replace in X some
components of neurons with constructs of different chemistry which still
implement the same functions, F(X) will not change significantly.

Now, let Z be an implementation of uploaded X. That is, Z could be some
network of future PCs plus the required software and data extracted from X.
Now, how does Z correspond to S? There clearly is some correspondence that
was used in the construction of Z. For example, let there be a certain
feature of S that can be observed on X (say, the feature is D and it can be
extracted by a procedure R, D=R(S)=R(F(X))=(RF)(X); D can for example be a
certain word that S is saying right now). The implementation Z comes with a
function L that makes it possible to extract D, that is D=L(Z), or
L(Z)=R(S).

The presence of the implementation Z and the feature-extractor L only allows
the observation of features of S. But to say that Z implements S in the sense
defined above for X, there should be a correspondence S=F'(Z). This
correspondence F' supposedly exists, but it is not implemented in any way, so
there is nothing that makes it more appropriate for Z than another arbitrary
correspondence F'' which results in a different mind F''(L)=S'≠S. F' is not a
near-equivalence as F was. One can't say that the implementation of the
uploaded mind simulates the same mind, or even a mind that is in any way
similar. It observes the behaviour of the original mind using
feature-extractors and so is functionally equivalent, but it doesn't
exclusively provide an implementation for the same subjective experience.

So, here is the difference: the simplicity of the correspondence F between
the implementation and the mind. We know from experience that modifications
which leave F a simple correspondence don't destroy subjective experience.
But complex correspondences make it impossible to distinguish between the
possible subjective experiences the implementation simulates, as the
correspondence function itself isn't implemented along with the simulation.

As a final paradoxical example, if the implementation Z is nothing, that is,
it comprises no matter and no information at all, there is still a
correspondence function F(Z)=S which supposedly asserts that Z is X's upload.
There can even be a feature extractor (which will have to implement a
functional simulation of S) that works on an empty Z. What is the difference,
from the subjective-experience-simulation point of view, between this empty Z
and a proper upload implementation?
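
To make the role of the unimplemented correspondence function concrete, here
is a toy sketch (only an analogy for the argument above, not part of it; the
names and the bit string are arbitrary): two equally definable readings of
the same stored state yield different 'minds', and nothing in the state
itself selects between them.

Z = "0110"                                  # some stored implementation state

def F_prime(z):                             # one definable reading of Z as a mind
    return {"mood": "happy" if z[0] == "0" else "sad"}

def F_double_prime(z):                      # an equally definable reading
    return {"mood": "sad" if z[0] == "0" else "happy"}

# Both correspondences are mathematically well defined, but nothing stored in Z
# picks out one of them; that choice lives outside the implementation.
print(F_prime(Z))          # {'mood': 'happy'}
print(F_double_prime(Z))   # {'mood': 'sad'}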

-- 
 Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=39991599-a151a9


Re: [singularity] Uploaded p-zombies

2007-09-09 Thread Vladimir Nesov
Monday, September 10, 2007, Matt Mahoney wrote:

MM Perhaps I misunderstand, but to make your argument more precise:
MM X is an implementation of a mind, a Turing machine.

No. The whole argument is about why a Turing machine-like implementation of
an uploaded brain doesn't seem to do the trick. X is the original meaty
brain, say a collection of locations of atoms, which as a starting point is
assumed to implement subjective experience. The point of the discussion is to
show that it's not as obvious as it seems at first glance that an uploaded X
will also implement subjective experience. I thought that too initially,
before arriving at this argument. Now, Z IS a Turing machine-like thing. S is
something that is the essence of the subjective experience-generating
structure, and in the case of a brain it closely corresponds to its physical
structure. It need not be the simplest representation possible.

MM S is the function computed by X, i.e. a canonical form of X, the smallest or
MM first Turing machine in an enumeration of all machines equivalent to X.  By
MM equivalent, I mean that X(w) = S(w) for all input strings w in A* over some
MM alphabet A.

MM Define F: F(X) = S (canonical form of X), for all X.  F is not computable, but
MM that is not important for this discussion.

I intentionally don't want to define exactly what S is, as it describes a
vaguely-defined 'subjective experience generator'. I instead leave it at the
description level.

MM An upload, Z, of X is defined as any Turing machine such that F(Z) = F(X) = S,
MM i.e. Z and X are equivalent.

F'(Z)=F(X)=S, and Z is an upload of X. They are equivalent given F and F',
but F' doesn't in itself correspond to Z; for example, there is an F'' just
as good which results in a different F''(Z), so it's a big question whether Z
and X are equivalent, or rather whether Z is as equivalent to F'(Z)=S as it
is to F''(Z)=S'≠S.

Also, a bugfix in my previous message: it's F''(Z)=S'≠S, not F''(L)=S'≠S.

MM Then the paradox in your last example cannot exist because F(nothing) != S,

That F was a different F, which is by definition equal to S. Say,
F*(nothing)=S (by definition). I omitted details about time and I/O
for simplicity, but they can be factored in with minor changes.

MM The other problem is that you have not defined subjective experience.
MM Presumably this is the input to a consciousness?  If consciousness does not
MM exist, then how can subjective experience exist?  There is only input to the
MM Turing machine that may or may not affect the output.  A reasonable definition
MM of subjective experience would be the subset of inputs that affect the output.

It's more that I'm looking for a proper definition of subjective experience
based on the presented thought experiment. So I substitute the unknown
definition with an associated symbol (S) and describe its properties.

-- 
 Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=40011790-2fef3d


Re: [singularity] Current absence of activism

2007-08-20 Thread Vladimir Nesov
Sunday, August 19, 2007, Joshua Fox wrote:

JF (but please, only well-confirmed reports rather than
JF supposition).

Maybe a more general question that provides information about the same
problem is: why don't you donate/work on it/spread the word, other than due
to inability to do so?

So, framed that way:

I'm not particularly interested in things other than AGI that get speculated
about around this topic, since I don't believe they can provide
uploading/immortality in my lifetime. Even given nanotechnology with all the
perks singularitarians attribute to it, the control/engineering is going to
be too complex to implement these things in reasonable time. The race with
existential risks doesn't help. By contrast, AGI is going to be a relatively
simple engineering project given a workable design.

AGI doesn't work well with public awareness; the only reason to spread the
word on this topic is fundraising, for which a target should first be
identified.

There is currently no framework for research framed as AGI development, and
no reasonable grants or institutions. The few people who hint at having a
clue about what they are doing intentionally refrain from providing technical
details, so one really can't tell. This secretive approach is also being
rationalized as the exclusively correct one. Since none of them has destroyed
the world yet, I suppose they overdramatize the importance of their findings,
while at the same time stalling incremental progress.

The only thing that's left is to work on the problem yourself, but that falls
outside this topic, since very few people have both the sufficient background
and the intuitive inclination telling them the problem just might be
workable, and those who do are working on it anyway.

-- 
 Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=33571207-85e887


Re: Symbol grounding (was [singularity] ESSAY: Why care about artificial intelligence?)

2007-07-12 Thread Vladimir Nesov
Thursday, July 12, 2007, Cenny Wenner wrote:

CW There are certainly difficulties with natural language but I do not
CW see how these empirical and practical difficulties can be called
CW tautologies of languages in general.

The problem here is not some particular deficiency of natural language or
some mystical way goals always keep being misinterpreted. The problem is that
if you want to formulate a complex goal/value, it usually intrinsically
employs informal everyday concepts, and these concepts exist within the
system you want to instruct only as results of a complex process of
perception. If you want guarantees on the overall result, you must include
the properties of this perceptual process in the specification as well. This
draws a practical line between goals we can build systems to reliably achieve
and those we can't. There's obviously a technical problem of building a
system powerful enough to be able to recognize these concepts, but that
doesn't help in verifying whether these concepts are really recognized in the
situations we want them to be. The problem is twofold: you should be able to
guarantee the properties of this perceptual process (so it can't be an
arbitrary emergent one), and you should be able to figure out the essence of
your own perception of these concepts. A middle ground is to make a simpler
formulation of the goals, one not involving too much hairy perception. A meta
ground is to include human perception itself in the loop, thus tasking the
system with figuring out the perceptual details of humans as a subtask.


-- 
 Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=20587889-3be28f