[agi] Tired of not making any MONEY online!

2003-01-04 Thread aab02hb2002ty232
Dear fellow marketer, 

You get emails every day, offering to show you how to make money.
Most of these emails are from people who are NOT making any money.


If you want to make money with your computer, then you should
hook up with a group that is actually DOING it. We are making
a large, continuing income every month. What's more - we will
show YOU how to do the same thing.

This business is done completely by internet and email, and you
can even join for free to check it out first. If you can send
an email, you can do this. No special "skills" are required.

How much are we making? Anywhere from $1000 to $9000 per month. 
We are real people, and most of us work at this business part-time. 
But keep in mind, we do WORK at it - I am not going to 
insult your intelligence by saying you can sign up, do no work,
and rake in the cash. That kind of job does not exist. But if
you are willing to put in 10-15 hours per week, this might be
just the thing you are looking for.

This is not income that is determined by luck, or work that is
done FOR you - it is all based on your effort. But, as I said,
there are no special skills required. And this income is RESIDUAL -
meaning that it continues each month (it tends to increase each
month as well, and it's LEGAL too!).

Interested? I invite you to find out more. You can get in as a
free member, at no cost, and no obligation to continue if you
decide it is not for you. We are just looking for people who still
have that "burning desire" to find an opportunity that will reward
them incredibly well, if they work at it.

To become a free member and learn more about our unique
internet opportunity, simply reply to: [EMAIL PROTECTED]

and put in the subject line:

"SIGN ME UP, REF: 123"

I will then get back to you ASAP with further instructions.

Looking forward to hearing from you!

Sincerely, 

JAB

P.S. After having several negative experiences with network
marketing companies I had pretty much given up on them.
This is different - there is value, integrity, and a
REAL opportunity to have your own home-based business...
and finally make real money on the internet.

Don't pass this up... you can sign up and test-drive the
program for FREE.

All you need to do is become a free member!
= = = = = = = = = = = = = = = = = = = = = = = = = = = 

Unsubscribing: Send a blank email to: [EMAIL PROTECTED] with
"Remove" in the subject line.

This message is not intended for residents of the state of
Washington, and screening of addresses has been done to the best
of our technical ability. If you are a Washington resident or
otherwise wish to be removed from this list, just follow the
removal instructions above.






Re: [agi] Diminished impact of Moore's Law on AGI due to other bottlenecks

2003-01-04 Thread James Rogers
On 1/4/03 3:02 PM, "Shane Legg" <[EMAIL PROTECTED]> wrote:
> 
> I had similar thoughts, but when I did some tests on the webmind code
> a few years back I was a little surprised to find that floating point
> was about as fast as integer math for our application.  This seemed to
> happen because where you could do some calculation quite directly with
> a few floating point operations, you would need more to achieve the same
> result with integer math due to extra normalisation operations etc.


I can see this in some cases, but for us the number of instructions is
literally the same; the data fields in question could swap out floats with
ints (with a simple typedef change) with no consequences.  We do have a
normalization function, but since that effectively prunes things we'd use it
whether it was floating point or integer, and it is only very rarely
triggered anyway.

I guess the key point is that we aren't really "faking" floating point with
integers.  It is a case of floating point bringing nothing to the table
while offering somewhat inferior performance under certain conditions.  The
nice thing about integers is that performance is portable.  I certainly
wouldn't shy away from using floating point if it made sense.  It is just a
mild curiosity that when all is said and done, nothing in the core engine
requires floating point computation.
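
[For concreteness, a minimal sketch of the kind of typedef swap described
above; weight_t and combine are invented names for illustration, not the
engine's actual code:]

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical value type: flipping this one typedef moves the
     * whole kernel between integer and floating point arithmetic. */
    typedef int32_t weight_t;          /* or: typedef float weight_t; */

    /* The kernel is pure addition/subtraction, so it behaves
     * identically under either definition of weight_t. */
    static weight_t combine(weight_t a, weight_t b) { return a + b; }

    int main(void)
    {
        weight_t w = combine(2, 3);
        printf("%ld\n", (long)w);   /* cast so the format works either way */
        return 0;
    }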

 
> I was also surprised to discover that the CPU did double precision
> floating point math at about the same speed as single precision floating
> point math.  I guess it's because a lot of floating point operations are
> internally highly parallel and so extra precision doesn't make much speed
> difference?


I believe this is because current FP pipelines are generally double precision
all the way through.  Single precision code consumes just as much of the
execution pipeline as double precision code does.

The exception is the SIMD floating point engines (aka "multimedia
extensions") that a lot of processors support today.  But I normally just
write all floating point for standard double precision execution these days.
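
[If anyone wants to sanity-check the single- vs. double-precision claim on
their own CPU, a rough timing loop along these lines would do; the constants
are arbitrary, and volatile just keeps the compiler from deleting the work:]

    #include <stdio.h>
    #include <time.h>

    #define N 100000000L

    int main(void)
    {
        volatile float  f = 1.0f;      /* volatile: keep the loops alive */
        volatile double d = 1.0;
        clock_t t0, t1, t2;

        t0 = clock();
        for (long i = 0; i < N; i++) f = f * 1.0000001f + 0.5f;
        t1 = clock();
        for (long i = 0; i < N; i++) d = d * 1.0000001 + 0.5;
        t2 = clock();

        printf("float:  %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("double: %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        printf("(results: %g, %g)\n", (double)f, d);
        return 0;
    }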

 
> Anyway, the thing that really did affect performance was the data size
> of the numbers being used (whether short, int, long, float, double etc.)
> Because we had quite a few RAM cache misses, using a smaller data type
> effectively meant that we could have twice as many values in cache at
> the same time and each cache miss would bring twice as many new values
> into the cache.  So it was really the memory bandwidth required by the
> size of the data types we were using that slowed things down, not the
> time the CPU took to do a calculation on a double precision floating
> point number compared to, say, a simple int.


A good point, and one that applies to using LP64 types as well.  The
entirety of our code fits in cache, but data fetches are unavoidably
expensive.
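
[A quick way to see the data-size effect described above: the same reduction
run over 16-bit integers and over doubles, with working sets well past cache
size.  The sizes are invented for illustration; this is a sketch, not a
rigorous benchmark:]

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1L << 23)  /* 8M elements: 16 MB as int16_t, 64 MB as double */

    int main(void)
    {
        int16_t *si = malloc(N * sizeof *si);
        double  *sd = malloc(N * sizeof *sd);
        if (!si || !sd) return 1;
        for (long i = 0; i < N; i++) { si[i] = 1; sd[i] = 1.0; }

        long acc_i = 0;
        double acc_d = 0.0;
        clock_t t0 = clock();
        for (long i = 0; i < N; i++) acc_i += si[i];   /* 2-byte elements */
        clock_t t1 = clock();
        for (long i = 0; i < N; i++) acc_d += sd[i];   /* 8-byte elements */
        clock_t t2 = clock();

        printf("int16:  sum=%ld in %.3f s\n", acc_i,
               (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("double: sum=%.0f in %.3f s\n", acc_d,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(si); free(sd);
        return 0;
    }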


> I'd always had a bias against using floating point numbers ever
> since I used to write code 15 years ago when the CPU's I used
> weren't designed for it and it really slowed things down badly.
> It's a bit different now however with really fast floating point
> cores in CPUs.


One consideration that HAS gone into maintaining a pure integer code base is
that, as currently designed, it can run with extreme efficiency on simple
MasPar-style massively parallel integer hardware.  Used that way, the
opportunity exists for scalability far beyond what we could get if we
required a floating point pipeline.  The idea of having scads of simple
integer cores connected to a small amount of fast memory and low latency
messaging interconnects is appealing, and our code is very well suited to
this type of architecture.  Fortunately, there seem to be companies starting
to produce these types of chips.

Ultimately, we'd like to move the code to something like this, and since
there is no design or performance cost to using only integers on standard
PCs (they work better anyway in our case), we won't introduce floating
point into the kernel without a good reason.  So far we haven't actually
come across a need for floating point computation in the kernel, so the
issue has never arisen.

Cheers,

-James Rogers
 [EMAIL PROTECTED]





Re: [agi] Diminished impact of Moore's Law on AGI due to other bottlenecks

2003-01-04 Thread Shane Legg
James Rogers wrote:


> We just don't need that much raw CPU to get the job done.  The entirety of
> our number crunching is integer domain, and just addition/subtraction at
> that.  Aside: Since the engine operates on relative values and the dynamic
> range of integers on your average machine is quite high (higher than
> anything going on in the human brain), there is no reason to NOT use integer
> formats, particularly since integer math and manipulation is very fast on
> most common hardware.


I had similar thoughts, but when I did some tests on the webmind code
a few years back I was a little surprised to find that floating point
was about as fast as integer math for our application.  This seemed to
happen because where you could do some calculation quite directly with
a few floating point operations, you would need more to achieve the same
result with integer math due to extra normalisation operations etc.

I was also surprised to discover that the CPU did double precision
floating point math at about the same speed as single precision floating
point math.  I guess it's because a lot of floating point operations are
internally highly parallel and so extra precision doesn't make much speed
difference?

Anyway, the thing that really did affect performance was the data size
of the numbers being used (whether short, int, long, float, double etc.)
Because we had quite a few RAM cache misses, using a smaller data type
effectively meant that we could have twice as many values in cache at
the same time and each cache miss would bring twice as many new values
into the cache.  So it was really the memory bandwidth required by the
size of the data types we were using that slowed things down, not the
time the CPU took to do a calculation on a double precision floating
point number compared to, say, a simple int.

I'd always had a bias against using floating point numbers ever
since I used to write code 15 years ago, when the CPUs I used
weren't designed for it and it really slowed things down badly.
It's a bit different now, however, with really fast floating point
cores in CPUs.

Cheers
Shane



Re: [agi] Diminished impact of Moore's Law on AGI due to other bottlenecks

2003-01-04 Thread James Rogers
On 1/3/03 11:12 PM, "Gary Miller" <[EMAIL PROTECTED]> wrote:
> If these benchmarks were more readily available it would be even more
> apparent to businesses and users that a 3Ghz machine will not process a
> typical application twice as fast as a 1.5Ghz machine.


If you look critically at the various system performance parameters, you
find that system performance these days scales almost perfectly with memory
performance.  Memory performance and processor speed track each other only
very roughly, yet system performance on many benchmarks tracks memory
performance almost perfectly.  The strong implication is that for many
types of applications today, memory performance IS the "rate-limiting
factor" for system performance.  The only reason faster processors seem
faster is that GHz upgrades frequently come with memory performance
upgrades as well.

This is also why people who do scientific computing often use the STREAM
benchmarks (which measure system memory performance) to compare systems when
building a supercomputing cluster.  STREAM metrics map more closely to
real-world performance than a tight code loop running in cache to measure
CPU performance.
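
[For anyone who hasn't looked at STREAM, the heart of it is nothing exotic:
a "triad" kernel like the sketch below.  This is a toy version; the real
benchmark adds repetition, careful timing discipline, and validation:]

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1L << 24)   /* 16M doubles per array: far beyond any cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        for (long i = 0; i < N; i++) a[i] = b[i] + 3.0 * c[i];  /* triad */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* 24 bytes move per element: read b[i], read c[i], write a[i] */
        printf("triad: ~%.0f MB/s (a[0]=%g)\n", 24.0 * N / secs / 1e6, a[0]);
        free(a); free(b); free(c);
        return 0;
    }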

 
> How many of you out there with AGI projects feel you are limited
> currently in your research by CPU speeds?  I was myself up until 2 years
> ago.  If you do feel you are limited, what speeds are you currently
> running at, and how much more CPU (2x, 4x, 8x, ...) do you feel you could
> optimally utilize?


We don't feel particularly CPU limited, but we do feel memory limited, both
in terms of how much memory we can have and how fast the CPU can access it.
In fact, it would almost be preferable, performance-wise, to have a cluster
of relatively slow processors that can access their (much smaller) memory
spaces very fast than to have blindingly fast processors with huge address
spaces that may take a second or more to traverse in practice.

We just don't need that much raw CPU to get the job done.  The entirety of
our number crunching is integer domain, and just addition/subtraction at
that.  Aside: Since the engine operates on relative values and the dynamic
range of integers on your average machine is quite high (higher than
anything going on in the human brain), there is no reason to NOT use integer
formats, particularly since integer math and manipulation is very fast on
most common hardware.  You don't need to materialize floating point values
from relative integer values to get perfectly correct statistical behaviors,
a point that most people overlook (or perhaps their particular design
architecture forces them to do things this way -- hell if I know).
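
[One concrete instance of that point, sketched with invented names rather
than the engine's actual code: sampling among alternatives in proportion to
relative integer weights, with no floating point probability ever
materialized:]

    #include <stdio.h>
    #include <stdlib.h>

    /* Return an index in [0, n) with probability weight[i] / sum(weights).
     * All arithmetic is integer; no probabilities are ever computed. */
    static int pick_weighted(const long *weight, int n)
    {
        long total = 0, r;
        for (int i = 0; i < n; i++) total += weight[i];

        r = rand() % total;    /* slightly biased modulo; fine for a sketch */
        for (int i = 0; i < n; i++) {
            if (r < weight[i]) return i;
            r -= weight[i];
        }
        return n - 1;          /* unreachable if the weights are positive */
    }

    int main(void)
    {
        long w[] = { 10, 30, 60 };   /* relative weights: in effect 10/30/60% */
        int counts[3] = { 0, 0, 0 };
        for (int i = 0; i < 100000; i++) counts[pick_weighted(w, 3)]++;
        printf("%d %d %d\n", counts[0], counts[1], counts[2]);
        return 0;
    }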

But yeah, give me fast memory access and lots of it, and I'm happy.
Ironically, most systems made today that really do have good memory
architectures also tend to be highly optimized for floating point
performance at the expense of integer performance.


> And how much do you feel this would compress the actual project plan for
> your project?  These numbers should give a better indication of whether
> and how much current processing speeds are limiting the quest for AGI.
 

The problem today is almost entirely a software problem.  The hardware
problems and limitations, such as they are, are resolving themselves just
fine.  The software on the other hand...

-James Rogers
 [EMAIL PROTECTED]




RE: [agi] Chess Master Theory Of AGI.

2003-01-04 Thread Ben Goertzel



 
> This will occur before the predictions of the experts in the field of
> Singularity prediction because their predictions are based on a constant
> Moore's Law and they overestimate the computational capacity required for
> human-level AGI.  Their dates vary from 2016 to 2030 depending on whether
> they are using the 18 month figure or the 12 month figure.  Moore's Law
> is currently at 9 months and falling.  My calculations based on a falling
> Moore's Law put the Singularity on April 28th, 2005.
>
> This human-level AGI in a computer will be quite superior to a human
> because of several advantages that machines have over gray matter.  These
> advantages are: upgradability, self-improvement through redesign, self
> editability, reliability, functional parallelism, accuracy, and speed.
> This superiority will be quantitative, not qualitative.  It will be
> superior but completely comprehensible to us.  The belief in a radically
> different form of advanced thought incomprehensible to present humans is
> philosophical in nature, not based on evidence.

Mike,

Is it really true that Moore's Law is at 9 months and falling?  Do you have
some references on this?

Even if this were the case, it wouldn't cause the Singularity by 2005.
Processing power is not the only bottleneck!

It's true that with faster, cheaper processing power, more people will be
able to experiment with more significant AGI systems.

But even with a correct AGI design, and adequate funding, computing power
and staffing, I think it's going to take anyone several years to get from
AGI design to teachable human-level system.  That is the nature of
engineering complex software systems based on complex ideas.  And of course
it may take some time to get from teachable human-level system to
superhuman-level system as well!!!   ;-p

So, I think that the most wildly optimistic projection we can rationally
hope for is superhuman intelligence (the "Singularity") by 2010.

But this could only be achieved if *everything goes right*.  And of course,
I don't know how to estimate the odds that everything goes right.  An
example of "everything going right" would be: one of the currently
in-development AGI designs (say, Novamente or A2I2 or NARS) turns out to be
almost entirely correct, AND gets adequately funded... and teaching a
human-level AGI to productively self-modify toward unlimited intelligence
turns out to be a matter of a couple of years, not a decade.  This is a lot
of ANDs, Mike -- an awful lot of ANDs...

--
Ben
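
[A back-of-envelope sketch of the arithmetic behind these doubling figures;
this is an illustration, not either poster's actual calculation.  With a
doubling time of $T$ years, capacity grows as

$$C(t) = C(0) \cdot 2^{t/T}$$

so over ten years an 18-month doubling time ($T = 1.5$) gives
$2^{10/1.5} \approx 100\times$, while a 9-month doubling time ($T = 0.75$)
gives $2^{10/0.75} \approx 10{,}000\times$ -- which is why the assumed
value of $T$ swings the projected dates so widely.]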
 


RE: [agi] Diminished impact of Moore's Law on AGI due to other bottlenecks

2003-01-04 Thread Ben Goertzel


> How many of you out there with AGI projects feel you are limited
> currently in your research by CPU speeds?  I was myself up until 2 years
> ago.  If you do feel you are limited, what speeds are you currently
> running at, and how much more CPU (2x, 4x, 8x, ...) do you feel you could
> optimally utilize?
> And how much do you feel this would compress the actual project plan for
> your project?  These numbers should give a better indication of whether
> and how much current processing speeds are limiting the quest for AGI.

We are limited tremendously by CPU speed and RAM capacity.

Either greater CPU speed or greater RAM capacity would be valuable, but the
biggest boost would be both together.

We could utilize essentially any amount of CPU speed or RAM capacity.  No
limit in sight.

Having a CPU with (for example) 10x greater speed would have a HUGE positive
impact on some of the work we're doing.

One reason is that we run a lot of tests, in which we systematically try
different parameter values for aspects of our AI code (searching regions of
parameter space with a GA or other optimization tool).  If each test would
run 10x faster, that would be a very good thing.
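
[To illustrate why per-test speed dominates here, a toy parameter sweep in
the spirit of what's described above; run_test and both parameters are
invented stand-ins, not Novamente internals:]

    #include <stdio.h>

    /* Invented stand-in for one full system test at the given settings;
     * in reality each call might take minutes or hours, which is why a
     * 10x faster CPU translates directly into 10x more sweep coverage. */
    static double run_test(double rate, double decay)
    {
        return -(rate - 0.3) * (rate - 0.3)
               - (decay - 0.7) * (decay - 0.7);   /* toy fitness surface */
    }

    int main(void)
    {
        double best = -1e9, best_rate = 0.0, best_decay = 0.0;
        for (int i = 0; i <= 10; i++) {
            for (int j = 0; j <= 10; j++) {       /* 121 test runs */
                double score = run_test(i / 10.0, j / 10.0);
                if (score > best) {
                    best = score;
                    best_rate = i / 10.0;
                    best_decay = j / 10.0;
                }
            }
        }
        printf("best: rate=%.1f decay=%.1f score=%g\n",
               best_rate, best_decay, best);
        return 0;
    }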

As to how much this would accelerate our progress toward AGI, that's hard to
say.  Speed of running tests is only one rate-limiting factor.  No amount of
computer power will diminish the time it takes for humans to write and debug
code, and to solve the various puzzles arising in translating our
mathematical/conceptual design, piece by piece, into software components.

My best guess is:

1) If we had vastly better CPU's and vastly more RAM, the amount of time to
get to a complete working implementation of a Novamente system might be
reduced to 2/3 what it is right now.

2) HOWEVER, once we get to the stage of having a complete working
implementation, the next phase is mostly a testing, tuning and teaching
phase.  That phase, I reckon, will be much more easily accelerated via
increased CPU power, because it will largely be a matter of running tests
to tune parameters, and the speed of doing this is roughly proportional to
CPU speed (until one reaches the point where human attention to interpret
the test results is the rate-limiting factor).

-- Ben Goertzel





Re: [agi] Intelligence by definition

2003-01-04 Thread Boris Kazachenko



Yes, if you have a huge amount of space and time resources available, you
can start your system with a blank slate -- nothing but a very simple
learning algorithm, and let it learn how to learn, learn how to structure
its memory, etc. etc. etc.

This is pretty much what OOPS does, and what is suggested in Marcus
Hutter's related work.

It is not a practical approach, in my view.  My belief is that, given
realistic resource constraints, you can't take such a general approach and
have to start off the system with specific learning methods, and even
further than that, with a collection of functionally-specialized
combinations of learning algorithms.

I could be wrong of course but I have seen no evidence to the contrary, so
far...
***

A fixed collection of methods won't scale, - the power of a method should
correspond to the generality (predictive power) of a pattern.  The whole
point of such pattern-specific & level-specific scaling of methods IS
computational efficiency, - it's a lot less expensive to incrementally
scale methods for individual patterns than to indiscriminately apply a
fixed set of them to patterns most of which are either too complex or too
simple for any given method.

***
To select formulas you must have an implicit criterion, why not try to make
it explicit?  I don't believe we need complex math for AI, complex methods
can [sorry, that was a typo, it should be "can't"] be universal, -
generalization is a reduction.  What we need is an autonomously scalable
method.
***

Well, if you know some simple math that is adequate for deriving a
practical AI design, please speak up.  Point me to the URL where you've
posted the paper containing this math!  I'll be very curious to read it ;-)
***

We both know that there is no practical general AI yet; I'm trying to
suggest a theoretically consistent one.  Given that the whole endeavor is
context-free, it should ultimately be the same thing.  I don't have any
papers; when the theory is finished I'll write a program, not a paper.

My method is ultimately simple: sequential expansion of search for
correlations of sequentially increasing arithmetic power/derivation, for
inputs which had above-average compression over the shorter range of
search / lower arithmetic power/derivation.  What's new here (correct me if
I'm wrong) is how I define compression, which determines the value of a
pattern, & how I encode these patterns to preserve restorability & enable
analytical comparison (between individual variable types within patterns).
Both are necessary to selectively scale the search, & I don't see either
in OOPS.

It's in my introduction, someplace, but I realize it must be mental torture
to try to figure it out.  Why would you work on it?  Only if you agree with
my theoretical assumptions, I suppose; the method is uniquely consistent
with them.

Boris.