> I have heard some insider news that Intel *could* hit the 1 GigaHertz mark
> by year's end if they had a reason to (if AMD jumped out with an unexpected
> surprise).  Once we start hitting the sweet spot in die size, I am under
> the impression that they will start exploring the multiple processor
> route...
> Multiple processor systems are already becoming more mainstream.  So I
> think we will be able to continue with MASSIVE performance increases over
> our lifetimes.  This is assuming we stick with the Von Neumann
> architecture; new and EXCITING technologies (such as neural computing &
> massively parallel systems) are just over the horizon.  These technologies
> and others offer us unimaginable new possibilities with their own unique
> strengths & weaknesses -- maybe when these new tools are out there we will
> find a new Algo. that better fits their strengths.

I'm pretty sure Intel already has test chips running at least that fast,
probably faster.  Speeds like that only become mainstream when it's
affordable to mass produce such chips (higher yields) and people are willing
to pay more.  I think they'd prefer to milk as much out of us as they can by
slowly introducing improvements, forcing us to upgrade every so many years.
Heck, if they just came out with a super fast ultra chip twice as fast as
what's out there now for about the same price, they wouldn't make as much
money in the long haul.

As for SMP, yes, I think you're right.  It is becoming more mainstream.
With more Intel-compatible OSes capable of SMP, it's more reasonable to
expect someone to buy an SMP-capable computer and pop in another cheap
processor when more performance is desired, rather than shelling out for a
new system.
Windows 95/98 was one big thing holding this market back, but with Linux and
NT on the upper end, and the forthcoming Windows 2000, which will be
attractive to current home users, SMP should be a big hit.

Abit just announced a dual Celeron motherboard (Socket 370), even though
Intel won't support SMP with Celerons.  At the low prices that Celerons are
going for, it's hard to pass up.  Sure, we can't test one number across both
processors (though I'd still like to see that someday), but you can test two
numbers in just a bit over the time it'd take to test one.
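
Just to make that throughput point concrete -- this is only a toy sketch in
Python, with tiny exponents picked so it finishes instantly (the real
clients are hand-optimized FFT code, of course) -- but it shows two
independent Lucas-Lehmer tests running side by side, one per processor,
instead of one test split across both:

from multiprocessing import Pool

def lucas_lehmer(p):
    # Lucas-Lehmer: 2^p - 1 is prime iff s(p-2) == 0 (mod 2^p - 1),
    # where s(0) = 4 and s(k) = s(k-1)^2 - 2.
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m       # one big squaring per iteration
    return p, s == 0

if __name__ == "__main__":
    # One exponent per CPU: wall time is roughly that of the slower test,
    # not the sum of the two.
    with Pool(processes=2) as pool:
        for p, is_prime in pool.map(lucas_lehmer, [521, 523]):
            print("2^%d - 1 is %sprime" % (p, "" if is_prime else "not "))

That two-at-once idea is what a dual Celeron box buys you today; splitting a
single test across both CPUs is the part that is still hard.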

I also wonder how much Merced will affect coding practices.  With its EPIC
architecture, programmers will have to rely on their wits more (the way they
should be), hopefully getting tighter code.  Of course, George already has
his stuff about as efficient as you can get, so I'm thinking more broadly
than GIMPS.

> My understanding of the purpose of awards like the one the EFF is posting
> is to foster new and innovative ways to solve problems that seem almost
> impossible at the time.  If asked 10 years ago, who here would have thought
> we would be testing numbers as big as we have...  George & Scott's vision
> of this very project is such an example of breakthrough technology, which
> allows us to advance the scientific frontier at breakneck speed.

Perhaps vector processing technology like that found in high-end
supercomputers will show up more in low-end PCs and CPUs.  Certainly,
supercomputing technology has always tended to filter down, eventually.
From what I learned from the discussions about getting an SMP version of
NTPrime, for instance, it certainly seems that vector processors would do
much better in that regard, or at least be faster than your average
superscalar design anyway.
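
To put a rough picture on that (just an illustration with made-up array
sizes, not how NTPrime or Prime95 actually lays out its transform): the bulk
of each Lucas-Lehmer iteration is one enormous FFT-based squaring, and its
inner steps apply the same operation independently to a huge array of
elements.  That is exactly the shape of work a vector unit streams through
in hardware, whereas splitting the same loop across SMP processors means
threads and synchronization overhead:

import numpy as np

n = 1 << 16                             # transform length, chosen arbitrarily
a = np.fft.rfft(np.random.rand(n))      # stand-in for the transformed number

# Scalar view: one element at a time, the way a plain loop executes.
b_scalar = np.empty_like(a)
for i in range(len(a)):
    b_scalar[i] = a[i] * a[i]           # pointwise step of an FFT squaring

# Vector view: the whole array in one data-parallel operation -- the form
# that vector (or SIMD) hardware executes natively, with no thread overhead.
b_vector = a * a

assert np.allclose(b_scalar, b_vector)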

Aaron
