On Saturday 02 March 2002 00:17, Eric Hahn wrote:

> Found this article on News.com about the new Pentium 4's
> coming out next year... code-named Prescott.  It mentions
> a speed of 4GHz... and the use of hyper-threading...
> Hyper-threading is supposed to allow two applications or
> application threads to run on one processor at the same
> time... by allowing one application (or thread) to use
> parts of the processor it needs... and the second
> application (or thread) to use others...

There was a news article I read earlier this week regarding the new Pentium 4 
based Xeon chips, which already implement hyperthreading. The results made 
interesting reading:

(a) Windows 2000 reports a dual-CPU system as having _four_ CPUs. This of 
course raises concerns about how to optimise the distribution of tasks across 
the virtual CPUs, given that it is wasteful to have two virtual CPUs in the 
same physical chip busy whilst the other physical CPU is idle. There is also a 
software licensing issue for the many commercial products (including Windows) 
which are licensed on a per-CPU basis. (Remember that Windows XP Home Edition 
supports only one CPU, whilst 2000/XP Professional support only two.)

(b) In a database application, hyperthreading was found to improve throughput 
by ~60%.
>
> Essentially this could speed up testing even more... by
> having one thread of Prime95 use the FPU... while another
> uses the IAU...

Here's the rub. In the database application, there is very frequent task 
switching (this is an inevitable consequence of the fact that this sort of 
task is I/O intensive). Hyperthreading improves response in these 
circumstances by allowing an extra thread to be using some of the "spare" 
capacity in pipelines, instruction decoders etc. which normally goes to waste 
as a consequence of frequent changes in the CPU context.

In a numerical application, we _hope_ that task switching will be relatively 
infrequent (since there will always be a loss associated with every task 
switch; at least the (virtual) CPU context has to be saved and restored with 
every task switch). Meanwhile, with an application which has been optimized 
with compute efficiency in mind, there will not be all that much spare 
silicon for hyperthreading to take advantage of.

So I don't think that hyperthreading will benefit us to any significant 
extent. In fact there is likely to be a significant performance degradation 
if any attempt is made to run more than one compute-bound process in parallel 
threads in any single physical CPU. The upside is that, if one background 
compute-bound thread per physical CPU is running on a hyperthreading system, 
both the interactive response of the system and the performance hit caused by 
"interruptions" to the background job should suffer to a much smaller 
extent.
>
> The article also mentions AMD's Clawhammer due out the
> end of this year... able to run 64-bit applications...
> This could significantly reduce the number of
> adds/multiplies required for testing....

Except that we don't use pure integer ALU arithmetic all that much, at any 
rate whilst LL testing. Hammer is certainly an interesting development, but it 
will probably run existing mprime/Prime95 32-bit code efficiently enough that 
a specially-optimized version can wait until we see what the market take-up 
is likely to be.

(Same with Itanium, which is different enough that mprime/Prime95 will not 
run on it without horrendous inefficiencies caused by code emulation. However, 
Glucas runs beautifully. Intel's pricing policy and Microsoft's lack of 
support for Itanium ensure that these CPUs are not likely to be found in 
consumer products in the reasonably foreseeable future, even though the 
Itanium architecture has many features which are attractive from our point of 
view).
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
