Re: Mersenne: Just curious

2000-04-19 Thread Ryan McGarry

All this talk about PC's running 24/7 has convinced me of the
reliability of processors.  It's always been my thought that if your computer
is on, it's always running at full speed, whether or not you're running
Prime95.  
I've always left my computers on all the time, and never had a problem. 
This reliability got me thinking about whether any other appliances have
the same level of reliability.

For example, I remember reading an article about light bulbs which said
that if you leave a light bulb on continuously, it will last much longer
due to the fact that the filament doesn't contract when it gets
cold.  

I suppose my question is whether it's riskier for your
processor to cool off regularly than to be left on 24/7?

Thanks,
Ryan McGarry

John R Pierce wrote:
 
  When pentium pro 200's were the hot new processor
  (in speed, more so than in wattage),
  I began running some dual-ppro-200 systems with two prime95 instances
 each.
  Those processors are still running it.
  I've never had to replace a cpu or motherboard
  (though occasionally a motherboard power connector
  had to be replaced because it burned up).
  I'm not sure but I think that's three years.
 ...
 
 Until last August, my *original* Prime95 participant, a Pentium-100 running
 first Win95, later Win98, faithfully chugged along 24/7.   I started this
 CPU back when the very first Mersenne article came out in the San Jose
 Mercury News.  This was long before GIMPS had found a prime.  Since this
 win95 box's only other duty was print server for an old inkjet, and the very
 occasional fax, it regularly went a month or more between reboots.  Said
 machine is still alive and well, only now it's a 133MHz, 64MB RAM Linux-based
 internet server for my DSL connection.  http://hogranch.com :)   The P100
 was new when the first 133MHz Pentiums were becoming available and the 90s
 and 100s got a lot cheaper.  Off the top of my head, I think it might be 5+
 years old.  And, yes, I have a dual PPro-200 which has been running prime95
 24/7 since it was built 3 years ago.
 
 -jrp
 



RE: Mersenne: Just curious

2000-04-19 Thread Aaron Blosser

> All this talk about PC's running 24/7 has convinced me of the
> reliability of processors.  It's always been my thought that if your computer
> is on, it's always running at full speed, whether or not you're running
> Prime95.
> I've always left my computers on all the time, and never had a problem.
> This reliability got me thinking about whether any other appliances have
> the same level of reliability.

I've had many machines running NTPrime for years now...  Sometimes those NT
servers are so rock solid they'll run for months at a time between reboots and
I've had nary a problem.

> For example, I remember reading an article about light bulbs which said
> that if you leave a light bulb on continuously, it will last much longer
> due to the fact that the filament doesn't contract when it gets
> cold.

That's true...but only as far as total number of hours of life.  There's a
reason that light bulbs almost always burn out right when you turn them
on...only VERY rarely will they burn out while already on.  Just that having a
cold filament which suddenly gets really hot...it can put a lot of stress on
that poor thing...

But of course, you're much better off turning on a light when you need it
because *you* will get more use out of it that way than if you just left your
lights on all day, all night, all the time. :)

I mean, you might get the full 3000 hours out of a bulb if you left it on all
the time...but that's only 125 days.  Now take that same bulb and suppose
you turn it off and on 3-4 times a day for a grand total of maybe 5
hours a day.  Okay... 5 hours a day would normally be 600 days; the
cycling will probably cut its life by nearly half, but you still get
300 days' worth of use out of it.
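
Spelled out as a little script (the 3000-hour rating and the "cuts life in
half" figure are just my guesses above, not real manufacturer data):

# Back-of-the-envelope bulb math; the constants are only the illustrative
# guesses from this post, not real data.
RATED_HOURS = 3000        # assumed total life of the bulb
DAILY_USE_HOURS = 5       # hours of light you actually need per day
CYCLING_PENALTY = 0.5     # guess: on/off cycling roughly halves total life

always_on_days = RATED_HOURS / 24                                    # ~125 days
cycled_days = RATED_HOURS * (1 - CYCLING_PENALTY) / DAILY_USE_HOURS  # ~300 days

print(f"left on 24/7:    ~{always_on_days:.0f} days of service")
print(f"cycled, 5 h/day: ~{cycled_days:.0f} days of service")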

Okay, so I'm definitely over analyzing it...

Of course, a lot of people leave their computers on all the time because
they're not as fast as a light bulb when it comes to turning it on (not
usually anyway)...  And some businesses need to leave them on all the time for
doing software distributions during off hours.  So, with that being said, you
gotta figure hey...we're wasting all that electricity anyway, let's at least
*do* something with it!

Sigh...even if a company used the Prime95 time of day stuff to only let it run
during certain hours, that'd be a big plus...oh well.

I don't suppose George could just program something into the code to have it
check for the user being idle (like the screen saver check does, but
independent of the system screen saver routines) such that if the user doesn't
hit a key or move the mouse for xx minutes, it would begin its calculations
(still at whatever priority you set it to...idle by default), but when the
user is hitting keys or moving the mouse, it'll stop calculations altogether?
That may allay the (unfounded) fears of some that Prime95 somehow steals
cycles from other running programs.
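
Just to show the idea is simple, here's a rough sketch of that kind of check
in Python on Windows (using the Win32 GetLastInputInfo call through ctypes).
Obviously this is NOT Prime95's actual code; the threshold and the work
function are placeholders.

import ctypes
import time

class LASTINPUTINFO(ctypes.Structure):
    _fields_ = [("cbSize", ctypes.c_uint), ("dwTime", ctypes.c_uint)]

def idle_seconds():
    """Seconds since the last keystroke or mouse movement (Windows only)."""
    info = LASTINPUTINFO()
    info.cbSize = ctypes.sizeof(info)
    ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
    return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

def do_some_work():
    pass    # placeholder for one small chunk of the real calculation

IDLE_MINUTES = 10           # the "xx minutes" threshold -- arbitrary here

while True:
    if idle_seconds() > IDLE_MINUTES * 60:
        do_some_work()      # user has been away long enough; crunch
    else:
        time.sleep(5)       # user is active; stop calculations altogether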

Just some thoughts...

Oh, and while I'm at it...running Prime95 does put a heavier load on a CPU
than if you just let it sit there doing nothing while powered on...merely
because the FPU is churning away whereas normally it doesn't do much.  That'll
increase the heat output some, and draw a bit more power...  But the CPUs are
made to take it, so might as well.  And, like we've all been saying, we've had
CPUs running Prime95 for years straight with no ill effects.

From my US WEST experience on a nice sample of thousands of machines, I saw
that if a CPU was bad at all, it would show up as errors in the prime.log
within an hour.  If it could make it past that, that CPU is good! :)
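
If anyone wanted to automate that sanity check, a quick scan along these
lines would do.  This is purely a sketch of my own; it assumes nothing about
the prime.log format beyond error lines containing the word "error".

import re
import sys

def suspicious_lines(path="prime.log"):
    """Return any log lines that look like errors (crude keyword match)."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return [line.rstrip() for line in f if re.search(r"error", line, re.I)]

if __name__ == "__main__":
    hits = suspicious_lines(sys.argv[1] if len(sys.argv) > 1 else "prime.log")
    print(f"{len(hits)} suspicious line(s)")
    for line in hits[:20]:
        print(" ", line)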

Aaron




RE: Mersenne: Just curious

2000-04-19 Thread Will Edgington


Aaron Blosser writes:
   I don't suppose George could just program something into the code
   to have it check for the user being idle (like the screen saver
   check does, but independent of the system screen saver routines)
   such that if the user doesn't hit a key or move the mouse for xx
   minutes, it would begin its calculations (still at whatever
   priority you set it to...idle by default), but when the user is
   hitting keys or moving the mouse, it'll stop calculations
   altogether?  That may allay the (unfounded) fears of some that
   Prime95 somehow steals cycles from other running programs.

Unfortunately, I have personal experience in this area, though not
with Prime95.  My own (UNIX-based) Mersenne programs and scripts, from
before GIMPS started, included checks not only that all logged-on
users had been idle for at least three hours (terminal, mouse, and
keyboard alike), but also that the load was only the 1.0 due to
the program itself (and less than 0.1 or so before starting).  These
checks themselves (that the load remained low and any users were still
idle), when performed every two minutes under SunOS 4.x on the
SPARCstation 1's and 2's that were available at the time, usually
kicked the load average up another 0.5 or so.
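
Just to make the idea concrete, the same two checks (nobody active, load
essentially zero) can be sketched on a modern UNIX box in a few lines of
Python.  This is only a local sketch, not my old SunOS scripts (which ran
remotely); the thresholds are the ones mentioned above.

import os
import subprocess
import time

IDLE_HOURS = 3         # every logged-on user must be idle at least this long
MAX_START_LOAD = 0.1   # don't start unless the machine is essentially idle

def users_all_idle(min_idle_seconds):
    """True if every terminal listed by `who` has been idle long enough."""
    now = time.time()
    out = subprocess.run(["who"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        tty = os.path.join("/dev", fields[1])
        try:
            idle = now - os.stat(tty).st_atime   # atime ~= last keystroke
        except OSError:
            continue
        if idle < min_idle_seconds:
            return False
    return True

load1, _, _ = os.getloadavg()
if load1 < MAX_START_LOAD and users_all_idle(IDLE_HOURS * 3600):
    print("machine looks idle -- OK to start factoring work")
else:
    print("someone is around, or the load is up -- stay out of the way")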

But two minutes is quite a while to wait for something hogging your
CPU to stop.  At least according to the ten or so people that
complained out of the roughly 100 computers my scripts were running on
for a couple of years.  And the scripts were careful to start only
after hours, even if the computer appeared idle during the day.  And
the programs that did the actual work (almost always trial factoring
because I didn't have an FFT-based LL program) always ran at the
absolute lowest priority UNIX offers.

Note further that the check for when to start was done remotely; when
a machine was not idle there was _no_ process of mine on it at all,
not even one merely checking for idleness: the load average and
user list could be read without any local process.

There were _still_ complaints.  Even though the only thing that some
of them could point at that indicated "slowness" was the load average
being 1.0 instead of 0.0.

So, no matter how much CPU you think this sort of change could gain
GIMPS, I must suggest that it _not_ be done.

Except - perhaps - under the control of another .ini variable, with
the default being the current behaviour.

Will



Re: Mersenne: Just curious

2000-04-19 Thread Ken Kriesel

Monitors, on the other hand, seem to like to be shut off regularly.
At work, we bought 9 Nanao F750's & 60's in 1993.
Only two survive, and one sits on my desk and is turned on and
off daily.  Those that were left on 24/7 did not survive the last
cycle of CPU upgrades.


Ken

At 01:42 AM 4/19/2000 -0500, Ryan McGarry wrote:
All this talk about PC's running 24/7 has convinced me of the
reliability of processors.  It's always been my thought that if your computer
is on, it's always running at full speed, whether or not you're running
Prime95.  
I've always left my computers on all the time, and never had a problem. 
This reliability got me thinking about whether any other appliances have
the same level of reliability.

For example, I remember reading an article about light bulbs which said
that if you leave a light bulb on continuously, it will last much longer
due to the fact that the filament doesn't contract when it gets
cold.  

I suppose my question is whether it's riskier for your
processor to cool off regularly than to be left on 24/7?

Thanks,
Ryan McGarry

John R Pierce wrote:
 
  When pentium pro 200's were the hot new processor
  (in speed, more so than in wattage),
  I began running some dual-ppro-200 systems with two prime95 instances
 each.
  Those processors are still running it.
  I've never had to replace a cpu or motherboard
  (though occasionally a motherboard power connector
  had to be replaced because it burned up).
  I'm not sure but I think that's three years.
 ...
 
 Until last August, my *original* Prime95 participant, a Pentium-100 running
 first Win95, later Win98, faithfully chugged along 24/7.   I started this
 CPU back when the very first Mersenne article came out in the San Jose
 Mercury News.  This was long before GIMPS had found a prime.  Since this
 win95 box's only other duty was print server for an old inkjet, and the very
 occasional fax, it regularly went a month or more between reboots.  Said
 machine is still alive and well, only now it's a 133MHz, 64MB RAM Linux-based
 internet server for my DSL connection.  http://hogranch.com :)   The P100
 was new when the first 133MHz Pentiums were becoming available and the 90s
 and 100s got a lot cheaper.  Off the top of my head, I think it might be 5+
 years old.  And, yes, I have a dual PPro-200 which has been running prime95
 24/7 since it was built 3 years ago.
 
 -jrp
 



Re: Mersenne: Just curious

2000-04-19 Thread Brian J. Beesley

On 19 Apr 00, at 1:42, Ryan McGarry wrote:

 I suppose my question is whether it's riskier for your
 processor to cool off regularly than to be left on 24/7?

CRT monitor is definitely better OFF when not needed. The electron 
beams definitely age, and the very high tension circuitry contained 
in a CRT does represent a (small) fire risk which is eliminated by 
having it switched off. Power-saving "snooze" mode may reduce 
electricity costs but has no effect on fire safety. I strongly 
recommend using the front panel power switch on a CRT monitor.

LCD monitor, & most other electronic components, are definitely 
better left switched ON.

Hard disk drives seem to last much longer if they're left running
continuously. I'd recommend disabling "power saving" modes on HDDs
unless power consumption is critical (e.g. a notebook computer when
running on internal power). However, HDDs often fail if they've been
running continuously for years, then switched off & left off long
enough to cool right down. I think the heads get "glued" to the
platters. The best advice for HDDs which must be turned off is to
turn off, wait for 2 mins, turn on, wait for 10 mins, turn off, wait
for 10 mins, turn on, wait for 2 mins & finally turn off. The idea is
to dissipate the "glue" which accumulates with constant use.

Power supply units seem to have a life which is governed primarily by
the number of times they're turned on & off. I've never heard of one
failing in service whilst being fed a clean mains supply, but, given
a mains glitch, it's common to have to replace a proportion of PSUs.
(Like light bulbs, they seem to fail at the instant power is applied.)

I find the commonest failure in PC systems is _cooling fans_. Like
HDDs, it seems to be the case that the bearing will "glue up" if an
always-running fan is switched off & left to cool right down.
Sometimes this makes them very noisy for a few minutes when power is
restored, sometimes they just plain fail. Broken cooling fans are
definitely very bad for the reliability of PC systems!


Regards
Brian Beesley



RE: Mersenne: Just curious

2000-04-19 Thread Aaron Blosser

> Hard disk drives seem to last much longer if they're left running
> continuously. I'd recommend disabling "power saving" modes on HDDs
> unless power consumption is critical (e.g. a notebook computer when
> running on internal power). However, HDDs often fail if they've been
> running continuously for years, then switched off & left off long
> enough to cool right down. I think the heads get "glued" to the
> platters. The best advice for HDDs which must be turned off is to
> turn off, wait for 2 mins, turn on, wait for 10 mins, turn off, wait
> for 10 mins, turn on, wait for 2 mins & finally turn off. The idea is
> to dissipate the "glue" which accumulates with constant use.

For what it's worth...

I've heard that referred to as "stiction" :)  There's a decent enough
solution to the problem of "stuck" drive heads, but you really should be
absolutely sure that's what the problem is...

Drives have a "landing zone" or parking area where the heads move to
when the drive is powered down.  There's no data on that part of the platter,
so if your heads do get stuck there when it's turned off, there's something you
can try...

Again, be sure that's really what the problem is before trying this... :)

But, in short, with the drive powered on and making the "hey, my heads are
stuck!" noise, you gently rap the drive on a hard surface.  Rap it harder
and harder until finally you hear the heads moving about normally.  Usually,
the head will unstick itself and as long as the head wasn't actually
damaged, you may have just enough time to get your data backed up pronto.

Jeremy and I used to do that a lot on those first generation IDE drives
(which seemed to have this problem much more often) back when we were
computer techs...  It sounds funny, I know, but it worked great most of the
time.

For what it's worth, modern drives rarely have this problem.  Even the
10,000 RPM drives which get QUITE hot during use have good landing zone
areas where the heads aren't likely to come into contact with the hot
platters.

> Power supply units seem to have a life which is governed primarily by
> the number of times they're turned on & off. I've never heard of one
> failing in service whilst being fed a clean mains supply, but, given
> a mains glitch, it's common to have to replace a proportion of PSUs.
> (Like light bulbs, they seem to fail at the instant power is applied.)

For power supplies, having a decent UPS or even just a good line conditioner
is a MUST when you want to prolong its life.  Anyone who cared to could
hook a scope to a power line (make sure the scope is protected from
overvoltage! :) and if it's a nice digital scope, you can see the surges and
sags that happen *all the time*.

Of course, not many folks have digital scopes...  But a decent UPS does its
own logging...the APC Smart-UPS for instance.  It'll keep track of the peaks
and valleys through the day and it really is amazing what your poor little
power supply has to deal with all the time.  Sags can be just as damaging to
your supply as a spike, by the way.

> I find the commonest failure in PC systems is _cooling fans_. Like
> HDDs, it seems to be the case that the bearing will "glue up" if an
> always-running fan is switched off & left to cool right down.
> Sometimes this makes them very noisy for a few minutes when power is
> restored, sometimes they just plain fail. Broken cooling fans are
> definitely very bad for the reliability of PC systems!

I'll second that.  Good servers (Compaq servers, for instance) monitor temps
at key points and actually have redundant, hot-swappable cooling fans.
Gotta love that stuff.

Aaron




RE: Mersenne: Just curious

2000-04-19 Thread Henrik Olsen

On Wed, 19 Apr 2000, Aaron Blosser wrote:
 For power supplies, having a decent UPS or even just a good line conditioner
 is a MUST when you want to prolong its life.  Anyone who cared to could
 hook a scope to a power line (make sure the scope is protected from
 overvoltage! :) and if it's a nice digital scope, you can see the surges and
 sags that happen *all the time*.
 
 Of course, not many folks have digital scopes...  But a decent UPS does its
 own logging...the APC Smart-UPS for instance.  It'll keep track of the peaks
 and valleys through the day and it really is amazing what your poor little
 power supply has to deal with all the time.  Sags can be just as damaging to
 your supply as a spike, by the way.
I keep hearing this stuff about power problems, and while I can understand
the need for a UPS, the need for a line conditioner evades me, possibly
because I live in Denmark which, judging from the stories, has much cleaner
power.

Though this is getting way off topic, does anyone know where it's possible
to get comparable data on the quality of the mains power in different
countries?
If possible, collected neither by the power suppliers nor by UPS
manufacturers. :) 

-- 
Henrik Olsen,  Dawn Solutions I/S   URL=http://www.iaeste.dk/~henrik/
 Example is better than following it.
  The Devil's Dictionary, Ambrose Bierce.





RE: Mersenne: Just curious

2000-04-19 Thread Jeremy Blosser

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Aaron
Blosser
Sent: Wednesday, April 19, 2000 12:18 PM
To: Mersenne@Base. Com
Subject: RE: Mersenne: Just curious


SNIP
> For what it's worth...
>
> I've heard that referred to as "stiction" :)  There's a decent enough
> solution to the problem of "stuck" drive heads, but you really should be
> absolutely sure that's what the problem is...
>
> Drives have a "landing zone" or parking area where the heads move to
> when the drive is powered down.  There's no data on that part of the
> platter, so if your heads do get stuck there when it's turned off,
> there's something you can try...
>
> Again, be sure that's really what the problem is before trying this... :)
>
> But, in short, with the drive powered on and making the "hey, my heads are
> stuck!" noise, you gently rap the drive on a hard surface.  Rap it harder
> and harder until finally you hear the heads moving about normally.  Usually,
> the head will unstick itself and as long as the head wasn't actually
> damaged, you may have just enough time to get your data backed up pronto.
>
> Jeremy and I used to do that a lot on those first generation IDE drives
> (which seemed to have this problem much more often) back when we were
> computer techs...  It sounds funny, I know, but it worked great most of the
> time.


If I remember correctly, it seemed to happen with certain batches of
drivers. Like a batch of WD drivers and then later maybe Maxtor or whatever.
And there was *always* the same problem in that it would spin up and then
right back down again at boot (usually the first boot). So I think it was
more of a shipping or drive problem or something more than anything. But, as
a last ditch effort it works...

In reality tho, I think that what is *more* common is the actual controller
or PCB on the drive starting to flake out before an actual problem w/ the
platters etc.  If it is a platter/head problem, it's usually due to
abuse (such as dropping something heavy on your drive while it's
reading/writing).

I'll never forget the times when we would do data recovery of a bad drive by
putting it in the freezer for 30 mins, which we *theorized* shrank the PCB
on the HD thus fixing some stress fracture or whatever temporarily (long
enough to get the data off the drive before it heated back up again).

Just remember the following is ONLY recommended if you are trying last ditch
type of things and don't want to spend the $$$ sending it off to a *REAL*
data recovery type of place.

> For what it's worth, modern drives rarely have this problem.  Even the
> 10,000 RPM drives which get QUITE hot during use have good landing zone
> areas where the heads aren't likely to come into contact with the hot
> platters.
SNIP



RE: Mersenne: Just curious

2000-04-19 Thread Jeremy Blosser

Make that "drives" not "drivers"...

And to think, it's only Wed.

-Jeremy



RE: Mersenne: Just curious

2000-04-19 Thread Aaron Blosser

> > Jeremy and I used to do that a lot on those first generation IDE drives
> > (which seemed to have this problem much more often) back when we were
> > computer techs...  It sounds funny, I know, but it worked great most of the
> > time.

> If I remember correctly, it seemed to happen with certain batches of
> drivers. Like a batch of WD drivers and then later maybe Maxtor or whatever.
> And there was *always* the same problem in that it would spin up and then
> right back down again at boot (usually the first boot). So I think it was
> more of a shipping or drive problem or something more than anything. But, as
> a last ditch effort it works...

That was a different problem.  That's where WD threatened to sue me and my
company when I discovered a LARGE batch of their drives were having failure
rates of nearly 80%.  When I got on the newsgroups to see if others had the
same problem, I found some other cases with those same drives, so I mentioned
my problem of 80% and up failure rates...

Well, WD apparently monitors the newsgroups and they contacted me and
threatened me and the company I worked for with a libel suit.  Geez.

Of course once they got our bad drives back and examined them, sure enough,
that's when they discovered a problem in their manufacturing that was leaving
silica deposits all over inside the sealed case...the flakes would be whizzing
around inside the drive like little asteroids, tearing the heads and platters
to shreds basically.

So in the end, they did find out that it was a manufacturing problem, and I
never heard an apology from them for threatening me like that. :(  Hmph...I've
never bought a WD drive since and always tell people to avoid them. :)  They
did fix the problems with their drives, but in their initial period of denial
they refused to swap out the drives until they actually failed.  I had about
30 screaming programmers I was supporting who couldn't understand why WD
refused to proactively replace the drives that hadn't yet failed, or why we
had to wait for them to lose all their data before we could replace them.
Frankly, they were worried, because they'd seen the drives of the other
programmers that *had* already died, taking all their data with them.  So,
thanks a lot WD! :-P

> In reality tho, I think that what is *more* common is the actual controller
> or PCB on the drive starting to flake out before an actual problem w/ the
> platters etc.  If it is a platter/head problem, it's usually due to
> abuse (such as dropping something heavy on your drive while it's
> reading/writing).

At the risk of going too off topic, I do recall that in some cases we were
able to recover data by taking the controller board from another drive of the
same make/model and mounting it on the failed drive.  If we were lucky, it
*was* just the old board that had flaked out and we could still recover the
data.

> I'll never forget the times when we would do data recovery of a bad drive by
> putting it in the freezer for 30 mins, which we *theorized* shrank the PCB
> on the HD thus fixing some stress fracture or whatever temporarily (long
> enough to get the data off the drive before it heated back up again).

You know bro, we were the data recovery pros there! :)

Well, it all goes to show you that you're FAR more likely to have
something besides the CPU die first, so you can run Prime95 to your
heart's content... just make sure you back up the files, because chances are
your drive will go first. :)

Aaron




Re: Mersenne: Just curious

2000-04-18 Thread Louis Towles

I've got four computers that have passed the 3-year mark, and they run 24/7
(at 100% CPU).

By the way - none of the computers I've retired over the years that have
been running like this have failed due to CPU or memory issues.


Louis Towles
[EMAIL PROTECTED]
404-589-1228
Photobooks Inc
Suite A012
280 Elizabeth St
Atlanta Ga 30307
- Original Message -
From: "Tony Gott" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 18, 2000 3:24 AM
Subject: Mersenne: Just curious


 I'm just curious really, but how durable are Intel
 processors to continuous number crunching, in other words
 has anyone been able to keep the same processor running for
 2, 3 or even more years, on a 24/7 basis. I do realise that
 Windows itself needs to be rebooted from time to time, but
 what about other O/S? Anyone care to throw a few stats in?

 Tony Gott
 Shetland




Re: Mersenne: Just curious

2000-04-18 Thread Jeff Woods

Very durable.  I have original P-II/233s, two years old and still going, that 
have been 24x7 since day one.   I have P-166s that have been going for 3 or 4 
years nonstop, either crunching primes or crunching DES.  I even have a 
handful of P-100s, among the first original Pentiums, still quite happily 
going on double-checking.  I've never had a machine die that I could 
attribute to CPU failure.   It's always been the hard drive, the motherboard, 
or just a plain inability to keep up with the assigned task, which eventually 
gets the machine replaced or upgraded.

At 08:24 AM 4/18/00 +0100, you wrote:

I'm just curious really, but how durable are Intel
processors to continuous number crunching, in other words
has anyone been able to keep the same processor running for
2, 3 or even more years, on a 24/7 basis. I do realise that
Windows itself needs to be rebooted from time to time, but
what about other O/S? Anyone care to throw a few stats in?

Tony Gott
Shetland




Re: Mersenne: Just curious

2000-04-18 Thread Ken Kriesel

When pentium pro 200's were the hot new processor
(in speed, more so than in wattage),
I began running some dual-ppro-200 systems with two prime95 instances each.
Those processors are still running it.
I've never had to replace a cpu or motherboard
(though occasionally a motherboard power connector
had to be replaced because it burned up).
I'm not sure but I think that's three years.
Uptimes for these NT systems were averaging 6 months
between reboots, though that has dropped some
since the UPSes that power them are aging and so
power is less reliable now.
I've had a dual-pentium-200-mmx running NT4, and dual
prime95 instances, 2 years solid also;
the last boot of that system was August 12.

I'm sure you'll hear from others that these durations
are not remarkable.  Some may advocate other OS's.
(I've also run VAXes for 6-9 months of uptime, and
power and hardware reliability & application of OS updates
were similarly controlling there.  Even network switches
will occasionally get into funny modes after some months.)

The error detection built into prime95 has been useful
in identifying some systems where memory SIMMs or
motherboards were going flaky, months before the end
user noticed it.


Ken


At 08:24 AM 4/18/2000 +0100, you wrote:
I'm just curious really, but how durable are Intel
processors to continuous number crunching, in other words
has anyone been able to keep the same processor running for
2, 3 or even more years, on a 24/7 basis. I do realise that
Windows itself needs to be rebooted from time to time, but
what about other O/S? Anyone care to throw a few stats in?

Tony Gott
Shetland




Re: Mersenne: Just curious

2000-04-18 Thread John R Pierce

 When pentium pro 200's were the hot new processor
 (in speed, more so than in wattage),
 I began running some dual-ppro-200 systems with two prime95 instances
each.
 Those processors are still running it.
 I've never had to replace a cpu or motherboard
 (though occasionally a motherboard power connector
 had to be replaced because it burned up).
 I'm not sure but I think that's three years.
...

Until last August, my *original* Prime95 participant, a Pentium-100 running
first Win95, later Win98, faithfully chugged along 24/7.   I started this
CPU back when the very first Mersenne article came out in the San Jose
Mercury News.  This was long before GIMPS had found a prime.  Since this
win95 box's only other duty was print server for an old inkjet, and the very
occasional fax, it regularly went a month or more between reboots.  Said
machine is still alive and well, only now it's a 133MHz, 64MB RAM Linux-based
internet server for my DSL connection.  http://hogranch.com :)   The P100
was new when the first 133MHz Pentiums were becoming available and the 90s
and 100s got a lot cheaper.  Off the top of my head, I think it might be 5+
years old.  And, yes, I have a dual PPro-200 which has been running prime95
24/7 since it was built 3 years ago.

-jrp


