Nate and Charles -- 

This is a fascinating and educational thread.  Lots of interesting and useful 
info from people with plenty of experience.  Thanks for sharing with the rest 
of us.

   Jim - K6JM

  ----- Original Message ----- 
  From: Charles Scott 
  To: dstar_digital@yahoogroups.com 
  Sent: Monday, September 06, 2010 6:04 AM
  Subject: Re: [DSTAR_DIGITAL] Gateway / NI-Star System Requirements

  Nate:

  Yep, since the system only had 2 drives I have it set for redundancy 
  rather than space (it has an integrated RAID controller). It can 
  therefore lose a drive and continue to run without the software knowing 
  what happened (although I will). The better way to do it would be to 
  throw in 6 drives so I could have a couple fail, which is what we do 
  with our servers. Found one on eBay the other day with six 32 GB drives 
  for $99, and it still had dual supplies and everything. These are a great 
  value but way too noisy to run in my shack! They do, however, make great 
  Web servers with 4 cores (2 processors), 6 drives, and a bunch of memory, 
  and I can get those configurations for about 1/10 the cost of current 
  production systems.

  Yes again, since the system has "Integrated Lights-Out" management that 
  uses a separate (third) Ethernet connector, you can connect to it even 
  when the system is off or is otherwise not responding and turn it on, 
  reboot it, or look for problems. That seems to be perfect for 
  difficult-to-reach repeater sites.
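
  That out-of-band access is also scriptable. A rough sketch of what that 
  could look like, assuming the iLO/BMC has IPMI-over-LAN enabled and 
  ipmitool is installed on the machine you're connecting from (the address 
  and credentials below are placeholders):

    #!/usr/bin/env python3
    # Rough sketch: query and cycle power on a remote server through its
    # out-of-band management controller (iLO/BMC) using ipmitool.
    # Assumes IPMI-over-LAN is enabled; host and credentials are placeholders.
    import subprocess

    BMC_HOST = "192.168.1.50"   # placeholder management address
    BMC_USER = "admin"          # placeholder account
    BMC_PASS = "changeme"       # placeholder password

    def ipmi(*args):
        """Run one ipmitool command against the management controller."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
               "-U", BMC_USER, "-P", BMC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout.strip()

    if __name__ == "__main__":
        print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
        # Uncomment to actually power-cycle a wedged machine:
        # print(ipmi("chassis", "power", "cycle"))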

  As to SSDs and such, my preference would be to spend less than the cost 
  of one SSD and get multiple complete systems like these DL385s. I could 
  then either run them in some full-redundancy configuration or simply 
  leave one off till the primary fails, then turn it on remotely. I could 
  bring the spare up periodically to update it if necessary and never go 
  to the site. The only thing left to do would be dual broadband feeds and 
  redundant switches, but that seems like overkill for ham stuff.

  The problem I see with all this is that these kinds of "deal" systems 
  will become popular for this type of application and I won't be able to 
  get them cheap anymore. So everyone please ignore this thread!

  Chuck - N8DNX

  On 9/6/2010 5:12 AM, Nate Duehr wrote:
  > Think about doing RAID1 and having two disks in it if it's inaccessible for 
1/2 of the year.
  >
  > Disclaimer: I did this with W0CDS, which lives on top of a very high 
mountain -- and it still bit me in the hindquarters. Linux Software RAID1 isn't 
100% ready-for-prime-time, sadly, after all of these years.
  >
  > The machine lost a drive, and instead of just chugging along, it started 
throwing I/O errors for all commands.
  >
  > Luckily, the RAID was working, it just never "detected" the failed disk. A 
power off/power on reboot cleared that problem and it came right back online 
with a single disk and one in a failed state in /proc/mdstat -- so that leads 
to item #2...
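
  A small cron job that watches /proc/mdstat and raises an alarm would at 
  least make that failed-member state visible without a site visit. A 
  minimal sketch, assuming the usual md status format where an underscore 
  in the "[UU]"-style field marks a failed or missing member:

    #!/usr/bin/env python3
    # Minimal sketch: report Linux md arrays whose member status shows a
    # failed or missing disk (an "_" in the "[UU]" field of /proc/mdstat).
    import re

    def degraded_arrays(path="/proc/mdstat"):
        """Return the names of md devices that are running degraded."""
        degraded, current = [], None
        with open(path) as f:
            for line in f:
                name = re.match(r"^(md\d+)\s*:", line)
                if name:
                    current = name.group(1)
                # status lines end like: "... blocks [2/2] [UU]"
                status = re.search(r"\[([U_]+)\]\s*$", line)
                if current and status and "_" in status.group(1):
                    degraded.append(current)
        return degraded

    if __name__ == "__main__":
        bad = degraded_arrays()
        print("Degraded arrays:", ", ".join(bad) if bad else "none")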
  >
  > Get a way to remotely REBOOT your system... be it a transistor switch on a 
co-located analog repeater controller, a remote power on/off device like a 
managed power strip, or whatever else you trust and can access when the box is 
down.
  >
  > That would have saved someone a trip to the mountain.
  >
  > But he went, we proved the machine would run on one dead drive and one live 
drive, and then he yanked the dead one to bring it down to get a replacement.
  >
  > Which leads to item #3...
  >
  > Since Linux Software RAID can work with mismatched drives but really, 
really likes drives of the exact same CHS layout and size... get a couple of 
spares. Drive technology is still changing so fast that by the time you need 
it, that model will be hard to find. Drives are cheap; keep spares if you're 
using RAID, or be prepared to back up and rebuild the system from scratch with 
two new drives when one finally fails.
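
  Before pairing a replacement with the surviving drive, it's worth at 
  least comparing raw capacities, since the array can only use the smaller 
  of the two. A quick sketch using the sector counts the kernel exposes in 
  sysfs (device names below are examples only):

    #!/usr/bin/env python3
    # Quick sketch: compare two drives' raw capacities before pairing them
    # in a RAID1 set, using the 512-byte sector counts from sysfs.
    # Device names are examples only; adjust for the actual hardware.

    def sectors(dev):
        """Return the size of /dev/<dev> in 512-byte sectors."""
        with open(f"/sys/block/{dev}/size") as f:
            return int(f.read().strip())

    if __name__ == "__main__":
        a, b = "sda", "sdb"                     # example device names
        sa, sb = sectors(a), sectors(b)
        print(f"{a}: {sa} sectors   {b}: {sb} sectors")
        if sa != sb:
            print("Sizes differ; the smaller drive sets the usable array size.")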
  >
  > The reason for the drive failure, we suspect, is twofold... high altitude 
(heat, less air between the spinning disk and the "flying" head, etc.) and 
really bad power up there. Lightning wreaks havoc with everything up there 
every summer, and apparently this drive died well short of its usual life span 
because of all the power hits. Even once we had a UPS inline, the "stuff" that 
comes in on the power lines up there is just utter trash all summer long.
  >
  > It's just a tough environment for PCs. If you're building from scratch and 
don't mind the eventual performance hit and the need to do a "secure wipe" and 
reload once in a while, the modern solid-state drives are a good choice for a 
difficult site, I think. But their internal fragmentation problems and limits 
are becoming well documented, and that "secure wipe", which forces them to 
rewrite every bit of the flash and reset the controller that manages it, is 
important for most brands. Some good reviews of cheap vs. "server grade" SSDs 
are starting to show up on the web in droves now, whereas for a few years 
there, the testing and performance numbers just weren't available.
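
  For reference, the "secure wipe" on most SATA SSDs is the ATA Secure 
  Erase command, which hdparm can issue (set a temporary security 
  password, then erase). A heavily hedged sketch: this destroys all data, 
  the drive must not report "frozen", and the options should be checked 
  against hdparm(8) on your system before use:

    #!/usr/bin/env python3
    # Hedged sketch: issue an ATA Secure Erase to an SSD via hdparm.
    # WARNING: this wipes the entire drive. Verify the device node and check
    # "hdparm -I /dev/sdX" first (the drive must not report "frozen").
    import subprocess

    DEVICE = "/dev/sdX"      # placeholder -- point this at the SSD to be wiped
    PASSWORD = "erase"       # temporary security password, cleared by the erase

    def run(*args):
        subprocess.run(list(args), check=True)

    if __name__ == "__main__":
        run("hdparm", "-I", DEVICE)   # inspect security state before proceeding
        # Uncomment both lines to actually perform the erase:
        # run("hdparm", "--user-master", "u", "--security-set-pass", PASSWORD, DEVICE)
        # run("hdparm", "--user-master", "u", "--security-erase", PASSWORD, DEVICE)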
  >
  > I'd say it's a toss-up between spinning platters and SSDs when you factor 
in price. Owning four mid-sized spinning-platter drives is cheaper than owning 
two SSDs, so you'll have to decide if you want to pay the premium and be an 
early adopter, so to speak.
  >
  > --
  > Nate Duehr
  > n...@natetech.com
  >
