Some threads never die... I'm continuing one here from October...

Thanx for the great summary of how to get a system up and running, but I
have a question about the setup below.

I'm trying to get some 40GB Maxtors up using either Promise Ultra33's or
Ultra66 boards (33's in the logs below).  With a test compile of the new
2.3.40 beta kernel, the drives are finally recognized as 40GB (mondo thanx
to Andries Brouwer, Andre Hedrick, et al. ... Mama always said I was a suck
up), but the geometry the kernel reports makes fdisk fail.  Fdisk can only
handle 16-bit cylinder counts (65535 max), and these drives come up as
79406 cylinders, which wraps around to 13870 (79406 - 65536) in the fdisk
output below.  I tried the new version of the GNU tool "parted" with
similar results.
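
As a stopgap, fdisk can be handed a geometry on the command line.  This
is my own untested arithmetic: 79406 x 16 x 63 = 80,041,248 sectors, and
80,041,248 / (255 x 63) = 4982 cylinders, so something like

   fdisk -C 4982 -H 255 -S 63 /dev/hde

using the -C/-H/-S overrides from the util-linux fdisk man page.  But it
would be nicer to have the kernel report 255 heads in the first place,
hence the question below.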

What boot parameter did you use to get LBA addressing with 255 heads in
the drive geometry on your >32GB drives?
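
My guess, extrapolating from your hd[e,g,i,k]=4560,255,63 line below,
would be the same 4982/255/63 translation at the boot prompt (untested,
the numbers are mine):

   linux hdc=4982,255,63 hde=4982,255,63 hdg=4982,255,63 ...

or the equivalent in /etc/lilo.conf:

   append="hdc=4982,255,63 hde=4982,255,63 hdg=4982,255,63"

4982 is comfortably under fdisk's 65535-cylinder ceiling.  Is that what
you did?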


from dmesg

hda: Maxtor 92720U8, 25965MB w/2048kB Cache, CHS=3310/255/63, UDMA(33)
hdc: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hde: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdg: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdi: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdk: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdm: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)
hdo: Maxtor 94098U8, 39082MB w/2048kB Cache, CHS=79406/16/63, UDMA(33)



/root/parted-1.0.7# fdisk /dev/hde

The number of cylinders for this disk is set to 13870.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):



Thanx,

-Z


-------------------------------------------------------------------------------
                                   _______
Zach Coombes                       \____  | "Computers are useless.
 AMD Senior Hick Engineer           _   | |  They can only give you answers."
  email: [EMAIL PROTECTED]      / |_ | |                  -Pablo Picasso
                                   |__/  \|


>In case someone else here wants to build a larger IDE software raid5 in
>the near future, here is what works for me very well right now:
>
>- single processor P3
>- 4 or more IBM IDE drives (I use 4)
>- linux 2.2.13pre15 (but probably better: 2.2.13final)
>- the raid 2.2.11 patch (just press enter a few times...)
>- if you use >32GB drives, a patch for that or the UnifiedIDE patch
>- if you use >32GB drives, hd[e,g,i,k etc.]=4560,255,63 boot parameter (or
>a future UnifiedIDE)
>- another small patch to get the promise66 going, or the UnifiedIDE patch
>- "hdparm -d1 -X66 <dev>" in some startup script is needed without
>UnifiedIDE patch.
>Later "hdparm -d1 -X66 -k1 -K1 -W1<dev>" can be used.
>
>- use UDMA66 cables (80 wires), expensive but should improve signal quality
>- but it's probably better to operate the array in UDMA33 mode
>- you have fewer physical problems with cable length if you use one drive
>per controller (masters only, no slaves). This may also have advantages if
>a drive fails. In my experience, though, reliability & data integrity
>don't suffer if you use master+slave (as long as no drive fails).
>
>- you probably need to use mknod for /dev/hdi etc.
>
>- stresstest the machine for >=24h (this is also a very good idea for SCSI
>arrays)
>
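
Putting the mknod and hdparm steps above together, the minimal startup
snippet I'd try looks like this; the ide4/ide5 block major numbers
(56/57) are from Documentation/devices.txt, the rest is my guesswork:

   # device nodes for the extra Promise channels, if /dev lacks them
   # (ide4 = block major 56, ide5 = block major 57; minor = partition)
   mknod /dev/hdi  b 56 0
   mknod /dev/hdi1 b 56 1
   mknod /dev/hdk  b 57 0
   mknod /dev/hdk1 b 57 1

   # turn on DMA for each drive from rc.local or some such
   for d in /dev/hde /dev/hdg /dev/hdi /dev/hdk; do
           hdparm -d1 -X66 $d
   done
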
>For me, this configuration (currently without UnifiedIDE, 4 IBM 37GB
>drives on 2 promise controllers, one non-raid drive on onboard controller)
>survived an intensive 40h stresstest without problems (high bandwidth
>random data with read-back validation). Also 30h in normal operation now.
>I don't expect it to give me future problems.
>
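
For completeness, the raidtools-0.90 side of a 4-drive array like that
would be a raidtab roughly along these lines; this is my reading of the
raidtab man page, not the poster's actual config, and the chunk-size is
a guess:

   raiddev /dev/md0
           raid-level              5
           nr-raid-disks           4
           nr-spare-disks          0
           persistent-superblock   1
           parity-algorithm        left-symmetric
           chunk-size              32
           device                  /dev/hde1
           raid-disk               0
           device                  /dev/hdg1
           raid-disk               1
           device                  /dev/hdi1
           raid-disk               2
           device                  /dev/hdk1
           raid-disk               3

followed by "mkraid /dev/md0" and a mke2fs on the result.
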
>You don't want to mix IDE+SMP currently, since kernels up to 2.2.13pre14
>have an SMP issue in the IDE driver, and pre15 is unproven right now.
>
>You need very solid hardware. I had to replace mainboard+cpu in a
>different raid server, because the old ones couldn't cope with the stress
>(they preferred to deliver bit errors).
>
>You usually cannot use all drive bays, you need enough space between the
>drives (or they will run very very hot...).
>
>Performance:
>
>- overkill read bandwidth, very good write bandwidth. But bandwidth is
>more or less irrelevant these days.
>- very good (average) seek performance, a factor of (number_of_drives +
>X), where X comes from better locality. This makes your database happy.
>- the performance of IDE master/slave configurations is somewhat lower,
>but not much. I don't see performance reasons to avoid master/slave;
>it's more a matter of cable length and potential problems with failing
>drives.
>
>kernel patches:
>
>Special ones are needed if Unified IDE is not used. Available on request.
>
>stresstester:
>
>If there is interest, I can clean it up a little and release it.
>In order to enable you to happily crash your IDE+SMP machines too...
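
Until then, a crude shell stand-in for the read-back validation part,
assuming the array is mounted on /mnt/raid; beware that the page cache
can mask errors unless you write more data than you have RAM:

   #!/bin/sh
   # write random files, record their checksums, then re-verify
   i=0
   while [ $i -lt 100 ]; do
           dd if=/dev/urandom of=/mnt/raid/t$i bs=1024k count=64 \
                   2>/dev/null
           md5sum /mnt/raid/t$i >> /tmp/sums
           i=`expr $i + 1`
   done
   md5sum -c /tmp/sums

Not a substitute for the real thing, but it should catch gross corruption.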

