Re: big machines running Debian?

2009-03-02 Thread Goswin von Brederlow
Alex Samad a...@samad.com.au writes:

 On Sat, Feb 28, 2009 at 09:50:06AM +0100, Goswin von Brederlow wrote:
 Alex Samad a...@samad.com.au writes:
 

 [snip]

  true, depends on whose rule of thumb you use. I have seen places which
  mandate fc drives only in the data center - it gets very expensive when
  you want lots of disk space.
 
 The only argument I see for FC is a switched storage network. As soon
 as you dedicate a storage box to one (or two) servers there is really
 no point in FC. Just use a SAS box with SATA disks inside. It is a)
 faster, b) simpler, c) works better and d) costs a fraction.

 The problem I have seen is the person who controls the purse strings
 doesn't always have the best technological mind.  There was a while
 back where having fibre meant fibre to the disk. So managers wanted fibre
 to the disk, so they paid for fibre to the disk.

And now they have to learn that we have new technologies. New
requirements and new solutions. What was good 5 years ago isn't
necessarily good today. Sadly enough a lot of purse strings seem to
be made of stone and only move in geological timespans. :)

 And hey, we are talking big disk space for a single system here. Not
 sharing one 16TB raid box with 100 hosts.
 
  Also the disk space might not be needed for feeding across the network;
  dbs aren't the only thing that chews through disk space.
 
  the op did specify enterprise, I was thinking very large enterprise, the
  sort of people who mandate scsi or sas only drives in their data centre
 
 They have way too much money and not enough brain.

 I would have to disagree, sometimes the guidelines that you set for
 your data storage network mandate having the reliability (or the
 performance) of scsi (or now sas), they could be valid business
 requirements.

Could be. If you build storage for a DB you want SAS disks and
raid1. If you build a petabyte storage cluster for files >1GB then you
rather want 3 times as many SATA disks. An XYZ-only rule will always
be bad for some use cases.

 Traditionally scsi drives had a longer warranty period, were meant to be
 of better build than cheap ata (sata) drives.

 Although this line is getting blurred a bit.

There surely is a difference between a 24/7, 5 year warranty, server
SCSI disk and a cheap home use SATA disk. But then again there are
also 24/7, 5 year warranty, server SATA disks.

I don't think there is any quality difference anymore between the scsi
and sata server disks.

 Unless we talk about a specific situation: storage, like other areas
 of IT, is very fluid, and there are many solutions to each problem.

Exactly.

 Look at the big data centers of google and such that use pizza boxes -
 a machine dies, who cares, it's clustered and they will get around to
 fixing it at some point. Compare that to 4-8 node clusters of oracle
 that are just about maxed out - one server goes down and ...

Same here. Nobody builds HA into a HPC cluster. If a node fails the
cluster runs with one node less. Big deal.

Sadly enough for storage there is a distinct lack of
software/filesystems that can work with such lax reliability. With
the growing space requirements and the stalling increase in disk size
there are more and more components in a storage cluster. I feel that
redundancy has to move to a higher level. Away from the disk level
where you have raid and towards true distributed redundancy across the
storage cluster as a whole.

MfG
Goswin





Re: big machines running Debian?

2009-03-02 Thread Lennart Sorensen
On Sat, Feb 28, 2009 at 10:14:15AM +0100, Goswin von Brederlow wrote:
 Hot-spare devices work just fine (see below).
 
 What doesn't exist afaik are global hot spares, e.g. 7 disks, two
 3-disk raid5s and one spare disk for whichever raid fails first. You
 would have to script that yourself.

Any idea what the spare groups mentioned in the mdadm documentation are?

I would have guessed it was something to do with the global spare thing.

-- 
Len Sorensen





Re: big machines running Debian?

2009-03-02 Thread Lennart Sorensen
On Sat, Feb 28, 2009 at 10:16:19AM +0100, Goswin von Brederlow wrote:
 Not to repeat myself but a GPT with an entry for /boot in its fake
 MS-Dos table works just fine.

Perhaps, but why bother.  Just using GPT works.  And it won't confuse
any tools that actually know how GPT is supposed to be used.

-- 
Len Sorensen





Re: big machines running Debian?

2009-03-02 Thread Lennart Sorensen
On Sat, Feb 28, 2009 at 09:56:04AM +0100, Goswin von Brederlow wrote:
 Ron Johnson ron.l.john...@cox.net writes:
 
  On 02/25/2009 03:48 PM, Lennart Sorensen wrote:
  On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:
  Who boots off of (or puts / on) a 2TB partition?
 
  Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
  drives.  Hence the only drive in the system is a 2.25TB device with
  partitions and everything on it.  The root partition isn't very big,
  but it's on a drive that is bigger than 2TB and hence needs something
  other than a DOS partition table.
 
 
  Ah.  The minicomputer tradition I come from (and thus how I organized
  my home PC) is to have a relatively small OS/swap disk and a separate
  data array.
 
  Of course, max device size always gets bigger, and smaller devices
  fall off the market...
 
 I'm aiming for a small SSD disk for the system and a separate data
 array that can be spun down most of the time.
 
  It doesn't take much with modern SATA drives to hit 2TB.  Given we can
  get 1.5TB in a single drive, how many months before we can get 2TB in
  a single disk.
 
 
  Later this year.
 
 Waiting for them. 1.5TB disks don't mix so well with 1TB disks in
 raid. I don't want to split the disks into 0.5TB partitions and then
 raid over those.
 
 Anyone know how/if windows copes with 2TB disks? Does it understand
 GPT too?

By the looks of it, it supports GPT for data drives, but is incapable of
booting from it (except itanium versions where it is the default format).

And only 64bit versions of XP and 2003 server appear to support it,
while vista and 2008 server seem to always support it (but not for boot
unless your machine has EFI).  It seems microsoft believes you need EFI
to boot from GPT, even though grub2/linux has no such limitation.
Go figure.

Of course a 2TB disk is OK with MBR still.  Anything larger will have
a problem though.

-- 
Len Sorensen





Re: big machines running Debian?

2009-03-02 Thread Lennart Sorensen
On Sat, Feb 28, 2009 at 09:39:07AM +, Ian McDonald wrote:
 Erm, not on anything other than a sequential read (and even then, I've  
 never seen a single disk that would actually sustain that across its
 whole capacity).

 Even raid-5s of significant numbers of disks aren't enormously fast,  
 especially under multiple access. hdparm informs me that the SATA 28+2  
 spare raid-5 I have will read 170M a second. That would rapidly diminish  
 under any sort of load.

Probably a bus limitation rather than the disks.  I have no problem
getting 100MB/s on a 4 disk raid5 with SATA.

 The only thing we've found that'll stand up to real multiuser load (like  
 a mail spool) is raid-10, and enough spindles.

 We're beginning to see the requirement for 10GE on busy machines.

Sure.  All depends on the load after all.

-- 
Len Sorensen





Re: big machines running Debian?

2009-03-02 Thread Jonas Bardino
Lennart Sorensen wrote:
 On Sat, Feb 28, 2009 at 10:14:15AM +0100, Goswin von Brederlow wrote:
 Hot-spare devices work just fine (see below).

  What doesn't exist afaik are global hot spares, e.g. 7 disks, two
  3-disk raid5s and one spare disk for whichever raid fails first. You
  would have to script that yourself.
 
 Any idea what the spare groups mentioned in the mdadm documentation are?
 
  I would have guessed it was something to do with the global spare thing.
 

I haven't tested shared hot spares myself but this article (among quite
a few other hits from googling "shared hot spare") indicates that it is
quite possible with mdadm running in daemon mode:
http://www.redhat.com/magazine/021jul06/departments/tips_tricks/

...and it appears Len was completely right about the spare-group keyword.
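
For the archives, roughly what that looks like in /etc/mdadm/mdadm.conf
(a sketch only - I haven't run this exact config, and the UUIDs are
made up):

# Two arrays tagged with the same spare-group share their spares:
# when a disk in one array fails, mdadm's monitor moves a spare
# over from the other array and the rebuild starts there.
ARRAY /dev/md0 UUID=f1e2d3c4:00000000:00000000:00000001 spare-group=global
ARRAY /dev/md1 UUID=f1e2d3c4:00000000:00000000:00000002 spare-group=global

# Spares only migrate while mdadm is running in monitor mode, e.g.:
# mdadm --monitor --scan --daemonise --syslog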

Cheers, Jonas






Re: big machines running Debian?

2009-03-02 Thread Alex Samad
On Mon, Mar 02, 2009 at 02:28:04PM +0100, Goswin von Brederlow wrote:
 Alex Samad a...@samad.com.au writes:
 
  On Sat, Feb 28, 2009 at 09:50:06AM +0100, Goswin von Brederlow wrote:
  Alex Samad a...@samad.com.au writes:
  
 

[snip]

 
 And now they have to learn that we have new technologies. New
 requirements and new solutions. What was good 5 years ago isn't
 necessarily good today. Sadly enough a lot of purse strings seem to
 be made of stone and only move in geological timespans. :)

some do and some don't - the old ideology of "nobody got sacked for
buying IBM"

 

[snip]

 Could be. If you build storage for a DB you want SAS disks and
  raid1. If you build a petabyte storage cluster for files >1GB then you
  rather want 3 times as many SATA disks. An XYZ-only rule will always
 be bad for some use cases.

True. I had a customer buy an 8TB fc disk array (lustre based) and then
they expanded it soon afterwards.

 
  Traditionally scsi drives had a longer warranty period, were meant to be
   of better build than cheap ata (sata) drives.
 
  Although this line is getting blurred a bit.
 
 There surely is a difference between a 24/7, 5 year warranty, server
 SCSI disk and a cheap home use SATA disk. But then again there are
 also 24/7, 5 year warranty, server SATA disks.
 
 I don't think there is any quality difference anymore between the scsi
 and sata server disks.
 
   Unless we talk about a specific situation: storage, like other areas
   of IT, is very fluid, and there are many solutions to each problem.
 
 Exactly.
 
   Look at the big data centers of google and such that use pizza boxes -
   a machine dies, who cares, it's clustered and they will get around to
   fixing it at some point. Compare that to 4-8 node clusters of oracle
   that are just about maxed out - one server goes down and ...
 
 Same here. Nobody builds HA into a HPC cluster. If a node fails the
  cluster runs with one node less. Big deal.

you would be surprised how many people want HA head nodes

 
  Sadly enough for storage there is a distinct lack of
  software/filesystems that can work with such lax reliability. With
  the growing space requirements and the stalling increase in disk size
 there are more and more components in a storage cluster. I feel that
 redundancy has to move to a higher level. Away from the disk level
 where you have raid and towards true distributed redundancy across the
 storage cluster as a whole.

yes, it would be nice. My thoughts are that we haven't seen any big jumps in
data storage for a while, nothing like we are seeing in memory and cpu
speed.


 
 MfG
 Goswin
 
 
 
 

-- 
Our enemies are innovative and resourceful, and so are we. They never stop 
thinking about new ways to harm our country and our people, and neither do we.

- George W. Bush
08/05/2004
Washington, DC




Re: big machines running Debian?

2009-03-02 Thread Goswin von Brederlow
lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:

 On Sat, Feb 28, 2009 at 10:16:19AM +0100, Goswin von Brederlow wrote:
 Not to repeat myself but a GPT with an entry for /boot in its fake
 MS-Dos table works just fine.

 Perhaps, but why bother.  Just using GPT works.  And it won't confuse
 any tools that actually know how GPT is supposed to be used.

Because then grub works. :)

MfG
Goswin





Re: big machines running Debian?

2009-03-02 Thread Lennart Sorensen
On Mon, Mar 02, 2009 at 08:32:22PM +0100, Goswin von Brederlow wrote:
 lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:
 
  On Sat, Feb 28, 2009 at 10:16:19AM +0100, Goswin von Brederlow wrote:
  Not to repeat myself but a GPT with an entry for /boot in its fake
  MS-Dos table works just fine.
 
  Perhaps, but why bother.  Just using GPT works.  And it won't confuse
  any tools that actually know how GPT is supposed to be used.
 
 Because then grub works. :)

But grub2 works when you use a GPT partition table that is actually
compliant with the standard.

I am pretty sure your setup is NOT GPT compliant.  It just happens to work.
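
For reference, a standards-compliant GPT that grub2 can boot from BIOS
needs nothing more than a tiny bios_grub partition. A sketch (untested
as typed; assumes a parted new enough to know the bios_grub flag, and
/dev/sda is just an example):

parted /dev/sda mklabel gpt
parted /dev/sda mkpart grub 1MiB 2MiB
parted /dev/sda set 1 bios_grub on    # grub2 embeds its core image here
parted /dev/sda mkpart boot 2MiB 258MiB
parted /dev/sda mkpart lvm 258MiB 100%
grub-install /dev/sda

No fake MS-DOS entries needed.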

-- 
Len Sorensen





Re: big machines running Debian?

2009-03-01 Thread Douglas A. Tutty
On Thu, Feb 26, 2009 at 03:21:44PM -0600, Ron Johnson wrote:
 On 02/26/2009 01:49 PM, Ron Peterson wrote:
 2009-02-26_14:21:54-0500 Douglas A. Tutty dtu...@vianet.ca:
 On Wed, Feb 25, 2009 at 08:53:45PM -0600, Ron Johnson wrote:
 On 02/25/2009 07:22 PM, Douglas A. Tutty wrote:
 [snip]
 /proc/megaraid/hba0/raiddrives-0-9 
 Logical drive: 0:, state: optimal
 Span depth:  1, RAID level:  1, Stripe size: 64, Row size:  2
 Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO
 
 Logical drive: 1:, state: optimal
 Span depth:  0, RAID level:  0, Stripe size:128, Row size:  0
 Read Policy: No read ahead, Write Policy: Write thru, Cache Policy: 
 Cached IO
 Why is Read Ahead disabled on Logical Drive 1?
 My understanding is that read ahead in this case refers to the ability
 of the raid card to read ahead from one disk while a read is taking
 place on another disk.  This only makes sense in a redundant raid level.
 LD1 is raid0, so there is no other disk from which to read ahead.
 
 My understanding is that read ahead means the controller reads more data
 into memory than you asked for, expecting that the next bits you ask for
 will be immediately after the ones you just got.
 
 
 That *is* the standard definition.  Though there's nothing stopping 
 Megaraid from being weird.

I just checked the setup in the bios and it is set for adaptive
read-ahead on both LDs.  I don't know what's wrong with the output from
/proc.

Doug.





Re: big machines running Debian?

2009-03-01 Thread Alex Samad
On Sat, Feb 28, 2009 at 09:50:06AM +0100, Goswin von Brederlow wrote:
 Alex Samad a...@samad.com.au writes:
 

[snip]

  true, depends on whose rule of thumb you use. I have seen places which
  mandate fc drives only in the data center - it gets very expensive when
  you want lots of disk space.
 
 The only argument I see for FC is a switched storage network. As soon
 as you dedicate a storage box to one (or two) servers there is really
 no point in FC. Just use a SAS box with SATA disks inside. It is a)
 faster, b) simpler, c) works better and d) costs a fraction.

The problem I have seen is the person who controls the purse strings
doesn't always have the best technological mind.  There was a while
back where having fibre meant fibre to the disk. So managers wanted fibre
to the disk, so they paid for fibre to the disk.


 
 And hey, we are talking big disk space for a single system here. Not
 sharing one 16TB raid box with 100 hosts.
 
  Also the disk space might not be needed for feeding across the network;
  dbs aren't the only thing that chews through disk space.
 
  the op did specify enterprise, I was thinking very large enterprise, the
  sort of people who mandate scsi or sas only drives in their data centre
 
 They have way too much money and not enough brain.

I would have to disagree, sometimes the guidelines that you set for
your data storage network mandate having the reliability (or the
performance) of scsi (or now sas), they could be valid business
requirements.

Traditionally scsi drives had a longer warranty period, were meant to be
of better build than cheap ata (sata) drives.

Although this line is getting blurred a bit.

Unless we talk about a specific situation: storage, like other areas
of IT, is very fluid, and there are many solutions to each problem.

Look at the big data centers of google and such that use pizza boxes -
a machine dies, who cares, it's clustered and they will get around to
fixing it at some point. Compare that to 4-8 node clusters of oracle
that are just about maxed out - one server goes down and ...


 
 MfG
 Goswin
 
 PS: The I in RAID stands for inexpensive.
 
 
 
 

-- 
Of course he was all in favour of Armageddon in *general* terms.
-- (Terry Pratchett & Neil Gaiman, Good Omens)




Re: big machines running Debian?

2009-03-01 Thread Michelle Konzack
Hi,

Am 2009-02-21 08:00:32, schrieb Igor Támara:
 Here (in my country) big means more than 4x4 cores, more than
 16GB of RAM, and more than 1TB on disk, excluding clusters; also SANs
 are good to know about.

I am the owner of three Sun Blade (Sparc) machines; each has 32 CPUs,
128 GByte of memory and 160 SCSI drives of 300 GByte in 10 cages.

I started with them at the end of 1999, with 64 drives of 76 GByte.
Updated to 147 GByte drives, and in 2007 to 300 GByte ones.

Sun machines are killers... and they run like heaven.

Thanks, Greetings and nice Day/Evening
Michelle Konzack
Systemadministrator
24V Electronic Engineer
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
http://www.tamay-dogan.net/   http://www.can4linux.org/
Michelle Konzack   Apt. 917  ICQ #328449886
+49/177/935194750, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France   IRC #Debian (irc.icq.com)




Re: big machines running Debian?

2009-03-01 Thread Michelle Konzack
Am 2009-02-25 16:48:30, schrieb Lennart Sorensen:
 It doesn't take much with modern SATA drives to hit 2TB.  Given we can
 get 1.5TB in a single drive, how many months before we can get 2TB in
 a single disk.

Ehm, HOW MANY what?

The 2 TByte drives are already out.
Some selected customers of Hitachi have them already.
(one of them is here in Strasbourg)

Thanks, Greetings and nice Day/Evening
Michelle Konzack
Systemadministrator
24V Electronic Engineer
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
http://www.tamay-dogan.net/   http://www.can4linux.org/
Michelle Konzack   Apt. 917  ICQ #328449886
+49/177/935194750, rue de Soultz MSN LinuxMichi
+33/6/61925193 67100 Strasbourg/France   IRC #Debian (irc.icq.com)




Re: big machines running Debian?

2009-03-01 Thread Ron Johnson

On 02/28/2009 03:14 AM, Goswin von Brederlow wrote:

Ron Johnson ron.l.john...@cox.net writes:


On 02/27/2009 07:50 AM, Lennart Sorensen wrote:

On Thu, Feb 26, 2009 at 05:58:43PM -0600, Ron Johnson wrote:

As would auto-replacement of bad drives by hot spares.

Usually the firmware of a raid card does that itself.  If a drive is
flagged hotspare, the raid card should automatically start the rebuild
if a drive fails.  You should never have to tell it to do that.  If you
had to tell it then it hardly qualifies as a hot spare.


I was referring to the fact that softraid couldn't do that.


Hot-spare devices work just fine (see below).

What doesn't exist afaik are global hot spares, e.g. 7 disks, two
3-disk raid5s and one spare disk for whichever raid fails first. You
would have to script that yourself.



Ah.  Bummer.

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.





Re: big machines running Debian?

2009-03-01 Thread Ron Johnson

On 02/28/2009 02:50 AM, Goswin von Brederlow wrote:
[snip]


The only argument I see for FC is a switched storage network. As soon
as you dedicate a storage box to one (or two) servers there is really
no point in FC. Just use a SAS box with SATA disks inside. It is a)
faster, b) simpler, c) works better and d) costs a fraction.


The Tier 1 vendors can be touchy about certifying SATA SANs in 
certain environments, especially 24x7 DCs.  That's why only our 
tier 3 (there is no tier 2...) storage is SATA.


--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:

 On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:
 most enterprise sites don't use 1TB size disks, if you want performance
 you go spindles, there might be 8 disks (number pulled from the air -
 based on raid6 + spares) behind 1TB 

 And if you want disk space and are serving across a 1Gbit ethernet link,
 you don't give a damn about spindles and go for cheap abundant storage,
 which means SATA.

 Not everyone is running a database server.  Some people just have files.

 Raid5/6 of a few SATA drives can easily saturate 1Gbit.  And for a very
 small fraction of the cost of SAS drives.

1GBit is saturated by a single good disk already. 1GBit is a joke for
fast storage.

MfG
Goswin





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
Alex Samad a...@samad.com.au writes:

 On Wed, Feb 25, 2009 at 05:06:30PM -0500, Lennart Sorensen wrote:
 On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:
  most enterprise sites don't use 1TB size disks, if you want performance
  you go spindles, there might be 8 disks (number pulled from the air -
  based on raid6 + spares) behind 1TB 
 
 And if you want disk space and are serving across a 1Gbit ethernet link,
 you don't give a damn about spindles and go for cheap abundant storage,
 which means SATA.
 
 Not everyone is running a database server.  Some people just have files.
 
 Raid5/6 of a few SATA drives can easily saturate 1Gbit.  And for a very
 small fraction of the cost of SAS drives.

 true, depends on whose rule of thumb you use. I have seen places which
 mandate fc drives only in the data center - it gets very expensive when
 you want lots of disk space.

The only argument I see for FC is a switched storage network. As soon
as you dedicate a storage box to one (or two) servers there is really
no point in FC. Just use a SAS box with SATA disks inside. It is a)
faster, b) simpler, c) works better and d) costs a fraction.

And hey, we are talking big disk space for a single system here. Not
sharing one 16TB raid box with 100 hosts.

 Also the disk space might not be needed for feeding across the network;
 dbs aren't the only thing that chews through disk space.

 the op did specify enterprise, I was thinking very large enterprise, the
 sort of people who mandate scsi or sas only drives in their data centre

They have way too much money and not enough brain.

MfG
Goswin

PS: The I in RAID stands for inexpensive.





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
Ron Johnson ron.l.john...@cox.net writes:

 On 02/25/2009 03:48 PM, Lennart Sorensen wrote:
 On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:
 Who boots off of (or puts / on) a 2TB partition?

 Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
 drives.  Hence the only drive in the system is a 2.25TB device with
 partitions and everything on it.  The root partition isn't very big,
 but it's on a drive that is bigger than 2TB and hence needs something
 other than a DOS partition table.


 Ah.  The minicomputer tradition I come from (and thus how I organized
 my home PC) is to have a relatively small OS/swap disk and a separate
 data array.

 Of course, max device size always gets bigger, and smaller devices
 fall off the market...

I'm aiming for a small SSD disk for the system and a separate data
array that can be spun down most of the time.

 It doesn't take much with modern SATA drives to hit 2TB.  Given we can
 get 1.5TB in a single drive, how many months before we can get 2TB in
 a single disk.


 Later this year.

Waiting for them. 1.5TB disks don't mix so well with 1TB disks in
raid. I don't want to split the disks into 0.5TB partitions and then
raid over those.

Anyone know how/if windows copes with 2TB disks? Does it understand
GPT too?

MfG
Goswin





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
Ron Johnson ron.l.john...@cox.net writes:

 On 02/26/2009 02:51 PM, Alex Samad wrote:
 [snip]

 I have gone through a few cycles of changing the underlying drive sizes,
  ie a 3 disk raid5 made up of 3 x 500GB and replacing in line with 3 x
  1TB.  Pop 1 disk, replace with a 1TB one; once it has settled you can
  do an online expansion.  Not sure if you can do that on a HW raid.

 You used to not be able to.  Not sure about modern controllers.

Depends on the firmware and roughly speaking the price of your raid.

MfG
Goswin





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:

 On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:
 my rule of thumb is to always have at least 2 partitions on the first 2
 drives (3 if I have them), for a raid1 /boot and a raid1 /. the rest of
 the space is put into a raid device then into lvm.  That gets rid of the
 interesting tweaks.

 Even with software raid1, setting up reliable boot from either drive
 if one fails can be interesting, but it has gotten a lot better than it
 used to be.

I asked about this in regards to grub2 the other day. The problem with
software raid for me is that when I switch disks the drive order gets
messed up every time. The first reason is that if the first disk fails hd1
becomes hd0 and so on. The other reason is that the onboard chips
can't see SATA 2 disks. So if one of the onboard disks fails I have to
move disks around to get another SATA 1 disk on the onboard port and
free a port on the SATA 2 controller. And then the disk order is usually
scrambled up and grub fails.

Now with grub2 you don't have to specify a boot device as (hd0,0)
anymore but grub2 will supposedly find the right disk itself. This
makes it really interesting for software raid.
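
From what I have seen of the generated grub.cfg it finds the boot
filesystem by UUID, with a fragment something like this (from memory,
UUID made up, and the exact syntax varies between grub2 versions):

search --fs-uuid --set 89abcdef-0123-4567-89ab-cdef01234567
linux /vmlinuz root=UUID=89abcdef-0123-4567-89ab-cdef01234567 ro
initrd /initrd.img

so the BIOS drive numbering shouldn't matter anymore.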

MfG
Goswin





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
Ron Johnson ron.l.john...@cox.net writes:

 On 02/27/2009 07:50 AM, Lennart Sorensen wrote:
 On Thu, Feb 26, 2009 at 05:58:43PM -0600, Ron Johnson wrote:
 As would auto-replacement of bad drives by hot spares.

 Usually the firmware of a raid card does that itself.  If a drive is
 flagged hotspare, the raid card should automatically start the rebuild
 if a drive fails.  You should never have to tell it to do that.  If you
 had to tell it then it hardly qualifies as a hot spare.


 I was referring to the fact that softraid couldn't do that.

Hot-spare devices work just fine (see below).

What doesn't exist afaik are global hot spares, e.g. 7 disks, two
3-disk raid5s and one spare disk for whichever raid fails first. You
would have to script that yourself.

MfG
Goswin

--

# mdadm --create -l1 -n2 /dev/md9 /dev/ram0 /dev/ram1
# mdadm --add /dev/md9 /dev/ram2 
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md9 : active raid1 ram2[2](S) ram1[1] ram0[0]
  65472 blocks [2/2] [UU]
# mdadm --fail /dev/md9 /dev/ram0
# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md9 : active raid1 ram2[0] ram1[1] ram0[2](F)
  65472 blocks [2/2] [UU]


And syslog shows:

Feb 28 10:09:47 frosties mdadm[4078]: NewArray event detected on md device 
/dev/md9
Feb 28 10:10:00 frosties kernel: md: bind<ram2>
Feb 28 10:10:52 frosties kernel: raid1: Disk failure on ram0, disabling device. 
Feb 28 10:10:52 frosties kernel: ^IOperation continuing on 1 devices
Feb 28 10:10:52 frosties kernel: RAID1 conf printout:
Feb 28 10:10:52 frosties kernel:  --- wd:1 rd:2
Feb 28 10:10:52 frosties kernel:  disk 0, wo:1, o:0, dev:ram0
Feb 28 10:10:52 frosties kernel:  disk 1, wo:0, o:1, dev:ram1
Feb 28 10:10:52 frosties kernel: RAID1 conf printout:
Feb 28 10:10:52 frosties kernel:  --- wd:1 rd:2
Feb 28 10:10:52 frosties kernel:  disk 1, wo:0, o:1, dev:ram1
Feb 28 10:10:52 frosties kernel: RAID1 conf printout:
Feb 28 10:10:52 frosties kernel:  --- wd:1 rd:2
Feb 28 10:10:52 frosties kernel:  disk 0, wo:1, o:1, dev:ram2
Feb 28 10:10:52 frosties kernel:  disk 1, wo:0, o:1, dev:ram1
Feb 28 10:10:52 frosties kernel: md: recovery of RAID array md9
Feb 28 10:10:52 frosties kernel: md: minimum _guaranteed_  speed: 1000 
KB/sec/disk.
Feb 28 10:10:52 frosties kernel: md: using maximum available idle IO bandwidth 
(but not more than 200000 KB/sec) for recovery.
Feb 28 10:10:52 frosties kernel: md: using 128k window, over a total of 65472 
blocks.
Feb 28 10:10:52 frosties kernel: md: md9: recovery done.
Feb 28 10:10:52 frosties kernel: RAID1 conf printout:
Feb 28 10:10:52 frosties kernel:  --- wd:2 rd:2
Feb 28 10:10:52 frosties kernel:  disk 0, wo:0, o:1, dev:ram2
Feb 28 10:10:52 frosties kernel:  disk 1, wo:0, o:1, dev:ram1





Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:

 On Thu, Feb 26, 2009 at 05:42:43PM -0500, Douglas A. Tutty wrote:
 The comparison wasn't between having the raid controller or LVM present
 a reasonable size /, it was between a reasonable size / and a 2TB /.

 No one ever wanted a 2TB /.  I just wanted / on a drive that was bigger
 than 2TB and hence couldn't use dos partition tables anymore.  I only
 have a 10GB / :)  Making a 10GB raid volume and a separate raid volume
 for the rest just to be able to use dos partition tables for the /
 is just awkward.

Not to repeat myself but a GPT with an entry for /boot in its fake
MS-Dos table works just fine.

MfG
Goswin





Re: big machines running Debian?

2009-02-28 Thread Ian McDonald

Goswin von Brederlow wrote:

lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:


On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:

most enterprise sites don't use 1TB size disks, if you want performance
you go spindles, there might be 8 disks (number pulled from the air -
based on raid6 + spares) behind 1TB 

And if you want disk space and are serving across a 1Gbit ethernet link,
you don't give a damn about spindles and go for cheap abundant storage,
which means SATA.

Not everyone is running a database server.  Some people just have files.

Raid5/6 of a few SATA drives can easily saturate 1Gbit.  And for a very
small fraction of the cost of SAS drives.


1GBit is saturated by a single good disk already. 1GBit is a joke for
fast storage.



Erm, not on anything other than a sequential read (and even then, I've 
never seen a single disk that would actually sustain that across its
whole capacity).


Even raid-5s of significant numbers of disks aren't enormously fast, 
especially under multiple access. hdparm informs me that the SATA 28+2 
spare raid-5 I have will read 170M a second. That would rapidly diminish 
under any sort of load.


The only thing we've found that'll stand up to real multiuser load (like 
a mail spool) is raid-10, and enough spindles.


We're beginning to see the requirement for 10GE on busy machines.

--
ian



MfG
Goswin








Re: big machines running Debian?

2009-02-28 Thread Goswin von Brederlow
Ian McDonald i...@st-andrews.ac.uk writes:

 Goswin von Brederlow wrote:
 lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:

 On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:
 most enterprise sites don't use 1TB size disks, if you want performance
 you go spindles, there might be 8 disks (number pulled from the air -
 based on raid6 + spares) behind 1TB
 And if you want disk space and are serving across a 1Gbit ethernet link,
 you don't give a damn about spindles and go for cheap abundant storage,
 which means SATA.

 Not everyone is running a database server.  Some people just have files.

 Raid5/6 of a few SATA drives can easily saturate 1Gbit.  And for a very
 small fraction of the cost of SAS drives.

 1GBit is saturated by a single good disk already. 1GBit is a joke for
 fast storage.


 Erm, not on anything other than a sequential read (and even then, I've
 never seen a single disk that would actually sustain that across its
 whole capacity).

A cheap SATA disk with 7200rpm sustains 80MB/s sequential read/write
on the outside and 40MB/s on the inside. A Seagate Cheetah 15K.6 is
specified at up to 171MB/s, and SAS disks are more uniform between
outside and inside tracks.

 Even raid-5s of significant numbers of disks aren't enormously fast,
 especially under multiple access. hdparm informs me that the SATA 28+2
 spare raid-5 I have will read 170M a second. That would rapidly
 diminish under any sort of load.

For our Lustre filesystems we tested 16 SATA disks in an Infortrend SAS
raid enclosure. As raid6 we still get 450 MiB/s sequential writing
and 700MiB/s sequential reading. And that scales pretty well with
more enclosures and more clients.

In your case I would think the problem is your configuration. A 28
disk raid5 has a lot of stripes. That takes a lot of cache per stripe
and a lot of cpu to calculate parity. Plus the chance of 2 disks
failing before the spare disk can be synced must be HUGE. Have you ever
thought about making multiple smaller raids?

 The only thing we've found that'll stand up to real multiuser load
 (like a mail spool) is raid-10, and enough spindles.

Mail spool is like database access. Tons and tons of tiny read/write
requests. The only thing that counts there is seek time. And the only
raid level that improves seek time is raid1 (and the raid1 in raid10).

 We're beginning to see the requirement for 10GE on busy machines.

Don't forget that you have overhead too. If you only have 1GBit to the
storage then how is your server supposed to saturate the 1GBit to the
outside world?

MfG
Goswin





Re: big machines running Debian?

2009-02-28 Thread Ian McDonald

Goswin von Brederlow wrote:

Ian McDonald i...@st-andrews.ac.uk writes:


Goswin von Brederlow wrote:

lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:


On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:

most enterprise sites don't use 1TB size disks, if you want performance
you go spindles, there might be 8 disks (number pulled from the air -
based on raid6 + spares) behind 1TB

And if you want disk space and are serving across a 1Gbit ethernet link,
you don't give a damn about spindles and go for cheap abundant storage,
which means SATA.

Not everyone is running a database server.  Some people just have files.

Raid5/6 of a few SATA drives can easily saturate 1Gbit.  And for a very
small fraction of the cost of SAS drives.

1GBit is saturated by a single good disk already. 1GBit is a joke for
fast storage.


Erm, not on anything other than a sequential read (and even then, I've
never seen a single disk that would actually sustain that across its
whole capacity).


A cheap SATA disk with 7200rpm sustains 80MB/s sequential read/write
on the outside and 40MB/s on the inside. A Seagate Cheetah 15K.6 is
specified at up to 171MB/s, and SAS disks are more uniform between
outside and inside tracks.


My experience is that this sustained speed has quite a few lumps and 
bumps in it. I must admit, I thought we were talking about SATA disks, 
not recent SAS 15k's, and 40-80MB/s is quite a way from 1GBit. My WD
Raptors only report around 75MB/s.





Even raid-5s of significant numbers of disks aren't enormously fast,
especially under multiple access. hdparm informs me that the SATA 28+2
spare raid-5 I have will read 170M a second. That would rapidly
diminish under any sort of load.


For our Lustre filesystems we tested 16 SATA disks in an Infortrend SAS
raid enclosure. As raid6 we still get 450 MiB/s sequential writing
and 700MiB/s sequential reading. And that scales pretty well with
more enclosures and more clients.

In your case I would think the problem is your configuration. A 28
disk raid5 has a lot of stripes. That takes a lot of cache per stripe
and a lot of cpu to calculate parity. Plus the chance of 2 disks
failing before the spare disk can be synced must be HUGE. Have you ever
thought about making multiple smaller raids?


Of course. This performance isn't a problem for our requirement (given 
it's connected to 1GE), it's just illustrative.


I'm not sure the risk of twin failure is that great, if you do 
calculations on MTBFs. Perhaps I ought to simulate a failure, and see
how long it takes to rebuild :)


We have a 56 disk + 4 spare Raid 10 on the production side of this 
setup, which is much much quicker :) (and still connected to 1GE, but 
can sustain multiple accesses well).



The only thing we've found that'll stand up to real multiuser load
(like a mail spool) is raid-10, and enough spindles.


Mail spool is like database access. Tons and tons of tiny read/write
requests. The only thing that counts there is seek time. And the only
raid level that improves seek time is raid1 (and the raid1 in raid10).


Indeed.




We're beginning to see the requirement for 10GE on busy machines.


Don't forget that you have overhead too. If you only have 1GBit to the
storage then how is your server supposed to saturate the 1GBit to the
outside world?



Who said I had 1G to the storage? The storage is on 16x PCI-e, with 4x
SAS connects to it :)



Best Regards,

--
ian





Re: big machines running Debian?

2009-02-27 Thread Lennart Sorensen
On Thu, Feb 26, 2009 at 05:58:43PM -0600, Ron Johnson wrote:
 As would auto-replacement of bad drives by hot spares.

Usually the firmware of a raid card does that itself.  If a drive is
flagged hotspare, the raid card should automatically start the rebuild
if a drive fails.  You should never have to tell it to do that.  If you
had to tell it then it hardly qualifies as a hot spare.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-27 Thread Lennart Sorensen
On Thu, Feb 26, 2009 at 06:06:07PM -0600, Ron Johnson wrote:
 Most DC managers have a bit more clue and good reasons than simply rules 
 for rules' sake.

 Mainly logistics: if all the center's disks are SAS (or whatever other 
 standard you choose) in only one or two vendor's SANs (or whatever other 
 cabinet you choose), it makes the Operation staff's job a whole lot 
 easier, thus helping to ensure greater uptime.

Well people can choose to do it that way, at a huge increase in cost.

I would have hoped the staff could manage a few types of disks without
messing anything up.  I guess it means having a few more types of
spares around.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-27 Thread Ron Johnson

On 02/27/2009 07:50 AM, Lennart Sorensen wrote:

On Thu, Feb 26, 2009 at 05:58:43PM -0600, Ron Johnson wrote:

As would auto-replacement of bad drives by hot spares.


Usually the firmware of a raid card does that itself.  If a drive is
flagged hotspare, the raid card should automatically start the rebuild
if a drive fails.  You should never have to tell it to do that.  If you
had to tell it then it hardly qualifies as a hot spare.



I was referring to the fact that softraid couldn't do that.

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.





Re: big machines running Debian?

2009-02-27 Thread Lennart Sorensen
On Fri, Feb 27, 2009 at 11:49:29AM -0600, Ron Johnson wrote:
 I was referring to the fact that softraid couldn't do that.

Are you sure?  mdadm appears capable of managing spares automatically
when such are setup for the raid.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-27 Thread Ron Johnson

On 02/27/2009 02:25 PM, Lennart Sorensen wrote:

On Fri, Feb 27, 2009 at 11:49:29AM -0600, Ron Johnson wrote:

I was referring to the fact that softraid couldn't do that.


Are you sure?


No...


   mdadm appears capable of managing spares automatically
when such are setup for the raid.



In mdadm.conf?  I'm really surprised (and pleased)!

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.





Re: big machines running Debian?

2009-02-27 Thread Alex Samad
On Fri, Feb 27, 2009 at 02:44:04PM -0600, Ron Johnson wrote:
 On 02/27/2009 02:25 PM, Lennart Sorensen wrote:
 On Fri, Feb 27, 2009 at 11:49:29AM -0600, Ron Johnson wrote:
 I was referring to the fact that softraid couldn't do that.

 Are you sure?

 No...

mdadm appears capable of managing spares automatically
 when such are setup for the raid.


 In mdadm.conf?  I'm really surprised (and pleased)!

not sure about mdadm.conf, but definitely when you define a raid set you
can define a hot spare
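
e.g. something like this (a sketch, device names made up): a 2-disk
raid1 plus one hot spare that md rebuilds onto automatically when a
member fails

mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1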

 -- 
 Ron Johnson, Jr.
 Jefferson LA  USA

-- 
The recession started upon my arrival. It could have been -- some say 
February, some say March, some speculate maybe earlier it started -- but 
nevertheless, it happened as we showed up here. The attacks on our country 
affected our economy. Corporate scandals affected the confidence of people and 
therefore affected the economy. My decision on Iraq, this kind of march to war, 
affected the economy.

- George W. Bush
02/08/2004
on Meet the Press




Re: big machines running Debian?

2009-02-27 Thread Lennart Sorensen
On Fri, Feb 27, 2009 at 02:44:04PM -0600, Ron Johnson wrote:
 In mdadm.conf?  I'm really surprised (and pleased)!

Probably in the monitoring mode.  man mdadm talks about spare drives and
spare groups and moving spares between raids and such.  Sounds pretty
likely to automatically use a spare assigned to an md raid though.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-27 Thread hendrik
On Fri, Feb 27, 2009 at 04:14:52PM -0500, Lennart Sorensen wrote:
 On Fri, Feb 27, 2009 at 02:44:04PM -0600, Ron Johnson wrote:
  In mdadm.conf?  I'm really surprised (and pleased)!
 
 Probably in the monitoring mode.  man mdadm talks about spare drives and
 spare groups and moving spares between raids and such.  Sounds pretty
 likely to automatically use a spare assigned to an md raid though.

I once (yesterday, to be precise) used mdadm to tell my RAID that
one of its drives had failed, and it immediately started copying the 
other active member to the spare.

-- hendrik

 
 -- 
 Len Sorensen
 
 
 





Re: big machines running Debian?

2009-02-27 Thread Goswin von Brederlow
lsore...@csclub.uwaterloo.ca (Lennart Sorensen) writes:

 On Wed, Feb 25, 2009 at 04:07:54PM +0100, Goswin von Brederlow wrote:
 I think the limit is 1024 cores. Or was that fixed to allow more?

 I think people are working on that, but not too many machines need
 that yet.  Most machines with that many cores are clusters and hence
 run multiple linux instances.

 As for ram that really is a cpu/architecture limit and you won't be
 able to find a motherboard that supports as much ram as the cpu(s)
 could handle.

 I can't remember if the current amd64/x86_64 architecture limit is 40bit
 or 44bit of physical memory space.  Fairly decent chunk of ram either way,
 if you can fit it into the system in the first place.

cat /proc/cpuinfo
address sizes   : 40 bits physical, 48 bits virtual

which means 1TiB of ram. Good luck finding a board for that.

 More than 1TB on disk? Doh. 1TB fits on a single disk. Anything up to
 16 TB is quite trivial. Beyond that you start to hit the limit on
 filesystem size with ext3 and have to use xfs or ext4 or something. Or
 you have to partition the space into 16TB chunks. Also >16 disks
 requires a big enough case or external storage. For that I would look
 into external enclosures with SAS connector. Don't use SCSI or FC.

 Well at 2TB you have to switch from DOS style partition tables to GPT,
 which requires the use of grub2 rather than lilo or grub, but works
 fine otherwise.

Nah. Just needs /boot to be below 2TiB. The GPT has a fake MS-Dos
table inside that grub can use and lilo only looks at the block
addresses anyway.

MfG
Goswin





Re: big machines running Debian?

2009-02-26 Thread Douglas A. Tutty
On Wed, Feb 25, 2009 at 08:53:45PM -0600, Ron Johnson wrote:
 On 02/25/2009 07:22 PM, Douglas A. Tutty wrote:
 [snip]
 
 /proc/megaraid/hba0/raiddrives-0-9 
 Logical drive: 0:, state: optimal
 Span depth:  1, RAID level:  1, Stripe size: 64, Row size:  2
 Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO
 
 Logical drive: 1:, state: optimal
 Span depth:  0, RAID level:  0, Stripe size:128, Row size:  0
 Read Policy: No read ahead, Write Policy: Write thru, Cache Policy: Cached 
 IO
 
 Why is Read Ahead disabled on Logical Drive 1?

My understanding is that read ahead in this case refers to the ability
of the raid card to read ahead from one disk while a read is taking
place on another disk.  This only makes sense in a redundant raid level.
LD1 is raid0, so there is no other disk from which to read ahead.

If my understanding is off, I'd have to find it in the manual.

Doug.





Re: big machines running Debian?

2009-02-26 Thread Ron Peterson
2009-02-26_14:21:54-0500 Douglas A. Tutty dtu...@vianet.ca:
 On Wed, Feb 25, 2009 at 08:53:45PM -0600, Ron Johnson wrote:
  On 02/25/2009 07:22 PM, Douglas A. Tutty wrote:
  [snip]
  
  /proc/megaraid/hba0/raiddrives-0-9 
  Logical drive: 0:, state: optimal
  Span depth:  1, RAID level:  1, Stripe size: 64, Row size:  2
  Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO
  
  Logical drive: 1:, state: optimal
  Span depth:  0, RAID level:  0, Stripe size:128, Row size:  0
  Read Policy: No read ahead, Write Policy: Write thru, Cache Policy: Cached 
  IO
  
  Why is Read Ahead disabled on Logical Drive 1?
 
 My understanding is that read ahead in this case refers to the ability
 of the raid card to read ahead from one disk while a read is taking
 place on another disk.  This only makes sense in a redundant raid level.
 LD1 is raid0, so there is no other disk from which to read ahead.

My understanding is that read ahead means the controller reads more data
into memory than you asked for, expecting that the next bits you ask for
will be immediately after the ones you just got.

-- 
Ron Peterson
Network & Systems Manager
Mount Holyoke College
http://www.mtholyoke.edu/~rpeterso
-
I wish my computer would do what I want it to do - not what I tell it to do.





Re: big machines running Debian?

2009-02-26 Thread Lennart Sorensen
On Wed, Feb 25, 2009 at 05:37:12PM -0500, Douglas A. Tutty wrote:
 Why wouldn't you configure the raid controller to give you a small
 logical drive (with whatever raid config you want) for the OS, and the
 larger logical drive for your data (or for LVM for everything except /)?

Why should I do that?

This way I can resize any LVM I want to any size I want.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-26 Thread Lennart Sorensen
On Wed, Feb 25, 2009 at 05:10:58PM -0600, Ron Johnson wrote:
 On 02/25/2009 04:37 PM, Douglas A. Tutty wrote:
 Why wouldn't you configure the raid controller to give you a small
 logical drive (with whatever raid config you want) for the OS, and the
 larger logical drive for your data (or for LVM for everything except /)?

 I think it's because disk itself (which is what the boot loader sees) 
 is .gt. 2TB.

Well he is correct in that the raid controller could probably create
multiple volumes from the raid to expose as devices to the OS.  I find
that overly complicated and less flexible though.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-26 Thread Lennart Sorensen
On Thu, Feb 26, 2009 at 11:41:03AM +1100, Alex Samad wrote:
 This begs the question: why did you pick hardware raid over software raid?

You can boot from it no matter what (software raid can require interesting
tweaks to the boot loader setup to make it work).

Recovery can be transparent to the OS and be as simple as swapping out
the drive that failed.

You get nice hotswap bay LED control to show which drive has failed
(I imagine software could do this too, but I have never seen that
happen yet.)

 I have been a long supporter of software raid, but I find myself leaning
 towards a HP smart array 400 and using hardware raid (looking at 10
 disks in raid6).
 
 My current thoughts are why should I have 10 channels (4 of them come
 from 1 pcix card) when I could have 1 channel to the smart array. there
 seem to be a few cciss utilities for me to track the array 
 
 
 I am weighing this up against the ability to easily manage the array
 and do upgrades and change disks and monitor the individual disks

Some hardware raids have good support for monitoring under linux.
Some do not.  Having monitoring is quite important.

The biggest advantage to software raid is that it is hardware independent.
You can move all the disks to another controller type on another system,
and linux's software raid will still work.  Hardware raid setups are
often very specific to one controller type so recovery from a controller
failure can be tricky if you don't have access to spares.

-- 
Len Sorensen





Re: big machines running Debian?

2009-02-26 Thread Alex Samad
On Thu, Feb 26, 2009 at 03:38:49PM -0500, Lennart Sorensen wrote:
 On Thu, Feb 26, 2009 at 11:41:03AM +1100, Alex Samad wrote:
  This begs the question: why did you pick hardware raid over software raid?
 
 You can boot from it no matter what (software raid can require interesting
 tweaks to the boot loader setup to make it work).

my rule of thumb is to always have at least 2 partitions on the first 2
drives (3 if I have them), for a raid1 /boot and a raid1 /. the rest of
the space is put into a raid device then into lvm.  That gets rid of the
interesting tweaks.
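
The one tweak you can't really avoid is making sure either disk can
boot alone. With grub-legacy the usual recipe is something like this
(a sketch, device names made up) - install grub to the MBR of both
raid1 members, telling grub each time that the disk it is on is (hd0):

grub --batch <<EOF
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
EOF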


 
 Recovery can be transparent to the OS and be as simple as swapping out
 the drive that failed.

true

 
 You get nice hotswap bay LED control to show which drive has failed
 (I imagine software could do this too, but I have never seen that
 happen yet.)

true

 
  I have been a long supporter of software raid, but I find myself leaning
  towards a HP smart array 400 and using hardware raid (looking at 10
  disks in raid6).
  
  My current thoughts are why should I have 10 channels (4 of them come
  from 1 pcix card) when I could have 1 channel to the smart array. there
  seem to be a few cciss utilities for me to track the array 
  
  
  I am weighing this up against the ability to easily manage the array
  and do upgrades and change disks and monitor the individual disks
 
 Some hardware raids have good support for monitoring under linux.
 Some do not.  Having monitoring is quite important.

Is that monitoring of the raid drives or the actual drives underneath? I
like having smartctl to give me access to the actual drive health.

 
 The biggest advantage to software raid is that it is hardware independent.
 You can move all the disks to another controller type on another system,
 and linux's software raid will still work.  Hardware raid setups are
 often very specific to one controller type so recovery from a controller
 failure can be tricky if you don't have access to spares.

I have gone through a few cycles of changing the underlying drive sizes,
ie a 3 disk raid5 made up of 3 x 500GB and replacing in line with 3 x
1TB.  Pop 1 disk, replace with a 1TB one; once it has settled you can do
an online expansion.  Not sure if you can do that on a HW raid.
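
For the archives, one swap cycle looks roughly like this (a sketch,
device names made up; repeat per disk and let each resync finish
before touching the next one):

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
# swap the 500GB disk for a 1TB one, partition it, then:
mdadm /dev/md0 --add /dev/sda1
# after the last disk has resynced, grow the array and whatever
# sits on top of it:
mdadm --grow /dev/md0 --size=max
pvresize /dev/md0    # if md0 is an LVM PV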

 
 -- 
 Len Sorensen
 
 
 
 

-- 
I suspect that had my dad not been president, he'd be asking the same 
questions: How'd your meeting go with so-and-so? ... How did you feel when you 
stood up in front of the people for the State of the Union Address--state of 
the budget address, whatever you call it.

- George W. Bush
03/09/2001
in an interview with the Washington Post




Re: big machines running Debian?

2009-02-26 Thread Ron Johnson

On 02/26/2009 01:49 PM, Ron Peterson wrote:

2009-02-26_14:21:54-0500 Douglas A. Tutty dtu...@vianet.ca:

On Wed, Feb 25, 2009 at 08:53:45PM -0600, Ron Johnson wrote:

On 02/25/2009 07:22 PM, Douglas A. Tutty wrote:
[snip]
/proc/megaraid/hba0/raiddrives-0-9 
Logical drive: 0:, state: optimal

Span depth:  1, RAID level:  1, Stripe size: 64, Row size:  2
Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO

Logical drive: 1:, state: optimal
Span depth:  0, RAID level:  0, Stripe size:128, Row size:  0
Read Policy: No read ahead, Write Policy: Write thru, Cache Policy: Cached IO

Why is Read Ahead disabled on Logical Drive 1?

My understanding is that read ahead in this case refers to the ability
of the raid card to read ahead from one disk while a read is taking
place on another disk.  This only makes sense in a redundant raid level.
LD1 is raid0, so there is no other disk from which to read ahead.


My understanding is that read ahead means the controller reads more data
into memory than you asked for, expecting that the next bits you ask for
will be immediately after the ones you just got.



That *is* the standard definition.  Though there's nothing stopping 
Megaraid from being weird.


--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Ron Johnson

On 02/26/2009 02:51 PM, Alex Samad wrote:
[snip]


I have gone through a few cycles of changing the underlying drive sizes,
i.e. a 3-disk raid5 made up of 3 x 500GB, replaced in-line with 3 x
1TB.  Pop out 1 disk, replace it with a 1TB drive, and once it has
settled you can do an online expansion.  Not sure if you can do that on
a HW raid.


You used to not be able to.  Not sure about modern controllers.

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Douglas A. Tutty
On Thu, Feb 26, 2009 at 03:38:49PM -0500, Lennart Sorensen wrote:
 On Thu, Feb 26, 2009 at 11:41:03AM +1100, Alex Samad wrote:
 
 You get nice hotswap bay LED control to show which drive has failed
 (I imagine software could do this too, but I have never seen that
 happen yet.)

Since the status of each physical drive is included in /proc/megaraid/*,
software that monitors the appropriate files will be able to tell you.
This is one of the requirements for the software I'm designing.  It will
note when a drive isn't optimal and {syslog | email | wall | whatever}.
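
The core check is simple enough; a rough shell sketch of the idea (the
/proc path matches the listings earlier in the thread):

#!/bin/sh
# flag any logical drive whose state isn't "optimal"
if grep -i 'state:' /proc/megaraid/hba0/raiddrives-0-9 | grep -viq optimal
then
    logger -p daemon.alert "megaraid: logical drive not optimal"
fi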

Doug.


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Douglas A. Tutty
On Thu, Feb 26, 2009 at 03:34:22PM -0500, Lennart Sorensen wrote:
 On Wed, Feb 25, 2009 at 05:37:12PM -0500, Douglas A. Tutty wrote:
  Why wouldn't you configure the raid controller to give you a small
  logical drive (with whatever raid config you want) for the OS, and the
  larger logical drive for your data (or for LVM for everything except /)?
 
 Why should I do that?
 
 This way I can resize any LVM I want to any size I want.

The comparison wasn't between having the raid controller or LVM present
a reasonably sized /; it was between a reasonably sized / and a 2TB /.

Doug.


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Lennart Sorensen
On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:
 my rule of thumb is to always have at least 2 partitions on the first 2
 drives (3 if I have them), for a raid1 /boot and a raid1 /.  The rest of
 the space is put into a raid device and then into lvm.  That gets rid of
 the interesting tweaks.

Even with software raid1, setting up reliable boot from either drive
if one fails can be interesting, but it has gotten a lot better than it
used to be.
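
The usual tweak, assuming grub and a two-disk raid1, is simply to install
the boot loader onto both drives so that either one can boot on its own:

grub-install /dev/sda
grub-install /dev/sdb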

 is that monitoring of the raid volumes or of the actual drives underneath?
 I like having smartctl to give me access to the actual drive health

Well, monitoring of raid health would be the minimum.  Getting more
details would be nice.

  The biggest advantage of software raid is that it is hardware independent.
  You can move all the disks to another controller type on another system,
  and linux's software raid will still work.  Hardware raid setups are
  often very specific to one controller type, so recovery from a controller
  failure can be tricky if you don't have access to spares.
 
 I have gone through a few cycles of changing the underlying drive sizes,
 i.e. a 3-disk raid5 made up of 3 x 500GB, replaced in-line with 3 x
 1TB.  Pop out 1 disk, replace it with a 1TB drive, and once it has
 settled you can do an online expansion.  Not sure if you can do that on
 a HW raid.

Some hardware raids can do lots of things.  Some can do no resizing at
all.  I have certainly used hardware raid cards where adding a disk to a
raid5 and expanding it was no problem.  It just did it in the background,
and when done you could reboot the system and the disk was suddenly bigger,
and software could do whatever it wanted to resize to the new larger disk.
It also dealt with moving to larger disks in the raid by rebuilding one
drive at a time, and then when all were replaced you could increase
the raid to the size of the new disks.

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Lennart Sorensen
On Thu, Feb 26, 2009 at 05:42:43PM -0500, Douglas A. Tutty wrote:
 The comparison wasn't between having the raid controller or LVM present
 a reasonably sized /; it was between a reasonably sized / and a 2TB /.

No one ever wanted a 2TB /.  I just wanted / on a drive that was bigger
than 2TB and hence couldn't use DOS partition tables anymore.  I only
have a 10GB / :)  Making a 10GB raid volume and a separate raid volume
for the rest just to be able to use DOS partition tables for the /
is just awkward.
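
For reference, labelling the big device GPT with parted looks something
like this (device name hypothetical; grub2 wants a small bios_grub
partition on GPT when booting from BIOS):

parted /dev/sda mklabel gpt
parted /dev/sda mkpart primary 1MiB 3MiB        # room for grub2's core image
parted /dev/sda set 1 bios_grub on
parted /dev/sda mkpart primary ext3 3MiB 10GiB  # the 10GB /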

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Lennart Sorensen
On Thu, Feb 26, 2009 at 11:36:20AM +1100, Alex Samad wrote:
 true, depends on whose rule of thumb you use. I have seen places that
 mandate FC drives only in the data center - it gets very expensive when
 you want lots of disk space.
 
 Also, the disk space might not be needed for feeding across the network;
 DBs aren't the only thing that chews through disk space.
 
 The OP did specify enterprise; I was thinking very large enterprise, the
 sort of people who mandate SCSI- or SAS-only drives in their data centre.

Perhaps.  I think some people make hard rules where in fact they would
get a much better result by thinking instead for each case.  Of course
thinking can be hard, and it is much easier to just follow a hard rule
so you don't get in trouble for making a decision.

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Ron Johnson

On 02/26/2009 05:49 PM, Lennart Sorensen wrote:

On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:

my rule of thumb is to always have at least 2 partitions on the first 2
drives (3 if I have them), for a raid1 /boot and a raid1 /.  The rest of
the space is put into a raid device and then into lvm.  That gets rid of
the interesting tweaks.


Even with software raid1, setting up reliable boot from either drive
if one fails can be interesting, but it has gotten a lot better than it
used to be.


is that monitoring of the raid volumes or of the actual drives underneath?
I like having smartctl to give me access to the actual drive health


Well, monitoring of raid health would be the minimum.  Getting more
details would be nice.



As would auto-replacement of bad drives by hot spares.

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Ron Johnson

On 02/26/2009 05:54 PM, Lennart Sorensen wrote:

On Thu, Feb 26, 2009 at 11:36:20AM +1100, Alex Samad wrote:

true, depends on whose rule of thumb you use. I have seen places that
mandate FC drives only in the data center - it gets very expensive when
you want lots of disk space.

Also, the disk space might not be needed for feeding across the network;
DBs aren't the only thing that chews through disk space.

The OP did specify enterprise; I was thinking very large enterprise, the
sort of people who mandate SCSI- or SAS-only drives in their data centre.


Perhaps.  I think some people make hard rules where in fact they would
get a much better result by thinking instead for each case.  Of course
thinking can be hard, and it is much easier to just follow a hard rule
so you don't get in trouble for making a decision.



Ehh.

Most DC managers have a bit more clue, and better reasons than simply 
rules for rules' sake.


Mainly logistics: if all the center's disks are SAS (or whatever 
other standard you choose) in only one or two vendors' SANs (or 
whatever other cabinet you choose), it makes the Operations staff's 
job a whole lot easier, thus helping to ensure greater uptime.


--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-26 Thread Alex Samad
On Thu, Feb 26, 2009 at 06:49:38PM -0500, Lennart Sorensen wrote:
 On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:
  my rule of thumb is to always have at least 2 partitions on the first 2

[snip]

 
 Some hardware raids can do lots of things.  Some can do no resizing at
 all.  I have certainly used hardware raid cards where adding a disk to a
 raid5 and expanding it was no problem.  It just did it in the background,
 and when done you could reboot the system and the disk was suddenly bigger,
 and software could do whatever it wanted to resize to the new larger disk.
 It also dealt with moving to larger disks in the raid by rebuilding one
 drive at a time, and then when all were replaced you could increase
 the raid to the size of the new disks.

Interesting; well, I guess you get what you pay for (presuming the more
you pay, the better the card and the more features).  I haven't seen any
hardware-based raid controllers that allow for increasing the size of the
underlying disks.


 
 -- 
 Len Sorensen
 
 
 -- 
 To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
 
 

-- 
Life is too short to be taken seriously.
-- Oscar Wilde




Re: big machines running Debian?

2009-02-26 Thread Alex Samad
On Thu, Feb 26, 2009 at 06:06:07PM -0600, Ron Johnson wrote:
 On 02/26/2009 05:54 PM, Lennart Sorensen wrote:
 On Thu, Feb 26, 2009 at 11:36:20AM +1100, Alex Samad wrote:

[snip]


 Perhaps.  I think some people make hard rules where in fact they would
 get a much better result by thinking instead for each case.  Of course
 thinking can be hard, and it is much easier to just follow a hard rule
 so you don't get in trouble for making a decision.


 Ehh.

 Most DC managers have a bit more clue, and better reasons than simply
 rules for rules' sake.

 Mainly logistics: if all the center's disks are SAS (or whatever other
 standard you choose) in only one or two vendors' SANs (or whatever other
 cabinet you choose), it makes the Operations staff's job a whole lot
 easier, thus helping to ensure greater uptime.

For a large site, server owners would ask for space and not for disks; it
would be up to the storage guys to provide it, and they, like everyone
else, like to KISS and thus usually standardise.


 -- 
 Ron Johnson, Jr.
 Jefferson LA  USA

 The feeling of disgust at seeing a human female in a Relationship
 with a chimp male is Homininphobia, and you should be ashamed of
 yourself.


 -- 
 To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



-- 
Why don't you ever enter any CONTESTS, Marvin??  Don't you know your
own ZIPCODE?




Re: big machines running Debian?

2009-02-26 Thread Ron Johnson

On 02/26/2009 10:27 PM, Alex Samad wrote:

On Thu, Feb 26, 2009 at 06:06:07PM -0600, Ron Johnson wrote:

On 02/26/2009 05:54 PM, Lennart Sorensen wrote:

On Thu, Feb 26, 2009 at 11:36:20AM +1100, Alex Samad wrote:


[snip]


Perhaps.  I think some people make hard rules where in fact they would
get a much better result by thinking instead for each case.  Of course
thinking can be hard, and it is much easier to just follow a hard rule
so you don't get in trouble for making a decision.


Ehh.

Most DC managers have a bit more clue, and better reasons than simply rules 
for rules' sake.


Mainly logistics: if all the center's disks are SAS (or whatever other 
standard you choose) in only one or two vendors' SANs (or whatever other 
cabinet you choose), it makes the Operations staff's job a whole lot 
easier, thus helping to ensure greater uptime.


For a large site, server owners would ask for space and not for disks; it
would be up to the storage guys to provide it, and they, like everyone
else, like to KISS and thus usually standardise.



Very true.  Also the server owners don't like to pay a lot, so 
there's negotiation and needs clarification...


--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Goswin von Brederlow
Umarzuki Mochlis umarz...@gmail.com writes:

 2009/2/21 Igor Támara i...@tamarapatino.org

   Hi, at some Datacenter here on my country they only want the
  machines to be installed with RHEL or Suse, every time I dig more
  into those distros I fall in love more with Debian.  This is why I'm
  asking about machines that have many cores and lots of RAM and
  plenty of disk.
  
  
  Here (at my country) big means more than 4x4 cores , more than
  16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN
  are good to know about.


 If Red Hat can support it, I don't think that there's any reason Debian
 couldn't.

I think the limit is 1024 cores. Or was that fixed to allow more?

As for ram that really is a cpu/architecture limit and you won't be
able to find a motherboard that supports as much ram as the cpu(s)
could handle.

More than 1TB on disk? Doh. 1TB fits on a single disk. Anything up to
16 TB is quite trivial. Beyond that you start to hit the limit on
filesystem size with ext3 and have to use xfs or ext4 or something. Or
you have to partition the space into 16TB chunks. Also, more than 16 disks
requires a big enough case or external storage. For that I would look
into external enclosures with a SAS connector. Don't use SCSI or FC.

MfG
Goswin


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Lennart Sorensen
On Wed, Feb 25, 2009 at 04:07:54PM +0100, Goswin von Brederlow wrote:
 I think the limit is 1024 cores. Or was that fixed to allow more?

I think people are working on that, but not too many machines need
that yet.  Most machines with that many cores are clusters and hence
run multiple linux instances.

 As for ram that really is a cpu/architecture limit and you won't be
 able to find a motherboard that supports as much ram as the cpu(s)
 could handle.

I can't remember if the current amd64/x86_64 architecture limit is 40-bit
or 44-bit of physical memory space.  Fairly decent chunk of RAM either way,
if you can fit it into the system in the first place.
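
For scale: 2^40 bytes is 1 TiB and 2^44 bytes is 16 TiB of physical
address space, so either limit is well beyond what boards actually take.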

 More than 1TB on disk? Doh. 1TB fits on a single disk. Anything up to
 16 TB is quite trivial. Beyond that you start to hit the limit on
 filesystem size with ext3 and have to use xfs or ext4 or something. Or
 you have to partition the space into 16TB chunks. Also, more than 16 disks
 requires a big enough case or external storage. For that I would look
 into external enclosures with a SAS connector. Don't use SCSI or FC.

Well at 2TB you have to switch from DOS style partition tables to GPT,
which requires the use of grub2 rather than lilo or grub, but works
fine otherwise.

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Mattias Wadenstein

On Wed, 25 Feb 2009, Lennart Sorensen wrote:


On Wed, Feb 25, 2009 at 04:07:54PM +0100, Goswin von Brederlow wrote:

More than 1TB on disk? Doh. 1TB fits on a single disk. Anything up to
16 TB is quite trivial. Beyond that you start to hit the limit on
filesystem size with ext3 and have to use xfs or ext4 or something. Or
you have to partition the space into 16TB chunks. Also, more than 16 disks
requires a big enough case or external storage. For that I would look
into external enclosures with a SAS connector. Don't use SCSI or FC.


Well at 2TB you have to switch from DOS style partition tables to GPT,
which requires the use of grub2 rather than lilo or grub, but works
fine otherwise.


Only if you want partitions; we usually don't for large data filesystems 
where the large filesystem sizes are relevant.


As Goswin mentioned, you probably want to look at a different filesystem 
than ext3 for non-trivial fs sizes, not only due to limits but also 
performance.


/Mattias Wadenstein


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Lennart Sorensen
On Wed, Feb 25, 2009 at 04:51:44PM +0100, Mattias Wadenstein wrote:
 Only if you want partitions; we usually don't for large data filesystems
 where the large filesystem sizes are relevant.

If you have a separate OS disk, then sure, partitions are not necessary,
and even LVM and such have no need for partitions.
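
In that case LVM goes straight onto the raw devices, e.g. (device names
hypothetical):

pvcreate /dev/sdb                    # no partition table at all
vgcreate data /dev/sdb
lvcreate -n scratch -l 100%FREE data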

 As Goswin mentioned, you probably want to look at a different filesystem
 than ext3 for non-trivial fs sizes, not only due to limits but also
 performance.

Certainly true.  I am not having issues with ext3 on 2TB filesystems,
but 16TB might be a different story.

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Ron Johnson

On 02/25/2009 09:14 AM, Lennart Sorensen wrote:
[snip]


Well at 2TB you have to switch from DOS style partition tables to GPT,
which requires the use of grub2 rather than lilo or grub, but works
fine otherwise.



Who boots off of (or puts / on) a 2TB partition?

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Lennart Sorensen
On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:
 Who boots off of (or puts / on) a 2TB partition?

Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
drives.  Hence the only drive in the system is a 2.25TB device with
partitions and everything on it.  The root partition isn't very big,
but it's on a drive that is bigger than 2TB and hence needs something
other than a DOS partition table.

It doesn't take much with modern SATA drives to hit 2TB.  Given we can
get 1.5TB in a single drive, how many months before we can get 2TB in
a single disk?

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Alex Samad
On Wed, Feb 25, 2009 at 04:07:54PM +0100, Goswin von Brederlow wrote:
 Umarzuki Mochlis umarz...@gmail.com writes:
 
  2009/2/21 Igor Támara [[i...@tamarapatino.org]]
 
Hi, at some Datacenter here on my country they only want the
   machines to be installed with RHEL or Suse, every time I dig more
   into those distros I fall in love more with Debian.  This is why I'm
   asking about machines that have many cores and lots of RAM and
   plenty of disk.
   
   
   Here (at my country) big means more than 4x4 cores , more than
   16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN
   are good to know about.
 
 
  If Red Hat can support it, I don't think that there's any reason Debian
  couldn't.
 
 I think the limit is 1024 cores. Or was that fixed to allow more?
 
 As for ram that really is a cpu/architecture limit and you won't be
 able to find a motherboard that supports as much ram as the cpu(s)
 could handle.
 
 More than 1TB on disk? Doh. 1TB fits on a single disk. Anything up to

Most enterprise sites don't use 1TB-sized disks; if you want performance
you go for spindles.  There might be 8 disks (number pulled from the air,
based on raid6 + spares) behind 1TB of presented space.

 16 TB is quite trivial. Beyond that you start to hit the limit on
 filesystem size with ext3 and have to use xfs or ext4 or something. Or
 you have to partition the space into 16TB chunks. Also, more than 16 disks
 requires a big enough case or external storage. For that I would look
 into external enclosures with a SAS connector. Don't use SCSI or FC.
 
 MfG
 Goswin
 
 
 -- 
 To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
 
 

-- 
If the terriers and bariffs are torn down, this economy will grow.

- George W. Bush
01/01/2000




Re: big machines running Debian?

2009-02-25 Thread Lennart Sorensen
On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:
 Most enterprise sites don't use 1TB-sized disks; if you want performance
 you go for spindles.  There might be 8 disks (number pulled from the air,
 based on raid6 + spares) behind 1TB of presented space.

And if you want disk space and are serving across a 1Gbit ethernet link,
you don't give a damn about spindles and go for cheap abundant storage,
which means SATA.

Not everyone is running a database server.  Some people just have files.

Raid5/6 of a few SATA drives can easily saturate 1Gbit.  And for a very
small fraction of the cost of SAS drives.
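
The back-of-envelope numbers: 1Gbit/s is roughly 120MB/s on the wire, and
a current SATA drive streams at maybe 80-100MB/s, so three or four
spindles in raid5/6 have sequential throughput to spare.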

-- 
Len Sorensen


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Ron Johnson

On 02/25/2009 04:26 PM, Ian McDonald wrote:

Ron Johnson wrote:

On 02/25/2009 03:48 PM, Lennart Sorensen wrote:

On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:

Who boots off of (or puts / on) a 2TB partition?


Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
drives.  Hence the only drive in the system is a 2.25TB device with
partitions and everything on it.  The root partition isn't very big,
but it's on a drive that is bigger than 2TB and hence needs something
other than a DOS partition table.



Ah.  The minicomputer tradition I come from (and thus how I organized 
my home PC) is to have a relatively small OS/swap disk and a separate 
data array.


Of course, max device size always gets bigger, and smaller devices 
fall off the market...



It doesn't take much with modern SATA drives to hit 2TB.  Given we can
get 1.5TB in a single drive, how many months before we can get 2TB in
a single disk?



Later this year.



Last month, actually...

http://www.reghardware.co.uk/2009/01/27/review_internal_hard_drive_wd_caviar_green_2tb/ 


And at only a 15% premium to two 1TB drives...

http://www.newegg.com/Product/Product.aspx?Item=N82E16822136337
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136344

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Douglas A. Tutty
On Wed, Feb 25, 2009 at 04:48:30PM -0500, Lennart Sorensen wrote:
 On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:
  Who boots off of (or puts / on) a 2TB partition?
 
 Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
 drives.  Hence the only drive in the system is a 2.25TB device with
 partitions and everything on it.  The root partition isn't very big,
 but it's on a drive that is bigger than 2TB and hence needs something
 other than a DOS partition table.

Why wouldn't you configure the raid controller to give you a small
logical drive (with whatever raid config you want) for the OS, and the
larger logical drive for your data (or for LVM for everything except /)?

Doug.


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Ron Johnson

On 02/25/2009 04:37 PM, Douglas A. Tutty wrote:

On Wed, Feb 25, 2009 at 04:48:30PM -0500, Lennart Sorensen wrote:

On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:

Who boots off of (or puts / on) a 2TB partition?

Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
drives.  Hence the only drive in the system is a 2.25TB device with
partitions and everything on it.  The root partition isn't very big,
but it's on a drive that is bigger than 2TB and hence needs something
other than a DOS partition table.


Why wouldn't you configure the raid controller to give you a small
logical drive (with whatever raid config you want) for the OS, and the
larger logical drive for your data (or for LVM for everything except /)?


I think it's because the disk itself (which is what the boot loader 
sees) is .gt. 2TB.



--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Douglas A. Tutty
On Wed, Feb 25, 2009 at 05:10:58PM -0600, Ron Johnson wrote:
 On 02/25/2009 04:37 PM, Douglas A. Tutty wrote:
 On Wed, Feb 25, 2009 at 04:48:30PM -0500, Lennart Sorensen wrote:
 On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:
 Who boots off of (or puts / on) a 2TB partition?
 Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
 drives.  Hence the only drive in the system is a 2.25TB device with
 partitions and everything on it.  The root partition isn't very big,
 but it's on a drive that is bigger than 2TB and hence needs something
 other than a DOS partition table.
 
 Why wouldn't you configure the raid controller to give you a small
 logical drive (with whatever raid config you want) for the OS, and the
 larger logical drive for your data (or for LVM for everything except /)?
 
 I think it's because the disk itself (which is what the boot loader
 sees) is .gt. 2TB.
 
Not with my NetRaid card.  It takes the physical disks and assembles
them into virtual disks which appear to the OS as sd* of whatever size.

Info on the underlying physical drives (and the virtual disks and the
controller) shows up under /proc/megaraid.

Here's dmesg | grep -i scsi:

SCSI subsystem initialized
scsi0:Found MegaRAID controller at 0xf8814000, IRQ:177
scsi0 : LSI Logic MegaRAID F  254 commands 16 targs 4 chans 7 luns
scsi0: scanning scsi channel 0 for logical drives.
  Type:   Direct-Access  ANSI SCSI revision: 02
  Type:   Direct-Access  ANSI SCSI revision: 02
scsi0: scanning scsi channel 4 [P0] for physical devices.
SCSI device sda: 5120 512-byte hdwr sectors (26214 MB)
SCSI device sda: 5120 512-byte hdwr sectors (26214 MB)
sym0: SCSI BUS has been reset.
sym0: SCSI BUS mode change from SE to SE.
scsi1 : sym-2.2.3
sym0: SCSI BUS has been reset.
sd 0:0:0:0: Attached scsi disk sda
SCSI device sdb: 39858176 512-byte hdwr sectors (20407 MB)
SCSI device sdb: 39858176 512-byte hdwr sectors (20407 MB)
sd 0:0:1:0: Attached scsi disk sdb


The megaraid controller shows up as a scsi hba with two drives (sda,
sdb) on it.  In this case, sda is a raid1 array and sdb is a raid0
array; the OS knows nothing about this, however.

Doug.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Alex Samad
On Wed, Feb 25, 2009 at 07:08:13PM -0500, Douglas A. Tutty wrote:
 On Wed, Feb 25, 2009 at 05:10:58PM -0600, Ron Johnson wrote:
  On 02/25/2009 04:37 PM, Douglas A. Tutty wrote:
  On Wed, Feb 25, 2009 at 04:48:30PM -0500, Lennart Sorensen wrote:

[snip]

  
 Not with my NetRaid card.  It takes the physical disks and assembles
 them into virtual disks which appear to the OS as sd* of whatever size.
 

This begs the question: why did you pick hardware raid over software
raid?

I have been a long-time supporter of software raid, but I find myself
leaning towards an HP Smart Array 400 and using hardware raid (looking
at 10 disks in raid6).

My current thought is: why should I have 10 channels (4 of them coming
from one PCI-X card) when I could have one channel to the Smart Array?
There seem to be a few cciss utilities for me to track the array.


I am weighing this up against the ability to easily manage the array,
do upgrades, change disks, and monitor the individual disks.



[snip]

 Doug.
 
 
 --
 To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
 with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
 
 

-- 
Security is the essential roadblock to achieving the road map to peace.

- George W. Bush
07/25/2003
Washington, DC




Re: big machines running Debian?

2009-02-25 Thread Douglas A. Tutty
On Thu, Feb 26, 2009 at 11:41:03AM +1100, Alex Samad wrote:
 On Wed, Feb 25, 2009 at 07:08:13PM -0500, Douglas A. Tutty wrote:
  On Wed, Feb 25, 2009 at 05:10:58PM -0600, Ron Johnson wrote:
   On 02/25/2009 04:37 PM, Douglas A. Tutty wrote:
   On Wed, Feb 25, 2009 at 04:48:30PM -0500, Lennart Sorensen wrote:
 
 [snip]
   
  Not with my NetRaid card.  It takes the physical disks and assembles
  them into virtual disks which appear to the OS as sd* of whatever size.
  
 
 This begs the question: why did you pick hardware raid over software raid?

The HP NetServer LPr PII-450 boxes (4) I bought came with:
two CPUs
1 GB ram
two 72 GB SCSI drives (hot-swap)
and the HP NetRaid-1si card.

All for, IIRC, $65 CDN.

 I have been a long-time supporter of software raid, but I find myself
 leaning towards an HP Smart Array 400 and using hardware raid (looking
 at 10 disks in raid6).
 
 My current thought is: why should I have 10 channels (4 of them coming
 from one PCI-X card) when I could have one channel to the Smart Array?
 There seem to be a few cciss utilities for me to track the array.

All the status info ends up in /proc/megaraid.  True, there aren't any
utilities to monitor it.  The card does have an alarm, although it isn't
testable under the Linux Megaraid driver (it is under OpenBSD's driver).

I'm starting work on a monitoring program.  Actually, I'm using it as an
exercise to refresh my structured analysis and design technique skills
(they're over 20 years rusty [God, has it been that long? I'm getting old]).
I'll do it in Ada, modularized so that if the proc interface changes I
only have to change that module.

 I am weighing this up against the ability to easily manage the array,
 do upgrades, change disks, and monitor the individual disks.

Sure; you can't (under Linux) make new arrays on the card (it requires
booting into the BIOS), although there is supposed to be a DOS program; I
wonder if it can run under one of the DOS emulators for Linux.

The individual disks can be monitored via the /proc interface.

/proc/megaraid:
hba0/

/proc/megaraid/hba0:
battery-status
config
diskdrives-ch0
diskdrives-ch1
diskdrives-ch2
diskdrives-ch3
mailbox
raiddrives-0-9
raiddrives-10-19
raiddrives-20-29
raiddrives-30-39
rebuild-rate
stat

/proc/megaraid/hba0/diskdrives-ch0
Channel: 0 Id: 0 State: Online.
  Vendor: HP    Model: 36.4GB C 80-D94N  Rev: D94N
  Type:   Direct-Access  ANSI SCSI revision: 02
Channel: 0 Id: 1 State: Online.
  Vendor: HP    Model: 36.4GB C 80-D94N  Rev: D94N
  Type:   Direct-Access  ANSI SCSI revision: 02

/proc/megaraid/hba0/raiddrives-0-9 
Logical drive: 0:, state: optimal
Span depth:  1, RAID level:  1, Stripe size: 64, Row size:  2
Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO

Logical drive: 1:, state: optimal
Span depth:  0, RAID level:  0, Stripe size:128, Row size:  0
Read Policy: No read ahead, Write Policy: Write thru, Cache Policy: Cached IO


This shows that both hard drives are Online.  Both Raid drives are in an
optimal state.

I hope this helps your decision making.

If you want to send me your ideas for the requirements for the monitoring
software, I'll incorporate them in my system analysis.

Thanks,

Doug.


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-25 Thread Ron Johnson

On 02/25/2009 07:22 PM, Douglas A. Tutty wrote:
[snip]


/proc/megaraid/hba0/raiddrives-0-9 
Logical drive: 0:, state: optimal

Span depth:  1, RAID level:  1, Stripe size: 64, Row size:  2
Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO

Logical drive: 1:, state: optimal
Span depth:  0, RAID level:  0, Stripe size:128, Row size:  0
Read Policy: No read ahead, Write Policy: Write thru, Cache Policy: Cached IO


Why is Read Ahead disabled on Logical Drive 1?

--
Ron Johnson, Jr.
Jefferson LA  USA

The feeling of disgust at seeing a human female in a Relationship
with a chimp male is Homininphobia, and you should be ashamed of
yourself.


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-24 Thread Igor Támara
Hi, 
Dave On Saturday, 21.02.2009 at 08:00 -0500, Igor Támara wrote:
Dave 
Dave  Hi, at some Datacenter here on my country they only want the machines
Dave  to be installed with RHEL or Suse, every time I dig more into those
Dave  distros I fall in love more with Debian.  This is why I'm asking about
Dave  machines that have many cores and lots of RAM and plenty of disk.
Dave 
Dave We're running a Dell R905 server: four quad-core CPUs, 128GB RAM, with
Dave an attached Dell Powervault storage system running off Dell's PERC 6/E
Dave controller.
Dave 
Dave This basically Just Works under Debian Etch (and, I suspect, Lenny too).
Dave 

Thanks Peter and Dave; the reports on both machines are really
important.  Thanks a lot for letting us know that, as one could
suspect, Debian can be used in such environments.

thank you all.

Dave Dave.
Dave 
Dave -- 
Dave Dave Ewart
Dave da...@ceu.ox.ac.uk
Dave Computing Manager, Cancer Epidemiology Unit
Dave University of Oxford / Cancer Research UK
Dave PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370
Dave Get key from http://www.ceu.ox.ac.uk/~davee/davee-ceu-ox-ac-uk.asc
Dave N 51.7516, W 1.2152



-- 
I recommend Audacity for audio editing
http://audacity.sourceforge.net




Re: big machines running Debian?

2009-02-24 Thread Kyuu Eturautti

Igor Támara wrote:
Here (at my country) big means more than 4x4 cores , more than 
16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN

are good to know about.
  
Good experiences with IBM blades, a DS4200 SAN and Qlogic FC adapters. No 
Debian-friendly SAN/FC multipath support available from IBM, but the 
Debian package multipath-tools managed everything fine eventually. My 
educated guess is that the solution is by no means limited to IBM SANs.
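
For anyone heading down the same path, the day-to-day checks once
multipath-tools is set up are something like (a sketch; output shape
varies by array):

multipath -ll    # list the multipathed LUNs and the state of each path
multipath -v2    # rescan and rebuild the maps after config changes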



-Kyuu


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-24 Thread Alex Samad
On Tue, Feb 24, 2009 at 02:28:11PM -0500, Igor Támara wrote:
 Hi, 
 Dave On Saturday, 21.02.2009 at 08:00 -0500, Igor Támara wrote:
 Dave 
 Dave  Hi, at some Datacenter here on my country they only want the machines
 Dave  to be installed with RHEL or Suse, every time I dig more into those
 Dave  distros I fall in love more with Debian.  This is why I'm asking about
 Dave  machines that have many cores and lots of RAM and plenty of disk.
 Dave 
 Dave We're running a Dell R905 server: four quad-core CPUs, 128GB RAM, with
 Dave an attached Dell Powervault storage system running off Dell's PERC 6/E
 Dave controller.
 Dave 
 Dave This basically Just Works under Debian Etch (and, I suspect, Lenny too).
 Dave 
 
 Thanks Peter and Dave; the reports on both machines are really
 important.  Thanks a lot for letting us know that, as one could
 suspect, Debian can be used in such environments.

I have an HP DL785 - 8-socket quad-core AMD with 64G of memory - installed
off a Debian installer snapshot. I have it attached to an HP EVA8000 (8T) + HP
XP (4T), both with multipath-tools managing the SAN.  It boots off local
disks on Smart Array controllers.

I had some issues with their PSP (support pack), but that was
because they were waiting for Debian 5 to release.


 
 thank you all.
 
 Dave Dave.
 Dave 
 Dave -- 
 Dave Dave Ewart
 Dave da...@ceu.ox.ac.uk
 Dave Computing Manager, Cancer Epidemiology Unit
 Dave University of Oxford / Cancer Research UK
 Dave PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370
 Dave Get key from http://www.ceu.ox.ac.uk/~davee/davee-ceu-ox-ac-uk.asc
 Dave N 51.7516, W 1.2152
 
 
 
 -- 
 I recommend Audacity for audio editing
 http://audacity.sourceforge.net



-- 
Bill wrote a book at Yale. I read one.

- George W. Bush
10/19/2000
New York City, NY
on William F. Buckley, Al Smith Dinner




Re: big machines running Debian?

2009-02-23 Thread Dave Ewart
On Saturday, 21.02.2009 at 08:00 -0500, Igor Támara wrote:

 Hi, at some Datacenter here on my country they only want the machines
 to be installed with RHEL or Suse, every time I dig more into those
 distros I fall in love more with Debian.  This is why I'm asking about
 machines that have many cores and lots of RAM and plenty of disk.

We're running a Dell R905 server: four quad-core CPUs, 128GB RAM, with
an attached Dell Powervault storage system running off Dell's PERC 6/E
controller.

This basically Just Works under Debian Etch (and, I suspect, Lenny too).

Dave.

-- 
Dave Ewart
da...@ceu.ox.ac.uk
Computing Manager, Cancer Epidemiology Unit
University of Oxford / Cancer Research UK
PGP: CC70 1883 BD92 E665 B840 118B 6E94 2CFD 694D E370
Get key from http://www.ceu.ox.ac.uk/~davee/davee-ceu-ox-ac-uk.asc
N 51.7516, W 1.2152




big machines running Debian?

2009-02-21 Thread Igor Támara
Hi, at some Datacenter here on my country they only want the
machines to be installed with RHEL or Suse, every time I dig more
into those distros I fall in love more with Debian.  This is why I'm
asking about machines that have many cores and lots of RAM and
plenty of disk.


Here (at my country) big means more than 4x4 cores , more than 
16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN
are good to know about.

Is there a place where one can post the machines, to build some feeling
of trust for others?  I'm using Debian from about 2000 and have had
the opportunity to use the sparc, powerpc, x86 and AMD64 ports, and never
had to go back to another distro.  I'm really happy with Debian, so
I want to use it in as many places as possible.

Thanks in advance for any information.

-- 
I recommend images from OpenClipart
http://www.openclipart.org




Re: big machines running Debian?

2009-02-21 Thread Umarzuki Mochlis
2009/2/21 Igor Támara i...@tamarapatino.org

 Hi, at some Datacenter here on my country they only want the
 machines to be installed with RHEL or Suse, every time I dig more
 into those distros I fall in love more with Debian.  This is why I'm
 asking about machines that have many cores and lots of RAM and
 plenty of disk.


 Here (at my country) big means more than 4x4 cores , more than
 16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN
 are good to know about.

If Red Hat can support it, I don't think that there's any reason Debian
couldn't.



 Is there a place where one can post the machines, to build some feeling
 of trust for others?  I'm using Debian from about 2000 and have had
 the opportunity to use the sparc, powerpc, x86 and AMD64 ports, and never
 had to go back to another distro.  I'm really happy with Debian, so
 I want to use it in as many places as possible.

 Thanks in advance for any information.



 --
 I recommend images from OpenClipart
 http://www.openclipart.org





-- 
Regards,

Umarzuki Mochlis
http://gameornot.net


Re: big machines running Debian?

2009-02-21 Thread Nuno Magalhães
I don't know about their size specs, but both linode and slicehost let
you set up your own distro, mostly coloc though.

Nuno Magalhães
LU#484677


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-21 Thread Nelson Castillo
On Sat, Feb 21, 2009 at 9:00 PM, Igor Támara i...@tamarapatino.org wrote:
 Hi, at some Datacenter here on my country they only want the
 machines to be installed with RHEL or Suse, every time I dig more
 into those distros I fall in love more with Debian.  This is why I'm
 asking about machines that have many cores and lots of RAM and
 plenty of disk.

Hi Igor, nice to meet you.

It is very good to know that you also like Debian.

I think this depends on the kernel, so it really shouldn't matter which
distribution you use.

We have NPTL now for threads, so I think things should be just fine.

Best regards,
Nelson.-


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: big machines running Debian?

2009-02-21 Thread thunder7
From: Igor Támara i...@tamarapatino.org
Date: Sat, Feb 21, 2009 at 08:00:32AM -0500
 Hi, at some Datacenter here on my country they only want the
 machines to be installed with RHEL or Suse, every time I dig more
 into those distros I fall in love more with Debian.  This is why I'm
 asking about machines that have many cores and lots of RAM and
 plenty of disk.
 
 
 Here (at my country) big means more than 4x4 cores , more than 
 16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN
 are good to know about.
 
My hosting provider, dreamhost, runs Debian. Individual machines may not
meet your specs, but they do have a lot of them.

Jurriaan
-- 
 What does ELF stand for (in respect to Linux?)
ELF is the first rock group that Ronnie James Dio performed with back in 
the early 1970's.  In contrast, a.out is a misspelling of the French word
for the month of August.  What the two have in common is beyond me, but 
Linux users seem to use the two words together.
seen on c.o.l.misc


-- 
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



RE: big machines running Debian?

2009-02-21 Thread Peter Yorke
Here at Vulcan

We run some 2x4-processor, 32GB-RAM, 2TB-array machines via fibre channel 
without a problem under Debian Lenny.

Peter Yorke

From: Igor Támara [i...@tamarapatino.org]
Sent: Saturday, February 21, 2009 5:00 AM
To: debian-amd64@lists.debian.org
Subject: big machines running Debian?

Hi, at some Datacenter here on my country they only want the
machines to be installed with RHEL or Suse, every time I dig more
into those distros I fall in love more with Debian.  This is why I'm
asking about machines that have many cores and lots of RAM and
plenty of disk.


Here (at my country) big means more than 4x4 cores , more than
16Gb of RAM, and more than 1Tb on disk, excluding clusters, also SAN
are good to know about.

Is there a place where one can post the machines, to build some feeling
of trust for others?  I'm using Debian from about 2000 and have had
the opportunity to use the sparc, powerpc, x86 and AMD64 ports, and never
had to go back to another distro.  I'm really happy with Debian, so
I want to use it in as many places as possible.

Thanks in advance for any information.

--
I recommend images from OpenClipart
http://www.openclipart.org


--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org