Hi, at some datacenters here in my country they only want the
machines to be installed with RHEL or SUSE. Every time I dig more
into those distros, I fall more in love with Debian. This is why I'm
asking about machines that have many cores and lots of RAM and
plenty of disk.
Here (in my country) big means more than 4x4 cores, more than
16 GB of RAM, and more than 1 TB on disk, excluding clusters; SANs
are also good to know about.
2009/2/21 Igor Támara
I don't know about their size specs, but both linode and slicehost let
you set up your own distro, mostly coloc though.
Nuno Magalhães
LU#484677
--
To UNSUBSCRIBE, email to debian-amd64-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Subject: "big" machines running Debian?
Igor Támara wrote:
Here (in my country) big means more than 4x4 cores, more than
16 GB of RAM, and more than 1 TB on disk, excluding clusters; SANs
are also good to know about.
Good experiences with IBM blades, DS4200 SAN and Qlogic FC adapters. No
Debian-friendly SAN/FC multipath support available.
On Wed, Feb 25, 2009 at 04:07:54PM +0100, Goswin von Brederlow wrote:
> I think the limit is 1024 cores. Or was that fixed to allow more?
I think people are working on that, but not too many machines need
that yet. Most machines with that many cores are clusters and hence
run multiple Linux instances.
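For reference, the ceiling being discussed is the kernel's compile-time CONFIG_NR_CPUS option; a sketch of the relevant lines in a 2.6-era x86_64 .config (the values shown are illustrative only, and vary by kernel version):

```
# Maximum number of CPUs the kernel will support (compile-time limit).
CONFIG_NR_CPUS=512
# MAXSMP (on x86_64 since 2.6.27) raises NR_CPUS to 4096:
# CONFIG_MAXSMP=y
```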
On Wed, 25 Feb 2009, Lennart Sorensen wrote:
On Wed, Feb 25, 2009 at 04:07:54PM +0100, Goswin von Brederlow wrote:
More than 1TB on disk? Doh. 1TB fits on a single disk. Anything up to
16 TB is quite trivial. Beyond that you start to hit the limit on
filesystem size with ext3 and have to use xfs.
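For a filesystem past ext3's size limit, the switch to xfs is essentially a one-liner; a hedged sketch (the device name /dev/md0 is made up, mkfs is destructive and needs root, so this is illustration only):

```shell
# Format a large array with XFS and mount it (requires root;
# destroys existing data on the hypothetical device /dev/md0).
mkfs.xfs -L bigdata /dev/md0
mkdir -p /srv/bigdata
mount -t xfs /dev/md0 /srv/bigdata
df -h /srv/bigdata
```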
On Wed, Feb 25, 2009 at 04:51:44PM +0100, Mattias Wadenstein wrote:
> Only if you want partitions; we usually don't for large data filesystems
> where the large filesystem sizes are relevant.
If you have a separate OS disk, then sure, partitions are not necessary,
and even LVM and such have no need to be involved.
On 02/25/2009 09:14 AM, Lennart Sorensen wrote:
[snip]
Well, at 2TB you have to switch from DOS-style partition tables to GPT,
which requires the use of grub2 rather than lilo or grub, but works
fine otherwise.
Who boots off of (or puts / on) a 2TB partition?
--
Ron Johnson, Jr.
Jefferson LA
On Wed, Feb 25, 2009 at 02:55:09PM -0600, Ron Johnson wrote:
> Who boots off of (or puts / on) a 2TB partition?
Someone with a 4 drive raid5 on a hardware controller with 750GB SATA
drives. Hence the only drive in the system is a 2.25TB device with
partitions and everything on it. The root partition itself is much smaller.
On Thu, Feb 26, 2009 at 08:54:11AM +1100, Alex Samad wrote:
> most enterprise sites don't use 1TB disks; if you want performance
> you go spindles, there might be 8 disks (number pulled from the air -
> based on raid6 + spares) behind 1TB
And if you want disk space and are serving across a 1Gb
On Wed, Feb 25, 2009 at 07:08:13PM -0500, Douglas A. Tutty wrote:
> On Wed, Feb 25, 2009 at 05:10:58PM -0600, Ron Johnson wrote:
> > On 02/25/2009 04:37 PM, Douglas A. Tutty wrote:
> > >On Wed, Feb 25, 2009 at 04:48:30PM -0500, Lennart Sorensen wrote:
[snip]
>
> Not with my NetRaid card. It ta
On 02/25/2009 07:22 PM, Douglas A. Tutty wrote:
[snip]
/proc/megaraid/hba0/raiddrives-0-9
Logical drive: 0:, state: optimal
Span depth: 1, RAID level: 1, Stripe size: 64, Row size: 2
Read Policy: Adaptive, Write Policy: Write thru, Cache Policy: Cached IO
Logical drive: 1:, state: optimal
On Wed, Feb 25, 2009 at 05:37:12PM -0500, Douglas A. Tutty wrote:
> Why wouldn't you configure the raid controller to give you a small
> logical drive (with whatever raid config you want) for the OS, and the
> larger logical drive for your data (or for LVM for everything except /)?
Why should I do that?
On Thu, Feb 26, 2009 at 11:41:03AM +1100, Alex Samad wrote:
> This begs the question why did you pick hardware raid over software raid
You can boot from it no matter what (software raid can require interesting
tweaks to the boot loader setup to make it work).
Recovery can be transparent to the OS.
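The boot-loader tweaks alluded to above usually boil down to installing the loader on every RAID1 member so that either disk can boot alone; a hedged sketch (disk names are made up, grub era to match the thread, requires root on a real system):

```shell
# Hypothetical two-disk RAID1 root: put the boot loader on both disks,
# so the box still boots if either drive dies.
grub-install /dev/sda
grub-install /dev/sdb
```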
On 02/26/2009 02:51 PM, Alex Samad wrote:
[snip]
I have gone through a few cycles of changing the underlying drive sizes,
i.e. a 3-disk raid5 made up of 3 x 500GB replaced in-line with 3 x 1TB:
pop one disk, replace it with a 1TB drive, and once it has settled you
can do an online expansion. Not sure if you can do the same with hardware raid.
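With Linux software raid, the disk-by-disk upgrade described above might look like this sketch (array and partition names are invented; each swap needs a full resync before the next, and every step requires root):

```shell
# Hypothetical: upgrade a 3-disk RAID5 from 500GB to 1TB members.
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1  # retire one old disk
# ...physically swap the drive, partition it, then:
mdadm /dev/md0 --add /dev/sdb1                      # rebuild onto the new disk
cat /proc/mdstat                                    # wait until resync finishes
# repeat for each member, then grow the array and the filesystem:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```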
On Thu, Feb 26, 2009 at 03:38:49PM -0500, Lennart Sorensen wrote:
> On Thu, Feb 26, 2009 at 11:41:03AM +1100, Alex Samad wrote:
> You get nice hotswap bay LED control to show which drive has failed
> (I imagine software could do this too, but I have never seen that
> happen yet.)
Since the statu
On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:
> my rule of thumb is to always have at least 2 partitions on the first 2
> drives (3 if I have them), for a raid1 /boot and a raid1 /. the rest of
> the space is put into a raid device then into lvm. That gets rid of the
> interesting tweaks.
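The rule of thumb quoted above (small RAID1 /boot and /, the rest pooled under LVM) could be set up roughly like this sketch (partition names are assumptions, mdadm --create is destructive, and all of it needs root):

```shell
# Hypothetical layout on two disks partitioned identically:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # /
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # LVM pool
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 100G -n srv vg0   # carve out space as needed
```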
On Thu, Feb 26, 2009 at 05:42:43PM -0500, Douglas A. Tutty wrote:
> The comparison wasn't between having the raid controller or LVM present
> a reasonable size /, it was between a reasonable size / and a 2TB /.
No one ever wanted a 2TB /. I just wanted / on a drive that was bigger
than 2TB and hence had to use GPT.
On Thu, Feb 26, 2009 at 11:36:20AM +1100, Alex Samad wrote:
> true, depends on whose rule of thumb you use. I have seen places which
> mandate fc drives only in the data center - it gets very expensive when
> you want lots of disk space.
>
> Also the disk space might not be needed for feeding across the
On Thu, Feb 26, 2009 at 06:49:38PM -0500, Lennart Sorensen wrote:
> On Fri, Feb 27, 2009 at 07:51:29AM +1100, Alex Samad wrote:
> > my rule of thumb is to always have atleast 2 partitions on the first 2
[snip]
>
> Some hardware raids can do lots of things. Some can do no resizing at
> all. I h
On Thu, Feb 26, 2009 at 06:06:07PM -0600, Ron Johnson wrote:
> On 02/26/2009 05:54 PM, Lennart Sorensen wrote:
>> On Thu, Feb 26, 2009 at 11:36:20AM +1100, Alex Samad wrote:
[snip]
>>
>> Perhaps. I think some people make hard rules where in fact they would
>> get a much better result by thinking
On Thu, Feb 26, 2009 at 05:58:43PM -0600, Ron Johnson wrote:
> As would auto-replacement of bad drives by hot spares.
Usually the firmware of a raid card does that itself. If a drive is
flagged hotspare, the raid card should automatically start the rebuild
if a drive fails. You should never have to trigger it by hand.
On Thu, Feb 26, 2009 at 06:06:07PM -0600, Ron Johnson wrote:
> Most DC managers have a bit more clue and good reasons than simply rules
> for rules' sake.
>
> Mainly logistics: if all the center's disks are SAS (or whatever other
> standard you choose) in only one or two vendor's SANs (or whateve
On Fri, Feb 27, 2009 at 11:49:29AM -0600, Ron Johnson wrote:
> I was referring to the fact that softraid couldn't do that.
Are you sure? mdadm appears capable of managing spares automatically
when such are set up for the raid.
--
Len Sorensen
On 02/27/2009 02:25 PM, Lennart Sorensen wrote:
On Fri, Feb 27, 2009 at 11:49:29AM -0600, Ron Johnson wrote:
I was referring to the fact that softraid couldn't do that.
Are you sure?
No...
mdadm appears capable of managing spares automatically
when such are set up for the raid.
On Fri, Feb 27, 2009 at 02:44:04PM -0600, Ron Johnson wrote:
> In mdadm.conf? I'm really surprised (and pleased)!
Probably in the monitoring mode. man mdadm talks about spare drives and
spare groups and moving spares between raids and such. Sounds pretty
likely to automatically use a spare assigned to the same spare group.
Hi,
On 2009-02-21 08:00:32, Igor Támara wrote:
I am owner of three Sun Blade (Sparc) and each has 32 CPUs, 128 GByte of
memory and 1
On 2009-02-25 16:48:30, Lennart Sorensen wrote:
> It doesn't take much with modern SATA drives to hit 2TB. Given we can
> get 1.5TB in a single drive, how many months before we can get 2TB in
> a single disk.
Ehm, HOW MANY what?
The 2 TByte drives are already out.
Some "selected" customers of
On 02/28/2009 02:50 AM, Goswin von Brederlow wrote:
[snip]
The only argument I see for FC is a switched storage network. As soon
as you dedicate a storage box to one (or two) servers there is really
no point in FC. Just use a SAS box with SATA disks inside. It is a)
faster, b) simpler, c) works better.
On Sat, Feb 28, 2009 at 10:14:15AM +0100, Goswin von Brederlow wrote:
> Hot-spare devices work just fine (see below).
>
> What doesn't exist afaik are global hot spares. E.g. 7 disks, two 3
> disk raid5 and one spare disk for whatever raid fails first. You would
> have to script that yourself.
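For what it's worth, mdadm's spare-group feature appears to cover exactly this case: mdadm --monitor can move a spare between arrays that share a group when one of them degrades. A hedged mdadm.conf sketch (device names are invented):

```
# /etc/mdadm/mdadm.conf fragment -- illustrative devices only.
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1 spare-group=pool
ARRAY /dev/md1 devices=/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1 spare-group=pool
# With "mdadm --monitor --scan --daemonise" running, a spare attached to
# either array can migrate to whichever one loses a disk first.
```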
On Sat, Feb 28, 2009 at 10:16:19AM +0100, Goswin von Brederlow wrote:
> Not to repeat myself but a GPT with an entry for /boot in its fake
> MS-Dos table works just fine.
Perhaps, but why bother? Just using GPT works, and it won't confuse
any tools that actually know how GPT is supposed to be used.
On Mon, Mar 02, 2009 at 08:35:01AM +1100, Alex Samad wrote:
> I would have to disagree; sometimes the guidelines that you set for
> your data storage network mandate having the reliability (or the
> performance) of scsi (or now sas), and they could be valid business
> requirements.
Well if you set i
On Sat, Feb 28, 2009 at 09:39:07AM +, Ian McDonald wrote:
> Erm, not on anything other than a sequential read (and even then, I've
> never seen a single disk that would actually sustain that across its
> whole capacity).
>
> Even raid-5s of significant numbers of disks aren't enormously fast.
On Mon, Mar 02, 2009 at 02:28:04PM +0100, Goswin von Brederlow wrote:
> Alex Samad writes:
>
> > On Sat, Feb 28, 2009 at 09:50:06AM +0100, Goswin von Brederlow wrote:
> >> Alex Samad writes:
> >>
> >
[snip]
>
> And now they have to learn that we have new technologies. New
> requirements and