storage vendors have a credibility problem. i think the big
storage vendors, as referenced in the op, sell you on many
things you don't need for much more than one has to spend.
I went to a product demo from http://www.isilon.com/
They make a filesystem that spans multiple machines. They
> Apparently, the distinction made between "consumer" and "enterprise" is
> actually between technology classes, i.e. SCSI/Fibre Channel vs. SATA,
> rather than between manufacturers' gradings, e.g. Seagate 7200 desktop
> series vs. Western Digital RE3/RE4 enterprise drives.
yes this is very mi
Upon reading more into that study it seems the Wikipedia editor has derived
a distorted conclusion:
In our data sets, the replacement rates of SATA disks are not worse than
the replacement rates of SCSI or FC disks. This may indicate that
disk-independent factors, such as operating conditions,
What I haven't found is a decent, no-frills sata/e-sata enclosure for a
home system.
Depending on where you are, where you can purchase from, and how much you
want to pay you may be able to get yourself ICY DOCK or Chieftec enclosures
that fit the description. ICY DOCK's 5-bay enclosure seeme
> At work, we recently had a massive failure of our RAID array. After
> much brown-nosing, I come to find that after many hard drives being
> shipped to our IT guy and him scratching his head, it was in fact the
> RAID card itself that had failed (which takes out the whole array, plus
> can ta
> > storage vendors have a credibility problem. i think the big
> > storage vendors, as referenced in the op, sell you on many
> > things you don't need for much more than one has to spend.
>
> Those of us who know something about Coraid understand that your company
> doesn't engage in fudili
On Mon, 21 Sep 2009 16:30:25 EDT erik quanstrom wrote:
> > > i think the lesson here is don't buy cheap drives; if you
> > > have enterprise drives at 1e-15 error rate, the fail rate
> > > will be 0.8%. of course if you don't have a raid, the fail
> > > rate is 100%.
> > >
> > > if that's not acceptable, then use raid 6.
erik quanstrom wrote:
i think the lesson here is don't buy cheap drives; if you
have enterprise drives at 1e-15 error rate, the fail rate
will be 0.8%. of course if you don't have a raid, the fail
rate is 100%.
if that's not acceptable, then use raid 6.
Hopefully Raid 6 or zfs's raidz2 works
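The raid 6 advice can be put in rough numbers with a toy model (all parameters here are hypothetical, not from the thread): a rebuild of an 8 x 1 TB array with one dead disk must read the 7 survivors end to end; raid 5 has no redundancy left, so any unrecoverable read error (URE) loses data, while raid 6 still has one parity and so, very roughly, needs two errors. A Poisson sketch, ignoring whole-disk failures during the rebuild:

```python
import math

def p_at_least(k, mean):
    """P(N >= k) for a Poisson-distributed error count with the given mean."""
    p_less = sum(math.exp(-mean) * mean**i / math.factorial(i)
                 for i in range(k))
    return 1.0 - p_less

# hypothetical array: 8 x 1 TB drives, one failed, rebuild reads
# the 7 survivors in full
bits_read = 7 * 8 * 1e12

for rate in (1e-14, 1e-15):               # desktop vs enterprise URE rate
    mean_ures = bits_read * rate
    raid5_loss = p_at_least(1, mean_ures)  # any URE is fatal with parity gone
    raid6_loss = p_at_least(2, mean_ures)  # crude: one URE is still correctable
    print(f"rate {rate:g}: raid5 ~{raid5_loss:.1%}, raid6 ~{raid6_loss:.2%}")
```

Under these assumptions the model gives roughly a 43% chance of losing data on a raid 5 rebuild with 1e-14 drives versus about 11% for raid 6, and about 5% versus 0.2% with 1e-15 drives. The raid 6 figure is only indicative, since real survival depends on where the errors land, but it shows why the thread treats the second parity as the fix.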
On Mon Sep 21 14:51:07 EDT 2009, w...@authentrus.com wrote:
> erik quanstrom wrote:
> Our top-of-the-line Sub Zero and Thermidor kitchen appliances are pure
> junk. In fact, I can point to Consumer Reports data that shows an
> inverse relationship between appliance cost and reliability.
storage vendors have a credibility problem. i think the big
storage vendors, as referenced in the op, sell you on many
things you don't need for much more than one has to spend.
On Mon, 21 Sep 2009 14:02:40 EDT erik quanstrom wrote:
> > > i would think this is acceptable. at these low levels, something
> > > else is going to get you -- like drives failing non-independently,
> > > say because of power problems.
> >
> > 8% rate for an array rebuild may or may not be acceptable
> > depending on your application.
erik quanstrom wrote:
i think the lesson here is don't buy cheap drives;
Our top-of-the-line Sub Zero and Thermidor kitchen appliances are pure
junk. In fact, I can point to Consumer Reports data that shows an
inverse relationship between appliance cost and reliability.
One who works for Coraid
> > > 8 bits/byte * 1e12 bytes / 1e14 bits/ure = 8%
> >
> > Isn't that the probability of getting a bad sector when you
> > read a terabyte? In other words, this is not related to the
> > disk size but how much you read from the given disk. Granted
> > that when you "resilver" you have no choice
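The quoted arithmetic checks out. A short sketch, assuming the commonly quoted per-bit URE rates of 1e-14 (desktop) and 1e-15 (enterprise), showing the exact form 1 - (1 - r)^bits next to the thread's linear shortcut bits * r:

```python
import math

def p_ure(bytes_read, rate_per_bit):
    """Exact probability of at least one unrecoverable read error (URE)
    while reading bytes_read bytes: 1 - (1 - r)^bits, computed via
    log1p/expm1 so the tiny rate doesn't vanish in float rounding."""
    bits = 8 * bytes_read
    return -math.expm1(bits * math.log1p(-rate_per_bit))

tb = 1e12  # one terabyte in bytes

print(p_ure(tb, 1e-14))   # ~0.0769, the "8%" after compounding
print(p_ure(tb, 1e-15))   # ~0.0080, the "0.8%" figure for enterprise drives
print(8 * tb / 1e14)      # 0.08, the linear approximation quoted above
```

This also answers the question in the quote: the probability scales with how much you read, not with disk size as such; a resilver is just the case where you are forced to read the whole thing.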
> > drive mfgrs don't report write error rates. i would consider any
> > drive with write errors to be dead as fried chicken. a more
> > interesting question is what is the chance you can read the
> > written data back correctly. in that case with desktop drives,
> > you have a
> > 8 bits/byte * 1e12 bytes / 1e14 bits/ure = 8%
On Mon, 14 Sep 2009 12:43:42 EDT erik quanstrom wrote:
> > I am going to try my hands at beating a dead horse:)
> > So when you create a Venti volume, it basically writes '0's' to all the
> > blocks of the underlying device right? If I put a venti volume on an AoE
> > device which is a linux raid5, using normal desktop sata drives, what
> > are my chances of
On Mon, Sep 14, 2009 at 8:50 AM, Jack Norton wrote:
> So when you create a Venti volume, it basically writes '0's' to all the
> blocks of the underlying device right?
In case anyone decides to try the experiment,
venti hasn't done this for a few years. Better to try with dd.
Russ
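For anyone trying the experiment, `dd if=/dev/zero of=<target> bs=1M` is the usual tool. As a sketch, a rough Python equivalent of that zero-fill (the scratch-file target below is just an example; pointing this at a real device destroys its contents):

```python
import os
import time

def zero_fill(path, total_bytes, block_size=1 << 20):
    """Stream zeros to path in block_size chunks, dd-style, and
    report how many bytes were written and how long it took."""
    block = bytes(block_size)          # one reusable buffer of zeros
    written = 0
    start = time.time()
    with open(path, "wb") as f:
        while written < total_bytes:
            n = min(block_size, total_bytes - written)
            f.write(block[:n])
            written += n
        f.flush()
        os.fsync(f.fileno())           # force the data out to the device
    return written, time.time() - start

if __name__ == "__main__":
    import tempfile
    target = os.path.join(tempfile.gettempdir(), "scratch.img")  # example only
    nbytes, secs = zero_fill(target, 16 << 20)   # small 16 MB demo run
    print(f"{nbytes} bytes in {secs:.2f}s")
```

Dividing bytes written by elapsed seconds gives a crude sequential-write figure, which is what the original question about desktop sata drives was really probing.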
erik quanstrom wrote:
So what about having venti on an AoE device, and fossil on a local drive
(say an ssd even)?
sure. we keep the cache on the coraid sr1521 as well.
How would you handle (or: how would venti handle) a
resize of the AoE device?
that would depend on the device structure of ken's fs.
erik quanstrom wrote:
Also, another probably dumb question: did the fileserver machine use
the AoE device as a kenfs volume or a fossil(+venti)?
s/did/does/. the fileserver is running today.
the fileserver provides the network with a regular 9p fileserver
with three attach points (main
> I read the paper you wrote and I have some (probably naive) questions:
> The section #6 labeled "core improvements" seems to suggest that the
> fileserver is basically using the CPU/fileserver hybrid kernel (both
> major changes are quoted as coming from the CPU kernel). Is this just a
> one-
Thanks.
Erik Quanstrom, too, posted a link to that page, although it wasn't in HTML.
--On Monday, September 07, 2009 22:02 +0200 Uriel wrote:
On Fri, Sep 4, 2009 at 3:56 PM, Eris Discordia wrote:
if you have quanstro/sd installed, sdorion(3) discusses how it
controls the backplane lights.
On Fri, Sep 4, 2009 at 3:56 PM, Eris Discordia wrote:
>> if you have quanstro/sd installed, sdorion(3) discusses how it
>> controls the backplane lights.
>
> Um, I don't have that because I don't have any running Plan 9 instances, but
> I'll try finding it on the web (if it's been through man2html
erik quanstrom wrote:
> i'm speaking for myself, and not for anybody else here.
> i do work for coraid, and i do do what i believe. so
> caveat emptor.
We have a 15TB unit, nice bit of hardware.
> oh, and the coraid unit works with plan 9. :-)
You guys should get some Glenda-themed packing tape.
there's a standard for this
red fail
orange locate
green activity
maybe your enclosure's not standard.
That may be the case as it's really sort of a cheap hack: Chieftec
SNT-2131. A 3-in-2 "solution" for use in 5.25" bays of desktop computer
cases. I hear ICY DOCK has better offers b
> I think what he means is:
> You are given an inordinate amount of harddrives and some computers to
> house them.
> If plan9 is your only software, how would it be configured overall,
> given that it has to perform as well, or better.
>
> Or put another way: your boss wants you to compete with
erik quanstrom wrote:
*with*, not *on* right?
with. it's an appliance.
Now, the information above is quite useful, yet my question
was more along the lines of -- if one was to build such
a box using Plan 9 as the software -- would it be:
1. feasible
2. have any advantages over Linux + JFS
On Sep 4, 2009, at 2:37 AM, matt wrote:
I concur with Erik, I specced out a 20tb server earlier this year,
matching the throughputs hits you in the wallet.
I'm amazed they are using pci-e 1x, it's kind of naive
see what the guy from sun says
http://www.c0t0d0s0.org/archives/5899-Some-perspective-to-this-DIY-storage-server-mention
On Sep 3, 2009, at 6:20 PM, erik quanstrom wrote:
On Thu Sep 3 20:53:13 EDT 2009, r...@sun.com wrote:
"None of those technologies [NFS, iSCSI, FC] scales as cheaply,
reliably, goes as big, nor can be managed as easily as stand-alone pods
with their own IP address waiting for requests on HTTPS."
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
> I concur with Erik, I specced out a 20tb server earlier this year,
> matching the throughputs hits you in the wallet.
even if you're okay with low performance, please don't
set up a 20tb server without enterprise drives. it's no
guarantee, but it's the closest you can come. also,
the #1 predi
> There's one multi-color (3-prong) LED responsible for this. Nominally,
> green should mean drive running and okay, alternating red should mean
> transfer, and orange (red + green) a disk failure. In case of 7200.11's
there's a standard for this
red fail
orange locate
green activity
maybe your enclosure's not standard.
Many thanks for the info :-)
if there's a single dual-duty led maybe this is the problem. how
many separate led packages do you have?
There's one multi-color (3-prong) LED responsible for this. Nominally,
green should mean drive running and okay, alternating red should mean
transfer, and orange (red + green) a disk failure.
> This caught my attention and you are the storage expert here. Is there an
> equivalent technology on SATA disks for controlling enclosure facilities?
> (Other than SMART, I mean, which seems to be only for monitoring and not
> for control.)
SES-2/SGPIO typically interact with the backplane, n
- a hot swap case with ses-2 lights so the tech doesn't
grab the wrong drive,
This caught my attention and you are the storage expert here. Is there an
equivalent technology on SATA disks for controlling enclosure facilities?
(Other than SMART, I mean, which seems to be only for monitoring and
not for control.)