Microsoft has a document you should read:
Optimizing Storage for Microsoft Exchange Server 2003
http://download.microsoft.com/download/b/e/0/be072b12-9c30-4e00-952d-c7d0d7bcea5f/StoragePerformance.doc
Microsoft also has a utility, JetStress, which you can use to verify the performance of the storage subsystem.
Gino wrote:
[...]
Just a few examples:
- We lost several zpools with S10U3 because of the spacemap bug, and -nothing- was recoverable. No fsck here :(
Yes, I criticized the lack of zpool recovery mechanisms, too, during my AVS testing. But I don't have the know-how to judge if it has
On Tue, 2007-09-11 at 13:43 -0700, Gino wrote:
- ZFS+FC JBOD: a failed hard disk needs a reboot :(
(frankly unbelievable in 2007!)
So, I've been using ZFS with some creaky old FC JBODs (A5200's) and old disks which have been failing regularly, and haven't seen that; the worst I've
Yes, this is a case where the disk has not completely failed. ZFS seems to handle the completely failed disk case properly, and has for a long time. Cutting the power (which you can also do with luxadm) makes the disk appear completely failed.
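For anyone who wants to reproduce the completely-failed case on an A5x00, luxadm can cut power to an individual drive slot. A minimal sketch; the enclosure and slot names are hypothetical:

    # Power off the disk in front slot 1 of enclosure BOX1 (names are examples)
    luxadm power_off BOX1,f1
    # Bring it back afterwards
    luxadm power_on BOX1,f1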
Richard, I think you're right.
The failed
We have seen just the opposite... we have a server with about 0 million files and only 4 TB of data. We have been benchmarking FSes for creation and manipulation of large populations of small files, and ZFS is the only one we have found that continues to scale linearly above one million
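For anyone who wants to repeat that kind of test, a minimal sketch of a small-file creation run; the path and the one-million file count are made up, and ptime just reports the elapsed time:

    # Create one million empty files under /tank/bench and time it
    mkdir -p /tank/bench
    ptime ksh -c 'i=0
    while [ $i -lt 1000000 ]; do
        : > /tank/bench/f$i
        i=$((i+1))
    done'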
- We had tons of kernel panics because of ZFS. Here a reboot must be planned a couple of weeks in advance and done only on Saturday night...
Well, I'm sorry, but if your datacenter runs into problems when a single server isn't available, you probably have much worse problems.
Gino wrote:
The real problem is that ZFS should stop forcing kernel panics.
I found these panics very annoying, too. And even more so that the zpool was faulted afterwards. But my problem is that when someone asks me what ZFS should do instead, I have no idea.
I have a large Sybase database
On 11/09/2007, Mike DeMarco [EMAIL PROTECTED] wrote:
I've got 12 GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool. Noticed during some performance testing today that it's I/O bound but using hardly any CPU, so I thought turning on compression would be a
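For reference, enabling compression and checking what it buys you is one command each way; a sketch assuming a dataset named tank/db (the name is made up), and note that only blocks written after the change get compressed:

    zfs set compression=on tank/db
    zfs get compressratio tank/db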
Gino wrote:
The real problem is that ZFS should stop forcing kernel panics.
I found these panics very annoying, too. And even more so that the zpool was faulted afterwards. But my problem is that when someone asks me what ZFS should do instead, I have no idea.
well, what about just
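One concrete possibility, for what it's worth: later OpenSolaris builds added a per-pool failmode property (wait, continue, or panic) precisely so the administrator can choose the failure behavior instead of always getting a panic. A sketch, assuming a pool named tank on a build new enough to have the property:

    # Pool name is an example; requires a build with the failmode property
    zpool set failmode=continue tank
    zpool get failmode tank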
On 9/12/07, Mike DeMarco [EMAIL PROTECTED] wrote:
Striping several disks together with a stripe width that is tuned for your data model is how you could get your performance up. Striping has been left out of the ZFS model for some reason. Where it is true that RAIDZ will stripe the data
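For context, plain striping is not really absent: a pool built from bare disks is dynamically striped across all of its top-level vdevs, with no fixed stripe width to tune. A sketch with made-up pool and device names:

    # Dynamic stripe (RAID-0 equivalent) across three disks
    zpool create tank c0t0d0 c0t1d0 c0t2d0
    # Adding another vdev widens the stripe for new writes
    zpool add tank c0t3d0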
It seems that maybe there is too large a code path leading to panics -- maybe a side effect of ZFS being new (compared to other filesystems). I would hope that, as these panic issues come up, the code path leading to the panic is evaluated for a specific fix or behavior code
Hello Everyone
Can we monitor a ZFS pool with SunMC 3.6.1? Is this a base function? If not, will SunMC 4.0 solve this?
Juan
--
Juan Berlie
Engagement Architect/Architecte de Systèmes
Sun Microsystems, Inc.
1800 McGill College,
. . .
Use JBODs. Or tell the cache controllers to ignore the flushing requests.
[EMAIL PROTECTED] said:
Unfortunately HP EVA can't do it. About the 9900V, it is really fast (64 GB cache helps a lot) and reliable. 100% uptime in years. We'll never touch it to solve a ZFS problem.
On our
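Where the array can't be taught to ignore SYNCHRONIZE CACHE, the flushes can be suppressed on the host side instead, on builds recent enough to have the zfs_nocacheflush tunable; a sketch, and only safe when every pool device sits behind non-volatile, battery-backed cache:

    * In /etc/system (takes effect after a reboot; tunable must exist on your build):
    set zfs:zfs_nocacheflush = 1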
Mike DeMarco wrote:
IO bottlenecks are usually caused by a slow disk or one that has heavy workloads reading many small files. Two factors that need to be considered are head seek latency and spin latency. Head seek latency is the amount of time it takes for the head to move to the track
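To put rough numbers on that: spin latency averages half a revolution, i.e. 0.5 * (60000 / RPM) milliseconds, so a 10,000 rpm drive averages 3 ms; add a typical ~5 ms average seek and each random I/O costs about 8 ms, which caps a single spindle at roughly 125 random IOPS.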
I found this discussion just today as I recently set up my first S10 machine
with ZFS. We use a NetApp Filer via multipathed FC HBAs, and I wanted to know
what my options were for growing a ZFS filesystem.
After looking at this thread, it looks like there is currently no way to grow
I like option #1 because it is simple and quick. It seems unlikely that this will lead to an excessive number of LUNs in the pool in most cases, unless you start with a large number of very small LUNs. If you begin with five 100 GB LUNs and over time add five more, it still seems like a reasonable and
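Concretely, option #1 is a single zpool add per new LUN; a sketch with a hypothetical device name, remembering that adding a vdev is one-way (there is currently no way to remove it from the pool):

    # Grow the pool by adding another LUN (device name is hypothetical)
    zpool add tank c4t0d5
    zpool list tank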
On Mon, Sep 10, 2007 at 12:41:24PM +0200, Pawel Jakub Dawidek wrote:
And here are the results:
RAIDZ:
Number of READ requests: 4.
Number of WRITE requests: 0.
Number of bytes to transmit: 695678976.
Number of processes: 8.
Bytes per second: 1305213
On Wed, Sep 12, 2007 at 02:24:56PM -0700, Adam Leventhal wrote:
I'm a bit surprised by these results. Assuming relatively large blocks
written, RAID-Z and RAID-5 should be laid out on disk very similarly
resulting in similar read performance.
Did you compare the I/O characteristics of both?
On Wed, Sep 12, 2007 at 11:20:52PM +0100, Peter Tribble wrote:
On 9/10/07, Pawel Jakub Dawidek [EMAIL PROTECTED] wrote:
Hi.
I have a prototype RAID5 implementation for ZFS. It only works in the non-degraded state for now. The idea is to compare RAIDZ vs. RAID5 performance, as I suspected that
On Thu, Sep 13, 2007 at 12:56:44AM +0200, Pawel Jakub Dawidek wrote:
On Wed, Sep 12, 2007 at 11:20:52PM +0100, Peter Tribble wrote:
My understanding of the raid-z performance issue is that it requires full-stripe reads in order to validate the checksum. [...]
No, the checksum is independent
On Wed, Sep 12, 2007 at 07:39:56PM -0500, Al Hopper wrote:
This is how RAIDZ fills the disks (follow the numbers):
Disk0  Disk1  Disk2  Disk3
 D0     D1     D2     P3
 D4     D5     D6     P7
 D8     D9     D10    P11
 D12    D13    D14    P15
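That layout is also why RAIDZ small random reads differ from RAID-5: each ZFS block is its own variable-width stripe (D0+D1+D2 protected by P3), so reading and checksumming one block touches every data disk in the vdev, and an N-disk RAIDZ delivers roughly the small-random-read IOPS of a single drive. RAID-5 instead leaves a small block on one disk, so the same four spindles can serve about four independent reads at once. Assuming an illustrative ~125 random IOPS per spindle (about 8 ms per random I/O), this 4-disk RAIDZ tops out around 125 small random reads per second, versus around 500 for a 4-disk RAID-5.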
How long before we can upgrade a ZFS-based root fs? Not looking for a Live Upgrade feature, just to be able to boot off a newer release DVD and upgrade in place.
Currently using a build 62 based system, would like to start taking a look at
some of the features showing up in newer builds.
From the online ZFS On-Disk Specification document, I found there is a field named dd_parent_obj in dsl_dir_phys_t. Will this field be modified or kept unchanged during snapshot COW?
For example, consider a ZFS filesystem mounted on /myzfs, which contains 2 subdirectories (A and B). If we do the