If you don't do background scrubbing, you don't know about bad blocks in
advance. If you're running RAID-Z, that means you'll lose data if a block is
unreadable and another device goes bad. That is the point of scrubbing: it
lets you repair the problem while you still have redundancy. :-)
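To make the point concrete, here is a minimal sketch of running and scheduling a scrub; the pool name "tank" and the cron schedule are assumptions, and the commands need a live ZFS system:

```shell
# Start a background scrub; damaged blocks are repaired from remaining
# redundancy while that redundancy still exists. Pool name "tank" is an
# assumption.
zpool scrub tank

# Check scrub progress and any checksum errors found so far.
zpool status -v tank

# A crontab entry to scrub weekly (Sunday 02:00), so latent bad blocks
# are found before a second device fails:
# 0 2 * * 0 /usr/sbin/zpool scrub tank
```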
Hello Brandon,
Wednesday, April 16, 2008, 11:23:48 PM, you wrote:
BH On Tue, Apr 15, 2008 at 12:54 PM, David [EMAIL PROTECTED] wrote:
I have some code that implements background media scanning so I am able to
detect bad blocks well before zfs encounters them. I need a script or
something
Having just installed Solaris 10 U5, I was hoping that this was incorporated.
This is a showstopper for using ZFS in production, because all of our
production storage is EMC-based with either a backend RAID1 or RAID5. This is
not an issue when systems have dual paths to storage
Hi,
It is possible to run Solaris from a USB stick, but it may not be recommended,
since the sticks support only a limited number of writes.
It is also possible to have Solaris run from a mirrored disk.
Would it be possible to run it from several zfs mirrored USB sticks and thereby:
1) benefit from zfs
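A minimal sketch of the mirrored-stick idea, assuming the two USB sticks show up as c2t0d0 and c3t0d0 (device names are hypothetical, and this only covers the data pool, not ZFS root boot):

```shell
# Create a two-way ZFS mirror across the USB sticks; ZFS checksumming
# then gives self-healing reads even if one stick develops bad cells.
zpool create -f usbpool mirror c2t0d0 c3t0d0

# Verify that both sides of the mirror are ONLINE.
zpool status usbpool
```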
I've been trying to figure out how the copies command works and have been
experimenting, but I haven't really seen any results (both with 5 physical
drives I will soon add to my data pool as a 2nd RAIDZ and on a virtual machine
with two RAIDZ in a pool). First: Is data copied across physical
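For reference, a sketch of how the copies property is set and inspected; the dataset name "tank/important" is an assumption. One likely reason experiments show no visible result: copies applies only to data written after the property is set, not to existing files.

```shell
# Store two replicas of every block of this dataset, on top of whatever
# RAID-Z/mirror redundancy the pool already has. Dataset name is an
# assumption for illustration.
zfs set copies=2 tank/important

# Confirm the setting; existing files keep their old copy count until
# they are rewritten.
zfs get copies tank/important
```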
I had a brief look into this too. I'm a solaris newbie, but the best solution
looked to be tar, or something called Star.
Our plan is to use ZFS send/receive to back the data up onto live server
storage. But for tape archives I actually want to use a completely different
filesystem. If
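A hedged sketch of the send/receive backup flow mentioned above; pool, dataset, snapshot, and host names are all assumptions:

```shell
# Take a snapshot and replicate it to a backup host over ssh.
zfs snapshot tank/data@backup-20080417
zfs send tank/data@backup-20080417 | ssh backuphost zfs receive backuppool/data

# Later, send only the incremental delta between two snapshots.
zfs snapshot tank/data@backup-20080418
zfs send -i tank/data@backup-20080417 tank/data@backup-20080418 | \
    ssh backuphost zfs receive backuppool/data
```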
Hello Richard,
Wednesday, April 16, 2008, 11:19:27 PM, you wrote:
RE No, not normally. ZFS groups writes to try to do 128kByte writes.
RE So in a single 128kByte block, there may be parts of different files.
RE By default the transaction group is flushed every 5 seconds, but there
RE are many
Austin wrote:
I've been trying to figure out how the copies command works and have been
experimenting, but I haven't really seen any results (both with 5 physical
drives I will soon add to my data pool as a 2nd RAIDZ and on a virtual
machine with two RAIDZ in a pool). First: Is data copied
Robert Milkowski wrote:
Hello Richard,
Wednesday, April 16, 2008, 11:19:27 PM, you wrote:
RE No, not normally. ZFS groups writes to try to do 128kByte writes.
RE So in a single 128kByte block, there may be parts of different files.
RE By default the transaction group is flushed every 5
On Thu, 17 Apr 2008, Tim wrote:
Along those lines, I'd *strongly* suggest running Jeff's script to pin down
whether one drive is the culprit:
But that script only tests read speed and Pascal's read performance
seems fine.
Bob
==
Bob Friesenhahn
[EMAIL PROTECTED]
Even though I am on a bunch of Sun propaganda lists, I have not yet
spotted an announcement for Solaris 10U5 even though it is now
available for download. Sun's formal web site is useless for
comparing what is in different update releases since its notion of
What's New is a comparison with
On Thu, Apr 17, 2008 at 12:51:03PM -0500, Bob Friesenhahn wrote:
Even though I am on a bunch of Sun propaganda lists, I have not yet
spotted an announcement for Solaris 10U5 even though it is now
available for download. Sun's formal web site is useless for
comparing what is in different
Stuart Anderson wrote:
On Wed, Apr 16, 2008 at 02:07:53PM -0700, Richard Elling wrote:
Personally, I'd estimate using du rather than ls.
They report the exact same number as far as I can tell. With the caveat
that Solaris ls -s returns the number of 512-byte blocks, whereas
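The unit question above comes down to simple arithmetic: Solaris ls -s and du (without -k) both count 512-byte blocks, which is why the numbers agree, whereas GNU ls -s defaults to 1K blocks. A sketch of the conversion, with a hypothetical block count:

```shell
# Convert an ls -s block count (512-byte blocks on Solaris) to bytes.
blocks=3474                 # hypothetical ls -s output for one file
bytes=$((blocks * 512))     # 512 bytes per block
echo "$bytes"
```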
Date: Thu, 17 Apr 2008 02:46:35 PDT
From: Veltror [EMAIL PROTECTED]
Having just installed Solaris 10 U5 I was kind of hoping that this was
incorporated.
This is a showstopper as far as using ZFS in production. This is because all
production
is based on EMC storage with either a
A Darren Dunham wrote:
On Thu, Apr 17, 2008 at 12:51:03PM -0500, Bob Friesenhahn wrote:
Can someone please post a summary of any new ZFS features or
significant fixes which are in Solaris 10U5?
I'm guessing it has some changes/fixes applied, but I don't know of any
significant feature
Hello list,
We discovered a failed disk with checksum errors. Took out the disk
and resilvered, which reported many errors. A few of the filesystems in
the pool won't mount anymore; zpool import poolname reports:
cannot mount 'poolname/proj': I/O error
Ok, we have a problem. I can
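In a situation like this, one usual first step is to ask ZFS which files carry unrecoverable damage; a sketch, with "poolname" standing in for the real pool:

```shell
# After a resilver that reported errors, -v lists permanent errors and,
# where possible, the affected file paths. Pool name is an assumption.
zpool status -v poolname
```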
A customer has a zpool where their spectral analysis applications create a ton
(millions?) of very small files that are typically 1858 bytes in length.
They're using ZFS because UFS consistently runs out of inodes. I'm assuming
that ZFS aggregates these little files into recordsize (128K?)
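One clarification on the assumption above: ZFS does not pad small files out to recordsize. Files smaller than recordsize are stored in a single block sized to fit (rounded up to the sector size), so 1858-byte files cost roughly 2K each, not 128K. A sketch with hypothetical dataset names:

```shell
# Inspect the recordsize cap; it only limits the *maximum* block size.
zfs get recordsize tank/spectra

# Optionally cap block size lower for a small-file workload (affects
# newly written data only). Dataset name is an assumption.
zfs set recordsize=8k tank/spectra
```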