> In my opinion periodic scrubs are most useful for pools based on
> mirrors, or raidz1, and much less useful for pools based on raidz2 or
> raidz3. It is useful to run a scrub at least once on a well-populated
> new pool in order to validate the hardware and OS, but otherwise, the
> s
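For anyone who does want periodic scrubs, a minimal sketch of scheduling one from root's crontab (the pool name tank and the weekly schedule are placeholders; adjust to taste):
  # run a scrub every Sunday at 02:00
  0 2 * * 0 /usr/sbin/zpool scrub tank
Progress can be checked afterwards with zpool status tank.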
On 2010-Apr-30 10:24:14 +0800, Edward Ned Harvey wrote:
>Each inode contains a link count. In most cases, each inode has a
>link count of 1, but of course that can't be assumed. It seems
>trivially simple to me, that along with the link count in each inode,
>the filesystem could also store a list
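(For context on why such a reverse mapping would help: without it, finding every name that references a given inode today means walking the whole filesystem, e.g. something like the following, where the inode number and mount point are placeholders.)
  # brute-force search for all directory entries that reference inode 12345
  find /tank/fs -xdev -inum 12345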
I tried destroying a large (710GB) snapshot from a dataset that had
been written with dedup on. The host locked up almost immediately;
there was no stack trace on the console and the host required a
power cycle, but it seemed to reboot normally. Once up, the snapshot
was still there. I was able
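For what it's worth, the usual suspect with hangs like this is the dedup table (DDT) being too large to fit in RAM, so the destroy turns into a storm of random reads. A hedged way to gauge the DDT size beforehand, using zdb's dedup statistics (-DD adds a histogram; tank is a placeholder pool name):
  # print dedup table (DDT) statistics for the pool; compare the table size
  # against the RAM/ARC available on the host
  zdb -D tank
  zdb -DD tank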
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cindy Swearingen
>
> For full root pool recovery see the ZFS Administration Guide, here:
>
> http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view
>
> Recovering the ZFS Root Pool or Ro
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> Each inode contains a link count. It seems trivially
> simple to me, that along with the link count in each inode, the
> filesystem could also store a list of which inodes
On Thu, 29 Apr 2010, Mary Ellen Fitzpatrick wrote:
> hecate:/zp-ext/test> zfs get sharenfs zp-ext/test/mfitzpat
[...]
> hecate:/zp-ext/test> chown -R mfitzpat:umass mfitzpat
[...]
> test  -rw,hard,intr  hecate:/zp-ext/test
[...]
> drwxr-xr-x+ 2 root root 2 Apr 29 11:15 mfitzpat
Unless
I finally got it, I think. Somebody (with deep and intimate knowledge of
ZFS development) please tell me if I've been hitting the crack pipe too
hard. But .
Part 1 of this email:
Netapp snapshot security flaw. Inherent in their implementation of
.snapshot directories.
Part 2 of this em
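If part of the concern is snapshot directories being visible to NFS clients at all, note that on ZFS the .zfs/snapshot directory can at least be kept out of directory listings (it remains reachable by explicit path). A sketch, with tank/home as a placeholder dataset:
  # hide .zfs from directory listings (the default); 'visible' exposes it
  zfs set snapdir=hidden tank/home
  zfs get snapdir tank/home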
> Would your opinion change if the disks you used took
> 7 days to resilver?
>
> Bob
That only makes a stronger case that a hot spare is absolutely needed.
It also makes a strong case for choosing raidz3 over raidz2, as well as
for vdevs with a smaller number of disks.
--
This message posted from opensolaris.org
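For reference, a minimal sketch of adding a hot spare (device names are placeholders):
  # add a hot spare to the pool; a faulted disk will be taken over by the spare
  zpool add tank spare c4t0d0
  # optionally, let a new disk inserted in the same physical slot replace the
  # old one automatically
  zpool set autoreplace=on tank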
On 04/30/10 10:35 AM, Bob Friesenhahn wrote:
On Thu, 29 Apr 2010, Roy Sigurd Karlsbakk wrote:
While there may be some possible optimizations, I'm sure everyone
would love the random performance of mirror vdevs, combined with the
redundancy of raidz3 and the space of a raidz1. However, as in al
On Thu, 29 Apr 2010, Roy Sigurd Karlsbakk wrote:
While there may be some possible optimizations, I'm sure everyone
would love the random performance of mirror vdevs, combined with the
redundancy of raidz3 and the space of a raidz1. However, as in all
systems, there are tradeoffs.
In my opinio
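To make the tradeoff concrete, the same eight disks can be laid out either way (device names are placeholders): mirrors give the best random I/O, raidz2 gives more usable space and double parity at roughly single-disk random IOPS per vdev.
  # four 2-way mirrors: best random IOPS, ~50% usable space
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
                    mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0
  # one 8-disk raidz2 vdev: ~75% usable space, survives any two failures
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0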
On Wed, 28 Apr 2010, Jim Horng wrote:
Why would you recommend a spare for raidz2 or raidz3?
A spare is there to minimize the reconstruction time. Remember, a
vdev cannot start resilvering until a spare disk is available. And
with disks as big as they are today, resilvering also take
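A hedged illustration of what that looks like in practice (device names are placeholders): when a disk fails, the spare (or a manual replacement) is attached and the resilver can be watched from zpool status.
  # replace the failed disk with the spare or a new device
  zpool replace tank c2t5d0 c4t0d0
  # watch progress; the output reports "resilver in progress" with an estimate
  zpool status tank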
I'm seeing some weird behavior on b133 with 'zfs inherit' that seems
to conflict with what the docs say. According to the man page it
"clears the specified property, causing it to be inherited from an
ancestor" but that's not the behavior I'm seeing.
For example:
basestar:~$ zfs get compress tank
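For comparison, here is what the documented behavior looks like on a simple test (a sketch; tank/a is a placeholder dataset):
  zfs set compression=on tank
  zfs set compression=off tank/a          # local override
  zfs get -o name,value,source compression tank/a
  #   expected SOURCE: local
  zfs inherit compression tank/a          # clear the local value
  zfs get -o name,value,source compression tank/a
  #   expected: value on, SOURCE "inherited from tank"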
> > The server is a Fujitsu RX300 with a Quad Xeon 1.6GHz, 6G ram, 8x400G
> > SATA through a U320SCSI<->SATA box - Infortrend A08U-G1410, Sol10u8.
> slow disks == poor performance
> > Should have enough oompf, but when you combine snapshot with a
> > scrub/resilver, sync performance gets abysmal.
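If the pain is specifically synchronous writes (NFS), one common mitigation, assuming a spare fast device is available, is a separate intent-log device; a sketch with a placeholder device name:
  # add a dedicated log (slog) device so sync writes stop competing with the data disks
  zpool add tank log c3t0d0
  # observe per-vdev activity while the workload runs
  zpool iostat -v tank 5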
On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 1,2K bytes:
> Hi all
>
> Is there a good way to do a du that tells me how much data is there in
> case I want to move it to, say, a USB drive? Most filesystems don't
> have compression, but we're using it on (most of) our zfs filesystems,
>
Hi all
Is there a good way to do a du that tells me how much data is there in case I
want to move it to, say, a USB drive? Most filesystems don't have compression,
but we're using it on (most of) our zfs filesystems, and it can be troublesome
for someone that wants to copy a set of data to som
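One rough approach, assuming the numbers only need to be approximate: multiply the space the dataset actually occupies by its compression ratio to estimate the uncompressed (logical) size that has to fit on the target drive. Dataset name is a placeholder:
  zfs get referenced,compressratio tank/data
  # e.g. referenced 120G at compressratio 1.60x => roughly 120 * 1.60 = 192G
  # of logical data to copy onto the (uncompressed) USB drive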
Hi Mary Ellen,
I'm not really qualified to help you troubleshoot this problem.
Other community members on this list have wrestled with similar
problems and I hope they will comment...
Your Linux client doesn't seem to be suffering from the nobody
problem because you see mfitzpat on nona-man so U
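For the archives, the usual cause of the "nobody" problem with NFSv4 is an idmap domain mismatch between server and client; a hedged checklist (example.com is a placeholder domain, and the Linux service name varies by distribution):
  # on the Solaris server: set the NFSv4 mapping domain and restart mapid
  #   /etc/default/nfs:   NFSMAPID_DOMAIN=example.com
  svcadm restart svc:/network/nfs/mapid
  # on the Linux client:  /etc/idmapd.conf, [General] Domain = example.com
  service rpcidmapd restart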
I currently use zfs send/recv for onsite backups [1], and am configuring
it for replication to an offsite server as well. I did an initial full
send, and then a series of incrementals to bring the offsite pool up to
date.
During one of these transfers, the offsite server hung, and I had to
power-
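In case it helps others: on builds of this vintage there is no resumable receive, so after an interrupted transfer the partially received state has to be rolled back and the same incremental resent; a sketch, with placeholder pool, dataset and host names:
  # resend the interrupted incremental; -F on the receive side rolls the target
  # back to the last completed snapshot before applying the stream
  zfs send -i tank/data@snap1 tank/data@snap2 | \
      ssh offsite zfs recv -F backup/tank/data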
I set up the share and mounted it on the Linux client, but permissions
did not carry over from the ZFS share.
hecate:~> zfs create zp-ext/test/mfitzpat
hecate:/zp-ext/test> zfs get sharenfs zp-ext/test/mfitzpat
NAME                  PROPERTY  VALUE  SOURCE
zp-ext/test/mfitzpat  sharenfs  on     inherited
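Two quick checks that are sometimes useful here (a sketch; host and dataset names as above):
  # on the server: confirm the dataset is actually exported, and with which options
  share | grep mfitzpat
  # or set explicit options instead of relying on the inherited "on"
  zfs set sharenfs=rw zp-ext/test/mfitzpat
  # on the Linux client: confirm what the server is exporting
  showmount -e hecate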
On 29 April, 2010 - Richard Elling sent me these 2,5K bytes:
> >> With these lower numbers, our pool is much more responsive over NFS..
> >
> > But taking snapshots is quite bad.. A single recursive snapshot over
> > ~800 filesystems took about 45 minutes, with NFS operations taking 5-10
> > seco
On Apr 29, 2010, at 5:52 AM, Tomas Ögren wrote:
> On 29 April, 2010 - Tomas Ögren sent me these 5,8K bytes:
>
>> On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
>>
>>> I got this hint from Richard Elling, but haven't had time to test it much.
>>> Perhaps someone else could hel
I believe the name is Compellent Technologies,
http://www.google.com/finance?q=NYSE:CML.
Regards,
Andrey
On Wed, Apr 28, 2010 at 5:54 AM, Richard Elling
wrote:
> Today, Compellent announced their zNAS addition to their unified storage
> line. zNAS uses ZFS behind the scenes.
> http://www.compe
Hi Euan,
For full root pool recovery see the ZFS Administration Guide, here:
http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view
Recovering the ZFS Root Pool or Root Pool Snapshots
Additional scenarios and details are provided in the ZFS troubleshooting
wiki. The link is here but the s
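For anyone who wants the short version, the procedure in the guide boils down to roughly the following (a heavily condensed sketch; device names, the boot environment name and the backup location are placeholders, and the guide should be followed for the details):
  # back up: snapshot the root pool recursively and send the stream somewhere safe
  zfs snapshot -r rpool@backup
  zfs send -Rv rpool@backup | gzip > /net/backupserver/rpool.backup.gz
  # recover: boot from install/live media, recreate the pool on the new disk,
  # receive the stream, then set the boot filesystem and reinstall the boot blocks
  zpool create -f -R /a -m legacy rpool c0t0d0s0
  gzcat /net/backupserver/rpool.backup.gz | zfs receive -Fdu rpool
  zpool set bootfs=rpool/ROOT/snv_133 rpool      # adjust the BE name to match
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0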
On 29 April, 2010 - Tomas Ögren sent me these 5,8K bytes:
> On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
>
> > I got this hint from Richard Elling, but haven't had time to test it much.
> > Perhaps someone else could help?
> >
> > roy
> >
> > > Interesting. If you'd like
On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
> I got this hint from Richard Elling, but haven't had time to test it much.
> Perhaps someone else could help?
>
> roy
>
> > Interesting. If you'd like to experiment, you can change the limit of the
> > number of scrub I/Os q
On 29/04/2010 07:57, Phil Harman wrote:
That screen shot looks very much like Nexenta 3.0 with a different
branding. Elsewhere, The Register confirms it's OpenSolaris.
Well, it looks like it is running Nexenta, which is based on OpenSolaris.
But it is not the OpenSolaris *distribution*.
--
R
On 28/04/2010 21:39, David Dyer-Bennet wrote:
The situations being mentioned are much worse than what seems a
reasonable tradeoff to me. Maybe that's because my intuition is
misleading me about what's available. But if the normal workload of a
system uses 25% of its sustained IOPS, and a scrub i
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Euan Thoms
>
> I'm looking for a way to backup my entire system, the rpool zfs pool to
> an external HDD so that it can be recovered in full if the internal HDD
> fails. Previously with Solaris
I got this hint from Richard Elling, but haven't had time to test it much.
Perhaps someone else could help?
roy
> Interesting. If you'd like to experiment, you can change the limit of the
> number of scrub I/Os queued to each vdev. The default is 10, but that
> is too close to the normal lim
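A hedged sketch of actually poking that tunable (assuming it is the zfs_scrub_limit variable on a live kernel; as always with mdb -kw, handle with care):
  # read the current limit (decimal)
  echo "zfs_scrub_limit/D" | mdb -k
  # drop it from 10 to, say, 3 scrub I/Os queued per vdev
  echo "zfs_scrub_limit/W0t3" | mdb -kw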
Indeed the scrub seems to take too many resources from a live system.
For instance, I have a server with 24 disks (SATA 1TB) serving as an NFS
store to a Linux machine holding user mailboxes. I have around 200
users, with maybe 30-40% of them active at the same time.
As soon as the scrub process kicks
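One blunt workaround until the I/O scheduling improves is simply not letting scrubs run during business hours, e.g. from cron (the pool name is a placeholder):
  # start the scrub on Friday night and stop whatever is still running on Monday morning
  0 22 * * 5 /usr/sbin/zpool scrub tank
  0 6  * * 1 /usr/sbin/zpool scrub -s tank
Note that on these builds a stopped scrub cannot be resumed; it starts over from the beginning next time.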