Re: [OpenIndiana-discuss] zfs question

2013-08-05 Thread Richard Elling

On Aug 5, 2013, at 3:58 AM, Gary Gendel  wrote:

> When I reboot my machine, fmstat always shows 12 counts for zfs-* categories. 
>  fmdump and fmdump -e don't report anything and I don't see anything in the 
> logs of the current or previous BE (when applicable).  I'm at a bit of a loss 
> to figure out what happened.

fmdump -eV
shows the error reports in verbose detail.
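
As a quick way to see which FMA modules have accumulated events, fmstat output can be piped through a small awk helper. This is a sketch: it assumes fmstat's usual layout (module name in column 1, ev_recv in column 2, one header line), and the sample input below is hypothetical, not real fmstat output.

```shell
# Print the name of every FMA module that has received events.
report_nonzero() {
  awk 'NR > 1 && $2 > 0 { print $1 }'
}

# In practice: fmstat | report_nonzero
# Illustration with made-up input:
printf 'module ev_recv ev_acpt\nzfs-diagnosis 12 12\nzfs-retire 12 0\ncpumem-retire 0 0\n' \
  | report_nonzero
# -> zfs-diagnosis
#    zfs-retire
```

On illumos the per-module counters can then be cleared with `fmadm reset <module>` (e.g. `fmadm reset zfs-diagnosis`), which appears to be what the original poster is doing after each reboot.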
 -- richard

> 
> Two of the drives are on the internal controller on my Sun Fire v20z, the 
> rest are on a marvell88sx-based controller.  I've tried both WD and Seagate 
> drives with the same result so I think I can rule out the drives causing the 
> problem.  That said, my tests were not really rigorous in this respect (for 
> example, I didn't swap drives on the internal drives which have my rpool).  
> I'm not really concerned about this issue because I've never had issues after 
> a reboot so I just reset these counts so I can easily check for new errors, 
> but I'd rather not do that.  It would just be nice to know what is going on.
> 
> BTW, I use "init 6" to do the reboot.  Is this the wrong way to reboot on OI?
> 
> Gary
> 
> 

--

richard.ell...@richardelling.com
+1-760-896-4422



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] zfs question

2013-08-05 Thread Gary Gendel
When I reboot my machine, fmstat always shows 12 counts for zfs-* 
categories.  fmdump and fmdump -e don't report anything and I don't see 
anything in the logs of the current or previous BE (when applicable).  
I'm at a bit of a loss to figure out what happened.


Two of the drives are on the internal controller on my Sun Fire v20z, 
the rest are on a marvell88sx-based controller.  I've tried both WD and 
Seagate drives with the same result so I think I can rule out the drives 
causing the problem.  That said, my tests were not really rigorous in 
this respect (for example, I didn't swap drives on the internal drives 
which have my rpool).  I'm not really concerned about this issue because 
I've never had issues after a reboot so I just reset these counts so I 
can easily check for new errors, but I'd rather not do that.  It would 
just be nice to know what is going on.


BTW, I use "init 6" to do the reboot.  Is this the wrong way to reboot 
on OI?


Gary




Re: [OpenIndiana-discuss] zfs question - when can _rewriting_ a block of a file fail on out-of-space?

2012-06-03 Thread Richard Elling
On Jun 1, 2012, at 10:45 PM, Richard L. Hamilton wrote:

> In a non-COW filesystem, one would expect that rewriting an already allocated 
> block would never fail for out-of-space (ENOSPC).

This seems like a rather broad assumption. It may hold for FAT or UFS, but it 
might not hold for some of the more modern file systems (e.g., flash file 
systems). But I digress...

> But I would expect that it could on ZFS - definitely if there was a snapshot 
> around, as it would create a divergence from that snapshot (because both 
> blocks would be kept).  Or if deduplication was in effect, and the new block 
> contents were unique when the old contents hadn't been unique.
> 
> Could rewriting a block _ever_ fail with ENOSPC if there _wasn't_ a snapshot 
> present, or is the replace old block with new somehow guaranteed to succeed, 
> so as to avoid introducing unexpected semantics?  (say maybe there's a 
> reserved amount of free space just for rewrites to avoid that sort of 
> problem, or some other magic)

There is a reserve at the pool level. It is needed for the ZIL at the very 
least.
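
One indirect way to see that reserve (a sketch; "tank" is a placeholder pool name, and exact figures vary by pool and ZFS version) is to compare pool-level free space with dataset-level available space:

```shell
# Pool-level accounting (raw space, before the reserve is subtracted):
zpool list -H -o name,size,alloc,free tank

# Dataset-level accounting (what writers can actually consume):
zfs list -H -o name,used,avail tank
```

The dataset-level "avail" comes out smaller than the pool-level "free"; part of that gap is the reserved slop that keeps the ZIL and pool bookkeeping writable even when users have filled the pool.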

> I would think DBMS developers allowing databases to be stored on ZFS, as well 
> as folks using mmap(), might particularly want to be aware of the cases in 
> which an errno not anticipated from experience with other filesystems might 
> arise.

Those developers have to handle all error conditions anyway.

NB, many important databases are also COW, so the concept is well understood.
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422





Re: [OpenIndiana-discuss] zfs question - when can _rewriting_ a block of a file fail on out-of-space?

2012-06-02 Thread Kees Nuyt
On Sat, 2 Jun 2012 01:45:05 -0400, you wrote:

> In a non-COW filesystem, one would expect that rewriting 
> an already allocated block would never fail for 
> out-of-space (ENOSPC).
>
> But I would expect that it could on ZFS - definitely if 
> there was a snapshot around, as it would create a 
> divergence from that snapshot (because both blocks would 
> be kept). Or if deduplication was in effect, and the new 
> block contents were unique when the old contents hadn't 
> been unique.

Not only then. Every write obeys COW: the new version of a block is always a
new allocation, and the previous version is released later, once nothing
refers to it anymore.

> Could rewriting a block _ever_ fail with ENOSPC if there 
> _wasn't_ a snapshot present, 

Yes.

> or is the replace old block 
> with new somehow guaranteed to succeed, so as to avoid 
> introducing unexpected semantics? (say maybe there's a 
> reserved amount of free space just for rewrites to avoid 
> that sort of problem, or some other magic)

No. You have to monitor the used capacity of the pool, with alert
thresholds of, for example, 80% (warning), 95% (critical) and 98%
(fatal).
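
That threshold scheme can be sketched as a small shell helper. The cut-offs match the example figures above; in practice the input would come from something like `zpool list -H -o capacity tank | tr -d '%'`.

```shell
# Classify a pool's used-capacity percentage against the alert
# thresholds suggested above: 80% warning, 95% critical, 98% fatal.
classify_capacity() {
  cap=$1
  if   [ "$cap" -ge 98 ]; then echo fatal
  elif [ "$cap" -ge 95 ]; then echo critical
  elif [ "$cap" -ge 80 ]; then echo warning
  else echo ok
  fi
}

classify_capacity 72   # -> ok
classify_capacity 85   # -> warning
classify_capacity 99   # -> fatal
```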
 
It might make sense to cap each filesystem with a quota to keep the pool from
filling up completely, protecting one filesystem from another.
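
A per-filesystem cap along those lines might look like the following (a sketch; the pool and dataset names are placeholders):

```shell
# Cap one filesystem so it cannot consume the whole pool:
zfs set quota=200G tank/db

# Optionally guarantee space to a neighbor so it keeps working even
# when other datasets approach their quotas:
zfs set reservation=20G tank/logs

# Verify both properties:
zfs get quota,reservation tank/db tank/logs
```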

> I would think DBMS developers allowing databases to be 
> stored on ZFS, as well as folks using mmap(), might 
> particularly want to be aware of the cases in which an 
> errno not anticipated from experience with other 
> filesystems might arise.

http://assets.en.oreilly.com/1/event/21/Optimizing%20MySQL%20Performance%20with%20ZFS%20Presentation.pdf

-- 
Regards,

Kees Nuyt




[OpenIndiana-discuss] zfs question - when can _rewriting_ a block of a file fail on out-of-space?

2012-06-01 Thread Richard L. Hamilton
In a non-COW filesystem, one would expect that rewriting an already allocated 
block would never fail for out-of-space (ENOSPC).

But I would expect that it could on ZFS - definitely if there was a snapshot 
around, as it would create a divergence from that snapshot (because both blocks 
would be kept).  Or if deduplication was in effect, and the new block contents 
were unique when the old contents hadn't been unique.

Could rewriting a block _ever_ fail with ENOSPC if there _wasn't_ a snapshot 
present, or is the replace old block with new somehow guaranteed to succeed, so 
as to avoid introducing unexpected semantics?  (say maybe there's a reserved 
amount of free space just for rewrites to avoid that sort of problem, or some 
other magic)

I would think DBMS developers allowing databases to be stored on ZFS, as well 
as folks using mmap(), might particularly want to be aware of the cases in 
which an errno not anticipated from experience with other filesystems might 
arise.

