Re: [developer] two-phase scrub/resilver

2016-07-11 Thread Matthew Ahrens
On Mon, Jul 11, 2016 at 9:54 AM, Saso Kiselkov 
wrote:

> On 7/11/16 6:37 PM, Matthew Ahrens wrote:
> >
> >
> >  Very cool.  We should definitely collaborate.
>
> Agree!
>
> > We avoid having to do multiple passes over the data by grouping pointers
> > by proximity and sorting in a more complicated way, rather than just
> > doing a naive elevator each pass. This results in performance that,
> > depending on how badly randomized the layout is, yields upwards of 10x
> > performance improvement (and in many cases approaches sequential
> > throughput). We've not implemented the storing of the partial-sorted
> > list on disk, as that would be a far more complex undertaking than just
> > keeping it in memory. That having been said, our memory overhead is
> > essentially zero compared to on-disk metadata, i.e. we are able to sort
> > and manipulate all sortable BPs in memory in essentially the same amount
> > of space that the metadata takes up on disk. IOW, as long as your RAM
> > quantity is not too far off being 1% of your disk storage's capacity, we
> > can sort extremely well and achieve near sequential throughput.
> >
> >
> > So you need to keep all the BP's in the pool in memory at once?  What
> > happens if they don't fit?  Does it fall back to the current non-sorted scrub?
>
> So the way it works is that we have essentially split up scrub into two
> cooperating threads, the "scanner" and the "reader". The scanner pushes
> BPs into per-device sorting queue, which consists of a set of two AVL
> trees and a range tree. In the range tree, BPs are joined up into
> extents and we allow for a certain inter-BP gap (i.e. if the offsets are
> close enough, we consider it worthwhile to read them at once). We also
> track the "fill" of an extent, i.e. how much of the extent is comprised
> of actual readable data and how much is inter-BP gaps. While
> scanning is going on, we continuously sort these extents such that we
> always keep the "juiciest" (largest and most contiguous) ones at the front.
>
> If we hit a configurable memory cap, we have the scanner pause for a bit
> and engage the reader so that it starts consuming the queue, issuing all
> the reads. Previously we did this in parallel, but I found that doing
> only one at a time helps overall throughput (because metadata reads are
> random, whereas sorted extent reading is sequential, so it helps not
> interrupting it).
>

Ah, that's very neat.  Seems like a nice tradeoff between not using too
much memory, and still getting a substantial (though not completely
optimal) performance win!
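To make the extent-joining described above concrete, here is a minimal
user-space sketch -- not the Nexenta code, just an illustration with made-up
names -- that sorts BP offsets, coalesces neighbors whose inter-BP gap is
under a threshold, and tracks each extent's "fill":

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only: a "BP" reduced to its DVA offset and allocated size. */
typedef struct { uint64_t offset; uint64_t size; } bp_t;

/* A merged extent, plus how much of it is real data ("fill"). */
typedef struct { uint64_t start; uint64_t end; uint64_t fill; } extent_t;

static int
bp_cmp(const void *a, const void *b)
{
        uint64_t ao = ((const bp_t *)a)->offset, bo = ((const bp_t *)b)->offset;
        return ((ao > bo) - (ao < bo));
}

/*
 * Sort BPs by offset and coalesce them into extents, allowing up to max_gap
 * bytes between neighbors so one large sequential read can cover several BPs.
 * Returns the number of extents written to out[].
 */
static size_t
join_extents(bp_t *bps, size_t nbps, uint64_t max_gap, extent_t *out)
{
        size_t next = 0;

        qsort(bps, nbps, sizeof (bp_t), bp_cmp);
        for (size_t i = 0; i < nbps; i++) {
                uint64_t s = bps[i].offset, e = s + bps[i].size;
                if (next > 0 && s <= out[next - 1].end + max_gap) {
                        /* close enough: extend the previous extent */
                        if (e > out[next - 1].end)
                                out[next - 1].end = e;
                        out[next - 1].fill += bps[i].size;
                } else {
                        out[next++] = (extent_t){ s, e, bps[i].size };
                }
        }
        return (next);
}

int
main(void)
{
        bp_t bps[] = { {4096, 4096}, {0, 4096}, {16384, 8192}, {1048576, 4096} };
        extent_t ext[4];
        size_t n = join_extents(bps, 4, 8192, ext);

        for (size_t i = 0; i < n; i++)
                printf("extent [%llu, %llu) fill %llu\n",
                    (unsigned long long)ext[i].start,
                    (unsigned long long)ext[i].end,
                    (unsigned long long)ext[i].fill);
        return (0);
}

The real queue would additionally keep these extents ordered so that the
largest, most contiguous ("juiciest") ones are issued first.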


>
> > We didn't think that was OK, which is why we allowed multiple passes
> > over the metadata. If you're using recordsize=8k (or volblocksize=8k),
> > the metadata is 16GB per 1TB of disk space.  Though I imagine
> > compression or fancy encoding of the BP could improve that somewhat.
>
> For simplicity's sake, I've implemented the sorting such that we only do
> it for level=0 blocks and only for blocks with copies=1. Removing either
> of these seemed like a rather complex addition to the algorithm for
> relatively little payoff. That is not to say that it wouldn't work, but
> basically I was trying to keep this to a relatively simple modification.
>
> > I don't know whether to continue this project at this point. I'd like to
> > avoid having two competing implementations of essentially the same thing.
> >
> > Let me see if I can find our existing code and share it with you all.
> 
> I'll try to throw our code up somewhere as well.
> 
> Cheers,
> --
> Saso
> 





Re: [developer] two-phase scrub/resilver

2016-07-11 Thread Matthew Ahrens
On Mon, Jul 11, 2016 at 9:30 AM, Saso Kiselkov 
wrote:

> On 7/9/16 11:24 PM, Matthew Ahrens wrote:
> > We had an intern work on "sorted scrub" last year.  Essentially the idea
> > was to read the metadata to gather into memory all the BP's that need to
> > be scrubbed, sort them by DVA (i.e. offset on disk) and then issue the
> > scrub i/os in that sorted order.  However, memory can't hold all of the
> > BP's, so we do multiple passes over the metadata, each pass gathering
> > the next chunk of BP's.  This code is implemented and seems to work but
> > probably needs some more testing and code cleanup.
> >
> > One of the downsides of that approach is having to do multiple passes
> > over the metadata if it doesn't all fit in memory (which it typically
> > does not).  In some circumstances, this is worth it, but in others not
> > so much.  To improve on that, we would like to do just one pass over the
> > metadata to find all the block pointers.  Rather than storing the BP's
> > sorted in memory, we would store them on disk, but only roughly sorted.
> > There are several ways we could do the sorting, which is one of the
> > issues that makes this problem interesting.
> >
> > We could divide each top-level vdev into chunks (like metaslabs, but
> > probably a different number of them) and for each chunk have an on-disk
> > list of BP's in that chunk that need to be scrubbed/resilvered.  When we
> > find a BP, we would append it to the appropriate list.  Once we have
> > traversed all the metadata to find all the BP's, we would load one
> > chunk's list of BP's into memory, sort it, and then issue the resilver
> > i/os in sorted order.
> >
> > As an alternative, it might be better to accumulate as many BP's as fit
> > in memory, sort them, and then write that sorted list to disk.  Then
> > remove those BP's from memory and start filling memory again, write that
> > list, etc.  Then read all the sorted lists in parallel to do a merge
> > sort.  This has the advantage that we do not need to append to lots of
> > lists as we are traversing the metadata. Instead we have to read from
> > lots of lists as we do the scrubs, but this should be more efficient.  We
> > also don't have to determine beforehand how many chunks to divide each
> > vdev into.
> >
> > If you'd like to continue working on sorted scrub along these lines, let
> > me know.
>
> It seems multiple people converged on the same problem at the same time.
> I've implemented this at Nexenta, but it's not quite release-ready yet.
>

 Very cool.  We should definitely collaborate.

> We avoid having to do multiple passes over the data by grouping pointers
> by proximity and sorting in a more complicated way, rather than just
> doing a naive elevator each pass. This results in performance that,
> depending on how badly randomized the layout is, yields upwards of 10x
> performance improvement (and in many cases approaches sequential
> throughput). We've not implemented the storing of the partial-sorted
> list on disk, as that would be a far more complex undertaking than just
> keeping it in memory. That having been said, our memory overhead is
> essentially zero compared to on-disk metadata, i.e. we are able to sort
> and manipulate all sortable BPs in memory in essentially the same amount
> of space that the metadata takes up on disk. IOW, as long as your RAM
> quantity is not too far off being 1% of your disk storage's capacity, we
> can sort extremely well and achieve near sequential throughput.
>

So you need to keep all the BP's in the pool in memory at once?  What
happens if they don't fit?  Does it fall back to the current non-sorted scrub?

We didn't think that was OK, which is why we allowed multiple passes over
the metadata. If you're using recordsize=8k (or volblocksize=8k), the
metadata is 16GB per 1TB of disk space.  Though I imagine compression or
fancy encoding of the BP could improve that somewhat.
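(To sanity-check that figure: 1 TB at an 8 KB block size is 2^40 / 2^13 = 2^27,
i.e. about 134 million blocks, and each block pointer is 128 bytes, so
2^27 * 128 B = 2^34 B = 16 GiB of BPs per TB of data -- roughly the 1-2%
metadata-to-data ratio Saso mentions above.)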


>
> I don't know whether to continue this project at this point. I'd like to
> avoid having two competing implementations of essentially the same thing.
>

Let me see if I can find our existing code and share it with you all.

--matt


> 
> --
> Saso
> 





Re: [developer] two-phase scrub/resilver

2016-07-11 Thread Matthew Ahrens
On Mon, Jul 11, 2016 at 7:22 AM, Gvozden Neskovic 
wrote:

> From what I gather, more elaborate handling of BPs is needed. I don't mind
> implementing and evaluating prototypes of the approaches you mentioned, maybe
> even some kind of hybrid. The final solution should have a more predictable,
> and controllable, effect on performance with other i/o going on.
> I'm more concerned about changes in semantics introduced by separating the
> scrub metadata walk and i/o. If I get it correctly, destroy, ddt, and possibly
> other code will be able to change BPs in between those two phases, so these
> changes would also have to be tracked.
>

That's correct.  For example, each vdev could have a spacemap + range_tree
representing the offsets that have been freed, but may be in the list of
BP's to scrub.
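Nothing below is from the actual spacemap/range_tree code; it is just a
user-space sketch of the bookkeeping being described: record freed ranges
per vdev, and skip any queued BP whose range has since been freed.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define	MAX_FREED	128

/* Illustrative stand-in for a per-vdev record of freed regions. */
typedef struct { uint64_t start; uint64_t size; } freed_range_t;

static freed_range_t freed[MAX_FREED];
static int nfreed;

/* Note a range freed (e.g. by destroy) after its BPs were queued for scrub. */
static void
note_freed(uint64_t start, uint64_t size)
{
        if (nfreed < MAX_FREED)
                freed[nfreed++] = (freed_range_t){ start, size };
}

/* Before issuing a queued scrub i/o, check whether its range was freed. */
static bool
was_freed(uint64_t start, uint64_t size)
{
        for (int i = 0; i < nfreed; i++) {
                uint64_t fs = freed[i].start, fe = fs + freed[i].size;
                if (start < fe && start + size > fs)
                        return (true);
        }
        return (false);
}

int
main(void)
{
        note_freed(1 << 20, 1 << 17);   /* a destroy freed 128K at offset 1M */
        printf("BP at 1M: %s\n", was_freed(1 << 20, 8192) ? "skip" : "scrub");
        printf("BP at 2M: %s\n", was_freed(2 << 20, 8192) ? "skip" : "scrub");
        return (0);
}

In the kernel this would of course use the existing range_tree machinery
rather than a flat array; the point is only the check-before-issue ordering.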


>
> Also, the new scrub algorithm would have to be guarded by a new
> 'spa_feature' flag.
> I'm not sure if old behavior must be preserved for old pools?
>

The software would need to be able to scrub old pools, though the exact
semantics of that are open to discussion.  For example, the current scrub
code is the 2nd implementation of scrub, and if you are using pool version
< 11, the semantics are a bit less optimal.

--matt


>
> Anyhow, if you have pointers on what code to look at, or where to start,
> let me know.
>
> Regards,
>
>
> On Sat, Jul 9, 2016 at 11:25 PM Matthew Ahrens 
> wrote:
>
>> We had an intern work on "sorted scrub" last year.  Essentially the idea
>> was to read the metadata to gather into memory all the BP's that need to be
>> scrubbed, sort them by DVA (i.e. offset on disk) and then issue the scrub
>> i/os in that sorted order.  However, memory can't hold all of the BP's, so
>> we do multiple passes over the metadata, each pass gathering the next chunk
>> of BP's.  This code is implemented and seems to work but probably needs
>> some more testing and code cleanup.
>>
>> One of the downsides of that approach is having to do multiple passes
>> over the metadata if it doesn't all fit in memory (which it typically does
>> not).  In some circumstances, this is worth it, but in others not so much.
>> To improve on that, we would like to do just one pass over the metadata to
>> find all the block pointers.  Rather than storing the BP's sorted in
>> memory, we would store them on disk, but only roughly sorted.  There are
>> several ways we could do the sorting, which is one of the issues that makes
>> this problem interesting.
>>
>> We could divide each top-level vdev into chunks (like metaslabs, but
>> probably a different number of them) and for each chunk have an on-disk
>> list of BP's in that chunk that need to be scrubbed/resilvered.  When we
>> find a BP, we would append it to the appropriate list.  Once we have
>> traversed all the metadata to find all the BP's, we would load one chunk's
>> list of BP's into memory, sort it, and then issue the resilver i/os in
>> sorted order.
>>
>> As an alternative, it might be better to accumulate as many BP's as fit
>> in memory, sort them, and then write that sorted list to disk.  Then remove
>> those BP's from memory and start filling memory again, write that list,
>> etc.  Then read all the sorted lists in parallel to do a merge sort.  This
>> has the advantage that we do not need to append to lots of lists as we are
>> traversing the metadata. Instead we have to read from lots of lists as we
>> do the scrubs, but this should be more efficient.  We also don't have to
>> determine beforehand how many chunks to divide each vdev into.
>>
>> If you'd like to continue working on sorted scrub along these lines, let
>> me know.
>>
>> --matt
>>
>>
>> On Sat, Jul 9, 2016 at 7:10 AM, Gvozden Neskovic 
>> wrote:
>>
>>> Dear OpenZFS developers,
>>>
>>> Since SIMD RAID-Z code was merged to ZoL [1], I started to look into the
>>> rest of the scrub/resilvering code path.
>>> I've found some existing specs and ideas about how to make the process
>>> more rotational drive friendly [2][3][4][5].
>>> What I've gathered from these is that scrub should be split into metadata
>>> and data traversal phases. As I'm new to ZFS,
>>> I've made a quick prototype simulating a large elevator using an AVL list to
>>> sort blocks by DVA offset [6]. It's probably
>>> broken in more than a few ways, but this is just a quick hack to get a
>>> grasp of the code. The solution turned out to be similar to the
>>> 'ASYNC_DESTROY' feature, so
Re: [developer] two-phase scrub/resilver

2016-07-09 Thread Matthew Ahrens
We had an intern work on "sorted scrub" last year.  Essentially the idea
was to read the metadata to gather into memory all the BP's that need to be
scrubbed, sort them by DVA (i.e. offset on disk) and then issue the scrub
i/os in that sorted order.  However, memory can't hold all of the BP's, so
we do multiple passes over the metadata, each pass gathering the next chunk
of BP's.  This code is implemented and seems to work but probably needs
some more testing and code cleanup.

One of the downsides of that approach is having to do multiple passes over
the metadata if it doesn't all fit in memory (which it typically does
not).  In some circumstances, this is worth it, but in others not so much.
To improve on that, we would like to do just one pass over the metadata to
find all the block pointers.  Rather than storing the BP's sorted in
memory, we would store them on disk, but only roughly sorted.  There are
several ways we could do the sorting, which is one of the issues that makes
this problem interesting.

We could divide each top-level vdev into chunks (like metaslabs, but
probably a different number of them) and for each chunk have an on-disk
list of BP's in that chunk that need to be scrubbed/resilvered.  When we
find a BP, we would append it to the appropriate list.  Once we have
traversed all the metadata to find all the BP's, we would load one chunk's
list of BP's into memory, sort it, and then issue the resilver i/os in
sorted order.

As an alternative, it might be better to accumulate as many BP's as fit in
memory, sort them, and then write that sorted list to disk.  Then remove
those BP's from memory and start filling memory again, write that list,
etc.  Then read all the sorted lists in parallel to do a merge sort.  This
has the advantage that we do not need to append to lots of lists as we are
traversing the metadata. Instead we have to read from lots of lists as we
do the scrubs, but this should be more efficient.  We also don't have to
determine beforehand how many chunks to divide each vdev into.
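A toy sketch of the first (per-chunk list) alternative above, purely to show
the two phases; in a real implementation each bucket would be an on-disk list
per top-level-vdev chunk rather than a fixed in-memory array, and the chunk
count and sizes here are invented:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define	NCHUNKS		8	/* per-vdev regions, analogous to metaslabs */
#define	CHUNK_SHIFT	30	/* 1 GiB chunks, for illustration only */
#define	MAX_PER_CHUNK	1024

/* Phase 1: as metadata traversal finds a BP, append its offset to a bucket. */
static uint64_t chunk_bps[NCHUNKS][MAX_PER_CHUNK];
static int chunk_count[NCHUNKS];

static void
enqueue_bp(uint64_t offset)
{
        int c = (int)(offset >> CHUNK_SHIFT);
        if (c >= NCHUNKS)               /* toy vdev is only NCHUNKS GiB big */
                c = NCHUNKS - 1;
        if (chunk_count[c] < MAX_PER_CHUNK)
                chunk_bps[c][chunk_count[c]++] = offset;
}

static int
u64_cmp(const void *a, const void *b)
{
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return ((x > y) - (x < y));
}

/* Phase 2: process one chunk at a time -- sort its list, then "issue" i/os. */
static void
scrub_chunks(void)
{
        for (int c = 0; c < NCHUNKS; c++) {
                qsort(chunk_bps[c], chunk_count[c], sizeof (uint64_t), u64_cmp);
                for (int i = 0; i < chunk_count[c]; i++)
                        printf("chunk %d: scrub offset %llu\n", c,
                            (unsigned long long)chunk_bps[c][i]);
        }
}

int
main(void)
{
        /* Offsets arrive in metadata (i.e. roughly random) order. */
        uint64_t found[] = { 5ULL << 30, 42, 1ULL << 30, 4096, (5ULL << 30) + 8192 };
        for (int i = 0; i < 5; i++)
                enqueue_bp(found[i]);
        scrub_chunks();
        return (0);
}

The second (spill-and-merge) alternative trades the many append streams here
for a handful of sorted runs that are merged while issuing the scrub i/os.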

If you'd like to continue working on sorted scrub along these lines, let me
know.

--matt


On Sat, Jul 9, 2016 at 7:10 AM, Gvozden Neskovic  wrote:

> Dear OpenZFS developers,
>
> Since SIMD RAID-Z code was merged to ZoL [1], I started to look into the
> rest of the scrub/resilvering code path.
> I've found some existing specs and ideas about how to make the process
> more rotational drive friendly [2][3][4][5].
> What I've gathered from these is that scrub should be split into metadata
> and data traversal phases. As I'm new to ZFS,
> I've made a quick prototype simulating a large elevator using an AVL list to
> sort blocks by DVA offset [6]. It's probably
> broken in more than a few ways, but this is just a quick hack to get a grasp
> of the code. The solution turned out to be similar to the
> 'ASYNC_DESTROY' feature, so I'm wondering if this might be a direction to
> take?
>
> At this stage, I would appreciate any input on how to proceed with this
> project. If you're a core dev and would like
> to provide any kind of mentorship or willing to answer some questions from
> time to time, please let me know.
> Or, if there's a perfect solution for this just waiting to be implemented,
> even better.
> For starters, pointers like: read this article, make sure you understand
> this piece of code, etc., would also be very helpful.
>
> Regards,
>
> [1]
> https://github.com/zfsonlinux/zfs/commit/ab9f4b0b824ab4cc64a4fa382c037f4154de12d6
> [2] https://blogs.oracle.com/roch/entry/sequential_resilvering
> [3]
> http://wiki.old.lustre.org/images/f/ff/Rebuild_performance-2009-06-15.pdf
> [4] https://blogs.oracle.com/ahrens/entry/new_scrub_code
> [5] http://open-zfs.org/wiki/Projects#Periodic_Data_Validation
> [6]
> https://github.com/ironMann/zfs/commit/9a2ec765d2afc38ec76393dd694216fae0221443





Re: [developer] Adding a -p option to "zpool iostat"

2016-07-08 Thread Matthew Ahrens
Fine by me.  Also see https://github.com/zfsonlinux/zfs/pull/4433, which I
believe includes -Hp.  I'd be happy to see that come to illumos/OpenZFS as
well.

--matt

On Fri, Jul 8, 2016 at 4:17 PM, Kai Storbeck  wrote:

> Hello List,
>
> I'm new to the list, so please redirect me if this is not the right place
> to ask.
>
> I'd like to propose a patch to zpool iostat where it outputs the raw (or
> literal) counters instead of the human-readable output, for machine parsing.
>
> Whilst searching for this, I stumbled on a thread [1] on the FreeBSD
> mailing list, probably for exactly the same reasons; people like to see how
> their zpool is doing over time.
>
> I haven't thought long and hard, but with some experimenting I came to
> roughly this output:
>
> ./zpool iostat -p
>>                   capacity                     operations           bandwidth
>> pool       alloc          free           read      write     read           write
>> ---------  -------------  -------------  --------  --------  -------------  -----------
>> zroot      1819706376192  6083033448448  14760391  13281024  1320019169280  85995089920
>>
>
> Or with -v:
>
>> $ ./zpool iostat -pv
>>                     capacity                     operations           bandwidth
>> pool          alloc          free           read      write     read           write
>> ------------  -------------  -------------  --------  --------  -------------  -----------
>> zroot         1819704811520  6083035013120  14760391  13284264  1320019169280  86016716800
>>   raidz1      1819704811520  6083035013120  14760391  13284264  1320019169280  86016716800
>>     gpt/zfs0  -              -              9469851   2815461   459846656000   39679000576
>>     gpt/zfs1  -              -              9434586   2670780   443475955712   38377684992
>>     gpt/zfs2  -              -              6295038   2657443   460142014464   39691751424
>>     gpt/zfs3  -              -              8795875   2729781   443798388736   38365073408
>> ------------  -------------  -------------  --------  --------  -------------  -----------
>>
> 
> 
> Would such a change benefit the project? Possibly combined with a -H to
> replace spaces with tabs and skip some headers for "scripted" output?
> 
> Regards,
> Kai
> 
> [1] 
> https://lists.freebsd.org/pipermail/freebsd-fs/2011-December/013149.html
> 





[developer] lecture video: a code walk through of ZFS read and write

2016-06-15 Thread Matthew Ahrens
Earlier this year I recorded a lecture for Marshall Kirk McKusick's class,
FreeBSD Kernel Internals: An Intensive Code Walkthrough.  The lecture
starts with a short overview of ZFS, with most of the time spent on a deep
dive into the zfs read and write code paths.  We examine the code, and the
resulting performance implications.  The recording is now available for
free:

https://www.youtube.com/watch?v=ptY6-K78McY   2 hours 30 minutes.

A huge thanks to Kirk for this professional recording, and especially for
editing out 90% of my verbal tics :-)

--matt





[developer] OpenZFS Developer Summit 2016 - announcement & CFP

2016-06-13 Thread Matthew Ahrens
The fourth annual OpenZFS Developer Summit will be held in San Francisco,
September 26-27th, 2016.  All OpenZFS developers are invited to
participate, and registration is now open.

See http://www.open-zfs.org/wiki/OpenZFS_Developer_Summit for all of the
details, including slides and videos from previous years' conferences.

The goal of the event is to foster cross-community discussions of OpenZFS work
and to make progress on some of the projects we have proposed.

Like last year, the first day will be set aside for presentations.  If you
would like to give a talk, submit a proposal via email
to ad...@open-zfs.org, including a 1-2 paragraph
abstract.  I will review all of the submissions and select talks that will
be most interesting for the audience.  The registration fee will be waived
for speakers.

The deadlines are as follows:

Aug 1, 2016 All abstracts/proposals submitted to ad...@open-zfs.org
Aug 12, 2016 Proposal submitters notified
Aug 25, 2016 Agenda finalized
September 26-27, 2016 OpenZFS Developer Summit

The second day of the event will be a hackathon, where you will have the
opportunity to work with OpenZFS developers that you might not normally sit
next to, with the goal of having something, no matter how insignificant, to
demo at the end of the day. Please add your hackathon ideas to the summit wiki
page.

This event is only possible because of the efforts and contributions of the
community of developers and companies that sponsor the event.  Special
thanks to our early Platinum sponsors: Delphix and Intel.



Additional sponsorship opportunities are available. Please see the website
for details and send email to ad...@open-zfs.org if you have any questions.

--matt





[developer] zvol DKIOCFREE and DF_WAIT_SYNC

2016-05-02 Thread Matthew Ahrens
I'm looking at the following code:

zvol_ioctl()
...
        case DKIOCFREE:
...
                /*
                 * If the caller really wants synchronous writes, and
                 * can't wait for them, don't return until the write
                 * is done.
                 */
                if (df.df_flags & DF_WAIT_SYNC) {
                        txg_wait_synced(
                            dmu_objset_pool(zv->zv_objset), 0);
                }

Why does DF_WAIT_SYNC imply txg_wait_synced()?  Why isn't zil_commit()
sufficient here?

I don't really understand the comment; does it mean that the caller wants
writes to be committed when we do a DKIOCFREE?  That's a little strange,
but zil_commit(ZVOL_OBJ) would still do that.  If so, I think the comment
would be more clear if it said:

If the caller wants previous asynchronous writes to be committed to disk,
commit the ZIL.  This will sync any previous uncommitted writes to the zvol
object.
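For reference, the alternative being suggested would look roughly like the
following -- a sketch only, using the zvol's ZIL handle the way other zvol
ioctls do, not a tested patch:

                if (df.df_flags & DF_WAIT_SYNC)
                        zil_commit(zv->zv_zilog, ZVOL_OBJ);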

--matt





Re: [developer] [REVIEW] 3796, 6713 zpool(1M): document listsnapshots and leaked properties

2016-04-06 Thread Matthew Ahrens
On Fri, Apr 1, 2016 at 4:30 PM, Yuri Pankov  wrote:

> issue: https://illumos.org/issues/3796
> issue: https://illumos.org/issues/6713
> webrev: http://www.xvoid.org/illumos/webrev/il-man-zpool-fixes/
>
> 3796 is easy: make zpool(1M) describe listsnapshots instead of listsnaps
> and mention that listsnaps is a shortened name.
>
> 6713 is somewhat more involved; I've tried to use the wording from the
> 4390 fix and describe the current (i.e., with the tunable being unset)
> behavior, but most likely it could be improved.
>

The description of "leaked" looks wrong to me.  It only gets incremented
for permanent leaks, which only happen if you've set the
zfs_free_leak_on_eio tunable.  (Which is why this was not documented to
begin with.)  See references to dp_leak_dir in dsl_scan_sync().

The other changes look good.

--matt


> 
> 





Re: [developer] Re: [openzfs/openzfs] 6418 zpool should have a label clearing command (#83)

2016-04-05 Thread Matthew Ahrens
On Tue, Apr 5, 2016 at 5:05 PM, Jorgen Lundman  wrote:

>
> Having had a handful of disks in this situation, where I had to dd the start
> and end of the disks to be able to use them again, is rather frustrating.
>

Sounds like exactly the use case for "zpool labelclear".


>
> ZFS is supposed to be admin friendly, so we don't have to dick around with
> partitions and dd blanking a disk just because a label got corrupted. (Even
> if it is not ZFS's fault).
>

I agree, that's why we're implementing "zpool labelclear".


>
> I find the discussion of "giving users a loaded gun" most peculiar; should
> they perhaps not use Unix at all? Why is removing a snapshot the same
> command as destroying your whole dataset? That's one space away from
> disaster. There already is existing precedent. But this sort of argument
> that I am using is tedious, sorry. :)
>
> Either we have a command that does what it says, and clears the label, or
> let's not have it at all, and refer people to dd and hope they don't
> use 'dd' "more wrong" than labelclear. :)
>

I agree, and "zpool labelclear" is going to clear the label.

"zpool labelclear" is not going to overwrite something that isn't a zpool
label, in part because I haven't heard any use case for it.

--matt





Re: [developer] Re: [openzfs/openzfs] 6418 zpool should have a label clearing command (#83)

2016-04-04 Thread Matthew Ahrens
On Mon, Apr 4, 2016 at 3:41 AM, Andrew Gabriel  wrote:

> On 03/04/2016 21:28, Josh Paetzel wrote:
>
>
> On Sun, Apr 3, 2016, at 12:36 PM, Matthew Ahrens wrote:
>
>
>
> On Sun, Apr 3, 2016 at 6:51 AM, Josh Paetzel < 
> j...@tcbug.org> wrote:
>
> Does this mean FreeBSD's zpool labelclear is proprietary to FreeBSD?
>
> If so I don't suppose the FreeBSD UI could be considered?
>
>
> It looks like the interface proposed here is the same as the FreeBSD
> "zpool labelclear", with the exception of "-ff" (which could be adopted on
> FreeBSD if desired -- though I have concerns with -ff which I'll follow up
> on separately).
>
> I'd be happy to consider the implications for other platforms.  Josh, did you
> have something specific in mind?
>
> --matt
>
>
>
> zpool labelclear [-f] device
>
>     Removes ZFS label information from the specified device.  The device
>     must not be part of an active pool configuration.
>
>     -f    Treat exported or foreign devices as inactive.
>
>
>
> That's what FreeBSD has for a UI.
>
> I guess what I meant by my earlier email was preserve the existing FreeBSD
> UI.  The backend behavior isn't as important to me.
>
>
> I needed this a couple of weeks back, although I wasn't sure if it would
> have worked in my case.
>
> System had a failed drive, which was replaced. However, the technician
> swapped the wrong disk (was in a different zpool), although zfs quite
> happily resilvered the new drive into the pool from which the disk had been
> pulled as would be expected.
>
> However, with the wrong disk having been swapped, system still had the
> failed drive, so another disk swap had to be performed. Trouble was, the
> disk which was used as the replacement was the one pulled out from another
> pool on the system without having been detached, and zfs wouldn't touch it
> because the GUID matched an active zpool.
>
> Would the -ff get around this, i.e. clear the label of a disk which was
> part of an active zpool, even though it isn't currently one of
> the pool devs?  (We had to use dd /dev/zero to get the disk to work.)
>
>
If I'm understanding correctly, you wouldn't need "zpool labelclear -ff"
(which is only needed if there is no ZFS label present).  You could use
"zpool labelclear -f".

--matt





Re: [developer] Re: [openzfs/openzfs] 6418 zpool should have a label clearing command (#83)

2016-04-03 Thread Matthew Ahrens
On Sun, Apr 3, 2016 at 6:51 AM, Josh Paetzel  wrote:

> Does this mean FreeBSD's zpool labelclear is proprietary to FreeBSD?
>
> If so I don't suppose the FreeBSD UI could be considered?
>

It looks like the interface proposed here is the same as the FreeBSD "zpool
labelclear", with the exception of "-ff" (which could be adopted on FreeBSD
if desired -- though I have concerns with -ff which I'll follow up on
separately).

I'd be happy to consider the implications for other platforms.  Josh, did you
have something specific in mind?

--matt



>
> Thanks,
>
> Josh Paetzel
>
> On Apr 2, 2016, at 9:15 PM, ilovezfs  wrote:
>
> @yuripankov  Yes! And combined with a
> partitioning tool you could probably turn it into a dd if=/dev/zero ...
> replacement and zero the entire device! :)
> (This is an exercise left to the reader.)
>
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly or view it on GitHub
> 
>





Re: [developer] feature request "zpool remove "

2016-03-02 Thread Matthew Ahrens
I'll read that as "please don't integrate without mirror removal because it
will entice people to run 'zpool detach' to reduce their redundancy".  Let
me know if I've misinterpreted your (Ray and ilovezfs) position.

I assume your concern about "total pool loss" is if the remaining plain
device fails while doing the removal.  It might be possible to allow the
detached device to be substituted for the failed device in that case (we'd
have to fix up the label).

--matt

On Wed, Mar 2, 2016 at 4:29 PM, Ray Pating  wrote:

> To be honest, this would result in people loss as well, since this may
> well be a resume-generating event.
>
> There are 10 kinds of people in the world; those who can read binary and
> those who can't.
>
> On Thu, Mar 3, 2016 at 1:58 AM, ilove zfs  wrote:
>
>> s/total people loss/total pool loss/ lol
>>
>> On Mar 02, 2016, at 09:56 AM, ilove zfs  wrote:
>>
>> >We'd also appreciate opinions of "Please upstream even without
>> mirror/RAID-Z support" or "Please don't integrate without mirror/RAID-Z
>> support - all ZFS features should work together."
>>
>> I'd be concerned that this will lead a significant number of people to
>> total people loss when they start dismantling mirror vdevs in order to be
>> able to remove them, and then run without redundancy during the course of
>> the removal.
>>
>>
>
>





Re: [developer] feature request "zpool remove "

2016-03-02 Thread Matthew Ahrens
On Wed, Mar 2, 2016 at 9:54 AM, Rich  wrote:

> I agree, halfway there but stable is reasonable to integrate.
>
> Are there actual plans to improve this for removing all other vdev types,
> or is this just an "eventually we would like to do this" sort of thing?
>
It isn't on our (Delphix's) roadmap; it's in the "eventually" category.

> Does this currently only work with pools that do not contain any
> raidz/mirror vdevs, or is the constraint solely that you can only remove a
> "simple" vdev?
>
Currently it only works with pools that don't contain any raidz/mirror
vdevs, but it would be an easy and reasonable change to allow removal of a
plain vdev even if there are mirror vdevs in the pool.

As someone else pointed out, if you have all mirrors, you could explicitly
reduce your redundancy by detaching one side of the mirror and then you
could remove the remaining plain vdev.  The issue with removing a mirror is
that you might have the correct data on only one side of the mirror, in
which case we would need to find it and copy it.  That's the part that
isn't implemented.

--matt


- Rich
> On Mar 2, 2016 12:42 PM, "Turbo Fredriksson"  wrote:
>
>> On Mar 2, 2016, at 5:39 PM, Josef 'Jeff' Sipek wrote:
>>
>> > On Wed, Mar 02, 2016 at 09:31:28 -0800, Matthew Ahrens wrote:
>> > ...
>> >> We'd also appreciate opinions of "Please upstream even without
>> >> mirror/RAID-Z support"
>> >
>> > I am for this.  Simply because it lets one undo an accidental zpool add
>> > (instead of a zpool attach).
>>
>>
>> I agree. Half the functionality is way better than
>> none of the functionality...
>> --
>> I love deadlines. I love the whooshing noise they
>> make as they go by.
>> - Douglas Adams
>>
>>
>>
>>
>
>





Re: [developer] feature request "zpool remove "

2016-03-02 Thread Matthew Ahrens
On Wed, Mar 2, 2016 at 12:37 AM, Erik Sørnes  wrote:

> Hi!
>
> Are there any plans on implementing functionality to remove an entire vdev
> from a pool ?
>
> One could write zpool remove  vdev , and it would
> move the data on  to other vdevs in the pool and then remove the
> vdev  entirely.
> This is very useful for many use cases. I've googled a lot, and haven't
> found any information on this, apart from a very old one from Sun.
>

Yes, we (Delphix) have implemented this and it is used in production.  It
is not yet upstreamed.  For more info, see our talk at the 2014 OpenZFS
Developer Summit:

slides:
http://open-zfs.org/w/images/b/b4/Device_Removal-Alex_Reece_%26_Matt_Ahrens.pdf
video:
http://www.youtube.com/watch?v=Xs6MsJ9kKKE&list=PLaUVvul17xSdOhJ-wDugoCAIPJZHloVoq&index=12

Here is the first commit for Device Removal from the Delphix repo (there
are several follow on bug fixes and additional features):
https://github.com/delphix/delphix-os/commit/db775effdda128d8e14216abde06281513b03c37

We haven't yet upstreamed this because it only supports removing
non-redundant top-level vdevs (i.e. you can't remove a mirror or RAID-Z
vdev).  Additional work is needed to make sure that we don't lose any good
copies of the data, and to ensure proper raid-z allocation alignment.  We
would welcome anyone who's interested in implementing these.

We'd also appreciate opinions of "Please upstream even without
mirror/RAID-Z support" or "Please don't integrate without mirror/RAID-Z
support - all ZFS features should work together."

--matt


> 
> kind regards
> 
> Erik Sørnes, IT-support, Nilu
> P Please consider the environment before printing this email and
> attachments
> 





Re: [developer] smartos server crashes every hour

2016-02-23 Thread Matthew Ahrens
I haven't looked at the dump yet but here are my initial thoughts (also
posted on the bug report on github):

It's possible that this is the fallout from hitting bug 3603, which was
fixed a few years ago.  Have you run this pool without the following fix?
A workaround for this problem would be to add code to bpobj_enqueue_subobj
to ignore the subsubobjs if it does not exist (i.e. dmu_object_info()
returns ENOENT, as it has here).  This would leak some space (perhaps a
very small amount of space) but allow you to recover all the blocks that
can be found.

commit d04756377ddd1cf28ebcf652541094e17b03c889
Author: Matthew Ahrens 
Date:   Mon Mar 4 12:27:52 2013 -0800

3603 panic from bpobj_enqueue_subobj()

On Mon, Feb 22, 2016 at 3:07 AM, Adrian Ortmann 
wrote:

> Hi,
>
> We have a SmartOS server that has been crashing every hour since Jan 15 21:08.
> Picture attached. IPMI has no downtime, so it's not a power connection problem.
>
> The system was stable for a year, no problems and nothing mysterious.
> It seems to be related to deleting snapshots. We are using a service
> called zsnapper which creates snapshots at regular intervals and deletes
> older ones automatically. That explains why it happens so regularly.
>
> SmartOS Version: 20150709T171818Z
> Mainboard: X9DRD-7LN4F(-JBOD)/X9DRD-EF
> CPU: Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
> RAM: Samsung 16GB DDR3-1866 dual-rank ECC registered DIMM
> HDDs: Some HGST Disks of these HUS726060AL5210
>
> Hope to find a fix for this problem.
>
> Core dump files:
>
> http://downloads.curasystems.de/vmdump.520.tgz
> http://downloads.curasystems.de/unix.520.tgz
>
> We also created an Issue on github in the smartos section.
> https://github.com/joyent/smartos-live/issues/554
>
>
>
> Today a second server got this problem; it started at 4 o'clock in the
> morning and nobody had changed anything on the server at that time.
>
>
>
> Kind regards
>
>
>
> Adrian Ortmann
>
>





Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-02-10 Thread Matthew Ahrens
Closed #56 via c65952cea94df6ef135e347bec694ae67f54f2be.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56#event-546223304


Re: [OpenZFS Developer] [openzfs] 6551 cmd/zpool: cleanup gcc warnings (#57)

2016-02-10 Thread Matthew Ahrens
Closed #57 via 8dd29de94510e13646a1186d4606772d132e0e68.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/57#event-546161698


[OpenZFS Developer] [openzfs] 6642 testrunner output can be displayed in the wrong order (#68)

2016-02-09 Thread Matthew Ahrens
6643 zfstest should enforce the required privileges before running.

Reviewed by: George Wilson 
Reviewed by: Jonathan Mackenzie 

Lines of output that happen in the same hundredth of a second are
printed in alphabetical order, not the order in which they were
received.

Fix is to modify the sorting done on output lines to sort by timestamp,
but leave the lines in the order they arrived.
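In other words, the timestamp becomes the primary sort key and arrival order
the tie-breaker. A self-contained illustration of that comparison (in C rather
than the test-runner's Python, and with made-up field names):

#include <stdio.h>
#include <stdlib.h>

/* An output line as the runner might buffer it (field names invented). */
typedef struct {
        double          ts;     /* timestamp, in seconds */
        int             seq;    /* order in which the line arrived */
        const char      *text;
} outline_t;

/* Sort by timestamp; fall back to arrival order so equal stamps keep it. */
static int
line_cmp(const void *a, const void *b)
{
        const outline_t *x = a, *y = b;
        if (x->ts != y->ts)
                return ((x->ts < y->ts) ? -1 : 1);
        return (x->seq - y->seq);
}

int
main(void)
{
        outline_t lines[] = {
                { 10.01, 0, "zebra test started" },
                { 10.01, 1, "apple test started" },   /* same hundredth */
                { 10.00, 2, "setup done" },
        };
        qsort(lines, 3, sizeof (outline_t), line_cmp);
        for (int i = 0; i < 3; i++)
                printf("%.2f %s\n", lines[i].ts, lines[i].text);
        return (0);
}

With the old behavior, the two lines stamped 10.01 would have come out
alphabetically ("apple" before "zebra"); here they keep their arrival order.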

The script currently requires users to drop privs before running it. It
should just run the runner script with the required privs.

Fix is to always run the tests under ppriv.
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/68

-- Commit Summary --

  * 6642 testrunner output can be displayed in the wrong order

-- File Changes --

M usr/src/test/test-runner/cmd/run.py (10)
M usr/src/test/zfs-tests/cmd/scripts/zfstest.ksh (9)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/68.patch
https://github.com/openzfs/openzfs/pull/68.diff

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/68


[OpenZFS Developer] [openzfs] 6641 deadman fires spuriously when running on VMware (#67)

2016-02-09 Thread Matthew Ahrens
Reviewed by: Matthew Ahrens 
Reviewed by: Dan Kimmel 

When the system hibernates and restarts, the counter that it uses to
measure time gets reset to nearly zero. As a result, in the clock
subsystem, we add the counter's value to the current time if the counter
goes backwards by more than a second or two.

Unfortunately, when running on VMware, sometimes VMware does a bad thing
and sends the counter backwards by more than that in the course of
normal operations. As a result, we end up adding a time almost as large
as the current uptime to the clock, resulting in the uptime of the
system suddenly doubling and the clock being off by days or weeks.

This can cause a variety of problems; one of them is that it may cause
the deadman subsystem to trigger, thinking that the system has been
unresponsive for a long time.

The fix to this problem is to change the way we handle sudden jumps
backwards in time; if the counter jumps backwards a lot, but is still
larger than some small value (a second or two), we should not add it to
the current time; instead, we decide that this jump is probably a result
of VMware's glitch, and we don't add to the time until we start getting
reliable readings again.
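A stripped-down illustration of that policy -- not the actual timestamp.c
change; the threshold, tick rate, and names are invented for the example:

#include <stdint.h>
#include <stdio.h>

#define	TICKS_PER_SEC	1000000000ULL		/* pretend 1 GHz counter */
#define	SMALL_VALUE	(2 * TICKS_PER_SEC)	/* "a second or two" */

static uint64_t time_base;	/* time accumulated from past counter resets */
static uint64_t last_count;	/* previous raw counter reading */

/*
 * Return adjusted time for a new raw counter reading.  A jump backwards to
 * near zero looks like a hibernate/resume, so fold the old reading into the
 * base.  A jump backwards to a still-large value looks like the hypervisor
 * glitch described above, so hold time steady until readings move forward.
 */
static uint64_t
read_time(uint64_t count)
{
        if (count < last_count) {
                if (count < SMALL_VALUE) {
                        /* counter really restarted (e.g. resume) */
                        time_base += last_count;
                } else {
                        /* spurious backwards jump: don't touch time_base */
                        return (time_base + last_count);
                }
        }
        last_count = count;
        return (time_base + count);
}

int
main(void)
{
        printf("%llu\n", (unsigned long long)read_time(5 * TICKS_PER_SEC));
        /* glitch: counter appears to go back by several seconds */
        printf("%llu\n", (unsigned long long)read_time(3 * TICKS_PER_SEC));
        /* readings become reliable again */
        printf("%llu\n", (unsigned long long)read_time(6 * TICKS_PER_SEC));
        return (0);
}

The key difference from the old behavior is the middle branch: a large but
non-zero backwards jump no longer gets folded into the accumulated base time.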
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/67

-- Commit Summary --

  * 6641 deadman fires spuriously when running on VMware

-- File Changes --

M usr/src/uts/i86pc/os/timestamp.c (46)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/67.patch
https://github.com/openzfs/openzfs/pull/67.diff

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/67


Re: [OpenZFS Developer] [openzfs] 6637 replacing "dontclose" with "should_close" (#66)

2016-02-09 Thread Matthew Ahrens
Closed #66 via dfb1e173cd9b0bbfbaa68ee230f9a428f6d61f76.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/66#event-544876125


Re: [OpenZFS Developer] [openzfs] 6562 Refquota on receive doesn't account for overage. (#64)

2016-02-09 Thread Matthew Ahrens
Pushed to illumos. 

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/64#issuecomment-182026993


Re: [OpenZFS Developer] [openzfs] 6562 Refquota on receive doesn't account for overage. (#64)

2016-02-09 Thread Matthew Ahrens
Closed #64.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/64#event-544829396


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-09 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61#issuecomment-181993195


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-08 Thread Matthew Ahrens
> + ASSERT(vp);
> + zfs_inactive_impl(vp, CRED(), NULL);
> +}
> +
> +/*
> + * This value will be multiplied by zfs_dirty_data_max to determine
> + * the threshold past which we will call zfs_inactive_impl() async.
> + *
> + * Selecting the multiplier is a balance between how long we're willing to wait
> + * for delete/free to complete (get shell back, have a NFS thread captive, etc)
> + * and reducing the number of active requests in the backing taskq.
> + *
> + * 4 GiB (zfs_dirty_data_max default) * 16 (multiplier default) = 64 GiB
> + * meaning by default we will call zfs_inactive_impl async for vnodes > 64 GiB
> + */
> +uint16_t zfs_inactive_async_multiplier = 16;

On Mon, Feb 8, 2016 at 5:27 PM, Alek P  wrote:
>
> The idea of doing async delete based on number of indirect block is
> interesting. I will need more time to dig out the dnode from znode and do
> the indirect blocks counting.
>
Rather than messing with the dnode directly, the ZPL should get the sizes
from dmu_object_info().


---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r52264704


Re: [OpenZFS Developer] [openzfs] Improve speculative indirection tables prefetch. (#65)

2016-02-08 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/65#issuecomment-181648718


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-02-08 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-181643341


Re: [OpenZFS Developer] Review Request 257: libzfs_core: remove the dependency of the interface on sys/fs/zfs.h

2016-02-08 Thread Matthew Ahrens


> On Nov. 10, 2015, 9:29 p.m., Matthew Ahrens wrote:
> > Do you want to open a pull request for this to get the automated testing?
> 
> Andriy Gapon wrote:
> I am in the process of setting up a ZFS test suite environment locally.  
> I'll try to test this change in that environment.
> I can also open a PR if that would be more convenient for the community 
> and you.
> What would you recommend?
> 
> Matthew Ahrens wrote:
> It's easiest for me if you do all the tests and file the RTI on your own 
> :-)
> 
> If you want my help with any of that, then open a pull request.
> 
> Andriy Gapon wrote:
> Okay, that's fair :-)
> ```
> $ /opt/zfs-tests/bin/zfstest -a
> Test: /opt/zfs-tests/tests/functional/acl/cifs/setup (run as root) 
> [00:15] [PASS]
> Test: /opt/zfs-tests/tests/functional/acl/cifs/cifs_attr_001_pos (run as 
> root) [00:16] [PASS]
> Test: /opt/zfs-tests/tests/functional/acl/cifs/cifs_attr_002_pos (run as 
> root) [00:15] [PASS]
> ```

FYI, I'm still happy to see this go upstream.  LMK if you need help.


- Matthew


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.csiden.org/r/257/#review870
---


On Nov. 20, 2015, 10:17 a.m., Andriy Gapon wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.csiden.org/r/257/
> -------
> 
> (Updated Nov. 20, 2015, 10:17 a.m.)
> 
> 
> Review request for OpenZFS Developer Mailing List and Matthew Ahrens.
> 
> 
> Bugs: 6052
> https://www.illumos.org/issues/6052
> 
> 
> Repository: illumos-gate
> 
> 
> Description
> ---
> 
> Previously `lzc_create` had a parameter of `dmu_objset_type_t` type that 
> specified what kind of dataset to create.
> Now `lzc_dataset_type` enumeration is used for that purpose.
> At present only a filesystem type and a volume type can be specified.
> `lzc_dataset_type` values are binary compatible with `dmu_objset_type_t` 
> values.
> 
> 
> Diffs
> -
> 
>   usr/src/uts/common/sys/fs/zfs.h 569fae20915dc58bebd875fe5f244a82fdc02a9d 
>   usr/src/lib/libzfs_core/common/libzfs_core.h 
> bdd6c951ee496dc1e21a297e7a69b1342aecf79b 
>   usr/src/lib/libzfs/common/libzfs_dataset.c 
> d2f613ca63241bab7b23dfd4b4b1891e0245f660 
>   usr/src/lib/libzfs_core/common/libzfs_core.c 
> 22af0f4a7a9fd8ab15cc7880233ba51274ce87d8 
> 
> Diff: https://reviews.csiden.org/r/257/diff/
> 
> 
> Testing
> ---
> 
> ZFS Test Suite with the following failures:
> ```
> $ egrep 'FAIL|KILL' /var/tmp/test_results/20151116T101807/log
> Test: 
> /opt/zfs-tests/tests/functional/acl/nontrivial/zfs_acl_chmod_inherit_003_pos 
> (run as root) [00:20] [FAIL]
> Test: 
> /opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos (run 
> as root) [01:31] [FAIL]
> Test: 
> /opt/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos 
> (run as root) [00:14] [FAIL]
> Test: /opt/zfs-tests/tests/functional/cli_user/misc/zpool_add_001_neg (run as 
> avg) [00:01] [FAIL]
> Test: /opt/zfs-tests/tests/functional/cli_user/misc/zpool_create_001_neg (run 
> as avg) [00:00] [FAIL]
> Test: /opt/zfs-tests/tests/functional/mdb/mdb_001_pos (run as root) [00:24] 
> [FAIL]
> Test: /opt/zfs-tests/tests/functional/refreserv/refreserv_004_pos (run as 
> root) [00:00] [FAIL]
> Test: /opt/zfs-tests/tests/functional/rootpool/rootpool_002_neg (run as root) 
> [00:00] [FAIL]
> Test: /opt/zfs-tests/tests/functional/rsend/rsend_008_pos (run as root) 
> [00:01] [FAIL]
> Test: /opt/zfs-tests/tests/functional/rsend/rsend_009_pos (run as root) 
> [00:09] [FAIL]
> Test: /opt/zfs-tests/tests/functional/slog/slog_014_pos (run as root) [00:13] 
> [FAIL]
> Test: /opt/zfs-tests/tests/functional/zvol/zvol_swap/zvol_swap_004_pos (run 
> as root) [00:01] [FAIL]
> ```
> Looks like known failures, at least nothing pointing at `zfs create`
> 
> 
> Thanks,
> 
> Andriy Gapon
> 
>



Re: [OpenZFS Developer] [openzfs] 4521 zfstest is trying to execute evil "zfs unmount -a" (#58)

2016-02-08 Thread Matthew Ahrens
@jwk404 Can you take a look at the test suite changes here?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/58#issuecomment-181621561


Re: [OpenZFS Developer] [openzfs] 4521 zfstest is trying to execute evil "zfs unmount -a" (#58)

2016-02-08 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/58#issuecomment-181621079


Re: [OpenZFS Developer] [openzfs] 6387 Use kmem_vasprintf() in log_internal() (#10)

2016-02-08 Thread Matthew Ahrens
@ryao That approach sounds good to me.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/10#issuecomment-181619092


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-02-08 Thread Matthew Ahrens
@ilovezfs And I'm fine with it being part of this PR.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-181618368


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-02-08 Thread Matthew Ahrens
@ilovezfs that check looks good to me.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-181617813


Re: [OpenZFS Developer] [openzfs] 6546 Fix "zpool get guid, freeing, leaked" source (#53)

2016-02-08 Thread Matthew Ahrens
@dasjoe Have you been able to verify the correct output with your fix?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/53#issuecomment-181617287


Re: [OpenZFS Developer] [openzfs] 6637 replacing "dontclose" with "should_close" (#66)

2016-02-08 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/66#issuecomment-181615698


Re: [OpenZFS Developer] [openzfs] 6423 ZFS txg thread signalling could be simplified (#33)

2016-02-08 Thread Matthew Ahrens
@wca ping on my above question about testing.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/33#issuecomment-181590252


Re: [OpenZFS Developer] SLOG benefit for Resilver

2016-02-08 Thread Matthew Ahrens
On Sun, Nov 1, 2015 at 11:44 PM, Thijs Cramer 
wrote:

> Hi guys,
>
> Recently, in a discussion online, I discussed the new Seagate 8TB SMR disks
> in combination with ZFS.
> A big point that was undecided was: can a SLOG help during rebuild of a
> big SMR-disk-based array?
>
> I know it's more risky to use the SLOG for resilver activities, but
> rebuilding sequentially could benefit these disks insanely. The problem
> with these disks is that under heavy load, the disk becomes very slow on
> random writes because of the SMR rewrites the disk has to do.
>
> Can anyone answer whether it's currently beneficial to have a SLOG on an
> SMR disk array, or answer whether it's easy to adapt the code a bit to make
> this happen?
>
>
Currently, the ZIL (and thus log devices / SLOG) is not involved in
resilvering.  Therefore adding a SLOG device will have no impact on
resilver speed.  (Though I guess there could be secondary effects if adding
the SLOG reduces fragmentation in the main devices.)

If you have a specific proposal for how the ZIL / SLOG could be used to
accelerate resilvering, I can give you feedback on its feasibility.

--matt


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-02-08 Thread Matthew Ahrens
Closed #49 via 3e190fde7e8f25ea8da99cb6f536bfc8fffd475e.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#event-543171826


[OpenZFS Developer] Large block support - thank you Integros

2016-02-08 Thread Matthew Ahrens
This message is to belatedly acknowledge Integros for their financial
support of the OpenZFS Large Block Support project, which allows using
block sizes of more than 128KB. I've also recently added the appropriate
copyright message to the relevant files.

commit 864fe9f7c4d99d2fa01b79a789876b217808d11d
Author: Matthew Ahrens 
Date:   Sun Feb 7 11:06:19 2016 -0800

5027 zfs large block support (add copyright)

commit b515258426fed6c7311fd3f1dea697cfbd4085c6
Author: Matthew Ahrens 
Date:   Fri Nov 7 08:30:07 2014 -0800

5027 zfs large block support
Reviewed by: Alek Pinchuk 
Reviewed by: George Wilson 
Reviewed by: Josef 'Jeff' Sipek 
Reviewed by: Richard Elling 
Reviewed by: Saso Kiselkov 
Reviewed by: Brian Behlendorf 
Approved by: Dan McDonald 

Information from Integros:

Integros provides a full video stack for developers to enable a rich video
experience in their apps. The complete toolset contains friendly APIs, SDKs,
and UI kits for video integration running on top of highload-ready
infrastructure. It lets developers focus on app development instead of building
and managing expensive video streaming infrastructure.

--matt
___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-02-07 Thread Matthew Ahrens
Note: ztest failure is unrelated to these changes. 
http://build.prakashsurya.com:8080/job/openzfs-regression-tests/176/

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#issuecomment-181226984___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Replace "dontclose" with "should_close" (#66)

2016-02-07 Thread Matthew Ahrens
Changes look good.  Can you please open an illumos bug and change the commit 
message to the bug id + synopsis?  For details see 
https://github.com/openzfs/openzfs/blob/master/README.md

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/66#issuecomment-181090685___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Replace "dontclose" with "should_close" (#66)

2016-02-07 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/66#issuecomment-181090566___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-02-07 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#issuecomment-181090522___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-02-04 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#issuecomment-17453___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-02-04 Thread Matthew Ahrens
I've added an exception for the failing tests.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#issuecomment-17428___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-02-03 Thread Matthew Ahrens
@ilovezfs That looks good.  Not sure why you need to cast away the const.  Or 
check for p being null.  You might use a for loop rather than while:

`for (int i = 0; deps[i] != SPA_FEATURE_NONE; i++)`
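
For example, a rough sketch (not the actual patch) of walking a
SPA_FEATURE_NONE-terminated dependency list with that loop; no const cast or
NULL check needed:

```
/*
 * Sketch only: check that every dependency in a SPA_FEATURE_NONE-terminated
 * list is enabled.
 */
static boolean_t
deps_all_enabled(spa_t *spa, const spa_feature_t *deps)
{
	for (int i = 0; deps[i] != SPA_FEATURE_NONE; i++) {
		if (!spa_feature_is_enabled(spa, deps[i]))
			return (B_FALSE);
	}
	return (B_TRUE);
}
```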

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-179658261___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-02-03 Thread Matthew Ahrens
> Is the converse supposed to be true?

No.

> I noticed bookmarks and filesystem_limits have SPA_FEATURE_EXTENSIBLE_DATASET 
> but are ZFEATURE_FLAG_READONLY_COMPAT, not ZFEATURE_FLAG_PER_DATASET or 
> (ZFEATURE_FLAG_READONLY_COMPAT | ZFEATURE_FLAG_PER_DATASET).

That sounds correct to me.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-17967___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-03 Thread Matthew Ahrens
> @@ -129,6 +130,17 @@ uint64_t zfs_delay_scale = 1000 * 1000 * 1000 / 2000;
>  hrtime_t zfs_throttle_delay = MSEC2NSEC(10);
>  hrtime_t zfs_throttle_resolution = MSEC2NSEC(10);
>  
> +
> +/*
> + * Tunable to control number of threads servicing the vn rele taskq
> + */
> +int zfs_vn_rele_threads = 256;

wow, that is a lot of threads.  But I guess they only get created on demand?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51761367___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 4521 zfstest is trying to execute evil "zfs unmount -a" (#58)

2016-02-03 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/58#issuecomment-179371705___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-03 Thread Matthew Ahrens
> + ASSERT(vp);
> + zfs_inactive_impl(vp, CRED(), NULL);
> +}
> +
> +/*
> + * This value will be multiplied by zfs_dirty_data_max to determine
> + * the threshold past which we will call zfs_inactive_impl() async.
> + *
> + * Selecting the multiplier is a balance between how long we're willing to wait
> + * for delete/free to complete (get shell back, have a NFS thread captive, etc)
> + * and reducing the number of active requests in the backing taskq.
> + *
> + * 4 GiB (zfs_dirty_data_max default) * 16 (multiplier default) = 64 GiB
> + * meaning by default we will call zfs_inactive_impl async for vnodes > 64 GiB
> + */
> +uint16_t zfs_inactive_async_multiplier = 16;

I think making this an "int" or int32_t would be better since that's a common 
assumption when tuning with mdb (using the "W" format character).
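
I.e., something along these lines (sketch):

```
/* int32_t matches the 4-byte write that mdb's "W" format performs. */
int32_t zfs_inactive_async_multiplier = 16;
```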

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51762660___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Improve speculative indirection tables prefetch. (#65)

2016-02-03 Thread Matthew Ahrens
@amotin looks like the machine went away for some mysterious reason.  I'll try 
again.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/65#issuecomment-179390170___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-03 Thread Matthew Ahrens
> + * and reducing the number of active requests in the backing taskq.
> + *
> + * 4 GiB (zfs_dirty_data_max default) * 16 (multiplier default) = 64 GiB
> + * meaning by default we will call zfs_inactive_impl async for vnodes > 64 GiB
> + */
> +uint16_t zfs_inactive_async_multiplier = 16;
> +
> +void
> +zfs_inactive(vnode_t *vp, cred_t *cr, caller_context_t *ct)
> +{
> + znode_t *zp = VTOZ(vp);
> +
> + if (zp->z_size > zfs_inactive_async_multiplier * zfs_dirty_data_max) {
> + if (taskq_dispatch(dsl_pool_vnrele_taskq(
> + dmu_objset_pool(zp->z_zfsvfs->z_os)), zfs_inactive_task,
> + vp, TQ_SLEEP) != NULL)

Seems like we might want to use TQ_NOSLEEP so that if there are a ton of 
deletions, and the queue gets full, then we will effectively recruit more 
threads to work on deletion (because the calling thread will work on deletion, 
rather than waiting for the taskq threads to make progress).
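
Roughly this shape (a sketch of the suggestion, not the actual patch):

```
/*
 * Sketch: try an async dispatch without sleeping; if the taskq is full,
 * do the work inline so the caller helps drain the deletion backlog.
 */
if (taskq_dispatch(dsl_pool_vnrele_taskq(dmu_objset_pool(zp->z_zfsvfs->z_os)),
    zfs_inactive_task, vp, TQ_NOSLEEP) == NULL) {
	/* taskq full -- fall back to a synchronous zfs_inactive_impl() */
	zfs_inactive_impl(vp, cr, ct);
}
```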

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51762910___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-03 Thread Matthew Ahrens
> + ASSERT(vp);
> + zfs_inactive_impl(vp, CRED(), NULL);
> +}
> +
> +/*
> + * This value will be multiplied by zfs_dirty_data_max to determine
> + * the threshold past which we will call zfs_inactive_impl() async.
> + *
> + * Selecting the multiplier is a balance between how long we're willing to wait
> + * for delete/free to complete (get shell back, have a NFS thread captive, etc)
> + * and reducing the number of active requests in the backing taskq.
> + *
> + * 4 GiB (zfs_dirty_data_max default) * 16 (multiplier default) = 64 GiB
> + * meaning by default we will call zfs_inactive_impl async for vnodes > 64 GiB
> + */
> +uint16_t zfs_inactive_async_multiplier = 16;

I'm not sure I understand the reasoning here.  It seems like we would want 
large files to be deleted async so that we improve the interactive performance 
of delete.  Therefore we should go async if we think the deletion will be slow.  
It will be slow if we have to read a lot of indirect blocks.  So maybe we 
should go async if the number of indirect blocks is > X (maybe X=1).
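
For example (just a sketch of that heuristic, using dmu_object_info() to look
at the indirection level; not the actual patch):

```
/*
 * Sketch: go async only when the object has indirect blocks, since reading
 * those is what makes the free slow.  doi_indirection == 1 means the block
 * pointers live in the dnode itself (no indirect blocks to read).
 */
dmu_object_info_t doi;
if (dmu_object_info(zp->z_zfsvfs->z_os, zp->z_id, &doi) == 0 &&
    doi.doi_indirection > 1) {
	/* deletion will need indirect-block reads: dispatch async */
}
```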

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51762487___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-03 Thread Matthew Ahrens
> @@ -498,6 +511,17 @@ dsl_pool_sync(dsl_pool_t *dp, uint64_t txg)
>   dsl_pool_undirty_space(dp, dp->dp_dirty_pertxg[txg & TXG_MASK], txg);
>  
>   /*
> +  * Update the long range free counters after
> +  * we're done syncing user data
> +  */
> + if (spa_sync_pass(dp->dp_spa) == 1) {
> + mutex_enter(&dp->dp_lock);
> + dp->dp_long_freeing_total -= dp->dp_long_free_dirty_total;

Wouldn't the "dp_long_free_dirty_total" need to be specific to this txg?
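
I.e., something like the existing dp_dirty_pertxg[] convention (a sketch, not
the actual patch):

```
/* Sketch: track dirtied frees per open txg, like dp_dirty_pertxg[]. */
uint64_t dp_long_free_dirty_pertxg[TXG_SIZE];

/* ... then in dsl_pool_sync(), on sync pass 1 of this txg: */
mutex_enter(&dp->dp_lock);
dp->dp_long_freeing_total -=
    dp->dp_long_free_dirty_pertxg[txg & TXG_MASK];
dp->dp_long_free_dirty_pertxg[txg & TXG_MASK] = 0;
mutex_exit(&dp->dp_lock);
```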

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51761498___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-02-03 Thread Matthew Ahrens
>  
>   if (offset >= object_size)
>   return (0);
>  
> + if (zfs_per_txg_dirty_frees_percent <= 100)
> + dirty_frees_threshold = zfs_per_txg_dirty_frees_percent;
> + else
> + dirty_frees_threshold = MAX(1, zfs_delay_min_dirty_percent / 4);
> +
> +

only one blank line is needed

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61/files#r51759946___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-02-03 Thread Matthew Ahrens
@jwk404 can you take a look at this?  Are we OK with adding failing tests 
(which expose real bugs)?  Seems like a good form of test-driven development to 
me; we will just have to mark them as known failures.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#issuecomment-179362881___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6568 zfs_allow_010_pos and zfs_allow_012_neg fail intermittently (#62)

2016-02-03 Thread Matthew Ahrens
Closed #62 via 54eadfc37f6f61ac54291e5bc5375d7f88674900.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/62#event-537900808___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6551 cmd/zpool: cleanup gcc warnings (#57)

2016-02-03 Thread Matthew Ahrens
Changes LGTM.  Any other reviewers?  Any other testing, or integration into 
other codebases?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/57#issuecomment-179356561___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6568 zfs_allow_010_pos and zfs_allow_012_neg fail intermittently (#62)

2016-02-03 Thread Matthew Ahrens
@yuripankov nope, I will RTI.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/62#issuecomment-179358655___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6562 Refquota on receive doesn't account for overage. (#64)

2016-01-30 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/64#issuecomment-177202133___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Improve speculative indirection tables prefetch. (#65)

2016-01-29 Thread Matthew Ahrens
@amotin FYI, for now zettabot only responds to "members" (prakash and I).  I 
deleted your comment so that others won't get the wrong idea.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/65#issuecomment-177082410___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Improve speculative indirection tables prefetch. (#65)

2016-01-29 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/65#issuecomment-177082108___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-01-28 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-176312600___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Improve speculative indirection tables prefetch. (#65)

2016-01-28 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/65#issuecomment-176386983___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] Merge illumos (#63)

2016-01-27 Thread Matthew Ahrens
Merged #63.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/63#event-528499757___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-01-27 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56#issuecomment-175773893___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6562 Refquota on receive doesn't account for overage. (#64)

2016-01-27 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/64#issuecomment-175756414___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-01-25 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56#issuecomment-174700654___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6562 Refquota on receive doesn't account for overage. (#64)

2016-01-25 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/64#issuecomment-174688695___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-01-25 Thread Matthew Ahrens
@ilovezfs These changes look ready to me.  How have you tested them (besides 
the regression tests run here)?  E.g. are these changes present in another 
platform, and/or have you done manual testing to check that 
"dedup=sha512,verify" errors out when the feature is disabled?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-174676313___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


[OpenZFS Developer] [openzfs] Merge illumos (#63)

2016-01-25 Thread Matthew Ahrens

You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/63

-- Commit Summary --

  * 6537 Panic on zpool scrub with DEBUG kernel
  * 6345 remove xhat support
  * 6556 64-bit SPARC libc needs signalfd.o too
  * 6123 SMF ipfilter support needs improvement
  * 6450 scrub/resilver unnecessarily traverses snapshots created after the scrub started
  * 6465 zfs-tests/tests/functional/cli_root/zpool_upgrade/zpool_upgrade_007_pos is broken
  * Revert "6057 login(1) "Last login" hostname is too short"
  * Merge remote-tracking branch 'illumos/master' into openzfs-merge

-- File Changes --

M usr/src/cmd/bnu/in.uucpd.c (4)
M usr/src/cmd/cmd-inet/usr.bin/finger.c (11)
M usr/src/cmd/cmd-inet/usr.bin/pppd/auth.c (17)
M usr/src/cmd/cmd-inet/usr.lib/mdnsd/multicast.xml (10)
M usr/src/cmd/cmd-inet/usr.sbin/comsat.xml (5)
M usr/src/cmd/cmd-inet/usr.sbin/finger.xml (8)
M usr/src/cmd/cmd-inet/usr.sbin/in.routed/route.xml (5)
M usr/src/cmd/cmd-inet/usr.sbin/in.routed/svc-route (6)
M usr/src/cmd/cmd-inet/usr.sbin/in.talkd/talk.xml (5)
M usr/src/cmd/cmd-inet/usr.sbin/login.xml (22)
M usr/src/cmd/cmd-inet/usr.sbin/rexec.xml (8)
M usr/src/cmd/cmd-inet/usr.sbin/shell.xml (11)
M usr/src/cmd/cmd-inet/usr.sbin/telnet.xml (8)
M usr/src/cmd/fs.d/nfs/svc/nfs-server (120)
M usr/src/cmd/fs.d/nfs/svc/rquota.xml (14)
M usr/src/cmd/fs.d/nfs/svc/server.xml (9)
M usr/src/cmd/ipf/svc/ipfilter (18)
M usr/src/cmd/ipf/svc/ipfilter.xml (148)
M usr/src/cmd/login/login.c (109)
M usr/src/cmd/lp/cmd/lpsched/print-svc (12)
M usr/src/cmd/lp/cmd/lpsched/server.xml (8)
M usr/src/cmd/lvm/rpc.mdcommd/mdcomm.xml (7)
M usr/src/cmd/lvm/rpc.metad/meta.xml (7)
M usr/src/cmd/lvm/rpc.metamedd/metamed.xml (7)
M usr/src/cmd/lvm/rpc.metamhd/metamh.xml (7)
M usr/src/cmd/rexd/rex.xml (8)
M usr/src/cmd/rpcbind/bind.xml (7)
M usr/src/cmd/rpcsvc/rpc.bootparamd/bootparams.xml (10)
M usr/src/cmd/rpcsvc/rstat.xml (8)
M usr/src/cmd/rpcsvc/rusers.xml (8)
M usr/src/cmd/rpcsvc/spray.xml (8)
M usr/src/cmd/rpcsvc/wall.xml (8)
M usr/src/cmd/sendmail/lib/smtp-sendmail.xml (8)
M usr/src/cmd/smbsrv/smbd/server.xml (7)
M usr/src/cmd/smbsrv/smbd/svc-smbd (11)
M usr/src/cmd/ssh/etc/ssh.xml (8)
M usr/src/cmd/ssh/etc/sshd (7)
M usr/src/cmd/ssh/include/config.h (2)
M usr/src/cmd/ssh/sshd/auth-pam.c (13)
M usr/src/cmd/ssh/sshd/loginrec.c (80)
M usr/src/cmd/ssh/sshd/servconf.c (7)
M usr/src/cmd/ssh/sshd/session.c (64)
M usr/src/cmd/ssh/sshd/sshlogin.c (16)
M usr/src/cmd/svc/milestone/global.xml (98)
M usr/src/cmd/svc/shell/ipf_include.sh (400)
M usr/src/cmd/syslogd/system-log.xml (8)
M usr/src/cmd/ypcmd/yp.sh (82)
M usr/src/head/lastlog.h (19)
M usr/src/lib/libc/sparcv9/Makefile.com (2)
M usr/src/lib/pam_modules/unix_account/unix_acct.c (50)
M usr/src/lib/pam_modules/unix_session/unix_session.c (264)
M usr/src/man/man1/finger.1 (9)
M usr/src/man/man1/login.1 (14)
M usr/src/man/man1m/in.fingerd.1m (9)
M usr/src/man/man1m/in.uucpd.1m (24)
M usr/src/man/man1m/svc.ipfd.1m (144)
M usr/src/man/man4/shadow.4 (9)
M usr/src/man/man4/sshd_config.sunssh.4 (3)
M usr/src/man/man5/pam_unix_session.5 (13)
M usr/src/uts/common/Makefile.files (1)
M usr/src/uts/common/fs/zfs/dsl_scan.c (6)
M usr/src/uts/common/os/watchpoint.c (29)
M usr/src/uts/common/vm/as.h (5)
M usr/src/uts/common/vm/seg_spt.c (61)
M usr/src/uts/common/vm/seg_vn.c (31)
M usr/src/uts/common/vm/vm_as.c (175)
M usr/src/uts/common/vm/vm_rm.c (2)
D usr/src/uts/common/vm/xhat.c (555)
D usr/src/uts/common/vm/xhat.h (208)
M usr/src/uts/sfmmu/vm/hat_sfmmu.c (210)
M usr/src/uts/sfmmu/vm/hat_sfmmu.h (14)
D usr/src/uts/sfmmu/vm/xhat_sfmmu.c (240)
D usr/src/uts/sfmmu/vm/xhat_sfmmu.h (94)
M usr/src/uts/sun4u/Makefile.files (1)
M usr/src/uts/sun4u/vm/mach_kpm.c (4)
M usr/src/uts/sun4v/Makefile.files (1)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/63.patch
https://github.com/openzfs/openzfs/pull/63.diff

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/63
___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6546 Fix "zpool get guid, freeing, leaked" source (#53)

2016-01-25 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/53#issuecomment-174673680___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6568 zfs_allow_010_pos and zfs_allow_012_neg fail intermittently (#62)

2016-01-25 Thread Matthew Ahrens
Sounds good to me.  @jwk404 Can you take a look too?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/62#issuecomment-174672452___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6551 cmd/zpool: cleanup gcc warnings (#57)

2016-01-25 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/57#issuecomment-174671695___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6551 cmd/zpool: cleanup gcc warnings (#57)

2016-01-25 Thread Matthew Ahrens
> @@ -549,7 +550,7 @@ get_replication(nvlist_t *nvroot, boolean_t fatal)
>   uint_t c, children;
>   nvlist_t *nv;
>   char *type;
> - replication_level_t lastrep, rep, *ret;
> + replication_level_t lastrep = {0}, rep, *ret;

This should really be 3 different lines, for clarity.  But I suppose it's only 
slightly worse than the current code.
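
I.e. (sketch):

```
replication_level_t lastrep = {0};
replication_level_t rep;
replication_level_t *ret;
```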

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/57/files#r50748986___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-01-25 Thread Matthew Ahrens
> @@ -4440,7 +,7 @@ parse_fs_perm(fs_perm_t *fsperm, nvlist_t *nvl)
>   nvlist_t *nvl2 = NULL;
>   const char *name = nvpair_name(nvp);
>   uu_avl_t *avl = NULL;
> - uu_avl_pool_t *avl_pool;
> + uu_avl_pool_t *avl_pool = NULL;

That assertion will always fail, if executed.  It would be more straightforward 
to do something like `assert(!"unhandled zfs_deleg_who_type_t")`
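
E.g. (sketch of the suggested default case):

```
	default:
		/* fail loudly instead of continuing with avl_pool == NULL */
		assert(!"unhandled zfs_deleg_who_type_t");
		break;
```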

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56/files#r50747405___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-01-25 Thread Matthew Ahrens
> @@ -2330,6 +2331,9 @@ us_compare(const void *larg, const void *rarg, void *unused)
>   if (rv64 != lv64)
>   rc = (rv64 < lv64) ? 1 : -1;
>   break;
> +
> + default:
> + break;

Looks like you can sort by a different (invalid) column, and it just ignores 
it.  I guess that's a different bug.  Seems like it should validate the sort 
columns against us_field_names[] like it does the output (-o) columns.
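
Something along these lines, for example (a sketch only; `col` is a
hypothetical variable holding one requested sort field):

```
/* Sketch: reject sort fields that aren't in us_field_names[]. */
boolean_t found = B_FALSE;
for (size_t i = 0;
    i < sizeof (us_field_names) / sizeof (us_field_names[0]); i++) {
	if (strcmp(col, us_field_names[i]) == 0) {
		found = B_TRUE;
		break;
	}
}
if (!found) {
	(void) fprintf(stderr, gettext("invalid sort field '%s'\n"), col);
	usage(B_FALSE);
}
```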

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56/files#r50746688___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6418 zpool should have a label clearing command (#32)

2016-01-24 Thread Matthew Ahrens
> + }
> + }
> +
> + argc -= optind;
> + argv += optind;
> +
> + /* get vdev name */
> + if (argc < 1) {
> + (void) fprintf(stderr, gettext("missing vdev device name\n"));
> + usage(B_FALSE);
> + }
> +
> + /* Allow bare paths if they exist, otherwise prepend. */
> + if (stat(argv[0], &st) != 0 &&
> + strncmp(argv[0], ZFS_DISK_ROOTD, strlen(ZFS_DISK_ROOTD)) != 0)
> + ret = asprintf(&vdev, ZFS_DISK_ROOTD "%s", argv[0]);

printf args should be `"%s%s", ZFS_DISK_ROOTD, argv[0]` so that strange chars 
in ZFS_DISK_ROOTD can't confuse printf (and so we aren't relying on the macro 
evaluating to a string literal)
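
I.e. (sketch):

```
/* Pass ZFS_DISK_ROOTD as an argument rather than as part of the format. */
ret = asprintf(&vdev, "%s%s", ZFS_DISK_ROOTD, argv[0]);
```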

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/32/files#r50648985___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6418 zpool should have a label clearing command (#32)

2016-01-24 Thread Matthew Ahrens
@yuripankov Reading through the comments again, I think the outstanding issues 
are:

* should be able to specify c0t0d0 and it should know you mean 
/dev/dsk/c0t0d0s0 (like "zpool add" does) - looks like we will add the 
`/dev/dsk` but not the `s0`
* don't do anything if there are no valid labels (zpool_in_use() returns 
nonzero)
* potentially making it zero only the labels that exist (if some but not all 
labels exist) - not sure this is a requirement, but it would be nice.
* long string style issue
* printf arg issue

@wca, are you able to continue working on this?  If not then maybe @yuripankov 
you could pick it up from here?

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/32#issuecomment-174371260___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6418 zpool should have a label clearing command (#32)

2016-01-24 Thread Matthew Ahrens
> + goto wipe_label;
> +
> + (void) fprintf(stderr,
> + gettext("Unable to determine pool state for %s\n"
> + "Use -f to force the clearing any label data\n"), vdev);
> + goto errout;
> + }
> +
> + if (inuse) {
> + switch (state) {
> + default:
> + case POOL_STATE_ACTIVE:
> + case POOL_STATE_SPARE:
> + case POOL_STATE_L2CACHE:
> + (void) fprintf(stderr,
> +gettext("labelclear operation failed.\n"

You could reduce the indentation by 2 tabs using a local variable in this 
function (outside the "if" and "switch").  You still might need to break some 
of the lines though.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/32/files#r50648413___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6450 scrub/resilver unnecessarily traverses snapshots created after the scrub started (#55)

2016-01-23 Thread Matthew Ahrens
Closed #55 via 5008f624dd4a0f4bd5d0a3d0a98ebdc4f1b47dc7.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/55#event-524089953___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6465 zfs-tests/tests/functional/cli_root/zpool_upgrade/zpool_upgrade_007_pos is broken (#59)

2016-01-23 Thread Matthew Ahrens
Closed #59 via 4222474fee15fa184de5fe037001dd58f365c061.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/59#event-524090237___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-01-23 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56#issuecomment-174213201___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] basic set of casesensitivity/normalization test cases for zfs-tests (#49)

2016-01-22 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/49#issuecomment-174054903___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6370 ZFS send fails to transmit some holes (#37)

2016-01-22 Thread Matthew Ahrens
@Ringdingcoder 
> I used zfs send -R, without -i ... Isn't this supposed to only happen with 
> incremental sends?

If you have more than one snapshot in a filesystem, `zfs send -R` has to 
generate incrementals.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/37#issuecomment-173988977___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-01-21 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-173748614___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6566 panics in zfs_ioc_objset_stats_impl during zvol creation and des… (#60)

2016-01-21 Thread Matthew Ahrens
> @@ -2058,6 +2058,24 @@ zfs_ioc_objset_stats_impl(zfs_cmd_t *zc, objset_t *os)
>   error = zvol_get_stats(os, nv);
>   if (error == EIO)
>   return (error);
> + /*
> +  * If the zvol's parent dataset was being destroyed
> +  * when we called zvol_get_stats, then it's possible
> +  * that the ZAP still existed but its blocks had
> +  * been already been freed when we tried to read it.
> +  * It would then appear that the ZAP had no entries.
> +  */
> + if (error == ENOENT)
> + return (error);
> + /*
> +  * a zvol's znode gets created before its zap gets
> +  * created.  So there is a short window of time in which
> +  * zvol_get_stats() can return EEXIST.  Return an
> +  * error in that case

Is it possible that you found (and fixed) this problem before integrating this 
commit, which should also fix it (it introduced the uses of dp_config_rwlock I 
mentioned):

commit 3b2aab18808792cbd248a12f1edf139b89833c13
Author: Matthew Ahrens 
Date:   Thu Feb 28 12:44:05 2013 -0800
3464 zfs synctask code needs restructuring

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/60/files#r50436764___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-01-21 Thread Matthew Ahrens
sure

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-173651199___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6569 large file delete can starve out write ops (#61)

2016-01-20 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/61#issuecomment-173362314___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-01-20 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-173359923___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6546 Fix "zpool get guid, freeing, leaked" source (#53)

2016-01-19 Thread Matthew Ahrens
Changes look good.  I believe the test failures are unrelated to your changes.  
@dasjoe have you had a chance to test the fix?  (Of course, even better would 
be to add a new test case to ensure the correct behavior, but that isn't a 
requirement.)

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/53#issuecomment-173093204___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6541 dedup=[cksum], verify defeated spa feature check (#51)

2016-01-19 Thread Matthew Ahrens
@ilovezfs I'm happy to tell zettabot to run if you update the pull request, but 
how would that be useful?  Are we hitting that VERIFY?  

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/51#issuecomment-173092952___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6566 panics in zfs_ioc_objset_stats_impl during zvol creation and des… (#60)

2016-01-19 Thread Matthew Ahrens
> @@ -2058,6 +2058,24 @@ zfs_ioc_objset_stats_impl(zfs_cmd_t *zc, objset_t *os)
>   error = zvol_get_stats(os, nv);
>   if (error == EIO)
>   return (error);
> + /*
> +  * If the zvol's parent dataset was being destroyed
> +  * when we called zvol_get_stats, then it's possible
> +  * that the ZAP still existed but its blocks had
> +  * been already been freed when we tried to read it.
> +  * It would then appear that the ZAP had no entries.
> +  */
> + if (error == ENOENT)
> + return (error);
> + /*
> +  * a zvol's znode gets created before its zap gets
> +  * created.  So there is a short window of time in which
> +  * zvol_get_stats() can return EEXIST.  Return an
> +  * error in that case

I don't understand this either.  I think you're saying that zvol_get_stats() 
can run concurrently with zvol_create_cb, but that is not possible.  The 
dp_config_rwlock prevents this (it's held for reader when zvol_get_stats is 
called, and for writer when zvol_create_cb is called).

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/60/files#r50215843___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6566 panics in zfs_ioc_objset_stats_impl during zvol creation and des… (#60)

2016-01-19 Thread Matthew Ahrens
> @@ -2058,6 +2058,24 @@ zfs_ioc_objset_stats_impl(zfs_cmd_t *zc, objset_t *os)
>   error = zvol_get_stats(os, nv);
>   if (error == EIO)
>   return (error);
> + /*
> +  * If the zvol's parent dataset was being destroyed

I don't understand this part of the comment.  If the zvol's parent dataset was 
being destroyed, then the parent must have no children (because parent can not 
be destroyed while it has children).  Therefore this zvol does not exist so 
this can't happen.  The dp_config_rwlock (which we hold for reader) prevents 
any race condition with dataset deletion.

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/60/files#r50215701___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 4521 zfstest is trying to execute evil "zfs unmount -a" (#58)

2016-01-19 Thread Matthew Ahrens
Code looks good to me.

I don't see those tests failing on the other os-zfs-test runs.  Here are some 
log snippets of this test run.  It does look like an error with `share`.  It 
looks like it's trying to share something on rpool.  Could `share` be looking 
for the zfs filesystem that backs it and not be able to find it due to your 
changes to not iterate over rpool?

```
Test: /opt/zfs-tests/tests/functional/delegate/zfs_allow_010_pos (run as root) 
[01:37] [FAIL]

21:47:46.29 SUCCESS: restore_root_datasets
21:47:46.41 SUCCESS: /usr/sbin/zfs allow staff1 destroy 
testpool.101006/testfs.101006
21:47:47.25 SUCCESS: /usr/sbin/zfs allow staff1 mount 
testpool.101006/testfs.101006
21:47:47.80 SUCCESS: verify_perm testpool.101006/testfs.101006 destroy staff1
21:47:48.28 SUCCESS: /usr/sbin/zfs create testpool.101006/testfs.101006
21:47:48.52 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testvol.delegate
21:47:48.77 SUCCESS: /usr/sbin/zfs create -V 150m 
testpool.101006/testvol.delegate
21:47:48.77 SUCCESS: restore_root_datasets
21:47:48.89 SUCCESS: /usr/sbin/zfs allow staff1 sharenfs 
testpool.101006/testfs.101006
21:47:48.94 NOTE: staff1 /usr/sbin/zfs set sharenfs=on 
testpool.101006/testfs.101006
21:47:50.06 SUCCESS: /usr/sbin/zfs set sharenfs=off 
testpool.101006/testfs.101006
21:47:50.07 SUCCESS: /usr/bin/mkdir -p /var/tmp/a.555758
21:47:50.11 ERROR: /usr/sbin/share /var/tmp/a.555758 exited 32
21:47:50.12 Could not share: /var/tmp/a.555758: system error
21:47:50.12 NOTE: Performing local cleanup via log_onexit 
(restore_root_datasets)
21:47:50.60 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testfs.101006
21:47:51.07 SUCCESS: /usr/sbin/zfs create testpool.101006/testfs.101006
21:47:51.22 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testvol.delegate
21:47:51.37 SUCCESS: /usr/sbin/zfs create -V 150m 
testpool.101006/testvol.delegate
21:47:51.38 Last login: Tue Jan 19 21:46:12 2016 from openzfs-r151014

Test: /opt/zfs-tests/tests/functional/delegate/zfs_allow_012_neg (run as root) 
[01:38] [FAIL]

21:49:28.81 SUCCESS: restore_root_datasets
21:49:29.10 SUCCESS: /usr/sbin/zfs allow staff1 destroy 
testpool.101006/testfs.101006
21:49:29.51 SUCCESS: /usr/sbin/zfs allow staff1 mount 
testpool.101006/testfs.101006
21:49:29.55 SUCCESS: verify_noperm testpool.101006/testfs.101006 destroy staff1
21:49:29.84 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testfs.101006
21:49:30.11 SUCCESS: /usr/sbin/zfs create testpool.101006/testfs.101006
21:49:30.42 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testvol.delegate
21:49:30.78 SUCCESS: /usr/sbin/zfs create -V 150m 
testpool.101006/testvol.delegate
21:49:30.78 SUCCESS: restore_root_datasets
21:49:31.15 SUCCESS: /usr/sbin/zfs allow staff1 sharenfs 
testpool.101006/testfs.101006
21:49:31.20 NOTE: staff1 /usr/sbin/zfs set sharenfs=on 
testpool.101006/testfs.101006
21:49:31.24 SUCCESS: verify_noperm testpool.101006/testfs.101006 sharenfs staff1
21:49:31.47 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testfs.101006
21:49:31.70 SUCCESS: /usr/sbin/zfs create testpool.101006/testfs.101006
21:49:31.95 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testvol.delegate
21:49:32.63 SUCCESS: /usr/sbin/zfs create -V 150m 
testpool.101006/testvol.delegate
21:49:32.64 SUCCESS: restore_root_datasets
21:49:32.89 SUCCESS: /usr/sbin/zfs allow staff1 share 
testpool.101006/testfs.101006
21:49:32.99 SUCCESS: /usr/bin/mkdir -p /var/tmp/a.557950
21:49:33.03 ERROR: /usr/sbin/share /var/tmp/a.557950 exited 32
21:49:33.03 Could not share: /var/tmp/a.557950: system error
21:49:33.04 NOTE: Performing local cleanup via log_onexit (cleanup)
21:49:33.26 SUCCESS: /usr/sbin/zpool set delegation=on testpool.101006
21:49:33.41 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testfs.101006
21:49:33.57 SUCCESS: /usr/sbin/zfs create testpool.101006/testfs.101006
21:49:34.22 SUCCESS: /usr/sbin/zfs destroy -Rf testpool.101006/testvol.delegate
21:49:34.48 SUCCESS: /usr/sbin/zfs create -V 150m 
testpool.101006/testvol.delegate
21:49:34.48 SUCCESS: restore_root_datasets
21:49:34.49 Last login: Tue Jan 19 21:47:51 2016 from openzfs-r151014
```

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/58#issuecomment-173086357___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] ZAP question

2016-01-19 Thread Matthew Ahrens
On Tue, Jan 19, 2016 at 3:13 PM, RomanS  wrote:

> Hello.
>
> I want to store some snapshot-related data, so I do the following:
>
> uint64_t val = 1;
> dsl_dataset_zapify(ds, tx);
> zap_add(mos, ds->ds_object, "localhost:param", sizeof (val), 1, &val, tx);
>
> "ds" is a snapshot (data/d1@snap1).
> "mos" is ds->ds_dir->dd_pool->dp_meta_objset
>
>
> After that cannot import this pool, it says cannot open data/d1.
> zdb also says "cannot open data/d1", but shows my "param"
>
> What am I doing wrong?
>

Assuming that you are doing this in syncing context (e.g. from the
"syncfunc" callback of dsl_sync_task()), the code looks fine to me.

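For reference, a minimal sketch of doing this from a dsl_sync_task() syncfunc
(names like myparam_sync_arg_t are made up for illustration; the exact
dsl_sync_task() arguments depend on your codebase):

```
typedef struct myparam_sync_arg {
	const char *msa_snapname;	/* e.g. "data/d1@snap1" */
	uint64_t msa_val;
} myparam_sync_arg_t;

static void
myparam_sync(void *arg, dmu_tx_t *tx)
{
	myparam_sync_arg_t *msa = arg;
	dsl_pool_t *dp = dmu_tx_pool(tx);
	dsl_dataset_t *ds;

	VERIFY0(dsl_dataset_hold(dp, msa->msa_snapname, FTAG, &ds));
	dsl_dataset_zapify(ds, tx);
	VERIFY0(zap_add(dp->dp_meta_objset, ds->ds_object, "localhost:param",
	    sizeof (msa->msa_val), 1, &msa->msa_val, tx));
	dsl_dataset_rele(ds, FTAG);
}
```

You would invoke this via dsl_sync_task() with a corresponding checkfunc.
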
Can you elaborate on the zdb output and the output when doing "zpool
import"?  What arguments did you use and what exactly was the output?

--matt



>
> Thanks.
>
> ___
> developer mailing list
> developer@open-zfs.org
> http://lists.open-zfs.org/mailman/listinfo/developer
>
>
___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6566 panics in zfs_ioc_objset_stats_impl during zvol creation and des… (#60)

2016-01-19 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/60#issuecomment-172986668___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [zfs] Son-of-4986 now is 6562

2016-01-19 Thread Matthew Ahrens
On Mon, Jan 18, 2016 at 8:25 AM, Dan McDonald  wrote:

> https://www.illumos.org/issues/6562
>
> While I have the fix for my customer's incremental case in hand, the more
> general case question lingers.  Currently, one can do:
>
> zfs create rpool/foo
> zfs set refquota=10M rpool/foo
> yes > /rpool/foo/fill-er-up
> zfs snapshot rpool/foo@1
> zfs send -R rpool/foo@1 | zfs recv rpool/bar
>
> and while rpool/bar gets created, the refquota property IS NOT INCLUDED.
>
> Since refquota's weird anyway, and part of the post-4986 "delayed
> properties", one COULD special-case this some more, to figure out how to
> set the refquota without failing because of the one-transaction overage
> discussed in earlier thread.
>
> My question to the community is this:
>
> Should I bother fixing the create-new-filesystem case for refquota?
>

Correct behavior would be for the receive to work and set the refquota (and
quota for that matter) even if it puts you into an over-[ref]quota
situation.

Your changes make the situation better, although the problem is not fixed in all cases.
I'm fine with taking your changes as-is and fixing the rest of the problem
later.

As for how to fix it completely, I think it would be reasonable to allow
the [ref]quota to be set to less than the current space referenced/used.
That said, if we want to preserve the current behavior of "zfs set" failing
in this case, we could move the check from the kernel to libzfs, so that
"zfs set" fails but "zfs receive" can successfully set the [ref]quota.  Or
we could allow "zfs set" to set the [ref]quota as well, and print a
warning, like "Note: refquota set to X MB which is less than the current
referenced (Y MB).  Affected datasets will be readonly except for
deletions."

--matt


>
> Dan
>
>
>
> ---
> illumos-zfs
> Archives: https://www.listbox.com/member/archive/182191/=now
> RSS Feed:
> https://www.listbox.com/member/archive/rss/182191/27179292-bb9021e0
> Modify Your Subscription:
> https://www.listbox.com/member/?member_id=27179292&id_secret=27179292-acf9db97
> Powered by Listbox: http://www.listbox.com
>
___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6465 zfs-tests/tests/functional/cli_root/zpool_upgrade/zpool_upgrade_007_pos is broken (#59)

2016-01-19 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/59#issuecomment-172908915___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer


Re: [OpenZFS Developer] [openzfs] 6550 cmd/zfs: cleanup gcc warnings (#56)

2016-01-19 Thread Matthew Ahrens
@zettabot go

---
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/56#issuecomment-172908703___
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer

