[developer] Re: [openzfs/openzfs] 9485 Optimize possible split block search space (#625)

2018-10-03 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.





-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/625#pullrequestreview-161262396
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Tf29308eb1ca3d052-M0dcf2a76698a405e61f00b0b
Delivery options: https://openzfs.topicbox.com/groups/developer/subscription


[developer] Re: [openzfs/openzfs] 9485 Optimize possible split block search space (#625)

2018-10-02 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.

I just found one minor nit. Code LGTM.

Are there any plans to do extra testing for this in illumos?

> - * majority of segment copies are good. This allows all the segment copies to
- * participate fairly in the reconstruction and prevents the repeated use of
- * one bad copy.
+ * If an indirect split block contains more than this many possible unique
+ * combinations when being reconstructed, consider it too computationally
+ * expensive to check them all. Instead, try at most 100 randomly-selected
+ * combinations each time the block is accessed.  This allows all segment
+ * copies to participate fairly in the reconstruction when all combinations
+ * cannot be checked and prevents repeated use of one bad copy.
+ */
+int zfs_reconstruct_indirect_combinations_max = 256;
+
+
+/*
+ * Enable to simulate damaged segments and validate reconstruction.  This
+ * is intentionally not exposed as a module parameter.

We don't have "module parameters" in illumos, so maybe we should take out this 
sentence just to avoid confusion.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/625#pullrequestreview-161002747
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Tf29308eb1ca3d052-M9fe82ccb9076625963e36ad9
Delivery options: https://openzfs.topicbox.com/groups/developer/subscription


[developer] Re: [openzfs/openzfs] 9647 Introduce ZFS Read/Write kstats (#664)

2018-10-02 Thread Serapheim Dimitropoulos
This is not ready to go yet.

I still haven't updated the code with the restructuring from ZoL. There is a 
kstat memory leak that I need to narrow down in the new code before updating 
this. Sorry for the delay.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/664#issuecomment-426351265
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Ta167a88bbab5c009-Me25b2d91d7d659ccb8d2c920
Delivery options: https://openzfs.topicbox.com/groups/developer/subscription


[developer] Re: [openzfs/openzfs] 9647 Introduce ZFS Read/Write kstats (#664)

2018-08-19 Thread Serapheim Dimitropoulos
Let's hold off on this until the ZoL PR makes it in: 
https://github.com/zfsonlinux/zfs/pull/7705

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/664#issuecomment-414166806
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Ta167a88bbab5c009-M30c7648d65ba650c25fb1b05
Delivery options: https://openzfs.topicbox.com/groups/developer/subscription


[developer] Re: [openzfs/openzfs] 9647 Introduce ZFS Read/Write kstats (#664)

2018-07-14 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -1217,18 +1352,10 @@ zvol_dumpio(zvol_state_t *zv, void *addr, uint64_t 
> offset, uint64_t size,
 int
 zvol_strategy(buf_t *bp)
 {
-   zfs_soft_state_t *zs = NULL;

see my other comment (e.g. cleanup).

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/664#discussion_r202529019
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Ta167a88bbab5c009-M7f0463273753eeeadeb73439
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9647 Introduce ZFS Read/Write kstats (#664)

2018-07-14 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -653,43 +654,28 @@ zfs_read(vnode_t *vp, uio_t *uio, int ioflag, cred_t 
> *cr, caller_context_t *ct)
 static int
 zfs_write(vnode_t *vp, uio_t *uio, int ioflag, cred_t *cr, caller_context_t 
*ct)
 {
-   znode_t *zp = VTOZ(vp);

Since I was adding the kstat_update calls in these functions, I figured I'd 
refactor the styling here so that variables are declared closer to their usage, 
now that C99 is enabled.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/664#discussion_r202529013
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Ta167a88bbab5c009-M4a92b85f6ad15d00e281b387
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9647 Introduce ZFS Read/Write kstats (#664)

2018-07-10 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> +  * There should not be anything wrong with having kstats for
+* snapshots. Since we are not sure how useful they would be
+* though nor how much their memory overhead would matter in
+* a filesystem with many snapshots, we skip them for now.
+*/
+   if (zfsvfs->z_issnap)
+   return;
+
+   /*
+* We are limited by KSTAT_STRLEN for the kstat's name here
+* which is a lot less than the string length of a dataset's
+* path. Thus, we distinguish handles by their objset IDs
+* prepended by the pool's load_guid. Note that both numbers
+* are formatted in hex. See breakdown in comment below.
+*
+* We use the pool's load_guid because it is guaranteed to

Thank you for looking at the review @Ramzec.

Yes. The pool guid can change without a re-import (e.g. when we do a `zpool reguid 
`).
Thus, if we were using the pool guid as part of the kstat name and the pool's 
guid changed, all these kstats would be left behind with the old guid.

Those kstats are created when a dataset is mounted and destroyed when the 
dataset is unmounted, so it makes more sense to use the pool's load_guid, as 
this keeps things consistent.

Given the above, recreating all these structures every time the guid changes 
would be very complicated and unnecessary. Does this make sense?
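
To make the naming scheme concrete, here is a minimal sketch of how such a
name could be built; the format string and variable names are illustrative,
not the PR's literal code:
```
/*
 * Illustrative only: pack the pool's load_guid and the objset ID,
 * both in hex, so the result always fits in KSTAT_STRLEN no matter
 * how long the dataset's path is.
 */
char kstat_name[KSTAT_STRLEN];

(void) snprintf(kstat_name, sizeof (kstat_name), "%llx-%llx",
    (u_longlong_t)spa_load_guid(dmu_objset_spa(zfsvfs->z_os)),
    (u_longlong_t)dmu_objset_id(zfsvfs->z_os));
```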

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/664#discussion_r201429988
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Ta167a88bbab5c009-M9b1057e3bdb1b52c87b97469
Delivery options: https://openzfs.topicbox.com/groups


[developer] [openzfs/openzfs] 9647 Introduce ZFS Read/Write kstats (#664)

2018-07-10 Thread Serapheim Dimitropoulos
Reviewed by: Sara Hartse 
Reviewed by: Sebastien Roy 
Reviewed by: Matthew Ahrens 
Reviewed by: George Wilson 

There are use-cases when we want to look into dataset performance
(reads/writes) in ZFS and create high-level tooling for it.

In illumos we already have some of these kstats in the VFS
layer (module:unix - name:vopstats_). That
said, we want to introduce kstats specific to ZFS for the following
reasons:
[1] The current kstats don't support R/W to ZVOLs, as these code
paths don't go through VFS.
[2] The VFS version of these statistics is deliberately not surrounded
by locks so the performance of these hot paths doesn't take a hit.
That's generally fine, but it also means that the values are not
fully accurate. In this new version of kstats we can take
advantage of the aggsum_t counters (already used for ARC-related
kstats), which take care of most of that overhead (see the sketch
after this list).
[3] Linux does not have the VFS kstats of illumos in its own VFS
layer, and they would have to implement this feature anyway. Thus
it would be nice to keep the codebase consistent among
variants.
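
As a rough illustration of the aggsum_t pattern mentioned in [2] (the counter
name and helper functions below are hypothetical; only aggsum_add() and
aggsum_value() come from the existing aggsum code):
```
#include <sys/aggsum.h>

/* hypothetical per-dataset counter; aggsum_init() runs at mount time */
static aggsum_t zfs_read_bytes;

/* hot path: cheap, mostly per-bucket update with little lock contention */
void
track_read(uint64_t nbytes)
{
	aggsum_add(&zfs_read_bytes, nbytes);
}

/* kstat update time: fold the buckets into one accurate value */
int64_t
report_read_bytes(void)
{
	return (aggsum_value(&zfs_read_bytes));
}
```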

Upstream Bugs: DLPX-58996, DLPX-59190

Closes XXX
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/664

-- Commit Summary --

  * 9647 Introduce ZFS Read/Write kstats

-- File Changes --

M usr/src/common/zfs/zpool_prop.c (4)
M usr/src/man/man1m/zpool.1m (9)
M usr/src/uts/common/fs/zfs/spa.c (2)
M usr/src/uts/common/fs/zfs/sys/zfs_vfsops.h (23)
M usr/src/uts/common/fs/zfs/sys/zvol.h (7)
M usr/src/uts/common/fs/zfs/zfs_vfsops.c (158)
M usr/src/uts/common/fs/zfs/zfs_vnops.c (87)
M usr/src/uts/common/fs/zfs/zvol.c (215)
M usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd_zvol.c (42)
M usr/src/uts/common/sys/fs/zfs.h (3)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/664.patch
https://github.com/openzfs/openzfs/pull/664.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/664

--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Ta167a88bbab5c009-M3094716ab1dccb1b53c40b3b
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9630 add lzc_rename and lzc_destroy to libzfs_core (#660)

2018-06-27 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -249,6 +249,31 @@ lzc_remap(const char *fsname)
return (error);
 }
 
+int
+lzc_rename(const char *source, const char *target)
+{
+   zfs_cmd_t zc = { 0 };
+   int error;
+

Since this call (unfortunately) goes through the old-style ioctl() path, could 
you add the following assertions?
```
ASSERT3S(g_refcount, >, 0);
VERIFY3S(g_fd, !=, -1);
```

`lzc_exists()` (same situation - old way) and `lzc_ioctl()` (the new style, 
common to most functions here)
both use these assertions to catch cases where consumers of libzfs_core 
forget to call the init function before making subsequent calls into the library.
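
For reference, roughly how the function might look with the assertions folded
in (a sketch following the old-style ioctl pattern used elsewhere in
libzfs_core.c, not necessarily the final code):
```
int
lzc_rename(const char *source, const char *target)
{
	zfs_cmd_t zc = { 0 };
	int error;

	/* catch consumers that forgot to call libzfs_core_init() */
	ASSERT3S(g_refcount, >, 0);
	VERIFY3S(g_fd, !=, -1);

	(void) strlcpy(zc.zc_name, source, sizeof (zc.zc_name));
	(void) strlcpy(zc.zc_value, target, sizeof (zc.zc_value));
	error = ioctl(g_fd, ZFS_IOC_RENAME, &zc);
	if (error != 0)
		error = errno;
	return (error);
}
```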

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/660#pullrequestreview-132622470
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/T47f1cc3e459efd4d-Me965f6bd2195d5c9c9f4c826
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 8862 ZFS Channel Programs - List Bookmarks & Holds (#649)

2018-06-14 Thread Serapheim Dimitropoulos
Note: This patch builds upon the zfs redacted send/receive work, whose status 
is pending while it waits for the encryption changes.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/649#issuecomment-397442683
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/T7bbd116bcce85a18-Me3ad8dd57434bf48baceae3b
Delivery options: https://openzfs.topicbox.com/groups


[developer] [openzfs/openzfs] 9591 ms_shift can be incorrectly changed in MOS config for indirect vdevs that have been historically expanded (#651)

2018-06-12 Thread Serapheim Dimitropoulos
Problem:
According to spa_config_update() we expect new vdevs to have
vdev_ms_array equal to 0, and then we go ahead and set their metaslab
size. The problem is that indirect vdevs also have vdev_ms_array == 0
because their metaslabs are destroyed once their removal is done.

As a result, a vdev that was expanded and then removed may have its
ms_shift changed if another vdev is added after its removal.
Fortunately this behavior does not cause any type of crash or bad
behavior in the kernel, but it can confuse zdb and anyone doing any kind
of analysis of the history of the pools.

Reviewed by: Matthew Ahrens 
Reviewed by: George Wilson 
Reviewed by: John Kennedy 
Reviewed by: Prashanth Sreenivasa 

Upstream Bugs: DLPX-58879
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/651

-- Commit Summary --

  * 9591 ms_shift can be incorrectly changed in MOS config for indirect vdevs 
that have been historically expanded

-- File Changes --

M usr/src/pkg/manifests/system-test-zfstest.mf (1)
A usr/src/test/zfs-tests/tests/functional/removal/remove_expanded.ksh (89)
M usr/src/uts/common/fs/zfs/spa_config.c (14)
M usr/src/uts/common/fs/zfs/vdev.c (4)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/651.patch
https://github.com/openzfs/openzfs/pull/651.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/651

--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/Taa3327c923b01149-M3bc0fe2828d91a5bbd5f6d38
Delivery options: https://openzfs.topicbox.com/groups


[developer] [openzfs/openzfs] 8862 ZFS Channel Programs - List Bookmarks & Holds (#649)

2018-06-08 Thread Serapheim Dimitropoulos
This adds two new features to channel programs --
listing bookmarks on a filesystem and listing holds on a snapshot.

The man page of channel programs has also been updated to explain
why zfs.list.properties() returns 3 items (the property name, its value,
and its source) for each iteration, rather than just the property name
as the current version of the docs might make you expect.

One final code cleanup/side fix is renaming zfs.list.properties to
zfs.list.user_properties to make the operation more consistent with its
sibling zfs.list.system_properties.

Reviewed by: Matt Ahrens 
Reviewed by: Serapheim Dimitropoulos 

Upstream Bugs: DLPX-50618, DLPX-50362, DLPX-52112, DLPX-52115
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/649

-- Commit Summary --

  * 8862 ZFS Channel Programs - List Bookmarks & Holds

-- File Changes --

M usr/src/man/man1m/zfs-program.1m (23)
M usr/src/pkg/manifests/system-test-zfstest.mf (6)
M usr/src/test/zfs-tests/include/libtest.shlib (12)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_bookmarks.ksh
 (120)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_holds.ksh
 (121)
M usr/src/uts/common/fs/zfs/zcp_iter.c (253)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/649.patch
https://github.com/openzfs/openzfs/pull/649.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/649

--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/T7bbd116bcce85a18-Mea4ec88ac781510aa8296010
Delivery options: https://openzfs.topicbox.com/groups


[developer] [openzfs/openzfs] 9580 Add a hash-table on top of nvlist to speed-up operations (#647)

2018-06-05 Thread Serapheim Dimitropoulos
= Motivation

While dealing with another performance issue
(see 
https://github.com/openzfs/openzfs/commit/126118fbe4a957da76c8f44769ee63730080cafc)
we noticed that we spend a lot of time in various places in the kernel when 
constructing long nvlists. The problem is that when an nvlist is created with 
the NV_UNIQUE_NAME flag set (which is the case most of the time), we do a 
linear search through the whole list to ensure uniqueness for every entry we 
add.

An example of the above scenario can be seen in the following flamegraph, where 
more than half the time of zfsdev_ioctl() is spent on constructing nvlists.
Flamegraph: https://sdimitro.github.io/img/flame/sdimitro_snap_unmount3.svg

Adding a table to speed up lookups will help situations where we just construct 
an nvlist
(like the scenario above), in addition to regular lookups and removals.
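
For a concrete picture of the construction cost, here is a minimal userland
sketch (a hypothetical driver, not the actual test program) that hits the
NV_UNIQUE_NAME linear search on every add:
```
#include <stdio.h>
#include <libnvpair.h>

int
main(void)
{
	nvlist_t *nvl;
	char name[64];

	if (nvlist_alloc(&nvl, NV_UNIQUE_NAME, 0) != 0)
		return (1);

	/*
	 * Each nvlist_add_*() call scans the existing pairs to enforce
	 * uniqueness, so building N entries costs O(N^2) without the
	 * hash-table - exactly the pattern in the flamegraph above.
	 */
	for (int i = 0; i < 10000; i++) {
		(void) snprintf(name, sizeof (name),
		    "testpool/fs%d@snap", i);
		if (nvlist_add_boolean(nvl, name) != 0)
			break;
	}

	nvlist_free(nvl);
	return (0);
}
```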

= What this patch does

In this diff we've implemented a hash-table on top of the nvlist code that 
converts most nvlist operations from O(number of entries) to O(1)* (the star 
denotes amortized time, as the hash-table grows and shrinks depending on the 
number of entries - plain lookup is strictly O(1)).

= Performance Analysis

To analyze the performance improvement I used the setup from the snapshot 
deletion issue mentioned above in the Motivation section. Basically I created 
10K filesystems with one snapshot each and then used the API of libzfs_core to 
pass down an nvlist of all the snapshots to have them deleted. The reason I 
used my own driver program was to get clean performance results of what 
actually happens in the kernel. The flamegraphs and wall clock times mentioned 
below were gathered from the start to the end of the driver program's run. 
Between trials the testpool used was completely destroyed, the system was 
rebooted, and the testpool was completely recreated. The reason for this dance 
was to get consistent results.

== Results (before patch):
=== Sampling Flamegraphs
[Trial 1] https://sdimitro.github.io/img/flame/DLPX-53417/trial-A.svg
[Trial 2] https://sdimitro.github.io/img/flame/DLPX-53417/trial-A2.svg
[Trial 3] https://sdimitro.github.io/img/flame/DLPX-53417/trial-A3.svg
=== Wall clock times (in seconds)
```
[Trial 4]
real    5.3
user    0.4
sys     2.3

[Trial 5]
real    8.2
user    0.4
sys     2.4

[Trial 6]
real    6.0
user    0.5
sys     2.3
```

== Results (after patch):
=== Sampling Flamegraphs
[Trial 1] https://sdimitro.github.io/img/flame/DLPX-53417/trial-Ae.svg
[Trial 2] https://sdimitro.github.io/img/flame/DLPX-53417/trial-A2e.svg
[Trial 3] https://sdimitro.github.io/img/flame/DLPX-53417/trial-A3e.svg
=== Wall clock times (in seconds)
```
[Trial 4]
real    4.9
user    0.0
sys     0.9

[Trial 5]
real    3.8
user    0.0
sys     0.9

[Trial 6]
real    3.6
user    0.0
sys     0.9
```

== Analysis
The results between the trials are consistent, so in this section I will only 
talk about the flamegraph results from trial-1 and the wall-clock results from 
trial-4.

From trial-1 we can see that zfs_dev_ioctl() goes from 2,331 to 996 sample 
counts. Specifically, the samples from fnvlist_add_nvlist() and 
spa_history_log_nvl() are almost gone (~500 & ~800 down to 5 & 5 samples), 
leaving zfs_ioc_destroy_snaps() to dominate most samples from zfs_dev_ioctl().

From trial-4 we see that the user time dropped to 0 seconds. I believe the 
consistent 0.4 seconds before my patch was applied was due to my driver program 
constructing the long nvlist of snapshots so it can pass it to the kernel. As 
for the system time, the effect there is clearer (2.3 down to 0.9 seconds).

= Misc
Upstream Bugs: DLPX-53417
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/647

-- Commit Summary --

  * 9580 Add a hash-table on top of nvlist to speed-up operations

-- File Changes --

M usr/src/common/nvpair/nvpair.c (366)
M usr/src/lib/libnvpair/nvpair_json.c (3)
M usr/src/uts/common/sys/nvpair.h (3)
M usr/src/uts/common/sys/nvpair_impl.h (29)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/647.patch
https://github.com/openzfs/openzfs/pull/647.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/647

--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/T2f956000796dad96-Mbe66b9e17204a6849eebc0f7
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9433 Fix ARC hit rate (#606)

2018-05-09 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> + * which otherwise would be skipped for entries in the dbuf cache.
+ */
+void
+arc_buf_access(arc_buf_t *buf)
+{
+   mutex_enter(&buf->b_evict_lock);
+   arc_buf_hdr_t *hdr = buf->b_hdr;
+
+   /*
+* Avoid taking the hash_lock when possible as an optimization.
+* The header must be checked again under the hash_lock in order
+* to handle the case where it is concurrently being released.
+*/
+   if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) {
+   mutex_exit(&buf->b_evict_lock);
+   return;

Looking at the original commit by Brian, someone else raised the same question 
in that PR, and Brian's reply was the following:
```
behlendorf on Dec 28, 2017  Owner
In my local testing I did add an additional kstat here in order to determine if 
including this additional check was worthwhile. Keep in mind this check could 
be dropped entirely since it's safe to rely on the one under the mutex. Based 
on my test results 99.99+% of the released headers were caught here without the 
need to take the hash lock, so I opted to keep it as an optimization to 
minimize any potential lock contention. After determining that I was only 
really interested in how many slipped through since that was the exceptional 
case. So the value is only bumped in the second conditional.
```

At least a comment here explaining the above rationale would be appropriate 
IMHO.
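
For instance, the comment could say something along these lines (illustrative
wording only, summarizing Brian's reply above):
```
/*
 * Checking b_state without the hash_lock is a racy optimization:
 * in testing, 99.99+% of released headers were caught by this
 * check, so we avoid taking the hash lock in the common case.
 * The state is re-checked under the hash_lock below to handle a
 * concurrent release, and the kstat is only bumped there since
 * slipping through is the exceptional case.
 */
```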

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/606#discussion_r187174770
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/T9235f754de980785-M23492e0526f7af044544faa0
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] Correct default list for 'zpool list' (#632)

2018-05-07 Thread Serapheim Dimitropoulos
Hey @grimreaper!

I opened the following illumos bug for this issue: 
https://www.illumos.org/issues/9521

For future reference: whenever we want to open a pull request, we first open a 
bug in https://www.illumos.org/issues if one doesn't exist already. Then we 
reference the illumos bug in the PR (either by putting the link in the 
description, or by prepending the bug ID to the title of the PR and the commit 
message).

We do this because it makes the illumos RTI process easier for us and lets us 
integrate your change into both the OpenZFS and illumos-gate repos.

Thank you again for submitting your change, and if you have any questions about 
opening future PRs for OpenZFS feel free to reach out to the #openzfs slack 
channel at https://openzfs.slack.com/.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/632#issuecomment-387127973
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/Tf2be7570e9a767de-M6575ae100c320847729d5bf1
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] Correct default list for 'zpool list' (#632)

2018-05-01 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.

Thank you for catching this



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/632#pullrequestreview-116694441
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/Tf2be7570e9a767de-Me71f3d1572686ea2e1066d78
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] Limit testing to zloop and zfstest; remove others. (#628)

2018-04-18 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.





-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/628#pullrequestreview-113341219
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/Tcfaacbea281427b2-M852f7f399f0daca13ada17e9
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9485 Optimize possible split block search space (#625)

2018-04-18 Thread Serapheim Dimitropoulos
sdimitro requested changes on this pull request.

Thank you for porting this. I only have one question/suggestion but other than 
that it looks good to me.

>   for (indirect_split_t *is = list_head(&iv->iv_splits);
-   is != NULL; is = list_next(&iv->iv_splits, is))
-   segments++;
+   is != NULL; is = list_next(&iv->iv_splits, is)) {
+   uint64_t is_copies = 0;
+
+   for (int i = 0; i < is->is_children; i++) {

I have a question which may lead to a small optimization, although I may be 
completely wrong.

In this loop we are going through each is_child:
If it has no data we skip it. If it has data we look at the rest of the 
is_children to see if any of them carry the same data as our original is_child 
and mark them as duplicates, and finally we increment is_copies, which we later 
use to get the number of combinations.

The above makes sense with the rest of the logic in this function, but to me it 
sounds like what we call "accurate combinations" is not exactly accurate, as 
the combinations themselves are not unique.

The variable `is_copies` (that is later used to calculate `combinations`) tells 
us how many of the is_children vdevs have their `ic_data` field populated (i.e. 
not NULL). So that got me wondering, what if:
[1] we changed the name of `is_copies` to something like `is_split_versions`
[2] in this loop, instead of just skipping when `ic_data == NULL`, we also skip 
when `ic_duplicate != -1` (meaning we visited this `ic_child` before [in the 
inner loop] and we know that it does not hold a unique version of `ic_data`)
[3] later in the outer loop (right after the inner loop is done) we increment 
`is_split_versions` and use it exactly like `is_copies`.

This should give us the number of unique combinations. Now I know that this 
won't work exactly with the rest of the logic in this function, but what if the 
struct `indirect_split_t` had a field `list_t is_unique_versions` (or some 
similar name) and every time `is_split_versions` is incremented in this loop we 
added the current `is_child` to that list?

Then later in this function we could use this new list without having to check 
for and skip duplicates, because they are never visited. Thoughts? (A rough 
sketch of the counting loop follows.)
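
A rough sketch of the counting part I have in mind, assuming the names from
this patch (`ic_data`, `ic_duplicate`, `is_children`, `abd_cmp()`); treat it
as pseudocode for the suggestion rather than a drop-in replacement:
```
uint64_t is_split_versions = 0;

for (int i = 0; i < is->is_children; i++) {
	indirect_child_t *ic = &is->is_child[i];

	/* skip children with no data or already marked as duplicates */
	if (ic->ic_data == NULL || ic->ic_duplicate != -1)
		continue;

	/* mark any later children that carry identical data */
	for (int j = i + 1; j < is->is_children; j++) {
		indirect_child_t *ic2 = &is->is_child[j];

		if (ic2->ic_data != NULL && ic2->ic_duplicate == -1 &&
		    abd_cmp(ic->ic_data, ic2->ic_data) == 0)
			ic2->ic_duplicate = i;
	}

	/* ic holds a unique version of the segment's data */
	is_split_versions++;
}
```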

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/625#pullrequestreview-113309423
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/Tf29308eb1ca3d052-Me9367b749c94a5faf1071273
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9466 add JSON output support to channel programs (#619)

2018-04-13 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> +# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright 2007 Sun Microsystems, Inc.  All rights reserved.

Yeah you are right, it seems like they were wrong from the ZoL review (should 
have been `...2018 Datto..` instead of `...2007 Sun..`). At this point, I 
wouldn't worry about it :-)

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/619#discussion_r181476307
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/T0465226805877059-M6c0bbb76ceb1c755c8a9aafe
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9466 add JSON output support to channel programs (#619)

2018-04-13 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> +# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source.  A copy is of the CDDL is also available via the Internet
+# at http://www.illumos.org/license/CDDL.
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2018 Datto Inc.
+#
+
+. $STF_SUITE/usr/src/common/zfs/libtest.shlib

I'm sorry, my mistake. I meant the new `cleanup.ksh` and `setup.ksh` have the 
same issue (should be `. $STF_SUITE/include/libtest.shlib` instead of 
`$STF_SUITE/usr/src/common/zfs/libtest.shlib`) - not Makefiles.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/619#discussion_r181475688
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/T0465226805877059-Mf63dcfa23223d944774524d1
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9466 add JSON output support to channel programs (#619)

2018-04-13 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.

Can you add an entry in `usr/src/pkg/manifests/system-test-zfstest.mf` for the 
new test that you added?

Things look good to me overall since I've had a sneak peek of the ZoL review. 
I'd have @jwk404 take a look at the new test case though just in case.

> +# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright 2007 Sun Microsystems, Inc.  All rights reserved.

Copy paste error?

> +# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+# or http://www.opensolaris.org/os/licensing.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+#
+# When distributing Covered Code, include this CDDL HEADER in each
+# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+# If applicable, add the following below this CDDL HEADER, with the
+# fields enclosed by brackets "[]" replaced with your own identifying
+# information: Portions Copyright [] [name of copyright owner]
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright 2007 Sun Microsystems, Inc.  All rights reserved.

Copy-paste error?

> +# Common Development and Distribution License ("CDDL"), version 1.0.
+# You may only use this file in accordance with the terms of version
+# 1.0 of the CDDL.
+#
+# A full copy of the text of the CDDL should have accompanied this
+# source.  A copy is of the CDDL is also available via the Internet
+# at http://www.illumos.org/license/CDDL.
+#
+# CDDL HEADER END
+#
+
+#
+# Copyright (c) 2018 Datto Inc.
+#
+
+. $STF_SUITE/usr/src/common/zfs/libtest.shlib

why `$STF_SUITE/usr/src/common/zfs/libtest.shlib`?
and not `. $STF_SUITE/include/libtest.shlib`?

I saw your makefiles doing that too. Could you maybe change the Makefiles and 
the include in that file to be consistent with the rest of the stuff under 
`cli_root`?

> +"{
+\"return\": {
+\"failed\": {},
+\"succeeded\": {
+\"$TESTDS\": \"filesystem\"
+}
+}
+}")
+typeset -i cnt=0
+typeset cmd
+for cmd in ${pos_cmds[@]}; do
+   log_must zfs program $TESTPOOL $TESTZCP $TESTDS $cmd 2>&1
+   log_must zfs program $TESTPOOL -j $TESTZCP $TESTDS $cmd 2>&1
+   # json.tool is needed to guarantee consistent ordering of fields
+   # sed is needed to trim trailing space in CentOS 6's json.tool output
+   OUTPUT=$(zfs program $TESTPOOL -j $TESTZCP $TESTDS $cmd 2>&1 | 
python -m json.tool | sed 's/[[:space:]]*$//')

[nit] Not sure if we do style checks for shell scripts too but can you maybe 
wrap this line?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/619#pullrequestreview-112048253
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/T0465226805877059-Mf71c8e37ec32e2b89c1f82d0
Delivery options: https://openzfs.topicbox.com/groups


[developer] Re: [openzfs/openzfs] 9464 txg_kick() fails to see that we are quiescing, forcing transacti… (#616)

2018-04-13 Thread Serapheim Dimitropoulos
@avg-I here is the txg.d that I used:
https://gist.github.com/sdimitro/23ed816690faa412ddf0b00ae9cd49e8

and here is another ad hoc dtrace script that I made for dynamically tracing 
txg_kick():
https://gist.github.com/sdimitro/4eac9894d4a109ad5774d0e3b07f20a0

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/616#issuecomment-381162803
--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/T482138142f6adde1-M6d738b0b46083f9a4dc5a306
Delivery options: https://openzfs.topicbox.com/groups


[developer] [openzfs/openzfs] 9238 ZFS Spacemap Encoding V2 (#589)

2018-03-14 Thread Serapheim Dimitropoulos
Reviewed by: Matt Ahrens 
Reviewed by: George Wilson 

Motivation

The current space map encoding has the following disadvantages:
[1] Assuming a 512-byte sector size, each entry can represent a segment of at 
most 16MB.
This makes the encoding very inefficient for large regions of space.
[2] As vdev-wide space maps have started to be used by new features (e.g.
device removal, zpool checkpoint) we've started imposing limits on the
vdevs that can be used with them based on the maximum addressable offset
(currently 64PB for a top-level vdev).

New encoding

The layout can be found at space_map.h and it remains backwards compatible with
the old one. The introduced two-word entry format, besides extending the limits
imposed by the single-entry layout, also includes a vdev field and some extra
padding after its prefix.

The extra padding after the prefix is reserved for future usage (e.g.
new prefixes for future encodings or new fields for flags). The new vdev field
not only makes the space maps more self-descriptive, but also opens the door
for pool-wide space maps (expected to be used in the log spacemap project).

One final important note is that the number of bits used for vdevs in blkptrs
is reduced to 24. That was decided as we don't know of any setups that
use more than 16M vdevs for the time being, and we wanted to fit the vdev field
in the space map. In addition, that gives us some extra bits in dva_t.
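
As a purely illustrative sketch of what packing such a two-word entry looks
like (the shifts and masks below are placeholders; the authoritative field
widths and offsets are defined in space_map.h):
```
/*
 * Placeholder layout: word 0 carries the prefix, run length, and the
 * 24-bit vdev field; word 1 carries the alloc/free type bit and the
 * offset. Do not rely on these shifts - see space_map.h.
 */
static uint64_t
sm2_word0(uint64_t prefix, uint64_t run, uint64_t vdev)
{
	return ((prefix << 62) | (run << 24) | (vdev & 0xffffffULL));
}

static uint64_t
sm2_word1(uint64_t type, uint64_t offset)
{
	return ((type << 63) | (offset & ((1ULL << 63) - 1)));
}
```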

Other references:

The new encoding is also discussed towards the end of the Log Space Map
presentation from 2017's OpenZFS summit.
Link: https://www.youtube.com/watch?v=jj2IxRkl5bQ

Upstream bugs: DLPX-19248
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/589

-- Commit Summary --

  * 9238 ZFS Spacemap Encoding V2

-- File Changes --

M usr/src/cmd/mdb/common/modules/zfs/zfs.c (142)
M usr/src/cmd/zdb/zdb.c (125)
M usr/src/cmd/ztest/ztest.c (7)
M usr/src/common/zfs/zfeature_common.c (6)
M usr/src/common/zfs/zfeature_common.h (1)
M usr/src/lib/libzpool/common/llib-lzpool (1)
M usr/src/man/man5/zpool-features.5 (29)
M 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_discard_busy.ksh
 (6)
M usr/src/uts/common/fs/zfs/metaslab.c (58)
M usr/src/uts/common/fs/zfs/range_tree.c (2)
M usr/src/uts/common/fs/zfs/spa_checkpoint.c (50)
M usr/src/uts/common/fs/zfs/space_map.c (796)
M usr/src/uts/common/fs/zfs/sys/spa.h (12)
M usr/src/uts/common/fs/zfs/sys/space_map.h (114)
M usr/src/uts/common/fs/zfs/vdev.c (2)
M usr/src/uts/common/fs/zfs/vdev_indirect.c (2)
M usr/src/uts/common/fs/zfs/vdev_indirect_mapping.c (9)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/589.patch
https://github.com/openzfs/openzfs/pull/589.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/589

--
openzfs: openzfs-developer
Permalink: 
https://openzfs.topicbox.com/groups/developer/discussions/T37c787f672ffbe0a-Ma6345e177996b5185a6a6531
Delivery options: https://openzfs.topicbox.com/groups


[developer] [openzfs/openzfs] 9166 zfs storage pool checkpoint (#560)

2018-02-21 Thread Serapheim Dimitropoulos
Reviewed by: Matthew Ahrens 
Reviewed by: John Kennedy 
Reviewed by: Dan Kimmel 

Details about the motivation of this feature and its usage can
be found in this blogpost:
https://sdimitro.github.io/post/zpool-checkpoint/

A lightning talk of this feature can be found here:
https://www.youtube.com/watch?v=fPQA8K40jAM

Implementation details can be found in big block comment of
spa_checkpoint.c

Side-changes that are relevant to this commit but not explained
elsewhere:
* renames metaslab trees to be shorter without losing meaning
* space_map_{alloc,truncate}() accept a block size as a
  parameter (see the sketch after this list). The reason is that
  in the current state all space maps that we allocate through
  the DMU use a global tunable (space_map_blksz) which defaults
  to 4KB. This is ok for metaslab space maps in terms of bandwidth
  since they are scattered all over the disk. But for other space
  maps this default is probably not what we want. Examples are
  device removal's vdev_obsolete_sm or vdev_checkpoint_sm from
  this review. Both of these have a 1:1 relationship with each
  vdev and could benefit from a bigger block size.
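
In sketch form, the interface change in the second bullet looks roughly like
this (prototypes paraphrased; the patch itself is authoritative):
```
/*
 * Callers now pass the desired block size explicitly instead of
 * relying on the global space_map_blksz tunable.
 */
uint64_t space_map_alloc(objset_t *os, int blocksize, dmu_tx_t *tx);
void space_map_truncate(space_map_t *sm, int blocksize, dmu_tx_t *tx);
```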

Upstream Bugs: DLPX-49202, DLPX-48936, DLPX-48532, DLPX-46652,
DLPX-51224, DLPX-51469, DLPX-52028, DLPX-53181, DLPX-52524,
DLPX-56750, DLPX-57058, DLPX-57033
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/560

-- Commit Summary --

  * 9166 zfs storage pool checkpoint

-- File Changes --

M usr/src/cmd/mdb/common/modules/zfs/zfs.c (67)
M usr/src/cmd/truss/codes.c (6)
M usr/src/cmd/zdb/zdb.c (847)
M usr/src/cmd/zdb/zdb_il.c (12)
M usr/src/cmd/zpool/zpool_main.c (215)
M usr/src/cmd/ztest/ztest.c (120)
M usr/src/common/zfs/zfeature_common.c (7)
M usr/src/common/zfs/zfeature_common.h (3)
M usr/src/common/zfs/zpool_prop.c (4)
M usr/src/lib/libzfs/common/libzfs.h (7)
M usr/src/lib/libzfs/common/libzfs_pool.c (43)
M usr/src/lib/libzfs/common/libzfs_util.c (29)
M usr/src/lib/libzfs/common/mapfile-vers (2)
M usr/src/lib/libzfs_core/common/libzfs_core.c (68)
M usr/src/lib/libzfs_core/common/libzfs_core.h (3)
M usr/src/lib/libzfs_core/common/mapfile-vers (9)
M usr/src/man/man1m/zdb.1m (5)
M usr/src/man/man1m/zpool.1m (93)
M usr/src/man/man5/zpool-features.5 (20)
M usr/src/pkg/manifests/system-test-zfstest.mf (52)
A usr/src/test/zfs-tests/cmd/randwritecomp/Makefile (19)
A usr/src/test/zfs-tests/cmd/randwritecomp/randwritecomp.c (192)
M usr/src/test/zfs-tests/include/commands.cfg (1)
M usr/src/test/zfs-tests/include/libtest.shlib (20)
M usr/src/test/zfs-tests/runfiles/delphix.run (11)
A usr/src/test/zfs-tests/runfiles/longevity.run (23)
M usr/src/test/zfs-tests/runfiles/omnios.run (13)
M usr/src/test/zfs-tests/runfiles/openindiana.run (13)
M usr/src/test/zfs-tests/tests/functional/cli_root/zdb/zdb_001_neg.ksh (4)
M 
usr/src/test/zfs-tests/tests/functional/cli_root/zpool_import/import_rewind_config_changed.ksh
 (36)
A usr/src/test/zfs-tests/tests/functional/pool_checkpoint/Makefile (21)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_after_rewind.ksh
 (55)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_big_rewind.ksh
 (57)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_capacity.ksh 
(90)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_conf_change.ksh
 (43)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_discard.ksh 
(53)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_discard_busy.ksh
 (106)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_discard_many.ksh
 (52)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_indirect.ksh 
(59)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_invalid.ksh 
(80)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_lun_expsz.ksh
 (61)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_open.ksh (48)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_removal.ksh 
(72)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_rewind.ksh 
(49)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_ro_rewind.ksh
 (57)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_sm_scale.ksh 
(74)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_twice.ksh 
(40)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_vdev_add.ksh 
(63)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_zdb.ksh (80)
A 
usr/src/test/zfs-tests/tests/functional/pool_checkpoint/checkpoint_zhack_feat.ksh
 (66)
A usr/src/test/zfs-tests/tests/functional/pool_checkpoint/cleanup.ksh (23)
A 
usr/src/test/zfs-tests/

[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-14 Thread Serapheim Dimitropoulos
@sdimitro pushed 1 commit.

a23de99  get rid of redundant check


-- 
You are receiving this because you are subscribed to this thread.
View it on GitHub:
https://github.com/openzfs/openzfs/pull/544/files/667119b38ecaf1f1a540975a8b63a7189ee9eaeb..a23de999d1a50e209c43bcdaadb8a29029c50e53

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-M06d729e633b995fefc2486ad
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-14 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -198,6 +204,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr, 
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
 }
 
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a number or not (e.g. it could
+ * be a byte array, or a struct) as long as the size of the write
+ * is one of the aforementioned ones.
+ */
+static mdb_tgt_addr_t
+write_var_uint(mdb_tgt_as_t as, mdb_tgt_addr_t addr, uint64_t val, size_t size,
+uint_t rdback)
+{
+   if (size > MDB_UINT_WRITE_MAXBYTES) {

I agree. Since that is pretty easy to test and we end up getting rid of the 
extra constant I can re-iterate once more.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/544#discussion_r168258623
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-M7d71c872e189a14a9a01d4fc
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-14 Thread Serapheim Dimitropoulos
@sdimitro pushed 1 commit.

667119b  final feedback


-- 
You are receiving this because you are subscribed to this thread.
View it on GitHub:
https://github.com/openzfs/openzfs/pull/544/files/44ba0bf8d06902b16951721e34cb1e98f8ff2ac6..667119b38ecaf1f1a540975a8b63a7189ee9eaeb

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-M4a1567a05259d0c6d5a21492
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-13 Thread Serapheim Dimitropoulos
Review updated. @rmustacc if there is no further feedback, do we have your 
approval for this review?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/544#issuecomment-365305966
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-M17128fb97715f1bc391d3151
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-12 Thread Serapheim Dimitropoulos
@sdimitro pushed 1 commit.

44ba0bf  tab in define macro


-- 
You are receiving this because you are subscribed to this thread.
View it on GitHub:
https://github.com/openzfs/openzfs/pull/544/files/a96b5daac5b0059e9074f0b59b8f9529d2157797..44ba0bf8d06902b16951721e34cb1e98f8ff2ac6

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-Me67d322093ab2a4c6e50299d
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-12 Thread Serapheim Dimitropoulos
@sdimitro pushed 1 commit.

a96b5da  apply feedback


-- 
You are receiving this because you are subscribed to this thread.
View it on GitHub:
https://github.com/openzfs/openzfs/pull/544/files/f17764a59fa855e38e6044d0020648f07dd07599..a96b5daac5b0059e9074f0b59b8f9529d2157797

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-M48b5f4a0ef5423040df0fa4a
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-10 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -198,6 +198,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr, 
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
 }
 
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a number or not (e.g. it could
+ * be a byte array, or a struct) as long as the size of the write
+ * is one of the aforementioned ones.
+ */
+static mdb_tgt_addr_t
+write_var_uint(mdb_tgt_as_t as, mdb_tgt_addr_t addr, uint64_t val, size_t size,
+uint_t rdback)
+{
+   if (size > sizeof (uintptr_t)) {

Makes sense. I'll update the commit. Thank you for reviewing this.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/544#discussion_r167407787
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-Mcb3ceeb7dedcceebb203ae1c
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 9091 Add mdb smart write (#544)

2018-02-10 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -198,6 +198,100 @@ write_uint64(mdb_tgt_as_t as, mdb_tgt_addr_t addr, 
> uint64_t n, uint_t rdback)
return (addr + sizeof (n));
 }
 
+/*
+ * Writes to objects of size 1, 2, 4, or 8 bytes. The function
+ * doesn't care if the object is a number or not (e.g. it could
+ * be a byte array, or a struct) as long as the size of the write
+ * is one of the aforementioned ones.
+ */
+static mdb_tgt_addr_t
+write_var_uint(mdb_tgt_as_t as, mdb_tgt_addr_t addr, uint64_t val, size_t size,
+uint_t rdback)
+{
+   if (size > sizeof (uintptr_t)) {

You are right. The check here is to ensure that we don't write more bytes than 
the longest supported integer.

Looking into the different integer types, it seems like `uintmax_t` would be 
the right type for this. Does this sound reasonable to you?
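
In other words, something along these lines (a sketch, not the final diff):
```
/*
 * Bound forced writes by the widest supported integer type rather
 * than the target's pointer width, so 8-byte writes also work on
 * 32-bit targets.
 */
if (size > sizeof (uintmax_t)) {
	mdb_warn("write size %lu is larger than the largest "
	    "supported integer size\n", (ulong_t)size);
	return (addr);
}
```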

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/544#discussion_r167407281
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta82fe96bb714cae3-Mf451df75631a9a9e19aafcad
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] Add mdb smart write (#544)

2018-02-10 Thread Serapheim Dimitropoulos
Reviewed by: Matthew Ahrens 
Reviewed by: Paul Dagnelie 
Reviewed by: Prashanth Sreenivasa 

Currently, if you want to write to a kernel tunable using
mdb, you need to use the /Z or /W flags to write 8 or 4
bytes respectively. If you use the wrong one, you will either
overwrite only part of the tunable, or overrun the end of the
tunable and zero out the next 4 bytes of memory. That second
option in particular can be disastrous. Given that MDB knows
the size of the tunables because it has CTF data, it would be
nice if MDB supported a "smart write" format string that
selected the size of the write based on the size of the thing
being written to. It should fail or require a size be specified
if we're writing to an area without a specific size.

A new dcmd and a new format for writing are added to mdb.

New format:
- Looks like this -> [address]/z [value]
- Writes the value starting at the specified address
- The size of the write is inferred by the CTF data of the
  symbol at the address provided
- If no CTF data exists at the given address, the write fails

New dcmd:
- Looks like this -> [address]::write [value]
- Works exactly like the format specifier above
- For physical writes specify the -p argument
  (i.e. [address]\z [value])
- For object file writes specify the -o argument
  (i.e. [address]?z [value])
- If no CTF data is found and the write fails, the user
  can force a write with -l [size]
- The allowed lengths of a forced write are 1, 2, 4,
  and 8 bytes.

Both of the above work with integer, pointer, and enum types
(except for the -l option in the dcmd, which doesn't care about the type).

Upstream Bugs: DLPX-48141
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/544

-- Commit Summary --

  * Add mdb smart write

-- File Changes --

M usr/src/cmd/mdb/common/mdb/mdb_cmds.c (192)
M usr/src/cmd/mdb/common/mdb/mdb_fmt.c (4)
M usr/src/cmd/mdb/common/mdb/mdb_lex.l (3)
M usr/src/man/man1/mdb.1 (2)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/544.patch
https://github.com/openzfs/openzfs/pull/544.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/544

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tab9df74196897556-M3a6e2ea09de07e449b354460
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 9080 recursive enter of vdev_indirect_rwlock from vdev_indirect_remap() (#542)

2018-02-08 Thread Serapheim Dimitropoulos
Reviewed by: Matthew Ahrens 
Reviewed by: George Wilson 

Description

A scenario came up where a callback executed by vdev_indirect_remap() on a 
vdev calls vdev_indirect_remap() on the same vdev and tries to reacquire 
vdev_indirect_rwlock, which was already acquired by the first call to 
vdev_indirect_remap(). The specific scenario is that we want to remap a block 
pointer that is snapshotted but whose dataset's remap_deadlist is not cached. 
So in order to add it we issue a read through vdev_indirect_remap() on the 
same vdev, which brings up the aforementioned issue.

Potential Solutions

1] Change vdev_indirect_rwlock to be recursive. Our locking strategy remains 
the same.
2] Introduce a mechanism similar to ds_pending_deadlist (of ds_deadlist) for 
ds_remap_deadlist.
3] Grab the lock, copy the relevant indirect mapping entries to a buffer, and 
drop it before
   doing any further processing.

Solution Proposed by this change

We've decided to go with the third option mentioned above, the main advantages 
of it being the following:
- we hold the lock for less time
- conceptually simpler in terms of locking. The lock only protects the vdev's 
indirect mapping
  and nothing else
- more flexible as a locking strategy in general. The lock no longer dictates 
what the callbacks
  should and should not do in terms of I/O
- if not less complex than solution [2], then definitely more localized

Related Bugs: DLPX-49245
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/542

-- Commit Summary --

  * 9080 recursive enter of vdev_indirect_rwlock from vdev_indirect_remap()

-- File Changes --

M usr/src/uts/common/fs/zfs/vdev_indirect.c (108)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/542.patch
https://github.com/openzfs/openzfs/pull/542.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/542

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tca43f609ceed4c33-M5bb02fe8c58447210e74aa9f
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 9079 race condition in starting and ending condensing thread for indirect vdevs (#541)

2018-02-08 Thread Serapheim Dimitropoulos
Reviewed by: Matt Ahrens 
Reviewed by: Pavel Zakharov 

The timeline of the race condition is the following:
[1] Thread A is about to finish condensing the first vdev in 
spa_condense_indirect_thread(), so it calls the 
spa_condense_indirect_complete_sync() sync task, which sets the 
spa_condensing_indirect field to NULL. Waiting for the sync task to finish, 
thread A sleeps until the txg is done. When this happens, thread A will 
acquire spa_async_lock and set spa_condense_thread to NULL.
[2] While thread A waits for the txg to finish, thread B, which is running 
spa_sync(), checks whether it should condense the second vdev in 
vdev_indirect_should_condense() by checking the spa_condensing_indirect field, 
which was set to NULL by spa_condense_indirect_thread() from thread A. So it 
goes on and tries to spawn a new condensing thread in 
spa_condense_indirect_start_sync(), and the assertion there fails because 
thread A has not yet set spa_condense_thread to NULL (which is basically the 
last thing it does before returning).

The main issue here is that we rely on both spa_condensing_indirect and 
spa_condense_thread to signify whether a condensing thread is running. Ideally 
we would only use one throughout the codebase. In addition, for managing 
spa_condense_thread we currently use spa_async_lock, which basically ties 
condensing to scrubbing when it comes to pausing and resuming those actions 
during spa export.

This commit introduces the ZTHR infrastructure: threads that are created during 
spa_load()/spa_create() and exist until we export or destroy the pool. ZTHRs 
sleep the majority of the time, until they are notified to wake up and do some 
predefined type of work.

In the context of the current bug, a zthr does the condensing of indirect 
mappings, replacing the older code that used bare kthreads. When a pool is 
created, the condensing zthr is spawned but sleeps right away, until it is 
awakened by a signal from spa_sync(). If an existing pool is loaded, the 
condensing zthr checks whether there is anything to condense before going to 
sleep, in case we were condensing mappings in the pool before it got exported.
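
A minimal sketch of the resulting pattern, using the interface from
sys/zthr.h (the condense-specific bodies are elided and illustrative):
```
/* "is there work?" - consulted before the zthr runs or goes to sleep */
static boolean_t
spa_condense_check(void *arg, zthr_t *zthr)
{
	spa_t *spa = arg;
	return (spa->spa_condensing_indirect != NULL);
}

/* the actual work; should poll zthr_iscancelled() while running */
static int
spa_condense_work(void *arg, zthr_t *zthr)
{
	/* ... condense the indirect mapping ... */
	return (0);
}

/* at spa_load()/spa_create() time */
spa->spa_condense_zthr =
    zthr_create(spa_condense_check, spa_condense_work, spa);

/* from spa_sync(), when new work shows up */
zthr_wakeup(spa->spa_condense_zthr);
```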

The benefits of this solution are the following:
- The current bug is fixed
- spa_condensing_indirect is the sole indicator of whether we are currently 
condensing or not
- condensing is more decoupled from the spa_async_thread related functionality.

As a final note, this commit also sets up the path for upstreaming other 
features that use the ZTHR code, like zpool checkpoint and fast clone deletion.

Related Bugs: DLPX-49690, DLPX-50734
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/541

-- Commit Summary --

  * 9079 race condition in starting and ending condensing thread for indirect 
vdevs

-- File Changes --

M usr/src/uts/common/Makefile.files (3)
M usr/src/uts/common/fs/zfs/spa.c (49)
M usr/src/uts/common/fs/zfs/sys/spa_impl.h (3)
M usr/src/uts/common/fs/zfs/sys/vdev_removal.h (2)
A usr/src/uts/common/fs/zfs/sys/zthr.h (52)
M usr/src/uts/common/fs/zfs/vdev_indirect.c (93)
A usr/src/uts/common/fs/zfs/zthr.c (319)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/541.patch
https://github.com/openzfs/openzfs/pull/541.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/541

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T01ad831dada0a527-M905e5afca6c9ae35166ca955
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8862 ZFS Channel Programs - List Bookmarks & Holds (#499)

2017-11-28 Thread Serapheim Dimitropoulos
Closed #499.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/499#event-1361777250
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T50d814f15619f145-M5ce4cb2963066d34a3cbc8b3
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8862 ZFS Channel Programs - List Bookmarks & Holds (#499)

2017-11-27 Thread Serapheim Dimitropoulos
This adds two new features to channel programs --
listing bookmarks on a filesystem and listing holds on a snapshot.

The man page of channel programs has also been updated to explain
why zfs.list.properties() returns 3 items (the property name, its value,
and its source) for each iteration, rather than just the property name
as the current version of the docs might make you expect.

One final code cleanup/side fix is renaming zfs.list.properties to
zfs.list.user_properties, making the operation more consistent with its
sibling zfs.list.system_properties.
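
As a rough usage sketch, a consumer could drive the new listing from C
through libzfs_core like this (the lzc_channel_program() signature is
assumed from libzfs_core.h, and the pool/dataset names are made up):

    #include <libzfs_core.h>
    #include <libnvpair.h>

    /* Lua source using the zfs.list.bookmarks iterator added here */
    static const char *list_bookmarks_zcp =
        "results = {}\n"
        "i = 1\n"
        "for bm in zfs.list.bookmarks('rpool/fs') do\n"
        "    results[i] = bm\n"
        "    i = i + 1\n"
        "end\n"
        "return results\n";

    int
    list_bookmarks(void)
    {
            nvlist_t *args = fnvlist_alloc();
            nvlist_t *out = NULL;
            int err = lzc_channel_program("rpool", list_bookmarks_zcp,
                10 * 1000 * 1000 /* instruction limit */,
                10 * 1024 * 1024 /* memory limit */, args, &out);
            if (err == 0)
                    dump_nvlist(out, 8);
            fnvlist_free(args);
            if (out != NULL)
                    fnvlist_free(out);
            return (err);
    }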

Reviewed by: Matt Ahrens 
Reviewed by: Serapheim Dimitropoulos 

Upstream Bugs: DLPX-50618, DLPX-50362, DLPX-52112, DLPX-52115
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/499

-- Commit Summary --

  * 8862 ZFS Channel Programs - List Bookmarks & Holds

-- File Changes --

M usr/src/man/man1m/zfs-program.1m (23)
M usr/src/pkg/manifests/system-test-zfstest.mf (6)
M usr/src/test/zfs-tests/include/libtest.shlib (12)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_bookmarks.ksh
 (120)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.list_holds.ksh
 (121)
M usr/src/uts/common/fs/zfs/zcp_iter.c (253)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/499.patch
https://github.com/openzfs/openzfs/pull/499.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/499

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T50d814f15619f145-Mc12bcd105d70fa572fc9f730
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8565 zpool dumps core trying to destroy unavail/faulted pool and try export it (#473)

2017-09-26 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



>	for (int i = 0; i < sharearg->zhandle_len; ++i) {
		zfs_handle_t *fs_handle =
		    ((zfs_handle_t **)(sharearg->zhandle_arr))[i];
		if (fs_handle == NULL) {
+			/* Free non-null elements of the paths array */
+			for (int free_idx = 0; free_idx < *paths_len;
+			    ++free_idx) {
+				if ((*paths)[free_idx] != NULL)
+					free((*paths)[free_idx]);
+			}
+			free(*paths);
+			*paths = NULL;
+			*paths_len = 0;
			return (SA_SYSTEM_ERR);
		}
		(*paths)[i] = malloc(sizeof (char) * ZFS_MAXPROPLEN);
		err |= sa_get_zfs_share_common(handle, fs_handle, (*paths)[i],

Review updated

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/473#discussion_r141176755
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tcc0964d15893ff46-Mfd7da22f1daf160c45740f70
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8565 zpool dumps core trying to destroy unavail/faulted pool and try export it (#473)

2017-09-26 Thread Serapheim Dimitropoulos
@sdimitro pushed 1 commit.

24bea3d  Feedback from Igor Kozhukhov 


-- 
You are receiving this because you are subscribed to this thread.
View it on GitHub:
https://github.com/openzfs/openzfs/pull/473/files/724ba931a6a892b808460d3bcf5f0e155d5a7479..24bea3d1ddf573a8dbcc8cb06662421b5a71fe7f

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tcc0964d15893ff46-Me29b720d3b5c556d6f67a4c8
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8565 zpool dumps core trying to destroy unavail/faulted pool and try export it (#473)

2017-09-26 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



>	for (int i = 0; i < sharearg->zhandle_len; ++i) {
		zfs_handle_t *fs_handle =
		    ((zfs_handle_t **)(sharearg->zhandle_arr))[i];
		if (fs_handle == NULL) {
+			/* Free non-null elements of the paths array */
+			for (int free_idx = 0; free_idx < *paths_len;
+			    ++free_idx) {
+				if ((*paths)[free_idx] != NULL)
+					free((*paths)[free_idx]);
+			}
+			free(*paths);
+			*paths = NULL;
+			*paths_len = 0;
			return (SA_SYSTEM_ERR);
		}
		(*paths)[i] = malloc(sizeof (char) * ZFS_MAXPROPLEN);
		err |= sa_get_zfs_share_common(handle, fs_handle, (*paths)[i],

You are right! It seems that this was somehow missed.

That said, there shouldn't be any problem, as this function is only called by
`sa_init_impl()` and its result is basically discarded. But I do agree that
this is confusing.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/473#discussion_r141172980
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tcc0964d15893ff46-M0bdbd5e871b6101a1d9466fe
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8565 zpool dumps core trying to destroy unavail/faulted pool and try export it (#473)

2017-09-26 Thread Serapheim Dimitropoulos
Reviewed by: Matt Ahrens 
Reviewed by: Serapheim Dimitropoulos 

There are cases where sa_get_one_zfs_share() of libshare doesn't
fill in all of paths[] due to a specific handle being NULL, and
returns SA_SYSTEM_ERR.

Currently, regardless of whether this error is returned or not,
we always free each of those paths in sa_init_impl(), which
can lead to a double-free fault.

Since sa_get_one_zfs_share() is also the function that allocates
the memory that will later be pointed to by paths[], it should
also be responsible for freeing any non-null elements of that
array in the case of error, and it should set paths_len to 0 so
the caller doesn't attempt to free anything again.
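
The fix follows the standard "free what you allocated so far, then reset
the out-parameters" pattern. In isolation (a self-contained model, not
the libshare code itself) it looks like this:

    #include <stdlib.h>

    /*
     * On error, the allocating function frees every element it
     * already allocated and zeroes the out-parameters, so the
     * caller cannot double-free them.
     */
    static int
    fill_paths(char ***paths, int *paths_len, int n)
    {
            *paths = calloc(n, sizeof (char *));
            if (*paths == NULL)
                    return (-1);
            *paths_len = n;
            for (int i = 0; i < n; i++) {
                    (*paths)[i] = malloc(64);
                    if ((*paths)[i] == NULL) {
                            for (int j = 0; j < i; j++)
                                    free((*paths)[j]);
                            free(*paths);
                            *paths = NULL;
                            *paths_len = 0; /* caller frees nothing */
                            return (-1);
                    }
            }
            return (0);
    }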

Upstream bugs: DLPX-53732
Upstream Original Title: libshare can cause free of non-allocated space when 
pools are faulted
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/473

-- Commit Summary --

  * 8565 zpool dumps core trying to destroy unavail/faulted pool and try export 
it

-- File Changes --

M usr/src/lib/libshare/common/libshare_zfs.c (15)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/473.patch
https://github.com/openzfs/openzfs/pull/473.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/473

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tcc0964d15893ff46-M5dfabd3053faf45431f6d836
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8677 Open-Context Channel Programs (#469)

2017-09-22 Thread Serapheim Dimitropoulos
Reviewed by: Matt Ahrens 
Reviewed by: Chris Williamson 
Reviewed by: Pavel Zakharov 

We want to be able to run channel programs outside of syncing context.
This would greatly improve performance for channel programs that just gather
information, as they won't have to wait for syncing context anymore.

=== What is implemented?

This feature introduces the following:
- A new command line flag in "zfs program" to specify our intention
  to run in open context. (The -n option)
- A new flag/option within the channel program ioctl which selects
  the context.
- Appropriate error handling whenever we try a channel program in
  open-context that contains zfs.sync* expressions.
- Documentation for the new feature in the manual pages.

=== How do we handle zfs.sync functions in open context?

When the interpreter finds such a function while we are running in open
context, we abort the script and emit a descriptive runtime error. For
example, given the script below ...

arg = ...
fs = arg["argv"][1]
err = zfs.sync.destroy(fs)
msg = "destroying " .. fs .. " err=" .. err
return msg

if we run it in open context, we will get back the following error:

Channel program execution failed:
[string "channel program"]:3: running functions from the zfs.sync submodule 
requires passing sync=TRUE to lzc_channel_program() (i.e. do not specify the 
"-n" command line argument)
stack traceback:
[C]: in function 'destroy'
[string "channel program"]:3: in main chunk

=== What about testing?

We've introduced new wrappers for all channel program tests that
run each channel program in both modes (standard & open-context) and
expect the appropriate behavior depending on whether the program uses
the zfs.sync module.
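
At the libzfs_core level, selecting the context presumably surfaces as a
separate entry point or flag; a sketch of the intended usage (the
lzc_channel_program_nosync() name is an assumption based on the
libzfs_core.h and mapfile changes in this PR):

    nvlist_t *args = fnvlist_alloc();
    nvlist_t *out = NULL;

    /* read-only program: no zfs.sync.* calls, safe in open context */
    int err = lzc_channel_program_nosync("rpool",
        "return zfs.exists('rpool/fs')",
        10 * 1000 * 1000, 10 * 1024 * 1024, args, &out);

    /*
     * Had the program called zfs.sync.destroy(), the interpreter
     * would have aborted it with the runtime error shown above.
     */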

Upstream Bug: DLPX-47799
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/469

-- Commit Summary --

  * 8677 Open-Context Channel Programs

-- File Changes --

M usr/src/cmd/zfs/zfs_main.c (18)
M usr/src/lib/libzfs/common/libzfs_dataset.c (2)
M usr/src/lib/libzfs_core/common/libzfs_core.c (51)
M usr/src/lib/libzfs_core/common/libzfs_core.h (6)
M usr/src/lib/libzfs_core/common/mapfile-vers (1)
M usr/src/man/man1m/zfs-program.1m (9)
M usr/src/man/man1m/zfs.1m (11)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/channel_common.kshlib 
(181)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.return_large.ksh
 (2)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.destroy_fs.ksh
 (4)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.destroy_snap.ksh
 (4)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.promote_conflict.ksh
 (4)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.promote_multiple.ksh
 (4)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.promote_simple.ksh
 (4)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_mult.ksh
 (2)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_one.ksh
 (2)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_destroy.ksh
 (2)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_neg.ksh
 (3)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_recursive.ksh
 (2)
M 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_simple.ksh
 (2)
M usr/src/uts/common/fs/zfs/dsl_destroy.c (1)
M usr/src/uts/common/fs/zfs/sys/zcp.h (28)
M usr/src/uts/common/fs/zfs/zcp.c (142)
M usr/src/uts/common/fs/zfs/zcp_synctask.c (43)
M usr/src/uts/common/fs/zfs/zfs_ioctl.c (6)
M usr/src/uts/common/sys/fs/zfs.h (1)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/469.patch
https://github.com/openzfs/openzfs/pull/469.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/469

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T7aeda5c8d43adc6a-Meb1e7c29df53d77832c7d9e8
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8661 remove "zil-cw2" dtrace probe (#468)

2017-09-21 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.





-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/468#pullrequestreview-64402335
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T7b937fcc12a37f75-M007c8479f13ff5bdc6afeb5f
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8605 zfs channel programs: zfs.exists undocumented and non-working (#460)

2017-09-01 Thread Serapheim Dimitropoulos
Reviewed by: Paul Dagnelie 
Reviewed by: Dan Kimmel 
Reviewed by: Matt Ahrens 

zfs.exists() in channel programs doesn't return any result, and should
have a man page entry. This patch corrects zfs.exists so that it
returns a value indicating whether the dataset exists. It also adds
documentation for it in the man page.

Upstream Bugs: DLPX-50908
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/460

-- Commit Summary --

  * 8605 zfs channel programs: zfs.exists undocumented and non-working

-- File Changes --

M usr/src/man/man1m/zfs-program.1m (12)
M usr/src/pkg/manifests/system-test-zfstest.mf (5)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.exists.ksh 
(45)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.exists.zcp 
(26)
M usr/src/uts/common/fs/zfs/zcp.c (2)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/460.patch
https://github.com/openzfs/openzfs/pull/460.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/460

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta7ae14deb6fcaf50-M365dcd3fbd95a44270611f33
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8605 zfs channel programs: zfs.exists undocumented and non-working (#457)

2017-09-01 Thread Serapheim Dimitropoulos
Closed #457.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/457#event-1232174723
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T987f8adbf76ec7f4-Md5aaaecb78c86f33b8c81e13
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8604 Avoid unnecessary work search in VFS when unmounting snapshots (#459)

2017-09-01 Thread Serapheim Dimitropoulos
Reviewed by: Matt Ahrens 
Reviewed by: George Wilson 

Every time we want to unmount a snapshot (which happens during snapshot
deletion or renaming) we unnecessarily iterate through all the
mountpoints in the VFS layer (see zfs_get_vfs).

The current patch completely gets rid of that code and changes
the approach while keeping the behavior of that code path the
same. Specifically, it puts a hold on the dataset/snapshot and
gets its vfs resource reference directly, instead of linearly
searching for it. If that reference exists, we attempt to
unmount it.

With the above change, it became obvious that the nvlist
manipulations that we do (add_boolean and add_nvlist) take a
significant amount of time ensuring the uniqueness of every new
element, even though they don't have to. Thus, we updated the
patch so those nvlists no longer enforce the uniqueness of
their elements.
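
In practice that means allocating those nvlists without the
NV_UNIQUE_NAME flag, since every nvlist_add_* call on a unique-name
list scans for an existing element of the same name. Roughly (sketch):

    /* before: O(n) duplicate scan on every insertion */
    VERIFY0(nvlist_alloc(&nvl, NV_UNIQUE_NAME, KM_SLEEP));

    /*
     * after: no duplicate scan; the names we add are already
     * unique, so nothing is lost
     */
    VERIFY0(nvlist_alloc(&nvl, 0, KM_SLEEP));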

A more complete analysis of the problem solved by this patch
can be found below:
https://sdimitro.github.io/post/snap-unmount-perf/

Upstream Bugs: DLPX-53221
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/459

-- Commit Summary --

  * 8604 Avoid unnecessary work search in VFS when unmounting snapshots

-- File Changes --

M usr/src/uts/common/fs/zfs/dsl_destroy.c (18)
M usr/src/uts/common/fs/zfs/sys/zfs_ioctl.h (5)
M usr/src/uts/common/fs/zfs/zfs_ioctl.c (67)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/459.patch
https://github.com/openzfs/openzfs/pull/459.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/459

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tff1a200abd9112b9-M6ee9f9ebb6f819ab999d4cbc
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 7812 remove more gender specific language (#453)

2017-09-01 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.

Thank you for correcting the additional cases that we missed :-)



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/453#pullrequestreview-60256630
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T67e8439be4e674f1-Mabf86a6d4bfd6f05b9e1b4e0
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8600 ZFS channel programs - snapshot (#458)

2017-09-01 Thread Serapheim Dimitropoulos
Reviewed by: Matthew Ahrens 
Reviewed by: John Kennedy 
Reviewed by: Brad Lewis 

ZFS channel programs should be able to create snapshots.
In addition to the base snapshot functionality, this entails extra logic to
handle edge cases which were formerly not possible, such as creating and then
destroying a snapshot in the same transaction group sync.
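
As an illustration of that edge case, a channel program along these
lines creates and then destroys a snapshot within a single sync (a
sketch; the dataset name is made up):

    /* Lua source for the channel program */
    static const char *snap_then_destroy_zcp =
        "err = zfs.sync.snapshot('rpool/fs@tmp')\n"
        "if err ~= 0 then return err end\n"
        "err = zfs.sync.destroy('rpool/fs@tmp')\n"
        "return err\n";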

Upstream Bugs: DLPX-43641
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/458

-- Commit Summary --

  * 8600 ZFS channel programs - snapshot

-- File Changes --

M usr/src/lib/libzfs_core/common/libzfs_core.c (11)
M usr/src/man/man1m/zfs-program.1m (13)
M usr/src/pkg/manifests/system-test-zfstest.mf (30)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.return_large.ksh
 (54)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.return_large.zcp
 (24)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_destroy.ksh
 (39)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_destroy.zcp
 (24)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_neg.ksh
 (44)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_neg.zcp
 (35)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_recursive.ksh
 (61)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_recursive.zcp
 (28)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_simple.ksh
 (40)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.snapshot_simple.zcp
 (26)
M usr/src/uts/common/fs/zfs/dsl_dataset.c (11)
M usr/src/uts/common/fs/zfs/sys/dsl_dataset.h (9)
M usr/src/uts/common/fs/zfs/sys/zcp.h (4)
M usr/src/uts/common/fs/zfs/zcp.c (9)
M usr/src/uts/common/fs/zfs/zcp_global.c (7)
M usr/src/uts/common/fs/zfs/zcp_synctask.c (71)
M usr/src/uts/common/fs/zfs/zfs_ioctl.c (4)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/458.patch
https://github.com/openzfs/openzfs/pull/458.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/458

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Te12f2d5f2a868884-M70dcf4be21ef83969670b8a0
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8605 zfs channel programs: zfs.exists undocumented and non-working (#457)

2017-09-01 Thread Serapheim Dimitropoulos
@stevenh It is zfs.exists() (the method used by channel programs). Thanks for
catching that.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/457#issuecomment-326686408
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T987f8adbf76ec7f4-M098c533afc07cce6b5f0cf1f
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8605 zfs channel programs: zcp.exists undocumented and non-working (#457)

2017-09-01 Thread Serapheim Dimitropoulos
Reviewed by: Paul Dagnelie 
Reviewed by: Dan Kimmel 
Reviewed by: Matt Ahrens 

zfs.exists() in channel programs doesn't return any result, and should
have a man page entry. This patch corrects zcp.exists so that it
returns a value indicating whether the dataset exists. It also adds
documentation for it in the man page.

Upstream Bugs: DLPX-50908
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/457

-- Commit Summary --

  * 8605 zfs channel programs: zcp.exists undocumented and non-working

-- File Changes --

M usr/src/man/man1m/zfs-program.1m (11)
M usr/src/pkg/manifests/system-test-zfstest.mf (5)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.exists.ksh 
(45)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/lua_core/tst.exists.zcp 
(26)
M usr/src/uts/common/fs/zfs/zcp.c (2)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/457.patch
https://github.com/openzfs/openzfs/pull/457.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/457

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tf3b2a52794ff887a-M62417cc69f968368ac26cb4b
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8602 remove unused "dp_early_sync_tasks" field from "dsl_pool" structure (#455)

2017-09-01 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.





-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/455#pullrequestreview-60180273
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Td1b60d9ae8200f8c-Mfd61aca258c4031c6a8c5d47
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8592 ZFS channel programs - rollback (#454)

2017-09-01 Thread Serapheim Dimitropoulos
Reviewed by: Chris Williamson 
Reviewed by: Matthew Ahrens 

ZFS channel programs should be able to perform a rollback.

Upstream Bugs: DLPX-43642
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/454

-- Commit Summary --

  * 8592 ZFS channel programs - rollback

-- File Changes --

M usr/src/man/man1m/zfs-program.1m (14)
M usr/src/pkg/manifests/system-test-zfstest.mf (10)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_mult.ksh
 (61)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_one.ksh
 (51)
M usr/src/uts/common/fs/zfs/dsl_dataset.c (13)
M usr/src/uts/common/fs/zfs/sys/dsl_dataset.h (12)
M usr/src/uts/common/fs/zfs/zcp_synctask.c (34)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/454.patch
https://github.com/openzfs/openzfs/pull/454.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/454

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T30d89484c1bcf549-Ma12b4d65cad5fe5b1aa34c7d
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8592 ZFS channel programs - rollback (#450)

2017-08-29 Thread Serapheim Dimitropoulos
Closed #450.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/450#event-1225819718
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T4a11080cd0929372-M2f464b0e04a6a6333ee32074
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8592 ZFS channel programs - rollback (#450)

2017-08-28 Thread Serapheim Dimitropoulos
Reviewed by: Chris Williamson 
Reviewed by: Matthew Ahrens 

ZFS channel programs should be able to perform a rollback.

Upstream bugs: DLPX-43642
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/450

-- Commit Summary --

  * 8592 ZFS channel programs - rollback

-- File Changes --

M usr/src/man/man1m/zfs-program.1m (13)
M usr/src/pkg/manifests/system-test-zfstest.mf (10)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_mult.ksh
 (61)
A 
usr/src/test/zfs-tests/tests/functional/channel_program/synctask_core/tst.rollback_one.ksh
 (51)
M usr/src/uts/common/fs/zfs/dsl_dataset.c (13)
M usr/src/uts/common/fs/zfs/sys/dsl_dataset.h (11)
M usr/src/uts/common/fs/zfs/zcp_synctask.c (34)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/450.patch
https://github.com/openzfs/openzfs/pull/450.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/450

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T4a11080cd0929372-Mdbe2d1eddbfb98601b764f13
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8590 memory leak in dsl_destroy_snapshots_nvl() (#449)

2017-08-28 Thread Serapheim Dimitropoulos
Reviewed by: Pavel Zakharov 
Reviewed by: Steve Gonczi 
Reviewed by: George Wilson 

In dsl_destroy_snapshots_nvl(), "snaps_normalized" is not
freed after it is added to "arg".
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/449

-- Commit Summary --

  * 8590 memory leak in dsl_destroy_snapshots_nvl()

-- File Changes --

M usr/src/uts/common/fs/zfs/dsl_destroy.c (1)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/449.patch
https://github.com/openzfs/openzfs/pull/449.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/449

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T96bf62ba2056dd33-M074cbb5b2b3fd62107f8d05f
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8491 uberblock on-disk padding to reserve space for smoothly merging zpool checkpoint & MMP in ZFS (#425)

2017-07-13 Thread Serapheim Dimitropoulos
New field and copyright notice added.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/425#issuecomment-315147670
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Td128c23c12b31635-Mc8deff0f623b3ef9c2cea2af
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8491 uberblock on-disk padding to reserve space for smoothly merging zpool checkpoint & MMP in ZFS (#425)

2017-07-13 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



> @@ -54,6 +54,11 @@ struct uberblock {
 
 	/* highest SPA_VERSION supported by software that wrote this txg */
 	uint64_t	ub_software_version;
+
+	/* These fields are reserved for features that are under development: */
+	uint64_t	ub_mmp_magic;
+	uint64_t	ub_mmp_delay;

New field added

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/425#discussion_r127280403
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Td128c23c12b31635-M0621208c69f4cbdcca068b22
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8491 uberblock on-disk padding to reserve space for smoothly merging zpool checkpoint & MMP in ZFS (#425)

2017-07-13 Thread Serapheim Dimitropoulos
@ofaaland could you take a look too please.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/425#issuecomment-315098175
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Td128c23c12b31635-Mbff501d04f98a9b01e4c2d21
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8491 uberblock on-disk padding to reserve space for smoothly merging zpool checkpoint & MMP in ZFS (#425)

2017-07-13 Thread Serapheim Dimitropoulos
The zpool checkpoint feature in DxOS added a new field in the uberblock.
The Multi-Modifier Protection Pull Request from ZoL adds two new fields
in the uberblock (Reference: https://github.com/zfsonlinux/zfs/pull/6279).

As these two changes come from two different sources and, once upstreamed
and deployed, will introduce an incompatibility with each other, we want
to upstream a change that reserves the padding for both of them so that
integration goes smoothly and everyone gets both features.
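
Concretely, the reservation amounts to appending placeholder fields for
both features at the end of struct uberblock, along these lines (a
sketch; the one-file diff below is authoritative):

    struct uberblock {
            /* ... existing fields up to ub_software_version ... */

            /* reserved for the Multi-Modifier Protection work (ZoL) */
            uint64_t        ub_mmp_magic;
            uint64_t        ub_mmp_delay;

            /* reserved for the zpool checkpoint feature (DxOS) */
            uint64_t        ub_checkpoint_txg;
    };
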
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/425

-- Commit Summary --

  * 8491 uberblock on-disk padding to reserve space for smoothly merging zpool 
checkpoint & MMP in ZFS

-- File Changes --

M usr/src/uts/common/fs/zfs/sys/uberblock_impl.h (3)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/425.patch
https://github.com/openzfs/openzfs/pull/425.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/425

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Td128c23c12b31635-M40db1bdd19da1d2760248806
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8414 Implemented zpool scrub pause/resume (#407)

2017-07-07 Thread Serapheim Dimitropoulos
sdimitro approved this pull request.





-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/407#pullrequestreview-48710702
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T693650e65dc896fc-Ma649308d80df3629bd77a84e
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8414 Implemented zpool scrub pause/resume (#407)

2017-07-06 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.



-   if (zfs_ioctl(hdl, ZFS_IOC_POOL_SCAN, &zc) == 0 ||
-   (errno == ENOENT && func != POOL_SCAN_NONE))
+   /* ECANCELED on a scrub means we resumed a paused scrub */

Yeah I figured that was the case. I just wanted to ask first. Thank you Alek!

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/407#discussion_r125983445
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T693650e65dc896fc-M832ea552432231578d273bae
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8414 Implemented zpool scrub pause/resume (#407)

2017-07-06 Thread Serapheim Dimitropoulos
sdimitro commented on this pull request.

LGTM overall. I just have one question and a small nit.

>  .\"
-.Dd Oct 2, 2016
+.Dd June 21, 2016

[nit] 2017

-   if (zfs_ioctl(hdl, ZFS_IOC_POOL_SCAN, &zc) == 0 ||
-   (errno == ENOENT && func != POOL_SCAN_NONE))
+   /* ECANCELED on a scrub means we resumed a paused scrub */

Even though this comment makes everything clear with respect to the rest of
the code, I found it a bit confusing that ECANCELED means resuming a paused
scrub. Skimming through other errnos, the best alternative I could find is
EADV, which I read as EADVANCE but which apparently means EADVERTISE, which
is obscure.

That said, I wonder if we could make use of zc.zc_nvlist_dst in the kernel
and back in userland to deal with both the ECANCELED and ENOENT cases later
in this code path, in a cleaner way. Did we consider that?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/407#pullrequestreview-48383427
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T693650e65dc896fc-Ma394423fbf836b08aea75c70
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8430 dir_is_empty_readdir() doesn't properly handle error from fdopendir() (#414)

2017-06-26 Thread Serapheim Dimitropoulos
Reviewed by: Serapheim Dimitropoulos 
Reviewed by: Matthew Ahrens 
Reviewed by: Dan Kimmel 

dir_is_empty_readdir() immediately returns if fdopendir() fails.
We should close dirfd when that happens.
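
The fix is the usual "release the descriptor on the early-return path"
pattern, roughly (a sketch; the return value on failure is assumed):

    DIR *dirp = fdopendir(dirfd);
    if (dirp == NULL) {
            /* fdopendir() failed, so dirfd is still ours to close */
            (void) close(dirfd);
            return (B_TRUE);
    }
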
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/414

-- Commit Summary --

  * 8430 dir_is_empty_readdir() doesn't properly handle error from fdopendir()

-- File Changes --

M usr/src/lib/libzfs/common/libzfs_mount.c (1)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/414.patch
https://github.com/openzfs/openzfs/pull/414.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/414

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T534e4d203e03c5ed-Mc2bd94c1356aceff9dd6cbbc
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8380 Assertion failure panic in zio_buf_alloc() (#401)

2017-06-14 Thread Serapheim Dimitropoulos
Yeap, just realized it. Sorry about that.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/401#issuecomment-308465961
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tc77c9ab2432da947-M9a88c305d68078871badfc04
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8380 Assertion failure panic in zio_buf_alloc() (#401)

2017-06-14 Thread Serapheim Dimitropoulos
Closed #401.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/401#event-1123509392
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tc77c9ab2432da947-M4effd07231d8e10f795de7ad
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8380 Assertion failure panic in zio_buf_alloc() (#401)

2017-06-12 Thread Serapheim Dimitropoulos
Reviewed by: George Wilson 
Reviewed by: Matthew Ahrens 
Reviewed by: Prashanth Sreenivasa 

We need to initialize the dn_origin_obj_refd field of a dnode when it
is allocated from the kmem cache (or, alternatively, when it is free'd
to the cache), in order to prevent a newly allocated dnode from
inheriting the value of a previously free'd dnode.
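
This is the usual kmem-cache pitfall: objects return from the cache with
whatever state was left behind, so any field that is neither reset in
the constructor nor cleared on free leaks across allocations. The shape
of the fix, sketched (the fix is a one-liner in dnode.c; whether it
lands in the constructor or the destructor is an implementation choice):

    /* sketch: clear the flag when the dnode goes back to the cache */
    static void
    dnode_dest(void *arg, void *unused)
    {
            dnode_t *dn = arg;

            /* ... existing teardown ... */
            dn->dn_origin_obj_refd = B_FALSE;
    }
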
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/401

-- Commit Summary --

  * 8380 Assertion failure panic in zio_buf_alloc()

-- File Changes --

M usr/src/uts/common/fs/zfs/dnode.c (1)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/401.patch
https://github.com/openzfs/openzfs/pull/401.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/401

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tc77c9ab2432da947-M568f43534844a2551086335e
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8378 crash due to bp in-memory modification of nopwrite block (#400)

2017-06-12 Thread Serapheim Dimitropoulos
Reviewed by: Prakash Surya 
Reviewed by: George Wilson 

The problem is that zfs_get_data() supplies a stale zgd_bp to
dmu_sync(), which we then nopwrite against.

zfs_get_data() doesn't hold any DMU-related locks, so after it
copies db_blkptr to zgd_bp, dbuf_write_ready() could change
db_blkptr, and dbuf_write_done() could remove the dirty record.
dmu_sync() then sees the stale BP and that the dbuf is not dirty,
so the block appears eligible for nop-writing.

The fix is for dmu_sync() to copy db_blkptr to zgd_bp after
acquiring the db_mtx. We could still see a stale db_blkptr,
but if it is stale then the dirty record will still exist and
thus we won't attempt to nopwrite.
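
In code terms, the fix moves the blkptr snapshot inside the db_mtx
critical section, roughly (a sketch, not the verbatim patch):

    /* in dmu_sync(), after the fix */
    mutex_enter(&db->db_mtx);
    /*
     * Snapshot db_blkptr under db_mtx.  We may still read a stale
     * BP, but in that case the dirty record is guaranteed to still
     * exist, so dmu_sync() will see a dirty dbuf and refuse to
     * nopwrite against the stale BP.
     */
    *zgd->zgd_bp = *db->db_blkptr;
    mutex_exit(&db->db_mtx);
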
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/400

-- Commit Summary --

  * 8378 crash due to bp in-memory modification of nopwrite block

-- File Changes --

M usr/src/cmd/ztest/ztest.c (7)
M usr/src/uts/common/fs/zfs/dmu.c (60)
M usr/src/uts/common/fs/zfs/zfs_vnops.c (7)
M usr/src/uts/common/fs/zfs/zvol.c (9)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/400.patch
https://github.com/openzfs/openzfs/pull/400.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/400

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tdfaa847ec6045f5b-M4e595d9fcd6d5449ba122fb6
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8377 Panic in bookmark deletion (#399)

2017-06-12 Thread Serapheim Dimitropoulos
Reviewed by: Paul Dagnelie 
Reviewed by: Pavel Zakharov 
Reviewed by: George Wilson 

The problem is that when dsl_bookmark_destroy_check() is
executed from open context (the pre-check), it fills in
dbda_success based on the existence of the bookmark. But
the bookmark (or containing filesystem as in this case)
can be destroyed before we get to syncing context. When
we re-run dsl_bookmark_destroy_check() in syncing context,
it will not add the deleted bookmark to dbda_success,
intending for dsl_bookmark_destroy_sync() to not process
it. But because the bookmark is still in dbda_success from
the open-context call, we do try to destroy it.

The fix is that dsl_bookmark_destroy_check() should not
modify dbda_success when called from open context.
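
The shape of the fix is a context guard in the check function, roughly
(a sketch; the argument struct name is assumed):

    static int
    dsl_bookmark_destroy_check(void *arg, dmu_tx_t *tx)
    {
            dsl_bookmark_destroy_arg_t *dbda = arg;

            /*
             * The open-context pre-check must not populate
             * dbda_success; only the syncing-context run decides
             * what dsl_bookmark_destroy_sync() will process.
             */
            if (!dmu_tx_is_syncing(tx))
                    return (0);

            /* ... existing checks that fill in dbda_success ... */
            return (0);
    }
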
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/399

-- Commit Summary --

  * 8377 Panic in bookmark deletion

-- File Changes --

M usr/src/uts/common/fs/zfs/dsl_bookmark.c (8)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/399.patch
https://github.com/openzfs/openzfs/pull/399.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/399

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Ta7d098dd5cc059e1-M3623413c57a952f344fb86cc
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8375 Kernel memory leak in nvpair code (#398)

2017-06-12 Thread Serapheim Dimitropoulos
@zettabot go

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/398#issuecomment-307931792
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T93f31a64593342e6-Mbbc89929dcb26dc0561652d9
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8375 Kernel memory leak in nvpair code (#398)

2017-06-12 Thread Serapheim Dimitropoulos
In nvs_embedded(), when we return EINVAL due to reaching
the recursion limit, we should free the nvpriv_t that was
allocated earlier in the function.
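
In outline, the decode path after the fix looks like this (a sketch
against the surrounding nvpair.c code):

    if ((priv = nv_priv_alloc_embedded(nvs->nvs_priv)) == NULL)
            return (ENOMEM);

    nvlist_init(embedded, embedded->nvl_nvflag, priv);

    if (nvs->nvs_recursion >= nvpair_max_recursion) {
            /* previously leaked priv; nvlist_free() releases it */
            nvlist_free(embedded);
            return (EINVAL);
    }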

Reviewed by: Pavel Zakharov 
Reviewed by: George Wilson 
Reviewed by: Prashanth Sreenivasa 
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/398

-- Commit Summary --

  * 8375 Kernel memory leak in nvpair code

-- File Changes --

M usr/src/common/nvpair/nvpair.c (6)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/398.patch
https://github.com/openzfs/openzfs/pull/398.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/398

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T93f31a64593342e6-M0200c385525a49f469a0223c
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8155 simplify dmu_write_policy handling of pre-compressed buffers (#370)

2017-05-04 Thread Serapheim Dimitropoulos
Reviewed by: Dan Kimmel 
Reviewed by: George Wilson 

When writing pre-compressed buffers, arc_write() requires that
the compression algorithm used to compress the buffer matches
the compression algorithm requested by the zio_prop_t, which is
set by dmu_write_policy(). This makes dmu_write_policy() and its
callers a bit more complicated.

We simplify this by making arc_write() trust the caller to supply
the type of pre-compressed buffer that it wants to write,
and override the compression setting in the zio_prop_t.
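
The crux of the change is that arc_write() derives the compression from
the buffer itself and overrides the caller's policy, roughly (a sketch;
the arc_get_compression() helper name is an assumption):

    /* in arc_write(): trust the pre-compressed buffer */
    zio_prop_t localprop = *zp;

    /* override whatever compression dmu_write_policy() requested */
    localprop.zp_compress = arc_get_compression(buf);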

Upstream bugs: DLPX-50071
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/370

-- Commit Summary --

  * 8155 simplify dmu_write_policy handling of pre-compressed buffers

-- File Changes --

M usr/src/uts/common/fs/zfs/arc.c (11)
M usr/src/uts/common/fs/zfs/dbuf.c (4)
M usr/src/uts/common/fs/zfs/dmu.c (21)
M usr/src/uts/common/fs/zfs/dmu_objset.c (2)
M usr/src/uts/common/fs/zfs/sys/dmu.h (4)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/370.patch
https://github.com/openzfs/openzfs/pull/370.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/370

--
openzfs-developer
Archives: https://openzfs.topicbox-beta.com/groups/developer/
Powered by Topicbox: https://topicbox-beta.com


[developer] Re: [openzfs/openzfs] 8155 simplify dmu_write_policy handling of pre-compressed buffers (#370)

2017-05-04 Thread Serapheim Dimitropoulos
@zettabot go

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/370#issuecomment-299275795
--
openzfs-developer
Archives: https://openzfs.topicbox.com/groups/developer/
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8156 dbuf_evict_notify() does not need dbuf_evict_lock (#369)

2017-05-04 Thread Serapheim Dimitropoulos
@zettabot go


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/369#issuecomment-299274721
--
openzfs-developer
Archives: https://openzfs.topicbox.com/groups/developer/
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8156 dbuf_evict_notify() does not need dbuf_evict_lock (#369)

2017-05-04 Thread Serapheim Dimitropoulos
Reviewed by: Dan Kimmel 
Reviewed by: Paul Dagnelie 

dbuf_evict_notify() holds the dbuf_evict_lock while checking if it should do 
the eviction itself
(because the evict thread is not able to keep up). This can result in massive 
lock contention.
It isn't necessary to hold the lock, because if we make the wrong choice 
occasionally, nothing
bad will happen. This commit results in a ~60% performance improvement for 
ARC-cached sequential reads.
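
The resulting shape of dbuf_evict_notify() is roughly the following
(a sketch; identifiers as in the surrounding dbuf.c):

    static void
    dbuf_evict_notify(void)
    {
            /*
             * Racy, lockless read of the cache size: at worst we do
             * one unnecessary inline eviction or skip a useful one,
             * which is harmless, and we avoid serializing every
             * ARC-cached read on dbuf_evict_lock.
             */
            if (refcount_count(&dbuf_cache_size) > dbuf_cache_max_bytes) {
                    if (dbuf_cache_above_hiwater())
                            dbuf_evict_one();
                    cv_signal(&dbuf_evict_cv);
            }
    }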

Upstream bugs: DLPX-50674
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/369

-- Commit Summary --

  * 8156 dbuf_evict_notify() does not need dbuf_evict_lock

-- File Changes --

M usr/src/uts/common/fs/zfs/dbuf.c (18)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/369.patch
https://github.com/openzfs/openzfs/pull/369.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/369

--
openzfs-developer
Archives: https://openzfs.topicbox.com/groups/developer/
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8155 simplify dmu_write_policy handling of pre-compressed buffers (#368)

2017-05-04 Thread Serapheim Dimitropoulos
Closed #368.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/368#event-1069468483
--
openzfs-developer
Archives: https://openzfs.topicbox.com/groups/developer/
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8155 simplify dmu_write_policy handling of pre-compressed buffers (#368)

2017-05-04 Thread Serapheim Dimitropoulos
@zettabot go

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/368#issuecomment-299270832
--
openzfs-developer
Archives: https://openzfs.topicbox.com/groups/developer/
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] 8155 simplify dmu_write_policy handling of pre-compressed buffers (#368)

2017-05-04 Thread Serapheim Dimitropoulos
When writing pre-compressed buffers, arc_write() requires that
the compression algorithm used to compress the buffer matches
the compression algorithm requested by the zio_prop_t, which is
set by dmu_write_policy(). This makes dmu_write_policy() and its
callers a bit more complicated.

We simplify this by making arc_write() trust the caller to supply
the type of pre-compressed buffer that it wants to write,
and override the compression setting in the zio_prop_t.

Upstream bugs: DLPX-50071
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/368

-- Commit Summary --

  * 8155 simplify dmu_write_policy handling of pre-compressed buffers

-- File Changes --

M usr/src/uts/common/fs/zfs/arc.c (11)
M usr/src/uts/common/fs/zfs/dbuf.c (4)
M usr/src/uts/common/fs/zfs/dmu.c (21)
M usr/src/uts/common/fs/zfs/dmu_objset.c (2)
M usr/src/uts/common/fs/zfs/sys/dmu.h (4)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/368.patch
https://github.com/openzfs/openzfs/pull/368.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/368

--
openzfs-developer
Archives: https://openzfs.topicbox.com/groups/developer/
Powered by Topicbox: https://topicbox.com