Thank you - much appreciated!
--
@ivucica - the final commit message for `f539f1e` ended up wrong, so this
commit closes #482 and does add support for removal of top-level mirror vdevs,
as described in the #482 commit message.
Sorry for the confusion.
--
@prakashsurya To explicitly confirm: Is
https://github.com/openzfs/openzfs/commit/f539f1eb707e3af0ec4617c664b67dff46a0c173
closing #482 or #251?
Descriptions differ slightly with regard to support for removal of mirror
devices...
Commit:
> Note that when a device is removed, we do not verify…
Closed #482 via f539f1eb707e3af0ec4617c664b67dff46a0c173.
--
@prashks pushed 1 commit.
fdb2855 assertion failed: spa_vdev_remove_thread(): dmu_tx_assign() gets ENOSPC
--
Ah, gotcha!
--
@Skaronator The policy from our legal team is that this should be `, `. I don't know the reasoning
behind that, but it seems several other companies do it this way too, and it
isn't too hard to maintain.
--
Updated my branch to address code review comments and some cstyle/copyright
fixes.
Relevant diffs at:
https://github.com/prashks/illumos-gate/commit/8b37b1da656dd573aba3043a25a8351c6809a0e7
--
ahrens commented on this pull request.
> int zfs_resilver_min_time_ms = 3000; /* min millisecs to resilver per txg */
> boolean_t zfs_no_scrub_io = B_FALSE; /* set to disable scrub i/o */
> boolean_t zfs_no_scrub_prefetch = B_FALSE; /* set to disable scrub prefetch */
> enum ddt_class zfs_scrub_dd…
dweeezil commented on this pull request.
Renamed tunable possibly unrelated to device evacuation?
> int zfs_resilver_min_time_ms = 3000; /* min millisecs to resilver per txg */
> boolean_t zfs_no_scrub_io = B_FALSE; /* set to disable scrub i/o */
> boolean_t zfs_no_scrub_prefetch = B_FALSE; /* se…
prashks commented on this pull request.
> +	 */
> +	vd->vdev_removing = B_TRUE;
> +
> +	vdev_dirty_leaves(vd, VDD_DTL, *txg);
> +	vdev_config_dirty(vd);
> +
> +	spa_history_log_internal(spa, "vdev remove", NULL,
> +	    "%s vdev %llu (log) %s", spa_name(spa), vd->vdev_id,
> +	    …
prashks commented on this pull request.
> +		spa->spa_l2cache.sav_sync = B_TRUE;
> +	} else if (vd != NULL && vd->vdev_islog) {
> +		ASSERT(!locked);
> +		error = spa_vdev_remove_log(vd, &txg);
> +	} else if (vd != NULL) {
> +		ASSERT(!locked);
dweeezil commented on this pull request.
Changed event when removing a log device.
> +	 */
> +	vd->vdev_removing = B_TRUE;
> +
> +	vdev_dirty_leaves(vd, VDD_DTL, *txg);
> +	vdev_config_dirty(vd);
> +
> +	spa_history_log_internal(spa, "vdev remove", NULL,
> +	    "%s vdev %llu…
I've also added the patch to ```vdev_clear()``` mentioned in
https://github.com/openzfs/openzfs/pull/482#issuecomment-347576750 to ZoL.
--
dweeezil requested changes on this pull request.
I'm still working through this on ZoL.
> +		spa->spa_l2cache.sav_sync = B_TRUE;
> +	} else if (vd != NULL && vd->vdev_islog) {
> +		ASSERT(!locked);
> +		error = spa_vdev_remove_log(vd, &txg);
> +	} else if…
This is looking pretty good. I'm going to submit it to the ZoL buildbot now
for additional verification.
--
@ahrens I was out of town for a week and am getting caught up now. I'm working
on reconciling the changes to my ZoL PR and should be able to give this a final
review today.
--
@dweeezil did you have a chance to look at the final version?
--
@ahrens Yes, I'm on my way home from a vacation now and will give it a final
review within the next day, once I catch up the ZoL PR.
--
@ahrens If we're using the Linux kernel contributor sign-off definitions, I'm
okay with Acked-by, but I think Reviewed-by would overstate the amount of
review I have given it (or even could give it, given my lack of kernel and
ZFS-internals knowledge). However, if it is normal practice in OpenZFS…
ahrens approved this pull request.
--
I believe these changes are final (pending the last test run), and ready to be
integrated to illumos. @rlaager and @dweeezil, can we count you as code
reviewers?
--
@prashks pushed 1 commit.
fc9c74e Export of a readonly pool with an in-progress removal hangs or crashes
--
@ofcaah Typically, after a ZFS change is integrated to OpenZFS/illumos, someone
from the ZFSonLinux community will port it to Linux, and someone from the
FreeBSD community will port it to FreeBSD. @dweeezil has started this work
for Linux: https://github.com/zfsonlinux/zfs/pull/6900
--
ikozhukhov commented on this pull request.
> @@ -6653,6 +6566,9 @@ spa_sync(spa_t *spa, uint64_t txg)
> 		ddt_sync(spa, txg);
> 		dsl_scan_sync(dp, tx);
> +		if (spa->spa_vdev_removal != NULL)
> +			svr_sync(spa, tx);
> +
> 		while…
Added code changes for a couple more corner cases we discovered (thanks @ahrens &
@sdimitro):
- Panic in vdev_indirect_mapping_entry_for_offset_impl()
- Device removal may be suspended during spa_export() but not resumed if
spa_export() fails with EBUSY (with a new zfs test: _removal_resume_export_)
- …
Excuse my off-topic post, but: what's the procedure here? I have a vested
interest in this feature after, like many others, adding a "cache"/"log" vdev
without the cache/log word and ending up with an extra vdev. Should this get
merged, what's the path to FreeBSD and/or ZoL? Is there some place where this…
ikozhukhov approved this pull request.
I have tested the update - no issues found.
--
> is it trivial to modify zdb logic based on this PR to perform offline device
> evacuation without changing the on-disk format?
No.
--
> can the feature be shipped also separate as a zdb feature for an exported
> pool, without needing to upgrade the pool, so a production non-patched pool
> can benefit the feature without having to touch the production zfs code and
> existing pool on-disk format?
> No, software that doesn't understand…
Thanks for the reviews and testing - @dweeezil, @ikozhukhov, @rlaager
See the updated changes at:
https://github.com/openzfs/openzfs/pull/482/commits/f684987c8faa029a1ad6cf310a75b40af635c71b
Added changes for:
- /sbin/zpool should be able to estimate memory used by device removal
- Bug fixes
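For a sense of how that estimate surfaces to users, here is a hypothetical
dry-run transcript (the pool and vdev names are placeholders, and the exact
output wording is illustrative rather than taken from this PR; in current
OpenZFS the dry run is `zpool remove -n`):
```
# zpool remove -n tank mirror-1
Memory that will be used after removing mirror-1: 1.16M
```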
dweeezil commented on this pull request.
> +	 * vdevs, we can not add redundant toplevels. This ensures that
> +	 * we do not rely on resilver, which does not properly handle
> +	 * indirect vdevs.
> +	 */
> +	if (spa->spa_vdev_removal != NULL ||
> +	    spa->spa_removin…
dweeezil commented on this pull request.
> +	 *
> +	 * 3. The last case to consider is if the offset actually falls
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
> +	 *
> +	 * …
dweeezil commented on this pull request.
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
> +	 *
> +	 *    NOTE: If the offset is equal to the entry's source offset,
> +	 *    …
rlaager requested changes on this pull request.
> @@ -1778,9 +1800,12 @@ option as follows:
> .Bd -literal
> # zpool iostat -v pool 5
> .Ed
> -.It Sy Example 14 No Removing a Mirrored Log Device
> -The following command removes the mirrored log device
> -.Sy mirror-2 .
> +.It Sy Example 14 No Removing a Mi…
prashks commented on this pull request.
> +	 * vdevs, we can not add redundant toplevels. This ensures that
> +	 * we do not rely on resilver, which does not properly handle
> +	 * indirect vdevs.
> +	 */
> +	if (spa->spa_vdev_removal != NULL ||
> +	    spa->spa_removing…
prashks commented on this pull request.
> +	 *
> +	 * 3. The last case to consider is if the offset actually falls
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
> +	 *
> +	 * …
prashks commented on this pull request.
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
> +	 *
> +	 *    NOTE: If the offset is equal to the entry's source offset,
> +	 *    …
dweeezil commented on this pull request.
> +	 * vdevs, we can not add redundant toplevels. This ensures that
> +	 * we do not rely on resilver, which does not properly handle
> +	 * indirect vdevs.
> +	 */
> +	if (spa->spa_vdev_removal != NULL ||
> +	    spa->spa_removin…
dweeezil commented on this pull request.
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
> +	 *
> +	 *    NOTE: If the offset is equal to the entry's source offset,
> +	 *    …
dweeezil commented on this pull request.
> +	 *
> +	 * 3. The last case to consider is if the offset actually falls
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
> +	 *
> +	 * …
@mailinglists35
> if i understand correctly, this feature requires enabling a feature flag thus
> irreversibly changing the on-disk format
Yes, the on-disk format changes (and the feature property changes to `active`)
when `zpool remove` is used to remove a top-level device (which is only allowed…
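To make the activation concrete, a hypothetical transcript (pool and vdev
names are placeholders; `device_removal` is the feature this work introduces,
shown with the usual `zpool get` columns):
```
# zpool remove tank mirror-1
# zpool get feature@device_removal tank
NAME  PROPERTY                 VALUE   SOURCE
tank  feature@device_removal   active  local
```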
if i understand correctly, this feature requires enabling a feature flag thus
irreversibly changing the on-disk format. This means if, on a non-patched
version, you make the mistake of adding the unwanted vdev, then in order to use
the feature you need to upgrade the pool.
can the feature be shipped also separate as a zdb feature for an exported
pool, without needing to upgrade the pool, so a production non-patched pool
can benefit the feature without having to touch the production zfs code and
existing pool on-disk format?
@dweeezil great find. I think that on illumos we're also trying to avoid
kmem_alloc(0) and kmem_free(0), so this will be a good change to make on
illumos too.
--
@ahrens, I figured it out: In illumos, ```kmem_alloc(0, ...)``` returns NULL
but in Linux, we get a magic pointer. I've done this little patch to
```vdev_compact_children()``` because, apparently, the vdev removal code now
depends on ```vdev_child``` being nulled out during compaction.
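A minimal userland sketch of the pattern being described, assuming a
pared-down `vdev_t` and using `calloc`/`free` as stand-ins for the kernel's
`kmem_zalloc`/`kmem_free` (illustrative, not the actual ZoL patch): allocate
the new child array only when at least one child survives compaction, and
otherwise leave `vdev_child` NULL so illumos and Linux behave the same.
```
#include <stdlib.h>

/* Pared-down stand-in for the real vdev_t (illustration only). */
typedef struct vdev {
	struct vdev **vdev_child;
	int vdev_children;
	int vdev_id;
} vdev_t;

static void
vdev_compact_children_sketch(vdev_t *pvd)
{
	int oldc = pvd->vdev_children;
	int newc = 0;

	/* Count the surviving (non-NULL) children. */
	for (int c = 0; c < oldc; c++)
		if (pvd->vdev_child[c] != NULL)
			newc++;

	vdev_t **newchild = NULL;
	if (newc > 0) {
		/* Allocate only when there is something to hold. */
		newchild = calloc(newc, sizeof (vdev_t *));
		for (int c = 0, n = 0; c < oldc; c++) {
			vdev_t *cvd = pvd->vdev_child[c];
			if (cvd != NULL) {
				newchild[n] = cvd;
				cvd->vdev_id = n++;
			}
		}
	}

	free(pvd->vdev_child);
	pvd->vdev_child = newchild;	/* NULL when no children remain */
	pvd->vdev_children = newc;
}
```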
@ahrens I looked into the ```vdev_free()``` situation a bit closer and, for
some reason, I'm seeing a value of 0x10 in ```vdev_child``` for an "indirect"
vdev. I need to investigate further as to how this is happening.
--
@dweeezil That diff makes sense to me. But I don't understand what you're
saying about vdev_free(). Why is vdev_child not NULL at that point? I think
that ASSERT is worthwhile since otherwise you'll leak the memory of vdev_child.
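A hypothetical illustration of that invariant (reusing the pared-down
`vdev_t` from the sketch above, with `assert` standing in for the kernel
`ASSERT`): by the time a vdev is freed, compaction must already have released
and nulled its child array, or that allocation is leaked.
```
#include <assert.h>
#include <stdlib.h>

static void
vdev_free_sketch(vdev_t *vd)
{
	/*
	 * All children must already be freed and the child array
	 * released and nulled by compaction; a stale non-NULL
	 * vdev_child here would be a memory leak.
	 */
	assert(vd->vdev_children == 0);
	assert(vd->vdev_child == NULL);

	free(vd);
}
```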
--
@ahrens Like this?
```
diff --git a/module/zfs/vdev.c b/module/zfs/vdev.c
index 5185355..bfe7020 100644
--- a/module/zfs/vdev.c
+++ b/module/zfs/vdev.c
@@ -2921,6 +2921,12 @@ vdev_clear(spa_t *spa, vdev_t *vd)
 		vdev_clear(spa, vd->vdev_child[c]);
 
 	/*
+	 * It makes no sense to "clear" an indirect vdev.
+	 */
+	if (!vdev_is_concrete(vd))
+		return;
+
+	/*
```
@dweeezil I think that was a mistake. It doesn't make sense to "clear" an
indirect (non-concrete) vdev. We should probably just `return` before the "If
we're in the FAULTED state ..." check, if it's an indirect vdev.
--
dweeezil requested changes on this pull request.
> +	 *
> +	 * 3. The last case to consider is if the offset actually falls
> +	 *    within the mapping entry's range. If this is the case, the
> +	 *    offset is considered to be "equal to" the mapping entry and
> +	 *    0 will be returned.
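For what the quoted comment describes, a standalone sketch of the three-way
comparison (the name and signature are hypothetical, not the PR's actual
code): an offset anywhere inside an entry's source range compares as "equal",
so a binary search over the sorted mapping entries returns the entry that maps
the offset.
```
#include <stdint.h>

/*
 * Compare a DVA offset against a mapping entry's source range
 * [src_offset, src_offset + size). Illustrative helper only.
 */
static int
offset_vs_entry_compare(uint64_t offset, uint64_t src_offset,
    uint64_t size)
{
	if (offset < src_offset)
		return (-1);	/* case 1: offset precedes the entry */
	if (offset >= src_offset + size)
		return (1);	/* case 2: offset follows the entry */
	return (0);		/* case 3: offset falls within the entry */
}
```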
I've gotten the port to ZoL to the point that it works but am now going to have
to plow through a bunch of testing issues which may have applicability in
OpenZFS/illumos as well.
@ahrens The first issue I've run across is the addition of the
```vdev_is_concrete()``` check to …
Thanks for sharing your test results, @ikozhukhov.
I'm working on a couple of minor changes (a bug fix & the addition of a dry-run
option that shows memory usage for the operation) and sorting out the zfs
tests; I hope to post an update this week and move this forward.
--
ikozhukhov approved this pull request.
I have tested it by integrating it into DilOS, with some manual tests in
different scenarios.
--
Hey, what is the status of this PR?
--
@dweeezil That's great, thank you Tim! Let me know if there's anything I can
offer insight/advice on for the port.
--
FYI, I'm working on porting this to ZoL. The basic porting is complete, but
there's quite a bit of interference due to the range tree functions no longer
managing the caller's locks, and also due to ZoL's extra range tree functions.
Another point of conflict is zfsonlinux/zfs@0ea05c6 and a few gratuitous…
The relevant test failures are:
```
Tests with results other than PASS that are unexpected:
FAIL removal/removal_with_add (expected PASS)
FAIL removal/remove_mirror (expected PASS)
```
--
@mailinglists35 Yes, this is feature complete and ready for users to start
testing. The removal/evacuation for striped vdevs has been in production for
years now at Delphix, and we have been testing the mirrored top-level vdev
removal for months now.
For ZoL, these code changes need to be ported to ZoL t…
What is needed to have this running on ZoL, and is it feature complete, as in,
can we users safely start testing it?
--