[developer] [openzfs/openzfs] Use EC2 "Spot Instances" with Jenkins Automation (#411)

2017-06-23 Thread Prakash Surya
The pricing of EC2 "Spot Instances" is generally much cheaper than
"On-Demand" pricing, so this change attempts to refactor the automation to
create the test instances using this less expensive purchasing option.
You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/411

-- Commit Summary --

  * Use EC2 "Spot Instances" with Jenkins Automation

-- File Changes --

M Jenkinsfile (50)
M jenkins/jobs/build_install_media.groovy (1)
M jenkins/pipelines/build_install_media.groovy (9)
M jenkins/sh/ansible-deploy-roles/ansible-deploy-roles.sh (4)
M jenkins/sh/aws-create-image/aws-create-image.sh (4)
M jenkins/sh/aws-delete-image/aws-delete-image.sh (4)
A jenkins/sh/aws-request-spot-instances/aws-request-spot-instances.sh (116)
A jenkins/sh/aws-request-spot-instances/base-specification.json (11)
A jenkins/sh/aws-request-spot-instances/block-device-mappings/none.json (3)
A jenkins/sh/aws-request-spot-instances/block-device-mappings/rpool-fix-labels.json (11)
A jenkins/sh/aws-request-spot-instances/block-device-mappings/run-zfs-tests.json (27)
D jenkins/sh/aws-run-instances/aws-run-instances.sh (120)
D jenkins/sh/aws-stop-instances/aws-stop-instances.sh (26)
M jenkins/sh/aws-terminate-instances/aws-terminate-instances.sh (4)
M jenkins/sh/download-remote-directory/download-remote-directory.sh (4)
M jenkins/sh/download-remote-file/download-remote-file.sh (4)
M jenkins/sh/library/aws.sh (22)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/411.patch
https://github.com/openzfs/openzfs/pull/411.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/411

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T9ea9c38e30ae9553-M0f5a1a6f3bc79e41ebd6bedc
Powered by Topicbox: https://topicbox.com


[developer] [openzfs/openzfs] Use segkpm for single-page slabs (#410)

2017-06-23 Thread Matthew Ahrens
I'm not an expert in the VM system, so I'm looking for input on whether this
potential change is a good idea. The idea is to use segkpm (the direct mapping
of all of physical memory) to reference single-page slabs, rather than setting
up and tearing down new TLB mappings in the kernel's heap.

This roughly doubles the performance of kmem_reap() for the affected caches. We
only do this for ZFS's caches, because they are not dumped; more work would be
required to dump these pages, and to allow mdb to access them via segkpm.
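
To make the idea concrete, here is a rough, illustrative sketch (not the actual
patch) of what this means for a one-page slab: on a segkpm-enabled system every
physical page already has a kernel virtual address that is a fixed function of
its page frame number, so the allocator can simply look that address up instead
of carving out heap VA and loading (and later unloading) a HAT mapping.
hat_kpm_page2va() is the existing illumos interface for that lookup; the helper
wrapped around it below is hypothetical.

```c
/*
 * Illustrative sketch only -- not the code in this pull request.
 */
#include <sys/types.h>
#include <vm/page.h>
#include <vm/hat.h>

/* Hypothetical helper: map one already-allocated page backing a slab. */
static caddr_t
slab_map_single_page(page_t *pp)
{
	/*
	 * segkpm path: the VA is roughly kpm_vbase + ptob(pp->p_pagenum),
	 * so there is nothing to set up here and nothing to tear down
	 * later in kmem_reap(), unlike a fresh heap mapping.
	 */
	return (hat_kpm_page2va(pp, 0));
}
```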

I noticed that seg_map has code to do something similar, but it's disabled on
x86 (segmap_kpm=0), so I was wondering whether there's something wrong with
this approach. Unfortunately, there aren't any comments explaining *why* it's
disabled on x86.


You can view, comment on, or merge this pull request online at:

  https://github.com/openzfs/openzfs/pull/410

-- Commit Summary --

  * Use segkpm for single-page slabs

-- File Changes --

M usr/src/uts/common/sys/vmem.h (11)
M usr/src/uts/common/vm/seg_kmem.c (31)
M usr/src/uts/i86pc/os/startup.c (4)

-- Patch Links --

https://github.com/openzfs/openzfs/pull/410.patch
https://github.com/openzfs/openzfs/pull/410.diff

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/410

--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T5530aa766ee8412c-Mc4c3739765814a3e08ee1c8a
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8414 Implemented zpool scrub pause/resume (#407)

2017-06-23 Thread Alek P
Thanks for the link @loli10K! The problem was something else:
```
23:06:48.76 SUCCESS: is_pool_scrub_paused testpool
23:06:50.01 cannot scrub testpool: currently scrubbing; use 'zpool scrub -s' to cancel current scrub
```
So it was the scrub resume that "wasn't working". In fact, the resume was
working, but it was issuing two resume ioctls per scrub pause command; the
second one would see the already-restarted scrub and bail with an error.
It looks like illumos re-issues a syscall when it returns ERESTART (which is
probably what the OS should do). I've changed the scrub resume to use ECANCELED
instead of ERESTART, and it now looks to be working as expected. If the tests
look good I'll make the same change in ZoL to keep the code consistent.
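
For anyone following along without the diff, here is a minimal sketch of the
shape of the fix as described above. The names dsl_scan_is_paused_scrub(),
dsl_scrub_set_pause_resume(), and POOL_SCRUB_NORMAL are taken from the
pause/resume work, and the surrounding function is hypothetical, so treat this
as an outline rather than the actual patch.

```c
/*
 * Outline only, not the actual patch.  A "zpool scrub" issued while a
 * scrub is paused resumes it; returning ECANCELED (instead of ERESTART)
 * tells the caller the resume took effect without the syscall layer
 * re-issuing the ioctl.
 */
#include <sys/errno.h>
#include <sys/fs/zfs.h>
#include <sys/dsl_pool.h>
#include <sys/dsl_scan.h>

static int
scrub_start_or_resume(dsl_pool_t *dp, dsl_scan_t *scn, pool_scan_func_t func)
{
	if (func == POOL_SCAN_SCRUB && dsl_scan_is_paused_scrub(scn)) {
		int err = dsl_scrub_set_pause_resume(dp, POOL_SCRUB_NORMAL);
		if (err != 0)
			return (err);
		/*
		 * ERESTART here made illumos re-issue the ioctl, which
		 * then saw the now-running scrub and failed; ECANCELED
		 * avoids that second, spurious resume attempt.
		 */
		return (ECANCELED);
	}
	/* No paused scrub: start one as usual. */
	return (dsl_scan(dp, func));
}
```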

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/407#issuecomment-310740392
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/T693650e65dc896fc-M1c2d438b811de624fe6066f1
Powered by Topicbox: https://topicbox.com


[developer] Re: [openzfs/openzfs] 8423 Implement large_dnode pool feature (#409)

2017-06-23 Thread Matthew Ahrens
Note: this is a port of
https://github.com/zfsonlinux/zfs/commit/50c957f702ea6d08a634e42f73e8a49931dd8055
and
https://github.com/zfsonlinux/zfs/commit/08f0510d87186575db00269fff17a3409de5ceb6
I had to make a few small changes to get it to pass lint, which are captured
here: https://github.com/zfsonlinux/zfs/pull/6262

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/openzfs/openzfs/pull/409#issuecomment-310729555
--
openzfs-developer
Archives: 
https://openzfs.topicbox.com/groups/developer/discussions/Tfefe6318017dd4c7-M2fef9d22d89d7e2e6d7db364
Powered by Topicbox: https://topicbox.com