Audience: anyone working on cephfs, or anyone with a general interest in testing.

The tests in ceph-qa-suite/tasks/cephfs are growing in number, but they're kind of inconvenient to run because they require teuthology (and therefore built packages, locked nodes, etc.). Most of them don't actually need anything beyond what you already have in a vstart cluster, so I've adapted them to optionally run that way.

The idea is that we can iterate a lot faster when writing new tests (one less excuse not to write them) and get better use out of the tests when debugging things and testing fixes. teuthology is fine for mass-running the nightlies etc, but it's overkill for testing individual bits of MDS/client functionality.

The code is currently on the wip-vstart-runner ceph-qa-suite branch, and the two magic commands are:

1. Start a vstart cluster with a couple of MDSs, as your normal user:
$ make -j4 rados ceph-fuse ceph-mds ceph-mon ceph-osd cephfs-data-scan cephfs-journal-tool cephfs-table-tool && ./stop.sh ; rm -rf out dev ; MDS=2 OSD=3 MON=1 ./vstart.sh -d -n

2. Invoke the test runner, as root (replace paths and test name as appropriate; leave off the test name to run everything, as shown below):
# PYTHONPATH=/home/jspray/git/teuthology/:/home/jspray/git/ceph-qa-suite/ python /home/jspray/git/ceph-qa-suite/tasks/cephfs/vstart_runner.py tasks.cephfs.test_strays.TestStrays.test_migration_on_shutdown

test_migration_on_shutdown (tasks.cephfs.test_strays.TestStrays) ... ok

----------------------------------------------------------------------
Ran 1 test in 121.982s

OK


^^^ see!  two minutes, and no waiting for gitbuilders!
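And to run everything against the same vstart cluster, just leave off the test name, e.g. (with the paths from my setup above):

# PYTHONPATH=/home/jspray/git/teuthology/:/home/jspray/git/ceph-qa-suite/ python /home/jspray/git/ceph-qa-suite/tasks/cephfs/vstart_runner.py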

The main caveat here is that it needs to run as root in order to mount/unmount things, which is a little scary. My plan is to split it out into a little root service for doing mount operations, and then let the main test part run as a normal user and call out to the mounter service when needed.
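For the curious, here's a rough sketch of the shape I have in mind (this is not code from the branch; the socket path, wire format and helper names are all made up for illustration, and the real thing would need proper ceph-fuse arguments, validation, error handling, etc.):

#!/usr/bin/env python
# Hypothetical sketch of the "root mount service" idea: a tiny privileged
# daemon on a unix socket, accepting mount/unmount requests from the
# unprivileged test runner so only this piece has to run as root.

import json
import os
import socket
import subprocess

SOCKET_PATH = "/var/run/vstart-mounter.sock"  # made-up location

# Whitelist of operations the unprivileged runner may request, mapped to
# the argv executed as root.  Real mount arguments (conf path, client
# name, etc.) are omitted here.
ALLOWED = {
    "mount": lambda req: ["ceph-fuse", req["mountpoint"]],
    "umount": lambda req: ["fusermount", "-u", req["mountpoint"]],
}

def serve():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    os.chmod(SOCKET_PATH, 0o666)  # let the normal-user test runner connect
    server.listen(1)
    while True:
        conn, _ = server.accept()
        try:
            req = json.loads(conn.recv(4096).decode("utf-8"))
            rc = subprocess.call(ALLOWED[req["op"]](req))
            conn.sendall(json.dumps({"rc": rc}).encode("utf-8"))
        finally:
            conn.close()

def request(op, mountpoint):
    # What the unprivileged test runner would call instead of mounting
    # directly: send a request to the privileged service and return its rc.
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(SOCKET_PATH)
    client.sendall(json.dumps({"op": op, "mountpoint": mountpoint}).encode("utf-8"))
    return json.loads(client.recv(4096).decode("utf-8"))["rc"]

if __name__ == "__main__":
    serve()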

Cheers,
John