Re: vstart runner for cephfs tests

2015-07-23 Thread John Spray



On 23/07/15 12:56, Mark Nelson wrote:
I had similar thoughts on the benchmarking side, which is why I 
started writing cbt a couple years ago.  I needed the ability to 
quickly spin up clusters and run benchmarks on arbitrary sets of 
hardware.  The outcome isn't perfect, but it's been extremely useful 
for running benchmarks and sort of exists as a half-way point between 
vstart and teuthology.


The basic idea is that you give it a yaml file that looks a little bit 
like a teuthology yaml file and cbt will (optionally) build a cluster 
across a number of user defined nodes with pdsh, start various 
monitoring tools (this is ugly right now, I'm working on making it 
modular), and then sweep through user defined benchmarks and sets of 
parameter spaces.  I have a separate tool that will sweep through ceph 
parameters, create ceph.conf files for each space, and run cbt with 
each one, but the eventual goal is to integrate that into cbt itself.


Though I never really intended it to run functional tests, I just 
added something that looks very similar to the rados suite so I can 
benchmark ceph_test_rados for the new community lab hardware. I 
already had a mechanism to inject OSD down/out up/in events, so with a 
bit of squinting it can give you a very rough approximation of a 
workload using the osd thrasher.  If you are interested, I'd be game 
to see if we could integrate your cephfs tests as well (I eventually 
wanted to add cephfs benchmark capabilities anyway).


Cool - my focus is very much on tightening the code-build-test loop for 
developers, but I can see us needing to extend that into a 
code-build-test-bench loop as we do performance work on cephfs in the 
future.  Does cbt rely on having ceph packages built, or does it blast 
the binaries directly from src/ onto the test nodes?


John


Re: vstart runner for cephfs tests

2015-07-23 Thread Mark Nelson



On 07/23/2015 07:37 AM, John Spray wrote:



On 23/07/15 12:56, Mark Nelson wrote:

I had similar thoughts on the benchmarking side, which is why I
started writing cbt a couple years ago.  I needed the ability to
quickly spin up clusters and run benchmarks on arbitrary sets of
hardware.  The outcome isn't perfect, but it's been extremely useful
for running benchmarks and sort of exists as a half-way point between
vstart and teuthology.

The basic idea is that you give it a yaml file that looks a little bit
like a teuthology yaml file and cbt will (optionally) build a cluster
across a number of user defined nodes with pdsh, start various
monitoring tools (this is ugly right now, I'm working on making it
modular), and then sweep through user defined benchmarks and sets of
parameter spaces.  I have a separate tool that will sweep through ceph
parameters, create ceph.conf files for each space, and run cbt with
each one, but the eventual goal is to integrate that into cbt itself.

Though I never really intended it to run functional tests, I just
added something that looks very similar to the rados suite so I can
benchmark ceph_test_rados for the new community lab hardware. I
already had a mechanism to inject OSD down/out up/in events, so with a
bit of squinting it can give you a very rough approximation of a
workload using the osd thrasher.  If you are interested, I'd be game
to see if we could integrate your cephfs tests as well (I eventually
wanted to add cephfs benchmark capabilities anyway).


Cool - my focus is very much on tightening the code-build-test loop for
developers, but I can see us needing to extend that into a
code-build-test-bench loop as we do performance work on cephfs in the
future.  Does cbt rely on having ceph packages built, or does it blast
the binaries directly from src/ onto the test nodes?


cbt doesn't handle builds/installs at all, so it's probably not 
particularly helpful in this regard.  By default it assumes binaries are 
in /usr/bin, but you can optionally override that in the yaml.  My 
workflow is usually to:


1a) build ceph from src and distribute to other nodes (manually)
1b) run a shell script that installs a given release from gitbuilder on 
all nodes
2) run a cbt yaml file that targets /usr/local, the build dir, /usr/bin, 
etc.


Definitely would be useful to have something that makes 1a) better. 
Probably not cbt's job though.
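
For what it's worth, 1a) by hand boils down to something like the sketch below 
(hostnames, the binary list and the install paths are just placeholders for 
whatever your build and cluster actually use; pdcp is the parallel copy that 
ships with pdsh):

NODES=node1,node2,node3
cd ~/ceph/src && make -j8
# push the freshly built binaries and libraries to every node
pdcp -w $NODES ceph-osd ceph-mon ceph-mds /usr/local/bin/
pdcp -w $NODES .libs/librados.so.2* .libs/libcephfs.so.1* /usr/local/lib/
pdsh -w $NODES ldconfig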




John



Re: vstart runner for cephfs tests

2015-07-23 Thread Loic Dachary


On 23/07/2015 14:34, John Spray wrote: 
 
 On 23/07/15 12:23, Loic Dachary wrote:
 You may be interested by

 https://github.com/ceph/ceph/blob/master/src/test/ceph-disk-root.sh

 which is conditionally included

 https://github.com/ceph/ceph/blob/master/src/test/Makefile.am#L86

 by --enable-root-make-check

 https://github.com/ceph/ceph/blob/master/configure.ac#L414

 If you're reckless and trust the tests not to break (a crazy proposition by 
 definition IMHO ;-), you can

 make TESTS=test/ceph-disk-root.sh check

 If you want protection, you do the same in a docker container with

 test/docker-test.sh --os-type centos --os-version 7 --dev make 
 TESTS=test/ceph-disk-root.sh check

 I tried various strategies to make tests requiring root access more 
 accessible and less scary and that's the best compromise I found. 
 test/docker-test.sh is what the make check bot uses.
 
 Interesting, I didn't realise we already had root-ish tests in there.
 
 At some stage the need for root may go away in ceph-fuse, as in principle 
 fuse mount/unmounts shouldn't require root.  If not then putting an outer 
 docker wrapper around this could make sense, if we publish the built binaries 
 into the docker container via a volume or somesuch.  I am behind on 
 familiarizing myself with the dockerised tests.

The docker container runs from sources, not from packages. 

 
 When a test can be used both from sources and from teuthology, I found it 
 more convenient to have it in the qa/workunits directory which is available 
 in both environments. Who knows, maybe you will want a vstart based cephfs 
 test to run as part of make check, in the same way

 https://github.com/ceph/ceph/blob/master/src/test/cephtool-test-mds.sh

 does.
 
 Yes, this crossed my mind.  At the moment, even many of the quick 
 tests/cephfs tests take tens of seconds, so they are probably a bit too big 
 to go in a default make check, but for some of the really simple things that 
 are currently done in cephtool/test.sh, I would be tempted to move them into 
 the python world to make them a bit less fiddly.
 
 The test location is a bit challenging, because we essentially have two 
 not-completely-stable interfaces here, vstart and teuthology. Because 
 teuthology is the more complicated, for the moment it makes sense for the 
 tests to live in that git repo.  Long term it would be nice if fine-grained 
 functional tests lived in the same git repo as the code they're testing, but 
 I don't really have a plan for that right now outside of the 
 probably-too-radical step of merging ceph-qa-suite into the ceph repo.
 
 John

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: vstart runner for cephfs tests

2015-07-23 Thread Loic Dachary
Hi John,

You may be interested by 

https://github.com/ceph/ceph/blob/master/src/test/ceph-disk-root.sh

which is conditionally included 

https://github.com/ceph/ceph/blob/master/src/test/Makefile.am#L86

by --enable-root-make-check

https://github.com/ceph/ceph/blob/master/configure.ac#L414

If you're reckless and trust the tests not to break (a crazy proposition by 
definition IMHO ;-), you can

make TESTS=test/ceph-disk-root.sh check

If you want protection, you do the same in a docker container with

test/docker-test.sh --os-type centos --os-version 7 --dev make 
TESTS=test/ceph-disk-root.sh check

I tried various strategies to make tests requiring root access more accessible 
and less scary and that's the best compromise I found. test/docker-test.sh is 
what the make check bot uses.

When a test can be used both from sources and from teuthology, I found it more 
convenient to have it in the qa/workunits directory which is available in both 
environments. Who knows, maybe you will want a vstart based cephfs test to run 
as part of make check, in the same way 

https://github.com/ceph/ceph/blob/master/src/test/cephtool-test-mds.sh

does.

Cheers

On 23/07/2015 12:00, John Spray wrote:
 
 Audience: anyone working on cephfs, general testing interest.
 
 The tests in ceph-qa-suite/tasks/cephfs are growing in number, but are kind of
 inconvenient to run because they require teuthology (and therefore require 
 built packages, locked nodes, etc).  Most of them don't actually require 
 anything beyond what you already have in a vstart cluster, so I've adapted 
 them to optionally run that way.
 
 The idea is that we can iterate a lot faster when writing new tests (one less 
 excuse not to write them) and get better use out of the tests when debugging 
 things and testing fixes.  teuthology is fine for mass-running the nightlies 
 etc, but it's overkill for testing individual bits of MDS/client 
 functionality.
 
 The code is currently on the wip-vstart-runner ceph-qa-suite branch, and the 
 two magic commands are:
 
 1. Start a vstart cluster with a couple of MDSs, as your normal user:
 $ make -j4 rados ceph-fuse ceph-mds ceph-mon ceph-osd cephfs-data-scan 
 cephfs-journal-tool cephfs-table-tool && ./stop.sh ; rm -rf out dev ; MDS=2 
 OSD=3 MON=1 ./vstart.sh -d -n
 
 2. Invoke the test runner, as root (replace paths and test name as appropriate;
 leave off the test name to run everything):
 # PYTHONPATH=/home/jspray/git/teuthology/:/home/jspray/git/ceph-qa-suite/ 
 python /home/jspray/git/ceph-qa-suite/tasks/cephfs/vstart_runner.py 
 tasks.cephfs.test_strays.TestStrays.test_migration_on_shutdown
 
 test_migration_on_shutdown (tasks.cephfs.test_strays.TestStrays) ... ok
 
 --
 Ran 1 test in 121.982s
 
 OK
 
 
 ^^^ see!  two minutes, and no waiting for gitbuilders!
 
 The main caveat here is that it needs to run as root in order to 
 mount/unmount things, which is a little scary.  My plan is to split it out 
 into a little root service for doing mount operations, and then let the main 
 test part run as a normal user and call out to the mounter service when 
 needed.
 
 Cheers,
 John

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: vstart runner for cephfs tests

2015-07-23 Thread Mark Nelson

Hi John,

I had similar thoughts on the benchmarking side, which is why I started 
writing cbt a couple years ago.  I needed the ability to quickly spin up 
clusters and run benchmarks on arbitrary sets of hardware.  The outcome 
isn't perfect, but it's been extremely useful for running benchmarks and 
sort of exists as a half-way point between vstart and teuthology.


The basic idea is that you give it a yaml file that looks a little bit 
like a teuthology yaml file and cbt will (optionally) build a cluster 
across a number of user defined nodes with pdsh, start various 
monitoring tools (this is ugly right now, I'm working on making it 
modular), and then sweep through user defined benchmarks and sets of 
parameter spaces.  I have a separate tool that will sweep through ceph 
parameters, create ceph.conf files for each space, and run cbt with each 
one, but the eventual goal is to integrate that into cbt itself.
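
To give a flavour of it, a stripped-down job file might look roughly like the 
sketch below (field names are from memory and purely illustrative -- check the 
example configs in the cbt repo rather than treating this as the real schema):

cat > mytest.yaml <<'EOF'
cluster:
  user: 'cephtest'
  head: "node0"
  clients: ["node0"]
  osds: ["node1", "node2"]
  mons:
    node0:
      a: "node0:6789"
  osds_per_node: 1
  fs: 'xfs'
  iterations: 1
  use_existing: False
benchmarks:
  radosbench:
    op_size: [4194304, 4096]
    write_only: False
    time: 300
    concurrent_ops: [128]
EOF
# then point cbt at the job file and an archive directory for the results
./cbt.py --archive=/tmp/cbt-results mytest.yaml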


Though I never really intended it to run functional tests, I just added 
something that looks very similar to the rados suite so I can benchmark 
ceph_test_rados for the new community lab hardware. I already had a 
mechanism to inject OSD down/out up/in events, so with a bit of 
squinting it can give you a very rough approximation of a workload using 
the osd thrasher.  If you are interested, I'd be game to see if we could 
integrate your cephfs tests as well (I eventually wanted to add cephfs 
benchmark capabilities anyway).


Mark

On 07/23/2015 05:00 AM, John Spray wrote:


Audience: anyone working on cephfs, general testing interest.

The tests in ceph-qa-suite/tasks/cephfs are growing in number, but are kind
of inconvenient to run because they require teuthology (and therefore
require built packages, locked nodes, etc).  Most of them don't actually
require anything beyond what you already have in a vstart cluster, so
I've adapted them to optionally run that way.

The idea is that we can iterate a lot faster when writing new tests (one
less excuse not to write them) and get better use out of the tests when
debugging things and testing fixes.  teuthology is fine for mass-running
the nightlies etc, but it's overkill for testing individual bits of
MDS/client functionality.

The code is currently on the wip-vstart-runner ceph-qa-suite branch, and
the two magic commands are:

1. Start a vstart cluster with a couple of MDSs, as your normal user:
$ make -j4 rados ceph-fuse ceph-mds ceph-mon ceph-osd cephfs-data-scan
cephfs-journal-tool cephfs-table-tool && ./stop.sh ; rm -rf out dev ;
MDS=2 OSD=3 MON=1 ./vstart.sh -d -n

2. Invoke the test runner, as root (replace paths and test name as
appropriate; leave off the test name to run everything):
#
PYTHONPATH=/home/jspray/git/teuthology/:/home/jspray/git/ceph-qa-suite/
python /home/jspray/git/ceph-qa-suite/tasks/cephfs/vstart_runner.py
tasks.cephfs.test_strays.TestStrays.test_migration_on_shutdown

test_migration_on_shutdown (tasks.cephfs.test_strays.TestStrays) ... ok

--
Ran 1 test in 121.982s

OK


^^^ see!  two minutes, and no waiting for gitbuilders!

The main caveat here is that it needs to run as root in order to
mount/unmount things, which is a little scary.  My plan is to split it
out into a little root service for doing mount operations, and then let
the main test part run as a normal user and call out to the mounter
service when needed.

Cheers,
John


Re: vstart runner for cephfs tests

2015-07-23 Thread John Spray



On 23/07/15 12:23, Loic Dachary wrote:

You may be interested by

https://github.com/ceph/ceph/blob/master/src/test/ceph-disk-root.sh

which is conditionally included

https://github.com/ceph/ceph/blob/master/src/test/Makefile.am#L86

by --enable-root-make-check

https://github.com/ceph/ceph/blob/master/configure.ac#L414

If you're reckless and trust the tests not to break (a crazy proposition by 
definition IMHO ;-), you can

make TESTS=test/ceph-disk-root.sh check

If you want protection, you do the same in a docker container with

test/docker-test.sh --os-type centos --os-version 7 --dev make 
TESTS=test/ceph-disk-root.sh check

I tried various strategies to make tests requiring root access more accessible 
and less scary and that's the best compromise I found. test/docker-test.sh is 
what the make check bot uses.


Interesting, I didn't realise we already had root-ish tests in there.

At some stage the need for root may go away in ceph-fuse, as in 
principle fuse mount/unmounts shouldn't require root.  If not then 
putting an outer docker wrapper around this could make sense, if we 
publish the built binaries into the docker container via a volume or 
somesuch.  I am behind on familiarizing myself with the dockerised tests.
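
In principle the unprivileged flow would just be something along these lines 
(mount point, conf path and monitor address are the usual vstart-ish defaults, 
purely illustrative):

mkdir -p /tmp/cephfs_mnt
./ceph-fuse -c ./ceph.conf -m 127.0.0.1:6789 /tmp/cephfs_mnt   # as a normal user
# ... run tests ...
fusermount -u /tmp/cephfs_mnt   # unmount, also without root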



When a test can be used both from sources and from teuthology, I found it more 
convenient to have it in the qa/workunits directory which is available in both 
environments. Who knows, maybe you will want a vstart based cephfs test to run 
as part of make check, in the same way

https://github.com/ceph/ceph/blob/master/src/test/cephtool-test-mds.sh

does.


Yes, this crossed my mind.  At the moment, even many of the quick 
tests/cephfs tests take tens of seconds, so they are probably a bit too 
big to go in a default make check, but for some of the really simple 
things that are currently done in cephtool/test.sh, I would be tempted to 
move them into the python world to make them a bit less fiddly.


The test location is a bit challenging, because we essentially have two 
not-completely-stable interfaces here, vstart and teuthology. Because 
teuthology is the more complicated, for the moment it makes sense for 
the tests to live in that git repo.  Long term it would be nice if 
fine-grained functional tests lived in the same git repo as the code 
they're testing, but I don't really have a plan for that right now 
outside of the probably-too-radical step of merging ceph-qa-suite into 
the ceph repo.


John


RE: vstart runner for cephfs tests

2015-07-23 Thread Podoski, Igor
 -Original Message-
 From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
 ow...@vger.kernel.org] On Behalf Of Mark Nelson
 Sent: Thursday, July 23, 2015 2:51 PM
 To: John Spray; ceph-devel@vger.kernel.org
 Subject: Re: vstart runner for cephfs tests
 
 
 
 On 07/23/2015 07:37 AM, John Spray wrote:
 
 
  On 23/07/15 12:56, Mark Nelson wrote:
  I had similar thoughts on the benchmarking side, which is why I
  started writing cbt a couple years ago.  I needed the ability to
  quickly spin up clusters and run benchmarks on arbitrary sets of
  hardware.  The outcome isn't perfect, but it's been extremely useful
  for running benchmarks and sort of exists as a half-way point between
  vstart and teuthology.
 
  The basic idea is that you give it a yaml file that looks a little
  bit like a teuthology yaml file and cbt will (optionally) build a
  cluster across a number of user defined nodes with pdsh, start
  various monitoring tools (this is ugly right now, I'm working on
  making it modular), and then sweep through user defined benchmarks
  and sets of parameter spaces.  I have a separate tool that will sweep
  through ceph parameters, create ceph.conf files for each space, and
  run cbt with each one, but the eventual goal is to integrate that into cbt
 itself.
 
  Though I never really intended it to run functional tests, I just
  added something that looks very similar to the rados suite so I can
  benchmark ceph_test_rados for the new community lab hardware. I
  already had a mechanism to inject OSD down/out up/in events, so with
  a bit of squinting it can give you a very rough approximation of a
  workload using the osd thrasher.  If you are interested, I'd be game
  to see if we could integrate your cephfs tests as well (I eventually
  wanted to add cephfs benchmark capabilities anyway).
 
  Cool - my focus is very much on tightening the code-build-test loop
  for developers, but I can see us needing to extend that into a
  code-build-test-bench loop as we do performance work on cephfs in the
  future.  Does cbt rely on having ceph packages built, or does it blast
  the binaries directly from src/ onto the test nodes?
 
 cbt doesn't handle builds/installs at all, so it's probably not particularly 
 helpful
 in this regard.  By default it assumes binaries are in /usr/bin, but you can
 optionally override that in the yaml.  My workflow is usually to:
 
 1a) build ceph from src and distribute to other nodes (manually)
 1b) run a shell script that installs a given release from gitbuilder on all 
 nodes
 2) run a cbt yaml file that targets /usr/local, the build dir, /usr/bin, etc.
 
 Definitely would be useful to have something that makes 1a) better.
 Probably not cbt's job though.

About 1a)

In my test cluster I have an NFS server (on one node) sharing /home/ceph with 
the other nodes, with many ceph versions in it. In every version's subdirectory 
I run make install with DESTDIR pointing to a newly created BIN subdir.

So it looks like this:
/home/ceph/ceph-0.94.1/BIN 
ls BIN
etc
sbin
usr
var

Then I remove var, and run stow on every node to link the binaries and libs from 
the shared /home/ceph/ceph-version/BIN into '/', running 'ldconfig' at the end. 
Basically I can make changes on only one node and very quickly switch between 
ceph versions. So no ceph is installed on any node; the only ceph files local to 
each node live under the /var directory.

Of course when the NFS node fails, everything fails ... but I'm aware of that.

Check out stow.
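
Per version it boils down to roughly this (paths as in the example above; plain 
make/stow usage, nothing ceph-specific):

cd /home/ceph/ceph-0.94.1
make install DESTDIR=/home/ceph/ceph-0.94.1/BIN   # stage the full install tree
rm -rf BIN/var                                    # keep /var local to each node

# then on every node:
stow -d /home/ceph/ceph-0.94.1 -t / BIN
ldconfig
# switching versions is just 'stow -D' on the old BIN and 'stow' on the new one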

 
  John


Regards,
Igor.



Re: vstart runner for cephfs tests

2015-07-23 Thread Gregory Meno
On Thu, Jul 23, 2015 at 11:00:57AM +0100, John Spray wrote:
 
 Audience: anyone working on cephfs, general testing interest.
 
 The tests in ceph-qa-suite/tasks/cephfs are growing in number, but are kind of
 inconvenient to run because they require teuthology (and therefore require
 built packages, locked nodes, etc).  Most of them don't actually require
 anything beyond what you already have in a vstart cluster, so I've adapted
 them to optionally run that way.
 
 The idea is that we can iterate a lot faster when writing new tests (one
 less excuse not to write them) and get better use out of the tests when
 debugging things and testing fixes.  teuthology is fine for mass-running the
 nightlies etc, but it's overkill for testing individual bits of MDS/client
 functionality.
 
 The code is currently on the wip-vstart-runner ceph-qa-suite branch, and the
 two magic commands are:
 
 1. Start a vstart cluster with a couple of MDSs, as your normal user:
 $ make -j4 rados ceph-fuse ceph-mds ceph-mon ceph-osd cephfs-data-scan
 cephfs-journal-tool cephfs-table-tool && ./stop.sh ; rm -rf out dev ; MDS=2
 OSD=3 MON=1 ./vstart.sh -d -n
 
 2. Invoke the test runner, as root (replace paths and test name as appropriate;
 leave off the test name to run everything):
 # PYTHONPATH=/home/jspray/git/teuthology/:/home/jspray/git/ceph-qa-suite/
 python /home/jspray/git/ceph-qa-suite/tasks/cephfs/vstart_runner.py
 tasks.cephfs.test_strays.TestStrays.test_migration_on_shutdown
 
 test_migration_on_shutdown (tasks.cephfs.test_strays.TestStrays) ... ok
 
 --
 Ran 1 test in 121.982s
 
 OK
 
 
 ^^^ see!  two minutes, and no waiting for gitbuilders!

You are a testing hero John!

-G


vstart runner for cephfs tests

2015-07-23 Thread John Spray


Audience: anyone working on cephfs, general testing interest.

The tests in ceph-qa-suite/tasks/cephfs are growing in number, but are kind 
of inconvenient to run because they require teuthology (and therefore 
require built packages, locked nodes, etc).  Most of them don't actually 
require anything beyond what you already have in a vstart cluster, so 
I've adapted them to optionally run that way.


The idea is that we can iterate a lot faster when writing new tests (one 
less excuse not to write them) and get better use out of the tests when 
debugging things and testing fixes.  teuthology is fine for mass-running 
the nightlies etc, but it's overkill for testing individual bits of 
MDS/client functionality.


The code is currently on the wip-vstart-runner ceph-qa-suite branch, and 
the two magic commands are:


1. Start a vstart cluster with a couple of MDSs, as your normal user:
$ make -j4 rados ceph-fuse ceph-mds ceph-mon ceph-osd cephfs-data-scan 
cephfs-journal-tool cephfs-table-tool && ./stop.sh ; rm -rf out dev ; 
MDS=2 OSD=3 MON=1 ./vstart.sh -d -n


2. Invoke the test runner, as root (replace paths and test name as 
appropriate; leave off the test name to run everything):
# 
PYTHONPATH=/home/jspray/git/teuthology/:/home/jspray/git/ceph-qa-suite/ 
python /home/jspray/git/ceph-qa-suite/tasks/cephfs/vstart_runner.py 
tasks.cephfs.test_strays.TestStrays.test_migration_on_shutdown


test_migration_on_shutdown (tasks.cephfs.test_strays.TestStrays) ... ok

--
Ran 1 test in 121.982s

OK


^^^ see!  two minutes, and no waiting for gitbuilders!

The main caveat here is that it needs to run as root in order to 
mount/unmount things, which is a little scary.  My plan is to split it 
out into a little root service for doing mount operations, and then let 
the main test part run as a normal user and call out to the mounter 
service when needed.
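
The shape I have in mind is a tiny privileged helper that only does 
mount/unmount, something like this (completely illustrative, none of it exists 
yet):

# hypothetical sudoers fragment restricting the test user to the helper:
#   jspray ALL=(root) NOPASSWD: /usr/local/bin/cephfs-mount-helper
# the unprivileged runner would then shell out to it:
sudo /usr/local/bin/cephfs-mount-helper mount /tmp/cephfs_mnt
sudo /usr/local/bin/cephfs-mount-helper umount /tmp/cephfs_mnt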


Cheers,
John