> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Mark Nelson
> Sent: Thursday, July 23, 2015 2:51 PM
> To: John Spray; ceph-devel@vger.kernel.org
> Subject: Re: vstart runner for cephfs tests
> 
> 
> 
> On 07/23/2015 07:37 AM, John Spray wrote:
> >
> >
> > On 23/07/15 12:56, Mark Nelson wrote:
> >> I had similar thoughts on the benchmarking side, which is why I
> >> started writing cbt a couple years ago.  I needed the ability to
> >> quickly spin up clusters and run benchmarks on arbitrary sets of
> >> hardware.  The outcome isn't perfect, but it's been extremely useful
> >> for running benchmarks and sort of exists as a half-way point between
> >> vstart and teuthology.
> >>
> >> The basic idea is that you give it a yaml file that looks a little
> >> bit like a teuthology yaml file and cbt will (optionally) build a
> >> cluster across a number of user defined nodes with pdsh, start
> >> various monitoring tools (this is ugly right now, I'm working on
> >> making it modular), and then sweep through user defined benchmarks
> >> and sets of parameter spaces.  I have a separate tool that will sweep
> >> through ceph parameters, create ceph.conf files for each space, and
> >> run cbt with each one, but the eventual goal is to integrate that into cbt
> itself.
> >>
> >> Though I never really intended it to run functional tests, I just
> >> added something that looks very similar to the rados suite so I can
> >> benchmark ceph_test_rados for the new community lab hardware. I
> >> already had a mechanism to inject OSD down/out up/in events, so with
> >> a bit of squinting it can give you a very rough approximation of a
> >> workload using the osd thrasher.  If you are interested, I'd be game
> >> to see if we could integrate your cephfs tests as well (I eventually
> >> wanted to add cephfs benchmark capabilities anyway).
> >
> > Cool - my focus is very much on tightening the code-build-test loop
> > for developers, but I can see us needing to extend that into a
> > code-build-test-bench loop as we do performance work on cephfs in the
> > future.  Does cbt rely on having ceph packages built, or does it blast
> > the binaries directly from src/ onto the test nodes?
> 
> cbt doesn't handle builds/installs at all, so it's probably not particularly
> helpful in this regard.  By default it assumes binaries are in /usr/bin, but
> you can optionally override that in the yaml.  My workflow is usually to:
> 
> 1a) build ceph from src and distribute to other nodes (manually)
> 1b) run a shell script that installs a given release from gitbuilder on all 
> nodes
> 2) run a cbt yaml file that targets /usr/local, the build dir, /usr/bin, etc.
> 
> Definitely would be useful to have something that makes 1a) better.
> Probably not cbt's job though.

About 1a)

In my test cluster I have an NFS server (on one node) sharing /home/ceph with 
the other nodes, and it holds many Ceph versions. In each version's subdirectory 
I run "make install" with DESTDIR pointing to a newly created BIN subdirectory.

So it looks like this:
/home/ceph/ceph-0.94.1/BIN 
ls BIN
etc
sbin
usr
var

Then I remove var and run "stow" on every node to link the binaries and libs 
from the shared /home/ceph/ceph-version/BIN into '/', running 'ldconfig' at the 
end. Basically I can make changes on just one node and switch between Ceph 
versions very quickly. So no Ceph packages are installed on any node; the only 
node-local Ceph data is under the /var directory.
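
Roughly, the whole thing boils down to something like this (0.94.1 is just the 
example version from above, and the exact "make install" invocation may differ 
depending on your build setup):

  # on the NFS server: stage the build into a private DESTDIR
  cd /home/ceph/ceph-0.94.1
  make install DESTDIR=/home/ceph/ceph-0.94.1/BIN
  rm -rf /home/ceph/ceph-0.94.1/BIN/var     # keep /var node-local

  # on every node: symlink that tree into / and refresh the linker cache
  sudo stow -d /home/ceph/ceph-0.94.1 -t / BIN
  sudo ldconfig

  # switching away from a version later is just "stow -D" with the same args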

Of course, when the NFS node fails, everything fails ... but I'm aware of that.

Check out "stow".
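
And since pdsh came up earlier in the thread: if you already have it set up 
(plus passwordless sudo on the nodes), switching the whole cluster to a version 
in one shot could look roughly like this -- the hostnames are made up:

  VER=0.94.1
  # unstow the previously linked version first (stow -D) if one is active
  pdsh -w node[01-04] "sudo stow -d /home/ceph/ceph-$VER -t / BIN && sudo ldconfig"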

> >
> > John


Regards,
Igor.
