Ok, but I have kernel 3.19.0-39-generic, so the new version is supposed to
work, right? And I'm still getting issues while trying to map the RBD:

$ sudo rbd --cluster cephIB create e60host01vX --size 100G --pool rbd -c /etc/ceph/cephIB.conf
$ sudo rbd -p rbd bench-write e60host01vX --io-size 4096 --io-threads 1 --io-total 4096 --io-pattern rand -c /etc/ceph/cephIB.conf
bench-write  io_size 4096 io_threads 1 bytes 4096 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
elapsed:     0  ops:        1  ops/sec:    29.67  bytes/sec: 121536.32

$ sudo rbd --cluster cephIB map e60host01vX --pool rbd -c /etc/ceph/cephIB.conf
rbd: sysfs write failed
rbd: map failed: (5) Input/output error
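
Checking the kernel log right after the failed map should show why the sysfs
write failed; something along these lines (generic commands, nothing
Ceph-specific assumed):

$ dmesg | tail
$ lsmod | grep rbd    # confirm the rbd module is actually loaded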

$ sudo rbd -p rbd info e60host01vX -c /etc/ceph/cephIB.conf
rbd image 'e60host01vX':
    size 102400 MB in 25600 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.5f03238e1f29
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
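
Note that on a 3.19 kernel krbd only handles the layering feature (see Ilya's
note below), so the extra features listed above (exclusive-lock, object-map,
fast-diff, deep-flatten) are the likely cause of the (5) Input/output error on
map. A rough sketch of two workarounds, assuming this Ceph release has the
'rbd feature disable' subcommand and the '--image-feature' create option
(older releases spell these differently, e.g. a numeric --image-features
value):

# Strip the features krbd doesn't understand from the existing image
# (dependent features listed first).
$ sudo rbd --cluster cephIB -p rbd feature disable e60host01vX \
      deep-flatten fast-diff object-map exclusive-lock -c /etc/ceph/cephIB.conf

# ...or create a fresh image with layering only; 'e60host01vY' is just a
# placeholder name for illustration.
$ sudo rbd --cluster cephIB create e60host01vY --size 100G --pool rbd \
      --image-feature layering -c /etc/ceph/cephIB.conf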

Any other ideas about what could be the problem here?


German

2016-03-30 5:15 GMT-03:00 Ilya Dryomov <idryo...@gmail.com>:

> On Wed, Mar 30, 2016 at 3:03 AM, Jason Dillaman <dilla...@redhat.com>
> wrote:
> > Understood -- format 2 was promoted to the default image format starting
> with Infernalis (which not all users would have played with since it isn't
> LTS).  The defaults can be overridden via the command-line when creating
> new images or via the Ceph configuration file.
> >
> > I'll let Ilya provide input on which kernels support image format 2, but
> from a quick peek on GitHub it looks like support was added around the v3.8
> timeframe.
>
> Layering (i.e. format 2 with default striping parameters) is supported
> starting with 3.10.  We don't really support older kernels - backports
> are pretty much all 3.10+, etc.
>
> Thanks,
>
>                 Ilya
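
As for the configuration-file route Jason mentions, a minimal sketch, assuming
the upstream 'rbd default features' option (a bitmask where 1 means layering
only) applies to this release:

# In /etc/ceph/cephIB.conf, so newly created images get only the
# layering feature by default (sketch only; option name and value assumed).
[client]
rbd default features = 1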
