Hi,

I think you can specify the pool name in the client settings.
For example, in our environment:

# rbd ls
rbd: error opening pool rbd: (2) No such file or directory

# rbd ls -p block
f7470c3f-e051-4f3d-86ff-52e8ba78ac4a
022e9944-122c-4ad0-b652-9e52ba32e2c0

Here the pool name was specified with -p, and that works.
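
The same idea applies to the python bindings: open_ioctx() takes the pool
name, and list_pools() shows which pools actually exist, so there is no
need to guess. A minimal sketch (the pool name 'block' is from our
environment; substitute one of the names list_pools() prints):

import rados

# connect with the same conf file the CLI uses
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# list_pools() returns the pool names the cluster really has
print cluster.list_pools()

# open an I/O context on an explicit pool name; 'block' is just an
# example from our environment, use one of the names printed above
ioctx = cluster.open_ioctx('block')
ioctx.close()
cluster.shutdown()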



2016-06-07 0:01 GMT+08:00 strony zhang <strony.zh...@yahoo.com>:

> Hi Ken,
>
> Thanks for your reply.
> The ceph cluster runs well.
>
> :~$ sudo ceph -s
>     cluster 285441d6-c059-405d-9762-86bd91f279d0
>      health HEALTH_OK
>      monmap e1: 1 mons at {strony-pc=10.132.138.233:6789/0}
>             election epoch 9, quorum 0 strony-pc
>      osdmap e200: 2 osds: 2 up, 2 in
>             flags sortbitwise
>       pgmap v225126: 256 pgs, 1 pools, 345 bytes data, 10 objects
>             10326 MB used, 477 GB / 488 GB avail
>                  256 active+clean
>   client io 0 B/s rd, 193 op/s rd, 0 op/s wr
>
> $ ceph osd lspools
> 6 rbd,
>
> I previously deleted some pools, so the ID of the remaining pool, 'rbd',
> is now 6. I guess the client probably tries to access the first pool
> (ID 0) by default and then gets stuck. So, how can I change the pool ID
> back to '0'?
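>
> From what I read, clients are supposed to resolve pools by name rather
> than by ID, so maybe the ID itself is not the problem; pool_lookup() in
> the python bindings shows what the client resolves. A quick check,
> reusing the cluster1 session from my first mail (a sketch, untested):
>
> # the client maps the pool name to its ID on its own; this should
> # print 6 here, matching lspools above
> print cluster1.pool_lookup('rbd')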
>
> Thanks,
> Strony
>
>
> On Monday, June 6, 2016 1:46 AM, Ken Peng <k...@dnsbed.com> wrote:
>
>
> Hello,
>
> Does the ceph cluster work right? Run ceph -s and ceph -w to watch for
> more details.
>
> 2016-06-06 16:17 GMT+08:00 strony zhang <strony.zh...@yahoo.com>:
>
> Hi,
>
> I am new to ceph. I installed an all-in-one ceph cluster on host A, and
> then tried accessing it from another host B with librados and librbd
> installed.
>
> From host B, I run python to access the ceph cluster on host A.
> >>> import rados
> >>> cluster1 = rados.Rados(conffile='/etc/ceph/ceph.conf')
> >>> cluster1.connect()
> >>> print cluster1.get_fsid()
> 285441d6-c059-405d-9762-86bd91f279d0
> >>>
> >>> import rbd
> >>> rbd_inst = rbd.RBD()
> >>> ioctx = cluster1.open_ioctx('rbd')
> >>> rbd_inst.list(ioctx)
> .... stuck here; it never returns until the python process is killed
> manually.
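>
> One thing I plan to try is setting the librados op timeouts, so that the
> call fails with an error instead of hanging (a sketch; the option names
> are librados config keys and the 10-second values are arbitrary):
>
> import rados, rbd
> # same connection as above, but with client-side timeouts so a
> # blocked request returns an error instead of blocking forever
> cluster1 = rados.Rados(conffile='/etc/ceph/ceph.conf',
>                        conf={'rados_osd_op_timeout': '10',
>                              'rados_mon_op_timeout': '10'})
> cluster1.connect()
> ioctx = cluster1.open_ioctx('rbd')
> print rbd.RBD().list(ioctx)  # should now raise instead of hanging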
>
> But on host A, I don't find any error info.
> zq@zq-ubuntu:~$ rbd list -l
> NAME  SIZE PARENT FMT PROT LOCK
> z1   1024M          2
> z2   1024M          2
> z3   1024M          2
>
> The ceph.conf and ceph.client.admin.keyring on host B are the same as
> those on host A. Any comments are appreciated.
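>
> Since connect() and get_fsid() succeed, host B can clearly reach the
> monitor, so I wonder whether it can also reach the OSDs. A quick
> reachability check I can run from host B (a sketch; 6800 is the default
> start of the OSD port range, and the address is host A's from ceph -s):
>
> import socket
> # OSDs listen on ports from 6800 up by default; if this times out,
> # host B cannot reach the OSDs, which would explain rbd_inst.list()
> # hanging after a successful connect()
> s = socket.create_connection(('10.132.138.233', 6800), timeout=5)
> s.close()
> print 'OSD port reachable'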
>
> Thanks,
> Strony
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
