Re: [ceph-users] Infiniband backend OSD communication

2020-01-07 Thread Nathan Stratton
Ok, so ipoib is required...
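
For the archives, the resulting config would look roughly like this (a
sketch only; the subnets are placeholders, the cluster_network addresses are
the ones assigned to the IPoIB interface, e.g. ib0, and the two ms_* options
are the ones from the thread below):

[global]
    public_network  = 192.168.10.0/24   # ethernet frontend
    cluster_network = 192.168.20.0/24   # addresses live on the IPoIB interface (ib0)
    ms_cluster_type = async+rdma
    ms_async_rdma_device_name = mlx4_0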

><>
nathan stratton


On Mon, Jan 6, 2020 at 4:45 AM Wei Zhao  wrote:

> From my understanding, the basic idea is that Ceph exchanges RDMA
> information (QP, GID, and so on) over the IP address configured on the
> RDMA device, and then communicates over RDMA. But in my tests, there
> seemed to be some issues in that code.
>
> On Fri, Jan 3, 2020 at 2:24 AM Nathan Stratton 
> wrote:
> >
> > I am working on upgrading my current ethernet-only Ceph cluster to a
> > combined ethernet frontend and infiniband backend. From my research I
> > understand that I set:
> >
> > ms_cluster_type = async+rdma
> > ms_async_rdma_device_name = mlx4_0
> >
> > What I don't understand is how Ceph knows how to reach each OSD over
> > RDMA. Do I have to run IPoIB on top of infiniband and use that for OSD
> > addresses?
> >
> > Is there a way to use infiniband on the backend without IPoIB and just
> > use RDMA verbs?
> >
> > ><>
> > nathan stratton


[ceph-users] Infiniband backend OSD communication

2020-01-02 Thread Nathan Stratton
I am working on upgrading my current ethernet-only Ceph cluster to a
combined ethernet frontend and infiniband backend. From my research I
understand that I set:

ms_cluster_type = async+rdma
ms_async_rdma_device_name = mlx4_0

What I don't understand is how Ceph knows how to reach each OSD over
RDMA. Do I have to run IPoIB on top of infiniband and use that for OSD
addresses?

Is there a way to use infiniband on the backend without IPoIB and just use
RDMA verbs?

><>
nathan stratton


Re: [ceph-users] Ceph in a shared environment

2015-07-10 Thread Nathan Stratton
We do the same, so far no problems.



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Fri, Jul 10, 2015 at 6:51 AM, Jan Schermer j...@schermer.cz wrote:

 We run CEPH OSDs on the same hosts as QEMU/KVM with OpenStack. You need to
 segregate the processes so the OSDs have their dedicated cores and memory;
 other than that it works fine. Our MONs also run on the same hosts as the
 OpenStack controller nodes (L3 agents and such) - no problem here, you just
 need dedicated drives for their data.
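
 (A sketch of what I mean by segregating them, assuming a systemd-managed
 ceph-osd@.service; the core list and memory cap are placeholders you would
 size for your own hardware:)

 # /etc/systemd/system/ceph-osd@.service.d/pinning.conf
 [Service]
 CPUAffinity=0-3
 MemoryLimit=16G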

 Jan

 On 10 Jul 2015, at 12:28, Kris Gillespie kgilles...@bol.com wrote:

  Hi All,

  So this may have been asked but I’ve googled the crap out of this so
 maybe my google-fu needs work. Does anyone have any experience running a
 Ceph cluster with the Ceph daemons (mons/osds/rgw) running on the same
 hosts as other services (so say Docker containers, or really anything
 generating load). What has been your experience? Used cgroups or seen any
 reason to? Any performance issues? Troubleshooting a pain? Any other
 general observations?

  Just curious if anyone out there has done it and to what scale and what
 issues they’ve encountered.

  Cheers everyone

  Kris Gillespie | System Engineer | bol.com





Re: [ceph-users] accept: got bad authorizer

2014-10-09 Thread Nathan Stratton
Yep, that was it. My concern though is that one node with a bad clock was able
to lock up the whole 16-node cluster; should that be the case?
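
(For anyone hitting this later, a quick way to check it by hand; chrony is
shown here as an example, "ntpq -p" if you run ntpd. Note that ceph health
only reports clock skew between the mons, so a drifting OSD node like this
one will not necessarily show up there:)

# on the suspect node and on a known-good node, compare the offsets
chronyc tracking | egrep 'System time|Last offset'
# mon-side skew, if any, shows up in the health summary
ceph status | grep -i skew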



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Wed, Oct 8, 2014 at 6:48 PM, Gregory Farnum g...@inktank.com wrote:

 Check your clock sync on that node. That's the usual cause of this issue.
 -Greg


 On Wednesday, October 8, 2014, Nathan Stratton nat...@robotics.net
 wrote:

 I have one out of 16 of my OSDs doing something odd. The logs show some
 sort of authentication issue. If I restart the OSD things are fine, but in
 a few hours it happens again and I have to restart it to get things back up.

 2014-10-08 06:46:46.858260 7f43f62a0700  0 auth: could not find
 secret_id=221
 2014-10-08 06:46:46.858276 7f43f62a0700  0 cephx: verify_authorizer could
 not get service secret for service osd secret_id=221
  2014-10-08 06:46:46.858302 7f43f62a0700  0 -- 10.71.1.26:6800/22284 >>
  10.71.0.218:0/1002562 pipe(0x7c92800 sd=73 :6800 s=0 pgs=0 cs=0 l=1
  c=0x87b44c0).accept: got bad authorizer


 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com



 --
 Software Engineer #42 @ http://inktank.com | http://ceph.com



Re: [ceph-users] RBD on openstack glance+cinder CoW?

2014-10-08 Thread Nathan Stratton
On Tue, Oct 7, 2014 at 5:35 PM, Jonathan Proulx j...@jonproulx.com wrote:

 Hi All,

 We're running Firefly on the ceph side and Icehouse on the OpenStack
 side. I've pulled the recommended nova branch from
 https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse

 according to
 http://ceph.com/docs/master/rbd/rbd-openstack/#booting-from-a-block-device
 :

 When Glance and Cinder are both using Ceph block devices, the image
 is a copy-on-write clone, so it can create a new volume quickly

 I'm not seeing this, even though I have glance set up in such a way that
 nova does create copy-on-write clones when booting ephemeral instances
 of the same image.  Cinder downloads the glance RBD then pushes it
 back up as a full copy.

 Since Glance -> Nova is working (it has show_image_direct_url=True,
 etc.), I suspect a problem with my Cinder config; this is what I
 added for rbd support:

 [rbd]
 volume_driver=cinder.volume.drivers.rbd.RBDDriver
 rbd_pool=volumes
 rbd_ceph_conf=/etc/ceph/ceph.conf
 rbd_flatten_volume_from_snapshot=false
 rbd_max_clone_depth=5
 glance_api_version=2
 rbd_user=USER
 rbd_secret_uuid=UUID
 volume_backend_name=rbd

 Note it does *work*, it's just not doing CoW.  Am I missing something here?


I am running into the same thing; when I import, a temp file is created in
/var/lib/cinder/conversion. Everything works, it just is not CoW.
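
For what it's worth, two things seem worth double-checking here (treat this
as a sketch from my own poking around, not gospel): the Glance image has to
be raw (a qcow2 image forces the download/convert path regardless), and the
glance_api_version = 2 setting apparently needs to be in [DEFAULT] in
cinder.conf, not only in the backend section. Whether a volume really ended
up as a CoW clone is easy to see from its parent pointer:

# image format must be raw for cloning to kick in
glance image-show <image-id> | grep disk_format
# a cloned volume points back at the glance image as its parent
rbd -p volumes info volume-<volume-uuid> | grep parent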


Re: [ceph-users] Network hardware recommendations

2014-10-08 Thread Nathan Stratton
On Wed, Oct 8, 2014 at 8:15 AM, Massimiliano Cuttini m...@phoenixweb.it
wrote:

  If you want to build it up with Vyatta,
 this gives you the possibility of having a fully featured OS.
 What kind of hardware would you use to build up a switch?


Hard to beat the Quanta T3048-LY2: 48 x 10 gig and 4 x 40 gig ports. Same chip
as Cisco, Dell, HP, etc. Like I said, merchant silicon and white box switches
are the wave of the future. You can use Quanta OS or, what I recommend, get one
with the ONIE bootloader; then you can put Cumulus software on it for more
features or, if you are much more daring, flash it with BigSwitch and go
OpenFlow.


nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com


Re: [ceph-users] Network hardware recommendations

2014-10-08 Thread Nathan Stratton
On Wed, Oct 8, 2014 at 9:25 AM, Massimiliano Cuttini m...@phoenixweb.it
wrote:




 On 08/10/2014 14:39, Nathan Stratton wrote:

  On Wed, Oct 8, 2014 at 8:15 AM, Massimiliano Cuttini m...@phoenixweb.it
 wrote:

   If you want to build it up with Vyatta,
  this gives you the possibility of having a fully featured OS.
  What kind of hardware would you use to build up a switch?


   Hard to beat the Quanta T3048-LY2: 48 x 10 gig and 4 x 40 gig ports. Same
  chip as Cisco, Dell, HP, etc. Like I said, merchant silicon and white box
  switches are the wave of the future. You can use Quanta OS or, what I
  recommend, get one with the ONIE bootloader; then you can put Cumulus
  software on it for more features or, if you are much more daring, flash it
  with BigSwitch and go OpenFlow.

Is BigSwitch better than Vyatta, or just something different?


Different, BigSwitch is a SDN controller.


 I'm building an OpenStack+Ceph solution, so BigSwitch seems to fit better.


Depends on what you want to do. If you want to add SDN/NFV then yes it
makes sense.


 About the Quanta that you suggest... well, WOW!


:) Yep, amazing what you can get for under 5k.


 I also see the top solution, the T5032-LY6, and even this one is affordable
 (just $7200).


That switch is 40 gig, normally used in the spine/leaf role connecting the 10
gig switches together, unless, that is, you want to do 40 gig to the server. :)


 About Infiniband, what kind of white box switch would you suggest?


I am not aware of any white box infiniband vendors; it is a very different
space than ethernet.

-Nathan


[ceph-users] accept: got bad authorizer

2014-10-08 Thread Nathan Stratton
I have one out of 16 of my OSDs doing something odd. The logs show some
sort of authentication issue. If I restart the OSD things are fine, but in
a few hours it happens again and I have to restart it to get things back up.

2014-10-08 06:46:46.858260 7f43f62a0700  0 auth: could not find
secret_id=221
2014-10-08 06:46:46.858276 7f43f62a0700  0 cephx: verify_authorizer could
not get service secret for service osd secret_id=221
2014-10-08 06:46:46.858302 7f43f62a0700  0 -- 10.71.1.26:6800/22284 >>
10.71.0.218:0/1002562 pipe(0x7c92800 sd=73 :6800 s=0 pgs=0 cs=0 l=1
c=0x87b44c0).accept: got bad authorizer



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com


Re: [ceph-users] Centos 7 qemu

2014-10-06 Thread Nathan Stratton
SELinux is already disabled

[root@virt01a /]# setsebool -P virt_use_execmem 1
setsebool:  SELinux is disabled.




nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Mon, Oct 6, 2014 at 1:16 AM, Vladislav Gorbunov vadi...@gmail.com
wrote:

 Try to disable selinux or run
 setsebool -P virt_use_execmem 1


 2014-10-06 8:38 GMT+12:00 Nathan Stratton nat...@robotics.net:

 I did the same thing, built the RPMs, and they now show rbd support; however,
 when I try to start an image I get:

 2014-10-05 19:48:08.058+: 4524: error :
 qemuProcessWaitForMonitor:1889 : internal error: process exited while
 connecting to monitor: Warning: option deprecated, use lost_tick_policy
 property of kvm-pit instead.
 qemu-kvm: -drive
 file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
 could not open disk image
 rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
 Driver 'rbd' is not whitelisted

 I tried with and without auth.


 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc li...@kirneh.eu wrote:

  Hi,
 Centos 7 qemu out of the box does not support rbd.

 I had to build the package with rbd support manually with %define rhev 1
 in the qemu-kvm spec file. I also had to salvage some files from the src.rpm
 file which were missing from the centos git.


 On 2014.10.04 11:31, Ignazio Cassano wrote:

 Hi all,
 I'd like to know if centos 7 qemu and libvirt support rbd or if there
 are some extra packages needed.
 Regards

 Ignazio




Re: [ceph-users] Network hardware recommendations

2014-10-06 Thread Nathan Stratton
On Sun, Oct 5, 2014 at 11:19 PM, Ariel Silooy ar...@bisnis2030.com wrote:

 Hello fellow ceph user, right now we are researching ceph for our storage.

 We have a cluster of 3 OSD nodes (and 5 MONs) for our RBD disks, which for
 now we are using with the NFS proxy setup. On each OSD node we have 4x 1G
 Intel copper NICs (not sure about the model number, but I'll look it up in
 case anyone asks). Up until now we have been testing on one NIC as we don't
 have (yet) a network switch with link aggregation/teaming support.

 I suppose since it's Intel we should try to get jumbo frames working too,
 so I hope someone can recommend a good switch that is known to work with
 most Intel NICs.

 We are looking for recommendations on what kind of network switch, network
 layout, brand, model, whatever, as we are (kind of) new to building our
 own storage and have no experience with Ceph.

 We are also looking at the feasibility of using fibre-channel instead of
 copper, but we don't know if it would help much in terms of the
 speed-improvement/$ ratio since we already have 4 NICs on each OSD. Should
 we go for it?


I really would think about something faster than gig ethernet. Merchant
silicon is changing the world; take a look at guys like Quanta. I just
bought two T3048-LY2 switches with Cumulus software for under 6k each. That
gives you 48 x 10 gig ports and 4 x 40 gig ports to play with; to save on
optics, use SFP+ copper cables. If you want to save even more money, go with
used 10 gig infiniband off eBay; you can do that for under $100 a port.


nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com


Re: [ceph-users] Centos 7 qemu

2014-10-06 Thread Nathan Stratton
Ah! That's it! I built qemu with rbd but not the rbd kernel module.

I tried to build that on the stock el7 kernel and got:

/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//inode.c: In function
'splice_dentry':
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//inode.c:904:193: error:
'struct dentry' has no member named 'd_count'
   dout(dn %p (%d) spliced with %p (%d) ...
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//inode.c:904:218: error:
'struct dentry' has no member named 'd_count'
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//mds_client.c: In
function 'ceph_mdsc_build_path':
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//mds_client.c:1555:180:
error: 'struct dentry' has no member named 'd_count'
   dout(build_path on %p %d built %llx '%.*s'\n, ...
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//mds_client.c: In
function 'encode_caps_cb':
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//mds_client.c:2484:3:
error: implicit declaration of function 'lock_flocks'
[-Werror=implicit-function-declaration]
   lock_flocks();
/root/rpmbuild/BUILD/ceph-3.10.24-dc9ac62/fs/ceph//mds_client.c:2486:3:
error: implicit declaration of function 'unlock_flocks'
[-Werror=implicit-function-declaration]
   unlock_flocks();
cc1: some warnings being treated as errors



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Mon, Oct 6, 2014 at 8:22 AM, Ignazio Cassano ignaziocass...@gmail.com
wrote:

 Hi,
 but what kernel version are you using?
 I think the rbd kernel module is not in the centos 7 kernel.
 Have you built it from sources?


 2014-10-06 14:08 GMT+02:00 Nathan Stratton nat...@robotics.net:

 SELinux is already disabled

 [root@virt01a /]# setsebool -P virt_use_execmem 1
 setsebool:  SELinux is disabled.



 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Mon, Oct 6, 2014 at 1:16 AM, Vladislav Gorbunov vadi...@gmail.com
 wrote:

 Try to disable selinux or run
 setsebool -P virt_use_execmem 1


 2014-10-06 8:38 GMT+12:00 Nathan Stratton nat...@robotics.net:

  I did the same thing, built the RPMs, and they now show rbd support; however,
  when I try to start an image I get:

 2014-10-05 19:48:08.058+: 4524: error :
 qemuProcessWaitForMonitor:1889 : internal error: process exited while
 connecting to monitor: Warning: option deprecated, use lost_tick_policy
 property of kvm-pit instead.
 qemu-kvm: -drive
 file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
 could not open disk image
 rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
 Driver 'rbd' is not whitelisted

  I tried with and without auth.


 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com

 On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc li...@kirneh.eu wrote:

  Hi,
 Centos 7 qemu out of the box does not support rbd.

  I had to build the package with rbd support manually with %define rhev 1
  in the qemu-kvm spec file. I also had to salvage some files from the src.rpm
  file which were missing from the centos git.


 On 2014.10.04 11:31, Ignazio Cassano wrote:

 Hi all,
 I'd like to know if centos 7 qemu and libvirt support rbd or if there
 are some extra packages needed.
 Regards

 Ignazio




Re: [ceph-users] Centos 7 qemu

2014-10-06 Thread Nathan Stratton
On Mon, Oct 6, 2014 at 8:22 AM, Ignazio Cassano ignaziocass...@gmail.com
wrote:

 Hi,
 but what kernel version are you using?
 I think the rbd kernel module is not in the centos 7 kernel.
 Have you built it from sources?


I built it from source, and I see the modules loaded:

[root@virt01a secrets]# lsmod
Module  Size  Used by
rbd64357  0
libceph   225744  1 rbd

Still see:

2014-10-06 10:49:17.333 2096 TRACE nova.compute.utils [instance:
c709eeca-ebc5-4027-8245-d630e131c96b] qemu-kvm: -drive
file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
could not open disk image
rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
Driver 'rbd' is not whitelisted


Re: [ceph-users] libvirt: Driver 'rbd' is not whitelisted

2014-10-06 Thread Nathan Stratton
You need to make sure rbd is in your whitelist when you run ./configure, as
well as having rbd enabled:

--block-drv-rw-whitelist=qcow2,raw,file,host_device,nbd,iscsi,gluster,rbd
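
For reference, a fuller configure line along those lines might look like
this (a sketch; the exact flag set depends on the qemu version and what
else you need built in):

./configure --enable-rbd \
    --block-drv-rw-whitelist=qcow2,raw,file,host_device,nbd,iscsi,gluster,rbd \
    --block-drv-ro-whitelist=vmdk,vhdx,vpc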



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Sun, Oct 5, 2014 at 4:36 PM, Nathan Stratton nat...@robotics.net wrote:

  I am trying to get ceph working with openstack and libvirt, but running
  into the error "Driver 'rbd' is not whitelisted". Google is not providing
 much help so I thought I would try the list.

 The image is in the volume:

 [root@virt01a ~]# rbd list volumes
 volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968

 To rule out openstack I am using the XML file it was using:

 <domain type="kvm">
   <uuid>c1a5aa5d-ec15-41d4-ad40-e9cc5fd8ddc4</uuid>
   <name>instance-000a</name>
   <memory>2097152</memory>
   <vcpu>2</vcpu>
   <sysinfo type="smbios">
     <system>
       <entry name="manufacturer">Fedora Project</entry>
       <entry name="product">OpenStack Nova</entry>
       <entry name="version">2014.1.2-1.el7.centos</entry>
       <entry name="serial">----002590dab186</entry>
       <entry name="uuid">c1a5aa5d-ec15-41d4-ad40-e9cc5fd8ddc4</entry>
     </system>
   </sysinfo>
   <os>
     <type>hvm</type>
     <boot dev="hd"/>
     <smbios mode="sysinfo"/>
   </os>
   <features>
     <acpi/>
     <apic/>
   </features>
   <clock offset="utc">
     <timer name="pit" tickpolicy="delay"/>
     <timer name="rtc" tickpolicy="catchup"/>
     <timer name="hpet" present="no"/>
   </clock>
   <cpu mode="host-model" match="exact"/>
   <devices>
     <disk type="network" device="disk">
       <driver name="qemu" type="raw" cache="none"/>
       <source protocol="rbd"
         name="volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968">
         <host name="10.71.0.75" port="6789"/>
         <host name="10.71.0.76" port="6789"/>
         <host name="10.71.0.77" port="6789"/>
         <host name="10.71.0.78" port="6789"/>
       </source>
       <auth username="volumes">
         <secret type="ceph" uuid="54aafbbc-ced8-4401-a096-3047994caa67"/>
       </auth>
       <target bus="virtio" dev="vda"/>
       <serial>205e6cb4-15c1-4f8d-8bf4-aedcc1549968</serial>
     </disk>
     <interface type="bridge">
       <mac address="fa:16:3e:3d:d5:01"/>
       <model type="virtio"/>
       <source bridge="private"/>
       <filterref filter="nova-instance-instance-000a-fa163e3dd501"/>
     </interface>
     <serial type="pty"/>
     <input type="tablet" bus="usb"/>
     <graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
     <video>
       <model type="cirrus"/>
     </video>
   </devices>
 </domain>

  I turned on debug in libvirt, but it shows the same error line:

 2014-10-05 20:14:28.348+: 5078: error : qemuProcessWaitForMonitor:1889
 : internal error: process exited while connecting to monitor: Warning:
 option deprecated, use lost_tick_policy property of kvm-pit instead.
 qemu-kvm: -drive
 file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
 could not open disk image
 rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
 Driver 'rbd' is not whitelisted


 
 nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
 www.broadsoft.com



[ceph-users] libvirt: Driver 'rbd' is not whitelisted

2014-10-05 Thread Nathan Stratton
I am trying to get ceph working with openstack and libvirt, but running
into the error "Driver 'rbd' is not whitelisted". Google is not providing
much help so I thought I would try the list.

The image is in the volume:

[root@virt01a ~]# rbd list volumes
volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968

To rule out openstack I am using the XML file it was using:

<domain type="kvm">
  <uuid>c1a5aa5d-ec15-41d4-ad40-e9cc5fd8ddc4</uuid>
  <name>instance-000a</name>
  <memory>2097152</memory>
  <vcpu>2</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">Fedora Project</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2014.1.2-1.el7.centos</entry>
      <entry name="serial">----002590dab186</entry>
      <entry name="uuid">c1a5aa5d-ec15-41d4-ad40-e9cc5fd8ddc4</entry>
    </system>
  </sysinfo>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  </clock>
  <cpu mode="host-model" match="exact"/>
  <devices>
    <disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none"/>
      <source protocol="rbd"
        name="volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968">
        <host name="10.71.0.75" port="6789"/>
        <host name="10.71.0.76" port="6789"/>
        <host name="10.71.0.77" port="6789"/>
        <host name="10.71.0.78" port="6789"/>
      </source>
      <auth username="volumes">
        <secret type="ceph" uuid="54aafbbc-ced8-4401-a096-3047994caa67"/>
      </auth>
      <target bus="virtio" dev="vda"/>
      <serial>205e6cb4-15c1-4f8d-8bf4-aedcc1549968</serial>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:3d:d5:01"/>
      <model type="virtio"/>
      <source bridge="private"/>
      <filterref filter="nova-instance-instance-000a-fa163e3dd501"/>
    </interface>
    <serial type="pty"/>
    <input type="tablet" bus="usb"/>
    <graphics type="vnc" autoport="yes" keymap="en-us" listen="0.0.0.0"/>
    <video>
      <model type="cirrus"/>
    </video>
  </devices>
</domain>

I turned on debug in libvirt, but it shows the same error line:

2014-10-05 20:14:28.348+: 5078: error : qemuProcessWaitForMonitor:1889
: internal error: process exited while connecting to monitor: Warning:
option deprecated, use lost_tick_policy property of kvm-pit instead.
qemu-kvm: -drive
file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
could not open disk image
rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
Driver 'rbd' is not whitelisted



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com


Re: [ceph-users] Centos 7 qemu

2014-10-05 Thread Nathan Stratton
I did the same thing, built the RPMs, and they now show rbd support; however,
when I try to start an image I get:

2014-10-05 19:48:08.058+: 4524: error : qemuProcessWaitForMonitor:1889
: internal error: process exited while connecting to monitor: Warning:
option deprecated, use lost_tick_policy property of kvm-pit instead.
qemu-kvm: -drive
file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
could not open disk image
rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
Driver 'rbd' is not whitelisted

I tried with and without auth.
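
For anyone else trying this, the rebuild was roughly the following (a
sketch; package and spec file names may differ between point releases):

yum-builddep qemu-kvm
yumdownloader --source qemu-kvm
rpm -ivh qemu-kvm-*.src.rpm
# set "%define rhev 1" in ~/rpmbuild/SPECS/qemu-kvm.spec, then:
rpmbuild -ba ~/rpmbuild/SPECS/qemu-kvm.spec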



nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc li...@kirneh.eu wrote:

  Hi,
 Centos 7 qemu out of the box does not support rbd.

 I had to build the package with rbd support manually with %define rhev 1 in
 the qemu-kvm spec file. I also had to salvage some files from the src.rpm file
 which were missing from the centos git.


 On 2014.10.04 11:31, Ignazio Cassano wrote:

 Hi all,
 I'd like to know if centos 7 qemu and libvirt support rbd or if there are
 some extra packages needed.
 Regards

 Ignazio




[ceph-users] Ovirt

2014-05-07 Thread Nathan Stratton
Now that everyone will be one big happy family, any news on ceph support in
ovirt?


nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com


Re: [ceph-users] Hardware: SFP+ or 10GBase-T

2013-10-24 Thread Nathan Stratton
On Thu, Oct 24, 2013 at 9:48 AM, Mark Nelson mark.nel...@inktank.com wrote:
 Ceph does work with IPoIB. We've got some people working on rsocket support,
 and Mellanox just opensourced VMA, so there are some options on the
 infiniband side if you want to go that route. With QDR and IPoIB we have
 been able to push about 2.4 GB/s per node. No idea how SDR would do though.

That is great news!

 Honestly I wouldn't worry about it too much.  We have bigger latency dragons
 to slay. :)

Ok, this is what I thought, but wanted to make sure.

 Just FYI, we haven't done a whole lot of optimization work on SSDs yet, so
 if you are shooting for really high IOPS be prepared, as it's still kind of
 the wild west. :)  We've got a couple of people working on different projects
 that we hope will help here, but there's a lot of tuning work to be done
 still. :)

Understood, we don't need huge amounts of space, so the 240 GB SSDs
were just a bit more than the SAS drives. Though depending on the code, I
guess I could run into wear issues with SSDs versus SAS/SATA because of
frequent writes.
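
(If it helps, an easy way to keep an eye on that; the SMART attribute names
vary by vendor, so the grep pattern is just an example:)

smartctl -A /dev/sdb | egrep -i 'wear|percent|total_lbas_written'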

-- 

nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com


Re: [ceph-users] Hardware: SFP+ or 10GBase-T

2013-10-24 Thread Nathan Stratton
On Thu, Oct 24, 2013 at 11:19 AM, Kyle Bader kyle.ba...@gmail.com wrote:
 If you are talking about the links from the nodes with OSDs to their
 ToR switches then I would suggest going with Twinax cables. Twinax
 doesn't go very far but it's really durable and uses less power than
 10GBase-T. Here's a blog post that goes into more detail:

 http://etherealmind.com/difference-twinax-category-6-10-gigabit-ethernet/

 I would probably go with the Arista 7050-S over the 7050-T and use
 twinax for ToR to OSD node links and SFP+SR uplinks to spine switches
 if you need longer runs.

So I totally understand that it's less power, but I find that hard to
justify when the cost jumps to $300 more per port. With dual
ports it's going to take a long time to make that up in power savings.



-- 

nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com