Re: [ceph-users] Centos 7 qemu

2014-10-05 Thread Vladislav Gorbunov
Try to disable SELinux, or run:
setsebool -P virt_use_execmem 1
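
If you want to confirm that SELinux really is what's blocking qemu-kvm before
flipping booleans, something like this should tell you (assumes the audit
daemon is running; setenforce 0 is only a temporary test):

  getenforce                    # current mode (Enforcing/Permissive)
  setenforce 0                  # go permissive temporarily, then retry the guest
  ausearch -m avc -ts recent    # any denials logged against qemu-kvm?
  setenforce 1                  # back to enforcing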


2014-10-06 8:38 GMT+12:00 Nathan Stratton :

> I did the same thing, built the RPMs, and they now show rbd support;
> however, when I try to start an image I get:
>
> 2014-10-05 19:48:08.058+: 4524: error : qemuProcessWaitForMonitor:1889
> : internal error: process exited while connecting to monitor: Warning:
> option deprecated, use lost_tick_policy property of kvm-pit instead.
> qemu-kvm: -drive
> file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
> could not open disk image
> rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
> Driver 'rbd' is not whitelisted
>
> I tried with and without auth.
>
>
> ><>
> nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
> www.broadsoft.com
>
> On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc  wrote:
>
>>  Hi,
>> Centos 7 qemu out of the box does not support rbd.
>>
>> I had to build the package with rbd support manually, with "%define rhev 1" in
>> the qemu-kvm spec file. I also had to salvage some files from the src.rpm file
>> which were missing from the CentOS git.
>>
>>
>> On 2014.10.04 11:31, Ignazio Cassano wrote:
>>
>> Hi all,
>> I'd like to know if CentOS 7 qemu and libvirt support rbd, or if there are
>> some extra packages.
>> Regards
>>
>> Ignazio
>>
>>
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Network hardware recommendations

2014-10-05 Thread Christian Balzer

Hello,

On Mon, 06 Oct 2014 10:19:28 +0700 Ariel Silooy wrote:

> Hello fellow Ceph users, right now we are researching Ceph for our
> storage.
> 
> We have a cluster of 3 OSD nodes (and 5 MONs) for our RBD disks, which for
> now we are exposing through an NFS proxy setup. On each OSD node we have
> 4x 1G Intel copper NICs (not sure about the model number, but I'll look it
> up in case anyone asks). Up until now we have been testing on one NIC, as
> we don't yet have a network switch with link aggregation/teaming support.
> 
> I suppose since it's Intel we should try to get jumbo frames working too,
> so I hope someone can recommend a good switch that is known to work well
> with most Intel NICs.
>
Any decent switch with LACP will do, really.
By that I mean Cisco, Brocade, etc.

But that won't give you redundancy if a switch fails; see below.
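
For what it's worth, the host side of an LACP bond (with jumbo frames) is
only a few lines on CentOS/RHEL 7 -- a rough sketch, the interface names and
addresses below are made up and the switch ports have to be configured for
802.3ad as well:

  # /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative only)
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
  MTU=9000
  IPADDR=192.168.10.11
  PREFIX=24
  ONBOOT=yes

  # each slave interface (ifcfg-em1, ifcfg-em2, ...) then only needs
  # MASTER=bond0, SLAVE=yes, ONBOOT=yes and the same MTU=9000.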

> We are looking for recommendations on what kind of network switch,
> network layout, brand, model, whatever... as we are (kind of) new to
> building our own storage and have no experience with Ceph.
>
 
TRILL-based switches ( http://en.wikipedia.org/wiki/TRILL_(computing) )
(we have some Brocade VDX ones) have the advantage that they can do LACP
across 2 switches.
That means you get full speed while both switches are running and still get
redundancy (at half speed) if one goes down.
They are probably too pricey for a 1 Gb/s environment, but that's for
you to investigate and decide.

Otherwise you'd wind up with something like 2 normal switches and half
your possible speed, as one link is always just on standby.

Segregating client and replication traffic (public/cluster network)
probably won't make much sense: any decent switch will be able to
handle the bandwidth of all ports, and with a combined network (2
active links) you get the potential benefit of higher read speeds for
clients.
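
(For reference, if you did want to split them later, it is just two lines in
ceph.conf on every node -- the subnets below are made-up examples:)

  [global]
      public network  = 192.168.10.0/24   # client <-> OSD/MON traffic
      cluster network = 192.168.20.0/24   # OSD <-> OSD replication/recovery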

> We are also looking into the feasibility of using fibre-channel instead
> of copper, but we don't know if it would help much in terms of
> speed-improvement/$ ratio, since we already have 4 NICs on each OSD node.
> Should we go for it?
>
Why would you?
For starters, I think you mean fiber optics, as Fibre Channel is something
else. ^o^
Fiber optics only make sense when you're going longer distances than your
cluster size suggests.

If you're looking for something that is both faster and less expensive
than 10 Gb/s Ethernet, investigate InfiniBand.

Christian

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Network hardware recommendations

2014-10-05 Thread Ariel Silooy

Hello fellow Ceph users, right now we are researching Ceph for our storage.

We have a cluster of 3 OSD nodes (and 5 MONs) for our RBD disks, which for
now we are exposing through an NFS proxy setup. On each OSD node we have 4x
1G Intel copper NICs (not sure about the model number, but I'll look it up
in case anyone asks). Up until now we have been testing on one NIC, as we
don't yet have a network switch with link aggregation/teaming support.


I suppose since it's Intel we should try to get jumbo frames working too,
so I hope someone can recommend a good switch that is known to work well
with most Intel NICs.


We are looking for recommendations on what kind of network switch,
network layout, brand, model, whatever... as we are (kind of) new to
building our own storage and have no experience with Ceph.


We are also looking into the feasibility of using fibre-channel instead of
copper, but we don't know if it would help much in terms of
speed-improvement/$ ratio, since we already have 4 NICs on each OSD node.
Should we go for it?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph SSD array with Intel DC S3500's

2014-10-05 Thread Christian Balzer
On Mon, 6 Oct 2014 14:59:02 +1300 Andrew Thrift wrote:

> Hi Mark,
> 
> Would you see any benefit in using an Intel P3700 NVMe drive as a journal
> for, say, 6x Intel S3700 OSDs?
> 
I don't wanna sound facetious, but buy some, find out and tell us. ^o^

Seriously, common sense might suggest it would be advantageous, but all the
recent posts about using one SSD to journal for another SSD showed it to be
slower than just having the journal on the same OSD SSD.

YMMV.
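
If anyone does try it, the usual quick sanity check of a candidate journal
device is a single-threaded O_DSYNC write test -- a sketch only, the device
name below is an example and the test will overwrite whatever you point it
at:

  fio --name=journal-test --filename=/dev/nvme0n1 \
      --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based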

Christian

> 
> 
> On Fri, Oct 3, 2014 at 6:58 AM, Mark Nelson 
> wrote:
> 
> > On 10/02/2014 12:48 PM, Adam Boyhan wrote:
> >
> >> Hey everyone, loving Ceph so far!
> >>
> >
> > Hi!
> >
> >
> >
> >> We are looking to roll out a Ceph cluster with all SSDs.  Our
> >> application is around 30% writes and 70% reads, random IO.  The plan is
> >> to start with roughly 8 servers with 8x 800GB Intel DC S3500s per
> >> server.  I wanted to get some input on the use of the DC S3500. Seeing
> >> that we are primarily a read environment, I was thinking we could
> >> easily get away with the S3500 instead of the S3700, but I am unsure.
> >> Obviously the price point of the S3500 is very attractive, but if they
> >> start failing on us too soon, it might not be worth the savings.  My
> >> largest concern is the journaling of Ceph, so maybe I could use the
> >> S3500s for the bulk of the data and utilize an S3700 for the
> >> journaling?
> >>
> >
> > I'd suggest if you are using SSDs for OSDs anyway, you are better off
> > just putting the journal on the SSD so you don't increase the number
> > of devices per OSD that can cause failure.  In terms of the S3500 vs
> > the S3700, it's all a numbers game.  Figure out how much data you
> > expect to write, how many drives you have, what the expected write
> > endurance of each drive is, replication, journaling, etc, and figure
> > out what you need! :)
> >
> > The S3500 may be just fine, but it depends entirely on your write
> > workload.
> >
> >
> >> I appreciate the input!
> >>
> >> Thanks All!
> >>
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >


-- 
Christian Balzer        Network/Systems Engineer
ch...@gol.com   Global OnLine Japan/Fusion Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph SSD array with Intel DC S3500's

2014-10-05 Thread Andrew Thrift
Hi Mark,

Would you see any benefit in using an Intel P3700 NVMe drive as a journal
for, say, 6x Intel S3700 OSDs?



On Fri, Oct 3, 2014 at 6:58 AM, Mark Nelson  wrote:

> On 10/02/2014 12:48 PM, Adam Boyhan wrote:
>
>> Hey everyone, loving Ceph so far!
>>
>
> Hi!
>
>
>
>> We are looking to roll out a Ceph cluster with all SSDs.  Our
>> application is around 30% writes and 70% reads, random IO.  The plan is
>> to start with roughly 8 servers with 8x 800GB Intel DC S3500s per
>> server.  I wanted to get some input on the use of the DC S3500.  Seeing
>> that we are primarily a read environment, I was thinking we could easily
>> get away with the S3500 instead of the S3700, but I am unsure.  Obviously
>> the price point of the S3500 is very attractive, but if they start
>> failing on us too soon, it might not be worth the savings.  My largest
>> concern is the journaling of Ceph, so maybe I could use the S3500s for
>> the bulk of the data and utilize an S3700 for the journaling?
>>
>
> I'd suggest if you are using SSDs for OSDs anyway, you are better off just
> putting the journal on the SSD so you don't increase the number of devices
> per OSD that can cause failure.  In terms of the S3500 vs the S3700, it's
> all a numbers game.  Figure out how much data you expect to write, how many
> drives you have, what the expected write endurance of each drive is,
> replication, journaling, etc, and figure out what you need! :)
>
> The S3500 may be just fine, but it depends entirely on your write workload.
>
>
>> I appreciate the input!
>>
>> Thanks All!
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
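
To put rough numbers on that "numbers game" (every figure below is an
illustrative assumption -- check Intel's spec sheet for the real TBW rating
of the 800GB S3500):

  cluster writes/day = client writes/day x replicas x 2 (journal + data on the same SSD)
  e.g. 2 TB/day x 3 x 2                  = 12 TB/day across the cluster
  per drive (8 servers x 8 SSDs = 64)    = 12 TB / 64 ~ 190 GB/day
  at a rated endurance of ~450 TBW       : 450 / 0.19 ~ 2400 days, i.e. roughly 6.5 years

If the real client write volume is much higher than that, the S3700's larger
endurance budget starts to pay for itself.
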
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 qemu

2014-10-05 Thread Nathan Stratton
I did the same thing, built the RPMs, and they now show rbd support;
however, when I try to start an image I get:

2014-10-05 19:48:08.058+: 4524: error : qemuProcessWaitForMonitor:1889
: internal error: process exited while connecting to monitor: Warning:
option deprecated, use lost_tick_policy property of kvm-pit instead.
qemu-kvm: -drive
file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
could not open disk image
rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
Driver 'rbd' is not whitelisted

I tried with and without auth.
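
As far as I can tell, the "not whitelisted" message comes from a block-driver
whitelist baked into the RHEL/CentOS qemu-kvm build (a --block-drv-*whitelist
configure flag in the spec file), not from rbd support being missing
altogether. Two hedged ways to poke at it -- paths and flag spellings may
differ between package versions:

  # does the rebuilt binary list rbd among its supported formats?
  qemu-img --help | grep rbd

  # what did the spec whitelist at configure time?
  grep -i 'block-drv' ~/rpmbuild/SPECS/qemu-kvm.spec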


><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com

On Sun, Oct 5, 2014 at 3:51 PM, Henrik Korkuc  wrote:

>  Hi,
> Centos 7 qemu out of the box does not support rbd.
>
> I had to build the package with rbd support manually, with "%define rhev 1" in
> the qemu-kvm spec file. I also had to salvage some files from the src.rpm file
> which were missing from the CentOS git.
>
>
> On 2014.10.04 11:31, Ignazio Cassano wrote:
>
> Hi all,
> I'd like to know if CentOS 7 qemu and libvirt support rbd, or if there are
> some extra packages.
> Regards
>
> Ignazio
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] libvirt: Driver 'rbd' is not whitelisted

2014-10-05 Thread Nathan Stratton
I am trying to get Ceph working with OpenStack and libvirt, but I am running
into the error "Driver 'rbd' is not whitelisted". Google is not providing
much help, so I thought I would try the list.

The image is in the volumes pool:

[root@virt01a ~]# rbd list volumes
volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968
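
(One way to take both OpenStack and libvirt out of the picture is to point
qemu-img at the image directly -- a sketch only, the id and conf path below
are assumptions for this setup:)

  qemu-img info rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:conf=/etc/ceph/ceph.conf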

To rule out OpenStack, I am using the XML file it was using:


[The libvirt domain XML was pasted here, but the archive stripped the markup.
The values that survive show a domain with UUID
c1a5aa5d-ec15-41d4-ad40-e9cc5fd8ddc4, name instance-000a, 2097152 KiB of
memory, 2 vCPUs, OpenStack Nova 2014.1.2-1.el7.centos SMBIOS metadata, an hvm
OS type, and a virtio disk (serial 205e6cb4-15c1-4f8d-8bf4-aedcc1549968)
backed by the same rbd volume that appears in the error below.]

I turned on debugging in libvirt, but it shows the same error:

2014-10-05 20:14:28.348+: 5078: error : qemuProcessWaitForMonitor:1889
: internal error: process exited while connecting to monitor: Warning:
option deprecated, use lost_tick_policy property of kvm-pit instead.
qemu-kvm: -drive
file=rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=205e6cb4-15c1-4f8d-8bf4-aedcc1549968,cache=none:
could not open disk image
rbd:volumes/volume-205e6cb4-15c1-4f8d-8bf4-aedcc1549968:id=volumes:key=AQCMrSlUSJvTLxAAO9U+3IZQSkLU8a3iWj7T5Q==:auth_supported=cephx\;none:mon_host=10.71.0.75\:6789\;10.71.0.76\:6789\;10.71.0.77\:6789\;10.71.0.78\:6789:
Driver 'rbd' is not whitelisted


><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Centos 7 qemu

2014-10-05 Thread Henrik Korkuc
Hi,
Centos 7 qemu out of the box does not support rbd.

I had to build the package with rbd support manually, with "%define rhev 1"
in the qemu-kvm spec file. I also had to salvage some files from the src.rpm
file, which were missing from the CentOS git.
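
For the record, the rebuild went roughly like this (a sketch from memory --
exact package names and paths may differ on your box):

  yum install -y rpm-build yum-utils
  yumdownloader --source qemu-kvm
  rpm -ivh qemu-kvm-*.src.rpm
  # edit ~/rpmbuild/SPECS/qemu-kvm.spec and set:  %define rhev 1
  yum-builddep -y ~/rpmbuild/SPECS/qemu-kvm.spec
  rpmbuild -ba ~/rpmbuild/SPECS/qemu-kvm.spec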

On 2014.10.04 11:31, Ignazio Cassano wrote:
>
> Hi all,
> I'd like to know if CentOS 7 qemu and libvirt support rbd, or if there
> are some extra packages.
> Regards
>
> Ignazio
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com