Re: [ceph-users] CephFS Pool Specification?

2013-09-25 Thread Sage Weil
On Wed, 25 Sep 2013, Aaron Ten Clay wrote:
> Hi all,
> 
> Does anyone know how to specify which pool the mds and CephFS data will be
> stored in?
> 
> After creating a new cluster, the pools "data", "metadata", and "rbd" all
> exist but with pg count too small to be useful. The documentation indicates
> the pg count can be set only at pool creation time,

This is no longer true. Can you tell us where you read it so we can fix 
the documentation?

 ceph osd pool set data pg_num 1234
 ceph osd pool set data pgp_num 1234

Repeat for metadata and/or rbd with an appropriate pg count.
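The thread doesn't say how to choose the count, so here is a hedged sketch of a common rule of thumb (not from this thread): target roughly 100 PGs per OSD divided by the replica count, rounded up to the next power of two. The OSD and replica numbers below are illustrative.

```shell
# Rule-of-thumb pg count sketch; osds and replicas are example values.
osds=12
replicas=3
target=$(( osds * 100 / replicas ))   # 400 in this example
pg_num=1
while [ "$pg_num" -lt "$target" ]; do
  pg_num=$(( pg_num * 2 ))            # round up to a power of two
done
echo "$pg_num"                        # 512 for these inputs
# Against a live cluster, you would then run per pool:
#   ceph osd pool set data pg_num  "$pg_num"
#   ceph osd pool set data pgp_num "$pg_num"
```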

> so I am working under the assumption I must create a new pool with a 
> larger pg count and use that for CephFS and the mds storage.

You can also create additional data pools and map directories to them, but 
this probably isn't what you need (yet).
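For completeness, a hedged sketch of the directory-to-pool mapping Sage mentions, using dumpling-era commands. The pool name, pool id, and mountpoint are invented for illustration, and the exact `cephfs` option spelling may differ (see cephfs(8)); the commands are echoed so the sequence is visible without a live cluster.

```shell
# Hypothetical pool name and id; run the echoed commands on a real cluster.
pool=cephfs_fast
pool_id=3   # look up the real id in 'ceph osd dump'
echo "ceph osd pool create $pool 512 512"                 # extra data pool
echo "ceph mds add_data_pool $pool_id"                    # register it with the MDS
echo "cephfs /mnt/ceph/fastdir set_layout --pool $pool_id # pin a directory to it"
```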

sage


 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Could radosgw disable S3 authentication?

2013-09-25 Thread david zhang
Hi ceph-users,

Could someone offer some suggestions? Anything will be appreciated. Thanks!



On Wed, Sep 25, 2013 at 8:06 PM, david zhang wrote:

> Hi ceph-users,
>
> I see the RADOS Gateway (RGW) can be either authenticated or unauthenticated,
> per http://ceph.com/docs/master/radosgw/s3/authentication/, but there are no
> details about how to disable authentication.
>
> So is there any way to do it? Thanks for sharing.
>
> --
> Regards,
> Zhi
>



-- 
Regards,
Zhi


Re: [ceph-users] CephFS Pool Specification?

2013-09-25 Thread Yan, Zheng
On Thu, Sep 26, 2013 at 9:57 AM, Aaron Ten Clay  wrote:
> Hi all,
>
> Does anyone know how to specify which pool the mds and CephFS data will be
> stored in?
>
> After creating a new cluster, the pools "data", "metadata", and "rbd" all
> exist but with pg count too small to be useful. The documentation indicates
> the pg count can be set only at pool creation time, so I am working under
> the assumption I must create a new pool with a larger pg count and use that
> for CephFS and the mds storage.

The documentation you read is outdated. You can increase the pg count with:

#ceph osd pool set <pool> pg_num xxx
#ceph osd pool set <pool> pgp_num xxx


Yan, Zheng


[ceph-users] CephFS Pool Specification?

2013-09-25 Thread Aaron Ten Clay
Hi all,

Does anyone know how to specify which pool the mds and CephFS data will be
stored in?

After creating a new cluster, the pools "data", "metadata", and "rbd" all
exist but with pg count too small to be useful. The documentation indicates
the pg count can be set only at pool creation time, so I am working under
the assumption I must create a new pool with a larger pg count and use that
for CephFS and the mds storage.

Thanks!
-Aaron


Re: [ceph-users] CephFS multi-tenancy and OpenStack Manila

2013-09-25 Thread Sage Weil
On Thu, 26 Sep 2013, Blair Bethwaite wrote:
> Hi there,
> Now that there is a fledgling shared filesystem project starting up for
> OpenStack (see: https://launchpad.net/manila) I'm wondering whether there
> has been any progress towards full multi-tenancy for CephFS, i.e., the
> ability for clients to mount their own CephFS (complete with separate
> metadata) and with their own unique identity namespace?

It depends on what you mean by 'unique identity namespace'.  CephFS 
blindly stores uid/gid for files without knowing anything about the users 
they map to, just like NFSv3 would.  Simply mounting different directories 
for different tenants will work fine; the different trees will not overlap 
or interact unless some other client mounts a parent directory and does 
rename or link or something.

I suspect the key missing feature that people will want is a way to 
enforce that a given client (key) can only mount a specific 
subdirectory.  There is a blueprint for this, but nothing has happened 
there in a while.  It would be a pretty easy project to pick up.
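A hedged illustration of the per-directory approach Sage describes: each tenant kernel-mounts its own subtree. The monitor address, tenant names, paths, and secretfile locations are all invented for this sketch, and note that nothing yet stops a key from mounting "/" (that enforcement is the missing feature mentioned above). The command is built and printed rather than executed.

```shell
# Build the per-tenant mount command as a dry run; drop the echo-style
# indirection and run the printed command on a real client.
MON=10.0.0.2:6789   # hypothetical monitor address
build_mount() {
  printf 'mount -t ceph %s:/tenants/%s /mnt/%s -o name=%s,secretfile=/etc/ceph/%s.secret\n' \
    "$MON" "$1" "$1" "$1" "$1"
}
build_mount tenant-a   # prints the mount command for tenant-a
```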

sage


[ceph-users] CephFS multi-tenancy and OpenStack Manila

2013-09-25 Thread Blair Bethwaite
Hi there,

Now that there is a fledgling shared filesystem project starting up for
OpenStack (see: https://launchpad.net/manila) I'm wondering whether there
has been any progress towards full multi-tenancy for CephFS, i.e., the
ability for clients to mount their own CephFS (complete with separate
metadata) and with their own unique identity namespace?

-- 
Cheers,
~Blairo


Re: [ceph-users] Ceph write performance and my Dell R515's

2013-09-25 Thread Mark Nelson

On 09/25/2013 06:46 PM, Quenten Grasso wrote:

> G'day Mark,
>
> I stumbled across an older thread it looks like you were involved with,
> about CentOS and poor sequential write performance on the R515s.
>
> Were you using CentOS or Ubuntu on your server at the time? (I'm wondering
> if this could be related to Ubuntu.)


Our R515s have been running precise, but with various different kernels 
over the last year.




> http://marc.info/?t=13481911702&r=1&w=2
>
> Also I tried, as you suggested, to put the raid controller into JBOD mode,
> but no joy. I also tried cross-flashing the card, as it's apparently a 9260,
> but we don't have any spare slots outside of the storage slot that the raid
> controller cables can reach, so that was a non-event :(
>
> If you want to give it a try, you'll need access to longer cables and/or
> other servers you can put the Perc H700 into.
>
> I downloaded this flashing kit from here (it has all of the tools), grabbed
> a FreeDOS USB, and copied it all onto that:
>
> http://forums.laptopvideo2go.com/topic/29166-sas2108-lsi-9260-based-firmware-files/
>
> Then grabbed the latest 9260 firmware from:
>
> http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/12.13.0-0154_SAS_2108_Fw_Image_APP2.130.383-2315.zip


LSI cards are kind of goofy.  There are apparently two different levels 
of flashing you can do, though the lower-level one appears to be a 
much better kept secret, shrouded in mystery (at least to me).  I 
also flashed one of our H700s but sadly didn't see much change in 
performance.  Apparently that might not matter, though, if the lower-level 
firmware is still Dell's.  I admit this is all half hearsay, so I have no 
idea how accurate it is.


We did end up putting an areca controller in the R515s and saw some 
improvement for large reads/writes (and with small IOs, sometimes worse 
performance!), so the controller is having an effect.  In both cases 
though, the system was quite a bit slower than a supermicro node with an 
intel processor and no expander backplane.  I suspect (though have not 
proven) that it has less to do with the CPU and more to do with the 
expander/drive/controller combination.


At some point I want to see if we can put the H700 in our supermicro 
node and see what happens, or get some breakout cables and an extra 
power supply and directly connect the drives to the controller in the R515.






Re: [ceph-users] Ceph write performance and my Dell R515's

2013-09-25 Thread Quenten Grasso
G'day Mark,

I stumbled across an older thread it looks like you were involved with,
about CentOS and poor sequential write performance on the R515s.

Were you using CentOS or Ubuntu on your server at the time? (I'm wondering
if this could be related to Ubuntu.)

http://marc.info/?t=13481911702&r=1&w=2

Also I tried, as you suggested, to put the raid controller into JBOD mode,
but no joy. I also tried cross-flashing the card, as it's apparently a 9260,
but we don't have any spare slots outside of the storage slot that the raid
controller cables can reach, so that was a non-event :(

If you want to give it a try, you'll need access to longer cables and/or
other servers you can put the Perc H700 into.

I downloaded this flashing kit from here (it has all of the tools), grabbed
a FreeDOS USB, and copied it all onto that:

http://forums.laptopvideo2go.com/topic/29166-sas2108-lsi-9260-based-firmware-files/

Then grabbed the latest 9260 firmware from,

http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/12.13.0-0154_SAS_2108_Fw_Image_APP2.130.383-2315.zip


*** Steps to Cross Flash ***
 Disclaimer you do this at your own risk, I take no responsibility if you 
brick your card, Warranty, etc 

In a Dell R515, if you write the SBR of an LSI card (i.e. the 9260) and
reboot the system, the system will be halted, as it's now a non-Dell card in
the storage slot. However, if you attempt to flash the LSI firmware onto the
Perc H700 without the correct SBR, it seems it won't flash correctly.

So if you have longer cables, or another server that's not a Dell to try the
H700 in, you can try to cross-flash the card.
(FYI, if you're trying to do this in a Dell and you fudge up, you can recover
your system/raid card by plugging it into another PCI-e slot and reapplying
the Dell H700 SBR/Firmware.)

Now I'll assume you have one raid controller in your system, so you only
have adapter 0.

1) Back up your SBR in case you need to restore it, e.g.:

megarec -readsbr 0 prch700.sbr

2) Write the SBR of the card you want to flash, e.g.:

megarec -writesbr 0 sbr9260.bin

3) Erase the raid controller BIOS/firmware:

megarec -cleanflash 0

4) Reboot.

5) Flash the new firmware:

megarec -m0flash 0 mr2108fw.rom

6) Reboot & done.

Also, if your command errors out halfway through flashing/erasing, run it again.

Regards,
Quenten Grasso

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Sunday, 22 September 2013 10:40 PM
Cc: ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] Ceph write performance and my Dell R515's

On 09/22/2013 03:12 AM, Quenten Grasso wrote:
>
> Hi All,
>
> I'm finding my write performance is less than I would have expected. 
> After spending a considerable amount of time testing several 
> different configurations, I can never seem to break over ~360MB/s 
> write, even when using tmpfs for journaling.
>
> So I've purchased 3x Dell R515's with 1 x AMD 6C CPU with 12 x 3TB SAS 
> & 2 x 100GB Intel DC S3700 SSD's & 32GB Ram with the Perc H710p Raid 
> controller and Dual Port 10GBE Network Cards.
>
> So first up, I realise the SSDs were a mistake; I should have bought 
> the 200GB ones, as they have considerably better write throughput:
> ~375 MB/s vs 200 MB/s.
>
> So to our Nodes Configuration,
>
> 2 x 3TB disks in RAID1 for OS/MON & 1 partition for an OSD; 12 disks 
> each in a single-disk RAID0 (JBOD fashion) with a 1MB stripe size.
>
> (The stripe size was particularly important: I found the stripe size 
> matters considerably even on a single-disk RAID0, contrary 
> to what you might read on the internet.)
>
> Also, each disk is configured with write-back cache enabled and 
> read-ahead disabled.
>
> For Networking, All nodes are connected via LACP bond with L3 hashing 
> and using iperf I can get up to 16gbit/s tx and rx between the nodes.
>
> OS: Ubuntu 12.04.3 LTS w/ Kernel 3.10.12-031012-generic (had to 
> upgrade kernel due to 10Gbit Intel NIC's driver issues)
>
> So this gives me 11 OSD's & 2 SSD's Per Node.
>

I'm a bit leery about that 1 OSD on the RAID1. It may be fine, but you 
definitely will want to do some investigation to make sure that OSD isn't 
holding the other ones back. iostat or collectl might be useful, along with the 
ceph osd admin socket and the dump_ops_in_flight and dump_historic_ops commands.
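The admin-socket commands Mark refers to, spelled out as a dry run (the socket path is the usual Linux default and the OSD id is an example; adjust both for your host, then run the printed commands directly).

```shell
# Print the admin-socket invocations instead of running them, since they
# need a live OSD. Drop the echo to execute for real.
sock=/var/run/ceph/ceph-osd.0.asok
for cmd in dump_ops_in_flight dump_historic_ops; do
  echo "ceph --admin-daemon $sock $cmd"
done
```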

> Next I've tried several different configurations which I'll briefly 
> describe 2 of which below,
>
> 1)Cluster Configuration 1,
>
> 33 OSD's with 6x SSD's as Journals, w/ 15GB Journals on SSD.
>
> # ceph osd pool create benchmark1 1800 1800
>
> # rados bench -p benchmark1 180 write --no-cleanup
>
> --
>
> Maintaining 16 concurrent writes of 4194304 bytes for up to 180 
> seconds or 0 objects
>
> Total time run: 180.250417
>
> Total writes made: 10152
>
> Write size: 4194304
>
> Bandwidth (MB/sec): 225.287
>
> Stddev Bandwidth: 35.0897
>
> Max 

[ceph-users] RBD on CentOS 6

2013-09-25 Thread John-Paul Robinson
Hi,

We've been working with Ceph 0.56 on Ubuntu 12.04 and are able to
create, map, and mount Ceph block devices via the RBD kernel module. We
have a CentOS 6.4 box on which we would like to do the same.

http://ceph.com/docs/next/install/os-recommendations/

The OS recommendations state that we should be at kernel v3.4.20 or better.

Does anyone have any recommendations for or against using a CentOS6.4
platform to work with RBD in the kernel?

We're assuming we will have to upgrade the kernel to 3.4.20 or better
(if possible).
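A hedged helper for the "3.4.20 or better" recommendation: a portable version comparison you can run on the box (it relies on GNU `sort -V`). Stock CentOS 6 ships a 2.6.32 kernel, so this check will fail there until the kernel is upgraded.

```shell
# Compare the running kernel against the documented minimum for krbd.
version_ge() {
  # true if $1 >= $2 when compared as version strings
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
if version_ge "$(uname -r | cut -d- -f1)" 3.4.20; then
  echo "running kernel meets the 3.4.20 recommendation"
else
  echo "running kernel is older than 3.4.20 -- upgrade before using the rbd module"
fi
```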

Thanks,

~jpr


[ceph-users] failure starting radosgw after setting up object storage

2013-09-25 Thread Gruher, Joseph R
Hi all-

I am following the object storage quick start guide.  I have a cluster with two 
OSDs and have followed the steps on both.  Both are failing to start radosgw 
but each in a different manner.  All the previous steps in the quick start 
guide appeared to complete successfully.  Any tips on how to debug from here?  
Thanks!


OSD1:

ceph@cephtest05:/etc/ceph$ sudo /etc/init.d/radosgw start
ceph@cephtest05:/etc/ceph$

ceph@cephtest05:/etc/ceph$ sudo /etc/init.d/radosgw status
/usr/bin/radosgw is not running.
ceph@cephtest05:/etc/ceph$

ceph@cephtest05:/etc/ceph$ cat /var/log/ceph/radosgw.log
ceph@cephtest05:/etc/ceph$


OSD2:

ceph@cephtest06:/etc/ceph$ sudo /etc/init.d/radosgw start
Starting client.radosgw.gateway...
2013-09-25 14:03:01.235789 7f713d79d780 -1 WARNING: libcurl doesn't support 
curl_multi_wait()
2013-09-25 14:03:01.235797 7f713d79d780 -1 WARNING: cross zone / region 
transfer performance may be affected
ceph@cephtest06:/etc/ceph$

ceph@cephtest06:/etc/ceph$ sudo /etc/init.d/radosgw status
/usr/bin/radosgw is not running.
ceph@cephtest06:/etc/ceph$

ceph@cephtest06:/etc/ceph$ cat /var/log/ceph/radosgw.log
2013-09-25 14:03:01.235760 7f713d79d780  0 ceph version 0.67.3 
(408cd61584c72c0d97b774b3d8f95c6b1b06341a), process radosgw, pid 13187
2013-09-25 14:03:01.235789 7f713d79d780 -1 WARNING: libcurl doesn't support 
curl_multi_wait()
2013-09-25 14:03:01.235797 7f713d79d780 -1 WARNING: cross zone / region 
transfer performance may be affected
2013-09-25 14:03:01.245786 7f713d79d780  0 librados: client.radosgw.gateway 
authentication error (1) Operation not permitted
2013-09-25 14:03:01.246526 7f713d79d780 -1 Couldn't init storage provider 
(RADOS)
ceph@cephtest06:/etc/ceph$
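A hedged first debugging step for the "authentication error (1) Operation not permitted" above: compare the key the monitors hold for client.radosgw.gateway with the one in the local keyring (the keyring path shown is the usual default from the quick start; yours may differ). Commands are printed as a dry run; drop the echoes to execute against the cluster.

```shell
# Dry-run sketch of the cephx checks; run the printed commands for real.
user=client.radosgw.gateway
keyring=/etc/ceph/keyring.radosgw.gateway   # assumed default path
echo "ceph auth get $user              # key as the monitors know it"
echo "sudo cat $keyring                # key the radosgw daemon reads"
echo "ceph auth add $user -i $keyring  # (re)register the key if missing"
```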


For reference, I think cluster health is OK:

ceph@cephtest06:/etc/ceph$ sudo ceph status
  cluster a45e6e54-70ef-4470-91db-2152965deec5
   health HEALTH_WARN clock skew detected on mon.cephtest03, mon.cephtest04
   monmap e1: 3 mons at 
{cephtest02=10.0.0.2:6789/0,cephtest03=10.0.0.3:6789/0,cephtest04=10.0.0.4:6789/0},
 election epoch 6, quorum 0,1,2 cephtest02,cephtest03,cephtest04
   osdmap e9: 2 osds: 2 up, 2 in
pgmap v439: 192 pgs: 192 active+clean; 0 bytes data, 72548 KB used, 1998 GB 
/ 1999 GB avail
   mdsmap e1: 0/0/1 up

ceph@cephtest06:/etc/ceph$ sudo ceph health
HEALTH_WARN clock skew detected on mon.cephtest03, mon.cephtest04


Re: [ceph-users] Question regarding plugin class

2013-09-25 Thread Gregory Farnum
On Wed, Sep 25, 2013 at 6:40 AM, Chen, Ching-Cheng (KFRM 1)
 wrote:
> Hi:
>
>
>
> I have a question about using the class plugin API.
>
>
>
> We were finally able to make a test plugin class work, and were able to
> invoke the exec() call and execute our test plugin class successfully.
>
> However, we are having a hard time figuring out which object this plugin
> class is being run on at the OSD. I can see there are class APIs to get the
> attribute, header, value, and even the omap for this object, but we couldn't
> find any class API to query which object this plugin is running on.
>
> For example, cls_cxx_getxattr() gives you the attribute and
> cls_cxx_map_get_all_vals() gives you the omap.
>
> We'd like to know how we can obtain the name of the object this plugin is
> running on. We have a feeling we might be able to get it from the
> cls_method_context_t, but couldn't figure out how.

Huh, you're right; we appear not to expose that right now. If you look
at the class_api.cc file you'll see they're turning that
cls_method_context_t* into a ReplicatedPG::OpContext*. You can patch
that to add a cls_current_object_name() function by doing the same and
then going down the OpContext chain (op->obs->oi->soid.oid) — pull
requests welcome! :)
Or if you want to be really naughty you could do the same inside of
your own class, but that would be a bit fragile if the layout changes
in the OSD and your class code is built against the old one.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Alfredo Deza
On Wed, Sep 25, 2013 at 9:31 AM, Guang  wrote:
> Thanks for the reply!
>
> I don't know the reason, but I worked around this issue by adding a new entry 
> in /etc/hosts, something like 'web2   {ip_address_of_web2}', and it works.
>
> I am not sure if that is due to some mis-config on my end of the deployment 
> script; I will investigate further.
>

That is certainly not ideal, but I'm still very curious whether you were able
to ssh to the 'web2' host normally without doing anything to /etc/hosts.

Hope you can find what was missing so we can fix it on our end :)

> Thanks all for the help!
>
> Guang
>
> On Sep 25, 2013, at 8:38 PM, Alfredo Deza wrote:
>
>> On Wed, Sep 25, 2013 at 5:08 AM, Guang  wrote:
>>> Thanks Wolfgang.
>>>
>>> -bash-4.1$ ping web2
>>> PING web2 (10.193.244.209) 56(84) bytes of data.
>>> 64 bytes from web2 (10.193.244.209): icmp_seq=1 ttl=64 time=0.505 ms
>>> 64 bytes from web2 (10.193.244.209): icmp_seq=2 ttl=64 time=0.194 ms
>>> ...
>>>
>>> [I omit part of the host name].
>>>
>>> It can ping to the host and I actually used ceph-deploy to install ceph onto
>>> the web2 remote host…
>>>
>>
>> This is very unexpected; it most definitely sounds like at some point
>> web2 is not resolvable (as the error says), but you are also right that
>> you initiated the deployment correctly, with ceph-deploy doing work on
>> the remote end.
>>
>> Are you able to SSH directly to this host from where you are executing
>> ceph-deploy? (same user/login)
>>
>>
>>
>>> Thanks,
>>> Guang
>>>
>>>
>>> Date: Wed, 25 Sep 2013 10:29:14 +0200
>>> From: Wolfgang Hennerbichler 
>>> To: 
>>> Subject: Re: [ceph-users] Ceph deployment issue in physical hosts
>>> Message-ID: <52429eda.8070...@risc-software.at>
>>> Content-Type: text/plain; charset="ISO-8859-1"
>>>
>>>
>>>
>>>
>>> On 09/25/2013 10:03 AM, Guang wrote:
>>>
>>> Hi ceph-users,
>>>
>>> I deployed a cluster successfully in VMs, and today I tried to deploy a
>>> cluster in physical nodes. However, I came across a problem when I started
>>> creating a monitor.
>>>
>>>
>>> -bash-4.1$ ceph-deploy mon create x
>>>
>>> 
>>>
>>> ssh: Could not resolve hostname web2: Name or service not known
>>>
>>> Does anyone come across the same issue? Looks like I mis-configured the
>>> network environment?
>>>
>>>
>>> The machine you run ceph-deploy on doesn't know "who" web2 is. If this
>>> command succeeds: "ping web2" then ceph deploy will at least be able to
>>> contact that host.
>>>
>>> hint: look at your /etc/hosts file.
>>>
>>> Thanks,
>>>
>>> Guang
>>>
>>>
>>> Wolfgang
>>>
>>>
>


[ceph-users] Question regarding plugin class

2013-09-25 Thread Chen, Ching-Cheng (KFRM 1)
Hi:

I have a question about using the class plugin API.

We were finally able to make a test plugin class work, and were able to
invoke the exec() call and execute our test plugin class successfully.

However, we are having a hard time figuring out which object this plugin
class is being run on at the OSD. I can see there are class APIs to get the
attribute, header, value, and even the omap for this object, but we couldn't
find any class API to query which object this plugin is running on.

For example, cls_cxx_getxattr() gives you the attribute and
cls_cxx_map_get_all_vals() gives you the omap.

We'd like to know how we can obtain the name of the object this plugin is
running on. We have a feeling we might be able to get it from the
cls_method_context_t, but couldn't figure out how.

Regards,

Ching-Cheng Chen
CREDIT SUISSE
Information Technology | MDS - New York, KVBB 41
One Madison Avenue | 10010 New York | United States
Phone +1 212 538 8031 | Mobile +1 732 216 7939
chingcheng.c...@credit-suisse.com | 
www.credit-suisse.com





Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Guang
Thanks for the reply!

I don't know the reason, but I worked around this issue by adding a new entry 
in /etc/hosts, something like 'web2   {ip_address_of_web2}', and it works.

I am not sure if that is due to some mis-config on my end of the deployment 
script; I will investigate further.

Thanks all for the help!

Guang

On Sep 25, 2013, at 8:38 PM, Alfredo Deza wrote:

> On Wed, Sep 25, 2013 at 5:08 AM, Guang  wrote:
>> Thanks Wolfgang.
>> 
>> -bash-4.1$ ping web2
>> PING web2 (10.193.244.209) 56(84) bytes of data.
>> 64 bytes from web2 (10.193.244.209): icmp_seq=1 ttl=64 time=0.505 ms
>> 64 bytes from web2 (10.193.244.209): icmp_seq=2 ttl=64 time=0.194 ms
>> ...
>> 
>> [I omit part of the host name].
>> 
>> It can ping to the host and I actually used ceph-deploy to install ceph onto
>> the web2 remote host…
>> 
> 
> This is very unexpected; it most definitely sounds like at some point
> web2 is not resolvable (as the error says), but you are also right that
> you initiated the deployment correctly, with ceph-deploy doing work on
> the remote end.
> 
> Are you able to SSH directly to this host from where you are executing
> ceph-deploy? (same user/login)
> 
> 
> 
>> Thanks,
>> Guang
>> 
>> 
>> Date: Wed, 25 Sep 2013 10:29:14 +0200
>> From: Wolfgang Hennerbichler 
>> To: 
>> Subject: Re: [ceph-users] Ceph deployment issue in physical hosts
>> Message-ID: <52429eda.8070...@risc-software.at>
>> Content-Type: text/plain; charset="ISO-8859-1"
>> 
>> 
>> 
>> 
>> On 09/25/2013 10:03 AM, Guang wrote:
>> 
>> Hi ceph-users,
>> 
>> I deployed a cluster successfully in VMs, and today I tried to deploy a
>> cluster in physical nodes. However, I came across a problem when I started
>> creating a monitor.
>> 
>> 
>> -bash-4.1$ ceph-deploy mon create x
>> 
>> 
>> 
>> ssh: Could not resolve hostname web2: Name or service not known
>> 
>> Does anyone come across the same issue? Looks like I mis-configured the
>> network environment?
>> 
>> 
>> The machine you run ceph-deploy on doesn't know "who" web2 is. If this
>> command succeeds: "ping web2" then ceph deploy will at least be able to
>> contact that host.
>> 
>> hint: look at your /etc/hosts file.
>> 
>> Thanks,
>> 
>> Guang
>> 
>> 
>> Wolfgang
>> 
>> 



Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Alfredo Deza
On Wed, Sep 25, 2013 at 5:08 AM, Guang  wrote:
> Thanks Wolfgang.
>
> -bash-4.1$ ping web2
> PING web2 (10.193.244.209) 56(84) bytes of data.
> 64 bytes from web2 (10.193.244.209): icmp_seq=1 ttl=64 time=0.505 ms
> 64 bytes from web2 (10.193.244.209): icmp_seq=2 ttl=64 time=0.194 ms
> ...
>
> [I omit part of the host name].
>
> It can ping to the host and I actually used ceph-deploy to install ceph onto
> the web2 remote host…
>

This is very unexpected; it most definitely sounds like at some point
web2 is not resolvable (as the error says), but you are also right that
you initiated the deployment correctly, with ceph-deploy doing work on
the remote end.

Are you able to SSH directly to this host from where you are executing
ceph-deploy? (same user/login)



> Thanks,
> Guang
>
>
> Date: Wed, 25 Sep 2013 10:29:14 +0200
> From: Wolfgang Hennerbichler 
> To: 
> Subject: Re: [ceph-users] Ceph deployment issue in physical hosts
> Message-ID: <52429eda.8070...@risc-software.at>
> Content-Type: text/plain; charset="ISO-8859-1"
>
>
>
>
> On 09/25/2013 10:03 AM, Guang wrote:
>
> Hi ceph-users,
>
> I deployed a cluster successfully in VMs, and today I tried to deploy a
> cluster in physical nodes. However, I came across a problem when I started
> creating a monitor.
>
>
> -bash-4.1$ ceph-deploy mon create x
>
> 
>
> ssh: Could not resolve hostname web2: Name or service not known
>
> Does anyone come across the same issue? Looks like I mis-configured the
> network environment?
>
>
> The machine you run ceph-deploy on doesn't know "who" web2 is. If this
> command succeeds: "ping web2" then ceph deploy will at least be able to
> contact that host.
>
> hint: look at your /etc/hosts file.
>
> Thanks,
>
> Guang
>
>
> Wolfgang
>
>


Re: [ceph-users] Speed limit on RadosGW?

2013-09-25 Thread Mark Nelson

On 09/25/2013 02:49 AM, Chu Duc Minh wrote:

> I have a CEPH cluster with 9 nodes (6 data nodes & 3 mon/mds nodes),
> and I set up 4 separate nodes to test the performance of RadosGW:
>   - 2 nodes run RadosGW
>   - 2 nodes run multi-process file puts to [multiple] RadosGW
>
> Results:
>
> a) When I use 1 RadosGW node & 1 upload node: upload speed = 50MB/s
> per upload node; RadosGW input/output = 50MB/s
>
> b) When I use 2 RadosGW nodes & 1 upload node: upload speed = 50MB/s
> per upload node; each RadosGW has input/output = 25MB/s ==> summed
> input/output of the 2 RadosGWs = 50MB/s
>
> c) When I use 1 RadosGW node & 2 upload nodes: upload speed = 25MB/s
> per upload node ==> summed output of the 2 upload nodes = 50MB/s; the
> RadosGW has input/output = 50MB/s
>
> d) When I use 2 RadosGW nodes & 2 upload nodes: upload speed = 25MB/s
> per upload node ==> summed output of the 2 upload nodes = 50MB/s; each
> RadosGW has input/output = 25MB/s ==> summed input/output of the 2
> RadosGWs = 50MB/s
>
> Problem: I can't get past 50MB/s when putting files through RadosGW,
> regardless of the number of RadosGW nodes and upload nodes.
> When I use this Ceph cluster over librados (openstack/kvm), I can easily
> achieve > 300MB/s.
>
> I don't know why the performance of RadosGW is so low. What's the bottleneck?


During writes, does the CPU usage on your RadosGW node go way up?

If this is a test cluster, you might want to try the wip-6286 build from 
our gitbuilder site.  There is a fix that depending on the size of your 
objects, could have a big impact on performance.  We're currently 
investigating some other radosgw performance issues as well, so stay 
tuned. :)


Mark



Thank you very much!









[ceph-users] Could radosgw disable S3 authentication?

2013-09-25 Thread david zhang
Hi ceph-users,

I see the RADOS Gateway (RGW) can be either authenticated or unauthenticated,
per http://ceph.com/docs/master/radosgw/s3/authentication/, but there are no
details about how to disable authentication.

So is there any way to do it? Thanks for sharing.

-- 
Regards,
Zhi


Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Guang
Thanks Wolfgang.

-bash-4.1$ ping web2
PING web2 (10.193.244.209) 56(84) bytes of data.
64 bytes from web2 (10.193.244.209): icmp_seq=1 ttl=64 time=0.505 ms
64 bytes from web2 (10.193.244.209): icmp_seq=2 ttl=64 time=0.194 ms
...

[I omit part of the host name].

It can ping to the host and I actually used ceph-deploy to install ceph onto 
the web2 remote host…

Thanks,
Guang


Date: Wed, 25 Sep 2013 10:29:14 +0200
From: Wolfgang Hennerbichler 
To: 
Subject: Re: [ceph-users] Ceph deployment issue in physical hosts



On 09/25/2013 10:03 AM, Guang wrote:
> Hi ceph-users,
> I deployed a cluster successfully in VMs, and today I tried to deploy a 
> cluster in physical nodes. However, I came across a problem when I started 
> creating a monitor.
> 
> -bash-4.1$ ceph-deploy mon create x

> ssh: Could not resolve hostname web2: Name or service not known
> Has anyone come across the same issue? It looks like I may have
> mis-configured the network environment.

The machine you run ceph-deploy on doesn't know "who" web2 is. If this
command succeeds: "ping web2", then ceph-deploy will at least be able to
contact that host.

hint: look at your /etc/hosts file.
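As a minimal sketch of that fix (the IP is taken from the ping output earlier in this thread; substitute your own), add the short hostname to /etc/hosts on the admin node that runs ceph-deploy, then verify resolution:

```shell
# On the admin node (not the target): map the short name to its IP.
# 10.193.244.209 is the address shown by "ping web2" in this thread;
# adjust both name and address for your environment.
echo "10.193.244.209  web2" | sudo tee -a /etc/hosts

# Confirm the resolver now knows the name before retrying ceph-deploy:
getent hosts web2
```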

> Thanks,
> Guang

Wolfgang


Re: [ceph-users] Using RBD with LVM

2013-09-25 Thread John-Paul Robinson
Thanks.

After fixing the issue with the types entry in lvm.conf, I discovered
the -vvv option, which helped me detect the second cause of the
"ignored" error: pvcreate saw a partition signature and skipped the device.

The -vvv is a good flag. :)

~jpr

On 09/25/2013 01:52 AM, Wido den Hollander wrote:
> Try this:
> 
> $ sudo pvcreate -vvv /dev/rbd1
> 
> It has something to do with LVM filtering RBD devices away, you might
> need to add them manually in /etc/lvm/lvm.conf
> 
> I've seen this before and fixed it, but I forgot what the root cause was.
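For reference, a sketch of the kind of lvm.conf addition being alluded to here (the exact stanza appears later in the thread; treat the second value, LVM's max-partition-count field for the device type, as an assumption to check against your LVM version):

```shell
# Excerpt of /etc/lvm/lvm.conf -- tell LVM's device scanner to accept
# rbd block devices (unknown device types are filtered out by default).
devices {
    # Format: [ device-type-name, max-partition-count ]
    types = [ "rbd", 1024 ]
}
```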


Re: [ceph-users] Using RBD with LVM

2013-09-25 Thread John-Paul Robinson
Thanks.  This fixed the problem.

BTW, after adding this line I still got the same error on my pvcreate,
but then I ran pvcreate -vvv and found that it was ignoring my
/dev/rbd1 device because it had detected a partition signature (which I
had added in an earlier attempt to work around this "ignored" issue).

I deleted the partition and the pvcreate worked on all my RBD devices.

A basic recipe for creating an LVM volume is:

for i in 1 2 3
do
  rbd create user1-home-lvm-p0$i --size 102400
  rbd map user1-home-lvm-p0$i
  pvcreate /dev/rbd/rbd/user1-home-lvm-p0$i
done
vgcreate user1-home-vg \
   /dev/rbd/rbd/user1-home-lvm-p01 \
   /dev/rbd/rbd/user1-home-lvm-p02 \
   /dev/rbd/rbd/user1-home-lvm-p03
lvcreate -n user1-home-lv -l 100%FREE user1-home-vg
mkfs.ext4 /dev/user1-home-vg/user1-home-lv
mount /dev/user1-home-vg/user1-home-lv /somewhere
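Since the stated goal is growing file systems while online, the recipe above extends naturally. A hedged sketch under the same (illustrative) naming scheme, adding a fourth image and growing the ext4 file system in place:

```shell
# Add one more 100GB RBD image and fold it into the volume group.
rbd create user1-home-lvm-p04 --size 102400
rbd map user1-home-lvm-p04
pvcreate /dev/rbd/rbd/user1-home-lvm-p04
vgextend user1-home-vg /dev/rbd/rbd/user1-home-lvm-p04

# Grow the logical volume over the new space, then the file system.
# ext4 supports online growth, so no unmount is needed.
lvextend -l +100%FREE /dev/user1-home-vg/user1-home-lv
resize2fs /dev/user1-home-vg/user1-home-lv
```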

~jpr

On 09/24/2013 07:58 PM, Mandell Degerness wrote:
> You need to add a line to /etc/lvm/lvm.conf:
> 
> types = [ "rbd", 1024 ]
> 
> It should be in the "devices" section of the file.
> 
> On Tue, Sep 24, 2013 at 5:00 PM, John-Paul Robinson  wrote:
>> Hi,
>>
>> I'm exploring a configuration with multiple Ceph block devices used with
>> LVM.  The goal is to provide a way to grow and shrink my file systems
>> while they are on line.
>>
>> I've created three block devices:
>>
>> $ sudo ./ceph-ls  | grep home
>> jpr-home-lvm-p01: 102400 MB
>> jpr-home-lvm-p02: 102400 MB
>> jpr-home-lvm-p03: 102400 MB
>>
>> And have them mapped into my kernel (3.2.0-23-generic #36-Ubuntu SMP):
>>
>> $ sudo rbd showmapped
>> id pool imagesnap device
>> 0  rbd  jpr-test-vol01   -/dev/rbd0
>> 1  rbd  jpr-home-lvm-p01 -/dev/rbd1
>> 2  rbd  jpr-home-lvm-p02 -/dev/rbd2
>> 3  rbd  jpr-home-lvm-p03 -/dev/rbd3
>>
>> In order to use them with LVM, I need to define them as physical
>> volumes.  But when I run this command I get an unexpected error:
>>
>> $ sudo pvcreate /dev/rbd1
>>   Device /dev/rbd1 not found (or ignored by filtering).
>>
>> I am able to use other RBD on this same machine to create file systems
>> directly and mount them:
>>
>> $ df -h /mnt-test
>> Filesystem  Size  Used Avail Use% Mounted on
>> /dev/rbd050G  885M   47G   2% /mnt-test
>>
>> Is there a reason that the /dev/rbd[1-2] devices can't be initialized as
>> physical volumes in LVM?
>>
>> Thanks,
>>
>> ~jpr


Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Wolfgang Hennerbichler


On 09/25/2013 10:03 AM, Guang wrote:
> Hi ceph-users,
> I deployed a cluster successfully in VMs, and today I tried to deploy a 
> cluster in physical nodes. However, I came across a problem when I started 
> creating a monitor.
> 
> -bash-4.1$ ceph-deploy mon create x

> ssh: Could not resolve hostname web2: Name or service not known
> Has anyone come across the same issue? It looks like I may have
> mis-configured the network environment.

The machine you run ceph-deploy on doesn't know "who" web2 is. If this
command succeeds: "ping web2", then ceph-deploy will at least be able to
contact that host.

hint: look at your /etc/hosts file.

> Thanks,
> Guang

Wolfgang



-- 
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at


[ceph-users] radosgw-admin users list?

2013-09-25 Thread Valery Tschopp

Hi guys,

How do I get a list of all users with the radosgw-admin command and/or 
REST API?


# radosgw-admin --version
ceph version 0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)

Cheers,
Valery
--
SWITCH
--
Valery Tschopp, Software Engineer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
email: valery.tsch...@switch.ch phone: +41 44 268 1544
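One workaround sometimes used on releases of this vintage (a sketch under assumptions, not an authoritative API: the pool name is the default, and the object layout may differ on your deployment): RGW keeps one object per user in the .users.uid pool, so enumerating that pool's objects yields the user IDs (entries like <uid> and <uid>.buckets will appear).

```shell
# List RGW user IDs by enumerating the objects in the users pool.
# ".users.uid" is the default pool name; adjust if yours differs.
rados ls -p .users.uid
```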






[ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Guang
Hi ceph-users,
I deployed a cluster successfully in VMs, and today I tried to deploy a cluster 
in physical nodes. However, I came across a problem when I started creating a 
monitor.

-bash-4.1$ ceph-deploy mon create x
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts 
[ceph_deploy.mon][DEBUG ] detecting platform for host web2 ...
[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
[ceph_deploy.mon][INFO  ] distro info: RedHatEnterpriseServer 6.4 Santiago
[web2][DEBUG ] determining if provided host has same hostname in remote
[web2][DEBUG ] deploying mon to web2
[web2][DEBUG ] remote hostname: web2
[web2][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
[web2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-web2/done
[web2][INFO  ] create a done file to avoid re-doing the mon deployment
[web2][INFO  ] create the init path if it does not exist
[web2][INFO  ] locating `service` executable...
[web2][INFO  ] found `service` executable: /sbin/service
ssh: Could not resolve hostname web2: Name or service not known
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 21, in 
sys.exit(main())
  File "/usr/lib/python2.6/site-packages/ceph_deploy/util/decorators.py", line 
83, in newfunc
return f(*a, **kw)
  File "/usr/lib/python2.6/site-packages/ceph_deploy/cli.py", line 147, in main
return args.func(args)
  File "/usr/lib/python2.6/site-packages/ceph_deploy/mon.py", line 246, in mon
mon_create(args)
  File "/usr/lib/python2.6/site-packages/ceph_deploy/mon.py", line 105, in 
mon_create
distro.mon.create(distro, rlogger, args, monitor_keyring)
  File 
"/usr/lib/python2.6/site-packages/ceph_deploy/hosts/centos/mon/create.py", line 
15, in create
rconn = get_connection(hostname, logger)
  File "/usr/lib/python2.6/site-packages/ceph_deploy/connection.py", line 13, 
in get_connection
sudo=needs_sudo(),
  File "/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/connection.py", 
line 12, in __init__
self.gateway = execnet.makegateway('ssh=%s' % hostname)
  File 
"/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/multi.py", 
line 89, in makegateway
gw = gateway_bootstrap.bootstrap(io, spec)
  File 
"/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_bootstrap.py",
 line 70, in bootstrap
bootstrap_ssh(io, spec)
  File 
"/usr/lib/python2.6/site-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_bootstrap.py",
 line 42, in bootstrap_ssh
raise HostNotFound(io.remoteaddress)
execnet.gateway_bootstrap.HostNotFound: web2

Has anyone come across the same issue? It looks like I may have mis-configured 
the network environment.

Thanks,
Guang


[ceph-users] Speed limit on RadosGW?

2013-09-25 Thread Chu Duc Minh
I have a Ceph cluster with 9 nodes (6 data nodes & 3 mon/mds nodes),
and I set up 4 separate nodes to test the performance of RadosGW:
 - 2 nodes run RadosGW
 - 2 nodes run multi-process uploads to [multiple] RadosGW instances

Results:
a) With 1 RadosGW node & 1 upload node: upload speed = 50MB/s
per upload node; RadosGW input/output = 50MB/s

b) With 2 RadosGW nodes & 1 upload node: upload speed = 50MB/s
per upload node; each RadosGW has input/output = 25MB/s ==> summed
input/output of the 2 RadosGWs = 50MB/s

c) With 1 RadosGW node & 2 upload nodes: upload speed = 25MB/s
per upload node ==> summed output of the 2 upload nodes = 50MB/s; RadosGW
input/output = 50MB/s

d) With 2 RadosGW nodes & 2 upload nodes: upload speed = 25MB/s
per upload node ==> summed output of the 2 upload nodes = 50MB/s; each
RadosGW has input/output = 25MB/s ==> summed input/output of the 2
RadosGWs = 50MB/s

*Problem*: I can't get past 50MB/s when putting files through RadosGW,
regardless of the number of RadosGW nodes and upload nodes.
When I use this Ceph cluster over librados (openstack/kvm), I can easily
achieve > 300MB/s.

I don't know why the RadosGW performance is so low. What's the bottleneck?
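One way to narrow this down (a sketch, not from the thread): benchmark the underlying pool directly with rados bench from the same client. If raw RADOS writes exceed 50MB/s, the cap is in the gateway path (HTTP frontend, apache/fastcgi, or rgw itself) rather than in the cluster.

```shell
# 30-second write benchmark against the RGW data pool.
# ".rgw.buckets" is the default data pool name on this era of Ceph;
# adjust for your deployment.  Benchmark objects are removed at the
# end of the run unless --no-cleanup is passed.
rados bench -p .rgw.buckets 30 write
```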

Thank you very much!