[ceph-users] Radosgw admin MNG Tools to create and report usage of Object accounts

2015-11-08 Thread Florent MONTHEL
Hi Cephers,

I’ve just released a Python toolkit to report usage and inventory of buckets / 
accounts / S3 and Swift keys. In the same way, we have a script to create 
accounts and S3/Swift keys (and initial buckets).
The tool uses the rgwadmin Python module.

https://github.com/fmonthel/radosgw-admin-mng-tools 
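
For readers who just want the equivalent one-off operations, the plain radosgw-admin CLI covers roughly the same ground as the toolkit (a sketch; "alice" and the bucket names are made-up examples):

radosgw-admin user create --uid=alice --display-name="Alice"
radosgw-admin subuser create --uid=alice --subuser=alice:swift --key-type=swift --gen-secret
radosgw-admin bucket list --uid=alice
radosgw-admin usage show --uid=alice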


Thanks



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Performance degradation after upgrade to hammer

2015-07-22 Thread Florent MONTHEL
Hi Mark

Yes, enough PGs, and no errors in the Apache logs.
We identified a bottleneck on the bucket index, with huge IOPS on one OSD (all 
the IOPS land on only 1 bucket).

With bucket index sharding (32 shards) configured, write IOPS are now 5x better 
(after a bucket delete/create), but we don't yet reach Firefly performance.
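
For reference, the knob assumed to be behind that "32 shards" setting is the Hammer-era ceph.conf option below; it only affects buckets created after the change, which matches the delete/create step above:

[client.radosgw.gateway]
rgw override bucket index max shards = 32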

A Red Hat case is in progress; I will share the outcome with the community later.

Sent from my iPhone

 On 22 Jul 2015, at 08:20, Mark Nelson mnel...@redhat.com wrote:
 
 Ok,
 
 So good news that RADOS appears to be doing well.  I'd say next is to follow 
 some of the recommendations here:
 
 http://ceph.com/docs/master/radosgw/troubleshooting/
 
 If you examine the objecter_requests and perfcounters during your cosbench 
 write test, it might help explain where the requests are backing up.  Another 
 thing to look for (as noted in the above URL) are HTTP errors in the apache 
 logs (if relevant).
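 
 For example (the admin socket path is an assumption and depends on the rgw client name in ceph.conf):
 
 ceph daemon /var/run/ceph/ceph-client.radosgw.gateway.asok objecter_requests
 ceph daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump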
 
 Other general thoughts:  When you upgraded to hammer did you change the RGW 
 configuration at all?  Are you using civetweb now?  Does the rgw.buckets pool 
 have enough PGs?
 
 
 Mark
 
 On 07/21/2015 08:17 PM, Florent MONTHEL wrote:
 Hi Mark
 
 I've something like 600 write IOPs on EC pool and 800 write IOPs on 
 replicated 3 pool with rados bench
 
 With  Radosgw  I have 30/40 write IOPs with Cosbench (1 radosgw- the same 
 with 2) and servers are sleeping :
 - 0.005 core for radosgw process
 - 0.01 core for osd process
 
 I don't know if we can have .rgw* pool locking or something like that with 
 Hammer (or situation specific to me)
 
 On 100% read profile, Radosgw and Ceph servers are working very well with 
 more than 6000 IOPs on one radosgw server :
 - 7 cores for radosgw process
 - 1 core for each osd process
 - 0,5 core for each Apache process
 
 Thanks
 
 Sent from my iPhone
 
 On 14 Jul 2015, at 21:03, Mark Nelson mnel...@redhat.com wrote:
 
 Hi Florent,
 
 10x degradation is definitely unusual!  A couple of things to look at:
 
 Are 8K rados bench writes to the rgw.buckets pool slow?  You can with 
 something like:
 
 rados -p rgw.buckets bench 30 write -t 256 -b 8192
 
 You may also want to try targeting a specific RGW server to make sure the 
 RR-DNS setup isn't interfering (at least while debugging).  It may also be 
 worth creating a new replicated pool and try writes to that pool as well to 
 see if you see much difference.
 
 Mark
 
 On 07/14/2015 07:17 PM, Florent MONTHEL wrote:
 Yes of course thanks Mark
 
 Infrastructure : 5 servers with 10 sata disks (50 osd at all) - 10gb 
 connected - EC 2+1 on rgw.buckets pool - 2 radosgw RR-DNS like installed 
 on 2 cluster servers
 No SSD drives used
 
 We're using Cosbench to send :
 - 8k object size : 100% read with 256 workers : better results with Hammer
  - 8k object size : 80% read - 20% write with 256 workers : real 
 degradation between Firefly and Hammer (divided by something like 10)
 - 8k object size : 100% write with 256 workers : real degradation between 
 Firefly and Hammer (divided by something like 10)
 
 Thanks
 
 Sent from my iPhone
 
 On 14 Jul 2015, at 19:57, Mark Nelson mnel...@redhat.com wrote:
 
 On 07/14/2015 06:42 PM, Florent MONTHEL wrote:
 Hi All,
 
 I've just upgraded Ceph cluster from Firefly 0.80.8 (Redhat Ceph 1.2.3) 
 to Hammer (Redhat Ceph 1.3) - Usage : radosgw with Apache 2.4.19 on MPM 
 prefork mode
 I'm experiencing huge write performance degradation just after upgrade 
 (Cosbench).
 
 Do you already run performance tests between Hammer and Firefly ?
 
 No problem with read performance that was amazing
 
 Hi Florent,
 
 Can you talk a little bit about how your write tests are setup?  How many 
 concurrent IOs and what size?  Also, do you see similar problems with 
 rados bench?
 
 We have done some testing and haven't seen significant performance 
 degradation except when switching to civetweb which appears to perform 
 deletes more slowly than what we saw with apache+fcgi.
 
 Mark
 
 
 
 Sent from my iPhone
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Performance degradation after upgrade to hammer

2015-07-21 Thread Florent MONTHEL
Hi Mark

I get something like 600 write IOPS on the EC pool and 800 write IOPS on the 
replicated-3 pool with rados bench.

With radosgw I get 30/40 write IOPS with Cosbench (1 radosgw - the same with 
2) and the servers are nearly idle:
- 0.005 core for the radosgw process
- 0.01 core for each osd process

I don't know if we can have some .rgw* pool locking or something like that with 
Hammer (or a situation specific to me).

On a 100% read profile, the radosgw and Ceph servers work very well, with more 
than 6000 IOPS on one radosgw server:
- 7 cores for the radosgw process
- 1 core for each osd process
- 0.5 core for each Apache process

Thanks

Sent from my iPhone

 On 14 Jul 2015, at 21:03, Mark Nelson mnel...@redhat.com wrote:
 
 Hi Florent,
 
 10x degradation is definitely unusual!  A couple of things to look at:
 
 Are 8K rados bench writes to the rgw.buckets pool slow?  You can with 
 something like:
 
 rados -p rgw.buckets bench 30 write -t 256 -b 8192
 
 You may also want to try targeting a specific RGW server to make sure the 
 RR-DNS setup isn't interfering (at least while debugging).  It may also be 
 worth creating a new replicated pool and try writes to that pool as well to 
 see if you see much difference.
 
 Mark
 
 On 07/14/2015 07:17 PM, Florent MONTHEL wrote:
 Yes of course thanks Mark
 
 Infrastructure : 5 servers with 10 sata disks (50 osd at all) - 10gb 
 connected - EC 2+1 on rgw.buckets pool - 2 radosgw RR-DNS like installed on 
 2 cluster servers
 No SSD drives used
 
 We're using Cosbench to send :
 - 8k object size : 100% read with 256 workers : better results with Hammer
  - 8k object size : 80% read - 20% write with 256 workers : real degradation 
 between Firefly and Hammer (divided by something like 10)
 - 8k object size : 100% write with 256 workers : real degradation between 
 Firefly and Hammer (divided by something like 10)
 
 Thanks
 
 Sent from my iPhone
 
 On 14 Jul 2015, at 19:57, Mark Nelson mnel...@redhat.com wrote:
 
 On 07/14/2015 06:42 PM, Florent MONTHEL wrote:
 Hi All,
 
 I've just upgraded Ceph cluster from Firefly 0.80.8 (Redhat Ceph 1.2.3) to 
 Hammer (Redhat Ceph 1.3) - Usage : radosgw with Apache 2.4.19 on MPM 
 prefork mode
 I'm experiencing huge write performance degradation just after upgrade 
 (Cosbench).
 
 Do you already run performance tests between Hammer and Firefly ?
 
 No problem with read performance that was amazing
 
 Hi Florent,
 
 Can you talk a little bit about how your write tests are setup?  How many 
 concurrent IOs and what size?  Also, do you see similar problems with rados 
 bench?
 
 We have done some testing and haven't seen significant performance 
 degradation except when switching to civetweb which appears to perform 
 deletes more slowly than what we saw with apache+fcgi.
 
 Mark
 
 
 
 Sent from my iPhone
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph 0.94 (and lower) performance on 1 hosts ??

2015-07-21 Thread Florent MONTHEL
Hi Frederic,

When you have a Ceph cluster with 1 node you don’t experience the network and 
communication overhead of the distributed model.
With 2 nodes and EC 4+1 you will have communication between the 2 nodes, but you 
will keep some internal communication (2 chunks on the first node and 3 chunks on the 
second node).
In your configuration the EC pool is set up with 4+1, so every write incurs 
overhead from being spread over 5 nodes (for 1 client IO, you will 
experience 5 Ceph IOs due to EC 4+1).
That’s why I think you only reach performance stability with 5 
nodes or more in your cluster.
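
As a rough illustration of that amplification (illustrative numbers, not measurements): a 4 MB client write on an EC 4+1 pool is split into four 1 MB data chunks plus one 1 MB parity chunk, i.e. about 5 MB sent to 5 OSDs, and with on-disk journals each chunk is written twice, so roughly 10 MB of backend writes for 4 MB of client data.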


 On Jul 20, 2015, at 10:35 AM, SCHAER Frederic frederic.sch...@cea.fr wrote:
 
 Hi,
  
 As I explained in various previous threads, I’m having a hard time getting 
 the most out of my test ceph cluster.
 I’m benching things with rados bench.
 All Ceph hosts are on the same 10GB switch.
  
 Basically, I know I can get about 1GB/s of disk write performance per host, 
 when I bench things with dd (hundreds of dd threads) +iperf 10gbit 
 inbound+iperf 10gbit outbound.
 I also can get 2GB/s or even more if I don’t bench the network at the same 
 time, so yes, there is a bottleneck between disks and network, but I can’t 
 identify which one, and it’s not relevant for what follows anyway
 (Dell R510 + MD1200 + PERC H700 + PERC H800 here, if anyone has hints about 
 this strange bottleneck though…)
  
 My hosts each are connected though a single 10Gbits/s link for now.
  
 My problem is the following. Please note I see the same kind of poor 
 performance with replicated pools...
 When testing EC pools, I ended putting a 4+1 pool on a single node in order 
 to track down the ceph bottleneck.
 On that node, I can get approximately 420MB/s write performance using rados 
 bench, but that’s fair enough since the dstat output shows that real data 
 throughput on disks is about 800+MB/s (that’s the ceph journal effect, I 
 presume).
  
 I tested Ceph on my other standalone nodes : I can also get around 420MB/s, 
 since they’re identical.
 I’m testing things with 5 10Gbits/s clients, each running rados bench.
  
 But what I really don’t get is the following :
  
 -  With 1 host : throughput is 420MB/s
 -  With 2 hosts : I get 640MB/s. That’s surely not 2x420MB/s.
 -  With 5 hosts : I get around 1375MB/s . That’s far from the 
 expected 2GB/s.
  
 The network never is maxed out, nor are the disks or CPUs.
 The hosts throughput I see with rados bench seems to match the dstat 
 throughput.
 That’s as if each additional host was only capable of adding 220MB/s of 
 throughput. Compare this to the 1GB/s they are capable of (420MB/s with 
 journals)…
  
 I’m therefore wondering what could possibly be so wrong with my setup ??
 Why would it impact so much the performance to add hosts ?
  
 On the hardware side, I have Broadcom BCM57711 10-Gigabit PCIe cards.
 I know, not perfect, but not THAT bad neither… ?
  
 Any hint would be greatly appreciated !
  
 Thanks
 Frederic Schaer
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Performance degradation after upgrade to hammer

2015-07-14 Thread Florent MONTHEL
Hi All,

I've just upgraded a Ceph cluster from Firefly 0.80.8 (Red Hat Ceph 1.2.3) to 
Hammer (Red Hat Ceph 1.3). Usage: radosgw with Apache 2.4.19 in MPM prefork 
mode.
I'm experiencing a huge write performance degradation just after the upgrade 
(Cosbench).

Have you already run performance tests between Hammer and Firefly?

No problem with read performance, which was amazing.


Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] CPU Hyperthreading ?

2015-07-14 Thread Florent MONTHEL
Hi list

Do you recommend enabling or disabling hyper-threading on the CPUs?
Is the answer the same for mon? osd? radosgw?
Thanks

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Performance degradation after upgrade to hammer

2015-07-14 Thread Florent MONTHEL
Yes of course thanks Mark

Infrastructure: 5 servers with 10 SATA disks each (50 OSDs in all), 10Gb connected, 
EC 2+1 on the rgw.buckets pool, 2 radosgw installed RR-DNS style on 2 of the cluster 
servers.
No SSD drives used.

We're using Cosbench to send:
- 8k object size, 100% read, 256 workers: better results with Hammer
- 8k object size, 80% read / 20% write, 256 workers: real degradation 
between Firefly and Hammer (divided by something like 10)
- 8k object size, 100% write, 256 workers: real degradation between 
Firefly and Hammer (divided by something like 10)

Thanks

Sent from my iPhone

 On 14 Jul 2015, at 19:57, Mark Nelson mnel...@redhat.com wrote:
 
 On 07/14/2015 06:42 PM, Florent MONTHEL wrote:
 Hi All,
 
 I've just upgraded Ceph cluster from Firefly 0.80.8 (Redhat Ceph 1.2.3) to 
 Hammer (Redhat Ceph 1.3) - Usage : radosgw with Apache 2.4.19 on MPM prefork 
 mode
 I'm experiencing huge write performance degradation just after upgrade 
 (Cosbench).
 
 Do you already run performance tests between Hammer and Firefly ?
 
 No problem with read performance that was amazing
 
 Hi Florent,
 
 Can you talk a little bit about how your write tests are setup?  How many 
 concurrent IOs and what size?  Also, do you see similar problems with rados 
 bench?
 
 We have done some testing and haven't seen significant performance 
 degradation except when switching to civetweb which appears to perform 
 deletes more slowly than what we saw with apache+fcgi.
 
 Mark
 
 
 
 Sent from my iPhone
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CPU Hyperthreading ?

2015-07-14 Thread Florent MONTHEL
Thanks for the feedback, Somnath

Sent from my iPhone

 On 14 Jul 2015, at 20:24, Somnath Roy somnath@sandisk.com wrote:
 
 I was getting better performance with HT enabled (Intel cpu) for ceph-osd. I 
 guess for mon it doesn't matter, but, for RadosGW I didn't measure the 
 difference...We are running our benchmark with HT enabled for all components 
 though.
 
 Thanks & Regards
 Somnath
 
 -Original Message-
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
 Florent MONTHEL
 Sent: Tuesday, July 14, 2015 5:19 PM
 To: ceph-users
 Subject: [ceph-users] CPU Hyperthreading ?
 
 Hi list
 
 Do you recommend to enable or disable hyper threading on CPU ?
 Is it the case for Mon ? Osd ? Radosgw ?
 Thanks
 
 Sent from my iPhone
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] bucket owner vs S3 ACL?

2015-07-01 Thread Florent MONTHEL
Hi Valery,

With the old account, did you try to give FULL_CONTROL access to the new user ID?

The process should be:
- From the OLD account, grant FULL_CONTROL access to the NEW account (S3 ACL, with CloudBerry for 
example)
- With radosgw-admin, update the link from the OLD account to the NEW account (the link allows the user 
to see the bucket with a bucket list command)
- From the NEW account, remove the OLD account's FULL_CONTROL access (S3 ACL, with CloudBerry for 
example); see the command sketch below
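
A rough command-level sketch of those three steps (bucket/user names are placeholders; the --acl-grant/--acl-revoke syntax is s3cmd's and may differ in your client, and some radosgw-admin versions also want --bucket-id on bucket link):

s3cmd setacl --acl-grant=full_control:NEW_USER_ID s3://BUCKET_NAME    (with the OLD account's keys)
radosgw-admin bucket unlink --uid=OLD_USER_ID --bucket=BUCKET_NAME
radosgw-admin bucket link --uid=NEW_USER_ID --bucket=BUCKET_NAME
s3cmd setacl --acl-revoke=full_control:OLD_USER_ID s3://BUCKET_NAME   (with the NEW account's keys)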

Thanks


 On Jun 29, 2015, at 11:46 AM, Valery Tschopp valery.tsch...@switch.ch wrote:
 
 Hi guys,
 
 We use the radosgw (v0.80.9) with the Openstack Keystone integration.
 
 One project have been deleted, so now I have to transfer the ownership of all 
 the buckets to another user/project.
 
 Using radosgw-admin I have changed the owner:
 
 radosgw-admin bucket link --uid NEW_USER_ID --bucket BUCKET_NAME
 
 And the owner have been update:
 
 radosgw-admin bucket stats --bucket BUCKET_NAME
 
 { "bucket": "BUCKET_NAME",
   "pool": ".rgw.buckets",
   "index_pool": ".rgw.buckets.index",
   "id": "default.4063334.17",
   "marker": "default.4063334.17",
   "owner": "NEW_USER_ID",
   "ver": 66301,
   "master_ver": 0,
   "mtime": 1435583681,
   "max_marker": "",
   "usage": { "rgw.main": { "size_kb": 189433890,
       "size_kb_actual": 189473684,
       "num_objects": 19043},
     "rgw.multimeta": { "size_kb": 0,
       "size_kb_actual": 0,
       "num_objects": 0}},
   "bucket_quota": { "enabled": false,
     "max_size_kb": -1,
     "max_objects": -1}
 }
 
 But the S3 ACL of this bucket is still referencing the old user/project (from 
 radosgw.log) when I try to access it with the new owner:
 
 2015-06-29 17:08:33.236265 7f40d8a76700 15 Read AccessControlPolicy
 <AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
   <Owner><ID>OLD_USER_ID</ID><DisplayName>OLD_PROJECT_NAME</DisplayName></Owner>
   <AccessControlList>
     <Grant>
       <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
         <ID>OLD_USER_ID</ID><DisplayName>OLD_PROJECT_NAME</DisplayName>
       </Grantee>
       <Permission>FULL_CONTROL</Permission>
     </Grant>
   </AccessControlList>
 </AccessControlPolicy>
 
 
 Therefore I get a 403, because the S3 ACL still enforce the old owner, not 
 the new one.
 
 How can I update these S3 ACL, and fully transfer the ownership to the new 
 owner/project???
 
 Cheers,
 Valery
 
 
 
 -- 
 SWITCH
 --
 Valery Tschopp, Software Engineer, Peta Solutions
 Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
 email: valery.tsch...@switch.ch phone: +41 44 268 1544
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph references

2015-07-01 Thread Florent MONTHEL
Hi community

Do you know if there is a page listing all the official Ceph clusters deployed, with 
the number of nodes, capacity, and protocol (block / file / object)?
If not, would you agree to create such a list on the Ceph site?
Thanks

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Gathering tool to inventory osd

2015-06-13 Thread Florent MONTHEL
Hi list 

Has anyone developed a tool to inventory OSDs / servers / disks?
The goal is to automate disk addition to the cluster, disk repair, and crushmap 
customization with data center location (from the inventory).

Thanks for feedback

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Load balancing RGW and Scaleout

2015-06-11 Thread Florent MONTHEL
Hmm, thanks David, I will check corosync.
And maybe Consul could be a solution?

Sent from my iPhone

 On 11 juin 2015, at 11:33, David Moreau Simard dmsim...@iweb.com wrote:
 
 What I've seen work well is to set multiple A records for your RGW endpoint.
 Then, with something like corosync, you ensure that these multiple IP
 addresses are always bound somewhere.
 
 You can then have as many nodes in active-active mode as you want.
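 
 For illustration, the multiple-A-records part could look like this in a zone file (names and addresses are made up); corosync then just has to keep each address bound to some healthy node:
 
 rgw.example.com.  60  IN  A  192.0.2.11
 rgw.example.com.  60  IN  A  192.0.2.12
 rgw.example.com.  60  IN  A  192.0.2.13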
 
 -- 
 David Moreau Simard
 
 On 2015-06-11 11:29 AM, Florent MONTHEL wrote:
 Hi Team
 
 Is it possible for you to share your setup on radosgw in order to use 
 maximum of network bandwidth and to have no SPOF
 
 I have 5 servers on 10gb network and 3 radosgw on it
 We would like to setup Haproxy on 1 node with 3 rgw but :
 - SPOF become Haproxy node
 - Max bandwidth will be on HAproxy node (10gb/s)
 
 Thanks
 
 Sent from my iPhone
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Load balancing RGW and Scaleout

2015-06-11 Thread Florent MONTHEL
Hi Team

Could you share your radosgw setups that use the maximum network bandwidth 
and have no SPOF?

I have 5 servers on a 10Gb network with 3 radosgw on them.
We would like to set up HAProxy on 1 node in front of the 3 RGWs, but:
- the HAProxy node becomes a SPOF
- the maximum bandwidth is capped at the HAProxy node (10Gb/s)

Thanks

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Client OS - RHEL 7.1??

2015-06-04 Thread Florent MONTHEL
Hi Bruce

Yes, RHEL 6 comes with kernel 2.6.32 and you don't have krbd. No backport, 
already asked :)
It works on RHEL 7.1, which comes with a 3.10 kernel and krbd integrated.
Thanks
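
(A quick way to confirm on a RHEL 7.1 box, using only standard tools:

modprobe rbd
lsmod | grep rbd

If the module loads, the kernel client is there.)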

Sent from my iPhone

 On 4 Jun 2015, at 13:26, Bruce McFarland bruce.mcfarl...@taec.toshiba.com 
 wrote:
 
 I’ve always used Ubuntu for my Ceph client OS and found out in the lab that 
 Centos/RHEL 6.x doesn’t have the kernel rbd support. I wanted to investigate 
 using RHEL 7.1 for the client OS. Is there a kernel rbd module that installs 
 with RHEL  7.1?? If not are there 7.1 rpm’s or src tar balls available to 
 (relatively) easily create a RHEL 7.1 Ceph client??
  
 Thanks,
 Bruce
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] KB data and KB used

2015-06-03 Thread Florent MONTHEL
Hi list,

ceph version 0.87

On a small cluster, I’m experiencing a behaviour that I have to understand (but 
I have reproduced it on a bigger one).
I pushed 300MB of data with EC 2+1, so I see 300MB data and 476MB used, which is expected.
I deleted the data in the buckets, and after the gc collector ran I see this report:

904 pgs: 904 active+clean; 13565 kB data, 476 MB used, 40426 MB / 40903 MB avail


So 13MB of data in the cluster (thanks to gc) and still 476MB used. How can I 
release the used blocks?
If I write 25GB of data, the cluster will go to 38GB used, so OSDs will be 
reported near full or full. After deletion of those 25GB of data, will the OSDs stay 
in the near-full state?
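
(For anyone reproducing this, a per-pool breakdown of the same numbers can be had with the standard reporting commands:

ceph df detail
rados df
)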

Thanks in advance for your help

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] what's the difference between pg and pgp?

2015-05-21 Thread Florent MONTHEL
Thanks Ilya for this clear explanation!
I've been searching for that for a long time.

Best practice is to have pgp_num = pg_num in order to avoid mapping to the same set of 
OSDs, right? (Which is what you would otherwise get on a small cluster.)

Sent from my iPhone

 On 21 May 2015, at 07:49, Ilya Dryomov idryo...@gmail.com wrote:
 
 On Thu, May 21, 2015 at 12:12 PM, baijia...@126.com baijia...@126.com 
 wrote:
 Re: what's the difference between pg and pgp?
 
 pg-num is the number of PGs, pgp-num is the number of PGs that will be
 considered for placement, i.e. it's the pgp-num value that is used by
 CRUSH, not pg-num.  For example, consider pg-num = 1024 and pgp-num
 = 1.  In that case you will see 1024 PGs but all of those PGs will map
 to the same set of OSDs.
 
 When you increase pg-num you are splitting PGs, when you increase
 pgp-num you are moving them, i.e. changing sets of OSDs they map to.
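 
 A quick way to inspect and adjust this on a test pool (standard CLI; the pool name is just an example):
 
 ceph osd pool get testpool pg_num
 ceph osd pool get testpool pgp_num
 ceph osd pool set testpool pgp_num 1024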
 
 Thanks,
 
Ilya
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] what's the difference between pg and pgp?

2015-05-21 Thread Florent MONTHEL
To be sure I understand: if I create a 2x replicated pool toto with 1024 
PGs and pgp_num = 1, the PGs and data of pool toto will be mapped to only 2 OSDs on 2 
servers, right?

Sent from my iPhone

 On 21 May 2015, at 18:58, Florent MONTHEL fmont...@flox-arts.net wrote:
 
 Thanks Ilya for this clear explanation!
 I'm searching that for a long time
 
 Best practices is to have pg = pgp in order to avoid using of the same set 
 of osd right ? (On a small cluster you will have)
 
 Sent from my iPhone
 
 On 21 May 2015, at 07:49, Ilya Dryomov idryo...@gmail.com wrote:
 
 On Thu, May 21, 2015 at 12:12 PM, baijia...@126.com baijia...@126.com 
 wrote:
 Re: what's the difference between pg and pgp?
 
 pg-num is the number of PGs, pgp-num is the number of PGs that will be
 considered for placement, i.e. it's the pgp-num value that is used by
 CRUSH, not pg-num.  For example, consider pg-num = 1024 and pgp-num
 = 1.  In that case you will see 1024 PGs but all of those PGs will map
 to the same set of OSDs.
 
 When you increase pg-num you are splitting PGs, when you increase
 pgp-num you are moving them, i.e. changing sets of OSDs they map to.
 
 Thanks,
 
   Ilya
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Documentation regarding content of each pool of radosgw

2015-05-18 Thread Florent MONTHEL
Hi List,

I would like to know the content of each radosgw pool in order to 
understand their usage,
so I have checked the content with rados ls -p poolname.
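
(For reference, the whole sweep can be done with a small shell loop; only rados lspools and rados ls are assumed:

for p in $(rados lspools); do echo "== $p"; rados -p "$p" ls; done
)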

### .intent-log
= this pool is empty on my side. what’s the need ?


### .log
= this pool is empty on my side. what’s the need ?


### .rgw
= bucketname and metadata to have id of the region and id of user owner ?

Example :
.bucket.meta.montest:default.69634.14
montest

default is the region right ?
14 is user id right ?
69634 what’s this id ? Id of pool ?


### .rgw.buckets
= Objects in the pool

Example :
default.70632.2_Guzzle/Plugin/Cookie/CookieJar/CookieJarInterface.php

default is the region right ?
2 is owner id right ?
70632 what’s this id ? ID of pool ?


### .rgw.buckets.extra
= this pool is empty on my side. what’s the need ?


### .rgw.buckets.index
= content of buckets index

Example :
.dir.default.69634.12

default is the region right ?
12 is owner id right ?
69634 what’s this id ? ID of pool ?


### .rgw.control
= don’t know…

Example :
notify.1
notify.5


### .rgw.gc
= don’t know…

Example :
gc.9
gc.31


### .rgw.root
= content of region

Example :
default.region
region_info.default
zone_info.default


### .usage
= content of usage data. 1 object per user ?

Example :
usage.17

id 17 is id of user ?


### .users
= content of access key. s3 only I think

Example :
Z78IS5F47QQJTB2DNVVC
HW698PZQDTZVLLO79NES
snausr016


### .users.email
= email list


### .users.swift
= subuser swift only

Example :
fmonthel:swift


### .users.uid
= User id list with id.buckets

Example :
fmonthel
fmonthel.buckets

what is the need of fmonthel.buckets ?

Thanks for your helping___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Avoid buckets creation

2015-05-18 Thread Florent MONTHEL
Hi List,

We would like to prevent users or subusers (S3 or 
Swift) from creating buckets directly.
We would like to do it through an administration interface (with a user that has special 
rights) in order to normalize bucket names.
Is it possible to do that (with caps or a parameter)?
Thanks

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] client.radosgw.gateway for 2 radosgw servers

2015-05-18 Thread Florent MONTHEL
Hi List,

I would like to know the best way to have several radosgw servers on the same 
cluster with the same ceph.conf file

For now, I have 2 radosgw servers, but I have a different conf file on each, with the below 
section on parrot:

[client.radosgw.gateway]
host = parrot
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = 
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1

And below section on cougar node :

[client.radosgw.gateway]
host = cougar
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = 
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1

Is it possible to have 2 different keys for parrot and cougar and 2 client.radosgw 
sections, in order to have the same ceph.conf for the whole cluster (and use 
ceph-deploy to push the conf)?
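
One possible layout, sketched under the assumption that each gateway is started with its own client name (-n client.radosgw.<host>) and its own keyring; section names are examples:

[client.radosgw.parrot]
host = parrot
keyring = /etc/ceph/ceph.client.radosgw.parrot.keyring
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0

[client.radosgw.cougar]
host = cougar
keyring = /etc/ceph/ceph.client.radosgw.cougar.keyring
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0

With per-host sections like these, the same ceph.conf can be pushed everywhere and each node only starts the section whose host matches.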

Thanks ___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSD in ceph.conf

2015-05-06 Thread Florent MONTHEL
Hi teqm,

Is it necessary to list in ceph.conf all the OSDs that we have in the
cluster?
We rebooted a cluster today (5 nodes, RHEL 6.5) and some OSDs seem to have
changed ID, so the crush map no longer matches reality.
Thanks

*Florent Monthel*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph RBD devices management OpenSVC integration

2015-03-26 Thread Florent MONTHEL
Hi Team,

I’ve just written a blog post about the integration of Ceph RBD device 
management in an OpenSVC service:
http://www.flox-arts.net/article30/ceph-rbd-devices-management-with-opensvc-service
The next blog post will be about snapshots & clones (also integrated in OpenSVC).
Thanks

Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Mount CEPH RBD devices into OpenSVC service

2015-02-08 Thread Florent MONTHEL
Hi List,

Fisrt tutorial to map/unmap RBD devices into OpenSVC service : 
http://www.flox-arts.net/article29/monter-un-disque-ceph-dans-service-opensvc-step-1
Sorry it’s in French

Next step: Christophe Varoqui has just integrated Ceph into the core OpenSVC code 
with snapshots & clones management; I will write a tutorial in a few days.


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD deprecated?

2015-02-05 Thread Florent MONTHEL
Hi all

It's quite difficult to introduce ICE block mode in large enterprises because 
major companies still run RHEL 6 and 5, so there is no suitable kernel version. 
RHEL 7 is not yet fully qualified in enterprises (in real life), and after that we 
have thousands of servers to migrate...
Is there no way to have an RPM-like solution to use RBD block devices on RHEL 6/5?

Sent from my iPhone

 On 5 Feb 2015, at 19:17, Don Doerner dondoer...@sbcglobal.net wrote:
 
 Ken,
 
 Thanks for the reply.
 
 It's really good news, that RBD is considered strategic.  I'm guessing I can 
 use the firefly kernel modules on a giant ceph system, as long as RHEL7 is in 
 play?  No serious changes in that code from firefly to giant (I'm hoping)?
  
 Regards,
 
 -don-
 
 
 On Thursday, February 5, 2015 10:05 AM, Ken Dreyer kdre...@redhat.com wrote:
 
 
 On 02/05/2015 08:55 AM, Don Doerner wrote:
 
  I have been using Ceph to provide block devices for various, nefarious
  purposes (mostly testing ;-).  But as I have worked with various Linux
  distributions (RHEL7, CentOS6, CentOS7) and various Ceph releases
  (firefly, giant), I notice that the only combination for which I seem
  able to find the needed kernel modules (rbd, libceph) is RHEL7-firefly.
 
 
 Hi Don,
 
 The RBD kernel module is not deprecated; quite the opposite in fact.
 
 A year ago things were a bit rough regarding supporting the Ceph kernel
 modules on RHEL 6 and 7. All Ceph kernel module development goes
 upstream first into Linus' kernel tree, and that tree is very different
 than what ships in RHEL 6 (2.6.32 plus a lot of patches) and RHEL 7
 (3.10.0 plus a lot of patches). This meant that it was historically much
 harder for the Ceph developer community to integrate what was going on
 upstream with what was happening in the downstream RHEL kernels.
 
 Currently, Red Hat's plan is to ship rbd.ko and some of the associated
 firefly userland bits in RHEL 7.1. You mention that you've been testing
 on RHEL 7, so I'm guessing you're got a RHEL subscription. As it turns
 out, you can try the new kernel package out today in the RHEL 7.1 Beta
 that's available to all RHEL subscribers. It's a beta, so please open
 support requests with Red Hat if you happen to hit bugs with those new
 packages.
 
 Unfortunately CentOS does not rebuild and publish the public RHEL Betas,
 so for CentOS 7, you'll have to wait until RHEL 7.1 reaches GA and
 CentOS 7.1 rebuilds it. (I suppose you could jump ahead of the CentOS
 developers here and rebuild your own kernel package and ceph userland if
 you're really eager... but you're really on your own there :)
 
 - Ken
 
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread Florent MONTHEL
Hi Mad

3Gbps, so you will have SATA SSDs?
I think you should take 6Gbps controllers to make sure you don't hit SATA 
limitations.
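
(For the arithmetic behind that: a 3 Gbps SATA link carries roughly 300 MB/s of payload after 8b/10b encoding, about the same as the ~300 MB/s the SSD label promises, so the link itself becomes the ceiling; a 6 Gbps link leaves roughly 600 MB/s of headroom per device.)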
Thanks

Sent from my iPhone

 On 2 Feb 2015, at 09:27, mad Engineer themadengin...@gmail.com wrote:
 
 I am trying to create a 5 node cluster using 1 Tb SSD disks with 2 OSD
 on each server.Each server will have 10G NIC.
 SSD disks are of good quality and as per label it can support ~300 MBps
 
 What are the limiting factor that prevents from utilizing full speed
 of SSD disks?
 
 Disk  controllers are 3 Gbps,so if i am not wrong this is the maximum
 i can achieve per host.Can ceph distribute write parallely and over
 come this limit of 3Gbps controller and thus fully utilize the
 capability of ssd disks.
 
 I have a working 3 node ceph setup deployed using ceph-deploy using
 latest firefly and 3.16 kernel but this is on low quality SATA disks
 and i am planning to upgrade to ssd
 
 can some one please help me in understanding this better.
 
 Thanks
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread Florent MONTHEL
Hi,

Writes will be distributed every 4MB (the size of a format-1 RBD object).
Format-2 images are not fully supported by krbd (but with them you can customize the object 
size and striping).

You should take:
- 6Gbit SATA SSDs
- or 12Gbit SAS SSDs (more expensive)



Florent Monthel





 On 2 Feb 2015, at 18:29, mad Engineer themadengin...@gmail.com wrote:
 
 Thanks Florent,
can ceph distribute write to multiple hosts?
 
 On Mon, Feb 2, 2015 at 10:17 PM, Florent MONTHEL fmont...@flox-arts.net 
 wrote:
 Hi Mad
 
 3Gbps so you will have SSD Sata ?
 I think you should take 6Gbps controllers to make sure so not have Sata 
 limitations
 Thanks
 
 Sent from my iPhone
 
 On 2 Feb 2015, at 09:27, mad Engineer themadengin...@gmail.com wrote:
 
 I am trying to create a 5 node cluster using 1 Tb SSD disks with 2 OSD
 on each server.Each server will have 10G NIC.
 SSD disks are of good quality and as per label it can support ~300 MBps
 
 What are the limiting factor that prevents from utilizing full speed
 of SSD disks?
 
 Disk  controllers are 3 Gbps,so if i am not wrong this is the maximum
 i can achieve per host.Can ceph distribute write parallely and over
 come this limit of 3Gbps controller and thus fully utilize the
 capability of ssd disks.
 
 I have a working 3 node ceph setup deployed using ceph-deploy using
 latest firefly and 3.16 kernel but this is on low quality SATA disks
 and i am planning to upgrade to ssd
 
 can some one please help me in understanding this better.
 
 Thanks
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph Performance random write is more then sequential

2015-02-01 Thread Florent MONTHEL
Hi Sumit

Do you have cache pool tiering activated?
Any feedback regarding your architecture?
Thanks

Sent from my iPad

 On 1 Feb 2015, at 15:50, Sumit Gaur sumitkg...@gmail.com wrote:
 
 Hi 
 I have installed a 6 node ceph cluster and to my surprise, when I ran rados 
 bench, I saw that random write has a better performance number than sequential 
 write. This is the opposite of a normal disk write. Can somebody let me know if I 
 am missing some Ceph architecture point here?
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD snap unprotect need ACLs on all pools ?

2015-02-01 Thread Florent MONTHEL
Hi,

I have an ACL with a key/user on 1 pool (client.condor rwx on pool 
rbdpartigsanmdev01).

I would like to unprotect a snapshot but I get the error below:

rbd -n client.condor snap unprotect 
rbdpartigsanmdev01/flaprdsvc01_lun003@sync#1.cloneref.2015-02-01.19:07:21
2015-02-01 22:53:00.903790 7f4d0036e760 -1 librbd: can't get children for pool 
.rgw.root
rbd: unprotecting snap failed: (1) Operation not permitted


After checking the source code 
(https://github.com/ceph/ceph/blob/master/src/librbd/internal.cc line 715), 
on an unprotect action Ceph wants to check all the pools of the cluster, 
and we have access to only 1 pool of the cluster…

Is it a bug? Or is my usage not right?
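
(For reference, the caps layout that the RBD docs of this era usually suggest for clients doing clone/snapshot operations adds a class-read on the rbd_children prefix; whether it is enough for this particular unprotect path is worth testing:

ceph auth caps client.condor mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbdpartigsanmdev01'
)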

Thanks


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Cache tiering writeback mode, object in cold and hot pool ?

2015-01-31 Thread Florent MONTHEL
Hi list

I have a question, in order to estimate the capacity available on each node.

With a cache tier pool in writeback mode, is an object in the hot pool removed 
from the cold pool? Or is the object in both the hot and the cold pool (cache)?
Thanks


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Rbd device on RHEL 6.5

2015-01-31 Thread Florent MONTHEL
Hi list

Do we have any way to map an RBD device on RHEL 6.5 (which runs a 2.6.32 
kernel)?
Thanks

Sent from my iPad
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD client STRIPINGV2 support

2015-01-24 Thread Florent MONTHEL
Hi Ilya,

Do you know if this enhancement will be developed soon 
(http://tracker.ceph.com/issues/3837)? 
Is it already scheduled for one of the next releases or not?

It could really improve block performance for Oracle / database workloads.

Thanks for your feedback

Florent Monthel





 On 27 Dec 2014, at 22:37, Ilya Dryomov ilya.dryo...@inktank.com wrote:
 
 On Sat, Dec 27, 2014 at 6:46 PM, Florent MONTHEL fmont...@flox-arts.net 
 wrote:
 Hi,
 
 I’ve just created image with striping support like below (image type 2 - 16
 stripes of 64K with 4MB object) :
 
 rbd create sandevices/flaprdweb01_lun010 --size 102400 --stripe-unit 65536
 --stripe-count 16 --order 22  --image-format 2
 
 rbd info sandevices/flaprdweb01_lun010
 rbd image 'flaprdweb01_lun010':
 size 102400 MB in 25600 objects
 order 22 (4096 kB objects)
 block_name_prefix: rbd_data.40c52ae8944a
 format: 2
 features: layering, striping
 stripe unit: 65536 bytes
 stripe count: 16
 
 But when I try to map device, I’ve unsupported striping alert on my dmesg
 console.
 
 rbd map sandevices/flaprdweb01_lun010 --name client.admin
 rbd: sysfs write failed
 rbd: map failed: (22) Invalid argument
 
 dmesg | tail
 [15352.510385] rbd: image flaprdweb01_lun010: unsupported stripe unit (got
 65536 want 4194304)
 
 Do you know if it’s scheduled to support STRIPINGV2 on the rbd client ?
 How can I mount my device ?
 
 You can't - krbd doesn't support it yet.  It's planned, in fact it's
 the top item on the krbd list.  Currently STRIPINGV2 images can be
 mapped only if su=4M and sc=1 (i.e. if striping params match v1 images)
 and that's the error you are tripping over.
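 
 (So, as a workaround sketch: a format-2 image created without explicit striping options keeps the defaults krbd can map, e.g.
 
 rbd create sandevices/newimage --size 102400 --image-format 2
 
 where the image name is just a placeholder.)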
 
 Thanks,
 
Ilya

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Avoid several RBD mapping - Auth Namespace

2015-01-17 Thread Florent MONTHEL
Hi list

No feedbacks ? It's not developed use case ? :(
Thanks

Sent from my iPhone

 On 3 Jan 2015, at 18:45, Florent MONTHEL fmont...@flox-arts.net wrote:
 
 Hi Team,
 
 I’ve 3 servers that have to mount 3 block image into sandevices pool :
 
 SERVER FALCON :
 sandevices/falcon_lun0
 sandevices/falcon_lun1
 sandevices/falcon_lun2
 
 SERVER RAVEN :
 sandevices/raven_lun0
 sandevices/raven_lun1
 sandevices/raven_lun2
 
 SERVER OSPREY :
 sandevices/osprey_lun0
 sandevices/osprey_lun1
 sandevices/osprey_lun2
 
 I created 3 client keys (raven/falcon/osprey) with below caps : ceph auth 
 caps client.raven mon 'allow r' osd 'allow rwx pool=sandevices'
 
 First question :
 
 RBD allow to mount several times same image on the same server :
 
 root@raven:/etc/ceph# rbd showmapped
 id pool   image snap device
 1  sandevices raven_lun0   -/dev/rbd1 
 2  sandevices raven_lun1   -/dev/rbd2 
 root@raven:/etc/ceph# ls -l /dev/rbd/sandevices/raven_lun0   
 lrwxrwxrwx 1 root root 10 Jan  3 18:32 /dev/rbd/sandevices/raven_lun0 - 
 ../../rbd2
 
 RBD1 will failed ?
 I’m trying to play with lock but as indicated lock don’t avoid several rbd 
 map operation. I tried to « no share map option » without succeed… (rbd map 
 doesn’t accept -o on my server)
 
 Do you have the best way to avoid multiple rbd map operations on the same 
 server ?
 
 Second question :
 
 On the same way, I would like to avoid osprey server to mount falcon’s image. 
 osprey has to map only its own LUNs. I’m trying to play with namespace but 
 seem not yet fully supported.
 Do you know how I can manage that (without 1 pool per server because I have 
 thousands servers...) ?
 
 
 Thanks
 
 
 
 Florent Monthel
 
 
 
 
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Cache pool tiering SSD journal

2015-01-17 Thread Florent MONTHEL
Hi list,

With the cache pool tiering (in writeback mode) enhancement, should I keep using 
SSD journals?
Can we have 1 big SSD pool acting as a cache for all the low-cost storage pools?
Thanks

Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Avoid several RBD mapping - Auth Namespace

2015-01-03 Thread Florent MONTHEL
Hi Team,

I’ve 3 servers that each have to map 3 block images in the sandevices pool :

SERVER FALCON :
sandevices/falcon_lun0
sandevices/falcon_lun1
sandevices/falcon_lun2

SERVER RAVEN :
sandevices/raven_lun0
sandevices/raven_lun1
sandevices/raven_lun2

SERVER OSPREY :
sandevices/osprey_lun0
sandevices/osprey_lun1
sandevices/osprey_lun2

I created 3 client keys (raven/falcon/osprey) with below caps : ceph auth caps 
client.raven mon 'allow r' osd 'allow rwx pool=sandevices'

First question :

RBD allows mapping the same image several times on the same server :

root@raven:/etc/ceph# rbd showmapped
id pool   image snap device
1  sandevices raven_lun0   -/dev/rbd1 
2  sandevices raven_lun1   -/dev/rbd2 
root@raven:/etc/ceph# ls -l /dev/rbd/sandevices/raven_lun0   
lrwxrwxrwx 1 root root 10 Jan  3 18:32 /dev/rbd/sandevices/raven_lun0 - 
../../rbd2

Will rbd1 fail?
I’m trying to play with locks, but as documented a lock doesn’t prevent several rbd map 
operations. I tried the "no share" map option without success… (rbd map doesn’t 
accept -o on my server)

What is the best way to avoid multiple rbd map operations on the same server?

Second question :

In the same way, I would like to prevent the osprey server from mapping falcon’s images; 
osprey has to map only its own LUNs. I’m trying to play with namespaces, but they seem 
not yet fully supported.
Do you know how I can manage that (without 1 pool per server, because I have 
thousands of servers...)?


Thanks



Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD client STRIPINGV2 support

2014-12-27 Thread Florent MONTHEL
Hi,

I’ve just created image with striping support like below (image type 2 - 16 
stripes of 64K with 4MB object) :

rbd create sandevices/flaprdweb01_lun010 --size 102400 --stripe-unit 65536 
--stripe-count 16 --order 22  --image-format 2

rbd info sandevices/flaprdweb01_lun010
rbd image 'flaprdweb01_lun010':
size 102400 MB in 25600 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.40c52ae8944a
format: 2
features: layering, striping
stripe unit: 65536 bytes
stripe count: 16

But when I try to map device, I’ve unsupported striping alert on my dmesg 
console.

rbd map sandevices/flaprdweb01_lun010 --name client.admin
rbd: sysfs write failed
rbd: map failed: (22) Invalid argument

dmesg | tail
[15352.510385] rbd: image flaprdweb01_lun010: unsupported stripe unit (got 
65536 want 4194304)

Do you know if STRIPINGV2 support is scheduled for the rbd kernel client?
How can I map my device?

Thanks in advance 


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.90 released

2014-12-26 Thread Florent MONTHEL
Hi Sage

To be sure I understand correctly: if I have reached the max number of PGs 
per OSD with, for example, 4 pools, and I have to create 2 new pools without 
adding OSDs, I need to migrate the old pools to pools with fewer PGs, right?
Thanks

Sent from my iPhone

 On 23 Dec 2014, at 15:39, Sage Weil sw...@redhat.com wrote:
 
 On Tue, 23 Dec 2014, René Gallati wrote:
 Hello,
 
 so I upgraded my cluster from 89 to 90 and now I get:
 
 ~# ceph health
 HEALTH_WARN too many PGs per OSD (864 > max 300)
 
 That is a new one. I had too few but never too many. Is this a problem that
 needs attention, or ignorable? Or is there even a command now to shrink PGs?
 
 It's a new warning.
 
 You can't reduce the PG count without creating new (smaller) pools 
 and migrating data.  You can ignore the message, though, and make it go 
 away by adjusting the 'mon pg warn max per osd' (defaults to 300).  Having 
 too many PGs increases the memory utilization and can slow things down 
 when adapting to a failure, but certainly isn't fatal.
 
 The message did not appear before, I currently have 32 OSDs over 8 hosts and 
 9
 pools, each with 1024 PG as was the recommended number according to the OSD *
 100 / replica formula, then round to next power of 2. The cluster has been
 increased by 4 OSDs, 8th host only days before. That is to say, it was at 28
 OSD / 7 hosts / 9 pools but after extending it with another host, ceph 89 did
 not complain.
 
 Using the formula again I'd actually need to go to 2048PGs in pools but ceph
 is telling me to reduce the PG count now?
 
 The guidance in the docs is (was?) a bit confusing.  You need to take the 
 *total* number of PGs and see how many of those per OSD there are, 
 not create as many equally-sized pools as you want.  There have been 
 several attempts to clarify the language to avoid this misunderstanding 
 (you're definitely not the first).  If it's still unclear, suggestions 
 welcome!
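 
 Concretely, with the numbers in this thread (and assuming 3x replication, which they imply): 9 pools x 1024 PGs = 9216 PGs, times 3 replicas = 27648 PG copies, spread over 32 OSDs = 864 PGs per OSD, which is exactly the figure in the warning. Sizing for ~100 PGs per OSD therefore means roughly 32 * 100 / 3 ≈ 1024 PGs in total across all pools, not per pool.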
 
 sage
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Best way to simulate SAN masking/mapping with CEPH

2014-12-23 Thread Florent MONTHEL
Hi Users List 

We have a SAN solution with zoning/masking/mapping to segregate LUN allocation 
and avoid security access issues (e.g. server srv01 accessing srv02's LUNs).
I think with Ceph we can only put security at the pool level, right? We can’t drill 
down to the LUN with a client security entry like the one below:

client.serv01 mon 'allow r' osd  'allow rwx pool=serv01/lununxprd01'

So what would be your recommendation for my use case: 1 pool per server / per 
cluster? Is there a limit on the number of pools?

Thanks


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD JOURNAL not associated - ceph-disk list ?

2014-12-23 Thread Florent MONTHEL
Hi Loic,

Hmm… I will check. However, the journal symlink to the partition is correctly created 
without any action on my side:

journal - /dev/disk/by-partuuid/36741e5b-eee0-4368-9736-a31701a186a1

But there is no journal_uuid file with ceph-deploy.

Florent Monthel





 On 23 Dec 2014, at 00:51, Loic Dachary l...@dachary.org wrote:
 
 Hi Florent,
 
 On 22/12/2014 19:49, Florent MONTHEL wrote:
 Hi Loic, Hi Robert,
 
 Thanks. I’m integrating CEPH OSD with OpenSVC services 
 (http://www.opensvc.com) so I have to generate UUID myself in order to map 
 services
 It’s the reason for that I’m generating sgdisk commands with my own UUID
 
 After activating OSD, I don’t have mapping osd  journal with cep-disk 
 command
 
 root@raven:/var/lib/ceph/osd/ceph-5# ceph-disk list
 /dev/sda other, ext4, mounted on /
 /dev/sdb swap, swap
 /dev/sdc :
 /dev/sdc1 ceph journal
 /dev/sdd :
 /dev/sdd1 ceph data, active, cluster ceph, osd.3
 /dev/sde :
 /dev/sde1 ceph journal
 /dev/sdf :
 /dev/sdf1 ceph data, active, cluster ceph, osd.4
 /dev/sdg :
 /dev/sdg1 ceph journal
 /dev/sdh :
 /dev/sdh1 ceph data, active, cluster ceph, osd.5
 
 After below command (osd 5), ceph-deploy didn’t create file journal_uuid  :
 
 ceph-deploy --overwrite-conf osd create 
 raven:/dev/disk/by-partuuid/6356fd8d-0d84-432a-b9f4-3d02f94afdff:/dev/disk/by-partuuid/36741e5b-eee0-4368-9736-a31701a186a1
 
 root@raven:/var/lib/ceph/osd/ceph-5# ls -l
 total 56
 -rw-r--r--   1 root root  192 Dec 21 23:55 activate.monmap
 -rw-r--r--   1 root root3 Dec 21 23:55 active
 -rw-r--r--   1 root root   37 Dec 21 23:55 ceph_fsid
 drwxr-xr-x 184 root root 8192 Dec 22 19:25 current
 -rw-r--r--   1 root root   37 Dec 21 23:55 fsid
 lrwxrwxrwx   1 root root   58 Dec 21 23:55 journal - 
 /dev/disk/by-partuuid/36741e5b-eee0-4368-9736-a31701a186a1
 -rw---   1 root root   56 Dec 21 23:55 keyring
 -rw-r--r--   1 root root   21 Dec 21 23:55 magic
 -rw-r--r--   1 root root6 Dec 21 23:55 ready
 -rw-r--r--   1 root root4 Dec 21 23:55 store_version
 -rw-r--r--   1 root root   53 Dec 21 23:55 superblock
 -rw-r--r--   1 root root0 Dec 22 19:24 sysvinit
 -rw-r--r--   1 root root2 Dec 21 23:55 whoami
 
 
 So I created the journal_uuid file manually for each OSD, and the mapping becomes 
 OK with ceph-disk :)
 
 root@raven:/var/lib/ceph/osd/ceph-5# echo 36741e5b-eee0-4368-9736-a31701a186a1 > journal_uuid
 
 I think this is an indication that when you ceph-disk prepare the device the 
 journal_uuid was not provided and therefore the journal_uuid creation was 
 skipped:
 
 http://workbench.dachary.org/ceph/ceph/blob/giant/src/ceph-disk#L1235
 called from
 http://workbench.dachary.org/ceph/ceph/blob/giant/src/ceph-disk#L1338
 
 Cheers
 
 It’s ok now :
 
 root@raven:/var/lib/ceph/osd/ceph-5# ceph-disk list
 /dev/sda other, ext4, mounted on /
 /dev/sdb swap, swap
 /dev/sdc :
 /dev/sdc1 ceph journal, for /dev/sdd1
 /dev/sdd :
 /dev/sdd1 ceph data, active, cluster ceph, osd.3, journal /dev/sdc1
 /dev/sde :
 /dev/sde1 ceph journal, for /dev/sdf1
 /dev/sdf :
 /dev/sdf1 ceph data, active, cluster ceph, osd.4, journal /dev/sde1
 /dev/sdg :
 /dev/sdg1 ceph journal, for /dev/sdh1
 /dev/sdh :
 /dev/sdh1 ceph data, active, cluster ceph, osd.5, journal /dev/sdg1
 
 
 Thanks rob...@leblancnet.us for the clue ;)
 
 *Florent Monthel**
 *
 
 
 
 
 
 On 21 Dec 2014, at 18:08, Loic Dachary l...@dachary.org wrote:
 
 Hi Florent,
 
 It is unusual to manually run the sgdisk. Is there a reason why you need to 
 do this instead of letting ceph-disk prepare do it for you ?
 
 The information about the association between journal and data is only 
 displayed when the OSD has been activated. See 
 http://workbench.dachary.org/ceph/ceph/blob/giant/src/ceph-disk#L2246
 Cheers
 
 On 21/12/2014 15:11, Florent MONTHEL wrote:
 Hi,
 
 I would like to separate OSD and journal on 2 différent disks so I have :
 
 1 disk /dev/sde (1GB) for journal = type code JOURNAL_UUID = 
 '45b0969e-9b03-4f30-b4c6-b4b80ceff106'
 1 disk /dev/sdd (5GB) for OSD = type code OSD_UUID = 
 '4fbd7e29-9d25-41b8-afd0-062c0ceff05d'
 
 I execute below commands :
 
 FOR JOURNAL :
 sgdisk --new=1:0:1023M --change-name=1:ceph journal 
 --partition-guid=1:e89f18cc-ae46-4573-8bca-3e782d45849c 
 --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sde
 
 FOR OSD:
 sgdisk --new=1:0:5119M --change-name=1:ceph data 
 --partition-guid=1:7476f0a8-a6cd-4224-b64b-a4834c32a73e 
 --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdd
 
 And I'm preparing OSD :
 ceph-disk prepare --osd-uuid 7476f0a8-a6cd-4224-b64b-a4834c32a73e 
 --journal-uuid e89f18cc-ae46-4573-8bca

Re: [ceph-users] OSD JOURNAL not associated - ceph-disk list ?

2014-12-22 Thread Florent MONTHEL
Hi Loic, Hi Robert,

Thanks. I’m integrating Ceph OSDs with OpenSVC services (http://www.opensvc.com), 
so I have to generate the UUIDs myself in order to map them to 
services.
That’s the reason I’m generating the sgdisk commands with my own UUIDs.

After activating the OSDs, I don’t get the osd & journal mapping with the ceph-disk command:

root@raven:/var/lib/ceph/osd/ceph-5# ceph-disk list
/dev/sda other, ext4, mounted on /
/dev/sdb swap, swap
/dev/sdc :
 /dev/sdc1 ceph journal
/dev/sdd :
 /dev/sdd1 ceph data, active, cluster ceph, osd.3
/dev/sde :
 /dev/sde1 ceph journal
/dev/sdf :
 /dev/sdf1 ceph data, active, cluster ceph, osd.4
/dev/sdg :
 /dev/sdg1 ceph journal
/dev/sdh :
 /dev/sdh1 ceph data, active, cluster ceph, osd.5

After the command below (osd.5), ceph-deploy didn’t create the journal_uuid file:

ceph-deploy --overwrite-conf osd create 
raven:/dev/disk/by-partuuid/6356fd8d-0d84-432a-b9f4-3d02f94afdff:/dev/disk/by-partuuid/36741e5b-eee0-4368-9736-a31701a186a1

root@raven:/var/lib/ceph/osd/ceph-5# ls -l
total 56
-rw-r--r--   1 root root  192 Dec 21 23:55 activate.monmap
-rw-r--r--   1 root root3 Dec 21 23:55 active
-rw-r--r--   1 root root   37 Dec 21 23:55 ceph_fsid
drwxr-xr-x 184 root root 8192 Dec 22 19:25 current
-rw-r--r--   1 root root   37 Dec 21 23:55 fsid
lrwxrwxrwx   1 root root   58 Dec 21 23:55 journal - 
/dev/disk/by-partuuid/36741e5b-eee0-4368-9736-a31701a186a1
-rw---   1 root root   56 Dec 21 23:55 keyring
-rw-r--r--   1 root root   21 Dec 21 23:55 magic
-rw-r--r--   1 root root6 Dec 21 23:55 ready
-rw-r--r--   1 root root4 Dec 21 23:55 store_version
-rw-r--r--   1 root root   53 Dec 21 23:55 superblock
-rw-r--r--   1 root root0 Dec 22 19:24 sysvinit
-rw-r--r--   1 root root2 Dec 21 23:55 whoami


So I created for each osd, file journal_uuid » manually and mapping become OK 
with ceph-disk :)

root@raven:/var/lib/ceph/osd/ceph-5# echo 36741e5b-eee0-4368-9736-a31701a186a1 > journal_uuid
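A small sketch to do the same for every OSD on the host (assuming the journal symlink always points at /dev/disk/by-partuuid/<uuid>, as above):

for d in /var/lib/ceph/osd/ceph-*; do
    # keep only the partuuid part of the symlink target
    basename "$(readlink "$d/journal")" > "$d/journal_uuid"
done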

It’s ok now :

root@raven:/var/lib/ceph/osd/ceph-5# ceph-disk list
/dev/sda other, ext4, mounted on /
/dev/sdb swap, swap
/dev/sdc :
 /dev/sdc1 ceph journal, for /dev/sdd1
/dev/sdd :
 /dev/sdd1 ceph data, active, cluster ceph, osd.3, journal /dev/sdc1
/dev/sde :
 /dev/sde1 ceph journal, for /dev/sdf1
/dev/sdf :
 /dev/sdf1 ceph data, active, cluster ceph, osd.4, journal /dev/sde1
/dev/sdg :
 /dev/sdg1 ceph journal, for /dev/sdh1
/dev/sdh :
 /dev/sdh1 ceph data, active, cluster ceph, osd.5, journal /dev/sdg1


Thanks rob...@leblancnet.us for the clue ;)

Florent Monthel





 On 21 Dec 2014, at 18:08, Loic Dachary l...@dachary.org wrote:
 
 Hi Florent,
 
 It is unusual to manually run the sgdisk. Is there a reason why you need to 
 do this instead of letting ceph-disk prepare do it for you ?
 
 The information about the association between journal and data is only 
 displayed when the OSD has been activated. See 
 http://workbench.dachary.org/ceph/ceph/blob/giant/src/ceph-disk#L2246
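 For example, with the partitions prepared below, something like this should make the association show up in ceph-disk list (sketch only):

 ceph-disk activate /dev/sdd1
 ceph-disk list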
 Cheers
 
 On 21/12/2014 15:11, Florent MONTHEL wrote:
 Hi,
 
 I would like to separate the OSD and the journal on 2 different disks, so I have:
 
 1 disk /dev/sde (1GB) for journal = type code JOURNAL_UUID = 
 '45b0969e-9b03-4f30-b4c6-b4b80ceff106'
 1 disk /dev/sdd (5GB) for OSD = type code OSD_UUID = 
 '4fbd7e29-9d25-41b8-afd0-062c0ceff05d'
 
 I execute below commands :
 
 FOR JOURNAL :
 sgdisk --new=1:0:1023M --change-name=1:"ceph journal" 
 --partition-guid=1:e89f18cc-ae46-4573-8bca-3e782d45849c 
 --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sde
 
 FOR OSD:
 sgdisk --new=1:0:5119M --change-name=1:"ceph data" 
 --partition-guid=1:7476f0a8-a6cd-4224-b64b-a4834c32a73e 
 --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdd
 
 And I'm preparing OSD :
 ceph-disk prepare --osd-uuid 7476f0a8-a6cd-4224-b64b-a4834c32a73e 
 --journal-uuid e89f18cc-ae46-4573-8bca-3e782d45849c --fs-type xfs --cluster 
 ceph -- /dev/sdd1 /dev/sde1
 
 
 After that, I don't see the relation between /dev/sde1 & /dev/sdd1
 
 root@falcon:/srv/ceph01adm001/data/cluster-ceph01# ceph-disk list
 /dev/sdd :
 /dev/sdd1 ceph data, prepared, cluster ceph
 /dev/sde :
 /dev/sde1 ceph journal
 
 Is it normal ?
 
 Thanks
 
 
 Florent Monthel
 
 
 
 
 
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 -- 
 Loïc Dachary, Artisan Logiciel Libre
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSD JOURNAL not associated - ceph-disk list ?

2014-12-21 Thread Florent MONTHEL
Hi,

I would like to separate the OSD and the journal on 2 different disks, so I have:

1 disk /dev/sde (1GB) for journal = type code JOURNAL_UUID = 
'45b0969e-9b03-4f30-b4c6-b4b80ceff106'
1 disk /dev/sdd (5GB) for OSD = type code OSD_UUID = 
'4fbd7e29-9d25-41b8-afd0-062c0ceff05d'

I execute below commands :

FOR JOURNAL :
sgdisk --new=1:0:1023M --change-name=1:"ceph journal" 
--partition-guid=1:e89f18cc-ae46-4573-8bca-3e782d45849c 
--typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sde

FOR OSD:
sgdisk --new=1:0:5119M --change-name=1:"ceph data" 
--partition-guid=1:7476f0a8-a6cd-4224-b64b-a4834c32a73e 
--typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdd

And I'm preparing OSD :
ceph-disk prepare --osd-uuid 7476f0a8-a6cd-4224-b64b-a4834c32a73e 
--journal-uuid e89f18cc-ae46-4573-8bca-3e782d45849c --fs-type xfs --cluster 
ceph -- /dev/sdd1 /dev/sde1


After that, I don't see the relation between /dev/sde1 & /dev/sdd1

root@falcon:/srv/ceph01adm001/data/cluster-ceph01# ceph-disk list
/dev/sdd :
 /dev/sdd1 ceph data, prepared, cluster ceph
/dev/sde :
 /dev/sde1 ceph journal

Is it normal ?

Thanks


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Florent MONTHEL
Hi,

I'm buying several servers to test CEPH and I would like to put the journals on SSD drives (maybe it's not necessary for all use cases).
Could you help me work out how many SSDs I need (SSDs are very expensive and a GB-price business-case killer…)? I don't want to run into an SSD bottleneck; is there some abacus / rule of thumb? A rough sketch with assumed figures follows the configs below.
I think I will go with CONF 2 & 3 below.


CONF 1 DELL 730XC "Low Perf":
10 SATA 7.2K 3.5" 4TB + 2 SSD 2.5" 200GB intensive write

CONF 2 DELL 730XC "Medium Perf":
22 SATA 7.2K 2.5" 1TB + 2 SSD 2.5" 200GB intensive write

CONF 3 DELL 730XC "Medium Perf ++":
22 SAS 10K 2.5" 1TB + 2 SSD 2.5" 200GB intensive write
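
A rough abacus with assumed figures (a 200GB write-intensive SSD sustaining ~450MB/s of journal writes, a 7.2K SATA disk absorbing ~130MB/s):

# journals per SSD ~= SSD sustained write / per-disk write rate (assumed numbers)
echo $(( 450 / 130 ))    # -> 3, i.e. roughly 3 journals per SSD

By that logic 2 SSDs would cover 6-8 OSDs, so CONF 2 / CONF 3 would mean 11 journals per SSD, which is exactly what worries me.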

Thanks

Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Number of SSD for OSD journal

2014-12-15 Thread Florent MONTHEL
Thanks all

I will probably have 2x10Gb: 1x10Gb for client traffic and 1x10Gb for the cluster network, but I take your recommendation on board, Sebastien.

Each 200GB SSD will probably give me around 500MB/s of sequential bandwidth, so with only 2 SSDs I can come close to saturating a single 10Gb link.
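Back-of-envelope, with assumed numbers:

# 2 journal SSDs vs one 10Gb link (10GbE carries roughly 1250 MB/s)
echo "$(( 2 * 500 )) MB/s from the 2 journals vs ~1250 MB/s on a single 10Gb link"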

Hmm, I will watch out for the OSD density.

Sent from my iPhone

 On 15 Dec 2014, at 21:45, Sebastien Han sebastien@enovance.com wrote:
 
 Hi,
 
 The general recommended ratio (for me at least) is 3 journals per SSD. Using 
 200GB Intel DC S3700 is great.
 If you’re going with a low perf scenario I don’t think you should bother 
 buying SSD, just remove them from the picture and do 12 SATA 7.2K 4TB.
 
 For medium and medium ++ perf, using a ratio of 1:11 is way too high; the SSD will 
 definitely be the bottleneck here.
 Please also note that (bandwidth wise) with 22 drives you're already hitting 
 the theoretical limit of a 10Gbps network (~50MB/s * 22 ≈ 1.1GB/s, i.e. roughly 9Gbit/s).
 You can theoretically up that value with LACP (depending on the 
 xmit_hash_policy you’re using of course).
 
 Btw what’s the network? (since I’m only assuming here).
 
 
 On 15 Dec 2014, at 20:44, Florent MONTHEL fmont...@flox-arts.net wrote:
 
 Hi,
 
 I’m buying several servers to test CEPH and I would like to configure 
 journal on SSD drives (maybe it’s not necessary for all use cases)
 Could you help me work out how many SSDs I need (SSDs are very expensive and a GB-price business-case killer…)? I don't want to run into an SSD bottleneck (is there some abacus / rule of thumb?).
 I think I will go with CONF 2 & 3 below.
 
 
 CONF 1 DELL 730XC "Low Perf":
 10 SATA 7.2K 3.5" 4TB + 2 SSD 2.5" 200GB intensive write
 
 CONF 2 DELL 730XC "Medium Perf":
 22 SATA 7.2K 2.5" 1TB + 2 SSD 2.5" 200GB intensive write
 
 CONF 3 DELL 730XC "Medium Perf ++":
 22 SAS 10K 2.5" 1TB + 2 SSD 2.5" 200GB intensive write
 
 Thanks
 
 Florent Monthel
 
 
 
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 
 Cheers.
 
 Sébastien Han
 Cloud Architect
 
 Always give 100%. Unless you're giving blood.
 
 Phone: +33 (0)1 49 70 99 72
 Mail: sebastien@enovance.com
 Address : 11 bis, rue Roquépine - 75008 Paris
 Web : www.enovance.com - Twitter : @enovance
 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Again: full ssd ceph cluster

2014-12-11 Thread Florent MONTHEL
Hi

Is it possible to share performance results with this kind of config? How many IOPS? Bandwidth? Latency?
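For instance, numbers from rados bench would already be comparable (a sketch; pool name, runtime and concurrency are just examples):

rados -p rbd bench 60 write -b 4096 -t 64 --no-cleanup   # small-block write IOPS / bandwidth / latency
rados -p rbd bench 60 rand -t 64                         # random reads against the objects left behind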
Thanks

Sent from my iPhone

 On 11 Dec 2014, at 09:35, Christian Balzer ch...@gol.com wrote:
 
 
 Hello,
 
 On Wed, 10 Dec 2014 18:08:23 +0300 Mike wrote:
 
 Hello all!
 Some of our customers asked for SSD-only storage.
 Right now we are looking at the 2027R-AR24NV w/ 3 x HBA controllers (LSI3008 chip,
 8 internal 12Gb ports on each), 24 x Intel DC S3700 800Gb SSD drives, 2
 x mellanox 40Gbit ConnectX-3 (maybe newer ConnectX-4 100Gbit) and Xeon
 e5-2660V2 with 64Gb RAM.
 
 A bit skimpy on the RAM given the amount of money you're willing to spend
 otherwise.
 And while you're giving it 20 2.2GHz cores, that's not going to cut it, not
 by a long shot. 
 I did some brief tests with a machine having 8 DC S3700 100GB for OSDs
 (replica 1) under 0.80.6 and the right (make that wrong) type of load
 (small, 4k I/Os) did melt all of the 8 3.5GHz cores in that box.
 
 The suggested 1GHz per OSD from the Ceph team is for pure HDD based OSDs; the
 moment you add journals on SSDs it already becomes barely enough with 3GHz
 cores when dealing with many small I/Os.
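 To put numbers on that for the proposed box, a rough sketch with the figures above:
 
 # CPU budget per OSD: 20 cores * 2.2 GHz spread over 24 SSD OSDs
 echo "scale=2; 20 * 2.2 / 24" | bc    # ~1.83 GHz per OSD, versus the 3GHz+ that small-write SSD OSDs can chew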
 
 Replica is 2.
 Or something like that but in 1U w/ 8 SSD's.
 The potential CPU power to OSD ratio will be much better with this.
 
 We see a little bottleneck on the network cards, but the biggest question is:
 can ceph (giant release), with IO sharding and the other new cool stuff,
 release this potential?
 You shouldn't worry too much about network bandwidth unless you're going
 to use this super expensive setup for streaming backups. ^o^ 
 I'm certain you'll run out of IOPS long before you'll run out of network
 bandwidth.
 
 Given that what I recall of the last SSD cluster discussion, most of the
 Giant benefits were for read operations and the write improvement was
 about double that of Firefly. While nice, given my limited tests that is
 still a far cry away from what those SSDs can do, see above.
 
 Any ideas?
 Somebody who actually has upgraded an SSD cluster from Firefly to Giant
 would be in the correct position to answer that.
 
 Christian
 -- 
 Christian BalzerNetwork/Systems Engineer
 ch...@gol.com   Global OnLine Japan/Fusion Communications
 http://www.gol.com/
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] EMC ViPER and CEPH

2014-12-08 Thread Florent MONTHEL
Hi

We're going to integrate the ViPER software at work; it is OpenStack Cinder compatible.
Is there anyone here who has integrated CEPH behind the EMC ViPER software?
Thanks

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] EMC ScaleIO versus CEPH

2014-12-08 Thread Florent MONTHEL
Hi

We're working with EMC as a provider in our company.
EMC is teasing us hard to get ScaleIO into our environment.
I'm trying to integrate CEPH with our RedHat GAM.
Do you have any comparison sheet or proof of concept comparing CEPH and ScaleIO?
Thanks

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-fs-common ceph-mds on ARM Raspberry Debian 7.6

2014-12-01 Thread Florent MONTHEL
Hi Paulo,

Thanks a lot. I've just added the backports line below to /etc/apt/sources.list:

deb http://ftp.debian.org/debian/ wheezy-backports main

And: apt-get update

But ceph-deploy still threw alerts, so I installed the packages manually (to take them from wheezy-backports):

apt-get -t wheezy-backports install ceph ceph-mds ceph-common ceph-fs-common 
gdisk
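For instance, a quick way to double-check that the candidates now come from wheezy-backports:

apt-cache policy ceph ceph-mds ceph-fs-common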

And ceph-deploy is now OK :
root@socrate:~/cluster# ceph-deploy install socrate.flox-arts.in
…
[socrate.flox-arts.in][DEBUG ] ceph version 0.80.7 
(6c0127fcb58008793d3c8b62d925bc91963672a3)

Thanks


Florent Monthel





 On 1 Dec 2014, at 00:03, Paulo Almeida palme...@igc.gulbenkian.pt wrote:
 
 Hi,
 
 You should be able to use the wheezy-backports repository, which has
 ceph 0.80.7.
 
 Cheers,
 Paulo
 
 On Sun, 2014-11-30 at 19:31 +0100, Florent MONTHEL wrote:
 Hi,
 
 
 I'm trying to deploy CEPH (with ceph-deploy) on a Raspberry Pi running Debian 7.6,
 and I get the error below on the ceph-deploy install command:
 
 
 
 
 [socrate.flox-arts.in][INFO  ] Running command: env
 DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o
 Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes
 install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
 [socrate.flox-arts.in][DEBUG ] Reading package lists...
 [socrate.flox-arts.in][DEBUG ] Building dependency tree...
 [socrate.flox-arts.in][DEBUG ] Reading state information...
 [socrate.flox-arts.in][WARNIN] E: Unable to locate package ceph-mds
 [socrate.flox-arts.in][WARNIN] E: Unable to locate package
 ceph-fs-common
 [socrate.flox-arts.in][ERROR ] RuntimeError: command returned non-zero
 exit status: 100
 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env
 DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o
 Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes
 install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
 
 
 Do you know how I can get these 2 packages on this platform?
 Thanks
 
 
 
 Florent Monthel
 
 
 
 
 
 
 
 
 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
 
 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-fs-common ceph-mds on ARM Raspberry Debian 7.6

2014-11-30 Thread Florent MONTHEL
Hi,

I'm trying to deploy CEPH (with ceph-deploy) on a Raspberry Pi running Debian 7.6 and I get the error below on the ceph-deploy install command:


[socrate.flox-arts.in][INFO  ] Running command: env 
DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o 
Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- 
ceph ceph-mds ceph-common ceph-fs-common gdisk
[socrate.flox-arts.in][DEBUG ] Reading package lists...
[socrate.flox-arts.in][DEBUG ] Building dependency tree...
[socrate.flox-arts.in][DEBUG ] Reading state information...
[socrate.flox-arts.in][WARNIN] E: Unable to locate package ceph-mds
[socrate.flox-arts.in][WARNIN] E: Unable to locate package ceph-fs-common
[socrate.flox-arts.in][ERROR ] RuntimeError: command returned non-zero exit 
status: 100
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: env 
DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o 
Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- 
ceph ceph-mds ceph-common ceph-fs-common gdisk

Do you know how I can get these 2 packages on this platform?
Thanks


Florent Monthel





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com