[ceph-users] Red Hat Storage Day – Cupertino

2015-10-12 Thread Kobi Laredo
Bay Area Cephers,

If you are interested in hearing about Ceph @ DreamHost, come join us
at Red Hat Storage Day – Cupertino:
https://engage.redhat.com/storagedays-ceph-gluster-e-201508192024

Lots of great speakers and a great opportunity to network. Best of all,
it's free to attend!

*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to observe civetweb.

2015-09-09 Thread Kobi Laredo
We haven't had the need to explore civetweb's SSL termination ability, so I
don't know the answer to your question.
Either way, haproxy is a safer bet.
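
If you do terminate SSL at haproxy, a minimal frontend might look something
like the sketch below (the cert path, backend name, and rgw port are
illustrative assumptions, not from a tested setup):

frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs/rgw.pem
    mode http
    option forwardfor
    default_backend radosgw

backend radosgw
    mode http
    server rgw1 127.0.0.1:7480 check

haproxy then talks plain HTTP to civetweb on the backend, so you never have
to touch civetweb's SSL support at all.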

*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*

On Tue, Sep 8, 2015 at 8:50 PM, Vickie ch  wrote:

> Thanks a lot!!
> One more question. I understand that haproxy is a better way to load
> balance. GitHub says civetweb already supports HTTPS, but I found some
> documents mentioning that civetweb needs haproxy for HTTPS.
> Which one is true?
>
>
>
> Best wishes,
> Mika
>
>
> 2015-09-09 2:21 GMT+08:00 Kobi Laredo :
>
>> Vickie,
>>
>> You can add:
>> *access_log_file=/var/log/civetweb/access.log
>> error_log_file=/var/log/civetweb/error.log*
>>
>> to *rgw frontends* in ceph.conf, though these logs are thin on info
>> (source IP, date, and request).
>>
>> Check out
>> https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md for
>> more civetweb configs you can inject through the *rgw frontends* config
>> attribute in ceph.conf.
>>
>> We are currently testing tuning civetweb's num_threads
>> and request_timeout_ms to improve radosgw performance
>>
>> *Kobi Laredo*
>> *Cloud Systems Engineer* | (*408) 409-KOBI*
>>
>> On Tue, Sep 8, 2015 at 8:20 AM, Yehuda Sadeh-Weinraub 
>> wrote:
>>
>>> You can increase the civetweb logs by adding 'debug civetweb = 10' in
>>> your ceph.conf. The output will go into the rgw logs.
>>>
>>> Yehuda
>>>
>>> On Tue, Sep 8, 2015 at 2:24 AM, Vickie ch 
>>> wrote:
>>> > Dear cephers,
>>> > Just upgraded radosgw from apache to civetweb.
>>> > It's really simple to install and use. But I can't find any
>>> > parameters or logs to adjust (or observe) civetweb, like the apache
>>> > log. I'm really confused.
>>> > Any ideas?
>>> >
>>> >
>>> > Best wishes,
>>> > Mika
>>> >
>>> >
>>>
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to observe civetweb.

2015-09-08 Thread Kobi Laredo
Vickie,

You can add:
*access_log_file=/var/log/civetweb/access.log
error_log_file=/var/log/civetweb/error.log*

to *rgw frontends* in ceph.conf, though these logs are thin on info (source
IP, date, and request).

Check out
https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md for
more civetweb configs you can inject through the *rgw frontends* config
attribute in ceph.conf.

We are currently testing tuning civetweb's num_threads
and request_timeout_ms to improve radosgw performance.
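
For example, a combined line might look like this in ceph.conf (the values
and the section name here are only illustrative; adjust them to your own rgw
instance):

[client.radosgw.gateway]
rgw frontends = civetweb port=7480 num_threads=100 request_timeout_ms=30000 access_log_file=/var/log/civetweb/access.log error_log_file=/var/log/civetweb/error.log

Make sure the log directory exists and is writable by the radosgw user so
civetweb can actually create the files.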

*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*

On Tue, Sep 8, 2015 at 8:20 AM, Yehuda Sadeh-Weinraub 
wrote:

> You can increase the civetweb logs by adding 'debug civetweb = 10' in
> your ceph.conf. The output will go into the rgw logs.
>
> Yehuda
>
> On Tue, Sep 8, 2015 at 2:24 AM, Vickie ch  wrote:
> > Dear cephers,
> > Just upgraded radosgw from apache to civetweb.
> > It's really simple to install and use. But I can't find any
> > parameters or logs to adjust (or observe) civetweb, like the apache
> > log. I'm really confused.
> > Any ideas?
> >
> >
> > Best wishes,
> > Mika
> >
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph version for production clusters?

2015-08-31 Thread Kobi Laredo
Hammer should be very stable at this point.

*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*

On Mon, Aug 31, 2015 at 8:51 AM, German Anders  wrote:

> Hi cephers,
>
> What's the recommended version for new production clusters?
>
> Thanks in advance,
>
> Best regards,
>
> *German*
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] HAproxy for RADOSGW

2015-08-06 Thread Kobi Laredo
Why are you using cookies? Try without them and see if it works. S3 requests
are authenticated per request and stateless, so cookie persistence isn't
buying you anything here.
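
As a sketch, your backend without the cookie persistence would just be
(based on the config you posted, with the second server still commented
out):

backend web_server
    mode http
    balance roundrobin
    server node-78cb.econe.com node-78cb.econe.com:80 check
    #server node-8d80.econe.com node-8d80.econe.com:80 check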

Kobi Laredo
Cloud Systems Engineer | (408) 409-KOBI
On Aug 5, 2015 8:42 AM, "Ray Sun"  wrote:

> Cephers,
> I'm trying to use haproxy as a load balancer for my radosgw, but I always
> get 405 Method Not Allowed when I run s3cmd md s3://mys3 against my haproxy
> node, while the same command works fine against the radosgw node. I think
> this is related to the DNS server, but I can't find any document that
> explains it. Please help.
>
> haproxy.cfg
> global
> log /dev/log local0
> log /dev/log local1 notice
> chroot /var/lib/haproxy
> user haproxy
> group haproxy
> daemon
>   stats socket /var/lib/haproxy/stats level admin
>
> defaults
> log global
> mode http
> option httplog
> option dontlognull
>   timeout connect 5000
>   timeout client 50000
>   timeout server 50000
>
> frontend http_frontend
> bind *:8080
> mode http
> option httpclose
> option forwardfor
> default_backend web_server
>
> backend web_server
> mode http
> balance roundrobin
> cookie RADOSGWLB insert indirect nocache
> server node-78cb.econe.com node-78cb.econe.com:80 check cookie
> node-78cb.econe.com
> #server node-8d80.econe.com node-8d80.econe.com:80 check cookie
> node-8d80.econe.com
>
> --
> DEBUG: signature-v4 headers: {'x-amz-content-sha256':
> 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
> 'Authorization': 'AWS4-HMAC-SHA256
> Credential=YUY2HWDKIP3D5T9OCO47/20150805/US/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=b94711f50e21de6f6d00bdcc095b2ed575d9d690fd8f9c73a256cb2e8226c640',
> 'x-amz-date': '20150805T153856Z'}
> DEBUG: Processing request, please wait...
> DEBUG: get_hostname(mys3): mys3.seed.econe.com:8080
> DEBUG: ConnMan.get(): creating new connection:
> http://mys3.seed.econe.com:8080
> DEBUG: non-proxied HTTPConnection(mys3.seed.econe.com:8080)
> DEBUG: format_uri(): /
> DEBUG: Sending request method_string='PUT', uri='/',
> headers={'x-amz-content-sha256':
> 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855',
> 'Authorization': 'AWS4-HMAC-SHA256
> Credential=YUY2HWDKIP3D5T9OCO47/20150805/US/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=b94711f50e21de6f6d00bdcc095b2ed575d9d690fd8f9c73a256cb2e8226c640',
> 'x-amz-date': '20150805T153856Z'}, body=(0 bytes)
> DEBUG: Response: {'status': 405, 'headers': {'content-length': '82',
> 'set-cookie': 'RADOSGWLB=node-78cb.econe.com; path=/', 'accept-ranges':
> 'bytes', 'connection': 'close', 'date': 'Wed, 05 Aug 2015 15:38:56 GMT',
> 'content-type': 'application/xml'}, 'reason': 'Method Not Allowed', 'data':
> '<?xml version="1.0" encoding="UTF-8"?><Error><Code>MethodNotAllowed</Code></Error>'}
> DEBUG: ConnMan.put(): connection put back to pool (
> http://mys3.seed.econe.com:8080#1)
> DEBUG: S3Error: 405 (Method Not Allowed)
> DEBUG: HttpHeader: content-length: 82
> DEBUG: HttpHeader: set-cookie: RADOSGWLB=node-78cb.econe.com; path=/
> DEBUG: HttpHeader: accept-ranges: bytes
> DEBUG: HttpHeader: connection: close
> DEBUG: HttpHeader: date: Wed, 05 Aug 2015 15:38:56 GMT
> DEBUG: HttpHeader: content-type: application/xml
> DEBUG: ErrorXML: Code: 'MethodNotAllowed'
> ERROR: S3 error: 405 (MethodNotAllowed):
>
> Best Regards
> -- Ray
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph -s slow to return results

2015-03-28 Thread Kobi Laredo
I'm glad it worked.
You can set a warning to catch this early next time, e.g. at 1 GB (the value
is in bytes):

*mon leveldb size warn = 1073741824*



*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*

On Fri, Mar 27, 2015 at 5:45 PM, Chu Duc Minh  wrote:

> @Kobi Laredo: thank you! That's exactly my problem.
> # du -sh /var/lib/ceph/mon/
> *2.6G *   /var/lib/ceph/mon/
> # ceph tell mon.a compact
> compacted leveldb in 10.197506
> # du -sh /var/lib/ceph/mon/
> *461M*/var/lib/ceph/mon/
> Now my "ceph -s" returns results immediately.
>
> Maybe the monitors' LevelDB store grew so big because I pushed 13 million
> files into a bucket (over radosgw).
> With an extremely large number of files in a bucket, can the state of the
> ceph cluster become unstable? (I'm running Giant)
>
> Regards,
>
> On Sat, Mar 28, 2015 at 12:57 AM, Kobi Laredo 
> wrote:
>
>> What's the current health of the cluster?
>> It may help to compact the monitors' LevelDB store if they have grown in
>> size:
>> http://www.sebastien-han.fr/blog/2014/10/27/ceph-mon-store-taking-up-a-lot-of-space/
>> Depending on the size of the mon's store, it may take some time to
>> compact; make sure to do only one at a time.
>>
>> *Kobi Laredo*
>> *Cloud Systems Engineer* | (*408) 409-KOBI*
>>
>> On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh 
>> wrote:
>>
>>> All my monitors are running.
>>> But I am deleting the pool .rgw.buckets, which has 13 million objects
>>> (just test data).
>>> The reason I must delete this pool is that my cluster became unstable,
>>> with OSDs sometimes going down and PGs peering, incomplete,...
>>> Therefore I must delete this pool to re-stabilize my cluster. (radosgw
>>> is too slow at deleting objects once one of my buckets reaches a few
>>> million objects.)
>>>
>>> Regards,
>>>
>>>
>>> On Sat, Mar 28, 2015 at 12:23 AM, Gregory Farnum 
>>> wrote:
>>>
>>>> Are all your monitors running? Usually a temporary hang means that the
>>>> Ceph client tries to reach a monitor that isn't up, then times out and
>>>> contacts a different one.
>>>>
>>>> I have also seen it just be slow if the monitors are processing so many
>>>> updates that they're behind, but that's usually on a very unhappy cluster.
>>>> -Greg
>>>> On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh 
>>>> wrote:
>>>>
>>>>> On my Ceph cluster, "ceph -s" returns results quite slowly.
>>>>> Sometimes it returns immediately, sometimes it hangs a few seconds
>>>>> before returning.
>>>>>
>>>>> Do you think this problem (ceph -s slow to return) relates only to the
>>>>> ceph-mon processes, or maybe to the ceph-osds too?
>>>>> (I am deleting a big bucket, .rgw.buckets, and ceph-osd disk
>>>>> utilization is quite high)
>>>>>
>>>>> Regards,
>>>>>
>>>>
>>>
>>>
>>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph -s slow to return results

2015-03-27 Thread Kobi Laredo
What's the current health of the cluster?
It may help to compact the monitors' LevelDB store if they have grown in
size:
http://www.sebastien-han.fr/blog/2014/10/27/ceph-mon-store-taking-up-a-lot-of-space/
Depending on the size of the mon's store, it may take some time to
compact; make sure to do only one at a time.
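
A minimal sequence might look like this (assuming mon IDs a, b, and c;
substitute your own mon names):

ceph tell mon.a compact
ceph -s    # confirm all mons are back in quorum before the next one
ceph tell mon.b compact
ceph -s
ceph tell mon.c compact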

*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*

On Fri, Mar 27, 2015 at 10:31 AM, Chu Duc Minh 
wrote:

> All my monitors are running.
> But I am deleting the pool .rgw.buckets, which has 13 million objects (just
> test data).
> The reason I must delete this pool is that my cluster became unstable, with
> OSDs sometimes going down and PGs peering, incomplete,...
> Therefore I must delete this pool to re-stabilize my cluster. (radosgw is
> too slow at deleting objects once one of my buckets reaches a few million
> objects.)
>
> Regards,
>
>
> On Sat, Mar 28, 2015 at 12:23 AM, Gregory Farnum  wrote:
>
>> Are all your monitors running? Usually a temporary hang means that the
>> Ceph client tries to reach a monitor that isn't up, then times out and
>> contacts a different one.
>>
>> I have also seen it just be slow if the monitors are processing so many
>> updates that they're behind, but that's usually on a very unhappy cluster.
>> -Greg
>> On Fri, Mar 27, 2015 at 8:50 AM Chu Duc Minh 
>> wrote:
>>
>>> On my Ceph cluster, "ceph -s" returns results quite slowly.
>>> Sometimes it returns immediately, sometimes it hangs a few seconds
>>> before returning.
>>>
>>> Do you think this problem (ceph -s slow to return) relates only to the
>>> ceph-mon processes, or maybe to the ceph-osds too?
>>> (I am deleting a big bucket, .rgw.buckets, and ceph-osd disk utilization
>>> is quite high)
>>>
>>> Regards,
>>>
>>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OSD Force Removal

2015-03-21 Thread Kobi Laredo
*ceph osd rm osd.#* should do the trick.
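
If the entry still lingers, the full manual sequence from the doc linked
later in this thread is, using osd.9 from your output as the example:

ceph osd crush remove osd.9    # drop it from the CRUSH map
ceph auth del osd.9            # may report "does not exist", as you saw
ceph osd rm osd.9              # remove it from the OSD map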


*Kobi Laredo*
*Cloud Systems Engineer* | (*408) 409-KOBI*

On Fri, Mar 20, 2015 at 4:02 PM, Robert LeBlanc 
wrote:

> Yes, at this point, I'd export the CRUSH map, edit it, and import it back
> in.
> What version are you running?
>
> Robert LeBlanc
>
> Sent from a mobile device please excuse any typos.
> On Mar 20, 2015 4:28 PM, "Jesus Chavez (jeschave)" 
> wrote:
>
>>  Is that what you said?
>>
>>  [root@capricornio ~]# ceph auth del osd.9
>> entity osd.9 does not exist
>> [root@capricornio ~]# ceph auth del osd.19
>> entity osd.19 does not exist
>>
>>
>>
>>
>> * Jesus Chavez*
>> SYSTEMS ENGINEER-C.SALES
>>
>> jesch...@cisco.com
>> Phone: *+52 55 5267 3146*
>> Mobile: *+51 1 5538883255*
>>
>> CCIE - 44433
>>
>>
>> Cisco.com <http://www.cisco.com/>
>>
>>
>>
>>
>>  On Mar 20, 2015, at 4:13 PM, Robert LeBlanc 
>> wrote:
>>
>>  Does it show DNE in the entry? That stands for Does Not Exist. It will
>> disappear on its own after a while. I don't know what the timeout is, but
>> they have always gone away within 24 hours. I've edited the CRUSH map
>> before and I don't think it removed it when it was already DNE; I just had
>> to wait for it to go away on its own.
>>
>> On Fri, Mar 20, 2015 at 3:55 PM, Jesus Chavez (jeschave) <
>> jesch...@cisco.com> wrote:
>>
>>>  Maybe I should edit the crushmap and delete the osd... Is that a way to
>>> force them?
>>>
>>>  Thanks
>>>
>>>
>>> * Jesus Chavez*
>>> SYSTEMS ENGINEER-C.SALES
>>>
>>> jesch...@cisco.com
>>> Phone: *+52 55 5267 3146*
>>> Mobile: *+51 1 5538883255*
>>>
>>> CCIE - 44433
>>>
>>> On Mar 20, 2015, at 2:21 PM, Robert LeBlanc 
>>> wrote:
>>>
>>>   Removing the OSD from the CRUSH map and deleting the auth key is how
>>> you force remove an OSD. The OSD can no longer participate in the cluster,
>>> even if it does come back to life. All clients forget about the OSD when
>>> the new CRUSH map is distributed.
>>>
>>> On Fri, Mar 20, 2015 at 11:19 AM, Jesus Chavez (jeschave) <
>>> jesch...@cisco.com> wrote:
>>>
>>>>  Any idea how to force remove? Thanks
>>>>
>>>>
>>>> * Jesus Chavez*
>>>> SYSTEMS ENGINEER-C.SALES
>>>>
>>>> jesch...@cisco.com
>>>> Phone: *+52 55 5267 3146*
>>>> Mobile: *+51 1 5538883255*
>>>>
>>>> CCIE - 44433
>>>>
>>>> Begin forwarded message:
>>>>
>>>>  *From:* Stéphane DUGRAVOT 
>>>> *Date:* March 20, 2015 at 3:49:11 AM CST
>>>> *To:* "Jesus Chavez (jeschave)" 
>>>> *Cc:* ceph-users 
>>>> *Subject:* *Re: [ceph-users] OSD Forece Removal*
>>>>
>>>>
>>>>
>>>>  --
>>>>
>>>> Hi all, can anybody tell me how I can force delete OSDs? The thing is
>>>> that one node got corrupted because of an outage, so there is no way to
>>>> get those OSDs up and back. Is there any way to force the removal from
>>>> the ceph-deploy node?
>>>>
>>>>
>>>>  Hi,
>>>>  Try the manual procedure:
>>>>
>>>>    - http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
>>>>
>>>>
>>>>  Thanks
>>>>
>>>>
>>>> * Jesus Chavez*
>>>> SYSTEMS ENGINEER-C.SALES
>>>>
>>>> jesch...@cisco.com
>>>> Phone: *+52 55 5267 3146*