[ceph-users] How to observe civetweb.

2015-09-08 Thread Vickie ch
Dear cephers,
   I just upgraded radosgw from apache to civetweb.
It's really simple to install and use, but I can't find any parameters
or logs for adjusting (or observing) civetweb, like the apache log. I'm
really confused. Any ideas?


Best wishes,
Mika
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] How to observe civetweb.

2015-09-08 Thread Vickie ch
Thanks a lot!!
One more question. I understand that haproxy is a better way to do load
balancing,
and GitHub says civetweb already supports https.
But I found some documents saying that civetweb needs haproxy for https.
Which one is true?



Best wishes,
Mika


2015-09-09 2:21 GMT+08:00 Kobi Laredo :

> Vickie,
>
> You can add:
> *access_log_file=/var/log/civetweb/access.log
> error_log_file=/var/log/civetweb/error.log*
>
> to *rgw frontends* in ceph.conf though these logs are thin on info
> (Source IP, date, and request)
>
> Check out
> https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md for
> more civetweb configs you can inject through  *rgw frontends* config
> attribute in ceph.conf
>
> We are currently testing tuning civetweb's num_threads
> and request_timeout_ms to improve radosgw performance
>
> *Kobi Laredo*
> *Cloud Systems Engineer* | (*408) 409-KOBI*
>
> On Tue, Sep 8, 2015 at 8:20 AM, Yehuda Sadeh-Weinraub 
> wrote:
>
>> You can increase the civetweb logs by adding 'debug civetweb = 10' in
>> your ceph.conf. The output will go into the rgw logs.
>>
>> Yehuda
>>
>> On Tue, Sep 8, 2015 at 2:24 AM, Vickie ch  wrote:
>> > Dear cephers,
>> >Just upgrade radosgw from apache to civetweb.
>> > It's really simple to installed and used. But I can't find any
>> parameters or
>> > logs to adjust(or observe) civetweb. (Like apache log).  I'm really
>> confuse.
>> > Any ideas?
>> >
>> >
>> > Best wishes,
>> > Mika
>> >
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
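For reference, a minimal ceph.conf sketch that combines both suggestions above
(the section name, port, and log paths are assumptions; adjust to your
deployment and restart radosgw afterwards):

[client.radosgw.gateway]
    # civetweb access/error logs plus the frontend port
    rgw frontends = civetweb port=7480 access_log_file=/var/log/civetweb/access.log error_log_file=/var/log/civetweb/error.log
    # more verbose civetweb messages go into the regular rgw log
    debug civetweb = 10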
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Deploy osd with btrfs not successful.

2015-09-16 Thread Vickie ch
Hi cephers,
Has anyone ever created an osd with btrfs in Hammer 0.94.3? I can create
the btrfs partition successfully, but once I use "ceph-deploy" I always
get an error like the one below. Another question: there is no " -f "
parameter passed to mkfs.
Any suggestion is appreciated.
-------------------------------------------------------------
[osd3][DEBUG ] The operation has completed successfully.
[osd3][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sda
[osd3][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sda
[osd3][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[osd3][WARNIN] DEBUG:ceph-disk:Creating btrfs fs on /dev/sda1
[osd3][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t btrfs -m
single -l 32768 -n 32768 -- /dev/sda1
[osd3][WARNIN] /dev/sda1 appears to contain an existing filesystem (xfs).
[osd3][WARNIN] Error: Use the -f option to force overwrite.
[osd3][WARNIN] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'btrfs',
'-m', 'single', '-l', '32768', '-n', '32768', '--', '/dev/sda1']' returned
non-zero exit status 1
[osd3][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare
--zap-disk --cluster ceph --fs-type btrfs -- /dev/sda
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
​​


Best wishes,
Mika
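A possible workaround sketch, assuming the leftover xfs signature on /dev/sda1
is what makes mkfs.btrfs refuse to run (host/device names follow the log above;
untested here):

# clear the old filesystem signature that mkfs.btrfs refuses to overwrite
wipefs -a /dev/sda1
# then zap and prepare the disk again
ceph-deploy disk zap osd3:/dev/sda
ceph-deploy osd prepare --fs-type btrfs osd3:/dev/sda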
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] troubleshooting ceph

2015-10-15 Thread Vickie ch
Hi Artie,
Did you check your mon? How many monitors are in this cluster?



Best wishes,
Mika


2015-10-16 9:23 GMT+08:00 Artie Ziff :

> Hello Ceph-users!
>
> This is my first attempt at getting ceph running.
>
> Does the following, in isolation, indicate any potential troubleshooting
> directions
>
> # ceph -s
> 2015-10-15 18:12:45.586529 7fc86041b700  0 -- :/1006343 >>
> 10.10.20.60:6789/0 pipe(0x7fc85c00d4c0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fc85c011760).fault
> 2015-10-15 18:12:48.586607 7fc86031a700  0 -- :/1006343 >>
> 10.10.20.60:6789/0 pipe(0x7fc858c0 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fc850004b60).fault
> ^CError connecting to cluster: InterruptedOrTimeoutError
>
>
> The password-less keys are working amongst root@everyhost.
> ceph was installed with # make install  (the defaults)
>
> I am reading the troubleshooting page
> , of course.
> Does the error above occur if the monitor is not running?
>
> I believe a log directory (/usr/local/var/log/ceph) was created by `make
> install`. Is there some additional step to enable logs? Does no log file
> indicate the monitor is not running?
>
> Thank you in advance for some pointers.
>
> -az
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
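A few quick checks that may help narrow this down (a sketch, assuming a single
monitor on 10.10.20.60 as in the output above and a source build installed
under /usr/local):

# is the monitor process running on the mon host?
ps aux | grep ceph-mon
# is anything listening on the monitor port?
netstat -lntp | grep 6789
# monitor log, if any (path follows the default --prefix of a source build)
tail -n 50 /usr/local/var/log/ceph/ceph-mon.*.log
# start the monitor in the foreground to see why it exits
ceph-mon -i <mon-id> -d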
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] troubleshooting ceph

2015-10-15 Thread Vickie ch
One more thing, did you check the setting of firewall?



Best wishes,
Mika


2015-10-16 14:54 GMT+08:00 Vickie ch :

> Hi Artie,
> Did you check your mon ? How many monitors in this cluster?
>
>
>
> Best wishes,
> Mika
>
>
> 2015-10-16 9:23 GMT+08:00 Artie Ziff :
>
>> Hello Ceph-users!
>>
>> This is my first attempt at getting ceph running.
>>
>> Does the following, in isolation, indicate any potential troubleshooting
>> directions
>>
>> # ceph -s
>> 2015-10-15 18:12:45.586529 7fc86041b700  0 -- :/1006343 >>
>> 10.10.20.60:6789/0 pipe(0x7fc85c00d4c0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7fc85c011760).fault
>> 2015-10-15 18:12:48.586607 7fc86031a700  0 -- :/1006343 >>
>> 10.10.20.60:6789/0 pipe(0x7fc858c0 sd=4 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7fc850004b60).fault
>> ^CError connecting to cluster: InterruptedOrTimeoutError
>>
>>
>> The password-less keys are working amongst root@everyhost.
>> ceph was installed with # make install  (the defaults)
>>
>> I am reading the troubleshooting page
>> <http://docs.ceph.com/docs/master/rados/troubleshooting/>, of course.
>> Does the error above occur if the monitor is not running?
>>
>> I believe a log directory (/usr/local/var/log/ceph) was created by `make
>> install`. Is there some additional step to enable logs? Does no log file
>> indicate the monitor is not running?
>>
>> Thank you in advance for some pointers.
>>
>> -az
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
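For the firewall part, a sketch of the ports worth verifying (iptables shown;
adjust for firewalld/ufw):

# monitors listen on 6789/tcp, OSDs use 6800-7300/tcp by default
iptables -L -n | grep -E '6789|6800'
# temporarily allow the ceph ports while testing
iptables -A INPUT -p tcp --dport 6789 -j ACCEPT
iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT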
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] about PG_Number

2015-11-13 Thread Vickie ch
Hi wah peng,
   Just a thought.
If you have a large number of OSDs but a small pg number, you will find
that your data is written unevenly;
some OSDs get no chance to receive data.
On the other side, a pg number that is too large for too few OSDs has a
chance of causing data loss.



Best wishes,
Mika


2015-11-13 14:12 GMT+08:00 wah peng :

> Hello,
>
> what's the disadvantage if setup PG_Number too large or too small against
> OSD number?
>
> Thanks.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] can not create rbd image

2015-11-17 Thread Vickie ch
Hi,
 It looks like your cluster has the warning "2 near full osd(s)".
Maybe try to extend the osds first?




Best wishes,
Mika


2015-11-12 23:05 GMT+08:00 min fang :

> Hi cepher, I tried to use the following command to create a img, but
> unfortunately, the command hung for a long time until I broken it by
> crtl-z.
>
> rbd -p hello create img-003 --size 512
>
> so I checked the cluster status, and showed:
>
> cluster 0379cebd-b546-4954-b5d6-e13d08b7d2f1
>  health HEALTH_WARN
> 2 near full osd(s)
> too many PGs per OSD (320 > max 300)
>  monmap e2: 1 mons at {vl=192.168.90.253:6789/0}
> election epoch 1, quorum 0 vl
>  osdmap e37: 2 osds: 2 up, 2 in
>   pgmap v19544: 320 pgs, 3 pools, 12054 MB data, 3588 objects
> 657 GB used, 21867 MB / 714 GB avail
>  320 active+clean
>
> I did not see error message here could cause rbd create hang.
>
> I opened the client log and see:
>
> 2015-11-12 22:52:44.687491 7f89eced9780 20 librbd: create 0x7fff8f7b7800
> name = img-003 size = 536870912 old_format = 1 features = 0 order = 22
> stripe_unit = 0 stripe_count = 0
> 2015-11-12 22:52:44.687653 7f89eced9780  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6800/5472 -- osd_op(client.34321.0:1 img-003.rbd
> [stat] 2.8a047315 ack+read+known_if_redirected e37) v5 -- ?+0 0x28513d0 con
> 0x285
> 2015-11-12 22:52:44.688928 7f89e066b700  1 -- 192.168.90.253:0/1006121
> <== osd.1 192.168.90.253:6800/5472 1  osd_op_reply(1 img-003.rbd
> [stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6  178+0+0
> (3550830125 0 0) 0x7f89cae0 con 0x285
> 2015-11-12 22:52:44.689090 7f89eced9780  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- osd_op(client.34321.0:2 rbd_id.img-003
> [stat] 2.638c75a8 ack+read+known_if_redirected e37) v5 -- ?+0 0x2858330 con
> 0x2856f50
> 2015-11-12 22:52:44.690425 7f89e0469700  1 -- 192.168.90.253:0/1006121
> <== osd.0 192.168.90.253:6801/5344 1  osd_op_reply(2 rbd_id.img-003
> [stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6  181+0+0
> (1202435393 0 0) 0x7f89b8000ae0 con 0x2856f50
> 2015-11-12 22:52:44.690494 7f89eced9780  2 librbd: adding rbd image to
> directory...
> 2015-11-12 22:52:44.690544 7f89eced9780  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- osd_op(client.34321.0:3 rbd_directory
> [tmapup 0~0] 2.30a98c1c ondisk+write+known_if_redirected e37) v5 -- ?+0
> 0x2858920 con 0x2856f50
> 2015-11-12 22:52:59.687447 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6789/0 -- mon_subscribe({monmap=3+,osdmap=38}) v2 --
> ?+0 0x7f89bab0 con 0x2843b90
> 2015-11-12 22:52:59.687472 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bf40
> con 0x2856f50
> 2015-11-12 22:52:59.687887 7f89e3873700  1 -- 192.168.90.253:0/1006121
> <== mon.0 192.168.90.253:6789/0 11  mon_subscribe_ack(300s) v1 
> 20+0+0 (2867606018 0 0) 0x7f89d8001160 con 0x2843b90
> 2015-11-12 22:53:04.687593 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:09.687731 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:14.687844 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:19.687978 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:24.688116 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:29.688253 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:34.688389 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:39.688512 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
> 2015-11-12 22:53:44.688636 7f89e4074700  1 -- 192.168.90.253:0/1006121
> --> 192.168.90.253:6801/5344 -- ping magic: 0 v1 -- ?+0 0x7f89bab0
> con 0x2856f50
>
>
> Looks to me, we are keeping ping magic process, and no completed.
>
> my ceph version is "ceph version 0.94.5
> (9764da52395923e0b32908d83a9f7304401fee43)"
>
> somebody can help me on this? Or still me to collect more debug
> information for analyzing.
>
> Thanks.
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
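A sketch of commands that may help pinpoint it (the hang here is most likely
the write to rbd_directory blocking on the near-full OSDs; names are defaults):

# per-pool and global usage
ceph df
# per-OSD utilization (available in hammer)
ceph osd df
# shows exactly which OSDs are near full and any blocked requests
ceph health detail
# the near-full/full thresholds are mon_osd_nearfull_ratio (default 0.85)
# and mon_osd_full_ratio (default 0.95); freeing space or adding OSDs is
# the real fix rather than raising them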
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] about PG_Number

2015-11-17 Thread Vickie ch
Hi wah peng,
Hope you don't mind. Just for reference.
An extreme case: say your ceph cluster has 3 osd disks on different osd
servers
and you set the pg number to 10240 (just an example). That means all of
these pgs will be created on 3 disks.
Losing one OSD then also means a lot of pgs are lost, which may bring
some trouble for re-balance and recovery.

On the other side, if you have 1 OSDs but only set pg = 8,
some disks have no chance of being used.

Best wishes,
Mika
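As a rough rule of thumb for picking the number (a sketch of the usual
guideline, the same one the pgcalc tool linked in the follow-up message is
built around; the pool name is just an example):

# total PGs for a pool ≈ (number of OSDs * 100) / replica size,
# rounded up to the next power of two, e.g.:
#   3 OSDs,  size 3 :  (3 * 100) / 3  = 100   -> pg_num 128
#   30 OSDs, size 3 :  (30 * 100) / 3 = 1000  -> pg_num 1024
ceph osd pool create testpool 128 128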


2015-11-13 16:26 GMT+08:00 wah peng :

> why data lost happens? thanks.
>
> On 2015/11/13 星期五 16:13, Vickie ch wrote:
>
>> In the other side, pg number too large but OSD number too small that
>> have a chance to cause data lost.
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] about PG_Number

2015-11-17 Thread Vickie ch
By the way, here is a useful tool to calculate pg.
http://ceph.com/pgcalc/



Best wishes,
Mika


2015-11-18 11:46 GMT+08:00 Vickie ch :

> Hi wah peng,
> Hope you don't  mind. Just for reference.
> A extreme case. If your ceph cluster have 3 osd disks on different osd
> server.
> Set pg number is 10240.(Just example)  That's mean all these pg will
> create on 3 disks.
> Lost one OSD also means a lot of pg lost too. It may bring some trouble
> for re-balance and recovery.
>
> In the other side, if you have 1 OSDs but only set pg = 8.
> That mean some disks have no chance to using.
>
> Best wishes,
> Mika
>
>
> 2015-11-13 16:26 GMT+08:00 wah peng :
>
>> why data lost happens? thanks.
>>
>> On 2015/11/13 星期五 16:13, Vickie ch wrote:
>>
>>> In the other side, pg number too large but OSD number too small that
>>> have a chance to cause data lost.
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD image can ignore the pool limit

2015-06-16 Thread Vickie ch
Hello Cephers,
 I have a question about pool quotas. Do pool quotas work with RBD?
 My cluster is Hammer 0.94.1 with 1 Mon and 3 OSD servers. Each OSD
server has 3 disks.
 My question is: when I set a pool quota of 1G on the pool "rbd", I can
still create a 3G image "abc".
 After I mount and format image "abc", I can put in more than 1G of data
before the warning "pool rbd is full" appears.
 Another question is about what happens once the warning "pool rbd is
full" shows up: even if I delete all the data I put into it, the objects
still remain,
 and the ceph cluster becomes "active+clean+inconsistent".
Is this result consistent with expectations?


Best wishes,
Mika
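For reference, the pool quota commands look like this (a sketch). Note that
rbd images are thin-provisioned, so creating a 3G image only writes a little
metadata and is not stopped by a 1G byte quota; the quota only takes effect as
data is actually written, and enforcement is not byte-exact, so some overshoot
past the limit is normal:

# limit the rbd pool to 1 GB of data
ceph osd pool set-quota rbd max_bytes 1073741824
# or limit by object count
ceph osd pool set-quota rbd max_objects 10000
# show the current quotas
ceph osd pool get-quota rbd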
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] radosgw did not create auth url for swift

2015-06-17 Thread Vickie ch
Hi all,
I want to use the swift client to connect to the ceph cluster. I have
done s3 tests on this cluster before.
So I followed the guide to create a subuser and used the swift client to
test it, but I always get the error "404 Not Found".
How can I create the "auth" page? Any help will be appreciated.

   - 1 Mon (rgw), 3 OSD servers (each server 3 disks).
   - CEPH: 0.94.1-13
   - Swift-client: 2.4.0

start--
test@uclient:~$ swift --debug -V 1.0 -A http://192.168.1.110/auth -U melon:swift -K 'ujZx+foSYDniRzwypqnqNR7hr763zdt+Qe7TpwvR' list
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 192.168.1.110
DEBUG:urllib3.connectionpool:Setting read timeout to 
DEBUG:urllib3.connectionpool:"GET /auth HTTP/1.1" 404 279
INFO:swiftclient:REQ: curl -i http://192.168.1.110/auth -X GET
INFO:swiftclient:RESP STATUS: 404 Not Found
INFO:swiftclient:RESP HEADERS: [('date', 'Thu, 18 Jun 2015 01:51:58 GMT'),
('content-length', '279'), ('content-type', 'text/html;
charset=iso-8859-1'), ('server', 'Apache/2.4.7 (Ubuntu)')]
INFO:swiftclient:RESP BODY: 

404 Not Found

Not Found
The requested URL /auth was not found on this server.

Apache/2.4.7 (Ubuntu) Server at 192.168.1.110 Port 80


ERROR:swiftclient:Auth GET failed: http://192.168.1.110/auth 404 Not Found
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line
1253, in _retry
self.url, self.token = self.get_auth()
  File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line
1227, in get_auth
insecure=self.insecure)
  File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line
397, in get_auth
insecure=insecure)
  File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line
278, in get_auth_1_0
http_status=resp.status, http_reason=resp.reason)
ClientException: Auth GET failed: http://192.168.1.110/auth 404 Not Found
Account not found
stop--

​   Guide:
 1.)https://ceph.com/docs/v0.78/radosgw/config/
 2.)http://docs.ceph.com/docs/v0.94/radosgw/config/
3.*)**http://docs.ceph.com/docs/v0.94/radosgw/admin/
*​

Best wishes,
Mika
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw did not create auth url for swift

2015-06-21 Thread Vickie ch
Dear all,
 I tried another way: I used the ceph-deploy command to create the
radosgw. After that I can finally get the container list and create
containers.
But the new problem is that if I try to upload files or delete a
container, radosgw returns the message "Access denied".
I totally have no idea. Any help will be appreciated.




Best wishes,
Mika


2015-06-18 20:26 GMT+08:00 venkat :

>
> can you please let me know if you solved this issue please
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.1.110
DEBUG:requests.packages.urllib3.connectionpool:"GET /auth HTTP/1.1" 204 0
DEBUG:swiftclient:REQ: curl -i http://192.168.1.110:7480/auth -X GET
DEBUG:swiftclient:RESP STATUS: 204 No Content
DEBUG:swiftclient:RESP HEADERS: [('content-length', '0'), ('x-auth-token', 
'AUTH_rgwtk0b006d656c6f6e3a7377696674a4c215527f22deaca1fa8855dd4f9b3afc303f314d61ad4312a233cf3406ec6061366933'),
 ('connection', 'Keep-Alive'), ('x-storage-token', 
'AUTH_rgwtk0b006d656c6f6e3a7377696674a4c215527f22deaca1fa8855dd4f9b3afc303f314d61ad4312a233cf3406ec6061366933'),
 ('x-storage-url', 'http://192.168.1.110:7480/swift/v1'), ('content-type', 
'application/json')]
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.1.110
DEBUG:requests.packages.urllib3.connectionpool:"GET /swift/v1/test?format=json 
HTTP/1.1" 401 23
INFO:swiftclient:REQ: curl -i 
http://192.168.1.110:7480/swift/v1/test?format=json -X GET -H "X-Auth-Token: 
AUTH_rgwtk0b006d656c6f6e3a7377696674a4c215527f22deaca1fa8855dd4f9b3afc303f314d61ad4312a233cf3406ec6061366933"
INFO:swiftclient:RESP STATUS: 401 Unauthorized
INFO:swiftclient:RESP HEADERS: [('accept-ranges', 'bytes'), ('connection', 
'Keep-Alive'), ('content-type', 'application/json; charset=utf-8'), 
('content-length', '23')]
INFO:swiftclient:RESP BODY: {"Code":"AccessDenied"}
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.1.110
DEBUG:requests.packages.urllib3.connectionpool:"GET /auth HTTP/1.1" 204 0
DEBUG:swiftclient:REQ: curl -i http://192.168.1.110:7480/auth -X GET
DEBUG:swiftclient:RESP STATUS: 204 No Content
DEBUG:swiftclient:RESP HEADERS: [('content-length', '0'), ('x-auth-token', 
'AUTH_rgwtk0b006d656c6f6e3a73776966747d5731764d837adfa2fa88556829053b865c872c3c66238b500f2d0fd766f80126e68774'),
 ('connection', 'Keep-Alive'), ('x-storage-token', 
'AUTH_rgwtk0b006d656c6f6e3a73776966747d5731764d837adfa2fa88556829053b865c872c3c66238b500f2d0fd766f80126e68774'),
 ('x-storage-url', 'http://192.168.1.110:7480/swift/v1'), ('content-type', 
'application/json')]
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.1.110
DEBUG:requests.packages.urllib3.connectionpool:"GET /swift/v1/test?format=json 
HTTP/1.1" 401 23
INFO:swiftclient:REQ: curl -i 
http://192.168.1.110:7480/swift/v1/test?format=json -X GET -H "X-Auth-Token: 
AUTH_rgwtk0b006d656c6f6e3a73776966747d5731764d837adfa2fa88556829053b865c872c3c66238b500f2d0fd766f80126e68774"
INFO:swiftclient:RESP STATUS: 401 Unauthorized
INFO:swiftclient:RESP HEADERS: [('accept-ranges', 'bytes'), ('connection', 
'Keep-Alive'), ('content-type', 'application/json; charset=utf-8'), 
('content-length', '23')]
INFO:swiftclient:RESP BODY: {"Code":"AccessDenied"}
ERROR:swiftclient:Container GET failed: 
http://192.168.1.110:7480/swift/v1/test?format=json 401 Unauthorized   
{"Code":"AccessDenied"}
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 
1261, in _retry
rv = func(self.url, self.token, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 
656, in get_container
http_response_content=body)
ClientException: Container GET failed: 
http://192.168.1.110:7480/swift/v1/test?format=json 401 Unauthorized   
{"Code":"AccessDenied"}
Error Deleting: test: Container GET failed: 
http://192.168.1.110:7480/swift/v1/test?format=json 401 Unauthorized   
{"Code":"AccessDenied"}
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 
192.168.1.110
DEBUG:requests.packages.urllib3.connectionpool:"GET /auth HTTP/1.1" 204 0
DEBUG:swiftclient:REQ: curl -i http://192.168.1.110:7480/auth -X GET
DEBUG:swiftclient:RESP STATUS: 204 No Content
DEBUG:swiftclient:RESP HEADERS: [('content-length', '0'), ('x-auth-token', 
'AUTH_rgwtk0b006d656c6f6e3a7377696674febff3bb269a52bbcafa88550f36d831425f2f1d0e071f3f3849723da372011cff9154d4'),
 ('connection', 'Keep-Alive'), ('x-storage-token', 
'AUTH_rgwtk0b006d656c6f6e3a7377696674febff3bb269a52bbcafa88550f36d831425f2f1d0e071f3f3849723da372011cff9154d4'),
 ('x-storage-url', 'http://192.168.1.110:7480/swift/v1'), ('content-type', 
'application/json')]
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP co

Re: [ceph-users] radosgw did not create auth url for swift

2015-06-22 Thread Vickie ch
Dear Venkat,
  Finally, by creating the user and making sure the subuser was created,
I can upload files, and the test works on Hammer.
But I still need to find out why it is not working on apache, and how to
make this work on Firefly.
I wrote up the simple steps, FYR. Hope it helps!






Best wishes,
Mika


2015-06-22 14:34 GMT+08:00 Vickie ch :

> Dear all,
>  I tried another way that use command ceph-deploy to create radosgw.
> After that I can get list or create container finally.
> But new problem is if I tried to upload files or delete container that
> radosgw will return the message "Access denied".
> Totally have no idea. Any help will be appreciated.
>
>
>
> Dear
>
> Best wishes,
> Mika
>
>
> 2015-06-18 20:26 GMT+08:00 venkat :
>
>>
>> can you please let me know if you solved this issue please
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
CEPH version 0.94.1
CEPH deploy version:1.5.25
1 (Mon+Radosgw):mon0
3OSD (Each 2disks):osd0,osd1,osd2
1 Client:clientpc

[Steps on deploy server.]
1.)Created a ceph cluster.
2.)Create radosgw on Mon
   $ceph-deploy rgw create mon0:rgw1

[Steps on mon(rgw)]
1.)Go to "mon0" and check radosgw daemon exist or not. Daemon must exist then 
go to next steps.
$ps auwx|grep radosgw
2.)Create rgw user "apple".
$radosgw-admin user create --uid=apple --display-name="apple" 
--email=ap...@test.com 
3.)Create subuser.
$radosgw-admin subuser create --uid=apple --subuser=apple:swift 
--access=full
4.)Generate a secret key for the subuser.
$radosgw-admin key create --gen-secret --subuser=apple:swift 
--key-type=swift

[Steps on client]
1.)Install swift client.
$pip install python-swiftclient
2.)Try to add a container, replace "swift secret key"
$swift -V 1.0 -A http://172.22.3.110:7480/auth -U apple:swift \
-K "swift secret key" \
post apple-buc

done
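After these steps, a quick usage check might look like this (a sketch; the
container and file names are just examples):

$swift -V 1.0 -A http://172.22.3.110:7480/auth -U apple:swift \
    -K "swift secret key" upload apple-buc ./hello.txt
$swift -V 1.0 -A http://172.22.3.110:7480/auth -U apple:swift \
    -K "swift secret key" list apple-buc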
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Question about change bucket quota.

2015-07-06 Thread Vickie ch
Dear Cephers,
 When a bucket is created, the default quota setting is unlimited. Is
there any setting that can change this, so that the admin does not need
to change bucket quotas one by one?


Best wishes,
Mika
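For reference, per-user and per-bucket quotas can at least be scripted with
radosgw-admin (a sketch; the uid and limits are examples, and whether a
cluster-wide default setting exists in this release is not confirmed here):

# bucket-scope quota for buckets owned by user "apple"
radosgw-admin quota set --quota-scope=bucket --uid=apple --max-objects=10000 --max-size=1073741824
radosgw-admin quota enable --quota-scope=bucket --uid=apple
# user-scope (the total across all of the user's buckets) works the same
# way with --quota-scope=user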
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Problem of ceph can not find socket /tmp/radosgw.sock and "Internal server error"

2015-08-10 Thread Vickie ch
Dear Cephers,
   One day radosgw totally died. I tried to restart radosgw, but I found
that /tmp/radosgw.sock is missing.
Even though the radosgw service exists (checked by using "ps aux"), the
process dies again later.
I get "Internal server error" from the web page. How can I re-create
/tmp/radosgw.sock?
Also, we tried to upload huge files from the s3 and swift clients, but
the performance does not look very good.
Should I change the apache2 "mpm_event.conf" settings?


Best wishes,
Mika
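For reference, the classic apache + mod_fastcgi wiring from the docs of that
era looks roughly like this (a sketch; section names and paths are the usual
defaults). The socket is created by the radosgw process itself, so if the
process keeps dying the rgw log is the first place to look:

# ceph.conf
[client.radosgw.gateway]
    rgw socket path = /tmp/radosgw.sock
    log file = /var/log/ceph/client.radosgw.gateway.log

# apache vhost (mod_fastcgi)
FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock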
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Can not activate osds (old/different cluster instance?)

2015-08-13 Thread Vickie ch
Dear all,
  I tried to create osds and got an error message (old/different cluster
instance?).
The osds can be created but not activated. This server had osds built on
it before.
Please give me some advice.

OS:rhel7
ceph:0.80 firefly


Best wishes,
Mika
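The "(old/different cluster instance?)" message often points at data from a
previous cluster still sitting on the disk or host. A sketch of what to
compare (paths are the firefly defaults):

# fsid the current cluster was deployed with
grep fsid /etc/ceph/ceph.conf
# fsid recorded on the prepared osd data partition(s)
cat /var/lib/ceph/osd/ceph-*/ceph_fsid
# if they differ, wipe the old data and prepare the disk again
ceph-deploy disk zap <host>:/dev/sdX
ceph-deploy osd create <host>:/dev/sdX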
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] НА: Question

2015-08-20 Thread Vickie ch
Hi,
 I've done that before, and when I tried to write a file into rbd, it
froze.
Besides resources, is there any other reason it is not recommended to
combine mon and osd?



Best wishes,
Mika


2015-08-18 15:52 GMT+08:00 Межов Игорь Александрович :

> Hi!
>
> You can run mons on the same hosts, though it is not recommemned. MON
> daemon
> itself are not resurce hungry - 1-2 cores and 2-4 Gb RAM is enough in most
> small
> installs. But there are some pitfalls:
> - MONs use LevelDB as a backstorage, and widely use direct write to ensure
> DB consistency.
> So, if MON daemon coexits with OSDs not only on the same host, but on the
> same
> volume/disk/controller - it will severily reduce disk io available to OSD,
> thus greatly
> reduce overall performance. Moving MONs root to separate spindle, or
> better - separate SSD
> will keep MONs running fine with OSDs at the same host.
> - When cluster is in healthy state, MONs are not resource consuming, but
> when cluster
> in "changing state" (adding/removing OSDs, backfiling, etc) the CPU and
> memory usage
> for MON can raise significantly.
>
> And yes, in small cluster, it is not alaways possible to get 3 separate
> hosts for MONs only.
>
>
> Megov Igor
> CIO, Yuterra
>
> --
> *From:* ceph-users  on behalf of Luis
> Periquito 
> *Sent:* 17 August 2015 17:09
> *To:* Kris Vaes
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* Re: [ceph-users] Question
>
> yes. The issue is resource sharing as usual: the MONs will use disk I/O,
> memory and CPU. If the cluster is small (test?) then there's no problem in
> using the same disks. If the cluster starts to get bigger you may want to
> dedicate resources (e.g. the disk for the MONs isn't used by an OSD). If
> the cluster is big enough you may want to dedicate a node for being a MON.
>
> On Mon, Aug 17, 2015 at 2:56 PM, Kris Vaes  wrote:
>
>> Hi,
>>
>> Maybe this seems like a strange question but i could not find this info
>> in the docs , i have following question,
>>
>> For the ceph cluster you need osd daemons and monitor daemons,
>>
>> On a host you can run several osd daemons (best one per drive as read in
>> the docs) on one host
>>
>> But now my question  can you run on the same host where you run already
>> some osd daemons the monitor daemon
>>
>> Is this possible and what are the implications of doing this
>>
>>
>>
>> Met Vriendelijke Groeten
>> Cordialement
>> Kind Regards
>> Cordialmente
>> С приятелски поздрави
>>
>>
>> This message (including any attachments) may be privileged or
>> confidential. If you have received it by mistake, please notify the sender
>> by return e-mail and delete this message from your system. Any unauthorized
>> use or dissemination of this message in whole or in part is strictly
>> prohibited. S3S rejects any liability for the improper, incomplete or
>> delayed transmission of the information contained in this message, as well
>> as for damages resulting from this e-mail message. S3S cannot guarantee
>> that the message received by you has not been intercepted by third parties
>> and/or manipulated by computer programs used to transmit messages and
>> viruses.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] about command "ceph osd map" can display non-existent object

2015-01-26 Thread Vickie ch
Hello cephers,
After entering the command "ceph osd map rbd abcde-no-file", I get a
result like this:
"osdmap e42 pool 'rbd' (0) object
'abcde-no-file' -> pg 0.2844d191 (0.11) -> up ([3], p3) acting ([3], p3)"

But the object "abcde-no-file" does not exist. Why can ceph osd map still
map the object to a PG?
Is it my fault, did I set something wrong that causes this?
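For what it's worth, a sketch that shows the difference between the computed
mapping and actual existence:

# "ceph osd map" only computes where an object of that name *would* be
# placed (CRUSH is deterministic hashing, there is no lookup), so any
# name maps to a PG
ceph osd map rbd abcde-no-file
# to check whether the object really exists, ask the pool instead
rados -p rbd stat abcde-no-file      # fails with "No such file or directory"
rados -p rbd ls | grep abcde-no-file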

Best wishes,
​vk​
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Question about CRUSH rule set parameter "min_size" "max_size"

2015-02-02 Thread Vickie ch
Hi,
  The CRUSH map has two parameters, "min_size" and "max_size".
The explanation of min_size is "If a pool makes fewer replicas than this
number, CRUSH will NOT select this rule".
For max_size it is "If a pool makes more replicas than this number, CRUSH
will NOT select this rule".

The default setting of the pool replicate size is 1.
I created 2 rules, ruleset0 with "min_size = 3" and ruleset1 with
"min_size = 1", and applied them.
Then I created a new pool named "test" and assumed pool "test" would use
ruleset1.
But I found that pool "test" uses ruleset0.
Which part am I missing?

Thanks a lot for any advice!

Best wishes,
​Mika
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Question about CRUSH rule set parameter "min_size" "max_size"

2015-02-02 Thread Vickie ch
Dear Sahana:
   Thank you for your reply.
Do you mean that if there are several rules in CRUSH,
the user still needs to set the rule on each pool, and it is not chosen
from CRUSH automatically?
If so, what are min_size and max_size actually for?


Best wishes,
​Mika

2015-02-03 14:04 GMT+08:00 Sahana Lokeshappa 
:

>  Hi Mika,
>
>
>
> The below command will set ruleset to the pool:
>
>
>
> ceph osd pool set  crush_ruleset 1
>
>
>
> For more info : http://ceph.com/docs/master/rados/operations/crush-map/
>
>
>
> Thanks
>
> Sahana
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Vickie ch
> *Sent:* Tuesday, February 03, 2015 11:30 AM
> *To:* ceph-users
> *Subject:* [ceph-users] Question about CRUSH rule set parameter
> "min_size" "max_size"
>
>
>
> Hi ,
>
>   CRUSH map have two parameter are "min_size" and "max_size".
>
> Explanation about min_size is "*If a pool makes fewer replicas than this
> number, CRUSH will NOT select this rule*".
>
> The max_size is "*If a pool makes more replicas than this number, CRUSH
> will NOT select this rule*"
>
> Default setting of pool replicate size is 1.
> Created 2 rules that ruleset0 *「min_size = 3」*, ruleset1 *「min_size = 1」*and
> applied.
> Then created a new pool named "test" and assume pool "test" will apply
> ruleset1.
>
> But found pool "test" apply ruleset0.
>
> Which part I missing?
>
> Thanks a lot for any advice!
>
>Best wishes,
>
> ​Mika
>
> --
>
> PLEASE NOTE: The information contained in this electronic mail message is
> intended only for the use of the designated recipient(s) named above. If
> the reader of this message is not the intended recipient, you are hereby
> notified that you have received this message in error and that any review,
> dissemination, distribution, or copying of this message is strictly
> prohibited. If you have received this communication in error, please notify
> the sender by telephone or e-mail (as shown above) immediately and destroy
> any and all copies of this message in your possession (whether hard copies
> or electronically stored copies).
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Applied crush rules to pool but not working.

2015-02-08 Thread Vickie ch
Dear cephers:
My cluster (0.87) ran into an odd incident.
The incident is that I commented out the default crush rule
"replicated_ruleset" and set a new rule called "new_rule1".
The content of rule "new_rule1" is just like "replicated_ruleset"; the
only difference is the ruleset number.

After applying the new map into crush, I used the command "$ceph osd pool
create test 128 128".
I thought pool "test" would work normally.
But when I use a command like "rados -p test ls" I get the warning
"health HEALTH_WARN 7 requests are blocked > 32 sec".
So I used the command "$ceph osd pool set test crush_ruleset 1" to apply
the rule to pool "test" again.
Pool "test" is still not working, and I found that the pgs of this pool
are not created.

Is this a bad way to set crush rules?
If anyone can give me some hints, I'll appreciate it very much.

# rules #
#rule replicated_ruleset {
#   ruleset 0
#   type replicated
#   min_size 1
#   max_size 10
#   step take default
#   step chooseleaf firstn 0 type host
#   step emit
#}

rule new_rule1 {
ruleset 1
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
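For reference, a sketch of how an edited map can be checked before and after
injection (standard crushtool/ceph commands; the rule number and pool name
follow the message above):

# pull, decompile, edit, recompile and re-inject the map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
#   ... edit crush.txt ...
crushtool -c crush.txt -o crush.new
# dry-run the rule to make sure it can actually pick OSDs
crushtool -i crush.new --test --rule 1 --num-rep 2 --show-statistics
ceph osd setcrushmap -i crush.new
# confirm which ruleset the pool ends up using
ceph osd pool get test crush_ruleset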


Best wishes,
​Mika
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi Beanos:
So you have 3 OSD servers and each of them has 2 disks.
I have a question: what is the result of "ceph osd tree"? It looks like
the osd status is "down".


Best wishes,
Vickie

2015-02-10 19:00 GMT+08:00 B L :

> Here is the updated direct copy/paste dump
>
> eph@ceph-node1:~$ ceph osd dump
> epoch 25
> fsid 17bea68b-1634-4cd1-8b2a-00a60ef4761d
> created 2015-02-08 16:59:07.050875
> modified 2015-02-09 22:35:33.191218
> flags
> pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 128 pgp_num 64 last_change 24 flags hashpspool
> crash_replay_interval 45 stripe_width 0
> pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
> pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash
> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
> max_osd 6
> osd.0 up   in  weight 1 up_from 4 up_thru 17 down_at 0 last_clean_interval
> [0,0) 172.31.0.84:6800/11739 172.31.0.84:6801/11739 172.31.0.84:6802/11739
> 172.31.0.84:6803/11739 exists,up 765f5066-d13e-4a9e-a446-8630ee06e596
> osd.1 up   in  weight 1 up_from 7 up_thru 0 down_at 0 last_clean_interval
> [0,0) 172.31.0.84:6805/12279 172.31.0.84:6806/12279 172.31.0.84:6807/12279
> 172.31.0.84:6808/12279 exists,up e1d073e5-9397-4b63-8b7c-a4064e430f7a
> osd.2 up   in  weight 1 up_from 10 up_thru 0 down_at 0 last_clean_interval
> [0,0) 172.31.3.57:6800/5517 172.31.3.57:6801/5517 172.31.3.57:6802/5517
> 172.31.3.57:6803/5517 exists,up 5af5deed-7a6d-4251-aa3c-819393901d1f
> osd.3 up   in  weight 1 up_from 13 up_thru 0 down_at 0 last_clean_interval
> [0,0) 172.31.3.57:6805/6043 172.31.3.57:6806/6043 172.31.3.57:6807/6043
> 172.31.3.57:6808/6043 exists,up 958f37ab-b434-40bd-87ab-3acbd3118f92
> osd.4 up   in  weight 1 up_from 16 up_thru 0 down_at 0 last_clean_interval
> [0,0) 172.31.3.56:6800/5106 172.31.3.56:6801/5106 172.31.3.56:6802/5106
> 172.31.3.56:6803/5106 exists,up ce5c0b86-96be-408a-8022-6397c78032be
> osd.5 up   in  weight 1 up_from 22 up_thru 0 down_at 0 last_clean_interval
> [0,0) 172.31.3.56:6805/7019 172.31.3.56:6806/7019 172.31.3.56:6807/7019
> 172.31.3.56:6808/7019 exists,up da67b604-b32a-44a0-9920-df0774ad2ef3
>
>
> On Feb 10, 2015, at 12:55 PM, B L  wrote:
>
>
> On Feb 10, 2015, at 12:37 PM, B L  wrote:
>
> Hi Vickie,
>
> Thanks for your reply!
>
> You can find the dump in this link:
>
> https://gist.github.com/anonymous/706d4a1ec81c93fd1eca
>
> Thanks!
> B.
>
>
> On Feb 10, 2015, at 12:23 PM, Vickie ch  wrote:
>
> Hi Beanos:
>Would you post the reult of "$ceph osd dump"?
>
> Best wishes,
> Vickie
>
> 2015-02-10 16:36 GMT+08:00 B L :
>
>> Having problem with my fresh non-healthy cluster, my cluster status
>> summary shows this:
>>
>> ceph@ceph-node1:~$* ceph -s*
>>
>> cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d
>>  health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256
>> pgs stuck unclean; pool data pg_num 128 > pgp_num 64
>>  monmap e1: 1 mons at {ceph-node1=172.31.0.84:6789/0}, election
>> epoch 2, quorum 0 ceph-node1
>>  osdmap e25: 6 osds: 6 up, 6 in
>>   pgmap v82: 256 pgs, 3 pools, 0 bytes data, 0 objects
>> 198 MB used, 18167 MB / 18365 MB avail
>>  192 incomplete
>>   64 creating+incomplete
>>
>>
>> Where shall I start troubleshooting this?
>>
>> P.S. I’m new to CEPH.
>>
>> Thanks!
>> Beanos
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi Beanos:
BTW, if your cluster is just for testing, you may try to reduce the
replica size and min_size:
"ceph osd pool set rbd size 2;ceph osd pool set data size 2;ceph osd pool
set metadata size 2 "
"ceph osd pool set rbd min_size 1;ceph osd pool set data min_size 1;ceph
osd pool set metadata min_size 1"
Open another terminal and use the command "ceph -w" to watch the pg and
pgs status.

Best wishes,
Vickie

2015-02-10 19:16 GMT+08:00 Vickie ch :

> Hi Beanos:
> So you have 3 OSD servers and each of them have 2 disks.
> I have a question. What result of "ceph osd tree". Look like the osd
> status is "down".
>
>
> Best wishes,
> Vickie
>
> 2015-02-10 19:00 GMT+08:00 B L :
>
>> Here is the updated direct copy/paste dump
>>
>> eph@ceph-node1:~$ ceph osd dump
>> epoch 25
>> fsid 17bea68b-1634-4cd1-8b2a-00a60ef4761d
>> created 2015-02-08 16:59:07.050875
>> modified 2015-02-09 22:35:33.191218
>> flags
>> pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash
>> rjenkins pg_num 128 pgp_num 64 last_change 24 flags hashpspool
>> crash_replay_interval 45 stripe_width 0
>> pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0
>> object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool
>> stripe_width 0
>> pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash
>> rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
>> max_osd 6
>> osd.0 up   in  weight 1 up_from 4 up_thru 17 down_at 0
>> last_clean_interval [0,0) 172.31.0.84:6800/11739 172.31.0.84:6801/11739
>> 172.31.0.84:6802/11739 172.31.0.84:6803/11739 exists,up
>> 765f5066-d13e-4a9e-a446-8630ee06e596
>> osd.1 up   in  weight 1 up_from 7 up_thru 0 down_at 0 last_clean_interval
>> [0,0) 172.31.0.84:6805/12279 172.31.0.84:6806/12279
>> 172.31.0.84:6807/12279 172.31.0.84:6808/12279 exists,up
>> e1d073e5-9397-4b63-8b7c-a4064e430f7a
>> osd.2 up   in  weight 1 up_from 10 up_thru 0 down_at 0
>> last_clean_interval [0,0) 172.31.3.57:6800/5517 172.31.3.57:6801/5517
>> 172.31.3.57:6802/5517 172.31.3.57:6803/5517 exists,up
>> 5af5deed-7a6d-4251-aa3c-819393901d1f
>> osd.3 up   in  weight 1 up_from 13 up_thru 0 down_at 0
>> last_clean_interval [0,0) 172.31.3.57:6805/6043 172.31.3.57:6806/6043
>> 172.31.3.57:6807/6043 172.31.3.57:6808/6043 exists,up
>> 958f37ab-b434-40bd-87ab-3acbd3118f92
>> osd.4 up   in  weight 1 up_from 16 up_thru 0 down_at 0
>> last_clean_interval [0,0) 172.31.3.56:6800/5106 172.31.3.56:6801/5106
>> 172.31.3.56:6802/5106 172.31.3.56:6803/5106 exists,up
>> ce5c0b86-96be-408a-8022-6397c78032be
>> osd.5 up   in  weight 1 up_from 22 up_thru 0 down_at 0
>> last_clean_interval [0,0) 172.31.3.56:6805/7019 172.31.3.56:6806/7019
>> 172.31.3.56:6807/7019 172.31.3.56:6808/7019 exists,up
>> da67b604-b32a-44a0-9920-df0774ad2ef3
>>
>>
>> On Feb 10, 2015, at 12:55 PM, B L  wrote:
>>
>>
>> On Feb 10, 2015, at 12:37 PM, B L  wrote:
>>
>> Hi Vickie,
>>
>> Thanks for your reply!
>>
>> You can find the dump in this link:
>>
>> https://gist.github.com/anonymous/706d4a1ec81c93fd1eca
>>
>> Thanks!
>> B.
>>
>>
>> On Feb 10, 2015, at 12:23 PM, Vickie ch  wrote:
>>
>> Hi Beanos:
>>Would you post the reult of "$ceph osd dump"?
>>
>> Best wishes,
>> Vickie
>>
>> 2015-02-10 16:36 GMT+08:00 B L :
>>
>>> Having problem with my fresh non-healthy cluster, my cluster status
>>> summary shows this:
>>>
>>> ceph@ceph-node1:~$* ceph -s*
>>>
>>> cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d
>>>  health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256
>>> pgs stuck unclean; pool data pg_num 128 > pgp_num 64
>>>  monmap e1: 1 mons at {ceph-node1=172.31.0.84:6789/0}, election
>>> epoch 2, quorum 0 ceph-node1
>>>  osdmap e25: 6 osds: 6 up, 6 in
>>>   pgmap v82: 256 pgs, 3 pools, 0 bytes data, 0 objects
>>> 198 MB used, 18167 MB / 18365 MB avail
>>>  192 incomplete
>>>   64 creating+incomplete
>>>
>>>
>>> Where shall I start troubleshooting this?
>>>
>>> P.S. I’m new to CEPH.
>>>
>>> Thanks!
>>> Beanos
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
>>
>>
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread Vickie ch
Hi,
The weight reflects the space (capacity) of the disks.
For example, the weight of a 100G OSD disk is 0.100 (100G/1T).


Best wishes,
Vickie
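A rough sketch of the arithmetic, assuming the common convention that the
CRUSH weight is the disk capacity expressed in TiB (which is what ceph-disk
normally assigns on creation, so manual reweighting is mainly needed when the
automatic value is unusable, e.g. for very small test volumes):

# weight ≈ capacity / 1 TiB
#   1 TB disk           -> 1e12 / 2^40 ≈ 0.91
#   100 GB disk         -> 1e11 / 2^40 ≈ 0.09
#   ~10 GB test volume  -> 1e10 / 2^40 ≈ 0.009   (hence values like 0.0095 above)
ceph osd crush reweight osd.0 0.0095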

2015-02-10 22:25 GMT+08:00 B L :

> Thanks for everyone!!
>
> After applying the re-weighting command (*ceph osd crush reweight osd.0
> 0.0095*), my cluster is getting healthy now :))
>
> But I have one question, what if I have hundreds of OSDs, shall I do the
> re-weighting on each device, or there is some way to make this happen
> automatically .. the question in other words, why would I need to do
> weighting in the first place??
>
>
>
>
> On Feb 10, 2015, at 4:00 PM, Vikhyat Umrao  wrote:
>
>  Oh , I have miss placed the places for osd names and weight
>
> ceph osd crush reweight osd.0 0.0095  and so on ..
>
> Regards,
> Vikhyat
>
>  On 02/10/2015 07:31 PM, B L wrote:
>
> Thanks Vikhyat,
>
>  As suggested ..
>
>  ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0
>
>  Invalid command:  osd.0 doesn't represent a float
> osd crush reweight   :  change 's weight to
>  in crush map
> Error EINVAL: invalid command
>
>  What do you think
>
>
>  On Feb 10, 2015, at 3:18 PM, Vikhyat Umrao  wrote:
>
> sudo ceph osd crush reweight 0.0095 osd.0 to osd.5
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] How to calculate file size when mounting a block device from an rbd image

2014-10-20 Thread Vickie CH
Hello all,
I have a question about how to calculate file sizes when mounting a
block device from an rbd image.
[Cluster information:]
1. The cluster has 1 mon and 6 osds. Every osd is 1T. Total space is 5556G.
2. rbd pool: replicated size 2, min_size 1, num = 128. Except for the rbd
pool, the other pools are empty.

[Steps]
1. On a Linux client I use the rbd command to create a 1.5T rbd image and
format it with ext4.
2. Use the dd command to create a 1.2T file:
   #dd if=/dev/zero of=/mnt/ceph-mount/test12T bs=1M count=12288000
3. When dd finished, it reported "No space left on device". But parted -l
displays the disk size as 1611G. Why does the system say there is not
enough space?

Is there something I misunderstand or did wrong?

Best wishes,
Mika
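A quick sanity check on the dd arguments above (just arithmetic, taking bs=1M
as 1 MiB):

# bs=1M count=12288000 -> 12,288,000 MiB ≈ 11.7 TiB (far larger than the 1.5T image)
# a 1.2T file would be:
dd if=/dev/zero of=/mnt/ceph-mount/test12T bs=1M count=1228800   # 1,228,800 MiB ≈ 1.17 TiB
# so "No space left on device" is expected once the 1611G reported by
# parted has been filled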
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Use 2 osds to create cluster but health check display "active+degraded"

2014-10-29 Thread Vickie CH
Hi all,
  I tried to use two OSDs to create a cluster. After the deploy finished,
I found the health status is "88 active+degraded" and "104
active+remapped".
Before using 2 osds to create a cluster, the result was ok. I'm confused
about why this situation happened. Do I need to set the crush map to fix
this problem?


--ceph.conf-
[global]
fsid = c404ded6-4086-4f0b-b479-89bc018af954
mon_initial_members = storage0
mon_host = 192.168.1.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_journal_size = 2048
osd_pool_default_pgp_num = 128
osd_mkfs_type = xfs
-

---ceph -s---
cluster c404ded6-4086-4f0b-b479-89bc018af954
 health HEALTH_WARN 88 pgs degraded; 192 pgs stuck unclean
 monmap e1: 1 mons at {storage0=192.168.10.10:6789/0}, election epoch
2, quorum 0 storage0
 osdmap e20: 2 osds: 2 up, 2 in
  pgmap v45: 192 pgs, 3 pools, 0 bytes data, 0 objects
79752 kB used, 1858 GB / 1858 GB avail
  88 active+degraded
 104 active+remapped



Best wishes,
Mika
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Use 2 osds to create cluster but health check display "active+degraded"

2014-10-29 Thread Vickie CH
Dear all,
Thanks for the reply.
The pool replicated size is 2, because the replicated size parameter was
already written into ceph.conf before the deploy.
Since I am not familiar with the crush map, I will follow Mark's
information and do a test that changes the crush map to see the result.

---ceph.conf--
[global]
fsid = c404ded6-4086-4f0b-b479-89bc018af954
mon_initial_members = storage0
mon_host = 192.168.1.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_journal_size = 2048
osd_pool_default_pgp_num = 128
osd_mkfs_type = xfs
---

--ceph osd dump result -
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool
crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 15 flags hashpspool stripe_width 0
pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
max_osd 2
--

Best wishes,
Mika

2014-10-29 16:56 GMT+08:00 Mark Kirkwood :

> That is not my experience:
>
> $ ceph -v
> ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646)
>
> $ cat /etc/ceph/ceph.conf
> [global]
> ...
> osd pool default size = 2
>
> $ ceph osd dump|grep size
> pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 128 pgp_num 128 last_change 47 flags
> hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes
> 20 hit_set bloom{false_positive_probability: 0.05, target_size:
> 0, seed: 0} 3600s x1 stripe_width 0
> pool 10 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 102 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 11 '.rgw.control' replicated size 2 min_size 1 crush_ruleset 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 104 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 12 '.rgw' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 8 pgp_num 8 last_change 106 owner 18446744073709551615
> flags hashpspool stripe_width 0
> pool 13 '.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 8 pgp_num 8 last_change 107 owner 18446744073709551615
> flags hashpspool stripe_width 0
> pool 14 '.users.uid' replicated size 2 min_size 1 crush_ruleset 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 108 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 15 '.rgw.buckets.index' replicated size 2 min_size 1 crush_ruleset 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 110 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 16 '.rgw.buckets' replicated size 2 min_size 1 crush_ruleset 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 112 owner
> 18446744073709551615 flags hashpspool stripe_width 0
> pool 17 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 1024 pgp_num 1024 last_change 186 flags hashpspool
> stripe_width 0
>
>
>
>
>
>
> On 29/10/14 21:46, Irek Fasikhov wrote:
>
>> Hi.
>> This parameter does not apply to pools by default.
>> ceph osd dump | grep pool. see size=?
>>
>>
>> 2014-10-29 11:40 GMT+03:00 Vickie CH > <mailto:mika.leaf...@gmail.com>>:
>>
>> Der Irek:
>>
>> Thanks for your reply.
>> Even already set "osd_pool_default_size = 2" the cluster still need
>> 3 different hosts right?
>> Is this default number can be changed by user and write into
>> ceph.conf before deploy?
>>
>>
>> Best wishes,
>> Mika
>>
>> 2014-10-29 16:29 GMT+08:00 Irek Fasikhov > <mailto:malm...@gmail.com>>:
>>
>> Hi.
>>
>> Because the disc requires three different hosts, the default
>> number of replications 3.
>>
>> 2014-10-29 10:56 GMT+03:00 Vickie CH > <mailto:mika.leaf...@gmail.com>>:
>>
>>
>> Hi all,
>>Try to use two OSDs to create a cluster. After the
>> deply finished, I found th

Re: [ceph-users] Use 2 osds to create cluster but health check display "active+degraded"

2014-10-29 Thread Vickie CH
Hi:
---------------------------ceph osd tree---------------------------
# id    weight  type name       up/down reweight
-1      1.82    root default
-2      1.82            host storage1
0       0.91                    osd.0   up      1
1       0.91                    osd.1   up      1

Best wishes,
Mika

2014-10-29 17:05 GMT+08:00 Irek Fasikhov :

> ceph osd tree please :)
>
> 2014-10-29 12:03 GMT+03:00 Vickie CH :
>
>> Dear all,
>> Thanks for the reply.
>> Pool replicated size is 2. Because the replicated size parameter already
>> write into ceph.conf before deploy.
>> Because not familiar crush map.  I will according Mark's information to
>> do a test that change the crush map to see the result.
>>
>> ---ceph.conf--
>> [global]
>> fsid = c404ded6-4086-4f0b-b479-
>> 89bc018af954
>> mon_initial_members = storage0
>> mon_host = 192.168.1.10
>> auth_cluster_required = cephx
>> auth_service_required = cephx
>> auth_client_required = cephx
>> filestore_xattr_use_omap = true
>>
>> *osd_pool_default_size = 2osd_pool_default_min_size = 1*
>> osd_pool_default_pg_num = 128
>> osd_journal_size = 2048
>> osd_pool_default_pgp_num = 128
>> osd_mkfs_type = xfs
>> ---
>>
>> --ceph osd dump result -
>> pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>> rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool
>> crash_replay_interval 45 stripe_width 0
>> pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0
>> object_hash rjenkins pg_num 64 pgp_num 64 last_change 15 flags hashpspool
>> stripe_width 0
>> pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>> rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
>> max_osd 2
>>
>> --
>>
Best wishes,
Mika
>>
>> 2014-10-29 16:56 GMT+08:00 Mark Kirkwood :
>>
>>> That is not my experience:
>>>
>>> $ ceph -v
>>> ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec
>>> 20455b1646)
>>>
>>> $ cat /etc/ceph/ceph.conf
>>> [global]
>>> ...
>>> osd pool default size = 2
>>>
>>> $ ceph osd dump|grep size
>>> pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 128 pgp_num 128 last_change 47 flags
>>> hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes
>>> 20 hit_set bloom{false_positive_probability: 0.05, target_size:
>>> 0, seed: 0} 3600s x1 stripe_width 0
>>> pool 10 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 102 owner
>>> 18446744073709551615 flags hashpspool stripe_width 0
>>> pool 11 '.rgw.control' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 104 owner
>>> 18446744073709551615 flags hashpspool stripe_width 0
>>> pool 12 '.rgw' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 8 pgp_num 8 last_change 106 owner 18446744073709551615
>>> flags hashpspool stripe_width 0
>>> pool 13 '.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 107 owner
>>> 18446744073709551615 flags hashpspool stripe_width 0
>>> pool 14 '.users.uid' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 108 owner
>>> 18446744073709551615 flags hashpspool stripe_width 0
>>> pool 15 '.rgw.buckets.index' replicated size 2 min_size 1 crush_ruleset
>>> 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 110 owner
>>> 18446744073709551615 flags hashpspool stripe_width 0
>>> pool 16 '.rgw.buckets' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 112 owner
>>> 18446744073709551615 flags hashpspool stripe_width 0
>>> pool 17 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 1024 pgp_num 1024 last_change 186 flags hashpspool
>>> stripe_width 0
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 29

Re: [ceph-users] Fwd: Error zapping the disk

2014-10-29 Thread Vickie CH
Hi Sakhi:
I got this problem before. The host OS was Ubuntu 14.04,
3.13.0-24-generic.
In the end I used fdisk /dev/sdX to delete all the partitions and
rebooted. Maybe you can try that.

Best wishes,
Mika
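A sketch of the same cleanup with GPT-aware tools (sgdisk is what ceph-disk
itself uses; a reboot clears the stale in-use partition table when partprobe
cannot, as the error message already suggests):

# wipe partition tables and filesystem signatures on the disk
sgdisk --zap-all /dev/sdX
wipefs -a /dev/sdX
partprobe /dev/sdX     # if this still fails, reboot before retrying ceph-deploy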

2014-10-29 17:13 GMT+08:00 Sakhi Hadebe :

>  Hi Support,
>
>  Can someone please help me with the below error so I can proceed with my
> cluster installation. It has taken a week now not knowing how to carry on.
>
>
>
>
> Regards,
> Sakhi Hadebe
> Engineer: South African National Research Network (SANReN)Competency Area,
> Meraka, CSIR
>
> Tel:   +27 12 841 2308
> Fax:   +27 12 841 4223
> Cell:  +27 71 331 9622
> Email: shad...@csir.co.za
>
>
> >>> Sakhi Hadebe 10/22/2014 1:56 PM >>>
>
> Hi,
>
>
>   I am building a three node cluster on debian7.7. I have a problem in
> zapping the disk of the very first node.
>
>
>  ERROR:
>
> [ceph1][WARNIN] Error: Partition(s) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
> 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
> 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
> 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64 on /dev/sda3 have
> been written, but we have been unable to inform the kernel of the change,
> probably because it/they are in use.  As a result, the old partition(s)
> will remain in use.  You should reboot now before making further changes.
>
> [ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
>
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: partprobe
> /dev/sda3
>
>
>
>  Please help.
>
>
> Regards,
> Sakhi Hadebe
> Engineer: South African National Research Network (SANReN)Competency Area,
> Meraka, CSIR
>
> Tel:   +27 12 841 2308
> Fax:   +27 12 841 4223
> Cell:  +27 71 331 9622
> Email: shad...@csir.co.za
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Use 2 osds to create cluster but health check display "active+degraded"

2014-10-29 Thread Vickie CH
Hi all,
Thanks to you all.
As Mark pointed out, this problem is related to the CRUSH map.
After creating the 2 OSDs on 2 different hosts, the health check is OK.
Thanks again for the information~
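
In case someone else hits the same active+degraded state but cannot spread the OSDs across two hosts: as far as I understand it, the default CRUSH rule places replicas on different hosts, so with size 2 and both OSDs on one host the PGs can never become clean. Below is only a rough sketch of the usual single-host workarounds (the crushmap.* file names are just examples):

# Option 1: before creating the cluster, make CRUSH separate replicas by OSD
# instead of by host (ceph.conf, [global] section)
osd crush chooseleaf type = 0

# Option 2: on a running cluster, edit the rule in the CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# in the rule, change "step chooseleaf firstn 0 type host" to "... type osd"
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new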

Best wishes,
Mika

2014-10-29 17:19 GMT+08:00 Vickie CH :

> Hi:
> -----ceph osd tree-----
> # id    weight  type name        up/down  reweight
> -1      1.82    root default
> -2      1.82        host storage1
> 0       0.91            osd.0    up       1
> 1       0.91            osd.1    up       1
>
> Best wishes,
> Mika
>
> 2014-10-29 17:05 GMT+08:00 Irek Fasikhov :
>
>> ceph osd tree please :)
>>
>> 2014-10-29 12:03 GMT+03:00 Vickie CH :
>>
>>> Dear all,
>>> Thanks for the reply.
>>> The pool replicated size is 2, because the replicated-size parameter was
>>> already written into ceph.conf before deploying.
>>> I'm not familiar with the CRUSH map, so I will follow Mark's information
>>> and run a test that changes the CRUSH map to see the result.
>>>
>>> ---ceph.conf--
>>> [global]
>>> fsid = c404ded6-4086-4f0b-b479-89bc018af954
>>> mon_initial_members = storage0
>>> mon_host = 192.168.1.10
>>> auth_cluster_required = cephx
>>> auth_service_required = cephx
>>> auth_client_required = cephx
>>> filestore_xattr_use_omap = true
>>>
>>> osd_pool_default_size = 2
>>> osd_pool_default_min_size = 1
>>> osd_pool_default_pg_num = 128
>>> osd_journal_size = 2048
>>> osd_pool_default_pgp_num = 128
>>> osd_mkfs_type = xfs
>>> ---
>>>
>>> --ceph osd dump result -
>>> pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 64 pgp_num 64 last_change 14 flags hashpspool
>>> crash_replay_interval 45 stripe_width 0
>>> pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0
>>> object_hash rjenkins pg_num 64 pgp_num 64 last_change 15 flags hashpspool
>>> stripe_width 0
>>> pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>> rjenkins pg_num 64 pgp_num 64 last_change 16 flags hashpspool stripe_width 0
>>> max_osd 2
>>>
>>> --
>>>
>>> Best wishes,
>>> Mika
>>>
>>> Best wishes,
>>> Mika
>>>
>>> 2014-10-29 16:56 GMT+08:00 Mark Kirkwood 
>>> :
>>>
>>>> That is not my experience:
>>>>
>>>> $ ceph -v
>>>> ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646)
>>>>
>>>> $ cat /etc/ceph/ceph.conf
>>>> [global]
>>>> ...
>>>> osd pool default size = 2
>>>>
>>>> $ ceph osd dump|grep size
>>>> pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>>> rjenkins pg_num 128 pgp_num 128 last_change 47 flags
>>>> hashpspool,incomplete_clones tier_of 1 cache_mode writeback target_bytes
>>>> 20 hit_set bloom{false_positive_probability: 0.05,
>>>> target_size: 0, seed: 0} 3600s x1 stripe_width 0
>>>> pool 10 '.rgw.root' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 102 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 11 '.rgw.control' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 104 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 12 '.rgw' replicated size 2 min_size 1 crush_ruleset 0 object_hash
>>>> rjenkins pg_num 8 pgp_num 8 last_change 106 owner 18446744073709551615
>>>> flags hashpspool stripe_width 0
>>>> pool 13 '.rgw.gc' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 107 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 14 '.users.uid' replicated size 2 min_size 1 crush_ruleset 0
>>>> object_hash rjenkins pg_num 8 pgp_num 8 last_change 108 owner
>>>> 18446744073709551615 flags hashpspool stripe_width 0
>>>> pool 15 '.rgw.buckets.index' replicated size 2 min_size 1 crush_ruleset
>>

Re: [ceph-users] issue with activate osd in ceph with new partition created

2014-11-02 Thread Vickie CH
Were any errors displayed when you executed "ceph-deploy osd prepare"?
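
If the partition was created and formatted by hand (ext4 on /dev/vdb1) rather than by "ceph-deploy osd prepare", that alone could explain the hang, because ceph-disk activate expects a partition that prepare has already laid out (XFS by default, with the OSD metadata written to it). This is only a sketch of the flow I would try; the host and device names are taken from your log, so adjust as needed:

ceph-deploy disk zap ceph-admin:vdb        # wipes the whole disk, including the ext4 partition
ceph-deploy osd prepare ceph-admin:vdb     # partitions and formats the disk for Ceph
ceph-deploy osd activate ceph-admin:vdb1   # activate the data partition prepare created
ceph-disk list                             # run on the node: shows how ceph-disk sees the disk now

Also, the repeated "pipe ... fault" lines from "ceph status" usually just mean the client cannot reach the monitor at 10.203.238.165:6789 (the mon daemon not running, or port 6789 blocked by a firewall), so that is worth checking first.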

Best wishes,
Mika

2014-10-31 17:36 GMT+08:00 Subhadip Bagui :

> Hi,
>
> Can anyone please help on this
>
> Regards,
> Subhadip
>
>
> ---
>
> On Fri, Oct 31, 2014 at 12:51 AM, Subhadip Bagui 
> wrote:
>
>> Hi,
>>
>> I'm new to Ceph and trying to install the cluster. I'm using a single
>> server for both mon and osd. I've created one partition, /dev/vdb1, of
>> about 100 GB with an ext4 filesystem, and I'm trying to add it as an OSD.
>> But whenever I try to activate the partition as the OSD block device I
>> run into a problem: the partition can't be mounted at the default Ceph
>> OSD mount point. Please let me know what I'm missing.
>>
>> [root@ceph-admin my-cluster]# ceph-deploy osd activate ceph-admin:vdb1
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /root/.cephdeploy.conf
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.18): /usr/bin/ceph-deploy osd
>> activate ceph-admin:vdb1
>> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
>> ceph-admin:/dev/vdb1:
>> [ceph-admin][DEBUG ] connected to host: ceph-admin
>> [ceph-admin][DEBUG ] detect platform information from remote host
>> [ceph-admin][DEBUG ] detect machine type
>> [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
>> [ceph_deploy.osd][DEBUG ] activating host ceph-admin disk /dev/vdb1
>> [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
>> [ceph-admin][INFO  ] Running command: ceph-disk -v activate --mark-init
>> sysvinit --mount /dev/vdb1
>> [ceph-admin][WARNIN] No data was received after 300 seconds,
>> disconnecting...
>> [ceph-admin][INFO  ] checking OSD status...
>> [ceph-admin][INFO  ] Running command: ceph --cluster=ceph osd stat
>> --format=json
>> [ceph-admin][WARNIN] No data was received after 300 seconds,
>> disconnecting...
>> [ceph-admin][INFO  ] Running command: chkconfig ceph on
>>
>> 
>>
>> [root@ceph-admin my-cluster]# ceph status
>>
>> 2014-10-30 20:40:32.102741 7fcc7c591700  0 -- :/1003242 >>
>> 10.203.238.165:6789/0 pipe(0x7fcc780204b0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7fcc78020740).fault
>>
>> 2014-10-30 20:40:35.103348 7fcc7c490700  0 -- :/1003242 >>
>> 10.203.238.165:6789/0 pipe(0x7fcc6c000c00 sd=3 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7fcc6c000e90).fault
>>
>> 2014-10-30 20:40:38.103994 7fcc7c591700  0 -- :/1003242 >>
>> 10.203.238.165:6789/0 pipe(0x7fcc6c003010 sd=3 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7fcc6c0032a0).fault
>>
>> 2014-10-30 20:40:41.104498 7fcc7c490700  0 -- :/1003242 >>
>> 10.203.238.165:6789/0 pipe(0x7fcc6c0039d0 sd=3 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7fcc6c003c60).fault
>>
>> Regards,
>> Subhadip
>>
>> ---
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com