Re: [ceph-users] ceph-fuse couldn't connect.

2014-07-17 Thread Jaemyoun Lee
Thank you, Greg!

I solved it by creating an MDS.

- Jae
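
For anyone hitting the same symptom, here is a minimal sketch of what "creating
the MDS" and verifying it can look like with ceph-deploy on a 0.80.x cluster.
The admin-node working directory and the choice of csA (the MON VM) as the MDS
host are assumptions, not details confirmed in this thread:

# ceph-deploy mds create csA
(run on the admin node, from the directory that holds ceph.conf and the keyrings)
# ceph mds stat
(repeat until the MDS is reported as up:active, e.g. "e5: 1/1/1 up {0=csA=up:active}")
# ceph-fuse -m 192.168.122.106:6789 /mnt
(once the MDS is active, the mount should return instead of hanging)
# df -h /mnt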


On Wed, Jul 16, 2014 at 8:36 PM, Gregory Farnum  wrote:

> Your MDS isn't running or isn't active.
> -Greg

Re: [ceph-users] ceph-fuse couldn't connect.

2014-07-16 Thread Gregory Farnum
Your MDS isn't running or isn't active.
-Greg
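
(For reference, a quick way to confirm that from the MON host, sketched against
the Firefly-era CLI; the exact output format may vary by version:)

# ceph mds stat
(an MDS that is missing or not yet active will not be reported as up:active here)
# ceph -s
(look for an mdsmap line showing the MDS as up:active)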


Re: [ceph-users] ceph-fuse couldn't connect.

2014-07-16 Thread Jaemyoun Lee
The result is the same.

# ceph-fuse --debug-ms 1 --debug-client 10 -m 192.168.122.106:6789 /mnt
ceph-fuse[3296]: starting ceph client

And the log file shows:

# cat /var/log/ceph/ceph-client.admin.log
2014-07-16 17:08:13.146032 7f9a212f87c0  0 ceph version 0.80.1
(a38fe1169b6d2ac98b427334c12d7cf81f809b74), process ceph-fuse, pid 3294
2014-07-16 17:08:13.156429 7f9a212f87c0  1 -- :/0 messenger.start
2014-07-16 17:08:13.157537 7f9a212f87c0  1 -- :/3296 -->
192.168.122.106:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
0x7f9a23c0e6c0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.158198 7f9a212f6700  1 -- 192.168.122.166:0/3296
learned my addr 192.168.122.166:0/3296
2014-07-16 17:08:13.158505 7f9a167fc700 10 client.-1 ms_handle_connect on
192.168.122.106:6789/0
2014-07-16 17:08:13.159083 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 1  mon_map v1  193+0+0 (4132823754 0
0) 0x7f9a0ab0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.159182 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 2  auth_reply(proto 2 0 (0) Success) v1
 33+0+0 (1915318666 0 0) 0x7f9a0f60 con 0x7f9a23c0dd30
2014-07-16 17:08:13.159375 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
192.168.122.106:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0
0x7f9a0c0013a0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.159845 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 3  auth_reply(proto 2 0 (0) Success) v1
 206+0+0 (2967970554 0 0) 0x7f9a0f60 con 0x7f9a23c0dd30
2014-07-16 17:08:13.159976 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
192.168.122.106:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0
0x7f9a0c001ec0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.160810 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 4  auth_reply(proto 2 0 (0) Success) v1
 409+0+0 (3799435439 0 0) 0x7f9a11d0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.160945 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
192.168.122.106:6789/0 -- mon_subscribe({osdmap=0}) v2 -- ?+0
0x7f9a23c102c0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.160979 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
192.168.122.106:6789/0 -- mon_subscribe({mdsmap=0+,osdmap=0}) v2 -- ?+0
0x7f9a23c10630 con 0x7f9a23c0dd30
2014-07-16 17:08:13.161033 7f9a212f87c0  2 client.4705 mounted: have osdmap
0 and mdsmap 0
2014-07-16 17:08:13.161056 7f9a212f87c0 10 client.4705 did not get mds
through better means, so chose random mds -1
2014-07-16 17:08:13.161059 7f9a212f87c0 10 client.4705  target mds.-1 not
active, waiting for new mdsmap
2014-07-16 17:08:13.161668 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 5  osd_map(45..45 src has 1..45) v3 
3907+0+0 (2386867192 0 0) 0x7f9a2060 con 0x7f9a23c0dd30
2014-07-16 17:08:13.161843 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 6  mdsmap(e 1) v1  396+0+0 (394292161
0 0) 0x7f9a2500 con 0x7f9a23c0dd30
2014-07-16 17:08:13.161861 7f9a167fc700  1 client.4705 handle_mds_map epoch
1
2014-07-16 17:08:13.161884 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 7  osd_map(45..45 src has 1..45) v3 
3907+0+0 (2386867192 0 0) 0x7f9a37a0 con 0x7f9a23c0dd30
2014-07-16 17:08:13.161900 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 8  mon_subscribe_ack(300s) v1  20+0+0
(4226112827 0 0) 0x7f9a3c40 con 0x7f9a23c0dd30
2014-07-16 17:08:13.161932 7f9a212f87c0 10 client.4705 did not get mds
through better means, so chose random mds -1
2014-07-16 17:08:13.161942 7f9a212f87c0 10 client.4705  target mds.-1 not
active, waiting for new mdsmap
2014-07-16 17:08:14.161453 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:08:34.166977 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:08:54.171234 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:09:14.174106 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:09:34.177062 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:09:54.179365 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:10:14.181731 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:10:34.184270 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:10:46.161158 7f9a15ffb700  1 -- 192.168.122.166:0/3296 -->
192.168.122.106:6789/0 -- mon_subscribe({mdsmap=2+,monmap=2+}) v2 -- ?+0
0x7f99f8002c50 con 0x7f9a23c0dd30
2014-07-16 17:10:46.161770 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
mon.0 192.168.122.106:6789/0 9  mon_subscribe_ack(300s) v1  20+0+0
(4226112827 0 0) 0x7f9a3c40 con 0x7f9a23c0dd30
2014-07-16 17:10:54.186908 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:11:14.189613 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:11:34.192055 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:11:54.194663 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:12:14.196991 7f9a177fe700 10 client.4705 renew_caps()
2014-07-16 17:12:34.199710 7f9a177fe700 10 client.4705

Re: [ceph-users] ceph-fuse couldn't connect.

2014-07-15 Thread Gregory Farnum
On Tue, Jul 15, 2014 at 10:15 AM, Jaemyoun Lee  wrote:
> There is no output because ceph-fuse fell into an infinite loop, as I
> explained below.
>
> Where can I find the log file of ceph-fuse?

It defaults to /var/log/ceph, but it may be empty. I realize the task
may have hung, but I'm pretty sure it isn't looping, just waiting on
some kind of IO. You could try running it with the "--debug-ms 1
--debug-client 10" command-line options appended and see what it spits
out.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
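
Put together, that workflow looks roughly like this on the client; the log
file name assumes ceph-fuse runs as client.admin, which matches the log shown
earlier in this thread:

# ceph-fuse --debug-ms 1 --debug-client 10 -m 192.168.122.106:6789 /mnt
(in a second terminal, follow the client log while the mount attempt runs:)
# tail -f /var/log/ceph/ceph-client.admin.log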


Re: [ceph-users] ceph-fuse couldn't connect.

2014-07-15 Thread Jaemyoun Lee
There is no output because ceph-fuse fell into an infinite loop, as I
explained below.

Where can I find the log file of ceph-fuse?

Jae.
On Jul 16, 2014 at 1:59 AM, "Gregory Farnum" wrote:

> What did ceph-fuse output to its log file or the command line?


Re: [ceph-users] ceph-fuse couldn't connect.

2014-07-15 Thread Gregory Farnum
What did ceph-fuse output to its log file or the command line?



-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com


[ceph-users] ceph-fuse couldn't connect.

2014-07-15 Thread Jaemyoun Lee
Hi All,

I am using Ceph 0.80.1 on Ubuntu 14.04 on KVM. However, I cannot connect to
the MON from a client using ceph-fuse.

On the client, I installed ceph-fuse 0.80.1 and loaded the fuse module, but I
think something is wrong. The result is:

# modprobe fuse
(no output)
# lsmod | grep fuse
(no output)
# ceph-fuse -m 192.168.122.106:6789 /mnt
ceph-fuse[1905]: starting ceph client
(at this point, ceph-fuse fell into an infinite loop)
^C
#
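
Since lsmod prints nothing here, a few additional client-side checks can
confirm whether the FUSE layer itself is fine. This is only a sketch; apart
from the ceph-fuse mount line, these commands are generic additions and not
part of the original report:

# grep fuse /proc/filesystems
(FUSE support built into the kernel shows up here even when lsmod prints nothing)
# ls -l /dev/fuse
(the device node must exist for ceph-fuse to start)
# ceph-fuse -m 192.168.122.106:6789 /mnt
# mount | grep ceph-fuse
(a successful mount is listed here)
# fusermount -u /mnt
(clean unmount after testing)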

What is the problem?

My cluster is laid out as follows:

Host OS (Ubuntu 14.04)
--- VM-1 (Ubuntu 14.04)
-- MON-0
-- MDS-0
--- VM-2 (Ubuntu 14.04)
-- OSD-0
--- VM-3 (Ubuntu 14.04)
-- OSD-1
-- OSD-2
-- OSD-3
--- VM-4 (Ubuntu 14.04)
-- it's for client.

The result of "ceph -s" on VM-1, which runs the MON, is:

# ceph -s
cluster 1ae5585d-03c6-4a57-ba79-c65f4ed9e69f
 health HEALTH_OK
 monmap e1: 1 mons at {csA=192.168.122.106:6789/0}, election epoch 1,
quorum 0 csA
 osdmap e37: 4 osds: 4 up, 4 in
  pgmap v678: 192 pgs, 3 pools, 0 bytes data, 0 objects
20623 MB used, 352 GB / 372 GB avail
 192 active+clean
#

Regards,
Jae

-- 
  이재면 Jaemyoun Lee

  E-mail : jaemy...@gmail.com
  Homepage : http://jaemyoun.com
  Facebook :  https://www.facebook.com/jaemyoun
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com