> … 0.025648
>
> This is much lower than the expected bandwidth (179 < 330).
>
> Is this normal? If so, what is the reason for that?
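
Without the full benchmark output it is hard to say, but replicated writes
multiply the traffic: with a replication size of 2, every client write is
stored twice across the cluster, so the client-visible write bandwidth often
lands well below the raw network or disk figure. One way to compare write
and read throughput directly is rados bench; a minimal sketch, assuming a
scratch pool named testpool (the pool name is illustrative):

    # Write for 60 seconds, keeping the objects for the read pass
    rados bench -p testpool 60 write --no-cleanup

    # Sequential read of the objects written above
    rados bench -p testpool 60 seq

    # Delete the benchmark objects afterwards
    rados -p testpool cleanup
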
> …e a config file in /home/ceph/.ssh?
>
> Thanks for the help!
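
If the question is whether a config file is needed in /home/ceph/.ssh: ssh
(and therefore ceph-deploy, which runs over ssh) honors an ordinary OpenSSH
client config there, which is often the easiest way to pin the login user
and key per node. A minimal sketch, with placeholder hostnames and key path:

    # /home/ceph/.ssh/config
    Host node1 node2 node3
        User ceph
        IdentityFile ~/.ssh/id_rsa
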
>> …t then mount the RBD on shepard.
>>
>> Where this is going: I would like to use Ceph as the back-end storage
>> solution for my virtualization cluster. The general idea is that the
>> hypervisors will all share a mountpoint that holds images and VMs, so
>> VMs can easily be migrated between hypervisors. I was actually thinking
>> of creating one mountpoint each for images and for VMs for performance
>> reasons. Am I likely to see performance gains using more, smaller RBDs
>> versus fewer, larger ones?
>>
>> Thanks for any feedback,
>>
>> Jon A
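
For reference, the basic flow for one such mountpoint looks roughly like the
sketch below (image name and mountpoint are placeholders). One caveat: a
regular filesystem on a single RBD image can only be safely mounted on one
host at a time, so a mountpoint shared by several hypervisors needs CephFS
or a cluster filesystem on top of the RBD instead.

    # Create a 100 GB image in the default "rbd" pool (size is in MB)
    rbd create vmimages --size 102400

    # Map it on the hypervisor, make a filesystem, and mount it
    rbd map rbd/vmimages
    mkfs.xfs /dev/rbd/rbd/vmimages
    mount /dev/rbd/rbd/vmimages /mnt/vmimages

As for more, smaller images versus fewer, larger ones: each RBD image is
already striped across many 4 MB objects and therefore across many OSDs, so
the image count by itself rarely changes aggregate throughput.
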
> …"ceph health" I get the following:
>
> HEALTH_WARN 384 pgs degraded; 384 pgs stale; 384 pgs stuck unclean
>
> So this seems to stem from a single root cause. Any ideas? Again, is this
> a corrupted-drive issue that I can clean up, or is this still a Ceph
> configuration error?
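
Stale PGs in particular mean that no OSD is currently reporting them to the
monitors, which usually points at OSDs that are down or never started rather
than at on-disk corruption. The standard commands for narrowing it down, as
a sketch:

    # Per-PG detail behind the HEALTH_WARN summary
    ceph health detail

    # List the PGs stuck in each state
    ceph pg dump_stuck stale
    ceph pg dump_stuck unclean

    # Confirm which OSDs are actually up and in
    ceph osd tree
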
>>> …error: (note: I have also been getting the first line in various
>>> calls; unsure why it is complaining, I followed the instructions...)
>>>
>>> warning: line 34: 'host' in section 'mon.a' redefined
>>> 2013-05…
> …we've had it here for some time. Now you tell me Ceph won't even do that?
>
> Dima

> Unfortunately, I can't find the RBD module when I run "make menuconfig".
> Is there another way to add the RBD module, or where exactly did I go
> wrong? Thanks a lot!
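
For context, the in-kernel RBD driver (CONFIG_BLK_DEV_RBD, listed under
Device Drivers -> Block devices as "Rados block device (RBD)") only exists
in kernels 2.6.37 and newer, and menuconfig hides the entry unless its
dependencies, notably CONFIG_INET and CONFIG_BLOCK, are enabled. A quick way
to check from the source tree, as a sketch:

    # Confirm the driver exists in this tree and inspect its dependencies
    grep -n -A 5 "BLK_DEV_RBD" drivers/block/Kconfig

    # Inside "make menuconfig", press "/" and search for RBD;
    # afterwards the .config should contain CONFIG_BLK_DEV_RBD=m
    grep "BLK_DEV_RBD" .config
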
> …ideas on what is causing the system to lock up/panic?
>
> Thanks,
>
> -Trevor Adams

> …NFS filehandle caching).
>
> Thanks,
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com