Hello,
I found an issue. I've added a ceph mount to my /etc/fstab. But when I
boot my system it hangs:
libceph: connect 192.168.0.5:6789 error -101
After the system is booted I can successfully run mount -a.
Regards - Willi
On 25.06.2016 at 11:52, Willi Fehler wrote:
Hello,
fixed it by my
Hello,
On Sun, 26 Jun 2016 09:33:10 +0200 Willi Fehler wrote:
> Hello,
>
> I found an issue. I've added a ceph mount to my /etc/fstab. But when I
> boot my system it hangs:
>
> libceph: connect 192.168.0.5:6789 error -101
>
> After the system is booted I can successfully run mount -a.
>
So
Hi Christian,
thank you. I found the _netdev option by myself. I was a little bit confused
because the official Ceph documentation gives no hint that you should
use _netdev.
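For reference, a minimal fstab entry with _netdev might look like this (the
monitor address is the one from above; mount point, user and secretfile path
are just placeholders):

192.168.0.5:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2

With _netdev the mount is deferred until the network is up, which avoids the
hang at boot.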
One last question. I have 3 nodes with 9 OSDs:
[root@linsrv001 ~]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN R
Hi,
it means they are on 3 different hosts.
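Just to illustrate the shape of the output (the weights and the second host
name are made up for the example, only linsrv001 is from your paste):

ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 9.00000 root default
-2 3.00000     host linsrv001
 0 1.00000         osd.0            up  1.00000          1.00000
 1 1.00000         osd.1            up  1.00000          1.00000
 2 1.00000         osd.2            up  1.00000          1.00000
-3 3.00000     host linsrv002
...

Each host bucket groups its local OSDs, so 9 OSDs spread over 3 host buckets
means 3 different hosts.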
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
IP-Interactive
mailto:i...@ip-interactive.de
Address:
IP Interactive UG (haftungsbeschraenkt)
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 93402, Amtsgericht Hanau (district court)
Managing director: Oliver D
Hi,
I'm new to Ceph and to the mailing list, so hello all!
I'm testing Ceph, and the plan is to migrate our current 18TB storage
(ZFS/NFS) to Ceph. This will use CephFS, mounted by our backend
application.
We are also planning on using virtualisation (OpenNebula) with RBD for
images and, i
On 2016-06-26 10:30, Christian Balzer wrote:
>
> Hello,
>
> On Sun, 26 Jun 2016 09:33:10 +0200 Willi Fehler wrote:
>
>> Hello,
>>
>> I found an issue. I've added a ceph mount to my /etc/fstab. But when I
>> boot my system it hangs:
>>
>> libceph: connect 192.168.0.5:6789 error -101
>>
>> After
Hi Em,
it's highly recommended to put the journals on SSDs; see
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
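The test from that post boils down to a single-job synchronous 4k write run
with fio, roughly like this (/dev/sdX is a placeholder, and writing to the raw
device is destructive):

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
    --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

A good journal SSD should sustain a high IOPS number under those conditions.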
---
Also, if you like speed, it's highly recommended to use a cache tier
---
Create the pool with a not too much hig
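In case it helps, a minimal cache tier setup on Hammer is roughly the
following (pool names are placeholders, and hit_set/target sizes still need
tuning for the workload):

ceph osd tier add coldpool hotpool
ceph osd tier cache-mode hotpool writeback
ceph osd tier set-overlay coldpool hotpool
ceph osd pool set hotpool hit_set_type bloom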
Hi,
is there any option or chance to have auto repair of pgs in hammer?
Greets,
Stefan
Hi all,
After a quick review of the mailing list archive, I have a question that is
left unanswered:
Is Ceph suitable for online file storage, and if so, should I use RGW/librados
or CephFS?
The typical workload here is mostly small files (50kB-10MB) and some bigger ones,
100MB+ up to 4TB max (rou
Thanks
Hi Danai,
I think it depends on how your clients access the data.
If your clients need a POSIX-compatible interface, you can use CephFS.
If your clients need a block device, RBD may help.
And if your clients support the S3/Swift API, you can use RGW.
However, if you are familiar with the rados API, you can also
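As a rough illustration of those three access paths (monitor address, pool,
image, object and mount point names below are just placeholders):

# POSIX file access via CephFS (kernel client)
mount -t ceph 192.168.0.5:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# block device via RBD
rbd map mypool/myimage
# plain object access on top of librados, via the rados CLI
rados -p mypool put myobject ./localfile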
On Sun, 26 Jun 2016 19:48:18 +0200 Stefan Priebe wrote:
> Hi,
>
> is there any option or chance to have auto repair of pgs in hammer?
>
Short answer:
No, in any version of Ceph.
Long answer:
There are currently no checksums generated and kept by Ceph that could
facilitate this.
That's on the roadmap
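For completeness, inconsistencies found by scrubbing still have to be repaired
by hand, roughly:

ceph health detail      # lists the inconsistent PGs
ceph pg repair <pg-id>  # triggers a repair of that PG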
Hello,
firstly, a wall of text makes things incredibly hard to read.
Use paragraphs/returns liberally.
Secondly, what Yang wrote.
More inline.
On Sun, 26 Jun 2016 18:30:35 + (GMT+00:00) m.da...@bluewin.ch wrote:
> Hi all,
> After a quick review of the mailing list archive, I have a question
Hi Ishmael
Try creating the image with the image-feature set to layering only.
# rbd create --image pool-name/image-name --size 15G --image-feature layering
# rbd map --image pool-name/image-name
Thanks
Rakesh Parkiti
On Jun 23, 2016 19:46, Ishmael Tsoaela wrote: Hi All, I have created an image but canno
I have 2 distinct clusters configured, in 2 different locations, and 1
zonegroup.
Cluster 1 has ~11TB of data currently on it, S3 / Swift backups via
the duplicity backup tool - each file is 25MB and probably 20% are
multipart uploads from S3 (so 4MB stripes) - in total 3217k objects.
This cluster