On 02/18/2015 11:48 AM, Garg, Pankaj wrote:
libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open
moddep file
Try "sudo moddep" and then running your modprobe again.
This seems more like an OS issue than a Ceph specific issue.
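A minimal sketch of that sequence, assuming the modules for the running kernel are installed under /lib/modules/$(uname -r):

# Rebuild modules.dep (the "moddep file" libkmod is complaining about)
sudo depmod -a
# Check that the rbd kernel module now loads
sudo modprobe rbd
# Then retry the mapping from the original post
sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring

If /lib/modules/$(uname -r) does not exist at all, the kernel was likely installed without its modules, which would also explain the error.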
Cheers,
Brad
Pankaj
-Original Message-
From: Brad Hubbard [mailto:bhubb...@redhat.com]
Sent: Tuesday, February 17, 2015 5:06 PM
To: Garg, Pankaj; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Block Device
On 02/18/2015 09:56 AM, Garg, Pankaj wrote:
> Hi,
>
> I have a Ceph cluster and
On 02/18/2015 09:56 AM, Garg, Pankaj wrote:
Hi,
I have a Ceph cluster and I am trying to create a block device. I execute the
following command, and get errors:
sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring
libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_m
Hi,
I have a Ceph cluster and I am trying to create a block device. I execute the
following command, and get errors:
sudo rbd map cephblockimage --pool rbd -k /etc/ceph/ceph.client.admin.keyring
libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open
moddep file '/lib/modul
@lists.ceph.com
> Subject: Re: [ceph-users] Ceph Block device and Trim/Discard
>
> On 12.12.2014 12:48, Max Power wrote:
>
> > It would be great to shrink the used space. Is there a way to achieve
> > this? Or have I done something wrong? In a professional environment
> >
On 12/18/2014 10:49 AM, Travis Rhoden wrote:
One question re: discard support for kRBD -- does it matter which format
the RBD is? Are Format 1 and Format 2 both okay, or is it just Format 2?
It shouldn't matter which format you use.
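For reference, a rough sketch of creating an image in either format (image names and size here are made up):

# Format 1 (the old default)
rbd create --size 1024 --image-format 1 rbd/test-fmt1
# Format 2 (needed for cloning/layering)
rbd create --size 1024 --image-format 2 rbd/test-fmt2
# Both map the same way
sudo rbd map test-fmt2 --pool rbd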
Josh
One question re: discard support for kRBD -- does it matter which format
the RBD is? Are Format 1 and Format 2 both okay, or is it just Format 2?
- Travis
On Mon, Dec 15, 2014 at 8:58 AM, Max Power <
mailli...@ferienwohnung-altenbeken.de> wrote:
>
> > Ilya Dryomov wrote on 12 December 2014 at 18:00
> Ilya Dryomov wrote on 12 December 2014 at 18:00:
> Just a note, discard support went into 3.18, which was released a few
> days ago.
I recently compiled 3.18 on Debian 7 and what can I say... It works
perfectly well. The used space goes up and down again. So I think this wi
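As a rough sketch of how the reclaimed space shows up in practice (device path, mount point, and pool usage check are assumptions):

# Continuous discard: ext4 issues trims as blocks are freed
sudo mount -o discard /dev/rbd0 /mnt/rbd
# Or reclaim in one batch on an already-mounted filesystem
sudo fstrim -v /mnt/rbd
# Pool usage should drop afterwards
rados df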
On 12/12/2014 01:17 PM, Max Power wrote:
>> Wido den Hollander wrote on 12 December 2014 at 12:53:
>> It depends. Kernel RBD does not support discard/trim yet. Qemu does
>> under certain situations and with special configuration.
>
> Ah, Thank you. So this is my problem. I use rbd w
On 12.12.2014 12:48, Max Power wrote:
> It would be great to shrink the used space. Is there a way to achieve this? Or
> have I done something wrong? In a professional environment you may be able to
> live with filesystems that only grow. But on my small home-cluster this really
> is a problem.
As Wi
On Fri, Dec 12, 2014 at 2:53 PM, Wido den Hollander wrote:
> On 12/12/2014 12:48 PM, Max Power wrote:
>> I am new to Ceph and start discovering its features. I used ext4 partitions
>> (also mounted with -o discard) to place several osd on them. Then I created
>> an
>> erasure coded pool in this c
Discard works with virtio-scsi controllers for disks in QEMU.
Just use discard=unmap in the disk section (scsi disk).
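A rough qemu command-line equivalent (pool and image names are made up; with libvirt the same thing is discard='unmap' on the disk's <driver> element):

# Attach an RBD image via a virtio-scsi controller with discard passed through
qemu-system-x86_64 ... \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=rbd:rbd/vmdisk,format=raw,if=none,id=drive0,cache=writeback,discard=unmap \
  -device scsi-hd,bus=scsi0.0,drive=drive0
# Inside the guest, mount with -o discard or run fstrim to actually release space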
> On 12 Dec 2014, at 13:17, Max Power
> wrote:
>
>> Wido den Hollander wrote on 12 December 2014 at 12:53:
>> It depends. Kernel RBD does not support discard/tri
> Wido den Hollander wrote on 12 December 2014 at 12:53:
> It depends. Kernel RBD does not support discard/trim yet. Qemu does
> under certain situations and with special configuration.
Ah, Thank you. So this is my problem. I use rbd with the kernel modules. I think
I should port my
On 12/12/2014 12:48 PM, Max Power wrote:
> I am new to Ceph and start discovering its features. I used ext4 partitions
> (also mounted with -o discard) to place several osd on them. Then I created an
> erasure coded pool in this cluster. On top of this there is the rados block
> device which holds
I am new to Ceph and am just starting to discover its features. I used ext4
partitions (also mounted with -o discard) to place several OSDs on them. Then I
created an erasure-coded pool in this cluster. On top of this there is the RADOS
block device, which also holds an ext4 filesystem (of course mounted with -
Hi,
I attached a 500 GB block device to the VM and tested it inside the VM with
"dd if=/dev/zero of=myfile bs=1M count=1024".
I got an average I/O speed of about 31 MB/s, but I thought I
should have got around 100 MB/s,
because my VM hypervisor has a 1 Gb NIC and the OSD host has a 10 Gb NIC.
Di
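For what it's worth: a 1 Gbit/s NIC tops out around 110-120 MB/s on the wire, and a plain dd of /dev/zero mostly exercises the guest page cache, so it is not a very telling benchmark. A sketch of some alternatives (the pool name "rbd" is an assumption):

# Bypass the guest page cache so writes really hit the virtual disk
dd if=/dev/zero of=myfile bs=1M count=1024 oflag=direct
# Or include the final flush in the timing
dd if=/dev/zero of=myfile bs=1M count=1024 conv=fdatasync
# Benchmark the cluster itself from the hypervisor, outside the VM stack
rados bench -p rbd 30 write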
> -Original Message-
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Tuesday, 22 October 2013 14:16
> To: Fuchs, Andreas (SwissTXT)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph Block Device install
>
> On Tue, Oct 22, 2013 at
reas (SwissTXT)
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] Ceph Block Device install
>>
>> On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
>> wrote:
>> > Hi
>> >
>> > I try to install a client with ceph block devi
Hi Alfredo
Thanks for picking up on this
> -Original Message-
> From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
> Sent: Monday, 21 October 2013 14:17
> To: Fuchs, Andreas (SwissTXT)
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph Block Device i
On Mon, Oct 21, 2013 at 5:25 AM, Fuchs, Andreas (SwissTXT)
wrote:
> Hi
>
> I try to install a client with ceph block device following the instructions
> here:
> http://ceph.com/docs/master/start/quick-rbd/
>
> the client has a user ceph and ssh is setup passwordless also for sudo
> when I run cep
Hi
I am trying to install a client with a Ceph block device following the instructions
here:
http://ceph.com/docs/master/start/quick-rbd/
the client has a user "ceph" and SSH is set up passwordless, also for sudo.
When I run ceph-deploy I see:
On the ceph management host:
ceph-deploy install 10.100.21.10
[
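For context, the quick-rbd page being followed boils down to roughly the following (the IP is taken from the post above; image name and mount point are placeholders):

# On the management host: install ceph on the client and push the config and admin key
ceph-deploy install 10.100.21.10
ceph-deploy admin 10.100.21.10
# On the client: create, map, format and mount a test image
rbd create foo --size 4096
sudo rbd map foo --pool rbd
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device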