Hi Wiessner,
Actually I created a zpool on /dev/sdb and then set a mountpoint for the OSD. The space showing is 144GB. But the OSDs are going down while an rbd image is running.
Thanks,
Ramu.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body
filestore [min|max] sync interval:
Periodically, the filestore needs to quiesce writes and do a syncfs in order to create a consistent commit point up to which it can free journal entries. Syncing more frequently tends to reduce the time required to do the sync, and reduces the amount of data that needs to remain in the journal.
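For reference, these options go in the [osd] section of ceph.conf; a minimal sketch (the values shown are illustrative, roughly the defaults of that era, not recommendations):

```ini
[osd]
    ; lower bound, in seconds (float), between syncfs commit points
    filestore min sync interval = 0.01
    ; upper bound, in seconds, after which a sync is forced
    filestore max sync interval = 5
```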
Hi Josh,
thanks for the hint.
Can you please spend a few words on the meaning of these parameters?
- filestore min/max sync interval = int/float ? seconds ? of what ?
- filestore flusher = false
- filestore queue max ops = 1
What is 'one op' ? A queue in front of what?
One more question: what's the behavior if the disk which contains all the journals crashes? Will all the OSDs crash?
Thank you in advance :)
On Mon, Aug 27, 2012 at 7:37 PM, Sébastien Han wrote:
> Ok, I see. Thanks for the clarification :)
>
>
> On Mon, Aug 27, 2012 at 6:20 PM, Gregory Fa
On 08/29/2012 01:50 AM, Alexandre DERUMIER wrote:
Nice results!
(Can you run the same benchmark from a qemu-kvm guest with the virtio driver? I made some benchmarks some months ago with Stefan Priebe, and we were never able to get more than 2iops with a full-SSD 3-node cluster.)
How can I set th
On 08/29/2012 09:25 AM, Yehuda Sadeh wrote:
On Wed, Aug 29, 2012 at 9:15 AM, Josh Durgin wrote:
On 08/29/2012 06:40 AM, Alexandre DERUMIER wrote:
Hi,
I'm trying to take a full vm state snapshot with savevm monitor command
(qemu 0.12rc1 + rbd 0.48.1)
it seems that the vmstate is not saved in the
On 08/29/2012 07:57 AM, Sylvain Munaut wrote:
Hi,
It might be obvious for people who know the API, but somehow I can't
figure it out:
how and when will the callback specified in a rbd_completion_t be called?
Imagine I do an rbd_aio_write and then while (1); ... I don't see
how the library c
On Wed, Aug 29, 2012 at 04:57:39PM +0200, Sylvain Munaut wrote:
> How and when will the callback specified in a rbd_completion_t be called ?
>
> Imagine I do an rbd_aio_write and then while (1); ... I don't see
> how the library could call my callback unless there are threads
> involved (and the
Thanks Josh,
that's exactly what I wanted to know.
(BTW: I think the sheepdog block driver also supports saving the vmstate:
http://lists.wpkg.org/pipermail/sheepdog/2010-April/000368.html
)
Regards,
Alexandre
----- Original Message -----
From: "Josh Durgin"
To: "Alexandre DERUMIER"
Cc: ceph-
On Wed, Aug 29, 2012 at 9:15 AM, Josh Durgin wrote:
> On 08/29/2012 06:40 AM, Alexandre DERUMIER wrote:
>>
>> Hi,
>>
>> I'm trying to take a full vm state snapshot with savevm monitor command
>> (qemu 0.12rc1 + rbd 0.48.1)
>>
>> it seems that the vmstate is not saved in the snapshot. (I also don't noti
On 08/29/2012 06:40 AM, Alexandre DERUMIER wrote:
Hi,
I'm trying to take a full vm state snapshot with savevm monitor command (qemu
0.12rc1 + rbd 0.48.1)
It seems that the vmstate is not saved in the snapshot. (I also don't notice any VM hang during the snapshot.)
The disk snapshot is made correctly.
Hi,
> Did that happen just once, or is it a recurring incident? What it
> basically says is that the client sent a request using an old ticket. Did
> the client just wake up?
It happens every 15 / 20 min ... it varies a bit.
But what's weird is that it happens on only one OSD. Even though
there
On Wed, Aug 29, 2012 at 8:04 AM, Sylvain Munaut
wrote:
>
> Hi,
>
> In the OSD log I see this " auth: could not find secret_id=0" ... Any idea
> ?
>
> Because obviously it succeeds a couple of milliseconds later.
>
>
> 2012-08-29 13:47:17.873583 7f981ae9d700 0 -- 192.168.1.71:6807/2080
> >> 192.168
On 29 August 2012 23:43, Tommi Virtanen wrote:
> On Wed, Aug 29, 2012 at 9:40 AM, Wido den Hollander wrote:
>>> Huh ... I've never heard this. Also the guys in ##xen haven't either.
>>> I'm not really involved in xen dev and don't follow it closely but
>>> that seems unlikely. The few slides I lo
Hi,
In the OSD log I see this " auth: could not find secret_id=0" ... Any idea ?
Because obviously it succeeds a couple of milliseconds later.
2012-08-29 13:47:17.873583 7f981ae9d700 0 -- 192.168.1.71:6807/2080
>> 192.168.1.28:0/440990652 pipe(0x75d6800 sd=33 pgs=0 cs=0
l=0).accept peer addr is
Hi,
It might be obvious for people who know the API, but somehow I can't
figure it out:
how and when will the callback specified in a rbd_completion_t be called?
Imagine I do an rbd_aio_write and then while (1); ... I don't see
how the library could call my callback unless there are threads
in
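The answer isn't captured in this fragment, but as far as I know librbd delivers completions from its own internal threads, which is why the caller never has to yield control back to the library. A self-contained Python simulation of that pattern (FakeAioLibrary is a made-up stand-in, not the real librbd API):

```python
import threading
import time

# Hypothetical stand-in for an AIO library (NOT the real librbd API):
# the point is that completions are dispatched from the library's own
# internal thread, not the caller's thread.
class FakeAioLibrary:
    def aio_write(self, data, callback):
        def worker():
            time.sleep(0.05)        # pretend the I/O takes some time
            callback(len(data))     # completion delivered on the library thread
        threading.Thread(target=worker, daemon=True).start()

done = threading.Event()
result = {}

def on_complete(nbytes):
    result["written"] = nbytes
    done.set()

lib = FakeAioLibrary()
lib.aio_write(b"hello rbd", on_complete)

# Even while the calling thread is busy doing something else, the
# callback still fires, because it runs on the library's internal thread.
done.wait(timeout=5)
print(result["written"])
```

So a `while (1);` in the caller doesn't prevent the callback; it just means the caller never observes the result.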
On 08/29/2012 03:57 AM, Rutger ter Borg wrote:
Dear list,
are Rados' IoCtx objects able to process multiple asynchronous
operations at the same time, or is it necessary to wait for an operation
to complete, before issuing a following operation?
I.e., can I do the following and expect it to wor
damn, I wanted to say, qemu 1.2rc1 ;)
----- Original Message -----
From: "Smart Weblications GmbH - Florian Wiessner"
To: "Alexandre DERUMIER"
Cc: ceph-devel@vger.kernel.org
Sent: Wednesday, 29 August 2012 15:48:05
Objet: Re: qemu-rbd : savevm monitor command don't save vmstate, is it normal ?
On 29.08.2012 15:47, ramu eppa wrote:
> 2012-08-29 10:55:07.384739 7f6e98598780 -1 filestore(/data/osd.1) _detect_fs
> unable to create /data/osd.1/xattr_test: (28) No space left on device
> 2012-08-29 10:55:07.385077 7f6e98598780 -1 OSD::mkfs: couldn't mount
> FileStore:
> error -28
> 2012-08
Hi,
After zfs was installed and two OSDs were mounted on the zfs file system, I started ceph. After creating an image through qemu-rbd, the OSDs are going down.
The error logs are:
ceph-osd.1.log:
2012-08-29 10:55:07.218136 7f6e98598780 1 journal _open
/data/osd.1/osd.1.journal fd 10: 1048576000 b
On 29.08.2012 15:40, Alexandre DERUMIER wrote:
> Hi,
>
> I'm trying to take a full vm state snapshot with savevm monitor command
> (qemu 0.12rc1 + rbd 0.48.1)
>
Have you tried qemu 1.1.1? 0.12 seems very old!
--
Kind regards,
Florian Wiessner
Smart Weblications GmbH
Martins
On Wed, Aug 29, 2012 at 9:42 AM, Wido den Hollander wrote:
> So you are trying to decrease the number of PGs.
>
> Now, I'm not sure if that is supported. But why would you want to do this?
Support for pg shrinking will come pretty much exactly at the same
time as support for pg growing -- soon, b
On Wed, Aug 29, 2012 at 9:40 AM, Wido den Hollander wrote:
>> Huh ... I've never heard this. Also the guys in ##xen haven't either.
>> I'm not really involved in xen dev and don't follow it closely but
>> that seems unlikely. The few slides I looked at from the Xen Summit a
>> couple days ago show
CC back to the list.
On 08/29/2012 12:15 PM, hemant surale wrote:
$ ceph osd dump -o -|grep pg_num
pg_pool 0 'data' pg_pool(rep pg_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 200 pgp_num 192 lpg_num 2 lpgp_num 2 last_change 78
owner 0)
So you are trying to decrease the number of PGs.
Hi,
I'm trying to take a full vm state snapshot with savevm monitor command (qemu
0.12rc1 + rbd 0.48.1)
It seems that the vmstate is not saved in the snapshot. (I also don't notice any VM hang during the snapshot.)
The disk snapshot is made correctly.
Using the loadvm monitor command, rollback correctly
On 08/29/2012 02:35 PM, Sylvain Munaut wrote:
Correct me if I'm wrong, but when I was at Citrix in May this year somebody
there told me that Xen was going 100% Qemu?
Huh ... I've never heard this. Also the guys in ##xen haven't either.
I'm not really involved in xen dev and don't follow it clos
> Correct me if I'm wrong, but when I was at Citrix in May this year somebody
> there told me that Xen was going 100% Qemu?
Huh ... I've never heard this. Also the guys in ##xen haven't either.
I'm not really involved in xen dev and don't follow it closely but
that seems unlikely. The few slides I
Hi,
After starting apache2, the s3 -u list command works properly, but s3 -u create mytest is not working; the error shown is:
ERROR: XMLParseFailure
And error.log shows:
[Wed Aug 29 16:34:53 2012] [notice] FastCGI: process manager initialized (pid
5531)
[Wed Aug 29 16:34:53 2012] [notice]
On Wed, Aug 29, 2012 at 12:46 AM, hemant surale wrote:
> Hi Tommi ,
> Actually I was thinking of "Availability" of related files to a
> particular host. I wanted to guide ceph in some way to store related
> files for a host on its local OSD so that I/O over the network can be
> reduced.
That sort
Dear list,
are Rados' IoCtx objects able to process multiple asynchronous
operations at the same time, or is it necessary to wait for an operation
to complete, before issuing a following operation?
I.e., can I do the following and expect it to work?
IoCtx ctx;
ctx.aio_read( ... read 1 ... )
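For what it's worth, each librados async call takes its own completion object, so multiple operations can be in flight on one IoCtx at once. The issue-everything-then-wait pattern can be sketched with a self-contained Python simulation (fake_read is illustrative, not the real IoCtx API):

```python
import concurrent.futures
import time

# Stand-in for an async read; each call pretends to take ~100 ms.
def fake_read(offset):
    time.sleep(0.1)
    return "data@%d" % offset

start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    # Issue all four reads before waiting on any result.
    futures = [pool.submit(fake_read, off) for off in (0, 4096, 8192, 12288)]
    results = [f.result() for f in futures]
elapsed = time.monotonic() - start

print(results)
# If the operations truly overlap, total wall time is ~0.1 s, not ~0.4 s.
print(elapsed < 0.35)
```

The same shape applies with librados: create one completion per operation, issue them all, then wait on each completion.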
On 08/29/2012 10:20 AM, Sylvain Munaut wrote:
Hi,
How about Xen?
I vote for this :)
Using RBD storage for Xen VM images / disks is IMHO a very nice fit,
the same way people do with QEMU. This should even allow live
migration of VMs.
Correct me if I'm wrong, but when I was at Citrix in May
On 08/29/2012 11:42 AM, hemant surale wrote:
Hi Ceph Community ,
I am facing a problem while explicitly trying to set
the "pg_num" of the "data" pool.
root@third-virtual-mach
Hi,
> I am facing a problem while explicitly trying to set
> the "pg_num" of the "data" pool.
You can't change pg_num of an existing pool, you can only specify this
number when creating the pool.
Support for online pg split/merge is an upcoming feature.
Cheers,
Sylvain
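Since pg_num can only be chosen at creation time, the usual workaround is to create a new pool with the desired value and migrate the data into it; the pool name below is illustrative:

```
ceph osd pool create mypool 128
```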
Hi Ceph Community ,
I am facing a problem while explicitly trying to set
the "pg_num" of the "data" pool.
root@third-virtual-machine:~# ceph osd pool set data pg_num 100
2012-08
Nice results!
(Can you run the same benchmark from a qemu-kvm guest with the virtio driver? I made some benchmarks some months ago with Stefan Priebe, and we were never able to get more than 2iops with a full-SSD 3-node cluster.)
>>How can I set the variables when the journal data have to go to the O
Hi,
> How about Xen?
I vote for this :)
Using RBD storage for Xen VM images / disks is IMHO a very nice fit,
the same way people do with QEMU. This should even allow live
migration of VMs.
Currently we have to rely on the RBD kernel driver, which has some
downsides (no caching / needs a recent kernel
On Tuesday 28 August 2012 you wrote:
> On Tue, Aug 28, 2012 at 11:51 AM, Dieter Kasper
wrote:
> > Hi Ross,
> >
> > focusing on core stability and feature expansion for RBD was the right
> > approach in the past, and I feel you have reached an adequate maturity
> > level here.
> >
> > Performance en