On Fri, 6 Jul 2012, Mark Kirkwood wrote:
> On 06/07/12 16:17, Sage Weil wrote:
> > On Fri, 6 Jul 2012, Mark Kirkwood wrote:
> > >
> > > FYI: I ran into this too - you need to do:
> > >
> > > apt-get dist-upgrade
> > >
> > > for the 0.47-2 packages to be replaced by 0.48 (of course purging 'em an
On 06/07/12 16:17, Sage Weil wrote:
On Fri, 6 Jul 2012, Mark Kirkwood wrote:
FYI: I ran into this too - you need to do:
apt-get dist-upgrade
for the 0.47-2 packages to be replaced by 0.48 (of course purging 'em and
reinstalling works too...just a bit more drastic)!
That's strange... anyone k
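For reference, a minimal sketch of the check-and-upgrade sequence Mark describes, assuming a Debian/Ubuntu node with the ceph apt repository already configured:

  apt-get update
  apt-cache policy ceph     # shows the installed version vs. the 0.48 candidate
  apt-get dist-upgrade      # a plain "apt-get upgrade" can hold the new packages back
  ceph -v                   # confirm the client utility itself now reports 0.48

dist-upgrade is needed here because plain upgrade will not install or remove packages to satisfy changed dependencies, which is presumably why the 0.47-2 packages were not replaced.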
On Fri, 6 Jul 2012, Mark Kirkwood wrote:
> On 06/07/12 14:38, Xiaopong Tran wrote:
> >
> > Thanks for the quick reply, I didn't have the computer with me last
> > night. But you were right. I checked the version of ceph on ubuntu,
> > and it's still stuck with 0.47.3, despite upgrading. I redid th
On 06/07/12 14:38, Xiaopong Tran wrote:
Thanks for the quick reply, I didn't have the computer with me last
night. But you were right. I checked the version of ceph on ubuntu,
and it's still stuck with 0.47.3, despite upgrading. I redid the
upgrade, and it's still stuck with that version. That's
On Fri, 6 Jul 2012, Paul Pettigrew wrote:
> Hi Sage - thanks so much for the quick response :-)
>
> Firstly, and it is a bit hard to see, but the command output below is run
> with the "-v" option. To help isolate what command line in the script is
> failing, I have added in some simple echo out
Hi,
Stefan is on vacation at the moment, I don't know if he can reply to you.
But I can reply for him for the KVM part (as we run the same tests together in
parallel).
- kvm is 1.1
- rbd 0.48
- drive option
rbd:pool/volume:auth_supported=cephx;none;keyring=/etc/pve/priv/ceph/ceph.keyring:mon_host=X.X.
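A sketch of how a spec like this is typically passed to KVM directly on the command line; pool, volume and monitor address are placeholders, and the example assumes the kvm 1.1 binary was built with rbd support. Note that semicolons and colons inside option values have to be backslash-escaped so qemu's option parser does not split on them:

  qemu-system-x86_64 -m 1024 \
    -drive 'file=rbd:pool/volume:auth_supported=cephx\;none:keyring=/etc/pve/priv/ceph/ceph.keyring:mon_host=10.0.0.1\:6789,if=virtio,cache=none'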
Hi Sage - thanks so much for the quick response :-)
Firstly, and it is a bit hard to see, but the command output below is run with
the "-v" option. To help isolate what command line in the script is failing, I
have added in some simple echo output, and the script now looks like:
### prepare-os
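The echo-marker technique being described is roughly the following, sketched here with made-up step names and devices rather than the real ones from the script:

  #!/bin/sh -e
  echo "### prepare-osd-fs"
  mkfs.xfs -f /dev/sdb1           # placeholder device
  echo "### mount-osd-fs"
  mount /dev/sdb1 /srv/osd.0      # placeholder mount point

With -e set the script stops at the first failing command, so the last "###" marker printed shows which step it died in; running it under "sh -x" gives the same information with more noise.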
On 07/05/2012 10:38 PM, Sage Weil wrote:
On Thu, 5 Jul 2012, Xiaopong Tran wrote:
The problem is that the ceph utility itself is pre-0.48, but the monitors
are running 0.48. You need to upgrade the utility as well. (There was a
note about this in the release announcement.)
This only affects t
On Wed, Jul 4, 2012 at 9:33 AM, Sage Weil wrote:
> On Wed, 4 Jul 2012, Gregory Farnum wrote:
>> Hmmm - we generally try to modify these versions when the API changes,
>> not on every sprint. It looks to me like Sage added one function in 0.45
>> where we maybe should have bumped it, but that was
Hi Paul,
On Wed, 4 Jul 2012, Paul Pettigrew wrote:
> Firstly, well done guys on achieving this version milestone. I
> successfully upgraded to the 0.48 format uneventfully on a live (test)
> system.
>
> The same system was then going through "rebuild" testing, to confirm
> that also worked fin
On 5 Jul 2012, at 21:21, Samuel Just wrote:
> David,
>
> Could you try rados -p data bench 60 write -t 16 -b 4096?
>
> rados bench defaults to 4MB objects, this'll give us results for 4k objects.
>
> If you could give me the latency too, that would help.
> -Sam
Hi Sam,
I first ran this with
Could you send over the ceph.conf on your KVM host, as well as how
you're configuring KVM to use rbd?
On Tue, Jul 3, 2012 at 11:20 AM, Stefan Priebe wrote:
> I'm sorry but this is the KVM Host Machine there is no ceph running on this
> machine.
>
> If i change the admin socket to:
> admin_socket=
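For context: on a host that only runs KVM clients, the admin socket normally comes from the [client] section of that host's ceph.conf. A minimal hypothetical example (the paths are placeholders; $name and $pid are standard ceph.conf metavariables):

  [client]
      admin socket = /var/run/ceph/$name.$pid.asok
      log file = /var/log/ceph/$name.log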
On Wed, Jul 4, 2012 at 10:53 AM, Yann Dupont wrote:
> On 04/07/2012 18:21, Gregory Farnum wrote:
>
>> On Wednesday, July 4, 2012 at 1:06 AM, Yann Dupont wrote:
>>>
>>> On 03/07/2012 23:38, Tommi Virtanen wrote:
>>> On Tue, Jul 3, 2012 at 1:54 PM, Yann Dupont (mailto:yann.dup...@uni
On Thu, Jul 5, 2012 at 1:25 PM, Florian Haas wrote:
> On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum wrote:
>>> Also, going down the rabbit hole, how would this behavior change if I
>>> used cephfs to set the default layout on some directory to use a
>>> different pool?
>>
>> I'm not sure what y
On Thu, Jul 5, 2012 at 1:19 PM, Florian Haas wrote:
> On Thu, Jul 5, 2012 at 10:04 PM, Gregory Farnum wrote:
>> But I have a few more queries while this is fresh. If you create a
>> directory, unmount and remount, and get the location, does that work?
>
> Nope, same error.
>
>> (actually, just fl
On Thu, Jul 5, 2012 at 10:01 PM, Gregory Farnum wrote:
>> Also, going down the rabbit hole, how would this behavior change if I
>> used cephfs to set the default layout on some directory to use a
>> different pool?
>
> I'm not sure what you're asking here — if you have access to the
> metadata ser
David,
Could you try rados -p data bench 60 write -t 16 -b 4096?
rados bench defaults to 4MB objects, this'll give us results for 4k objects.
If you could give me the latency too, that would help.
-Sam
On Thu, Jul 5, 2012 at 12:49 PM, Mark Nelson wrote:
> On 07/05/2012 01:43 PM, David Blundell
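A sketch of the two runs for anyone following along; rados bench prints per-second stats plus average and maximum latency in its summary, so capturing the output is enough to answer the latency question (the log file name is just an example):

  rados -p data bench 60 write -t 16                                    # default 4 MB objects
  rados -p data bench 60 write -t 16 -b 4096 | tee rados-bench-4k.log   # 4k objects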
On Thu, Jul 5, 2012 at 10:04 PM, Gregory Farnum wrote:
> But I have a few more queries while this is fresh. If you create a
> directory, unmount and remount, and get the location, does that work?
Nope, same error.
> (actually, just flushing caches would probably do it.)
Idem.
> If you create a
On Thu, Jul 5, 2012 at 10:40 AM, Florian Haas wrote:
> And one more issue report for today... :)
>
> Really easy to reproduce on my 3.2.0 Debian squeeze-backports kernel:
> mount a Ceph FS, create a directory in it. Then run "cephfs
> show_location".
>
> dmesg stacktrace:
>
> [ 7153.714260] libce
On Thu, Jul 5, 2012 at 10:40 AM, Florian Haas wrote:
> Hi everyone,
>
> please enlighten me if I'm misinterpreting something, but I think the
> Ceph FS layer could handle the following situation better.
>
> How to reproduce (this is on a 3.2.0 kernel):
>
> 1. Create a client, mine is named "test",
On 07/05/2012 01:43 PM, David Blundell wrote:
Hi David and Alexandre,
Does this only happen with random writes or also sequential writes? If it
happens with sequential writes as well, does it happen with rados bench?
--
Mark Nelson
Performance Engineer
Inktank
Hi Mark,
I just ran "rados -p
> Hi David and Alexandre,
>
> Does this only happen with random writes or also sequential writes? If it
> happens with sequential writes as well, does it happen with rados bench?
>
> --
> Mark Nelson
> Performance Engineer
> Inktank
Hi Mark,
I just ran "rados -p data bench 60 write -t 16" and
It was during a random write (fio benchmark).
I can't reproduce it now, I'll try to do the tests again this week.
- Original Message -
From: "Mark Nelson"
To: "Alexandre DERUMIER"
Cc: "David Blundell" ,
ceph-devel@vger.kernel.org
Sent: Thursday, July 5, 2012 19:58:27
Subject: Re: Slow request
> Hi David and Alexandre,
>
> Does this only happen with random writes or also sequential writes? If it
> happens with sequential writes as well, does it happen with rados bench?
>
> --
> Mark Nelson
> Performance Engineer
> Inktank
Hi Mark,
I have only ever seen it with random writes. I'll r
On 07/04/2012 11:58 AM, Alexandre DERUMIER wrote:
Hi, I see the same messages here after upgrading to 0.48,
with a random write benchmark.
I have more lag than before with 0.47 (but the disks are at 100% usage, so I
can't tell if it's normal or not)
- Original Message -
From: "David Blundell"
To: ceph
On Thu, Jul 5, 2012 at 10:39 AM, Florian Haas wrote:
> Hi guys,
>
> Someone I worked with today pointed me to a quick and easy way to
> bring down an entire cluster, by making all mons kill themselves in
> mass suicide:
>
> ceph osd setmaxosd 2147483647
> 2012-07-05 16:29:41.893862 b5962b70 0 mon
And one more issue report for today... :)
Really easy to reproduce on my 3.2.0 Debian squeeze-backports kernel:
mount a Ceph FS, create a directory in it. Then run "cephfs
show_location".
dmesg stacktrace:
[ 7153.714260] libceph: mon2 192.168.42.116:6789 session established
[ 7308.584193] divid
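A compact version of the reproducer, with the mount point and credentials as placeholders (the monitor address is the one in the dmesg line above):

  mount -t ceph 192.168.42.116:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
  mkdir /mnt/ceph/somedir
  cephfs /mnt/ceph/somedir show_location    # oopses on the 3.2.0 kernel per the report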
Hi everyone,
please enlighten me if I'm misinterpreting something, but I think the
Ceph FS layer could handle the following situation better.
How to reproduce (this is on a 3.2.0 kernel):
1. Create a client, mine is named "test", with the following capabilities:
client.test
key:
Hi guys,
Someone I worked with today pointed me to a quick and easy way to
bring down an entire cluster, by making all mons kill themselves in
mass suicide:
ceph osd setmaxosd 2147483647
2012-07-05 16:29:41.893862 b5962b70 0 monclient: hunting for new mon
I don't know what the actual threshold
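For reference, the command in question next to its normal usage; the small value below is only an example, the point being that max_osd only needs to be at least the highest OSD id plus one:

  ceph osd setmaxosd 2147483647     # the value from the report: takes the mons down
  ceph osd setmaxosd 12             # sane usage on a cluster with OSD ids 0-11
  ceph osd dump | grep max_osd      # check the current value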
On Thu, Jul 5, 2012 at 11:20 PM, Sage Weil wrote:
> On Wed, 4 Jul 2012, Sha Zhengju wrote:
>> On 07/02/2012 10:49 PM, Sage Weil wrote:
>> > On Mon, 2 Jul 2012, Sha Zhengju wrote:
>> > > On 06/29/2012 01:21 PM, Sage Weil wrote:
>> > > > On Thu, 28 Jun 2012, Sha Zhengju wrote:
>> > > >
>> > > > > Fr
On Wed, 4 Jul 2012, Sha Zhengju wrote:
> On 07/02/2012 10:49 PM, Sage Weil wrote:
> > On Mon, 2 Jul 2012, Sha Zhengju wrote:
> > > On 06/29/2012 01:21 PM, Sage Weil wrote:
> > > > On Thu, 28 Jun 2012, Sha Zhengju wrote:
> > > >
> > > > > From: Sha Zhengju
> > > > >
> > > > > Following we will tre
On Thu, 5 Jul 2012, Xiaopong Tran wrote:
> Sage Weil wrote:
>
> >Hi,
> >
> >On Thu, 5 Jul 2012, Xiaopong Tran wrote:
> >> Hi,
> >>
> >> I put up a small cluster with 3 osds, 2 mds, 3 mons, on 3 machines.
> >> They were running 0.47.2, and this is a test to do rolling upgrade to
> >> 0.48.
> >>
Sage Weil wrote:
>Hi,
>
>On Thu, 5 Jul 2012, Xiaopong Tran wrote:
>> Hi,
>>
>> I put up a small cluster with 3 osds, 2 mds, 3 mons, on 3 machines.
>> They were running 0.47.2, and this is a test to do rolling upgrade to
>> 0.48.
>>
>> I shutdown, upgraded the software, then restarted. One nod
On Thu, 5 Jul 2012, Wido den Hollander wrote:
> On 04-07-12 18:18, Sage Weil wrote:
> > On Wed, 4 Jul 2012, Wido den Hollander wrote:
> > > > On Wed, 4 Jul 2012, Wido den Hollander wrote:
> > > > > By using this we prevent scenarios where cephx keys are not accepted
> > > > > in various situations.
Hi,
On Thu, 5 Jul 2012, Xiaopong Tran wrote:
> Hi,
>
> I put up a small cluster with 3 osds, 2 mds, 3 mons, on 3 machines.
> They were running 0.47.2, and this is a test to do rolling upgrade to
> 0.48.
>
> I shutdown, upgraded the software, then restarted. One node at a time.
> The first two se
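The rolling procedure being described is, per node, roughly the following; the service commands are the 0.48-era sysvinit style and the health check is just the obvious way to know when it is safe to move on (a sketch, not the documented procedure):

  service ceph stop                        # or /etc/init.d/ceph stop, on the node being upgraded
  apt-get update && apt-get dist-upgrade   # see the dist-upgrade note above
  service ceph start
  ceph -s                                  # wait for the cluster to settle before the next node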
On 2012. July 4. 09:34:04 Gregory Farnum wrote:
> Hrm, it looks like the OSD data directory got a little busted somehow. How
> did you perform your upgrade? (That is, how did you kill your daemons, in
> what order, and when did you bring them back up.)
Since it would be hard and long to describe i
On 04-07-12 22:40, Sage Weil wrote:
Although Ceph fs would technically work for storing mail with maildir,
when you step back from the situation, Maildir + a distributed file system
is a pretty terrible way to approach mail storage. Maildir was designed
to work around the limited consistency of
On 02-07-12 21:21, Wido den Hollander wrote:
On 06/25/2012 05:45 PM, Wido den Hollander wrote:
On 06/25/2012 05:20 PM, Wido den Hollander wrote:
Hi,
I just tried to start a VM with libvirt with the following disk:
That fails with: "Operation not supported"
I trie
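One cheap thing to check when libvirt reports "Operation not supported" for an rbd disk is whether the qemu binary on that host was built with rbd support at all; this is only a guess at the cause here, but the check costs nothing (most qemu-img builds print a "Supported formats" line in their help output):

  qemu-img --help | grep rbd    # rbd should appear in the supported formats list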
On 04-07-12 18:18, Sage Weil wrote:
On Wed, 4 Jul 2012, Wido den Hollander wrote:
On Wed, 4 Jul 2012, Wido den Hollander wrote:
By using this we prevent scenarios where cephx keys are not accepted
in various situations.
Replacing the + and / by - and _ we generate URL-safe base64 keys
Signe
In these cases + and / are replaced by - and _ to prevent problems when using
the base64 strings in URLs.
Signed-off-by: Wido den Hollander
---
src/common/armor.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/common/armor.c b/src/common/armor.c
index d1d5664..e4b
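The character swap the patch describes is the standard URL-safe base64 mapping. Purely as an illustration of the transformation (not of how the patched armor.c is called), the same substitution at the shell level on a throwaway example string:

  echo 'a+key/with+special/chars==' | tr '+/' '-_'
  # prints: a-key_with-special_chars==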
On 07/04/2012 06:33 PM, Sage Weil wrote:
On Wed, 4 Jul 2012, Gregory Farnum wrote:
Hmmm - we generally try to modify these versions when the API changes,
not on every sprint. It looks to me like Sage added one function in 0.45
where we maybe should have bumped it, but that was a long time ago