Do you have full logs from the beginning of replay? I believe you
should only see this when a client is reconnecting to the MDS with
files that the MDS doesn't know about already, which shouldn't happen
at all in a single-MDS system. Although that "pool -1" also looks
suspicious and makes me wonder
On Tue, Aug 20, 2013 at 4:56 PM, Petr Soukup wrote:
> I am using the ceph filesystem through ceph-fuse to store product photos and most
> of the time it works great. But if there is some problem on the ceph server, my
> connected clients start acting crazy. Load on all servers with ceph mounted
> jumps
We've identified a problem when upgrading directly from bobtail to
dumpling; please wait until 0.67.2 before doing so.
Upgrades from bobtail -> cuttlefish -> dumpling are fine. It is only the
long jump between versions that is problematic.
The fix is already in the dumpling branch. Another po
I am using the ceph filesystem through ceph-fuse to store product photos and most
of the time it works great. But if there is some problem on the ceph server, my
connected clients start acting crazy. Load on all servers with ceph mounted
jumps very high, and the webserver and other services start to crash.
I t
Good to hear.
-Sam
On Tue, Aug 20, 2013 at 2:55 PM, Oliver Daudey wrote:
> Hey Samuel,
>
> I picked up 0.67.1-10-g47c8949 from the GIT-builder and the osd from
> that seems to work great so far. I'll have to let it run for a while
> longer to really be sure it fixed the problem, but it looks pro
Hey Samuel,
I picked up 0.67.1-10-g47c8949 from the GIT-builder and the osd from
that seems to work great so far. I'll have to let it run for a while
longer to really be sure it fixed the problem, but it looks promising,
not taking any more CPU than the Cuttlefish-osds. Thanks! I'll get
back to
On 08/20/2013 11:20 AM, Vincent Hurtevent wrote:
I'm not the end user. It's possible that the volume has been detached
without unmounting.
As the volume is unattached and the initial kvm instance is down, I was
expecting the rbd volume to be properly unlocked even if the guest unmount
hasn't been
This might be slightly off topic, though many ceph users might have run into
similar issues.
For one of our Grizzly OpenStack environments, we are using Ceph/RBD as the
exclusive image and volume storage for VMs, which boot from rbd-backed
Cinder volumes. As a result, the nova image cache i
On 08/19/2013 11:24 AM, Gregory Farnum wrote:
On Mon, Aug 19, 2013 at 9:07 AM, Sage Weil wrote:
On Mon, 19 Aug 2013, Sébastien Han wrote:
Hi guys,
While reading a developer doc, I came across the following options:
* osd balance reads = true
* osd shed reads = true
* osd shed reads min laten
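For reference, options like these go in the [osd] section of ceph.conf, and you can ask a running OSD whether it actually recognizes them through its admin socket. A minimal sketch, assuming default socket paths and osd.0:

    # ceph.conf
    [osd]
        osd balance reads = true
        osd shed reads = true

    # check what a running osd reports for these options, if anything
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep -E 'balance_reads|shed_reads'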
Hey Mark,
I'm working on a simple way to reproduce this if we fail to identify it
and `rados bench' indeed looks promising, as it can also simulate many
concurrent ops, which is probably what sets the 3 VM load on my
test-cluster apart from the +-80 VM load on my production-cluster. I'll
look int
I'm not the end user. It's possible that the volume has been detached
without unmounting.
As the volume is unattached and the initial kvm instance is down, I was
expecting the rbd volume to be properly unlocked even if the guest unmount
hasn't been done, just like a physical disk.
Which p
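If the image really is still locked by the (now gone) kvm client, the lock can be inspected and removed by hand. A minimal sketch; the pool and image names here are hypothetical:

    rbd -p volumes lock list volume-0000abcd
    # remove the stale lock using the lock id and locker shown by the list command
    rbd -p volumes lock remove volume-0000abcd <lock id> <locker>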
Hi Oliver,
Have you ever tried using RADOS bench?
On one of the client nodes, you can do:
rados -p <pool> -b <block size> bench 120 write -t <concurrent ops>
It would be useful to know if that is also slower without any of the RBD
code involved. Btw, what was the dd command line you were using for
testing?
Thanks,
Mark
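As a concrete example, a 120-second write benchmark with 4 MB objects and 16 concurrent ops against a pool named rbd would look like this (the pool name and numbers are placeholders to adapt):

    rados -p rbd -b 4194304 bench 120 write -t 16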
I am also experiencing this problem.
http://tracker.ceph.com/issues/2476
regards
Maciej
Can you try dumpling head without the option?
-Sam
On Tue, Aug 20, 2013 at 1:44 AM, Oliver Daudey wrote:
> Hey Mark,
>
> Sorry, but after some more tests I have to report that it only worked
> partly. The load seems lower with "wip-dumpling-pglog-undirty" in
> place, but the Cuttlefish-osd still
Hello,
I'm using Ceph as the Cinder backend. It's actually working pretty well, and
some users have been using this cloud platform for a few weeks, but I came back
from vacation to find errors when removing volumes, errors I didn't
have a few weeks ago.
Here's the situation :
Volumes are unattached,
Martin,
Thanks for the confirmation about 3-replica performance.
dmesg | fgrep /dev/sdb # returns no matches
Jeff
Hi Jeff,
I would be surprised as well - we initially tested on a 2-replica cluster
with 8 nodes having 12 osds each - and went to 3-replica as we re-built the
cluster.
The performance seems to be where I'd expect it (doing consistent writes in
an rbd VM @ ~400MB/sec on 10GbE, which I'd expect is eit
On Tue, 20 Aug 2013, Joao Eduardo Luis wrote:
> On 08/20/2013 10:24 AM, Valery Tschopp wrote:
> > Hi,
> >
> > For those using Nagios to monitor their ceph cluster, I've written a
> > 'ceph health' Nagios plugin:
> >
> > https://github.com/valerytschopp/ceph-nagios-plugins
> >
> > The plug
On Tue, Aug 20, 2013 at 7:34 AM, Fuchs, Andreas (SwissTXT)
wrote:
> Hi
>
> I successfully set up a ceph cluster with 0.67.1, and now I am trying to get the radosgw
> on a separate node running
>
> OS = Ubuntu 12.04 LTS
> Install seemed successful, but if I try to access the API I see the
> following in the l
On Aug 20, 2013, at 15:18 , Johannes Klarenbeek
wrote:
>
>
> From: Wolfgang Hennerbichler [mailto:wo...@wogri.com]
> Sent: Tuesday, 20 August 2013 10:51
> To: Johannes Klarenbeek
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] some newbie questions...
>
> On Aug 20, 2
Hi
I successfully set up a ceph cluster with 0.67.1, and now I am trying to get the radosgw on
a separate node running.
OS = Ubuntu 12.04 LTS
Install seemed successful, but if I try to access the API I see the following
in the logs:
Apache2/error.log
2013-08-20 16:22:17.029064 7f927abdf780 -1 warning: unabl
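If that truncated warning turns out to be about a missing keyring, the usual setup from the radosgw docs of this era creates a dedicated client and points ceph.conf at its keyring. A sketch with the conventional default names, not taken from this report:

    ceph auth get-or-create client.radosgw.gateway mon 'allow rw' osd 'allow rwx' \
        -o /etc/ceph/keyring.radosgw.gateway

    # ceph.conf on the gateway node
    [client.radosgw.gateway]
        keyring = /etc/ceph/keyring.radosgw.gateway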
Hi !
Ceph newbie here with a placement question. I'm trying to get a simple Ceph
setup to run well with big sequential reads (>256k).
This is for learning/benchmarking purposes only, and the setup I'm working with
has a single server with 2 data drives, one OSD on each, journals on th
On 08/20/2013 08:42 AM, Jeff Moskow wrote:
Hi,
More information: if I look in /var/log/ceph/ceph.log, I see 7893 slow
requests in the last 3 hours, of which 7890 are from osd.4. Should I
assume a bad drive? SMART says the drive is healthy, though. A bad osd?
Definitely sounds suspicious! Might be w
Hi,
More information: if I look in /var/log/ceph/ceph.log, I see 7893 slow
requests in the last 3 hours, of which 7890 are from osd.4. Should I
assume a bad drive? SMART says the drive is healthy, though. A bad osd?
Thanks,
Jeff
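One way to cross-check the drive behind osd.4 is to look at its SMART error log and kernel messages directly. A sketch, where /dev/sdX stands for whichever device backs osd.4:

    smartctl -a /dev/sdX         # overall health plus reallocated/pending sector counts
    smartctl -l error /dev/sdX   # the drive's own error log
    dmesg | grep -i sdX          # kernel-level I/O errors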
Hi,
After upgrading to dumpling I appear unable to get the mds cluster
running. The active server just sits in the rejoin state spinning and
causing lots of i/o on the osds. Looking at the logs it appears to be
going through checking a vast number of missing inodes.
2013-08-20 13:50:29.129624 7fd
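To capture the full replay/rejoin sequence in the MDS log, debug logging can be turned up. A sketch, either injected into the running daemon or set in ceph.conf before restarting the mds:

    ceph mds tell 0 injectargs '--debug-mds 20 --debug-journaler 20'

    # or in ceph.conf, then restart the mds
    [mds]
        debug mds = 20
        debug journaler = 20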
On 08/20/2013 10:24 AM, Valery Tschopp wrote:
Hi,
For those using Nagios to monitor their ceph cluster, I've written a
'ceph health' Nagios plugin:
https://github.com/valerytschopp/ceph-nagios-plugins
The plugin is written in Python and allows you to specify a client user id
and keyring to
Hi Valery,
Thank you for taking the time to write this plugin :-) Did you consider
publishing it in http://exchange.nagios.org/directory/Plugins ?
Cheers
On 20/08/2013 11:24, Valery Tschopp wrote:
> Hi,
>
> For those using Nagios to monitor their ceph cluster, I've written a 'ceph
> health
Hi,
For those using Nagios to monitor their ceph cluster, I've written a
'ceph health' Nagios plugin:
https://github.com/valerytschopp/ceph-nagios-plugins
The plugin is written in Python and allows you to specify a client user id
and keyring, so the plugin can be executed as the 'nagios' user or another user.
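A typical deployment creates a read-only Ceph client for the check and points the plugin at it. A sketch; the check_ceph_health name and its flags are assumptions based on the repository, so check its README for the exact options:

    ceph auth get-or-create client.nagios mon 'allow r' -o /etc/ceph/ceph.client.nagios.keyring
    check_ceph_health --id nagios --keyring /etc/ceph/ceph.client.nagios.keyring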
On 08/19/2013 03:42 PM, Johannes Klarenbeek wrote:
Dear Ceph Developers and Users,
I was wondering if there is any download location for preinstalled
virtual machine images with the latest release of ceph. Preferably 4
different images with Ceph-OSD, Ceph-Mon, Ceph-MDS and last but not
least a C
Thanks Greg.
>>The typical case is going to depend quite a lot on your scale.
[Guang] I am thinking of a scale of billions of objects, with sizes from several
KB to several MB; my concern is the cache efficiency for such a use case.
That said, I'm not sure why you'd want to use CephFS for a smal
On Aug 20, 2013, at 09:54 , Johannes Klarenbeek
wrote:
> dear ceph-users,
>
> although heavily active in the past, I didn't touch Linux for years, so I'm
> pretty new to ceph and I have a few questions, which I hope someone could
> answer for me.
>
> 1) I read somewhere that it is recommen
Then that makes total sense to me.
Thanks,
Guang
From: Mark Kirkwood
To: Guang Yang
Cc: "ceph-users@lists.ceph.com"
Sent: Tuesday, August 20, 2013 1:19 PM
Subject: Re: [ceph-users] Usage pattern and design of Ceph
On 20/08/13 13:27, Guang Yang wrote:
> T
Hey Mark,
Sorry, but after some more tests I have to report that it only worked
partly. The load seems lower with "wip-dumpling-pglog-undirty" in
place, but the Cuttlefish-osd still seems significantly faster and even
with "wip-dumpling-pglog-undirty" in place, things slow down way too
much under
Hi,
I am now occasionally seeing ceph statuses like this:
health HEALTH_WARN 2 requests are blocked > 32 sec
They aren't always present even though the cluster is still slow, but
they may be a clue
Jeff
On Sat, Aug 17, 2013 at 02:32:47PM -0700, Sage Wei
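When that warning appears, 'ceph health detail' names the OSDs with blocked requests, and the admin socket of a suspect OSD shows what its oldest ops are waiting on. A sketch; osd.4 and the default socket path are just examples:

    ceph health detail
    ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok dump_ops_in_flight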
Dear ceph-users,
Although heavily active in the past, I didn't touch Linux for years, so I'm
pretty new to ceph and I have a few questions, which I hope someone could
answer for me.
1) I read somewhere that it is recommended to have one OSD per disk in a
production environment.
Is this also