Hey Daleep,
Right now we're planned through the end of the year, but if you know
of a location where we would get good attendance and maybe someone
with a space to host it we could certainly add that to the planning
for 2016. Feel free to shoot me an email if you know of any places
that might fit.
On Wed, Aug 26, 2015 at 3:33 PM, Dan van der Ster d...@vanderster.com wrote:
Hi Wido,
On Wed, Aug 26, 2015 at 10:36 AM, Wido den Hollander w...@42on.com wrote:
I'm sending pool statistics to Graphite
We're doing the same -- stripping invalid chars as needed -- and I
would guess that lots of
Yeah, we're still working on nailing down a venue for the Melbourne
event (but it looks like 05 Nov is probably the date). As soon as we
have a venue confirmed we'll put out a call for speakers and post the
details on the /cephdays/ page. Thanks!
On Tue, Aug 25, 2015 at 7:47 PM, Goncalo Borges
On Tue, 25 Aug 2015 17:07:18 +0200,
Jan Schermer j...@schermer.cz wrote:
There's a nice whitepaper about over-provisioning;
everyone using SSDs should read it:
http://www.sandisk.com/assets/docs/WP004_OverProvisioning_WhyHow_FINAL.pdf
Hey cephers,
Don't forget that tomorrow is our monthly Ceph Tech Talk. This month
we're taking a look at performance measuring and tuning in Ceph. Mark
Nelson, Ceph's lead performance engineer, will be giving an overview of
what's new in the performance world of Ceph and sharing some recent
Maybe if you TRIM it first, but the correct way to do that is like this:
https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
Jan
On 26 Aug 2015, at 18:58, Emmanuel Florac eflo...@intellique.com wrote:
On Tue, 25 Aug 2015 17:07:18 +0200,
Jan Schermer j...@schermer.cz
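For the hdparm route, the arithmetic behind it is simple. Here is a hedged sketch (the sector count is an invented example, and the printed command is illustrative only -- `hdparm -Np` permanently caps the drive's visible capacity, so read the linked article before running anything against a real device):

```python
def visible_sectors(native_max, spare_pct=20):
    """Sectors to leave visible when reserving spare_pct% of the drive
    as over-provisioned (never host-addressed, hence pre-erased) space."""
    return native_max * (100 - spare_pct) // 100

# Example only: a sector count as `hdparm -N /dev/sdX` might report it.
native = 1000215216
print("hdparm -Np%d /dev/sdX" % visible_sectors(native))
```

TRIM the drive first, as Jan says, so the area being hidden is actually erased before the controller claims it as spare.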
Hello Jason,
I checked the version of my built packages and they are all 9.0.2. I purged the
cluster and uninstalled the packages and there seems to be nothing else - no
older version. Could you elaborate on the fix for this issue?
Thanks,
Aakanksha
-Original Message-
From: Jason
Hi,
I would like to know if there will be a new repository hosting Jessie
packages in the near future.
If I am not mistaken, the reason the existing packages can't be used is
that Jessie ships newer versions of a few (dependency) libraries, and some
porting may be needed.
---
At the moment
Hi,
We have a Ceph cluster (version 0.94.2) which consists of four nodes
with four disks each. Ceph is configured to hold two replicas
(size 2). We use this cluster for the Ceph filesystem. A few days ago we
had a power outage, after which I had to replace three of our cluster's
OSD disks.
Hi,
It's something which has been 'bugging' me for some time now. Why are
RGW pools prefixed with a period?
I tried setting the root pool to 'rgw.root', but RGW (0.94.1) refuses to
start:
ERROR: region root pool name must start with a period
I'm sending pool statistics to Graphite and when
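For context, the root pool in question is set in ceph.conf. As a minimal sketch (the client section name is a placeholder, the option names are the Hammer-era ones, and the values must keep their leading period to avoid the startup error above):

```ini
[client.radosgw.gateway]
# Both root pool names must start with a period in Hammer, hence the
# startup error when 'rgw.root' is used instead of '.rgw.root'.
rgw region root pool = .rgw.root
rgw zone root pool = .rgw.root
```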
Great!
Yes, the behaviour is exactly as I described, so it looks like that's the
root cause. :)
Thank you, Sam and Ilya!
2015-08-21 21:08 GMT+03:00 Samuel Just sj...@redhat.com:
I think I found the bug -- need to whiteout the snapset (or decache
it) upon evict.
http://tracker.ceph.com/issues/12748
-Sam
On Fri,
If you lost 3 disks with size 2 and at least 2 of those disks were in
different hosts, that means you lost data with the default CRUSH rule.
There's nothing you can do but either get those disks back in or recover from
backup.
Jan
On 26 Aug 2015, at 12:18, Andrzej Łukawski alukaw...@interia.pl
Hello,
I am experiencing an issue where OSD services fail due to an unexpected aio
error. This has happened on two different OSD servers, killing two different
OSD daemons.
I am running Ceph Hammer on Debian Wheezy with a backported
Kernel(3.16.0-0.bpo.4-amd64).
Below is the log
On Mon, Feb 23, 2015 at 10:18 PM, Yehuda Sadeh-Weinraub yeh...@redhat.com
wrote:
--
*From: *Shinji Nakamoto shinji.nakam...@mgo.com
*To: *ceph-us...@ceph.com
*Sent: *Friday, February 20, 2015 3:58:39 PM
*Subject: *[ceph-users] RadosGW - multiple dns names
We
Thank you for the answer. I lost 2 disks on the 1st node and 1 disk on the
2nd. I understand it is not possible to recover the data even partially?
Unfortunately those disks are lost forever.
Andrzej
On 2015-08-26 at 12:26, Jan Schermer wrote:
If you lost 3 disks with size 2 and at least 2 of those
Thanks, Luis.
The motivation for using the newer version is to keep up to date with Ceph
development, since we suspect the old radosgw could not be restarted,
possibly due to a library mismatch.
Do you know whether the self-healing feature of ceph is applicable between
different
Hi Wido,
On Wed, Aug 26, 2015 at 10:36 AM, Wido den Hollander w...@42on.com wrote:
I'm sending pool statistics to Graphite
We're doing the same -- stripping invalid chars as needed -- and I
would guess that lots of people have written similar json2graphite
converter scripts for Ceph monitoring.
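A json2graphite converter of that sort can be sketched in a few lines of Python. The metric prefix, the Graphite host/port, and the reliance on `ceph df --format=json` are assumptions for illustration, not a description of either poster's actual script:

```python
import json
import re
import socket
import subprocess
import time

def sanitize(name):
    """Strip characters Graphite can't take in a metric path
    (e.g. the leading period in pool names like '.rgw.root')."""
    return re.sub(r"[^A-Za-z0-9_-]", "_", name)

def pool_stats_to_graphite(prefix="ceph.pools", host="127.0.0.1", port=2003):
    """Read per-pool stats from `ceph df` and ship each one in Graphite's
    plaintext protocol: one newline-terminated 'path value timestamp' line."""
    raw = subprocess.check_output(["ceph", "df", "--format=json"])
    now = int(time.time())
    lines = []
    for pool in json.loads(raw)["pools"]:
        name = sanitize(pool["name"])  # '.rgw.root' becomes '_rgw_root'
        for key, value in pool["stats"].items():
            lines.append("%s.%s.%s %s %d" % (prefix, name, key, value, now))
    with socket.create_connection((host, port)) as sock:
        sock.sendall(("\n".join(lines) + "\n").encode())
```

The sanitizing step is the part Dan alludes to: without it, a pool name such as `.rgw.root` would produce an empty path component in Graphite.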
Most of the data is still there, but you won't be able to just mount it if
it's inconsistent.
I don't use CephFS so someone else could tell you if it's able to repair the
filesystem with some parts missing.
You lost part of the data where the copies were only on the 1 disk in one node
and on
Hi,
We have been running Ceph/radosgw version 0.80.7 (Giant) and have stored
quite a large amount of data in it. We are only using Ceph as an object
store via radosgw. Last week the ceph-radosgw daemon suddenly refused to
start (with logs showing only an initialization timeout error on CentOS 7).
This
Hi,
We got a good deal on the 843T and we are using them in our OpenStack setup
as journals.
They have been running for the last six months with no issues.
When we compared them with the Intel SSDs (I think it was the S3700), they
were a shade slower for our workload and considerably cheaper.
We did not run any
I would say the easiest way would be to leverage all the self-healing of
Ceph: add the new nodes to the old cluster, allow or force all the data to
migrate between nodes, and then take the old ones out.
Well, to be fair, you could probably just install radosgw on another node and
use it as your
That would certainly be something we would use.
QH
On Wed, Aug 26, 2015 at 8:33 AM, Dan van der Ster d...@vanderster.com
wrote:
Hi Wido,
On Wed, Aug 26, 2015 at 10:36 AM, Wido den Hollander w...@42on.com
wrote:
I'm sending pool statistics to Graphite
We're doing the same -- stripping
I tend not to do too much at a time: either upgrade or migrate data. The
actual upgrade process is seamless, so you can just as easily upgrade the
current cluster to Hammer and add/remove nodes on the fly. All of this
is quite straightforward (other than the data migration
itself).
Did you update:
http://tracker.ceph.com/issues/12100
Just a question.
Shinobu
On Wed, Aug 26, 2015 at 8:09 PM, Pontus Lindgren pon...@oderland.se wrote:
Hello,
I am experiencing an issue where OSD services fail due to an unexpected
aio error. This has happened on two different OSD servers
On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander w...@42on.com wrote:
Hi,
It's something which has been 'bugging' me for some time now. Why are
RGW pools prefixed with a period?
I tried setting the root pool to 'rgw.root', but RGW (0.94.1) refuses to
start:
ERROR: region root pool name
There is a cephfs-journal-tool that I believe is present in hammer and
ought to let you get your MDS through replay. Depending on which PGs
were lost you will have holes and/or missing files, in addition to not
being able to find parts of the directory hierarchy (and maybe getting
crashes if you
Hi everyone,
A new version of ceph-deploy has been released. Version 1.5.28
includes the following:
- A fix for a regression introduced in 1.5.27 that prevented
importing GPG keys on CentOS 6 only.
- Will prevent Ceph daemon deployment on nodes that don't have Ceph
installed on them.
- Makes
- Original Message -
From: Aakanksha Pudipeddi-SSI aakanksha...@ssi.samsung.com
To: Jason Dillaman dilla...@redhat.com
Cc: ceph-us...@ceph.com
Sent: Thursday, 27 August, 2015 6:22:45 AM
Subject: Re: [ceph-users] Rados: Undefined symbol error
Hello Jason,
I checked the version of
On Wed, Aug 26, 2015 at 11:52:02AM +0100, Luis Periquito wrote:
On Mon, Feb 23, 2015 at 10:18 PM, Yehuda Sadeh-Weinraub yeh...@redhat.com
wrote:
--
*From: *Shinji Nakamoto shinji.nakam...@mgo.com
*To: *ceph-us...@ceph.com
*Sent: *Friday, February 20,
-Original Message-
From: Dałek, Piotr [mailto:piotr.da...@ts.fujitsu.com]
Sent: Wednesday, August 26, 2015 2:02 AM
To: Sage Weil; Deneau, Tom
Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com
Subject: RE: rados bench object not correct errors on v9.0.3
-Original
Hey guys...
1./ I have a simple question regarding the appearance of degraded PGs.
First, for reference:
a. I am working with 0.94.2
b. I have 32 OSDs distributed in 4 servers, meaning that I have 8
OSDs per server.
c. Our cluster is set with 'osd pool default size = 3' and 'osd
On that note, in regard to watching /cephdays/ for details, the RSS feed
404s!
http://ceph.com/cephdays/feed/
Regards,
Matt.
On 27/08/2015 02:52, Patrick McGarry wrote:
Yeah, we're still working on nailing down a venue for the Melbourne
event (but it looks like 05 Nov is probably the date).
It looks like it only works if nginx is in front of radosgw; nginx
translates absolute URIs and may fix other issues:
https://github.com/docker/distribution/pull/808#issuecomment-135286314
https://github.com/docker/distribution/pull/902
On Mon, Aug 17, 2015 at 1:37 PM, Lorieri lori...@gmail.com
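The proxy setup described above can be sketched roughly as follows. Hostnames, ports, and the civetweb endpoint are placeholders rather than details from the thread; treat this as an illustration of rewriting radosgw's absolute redirect URIs, not a tested registry configuration:

```nginx
server {
    listen 443 ssl;
    server_name registry.example.com;   # placeholder

    location / {
        proxy_pass http://127.0.0.1:7480;        # radosgw (civetweb) endpoint
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Map absolute URIs in radosgw's Location headers back to the proxy:
        proxy_redirect http://127.0.0.1:7480/ /;
    }
}
```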