On Mon, Jun 29, 2015 at 7:55 AM, Dan van der Ster d...@vanderster.com wrote:
On Mon, Jun 29, 2015 at 8:31 AM, Dałek, Piotr
piotr.da...@ts.fujitsu.com wrote:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Erik G.
I'm confused. Changing the kernel you're using is changing the
apparent memory usage of a userspace application (Ceph)?
Are you changing the compiler when you change kernel versions?
-Greg
On Mon, Jun 29, 2015 at 3:35 AM, Somnath Roy somnath@sandisk.com wrote:
Some more data points:
1. I
archives
by Gregory Farnum.
- Every casual explanation I found presumes that an omap (a set
of K/V) is associated with an object. But it is not physically in
the object. So, is there a free-standing omap (set of keys)?
Or an omap associated with something else, like a pool?
- Greg
On Sat, Jun 20, 2015 at 11:18 AM, Marcel Lauhoff m...@irq0.org wrote:
Hi,
thanks for the comments!
Gregory Farnum g...@gregs42.com writes:
On Thu, May 28, 2015 at 3:01 AM, Marcel Lauhoff m...@irq0.org wrote:
Gregory Farnum g...@gregs42.com writes:
Do you have a shorter summary than
On Mon, Jun 22, 2015 at 10:18 PM, Bill Sharer bsha...@sharerland.com wrote:
I'm currently running giant on gentoo and was wondering about the stability
of the api for mapping MDS files to rados objects. The cephfs binary
complains that it is obsolete for getting layout information, but it also
On Mon, Jun 22, 2015 at 9:45 PM, Barclay Jameson
almightybe...@gmail.com wrote:
Has anyone seen this?
Can you describe the kernel you're using, the workload you were
running, the Ceph cluster you're running against, etc?
Jun 22 15:09:27 node kernel: Call Trace:
Jun 22 15:09:27 node kernel:
On Mon, Jun 22, 2015 at 10:34 PM, Loic Dachary l...@dachary.org wrote:
Hi Tom,
On 22/06/2015 17:10, Deneau, Tom wrote:
If one has a cluster with some nodes that can run with the ISA plugin
and some that cannot, is there a way to define a pool such that the
ISA-capable nodes can use the ISA
Did we end up creating a ticket for this? I saw this on one FS run as well.
-Greg
On Fri, Jun 19, 2015 at 8:03 AM, Gregory Farnum gfar...@redhat.com wrote:
Not that I'm aware of. I don't mess with services at all in the log rotation
stuff I did (we already disabled generic log rotation when tests were running).
-Greg
On Jun 19, 2015, at 12:13 AM, David Zafman dzaf...@redhat.com wrote:
Greg,
Have you changed anything (log rotation related?)
Can you submit this as a Github pull request?
We can take patches to userspace code over the mailing list but
they're more likely to get lost if you go that route.
-Greg
On Mon, Jun 15, 2015 at 4:53 PM, James Devine fxmul...@gmail.com wrote:
Change config fopen to binary mode, allowing LF and
On Mon, Jun 15, 2015 at 9:37 AM, Yuri Weinstein ywein...@redhat.com wrote:
QE validation is almost completed (there are a couple of jobs that are still
running)
All status details were summarized in http://tracker.ceph.com/issues/11090
Highlights (by suite/issue):
rados #11914 needs Sam's
On Mon, Jun 15, 2015 at 8:03 AM, John Spray john.sp...@redhat.com wrote:
On 15/06/2015 14:52, Sage Weil wrote:
I seem to remember having a short conversation about something like this a
few CDS's back... although I think it was 'rados top'. IIRC the basic
idea we had was for each OSD to
On Thu, Jun 11, 2015 at 12:33 PM, Robert LeBlanc rob...@leblancnet.us wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
One feature we would like is an rbd top command that would be like
top, but show usage of RBD volumes so that we can quickly identify
high demand RBDs.
Since I
On Tue, Jun 9, 2015 at 7:52 PM, Shishir Gowda shishir.go...@sandisk.com wrote:
Hi All,
We have uploaded the blueprint for the enhancements we are proposing for Ceph
tiering functionality for the Jewel release @
http://tracker.ceph.com/projects/ceph/wiki/Tiering-enhacement
Soliciting
On Thu, Jun 4, 2015 at 5:36 PM, Jason Dillaman dilla...@redhat.com wrote:
...Actually, doesn't *not* forcing a coordinated move from one object
set to another mean that you don't actually have an ordering guarantee
across tags if you replay the journal objects in order?
The ordering
On Thu, May 28, 2015 at 3:01 AM, Marcel Lauhoff m...@irq0.org wrote:
Gregory Farnum g...@gregs42.com writes:
On Wed, May 27, 2015 at 1:39 AM, Marcel Lauhoff m...@irq0.org wrote:
Hi,
I wrote a prototype for an OSD-based object stub feature. An object stub
being an object with its data
On Thu, Jun 4, 2015 at 8:08 AM, Jason Dillaman dilla...@redhat.com wrote:
A successful append will indicate whether or not the journal is now full
(larger than the max object size), indicating to the client that a new
journal object should be used. If the journal is too large, an error
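The append-until-full protocol described in this snippet can be modeled with a toy sketch. The class and method names below are hypothetical; the thread only outlines the design (a successful append reports whether the journal object has grown past the maximum object size, so the client knows to switch to a new object):

```python
class JournalObject:
    """Toy model of the proposed journal-object append check.

    Hypothetical names; not the real librbd journaling class.
    """

    def __init__(self, max_object_size: int):
        self.max_object_size = max_object_size
        self.size = 0

    def append(self, entry: bytes) -> bool:
        """Append an entry; return True if the object is now 'full'
        (larger than max_object_size), signaling the client to start
        a new journal object."""
        self.size += len(entry)
        return self.size > self.max_object_size
```

For example, with `max_object_size=10`, appending 4 bytes reports not-full, and a further 7-byte append reports full, after which the client would allocate the next journal object.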
On Wed, Jun 3, 2015 at 2:36 PM, Ken Dreyer kdre...@redhat.com wrote:
On 06/03/2015 02:45 PM, Sage Weil wrote:
Sounds good to me. It could (should?) even error out if no init system is
specified? Otherwise someone will likely be in for a surprise.
I was picturing that we'd just autodetect
On Wed, Jun 3, 2015 at 9:13 AM, Jason Dillaman dilla...@redhat.com wrote:
In contrast to the current journal code used by CephFS, the new journal
code will use sequence numbers to identify journal entries, instead of
offsets within the journal.
Am I misremembering what actually got done
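The idea quoted above — identifying journal entries by sequence number instead of byte offset within the journal — can be illustrated with a minimal sketch (names are hypothetical, not the actual journal code):

```python
import itertools


class SequencedJournal:
    """Toy illustration of sequence-number entry identification.

    Entries are addressed by a monotonically increasing sequence
    number, independent of where their bytes land in any object.
    """

    def __init__(self):
        self._next_seq = itertools.count()
        self.entries = {}  # seq -> payload

    def append(self, payload: bytes) -> int:
        """Store an entry and return its sequence number."""
        seq = next(self._next_seq)
        self.entries[seq] = payload
        return seq
```

The benefit being sketched: a replayer can refer to "entry 7" stably even if entries are repacked or spread across objects, which an offset-based scheme cannot.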
On Wed, Jun 3, 2015 at 3:44 PM, Sage Weil s...@newdream.net wrote:
On Mon, 1 Jun 2015, Gregory Farnum wrote:
On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
pvonstamw...@us.fujitsu.com wrote:
On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, May 29, 2015 at 2
On Tue, Jun 2, 2015 at 8:11 AM, Jason Dillaman dilla...@redhat.com wrote:
I am posting to get wider review/feedback on this draft design. In support
of the RBD mirroring feature [1], a new client-side journaling class will be
developed for use by librbd. The implementation is designed to
On Mon, Jun 1, 2015 at 12:39 PM, Sage Weil s...@newdream.net wrote:
I have a pull request posted at
https://github.com/ceph/ceph/pull/4809
that updates the mds cap parser and defines a check method. Please take
a look and see if this makes sense.
Couple comments on the internal
On Mon, Jun 1, 2015 at 6:39 PM, Paul Von-Stamwitz
pvonstamw...@us.fujitsu.com wrote:
On Fri, May 29, 2015 at 4:18 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, May 29, 2015 at 2:47 PM, Samuel Just sj...@redhat.com wrote:
Many people have reported that they need to lower the osd recovery
On Sun, May 31, 2015 at 1:40 PM, Ilya Dryomov idryo...@gmail.com wrote:
On Sun, May 31, 2015 at 10:54 PM, Gregory Farnum g...@gregs42.com wrote:
We are getting this error in what looks like everything that specifies
the testing kernel. (That turns out to be almost all of the FS tests
- Original Message -
From: Yuri Weinstein ywein...@redhat.com
To: Loic Dachary l...@dachary.org
Cc: Ceph Development ceph-devel@vger.kernel.org, Abhishek L
abhishek.lekshma...@gmail.com, Alfredo Deza
ad...@redhat.com, Loic Dachary ldach...@redhat.com, Gregory Farnum
gfar
We are getting this error in what looks like everything that specifies
the testing kernel. (That turns out to be almost all of the FS tests
and a surprising number of the non-rados runs; e.g. rgw.) I've checked
that the testing branch of ceph-client.git still exists and when
looking at the
On Sun, May 31, 2015 at 3:19 PM, Sage Weil s...@newdream.net wrote:
On Sun, 31 May 2015, Gregory Farnum wrote:
On Sun, May 31, 2015 at 1:40 PM, Ilya Dryomov idryo...@gmail.com wrote:
On Sun, May 31, 2015 at 10:54 PM, Gregory Farnum g...@gregs42.com wrote:
We are getting this error in what
On Fri, May 29, 2015 at 2:47 PM, Samuel Just sj...@redhat.com wrote:
Many people have reported that they need to lower the osd recovery config
options to minimize the impact of recovery on client io. We are talking
about changing the defaults as follows:
osd_max_backfills to 1 (from 10)
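The snippet is truncated before the full list of proposed defaults, so only the one option it names is shown here. A minimal sketch of how such a lowered default would look in ceph.conf:

```ini
[osd]
# proposed lower default to reduce recovery impact on client io
# (old default was 10)
osd_max_backfills = 1
```

The same value can be applied to a running cluster without a restart via `ceph tell osd.* injectargs '--osd-max-backfills 1'`.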
On Thu, May 28, 2015 at 4:09 PM, Deneau, Tom tom.den...@amd.com wrote:
I've noticed that
* with a single node cluster with 4 osds
* and running rados bench rand on that same node so no network traffic
* with a number of objects small enough so that everything is in the cache
so no
On Thu, May 28, 2015 at 2:32 AM, Loic Dachary l...@dachary.org wrote:
Hi,
This morning I'll schedule a job with priority 50, assuming nobody will get
mad at me for using such a low priority number, because the associated bug fix blocks
the release of v0.94.2 (http://tracker.ceph.com/issues/11546)
On Thu, May 28, 2015 at 4:50 PM, Deneau, Tom tom.den...@amd.com wrote:
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Thursday, May 28, 2015 6:18 PM
To: Deneau, Tom
Cc: ceph-devel
Subject: Re: rados bench throughput with no disk or network activity
On Thu
On Thu, May 28, 2015 at 3:42 AM, John Spray john.sp...@redhat.com wrote:
On 28/05/2015 06:37, Gregory Farnum wrote:
On Tue, May 12, 2015 at 5:42 PM, Josh Durgin jdur...@redhat.com wrote:
Parallelism
^^^
Mirroring many images is embarrassingly parallel. A simple unit of
work
, but I think this is one
thing SMB got right and why I prefer Samba over NFS for multi-tenant
environments.
-
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Wed, May 27, 2015 at 6:42 PM, Gregory Farnum wrote:
On Wed, May 27, 2015 at 5:37
On Wed, May 27, 2015 at 2:12 PM, Yuri Weinstein ywein...@redhat.com wrote:
QE validation status.
All detailed information is summarized in http://tracker.ceph.com/issues/11492
Team leads, please review for a go/no-go decision.
Issues to be considered:
rados - passed ~2.8K jobs, listed issues
On Wed, May 27, 2015 at 5:37 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 May 2015, Gregory Farnum wrote:
On Wed, May 27, 2015 at 4:59 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 May 2015, Gregory Farnum wrote:
On Wed, May 27, 2015 at 4:07 PM, Sage Weil sw...@redhat.com wrote
Thread necromancy! (Is it still necromancy if it's been waiting in my
inbox the whole time?)
On Tue, Apr 7, 2015 at 5:54 AM, John Spray john.sp...@redhat.com wrote:
Hi all,
[this is a re-send of a mail from yesterday that didn't make it, probably
due to an attachment]
It has always annoyed
On Wed, May 27, 2015 at 4:59 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 May 2015, Gregory Farnum wrote:
On Wed, May 27, 2015 at 4:07 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 May 2015, Gregory Farnum wrote:
On Wed, May 27, 2015 at 3:21 PM, Sage Weil sw...@redhat.com wrote
On Tue, May 12, 2015 at 5:42 PM, Josh Durgin jdur...@redhat.com wrote:
We've talked about this a bit at ceph developer summits, but haven't
gone through several parts of the design thoroughly. I'd like to post
this to a wider audience and get feedback on this draft of a design.
The journal
On Wed, May 27, 2015 at 1:39 AM, Marcel Lauhoff m...@irq0.org wrote:
Hi,
I wrote a prototype for an OSD-based object stub feature. An object stub
being an object with its data moved /elsewhere/. I hope to get some
feedback, especially whether I'm on the right path here and if it
is a
On Wed, May 27, 2015 at 4:07 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 May 2015, Gregory Farnum wrote:
On Wed, May 27, 2015 at 3:21 PM, Sage Weil sw...@redhat.com wrote:
On Wed, 27 May 2015, Gregory Farnum wrote:
I was just talking to Simo about the longer-term kerberos auth goals
On Wed, May 27, 2015 at 2:44 PM, Sage Weil sw...@redhat.com wrote:
On Tue, 26 May 2015, Gregory Farnum wrote:
Basically I'm still stuck on how any of this lets us lock a user into
a subtree while letting them do what they want within it. I'm not sure
how/if NFS solves that problem
On Tue, May 26, 2015 at 9:28 AM, Sage Weil sw...@redhat.com wrote:
On Tue, 26 May 2015, Gregory Farnum wrote:
On Fri, May 22, 2015 at 3:52 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 22 May 2015, Gregory Farnum wrote:
On Fri, May 22, 2015 at 3:18 PM, Sage Weil sw...@redhat.com wrote
On Fri, May 22, 2015 at 3:52 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 22 May 2015, Gregory Farnum wrote:
On Fri, May 22, 2015 at 3:18 PM, Sage Weil sw...@redhat.com wrote:
What would it mean for a user who doesn't have no_root_squash to have
access to uid 0? Why should we allow random
On Tue, May 26, 2015 at 4:57 AM, John Spray john.sp...@redhat.com wrote:
On 26/05/2015 07:55, Yan, Zheng wrote:
the reason for slow file creations is that bonnie++ calls fsync(2) after
each creat(2). fsync() waits for safe replies to the create requests. The MDS
sends the safe reply when the log event
On Tue, May 26, 2015 at 4:12 PM, Sage Weil sw...@redhat.com wrote:
On Tue, 26 May 2015, Gregory Farnum wrote:
On Tue, May 26, 2015 at 3:17 PM, Sage Weil sw...@redhat.com wrote:
On Tue, 26 May 2015, Gregory Farnum wrote:
That makes sense to me. I suggest
- the MDS decides
Yep, looks good.
-Greg
On Fri, May 22, 2015 at 1:04 PM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The next firefly release as found at
https://github.com/ceph/ceph/tree/firefly
(68211f695941ee128eb9a7fd0d80b615c0ded6cf) passed the fs suite
(http://tracker.ceph.com/issues/11090#fs).
On Fri, May 22, 2015 at 2:35 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 22 May 2015, John Spray wrote:
On 21/05/2015 01:14, Sage Weil wrote:
Looking at the MDSAuthCaps again, I think there are a few things we might
need to clean up first. The way it is currently structured, the idea is
On Fri, May 22, 2015 at 3:18 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 22 May 2015, Gregory Farnum wrote:
The root_squash option clearly belongs in spec, and Nistha's first patch
adds it there. What about the other NFS options.. should be mirror
those
too?
root_squash
On Wed, May 20, 2015 at 11:15 AM, Barclay Jameson
almightybe...@gmail.com wrote:
I am trying to find out why bonnie++ is choking at creating files
sequentially and deleting them sequentially on cephfs.
I enabled mds debug for about 30 seconds and I find a bunch of lines
like the following:
On Mon, May 11, 2015 at 5:21 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The next hammer release as found at https://github.com/ceph/ceph/tree/hammer
passed the fs suite (http://tracker.ceph.com/issues/11492#fs), which also
includes your last minute addition
On Thu, May 7, 2015 at 6:29 PM, Zhou, Yuan yuan.z...@intel.com wrote:
Ceph uses the CRUSH algorithm to provide the mapping of objects to OSD servers.
This is great for clients since they can talk to these OSDs directly.
However there are some scenarios where the application needs to access the
On Fri, May 1, 2015 at 7:54 AM, Steve Capper steve.cap...@linaro.org wrote:
Hello,
Whilst testing Ceph 0.94.1 on 64-bit ARM hardware, I noticed that
switching the kernel PAGE_SIZE from 4KB to 64KB caused an increase by
a factor of ~6 in the total amount of data written to disk (according
to
On Thu, Apr 30, 2015 at 2:57 PM, Somnath Roy somnath@sandisk.com wrote:
Greg,
Probably not supported right now, but, wanted to confirm if there is any way
we can use Ceph cache tier for only writes and reads are forwarded to the
backend erasure coded pool.
Anything like that would be
Yes.
On Thu, Apr 23, 2015 at 11:35 AM, Deneau, Tom tom.den...@amd.com wrote:
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Thursday, April 23, 2015 12:37 PM
To: Deneau, Tom
Cc: ceph-devel
Subject: Re: ceph tell osd bench
On Thu, Apr 23, 2015 at 6:58 AM
On Thu, Apr 23, 2015 at 6:58 AM, Deneau, Tom tom.den...@amd.com wrote:
While running ceph tell osd bench and playing around with the total_bytes and
block_size parameters,
I have noticed that if the total_bytes written is less than about 0.5G, the
bytes/sec is much higher.
Why is that?
On Thu, Apr 16, 2015 at 11:58 PM, Amon Ott a@m-privacy.de wrote:
Am 17.04.2015 um 03:01 schrieb Gregory Farnum:
This looks good to me, but we need an explicit sign-off from you for
it. If you can submit it as a PR on Github that's easiest for us, but
if not can you send it in git email
On Wed, Apr 22, 2015 at 2:57 PM, Ken Dreyer kdre...@redhat.com wrote:
I could really use some eyes on the systemd change proposed here:
http://tracker.ceph.com/issues/11344
Specifically, on bullet #4 there, should we have a single
ceph-mon.service (implying that users should only run one
We’ve been hard at work on CephFS over the last year since Firefly was
released, and with Hammer coming out it seemed like a good time to go over some
of the big developments users will find interesting. Much of this is cribbed
from John’s Linux Vault talk
On Thu, Apr 16, 2015 at 4:16 PM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
On 17/04/2015 00:44, Gregory Farnum wrote:
On Wed, Apr 15, 2015 at 2:37 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the fs
On Thu, Apr 9, 2015 at 11:38 PM, 池信泽 xmdx...@gmail.com wrote:
hi, all:
Now, ceph must receive all ack messages from the replicas and only then
reply ack to the client. What about replying to the client directly once
the primary has received some of them? Below is the request
trace among OSDs. Primary wait
This looks good to me, but we need an explicit sign-off from you for
it. If you can submit it as a PR on Github that's easiest for us, but
if not can you send it in git email patch form? :)
-Greg
On Wed, Apr 8, 2015 at 2:58 AM, Amon Ott a@m-privacy.de wrote:
Hello Ceph!
The Ceph init
On Sat, Apr 11, 2015 at 8:42 PM, Mark Kirkwood
mark.kirkw...@catalyst.net.nz wrote:
Hi,
Building without --enable-debug produces:
ceph_fuse.cc: In member function ‘virtual void* main(int, const char**,
const char**)::RemountTest::entry()’:
ceph_fuse.cc:146:15: warning: ignoring return value
On Wed, Apr 15, 2015 at 2:37 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the fs suite (http://tracker.ceph.com/issues/11153#fs). Do you think
it is ready for QE to start their own round of testing ?
On Thu, Apr 16, 2015 at 5:38 PM, Sage Weil s...@newdream.net wrote:
On Thu, 16 Apr 2015, Mark Nelson wrote:
On 04/16/2015 01:17 AM, Somnath Roy wrote:
Here is the data with omap separated to another SSD and after 1000GB of fio
writes (same profile)..
omap writes:
-
On Mon, Apr 13, 2015 at 10:28 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
On 13/04/2015 19:04, Gregory Farnum wrote:
On Mon, Apr 13, 2015 at 9:48 AM, Ken Dreyer kdre...@redhat.com wrote:
A while ago this came up in #ceph-devel and I wanted to bring it to a
wider audience.
Should we
On Mon, Apr 13, 2015 at 9:48 AM, Ken Dreyer kdre...@redhat.com wrote:
A while ago this came up in #ceph-devel and I wanted to bring it to a
wider audience.
Should we stop the convention of adding the backport: tags in Git?
Loic brought up the point that this data is essentially immutable
On Wed, Apr 8, 2015 at 9:49 AM, Sage Weil s...@newdream.net wrote:
On Wed, 8 Apr 2015, Haomai Wang wrote:
On Wed, Apr 8, 2015 at 10:58 AM, Sage Weil s...@newdream.net wrote:
On Tue, 7 Apr 2015, Mark Nelson wrote:
On 04/07/2015 02:16 PM, Mark Nelson wrote:
On 04/07/2015 09:57 AM, Mark
On Wed, Apr 8, 2015 at 3:38 PM, Deneau, Tom tom.den...@amd.com wrote:
With 0.93, I tried
ceph tell 'osd.*' injectargs '--ms_crc_data=false' '--ms_crc_header=false'
and saw the changes reflected in ceph admin-daemon
But having done that, perf top still shows time being spent in crc32
On Mon, Apr 6, 2015 at 2:21 AM, Ta Ba Tuan tua...@vccloud.vn wrote:
Hi all,
I once set up a cache pool for my pool,
but had some problems with the cache pool running, so I removed the cache pool
from my Ceph cluster.
The DATA pool currently doesn't use a cache pool, but the lfor setting still
On Mon, Apr 6, 2015 at 9:09 AM, kernel neophyte
neophyte.hacker...@gmail.com wrote:
On Mon, Apr 6, 2015 at 9:07 AM, Sage Weil s...@newdream.net wrote:
On Mon, 6 Apr 2015, kernel neophyte wrote:
On Sun, Apr 5, 2015 at 3:07 AM, Loic Dachary l...@dachary.org wrote:
Hi,
My guess is that it's
are
noticeably slow.
-Greg
On Fri, Mar 27, 2015 at 4:50 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, Mar 27, 2015 at 2:46 PM, Barclay Jameson
almightybe...@gmail.com wrote:
Yes it's the exact same hardware except for the MDS server (although I
tried using the MDS on the old node).
I
On Mon, Mar 30, 2015 at 1:01 PM, Sage Weil s...@newdream.net wrote:
Resurrecting this thread since we need to make a decision soon. The
opinions broke down like so:
A - me
B - john
C - alex
D - loic (and drop release names), yehuda, ilya
openstack - dmsimard
So, most people seem to
So this is exactly the same test you ran previously, but now it's on
faster hardware and the test is slower?
Do you have more data in the test cluster? One obvious possibility is
that previously you were working entirely in the MDS' cache, but now
you've got more dentries and so it's kicking data
there are and what they have permissions on and check; otherwise
you'll have to figure it out from the client side.
-Greg
Thanks for the input!
On Fri, Mar 27, 2015 at 3:04 PM, Gregory Farnum g...@gregs42.com wrote:
So this is exactly the same test you ran previously, but now it's on
faster hardware
On Tue, Mar 24, 2015 at 4:26 AM, Alistair Israel aisr...@gmail.com wrote:
Thank you Loïc and Sage for the encouragement!
Yes, we'll look into CMake if it simplifies managing the build.
However, a stretch goal is to possibly have the same autotools build
scripts generate .exe and .dll files
:
https://www.dropbox.com/s/uvmexh9impd3f3c/forgreg.tar.gz?dl=0
In this run, only client 1 starts doing the extra lookups.
On Fri, Jan 16, 2015 at 10:43 AM, Gregory Farnum g...@gregs42.com wrote:
On Fri, Jan 16, 2015 at 10:34 AM, Michael Sevilla
mikesevil...@gmail.com wrote:
On Thu, Jan 15, 2015
On Mon, Mar 23, 2015 at 6:21 AM, Olivier Bonvalet ceph.l...@daevel.fr wrote:
Hi,
I'm still trying to find why there is much more write operations on
filestore since Emperor/Firefly than from Dumpling.
Do you have any history around this? It doesn't sound familiar,
although I bet it's because
On Sat, Mar 21, 2015 at 10:46 AM, shylesh kumar shylesh.mo...@gmail.com wrote:
Hi ,
I was going through this simplified crush algorithm given in ceph website.
def crush(pg):
    all_osds = ['osd.0', 'osd.1', 'osd.2', ...]
    result = []
    # size is the number of copies; primary+replicas
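The quoted pseudocode is cut off by the archive. A runnable completion of the same simplified idea — deterministically hashing the PG (plus a retry counter) to pick `size` distinct OSDs — might look like the following. This is a toy stand-in, not the real CRUSH algorithm, and the retry-counter scheme is an assumption to make the fragment self-contained:

```python
import hashlib


def crush(pg: str, all_osds: list, size: int) -> list:
    """Toy simplified 'crush': map a PG name to `size` distinct OSDs.

    Deterministic (same pg -> same result) and collision-avoiding,
    which is the property the simplified example illustrates.
    """
    result = []
    r = 0  # replica rank / retry counter
    while len(result) < size:
        # hash the pg together with the retry counter
        h = int(hashlib.sha1(f"{pg}:{r}".encode()).hexdigest(), 16)
        candidate = all_osds[h % len(all_osds)]
        r += 1
        if candidate not in result:  # each OSD may be chosen only once
            result.append(candidate)
    return result
```

Running it twice with the same PG name returns the same ordered list of distinct OSDs, which is the deterministic-placement property CRUSH provides without any central lookup table.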
On Mon, Mar 23, 2015 at 7:20 AM, Loic Dachary l...@dachary.org wrote:
Hi,
When scheduling suites that are low priority (giant for instance at
http://pulpito.ceph.com/loic-2015-03-23_01:09:31-rados-giant---basic-multi/),
the --priority 1000 is set because (if I remember correctly) this is
/.
Thanks,
Matt
Matt Conner
keepertechnology
matt.con...@keepertech.com
(240) 461-2657
On Wed, Mar 18, 2015 at 4:11 PM, Gregory Farnum g...@gregs42.com wrote:
On Wed, Mar 18, 2015 at 12:59 PM, Sage Weil s...@newdream.net wrote:
On Wed, 18 Mar 2015, Matt Conner wrote:
I'm working with a 6 rack
On Wed, Mar 18, 2015 at 12:59 PM, Sage Weil s...@newdream.net wrote:
On Wed, 18 Mar 2015, Matt Conner wrote:
I'm working with a 6 rack, 18 server (3 racks of 2 servers , 3 racks
of 4 servers), 640 OSD cluster and have run into an issue when failing
a storage server or rack where the OSDs are
On Tue, Mar 17, 2015 at 6:55 PM, David Zafman dzaf...@redhat.com wrote:
During upgrade testing an error occurred because ceph-objectstore-tool found
during import on a Firefly node the compat_features from a export from
Hammer.
There are 2 new feature bits set as shown in the error message:
Yeah. If this has gotten easier it's fine, but asphyxiate required a
*lot* of tooling that I'd rather we not require as developer build
deps. I'd imagine we can just produce them as part of the Jenkins
build procedure or something?
-Greg
On Tue, Mar 17, 2015 at 12:27 PM, David Zafman
On Tue, Mar 17, 2015 at 6:46 AM, Sage Weil s...@newdream.net wrote:
On Tue, 17 Mar 2015, Ning Yao wrote:
2015-03-16 22:06 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
On Mon, Mar 16, 2015 at 10:04 PM, Xinze Chi xmdx...@gmail.com wrote:
How is the write request processed in the primary?
Thanks.
On Mon, Mar 9, 2015 at 8:42 AM, Dan van der Ster d...@vanderster.com wrote:
Hi Sage,
On Tue, Feb 10, 2015 at 2:51 AM, Sage Weil s...@newdream.net wrote:
On Mon, 9 Feb 2015, David McBride wrote:
On 09/02/15 15:31, Gregory Farnum wrote:
So, memory usage of an OSD is usually linear
On Wed, Mar 4, 2015 at 7:03 AM, Csaba Henk ch...@redhat.com wrote:
- Original Message -
From: Danny Al-Gaaf danny.al-g...@bisect.de
To: Csaba Henk ch...@redhat.com, OpenStack Development Mailing List
(not for usage questions)
openstack-...@lists.openstack.org
Cc:
On Sun, Mar 1, 2015 at 2:18 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
On 01/03/2015 06:00, Gregory Farnum wrote:
On Sat, Feb 28, 2015 at 6:18 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The fs teuthology suite for the next firefly release as found in
https://github.com/ceph
On Sat, Feb 28, 2015 at 6:18 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The fs teuthology suite for the next firefly release as found in
https://github.com/ceph/ceph/commits/firefly-backports came back with three
failures : http://tracker.ceph.com/issues/10641#fs. Do you think it is
stress test for lossless_peer_reuse policy, it
can reproduce it easily
On Wed, Feb 25, 2015 at 2:27 AM, Gregory Farnum gfar...@redhat.com wrote:
On Feb 24, 2015, at 7:18 AM, Haomai Wang haomaiw...@gmail.com wrote:
On Tue, Feb 24, 2015 at 12:04 AM, Greg Farnum gfar...@redhat.com wrote
On Feb 24, 2015, at 7:18 AM, Haomai Wang haomaiw...@gmail.com wrote:
On Tue, Feb 24, 2015 at 12:04 AM, Greg Farnum gfar...@redhat.com wrote:
On Feb 12, 2015, at 9:17 PM, Haomai Wang haomaiw...@gmail.com wrote:
On Fri, Feb 13, 2015 at 1:26 AM, Greg Farnum gfar...@redhat.com wrote:
Sorry
- Original Message -
From: John Spray john.sp...@redhat.com
To: ceph-devel@vger.kernel.org, z...@redhat.com, Gregory Farnum
gfar...@redhat.com
Sent: Thursday, February 19, 2015 2:23:21 PM
Subject: ceph-fuse remount issues
Background: a while ago, we found (#10277) that existing
On Tue, Feb 17, 2015 at 9:37 AM, Mark Nelson mnel...@redhat.com wrote:
Hi All,
I wrote up a short document describing some tests I ran recently to look at
how SSD backed OSD performance has changed across our LTS releases. This is
just looking at RADOS performance and not RBD or RGW. It also
Clocks in the labs have seemed a lot less well-synced lately than they
had been previously. :( I think there was some issue and then a change
to the NTP configuration, but I'm not clear on the details.
-Greg
On Fri, Feb 20, 2015 at 3:08 PM, David Zafman dzaf...@redhat.com wrote:
On 2 of my
On Fri, Feb 13, 2015 at 5:05 AM, Sage Weil sw...@redhat.com wrote:
Got this from JJ:
The SA expanded on this by stating that there are basically three main
scenarios here:
1) We trust the UID/GID in a controlled environment. In which case we
can safely rely on the POSIX permissions. As long
On Fri, Feb 13, 2015 at 3:35 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 13 Feb 2015, Gregory Farnum wrote:
On Fri, Feb 13, 2015 at 5:05 AM, Sage Weil sw...@redhat.com wrote:
Got this from JJ:
The SA expanded on this by stating that there are basically three main
scenarios here:
1
On Fri, Feb 13, 2015 at 10:34 PM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
I'm curious to know how you handle the flow of mails from QA runs. Here is a
wild guess:
* from time to time check that the nightlies run the suites that should be run
Uh, I guess?
* read the ceph-qa reports
On Thu, Feb 12, 2015 at 12:48 AM, GuangYang yguan...@outlook.com wrote:
Thanks Sage and Greg for the response.
2) having a separate switchover point (besides the code upgrade) which
enables all the disk change bits and which doesn't allow you to roll
back.
Let me give two examples which
On Wed, Feb 11, 2015 at 8:42 AM, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
Yesterday the dumpling giant backport integration branches were approved by
Yehuda, Sam and Josh and were handed over to QE. An interesting discussion
followed and it revealed that my understanding of the
On Wed, Feb 11, 2015 at 9:33 AM, Alyona Kiseleva
akisely...@mirantis.com wrote:
Hi,
I would like to propose something.
There are a lot of perf counters in different places in the code, but most
of them are undocumented. I found only one commented counter in OSD.cc,
but not for all
On Wed, Feb 11, 2015 at 4:09 AM, GuangYang yguan...@outlook.com wrote:
Hi ceph-devel,
Recently we tried the upgrade from Firefly to Giant and it went pretty
smoothly; however, the problem is that it does not support rollback, and it
seems like that is by design. For example, there is new