Hi,
The make check bot [1] that executes run-make-check.sh [2] on pull requests and
reports results as comments [3] is experiencing problems. It may be a hardware
issue and the bot is paused while the issue is investigated [4] to avoid
sending confusing false negatives. In the meantime the
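For illustration, a minimal sketch of the reporting step such a bot performs, posting a pass/fail comment on a pull request through the GitHub API (the repo, PR number, token handling and message text are assumptions, not the actual bot's code):

import requests

def report_make_check(repo, pr_number, passed, log_url, token):
    # Post a run-make-check.sh result as a comment on a pull request.
    body = ("make check: OK" if passed
            else "make check: FAILED, see %s" % log_url)
    resp = requests.post(
        "https://api.github.com/repos/%s/issues/%d/comments" % (repo, pr_number),
        json={"body": body},
        headers={"Authorization": "token %s" % token},
    )
    resp.raise_for_status()

# e.g. report_make_check("ceph/ceph", 1234, False, "http://jenkins.example/log", tok)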
8AM PST as usual! IE in 10 minutes. :D Forgot to send this out earlier,
sorry! Anyway, the new hotness is newstore, and there's been a lot of
testing going on!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To
Nuts, this got out before I hit cancel. This should be 04/15/2015. :)
On 04/15/2015 09:51 AM, Mark Nelson wrote:
8AM PST as usual! IE in 10 minutes. :D Forgot to send this out earlier,
sorry! Anyway, the new hotness is newstore, and there's been a lot of
testing going on!
Here's the links:
Hello Guys,
Swift -- besides implementing the OpenStack Object Storage API v1 -- provides
a bunch of huge extensions like SLO/DLO (static/dynamic large objects), bulk
operations (inc. server-side archive extraction), staticweb and many more.
Full list is available in Swift's source code [1]. I'm
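To make concrete what such a middleware looks like, a minimal WSGI sketch in the Swift style (the class name, the trigger and the factory wiring are invented for illustration, not code from the Swift tree):

# Wraps a WSGI app and short-circuits one kind of request,
# passing everything else through unchanged.
class BulkMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if environ.get('QUERY_STRING') == 'bulk-delete':
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'bulk delete would be handled here\n']
        return self.app(environ, start_response)

def filter_factory(global_conf, **local_conf):
    # Swift wires middleware into the proxy pipeline via
    # paste.deploy filter factories shaped like this.
    def factory(app):
        return BulkMiddleware(app)
    return factory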
Haomai,
Yes, separating out the kvdb directory is the path I will take to identify the
cause of the WA.
I have written this tool on top of these disk counters. I can share it, but
you need a SanDisk Optimus Echo (or Max) drive to make it work :-)
Thanks & Regards
Somnath
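The arithmetic behind such a tool is simple; a hedged sketch, assuming the drive exposes lifetime host-write and NAND-write counters (the names and numbers below are made up):

def write_amplification(host_bytes_written, nand_bytes_written):
    # WA = bytes the flash actually wrote / bytes the host asked it to write.
    return nand_bytes_written / float(host_bytes_written)

# Hypothetical counter deltas over a 1 TB test run:
host = 1 * 1024 ** 4        # 1 TiB written by the host
nand = 2.7 * 1024 ** 4      # NAND writes reported by the drive (made up)
print("WA = %.2f" % write_amplification(host, nand))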
- Original Message -
From: Radoslaw Zarzynski rzarzyn...@mirantis.com
To: Ceph Development ceph-devel@vger.kernel.org
Sent: Wednesday, April 15, 2015 8:31:22 AM
Subject: [rgw][Swift API] Reusing WSGI middleware of Swift
Hello Guys,
Swift -- besides implementing the OpenStack
- Original Message -
From: Loic Dachary l...@dachary.org
To: Yehuda Sadeh yeh...@redhat.com
Cc: Ceph Development ceph-devel@vger.kernel.org, Abhishek L
abhishek.lekshma...@gmail.com
Sent: Wednesday, April 15, 2015 2:43:12 AM
Subject: radosgw and the next giant release v0.87.2
Hello,
there is a slow memory leak which I assume is present in all Ceph versions,
but it only becomes visible over long time spans and on large clusters. It
looks like the lower a monitor is placed in the quorum hierarchy, the higher
the leak is:
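One simple way to put numbers on this is to sample each ceph-mon's resident set size from /proc over time; a sketch (the pid and the interval are placeholders):

import time

def mon_rss_kb(pid):
    # Read a ceph-mon's resident set size (VmRSS, in kB) from /proc.
    with open('/proc/%d/status' % pid) as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])

pid = 1234  # hypothetical ceph-mon pid
while True:
    # A steadily growing series on the lower-ranked monitors would
    # match the behaviour described above.
    print(time.time(), mon_rss_kb(pid))
    time.sleep(3600)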
Hi
Ken Dreyer kdre...@redhat.com writes:
On 04/14/2015 09:21 AM, Sage Weil wrote:
I think we still want them to be static across a distro; it's the
cross-distro change that will be relatively rare. So a fixed ID from each
distro family ought to be okay?
Sounds sane to me. I've filed
Hi
Ceph is currently developing unprivileged user support, and the upstream
developers are asking each of the distros to allocate a static numerical
UID for the ceph user and group.
The reason for the static number is to allow Ceph users to hot-swap hard
drives from one OSD server to another.
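To make the motivation concrete: the numeric owner stored on the drive's filesystem travels with the drive, so a check like the sketch below only passes on the new server if 'ceph' maps to the same UID everywhere (the UID value and path are hypothetical):

import os, pwd

CEPH_UID = 167  # hypothetical fixed UID; the actual allocation was under discussion

def owned_by_ceph(path):
    # True if the OSD data dir on the hot-swapped drive is owned by the fixed UID.
    return os.stat(path).st_uid == CEPH_UID

print(owned_by_ceph('/var/lib/ceph/osd/ceph-0'))
print(pwd.getpwnam('ceph').pw_uid == CEPH_UID)  # must also hold on the new host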
Hi All, I am new to this mailing list.
I have a few basic questions on Ceph; hope someone can answer them.
Thanks in advance!
I would like to understand more about the placement policy, especially
on the failure path.
1. Machines going up and down is fairly common in a data center.
How
Hi Somnath,
You could try applying this one :)
https://github.com/ceph/ceph/pull/4356
BTW, the previous RocksDB configuration had a bug that set
rocksdb_disableDataSync to true by default, which may cause data loss on
failure. So please update newstore to the latest or manually set it
Thanks Xiaoxi.
But I have already initiated a test by making db/ a symbolic link to another
SSD. Will share the results soon.
Regards
Somnath
-Original Message-
From: Chen, Xiaoxi [mailto:xiaoxi.c...@intel.com]
Sent: Wednesday, April 15, 2015 6:48 PM
To: Somnath Roy; Haomai Wang
Cc:
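For reference, the symlink approach described above might look like the sketch below; the paths are illustrative, and the OSD should be stopped before moving anything:

import os, shutil

# Move newstore's RocksDB directory onto a separate SSD and leave a
# symlink behind, so the kv workload lands on the other device.
osd_db = '/var/lib/ceph/osd/ceph-0/db'    # hypothetical data dir layout
ssd_db = '/mnt/ssd1/ceph-0-db'            # mount point of the other SSD

shutil.move(osd_db, ssd_db)
os.symlink(ssd_db, osd_db)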
On 04/13/2015 10:27 AM, Sage Weil wrote:
[adding ceph-devel]
On Mon, 13 Apr 2015, Chen, Xiaoxi wrote:
Hi,
Actually, I did a tuning survey on RocksDB when I was
updating RocksDB to a newer version, and exposed the tunables in
ceph.conf.
What we need to ensure is the WAL
Yeah, once we're confident in it in master. The idea behind this
feature was to allow using object maps with existing images. There
just wasn't time to include it in the base hammer release.
Ok, thanks Josh !
(I'm planning to implement this feature in Proxmox when it's released.)
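Once it lands, using it would presumably look something like the sketch below; the pool/image name is made up and the exact CLI may differ in the final version:

import subprocess

image = "rbd/myimage"  # hypothetical pool/image
# Enable the object-map feature on an existing image, then rebuild the
# map so it reflects the objects the image already has.
subprocess.check_call(["rbd", "feature", "enable", image, "object-map"])
subprocess.check_call(["rbd", "object-map", "rebuild", image])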
-
On 04/14/2015 08:01 PM, shiva rkreddy wrote:
The clusters are in a test environment, so it's a new deployment of 0.80.9.
OS on the cluster nodes was reinstalled as well, so there shouldn't be
any fs aging unless the disks are slowing down.
The perf measurement is done by initiating multiple cinder
Hi Sage/Mark,
I did a WA (write amplification) experiment with newstore, with settings
similar to those I mentioned yesterday.
Test:
---
64K Random write with 64 QD and writing total of 1 TB of data.
Newstore:
Fio output at the end of 1 TB write.
---
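For reference, the workload described above (64K random writes at QD 64, 1 TB total) might be launched along these lines; the options are standard fio flags but assumed, not taken from the actual run:

import subprocess

subprocess.check_call([
    "fio", "--name=wa-test",
    "--filename=/dev/sdX",   # placeholder target device
    "--rw=randwrite", "--bs=64k", "--iodepth=64",
    "--ioengine=libaio", "--direct=1",
    "--size=1T",
])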
On 04/15/2015 01:21 AM, Sage Weil wrote:
On Tue, 14 Apr 2015, Tim Serong wrote:
On 04/14/2015 11:05 AM, Sage Weil wrote:
Tim, Owen:
Can we get a 'ceph' user/group uid/gid allocated for SUSE to get this
unstuck? IMO the radosgw systemd stuff is blocked behind this too.
I haven't yet been
Hi Greg,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the fs suite (http://tracker.ceph.com/issues/11153#fs). Do you think it
is ready for QE to start their own round of testing?
Note that it will be the last giant release.
Cheers
P.S.
Hi Yehuda,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the rgw suite (http://tracker.ceph.com/issues/11153#rgw). One run had a
transient failure (http://tracker.ceph.com/issues/11259) that did not repeat.
You will also find traces of a failed run because of
Hi Sam,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the rados suite (http://tracker.ceph.com/issues/11153#rados). Do you
think it is ready for QE to start their own round of testing?
Note that it will be the last giant release.
Cheers
P.S.
Hi Josh,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the rbd suite (http://tracker.ceph.com/issues/11153#rbd). Do you think
it is ready for QE to start their own round of testing?
Note that it will be the last giant release.
Cheers
P.S.
Ping?
On 08/04/2015 11:22, Loic Dachary wrote:
Hi,
I see you have been busy backporting issues to Firefly today; this is great
:-)
https://github.com/ceph/ceph/pulls/xinxinsh
It would be helpful if you could update the pull requests (and the
corresponding issues) as explained at