> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org]
> On Behalf Of Mark Nelson
> Sent: Tuesday, September 29, 2015 1:34 AM
>
> Hi Everyone,
>
> A while back Alexandre Derumier posted some test results looking at how
> transparent huge pages can reduce memory usage with jemalloc.
No, James. I'm facing library issues with libnss3 and libcommon (ceph). I will
resolve them and generate a new pull request on master soon.
Thanks,
Varada
> -Original Message-
> From: James (Fei) Liu-SSI [mailto:james@ssi.samsung.com]
> Sent: Monday, September 28, 2015 11:51 PM
> To: Varada Kari
Hi Everyone,
A while back Alexandre Derumier posted some test results looking at how
transparent huge pages can reduce memory usage with jemalloc. I went
back and ran a number of new tests on the community performance cluster
to verify his findings and also look at how performance and cpu usage
Since the OpenStack Keystone team will move to the v3 API and try to
decommission v2 completely, we probably need to modify code in the following
files (a rough sketch of one possible option change follows the list):
./src/common/config_opts.h
./src/rgw/rgw_json_enc.cc
./src/rgw/rgw_swift.cc
./src/rgw/rgw_swift_auth.cc
./src/rgw/rgw_rest_swift.cc
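Purely as an illustration (not the actual change), here is one way a v3 switch
could be declared in ./src/common/config_opts.h, following the
OPTION(name, type, default) macro style that file already uses for the other
rgw_keystone_* settings. The option names and defaults below are my
assumptions:

  // Hypothetical sketch, not a real patch: a v3 switch plus the extra
  // identity scope v3 requires for the admin credentials.
  OPTION(rgw_keystone_api_version, OPT_INT, 2)     // Keystone API version to use (2 or 3)
  OPTION(rgw_keystone_admin_domain, OPT_STR, "")   // domain for v3 admin authentication
  OPTION(rgw_keystone_admin_project, OPT_STR, "")  // project for v3 admin authentication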
Hi Varada,
Have you rebased the pull request onto master yet?
Thanks,
James
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Varada Kari
Sent: Friday, September 11, 2015 3:28 AM
To: Sage Weil; Matt W. Benjamin; Loic Dachary
Thanks Mark !
>>Again, sorry for the delay on these.
No problem, it's already fantastic that you manage these meetings each week!
Regards,
Alexandre
- Original Message -
From: "Mark Nelson"
To: "aderumier"
Cc: "ceph-devel"
Sent: Monday, 28 September 2015 18:24:22
Subject: Re: 09/23/2015
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 20 of 38 defect(s)
** CID 717233: Uninitialized scalar field (UNINIT_CTOR)
/mds/Capability.h: 249 in Capability::Capability(CInode *, unsigned long,
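For readers unfamiliar with this defect class, a generic illustration of
UNINIT_CTOR and its usual fix (this is not the actual Capability.h code):
Coverity flags scalar members that a constructor leaves uninitialized, and
the fix is to initialize every scalar in the member initializer list.

  #include <cstdint>

  struct Example {
    uint64_t cap_id;
    int flags;
    // Before: Example(uint64_t id) : cap_id(id) {}  // 'flags' left uninitialized
    Example(uint64_t id) : cap_id(id), flags(0) {}   // every scalar initialized
  };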
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 6 of 6 defect(s)
** CID 1019567: Thread deadlock (ORDER_REVERSAL)
** CID 1231681: Thread deadlock (ORDER_REVERSAL)
** CID 1231682: Thread deadlock (ORDER_REVERSAL)
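For context, ORDER_REVERSAL means two threads take the same pair of locks in
opposite orders, which can deadlock. A minimal sketch of the fix pattern, not
the actual ceph code (std::scoped_lock is C++17 and used here only for
brevity; acquiring both locks in one agreed global order achieves the same
thing):

  #include <mutex>

  std::mutex mutex_a;
  std::mutex mutex_b;

  void safe_update() {
    // Acquires both mutexes without deadlock, regardless of the order
    // other threads name them in.
    std::scoped_lock lock(mutex_a, mutex_b);
    // ... touch state guarded by both mutexes ...
  }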
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 1230671: Missing unlock (LOCK)
/msg/SimpleMessenger.cc: 258 in SimpleMessenger::reaper()()
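The "Missing unlock" class usually means an explicit lock()/unlock() pair
with an exit path that skips the unlock. A generic sketch of the RAII fix,
not the actual SimpleMessenger code (Ceph's own Mutex::Locker serves the same
purpose as std::lock_guard here):

  #include <mutex>

  std::mutex lock;
  bool stop = false;

  void reaper_like() {
    std::lock_guard<std::mutex> l(lock);  // released on every exit path
    if (stop)
      return;  // early return no longer leaks the lock
    // ... reap dead connections ...
  }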
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
14 new defect(s) introduced to ceph found with Coverity Scan.
4 defect(s), reported by Coverity Scan earlier, were marked fixed in the recent
build analyzed by Coverity Scan.
New defect(s) Reported
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 1243158: Resource leak (RESOURCE_LEAK)
/test/librbd/test_librbd.cc: 1370 in
LibRBD_ListChildrenTiered_Test::TestBody()()
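RESOURCE_LEAK typically means a raw allocation that an early return can leak;
in a gtest TestBody a failing ASSERT_* macro returns early, which is a common
trigger. A generic sketch of the usual fix, not the actual test_librbd.cc
code:

  #include <memory>

  struct Buffer { /* ... */ };

  bool process(bool bail_early) {
    auto buf = std::make_unique<Buffer>();  // freed on every exit path
    // Before: Buffer *buf = new Buffer;    // an early return skipped 'delete buf'
    if (bail_early)
      return false;
    // ... use *buf ...
    return true;
  }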
Hi,
Please find the latest report on new defect(s) introduced to ceph found with
Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 1241497: Thread deadlock (ORDER_REVERSAL)
On 25.09.2015 17:14, Sage Weil wrote:
On Fri, 25 Sep 2015, Igor Fedotov wrote:
Another thing to note is that we don't have the whole object ready for
compression. We just have some new data block written (appended) to the object,
and we should either compress that block and save the mentioned mapping
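To make the mapping idea concrete, a minimal sketch of the data structure I
understand is being discussed (my assumption, not existing Ceph code): each
appended block is compressed on its own, and a per-object map from logical
offset to compressed extent lets reads decompress only the blocks they touch.

  #include <cstdint>
  #include <map>

  struct CompressedExtent {
    uint64_t phys_offset;   // where the compressed block was written
    uint32_t phys_len;      // compressed length on disk
    uint32_t logical_len;   // original (uncompressed) length
  };

  // Keyed by logical offset within the object.
  using ExtentMap = std::map<uint64_t, CompressedExtent>;

  void record_append(ExtentMap &m, uint64_t logical_off,
                     uint64_t phys_off, uint32_t clen, uint32_t len) {
    m[logical_off] = CompressedExtent{phys_off, clen, len};
  }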
Hi Alexandre,
Sorry for the long delay. I think I got through all of them. They
should be public now and I've listed them in the etherpad:
http://pad.ceph.com/p/performance_weekly
Again, sorry for the delay on these. I can't find any way to make
bluejeans default to making the meetings public.
Hi folks,
Here is a brief summary on potential compression implementation options.
I think we should choose the desired approach before we start working on
the compression feature.
Comments, additions and fixes are welcome.
Compression At Client - compression/decompression to be performed at the client side.
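Whichever placement is chosen, the feature would presumably sit behind a
pluggable compressor interface. A hypothetical sketch (the names here are my
assumptions, not an existing Ceph API):

  #include <string>

  class Compressor {
  public:
    virtual ~Compressor() = default;
    // Return 0 on success, a negative error code otherwise.
    virtual int compress(const std::string &in, std::string *out) = 0;
    virtual int decompress(const std::string &in, std::string *out) = 0;
  };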
Hi,
puppet-ceph currently lives in stackforge [1] which is being retired
[2]. puppet-ceph is also mirrored on the Ceph Github organization [3].
This version of the puppet-ceph module was created from scratch and
not as a fork of the (then) upstream puppet-ceph by Enovance [4].
Today, the version b
-- All Branches --
Abhishek Varshney
2015-09-22 15:11:25 +0530 hammer-backports
Adam C. Emerson
2015-09-14 12:32:18 -0400 wip-cxx11time
2015-09-15 12:09:20 -0400 wip-cxx11concurrency
Adam Crume
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza
Hi Sage,
HWCAP_CRC32 and others were added to the Kernel with this commit
4bff28ccda2b7a3fbdf8e80aef7a599284681dc6; it looks like this first
landed in v3.14. Are you using the stock Kernel on Trusty (v3.13?)?
Can you update to a later version for gitbuilder? For regular testing
v3.19 (lts-vivid) m
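For reference, the runtime check this implies is a getauxval() lookup; a small
sketch for Linux/AArch64 (HWCAP_CRC32 needs kernel >= 3.14 per the commit
above, and getauxval() needs glibc >= 2.16):

  #include <sys/auxv.h>   // getauxval, AT_HWCAP
  #include <asm/hwcap.h>  // HWCAP_CRC32 on AArch64
  #include <cstdio>

  int main() {
    unsigned long hwcap = getauxval(AT_HWCAP);
    if (hwcap & HWCAP_CRC32)
      std::printf("using hardware CRC32 instructions\n");
    else
      std::printf("falling back to software CRC32\n");
    return 0;
  }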
Hi,
--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103
http://www.redhat.com/en/technologies/storage
tel. 734-761-4689
fax. 734-769-8938
cel. 734-216-5309
- Original Message -
> From: "John Spray"
> To: "Matt Benjamin"
> Cc: "Ceph Development"
On Mon, 28 Sep 2015, Loic Dachary wrote:
> Hi,
>
> On 28/09/2015 12:19, Abhishek Varshney wrote:
> > Hi,
> >
> > The rest-bench tool has been removed in master through PR #5428
> > (https://github.com/ceph/ceph/pull/5428). The backport PR #5812
> > (https://github.com/ceph/ceph/pull/5812) is currently causing failures
> > on the hammer-backports integration branch.
On Sat, Sep 26, 2015 at 8:03 PM, Matt Benjamin wrote:
> Hi John,
>
> I prototyped an invalidate upcall for libcephfs and the Ganesha Ceph FSAL,
> building on the Client invalidation callback registrations.
>
> As you suggested, NFS (or AFS, or DCE) minimally expects a more generic
> "cached vnode
That's really good info, thanks for tracking that down. Do you expect this to
be a common configuration going forward in Ceph deployments?
Joe
> On Sep 28, 2015, at 3:43 AM, Somnath Roy wrote:
>
> Xiaoxi,
> Thanks for giving me some pointers.
> Now, with the help of strace I am able to figure out why it is taking so
> long in my setup to complete the blkid* calls.
Hi,
I am running a cluster on Giant (0.87). After a series of disk
failures, we had to add new OSDs and mark the bad ones out, so the
rebalance process began.
Unfortunately, one OSD was stuck in down+peering and could not recover by itself.
From the log, we found that one PG was stuck because two files o
Hi,
On 28/09/2015 12:19, Abhishek Varshney wrote:
> Hi,
>
> The rest-bench tool has been removed in master through PR #5428
> (https://github.com/ceph/ceph/pull/5428). The backport PR #5812
> (https://github.com/ceph/ceph/pull/5812) is currently causing failures
> on the hammer-backports integration branch.
Hi,
The rest-bench tool has been removed in master through PR #5428
(https://github.com/ceph/ceph/pull/5428). The backport PR #5812
(https://github.com/ceph/ceph/pull/5812) is currently causing failures
on the hammer-backports integration branch. These failures can be
resolved by either backporting
Hi,
On 28/09/2015 07:24, Bharath Krishna wrote:
> Hi Dachary,
>
> Thanks for the reply. I am following your blog http://dachary.org/?p=3767
> And the README in
> https://github.com/dachary/teuthology/tree/wip-6502-openstack-v2/#openstack-backend
The up-to-date instructions are at
https://gi
Xiaoxi,
Thanks for giving me some pointers.
Now, with the help of strace, I was able to figure out why it is taking so
long in my setup to complete the blkid* calls.
In my case, the partitions show up properly even when the disks are connected
to a JBOD controller.
root@emsnode10:~/wip-write-path-optimization
FWIW, blkid works well with both GPT (created by parted) and MSDOS (created by
fdisk) partition tables in my environment.
But blkid doesn't show the information for the disks in the external bay (which
is connected via a JBOD controller) in my setup.
See below: sdb and sdh are SSDs attached to the front panel, but the rest osd