Re: rgw/civetweb privileged port bind

2015-11-26 Thread Karol Mroz
On Thu, Nov 26, 2015 at 11:38:26AM -0800, Sage Weil wrote: > On Thu, 26 Nov 2015, Karol Mroz wrote: > > Hello, > > > > As I understand it, with the release of infernalis, ceph > > daemons are no longer being run as root. Thus, rgw/civetweb > > is unable to bind to privileged ports: > > > > http:/

Re: rgw/civetweb privileged port bind

2015-11-26 Thread Sage Weil
On Thu, 26 Nov 2015, Karol Mroz wrote: > Hello, > > As I understand it, with the release of infernalis, ceph > daemons are no longer being run as root. Thus, rgw/civetweb > is unable to bind to privileged ports: > > http://tracker.ceph.com/issues/13600 > > We encountered this problem as well in

rgw/civetweb privileged port bind

2015-11-26 Thread Karol Mroz
Hello, As I understand it, with the release of infernalis, ceph daemons are no longer being run as root. Thus, rgw/civetweb is unable to bind to privileged ports: http://tracker.ceph.com/issues/13600 We encountered this problem as well in our downstream (hammer-based) product, where we run rgw/c
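[Editor's note: a minimal sketch, not code from the thread, of the failure mode being reported. An unprivileged process that tries to bind a port below 1024 gets EACCES unless the binary holds CAP_NET_BIND_SERVICE (e.g. granted with setcap) or the service is moved to an unprivileged port.]

/* Sketch: why a non-root rgw/civetweb cannot bind port 80.
 * Without CAP_NET_BIND_SERVICE, bind() below port 1024 fails with EACCES. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);                /* privileged port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("bind: %s\n", strerror(errno)); /* EACCES when run unprivileged */
    else
        printf("bound to port 80\n");

    close(fd);
    return 0;
}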

Re: [PATCH] mm: Allow GFP_IOFS for page_cache_read page cache allocation

2015-11-26 Thread Michal Hocko
On Thu 12-11-15 10:53:01, Jan Kara wrote: > On Wed 11-11-15 15:13:53, mho...@kernel.org wrote: > > From: Michal Hocko > > > > page_cache_read has been historically using page_cache_alloc_cold to > > allocate a new page. This means that mapping_gfp_mask is used as the > > base for the gfp_mask. Ma
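[Editor's note: a simplified userspace model, not kernel code and with invented flag values, of the gfp-mask interaction described in the excerpt: because page_cache_read historically allocated via page_cache_alloc_cold, mapping_gfp_mask was the base mask, so a mapping that cleared __GFP_FS/__GFP_IO also restricted the page-cache allocation done during a fault.]

/* Toy model of gfp-mask inheritance (illustration only). */
#include <stdio.h>

#define __GFP_IO   0x1u
#define __GFP_FS   0x2u
#define GFP_KERNEL (__GFP_IO | __GFP_FS)
#define GFP_IOFS   (__GFP_IO | __GFP_FS)

static void show(const char *label, unsigned int mask)
{
    printf("%-26s FS:%s IO:%s\n", label,
           (mask & __GFP_FS) ? "yes" : "no",
           (mask & __GFP_IO) ? "yes" : "no");
}

int main(void)
{
    /* a mapping whose filesystem opted out of FS/IO reclaim */
    unsigned int mapping_mask = GFP_KERNEL & ~GFP_IOFS;

    show("old base (mapping mask)", mapping_mask);
    /* the patch's point, as I read the excerpt: the fault path does not
     * need that restriction, so FS/IO can be allowed again there */
    show("fault-path allocation", mapping_mask | GFP_IOFS);
    return 0;
}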

Re: Quick comparison of Hammer & Infernalis on NVMe

2015-11-26 Thread Alexandre DERUMIER
Thanks for sharing! Results are impressive. Great to see that writes are finally improving. I just wonder how much you could get with rbd_cache=false. Also, with such a high load, maybe using jemalloc on fio could help too (I have seen around 20% improvement on the fio client): LD_PRELOAD=${JEMALLO

Re: block-rbd: One function call less in rbd_dev_probe_parent() after error detection

2015-11-26 Thread SF Markus Elfring
>> * Why was the function "rbd_dev_probe_parent" implemented in such a way >> that it relies on a sanity check in the function "rbd_dev_destroy" then? > > Because it's not a bad thing? There are different opinions about this implementation detail. > What's wrong with an init to NULL, a possible
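[Editor's note: a standalone sketch, not the actual rbd code, of the pattern being debated. The pointer is initialized to NULL and the destroy helper, like kfree(), treats NULL as a no-op, so a single error label can unwind whatever was actually created.]

#include <stdio.h>
#include <stdlib.h>

struct rbd_device_stub {
    int id;
};

/* like rbd_dev_destroy()/kfree(): a no-op when handed NULL */
static void stub_destroy(struct rbd_device_stub *dev)
{
    if (!dev)
        return;
    printf("destroying device %d\n", dev->id);
    free(dev);
}

static int probe_parent(int fail_early)
{
    struct rbd_device_stub *parent = NULL;   /* init to NULL up front */
    int ret = -1;

    if (fail_early)
        goto out_err;                        /* parent is still NULL here */

    parent = malloc(sizeof(*parent));
    if (!parent)
        goto out_err;
    parent->id = 42;

    /* ... imagine a later probing step failing ... */

out_err:
    stub_destroy(parent);                    /* safe for NULL and allocated */
    return ret;
}

int main(void)
{
    probe_parent(1);   /* early failure: destroy sees NULL */
    probe_parent(0);   /* late failure: destroy frees the stub */
    return 0;
}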

Re: why my cluster become unavailable (min_size of pool)

2015-11-26 Thread Sage Weil
On Thu, 26 Nov 2015, hzwulibin wrote: > Hi, Sage > > I have a question about the min_size of a pool. > > The default value of min_size is 2, but with this setting, when two OSDs > are down (meaning two replicas are lost) at the same time, the IO will be blocked. > We want to set the min_size to 1 in our production
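[Editor's note: a toy model of my own, not Ceph code, of the blocking behaviour under discussion. A PG keeps serving IO only while the number of up replicas is at least min_size, which is why lowering it with a command like "ceph osd pool set <pool> min_size 1" trades availability during a double failure against the risk of accepting writes on a single surviving copy.]

#include <stdio.h>

static const char *io_state(int up_replicas, int min_size)
{
    return up_replicas >= min_size ? "serving IO" : "IO blocked";
}

int main(void)
{
    const int size = 3;   /* replica count of the pool */

    for (int up = size; up >= 0; up--)
        printf("up replicas=%d  min_size=2 -> %-10s  min_size=1 -> %s\n",
               up, io_state(up, 2), io_state(up, 1));
    return 0;
}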

Re: block-rbd: One function call less in rbd_dev_probe_parent() after error detection

2015-11-26 Thread Ilya Dryomov
On Thu, Nov 26, 2015 at 8:54 AM, SF Markus Elfring wrote: >>> I interpreted the eventual passing of a null pointer to the >>> rbd_dev_destroy() >>> function as an indication for further source code adjustments. >> >> If all error paths could be adjusted so that NULL pointers are never passed >>

RE: Scaling Ceph reviews and testing

2015-11-26 Thread piotr.da...@ts.fujitsu.com
> So, my proposition is to postpone QA'ing performance pull > requests until someone unrelated to the PR author (or even the author's company) > can confirm that the claims in that particular PR are true. Providing a code snippet > that shows the perf difference (or providing a way to verify those claims in > repr

RE: Scaling Ceph reviews and testing

2015-11-26 Thread piotr.da...@ts.fujitsu.com
> -Original Message- > From: Podoski, Igor > Sent: Thursday, November 26, 2015 10:25 AM > > > But correctness and reliability regressions are one thing, performance > regressions are another. I already see PRs that promise a > performance increase, when at (my) first glance it looks

RE: Scaling Ceph reviews and testing

2015-11-26 Thread igor.podo...@ts.fujitsu.com
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Dalek, Piotr > Sent: Thursday, November 26, 2015 9:56 AM > To: Gregory Farnum; ceph-devel > Subject: RE: Scaling Ceph reviews and testing > > > -Original Message-

RE: Scaling Ceph reviews and testing

2015-11-26 Thread piotr.da...@ts.fujitsu.com
> -Original Message- > From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- > ow...@vger.kernel.org] On Behalf Of Gregory Farnum > Sent: Wednesday, November 25, 2015 11:14 PM > > It has been a long-standing requirement that all code be tested by > teuthology before being merged to mas

Re: Re: why my cluster become unavailable (min_size of pool)

2015-11-26 Thread hzwulibin
Hi, Haomai Thanks for the quick reply, your explanation makes sense to me. Thanks! -- hzwulibin 2015-11-26 - From: Haomai Wang Sent: 2015-11-26 16:00 To: hzwulibin Cc: Sage Weil, ceph-devel Subject: Re: why

Re: why my cluster become unavailable (min_size of pool)

2015-11-26 Thread Haomai Wang
On Thu, Nov 26, 2015 at 3:54 PM, hzwulibin wrote: > Hi, Sage > > I have a question about the min_size of a pool. > > The default value of min_size is 2, but with this setting, when two OSDs are > down (meaning two replicas are lost) at the same time, the IO will be blocked. > We want to set the min_size to 1 in our p