IIRC we had to adjust settings in /etc/security/limits.conf to allow a ulimit
adjustment of at least core:
sed -i 's/^#\*.*soft.*core.*0/\* soft core unlimited/g' /etc/security/limits.conf
or something like that. That seems to apply to centos/fedora/redhat
systems.
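For what it's worth, a sed pattern like that can be sanity-checked on a scratch copy before touching the real /etc/security/limits.conf. A minimal sketch, assuming the stock commented-out default line (the sample line and temp file are illustrative):

```shell
# Try the substitution on a throwaway copy first (illustrative only).
tmp=$(mktemp)
printf '#*               soft    core            0\n' > "$tmp"
sed -i 's/^#\*.*soft.*core.*0/\* soft core unlimited/' "$tmp"
cat "$tmp"    # prints: * soft core unlimited
rm -f "$tmp"
```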
On 08/08/2013 02:5
On Fri, 9 Aug 2013, James Harper wrote:
> > > But I think this still won't have the desired outcome if you have 2 OSD's.
> > > The possible situations if the resource is supposed to be running are:
> > > . Both running => all good, pacemaker will do nothing
> > > . Both stopped => all good, pacemak
> > But I think this still won't have the desired outcome if you have 2 OSD's.
> > The possible situations if the resource is supposed to be running are:
> > . Both running => all good, pacemaker will do nothing
> > . Both stopped => all good, pacemaker will start the services
> > . One stopped one
I think Stefan's problem is probably distinct from Mike's.
Stefan: Can you reproduce the problem with
debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20
on a few osds (including the restarted osd), and upload those osd logs
along with the ceph.log from before killing the osd u
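If restarting the daemons just to pick up ceph.conf changes is inconvenient, debug levels like these can usually be raised at runtime via injectargs; a sketch, where the osd id and flag spelling are assumptions to adapt for your cluster:

```shell
# Raise debug levels on a running osd without a restart (hypothetical osd.0).
ceph tell osd.0 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1 --debug-optracker 20'
```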
On Fri, Aug 9, 2013 at 3:44 PM, Sage Weil wrote:
> On Fri, 9 Aug 2013, Milosz Tanski wrote:
>> Sage,
>>
>> Great. Is there some automated testing system that looks for
>> regressions in cephfs that I can be watching for?
>
> Yep, you can join the ceph...@ceph.com email list and watch for the
> kce
On Fri, 9 Aug 2013, Milosz Tanski wrote:
> Sage,
>
> Great. Is there some automated testing system that looks for
> regressions in cephfs that I can be watching for?
Yep, you can join the ceph...@ceph.com email list and watch for the
kcephfs suite results (see http://ceph.com/resources/mailing-l
Sage,
Great. Is there some automated testing system that looks for
regressions in cephfs that I can be watching for?
- Milosz
On Fri, Aug 9, 2013 at 1:44 PM, Sage Weil wrote:
> Hi Milosz,
>
> I pulled both these into the testing branch. Thanks!
>
> sage
>
> On Fri, 9 Aug 2013, Milosz Tanski wr
Hi Li,
Thanks for discussing this at the summit! As I mentioned, I think email
will be the easiest way to detail my suggestion for handling the shared
writer or read/write case. The notes from the summit are at
http://pad.ceph.com/p/mds-inline-data
For the single-writer case, it is simple
Hi Milosz,
I pulled both these into the testing branch. Thanks!
sage
On Fri, 9 Aug 2013, Milosz Tanski wrote:
> Currently ceph_invalidatepage is overly eager with its checks, which are
> moot. The second change cleans up the case where offset is non-zero.
>
> Please pull from:
> http
Hi Matthew,
please have a look at:
http://www.spinics.net/lists/linux-rdma/msg16710.html
http://wiki.ceph.com/01Planning/02Blueprints/Emperor/msgr%3A_implement_infiniband_support_via_rsockets
Maybe you should switch this discussion from ceph-user to the ceph-devel ML.
Kind Regards,
-Dieter
On
The invalidatepage code bails if it encounters a non-zero page offset. The
current logic that does this is non-obvious, with multiple if statements.
This should be logically and functionally equivalent.
Signed-off-by: Milosz Tanski
---
fs/ceph/addr.c | 29 +++--
1 file changed
The early bug checks are moot because the VMA layer ensures those things:
1. It will not call invalidatepage unless PagePrivate (or PagePrivate2) is set.
2. It will not call invalidatepage without taking a PageLock first.
3. It guarantees that the inode page is mapped.
Signed-off-by: Milosz Tanski
-
Currently ceph_invalidatepage is overly eager with its checks, which are
moot. The second change cleans up the case where offset is non-zero.
Please pull from:
https://bitbucket.org/adfin/linux-fs.git wip-invalidatepage
This simple patchset came from the changes I made while working on f
Sage,
I can spin some of this out into another patch; in the things I've
been sending I've been squashing the changes, just because I've done so
many less-than-smart things to get to this point.
After reviewing this one more time and going from memory... I believe
the invalidate page code is ove
Hi Dieter,
On Fri, 9 Aug 2013, Kasper Dieter wrote:
> On Fri, Aug 09, 2013 at 03:06:37PM +0200, Yan, Zheng wrote:
> > On Fri, Aug 9, 2013 at 5:03 PM, Kasper Dieter
> > wrote:
> > > OK,
> > > I found this nice page: http://ceph.com/docs/next/dev/file-striping/
> > > which explains "--stripe_unit -
On Fri, 9 Aug 2013, James Harper wrote:
> > > I haven't tried your patch yet, but can it ever return 0? It seems to
> > > set it to 3 initially, and then change it to 1 if it finds an error. I
> > > can't see that it ever sets it to 0 indicating that daemons are running.
> > > Easy enough to fix by
On 09/08/13 14:58, Chen, Xiaoxi wrote:
> I do think Launchpad is a mature alternative, since there are big
> success stories such as Ubuntu and OpenStack :)
Indeed
> OpenStack has already used it for years, and everyone seems happy
> with it, I am
Hi,
sorry for the late response.
On Tue, Aug 6, 2013 at 5:10 AM, Roald van Loon wrote:
> Hi ceph devs,
>
> I was working with a RGW / keystone implementation, source tree from
> github master. I stumbled against this error from the radosgw log;
>
> 2013-08-06 14:00:02.523331 7f929c011780 10 a
I do think Launchpad is a mature alternative, since there are big success
stories such as Ubuntu and OpenStack :)
OpenStack has already used it for years, and everyone seems happy with it; I am
also happy with it, but I am not familiar with Jira, so I cannot do a
comparison.
Is there any estim
On Fri, Aug 09, 2013 at 03:06:37PM +0200, Yan, Zheng wrote:
> On Fri, Aug 9, 2013 at 5:03 PM, Kasper Dieter
> wrote:
> > OK,
> > I found this nice page: http://ceph.com/docs/next/dev/file-striping/
> > which explains "--stripe_unit --stripe_count --object_size"
> >
> > But still I'm not sure about
On Fri, Aug 9, 2013 at 5:03 PM, Kasper Dieter
wrote:
> OK,
> I found this nice page: http://ceph.com/docs/next/dev/file-striping/
> which explains "--stripe_unit --stripe_count --object_size"
>
> But still I'm not sure about
> (1) what is the equivalent command on cephfs to 'rbd create --order 16'
Thanks for the info!
I've now got these settings in my ceph.conf:
mon_cluster_log_file = /dev/null
mon_cluster_log_to_syslog = true
clog_to_syslog = true
log_to_syslog = true
err_to_syslog = true
clog_to_syslog_level = "warn"
mon_cluster_log_to_syslog_level = "warn"
I have the cluster
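One way to confirm which of those values the running daemons actually picked up is the admin socket; a sketch, where the socket path and mon name are assumptions to adjust for your deployment:

```shell
# Ask a running monitor for its effective config (path and name illustrative).
ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep -i syslog
```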
Hi,
> I've had a few occasions where tapdisk has segfaulted:
>
> tapdisk[9180]: segfault at 7f7e3a5c8c10 ip 7f7e387532d4 sp
> 7f7e3a5c8c10 error 4 in libpthread-2.13.so[7f7e38748000+17000]
> tapdisk:9180 blocked for more than 120 seconds.
> tapdisk D 88043fc13540 0 9180
OK,
I found this nice page: http://ceph.com/docs/next/dev/file-striping/
which explains "--stripe_unit --stripe_count --object_size"
But still I'm not sure about
(1) what is the equivalent command on cephfs to 'rbd create --order 16' ?
(2) how to use those parameters to achieve different optimized
Hi,
my goal is to set the 'object size' used in the distribution inside rados
in an equal (or similar) way between RBD and CephFS.
To set obj_size=64k in RBD I use the command:
rbd create --size 1024000 --pool SSD-r2 ssd2-1T-64k --order 16
On cephfs set_layout '-s 65536' runs into EINVAL:
ceph
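A guess at what may be going on: the old `cephfs` layout tool tends to want a mutually consistent stripe_unit / stripe_count / object_size triple, so passing only `-s 65536` can fail validation with EINVAL. Supplying all three may avoid it (mount point, file name, and values here are illustrative, not a verified fix):

```shell
# Hypothetical: 64k object size with a matching stripe unit on a new file.
cephfs /mnt/cephfs/testfile set_layout -u 65536 -c 1 -s 65536
```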
> > I haven't tried your patch yet, but can it ever return 0? It seems to
> > set it to 3 initially, and then change it to 1 if it finds an error. I
> > can't see that it ever sets it to 0 indicating that daemons are running.
> > Easy enough to fix by setting the EXIT_STATUS=0 after the check of
>
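The fix being suggested can be sketched in plain shell: start from the LSB "not running" value, flip to 1 on any failed check, and only downgrade to 0 after the loop if nothing failed. The daemon checks here are stand-ins, not the real init script:

```shell
# Sketch of the EXIT_STATUS fix (illustrative, not the actual init script).
EXIT_STATUS=3                      # assume "not running" until proven otherwise
for state in running running; do   # stand-in for per-daemon status checks
    [ "$state" = running ] || EXIT_STATUS=1
done
# The missing piece: if no check failed, report success explicitly.
[ "$EXIT_STATUS" -eq 3 ] && EXIT_STATUS=0
echo "$EXIT_STATUS"                # prints 0 when all daemons are running
```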