I checked with Sam on this and it turns out he created a new subsystem whose
output you can control with the "debug optracker" (or --debug-optracker) option
(in the same way as the other debug log subsystems). In 0.45 the output for
that subsystem was logged at an inappropriately high level (1); that has been
fixed.
On Fri, 20 Apr 2012, Calvin Morrow wrote:
> I'm seeing the same. < 12 hours with 6 OSDs resulted in ~18 GB of
> logs. I had to change my log rotate config to compress based on size
> instead of once a day or I ended up with a full root partition.
>
> I would love to know if there's a better way to handle it.
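Putting Sage's pointer into concrete form: a minimal sketch of what quieting the optracker might look like in ceph.conf. The option name is from Sage's note above; the `[osd]` section and the level 0 (for "quiet") are my assumptions — the thread only says the 0.45 default of 1 was too high.

```ini
; ceph.conf -- silence the optracker debug subsystem
; (section and level are illustrative; option name is from Sage's note)
[osd]
    debug optracker = 0
```

The same thing can be passed on the daemon command line as `--debug-optracker 0`, per the parenthetical in Sage's message.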
On 20 Apr 2012, at 21:35, Greg Farnum wrote:
> On Friday, April 20, 2012 at 12:00 PM, Sławomir Skowron wrote:
>> Maybe it's a lame question, but does anybody know the simplest procedure
>> for the most non-disruptive upgrade of a ceph cluster with a real workload
>> on it?
>
> Unfortunate though it is, non-disruptive upgrades aren't a great idea to attempt
I'm seeing the same. < 12 hours with 6 OSDs resulted in ~18 GB of
logs. I had to change my log rotate config to compress based on size
instead of once a day or I ended up with a full root partition.
I would love to know if there's a better way to handle it.
Calvin
On Fri, Apr 20, 2012 at 5:05
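For reference, a size-based logrotate stanza along the lines Calvin describes might look like the sketch below. The log path and the thresholds are illustrative assumptions, not values from the thread.

```conf
# /etc/logrotate.d/ceph -- rotate on size instead of once a day
/var/log/ceph/*.log {
    size 500M        # rotate as soon as a log file exceeds 500 MB
    rotate 7         # keep at most 7 rotated logs
    compress         # gzip rotated logs to reclaim space
    missingok        # don't error if a log is absent
    notifempty       # skip empty logs
}
```

With a `size` directive, logrotate rotates whenever it runs and the file is over the threshold, so it helps to invoke it more often than the default daily cron (e.g. hourly).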
Is there a recommended log config for production systems? I'm also
trying to decrease the verbosity in 0.45, using the options specified
here: http://ceph.newdream.net/wiki/Debugging. Setting them down to
'1' doesn't end the insane log sprawl I'm seeing.
On Tue, Apr 17, 2012 at 10:09 PM, Greg F
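For anyone else tuning this, the debug options from the wiki page go into ceph.conf as lines of the form `debug <subsystem> = <level>`. A hedged sketch — the subsystem names below are common ones (ms, osd, filestore, journal), and the levels are illustrative rather than a recommended production profile:

```ini
; ceph.conf -- lower verbosity for some of the chattiest subsystems
[global]
    debug ms = 0
    debug osd = 1
    debug filestore = 1
    debug journal = 1
```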
On Friday, April 20, 2012 at 12:00 PM, Sławomir Skowron wrote:
> Maybe it's a lame question, but does anybody know the simplest procedure
> for the most non-disruptive upgrade of a ceph cluster with a real workload
> on it?
Unfortunate though it is, non-disruptive upgrades aren't a great idea to
attempt ri
Maybe it's a lame question, but does anybody know the simplest procedure
for the most non-disruptive upgrade of a ceph cluster with a real workload
on it?
It's most important if we want to semi-automate this process with some
tools. Maybe there is a cookbook for this operation? I know that
automating this is n
On Fri, 20 Apr 2012, Sage Weil wrote:
> On Fri, 20 Apr 2012, Vladimir Bashkirtsev wrote:
> > Dear devs,
> >
> > Playing around with ceph and gradually moving it from a toy thing into
> > production I wanted ceph to actually make its run for the money (so to
> > speak).
> > I have assembled a number
On Fri, 20 Apr 2012, Vladimir Bashkirtsev wrote:
> Dear devs,
>
> Playing around with ceph and gradually moving it from a toy thing into
> production I wanted ceph to actually make its run for the money (so to speak).
> I have assembled a number of OSDs which are really built on different hardware:
After running ceph on XFS for some time, I decided to try btrfs again.
Performance with the current "for-linux-min" branch and big metadata
is much better. The only problem (?) I'm still seeing is a warning
that seems to occur from time to time:
[87703.784552] ------------[ cut here ]------------
Dear devs,
Playing around with ceph and gradually moving it from a toy thing into
production I wanted ceph to actually make its run for the money (so to
speak). I have assembled a number of OSDs which are really built on
different hardware: starting from an old P4 with 512MB of RAM and ending up
w
Sage,
Thank you for the quick response. I've seen notes about localized PGs and
that they do not make sense. However, I did not realize that that was what
I had hit. I will follow your suggestion and wait for 0.47 - if it
is not (actually) broken, don't fix it. :)
Regards,
Vladimir
On 20/04/1