Cool, I didn't know about limiting "the max file size, and number of log files" before.
Thank you very much.
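
For the archives, here is roughly what I plan to try in conf/log4j.properties.
This is only a sketch built from the standard log4j 1.2 RollingFileAppender
options; the appender name "RFA" and the size/count values are my own choices,
and ${hbase.log.dir}/${hbase.log.file} assume the usual HBase startup scripts
set those system properties:

    # Sketch only: appender name, size and backup count are my own choices.
    log4j.rootLogger=INFO, RFA
    log4j.appender.RFA=org.apache.log4j.RollingFileAppender
    log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
    # Roll the file once it reaches this size ...
    log4j.appender.RFA.MaxFileSize=256MB
    # ... and keep at most this many rolled files, so disk usage stays bounded.
    log4j.appender.RFA.MaxBackupIndex=20
    log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
    log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n

With these values the logs should never take more than roughly 21 x 256 MB,
about 5.3 GB of disk.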

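And for the DailyRollingFileAppender plus cron approach discussed further down
the thread, a cleanup job could be as simple as the sketch below. The log
directory, the file name pattern and the 7-day retention are just assumptions
for illustration, not anything HBase ships with:

    # Crontab entry, runs daily at 02:00.
    # Deletes DRFA-rolled files older than 7 days; path, pattern and age are assumptions.
    0 2 * * * find /var/log/hbase -name 'hbase-*.log.*' -mtime +7 -delete
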

On Wed, Mar 19, 2014 at 7:49 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> Here is a related post:
>
> http://stackoverflow.com/questions/13864899/log4j-dailyrollingfileappender-are-rolled-files-deleted-after-some-amount-of
>
>
> On Tue, Mar 18, 2014 at 2:25 PM, Enis Söztutar <enis....@gmail.com> wrote:
>
> > DRFA already deletes old logs; you do not necessarily need a cron job.
> >
> > You can use RollingFileAppender to limit the max file size, and number of
> > log files to keep around.
> >
> > Check out conf/log4j.properties.
> > Enis
> >
> >
> > On Tue, Mar 18, 2014 at 7:22 AM, haosdent <haosd...@gmail.com> wrote:
> >
> > > Yep, I use the INFO level. Let me think about this later. If I find a
> > > better way, I will open an issue and record it. Thanks for your great
> > > help. @tedyu
> > >
> > >
> > > On Tue, Mar 18, 2014 at 9:49 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> > >
> > > > If the log grows so fast that disk space will be exhausted, verbosity
> > > > should be lowered.
> > > >
> > > > Do you turn on DEBUG logging ?
> > > >
> > > > Cheers
> > > >
> > > > On Mar 18, 2014, at 6:08 AM, haosdent <haosd...@gmail.com> wrote:
> > > >
> > > > > Thanks for your reply. DailyRollingFileAppender and a cron job could
> > > > > work in the normal scenario. But sometimes the log grows too fast, or
> > > > > disk space may be used by other applications. Is there a way to make
> > > > > logging more "smart" and choose a policy according to the current
> > > > > disk space?
> > > > >
> > > > >
> > > > > On Tue, Mar 18, 2014 at 8:49 PM, Ted Yu <yuzhih...@gmail.com>
> wrote:
> > > > >
> > > > >> Can you utilize
> > > > >> http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html ?
> > > > >>
> > > > >> And have a cron job clean up old logs?
> > > > >>
> > > > >> Cheers
> > > > >>
> > > > >> On Mar 18, 2014, at 5:29 AM, haosdent <haosd...@gmail.com> wrote:
> > > > >>
> > > > >>> Sometimes a call to Log.xxx cannot return if the disk partition of
> > > > >>> the log path is full, and HBase hangs because of this. So I wonder
> > > > >>> if there is a better way to handle too much logging. For example,
> > > > >>> through a configuration item in hbase-site.xml, we could delete old
> > > > >>> logs periodically, or delete old logs when the disk does not have
> > > > >>> enough space.
> > > > >>>
> > > > >>> I think having HBase hang when disk space runs out is unacceptable.
> > > > >>> Looking forward to your ideas. Thanks in advance.
> > > > >>>
> > > > >>> --
> > > > >>> Best Regards,
> > > > >>> Haosdent Huang
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best Regards,
> > > > > Haosdent Huang
> > > >
> > >
> > >
> > >
> > > --
> > > Best Regards,
> > > Haosdent Huang
> > >
> >
>



-- 
Best Regards,
Haosdent Huang
