> Another issue that I just remembered now is that currently there's no way
> to roll empty files on date/time boundaries. Rolling only happens when a
> log event is generated. If there's no log event, files won't be rolled.
> Thus we should investigate whether we should implement the appender as a
> buffering async appender that does the rolling in its async work cycles
> and whose work cycles do not depend on the availability of log events.

 
Making the appender async is a major change and it has ramifications that
might prove problematic, like unflushed buffers.
And if we make the buffer of size 1, then it's really synchronous again,
isn't it?


Does a 'roll' create a new file? If so, and there are no log events, won't
the folder be polluted with empty files?
If a 'roll' does not create a file, then we can make the decision to roll
or not as part of the logging process.

What's wrong with rolling on logging events?
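
To make that concrete, here is a minimal sketch of keeping the roll decision
inside the synchronous append path; all names and types here are hypothetical
and not existing log4net API:

using System;
using System.IO;

// Minimal sketch only (hypothetical names, not an existing log4net API): the
// roll decision stays inside the synchronous append path, so no async worker
// is needed.
public class SketchRollingAppender
{
    private readonly Func<FileInfo, bool> _shouldRoll; // the "rolling condition"
    private readonly Action<FileInfo> _roll;           // the "rolling strategy"
    private readonly string _logfile;

    public SketchRollingAppender(string logfile, Func<FileInfo, bool> shouldRoll, Action<FileInfo> roll)
    {
        _logfile = logfile;
        _shouldRoll = shouldRoll;
        _roll = roll;
    }

    // Invoked once per log event; rolling is decided here, right before the write.
    public void Append(string renderedEvent)
    {
        var file = new FileInfo(_logfile);
        if (file.Exists && _shouldRoll(file))
        {
            _roll(file);
        }
        File.AppendAllText(_logfile, renderedEvent + Environment.NewLine);
    }
}

With that shape, a size-based roll would just be a matter of passing something
like f => f.Length > 10L * 1024 * 1024 as the condition.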


> Further, the rolling file appender should probably receive a persistent
> storage where it can keep the set of files it has rolled in the past.
> This way we eliminate the dark magic behind detecting rolled files. It
> also allows the user to modify or mix the rolling strategies and/or the
> rolling filename configuration from one instance to the next. The
> persistence will have to include a set of filenames. It would probably be
> nice to allow people to configure a relative filepath that will be
> stripped from the filenames. Otherwise we would break the usecase where a
> user moves an application from, let's say, drive C: to drive D: along
> with all logfiles, because the rolled filenames would no longer be found
> and some files would become zombies that pollute the drive. Thus a
> persistable rolling history class could look like:



> public class RollingHistory {
>
>   public List<string> Files { get; }
>
> }

I'm up for a clean way to roll, but I think that a logging framework that
creates files other than logs is odd. And adding a dependency on some
persistent storage provider is also odd.

I think we can assume that applications that need to roll on size or date
are probably long running, which means we can keep this information in
memory. If the application crashes or restarts then yes, the old files will
remain on disk, but that should be solved by a local policy because it is
the exception, not the rule.
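
A minimal sketch of that in-memory bookkeeping could be as simple as the
following (hypothetical names, not part of the patch):

using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch only: keep the rolled files of the current process in
// memory instead of persisting them. After a crash or restart the list starts
// out empty and leftover files from earlier runs are left to a local cleanup
// policy.
public class InMemoryRollingHistory
{
    private readonly List<string> _files = new List<string>();

    public IReadOnlyList<string> Files { get { return _files; } }

    public int Count { get { return _files.Count; } }

    public void Add(string rolledFile) { _files.Add(rolledFile); }

    // Files are added in rolling order, so the first entry is the oldest one.
    public string Oldest() { return _files.FirstOrDefault(); }

    public void Remove(string rolledFile) { _files.Remove(rolledFile); }
}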
 

> To exchange information between the rolling strategies, rolling
> conditions and the rolling file appender itself we should probably use a
> data class. The name of it could be "RollingContext" and basically it
> could be implemented like:



> public class RollingContext {
>
>   public RollingHistory RollingHistory { get; }
>
>   public string Logfile { get; }
>
>   public long CurrentLogfileSize { get; }
>
> }

> That means that a rolling strategy could be an interface like:



> public interface RollingStrategy {
>
>   void DoRolling(RollingContext context);
>
> }

> which either throws exceptions on failure or returns true|false to notify
> the caller that something went wrong, and logs the errors into the
> log4net internal debug log.
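
As an illustration only, a concrete rolling strategy along these lines might
look roughly like the following sketch; all type names here are hypothetical
stand-ins, not part of the proposal or of log4net:

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical sketch of one concrete strategy: rename the active logfile with
// a timestamp suffix and record it in the history. The context type below is
// only a minimal stand-in for the RollingContext/RollingHistory shapes quoted
// above.
public class RollingContextSketch
{
    public string Logfile { get; set; }
    public List<string> RolledFiles { get; private set; }

    public RollingContextSketch() { RolledFiles = new List<string>(); }
}

public class TimestampRenameStrategySketch
{
    // Follows the "throw on failure" variant; the caller is expected to catch
    // the exception and write it to the log4net internal debug log.
    public void DoRolling(RollingContextSketch context)
    {
        string rolledName = context.Logfile + "." + DateTime.Now.ToString("yyyyMMdd-HHmmss");
        File.Move(context.Logfile, rolledName);  // throws if the file is locked or missing
        context.RolledFiles.Add(rolledName);     // remember it for later disposal
    }
}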



> The next topic is the rolling condition. The rolling condition decides if
> rolling should be done right now. Thus its interface should be something
> like:



> public interface IRollingCondition {
>
>   bool IsRollingConditionMet(RollingContext context);
>
> }

> Concrete rolling conditions could be:



> * Roll if the filesize exceeds a specific limit (see the sketch after
>   this list)
>
> * Roll if a specific date/time condition is met
>   ==> this is already kind of supported with an implementation of a
>   "cron"-like syntax
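
As a minimal, hypothetical sketch of the first condition in that list,
simplified to take the current file size directly rather than a full
RollingContext:

// Hypothetical sketch of the size-based condition from the list above,
// simplified to take the current logfile size as a plain parameter.
public class MaxFileSizeRollingConditionSketch
{
    public long MaxFileSizeBytes { get; set; }

    public bool IsRollingConditionMet(long currentLogfileSize)
    {
        return currentLogfileSize >= MaxFileSizeBytes;
    }
}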



> Making the rolling condition pluggable would make it easy for others to
> invent their own rolling conditions. For example, an implementation of
> IRollingCondition like:



> public class ManualRollingCondition : IRollingCondition {
>
>   public static bool RollOnNextCycle = false;
>
>   public bool IsRollingConditionMet(RollingContext context) {
>
>     if (RollOnNextCycle) {
>
>       // reset roll on next cycle
>       RollOnNextCycle = false;
>       return true;
>
>     }
>
>     return false;
>
>   }
>
> }

> could let the application that uses log4net decide when a file should be
> rolled by simply invoking:



> ManualRollingCondition.RollOnNextCycle=true;



> And now we come to the disposal of old files. Since we have a persistent
> set of old files, we should pass it into the disposing logic, and the
> interface to it could look like:



> public interface IDisposeStrategy {
>
>   bool DoDisposal(RollingContext context);
>
> }

> And a concrete implementation of it could be:



> public class LimitNumberOfFilesDisposeStrategy : IDisposeStrategy {
>
>   public int MaxNumberOfFiles { get; set; }
>
>   public bool DoDisposal(RollingContext context) {
>
>     // delete the oldest files until the history fits the limit
>     while (MaxNumberOfFiles < context.RollingHistory.Count) {
>
>       if (context.RollingHistory.Count == 0)
>         return true;
>
>       // delete the oldest file and persist the history;
>       // let it crash if someone has a logfile open and locked
>       // and let the invoker decide if he wants to retry later
>       try {
>         Delete(context.RollingHistory.Oldest());
>         context.RollingHistory.Remove(context.RollingHistory.Oldest());
>         context.RollingHistory.Persist();
>       }
>       catch {
>         // internal logging
>         return false;
>       }
>
>     }
>
>     return true;
>
>   }
>
> }



Why is the disposal of log files a different strategy? Shouldn't it be part
of the rolling process? Shouldn't the RollingStrategy.Roll method create and
delete the needed files? 
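
To make the question concrete, that alternative would roughly look like this
hypothetical sketch, where a single call both rolls and disposes:

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical sketch of the alternative raised above: one roll step that both
// archives the current logfile and deletes the oldest archives in the same
// call, instead of delegating the deletion to a separate IDisposeStrategy.
public class RollAndDisposeSketch
{
    public int MaxNumberOfFiles { get; set; }

    public void Roll(string logfile, List<string> rolledFiles)
    {
        string rolledName = logfile + "." + DateTime.Now.ToString("yyyyMMdd-HHmmss");
        File.Move(logfile, rolledName);
        rolledFiles.Add(rolledName);

        // disposal happens right here as part of rolling
        while (rolledFiles.Count > MaxNumberOfFiles)
        {
            File.Delete(rolledFiles[0]);
            rolledFiles.RemoveAt(0);
        }
    }
}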
 



> Some of the ideas above are already implemented in the patch, but there
> are several things that are not yet finished.



> From: Dominik Psenner [mailto:dpsen...@gmail.com]
> Sent: Sunday, 11 August 2013 22:59
> To: Log4NET Dev
> Subject: Re: Creating a development environment for resolving LOG4NET-367



> This is something that wouldn't work right now:



> <appender>
>  <rollingfileappender>
>   <filename>C:/fancydirectory/%processid/%username/%year/%month/mylogfile.log</filename>
>   <locking>inter-process</locking>
>   <rollOn><date where="hour%3==0" /></rollOn>
>   <rollTo>C:/fancydirectory/%processid/%username/%year/%month/%filenumber/mylogfile.log</rollTo>
>   <dispose><files where="filenumber > 50" /></dispose>
>  </rollingfileappender>
> </appender>



> but we want it to work in the future. Of course the syntax may be
> different, but it should be easy enough to write down how the RFA has to
> behave. Right now there are too many fancy flicks and switches that do
> things magically.



> Cheers

I think that an XML hierarchy will be easier to parse than strings
containing conditions. Something like:

<rollOn><date hours='3' /></rollOn>
<dispose><files filenumber='50' /></dispose>

And we can provide some custom data to be passed back to a custom rolling
strategy as a string, and then you can parse it the way you want in the
strategy you've written.
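
As a rough, hypothetical sketch of how such a fragment could be parsed with
plain System.Xml (element, attribute and type names follow the example above
and are not an existing log4net configuration schema):

using System.Xml;

// Hypothetical sketch: map a <rollOn><date hours='3' /></rollOn> fragment
// onto a plain settings object.
public class RollOnSettingsSketch
{
    public int Hours { get; set; }
    public string CustomData { get; set; } // raw string handed to a custom strategy

    public static RollOnSettingsSketch Parse(XmlElement rollOnElement)
    {
        var settings = new RollOnSettingsSketch();

        XmlElement date = rollOnElement["date"];
        if (date != null && date.HasAttribute("hours"))
        {
            settings.Hours = int.Parse(date.GetAttribute("hours"));
        }

        // anything we do not understand is passed through verbatim so a custom
        // rolling strategy can parse it however it likes
        XmlElement custom = rollOnElement["custom"];
        if (custom != null)
        {
            settings.CustomData = custom.InnerText;
        }

        return settings;
    }
}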


> 2013/8/11 Dominik Psenner <dpsen...@gmail.com>

> See the inlines..



> 2013/8/10 d_k <mail...@gmail.com>

> Paperizing ideas sounds good. How does the log4net project handle
> requirements? JIRA?



> Yes



> Are there any requirements other than the ones under the 'supercedes'
> list?



> Yes and no. It should be a rolling file appender that handles things
> smarter than the current one. If there are ideas, they are welcome.



> It should roll files (obvious) on "rolling conditions" with a "rolling
> strategy" and dispose old files with a "dispose strategy".



> The rolling condition, rolling strategy and dispose strategy should be
> pluggable and configurable (i.e. they are interfaces that can be
> implemented in different ways).



> It should support dynamic filenames that may change over time.



> It should support changes to the rolling strategy in respect of the
> disposal of old files (i.e. between process instances).



> It should support all currently supported locking strategies.



> For now I can't think of more.






> I'm afraid I don't have a 'vision' for the RFA-NG but I think we can take
> the patch under log4net-patches, make sure to fix the 'supercedes' list
> and off we go.

> I wouldn't want to spend time on a grand design because we can't tell how
> long it will take, and I think it is better to ship a less than perfect
> RFA-NG and fix it later on when we know what we want than to postpone the
> next release indefinitely.



> We have a RFA that does not do what we want. We don't need another one.
> If we reimplement it now and release it, we have another RFA with a
> public API we do not want to change - sounds familiar.




> BTW, no offense, but is there a chance it will be easier to fix the
> current RFA than to rewrite it?



> There are things that are unfixable with the current implementation
> without breaking the public API, and that's something we don't want.




> Is there a way to measure a given implementation of any RFA to determine
> if it's good enough?



> It's good enough when it boils eggs and toasts my bread .. just joking.
> ;-) It's good enough when it does what we want from it.





> On Fri, Aug 9, 2013 at 8:00 PM, Dominik Psenner <dpsen...@gmail.com> wrote:

> Howdie,



> the patch there is mainly a first implementation showing the road we would
> like to go. There are many things to be discussed and I would start with
> paperizing the ideas before starting the implementation. The
> reimplementation should solve all known current issues of the rolling file
> appender.



> The patches repository is nothing else than a repository that holds
> patches that can be applied to the log4net source, which is located at
> apache.org's svn. It's there to materialize the ideas without requiring
> write permissions to the actual svn and can be used as a sandbox to play
> with ideas.



> I started off with an implementation as a patch and wanted to improve that
> patch until it is stable enough to join the log4net appenders in svn.



> Cheers



> 2013/8/9 d_k <mail...@gmail.com>

> Hi,

> So I think I got a working development environment for log4net.

> I installed mercurial and forked the log4net-crew repository
> (https://bitbucket.org/NachbarsLumpi/log4net-crew) and the log4net-patches
> repository (https://bitbucket.org/NachbarsLumpi/log4net-patches) and
> applied the RFA-NG patch with 'hg import'.

> Should I prefer to use http://svn.apache.org/viewvc/logging/log4net/trunk
> over https://bitbucket.org/NachbarsLumpi/log4net-crew? Will svn and
> mercurial coexist peacefully?

> Do I need anything else to start developing?

> Should I create my own RFA-NG2 patch? Or perhaps RFA-NG-NNN for each bug
> in the supercedes list of LOG4NET-367
> (https://issues.apache.org/jira/browse/LOG4NET-367)?

> Are there any common pitfalls I should avoid?








 
