According to this website:

 http://www.connectionstrings.com/

SQL Server allows for Max Pool Size and Min Pool Size to be specified
in the connection string. Have you verified that only a single
connection is being used? Have you verified that log messages are being
lost when buffering is on? When you turn on buffering, does your
database throughput increase? Perhaps your database is not fast enough
to handle the number of inserts per second that it is receiving. 
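
For example (a rough sketch with placeholder server, database, and
credential values), the pooling limits go directly in the connection
string:

    Server=myServer;Database=myDb;User ID=myUser;Password=myPassword;Min Pool Size=5;Max Pool Size=100;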

If you had to write your own code for connecting and inserting records
into the database, how would you write it differently from what log4net
is doing? How would you handle inserting many hundreds of records at a
time without buffering? What do you think log4net can do that it's not
doing? The buffering mode was designed to handle the type of case
you're describing. 
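
For comparison, a buffered version of the config posted below might look
something like this (same custom appender type, placeholder connection
details, and an arbitrary buffer size of 100):

    <appender name="xxAuditDBAppender"
              type="Logging.log4net.XXDBAppender, Logging.log4net, Version=1.0">
        <param name="ConnectionString"
               value="Server=xxx;Database=xxx;User ID=xxx;Password=xxx;" />
        <!-- collect up to 100 events and write them in a single batch -->
        <param name="BufferSize" value="100" />
    </appender>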

--- [EMAIL PROTECTED] wrote:

> The other thing I would add is that it seems strange to introduce this
> choke point.  My database access has x amount of concurrent connections
> available to it, however, because of this behaviour (and without the
> buffering on) I'm reducing myself to 1 connection, which doesn't seem
> very efficient to me.  Is this a by-product of the reliance on
> buffering?
> 
>   _____  
> 
> From: Darren Blackett [mailto:[EMAIL PROTECTED] 
> Sent: 01 June 2005 18:06
> To: log4net-user@logging.apache.org
> Subject: RE: RE:ASP.NET Blocking Problem
> 
> 
> 
>  
> 
> Sorry to reply from a different address but I'm at home now :-)
> 
>  
> 
> I'm logging to a SQL database using the standard SQL appender.  I can't
> use the buffering as I need to guarantee that the entry has been posted
> to the database before I move on to do the actual code (can't have the
> buffer being cleared halfway through due to the app domain dying).  I'll
> be doing some changes in the code to support this at some point in the
> future.
> 
>  
> 
> Appender config is...
> 
>  
> 
>     <appender name="xxAuditDBAppender"
>               type="Logging.log4net.XXDBAppender, Logging.log4net, Version=1.0">
>         <param name="ConnectionString"
>                value="Server=xxxxxxxxxxxxxx;Database=xxxxxxxxxxxx;User ID=xxxxxxxxxxxx;Password=xxxxxxxxxxxxxxxxx;" />
>         <param name="BufferSize" value="1" />
>     </appender>
> 
>  
> 
> The XXDBAppender just inherits from the sqlappender class and sets up
> some stored proc parameters.
> 
>  
> 
> The problem doesn't actually seem to be that the database call is taking
> a long time, as it isn't.  It's just that when you've got 300 threads
> coming in and all wanting to log, they pile up in the call as it goes
> through the appender code.
> 
>  
> 
> ------------------------------------------------------------------------
> 
> Could you post your <appender> node for us? What database are you
> logging to? 
>  
> According to the documentation for 1.2.9 beta, there is a property
> called bufferSize which writes statements to the database in batches:
>  
>  <bufferSize value="100" />
>  
> A similar property exists in 1.2.0 beta 8.
>  
> It's very common to declare your loggers like this:
>  
> private static readonly log4net.ILog log = log4net.LogManager.GetLogger(
>     System.Reflection.MethodBase.GetCurrentMethod().DeclaringType );
>  
> I don't think creating a new instance of a logger for each request is
> the most efficient way to do things.
>  
> --- [EMAIL PROTECTED] wrote:
>  
> > Hi,
> > 
> > I'm using log4net in an asp.net application to log page hits to a
> > database.  It's working really well, however, whilst recently doing
> > some performance testing of 200+ clients I discovered that I was
> > blocking, which was seriously hitting my performance (halving the
> > number of requests per second).  The block was in the lock in
> > AppenderSkeleton.DoAppend.  It would appear that all of my asp.net
> > requests are coming in and hitting the same instance of the logger,
> > which ultimately ends up waiting for the appenders to run through.
> > What is happening is that I'm creating a choke point around my
> > database calls which is seriously costing me.
> > 
> > I'm creating the logger in the standard fashion in an HTTP Module
> > (although not into a static instance), although if I understand the
> > code correctly I would always end up with the same logger objects as
> > the wrappermap in the logmanager is static.
> > 
> > My question is: is there any way of getting the logmanager to create
> > me a new instance of my loggers?  Or of not sharing the loggers with
> > every other thread in the appdomain?  I'm using quite an old version
> > of log4net (1.2.0 beta8), however, I can't see any massive difference
> > in the way the logmanager works in the latest version - however, if
> > the answer is, go and try the latest version, then this is something
> > I'll have to think about :).
> > 
> > Alternatively, have I done something really daft in the way I've
> > instantiated this?  Is there something in the config which would
> > solve my problem?
> > 
> > Any help gratefully received.
> > 
> > Cheers,
> > 
> > Darren
> 
>  
> 
> 
