Hi Darren,

An idea would be to extract the DoAppend function into a separate class
(DoActionHelper)

Then, in the DoAction of the AppenderSkeleton, check the active thread's
named data slots to see if you already have an instance of your class for
that specific thread. If you do, reuse it for that thread. If not, create one
with the parameters from your active AppenderSkeleton and register it in the
thread's local storage (AllocateNamedDataSlot and SetData on Thread).
Then you delegate the work to your helper class, but you can already do that
outside the lock, allowing DoAppend to do its job in a multithreaded way.
You will have to move your recursive guard into the helper class, but you can
remove all the locks.
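A rough sketch of what I mean, assuming a hypothetical DoActionHelper and
slot name (the appender class, slot name, and helper members are illustrative,
not log4net API):

```csharp
using System;
using System.Threading;

// Hypothetical helper that owns the per-thread append logic.
// The recursive guard moves in here; since each thread has its own
// helper instance, no lock is needed.
public class DoActionHelper
{
    private bool inAppend; // recursive guard, thread-confined

    public void Append(object loggingEvent)
    {
        if (inAppend) return;   // prevent re-entrant logging
        inAppend = true;
        try
        {
            // ... format and write the event ...
        }
        finally
        {
            inAppend = false;
        }
    }
}

public class MyAppender // stands in for your AppenderSkeleton subclass
{
    // Named data slots are allocated once; each thread sees its own value.
    private static readonly LocalDataStoreSlot Slot =
        Thread.AllocateNamedDataSlot("MyAppender.Helper");

    protected void DoAction(object loggingEvent)
    {
        // Look up this thread's helper; create and register one on first use.
        DoActionHelper helper = (DoActionHelper)Thread.GetData(Slot);
        if (helper == null)
        {
            helper = new DoActionHelper(); // pass your appender's parameters here
            Thread.SetData(Slot, helper);
        }
        // No lock: each thread only ever touches its own helper.
        helper.Append(loggingEvent);
    }
}
```

Each request thread then appends through its own helper, so the only shared
state left is whatever the helpers write to (e.g. the database connection
pool).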

Good luck,
Corneliu.



-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, 2 June 2005 10:36 PM
To: log4net-user@logging.apache.org
Subject: RE: RE:ASP.NET Blocking Problem

I have verified that I have one instance of the AppenderSkeleton and
that every request I have is contending for the lock in DoAppend.
With upwards of 300 clients, even implementing buffering, requests would pile
up behind this lock (especially if I had more than one appender).

My concern with implementing buffering is that if my appdomain dies I
will lose the contents of the buffer.  I've seen the app domain die in
testing and can't afford to lose the information in my live application.
I also have to guarantee that this data has reached the database before
I move on to actually do something (messaging isn't an option).

I'm coming to this from a VB/MTS world where you would get your own copy
of the object and the only bottleneck was the number of connections you
could have.  I realise that this isn't how it's done in the .net world
but I'm struggling to see how I could get round this issue as it does
appear to be a choke point.

-----Original Message-----
From: Ron Grabowski [mailto:[EMAIL PROTECTED]
Sent: 02 June 2005 13:19
To: Log4NET User
Subject: RE: RE:ASP.NET Blocking Problem

According to this website:

 http://www.connectionstrings.com/

Sql Server allows for Max Pool Size and Min Pool Size to be specified in
the connection string. Have you verified that only a single connection
is being used? Have you verified that log messages are being lost when
buffering is on? When you turn on buffering, does your database
throughput increase? Perhaps your database is not fast enough to handle
the number of inserts per second that it is receiving.
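For reference, a SQL Server connection string with explicit pool sizing looks
something like this (server, database, and sizes are illustrative):

```
Server=myServer;Database=myDb;Integrated Security=SSPI;Min Pool Size=5;Max Pool Size=100;
```

With pooling enabled, concurrent inserts from many threads can each draw a
connection from the pool up to Max Pool Size rather than serializing on one.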

If you had to write your own code for connecting and inserting records
into the database, how would you write it differently from what log4net
is doing? How would you handle inserting many hundreds of records at a
time without buffering? What do you think log4net can do that it's not
doing? The buffering mode was designed to handle the type of case you're
describing.

--

------------------------------------------------------------------------------
Halifax plc, Registered in England No. 2367076.  Registered Office: Trinity
Road, Halifax, West Yorkshire HX1 2RG. Authorised and regulated by the
Financial Services Authority.  Represents only the Halifax Financial
Services Marketing Group for the purposes of advising on and selling life
assurance, pensions and collective investment scheme business.
==============================================================================



--
No virus found in this incoming message.
Checked by AVG Anti-Virus.
Version: 7.0.322 / Virus Database: 267.4.1 - Release Date: 2/06/2005



