Hey,

I'll do that at the next deployment --> on Thursday... 

So far I'm trying an executor for Tomcat. From what I've read, when the 
connectors use an executor, idle threads are forced to be closed..
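For reference, a minimal sketch of what that could look like in server.xml. This assumes a Tomcat 6/7-style configuration; the pool name and the sizes below are illustrative assumptions, not actual values from this setup:

```xml
<!-- Sketch: a shared thread pool. Idle threads above minSpareThreads
     are stopped after maxIdleTime milliseconds. -->
<Executor name="tomcatThreadPool"
          namePrefix="catalina-exec-"
          maxThreads="150"
          minSpareThreads="4"
          maxIdleTime="60000"/>

<!-- A connector opts in by referencing the pool: -->
<Connector port="7009" protocol="AJP/1.3"
           executor="tomcatThreadPool"
           redirectPort="8443"/>
```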


Kind regards
David Kumar  
Software Developer, B. Sc.
Infotech - Interaktiv Department 
TELESTAR-DIGITAL GmbH
Am Weiher 14
D-56766 Ulmen

Tel.: + 49 (0) 6592 / 712 -2826
Tel.: + 49 (0) 2676 / 9520 -183

Fax: + 49 (0) 6592 / 712 -2829

http://www.telestar.de/




-----Original Message-----
From: André Warnier [mailto:a...@ice-sa.com] 
Sent: Friday, 18 January 2013 11:10
To: Tomcat Users List
Subject: Re: AW: AW: ( ajp on 7009 and 9009 not afs3-rmtsys): connections kept 
open

David Kumar wrote:
> Hey André,
> 
> are you talking about running System.gc()? 
Yes.

> That should be possible..
> 
> 
> 
> 
> 
> -----Original Message-----
> From: André Warnier [mailto:a...@ice-sa.com] 
> Sent: Friday, 18 January 2013 10:07
> To: Tomcat Users List
> Subject: Re: AW: ( ajp on 7009 and 9009 not afs3-rmtsys): connections kept 
> open
> 
> David,
>   (and sorry for top-posting here)
> 
> just to verify something.
> Can you trigger a Major Garbage Collection at the Tomcat JVM level, at a 
> moment when you 
> have all these connections in CLOSE_WAIT, and see if they disappear after the 
> GC ?
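A tiny stand-alone illustration of the idea being tested here: an object that is no longer referenced — and, by extension, anything its cleanup would release, such as an abandoned socket's file descriptor — goes away once a collection actually runs. `System.gc()` is only a request, hence the retry loop. On a live Tomcat you would instead trigger this from outside, e.g. with jconsole's "Perform GC" button against the Tomcat JVM.

```java
import java.lang.ref.WeakReference;

public class GcProbe {
    /** Returns true once a forced GC has cleared an unreferenced object. */
    public static boolean weakRefClearedAfterGc() {
        WeakReference<Object> ref = new WeakReference<>(new Object());
        // System.gc() is only a hint to the JVM, so retry briefly.
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        System.out.println("cleared after GC: " + weakRefClearedAfterGc());
    }
}
```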
> 
> If yes, it may give a good clue about where all these CLOSE_WAITs are coming 
> from.
> 
> 
> David Kumar wrote:
>> Just read this email... :-)
>>
>> I figured out we are not using an executor with our connectors... 
>>
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: David Kumar [mailto:dku...@telestar.de] 
>> Sent: Friday, 18 January 2013 09:11
>> To: Tomcat Users List
>> Subject: AW: ( ajp on 7009 and 9009 not afs3-rmtsys): connections kept open 
>>
>> Here you are, with the attachment :-)
>>
>>
>> By the way, in mod_jk.log I found a few entries like 
>> [Thu Jan 17 23:00:08 2013] [11196:140336689317632] [error] 
>> ajp_get_reply::jk_ajp_common.c (2055): (tomcat2) Tomcat is down or refused 
>> connection. No response has been sent to the client (yet)
>> [Thu Jan 17 23:00:08 2013] [11196:140336689317632] [error] 
>> ajp_service::jk_ajp_common.c (2559): (tomcat2) connecting to tomcat failed.
>>
>>
>> but really just a few of them...
>>
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: David Kumar 
>> Sent: Friday, 18 January 2013 09:08
>> To: 'Tomcat Users List'
>> Subject: ( ajp on 7009 and 9009 not afs3-rmtsys): connections kept open 
>>
>> Hey,
>>
>> Thanks for the reply. I understood your point about the Apache 
>> configuration. Since our problem occurred again yesterday and there were no 
>> errors in the Apache logs, I'd say that is not the main problem; I'll check 
>> it once my main problem is solved.. :-)
>>
>>
>>
>> I agree with you about the service being reported incorrectly. It just shows 
>> up as afs3 because that service uses port 7009 by default, but I'm using 
>> 7009 and 9009 for AJP.
>>
>>
>> So doesn't this mean there is a connection problem between my Apache and 
>> the Tomcats?
>>
>> You're right: both webapps do the same thing and are configured identically 
>> apart from the ports.
>>
>> I'm using more than one database, but all of them are accessed through a 
>> connection pool. If there were a bug, I think I would have found errors in 
>> my logs, such as "no free connection". As there are no such log entries, I'd 
>> say my database connections are being handled as they should be. 
>>
>> Basically, two services run on each Tomcat. One is an Axis2 project: our CRM 
>> posts customer data to this webapp, and the data is persisted to a database. 
>> Depending on the information provided by the CRM, Axis sends an email.
>>
>> The second one is basically a cache for our websites. We have a PIM with all 
>> our product data; this app gathers the data from the PIM and a CMS and 
>> merges the information so that it can be displayed. All of this data is held 
>> in different "cache objects". Some communication with our ERP and some 
>> databases also goes through this app. 
>>
>> This second app is a REST service: information is posted to it as POST or 
>> GET requests, and the responses are mostly JSON objects. 
>>
>> Whenever one webapp reloads itself (automatically or manually), the result 
>> is posted to the other Tomcat/webapp as a serialized object, so the other 
>> one does not need to reload itself.
>>
>> I can't say how many SMB files there are; it depends on other factors, so 
>> it is dynamic.
>>
>> Attached you can find a list printed by lsof.
>>
>> There you can see a really strange thing: yesterday only tomcat2 had the 
>> problem with too many open files, while a few days earlier it was only 
>> tomcat1.
>>
>> Now let me answer your questions:
>>
>> 1. That is hard to say; I guess I have to do some more investigation on our 
>> logfiles.
>>
>> 2. / 3.  Here is my httpd.conf:
>> <IfModule mpm_worker_module>
>>     ThreadLimit          25
>>     StartServers          2
>>     MaxClients          150
>>     MinSpareThreads      25
>>     MaxSpareThreads      75
>>     ThreadsPerChild      25
>>     MaxRequestsPerChild 4000
>> </IfModule>
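With the worker MPM, these settings mean httpd tops out at MaxClients = 150 concurrent requests, spread over MaxClients / ThreadsPerChild child processes. A quick sanity check of that arithmetic, with the values copied from the config above:

```java
public class WorkerMpmMath {
    /** Worker-MPM child processes needed to serve MaxClients requests. */
    public static int peakProcesses(int maxClients, int threadsPerChild) {
        return maxClients / threadsPerChild;
    }

    public static void main(String[] args) {
        // MaxClients 150, ThreadsPerChild 25 from the httpd.conf above.
        System.out.println("child processes at peak: " + peakProcesses(150, 25));
    }
}
```

Note that those 150 httpd slots sit in front of two Tomcats whose connectors default to maxThreads="200" each, so in this configuration httpd saturates before Tomcat does.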
>>
>> We are using the worker MPM... 
>>
>> And here are our tomcat connectors again:
>> tomcat1:
>>
>> <Connector port="7080" protocol="HTTP/1.1" connectionTimeout="20000" 
>> redirectPort="8443"/>
>>
>>
>> <Connector port="7009" protocol="AJP/1.3" redirectPort="8443"/>
>>
>>
>> tomcat2:
>> <Connector port="9080" protocol="HTTP/1.1" connectionTimeout="20000" 
>> redirectPort="9443"/>
>>
>> <Connector port="9009" protocol="AJP/1.3" redirectPort="9443"/>
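One detail worth noting in these snippets: unlike the HTTP connectors, the AJP connectors set no connectionTimeout, and for AJP it defaults to infinite, so idle httpd-to-Tomcat connections are never closed from the Tomcat side. A sketch of what adding one might look like (the value is an illustrative assumption; if mod_jk's connection_pool_timeout is set in workers.properties, it is in seconds and should be kept consistent with this):

```xml
<!-- Sketch: close AJP connections that have been idle for 10 minutes.
     600000 ms here would pair with connection_pool_timeout=600 in mod_jk. -->
<Connector port="7009" protocol="AJP/1.3"
           connectionTimeout="600000"
           redirectPort="8443"/>
```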
>>
>>
>> Okay, we are not using an executor.. I will check that.. 
>>
>> You probably noticed my copy-and-paste error: I copied some comments out of 
>> our server config --> sorry again.
>>
>> 4. We are using one.
>> 5. Via a multipart message sent to the other Tomcat.
>> 6. I don't think so, also because the connections are kept open on our AJP 
>> ports.
>>
>> I know that "CLOSE_WAIT" means the remote side has closed and our 
>> application still has to close its end, but I'm wondering why that never 
>> happens.. 
>>
>>
>> Thanks again
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Christopher Schultz [mailto:ch...@christopherschultz.net] 
>> Sent: Thursday, 17 January 2013 18:38
>> To: Tomcat Users List
>> Subject: Re: AW: AW: afs3-rmtsys: connections kept open
>>
>> David,
>>
>> On 1/17/13 1:49 AM, David Kumar wrote:
>>> I just checked /var/logs/apache2/error.logs. And found following
>>> errors:
>>>
>>> [Wed Jan 16 15:14:46 2013] [error] server is within MinSpareThreads
>>> of MaxClients, consider raising the MaxClients setting [Wed Jan 16
>>> 15:14:56 2013] [error] server reached MaxClients setting, consider
>>> raising the MaxClients setting
>> So you are maxing-out your connections: you are experiencing enough
>> load that your configuration cannot handle any more connections:
>> requests are being queued by the TCP/IP stack and some requests may be
>> rejected entirely depending upon the queue length of the socket.
>>
>> The first question to ask yourself is whether or not your hardware can
>> take more than you have it configured to accept. For instance, if your
>> load average, memory usage, and response time are all reasonable, then
>> you could probably afford to raise your MaxClients setting in httpd.
>>
>> Note that the above has almost nothing to do with Tomcat: it only has
>> to do with Apache httpd.
>>
>>> Yesterday my problem occurred about the same time.
>> So, the problem is that Tomcat cannot handle your peak load due to a
>> file handle limitation. IIRC, your current file handle limit for the
>> Tomcat process is 4096.
>>
>>> I'm checking every five minutes how many open files there are:
>>>
>>> count open files started: 01-16-2013_15:10: Count: 775 count open
>>> files started: 01-16-2013_15:15: Count: 1092
>> Okay. lsof will help you determine how many of those are "real" files
>> versus sockets. Limiting socket usage might be somewhat easier
>> depending upon what your application actually does.
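A quick way to see that split (a sketch: $PID should be the Tomcat pid, with the current shell standing in here, and the lsof line assumes lsof is installed):

```shell
# Count open file descriptors for a process via /proc (Linux):
PID=$$                        # stand-in; use the Tomcat pid in practice
ls /proc/"$PID"/fd | wc -l

# With lsof, group the handles by TYPE (REG = regular files,
# IPv4/IPv6 = sockets, CHR = character devices, ...):
# lsof -p "$PID" | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn
```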
>>
>>> But maybe the afs3 connection causing the Apache error?
>> afs3 is a red herring: you are using port 7009 for AJP communication
>> between httpd and Tomcat and it's being reported as afs3. This has
>> nothing to do with afs3 unless you know for a fact that your web
>> application uses that protocol for something. I don't see any evidence
>> that afs3 is related to your environment in the slightest. I do see
>> every indication that you are using port 7009 yourself for AJP so
>> let's assume that's the truth.
>>
>> Let's recap what your webapp(s) actually do to see if we can't figure
>> out where all your file handles are being used. I'll assume that each
>> Tomcat is configured (reasonably) identically, other than port numbers
>> and such. I'll also assume that you are running the same webapp using
>> the same (virtually) identical configuration and that nothing
>> pathological is happening (like one process totally going crazy and
>> making thousands of socket connections due to an application bug).
>>
>> First, all processes need access to stdin, stdout, stderr: that's 3
>> file handles. Plus all shared libraries required to get the process
>> and JVM started. Plus everything Java needs. Depending on the OS,
>> that's about 30 or so to begin with. Then, Tomcat uses /dev/random (or
>> /dev/urandom) plus it needs to load all of its own libraries from JAR
>> files. There are about 25 of them, and they generally stay open. So,
>> we're up to about 55 file handles. Don't worry: we won't be counting
>> these things one-at-a-time for long. Next, Tomcat has two <Connector>s
>> defined with default connection sizes. At peak load, they will both be
>> maxed-out at 200 connections each, for a total of 402 file handles
>> ((1 bind file handle + 200 connection file handles) * 2 connectors).
>> So, we're up to 457.
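That running count can be written out as a small sanity check (all the numbers are the approximations from the paragraph above, not measurements):

```java
public class FdBudget {
    /** Rough per-process fd baseline as counted in the text above. */
    public static int baseline() {
        int processBasics = 30;     // stdio + shared libs + JVM, approx.
        int tomcatJars = 25;        // Tomcat's own JARs, approx.
        int connectors = 2;         // one HTTP, one AJP
        int perConnector = 1 + 200; // bind socket + default 200 connections
        return processBasics + tomcatJars + connectors * perConnector;
    }

    public static void main(String[] args) {
        System.out.println("baseline fd estimate: " + baseline());
    }
}
```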
>>
>> Now, onto your web application. You have to count the number of JAR
>> files that your web application provides: each one of those likely
>> consumes another file handle that will stay open. Does your webapp use
>> a database? If so, do you use a connection pool? How big is the
>> connection pool? Do you have any leaks? If you use a connection pool
>> and have no leaks, then you can add 'maxActive' file handles to our
>> running count. If you don't use a connection pool, then you can add
>> 400 file handles to your count, because any incoming request on either
>> of those two connectors could result in a database connection. (I
>> highly recommend using a connection pool if you aren't already).
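For completeness, a sketch of the pooled case as a JNDI DataSource in context.xml, which Tomcat's bundled Commons DBCP backs. The JNDI name, driver, URL, and credentials below are placeholders; maxActive is the cap that bounds the fd count:

```xml
<!-- Sketch: a pooled DataSource capped at 50 connections, so database
     fd usage is bounded by maxActive instead of by request volume. -->
<Resource name="jdbc/appDB"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost:3306/app"
          username="app"
          password="secret"
          maxActive="50"
          maxIdle="10"
          maxWait="10000"/>
```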
>>
>> Next, you said this:
>>
>>> Both of the tomcats are "synchronising" them self. The send some 
>>> serialized objects via http to each other.
>> So the webapps make requests to each other? How? Is there a limit to
>> the number of connections directly from one Tomcat to another? If not,
>> then you can add another 400 file handles because any incoming
>> connection could trigger an HTTP connection to the other Tomcat. (What
>> happens if an incoming client connection causes a connection to the
>> other Tomcat... will that Tomcat ever call-back to the first one and
>> set-up a communication storm?).
>>
>>> And both of them getting some file from SMB shares.
>> How many files? Every file you open consumes a file handle. If you
>> close the file, you can reduce your fd footprint, but if you keep lots
>> of files open...
>>
>> If you have a dbcp with size=50 and you limit your cross-Tomcat
>> connections to, say another 50 and your webapp uses 50 JAR files then
>> you are looking at 600 or so file handles required to run your webapp
>> under peak load, not including files that must be opened to satisfy a
>> particular request.
>>
>> So the question is: where are all your fds going? Use lsof to
>> determine what they are being used for.
>>
>> Some suggestions:
>>
>> 1. Consider the number of connections you actually need to be able to
>> handle: for both connectors. Maybe you don't need 200 possible
>> connections for your HTTP connector.
>>
>> 2. Make sure your MaxClients in httpd makes sense with what
>> you've got in Tomcat's AJP connector: you want to make sure that you
>> have enough connections available from httpd->Tomcat that you aren't
>> making users wait. If you're using prefork MPM that means that
>> MaxClients should be the same as your <Connector>'s maxThreads setting
>> (or, better yet, use an <Executor>).
>>
>> 3. Use an <Executor>. Right now, you might allocate up to 400 threads
>> to handle connections from both AJP and HTTP. Maybe you don't need
>> that. You can share request-processing threads by using an <Executor>
>> and have both connectors share the same pool.
>>
>> 4. Use a DBCP. Just in case you aren't.
>>
>> 5. Check to see how you are communicating Tomcat-to-Tomcat: you may
>> have a problem where too many connections are being opened.
>>
>> 6. Check to make sure you don't have any resource leaks: JDBC
>> connections that aren't closed, files not being closed, etc. etc.
>> Check to make sure you are closing files that don't need to be open
>> after they are read.
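On the files side of point 6, one pattern that makes leaks structurally impossible is Java 7's try-with-resources: the handle is closed when the block exits, even on an exception. A minimal sketch (file name and content are illustrative):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseDemo {
    /** Writes then reads back one line, closing each handle deterministically. */
    public static String roundTrip(String text) {
        try {
            Path tmp = Files.createTempFile("closedemo", ".txt");
            try (FileWriter w = new FileWriter(tmp.toFile())) {
                w.write(text);       // writer closed when this block exits
            }
            try (BufferedReader r =
                     new BufferedReader(new FileReader(tmp.toFile()))) {
                return r.readLine(); // reader closed too, even on exceptions
            } finally {
                Files.deleteIfExists(tmp);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("no leaked fds")); // prints "no leaked fds"
    }
}
```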
>>
>>> But I can't imagine that might be the problem? I'm wondering why
>>> the tcp connections with state "CLOSE_WAIT" doesn't get closed.
>> http://en.wikipedia.org/wiki/Transmission_Control_Protocol
>>
>> - -chris
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
>>
>>
>>
> 
> 
> 
> 


