e' is passed, IGNORE_HEADERS_SET && nelts == 0.) */
     new = (char **)apr_array_push(conf->ignore_headers);
-    (*new) = (char *)header;
+    (*new) = header;
>>> [EMAIL PROTECTED] 10/25/04 10:32 AM >>>
Jean-Jacques Clar wrote:
> T
Thanks for reporting the problem Norm,
The patch was submitted.
Jean-Jacques
>>> [EMAIL PROTECTED] 10/24/04 5:08 PM >>>
Greetings All,
Just trying a 'build' of current 2.1 CVS on a Windows machine for NetWare and get the following error...
Calling NWGNUmod_cach
Compiling cache_util.c
Compiling mod_cac
>>> [EMAIL PROTECTED] 10/07/04 11:14 AM >>>
--On Thursday, October 7, 2004 9:08 AM -0600 Jean-Jacques Clar <[EMAIL PROTECTED]> wrote:
>I *really* don't like the M prefix at all. I'd much prefer us to just spell
>the thing out: CacheMem* and CacheDisk* inst
All the directives for mod_mem_cache are prefixed with an "M".
The ones for mod_cache start with a "C".
Should all the disk_cache directives be prefixed with a "D" for
consistency and clarity?
I know it is going to break every user configuration, but before
the modules move out of experime
Norm,
If you can, just do a cut and paste of the define
from the Windows header into apr.hnw for today.
Tomorrow I will figure out the right thing to do for
NetWare.
Have a good day,
JJ
>>> [EMAIL PROTECTED] 09/23/04 4:18 PM >>>
NormW wrote:
> Compiling network_io/win32/sendrecv.c
> ### mwcc
Should the cleanup field be removed from the object structure in
mod_cache.h?
>>> [EMAIL PROTECTED] 09/17/04 9:03 AM >>>
stoddard 2004/09/17 08:03:08
Modified: modules/experimental Tag: APACHE_2_0_BRANCH mod_mem_cache.c
Log: eliminate cleanup bit in favor of managing the object exclusivel
+1
Tried different configurations in lab and changes are acting as expected.
Thanks to Greg and Bill S. for fixing the problem.
JJ
>>> [EMAIL PROTECTED] 09/16/04 11:39 AM >>>
Adds back obj->complete check in decrement_refcount. Makes changes recommended by Greg.
Oops, sorry, I was a few minutes slower than you, Greg.
The overall patch looks good and much cleaner.
I would love to move my office next to both of you...
I will put the new code on my MP box and make sure
it does not break.
Thanks,
JJ
>>> [EMAIL PROTECTED] 09/16/04 9:50 AM >>>
Bill Stoddard wrot
Are there any reasons why that part of decrement_refcount() is not
needed anymore?
@@ -267,31 +271,9 @@
 {
     cache_object_t *obj = (cache_object_t *) arg;
-    /* If obj->complete is not set, the cache update failed and the
-     * object needs to be removed from the cache then cleaned up.
-
>>> [EMAIL PROTECTED] 09/15/04 12:41 PM >>>
>I'm still trying to learn the complete design though. I see that
>memcache_cache_free() can return with OBJECT_CLEANUP_BIT set, and that
>decrement_refcount() will react to it and free the memory if it is registered as
>a cleanup. Will decrement_refco
What about calling memcache_cache_free() instead of the
apr_atomic_set()? The refcount must be more than 1 since we
hold the mutex. It should work just fine.
What do you think?
JJ
>>> [EMAIL PROTECTED] 09/15/04 8:26 AM >>>
Jean-Jacques Clar wrote:
> Should I then go ahead and c
I share your enjoyment.
Should I then go ahead and commit my patch to the 2.1 tree?
Thanks,
JJ
>>> [EMAIL PROTECTED] 09/14/04 3:41 PM >>>
Jean-Jacques Clar wrote:
> >Can you help me/us understand the following a little better?
> Let's look at all the cases where t
>Can you help me/us understand the following a little better? If it is possible
>for two threads to simultaneously have the same obj pointer and try to free
>the object, what prevents a seg fault* due to:
>- thread #1 gets a pointer to obj A, then loses its time slice due to the OS
>scheduler
This one is working as expected on my server. I tested most
of the paths and it looks fine. Added comments to the previous
patch. Would like some double-checking and feedback if possible.
The race condition is fixed, no performance hit like with
the mutex patch, and no memory leak, from what I c
>>> [EMAIL PROTECTED] 09/10/04 7:33 PM >>>
At 06:53 PM 9/10/2004, Jean-Jacques Clar wrote:
>I replaced the cleanup field with a bit set in refcount. This is
>done to prevent race conditions when refcount is accessed on two
>different threads/CPUS.
>
>+#def
The following patch is a rework of the refcount and cleanup
fields in mod_mem_cache.c. I replaced the cleanup field with a bit set
in refcount. This is done to prevent race conditions when refcount
is accessed on two different threads/CPUS. The complete description
was done on that mail thread e
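As a rough sketch of that idea (hypothetical names and a plain C11 atomic standing in for apr_atomic; the actual mod_mem_cache patch differs), keeping the cleanup flag in the same atomic word as the refcount means a single atomic decrement observes both at once:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define CLEANUP_BIT 0x80000000u  /* high bit marks "free when count hits 0" */
#define COUNT_MASK  0x7FFFFFFFu

/* hypothetical object; the real cache_object_t has many more fields */
typedef struct { _Atomic uint32_t refcount; int freed; } obj_t;

static void obj_ref(obj_t *o) {
    atomic_fetch_add(&o->refcount, 1);
}

/* Mark the object for cleanup; safe against concurrent ref/unref because
 * the flag lives in the same atomic word as the count. */
static void obj_mark_cleanup(obj_t *o) {
    atomic_fetch_or(&o->refcount, CLEANUP_BIT);
}

/* Drop a reference; exactly one thread sees the count part reach zero with
 * the cleanup bit set, so the object is freed exactly once. */
static void obj_unref(obj_t *o) {
    uint32_t old = atomic_fetch_sub(&o->refcount, 1);
    if ((old & COUNT_MASK) == 1 && (old & CLEANUP_BIT)) {
        o->freed = 1;  /* stand-in for free() */
    }
}
```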
>>> [EMAIL PROTECTED] 09/09/04 2:27 AM >>>
>>As far as performance, I currently don't see any slow down on my
>>box, but will run longer tests tomorrow morning.
>no worries about that here
Looking more closely at the implication of the submitted patch,
I don't think we want to lock decrement_refco
>That's bad already. At any time when there are n threads with a
>"handle" to a cache object, refcount ought to be n in order to be able
>to free it when refcount goes to zero.
>Unless:
>Condition 1) a mutex is held between the time that a thread gets a
>handle to the cache object and when the
>I find the possible need for the patch very disturbing; it implies to
>me a problem in the atomics code
I don't know.
I had the NetWare dec code reviewed by the people who
wrote/maintain our kernel; that is why I changed
apr_atomic_dec(), and from what I understand the
problem should be ou
I am using a set of files I got from VeriTest
http://www.etestinglabs.com/benchmarks/webbench/default.asp
The tar file includes about 6000 files with a total size of around 60 MB.
As you can see in the configuration, MCacheSize and
MCacheMaxObjectCount force cached objects to be ejected
often. This i
>It should not be possibe for two threads to atomically decrement the refcount on the same object to 0.
I think there is a small window in there where it is possible to have the
decrements happening on both CPUs one after the other, making that bug
possible in decrement_refcount() and memcache_cache_f
Testing 2.0 and 2.1 Head.
I am running a test that requires frequent ejections of cache entities:
max_cache_size and max_object_count are smaller than my sampling.
Two threads running concurrently on 2 CPUs.
Thread1(T1): an entry is ejected from the cache in cache_insert(),
resulting in a
I am seeing a double free happening on a removed
cache entry on my MP box.
It is a race condition that is happening because of
the way apr_atomic_dec is implemented on NetWare.
I should have a fix for it later today.
JJ
>I can make that change. Is this the only netware breakage?
Yes, this is the only one.
Thanks,
JJ
Should the type for refcount be apr_atomic_t instead of apr_uint32_t?
It does not build currently for NetWare.
JJ
>>> [EMAIL PROTECTED] 08/26/04 10:59 AM >>>
stoddard 2004/08/26 09:59:46
Index: mod_cache.h
=== RCS file: /home/
> Index: core.c
> + char *w = strsep(&p, ",");
strsep() seems to be platform dependent. That function does not
exist on NetWare, and I don't think it exists on Windows.
It should at least be made an APR function.
Here are the notes from the man page:
The strsep() function was in
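As a sketch of a portable fallback (a hypothetical helper, not APR's actual API), strsep() can be emulated with only ISO C for platforms where it is missing:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal strsep() emulation using only ISO C, for platforms
 * (NetWare, Windows) where strsep() is unavailable. Like the BSD
 * original, it returns empty strings for adjacent delimiters. */
static char *my_strsep(char **stringp, const char *delim) {
    char *start = *stringp;
    if (start == NULL)
        return NULL;
    char *end = start + strcspn(start, delim);  /* find first delimiter */
    if (*end != '\0') {
        *end = '\0';         /* terminate this token */
        *stringp = end + 1;  /* advance past the delimiter */
    } else {
        *stringp = NULL;     /* no more tokens */
    }
    return start;
}
```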
>> 3- overload detach field from cgi_exec_info_t in 2.0 to make sure no bump are needed,
>> no backport entry needed in that case.
>Not sure what you mean there, any proposed backport to the
>APACHE_2_0_BRANCH for httpd needs to be added to STATUS following the
>normal procedure.
The change done in 2.
>I'm writing an optimized caching module.
Would you please share with us some info on that
optimized caching module.
Is mod_url_cache a modified copy of the current ones?
Why are the current ones not fulfilling your needs?
Thanks,
JJ
The problem is in our own library. The recent changes cause it to
miss the end of the generated file and keep reading until
the read fails.
In 2.1 prior to mod_cgi.c r1.153
the return value of ap_pass_brigade() was not checked in
include_cmd(), but now it is. That change mix
>I am getting the following error when running CGIs on the current
>version of NetWare (6.5 sp2):
> (32)Broken pipe: ap_content_length_filter: apr_bucket_read() failed
>I am working on tracking down the problem.
Changes done to mod_cgi.c, mod_include.(h & c) back in August 22, 2003
are expos
To replace the addrspace field that was added in the cgi_exec_info_t
struct in mod_cgi.h, I would like to propose extending the use of the
detached (apr_int32_t) field in the cgi_exec_info_t and apr_procattr_t structs.
Currently that field is set to 0 by default and 1 if the process to be
created wi
Loading a process in its own address space requires that all of the
modules it has direct dependencies on also be loaded
in that same address space. This is an expensive process, especially
when it comes to CGIs that are loaded to serve their content and
then unloaded.
A marshal
+1 on NetWare
>Can I ask the obvious, then? When would a separate address space
>be desirable for an apr-based app to invoke a child/forked process?
It is a desirable option mainly for developers using unstable modules
to ensure the child process will not kill the parent application, or the server,
in ca
eters description is off)
>>> [EMAIL PROTECTED] 6/18/2004 2:16:29 PM >>>
On Fri, 18 Jun 2004, Jean-Jacques Clar wrote:
> new field in
> apr_procattr_t and cgi_exec_info_t structures
This, however, would require one, if a module would ever be responsible
for allocating the apr_procattr_t
A new API was added in APR (apr_procattr_addrspace_set()) with changes bubbling up
in httpd-2.0-head.
The API is relevant on NetWare and a no-op on all other platforms.
If you look at the status file, the following entry was added:
*) mod_cgi: Added API call and new field in apr_procattr_t
I think there is a little typo with the submitted patch:
Index: server/log.c
===
RCS file: /home/cvs/httpd-2.0/server/log.c,v
retrieving revision 1.145
diff -u -r1.145 log.c
--- server/log.c 27 May 2004 23:35:41 - 1.145
+++ server/lo
>>> Jean-Jacques Clar 5/18/2004 10:57:40 AM >>>
Just replaced tabs with spaces and reworked indentation within brackets.
If no objections will commit later.
Thanks,
JJ
Sorry, I had to zip the patch; its size was causing a failure from the Apache mail server:
ezmlm-reject: fat
Duh?
What is that?
I was just playing with Cervisia and gvcs setting up my connection to cvs and updating my files.
Did I screw up something?
JJ
>>> [EMAIL PROTECTED] 4/27/2004 11:28:25 AM >>>
On Tue, 27 Apr 2004 [EMAIL PROTECTED] wrote:
> clar 2004/04/27 11:22:27
>
> Log:
> no message
>
I added comments within the following lines (jjc) that are not in the attached patch.
I am just explaining what I did.
My only question is why 0.5 is added to the mean value when displaying the results (lines 988-90-92-94)?
Please review the attached patch and let me know if my corrections are va
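If it helps with the question above: adding 0.5 before an integer conversion is the usual idiom for rounding a non-negative value to the nearest integer instead of truncating. This is my guess at the intent, not verified against ab.c:

```c
#include <assert.h>

/* Casting a double to int truncates toward zero; adding 0.5 first
 * rounds a non-negative value to the nearest integer instead. */
static int round_to_nearest(double mean) {
    return (int)(mean + 0.5);
}
```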
The added call to usage() on line 2165 is missing the closing parenthesis.
JJ
>>> [EMAIL PROTECTED] 3/17/2004 11:22:35 AM >>>
madhum 2004/03/17 10:22:35
Modified: support ab.c
Log: Limit the concurrency to MAX_CONCURRENCY. Otherwise, ab may dump core (calloc fails) when an arbitrarily
Just a quick survey on how robust ab should be.
I think that ab should not seg fault on any user parameters;
I could spend some time making it a little bit more robust.
Is it of any interest, or is the general thinking that if the user enters an out-of-range or bogus value and it seg faults, this sh
Sometimes negative values are displayed in the min column of the Connection Times statistics.
I will find out why, but the following patch makes sure that cannot happen and does not hurt.
Please comment.
If there are no comments, I will commit it later today.
Thanks,
@@ -1286,9 +1286,9 @@
 c->don
+1 NetWare
Jean-Jacques
>>> [EMAIL PROTECTED] 3/13/2004 5:32:53 AM >>>
Hi,
There are 2.0.49-rc2 tarballs available at: http://httpd.apache.org/dev/dist/
The differences with respect to the rc1 tarball are:
- BeOS specific MPM fixes
- Netware specific rand.c fixes
- Documentation update
- Berkeley DB
I am inside the conditional filter in mod_cache,
output filter type = AP_FTYPE_CONTENT_SET-2 (18),
and the r->status field is currently a 304. I would like
to change it to a 200, but it looks like the status field
was already stuffed in the rec->headers_out table,
or is it somewhere else?
Who has the right to vote for backport changes?
Is it only members, or does that right also apply to committers?
Thank you,
Jean-Jacques
Joe,
You will probably need more info, but if you have time give it a try and
if you run into problems we could contact Greg Ames from the ASF.
Thanks,
JJ
>>> [EMAIL PROTECTED] 1/12/2004 9:45:50 AM >>>
Jean-Jacques Clar wrote:
> Attached are 2.0.48 numbers on RH AS 2.
This patch makes it possible to return 304s when using the quick handler,
if a fresh cache entity is found.
The r->mtime field is usually set in default_handler() (core.c) by calling
ap_update_mtime(), which does not happen when the quick handler
finds a fresh cache entry.
From cache_url_handle
>What did you use for ThreadsPerChild? The default?
Yes, we used the default values for the worker threading model.
>I'm very curious to know why prefork is beating worker. Could you get a profile
>of the CPU usage for both cases? I've had good luck with oprofile. I had a
>little trouble com
These are comments copied from ap_cache_check_freshness()
line 163 cache_util.c:
--
 * - RFC2616 14.9.4 End to end reload, Cache-Control: no-cache. no-cache in
 *   either the request or the cached response means that we must
 *   revalidate the request unconditionally,
>1. What was the CPU utilization during the tests
I think CPU utilization is above 90% during the test,
at least on my 4-CPU box, which is not the one used to
gather the submitted results. I could find out, but would be
delighted to have more details on what exactly you are
looking for based on CP
The current request falls under one of the RFC MUST conditions,
line 168 mod_cache.c:
---
else {
    if (ap_cache_liststr(NULL, cc_in, "no-store", NULL) ||
        ap_cache_liststr(NULL, pragma, "no-cache", NULL) ||
        (auth != NULL)) {
        /* delete the previously cached fi
>HyperThreading enabled or not when limited to 1/2 CPUs, if these are HT
>CPUs? That can make a difference (either way) when benchmarking IIRC.
CPUs are not HT.
Bill,
The patch you committed is only for 21285.
My bet is on 21287 (no mutex lock protection in decrement_refcount).
from the bug description:
"There are no mutex lock protection in decrement_refcount if it isdefined USE_ATOMICS.I think you simply forgot the mutex in function decrement_refcou
Attached are 2.0.48 numbers on RH AS 2.1 and 3.0.
Apache is built with the worker MPM and default options on both versions.
C:
Apache is servicing more requests per sec on 2.1 on 1 and 2 CPUs, 3.0 is picking up the slack between 2 and 4 CPUs.
Prefork still serving static requests faster than worker
There is a memory leak with your patch when running my test.
Using only the following part, it does not leak and works fine;
+    if (obj->cleanup) {
+        /* If obj->cleanup is set, the object has been prematurely
+         * ejected from the cache by the g
Just replaced tabs with spaces to follow guidelines.
>>> [EMAIL PROTECTED] 12/11/2003 5:00:31 PM >>>
Bugzilla Defect #21285
This is a rework of the already posted patch.
It addresses the following situation:
1- request comes in for streaming response
2- before that request could be completed, the
Just replaced tabs with spaces to follow guidelines.
>>> [EMAIL PROTECTED] 12/12/2003 1:23:20 PM >>>
Changes in decrement_refcount() and remove_url().
Patch extracted and modified from attachment in Bug 21285.
The submitted patch was combining fixes for 21285 and 21287.
It is for decrement_refcoun
Changes in decrement_refcount() and remove_url().
Patch extracted and modified from attachment in Bug 21285.
The submitted patch was combining fixes for 21285 and 21287.
It is for decrement_refcount().
Also extended mutex protection in remove_url().
Thanks,
Jean-Jacques
mod_mem_cache.21287.p
The fields:
apr_size_t cache_size
apr_size_t object_cnt
from the struct mem_cache_conf are useless since the introduction of
the cache_pqueue stuff.
Just removing them.
Thanks,
Jean-Jacques
mod_mem_cache.patch
Description: Binary data
Bugzilla Defect #21285
This is a rework of the already posted patch.
It addresses the following situation:
1- request comes in for streaming response
2- before that request could be completed, the entry is ejected from the cache
3- when completing the write body step, the incomplete entry is remo
>The only thing I don't understand is why this went from:
>
>- queue_clock - mobj->total_refs
>
>to:
>
>queue_clock - mobj->total_refs
>
>When the other case negated the whole statement. Is this intentional or
>did you forget to account for the - in this case being distributed across
>the ()'s?
If you loo
and returns the value as is rather than negating the returns). I'd also add a comment in
foo_algo where the negated value is stored to explain why it is negated.
Cliff Woolley wrote:
> On Wed, 19 Nov 2003, Jean-Jacques Clar wrote:
>
>>The heap in cache_pqueue is a max head, it alw
The heap in cache_pqueue is a max heap; it always keeps the object
with the highest priority in its root position. The object with the highest
priority is the one that is ejected when a new element has to be
inserted while the cache is full (max number of entities or size reached).
The return value
The priority value stored in the mem_cache_object is >= 0.
The stored value increases with the priority of the cached entry.
A smaller value means a lower priority.
All the logic in cache_pqueue for bubbling up and percolating down
is wrong. It does not ensure that the smallest element is always
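For reference, a minimal sketch of the invariant under discussion (a generic array heap, not the actual cache_pqueue code): the sift-up and sift-down steps must always leave the highest-priority element at the root, since the root is the eviction candidate:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal binary max-heap on long priorities: the root (h[0]) is always
 * the highest-priority element, i.e. the first eviction candidate. */
static void sift_up(long *h, size_t i) {
    while (i > 0) {
        size_t parent = (i - 1) / 2;
        if (h[parent] >= h[i]) break;          /* heap property holds */
        long t = h[parent]; h[parent] = h[i]; h[i] = t;
        i = parent;
    }
}

static void sift_down(long *h, size_t n, size_t i) {
    for (;;) {
        size_t l = 2 * i + 1, r = l + 1, big = i;
        if (l < n && h[l] > h[big]) big = l;
        if (r < n && h[r] > h[big]) big = r;
        if (big == i) break;                   /* both children smaller */
        long t = h[big]; h[big] = h[i]; h[i] = t;
        i = big;
    }
}

static void heap_push(long *h, size_t *n, long v) {
    h[(*n)++] = v;
    sift_up(h, *n - 1);
}

static long heap_pop(long *h, size_t *n) {     /* removes the max */
    long top = h[0];
    h[0] = h[--*n];
    sift_down(h, *n, 0);
    return top;
}
```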
>If they are no longer being used, remove them. I expect we were using
>them at one time, then Ian (?) added the nifty cache_pqueue stuff which
>made the fields superfluous.
Thanks, will do.
Looking at two pairs of fields used to keep the current cache size and object count:
from struct:
mem_cache_conf (mod_mem_cache.c):
apr_size_t cache_size
apr_size_t object_cnt
cache_cache_t (cache_cache.c):
apr_size_t current_size
cache_pqueue_t (cache_pqueue.c):
apr_size_t size (-1, elem
This is a double send from last week (look at the date).
It was probably lost on a queue somewhere for a week.
Anyway, that proposal did not fly.
I will try to get memory/performance data on my Linux box by the end of the week.
As a possible solution for NetWare, we will put the MAXMEMFREE dire
MAXMEMFREE fixes were checked in the 2.1 tree last week for most of the MPMs
(except Win32). A back port has also been proposed.
I would like to propose inserting the MAXMEMFREE directive in the configuration file.
I am suggesting that 100 (KBytes) should be used as a default size.
Used wit
MAXMEMFREE fixes were checked in the 2.1 tree last week for most of the MPMs
(except Win32). A back port has also been proposed.
I would like to propose inserting the MAXMEMFREE directive in the configuration file.
I am suggesting that 100 (KBytes) should be used as a default size, but coul
The function ap_meets_conditions() is doing unnecessary calls to
apr_time_now() and apr_time_sec().
Since the variable (mtime) has its target value in seconds, it should
be possible to use r->request_time instead of calling
apr_time_now().
I did not see cases where the delay between a ca
I have two questions/suggestions to improve the current performance
of Apache 2.
1:
Apache and APR keep the time stamp in usec, which implies multiple
calls to apr_time_sec(), each of which is just doing a 64-bit division to
transform usec into sec.
I think this is a waste of CPU cycles for
Having a time structure in the MPM keeping updated sec and usec
fields could help performance for the web server.
It could replace most calls to apr_time_now() and apr_time_sec()
at run-time:
NetWare has its main thread resuming every 100 uSEC (1 sec)
to do the idle server maintenance
Since the target value is in seconds, it should be possible to use
r->request_time instead of calling apr_time_now().
I ran a load test on my box and did not ever see a difference between the
sec value in r->request_time and the one returned by apr_time_now().
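A sketch of the suggestion (hypothetical my_* names standing in for apr_time_t and apr_time_sec(); apr_time_t is in microseconds): when the comparison only needs whole seconds, the timestamp captured at the start of the request serves as well as a fresh clock read and saves the extra call:

```c
#include <assert.h>
#include <stdint.h>

typedef int64_t my_time_t;        /* microseconds, like apr_time_t */

#define MY_USEC_PER_SEC 1000000LL

/* apr_time_sec() equivalent: one 64-bit division per call. */
static int64_t my_time_sec(my_time_t t) {
    return t / MY_USEC_PER_SEC;
}

/* Freshness-style check in whole seconds: using the timestamp taken at
 * the start of the request (request_time) instead of re-reading the
 * clock gives the same answer whenever the handler runs within the
 * same second, and avoids a system call. */
static int not_modified_since(my_time_t mtime, my_time_t request_time) {
    return my_time_sec(mtime) <= my_time_sec(request_time);
}
```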
end of the patch part..
-
Just bypassing a call to apr_filepath_merge by using the
canonical_filename.
Jean-Jacques
request.c.patch
Description: Binary data
modules are still in the experimental area?
Thank you,
Jean-Jacques Clar
allocator with pool_create_ex, and then create a
> bucket_alloc on the same allocator (and under the same limit) using the
> new bucket_alloc_create_ex.
>
> Sound good?
Makes sense to me.
Brian
Works for me.
Jean-Jacques Clar
I have been trying to understand why MaxMemFree is not working as
I am expecting it to in our MPM (NetWare).
I am getting to the conclusion that either the memory allocation in our
MPM is missing a piece or there is a missing function in the apr_bucket set.
I think this is relevant to other MPMs.
Why wouldn't I see my allocated memory decrease if it has been freed?
JJ
>>> [EMAIL PROTECTED] 09/12/02 04:19PM >>>
On Thu, Sep 12, 2002 at 04:16:35PM -0600, Jean-Jacques Clar wrote:
> "Memory in Apache never shrinks".
> Is this a true statement?
&
might happen...
Bill
> -----Original Message-
> From: Jean-Jacques Clar [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, September 12, 2002 10:38 AM
> To: [EMAIL PROTECTED]
> Subject: Re: memory leak in apr_buck_alloc...
>
>
> I am using WebBench to run my test.
> Reques
t are you using to determine the amount of leaked memory?
Jean-Jacques Clar wrote:
> Using the atomic test, we do pass all of them.
> Is it enough, or are there other actions that should be taken?
>
> I ran another test requesting the same single small file (GIF, 223
> bytes) from 5 c
operators are not implemented properly on Netware, then all kinds
of bad things might happen...
Bill
Bill
> -Original Message-
> From: Jean-Jacques Clar [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, September 12, 2002 10:38 AM
> To: [EMAIL PROTECTED]
> Subject: Re: memory leak in a
f request that I need to
> use to trigger the memory leak? I just ran a few hundred
> thousand requests for the same 1KB file through 2.0.41-pre2
> with the mod_mem_cache settings below, and I can't see
> any sign of memory growth in the httpd yet.
>
> Thanks,
> Brian
>
>
w hundred
thousand requests for the same 1KB file through 2.0.41-pre2
with the mod_mem_cache settings below, and I can't see
any sign of memory growth in the httpd yet.
Thanks,
Brian
On Tue, 2002-09-10 at 14:40, Jean-Jacques Clar wrote:
> I guess my leak is from a different source.
> Refr
Which values are you using for MaxRequestsPerChild?
If mine is different than 0, then fine, I don't see a leak because
each of my threads is killed once in a while.
But if it is 0, then it leaks when refreshing entries in the cache;
could it be the fact that in NetWare we have one process and mu
Anyone with some kind of architecture diagrams related to the
memory management in APR?
Thanks,
Jean-Jacques Clar
htpasswd.patch
Description: Binary data
I guess my leak is from a different source.
Refreshing files in mod_mem_cache drives up the allocated memory in
aprlib.
It is consistent and has been there since I started using mod_cache
(3-4 months ago).
Justin,
Were you using caching modules when you were able to reproduce/fix
Brad's leak?
Wha
I think I have been seeing the same thing for a long time when
refreshing entries in mod_mem_cache. If you find something in the
alloc path I will be more than happy to try it in my lab.
"Jean-Jacques Clar" <[EMAIL PROTECTED]> on 05/21/2002 06:23:58
PMTo: W G
Stod
for worker.
Ryan
On Thu, 20 Jun 2002, Jean-Jacques Clar wrote:
> I am building 2.0.39, but it was the same thing with 2.0.35.
>
> >>> [EMAIL PROTECTED] 06/20/02 10:26AM >>>
> On Thu, Jun 20, 2002 at 09:09:45AM -0600, Jean-Jacques Clar wrote:
> > There are erro
I am building 2.0.39, but it was the same thing with 2.0.35.
>>> [EMAIL PROTECTED] 06/20/02 10:26AM >>>
On Thu, Jun 20, 2002 at 09:09:45AM -0600, Jean-Jacques Clar wrote:
> There are errors when building 2.0.x on Linux RH7.3 with the worker
Which version of 2.0 exactly?
-aaron
On using the pq code:
I currently have it crashing in cache_pq_pop after only a few
refreshes of a cache entry. I am trying to fix it.
cache_pq_remove() -> cache_pq_pop() -> page fault
Jean-Jacques
>>> [EMAIL PROTECTED] 06/13/02 09:45AM >>>
Arnauld Dravet wrote:
> Yup i know it's a work-in-progress
I have been using mem cache for over two months
on NetWare 6 servers with a great deal of acceleration.
However, because of issues to be resolved, it is used
only in a test environment and not on production servers.
--Jean-Jacque
>>> [EMAIL PROTECTED] 06/06/02 03:33PM >>>
On Thu, Jun 06, 2002 at 03:25:01PM -0600, Jean-Jacques Clar wrote:
> Here are results done on a 4 x 450Mhz server with 1 Gb of RAM, 1
> SysKonnect Gb LAN.
> The tests were done using WebBench with 26 clients and 1 controller
Here are results done on a 4 x 450Mhz server with 1 Gb of RAM,
1 SysKonnect Gb LAN.
The tests were done using WebBench with 26 clients and 1
controller running the default static test.
The OS is Linux RH 7.3 and Apache 2.0 is using the prefork model.
1.3 tops off slightly higher than 2.0 on al