Re: cvs commit: httpd-2.0/server/mpm/worker fdqueue.c
Greg Ames wrote:
> Brian Pane wrote:
>> Greg Ames wrote:
>>> apr_atomic_dec? That does return something.
>>
>> The problem is that, since we have to use apr_atomic_cas for the
>> increment (due to the lack of a return value on apr_atomic_inc), we
>> can't use apr_atomic_dec on the same variable. apr_atomic_cas works
>> on apr_uint32_t, while apr_atomic_dec works on apr_atomic_t. If we
>> could change the apr_atomic_inc/dec functions to use apr_uint32_t,
>> this part of the fdqueue code could become a lot simpler.
>
> I am certainly in favor of changing apr_atomic_inc/dec so they can be
> useful. I'm wondering if it's OK to use an ordinary APR type, like
> apr_uint32_t? Or do we need special atomic types marked as volatile or
> memory-resident or whatever, so that gcc won't assign them to registers
> or optimize them out? I don't know the answer, but I have seen kernel
> code do such things (OK Jeff, I've been infected by the GPL... no hope
> for me).

As far as I know, we can make it work with apr_uint32_t on most
platforms, as long as we declare any inline assembly blocks as volatile
(thanks to Sascha Schumann for fixing this recently in apr_atomic_cas
on Linux). The one platform I'm not sure of is OS/390, due to the
special data types and API involved:
http://marc.theaimsgroup.com/?l=apr-dev&m=104129861312811

>> I still have one more change I'm hoping to make in the fdqueue
>> synchronization logic: move the queue manipulation code in
>> ap_queue_push/pop outside the mutex-protected region by maintaining
>> a linked list of queue nodes with apr_atomic_casptr.
>
> Sounds good, as long as the pops are single-threaded somehow, or if
> you have some other way of getting around the A-B-A CAS pop window.

I'm planning on spinning around the CAS; i.e., retry if the CAS fails
due to a collision with another thread. The queue_info synchronization,
which uses that same technique, seems to be working well in practice.

Brian
using my module
Hi all,

I don't have a server at home, only a Linux workstation. I'm writing a
module that modifies HTML content, and my problem is that I don't know
how to simulate the client/server environment. I mean: I can set up a
module in a location (with its handler) and see how it works (like
mod_example/mod_info), but if I want my module to interact with a file
requested by the client, how can I do it? For example, I can add a gzip
module to my Apache, but how can I see whether it works? I just want to
see in my browser the file I requested after my module has "filtered"
it.

Thanks a lot,
fabio
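For a content-modifying module the usual way to test against ordinary files is to register an output filter and attach it in the configuration, so it runs on responses the default handler generates when a browser requests a file. A minimal httpd.conf sketch, assuming the module registers an output filter named MY_HTML_FILTER (the module name, filter name, and paths here are hypothetical):

```apache
# Load the module under test (names and paths are illustrative)
LoadModule my_html_filter_module modules/mod_my_html_filter.so

# Run the filter on every .html file served from this directory,
# so a plain browser request to localhost exercises the filter.
<Directory "/var/www/html">
    <FilesMatch "\.html$">
        SetOutputFilter MY_HTML_FILTER
    </FilesMatch>
</Directory>
```

With that in place, starting httpd locally and requesting a static .html file from the browser (http://localhost/...) shows the filtered output; no separate client/server setup is needed.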
Re: Proxy stripping Content-Length, Transfer-Encoding
On Wed, 2003-01-08 at 23:25, Graham Leggett wrote:
> Hi all,
>
> In httpd v2.0 mod_proxy, both the Content-Length and Transfer-Encoding
> headers are stripped from the backend server response before passing
> it to the frontend:
>
>     /* We need to copy the output headers and treat them as input
>      * headers as well.  BUT, we need to do this before we remove
>      * TE and C-L, so that they are preserved accordingly for
>      * ap_http_filter to know where to end.
>      */
>     rp->headers_in = apr_table_copy(r->pool, r->headers_out);
>
>     /* In order for ap_set_keepalive to work properly, we can NOT
>      * have any length information stored in the output headers.
>      */
>     apr_table_unset(r->headers_out, "Transfer-Encoding");
>     apr_table_unset(r->headers_out, "Content-Length");
>
> After much discussion, the removal of the Content-Length was taken out
> in httpd-2.1. Can anyone confirm this didn't break anything? If it
> didn't, I want to remove this in httpd v2.0 also.

I haven't heard reports of anything breaking in 2.1 due to the change.
I'll add an entry in the 2.0 STATUS file to track votes on back-porting
the change.

> What concerns me too is the removal of the transfer encoding - I
> assume this is because it is a hop-by-hop header - my question is why
> are we not removing all hop-by-hop headers, like we do for the initial
> request from the browser?

I think we should be removing all hop-by-hop headers here.

Brian
Re: [patch] include/util_filter.h
--On Saturday, January 11, 2003 9:13 PM +1100 Stas Bekman <[EMAIL PROTECTED]> wrote:

> My simple document patch is a great example of something that could be
> handled by someone who doesn't have to be an expert in httpd-2.0. It's
> expected that a new committer may need to do some
> dirty/non-sexy/non-itch-scratching work while learning the guts of the
> project and working on enlarging his karma.

I'll just chime in and state that the reason I haven't applied your
patch is that you aren't following the style guides. You seem to have
tabs or some other weirdness going on.  -- justin
Re: CGIs and HEAD requests
* Martin Kutschker wrote:
> How about making Apache (read mod_cgi) ignore extra output for HEAD
> requests?

Since 2.0, Apache should do this automatically; i.e., you should handle
a HEAD request exactly as a GET, so that the headers are the same.
(Think of content filters that change headers, etc.)

nd
--
Real programmers confuse Christmas and Halloween because
DEC 25 = OCT 31.  -- Unknown (found in ssl_engine_mutex.c)
Re: [Win32] compile errors in xlate.c
Sebastian Bergmann wrote:
> Juergen Heckel wrote:
>> For two or three days now I have been getting the following compile
>> errors:
>
> Current HEAD builds fine for me on Win32.

Hi, thank you. I found it: my apr-iconv was three days too old :-(

--
Juergen Heckel
Re: [Win32] compile errors in xlate.c
Juergen Heckel wrote:
> For two or three days now I have been getting the following compile
> errors:

Current HEAD builds fine for me on Win32.

--
Sebastian Bergmann            http://sebastian-bergmann.de/
                              http://phpOpenTracker.de/
Did I help you? Consider a gift: http://wishlist.sebastian-bergmann.de/
Re: [patch] include/util_filter.h
Greg Stein wrote:
> On Fri, Jan 10, 2003 at 12:41:38PM +1100, Stas Bekman wrote:
>> Jeff Trawick wrote:
>>> ...
>>> As has been mentioned many times before on this list, if a patch
>>> isn't committed or commented on, you have to remind us. There are
>>> as many whys for this requirement as there are httpd committers
>>> trying to juggle multiple responsibilities.
>
> Consider us reminded, but not chastised. Many of us have been playing
> hookey through the holidays and have all manner of todos to catch up
> with.

>> It's understandable. But it doesn't help to make other people want
>> to contribute.
>
> Volunteers only have so much time to contribute. I don't think it is
> fair to get upset at people because they aren't providing you with
> enough of their time.

You get upset the first few times; after that you either get used to it
or move on. I was simply trying to suggest a possible solution, having
seen it work for other projects. Which, as you've explained later,
isn't applicable here.

[...]

>> Others who submit things they have noticed wrong, but don't really
>> require a fix, move on when their posts/patches are ignored, so the
>> efforts are just getting lost.
>
> Quite unfortunate, but that is life. What more do you expect? People
> have limited bandwidth, and can only see and track so much. And that
> is also focused on "what is interesting to me". That is simply the
> way it works.

Yes and no. You forget that there are many others who currently don't
contribute to httpd, for various reasons you all know about. So while
several developers indeed have limited bandwidth, there is virtually
unlimited bandwidth if the entrance barrier is lowered and more people
are encouraged to contribute.

My simple document patch is a great example of something that could be
handled by someone who doesn't have to be an expert in httpd-2.0. It's
expected that a new committer may need to do some
dirty/non-sexy/non-itch-scratching work while learning the guts of the
project and working on enlarging his karma.

> Yes, it would be good to see every single patch, and to track every
> single one, but the developers are simply busy busy busy.

I'm not less busy than other developers, and I'm working on pretty much
the same project, just a different segment of it. If you gave me commit
access, I'd have committed the fix a long time ago, and neither you nor
I would have had to spend time on this thread. I think I've posted
enough small code and docs fixes in the last few years that I can be
trusted to commit simple things. Believe me, I'm not planning on
committing anything that I don't know is simple and won't break the
code. If you find me breaking things, you can always revoke the commit
access. That's absolutely fair.

>> You are talking about httpd committers having "multiple
>> responsibilities", but I think you really mean "multiple itches to
>> scratch".
>
> Don't even start. You have no idea what kinds of responsibilities
> people have, so it is totally unfair of you to imply something else.
> Jeff says he has a bunch of other responsibilities. Great. He does.
> Don't try and tell him or us that he doesn't, unless you happen to
> stand in his shoes, too.
>
> The real truth is that Jeff works for IBM and part of his job
> responsibility is to work on Apache. Great for us. But his efforts
> are going to be extremely bound to the commercial needs of IBM.
> Certainly, there is a personal component over and above IBM's needs,
> but then you're really moving into personal interests. And you can't
> claim that time for yourself; that's Jeff's time.

I was *not* implying that real responsibilities aren't real. I
apologize if it sounded like I was. I was talking about things that
people do in their spare time, and I was talking in general, not
specifically about Jeff. It's pretty well known that people want to
spend their spare time working on things that they enjoy. That's what
I meant.

>> Perhaps the httpd project could benefit from having a pumpkin,
>> similar to the Perl project.
>
> That isn't part of our culture. I don't think it would work here.
>
> The httpd group doesn't have any notion of central authority, so a
> pumpkin isn't going to receive the kind of mandate that Perl pumpkins
> get. And there isn't a Larry here to bestow the pumpkin title on
> anybody. Central authorities definitely help with moving projects
> forward, but you can't simply swoop in and impose such a thing.

In fact, the authority is getting more distributed in the current Perl
project, where there are several developers who are responsible for
sub-projects of the Perl core and are benevolent dictators in their
territories. The pumpkin only handles so much load because others
didn't step forward and take over other territories. But since you say
that this approach is futile in the httpd project, I won't waste our
time on this.

> ...
>> If that was the case, things (especially simple ones like my patch)
>> won't fall between chairs, leading to more inspiration from users to
>> help.
>
> It could, but it also (obviously) requires somebody to tr
[Win32] compile errors in xlate.c
Hi,

For two or three days now I have been getting the following compile
errors:

...
xlate.c
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(102) : error C2061: syntax error : identifier 'apr_iconv_t'
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(104) : error C2059: syntax error : '}'
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(125) : error C2037: left of 'ich' specifies undefined struct/union 'apr_xlate_t'
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(125) : error C2065: 'apr_iconv_t' : undeclared identifier
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(126) : error C2037: left of 'ich' specifies undefined struct/union 'apr_xlate_t'
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(126) : error C2037: left of 'pool' specifies undefined struct/union 'apr_xlate_t'
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(126) : error C2198: 'apr_iconv_close' : too few actual parameters
F:\Projects\MSVC\httpd-2.0\srclib\apr-util\xlate\xlate.c(192) : error C2037: left of 'ich' specifies undefined struct/union 'apr_xlate_t'

After replacing xlate.c with an older version, all is OK again.

--
Juergen Heckel
Re: CGIs and HEAD requests
Date: Fri, 10 Jan 2003 10:01:01 +0100 (MET)
From: Martin Kutschker <[EMAIL PROTECTED]>

> Is it possible for a CGI to handle HEAD requests? Mozilla uses HEAD
> for its 'save link target' feature, which 'breaks' my web app -
> annoyingly, every file (suggested to be downloaded) gets a .html
> extension, as Apache 1.3 sends the default MIME type.

Sorry, my fault. The CGI runs... completely! That is, it sends content
over the wire, not just the headers. Shame on me. Mozilla doesn't seem
to like this behaviour, though.

I have fixed the CGI in question, but there are many other CGIs which
don't cope with HEAD. How about making Apache (read: mod_cgi) ignore
extra output for HEAD requests?

Masi
Re: EOS bucket in RESOURCE filters
Justin Erenkrantz wrote:
> --On Saturday, January 11, 2003 8:07 PM +1100 Stas Bekman
> <[EMAIL PROTECTED]> wrote:
>> ap_finalize_request_protocol covers all the other cases, by checking
>> r->sent_eos. My question is why not always add the EOS in
>> ap_finalize_request_protocol()?
>
> ap_finalize_request_protocol() is a last resort to ensure that even
> with a faulty handler an EOS is sent down the chain. But, if an EOS
> is already sent, it is illegal to send another.  -- justin

Thanks Justin.

Does that mean that mod_status, mod_info, and other standard generator
handlers should be changed to send EOS to be proper? Currently there is
not much documentation available, so the only way to learn how things
should be written properly is to look at the core modules, in the hope
that they provide a proper example.

__
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED]  http://use.perl.org  http://apacheweek.com
http://modperlbook.org http://apache.org http://ticketmaster.com
Re: EOS bucket in RESOURCE filters
--On Saturday, January 11, 2003 8:07 PM +1100 Stas Bekman <[EMAIL PROTECTED]> wrote:

> ap_finalize_request_protocol covers all the other cases, by checking
> r->sent_eos. My question is why not always add the EOS in
> ap_finalize_request_protocol()?

ap_finalize_request_protocol() is a last resort to ensure that even
with a faulty handler an EOS is sent down the chain. But, if an EOS is
already sent, it is illegal to send another.  -- justin
Re: EOS bucket in RESOURCE filters
Greg Ames wrote:
> Stas Bekman wrote:
>> Is it possible that the RESOURCE filters don't get the EOS bucket?
>
> Anything is possible in software ;-) but that would be pretty broken,
> IMO. I don't recall seeing cases recently where we don't send EOS
> down the complete output filter chain.

I've looked at the existing generator modules and it seems that they
send the EOS bucket only if they send a pipe/file down the stream; the
default handler behaves similarly. ap_finalize_request_protocol covers
all the other cases, by checking r->sent_eos. My question is why not
always add the EOS in ap_finalize_request_protocol()?

>> I'm working on filter examples which use the context to maintain
>> status / keep remainder data between filter invocations for the same
>> request. For some reason I don't get the EOS bucket, so I don't know
>> how to flush the data stored in the filter context. I do see EOS in
>> CONNECTION filters. I've tried to look at the existing modules for
>> an example, but I didn't find any RESOURCE filters that use the
>> context.
>
> mod_include's filter has tons of variables in its ctx and uses them
> frequently. I sometimes wonder if this contributes to the number of
> bugs we've seen in it. It certainly should be a RESOURCE filter. The
> OLD_WRITE filter also stashes stuff in its ctx IIRC, and should be a
> RESOURCE filter.

I did look at mod_include; it was just hard to quickly find the
eos/ctx flush logic. I think I'm getting the hang of it now. Thanks
Greg.
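The pattern being discussed (a resource filter that stashes leftover data in f->ctx across invocations and flushes it when the EOS bucket arrives) has a fairly standard shape. The following is an untested, pseudocode-grade sketch against the httpd-2.0 filter API, much simpler than mod_include's real logic; the filter name, context layout, and the elided transformation step are all assumptions:

```c
#include "httpd.h"
#include "http_protocol.h"
#include "apr_buckets.h"
#include "apr_strings.h"
#include <string.h>

typedef struct {
    const char *leftover;  /* remainder data carried across invocations */
} my_ctx;

static apr_status_t my_resource_filter(ap_filter_t *f,
                                       apr_bucket_brigade *bb)
{
    my_ctx *ctx = f->ctx;
    apr_bucket *b;

    if (ctx == NULL) {
        /* First invocation for this request: set up the context. */
        f->ctx = ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
    }

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {

        if (APR_BUCKET_IS_EOS(b)) {
            /* End of the response: flush any stashed remainder
             * *before* the EOS bucket so it reaches the client. */
            if (ctx->leftover) {
                apr_bucket *flush = apr_bucket_pool_create(
                    ctx->leftover, strlen(ctx->leftover),
                    f->r->pool, f->c->bucket_alloc);
                APR_BUCKET_INSERT_BEFORE(b, flush);
                ctx->leftover = NULL;
            }
            break;
        }
        /* ... read and transform data buckets here, possibly stashing
         * an incomplete trailing token in ctx->leftover ... */
    }
    return ap_pass_brigade(f->next, bb);
}
```

The key points match what Greg describes for mod_include: allocate the context lazily from r->pool on the first call, and treat the EOS bucket as the one reliable signal that no more data is coming, so that's where buffered remainder data gets flushed.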