Re: library-ization
On Sun, Oct 13, 2002 at 09:21:23AM -0700, David N. Welton wrote: [...] And I would like an HTTP tester library. I think that if done in this way, it would be versatile enough to replace ab, and it would also give people the freedom to experiment with other front ends. Like something in Tk, gtk, or whatever... I was also thinking about an HTTP test library (on top of an HTTP client library, like apr-serf) long before I joined this project. However, such a library would probably be just like an application (reading config files and so on). If you decide to put only essential stuff in such a lib, then apps would need to reimplement many things on their own. That little command tool we develop here is very suited for GUI integration. We don't have command line options/switches, there's no i18n (which is a pain for GUI wrappers) and so on. You just feed flood with XML data (snip from my personal TODO list: write flood DTD), and receive results on stdout (snip from my personal TODO list: make flood output XML'ized). I must say that I like this approach much more than a separate library. OTOH you might be right. Please post a simplified, pseudocode API of such an HTTP test library. We can talk about it a bit more to see if such a thing makes sense. regards, -- Jacek Prucia 7bulls.com S.A.
Re: public key authentication apache
On Mon, 14 Oct 2002, Ian Holsman wrote: I was wondering if anyone knows of something (preferably using openSSH) which would allow Apache to authenticate via a SSH keypair. What I would like, ideally, is for the browser to use the passwords/pass-phrases of the ssh-agent running on the local machine to execute something remotely without the middleman (web server) needing to know the passphrase/private key of the user. I once had to do this - but it was hard to get it working properly/performing decently - as, if you are not careful, the negotiation needs to be done again and again. If all you want is to make sure that the web server does not know the password, there are a lot of one-way crypt/digest things one can do to solve that. Even standard crypt()ed passwords go a long way. Dw
Re: Compiling Apache modules for windows
Hi all! Günter sent me a binary version of mod_replace. I started trying it and I couldn't get it to work. At last I found that there is an error in the example configuration file. Instead of being 'Replace colour color text/html', it should be 'Replace colour color text/html'. At least this is the only way it worked for me. I say this in case anyone is interested. Now I've got some questions. I've been able to make replacements in the files if they have the .html extension, but not in the rest (.js, etc.). In the example cfg you sent me, this is what you said (with my correction already done): Replace colour color text/html AddOutputFilter REPLACE html I've tried writing js instead of html in the second line, but I don't know what to write in the first line instead of text/html. I've tried lots of things but none work. What should I do to apply the replacements to all the files served, independent of their extension? I tried *, but it doesn't work. Another question: do you know how I should write the httpd.conf to make the replacements to files proxied from another server? I wrote this, but it doesn't work: <Location /someloc> Replace colour color text/html AddOutputFilter REPLACE html ProxyPass http://someserver/someloc ProxyPassReverse http://someserver/someloc </Location> I've tried different orders of the lines, but none worked. Thanks in advance, Igor Leturia
stable 2.0 trees
After a million messages on related topics, I'm not sure that any two developers agree on all of the following topics:
. how much to consider the needs of users relative to the desires of developers
. how hard to try not to break binary compatibility
. how much to use 2.0 HEAD as a sandbox for new features
. whether or not to start 2.1 now for auth changes
Meanwhile, a number of the 2.0 users who have dared poke their heads into our mailing list point out through their comments that we have a PR problem (regardless of whether or not you agree technically with their particular concerns). I would like to propose the following:
. let 2.0 HEAD proceed as it seems to be going now... maybe not everybody is happy about every aspect, but the way it is handled is the rough consensus of developers... I'm not saying this is wrong, but I think that the volatility of 2.0 HEAD + APR HEAD is high enough that it is hard to put out a good release quickly unless we're very conservative and put something out with just a couple of changes beyond 2.0.43
. let those who are interested (not more than a few would be needed to make it viable) maintain a separate tree based on 2.0.43, including apr and apr-util... call it httpd-2.0.43, with potential releases 2.0.43.1, 2.0.43.2, etc. priorities would be:
. quick integration of critical fixes from HEAD
. skepticism regarding any changes other than critical fixes; for some fixes it would be best to wait to see if any users of the stable tree actually encounter the problem
. maintaining the MMN
Starting this now would let folks with modules from 2.0.41-2.0.43 continue to work into the future, even if they need to pick up a security fix in Apache, even if 2.0 HEAD needs to bump the MMN tomorrow. When it becomes impractical to achieve these goals with the 2.0.43 code (e.g., a critical problem can't reliably be fixed without bumping the MMN), then it is time to discard this particular stable tree and tell folks to move to the current release to get new fixes.
If there are still concerns about volatility in 2.0 HEAD at that point then there will be a need for another stable tree. Note: There are ways other than a separate CVS tree to implement the same objective. Rather than pick on the mechanism, the first order of business is to address the ability for some of us to make releases with such a conservative set of changes. Then we (hopefully comprised mostly of people who plan to do the work) can worry about how it should be done. (grabbing flak jacket) -- Jeff Trawick | [EMAIL PROTECTED] Born in Roswell... married an alien...
Re: public key authentication apache
The ssh tools don't export the operations (signing, checking signatures). I looked into teasing them out of the code; for example, authfd.c has the signing code. One could create a command to bootstrap an authenticated session and then hand it off to the browser. Bridging auth to unlocked keys available in client-side session state (like ssh-agent) is a good thing. Hooking it into some browsers is intentionally difficult. Which is one reason why Liberty was designed to allow zero-install on the clients/browsers. - ben On Tuesday, October 15, 2002, at 03:54 AM, Dirk-Willem van Gulik wrote: On Mon, 14 Oct 2002, Ian Holsman wrote: I was wondering if anyone knows of something (preferably using openSSH) which would allow Apache to authenticate via a SSH keypair. What I would like, ideally, is for the browser to use the passwords/pass-phrases of the ssh-agent running on the local machine to execute something remotely without the middleman (web server) needing to know the passphrase/private key of the user. I once had to do this - but it was hard to get it working properly/performing decently - as, if you are not careful, the negotiation needs to be done again and again. If all you want is to make sure that the web server does not know the password, there are a lot of one-way crypt/digest things one can do to solve that. Even standard crypt()ed passwords go a long way. Dw
RE: stable 2.0 trees
After a million messages on related topics, I'm not sure that any two developers agree on all of the following topics: . how much to consider the needs of users relative to desires of developers . how hard to try not to break binary compatibility . how much to use 2.0 HEAD as a sandbox for new features . whether or not to start 2.1 now for auth changes Meanwhile, a number of the 2.0 users who have dared poke their heads into our mailing list point out through their comments that we have a PR problem (regardless of whether or not you agree technically with their particular concerns). Worth reading... http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2882203,00.html I am generally in favor of maintaining a binary compatible/stable 2.0 cvs repository. I think this may help the third party module authors to finally do the work to get their modules running on Apache 2.0, which should help improve the 2.0 adoption rate. What we call that repository is not particularly important to me, though the name we choose may have PR implications which we should be sensitive to. My suggestion is we freeze 2.0 MMN major bumps (unless there is a really, -really-, REALLY compelling reason to do a bump) and start a new development tree for 2.1. Let's set some goals for what we (the developers and the user community) want to see in 2.1 and work toward those goals (i.e., finish and agree on the ROADMAP we've already started). Bill
RE: Compiling Apache modules for windows
Igor, You have to put the replace entries like this: <Proxy /someloc> Replace colour color text/html SetOutputFilter REPLACE </Proxy> ProxyPass /someloc http://Server/someloc ProxyPassReverse /someloc http://Server/someloc Best regards, Juan C. Rivera Citrix Systems, Inc. -Original Message- From: Igor Leturia [mailto:[EMAIL PROTECTED]] Sent: Tuesday, October 15, 2002 6:06 AM To: [EMAIL PROTECTED] Subject: Re: Compiling Apache modules for windows Hi all! Günter sent me a binary version of mod_replace. I started trying it and I couldn't get it to work. At last I found that there is an error in the example configuration file. Instead of being 'Replace colour color text/html', it should be 'Replace colour color text/html'. At least this is the only way it worked for me. I say this in case anyone is interested. Now I've got some questions. I've been able to make replacements in the files if they have the .html extension, but not in the rest (.js, etc.). In the example cfg you sent me, this is what you said (with my correction already done): Replace colour color text/html AddOutputFilter REPLACE html I've tried writing js instead of html in the second line, but I don't know what to write in the first line instead of text/html. I've tried lots of things but none work. What should I do to apply the replacements to all the files served, independent of their extension? I tried *, but it doesn't work. Another question: do you know how I should write the httpd.conf to make the replacements to files proxied from another server?
apache itself to seteuid
Is there currently a way to configure apache so that the server itself setuid's to the owner of a page? ie Accessing http://hostname/~username would cause httpd to setuid to username while accessing files within ~username/public_html? This isn't the same as suexec, which only runs cgis as the owner. The problem is that I work in an academic environment, where some assignments are web pages written in jsp, php, perl, etc., and the required permissions on ~username and ~username/public_html require that the web server (and tomcat, if being used) can access the files. Plagiarism becomes an issue when any student can write a simple program (eg in perl) to search through other people's public_html directories. I can only think of two solutions: 1) each student runs their own web server as themselves (httpd runs as their own uid). 2) httpd runs as root with seteuid calls to the user specified in the url. In both cases, the result is that the public_html would only require access permissions for the owner himself. I know the solutions above are incomplete; for example, they would still not work for tomcat. I'm just wondering if this issue has been addressed before, and how it was resolved? Thanks for any help. Bj -- +---+--+ | Benjamin (Bj) Kuit | Building CB10.3.354 | | Systems Programmer | Faculty of Information Technology | | Phone: 02 9514 1841 | University of Technology, Sydney| | Mobile: 0416 184 972 | Email: [EMAIL PROTECTED] | +---+--+
RE: stable 2.0 trees
At 08:16 AM 10/15/2002, Bill Stoddard wrote: After a million messages on related topics, I'm not sure that any two developers agree on all of the following topics: . how much to consider the needs of users relative to desires of developers . how hard to try not to break binary compatibility . how much to use 2.0 HEAD as a sandbox for new features . whether or not to start 2.1 now for auth changes Meanwhile, a number of the 2.0 users who have dared poke their heads into our mailing list point out through their comments that we have a PR problem (regardless of whether or not you agree technically with their particular concerns). Worth reading... http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2882203,00.html I am generally in favor of maintaining a binary compatible/stable 2.0 cvs repository. I think this may help the third party module authors to finally do the work to get their modules running on Apache 2.0, which should help improve the 2.0 adoption rate. What we call that repository is not particularly important to me, though the name we choose may have PR implications which we should be sensitive to. My suggestion is we freeze 2.0 MMN major bumps (unless there is a really, -really-, REALLY compelling reason to do a bump) and start a new development tree for 2.1. Let's set some goals for what we (the developers and the user community) want to see in 2.1 and work toward those goals (i.e., finish and agree on the ROADMAP we've already started). I have to concur with Bill on this (in spite of the fact that Jeff's arguments try to appeal to everyone's sensibilities). I think the new proposal, that we have a maintenance tree stemming from Apache 2.0.43 using sub-subversions, follows from the fact that the list has been unresponsive to using revisions by the usual definitions. My question is just this... why do we feel that every revision must be 'completed'? Clearly, 2.0.x is new territory. Many will never upgrade to any 2.0.x simply because of the magic .0.
in the middle. And this magic .0. has been GA for over six months. We got to 2.0 GA only because of tons of effort by many enthusiastic hackers. Right now, there is no place for such individuals to express their creative energy in improving the project, unless they want to get mired down in debates between breaking modules or configurations. It seems that the 'maintainers', the stodgy 'old men' of the group, want everyone to row together on bug fixes. That isn't how open source works. The folks with no interest in tracking down obscure bugs just leave, or quietly bide their time. The number of commits to the project is way down, meaning the rate of improvement for the project has slowed. Some folks are primarily focused on new development and ideas. Others are primarily focused on cleaning up functionality. Some have few interests outside of cleaning up grammar and legibility. And some want little more than to shape the architecture of the server; coding is simply a means to that end, for performance, scalability, security, etc. All of these are worthwhile contributions if done in the right context. Jeff especially hit one nail on the head: . let those who are interested (not more than a few would be needed to make it viable) maintain a separate tree based on 2.0.43, including apr and apr-util... call it httpd-2.0.43, with potential releases 2.0.43.1, 2.0.43.2, etc. Dropping the sub-subversion discussion for the moment, he hit on the magic words 'let those who are interested'. Those who want to maintain stable will; it's their itch. Those who want to make forward progress on the alpha/beta tree will have that outlet. I have an interesting idea about resistance to stability. Perhaps it's nothing less than immediate gratification. Most anyone who writes code wants users to adopt that code, the quicker the better. When the new code is the right way(R), we want people to immediately quit doing things the 'wrong way'.
But 1.3 shows us that our end users don't always adopt the right code quickly. What's the penalty for stable/development trees? Users don't have the development code (at least not many) for some time, until the development tree becomes GA quality. But that's how it should be, and that's the only way we will ever find 1.3 adopters moving to 2.x. Any solid code contributions that don't break the API can always be merged back to the GA maintenance tree. We've done that for two years from 2.0 to 1.3, and it's worked. Bill
RE: stable 2.0 trees
At 08:16 AM 10/15/2002, Bill Stoddard wrote: After a million messages on related topics, I'm not sure that any two developers agree on all of the following topics: . how much to consider the needs of users relative to desires of developers . how hard to try not to break binary compatibility . how much to use 2.0 HEAD as a sandbox for new features . whether or not to start 2.1 now for auth changes Meanwhile, a number of the 2.0 users who have dared poke their heads into our mailing list point out through their comments that we have a PR problem (regardless of whether or not you agree technically with their particular concerns). Worth reading... http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2882203,00.html I am generally in favor of maintaining a binary compatible/stable 2.0 cvs repository. I think this may help the third party module authors to finally do the work to get their modules running on Apache 2.0, which should help improve the 2.0 adoption rate. What we call that repository is not particularly important to me, though the name we choose may have PR implications which we should be sensitive to. My suggestion is we freeze 2.0 MMN major bumps (unless there is a really, -really-, REALLY compelling reason to do a bump) and start a new development tree for 2.1. Let's set some goals for what we (the developers and the user community) want to see in 2.1 and work toward those goals (i.e., finish and agree on the ROADMAP we've already started). I have to concur with Bill on this (in spite of the fact that Jeff's arguments try to appeal to everyone's sensibilities). I think the new proposal, that we have a maintenance tree stemming from Apache 2.0.43 using sub-subversions, follows from the fact that the list has been unresponsive to using revisions by the usual definitions. My question is just this... why do we feel that every revision must be 'completed'? Clearly, 2.0.x is new territory. Many will never upgrade to any 2.0.x simply because of the magic .0.
in the middle. And this magic .0. has been GA for over six months. At the risk of racing too far ahead in this discussion, here is my suggestion... 2.0.43 becomes 2.1 and the MMN major does not change for subsequent 2.1 series releases (except for a compelling reason, e.g. a security fix -requires- a bump). Why 2.1? No technical reason; purely a PR tactic to telegraph to the user community that we are putting a lot of focus on maintaining binary backward compatibility and to get rid of the *.0.* in the version number (yea, to appease the folks who are allergic to 0's in version numbers). New ROADMAP development is started in 3.0. Bill
Re: stable 2.0 trees
Bill Stoddard wrote: At the risk of racing too far ahead in this discussion, here is my suggestion... 2.0.43 becomes 2.1 and the MMN major does not change for subsequent 2.1 series releases (except for a compelling reason, e.g. a security fix -requires- a bump). Why 2.1? No technical reason; purely a PR tactic to telegraph to the user community that we are putting a lot of focus on maintaining binary backward compatibility and to get rid of the *.0.* in the version number (yea, to appease the folks who are allergic to 0's in version numbers). I like. -- === Jim Jagielski [|] [EMAIL PROTECTED] [|] http://www.jaguNET.com/ A society that will trade a little liberty for a little order will lose both and deserve neither - T.Jefferson
Re: apache itself to seteuid
Benjamin, the only way to accomplish this is with the perchild MPM for Apache 2.0, and only by calling out each and every user. Perhaps it would be good to add perchild options for mass-user hosting in the schema you suggest. Note that each 'user' then has an apache process running in the uid of the user, while the main process dispatches requests to the various users. It could get expensive if, say, you have 1000 users (all with public_html directories) and few require this feature. Security on the web is all about assigning world read permission for anything that is publicly accessible from the outside world. For the 80/20 problem, well over 80% of the files are public anyway via the server, so what if they are also readable on the server across users? But in the script case you are discussing, you aren't looking for public_html documents to be protected. You are really asking if the private_html/cgi-bin and private_html/servlets might be private and executed in the user's context. If you set things up so that only users who use this sort of schema have perchild processes created, then the burden on the server would drop from 1000 students to perhaps 250 (or whoever is taking the tomcat/cgi classes, plus the few extra users who start work early or continue afterwards). So an auto-user schema based on perchild would be great, but please don't tie it to ~user/public_html as the criteria! Bill At 10:01 PM 10/14/2002, Benjamin Kuit wrote: Is there currently a way to configure apache so that the server itself suid's to the owner of a page? ie Accessing http://hostname/~username will causes httpd to setuid to username while accessing files with ~username/public_html? This isn't the same as suexec, which only runs cgis as the owner.
The problem is that I work in an academic environment, where some assignments are web pages written in jsp, php, perl and etc, and the required permissions on ~username and ~username/public_html requires that the web server (and tomcat if being used) can access the files. Plagiarism becomes an issue when any student can write a simple program (eg in perl) to search through other people's public_html directories. I can only think of two solutions: 1) each student runs their own web server as themselves (httpd runs as their own uid). 2) httpd runs as root with seteuid calls to the user specified in the url. In both cases, the result is that the public_html would only require access permissions for the owner himself. I know the solutions above are incomplete, for example they would still not work for tomcat, I'm just wondering if this issue has been addressed before, and how was it resolved? Thanks for any help. Bj -- +---+--+ | Benjamin (Bj) Kuit | Building CB10.3.354 | | Systems Programmer | Faculty of Information Technology | | Phone: 02 9514 1841 | University of Technology, Sydney| | Mobile: 0416 184 972 | Email: [EMAIL PROTECTED] | +---+--+
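For reference, the perchild setup Bill describes would look roughly like the fragment below. The directive names are from memory of the experimental Apache 2.0 perchild documentation (NumServers, ChildPerUserID, AssignUserID) and should be verified against the docs, since the MPM never stabilized; the user names are placeholders.

```apache
# perchild MPM (experimental in 2.0): dedicate child processes
# to particular uids, then route vhosts to them.
NumServers 3
ChildPerUserID student1 students 1
ChildPerUserID student2 students 2

<VirtualHost *>
    ServerName student1.example.edu
    # Requests for this vhost are handled by student1's child process.
    AssignUserID student1 students
</VirtualHost>
```

This is per-vhost rather than per-~user, which is why Bill suggests creating perchild processes only for the subset of users who actually need private execution.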
RE: Compiling Apache modules for windows
Thanks very much, Juan! I tried it and it worked! -Original Message- From: Juan Rivera [mailto:[EMAIL PROTECTED]] Sent: Tuesday, October 15, 2002 15:34 To: '[EMAIL PROTECTED]' Subject: RE: Compiling Apache modules for windows Igor, You have to put the replace entries like this: <Proxy /someloc> Replace colour color text/html SetOutputFilter REPLACE </Proxy> ProxyPass /someloc http://Server/someloc ProxyPassReverse /someloc http://Server/someloc Best regards, Juan C. Rivera Citrix Systems, Inc. [...]
RE: stable 2.0 trees
At the risk of racing too far ahead in this discussion, here is my suggestion... 2.0.43 becomes 2.1 and the MMN major does not change for subsequent 2.1 series releases (except for a compelling reason, e.g. a security fix -requires- a bump). Why 2.1? No technical reason; purely a PR tactic to telegraph to the user community that we are putting a lot of focus on maintaining binary backward compatibility and to get rid of the *.0.* in the version number (yea, to appease the folks who are allergic to 0's in version numbers). New ROADMAP development is started in 3.0. <thunderous applause/> I think Bill hit an important point here. Version numbers signal a lot to the user community about the compatibility of the code, and the pain in migrating to various versions. To a developer, the name of 2.0.43++ isn't so important - call it 2.0.44, call it 2.1.0, call it 17.9 - it's the same code. To a user, the migration from 2.0.43 to 2.0.44 should be easy. From 2.0.43 to 2.1 can be a little harder. It's much easier for end-users to understand releases if major functionality and/or API changes are coupled with minor version number bumps (instead of subversion bumps). From that perspective, changing the auth semantics (which have been pretty stable since at least early 1.3) in 2.0.44 seems almost sneaky compared to changing them in 2.1.0. If minor version numbers are bumping weekly, that's not so good. If they are bumping quarterly or so as APIs change, that may well be healthier than carrying the 2.0 series on until the next major code reorganization. Version numbers are a marketing issue at least as much as a technology issue - here's an easy chance to give non-developers more insight into what is going on.
Re: stable 2.0 trees
* Jim Jagielski ([EMAIL PROTECTED]) wrote : Bill Stoddard wrote: At the risk of racing too far ahead in this discussion, here is my suggestion... 2.0.43 becomes 2.1 and the MMN major does not change for subsequent 2.1 series releases (except for a compelling reason, e.g. a security fix -requires- a bump). Why 2.1? No technical reason; purely a PR tactic to telegraph to the user community that we are putting a lot of focus on maintaining binary backward compatibility and to get rid of the *.0.* in the version number (yea, to appease the folks who are allergic to 0's in version numbers). I like. Me too. -- Thom May - [EMAIL PROTECTED] spectra Hello! spectra What is the voting period? From Mar 24th until? asuffield until the candidate manoj wants to win is in the lead * asuffield ducks into the icbm shelter
ability to restrict scope of require directive to a single module
Hi, I'm facing the following problem: I'm using two auth modules in authoritative mode (if one fails, try the other one). I have one authorization check (using a require directive) for the first module and another one for the other module. My problem is that the second directive has a syntax that is valid for the first module and will prevent authorization with the first module. Here is an example of what I mean: Users are authenticated using basic auth against my ldap server. Authorized users are: 1) all non-contractor users 2) plus a list of authorized-contractors (not managed in the LDAP server)
AuthType Basic
AuthName "access restricted"
AuthLDAPURL ldap URL
require ldap-filter !(employeeType=contractor)
AuthLDAPAuthoritative off
AuthUserFile .htpasswd
AuthGroupFile .htgroup
require group authorized-contractors
The problem with this is that 'require group' is a valid directive for the auth_ldap module and will prevent rule 1) from succeeding. The way I'm solving this is by patching the mod_auth module, telling it to support both 'require group' and 'require mod_auth_group' directives. In this case, the following configuration is doing what I wanted:
AuthType Basic
AuthName "access restricted"
AuthLDAPURL ldap URL
require ldap-filter !(employeeType=contractor)
AuthLDAPAuthoritative off
AuthUserFile .htpasswd
AuthGroupFile .htgroup
require mod_auth_group authorized-contractors
I'm wondering if it wouldn't be a good idea for all auth modules to support two names for any require option: the common name (group) and a unique name (module_name_group). In this case, it could help implement a strict OR between require directives when using authoritative mode. Any thoughts? Xavier
[PATCH] event driven read
Something I've been hacking on (in the pejorative sense of the word 'hack'. Look at the patch and you will see what I mean :-). This should apply and serve pages on Linux, though the event_loop is clearly broken as it does not timeout keep-alive connections and will hang on the apr_poll() (and hang the server) if a client leaves a keep-alive connection active but does not send anything on it. Scoreboard is broken, code structure is poor, yadda yadda. I plan to reimplement some of this more cleanly but no idea when I'll get around to it. Key points: 1. routines to read requests must be able to handle getting APR_EAGAIN (or APR_EWOULDBLOCK): 2. ap_process_http_connection reimplemented to be state driven 3. event loop in worker_mpm to wait for pending i/o Bill Index: include/http_protocol.h === RCS file: /home/cvs/httpd-2.0/include/http_protocol.h,v retrieving revision 1.83 diff -u -r1.83 http_protocol.h --- include/http_protocol.h 11 Jul 2002 19:53:04 - 1.83 +++ include/http_protocol.h 15 Oct 2002 14:54:01 - -92,6 +92,13 request_rec *ap_read_request(conn_rec *c); /** + * Read a request and fill in the fields. + * param c The current connection + * return The new request_rec + */ +request_rec *ap_create_request(conn_rec *c); + +/** * Read the mime-encoded headers.
* param r The current request */ -103,8 +110,8 * param r The current request * param bb temp brigade */ -AP_DECLARE(void) ap_get_mime_headers_core(request_rec *r, - apr_bucket_brigade *bb); +AP_DECLARE(apr_status_t) ap_get_mime_headers_core(request_rec *r, + http_state_t *hs); /* Finish up stuff after a request */ -582,6 +589,7 * param r The request * param fold Whether to merge continuation lines * param bb Working brigade to use when reading buckets + * param block block or non block * return APR_SUCCESS, if successful * APR_ENOSPC, if the line is too big to fit in the buffer * Other errors where appropriate -590,15 +598,17 AP_DECLARE(apr_status_t) ap_rgetline(char **s, apr_size_t n, apr_size_t *read, request_rec *r, int fold, - apr_bucket_brigade *bb); + apr_bucket_brigade *bb, + apr_read_type_e block); #else /* ASCII box */ -#define ap_rgetline(s, n, read, r, fold, bb) \ -ap_rgetline_core((s), (n), (read), (r), (fold), (bb)) +#define ap_rgetline(s, n, read, r, fold, bb, block) \ +ap_rgetline_core((s), (n), (read), (r), (fold), (bb), (block)) #endif AP_DECLARE(apr_status_t) ap_rgetline_core(char **s, apr_size_t n, apr_size_t *read, request_rec *r, int fold, - apr_bucket_brigade *bb); + apr_bucket_brigade *bb, + apr_read_type_e block); /** * Get the method number associated with the given string, assumed to Index: include/httpd.h === RCS file: /home/cvs/httpd-2.0/include/httpd.h,v retrieving revision 1.189 diff -u -r1.189 httpd.h --- include/httpd.h 1 Jul 2002 17:49:53 - 1.189 +++ include/httpd.h 15 Oct 2002 14:54:03 - -684,6 +684,8 /** A structure that represents the current request */ typedef struct request_rec request_rec; +typedef struct http_state_t http_state_t; + /* ### would be nice to not include this from httpd.h ... 
*/ /* This comes after we have defined the request_rec type */ #include apr_uri.h -1017,8 +1019,32 void *sbh; /** The bucket allocator to use for all bucket/brigade creations */ struct apr_bucket_alloc_t *bucket_alloc; + +/* request rec */ +request_rec *r; +http_state_t *hs; }; +typedef enum { +HTTP_STATE_NEW_CONNECTION, +HTTP_STATE_READ_REQUEST_LINE, +HTTP_STATE_PARSE_REQUEST_LINE, +HTTP_STATE_READ_MIME_HEADERS, +HTTP_STATE_WRITE_RESPONSE, +HTTP_STATE_LINGER, +HTTP_STATE_DONE, +HTTP_STATE_ERROR +} http_state_e; + +struct http_state_t { +http_state_e state; +conn_rec *c; +request_rec *r; +apr_bucket_brigade *bb; +apr_table_t *headers; +apr_pool_t *p; +apr_socket_t *sock; +}; /* Per-vhost config... */ /** Index: modules/http/http_core.c === RCS file: /home/cvs/httpd-2.0/modules/http/http_core.c,v retrieving revision 1.307 diff -u -r1.307 http_core.c --- modules/http/http_core.c15 Jul 2002 08:05:11 - 1.307 +++ modules/http/http_core.c15 Oct 2002 14:54:04 - -56,8 +56,10 * University of Illinois, Urbana-Champaign.
Final patch for a long time.
The recent conversations on this list have made me finally realize that I have been here too long. I need a project that is not the Apache web server. So, this is my good-bye. I will be unsubscribing from the Apache web server development lists in the next day or two. I will still be involved in APR development work, so I will still be reachable at [EMAIL PROTECTED]

There are two projects that I started that have not been finished. I would hope that one of two things would happen to both of them. Either somebody else should pick them up and run with them, or they should be removed from the server. The first project is the Perchild MPM. It basically works, but there are bugs. The second is SSL upgrade. I have the patches; they haven't been committed yet. I have attached them at the bottom of this message. The reason they haven't been committed is that I don't have a client to test them with, and I haven't had time to create one. The responses are correct; I have checked them in plain text. The place where bugs most likely exist is the actual upgrade code that does the handshake. This is an important feature, and I would really like to see it in 2.0.

It has been a lot of fun the last four years working on Apache, but I have been here too long, and it isn't fun anymore. It isn't worth doing if it isn't fun.

Ryan

___
Ryan Bloom  [EMAIL PROTECTED]
550 Jean St
Oakland CA 94610

---
? build.err
? build.log
? output.log
? sslupgrade.patch
? modules/new
? srclib/apr/APRVARS
? srclib/apr/build.err
? srclib/apr/build.log
? srclib/apr/newpoll2.tar.gz
? srclib/apr/i18n/unix/Makefile
? srclib/apr/test/build.err
? srclib/apr/test/build.log
? srclib/apr/test/garg
? srclib/apr/test/testall
Index: modules/ssl/mod_ssl.c
===================================================================
RCS file: /home/cvs/httpd-2.0/modules/ssl/mod_ssl.c,v
retrieving revision 1.72
diff -u -d -b -w -r1.72 mod_ssl.c
--- modules/ssl/mod_ssl.c  14 Oct 2002 04:15:57 -  1.72
+++ modules/ssl/mod_ssl.c  15 Oct 2002 16:49:07 -
@@ -105,7 +105,7 @@
 /*
  * Per-server context configuration directives
  */
-    SSL_CMD_SRV(Engine, FLAG,
+    SSL_CMD_SRV(Engine, TAKE1,
                 "SSL switch for the protocol engine (`on', `off')")
     SSL_CMD_ALL(CipherSuite, TAKE1,
@@ -274,7 +274,7 @@
     return 1;
 }

-static int ssl_hook_pre_connection(conn_rec *c, void *csd)
+int ssl_init_ssl_connection(conn_rec *c)
 {
     SSLSrvConfigRec *sc = mySrvConfig(c->base_server);
     SSL *ssl;
@@ -283,40 +283,14 @@
     modssl_ctx_t *mctx;

     /*
-     * Immediately stop processing if SSL is disabled for this connection
+     * Seed the Pseudo Random Number Generator (PRNG)
      */
-    if (!(sc && (sc->enabled ||
-                 (sslconn && sslconn->is_proxy))))
-    {
-        return DECLINED;
-    }
+    ssl_rand_seed(c->base_server, c->pool, SSL_RSCTX_CONNECT, "");

-    /*
-     * Create SSL context
-     */
     if (!sslconn) {
         sslconn = ssl_init_connection_ctx(c);
     }

-    if (sslconn->disabled) {
-        return DECLINED;
-    }
-
-    /*
-     * Remember the connection information for
-     * later access inside callback functions
-     */
-
-    ap_log_error(APLOG_MARK, APLOG_INFO, 0, c->base_server,
-                 "Connection to child %ld established "
-                 "(server %s, client %s)", c->id, sc->vhost_id,
-                 c->remote_ip ? c->remote_ip : "unknown");
-
-    /*
-     * Seed the Pseudo Random Number Generator (PRNG)
-     */
-    ssl_rand_seed(c->base_server, c->pool, SSL_RSCTX_CONNECT, "");
-
     mctx = sslconn->is_proxy ? sc->proxy : sc->server;

     /*
@@ -368,6 +342,44 @@
     return APR_SUCCESS;
 }

+static int ssl_hook_pre_connection(conn_rec *c, void *csd)
+{
+    SSLSrvConfigRec *sc = mySrvConfig(c->base_server);
+    SSLConnRec *sslconn = myConnConfig(c);
+
+    /*
+     * Immediately stop processing if SSL is disabled for this connection
+     */
+    if (!(sc && (sc->enabled == TRUE ||
+                 (sslconn && sslconn->is_proxy))))
+    {
+        return DECLINED;
+    }
+
+    /*
+     * Create SSL context
+     */
+    if (!sslconn) {
+        sslconn = ssl_init_connection_ctx(c);
+    }
+
+    if (sslconn->disabled) {
+        return DECLINED;
+    }
+
+    /*
+     * Remember the connection information for
+     * later access inside callback functions
+     */
+
+    ap_log_error(APLOG_MARK, APLOG_INFO, 0, c->base_server,
+                 "Connection to child %ld established "
+                 "(server %s, client %s)", c->id, sc->vhost_id,
+                 c->remote_ip ? c->remote_ip : "unknown");
+
+    return ssl_init_ssl_connection(c);
+}
+
 static apr_status_t ssl_abort(SSLFilterRec *filter, conn_rec *c)
 {
     SSLConnRec *sslconn = myConnConfig(c);
@@ -572,6 +584,15 @@
Re: Final patch for a long time.
On Tue, Oct 15, 2002, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: The first project is the Perchild MPM. It basically works, but there are bugs. Can you give some more information? I'm interested in using the Perchild MPM myself, but haven't because of the reports that it's buggy. I'm willing to help debug it and make it usable, but the 3 bug reports I can find in bugzilla aren't very verbose. Any hints or direction where to look? Anything not in bugzilla that you know about? JE
Re: stable 2.0 trees
William A. Rowe, Jr. [EMAIL PROTECTED] writes: It seems that the 'maintainers', the stodgy 'old men' of the group, want everyone to row together on bug fixes. That isn't how OS works. The folks with no interest in tracking down obscure bugs just leave, or quietly bide their time. The number of commits to the project is way down, meaning the rate of improvement for the project has slowed.

Just curious... which proposal corresponds to "want everyone to row together on bug fixes"?

Jeff especially hit one nail on the head:

. let those who are interested (not more than a few would be needed to make it viable) maintain a separate tree based on 2.0.43, including apr and apr-util... call it httpd-2.0.43, with potential releases 2.0.43.1, 2.0.43.2, etc.

Dropping the sub-sub-version discussion for the moment, he hit on the magic words "let those who are interested". Those who want to maintain stable will; it's their itch. Those who want to make forward progress on the alpha/beta tree will have that outlet. I would expect that anybody working on maintaining the extremely stable release would be involved with the alpha/beta tree too, since the very definition of the extremely stable release (only critical fixes, always close to release-able since there aren't many changes to test) means that there isn't much work associated with it.

Bill S and I had some discussions over lunch which make me suspect that I'm not communicating very well the spirit of what I proposed. First, I'm pretty happy with what is going on in 2.0 HEAD now. I don't think the MMN is changed gratuitously, I don't think the code gets destabilized a whole lot on a regular basis, and I think that having some aspects of the config (i.e., the auth issue) change at this point in the 2.0 lifetime is not completely unreasonable (scripts can certainly help admins). I think we're still at a point where changing the MMN is reasonable under certain conditions.
My proposal was just to allow extremely conservative/stable releases which any current 2.0 site could upgrade to with no fears (either of new Apache problems from some of the itch-scratching, or of breaking compatibility with 3rd party modules) in order to pick up a fix for a security problem they're concerned with.

This proposal wasn't intended to address the big picture of how overall development proceeds and which changes can be delivered within the 2.0 framework or some new set of ideas. It was only to calm the concerns of sites running 2.0 which have things working pretty well but are concerned that the 2.0 tree has enough activity that they risk breaking something else by picking up a new 2.0 release.

-- Jeff Trawick | [EMAIL PROTECTED] Born in Roswell... married an alien...
Re: stable 2.0 trees
i'm responding to the head of this thread because i haven't read the rest of it yet.. so, as usual, my comments may be stale.

Jeff Trawick wrote:
. let 2.0 HEAD proceed as it seems to be going now
:
. let those who are interested (not more than a few would be needed to make it viable) maintain a separate tree based on 2.0.43, including apr and apr-util... call it httpd-2.0.43, with potential releases 2.0.43.1, 2.0.43.2, etc. priorities would be
. quick integration of critical fixes from HEAD
. skepticism regarding any changes other than critical fixes; for some fixes it would be best to wait to see if any users of the stable tree actually encounter the problem
. maintaining the MMN

this works for me, except for some of the details -- like the version nomenclature. let's get away from the term 'branch', and use something less technically overloaded. i propose 'stream'. i'd like to combine this with the leap-frogging stable/development stream idea.

for stake-in-the-ground and pr reasons, i'd suggest taking whatever we want to start this stable stream with and giving it a new number, such as 2.1. (i suspect i'm anticipating firstbill's probably-already-posted comment on this..) then rename head to 2.2 and that's where development continues. 2.0 as such dries up and blows away. when we repeat with a new stable stream, the 2.2 head gets snapshot as 2.3, head becomes 2.4, and 2.2 vanishes.

whew, that's a load off the top of my head.. :-)
--
#kenP-)} Ken Coar, Sanagendamgagwedweinini http://Golux.Com/coar/
Author, developer, opinionist http://Apache-Server.Com/

Millennium hand and shrimp!
RE: ability to restrict scope of require directive to a single module
-- Original Message --
Reply-To: [EMAIL PROTECTED]
Date: Tue, 15 Oct 2002 18:27:18 +0200
From: Xavier MACHENAUD [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: ability to restrict scope of require directive to a single module

Hi, I'm facing the following problem: I'm using 2 auth modules in authoritative mode (if one fails, try the other one).

This is your problem right here. If both are in authoritative mode, it means (in your words): if one fails, DON'T try the other one. You need to load them both, and make the second one authoritative. The problem here is twofold:

1) there is no way to order auth modules, so if your authoritative module happens to run first, the other modules will NEVER get a chance to try
2) if there is no 'authoritative' module and auth fails (i.e. all modules return DECLINED), the Apache core returns INTERNAL SERVER ERROR instead of UNAUTHORIZED

Until either one of the previous things changes, the only workaround is to make the last auth module called the authoritative one; that way both their authorize methods will get invoked.

sterling
RE: stable 2.0 trees
On Tue, 15 Oct 2002, Bill Stoddard wrote: Worth reading... http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2882203,00.html

On October 2nd, *after* RedHat 8.0 was released, he wrote "And I doubt Red Hat will make 2.0 the default install until [...]". Really Impressive Predictions.

- ask

-- ask bjoern hansen, http://www.askbjoernhansen.com/ !try; do();
Counting of I/O bytes
I have submitted the patch along those lines a few days back, which also includes an MMN bump. In light of the latest discussions on the list, it seems that a patch like that is not being looked at favourably by many people :-(

Do you want me to rework that patch so it uses a private data structure (to avoid the MMN bump) rather than two new fields in conn_rec? Or is the quality of the patch unacceptable in general?

Just to refresh everyone's memory, this is about the incorrect number of output bytes reported by mod_logio through %O. It does not include the incorrect number of bytes reported in r->bytes_sent, which is a completely separate issue.

Any ideas welcome...

Bojan
Re: Counting of I/O bytes
On Tue, 2002-10-15 at 22:09, Bojan Smojver wrote: I have submitted the patch along those lines a few days back, which also includes an MMN bump. In light of the latest discussions on the list, it seems that a patch like that is not being looked at favourably by many people :-(

The MMN bump in this case is no problem. Your patch just adds fields at the end of the structure. It's backward compatible, so it only needs an increase in the MMN minor number, not the major number. If the patch added fields in the middle of the struct, thus breaking binary compatibility, then it would be a problem.

Do you want me to rework that patch so it uses a private data structure (to avoid the MMN bump) rather than two new fields in conn_rec? Or is the quality of the patch unacceptable in general?

I've only had a couple of minutes to look at the patch (too busy with my day job this week), but I think the basic problem with the patch is that you're updating the bytes-sent field in the conn_rec after the writes complete in the core_output_filter, but you're reading that field in a logger function. But the logger function can be called *before* the socket write happens. When handling keep-alive requests, for example, the server may buffer up a small response to one request but not send it until the next response is generated. The first request gets logged as soon as all the response data is sent to core_output_filter, so the mod_logio logging code might find that the bytes-sent value in the conn_rec is zero because the socket write hasn't actually happened yet. The best solution is to delay the logging until after the response has actually been sent. This would require more significant changes to the server, though. Most of the data that the logger needs is in the request's pool. Currently, this pool is cleared as soon as we send the response to the core_output_filter and log it.
In order to accurately log the bytes sent, we'd have to keep the request pool around until the last of that request's response data was written. That's not impossible, but it would require some design changes to the httpd core and the output filters. IMHO, that's a good example of a worthwhile design change to make for 2.1.

Brian
Re: Counting of I/O bytes
On 15 Oct 2002, Brian Pane wrote: major number. If the patch added fields in the middle of the struct, thus breaking binary compatibility, then it would be a problem. Even adding at the end can break binary compat, since sizeof(conn_rec) changes, so you might have 3rd party code that allocates too little space. Assuming there is ever a case where 3rd party code allocates conn_rec's, that is. --Cliff
Re: Counting of I/O bytes
Quoting Brian Pane [EMAIL PROTECTED]: On Tue, 2002-10-15 at 22:09, Bojan Smojver wrote: I have submitted the patch along those lines a few days back, which also includes an MMN bump. In the light of the latest discussions on the list, it seems that a patch like that is not being looked at favourably by many people :-( The MMN bump in this case is no problem. Your patch just adds fields at the end of the structure. It's backward compatible, so it only needs an increase in the MMN minor number, not the major number. If the patch added fields in the middle of the struct, thus breaking binary compatibility, then it would be a problem. What if someone creates an array of conn_rec structures in their module? Isn't the code that fetches the next one going to be incorrect? For instance: conn_rec c[5]; c[1].not_the_first_field = some_value; /* OOPS, we'll probably get garbage */ Do you want me to rework that patch so it uses a privata data structure (to avoid MMN bump) rather then two new fields in conn_rec? Or is the quality of the patch unacceptable in general? I've only had a couple of minutes to look at the patch (too busy with my day job this week), but I think the basic problem with the patch is that you're updating the bytes-sent field in the conn_rec after the writes complete in the core_output_filter, but you're reading that field in a logger function. But the logger function can be called *before* the socket write happens. When handling keep-alive requests, for example, the server may buffer up a small response to one request but not send it until the next response is generated. The first request gets logged as soon as all the response data is sent to core_output filter, so the mod_logio logging code might find that the bytes-sent value in the conn_rec is zero because the socket write hasn't actually happened yet. The best solution is to delay the logging until after the response has actually been sent. 
This would require more significant changes to the server, though. Most of the data that the logger needs is in the request's pool. Currently, this pool is cleared as soon as we send the response to the core_output_filter and log it. In order to accurately log the bytes sent, we'd have to keep the request pool around until the last of that request's response data was written. That's not impossible, but it would require some design changes to the httpd core and the output filters. IMHO, that's a good example of a worthwhile design change to make for 2.1.

OK. I think I finally understand what the problem is here. That's what William was trying to explain to me before, but because I don't know the guts of Apache 2 all that well, I didn't understand :-(

Basically, what would be happening with the patch is that the numbers would be logged with the next request, rather than this one. However, if the data was SENT with the next request, then it should be LOGGED with the next request. This seems like a question of semantics to me. The problem I can see with this is the following:

- connection opened (keepalive)
- request for aaa.domain: not sent (i.e. the server is buffering it); logged zero output
- request for bbb.domain (on the same connection): sent; logged bytes out for bbb.domain PLUS aaa.domain

Is something like this possible? If not, I think we should be pretty much OK, as the whole point of mod_logio is to log the traffic, most likely per virtual host.

Bojan
Re: cvs commit: httpd-2.0/modules/experimental cache_util.c
On Sat, 2002-10-12 at 20:26, Paul J. Reder wrote: Okay, this takes care of item 4 from the list below. Thanks Brian, saves me from having to do the commit. :) What about the other 3? Should they be fixed by the change from apr_time_t to apr_int64_t? Apr_time_t is really apr_int64_t under the covers and I was seeing only the lower 32 bits being set when the variables were assigned 0 and -1. The value was correctly set when it was assigned APR_DATE_BAD (which has an embedded cast) so it seems that 1-3 still need to be done. If I remember correctly, the ANSI arithmetic conversion rules should cause the 0 or -1 to be sign-extended to long long (or whatever other integral type apr_int64_t is typedef'ed to). Brian
Re: stable 2.0 trees
On Tue, 2002-10-15 at 10:46, Thom May wrote: * Jim Jagielski ([EMAIL PROTECTED]) wrote: Bill Stoddard wrote: At the risk of racing too far ahead in this discussion, here is my suggestion... 2.0.43 becomes 2.1 and the MMN major does not change for subsequent 2.1 series releases (except for a compelling reason, e.g. a security fix -requires- a bump). Why 2.1? No technical reason; purely a PR tactic to telegraph to the user community that we are putting a lot of focus on maintaining binary backward compatibility and to get rid of the *.0.* in the version number (yea, to appease the folks who are allergic to 0's in version numbers).

I like.

Me too.

Not to make a "me too" post BUT.. me too. I've been using Apache 2 on non-critical projects or where I MUST use Apache 2 (i.e. a Subversion repository). So far, it's stable BUT (and I stress this HIGHLY) it's not been pounded on AT ALL. I.e. at most 100 hits in a day by 3 or 4 people AT most! :)

-- Jeff Stuart [EMAIL PROTECTED]