segmentation fault in worker.c

2009-07-07 Thread Andrej van der Zee
Hi,

I compiled httpd-2.2.11 with "./configure --with-included-apr
--enable-ssl --disable-cgi --disable-cgid --with-mpm=prefork
--enable-status". HTTP requests seem to be processed fine from a
user's point of view, but I get many segfaults in my Apache log when I
seriously increase the workload. Here is a trace from gdb:

Core was generated by `/usr/local/apache2/bin/httpd -k start'.
Program terminated with signal 11, Segmentation fault.
[New process 9935]
#0  apr_pollset_add (pollset=0x0, descriptor=0xbf8225dc) at
poll/unix/epoll.c:150
150 if (pollset->flags & APR_POLLSET_NOCOPY) {
(gdb) print pollset
$1 = (apr_pollset_t *) 0x0
(gdb) bt
#0  apr_pollset_add (pollset=0x0, descriptor=0xbf8225dc) at
poll/unix/epoll.c:150
#1  0x080c2c41 in child_main (child_num_arg=) at
prefork.c:532
#2  0x080c30f3 in make_child (s=0x9c849a8, slot=138) at prefork.c:746
#3  0x080c3ef8 in ap_mpm_run (_pconf=0x9c7d0a8, plog=0x9cbb1a0,
s=0x9c849a8) at prefork.c:881
#4  0x0806e808 in main (argc=164081968, argv=0xbf822904) at main.c:740
(gdb)

When I compiled with mpm-worker, I did not run into these problems.
These are my mpm-prefork settings:

ServerLimit  512
StartServers  100
MinSpareServers   25
MaxSpareServers  75
MaxClients  256
MaxRequestsPerChild   0


This happens when I start about 4000 user sessions within a few
seconds, not with lower values. Changing KeepAlive to On/Off does not
change anything.

I was hoping this rings a bell; otherwise I can provide more
information on request, provided someone is kind enough to pick this
up ;)

Thank you,
Andrej


need some help from an awk wizard ...

2009-07-07 Thread Guenter Knauf
All,
I've been trying for hours now to get 4 symbols of mod_watchdog into an
export list :(
these 4 symbols are: ap_hook_watchdog_exit, ap_hook_watchdog_init,
ap_hook_watchdog_need, ap_hook_watchdog_step.
The problem seems to be that in the pre-processed file the function
macro expands to one line with a typedef before it; see here (scroll down
to the end to find these 4 long lines):
http://people.apache.org/~fuankg/awk_test/nw_export.i
everything needed is in this dir, including the original awk version
(source and win32 binary) I have to use (since it has to run on Win32
finally):
http://people.apache.org/~fuankg/awk_test/
I thought it should be simple to cut off everything before and including
the semicolon, but it seems I'm too stupid and can't get it to work ...
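One approach that may work here (untested against the real nw_export.i, so the sample line below is invented to match the described shape): use awk's sub() to delete everything up to and including the first semicolon, which strips the leading typedef:

```shell
# Hypothetical sample line shaped like the described nw_export.i lines:
# a typedef, a semicolon, then the exported hook declaration.
line='typedef void ap_HOOK_watchdog_init_t(int x); int ap_hook_watchdog_init(void);'

# Delete everything up to and including the first semicolon (plus any
# following whitespace), leaving only the hook declaration:
echo "$line" | awk '{ sub(/^[^;]*;[ \t]*/, ""); print }'
```

The real file may need the regex tweaked (e.g. anchoring on the hook names) if a line contains more than one leading semicolon-terminated declaration.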

thanks in advance for any help with this!

Günter.




Re: Events, Destruction and Locking

2009-07-07 Thread Bojan Smojver
On Tue, 2009-07-07 at 16:01 +0200, Graham Leggett wrote:
> As is httpd prefork :)

Yeah, definitely my favourite MPM :-)

As far as I understand this, the deal is that we need to have a complete
request before we start processing it. Otherwise, we can get stuck and
one of our precious resources is tied up for a long time.

Is there anything stopping us from having not just the listen fds in
that apr_pollset_poll() of prefork.c, but also a bunch of already
accepted fds that are waiting for more data to come in? I'm guessing
we'd have to use ap_process_http_async_connection() and have multiple
ptrans pools, but that should not be all that hard to do.

So, the loop would be:

- poll()
- try assembling a full request from data read so far
  - process if successful
  - go back to poll() if not

Too naive?

-- 
Bojan



Re: Events, Destruction and Locking

2009-07-07 Thread Ruediger Pluem


On 07/07/2009 07:02 PM, Graham Leggett wrote:

> Ideally any async implementation should be 100% async end to end. I
> don't believe that it's necessary, though, for a single request to be
> handled by more than one thread.

I agree. I see no reason for multiple threads working on the same request
at the same time (at least handler-wise). On the other hand, it may be
interesting to develop async handlers that wait for external events, like
a POST body or a database response, and that might want to free the thread
until this event happens. The same may be interesting for filters.
So it should be possible for a request to move over to a different thread,
but no more than one thread should be working on the same request at the
same time.

Regards

Rüdiger




Re: svn commit: r791617 - in /httpd/httpd/trunk/modules: cluster/mod_heartmonitor.c proxy/balancers/mod_lbmethod_heartbeat.c

2009-07-07 Thread Ruediger Pluem


On 07/06/2009 11:14 PM, jfcl...@apache.org wrote:
> Author: jfclere
> Date: Mon Jul  6 21:14:21 2009
> New Revision: 791617
> 
> URL: http://svn.apache.org/viewvc?rev=791617&view=rev
> Log:
> Add use slotmem. Directive HeartbeatMaxServers > 10 to activate the logic.
> Otherwise it uses the file logic to store the heartbeats.
> 
> Modified:
> httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c
> httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c
> 
> Modified: httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c
> URL: 
> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c?rev=791617&r1=791616&r2=791617&view=diff
> ==
> --- httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c (original)
> +++ httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c Mon Jul  6 21:14:21 
> 2009

>  
> @@ -440,7 +530,17 @@
>  return HTTP_INTERNAL_SERVER_ERROR;
>  }
>  apr_brigade_flatten(input_brigade, buf, &len);
> -hm_processmsg(ctx, r->pool, r->connection->remote_addr, buf, len);
> +
> +/* we can't use hm_processmsg because it uses hm_get_server() */
> +buf[len] = '\0';
> +tbl = apr_table_make(r->pool, 10);
> +qs_to_table(buf, tbl, r->pool);
> +apr_sockaddr_ip_get(&ip, r->connection->remote_addr);
> +hmserver.ip = ip;
> +hmserver.busy = atoi(apr_table_get(tbl, "busy"));
> +hmserver.ready = atoi(apr_table_get(tbl, "ready"));
> +hmserver.seen = apr_time_now();
> +hm_slotmem_update_stat(&hmserver, r);

Sorry for being confused, but this means that we are storing the data in
different locations depending on whether we use the handler or the UDP
listener, and moreover we provide it in different locations for other
modules to use (shared memory / file).
Does this make sense?
IMHO we should either provide it in both locations (shared memory / file)
no matter which source contributed it, or we should make it configurable
where this information is offered.


> Modified: httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c
> URL: 
> http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c?rev=791617&r1=791616&r2=791617&view=diff
> ==
> --- httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c 
> (original)
> +++ httpd/httpd/trunk/modules/proxy/balancers/mod_lbmethod_heartbeat.c Mon 
> Jul  6 21:14:21 2009

> @@ -39,9 +47,20 @@
>  int busy;
>  int ready;
>  int seen;
> +int id;
>  proxy_worker *worker;
>  } hb_server_t;
>  
> +#define MAXIPSIZE  64
> +typedef struct hm_slot_server_t
> +{
> +char ip[MAXIPSIZE];
> +int busy;
> +int ready;
> +apr_time_t seen;
> +int id;
> +} hm_slot_server_t;
> +

Shouldn't these things go into a common include file?
I guess defining them in each file is just waiting for a
missed-update error to happen.


Regards

Rüdiger


Re: svn commit: r791454 - in /httpd/httpd/branches/2.2.x: CHANGES STATUS server/core_filters.c

2009-07-07 Thread Dan Poirier
traw...@apache.org writes:

> Author: trawick
> Date: Mon Jul  6 12:03:20 2009
> New Revision: 791454
>
> URL: http://svn.apache.org/viewvc?rev=791454&view=rev
> Log:
> SECURITY: CVE-2009-1891 (cve.mitre.org)
> Fix a potential Denial-of-Service attack against mod_deflate or other 
> modules, by forcing the server to consume CPU time in compressing a 
> large file after a client disconnects.  [Joe Orton, Ruediger Pluem]
>
> Submitted by: jorton, rpluem
> Reviewed by:  jim, trawick
>
>
> Modified:
> httpd/httpd/branches/2.2.x/CHANGES
> httpd/httpd/branches/2.2.x/STATUS
> httpd/httpd/branches/2.2.x/server/core_filters.c

Would anyone care to backport this to 2.0.x?  The changes appear to
apply trivially to the core_output_filter() in server/core.c.  I'll
attach the patch:

Index: CHANGES
===
--- CHANGES (revision 791478)
+++ CHANGES (working copy)
@@ -1,6 +1,12 @@
  -*- coding: utf-8 -*-
 Changes with Apache 2.0.64
 
+  *) SECURITY: CVE-2009-1891 (cve.mitre.org)
+ Fix a potential Denial-of-Service attack against mod_deflate or other 
+ modules, by forcing the server to consume CPU time in compressing a 
+ large file after a client disconnects.  PR 39605.
+ [Joe Orton, Ruediger Pluem]
+
   *) SECURITY: CVE-2008-2939 (cve.mitre.org)
  mod_proxy_ftp: Prevent XSS attacks when using wildcards in the path of
  the FTP URL. Discovered by Marc Bevand of Rapid7. [Ruediger Pluem]
Index: server/core.c
===
--- server/core.c   (revision 791906)
+++ server/core.c   (working copy)
@@ -3969,6 +3969,12 @@
 apr_read_type_e eblock = APR_NONBLOCK_READ;
 apr_pool_t *input_pool = b->p;
 
+/* Fail quickly if the connection has already been aborted. */
+if (c->aborted) {
+apr_brigade_cleanup(b);
+return APR_ECONNABORTED;
+}
+
 if (ctx == NULL) {
 ctx = apr_pcalloc(c->pool, sizeof(*ctx));
 net->out_ctx = ctx;
@@ -4336,12 +4342,9 @@
 /* No need to check for SUCCESS, we did that above. */
 if (!APR_STATUS_IS_EAGAIN(rv)) {
 c->aborted = 1;
+return APR_ECONNABORTED;
 }
 
-/* The client has aborted, but the request was successful. We
- * will report success, and leave it to the access and error
- * logs to note that the connection was aborted.
- */
 return APR_SUCCESS;
 }
 


-- 
Dan Poirier 


Re: Events, Destruction and Locking

2009-07-07 Thread Akins, Brian
On 7/7/09 1:02 PM, "Graham Leggett"  wrote:

> Ideally any async implementation should be 100% async end to end. I
> don't believe that it's necessary, though, for a single request to be
> handled by more than one thread.

True.  However, what about things that may be "process" intensive, i.e.
running Lua in-process?  And we'd want to run multiple async threads (or
processes). One of the issues with lighttpd with multiple processes (to use
multiple cores, etc.) is that lots of stuff is broken (e.g., log files
interleave).  We just need to be aware of the issues that other servers have
uncovered in this area.

-- 
Brian Akins



Re: Events, Destruction and Locking

2009-07-07 Thread Graham Leggett
Akins, Brian wrote:

> This is how I envisioned the async stuff working.
> 
> -Async event thread is used only for input/output of httpd to/from network*
> -After we read the headers, we pass the request/connection to the worker
> threads.  Each request is "sticky" to a thread.  Request stuff may block,
> etc, so this thread pool size is configurable and in mod_status, etc.
> -any "writes" out of the request to the client are passed into the async
> thread.  This may be wrapped in filters, whatever.
> 
> *We may allow there to be multiple ones of these, ie one for proxies, or
> have a very well defined way to add watches to this.
> 
> This is a very simplistic view.  I was basically thinking that all conn_rec
> "stuff" is handled in the async event thread, all the request_rec "stuff" is
> handled in the worker threads.

The trouble with this is that all you need to do to wedge one of the
worker threads is to promise to send two bytes as a request body, then
send one (or zero), then hang.

Ideally any async implementation should be 100% async end to end. I
don't believe that it's necessary, though, for a single request to be
handled by more than one thread.

Regards,
Graham
--


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Events, Destruction and Locking

2009-07-07 Thread Paul Querna
On Tue, Jul 7, 2009 at 12:54 PM, Akins, Brian wrote:
> This is how I envisioned the async stuff working.
>
> -Async event thread is used only for input/output of httpd to/from network*
> -After we read the headers, we pass the request/connection to the worker
> threads.  Each request is "sticky" to a thread.  Request stuff may block,
> etc, so this thread pool size is configurable and in mod_status, etc.
> -any "writes" out of the request to the client are passed into the async
> thread.  This may be wrapped in filters, whatever.
>
> *We may allow there to be multiple ones of these, ie one for proxies, or
> have a very well defined way to add watches to this.
>
> This is a very simplistic view.  I was basically thinking that all conn_rec
> "stuff" is handled in the async event thread, all the request_rec "stuff" is
> handled in the worker threads.

Right, but I think the 'waiting for X' while processing is a very
important case; it can get you to a fully async reverse proxy, which
opens up lots of possibilities.


Re: Events, Destruction and Locking

2009-07-07 Thread Akins, Brian
This is how I envisioned the async stuff working.

-Async event thread is used only for input/output of httpd to/from network*
-After we read the headers, we pass the request/connection to the worker
threads.  Each request is "sticky" to a thread.  Request stuff may block,
etc, so this thread pool size is configurable and in mod_status, etc.
-any "writes" out of the request to the client are passed into the async
thread.  This may be wrapped in filters, whatever.

*We may allow there to be multiple ones of these, ie one for proxies, or
have a very well defined way to add watches to this.

This is a very simplistic view.  I was basically thinking that all conn_rec
"stuff" is handled in the async event thread, all the request_rec "stuff" is
handled in the worker threads.


-- 
Brian Akins



Re: Events, Destruction and Locking

2009-07-07 Thread Jeff Trawick
On Tue, Jul 7, 2009 at 9:39 AM, Paul Querna  wrote:

> On Tue, Jul 7, 2009 at 8:39 AM, Graham Leggett wrote:
> > Paul Querna wrote:
> >
> >> Nah, 90% of what is done in modules today should be out of process aka
> >> in FastCGI or another method, but out of process. (regardless of
> >> MPM)
> >
> > You're just moving the problem from one server to another, the problem
> > remains unsolved. Whether the code runs within httpd space, or fastcgi
> > space, the code still needs to run, and if it's written badly, the code
> > will still leak/crash, and you still have to cater for it.
>
> Yes, but in a separate process it has fault isolation.. and we can
> restart it when it fails, neither of which are true for modules using
> the in-process API directly -- look at the reliability of QMail, or
> the newer architecture of Google's Chrome, they are both great
> examples of fault isolation.
>

Also, it simplifies the programming problem by reducing the number of
separate memory and concurrency models that must be accommodated by the
application-level code.


Re: Events, Destruction and Locking

2009-07-07 Thread Graham Leggett
Paul Querna wrote:

> It breaks the 1:1 connection-to-thread (or process) mapping, which is
> critical to a low memory footprint with thousands of connections.
> Maybe I'm just insane, but all of the servers taking market share,
> like lighttpd, nginx, etc., all use this model.
> 
> It also prevents all variations of the Slowloris stupidity, because it's
> damn hard to overwhelm the actual connection processing if it's all
> async and doesn't block a worker.

But as you've pointed out, it makes our heads bleed, and locks slow us down.

At the lowest level, the event loop should be completely async, and be
capable of supporting an arbitrary (probably very high) number of
concurrent connections.

If one connection slows or stops (deliberately or otherwise), it won't
block any other connections on the same event loop, which will continue
as normal.

The only requirement is that each request accurately registers event
deregistration functions in its cleanups, so that the request is
cleanly deregistered and future events canceled on apr_pool_destroy().

The event loop can also choose to proactively kill too-slow connections
if certain memory or concurrent-connection thresholds are reached.

Regards,
Graham
--




Re: Events, Destruction and Locking

2009-07-07 Thread Paul Querna
On Tue, Jul 7, 2009 at 10:01 AM, Graham Leggett wrote:
> Paul Querna wrote:
>
>> Yes, but in a separate process it has fault isolation.. and we can
>> restart it when it fails, neither of which are true for modules using
>> the in-process API directly -- look at the reliability of QMail, or
>> the newer architecture of Google's Chrome, they are both great
>> examples of fault isolation.
>
> As is httpd prefork :)
>
> I think the key target for the event model is for low-complexity
> scenarios like shipping raw files, or being a cache, or a reverse proxy.
>
> If we have three separate levels, a process, containing threads,
> containing an event loop, we could allow the behaviour of prefork (many
> processes, one thread, one-request-per-event-loop-at-a-time), or the
> behaviour of worker (one or many processes, many threads,
> one-request-per-event-loop-at-a-time), or an event model (one or many
> processes, one or many threads,
> many-requests-per-event-loop-at-one-time) at the same time.
>
> I am not sure that splitting request handling across threads (in your
> example, connection close handled by event on thread A, timeout handled
> by event on thread B) buys us anything (apart from the complexity you
> described).

It breaks the 1:1 connection-to-thread (or process) mapping, which is
critical to a low memory footprint with thousands of connections.
Maybe I'm just insane, but all of the servers taking market share,
like lighttpd, nginx, etc., all use this model.

It also prevents all variations of the Slowloris stupidity, because it's
damn hard to overwhelm the actual connection processing if it's all
async and doesn't block a worker.


Re: Events, Destruction and Locking

2009-07-07 Thread Graham Leggett
Paul Querna wrote:

> Yes, but in a separate process it has fault isolation.. and we can
> restart it when it fails, neither of which are true for modules using
> the in-process API directly -- look at the reliability of QMail, or
> the newer architecture of Google's Chrome, they are both great
> examples of fault isolation.

As is httpd prefork :)

I think the key target for the event model is for low-complexity
scenarios like shipping raw files, or being a cache, or a reverse proxy.

If we have three separate levels, a process, containing threads,
containing an event loop, we could allow the behaviour of prefork (many
processes, one thread, one-request-per-event-loop-at-a-time), or the
behaviour of worker (one or many processes, many threads,
one-request-per-event-loop-at-a-time), or an event model (one or many
processes, one or many threads,
many-requests-per-event-loop-at-one-time) at the same time.

I am not sure that splitting request handling across threads (in your
example, connection close handled by event on thread A, timeout handled
by event on thread B) buys us anything (apart from the complexity you
described).

Regards,
Graham
--




Re: Events, Destruction and Locking

2009-07-07 Thread Paul Querna
On Tue, Jul 7, 2009 at 8:39 AM, Graham Leggett wrote:
> Paul Querna wrote:
>
>> Nah, 90% of what is done in modules today should be out of process aka
>> in FastCGI or another method, but out of process. (regardless of
>> MPM)
>
> You're just moving the problem from one server to another, the problem
> remains unsolved. Whether the code runs within httpd space, or fastcgi
> space, the code still needs to run, and if it's written badly, the code
> will still leak/crash, and you still have to cater for it.

Yes, but in a separate process it has fault isolation.. and we can
restart it when it fails, neither of which are true for modules using
the in-process API directly -- look at the reliability of QMail, or
the newer architecture of Google's Chrome, they are both great
examples of fault isolation.


Re: Events, Destruction and Locking

2009-07-07 Thread Graham Leggett
Paul Querna wrote:

> Nah, 90% of what is done in modules today should be out of process aka
> in FastCGI or another method, but out of process. (regardless of
> MPM)

You're just moving the problem from one server to another, the problem
remains unsolved. Whether the code runs within httpd space, or fastcgi
space, the code still needs to run, and if it's written badly, the code
will still leak/crash, and you still have to cater for it.

Regards,
Graham
--




Re: Events, Destruction and Locking

2009-07-07 Thread Paul Querna
On Tue, Jul 7, 2009 at 7:34 AM, Graham Leggett wrote:
> Paul Querna wrote:
>> I think it is possible to write a complete server that deals with all
>> these intricacies and gets everything just 'right', but as soon as you
>> introduce 3rd party module writers, no matter how 'smart' we are, our
>> castle of event goodness will crumble.
>
> You've hit the nail on the head as to why the prefork and worker models
> are still relevant - they are very forgiving of "irresponsible
> behaviour" by modules.

Nah, 90% of what is done in modules today should be out of process aka
in FastCGI or another method, but out of process. (regardless of
MPM)


Re: Events, Destruction and Locking

2009-07-07 Thread Graham Leggett
Paul Querna wrote:

> Can't sleep, so finally writing this email I've been meaning to write
> for about 7 months now :D
> 
> One of the challenges in the Simple MPM, and to a smaller degree in
> the Event MPM, is how to manage memory allocation, destruction, and
> thread safety.
> 
> A 'simple' example:
>  - 1) Thread A: Client Connection Created
>  - 2) Thread A: Timer Event Added for 10 seconds in the future to
> detect IO timeout
>  - 3) Thread B: Client Socket closes in 9.99 seconds.
>  - 4) Thread C: Timer Event for IO timeout is triggered after 10 seconds
> 
> The simple answer is placing a Mutex around the connection object.
> Any operation in which two threads work on the connection locks
> this Mutex.

As you've said, locks create many problems, the most fatal of which is
that locks potentially block the event loop. Ideally if a try_lock
fails, the event should reschedule itself to try again at some point in
the near future, but that relies on people bothering to write this
logic, and I suspect many won't.

A pragmatic approach might be to handle a request completely within a
single event loop running in a single thread. In this case the timer
event for IO timeout is in the same thread as the socket close event.

At this point you just need to make sure that your pool cleanups are
handled correctly. So if a timeout runs, all the timeout does is
apr_pool_destroy(r->pool), and that's it. It is up to the pool cleanup
to make sure that all events (such as the event that calls connection
close) are cleanly deregistered so that they won't get called in future.

We may offer a mechanism (such as a watchdog) that allows a request to
kick off code in another thread, but a prerequisite for that is that the
pool cleanup will have to be created to make sure that other thread is
terminated cleanly, or the request is cleanly removed from that other
thread's event loop.

> I think it is possible to write a complete server that deals with all
> these intricacies and gets everything just 'right', but as soon as you
> introduce 3rd party module writers, no matter how 'smart' we are, our
> castle of event goodness will crumble.

You've hit the nail on the head as to why the prefork and worker models
are still relevant - they are very forgiving of "irresponsible
behaviour" by modules.

Regards,
Graham
--




Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread ricardo13



Mladen Turk-3 wrote:
> 
> ricardo13 wrote:
>> 
>> I want to modify MPM Worker (worker.c) to develop some scheduling
>> algorithms.
>> 
>> A first scheduling algorithm would implement priority: two queues
>> (worker_queue1 and worker_queue2) of sockets where threads (workers)
>> "get" all requests from worker_queue1 first, then "get" all requests
>> from worker_queue2.
>>
> 
> Now we are talking! You should explain that initially ;)
> 
>> That is what I wanted to do.
>> 
> 
> I suppose each 'queue' would bind to a different listener;
> otherwise this is a sort of throttling.
> Since the worker connection model is protocol-independent, if you
> wish URL/host-based scheduling you cannot do that inside worker,
> since the HTTP protocol is handled only after the worker handles
> the connection. The only possible solution for connection
> scheduling would be scheduling different connection pools
> (e.g. events on different listening sockets).
> 
> I've seen this Apache architecture that someone developed:
> http://www.nabble.com/file/p24370972/arch.jpeg
> 
> Note that he uses a listener (A), a queue (B) and the threads (C).
> I sent an email to him. He told me that he implemented 3 queues (one
> queue per user level).
> 
> The user levels would be GOLD, SILVER and BRONZE.
> 
> Thank you
> Ricardo
> -- 
> ^(TM)
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24370972.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread ricardo13



Graham Dumpleton-2 wrote:
> 
> 2009/7/7 ricardo13 :
>>
>>
>>
>> Graham Dumpleton-2 wrote:
>>>
>>> 2009/7/7 ricardo13 :

 Hi,

 Sorry, I didn't know I was in the wrong forum. What's the best list to
 ask this question ??
>>>
>>> You may well be on the right list, but right now it isn't too clear
>>> that you really need to be modifying the actual MPM code.
>>>
 I want to modify MPM Worker (worker.c) to develop some scheduling
 algorithms.

 A first scheduling algorithm would implement priority: two queues
 (worker_queue1 and worker_queue2) of sockets where threads (workers)
 "get" all requests from worker_queue1 first, then "get" all requests
 from worker_queue2.
>>>
>>> By what criteria would requests get delegated to each queue? In other
>>> words, what is the high level outcome you are trying to achieve. For
>>> example, are you trying to give priority to certain virtual hosts or
>>> listener ports???
>>>
>>> Firstly, the requests would be classified by IP (by a classifier module).
>>>
>>> For example:
>>>    If IP = x then forward_queue_1();
>>>    else if IP = y then forward_queue_2();
>>>
>>>
>>> Let me explain. I'm a graduate student and my final project is about
>>> web servers. I chose the subject of QoS in web servers
>>> (application-level); the usual QoS concepts are applied at the
>>> network level.
> 
> Have you seen:
> 
> http://mod-qos.sourceforge.net/
> 
> Not sure how much it overlaps what you are wanting to do.
> 
> Very interesting. I didn't know about it.
> But there are scheduling algorithms (including mathematical formulas)
> that I want to implement.
> 
> Ricardo
> 
> Graham
> 
>>> Thank you
>>> Ricardo
>>>
>>>
>>> Graham
>>>
 That is what I wanted to do.

 Thank you
 Ricardo



 Graham Dumpleton-2 wrote:
>
> Rather than keep demanding an answer to how to do whatever it is you
> want, explain why you want to do it in the first place. Given
> what looks like a rather inadequate knowledge of Apache, it is quite
> likely you are going about it the completely wrong way. So, give
> some context about why you need it and people may be able to give more
> informed answers. At which point we may also be able to suggest you
> are in the wrong forum anyway and that you can do it as a module and
> so should use modules-dev list and not the list for development of the
> core of httpd.
>
> Graham
>
> 2009/7/7 ricardo13 :
>>
>> Hi all,
>>
>> Can anybody explain what the function worker_thread in worker.c does?
>>
>> I don't know APR and didn't understand the following lines:
>>
>>        worker_sockets[thread_slot] = csd;
>>        bucket_alloc = apr_bucket_alloc_create(ptrans);
>>        process_socket(ptrans, csd, process_slot, thread_slot,
>> bucket_alloc); // Here processing the csd socket ??
>>        worker_sockets[thread_slot] = NULL;
>>        requests_this_child--; /* FIXME: should be synchronized -
>> aaron
>> */
>>
>> I need know it.
>>
>> Thank you
>> Ricardo
>>
>>
>> ricardo13 wrote:
>>>
>>> Anyone ??
>>>
>>>
>>> ricardo13 wrote:

 Hi all,

 I would like to know how to create another queue of requests ?? Where
 do I create it ?? In worker.c ??

 Thank you
 Ricardo


>>>
>>>
>>
>>
>>
>
>



>>>
>>>
>>
>>
>>
> 
> 




Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread ricardo13


Mladen Turk-3 wrote:
> 
> ricardo13 wrote:
>> 
>> I want to modify MPM Worker (worker.c) to develop some scheduling
>> algorithms.
>> 
>> A first scheduling algorithm would implement priority: two queues
>> (worker_queue1 and worker_queue2) of sockets where threads (workers)
>> "get" all requests from worker_queue1 first, then "get" all requests
>> from worker_queue2.
>>
> 
> Sorry, but I didn't understand !!
> Ricardo
> 
> Now we are talking! You should explain that initially ;)
> 
>> That is what I wanted to do.
>> 
> 
> I suppose each 'queue' would bind to a different listener;
> otherwise this is a sort of throttling.
> Since the worker connection model is protocol-independent, if you
> wish URL/host-based scheduling you cannot do that inside worker,
> since the HTTP protocol is handled only after the worker handles
> the connection. The only possible solution for connection
> scheduling would be scheduling different connection pools
> (e.g. events on different listening sockets).
> 
> 
> Regards
> -- 
> ^(TM)
> 
> 




Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread Graham Dumpleton
2009/7/7 ricardo13 :
>
>
>
> Graham Dumpleton-2 wrote:
>>
>> 2009/7/7 ricardo13 :
>>>
>>> Hi,
>>>
>>> Sorry, I didn't know I was in the wrong forum. What's the best list
>>> to ask this question ??
>>
>> You may well be on the right list, but right now it isn't too clear
>> that you really need to be modifying the actual MPM code.
>>
>>> I want to modify MPM Worker (worker.c) to develop some scheduling
>>> algorithms.
>>>
>>> A first scheduling algorithm would implement priority: two queues
>>> (worker_queue1 and worker_queue2) of sockets where threads (workers)
>>> "get" all requests from worker_queue1 first, then "get" all requests
>>> from worker_queue2.
>>
>> By what criteria would requests get delegated to each queue? In other
>> words, what is the high level outcome you are trying to achieve. For
>> example, are you trying to give priority to certain virtual hosts or
>> listener ports???
>>
>> Firstly, the requests would be classified by IP (by a classifier module).
>>
>> For example:
>>    If IP = x then forward_queue_1();
>>    else if IP = y then forward_queue_2();
>>
>>
>> Let me explain. I'm a graduate student and my final project is about
>> web servers. I chose the subject of QoS in web servers
>> (application-level); the usual QoS concepts are applied at the
>> network level.

Have you seen:

http://mod-qos.sourceforge.net/

Not sure how much it overlaps what you are wanting to do.

Graham

>> Thank you
>> Ricardo
>>
>>
>> Graham
>>
>>> That is what I wanted to do.
>>>
>>> Thank you
>>> Ricardo
>>>
>>>
>>>
>>> Graham Dumpleton-2 wrote:

 Rather than keep demanding an answer to how to do whatever it is you
 want, that you explain why you want to do it in the first place. Given
 what looks like a rather inadequate knowledge of Apache, it is quite
 likely you are going about it all the completely wrong way. So, give
 some context about why you need it and people may be able to give more
 informed answers. At which point we may also be able to suggest you
 are in the wrong forum anyway and that you can do it as a module and
 so should use modules-dev list and not the list for development of the
 core of httpd.

 Graham

 2009/7/7 ricardo13 :
>
> Hi all,
>
> Can anybody explain what's doing the function worker_thread in worker.c
> ?
>
> I dont't know APR and don't undestood the following lines:
>
>        worker_sockets[thread_slot] = csd;
>        bucket_alloc = apr_bucket_alloc_create(ptrans);
>        process_socket(ptrans, csd, process_slot, thread_slot,
> bucket_alloc); // Here processing the csd socket ??
>        worker_sockets[thread_slot] = NULL;
>        requests_this_child--; /* FIXME: should be synchronized - aaron
> */
>
> I need know it.
>
> Thank you
> Ricardo
>
>
> ricardo13 wrote:
>>
>> Anyone ??
>>
>>
>> ricardo13 wrote:
>>>
>>> Hi all,
>>>
>>> I would like to know how I create other queue of requests ?? Where I
>>> create ?? worker.c ??
>>>
>>> Thank you
>>> Ricardo
>>>
>>>
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24357634.html
> Sent from the Apache HTTP Server - Dev mailing list archive at
> Nabble.com.
>
>


>>>
>>> --
>>> View this message in context:
>>> http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24370202.html
>>> Sent from the Apache HTTP Server - Dev mailing list archive at
>>> Nabble.com.
>>>
>>>
>>
>>
>
> --
> View this message in context: 
> http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24370640.html
> Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.
>
>


Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread ricardo13



Graham Dumpleton-2 wrote:
> 
> 2009/7/7 ricardo13 :
>>
>> Hi,
>>
>> Sorry, I didn't know that was in wrong forum. What's the best list to
>> write
>> this doubt ??
> 
> You may well be on the right list, but right now it isn't too clear
> that you really need to be modifying the actual MPM code.
> 
>> I want to modify MPM Worker (worker.c) to develop some scheduling
>> algorithms.
>>
>> A first scheduling algorithm would be implement priority. Two queues
>> (worker_queue1 and worker_queue2) of sockets where threads (workers)
>> "get"
>> all requests from worker_queue1 first, after"get" all requests from
>> worker_queue2.
> 
> By what criteria would requests get delegated to each queue? In other
> words, what is the high level outcome you are trying to achieve. For
> example, are you trying to give priority to certain virtual hosts or
> listener ports???
> 
> Firstly, The requests would be classified (module of classify) by IP.
> 
> For example:
>If IP = x then forward_queue_1();
>else if IP = y then forward_queue_2();
> 
> 
> I want explain. I'm studying graduate and my final test is a project about
> webservers.
> I choose subject about QoS in webservers (application-level). The concepts
> about QoS are apply in network-level.
> 
> Thank you
> Ricardo
> 
> 
> Graham
> 
>> That is what I wanted to do.
>>
>> Thank you
>> Ricardo
>>
>>
>>
>> Graham Dumpleton-2 wrote:
>>>
>>> Rather than keep demanding an answer to how to do whatever it is you
>>> want, that you explain why you want to do it in the first place. Given
>>> what looks like a rather inadequate knowledge of Apache, it is quite
>>> likely you are going about it all the completely wrong way. So, give
>>> some context about why you need it and people may be able to give more
>>> informed answers. At which point we may also be able to suggest you
>>> are in the wrong forum anyway and that you can do it as a module and
>>> so should use modules-dev list and not the list for development of the
>>> core of httpd.
>>>
>>> Graham
>>>
>>> 2009/7/7 ricardo13 :

 Hi all,

 Can anybody explain what's doing the function worker_thread in worker.c
 ?

 I dont't know APR and don't undestood the following lines:

        worker_sockets[thread_slot] = csd;
        bucket_alloc = apr_bucket_alloc_create(ptrans);
        process_socket(ptrans, csd, process_slot, thread_slot,
 bucket_alloc); // Here processing the csd socket ??
        worker_sockets[thread_slot] = NULL;
        requests_this_child--; /* FIXME: should be synchronized - aaron
 */

 I need know it.

 Thank you
 Ricardo


 ricardo13 wrote:
>
> Anyone ??
>
>
> ricardo13 wrote:
>>
>> Hi all,
>>
>> I would like to know how I create other queue of requests ?? Where I
>> create ?? worker.c ??
>>
>> Thank you
>> Ricardo
>>
>>
>
>



>>>
>>>
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24370202.html
>> Sent from the Apache HTTP Server - Dev mailing list archive at
>> Nabble.com.
>>
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24370640.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread Mladen Turk

ricardo13 wrote:


I want to modify MPM Worker (worker.c) to develop some scheduling
algorithms.

A first scheduling algorithm would be implement priority. Two queues
(worker_queue1 and worker_queue2) of sockets where threads (workers) "get"
all requests from worker_queue1 first, after"get" all requests from
worker_queue2.



Now we are talking! You should have explained that initially ;)


That is what I wanted to do.



I suppose each 'queue' would bind to a different listener;
otherwise this is a sort of throttling.
Since the worker connection model is protocol independent,
if you wish URL/host-based scheduling you cannot do that inside
worker, since the HTTP protocol is handled only after the worker
has accepted the connection. The only possible solution for
connection-level scheduling would be scheduling different
connection pools (e.g. events on different listening sockets).
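As a rough sketch of what this could mean in configuration terms (hostnames and the priority split are invented for illustration): each listener becomes its own accept queue at the OS level, and clients are pointed at one listener or the other, rather than splitting a queue inside worker.c.

```
# Hypothetical sketch only: two listeners, each feeding its own
# "connection pool"; prioritization happens by which port a
# client is directed to, not inside the MPM.
Listen 80        # high-priority clients
Listen 8080      # low-priority clients

<VirtualHost *:80>
    ServerName high.example.com
</VirtualHost>

<VirtualHost *:8080>
    ServerName low.example.com
</VirtualHost>
```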


Regards
--
^(TM)


Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread Graham Dumpleton
2009/7/7 ricardo13 :
>
> Hi,
>
> Sorry, I didn't know that was in wrong forum. What's the best list to write
> this doubt ??

You may well be on the right list, but right now it isn't too clear
that you really need to be modifying the actual MPM code.

> I want to modify MPM Worker (worker.c) to develop some scheduling
> algorithms.
>
> A first scheduling algorithm would be implement priority. Two queues
> (worker_queue1 and worker_queue2) of sockets where threads (workers) "get"
> all requests from worker_queue1 first, after"get" all requests from
> worker_queue2.

By what criteria would requests get delegated to each queue? In other
words, what is the high-level outcome you are trying to achieve? For
example, are you trying to give priority to certain virtual hosts or
listener ports?

Graham

> That is what I wanted to do.
>
> Thank you
> Ricardo
>
>
>
> Graham Dumpleton-2 wrote:
>>
>> Rather than keep demanding an answer to how to do whatever it is you
>> want, that you explain why you want to do it in the first place. Given
>> what looks like a rather inadequate knowledge of Apache, it is quite
>> likely you are going about it all the completely wrong way. So, give
>> some context about why you need it and people may be able to give more
>> informed answers. At which point we may also be able to suggest you
>> are in the wrong forum anyway and that you can do it as a module and
>> so should use modules-dev list and not the list for development of the
>> core of httpd.
>>
>> Graham
>>
>> 2009/7/7 ricardo13 :
>>>
>>> Hi all,
>>>
>>> Can anybody explain what's doing the function worker_thread in worker.c ?
>>>
>>> I dont't know APR and don't undestood the following lines:
>>>
>>>        worker_sockets[thread_slot] = csd;
>>>        bucket_alloc = apr_bucket_alloc_create(ptrans);
>>>        process_socket(ptrans, csd, process_slot, thread_slot,
>>> bucket_alloc); // Here processing the csd socket ??
>>>        worker_sockets[thread_slot] = NULL;
>>>        requests_this_child--; /* FIXME: should be synchronized - aaron */
>>>
>>> I need know it.
>>>
>>> Thank you
>>> Ricardo
>>>
>>>
>>> ricardo13 wrote:

 Anyone ??


 ricardo13 wrote:
>
> Hi all,
>
> I would like to know how I create other queue of requests ?? Where I
> create ?? worker.c ??
>
> Thank you
> Ricardo
>
>


>>>
>>>
>>>
>>
>>
>
>
>


Re: Where Do I Create Queues in MPM Worker

2009-07-07 Thread ricardo13

Hi,

Sorry, I didn't know this was the wrong forum. What's the best list for
this question?

I want to modify MPM Worker (worker.c) to develop some scheduling
algorithms.

A first scheduling algorithm would be to implement priority: two queues
(worker_queue1 and worker_queue2) of sockets, where threads (workers) "get"
all requests from worker_queue1 first, then "get" all requests from
worker_queue2.
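As a rough illustration only (plain C, not actual worker.c code; the queue type and function names here are invented), the strict-priority "get" described above could look like this. In the real worker MPM the queue is fdqueue.c's ap_queue_t holding (socket, pool) pairs; plain ints stand in for sockets here.

```c
#include <assert.h>
#include <stddef.h>

/* Toy fixed-size FIFO; a real version would be fdqueue.c's
 * condition-variable-protected ap_queue_t. */
typedef struct {
    int items[64];
    size_t head, tail;
} simple_queue;

static void q_push(simple_queue *q, int v)  { q->items[q->tail++] = v; }
static int  q_empty(const simple_queue *q)  { return q->head == q->tail; }
static int  q_pop(simple_queue *q)          { return q->items[q->head++]; }

/* Strict-priority dequeue: drain queue1 completely before touching
 * queue2. Returns 1 and stores the item in *out, or 0 if both empty. */
static int pop_next(simple_queue *q1, simple_queue *q2, int *out)
{
    if (!q_empty(q1)) { *out = q_pop(q1); return 1; }
    if (!q_empty(q2)) { *out = q_pop(q2); return 1; }
    return 0;
}
```

A real version would also need the same condition-variable signalling that fdqueue.c uses, so idle worker threads wake up when either queue gains an entry.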

That is what I wanted to do.

Thank you
Ricardo



Graham Dumpleton-2 wrote:
> 
> Rather than keep demanding an answer to how to do whatever it is you
> want, that you explain why you want to do it in the first place. Given
> what looks like a rather inadequate knowledge of Apache, it is quite
> likely you are going about it all the completely wrong way. So, give
> some context about why you need it and people may be able to give more
> informed answers. At which point we may also be able to suggest you
> are in the wrong forum anyway and that you can do it as a module and
> so should use modules-dev list and not the list for development of the
> core of httpd.
> 
> Graham
> 
> 2009/7/7 ricardo13 :
>>
>> Hi all,
>>
>> Can anybody explain what's doing the function worker_thread in worker.c ?
>>
>> I dont't know APR and don't undestood the following lines:
>>
>>        worker_sockets[thread_slot] = csd;
>>        bucket_alloc = apr_bucket_alloc_create(ptrans);
>>        process_socket(ptrans, csd, process_slot, thread_slot,
>> bucket_alloc); // Here processing the csd socket ??
>>        worker_sockets[thread_slot] = NULL;
>>        requests_this_child--; /* FIXME: should be synchronized - aaron */
>>
>> I need know it.
>>
>> Thank you
>> Ricardo
>>
>>
>> ricardo13 wrote:
>>>
>>> Anyone ??
>>>
>>>
>>> ricardo13 wrote:

 Hi all,

 I would like to know how I create other queue of requests ?? Where I
 create ?? worker.c ??

 Thank you
 Ricardo


>>>
>>>
>>
>>
>>
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Where-Do-I-Create-Queues-in-MPM-Worker-tp24354526p24370202.html
Sent from the Apache HTTP Server - Dev mailing list archive at Nabble.com.



Re: Events, Destruction and Locking

2009-07-07 Thread Mladen Turk

Paul Querna wrote:


This deals with removing an event from the pollset, but what about an
event that had already fired, as I gave in the original example  of a
timeout event firing the same time a socket close event happened?



In that case I suppose the only solution is to make the operations
atomic. Since both operations would lead to the same result
(closing the connection), I suppose an atomic state flag should be enough.
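A minimal sketch of such an atomic state flag (illustrative only, using C11 atomics rather than APR's apr_atomic API; the names are invented): both the timeout path and the socket-close path try to claim the close, and exactly one caller wins.

```c
#include <assert.h>
#include <stdatomic.h>

/* closing: 0 = connection still open, 1 = some event already
 * claimed the right to close it. */
typedef struct {
    atomic_int closing;
} conn_state;

/* Returns nonzero for exactly one caller; the loser must back off
 * instead of touching the (possibly already destroyed) connection. */
static int try_claim_close(conn_state *cs)
{
    int expected = 0;
    /* Compare-and-swap 0 -> 1: succeeds only the first time. */
    return atomic_compare_exchange_strong(&cs->closing, &expected, 1);
}
```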


In that state you have two threads both in a 'run state' for a
connection, and I'm not sure how the pre-cleanup to pools solves this
in any way?



It won't, because the pool cleanup API doesn't bother with
cleanup callback return values, so there's no way to bail out
of the pool cleanup call. I suppose we could modify the
pre-cleanup to handle the retval from each callback and break the
entire pool cleanup if one of them returns something other
than APR_SUCCESS. Then the callback function can decide
whether there is a pending close operation or not.
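Note this would be an API change; APR's existing pool cleanup ignores callback return values. A toy model of the proposed behavior (all names invented, not APR code) might be:

```c
#include <assert.h>

#define MY_SUCCESS 0

typedef int (*precleanup_fn)(void *data);

/* Hypothetical variant of pool pre-cleanup: unlike the real pool
 * destroy path, which runs every cleanup unconditionally, this one
 * stops at the first callback reporting a pending operation and
 * propagates its status, leaving the remaining cleanups untouched. */
static int run_precleanups(precleanup_fn *fns, void **data, int n)
{
    for (int i = 0; i < n; i++) {
        int rv = fns[i](data[i]);
        if (rv != MY_SUCCESS)
            return rv; /* bail out: e.g. a close is still pending */
    }
    return MY_SUCCESS;
}

/* Sample callbacks for illustration. */
static int ok_cleanup(void *d)     { (void)d; return MY_SUCCESS; }
static int pending_close(void *d)  { *(int *)d = 1; return 42; }
```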


Regards
--
^(TM)