Problems using Apache::Test on Debian (and Ubuntu)

2017-03-03 Thread Sam Tregar
Hello all.  I've been working on getting Apache::Test running on Debian and
it's not going well.  One problem seems to be that Debian's system Apache
conf is not named what Apache::Test thinks it should be named (apache2.conf
vs httpd.conf).

After solving that problem I hit a bigger one - the system apache2.conf
file shipped with Debian is quite peculiar.  It requires numerous
environment variables (e.g. APACHE_RUN_DIR, APACHE_RUN_USER, etc.) which are
set by apache2ctl.  Worse, it doesn't define a ServerRoot, and apxs doesn't
have a value for prefix either.

Part of this problem is described in this bug report:

https://rt.cpan.org/Public/Bug/Display.html?id=118445

In short it seems like Apache::Test relies on the system Apache conf being
fairly vanilla and sane, but the Debian/Ubuntu maintainers have different
ideas.  I'm not sure what to do here - perhaps package up a very simple
default config for Debian/Ubuntu and sub that in?
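
For what it's worth, Debian's apache2.conf only resolves if the variables
from /etc/apache2/envvars are exported (apache2ctl normally sources that
file), so something along these lines should at least get past the
undefined-variable errors, though it doesn't solve the missing ServerRoot
or the apxs prefix:

. /etc/apache2/envvars    # exports APACHE_RUN_USER, APACHE_RUN_GROUP, APACHE_RUN_DIR, APACHE_LOG_DIR, ...
make test                 # or however the Apache::Test suite is being invoked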

Any help would be appreciated.  I'm happy to work up a patch if I can only
figure out what needs to change.

Thanks!

Sam


Re: Read scattered to gather send

2017-03-03 Thread Jacob Champion

On 03/03/2017 11:38 AM, Yann Ylavic wrote:

First fix :)

On Fri, Mar 3, 2017 at 6:41 PM, Yann Ylavic  wrote:


With apr_allocator_bulk_alloc(), one can request several apr_memnode_t
of a fixed (optional) or minimal given size, and in the worst case get
a single one (allocated), or in the best case as many free ones as are
available (within a maximal size, also given).


The non-fixed version was buggy (couldn't reuse lower indexes), so
"apr_allocators-bulk-v2.patch" is attached.
Will post some numbers (w.r.t. buffers' reuse and writev sizes) soon.


Neat! I will try to give this a shot some time next week (and I still 
haven't followed up on your mod_bucketeer fix in another thread, sorry...).


--Jacob


Re: release v1.9.2

2017-03-03 Thread Stefan Priebe - Profihost AG
Hi Stefan,

live. Sorry for the late reply.

Stefan

Am 28.02.2017 um 11:49 schrieb Stefan Eissing:
> Meh, my mistake. Sorry about that. Did not cleanup properly. Dare you with an 
> improved version:
> 
> 
> 
> 
> 
>> Am 28.02.2017 um 08:41 schrieb Stefan Priebe - Profihost AG 
>> :
>>
>> That one breaks everything. Multiple crashes per second.
>>
>> [Thread debugging using libthread_db enabled]
>> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>> Core was generated by `/usr/local/apache/bin/httpd -DFOREGROUND'.
>> Program terminated with signal SIGABRT, Aborted.
>> #0  0x7f02724ea067 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> #0  0x7f02724ea067 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> #1  0x7f02724eb448 in abort () from /lib/x86_64-linux-gnu/libc.so.6
>> #2  0x7f02724e3266 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #3  0x7f02724e3312 in __assert_fail () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x7f027287062f in __pthread_tpp_change_priority ()
>>   from /lib/x86_64-linux-gnu/libpthread.so.0
>> #5  0x7f0272865e9f in __pthread_mutex_lock_full ()
>>   from /lib/x86_64-linux-gnu/libpthread.so.0
>> #6  0x7f0272a98db8 in allocator_alloc (in_size=in_size@entry=8032,
>>allocator=0x7f025c03f3c0) at memory/unix/apr_pools.c:244
>> #7  apr_allocator_alloc (allocator=0x7f025c03f3c0, size=size@entry=8032)
>>at memory/unix/apr_pools.c:438
>> #8  0x7f0272cbcac4 in apr_bucket_alloc (size=8032, size@entry=8000,
>>list=0x7f022c0008e8) at buckets/apr_buckets_alloc.c:157
>> #9  0x7f0272cbdca2 in socket_bucket_read (a=0x7f022c000b08,
>>str=0x7f02627a87b8, len=0x7f02627a87c0, block=)
>>at buckets/apr_buckets_socket.c:34
>> #10 0x565098cf1fb1 in ap_core_input_filter (f=0x7f022c0056b0,
>>b=0x7f022c005630, mode=, block=APR_NONBLOCK_READ,
>>readbytes=5) at core_filters.c:235
>> #11 0x565098d3aaca in logio_in_filter (f=,
>>bb=0x7f022c005630, mode=, block=,
>>readbytes=) at mod_logio.c:165
>> #12 0x565098d45b9a in bio_filter_in_read (bio=0x7f024800b460,
>>in=0x7f02480492a3 "", inlen=5) at ssl_engine_io.c:483
>> #13 0x7f02736b014c in BIO_read ()
>>   from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0
>> #14 0x7f0273a13c92 in ?? () from
>> /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
>> #15 0x7f0273a1548d in ?? () from
>> /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
>> #16 0x7f0273a12024 in ?? () from
>> /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
>> #17 0x565098d469f3 in ssl_io_input_read (inctx=0x7f022c0035b8,
>>buf=0x7f022c003600 "GET
>> /media/image/07/3c/c2/4612_13721_160192280322.jpeg HTTP/1.1\r\nHost:
>> www.onlinestore.it\r\nUser-Agent: curl/7.15+ (x64-criteo) libcurl/7.15+
>> OpenSSL zlib libidn\r\nAccept: */*\r\n\r\n", len=0x7f02627a8b38)
>>at ssl_engine_io.c:614
>> #18 0x565098d493ec in ssl_io_filter_input (f=0x7f022c005608,
>>bb=0x7f022c006dd0, mode=, block=,
>>readbytes=8000) at ssl_engine_io.c:1474
>> #19 0x565098d5cd85 in h2_filter_core_input (f=0x7f022c005980,
>>brigade=0x7f022c006dd0, mode=738225624, block=APR_NONBLOCK_READ,
>>readbytes=8000) at h2_filter.c:149
>> #20 0x565098d6809b in h2_session_read (session=0x7f022c006920,
>>block=) at h2_session.c:1440
>> #21 0x565098d6b97f in h2_session_process (session=0x7f022c006920,
>>async=3964) at h2_session.c:2074
>> #22 0x565098d5b84e in h2_conn_run (ctx=,
>> c=0x7f025c03f7d8)
>>at h2_conn.c:218
>> #23 0x565098d5e551 in h2_h2_process_conn (c=0x7f025c03f7d8) at
>> h2_h2.c:658
>> #24 0x565098d00fd0 in ap_run_process_connection (c=0x7f025c03f7d8)
>>at connection.c:42
>> #25 0x565098d9b6d0 in process_socket (my_thread_num=,
>>my_child_num=, cs=0x7f025c03f748, sock=,
>>p=, thd=) at event.c:1134
>> #26 worker_thread (thd=0xf5e, dummy=0xf7c) at event.c:2137
>> #27 0x7f02728680a4 in start_thread ()
>>   from /lib/x86_64-linux-gnu/libpthread.so.0
>> #28 0x7f027259d62d in clone () from /lib/x86_64-linux-gnu/libc.so.6
>> (gdb) (gdb) quit
>> Reading symbols from /usr/local/apache/bin/httpd...Reading symbols from
>> /usr/lib/debug//usr/local/apache2/bin/httpd...done.
>> done.
>> [Thread debugging using libthread_db enabled]
>> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
>> Core was generated by `/usr/local/apache/bin/httpd -DFOREGROUND'.
>> Program terminated with signal SIGABRT, Aborted.
>> #0  0x7f02724ea067 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> #0  0x7f02724ea067 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>> #1  0x7f02724eb448 in abort () from /lib/x86_64-linux-gnu/libc.so.6
>> #2  0x7f02724e3266 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
>> #3  0x7f02724e3312 in __assert_fail () from
>> /lib/x86_64-linux-gnu/libc.so.6
>> #4  0x7f027287062f in __pthread_tpp_change_priority ()
>>   from /lib/x86_64-linux-gnu/libpthread.so.0
>> #5  

Re: Read scattered to gather send (was [too long]: httpd 2.4.25, mpm_event, ssl: segfaults)

2017-03-03 Thread Yann Ylavic
First fix :)

On Fri, Mar 3, 2017 at 6:41 PM, Yann Ylavic  wrote:
>
> With apr_allocator_bulk_alloc(), one can request several apr_memnode_t
> of a fixed (optional) or minimal given size, and in the worst case get
> a single one (allocated), or in the best case as many free ones as are
> available (within a maximal size, also given).

The non-fixed version was buggy (couldn't reuse lower indexes), so
"apr_allocators-bulk-v2.patch" is attached.
Will post some numbers (w.r.t. buffers' reuse and writev sizes) soon.
Index: srclib/apr/include/apr_allocator.h
===
--- srclib/apr/include/apr_allocator.h	(revision 1763844)
+++ srclib/apr/include/apr_allocator.h	(working copy)
@@ -93,6 +93,26 @@ APR_DECLARE(apr_memnode_t *) apr_allocator_alloc(a
  apr_size_t size)
  __attribute__((nonnull(1)));
 
+/* Get free nodes of [@a total_size:@a block_size] bytes, or allocate
+ * one of @a block_size bytes if none is found, putting the returned
+ * number of nodes in @a num and the number of bytes in @a size.
+ * @param a The allocator to allocate from
+ * @param block_size The minimum size of a mem block (excluding the memnode
+ *   structure)
+ * @param total_size The maximum overall size to get on input, the gotten
+ *   size on output (excluding the memnode structure for each
+ *   block)
+ * @param blocks_num The number of nodes returned in the list (can be NULL)
+ * @param blocks_fixed Whether all blocks should have a fixed size (i.e.
+ * @a block_size rounded up to boundary)
+ */
+APR_DECLARE(apr_memnode_t *) apr_allocator_bulk_alloc(apr_allocator_t *a,
+  apr_size_t block_size,
+  apr_size_t *total_size,
+  apr_size_t *blocks_num,
+  int blocks_fixed)
+ __attribute__((nonnull(1,3)));
+
 /**
  * Free a list of blocks of mem, giving them back to the allocator.
  * The list is typically terminated by a memnode with its next field
@@ -104,6 +124,13 @@ APR_DECLARE(void) apr_allocator_free(apr_allocator
  apr_memnode_t *memnode)
   __attribute__((nonnull(1,2)));
 
+/**
+ * Get the aligned size corresponding to the requested size
+ * @param size The size to align
+ * @return The aligned size
+ */
+APR_DECLARE(apr_size_t) apr_allocator_align(apr_size_t size);
+
 #include "apr_pools.h"
 
 /**
Index: srclib/apr/memory/unix/apr_pools.c
===
--- srclib/apr/memory/unix/apr_pools.c	(revision 1763844)
+++ srclib/apr/memory/unix/apr_pools.c	(working copy)
@@ -115,7 +115,10 @@ struct apr_allocator_t {
 
 #define SIZEOF_ALLOCATOR_T  APR_ALIGN_DEFAULT(sizeof(apr_allocator_t))
 
+/* Returns the amount of free space in the given node. */
+#define node_free_space(node_) ((apr_size_t)(node_->endp - node_->first_avail))
 
+
 /*
  * Allocator
  */
@@ -209,38 +212,21 @@ APR_DECLARE(void) apr_allocator_max_free_set(apr_a
 #endif
 }
 
-static APR_INLINE
-apr_memnode_t *allocator_alloc(apr_allocator_t *allocator, apr_size_t in_size)
+static
+apr_memnode_t *allocator_alloc_index(apr_allocator_t *allocator,
+ apr_uint32_t index, apr_size_t size,
+ int lock, int free, int fixed)
 {
 apr_memnode_t *node, **ref;
 apr_uint32_t max_index;
-apr_size_t size, i, index;
+apr_size_t i;
 
-/* Round up the block size to the next boundary, but always
- * allocate at least a certain size (MIN_ALLOC).
- */
-size = APR_ALIGN(in_size + APR_MEMNODE_T_SIZE, BOUNDARY_SIZE);
-if (size < in_size) {
-return NULL;
-}
-if (size < MIN_ALLOC)
-size = MIN_ALLOC;
-
-/* Find the index for this node size by
- * dividing its size by the boundary size
- */
-index = (size >> BOUNDARY_INDEX) - 1;
-
-if (index > APR_UINT32_MAX) {
-return NULL;
-}
-
 /* First see if there are any nodes in the area we know
  * our node will fit into.
  */
 if (index <= allocator->max_index) {
 #if APR_HAS_THREADS
-if (allocator->mutex)
+if (lock)
 apr_thread_mutex_lock(allocator->mutex);
 #endif /* APR_HAS_THREADS */
 
@@ -257,9 +243,11 @@ APR_DECLARE(void) apr_allocator_max_free_set(apr_a
 max_index = allocator->max_index;
 ref = &allocator->free[index];
 i = index;
-while (*ref == NULL && i < max_index) {
-   ref++;
-   i++;
+if (!fixed) {
+while (*ref == NULL && i < max_index) {
+   ref++;
+   i++;
+}
 }
 
 if ((node = 

Re: svn commit: r1785116 - in /httpd/httpd/branches/2.4.x: ./ modules/lua/config.m4

2017-03-03 Thread Jacob Champion

On 03/03/2017 10:27 AM, Jim Jagielski wrote:

But what if you use the full path?


That would work if I'd compiled Lua myself and installed into a separate 
prefix, but on my machine all the lua versions are right next to each 
other in the same prefix. So 5.3 will still take precedence.


To try to clear up what's going on: Ubuntu's (and as I understand it, 
Debian's) version of liblua5.3 has not been compiled with some of the 
optional 5.1/2 compatibility APIs. mod_lua relies on them, and it won't 
load when compiled against liblua5.3 on my machine.


IOW: this patch forces the use of 5.3 if a Debian user has it installed, 
but mod_lua won't work with 5.3 on that platform.


My preference is to move to the new replacements for those APIs (which
have been deprecated for six years now, I think), fixing this for everyone
going forward.  But I don't write Lua and I'm not familiar with the stack
API.  I'm happy to share more research if it helps someone implement the
replacements.
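
From the reading I've done so far, the general shape of the migration looks
something like the sketch below. This is illustration only, not mod_lua's
actual code ("mymod", "my_hello" and friends are made up); the real work
would be in mod_lua's own registration paths.

/* Hedged sketch: registering module functions with the Lua 5.2+ API
 * instead of the old luaL_openlib()/luaL_register() calls. */
#include <lua.h>
#include <lauxlib.h>

static int my_hello(lua_State *L)
{
    lua_pushliteral(L, "hello");
    return 1;                       /* one value returned to Lua */
}

static const luaL_Reg my_funcs[] = {
    { "hello", my_hello },
    { NULL, NULL }
};

int luaopen_mymod(lua_State *L)
{
    /* Old style (the symbol missing from Debian's liblua5.3):
     *     luaL_openlib(L, "mymod", my_funcs, 0);
     * 5.2+ style: build the table explicitly and register the functions. */
    luaL_newlib(L, my_funcs);       /* lua_createtable() + luaL_setfuncs(L, my_funcs, 0) */
    return 1;                       /* the module table is the single result */
}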


--Jacob


Re: svn commit: r1785116 - in /httpd/httpd/branches/2.4.x: ./ modules/lua/config.m4

2017-03-03 Thread Jim Jagielski
But what if you use the full path?

> On Mar 3, 2017, at 1:17 PM, Jacob Champion  wrote:
> 
> On 03/02/2017 04:27 AM, j...@apache.org wrote:
>> Author: jim
>> Date: Thu Mar  2 12:27:23 2017
>> New Revision: 1785116
>> 
>> URL: http://svn.apache.org/viewvc?rev=1785116&view=rev
>> Log:
>> Merge r1785115 from trunk:
>> 
>> Look for specific versioned installs of Lua 5.3
>> Reviewed by: jim
>> 
>> Modified:
>>httpd/httpd/branches/2.4.x/   (props changed)
>>httpd/httpd/branches/2.4.x/modules/lua/config.m4
> 
> -1. Both this backport and the trunk commit it's based on break mod_lua on 
> Ubuntu 16.04. Lua 5.3 is not ready for primetime on Debian-alikes.
> 
>Cannot load /tmp/apache-svn/modules/mod_lua.so into server: 
> /tmp/apache-svn/modules/mod_lua.so: undefined symbol: luaL_openlib
> 
> --Jacob



Re: svn commit: r1785116 - in /httpd/httpd/branches/2.4.x: ./ modules/lua/config.m4

2017-03-03 Thread Jacob Champion

On 03/02/2017 04:27 AM, j...@apache.org wrote:

Author: jim
Date: Thu Mar  2 12:27:23 2017
New Revision: 1785116

URL: http://svn.apache.org/viewvc?rev=1785116&view=rev
Log:
Merge r1785115 from trunk:

Look for specific versioned installs of Lua 5.3
Reviewed by: jim

Modified:
httpd/httpd/branches/2.4.x/   (props changed)
httpd/httpd/branches/2.4.x/modules/lua/config.m4


-1. Both this backport and the trunk commit it's based on break mod_lua 
on Ubuntu 16.04. Lua 5.3 is not ready for primetime on Debian-alikes.


Cannot load /tmp/apache-svn/modules/mod_lua.so into server: 
/tmp/apache-svn/modules/mod_lua.so: undefined symbol: luaL_openlib


--Jacob


Re: Read scattered to gather send (was [too long]: httpd 2.4.25, mpm_event, ssl: segfaults)

2017-03-03 Thread Yann Ylavic
On Fri, Mar 3, 2017 at 6:41 PM, Yann Ylavic  wrote:
>
> Currently I have good results (with gdb/LOG_TRACE, no stress test yet ;)
>
> For "http:" (main server) with:
>
> EnableMMAP off
> EnableSendfile off
>
> EnableScatterReadfile on
> #FileReadBufferSize 8192 <= default
>
> FlushThreshold 1048576

Sorry, wrong copy/paste value; I tested that, but 1MB doesn't look really
reasonable for a common case.
So:

 FlushThreshold 131072

should be enough for an (already powerful) test ;)

> FlushMemThreshold 65536
> FlushMaxPipelined 5

>
> And for "https:" (vhost) with overridden:
>
> EnableScatterReadfile fixed
>
> FileReadBufferSize 16384

The other values are fine I think.


Threaded MPM Graceful segfaults

2017-03-03 Thread Jacob Perkins
Howdy,

I adjusted the patch from Yann ( https://svn.apache.org/viewvc?view=revision&revision=1783849 ) and got it compiling into Apache 2.4 mainline. The patch I'm using is attached to this email.

During testing of this patch, we noticed that CGId is now crashing and not handling restarts correctly. This causes any following requests to be returned with a 503.

[Thu Mar 02 12:28:40.926742 2017] [cgid:error] [pid 7247:tid 13133972608] (98)Address already in use: AH01243: Couldn't bind unix domain socket /etc/apache2/run/cgid_sock.6322
[Thu Mar 02 12:28:41.927489 2017] [cgid:crit] [pid 6322:tid 13133972608] AH01238: cgid daemon failed to initialize
[Thu Mar 02 12:37:19.159596 2017] [cgid:error] [pid 8552:tid 139765481343104] (98)Address already in use: AH01243: Couldn't bind unix domain socket /etc/apache2/run/cgid_sock.7971
[Thu Mar 02 12:37:19.168898 2017] [cgid:crit] [pid 7971:tid 139765481343104] AH01238: cgid daemon failed to initialize
[Thu Mar 02 12:42:50.097123 2017] [cgid:error] [pid 8876:tid 139765481343104] (98)Address already in use: AH01243: Couldn't bind unix domain socket /etc/apache2/run/cgid_sock.7971
[Thu Mar 02 12:42:51.097846 2017] [cgid:crit] [pid 7971:tid 139765481343104] AH01238: cgid daemon failed to initialize
[Thu Mar 02 12:42:59.215291 2017] [cgid:error] [pid 9046:tid 139765481343104] (98)Address already in use: AH01243: Couldn't bind unix domain socket /etc/apache2/run/cgid_sock.7971
[Thu Mar 02 12:42:59.220158 2017] [cgid:crit] [pid 7971:tid 139765481343104] AH01238: cgid daemon failed to initialize

I was hoping I might be able to get some pointers as to how to solve this issue. As a note, this is the only issue we saw with this patch. We are no longer getting segfaults, just having some issues with CGId.
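
For reference, my rough mental model of the error is the generic one below
(a standalone sketch, not the actual cgid code): bind() on an AF_UNIX
socket fails with EADDRINUSE when the socket path already exists on disk,
and the usual pattern is to unlink a stale path before binding. I haven't
confirmed whether that's what cgid is hitting across the graceful restart
here.

/* Generic illustration: binding a unix domain socket, removing any stale
 * socket file first so bind() doesn't fail with EADDRINUSE. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int bind_unix_socket(const char *path)
{
    struct sockaddr_un sa;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    unlink(path);  /* remove a stale socket left behind by a previous generation */
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(fd); /* EADDRINUSE here means the path still exists / is in use */
        return -1;
    }
    return fd;
}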

0002-Handle-Gracefuls-Better.patch
Description: Binary data

Thanks!
—Jacob Perkins
Product Owner
cPanel Inc.
jacob.perk...@cpanel.net
Office: 713-529-0800 x 4046
Cell: 713-560-8655



smime.p7s
Description: S/MIME cryptographic signature


Read scattered to gather send (was [too long]: httpd 2.4.25, mpm_event, ssl: segfaults)

2017-03-03 Thread Yann Ylavic
Hi,

I've had little free time lately, yet I tried to implement this and it
seems to work pretty well.

Details below (and 2.4.x patches attached)...

On Thu, Feb 23, 2017 at 4:38 PM, Yann Ylavic  wrote:
>
> So I'm thinking of another way to achieve the same with the current
> APR_BUCKET_BUFF_SIZE (2 pages) per alloc.
>
> The idea is to have a new apr_allocator_allocv() function which would
> fill an iovec with what's available in the allocator's freelist (i.e.
> spare apr_memnodes) of at least the given min_size bytes (possibly a
> max too but I don't see the need for now) and up to the size of the
> given iovec.
>
> This function could be the base of a new apr_bucket_allocv() (and
> possibly apr_p[c]allocv(), though out of scope here) which in turn
> could be used by file_bucket_read() to get an iovec of available
> buffers.
> This iovec could then be passed to (new still) apr_file_readv() based
> on the readv() syscall, which would allow to read much more data in
> one go.
>
> With this scheme we'd have iovecs from end to end, well, sort of,
> since mod_ssl would break the chain but still produce transient
> buckets on output which anyway will end up in the core_output_filter's
> brigade of set-aside heap buckets, for apr_socket_sendv() to finally
> writev() them.
>
> We'd also have more recycled heap buckets (hence memnodes in the
> allocator) as the core_output_filter retains buckets, all with
> APR_BUCKET_BUFF_SIZE, up to THRESHOLD_MAX_BUFFER which, if
> configurable and along with MaxMemFree, would be the real limiter of
> recycling.

So here is how I finally got there; the principles remain the same.
Changes are (obviously) on both the httpd and (mainly) APR sides (I didn't
propose this to the APR team yet, I wanted some feedback here first ;)

On the httpd side, it's mainly the parametrization of some hardcoded
values in the core output filter, namely:
+AP_INIT_TAKE1("OutputFlushThreshold", set_output_flush_threshold,
NULL, RSRC_CONF,
+  "Size (in bytes) above which pending data are flushed to the network"),
+AP_INIT_TAKE1("OutputFlushMemThreshold",
set_output_flush_mem_threshold, NULL, RSRC_CONF,
+  "Size (in bytes) above which pending memory data are flushed to the
network"),
+AP_INIT_TAKE1("OutputFlushMaxPipelined",
set_output_flush_max_pipelined, NULL, RSRC_CONF,
+  "Number of pipelined/pending responses above which they are flushed
to the network"),

These were respectively THRESHOLD_MIN_WRITE, THRESHOLD_MAX_BUFFER and
MAX_REQUESTS_IN_PIPELINE.
The iovec used previously (on the stack) is removed, in favor of a
per-connection (ctx) dynamic/growable one, which allows getting rid of
an artificial/hardcoded flush limit (MAX_IOVEC_TO_WRITE).

Hence the flush is now solely governed by either the overall pending
bytes (OutputFlushThreshold), the in-memory pending bytes
(OutputFlushMemThreshold, which catches the MAX_IOVEC_TO_WRITE case),
or the number of pipelined requests (OutputFlushMaxPipelined).

Also, to reach these thresholds, there is a way to enable scattered
reads (i.e. readv) on files (à la EnableMMAP/EnableSendfile), with a
tunable buffer size:
+AP_INIT_TAKE1("EnableReadfileScatter", set_enable_readfile_scatter,
NULL, OR_FILEINFO,
+  "Controls whether buffer scattering may be used to read files
('off', 'on', 'fixed')"),
+AP_INIT_TAKE1("ReadfileBufferSize", set_readfile_buf_size, NULL, OR_FILEINFO,
+  "Size (in bytes) of the memory buffers used to read files"),

These are set in the core_handler only (for now), and it's where APR
comes into play :)

Currently I have good results (with gdb/LOG_TRACE, no stress test yet ;)

For "http:" (main server) with:

EnableMMAP off
EnableSendfile off

EnableScatterReadfile on
#FileReadBufferSize 8192 <= default

FlushThreshold 1048576
FlushMemThreshold 65536
FlushMaxPipelined 5

And for "https:" (vhost) with overridden:

EnableScatterReadfile fixed

FileReadBufferSize 16384

Not heavily tested though, but the traces look good :p


So on the APR side, apr_file_readv() obviously (implemented only with
readv() for now, i.e. APR_ENOTIMPL on Windows and OS/2 which seem to
be the ones that don't share the unix fileio code), is used by
apr_bucket_read()::file_bucket_read() to fill in its buffers and
morph/chain heap buckets.
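
For illustration, the underlying primitive is just readv(2); this is not
the apr_file_readv() code from the patch, only a standalone sketch of the
syscall it wraps (file name, buffer sizes and count are arbitrary):

/* One readv(2) call fills several pre-allocated buffers in order;
 * the return value is the total number of bytes read. */
#include <sys/types.h>
#include <sys/uio.h>
#include <fcntl.h>
#include <unistd.h>

static ssize_t read_scattered(const char *path)
{
    char b1[8192], b2[8192], b3[8192];
    struct iovec vec[3] = {
        { b1, sizeof(b1) },
        { b2, sizeof(b2) },
        { b3, sizeof(b3) },
    };
    int fd = open(path, O_RDONLY);
    ssize_t n;

    if (fd < 0)
        return -1;
    n = readv(fd, vec, 3);  /* fills b1, then b2, then b3, as data allows */
    close(fd);
    return n;
}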

The buffers come from an apr_bucket_alloc_t, thus the new
apr_bucket_bulk_alloc(), apr_bucket_bulk_free() and
apr_allocator_bulk_alloc() functions, which is where all the "automagic"
finally lives.

With apr_allocator_bulk_alloc(), one can request several apr_memnode_t
of a fixed (optional) or minimal given size, and in the worst case get
a single one (allocated), or in the best case as many free ones as are
available (within a maximal size, also given).
Since this is old and sensitive code, I tried to be as minimally invasive
as possible, while staying efficient too :)
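
To make the intended usage concrete, here is a rough sketch of a call based
on the prototype in the attached patch. The function only exists in the
patch, not in released APR; "allocator" is assumed to be an existing
apr_allocator_t*, and the returned nodes are assumed to be chained via
->next, NULL-terminated, with [first_avail, endp) usable per node:

/* Hedged usage sketch of the proposed apr_allocator_bulk_alloc(). */
#include "apr_allocator.h"

static apr_size_t use_bulk_alloc(apr_allocator_t *allocator)
{
    apr_size_t total = 65536;   /* in: maximum overall size wanted */
    apr_size_t num = 0;         /* out: number of nodes returned */
    apr_memnode_t *nodes, *node;

    nodes = apr_allocator_bulk_alloc(allocator,
                                     8000,    /* minimal usable size per block */
                                     &total,  /* out: size actually obtained */
                                     &num,
                                     0);      /* 0 = blocks need not be fixed-size */

    for (node = nodes; node != NULL; node = node->next) {
        /* each node offers node->endp - node->first_avail bytes of buffer,
         * e.g. one iovec entry per node for a scattered file read */
    }

    apr_allocator_free(allocator, nodes);     /* give the whole list back */
    return total;
}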

Regarding apr_bucket_bulk_alloc() (and its apr_bucket_bulk_free()
counterpart), they're mainly wrappers around apr_allocator_bulk_alloc()
to produce an iovec of memory blocks from a 

Re: Segfault in mod_xml2enc.c with big5 charset

2017-03-03 Thread Ewald Dieterich

On 05.12.2016 14:38, Ewald Dieterich wrote:

I have a segfault in mod_xml2enc.c, xml2enc_ffunc() when processing a
page with big5 charset.


I have another crash at exactly the same location, this time with 
charset "euc-kr". mod_xml2enc is definitely not able to handle 
multi-byte charsets reliably.