RE: [users@httpd] apache caching 400 http status?

2014-02-12 Thread Geoff Millikan
We had this problem too and so stopped using mod_disk_cache for a few months.  
I was under the impression it had been fixed and so
we've recently turned it back on.  We're on Apache/2.2.26.

 

From: Mike Rumph [mailto:mike.ru...@oracle.com] 
Sent: Wednesday, February 12, 2014 4:54 PM
To: users@httpd.apache.org
Subject: Re: [users@httpd] apache caching 400 http status?

 

Hello Anthony,

The discussion on the caching of 503 errors in bug 55669 may apply to this 
email thread as well.
- https://issues.apache.org/bugzilla/show_bug.cgi?id=55669

Thanks,

Mike Rumph

On 2/12/2014 2:35 PM, Anthony J. Biacco wrote:

I'm running a compiled Apache 2.2.24 on CentOS 6.4

I have a URL which gets proxied to Tomcat with mod_proxy_ajp.

The URL in a test scenario is producing a 400 status and content via Tomcat's custom error pages.

Said URL space is cached in Apache with mod_disk_cache.

Apache is caching said content and serving it out of cache.

 

I was under the assumption that 400 statuses were not supposed to be cached, and the 2.2 documentation says as much.

Is this incorrect (even if a Cache-Control header is being set)?

 

-Tony

---

Manager, IT Operations

Format Dynamics, Inc.

P: 303-228-7327

F: 303-228-7305

abia...@formatdynamics.com

http://www.formatdynamics.com

 

 



Re: [users@httpd] SSI is destroying my file

2014-01-27 Thread Geoff Millikan
We had a lot of problems with SSI, deflate and caching.  The deflate, the SSI and the caching just fought each other.  Sometimes it would work, sometimes we got corrupted junk - but it was usually more messed up than just some dashes like you have.

But if you've turned off mod_cache, deleted the cache from the drive and cleared your browser cache and you're still seeing it, then...

Suggest moving the SSI file to a place where you can access it directly in your browser.  Does it look OK there?  If yes, then try including it from there...

Best of luck...

Sent from my iPhone

On Jan 27, 2014, at 5:09 PM, David Williams  wrote:

> I tried uncommenting mod_cache from httpd.conf, but that did not resolve the 
> issue.  Here is another example, the next include in the list (this is a 
> javascript file) is modernizr, here's how it looks after mod_include:
> 
> u+conteionId=b;i(= .fail]mdoc u+cont, )ra0ara", loaefinth.) {
> 
> nupB Reafhns= 'a(S.1',
> 
> ionId=b;i(= {},
> 
> 
> )raors(d)}(= .locose-n.locose-ors(d)},
> 
> mons= 'monId=b;i',
> monors((= .locose-nn.showMessage(mon),
> mSed")(= monors(ibed"),
> 
> "#rejors(( ,
> 
> 
> toS(var (= {}.toS(var ,t");b(= {},
> "#rejb(= {},
> Mu;fb(= {},

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



RE: [users@httpd] SSI is destroying my file

2014-01-27 Thread Geoff Millikan
Are you using mod_cache?



[users@httpd] mod_cache > Faster load times using both mod_mem_cache and mod_disk_cache??

2013-12-10 Thread Geoff Millikan
Dear List,

 

When both are configured the docs say Apache tries to pull the object from the 
first cache source and if it's not there, it tries
the second.

 

Based on this, is it efficient to cache the exact same content twice: 

 

- in mod_mem_cache for faster access of more frequently accessed objects AND

- in mod_disk_cache for slightly slower access (but nearly unlimited storage 
space)?

 

Seems this would be good but does the overhead of polling both caches result in 
everything being slower?

 

Sample config below.

 

Thanks,

 

Geoff Millikan

 

---== Sample Config Snippet ==---

 

# Enable the RAM cache first because it should be a bit faster than the
# hard drive-based cache.
CacheEnable mem /

# Enable the hard drive-based cache second.
CacheEnable disk /

# Limit the RAM cache to 2MB (MCacheSize is in KB) and 100 items;
# everything else goes to the disk.
MCacheSize 2048
MCacheMaxObjectCount 100
MCacheRemovalAlgorithm GDSF
CacheDisable /images
CacheRoot /var/cache/mod_cache/
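
For intuition, the first-hit-wins ordering the docs describe can be sketched as a two-tier dictionary lookup.  This is a made-up Python sketch of the idea, not Apache internals - none of these names are Apache APIs:

```python
# Illustrative two-tier lookup in the spirit of mod_cache consulting
# providers in the order they appear in CacheEnable: try the fast
# in-memory tier first, then fall back to the disk tier.
# (These names are made up for illustration; they are not Apache APIs.)
def lookup(url, mem_tier, disk_tier):
    if url in mem_tier:           # mod_mem_cache analogue: fastest
        return mem_tier[url]
    if url in disk_tier:          # mod_disk_cache analogue: bigger, slower
        return disk_tier[url]
    return None                   # full miss: fetch from the backend

mem = {"/index.html": "hot copy"}
disk = {"/index.html": "disk copy", "/about.html": "disk copy"}
print(lookup("/index.html", mem, disk))  # hot copy
print(lookup("/about.html", mem, disk))  # disk copy
```

Note the cost of probing the second tier is only paid on a memory miss, which is part of why the double-caching question is hard to answer without measuring.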

 



[users@httpd] mod_cache > Faster load times using both mod_mem_cache and mod_disk_cache?

2013-12-08 Thread Geoff Millikan
Dear List,

When both are configured the documentation says Apache tries to pull the object 
from the first cache source and if it's not there,
it tries the second.

Based on this, is it efficient to cache the exact same content twice: 

- in mod_mem_cache for faster access of more frequently accessed objects AND
- in mod_disk_cache for slightly slower access (but nearly unlimited storage 
space)?

Seems this would be good but does the overhead of polling both caches result in 
everything being slower?

Sample config below.

Thanks,

Geoff Millikan

---== Sample Config Snippet ==---

# Enable the RAM cache first because it should be a bit faster than the
# hard drive-based cache.
CacheEnable mem /

# Enable the hard drive-based cache second.
CacheEnable disk /

# Limit the RAM cache to 2MB (MCacheSize is in KB) and 100 items;
# everything else goes to the disk.
MCacheSize 2048
MCacheMaxObjectCount 100
MCacheRemovalAlgorithm GDSF
CacheDisable /images
CacheRoot /var/cache/mod_cache/







[users@httpd] mod_cache > Using both mod_mem_cache and mod_disk_cache for faster times?

2013-12-06 Thread Geoff Millikan
Dear List,

 

When two cache sources are configured the documentation says Apache tries to 
pull the object from the first cache source and if it's
not there, it tries the second.

 

Based on this, is it efficient to use 2 caches for the same content: 

 

- mod_mem_cache for frequently used and fast access 

- and mod_disk_cache for slightly slower access (but nearly unlimited storage 
space)?

 

Seems this would be good but maybe the overhead of polling both caches results 
in everything being slower?

Anyone with experience here?  Sample config below.

 

Thanks,

 

Geoff Millikan, http://www.t1shopper.com/

 

---== Sample Config Snippet ==---

CacheRoot /var/cache/mod_cache/
CacheEnable mem /
CacheEnable disk /

# Limit the RAM cache to 2MB (MCacheSize is in KB) and 100 items.
# Put everything else on the disk.
MCacheSize 2048
MCacheMaxObjectCount 100
MCacheRemovalAlgorithm GDSF
CacheDisable /images

 

 

 



RE: [users@httpd] RE: Apache 2.2.26 > mod_headers > Header onsuccess unset Age > not working

2013-11-27 Thread Geoff Millikan
> No, It doesn't unset the header because mod_headers only runs at the
> point where the original headers were sent.  Most of the server
> processing is skipped when it's served from the cache, with the
> exception of connection-level stuff and mod_cache's own processing.

Got it.  

Must all pages served via mod_cache have the "Age" header?  

Thanks,

Geoff Millikan

PS. Happy Thanksgiving tomorrow to all the USA-based folks.





[users@httpd] RE: Apache 2.2.26 > mod_headers > Header onsuccess unset Age > not working

2013-11-27 Thread Geoff Millikan
I think I found it, but does anyone want to confirm that Apache doesn't unset the "Age" header when mod_cache is used because the HTTP/1.1 spec requires it: "An HTTP/1.1 server that includes a cache MUST include an Age header field in every response generated from its own cache."






[users@httpd] Apache 2.2.26 > mod_headers > Header onsuccess unset Age > not working

2013-11-27 Thread Geoff Millikan
Hi all,

Is it impossible for mod_headers to remove the "Age" header from items served 
out of mod_cache?  (Example below)

Thanks,

Geoff Millikan

Request URL: http://www.t1shopper.com/ssi/broadband.css
Request Method: GET
Status Code: 200 OK

---== Request Headers ==---
Accept:text/css,*/*;q=0.1
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Cookie:tsmc=no%20id
Host:www.t1shopper.com
Referer:http://www.t1shopper.com/
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like 
Gecko) Chrome/31.0.1650.57 Safari/537.36

---== Response Headers ==---
Age:52
Connection:Keep-Alive
Content-Encoding:gzip
Content-Length:935
Content-Type:text/css
Date:Wed, 27 Nov 2013 18:29:47 GMT
Expires:Fri, 27 Dec 2013 18:28:55 GMT
Keep-Alive:timeout=5, max=39
Last-Modified:Thu, 19 May 2011 17:26:48 GMT
Server:Apache
Vary:Accept-Encoding
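
As a sanity check on the headers above (illustrative arithmetic only, not anything Apache computes for you here): subtracting the Age value from the Date header gives the moment the response entered the cache.

```python
from datetime import datetime, timedelta

# Response headers above: Date: Wed, 27 Nov 2013 18:29:47 GMT, Age: 52.
# Age counts the seconds the response has been sitting in the cache,
# so Date - Age is roughly when mod_cache stored the object.
date = datetime(2013, 11, 27, 18, 29, 47)
age = timedelta(seconds=52)
print(date - age)  # 2013-11-27 18:28:55
```

That 18:28:55 matches the time-of-day in the Expires header (Fri, 27 Dec 2013 18:28:55 GMT), consistent with a 30-day freshness lifetime counted from when the object was cached.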





RE: [users@httpd] Rate limiting

2013-09-02 Thread Geoff Millikan
Since this is a common request, I'd love to see someone put together a "best practices, solutions and examples" post for the community.

> I would recommend managing it at network level, with iptables for example

Agreed, and while it's off topic for this list, take a look at "iptables hashlimit."  We've been looking at it, but remember it's just one tool of several needed to mitigate a DoS.







[users@httpd] Apache time zone > daylight saving time > missing an hour of logs?

2013-03-10 Thread Geoff Millikan
The date and time shown in Apache logs come from the time zone of the server.  If the server's time jumps forward an hour (adjusting for daylight saving time), the time in the Apache log jumps forward too.  To the untrained eye, this may appear as a 1-hour gap in the logs (example below).  But there is no gap - Apache properly logged all requests - no log entries were lost!

For example, let us say Apache is busy logging requests and writing the date and time on each log entry.  The seconds are ticking by: 01:59:57, 01:59:58, 01:59:59, and Apache is writing logs in each second.  Now Apache goes to write the next log, but before it does, it retrieves the server time.  The server reports the time (which is the next second) as 03:00:00.  So Apache, without missing a beat, simply writes that time down**.  In other words, the "gap" in the log is not because Apache stopped logging; it is because the hour did not exist.
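
This can be double-checked with the UTC offsets Apache records in each log line (-0800 before the jump, -0700 after).  A quick sketch - plain Python arithmetic, nothing Apache-specific:

```python
from datetime import datetime, timedelta, timezone

# 2013-03-10, US Pacific: wall clocks jump from 01:59:59 PST (-0800)
# straight to 03:00:00 PDT (-0700).  Two log entries from either side
# of the jump are only seconds apart in real (UTC) time.
pst = timezone(timedelta(hours=-8))
pdt = timezone(timedelta(hours=-7))
before = datetime(2013, 3, 10, 1, 59, 59, tzinfo=pst)
after = datetime(2013, 3, 10, 3, 0, 7, tzinfo=pdt)
print((after - before).total_seconds())  # 8.0 -- no missing hour
```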

It is generally considered best practice to have Apache (and any other software) record timestamps in UTC to avoid this issue.  Since there may be many humans viewing the log, it is usually the role of the log-analyzer GUI program (like www.analog.cx) to shift the UTC time into each human's local time as needed for their viewing pleasure.

Hope this helps,

Geoff Millikan
http://www.t1shopper.com/

**The story is illustrative only; Apache's actual under-the-hood time handling is probably different.

---= Example "gap" in logs =---
65.55.213.38 - - [10/Mar/2013:01:59:51 -0800] GET /ssi/t1shopper.js HTTP/1.1
157.55.32.103 - - [10/Mar/2013:01:59:58 -0800] GET /service/dsl/oh/Hancock.shtml HTTP/1.1
74.111.23.38 - - [10/Mar/2013:01:59:59 -0800] GET /tools/port-number/1125 HTTP/1.1
157.55.32.103 - - [10/Mar/2013:03:00:07 -0700] GET /us/ga/White%20County.shtml HTTP/1.1
74.111.23.38 - - [10/Mar/2013:03:00:08 -0700] GET /tools/port-number/1126/ HTTP/1.1

---= Additional background  =---
http://httpd.apache.org/docs/current/mod/mod_log_config.html

http://en.wikipedia.org/wiki/Daylight_saving_time










RE: [users@httpd] Apache 2.2.22 bug > mod_cache cacheing 403 "Directory index forbidden by Options directive" pages?

2012-06-02 Thread Geoff Millikan
Submitted a bug report for Apache 2.2.22 at the link below.  We've turned off mod_cache on our production web servers until it's resolved because we can't have phantom 403 pages going out to customers.  Considering jumping to 2.4.2...

https://issues.apache.org/bugzilla/show_bug.cgi?id=53350






[users@httpd] Apache 2.2.22 bug > mod_cache cacheing 403 "Directory index forbidden by Options directive" pages?

2012-06-01 Thread Geoff Millikan
Two things are going amiss with Apache here.

1. Apache is intermittently issuing a 403 "Directory index forbidden by Options 
directive" but it never should.  For example the two
below links work fine.  But look at the log entry showing the 403 error.  
Crazy.  I cannot duplicate the error at all (except see #2
issue below).  I just see it in the logs.  

  
http://www.t1shopper.com/tools/calculate/file-size/result?size=123456789&unit=kilobytes

  http://www.t1shopper.com/tools/calculate/file-size/result/

  Here's error log:
  [Fri Jun 01 11:05:06 2012] [error] [client 71.256.256.256] Directory index 
forbidden by Options directive:
/home/t1shopper/www/tools/calculate/file-size/result/

2. After grepping through the access logs and finding 403s, I can go to those pages and get the 403 again (if I use the same browser as the original requestor) because mod_cache is caching the 403!  Here are two entries in the log file showing the 403 getting served (my local IP address removed) at 11:33:09.  Then I force-refresh the page (by making my browser send a "no-cache" in its HTTP request) and the page comes back just fine there at 11:33:18.  I cannot get the 403 to ever come back again.  As long as I don't send the "Cache-Control: no-cache" I can keep getting served the 403 out of the cache.

 Here's the log file:
 [01/Jun/2012:11:33:09 -0700] GET /tools/calculate/file-size/result/?size=32&unit=kilobytes HTTP/1.1 403 302 - Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0 -
 [01/Jun/2012:11:33:18 -0700] GET /tools/calculate/file-size/result/?size=32&unit=kilobytes HTTP/1.1 200 3347 - Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0

So my questions are: Why is the 403 happening at all?  And more importantly, 
why is mod_cache serving the 403?  

Thanks,

Geoff

PS. I suppose I'll open a bug report but wanted to post here first in case I 
was missing something basic.






[users@httpd] SSL / HTTPS Negotiation > 30% performance improvement by using just RSA, RC4, MD5

2012-01-01 Thread Geoff Millikan
Not sure if the improvement is happening on the server side or the client side, but here's what we see:

Using these settings (which I believe are PCI compliant) our average SSL 
negotiation was 300ms using the http://tools.pingdom.com/fpt/ test tool:
SSLProtocol -SSLv2 +SSLv3 +TLSv1
SSLCipherSuite ALL:!ADH:!NULL:!EXP:!SSLv2:!LOW:!MEDIUM:RC4+RSA:+HIGH

Using these settings our average time dropped to 200ms:
SSLProtocol (same as above)
SSLCipherSuite kRSA:!kKRB5:!aNULL:RC4:!eNULL:MD5:!SHA1:!DES:!3DES:!RC2:!SSLv2

These last settings restrict HTTPS to just RSA key exchange, RSA authentication, RC4 encryption and the MD5 hash function, which resulted in the best performance according to the (undated) WebSphere performance chart at the link below.  The following browsers passed without complaint: Windows 7 (IE9, FF8, O11, Chrome 16) and iPhone 4S Safari.  I'm not sure if these settings cause older browsers to cough.  As the Apache documentation says, it helped me to play with the OpenSSL settings to see which values Apache will use, like this (at the Linux command line):

$ openssl ciphers -v 
'kRSA:!kKRB5:!aNULL:RC4:!eNULL:MD5:!SHA1:!DES:!3DES:!RC2:!SSLv2'

Here's the performance chart:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/topic/com.ibm.websphere.express.doc/info/exp/ae/rprf_ssl.html

Thanks,

Geoff Millikan
http://www.t1shopper.com/

PS. I'm no SSL/TLS guru - I'm just fiddling with settings and testing but I 
wanted to share our results in the hopes it might help your site but your 
mileage may vary.


-
The official User-To-User support forum of the Apache HTTP Server Project.
See <http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
   "   from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Alternative to Apache htcacheclean for mod_disk_cache

2011-10-29 Thread Geoff Millikan
We put this in our cron job to run once a day and it's working great.

It deletes all cache files which haven't been modified in 30 days.  We didn't use access time ("-atime") to determine which files to delete because, for performance reasons, we disabled atime on the file system.  A disadvantage of this method is that it doesn't keep the cache under a certain size (htcacheclean does).  It does have the advantage of keeping the cached content fresh, is much faster, and seems less resource intensive.  We found that our cache size is pretty stable at about 2 to 4 GB (we average 8MM hits/month) so we like this better.

root /usr/bin/find /var/httpd/proxy -type f -mtime +30 -delete

Remember to change the "/var/httpd/proxy" directory to point to wherever your cache is.

Happy Web Serving,

Geoff Millikan
http://www.t1shopper.com/







RE: [users@httpd] Apache 2.2 > Timeout & RequestReadTimeout (mod_reqtimeout)

2011-07-25 Thread Geoff Millikan
>> When both RequestReadTimeout and Timeout values are set, the smaller 
>> of the two takes precedence, right?  For example, if Timeout 6 and 
>> RequestReadTimeout header=10 body=30 then Apache will close the connection 
>> at 6 seconds and the RequestReadTimeout will never be activated, right?

> No.  The Timeout refers to each individual read or write, so you can 
> easily take more than 10 seconds end-to-end to read the headers but 
> have never waited more than 5 seconds for an individual read.

How could it easily take more than 10 seconds end-to-end to read the headers?  For example, let's assume request headers of 4000 bytes (which is bigger than average) and a dial-up connection of 56 kbps.  The headers should transfer in 0.57 seconds (not accounting for slow start and other TCP/IP oddities).  Assuming a 300% error margin for the end-to-end read, we're still only at 1.71 seconds.

So it seems like setting "RequestReadTimeout header=10" should be plenty high, but obviously it's not, because at 10 seconds we get many spurious timeouts and we don't understand why.
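
The back-of-the-envelope numbers above, for reference:

```python
# Time to push 4000 bytes of request headers over a 56 kbps link,
# ignoring TCP slow start and other oddities.
header_bytes = 4000
link_bps = 56_000
seconds = header_bytes * 8 / link_bps
print(round(seconds, 2))      # 0.57
print(round(seconds * 3, 2))  # 1.71 with a 300% margin
```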

What am I missing?

Thanks,

Geoff





[users@httpd] Apache 2.2 > Timeout & RequestReadTimeout (mod_reqtimeout)

2011-06-28 Thread Geoff Millikan
When both RequestReadTimeout and Timeout values are set, the smaller of the two 
takes precedence, right?  For example, if Timeout 6
and RequestReadTimeout header=10 body=30 then Apache will close the connection 
at 6 seconds and the RequestReadTimeout will never be
activated, right?

http://httpd.apache.org/docs/current/mod/core.html#timeout

http://httpd.apache.org/docs/2.2/mod/mod_reqtimeout.html






RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-02 Thread Geoff Millikan
>> Without the ServerName, the vhost entry matches everything.
>
> Everything, or the ServerName defined globally?



You're right, good catch.  Without the ServerName, the vhost entry matches only whatever the ServerName is defined as globally (which is the ServerName outside any <VirtualHost> containers).

Boy, I owe a lot of people beer.









RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-02 Thread Geoff Millikan
Poop.  It's a simple oversight.  Sorry everyone.  Tom Evans' example works just fine.  The example from the original post would have worked with the addition of a *ServerName*.  Without the ServerName, the vhost entry matches everything.

#This doesn't work
<VirtualHost *:80>
  RewriteEngine On
  RewriteRule .* http://www.mydomain.com%{REQUEST_URI} [L,R=301]
  ErrorLog /var/log/httpd/error_log
  CustomLog /var/log/httpd/access_log combined
</VirtualHost>

#This works great
<VirtualHost *:80>
  ServerName localhost-monkey-whatever-you-want-just-put-something-here
  RewriteEngine On
  RewriteRule .* http://www.mydomain.com%{REQUEST_URI} [L,R=301]
  ErrorLog /var/log/httpd/error_log
  CustomLog /var/log/httpd/access_log combined
</VirtualHost>

Again my apologies for filling your inboxes.  Beer is on me next time.





RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-02 Thread Geoff Millikan
> You misunderstood what Eric said. The first vhost absolutely can be
> used to redirect requests to another vhost. 
> I don't think I can make it any clearer than that - hope that helps.

Wow, thank you!  And just for kicks, look back at the original post - you can 
see the configuration you proposed is exactly what we
attempted.  But it didn't work. 
 
When we went to http://www.foo.com/ we expected the second vhost entry to pick 
it up because of the hostname match. But it didn't.
Instead the first vhost entry picked it up and redirected http://www.foo.com/ 
right back to http://www.foo.com/ putting us in an
infinite loop.

It shouldn't happen that way, but it is - and that's the original question, 
"Why isn't it working?"  I feel a bit embarrassed for
not being able to debug/figure it out and I'm hoping it's some simple 
oversight.  
 
http://www.gossamer-threads.com/lists/apache/users/399377






RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-02 Thread Geoff Millikan
> Incorrect. The first vhost is ALWAYS used when no vhost with a
> matching host name is found - it is the catch all vhost, by
> definition.

Yes, I totally agree.  But, the first vhost cannot be used to *redirect 
requests to another vhost.*  Per Eric Covener, "You can't
make the catch-all vhost also be a reachable name-based vhost, with those 
rewrite rules.  Create an additional one solely for that
purpose."

If you're wanting to redirect all hostnames to parent domain name then the 
first vhost cannot be used for this purpose unless
there's another solution the list hasn't thought of yet? 






RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-02 Thread Geoff Millikan
> Not sure what you mean by crash...  

If I put "ServerAlias *" into httpd.conf and try to restart Apache, it will not 
start.  There's no error log as to why it will not
start.

> The first-listed vhost in a set of namevirtualhosts is the default.  

Agreed.  However it was said:

> You can't make the catch-all vhost also be a reachable name-based 
> vhost, with those rewrite rules.  Create an addl one solely 
> for that purpose.

In order to have a "catch all" vhost that redirects all hostnames to another 
vhost we must capture everything with a ServerAlias.
There's no other way.

Why is this needed?  As you know, Google will consider delisting you if you 
have two domains serving up the exact same content.
They consider it spamming the 'net.  

It's easy for this to happen accidentally - for us we created an A Record for 
performance.mydomain.com and pointed it to our
production web host IP address for some testing.  Google somehow found it and 
reindexed our whole site on that domain name.
Essentially Google had two copies of our site in the search results. Ugh.

Upon further inspection, we found there were other A Records which were showing 
duplicate content because of wildcards in place like
*.mycompany.com or just because we forgot to remove them.  With several hundred 
domain names, and a httpd.conf file several thousand
lines long, it's easy to lose track.

In other words, the "catch all" vhost is needed when an unknown number of 
hostnames are pointing at a server and it's desired that
they all resolve successfully yet redirect to the proper domain name so there's 
no duplicate content issues.





RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
> Eric C: You can't make the catch-all vhost also be a reachable name-based 
> vhost, with those rewrite rules.  Create an additional one 
> solely for that purpose.

> Jeroen G: Get rid of all the rewrite junk and just set a dummy servername; 
> this will catch ALL undefined hostnames.
> Then Redirect / to the correct vhost.

Ah ha! The issue was that doing "ServerAlias *" crashes Apache with no error log.  ServerAlias apparently needs more than just a single wildcard character.  Putting the below <VirtualHost> entry after the last </VirtualHost> works great!  Sweet.

<VirtualHost *:80>
  ServerName    my-catch-all-dummy-domain.com
  ServerAlias   *.com *.net *.org *.co *.mobi *.info *.me *.us *.biz *.tv *.ws
  RewriteEngine On
  RewriteRule   .* http://www.mydomain.com%{REQUEST_URI} [L,R=301]
  ErrorLog      /var/log/httpd/error_log
  CustomLog     /var/log/httpd/access_log combined
</VirtualHost>

Now I cannot do the same thing for the 443 virtual host section - adding the below <VirtualHost> section crashes Apache with the below strange error:

[Thu Jun 02 04:55:13 2011] [error] Illegal attempt to re-initialise SSL for server (theoretically shouldn't happen!)

<VirtualHost *:443>
  ServerName    my-catch-all-dummy-domain-SSL.com
  ServerAlias   *.com *.net *.org *.co *.mobi *.info *.me *.us *.biz *.tv *.ws
  RewriteEngine On
  RewriteRule   .* https://www.mydomain.com%{REQUEST_URI} [L,R=301]
  ErrorLog      /var/log/httpd/error_log
  CustomLog     /var/log/httpd/access_log combined
</VirtualHost>





RE: [users@httpd] nfs webroot 403's

2011-06-01 Thread Geoff Millikan
> Everything looks correct with httpd -S on both servers.

Except for the Warning: DocumentRoot [/var/www/html/example.com/store] does not 
exist.

Wouldn't that produce 403 errors when you tried to list directory contents?





RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
> Get rid of all the rewrite junk and just set a dummy servername; this 
> will catch ALL undefined hostnames.
> Then Redirect / to the correct vhost.

Yep, we tried something like this (see below) and it crashed Apache even though the syntax is correct.

<VirtualHost *:80>
  #virtual host entry #1
</VirtualHost>

<VirtualHost *:80>
  #virtual host entry #2
</VirtualHost>

<VirtualHost *:80>
  #very last virtual host entry #934
  ServerName    t1shopper-catch-all.com
  ServerAlias   *
  RewriteEngine On
  RewriteRule   .* http://www.t1shopper.com%{REQUEST_URI} [L,R=301]
</VirtualHost>




RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
>> default server www.t1shopper.com (/etc/httpd/conf/httpd.conf:663)
>> port 80 namevhost www.t1shopper.com (/etc/httpd/conf/httpd.conf:663)
>
>Is this the host that loops?

Yes.  When I uncomment the first <VirtualHost> entry below, the infinite loop happens on the second <VirtualHost> entry, but then again, that's the only domain we tested.

#First virtual host entry
#<VirtualHost *:80>
#   RewriteEngine On
#   RewriteRule .* http://www.t1shopper.com%{REQUEST_URI} [L,R=301]
#   ErrorLog    /var/log/httpd/error_log
#   CustomLog   /var/log/httpd/access_log combined
#</VirtualHost>

<VirtualHost *:80>
   ServerName     www.t1shopper.com
   DocumentRoot   /home/.../www
   ...
</VirtualHost>




RE: [users@httpd] mod_deflate and chunked encoding

2011-06-01 Thread Geoff Millikan
> In my case, it appears Apache selects the chunked encoding automagically 
> when the size of the page goes over a certain size.  Apache doesn't seem 
> to set the smaller pages to the chunked encoding (even though they are 
> gzipped).

No, this was wrong.  mod_file_cache was tricking me (again).  Apache appears to set Transfer-Encoding to chunked when the page is actively being gzipped and the final gzip output is bigger than DeflateBufferSize.  But if the page was gzipped and stored on disk by mod_file_cache (like we do), Apache merely spools the file off the disk (it doesn't put it through gzip a second time) and chunked encoding will *not* be used.
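
For readers unfamiliar with chunked transfer coding, here is a minimal sketch of the wire framing (my own illustration, not Apache code): each chunk is a hex length line followed by that many bytes, terminated by a zero-length chunk.

```python
# Minimal sketch of HTTP/1.1 chunked framing: each chunk is
# "<hex length>\r\n<bytes>\r\n", and the body ends with a
# zero-length chunk ("0\r\n\r\n").
def chunk(body: bytes, size: int) -> bytes:
    out = b""
    for i in range(0, len(body), size):
        piece = body[i:i + size]
        out += f"{len(piece):x}\r\n".encode() + piece + b"\r\n"
    return out + b"0\r\n\r\n"

print(chunk(b"hello world", 5))
# b'5\r\nhello\r\n5\r\n worl\r\n1\r\nd\r\n0\r\n\r\n'
```

The per-chunk framing bytes are tiny, which is consistent with the measurements below showing no real cost to small DeflateBufferSize values.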

Now I'm going to speculate that a chunked gzip web page is going to be slower than the unchunked equivalent because the browser has to spend "some amount" of time putting the chunked version back together.  But I'm wrong again.  We just tested this, and the timing of Firefox 4's load event didn't increase with the number of chunks.  Even setting DeflateBufferSize down to 500 bytes on a 40K page didn't produce consistent differences.

The one consistent metric was that with chunking on (DeflateBufferSize smaller 
than total compressed page size), Apache was able to
get the first byte to the web browser about 75ms faster but again, the faster 
response time didn't translate into a consistently
faster rendering of the page by the browser.  

This testing needs further study before it's taken as gospel.  We only ran 
about 4 tests per DeflateBufferSize.





RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
>Looks fine -- apachectl/apache2ctl/httpd -S output?

$ /usr/sbin/httpd -S
VirtualHost configuration:
wildcard NameVirtualHosts and _default_ servers:
*:443  is a NameVirtualHost
   default server www.t1shopper.com (/etc/httpd/conf/httpd.conf:1399)
   port 443 namevhost www.t1shopper.com (/etc/httpd/conf/httpd.conf:1399)
   port 443 namevhost static.t1shopper.com (/etc/httpd/conf/httpd.conf:1647)
   
   port 443 namevhost t1shopper.com (/etc/httpd/conf/httpd.conf:1713)
   port 443 namevhost wireless-t1.com (/etc/httpd/conf/httpd.conf:1797)
   
   port 443 namevhost t3shopper.com (/etc/httpd/conf/httpd.conf:1888)
*:80   is a NameVirtualHost
   default server www.t1shopper.com (/etc/httpd/conf/httpd.conf:663)
   port 80 namevhost www.t1shopper.com (/etc/httpd/conf/httpd.conf:663)
   port 80 namevhost a.static.t1shopper.com (/etc/httpd/conf/httpd.conf:978)
   
   port 80 namevhost t1shopper.com (/etc/httpd/conf/httpd.conf:1072)
   port 80 namevhost wireless-t1.com (/etc/httpd/conf/httpd.conf:1157)
   port 80 namevhost t1carrier.com (/etc/httpd/conf/httpd.conf:1170)
   
   port 80 namevhost t3shopper.com (/etc/httpd/conf/httpd.conf:1240)
   port 80 namevhost vpnshopper.com (/etc/httpd/conf/httpd.conf:1255)
Syntax OK





RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
> missing NameVirtualHost *:80?  That would be the easiest way to get
> sent back to the 1st vhost after the redirect.

Eric, we've got that already but I'm hoping it's some simple oversight like 
that which is causing this.

NameVirtualHost *:80
Listen 80
NameVirtualHost *:443
Listen 443







RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
> Actually on second thought, the rewrite rule in the first host 
> should be like this to avoid loop:
>       
>  RewriteEngine On
>  RewriteCond %{HTTP_HOST} !^www\.mydomain\.com [OR]
>  RewriteCond %{HTTP_HOST} !^mydomain\.com
>      RewriteRule     .* http://www.mydomain.com%{REQUEST_URI} [L,R=301]

Per the manual excerpted below, the first <VirtualHost> section should only be used 
when the hostname being requested isn't listed
anywhere else in a ServerName or ServerAlias, right?!  So these RewriteCond 
lines shouldn't be needed!  Right?
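
An aside, hedged: if the exclusion RewriteCond approach were used anyway, the two 
negated conditions would have to be ANDed (the default chaining), because 
"!a [OR] !b" is true for every possible Host header; a sketch with the 
hypothetical mydomain.com names from this thread:

```apache
# negated conditions ANDed (no [OR]), so only hosts that are neither
# form of the canonical name get redirected
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.mydomain\.com$
RewriteCond %{HTTP_HOST} !^mydomain\.com$
RewriteRule .* http://www.mydomain.com%{REQUEST_URI} [L,R=301]
```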

I'd use exclusion RewriteCond lines, but since we have other domain names on the 
server like www.MyPersonalWebsite.com and
www.OurOldCompanyName.com etc., we'd have to write a rule for each domain.  Messy. 

Thanks,

Geoff

"Now when a request arrives, the server will first check if it is using an IP 
address that matches the NameVirtualHost. If it is,
then it will look at each  section with a matching IP address and 
try to find one where the ServerName or ServerAlias
matches the requested hostname. If it finds one, then it uses the configuration 
for that server. If no matching virtual host is
found, then the first listed virtual host that matches the IP address will be 
used." 

http://httpd.apache.org/docs/current/vhosts/name-based.html






RE: [users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
> Try making the first one _default_ host 
> 

Thanks, but since we're doing name-based virtual hosting I don't believe that 
will apply to us, because the manual says the _default_
setting only applies to IP virtual hosting - but let me know if you have thoughts 
otherwise!

"The string _default_ is used only with IP virtual hosting to catch unmatched 
IP addresses." -
http://httpd.apache.org/docs/current/mod/core.html#virtualhost







[users@httpd] Special configuration for requests that do not match any particular virtual host? Apache 2.2

2011-06-01 Thread Geoff Millikan
I want to make a catch-all virtual host (like the manual mentions below) which 
redirects any errant hostnames like
http://oopsie.mydomain.com/ to our main hostname at http://www.mydomain.com/   
But the below example doesn't work - I'm getting an
infinite redirect from http://www.mydomain.com/ right back to 
http://www.mydomain.com/ 

What am I missing?

#First virtual host entry
<VirtualHost *:80>
RewriteEngine On
RewriteRule .* http://www.mydomain.com%{REQUEST_URI} [L,R=301]
ErrorLog    /var/log/httpd/error_log
CustomLog   /var/log/httpd/access_log combined
</VirtualHost>

#Second virtual host entry
<VirtualHost *:80>
ServerName  www.mydomain.com
DocumentRoot   /home/mydomain/www
<Directory /home/mydomain/www>
Options -ExecCGI +FollowSymLinks +IncludesNOEXEC -Indexes -MultiViews +Includes
AllowOverride None
</Directory>
ErrorLog    /var/log/httpd/error_log
CustomLog   /var/log/httpd/access_log combined
</VirtualHost>

http://httpd.apache.org/docs/2.2/vhosts/name-based.html - "If you would like to 
have a special configuration for requests that do
not match any particular virtual host, simply put that configuration in a 
<VirtualHost> container and list it first in the
configuration file."





RE: [users@httpd] mod_deflate and chunked encoding

2011-05-31 Thread Geoff Millikan
> Can you provide some more information to be able
> to reproduce your test over here? 

Just make a web page like the one described.  Enable mod_deflate and load the 
page in your favorite browser that has debugging
(Firebug, IE9, Chrome, etc).  In my case, it appears Apache selects the chunked 
encoding automagically when the size of the page
goes over a certain size.  Apache doesn't seem to set the smaller pages to the 
chunked encoding (even though they are gzipped).
 
> And how many chunks did the response
> contain?

Hypothetically it would be the compressed 41,841-byte response body divided by 
the DeflateBufferSize (which is 8096), right?  But if
you're using KeepAlive, the size or count of the chunks shouldn't have any 
performance implications on the client side.  If
KeepAlive was off, then the bigger the chunk size the better.
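
That arithmetic can be sanity-checked in the shell (numbers are the ones quoted 
in this thread; the ceiling division is my assumption about how the body gets 
split into buffer-sized chunks):

```shell
body=41841      # compressed response body in bytes (from this thread)
bufsize=8096    # DeflateBufferSize default
# ceiling division: number of buffer-sized chunks needed for the body
chunks=$(( (body + bufsize - 1) / bufsize ))
echo "$chunks"  # -> 6
```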





RE: [users@httpd] mod_deflate and chunked encoding

2011-05-31 Thread Geoff Millikan
> My test showed (according to Firebug) that the 15 MB 
> page downloaded in 618ms.  

Should clarify that on disk, the page was 14,254,523 bytes but after deflating, 
I downloaded a mere 314 bytes of headers
(uncompressed) plus the 41,841 byte response body (compressed) for a total 
payload of 42,155 bytes (thanks Firebug!).





RE: [users@httpd] mod_deflate and chunked encoding

2011-05-31 Thread Geoff Millikan
> Goal is to get the HEAD of HTML documents in the client side as soon
> as possible ...thus having a more responsive page...

Agreed!

> Can anyone confirm or deny this...

+1

I ran a quick test on a 10MB file that looks like this:

<html>
<head>
<link rel="stylesheet" href="...">
</head>
<body>
About 15 megabytes of dummy ascii text here...
</body>
</html>

And my FF4 browser didn't seem to try to load the css in the <head> area until 
the whole page finished inflating.  My test showed
(according to Firebug) that the 15 MB page downloaded in 618ms.  The request 
for the style sheet *started* 4.39 seconds after the
initial 15 MB page request started.  In other words, it took FF4 about 4 seconds 
to inflate the 15MB page, and then figure out that
the <head> section required looking up additional resources.  Below are the 
response headers showing gzip with chunked.

What we've found is that on lengthy pages like this, sometimes it's 
advantageous to the User to not DEFLATE, because although the
overall download time of the parent is slower, they experience what appears to 
be a faster page load time because the browser can
start rendering the page as soon as it receives the first chunk (and also start 
requesting any additional resources that are in the
<head>).  But some of this is good web design too (like don't put your whole web 
page in a <table>, because most browsers cannot start
rendering the table until they hit the closing </table> tag).

Date: Tue, 31 May 2011 11:09:31 GMT
Server: Apache
Last-Modified: Tue, 31 May 2011 11:08:02 GMT
Expires: Mon, 06 Jun 2011 11:09:31 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Keep-Alive: timeout=3, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html

> ...or point to authoritative sources?

Stabbing in the dark: The link below *seems* to say that the "inflate" job 
running in the web browser has "...no flush variable,
since inflate() can tell from the zlib stream itself when the stream is 
complete."  In other words, it seems like the "inflate" job
in the web browser cannot flush its progress out (like the <head> part 
of the web page) until it gets to the end of the
whole stream/file.  It goes on to say, "If end-of-file is reached before the 
compressed data self-terminates, then the compressed
data is incomplete and an error is returned."  But all the zip/unzip programs 
I've worked with will flush their progress out as they
work, so this makes no sense.

http://www.zlib.net/zlib_how.html

Happy chunking,

Geoff @ http://www.t1shopper.com/









RE: [users@httpd] mod_deflate and chunked encoding

2011-05-31 Thread Geoff Millikan
> ...is it possible that mod_deflate works by chunks...

Why are you doing this?  It's not to increase client-side performance, because - 
correct me if I'm wrong here - it's been my
understanding that the web browser cannot start decompressing the page until it 
receives the final chunk.  Based on that, there's no
advantage to the client in receiving a deflated page in chunks.  

And regarding host-side performance: In my experience, compression happens so 
fast there wouldn't seem to be much performance gain
on the host side by switching over to chunked but maybe I'm missing the boat?

Thanks,

Geoff

http://en.wikipedia.org/wiki/Chunked_transfer_encoding

"Chunked transfer encoding can be used to delimit parts of the compressed 
object. In this case the chunks are not individually
compressed. Instead, the complete payload is compressed and the output of the 
compression process is chunk encoded. In the case of
compression, chunked encoding has the benefit that the compression can be 
performed on the fly while the data is delivered, as
opposed to completing the compression process beforehand to determine the final 
size."






RE: [users@httpd] Apache 2.x configuration for high load servers

2011-05-29 Thread Geoff Millikan
> Not sure I understand where you say memcache is slow; we use memcache
> for session variables and for mysql query caching. I did not think 
> there was anything as reliable and fast or faster than that? Can you 
> elaborate on file-based cache for mysql queries?

Bostjan Skufca replied already; however, I'll say that mod_disk_cache is a great 
tool - but only on those pages which are high
traffic, dynamic and, most importantly, are not customized based on session or IP 
address.   You would not need to use any other
caching tools like memcache, because the compressed/zipped page 
is already assembled, sitting on disk - the page is
just read off the disk and sent to the clients.  Queries will not be run, code 
will not be executed - basically you're serving a
static page right off the disk.  

You can see how this would both speed up the serving of the page and free up 
the database (and RAM) to do other things.  

Thanks,

Geoff Millikan @ http://www.t1shopper.com/





RE: [users@httpd] Apache 2.x configuration for high load servers

2011-05-29 Thread Geoff Millikan
> This went well until there were not too many files on the file system...

Our experience here has been this: We're on ext3 and caching (on disk) several 
hundred thousand files using mod_disk_cache.  We got
worried we were going to have too many files but using "df -i" below shows 
we're at about a million files on the file system so
we're only at 2% usage.

Now we did run into the 32,000 subdirectory limit that ext3 has, so we modified 
our mod_disk_cache settings to get around this like
this:

CacheDirLevels 2
CacheDirLength 1 
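
Back-of-envelope, those settings stay well under ext3's limit, assuming (my 
assumption) the 64-character alphabet mod_disk_cache uses for its hashed 
subdirectory names:

```shell
alphabet=64                            # possible chars per hashed dir name (assumption)
per_level=$alphabet                    # CacheDirLength 1 -> at most 64 subdirs per level
leaf_dirs=$(( alphabet * alphabet ))   # CacheDirLevels 2 -> at most 64*64 leaf dirs
echo "$per_level $leaf_dirs"           # -> 64 4096
```

Both numbers are far below the ~32,000 entries-per-directory ceiling.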

$ df -i
Filesystem           Inodes    IUsed     IFree  IUse%  Mounted on
VolGroup00         72712192   972739  71739453     2%  /
/dev/sda1             26104       73     26031     1%  /boot
tmpfs               1974407        1   1974406     1%  /dev/shm






RE: [users@httpd] Apache 2.x configuration for high load servers

2011-05-28 Thread Geoff Millikan
> I will mod fstab and reboot one server at a time...

I thought there was a way of dropping atime in real time (a remount) without a 
reboot, so you can test performance differences without rebooting,
but I could be mistaken.  Might want to check out using relatime, but I have no 
experience with that.  I do know that using noatime
helped our performance.
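
For reference, a minimal sketch of the change, assuming an ext3 root like ours 
(the device name is a placeholder; the live, no-reboot change is 
`mount -o remount,noatime /`):

```
# /etc/fstab (placeholder device name)
/dev/VolGroup00/LogVol00  /  ext3  defaults,noatime  1 1
```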





RE: [users@httpd] error 404 document in php

2011-05-28 Thread Geoff Millikan
> I'm not seeing anything
> in the access log file except the reference to the 500 error
> Running
> a php cli on the php file does not return any errors

Check the error log.  Sounds like a permissions issue.  Check the owner of the 
file.  And recursive permissions.  Are directories
all 0755?

> i'm using cPanel

Unless you're a web host, IMHO, cPanel doesn't make admin easier.  Better just to 
learn how to do it manually.

Best of luck.





RE: [users@httpd] Apache 2.x configuration for high load servers

2011-05-28 Thread Geoff Millikan
> I commented out mod_dir and Apache would not start

Yep, you'll need that one.  You probably will not need the below list but you 
kinda have to read up on what each module supports and
see if you're using any of those things.

mod_userdir.so
mod_usertrack.so
mod_status.so
mod_logio.so
mod_cgi.so
mod_cern_meta.so
mod_actions.so





RE: [users@httpd] Apache 2.x configuration for high load servers

2011-05-27 Thread Geoff Millikan
> We also use New Relic and new PHP performance tuning tool, its very cool...

Oh, and remember to remove/comment out any PHP modules you aren't using.  Every 
PHP module you load (like the GD image module, which
is loaded by default though people often don't use it) is loaded up into the Apache 
server too. 

As we say here, "RAM is like cowbell, you can never have enough."







RE: [users@httpd] ErrorDocument 404 returns a 200 status code on local URL-path

2011-05-27 Thread Geoff Millikan
> Either the documentation at link below needs updating or 
> there's a bug in Apache 2.2.17.  

Or there's a user error. :-)

I tested once more before submitting bug report and now it's working, just as 
documented.  No changes on our side.  I think our
servers can smell it when I'm posting here and they just fix themselves.  
Amazing.





RE: [users@httpd] error 404 document in php

2011-05-27 Thread Geoff Millikan
> Does anyone have this working?

Yes. But, as noted under separate cover, it does not return a 404 status code 
in 2.2.17 even when a local URL-path is used like in
the below.  The lack of the 404 response will get you dinged by your favorite 
search engine as the Apache documentation mentions.

ErrorDocument 404 /error/404.php





RE: [users@httpd] Apache 2.x configuration for high load servers

2011-05-27 Thread Geoff Millikan
#Increase this number.  The zombies you're seeing 
#is every time an Apache child process dies.
#The child will die @1500.  We run at 2 and are having no problems.
MaxRequestsPerChild   1500

#We don't run as hot as you (we average 3 hits/sec and max at like 15 
#in a 24 hour period) but here's our settings:
StartServers70
MinSpareServers 70
ServerLimit 364
MaxClients  364
MaxRequestsPerChild  2

#Shrink the size of the response header:
ServerTokens Prod

#Definitely want KeepAlive on; it will help page load time for customers.
#Everything else looks good.
KeepAlive On

#We have this turned on but we use server side includes SSI and we cache.
UseCanonicalName On

You have way too many modules enabled.  Do you need the user_dir module?  Every module 
loaded takes up RAM.  You should be able to get the
"RES" RAM use per process down to under 20 MB unless you're doing something 
funky.

If you're finding that 80% of the time you're running 100 processes, then start 
that many and keep that many going.  We found that
the process that starts up new children takes too long and it was causing pages 
to hang while Apache started up new children.   So
just start as many as you need, even if a lot of them are just idle.

Couple of other tips:  

1. Turn off the "access time" atime on your file system.  This will speed disk 
access a lot.

2. Using mod_file_cache on a big site is hard to get everyone on board for.  
It's complicated and it's easy to cache the wrong
stuff.  However, with a bit of pain, there is a lot of gain to be had there in 
terms of reduced workload on the server and thus
faster page load times.  Suggest you bite the bullet and do it.  BTW, the 
CentOS rpm of Apache doesn't work with mod_file_cache.
Lots of bugs.  So you have to compile from source for this.  We're on CentOS 5 too.

Lots of people will suggest the worker MPM due to its lower RAM usage; however, 
we haven't done it as the PHP web site speaks so
strongly against it.

Best,

Geoff Millikan @ http://www.t1shopper.com/





RE: [users@httpd] MPM worker question

2011-05-24 Thread Geoff Millikan
> I am trying to calculate how much memory a server running worker would use. 
> With around 700 
> 'ps' reports around 10 child processes using 30-100M...

It would be nice if it were that easy.  Remember a lot of the processes are 
reusing components.  Do a 'pmap' on one of the Apache
process IDs for details.  Lots on this already written but since there's no 
easy, scientific way to get a hard number it's an
ongoing discussion point.  
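
One quick way to see why summing per-process numbers overcounts: the RSS that 
`ps` reports counts shared library pages once per process.  A Linux-only sketch 
reading the same figure from /proc (demonstrated on the current shell's PID, 
since an httpd PID may not exist on the machine you try this on):

```shell
# resident set size of the current process, as ps would report it;
# summed over 700 workers this double-counts every shared page
awk '/^VmRSS:/ {print $2, $3}' /proc/$$/status
```

pmap on an actual httpd PID breaks that figure down into shared vs private 
mappings, which is what you need for a real estimate.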

http://lmgtfy.com/?q=pmap+apache

http://apache.markmail.org/thread/gabfzn36i7goisbz






RE: [users@httpd] Special log file for denied client

2011-05-24 Thread Geoff Millikan
> In my .htaccess file I denied access to many IPs. 
> I want to log when one of those IPs tries to access our site, 
> in a special log file for audit purposes.

Not that I'm aware of.  Suggest doing something like this:

http://lmgtfy.com/?q=load+apache+log+into+database
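
A low-tech alternative while a database pipeline isn't in place: denied requests 
are logged with status 403 (field 9 in the combined log format), so awk can 
split them out.  The sample log line and file names below are made up for 
illustration:

```shell
# fabricate a two-line combined-format access log for the demo
printf '%s\n' \
  '1.2.3.4 - - [24/May/2011:00:00:00 +0000] "GET / HTTP/1.1" 403 199 "-" "-"' \
  '5.6.7.8 - - [24/May/2011:00:00:01 +0000] "GET / HTTP/1.1" 200 512 "-" "-"' > access_log
# field 9 is the status code in the combined format
awk '$9 == 403' access_log > denied_log
wc -l < denied_log
```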






RE: [users@httpd] How to check whether apache support FIPS 140-2

2011-05-24 Thread Geoff Millikan
> I thought that FIPS is within mod_ssl, right?

Doubt it:
http://en.wikipedia.org/wiki/OpenSSL#FIPS_140-2_compliance

 





RE: [users@httpd] ErrorDocument 404 returns a 200 status code on local URL-path

2011-05-24 Thread Geoff Millikan
> When a *local URL-path* is used the original status code should be used but
> we're getting a 200 code when using "ErrorDocument 404 /error/404.php"

Either the documentation at the link below needs updating or there's a bug in 
Apache 2.2.17.  If I don't hear back on this list, I'll
submit a bug report.  

Thanks,

Geoff Millikan

http://httpd.apache.org/docs/current/mod/core.html#errordocument






RE: [users@httpd] How to check whether apache support FIPS 140-2

2011-05-23 Thread Geoff Millikan
Maybe this?

"If httpd was compiled against an SSL library which did not support the 
FIPS_mode flag, SSLFIPS on will fail."

http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#sslfips






RE: [users@httpd] Need advice to choose a configuration

2011-05-23 Thread Geoff Millikan
> there are still a large number of "non modern" clients out there, including 
> anyone 
> using the default browser on an Android phone and anyone with any flavor of 
> IE 
> on WindowsXP

Maybe it's marketing hype but commercial link below shows Subject Alternative 
Name certificates being supported since 2003 in
Internet Explorer (since Win98) as well as Firefox, Opera, Safari, and 
Netscape.  I have no experience w/SAN certs so cannot vouch
for this.  YMMV.

http://www.digicert.com/subject-alternative-name-compatibility.htm







[users@httpd] ErrorDocument 404 returns a 200 status code on local URL-path

2011-05-23 Thread Geoff Millikan
We're getting a normal "200" response when our custom "/error/404.php" is 
served, but when a *local URL-path* is used, such as
"ErrorDocument 404 /error/404.php", I expected Apache 2.2.17 to return the 
original 404 error status code.  

Am I mistaken?

http://httpd.apache.org/docs/current/mod/core.html#errordocument

Thanks,

Geoff at http://www.t1shopper.com/





RE: [users@httpd] Custom Message

2011-05-14 Thread Geoff Millikan
> Is there any way I can have a custom message
> displayed when a site listed in the above file is blocked?

Yes, make a custom webpage for the 401 response:
http://httpd.apache.org/docs/2.2/mod/core.html#errordocument
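
Note that if the sites are blocked with deny rules rather than authentication, 
the response will typically be a 403 rather than a 401; a minimal sketch (the 
page path is a placeholder):

```apache
ErrorDocument 403 /blocked_message.html
```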






[users@httpd] htcacheclean vs just "delete all files where mtime > xx days"

2011-05-14 Thread Geoff Millikan
Dear List,

We were running this:

htcacheclean -n -t -i -d720 -p /var/httpd/proxy -l 20480M

But we found it:

 1. Takes ~100MB of virtual memory to run
 2. Doesn't have documented logic for what's being deleted
 3. Takes many hours to run even when not deleting anything (even without the 
-n switch)
 4. On our website, which has hundreds of thousands of pages getting cached, the 
biggest our cache has grown is 9 GB, which isn't
very much.  So for us (and I'd imagine most people), htcacheclean usually 
doesn't end up cleaning anything.

So instead we're considering switching to using the below command once a day 
which will delete all files and directories older than
15 days.  If no deletions (Which is usually the case), the command seems to 
traverse all the directories in under a minute.


find /var/httpd/proxy/* -mtime +15 -exec rm -rf {} \; 
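
A slightly safer variant of that one-liner, sketched against a scratch 
directory ("cache" stands in for /var/httpd/proxy): `-type f` avoids 
`rm -rf`'ing directories the cache may still be writing into, and empty 
directories are pruned in a second pass.  (`touch -d` is GNU touch.)

```shell
mkdir -p cache/a/b
touch -d '30 days ago' cache/a/b/old.data   # simulate a stale cache file
touch cache/a/b/new.data                    # and a fresh one
# delete only files older than 15 days, then prune now-empty dirs
find cache -type f -mtime +15 -delete
find cache -mindepth 1 -type d -empty -delete
ls cache/a/b   # only new.data should remain
```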

We just don't want it growing astronomically big, like 100GB.  It would seem we 
could do this by simply removing the files which
haven't been modified for 30 days.  (Using atime would be better except we've 
disabled it on our file system because it's such a
performance suck.)

Is this solution a bad idea?

Thanks,

Geoff 
gmillikan @ http://www.t1shopper.com/





[users@httpd] htcacheclean vs just "delete all files where mtime > xx days"

2011-05-13 Thread Geoff Millikan
Dear List,

We were running this:

htcacheclean -n -t -i -d720 -p /var/httpd/proxy -l 20480M

But we found it:

 1. Takes ~100MB of virtual memory to run
 2. Doesn't have logic (documented) to what's being deleted
 3. Takes many hours to run even when not deleting anything (even without the 
-n switch)
 4. On our website which has hundreds of thousands of pages getting cached, the 
biggest our cache has grown is 9 GB, which isn't
very much. So for us (and I'd imagine most people), htcacheclean usually 
doesn't end up cleaning anything.

So instead we're considering switching to the command below, run once a day, 
which deletes all files and directories older than 15 days.  If there is 
nothing to delete (which is usually the case), the command traverses all the 
directories in under a minute.

find /var/httpd/proxy/* -mtime +15 -exec rm -rf {} \; 

We just don't want it growing astronomically big, like 100GB.  It would seem we 
could do this by simply removing the files which
haven't been modified for 30 days.  (Using atime would be better except we've 
disabled it on our file system because it's such a
performance suck.)
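For what it's worth, a slightly safer variant (a sketch only; it assumes GNU find/touch, and uses a throwaway directory in place of /var/httpd/proxy) deletes just the regular files and then prunes any directories left empty, so nothing live gets rm -rf'd out from under Apache:

```shell
# Sandbox standing in for the real CacheRoot (assumption: GNU coreutils/find).
CACHE_DIR=$(mktemp -d)
mkdir -p "$CACHE_DIR/aa/bb"
touch -d '20 days ago' "$CACHE_DIR/aa/bb/old.data"   # stale entry
touch "$CACHE_DIR/aa/fresh.data"                     # recent entry

# Delete only regular files older than 15 days...
find "$CACHE_DIR" -type f -mtime +15 -delete
# ...then prune whatever directories that left empty.
find "$CACHE_DIR" -mindepth 1 -type d -empty -delete
```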

Is this solution a bad idea?

Thanks,

Geoff 
gmillikan @ http://www.t1shopper.com/


-
The official User-To-User support forum of the Apache HTTP Server Project.
See http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
   "   from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



RE: [users@httpd] XHTML > Set Content-Type as "application/xhtml+xml" for browsers that support it otherwise degrade to "text/html"

2011-05-02 Thread Geoff Millikan
>Is the best way to do this like: 
> AddDefaultCharset text/html
> SetEnvIfNoCase Accept "xhtml" ua_supports_xhtml=1
> Header set Content-Type "application/xhtml+xml" env=ua_supports_xhtml

Not sure if the above method is "best" - we tested it and it "works" on its 
own, but it breaks once mod_cache is involved.  Why?  Because the first time 
the page is requested it's cached, and the cached version contains, at random, 
either "application/xhtml+xml" or "text/html" (whichever the first requestor 
negotiated).  So everyone after the first requestor gets the cached version, 
which may or may not match their own request headers. 

W3C has a nice write-up on this issue at the link below, and they even say 
they are using one of the techniques (mod_negotiation).  However it doesn't 
seem to be working, because when I request a page from W3C sending an Accept 
header of "text/html,application/xhtml+xml,application/xml..." I get back 
"Content-Type text/html..." instead of the expected "application/xhtml+xml."

http://www.w3.org/2003/01/xhtml-mimetype/content-negotiation

The latest mod_negotiation module might circumvent whatever issue W3C appears 
to be having and/or the above mod_cache issue per link
below but it's a bit vague:
http://httpd.apache.org/docs/current/content-negotiation.html#caching

Anyway, I think we're going to have to scrap using XHTML.

Thanks,

http://www.t1shopper.com/ 
 





[users@httpd] XHTML > Set Content-Type as "application/xhtml+xml" for browsers that support it otherwise degrade to "text/html"

2011-05-02 Thread Geoff Millikan
Dear List,

For the browsers that support XHTML we'll set the Content-Type to 
"application/xhtml+xml"; otherwise we'll send the same XHTML document as 
Content-Type "text/html."  Is the best way to do this something like: 

AddDefaultCharset text/html
SetEnvIfNoCase Accept "xhtml" ua_supports_xhtml=1
Header set Content-Type "application/xhtml+xml" env=ua_supports_xhtml
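As a toy illustration of the matching rule those three lines implement (pick_type is a made-up shell helper, not anything Apache provides), the decision is just a case-insensitive substring test on the Accept header:

```shell
# Hypothetical helper mimicking: SetEnvIfNoCase Accept "xhtml" ...
pick_type() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    *xhtml*) echo "application/xhtml+xml" ;;
    *)       echo "text/html" ;;
  esac
}

pick_type "text/html,application/XHTML+xml,application/xml;q=0.9"  # application/xhtml+xml
pick_type "text/html,*/*;q=0.8"                                    # text/html
```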

Thanks,

http://www.t1shopper.com/ 

PS. We'd consider using mod_negotiation to do this but we'd prefer not to make 
a HTML and XHTML of the same page.  We're just going
to make one XHTML document and send it to everyone. The browsers that don't 
support XHTML will just have to do their best.

PPS. The recommended Content-Type for XHTML documents (which despite mixed 
support, is W3C's recommended language for mobile
browsers: http://www.w3.org/TR/mobile-bp/#ddc ) is "application/xhtml+xml" but 
it's permitted to be "text/html."  

http://www.w3.org/TR/2009/NOTE-xhtml-media-types-20090116/#text-html








[users@httpd] mod_expires > Stop setting the Age header?

2011-04-23 Thread Geoff Millikan

After researching, a note to self: Apache doesn't let me unset the Age header 
using "Header unset Age" as a global config.  Even though it would appear to 
be a waste of overhead to calculate it and a waste of bandwidth to send it, 
RFC 2616 requires it because I'm using mod_disk_cache.

"An HTTP/1.1 server that includes a cache MUST include an Age header field in 
every response generated from its own cache."
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.6 
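For the curious, the value in that mandatory Age header is computed roughly as follows (a sketch of the RFC 2616 section 13.2.3 arithmetic with invented timestamps; not Apache's actual code):

```shell
# Illustrative epoch-second timestamps (all invented for the example).
age_value=0            # Age header on the stored response, if any
date_value=1000        # origin's Date: header
request_time=1002      # when the cache sent the request
response_time=1005     # when the cache received the response
now=1065               # time of the current cache hit

apparent_age=$(( response_time - date_value ))
if [ "$apparent_age" -lt 0 ]; then apparent_age=0; fi
corrected_received_age=$(( age_value > apparent_age ? age_value : apparent_age ))
response_delay=$(( response_time - request_time ))
corrected_initial_age=$(( corrected_received_age + response_delay ))
resident_time=$(( now - response_time ))
current_age=$(( corrected_initial_age + resident_time ))
echo "$current_age"    # 5 apparent + 3 delay + 60 resident = 68
```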

Good work Apache for enforcing the spec!

Thanks,

http://www.t1shopper.com/






RE: [users@httpd] mod_disk_cache doesn't cache URLs ending with a forward slash?

2011-04-13 Thread Geoff Millikan
> fwliw, I've looked before and it is "on purpose" that those requests
> aren't cached; I don't recall the exact reason

I saw something about some caching issues where
http://www.t1shopper.com

was getting properly 301 redirected to
http://www.t1shopper.com/

but mod_cache was doing something odd with caching the redirect instead of the 
destination page but this is different.

If you think of some reason this should not be cached, pray tell.





RE: [users@httpd] mod_disk_cache doesn't cache URLs ending with a forward slash?

2011-04-13 Thread Geoff Millikan
To close this out:  The symptoms as described in this thread and submitted as a 
bug at
https://issues.apache.org/bugzilla/show_bug.cgi?id=51057

just point to the real issue which is with mod_dir (DirectoryIndex) and is 
already a known bug but the way it was written up, I
don't believe the scope of the issue was known.  I've updated the parent ticket:
https://issues.apache.org/bugzilla/show_bug.cgi?id=45273

I searched and found no patches submitted for this bug at the time of this 
writing.

Thanks list! 





RE: [users@httpd] mod_disk_cache doesn't cache URLs ending with a forward slash?

2011-04-13 Thread Geoff Millikan
> Patches welcome.

My skill at cheeky comments exceeds that of my Apache dev skills which isn't 
saying much.
 
> Or if there's already a patch but it's languishing
> unloved, give us a friendly prod on the dev list!

Searched below for info on patches with no success.  Where does one find 
patches?  

http://httpd.apache.org/dev/patches.html

http://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x/modules/cache/






RE: [users@httpd] mod_disk_cache doesn't cache URLs ending with a forward slash?

2011-04-13 Thread Geoff Millikan
Do we give out t-shirts or coffee mugs if someone stumps the list?   ;-)





[users@httpd] mod_disk_cache doesn't cache URLs ending with a forward slash?

2011-04-12 Thread Geoff Millikan
Dear List,

This is cached by mod_disk_cache:
http://www.t1shopper.com/mod_disk_cache_bug/index.php

This isn't cached:
http://www.t1shopper.com/mod_disk_cache_bug/

Anyone know what might cause this?

Thanks,

Geoff

---= Log showing page being written to host's cache =--- 
mod_cache.c(131): Adding CACHE_SAVE filter for /mod_disk_cache_bug/index.php 
mod_cache.c(138): Adding CACHE_REMOVE_URL filter for 
/mod_disk_cache_bug/index.php 
mod_deflate.c(602): Zlib: Compressed 19 to 21 : URL 
/mod_disk_cache_bug/index.php
mod_headers.c(663): headers: ap_headers_output_filter() 
mod_cache.c(639): cache: Caching url: /mod_disk_cache_bug/index.php 
mod_cache.c(645): cache: Removing CACHE_REMOVE_URL filter.
mod_disk_cache.c(962): disk_cache: Stored headers for URL 
http://www.t1shopper.com:80/mod_disk_cache_bug/index.php?
mod_disk_cache.c(1051): disk_cache: Body for URL 
http://www.t1shopper.com:80/mod_disk_cache_bug/index.php? cached.

---= Log showing page served from host cache to client =---
mod_disk_cache.c(476): disk_cache: Recalled cached URL info header 
http://www.t1shopper.com:80/mod_disk_cache_bug/index.php?
mod_disk_cache.c(749): disk_cache: Recalled headers for URL 
http://www.t1shopper.com:80/mod_disk_cache_bug/index.php?
mod_cache.c(282): cache: running CACHE_OUT filter 
mod_cache.c(296): cache: serving /mod_disk_cache_bug/index.php


---= Log showing "forward slash page" not served to client =--- 
mod_cache.c(131): Adding CACHE_SAVE filter for /mod_disk_cache_bug/ 
mod_cache.c(138): Adding CACHE_REMOVE_URL filter for /mod_disk_cache_bug/ 
mod_deflate.c(602): Zlib: Compressed 19 to 21 : URL 
/mod_disk_cache_bug/index.php

We're on:
httpd-2.2.3-45.el5.centos.x86_64.rpm





RE: [users@httpd] Apache 2.2.3 > mod_disk_cache not caching if a DirectoryIndex is used

2011-04-11 Thread Geoff Millikan
This issue impacts all index.html, index.php, index.shtml, etc files even
the "homepage." For example: 

This is cached: http://www.mycompany.com/index.php

This *is not* cached: http://www.mycompany.com/






[users@httpd] Apache 2.2.3 > mod_disk_cache not caching if a DirectoryIndex is used

2011-04-10 Thread Geoff Millikan
Is this a known bug?

At URL #1 below the content isn't cached (or served out of cache):
http://www.t1shopper.com/mod_disk_cache_bug/

At URL #2, we just add the "index.php" into the URL string and the cache is
working properly (see log below).
http://www.t1shopper.com/mod_disk_cache_bug/index.php

Am I missing something?

Thanks,

http://www.t1shopper.com/


---= Log from URL #1 =---
[Mon Apr 11 00:28:13 2011] [debug] mod_cache.c(131): Adding CACHE_SAVE
filter for /mod_disk_cache_bug/
[Mon Apr 11 00:28:13 2011] [debug] mod_cache.c(138): Adding CACHE_REMOVE_URL
filter for /mod_disk_cache_bug/
[Mon Apr 11 00:28:13 2011] [debug] mod_deflate.c(602): [client 71.177.216.6]
Zlib: Compressed 19 to 21 : URL /mod_disk_cache_bug/index.php


---= Log from URL #2 (First Request) =---
[Mon Apr 11 00:28:46 2011] [debug] mod_cache.c(131): Adding CACHE_SAVE
filter for /mod_disk_cache_bug/index.php
[Mon Apr 11 00:28:46 2011] [debug] mod_cache.c(138): Adding CACHE_REMOVE_URL
filter for /mod_disk_cache_bug/index.php
[Mon Apr 11 00:28:46 2011] [debug] mod_deflate.c(602): [client 71.177.216.6]
Zlib: Compressed 19 to 21 : URL /mod_disk_cache_bug/index.php
[Mon Apr 11 00:28:46 2011] [debug] mod_headers.c(663): headers:
ap_headers_output_filter()
[Mon Apr 11 00:28:46 2011] [debug] mod_cache.c(639): cache: Caching url:
/mod_disk_cache_bug/index.php
[Mon Apr 11 00:28:46 2011] [debug] mod_cache.c(645): cache: Removing
CACHE_REMOVE_URL filter.
[Mon Apr 11 00:28:46 2011] [debug] mod_disk_cache.c(962): disk_cache: Stored
headers for URL http://www.t1shopper.com:80/mod_disk_cache_bug/index.php?
[Mon Apr 11 00:28:46 2011] [debug] mod_disk_cache.c(1051): disk_cache: Body
for URL http://www.t1shopper.com:80/mod_disk_cache_bug/index.php? cached.

---= Log from URL #2 (Second Request) =---
[Mon Apr 11 00:30:38 2011] [debug] mod_disk_cache.c(476): disk_cache:
Recalled cached URL info header
http://www.t1shopper.com:80/mod_disk_cache_bug/index.php?
[Mon Apr 11 00:30:38 2011] [debug] mod_disk_cache.c(749): disk_cache:
Recalled headers for URL
http://www.t1shopper.com:80/mod_disk_cache_bug/index.php?
[Mon Apr 11 00:30:38 2011] [debug] mod_cache.c(282): cache: running
CACHE_OUT filter
[Mon Apr 11 00:30:38 2011] [debug] mod_cache.c(296): cache: serving
/mod_disk_cache_bug/index.php

We're on:
httpd-2.2.3-45.el5.centos.x86_64.rpm






RE: [us...@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-06-18 Thread Geoff Millikan
It's a bummer not to be able to use mod_disk_cache, it's such a good tool.
Would love a solution on this.

> -Original Message-
> Sent: Friday, June 11, 2010 8:50 AM
> To: users@httpd.apache.org
> Subject: RE: [us...@httpd] mod_disk_cache causing corrupted 
> output on server side includes?
> 
> > Interesting... what other modules did you load?  Looking for 
> > possible sources of memory corruption.
> 
> Why memory corruption?  Everything is being served from disk...
>  
> > The uncached results to the client, they are ok?  
> 
> Yes.
> 
> > E.g. is the cache polluted on the way in, 
> > or on the way out?  Simulated load would 
> > help determine this.
> 
> Just the pages in cache are bad.  The whole page isn't bad, 
> just the section
> where the server-side-include should be (see screenshot):
> https://issues.apache.org/bugzilla/attachment.cgi?id=25524
> 
> Thanks,
> 
> Geoff Millikan
> 
> "Most people say that it is the intellect which makes a 
> great scientist.
> They are wrong: it is character." -Albert Einstein
> 
> "If you can't explain it simply, you don't understand it well enough"
> -Albert Einstein
> 
> 




RE: [us...@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-06-11 Thread Geoff Millikan
> Interesting... what other modules did you load?  Looking for 
> possible sources of memory corruption.

Why memory corruption?  Everything is being served from disk...
 
> The uncached results to the client, they are ok?  

Yes.

> E.g. is the cache polluted on the way in, 
> or on the way out?  Simulated load would 
> help determine this.

Just the pages in cache are bad.  The whole page isn't bad, just the section
where the server-side-include should be (see screenshot):
https://issues.apache.org/bugzilla/attachment.cgi?id=25524

Thanks,

Geoff Millikan

"Most people say that it is the intellect which makes a great scientist.
They are wrong: it is character." -Albert Einstein

"If you can't explain it simply, you don't understand it well enough"
-Albert Einstein





RE: [users@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-06-10 Thread Geoff Millikan
> > Just in case your filesystem doesn't support them, have you tried
> > disabling sendfile and mmap?
> 
> Since all our files are being read directly off the drive 
> (we're not using a NFS or CIFS mount) this shouldn't 
> be an issue (at least on RedHat Enterprise
> Linux).  But we'll give it a shot and report back.

Nope, that didn't help.  Within 2 hours the *.shtml web pages have binary
data where the server side includes should be.  The cache worked great for
the first 90 minutes or so.

Bug report untouched after 7 days:
https://issues.apache.org/bugzilla/show_bug.cgi?id=49385





RE: [us...@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-06-10 Thread Geoff Millikan
> Just in case your filesystem doesn't support them, have you tried
> disabling sendfile and mmap?

Since all our files are being read directly off the drive (we're not using a
NFS or CIFS mount) this shouldn't be an issue (at least on RedHat Enterprise
Linux).  But we'll give it a shot and report back.

http://kbase.redhat.com/faq/docs/DOC-4857

Maybe the issue is htcacheclean is corrupting the included files?  Nope.  We
turned it off and still had the problem.  Maybe the corruption is happening
when the cache is getting rebuilt (because CacheMaxExpire or
CacheLastModifiedFactor had been exceeded).  Nope.  We set everything very
high (CacheDefaultExpire 2592000 and CacheLastModifiedFactor 30) so that the
first cached page would be preserved and not altered (the initial cached
page is not corrupted) however within a few hours the server side include
pages served out of the cache had binary data in them again.  It's
impossible to trace when the corruption is happening and what's causing it.
So mod_disk_cache is not usable for us; we've had to turn it off.
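For anyone else tuning these knobs: the heuristic that CacheLastModifiedFactor and CacheMaxExpire control works roughly like this (heuristic_expiry is a made-up helper sketching the documented behavior, not Apache code):

```shell
# heuristic_expiry FACTOR SECONDS_SINCE_LAST_MODIFIED CACHE_MAX_EXPIRE
# Freshness lifetime = factor * (Date - Last-Modified), capped at the max.
heuristic_expiry() {
  awk -v f="$1" -v a="$2" -v m="$3" \
    'BEGIN { e = f * a; if (e > m) e = m; printf "%d\n", e }'
}

# With the default factor 0.1, a page last modified 10 days before its
# Date header stays fresh for 1 day...
heuristic_expiry 0.1 864000 172800
# ...and a very old page is capped at CacheMaxExpire (2 days here).
heuristic_expiry 0.1 8640000 172800
```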




RE: [us...@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-06-09 Thread Geoff Millikan
Submitted bug report 6 days ago but no movement.  Tips?

https://issues.apache.org/bugzilla/show_bug.cgi?id=49385

Thanks!
 

> -Original Message-
> From: Geoff Millikan
> Sent: Thursday, June 03, 2010 2:25 PM
> To: users@httpd.apache.org
> Subject: [us...@httpd] mod_disk_cache causing corrupted 
> output on server side includes?
> 
> Dear List,
> 
> It appears that when mod_disk_cache reads server side 
> includes to create its final cached web page, it sometimes 
> corrupts the included file.
> 
> I think the issue may be that the included file is getting 
> DEFLATEd and Apache is intermittently forgetting to ungzip it 
> prior to putting it into the parent page.
> 
> Any other thoughts?
> 
> Thanks,
> 
> http://www.t1shopper.com/
> 
> Details: The parent web page is called "index.shtml" and the 
> child file is getting included like this:
> <!--#include virtual="my_html_file.html" -->
> 
> Everything else on the page looks fine but where the 
> my_html_file.html should be we see binary output in the 
> source code like this:
> í\énÛHþÜ|á]åâu?-SìA½¡!ŸÄoÿ›IÅÇsØß'±"’ÿ ¼ñ*dLXúIpV.n§Œ 
> ©äÉb®H&Ùˆð˜I
> 
> If I restart Apache, the problem remains. But the problem 
> goes away if I delete the cache on the web server. So the 
> cache must have gotten corrupted. I can refresh the page many 
> times after that and the page is fine. This good page is kept 
> in cache for about 30 days because of the config settings (below).
> 
> 
> LoadModule deflate_module modules/mod_deflate.so
> DeflateCompressionLevel 1
> DeflateMemLevel 9
> DeflateWindowSize 15
> SetEnvIfNoCase Request_URI \
> \.(?:gif|jpe?g|png|ico)$ no-gzip dont-vary
> #Header append Vary User-Agent env=!dont-vary
> 
> 
> LoadModule disk_cache_module modules/mod_disk_cache.so
> CacheRoot /var/httpd/proxy/
> CacheEnable disk /
> CacheDisable /i
> CacheMaxFileSize 50
> CacheMinFileSize 1000
> CacheDirLevels 2
> CacheDirLength 2
> CacheIgnoreCacheControl Off
> CacheIgnoreNoLastMod On
> CacheIgnoreHeaders Set-Cookie
> CacheLastModifiedFactor 0.1
> CacheMaxExpire 172800
> CacheDefaultExpire 86400 
> 
> http://mail-archives.apache.org/mod_mbox/httpd-users/201004.mbox/%3c9e388fa64dd04b3591a47dd984702...@rabbithill%3e
> 
> 
> 




[us...@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-06-03 Thread Geoff Millikan
Dear List,

It appears that when mod_disk_cache reads server side includes to create its 
final cached web page, it sometimes corrupts the included file.

I think the issue may be that the included file is getting DEFLATEd and Apache 
is intermittently forgetting to ungzip it prior to putting it into the parent 
page.

Any other thoughts?

Thanks,

http://www.t1shopper.com/

Details: The parent web page is called "index.shtml" and the child file is 
getting included like this:

<!--#include virtual="my_html_file.html" -->

Everything else on the page looks fine but where the my_html_file.html should 
be we see binary output in the source code like this:
í\énÛHþÜ|á]åâu?-SìA½¡!ŸÄoÿ›IÅÇsØß'±"’ÿ ¼ñ*dLXúIpV.n§Œ ©äÉb®H&Ùˆð˜I
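One quick way to test the "forgot to ungzip" hypothesis above would be to check the affected cached body for the gzip magic bytes 1f 8b (is_gzipped below is a made-up helper; the sketch uses a generated file rather than a real cache entry):

```shell
# Made-up helper: do the first two bytes of a file match the gzip magic?
is_gzipped() {
  [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' \n')" = "1f8b" ]
}

tmp=$(mktemp)
printf 'hello include\n' | gzip -c > "$tmp"
is_gzipped "$tmp" && echo "looks gzipped"

printf 'plain text\n' > "$tmp"
is_gzipped "$tmp" || echo "looks plain"
```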

If I restart Apache, the problem remains. But the problem goes away if I delete 
the cache on the web server. So the cache must have gotten corrupted. I can 
refresh the page many times after that and the page is fine. This good page is 
kept in cache for about 30 days because of the config settings (below).


LoadModule deflate_module modules/mod_deflate.so
DeflateCompressionLevel 1
DeflateMemLevel 9
DeflateWindowSize 15
SetEnvIfNoCase Request_URI \
\.(?:gif|jpe?g|png|ico)$ no-gzip dont-vary
#Header append Vary User-Agent env=!dont-vary


LoadModule disk_cache_module modules/mod_disk_cache.so
CacheRoot /var/httpd/proxy/
CacheEnable disk /
CacheDisable /i
CacheMaxFileSize 50
CacheMinFileSize 1000
CacheDirLevels 2
CacheDirLength 2
CacheIgnoreCacheControl Off
CacheIgnoreNoLastMod On
CacheIgnoreHeaders Set-Cookie
CacheLastModifiedFactor 0.1
CacheMaxExpire 172800
CacheDefaultExpire 86400 

http://mail-archives.apache.org/mod_mbox/httpd-users/201004.mbox/%3c9e388fa64dd04b3591a47dd984702...@rabbithill%3e






RE: [us...@httpd] mod_expires > Why are both the Expires and Cache-Control HTTP headers set?

2010-05-10 Thread Geoff Millikan


Is this the wrong forum for this question?



> The HTTP spec linked below says, "Servers specify explicit 
> expiration times using *either* the Expires header, or
> the max-age directive of the Cache-Control header."
> 
> So why then does mod_expires set *both* the Expires HTTP 
> header *and* the max-age directive of the 
> Cache-Control HTTP header?  It's overkill to set
> both, right? 
> 
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.2.1
> 
> http://httpd.apache.org/docs/2.2/mod/mod_expires.html
> 
> Thanks,
> 
> http://www.t1shopper.com/
> 
> PS. It would be great if mod_expires had a Directive something like
> "ExpiresHeader" where the syntax could be something like
> Expires|Cache-Control|All
> 




[us...@httpd] mod_expires > Why are both the Expires and Cache-Control HTTP headers set?

2010-05-06 Thread Geoff Millikan
The HTTP spec linked below says, "Servers specify explicit expiration times
using *either* the Expires header, or the max-age directive of the
Cache-Control header."

So why then does mod_expires set *both* the Expires HTTP header *and* the
max-age directive of the Cache-Control HTTP header?  It's overkill to set
both, right? 

http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.2.1

http://httpd.apache.org/docs/2.2/mod/mod_expires.html

Thanks,

http://www.t1shopper.com/

PS. It would be great if mod_expires had a Directive something like
"ExpiresHeader" where the syntax could be something like
Expires|Cache-Control|All




[us...@httpd] mod_disk_cache causing corrupted output on server side includes?

2010-04-21 Thread Geoff Millikan
Dear List,

You can see the corrupted/binary garbled mess on the page at the link here:
http://www.t1shopper.com/tools/calculate/index.shtml

The scrambled mess is outputted right at the server side include like this:


Everything else on the page looks fine, it's only the included file that's 
mangled like this:
í\énÛHþÜ|á]åâu?-SìA½¡!ŸÄoÿ›IÅÇsØß'±"’ÿ ¼ñ*dLXúIpV.n§Œ ©äÉb®H&Ùˆð˜I

If you look at the same page at the link below, everything is fine.
http://www.t1shopper.com/tools/calculate/

If I restart Apache, the problem remains but if I delete the disk_cache the 
problem goes away.
shell> rm -rf /var/httpd/proxy/*

Why is mod_disk_cache screwing this include up?

Thanks,

http://www.t1shopper.com/


CacheRoot /var/httpd/proxy/
CacheEnable disk /
CacheDisable /i
CacheMaxFileSize 50
CacheMinFileSize 1000
CacheDirLevels 2
CacheDirLength 2
CacheIgnoreCacheControl Off
CacheIgnoreNoLastMod On
CacheIgnoreHeaders Set-Cookie
CacheLastModifiedFactor 0.1
CacheMaxExpire 172800
CacheDefaultExpire 86400




RE: [us...@httpd] Timestamps of access log entries

2010-04-20 Thread Geoff Millikan
Why not change the timezone on the whole server, not just Apache?




RE: [us...@httpd] Scrubbing log files

2010-04-13 Thread Geoff Millikan
> Are there any lists of common robots on the net?  Are there 
> some regular expressions or searches that would help? Are 
> there known IP addresses that are safe to discard?

I believe your question is off topic for this forum however I'll share our
joy with you.

Some are known by hostname:
http://googlewebmastercentral.blogspot.com/2006/09/how-to-verify-googlebot.html 

others by IP:
http://www.cuil.com/info/webmaster_info/ 

We whitelist certain bots; others, if they crawl too fast and don't obey 
robots.txt, get banned.  Maintaining this is a lot of ongoing work, especially 
if the bot company identifies itself by plain IP addresses instead of 
forward-confirmed reverse DNS 
(http://en.wikipedia.org/wiki/Forward-confirmed_reverse_DNS), which Google, 
MSN, Yahoo, etc. use and which is much more flexible.
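The forward-confirmed check boils down to: reverse-resolve the connecting IP, verify the name ends in the bot's domain, then resolve that name forward and confirm it maps back to the same IP. Only the suffix test is network-free; is_googlebot_name below is a made-up helper, and the lookup half is left as comments:

```shell
# Made-up helper: does a PTR name belong to Googlebot's domains?
is_googlebot_name() {
  case "$1" in
    *.googlebot.com|*.google.com) return 0 ;;
    *) return 1 ;;
  esac
}

# The network half would look something like (not run here):
#   name=$(dig +short -x "$ip")   # reverse lookup
#   back=$(dig +short "$name")    # forward lookup
#   [ "$back" = "$ip" ]           # must round-trip to count as verified

is_googlebot_name "crawl-66-249-66-1.googlebot.com" && echo "suffix ok"
```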

Some code & thoughts to keep you busy:
http://www.webmasterworld.com/google/3092423.htm
http://www.webmasterworld.com/php/3606836.htm

Thanks,

http://www.t1shopper.com/




RE: [us...@httpd] Re: Preventing DoS attacks from single client host

2010-04-06 Thread Geoff Millikan
> add Operating System wide firewall rules to
> disallow more than N number of concurrent TCP connections to port 80
> from a single IP address.

You using iptables?  What rules did you end up using to accomplish this?
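For the archives, one common way to do this is iptables' connlimit match (a sketch only; the module must be available in your kernel, the threshold of 20 is arbitrary, and applying it needs root):

```shell
# Reject a single source IP's 21st concurrent connection to port 80.
iptables -A INPUT -p tcp --syn --dport 80 \
         -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```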




[us...@httpd] htcacheclean

2010-04-06 Thread Geoff Millikan
How big do we set our mod_disk_cache cache?  100 Megs?  1 Gig?


"Determining how frequently to run htcacheclean and what target size to use
for the cache is somewhat complex and trial and error may be needed to
select optimal values."
http://httpd.apache.org/docs/2.1/caching.html#disk




RE: [us...@httpd] Cannot access my first web app

2010-04-02 Thread Geoff Millikan
> If yes, then try this:
> http://rafael/Cambifon/indice.pl

Make that:

http://rafael:8080/Cambifon/indice.pl




RE: [us...@httpd] Cannot access my first web app

2010-04-02 Thread Geoff Millikan
From the command line, can you ping rafael like this:
 
prompt> ping rafael 

If not, then you need to figure out what the FQDN is.  Something like this:
prompt> ping rafael.mycompany.com
 
If yes, then try this:
http://rafael/Cambifon/indice.pl








[us...@httpd] mod_disk_cache -> How big do we set the cache?

2010-04-02 Thread Geoff Millikan
Question: What are the rules of thumb for setting the size of the cache?

Fact: The memory usage on the server is below for reference.

Fact: htcacheclean will continuously prune the cache to a certain size.  

Fact: When the cache got to 587,692 items taking up 2.2GB of disk space I
started getting this error, "(103)Software caused connection abort: cache:
error returned while trying to return disk cached data"

Fact: I already read the manual, "Determining how frequently to run
htcacheclean and what target size to use for the cache is somewhat complex
and trial and error may be needed to select optimal values."
http://httpd.apache.org/docs/2.1/caching.html#disk

shell> free -m
             total       used       free     shared    buffers     cached
Mem:         15425      14644        781          0       1046       8867
-/+ buffers/cache:       4731      10694
Swap:         1983          0       1983


shell> cat /proc/meminfo
MemTotal: 15796044 kB
MemFree:794144 kB
Buffers:   1071256 kB
Cached:9080128 kB
SwapCached:  0 kB
Active:   10197176 kB
Inactive:  4076964 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal: 15796044 kB
LowFree:794144 kB
SwapTotal: 2031608 kB
SwapFree:  2031544 kB
Dirty: 908 kB
Writeback:   0 kB
AnonPages: 4122388 kB
Mapped:  32604 kB
Slab:   651948 kB
PageTables:  46564 kB
NFS_Unstable:0 kB
Bounce:  0 kB
CommitLimit:   9929628 kB
Committed_AS:  8145084 kB
VmallocTotal: 34359738367 kB
VmallocUsed:  2164 kB
VmallocChunk: 34359735859 kB
HugePages_Total: 0
HugePages_Free:  0
HugePages_Rsvd:  0
Hugepagesize: 2048 kB
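Lacking firmer guidance from the docs, one pragmatic approach is to let the cache reach steady state, measure it, and set htcacheclean's limit some margin above that; the measuring half is just du (a sketch, using a scratch directory instead of the real CacheRoot):

```shell
SCRATCH=$(mktemp -d)   # stand-in for /var/httpd/proxy
# Fake a cached body totalling 64 KB.
dd if=/dev/zero of="$SCRATCH/sample.data" bs=1024 count=64 2>/dev/null
usage_kb=$(du -sk "$SCRATCH" | awk '{print $1}')
echo "cache is using ${usage_kb} KB"
# Then, e.g., cap the real cache at roughly double steady state:
#   htcacheclean -d60 -n -t -p /var/httpd/proxy -l $((usage_kb * 2))K
```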




RE: [us...@httpd] metrics on apache httpd

2010-03-29 Thread Geoff Millikan
> http://lmgtfy.com/?q=apache+status+page

Dude, that was harsh, but that's the funniest site I've seen in 6 months,
including The Onion. 




RE: [us...@httpd] Apache 2.2 optimization -> Three tips

2010-03-22 Thread Geoff Millikan
> +1.  Isn't that documented?  I'm surprised!

Suppose so; as a n00b, though, I thought maybe cutting the default time of 15
in half would do it.  It didn't.  With many people on fast Internet
connections now, even 1 or 2 second KeepAliveTimeouts seem to work well.
 
> Interesting!  Do you have any insights into how this varies 
> with filesystem and operating system/hardware behaviour?  
> If this comes close to the difference made by .htaccess, 
> I'd be looking at what lies beneath.

Unfortunately, the day we did this we also optimized a bunch of MySQL
queries.  That aside, and with the disclaimer that we're total newbies at
filesystem optimization, we saw about a 40% decrease in CPU time and (I have
no idea why) about 20% more free RAM.  We're on an ext3 filesystem, but it
would seem any gain from noatime would apply to any filesystem.  I can say
that of all the tuning we've done to the system, noatime and MinSpareServers
produced the most immediate, "I can see a difference with my naked eye"
results.  YMMV.  The article below had convincing data which led us to try
it, but messing with our filesystem isn't something we took lightly:
http://www.philchen.com/2009/02/09/some-tuning-tips-for-apache-mod_cache-mod_disk_cache

> Furthermore, what you describe appears to assume prefork MPM,
> which is unlikely to be the best solution where performance 
> is an issue.

Agreed, from what I've read worker MPM seems better but as you guys may
know, if you want tech support from Red Hat on RHEL5, you're stuck with
prefork.  Otherwise it's recompile time (yes, all the books say everything
should be recompiled) and if something goes haywire we're screwed with no
tech support (albeit the tech support from Red Hat hasn't been very helpful
anyway).




RE: [us...@httpd] Apache 2.2 optimization -> Three tips

2010-03-22 Thread Geoff Millikan
> Fair enough, but if your testing was of ~30 requests, and we 
> are believing that
> the typical browser is making 6 simultaneous connections, 
> then it sounds like
> the real magic was 6 * fudge factor of 5 ;-P

LOL, yes, I totally agree.  I kept thinking that no optimization guide puts
MinSpareServers and StartServers in the 20s, 30s or 40s, so I kept telling
myself that there's no way the "waiting" issues we saw could be solved by
increasing the Servers.  However, we eliminated everything else:
we're on 100Mbit Internet connection in 1 Wilshire, Los Angeles which is one
of the more interconnected buildings in the USA, network optimized (all
hardware Cisco GigE rated), server NIC cards all GigE rated and tuned, file
system optimized (see noatime) and disk buffers optimized on RAID 10, extra
RAM to spare, DNS resolution times tested, etc, etc, etc.  We were going
crazy. We started looking into serving content with Lighttpd or Nginx to try
and get around the issue but didn't want to configure/maintain more web
servers so went back to the drawing board with Apache.

Increasing MinSpareServers and StartServers beyond "normal" is what solved
the issue for us.  

Although it doesn't quite make sense, it seems 5 or 10 servers cannot
*simultaneously* both read 35 files off the disk and serve them.  Now if the
client is on dial-up or mediocre DSL, the extra 500 ms to 800 ms of wait
time probably wouldn't even be noticed as the last few images stream in and
the page finishes rendering.  But our office is on a 20 Mbps Verizon fiber
link, and many of our customers are on fast pipes.  So we really do notice
the difference between a web page that "snaps up to attention" and one that
renders 90% with the last few pieces shuffling around.

Best,

http://www.t1shopper.com/

-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
"The nine most terrifying words in the English language are: I'm from the
government and I'm here to help." - Ronald Reagan, August 21, 1986
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+




RE: [us...@httpd] Apache 2.2 optimization -> Three tips

2010-03-22 Thread Geoff Millikan
> if your server averages 300 simulatious connections, you need
> to start with 300 servers, and you never want it to drop
> below that number.

Your experience might show otherwise, but based on ours, if we averaged 300
new customers per minute (not 300 requests/sec), a MinSpareServers of 300
wouldn't be enough.

> Yes, a browser can make multiple connections, but this is 
> typically only two
> parallel pipelines, perhaps even four.  

The "average" browser now makes 6 parallel connections per hostname per:
http://www.browserscope.org/

> But 30 workers are not handling the 30 requests comprising
> one user connecting to your site!!!  You just happened
> to hit a magic correlation in your testing :)

I agree, the way I understand the prefork model to work, the 30 processes
aren't each serving one of the 30+ requests this Customer's browser made (is
keepalive tracked across processes?).  However, the way the testing worked
out for us, it seems that way.  We did a lot of testing to come up with our
numbers and just "ball parking" it, the number of servers seemed to work out
best when matched to the number of requests per page.  

Don't take my word for it, use Firebug and test it - if you're seeing big
"waiting" bar times and you know you have the bandwidth on both ends (ask
your web host how much burst or max bandwidth you get), then up the
MinSpareServers and see what happens (comment out the MaxSpareServers).  For
the record, our site averages ~3 requests per second 24-hours a day and
averages about 35 (ugh, I know) requests per page.  Really should be using
CSS sprites to cut down on the number of requests but instead I'm typing
this and pretending I shouldn't be going to bed.  ;-)

Best,

http://www.t1shopper.com/

-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
"The nine most terrifying words in the English language are: I'm from the
government and I'm here to help." - Ronald Reagan, August 21, 1986
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+




[us...@httpd] Apache 2.2 optimization -> Three tips

2010-03-22 Thread Geoff Millikan
Here are three things I wish I had read about Apache optimization that are
not commonly discussed (we found out by years of trial and error).  These
three settings are worth more than all the other optimization we've done
(e.g. SendBufferSize, AcceptFilter http data, EnableMMAP On, EnableSendfile
On, eliminating all .htaccess files, etc.)

1. Set your KeepAliveTimeout to 3 seconds or less.
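A minimal httpd.conf sketch of this tip (directive names as in the Apache
2.2 docs; the 3-second value is the suggestion above, not a universal
default):

```apache
# Keep connections open for reuse, but only briefly, so idle
# clients don't pin a worker process.
KeepAlive On
KeepAliveTimeout 3
```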

2. Turn off atime "access time" updates by using noatime.  Do it in real
time with *something like* the command line below.  This will make it so you
only have to read the file off the disk instead of doing both a read (to get
the data) and then a write to update the access time.

shell> mount -o noatime,remount,rw / 

To make the noatime setting last between reboots, edit your fstab and modify
this line:

/dev/VolGroup00/LogVol00  /  ext3  defaults          1 1

to read *something like* this:

/dev/VolGroup00/LogVol00  /  ext3  defaults,noatime  1 1
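To confirm the remount took, something like this works on Linux (the
current option list for each mount shows up in /proc/mounts):

```shell
# Show the root filesystem's mount line; after the remount above
# you should see "noatime" among the options in the fourth field.
grep ' / ' /proc/mounts
```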

3. Increase your StartServers to the number of maximum requests on your
biggest page.  For example, if your busiest page requires 30 hits/requests
to load all the images, css, javascript, etc then if you have the RAM to
support it, set your StartServers to 30 and your MinSpareServers to 30.  To
know if you have the RAM to support it, run the command "top" and look for
the httpd process under the column "RES."  That's the amount of physical RAM
the Apache process is taking up.  On our server, each process take up about
12 megs of RAM.  So having 30 Apache process running will takes up 360 megs
of RAM.  Now use Firefox Firebug.  Look at the time each item is in
"waiting."  Before you increased  StartServers you might see the first 10 or
20 items have a "waiting" time of 20 to 30 ms then the last 10 items have a
"waiting" time of 300 or 500ms or more.  When you have enough processes
going, all the "waiting" times will be about the same.  Don't raise your
StartServers above the amount of RAM available otherwise you'll swap which
I've read is very slow (you can tell by looking at the "top" output under
the row called "swap").
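The RAM arithmetic above is easy to script; this is just a back-of-envelope
sketch (on a live server you'd take the per-process figure from top's RES
column, e.g. via "ps -C httpd -o rss="):

```shell
# Estimate total resident memory for N prefork workers, using the
# ~12 MB per-process figure observed above.
workers=30
rss_mb=12
echo "$((workers * rss_mb)) MB of RAM for $workers httpd workers"
# prints: 360 MB of RAM for 30 httpd workers
```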

Money and premature optimization is the root of all evil.

Best,

http://www.t1shopper.com/


-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
"The nine most terrifying words in the English language are: I'm from the
government and I'm here to help." - Ronald Reagan, August 21, 1986
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+




[us...@httpd] Order of log entries in the access log?

2010-02-26 Thread Geoff Millikan
What order does Apache write the log entries?  

I'm assuming that if Apache serves 100 responses all in the same one-second
window (100 responses per second), it's going to write them in the order the
responses happened and not scramble them up willy-nilly.  In other words,
although there are 100 log entries all stamped "26/Feb/2010:20:24:25", the
order they are written in is hopefully the rough order in which they were
served.

Is that right?

Thanks,

http://www.t1shopper.com/


-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

"The nine most terrifying words in the English language are: I'm from the
government and I'm here to help." - Ronald Reagan, August 21, 1986

-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+




RE: [us...@httpd] Custom Log Format -> Adding milliseconds to timestamp -> %{format}t

2010-02-21 Thread Geoff Millikan
> Well, for sequence and order any quantity that only increases (or only
> decreases, for that matter) each time it is sampled, would serve.
> Such as a gadget that just hands back the next integer in series every
> time it is queried (properly interlocked across threads/processes).

Agreed, good point and if such a beast existed, we would use it.

> Come to think of it, if you are serving requests fast enough,
> milliseconds won't be sufficient.

If we were looking for a UUID for each log entry, mod_serial would provide
that (a hash across IP address, requested object, user agent, etc is pretty
good too).  That's not the problem.  The problem is that when we're doing 30
to 60 requests per second, the log entries get difficult to separate as to
which *order* the events are happening, etc.

> But the available
> precision will depend on what APR can get from the OS, so you may only
> be getting the illusion of microsecond precision while in fact the
> value returned jumps by units of 1,000,000 (that is, seconds) or so.

Agreed, but that's not a problem here - most current *NIX systems (we're on
RHEL 5/CentOS 5) provide actual millisecond or better precision.
 
> Time is far more complex.

Agreed, but as Consumer's bandwidth increases and processors get faster
we're probably going to want to start logging things with millisecond
precision.  But that's speculation and this isn't the forum for that.

We thought of another solution that might meet the need: use the line number
of the log entry.  So the first entry in the log is #1, second log entry is
#2, etc.  But as with many busy web sites, we don't wait till the end of the
day to look at web logs.  We have a job that ETL's the log into a database
every x minutes.  Each time it runs, it picks up where we left off.  So for
us, the line number starts over at "1" each time we start the ETL job.
Thus, the incrementing number would be unique only for each ETL job.  This
is ok because, again, we're not looking for a UUID for each log line as much
as we want to recognize the sequence of events.  But this leads to another
question.
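One way to get that per-run sequence is to have the ETL loader number the
lines as it reads them; a sketch with awk (the access_log path is
hypothetical):

```shell
# Prefix each log line with a running sequence number (1, 2, 3, ...)
# so the order of events survives the load into the database.
awk '{ printf "%d\t%s\n", NR, $0 }' /var/log/httpd/access_log
```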

What order does Apache write the log entries?  I'd guess that as soon as the
response is given to the client, Apache generates the log entry?  If Apache
serves 100 responses all in the same one-second window, I suppose it's going
to write them in the order the responses happened and not scramble them up.
In other words, although there are 100 log entries all stamped
"21/Feb/2010:20:24:25", the order they are written in is hopefully the rough
order in which they were served?

Thanks List,

http://www.t1shopper.com/




RE: [us...@httpd] Custom Log Format -> Adding milliseconds to timestamp -> %{format}t

2010-02-18 Thread Geoff Millikan
Dan,

Yes, I suppose adding milliseconds to the request header using mod_headers
with a custom header name like "Milliseconds" would work (example below).
It would have to be a custom header because it appears that milliseconds
cannot be appended to any "normal" date time fields since the HTTP 1.1 spec
only permits times in seconds
(http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.3).  

I wonder how much overhead adding custom headers to every request adds in
CPU time + RAM?

#Config line for mod_headers
RequestHeader set Milliseconds "%t"

Thanks,

http://www.t1shopper.com/

PS. There's a part of our site that dumps the Client's headers (link below).
We'd have to remove the custom header on this part of the web site otherwise
people would say, "My browser didn't send that header!"  So this solution
works only most of the time for us.
http://www.t1shopper.com/tools/http-headers.php




RE: [us...@httpd] Custom Log Format -> Adding milliseconds to timestamp -> %{format}t

2010-02-18 Thread Geoff Millikan
Using %{UNIQUE_ID}e from mod_unique_id is a *great* suggestion if our
interest was merely identifying each log line uniquely.  However the initial
question is focused more on the sequence and order of the requests.  Thus, I
believe a time-based solution is what we're after.  

This isn't a Linux forum but it seems like *nix would keep time with
milliseconds in an environmental var somewhere?  Then we could stuff it in
the log like %{TIME_WITH_MILLISECONDS}e 
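There's no standard environment variable carrying millisecond time, but the
clock itself has sub-second resolution; for example GNU date (an assumption:
the %N extension isn't POSIX) can print it:

```shell
# Milliseconds since the Unix epoch (GNU coreutils date);
# %s = seconds, %3N = first three digits of the nanosecond field.
date +%s%3N
```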

Thanks,

http://www.t1shopper.com/

PS. We thought about using %k, the "number of keepalive requests handled on
this connection," as that would track sequence; however, if the client
doesn't use keepalive then %k would always be zero.  Adding milliseconds to
the date seems like the cleanest solution.

PPS. Using mod_unique_id is more elegant than our current solution for
ensuring uniqueness, so we're looking into moving over to it depending on
how CPU/memory efficient mod_unique_id is.




[us...@httpd] Custom Log Format -> Adding milliseconds to timestamp -> %{format}t

2010-02-18 Thread Geoff Millikan
Dear List,

Has anyone figured out how to add milliseconds to the date and time the
request was made for logging?  Something like the below Common Log Format
where the ":1234" represents microseconds:

127.0.0.1 - - [18/Feb/2010:17:02:37:1234 +] "GET /favicon.ico HTTP/1.1"
200 145 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"

The reason I ask is because when we're doing 30 to 50 requests per second
it's getting tough to separate log entries when we do Customer tracking
analysis and debugging. I'm not talking about %T or %D here (time taken to
the load the request), I'm talking about the date and time the request was
made.

Maybe setting and using an environment variable like %{FOOBAR}e would work?

http://httpd.apache.org/docs/2.2/mod/mod_log_config.html#formats

Thanks,

http://www.t1shopper.com/




RE: [us...@httpd] ETag (entity tag) response header not being added to server side include's

2010-01-06 Thread Geoff Millikan
Thanks.  Apache 2.2 rocks and it's still rocking the free world after 15
years this February.  Here's to another 15 years - keep up the good work
Apache Project!




RE: [us...@httpd] ETag (entity tag) response header not being added to server side include's

2010-01-05 Thread Geoff Millikan
> [The Apache documentation] should more clearly state that 
> it generates ETags for static files only. The resource in 
> question is not really a static file.

I imagine there's nothing I can do to help get the 2.2 docs updated but if
there is, I'm happy to help.
 
> The cache doesn't know the full contents of the response until it's
> all been written to the client via the output filters.  You can't
> calculate headers at the end of a response and still use them.

I understand the full contents of the item (and its headers) cannot be known
until it's cached - but once cached, everything is known, headers included,
I believe?  This implies items served from cache could have an ETag merged
or appended to the already-known headers.  Regardless, Apache doesn't offer
a way to add an ETag to dynamic content today.  Rats.

Thanks,

Geoff

PS. The digest at http://httpd.markmail.org/search/?q=etag shows half a
dozen people trying to write custom applications to do this.  Now if the
Apache HTTP Server Project takes it on, the world saves untold CPU
cycles/bandwidth - maybe it even brings global peace.  For that, I'll put up
a couple hundred bucks to see the feature added - anyone on Apache dev
reading this, just let me know when it's in trunk and email me your PayPal
info.  Or let me know what beer you like - I'm good for a case shipped
anywhere in the USA.

PPS. BTW Apache rocks.  We endorse it publicly, so keep up the good work!
http://www.t1shopper.com/businesspartners.shtml#apache




RE: [us...@httpd] ETag (entity tag) response header not being added to server side include's

2010-01-05 Thread Geoff Millikan
> I think this is a long-standing limitation, since the etag can't be
> easily precomputed for the overall response (all the included stuff
> smushed together from potential exec's, etc)

I understand; however, the FileETag documentation doesn't even *hint* at
such a limitation.

By comparison, mod_disk_cache has no problem reading all the SSI stuff
together and then caching the page (as long as "include virtual" is used).
I think if mod_disk_cache can do it then so can FileETag. (For that matter,
FileETag could generate its Etag off the cached document!)

http://httpd.apache.org/docs/2.2/caching.html#overview

http://httpd.apache.org/docs/2.2/mod/core.html#fileetag




[us...@httpd] ETag (entity tag) response header not being added to server side include's

2010-01-05 Thread Geoff Millikan
Dear List,

Our ETag response header is being added successfully on all responses except
those files containing server side includes.  For example:

ETag is being set:
http://www.t1shopper.com/1KB.html

Etag not set:
http://www.t1shopper.com/index.shtml

The shtml files are using virtual includes like this:

<!--#include virtual="..." -->
How can I get Etag response header to show up on shtml files?

Best Regards,
 
Geoff Millikan
Newbury Park, CA, USA 91320
http://www.t1shopper.com/

---== httpd.conf Apache 2.2.3 (RedHat5) ==---
FileETag All
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

--== Headers ==--
http://www.t1shopper.com/1KB.html
HTTP/1.x 200 OK
Date: Tue, 05 Jan 2010 19:01:40 GMT
Server: Apache
Last-Modified: Wed, 02 Jul 2008 23:12:50 GMT
Etag: "16904da-45112a1a44080"
Accept-Ranges: bytes
Cache-Control: max-age=15552000
Expires: Sun, 04 Jul 2010 19:01:21 GMT
Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Content-Length: 41
Age: 19
Keep-Alive: timeout=3, max=100
Connection: Keep-Alive
Content-Type: text/html

http://www.t1shopper.com/index.shtml
HTTP/1.x 200 OK
Date: Tue, 05 Jan 2010 19:02:33 GMT
Server: Apache
Accept-Ranges: bytes
Cache-Control: max-age=15552000
Expires: Sun, 04 Jul 2010 19:02:33 GMT
Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Content-Length: 7523
Keep-Alive: timeout=3, max=100
Connection: Keep-Alive
Content-Type: text/html

This couldn't possibly be a bug as Etag support started in 1.3.23 and SSI
before that!


-
The official User-To-User support forum of the Apache HTTP Server Project.
See <http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
   "   from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org