Re: [users@httpd] missing image

2024-02-01 Thread Adam Weremczuk

Yup, all it needed was that missing slash. TA!

On 01/02/2024 13:46, Sherrard Burton wrote:
that is not the correct form of the absolute URL. absolute URLs are 
based at your DocumentRoot. so since your DocumentRoot is


DocumentRoot /var/www/html/holding

the correct absolute URL would be "/ms-logo.png", and the resulting 
img tag would be something like


<img src="/ms-logo.png">

HTH

best,
Sherrard 


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] missing image

2024-02-01 Thread Adam Weremczuk

Hi Sherrard,

My index.html is super simple:

--








p {text-align: center;}


UNAVAILABLE





 


The Matrix Science website is currently unavailable.
We apologise for the unexpected downtime.
If you require assistance, please contact <a href="mailto:supp...@matrixscience.com">supp...@matrixscience.com</a>.







--

Apache config file is very basic as well:

--



ServerName holding.matrixscience.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html/holding

ErrorLog ${APACHE_LOG_DIR}/holding/error.log
CustomLog ${APACHE_LOG_DIR}/holding/access.log combined


Options Indexes FollowSymLinks
AllowOverride FileInfo
Require all granted



--
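
With the stripped angle-bracket container directives restored, the vhost presumably 
reads roughly as follows (the listening port and the Directory path are assumptions 
based on the DocumentRoot above):

<VirtualHost *:443>
    ServerName holding.matrixscience.com
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/holding

    ErrorLog ${APACHE_LOG_DIR}/holding/error.log
    CustomLog ${APACHE_LOG_DIR}/holding/access.log combined

    <Directory /var/www/html/holding>
        Options Indexes FollowSymLinks
        AllowOverride FileInfo
        Require all granted
    </Directory>
</VirtualHost>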

and a .htaccess file in the root directory to handle errors:

--

ErrorDocument 400 /
ErrorDocument 401 /
ErrorDocument 402 /
ErrorDocument 403 /
ErrorDocument 404 /
ErrorDocument 405 /
ErrorDocument 406 /
ErrorDocument 407 /
ErrorDocument 408 /
ErrorDocument 409 /
ErrorDocument 410 /
ErrorDocument 411 /
ErrorDocument 412 /
ErrorDocument 413 /
ErrorDocument 414 /
ErrorDocument 415 /
ErrorDocument 416 /
ErrorDocument 417 /
ErrorDocument 422 /
ErrorDocument 423 /
ErrorDocument 424 /
ErrorDocument 426 /
ErrorDocument 428 /
ErrorDocument 429 /
ErrorDocument 431 /
ErrorDocument 451 /
ErrorDocument 500 /
ErrorDocument 501 /
ErrorDocument 502 /
ErrorDocument 503 /
ErrorDocument 504 /
ErrorDocument 505 /
ErrorDocument 506 /
ErrorDocument 507 /
ErrorDocument 508 /
ErrorDocument 510 /
ErrorDocument 511 /

--

After I replace:

img src="ms-logo.png"

with

img src="/var/www/html/holding/ms-logo.png"   (valid path)

the image doesn't even show on the home page.

Regards,
Adam


On 31/01/2024 19:53, Sherrard Burton wrote:



On 1/31/24 02:26 PM, Adam Weremczuk wrote:


I've already tried replacing the relative path to the image with an absolute one, 
but it made no difference.


Any ideas?



do you have a live example with the absolute path? the broken ones that 
i looked at all had the relative paths, which (understandably) don't work.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] missing image

2024-01-31 Thread Adam Weremczuk

Hi all,

Apache 2.4.56.

I'm trying to set up a holding page that catches all traffic 
(including errors) and redirects it to a very basic static page.


It now seems to be working as I want, apart from the logo image going missing 
whenever I request a multi-level path, e.g.:


1. Logo showing:

https://holding.matrixscience.com/invalid.php
https://holding.matrixscience.com/invalid

2. Logo missing:

https://holding.matrixscience.com/invalid/
https://holding.matrixscience.com/invalid/invalid.php
https://holding.matrixscience.com/invalid1/invalid2/invalid.html

I've already tried replacing the relative path to the image with an absolute one, 
but it made no difference.


Any ideas?

Thanks,
Adam


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] If statement against AUTHENTICATE_memberOf variable created by authnz_ldap

2024-01-30 Thread Adam Cecile
    r.headers_in['X-PCC-Group'] = "admin"
  elseif has_value_case_insensitive(member_of_profiles, "manager") then
    r:info(("Username %s authorized with PCC manager privileges"):format(sam_account_name, member_of_entry))

    r.headers_in['X-PCC-Group'] = "manager"
  elseif has_value_case_insensitive(member_of_profiles, "operator") then
    r:err(("Username %s authorized with PCC operator privileges"):format(sam_account_name, member_of_entry))

    r.headers_in['X-PCC-Group'] = "operator"
  elseif has_value_case_insensitive(member_of_profiles, "viewer") then
    r:err(("Username %s authorized with PCC viewer privileges"):format(sam_account_name, member_of_entry))

    r.headers_in['X-PCC-Group'] = "viewer"
  else
    r:err(("Username %s ActiveDirectory groups did not give ANY PCC privileges, it should have NOT been authorized to login"):format(sam_account_name))

    return 403
  end

  return apache2.OK
end

Which is called from config using:

LuaHookFixups /etc/apache2/snippets/active_directory_to_pcc_profile.lua authcheck_hook
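
(This assumes mod_lua is already enabled; on Debian that is something like 
"a2enmod lua", i.e. a LoadModule line of the form:

LoadModule lua_module /usr/lib/apache2/modules/mod_lua.so
)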



It seems to be working as expected, and is also working with my 
auth_form / cookie_session setup, which is great !


Any comment is welcome if you see something borked...


Regards, Adam.


[users@httpd] If statement against AUTHENTICATE_memberOf variable created by authnz_ldap

2024-01-25 Thread Adam Cécile

Hello,


I'm struggling with authnz_ldap configuration. What I'm trying to 
achieve is the following:


1. Authentication is done against Active Directory

2. The user's groups (memberOf) are retrieved and the X-PCC-Profile header is 
set depending on those groups:


- If ADMIN is listed in groups, profile is set to admin

- If OPERATOR is listed in groups, profile is set to operator

- If VIEWER is listed in groups, profile is set to viewer

- If none of the above groups is found, the X-PCC-Profile header is not set


I came up with a configuration that works, but it is ugly (imho) and I have 
to perform some fuzzy regex matching that I do not understand, so I'm 
requesting your advice.


Configuration is the following:


Require ldap-group CN=ALLOWED,OU=Groups,DC=domain,DC=internal

RequestHeader add X-PCC-User "%{AUTHENTICATE_sAMAccountName}e"

# For debugging purpose, dump AUTHENTICATE_memberOf variable to unused 
X-PCC-Groups header


RequestHeader add X-PCC-Groups "%{AUTHENTICATE_memberOf}e"

RewriteEngine on
RewriteCond %{ENV:AUTHENTICATE_memberOf} "(^|; )CN=VIEWER,OU=Groups,DC=domain,DC=internal" [NC]
RewriteRule ".*" - [E=PCC_PROFILE:viewer,NE,NS]

RewriteCond %{ENV:AUTHENTICATE_memberOf} "(^|; )CN=OPERATOR,OU=Groups,DC=domain,DC=internal" [NC]
RewriteRule ".*" - [E=PCC_PROFILE:operator,NE,NS]

RewriteCond %{ENV:AUTHENTICATE_memberOf} "(^|; )CN=VIEWER,OU=Groups,DC=domain,DC=internal" [NC]
RewriteRule ".*" - [E=PCC_PROFILE:admin,NE,NS]

RequestHeader add X-PCC-Profile "%{PCC_PROFILE}e" "expr=-n %{ENV:PCC_PROFILE}"



For the record, here is how the debugging X-PCC-Groups header is seen by 
next "hop" (in this POC, apache is proxy-passing to NGINX which is 
configured to log all headers):


2024/01/25 17:46:14 [debug] 10006#10006: *147 http header: 
"X-PCC-Groups: CN=ANOTHERGROUP,OU=Groups,DC=domain,DC=internal; 
CN=ALLOWED,OU=Groups,DC=domain,DC=internal; 
CN=VIEWER,OU=Groups,DC=domain,DC=internal"




So the first question is: is it normal that I have to use mod_rewrite to 
check for group membership? I tried hundreds of syntaxes with SetEnvIf 
or SetEnvIfExpr but never managed to get it working. I'm not sure why, 
but I guess it's somehow related to a "race condition" (lazy evaluation) 
while evaluating the environment variable, does that make sense?


The second question is: I cannot use "$" to make a proper regex match. If 
the group is not the last one, I can match it with ;.*$; if it is the 
last one, I should be able to match [...]DC=internal$, however that does 
not work. There is one unknown character and I have no idea what it 
is. Matching with DC=internal.?$ works, so that's one SINGLE char... Any 
idea?



Thanks in advance,

Best regards, Adam.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] pwauth to external server

2023-05-23 Thread Adam Weremczuk

Thank you Frank.

This is my entire gitweb config:

cat /etc/apache2/conf-available/gitweb.conf

  
    
  Define ENABLE_GITWEB
    
    
  Define ENABLE_GITWEB
    
  



  Alias /gitweb /usr/share/gitweb

    AddExternalAuth pwauth /usr/sbin/pwauth
    SetExternalAuthMethod pwauth pipe

  
    Options +FollowSymLinks +ExecCGI
    AddHandler cgi-script .cgi

    AuthName 'Enter your username and password'
    AuthType Basic
    AuthBasicProvider external
    AuthExternal pwauth
    Require valid-user

    Order Deny,Allow
    Satisfy any
    Deny from all
    Require valid-user

  


The configuration is working fine and authenticates users as expected.
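
(With the angle-bracket container directives that were stripped above restored, 
the file presumably follows the stock Debian gitweb.conf layout, roughly:

<IfModule mod_cgi.c>
  Define ENABLE_GITWEB
</IfModule>
<IfModule mod_cgid.c>
  Define ENABLE_GITWEB
</IfModule>

<IfDefine ENABLE_GITWEB>
  Alias /gitweb /usr/share/gitweb

  AddExternalAuth pwauth /usr/sbin/pwauth
  SetExternalAuthMethod pwauth pipe

  <Directory /usr/share/gitweb>
    Options +FollowSymLinks +ExecCGI
    AddHandler cgi-script .cgi
    # ... auth directives as listed above ...
  </Directory>
</IfDefine>
)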

What I don't understand is how Apache knows which server to consult 
for the credentials.


Just saying "external" surely shouldn't be enough without specifying an 
FQDN or IP, port number, etc., like you do with:


AuthLDAPURL ldap://

What am I missing here?

Regards,
Adam

On 18/05/2023 20:21, Frank Gingras wrote:

This comes to mind:

https://code.google.com/archive/p/mod-auth-external/wikis/AuthNZ.wiki

On Wed, May 17, 2023 at 12:48 PM Adam Weremczuk 
 wrote:


Hi all,

I run some old Bugzilla 3.6.11 (https://www.bugzilla.org) on SERVER1
(Debian 7 / Apache 2.2.22 / MySQL 5.5.31).

The following authentication works locally:

AuthType Basic
AuthPAM_Enabled on
AuthBasicAuthoritative off
AuthUserFile /dev/null

I have migrated Bugzilla to a modern stack on SERVER2 (Debian 11 /
Apache 2.4.56 / MariaDB 10.5.19) but struggle with authentication.

Is it possible to use pwauth to consult usernames/passwords on
SERVER1
from SERVER2 by IP?

What other authentication options do I have?

I would rather avoid doing things such as copying usernames and
passwords across.

Regards,
Adam


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org


[users@httpd] pwauth to external server

2023-05-17 Thread Adam Weremczuk

Hi all,

I run some old Bugzilla 3.6.11 (https://www.bugzilla.org) on SERVER1 
(Debian 7 / Apache 2.2.22 / MySQL 5.5.31).


The following authentication works locally:

AuthType Basic
AuthPAM_Enabled on
AuthBasicAuthoritative off
AuthUserFile /dev/null

I have migrated Bugzilla to a modern stack on SERVER2 (Debian 11 / 
Apache 2.4.56 / MariaDB 10.5.19) but struggle with authentication.


Is it possible to use pwauth to consult usernames/passwords on SERVER1 
from SERVER2 by IP?


What other authentication options do I have?

I would rather avoid doing things such as copying usernames and 
passwords across.


Regards,
Adam


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Enable both basic auth and form auth ?

2023-04-12 Thread Adam Cecile

Hello,


I'm looking for a way to enable both types of authentication, is that 
possible?



Regards


[users@httpd] logging SSL handshake failures

2020-06-19 Thread Adam Weremczuk

Hi all,

I'm running Apache 2.4.25 on Debian 9 and trying to debug SSL.

Even with LogLevel set to trace8, error.log doesn't produce exhaustive 
details when I e.g. try to connect using an older, unsupported protocol:


openssl s_client -connect www.mysite.com:443 -tls1

[Fri Jun 19 16:15:54.339546 2020] [ssl:info] [pid 11437] [client 
192.168.10.196:46016] AH01964: Connection to child 2 established (server 
www.mysite.com:443)
[Fri Jun 19 16:15:54.339631 2020] [ssl:trace2] [pid 11437] 
ssl_engine_rand.c(126): Seeding PRNG with 656 bytes of entropy
[Fri Jun 19 16:15:54.339705 2020] [ssl:trace3] [pid 11437] 
ssl_engine_kernel.c(1989): [client 192.168.10.196:46016] OpenSSL: 
Handshake: start
[Fri Jun 19 16:15:54.339721 2020] [ssl:trace3] [pid 11437] 
ssl_engine_kernel.c(1998): [client 192.168.10.196:46016] OpenSSL: Loop: 
before/accept initialization
[Fri Jun 19 16:15:54.339737 2020] [ssl:trace4] [pid 11437] 
ssl_engine_io.c(2135): [client 192.168.10.196:46016] OpenSSL: read 11/11 
bytes from BIO#5641ea41b3e0 [mem: 5641ea420a40] (BIO dump follows)
[Fri Jun 19 16:15:54.339740 2020] [ssl:trace7] [pid 11437] 
ssl_engine_io.c(2064): 
+-+
[Fri Jun 19 16:15:54.339744 2020] [ssl:trace7] [pid 11437] 
ssl_engine_io.c(2102): | : 16 03 01 00 81 01 00 00-7d 03 
01 }..  |
[Fri Jun 19 16:15:54.339745 2020] [ssl:trace7] [pid 11437] 
ssl_engine_io.c(2108): 
+-+
[Fri Jun 19 16:15:54.339747 2020] [ssl:trace3] [pid 11437] 
ssl_engine_kernel.c(2027): [client 192.168.10.196:46016] OpenSSL: Exit: 
error in SSLv2/v3 read client hello A
[Fri Jun 19 16:15:54.339751 2020] [ssl:info] [pid 11437] [client 
192.168.10.196:46016] AH02008: SSL library error 1 in handshake (server 
www.mysite.com:443)
[Fri Jun 19 16:15:54.339775 2020] [ssl:info] [pid 11437] SSL Library 
Error: error:14076102:SSL routines:SSL23_GET_CLIENT_HELLO:unsupported 
protocol
[Fri Jun 19 16:15:54.339779 2020] [ssl:info] [pid 11437] [client 
192.168.10.196:46016] AH01998: Connection closed to child 2 with 
abortive shutdown (server www.mysite.com:443)


It doesn't say, e.g., which protocol was attempted, the URL, the user agent, etc.

This type of info doesn't seem possible here according to:

http://httpd.apache.org/docs/trunk/mod/core.html#errorlogformat

Therefore I've attempted the following:

/etc/apache2/mods-available/ssl.conf


(...)
    ErrorLog /var/log/apache2/ssl_error.log
    LogLevel trace8
(...)
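
(Those lines sit inside a container that does not show above; in the stock Debian 
ssl.conf that would presumably be an IfModule block, roughly:

<IfModule mod_ssl.c>
    ErrorLog /var/log/apache2/ssl_error.log
    LogLevel trace8
</IfModule>
)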


But nothing is being logged to this file when I make various invalid SSL 
requests to the server.


All I get is:

[Fri Jun 19 16:39:12.156511 2020] [core:notice] [pid 11679] AH00094: 
Command line: '/usr/sbin/apache2'
[Fri Jun 19 16:39:12.156514 2020] [core:debug] [pid 11679] log.c(1546): 
AH02639: Using SO_REUSEPORT: yes (1)
[Fri Jun 19 16:39:12.156521 2020] [mpm_prefork:debug] [pid 11679] 
prefork.c(1032): AH00165: Accept mutex: fcntl (default: sysvsem)
[Fri Jun 19 16:39:12.156615 2020] [watchdog:debug] [pid 11686] 
mod_watchdog.c(563): AH02980: Watchdog: nothing configured?


with the last message being repeated.

Is it a false positive?

apache2ctl -M | grep watchdog
[Fri Jun 19 16:42:05.186631 2020] [core:trace3] [pid 11707] 
core.c(3289): Setting LogLevel for all modules to trace8
[Fri Jun 19 16:42:05.186778 2020] [core:trace3] [pid 11707] 
core.c(3289): Setting LogLevel for all modules to trace8

 watchdog_module (static)

How can I log details of SSL handshake failures?

Thanks,
Adam



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Breaking fcgi change between 2.4.25 and 2.4.38 ?

2020-04-21 Thread Adam Cécile

Hello,


I just upgraded a Debian server and ran into big trouble.

The current configuration uses Docker running PHP-FPM in different versions 
and usually has snippets like:



  SetHandler "proxy:fcgi://127.0.0.1:9004/"


Inside the FPM container, the www folder is mounted at the same place as on the 
host FS.



It works just fine with Apache 2.4.25 from Debian Stretch, but with 
2.4.38 from Debian Buster all I get is a 404 white page, which, if I 
understood correctly, means the .php file has not been found.


Rolling back the apache2 package to 2.4.25 makes the websites work again immediately.


I guess the issue is somehow related to SCRIPT_FILENAME or some 
environment variable not being passed correctly to FPM through FastCGI.


FPM is working as expected, as it works perfectly fine with the older 
httpd, and I also confirmed it is working by running:


SCRIPT_FILENAME=/var/www/vhosts/host1/test.php REQUEST_METHOD=GET 
cgi-fcgi -bind -connect 127.0.0.1:9004



Changing SCRIPT_FILENAME to something incorrect produces the same 404 
error, so I really think this is related.



Thanks in advance for your help,


Regards, Adam.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] How to deal with www and non-www domain names with one certificate?

2020-02-05 Thread Adam Powell
Hi Ed,
When I am setting up a server or virtual host, I start with this:
https://github.com/h5bp/html5-boilerplate/blob/master/dist/.htaccess

That's the configuration file from a project that is intended to jumpstart
web development projects and set smart defaults. It is well documented
particularly with inline comments.

I believe the default configuration is what you're describing but if you
need to make adjustments you can just comment or uncomment the appropriate
settings for your use case.

I then use Certbot <https://certbot.eff.org/> to generate an SSL
certificate that covers both the naked domain and the www subdomain (and
any other subdomains).

These 2 documents outline how to set up the configuration files:
https://httpd.apache.org/docs/2.4/configuring.html
https://httpd.apache.org/docs/2.4/sections.html

My suggestion would be to use the code in the HTML5 Boilerplate (the first
link) but do so in your main config file so that you avoid using
`.htaccess` files (and `AllowOverride` directives) altogether.
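
If all you need is the non-www plus HTTPS canonicalization, a plain redirect-only
vhost in the main config is often enough; a sketch (hostnames taken from your mail,
not tested against your setup):

<VirtualHost *:80>
    ServerName sierraprogress.org
    ServerAlias www.sierraprogress.org
    Redirect permanent / https://sierraprogress.org/
</VirtualHost>

The matching port-443 vhost then carries the certificate directives and redirects
the www name to the bare domain the same way.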

Another tool you might find useful is the Mozilla SSL configuration
generator <https://ssl-config.mozilla.org/>

It looks like you shared both the public and private encryption keys when
you included the SSL certificates so you should generate new
keys/certificates.

Hope that helps.

-Adam Powell
Founder, ADA First <https://www.adafirst.org/>

On Tue, Feb 4, 2020 at 1:03 PM edflecko .  wrote:

> I don't understand how to deal with forcing all connections to
> www.sierraprogress.org to simply sierraprogress.org , forcing all
> connections to my website with https , and using only one certificate per
> domain name?
>
> Here's my unique server information:
> CentOS 7
> Server version: Apache/2.4.41 (codeit)
> OpenSSL 1.1.1c
>
> 1.) Forcing all connections to www.domainname.com to domainname.com is
> best done with a rewrite rule, isn't it? I've found some examples online,
> but I don't know if one is better than the others?
>
> RewriteEngine On
> RewriteCond %{HTTPS} off [OR]
> RewriteCond %{HTTP_HOST} ^www\. [NC]
> RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
> RewriteRule ^ https://%1%{REQUEST_URI} [L,NE,R=301]
>
> RewriteEngine On
> RewriteBase /
> RewriteCond %{HTTP_HOST} ^([^.]+)\.sierraprogress\.org$ [NC]
> RewriteRule ^(.*)$ https://sierraprogress.org/$1 [R=301,L]
>
> RewriteEngine On
> RewriteCond %{HTTP_HOST} ^sierraprogress\.org$ [NC]
> RewriteRule ^ https://www.sierraprogress.org%{REQUEST_URI} [R=301,L]
>
> RewriteEngine on
> RewriteCond %{HTTP_HOST} ^www.sierraprogress.org
> RewriteRule (.*) https://sierraprogress.org/$1 [R=301,L]
>
> Since I want ALL websites that this server will host to remove the www AND
> be https connections, maybe the first example is best?
>
> Do I just place this code snippet in my httpd.conf file?
>
> 2.) Here's my sierraprogress.org.conf file:
>
> 
> ServerName sierraprogress.org
> ServerAlias www.sierraprogress.org
> DocumentRoot /var/www/sierraprogress.org/public_html
> 
> Options -Indexes +FollowSymLinks
> AllowOverride All
> 
> ErrorLog /var/www/sierraprogress.org/error.log
> CustomLog /var/www/sierraprogress.org/requests.log combined
> 
>
> 
> DocumentRoot /var/www/sierraprogress.org/public_html
> Protocols h2 h2c http/1.1
> ServerName sierraprogress.org
> ServerAlias www.sierraprogress.org
> 
> Options -Indexes +FollowSymLinks
> AllowOverride All
> 
> ErrorLog /var/www/sierraprogress.org/error.log
> CustomLog /var/www/sierraprogress.org/requests.log combined
> SSLEngine on
> SSLCertificateFile /etc/httpd/ssl/sierraprogress.crt
> SSLCertificateKeyFile /etc/httpd/ssl/sierraprogress.key
> SSLCipherSuite HIGH:!aNULL:!MD5
> 
>
> The one certificate I'm using ( sierraprogress.crt) works fine for
> sierraprogress.org connections but, of course, will NOT work for
> www.sierraprogress.org connections because of the domain name mis-match.
> I've also tried using a wildcard certificate for *.sierraprogress.org
> (see below), but I couldn't get that to work at all.
>
> Suggestions on how to handle these issues?
>
> Thank you for your time and suggestions!
> Ed
>
> Certificate Decoder - https://www.sslshopper.com/certificate-decoder.html
>
> -BEGIN CERTIFICATE-
>
> MIIIRTCCBy2gAwIBAgIRAOKGYmn0tkDkNQnJmw6pxPUwDQYJKoZIhvcNAQELBQAwdjELMAkGA1UE
>
> BhMCVVMxCzAJBgNVBAgTAk1JMRIwEAYDVQQHEwlBbm4gQXJib3IxEjAQBgNVBAoTCUludGVybmV0
>
> MjERMA8GA1UECxMISW5Db21tb24xHzAdBgNVBAMTFkluQ29tbW9uIFJTQSBTZXJ2ZXIgQ0EwHhcN
>
> MjAwMjAzMDAwMDAwWhcNMjIwMjAyMjM1OTU5WjCB9zELMAkGA1UEBhMCVVMxDjAMBgNVBBETBTk1
>
> ODExMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRMwEQYDVQQHEwpTYWNyYW1lbnRvMRMwEQYDVQQJEwpT
>
> dWl0ZS

[users@httpd] Consulting Support Resource(s)?

2019-12-11 Thread Adam Chace
Our team is looking for onsite (ideally) or offsite consulting support to 
investigate intermittent issues (502 gateway errors) in a web application.  
Please contact me if you’re interested, located in New England, USA.

Adam Chace (ach...@cainc.com)



Re: [users@httpd] Correct Apache cache keys on rewritten URL's

2019-03-30 Thread adam . vest
As I consider this to be undesirable behavior on the part of the Apache 
cache, I went ahead and opened a bug report on this matter about a week 
ago:

* https://bz.apache.org/bugzilla/show_bug.cgi?id=63282

I tried sifting through the source code to find where it was defining 
the key to see if I could propose a change, but my limited C coding (and 
coding in general) was quickly overwhelmed. The change, at least 
conceptually, seems simple enough - instead of using the rewritten URI 
for the cache key, use the original URI.


On 2019-03-19 12:10, adam.v...@vestfarms.com wrote:

Hello All,

I've been working on this for the past few days, attempting to find a
way to get the Apache mod_disk_cache module to properly store requests
that are rewritten. As I'm sure many of you are aware, it's fairly
common practice these days for modern CMS's to just rewrite everything
to index.php and let that handle things (so-called "pretty urls" or
"permalinks"). However, this rewrite renders Apache's cache useless
because this then means that ALL dynamic requests are cached under the
same "http://domain.com/index.php; key, which would mean that anyone
who would request, say, "http://domain.com/thing2; might get the
content of "http://domain.com/thing1; that is already in the cache.

I've been trying many different approaches to fixing this (from
attempting to detect when/if the URL was rewritten, to attempting to
dynamically set the CacheKeyBaseURL directive), but so far I've been
unable to come up with any way to resolve this issue. I feel like
dynamically setting the "CacheKeyBaseURL" holds the most promise if
this can actually be done, but then again fixing the Apache caching
function to better handle rewritten URI's would be ideal.

Anyone have any thoughts on how I might get this working? Other than
"don't use it" or "use something else" :) - I'd really like to use
Apache cache if possible.

Thanks in advance!

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Correct Apache cache keys on rewritten URL's

2019-03-19 Thread adam . vest

Hello All,

I've been working on this for the past few days, attempting to find a 
way to get the Apache mod_disk_cache module to properly store requests 
that are rewritten. As I'm sure many of you are aware, it's fairly 
common practice these days for modern CMS's to just rewrite everything 
to index.php and let that handle things (so-called "pretty urls" or 
"permalinks"). However, this rewrite renders Apache's cache useless 
because this then means that ALL dynamic requests are cached under the 
same "http://domain.com/index.php; key, which would mean that anyone who 
would request, say, "http://domain.com/thing2; might get the content of 
"http://domain.com/thing1; that is already in the cache.


I've been trying many different approaches to fixing this (from 
attempting to detect when/if the URL was rewritten, to attempting to 
dynamically set the CacheKeyBaseURL directive), but so far I've been 
unable to come up with any way to resolve this issue. I feel like 
dynamically setting the "CacheKeyBaseURL" holds the most promise if this 
can actually be done, but then again fixing the Apache caching function 
to better handle rewritten URI's would be ideal.


Anyone have any thoughts on how I might get this working? Other than 
"don't use it" or "use something else" :) - I'd really like to use 
Apache cache if possible.


Thanks in advance!

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] Redirect only a specific index.php page to new location

2018-01-21 Thread Adam Powell
Have you read this document?
https://httpd.apache.org/docs/trunk/rewrite/avoid.html

-Adam Powell

> On Jan 19, 2018, at 3:33 AM, Luca Toscano <toscano.l...@gmail.com> wrote:
> 
> Hi Kory,
> 
> 2018-01-18 5:53 GMT+01:00 Kory Wheatley <kory.wheat...@gmail.com>:
>> When someone types to go to http://sftpinterface/deptblogs/  or a link I 
>> need it to redirect to http://intranet/template_departments.cfm.  Which I 
>> was able to accomplish in the index.php header content with
>> <?php
>> /* Redirect browser */
>> header("Location: http://intranet/template_departments.cfm");
>> 
>> /* Make sure that code below does not get executed when we redirect. */
>> exit;
>> ?>
>> 
>> But the problem is all pages underneath http://sftpinterface/deptblogs  
>> redirect to  http://intranet/template_departments.cfm.  Like I don't want 
>> http://sftpinterface/deptblogs/nursing to be redirected to 
>> http://intranet/template_departments.cfm.  I want it to stay on that page 
>> along with the others.  Only http://sftpinterface/deptblogs or 
>> http://sftpinterface/deptblogs/index.php  needs to be redirect to  
>> http://intranet/template_departments.cfm and not the sub directory sites 
>> underneath /deptblogs.  What's the possible way of doing this.
>> 
> 
> have you checked 
> https://httpd.apache.org/docs/current/mod/mod_alias.html#redirectmatch ?
> 
> Luca


Re: [users@httpd] slower https transfer speeds compared with rsync/smb/sftp

2018-01-11 Thread Adam Teale
I have tried setting SendBufferSize to many different values but it
doesn't affect the transfer speed.

mod_info:

Current Configuration:
In file: /Library/Server/Web/Config/apache2/httpd_server_app.conf
 460: StartServers 4
 461: MinSpareServers 3
 462: MaxSpareServers 10
 463: ServerLimit 256
 464: MaxClients 256
 466: SendBufferSize 1042560

2018-01-11 10:42 GMT-03:00 Adam Teale <a...@believe.tv>:

> Hi Yann! Thanks for your help.
>
> The value for sendspace is "1042560"
> ​What does that value mean? Is that how many bytes per packet​ can be sent
> at a time?
>
> ​At that current setting the downloads sit at around 34MB/second.
>
> I appreciate your help, cheers!
>
> Adam​
>
>
> 2018-01-11 10:22 GMT-03:00 Yann Ylavic <ylavic@gmail.com>:
>
>> On Thu, Jan 11, 2018 at 1:46 PM, Adam Teale <a...@believe.tv> wrote:
>> > Hey Eric thanks for letting me know about SendBufferSize, looking into
>> it
>> > now.
>> > Any idea how to see what it currently defaults to via a command?
>>
>> It defaults to the value of the system, so possibly on Mac OS the
>> value of "sysctl net.inet.tcp.sendspace"?
>>
>> Regards,
>> Yann.
>>
>> -
>> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
>> For additional commands, e-mail: users-h...@httpd.apache.org
>>
>>
>


Re: [users@httpd] slower https transfer speeds compared with rsync/smb/sftp

2018-01-11 Thread Adam Teale
Hi Yann! Thanks for your help.

The value for sendspace is "1042560"
​What does that value mean? Is that how many bytes per packet​ can be sent
at a time?

​At that current setting the downloads sit at around 34MB/second.

I appreciate your help, cheers!

Adam​


2018-01-11 10:22 GMT-03:00 Yann Ylavic <ylavic@gmail.com>:

> On Thu, Jan 11, 2018 at 1:46 PM, Adam Teale <a...@believe.tv> wrote:
> > Hey Eric thanks for letting me know about SendBufferSize, looking into it
> > now.
> > Any idea how to see what it currently defaults to via a command?
>
> It defaults to the value of the system, so possibly on Mac OS the
> value of "sysctl net.inet.tcp.sendspace"?
>
> Regards,
> Yann.
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


Re: [users@httpd] slower https transfer speeds compared with rsync/smb/sftp

2018-01-11 Thread Adam Teale
Hey Eric thanks for ​letting me know about SendBufferSize, looking into it
now.
Any idea how to see what it currently defaults to via a command?

2018-01-08 16:49 GMT-03:00 Eric Covener <cove...@gmail.com>:

> On Mon, Jan 8, 2018 at 1:46 PM, Adam Teale <a...@believe.tv> wrote:
> > Hi everyone,
> >
> > Firstly I hope this is an appropriate question for this list.
> >
> > I am looking into what could be causing decreased speeds when
> > uploading/downloading files to our apache server via https.
> >
> > Can anyone suggest why upload/download transfers speeds in a web browser
> > (Firefox, Chrome, Safari) via https sustain between 30-35MB/sec where as
> > file transfers via rsync/smb/sftp sit between 90-110MB/sec (from the same
> > server)?
> >
> > I have just tested http upload/download in the opposite direction
> (running
> > apache server on the client machine via httpd-userdir) and the transfer
> > rates are 110MB/sec
> >
> > Are there settings that can be adjusted in Apache or perhaps some system
> > files?
> >
> > We are running Mac OS 10.12 and Mac OS Server 5.2 (Apache 2.4).
> >
> > Any feedback would be greatly appreciated. Thanks!
> >
>
> Tried tweaking SendBufferSize?
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


[users@httpd] slower https transfer speeds compared with rsync/smb/sftp

2018-01-08 Thread Adam Teale
Hi everyone,

Firstly I hope this is an appropriate question for this list.

I am looking into what could be causing decreased speeds when
uploading/downloading files to our apache server via https.

Can anyone suggest why upload/download transfer speeds in a web browser
(Firefox, Chrome, Safari) via https sustain between 30-35MB/sec whereas
file transfers via rsync/smb/sftp sit between 90-110MB/sec (from the same
server)?

I have just tested http upload/download in the opposite direction (running
apache server on the client machine via httpd-userdir) and the transfer
rates are 110MB/sec

Are there settings that can be adjusted in Apache or perhaps some system
files?

We are running Mac OS 10.12 and Mac OS Server 5.2 (Apache 2.4).

Any feedback would be greatly appreciated. Thanks!


Adam


Re: [users@httpd] if statement and ssl directives (apache 2.4)

2017-12-06 Thread Adam Cecile

Hi,

Well, it depends who's editing the file. Some people are used to modifying 
just the first block and ignoring the following ones. You know what I mean ;-)
This is the reason why I'm trying to turn these Apache configurations the 
"nginx way": fewer blocks, fewer lines, fewer mistakes made.


Adam.

On 12/06/2017 10:56 AM, Gillis J. de Nijs wrote:

Hi Adam,

Simplest, in my opinion, is whatever is easiest for a human to read and parse.

What's wrong with:

## One VirtualHost that does everything

    ServerName www.comptoir-hardware.com


    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/comptoir-hardware.com.crt
    SSLCertificateKeyFile /etc/ssl/private/comptoir-hardware.com.key
    SSLCACertificateFile /etc/ssl/certs/comptoir-hardware.com.ca


    DocumentRoot ...


## Redirect to main VirtualHost

    ServerName new.comptoir-hardware.com

    ServerAlias comptoir-hardware.com
    ServerAlias comptoir.co
    ServerAlias www.comptoir.co

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/comptoir-hardware.com.crt
    SSLCertificateKeyFile /etc/ssl/private/comptoir-hardware.com.key
    SSLCACertificateFile /etc/ssl/certs/comptoir-hardware.com.ca


    Redirect / https://www.comptoir-hardware.com/


## Redirect http to https main VirtualHost

    ServerName www.comptoir-hardware.com
    ServerAlias new.comptoir-hardware.com

    ServerAlias comptoir-hardware.com
    ServerAlias comptoir.co
    ServerAlias www.comptoir.co

    Redirect / https://www.comptoir-hardware.com/


Cheers,
Gillis

On Wed, Dec 6, 2017 at 10:10 AM, Adam Cecile <acec...@le-vert.net> wrote:


Hi,

I'm trying to achieve a simpler vhost configuration using if
statements but httpd refuses to start when I put SSL-related
directives inside the if block:



  ServerName www.comptoir-hardware.com
  ServerAlias www.comptoir-hardware.com
  ServerAlias new.comptoir-hardware.com
  ServerAlias comptoir.co
  ServerAlias www.comptoir.co

  
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/comptoir-hardware.com.crt
    SSLCertificateKeyFile /etc/ssl/private/comptoir-hardware.com.key
    SSLCACertificateFile  /etc/ssl/certs/comptoir-hardware.com.ca
  

  
    RedirectMatch (.*) http://www.comptoir-hardware.com$1
  




Can you confirm there's a way to do what I want ? Can you see
what's wrong ?

Thanks in advance,


Adam.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org






[users@httpd] if statement and ssl directives (apache 2.4)

2017-12-06 Thread Adam Cecile

Hi,

I'm trying to achieve a simpler vhost configuration using if statements 
but httpd refuses to start when I put SSL-related directives inside the 
if block:




  ServerName www.comptoir-hardware.com
  ServerAlias www.comptoir-hardware.com
  ServerAlias new.comptoir-hardware.com
  ServerAlias comptoir.co
  ServerAlias www.comptoir.co

  
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/comptoir-hardware.com.crt
    SSLCertificateKeyFile /etc/ssl/private/comptoir-hardware.com.key
    SSLCACertificateFile  /etc/ssl/certs/comptoir-hardware.com.ca
  

  
    RedirectMatch (.*) http://www.comptoir-hardware.com$1
  




Can you confirm there's a way to do what I want ? Can you see what's wrong ?

Thanks in advance,


Adam.


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] SSI conditionals AFTER apache auth?

2017-11-13 Thread Adam Vest

Evening everyone,

I'm trying to make it so that only certain elements on a web page are 
visible to logged-in users, and are otherwise not displayed, using 
mod_include flow control. 
The only way I've been able to do that so far is to detect the cookie 
that the apache auth sets, which sort of works. Of course, if I just 
manually set the cookie in my browser then the stuff shows anyway and just 
confuses the whole setup I put together. I know from reading the docs that 
%{REMOTE_USER} isn't exposed to these conditionals due to the order of 
operations. However, it'd be super-cool if it WERE. I know that filter 
processing can be tweaked to a degree with mod_filter, so I'm wondering 
if I can instruct apache to process authentication ahead of mod_include? 
I couldn't find anything directly saying yes or no, so figured I'd see 
if anyone on here knew one way or the other, or see if anyone had any 
other suggestions for accomplishing what I'm looking for.


Appreciate your help!



Re: [users@httpd] SSI conditionals not accepting "||" or "&&"

2017-11-04 Thread Adam Vest
Worked a charm. Thank you so much for your help!


On 11/04/2017 12:55 PM, Eric Covener wrote:
>> 
> Try 1 string after expr= only:
> #if expr="%{..} !~ ... && %{...} !~..."
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>


-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] SSI conditionals not accepting "||" or "&&"

2017-11-04 Thread adam . vest

Good afternoon all,

I've been tinkering with setting up SSI's in some HTML of mine, and  
one thing I'm trying to do is have the server decide if it should post  
a signup link, or display a logged-in user's name (using perl cgi  
includes). Using this kind of works:



    Sign up here!

    


The problem with this is that, after the user signs in, they still see  
the sign up text until they refresh their browser (assuming the  
session cookie isn't seen immediately after sign-in). So, I tweaked a  
little to try and compensate (by showing the sign up text if they're  
not already on the client's page, which requires login):




    Sign up here!

    


However, this always results in the sign up text showing. The Apache  
docs[1] indicate that all expressions are supported here, but, unless  
my syntax is wrong, that doesn't look to be the case.


Am I missing something? Is my syntax wrong? Appreciate any help!


Links:
--
[1] https://httpd.apache.org/docs/current/howto/ssi.html


Re: [users@httpd] adding footer to all web pages

2017-05-25 Thread Adam Powell
Google Analytics, by default, only tracks pages by path.

This means that if you add a Google Analytics snippet to all virtual hosts, visitors
to the home page of each site will be almost impossible to distinguish from
each other in the reports... there are ways around this, but you should be
aware of it.

I believe you'll want to review the documentation for server side includes
(SSI).
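
A minimal server-wide SSI setup would be along these lines (a sketch; the directory
and the idea of a shared footer snippet are assumptions, not your actual config):

<Directory /var/www>
    Options +Includes
    AddOutputFilter INCLUDES .html
</Directory>

with each page pulling the shared footer in through an SSI include directive.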

Adam Powell
http://www.adaminfinitum.com


On Thu, May 25, 2017 at 9:32 PM, Rose, John B <jbr...@utk.edu> wrote:

> If we wanted to add a Google Analytics footer to all pages on our server,
> meaning all virtual hosts, what is the best way to do that via Apache
> without having to touch the individual web sites?
>


Re: [users@httpd] Suggestion/Question about HTTP & HTTPS configurations

2017-05-20 Thread Adam Powell
Hi Daniel,

Thanks for trying to help but maybe I didn't explain this well enough.

Debian uses "Include" by default because of it's built-in `a2ensite`
shortcut.

Even with the Include (as your code illustrates) there needs to be a
Virtual Host configuration block for HTTP on port 80 and for HTTPS on port
443.

Unless specifically configured differently, why not assume they are the
same (as HTTP/port 80 for a matching Virtual Host)?

I hope that helps clarify.

Adam Powell
http://www.adaminfinitum.com


On Sat, May 20, 2017 at 6:05 AM, Daniel <dferra...@gmail.com> wrote:

> There is a directive called "Include"
>
> With this directive you can specify any number of directives in a file
> and then define the Include pointing to the same file wherever you may
> need.
>
> For instance
>
> 
> Include conf/common.conf
> 
>
> 
> SSLEngine on
> SSLCertificatefile conf/x509.crt
> SSLCertitificateKeyFile conf/rsa.key
> Include conf/common.conf
> 
>
> and common.conf can have:
> ServerName myserver.exam.com
> DocumentRoot /var/www
> DirectoryIndex index.html
> FallbackResource /index.html
> Redirect /one/ /two/
> Header set myheader "Hello"
> # and all directives you may need.
>
>
>
>
> 2017-05-20 2:53 GMT+02:00 Adam Powell <a...@adaminfinitum.com>:
> > Hello,
> >
> > I am a user of Apache in the sense that I install it, configure it and
> run
> > it to host sites...I'm hoping this is the correct list to send this to.
> >
> > Anyway, I recently did my first "from scratch" Apache install, build and
> > configuration in a cloud server (I had always used cPanel & WHM before).
> >
> > My suggestion is that Apache should "assume" that port 80 for HTTP and
> port
> > 443 for HTTPS and that they both serve the same content.
> >
> > I'm not suggesting people shouldn't be able to customize it, but adding
> > duplicate and redundant directives for each Virtual Host for HTTP and
> HTTPS
> > seems unneeded.
> >
> > In short, I'm suggesting a "smart default" that in the absence of a
> specific
> > Virtual Host configuration for HTTPS, just assumes that the HTTPS matches
> > the HTTP config for that Virtual Host.
> >
> > Background: I got Apache (2.4.x) up and running on a Debian VM,
> configured
> > all my Virtual Hosts, installed an SLL certificate and went to view the
> > HTTPS version of a site.
> >
> > I was redirected to the 'default' page for the server (not the default
> page
> > for the Virtual Host).
> >
> > I then realized I needed additional, identical rules for that Virtual
> Host
> > for HTTPS on port 443...simply put, it seems like that extra level of
> > configuration shouldn't be required...that it should work that way
> > automagically unless specifically configured otherwise.
> >
> > If not, I'd love to know why that's a bad idea.
> >
> > Thanks!
> >
> > Adam Powell
> > http://www.adaminfinitum.com
> >
>
>
>
> --
> Daniel Ferradal
> IT Specialist
>
> email dferradal at gmail.com
> linkedin es.linkedin.com/in/danielferradal
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


[users@httpd] Suggestion/Question about HTTP & HTTPS configurations

2017-05-19 Thread Adam Powell
Hello,

I am a user of Apache in the sense that I install it, configure it and run
it to host sites...I'm hoping this is the correct list to send this to.

Anyway, I recently did my first "from scratch" Apache install, build and
configuration in a cloud server (I had always used cPanel & WHM before).

My suggestion is that Apache should "assume" that port 80 for HTTP and port
443 for HTTPS and that they both serve the same content.

I'm not suggesting people shouldn't be able to customize it, but adding
duplicate and redundant directives for each Virtual Host for HTTP and HTTPS
seems unneeded.

In short, I'm suggesting a "smart default" that in the absence of a
specific Virtual Host configuration for HTTPS, just assumes that the HTTPS
matches the HTTP config for that Virtual Host.

Background: I got Apache (2.4.x) up and running on a Debian VM, configured
all my Virtual Hosts, installed an SSL certificate and went to view the
HTTPS version of a site.

I was redirected to the 'default' page for the server (not the default page
for the Virtual Host).

I then realized I needed additional, identical rules for that Virtual Host
for HTTPS on port 443...simply put, it seems like that extra level of
configuration shouldn't be required...that it should work that way
automagically unless specifically configured otherwise.

If not, I'd love to know why that's a bad idea.

Thanks!

Adam Powell
http://www.adaminfinitum.com


Re: [users@httpd] XSS Issue in v2.0.59

2017-05-02 Thread Adam R. Vest
Hey, I don't have any input on how to address those vulnerabilities, but I 
think the energy you're going to expend trying to patch those would be put to 
better use trying to fix whatever's incompatible with newer versions of apache 
so you can upgrade.

Just my two cents. Good luck either way.

On May 1, 2017 11:24:01 PM EDT, "Hagan, Mark "  wrote:
>Hello All,
>
>Looking for some help to determine if I can configure Apache 2.0.59 to
>address a couple Cross Site Scripting (XSS) vulnerabilities. I'm not
>able to upgrade to a later version, so I'm trying to understand if
>there is functionality within this version to address the XSS issue.
>
>
>I have 2 specific issues:
>
>1. Validating input (whitelisting acceptable characters)
>
>2. Sanitizing or encoding output (For instance, the character < would
>be encoded as &lt; which would be displayed by the browser as the
>"less-than" character instead of being interpreted as the start
>of an HTML tag.)
>
>
>I am not an experienced apache administrator, so any help would be most
>appreciated.
>
>
>Thanks.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [users@httpd] Apache rewrite for redirect local directoryes

2017-02-13 Thread Adam R. Vest
Change the virtualhosts document root?

  * https://httpd.apache.org/docs/2.4/mod/core.html#documentroot
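
e.g., in the vhost, using the target path from the mail below:

    DocumentRoot /opt/www/curso.intranet/st1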


On 02/13/2017 02:00 PM, Rodrigo Cunha wrote:
> Dears,
> i need to redirect all the requests for my website to another directory.
> My directory is: /opt/www/curso.intranet/public_html
> The new directory is: /opt/www/curso.intranet/st1
> what is better solution for this problem?
>
> I want setting this in conf or in the .htaccess.
>
> -- 
> Atenciosamente,
> Rodrigo da Silva Cunha
> São Gonçalo, RJ - Brasil
>



[users@httpd] Solved - WebSocket connections, reverse proxy, MacOS Server 5.2, django channels

2017-01-12 Thread Adam Teale
the proxy submodules are included in
the configuration using LoadModule.


I had read of many people finding a solution at least with Apache on Ubuntu
or a custom install on MacOS.
I even tried installing Apache using Brew and when that didn't work I
almost proceeded to install nginx.

After countless hours/days of googling I reached to the Apache mailing list
for some help with this error. Yann Ylavic was very generous with his time
and offered me various ideas on how to get it going. After trying the
following:

SetEnvIf Request_URI ^/chat/stream/ is_websocket
RequestHeader set Upgrade WebSocket env=is_websocket
ProxyPass /chat/stream/ ws://myserver.local:8001/chat/stream/


I noticed that the interface server on port 8001 Daphne was starting to
receive ws connections!
However in the client browser it was logging:

"Error during WebSocket handshake: 'Upgrade' header is missing"


From what I could see, mod_dumpio was logging that the "Connection: Upgrade"
and "Upgrade: WebSocket" headers were being sent as part of the web socket
handshake:

mod_dumpio:  dumpio_in (data-HEAP): *HTTP/1.1 101 Switching
Protocols*\r\nServer:
AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection:
Upgrade\r\nSec-WebSocket-Accept: 17WYrMeMS8a4ImHpU0gS3/k0+Cg=\r\n\r\n
mod_dumpio.c(164): [client 127.0.0.1:63944] mod_dumpio: dumpio_out
mod_dumpio.c(58): [client 127.0.0.1:63944] mod_dumpio:  dumpio_out
(data-TRANSIENT): 160 bytes
mod_dumpio.c(100): [client 127.0.0.1:63944] mod_dumpio:  dumpio_out
(data-TRANSIENT): HTTP/1.1 101 Switching Protocols\r\nServer:
AutobahnPython/0.17.1\r\*nUpgrade: WebSocket\r\nConnection: Upgrade*
\r\nSec-WebSocket-Accept: 17WYrMeMS8a4ImHpU0gS3/k0+Cg=\r\n\r\n

However the client browser showed nothing in the response headers.

I was more stumped than ever.

I explored the client-side jQuery framework as well as the Django
channels & autobahn module to see if perhaps something was amiss, and then
revised my own app and various combinations of suggestions about Apache and
its modules. But nothing stood out to me.

Then I reread the ReadMe.txt inside the apache2 dir:
/Library/Server/Web/Config/apache2/ReadMe.txt

​"​
Special notes about the web proxy architecture in Server application 5.0:

This version of Server application contains a revised architecture for all
HTTP-based services. In previous versions there was a single instance of
httpd acting as a reverse proxy for Wiki, Profile, and Calendar/Address
services, and also acting as the Websites service.
 With this version, there is a major change: A single instance of httpd
runs as a reverse proxy, called the Service Proxy, and several additional
instances of httpd run behind that proxy to support specific HTTP-based
services, including an instance for the Websites service.

Since the httpd instance for the Websites service is now behind a reverse
proxy, or Service Proxy, note the following:
​...
 It is only the external Service Proxy httpd instance that listens on TCP
ports 80 and 443; it proxies HTTP requests and responses to Websites and
 other HTTP-based services.
...
"

I wondered if this ServiceProxy had something to do with it. I had a look
over:
/Library/Server/Web/Config/Proxy/apache_serviceproxy.conf
and noticed a comment - "# The user websites, and webdav"​.
I figured it wouldn't hurt to try adding the proxypass definitions &
rewrite rules that people had suggested on the forums as their solution.

   ProxyPass / http://localhost:8001/
   ProxyPassReverse / http://localhost:8001/

   RewriteEngine on
   RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
   RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
   RewriteRule .* ws://localhost:8001%{REQUEST_URI} [P]

​Sure enough after restarting the ServiceProxy it all started to work!​
---


Adam


Re: [users@httpd] HTTP 401: Apache strips out response headers

2017-01-11 Thread Adam Teale
Marco I think I am experiencing this too.

I am using apache to reverse proxy to our app that handles the web sockets
/ chat.

As far as I can tell from mod_dumpio's logging apache returns the correct
response headers to the client - particularly:


[Wed Jan 11 09:31:43.807204 2017] [dumpio:trace7] [pid 8091]
mod_dumpio.c(100): [remote 192.168.1.136:8001] mod_dumpio:  dumpio_in
(data-HEAP): HTTP/1.1 101 Switching Protocols\r\nServer:
AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection:
Upgrade\r\nSec-WebSocket-Accept:
9+mE2HjR58djdFt7E0WxNbqemsM=\r\n\r\n\x81\x10{"accept": true}


[Wed Jan 11 09:31:43.807235 2017] [dumpio:trace7] [pid 8091]
mod_dumpio.c(100): [client 127.0.0.1:51749] mod_dumpio:  dumpio_out
(data-TRANSIENT): HTTP/1.1 101 Switching Protocols\r\nServer:
AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection:
Upgrade\r\nSec-WebSocket-Accept:
9+mE2HjR58djdFt7E0WxNbqemsM=\r\n\r\n\x81\x10{"accept": true}



However according to Firefox, Safari & Chrome those headers aren't there:
"Error during WebSocket handshake: 'Upgrade' header is missing"

I'm looking forward to getting to the bottom of this





2017-01-10 17:57 GMT-03:00 Marco Pizzoli :

> Hi all,
> I am reverse proxying a backend which returns a http code 401.
> I see Apache is stripping out all the http headers returned by the backend
> along with the 401 and this is causing trouble to the client application.
>
> Is there a way to get the original http headers to the client?
>
> I can't find any documentation about this. Not even a reference to the
> fact Apache is re-writing the answer, but I checked myself and now I am
> sure of this behaviour
>
> Apache currently used for this setup is 2.2. Migration to 2.4 is planned
> in the coming motnhs.
>
> Thank you in advance for your help
> Marco
>


Re: [users@httpd] HTTP 401: Apache strips out response headers

2017-01-11 Thread Adam Teale
We are using apache 2.4.23

2017-01-11 9:33 GMT-03:00 Adam Teale <a...@believe.tv>:

> Marco I think I am experiencing this too.
>
> I am using apache to reverse proxy to our app that handles the web sockets
> / chat.
>
> As far as I can tell from mod_dumpio's logging apache returns the correct
> response headers to the client - particularly:
>
>
> [Wed Jan 11 09:31:43.807204 2017] [dumpio:trace7] [pid 8091]
> mod_dumpio.c(100): [remote 192.168.1.136:8001] mod_dumpio:  dumpio_in
> (data-HEAP): HTTP/1.1 101 Switching Protocols\r\nServer:
> AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection:
> Upgrade\r\nSec-WebSocket-Accept: 
> 9+mE2HjR58djdFt7E0WxNbqemsM=\r\n\r\n\x81\x10{"accept":
> true}
>
>
> [Wed Jan 11 09:31:43.807235 2017] [dumpio:trace7] [pid 8091]
> mod_dumpio.c(100): [client 127.0.0.1:51749] mod_dumpio:  dumpio_out
> (data-TRANSIENT): HTTP/1.1 101 Switching Protocols\r\nServer:
> AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection:
> Upgrade\r\nSec-WebSocket-Accept: 
> 9+mE2HjR58djdFt7E0WxNbqemsM=\r\n\r\n\x81\x10{"accept":
> true}
>
>
>
> However according to Firefox, Safari & Chrome those headers aren't there:
> "Error during WebSocket handshake: 'Upgrade' header is missing"
>
> I'm looking forward to getting to the bottom of this
>
>
>
>
>
> 2017-01-10 17:57 GMT-03:00 Marco Pizzoli <marco.pizz...@gmail.com>:
>
>> Hi all,
>> I am reverse proxying a backend which returns a http code 401.
>> I see Apache is stripping out all the http headers returned by the
>> backend along with the 401 and this is causing trouble to the client
>> application.
>>
>> Is there a way to get the original http headers to the client?
>>
>> I can't find any documentation about this. Not even a reference to the
>> fact Apache is re-writing the answer, but I checked myself and now I am
>> sure of this behaviour
>>
>> Apache currently used for this setup is 2.2. Migration to 2.4 is planned
>> in the coming motnhs.
>>
>> Thank you in advance for your help
>> Marco
>>
>
>


Re: [users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2017-01-05 Thread Adam Teale
Hi Yann thanks again for all your help.

I am still trying to work out why the client isn't receiving the Upgrade
header in the response.
I see a proxy debug log:

[Thu Jan 05 11:04:24.002173 2017] [proxy:debug] [pid 65956]
proxy_util.c(3754): (54)Connection reset by peer: [client 127.0.0.1:51776]
AH03308: ap_proxy_transfer_between_connections: error on sock -
ap_get_brigade

[Thu Jan 05 11:04:24.002356 2017] [proxy:debug] [pid 65956]
proxy_util.c(2169): AH00943: WS: has released connection for (alfred.local)

[Thu Jan 05 11:04:28.001195 2017] [proxy:debug] [pid 65946]
proxy_util.c(3754): (54)Connection reset by peer: [client 127.0.0.1:51780]
AH03308: ap_proxy_transfer_between_connections: error on sock -
ap_get_brigade

[Thu Jan 05 11:04:28.001369 2017] [proxy:debug] [pid 65946]
proxy_util.c(2169): AH00943: WS: has released connection for (alfred.local)

Any idea what that might be about and if it is related?


2017-01-05 6:55 GMT-03:00 Yann Ylavic <ylavic@gmail.com>:

> On Thu, Jan 5, 2017 at 10:36 AM, Yann Ylavic <ylavic@gmail.com> wrote:
> >
> > For the record (after private discussion with Adam), it seems that a
> > configuration like the below would work for http(s) and ws(s) on the
> > same URL:
> >
> >   RewriteEngine on
> >   RewriteCond %{HTTP:Upgrade} "(?i)websocket"
> >   RewriteRule ^/(.*)$ wss://backend/$1 [P]
> >   ProxyPass / https://backend/
>
> *But* note that having both HTTP(s) and WS(s) on the same URL it is
> *not* recommended, mainly for security reasons.
>
> While mod_proxy_http is a strict HTTP protocol validator,
> mod_proxy_wstunnel is only a tunnel (a TCP proxy) once the upgrade is
> asked by the client/browser).
>
> So with the above configuration a simple Upgrade header in the request
> would open a tunnel with backend, including for "normal" HTTP traffic.
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


Re: [users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2016-12-28 Thread Adam Teale
Yann & Eric thank you for getting back to me.

As far as I know the "normal" http traffic is via "/chat/" (well at least
that is visited page url).

When that page loads it connects to "/chat/stream" via the wss:// protocol
using a javascript library called "reconnecting-websocket".

So what I have expected to happen is that proxypass would catch the urls
that contain /chat/stream and proxy them to the app running on
localhost:8000 via the wss:// protocol - all other http/s traffic handled
by the app running via mod_wsgi.

I have just tried:
ProxyPass /chat/ https://localhost:8000/chat/
ProxyPass /chat/stream wss://localhost:8000/chat/stream

Now Apache just sits there and nothing loads -

[Wed Dec 28 12:41:14.523727 2016] [proxy:debug] [pid 45890]
mod_proxy.c(1198): [client 127.0.0.1:62404] AH01143: Running scheme https
handler (attempt 0)

[Wed Dec 28 12:41:14.523739 2016] [proxy:debug] [pid 45890]
proxy_util.c(2154): AH00942: HTTPS: has acquired connection for (localhost)

[Wed Dec 28 12:41:14.523749 2016] [proxy:debug] [pid 45890]
proxy_util.c(2208): [client 127.0.0.1:62404] AH00944: connecting
https://localhost:8000/chat/ to localhost:8000

[Wed Dec 28 12:41:14.523777 2016] [proxy:debug] [pid 45890]
proxy_util.c(2417): [client 127.0.0.1:62404] AH00947: connected /chat/ to
localhost:8000

[Wed Dec 28 12:41:14.523946 2016] [proxy:debug] [pid 45890]
proxy_util.c(2798): (61)Connection refused: AH00957: HTTPS: attempt to
connect to [::1]:8000 (localhost) failed

[Wed Dec 28 12:41:14.524222 2016] [proxy:debug] [pid 45890]
proxy_util.c(2807): AH02824: HTTPS: connection established with
127.0.0.1:8000 (localhost)

[Wed Dec 28 12:41:14.524266 2016] [proxy:debug] [pid 45890]
proxy_util.c(2975): AH00962: HTTPS: connection complete to [::1]:8000
(localhost)

[Wed Dec 28 12:41:14.524283 2016] [ssl:info] [pid 45890] [remote
127.0.0.1:8000] AH01964: Connection to child 0 established (server
alfred.local:80)


*Browser*
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /chat/.
Reason: Error reading from remote server














2016-12-28 11:57 GMT-03:00 Eric Covener <cove...@gmail.com>:

> On Wed, Dec 28, 2016 at 9:53 AM, Eric Covener <cove...@gmail.com> wrote:
> > On Tue, Dec 27, 2016 at 8:39 AM, Adam Teale <a...@believe.tv> wrote:
> >> Hi!
> >>
> >> I've been trying to setup a reverse proxy to a localhost websocket url.
> >>
> >> ProxyPass /chat/stream/ wss://localhost:8000/chat/stream/
> >> ProxyPassReverse /chat/stream/ wss://localhost:8000/chat/stream/
> >>
> >> I get an error in the apache error_log that reads:
> >>
> >> No protocol handler was valid for the URL /chat/stream/. If you are
> using a
> >> DSO version of mod_proxy, make sure the proxy submodules are included
> in the
> >> configuration using LoadModule.
> >>
> >> I have read a lot of pages via google of people using this method so I
> >> wonder if there is some issue in our setup/install of Apache that ships
> with
> >> Mac OS X 10.11 & Server.app 5.2?
> >>
> >> I have all the standard modules loaded in httpd_server_app.conf
> >>
> >> LoadModule proxy_module libexec/apache2/mod_proxy.so
> >> LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
> >> LoadModule proxy_wstunnel_module libexec/apache2/mod_proxy_wstunnel.so
> >>
> >> When I access the application running on localhost:8000 directly on the
> >> server everything works fine
> >>
> >> Any ideas what could be going on?
> >
> > There is a bug in this area, but you need to decide what you expect to
> > happen with non-websockets requests to /chat/stream/ which is what's
> > happening here.
> >
> > If you intend to proxy it, you might need to change the LoadModule
> > order of mod_proxy_http and mod_proxy_wstunnel to try to get a
> > different order at runtime.
> >
> > If you expect to satisfy it somehow else... you have a bit of a
> > puzzler.  I'm not sure there's a good recipe for this.  Otherwise as
> > Yann said, you should use different URLs if you can.
>
> Yann pointed out that the sharing is doomed for other reasons, so
> disregard the above!
>
> --
> Eric Covener
> cove...@gmail.com
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


Re: [users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2016-12-28 Thread Adam Teale
more detail from the error log:

[Wed Dec 28 09:31:05.974520 2016] [remoteip:debug] [pid 27844]
mod_remoteip.c(357): [client 127.0.0.1:54002] AH01569: RemoteIP: Header
X-Forwarded-For value of fe80::761b:b2ff:feda:99e7 appears to be a private
IP or nonsensical.  Ignored

[Wed Dec 28 09:31:05.974700 2016] [authz_core:debug] [pid 27844]
mod_authz_core.c(834): [client 127.0.0.1:54002] AH01628: authorization
result: granted (no directives)

[Wed Dec 28 09:31:05.974744 2016] [proxy:debug] [pid 27844]
mod_proxy.c(1198): [client 127.0.0.1:54002] AH01143: Running scheme wss
handler (attempt 0)

[Wed Dec 28 09:31:05.974756 2016] [proxy_http:debug] [pid 27844]
mod_proxy_http.c(1893): [client 127.0.0.1:54002] AH01113: HTTP: declining
URL wss://127.0.0.1:8000/chat/stream/

[Wed Dec 28 09:31:05.974765 2016] [proxy_scgi:debug] [pid 27844]
mod_proxy_scgi.c(537): [client 127.0.0.1:54002] AH00865: declining URL
wss://127.0.0.1:8000/chat/stream/

[Wed Dec 28 09:31:05.974773 2016] [proxy_wstunnel:debug] [pid 27844]
mod_proxy_wstunnel.c(299): [client 127.0.0.1:54002] AH02900: declining URL
wss://127.0.0.1:8000/chat/stream/  (not WebSocket)

[Wed Dec 28 09:31:05.976075 2016] [proxy_ajp:debug] [pid 27844]
mod_proxy_ajp.c(738): [client 127.0.0.1:54002] AH00894: declining URL wss://
127.0.0.1:8000/chat/stream/

[Wed Dec 28 09:31:05.976088 2016] [proxy:warn] [pid 27844] [client
127.0.0.1:54002] AH01144: No protocol handler was valid for the URL
/chat/stream/. If you are using a DSO version of mod_proxy, make sure the
proxy submodules are included in the configuration using LoadModule.

2016-12-27 16:10 GMT-03:00 Adam Teale <a...@believe.tv>:

> Perhaps something useful - I do see that the "serveradmin" command says
> that it runs apache 2.2 even though "httpd -v" says 2.4.23??
>
> httpd -v
>
> Server version: Apache/2.4.23 (Unix)
>
> Server built:   Aug  8 2016 16:31:34
>
>
>
> sudo serveradmin fullstatus web
>
>
> web:health = _empty_dictionary
>
> web:readWriteSettingsVersion = 1
>
> web:apacheVersion = "2.2"
>
> web:servicePortsRestrictionInfo = _empty_array
>
> web:startedTime = "2016-12-27 19:08:59 +"
>
> web:apacheState = "RUNNING"
>
> web:statusMessage = ""
>
> web:ApacheMode = 2
>
> web:servicePortsAreRestricted = "NO"
>
> web:state = "STOPPED"
>
> web:setStateVersion = 1
>


Re: [users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2016-12-27 Thread Adam Teale
Perhaps something useful - I do see that the "serveradmin" command says
that it runs apache 2.2 even though "httpd -v" says 2.4.23??

httpd -v

Server version: Apache/2.4.23 (Unix)

Server built:   Aug  8 2016 16:31:34



sudo serveradmin fullstatus web


web:health = _empty_dictionary

web:readWriteSettingsVersion = 1

web:apacheVersion = "2.2"

web:servicePortsRestrictionInfo = _empty_array

web:startedTime = "2016-12-27 19:08:59 +"

web:apacheState = "RUNNING"

web:statusMessage = ""

web:ApacheMode = 2

web:servicePortsAreRestricted = "NO"

web:state = "STOPPED"

web:setStateVersion = 1


Re: [users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2016-12-27 Thread Adam Teale
I have upgraded to MacOS 10.12 so now Apache 2.4.23 is the current version.
Either way the patch that was suggested was supposed to have been included
in 2.4.10.



2016-12-27 11:29 GMT-03:00 Adam Teale <a...@believe.tv>:

> Otis do you know if it is pretty straightforward to apply a patch to an
> apache module?
>
> 2016-12-27 11:28 GMT-03:00 Adam Teale <a...@believe.tv>:
>
>> Hi Otis thanks for looking into it for me.
>> The link to revathskumar's blog is pretty much what I have setup at the
>> moment. I'll check out the bug report! Thanks!
>>
>> 2016-12-27 11:23 GMT-03:00 Otis Dewitt - NOAA Affiliate <
>> otis.dew...@noaa.gov>:
>>
>>> You can also check this URL:
>>> http://blog.revathskumar.com/2015/09/proxy-websocket-via-apache.html
>>>
>>> Thanks,
>>> Otis
>>>
>>> On Tue, Dec 27, 2016 at 9:07 AM, Adam Teale <a...@believe.tv> wrote:
>>>
>>>> Hi Daniel,
>>>>
>>>> Yes in the http_server_app.conf file it is activated:
>>>> LoadModule ssl_module libexec/apache2/mod_ssl.so
>>>>
>>>> It is interesting though that when I run "sudo apachectl -M" I can't
>>>> see ssl_module in there.
>>>>
>>>>
>>>>
>>>
>>>
>>
>


Re: [users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2016-12-27 Thread Adam Teale
Hi Daniel,

Yes in the http_server_app.conf file it is activated:
LoadModule ssl_module libexec/apache2/mod_ssl.so

It is interesting though that when I run "sudo apachectl -M" I can't
see ssl_module in there.


[users@httpd] Web sockets & proxypass - No protocol handler was valid for the URL

2016-12-27 Thread Adam Teale
Hi!

I've been trying to setup a reverse proxy to a localhost websocket url.

ProxyPass /chat/stream/ wss://localhost:8000/chat/stream/
ProxyPassReverse /chat/stream/ wss://localhost:8000/chat/stream/

I get an error in the apache error_log that reads:

No protocol handler was valid for the URL /chat/stream/. If you are using a
DSO version of mod_proxy, make sure the proxy submodules are included in
the configuration using LoadModule.

I have read a lot of pages via google of people using this method so I
wonder if there is some issue in our setup/install of Apache that ships
with Mac OS X 10.11 & Server.app 5.2?

I have all the standard modules loaded in httpd_server_app.conf

LoadModule proxy_module libexec/apache2/mod_proxy.so
LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
LoadModule proxy_wstunnel_module libexec/apache2/mod_proxy_wstunnel.so

When I access the application running on localhost:8000 directly on the
server everything works fine

Any ideas what could be going on?

Thanks!

Adam
Apache 2.4.18, Mac OS X 10.11, Server.app 5.2


Re: [users@httpd] "Define" directive is ALWAYS parsed

2016-09-18 Thread Adam
Well, bummer. I think it would be a terrific improvement to
functionality if both of those things were implemented:
   1. AllowOverride negating (maybe we could do something like
      AllowOverride all -SetHandler?)
   2. mod_info/status & friends restricting (not allowing them to be added in
      .htaccess context, or giving the option to remove that access?)
I just tested with some long AllowOverrideList configs and confirmed
going that route would allow me to (mostly) do what I'd like to do. I
still think it would be very nice to be able to load modules (or maybe
only allow access to module functionality?) under specific
circumstances.
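
A sketch of the AllowOverrideList route, with the directive names purely as
examples (the real list would be whatever the users actually need):

  <Directory "/home/*/public_html">
      AllowOverride None
      AllowOverrideList Redirect RedirectMatch RewriteEngine RewriteRule ErrorDocument
  </Directory>

Since SetHandler is simply not in the list, any .htaccess that tries it fails
with a configuration error instead of exposing server-info.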
Thanks again for the help.
On Sun, 2016-09-18 at 15:30 -0400, Eric Covener wrote:
> On Sun, Sep 18, 2016 at 3:25 PM, Adam wrote:
> > 
> > Ah yes, the monkey wrench. So the reason why going that route isn't
> > an
> > option is because this is being done in a shared environment, with
> > .htaccess
> > enabled for users. In an environment like that, anyone can just
> > drop
> > SetHandler server-info into any .htaccess they want and get all of
> > that
> > (sometimes sensitive) info. Due to the nature of all this, it was
> > looking
> > like the only way to truly limit who could gain access to that info
> > would be
> > to only load the module itself under specific circumstances, which
> > is what
> > led me to where I'm at now.
> That's just not possible, modules can only be loaded at startup.
> 
> > 
> > 
> > Is there a way I've not yet found that allows me to disable using
> > SetHandler
> > in an .htaccess context (while still allowing other things), or to
> > not allow
> > defining server-info there?
> You cannot really do it well.  You can block all of FileInfo, or list
> what's overridable in AllowOverrideList, but you can't use negation in
> that.
> 
> There has been discussion in the past about moving some mods (like
> info and status) away from SetHandler configuration for this very
> reason but nothing was ever implemented.
> 
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
> 

Re: [users@httpd] "Define" directive is ALWAYS parsed

2016-09-18 Thread Adam
Ah yes, the monkey wrench. So the reason why going that route isn't an
option is because this is being done in a shared environment, with
.htaccess enabled for users. In an environment like that, anyone can
just drop SetHandler server-info into any .htaccess they want and get
all of that (sometimes sensitive) info. Due to the nature of all this,
it was looking like the only way to truly limit who could gain access
to that info would be to only load the module itself under specific
circumstances, which is what led me to where I'm at now.
Is there a way I've not yet found that allows me to disable using
SetHandler in an .htaccess context (while still allowing other things), or
to not allow defining server-info there?
Thanks for your help thus far, also!
On Sun, 2016-09-18 at 15:09 -0400, Eric Covener wrote:
> On Sun, Sep 18, 2016 at 1:11 PM, Adam wrote:
> > 
> > Specifically, I'm trying to limit accessibility to the mod_info
> > page to only
> > specific users/IP's. I thought I would be able to get away with
> > that by
> > doing something like:
> > 
> > 
> > 
> > Define me
> > 
> > 
> > 
> > 
> > Then do something like:
> > 
> > 
> > 
> > LoadModule info_module modules/mod_info.so
> > 
> > 
> > 
> > SetHandler server-info
> > 
> > 
> > 
> > 
> 
> This mixes a lot of per-request things with non-per-request things --
> like loading modules (and Define).
> 
> Why not just use Require ip ... inside of the Location block w/
> SetHandler?
> 
> 
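
For reference, the configuration Eric is describing would look roughly like
this (the address range is only a placeholder); note that in the shared
environment above it still leaves .htaccess SetHandler open, which is the
concern being discussed:

  <Location "/server-info">
      SetHandler server-info
      Require ip 192.0.2.0/24
  </Location>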

Re: [users@httpd] "Define" directive is ALWAYS parsed

2016-09-18 Thread Adam
So if I'm understanding you correctly, the Define directive should only
ever be allowed globally, and not nested within conditionals? If so, I
guess I'm having trouble understanding the purpose of having the Define
directive. In any case, it's sounding less likely that this will
accomplish what it is that I'm trying to do.
Specifically, I'm trying to limit accessibility to the mod_info page to
only specific users/IP's. I thought I would be able to get away with
that by doing something like:

Define me

Then do something like:

LoadModule info_module modules/mod_info.so

SetHandler server-info
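
Spelled out, the intent was roughly the following - the <If> expression,
network, and Location path here are illustrative placeholders only, not the
original values:

  <If "-R '203.0.113.0/24'">
      Define me
  </If>

  <IfDefine me>
      LoadModule info_module modules/mod_info.so
      <Location "/server-info">
          SetHandler server-info
      </Location>
  </IfDefine>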


But this of course isn't working the way I was wanting it to (hence my
email inquiry), and judging by what you're telling me, it's not
supposed to work. If that the case, might you (or anyone else on this
thread) have any alternative ideas on how I might accomplish what I'm
trying to do here?
Thanks again.
On Sun, 2016-09-18 at 12:24 -0400, Eric Covener wrote:
> On Sun, Sep 18, 2016 at 11:52 AM, Adam wrote:
> > 
> > Perhaps the only way I could get Define to only be applied
> > conditionally is
> > by following the example of nesting it within <IfDefine>
> > wrappers. But if
> > Define is otherwise always set globally and unconditionally, then
> > the
> > <IfDefine> directive seems superfluous.
> > 
> > Am I vastly misunderstanding the usage of this here?
> It's a defect that's tricky to fix. It shouldn't have been allowed in
> any non-global context.  A note should be added.
> 
> However, It doesn't make IfDefine useless. IfDefine is older and
> understands command-line -D arguments.
> 

[users@httpd] "Define" directive is ALWAYS parsed

2016-09-18 Thread Adam
Hello,
I'm working on a way of making specific content available only under
certain circumstances, and I believe the best way to go about this is
to use the "Define" directive, then do some stuff within "" wrappers. The problem is, no matter what kind of conditional
stuff I put around the Define line, it is always parsed.
Example:

Define myvar

With the above, the "myvar" parameter will always be defined for every
request, regardless of user-agent. This will also work:

Define myvar

Very clearly that should never ever match, yet it does. Reading the
documentation seems to indicate that the Define directive should be
obeying the context it is being put in:
Documentation for "If" directive (https://httpd.apache.org/docs/2.4/mod
/core.html#if):
> Only directives that support the directory context can be used within
> this configuration section.
Documentation for "Define" directive (https://httpd.apache.org/docs/2.4
/mod/core.html#define):
> Context:  server config, virtual host, directory
Perhaps the only way I could get Define to only be applied
conditionally is by following the example of nesting it within <IfDefine> wrappers. But if Define is otherwise always set globally and
unconditionally, then the <IfDefine> directive seems superfluous.
Am I vastly misunderstanding the usage of this here?
Thanks for any assistance.
-Adam


Re: [users@httpd] mod_proxy - Status lines without response phrases are getting turned into 500 errors

2015-12-07 Thread Adam
Thanks Nick!  I'm not sure what our plans are to upgrade, but we do have an
easy fix in our application for now.  Thanks for clarifying where it was
fixed and not fixed.

Adam

On Mon, Dec 7, 2015 at 2:34 PM, Nick Kew <n...@webthing.com> wrote:

> On Mon, 2015-12-07 at 14:03 -0500, Adam wrote:
> > We are using Apache 2.2.29 in production with mod_perl and mod_proxy
>
> What's the role of mod_perl in your proxy?  Can the
> problem be replicated without mod_perl?
>
> Oh, right, just looked up the bug you reference: seems
> I was there.  The final comment suggests that the issue
> you describe was fixed in 2.3/2.4, but not in 2.2.
> You could presumably apply the patch attached to that
> bug report if you don't want to upgrade?
>
> --
> Nick Kew
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
> For additional commands, e-mail: users-h...@httpd.apache.org
>
>


[users@httpd] mod_proxy - Status lines without response phrases are getting turned into 500 errors

2015-12-07 Thread Adam
Hi,

We are using Apache 2.2.29 in production with mod_perl and mod_proxy (we're
acting as a reverse proxy) and are experiencing a problem where proxied
responses from the back-end server that don't include a response phrase are
being turned into a 500 error by Apache when it proxies them to the client.  The
client is using a custom response code of 320 and is not including a
response phrase in their status line.  This sounds almost identical to an
old bug that was fixed (or supposedly fixed) a very long time ago:

https://bz.apache.org/bugzilla/show_bug.cgi?id=44995

In our mod_perl application if we modify the status line read from the
backend server to include a response phrase then this avoids the bug.
E.g., things are ok if we do something like this when the status line
doesn't contain a response phrase.

$r->status_line($r->status . ' OK');

Does anyone have any experience with this or should I file a new bug with
Apache?

Here is some diag info from the httpd binary:

Loaded Modules:
 core_module (static)
 authn_file_module (static)
 authn_default_module (static)
 authz_host_module (static)
 authz_groupfile_module (static)
 authz_user_module (static)
 authz_default_module (static)
 auth_basic_module (static)
 file_cache_module (static)
 cache_module (static)
 disk_cache_module (static)
 reqtimeout_module (static)
 filter_module (static)
 deflate_module (static)
 log_config_module (static)
 env_module (static)
 headers_module (static)
 setenvif_module (static)
 version_module (static)
 proxy_module (static)
 proxy_http_module (static)
 proxy_scgi_module (static)
 proxy_ajp_module (static)
 proxy_balancer_module (static)
 ssl_module (static)
 mpm_prefork_module (static)
 http_module (static)
 mime_module (static)
 status_module (static)
 actions_module (static)
 alias_module (static)
 so_module (static)

Server version: Apache/2.2.29 (Unix)
Server built:   Oct 17 2014 13:47:09
Server's Module Magic Number: 20051115:36
Server loaded:  APR 1.5.1, APR-Util 1.5.3
Compiled using: APR 1.5.1, APR-Util 1.5.3
Architecture:   32-bit
Server MPM: Prefork
  threaded: no
forked: yes (variable process count)
Server compiled with
 -D APACHE_MPM_DIR="server/mpm/prefork"
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=128
 -D HTTPD_ROOT="/var/httpd"
 -D SUEXEC_BIN="/var/httpd/bin/suexec"
 -D DEFAULT_PIDLOG="logs/httpd.pid"
 -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
 -D DEFAULT_LOCKFILE="logs/accept.lock"
 -D DEFAULT_ERRORLOG="logs/error_log"
 -D AP_TYPES_CONFIG_FILE="conf/mime.types"
 -D SERVER_CONFIG_FILE="conf/httpd.conf"

Thanks,
Adam


[users@httpd] RE: merging Apache context

2015-10-30 Thread Greenberg, Adam
Hi Bill (this is a resend to include the dev and user communities, per your 
instructions):

Thanks very much for the prompt response. I believe we have covered all of the 
steps you indicate below. Attached please find a tar file that contains a 
simple C++ module and a makefile to build it.

Our test server http.conf has a LoadModule line for the module and then the 
following set of directory sections:


<Directory /foo>
  dirConfig /foo
</Directory>

<Directory />
  dirConfig /
</Directory>

<Directory /foo/bar>
  dirConfig /foo/bar
</Directory>

When we execute the server, we see this on the console:

# bin/apachectl start
Created dirConfig for context: unset
Created dirConfig for context: /foo/
Inserted dirConfig with message: /foo
Created dirConfig for context: /
Inserted dirConfig with message: /
Created dirConfig for context: /foo/bar/
Inserted dirConfig with message: /foo/bar
Created dirConfig for context: unset

And this in the log:

Merged config: unset:/
…
hookFunc: for request /foo/bar/foo.html the context is: unset:/

We do not see any other messages. The file /foo/bar/foo.html exists and the 
browser displays it. This suggests that the merge sequence for the last 
directory section stopped with the configuration for the “/” section.  Perhaps 
you could point out what would cause this behavior?

Thanks:
Adam


From: William A. Rowe Jr. [mailto:wr...@pivotal.io]
Sent: Thursday, October 29, 2015 6:04 PM
To: PAN, JIN
Cc: wr...@apache.org; Greenberg, Adam
Subject: Re: merging Apache <Directory> context

Hi Jin,

there might be more than one thing going on here.

First, it is critical that a directive belonging to the module occurs in each 
of the <Directory> blocks you are merging.

Remember httpd is not going to even create a config section, never mind merge 
them, for every module whose directives do not appear in a given <Directory> - 
this is what makes httpd so efficient.

Second, if there is a bug in the cmd handler, your ctx member might not be 
correctly updated for a section, same is true for a bug in the create or merge 
function.

Third, httpd does perform some optimization, it may premerge global configs and 
may resume a merge from previously merged sections when they are encountered in 
a subrequest.

Is the resulting ->ctx member correct for the resulting <Directory> context?  
Is it simply that the merge isn't called as often as expected?  Optimization 
may be the cause.

Is the cmd record for your directive set to OR_ACCESS (telling httpd that it is 
a per-dir and not per-server config?)

Is there a bug in your create code that is returning NULL instead of a newly 
initialized config section?

If we look at the example of mod_dir.c, here are the key points...


AP_DECLARE_MODULE(dir) = {
    STANDARD20_MODULE_STUFF,
    create_dir_config,  /* create per-directory config structure */
    merge_dir_configs,  /* merge per-directory config structures */
All is well, we have a create + merge handler...


static void *create_dir_config(apr_pool_t *p, char *dummy)
{
    dir_config_rec *new = apr_pcalloc(p, sizeof(dir_config_rec));

    new->index_names = NULL;
    new->do_slash = MODDIR_UNSET;
    new->checkhandler = MODDIR_UNSET;
    new->redirect_index = REDIRECT_UNSET;
    return (void *) new;
}
the correct structure size is created and members initialized to empty (e.g. 
'unset') - the new allocation is returned.


static void *merge_dir_configs(apr_pool_t *p, void *basev, void *addv)
{
    dir_config_rec *new = apr_pcalloc(p, sizeof(dir_config_rec));
    dir_config_rec *base = (dir_config_rec *)basev;
    dir_config_rec *add = (dir_config_rec *)addv;

    new->index_names = add->index_names ? add->index_names : base->index_names;
    new->do_slash =
        (add->do_slash == MODDIR_UNSET) ? base->do_slash : add->do_slash;
    new->checkhandler =
        (add->checkhandler == MODDIR_UNSET) ? base->checkhandler : add->checkhandler;
    new->redirect_index =
        (add->redirect_index == REDIRECT_UNSET) ? base->redirect_index : add->redirect_index;
    new->dflt = add->dflt ? add->dflt : base->dflt;
    return new;
}
A new config is created, the various per-dir values updated, and the resulting 
new allocation is returned.


static const command_rec dir_cmds[] =
{
    ...
    AP_INIT_RAW_ARGS("DirectoryIndex", add_index, NULL, DIR_CMD_PERMS,
                     "a list of file names"),
The DIR_CMD_PERMS (defined as OR_INDEXES) assures httpd that this is a per-dir 
config directive allowed wherever the 'AllowOverride Indexes' is set.


static const char *add_index(cmd_parms *cmd, void *dummy, const char *arg)
{
    dir_config_rec *d = dummy;
    const char *t, *w;
    int count = 0;

    if (!d->index_names) {
        d->index_names = apr_array_make(cmd->pool, 2, sizeof(char *));
    }

    t = arg;
    while ((w = ap_getword_conf(cmd->pool, &t)) && w[0]) {
        if (count == 0 && !strcasecmp(w, "disabled")) {
 

[users@httpd] RE: merging Apache context

2015-10-30 Thread Greenberg, Adam
:error] [pid 21809] [client 
10.27.15.3:54410] hookFunc: for request, /foo/bar/foo.html, the context is: 
unset:/

Apache appears to behave differently from what I would expect from reading the 
section merging documentation:

Adam

-Original Message-
From: William A. Rowe Jr. [mailto:wmr...@gmail.com]
Sent: Friday, October 30, 2015 8:53 PM
To: Greenberg, Adam
Cc: users@httpd.apache.org; PAN, JIN
Subject: Re: merging Apache <Directory> context

On Fri, Oct 30, 2015 at 5:23 PM, Greenberg, Adam 
<adam.greenb...@fmr.com> wrote:
> Hi Bill (this is a resend to include the dev and user communities, per your
> instructions):

Sorry for any misunderstanding - this seems like a good users@ question
(with some C++ thrown in!) so I'm dropping the dev@ list. FWIW attaching
the source file as text instead of a tar file lets others quickly examine and
participate in the discussion, posts with archive attachments are often
ignored for lack of time, even if they might be interesting to other readers.

> Thanks very much for the prompt response. I believe we have covered all of
> the steps you indicate below.

Yea, someone should steal that troubleshooting list and throw it on the wiki.
Might be useful to Nick if he publishes an update to the module authoring
volume.

> Attached please find a tar file that contains
> a simple C++ module and a makefile to build it.
>
> Our test server http.conf has a LoadModule line for the module and then the
> following set of directory sections:
>
> <Directory /foo>
>   dirConfig /foo
> </Directory>
>
> <Directory />
>   dirConfig /
> </Directory>
>
> <Directory /foo/bar>
>   dirConfig /foo/bar
> </Directory>
>
> When we execute the server, we see this on the console:
> # bin/apachectl start
> Created dirConfig for context: unset
> Created dirConfig for context: /foo/
> Inserted dirConfig with message: /foo
> Created dirConfig for context: /
> Inserted dirConfig with message: /
> Created dirConfig for context: /foo/bar/
> Inserted dirConfig with message: /foo/bar
> Created dirConfig for context: unset
>
> And this in the log:
> Merged config: unset:/
> hookFunc: for request /foo/bar/foo.html the context is: unset:/
>
> We do not see any other messages. The file /foo/bar/foo.html exists and the
> browser displays it. This suggests that the merge sequence for the last
> directory section stopped with the configuration for the “/” section.
> Perhaps you could point out what would cause this behavior?

As I mentioned when I had a chance to teach this material, server
configs are almost
exclusively allocated at startup time out of the pconf - configuration
pool.  It is a pool
whose contents should -never- change for the lifetime of the server
worker processes.
But it *does* change in the parent process, when a restart is
requested, the existing
pconf is dumped and a new pconf is allocated, and the configuration is re-read.

But dir configs are a different beast, they may be created at runtime
(for example,
.htaccess file contents), merged for the top request, all internal
redirects, and all
of the subrequests (mod_autoindex or mod_dav_fs can cause thousands of these
to present just one file listing).  They absolutely must be allocated
from the pool
passed to the create() or merge() handler.

In your C++ code, you have;

void * dirConfig::create_dc( apr_pool_t * pool, char * context )
{
  // Create the configuration object.
  dirConfig * cfg = new dirConfig();
  if (NULL == cfg)
  {
// An allocation error.
return NULL;
  }
  // Keep the context name.
  std::string temp = "unset";

  if (NULL != context)
  {
temp = context;
  }
  cfg->setContext( temp );

  std::cerr << "Created dirConfig for context: " << temp << std::endl;
  return (void *) cfg;
}

Here's the first problem... you allocated dirConfig() but didn't allocate this
from the given pool.  You then returned a naked pointer to the C++ object,
and I have no idea offhand whether that permanently increases the object's
use count, or whether the dirConfig object is then released immediately
upon the return from the create_dc function.

You can register a pool cleanup against this pool which destroys the
given dirConfig, but that would be a costly proposition performance-wise.
Do this only if it must be a managed type, and that should only be for
your per-request or per-connection objects.  You should handle these
httpd structures (server config, dir config, etc) as unmanaged "C" data
and let the pool schema perform your cleanups for you.

You probably want to start with this change to watch lifetimes;
dirConfig::~dirConfig()
{
  std::cerr << "Destroyed dirConfig of context: " << _context << std::endl;
}

You also may want to watch your objects a little more closely by emitting
the memory address of 'this' itself, so you can determine that the object
being modified is the ob

[users@httpd] Proxy Internal Webserver

2014-06-16 Thread Adam Bellin
I have an apache server that I want to proxy an internal server (
192.168.1.25:32400/web) from an external domain name example.com/site1.

Below is my virtualhost for the main server.

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/html/example
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    ProxyRequests On
    <Location /site1>
        ProxyPass http://192.168.1.25:32400/web/
        ProxyPassReverse http://192.168.1.25:32400/web/
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>

What I see when I try to go to my site example.com/site1 is that the
client's URL changes to example.com/web/index.html, which is invalid. What I
need is for the client to display the page at 192.168.1.25:32400/web/ under the
domain example.com/site1.

What do I need to change in the above to make this work?

Thanks.


[users@httpd] DirectoryListing Hiding Folder that contains .htaccess with 'require valid-user'

2014-04-30 Thread Adam Brenner
Howdy,

I am running into an issue where the directory listing with:

  Options +Indexes +FollowSymLinks +MultiViews
  AllowOverride All

is hiding any folder that contains a .htaccess file with 'require
valid-user'. The specific .htaccess file we are using is:

  AuthUserFile /data/hpc/www/accounting/graphical/.htpasswd
  AuthName HPC Graphical Accounting
  AuthType Basic
  require valid-user

When the line 'require valid-user' is commented out, the directory is
shown in the parent's directory listing. When added, it is hidden.

How do I disable Apache to stop hiding the directory when 'require
valid-user' is added?

Apache's IndexIgnore is the default:
   IndexIgnore .??* *~ *# HEADER* README* RCS CVS *,v *,t

The folder name is graphical-accounting which does not match the regex
above. The directory and all files are correctly owned by the same
user apache is running as, and has the correct folder/file
permissions.

Any ideas?
-Adam

--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences

System Administrator, HPC Cluster
Office of Information Technology
http://hpc.oit.uci.edu/

University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebre...@uci.edu

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] DirectoryListing Hiding Folder that contains .htaccess with 'require valid-user'

2014-04-30 Thread Adam Brenner
On Wed, Apr 30, 2014 at 11:50 AM, Eric Covener cove...@gmail.com wrote:

 IndexOptions +ShowForbidden, or arrange for the same authentication to
 occur in the parent directory.



That did the trick. Thanks Eric!
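
For anyone finding this later, the option belongs on the parent directory
that produces the listing - a sketch only, with a placeholder path:

  <Directory "/data/hpc/www/parent">
      Options +Indexes
      IndexOptions +ShowForbidden
  </Directory>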

--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences

System Administrator, HPC Cluster
Office of Information Technology
http://hpc.oit.uci.edu/

University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebre...@uci.edu

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] Apache 2.4 mod_ratelimit and mod_proxy_fcgi issue

2014-02-12 Thread Adam Hurkala
Hi,

I've just noticed that mod_ratelimit does not work as expected with
mod_proxy_fcgi. I set a download limit to 500 KB/s for PHP (php-fpm) and
for some reason I'm still able to download at full speed.
If download limit is set to some low value e.g. 10 KB/s it pretty much
works (see results).

Configuration 1:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/home/adam/apache2.4/htdocs/$1
SetOutputFilter RATE_LIMIT
SetEnv rate-limit 500

Results 1 (download 74 MB file with wget with rate_limit set to 500):
2014-02-12 14:32:25 (3.83 MB/s) - `file.php' saved [76581888/76581888]

Configuration 2:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so
ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/home/adam/apache2.4/htdocs/$1
SetOutputFilter RATE_LIMIT
SetEnv rate-limit 10

Results 2 (download 74 MB file with wget with rate_limit set to 10):
2014-02-12 15:32:05 (13.3 KB/s) - `file.php' saved [76581888/76581888]

I've tested this issue on LAN and internet and always got the same results.

PHP script that can be used to test this issue can be found here (only path
to file needs to be set):
http://pastebin.com/zXWUMaxf

Thanks,
Adam


[users@httpd] Apache 2.2.26 - POST Requests Failing after SIGTERM Signal Received

2014-01-27 Thread Adam Brenner
Howdy,

We are experiencing an issue where POST requests are failing after
Apache 2.2.26 receives a SIGTERM and restarts. The response received
from CURL is:

  curl: (52) Empty reply from server

Before the SIGTERM, we are receiving a response from the server. No
proxy or filtering is occurring between the CURL POST request and
Apache. This problem is reproducible in both PHP and a CGI script
(perl). Because of this, we are not looking into any module (PHP).
Perl was run via hashbang and not via mod_perl. mod_security has no
configuration and is not filtering any rules, same with any custom
rewrite rules. Apache was compiled with:

./configure
--disable-v4-mapped
--enable-deflate
--enable-expires
--enable-headers
--enable-info
--enable-logio
--enable-proxy
--enable-rewrite
--enable-ssl
--enable-suexec
--enable-unique-id
--prefix=/usr/local/apache
--with-included-apr
--with-mpm=prefork
--with-pcre=/opt/pcre
--with-ssl=/usr
--with-suexec-caller=nobody
--with-suexec-docroot=/
--with-suexec-gidmin=100
--with-suexec-logfile=/usr/local/apache/logs/suexec_log
--with-suexec-uidmin=100
--with-suexec-userdir=public_html


Right after the SIGTERM signal is received, the process via ps looks like so:

$ lsof -p 24428 | grep bin
httpd 24428 root txt REG 8,3 1213648 38274047
/usr/local/apache/bin/httpd (deleted)

It appears the initial root process spawned by httpd is provided with
a SIGTERM shortly after a graceful restart (random interval), and the
root process that spawns in its place to maintain the existing child
processes does not appear to be entirely functional, and actually
reports to be deleted as far as the /proc/*/exec information is
reporting. This directly coincides with the point where POST ceases to
work, after graceful restarts.


Running the httpd service with strace showed the following
information[1]. The core dumps produced from httpd are below:

(gdb) where
#0 0x7f524f8cc86f in ?? ()
#1 0x00440cdf in ap_is_recursion_limit_exceeded ()
#2 0x004b8ade in ap_byterange_filter ()
#3 0x0045bbbf in get_ptoken ()
#4 0x0045c070 in parse_expr ()
#5 0x004e49c9 in ?? ()
#6 0x7f524fb3c710 in ?? ()
#7 0x in ?? ()


Any help in tracking down this issue would be greatly appreciated as
the ability to not process POST requests are making sites unusable.

Would it be better to submit a BugZilla report on this? I tried
searching the list however could not find other bugs related to this.
Or perhaps the developer list?


[1]: 
https://gist.github.com/abrenner/8662912/raw/4b56aa0d114f678ddfdd3b8e6b90e4cb92317c20/gistfile1.txt


--
Adam Brenner
Computer Science, Undergraduate Student
Donald Bren School of Information and Computer Sciences

System Administrator, HPC Cluster
Office of Information Technology
http://hpc.oit.uci.edu/

University of California, Irvine
www.ics.uci.edu/~aebrenne/
aebre...@uci.edu

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] VirtualHost configuration not working as expected with ePages solution

2013-01-10 Thread Adam Dosch

On Thu, 10 Jan 2013 17:15:48 +, Tom Evans wrote:
On Thu, Jan 10, 2013 at 12:03 PM, Tom Frost fro5...@yahoo.com 
wrote:

Hi Adam

Thanks for your reply.

Yes, I did c/p and then replace the domains with the placeholders.

I too noticed that there wasn't any entry for url1.mydomain.com, and I have
been down the same process you advised of starting with one working and
gradually adding.

So I start off with httpd.conf and the relevant lines are:

NameVirtualHost *:80

Include conf.d/*.conf

<IfDefine PROXY>


Is this so that this vhost is only loaded in the case that the proxy
module is included in Apache?

If so, I would completely drop it. If the proxy_module is not
available, your website would not work anyway, and better to be told
that when starting apache, than apache to silently drop your vhost and
start up anyway.

If you did want to keep it (and I correctly guessed why it is there),
you should replace it with this:

<IfModule proxy_module>

I would just drop it though.

With this gone, httpd should see the url1.domain.com vhost as the
first and default vhost, and all should work. Please test.


Tom F, thanks for posting it all.  I was too lazy to go look back 
through the list to find the original postings of your config.


I agree with Tom E., dump that <IfDefine PROXY>.  The chances that you
aren't going to load the proxy module at some given time are slim, I'd say;
you're setting it up for a reason, so it's needed.  Instead of
making your configuration 'that' dynamic, if you decide not to use that
vhost for url1 or don't use mod_proxy* for what you're doing, just
remove it and quickly change the vhost config.  Keep it simple.


-A

-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] VirtualHost configuration not working as expected with ePages solution

2013-01-09 Thread Adam Dosch

Tom,

Sorry for the belated reply.


The output requested is:

VirtualHost configuration:
127.0.0.1:443 localhost (/etc/httpd/conf.d/ssl.conf:81)
wildcard NameVirtualHosts and _default_ servers:
*:80 is a NameVirtualHost
 default server url2.mydomain.com
(/etc/httpd/conf.d/zzz-epages-httpd.conf:215)
 port 80 namevhost url2.mydomain.com
(/etc/httpd/conf.d/zzz-epages-httpd.conf:215)
 port 80 namevhost url2.mydomain.com
(/etc/httpd/conf.d/zzz-epages-httpd.conf:225)
 port 80 namevhost url2.mydomain.com
(/etc/httpd/conf.d/zzz-epages-httpd.conf:215)
 port 80 namevhost url2.mydomain.com
(/etc/httpd/conf.d/zzz-epages-httpd.conf:225)
Syntax OK


I'm assuming you copy/pasted this out and replaced the real domains to 
'protect the innocent'.  But if you didn't flub copy/paste, I --do not-- 
see any indication that Apache knows what to do or will do anything 
specific when a request comes in for 'url1.mydomain.com'.  If you notice 
the output above, you will always default to 'url2.mydomain.com' for any 
HTTP request coming in on that listening service on that IP address.  
There isn't --any-- VirtualHost reference to 'url1' listed.


I forgot what your VirtualHost containers looked like for both the url1 and
url2 domains, but I think you should definitely just take an isolation approach
to it:  work with one, get it working with its own VirtualHost
container, then add another vhost in and so on, then re-verify with
'apachectl -t -D DUMP_VHOSTS' and from a web browser.  From there once
you get it nailed down, then do any merging for maintenance's sake on your
.conf files.


Hope that helps.

-A



-
 FROM: Adam Dosch
 TO: Tom Frost ; users@httpd.apache.org
 SENT: Thursday, 3 January 2013, 19:28
 SUBJECT: Re: [users@httpd] VirtualHost configuration not working as
expected with ePages solution

Tom,

I'd be curious what the output of your 'apachectl -t -D DUMP_VHOSTS'
looks like?

I've come across this problem as well in a related degree, and
interrogating the output of the 'DUMP_VHOSTS' above will at least 
tell

you the top-to-bottom order your vhost requests will travel down in
your configuration.

One way I had to solve it was take my VirtualHost container for
'_default_', put it in it's own configuration file and include it
prior to any other vhost config files in httpd.conf. It looked a bit
like this in my httpd.conf:

 NameVirtualHost *:80
 Include conf/mydefault-vhost.conf # which would contain your default
vhost container for url2.mydomain.com
 Include conf/*-vhost.conf # contain your others like url1, urlfoo,
urlboo, urlbar, etc., it would be one config, or many, your choice.

Using this approach, I did notice that a blanket wildcard/greedy
include of all *.conf file gives you varying results, especially if
you were managing all your vhosts in separate configuration files for
clarity/organization sake like I was.

Otherwise, sounds like you've verified client-side caching. My last
logical thought would be perhaps if you're not using CNAME's in DNS
for this and right-out calling them from the client without any
hostname resolution on those FQDNs, that you need to add those
host aliases ('url1.mydomain.com' and 'url2.mydomain.com') to your
/etc/hosts or equiv in Windows.

-A

On Thu, 3 Jan 2013 08:05:26 -0800 (PST), Tom Frost wrote:

If I use either url1.mydomain.com or url2.mydomain.com they both go
to the url2.mydomain.com VirtualHost site.

I have cleared caches and done a Ctrl-F5 to force the page to

reload.


I'm sure that its something to do with epages, as I said there is a
lot of other config in there but I'm honestly not sure what is what.

Thanks again for your help, any more suggestions would be

appreciated.




-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org [1]
For additional commands, e-mail: users-h...@httpd.apache.org [2]



Links:
--
[1] mailto:users-unsubscr...@httpd.apache.org
[2] mailto:users-h...@httpd.apache.org



-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] VirtualHost configuration not working as expected with ePages solution

2013-01-03 Thread Adam Dosch

Tom,

I'd be curious what the output of your 'apachectl -t -D DUMP_VHOSTS' 
looks like?


I've come across this problem as well in a related degree, and 
interrogating the output of the 'DUMP_VHOSTS' above will at least tell 
you the top-to-bottom order your vhost requests will travel down in your 
configuration.


One way I had to solve it was take my VirtualHost container for 
'_default_', put it in it's own configuration file and include it prior 
to any other vhost config files in httpd.conf.  It looked a bit like 
this in my httpd.conf:


  NameVirtualHost *:80
  Include conf/mydefault-vhost.conf # which would contain your default 
vhost container for url2.mydomain.com
  Include conf/*-vhost.conf # contain your others like url1, urlfoo, 
urlboo, urlbar, etc., it would be one config, or many, your choice.


Using this approach, I did notice that a blanket wildcard/greedy 
include of all *.conf file gives you varying results, especially if you 
were managing all your vhosts in separate configuration files for 
clarity/organization sake like I was.


Otherwise, sounds like you've verified client-side caching.  My last 
logical thought would be perhaps if you're not using CNAME's in DNS for 
this and right-out calling them from the client without any hostname 
resolution on those FQDNs, that you need to add those host aliases 
('url1.mydomain.com' and 'url2.mydomain.com') to your /etc/hosts or 
equiv in Windows.


-A

On Thu, 3 Jan 2013 08:05:26 -0800 (PST), Tom Frost wrote:

If I use either url1.mydomain.com or url2.mydomain.com they both go
to the url2.mydomain.com VirtualHost site.

I have cleared caches and done a Ctrl-F5 to force the page to 
reload. 


I'm sure that its something to do with epages, as I said there is a
lot of other config in there but I'm honestly not sure what is what.

Thanks again for your help, any more suggestions would be 
appreciated.




-
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] SSLSessionCache - DSO Load Failed

2012-10-02 Thread Adam Jaber
Here is the output from the error_log:
[error] (20019)DSO load failed: Cannot create SSLSessionCache DBM file 
`/usr/local/apache/conf/ssl/SessionCache'

Here is my entry in the conf file, which is also entered before the 
virtual host, as stated in the documentation:
SSLSessionCache dbm:/usr/local/apache/conf/ssl/SessionCache

Here are the cache modules loaded:
LoadModule file_cache_module /opt/freeware/lib/httpd/modules/mod_file_cache.so
LoadModule cache_module /opt/freeware/lib/httpd/modules/mod_cache.so
LoadModule disk_cache_module /opt/freeware/lib/httpd/modules/mod_disk_cache.so

I tried touching SessionCache and setting open permissions on it, but when
I restart apache, the file gets deleted. Please assist with this issue.

Thank you 

[users@httpd] mix pre-compressed and mod_deflate

2011-05-10 Thread Adam Schrotenboer
I have a reasonably working mod_rewrite solution for serving
pre-compressed files. But if I enable mod_deflate, it seems to override
my mod_rewrite method.
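
(A typical version of such a rewrite recipe looks roughly like the sketch
below, in .htaccess or <Directory> context; the file extension, MIME type and
the no-gzip marker are the usual conventions, not necessarily the exact rules
in use here, and mod_headers plus mod_env are assumed to be loaded.)

  RewriteEngine On
  # serve foo.js.gz when the client accepts gzip and the file exists
  RewriteCond %{HTTP:Accept-Encoding} gzip
  RewriteCond %{REQUEST_FILENAME}.gz -f
  RewriteRule ^(.+)\.js$ $1.js.gz [L]

  <FilesMatch "\.js\.gz$">
      ForceType application/javascript
      Header set Content-Encoding gzip
      # the no-gzip environment variable tells mod_deflate to skip the response
      SetEnv no-gzip 1
  </FilesMatch>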

Basic problem is this:
serving jquery.js with mod_deflate takes 1sec to load, serving it
pre-compressed (and mod_deflate disabled) takes 180msec to load.
With mod_rewrite + mod_deflate enabled, I get the same 1sec to load this
file.

So I want to pre-compress some files, and have the rest be compressed on
the fly. How do I make mod_deflate get out of the way for pre-compressed
files?


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[users@httpd] flush(STDOUT) + mod_deflate - was: mix pre-compressed and mod_deflate

2011-05-10 Thread Adam Schrotenboer
On 05/10/2011 11:14 AM, Adam Schrotenboer wrote:
 I have a reasonably working mod_rewrite solution for serving
 pre-compressed files. But if I enable mod_deflate, it seems to override
 my mod_rewrite method.

 Basic problem is this:
 serving jquery.js with mod_deflate takes 1sec to load, serving it
 pre-compressed (and mod_deflate disabled) takes 180msec to load.
 With mod_rewrite + mod_deflate enabled, I get the same 1sec to load this
 file.

 So I want to pre-compress some files, and have the rest be compressed on
 the fly. How do I make mod_deflate get out of the way for pre-compressed
 files?

After more playing with it, the high latency under mod_deflate is caused
by the change required to make 'flush' work (setting a low SetOutputBuffer).

I'm _trying_ to make the javascript start being loaded while the CGI
finishes putting together the page. It seems I can have one or the other
(working flush, or fast mod_deflate).

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [users@httpd] flush(STDOUT) + mod_deflate - was: mix pre-compressed and mod_deflate

2011-05-10 Thread Adam Schrotenboer
On 05/10/2011 02:46 PM, Macks, Aaron wrote:
 Can you have the .js get called from a different VHOST (probably will need a 
 FQ url) and configure the static vhost and the cgi vhost each tuned to 
 the specific purpose



An interesting idea, but it seems that it would require 2 VMs (I tried
putting a SetOutputBuffer in a Directory, but it refuses that context,
so I'm guessing it'll reject a virtualhost as well), and that would
never pass review for deployment.

I'm tempted to try setting this one CGI to no-gzip, as it's not that
much content (most of the content is the javascripts and json files),
just ~30kb.
and I can also try pre-compressing the JSON files.

 A
 --
 Aaron Macks
 Sr. Unix Systems Engineer

 Harvard Business Publishing
 300 North Beacon St.|   Watertown, MA 02472
 (617) 783-7461  |   Fax: (617) 783-7467
 www.harvardbusiness.org |   Cell:(978) 317-3614

 On May 10, 2011, at 5:39 PM, Adam Schrotenboer wrote:

 On 05/10/2011 11:14 AM, Adam Schrotenboer wrote:
 I have a reasonably working mod_rewrite solution for serving
 pre-compressed files. But if I enable mod_deflate, it seems to override
 my mod_rewrite method.

 Basic problem is this:
 serving jquery.js with mod_deflate takes 1sec to load, serving it
 pre-compressed (and mod_deflate disabled) takes 180msec to load.
 With mod_rewrite + mod_deflate enabled, I get the same 1sec to load this
 file.

 So I want to pre-compress some files, and have the rest be compressed on
 the fly. How do I make mod_deflate get out of the way for pre-compressed
 files?

 After more playing with it, the high latency under mod_deflate is caused
 by the change required to make 'flush' work (setting a low SetOutputBuffer).

 I'm _trying_ to make the javascript start being loaded while the CGI
 finishes putting together the page. It seems I can have one or the other
 (working flush, or fast mod_deflate).

 -
 The official User-To-User support forum of the Apache HTTP Server Project.
 See URL:http://httpd.apache.org/userslist.html for more info.
 To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
 For additional commands, e-mail: users-h...@httpd.apache.org


 -
 The official User-To-User support forum of the Apache HTTP Server Project.
 See URL:http://httpd.apache.org/userslist.html for more info.
 To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
   from the digest: users-digest-unsubscr...@httpd.apache.org
 For additional commands, e-mail: users-h...@httpd.apache.org



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[us...@httpd] Configure Server To Run In Windows Vista

2009-06-17 Thread Sevis , en Adam


PROBLEM:  When Windows starts/boots, the Apache service does not start and
reports the error (not in Apache error log, but in Windows management error 
log).

PROBLEM:  When attempting to start the service for the first time in a Windows
session from the Apache Monitor (running in system tray), there is a UAC prompt
requesting to Allow/Cancel the request.  Then a dialog box shows The requested
operation has failed.  Curiously, this only occurs for the Start of a stopped
Apache service;  Restart of an existing service does not fail.

The following errors are reported at the same time, although the first two are
in this order in the system log:

  CompManag-EventViewer-Windows Logs-System:
  Level:  Error
  Event ID: 7024
  Source: Service Control Manager Eventlog Provider
  Detail:  The ApacheHttpd service terminated with service-specific
error 1 (0x1).

  CompManag-EventViewer-Windows Logs-System:
  Level:  Information
  Event ID: 7036
  Source: Service Control Manager Eventlog Provider
  Detail:  The ApacheHttpd service entered the stopped state.

  CompManag-EventViewer-Windows Logs-Application:
  Level:  Error
  Event ID: 3299
  Source: Apache Service
  Detail:  The Apache service named  reported the following error:
   (20024)The given path is misformatted or contained invalid
  characters: Invalid config file path
  C:/Users/joeUser/AppData/Roaming/Apache/httpd.conf .

QUESTIONS:

1. There must a way to configure the Start shortcut and Apache Monitor system
task so that they are run without UAC prompting.  How is that done?

2. How do I eliminate the problem (a) the Apache service not starting at system
boot and (b) the Apache Monitor failing to start the service (after UAC 
allowance)?

3. How can the Application log error indicate an invalid config file path when
it is the same as the Start shortcut (see info below), which works?  Note that
pathname separator character is forward slash '/' (Unix-style) rather than
unescaped backward slash '\' (Windows style).

Note the background  other info below are in case there are any other questions
as to what is going on.

 BACKGROUND and OTHER INFO =

Operating environment is Vista Home Prem (SP2).

The Start shortcut (which works, but only after UAC prompting) is:

C:\Program Files\Apache Software Foundation\Apache2.2\bin\httpd.exe -w  -f
C:/Users/joeUser/AppData/Roaming/Apache/httpd.conf -n ApacheHttpd -k start

The Apache Monitor shortcut (which does not work, even after UAC prompting) 
is:

C:\Program Files\Apache Software Foundation\Apache2.2\bin\ApacheMonitor.exe

The Restart shortcut also works:

C:\Program Files\Apache Software Foundation\Apache2.2\bin\httpd.exe -w -n
ApacheHttpd -k restart -f C:/Users/joeUser/AppData/Roaming/Apache/httpd.conf


I have long ago installed Apache 2.2.11 (Win32) in its default configuration but
became very annoyed at not being able to modify httpd.conf because of Vista UAC,
which I keep in place because in principle, with Microsoft products, it is
probably BETTER to keep it enabled rather than disabled.  On my system it is
installed as a service.

So what I did was to move large parts of the directory tree for
updatable/modifiable files to where they really belong in the Windows 
filesystem:

C:\users\joeUser\AppData\Roaming\Apache

Under this directory is
* httpd.conffile
* cgi-bin   directory
* conf  directory
* error directory
* icons directory
* logs  directory
* manualdirectory
* modules   directory

Note that the server is started from its installation directory:

C:\Program Files\Apache Software Foundation\Apache2.2\bin\httpd.exe

The relevant modifications to httpd.conf as part of this movement are:

  ServerRoot C:/Users/joeUser/AppData/Roaming/Apache




-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[us...@httpd] mod_setenvinf OID() and x509v3 extension

2009-02-26 Thread Adam
Hello,

We're trying to extract an extension field value from an x509v3
certificate into an environment variable.
(Or get the value and pass it somehow to a Zope 3 application,
which is currently running as a separate process, using
RewriteRule ... [P,L] on win32)

I just realized that something like
SetEnvIf OID(2.16.840.1.113730.1.13) (.*) NetscapeComment=$1
will work only in httpd 2.4.

But when is 2.4 due?

OR

Is there any other chance to get that working?

Any help is welcome.

-- 
Best regards,
 Adam  mailto:agroszer...@gmail.com


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



[EMAIL PROTECTED] ProxyPass nocanon causes css files to have type text/plain

2008-10-23 Thread Adam Woodworth
Hi,

I am using mod_proxy as the reverse proxy in our application.  I need
mod_proxy to send the request through to the backend server w/o adding
or removing any encodings from the URL.  I also have
AllowEncodedSlashes On.  But what's happening is that some things
that are encoded in the URL submitted by the client are being decoded
by httpd, e.g. %26 is turned into a literal &.  This is causing the
backend application server to break because it expects the URL to be
encoded.
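
The setup is roughly along these lines - the backend host and path are
placeholders, not the real ones:

  AllowEncodedSlashes On
  ProxyPass        /app/ http://backend.example.com/app/
  ProxyPassReverse /app/ http://backend.example.com/app/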

For the record, it seems bad that %26 is turned into a & -- a &
normally designates a CGI argument split, but in this case it's
actually the value of a CGI argument so it needs to be encoded.

Changing the back-end application server is also not an option.  So I
need Apache to just send the URL through w/o changing it.

So I tried using the nocanon argument to ProxyPass, which looks like
it was going to work fine.  But nocanon causes httpd to incorrectly
tags certain .css and .js files as text/plain.  This is only when the
back-end server doesn't send a Content-Type header, and the file has a
CGI argument, such as:

/css/style.css?12345
/js/main.js?12345

When there is a ?12345 in the URL, and nocanon is ON (don't
canonicalise), then httpd breaks and for whatever reason isn't able
to figure out that the file is a .css or .js file and appends a
Content-Type: text/plain header instead of the appropriate type.

With nocanon OFF, then httpd adds the correct type headers for these files.
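
For reference, the configuration being described is along these lines; the
backend host and paths here are illustrative, not the actual setup:

  AllowEncodedSlashes On
  # pass the URL through to the backend without canonicalising/re-encoding it
  ProxyPass        /app http://backend.example.com:8080/app nocanon
  ProxyPassReverse /app http://backend.example.com:8080/app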

My guess is that the ?12345 CGI argument to the .css and .js files
does something weird to httpd when using nocanon.  I don't know what,
though.  I bet that this is a problem for any file that doesn't have a
content-type header from the server -- httpd will just default to
text/plain for everything.

Is using nocanon the correct Apache option to use here?  Is this
problem with the .css and .js types a bug in Apache?  Is there a
better way to do what I want -- send the URLs through w/o changing
them?  Are there any drawbacks to doing so?

Thanks,
Adam

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] pointing documentroot to a file instead of a directory

2008-09-22 Thread Adam Williams
Is there a way or a directive like DocumentRoot that points to a file
instead of a directory?  I'm running Apache httpd 2.2 on Fedora 9 Linux
and have the files /pubs/ma.html and /pubs/jmh.html, but when I try to
use them as the DocumentRoot I get the following warnings upon running
apachectl start:


Warning: DocumentRoot [/var/www/sites/mdah-live/pubs/ma.html] does not exist
Warning: DocumentRoot [/var/www/sites/mdah-live/pubs/jmh.html] does not 
exist
Warning: DocumentRoot [/var/www/sites/mdah-live/pubs/jmh.html] does not 
exist



what I have in my httpd.conf for them is:

<VirtualHost *>
   ServerName msarchaeology.com
   ServerAlias *.msarchaeology.com
   DocumentRoot /var/www/sites/mdah-live/pubs/ma.html
   UserDir disable
</VirtualHost>

<VirtualHost *>
   ServerName journalmshistory.com
   ServerAlias *.journalmshistory.com
   DocumentRoot /var/www/sites/mdah-live/pubs/jmh.html
   UserDir disable
</VirtualHost>

<VirtualHost *>
   ServerName journalmshistory.org
   ServerAlias *.journalmshistory.org
   DocumentRoot /var/www/sites/mdah-live/pubs/jmh.html
   UserDir disable
</VirtualHost>

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] pointing documentroot to a file instead of a directory

2008-09-22 Thread Adam Williams

Davide Bianchi wrote:

Adam Williams wrote:
  

Is there a way or a command like documentroot but points to a file
instead of a directory?



No. You can use the DirectoryIndex directives to specify which file to
consider the 'main' page (in place of index.html). But the DocRoot is a
directory.

Davide

  


so basically the only solution is to use a documentroot of /pubs/ma and 
/pubs/jmh and move ma.html to /pubs/ma/index.html and jmh.html to 
/pubs/jmh/index.html?
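
For completeness, the DirectoryIndex route mentioned above could look like
this, reusing the paths from the original post (an untested sketch, not a
quoted configuration):

<VirtualHost *>
   ServerName msarchaeology.com
   ServerAlias *.msarchaeology.com
   # DocumentRoot stays a directory; DirectoryIndex picks the file served at /
   DocumentRoot /var/www/sites/mdah-live/pubs
   DirectoryIndex ma.html
   UserDir disable
</VirtualHost>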


Re: [EMAIL PROTECTED] pointing documentroot to a file instead of a directory

2008-09-22 Thread Adam Williams

Justin and Krist, thanks for the help!

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] pointing documentroot to a file instead of a directory

2008-09-22 Thread Adam Williams

André Warnier wrote:

Try this instead :

snip



and save yourself one VirtualHost.
Now, that's assuming that apart from this homepage, your virtual 
servers share *everything* under your DocumentRoot (if there is 
anything else).

even better! thanks!

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [EMAIL PROTECTED] High availability concept question

2008-05-02 Thread Adam Martin

Guy and Jet, thank you for taking the time to respond.  I appreciate it.

-Original Message-
From: Guy Waugh [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 30, 2008 6:51 PM
To: users@httpd.apache.org
Subject: Re: [EMAIL PROTECTED] High availability concept question

Adam Martin wrote:
 If I load balance between two 2.2 Apache HTTP Servers on two separate
 physical servers, what type of device or software can I place in front
 of the web servers to allow users to access either web server via one IP
 Address?  I did a quick search in the documentation but wasn't sure what
 to look for.  Would it be as simple as pointing the users to a router
 configured with some type of forwarding?  How do I prevent this device
 from becoming a single point of failure?  I know these questions are not
 Apache specific, at least I don't think they are, but after being a
 member of this list for a couple of months and seeing the level of
 expertise on this board I figured someone could point me in the right
 direction.  I am not asking for a how-to response, just concepts on
 where to start looking or which documentation I should look at.  Thanks.

One (open source) solution would be to use an LVS load balancing server 
(http://www.linuxvirtualserver.org) to do the load balancing. If you 
want high availability (no single point of failure in the load balancer)

couple LVS with heartbeat (http://www.linux-ha.org), so that if your 
active load balancer fails, the inactive load balancer takes over.

There is a learning curve with both those technologies, so if you don't 
have much time (or even if you do), it might be worth a visit to 
http://www.ultramonkey.org, Simon Horman's project site, in which he 
supplies prebuilt packages to combine LVS and heartbeat together.

-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] log files missing randomly from access_log

2008-04-06 Thread Adam Goryachev
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I have a number of virtualhosts configured with a different access_log
for each one. I also have a monitoring server which sends a request to
each of the virtualhosts every 5 minutes.

Sometimes, these regular requests don't get logged in some of the
virtualhost access_log files. ie, each virtualhost has a different
number of lines from the monitoring host. In some hours, there are 0
logged requests, while in other hours there are exactly 12 as expected.

There are no error messages in the error_log apart from the usual
miscellaneous 404/etc errors for bad links/etc

The server is running Debian Etch stable with all patches/updates
applied. Linux 2.6.18, Apache 2.2.3-4+etch4

The system is in fact a virtualhost under XEN, and is using a NFS root
mounted from another machine running Debian Etch Stable with all
updates/etc.

I'm at quite a loss as to where to look for problems, since the only
evidence of a problem is the lack of entries in the log file

Any advice on things to try/places to look/etc, more than welcome.
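
For reference, one low-impact cross-check in this kind of situation is an
extra log that records the vhost name for every request, so per-vhost counts
can be compared against a single file.  The format name and path below are
illustrative only, not part of the original setup:

LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_common
CustomLog /var/log/apache2/all-vhosts.log vhost_common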

Thanks,
Adam
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFH+balGyoxogrTyiURAvh+AJ46YgghTDVSc/GdwonqNkhjHAHaMwCg1jDy
L+ytYGrzFOr5hkGHbWlGBKk=
=xgu7
-END PGP SIGNATURE-

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [EMAIL PROTECTED] Access is slow when accessed by IP address instead of 'localhost'

2008-03-19 Thread Adam Martin
Dinesh,

 

If accessing the Apache machine from other machines on the network is
slow you may have a missing reverse lookup entry in your DNS.  To test
you might try the following from one of the slow machines.

 

nslookup (hostname where Apache is installed)

nslookup 192.168.1.2

 

The results from the two commands should match.  Another test would be
to add the 192.168.1.2 host to the hosts file on one of the slow
machines and see if response time improves.  Good luck.

 



From: Dinesh Kumar [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 17, 2008 1:35 AM
To: users@httpd.apache.org
Subject: Re: [EMAIL PROTECTED] Access is slow when accessed by IP address
instead of 'localhost'

 

This is resolved - an entry for 192.168.1.2 in the hosts file did it for
access from the *same* machine.  Now, I'm trying to check how to resolve
the slow response time when accessed over the LAN.   

Thanks to Adam & others who responded.

Regards,
Dinesh.



On Tue, Mar 11, 2008 at 8:13 PM, David Cassidy [EMAIL PROTECTED]
wrote:

Sorry

and glad to hear it :)



On Tue, 2008-03-11 at 20:12 +0530, Nilesh Govindrajan wrote:
 On Tue, 11 Mar 2008 14:39:36 +
 David Cassidy [EMAIL PROTECTED] wrote:

  Nilesh
 
  Check if its doing a reverse lookup.
 
  you need DNS for that ...
 
  D
 
  On Tue, 2008-03-11 at 19:48 +0530, Nilesh Govindrajan wrote:
   On Tue, 11 Mar 2008 09:10:16 -0500
   Adam Martin [EMAIL PROTECTED] wrote:
  
Dinesh,
   
   
   
When I started reading I thought you may be experiencing a delay
with the host resolution but when you mentioned that you could
access another Tomcat application using the IP Address it
doesn't
sound like you are.
   
   
   
To be sure you may want to add the 192.168.1.2 host to your
hosts
file on the server and make sure the server is set to resolve
names locally before accessing DNS to see if it makes any
difference.  Other than that I got zip.  Good luck.
   
   
   

   
From: Dinesh Kumar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2008 6:29 AM
To: users@httpd.apache.org
Subject: [EMAIL PROTECTED] Access is slow when accessed by IP
address
instead of 'localhost'
   
   
   
Hi all,
   
I'm facing a peculiar problem.  My environment is as follows:
   
Apache 2.2.4
mod_jk 1.2.23
Tomcat 6.0.13
   
When I access the Java application running on Tomcat like this:
curl http://localhost/community/forums/list.page, I get a
response within 1 second.  However, when I access it like this:
http://192.168.1.2/community/forums/list.page, the response
takes
10 seconds.
   
To be clear, in both the cases I'm accessing the application
from
the *same* machine that runs httpd/tomcat.  I found out about
this issue when I tried to access it from another machine on the
same LAN, which is how it will be actually accessed when the
issue is resolved. I tried to find out what happened when I
accessed it from the same machine on which it is running, and
found that it behaves similarly as I mentioned above.
   
192.168.1.2 is on eth0, and eth0 is setup as a trusted device.
When I access another Tomcat 6 application running locally like
this: curl http://192.168.1.2/recruit/admin, the request is
served instantly.
   
The extra time occurs between the following lines of mod_jk.log:
   
[Mon Mar 10 17:03:54 2008] [7071:60304] [debug]
ajp_send_request::jk_ajp_common.c (1287): (forum) request body
to
send 0
- request body to resend 0
[Mon Mar 10 17:04:04 2008] [7071:60304] [debug]
ajp_connection_tcp_get_message::jk_ajp_common.c (1043): received
from ajp13 pos=0 len=133 max=8192
   
I thought it was something to do with mod_jk and installed
version
1.2.26, but the problem persists.
   
Any pointers would be welcome!
   
Thanks,
Dinesh.
   
  
   According to me, an IP address doesn't need the DNS to resolve
   something.
  
 
 
 
-
  The official User-To-User support forum of the Apache HTTP Server
  Project. See URL:http://httpd.apache.org/userslist.html for more
  info. To unsubscribe, e-mail: [EMAIL PROTECTED]
from the digest: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 

 I am not the guy who has the problem!!! BTW, I know that for Reverse
 lookups, a DNS is needed.



-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

 



RE: [EMAIL PROTECTED] Access is slow when accessed by IP address instead of 'localhost'

2008-03-11 Thread Adam Martin
Dinesh,

 

When I started reading I thought you may be experiencing a delay with
the host resolution but when you mentioned that you could access another
Tomcat application using the IP Address it doesn't sound like you are.

 

To be sure you may want to add the 192.168.1.2 host to your hosts file
on the server and make sure the server is set to resolve names locally
before accessing DNS to see if it makes any difference.  Other than that
I got zip.  Good luck.

 



From: Dinesh Kumar [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 11, 2008 6:29 AM
To: users@httpd.apache.org
Subject: [EMAIL PROTECTED] Access is slow when accessed by IP address
instead of 'localhost'

 

Hi all,

I'm facing a peculiar problem.  My environment is as follows:

Apache 2.2.4
mod_jk 1.2.23
Tomcat 6.0.13

When I access the Java application running on Tomcat like this:  curl
http://localhost/community/forums/list.page, I get a response within 1
second.  However, when I access it like this:
http://192.168.1.2/community/forums/list.page, the response takes 10
seconds.

To be clear, in both the cases I'm accessing the application from the
*same* machine that runs httpd/tomcat.  I found out about this issue
when I tried to access it from another machine on the same LAN, which is
how it will be actually accessed when the issue is resolved. I tried to
find out what happened when I accessed it from the same machine on which
it is running, and found that it behaves similarly as I mentioned above.

192.168.1.2 is on eth0, and eth0 is setup as a trusted device.  When I
access another Tomcat 6 application running locally like this: curl
http://192.168.1.2/recruit/admin, the request is served instantly.

The extra time occurs between the following lines of mod_jk.log:

[Mon Mar 10 17:03:54 2008] [7071:60304] [debug]
ajp_send_request::jk_ajp_common.c (1287): (forum) request body to send 0
- request body to resend 0
[Mon Mar 10 17:04:04 2008] [7071:60304] [debug]
ajp_connection_tcp_get_message::jk_ajp_common.c (1043): received from
ajp13 pos=0 len=133 max=8192

I thought it was something to do with mod_jk and installed version
1.2.26, but the problem persists.

Any pointers would be welcome!

Thanks,
Dinesh.



RE: [EMAIL PROTECTED] Problem building 2.2.8 on hpux itanum

2008-02-29 Thread Adam Martin
You need to install the expat software.

http://sourceforge.net/projects/expat/
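
If expat then lands somewhere the build does not search by default, the build
usually has to be pointed at it.  A rough sketch, assuming expat was installed
under /usr/local (paths and options are illustrative, not from this thread):

# tell the link step where the expat library lives
LDFLAGS="-L/usr/local/lib" ./configure --prefix=/opt/apache2
# HP-UX resolves dependent shared libraries via SHLIB_PATH at run time
export SHLIB_PATH=/usr/local/lib:$SHLIB_PATH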


-Original Message-
From: Rush [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 28, 2008 12:07 PM
To: users@httpd.apache.org
Subject: Re: [EMAIL PROTECTED] Problem building 2.2.8 on hpux itanum


I'm running into an identical problem... Did anyone find a solution?  If
possible, can you please email me the solution at [EMAIL PROTECTED]


thanks
Rush



Peter Olofson wrote:
 
 Any ideas I tried 2.2.6 with the same result/problem
 /peo
 I'm trying to build on hpux with hp C compiler, it fails like this:
 .
 .
 Making all in mappers
 Making all in support
 /home/peo/httpd-2.2.8/srclib/apr/libtool --silent --mode=link
cc 
 -g -Ae +Z -mt -L/usr/local/lib   -o htdigest  htdigest.lo   -lm 
 /home/peo/httpd-2.2.8/srclib/pcre/libpcre.la 
 /home/peo/httpd-2.2.8/srclib/apr-util/libaprutil-1.la -lexpat 
 /home/peo/httpd-2.2.8/srclib/apr/libapr-1.la -lrt -lm -lpthread -ldld
 ld: Can't find dependent library libexpat.so.1
 Fatal error.
 *** Error exit code 1
 
 Stop.
 *** Error exit code 1
 
 Stop.
 *** Error exit code 1
 
 Stop.
 
 
 -
 The official User-To-User support forum of the Apache HTTP Server
Project.
 See URL:http://httpd.apache.org/userslist.html for more info.
 To unsubscribe, e-mail: [EMAIL PROTECTED]
   from the digest: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

-- 
View this message in context:
http://www.nabble.com/Problem-building-2.2.8-on-hpux-itanum-tp15570478p1
5742275.html
Sent from the Apache HTTP Server - Users mailing list archive at
Nabble.com.


-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] RE: make failure

2008-02-29 Thread Adam Martin
I did not receive any responses to this but figured I would post what I
found in hopes it may help someone in the future.  Sorry for the length.

While doing some additional testing I found that if I cut and pasted the
line that fails during the make and added
-L/scratch/apache/httpd-2.2.8/srclib/apr/.libs to it, the compilation
for that program completed successfully.  But when I attempted to
re-execute the make it failed when trying to build the next program in
the support directory.

I could not find the logic in the makefiles that built the compile
string for the support directory to add the -L option to it.  I then
tried adding the --with-included-apr to the configure script but the
make failed when attempting to compile the support directory with the
same error.

I then tried compiling the software using the gcc 4.2.2 compiler but
experienced the same issue posted by Tamer on 2/27.

gcc: +b: No such file or directory
gcc: /scratch/apache/httpd-2.2.8/srclib/apr/.libs:/opt/apache2/lib: No
such file or directory
*** Error exit code 1

After following the solution Tamer provided and updating libtool in the
Apache distribution directory to version 1.5.26 I successfully compiled
the software using the gcc compiler.

On a side note I wanted to publicly thank Tamer for his assistance.  I
contacted him directly via email out of frustration and he went out of
his way to assist a total stranger.  Thank you.


-Original Message-
From: Adam Martin 
Sent: Tuesday, February 26, 2008 10:09 AM
To: users@httpd.apache.org
Subject: make failure

This is my first attempt to compile the Apache web server so I will try
and include all the relevant information and apologize in advance for
any rookie mistakes.  I have successfully compiled version 2.2.8 on the
IBM AIX, Sun Solaris, and HP-UX PA-RISC platforms but have an issue
trying to compile it on the HP-UX Itanium platform using the HP ANSI C
Compiler.  The configure script appears to complete successfully but
when I attempt to execute the make I receive the following error.

ld: Can't find dependent library libapr-1.so.2
Fatal error.

The libapr-1.so.2 file resides in the
/scratch/apache/httpd-2.2.8/srclib/apr/.libs and is linked to the
libapr-1.so.2.12 file.  Here are the options I passed to the configure
script.

./configure --prefix=/opt/apache2 --enable-mods-shared="all cache proxy
authn_alias mem_cache file_cache charset_lite dav_lock disk_cache"


Any suggestions for resolving this would be greatly appreciated.

Make output:
Making all in support
/scratch/apache/httpd-2.2.8/srclib/apr/libtool --silent
--mode=compile cc -g -Ae +Z -mt-DHPUX11 -D_REENTRANT -D_HPUX_SOURCE
-D_LARGEFILE64_SOURCE -I/scratch/apache/httpd-2.2.8/srclib/pcre -I.
-I/scratch/apache/httpd-2.2.8/os/unix
-I/scratch/apache/httpd-2.2.8/server/mpm/prefork
-I/scratch/apache/httpd-2.2.8/modules/http
-I/scratch/apache/httpd-2.2.8/modules/filters
-I/scratch/apache/httpd-2.2.8/modules/proxy
-I/scratch/apache/httpd-2.2.8/include
-I/scratch/apache/httpd-2.2.8/modules/generators
-I/scratch/apache/httpd-2.2.8/modules/mappers
-I/scratch/apache/httpd-2.2.8/modules/database
-I/scratch/apache/httpd-2.2.8/srclib/apr/include
-I/scratch/apache/httpd-2.2.8/srclib/apr-util/include
-I/usr/local/include -I/scratch/apache/httpd-2.2.8/server
-I/scratch/apache/httpd-2.2.8/modules/proxy/../generators
-I/scratch/apache/httpd-2.2.8/modules/ssl
-I/scratch/apache/httpd-2.2.8/modules/dav/main  -prefer-non-pic -static
-c htpasswd.c && touch htpasswd.lo
/scratch/apache/httpd-2.2.8/srclib/apr/libtool --silent
--mode=link cc -g -Ae +Z -mt -L/usr/local/lib   -o htpasswd
htpasswd.lo   -lm /scratch/apache/httpd-2.2.8/srclib/pcre/libpcre.la
/scratch/apache/httpd-2.2.8/srclib/apr-util/libaprutil-1.la -lexpat
/scratch/apache/httpd-2.2.8/srclib/apr/libapr-1.la -lrt -lm -lpthread
-ldld
ld: Can't find dependent library libapr-1.so.2
Fatal error.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [EMAIL PROTECTED] Compiling Apache 2.2.8 under HP-UX 11i v2

2008-02-27 Thread Adam Martin
Tamer,

I believe I am experiencing the same issue on HP-UX 11.31 and sent an
email to the group yesterday asking for help.  When I utilize the gcc
compiler I get the same error you do and when I utilize the HP ANSI C
compiler if receive a message:

ld: Can't find dependent library libapr-1.so.2

Hopefully someone will have a suggestion.  If I find anything I'll pass
it along.

-Original Message-
From: Tamer Embaby [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 27, 2008 5:15 AM
To: users@httpd.apache.org
Subject: [EMAIL PROTECTED] Compiling Apache 2.2.8 under HP-UX 11i v2

Hello all,

I'm trying to compile Apache 2.2.8 but failing with something that I
believe to be error on the generated libtool utility, or may be a
mistake from my side, but I'm not sure which.

Information about the system:
$ uname -a
HP-UX SPHINX B.11.23 U ia64 4017848671 unlimited-user license

$ gcc -v
Using built-in specs.
Target: ia64-hp-hpux11.23
Configured with: ../gcc/configure
Thread model: posix
gcc version 4.1.1

Apache 2.2.8 source from Apache.org

Configured with:
$ export CC=gcc
$ export CFLAGS=-mlp64
$ ./configure --prefix=/apps/apache2.2 --with-included-apr

Everything went fine so far (I can include the output of the command if
needed).

The error:
$ make
... Lots of output ...
/usr/bin/posix/sh /home/te/httpd-2.2.8/srclib/apr/libtool
--silent --mode=link  gcc -pthread  -mlp64 -DHAVE_CONFIG_H -DHPUX11
-D_REENTRANT -D_HPUX_SOURCE
-I/home/te/httpd-2.2.8/srclib/apr-util/include
-I/home/te/httpd-2.2.8/srclib/apr-util/include/private
-I/home/te/httpd-2.2.8/srclib/apr/include
-I/home/te/httpd-2.2.8/srclib/apr-util/xml/expat/lib  -version-info
2:12:2-o libaprutil-1.la -rpath /apps/apache2.2/lib
buckets/apr_brigade.lo buckets/apr_buckets.lo
buckets/apr_buckets_alloc.lo buckets/apr_buckets_eos.lo
buckets/apr_buckets_file.lo buckets/apr_buckets_flush.lo
buckets/apr_buckets_heap.lo buckets/apr_buckets_mmap.lo
buckets/apr_buckets_pipe.lo buckets/apr_buckets_pool.lo
buckets/apr_buckets_refcount.lo buckets/apr_buckets_simple.lo
buckets/apr_buckets_socket.lo crypto/apr_md4.lo crypto/apr_md5.lo
crypto/apr_sha1.lo crypto/getuuid.lo crypto/uuid.lo dbm/apr_dbm.lo
dbm/apr_dbm_berkeleydb.lo dbm/apr_dbm_gdbm.lo dbm/apr_dbm_ndbm.lo
dbm/apr_dbm_sdbm.lo dbm/sdbm/sdbm.lo dbm/sdbm/sdbm_hash.lo
dbm/sdbm/sdbm_lock.lo dbm/sdbm/sdbm_pair.lo encoding/apr_base64.lo
hooks/apr_hooks.lo ldap/apr_ldap_init.lo ldap/apr_ldap_option.lo
ldap/apr_ldap_url.lo misc/apr_date.lo misc/apr_queue.lo
misc/apr_reslist.lo misc/apr_rmm.lo misc/apu_version.lo uri/apr_uri.lo
xml/apr_xml.lo strmatch/apr_strmatch.lo xlate/xlate.lo dbd/apr_dbd.lo
dbd/apr_dbd_mysql.lo dbd/apr_dbd_pgsql.lo dbd/apr_dbd_sqlite2.lo
dbd/apr_dbd_sqlite3.lo   -lrt -lm  -lpthread
/home/te/httpd-2.2.8/srclib/apr-util/xml/expat/lib/libexpat.la
/home/te/httpd-2.2.8/srclib/apr/libapr-1.la -lrt -lm -lpthread
gcc: +b: No such file or directory
gcc:
/home/te/httpd-2.2.8/srclib/apr-util/xml/expat/lib/.libs:/home/te/httpd-
2.2.8/srclib/apr/.libs:/apps/apache2.2/lib: No such file or directory

I ran into libtool code and I think libtool script mistakenly passing
gcc the options: +b ${libdir} instead of -Wl,+b -Wl,${libdir}.

Any ideas about if I'm missing something in my configuration? Or
how to fix this issue?

Regards,
Tamer

-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] make failure

2008-02-26 Thread Adam Martin
This is my first attempt to compile the Apache web server so I will try
and include all the relevant information and apologize in advance for
any rookie mistakes.  I have successfully compiled version 2.2.8 on the
IBM AIX, Sun Solaris, and HP-UX PA-RISC platforms but have an issue
trying to compile it on the HP-UX Itanium platform using the HP ANSI C
Compiler.  The configure script appears to complete successfully but
when I attempt to execute the make I receive the following error.

ld: Can't find dependent library libapr-1.so.2
Fatal error.

The libapr-1.so.2 file resides in the
/scratch/apache/httpd-2.2.8/srclib/apr/.libs and is linked to the
libapr-1.so.2.12 file.  Here are the options I passed to the configure
script.

./configure --prefix=/opt/apache2 --enable-mods-shared="all cache proxy
authn_alias mem_cache file_cache charset_lite dav_lock disk_cache"


Any suggestions for resolving this would be greatly appreciated.

Make output:
Making all in support
/scratch/apache/httpd-2.2.8/srclib/apr/libtool --silent
--mode=compile cc -g -Ae +Z -mt-DHPUX11 -D_REENTRANT -D_HPUX_SOURCE
-D_LARGEFILE64_SOURCE -I/scratch/apache/httpd-2.2.8/srclib/pcre -I.
-I/scratch/apache/httpd-2.2.8/os/unix
-I/scratch/apache/httpd-2.2.8/server/mpm/prefork
-I/scratch/apache/httpd-2.2.8/modules/http
-I/scratch/apache/httpd-2.2.8/modules/filters
-I/scratch/apache/httpd-2.2.8/modules/proxy
-I/scratch/apache/httpd-2.2.8/include
-I/scratch/apache/httpd-2.2.8/modules/generators
-I/scratch/apache/httpd-2.2.8/modules/mappers
-I/scratch/apache/httpd-2.2.8/modules/database
-I/scratch/apache/httpd-2.2.8/srclib/apr/include
-I/scratch/apache/httpd-2.2.8/srclib/apr-util/include
-I/usr/local/include -I/scratch/apache/httpd-2.2.8/server
-I/scratch/apache/httpd-2.2.8/modules/proxy/../generators
-I/scratch/apache/httpd-2.2.8/modules/ssl
-I/scratch/apache/httpd-2.2.8/modules/dav/main  -prefer-non-pic -static
-c htpasswd.c && touch htpasswd.lo
/scratch/apache/httpd-2.2.8/srclib/apr/libtool --silent
--mode=link cc -g -Ae +Z -mt -L/usr/local/lib   -o htpasswd
htpasswd.lo   -lm /scratch/apache/httpd-2.2.8/srclib/pcre/libpcre.la
/scratch/apache/httpd-2.2.8/srclib/apr-util/libaprutil-1.la -lexpat
/scratch/apache/httpd-2.2.8/srclib/apr/libapr-1.la -lrt -lm -lpthread
-ldld
ld: Can't find dependent library libapr-1.so.2
Fatal error.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [EMAIL PROTECTED] Question on permissions

2008-02-26 Thread Adam Martin
Richard,

 

I'm new to the group and thought I'd take a crack at this one.  Is
www-data a user or group?  From the end of your email it sounds like a
user since you added it to the rgeddes group but I am a little confused
when you changed the group from rgeddes to www-data in the middle of
your example.  If it is a group then I don't believe you can add a group
to another group in the /etc/group file.

 

If it is a user, I did a quick test on one of our servers to verify, but I
don't believe the adding of a user to a group is dynamic.  In order for
the new group assignment to take effect I had to log out and log in as
the test user for the id command to reflect the change.  You didn't
mention it in your email, but did you try restarting your server after
adding the www-data user to the rgeddes group?
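
For what it's worth, a quick way to check both points from a shell would be
something along these lines (usernames from the thread, commands illustrative
for an Ubuntu system):

id www-data                        # should list rgeddes once the group change is in place
sudo /etc/init.d/apache2 restart   # running httpd processes only pick up group membership at startup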

 

My apologies if I am misunderstanding your question.

 



From: Richard Geddes [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 26, 2008 10:08 AM
To: users@httpd.apache.org
Subject: Re: [EMAIL PROTECTED] Question on permissions

 

Thanks for the response.  

I set up a directory under the main DocumentRoot called test 

drwxr-xr-x 2 rgeddes rgeddes  80 2008-02-18 15:18 test

and it appeared in a directory listing in the webpage of my main
DocumentRoot.

Changed permissions as follows:

drwxr-x--- 2 rgeddes rgeddes  80 2008-02-18 15:18 test

and test disappears from the webpage (this makes sense)

changed group as follows:

drwxr-x--- 2 rgeddes www-data  80 2008-02-18 15:18 test

and test appears in the webpage (this makes sense) as the servers are
running as www-data.

Now if I change the group back to:

drwxr-x--- 2 rgeddes rgeddes  80 2008-02-18 15:18 test

and I add www-data to the rgeddes group in /etc/group, the directory
fails to show up.  This does not make sense to me as www-data is part of
the rgeddes group and rgeddes has r-x permissions.

Is there a reason why www-data is not being granted rgeddes group
permissions?

Thanks
Richard


Joshua Slive wrote: 

On Mon, Feb 25, 2008 at 12:59 AM, Richard Geddes
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]  wrote:
  

Hello,
 
 I'm using apache 2.2 on Ubuntu 7.10 setting up name-based
virtual
 hosting .  The apache servers servicing requests run as
www-data.
 
 The idea is to allow users to make their own websites under
their home
 directories, and for the admin to symlink the users'
DocumentRoot
 directories below main DocumentRoot directory, and have the
apache
 configuration file with VirtualHost sections direct the http
requests
 appropriately.
 
 I got this to work correctly, but I had to set the 'other'
execution bit
 for directories that lead to the users symlinked directory.
This means
 that users will have execute permissions on each others'
directories,
 but I want to keep the users strictly separated from each
other I
 think the FAQ suggests this, if I'm not mistaken, but I think
there is a
 security issue here.


 
Having world-executable (searchable, really) home directories is not
an uncommon configuration. Yes, your users need to be a little more
careful about the permissions of stuff inside their home directories,
but that isn't such a big deal.
 
Alternatively, do the symlink in the other direction: put the
directories under DocumentRoot and include a symlink in the home
directories pointing to the correct location so your users know what
to edit.
 
Joshua.
 
-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html
http://httpd.apache.org/userslist.html  for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 
 
  


[EMAIL PROTECTED] Getting mod_perl to run after mod_cache with Apache 2.2.6

2007-11-29 Thread Adam Woodworth
Hi Folks,

I originally posted this message on the mod_perl mailing list, but I
did not receive any responses, so I'm posting here in the hopes that
someone can help me out.

My current LAMP stack is using Apache 2.0.54 and mod_perl 2.0.0-RC4,
and I'm doing something very much like what is mentioned in a previous
mod_perl mailing list post from 2 years ago, the thread of which can
be seen here:

http://www.gossamer-threads.com/lists/modperl/modperl/79672

In summary (taking from the above posting), here's a simple flow of
what we have happening:

1 - mod_cache [..got valid content in cache? If so, go to 4; if not, go to 2]
2 - mod_proxy [fetch content from origin web]
3 - mod_cache [content cacheable? If so, cache it locally]
4 - *MY FILTER*
5 - deflate

Using the same modifications mentioned in the above posting, we were
able to get mod_cache to run *before* mod_perl by changing mod_cache.c
so that the CACHE_SAVE and CACHE_OUT filters hook in at
AP_FTYPE_CONTENT_SET-2, and changing mod_perl.c so that mod_perl hooks
in at AP_FTYPE_CONTENT_SET-1. This solution is mentioned at the
bottom of the above posting.

But now I am upgrading our LAMP stack to Apache 2.2.6 and mod_perl
2.0.3, and I'm having trouble getting the above flow to work.

First off, mod_cache.c has changed so that instead of just CACHE_SAVE
and CACHE_OUT, there are also CACHE_SAVE_SUBREQ and CACHE_OUT_SUBREQ.
I'm not sure what the subrequests are really for, but here's what I've
been doing to try to get my desired flow to work:

The original Apache 2.2.6 filter order for mod_cache is:
CACHE_SAVE = AP_FTYPE_CONTENT_SET+1
CACHE_SAVE_SUBREQ = AP_FTYPE_CONTENT_SET-1
CACHE_OUT = AP_FTYPE_CONTENT_SET+1
CACHE_OUT_SUBREQ = AP_FTYPE_CONTENT_SET-1

The original mod_perl 2.0.3 filter order is:
MODPERL_REQUEST_OUTPUT = AP_FTYPE_RESOURCE

I've modified these to be:

Modified Apache 2.2.6 filter order for mod_cache:
CACHE_SAVE = AP_FTYPE_CONTENT_SET-2
CACHE_SAVE_SUBREQ = AP_FTYPE_CONTENT_SET-3
CACHE_OUT = AP_FTYPE_CONTENT_SET-2
CACHE_OUT_SUBREQ = AP_FTYPE_CONTENT_SET-3

Modified mod_perl 2.0.3 filter order:
MODPERL_REQUEST_OUTPUT = AP_FTYPE_CONTENT_SET-1

These modifications make it (theoretically from what I understand
about the filtering order numbers and my experience with the older
Apache and mod_perl) so that mod_perl runs after mod_cache, in the
same way that I was able to do this for Apache 2.0.54 and mod_perl
2.0.0-RC4. However it is not working as I expected -- mod_cache
appears to not be returning the body of the content before my mod_perl
filter is run. So the user hits the site, mod_cache sees the page is
cached, mod_perl is run but doesn't see any content, only the headers,
then somewhere down the line mod_cache must be serving up the cached
content after mod_perl.

So, I'm stumped. Can anyone point me in the right direction?

Thanks!

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Getting mod_perl to run after mod_cache with Apache 2.2.6

2007-11-29 Thread Adam Woodworth
Hi Nick,

Thanks for the response!

  1 - mod_cache [..got valid content in cache? If so, go to 4; if not,
  go to 2] 2 - mod_proxy [fetch content from origin web]
  3 - mod_cache [content cacheable? If so, cache it locally]
  4 - *MY FILTER*
  5 - deflate

 That makes sense if and only if you want to repeat your-filter and
 DEFLATE on every request rather than cache the ready-processed contents.

Yup, due to the nature of our product, our mod_perl filter doesn't do
the same thing for each request to the same page.

 directive.  The problem with that is that mod_cache does its own thing.

Could you elaborate on what you mean by mod_cache doing its own thing?

The problem I seem to be running into is that when my mod_perl filter
runs, mod_cache has served up the headers of the file, but the content
of the file is empty, so mod_perl has nothing to process.  Then
somewhere down the line mod_cache must serve up the rest of the file.
Do you know how this might be happening, or if it's just the way
mod_cache operates?  Is there perhaps some interaction with mod_proxy
where mod_cache only spits out the cached data at some particular
point in the filter chain...

I'm going to dig into mod_cache deeper now...

Thanks!
Adam

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Getting mod_perl to run after mod_cache with Apache 2.2.6

2007-11-29 Thread Adam Woodworth
After some more digging, I've figured out (mostly) what is going on.
It turns out that our mod_perl filter actually does get the body
content from mod_cache -- originally I thought that our filter wasn't
seeing the body at all.  But, the Content-Type header is missing.

In cache_storage.c, around line 120 (in the 2.2.6 source code) there
is this code:

apr_table_unset(h->resp_hdrs, "Content-Type");
/*
 * Also unset possible Content-Type headers in r->headers_out and
 * r->err_headers_out as they may be different to what we have received
 * from the cache.
 * Actually they are not needed as r->content_type set by
 * ap_set_content_type above will be used in the store_headers functions
 * of the storage providers as a fallback and the HTTP_HEADER filter
 * does overwrite the Content-Type header with r->content_type anyway.
 */
apr_table_unset(r->headers_out, "Content-Type");
apr_table_unset(r->err_headers_out, "Content-Type");

While I'm not clear why this is happening, mod_cache is clearing out
the Content-Type header from the response.  So our mod_perl filter
doesn't see the Content-Type header, and I think that Content-Type gets
set again by Apache somewhere after all the filters run.  Since
Content-Type is missing while mod_perl runs, and our mod_perl
filter triggers off of Content-Type matching certain types
(normally text/html), our filter doesn't do anything to the body
because the Content-Type is missing.

I was able to fix this by commenting out the
apr_table_unset(h->resp_hdrs, ... line above.

Does anyone know the effect that this could have on the rest of
mod_cache/etc (any ill effects, cases where this would break
something, etc.?), and why exactly the Content-Type is removed in the
first place by mod_cache?  I see that the comments above try to
explain it, but it doesn't quite make sense to me.

Cheers,
Adam


On Nov 29, 2007 2:27 PM, Adam Woodworth [EMAIL PROTECTED] wrote:
 Hi Nick,

 Thanks for the response!

   1 - mod_cache [..got valid content in cache? If so, go to 4; if not,
   go to 2] 2 - mod_proxy [fetch content from origin web]
   3 - mod_cache [content cacheable? If so, cache it locally]
   4 - *MY FILTER*
   5 - deflate
 
  That makes sense if and only if you want to repeat your-filter and
  DEFLATE on every request rather than cache the ready-processed contents.

 Yup, due to the nature of our product, our mod_perl filter doesn't do
 the same thing for each request to the same page.

  directive.  The problem with that is that mod_cache does its own thing.

 Could you ellaborate on what you mean by mod_cache doing its own thing?

 The problem I seem to be running into is that when my mod_perl filter
 runs, mod_cache has served up the headers of the file, but the content
 of the file is empty, so mod_perl has nothing to process.  Then
 somewhere down the line mod_cache must serve up the rest of the file.
 Do you know how this might be happening, or if it's just the way
 mod_cache operates?  Is there perhaps some interaction with mod_proxy
 where mod_cache only spits out the cached data at some particular
 point in the filter chain...

 I'm going to dig into mod_cache deeper now...

 Thanks!
 Adam


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [EMAIL PROTECTED] Apache1.3 forward to Jboss

2007-11-08 Thread Schaible, Adam
Excellent, I'll give that a shot! 

-Original Message-
From: Martin Strand [mailto:[EMAIL PROTECTED] 
Sent: Thursday, November 08, 2007 9:38 AM
To: users@httpd.apache.org
Subject: Re: [EMAIL PROTECTED] Apache1.3 forward to Jboss

A quick fix would be to add this:

ServerAlias www.rememberit.us
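
In context, the vhost quoted below would then read something like this (a
sketch based on the configuration in the original message):

<VirtualHost my.ip:80>
  ServerName  rememberit.us
  ServerAlias www.rememberit.us
  ProxyPass /repos !
  ProxyPass / http://www.rememberit.us:8080/
</VirtualHost>

A matching ProxyPassReverse line is usually added as well so that redirects
issued by the backend are rewritten for the client.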


On Thu, 08 Nov 2007 15:31:06 +0100, Schaible, Adam
[EMAIL PROTECTED]
wrote:

 Hello everyone,

   I have a webapp running at 8080 and want port 80 connections to
be 
 sent to 8080.  I have the following configuration

 NameVirtualHost my.ip:80

 <VirtualHost my.ip:80>
   ServerName rememberit.us
   ProxyPass /repos !
   ProxyPass / http://www.rememberit.us:8080/
 </VirtualHost>



 This is working fine if you type http://rememberit.us/ in your
browser,
 however if you type http://www.rememberit.us/ it's not forwarding.

 Any suggestions?
 This e-mail transmission contains information that is confidential and

 may be privileged.
 It is intended only for the addressee(s) named above. If you receive  
 this e-mail in error,
 please do not read, copy or disseminate it in any manner.  If you are

 not the intended
 recipient, any disclosure, copying, distribution or use of the
contents  
 of this information
 is prohibited. Please reply to the message immediately by informing
the  
 sender that the
 message was misdirected. After replying, please erase it from your  
 computer system. Your
 assistance in correcting this error is appreciated.






-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

This e-mail transmission contains information that is confidential and may be 
privileged.   It is intended only for the addressee(s) named above. If you 
receive this e-mail in error, please do not read, copy or disseminate it in any 
manner. If you are not the intended recipient, any disclosure, copying, 
distribution or use of the contents of this information is prohibited. Please 
reply to the message immediately by informing the sender that the message was 
misdirected. After replying, please erase it from your computer system. Your 
assistance in correcting this error is appreciated.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] Apache1.3 forward to Jboss

2007-11-08 Thread Schaible, Adam
Hello everyone,

I have a webapp running at 8080 and want port 80 connections to
be sent to 8080.  I have the following configuration

NameVirtualHost my.ip:80

<VirtualHost my.ip:80>
  ServerName rememberit.us
  ProxyPass /repos !
  ProxyPass / http://www.rememberit.us:8080/
</VirtualHost>



This is working fine if you type http://rememberit.us/ in your browser,
however if you type http://www.rememberit.us/ it's not forwarding.

Any suggestions?
This e-mail transmission contains information that is confidential and may be 
privileged.
It is intended only for the addressee(s) named above. If you receive this 
e-mail in error,
please do not read, copy or disseminate it in any manner.  If you are not the 
intended 
recipient, any disclosure, copying, distribution or use of the contents of this 
information
is prohibited. Please reply to the message immediately by informing the sender 
that the 
message was misdirected. After replying, please erase it from your computer 
system. Your 
assistance in correcting this error is appreciated.





[EMAIL PROTECTED] Ajax Recipe

2007-06-17 Thread Adam Bragg

Hi All,

I am trying to build as lean an Apache as possible. Please forgive
my extended explanation. I only include all the excess here so that I  
might find help that will remove the excess from my Apache build. I  
am a web developer/designer who is great with Javascript and decent  
with plsql and mysql. I can do a standard and modified install of  
apache and configure the server to work well out to the real world  
but I am not very experienced with Apache.


MY GOAL is to build as lean of an Apache as possible with  
functionality that will act as a pass through to the database from  
the client. After some research I found that Nick Kew at Webthing has  
created what I think I need but I am having difficulty implementing  
the modules and component that follow:
1. mod_form - http://apache.webthing.com/mod_form/ (to accept post  
ajax calls)
2. mod_upload - http://apache.webthing.com/mod_upload/ (to accept  
images posted)
3. mod_xmlns - http://apache.webthing.com/mod_xmlns/ (to support  
mod_sql)
4. mod_sql - http://apache.webthing.com/database/mod_sql.html (to  
insert sql requests and replies from mysql)
5. apr_dbd_mysql.c - http://apache.webthing.com/svn/apache/apr/ 
apr_dbd_mysql.c (to connect to mysql) ( I placed it in httpNN/srclib/ 
apr-util/dbd before the ./config)


After reading the Apache documentation, I decided to attempt to load  
all my needed modules during the configuration and install so that  
Apache would run as efficiently as possible. I have been successful  
at getting modules that come with the standard Apache 2.2.4  
distribution to load as shared modules and static modules but I have  
not been successful with the modules above. I placed the modules  
above in the 'modules/experimental' directory as well as in their own  
'modules/ajax' directory but I think I do not know the purpose of the  
distinction of the directories and I am likely using them  
incorrectly. During my time in the Apache documentation I also came  
across some discussion that made me wonder if these third party  
modules could be compiled with Apache during the install or if they  
can only be added via apxs as shared modules.


QUESTIONS:
1. Is there a better set of modules I can use to accomplish my goal?
2. Do you see any issues with the above modules accomplishing my goal?
3. What is the proper method for loading these modules into Apache?
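
Regarding question 3: third-party modules of this kind are normally compiled
against an installed httpd with apxs and loaded as shared (DSO) modules,
rather than dropped into modules/experimental inside the source tree.  A
minimal sketch (module name taken from the list above, paths illustrative):

/usr/local/apache2/bin/apxs -cia mod_form.c
# -c compiles, -i installs the .so, -a adds a LoadModule line to httpd.conf,
# roughly like (the module identifier depends on the module's source):
LoadModule form_module modules/mod_form.so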

Some excess excess information:
I am trying to build a web server diminishing the middle tier as much  
as possible. I code all day long writing ColdFusion, Javascript, and  
PL/SQL, but I find myself relying on ColdFusion as just a passthrough  
for my Ajax calls to the database. I am most often just returning  
datasets to my Javascript on the client which is making ColdFusion  
seem a bit bloated for my needs. So, I want to find an architecture  
that will allow me to make requests to the database tier and return  
preformatted results. Specifically, I want to return JSON strings  
from my PL/SQL, which I already do, but use Apache to write the  
strings in the reply to the request.


In the end, after I have everything running, I want to write this up  
as a recipe because I think this type of architecture will become  
much more popular with the trend of more of the application moving  
away from the middle tier to the front-end and to the back end.


Thank you for your time and attention. Any and all help is  
appreciated, even if it is comments on how I have made my request or  
that it is such a grand request in my very first post in this list.


Adam P. Bragg
[EMAIL PROTECTED]



[EMAIL PROTECTED] Apache / Tomcat questions

2007-05-10 Thread Adam Lipscombe

Hi

Firstly, apologies if this is a  no-brainer - I am newbie to Apache config.

I have a perplexing problem:

Our webapp uses Tomcat, and we have recently started to front it with Apache.
The app runs on fedora core 6 and uses the mod_jk 1.2.21 connector and https.


We have a problem with POST's on one of our JSP's - this particular JSP form is huge - it has over
100 fields.


The problem is that the receiving HttpServletRequest object does not have some of the request 
parameters when the request is made via http. If the request is made via https it works fine.
The params are simply not there at all. Sometimes the parameter values are non-printable characters, 
but not always. It doesn't seem to be consistent.



I am pretty sure that Tomcat itself is not an issue, 'cos standalone TC works 
fine.

My feeling is its something to do with either the Apache https / http setup or 
the mod_jk connector.


Questions:

Has anyone experienced this before?
Is it possible to get Apache and mod_jk to log the POST header data so I can see where it's going
wrong?  If so, how do I do that?
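
For reference (a sketch only, not from the original setup), two ways this kind
of data is often captured in Apache 2.2 with mod_jk:

# dump raw request/response data to the error log (very verbose, debug only)
LoadModule dumpio_module modules/mod_dumpio.so
LogLevel debug
DumpIOInput On
DumpIOOutput On

# and/or turn up mod_jk's own logging
JkLogFile  /var/log/httpd/mod_jk.log
JkLogLevel debug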



TIA -Adam






-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] Blocking Requests Based Off of HTTP Headers

2007-02-08 Thread Adam Serediuk
Hello all,

 

I am trying to block requests based off of HTTP Headers using a RewriteCond
to a RewriteMap.

 

I have the following:

 

RewriteMap    hosts-deny  txt:/path_to/hosts.deny

RewriteCond   ${hosts-deny:%{REMOTE_HOST}|NOT-FOUND} !=NOT-FOUND [OR]

RewriteCond   ${hosts-deny:%{REMOTE_ADDR}|NOT-FOUND} !=NOT-FOUND [OR]

RewriteCond   ${hosts-deny:%{true-client-ip}|NOT-FOUND} !=NOT-FOUND

RewriteRule   ^/.*  -  [F]

 

I want to block requests if the REMOTE_HOST, REMOTE_ADDR or true-client-ip
header matches the contents of the hosts.deny file. The hosts.deny file I
have created looks like:

 

192.168.1.2 -

192.168.1.3 -

 

If the REMOTE_HOST or REMOTE_ADDR matches the contents of the hosts.deny
file, the block works. However, if I set an HTTP header for true-client-ip it
does not match. I've tried a number of combinations and cannot get this to
work as expected. I know that the true-client-ip header exists, as I am using
it to log information into a log file successfully.

 

 

--
Adam

 



RE: [EMAIL PROTECTED] Blocking Requests Based Off of HTTP Headers

2007-02-08 Thread Adam Serediuk
Thank you for the quick response; that did the trick. I tend to be blind
sometimes when a problem thwarts me for awhile, just need another set of eyes
:)



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Joshua Slive
Sent: Thursday, February 08, 2007 3:22 PM
To: users@httpd.apache.org
Subject: Re: [EMAIL PROTECTED] Blocking Requests Based Off of HTTP Headers

On 2/8/07, Adam Serediuk [EMAIL PROTECTED] wrote:


 RewriteCond   ${hosts-deny:%{true-client-ip}|NOT-FOUND}
 !=NOT-FOUND

 RewriteRule   ^/.*  -  [F]

 If the REMOTE_HOST or REMOTE_ADDR matches the contents of the hosts.deny
 file, the block works. However, if I set an HTTP header for true-client-ip
 it does not match. I've tried a number of combinations and cannot get this
 to work as expected. I know that the true-client-ip header exists, as I am
 using it to log information into a log file successfully.

You would need to use %{HTTP:true-client-ip}, as noted in the RewriteCond
docs.
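
Applied to the map lookup from the original message, that condition would
become (sketch):

RewriteCond   ${hosts-deny:%{HTTP:true-client-ip}|NOT-FOUND} !=NOT-FOUND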

Joshua.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Namebased Virtual Hosts

2006-10-17 Thread Adam Clater




Gregor Schneider wrote on 10/17/2006, 12:05 PM:

 Well, what is going to happen if I do specify more than one SSL-site per
 IP/port-pair? Do I just get the message that the cert is invalid (I could
 pretty much live with that)?

Yes - The cert will only be valid for the vhost for which it was issued.

ac








-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] rewritelock

2006-05-01 Thread Adam Hewitt








Hi Guys,



I have written a RewriteMap program and I am experiencing the sync issues
that I have seen all over Google.  I discovered the RewriteLock option is
used to prevent this from occurring, however I can't seem to get Apache to
use one.  I am using Apache 2.0.54-4 under Debian and it doesn't matter
where I specify the lock file to be - it is never created (which I
understand it should be), and even if I manually create the lock file it
doesn't appear to be used at all.  Is there any kind of trick involved that
I am missing, or is this a bug in this version of Apache?  I have seen bugs
related to this in previous versions, but they said that they were resolved.
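
For reference (not from the original message): RewriteLock is only accepted at
global server-configuration scope, not inside a <VirtualHost> or .htaccess,
and the lock file is created by httpd itself at startup.  A minimal sketch
with illustrative paths and map name:

# must be at global scope, outside any <VirtualHost>
RewriteLock /var/lock/apache2/rewrite_lock
RewriteMap  mymap prg:/usr/local/bin/mymap.pl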



Kind regards,



Adam.








RE: [EMAIL PROTECTED] newbie vhost?

2006-02-02 Thread Adam Hewitt


 -Original Message-
 From: Chris Pat [mailto:[EMAIL PROTECTED] 
 Sent: Friday, 3 February 2006 11:08 AM
 To: users@httpd.apache.org
 Subject: [EMAIL PROTECTED] newbie vhost?
 
 Hello
 I have Apache2.055 working with a bare domain name e.g.
 http://domain.com, however it will not work with 
 http://www.domain.com.  My vendor is saying it something 
 wrong with my apache setup.  It is the minimal vhost with 
 just a server name, admin, document root.  Can someone tell 
 me who is wrong?  tia.
 

You need to also have a 

ServerAlias www.domain.com 

line in your vhost config as well if you want that to resolve.
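
Put together, a minimal vhost along those lines might look like this (domain
and paths are placeholders, and a matching NameVirtualHost line is assumed):

<VirtualHost *:80>
    ServerName  domain.com
    ServerAlias www.domain.com
    ServerAdmin webmaster@domain.com
    DocumentRoot /var/www/domain
</VirtualHost>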

Adam. 

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Apache email address

2006-01-17 Thread Adam Ossenford


- Original Message - 
From: Gerry Danen [EMAIL PROTECTED]

To: users@httpd.apache.org
Sent: Tuesday, January 17, 2006 3:35 PM
Subject: [EMAIL PROTECTED] Apache email address


When I use a php form to send email, an address called
[EMAIL PROTECTED] is used. Where would that be set?

--
Gerry
http://portal.danen.org/

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
***

Most likely that will be set within the php mail() function.  You can modify 
the headers as you see fit to change the from and reply to address.
I don't know all the details off the top of my head, but take a look
at http://us2.php.net/manual/en/ref.mail.php .
It lists the php mail() function's arguments that you can add/modify.  The one
you would be needing is the headers string.
It seems that the headers string will need to be formatted differently 
depending on the mail server you are sending through (Qmail, Exchange, 
etc...)


If you are using a mailserver that is different than the webserver your form 
is hosted on and the mailserver uses authentication you might take a look at 
phpmailer.
http://phpmailer.sourceforge.net/  is a email transfer php class that 
supports smtp authentication.


I hope that is what you are after.
something like
<?php
$headers  = "From: \"Your Server\" Your Email <[EMAIL PROTECTED]>\n";
$headers .= "MIME-Version: 1.0\n";
$headers .= "Content-Type: multipart/mixed; boundary=\"$boundary\"";
mail('[EMAIL PROTECTED]', 'Email with attachment from PHP', $message,
$headers);

?>

hope this points you in the right direction

Sincerely,
Adam Ossenford 



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Apache 1.3 vs Apache 2

2005-12-07 Thread Adam
If you're using the 4.4.1 version of php you might be having problems with the 
bug described here http://bugs.php.net/bug.php?id=35067


I had to step up to the 4.4.1 dev version where that bug is patched.  The 
problem was creating a situation where httpd process consumption would spike 
out to 100% on one of my servers.  I updated to the developer snapshot of 
php and everything went back to normal on that server.


I hope this helps.

Sincerely,
Adam Ossenford
- Original Message - 
From: Joshua Slive [EMAIL PROTECTED]

To: users@httpd.apache.org
Sent: Wednesday, December 07, 2005 1:00 PM
Subject: Re: [EMAIL PROTECTED] Apache 1.3 vs Apache 2


On 12/7/05, Michael Jeung [EMAIL PROTECTED] wrote:


Yesterday afternoon, we put this new server into production and it
seemed to be behaving relatively OK, with system-load of 4-5.  We
thought everything was going well and that this issue was wrapped
up.  Serves us right - today we got into the office and found that
loads on our server had spiked to 150.

Before we pulled the server from production, we grabbed a few
snapshots of what the system was doing.  After taking a look at
these, I haven't been able to make much head-way.  If someone with
more experience in this matter could take a look, I would greatly
appreciate it.


You need to use tools like mod_status, your access_log, strace, and a
debugger to see what those processes are actually doing.  Given the
amount of memory they are using, they clearly aren't serving static
files, so this is likely something php or database related.
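
(For anyone reading this in the archives - a minimal, hedged sketch of what
enabling mod_status might look like with the access-control syntax of that era;
the /server-status location and loopback-only restriction are conventional
choices, not taken from this thread:)

ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>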



Ultimately, my goal here is to get Apache to behave.  Any solution
that will allow Apache to run without killing the server is
acceptable -- including upgrading to Apache 2.  (Does Apache2
outperform Apache 1.3?)


Apache 2 will outperform 1.3 in some cases because of sendfile
support, among other things.  But since your site looks very
php/database dependent, it's unlikely changing the underlying web
server will have any measurable effect.

Joshua.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] Apache2 + userdir + suexec -- getting there :S

2005-12-07 Thread Adam Hewitt



Hi All,

I am now one step further in diagnosing/fixing this issue that I am currently
experiencing.

I now have a test setup which has removed all LDAP functionality and is
reading directly from the /etc/passwd file. Now, if someone browses to
http://members.domainname.com.au/~username/cgi-bin/filedel.cgi I have a
rewrite in place to change the ~username to [EMAIL PROTECTED], which matches
the user I have in the /etc/passwd file. This works for normal html pages,
but it still fails for cgi's.

I have ruled out suexec as being the issue altogether. I modified the suexec
code to output the arguments it was receiving from apache right at the
beginning of the code, and found that if I remove the rewrite and use the full
username in the URL then it works and suexec outputs to the log, but if I use
the rewrite it fails and I get no output to the log file. Therefore the issue
is lying somewhere inside the apache code.

So, can anyone suggest a way to keep the browser from seeing any rewrite so
that there is no change from ~username from the customer's point of view, or
suggest any reason why this would be failing? I feel that this is an apache
bug, but it may be by design... and my C skills are not high enough to be able
to go through the entire apache code base to try and modify my version to get
around this issue.

Cheers,

Adam.


RE: [EMAIL PROTECTED] Apache2 + userdir + suexec -- getting there :S

2005-12-07 Thread Adam Hewitt



Sorry to reply to my own post, but I have fixed it... 
adding a [PT] to the end of the rewrite rule fixed the 
problem.
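
(The exact rule isn't quoted in the thread, so the lines below are only a
hedged sketch of what a rewrite with the [PT] flag might look like; the
pattern and the appended realm are assumptions. The [PT] flag hands the
rewritten path back through URL-to-filename translation, so mod_userdir - and
in turn suexec - still get to process it.)

RewriteEngine On
RewriteRule ^/~([^/@]+)(/.*)?$ /~$1@domainname.com.au$2 [PT]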

thanks,

Adam.

  
  
  From: Adam Hewitt
  Sent: Thursday, 8 December 2005 11:47 AM
  To: users@httpd.apache.org
  Subject: [EMAIL PROTECTED] Apache2 + userdir + suexec -- getting there :S
  
  Hi All,

  I am now one step further in diagnosing/fixing this issue that I am
  currently experiencing.

  I now have a test setup which has removed all LDAP functionality and is
  reading directly from the /etc/passwd file. Now, if someone browses to
  http://members.domainname.com.au/~username/cgi-bin/filedel.cgi I have a
  rewrite in place to change the ~username to [EMAIL PROTECTED], which matches
  the user I have in the /etc/passwd file. This works for normal html pages,
  but it still fails for cgi's.

  I have ruled out suexec as being the issue altogether. I modified the suexec
  code to output the arguments it was receiving from apache right at the
  beginning of the code, and found that if I remove the rewrite and use the
  full username in the URL then it works and suexec outputs to the log, but if
  I use the rewrite it fails and I get no output to the log file. Therefore
  the issue is lying somewhere inside the apache code.

  So, can anyone suggest a way to keep the browser from seeing any rewrite so
  that there is no change from ~username from the customer's point of view, or
  suggest any reason why this would be failing? I feel that this is an apache
  bug, but it may be by design... and my C skills are not high enough to be
  able to go through the entire apache code base to try and modify my version
  to get around this issue.

  Cheers,

  Adam.


[EMAIL PROTECTED] suexec

2005-12-02 Thread Adam Hewitt








Hi All,



I have made progress with my previous suexec + mod_ldap_user + multiple vhosts
issue; however, I am now getting a strange problem where suexec is being called
for a cgi in one vhost but not in another:



[pid 23260] read(43, [EMAIL PROTECTED]
Available\0/u/a/[EMAIL PROTECTED]/bin/default-shell\0, 119) = 119

[pid 23260] close(43) = 0

[pid 23260] stat64(/u, {st_mode=S_IFDIR|0755,
st_size=4096, ...}) = 0

[pid 23260] stat64(/u/a/[EMAIL PROTECTED]/cgi-bin/filedel.cgi,
{st_mode=S_IFREG|0755, st_size=522, ...}) = 0

[pid 23260] open(/u/a/.htaccess, O_RDONLY) = -1
ENOENT (No such file or directory)

[pid 23260] open(/u/a/[EMAIL PROTECTED]/.htaccess,
O_RDONLY) = -1 ENOENT (No such file or directory)

[pid 23260] open(/u/a/[EMAIL PROTECTED]/cgi-bin/.htaccess,
O_RDONLY) = -1 ENOENT (No such file or directory)

[pid 23260] open(/u/a/[EMAIL PROTECTED]/cgi-bin/filedel.cgi/.htaccess,
O_RDONLY) = -1 ENOTDIR (Not a directory)

[pid 23260] getpid() = 23260

[pid 23260] pipe([43, 44]) = 0

[pid 23260] fcntl64(44, F_GETFL) = 0x1 (flags
O_WRONLY)

[pid 23260] fcntl64(44, F_SETFL, O_WRONLY|O_NONBLOCK) = 0

[pid 23260] pipe([45, 46]) = 0

[pid 23260] fcntl64(45, F_GETFL) = 0 (flags O_RDONLY)

[pid 23260] fcntl64(45, F_SETFL, O_RDONLY|O_NONBLOCK) = 0

[pid 23260] pipe([47, 48]) = 0

[pid 23260] fcntl64(47, F_GETFL) = 0 (flags O_RDONLY)

[pid 23260] fcntl64(47, F_SETFL, O_RDONLY|O_NONBLOCK) = 0

[pid 23260] fork(Process 23294 attached

) = 23294

[pid 23260] close(43 unfinished ...

[pid 23294] --- SIGSTOP (Stopped (signal)) @ 0 (0) ---

[pid 23260] ... close resumed ) = 0

[pid 23260] close(46) = 0

[pid 23260] close(48) = 0

[pid 23294] getpid( unfinished ...

[pid 23260] close(44) = 0

[pid 23260] poll( unfinished ...

[pid 23294] ... getpid resumed ) = 23294

[pid 23294] getrlimit(RLIMIT_STACK, {rlim_cur=RLIM_INFINITY,
rlim_max=RLIM_INFINITY}) = 0

[pid 23294] close(3) = 0

[pid 23294] close(41) = 0

[pid 23294] close(40) = 0

[pid 23294] close(39) = 0

[pid 23294] close(38) = 0

[pid 23294] close(37) = 0

[pid 23294] close(36) = 0

[pid 23294] close(35) = 0

[pid 23294] close(34) = 0

[pid 23294] close(33) = 0

[pid 23294] close(32) = 0

[pid 23294] close(31) = 0

[pid 23294] close(30) = 0

[pid 23294] close(29) = 0

[pid 23294] close(28) = 0

[pid 23294] close(27) = 0

[pid 23294] close(25) = 0

[pid 23294] close(23) = 0

[pid 23294] close(26) = 0

[pid 23294] close(22) = 0

[pid 23294] close(21) = 0

[pid 23294] close(20) = 0

[pid 23294] close(19) = 0

[pid 23294] close(18) = 0

[pid 23294] close(17) = 0

[pid 23294] close(16) = 0

[pid 23294] close(15) = 0

[pid 23294] close(10) = 0

[pid 23294] close(9) = 0

[pid 23294] close(8) = 0

[pid 23294] close(6) = 0

[pid 23294] close(5) = 0

[pid 23294] close(4) = 0

[pid 23294] close(42) = 0

[pid 23294] close(44) = 0

[pid 23294] dup2(43, 0) = 0

[pid 23294] close(43) = 0

[pid 23294] close(45) = 0

[pid 23294] dup2(46, 1) = 1

[pid 23294] close(46) = 0

[pid 23294] close(47) = 0

[pid 23294] dup2(48, 2) = 2

[pid 23294] close(48) = 0

[pid 23294] rt_sigaction(SIGCHLD, {SIG_DFL}, {SIG_DFL}, 8) =
0

[pid 23294] chdir(/u/a/[EMAIL PROTECTED]/cgi-bin/)
= 0

[pid 23294] getpid() = 23294

[pid 23294] getrlimit(RLIMIT_STACK, {rlim_cur=RLIM_INFINITY,
rlim_max=RLIM_INFINITY}) = 0

[pid 23294] rt_sigaction(SIGRTMIN, {SIG_DFL}, NULL, 8) = 0

[pid 23294] rt_sigaction(SIGRT_1, {SIG_DFL}, NULL, 8) = 0

[pid 23294] rt_sigaction(SIGRT_2, {SIG_DFL}, NULL, 8) = 0

[pid 23294] execve(/u/a/[EMAIL PROTECTED]/cgi-bin/filedel.cgi,
[/u/a/[EMAIL PROTECTED]/cgi-bin/filedel.cgi], [/* 20 vars
*/]) = 0





So Apache is now doing a correct lookup for the user (with
the realm appended) and it is getting a correct reply for the home directory,
but then it just does a normal exec instead of using suexec.



If I try this with another realm it works (granted, the other
realm doesn't get the @domainname.com appended; however, the fact that it
is returning the correct user details says to me that this part of it is ok). I
have diffed the two vhost config files, but they look pretty much
identical.



Does anyone have any ideas why suexec would not be getting
executed?



Adam.








RE: [EMAIL PROTECTED] suexec + mod_ldap_user + multiple realms

2005-12-01 Thread Adam Hewitt
, {SIG_DFL}, NULL, 8) = 0
[pid  2556] rt_sigaction(SIGRT_2, {SIG_DFL}, NULL, 8) = 0
[pid  2556] execve(/usr/lib/apache2/suexec2,
[/usr/lib/apache2/suexec2, ~869640, 105, filedel.cgi], [/* 20
vars */]) = 0


As you can see here, Apache finds the correct home directory after
looking it up from LDAP (/u/0/3/1572830/) and allows the 'filedel.cgi'
script to be run. It then tries to look up the details from nscd, but it
only passes sword instead of [EMAIL PROTECTED]; because we have a second
user whose uid is 'sword', that uid and gid are returned and then passed
on to suexec (~869640, 105)... so for some reason apache2 isn't passing
the realm on to libnss-ldap??

Can anyone please confirm that I am not doing something stupid; if
there really is an issue then I will lodge a bug report.

Adam.

-Original Message-
From: Adam Hewitt 
Sent: Wednesday, 30 November 2005 2:03 PM
To: users@httpd.apache.org
Subject: [EMAIL PROTECTED] suexec + mod_ldap_user + multiple realms

Hi All,

I have a setup where I have roughly 14 different realms (acquired ISPs)
and users in each realm are listed in LDAP using [EMAIL PROTECTED] -
straightforward.

I have configured apache2 with mod_ldap_userdir such that if
[EMAIL PROTECTED] accesses http://homepages.domain1.com/~bill, the
mod_ldap_userdir config appends the realm to the username when it is
being looked up ([EMAIL PROTECTED])... all of this works perfectly and is
fairly straightforward.

The problem I am having is that apache2 is passing suexec the username,
and suexec is passing the username on to libnss-ldap to be looked up,
*but* this is failing as it doesn't include the realm with the username.
Is there any way to get around this? Somehow append the realm onto the
username when it's passed to suexec? Or how are other people getting
around this issue?

Cheers,

Adam.

-
The official User-To-User support forum of the Apache HTTP Server
Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] suexec + mod_ldap_user + multiple realms

2005-11-29 Thread Adam Hewitt
Hi All,

I have a setup where I have roughly 14 different realms (acquired ISPs)
and users in each realm are listed in LDAP using [EMAIL PROTECTED] -
straightforward.

I have configured apache2 with mod_ldap_userdir such that if
[EMAIL PROTECTED] accesses http://homepages.domain1.com/~bill, the
mod_ldap_userdir config appends the realm to the username when it is
being looked up ([EMAIL PROTECTED])... all of this works perfectly and is
fairly straightforward.

The problem I am having is that apache2 is passing suexec the username,
and suexec is passing the username on to libnss-ldap to be looked up,
*but* this is failing as it doesn't include the realm with the username.
Is there any way to get around this? Somehow append the realm onto the
username when it's passed to suexec? Or how are other people getting
around this issue?

Cheers,

Adam.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] Strange problem

2005-11-23 Thread Adam Olsen
We are using perl and apache2 for our software.  With our software, our 
users have the ability to upload up to 4 optional images via an html form 
(sometimes the images can be quite big, as they are from digital cameras). 
 While an image is uploading, apache2 queues all other requests until 
the upload completes.  I have confirmed this by watching the process 
list and attempting to upload a large image.  While it's uploading, I 
can watch the processes stack up.  They do this until I hit stop on my 
browser, and then they all get sent.  This is really causing a huge 
problem, because it's a pretty high volume site, and if someone on 
dialup is uploading a 1MB image, the server is virtually unusable.


Note that during an image upload, the server's CPU load is usually 
floating around 90% idle, and there's plenty of memory left.


Any ideas on this one would be appreciated.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
 from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [EMAIL PROTECTED] VirtualHost confusion

2005-11-02 Thread Adam Roberts
Thanks, but that doesn't seem to have done anything.  I am still having the
same problem.

Is there something external that might be causing this?  I have my routing
set up with this host as a DMZ, and my firewall is allowing port 80.



Thanks!
Adam R. 






The only difference between this place and the Titanic was the Titanic had
a band
-Original Message-
From: Boyle Owen [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 02, 2005 1:14 AM
To: users@httpd.apache.org
Subject: RE: [EMAIL PROTECTED] VirtualHost confusion

 -Original Message-
 From: Adam Roberts [mailto:[EMAIL PROTECTED]
 Apache2 running on RHEL 3 ES Update 6.
 
 VirtualHost snippet from /etc/httpd/conf/httpd.conf
 --
 
 NameVirtualHost ip-address:80
 
 <VirtualHost ip-address>
 ServerName www.domain2.net
 ServerAdmin [EMAIL PROTECTED]
 DocumentRoot /var/www/html/domain2
 ServerAlias www.domain2.net domain2.net
 </VirtualHost>
 
 <VirtualHost ip-address>
 ServerName www.domain1.com
 ServerAdmin [EMAIL PROTECTED]
 DocumentRoot /var/www/html
 ServerAlias www.domain1.com domain1.com
 </VirtualHost>
 
 When I try to access either domain from my web browser (Firefox 
 1.0.7), I can only see the pages for domain1.com.  If I go to 
 domain2.net, I am served the pages for domain1.com.

I was always under the impression (I might be wrong) that the correlation
between the NameVirtualHost argument and the VirtualHost argument is done
using string matching, so the two have to match for it to work. You've added
the port number to the NVH argument, so either drop it there or also add it
to the VH arguments.
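
(A hedged sketch of the matched form, keeping the placeholders from the
original post rather than a real address:)

NameVirtualHost ip-address:80

<VirtualHost ip-address:80>
    ServerName www.domain2.net
    DocumentRoot /var/www/html/domain2
</VirtualHost>

<VirtualHost ip-address:80>
    ServerName www.domain1.com
    DocumentRoot /var/www/html
</VirtualHost>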

Rgds,
Owen Boyle
Disclaimer: Any disclaimer attached to this message may be ignored.  


 Thus far, I am stumped.  I
 managed to create
 configuration files that served no pages, errors, one but not the 
 other, but never successfully both.  This current configuration is the 
 best I have come up with so far.
 
 I own both of these domains and the DNS for these domains are pointing 
 to the correct IP address, as indicated by getting something when I 
 enter either address.
 
 Any help would be very much appreciated.  
 
 Thanks!!
 Adam R.  
 
 
 
 -
 The official User-To-User support forum of the Apache HTTP Server 
 Project.
 See URL:http://httpd.apache.org/userslist.html for more info.
 To unsubscribe, e-mail: [EMAIL PROTECTED]
   from the digest: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 
 

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[EMAIL PROTECTED] VirtualHost confusion

2005-11-01 Thread Adam Roberts
It has been a long time since I have needed to use VirtualHosts, and now my
ignorance has caught up with me.

I have two domain names (domain1.com and domain2.net) that I am trying to
host on the same machine with VirtualHost; however, I'm not having much
luck.

Background:
Apache2 running on RHEL 3 ES Update 6.

VirtualHost snippet from /etc/httpd/conf/httpd.conf
--

NameVirtualHost ip-address:80

<VirtualHost ip-address>
ServerName www.domain2.net
ServerAdmin [EMAIL PROTECTED]
DocumentRoot /var/www/html/domain2
ServerAlias www.domain2.net domain2.net
</VirtualHost>

<VirtualHost ip-address>
ServerName www.domain1.com
ServerAdmin [EMAIL PROTECTED]
DocumentRoot /var/www/html
ServerAlias www.domain1.com domain1.com
</VirtualHost>

--

Of the different variations of this section I have read in books and other
posts, this appears to be the most common setup.

Here's my problem:

When I try to access either domain from my web browser (Firefox 1.0.7), I
can only see the pages for domain1.com.  If I go to domain2.net, I am served
the pages for domain1.com.  Thus far, I am stumped.  I managed to create
configuration files that served no pages, errors, one but not the other, but
never successfully both.  This current configuration is the best I have come
up with so far.

I own both of these domains, and the DNS for both domains points to
the correct IP address, as indicated by the fact that I get something when I
enter either address.

Any help would be very much appreciated.  

Thanks!!
Adam R.  



-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[users@httpd] Re: Makefiles in subprojects have non-overridable install paths

2005-06-29 Thread Adam Welch
Actually, all the config.layout files' RedHat stanzas read like this...

prefix:/usr
exec_prefix:   ${prefix}
bindir:${prefix}/bin
sbindir:   ${prefix}/sbin
libdir:${prefix}/lib

..., but in the resulting Makefiles, ${prefix} has been replaced by
/usr.  To my (naive) eye, the config.layout files appear correctly
parameterized.

Regards,
Adam

On Wed, 2005-06-29 at 03:22, Joost de Heer wrote:
  1.  ./configure --enable-layout=RedHat
  2.  vi srclib/apr-util/Makefile, where I see that libdir is set to
  /usr/lib
 
 Because in config.layout, in the RedHat layout, the libdir is set to
 /usr/lib. If you want this changed, edit the layout file.
 
 Joost
 


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[users@httpd] Makefiles in subprojects have non-overridable install paths

2005-06-28 Thread Adam Welch
Hello.  I have the httpd-2.0.54 source and am building on Mac OS X Tiger.
Here is what I have done...

1.  ./configure --enable-layout=RedHat
2.  vi srclib/apr-util/Makefile, where I see that libdir is set to
/usr/lib

Now, I'd like to run make install prefix=/tmp/foo (building an RPM), but
this won't work, since libdir is not $prefix/lib or some such.  Now, if
I run make distclean at the top, then cd into apr-util and do a configure
(after configuring apr, yes, and using --with-apr=), then the resulting
Makefile has libdir set to ${prefix}/lib, which is what I want.

So, can the gurus tell me if I am missing something or if this is a bug
(in my understanding?)?  I am not familiar enough with the tools and
sources to know.

Regards,
Adam


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [users@httpd] mod_proxy / mod_rewrite: Passing remote IP address to internal server

2005-06-15 Thread David Adam
  Have you tried looking at the X-Forwarded-For header? (Appears in CGI
  scripts as HTTP_X_FORWARDED_FOR) This is passed automatically by
  mod_proxy, as far as I know.

 yes I know this but all the scripts of my customers are looking for
 REMOTE_ADDR etc. So how can I forward this to my internal servers behind the
 proxy server?

Werner,

(You're probably not going to like this answer - all I can say is that I'm
sorry, I'm not an Apache developer and I'm not a mod_rewrite guru. Nor do
I manage more than about fifty users with CGI web pages, and our attitude
to them is very much 'if it breaks, fix it yourself'.)

From what I understand of CGI, it is difficult to do this. The
REMOTE_ADDR variable is set on the receiving (internal) server - you'll
have to override it from there.

You might want to examine:
- mod_rewrite on the internal servers - I don't know enough about
mod_rewrite to be able to tell you if (and how) it can change local CGI
variables (see the sketch after this list).

- writing some evil wrapper that rearranges - for example, replacing your
PERL/Python/whatever executables with a shell script that tests for the
presence of X_FORWARDED_FOR and replaces REMOTE_ADDR with its contents. Be
careful with this - X_FORWARDED_FOR does -not- have the same semantics as
REMOTE_ADDR (see what happens when you pass through two proxy servers, for
example).

- sed s/REMOTE_ADDR/HTTP_X_FORWARDED_FOR/g and warning your
customers! There are bucketloads of examples around for detecting and fixing
proxy headers.
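
(On the first option, a hedged sketch only: as far as I know mod_rewrite
cannot overwrite REMOTE_ADDR itself, but it can expose the forwarded address
as an extra environment variable - the variable name PROXIED_CLIENT_IP below
is just an illustration, not an existing convention:)

RewriteEngine On
# Copy the X-Forwarded-For request header into an extra CGI environment variable
RewriteCond %{HTTP:X-Forwarded-For} ^(.+)$
RewriteRule .* - [E=PROXIED_CLIENT_IP:%1]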

Have a look at the nearest thing to a CGI standard at
http://cgi-spec.golux.com/draft-coar-cgi-v11-03-clean.html for more
information on CGI variables.

The second option above is what someone like me would do :-) (we have no
qualms about 'evil hacks' here - our version of suexec has to be patched
every time we upgrade Apache, to give just one example).

Best of luck,

David Adam
[EMAIL PROTECTED]


-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


