Re: [us...@httpd] How can I stick with www only ?

2009-01-22 Thread solprovider
.htaccess files are used to control directories.  httpd needs to
connect a server name with the directory.  Having directories specify
server names would be complicated.  The prime use of .htaccess files
is delegating control of a directory to someone without access to the
main configuration.  Configuring server names requires access to the
main configuration.  (Or a file added to the main config with Include,
which has the same responsibility as being able to edit the main
config.)

Three scenarios:
1. httpd ignores server names.
http://example.com
http://www.example.com
http://www.example.net
If these URLs accessed the IP Address and port of httpd, they would
receive the same response.

2. Name-based Virtual Hosts
The ServerName and ServerAlias directives specify which <VirtualHost>
section applies to which server names.
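For scenario 2, a minimal sketch (hostnames and DocumentRoot hypothetical):

```apache
# Requests for either hostname are handled by this virtual host
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```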

3. Rewrite based on server name.
A complicated method to implement name-based virtual hosts.

httpd must either use one configuration for all requests or use
name-based virtual hosts.  Any commands specifying different
configurations for specific server names must be in the main
configuration.
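For the original question, the usual canonical-host redirect looks like this (hostname hypothetical); per the argument above it belongs in the main configuration, though mod_rewrite can also test %{HTTP_HOST} in a .htaccess file, where the pattern would lack the leading slash:

```apache
RewriteEngine On
# Redirect any request whose Host header is not www.example.com
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^/(.*)$ http://www.example.com/$1 [R=301,L]
```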

Did I miss anything?

solprovider

On 1/22/09, J. Bakshi joyd...@infoservices.in wrote:
  I have a site which can be accessed by both http://example.com as well
  as http://www.example.com

  I would like to arrange, via .htaccess, that both links always
  stick to http://www.example.com

  Could anyone suggest how I can do this with .htaccess ?
  Thanks
  Thanks

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
  from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org



Re: [EMAIL PROTECTED] a rather tricky mod_rewrite problem?

2008-11-15 Thread solprovider
RewriteCond %{REMOTE_USER} ^([a-z0-9_]+)$
RewriteRule ^/mysvn/(.*) /svn/%1/$1 [L]

The first line places a valid username into %1.
The second rewrites /mysvn/something to /svn/bob/something when
the REMOTE_USER is bob.
Invalid usernames will not pass the condition so /mysvn should
display an error or login page when the Rewrite is bypassed.
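In context, a fuller sketch (realm and password-file path hypothetical) that forces authentication on /mysvn so REMOTE_USER is populated before the rewrite runs:

```apache
<Location /mysvn>
    AuthType Basic
    AuthName "Subversion repositories"
    AuthUserFile /etc/apache2/svn.htpasswd
    Require valid-user
</Location>

RewriteEngine On
# Capture the authenticated username into %1...
RewriteCond %{REMOTE_USER} ^([a-z0-9_]+)$
# ...and insert it into the repository path
RewriteRule ^/mysvn/(.*) /svn/%1/$1 [L]
```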

solprovider

On 11/14/08, morgan gangwere [EMAIL PROTECTED] wrote:
  I've got a tricky question... How would one go about having it so that
  mod_auth and mod_rewrite talk to one another like this:
  I have the file structure /var/svn/
  It is to be used for WebDAV svn access -- it's attached on the server to /svn/
  I want it so that if a user (let's say bob) authenticates, you get
  /var/svn/users/bob/ not /var/svn/ for /svn/
  and if steve logs in,
  /var/svn/users/steve/ for /svn/

  Any way to do this? Or am I going to have to do the old trick of using
  /svn/(username) and writing a PHP script to handle them?

  Morgan gangwere




Re: [EMAIL PROTECTED] HTTPS connexion on the port 80

2008-11-10 Thread solprovider
1. SSL allows one certificate per port/IP Address.
2. Only one server (protocol) can run on each port/IP Address.  You
cannot use HTTP and HTTPS on the same port/IP Address.  HTTP and HTTPS
are distinct protocols.  Imagine running SMTP and HTTP servers on the
same port.  That one server software installation (e.g. Apache httpd)
can handle more than one protocol (FTP, HTTP, HTTPS, LDAP, etc.) does
not allow breaking this rule.
3. You cannot use multiple SSL certificates for virtual hosts on one
port/IP Address.  Assigning an SSL certificate to a virtual server must
specify a distinct port -- <VirtualHost *> will not work.

See:
http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#aboutconfig
http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#vhosts

The summary is that HTTPS encryption must be negotiated before the
server reads the request.  If the port allows HTTP (unencrypted)
sessions, the SSL negotiation will not happen.  If an HTTPS request is
attempted, the browser tries to negotiate encryption and the
connection fails.

Yes, SSL for HTTPS was designed poorly.  TLS/SNI tries to fix these
issues, but requires ubiquitous browser and server support -- unlikely
for several years.

Today, the only solution is to use a distinct port or IP Address for
each SSL certificate/HTTPS server.  I recommend using separate IP
Addresses to avoid port numbers in URLs.
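A minimal sketch of that recommendation (IP Addresses, hostnames, and certificate paths hypothetical), one certificate per IP Address on the default HTTPS port:

```apache
Listen 192.0.2.1:443
Listen 192.0.2.2:443

<VirtualHost 192.0.2.1:443>
    ServerName site1.example.org
    SSLEngine on
    SSLCertificateFile /ssl/site1.cert
    SSLCertificateKeyFile /ssl/site1.key
</VirtualHost>

<VirtualHost 192.0.2.2:443>
    ServerName site2.example.org
    SSLEngine on
    SSLCertificateFile /ssl/site2.cert
    SSLCertificateKeyFile /ssl/site2.key
</VirtualHost>
```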

Sorry,
solprovider

On 11/9/08, David BERCOT [EMAIL PROTECTED] wrote:
  I'm new on this list and this is my first message. So, a little
  presentation: I'm French, I work on Debian, and I have a problem ;-)

  On my server, I can only use port 80, for http and https access. So,
  here is my /etc/apache2/sites-available/default file:
  ServerAdmin [EMAIL PROTECTED]
  <Directory />
     Options FollowSymLinks
     AllowOverride AuthConfig
  </Directory>
  ErrorLog /var/log/apache2/error.log
  # Possible values include: debug, info, notice, warn, error, crit,
  # alert, emerg.
  LogLevel warn
  CustomLog /var/log/apache2/access.log combined
  ServerSignature On
  NameVirtualHost *
  <VirtualHost *>
     ServerName site1.mydomaine.org
     DocumentRoot /site1
  </VirtualHost>

  <VirtualHost *>
     ServerName site2.mydomaine.org
     DocumentRoot /site2
  </VirtualHost>

  <VirtualHost *>
     ServerName site3.mydomaine.org
     DocumentRoot /site3
     SSLEngine on
     SSLCertificateFile /ssl/site3.cert
     SSLCertificateKeyFile /ssl/site3.key
  </VirtualHost>

  If I try https://site3.mydomaine.org:80/, it should work, no?
  In fact, it is ok for site1 and site2, but site3 works only in
  http!!!  It seems it should not work in http, no?
  If I open the 443 port (only for tests), it works correctly in https.
  Do you have any clue?

  Thanks.
  David.




Re: [EMAIL PROTECTED] rewrite help

2008-11-03 Thread solprovider
Do not escape the question mark.

RewriteRule ^/(.*) /wc?uni=$1 [L]
- The first character must be a slash and is not included in the $1 variable.
- /wc?uni= is added before the rest of the URL on the same server.
- Any querystring from the visitor is discarded.  (No QSA flag.)
- [L] = stop processing RewriteRules.

You may want another RewriteRule for / itself.
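A sketch combining both cases; the exclusion of /wc is an assumption, added so the rewritten request does not match the rule again and loop:

```apache
RewriteEngine On
# Do not rewrite requests for the handler itself
RewriteCond %{REQUEST_URI} !^/wc
# Pass everything else (including /) to the handler as the uni parameter
RewriteRule ^/(.*)$ /wc?uni=$1 [L]
```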

HTH,
solprovider

On 11/3/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I am trying to get a redirect to work so that I can have friendly URLs for
  my website. I am using mod_perl and have written a little handler as a
  controller to handle all requests.

  What I currently have works as follows.

  RewriteEngine On
  RewriteBase /
  RewriteCond %{REQUEST_FILENAME} !-f [OR]
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteRule ^wc/(.*) /wc\?uni=$1 [L]

  The user types in:  http://example.com/wc/docName
  and apache rewrites: http://example.com/wc?arg=docName

  Where /wc is my perl handler, as such:

  PerlModule Examplepackage::WC
  <Location /wc>
     SetHandler perl-script
     PerlResponseHandler Examplepackage::WC
  </Location>

  This works and it's great, but I want it to work just a little differently.

  I want the user to type in: http://example.com/docName
  and apache rewrite: http://example.com/wc?arg=docName

  I have tried a few different RewriteRule variants and they either 404 or
  exceed 10 internal redirects (internal server error).

  I have tried:
  RewriteRule ^/(.*) /wc\?uni=$1 [L]
  RewriteRule ^(.*) /wc\?uni=$1 [L]
  RewriteRule /(.*) /wc\?uni=$1 [L]
  RewriteRule . /wc\?uni=$1 [L]
  RewriteRule /(.*)$ /wc\?uni=$1 [L]
  and other such permutations.

  What am I doing wrong?




Re: [EMAIL PROTECTED] Apache server - Output to the same page whatever request

2008-10-15 Thread solprovider
On 10/14/08, MierMier [EMAIL PROTECTED] wrote:
  MierMier wrote:
   I have an Apache server 2.x + PHP, and I wondered whether it is possible
   to output the same page (i.e. a.php) for a request of whatever page.
  
   I will give an example:
   if the client tries to reach /hello/index.php (which does not really exist
   on the server)
  
   Can Apache just tell the client - I return you that page
   (/hello/index.php) but in fact, this page's content is a.php's content,
   without 404 Errors and stuff?
   Lior.

 Well, thanks a lot, but something went wrong, and after reading, I still can't
  understand why.

  I have an Apache server on my PC, based on WinXP,
  and when I use these in .htaccess
  Options +FollowSymLinks
  RewriteEngine On
  RewriteRule ^.*$ /a.php [QSA]

  it works well.
  However, at my Linux Apache server it doesn't; it always shows me a 404
  error, and the rewrite mod is enabled.

  Any ideas?

The querystring is passed to redirected pages by default if the
redirect does not contain a question mark.  Ending with a querystring
overwrites the querystring with the new values.  Ending the redirect
with a question mark clears the original values -- no new values.  The
QSA (QueryString Append) option merges new querystring values with the
original values; it is unnecessary without new querystring values.  See
http://httpd.apache.org/docs/2.2/mod/mod_rewrite.html

Status 404 is Not Found.  Does /a.php exist?  Can httpd read it?
File permission issues?

Redirecting error pages may not require a RewriteRule or even
mod_rewrite.  ErrorDocument is a core directive.  Try:
   ErrorDocument 404 /a.php
The documentation does not mention how ErrorDocument handles the
original querystring. See:
http://httpd.apache.org/docs/2.2/mod/core.html#errordocument
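If the rewrite approach is kept, a sketch that only fires for requests that do not match a real file or directory (the file-test conditions are an assumption, added so /a.php itself and real content are served normally):

```apache
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# No ? in the target, so the visitor's querystring is passed through
RewriteRule ^.*$ /a.php [L]
```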

HTH,
solprovider




Re: [EMAIL PROTECTED] redirect issue

2008-10-12 Thread solprovider
On 10/11/08, Yoom Nguyen [EMAIL PROTECTED] wrote:
  It doesn't work for me.  I am not quite sure what the problem is.  Here is 
 the error again.

  Syntax error on line 25 of /etc/apache2/vhosts.d/corp-yast2_vhost.conf:
 Redirect takes two or three arguments, an optional status, then document to 
 be redirected and destination URL

 httpd not running, trying to start

  THIS IS WHAT I HAVE

  RewriteEngine on
   RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
   RewriteRule .* - [F]
  
   RedirectMatch Permanent ^/$ http://corp.example.com/Pub/portal/desktop
   Redirect Permanent /groupMaterials/MPSERS_plan.pdf 
 http://corp.example.com/Pub/Grials/MPSERS_Plan.pdf
   Redirect Permanent /Vis/dvDnal.pdf http://corp.test.com/Pub.pdf

I tested by pasting your text and found no errors; it even redirected
properly on httpd-2.2 for Windows.  What version of httpd are you
using?

Try:
   head -n 25 /etc/apache2/vhosts.d/vipcorpinuat-yast2_vhost.conf
and check the last line.  Also try 24 lines.  Maybe a line is not
terminated properly?

Case sensitivity? You can use:
Redirect permanent /a.pdf  http://example.com/a.pdf
OR
RedirectPermanent /a.pdf  http://example.com/a.pdf
You can try the latter as an alternative.

Finally, delete the lines and manually type them (no pasting).  This
seems more an issue with the file than httpd.

HTH,
solprovider




Re: [EMAIL PROTECTED] Address rewriting with mod_rewrite

2008-10-07 Thread solprovider
On 10/6/08, Mauro Sacchetto [EMAIL PROTECTED] wrote:
 On Monday 6 October 2008 19:14:02, you wrote:

  None of the examples in the manual match against the protocol or
   hostname part of the URL.
   I previously described that you also can't match the query string in
   the rewriterule.
   You also probably need a trailing slash on the 2nd argument.
   Note that this rule redirects from the index.php page _to_ the /
   page which seems to be the opposite of what you want.

 To optimize the page for search engines, I need to always have
  a (pseudo-)static address for the home page. But I proceed by trial & error
  (mostly errors...). So I tried this (still not working):

  Options +FollowSymLinks
  RewriteEngine on
  RewriteCond %{HTTP_HOST} ^example.net [NC]
  RewriteRule (.*) http://www.example.net/$1 [L,R=301]
  RewriteCond %{QUERY_STRING} pagina=home
  RewriteRule ^$ http://www.example.net [R=301,L]

  I think there are more syntactical mistakes...
  Thanks a lot
  MS

The second rewrite redirects requests with the querystring
?pagina=home to the homepage.  To check querystrings with multiple
parameters:
   RewriteCond %{QUERY_STRING} (^|&)pagina=home(&|$)
   RewriteRule (.*) / [P]

You seem to have the very common goal of having the homepage appear
for / even though your internal URL for the homepage includes extra
text.
RewriteRule ^/$ /index.php?pagina=home [P]

(From other posts)
   After, I come back to home using the menu, the address is:
  http://www.example.net/index.php?pagina=home
   I'd like the address to be http://www.example.net/

You also want any links to the homepage in your application to use
href="/" rather than href="/index.php?pagina=home".  The application
creates those links.  The code generating the menus includes the link
to:
   http://www.example.net/index.php?pagina=home
This cannot be fixed with basic Apache httpd; you should correct the
menus in the application.  (mod_proxy_html might help if you can get
it to work.)
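Put together, a sketch of both directions (hostname from the thread; the %{THE_REQUEST} test is an assumption, used so the external redirect only fires on the browser's original request and does not loop with the internal rewrite):

```apache
RewriteEngine On
# Externally redirect the long homepage URL to the canonical /
RewriteCond %{THE_REQUEST} ^GET\ /index\.php\?pagina=home
RewriteRule ^/index\.php$ http://www.example.net/? [R=301,L]
# Internally serve / from the real homepage script
RewriteRule ^/$ /index.php?pagina=home [L]
```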

HTH,
solprovider


Re: [EMAIL PROTECTED] http-equiv=refresh ignored?

2008-10-07 Thread solprovider
On 10/7/08, Kynn Jones [EMAIL PROTECTED] wrote:
 Thanks for your help.
 ... SNIP ...
 In retrospect, I wish there had been an easier way for me to inspect the
 requests sent by each browser.  I asked our sysadmin if there was a way to
 configure Apache temporarily (i.e. only for debugging purposes) to record
 all requests verbatim, but he did not know how to do this.

 What browser-independent Apache tools are there for this type of analysis?
 ... SNIP ...
 TIA!
 Kynn

I do not know any Apache tools for troubleshooting network
communications.  Others may have suggestions.

Network sniffers/analyzers and tools for Windows and/or *nix:
tcpdump (*nix): logs all communications.
WireShark (forked from Ethereal): logs/analyzes all communications.
telnet (*nix): talk to any port.
ssh (*nix): talk to any port, including encryption.
HyperTerminal: Windows' standard telnet program.
PuTTY: Windows client for telnet and ssh.

The (poorly-named for enterprises) WireShark is probably best to see
every communication -- includes ACKs and other control packets so you
should understand (or will quickly learn) the intricacies of TCP/IP.
WireShark logs every communication so filter by IP Address and packet
type to isolate relevant communications.

ssh (for *nix) and putty (for Windows) let you specify a port and type
commands.  I have used these to troubleshoot SMTP and HTTP issues.
You need to type the entire request.  For HTTP, this includes the GET
or POST, headers, and content (for POST).  Changing the User-Agent
header will show responses for different browsers.

You should probably use both.  Use WireShark to learn everything the
browsers are sending, then use an easier-to-control, less-detailed
client for testing.

solprovider




Re: [EMAIL PROTECTED] Address rewriting with mod_rewrite

2008-10-07 Thread solprovider
On 10/7/08, Mauro Sacchetto [EMAIL PROTECTED] wrote:
 On Tuesday 7 October 2008 16:41:50, Krist van Besien wrote:
   Ofcourse, this is not the end. You might have to look at the links
   that are in the page that index.php generates...
 Not easy for me... Just learning...

   Is this by any chance a CMS you've installed? If so, which one?
 No, it's a little site created by myself.
  Thanks
  MS

If the menus are not being generated, they are just HTML files (or PHP
files including static HTML for menus.)  Just change the links for the
homepage to the now-working URL:
   <a href="/index.php?pagina=home">Home</a>
Becomes:
   <a href="/">Home</a>
The new links will only work through Apache httpd.  Directly accessing
the application server will error for these links.

solprovider


Re: [EMAIL PROTECTED] http-equiv=refresh ignored?

2008-10-06 Thread solprovider
1. Always describe the intended functionality.
This code creates a session.  Most application servers needing
sessions just send a page with the session in the querystring and a
Cookie.  If subsequent requests do not include the Cookie, the
querystring is used; otherwise just use the Cookie.  A redirect is not
required -- the first response can add a Cookie and the querystring
parameter to every local URL.  If the Cookie is not found, the
querystring parameter must be added to every local URL on every page.
Many websites do not support sessions without Cookies because:
- changing every local URL on every page is beyond their programmers'
abilities.
- URL-based sessions are lost if the visitor opens a page without the
session parameter (such as using a bookmark or returning after
visiting another website.)
Other security concerns include randomization of session identifiers
and locking the session to an IP Address and User-Agent.

The code seems complicated for no purpose.  Why are you forking with
expect-more-data?

2. Perl Script:
   If no session, redirect with session, exit.
   Else send page.
That gives a maximum of 2 page loads seen in Firefox.  Why is Safari
sending extra requests?

3. JavaScript:
Did you see the code in the HTML source?  Is an earlier version in cache?

Onload runs setTimeout( 'maybe_refresh()', 3000 );
function maybe_refresh() {
// This line never runs.
// document.images always exists so the test is always false.
// Probably wanted (0 < document.images.length)
  if ( !document.images ) return;
  window.location.reload();
}
After fixing the code, an IMG element is required to prevent reload.

Some browsers run onLoad functions when transfer completes but before
processing the HTML completes so JavaScript expecting HTML objects may
fail.  I add function(s) at the end of the BODY so browsers do not run
the code until all HTML is processed.
...
<script language="JavaScript">
setTimeout( 'maybe_refresh()', 3000 );
</script></body></html>

4. If this is not what you wanted, please explain what you are
attempting.  We can assist better when we do not need to guess the
purpose from reading non-working code.

HTH,
solprovider


On 10/1/08, Kynn Jones [EMAIL PROTECTED] wrote:
 I am trying to debug a large Perl/CGI-based legacy system that is not
 working right on some browsers, like Firefox 3.  It runs on an Ubuntu host
 with Apache 2.2.3.

 I reduced the problem to a very simple case, in the form of a short Perl CGI
 script.  This script has the following logical structure (in pseudo code):

 if called without session ID
   create new session id
   use this id to cache a data stub (with expect-more-data flag set)
   fork
   if parent
 respond with a 302 redirect whose URL is the script URL + the session ID
 exit
   if child
 repeat for i in 1..20
   sleep 1 second
   add i\n to the cached data
 unset the expect-more-data flag in the cached data object
 exit
 else (i.e. session ID available)
   retrieve cached data
   if expect-more-data flag is set
 add ...continuing... at the end of the response
  add <meta http-equiv="refresh" content="3"/> to the response's header
   display a page with the newly retrieved data

 This works fine with Safari, but with Firefox (v. 3), the browser appears to
 be loading for about 20 seconds and then in the end it displays only the
 last page; none of the intermediate update pages is displayed.

 If one looks at Apache's access logs, when the request is done via Safari,
 one sees several GET requests logged, right in synch with the page updates
 as they're being displayed by the browser, as expected. But when one uses
 Firefox, the access log show only two GET requests, the very first one and
 the very last one, and they both appear simultaneously when the last (and
 only) page is finally displayed.  (The error logs do not register any
 messages during this time.)

 It was as if the initial request from Firefox was not deemed finalized until
 the very last one was...

 I thought that perhaps for some reason Firefox was simply dropping all the
 responses that had a <meta http-equiv="refresh" .../> directive in the
 header.  So I decided to implement the same server-browser interaction using
 JavaScript; i.e. I replaced the <meta http-equiv="refresh" .../> tag with
 the following bit of JavaScript:

 <script>
 var delay = 3;
 var calls = 0;
 function maybe_refresh() {
   if ( !document.images ) return;
   if ( calls > 0 ) {
     window.location.reload();
   }
   else {
     calls += 1;
     setTimeout( 'maybe_refresh()', delay * 1000 );
   }
 }
 window.onload = maybe_refresh;
 </script>

 Strangely enough, when I did this the observed behavior was exactly the same
 as before: the new script works fine with Safari, but not with Firefox.

 This pretty much blows my best explanation out of the water.  In other
 words, although it may be minimally plausible that Firefox would be
 disregarding all the responses from the server that have a
 <meta http-equiv="refresh" .../> tag

Re: [EMAIL PROTECTED] How can I uninstall Apache2 from Windows98?

2008-10-01 Thread solprovider
On 9/28/08, Luke Turner [EMAIL PROTECTED] wrote:
 HTTPD.EXE won't shut down when I turn my computer off. And I see no
 mechanism by which I could properly shut it down. I keep having to do an
 end task on it. Google search on this problem indicates that Apache2
 simply isn't right for Windows98se. Hence I want to uninstall it.

I know your issue was resolved; just replying to correct misinformation.

This command stops Apache httpd cleanly:
   httpd.exe -k shutdown
You can also use the Apache Service Monitor's Stop button.
This applies to all versions of Windows, including Windows98SE.
See http://httpd.apache.org/docs/2.2/platform/windows.html#wincons

To the best of my knowledge, Windows98SE does not have a place to put
commands to run during shutdown.  Shutdown closes all programs,
completes any writes to disk, and clears the disk write buffer.
Apache httpd should close without corrupting anything.  Using ALT_F4,
CTRL_C, or the X window button may not close all processes -- a
performance issue if the computer were to remain running, but shutdown
should kill those processes.

solprovider




Re: [EMAIL PROTECTED] Trailing slash problem, not the typical one

2008-09-27 Thread solprovider
On 9/27/08, Melvin Foong [EMAIL PROTECTED] wrote:
 Hello there,
  I am having a problem with my new server that I could not get to work.
  I am trying to request a page where my URL ends with a trailing slash.

  Example
  http://domainname/directory/filename/
  This returns me a 404; Apache is treating it as a directory and not a file

  My configurations on 3 of my Apache servers are the same; however, only
  1 server refuses to work with a trailing-slash filename.

  I do not know what kind of info I should include; just wondering if
  anyone has had this problem before and solved it.

  Am using Apache 2.2.3
  TIA.
  Melvin Foong

Your expectation is not standard Web behavior -- a final slash implies
a directory name.  The information needed is in httpd.conf (and maybe
.htaccess files if you use them in content directories.)

You might try copying a working httpd.conf to the problem server.
Backup the current httpd.conf in case the installation/configuration
is different enough that the server does not start.

Diff the httpd.conf from this server with a httpd.conf that works as
you want.  DirectoryIndex (default = index.html) sets the filename(s)
delivered when a directory is specified.  DirectorySlash adds a final
slash when missing -- this is the opposite of your issue.  You may
have RewriteRule commands to remove final slashes.
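The kind of slash-removing rule to look for (or to add deliberately) might be sketched like this; the not-a-directory condition is an assumption so real directories keep their slashes:

```apache
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
# Redirect /directory/filename/ to /directory/filename
RewriteRule ^/(.+)/$ /$1 [R=301,L]
```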

The important files are httpd.conf and the .htaccess files in the
problem directory and all parent directories of the problem directory.
Compare these files with the files from a server that works as you
desire.  Post any portions that seem relevant if you need more
assistance.

HTH,
solprovider




Re: [EMAIL PROTECTED] How can I uninstall Apache2 from Windows98?

2008-09-27 Thread solprovider
On 9/27/08, Luke Turner [EMAIL PROTECTED] wrote:
 I accidentally installed Apache2 on my Windows98se computer and I want
 to remove it because Apache2 is not compatible with Windows98. I found no
 instructions for this anywhere on the WWW.

 Apache does not appear in the list of programs when I go to my Add/Remove
 Programs dialog box.

This sounds like something went wrong during your installation.
Apache 2.2.4 is working on my Windows98SE computer.  Apache HTTP
Server 2.2.4 appears in my Add/Remove Programs list.  What does not
work for you?

I cannot find my installation file.  Some installers will uninstall
when run again.  Otherwise...

The installation is mostly contained under the directory where you
installed Apache httpd, e.g.  C:\Program Files\Apache Software
Foundation\Apache2.2.  The start links are in: C:\WINDOWS\Start
Menu\Programs\Apache HTTP Server 2.2.4. Delete these two directories
(and all subdirectories.)
- The Windows registry contains a few entries you can ignore.  Search
for Apache if you want to remove them and have not installed other
Apache products.

Making a program not work on Windows98SE often takes extra development
work.  Eclipse 3.2 used XP's transparent text; this was fixed for
Eclipse 3.3.  I think PostgreSQL needed a file system feature
unavailable in Windows98SE.  Maven 2 did not work on Windows98SE; I
have yet to discover why (or try a recent version.)  Proprietary
software companies often teach installers not to install to
Windows98SE to reduce support calls.  Most OSS only stops working
because no tester reports problems; complain and the problems
disappear (as with Eclipse.)

HTH,
solprovider




Re: [EMAIL PROTECTED] What is the Difference between A webserver(apache) and an application server?

2008-09-20 Thread solprovider
On 9/19/08, Varuna Seneviratna [EMAIL PROTECTED] wrote:
 What is the difference between a web server (Apache) and an application
 server?

 Varuna

- A server handles centralized processing for multiple client computers.
- A basic Web server serves HTML pages and associated content (images,
other files) using the HTTP protocol.
- An application server runs one or more interactive applications.
- Web applications add dynamic interaction to the HTTP protocol.  This
was first handled with CGI scripts -- the HTTP request triggered a
program external to the Web server.  Web servers later
incorporated the ability to include programming (Java servlets, better
integration of Perl, etc.).

Apache httpd has many modules for creating applications.  Apache httpd
is also commonly used as a (front-end) proxy to multiple (back-end)
applications servers (e.g. Tomcat, Geronimo, Cocoon, and non-Apache
Web application servers).

For more details, search Wikipedia for "Application server", "Web
server", "Web application server", and even "Apache HTTP Server".

solprovider




Re: [EMAIL PROTECTED] java app URL rewrite

2008-09-18 Thread solprovider
On 9/18/08, Stauffer, Robert G [EMAIL PROTECTED] wrote:
 Same thing happened: It serves up the correct page, but the browser URL
 is http://example.com/ instead of
 http://example.com/site/jazzd/pid/9.  And the second RewriteRule is
  ignored:
  Bob Stauffer

You may be confusing several different processes.
1. The browser sends a request.  If the URL for http://example.com/
should redirect to http://example.com/site/jazzd/pid/9, then you need
a redirect.  Only a redirect will change the browser's URL.
2. The server responds to a request.  If
http://example.com/site/jazzd/pid/9 should be served by
http://portal-dev:81/jahia/Jahia/site/jazzd/pid/9, then a RewriteRule
should add the jahia/Jahia/.  RewriteRules do not affect the
browser, only how the server processes the request.
3. The page is sent to the browser.  Links on the page may include the
"jahia/Jahia/" prefix, and people following those links will see it.
Fixing the links requires either changing the
application that created the page (often by configuring the root URL)
or using mod_proxy_html.  Rewriting links is obviously critical
functionality still not included in Apache httpd.  (My first Web
application included this functionality.)

RewriteRules do not change browsers' URLs.  You need to use the
correct technology to create your solution.
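A sketch of the two distinct mechanisms, with the URLs taken from this thread:

```apache
# 1. External redirect: the browser is told to fetch the new URL,
#    so the address bar changes.
RedirectMatch permanent ^/$ http://example.com/site/jazzd/pid/9

# 2. Internal mapping: the server fetches the back-end URL itself;
#    the browser's address bar does not change.  Requires mod_proxy.
RewriteEngine On
RewriteRule ^/site/(.*)$ http://portal-dev:81/jahia/Jahia/site/$1 [P]
```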

solprovider




Re: [EMAIL PROTECTED] How to Find Online Users?

2008-09-17 Thread solprovider
On 9/17/08, André Warnier [EMAIL PROTECTED] wrote:
 Justin Pasher wrote:
  amiribarksdale wrote:
   What is the standard way to determine whether a user is indeed logged
   in to a site and online right then? I have a web app where users log in
   and get a cookie. Part of it is the session cookie, which expires at the
   close of the session, and part of it is a longer-lasting authentication
   cookie. Am I supposed to use the session cookie for this? Does it have
   to be stored in the db so it can be timestamped?
   Amiri
 
  Since HTTP is a stateless protocol, it requires a little creativity to
 track online users. One way is to have a table in a database that keeps
 track of a person based upon their username/IP address and the last time
 they loaded a page. For example
 
  * Client visits a page
  * Add/Update a row in the table with the client's username/IP address and
 set the timestamp to the current time
  * To retrieve a list of online users, pull all rows in the database with
 a timestamp within the last X minutes (for example, 10 minutes).
 
  You could then periodically delete any rows from the table that are older
 than X minutes or hours. This would help keep the size down. The username
 for a client would be based upon a cookie or session information stored
 within your page.

A more efficient table would contain all visitors with the timestamp
of the last visit rather than adding a row for each visit.  You must
already have a table of all visitors so this only requires adding a
LastVisited field/column.  The data could also be queried for
visitors that have not visited in the last 6 months.
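An untested sketch of that one-row-per-visitor idea (the table and column
names, and the 10-minute window, are made up for illustration):

```python
# Sketch: track "online" visitors with one row per visitor, updating a
# LastVisited timestamp on every page view instead of inserting new rows.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE visitors (
    username     TEXT PRIMARY KEY,
    last_visited REAL NOT NULL)""")

def record_visit(username, now=None):
    """Insert the visitor or update the existing row's timestamp."""
    now = time.time() if now is None else now
    conn.execute(
        "INSERT OR REPLACE INTO visitors (username, last_visited) VALUES (?, ?)",
        (username, now))

def online_users(window_minutes=10, now=None):
    """Visitors whose last visit falls within the window."""
    now = time.time() if now is None else now
    cutoff = now - window_minutes * 60
    rows = conn.execute(
        "SELECT username FROM visitors WHERE last_visited >= ? ORDER BY username",
        (cutoff,))
    return [r[0] for r in rows]

record_visit("alice", now=1000.0)
record_visit("bob",   now=100.0)    # last seen long ago
record_visit("alice", now=1200.0)   # updates the existing row, no new row
print(online_users(window_minutes=10, now=1300.0))  # -> ['alice']
```

The same table answers the "not visited in 6 months" question with a
`last_visited < cutoff` query.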

  Another way of saying this, is that HTTP as a protocol, and the HTTP server
 itself, have no such concept as a logged-in user.  Each request from the
 browser to the server, as far as they are concerned, is independent from the
 next one, even if it comes from the same workstation or IP address.
  So the concepts of logged-in user or connected workstation are at the
 application level, and that is also where you have to handle it.

  If both the server and the browser use the KeepAlive feature, then to
 some extent there is one TCP-IP session kept open between them for a certain
 duration or a certain number of requests-responses, but that has only a
 vague relationship with the a concept of on-line users : such a session
 may remain connected for a while after a single browser request, even if the
 browser just requested the homepage once without ever logging in to any
 application afterward.
  The same thing with a disconnect or logout from an application : if the
 browser just moves to another page on another server, or is just closed, or
 the workstation is powered off, the server would never know about it.  Some
 web applications implement a timeout, and internally do some kind of
 logout of the session if they have not seen any new interaction for a
 while. But this happens at the back-end application level, not at the HTTP
 server level.

As André wrote, tracking online visitors is handled at the application level.

I once wrote a Web chat application.  The discussion page refreshed
every minute -- updating the conversation and informing the
application that the visitor was still active.  This was 1996 --
frames separated the discussion page and the input page. Other pages
on the website had an alert symbol when a message was sent to that
person.  The alert graphic was refreshed every minute (using
JavaScript) -- telling the visitor when a message was received, but
also informing the application that the visitor was still online.
Today, the application could use AJAX to update the discussion area
and track online visitors without refreshing the page.

solprovider


Re: [EMAIL PROTECTED] hardware for proxy

2008-09-11 Thread solprovider
On 9/11/08, Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:
  On 9/10/08, Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:
On 09.09.08 21:23, [EMAIL PROTECTED] wrote:
  5000 reqs/sec @ 20 KB/req = 100 MB/sec = 1Gbps.  One gigabit network
 it's even 800, not 1000 Mbits per second...
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
   Rough conversion (from the old days) was:
   1 byte of data
   = 8 bits on disk
   = 10 bits of network traffic
 this was also correct in modem times. I don't think that network
  (http/tcp/ip) headers cause that big overhead now :) Yes, it depends on size
  of average requests... However we should count that into average size of
  request...

Network overhead is difficult to estimate.  IPv4 adds 32-36 bytes per
packet; IPv6 adds 60-64 bytes per packet.  Packet size has a large
effect --  smaller packets require more packets with more protocol
overhead ; larger packets waste more space in the final packet.  Then
add non-data packets e.g. ACKs.  My 25% overhead (2 bits overhead per
8 bits data) has produced reasonable estimates.
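A quick untested illustration of the packet-size effect, counting only the
minimum IPv4 + TCP headers (20 bytes each -- an assumption; real traffic
adds options, Ethernet framing, and ACK packets, so these are lower bounds):

```python
# Fraction of each packet spent on IP+TCP headers, minimum header sizes only.
IP_TCP_HEADERS = 20 + 20  # bytes per packet (minimum IPv4 + minimum TCP)

def header_overhead(payload_bytes):
    """Header bytes as a fraction of the whole packet."""
    return IP_TCP_HEADERS / (payload_bytes + IP_TCP_HEADERS)

print(round(header_overhead(100), 3))   # small packets: 0.286 (28.6%)
print(round(header_overhead(1460), 3))  # full Ethernet payload: 0.027 (2.7%)
```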

   = 13 bits of encrypted (SSL) network traffic
 Interesting, I guess that mostly applies to SSL handshake overhead. I don't
  have the numbers but I guess encrypted text should not be much bigger than
  non-encrypted.

I once read that encryption added 20-30%.  Modern streaming encryption
seems more efficient, but adds handshake overhead per transmission.
Again, my 30% overhead has produced reasonable working estimates.

  Does somebody have the data?
 Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/

Ditto.  I would like better estimates, or at least more details to
support my current calculations.

solprovider




Re: [EMAIL PROTECTED] hardware for proxy

2008-09-10 Thread solprovider
On 9/10/08, Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:
 On 09.09.08 21:23, [EMAIL PROTECTED] wrote:
   5000 reqs/sec @ 20 KB/req = 100 MB/sec = 1Gbaud.  One gigabit network

  please don't mess bauds and bits per second. it's something very different.
  http://en.wikipedia.org/wiki/Baud

Thanks.  Back in the modem days, baud was (correctly) shorthand for
bps.  Wikipedia states that is no longer valid.

  it's even 800, not 1000 Mbits per second...
  Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/

Rough conversion (from the old days) was:
1 byte of data
= 8 bits on disk
= 10 bits of network traffic
= 13 bits of encrypted (SSL) network traffic
Data compression can reduce the traffic up to 50%.

Maintaining 800 Mbit per second without compression may completely fill
a gigabit network connection.  The OP would want a second network
connection to avoid running at full capacity and handle spikes.
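Untested arithmetic behind that estimate, using the 10-bits-on-the-wire
per data byte rule of thumb from above:

```python
# Back-of-the-envelope bandwidth check for the 5000 req/s scenario.
reqs_per_sec = 5000
bytes_per_req = 20 * 1024       # 20 KB per request
wire_bits_per_byte = 10         # 8 data bits + ~25% protocol overhead

data_mb_per_sec = reqs_per_sec * bytes_per_req / (1024 * 1024)
wire_mbps = reqs_per_sec * bytes_per_req * wire_bits_per_byte / 1_000_000

print(data_mb_per_sec)  # ~97.7 MB/s of data
print(wire_mbps)        # 1024.0 Mbit/s on the wire -- saturates a gigabit link
```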

solprovider




Re: [EMAIL PROTECTED] hardware for proxy

2008-09-09 Thread solprovider
On Tue, Sep 9, 2008 at 9:43 AM, Alexandru David Constantinescu
[EMAIL PROTECTED] wrote:
 -Original Message-
 From: Alexandru David Constantinescu [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, September 09, 2008 3:20 AM
 To: users@httpd.apache.org
 Subject: [EMAIL PROTECTED] hardware for proxy
 I plan to implement a proxy server for apache. The idea is to act like
 a
 firewall, proxy , load balancer and cache. It must  serve around 2000
 sites. The backend servers I don't know for now how many will be, but I
 am prepare to start with 2 or 3 and in case of heavy load , increase
 this number. My question is what hardware do you recommend for proxy.
 do
  I need fast cpu's or lots of core's. In terms of ram the things are
 clear : apache need ram. Do you recommend scsi or sata disks etc ?
 If someone have experience or suggestions please give me a sign.
 Thanks

 There is no SSL.
 The sites are very active (it is a share hosting environment and this is the
 reason why I wanna try the proxy) and beside that we plan to expand.
 We have between 50~300 reqs/sec (depend on time of the day) with around
 10~20 kb/reqs and this is not the busiest server. Probably we need something
 to hold around 5000 reqs/sec like a frontend.

5000 reqs/sec @ 20 KB/req = 100 MB/sec = 1Gbaud.  One gigabit network
connection might max out so you probably want two gigabit network
connections -- standard on most rack servers.

A recent single-core CPU is probably more than enough -- proxying is
not very processor-intensive.  Bus speed is more important than CPU
speed.

SCSI is stable; SATA is new.  One of the SATA hard drives in our most
recently purchased server died after a few weeks (and the RAID failed
to rebuild.)  Everything should run in RAM if you really need
performance so drive speed only affects start times (unless this
server will cache too.)

500 MB RAM is probably overkill; a new server will have at least 2 GB.

A modern desktop computer should handle the expected load (excluding
the second network connection.)  Use that server you just bought and
have not delivered.  Install and load test.  If you notice any
performance problems, adjust the specs for the new server.  Start
inexpensive.  You do not need the first server to handle future
capacity.  When the first server slows even a little, you can move
half the websites to another server before deciding how to build the
ultimate system.  Then you will have real performance numbers for the
decision.

solprovider




Re: [EMAIL PROTECTED] Setting up a subdomain?

2008-08-29 Thread solprovider
On 8/29/08, Zach Uram [EMAIL PROTECTED] wrote:
  I run apache2 on Debian. I want to set up subdomain so instead of
  www.example.org/html/foo.html visitors can just visit http://foo.example.org 
 so
  what specifically must I do in apache conf to do this and what type
  of record should I add in my GoDaddy DNS control panel and what will
  the record look like?

  Zach

Short post, many concepts.
- DNS connects server names to IP Addresses.
- A Web server can handle several server names on one IP Address.
Apache httpd calls these virtual hosts.
- A website has content.  Basic websites associate a URL with a
content file under a root directory.  Multiple websites can access the
same files with different root directories.

You must:
1. Add foo.example.org to your DNS.  (I do not know GoDaddy's control
panel. Ask GoDaddy.)
2. Add a virtual host in httpd.conf setting the DocumentRoot to the
correct subdirectory.
<VirtualHost *>
    ServerName foo.example.org
    DocumentRoot /var/www/html/foo
</VirtualHost>
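For step 1, the record would look something like this BIND-style zone line
(illustrative only -- GoDaddy's control panel has its own form, and an A
record pointing at the server's IP Address works equally well):

```
; hypothetical zone-file line: alias the subdomain to the existing host
foo.example.org.    IN    CNAME    www.example.org.
```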

HTH,
solprovider




Re: [EMAIL PROTECTED] A question about the request line and the referer

2008-08-02 Thread solprovider
On 8/2/08, Paul Li [EMAIL PROTECTED] wrote:
  Some referer pages are -,  some are other pages in the same website,
  and still some are pages of other website. If I want to find the
  users' request history (visiting history), could I just ignore the
  referer page but only check the request page?

  Btw, why the referer pages are different, like what I asked above,
  some are -, some are other pages in either the same or different
  websites?

  Thanks,
 Paul

The referer is not the last page viewed when someone types an address
into a browser's address bar.  This would create a security issue
(typed URLs should not surrender viewing history) and a functionality
issue (the page in the referer should contain a link to the current
request.)  Analyzing the referer should tell you:
- what images are contained in what pages (and if other websites are
using your images.)
- what pages link to each page.

Note that cache makes Web server logs inaccurate.  Returning to a
previously viewed page may load the page from local cache without
contacting the Web server.  A gateway server can also cache pages and
intercept requests -- companies and ISPs may cache static pages to
reduce bandwidth.

Assuming a page is being loaded from the Web server:

The referer is "-" if someone types an address into a browser's address bar.
The referer is "-" if someone uses a Favorite or Bookmark.
The referer is "-" if someone clicks a link from within a local HTML
file on their computer.
The referer is the URL of the page containing the link if someone
clicks a link from a website on the Internet.
The referer is the URL of the page containing an image or other
immediately downloaded file.

The referer comes from the HTTP Headers of the request.  Anybody using
a tool allowing control of the headers (e.g. telnet, putty,
HyperTerminal) can set the referer.
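For example, the referer is nothing more than a header line -- an untested
sketch of the raw request such a tool would send (the URL and the claimed
referer are invented):

```python
# A raw HTTP/1.1 request with a forged Referer header.  Any client that
# lets you write headers directly can claim any referer it likes.
request = (
    "GET /page.html HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Referer: http://example.com/some/page.html\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request.splitlines()[2])  # the forged header line
```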

solprovider




Re: [EMAIL PROTECTED] A question about the request line and the referer

2008-08-02 Thread solprovider
André answered well with words.  My only addition is an example
assuming a well-behaved browser and no interference from cache.

http://example.ORG/ contains:
<img src="/internal.gif"/>
<img src="http://example.COM/external.gif"/>
<a href="/internal.html">Internal Page</a>
<a href="http://example.COM/external.html">External Page</a>
<a href="http://example.COM/external.gif">External Image</a>

Opening this page will create two log entries for example.ORG:
- The referer for the page entry depends on how the page was opened.
See previous post.
- The referer for internal.gif will be "http://example.ORG/".  The
administrator of example.ORG would consider this to be an internal
referer.

Opening this page also creates a log entry for example.COM:
- The referer for external.gif will be "http://example.ORG/".  The
administrator of example.COM would see an external referer and be
upset that another website was borrowing the graphic (and bandwidth
and processing.)

Clicking any of the links will create a log entry on the specified server:
- Clicking the link for internal.html creates a log entry on
example.ORG with the referer of "http://example.ORG/".  The
administrator of example.ORG would see an internal referer.
- Clicking the link for external.html creates a log entry on
example.COM with the referer of "http://example.ORG/".  The
administrator of example.COM would see an external referer and
(usually) be happy that another website was linking to the page.
- Clicking the link for external.gif opens the image in the browser
and creates a log entry on example.COM with the referer of
"http://example.ORG/".  The administrator of example.COM would see an
external referer and probably assume the image was used on a page
until investigating -- opening http://example.ORG/ and discovering the
link.

Each server sees only requests for files (page, images, etc.) on that
server.  The referer is an indicator of why the file was requested.
The logs do not differentiate between embedded elements and anchor
links -- opening the referer page is the only method to discover why
the file was accessed (if the referer page has not changed since the
request was made.)
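An untested sketch of classifying referers in a Combined Log Format access
log the way the administrators above would (the sample line and hostnames
are invented):

```python
# Pull the referer out of a Combined Log Format line and classify it as
# internal, external, or direct ("-").
import re

# request line, status, size, then the quoted referer field
LOG_RE = re.compile(r'"(?P<request>[^"]*)" \d+ \S+ "(?P<referer>[^"]*)"')

def classify_referer(line, our_host):
    referer = LOG_RE.search(line).group("referer")
    if referer == "-":
        return "direct"
    host = referer.split("/")[2].lower()  # scheme://HOST/...
    return "internal" if host == our_host else "external"

sample = ('203.0.113.5 - - [02/Aug/2008:12:00:00 +0000] '
          '"GET /external.gif HTTP/1.1" 200 4321 "http://example.org/" "Mozilla"')
print(classify_referer(sample, "example.org"))  # -> internal
print(classify_referer(sample, "example.com"))  # -> external
```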

solprovider

On 8/2/08, Paul Li [EMAIL PROTECTED] wrote:
 solprovider,
  I'm really appreciated your message and it helps me A LOT!!! Just want
  to clarify the two cases when the referer is url :

  1. The referer is the URL of the page containing the link if someone
 clicks a link from a website on the Internet

 a. the referer URL is the  website on the Internet from which a user
  clicks the link, and
  b. the  website on the Internet  is aother website but not my website.

  2. The referer is the URL of the page containing an image or other
  immediately downloaded file.
  this url is of a web page on my my site.

  Is my understanding correct?
  Thanks again!
  Paul

  On Sat, Aug 2, 2008 at 12:46 PM,  [EMAIL PROTECTED] wrote:
   On 8/2/08, Paul Li [EMAIL PROTECTED] wrote:
Some referer pages are -,  some are other pages in the same website,
and still some are pages of other website. If I want to find the
users' request history (visiting history), could I just ignore the
referer page but only check the request page?
  
Btw, why the referer pages are different, like what I asked above,
some are -, some are other pages in either the same or different
websites?
  
Thanks,
   Paul
  
   The referer is not the last page viewed when someone types an address
   into a browser's address bar.  This would create a security issue
   (typed URLs should not surrender viewing history) and a functionality
   issue (the page in the referer should contain a link to the current
   request.)  Analyzing the referer should tell you:
   - what images are contained in what pages (and if other websites are
   using your images.)
   - what pages link to each page.
  
   Note that cache makes Web server logs inaccurate.  Returning to a
   previously viewed page may load the page from local cache without
   contacting the Web server.  A gateway server can also cache pages and
   intercept requests -- companies and ISPs may cache static pages to
   reduce bandwidth.
  
   Assuming a page is being loaded from the Web server:
  
   The referer is  - if someone types an address into a browser's address 
 bar.
   The referer is  - if someone uses a Favorite or Bookmark.
   The referer is  - if someone clicks a link from within a local HTML
   file on their computer.
   The referer is the URL of the page containing the link if someone
   clicks a link from a website on the Internet,
   The referer is the URL of the page containing an image or other
   immediately downloaded file.
  
   The referer comes from the HTTP Headers of the request.  Anybody using
   a tool allowing control of the headers (e.g. telnet, putty,
   HyperTerminal) can set the referer.
   solprovider


Re: [EMAIL PROTECTED] Using RewriteRule for secure requests

2008-08-01 Thread solprovider
On 7/31/08, Bobby Jack [EMAIL PROTECTED] wrote:
 Thanks, solprovider. I had to look up your TLAs, but I am familiar with most 
 of the points you make in your post, and tend to agree :)

I assume you know URL, CMS, and SSL.
TMI = Too Much Information - I was halfway into a rant and used the
TLA question as an apology if reading the extra text was wasting your
time.
HTH = Hope This Helps - Very common closing on support mailing lists.

FYI (For Your Information),  IIUC = If I Understand Correctly - used
later in this post.  (OK, four letters is not a TLA.)

  --- On Wed, 7/30/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
   Whatever creates the URLs should look up the correct protocol.
 Of course, this would be ideal, but I'm not working with the ideal system, 
 unfortunately. The CMS does not appear to provide the functionality required 
 to achieve this. In short, navigational links are shared across pages. These 
 are relative for all; to achieve the correct setup, I'd need these to be 
 relative on http://-served pages, and absolute on https://; ones.

  Short of duplicating the navigation structure (which is not trivially small) 
 and creating two copies to maintain, or implementing URL-rewriting logic in 
 the application server (which lies somewhere between very difficult and 
 impossible), I cannot see a way around this.

  My current solution, therefore, is to simply leave everything as is in the 
 CMS, but implement redirects in apache to convert
  "https://page/should/not/be/secure"
  into
  "http://page/should/not/be/secure"
  (whose original link would have been relative: /page/should ...)
  Admittedly, this is not the best solution; it's a workaround. Workarounds 
 are always sub-optimal, but there are often things that need to be 
 worked-around when using proprietary software.

  Many thanks for your feedback,
  - Bobby

  P.S. The whole reason I'm looking into this IS to ensure that all forms 
 (yes, even the 'contact us' one) are accessed via https. If it's a choice 
 between that setup + a nasty workaround or not having that setup, IMHO (and, 
 I believe, yours), the former is preferable.

Agreed.

IIUC, HTTPS pages contain relative links that are not currently
available using HTTPS -- you are attempting to serve the
pages by internally redirecting the URL to the HTTP server.

A solution is to let HTTPS try to serve the page.  If this fails, pass to
a CGI program that attempts to find the page with HTTP.  If the page
still cannot be found, return an error page.

ErrorDocument 404 /cgi-bin/TryHttpBeforeErroring

TryHttpBeforeErroring pseudocode:
If REDIRECT_SERVER_PORT = 443 Then
   Get page from "http://" + REDIRECT_SERVER_NAME + REDIRECT_URL
   If success, send page and exit.
For all other cases, return a Not Found error page.
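An untested Python translation of that pseudocode (the script name, the
variable handling, and the error page body are all illustrative; the
REDIRECT_* variables are what httpd sets for an ErrorDocument request):

```python
#!/usr/bin/env python
# Untested sketch of TryHttpBeforeErroring: if an HTTPS request 404ed,
# retry the same path over plain HTTP before returning Not Found.
import os
import sys
import urllib.request

def http_retry_url(environ):
    """Return the plain-HTTP URL to retry, or None if not applicable."""
    if environ.get("REDIRECT_SERVER_PORT") != "443":
        return None
    return "http://" + environ["REDIRECT_SERVER_NAME"] + environ["REDIRECT_URL"]

def main():
    url = http_retry_url(os.environ)
    if url is not None:
        try:
            with urllib.request.urlopen(url) as resp:
                sys.stdout.write("Content-Type: text/html\r\n\r\n")
                sys.stdout.buffer.write(resp.read())
                return
        except OSError:
            pass  # fall through to the error page
    sys.stdout.write("Status: 404 Not Found\r\n"
                     "Content-Type: text/html\r\n\r\n"
                     "<html><body><h1>Not Found</h1></body></html>")

if __name__ == "__main__":
    main()
```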

RewriteRules would not require a CGI program, but would require that
httpd differentiate between HTTP and HTTPS pages.  Using ErrorDocument does
not require the differentiation -- pages are automatically served by
the best method without affecting the browser.

Disclaimer: Use the ideas in this post at your own risk.  I have not
tested anything.

HTH,
solprovider




Re: [EMAIL PROTECTED] Using RewriteRule for secure requests

2008-07-29 Thread solprovider
On 7/28/08, Bobby Jack [EMAIL PROTECTED] wrote:
  I'm maintaining a site whose http/https configuration is all muddled up, 
 partly because of the (lack of) facilities available in the CMS used to 
 deliver it. However, I'm considering an apache-based solution which might 
 solve things, and I'd like some feedback.

  The main problems revolve around the two following requirements (which are 
 global to any properly built site using https):

  1. Links (whether resident on an http or https delivered page) should begin 
 "http://..." or "https://...", as appropriate - i.e. dependent on which 'set' 
 the target page belongs to. As far as I can think, those two sets are 
 completely distinct, so this should not be a significant problem to resolve.

  2. Certain resources (e.g. images, css, etc.) need to be requested via 
 "http://..." OR "https://...", dependent on how the requesting page has been 
 delivered.

  So, assuming I can solve problem 1 within the confines of the CMS, I'm 
 thinking problem 2 can be resolved using RewriteRule with an appropriate 
 condition on HTTP_REFERER. My one slight concern is the performance hit, 
 although I'm guessing this should be minuscule, especially in relation to the 
 hit of using https in the first place.

  Any thoughts?
  Many thanks,
  - Bobby

Controlling whether HTTP or HTTPS is used for links and images is the
responsibility of the referer HTML page.  You should have three
categories of URLs:
1. External URLs: The protocol is defined by the content.
2. Internal URLs requiring a certain protocol: The protocol is defined
by the CMS/database/content.  Whatever creates the URLs should look up
the correct protocol.
3. Internal URL using the current protocol: Use relative URLs.
Browsers create the absolute URL from the current page's URL,
including the protocol.

The protocols are set when creating the referer page, not when the
images are loaded or a link is clicked.

HTTP pages can be redirected to HTTPS.  This requires telling the
browser to ask again using HTTPS.  Internal redirects are pointless
since the server-to-browser communication would still use HTTP.

Some websites only secure the login page.  This also seems silly --
why secure content with access control and then send the content as
cleartext so anybody can intercept it?  After a visitor establishes
credentials, the session should remain encrypted.  SSL communication
should be established before any confidential information is
exchanged.  Any HTTP URL should be considered outside the session.

The Web was established when the Internet was still a nice place.
Browsers should automatically send any form information using SSL.
The insecure HTTP protocol should not allow GET querystrings or POST
requests.  Ideally, the only information outside the encrypted load of
every request would be the target IP Address and server name.
Changing the Web will be very difficult.  Major websites like Google
do not encrypt most requests.  SSL places the server name inside the
encrypted load so only one SSL server name is allowed per IP Address
making SSL almost useless for virtual servers.  OTOH, the additional
bandwidth needed to encrypt everything might overload the Internet
(and would have been unacceptable when most people used dial-up.)

TMI?  HTH,
solprovider




Re: [EMAIL PROTECTED] different kinds of proxies

2008-07-24 Thread solprovider
On 7/24/08, Rich Schumacher [EMAIL PROTECTED] wrote:
 On Wed, Jul 23, 2008 at 8:50 AM, André Warnier [EMAIL PROTECTED] wrote:
  Hi. Me again butting in, because I am confused again.
  When users workstations within a company's local network have browsers
 configured to use an internal http proxy in order to access Internet HTTP
 servers, is this internal proxy system a forward or a reverse proxy ?
  I am not talking here about a generic IP Internet router doing NAT, I am
 talking specifically about a web proxy.  This HTTP proxy may also do NAT
 of course, but its main function I believe is to cache pages from external
 servers for the benefit of internal workstations, no ?
  If this is a forward proxy, then I do not understand the comment of
 Solprovider that seems to indicate that such things are obsolete and/or
 dangerous.  At any rate, they are in use in most corporate networks I am
 aware of.
  André

 What you are talking about is a forward proxy and most of the time they are
 transparent to the users behind them.  Things do get a little blurry,
  though, as sometimes they handle routing and NATing as well. SafeSquid
  (http://en.wikipedia.org/wiki/SafeSquid) is an example of this in terms
  of software.  There are also hardware-based solutions, such as Barracuda
  Networks' web filter, but I do not believe this does caching.

Forward proxies are considered dangerous because the client is hidden
from Internet servers -- the Internet servers see the proxy server's
IP Address instead of the client's IP address creating a shield for
the client.  A malicious attacker can daisy-chain several open forward
proxies making tracking the client very difficult for administrators
and law enforcement.

I stated forward proxies were obsolete because they require
configuring the client to integrate with the forward proxy while most
of the beneficial legitimate functions can be gained without requiring
client configuration.  A gateway server can handle
- NAT between internal corporate clients and the Internet,
- Firewalling blacklisted IP Addresses and websites,
- Logging all traffic, and
- Saving and serving static pages from cache,
without the definitive feature of a forward proxy -- requiring every
client to be configured to use the gateway server as a forward proxy.  A
gateway is protected by the NAT functionality -- only internal clients
can use the proxy function.  A forward proxy requires additional
security to prevent external clients from using the proxy function.

Any NAT protects the IP Addresses of internal clients, but integration
is handled at the network routing level rather than the application
level.  A NAT can be called a "proxy" because it hides the internal IP
Addresses or a "gateway" because it connects networks.  "Proxy"
requires disambiguation: forward, reverse, or network.  I prefer
"gateway" rather than "network proxy" and "front-end Web server"
rather than the technically accurate "reverse proxy" because
non-technical people understand better.

SafeSquid is described as a proxy in Wikipedia and as a gateway in
Novell's marketing material:
   http://www.novell.com/partnerguide/product/206554.html
This page also states SafeSquid can "deliver user-benefits with
zero-software deployment at user-level systems" so SafeSquid does not
meet the definition of a forward proxy while providing the benefits of
cache, firewalling, blacklisting, logging, etc..

Definitions:
- Proxy: Something or someone hiding the clients' information.   A
lawyer may be a proxy bidding on property without identifying the
client.
- Gateway (or Network Proxy): Server connecting networks.  Called a
router if dedicated hardware.  Called a gateway server when
handling functions beyond network routing.
- Forward Proxy: A proxy requiring clients be configured to use the
forward proxy.  Clients' information is hidden even on same network.
- Reverse Proxy: A front-end server able to parse requests to
distribute to multiple applications.
- NAT (Network Address Translation): A function of a gateway when
different networks use different address schemes.  The address is
translated to the gateway's address on the new network; the gateway
translates responses to return to the requesting client.  The function
was once important to integrate different network types (IP, NetBIOS,
AppleTalk, etc.).  With the demise of most network protocols, this
term is currently almost-exclusively associated with IP masquerading
for connecting local networks to the Internet.

As SafeSquid proves, the many functions required to implement
Corporate Internet Access Policies can be handled by a gateway
server without requiring a forward proxy.  The only function specific
to a forward proxy is hiding client information from other computers
on the same network; I am still wondering if this function has a
legitimate use.

[As Rich's other posts indicate, his use of forward proxies was
laziness/productivity (using a forward proxy to avoid extra work
remotely accessing different computers during testing

Re: [EMAIL PROTECTED] different kinds of proxies

2008-07-23 Thread solprovider
On 7/22/08, Rich Schumacher [EMAIL PROTECTED] wrote:
 Solprovider,

 While I agree with your sentiment that forward proxies can be very
 dangerous, I think you are jumping the gun with your statement doubting they
 have any legitimate use today.

 Here is a a real-world example that I use at my current job.  My employer
 operates a series of websites that are hosted in servers all around the
 country.  A couple of these servers are located in Canada and run a site
 specifically geared towards Canadian customers.  As such, they have Canadian
 IP addresses.  A while back we wanted to inform our Canadian customers who
 visited our non-Canadian site that we have a site specifically for them.  We
 easily accomplished this using the MaxMind geoIP database and could display
 whatever information we wanted when we detected a Canadian IP.  The quickest
 way to QA this was for us to setup a proxy (Squid, in this case) and point
 our browsers at it.  The server was already locked down tight with iptables,
 so all we had to do was open a (nonstandard) port to our specific gateway
 and we were all set. Doing this we can now masquerade as a Canadian customer
 and QA can make sure it works as planned.

 Forward proxies can also be used as another layer of cache that can greatly
 speed up web requests.

 Hope that clears the air a little bit as I feel there are several good
 examples where forward proxies can be useful.

 Cheers,
 Rich

Thank you.  I was wondering if anybody noticed the question at the end
of my post.  I am truly interested in the answer.

How would you have handled this if forward proxies did not exist?
Your answer was that the forward proxy helped testing, not production.  QA
could test:
- using real Canadian addresses.
- using a network with specialized routing to fake Canadian and
non-Canadian addresses.
- faking the database response so specific addresses appear Canadian.
Did the production system require using a forward proxy?

I discourage using IP Addresses to determine geographical locations.
Slashdot recently had an article about the inaccuracies of the
databases.  (IIRC, an area of Arizona is listed as Canadian, which
might affect your system.)  I checked the IP Addresses of Web spam to
discover that recent submits were from:
- Moscow, Russia (or London, UK in one database)
- Taipei or Hsinchu, Taiwan
- Apache Junction, AZ.
Some databases place my IP Address in the next State south.  "Choose
your country" links are popular on the websites of global companies.
(I dislike websites that force the country choice before showing
anything useful.  If the website is .com, assume the visitor reads
English and provide links to other languages and country-specific
information.)

I believe cache does not depend on forward proxy.  Any Web server with
cache enabled should serve static pages from the cache without
configuring a proxy.  Specific scenarios with a front-end server
specifically for cache seem more likely to use a reverse proxy.  While
this is how a recent project for a major website handled cache, I do
not have good information about general practices.

Am I missing something?  Other ideas?

solprovider

-
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Setting cookies from proxied backend

2008-07-19 Thread solprovider
 are translated so the Web proxy server
(www.example.com) sends the requests including Cookies to amazon.com.

Read http://httpd.apache.org/docs/2.0/mod/mod_proxy.html
Read the sections applying to reverse proxies.  Ignore forward
proxying because that process is not transparent -- the client
computer must be configured to use a forward proxy.

I once had difficulty with ProxyPass and switched to using Rewrites so
I would handle this with something like:
RewriteEngine On
RewriteRule ^/amazon/(.*)$ http://www.amazon.com/$1 [P]
ProxyPassReverseCookieDomain amazon.com example.com
ProxyPassReverse /amazon/   http://www.amazon.com/
This should handle Cookies and handle removing/adding /amazon in the path.

We have not discussed changing links in pages from amazon.com to use
example.com.  This simple often-needed functionality has been ignored
by the Apache httpd project.  (This functionality was included in a
servlet I wrote in 1999.) Research mod_proxy_html.
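
As a hedged sketch of that approach -- directive names are from
mod_proxy_html (a third-party module at the time; exact setup varies by
version), and the mappings assume the /amazon/ prefix used above:

```apache
# Rewrite links inside proxied HTML pages so amazon.com URLs
# become local /amazon/ paths.
LoadModule proxy_html_module modules/mod_proxy_html.so
<Location /amazon/>
    SetOutputFilter proxy-html
    ProxyHTMLURLMap http://www.amazon.com/ /amazon/
    ProxyHTMLURLMap / /amazon/
</Location>
```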

Does this answer your question?

solprovider


Re: [EMAIL PROTECTED] Setting cookies from proxied backend

2008-07-19 Thread solprovider
On 7/19/08, jamanbo jamanbo [EMAIL PROTECTED] wrote:
  If the applications use Cookies, the
   application Cookies must be rewritten by the Web proxy server because
   the browsers use the server name of the Web proxy server, not the
   application servers.
   1. The browser requests http://myapp.example.com.
   2. The Web proxy server myapp.example.com sends the request to
   myInternalApplicationServer.example.org.
   3. The myInternalApplicationServer.example.org sends a response with a
   Cookie for myInternalApplicationServer.example.org to the Web proxy
   server.
   4. The Web proxy server changes the Cookie from
   myInternalApplicationServer.example.org to myapp.example.com.
   5. The browser receives the Cookie for myapp.example.com and send the
   Cookie with future requests to the Web proxy server.
   6. The Web proxy server sends the incoming Cookies with the request to
   the application server as in #2.  (Depending on security, the incoming
   Cookies may need to be changed to match the receiving server.)
   7. GOTO #3.

 This is how I have come to understand the process too.

  It is step 4 I would like to change though. In my case I need cookies
  to continue to be set for .example.ORG and not modify them to
  .example.COM. Whilst there seems to be no difficulty in doing this in
  Apache (you simply omit the ProxyPassReverseCookieDomain), I am
  thinking that it amounts to a cross domain cookie injection attack
  and that no half-decent browser would accept the cookies.

  What I have been asking for most of this last week is whether or not
  it is possible for me to visit a site via a proxy yet continue to have
  cookies set as though I had visited the site directly. Those who said
  "yes you can" also generally said something like "that's the way
  proxies work". I just want to make absolutely certain that this was
  just a misunderstanding and that what they were really saying was that
  the cookies can be set, but only by translating them into the proxy
  domain ... otherwise I have made some rash claims about how I was
  going to prove a concept of mine rapidly by using a proxy, and will
  have to make an embarrassing climb down in work on Monday :S

I think you understand.  Browsers should discard Cookies not from the
originating server, i.e. example.com cannot set Cookies for
example.org.  Servers can only set Cookies for themselves and for parent
domains containing at least a leading dot and one internal dot,
e.g. ".example.com".  (This is dangerous when the domain is ".com.us" or
another country-code domain where two levels do not determine
ownership.)

There are workarounds when you control both domains.  Single-sign-on
solutions often redirect several times using different URLs to set
Cookies for multiple domains.  The process might be:
1. Send login information to first domain.
2. First domain's login redirects browser to master domain.
3. Master domain checks for Cookie from master domain.  If master
domain Cookie is not found, create session and Cookie.
4. Redirect to first domain with session ID in URL.
5. First domain checks with master domain that session is valid.
6. First domain sets Cookie based on new URL
Step 5 in this process requires the first domain is able to verify the
master domain session using a backend process.  The first domain can
maintain its session independent of the master session after this
process completes once.  Again, this requires that either you control
both domains or the master domain provides an API for single-sign-on.
This probably does not apply to your situation.

Using a Web reverse proxy, the answer is that creating a Cookie for
amazon.com requires the browser to receive a page from a server named
*.amazon.com.  Using something similar to a cross-site attack could
handle this (e.g. opening www.amazon.com in a frame), but is highly
discouraged.

If you want people on an internal network to be able to access
amazon.com and receive Cookies from amazon.com, a Web reverse proxy is
not the solution.  The two solutions are:
- A forward proxy could be used.  This requires clients to be
configured to use the forward proxy.
- Any NAT firewall should handle this transparently. No client
configuration is necessary; just configure routing between networks to
use the NAT firewall as the gateway.

Apache httpd can handle both forward and reverse proxying, but is not
a NAT firewall.  Standard *nix software (e.g. iptables and PF) can be
NAT firewalls.  (Using Microsoft software for networking may be a
criminal offense and should lead to a justified termination.)

solprovider



Re: [EMAIL PROTECTED] Setting cookies from proxied backend

2008-07-19 Thread solprovider
/... will not include Cookies for path
/google.

solprovider


Re: [EMAIL PROTECTED] different kinds of proxies

2008-07-19 Thread solprovider
On 7/19/08, André Warnier [EMAIL PROTECTED] wrote:
  From a recent thread originally dedicated to find out if a proxy server can
 be really transparent, I'll first quote a summary from solprovider.

  quote

  I think the confusion is between an network proxy server and a Web
  reverse proxy server.

  A network proxy server handles NAT (Network Address Translation).  A
  company internally uses private IP addresses (e.g. 10.*.*.*).  All
  Internet traffic from these internal addresses use a network proxy
  server to reach the Internet.  The proxy server changes the
  originating IP Addresses on the outbound packets from the internal
  network IP address to the proxy's Internet IP address.  Responses from
  the Internet server are received by the proxy server and changed again
  to be sent to the originating computer on the internal network.  The
  browser uses the Internet domain name so Cookies are not affected.

  A Web reverse proxy server handles multiple software applications
  appearing as a single server.  The applications can be found on
  multiple ports on one server or on multiple hardware servers.  Visitor
  traffic to several applications goes to one IP Address.  The Web
  server at that IP Address decides where the request should be sent
  distinguishing based on the server name (using Virtual Servers) or the
  path (using Rewrites).  If the applications use Cookies, the
  application Cookies must be rewritten by the Web proxy server because
  the browsers use the server name of the Web proxy server, not the
  application servers.
  1. The browser requests http://myapp.example.com.
  2. The Web proxy server myapp.example.com sends the request to
  myInternalApplicationServer.example.org.
  3. The myInternalApplicationServer.example.org sends a
 response with a
  Cookie for myInternalApplicationServer.example.org to the
 Web proxy
  server.
  4. The Web proxy server changes the Cookie from
  myInternalApplicationServer.example.org to
 myapp.example.com.
  5. The browser receives the Cookie for myapp.example.com and send the
  Cookie with future requests to the Web proxy server.
  6. The Web proxy server sends the incoming Cookies with the request to
  the application server as in #2.  (Depending on security, the incoming
  Cookies may need to be changed to match the receiving server.)
  7. GOTO #3.

  Deciding the type of proxy server being used may be confusing.  An
  Internet request for an internal server can be handled with either
  type depending on the gateway server.
  - Network proxy: The gateway uses firewall software for NAT -- all
  requests for the internal server are sent to the internal server.  The
  internal server sends Cookies using its Internet name.
  - Web proxy: The gateway is a Web server.  Internal application
  servers do not use Internet names so the gateway must translate URLs
  and Cookies.

  --
  The specification in the OP was how to Web proxy requests:
  1. Server receives request for
 http://www.example.com/amazon/...
  2. Server passes request to http://www.amazon.com/...
  3. Server translates response from amazon so the visitor receives
  Cookies from .example.com.
  4. Future requests are translated so the Web proxy server
  (www.example.com) sends the requests including Cookies to amazon.com.

  Read http://httpd.apache.org/docs/2.0/mod/mod_proxy.html
  Read the sections applying to reverse proxies.  Ignore forward
  proxying because that process is not transparent -- the client
  computer must be configured to use a forward proxy.

  I once had difficulty with ProxyPass and switched to using Rewrites so
  I would handle this with something like:
 RewriteEngine On
 RewriteRule ^/amazon/(.*)$ http://www.amazon.com/$1 [P]
 ProxyPassReverseCookieDomain amazon.com example.com
 ProxyPassReverse /amazon/   http://www.amazon.com/
  This should handle Cookies and handle removing/adding /amazon in the
 path.

  We have not discussed changing links in pages from amazon.com to use
  example.com.  This simple often-needed functionality has been ignored
  by the Apache httpd project.  (This functionality was included in a
  servlet I wrote in 1999.) Research mod_proxy_html.

  unquote

  Now, I believe that there is still a third type of proxy, as follows :

  When I configure my browser to use ourproxy.ourdomain.com:8000 as the
 HTTP proxy for my browser, it means that independently of whatever NAT may
 be effected by an internal router that connects my internal network to the
 internet, something else is going on :
  Whenever I type in my browser a URL like http://www.amazon.com;, my
 browser will not resolve www.amazon.com and send it a request like :
  GET / HTTP/1.1
  Host: www.amazon.com

  Instead, my browser will send a request to ourproxy.ourdomain.com:8000,
 as follows :
  GET http://www.amazon.com HTTP/1.1
  Host: www.amazon.com
  ...

  The server at ourproxy.ourdomain.com:8000 will then look up in his page
 cache

Re: [EMAIL PROTECTED] Redirection

2008-07-19 Thread solprovider
On 7/19/08, Alberto García Gómez [EMAIL PROTECTED] wrote:
 I have this URL
  http://www.server.com/index.php?article1.html

  and work like that
  http://www.server.com/?article1.html

  But I really really need this
  http://www.server.com/article1.html

  And I need to work like previous URL and I need to make the changes in
 .htaccess file
  PLEASE I had try everything and nothing work, somebody can help me please.

Am I missing something?  The answer is in your title.  Just use
mod_rewrite to translate the old URLs to the new URLs or vice versa.

# Required for Rewrite
Options FollowSymLinks
RewriteEngine On
# Choose one or create a potential infinite loop.
# Translate /article1.html -> /index.php?article1.html
RewriteRule ^/article1\.html$  /index.php?article1.html [P]
# OR
# Translate /?article1.html -> /article1.html
RewriteCond %{QUERY_STRING}  ^article1\.html$
RewriteRule ^/?$ /article1.html [P]

You could use [L] instead of [P] if you are certain that no proxy is
needed to find the file.

HTH,
solprovider


Re: [EMAIL PROTECTED] Setting cookies from proxied backend

2008-07-18 Thread solprovider
Thank you for clarifying.
- I forgot to mention the Set-Cookie domain must match the suffix of
the originating host.
- Neither of us mentioned that IP Addresses are exempt from partial
domain matching.  IP Addresses are allowed as the Cookie domain for
exact matches.
- We had difficulty receiving Cookies for c.b.a.com set by b.a.com at
the .a.com level during the last decade (1998?).  Hopefully all modern
browsers work as specified in the RFC.  I should have marked this
information as suggestive for testing rather than definitive.

"The server example.com can set a cookie for .example.com."
This seems inaccurate because ".example.com" is not a suffix of
the originating server.  Server example.com may be able to set
Cookies for itself -- the RFC suggests the server name is used if no
Domain parameter is specified in Set-Cookie:
   "Domain  Defaults to the request-host."  (Note that there is no dot
at the beginning of request-host.)
The two-dots rule only applies to Domain parameters. I have not tested.

solprovider

On 7/18/08, André Warnier [EMAIL PROTECTED] wrote:
  First, I found a thread which might provide some useful information for the
 original poster :
 http://www.theserverside.com/patterns/thread.tss?thread_id=31258

  Second,
  [EMAIL PROTECTED] wrote:
  On 7/17/08, jamanbo jamanbo [EMAIL PROTECTED] wrote:
  Respectfully, I believe there are several inaccuracies in the explanation
 given by solprovider, and this might induce the OP in error.
  The notes below represent my own understanding of the matter, based on
  http://www.w3.org/Protocols/rfc2109/rfc2109
  and
  http://en.wikipedia.org/wiki/HTTP_cookie#Implementation
  Please correct me if I am wrong.

  Cookies are set for the parent domain part of the server name.  The
  Cookie for espn.example.com is set at .example.com.

  The server espn.example.com can technically (try to) set a cookie for
 whatever domain it chooses, via a Set-Cookie header.  By default (when not
 specified), the cookie domain is understood as being the domain that exactly
 matches the server's FQDN (fully-qualified domain name, like
 a.example.com).

  Now whether the browser accepts it is another story.

  A browser respectful of the specification would only accept a cookie from a
 server, if the server's own domain belongs to (is a sub-domain of) the
 cookie domain.
  For example, from a server known as a.b.c.example.com, a browser will
 accept a cookie for the domain a.b.c.example.com or .b.c.example.com or
 .c.example.com or .example.com (but not for .com because that domain
 does not contain at least two dots).

  (The reason for that is that it is considered unsafe that a server
 www.kgb.ru.gov should be able to set a cookie for the server
 www.cia.us.gov for instance).

  Cookies cannot be set at the TLD level.
 
  True in a way, see above, but only because the browser should not accept a
 cookie for a domain that does not contain at least 2 dots.

  Default domain no-name servers

  (example.com) cannot use Cookies because the Cookie would be set at
  the .com TLD.
 
  The server example.com can set a cookie for .example.com.
  Browsers will save the Cookie

  at the next level (.example.com) and send the Cookie with every
  request to *.example.com.  A server name at the same level must be
  specified.  Requests to example.com and
  server.subdomain.example.com will not include the Cookie.
 
  The browser will save the cookie with the domain exactly as specified in
 the cookie, if this is valid (i.e. the domain of the cookie contains at least
 2 dots, and the server issuing the cookie is a member of that domain).

  A cookie set for .example.com will be sent by the browser with any
 request to a.b.c.example.com, or .b.c.example.com, or .c.example.com
 or .example.com.
  A cookie set for .c.example.com will be sent with every request to a
 server a.b.c.example.com or .b.c.example.com or .c.example.com, but
 not for .example.com nor for d.example.com, for example.
  André


Re: [EMAIL PROTECTED] 403 Errors and Virtual Hosts

2008-07-18 Thread solprovider
On 7/17/08, Rob [EMAIL PROTECTED] wrote:
 Just wondering if some one could give me a hand with my Virtual Host
 # mysite
 <VirtualHost 0.0.0.0:80>
 ServerName mysite.co.nz
 RewriteEngine on
 RewriteCond %{HTTP_HOST}   !^$
 RewriteRule ^/(.*) http://www.mysite.co.nz/$1 [NE,R]
 </VirtualHost>

 <VirtualHost 0.0.0.0:80>
 ServerAdmin [EMAIL PROTECTED]
 DocumentRoot /var/www/html/mysite
 ServerName www.mysite.co.nz
 ErrorDocument 403 /var/www/html/mysite403/index.html
 </VirtualHost>
 That's what my current virtual host looks like. I'm trying to get my 403
 errors for a certain website to display that index file. It's not working
 this way; can someone please advise where I have gone wrong?

See http://httpd.apache.org/docs/2.0/mod/core.html#errordocument

The second ErrorDocument parameter is either a message or a URL,
resolved like a redirect.  Do you have a webpage at this location?
   http://mysite.co.nz/var/www/html/mysite403/index.html
My guess is you configured an absolute filepath and Apache httpd is
handling the value as a relative URL.
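
A minimal sketch of the likely fix: give ErrorDocument a URL path rather
than a filesystem path.  The Alias below is an assumption, since the error
page lives outside the DocumentRoot in the original config:

```apache
# ErrorDocument takes a URL path (or full URL), not a filesystem path.
# Map the error-page directory into URL space, then reference it by URL.
Alias /mysite403 /var/www/html/mysite403
ErrorDocument 403 /mysite403/index.html
```

Depending on the server's access rules, a matching <Directory> section
granting access to /var/www/html/mysite403 may also be needed.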

solprovider



Re: [EMAIL PROTECTED] Setting cookies from proxied backend

2008-07-17 Thread solprovider
On 7/17/08, jamanbo jamanbo [EMAIL PROTECTED] wrote:
  My question is: Is it possible to set up an Apache proxy of another
  server in such a way that the proxy is invisible, in terms of cookies
  at least? I.e. when I visit my proxy I want cookies from the backend
  to get set exactly as if I had visited the backend directly
  (by-passing the proxy).

  I've been using a test configuration which I will show below. I picked
  two big sites to test on. They appear to have been lucky choices as
  they seem to exhibit different behaviour.

  In the first case, I proxy www.espn.go.com and it appears that (some)
  cookies from that site get set when I visit my proxy.

  However in the second case, when I proxy www.amazon.com and visit my
  proxy, I don't see any cookies (although the headers do contain
  Set-Cookies).

  Can somebody tell me if I am trying to do something impossible. Will
  browser security features prevent cookies for www.espn.go.com being
  set when I visit localhost:/espn? Or is my set up just wrong?

  This is the test config if you want to try it:

  Listen 
  <VirtualHost *:>
   ServerName localhost
   DocumentRoot /var/www/revoxy

   ProxyPreserveHost On
   <Proxy *>
 Order deny,allow
 Allow from all
   </Proxy>

   # Cookies from espn get set
   <LocationMatch /espn/>
 ProxyPass http://www.espn.go.com/
 ProxyPassReverse /
 # ProxyPassReverseCookieDomain espn.go.com localhost
   </LocationMatch>

   # Cookies from amazon don't get set
   <LocationMatch /amazon/>
 ProxyPass http://www.amazon.com/
 ProxyPassReverse /
 # ProxyPassReverseCookieDomain amazon.com localhost
   </LocationMatch>
  </VirtualHost>

  Desperatley awaiting your advice,
  JMBO!

Cookies are set for the parent domain part of the server name.  The
Cookie for espn.example.com is set at .example.com.

Cookies cannot be set at the TLD level. Default domain no-name servers
(example.com) cannot use Cookies because the Cookie would be set at
the .com TLD.  This may be the problem in your second example.

localhost should not work (although I have not tested lately).  You
should configure a server name for testing.  If httpd is responding to
all requests without virtual servers, you can configure the server
name in the hosts file (Windows) or /etc/hosts (*nix).

I use the following in a virtual server configuration to proxy to an
application server firewalled from the Internet and running on port
8000 on the same hardware server.  I use RewriteRule instead of
ProxyPass to pass incoming requests to the application server.
ProxyPassReverseCookieDomain 127.0.0.1 www.example.com
ProxyPassReverse /   http://10.1.1.1:8000/
The application sends Cookies as 127.0.0.1.  The first line translates
the Cookies to be from www.example.com.  Browsers will save the Cookie
at the next level (.example.com) and send the Cookie with every
request to *.example.com.  A server name at the same level must be
specified.  Requests to example.com and
server.subdomain.example.com will not include the Cookie.
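
Putting those directives together, a sketch of such a virtual server
(addresses and names assumed, as above) might be:

```apache
<VirtualHost *:80>
    ServerName www.example.com
    RewriteEngine On
    # Proxy all incoming requests to the firewalled application server
    RewriteRule ^/(.*)$ http://10.1.1.1:8000/$1 [P]
    # Translate Location headers and Cookie domains in the responses
    ProxyPassReverse / http://10.1.1.1:8000/
    ProxyPassReverseCookieDomain 127.0.0.1 www.example.com
</VirtualHost>
```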

HTH,
solprovider



Re: [EMAIL PROTECTED] Removing a permanent redirect from httpd.conf

2008-07-13 Thread solprovider
On 7/13/08, Paul Trunfio [EMAIL PROTECTED] wrote:
  I have a couple of permanent redirects set in my httpd.conf file.
  But I want to now undo them.

  I first tried commenting them out and restarting apache.  Didn't work.
  Then I added another explicit redirect to the new page. Didn't work.

  So, I'm stuck.
  Is there a solution?
  What does it mean to be permanent?
  Thanks, paul
  Paul Trunfio

Permanent Redirects are meant to be permanent.  Every cache (httpd,
Web cache servers, browsers) should remember that requests to the
specified URL should be changed to the new URL.  Undoing the permanent
redirect requires waiting for all the caches to expire.  This is
similar to changing the IP address of a server -- most DNS servers
cache the IP Address for some duration based on the configurations of
the controlling DNS server and each DNS server.  While DNS
configuration includes the maximum cache duration (typically one day
to one week), no such setting is available for permanent redirects.

After undoing the permanent redirect:
1. Clear the caches under your control.  Restart httpd, empty your
browser's cache, and restart your browser.
2. Test using a browser on the server (or at least use wget) to verify
the change.
3. You may need to wait up to a full week for most Web caches to
expire.  Some (e.g. Google and other search engines) may update their
caches at even slower frequencies.
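
For future changes, one hedged precaution (paths hypothetical) is to issue
redirects as temporary (302) while they are still in flux, and only switch
to permanent once stable; mod_headers can also cap how long a 301 may be
cached:

```apache
# Option 1: a temporary (302) redirect, which caches will not keep forever
Redirect temp /old-page /new-page

# Option 2: keep the permanent (301) redirect but limit how long caches
# may retain it (requires mod_headers; "always" also covers 3xx responses)
#Redirect permanent /old-page /new-page
#<Location /old-page>
#    Header always set Cache-Control "max-age=86400"
#</Location>
```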

HTH,
solprovider



Re: [EMAIL PROTECTED] Hosting Scenario

2008-07-10 Thread solprovider
On 7/10/08, S. Ural [EMAIL PROTECTED] wrote:
 There are 2 domains, example.com and example.net seeing the same document
 root and thus having the same content.
 I want both them get indexed in search engines as separate domains.
 1-  Do I need to create these 2 domains under Apache separately?
 2-  Or aliasing one to another can do the trick?
 3-  Does aliasing  mean that only one is Ok to be defined in httpd.conf?
 (Pooling the 2nd in DNS will do it work?)
 Thank you

First, domains do not affect Apache httpd.  DNS and Apache httpd can
be completely separate.  DNS defines IP Addresses for server names
within a domain.  Possible server names include the no-name
(example.com), defined names (www.example.com and
mail.example.com) and the wildcard (*.example.com) receiving
requests for undefined names.  The Internet sends requests for a named
server to the specified IP Address.  Then the server decides how to
respond.  (Research ports and protocols for information about
multiple software servers on the same hardware.)

The challenge has always been how to serve different content based on
the server names.  Apache httpd provides this functionality with
virtual servers.

With no virtual servers, every request to Apache httpd server is
handled alike.  If DNS points multiple server names to the IP Address,
the same content will be served for each of the server names.

The same results happen with virtual servers if the domains are not
defined -- both use the default (first) virtual server and serve the
same content.

The same results happen if multiple server names are aliased in the
virtual server configuration.  The configuration is used for the
specified server names leading to the same content.

The same results happen with virtual servers if the domains use the
same content directory.  The process uses separate configuration
settings to reach the same content.

Unless different virtual servers are configured to use separate
content directories, the same content will be served.  (Rewrite rules
can also serve different content for different server names, but why
bother when virtual servers exist?)
--
Using multiple domains to boost hits is counter-productive.  Traffic
will be split amongst the multiple domains lowering the score for all
the domains.  Better choose a primary domain and use the extra domains
as shortcuts to specific information.  Example:
  http://fgulen.com
can permanently redirect to:
  http://example.com/companies/fgulen/products.html
You can market the short URL while search engines push traffic to the
primary website.
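
A sketch of the shortcut domain, using the target path from the example
above:

```apache
<VirtualHost *:80>
    ServerName fgulen.com
    ServerAlias www.fgulen.com
    # Exact-match redirect for the root; add further paths as needed
    RedirectMatch permanent ^/$ http://example.com/companies/fgulen/products.html
</VirtualHost>
```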

solprovider



Re: [EMAIL PROTECTED] Apache 2.2.8, SNI, SSL and Virtual Hosts

2008-02-17 Thread solprovider
What browser are you testing?  The server may be working fine, but few
browsers are SNI-capable.  From the page you linked:

Supported Browsers
SNI has only recently gained support in browsers. The browsers that
have been confirmed to support SNI by this author are:
* Firefox 2.0.0.12
* Internet Explorer 7.0.5730.11
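
Assuming an SNI-capable build and browser, a hedged sketch of per-name
certificates (certificate paths assumed) would be:

```apache
Listen 443
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName domain1.com
    SSLEngine On
    SSLCertificateFile    /etc/apache2/ssl/domain1.crt
    SSLCertificateKeyFile /etc/apache2/ssl/domain1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName domain2.com
    SSLEngine On
    # Clients without SNI support always receive the first (default) certificate
    SSLCertificateFile    /etc/apache2/ssl/domain2.crt
    SSLCertificateKeyFile /etc/apache2/ssl/domain2.key
</VirtualHost>
```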

solprovider

On 2/17/08, Norman Khine [EMAIL PROTECTED] wrote:
 Hello,
  I have some virtual hosts and would like to run SSL with different
  certificates on each. Having followed the following how-to,
  http://gentoo-wiki.com/HOWTO_Apache_with_Name_Based_Hosting_and_SSL and
  rebuilding apache with SNI support, I am having some issues in that
  domain2.com only returns the server.crt and not the one specified in my
  rule.
  Even if I put the certificate for domain_one, I get the server.crt
  certificate showing.
  Any ideas on how to solve this problem? And how to test SNI is working?
  Norman



Re: [EMAIL PROTECTED] Do NOT add a slash at the end of the directory path.

2008-02-07 Thread solprovider
On 2/6/08, Lloyd Parkes [EMAIL PROTECTED] wrote:
 Why?

  We've all seen this comment for ServerRoot, but does anyone know why it's 
 there?
  What bad things will happen if I put a slash at the end of my ServerRoot?

  The documentation for ServerRoot doesn't say anything about this. Google 
 finds
  nothing obvious.

  I would give it a go to test things out, but I only have access to large
  government web servers, so maybe not.
  Lloyd Parkes

Input validation is typically handled with something like:
if(ServerRoot ends with slash) ServerRoot = ServerRootWithFinalSlashRemoved;

Reading the code using the web svn, I found checks to verify that
ServerRoot exists and is a directory in core.c.  I have yet to find
where the config file is read so the validation could happen before
core.c (maybe in http_config.c?)  I need the code local to read it
properly so I am adding CDT to Eclipse.  I'll post again if someone
more familiar with the code does not.

solprovider



Re: [EMAIL PROTECTED] Do NOT add a slash at the end of the directory path.

2008-02-07 Thread solprovider
On Thu, Feb 7, 2008 at 4:29 PM, Lloyd Parkes [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
   Input validation is typically handled with something like:
   if(ServerRoot ends with slash) ServerRoot = 
 ServerRootWithFinalSlashRemoved;
  I must confess that I thought this was the normal way to do things as well.

  Of course, one problem is that removing a trailing slash may not be the right
  thing to do because Apache runs on more than just Unix. Windows is probably 
 easy
  to handle compared with NetWare and mainframes.
  Lloyd Parkes

The directory separator depends on the OS: slash (*nix and others),
backslash (MS), and colon (Apple).  The period was used by at least
one OS. Most programming languages have a method for discovering the
proper separator e.g. File.separator in Java.  The difficulty is not
discovering the separator character, but handling multibyte
characters.  The final byte may match the separator while part of a
multibyte character so removing that byte would not be correct.

My designs prefer input validation over instructions like your
objection.  I often add code to verify results from my own functions
under the assumption that I or someone else may break things someday.
My research on how httpd handles this will be delayed until next week.
I am still hoping an httpd dev will comment.

solprovider



Re: [EMAIL PROTECTED] Apache Restart

2008-01-26 Thread solprovider
See:
http://httpd.apache.org/docs/2.0/programs/apachectl.html
http://httpd.apache.org/docs/2.0/programs/httpd.html

A one-line restart command needs -DSSL.  Be explicit using the
parameters from the second link:
apachectl -k restart -DSSL

A better solution is to reconfigure for SSL without command line parameters.
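
One hedged way to do that: move the SSL section of httpd.conf out of any
<IfDefine SSL> guard so it is always active, making a plain restart keep
SSL enabled.  Certificate paths are assumptions:

```apache
# httpd.conf sketch: SSL loaded unconditionally, so no -DSSL is needed
LoadModule ssl_module modules/mod_ssl.so
Listen 443

<VirtualHost _default_:443>
    SSLEngine On
    SSLCertificateFile    /usr/local/apache/conf/server.crt
    SSLCertificateKeyFile /usr/local/apache/conf/server.key
</VirtualHost>
```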

solprovider

On 1/26/08, Ashwin  Basagouda Patil [EMAIL PROTECTED] wrote:
 How I can restart the Apache in one line command ?
 I am using this command to start with SSL connections:
 /usr/local/apache/bin/apachectl startssl

 This command restarts without SSL connections:
 /usr/local/apache/bin/apachectl restart

 Please suggest how apache can restart with SSL connections using a one-line 
 command.

 Thanks  Regards
 Ashwin Patil




Re: [EMAIL PROTECTED] Where to download the Apache developer versions?

2008-01-22 Thread solprovider
The latest code is in Subversion.  See:
http://httpd.apache.org/dev/devnotes.html

solprovider

On 1/22/08, Sonixxfx [EMAIL PROTECTED] wrote:
 I am sorry, I should have been more specific. Sometimes when a
 vulnerabilty is found in Apache, the vulnerability is fixed in a
 developer version. Instead of waiting for the stable release, I would
 like to get the developer version to make sure my Apache version isn't
 vulnerable.

 Thanks.
 Regards,
 Ben
 2008/1/22, Sander Temme [EMAIL PROTECTED]:
  Ben,
 
  On Jan 21, 2008, at 10:27 PM, Sonixxfx wrote:
   Can someone tell me where the Apache developer versions can be
   downloaded from? I am unable to find them.
 
  When you download the Apache HTTP Server, version 2.2.8, through
  http://httpd.apache.org/download.cgi
 
  ...you'll find everything you need to serve web sites and dynamic CGI
  content, build an application router with mod_proxy and manipulate
  requests with mod_rewrite and mod_alias, and develop custom modules
  using the C API.
 
  If you want to develop dynamic content in another language, you'll
  need to install the appropriate module like mod_php, mod_perl,
  mod_python or mod_tcl.
  Sander Temme




Re: [EMAIL PROTECTED] Looking for suggestions for URL redirection

2008-01-19 Thread solprovider
On 1/18/08, Myles Wakeham [EMAIL PROTECTED] wrote:
 I have a web application running on Linux in Apache 2, php5.  The
 application manages a media database that is accessed by subscription.  The
 content is served off separate Apache servers – some are located in
 different geographic regions.  All users access the content by common URL,
 such as http://example.com/123/file.avi

 I use .htaccess with mod_rewrite to modify the incoming URL to a PHP script
 such as
 http://example.com/getfile.php?user=123&file=file.avi

 The PHP script is called, logs the request, checks the
 user's subscription rights, and if ok redirects them to the actual file to
 obtain by way of a Header() command (ie. Modifies the HTTP header to do a
 Location: ….  To where the file actually resides).

 Although this works perfectly, the problem is that the user's browser will
 change to reflect the endpoint URL where the file actually resides.  Users
 then simply have been cutting & pasting this URL into their own websites and
 providing unaudited access to the raw file directly and bypassing our
 script.

 I need to find a way to do this without displaying the endpoint URL to the
 user in anyway.  But it has to be able to be done through a PHP script.
 Clearly Header() in PHP isn't cutting it.  I also have to use Apache at each
 endpoint web server location.

 I'm wondering if anyone has a suggestion on how best to do this?  Can I
 install something in .htaccess on the endpoint server end to reject incoming
 requests that are not via authenticated redirects?  Can I use the
 HTTP_REFERRER in some way to ensure that what has come to this server came
 by way of a legitimate referral?

 All ideas are greatly appreciated.

 Thanks
 Myles

You have two issues.
1. How to redirect so browsers do not learn the address of the media servers.
2. How to block direct access to the media servers.

The first issue is the URLs are redirecting to the media server.  You
do not want redirection; you want proxy.  The media should come from
the main server during the request that activated the PHP script.  The
second issue can be solved with firewalls.
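With mod_rewrite's P flag (or mod_proxy directives), the front-end
server can fetch the file itself and relay it, so the browser never
learns the media server's address.  A sketch, with a placeholder path
and hostname, assuming mod_proxy and mod_proxy_http are loaded:

```apache
RewriteEngine On
# Fetch /media/... from the back-end media server and stream it back
# to the client as part of the same request (proxy, not redirect).
RewriteRule ^/media/(.+)$ http://media.example.com/$1 [P]
ProxyPassReverse /media/ http://media.example.com/
```

Combined with a firewall on the media servers, clients can no longer
fetch the files directly.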

This ML would be good for explaining how Apache httpd security could
work with your authorization system.  By your specification, this is a
PHP issue and does not belong on the httpd ML as you prohibited
non-PHP solutions.  This code should send the file as the response to
the current request:
HttpResponse::setFile("http://media.example.com/file.mpg");

solprovider




Re: [EMAIL PROTECTED] Frustrated with rewrite rule, please help...

2007-12-25 Thread solprovider
On 12/23/07, Phil Wild [EMAIL PROTECTED] wrote:
 I am having trouble with my first attempts at using rewrite rules under
 apache. I am not sure where I am going wrong and have spent the morning
 googling to try and figure this out.

 I want to rewrite
 http://www.example.com/doc/bin/view/TWiki/WebHome so it looks
 like http://www.example.com/doc/WebHome where WebHome changes

 I have the following in my conf file:
 ScriptAlias /doc/bin /opt/twiki/bin
 Alias /doc/pub /opt/twiki/pub
 Alias /doc /opt/twiki

 RewriteEngine On
 RewriteLogLevel 9
 Directory /opt/twiki/bin
 RewriteRule ^doc/([A-Z].*) /doc/bin/view/$1 [PT] [L]

 Any clues as to what I am doing wrong?
 The error log does not contain any log entries relating to rewriting so I
 think I am completely missing the mark...
 Phil

The RewriteRule pattern should begin with a slash.  Multiple flags are
separated with commas, not by adding additional bracketed parameters
(which will be ignored or cause an error):
   RewriteRule ^/doc/([A-Z].*)$ /doc/bin/view/$1 [PT,L]

The Alias commands serve content outside DocumentRoot.  The Alias
commands belong after the RewriteRules (just for logic; I do not know
whether the order matters).  Parameters do not require quotes.  The
third command makes the second line redundant.

Needing RewriteRule's [L] flag indicates poor design (but is sometimes
needed because the web administrator does not control the entire
design).  For sanity, do not use the same identifier for multiple
purposes.  You are using /doc/ as both the external URL path and the
indicator for internal redirection.  This substitution:
   /doc/* -> /doc/abc/*
allows for an infinite loop.  Using:
   /doc/* -> /mydocs/abc/*
cannot create infinite loops, does not require the [L] flag, and does
not affect anything outside your control -- just the Alias commands.

None of these commands belong with a Directory section.  httpd cannot
decide which directory to use until after the Rewrite and Alias
commands have been processed.

RewriteEngine On
RewriteLogLevel 9
RewriteRule ^/doc/([A-Z].*)$ /tdocs/bin/view/$1 [PT]
RewriteRule ^/doc/(.*)$ /tdocs/$1 [PT]
ScriptAlias /tdocs/bin /opt/twiki/bin
Alias /tdocs /opt/twiki

HTH,
solprovider




Re: [EMAIL PROTECTED] mod_rewrite exception

2007-12-25 Thread solprovider
On 12/24/07, Thomas Hart [EMAIL PROTECTED] wrote:
 As some of you may be aware, apache has a bug in how it handles pooled
 connections to an ldap server (to be fair, it's not an apache bug, it's
 a problem with windows active directory acting differently than it's
 supposed to). The gist of it is that if apache doesn't connect to a
 Windows 2003 Active Directory server to do an ldap auth for 10 minutes,
 then the connection times out. However the communication between the
 ldap server and apache is not handled correctly and apache bounces the
 request with a 500 internal server error. There are a couple patches on
 the bugzilla for this, however re-compiling apache is not an option for
 me at this time unfortunately.

 My current idea for a workaround is this. I'd like to set up a cron job
 (the windows equivalent anyway) that connects to the apache server and
 sends http headers with auth info every 5 minutes, so that the apache
 server is reusing that connection every 5 minutes, thus keeping it from
 reaching the fail state. I've crafted a script that sends pre-crafted
 http headers to the web server, containing the auth information. Here's
 my issue:

 I have a rewrite rule

 RewriteEngine on
 RewriteCond %{SERVER_PORT} !^443$
 RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

 that takes all requests and changes them to https (seems to even if the
 web address is https). This causes apache to respond with a page has
 moved page, and it doesn't request the auth info. What I need to do is
 this (sorry for the long explanation).

 I need to modify my rewrite rule, so that it excludes one page
 (https://server/testing/test.php). This way I can request that page, and
 apache will pay attention to the auth headers, and my goal will be
 accomplished :-) Any regex/rewrite gurus that can point me in the right
 direction?

 TIA
 Tom Hart

Add a condition to exclude the specific URL.  (A protocol test such as
matching %{SERVER_PROTOCOL} against https would not work: that
variable holds the protocol version, e.g. HTTP/1.1, for both schemes.
The existing port test already distinguishes them.)

RewriteEngine on
RewriteCond %{SERVER_PORT} !^443$
RewriteCond %{REQUEST_URI} !^/testing/test\.php$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

With the exclusion in place, requests for /testing/test.php stay on
the original port and httpd will process your auth headers.

HTH,
solprovider




Re: [EMAIL PROTECTED] How to make a website like ftpsite?

2007-12-10 Thread solprovider
On 12/10/07, Matus UHLAR - fantomas [EMAIL PROTECTED] wrote:
 On 30.11.07 18:21, goommy wrote:
  What i want to do is make a website like ftpsite: after open the URL i can
  see the file list and i can download all files!

  First.i use Options Indexes and DirectoryIndex none list the all file!
  By default, when i click on the file, some types can brows 
  directly(just like *.html *.txt *.gif *.jpg ...) and some other types will 
  prompt download dialog for choosen(like *.tar.gz *.mp3 and so on). So i 
  want to try when click on *.html it also appear the prompt dialog for 
  choosen!
  And i found "If the media type remains unknown, the recipient SHOULD
  treat it as type application/octet-stream." in rfc2616! So i think maybe
  i can get the result by change the response header! (qu.1: Is it a right
  idea?)
  So i set   <VirtualHost 125.35.14.229:80>
    DocumentRoot /opt/webdata/www2cn
    ServerName www2.putiantaili.com.cn
    DirectoryIndex index.none
    <Directory /opt/webdata/www2cn/download-dir>
    # try to: all files download only, can't browse with IE
    Options Indexes FollowSymLinks
    Header set Content-Type application/octet-stream
    Header append 123-Type 123
    AllowOverride All
    Order deny,allow
    allow from all
    </Directory>
   </VirtualHost>
  But the result is failure! (very depressed!)
  And the reponse header is :
  HTTP/1.0 200 OK
  Server: Apache/2.2.4 (Unix) mod_ssl/2.2.4 OpenSSL/0.9.7a
  ETag: 97b7d-2d-75d78680
  Accept-Ranges: bytes
  Content-Type: text/plain (why not application/octet-stream)
  123-Type: 123  (123-type header set )

 Try using Content-Disposition: attachment instead of changing
 Content-Type.
 website like ftpsite? That tells nothing. What do you mean?

The OP wants to display the contents of a directory as a list of files
like an FTP server.  Options Indexes will do that if no index.html
file exists in the specified directory.
The OP added many extra lines of configuration and managed to break it.

From:  http://httpd.apache.org/docs/2.2/mod/core.html#options
 <Directory /web/docs>
    Options Indexes FollowSymLinks
 </Directory>

The name of the directory should not be in quotes.  Directory listings
require mod_autoindex.  Delete the extra configuration.
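If the OP also wants the download dialog for types browsers normally
render, Matus's Content-Disposition suggestion can be combined with
the minimal index configuration.  A sketch, assuming mod_autoindex and
mod_headers are loaded; the path comes from the OP's config:

```apache
<Directory /opt/webdata/www2cn/download-dir>
    Options Indexes FollowSymLinks
    # Ask the browser to save every file instead of rendering it inline
    Header set Content-Disposition attachment
</Directory>
```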

I have not tried Directory elements inside VirtualHosts.  The
documentation implies that is valid.  Test using the main host; worry
about the virtual host after proving your configuration is correct for
the main host.

HTH,
solprovider




Re: [EMAIL PROTECTED] Geting rid of the ? in the URL

2007-12-05 Thread solprovider
Read the Drupal documentation for how to fix the configuration of Drupal:
http://drupal.org/node/15365
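For reference, the clean-URL rules that page documents amount to
something like the following (a sketch assuming Drupal lives in
/drupal, mod_rewrite is loaded, and AllowOverride permits .htaccess
rewrites; the page linked above is authoritative):

```apache
# .htaccess in the Drupal directory
RewriteEngine On
RewriteBase /drupal
# Route anything that is not a real file or directory to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
```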

solprovider

On 12/5/07, Rashmi Amlani [EMAIL PROTECTED] wrote:
 I am a total newbie to apache. Have recently installed Apache 2.2, mySql and
 Drupal on  winxp and everything seems to be working fine However I cannot
 figure out a way of getting rid of the ? in the url so like instead of
 http://localhost/drupal/?q=manufacturer  I would like
 http://localhost/drupal/manufacturer
 I have tried to read and follow all the clean url instructions that I could
 fine online but nothing seems to be working. Pls help!
 Thanks




Re: [EMAIL PROTECTED] Weird NameVirtualHost problem

2007-11-02 Thread solprovider
1. Which VirtualHosts work should change when you change the order of
the entries.  Are you certain you are editing the correct
configuration file?  Rename the configuration file and restart the
server to verify.  Apache httpd will error without the configuration
file.  If the server starts properly, check your init scripts for the
-f option.

2. Do approximately the same number of entries work?  What OS are you
using?  VirtualHosts can require file handles, especially with log
files specific to each virtual host; did the server exceed its limit?

3. Maybe a typo in httpd.conf.  Please post the NameVirtualHost and a
few VirtualHost sections, especially the last working one and the
first broken one.  You can change the server names to example.com and
obscure the IP addresses.

For easier configuration and less chance of accidentally destroying
the main configuration file, you can replace the VirtualHost
sections with
  Include conf/virtualhosts.conf
and put all the VirtualHost entries in the new file.

solprovider


On 11/2/07, tech1 [EMAIL PROTECTED] wrote:
 I'm running apache 1.3 on a FreeBSD box. I've just started using
 NameVirtualHosts for about 100 web sites. There are other web sites on the
  server using individual IP addresses. The <VirtualHost IP.IP.IP.IP>
 containers are all identical except for the info pertinent to each domain
 name and directory, they were ALL generated by a script, including the
 old ones. I've tested them and they are accurate. I rearranged the list
 in the conf file to make sure there wasn't an issue with one VirtualHost
 container. The problem: the first 10 or so that I set up 2 months ago work
 just fine, anything added after that doesn't work at all. I've checked DNS
 on the names that aren't working, and I've also run an 'httpd -S' and can
 see all of the NameVirtualHost domains with no errors. I've restarted the
 server and apache multiple times. I can't get anything other than the first
 batch to work.

 I don't know what else to do I thought maybe it was a DNS cache related
 thing so I waited a couple days to see if they'd kick in. I tried removing
 the whole NameVirtualHost block and restarting the server, then putting it
 all back and restarting again. I tried rearranging them all so that others
 would be listed before the ones that work. Nothing I've done makes any
 difference.

 I don't want to waste 100 IP addresses, anyone have any ideas or
 suggestions? Any other places I might look?




Re: [EMAIL PROTECTED] Very Puzzling Question About mod_rewrite

2007-11-02 Thread solprovider
On 11/1/07, Jon Forrest [EMAIL PROTECTED] wrote:
 (This is on Solaris 10-x86 with Apache 2.2.4)
 When I give the URL
 1)  http://www-demo.cchem.berkeley.edu/username/public_html

 everything works fine. However, for historical reasons,
 I can't require that people give the /public_html at
 the end of the URL. In other words, I want users to
 be able to enter
 2) http://www-demo.cchem.berkeley.edu/username/

 to see the same results as produced by URL #1 .
 So, I decided to try mod_rewrite. I use the following
 in the proper VirtualHost section of my httpd.conf file:

 RewriteEngine on
 RewriteRule  ^(.+)$  $1/public_html/  [L]

 This doesn't work. The client sees a 403 Forbidden message.
 The apache log says:
 Directory index forbidden by Options directive:
 /users/chemweb/apache2/http-cchem/htdocs/username/public_html/

 I don't understand why I'm getting this message when
 URL #1 above works.

 The rewrite log shows the following (I added the #numbers):

 #1 (2) init rewrite engine with requested uri /username/
 #2 (3) applying pattern '^(.+)$' to uri '/username/'
 #3 (2) rewrite '/username/' - '/username//public_html/'
 #4 (2) local path result: /username//public_html/
 #5 (2) prefixed with document_root to
 /users/chemweb/apache2/http-cchem/htdocs/username/public_html/
 #6 (1) go-ahead with
 /users/chemweb/apache2/http-cchem/htdocs/username/public_html/ [OK]
 #7 (2) init rewrite engine with requested uri /username/index.html
 #8 (3) applying pattern '^(.+)$' to uri '/username/index.html'
 #9 (2) rewrite '/username/index.html' - '/username/index.html/public_html/'
 #10 (2) local path result: /username/index.html/public_html/
 #11 (2) prefixed with document_root to
 /users/chemweb/apache2/http-cchem/htdocs/username/index.html/public_html/
 #12 (1) go-ahead with
 /users/chemweb/apache2/http-cchem/htdocs/username/index.html/public_html/
 [OK]

 Line #6 above looks correct to me so I don't understand why mod_rewrite
 tried the other possibilities.

 I'm guessing all these problem have something to do with directory
 protection but if this is true then I don't understand why URL #1
 works.

 Any ideas?

 Cordially,
 Jon Forrest

Try some basic settings before mod_rewrite.
- DirectoryIndex sets the file that Apache will serve if a directory
is requested.
- UserDir is the name of the directory appended onto a user's home
directory if a ~user request is received.
   UserDir public_html
   DirectoryIndex index.html
   http://www-demo.cchem.berkeley.edu/~username
will serve:
   /home/username/public_html/index.html

If you do not want the tilde (~) in the URLs, use mod_rewrite to add
it internally.  (Use PT, not P: the P flag would force a proxy
request, while PT passes the rewritten path on to mod_userdir.)
   RewriteEngine On
   RewriteRule ^/(.*)$ /~$1 [PT]

If you want to continue with the ideas in the long thread, think about
why httpd is attempting to find username directories under
/users/chemweb/apache2/htdocs/cchem.  Is that what you want?

solprovider




Re: [EMAIL PROTECTED] Weird NameVirtualHost problem

2007-11-02 Thread solprovider
You did not test #1.  If you run httpd -S without the correct -f
option, you are testing the default httpd.conf.  (FreeBSD's Apache
httpd defaults to /usr/local/etc/apache/httpd.conf.)  That default
file will not affect your production server if your init scripts use
the -f option.

The best scenario matching the symptoms you described is that the
production server is not using the configuration file you have been
editing.

As suggested in my previous post, rename the file and restart httpd to
prove the file is the one being used.  If Apache httpd runs properly
without that file, search for the configuration file being used.

solprovider

On 11/2/07, tech1 [EMAIL PROTECTED] wrote:
 Thanks.

 1) I'm sure it's the correct conf file, I removed the entire
 NameVirtualHost section and checked with httpd -S to see that they were
 gone after a -HUP. Then I put them back and checked it again.

 2) I mentioned I'm using FreeBSD. It is always the same hosts that work, I
 added them a couple months ago. I had some trouble then also, but it just
 started working after a reboot, one of many. I checked file handles and
 they are around 8k.

 3) I see no errors, and the problem persists even with the hosts
 rearranged. And they were all generated by a script and look ok. Here's a
 sample from the conf file:


 NameVirtualHost 123.456.789.10
 <VirtualHost 123.456.789.10>
 ServerName a.com
 ServerAlias www.a.com a.com
 DocumentRoot /var/www/htdocs/sites/a.com
 ErrorLog /var/www/logs/a.com-error_log
 TransferLog /var/www/logs/a.com-access_log
 </VirtualHost>
 <VirtualHost 123.456.789.10>
 ServerName m.com
 ServerAlias www.m.com m.com
 DocumentRoot /var/www/htdocs/sites/m.com
 ErrorLog /var/www/logs/m.com-error_log
 TransferLog /var/www/logs/m.com-access_log
 </VirtualHost>
 .
 At 12:41 PM 11/2/2007, you wrote:
 1. Which VirtualHosts work should change when you change the order of
 the entries.  Are you certain you are editing the correct
 configuration file?  Rename the configuration file and restart the
 server to verify.  Apache httpd will error without the configuration
 file.  If the server starts properly, check your init scripts for the
 -f option.
 
 2. Do approximately the same number of entries work?  What OS are you
 using?  VirtualHosts can require file handles, especially with log
 files specific to each virtual host; did the server exceed its limit?
 
 3. Maybe a typo in httpd.conf.  Please post the NameVirtualHost and a
 few VirtualHost sections, especially the last working one and the
  first broken one.  You can change the server names to example.com and
  obscure the IP addresses.
 
 For easier configuration and less chance of accidentally destroying
 the main configuration file, you can replace the VirtualHost
 sections with
Include conf/virtualhosts.conf
 and put all the VirtualHost entries in the new file.
 
 solprovider
 
 
 On 11/2/07, tech1 [EMAIL PROTECTED] wrote:
   I'm running apache 1.3 on a FreeBSD box. I've just started using
   NameVirtualHosts for about 100 web sites. There are other web sites on the
    server using individual IP addresses. The <VirtualHost IP.IP.IP.IP>
   containers are all identical except for the info pertinent to each domain
   name and directory, they were ALL generated by a script, including the
   old ones. I've tested them and they are accurate. I rearranged the list
   in the conf file to make sure there wasn't an issue with one VirtualHost
   container. The problem: the first 10 or so that I set up 2 months ago work
   just fine, anything added after that doesn't work at all. I've checked DNS
   on the names that aren't working, and I've also run an 'httpd -S' and can
   see all of the NameVirtualHost domains with no errors. I've restarted the
   server and apache multiple times. I can't get anything other than the 
   first
   batch to work.
  
   I don't know what else to do I thought maybe it was a DNS cache 
   related
   thing so I waited a couple days to see if they'd kick in. I tried removing
   the whole NameVirtualHost block and restarting the server, then putting it
   all back and restarting again. I tried rearranging them all so that others
   would be listed before the ones that work. Nothing I've done makes any
   difference.
  
   I don't want to waste 100 IP addresses, anyone have any ideas or
   suggestions? Any other places I might look?




Re: [EMAIL PROTECTED] routing requests to two different servers

2007-10-31 Thread solprovider
On 10/31/07, Boyle Owen [EMAIL PROTECTED] wrote:
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 
  Splitting a stream is useful.  Older people remember when forms were
  sent in triplicate.  Then office workers made a copy of every paper to
  cross their desks.  Now smart people keep a copy of every file passed
  to others.
 
  Splitting a stream is not unusual.  Unix has the tee command.
  Apache Cocoon has the TeeTransformer.  Apache httpd copies part of
  each request into the log before fulfilling the request.
 
  Splitting a stream is easy:
   while((b = in.read()) != -1){ out1.write(b); out2.write(b); }

 The *concept* isn't unusual - what's complicated is applying the concept
 to *HTTP*. I think, rather than having to write a web-server, the OP was
 hoping it could be done using config directives - maybe something like:

 ProxyPassSplitStream / http://server1
 ProxyPassSplitStream / http://server2

 Rgds,
 Owen Boyle

  Discarding the output from the test server is difficult, but not
  impossible.  When configuring the server for the above code:
 Connection1 = client--splitter
 Connection2 = splitter--production server
 Connection3 = splitter--test server
  The splitter should compare responses from the production and test
  servers and log the differences.
 
  I am uncertain this functionality should be added to Apache httpd.  I
  recommend writing a simple fast dedicated server to handle splitting,
  logging, and comparing.  That server could be added and removed from
  the production stream without affecting the other servers beyond an IP
  address configuration change.

Most configuration applies to the primary client-serving stream so
configuration for splitting would only need settings for additional
streams:
Can we assume all requests should be duplicated?
What configuration is useful?  Matching URLs?  Matching protocols?
Should the command be allowed within a VirtualHost or other container?
Should the duplication happen before or after URL rewriting?
Should the duplication happen before, after, or during proxy redirects?
   ProxyPassDuplicate / http://server2
Should this function be a separate module since this function is
rarely needed and can be outside mod_proxy?

Assuming a dedicated Tee server eliminates most of the issues:
   ProxyPassReverse /   http://myappserver.example.com/
   Duplicate mytestserver.example.com

Would this module record every response from additional servers?
Would this module allow recording of responses from the primary stream?
Would the module handle comparison?
What are the security implications?

solprovider
(Any poor ideas are due to having just awoken.)




Re: [EMAIL PROTECTED] routing requests to two different servers

2007-10-30 Thread solprovider
On 10/30/07, Boyle Owen [EMAIL PROTECTED] wrote:
  -Original Message-
  From: Wm.A.Stafford [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, October 30, 2007 2:04 PM
  To: users@httpd.apache.org
  Subject: Re: [EMAIL PROTECTED] routing requests to two different servers
 
  Good point.  The test driver mentioned is a 'fake client'
  that will send
  a request to the server under test, get the reply from the
  server under
  test and do some sort of analysis.  The reply from the
  production server
  would follow the same route as always back to the real client.
 
  We already do this in a rather crude way by using a file of URLs
  gathered from production to drive testing but I think a
  simpler and more
  realistic test scheme would be to just split the stream of production
  requests and direct one stream to the servers under test.

 split the stream? That's like delivering a single letter to two
 addresses :-)

 Think of HTTP as old-fashioned mail-order; the client sends off a
 single, sealed envelope containing an order form. The envelope passes
 through various post offices and mail vans until it arrives at the
 warehouse. The warehouse packs the goods and sends them back over the
 postal network to the client's house. How do you split the stream?

 You'd need a worker in the post office opening orders and photocopying
 them then posting the copy on to a test warehouse. He'd also have to
 make sure he could intercept parcels coming back the other way so they
 didn't get back to the original client. In any case, this is a bespoke,
 standards-violating procedure that could only be done in a test
 environment.

 Rgds,
 Owen Boyle
 Disclaimer: Any disclaimer attached to this message may be ignored.

  Thanks for your insight,
  -=bill
 
  Boyle Owen wrote:
   -Original Message-
   From: Wm.A.Stafford [mailto:[EMAIL PROTECTED]
   Sent: Monday, October 29, 2007 6:18 PM
   To: users@httpd.apache.org
   Subject: [EMAIL PROTECTED] routing requests to two different servers
  
   I would like to have Apache send incoming requests to two
  locations.
   Our Apache is currently configured as a reverse proxy to send
   requests
   to a production server.  I would like to send the same
   request to a test
   driver that will forward the request to one or more servers
   undergoing
   testing so these servers can be driven by the same load
  seen by the
   production server.
  
   Ok - but where is the response supposed to go?
  
   Remember that HTTP is about a client sending a *request*
  and the server
   returning a *response*. The whole idea implies a one-to-one mapping
   between client and server. If you somehow clone a request
  and fan it out
   to another server, you create two responses to the same
  request. How is
   the client supposed to handle this? If you're talking about a purely
   research environment, you could invent a client that handles two
   responses, but no real-world browser can handle this.
  
   Maybe you plan to trap the response from the test server
  and short it to
   ground? Then you'd need to configure the test server (at the TCP/IP
   level - not HTTP) to route all outgoing traffic to a machine that
   terminated the TCP/IP traffic (ie, acknowledged it) but
  didn't deliver
   it further. That would be a router, I guess... by now you're into
   network-layer programming and have left HTTP behind.
  
   Rgds,
   Owen Boyle
   Disclaimer: Any disclaimer attached to this message may be ignored.
  
   The test driver will reside on a completely different machine
   and have
   no association with the Apache that is forwarding requests
  so I don't
   think a simple reverse proxy configuration will handle this.
  
   Any guidance or ideas appreciated,
   -=beeky

Splitting a stream is useful.  Older people remember when forms were
sent in triplicate.  Then office workers made a copy of every paper to
cross their desks.  Now smart people keep a copy of every file passed
to others.

Splitting a stream is not unusual.  Unix has the tee command.
Apache Cocoon has the TeeTransformer.  Apache httpd copies part of
each request into the log before fulfilling the request.

Splitting a stream is easy:
int b; while((b = in.read()) != -1){ out1.write(b); out2.write(b); }

Discarding the output from the test server is difficult, but not
impossible.  When configuring the server for the above code:
   Connection1 = client--splitter
   Connection2 = splitter--production server
   Connection3 = splitter--test server
The splitter should compare responses from the production and test
servers and log the differences.
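
The byte-copy loop above can be fleshed out into a runnable sketch.  This
Python version uses in-memory streams as stand-ins for the three
connections; a real splitter would read and write sockets instead:

```python
import io

def tee(source, out1, out2, bufsize=4096):
    # Copy every chunk read from the source to both output streams.
    while True:
        chunk = source.read(bufsize)
        if not chunk:
            break
        out1.write(chunk)
        out2.write(chunk)

request = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
to_production = io.BytesIO()   # stands in for Connection2
to_test = io.BytesIO()         # stands in for Connection3
tee(io.BytesIO(request), to_production, to_test)
```

Both outputs end up byte-identical to the input, which is all the
splitting step has to guarantee; comparing the two *responses* is the
separate job described above.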

I am uncertain this functionality should be added to Apache httpd.  I
recommend writing a simple fast dedicated server to handle splitting,
logging, and comparing.  That server could be added and removed from
the production stream without affecting the other servers beyond an IP
address configuration change.

solprovider

Re: [EMAIL PROTECTED] unexpected EOF on client side when Apache sends applet class

2007-10-26 Thread solprovider
This problem affects multiple files and affects all files larger than
256 bytes.  This proves the problem is not the class file.  Even if
Apache httpd has a setting to limit the size of outbound transfers, I
doubt you accidentally configured it.

This issue sounds like a network problem, usually bad hardware -- a
network card, switch, or router is broken.  This can also be caused by
bad drivers, especially when bonding network cards.   Since wget
failed from the server, the problem must be in the server (eliminating
external switches and routers.)

Slax is a small live-CD Linux distribution based on Slackware.  Even
though a server edition exists, small size is the priority.  You
might try a more full-sized server distro with more drivers.

The next troubleshooting step is to prove the problem is or is not
with Apache httpd.  Test with another server program such as Tomcat or
Jetty.  I recommend jetty-6.1.5 because jetty-6.1.6rc0 is broken and
Tomcat is larger.  Just download from:
   http://dist.codehaus.org/jetty/
Unzip the download, copy your class file to the test directory, and:
   wget http://localhost:8080/test/helloapp.class

solprovider

On 10/25/07, steve sawtelle [EMAIL PROTECTED] wrote:
 In response to solprovider:
 I'm compiling on the server, so that should not be a problem?

 and to Michael:
 In httpd.conf searching for 'timeout' should find a couple. Usually the
 main timeout should be set for 300. 

 I need to check that tomorrow.
 I also assume the system lets non-HTTP traffic of more than 256 bytes through
 without problem? If you have a non-applet file, will larger files transfer okay?

 I actually tested the problem with text files using wget, so FTP transfers
 are affected as well.
 I then verified the problem by making an empty applet to get its size below
 256 bytes and that transferred via the browser fine.

 I'm pretty certain this problem is due to data over 256 bytes. Is there
 anything in Apache that would
 create that limit? Or isn't there a mechanism in the server to break a large
 file up into several packets?
 On 10/25/07, steve sawtelle wrote:
  Didn't know about wget, thanks! I just downloaded it and tried it:
  wget 192.168.1.159/helloapp.class
  It tried many times without success - here are the last two:
  Retrying.
  --14:04:40-- http://192.168.1.159/helloapp.class
  (try:19) = `helloapp.class.19'
  Connecting to 192.168.1.159:80... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 532 [application/octet-stream]
 
  0% [ ] 0
  --.--K/s
 
  14:04:40 (0.00 B/s) - Connection closed at byte 0. Retrying.
 
  --14:04:40-- http://192.168.1.159/helloapp.class
  (try:20) = `helloapp.class.19'
  Connecting to 192.168.1.159:80... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 532 [application/octet-stream]
  helloapp.class.19 has sprung into existence.
  Giving up.
 
  - The packets look like the ones I was getting with the html code. It
 seems
  to send back all the info except for the data.
 
  I can upload other things in the directory. Ahhh - I made a small text
 file
  and named it 'test.class' to see if it was a problem with the extension. That
  worked, but if I made the file larger - 475 bytes - it fails.
 
  So I'm thinking it has a problem with files larger than a few hundred
 bytes.
  Is there a setting for max data length or packet size?
 
  - Steve
 
  Michael McGlothlin wrote:
  Can you wget the applet file?
   Apache 2.2.4 on Slax Linux, clients are win 2000 and win 98 with
   Mozilla and IE
  
   I'm trying to get Apache on a Slax Linux machine to serve an applet to
   a browser on a Windows PC.
  
   The html loads and displays; the applet is a simple 'hello world' app
   that runs in appletviewer fine. html and class are in the htdocs
   directory. httpd.conf was edited to allow access to all.
  
   The PC requests the html, Apache delivers, ack, ack etc. PC requests
   helloapp.class, Apache responds, but the PC shows errors in Java
   Console and the applet does not run (applet notinited).
  
   errors:
   load:class helloapp not found.
   java.lang.ClassNotFoundException: helloapp
   at ..
   at ...
   Caused by: java.io.IOException: unexpected EOF
   at ...
  
   The tcp packet returned by Apache is:
  
 HTTP/1.1 200.OK..
 Date: Thu, 25 Oct 2007 10:16:04 GMT
 Server: Apache/2.2.4.(Unix) mod_ssl/2.2.4 OpenSSL/0.9.8b.DAV/2
 Last-Modified: Thu, 25 Oct 2007 09:58:21.GMT
 ETag: 220c-214-462de940
 Accept-Ranges: bytes
 Content-Length: 532
 Keep-Alive: timeout=5, max=100
  
   The packet is 372 bytes but it says the Content-Length is 532
  
   This and the error suggest that the actual applet class is not being
   included in the packet.

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Re: [EMAIL PROTECTED] Avoiding conditional requests

2007-10-25 Thread solprovider
On 10/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Solprovider,
 Thanx for your persistence :)

 i created the test.html page and ran it only to get the same results:
 first request:
 GET /testpage.html HTTP/1.1
 GET /Eip/Static/Images/Logos/siteLogo.gif HTTP/1.1
 and refreshing...:

The problem is your test.  You are refreshing the page using the
Refresh/Reload command, which forces reloading from the server.  A
proper test must open the page without forcing a reload.  The best
method is to follow a link to the page.  Another method is to set the
browser's Address bar, then press return or click the Go button.

solprovider




Re: [EMAIL PROTECTED] unexpected EOF on client side when Apache sends applet class

2007-10-25 Thread solprovider
How did you transfer the class file to the server?  Did you use ASCII
mode FTP instead of binary mode?  The unexpected file size may be
because the file has been corrupted.  That would cause EOF errors.
Verify the class file on the server is correct -- maybe comparing MD5
checksums.  Or transfer the source and compile on the server.

Check out PSCP from the putty suite of network tools for transferring
files from Windows to Linux (openssh).  I have a batch file in the
SendTo directory so I can right-click/send any file to my Linux home
directory.

solprovider

On 10/25/07, steve sawtelle [EMAIL PROTECTED] wrote:
 Didn't know about wget, thanks! I just downloaded it and tried it:
 wget 192.168.1.159/helloapp.class
 It tried many times without success - here are the last two:
 Retrying.
 --14:04:40--  http://192.168.1.159/helloapp.class
   (try:19) = `helloapp.class.19'
 Connecting to 192.168.1.159:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 532 [application/octet-stream]

  0% [ ] 0
 --.--K/s

 14:04:40 (0.00 B/s) - Connection closed at byte 0. Retrying.

 --14:04:40--  http://192.168.1.159/helloapp.class
   (try:20) = `helloapp.class.19'
 Connecting to 192.168.1.159:80... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 532 [application/octet-stream]
 helloapp.class.19 has sprung into existence.
 Giving up.

 - The packets look like the ones I was getting with the html code. It seems
 to send back all the info except for the data.

 I can upload other things in the directory. Ahhh - I made a small text file
 and named it 'test.class' to see if it was a problem with the extension. That
 worked, but if I made the file larger - 475 bytes - it fails.

 So I'm thinking it has a problem with files larger than a few hundred bytes.
 Is there a setting for max data length or packet size?

 - Steve

 Michael McGlothlin [EMAIL PROTECTED] wrote:
  Can you wget the applet file?
  Apache 2.2.4 on Slax Linux, clients are win 2000 and win 98 with
  Mozilla and IE
 
  I'm trying to get Apache on a Slax Linux machine to serve an applet to
  a browser on a Windows PC.
 
  The html loads and displays; the applet is a simple 'hello world' app
  that runs in appletviewer fine. html and class are in the htdocs
  directory. httpd.conf was edited to allow access to all.
 
  The PC requests the html, Apache delivers, ack, ack etc. PC requests
  helloapp.class, Apache responds, but the PC shows errors in Java
  Console and the applet does not run (applet notinited).
 
  errors:
  load:class helloapp not found.
  java.lang.ClassNotFoundException: helloapp
  at ..
  at ...
  Caused by: java.io.IOException: unexpected EOF
  at ...
 
  The tcp packet returned by Apache is:
 
HTTP/1.1 200.OK..
Date: Thu, 25 Oct 2007 10:16:04 GMT
Server: Apache/2.2.4.(Unix) mod_ssl/2.2.4 OpenSSL/0.9.8b.DAV/2
Last-Modified: Thu, 25 Oct 2007 09:58:21.GMT
ETag: 220c-214-462de940
Accept-Ranges: bytes
Content-Length: 532
Keep-Alive: timeout=5, max=100
 
  The packet is 372 bytes but it says the Content-Length is 532
 
  This and the error suggest that the actual applet class is not being
  included in the packet.




Re: [EMAIL PROTECTED] Avoiding conditional requests

2007-10-24 Thread solprovider
Check your browser settings:

Internet Explorer 6.0 - Tools - Internet Options - General - Temporary
Internet Files - Settings - Check for newer versions of stored pages:
__ Every visit to the page
__ Every time you start Internet Explorer
__ Automatically
__ Never

Mozilla 1.7.3 - Edit - Preferences - Advanced - Cache - Compare the
page in the cache to the page on the network:
__ Every time I view the page
__ When the page is out of date
__ Once per session
__ Never

The server-specified expiration period should only affect MSIE's
"Automatically" and Mozilla's "When the page is out of date" options.
As a web developer, you are very likely to have set your browser to
the top most-frequently-updated setting to force the appearance of
changes from your current work (and may still sometimes need to
manually clear the cache.)  The "every time" settings will send the
conditional request for every request while ignoring expiration
datetimes.

You can see the Expires header so you know the server is configured as
you desire.  Your browser must be configured to use the setting to
prove your browser works as desired.

solprovider


On 10/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I've been trying to improve my webapp's performance by telling Apache
 [2.2.4] to force clients to cache static content using mod_expires.
 I'm basically trying to avoid having the client browser resend the
 'if-modified' conditional request for the static content upon a refresh or a
 revisit to the site, thus saving the round-trip time and having the page
 presented straight from cache . For this, I've added the following lines to
 my httpd.conf:

 LoadModule expires_module modules/mod_expires.so
 ExpiresActive on
 ExpiresDefault "access plus 2 hours"
 ExpiresByType image/gif "access plus 7 days"
 ExpiresByType image/jpeg "access plus 7 days"
 ExpiresByType text/css "access plus 7 days"
 ExpiresByType application/x-javascript "access plus 12 hours"

 The problem is that for some reason this doesn't seem to work, and the
 browser [ie6] still sends the conditional requests, disregarding the
 expiration directive.
 here is an example HTTP traffic capture (using Fiddler):

 First Request

 GET /Eip/Static/Images/Logos/siteLogo.gif HTTP/1.0
 Accept: */*
 Referer: http://qcxp2/Eip/bin/ibp.jsp?ibpPage=HomePage;
 Accept-Language: he
 Proxy-Connection: Keep-Alive
 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;
 InfoPath.1; FDM; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)
 Host: qcxp2
 Cookie: JSESSIONID=4662B8AA7EC6B9AE09258672CBDCE54C

 First Response

 HTTP/1.1 200 OK
 Date: Wed, 24 Oct 2007 08:37:29 GMT
 Server: Apache/2.2.4 (Win32) mod_jk/1.2.25
 Last-Modified: Mon, 17 Sep 2007 09:00:24 GMT
 ETag: 8e5a-782-8f6da00
 Accept-Ranges: bytes
 Content-Length: 1922
 Cache-Control: max-age=604800
 Expires: Wed, 31 Oct 2007 08:37:29 GMT
 Connection: close
 Content-Type: image/gif

 GIF89a]

 Second Request (the one that shouldn't actually occur at all)

 GET /Eip/Static/Images/Logos/siteLogo.gif HTTP/1.0
 Accept: */*
 Referer: http://qcxp2/Eip/bin/ibp.jsp?ibpPage=HomePage;
 Accept-Language: he
 Proxy-Connection: Keep-Alive
 If-Modified-Since: Mon, 17 Sep 2007 09:00:24 GMT
 If-None-Match: 8e5a-782-8f6da00
 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;
 InfoPath.1; FDM; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)
 Host: qcxp2
 Pragma: no-cache
 Cookie: JSESSIONID=4662B8AA7EC6B9AE09258672CBDCE54C

 Second Response

 HTTP/1.1 304 Not Modified
 Date: Wed, 24 Oct 2007 08:41:12 GMT
 Server: Apache/2.2.4 (Win32) mod_jk/1.2.25
 Connection: close
 ETag: 8e5a-782-8f6da00
 Expires: Wed, 31 Oct 2007 08:41:12 GMT
 Cache-Control: max-age=604800

 Any ideas?
  Uri




Re: [EMAIL PROTECTED] Avoiding conditional requests

2007-10-24 Thread solprovider
Why is the client sending HTTP/1.0?  MSIE 6.0 and modern browsers
should be sending HTTP/1.1.

Why does the second request contain the header "Pragma: no-cache"?  That
implies something is telling the client that the graphic should not
depend on the cache.  The JSP is probably setting no-cache for the
page (as an HTTP header or an HTML HEAD META HTTP-EQUIV command) and
the child requests inherit the setting.

!!! Test with a static no-querystring URL:
   http://qcxp2/mytestpage.html

FILE testpage.html:
<HTML><BODY><IMG SRC="/Eip/Static/Images/Logos/siteLogo.gif"></BODY></HTML>

I rewrote the cache functions for Lenya to bypass caching when a
querystring exists because a querystring may completely change the
response:
   /homepage
   /homepage?ShowTheLoginScreen
Some software noticing the presence of the querystring may set no-cache.
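
The Lenya change described above amounts to a check like this (a Python
sketch of the idea, not the actual Lenya code):

```python
from urllib.parse import urlsplit

def is_cacheable(url):
    # A querystring may completely change the response, so bypass
    # the cache whenever one is present.
    return urlsplit(url).query == ""

assert is_cacheable("http://example.com/homepage")
assert not is_cacheable("http://example.com/homepage?ShowTheLoginScreen")
```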

solprovider


On 10/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 That was the first place I turned to as well, but the browser is actually set
 to the default setting - Automatically.
 Another reason I think this is not a browser issue is the fact that I've also
 tested this on Firefox and on IE7 - both set to default cache settings and
 the latter being on a fresh XP installation VM machine.
 In all cases I'm seeing these recurring conditional requests.
 I guess I wanted to verify that the response code Apache is returning looks
 as it should - the kind that should actually achieve the purpose I talked
 about.

 Uri.

  [EMAIL PROTECTED]  24/10/2007 14:06

 Check your browser settings:

  Internet Explorer 6.0 - Tools - Internet Options - General - Temporary
  Internet Files - Settings - Check for newer versions of stored pages:
  __ Every visit to the page
  __ Every time you start Internet Explorer
  __ Automatically
  __ Never

  Mozilla 1.7.3 - Edit - Preferences - Advanced - Cache - Compare the
  page in the cache to the page on the network:
  __ Every time I view the page
  __ When the page is out of date
  __ Once per session
  __ Never

  The server-specified expiration period should only affect MSIE's
  "Automatically" and Mozilla's "When the page is out of date" options.
  As a web developer, you are very likely to have set your browser to
  the top most-frequently-updated setting to force the appearance of
  changes from your current work (and may still sometimes need to
  manually clear the cache.)  The "every time" settings will send the
  conditional request for every request while ignoring expiration
  datetimes.

  You can see the Expires header so you know the server is configured as
  you desire.  Your browser must be configured to use the setting to
  prove your browser works as desired.

  solprovider

  On 10/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
   I've been trying to improve my webapp's performance by telling Apache
   [2.2.4] to force clients to cache static content using mod_expires.
   I'm basically trying to avoid having the client browser resend the
   'if-modified' conditional request for the static content upon a refresh or 
 a
   revisit to the site, thus saving the round-trip time and having the page
   presented straight from cache . For this, I've added the following lines to
   my httpd.conf:
  
   LoadModule expires_module modules/mod_expires.so
   ExpiresActive on
   ExpiresDefault "access plus 2 hours"
   ExpiresByType image/gif "access plus 7 days"
   ExpiresByType image/jpeg "access plus 7 days"
   ExpiresByType text/css "access plus 7 days"
   ExpiresByType application/x-javascript "access plus 12 hours"
  
   The problem is that for some reason this doesn't seem to work, and the
   browser [ie6] still sends the conditional requests, disregarding the
   expiration directive.
   here is an example HTTP traffic capture (using Fiddler):
  
   First Request
  
   GET /Eip/Static/Images/Logos/siteLogo.gif HTTP/1.0
   Accept: */*
   Referer: http://qcxp2/Eip/bin/ibp.jsp?ibpPage=HomePage;
   Accept-Language: he
   Proxy-Connection: Keep-Alive
   User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;
   InfoPath.1; FDM; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)
   Host: qcxp2
   Cookie: JSESSIONID=4662B8AA7EC6B9AE09258672CBDCE54C
  
   First Response
  
   HTTP/1.1 200 OK
   Date: Wed, 24 Oct 2007 08:37:29 GMT
   Server: Apache/2.2.4 (Win32) mod_jk/1.2.25
   Last-Modified: Mon, 17 Sep 2007 09:00:24 GMT
   ETag: 8e5a-782-8f6da00
   Accept-Ranges: bytes
   Content-Length: 1922
   Cache-Control: max-age=604800
   Expires: Wed, 31 Oct 2007 08:37:29 GMT
   Connection: close
   Content-Type: image/gif
  
   GIF89a]
  
   Second Request (the one that shouldn't actually occur at all)
  
   GET /Eip/Static/Images/Logos/siteLogo.gif HTTP/1.0
   Accept: */*
   Referer: http://qcxp2/Eip/bin/ibp.jsp?ibpPage=HomePage;
   Accept-Language: he
   Proxy-Connection: Keep-Alive
   If-Modified-Since: Mon, 17 Sep 2007 09:00:24 GMT
   If-None-Match: 8e5a-782-8f6da00
   User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;
   InfoPath.1; FDM

Re: [EMAIL PROTECTED] SetEnvIf SESSION_USE_TRANS_SID=0

2007-10-23 Thread solprovider
Check the User-Agent appearing in your logs.  The regexp would not
match Googlebot's User-Agent from the logs on my server:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

Try:
SetEnvIf User-Agent (.*)Googlebot(.*) SESSION_USE_TRANS_SID=0

solprovider

WARNING: Code was written freehand, is completely untested, and may
cause catastrophic failures.  Use at your own risk.
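
The anchoring mistake is easy to demonstrate outside httpd.  Python's re
module is used here as a stand-in for the regex matching SetEnvIf
performs; the User-Agent string is the one from the logs above:

```python
import re

# User-Agent as it actually appears in the logs:
ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

# ^googlebot is anchored at the start and is case-sensitive, so it never matches:
assert re.search(r"^googlebot", ua) is None
# the unanchored pattern matches anywhere in the header:
assert re.search(r"(.*)Googlebot(.*)", ua) is not None
```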


On 10/22/07, Sylvain Viollat [EMAIL PROTECTED] wrote:
 We have a website which works with session.use_trans_sid set to On.
 Everything works just fine, but when Googlebot comes on the website and does
 its job, it is also getting the PHP session id in the URL. I've searched a lot
 about this problem and one of the solutions which could do what I want,
 disabling session.use_trans_sid if the User-Agent is ^googlebot, doesn't work.
 Here is what I tried, in a .htaccess file :

 SetEnvIf User-Agent ^googlebot SESSION_USE_TRANS_SID=0
 - EnvVar is properly set up but has no effect if I check a phpinfo 
 (session.use_trans_sid still to On)

 I can't disable session.use_trans_sid for the whole website, but only if 
 googlebot is coming.
 Sylvain, France




Re: [EMAIL PROTECTED] Apache 2.2.0 binary download

2007-10-18 Thread solprovider
I could not find Windows binaries for 2.2.0 on the Web.  If you MUST
use that version, you MUST compile.

For old versions, go to the Downloads page and click the archive
download site link in the second paragraph for:
http://archive.apache.org/dist/httpd/
Then click "binaries" and "win32" to discover that the obsolete
Windows binaries for 2.2.0 are missing.  Only 2.2.2, 2.2.4, and 2.2.6
are listed.

Alexander's URL is for current releases. There have been too many
changes since 2.2.0 for any sane person to recommend or facilitate
installing it. The security page lists seven holes affecting 2.2.0.
http://httpd.apache.org/security/vulnerabilities_22.html

Sorry,
solprovider

On 10/18/07, Alexander Fortin [EMAIL PROTECTED] wrote:
 Ashwani Kumar Sharma wrote:
  I want to download binary for apache 2.2.0 for windows urgently. I am not
  finding it on apache.org
 http://apache.planetmirror.com.au/dist/httpd/binaries/win32/
 Alexander Fortin




Re: [EMAIL PROTECTED] Apache 2.2.0 binary download

2007-10-18 Thread solprovider
There were discussions on the Dev ML about building 2.2.0 using VC6,
VS2003, and VS2005 around December 2005.  Read those threads for ideas
if you insist on compiling.

For the recommended stable version, use Alexander's link for the
Windows binaries of 2.2.6.

solprovider

On 10/18/07, Ashwani Kumar Sharma [EMAIL PROTECTED] wrote:
 Yeah that's true,

 There are no binaries for apache 2.2.0 on that url. I have downloaded the
 source code for same from
  http://archive.apache.org/dist/httpd/
 I am facing some build problems.

 Is it true that Apache 2.2.0 is not a stable version? Please suggest the
 stable version of Apache.

 Thanks and Regards,
 Ashwani Sharma
 Mob: 09916454843
 Off: +91-80-26265053


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 [EMAIL PROTECTED]
 Sent: Thursday, October 18, 2007 11:45 AM

 I could not find Windows binaries for 2.2.0 on the Web.  If you MUST
 use that version, you MUST compile.

 For old versions, go to the Downloads page and click the archive
 download site link in the second paragraph for:
 http://archive.apache.org/dist/httpd/
 Then click binaries and win32 to discover that the obsolete
 Windows binaries for 2.2.0 are missing.  Only 2.2.2, 2.2.4, and 2.2.6
 are listed.

 Alexander's URL is for current releases. There have been too many
 changes since 2.2.0 for any sane person to recommend or facilitate
 installing it. The security page lists seven holes affecting 2.2.0.
 http://httpd.apache.org/security/vulnerabilities_22.html

 Sorry,
 solprovider

 On 10/18/07, Alexander Fortin [EMAIL PROTECTED] wrote:
  Ashwani Kumar Sharma wrote:
   I want to download binary for apache 2.2.0 for windows urgently. I am not
   finding it on apache.org
  http://apache.planetmirror.com.au/dist/httpd/binaries/win32/
  Alexander Fortin




Re: [EMAIL PROTECTED] rewrite or proxypass?

2007-10-18 Thread solprovider
Why not put the homepage/login page on port 80 and proxy the POST to
Tomcat?  Or are dynamic elements on the login page?

If everything will be served by Tomcat, why not run tomcat on port 80?

If you are using virtual servers, you should already have configured the proxy.

solprovider


On 10/17/07, Patrick Coleman [EMAIL PROTECTED] wrote:
 I don't exactly know if this is a rewrite or proxypass or something
 else type of thing so
 I was hoping someone could help.
 I have a Tomcat app running on port 8080
 I can get to it through
 http://www.example.com:8080

 it goes to a login page and the URL displays
 http://www.example.com:8080/login

 I would like to be able to just put in
 http://www.example.com
 and get
 http://www.example.com/login

 Thanks.
 Pat




Re: [EMAIL PROTECTED] rewrite or proxypass?

2007-10-18 Thread solprovider
Where is the login homepage?  Where does it POST?
Where is the application?
What URLs do clients see?
How do those URLs reach the correct server/port?
How does the application send URLs usable by the client?

From my understanding of your needs, only example.com:80 should be
accessible to clients.  You want to firewall port 8080 from clients.
Apache httpd on port 80 will proxy to port 8080.  Questions:
1. Are all requests using the proxy?  Why bother using a proxy?  (That
was my previous question.)
2. Does a virtual server name indicate using the proxy?  (app.example.com)
3. Do certain paths indicate using the proxy? (example.com/app/...)

Your current issue is the application is not aware of the proxy.
mod_proxy only rewrites request URLs.  You need to use one of the
following options:

1. Configure the application for use behind the proxy.  Something like:
ROOT_PREFIX = "http://www.example.com/app/";
The syntax will depend on the application and the abilities of the developers.

2. Use mod_proxy_html to rewrite the HTML sent by the application to
change every "http://www.example.com:8080/" to
"http://www.example.com/app/".
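
In effect, mod_proxy_html performs a rewrite like this naive sketch (a
plain string replacement; the real module parses HTML attributes
properly, and the URLs shown are the example ones from this thread):

```python
def rewrite_links(html,
                  backend="http://www.example.com:8080/",
                  public="http://www.example.com/app/"):
    # Replace every backend URL in the response body with the
    # public address clients are allowed to see.
    return html.replace(backend, public)

page = '<a href="http://www.example.com:8080/login">Login</a>'
rewritten = rewrite_links(page)
# rewritten == '<a href="http://www.example.com/app/login">Login</a>'
```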

solprovider

On 10/18/07, Patrick Coleman [EMAIL PROTECTED] wrote:
 I may have spoken a little too soon.

 While the entries I used ..

 ProxyPass / http://www.ourcompany.com:8080/
 ProxyPassReverse / http://www.ourcompany.com:8080/

 ..work on the initial screen and the first screen after logging in
 if I click on any that the 8080 shows up again in the URL?

 Is there something else that needs to be added to pick that up each
 time?

 On Oct 18, 2007, at 2:31 AM, [EMAIL PROTECTED] wrote:
  Why not put the homepage/login page on port 80 and proxy the POST to
  Tomcat?  Or are dynamic elements on the login page?
 
  If everything will be served by Tomcat, why not run tomcat on port 80?
 
  If you are using virtual servers, you should already have
  configured the proxy.
 
  solprovider
 
  On 10/17/07, Patrick Coleman [EMAIL PROTECTED] wrote:
  I don't exactly know if this is a rewrite or proxypass or something
  else type of thing so
  I was hoping someone could help.
  I have a Tomcat app running on port 8080
  I can get to it through
  http://www.example.com:8080
 
  it goes to a login page and the URL displays
  http://www.example.com:8080/login
 
  I would like to be able to just put in
  http://www.example.com
  and get
  http://www.example.com/login
 
  Thanks.
  Pat




Re: [EMAIL PROTECTED] Logging Options - Extended W3C Log Format?

2007-10-15 Thread solprovider
W3C's Extended Log File Format adds some header information to
standard logs.  The Working Draft datetime examples use
language-dependent month names; the ISO format for datetimes would be
better if the W3C can be convinced to follow standards.  Here is an
example:

#Version: 1.0
#Fields: remotehost rfc931 authuser [date] request status bytes
referer browser
#Software: Apache-httpd-2.2.6
#Start-Date: 20070901T00Z
#End-Date: 20070930T235959Z
#Date: 20071001T01Z
#Remark: Sample Header for W3C Extended Log Format using Apache
httpd's Combined Format

The Extended Format should not be implemented by logging software
because the data should be added after the log is closed.  This
function could be included in log rotation software.

Most of the data is constant once a server is configured; only the
three datetimes change.  The Date field is useless.  The Start-Date
and End-Date datetimes are only important to make certain no logs are
missing and should be specified as the same time for consecutive logs.
Most logs would not need those fields -- just look at the dates for
the first and last entries.  The Fields field could be very useful,
but the W3C does not explain the field names well -- the
specifications are imprecise and confuse field names with field types.
 Adding a unique and immutable key for the log's source would be
useful.

#Version: 1.0
#Source: Solprovider.com-Combined-httpd-Log
#Software: Apache-httpd-2.2.6
#Fields: remotehost rfc931 authuser [datetime] request status bytes
referer browser
#Remark: Sample Header for W3C Extended Log Format using Apache
httpd's Combined Format

--
Just add the information to the log file before you send it.  1) Open
file.  2) Paste standard text.  3) Fix datetimes. 4) Save file.  5)
Send.
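
The paste-and-fix steps above can be sketched in a few lines (the field
list is the Combined Format one from the sample header; the
Source/Remark lines are omitted for brevity):

```python
def add_w3c_header(log_text, fields, software="Apache-httpd-2.2.6"):
    # Steps 2 and 3: paste the constant header text in front of the log.
    header = ("#Version: 1.0\n"
              "#Software: " + software + "\n"
              "#Fields: " + fields + "\n")
    return header + log_text

entry = '127.0.0.1 - - [01/Sep/2007:00:00:00 +0000] "GET / HTTP/1.1" 200 512\n'
log = add_w3c_header(entry, "remotehost rfc931 authuser [date] request status bytes")
```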

Ask whoever automated Step 5 to add this function to that process.

solprovider


On 10/14/07, J R M [EMAIL PROTECTED] wrote:
 I've recently had a request from one of our clients to provide their logs
 in the 'Extended W3C Log Format'.  From my googling around the
 place, this appears to be pretty much a log format exclusive to IIS.

 From poking around google and reading http://www.w3.org/TR/WD-logfile.html
 the format looks awfully strange.

 Is it possible to configure Apache in a way to create logfiles in
 this format without using some sort of external logging system?  Perhaps
 my google-fu is letting me down so severely that I haven't found an easy way
 to do it yet - but maybe this is simply not possible to do 'out of the
 box'.

 Thanks in advance
 Regards
 jrm




[EMAIL PROTECTED] Improve ProxyPassReverseDomain

2007-10-15 Thread solprovider
I am trying to use ProxyPassReverseCookieDomain to reverse proxy a
server that uses Cookies without specifying DOMAIN.  Does anybody know
how?

---
[The rest of this post probably belongs on a dev ML.]

I found the code:
DOWNLOAD: httpd-2.2.6-win32-src-r2
FILE: modules/proxy/proxy_util.c
FUNCTION: ap_proxy_cookie_reverse_map()

The code works by matching strings.  There is no case for adding a
domain to the string when the incoming Set-Cookie header does not have
a DOMAIN parameter. ap_proxy_cookie_reverse_map() should be modified
to be useful when no domain and/or no path is given.

--- Fixing ap_proxy_cookie_reverse_map()
According to the RFCs, DOMAIN and PATH are optional.  DOMAIN defaults
to the server name. PATH defaults to the request up to and including
the right-most /.

Cookie Domain and Path are filters so browsers only send the Cookies
to certain servers.  Domain is rewritten to include the proxy server
or to share the Cookie with the entire domain or subdomain.  Path is
rewritten to match the rewritten URL or to increase the scope by using
a shorter path.  These options are not used to make certain that
Cookies will not be returned.  Almost everybody would be satisfied if
the function just set the Domain to domain.tld (removing server
names and subdomains) and set the Path to /.

BETTER SPECIFICATIONS:
ProxyPassReverseDomain uses one parameter -- the domain to put in
Cookies.  If multiple domains are listed, use last entry (for
backwards-compatibility.)  Matching is pointless -- most usecases use
shortest possible domain; the rest use the current server name.

Example:  myServer.mySubdomain.solprovider.com can only set Cookies for:
myServer.mySubdomain.solprovider.com
mySubdomain.solprovider.com
solprovider.com

Proxying Usecase: internalServer.solprovider.com =>
myServer.mySubdomain.solprovider.com
Sharing Usecase: myServer.mySubdomain.solprovider.com => solprovider.com
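
The tail-match rule in the example above can be sketched as a small helper (a hypothetical illustration, not httpd code; real public-suffix handling is more involved):

```python
def allowed_cookie_domains(host):
    """Domains that `host` may set Cookies for: the host itself plus
    every parent suffix down to the registrable domain (domain.tld).
    Simplified sketch of the tail-match rule described above."""
    labels = host.split(".")
    # Keep at least two labels so a bare TLD like ".com" is never allowed.
    return [".".join(labels[i:]) for i in range(len(labels) - 1)]

# allowed_cookie_domains("myServer.mySubdomain.solprovider.com")
# -> ["myServer.mySubdomain.solprovider.com",
#     "mySubdomain.solprovider.com",
#     "solprovider.com"]
```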

ProxyPassReversePath should replace the beginning of the path.  Better
would be to match the configuration of RewriteRule, since the primary
use of this statement is to follow rewrites or proxy settings.

Example: Path=/dir1/dir2/dir3/
Sharing Usecase: May want to shorten PATH to one of:
Path=/
Path=/dir1/
Path=/dir1/dir2/
Proxying/Rewriting Usecase:  If /proxy1/* is proxied to another
server, then the response may need one of:
PATH=/proxy1/dir1/dir2/dir3/
PATH=/proxy1/
PATH=/

Here is pseudocode:

ap_proxy_cookie_reverse_map(request, conf, oldHeaderValue){
   Read through oldHeaderValue{
      if(conf->domain) remove domain.
      if(conf->path) find, store, and remove path.
   }
   // ASSUME: newHeaderValue does not contain Domain.
   if(conf->path){
      If path was not found, set to path of current document.
      Compare ProxyPassReverseCookiePath strings with beginning of
path.  Replace matched portion if found.
      newHeaderValue += newPath; // Watch semicolons.
   }
   if(conf->domain) newHeaderValue += conf->domain;
   return newHeaderValue;
}
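
A runnable sketch of the same logic in Python (an illustration of the proposed behavior, not the actual C in proxy_util.c; all names and parameters are assumptions):

```python
def cookie_reverse_map(set_cookie, new_domain=None, path_map=None,
                       request_path="/"):
    """Rewrite Domain/Path in a Set-Cookie value per the pseudocode above.

    new_domain  : domain to place in the Cookie (e.g. "solprovider.com")
    path_map    : list of (old_prefix, new_prefix) pairs for the Path
    request_path: fallback when the Cookie carries no Path (per the RFCs,
                  the request up to and including the right-most "/")
    """
    parts = [p.strip() for p in set_cookie.split(";")]
    kept, old_path = [], None
    for p in parts:
        key = p.split("=", 1)[0].strip().lower()
        if key == "domain" and new_domain is not None:
            continue                      # drop old Domain; replaced below
        if key == "path" and path_map is not None:
            old_path = p.split("=", 1)[1].strip()
            continue                      # drop old Path; replaced below
        kept.append(p)
    if path_map is not None:
        if old_path is None:
            # Default: request path up to the right-most "/".
            old_path = request_path.rsplit("/", 1)[0] + "/"
        for old, new in path_map:
            if old_path.startswith(old):
                old_path = new + old_path[len(old):]
                break
        kept.append("Path=" + old_path)
    if new_domain is not None:
        kept.append("Domain=" + new_domain)
    return "; ".join(kept)
```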

Regards,
solprovider

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [EMAIL PROTECTED] Problems with ProxyPassReverseCookieDomain

2007-10-15 Thread solprovider
I apologize for implying anything about your emotions.  Maybe I am the
only one weeping over RFCs.  Bad specifications cause physical pain
to me. - solprovider

On 10/15/07, Axel-Stephane  SMORGRAV
[EMAIL PROTECTED] wrote:
 Obviously the original poster has an application that only returns the domain 
 part in the cookie domain, hence the lack of rewriting despite 
 ProxyPassReverseCookieDomain.

 and no I do not make a habit of weeping over RFCs.

 -ascs
 
 -----Original Message-----
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of [EMAIL
 PROTECTED]
 Sent: Monday, October 15, 2007 15:06
 To: users@httpd.apache.org
 Subject: Re: [EMAIL PROTECTED] Problems with ProxyPassReverseCookieDomain

 Distinguishing between FQDN and Domain is barely relevant when discussing 
 Cookies.  RFC2965 states, Domain Defaults to the request-host.  Cookies 
 default to using the FQDN as the Domain if the Domain is not specified.  
 Specified domains must start with a period, must be exactly one level below 
 the server name, and are only returned to servers in the one level above the 
 period.  Read Section 3.3.2 and weep.  The drafters deliberately made Cookies 
 as limited as possible.
 solprovider.com and www.solprovider.com cannot share Cookies even
 though both addresses reach the same server.   Look at the Cookies in
 your browser.  The server name will be part of the domain if no domain was 
 specified (e. g. www.cnn.com and www.networksolutions.com).

 I wonder if the leading period is missing from example.com.
 .example.com (leading period) would be sent to www.example.com and 
 www1.example.com.  example.com (no leading period) would only be sent to 
 example.com.  Browsers should add the leading period if domain is specified.

 solprovider

-
The official User-To-User support forum of the Apache HTTP Server Project.
See URL:http://httpd.apache.org/userslist.html for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]