Re: [squid-users] Reverse proxy for multiple backend servers

2006-09-20 Thread Mohamed Navas

Hi,

OK ... thanks for the reply ...

Our requirement is that we should have a single initial URL, 
http://proxy.mydomain.com, for the proxy server. Clients should be able 
to just type a simple folder name after this initial URL to reach the 
respective backend web servers ...


thanks,

Br--
Navas

At 01:03 AM 9/20/2006, Henrik Nordstrom wrote:

Tue 2006-09-19 at 15:18 +0400, Mohamed Navas V wrote:
 Hi,

 The reverse proxy to redirect for 1 web backend server is working fine
 for me ...
 Still I am confused with some setting for multiple backend servers.

 For single backend server setup http://proxy.my-domain.com is
 redirecting to http://backendserver1.my-domain.com

 For multiple backend server setup, we have to do as follows:-

 http://proxy.mydomain.com -- http://backendserver1.my-domain.com

ok, just cache_peer + cache_peer_domain/access..

 http://proxy.mydomain.com/folder1 -- http://backendserver2.my-domain.com
 http://proxy.mydomain.com/folder2 --
 http://backendserver3.my-domain.com etc ...

This is a little trickier if you really want to rewrite the url-path
after the host. If you can rearrange the backend servers so that each has
its content in a unique directory then things get a whole lot simpler
and much less error prone. In that case it's just a matter of cache_peer
+ cache_peer_access to select which backend to use.

If you really need to rewrite the url-path then use a redirector to
rewrite the requested URL.
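To make the simpler case concrete, a rough sketch (untested, using Squid
2.6 cache_peer syntax; the acl and peer names here are made up for
illustration):

cache_peer backendserver2.my-domain.com parent 80 0 no-query originserver name=backend2
cache_peer backendserver3.my-domain.com parent 80 0 no-query originserver name=backend3
acl folder1 urlpath_regex ^/folder1/
acl folder2 urlpath_regex ^/folder2/
cache_peer_access backend2 allow folder1
cache_peer_access backend3 allow folder2

Note this only selects which backend gets the request; it does not strip
/folder1 from the url-path. Stripping the path is the part that needs a
redirector.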

Regards
Henrik





[squid-users] 5 second delay

2006-09-20 Thread George Dominguez
 In regard to the five-minute delay people are experiencing when
accessing http://cat.lib.unimelb.edu.au/

I run the following iptables rules. What I'm trying to achieve is: if
the request is destined for http://cat.lib.unimelb.edu.au/ then redirect
it to the firewall.

The rule in question is marked with ###. Could someone tell me whether
it will work, before I apply the rules on the live environment?

Thanks in advance 

My iptables script:

#!/bin/sh
#


# See URL:
# http://www.cyberciti.biz/tips/linux-setup-transparent-proxy-squid-howto.html
# (c) 2006, nixCraft under GNU/GPL v2.0+
#
#-------------------------------------------------------------------------
# squid server IP
SQUID_SERVER=128.250.180.100
# cat.lib.unimelb.edu.au Ip address
catlib=128.250.144.132
# Firewall IP address 
firewall=128.250.2.21
# Interface connected to Internet
INTERNET=eth0
# Interface connected to LAN
LAN_IN=eth1
# Squid port
SQUID_PORT=3128
FTP_PORT=21

# DO NOT MODIFY BELOW
# Clean old firewall
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
# Load IPTABLES modules for NAT and IP conntrack support
modprobe ip_conntrack
# Support for connection tracking of FTP and mod for win xp ftp client
modprobe ip_conntrack_ftp
modprobe ip_nat_ftp
# Enable IP Forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
# Unlimited access to loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow ICMP
iptables -A INPUT -p icmp --icmp-type any -j ACCEPT
# Allow UDP, DNS and Passive FTP
iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
# Set this system as a router for the rest of the LAN
iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
# Unlimited access to LAN
iptables -A INPUT -i $LAN_IN -j ACCEPT
iptables -A OUTPUT -o $LAN_IN -j ACCEPT
### DNAT port 80 requests coming from LAN systems to squid 3128 ($SQUID_PORT), aka transparent proxy
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d $catlib -j DNAT --to $firewall
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
### If it is the same system
iptables -t nat -A PREROUTING -i $INTERNET -p tcp -d $catlib -j DNAT --to $firewall
iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
# Do some checks for obviously spoofed IPs
iptables -t nat -A PREROUTING -i $INTERNET -s 192.168.0.0/16 -j DROP
iptables -t nat -A PREROUTING -i $INTERNET -s 127.0.0.0/8 -j DROP
# Drop Microsoft packets outbound from server
iptables -A OUTPUT -p tcp -o $INTERNET --dport 139 -j DROP
iptables -A OUTPUT -p tcp -o $INTERNET --dport 445 -j DROP
iptables -A OUTPUT -p tcp -o $INTERNET --dport 32875 -j DROP
iptables -A OUTPUT -p udp -o $INTERNET --source-port 32875 -j DROP
iptables -A OUTPUT -p udp -o $INTERNET --dport 137 -j DROP
# DROP everything else
iptables -A INPUT -j DROP

-Original Message-
From: George Dominguez [mailto:[EMAIL PROTECTED] 
Sent: Monday, 18 September 2006 12:36 PM
To: squid-users@squid-cache.org
Subject: [squid-users] 5 second delay

Good morning,

It was brought to my attention that there is a 5 second delay when
accessing the following page and its respective sub-menus:
http://cat.lib.unimelb.edu.au/

This was not the case before the proxy was introduced.

We run Squid in transparent mode on a RedHat EL4 server with 5GB of RAM
and a small cache of 20GB. I checked the logs but can't find anything
wrong at all.

Any ideas on what I should/could be looking for? The page is simple
HTML.

Regards
George Dominguez
Business and Systems Engineer
Information Technology & Services Department, Melbourne Business School
PH: 9349-8473




Re: [squid-users] squid error running out of filedescriptors and others

2006-09-20 Thread Visolve Squid

2006/09/19 14:36:24| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:36:40| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:36:56| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:37:12| WARNING! Your cache is running out of filedescriptors
2006/09/19 14:37:28| WARNING! Your cache is running out of filedescriptors


Hello Dny,

Squid might have been compiled with a low limit on file descriptors. You
need to raise the file-descriptor limit with ulimit -HSn <n> and then
recompile Squid.
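A minimal sketch of that procedure (8192 is just an example value, not a
recommendation):

ulimit -HSn 8192                     # raise hard+soft fd limit in this shell
./configure --with-maxfd=8192 ...    # plus your usual configure options
make && make install

The ulimit must be raised in the same shell that runs configure, since
Squid picks up the available limit at build time.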


Thanks,
Visolve Squid Team.
www.visolve.com/squid/



RE: [squid-users] Squid ACL (Is this Possible)

2006-09-20 Thread Mehmet, Levent (Accenture)
 Hi

This config you have suggested may not be able to support what we
need:

How would this access list cope with websites like these?

http://nww.nhsmessaging.co.uk/ or

www.nhs.uk - this should go direct, but

nww.nhs.uk needs to go to the peer.

acl NWW dstdom_regex \.?nww\.
acl NHS dstdomain .nhs.uk
cache_peer_access 3.3.3.3 allow NHS
cache_peer_access 3.3.3.3 allow NWW
never_direct allow NWW

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: 19 September 2006 19:56
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid ACL (Is this Possible)

Mehmet, Levent (Accenture) wrote:
  All

 I currently have a setup which sends different domains to different 
 Cache_peers. This has been working fine with the below config.:

 cache_peer 1.1.1.1 parent 80 80 no-query
 cache_peer 2.2.2.2 parent 80 80 no-query
 cache_peer 3.3.3.3 parent 3128 3130 no-query

 cache_peer_domain 3.3.3.3 parent  nww. .nhs.uk
   
Hmmm...  I don't think that text followed by a dot is valid syntax for
cache_peer_domain or dstdomain.  I'd advise making a dstdom_regex acl
and using cache_peer_access for this peer.  Something like...

acl NWW dstdom_regex \.?nww\.
acl NHS dstdomain .nhs.uk
cache_peer_access 3.3.3.3 allow NHS
cache_peer_access 3.3.3.3 allow NWW
never_direct allow NWW

...in addition to the other rules you have listed.
 cache_peer_domain 1.1.1.1 parent .gsi.gov.uk
 cache_peer_domain 2.2.2.2 parent .gsi.gov.uk

 acl NHS dstdomain  nww. .nhs.uk
   
Obviously, this ACL should be adjusted as shown above.
 acl GSI dstdomain .gsi.gov.uk

 cache_peer_access 3.3.3.3 allow NHS
 cache_peer_access 1.1.1.1 allow GSI

 never_direct allow NHS
 never_direct allow GSI


 When trying to access http://nww.nhs.uk this goes via the correct path
 of 3.3.3.3, but our clients now wish to access the following websites,
 which cause a conflict: http://nww.nhsmessaging.co.uk/ Websites like
 this cause me an issue because the .co.uk tries to go direct while the
 nww tries to go via 3.3.3.3; also, with
 http://www.pasa.nhs.uk/cat_default.asp the www. goes direct and the
 nhs.uk tries to go via 3.3.3.3. This is a major show-stopper for the
 company. Is there a way around this, as we need to send all nww down
 3.3.3.3?
 Thanks
  

 Levent Mehmet
 Network Analyst
 Server and Network Team
 [EMAIL PROTECTED] Operate Unit
 Market Towers, 20th Floor
 1 Nine Elms Lane
 London
 SW8 5NQ

 E-mail: [EMAIL PROTECTED]
 Phone: +44 20 7084 3517 
 Fax:   +44 20 7084 2536 
   
Chris


[squid-users] squid with optional authentication?

2006-09-20 Thread Otmar Spoettel

Hello,

I run a Squid proxy for about 2000 users. The users do not have to
authenticate to use the proxy.

However, two delay pools for downloads are configured on the Squid.
Users get between max 256 kB/s and min 32 kB/s - the larger the file,
the slower the download.


Now I have about 30 power users. These power users have the right to use
the full speed (always the maximum of 256 kB/s). For them I use an IP
address acl with static IP addresses.
But this will no longer work, because these users often work from other
computers (shared computers).


So I need a mechanism to authenticate these power users. I will not
activate authentication for all users.


Is it possible to have a website with an authentication form which the
user opens, and after successful authentication the user gets full
access from the proxy?


Or can I use another solution (perhaps a redirector, a second squid
instance with basic authentication, identd...)?


But remember, only the power users should be able to authenticate
(through a manual action, i.e. opening a special website). The standard
users should never get an authentication window.
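One rough, untested sketch of the second-port idea (the directives are
standard squid.conf, but the ports, helper path and delay pool number
below are invented examples): run authentication on a separate
http_port, so only users who deliberately configure their browser for
that port are ever challenged.

http_port 3128                 # standard users, never challenged
http_port 3129                 # power users opt in by using this port
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/powerusers
auth_param basic realm Power user access
acl powerport myport 3129
acl power proxy_auth REQUIRED
http_access allow powerport power
http_access allow all
# exempt authenticated power users from the download throttling pool
delay_access 1 deny powerport power
delay_access 1 allow all

Since the proxy_auth acl is only evaluated for requests arriving on port
3129, the standard users on 3128 never see an authentication window.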



Thank you for your help!

Regards

Otmar



[squid-users] content filtering with squid

2006-09-20 Thread Benner, Uwe
Hi,

How can I realise content filtering with squid?
Does anybody have an example or experience?

Thanks for your help

Yours,
UB



Re: [squid-users] content filtering with squid

2006-09-20 Thread Adrian Chadd
On Wed, Sep 20, 2006, Benner, Uwe wrote:
 Hi,
 
 How can I realise content filtering with squid?
 Does anybody have an example or experience?

It'll happen once Squid-3 is released.




Adrian



Re: [squid-users] parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available

2006-09-20 Thread Víctor J. Hernández Gómez

 2006/09/19 11:44:30| parseHttpRequest: NF getsockopt(SO_ORIGINAL_DST)
 failed: (92) Protocol not available

 ...in our cache.log.

 Any idea on what is going on?
 
 Do you have the iptables NAT table loaded?

On the squid box? No, I have not loaded the iptables NAT table. Should I?
NAT is applied on a box before the packets reach squid.
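For reference, a minimal sketch of what local NAT would look like
(untested; the interface and ports are examples). SO_ORIGINAL_DST is
answered from the local NAT/connection-tracking table, so the
redirection has to happen on the Squid box itself rather than on an
upstream box:

modprobe iptable_nat
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128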

Thank you
--
Víctor





Re: [squid-users] IOWAIT and second disk

2006-09-20 Thread Matus UHLAR - fantomas
On 19.09.06 12:30, Michał Margula wrote:
 Matus UHLAR - fantomas wrote:
 I would use option 4): decrease the cache_dir sizes for both disks and
 create a ~1GB file for COSS on both disks.
 
 
 Can you explain why? I am not saying you're wrong, but I want to 
 understand :)

I would keep small files in COSS and (preferably) large ones in the aufs
spool.
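A hedged illustration of that split (paths and sizes invented; the point
is the max-size boundary - objects bigger than it cannot enter the COSS
dirs and therefore land in the aufs spools):

cache_dir coss /cache1/coss 1024 max-size=65536 block-size=512
cache_dir coss /cache2/coss 1024 max-size=65536 block-size=512
cache_dir aufs /cache1/spool 20000 16 256
cache_dir aufs /cache2/spool 20000 16 256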
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Your mouse has moved. Windows NT will now restart for changes to take
effect. [OK]


Re: [squid-users] content filtering with squid

2006-09-20 Thread Adrian Chadd
On Wed, Sep 20, 2006, Benner, Uwe wrote:
 
 Any planned release date available?

No fixed "it'll be stable" date. We'll be making an announcement shortly
about the roadmap going forward.

Squid-3 isn't that far off, however. There aren't that many showstopper
bugs (that we know about!)

What kind of content modification are you interested in?




Adrian



Re: [squid-users] Reverse proxy for multiple backend servers

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 10:01 +0400, Mohamed Navas wrote:
 Hi,
 
 OK ... thanks for the reply ...
 
 Our requirement is that we should have a single initial URL, 
 http://proxy.mydomain.com, for the proxy server. Clients should be able 
 to just type a simple folder name after this initial URL to reach the 
 respective backend web servers ...

Yes, and that's fine.

The problematic part is if you want http://proxy.mydomain.com/folder1/
to point to http://backendserver2.my-domain.com/ (without the folder)
instead of http://backendserver2.my-domain.com/folder1/, or, even more
preferred, http://proxy.mydomain.com/folder1/ on the server
backendserver2.my-domain.com.

Regards
Henrik





RE: [squid-users] Squid ACL (Is this Possible)

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 11:28 +0100, Mehmet, Levent (Accenture) wrote:
 Thanks
 
 Please can you explain what this line means with its characters:
 
  acl NWW dstdom_regex \.?nww\.


"maybe a dot" followed by nww followed by a dot, anywhere in the hostname
component of the requested URL.

Probably this is not what was intended, as "maybe a dot" is always true,
even in somethingnww.example.com. I think the intended pattern is more
likely

  (^|\.)nww\.

which matches hostnames beginning with nww. or having .nww. anywhere in
their name.
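
To illustrate with made-up hostnames: (^|\.)nww\. matches nww.nhs.uk,
nww.nhsmessaging.co.uk and foo.nww.example.com, but not
somethingnww.example.com, so the earlier acl would become:

acl NWW dstdom_regex (^|\.)nww\.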

Regards
Henrik




Re: [squid-users] Persistent Connections

2006-09-20 Thread Mark Nottingham
RFC2616 refers to RFC2068 for HTTP/1.0-style persistent connections,  
which is the most normative source we have for this.

  http://rfc.net/rfc2068.html#s19.7.1

The way that that's written leads me to believe that an HTTP/1.1
client can send a request to an HTTP/1.0 server and expect the
resulting connection to be persistent, as long as it has a
Content-Length.


However, since this is a spec interpretation issue, I might take it  
up with the folks over at HTTP-WG.


Cheers,


On 2006/09/20, at 5:55 AM, Henrik Nordstrom wrote:


Except that HTTP/1.1 doesn't define Connection: keep-alive, only
Connection: close. The keep-alive of an HTTP/1.1 connection is
implicit in the protocol being HTTP/1.1.

Connection: keep-alive is keep-alive of an HTTP/1.0+ style persistent
web server connection. HTTP/1.0+ defines different signaling for web
servers and proxies because Connection is not an HTTP/1.0 header,
making it likely that proxies do not understand Connection: keep-alive. A
client accepting Connection: keep-alive as keep-alive of a proxied
connection is broken, not respecting the Netscape specifications for
keep-alive in HTTP/1.0.

Regards
Henrik


--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] WARNING: out of aiocb slots!

2006-09-20 Thread Pranav Desai

Hello All,

I am running a polymix-4 test with 3 coss partitions. Two are files (one
of them is in memory, tmpfs) and one is a raw partition.

I am getting the above message once in a while, not very frequently.
Can I safely increase MAX_ASYNCOP?

Does this indicate that the disk is overloaded?

Are there any messages under coss that would suggest that the disk is
getting overloaded, similar to the "Queue congestion" message in aufs?

Setup
---
polymix-4, 700req/s

box
-
Dual Core AMD Opteron(tm) Processor 270 HE, 16GB RAM,  2x SATA disks.

squid-version
---
Squid Cache: Version 2.6.STABLE3
configure options: '--prefix=/usr/squid' '--exec-prefix=/usr/squid'
'--sysconfdir=/usr/squid/etc' '--enable-snmp'
'--enable-err-languages=English' '--enable-linux-netfilter'
'--enable-async-io=24' '--enable-storeio=ufs,aufs,null,coss'
'--enable-coss-aio-ops' '--enable-linux-tproxy' '--enable-gnuregex'
'--enable-internal-dns' '--enable-epoll' '--with-maxfd=32768'
'CFLAGS=-g -O2 -pg '

squid.conf

visible_hostname 10.51.6.102
cache_dir coss /mnt/cache/coss 8000 max-size=131072 block-size=512
cache_dir coss /mnt/tmp/coss 7500 max-size=131072 block-size=512
cache_dir coss /dev/sdb1 8000 max-size=131072 block-size=512
http_port 8080
request_body_max_size 0
snmp_port 3401
negative_ttl 0 minutes
pid_filename /var/run/squid.pid
coredump_dir /var/log/squid
cache_effective_user squid
cache_effective_group squid
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
cache_swap_log /var/log/squid/swap.log
logfile_rotate 10
icp_port 3130
icp_query_timeout 2
log_icp_queries on
extension_methods SEARCH PROPPATCH
forwarded_for on
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1 10.51.6.102
acl manager proto cache_object
acl snmppublic snmp_community public
http_access allow localhost
miss_access allow all
http_access allow all
snmp_access allow snmppublic all
memory_pools on
cache_mem 1 GB

-- Pranav

--
http://pd.dnsalias.org


Re: [squid-users] Squid - Erros in portuguese language problem.

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 10:43 -0300, Palula Brasil wrote:

 Previously, in the squid folder, a symbolic link existed called errors,
 linking to the Portuguese language folder. When I upgraded squid, this link
 was erased... Anyway, I created the link using ln but it still doesn't work.
 
 Does anybody know how to get this one runnin'?

You could do it the standard way and set error_directory in squid.conf to
the directory you want to use...
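
For example (the path is a guess at a common install prefix; use
whatever directory your build actually shipped the Portuguese pages in):

error_directory /usr/local/squid/share/errors/Portuguese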

Regards
Henrik




RE: [squid-users] WCCPv2 current instructions?

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 08:07 -0600, Shaun Skillin (home) wrote:
 more help.  I've experienced a few times that certain websites (like
 Ebay auctions) have problems.  I can access most things on the site, but
 certain functions just die horribly in timeout-land.  When I put the
 settings directly into the browser, I have no problem, but when running
 transparent using WCCP I have this problem.

Sites causing timeout problems when intercepted but not when using proxy
configuration usually indicate an MTU-related problem for traffic from
proxy to client.

What does the network between your station and the proxy look like? Any
path there with an MTU smaller than standard Ethernet?

Regards
Henrik




Re: [squid-users] WARNING: out of aiocb slots!

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 11:29 -0700, Pranav Desai wrote:

 I am running a polymix-4 test with 3 coss partitions. Two are files (one
 of them is in memory, tmpfs) and one is a raw partition.

 I am getting the above message once in a while, not very frequently.
 Can I safely increase
 MAX_ASYNCOP ?

Yes, but 128 is quite plenty for COSS..

 Does this indicate that the disk is overloaded ?

Maybe. 


 Squid Cache: Version 2.6.STABLE3

You may want to try with a nightly snapshot. COSS is still under rapid
development and there have been some changes after 2.6.STABLE3.

Regards
Henrik




Re: [squid-users] Persistent Connections

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 09:52 -0700, Mark Nottingham wrote:
 RFC2616 refers to RFC2068 for HTTP/1.0-style persistent connections,  
 which is the most normative source we have for this.
http://rfc.net/rfc2068.html#s19.7.1

which doesn't define persistent proxy connections at all, other than a
MUST NOT which doesn't make anyone much happier.. (and which nobody
follows anyway).

 The way that that's written leads me to believe that an HTTP/1.1
 client can send a request to an HTTP/1.0 server and expect the
 resulting connection to be persistent, as long as it has a
 Content-Length.

Not really. From that same section:

   An HTTP/1.0 server would then respond with the Keep-Alive connection
   token and the client may proceed with an HTTP/1.0 (or Keep-Alive)
   persistent connection.

But it's true that we probably could assume an HTTP/1.1 message is
persistent unless it has a Connection: close tag, as the close tag is
required by HTTP/1.1. But at the same time RFC 2616 8.1.2.1 says:

   Clients and servers SHOULD NOT assume that a persistent connection is
   maintained for HTTP versions less than 1.1 unless it is explicitly
   signaled. See section 19.6.2 for more information on backward
   compatibility with HTTP/1.0 clients.

8.1.3 says

   A proxy server MUST NOT establish a HTTP/1.1 persistent connection
   with an HTTP/1.0 client (but see RFC 2068 [33] for information and
   discussion of the problems with the Keep-Alive header implemented by
   many HTTP/1.0 clients).

and 19.6.2 says:

   response. The result is that HTTP/1.0 clients must be prevented from
   using Keep-Alive when talking to proxies.


The only document I know of which defines persistent HTTP/1.0 proxy
connections is the original Netscape document, defining the
Proxy-Connection header and why it is needed.. The validity of the
reasoning behind Proxy-Connection can be debated as the solution isn't
safe at all (fails in hierarchies), but at least it solved the client
migration path while there existed dumb HTTP/1.0 proxies without
persistent connections.

 However, since this is a spec interpretation issue, I might take it  
 up with the folks over at HTTP-WG.

You are welcome.

But I don't really see much value in stirring up discussions around HTTP/1.0
persistent connections; they work the way they do and can not be
changed, only documented (it was a dead end). For proxy connections it's
signaled using Proxy-Connection: keep-alive, for origin server
connections using Connection: keep-alive; the difference is there to
sort of work around old HTTP/1.0 proxies not knowing about keep-alive. An
HTTP server/proxy can not assume an HTTP/1.0 client knows about HTTP/1.1,
so keep-alive must be signaled in the same manner in the response as
well, and similarly it cannot be blindly assumed that an HTTP/1.1 client
knows about HTTP/1.0 persistent connections. The only specifications
available actually defining persistent proxy connections (the Netscape
specifications) define the Proxy-Connection header for this purpose.
This was not taken up in the official specifications as it can not be
guaranteed that the negotiation works in all configurations.
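
To make the two signaling styles concrete, a hypothetical pair of
HTTP/1.0 requests for the same resource (hostname invented):

# direct to the origin server:
GET /index.html HTTP/1.0
Host: example.com
Connection: keep-alive

# the same request sent via a proxy:
GET http://example.com/index.html HTTP/1.0
Proxy-Connection: keep-alive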

The most significant blank spot is how HTTP/1.0 proxies knowing about
persistent connections should react to HTTP/1.1 clients not explicitly
signaling persistent connections. Here we chose to take the safe path:
assume the client doesn't know about HTTP/1.0 persistent connections
and close the connection.

Unfortunately I have no idea where to find that Netscape document today
after all their restructuring. Maybe in the Internet Archive?

Regards
Henrik




Re: [squid-users] Persistent Connections

2006-09-20 Thread Mark Nottingham

On 2006/09/20, at 2:14 PM, Henrik Nordstrom wrote:


But it's true that we probably could assume an HTTP/1.1 message is
persistent unless it has a Connection: close tag, as the close tag is
required by HTTP/1.1. But at the same time RFC 2616 8.1.2.1 says:

   Clients and servers SHOULD NOT assume that a persistent connection is
   maintained for HTTP versions less than 1.1 unless it is explicitly
   signaled. See section 19.6.2 for more information on backward
   compatibility with HTTP/1.0 clients.


... and one could argue that it's explicitly signalled by the
Content-Length header in the response.



8.1.3 says

   A proxy server MUST NOT establish a HTTP/1.1 persistent connection
   with an HTTP/1.0 client (but see RFC 2068 [33] for information and
   discussion of the problems with the Keep-Alive header implemented by
   many HTTP/1.0 clients).


I'm actually more interested in this in the gateway case, but point  
taken.



However, since this is a spec interpretation issue, I might take it
up with the folks over at HTTP-WG.


You are welcome.

But I don't really see much value to stir up discussions around  
HTTP/1.0

persistent connections, they work the ways they do and can not be
changed, only documented (was a dead end).


If you haven't seen Roy's... colourful response on HTTP-WG along  
these lines, I'll forward. :)



The most significant blank spot is how HTTP/1.0 proxies knowing about
persistent connections should react to HTTP/1.1 clients not explicitly
signaling persistent connections. Here we chose to take the safe path:
assume the client doesn't know about HTTP/1.0 persistent connections
and close the connection.

Unfortunately I have no idea where to find that Netscape document today
after all their restructuring. Maybe in the Internet Archive?


I'll look for it.

Just thinking aloud -- the obvious solution to this is to make Squid  
HTTP/1.1. Of course, that's a lot of work, but I wonder if it would  
be more manageable by going 1.1 on just the client side at first,  
while remaining 1.0 on the server side, to avoid chunked responses.


Yes, I realise that's pretty sick.

Cheers,

--
Mark Nottingham
[EMAIL PROTECTED]




Re: [squid-users] Persistent Connections

2006-09-20 Thread Mark Nottingham
I realise that C-L has two different purposes. Since closing the
connection signals both, synthesising the C-L doesn't seem like
taking a huge liberty, in the face of serving a partial cached
response as if it's the whole thing. YMMV (obviously).


Thanks for the help,


On 2006/09/20, at 2:22 PM, Henrik Nordstrom wrote:


The problem you're pointing out WRT Squid caching partial responses
exists today; if I send a connection-delimited response and close
early, Squid will cache it, given the appropriate headers...


Yes, what I said. In many cases it's impossible to tell the two apart
with both signaled by close of connection.




--
Mark Nottingham
[EMAIL PROTECTED]





[squid-users] https: fails with CONNECT error:invalid-request

2006-09-20 Thread Brand, Thomas R.
Hi,

Squid newbie here with a problem regarding squid and HTTPS.
HTTP URLs work as desired.  I've spent the past two days googling
and reading the archives, but still don't understand how to
build/configure Squid to do what I want.

Any pointers/help would be greatly appreciated.

Setup:

Remote office user ->
Remote office Squid (squid-2.6.STABLE2) -> Company WAN ->
HQ ISA2004 proxy server ->
External Sites

User (remote office) connects to a URL https://www.aaa.com
Browser is configured to use the proxy server (Squid 2.6 STABLE 2,
UnixWare 7.1.3).
The proxy server is at the remote office.

Squid forwards the request to the ISA2004 proxy server at the main office:
cache_peer 10.6.10.20 parent 8080 0 no-query default no-digest
The ISA proxy server passes the request on to the outside world.
The user receives "The connection to www.aaa.com was interrupted while
the page was loading."

Squid logs show "CONNECT error:invalid-request".
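
For comparison, a generic well-formed proxy request for an HTTPS site
looks like this (an illustration, not taken from these logs):

CONNECT www.aaa.com:443 HTTP/1.1
Host: www.aaa.com:443

so "invalid-request" suggests the request line Squid actually received
did not parse into that shape.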

Squid was configured with:
./configure -q  \
 --enable-delay-pools   \
 --enable-cache-digests \
 --enable-poll  \
 --disable-ident-lookups\
 --enable-truncate  \
 --enable-agent-log \
 --enable-referer-log   \
 --enable-default-err-language=English  \
 --enable-err-languages=English


access.log:1158787213.443  0 10.31.129.28 TCP_DENIED/400 1381
CONNECT error:invalid-request - NONE/- text/html

cache.log:2006/09/20 17:20:13| storeKeyPrivate: CONNECT
error:invalid-request
cache.log:2006/09/20 17:20:13| errorConvert: %R -- 'CONNECT
www.ipledgeprogram.com:443 HTTP/1.1
cache.log:2006/09/20 17:20:13| The reply for CONNECT
error:invalid-request is ALLOWED, because it matched 'all'

store.log:1158787213.443 RELEASE -1 
637E356A235E6902243378ABCC05146A  400 1158787213 0 1158787213
text/html 1381/1381 CONNECT error:invalid-request
cvs9058[logs]


Tom Brand




Re: [squid-users] Persistent Connections

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 14:27 -0700, Mark Nottingham wrote:

 Clients and servers SHOULD NOT assume that a persistent connection is
 maintained for HTTP versions less than 1.1 unless it is explicitly
 signaled. See section 19.6.2 for more information on backward
 compatibility with HTTP/1.0 clients.
 
 ... and one could argue that it's explicitly signalled by the
 Content-Length header in the response.

Not quite.. Content-Length is an HTTP/1.0 header with very well defined
semantics. It doesn't imply that the connection is persistent. The
exception is that no Content-Length and no chunked transfer encoding
automatically implies that the connection is certainly not persistent,
as that's the only message delimiting method available then..

 Just thinking aloud -- the obvious solution to this is to make Squid  
 HTTP/1.1.

Yes.

  Of course, that's a lot of work, but I wonder if it would  
 be more manageable by going 1.1 on just the client side at first, 

It's not really that much work to get Squid up to the level where
HTTP/1.1 message signaling works. It just needs chunked transfer
encoding on both sides.. Getting it up to the level where trailers also
work requires a bit more work, but shouldn't be that tricky in Squid-3 I
think..
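
For reference, the chunked framing in question is just hex-length-prefixed
body pieces (a generic illustration, CRLF line endings implied):

HTTP/1.1 200 OK
Transfer-Encoding: chunked

5
hello
6
 world
0

A zero-length chunk terminates the message, which is what makes the
connection reusable without a Content-Length.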

The really big part is to assure HTTP/1.1 compliance, but it can be
debated how important that really is.. But as the two go a little hand
in hand, HTTP/1.1 for Squid never seems to get anywhere...

There was a transfer-encoding project for Squid some years ago, but it
died a slow death from being a bit too ambitious, trying to handle all
forms of transfer encoding efficiently (not only chunked but also gzip &
deflate), and then getting wound up in design considerations of whether
the gzip/deflate should be cached or not..

Regards
Henrik





Re: [squid-users] Persistent Connections

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 14:31 -0700, Mark Nottingham wrote:
 I realise that C-L has two different purposes. Since closing the
 connection signals both, synthesising the C-L doesn't seem like
 taking a huge liberty, in the face of serving a partial cached
 response as if it's the whole thing. YMMV (obviously).

Agreed.

Can be done, has been done, but isn't done, as we didn't like Squid to
add entity semantics where there were none. The main reason for this is
that Content-Length is an entity header, and we aim for semantic
transparency and don't like altering entities..

But reading the RFC again, it's quite OK for us to add Content-Length
even if it's an entity header. It has a bit of a special position due to
its shared entity & message properties.

Regards
Henrik




Re: [squid-users] https: fails with CONNECT error:invalid-request

2006-09-20 Thread Henrik Nordstrom
Wed 2006-09-20 at 17:37 -0400, Brand, Thomas R. wrote:

 access.log:1158787213.443  0 10.31.129.28 TCP_DENIED/400 1381
 CONNECT error:invalid-request - NONE/- text/html

Looks like your Squid didn't like the request at all.

What does cache.log say?

Regards
Henrik




Re: [squid-users] Squid - Erros in portuguese language problem.

2006-09-20 Thread Palula Brasil
I found these lines for configuring error_directory in squid but I don't
know what to put for langs and charsets. Does anybody know the
Portuguese config for these? Or know somewhere I can find a list of the
possible values?

Here is the conceptual line:
#error_directory <directory in error_dir> <langs> <charsets>

Here are some examples I found on the internet:
error_directory /opt/squid/share/errors/Russian-1251 ru,ru-* windows-1251
error_directory /opt/squid/share/errors/Russion-koi8-r ru,ru-* koi8-r
error_directory /opt/squid/share/errors/Simplify_Chinese zh,zh-* gb2312
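
Following the pattern of those examples, a guess at the corresponding
Portuguese line (the directory name and charset depend on your build; pt
is the Portuguese language code and iso-8859-1 its usual charset):

error_directory /opt/squid/share/errors/Portuguese pt,pt-* iso-8859-1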


 Previously, in the squid folder, a symbolic link existed called errors,
 linking to the Portuguese language folder. When I upgraded squid, this
 link was erased... Anyway, I created the link using ln but it still
 doesn't work.

 Does anybody know how to get this one runnin'?

You could do it the standard way and set error_directory in squid.conf to
the directory you want to use...

Regards
Henrik

- Original Message - 
From: Henrik Nordstrom [EMAIL PROTECTED]
To: Palula Brasil [EMAIL PROTECTED]
Cc: Squid squid-users@squid-cache.org
Sent: Wednesday, September 20, 2006 4:44 PM
Subject: Re: [squid-users] Squid - Erros in portuguese language problem.




RE: [squid-users] WCCPv2 current instructions?

2006-09-20 Thread Shaun Skillin (home)
Hi Henrik,
For this test, all of the clients are on the same LAN, 172.16.1.X/24.
The firewall is 172.16.1.254, squid is 172.16.1.3, clients are .100-.199
(dhcp range). Client gateway is 172.16.1.252, which is a Cisco 3550
Layer 3 switch with IP redirects.
I was wondering if I perhaps need to adjust the TCP MSS because we're
using a GRE tunnel? I don't know if this is possible on an ASA firewall.
I also heard something about vport?

Shaun


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 20, 2006 1:48 PM
To: Shaun Skillin (home)
Cc: Squid Users
Subject: RE: [squid-users] WCCPv2 current instructions?

Wed 2006-09-20 at 08:07 -0600, Shaun Skillin (home) wrote:
 more help.  I've experienced a few times that certain websites (like
 Ebay auctions) have problems.  I can access most things on the site, but
 certain functions just die horribly in timeout-land.  When I put the
 settings directly into the browser, I have no problem, but when running
 transparent using WCCP I have this problem.

Sites causing timeout problems when intercepted but not when using proxy
configuration usually indicate an MTU-related problem for traffic from
proxy to client.

What does the network between your station and the proxy look like? Any
path there with an MTU smaller than standard Ethernet?

Regards
Henrik


RE: [squid-users] WCCPv2 current instructions?

2006-09-20 Thread Shaun Skillin (home)
More information, if this helps to narrow it down... I have tried adjusting
MTU sizes to try to solve this.  When I set wccp0 to 1200, it seems to make
no difference at all.  When I set eth0 to 1200, ebay.com will not load at
all.  Other sites (presumably with smaller pages?) can load OK.
Also, very curious to me: I notice from a sniffer trace on the Squid box
that the SYN packet goes through the GRE tunnel, the SYN-ACK does not (it
seems to be spoofed from Squid back to the client), and the final ACK goes
through the tunnel.  Is this normal?


