Re: [squid-users] Wiki help for WPAD/PAC stuff (was Re: [squid-users] proxy.pac config)

2007-05-16 Thread K K

I'll take a look at the updated Wiki later today.

On 5/15/07, SSCR Internet Admin [EMAIL PROTECTED] wrote:

However, if the browser is not configured to use a PAC
file but a PAC file is delivered it brings up a
Security Alert because the browser never requested it.
I know the old Netscape browsers did this but am not
sure about IE.

Well, I'm sure local users will happily accept it by clicking OK; if not, they
don't have access. :)


The Netscape alert doesn't give the option to accept the PAC, it just
gives a warning that an unsolicited PAC was received.  If there were a
trivial way to reconfigure browsers to use a PAC just by returning the
right ActiveX or Java, then we'd see all sorts of malicious sites
using that technique to force random Internet users through the
attacker's proxy.

So how do you force your users to use the PAC?


What you can do is make sure your DHCP server and DNS are set up to be
fully compatible with WPAD, and then if any clients do make an attempt
to go DIRECT, return a web page containing:

1) Text instructing how to correctly enable WPAD and/or how to
configure PAC in the most popular browsers.
2) A link to a .REG file which forces the registry settings for IE to
use PAC on Microsoft Windows clients.
3) Instructions for manual configuration, for UNIX and for ancient
MacOS clients.

Even with all of this, expect to get plenty of support calls from
confused users.
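For readers new to WPAD: a minimal sketch of what the served PAC file (wpad.dat) can look like. The proxy host and internal domain below are placeholders, not values from this thread:

```javascript
// Minimal PAC file sketch. proxy.example.local and .example.local are
// hypothetical; substitute your own proxy host and internal domain.
function FindProxyForURL(url, host) {
  // Plain hostnames and hosts in the internal domain bypass the proxy.
  if (host.indexOf(".") === -1 || /\.example\.local$/.test(host))
    return "DIRECT";
  // Everything else goes through the proxy, with DIRECT as a fallback.
  return "PROXY proxy.example.local:3128; DIRECT";
}
```

Served as http://wpad.yourdomain/wpad.dat with MIME type application/x-ns-proxy-autoconfig, this is what WPAD-enabled browsers fetch and evaluate per request.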

I manage an environment with tens of thousands of internal customers,
and all default-route HTTP/HTTPS/SMTP/etc. traffic is denied. The only
exception is for a couple of really braindead clients that are
downright proxy-hostile; maybe a half dozen workstations total have an
exception to the policy.


Kevin

(P.S. Think carefully before conditioning users to accept REG files
from strangers).


Re: [squid-users] half of a transparent proxy question I guess.....

2007-05-16 Thread Emilio Casbas

Chris Robertson escribió:

Pat Riehecky wrote:

This is a bit of an odd duck, but

The university I work for has a bunch of library pages that can only be
accessed from on campus as they are hosted off site and authenticated by
IP address.  


I think that ezproxy will be the perfect solution to your problem.



This sounds like a perfect scenario for an acceleration setup.  You can 
dispense with having users set a proxy in their browser and only require 
authentication for off-site access.


But let me know if someone has achieved this with squid accel.


Thanks
Emilio C.


Re: [squid-users] Squid shutting down

2007-05-16 Thread Sean O'Reilly
Hadn't thought of that, don't think so but will investigate. Thanks

Sean


On Tue, 2007-05-15 at 16:24 +0200, Henrik Nordstrom wrote:
 Tue 2007-05-15 at 14:49 +0100, Sean O'Reilly wrote:
  Yes, but i haven't done anything other than watch the cache.log ?!?
 
 Not you, but perhaps something else..  Any system monitor running on
 your server killing unrecognized left-over processes?
 
 Regards
 Henrik



RE: [squid-users] Unable to download files over 2GB of size

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 14:34 -0700, Sathyan, Arjonan wrote:

 Was there any trace from the files which I have uploaded? Can you please
 tell me why I am not able to download files that are over 2GB via
 squid using IE 6?

The http_headers file only contains a Squid access-denied result.

The packet trace only contains a few SSH packets.

 Is this a bug in squid...?

Not from what it looks like so far. Pretty sure it's an MSIE6 bug.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Serve the same page always!

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 17:00 +0300, Odhiambo WASHINGTON wrote:
 Perhaps this is a FAQ, but... how do I configure squid to serve the same 
 page regardless of the incoming URL?

Please be a bit more specific. HTTP does not have pages, only
objects.

0) Is this a forward proxy for LAN -> Internet access, or a reverse
proxy for Internet -> your web server?

a) Do you want Squid to ignore the hostname component of received URLs
and always return the same site?

b) Do you want to redirect the user to a specific page unless the URL is
for that server?

c) As 'b' but without a redirect, just rewriting the requested URL on the
fly. This requires the redirection page to use absolute URLs, as the browser
is unaware the URL has been changed under its feet.
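For case (a), a hedged squid.conf sketch: with an accelerator port and a single origin-server peer, every request is sent to one backend regardless of the requested hostname. The hostname and address below are placeholders:

```
http_port 80 accel defaultsite=www.example.com
cache_peer 192.0.2.10 parent 80 0 no-query originserver
```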

Regards
Henrik




Re: [squid-users] logformat and emulate_httpd_log

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 17:27 +0200, Emilio Casbas wrote:

 But we need a %ru parameter like the httpd native log, that is showing;
 /SI/images/servicios/normasdeuso/normas.swf instead of
 http://X.X.X.60/SI/images/servicios/normasdeuso/normas.swf

This is not implemented yet. Patches adding this format are welcome.

Regards
Henrik




Re: [squid-users] deny_info Question

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 17:59 -0400, Brad Taylor wrote:
 I'm using Squid 2.6.STABLE6; the deny_info function in the config below
 worked in Squid 2.4 but is not working in 2.6.STABLE6. I get this error
 message when going to http://192.168.60.19 (the squid server):

If you want to be able to use the Squid server as a web server then you
need accelerator / reverse proxy mode.

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

The syntax for this is a bit different from earlier releases. See
the release notes. The examples in the FAQ are up to date.
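As a rough sketch of the 2.6 syntax (hostname and origin address below are placeholders), the 2.4-era httpd_accel_* directives collapse into http_port options plus a cache_peer line:

```
# Squid 2.6 reverse-proxy style
http_port 80 accel defaultsite=www.example.com
cache_peer 192.0.2.10 parent 80 0 no-query originserver
# replaces 2.4's httpd_accel_host / httpd_accel_port / httpd_accel_single_host
```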

Regards
Henrik




Re: [squid-users] TPROXY

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 19:59 -0300, Alexandre Correa wrote:
 Hello,
 
 Is anyone using the TPROXY patch (balabit.com) with 64-bit Linux? Using NAT too?
 
 Here, the kernel crashes some time after boot (causing the Ethernet
 card driver to hang)...

The tproxy mailing list is a more appropriate forum for TPROXY kernel
problems.

http://www.balabit.com/support/community/products/tproxy/

Regards
Henrik




RE: [squid-users] Wiki help for WPAD/PAC stuff (was Re: [squid-users] proxy.pac config)

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 16:56 -0700, Jeff Smith wrote:

 However, if the browser is not configured to use a PAC
 file but a PAC file is delivered it brings up a
 Security Alert because the browser never requested it.
 I know the old Netscape browsers did this but am not
 sure about IE.

What they do varies. Some just show an error page, some ask you where
to save the file, and some display it on the screen.

To do the automatic configuration thing this way you need to write a
program to automatically reconfigure the client. It's not possible via
JavaScript or similar (at least not when fetched over the network; not
sure when loaded from file:///).

Regards
Henrik




Re: [squid-users] Really transparent proxy

2007-05-16 Thread Henrik Nordstrom
Wed 2007-05-16 at 11:54 +0800, Adrian Chadd wrote:

 I think that's enough to go on. You could try visiting
 http://www.squid-cache.org/
 and then tell me what IP it should be coming from..

http://devel.squid-cache.org/cgi-bin/test also shows all the interesting
details about the request..

Regards
Henrik




Re: [squid-users] tcp_outgoing_address - Multiple external IPs

2007-05-16 Thread Henrik Nordstrom
Wed 2007-05-16 at 11:19 +0530, Ashutosh Naik wrote:

 What I would rather have is squid automatically bind to every IP, and the
 tcp_outgoing_address should be the one that was connected to.

See tcp_outgoing_address. You can specify multiple addresses and rules
for when each should be used.
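A sketch of what that can look like (the subnets and outgoing addresses below are made up):

```
# Pick the outgoing address per client subnet
acl lan_a src 10.0.1.0/24
acl lan_b src 10.0.2.0/24
tcp_outgoing_address 192.0.2.1 lan_a
tcp_outgoing_address 192.0.2.2 lan_b
tcp_outgoing_address 192.0.2.1          # default for everything else
```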

Regards
Henrik




Re: [squid-users] Collapsed forwarding question

2007-05-16 Thread Henrik Nordstrom
Tue 2007-05-15 at 19:55 -0400, BARRY J BLUMENFELD wrote:
 The question is how does squid schedule the response to multiple
 connections?

It waits for the HTTP headers. Then, if the object can be cached, it
sends it to all waiting clients while it is being retrieved from the origin.

If the object cannot be cached, the first client gets this response,
and the other clients' requests are forwarded as separate requests.

 There are no timeouts with collapsed forwarding off.

Could it be that the timeouts are related to range requests? Not entirely
sure what collapsed_forwarding will do on range requests.

Regards
Henrik




Re: [squid-users] Squid shutting down

2007-05-16 Thread Sean O'Reilly
Hi, I can't see anything that could be sending the signal to squid to
shut down.

These are the only processes running at the moment

anacron         0:off  1:off  2:on   3:on  4:on  5:on  6:off
atd             0:off  1:off  2:off  3:on  4:on  5:on  6:off
autofs          0:off  1:off  2:off  3:on  4:on  5:on  6:off
crond           0:off  1:off  2:on   3:on  4:on  5:on  6:off
cups            0:off  1:off  2:on   3:on  4:on  5:on  6:off
gpm             0:off  1:off  2:on   3:on  4:on  5:on  6:off
kudzu           0:off  1:off  2:off  3:on  4:on  5:on  6:off
messagebus      0:off  1:off  2:off  3:on  4:on  5:on  6:off
named           0:off  1:off  2:on   3:on  4:on  5:on  6:off
netfs           0:off  1:off  2:off  3:on  4:on  5:on  6:off
network         0:off  1:off  2:on   3:on  4:on  5:on  6:off
nfslock         0:off  1:off  2:off  3:on  4:on  5:on  6:off
pcscd           0:off  1:off  2:on   3:on  4:on  5:on  6:off
portmap         0:off  1:off  2:off  3:on  4:on  5:on  6:off
readahead_early 0:off  1:off  2:on   3:on  4:on  5:on  6:off
sendmail        0:off  1:off  2:on   3:on  4:on  5:on  6:off
squid           0:off  1:off  2:on   3:on  4:on  5:on  6:off
squid-out       0:off  1:off  2:on   3:on  4:on  5:on  6:off
sshd            0:off  1:off  2:on   3:on  4:on  5:on  6:off
syslog          0:off  1:off  2:on   3:on  4:on  5:on  6:off
xfs             0:off  1:off  2:on   3:on  4:on  5:on  6:off
yum-updatesd    0:off  1:off  2:off  3:on  4:on  5:on  6:off

Any other information I could provide that might help?

Sean



On Wed, 2007-05-16 at 08:36 +0100, Sean O'Reilly wrote:
 Hadn't thought of that, don't think so but will investigate. Thanks
 
 Sean
 
 
 On Tue, 2007-05-15 at 16:24 +0200, Henrik Nordstrom wrote:
  Tue 2007-05-15 at 14:49 +0100, Sean O'Reilly wrote:
   Yes, but i haven't done anything other than watch the cache.log ?!?
  
  Not you, but perhaps something else..  Any system monitor running on
  your server killing unrecognized left-over processes?
  
  Regards
  Henrik
 



[squid-users] urlgroup usage weirdness

2007-05-16 Thread Volodymyr Kostyrko

I just fail to understand how I can use urlgroups in Squid. I
wrote a url_rewrite program which returns something like
'!porn!http://previous.url' on matched queries. I added this to my squid
config:

acl porn urlgroup porn
acl classes src some.dotted.quad.and/mask

http_access deny porn
http_access allow classes

... and this seems not to work. If I strip out the second
'http_access' directive, squid blocks access totally.

What have I done wrong? Is there an example of using urlgroups?

Also, can I specify more than one urlgroup when returning a result from
the url_rewriter?
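For illustration, a sketch of a url_rewrite helper of the kind described above. The blocklist and the "porn" group name are hypothetical; Squid feeds one request per line on stdin and reads the (possibly group-tagged) URL back on stdout:

```javascript
#!/usr/bin/env node
// Hypothetical url_rewrite_program helper. Squid writes one request per
// line ("URL client/fqdn user method ..."); the helper replies with the
// possibly rewritten URL, optionally prefixed with !group! so that an
// "acl NAME urlgroup group" line can match it.
const BLOCKLIST = ["porn.example.com"]; // placeholder host list

function rewrite(line) {
  const url = line.split(/\s+/)[0];
  const host = url.split("/")[2] || "";
  // Tag matching requests with the "porn" urlgroup; pass others through.
  return BLOCKLIST.indexOf(host) !== -1 ? "!porn!" + url : url;
}

// Main loop: answer Squid line by line (skipped when loaded for testing
// or run interactively).
if (typeof require !== "undefined" && require.main === module && !process.stdin.isTTY) {
  const rl = require("readline").createInterface({ input: process.stdin });
  rl.on("line", (l) => process.stdout.write(rewrite(l) + "\n"));
}
```

Echoing the URL unchanged tells Squid to leave the request alone.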

--
Sphinx of black quartz judge my vow!



RE: [squid-users] deny_info Question

2007-05-16 Thread Brad Taylor
The Squid server is working in accelerator / reverse proxy mode. That is
working. What is not working is the deny_info option. I need that to
redirect anyone coming in on port 80 to an https address. As far as I
can tell, the deny_info option didn't change in 2.6.

Here is my config:

http_port 80
https_port 443 cert=/etc/squid/autotask.net-11-07.pem key=/etc/squid/autotask.net_key-11-07.pem options=NO_SSLv2 cipher=DEFAULT:!EXPORT:!LOW defaultsite=qa3
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl JS url_regex .js$
no_cache deny JS
acl CSS url_regex .css$
no_cache deny CSS
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
#Suggested default:
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
refresh_pattern -i \.jpg$ 0 100% 10080
refresh_pattern -i \.gif$ 0 100% 10080
refresh_pattern -i \.png$ 0 100% 10080
refresh_pattern -i \.bmp$ 0 100% 10080
#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl CONNECT method CONNECT
acl port80 myport 80
acl port443 port 443
acl port563 port 563
acl TheOriginServer dst 192.168.60.40
#Recommended minimum configuration:
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
http_access allow port563
http_access allow port443
http_access deny port80
http_access allow TheOriginServer
http_access deny all
http_reply_access allow all
icp_access allow all
cache_peer 192.168.60.40 parent 80 0 no-query originserver
#2.4 Squid config next 4 lines
#httpd_accel_host 192.168.60.40
#httpd_accel_port 80
#httpd_accel_single_host on
#httpd_accel_with_proxy off
deny_info https://qa3/ port80
coredump_dir /var/spool/squid
visible_hostname qa3
logfile_rotate 9
negative_ttl 0 minutes



-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 16, 2007 8:39 AM
To: Brad Taylor
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] deny_info Question

Tue 2007-05-15 at 17:59 -0400, Brad Taylor wrote:
 I'm using Squid 2.6.STABLE6; the deny_info function in the config below
 worked in Squid 2.4 but is not working in 2.6.STABLE6. I get this error
 message when going to http://192.168.60.19 (the squid server):

If you want to be able to use the Squid server as a web server then you
need accelerator / reverse proxy mode.

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

The syntax for this is a bit different from earlier releases. See
the release notes. The examples in the FAQ are up to date.

Regards
Henrik


Re: [squid-users] Squid shutting down

2007-05-16 Thread Henrik Nordstrom
Wed 2007-05-16 at 13:52 +0100, Sean O'Reilly wrote:
 Hi, I can't see anything that could be sending the signal to squid to
 shutdown.

 squid      0:off  1:off  2:on  3:on  4:on  5:on  6:off
 squid-out  0:off  1:off  2:on  3:on  4:on  5:on  6:off

You have two Squid init scripts. Could it be that the second one shuts
down the first when started?

Regards
Henrik




RE: [squid-users] deny_info Question

2007-05-16 Thread Henrik Nordstrom
Wed 2007-05-16 at 09:46 -0400, Brad Taylor wrote:
 The Squid server is working in accelerator / reverse proxy mode. That is
 working.

It can't be working with that config. An http_port (or https_port) without
any accelerator options won't accept web server requests, only proxy
requests. The error you see is because of this, not because of deny_info.

 What is not working is the deny_info option. I need that to
 work to redirect anyone coming from port 80 to be redirected to an https
 address. As far as I can tell the deny_info option didn't change in 2.6.

It didn't. The httpd_accel and http_port options did, significantly..

Regards
Henrik




[squid-users] Squid vs caching products like memcached

2007-05-16 Thread lightbulb432

What’s the difference between the reverse proxying features of Squid and a
caching product like memcached?

I don’t necessarily mean specific comparisons of both products (e.g.
performance), but rather explanations of what both types of products do. I
understand that there are some large-scale websites out there that make use
of both, so clearly they are better at different things and both have a
place in a given architecture.

As a newbie, however, I’m unable to determine how they both come together
and fit into an architecture to make a website more scalable, so your help
would be really appreciated.

Thanks.
-- 
View this message in context: 
http://www.nabble.com/Squid-vs-caching-products-like-memcached-tf3765217.html#a10643894
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Squid vs caching products like memcached

2007-05-16 Thread Jose Celestino
Words by lightbulb432 [Wed, May 16, 2007 at 08:16:44AM -0700]:
 
 What’s the difference between the reverse proxying features of Squid and a
 caching product like memcached?


Memcached has nothing to do with proxying. Squid talks HTTP and caches
HTTP objects; memcached talks the memcache protocol and caches arbitrary
objects (whatever key/value you want). Memcached is not related (directly
at least) to HTTP; it is just a cache engine, and you have to program
around it to turn it into something useful.

 
 I don’t necessarily mean specific comparisons of both products (e.g.
 performance), but rather explanations of what both types of products do. I
 understand that there are some large-scale websites out there that make use
 of both, so clearly they are better at different things and both have a
 place in a given architecture.
 

Yes. Squid is something you put between the client and the web
server. Memcached is something you put between your web servers and your
database/filesystem/whatever; it stays on the backend.

-- 
Jose Celestino

http://www.msversus.org/ ; http://techp.org/petition/show/1
http://www.vinc17.org/noswpat.en.html

And on the trillionth day, Man created Gods. -- Thomas D. Pate


Re: [squid-users] Squid vs caching products like memcached

2007-05-16 Thread Sean Walberg

On 5/16/07, lightbulb432 [EMAIL PROTECTED] wrote:


What's the difference between the reverse proxying features of Squid and a
caching product like memcached?


Memcached is a distributed, in-memory hash table with a network
interface.  Most often you stuff the results of expensive queries
(database, computations, XML processing) into memcached so that
multiple nodes can get the data without having to redo the expensive
query.

Squid as a reverse proxy caches http objects -- pages, css,
javascript, images, etc.  You use squid to offload entire requests to
your web server.

As an example at b5media we front our web farm with Squid, but only
cache images, javascript, and CSS.  WordPress and Squid don't play
well together because WordPress doesn't send out proper headers for
Squid to use, so we don't cache pages.

Besides taking hits off the web server, Squid is also good at spoon
feeding slow clients.  Previously a slow client kept an expensive
Apache slot tied up; now Squid takes that data from Apache and feeds
the client, and Squid is more efficient at this task than Apache.

On the WordPress backend we store a lot of internal stuff in
memcached.  We have some internal code that uses a REST API that
figures out what blog goes in which channel.  Rather than make a
handful of REST calls on every page view, which incurs latency for the
web hit and CPU for the XML processing, we check memcached to see if
the PHP object exists.  If we get a memcached cache hit we've just
saved ourselves a lot of time.  If we get a miss, we make the API
calls and stuff the PHP object back into memcached for the next
person.
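The check-memcached-then-compute flow described above is the classic cache-aside pattern. A minimal sketch, with a Map standing in for the real memcached client and a caller-supplied function standing in for the expensive REST/XML work:

```javascript
// Cache-aside sketch. `store` stands in for memcached (get/set);
// `expensiveLookup` stands in for the costly REST calls + XML parsing.
function makeCache(expensiveLookup) {
  const store = new Map(); // stand-in for a memcached client
  return function get(key) {
    if (store.has(key)) return store.get(key); // cache hit: cheap
    const value = expensiveLookup(key);        // miss: do the real work
    store.set(key, value);                     // save for the next caller
    return value;
  };
}
```

With a real memcached client the Map calls become network get/set operations (usually with an expiry), but the hit/miss logic is the same.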

I look at the caching within a LAMP application as a multilevel thing.
We cache the http objects we can with squid.  If we have to generate
a page we cache what we can in memcached, just like we cache compiled
PHP scripts in an opcode cache.  If we have to hit the database we use
MySQL query caching at that layer.

It's not a one-or-the-other type of thing, these are two tools that
are clearly the best at what they do, and can (should?) be used
together as part of a good architecture.

Sean

--
Sean Walberg [EMAIL PROTECTED]http://ertw.com/


RE: [squid-users] Unable to download files over 2GB of size

2007-05-16 Thread Sathyan, Arjonan

Henrik,

 Is this a bug in squid...?

 Not from what it looks so far. Pretty sure it's an MSIE6 bug.

I don't think this is an MSIE6 bug, since I am able to download the same
DVD ISO file without using Squid (i.e., when directly connected to the
internet).

This issue arises only when downloading through the Squid proxy...
 
Regards,
Sathyan Arjunan
Unix Support | +1 408-962-2500 Extn : 22824
Kindly copy [EMAIL PROTECTED] or reach us @ 22818 for any such
correspondence to ensure your email is replied to in a timely
manner.


Re: [squid-users] ESI feature in squid3?

2007-05-16 Thread howard chen

This is interesting... ESI seems to have been around for a number of years,
and is claimed to lead to great improvements...

but does no one care about it? Are there better alternatives? What are the difficulties?



On 5/16/07, Alex Rousskov [EMAIL PROTECTED] wrote:

On Wed, 2007-05-16 at 01:43 +0800, howard chen wrote:

 Is that true that ESI will be part of squid3 release? Seems not many
 people are taking abt this feature, is it still active? or
 developments have been finished?

Squid 3.0 comes with experimental ESI code, but lack of volunteers or
sponsors is pushing known ESI bugs to Squid 3.1 or later.

Alex.





Re: RE: [squid-users] Unable to download files over 2GB of size

2007-05-16 Thread Dieter Bloms
Hi,

On Wed, May 16, Sathyan, Arjonan wrote:

 I don't think this is an MSIE6 bug, since I am able to download the same
 DVD ISO file without using Squid. (i.e., if directly connected to
 internet)
 
 This issue arises only when downloading through Squid Proxy...

Internet Explorer behaves differently depending on whether a proxy is
configured or not.
We had some issues with HTTPS sites with authentication through a proxy,
and the wrong behaviour came from Internet Explorer.

Please try some different browsers and you will see that Internet
Explorer has the most bugs, bugs, bugs...


-- 
Gruß

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.




Re: [squid-users] Squid vs caching products like memcached

2007-05-16 Thread lightbulb432

Great answer, thanks!

How does Squid's page caching work in terms of caching pages (as
though they were static) that were generated dynamically?

For example, Amazon.com's homepage is dynamic but not generated dynamically
on each request for that page; rather, I assume they set it to be cached
anytime a request for that page comes in, with some sort of expiration
policy (e.g. only dynamically generate the homepage once an hour, then serve
that cached static page for the rest of that hour).
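Whether Squid can do this is mostly decided by the response headers the origin emits. For the hourly policy described above, the origin would send something like (a sketch, not a quote from any real site):

```
HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
```

Squid then serves the cached copy until max-age expires and only regenerates the page on the next miss.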

I really hope Squid makes such a configuration possible and easy.

Thanks.



Sean A. Walberg wrote:
 
 On 5/16/07, lightbulb432 [EMAIL PROTECTED] wrote:
 
 What's the difference between the reverse proxying features of Squid and
 a
 caching product like memcached?
 
 Memcached is a distributed, in memory, hash table with a network
 interface.  Most often you stuff the results of expensive queries
 (database, computations, XML processing) into memcached so that
 multiple nodes can get the data without having to do the expensive
 query.
 
 Squid as a reverse proxy caches http objects -- pages, css,
 javascript, images, etc.  You use squid to offload entire requests to
 your web server.
 
 As an example at b5media we front our web farm with Squid, but only
 cache images, javascript, and CSS.  WordPress and Squid don't play
 well together because WordPress doesn't send out proper headers for
 Squid to use, so we don't cache pages.
 
 Beside taking hits off the web server, Squid is also good at spoon
 feeding slow clients.  Previously a slow client keeps an expensive
 Apache slot tied up, now Squid takes that data  from Apache and feeds
 the client -- squid is more efficient at this task than Apache.
 
 On the WordPress backend we store a lot of internal stuff in
 memcached.  We have some internal code that uses a REST API that
 figures out what blog goes in which channel.  Rather than make a
 handful of REST calls on every page view, which incurs latency for the
 web hit and CPU for the XML processing, we check memcached to see if
 the PHP object exists.  If we get a memcached cache hit we've just
 saved ourselves a lot of time.  If we get a miss, we make the API
 calls and stuff the PHP object back into memcached for the next
 person.
 
 I look at the caching within a LAMP application as a multilevel thing.
  We cache the http objects we can with squid.  If we have to generate
 a page we cache what we can in memcached, just like we cache compiled
 PHP scripts in an opcode cache.  If we have to hit the database we use
 MySQL query caching at that layer.
 
 It's not a one-or-the-other type of thing, these are two tools that
 are clearly the best at what they do, and can (should?) be used
 together as part of a good architecture.
 
 Sean
 
 -- 
 Sean Walberg [EMAIL PROTECTED]http://ertw.com/
 
 

-- 
View this message in context: 
http://www.nabble.com/Squid-vs-caching-products-like-memcached-tf3765217.html#a10646669
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-16 Thread Anton Melser

Hi,
I have searched high and low for this and can't get anywhere! I am
using 2.6.STABLE5 (the standard Debian etch package).
I am trying to get squid to accelerate both a local apache and a
remote apache (I only want acceleration, nothing else).
If I set squid up on 3128 (with both the local and remote apache on 80),
everything works fine. However, when I set up squid on 80 and the
local apache on either 81 (or whatever) or 127.0.0.1:80, then for the
local site I get an access denied:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://lesite.org/

The following error was encountered:

  * Access Denied.

Access control configuration prevents your request from being
allowed at this time. Please contact your service provider if you feel
this is incorrect.

Your cache administrator is webmaster.
Generated Wed, 16 May 2007 17:59:44 GMT by lesite.org (squid/2.6.STABLE5)

In the access.log I get :

1179338384.598  0 ip_address_of_machine TCP_DENIED/403 1568 GET
http://lesite.org/ - NONE/- text/html
1179338384.598  9 firwall_ip TCP_MISS/403 1766 GET
http://lesite.org/ - DIRECT/172.16.116.1 text/html

But with debug_options ALL,1 33,2, in cache.log I get:
2007/05/16 19:59:44| The request GET http://lesite.org/ is ALLOWED,
because it matched 'sites_server_2'
2007/05/16 19:59:44| The request GET http://lesite.org/ is ALLOWED,
because it matched 'sites_server_2'
2007/05/16 19:59:44| WARNING: Forwarding loop detected for:
Client: anip http_port: an_ip.1:80
GET http://lesite.org/ HTTP/1.0
Host: lesite.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.8.1.3)
Gecko/20070309 Firefox/2.0.0.3
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en,fr;q=0.8,fr-fr;q=0.5,en-us;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: LangCookie=fr;
Wysistat=0.9261449444734621_1179321228414%uFFFD6%uFFFD1179321268023%uFFFD2%uFFFD1179317769%uFFFD0.5886263648254288_1179223653760;
PHPSE
SSID=b0319d53833d11da790f5868f56c32e1; TestCookie=OK
Pragma: no-cache
Via: 1.1 lesite.org:80 (squid/2.6.STABLE5)
X-Forwarded-For: unip
Cache-Control: no-cache, max-age=259200
Connection: keep-alive

2007/05/16 19:59:44| The reply for GET http://lesite.org/ is ALLOWED,
because it matched 'QUERY'
2007/05/16 19:59:44| The reply for GET http://lesite.org/ is ALLOWED,
because it matched 'all'
2007/05/16 19:59:52| Preparing for shutdown after 2 requests

Can someone tell me what is going on here? I have tried pretty much
everything I can think of with no luck, and the boss is getting mighty
impatient!
Cheers
Anton
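For what it's worth, a hedged sketch of a Squid 2.6 accelerator setup for this kind of scenario; the remote hostname and ACL names are placeholders. The key point is that the local site's peer must point at Apache on port 81, not back at Squid's own port 80, which is what produces a forwarding loop like the one in the log above:

```
http_port 80 accel vhost
acl sites_local dstdomain lesite.org
acl sites_remote dstdomain remote.example.org
cache_peer 127.0.0.1 parent 81 0 no-query originserver name=local
cache_peer remote.example.org parent 80 0 no-query originserver name=remote
cache_peer_access local allow sites_local
cache_peer_access remote allow sites_remote
http_access allow sites_local
http_access allow sites_remote
```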


Re: Re: [squid-users] Squid vs caching products like memcached

2007-05-16 Thread Jose Celestino
Words by lightbulb432 [Wed, May 16, 2007 at 10:39:29AM -0700]:
 
 Great answer, thanks!
 
 How does Squid's page caching ability work in terms of caching pages (as
 though they are static) but that were generated dynamically?
 
 For example, Amazon.com's homepage is dynamic but not generated dynamically
 on each request for that page; rather, I assume they set it to be cached
 anytime a request for that page comes in, with some sort of expiration
 policy (e.g. only dynamically generate the homepage once an hour, then serve
 that cached static page for the rest of that hour).
 
 I really hope Squid makes such a configuration possible and easy.
 

Yes. That's the basics :)

-- 
Jose Celestino

http://www.msversus.org/ ; http://techp.org/petition/show/1
http://www.vinc17.org/noswpat.en.html

And on the trillionth day, Man created Gods. -- Thomas D. Pate


Re: [squid-users] Re: Squid vs caching products like memcached

2007-05-16 Thread lightbulb432

So are you saying that it is possible and quite basic to do this with Squid?

My understanding is that Squid can cache static objects, but I am unsure
whether it can cache entire dynamically generated pages (not just the
static content like images and stylesheets contained within those pages),
and under custom expiration rules like the one I described in my previous
post about Amazon.com.



Jose Celestino wrote:
 
 Words by lightbulb432 [Wed, May 16, 2007 at 10:39:29AM -0700]:
 
 Great answer, thanks!
 
 How does Squid's page caching ability work in terms of caching pages (as
 though they are static) but that were generated dynamically?
 
 For example, Amazon.com's homepage is dynamic but not generated
 dynamically
 on each request for that page; rather, I assume they set it to be cached
 anytime a request for that page comes in, with some sort of expiration
 policy (e.g. only dynamically generate the homepage once an hour, then
 serve
 that cached static page for the rest of that hour).
 
 I really hope Squid makes such a configuration possible and easy.
 
 
 Yes. That's the basics :)
 
 -- 
 Jose Celestino
 
 http://www.msversus.org/ ; http://techp.org/petition/show/1
 http://www.vinc17.org/noswpat.en.html
 
 And on the trillionth day, Man created Gods. -- Thomas D. Pate
 
 

-- 
View this message in context: 
http://www.nabble.com/Squid-vs-caching-products-like-memcached-tf3765217.html#a10652380
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] deny_info Question

2007-05-16 Thread Brad Taylor
I tried:

http_port 80 accel defaultsite=your.main.website

and that did not allow squid to start.

When I tried:

http_port 80 defaultsite=your.main.website

without the accel, it did work.

Is this site wrong: http://wiki.squid-cache.org/SquidFaq/ReverseProxy


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 16, 2007 10:58 AM
To: Brad Taylor
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] deny_info Question

Wed 2007-05-16 at 09:46 -0400, Brad Taylor wrote:
 The Squid server is working in accelerator / reverse proxy mode. That
 is working.

Can't be working with that config. An http_port (or https_port) without
any accelerator options won't accept web server requests, only proxy
requests. And the error you see is because of this, not the deny_info..

 What is not working is the deny_info option. I need that to
 redirect anyone coming in on port 80 to an https address. As far
 as I can tell the deny_info option didn't change in 2.6.

It didn't. The httpd_accel and http_port options did, significantly..

Regards
Henrik


Re: [squid-users] Re: Squid vs caching products like memcached

2007-05-16 Thread Chris Robertson

lightbulb432 wrote:

So are you saying that it is possible and quite basic to do this with Squid?

My understanding is that Squid can cache static objects, but am unaware
about whether it can cache entire dynamically generated pages (not just the
static content like images and stylesheets contained within those pages),
and under custom expiration rules like the one I described in my previous
post about Amazon.com.
  


Read and be enlightened:

http://www.mnot.net/cache_docs/

Chris


RE: [squid-users] Really transparent proxy

2007-05-16 Thread Facundo Vilarnovo
Zul,
What variables are you referring to? We tested setting the proxy IP in IE, 
pointing at port 3128. Using http://www.whatsmyipaddress.com, the result is 
that it passes the original source IP address (the client's), but detects a 
proxy server. Doing it totally transparent with WCCP, nothing configured in 
IE, we get the same results.
The point is the proxy is still being detected. Toggling variables like Via 
and XFF only changes whether the client IP address is passed or not; it seems 
to have nothing to do with the problem of the cache being visible or not.

Via off, XFF off = client's source IP is shown, proxy detected.

Via on, XFF on = client's source IP is not shown (shows proxy IP), proxy not 
detected.

Tnxs!
Facundo Vilarnovo

  




-----Original Message-----
From: zulkarnain [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 16, 2007 01:43 AM
To: Facundo Vilarnovo; squid-users@squid-cache.org
Subject: RE: [squid-users] Really transparent proxy

--- Facundo Vilarnovo [EMAIL PROTECTED] wrote:
 Zul, we already do that... it doesn't change
 anything :(
 
 I don't remember right now how it was but, in option
 1 via off, forward off, show that I'm BEHIND a
 proxy, but show the client ip address. Option 2:
 Without via and forward doesn't, but shows the squid
 ip address, instead the clients ip, I don't know if
 you understand me :(
 

Which proxy variables exactly said that you are
behind a proxy server in your testing?

Zul



 



Re: [squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-16 Thread Chris Robertson

Anton Melser wrote:

Hi,
I have searched high and low for this, and can't get anywhere!!! I am
using 2.6.STABLE5 (standard debian etch package).
I am trying to get squid to accelerate both a local apache and a
distant apache (I only want accelerating, nothing else).
If I set squid up on 3128 (with both local and distant apache on 80),
then everything works fine. However, when I set up squid on 80 and
local apache on either 81 (or whatever) or 127.0.0.1:80 then for the
local site I get an access denied.


When you change what port Apache is listening on, did you just change 
the http_port, or did you specify an IP as well in the squid.conf?  Did 
you change the cache_peer line in Squid? Just asking because...



2007/05/16 19:59:44| WARNING: Forwarding loop detected for:
Client: anip http_port: an_ip.1:80


...this looks like it could be caused by one (or both) of those.



Can someone tell me what is going on here? I have tried pretty much
everything I can think of with no luck, and the boss is getting mighty
impatient!
Cheers
Anton


Have a peek at the FAQ entries on accelerator setups, if you haven't 
already. http://wiki.squid-cache.org/SquidFaq/ReverseProxy/


Chris


Re: [squid-users] Really transparent proxy

2007-05-16 Thread Chris Robertson

Facundo Vilarnovo wrote:

Zul,
What variables are you referring to? We test setting up the proxy ip on 
the IE.
Pointing to port 3128 using http://www.whatsmyipaddress.com, as a result it says it 
passes the original source ip address (client's ip), but detects a proxy server. Doing 
totally transparent with wccp, nothing configured on IE, we get the same 
results.
The point is we are still getting the proxy detected. Using variables like via and XFF, the result of using the XFF and via is that passes the client ip address or don't. 


While the above is correct...


it's seems to have nothing to do with the problem of the cache being visible or 
don't.
  


...this is not.


Via off XFF off = clients source ip it's shown, proxy detected.
  


Makes sense.  You are still transmitting an X-Forwarded-For header.  Just 
not populating it with data.



Via on XFF on = clients source ip it's not shown (shows proxy ip), proxy not 
detected.
  


This is a bit of a mystery.  Perhaps the script is being tricked by 
having a valid XFF and VIA header which don't agree with the client 
source address.



Tnxs!
Facundo Vilarnovo
  


In any case, setting the tag forwarded_for to off in the squid.conf 
file does not prevent its addition by Squid (see 
http://www.squid-cache.org/Versions/v2/HEAD/cfgman/forwarded_for.html).  
Setting via off only prevents the instance of Squid where it is set 
from adding its own Via header.  Try using...


header_access Via deny all
header_access X-Forwarded-For deny all

...and accessing whatsmyipaddress.com.  You might have better luck.

Chris


[squid-users] Custom error pages

2007-05-16 Thread Omar M
Hello everyone:

I have two questions and I was wondering what could I do...

one: Is it possible to use a CSS file with my error pages? I want to give
these pages a nice look.

two: Could I load some pics, JPG or PNG? I want to use my logo.

Of course I already tried both things and I couldn't. I can't use my
style.css file :S . And I can't see the images even if I use a full path.

Any idea would be great...

Regards.

Omar M.



RE: [squid-users] Really transparent proxy

2007-05-16 Thread Facundo Vilarnovo
Chris,
 
Thanx for your quick answer.
We've also tried that; now that you mention it, we are still trying a few 
combinations of the following lines:
 
header_access Via deny all / none
header_access X-Forwarded-For deny all / none
via off / on / deny
forwarded_for off / on / deny
 
The best result we've got is that the proxy server is not being detected... 
but requests still go out with the proxy's IP.
 
Some remaining possibilities we are studying:
 
-Our squid has only one NIC, not two like lots of examples here. (eth0 + gre0)
-We are using REDIRECT in iptables instead of NAT... has that anything to do 
with it?
-We are trying it transparently (not setting the proxy in IE) and forcing 
it... the results are the same, I guess?



-----Original Message-----
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 16, 2007 05:36 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Really transparent proxy

Facundo Vilarnovo wrote:
 Zul,
   What variables are you referring to? We test setting up the proxy ip on 
 the IE.
 Pointing to port 3128 using http://www.whatsmyipaddress.com, as a result it 
 says it passes the original source ip address (client's ip), but detects a 
 proxy server. Doing totally transparent with wccp, nothing configured on 
 IE, we get the same results.
 The point is we are still getting the proxy detected. Using variables like 
 via and XFF, the result of using the XFF and via is that passes the client ip 
 address or don't. 

While the above is correct...

 it's seems to have nothing to do with the problem of the cache being visible 
 or don't.
   

...this is not.

 Via off XFF off = clients source ip it's shown, proxy detected.
   

Makes sense.  You are still transmitting a X-Forwarded-For header.  Just 
not populating it with data.

 Via on XFF on = clients source ip it's not shown (shows proxy ip), proxy not 
 detected.
   

This is a bit of a mystery.  Perhaps the script is being tricked by 
having a valid XFF and VIA header which don't agree with the client 
source address.

 Tnxs!
 Facundo Vilarnovo
   

In any case, setting the tag forwarded_for to off in the squid.conf 
file does not prevent its addition by Squid (see 
http://www.squid-cache.org/Versions/v2/HEAD/cfgman/forwarded_for.html).  
Setting via off only prevents the instance of Squid where it is set 
from adding its own Via header.  Try using...

header_access Via deny all
header_access X-Forwarded-For deny all

...and accessing whatsmyipaddress.com.  You might have better luck.

Chris


[squid-users] FW: ldap and squid authentication

2007-05-16 Thread Trevor Akers
My ldap authentication works flawlessly, but users do not want to have
to put in their username and password every time they open a browser.
 
Is it possible to authorize the user with squid_ldap_group and let them
access the internet without them having to put in credentials, or only
put in their credentials once?


Re: [squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-16 Thread Anton Melser

On 16/05/07, Chris Robertson [EMAIL PROTECTED] wrote:

Anton Melser wrote:
 Hi,
 I have searched high and low for this, and can't get anywhere!!! I am
 using 2.6.STABLE5 (standard debian etch package).
 I am trying to get squid to accelerate both a local apache and a
 distant apache (I only want accelerating, nothing else).
 If I set squid up on 3128 (with both local and distant apache on 80),
 then everything works fine. However, when I set up squid on 80 and
 local apache on either 81 (or whatever) or 127.0.0.1:80 then for the
 local site I get an access denied.

When you change what port Apache is listening on, did you just change
the http_port, or did you specify an IP as well in the squid.conf?  Did
you change the cache_peer line in Squid? Just asking because...

 2007/05/16 19:59:44| WARNING: Forwarding loop detected for:
 Client: anip http_port: an_ip.1:80

...this looks like it could be caused by one (or both) of those.


 Can someone tell me what is going on here? I have tried pretty much
 everything I can think of with no luck, and the boss is getting mighty
 impatient!
 Cheers
 Anton

Have a peek at the FAQ entries on accelerator setups, if you haven't
already. http://wiki.squid-cache.org/SquidFaq/ReverseProxy/

Chris


Thanks Chris, I definitely changed the port (for the live sites, which I
put in my hosts file so as not to cause too much trouble...), and could
access the non-localhost sites with no problems. I tried both setting
a hostname and an IP with the ports - no luck - and had apache2
listening on 127.0.0.7:80 and *:81.
I had a very long look at the article mentioned (and you need the
right keywords to get to it!) but doing both local and distant reverse
proxying wasn't mentioned.
I followed the instructions on that page for one of my attempts (with
both squid and apache listening on 80 but one localhost and one
external) but alas exactly the same results.
I have seen in various places mention of compiling without the internal DNS,
but the vast bulk of the literature is for 2.5 and earlier, and 2.6 seems
pretty different (particularly for HTTP acceleration), so I didn't know
whether this was desirable or necessary. Anyway, I will try a couple
of things with /etc/hosts, and a few other things, but I think it may be due
to some resolution issues.
Thanks for your input,
Anton


Re: [squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-16 Thread Chris Robertson

Anton Melser wrote:


Thanks Chris, I definitely changed the port (the live sites, which I
put in my hosts file so not to cause too much trouble...), and could
access with no problems the non localhost sites. I tried both setting
a hostname and a ip with the ports


Using an IP will be more explicit, and therefore is what I would 
recommend.  Use the hostname for the defaultsite argument to http_port.



- no luck, and had apache2
listening on 127.0.0.7:80 and *.81.
I had a very long look at the article mentioned (and you need the
right keywords to get to it!) but doing both local and distant reverse
proxying wasn't mentioned.


But should just be a matter of putting two of the FAQs ((5 and 6) or (9 
and 6)*) together.


Assuming:
* The external IP of the Squid server is 4.5.6.7.
* Local apache is listening on 127.0.0.7:80 (and possibly *:81) and is 
hosting local.my.domain

* The remote host's IP is 1.2.3.4 and is hosting remote.my.domain
the following should do what you want...

# Define the HTTP port
http_port 4.5.6.7:80 accel vhost defaultsite=local.my.domain
# Specify the local and remote peers
cache_peer 127.0.0.7 parent 80 0 no-query originserver name=local
cache_peer 1.2.3.4 parent 80 0 no-query originserver name=remote
#Define ACLs to direct traffic to the correct servers
# Local
acl sites_local dstdomain local.my.domain
cache_peer_access local allow sites_local
# Remote
acl sites_remote dstdomain remote.my.domain
cache_peer_access remote allow sites_remote
# Make sure that access to your accelerated sites is allowed
acl mysites dstdomain .my.domain
http_access allow mysites
# Deny everything else
http_access deny all


I followed the instructions on that page for one of my attempts (with
both squid and apache listening on 80 but one localhost and one
external) but alas exactly the same results.


A forwarding loop?  That would indicate to me that your cache_peer line 
was not adjusted to reflect the originserver listening on localhost.  No 
forwarding loop, but an access denied?  Check your ACLs in Apache, and 
make sure that localhost can access pages.  Otherwise verify you have 
not uncommented the http_access deny to_localhost line in your 
squid.conf.  It's present and commented by default.



I have seen in various places about compiling without internal dns but
the vast bulk of the literature is for 2.5 and earlier, and 2.6 seems pretty
different (particularly for http acceleration), and I didn't know
whether this was desirable or necessary.


In a forwarding setup, where you are setting your cache_peers by IP, it 
should be mostly* irrelevant.  In a normal proxy setup, you probably 
don't want to disable the internal DNS.



Anyway, I will try a couple
of things with /etc/hosts, and a few things, but I think it may be due
to some resolution issues.


Again, given the setup above (all peers are designated using IP 
addresses) DNS has a negligible effect on an acceleration setup.



Thanks for your input,
Anton


Chris

* If someone surfs to your site by IP, a dstdomain ACL will try a 
reverse DNS lookup.


Re: [squid-users] Really transparent proxy

2007-05-16 Thread Chris Robertson

Facundo Vilarnovo wrote:

Chris,
 
Thanx for your quick answer.
  


You are welcome, but please don't top-post.  It makes referencing 
messages in the archive much more difficult by ruining the flow of a 
conversation.



We´ve also tried that, now that you mencion it, we are still trying a few 
combinations of the following lines.
 
header_access Via deny all / none

header_access X-Forwarded-For deny all / none
via off / on / deny
forwarder_for off / on / deny
  


Defining header_access Via deny all will prevent your Squid from 
passing ANY Via headers.  Also specifying via on (or via off) is 
superfluous.  Same thing for header_access X-Forwarded-For deny all.  
Be sure you have not changed the definition of the all ACL.  An 
earlier post shows it intact.


 
The best result we´ve got is that is not detecting the proxy server..but it is still going out with proxy ips.
  


I maintain, that is an odd result.

 
Some conclusion left we are studying are:
 
-Our squid has only one nic, not two like lots of examples here. (eth0 + gre0)
  


If I'm not mistaken, gre0 is a virtual interface, not a physical one.


-We are using REDIRECT in iptables instead of NAT... has that anything to do 
with that?
  


It might.  Set the header_access denies I suggested, surf to 
http://devel.squid-cache.org/cgi-bin/test with a proxied client and post 
the first three lines of the results (source address, via, and forwarded 
from).



-We are trying transparently (not setting proxy con IE) and forcing 
it...results are the same i guess?
  


This shouldn't make a difference in how a website perceives the 
traffic.  Just in how the browser requests it.


Chris



Re: [squid-users] Custom error pages

2007-05-16 Thread Chris Robertson

Omar M wrote:

Hello everyone:

I have two questions and I was wondering what could I do...

one: Is possible to use a css file with my error pages? I wanna give a
nice output these pages.
  


Yes.


two: Could I load some pics? jpg or png? I wanna use my logo.
  


Yes.


Of course I already tried both things and I couldn't. I can't use my
style.css file :S . And I can see the images even if I use full path.
  


What do you mean by full path?  Using the path on the file system to 
include images and/or CSS is not going to work (nor would it work on a 
web server).  You need to host them on a web server and reference the 
full URL.  Relative links won't work with Squid, though the referenced 
elements will be cached if possible.
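As a sketch (the hostname and file names here are invented), a customized error page along these lines should work, since every asset is referenced by an absolute URL pointing at a separate web server:

```
<html>
<head>
<title>Access Denied</title>
<!-- assets are served from a real web server, not from the proxy's disk -->
<link rel="stylesheet" href="http://intranet.example.com/squid/style.css">
</head>
<body>
<img src="http://intranet.example.com/squid/logo.png" alt="logo">
<p>Sorry, access to this page is not allowed.</p>
</body>
</html>
```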



Any idea would be great...

Regrets.

Omar M.
  


Chris



RE: [squid-users] Really transparent proxy

2007-05-16 Thread Facundo Vilarnovo
Colin,
Thanks a lot for your extensive reply. We were hoping that it would be 
possible to do a magical masquerade. I understand that it is Squid that 
originates the request to the destination web server, but I believed 
it could do some kind of magical spoofing of the source IP address. 
We've got offers for Bluecoat products; they say they have a product that 
can match our requirement... we were hoping that Squid had the same ability.
Here we have a neighboring ISP that runs Squid proxy servers with the 
tproxy patch, and they could hide the Squid IP, so when you do a test with 
any URL the source seems to be the client's IP address. They won't say how 
they do it.
I still believe in magic, so I will keep investigating how we can do it, 
even if it means recoding the TCP/IP suite.

Regards
Facundo Vilarnovo


-----Original Message-----
From: Colin Campbell [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 16, 2007 08:24 PM
To: Facundo Vilarnovo
CC: zulkarnain; squid-users@squid-cache.org
Subject: RE: [squid-users] Really transparent proxy

Hi,

On Wed, 2007-05-16 at 16:54 -0300, Facundo Vilarnovo wrote:
 Zul,
   What variables are you referring to? We test setting up the proxy ip on 
 the IE.
 Pointing to port 3128 using http://www.whatsmyipaddress.com, as a result it 
 says it passes the original source ip address (client's ip), but detects a 
 proxy server. Doing totally transparent with wccp, nothing configured on 
 IE, we get the same results.
 The point is we are still getting the proxy detected. Using variables like 
 via and XFF, the result of using the XFF and via is that passes the client ip 
 address or don't. it's seems to have nothing to do with the problem of the 
 cache being visible or don't.
 
 Via off XFF off = clients source ip it's shown, proxy detected.
 
 Via on XFF on = clients source ip it's not shown (shows proxy ip), proxy not 
 detected.

There seems to be a fundamental misunderstanding here of what a proxy
actually is and how it works.

When you configure a browser to use a proxy, the browser connects to the
proxy and tells it what URL to fetch. The proxy then makes a connection
to the server and retrieves the data. The server sees the proxy address
because that's who made the connection. If you have XFF set, there's an
HTTP header added to the request that states the request was forwarded
on behalf of the listed IP. The end server can access this information
but the connection to the server is still from the proxy ip, not the
client ip.

When you use WCCP, the router grabs the packets and forwards them to
the proxy. The proxy then extracts the information from the packets and
connects to the end server. The end server therefore only sees a
connection from the proxy.

If you use a proxy, be it explicitly by configuring the browser or
transparently using WCCP or any other method (e.g. iptables REDIRECT),
the connection is ALWAYS from the proxy to the server. You can never get
a connection at the server end from the client IP if you use a proxy.
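As an illustration (the addresses and version string here are invented), the request the origin server actually receives from a Squid with Via and XFF enabled looks roughly like this; the client's address appears only inside a header, never as the TCP source address:

```
GET / HTTP/1.0
Host: www.example.com
Via: 1.0 proxy.example.net:3128 (squid/2.6.STABLE12)
X-Forwarded-For: 192.0.2.10
```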

Colin

 
 Tnxs!
 Facundo Vilarnovo
 
   
 
 
 
 
 -----Original Message-----
 From: zulkarnain [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, May 16, 2007 01:43 AM
 To: Facundo Vilarnovo; squid-users@squid-cache.org
 Subject: RE: [squid-users] Really transparent proxy
 
 --- Facundo Vilarnovo [EMAIL PROTECTED] wrote:
  Zul, we already do that... it doesn't chance
  anything :(
  
  I don't remember right now how it was but, in option
  1 via off, forward off, show that I'm BEHIND a
  proxy, but show the client ip address. Option 2:
  Without via and forward doesn't, but shows the squid
  ip address, instead the clients ip, I don't know if
  you understand me :(
  
 
 What proxy variables that excatly said that you are
 behind a proxy server on your testing?
 
 Zul
 
 
 
  
 
 
-- 
Colin Campbell
Unix Support/Postmaster/Hostmaster
Citec
+61 7 3227 6334


Re: [squid-users] Unable to download files over 2GB of size

2007-05-16 Thread Adrian Chadd
On Wed, May 16, 2007, Henrik Nordstrom wrote:
 On Tue 2007-05-15 at 14:34 -0700, Sathyan, Arjonan wrote:
 
  Was there any trace from the files which I have uploaded? Can you please
  tell me why I am not able to download the files which are > 2GB via
  squid using IE 6?
 
 The http_headers only contains a Squid access denied result.
 
 The packet trace only contains a few SSH packets.
 
  Is this a bug in squid...?
 
 Not from what it looks so far. Pretty sure it's an MSIE6 bug.

Can we narrow down the specific bug behaviour? I'll fire it off to
someone in the IE team and see what 'e says.



Adrian



Re: [squid-users] Squid vs caching products like memcached

2007-05-16 Thread Adrian Chadd
On Wed, May 16, 2007, lightbulb432 wrote:
 
 Great answer, thanks!
 
 How does Squid's page caching ability work in terms of caching pages (as
 though they are static) but that were generated dynamically?
 
 For example, Amazon.com's homepage is dynamic but not generated dynamically
 on each request for that page; rather, I assume they set it to be cached
 anytime a request for that page comes in, with some sort of expiration
 policy (e.g. only dynamically generate the homepage once an hour, then serve
 that cached static page for the rest of that hour).
 
 I really hope Squid makes such a configuration possible and easy.

You'd probably be surprised - sites seem happy to assemble their PHP pages
almost every time, and try to use various constructs to cache the data used
to create the page (RSS, XML, SQL, etc.)

Dynamic content page authors need to assemble some behaviours which
are cache-friendly. It's not impossible, it just requires a little smart
thinking whilst designing stuff.

The Squid homepage at the moment is assembled via PHP, but it:

* Assembles a Last-Modified header based on the datestamps of the bits of the
  page (it takes the most recent time of each of the elements and uses that
  as LM - including the page content, the header/footer, and the generation
script.)
* Generates an ETag based on the above Last-Modified header
* Handles If-Modified-Since

I'm sure there's more that can be done - I'll be looking into what else can be
done if/when I dynamically pull in RSS for a news section, for example -
but you have to keep in mind you're trying to improve user experience.

Most sites seem to concentrate on improving user experience through larger
pipes to the internet and employing CDNs. (There's some game to being able
to say you push large amounts of traffic, it seems to pull funding.)
You can also improve user experience by ensuring the relevant bits of your
page are cachable with the right options - and even if that requires
revalidation (a couple RTTs for the TCP setup, then request/reply), you're
still saving on the object transfer time.
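A minimal sketch of that idea (illustrative only, not the actual squid-cache.org script; the file list and the ETag scheme are assumptions):

```python
import email.utils
import os

def cache_headers(paths, if_modified_since=None):
    # Illustrative sketch: Last-Modified is the newest mtime among the
    # component files of the page, the ETag is derived from that same
    # timestamp, and an If-Modified-Since at least as new as the content
    # yields a 304 Not Modified instead of a full 200 response.
    last_mod = max(int(os.path.getmtime(p)) for p in paths)
    headers = {
        "Last-Modified": email.utils.formatdate(last_mod, usegmt=True),
        "ETag": '"%x"' % last_mod,
    }
    status = 200
    if if_modified_since is not None:
        ims = email.utils.parsedate_to_datetime(if_modified_since)
        if last_mod <= int(ims.timestamp()):
            status = 304  # client's cached copy is still current
    return status, headers
```

A downstream cache (like Squid) can then revalidate the page cheaply instead of refetching it.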




Adrian



RE: [squid-users] Really transparent proxy

2007-05-16 Thread Facundo Vilarnovo
Correction:
We did achieve passing the client's IP through Squid, but Squid remains 
visible to pages like whatsmyipaddress.com (the page shows the client's IP 
address, but still detects the proxy).

Regards.
Facundo Vilarnovo




Re: [squid-users] Really transparent proxy

2007-05-16 Thread Adrian Chadd
On Wed, May 16, 2007, Facundo Vilarnovo wrote:
 Colin,
   Thanks a lot for your extensive reply, we were hoping that it would be 
 possible to do a magical masquerade, I understand that the one that origins 
 the request to the destination web server was the squid, but I was believing 
 that it would do some kind of magical spoofing of the source ip address. 
 We've got offers from bluecoat products, they say that they have a product 
 that can match our requirement.. we were hoping that squid have the same 
 ability.
   Here we have an neighbor ISP, that runs squid proxy servers, with 
 tproxy patch, and they could hide the squid ip, so when you do a test 
 with any URL the source seems to be the clients ip address. They don't wanna 
 say how they do it.
   I still believe in magic, so I will still investigate how can we do it, 
 even if it means recode the tcp/ip suite.

Squid has had that ability, starting with Squid-2.6 and TPROXY under Linux,
for close to a year now. You use WCCPv2 to redirect traffic in both
directions and not just in one direction. You set up TPROXY rules to
redirect traffic that the proxy is interested in; if it sees traffic for a
non-established connection it fires it back at the router. It works very
well for one Squid proxy and WCCPv2.

I'm happy to set this all up in my lab at home and test it out but
paid work takes precedence over fun (which this, for the most part,
is.)

Tell you what: if people would like to see full documentation, kernel
packages and such for a fully transparent Squid setup with WCCPv2, then how
about making some small donations to the Squid project?
If I see enough donations coming in I'll spend a weekend setting this
up in the lab, building a fully transparent environment with Linux,
TPROXY, Squid-2.6, WCCPv2 and some non-official patches to make things
even 'more' transparent, and put it all up on the website.

(ObNote: if the people who left Squid and went commercial would only come
talk to us first, they might find we'd suddenly have the resources to make
Squid a -whole- lot faster, more flexible and easier to use, and they'd save
$100k+ per proxy. Hm, guess it's not too late to do some marketing electives
at university next semester..)




Adrian



Re: [squid-users] Unable to download files over 2GB of size

2007-05-16 Thread Chris Nighswonger

On 5/16/07, Adrian Chadd [EMAIL PROTECTED] wrote:

On Wed, May 16, 2007, Henrik Nordstrom wrote:
 On Tue 2007-05-15 at 14:34 -0700, Sathyan, Arjonan wrote:

  Was there any trace from the files which I have uploaded? Can you please
  tell me why I am not able to download the files which are > 2GB via
  squid using IE 6?

 The http_headers only contains a Squid access denied result.

 The packet trace only contains a few SSH packets.

  Is this a bug in squid...?

 Not from what it looks so far. Pretty sure it's an MSIE6 bug.

Can we narrow down the specific bug behaviour? I'll fire it off to
someone in the IE team and see what 'e says.


When IE6 is set up to use the proxy (Squid), and the aforementioned file is
downloaded, a download window opens and the progress indicator zips to
100% in the first second, after which IE announces that the download is
complete. What the user really has is a truncated file whose size is roughly
one second's worth of their effective internet bandwidth.

When IE6 is set up *not* to use the proxy, and the same file is downloaded,
the behaviour is as expected and the resulting file is the correct
size.

Two tcpdumps have been submitted.

Let me know if you need more specific information, and I will provide it.

Chris