FW: [squid-users] Peering squid multiple instances.

2010-03-24 Thread GIGO .




> From: gi...@msn.com
> To: squ...@treenet.co.nz
> Subject: RE: [squid-users] Peering squid multiple instances.
> Date: Wed, 24 Mar 2010 07:12:15 +
>
>
> Dear Amos,
>
> Thank you for your response and the better design tips. However, I am not able to
> comprehend them well (due to my current lack of both experience and knowledge), so
> I request you to elaborate a bit more. Your guidance would be really valuable.
>
> Question 1:
>
> You said that under my configuration this is the case:
>
> Client -> squidinstance1 -> squidinstance2 -> (web servers)
>
> or
>
> client -> squidinstance2 -> webserver
>
> Well, I fail to understand how clients can talk to squidinstance2 directly
> when:
>
> 1. squidinstance2 is configured with an acl to accept traffic from localhost 
> only.
> 2. The squid clients (browsers) are configured to use port 8080 of the
> first instance, and this is the only traffic accepted through iptables as
> well.
>
> According to my understanding, isn't this the case:
>
> client ->squidinstance1 -> webserver
> client ->squidinstance1 -> squidinstance2 -> webserver
>
> Please guide me in this respect.
>
>
> Question 2:
>
> I have created multiple instances to run on the same machine because my
> server has three hard drives. The OS is on physical RAID1; the cache
> directory is on the third hard drive (comprising 80% of total space). This
> setup is done because I wanted to survive a cache directory failure: even if
> all the drives holding cache directories fail, my clients will still be able
> to browse the internet through the proxy-only instance, until the disk
> system holding the OS fails. I am not sure whether this approach is correct,
> but it is what I have learnt in these days through the available FAQs and,
> of course, guidance from the squid mailing list. Please guide me on this.
>
>
> Question 3:
>
>
> What is meant by "parent is the peering method for origin web servers"? You
> also wrote that, by reason of the "parent" selection, it does not matter
> which protocol you are using. Please guide me.
>
>
>
> Question 4:
>
> I interpret you to mean that two instances running on the same machine
> should have a sibling-type relationship, configured identically, with the
> digest protocol between them. That means I should run two instances pointing
> to different cache directories on my third hard drive and, instead of one
> big 50 GB cache, give each, say, 25 GB of space. ((Doesn't holding two cache
> directories on the same hard drive degrade performance? So is this only
> sensible when I have multiple drives for holding caches?)) Both would be
> permitted to cache data from origin servers; however, on a cache miss, each
> would first check the sibling before going to the origin server. Am I
> correct in understanding you?
>
>
> You further said something about failover which, I am sorry, I failed to
> understand at this point due to my current skill/competency. However, I am
> eager to learn and determined to work hard; your detailed response will be
> really valuable to me (I have just started a couple of weeks back). Is the
> following setup for failover of a whole squid proxy server, or failover of
> squid processes?
>
>> * a cache_peer "parent" type to the web server. With "originserver"
>> and "default" selection enabled.
>> This topology utilizes a single layer of multiple proxies. Possibly with
>> hardware load balancing in iptables etc sending alternate requests to
>> each of the two proxies listening ports.
>> Useful for small-medium businesses requiring scale with minimal
>> hardware. Probably their own existing load balancers already purchased
>> from earlier attempts. IIRC the benchmark for this is somewhere around
>> 600-700 req/sec.
>>
>> The next step up in performance and HA is to have an additional layer of
>> Squid acting as the load-balancer doing CARP to reduce cache duplication
>> and remove sibling data transfers. This form of scaling out is how
>> WikiMedia serve their sites up.
>> It is documented somewhat in the wiki as ExtremeCarpFrontend. With a
>> benchmark so far for a single box reaching 990 req/sec.
>>
>> These maximum speed benchmarks are only achievable by reverse-proxy
>> people. Regular ISP setups can expect their maximum to be somewhere
>> below 1/2 or 1/3 of that rate due to the content diversity and RTT lag
>> of remote servers. (well that part i understood)
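
For reference, the CARP frontend mentioned above boils down to a couple of
cache_peer lines on the load-balancing Squid. A minimal sketch with
hypothetical backend addresses (the ExtremeCarpFrontend wiki page has the
full recipe):

# frontend squid.conf: hash requests across two caching backends
cache_peer 10.0.0.1 parent 3128 0 carp proxy-only no-query no-digest
cache_peer 10.0.0.2 parent 3128 0 carp proxy-only no-query no-digest
# the frontend holds no cache of its own, so nothing is duplicated
cache deny all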
>
> Question 5:
>
> Can you please suggest some good reading for building knowledge and
> concepts? I have got hold of the Squid Definitive Guide; though a very good
> one, isn't it a bit outdated? Can you recommend something, please?
> Especially on the topic of authenticating Active Directory users in squid
> proxy.
>
>
>
>
>
>
>
>
> 
>> Date: Wed, 24 Mar 2010 18:06:46 +1300
>> From: squ...@treenet.co.nz
>> To: squid-users@squid-cache.org
>> Subject: Re: [squid-users] Peering squid 

Re: [squid-users] Peering squid multiple instances.

2010-03-24 Thread Amos Jeffries

GIGO . wrote:

Dear Amos,
 
Thank you for your response and the better design tips. However, I am not able to comprehend them well (due to my current lack of both experience and knowledge), so I request you to elaborate a bit more. Your guidance would be really valuable.
 
Question 1:
 
You said that under my configuration this is the case:
 
Client -> squidinstance1 -> squidinstance2 -> (web servers)
 
or 
 
client -> squidinstance2 -> webserver
 
Well, I fail to understand how clients can talk to squidinstance2 directly when:
 
1. squidinstance2 is configured with an acl to accept traffic from localhost only.


I did not see any http_access lines in your displayed config. It seems I 
assumed some things wrongly, and also mixed your questions up with someone 
else's similar questions.


What you posted was a good setup for failover if a normal caching proxy 
(squid2) dies: a non-caching front instance (squid1) that prefers fetching 
from the cache, with a direct, non-caching route as a backup.


In this case the "parent" type was correct, and with only two Squids the 
ICP/HTCP/digest selection methods should be avoided.


(The bits I said that lead to your Q2-4 were intended for that other 
setup. Very sorry.)
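
For clarity, the failover pattern being described comes down to this on the
front (non-caching) instance; a sketch reusing the ports from the configs
quoted later in this thread:

# squid1, the instance clients talk to on port 8080
cache_peer 127.0.0.1 parent 3128 0 default no-query no-digest proxy-only
prefer_direct off   # try the caching parent (squid2) first
cache deny all      # never cache here; DIRECT becomes the automatic backup if squid2 dies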




Question 5:
 
Can you please suggest some good reading for building knowledge and concepts? I have got hold of the Squid Definitive Guide; though a very good one, isn't it a bit outdated? Can you recommend something, please? Especially on the topic of authenticating Active Directory users in squid proxy.
 


The wiki is where we point people. It started as a copy of the definitive 
guide and the older FAQ guide. Then we tried to improve it, extend it and 
update things for the currently supported Squid releases.


Hopefully it's easy enough to read and learn from. Suggestions for 
improvement are always welcome.






Date: Wed, 24 Mar 2010 18:06:46 +1300
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Peering squid multiple instances.

GIGO . wrote:

I have successfully set up multiple instances of squid for the sake of 
surviving a cache directory failure. However, I still have a few confusions 
about peering multiple instances of squid. Please guide me in this respect.


In my setup, I understand that my second instance is doing the caching on 
behalf of requests sent to instance 1. Am I correct?


You are right in your understanding of what you have configured. I've
some suggestions below on a better topology though.



Which protocol should be selected for peers in this scenario? What is the 
recommendation (carp, digest, or icp/htcp)?


Under your current config there is no selection, ALL requests go through
both peers.

Client -> Squid1 -> Squid2 -> WebServer

or

Client -> Squid2 -> WebServer

thus Squid2 and WebServer are both bottleneck points.



Is the syntax of my cache_peer directive correct, or should the local 
loopback address not be used this way?


Syntax is correct.
Use of localhost does not matter. It's a useful choice for providing
some security and extra speed to the inter-proxy traffic.



What is the recommended protocol for peering squids with each other?


It does not matter for your existing config, by reason of the "parent"
selection.



What is the recommended protocol for peering squid with ISA Server?


"parent" is the peering method for origin web servers. With
"originserver" selection method.


Instance 1:

visible_hostname vSquidlhr
unique_hostname vSquidMain
pid_filename /var/run/squid3main.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/access.log
cache_log /var/logs/cache.log

cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query proxy-only 
no-delay
prefer_direct off
cache_dir aufs /var/spool/squid3 100 256 16
coredump_dir /var/spool/squid3
cache deny all



Instance 2:

visible_hostname SquidProxylhr
unique_hostname squidcacheprocess
pid_filename /var/run/squid3cache.pid
http_port 3128
icp_port 0
snmp_port 7172
access_log /var/logs/access2.log
cache_log /var/logs/cache2.log


coredump_dir /cache01/var/spool/squid3
cache_dir aufs /cache01/var/spool/squid3 5 48 768
cache_swap_low 75
cache_mem 1000 MB
range_offset_limit -1
maximum_object_size 4096 MB
minimum_object_size 12 bytes
quick_abort_min -1





Amos

--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] The requested URL could not be retrieved TCP_MISS/502

2010-03-24 Thread Umesh Bodalina
Could this mean that there is a problem with the web site or their network?
Or is it some kind of configuration issue on our squid proxy or our network?

Regards
Umesh


On 23 March 2010 16:20, Zeller, Jan  wrote:
> hmm seems not to work properly :
>
> behind proxy :
> $ httping -g http://www.bitlifesciences.com/wcvi2010 -c 3 mysquidproxy:80
> PING www.bitlifesciences.com:80 (http://www.bitlifesciences.com/wcvi2010):
> timeout connecting to host
> timeout connecting to host
> timeout connecting to host
> --- http://www.bitlifesciences.com/wcvi2010 ping statistics ---
> 3 connects, 0 ok, 100.00% failed
>
> without proxy
> ./squidclient -v -h localhost -p 80 http://www.bitlifesciences.com/wcvi2010
> headers: 'GET http://www.bitlifesciences.com/wcvi2010 HTTP/1.0
> Accept: */*
>
> '
> HTTP/1.0 502 Bad Gateway
> Server: squid
> Mime-Version: 1.0
> Date: Tue, 23 Mar 2010 14:20:17 GMT
> Content-Type: text/html
> Content-Length: 1493
> X-Squid-Error: ERR_READ_ERROR 104
> X-Cache: MISS from mysquidproxy
> Via: 1.0 mysquidproxy (squid)
> Proxy-Connection: close
>
> .
> .
> .
>
> The following error was encountered:
>
> Read Error
>
> .
> .
> ,
>
> ./squid -v
> Squid Cache: Version 3.0.STABLE23
> configure options:  '--prefix=/opt/squid-3.0.STABLE23' '--enable-icap-client' 
> '--enable-ssl' '--enable-default-err-language=English' 
> '--enable-err-languages=English' '--enable-linux-netfilter' '--with-pthreads' 
> '--with-filedescriptors=32768'
>
>
> regards,
>
> Jan
>
>
>
> 
> From: Ralf Hildebrandt [ralf.hildebra...@charite.de]
> Sent: Tuesday, 23 March 2010 14:45
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] The requested URL could not be retrieved 
> TCP_MISS/502
>
> * Umesh Bodalina :
>> Hi Squid
>> I'm getting the following error when I try to access the following
>> site through Squid:
>
> Works for me with:
> Squid 2.7.STABLE8-1 from Debian
>
>
>
>


[squid-users] Not accessing privoxy parent

2010-03-24 Thread Gerard Earley

Hi all
I'm trying to set up squid as an ad-blocking proxy by using privoxy 
as a parent.

Unfortunately squid doesn't seem to want to use the parent privoxy.
The authentication works; it's simply accessing the parent that's not 
doing what it should.


Any help would be appreciated.

via off
forwarded_for off
http_port 3128
access_log /var/log/squid/access.log squid
cache_peer 127.0.0.1 parent 8118 7 no-query no-digest
#acl office_network src xx.xx.xx.xx
#http_access allow office_network
visible_hostname xx.com
auth_param basic program /usr/lib/squid/ncsa_auth 
/etc/squid/squid_passwords.txt

acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users

The Squid installed is compiled from squid-3.1.0.15-2.src.rpm with no errors

Pardon my ignorance but this is my first use of squid.

Many thanks.


[squid-users] Issues with Radius,Squid3, 64 Bit

2010-03-24 Thread mickymax
Hi,

I am using Squid3S25 on Suse SLES 10, 64 bit, squid_radius_auth-1.10.

When I try squid_radius_auth with a user bob with password secret with the 
command

squid_radius_auth -h x.x.x.x -w shared_secret1 

I can see that the password "secret" of a user seems to be garbage:
[files] users: Matched entry bob at line 51
++[files] returns ok
++[expiration] returns noop
++[logintime] returns noop
++[pap] returns updated
Found Auth-Type = PAP
+- entering group PAP {...}
[pap] login attempt with password "��?{bDl8?+EbI��Y"

This only happens with my Squid 64 Bit system. If testing this on a 32 bit 
system, it works.

Are there known issues with squid_radius_auth and 64 bit?

Thanks in advance and regards,
Micky



Re: [squid-users] Issues with Radius,Squid3, 64 Bit

2010-03-24 Thread Amos Jeffries

micky...@gmx.de wrote:

Hi,

I am using Squid3S25 on Suse SLES 10, 64 bit, squid_radius_auth-1.10.

When I try squid_radius_auth with a user bob with password secret with the 
command

squid_radius_auth -h x.x.x.x -w shared_secret1 


I can see that the password "secret" of a user seems to be garbage:
[files] users: Matched entry bob at line 51
++[files] returns ok
++[expiration] returns noop
++[logintime] returns noop
++[pap] returns updated
Found Auth-Type = PAP
+- entering group PAP {...}
[pap] login attempt with password "��?{bDl8?+EbI��Y"

This only happens with my Squid 64 Bit system. If testing this on a 32 bit 
system, it works.

Are there known issues with squid_radius_auth and 64 bit?



Quite possibly. The RADIUS helper for Squid was written to an old spec 
before the days of 64-bit. It needs a fair bit of loving added to bring 
it up to modern RADIUS RFC standards and more efficient capabilities.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Issues with Radius,Squid3, 64 Bit

2010-03-24 Thread mickymax
Thx for the quick reply.

Do you know if there is a timeline for adjusting the RADIUS module for 
squid/64? Or is there no priority for this?

Micky

 Original Message 
> Date: Thu, 25 Mar 2010 01:09:34 +1300
> From: Amos Jeffries 
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Issues with Radius,Squid3, 64 Bit

> micky...@gmx.de wrote:
> > Hi,
> > 
> > I am using Squid3S25 on Suse SLES 10, 64 bit, squid_radius_auth-1.10.
> > 
> > When I try squid_radius_auth with a user bob with password secret with
> the command
> > 
> > squid_radius_auth -h x.x.x.x -w shared_secret1 
> > 
> > I can see that the password "secret" of a user seems to be garbage:
> > [files] users: Matched entry bob at line 51
> > ++[files] returns ok
> > ++[expiration] returns noop
> > ++[logintime] returns noop
> > ++[pap] returns updated
> > Found Auth-Type = PAP
> > +- entering group PAP {...}
> > [pap] login attempt with password "��?{bDl8?+EbI��Y"
> > 
> > This only happens with my Squid 64 Bit system. If testing this on a 32
> bit system, it works.
> > 
> > Are there known issues with squid_radius_auth and 64 bit?
> > 
> 
> Quite possibly. The RADIUS helper for Squid was written to an old spec 
> before the days of 64-bit. It needs a fair bit of loving added to bring 
> it up to modern RADIUS RFC standards and more efficient capabilities.
> 
> Amos
> -- 
> Please be using
>Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>Current Beta Squid 3.1.0.18



Re: [squid-users] squid 3.0.19 + transparent + sslbump

2010-03-24 Thread Leonardo Carneiro - Veltrac


Amos Jeffries wrote:

Some factums worth knowing:

 * 3.0 does not support sslBump or any other form of HTTPS 
man-in-middle attacks. 3.1 is required for that.


 * sslBump in 3.1 requires that the client machines all have a CA 
certificate installed to make them trust the proxy for decryption.


 * sslBump requires clients to be configured for using the proxy. 
(Some of the 'transparent' above work this way some do not.)


Amos
Hi Amos. What is the advantage of using sslBump if I cannot use a 
transparent proxy with it? Is it the ability to cache SSL content?

Tks in advance.


Re: [squid-users] Issues with Radius,Squid3, 64 Bit

2010-03-24 Thread Amos Jeffries

micky...@gmx.de wrote:

Thx for the quick reply.

Do you know if there is a timeline for adjusting the RADIUS module for 
squid/64? Or is there no priority for this?



There are no plans for RADIUS in Squid.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] squid 3.0.19 + transparent + sslbump

2010-03-24 Thread Amos Jeffries

Leonardo Carneiro - Veltrac wrote:


Amos Jeffries wrote:

Some factums worth knowing:

 * 3.0 does not support sslBump or any other form of HTTPS 
man-in-middle attacks. 3.1 is required for that.


 * sslBump in 3.1 requires that the client machines all have a CA 
certificate installed to make them trust the proxy for decryption.


 * sslBump requires clients to be configured for using the proxy. 
(Some of the 'transparent' above work this way some do not.)


Amos
Hi Amos. What is the advantage of using sslBump if I cannot use a 
transparent proxy with it? Is it the ability to cache SSL content?

Tks in advance.


Somewhat. Mostly for corporate networks AV scanning or filtering HTTPS 
connections.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18



[squid-users] TCP_HIT/504 on fetch after UDP_HIT

2010-03-24 Thread Taylan Develioglu
Hi,

I'm trying to set up two squid siblings in front of lighttpd as reverse
proxies and have a question about some behavior I'm seeing.

Squid versions are 2.7.STABLE7-1~bpo50+1 from the debian backports
repository.

My goal is to create a setup with two cache siblings and one origin
server (lighttpd) where no duplicate cached entries between siblings
exist.

I am getting these TCP_HIT/504's and I can't understand why they are
happening.

When I do a refresh in firefox and request a picture from sibling1,
requested before on sibling2, I see the following:

sibling1 is 5.5.5.5

- The original request on sibling 1:

1269431848.626  4 1.1.1.1 TCP_MISS/304 378 GET
http://pictures.something.com/pics/d/3/c/picture.jpg -
DEFAULT_PARENT/pict-dev image/jpeg

- A UDP_HIT occurs on sibling2; sibling1 tries to fetch the file but
receives a 504 instead (excerpt below).

269431848.881  0  5.5.5.5 UDP_HIT/000 76 ICP_QUERY
http://pictures.something.com/pics/d/3/c/picture.jpg - NONE/- -
1269431848.883  1 5.5.5.5 TCP_HIT/504 1752 GET
http://pictures.something.com/pics/d/3/c/picture.jpg - NONE/- text/html

start

HTTP/1.0 504 Gateway Time-out
Expires: Wed, 24 Mar 2010 09:55:19 GMT
X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
Age: 1269424520
Warning: 113 localhost (squid/3.0.STABLE8) This cache hit is still fresh
and more than 1 day old
X-Cache: HIT from localhost
X-Cache-Lookup: HIT from localhost:80

The requested URL could not be retrieved
Valid document was not found in the cache and 'only-if-cached'
directive was specified.

You have issued a request with a 'only-if-cached' cache control
directive. The document was not found in the cache, or it required
revalidation prohibited by 'only-if-cached' directive.

end

Why does sibling2 respond with a UDP_HIT, but then send a 504 error page
to sibling1 when sibling1 tries to fetch the picture?

Is this normal behavior or am I doing something wrong here? Please see
my squid.conf below, suggestions are very much appreciated and thanks in
advance.

acl all src all 
acl manager proto cache_object  
acl localhost src 127.0.0.1/32  
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  
acl Safe_ports port 80   #http  
acl Safe_ports port 21   #ftp   
acl Safe_ports port 443  #https 
acl Safe_ports port 70   #gopher
acl Safe_ports port 210  #wais  
acl Safe_ports port 1025-65535   #unregistered ports
acl Safe_ports port 280  #http-mgmt 
acl Safe_ports port 488  #gss-http  
acl Safe_ports port 591  #filemaker 
acl Safe_ports port 777  #multiling http
acl CONNECT method CONNECT  

acl sites dstdomain pictures.something.com
acl siblings src 6.6.6.6

cache_peer 4.4.4.4 parent 80 0 default no-query no-digest originserver
name=lighttpd-server login=PASS
cache_peer 6.6.6.6 sibling 80 9832 proxy-only
name=sibling2  

cache_peer_access lighttpd-server allow sites
cache_peer_access sibling2 allow sites

http_access allow sites
http_access allow siblings
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all

icp_access allow siblings
icp_access deny all

htcp_access deny all

miss_access deny siblings

http_port 80 act-as-origin accel vhost
icp_port 9832

access_log /var/log/squid/access.log
debug_options ALL,3
cache_mgr cach...@domain.com

cache_dir aufs /var/spool/squid 4000 32 128

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320




Re: [squid-users] squid 3.0.19 + transparent + sslbump

2010-03-24 Thread Stefan Reible

Zitat von Amos Jeffries :


Leonardo Carneiro - Veltrac wrote:


Amos Jeffries wrote:

Some factums worth knowing:

* 3.0 does not support sslBump or any other form of HTTPS  
man-in-middle attacks. 3.1 is required for that.


* sslBump in 3.1 requires that the client machines all have a CA  
certificate installed to make them trust the proxy for decryption.


* sslBump requires clients to be configured for using the proxy.  
(Some of the 'transparent' above work this way some do not.)


Amos
Hi Amos. What is the advantage of using sslBump if I cannot use a 
transparent proxy with it? Is it the ability to cache SSL content?

Tks in advance.


Somewhat. Mostly for corporate networks AV scanning or filtering  
HTTPS connections.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18



Transparent https is working with squid 3.1.0.15_beta-r1.
By transparent I mean that the browser requests will be routed to squid 
without any client configuration.


iptables:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.1:3129


squid.conf:
http_port 127.0.0.1:3128
http_port 192.9.200.32:3128 transparent
https_port 192.9.200.32:3129 transparent sslBump cert=/etc/squid/ssl_cert/proxy.testdomain.deCert.pem key=/etc/squid/ssl_cert/private/proxy.testdomain.deKey_without_Pp.pem


The only problem I have is that the browser gives warnings, because the 
certificate doesn't match the domain!

Can I get other problems with cookies or something else?

Can I run this squid version in a production environment?

Now I will test it for some hours..

Regards,
Stefan





Re: [squid-users] TCP_HIT/504 on fetch after UDP_HIT

2010-03-24 Thread Taylan Develioglu
I know it's bad form to reply to your own post, but I found a partial
explanation.

from
http://linuxdevcenter.com/pub/a/linux/2001/09/17/squidpeering.html?page=2

FALSE HITS: (ICP only) Because ICP does not communicate request headers
(only the URI is presented in an ICP query), it is possible for a peer
to return an affirmative for a given URI but not be able to satisfy the
request from cache.


  * cache1 sends an ICP query to cache2 for
http://www.example.org/index.html.

  * cache2 has a cached copy of the object (87,376 seconds old), and
answers in the affirmative.

  * cache1 then issues the request to cache2, but the request
headers contain "Max-Age: 86400". cache2's copy is too old to
satisfy this request.

  * If cache1 has miss_access on cache2, then cache2 will go forward
to the origin server (or a parent) and fetch a new copy.
If not, cache2 will return a 504 HTTP response and cache1 will
have to select a new source for the object.

But why is it then recommended that miss_access be disabled for
siblings? Would it be a bad thing for cache2 to fetch a new copy ?

As far as I can see cache1 doesn't cache the object retrieved from the
new source after the 504.

On Wed, 2010-03-24 at 13:40 +0100, Taylan Develioglu wrote:
> Hi,
> 
> I'm trying to set up two squid siblings in front of lighttpd as reverse
> proxies and have a question about some behavior I'm seeing.
> 
> Squid versions are 2.7.STABLE7-1~bpo50+1 from the debian backports
> repository.
> 
> My goal is to create a setup with two cache siblings and one origin
> server (lighttpd) where no duplicate cached entries between siblings
> exist.
> 
> I am getting these TCP_HIT/504's and I can't understand why they are
> happening.
> 
> When I do a refresh in firefox and request a picture from sibling1,
> requested before on sibling2, I see the following:
> 
> sibling1 is 5.5.5.5
> 
> - The original request on sibling 1:
> 
> 1269431848.626  4 1.1.1.1 TCP_MISS/304 378 GET
> http://pictures.something.com/pics/d/3/c/picture.jpg -
> DEFAULT_PARENT/pict-dev image/jpeg
> 
> - A UDP_HIT occurs on sibling2, sibling1 tries to fetch the file but it
> received a 504 instead (excerpt below).
> 
> 269431848.881  0  5.5.5.5 UDP_HIT/000 76 ICP_QUERY
> http://pictures.something.com/pics/d/3/c/picture.jpg - NONE/- -
> 1269431848.883  1 5.5.5.5 TCP_HIT/504 1752 GET
> http://pictures.something.com/pics/d/3/c/picture.jpg - NONE/- text/html
> 
> start
> 
> HTTP/1.0 504 Gateway Time-out
> Expires: Wed, 24 Mar 2010 09:55:19 GMT
> X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
> Age: 1269424520
> Warning: 113 localhost (squid/3.0.STABLE8) This cache hit is still fresh
> and more than 1 day old
> X-Cache: HIT from localhost
> X-Cache-Lookup: HIT from localhost:80
> 
> The requested URL could not be retrieved
> Valid document was not found in the cache and 'only-if-cached'
> directive was specified.
> 
> You have issued a request with a 'only-if-cached' cache control
> directive. The document was not found in the cache, or it required
> revalidation prohibited by 'only-if-cached' directive.
> 
> end
> 
> Why does sibling2 respond with a UDP_HIT but then sends a 504 error page
> to sibling1 when sibling1 tries to fetch the picture?
> 
> Is this normal behavior or am I doing something wrong here? Please see
> my squid.conf below, suggestions are very much appreciated and thanks in
> advance.
> 
> acl all src all 
> acl manager proto cache_object  
> acl localhost src 127.0.0.1/32  
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443  
> acl Safe_ports port 80   #http  
> acl Safe_ports port 21   #ftp   
> acl Safe_ports port 443  #https 
> acl Safe_ports port 70   #gopher
> acl Safe_ports port 210  #wais  
> acl Safe_ports port 1025-65535   #unregistered ports
> acl Safe_ports port 280  #http-mgmt 
> acl Safe_ports port 488  #gss-http  
> acl Safe_ports port 591  #filemaker 
> acl Safe_ports port 777  #multiling http
> acl CONNECT method CONNECT  
> 
> acl sites dstdomain pictures.something.com
> acl siblings src 6.6.6.6
> 
> cache_peer 4.4.4.4 parent 80 0 default no-query no-digest originserver
> name=lighttpd-server login=PASS
> cache_peer 6.6.6.6 sibling 80 9832 proxy-only
> name=sibling2  
> 
> cache_peer_access lighttpd-server allow sites
> cache_peer_access sibling2 allow sites
> 
> http_access allow sites
> http_access allow siblings
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_acces

[squid-users] Allowing ports used by Squid through Iptables.

2010-03-24 Thread GIGO .

I want to do security hardening of my squid server with iptables. I intend 
to have no rules on outbound traffic; inbound traffic, however, would be 
restricted. Please guide me on the minimum ports that are required to be 
open in iptables.
 
 
Following is what i thought:
 
Allow all incoming traffic from the loopback adapter
Allow incoming SSH traffic
Allow ports 80, 443, 161 and 389 (389 as I intend to authenticate my 
clients against Active Directory)
Allow the squid-specific http_port (I am using 8080)
Allow the snmp port according to the defined directive (mine are 3161 & 7172)
Deny all other incoming traffic
Is there any other port I am perhaps not accounting for?
 
Please guide me.
 
thanks
 
Regards,
 
  

Re: [squid-users] TPROXY and DansGuardian

2010-03-24 Thread Jason Healy
On Mar 24, 2010, at 1:37 AM, Amos Jeffries wrote:

> From what I understand of your requirements you don't actually need DG or 
> anything but Squid alone. Squid can log in any format you choose to 
> configure. If there is anything it does not yet log we'd be interested in 
> hearing about that.

DG will do content-based filtering (check the HTML for naughty words), which is 
of interest to us.  Otherwise, you're correct in that we could just log all 
accesses and run a URL analyzer to see if people are going somewhere they 
shouldn't.

Jason

--
Jason Healy|jhe...@logn.net|   http://www.logn.net/






[squid-users] Map Single URL to Multiple Store urls

2010-03-24 Thread Ken Struys
Is there any way to map single URLs to multiple store URLs based on a cookie?

Let's say I have a user cookie and I want to implement caching for logged-in 
users.

Is there any way in squid to append the cookie to the cached URL (in squid, 
not in the client-side URL)?
 
I've looked at storeurl_rewrite_program, but it doesn't receive cookie 
information (or anything else useful) as input.


Re: [squid-users] TCP_HIT/504 on fetch after UDP_HIT

2010-03-24 Thread Taylan Develioglu
I switched to htcp, but I'm still getting false hits (504).
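
One peer option that may be relevant (a suggestion from the cache_peer
documentation, not something raised in this thread): "allow-miss" stops
Squid from adding only-if-cached to requests forwarded to a sibling, so a
false hit is fetched as a miss instead of bouncing back as the 504 above.
A sketch against the posted config:

cache_peer 6.6.6.6 sibling 80 9832 proxy-only allow-miss name=sibling2

The sibling's miss_access would then also have to permit the fetch (the
posted configs deny it), at the cost of occasional duplicate fetching.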

On Wed, 2010-03-24 at 15:10 +0100, Taylan Develioglu wrote:
> I know it's bad form to reply to your own post, but I found a partial
> explanation.
> 
> from
> http://linuxdevcenter.com/pub/a/linux/2001/09/17/squidpeering.html?page=2
> 
> FALSE HITS: (ICP only) Because ICP does not communicate request headers
> (only the URI is presented in an ICP query), it is possible for a peer
> to return an affirmative for a given URI but not be able to satisfy the
> request from cache.
> 
> 
>   * cache1 sends an ICP query to cache2 for
> http://www.example.org/index.html.
> 
>   * cache2 has a cached copy of the object (87,376 seconds old), and
> answers in the affirmative.
> 
>   * cache1 then issues the request to cache2, but the request
> headers contain "Max-Age: 86400"). cache2's copy is too old to
> satisfy this request.
> 
>   * If cache1 has miss_access on cache2, then cache2 will go forward
> to the origin server (or a parent) and fetch a new copy, 
> If not, cache2 will return a 504 HTTP response and cache1 will
> have to select a new source for the object.
> 
> But why is it then recommended that miss_access be disabled for
> siblings? Would it be a bad thing for cache2 to fetch a new copy ?
> 
> As far as I can see cache1 doesn't cache the object retrieved from the
> new source after the 504.
> 
> On Wed, 2010-03-24 at 13:40 +0100, Taylan Develioglu wrote:
> > Hi,
> > 
> > I'm trying to set up two squid siblings in front of lighttpd as reverse
> > proxies and have a question about some behavior I'm seeing.
> > 
> > Squid versions are 2.7.STABLE7-1~bpo50+1 from the debian backports
> > repository.
> > 
> > My goal is to create a setup with two cache siblings and one origin
> > server (lighttpd) where no duplicate cached entries between siblings
> > exist.
> > 
> > I am getting these TCP_HIT/504's and I can't understand why they are
> > happening.
> > 
> > When I do a refresh in firefox and request a picture from sibling1,
> > requested before on sibling2, I see the following:
> > 
> > sibling1 is 5.5.5.5
> > 
> > - The original request on sibling 1:
> > 
> > 1269431848.626  4 1.1.1.1 TCP_MISS/304 378 GET
> > http://pictures.something.com/pics/d/3/c/picture.jpg -
> > DEFAULT_PARENT/pict-dev image/jpeg
> > 
> > - A UDP_HIT occurs on sibling2, sibling1 tries to fetch the file but it
> > received a 504 instead (excerpt below).
> > 
> > 269431848.881  0  5.5.5.5 UDP_HIT/000 76 ICP_QUERY
> > http://pictures.something.com/pics/d/3/c/picture.jpg - NONE/- -
> > 1269431848.883  1 5.5.5.5 TCP_HIT/504 1752 GET
> > http://pictures.something.com/pics/d/3/c/picture.jpg - NONE/- text/html
> > 
> > start
> > 
> > HTTP/1.0 504 Gateway Time-out
> > Expires: Wed, 24 Mar 2010 09:55:19 GMT
> > X-Squid-Error: ERR_ONLY_IF_CACHED_MISS 0
> > Age: 1269424520
> > Warning: 113 localhost (squid/3.0.STABLE8) This cache hit is still fresh
> > and more than 1 day old
> > X-Cache: HIT from localhost
> > X-Cache-Lookup: HIT from localhost:80
> > 
> > The requested URL could not be retrieved
> > Valid document was not found in the cache and 'only-if-cached'
> > directive was specified.
> > 
> > You have issued a request with a 'only-if-cached' cache control
> > directive. The document was not found in the cache, or it required
> > revalidation prohibited by 'only-if-cached' directive.
> > 
> > end
> > 
> > Why does sibling2 respond with a UDP_HIT but then sends a 504 error page
> > to sibling1 when sibling1 tries to fetch the picture?
> > 
> > Is this normal behavior or am I doing something wrong here? Please see
> > my squid.conf below, suggestions are very much appreciated and thanks in
> > advance.
> > 
> > acl all src all 
> > acl manager proto cache_object  
> > acl localhost src 127.0.0.1/32  
> > acl to_localhost dst 127.0.0.0/8
> > acl SSL_ports port 443  
> > acl Safe_ports port 80   #http  
> > acl Safe_ports port 21   #ftp   
> > acl Safe_ports port 443  #https 
> > acl Safe_ports port 70   #gopher
> > acl Safe_ports port 210  #wais  
> > acl Safe_ports port 1025-65535   #unregistered ports
> > acl Safe_ports port 280  #http-mgmt 
> > acl Safe_ports port 488  #gss-http  
> > acl Safe_ports port 591  #filemaker 
> > acl Safe_ports port 777  #multiling http
> > acl CONNECT method CONNECT  
> > 
> > acl sites dstdomain pictures.something.com
> > acl siblings src 6.6.6.6
> > 
> > cache_peer 4.4.4.4 parent 80 0 default no-query no-digest originserver
> > name=lighttpd-server login=PASS
> > cache_peer 6.6.6.6 sibling 80 9832 proxy-only
> > name=sibling2

[squid-users] HTCP for consistent caches for reverse proxies

2010-03-24 Thread Georg Höllrigl

Hello,

Is there a way to get two squid caches used as reverse proxies to have a 
consistent cache?

An example would be a file that contains "abcd": I request the file, get 
balanced to squid1, which caches the file due to the expire header for one 
hour. Then the file gets changed to contain "abcde". The next request goes 
to squid2 and is cached there.

Now I get different files out of the caches!

What confuses me even more: sometimes the caches re-request the file from 
the source and I get an updated version of the file, even when the expire 
time isn't over.

How does squid determine when to re-request the cached files?


Georg


[squid-users] Help with accelerated site

2010-03-24 Thread a...@gmail

Hello All,

I have followed this configuration, but when I try to access the website 
from outside my network, all I get is the default page of the Apache on the 
machine where the squid proxy is installed.


Here is the link:

http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

here is the configuration I followed

http_port 80 accel defaultsite=your.main.website.name
(I changed my port to 81; my backend server listens on port 81.) I have:
http_port 81 accel defaultsite=www.my.website.org vhost
and then used this:
cache_peer ip.of.webserver parent 80 0 no-query originserver name=myAccel
cache_peer 192.168.1.5 parent 81 0 no query originserver name=myAccel
(myAccel is a name I have put.) And then:
acl our_sites dstdomain my.website.org

http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

Anybody with any suggestions, please? Any help would be appreciated.
Thank you. Regards, Adam



Re: [squid-users] Map Single URL to Multiple Store urls

2010-03-24 Thread Amos Jeffries
On Wed, 24 Mar 2010 11:04:03 -0400, "Ken Struys"  wrote:
> Is there anyway to map single url's to multiple store url's based on a
> cookie?
> 
> Lets say I have a user cookie and I want to implement caching for logged
> in users.
> 
> I there anyway in squid I can append the cookie to the cached url? (in
> squid not on the client side url).

What you are attempting to do is equivalent to those seriously annoying
admins who place session IDs into URLs.
This is a Very Bad Idea. Your cache becomes extremely bloated very fast;
the Cookie header contains a lot of garbage even between requests made by
one user.

How are they logged in?

HTTP authentication...
  add WWW-Authenticate to the Vary: HTTP headers.

Web-based login...
  break each page down into modules and objects, setting public on the
common objects. private on very specific private objects and personal data.
AJAX / Web2.0 stuff can help here.


And yes, if you read between those lines, this type of control can only be
implemented safely by the website author.
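
In header terms, the HTTP-authentication case above amounts to the origin
emitting something like this (a sketch with hypothetical values):

HTTP/1.1 200 OK
Cache-Control: public, max-age=600
Vary: WWW-Authenticate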

>  
> I've looked at doing storeurl_rewrite_programs but they don't receive
> cookie information (or anything else useful) as input.

They receive the URL and source info. Which for the purpose of _reducing_
duplicate objects is quite useful.

Amos


Re: [squid-users] Allowing ports used by Squid through Iptables.

2010-03-24 Thread Amos Jeffries
On Wed, 24 Mar 2010 14:11:46 +, "GIGO ."  wrote:
> I want to do security hardening of my squid server with iptables. I
> intend to have no rules on outbound traffic; inbound traffic, however,
> would be restricted. Please guide me on the minimum ports that are
> required to be open in iptables.
>  

Please lookup guidelines on best-practice for firewall administration.

Minimum ports for Squid depend on your usage. Either port 80 for reverse
proxies or usually port 3128 for forward proxies.

In essence look at the squid.conf for *_port lines being used. Those are
the ones you need to look at for inbound traffic to Squid.
Exclude http(s)_port's with "transparent", "tproxy" or "intercept"
flagged. They should always be blocked from direct external access.

Amos
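
A sketch of such a ruleset for the ports this question listed (SSH, the
8080 http_port, SNMP on 3161/7172; adjust to your own *_port lines):

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT      # SSH management
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT    # squid http_port
iptables -A INPUT -p udp --dport 3161 -j ACCEPT    # snmp_port, instance 1
iptables -A INPUT -p udp --dport 7172 -j ACCEPT    # snmp_port, instance 2
iptables -P INPUT DROP                             # default-deny the rest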



Re: [squid-users] HTCP for consistent caches for reverse proxies

2010-03-24 Thread Amos Jeffries
On Wed, 24 Mar 2010 17:44:27 +0100, Georg Höllrigl
 wrote:
> Hello,
> 
> Is there a way to get two squid caches used as reverse proxies to have a
> consistent cache?
> 
> An example would be a file that contains "abcd": I request the file, get
> balanced to squid1, which caches the file due to the expire header for one
> hour. Then the file gets changed to contain "abcde". The next request goes
> to squid2 and is cached there.
> 
> Now I get different files out of the caches!

Only if the request/response headers specify that both are valid at the
same time.

> 
> What confuses me even more: sometimes the caches re-request the file from
> the source and I get an updated version of the file, even when the expire
> time isn't over.
> How does squid determine when to re-request the cached files?

Many reasons...

 when the Expires: header is in the past
 when the client request contains no-cache
 when the client request contains no-store
 when the client request contains private
 when the client request contains authentication credentials
 when the client request contains must-revalidate
 when the client request contains max-age shorter than the stored object
 when the stored object contains must-revalidate
 when the stored object contains Vary: header for a different set of
client headers
 (I'm sure I've missed a few)

It sounds to me like you are dealing with a web server that does not set
the correct response headers. OR that you or one of your upstream caches
are overriding some of those correct headers with refresh_pattern badness.

Amos
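
In practice the fix is at the origin: give both caches the same
deterministic freshness information, e.g. (a sketch with hypothetical
values):

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Last-Modified: Wed, 24 Mar 2010 10:00:00 GMT
ETag: "abcd-v1"

Within max-age the two caches can still briefly differ; shortening max-age
or purging on change is the usual trade-off.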




[squid-users] Re: Squid Kerb Auth Issue

2010-03-24 Thread Markus Moeller

How did you create the keytab ?

Markus
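
For context, the squid wiki's Kerberos examples create the HTTP keytab with
msktutil along these lines (a sketch with hypothetical names; not
necessarily how this keytab was made):

msktutil -c -b "CN=COMPUTERS" -s HTTP/proxy.example.com -k /etc/squid/HTTP.keytab \
  --computer-name proxy-http --upn HTTP/proxy.example.com --server dc.example.com --verbose

Anything that resets that AD computer account's password bumps the KVNO and
invalidates the keytab, which would fit a recurring weekly mismatch.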

"Nick Cairncross"  wrote in message 
news:c7ce8144.1d5e1%nick.cairncr...@condenast.co.uk...

Hi,

I'm concerned by a problem with my HTTP.keytab 'expiring'. My test base has 
reported to me that they are repeatedly prompted for an unsatisfiable 
username and password. When I checked cache.log I noticed a KVNO mismatch 
being reported. I regenerated my keytab and all was well again. However, 
this worried me, so I looked back over my emails and noticed the same 
problem occurred 7 days ago (almost to the hour). Does anyone have a 
suggestion as to what might have caused this, or things to check? There 
haven't been any AD changes.


Thanks,


Nick





Re: [squid-users] Help with accelerated site

2010-03-24 Thread Ron Wheeler

What is squid proxying?
Usually the normal behaviour is exactly what you are getting since squid 
normally proxies Apache on 80.

Browser ==> Squid on 80==>proxied to Apache on port 81.


If Squid is not proxying Apache, then it looks like you have Apache 
running on 80.


If you are trying to redirect port 80 to another program that is not 
Apache, then you need to get Apache off port 80.

You can not have 2 programs listening to port 80.

If Apache is running and owns port 80, Squid will not start.

If this is the case, You likely have errors in the logs to this effect.

Shut down Apache and restart Squid.

Try to start Apache and now it should howl with anger (or log in anger) 
at not getting port 80.



Ron

a...@gmail wrote:

Hello All,

I have followed this configuration, but when I try and access the 
website from outside my network
All I get is the default page of the apache on the machine where the 
Squid proxy is installed


Here is the link:

http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

here is the configuration I followed

http_port 80 accel defaultsite=your.main.website.name(changed my port 
to 81 my backend server listens on port 81)I havehttp_port 81 accel 
defaultsite=www.my.website.org vhostand then used thiscache_peer 
ip.of.webserver parent 80 0 no-query originserver 
name=myAccelcache_peer 192.168.1.5 parent 81 0 no query originserver 
name=myAccel(myAccel I have put a name)and then acl our_sites 
dstdomain my.website.org

http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all Anybody with any suggestions 
please?Any help would be appreciated thank youRegardsAdam







[squid-users] pinger? what for

2010-03-24 Thread Luis Daniel Lucio Quiroz
Hi squids,

I noticed that the latest 3.1 snapshot has the pinger disabled. I wonder: 
what is the pinger for?

TIA

LD


[squid-users] WebFilter by ip

2010-03-24 Thread Landy Landy
Hello List.

I have an acl blocking a batch of IP addresses banned from using the 
internet, and others that can use the internet without problems. Now I would 
like to filter the web content for those users that do use the internet: I 
would like to block sexual content and similar material that can be 
disturbing at work.

How can I create another acl to filter pages for the specific IPs that are 
allowed to use the internet?

Any suggestions???

Thanks in advance for your help.


  


Re: [squid-users] WebFilter by ip

2010-03-24 Thread Luis Daniel Lucio Quiroz
On Wednesday 24 March 2010 18:30:49, Landy Landy wrote:
> Hello List.
> 
> I have an acl blocking a batch of IP addresses banned from using the
> internet, and others that can use the internet without problems. Now I
> would like to filter the web content for those users that do use the
> internet: I would like to block sexual content and similar material that
> can be disturbing at work.
> 
> How can I create another acl to filter pages for the specific IPs that
> are allowed to use the internet?
> 
> Any suggestions???
> 
> Thanks in advance for your help.
Go with DansGuardian or SquidGuard.


Re: [squid-users] WebFilter by ip

2010-03-24 Thread Landy Landy
 
> > Thanks in advanced for your help.
> go to  Dansguardian or SquidGuard
> 

I've read about these two utilities, but the problem is filtering the 
content only for specific IP addresses.





Re: [squid-users] Help with accelerated site

2010-03-24 Thread Amos Jeffries
On Wed, 24 Mar 2010 19:48:27 -0400, Ron Wheeler
 wrote:
> What is squid proxying?
> Usually the normal behaviour is exactly what you are getting since squid

> normally proxies Apache on 80.
> Browser ==> Squid on 80==>proxied to Apache on port 81.
> 
> 
> If Squid is not proxying Apache, then it looks like you have Apache 
> running on 80.
> 
> If you are trying to redirect port 80 to another program that is not 
> Apache, then you need to get Apache off port 80.
> You can not have 2 programs listening to port 80.
> 
> If Apache is running and owns port 80, Squid will not start.
> 
> If this is the case, You likely have errors in the logs to this effect.
> 
> Shut down Apache and and restart Squid.
> 
> Try to start Apache and now it should howl with anger (or log in anger) 
> at not getting port 80.
> 
> 
> Ron
> 
> a...@gmail wrote:
>> Hello All,
>>
>> I have followed this configuration, but when I try and access the 
>> website from outside my network
>> All I get is the default page of the apache on the machine where the 
>> Squid proxy is installed
>>
>> Here is the link:
>>
>> http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator
>>
>> here is the configuration I followed
>>
>> http_port 80 accel defaultsite=your.main.website.name(changed my port 
>> to 81 my backend server listens on port 81)I havehttp_port 81 accel 
>> defaultsite=www.my.website.org vhostand then used thiscache_peer 
>> ip.of.webserver parent 80 0 no-query originserver 
>> name=myAccelcache_peer 192.168.1.5 parent 81 0 no query originserver 
>> name=myAccel(myAccel I have put a name)and then acl our_sites 
>> dstdomain my.website.org
>> http_access allow our_sites
>> cache_peer_access myAccel allow our_sites
>> cache_peer_access myAccel deny all Anybody with any suggestions 
>> please?Any help would be appreciated thank youRegardsAdam
>>

Sorry, took me a while to un-mangle that original email text.

You are missing the "vhost" option on http_port 80. All traffic Squid
receives on port 80 will go to Apache's default virtual host.

Amos



Re: [squid-users] WebFilter by ip

2010-03-24 Thread donovan jeffrey j


On Mar 24, 2010, at 8:30 PM, Landy Landy wrote:


Hello List.

I have an acl blocking a batch of ip addresses banned from using the  
internet and have others that can use the internet without problems.  
Now, I would like to filter the web content to those users that use  
the internet. I would like to block sexual content and stuff like  
that that can be desturbing at work.


How can I create another acl to filter pages to the specific ip's  
that are allowed to the internet?


Any suggestions???

Thanks in advanced for your help.


greetings

Squid + SquidGuard is very easy to do. You need to ask yourself: do you 
want transparent interception, or to configure the client browsers?

then you can filter with a blacklist

start here for one.

http://www.shallalist.de/categories.html

any and all traffic that comes into the device can be viewed and sent  
to a log file for processing.

-j
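
In squid.conf terms, the per-IP part is just a src acl combined with a
blacklist acl (a rough sketch with hypothetical addresses and file paths):

acl filtered_clients src 192.168.1.0/24
acl adult dstdomain "/etc/squid/blacklists/adult/domains"
http_access deny filtered_clients adult
http_access allow filtered_clients

SquidGuard does the equivalent per-IP selection with src blocks in
squidGuard.conf.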


Re: [squid-users] pinger? what for

2010-03-24 Thread Amos Jeffries
On Wed, 24 Mar 2010 18:28:53 -0600, Luis Daniel Lucio Quiroz
 wrote:
> HI squids,
> 
> I noticed that the latest 3.1 snapshot has the pinger disabled. I wonder:
> what is the pinger for?
> 

Squid uses it to securely do ICMP measurements of the distance to the
possible source servers of a request, optimizing which peers get used for
the fastest responses.
It was turned on for some extra testing in the 3.1 betas, but it is not yet
installed properly, so it has been disabled temporarily again for the coming
production release.

Amos
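
For anyone wanting it back on once it is installed correctly, the relevant
squid.conf knobs are (a sketch; the binary path varies by install, and the
pinger must be setuid root):

pinger_enable on
pinger_program /usr/lib/squid/pinger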



[squid-users] sarg and Squid 3 Stable20

2010-03-24 Thread Joseph L. Casale
Using the redhat package on CentOS 5x64, sarg faults and can't generate
all of the files needed for the view.

This worked with the older version in the main repo; is there something
known to change to allow sarg to work, or is the issue unexpected?

Thanks!
jlc


Re: [squid-users] Help with accelerated site

2010-03-24 Thread a...@gmail

Hello there,
Thanks for the reply Ron and Amos


Maybe my original e-mail wasn't clear and a bit confusing; I am sorry if I 
confused you.


I have squid running on Machine A with let's say local ip 192.168.1.4
the backend server is running on machine B and ip address 192.168.1.3

Now, instead of getting the website that is located on machine B 
(192.168.1.3), which is listening on port 81, not 80, I am getting the 
default Apache page on the proxy server machine, which is 192.168.1.4.


And I do have the vhost in my configuration. There are two Apaches running 
on the two machines, the proxy machine and the web-server machine, except 
that the web-server Apache listens on port 81. Logically (technically) 
speaking it should work, but for some reason it doesn't.

I hope it makes more sense to you what I am trying to describe here

Thank you all for your help
Regards
Adam

- Original Message - 
From: "Amos Jeffries" 

To: 
Sent: Thursday, March 25, 2010 1:01 AM
Subject: Re: [squid-users] Help with accelerated site



On Wed, 24 Mar 2010 19:48:27 -0400, Ron Wheeler
 wrote:

What is squid proxying?
Usually the normal behaviour is exactly what you are getting since squid



normally proxies Apache on 80.
Browser ==> Squid on 80==>proxied to Apache on port 81.


If Squid is not proxying Apache, then it looks like you have Apache
running on 80.

If you are trying to redirect port 80 to another program that is not
Apache, then you need to get Apache off port 80.
You can not have 2 programs listening to port 80.

If Apache is running and owns port 80, Squid will not start.

If this is the case, You likely have errors in the logs to this effect.

Shut down Apache and and restart Squid.

Try to start Apache and now it should howl with anger (or log in anger)
at not getting port 80.


Ron

a...@gmail wrote:

Hello All,

I have followed this configuration, but when I try and access the
website from outside my network
All I get is the default page of the apache on the machine where the
Squid proxy is installed

Here is the link:

http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

here is the configuration I followed

http_port 80 accel defaultsite=your.main.website.name(changed my port
to 81 my backend server listens on port 81)I havehttp_port 81 accel
defaultsite=www.my.website.org vhostand then used thiscache_peer
ip.of.webserver parent 80 0 no-query originserver
name=myAccelcache_peer 192.168.1.5 parent 81 0 no query originserver
name=myAccel(myAccel I have put a name)and then acl our_sites
dstdomain my.website.org
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all Anybody with any suggestions
please?Any help would be appreciated thank youRegardsAdam



Sorry, took me a while to un-mangle that original email text.

You are missing the "vhost" option on https_port 80. All traffic Squid
receives on port 80 will go to Apache's default virtual host.

Amos





Re: [squid-users] Help with accelerated site

2010-03-24 Thread Ron Wheeler

a...@gmail wrote:

Hello there,
Thanks for the reply Ron and Amos


Maybe my original e-mail wasn't clear and a bit confusing; I am sorry if I 
confused you.


I have squid running on Machine A with let's say local ip 192.168.1.4
the backend server is running on machine B and ip address 192.168.1.3

Now, instead of getting the website that is located on machine B 
(192.168.1.3), which is listening on port 81, not 80, I am getting the 
default Apache page on the proxy server machine, which is 192.168.1.4.


And I do have the vhost in my configuration. There are two Apaches running 
on the two machines, the proxy machine and the web-server machine, except 
that the web-server Apache listens on port 81. Logically (technically) 
speaking it should work, but for some reason it doesn't.

I hope it makes more sense to you what I am trying to describe here


Very helpful.
You cannot have Apache listening on port 80 on 192.168.1.4 and Squid 
trying to do the same thing.

Only one process can have port 80.
You will very likely find a note in the squid logs to the effect that 
squid cannot bind to port 80.
If you shut down Apache on 192.168.1.4 and restart squid, your proxy will 
work (if the rest of the configuration is correct).
If you then try to start apache on 192.168.1.4 it will certainly 
complain loudly about port 80 not being free.


If you want to use Apache on both 192.168.1.4 and 192.168.1.3, you need 
to set the Apache on 192.168.1.4 to listen on port 81, set squid to proxy 
to the Apache on 192.168.1.4, and use Apache's proxy and vhost features to 
reach 192.168.1.5, which can be set to listen on port 80.

This will support
browser=>Squid on 192.168.1.4 ==> Apache on 192.168.1.4:81 (vhost) 
==>Apache 192.168.1.3:80

That is a pretty common approach.

Ron
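
Alternatively, the simplest layout is to stop the local Apache on port 80
and let Squid own it, pointing straight at the backend. A sketch assembled
from the configs earlier in this thread:

http_port 80 accel defaultsite=www.my.website.org vhost
cache_peer 192.168.1.3 parent 81 0 no-query originserver name=myAccel
acl our_sites dstdomain my.website.org
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all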




Thank you all for your help
Regards
Adam

- Original Message - From: "Amos Jeffries" 
To: 
Sent: Thursday, March 25, 2010 1:01 AM
Subject: Re: [squid-users] Help with accelerated site



On Wed, 24 Mar 2010 19:48:27 -0400, Ron Wheeler
 wrote:

What is squid proxying?
Usually the normal behaviour is exactly what you are getting since 
squid



normally proxies Apache on 80.
Browser ==> Squid on 80==>proxied to Apache on port 81.


If Squid is not proxying Apache, then it looks like you have Apache
running on 80.

If you are trying to redirect port 80 to another program that is not
Apache, then you need to get Apache off port 80.
You can not have 2 programs listening to port 80.

If Apache is running and owns port 80, Squid will not start.

If this is the case, You likely have errors in the logs to this effect.

Shut down Apache and and restart Squid.

Try to start Apache and now it should howl with anger (or log in anger)
at not getting port 80.


Ron

a...@gmail wrote:

Hello All,

I have followed this configuration, but when I try and access the
website from outside my network
All I get is the default page of the apache on the machine where the
Squid proxy is installed

Here is the link:

http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

here is the configuration I followed

http_port 80 accel defaultsite=your.main.website.name(changed my port
to 81 my backend server listens on port 81)I havehttp_port 81 accel
defaultsite=www.my.website.org vhostand then used thiscache_peer
ip.of.webserver parent 80 0 no-query originserver
name=myAccelcache_peer 192.168.1.5 parent 81 0 no query originserver
name=myAccel(myAccel I have put a name)and then acl our_sites
dstdomain my.website.org
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all Anybody with any suggestions
please?Any help would be appreciated thank youRegardsAdam



Sorry, took me a while to un-mangle that original email text.

You are missing the "vhost" option on http_port 80. All traffic Squid
receives on port 80 will go to Apache's default virtual host.

Amos
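
(Putting Amos's fix into Adam's configuration gives something like the 
following sketch; the IP, port, names and domain are the ones quoted 
in the thread:)

  # vhost makes Squid route by Host: header instead of sending
  # everything to the default site
  http_port 80 accel defaultsite=www.my.website.org vhost
  # backend apache, listening on port 81
  cache_peer 192.168.1.5 parent 81 0 no-query originserver name=myAccel
  acl our_sites dstdomain my.website.org
  http_access allow our_sites
  cache_peer_access myAccel allow our_sites
  cache_peer_access myAccel deny all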








Re: [squid-users] Cancelled downloads

2010-03-24 Thread Carlos Lopez
Hi,

I have the same situation with users on my site: they download many BIG 
files and then cancel them. Even though I set some delay pools so they 
get bored, the big files are kept by squid and the HD is getting full.

Is there any way to solve this through Squid?

Carlos
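
(The Squid-side knob for this is quick_abort, discussed in the quoted 
thread below; a minimal squid.conf sketch:)

  # abort the server-side fetch as soon as the client goes away,
  # no matter how much of the object is left
  quick_abort_min 0 KB
  quick_abort_max 0 KB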

--- On Sat, 3/20/10, Marcello Romani  wrote:

> From: Marcello Romani 
> Subject: Re: [squid-users] Cancelled downloads
> To: squid-users@squid-cache.org
> Date: Saturday, March 20, 2010, 04:51 PM
> Marcello Romani wrote:
> > CASALI COMPUTERS - Michele Brodoloni wrote:
> >> Hello,
> >> Is it possible to stop squid from continuing to download a file
> >> when a user stops the download from his browser?
> >> If a user initiates a 1GB web download and then hits “cancel”,
> >> squid doesn't mind it and continues downloading until it finishes,
> >> which is a waste of bandwidth.
> >> 
> >> Is there a solution for this behavior?
> >> 
> >> Thanks
> >> 
> > 
> > Hello,
> >     I have the same problem here. I have set quick_abort_min and
> > _max to 0 to avoid any (useless, in my situation) continued download.
> > 
> > But what to do with downloads that were interrupted before the
> > config change?
> > 
> > I.e., I now have 5-6 huge iso files that are still being downloaded
> > by squid as leftovers from previous interrupted downloads.
> > 
> > Can I tell squid to abort them via some kind of administrative
> > interface (cachemgr doesn't seem to provide such a command) or
> > should I go the iptables route?
> > 
> > Thanks in advance.
> > 
> 
> Ok, just for reference: I dug out the IPs of the sites hosting those
> iso files and set up some iptables rules in the INPUT chain to block
> tcp/80 traffic coming from them, plus some OUTPUT chain rules to
> block outgoing traffic as well.
> Within seconds the wan traffic decreased, even though I did not put
> those rules on the gateway, only on the squid host.
> After some time squid dropped the connection, probably due to a
> timeout, and I deleted the blocking rules.
> 
> This way I've been able to "cancel" ongoing downloads
> without restarting squid.
> 
> Marcello
> 
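
(A sketch of the kind of rules Marcello describes, run on the squid 
host itself; 203.0.113.10 stands in for an origin server's address:)

  # block replies from the origin and our requests to it
  iptables -I INPUT  -p tcp -s 203.0.113.10 --sport 80 -j DROP
  iptables -I OUTPUT -p tcp -d 203.0.113.10 --dport 80 -j DROP
  # once squid has timed the fetch out, remove the rules again
  iptables -D INPUT  -p tcp -s 203.0.113.10 --sport 80 -j DROP
  iptables -D OUTPUT -p tcp -d 203.0.113.10 --dport 80 -j DROP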


  



[squid-users] Squid redirection

2010-03-24 Thread jayesh chavan
Hi,
I have written a script which redirects my squid to a local
apache. It works fine with the following script:

#!c:/perl/bin/perl.exe
$|=1;
while (<>) {
    s@http://www.az@http://117.195.4.252@;
    print;
}

But whenever I use this script:

#!c:/perl/bin/perl.exe
$|=1;
while (<>) {
    s@http://www.az@http://117.195.4.252//index.html@;
    print;
}

it doesn't work. I observed that this is happening because the redirect
program is appending / at the end of the rewritten url. It gives the
error:
    The requested URL /index.html/ was not found on this server.
How do I avoid that?
Regards,
  Jayesh
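
(One likely culprit, as a sketch rather than a confirmed fix: the 
substitution only replaces the URL's prefix, so whatever followed 
www.az stays glued on after /index.html. Replacing the whole URL 
token, with a single slash, avoids both the leftover tail and the 
doubled slash:)

  #!c:/perl/bin/perl.exe
  # swap the entire matching URL for the fixed page, so nothing from
  # the original URL is appended after /index.html
  $|=1;
  while (<>) {
      s@^http://www\.az\S*@http://117.195.4.252/index.html@;
      print;
  }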


[squid-users] Squid Compilation and Active Directory Authentication

2010-03-24 Thread GIGO .


purpose:
 
To authenticate squid users through Active Directory before allowing them 
access to the internet.
 
 
Compile Options: 
 
./configure --prefix=/usr --localstatedir=/var --libexecdir=${prefix}/lib/squid 
--srcdir=. --datadir=${prefix}/shares/squid --sysconfdir=/etc/squid3 
--enable-cache-digests --enable-removal-policies=lru --enable-delay-pools 
--enable-storeio=aufs,ufs --with-large-files --disable-ident-lookups 
--with-default-user=proxy --enable-basic-auth-helpers="LDAP" 
--enable-auth="basic,negotiate,ntlm" 
--enable-external-acl-helpers="wbinfo_group,ldap_group" 
--enable-negotiate-auth-helpers="squid_kerb_auth"
 
 
Question:
 
1. Does the --enable-digest-auth-helpers="list of helpers" option have any 
role in authentication through Active Directory?
 
2. Does compiling with more options than you currently require have a 
downside, or is it a good idea to compile with as many options as you guess 
you may need in the future?
 
3. Could you point me to a complete online guide for authenticating squid 
users through Active Directory? Currently I am referring to these, hoping 
that they are the latest and most complete:
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory
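
(For what it is worth, the negotiate side of those guides boils down 
to roughly the following squid.conf fragment; the helper path follows 
the --libexecdir above, and the service principal is a made-up example 
that must match the proxy's keytab entry:)

  auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
  auth_param negotiate children 10
  auth_param negotiate keep_alive on
  # require a successful AD (Kerberos) login before anything else
  acl ad_auth proxy_auth REQUIRED
  http_access allow ad_auth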
 
regards,
 
 
  

Re: [squid-users] pinger? what for

2010-03-24 Thread Luis Daniel Lucio Quiroz
On Wednesday 24 March 2010 19:07:48, Amos Jeffries wrote:
> On Wed, 24 Mar 2010 18:28:53 -0600, Luis Daniel Lucio Quiroz wrote:
> > Hi squids,
> > 
> > I noticed that the latest 3.1 snapshot has the pinger disabled. I
> > wonder what the pinger is for?
> 
> Squid uses it to securely do ICMP to measure the distance to the
> possible source servers of a request, optimizing which peers get used
> for the fastest responses.
> It was on for some extra testing in the 3.1 betas, but is not yet
> installed properly, so it has been disabled temporarily again for the
> coming production release.
> 
> Amos


OK, and will installing it setuid be enough?
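
(Roughly, yes; a sketch of the usual manual step for a build 
configured with --enable-icmp, assuming the default /usr/local/squid 
install prefix; some source trees also offer a 'make install-pinger' 
target that does the same thing:)

  # pinger must be root-owned and setuid root to open raw ICMP sockets
  chown root /usr/local/squid/libexec/pinger
  chmod u+s /usr/local/squid/libexec/pinger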


[squid-users] pid file

2010-03-24 Thread Luis Daniel Lucio Quiroz
Strange,
I know the default path for the squid.pid file is /var/run/squid.pid; 
however, I need to put it in /var/run/squid/squid.pid.

I realize there is a flag '--with-pidfile=/var/run/squid/squid.pid' and this 
should work. However, after recompiling, the pid file is still in 
/var/run/squid.pid.

Any suggestions?

LD

Squid Cache: Version 3.1.0.18-20100324
configure options: '--build=x86_64-mandriva-linux-gnu' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--datadir=/usr/share/squid'
'--includedir=/usr/include' '--libdir=/usr/lib64'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--x-includes=/usr/include'
'--x-libraries=/usr/lib64' '--enable-shared=yes' '--enable-static=no'
'--enable-xmalloc-statistics' '--enable-carp' '--enable-async-io'
'--enable-storeio=aufs,diskd,ufs' '--enable-removal-policies=heap,lru'
'--enable-icmp' '--enable-delay-pools' '--disable-esi'
'--enable-icap-client' '--enable-ecap' '--enable-useragent-log'
'--enable-referer-log' '--enable-wccp' '--enable-wccpv2'
'--disable-kill-parent-hack' '--enable-snmp'
'--enable-cachemgr-hostname=localhost' '--enable-arp-acl'
'--enable-htcp' '--enable-ssl' '--enable-forw-via-db'
'--enable-cache-digests' '--disable-poll' '--enable-epoll'
'--enable-linux-netfilter' '--disable-ident-lookups'
'--enable-default-hostsfile=/etc/hosts'
'--enable-auth=basic,digest,negotiate,ntlm'
'--enable-basic-auth-helpers=getpwnam,LDAP,MSNT,multi-domain-NTLM,NCSA,PAM,SMB,YP,SASL,POP3,DB,squid_radius_auth'
'--enable-ntlm-auth-helpers=fakeauth,no_check,smb_lm'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-digest-auth-helpers=password,ldap,eDirectory'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--with-default-user=squid' '--with-pthreads' '--with-dl'
'--with-openssl=/usr' '--with-large-files'
'--with-build-environment=default' '--enable-mit=/usr'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid'
'--enable-http-violations' '--with-filedescriptors=8192'
'build_alias=x86_64-mandriva-linux-gnu'
'CFLAGS=-O2 -g -pipe -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -fstack-protector-all -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64'
'LDFLAGS= -Wl,--as-needed -Wl,--no-undefined -Wl,-z,relro -Wl,-O1'
'CPPFLAGS=-I/usr/include/openssl '
'CXXFLAGS=-O2 -g -pipe -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -fstack-protector-all -D_LARGEFI
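
(One workaround while the --with-pidfile flag is sorted out: 
pid_filename is a regular squid.conf directive and overrides the 
compiled-in default; a sketch:)

  # squid.conf: override the built-in pid file location
  pid_filename /var/run/squid/squid.pid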