[squid-users] Sarg doesn't generate reports

2010-06-22 Thread Sam Przyswa

Hi,

I installed sarg 2.2.5-2 with squid3 on Debian 5.0.4, but the reports are 
not created in /var/lib/sarg/Daily. The daily mail report works fine!?


What's wrong?

Thanks for your help.

Sam.




Re: [squid-users] maxage/s-maxage on reverse proxy mode

2010-06-22 Thread Henrik Nordström
Wed 2010-06-23 at 15:36 +0900, sheng zheng wrote:

> content when the backend origin web servers send a dynamic page with a
> "max-age=60, s-maxage=120" header?

That s-maxage should be fine, even if quite short.

> Is there a way to search the neighbor cache peers  first

It should be sufficient to have siblings defined, I think, but I have a
vague memory of this being a problem in accelerator setups.

Which Squid version are you using?

Try setting "nonhierarchical_direct off".
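
For example (a sketch combining that with the peers from your config; the
$-names are your own placeholders):

  cache_peer $neighbor_ip sibling 80 3130
  cache_peer $original_server_ip parent 80 0 no-query no-digest default originserver
  nonhierarchical_direct off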

Regards
Henrik



Re: [squid-users] Re: Squid Concerns

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 12:30 -0700, Superted666 wrote:
> Hello,
> 
> Thanks so much again, nearly there now!
> That gets me through to my webserver BUT squid doesn't appear to be sending
> me to the correct address.

What address do you try to connect to? www.f1fanatic.co.uk or something
else?

If your server has multiple domains then you need the vhost option in
Squid http_port to enable domain based virtual hosts.
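
For example (a sketch, assuming www.f1fanatic.co.uk is the main site being
accelerated on your existing listening IP):

  http_port 77.92.76.176:80 accel vhost defaultsite=www.f1fanatic.co.uk

With vhost set, Squid reconstructs the URL from each request's Host header
instead of forcing everything to the defaultsite.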

Regards
Henrik



[squid-users] maxage/s-maxage on reverse proxy mode

2010-06-22 Thread sheng zheng

Hello , all
 We have a couple of squid servers behind a load balancer as a reverse 
proxy farm. Does anybody know how to make all the squids cache the same 
content when the backend origin web servers send a dynamic page with a 
"max-age=60, s-maxage=120" header? Is there a way to search the neighbor 
cache peers first and, on an ICP miss (UDP_MISS), only then forward the 
request to the origin web servers? I have tried cache_peer settings like 
the ones below, but it does not work.



cache_peer $neighbor_ip sibling 80 3130
cache_peer $original_server_ip parent 80 0 no-query no-digest default originserver



Thanks

-Sheng



Re: [squid-users] Re: Squid Concerns

2010-06-22 Thread Amos Jeffries
On Tue, 22 Jun 2010 12:30:05 -0700 (PDT), Superted666 wrote:
> Hello,
> 
> Thanks so much again, nearly there now!
> That gets me through to my webserver BUT squid doesn't appear to be
> sending me to the correct address.
> 
> Apache is set to serve as a virtualhost so looks out for the URL, it looks

Your squid http_port line may also need the "vhost" flag to handle virtual
hosts.

> like squid is stripping this URL before it's passed onto apache. Then
> apache moans and gives a 404.

Squid does not strip anything without special configuration. Your earlier
posted config showed no signs of such alterations. So something else is
going on.

Amos


Re: [squid-users] About proxy_auth acl

2010-06-22 Thread Amos Jeffries
On Tue, 22 Jun 2010 16:30:52 +0200, Alberto Cappadonia wrote:
> Hi,
> 
> I've a question about proxy_auth acl.
> 
> if I've an acl list like the following
> 
> acl friends proxy_auth mary jane carl
> acl target dst 10.0.0.1
> 
> http_access friends allow
> http_access target deny

On startup your Squid barfs with "FATAL: Bungled squid.conf"

The syntax is:
 "http_access" ( "allow" | "deny" ) [acl] [acl ...]


> 
> What happens when mary contacts 10.0.0.1? Always allow?

Yes. "mary", "jane" and "carl" are allowed unrestricted access to HTTP
once logged in.

> 
> If "http_access friends allow" is evaluated to true, is the second also 
> checked?

No. *_access lines always evaluate to one of two results:
  true -> stop and do (allow|deny).
  false -> test next rule.

> 
> I mean, is the proxy_auth acl considered by squid like the other acls,
> or is it evaluated only the first time and when the timeout expires?

ACLs are evaluated on every test.

All ACLs which require remote lookups (i.e. DNS lookups, proxy_auth, ident
and external) each have an internal cache of results which gets checked
first before the slow helper is asked. Some protocols send only M/ttl of
M requests to the helper; others send all M of M.

> 
> Is there some doc explaining the state-chart of the entire 
> authentication scheme?

No. Each authentication protocol (auth_param X) differs.

Note that *authentication* is very different from the *authorization*
scheme you are asking about.
 Access Controls authorize some particular request to happen or not to
happen. Sometimes, as in your config, a user is required to be
authenticated before they can be authorized access. Usually they can be
denied without authentication (i.e. external machines).

The state diagram of your access controls is called squid.conf.
 * Starting at the top each line is evaluated top-down left-to-right.
 * First word is the point of transfer affected by the control
(http_access -> each HTTP request).
 * Second word is the policy to enforce (allow/deny).
 * Third and following is the list of ACLs to be tested.
 * if an ACL is true, the next one on the line gets tested; at the end of
the line the policy is applied.
 * if an ACL is false, the next line gets checked.
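
For example, given the acls above, a line such as

  http_access deny friends target

denies a request only when both ACLs match (an authenticated friend
contacting 10.0.0.1); if either is false, evaluation moves on to the next
http_access line.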

http://wiki.squid-cache.org/SquidFaq/SquidAcl#Common_Mistakes

Amos


Re: [squid-users] Skype block

2010-06-22 Thread Amos Jeffries
On Tue, 22 Jun 2010 23:47:05 +0200, Giovanni Panozzo wrote:
> Last year I used the following method: Skype is a program which does not
> send a "User-Agent:" header to the HTTP proxy (squid).
> This is not considered good practice in RFC 2616:
> http://tools.ietf.org/html/rfc2616#section-14.43
> 
> So I added to squid.conf the following two lines:
> 
> acl validUserAgent browser \S+
> http_access deny !validUserAgent
> 
> (this second line must be placed in the right order in your
> http_access list of squid.conf)
> 
> And Skype stopped working (1 year ago... now, really, I'm not sure if it
> still works).

Some versions send the UA "Skype". The latest releases have gone to no UA
at all.
Thank you for this, it will help several people who have been looking for
something to match those new versions.
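
For the versions that do send it, a browser ACL along these lines (an
untested sketch) should match:

  acl skypeUA browser ^Skype
  http_access deny skypeUA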

Amos


Re: [squid-users] problems access some pages

2010-06-22 Thread Amos Jeffries
On Tue, 22 Jun 2010 16:07:34 -0500, "Vernon A. Fort" wrote:
> Having issues connecting to this site:
> 
> http://www.bbb.org/nashville/accredited-business-directory/roofing-contractors
> 
> Actually it's when I select one of the links contained on this page via
> Squid-3.0.20. IE shows a page error indicating that the 'Cookie' is
> undefined. Turning up debugging does not really show anything. We can
> access this site and sub-pages without using squid. Can anyone point me
> in the right direction?

For a public web page it's being rather fanatical about preventing
temporary caching. Everything, including the session cookie and the TCP
link, is closed and expired immediately on creation.

  HTTP/1.1 200 OK
  Date: Wed, 23 Jun 2010 01:10:31 GMT
  Server: Apache
  Expires: Thu, 19 Nov 1981 08:52:00 GMT
  Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
  Pragma: no-cache
  Content-Type: text/html;charset=utf-8
  Set-Cookie: PHPSESSID=acku3mcc20hkkletn4e4l570u0; path=/
  Set-Cookie: bbb=50.48.52.46.50.51.50.46.50.49.48.46.50.48.50.124.99.51.54.102.122.52.109; path=/
  Set-Cookie: before=deleted; expires=Tue, 23-Jun-2009 01:10:30 GMT; path=/
  Set-Cookie: previous=deleted; expires=Tue, 23-Jun-2009 01:10:30 GMT; path=/
  Set-Cookie: current=www.bbb.org%2Fnashville%2Fabpages%2Froofing-contractors; path=/
  Connection: close
  Vary: Accept-Encoding, User-Agent
  Content-Encoding: gzip
  Transfer-Encoding: chunked


These notes about the CSS from the redbot.org analysis of the whole page
are a little worrying:
  "Content negotiation for gzip compression makes the response 19395% larger."
  "Content negotiation for gzip compression makes the response 22454% larger."


Anyway, assuming you have not configured any local HTTP protocol overrides
(with refresh_pattern) to force caching, these pages will simply be passed
through Squid as received. If cached, the page will have cookies and
authentication stripped when passed out to clients.
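
For reference, such an override would look something like the following (a
sketch only; doing this violates HTTP and would break the site's session
cookies):

  refresh_pattern -i \.bbb\.org/ 0 20% 1440 ignore-no-cache ignore-private override-expire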

Amos



Re: [squid-users] Skype block

2010-06-22 Thread Giovanni Panozzo


Last year I used the following method: Skype is a program which does not 
send a "User-Agent:" header to the HTTP proxy (squid).

This is not considered good practice in RFC 2616:
http://tools.ietf.org/html/rfc2616#section-14.43

So I added to squid.conf the following two lines:

acl validUserAgent browser \S+
http_access deny !validUserAgent

(this second line must be placed in the right order in your 
http_access list of squid.conf)


And Skype stopped working (1 year ago... now, really, I'm not sure if it 
still works).


Giovanni



[squid-users] problems access some pages

2010-06-22 Thread Vernon A. Fort

Having issues connecting to this site:

http://www.bbb.org/nashville/accredited-business-directory/roofing-contractors

Actually it's when I select one of the links contained on this page via 
Squid-3.0.20. IE shows a page error indicating that the 'Cookie' is 
undefined. Turning up debugging does not really show anything. We can 
access this site and sub-pages without using squid. Can anyone point me 
in the right direction?


Vernon


[squid-users] Optimized Squids

2010-06-22 Thread Seann Clark

All,

   I have been playing with/tweaking/breaking my squid for a few months 
now, and I am looking for suggestions from the list on improving 
performance. This is on a home system, which does not have a large user 
base. I am running a dual Xeon 2.0 GHz system with 2 GB RAM and a 120 GB 
hard drive in a RAID 5 configuration controlled by a 3ware RAID card.  
I was using the stock Fedora 8 RPM for this, which was single threaded, 
squid 2.6 STABLE22. I am also running this with diskd currently.


   I have recently recompiled squid to the latest stable for version 
2.7 (STABLE9) with the async I/O flag passed to the configure command. 
After a little updating of my configuration, just enough to get it to 
work (I haven't changed any of the settings that are new to 2.7, so they 
are defaults right now), I have noticed a drastic improvement in speed, 
and even when the system is at a high load (3-5 system load, as reported 
by top) it runs fairly well. I am looking at moving my cache directories 
off to a different disk formatted with ReiserFS, and I am also planning 
on keeping diskd as the storage scheme. The drive I am using will have 
about 100 GB of formatted space, and I plan to use all that space for 
the cache.
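
Concretely, I was thinking of something like this for the new disk
(assuming it is mounted on /cache, and deliberately sized below the full
100 GB to leave working headroom):

  cache_dir diskd /cache 90000 16 256 Q1=64 Q2=72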


   I would like to know if this is a good plan, or whether I should change 
some things (and how to change them), as well as any suggestions for 
configuration settings for the cache, and for 2.7 options that may make a 
cache run even faster.




Thanks in advance,
Seann




Re: [squid-users] MSN sniff over the squid server

2010-06-22 Thread Leonardo Carneiro - Veltrac

For this you can use msn-proxy

http://msn-proxy.sourceforge.net/


Best regards.

On 06/22/2010 05:34 PM, Luis Daniel Lucio Quiroz wrote:
> On Tuesday 22 June 2010 14:13:43, Henrik Nordström wrote:
>> Tue 2010-06-22 at 14:05 -0500, Juan Cardoza wrote:
>>> Do you know if there is a way to sniff the MSN packets through the squid
>>> server?
>>> At this moment the MSN is working through the squid server.
>>>
>>> Kind regards
>>
>> wireshark is a quite nice sniffing tool for inspecting any kind of
>> traffic, proxied or not.
>>
>> Regards
>> Henrik
>
> Henrik
>
> I think what he is trying to do is something like this:
>
> user -> proxy -> internet
>           -IM logging
>
> Basically the problem is that the MSN client uses a protocol that is not
> HTTP at all, so it is hard to get at the messages without all the
> garbage. However, there is another solution that could be quite easy:
> you may use, in conjunction with Squid, a SOCKS proxy like SS5 to let
> the MSN client use native MSNP.
>
> LD


Re: [squid-users] MSN sniff over the squid server

2010-06-22 Thread Luis Daniel Lucio Quiroz
On Tuesday 22 June 2010 14:13:43, Henrik Nordström wrote:
> Tue 2010-06-22 at 14:05 -0500, Juan Cardoza wrote:
> > Do you know if there is a way to sniff the MSN packets through the squid
> > server?
> > At this moment the MSN is working through the squid server.
> > 
> > Kind regards
> 
> wireshark is a quite nice sniffing tool for inspecting any kind of
> traffic, proxied or not.
> 
> Regards
> Henrik
Henrik

I think what he is trying to do is something like this:

user -> proxy ->  internet
   -IM logging

Basically the problem is that the MSN client uses a protocol that is not 
HTTP at all, so it is hard to get at the messages without all the garbage. 
However, there is another solution that could be quite easy: you may use, 
in conjunction with Squid, a SOCKS proxy like SS5 to let the MSN client 
use native MSNP.

LD


Re: [squid-users] how to cache podcasts with '?' in URL using squid?

2010-06-22 Thread David Wetzel
Hi,

On 22.06.2010 at 12:13, Henrik Nordström wrote:

> Then you may use a URL rewriter helper to strip the query part from the
> URL.

I want to avoid starting rewriters, because I am running on an embedded box 
with only 256 MB RAM (but a 32 GB SSD).

> Or if using a Squid version with store url rewrite support (2.7) then
> tell Squid to ignore that query part for caching purposes but still
> forward it on cache misses.

I am using 3.1.3.
It seems to cache Google Maps without changes. Has that been built in?
I have seen some examples to support Google Maps and YouTube, but they seem 
to be for older versions than mine!?

How do I tell squid to ignore that query part?

Thanks!

David



Re: [squid-users] Skype block

2010-06-22 Thread Riccardo Castellani

>> I'm reading about methods to block users from using Skype. Can you
>> confirm the only way is to deny access directly to all IP addresses
>> when the 'CONNECT' (SSL) method is used?
>
> That is the preferred way; you should never allow an HTTPS connection
> to an unknown site.

For example: in my company I have an http server in the DMZ, where some
applications access it by IP address; I'm sure it's a known site because
it's mine.

>> In this way people cannot access a specific site directly using the IP
>> instead of the FQDN!
>
> Only those HTTPS connections to sites using an IP address instead of
> the canonical name on the x509 certificate, which is the recommended
> way.

I don't understand, but you confirm I can access sites ONLY by FQDN?

>> Can I restrict Skype access in another way to avoid this behaviour?
>
> Yes, apply an enterprise policy for software usage :)

;)



Re: [squid-users] Skype block

2010-06-22 Thread Jorge Armando Medina
Riccardo Castellani wrote:
> I'm reading about methods to block users from using Skype. Can you
> confirm the only way is to deny access directly to all IP addresses
> when the 'CONNECT' (SSL) method is used?
That is the preferred way; you should never allow an HTTPS connection to
an unknown site.
> In this way people cannot access a specific site directly using the IP
> instead of the FQDN!
Only those HTTPS connections to sites using an IP address instead of the
canonical name on the x509 certificate, which is the recommended way.

> Can I restrict Skype access in another way to avoid this behaviour?
Yes, apply an enterprise policy for software usage :)


-- 
Jorge Armando Medina
Computación Gráfica de México
Web: http://www.e-compugraf.com
Tel: 55 51 40 72, Ext: 124
Email: jmed...@e-compugraf.com
GPG Key: 1024D/28E40632 2007-07-26
GPG Fingerprint: 59E2 0C7C F128 B550 B3A6  D3AF C574 8422 28E4 0632



[squid-users] Skype block

2010-06-22 Thread Riccardo Castellani
I'm reading about methods to block users from using Skype. Can you confirm 
the only way is to deny access directly to all IP addresses when the 
'CONNECT' (SSL) method is used?
In this way people cannot access a specific site directly using the IP 
instead of the FQDN!

Can I restrict Skype access in another way to avoid this behaviour?







[squid-users] Re: Squid Concerns

2010-06-22 Thread Superted666

Hello,

Thanks so much again, nearly there now!
That gets me through to my webserver BUT squid doesn't appear to be sending
me to the correct address.

Apache is set to serve as a virtualhost so looks out for the URL, it looks
like squid is stripping this URL before it's passed onto apache. Then apache
moans and gives a 404.

78.109.177.139 TCP_MISS/404 689 GET http://www.f1fanatic.co.uk/ -
FIRST_UP_PARENT/127.0.0.1 text/html

Make sense?


Re: [squid-users] MSN sniff over the squid server

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 14:05 -0500, Juan Cardoza wrote:
> Do you know if there is a way to sniff the MSN packets through the squid
> server?
> At this moment the MSN is working through the squid server.
> 
> Kind regards


wireshark is a quite nice sniffing tool for inspecting any kind of
traffic, proxied or not.

Regards
Henrik



Re: [squid-users] how to cache podcasts with '?' in URL using squid?

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 11:56 -0700, David Wetzel wrote:
> Hi,
> 
> is really nobody caching podcasts with squid?
> 
> The URLs in the XML look like
> 
> http://podfiles.zdf.de/podcast/zdf_podcasts/100621_hjo_p.mp4?2010-06-21+21-39
> 
> The part after the '?' is useless as far as I can tell. 

Then you may use a URL rewriter helper to strip the query part from the
URL.

Or if using a Squid version with store url rewrite support (2.7) then
tell Squid to ignore that query part for caching purposes but still
forward it on cache misses.
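
In 2.7 that is wired up roughly like this (a sketch; the helper path is
made up):

  storeurl_rewrite_program /usr/local/bin/store_url_strip.pl
  storeurl_access allow all

where the helper reads request lines on stdin (URL first) and prints the
URL Squid should use as the cache key, e.g.:

  #!/usr/bin/perl
  $| = 1;                # unbuffered output, required for helpers
  while (<STDIN>) {
      my ($url) = split; # first whitespace-separated field is the URL
      $url =~ s/\?.*$//; # drop the query part for the store key
      print "$url\n";
  }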

Regards
Henrik



[squid-users] MSN sniff over the squid server

2010-06-22 Thread Juan Cardoza
Do you know if there is a way to sniff the MSN packets through the squid
server?
At this moment the MSN is working through the squid server.

Kind regards



Re: [squid-users] how to cache podcasts with '?' in URL using squid?

2010-06-22 Thread David Wetzel
Hi,

Is really nobody caching podcasts with squid?

The URLs in the XML look like

http://podfiles.zdf.de/podcast/zdf_podcasts/100621_hjo_p.mp4?2010-06-21+21-39

The part after the '?' is useless as far as I can tell. 

d...@hilly>squidclient 
"http://podfiles.zdf.de/podcast/zdf_podcasts/100621_hjo_p.mp4?2010-06-21+21-39";
HTTP/1.0 200 OK
Age: 147
Accept-Ranges: bytes
Date: Tue, 22 Jun 2010 18:49:21 GMT
Content-Length: 99123080
Content-Type: video/mp4
Server: Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13
Last-Modified: Mon, 21 Jun 2010 21:37:26 GMT
ETag: "63126287-5e87f88-1c5d9180"
X-Cache: MISS from hilly
Via: 1.0 hilly (squid/3.1.3)
Proxy-Connection: close

()

If I leave out the part after the '?', I get

d...@hilly>squidclient 
"http://podfiles.zdf.de/podcast/zdf_podcasts/100621_hjo_p.mp4";
HTTP/1.0 200 OK
Age: 0  <--- fishy right?
Accept-Ranges: bytes
Date: Tue, 22 Jun 2010 18:54:15 GMT
Content-Length: 99123080
Content-Type: video/mp4
Server: Apache/2.2.3 (Debian) PHP/5.2.0-8+etch13
Last-Modified: Mon, 21 Jun 2010 21:37:26 GMT
ETag: "63126287-5e87f88-1c5d9180"
X-Cache: MISS from hilly
Via: 1.0 hilly (squid/3.1.3)
Proxy-Connection: close


Do you need more information?

Thanks!

David

On 19.06.2010 at 19:58, David Wetzel wrote:

> Hi,
> 
> I want to cache the video files linked in
> 
> http://content.zdf.de/podcast/zdf_hjo/hjo.xml
> 
> for 24 hours on squid.
> (So that multiple local users can get the file without the need to get it 
> over the internet again)
> 
> I was trying several ways suggested on the web, but it does not seem to work.
> 
> maximum_object_size is 15 KB
> I disabled all lines containing a "?"
> 
> Any hints? I am using squid-3.1.3 from pkgsrc on NetBSD.
> 
> Thanks!
> 
> David


Re: [squid-users] NTLM authentication pass-through to upstream proxy

2010-06-22 Thread Henrik Nordström
Thu 2010-06-10 at 09:31 +0100, Jeff Silver wrote:

> If the pinning/unpinning in Squid is dependent on the hostname in the 
> request, then this might 
> explain what I'm seeing.

It should only be dependent on the host name when not forwarded via
another proxy.

Regards
Henrik



Re: [squid-users] Re: Squid Concerns

2010-06-22 Thread Henrik Nordström

Tue 2010-06-22 at 11:09 -0700, Superted666 wrote:

> # And finally deny all other access to this proxy
> http_access allow all

One culprit is here: you allow the whole world to do pretty much whatever
they like via your proxy.

What you should have is an acl listing your web sites, allowing only
that.

acl port80 port 80
acl mysites dstdomain your.website.domain
http_access allow port80 mysites

Followed by a deny all, as the comment says:

http_access deny all


> http_port 77.92.76.176:80 transparent 

The other culprit is here. You have configured your proxy as a
transparently intercepting LAN->Internet proxy, while your actual use is
as a reverse proxy / accelerator in front of your web server (Internet
-> webserver).

Should read

http_port 77.92.76.176:80 accel defaultsite=your.website.domain

In addition you need a cache_peer line telling Squid how to contact the
actual web server.

cache_peer 127.0.0.1 parent 80 0 originserver

http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

Regards
Henrik



Re: [squid-users] New to "Squid.conf", basic help

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 23:33 +0530, Parshwa Murdia wrote:

> 1. I am having Windows XP with Fedora 11, but as you say to upgrade, I
> would; but as all the things are right now in FC11, it would take me
> much time to alter all the things. But in the coming future I would.
> Though it could be upgraded via yum, I don't know that much about
> Linux yet; I have just started.

That's fine, as long as you don't forget that the Fedora version you are
using is outside any support.

> 2. As you recommend configuring a cache in squid.conf (look for
> cache_dir): how to configure it I don't know. It would not take you
> much time to write the steps.

The only change you minimally need to make there is to decide on the
size of the cache. Just look for the directive; there is an example
ready for you.
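
The commented-out example looks something like this; the number after the
directory is the cache size in megabytes, so adjust it to taste:

  cache_dir ufs /var/spool/squid 1000 16 256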

> 3. The security aspects I am interested in:
> 
> i). Anonymous web surfing.

Which is not very easy to accomplish with a proxy, compared to the
Firefox safe browsing mode.
> ii). IP address is not revealed.

Ok. But NAT accomplishes the same thing.

> iii). No malware, virus or any such threat.

For that you need a virus scanner. Squid can use clamav (via c-icap).
But look into that once you have the basic setup completed. c-icap is
not yet packaged for Fedora, but I intend to get that done for Fedora 14.

Regards
Henrik



Re: [squid-users] Cannot Access SSL sites

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 14:48 -0400, Jorge Perez wrote:
> Hello, we are using transparent squid, but we can't access sites that use
> SSL, like gmail, banks etc.
> 
> But if we set the proxy on the web browser it works fine.

In transparent mode SSL traffic does not go via Squid. Instead you need to
NAT/masquerade this traffic, enabling clients direct access to Internet
port 443.
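
A rough sketch for your iptables script (assuming eth0 is your
Internet-facing interface, with eth2 and 192.168.169.0/24 as in your
config):

  iptables -A FORWARD -i eth2 -p tcp --dport 443 -j ACCEPT
  iptables -t nat -A POSTROUTING -s 192.168.169.0/24 -o eth0 -j MASQUERADE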

Regards
Henrik



[squid-users] Re: Squid Concerns

2010-06-22 Thread Superted666

Thank you for the prompt response!

Below is the squid config, with most of the guff trimmed out.

Thanks


#Recommended minimum configuration:
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
#http_access deny CONNECT !SSL_ports
#http_access deny CONNECT

http_access allow localnet

# And finally deny all other access to this proxy
http_access allow all

htcp_access allow localnet
htcp_access deny all
http_port 77.92.76.176:80 transparent 

-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Concerns-tp2264334p2264547.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid is not aware of logged and anonymous users

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 19:18 +0200, Daniel Gomez wrote:
> Good afternoon everyone,
> 
> I'm using Squid in front of Zope/Plone. Since my main pages
> (Homepage,...) are quite static I want to cache them for Anonymous
> users, but not for logged-in users. I am using the policies:
> 
> - Anonymous: Cache in proxy for 24 hours (tested with ETag header and without)
> - Logged user: Cache in the browser with ETag

You also need Vary in that mix, telling caches which information your
web server used to decide if the request is anonymous or logged in.
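
For example, the cacheable anonymous responses would then carry headers
along these lines (a sketch matching your 24-hour policy):

  Cache-Control: public, max-age=86400
  Vary: Cookie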

Generally speaking, cookie authentication works very badly with caches.
This is because the response then varies on the Cookie header, and if
your anonymous visitors carry any session-like cookies (i.e. Google
AdSense trackers, old session cookies etc.), even when not logged in,
then things go very bad, as pretty much every user is then unique to the
cache even if your server faithfully responds with nice ETags. This is
because Squid does not know which ETag matches which Cookie header
combination before asking your server.

A better design is to use https:// for authenticated access and http://
for anonymous access. In addition to solving the problem it also
increases the security of the authenticated users' login credentials.

Then, in addition, I would strongly recommend using HTTP Digest
authentication instead of form-based cookie authentication for
authenticated access. If properly implemented, your authenticated users'
passwords are reasonably secure even if your site gets hacked.

Regards
Henrik



Re: [squid-users] DNS Issues with Squid

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 12:29 -0400, Anushan Rajakulasingam wrote:
> Hello,
> I'm experimenting with squid to lower overall bandwidth usage for
> about 500 users. I've implemented squid and squidguard and blocked
> porn and warez sites successfully, but I'm running into difficulties
> with the local intranet websites, or any internal websites. (Note: I
> am testing this currently so no users are affected :P )
> Upon going to any internal websites I receive the following error:
> > While trying to retrieve the URL: http://test/

That should be http://test.yourinternal.domain/ as published in your
intranet DNS.

Or alternatively (but not recommended) you can set "dns_defnames on" in
squid.conf to have Squid search the DNS search path to find the host.
But be warned that this may have a negative impact on internet surfing
performance.

> I've configured my resolv.conf with the respective dns servers...

What do you mean by "respective DNS servers"? The configured DNS servers
all need to be able to resolve all names. It does not work to have one
DNS for external and one DNS for internal names.

Regards
Henrik



[squid-users] Cannot Access SSL sites

2010-06-22 Thread Jorge Perez
Hello, we are using transparent squid, but we can't access sites that use 
SSL, like gmail, banks etc.

But if we set the proxy on the web browser it works fine.

here is my squid.conf:
---

http_port 192.168.169.3:3128 transparent
cache_dir ufs /usr/local/squid/var/cache 250 16 256
cache_effective_user squid
cache_effective_group squid
access_log /usr/local/squid/var/logs/access.log squid

acl localnet src 192.168.169.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl all src 0.0.0.0/0.0.0.0
acl SSL_ports port 443 563
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#### SITIOS BLOKEADOS (blocked sites) ####
acl restobb src 192.168.169.1-192.168.169.129
acl sucky_urls dstdomain .facebook.com .twitter.com .doubleclick.com 
.fotolog.com .warez-bb.org .fotolog.cl .chilewarez.org .rapidshare.com 
.megaupload.com .rapidshare.de .medi$
deny_info http://www.trabajoweb.cl/error.html sucky_urls
http_access deny restobb sucky_urls
#### NO DESCARGAS (no downloads) ####
acl resto src 192.168.169.1-192.168.169.29/32
acl descargas_negadas url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip 
.rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov .torrent
deny_info http://www.trabajoweb.cl/error.html descargas_negadas
http_access deny resto descargas_negadas
#### SITIOS CASI BLOKEADOS (almost-blocked sites) ####
acl restobb2 src 192.168.169.130-192.168.169.149
acl sucky_urls2 dstdomain .doubleclick.com .fotolog.com .warez-bb.org 
.fotolog.cl .chilewarez.org .rapidshare.com .megaupload.com .rapidshare.de 
.mediafire.com .depositfiles.co$
deny_info http://www.trabajoweb.cl/error.html sucky_urls2
http_access deny restobb2 sucky_urls2

http_access allow CONNECT SSL_ports
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost
http_access deny all
##
http_reply_access allow localnet
http_reply_access deny all
#
#REGLAS DESCARGAS
acl normales src 192.168.169.30-192.168.169.129/32
acl tecnicos src 192.168.169.130-192.168.169.149/32
acl administrador src 192.168.169.150-192.168.169.189/32
acl estudio src 192.168.169.190-192.168.169.219/32
acl gerencia src 192.168.169.220-192.168.169.252/32
acl descargas url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi 
.mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .mov

delay_pools 5

delay_class 1 1
delay_parameters 1 10240/10485760 10240/10485760
delay_access 1 allow normales descargas
delay_access 1 deny all

delay_class 2 1
delay_parameters 2 30720/104857600  30720/104857600
delay_access 2 allow tecnicos descargas
delay_access 2 deny all

delay_class 3 1
delay_parameters 3 30720/104857600 30720/104857600
delay_access 3 allow administrador descargas
delay_access 3 deny all

delay_class 4 1
delay_parameters 4 -1/-1  -1/-1
delay_access 4 allow gerencia descargas
delay_access 4 deny all

delay_class 5 1
delay_parameters 5 10240/10240 10240/10240
delay_access 5 allow estudio descargas
delay_access 5 deny all
--

My Iptables rules:

echo "Aplicando reglas iptables..."
iptables -t nat -F
iptables -t nat -X
iptables -t nat -Z
iptables -F
iptables -X
iptables -Z

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth2 -s 192.168.169.0/24 -d ! 192.168.169.0/24 
-p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i eth2 -s 192.168.169.0/24 -d ! 192.168.169.0/24 
-p tcp --dport 80 -j DNAT --to 192.168.169.3:3128
iptables -A INPUT -i eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT

--

I hope that you guys can help me out.



-- 
Atte
Jorge Perez V.
Departamento de informática
Anexo: 359


[squid-users] Squid is not aware of logged and anonymous users

2010-06-22 Thread Daniel Gomez
Good afternoon everyone,

I'm using Squid in front of Zope/Plone. Since my main pages
(Homepage,...) are quite static I want to cache them for Anonymous
users, but not for logged-in users. I am using the policies:

- Anonymous: Cache in proxy for 24 hours (tested with ETag header and without)
- Logged user: Cache in the browser with ETag

So what happens is, when a user visits a main page as anonymous they get
the page from Squid, since other anonymous users have already visited
it before. For example, the obtained ETag for the page is:

Etag||Anonymous|318239|Plone Default|en|en-gb|...

Then the user logs into Plone but keeps seeing the same page as
anonymous. The ETag of the requested page is the same as the anonymous one.

If the user refreshes the browser, a "Cache-Control: max-age=0" is sent in
the request header and then the user can see the page as logged in. The
ETag changes to:

Etag|userid|Authenticated;Contributor;Editor;Manager;Member|318239|Plone
Default|en|en-gb|...

Thinking about the cookies, I've checked whether they are sent when a
request is performed, and they are. After logging in, when a different
page is requested, the new cookie is also sent in the header, but the
request NEVER reaches Plone; the page is served by Squid.

Cookie  LOCALIZER_LANGUAGE="en"; __ac="fRskMyzSPHK1YzQZ/+sTA5/775kgZ29tZXo="

What I deduce is that Squid is saving the pages with a cache key (URL
+ ETag), and it is not able to know if the user that requests the page
is logged in or anonymous, since no information about that is saved in
Squid's cache key.

* The same happens without using ETag on the Anonymous policy.


Does anyone have an idea how to solve this? Maybe adding some
additional information, like the cookie value, to the ETag?

Thanks,

Daniel G.A.


[squid-users] DNS Issues with Squid

2010-06-22 Thread Anushan Rajakulasingam
Hello,
I'm experimenting with squid to lower overall bandwidth usage for
about 500 users. I've implemented squid and squidguard and blocked
porn and warez sites successfully, but I'm running into difficulties
with the local intranet websites, or any internal websites. (Note: I
am testing this currently so no users are affected :P )
Upon going to any internal websites I receive the following error:

> While trying to retrieve the URL: http://test/
>
> The following error was encountered:
>
> Unable to determine IP address from host name for home
>
> The dnsserver returned:
>
> Name Error: The domain name does not exist.
>
> This means that:
>
> The cache was not able to resolve the hostname presented in the URL.
>
> Check if the address is correct.

I've configured my resolv.conf with the respective dns servers...


Summary

--- The following works ---
If I ping 'test' it pings fine from the squid box.
If I visit the website test.inside.domain.com it works fine.
If I visit http://test/ without the proxy it works fine.

--- The following doesn't work ---
If I go to http://test with the proxy it doesn't work.


Also, I was wondering: is there any way I could cache a live stream, such
as the World Cup broadcasts from various sites? Over the past few days
I have been noticing a lot of traffic usage.

Any and all help appreciated,
Thank you
Anushan Rajakulasingam


Re: [squid-users] Squid Concerns

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 08:53 -0700, Superted666 wrote:
> The access.log shows entries from a Chinese IP every second or so; below is
> an extract of the hits I'm seeing. 
> 
> 1277220997.529   1187 124.31.204.10 TCP_MISS/200 102 CONNECT
> 205.188.251.43:443 - DIRECT/205.188.251.43 - 

That's bad indeed.

What http_port and http_access directives do you have in squid.conf?

You should only have one http_port directive listening to a public IP,
and it should be configured for reverse proxying.

  http_port publicip:80 accel defaultsite=your.web.domain

make sure you DO NOT have another proxy http_port directive also
listening on the public IP such as

  http_port 3128

In addition, configure your http_access rules to only allow access to
content on your site.

Regards
Henrik



[squid-users] Squid Concerns

2010-06-22 Thread Superted666

Hello, 

I'm testing out using Squid sitting in front of a WordPress installation. 
WordPress sometimes isn't the fastest, so I thought I'd try Squid to serve
the site's static content. 

The site is quite popular, so I have set Squid to run on a separate IP which
I hit through hosts file entries. Squid then acts as a transparent proxy and
sends the request to Apache over the loopback address. 

Now this is working fine; the problem I have is that I believe it may be
being abused, which is likely due to some misconfiguration I have made. 

The access.log shows entries from a Chinese IP every second or so; below is
an extract of the hits I'm seeing. 

1277220997.529   1187 124.31.204.10 TCP_MISS/200 102 CONNECT
205.188.251.43:443 - DIRECT/205.188.251.43 - 
1277221000.132   1190 124.31.204.10 TCP_MISS/200 102 CONNECT
64.12.202.116:443 - DIRECT/64.12.202.116 - 
1277221002.730   1188 124.31.204.10 TCP_MISS/200 102 CONNECT
205.188.251.43:443 - DIRECT/205.188.251.43 - 
1277221004.336362 95.25.187.148 TCP_MISS/200 103 CONNECT 64.12.202.8:443
- DIRECT/64.12.202.8 - 
1277221005.274   1188 124.31.204.10 TCP_MISS/200 102 CONNECT
64.12.202.116:443 - DIRECT/64.12.202.116 - 

Now, they are not legitimately using the site; it looks to me like they're
using it to connect to other sites. 

What do you guys think? I can post the config if you like.


Re: [squid-users] Streaming MMS

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 15:39 +0200, ML Alasta wrote:

> Sorry but I don't find a response, or it's an old response.
> I have a RH server with Squid 3.0 and the MMS streaming doesn't work!
> Does Squid support MMS streaming, or is it my configuration?

Squid supports MMS streaming over HTTP only. For other streaming methods
you need to use NAT or similar to provide full Internet network access
to the client.

Regards
Henrik



Re: [squid-users] New to "Squid.conf", basic help

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 16:29 +0530, Parshwa Murdia wrote:

> I am using FC11.

Please upgrade. Fedora 11 is end-of-life and no further updates will be
seen in F11.

> Squid not activated (only installed via yum).

This is normal. After installation you need to start the service.

   service squid start

And you also need to configure your web browser to use Squid as proxy.

Additionally I would recommend configuring a cache in squid.conf (look
for cache_dir).

> Installed squid only for the desktop single PC and is to be used for
> the single PC only. The main reasons to implement it is:
> 
> 1. Speed improvement (or bandwidth improvement)
> 2. For security purposes, so that intruders or trespassers cannot keep an eye.
> 
> How could it be activated and implemented in FC11?

For a single user the speed improvements from a shared cache such as
Squid are quite small. You already have a cache in your browser.

And in terms of security the benefits are also somewhat limited. It
depends on which security aspects you want to address, but in general the
"Safe browsing" mode of Firefox adds a lot more security.

Regards
Henrik



[squid-users] About proxy_auth acl

2010-06-22 Thread Alberto Cappadonia

Hi,

I've a question about proxy_auth acl.

if I've an acl list like the following

acl friends proxy_auth mary jane carl
acl target dst 10.0.0.1

http_access friends allow
http_access target deny

What happens when mary contacts 10.0.0.1? Always allow?

If "http_access friends allow" is evaluated to true, is the second also 
checked?


I mean, is the proxy_auth acl considered by squid like the other acls, or is
it evaluated only the first time and when the timeout expires?

Is there some doc explaining the state-chart of the entire 
authentication scheme?


Thanks in advance
Regards
Alberto





[squid-users] Streaming MMS

2010-06-22 Thread ML Alasta
Hi,

Sorry but I don't find a response, or it's an old response.
I have a RH server with Squid 3.0 and the MMS streaming doesn't work!
Does Squid support MMS streaming, or is it my configuration?

Thank you.
Best regards,

Samuel


[squid-users] New to "Squid.conf", basic help

2010-06-22 Thread Parshwa Murdia
First of all, I installed squid (SQUID 3.0.STABLE25) via yum as:

su

yum install squid*
yum list squid
cd /etc/squid
ls

The output of the last command ('ls') is:

cachemgr.conf  mime.conf  squid.conf.default
cachemgr.conf.default  mime.conf.default  squidGuard.conf
errors msntauth.conf  squidGuard.conf.rpmnew
icons  msntauth.conf.default
mib.txt            squid.conf

I am using FC11.
Squid is not activated (only installed via yum).

I installed squid only for a single desktop PC, and it is to be used for
that single PC only. The main reasons to implement it are:

1. Speed improvement (or bandwidth improvement)
2. For security purposes, so that intruders or trespassers cannot keep an eye.

How could it be activated and implemented in FC11?


Re: [squid-users] Active/Backup Squid cluster

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 16:13 +0600, Eugene M. Zheganin wrote:

> Actually I have 2 boxes with N services. And their state can easily be
> "N running on A and M running on B, with P total services".

I'd use N VIPs, one per service. Unless there are dependencies between
services requiring a group of services to be on the same node.

Regards
Henrik



Re: [squid-users] Active/Backup Squid cluster

2010-06-22 Thread Eugene M. Zheganin

Hi.

On 22.06.2010 15:01, Henrik Nordström wrote:

> So your CARP setup forgot to monitor the status of Squid when
> determining which node should be master.
So, in the general case, life is more complicated than the "one box - one 
service" model.
Actually I have 2 boxes with N services. And their state can easily be 
"N running on A and M running on B, with P total services".


Eugene.


Re: [squid-users] Active/Backup Squid cluster

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 09:53 +0600, Eugene M. Zheganin wrote:
> Hi.
> 
> On 21.06.2010 23:12, Henrik Nordström wrote:
> >> However, this doesn't solve the service outage, which I have to handle
> >> manually, for example raising the priority on the backup node.
> >>  
> > What kind of service failure do you need manual action?
> >
> In this case - squid crash.

So your CARP setup forgot to monitor the status of Squid when
determining which node should be master.

A simple monitor which "ifconfig carp0 down" when Squid is not running
and "ifconfig carp0 up" when found running should be sufficient.

> Last time I saw heartbeat - it was using some script stuff to set/unset 
> ip aliases on a node interface. This is kinda... weird.

Linux heartbeat runs in userspace. No need to clutter the kernel with
this kind of functionality.

Regards
Henrik



Re: [squid-users] empty basic/digest realm

2010-06-22 Thread Henrik Nordström
Tue 2010-06-22 at 00:22 +0200, Khaled Blah wrote:
> That's not completely true. RFC 2617 states that the realm of either
> digest/basic auth is a quoted string but it doesn't say that this
> string has to have a minimum number of characters.

True, but it is clearly not the intention that this should be empty.

I asked why you want to use an empty realm.

Regards
Henrik