Re: [squid-users] Squid 3.1.6 tracking image ?

2010-08-17 Thread Amos Jeffries

John Michaels wrote:

Hello everyone.


First of all, let me begin by thanking the developer team for their hard work ... 
I've been using squid to improve network performance for a small network (~200 systems) for some years.


Recently, I've upgraded to 3.1.6 (from the Gentoo portage) and I was ... 
unpleasantly surprised to discover that the
CSS used to generate error pages (errorpage.css) contains a reference to 
'http://www.squid-cache.org/Artwork/SN.png'.

While I agree that the new error page looks better, I find it an odd choice to 
include an absolute URL to an external
site. Not only is this generating additional load on the squid-cache.org site, 
but it also makes every browser that
encounters an error download this .PNG, possibly transmitting user agent and 
other identifying information.

If this topic has already been discussed, please direct me to the relevant 
thread.
If not, then I would like to hear your opinions/comments.


It has been mentioned. Please be assured we do intend or use it as a 
tracker.


The image provided has a long caching time to push it out as far 
towards the client as possible. If working correctly, your Squid should be 
able to cache it on the first error and display its cached version to all 
following clients.


It's pulled in via the CSS config file installed in your /etc/squid 
directory and fully editable to remove or replace the branding if you 
desire.
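One possible way to do that (the path is an assumption; adjust it to wherever 
your package installed errorpage.css, and keep a backup) is to blank the remote 
image URL so browsers never fetch it:

  # Blank the remote logo reference in the stock error-page stylesheet.
  sed -i 's|http://www\.squid-cache\.org/Artwork/SN\.png||g' /etc/squid/errorpage.css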


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


Re: [squid-users] Squid 3.1.6 tracking image ?

2010-08-17 Thread John Michaels

On Tue, 17 Aug 2010 18:20:32 +1200
Amos Jeffries squ...@treenet.co.nz wrote:

 It has been mentioned. Please be assured we do intend or use it as a 
 tracker.

Could you please point me to the discussion?
I think there is a typo in the second line ... it reads like you _do_ intend to 
use it as a tracker. 

 The image provided has a long caching time to push it out as far 
 towards the client as possible. If working correctly, your Squid should be 
 able to cache it on the first error and display its cached version to all 
 following clients.

It sounds like a (new) install tracker, in that case. 

 It's pulled in via the CSS config file installed in your /etc/squid 
 directory and fully editable to remove or replace the branding if you 
 desire.

I've already edited it ... In my opinion it is wrong as a default, 
as it adds another "oh, edit *that*" item to the network 
administrator's tasks on every upgrade/install.

Thank you for your answer and sorry if I seem to make a big deal out of nothing 
...


[squid-users] Connection lost in browser based curriculum

2010-08-17 Thread DanC

List,
I have recently set up Squid for the first time.  I work for a small school
and our goal is to use a machine running Squid, Dan's Guardian, Shorewall
and a few other things to make an effective filter to protect our students
and keep our parents happy.  So far, everything works wonderfully with Squid
proxying transparently.  Everyone can get where they need to go and not get
where they shouldn't go.  In general we are quite happy with this setup.

We do have one problem though.  We use a browser based curriculum on web
servers somewhere 2000 miles away from us.  These servers require a constant
connection to the browser, apparently to prevent cheating.  When the
workstations are connected through my squid box, they give "Connection lost"
errors after 5-10 minutes even though I can continuously ping the whole time
without any dropped packets.  Connecting to the internet directly through
our old firewall works fine and the connections don't get lost.

So far I have tried using "cache" and "always_direct" to fix my symptoms,
but have been unsuccessful.  Does anyone know what I might be missing? 
Thanks in advance.

Daniel
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Connection-lost-in-browser-based-curriculum-tp2327858p2327858.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Connection lost in browser based curriculum

2010-08-17 Thread Amos Jeffries

DanC wrote:

List,
I have recently set up Squid for the first time.  I work for a small school
and our goal is to use a machine running Squid, Dan's Guardian, Shorewall
and a few other things to make an effective filter to protect our students
and keep our parents happy.


Good luck. The cynic in me says "pick any two", the other one is not 
really happening.



 So far, everything works wonderfully with Squid
proxying transparently.  Everyone can get where they need to go and not get
where they shouldn't go.  In general we are quite happy with this setup.

We do have one problem though.  We use a browser based curriculum on web
servers somewhere 2000 miles away from us.  These servers require a constant
connection to the browser, apparently to prevent cheating.  When the


If so, they are wrong. A standard connection has nothing to do with 
identification of the individuals using it. Simply by using Squid you have 
broken such tracking and will be pushing requests from all your active 
students, in an overlapping random manner, down a much smaller number of 
server connections.


But there's not much you can do about that, nor much reason to care either.


workstations are connected through my squid box, they give "Connection lost"
errors after 5-10 minutes even though I can continuously ping the whole time
without any dropped packets.  Connecting to the internet directly through
our old firewall works fine and the connections don't get lost.

So far I have tried using cache and always_direct to fix my symptoms,


"cache" only really sets stricter-than-normal boundaries on things not 
to be stored.


always_direct only prevents cache_peer entries being used to fetch data.

but have been unsuccessful.  Does anyone know what I might be missing? 


These are what you need to be looking at, in order of relevance to your 
usage:


http://www.squid-cache.org/Doc/config/server_persistent_connections/
http://www.squid-cache.org/Doc/config/persistent_connection_after_error/
http://www.squid-cache.org/Doc/config/client_persistent_connections/
http://www.squid-cache.org/Doc/config/pconn_timeout/
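As a rough illustration only (the directive names are the real ones from the 
links above; the values are guesses that would need tuning for this site), 
keeping persistent connections alive longer might look like:

  # keep persistent connections enabled on both sides
  client_persistent_connections on
  server_persistent_connections on
  # do not drop a persistent server connection just because one reply was an error
  persistent_connection_after_error on
  # let idle server connections linger longer than the default
  pconn_timeout 5 minutes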


Also, the latest Squid release you can use will be important. We 
are incrementally improving HTTP/1.1 support on an ongoing basis.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


Re: [squid-users] Error loading pdf behind squid

2010-08-17 Thread Amos Jeffries

Joseph L. Casale wrote:
Users need access to the PDFs in http://ccemc.ca/process/guidelines 
such as http://ccemc.ca/_uploads/CCEMC-166-Proposal-Guide6.pdf but in IE8 and
FF 3.6.8 the PDFs fail to render; without the proxy they seem to always load.

I have tried in squid-3.0.STABLE20 and squid-3.1.4 and the issue is the same.

Any known workarounds for this behavior? The config is nearly stock with the
exception of kerb auth params...

Thanks!
jlc


Some quick checks from here show no problem.

Can you provide a trace of the headers between the browser(s) and Squid 
please?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


[squid-users] maximum concurrent user limit

2010-08-17 Thread Ozan UÇAR

Hello Squid Users,

I have been on the lookout for a solution to limit the number of users 
allowed to connect to the Internet. What I am looking for is not 
maxconn. I do not wish to limit the number of total connections per 
client. A client can have as many connections as s/he wishes. Say, I 
want to limit Internet access to 20 users. I have been reading the 
manuals and googling for a while now, with no luck. Anyone know of a way 
to achieve this?


Thanks in advance,




[squid-users] Log Files running out disk space

2010-08-17 Thread Nyamul Hassan
Hi,

One of our proxies died today because the log files were overwhelming:

-rw-r- 1 squid squid       61440 Aug 17 16:01 access.log
-rw-r- 1 squid squid   523366451 Aug 17 02:59 access.log.0
-rw-r- 1 squid squid   771658231 Aug 17 00:00 access.log.1
-rw-r- 1 squid squid   562853886 Aug 16 21:00 access.log.2
-rw-r- 1 squid squid   618221433 Aug 16 18:00 access.log.3
-rw-r- 1 squid squid   572403480 Aug 16 15:00 access.log.4
-rw-r- 1 squid squid   379977665 Aug 16 12:00 access.log.5
-rw-r- 1 squid squid   348474013 Aug 16 09:00 access.log.6
-rw-r- 1 squid squid   367307983 Aug 16 06:00 access.log.7
-rw-r- 1 squid squid   663904388 Aug 16 03:00 access.log.8
-rw-r- 1 squid squid   735110835 Aug 16 00:00 access.log.9
-rw-r- 1 squid squid 36715761664 Aug 17 16:01 cache.log
-rw-r- 1 squid squid 14262776941 Aug 17 03:00 cache.log.0
-rw-r- 1 squid squid      955445 Aug 17 00:00 cache.log.1
-rw-r- 1 squid squid      748262 Aug 16 21:00 cache.log.2
-rw-r- 1 squid squid     1069482 Aug 16 18:00 cache.log.3
-rw-r- 1 squid squid      698758 Aug 16 15:00 cache.log.4
-rw-r- 1 squid squid      497547 Aug 16 11:59 cache.log.5
-rw-r- 1 squid squid      271153 Aug 16 08:59 cache.log.6
-rw-r- 1 squid squid      355351 Aug 16 05:59 cache.log.7
-rw-r- 1 squid squid      759748 Aug 16 02:59 cache.log.8
-rw-r- 1 squid squid     1037802 Aug 15 23:59 cache.log.9

As you can see, those HUGE cache log files were filled up in less
than 12 hours.  Opening them up, I find they were filled with the
following lines, repeated over and over again:

2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument

And, that is the time from when it started.  Is there any way to
determine what is causing this?

Regards
HASSAN


Re: [squid-users] maximum concurrent user limit

2010-08-17 Thread John Doe
From: Ozan UÇAR m...@ozanucar.com

 I have been on the lookout for a solution to limit the number of users 
 allowed to connect to the Internet. What I am looking for is not maxconn. 
 I do not wish to limit the number of total connections per client. A client 
 can have as many connections as s/he wishes. Say, I want to limit Internet 
 access to 20 users. I have been reading the manuals and googling for a while 
 now, with no luck. Anyone know of a way to achieve this?

Don't you think limiting the bandwidth per user with delay pools would be 
better (more fair)...?
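For reference, a minimal class-3 delay-pool sketch (the subnet and the byte 
rates below are illustrative only, not taken from this thread) that caps each 
client inside a shared aggregate would be:

  acl lan src 192.168.0.0/16
  delay_pools 1
  delay_class 1 3
  delay_access 1 allow lan
  delay_access 1 deny all
  # aggregate ~4 Mbit/s, no per-/24 limit, ~64 KB/s per client after a 256 KB burst
  delay_parameters 1 512000/512000 -1/-1 65536/262144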

But if you really want to limit the number of users, maybe you could use an 
external acl that will take note of the src IPs (with a given ttl, so no ttl on 
the squid side).
If at a given time your list of IPs includes more than 20 IPs (different than 
the current one), deny...
Those IPs will be taken off the list when they reach their ttl.
You could implement a fifo stream so that when the first IP is taken out, the 
21st becomes the 20th and is accepted...
But that means they might receive only half of their webpage if the ttl expires 
in the middle...
So you could reset the ttl of an IP at each connection...
But, with this setup, these 20 users could block others' access forever if they 
are really active...
Anyway, I do not think it is a good idea...
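For completeness, the squid.conf side of that idea is only an external ACL 
hook; the helper itself (the path below is hypothetical) is the part that would 
have to implement the IP list and ttl logic described above:

  # %SRC hands the client IP to the helper, which answers OK or ERR
  external_acl_type user_slots ttl=60 negative_ttl=10 %SRC /usr/local/bin/limit_users.sh
  acl within_user_limit external user_slots
  http_access allow within_user_limit
  http_access deny all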

JD





[squid-users] Fwd: %path% in acl list squid 2.6

2010-08-17 Thread sushi squid
Hi there,

I am a newbie with squid ... my squid config file is giving a strange error.
My OS is Windows XP and the squid version is 2.6 STABLE.

In the acl permission list the path is as follows
acl goodsite url_regex -i %userprofile%/whitelist.txt

on starting squid the error is as follows :
strtokFile: %userprofile%/whitelist.txt not found
aclParseAclLine: WARNING: empty ACL: acl PERMESSE url_regex -i
%userprofile%/whitelist.txt

I tried '/' and '\' with no success ... I want to use the %path% variable ... how
do I do that?


Re: [squid-users] maximum concurrent user limit

2010-08-17 Thread Ozan UÇAR

1. How do I reset the TTL of an IP?
2. Do you know of any example external ACLs that can be used to limit the 
number of users? After all, if there is a way to limit the number of users, 
that is what I'm looking for...


John Doe wrote:

From: Ozan UÇAR m...@ozanucar.com

  
I have been on the lookout for a solution to limit the number of users allowed 
to connect to the Internet. What I am looking for is not maxconn. I do not wish 
to limit the number of total connections per client. A client can have as many 
connections as s/he wishes. Say, I want to limit Internet access to 20 users. I 
have been reading the manuals and googling for a while now, with no luck. Anyone 
know of a way to achieve this?



Don't you think limiting the bandwidth per user with delay pools would be better 
(more fair)...?


But if you really want to limit the number of users, maybe you could use an 
external acl that will take note of the src IPs (with a given ttl, so no ttl on 
the squid side).
If at a given time your list of IPs includes more than 20 IPs (different than 
the current one), deny...

Those IPs will be taken off the list when they reach their ttl.
You could implement a fifo stream so that when the first IP is taken out, the 
21st becomes the 20th and is accepted...
But that means they might receive only half of their webpage if the ttl expires 
in the middle...

So you could reset the ttl of an IP at each connection...
But, with this setup, these 20 users could block others' access forever if they 
are really active...

Anyway, I do not think it is a good idea...

JD




Re: [squid-users] Fwd: %path% in acl list squid 2.6

2010-08-17 Thread John Doe
From: sushi squid sushi.sq...@gmail.com

 I am a newbie in squid ... my squid config file is giving some  strange error
 My OS is Windows XP and squid version is 2.6Stable
 In  the acl permission list the path is as follows
 acl goodsite url_regex -i  %userprofile%/whitelist.txt

Maybe I am wrong but I do not think squid will resolve your %userprofile% 
variable...

JD


  


Re: [squid-users] Log Files running out disk space

2010-08-17 Thread Amos Jeffries

Nyamul Hassan wrote:

Hi,

One of our proxies died today because the log files were overwhelming:

-rw-r- 1 squid squid   61440 Aug 17 16:01 access.log
-rw-r- 1 squid squid   523366451 Aug 17 02:59 access.log.0
-rw-r- 1 squid squid   771658231 Aug 17 00:00 access.log.1
-rw-r- 1 squid squid   562853886 Aug 16 21:00 access.log.2
-rw-r- 1 squid squid   618221433 Aug 16 18:00 access.log.3
-rw-r- 1 squid squid   572403480 Aug 16 15:00 access.log.4
-rw-r- 1 squid squid   379977665 Aug 16 12:00 access.log.5
-rw-r- 1 squid squid   348474013 Aug 16 09:00 access.log.6
-rw-r- 1 squid squid   367307983 Aug 16 06:00 access.log.7
-rw-r- 1 squid squid   663904388 Aug 16 03:00 access.log.8
-rw-r- 1 squid squid   735110835 Aug 16 00:00 access.log.9
-rw-r- 1 squid squid 36715761664 Aug 17 16:01 cache.log
-rw-r- 1 squid squid 14262776941 Aug 17 03:00 cache.log.0
-rw-r- 1 squid squid  955445 Aug 17 00:00 cache.log.1
-rw-r- 1 squid squid  748262 Aug 16 21:00 cache.log.2
-rw-r- 1 squid squid 1069482 Aug 16 18:00 cache.log.3
-rw-r- 1 squid squid  698758 Aug 16 15:00 cache.log.4
-rw-r- 1 squid squid  497547 Aug 16 11:59 cache.log.5
-rw-r- 1 squid squid  271153 Aug 16 08:59 cache.log.6
-rw-r- 1 squid squid  355351 Aug 16 05:59 cache.log.7
-rw-r- 1 squid squid  759748 Aug 16 02:59 cache.log.8
-rw-r- 1 squid squid 1037802 Aug 15 23:59 cache.log.9

As you can see, those HUGE cache log files were filled up in less
than 12 hours.  Opening them up, I find they were filled with the
following lines, repeated over and over again:

2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument

And, that is the time from when it started.  Is there any way to
determine what is causing this?


Start with the Squid version and the settings your http_port is 
configured with.


Then we check what it means. Google locates several similar reports, 
oddly clustered around August in each of the last few years.


Someone describes it thus: "The problem is however elsewhere, since it 
somewhere fails to obtain a socket (or has its socket destroyed by the 
kernel somehow) so that when it calls accept(2) on the socket it's not a 
socket any more."


Might be a SYN-flood DoS by that description. But your OS security 
should be catching such a thing before it gets near any internal 
software like Squid.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


Re: [squid-users] Log Files running out disk space

2010-08-17 Thread Nyamul Hassan
On Tue, Aug 17, 2010 at 17:03, Amos Jeffries squ...@treenet.co.nz wrote:

 Nyamul Hassan wrote:

 Hi,

 One of our proxies died today because the log files were overwhelming:

 -rw-r- 1 squid squid       61440 Aug 17 16:01 access.log
 -rw-r- 1 squid squid   523366451 Aug 17 02:59 access.log.0
 -rw-r- 1 squid squid   771658231 Aug 17 00:00 access.log.1
 -rw-r- 1 squid squid   562853886 Aug 16 21:00 access.log.2
 -rw-r- 1 squid squid   618221433 Aug 16 18:00 access.log.3
 -rw-r- 1 squid squid   572403480 Aug 16 15:00 access.log.4
 -rw-r- 1 squid squid   379977665 Aug 16 12:00 access.log.5
 -rw-r- 1 squid squid   348474013 Aug 16 09:00 access.log.6
 -rw-r- 1 squid squid   367307983 Aug 16 06:00 access.log.7
 -rw-r- 1 squid squid   663904388 Aug 16 03:00 access.log.8
 -rw-r- 1 squid squid   735110835 Aug 16 00:00 access.log.9
 -rw-r- 1 squid squid 36715761664 Aug 17 16:01 cache.log
 -rw-r- 1 squid squid 14262776941 Aug 17 03:00 cache.log.0
 -rw-r- 1 squid squid      955445 Aug 17 00:00 cache.log.1
 -rw-r- 1 squid squid      748262 Aug 16 21:00 cache.log.2
 -rw-r- 1 squid squid     1069482 Aug 16 18:00 cache.log.3
 -rw-r- 1 squid squid      698758 Aug 16 15:00 cache.log.4
 -rw-r- 1 squid squid      497547 Aug 16 11:59 cache.log.5
 -rw-r- 1 squid squid      271153 Aug 16 08:59 cache.log.6
 -rw-r- 1 squid squid      355351 Aug 16 05:59 cache.log.7
 -rw-r- 1 squid squid      759748 Aug 16 02:59 cache.log.8
 -rw-r- 1 squid squid     1037802 Aug 15 23:59 cache.log.9

 As you can see, those HUGE cache log files were filled up in less
 than 12 hours.  Opening them up, I find they were filled with the
 following lines, repeated over and over again:

 2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
 2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
 2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
 2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
 2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
 2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument

 And, that is the time from when it started.  Is there any way to
 determine what is causing this?

 Start with the Squid version and what settings your http_port are configured 
 with.

 Then we check for what it means. Google locates several requests, strangely 
 around August each year for the last few.

 Someone describes it thus: The problem is however elsewhere, since it 
 somewhere fails to obtain a socket (or has its socket destroyed by the kernel 
 somehow) so that when it calls accept(2) on the socket it's not a socket any 
 more.

 Might be a SYN-flood DoS by that description. But your OS security should be 
 catching such a thing before it gets near any internal software like Squid.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1

Squid 2.7STABLE9
http_port 3128 transparent

iptables is running, but no rules are there.

Regards
HASSAN


Re: [squid-users] maximum concurrent user limit

2010-08-17 Thread John Doe
From: Ozan UÇAR m...@ozanucar.com

 John Doe wrote:
  From: Ozan UÇAR m...@ozanucar.com
  I have been on the lookout for a solution to limit the number of users
  allowed to connect to the Internet. What I am looking for is not maxconn.
  I do not wish to limit the number of total connections per client. A client
  can have as many connections as s/he wishes. Say, I want to limit Internet
  access to 20 users. I have been reading the manuals and googling for a while
  now, with no luck. Anyone know of a way to achieve this?
  Don't you think limiting the bandwidth per user with delay pools would be
  better (more fair)...?
  But if you really want to limit the number of users, maybe you could use an
  external acl that will take note of the src IPs (with a given ttl, so no ttl
  on the squid side).
  If at a given time your list of IPs includes more than 20 IPs (different
  than the current one), deny...
  Those IPs will be taken off the list when they reach their ttl.
  You could implement a fifo stream so that when the first IP is taken out,
  the 21st becomes the 20th and is accepted...
  But that means they might receive only half of their webpage if the ttl
  expires in the middle...
  So you could reset the ttl of an IP at each connection...
  But, with this setup, these 20 users could block others' access forever if
  they are really active...
  Anyway, I do not think it is a good idea...
 1. How do I reset the TTL of an IP?

You have to handle it in your external helper program (so it depends on how you 
implement it: memcache, home-made fifo list, etc)...
You have to develop this program in whatever language you know (C, perl, 
python...).

 2. Do you know of any example external  ACLs that can be used to limit number 
of users? After all, if there is a way to  limit number of users, that is what 
I'm looking for..

If there were such a helper, I would have just pointed to it.  ^_^
Maybe others will point to one...
http://www.squid-cache.org/Doc/config/external_acl_type/

Out of curiosity, why do you seek such a limit...?

JD





Re: [squid-users] Log Files running out disk space

2010-08-17 Thread Amos Jeffries

Nyamul Hassan wrote:

On Tue, Aug 17, 2010 at 17:03, Amos Jeffries squ...@treenet.co.nz wrote:

Nyamul Hassan wrote:

Hi,

One of our proxies died today because the log files were overwhelming:

-rw-r- 1 squid squid   61440 Aug 17 16:01 access.log
-rw-r- 1 squid squid   523366451 Aug 17 02:59 access.log.0
-rw-r- 1 squid squid   771658231 Aug 17 00:00 access.log.1
-rw-r- 1 squid squid   562853886 Aug 16 21:00 access.log.2
-rw-r- 1 squid squid   618221433 Aug 16 18:00 access.log.3
-rw-r- 1 squid squid   572403480 Aug 16 15:00 access.log.4
-rw-r- 1 squid squid   379977665 Aug 16 12:00 access.log.5
-rw-r- 1 squid squid   348474013 Aug 16 09:00 access.log.6
-rw-r- 1 squid squid   367307983 Aug 16 06:00 access.log.7
-rw-r- 1 squid squid   663904388 Aug 16 03:00 access.log.8
-rw-r- 1 squid squid   735110835 Aug 16 00:00 access.log.9
-rw-r- 1 squid squid 36715761664 Aug 17 16:01 cache.log
-rw-r- 1 squid squid 14262776941 Aug 17 03:00 cache.log.0
-rw-r- 1 squid squid  955445 Aug 17 00:00 cache.log.1
-rw-r- 1 squid squid  748262 Aug 16 21:00 cache.log.2
-rw-r- 1 squid squid 1069482 Aug 16 18:00 cache.log.3
-rw-r- 1 squid squid  698758 Aug 16 15:00 cache.log.4
-rw-r- 1 squid squid  497547 Aug 16 11:59 cache.log.5
-rw-r- 1 squid squid  271153 Aug 16 08:59 cache.log.6
-rw-r- 1 squid squid  355351 Aug 16 05:59 cache.log.7
-rw-r- 1 squid squid  759748 Aug 16 02:59 cache.log.8
-rw-r- 1 squid squid 1037802 Aug 15 23:59 cache.log.9

As you can see, those HUGE cache log files were filled up in less
than 12 hours.  Opening them up, I find they were filled with the
following lines, repeated over and over again:

2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument
2010/08/17 02:33:11| comm_accept: FD 28: (22) Invalid argument
2010/08/17 02:33:11| httpAccept: FD 28: accept failure: (22) Invalid argument

And, that is the time from when it started.  Is there any way to
determine what is causing this?

Start with the Squid version and what settings your http_port are configured 
with.

Then we check for what it means. Google locates several requests, strangely 
around August each year for the last few.

Someone describes it thus: The problem is however elsewhere, since it somewhere 
fails to obtain a socket (or has its socket destroyed by the kernel somehow) so that when 
it calls accept(2) on the socket it's not a socket any more.

Might be a SYN-flood DoS by that description. But your OS security should be 
catching such a thing before it gets near any internal software like Squid.



Squid 2.7STABLE9
http_port 3128 transparent

iptables is running, but no rules are there.


One interesting thing I note is that you have your logs rotated every 3 
hours, except during the event. The Squid problem seems to be that 
something (possibly the accepting of connections) blocked the rotation from 
happening several times.


FWIW: Squid has a connection limiter to prevent more connections being 
opened than there are FD resources available on the system. There is an 
outside chance this limiter paused a great number of sudden connections 
which died off, and which at a later point got 'kicked' for acceptance but 
were already gone, generating that error.


Might be something else. I've cc'd Henrik who still maintains 2.7.

The 40GB size of logs seems to point at a DoS behind it all anyway.

Meanwhile, if it's still going I suggest finding some SYN-flood protection 
rules and adding them to iptables. See what changes with those in place.
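A common starting point (a sketch only; 3128 matches the http_port mentioned 
earlier, and the rates need tuning for your traffic) is to rate-limit new SYNs 
to the proxy port:

  # accept new connections to the proxy port at a bounded rate, drop the excess
  iptables -A INPUT -p tcp --dport 3128 --syn -m limit --limit 30/second --limit-burst 60 -j ACCEPT
  iptables -A INPUT -p tcp --dport 3128 --syn -j DROP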


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


Re: [squid-users] Restricting bandwidth usage through squid

2010-08-17 Thread Andrew Beverley
  I have been looking around for a howto on this. Numerous google searches
  have only lead me to half explanations, etc. Can anyone please point me
  to a nice howto on setting this up.
   
  Depending on what exactly you want to achieve, you could, of course,
  also use some of the tc traffic shaping facilities (assuming you are
  running *nix).
 
 I am using Ubuntu 10.4. Running squid 2.7 stable. We are trying to 
 restrict how much a particular group is downloading as well as 
 individuals in that group.

In that case you're better off using the built-in Squid functionality that
was mentioned in a previous list message.

Regards,

Andy





[squid-users] Error 101 network unreachable

2010-08-17 Thread Babelo Gmvsdm

Hi,

My squid server has a strange behaviour with one website: http://www.01net.com

when I do this search on google for instance: 7zip 01

the results sent by Google give me www.01net.com in first place, but when I try 
to click the link I get this error:

The following error was encountered while trying to retrieve the URL: 
http://www.01net.com/telecharger/windows/Utilitaire/compression_et_decompression/fiches/4035.html
   Connection to www.01net.com failed.   The system returned: (101) 
Network is unreachable
Whereas if I click on the link given in the error page, I reach the page I 
searched for!!
Right now it's the only website giving me this error, but I fear there will be 
many more later.
Thanks for helping me understand what's happening, and sorry for my terrible 
English!!

  

[squid-users] RE: EXTERNAL: Re: [squid-users] Feasibility - Squid as user-specific SSL tunnel (poor-man's V

2010-08-17 Thread Bucci, David G
 Squid *C* needs a cache_peer line for each separate certificate it 
 uses to contact Squid S.

Getting back to this, Amos.  Have roughed out the solution, but am now trying 
to layer in client certificates.  Again, we have multiple users/PC, but can 
guarantee that only one user will be on at a time (no concurrent logon and 
remote access sessions, e.g.).

I guess I'm not understanding how to make sure that the tunnel established 
between the squid instances (Client and Server) is authenticated with the 
user-specific certificate.  I had thought I would have to brute-force it -- 
e.g., have a known location for a user certificate, a cache-peer line that 
points at that known location, and on user login have that particular user's 
certificate copied to that known location, then restart Squid C.  But your 
mention of a cache-peer line per certificate implies there's a more elegant 
approach?

Can you explain the above -- if I put a cache-peer line, pointing to a 
user-specific certificate for each user on the PC, how does Squid know which 
one to use?  Does it somehow do it dynamically, based on the owning user of the 
process issuing the incoming request?

If I do have to brute-force it, do you know if the Windows version accepts env 
vars in squid.conf, e.g. %HOMEPATH%?  (may be a q. for Acme)  The concept 
being, rather than having a known location, writeable by all users, I could 
have a single cache-peer line that points to %HOMEPATH%/usercert.pem, run Squid 
on the PC not as a service, and have it started up as part of a user's logon 
(so the env var is picked up).

Thoughts?  Thank you, as always.


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, August 04, 2010 11:01 AM
To: squid-users@squid-cache.org
Subject: EXTERNAL: Re: [squid-users] Feasibility - Squid as user-specific SSL 
tunnel (poor-man's V

Bucci, David G wrote:
 Multiple users with per-user certificates just get multiple cache_peer 
 entries (one per user certificate) for Squid S.
 
 I'm sorry, can you explain that a bit more?  Do you mean Squid S would need 
 to have an entry ahead of time in squid.conf for each user, pointing to 
 something different for each user certificate that Squid C might try to use 
 to connect to it in 2-way SSL mode?
 

Sorry, I was not very clear.

Squid S only needs the CA which Squid C certificates are signed by (eg 
Verisign).

Squid *C* needs a cache_peer line for each separate certificate it uses to 
contact Squid S.


 If all the user certificates were issued by a valid CA (e.g., Verisign), why 
 would it not be enough for Squid S to have sslcafile|sslcapath point to CA 
 certs that the user certificates chain to (e.g., a CA cert for Verisign)?
 
 Or am I completely missing the point?
 
 Thx!
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Wednesday, August 04, 2010 6:21 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] RE: EXTERNAL: Re: [squid-users] Feasibility - 
 Squid as user-specific SSL tunnel (poor-man's V
 
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Tuesday, August 03, 2010 7:39 AM
 To: squid-users@squid-cache.org
 Subject: EXTERNAL: Re: [squid-users] Feasibility - Squid as user-specific 
 SSL tunnel (poor-man's V

 Bucci, David G wrote:
 Hi, all - about to play with an approach to something, and I was 
 hoping to bounce the idea off people here - pls let me know if that's 
 not strictly within bounds/intents of the mailing list (new here).
 This is close to the same concept as discussed here with a D.Veenker, 
 in an exchange in April/2010 -- but not quite the same.

 Is it possible to use Squid to create an ssh-tunnel effect, including 
 use of a client certificate?  This would be to layer in SSL and client 
 authentication, for applications and web servers for which (for 
 reasons I won't go into here) it's not possible to reconfigure/recode 
 to use SSL.
 snip
 One more comes to mind:  client apps wanting Squid to perform the SSL 
 wrapping need to send an absolute URL including protocol to Squid (ie 
 https://example.com/some.file).  They can do that over regular HTTP. 
 Squid will handle the conversion to HTTPS once it gets such a URL.

  In the case where you have a small set of domains that are pre-known somehow 
  there is an alternative setup which is much closer to a VPN than what you 
  are currently thinking.

   Consider two squid setup as regular proxies: Squid C where the client apps 
 connect and Squid S which does the final web server connection.

   Squid C gets configured with a parent cache_peer entry for Squid S with 
 the SSL options.

   The domain names which require the HTTPS link are forced (via never_direct 
 and cache_peer_access) to use the peer. Other requests are permitted to go 
 direct and maybe denied access through the peer.

 That is it.
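A bare-bones sketch of the Squid C side (hostnames, port and the domain list 
are placeholders, and the ssl/sslcert options assume a build with SSL support) 
would be:

   # on Squid C: send the listed domains through the TLS link to Squid S
   cache_peer squid-s.example.com parent 443 0 no-query ssl sslcert=/etc/squid/client-cert.pem
   acl tunneled dstdomain .example.org
   never_direct allow tunneled
   cache_peer_access squid-s.example.com allow tunneled
   cache_peer_access squid-s.example.com deny all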

 Multiple users with per-user certificates just get multiple cache_peer 
 

Re: [squid-users] Fwd: %path% in acl list squid 2.6

2010-08-17 Thread sushi squid
Thanks JD for the reply,
My problem is this ...
Imagine a system with three accounts:
1) Administrator
2) John
3) Sushi
I want the path in the config file to be such that
when John logs in he has a different block list and when Sushi logs in
a different block list is loaded.

This has to be done with a single installation of squid ...
any ideas ..???

On 8/17/10, John Doe jd...@yahoo.com wrote:
 From: sushi squid sushi.sq...@gmail.com

 I am a newbie in squid ... my squid config file is giving some  strange
 error
 My OS is Windows XP and squid version is 2.6Stable
 In  the acl permission list the path is as follows
 acl goodsite url_regex -i  %userprofile%/whitelist.txt

 Maybe I am wrong but I do not think squid will resolve your %userprofile%
 variable...

 JD






RE: [squid-users] ldap fallback not working

2010-08-17 Thread Joseph L. Casale
I think its a matter of username (Basic) vs dom...@username
(Kerberos).

You can test this by replacing the group lookup with a fake
external_acl_helper which logs the credentials passed to the group helper.
Doing a few requests through both auth mechanisms will show you what
difference the group helper sees.

Amos,
I made a simple perl script that takes STDIN and writes it to a
file in /var/log/squid that is owned by squid:squid and returns
OK, but it's not working. Either that, or I missed the error with ALL,9
(I didn't know which module to focus on). How does one get a helper
to log to cache.log like the included binaries do when you enable
debug in them?

Thanks!
jlc


Re: [squid-users] Squid blocks web page in port 7779

2010-08-17 Thread p3dRø
Hi Amos,

I have my proxy as another host in the network (with only one ethernet
card = eth0). The communication flow is:

Internet -- Router ADSL -- Firewall -- Squid -- PCs

What I mean by transparent is that all the hosts go through the proxy without
authentication and without anything being blocked yet. They don't know that
there is any proxy.

I reconfigured my config file and I have this now:

http_port 3128 intercept
cache_mem 100 MB
cache_dir ufs /var/spool/squid 150 16 256
acl red_local src 192.168.1.0/24
acl localhost src 127.0.0.1/32
acl all src all
http_access allow localhost
http_access allow red_local
acl SSL_ports port 443
acl SSL_ports port 7779
acl Safe_ports port 8080
acl Safe_ports port 80
acl Safe_ports port 7779
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
visible_hostname Squid


The log sends me this:

1282067264.181121 192.168.1.110 TCP_MISS/503 4218 GET
http://ww4.essalud.gob.pe:7779/acredita/ - DIRECT/ww4.essalud.gob.pe
text/html

Another debug:

[r...@squid]# squid -X
2010/08/17 13:02:52.092| command-line -X overrides: ALL,7
2010/08/17 13:02:52.092| CacheManager::registerAction: registering legacy mem
2010/08/17 13:02:52.092| CacheManager::findAction: looking for action mem
2010/08/17 13:02:52.092| Action not found.
2010/08/17 13:02:52.092| CacheManager::registerAction: registered mem
2010/08/17 13:02:52.092| CacheManager::registerAction: registering
legacy squidaio_counts
2010/08/17 13:02:52.092| CacheManager::findAction: looking for action
squidaio_counts
2010/08/17 13:02:52.092| Action not found.
2010/08/17 13:02:52.092| CacheManager::registerAction: registered
squidaio_counts
2010/08/17 13:02:52.092| CacheManager::registerAction: registering legacy diskd
2010/08/17 13:02:52.092| CacheManager::findAction: looking for action diskd
2010/08/17 13:02:52.092| Action not found.
2010/08/17 13:02:52.092| CacheManager::registerAction: registered diskd
2010/08/17 13:02:52.092| aclDestroyACLs: invoked
2010/08/17 13:02:52.092| ACL::Prototype::Registered: invoked for type src
2010/08/17 13:02:52.092| ACL::Prototype::Registered:yes
2010/08/17 13:02:52.092| ACL::FindByName 'all'
2010/08/17 13:02:52.092| ACL::FindByName found no match
2010/08/17 13:02:52.092| aclParseAclLine: Creating ACL 'all'
2010/08/17 13:02:52.092| ACL::Prototype::Factory: cloning an object
for type 'src'
2010/08/17 13:02:52.092| aclIpParseIpData: all
2010/08/17 13:02:52.092| aclIpParseIpData: magic 'all' found.
2010/08/17 13:02:52.092| aclParseAclList: looking for ACL name 'all'
2010/08/17 13:02:52.092| ACL::FindByName 'all'
2010/08/17 13:02:52.092| Processing Configuration File:
/etc/squid/squid.conf (depth 0)
2010/08/17 13:02:52.093| Processing: 'http_port 3128 intercept'
2010/08/17 13:02:52.093| http(s)_port: found Listen on Port: 3128
2010/08/17 13:02:52.093| http(s)_port: found Listen on wildcard
address: [::]:3128
2010/08/17 13:02:52.093| Starting Authentication on port [::]:3128
2010/08/17 13:02:52.093| Disabling Authentication on port [::]:3128
(interception enabled)
2010/08/17 13:02:52.093| Disabling IPv6 on port [::]:3128 (interception enabled)
2010/08/17 13:02:52.094| Processing: 'cache_mem 100 MB'
2010/08/17 13:02:52.094| Processing: 'cache_dir ufs /var/spool/squid 150 16 256'
2010/08/17 13:02:52.094| file_map_create: creating space for 16384 files
2010/08/17 13:02:52.094| -- 512 words of 4 bytes each
2010/08/17 13:02:52.094| Processing: 'acl red_local src 192.168.1.0/24'
2010/08/17 13:02:52.094| ACL::Prototype::Registered: invoked for type src
2010/08/17 13:02:52.094| ACL::Prototype::Registered:yes
2010/08/17 13:02:52.094| ACL::FindByName 'red_local'
2010/08/17 13:02:52.094| ACL::FindByName found no match
2010/08/17 13:02:52.094| aclParseAclLine: Creating ACL 'red_local'
2010/08/17 13:02:52.094| ACL::Prototype::Factory: cloning an object
for type 'src'
2010/08/17 13:02:52.094| aclIpParseIpData: 192.168.1.0/24
2010/08/17 13:02:52.094| aclIpParseIpData: '192.168.1.0/24' matched:
SCAN3-v4: %[0123456789.]/%[0123456789.]
2010/08/17 13:02:52.094| Ip.cc(517) FactoryParse: Parsed:
192.168.1.0-[::]/[:::::::ff00](/120)
2010/08/17 13:02:52.094| Processing: 'acl localhost src 127.0.0.1/32'
2010/08/17 13:02:52.094| ACL::Prototype::Registered: invoked for type src
2010/08/17 13:02:52.094| ACL::Prototype::Registered:yes
2010/08/17 13:02:52.094| ACL::FindByName 'localhost'
2010/08/17 13:02:52.094| ACL::FindByName found no match
2010/08/17 13:02:52.094| aclParseAclLine: Creating ACL 'localhost'
2010/08/17 13:02:52.094| ACL::Prototype::Factory: cloning an object
for type 'src'
2010/08/17 13:02:52.094| aclIpParseIpData: 127.0.0.1/32
2010/08/17 13:02:52.094| aclIpParseIpData: '127.0.0.1/32' matched:
SCAN3-v4: %[0123456789.]/%[0123456789.]
2010/08/17 13:02:52.094| Ip.cc(517) FactoryParse: Parsed:
127.0.0.1-[::]/[:::::::](/128)
2010/08/17 13:02:52.094| Processing: 'acl all src all'
2010/08/17 13:02:52.094| 

[squid-users] WCCP and parent authentication

2010-08-17 Thread Dean Weimer
I know when using squid as an intercept proxy it can't do authentication as the 
clients don't know it's there, but do any of you out there know if you can use 
it with a parent proxy that requires authentication?

The specific scenario I am considering is Squid in DMZ with WCCPv2 used in 
conjunction with a Cisco ASA 5520 firewall and an external (Websense filtering) 
proxy that requires authentication, both NTLM and basic authentication is 
supported.

Clients
   |
Cisco ASA5520 -WCCPv2- Squid 3.1.6 (In DMZ) -- Secondary Internet Connection -- 
Parent Proxy Service 
   |
Internet

We are currently using auto-detect, but continually keep running into 
applications that don't recognize auto-detect, or sometimes don't even have the 
ability to read a configuration script.  I am trying to come up with a way to 
alleviate the users' issues without losing our local cache, and keeping the 
HR and Legal departments happy by continuing to filter websites with content 
that some could find offensive, as well as blocking unsafe (malware/spyware) 
websites.
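For what it's worth, the squid.conf piece that usually comes up here is 
cache_peer's login= option: a fixed credential can be sent to the parent, while 
forwarding each client's own credentials (login=PASS) would require client 
authentication, which interception rules out. A hedged sketch (host, port and 
account are placeholders):

  cache_peer filter.example.com parent 8080 0 no-query default login=proxyaccount:secret
  never_direct allow all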


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] using squid to proxy from internal lan to remote ldaps

2010-08-17 Thread Derek Doucette
Hello:

I have a java-based web application that is sitting behind our DMZ.  A customer 
we host has an ldaps instance running which I am trying to connect to from 
the application server.  I was wondering if anyone has ever attempted to 
use squid to proxy ldaps requests to a remote site.  In the past the option has 
been to use a hide NAT on the network side to permit traffic through, one 
way, to the remote ldap server while still preventing anyone from connecting 
directly to the application server behind the DMZ.  Making use of something 
like squid could simplify our deployment process.



[squid-users] Re: Squid_kerb_ldap intermittently failing auth

2010-08-17 Thread Markus Moeller
Can you run both squid_kerb_ldap and squid_kerb_auth with -d? It should give 
a lot more detail to find out why it happens.


Markus

Mark deJong dejo...@gmail.com wrote in message 
news:aanlktikvdju6+ysywkdn7vxyzyts4rtdjgf7ccnzm...@mail.gmail.com...

Hello,
I'm having an issue with squid_kerb_auth. It seems not all proxy
requests are getting serviced. When falling back on NTLM the requests
come through fine.

My guess is subsequent GET requests made over Proxy_KeepAlive sessions
are not getting serviced. I confirmed this on a trace using Wireshark
where the client requests a page but Squid doesn't come back with an
answer. Is this a known issue?

I'm currently running squid3-3.1.6 and have seen this behavior both
with the included squid_kerb_auth and a separately compiled binary.

squid.conf follows:


http_port 8080
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
acl apache rep_header Server ^Apache
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st
"%{Referer}h" "%{User-Agent}h" %Ss:%Sh

access_log /var/log/squid/access.log combined



auth_param negotiate program /usr/libexec/squid/squid_kerb_auth -d  -s
HTTP/dc32-wgw01.nix.dom.lo...@ushs.dom.local
auth_param negotiate children 30
auth_param negotiate keep_alive on

auth_param ntlm program 
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 30
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
auth_param ntlm use_ntlm_negotiate on

external_acl_type AD_US_TEMPS ttl=3600  negative_ttl=3600  %LOGIN
/usr/bin/squid_kerb_ldap -d -g te...@us.dom.local
external_acl_type AD_US_ITDEPT ttl=3600  negative_ttl=3600  %LOGIN
/usr/bin/squid_kerb_ldap -d -g itd...@us.dom.local





refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320



acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8

acl firefox_browser browser Firefox

acl UnrestrictedUsers external AD_US_ITDEPT
acl TempUsers external AD_US_TEMPS
acl AuthorizedUsers proxy_auth REQUIRED


acl hq-dmz src 10.50.192.0/24
acl hq-servers src 10.50.64.0/23 10.50.4.0/24
acl hq-services src 10.50.8.0/24 10.50.2.0/24
acl hq-dev src 10.50.66.0/24

acl ie_urls dstdomain /etc/squid/ie_urls.allow

acl service_urls dstdomain /etc/squid/service_urls.allow
acl dev_urls dstdomain /etc/squid/dev_urls.allow
acl hq-servers_urls dstdomain /etc/squid/servers_urls.allow
acl temp_urls dstdomain /etc/squid/temp_urls.allow

acl SSL_ports port 443
acl CONNECT method CONNECT


http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


http_access allow hq-servers hq-servers_urls
http_access deny hq-servers

http_access allow hq-services service_urls
http_access deny hq-services

http_access allow hq-dev dev_urls
http_access deny hq-dev


http_access allow TempUsers temp_urls
http_access deny TempUsers all

http_access allow UnrestrictedUsers
http_access deny UnrestrictedUsers all

http_access deny !AuthorizedUsers
http_access allow all
http_access deny all


http_reply_access allow all
icp_access allow all
cache_mgr supp...@dom.local
coredump_dir /var/spool/squid



Thanks,
M. de Jong






Re: [squid-users] using squid to proxy from internal lan to remote ldaps

2010-08-17 Thread Jakob Curdes


On 17.08.2010 21:29, Derek Doucette wrote:


 I was wondering if anyone has ever attempted to use squid to proxy ldaps 
requests to a remote site.


I haven't, but I see no reason it should not work.
Remarks:
- you will need to add the standard ldaps port to safe_ports or use port 443 
for your ldaps server
- be aware that squid does not really check the content of the SSL-encrypted 
connection, so the protection is limited to SSL protocol attacks
- It will only work with LDAPS, not with plain LDAP, because otherwise squid wants to see HTTP 
traffic in the connection
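In squid.conf terms the first remark is just the two acl additions below 
(assuming the standard ldaps port 636 and a client that tunnels via HTTP 
CONNECT); the stock Safe_ports/SSL_ports deny rules then let the CONNECT to 
port 636 through:

  acl SSL_ports port 636       # ldaps
  acl Safe_ports port 636      # ldaps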


HTH, Jakob





[squid-users] rewrite domain

2010-08-17 Thread Thomas E. Maleshafske
Hey everyone,
I'm having problems with rewrite rules.
Here is my situation.
I run an apt mirror inside my local network, and currently I set my
sources.list to reflect that, but if a laptop leaves the network it
still has that sources.list, which is no good.  Basically I want
to rewrite the domain so that
http://us.archive.ubuntu.com/somefolder/somefile.deb points to
http://host.localdomain.com/samefolder/samefile.deb

That way the sources.list file is valid regardless of location, but
requests are redirected to the local mirror when on the LAN.

Any help is appreciated. 
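One way to sketch this (all paths and hostnames are placeholders; the helper 
speaks the classic one-request-per-line url_rewrite interface) is a tiny 
rewrite helper plus a url_rewrite_program line in squid.conf:

  #!/bin/sh
  # apt-rewrite.sh - hypothetical url_rewrite helper: read one request line,
  # print the (possibly rewritten) URL back to Squid.
  while read url rest; do
      case "$url" in
          http://us.archive.ubuntu.com/*)
              echo "http://host.localdomain.com/${url#http://us.archive.ubuntu.com/}" ;;
          *)
              echo "$url" ;;
      esac
  done

It would be wired in with url_rewrite_program /usr/local/bin/apt-rewrite.sh, 
ideally restricted by an url_rewrite_access rule to dstdomain 
us.archive.ubuntu.com so other traffic skips the helper.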
-- 
Thomas E. Maleshafske tmaleshaf...@maleshafske.com



Re: [squid-users] Squid blocks web page in port 7779

2010-08-17 Thread Ulises M. Alvarez

On 8/17/10 1:14 PM, p3dRø wrote:

I reconfigured my config file and I have this now:

http_port 3128 intercept
cache_mem 100 MB
cache_dir ufs /var/spool/squid 150 16 256
acl red_local src 192.168.1.0/24
acl localhost src 127.0.0.1/32
acl all src all
http_access allow localhost
http_access allow red_local
acl SSL_ports port 443
acl SSL_ports port 7779
acl Safe_ports port 8080
acl Safe_ports port 80
acl Safe_ports port 7779
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
visible_hostname Squid


Log send me this:

1282067264.181121 192.168.1.110 TCP_MISS/503 4218 GET
http://ww4.essalud.gob.pe:7779/acredita/  - DIRECT/ww4.essalud.gob.pe
text/html


It looks like you should change:

acl SSL_ports port 7779

To:

acl Safe_ports port 7779

Regards.
--
Ulises M. Alvarez
http://sophie.fata.unam.mx/


Re: [squid-users] Squid blocks web page in port 7779

2010-08-17 Thread p3dRø
acl Safe_ports port 7779 is already included in the configuration.

--
Pedro
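Since port 7779 is already in Safe_ports and the failure is a DIRECT 
TCP_MISS/503, the next thing to check is whether the proxy host itself can 
reach the origin on that port, for example (assuming curl is installed on the 
Squid box):

  curl -v --max-time 10 http://ww4.essalud.gob.pe:7779/acredita/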



2010/8/17 Ulises M. Alvarez u...@fata.unam.mx:
 On 8/17/10 1:14 PM, p3dRø wrote:

 I reconfigured my config file and I have this now:

 http_port 3128 intercept
 cache_mem 100 MB
 cache_dir ufs /var/spool/squid 150 16 256
 acl red_local src 192.168.1.0/24
 acl localhost src 127.0.0.1/32
 acl all src all
 http_access allow localhost
 http_access allow red_local
 acl SSL_ports port 443
 acl SSL_ports port 7779
 acl Safe_ports port 8080
 acl Safe_ports port 80
 acl Safe_ports port 7779
 acl CONNECT method CONNECT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 visible_hostname Squid


 Log send me this:

 1282067264.181    121 192.168.1.110 TCP_MISS/503 4218 GET
 http://ww4.essalud.gob.pe:7779/acredita/  - DIRECT/ww4.essalud.gob.pe
 text/html

 It look like you should change:

 acl SSL_ports port 7779

 To:

 acl Safe_ports port 7779

 Regards.
 --
 Ulises M. Alvarez
 http://sophie.fata.unam.mx/



[squid-users] How to deal with metalinks?

2010-08-17 Thread Stefan Jensen
Hi,...

I'm using squid-3.1.4-2.fc13.x86_64 and have an acl for update sites,
e.g.:

acl updates dstdomain .windowsupdate.microsoft.com
www.update.microsoft.com .windowsupdate.com download.microsoft.com
ntservicepack.microsoft.com wustat.windows.com urs.microsoft.com
spynet2.microsoft.com current.cvd.clamav.net clamwin.sourceforge.net
database.clamav.net java.sun.com javadl-esd.sun.com

and for working-time e.g.:

acl worktime time MTWHF 08:00-17:00
http_access deny !localweb !updates !worktime

This works fine for the Windows boxes, but for Linux clients I have
problems allowing 24h access for updates, because most Linux
package managers use some kind of mirror list with metalinks.

Here is a sample file, that is requested by the package-manager and
contains a list of mirrors:

https://mirrors.fedoraproject.org/metalink?repo=fedora-source-13&arch=i386

How can I allow access based on the content of that metalink file? Is
that possible? I don't want to hook all Linux boxes on a single mirror.

thanks

best regards

Stefan
-- 



Re: [squid-users] Re: Squid_kerb_ldap intermittently failing auth

2010-08-17 Thread Mark deJong
Hello Markus,
It turns out it was an issue with ipv6. I recompiled and that fixed
the problem. Thanks for getting back!

Best,
Mark

On Tue, Aug 17, 2010 at 3:39 PM, Markus Moeller hua...@moeller.plus.com wrote:
 Can you run both squid_kerb_ldap and squid_kerb_auth with -d. It should give
 a lot more details to find out why it happens

 Markus

 Mark deJong dejo...@gmail.com wrote in message
 news:aanlktikvdju6+ysywkdn7vxyzyts4rtdjgf7ccnzm...@mail.gmail.com...

 Hello,
 I'm having an issue with squid_kerb_auth. It seems not all proxy
 requests are getting serviced. When falling back on NTLM the requests
 come through fine.

 My guess is subsequent GET requests made over Proxy_KeepAlive sessions
 are not getting serviced. I confirmed this on a trace using Wireshark
 where the client requests a page but Squid doesn't come back with an
 answer. Is this a known issue?

 I'm currently running squid3-3.1.6 and have seen this behavior both
 with the included squid_kerb_auth and a separately compiled binary.

 squid.conf follows:


 http_port 8080
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 acl apache rep_header Server ^Apache
 logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st
 "%{Referer}h" "%{User-Agent}h" %Ss:%Sh

 access_log /var/log/squid/access.log combined



 auth_param negotiate program /usr/libexec/squid/squid_kerb_auth -d  -s
 HTTP/dc32-wgw01.nix.dom.lo...@ushs.dom.local
 auth_param negotiate children 30
 auth_param negotiate keep_alive on

 auth_param ntlm program /usr/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 30
 auth_param ntlm max_challenge_reuses 0
 auth_param ntlm max_challenge_lifetime 2 minutes
 auth_param ntlm use_ntlm_negotiate on

 external_acl_type AD_US_TEMPS ttl=3600  negative_ttl=3600  %LOGIN
 /usr/bin/squid_kerb_ldap -d -g te...@us.dom.local
 external_acl_type AD_US_ITDEPT ttl=3600  negative_ttl=3600  %LOGIN
 /usr/bin/squid_kerb_ldap -d -g itd...@us.dom.local





 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern . 0 20% 4320



 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8

 acl firefox_browser browser Firefox

 acl UnrestrictedUsers external AD_US_ITDEPT
 acl TempUsers external AD_US_TEMPS
 acl AuthorizedUsers proxy_auth REQUIRED


 acl hq-dmz src 10.50.192.0/24
 acl hq-servers src 10.50.64.0/23 10.50.4.0/24
 acl hq-services src 10.50.8.0/24 10.50.2.0/24
 acl hq-dev src 10.50.66.0/24

 acl ie_urls dstdomain /etc/squid/ie_urls.allow

 acl service_urls dstdomain /etc/squid/service_urls.allow
 acl dev_urls dstdomain /etc/squid/dev_urls.allow
 acl hq-servers_urls dstdomain /etc/squid/servers_urls.allow
 acl temp_urls dstdomain /etc/squid/temp_urls.allow

 acl SSL_ports port 443
 acl CONNECT method CONNECT


 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports


 http_access allow hq-servers hq-servers_urls
 http_access deny hq-servers

 http_access allow hq-services service_urls
 http_access deny hq-services

 http_access allow hq-dev dev_urls
 http_access deny hq-dev


 http_access allow TempUsers temp_urls
 http_access deny TempUsers all

 http_access allow UnrestrictedUsers
 http_access deny UnrestrictedUsers all

 http_access deny !AuthorizedUsers
 http_access allow all
 http_access deny all


 http_reply_access allow all
 icp_access allow all
 cache_mgr supp...@dom.local
 coredump_dir /var/spool/squid



 Thanks,
 M. de Jong






Re: [squid-users] Fwd: %path% in acl list squid 2.6

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 22:37:31 +0530, sushi squid sushi.sq...@gmail.com
wrote:
 Thanks JD for the reply,
 My Problem is this ...
 Imagine a system with three accounts:
 1)Administrator
 2)John
 3)Sushi
 I want that in the config file the path should be such that …
 when John logsin he has a different block list and when sushi logs in
 a different black list is loaded
 
 This has to be done with single installation of squid ….
 any ideas ..???

I suggest forgetting about loading config on login. That requires Squid to
load and start up during their login, which may not be realistic,
particularly when running as a system service, or on a different box
altogether.

Find some measure to identify the users inside Squid and structure your
access controls to identify the user before testing the user-specific ACL.
The user's AD account name would be a good choice here since it's logins you
want to base things on. The mswin_* helpers bundled with the Windows builds
of squid contact the local AD/SSPI directly.

Each http_access line (and the other access types) is tested left-to-right.
So a config like this:

 acl userJohn proxy_auth john
 acl userBob proxy_auth bob
 acl userJohnBlocklist dstdomain C:/userJohnBlocklist.txt
 acl userBobBlocklist dstdomain C:/userBobBlocklist.txt

 http_access allow userJohn !userJohnBlocklist
 http_access allow userBob !userBobBlocklist
 http_access deny all

will only block requests which match userJohn using the
userJohnBlocklist list, and vice versa for userBob and his list.

Amos

 
 On 8/17/10, John Doe jd...@yahoo.com wrote:
 From: sushi squid sushi.sq...@gmail.com

 I am a newbie in squid ... my squid config file is giving some 
strange
 error
 My OS is Windows XP and squid version is 2.6Stable
 In  the acl permission list the path is as follows
 acl goodsite url_regex -i  %userprofile%/whitelist.txt

 Maybe I am wrong but I do not think squid will resolve your
%userprofile%
 variable...

 JD






RE: [squid-users] ldap fallback not working

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 17:57:13 +, Joseph L. Casale
jcas...@activenetwerx.com wrote:
I think its a matter of username (Basic) vs dom...@username
(Kerberos).

You can test this by replacing the group lookup with a fake
external_acl_helper which logs the credentials passed to the group
helper.
Doing a few requests through both auth mechanisms will show you what
difference the group helper sees.
 
 Amos,
 I made a simple perl script that takes STDIN and writes it to a
 file in /var/log/squid that is owned by squid:squid and returns
 OK but its not working. Either I missed the error with ALL,9
 (I didn’t know which module to focus on). How does one get a helper
 to log in cache.log like the included binaries do when you enable
 debug in them?

Anything dumping to stderr from the helper appears in the squid cache.log.
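So a throwaway debug helper only needs to echo what it receives to stderr; a 
minimal sketch (shell here, but the same applies to the perl script mentioned 
above) is:

  #!/bin/sh
  # log every request line to stderr (it ends up in cache.log) and answer OK
  # so access is granted while testing
  while read line; do
      echo "group-helper got: $line" >&2
      echo "OK"
  done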

Amos


Re: [squid-users] RE: EXTERNAL: Re: [squid-users] Feasibility - Squid as user-specific SSL tunnel (poor-man's V

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 11:43:38 -0400, Bucci, David G
david.g.bu...@lmco.com wrote:
 Squid *C* needs a cache_peer line for each separate certificate it 
 uses to contact Squid S.
 
 Getting back to this, Amos.  Have roughed out the solution, but am now
 trying to layer in client certificates.  Again, we have multiple users/PC,
 but can guarantee that only one user will be on at a time (no concurrent
 logon and remote access sessions, e.g.).
 
 I guess I'm not understanding how to make sure that the tunnel established
 between the squid instances (Client and Server) is authenticated with the
 user-specific certificate.  I had thought I would have to brute-force it --
 e.g., have a known location for a user certificate, a cache-peer line that
 points at that known location, and on user login have that particular
 user's certificate copied to that known location, then restart Squid C.
 But your mention of a cache-peer line per certificate implies there's a
 more elegant approach?

Well, yes. Still a bit of a blunt object though.

 
 Can you explain the above -- if I put a cache-peer line, pointing to a
 user-specific certificate for each user on the PC, how does Squid know
 which one to use?  Does it somehow do it dynamically, based on the owning
 user of the process issuing the incoming request?

The idea goes like this:

 cache_peer can be configured with a client certificate (one AFAIK).
 cache_peer can be selected based on arbitrary ACL rules (cache_peer_access).
 username can be found and matched with an ACL.

So... every user can have their own unique cache_peer entry in squid.conf
which sends their certificate out. :)

For easy management if you have more than a few users, I'd throw in the
include directive and have a folder of config snippets: one file per user
with their whole snippet included. Since it's user-specific and all the
snippets are identical in structure, their ordering relative to each other is
not too important.

The remaining problem is that the username has to be checked and cached in
the main access controls (http_access) so that it becomes usable by
cache_peer_access.

What we end up with is:

/etc/squid/users/snippet-JoeBlogs:
  # match only this user
  acl userJoeBlogs proxy_auth JoeBlogs

  # forces the username to be looked up early, but !all prevents the allow
  # from actually happening.
  # if you have more general access controls that use proxy_auth REQUIRED,
  # this can be skipped.
  http_access allow userJoeBlogs !all

  # private link to the master server for this user
  cache_peer srv.example.com parent 443 0 name=peer-JoeBlogs ssl ...
  cache_peer_access peer-JoeBlogs allow userJoeBlogs
  cache_peer_access peer-JoeBlogs deny all


/etc/squid/squid.conf:
  ...
  auth_param 
  ...
  include /etc/squid/users/*
  http_access deny all


 
 If I do have to brute-force it, do you know if the Windows version accepts
 env vars in squid.conf, e.g. %HOMEPATH%?  (may be a q. for Acme)

No. There is some limited support in specialized areas using the registry.
But not for files like that AFAIK.

Amos



Re: [squid-users] How to deal with metalinks?

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 23:56:01 +0200, Stefan Jensen sjen...@versanet.de
wrote:
 Hi,...
 
 i'm using squid-3.1.4-2.fc13.x86_64 and have a acl for update-sites
 e.g.:
 
 acl updates dstdomain .windowsupdate.microsoft.com
 www.update.microsoft.com .windowsupdate.com download.microsoft.com
 ntservicepack.microsoft.com wustat.windows.com urs.microsoft.com
 spynet2.microsoft.com current.cvd.clamav.net clamwin.sourceforge.net
 database.clamav.net java.sun.com javadl-esd.sun.com
 
 and for working-time e.g.:
 
 acl worktime time MTWHF 08:00-17:00
 http_access deny !localweb !updates !worktime
 
 This works fine for the Windows boxes, but for Linux clients I have
 problems allowing 24h access for updates, because most Linux
 package-managers use some kind of mirrorlist with metalinks.
 
 Here is a sample file, that is requested by the package-manager and
 contains a list of mirrors:
 

 https://mirrors.fedoraproject.org/metalink?repo=fedora-source-13&arch=i386
 
 How can I allow access based on the content of that metalink file? Is
 that possible? I don't want to hook all Linux boxes on a single mirror.

Why not? Restricting to a small sub-set of close or fast mirrors can
improve your bandwidth speeds and overall long-haul costs.

Squid does not itself consider the data content of any requests beyond the
basic requirements of transfer encoding. You would have to find or create
helpers to do the inspection and store the results, plus an external_acl_type
helper to give Squid a live verdict about what's currently okay to accept.
An ICAP or eCAP adapter saving okay URLs/domains to a BerkeleyDB (1.85
format) could leverage the session helper.
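
The Squid side of such a setup might look roughly like this (the helper name
and path are purely hypothetical; the helper itself is the part you would
have to write, fed by the ICAP/eCAP adapter above):

  # helper answers OK if the destination is currently listed as an allowed mirror
  external_acl_type mirror_db ttl=300 children=5 %DST /usr/local/bin/check_mirror_db
  acl metalink_mirrors external mirror_db

  http_access deny !localweb !updates !metalink_mirrors !worktime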

Amos


Re: [squid-users] Re: Squid_kerb_ldap intermittently failing auth

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 18:00:46 -0400, Mark deJong dejo...@gmail.com wrote:
 Hello Markus,
 It turns out it was an issue with ipv6. I recompiled and that fixed
 the problem. Thanks for getting back!

What was the problem specifically, please? And was there anything other than
a simple recompile with the same options required?

Amos



Re: [squid-users] Error 101 network unreachable

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 15:10:46 +0200, Babelo Gmvsdm hercul...@hotmail.com
wrote:
 Hi,
 
 My squid server has a strange behaviour with one website:
 http://www.01net.com
 
 when I do this search on google for instance: 7zip 01
 
 the results sent by google give me www.01net.com in first place, but when
 I try to click the link I get this error:
 
 The following error was encountered while trying to retrieve the URL:
 http://www.01net.com/telecharger/windows/Utilitaire/compression_et_decompression/fiches/4035.html
   The connection to www.01net.com failed.   The system returned: (101)
 Network is unreachable
 
 Whereas if I click on the link given in the error, I reach the searched
 page!!
 Right now it's the only website giving me this error, but I fear there will
 be many more later.
 Thanks for helping me understand what's happening, and sorry for my terrible
 English!!

www.01net.com is an IPv6-enabled website. You don't say which version, but
this behaviour is known with some broken 3.1 releases when you have
connectivity problems over IPv6.

If your squid version is older than 3.1.6 I suggest an upgrade. If you
self-build please prefer the daily snapshot as there are now some extra
v6-related fixes. Alternatively 3.1.7 is due out in a few days if you
require a formal signed release to work from.

Amos



Re: [squid-users] WCCP and parent authentication

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 14:00:57 -0500, Dean Weimer dwei...@orscheln.com
wrote:
 I know when using squid as an intercept proxy it can't do authentication
 as the clients don't know it's there, but do any of you out there know if
 you can use it with a parent proxy that requires authentication?
 
 The specific scenario I am considering is Squid in a DMZ with WCCPv2 used in
 conjunction with a Cisco ASA 5520 firewall and an external (Websense
 filtering) proxy that requires authentication; both NTLM and basic
 authentication are supported.
 
 Clients
|
 Cisco ASA5520 -WCCPv2- Squid 3.1.6 (In DMZ) -- Secondary Internet
 Connection -- Parent Proxy Service 
|
 Internet
 
 We are currently using auto-detect, but continually keep running into
 applications that don't recognize auto-detect, or sometimes don't even have
 the ability to read a configuration script.  I am trying to come up with a
 way to alleviate the users' issues without losing our local cache, and to
 keep the HR and Legal departments happy by continuing to filter websites
 with content that some could find offensive, as well as blocking unsafe
 (malware/spyware) websites.


1) IF the client thinks it's talking to the parent proxy: cache_peer
login=PASS (or login=PASSTHRU) will pass on the credentials without
requiring auth within Squid.

2) IF Squid itself needs to log in to the parent: cache_peer login= with
username:password will insert the given login into relayed requests.

NP: older Squid only allows Basic auth protocol credentials to be added
this way. 3.2 brings the ability to do Negotiate/Kerberos as well; NTLM
remains a sticky problem.


The login= option is only relevant once on a cache_peer entry, so only one of
the two approaches can be used at a time. #2 is probably better/simpler for
you since the clients are not involved in the auth process.
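
For option #2 that could look something like this (the host, port and
credentials are placeholders for your Websense service details):

  cache_peer websense.example.com parent 8080 0 no-query default login=proxyuser:proxypass
  never_direct allow all

The never_direct line forces all traffic through the filtering parent instead
of letting Squid go direct.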


Hope this helps.

Amos


[squid-users] High load server Disk problem

2010-08-17 Thread Robert Pipca
Hi.

I'm using squid on a high speed network (with 110M of http traffic).

I'm using 2.7.STABLE7 with these cache_dir:

cache_dir aufs /cache 756842 60 100
cache_dir coss /cache/coss1 65520 max-size=1048575
max-stripe-waste=32768 block-size=4096 membufs=15
cache_dir coss /cache/coss2 65520 max-size=1048575
max-stripe-waste=32768 block-size=4096 membufs=15
cache_dir coss /cache/coss3 65520 max-size=1048575
max-stripe-waste=32768 block-size=4096 membufs=15

Everything works fine most of the day, but on peak hours, I got these:

2010/08/17 20:06:59| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/17 20:06:59| squidaio_queue_request: Queue Length:
current=981, high=1488, low=321, duration=170

After a while, I got a few of these, with duration increasing, until:

2010/08/17 20:23:09| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/17 20:23:09| squidaio_queue_request: Queue Length:
current=558, high=2177, low=321, duration=531

The web browsing started to get very slow, which is when the support
team took squid down.

All cache_dir are on a single sata-2 7200RPM 1TB hard drive.

Is there a way to know which cache_dir is the problem and what I can do
so this doesn't happen?

I tried using both 16 and 32 AIO threads, but didn't help.

cache manager tells me that I have around 10 million objects:

Average HTTP requests per minute since start: 18851.1

Storage Swap size: 693535688 KB
Storage Mem size: 30872 KB
Mean Object Size: 64.50 KB

Internal Data Structures:
10752896 StoreEntries
   49 StoreEntries with MemObjects
   26 Hot Object Cache Items
10752847 on-disk objects

Please help!

- Robert


Re: [squid-users] Squid blocks web page in port 7779

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 13:14:25 -0500, p3dRø ip2tr...@gmail.com wrote:
 Hi Amos,
 
 I have my proxy as another host in the network (with only one ethernet
 card = eth0). The communication flow is:
 
 Internet -- Router ADSL -- Firewall -- Squid -- PCs
 
 What I mean by transparent is that all the hosts go through the proxy
 without authentication and without blocking anything yet. They don't know
 that there is any proxy.

With only one NIC on the proxy this gets close to some tricky packet
routing issues. If you can use a second NIC, physically separating the DMZ
(Squid-ADSL linkage) from the internal PCs would be a great help in
avoiding problems. (Ironically I have a long 3-day callout ahead to fix
exactly these issues for a client who decided to re-wire their net-cafe
themselves).

For NAT interception (http_port ... intercept) to work properly the Squid
box must be the one doing the NAT. Otherwise there are no box-internal NAT
tables for Squid to retrieve the clients' real destinations from.

In these setups I recommend setting the Squid box up as a full router +
firewall, and the access device (the ADSL here) as a pure modem/bridge,
pushing everything complex over to the Squid box.


Due to vulnerabilities with direct access to an interception port, 3.1 and
later prohibit the two modes from sharing a port. If the NAT lookups fail
(see above) the connection is considered direct-access and may be blocked.

The fix for you is to do NAT on the Squid box.
 http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect
 http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
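
The core of the NAT part (a sketch assuming Linux iptables, the LAN arriving
on eth0 and Squid listening on port 3128 as above; the wiki pages cover the
full recipes):

  # on the Squid box itself
  echo 1 > /proc/sys/net/ipv4/ip_forward
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128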

That seems to be the main problem in a nutshell.

There are a few minor issues and details to make things run more smoothly.
I cover them below...

 
 I reconfigured my config file and I have this now:
 
 http_port 3128 intercept
 cache_mem 100 MB
 cache_dir ufs /var/spool/squid 150 16 256
 acl red_local src 192.168.1.0/24
 acl localhost src 127.0.0.1/32

With 3.1 Squid is IPv6-enabled. You may want to update these to include
your LAN IPv6 ranges. Those are ::1 for localhost, and fc00::/7 (unique-local)
plus fe80::/10 (link-local) as the private equivalents to 192.168.*

Though having said that the NAT will not work on IPv6 traffic.
NP: you can instead v6-enable your LAN PCs' traffic to Squid by using WPAD
to silently configure them for a proxy hostname with AAAA records
available. :)


 acl all src all

all is pre-defined in all Squid-3.x. Remove it to quieten the startup
warnings.

 http_access allow localhost
 http_access allow red_local
 acl SSL_ports port 443
 acl SSL_ports port 7779
 acl Safe_ports port 8080
 acl Safe_ports port 80
 acl Safe_ports port 7779
 acl CONNECT method CONNECT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

Ah, so all the stuff about Safe_ports and SSL_ports was a red herring.
Those deny rules are never reached anyway, because the allow lines above
them match first.

To actually work, the two port-checking deny lines are supposed to be above
your LAN access permissions:
  http_access allow localhost
  http_access allow red_local
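
i.e. that part of the config would end up ordered roughly like this (with a
final deny all as the usual safety net):

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost
  http_access allow red_local
  http_access deny all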


Amos


Re: [squid-users] High load server Disk problem

2010-08-17 Thread Jose Ildefonso Camargo Tolosa
Hi!

In my own personal opinion: your hard drive alone is not enough to
handle that much traffic (110MBytes/s, ~1Gbps).  See, most SATA hard
drives (7200rpm) give around 50~70MB/s *sequential* read speed, and your
cache reads are *not* sequential, so they will be slower.  In my
opinion, you need something like an 8-drive RAID10 array, and/or
faster disks (10k), or maybe 15k SAS disks.

Also, I would put a minimum object size for disk of 1M, and a maximum
object size of whatever you want (this depends on your network, but
usually ~150MB is enough to fit almost any upgrade download). And for
RAM, I would put a maximum object size of 1M, with no minimum.  Thus,
keeping small files out of the disk cache.

Also, other questions:  How many clients/connections are you handling?
what are your server's specifications? and how is the system load over
time? (maybe zabbix or any other monitoring system will let you know
your system load over time).

I hope this helps,

Ildefonso Camargo

On Tue, Aug 17, 2010 at 10:26 PM, Robert Pipca robertpi...@gmail.com wrote:
 Hi.

 I'm using squid on a high speed network (with 110M of http traffic).

 I'm using 2.7.STABLE7 with these cache_dir:

 cache_dir aufs /cache 756842 60 100
 cache_dir coss /cache/coss1 65520 max-size=1048575
 max-stripe-waste=32768 block-size=4096 membufs=15
 cache_dir coss /cache/coss2 65520 max-size=1048575
 max-stripe-waste=32768 block-size=4096 membufs=15
 cache_dir coss /cache/coss3 65520 max-size=1048575
 max-stripe-waste=32768 block-size=4096 membufs=15

 Everything works fine most of the day, but on peak hours, I got these:

 2010/08/17 20:06:59| squidaio_queue_request: WARNING - Disk I/O overloading
 2010/08/17 20:06:59| squidaio_queue_request: Queue Length:
 current=981, high=1488, low=321, duration=170

 After a while, I got a few of these, with duration increasing, until:

 2010/08/17 20:23:09| squidaio_queue_request: WARNING - Disk I/O overloading
 2010/08/17 20:23:09| squidaio_queue_request: Queue Length:
 current=558, high=2177, low=321, duration=531

 The web browsing started to get very slow, which is when the support
 team took squid down.

 All cache_dir are on a single sata-2 7200RPM 1TB hard drive.

 Is there a way to know which cache_dir is the problem and what I can do
 so this doesn't happen?

 I tried using both 16 and 32 AIO threads, but didn't help.

 cache manager tells me that I have around 10 million objects:

 Average HTTP requests per minute since start: 18851.1

 Storage Swap size: 693535688 KB
 Storage Mem size: 30872 KB
 Mean Object Size: 64.50 KB

 Internal Data Structures:
 10752896 StoreEntries
    49 StoreEntries with MemObjects
    26 Hot Object Cache Items
 10752847 on-disk objects

 Please help!

 - Robert



Re: [squid-users] High load server Disk problem

2010-08-17 Thread Jose Ildefonso Camargo Tolosa
Hi!

Sorry, had to post some corrections. duh

On Tue, Aug 17, 2010 at 10:43 PM, Jose Ildefonso Camargo Tolosa
ildefonso.cama...@gmail.com wrote:
 Hi!

 In my own personal opinion: your hard drive alone is not enough to
 handle that much traffic (110MBytes/s, ~1Gbps).  See, most SATA hard
 drives (7200rpm) gives around 50~70MB/s *sequential* read speed, your
 cache reads are *not* sequential, so, it will be slower.  In my
 opinion, you need something like a 8 drives RAID10 array, and/or use
 faster disks (10k), or maybe 15k SAS disks.

 Also, I would put a minimum object size for disk of 1M, and a maximum
 object size of whatever you want (this depends on your network, but
 usually ~150MB is enough to fit almost any upgrade download). And for
 RAM, I would put a maximum object size of 1M, with no minimum.  Thus,
 keeping small files out of the disk cache.

Forget the minimum object size for disk: a 1M minimum would leave most
objects *in RAM only*, which may only be good if you have lots of RAM. If
you do have *lots* of RAM, you could still use those settings.


 Also, other questions:  How many clients/connections are you handling?
 what are your server's specifications? and how is the system load over
 time? (maybe zabbix or any other monitoring system will let you know
 your system load over time).

 I hope this helps,

 Ildefonso Camargo

 On Tue, Aug 17, 2010 at 10:26 PM, Robert Pipca robertpi...@gmail.com wrote:
 [... full quote of Robert's original message snipped, see above ...]




Re: [squid-users] High load server Disk problem

2010-08-17 Thread Amos Jeffries
On Tue, 17 Aug 2010 22:43:33 -0430, Jose Ildefonso Camargo Tolosa
ildefonso.cama...@gmail.com wrote:
 Hi!
 
 In my own personal opinion: your hard drive alone is not enough to
 handle that much traffic (110MBytes/s, ~1Gbps).  See, most SATA hard
 drives (7200rpm) gives around 50~70MB/s *sequential* read speed, your
 cache reads are *not* sequential, so, it will be slower.  In my
 opinion, you need something like a 8 drives RAID10 array, and/or use
 faster disks (10k), or maybe 15k SAS disks.
 
 Also, I would put a minimum object size for disk of 1M, and a maximum
 object size of whatever you want (this depends on your network, but
 usually ~150MB is enough to fit almost any upgrade download). And for
 RAM, I would put a maximum object size of 1M, with no minimum.  Thus,
 keeping small files out of the disk cache.

The COSS storage type he has set up already does this very efficiently, with
added disk-backing of the COSS chunks for cross-restart recovery of the
cache.

 
 Also, other questions:  How many clients/connections are you handling?
 what are your server's specifications? and how is the system load over
 time? (maybe zabbix or any other monitoring system will let you know
 your system load over time).
 
 I hope this helps,
 
 Ildefonso Camargo
 
 On Tue, Aug 17, 2010 at 10:26 PM, Robert Pipca robertpi...@gmail.com
 wrote:
 Hi.

 I'm using squid on a high speed network (with 110M of http traffic).

 I'm using 2.7.STABLE7 with these cache_dir:

 cache_dir aufs /cache 756842 60 100


What's missing appears to be min-size=1048576 on the AUFS dir, to push all
the small objects into the better-suited COSS directories. (NOTE: the value
is the COSS max-size + 1.)
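
i.e. something along the lines of:

  cache_dir aufs /cache 756842 60 100 min-size=1048576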


 cache_dir coss /cache/coss1 65520 max-size=1048575
 max-stripe-waste=32768 block-size=4096 membufs=15
 cache_dir coss /cache/coss2 65520 max-size=1048575
 max-stripe-waste=32768 block-size=4096 membufs=15
 cache_dir coss /cache/coss3 65520 max-size=1048575
 max-stripe-waste=32768 block-size=4096 membufs=15

 Everything works fine most of the day, but on peak hours, I got these:

 2010/08/17 20:06:59| squidaio_queue_request: WARNING - Disk I/O
 overloading
 2010/08/17 20:06:59| squidaio_queue_request: Queue Length:
 current=981, high=1488, low=321, duration=170

 After a while, I got a few of these, with duration increasing, until:

 2010/08/17 20:23:09| squidaio_queue_request: WARNING - Disk I/O
 overloading
 2010/08/17 20:23:09| squidaio_queue_request: Queue Length:
 current=558, high=2177, low=321, duration=531

 The web browsing started to get very slow, which is when the support
 team took squid down.

 All cache_dir are on a single sata-2 7200RPM 1TB hard drive.

 Is there a way to know which cache_dir is the problem and what I can do
 so this doesn't happen?

AIO is the I/O method preferred by AUFS. That aufs dir is also listed
first, which may affect the default choice.


 I tried using both 16 and 32 AIO threads, but didn't help.

 cache manager tells me that I have around 10 million objects:

 Average HTTP requests per minute since start: 18851.1

 Storage Swap size: 693535688 KB
 Storage Mem size: 30872 KB
 Mean Object Size: 64.50 KB

 Internal Data Structures:
 10752896 StoreEntries
49 StoreEntries with MemObjects
26 Hot Object Cache Items
 10752847 on-disk objects

*49* of 10 million objects are in-transit? That is very low, though it
could be a result of the queue overload.

Amos