Re: [squid-users] url_rewrite

2010-11-01 Thread viswanathan

Thanks much for the reply.

So is it necessary to increase url rewrite concurrency too?

Thanks
-Viswa


On 11/02/2010 10:53 AM, Brett Lymn wrote:

On Tue, Nov 02, 2010 at 10:29:06AM +0530, viswanathan wrote:
   

We are not sure whether the websense plugin supports concurrency; any views
on it?

 

It does.  Increase the number of url_rewrite children until the
machine handles the task.  The websense redirector does take a bit of
time to do its work, so you do need quite a few of them.
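
For reference, the two squid.conf knobs involved look like this; the child
count is only illustrative, and the concurrency value should stay at 0 unless
the helper is known to speak the concurrency protocol:

  # raise the pool until the "all url_rewriter processes are busy" warnings stop
  url_rewrite_children 60
  # one request per helper at a time; raise only for concurrency-capable helpers
  url_rewrite_concurrency 0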

   




Re: [squid-users] url_rewrite

2010-11-01 Thread Amos Jeffries

On 02/11/10 17:59, viswanathan wrote:

Thanks much for the reply

Actually we are redirecting all requests to the websense filter.

If we increase url_rewrite children and url_rewrite concurrency, will the
problem be solved?


It's a workaround to fix a shortcoming in the url-rewriter hack.



We are not sure whether the websense plugin supports concurrency; any views
on it?


It is likely not to. They have migrated websense to be an ICAP server 
instead. This is a much more efficient interface for content filters.

http://www.websense.com/content/support/library/data/v753/install/squid_icap.aspx
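
For illustration only, an ICAP client setup in squid.conf looks roughly like
the lines below. The exact icap_service syntax differs slightly between Squid
versions, and the service name, host, port and path are placeholders to be
taken from the Websense documentation:

  icap_enable on
  icap_service ws_req reqmod_precache 0 icap://filter.example.com:1344/reqmod
  adaptation_access ws_req allow all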

Upgrading from 2.6, the best path is straight to 3.1.
A check of the 2.6 options which are not yet available in 3.1 is 
recommended first, to see whether anything very important is blocking 
the upgrade.

 http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html#ss6.1

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] squid not storing objects to disk andgettingRELEASED on the fly

2010-11-01 Thread Amos Jeffries

On 29/10/10 05:28, Rajkumar Seenivasan wrote:

switching from LFUDA to GDSF didn't make much of a difference.

I assume the following is happening...
I pre-cache around 2 to 3GB of data everyday and get 40 to 50% HITS everyday.


Check that those 40-50% hits are from actual visitors, not from your 
pre-cache scan. Ideally your pre-cache scan will be entirely MISSes for 
content which is cacheable through until at least the next scan.



Once the cache_dir size reaches the cache_swap_low threshold, squid is
not aggressive enough in removing the old objects. In fact, I think
squid is not doing anything to remove the old objects.


Correct. Aggressive removal starts when the cache_dir stores more than 
the *_high threshold percentage of its maximum size, and continues until 
*_low is reached by the erasures.
Slow removal occurs with every visitor request, as Squid becomes aware 
of things being stale.
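
For example, with the thresholds spelled out in squid.conf (the percentages
are only illustrative):

  # slow, steady removal keeps usage near cache_swap_low;
  # aggressive removal starts once cache_swap_high is exceeded
  cache_swap_low 90
  cache_swap_high 95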




So the pre-caching requests are not getting into the store and the HIT
rate goes down big time.


Huh? Pre-cache requests for content which is cacheable will always get 
it into the cache. Things requested earlier may get dropped, even if 
they came from earlier in the same pre-cache scan.


HIT ratio dropping means only that items not in the cache are wanted 
more than those already there. When working correctly the things in 
cache are dropped and the now-wanted ones are added.


This is *exactly* how the pre-caching run works. It scans URLs through 
Squid, pretending to be a regular client which does not care about the 
loading lag.  By making Squid aware of, and update, any stale "pre-cache" 
items, later visitors see reduced lag for them. If they become stale 
between pre-cache runs, the next visitor to fetch them will face the 
same slow-down.


NP: all of this pre-caching stuff is a separate problem from the issue of 
how much RAM squid is using.



When this happens, if I increase the store size, I can see better HIT rates.


Yes. More stuff gets cached, thus more can be served from the cache.



What can be done to resolve this issue? Is there a equivalent of
"reference_age" for squid V3.1.8?


heap LFUDA and lru should provide it in the usual place. They are the 
only replacement policies which base removal on age.
heap GDSF bases removal on object size, so the average object size will 
drop when it does removal.
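
For example, either of the age-aware policies can be selected in squid.conf
(your configure options below already include --enable-removal-policies=heap,lru):

  # both of these weigh object age when choosing what to evict
  cache_replacement_policy heap LFUDA
  #cache_replacement_policy lru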



cache mgr always reports swap.files_cleaned = 0.
my understanding is that this counter will show the # of objects
removed from the disk store based on the replacement policy.


My understanding matches yours.



I changed the cache_replacement_policy from "heap GDSF" to "lru"
yesterday to see if it makes a difference.
Removal policy: lru
LRU reference age: 1.15 days


issues with memory usage:
both squids are running with 100% mem usage (15GB). Nothing else is
running on these 2 servers.
Stopping and starting the squid doesn't bring down the memory usage.

The only way to release memory is to stop squid, move the cache dir to
something_old, recreate and start squid with empty cache
AND DELETE the old cache dirs.
If I don't delete the old cache dir, memory is not getting released.


I saw earlier that you had Squid mempools turned off. This means that 
memory usage is directly under the control of the operating system's 
default allocator. Those tend to keep memory allocations around, at 
least as virtual memory, in case they are needed again.
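
If you would rather have Squid's own pooling manage idle memory instead, a
minimal sketch is the two directives below; the 64 MB cap is only an example
value:

  memory_pools on
  # limit how much otherwise-unused memory the pools may keep around
  memory_pools_limit 64 MB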




squid runs in accel mode and serves only sqlite and xml files, nothing else.


Can you provide me with some example URLs which I can run some tests on?
And if possible, access to make "manager" requests to your proxies. (I 
visit from 58.28.153.233 or 2002:3a1c:99e9:0:206:5bff:fe7c:b8a if allowed)




Squid Cache: Version 3.1.8
configure options:  '--enable-icmp'
'--enable-removal-policies=heap,lru' '--enable-useragent-log'
'--enable-referer-log' '--enable-follow-x-forwarded-for'
'--enable-default-hostsfile=/etc/hosts' '--enable-x-accelerator-vary'
'--disable-ipv6' '--enable-htcp' '--enable-icp'
'--enable-storeio=diskd,aufs' '--with-large-files'
'--enable-http-violations' '--disable-translation'
'--disable-auto-locale' '--enable-async-io'
--with-squid=/root/downlaods/squid/squid-3.1.8
--enable-ltdl-convenience



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


Re: [squid-users] url_rewrite

2010-11-01 Thread viswanathan

Thanks much for the reply

Actually we are redirecting all requests to the websense filter.

If we increase url_rewrite children and url_rewrite concurrency, will the 
problem be solved?


We are not sure whether the websense plugin supports concurrency; any views 
on it?


Thanks
-Viswa

On 11/02/2010 10:09 AM, Amos Jeffries wrote:

On 02/11/10 17:28, viswanathan wrote:

Hi All,

I am using squid 2.6 STABLE19. We are using a URL redirector; every
request passes through the redirector.


Why? Is there not some large portion which can go without having the 
URL fiddled with?



In cache.log we can see the following errors:

WARNING: All url_rewriter processes are busy.
WARNING:Up to 300 pending requests queued

We increased url_rewrite_children up to 100 and the problem still exists.
Will the error stop if we increase url_rewrite_children further?
Does the "300 pending requests queued" mean the requests have not yet been
assigned to a url_rewrite process, or that they are waiting for responses
to previous requests from the url_rewrite program?


Yes.

The solution is not to play with your visitors' URLs, and to use a 
concurrency-enabled URL re-writer.


Amos




Re: [squid-users] url_rewrite

2010-11-01 Thread Amos Jeffries

On 02/11/10 17:28, viswanathan wrote:

Hi All,

I am using squid 2.6 STABLE19. We are using a URL redirector; every
request passes through the redirector.


Why? Is there not some large portion which can go without having the URL 
fiddled with?



In cache.log we can see the following errors:

WARNING: All url_rewriter processes are busy.
WARNING:Up to 300 pending requests queued

We increased url_rewrite_children up to 100 and the problem still exists.
Will the error stop if we increase url_rewrite_children further?
Does the "300 pending requests queued" mean the requests have not yet been
assigned to a url_rewrite process, or that they are waiting for responses
to previous requests from the url_rewrite program?


Yes.

The solution is not to play with your visitors' URLs, and to use a 
concurrency-enabled URL re-writer.
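
For what it's worth, a minimal sketch of what a concurrency-aware re-writer
can look like in Perl is below. It assumes url_rewrite_concurrency is set
above 0, so Squid prefixes each request line with a channel ID that must be
echoed back, and a reply with only the channel ID means "leave the URL
unchanged". The host names and the rewrite rule are purely illustrative:

  #!/usr/bin/perl
  use strict;
  use warnings;

  $| = 1;    # unbuffered output is required for rewrite helpers
  while (my $line = <STDIN>) {
      chomp $line;
      # with url_rewrite_concurrency > 0 each line is "<channel-id> <URL> <extras...>"
      my ($id, $url) = split /\s+/, $line;
      if (defined $url && $url =~ m{^http://blocked\.example\.com/}) {
          # send the channel-id back with a replacement URL
          print "$id http://filter.example.com/denied.html\n";
      } else {
          # channel-id alone (empty result) tells Squid to leave the URL unchanged
          print "$id\n";
      }
  }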


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.2


[squid-users] url_rewrite

2010-11-01 Thread viswanathan

Hi All,

I am using squid 2.6 STABLE19. We are using a URL redirector; every 
request passes through the redirector.

In cache.log we can see the following errors:

WARNING: All url_rewriter processes are busy.
WARNING:Up to 300 pending requests queued

We increased url_rewrite_children up to 100 and the problem still exists.
Will the error stop if we increase url_rewrite_children further?
Does the "300 pending requests queued" mean the requests have not yet been 
assigned to a url_rewrite process, or that they are waiting for responses 
to previous requests from the url_rewrite program?


Cheers
-Viswa


Re: [squid-users] squid not storing objects to disk andgettingRELEASED on the fly

2010-11-01 Thread Rajkumar Seenivasan
Hi,
Can someone please help fix my two issues?
I wish there was an equivalent of "reference_age" in 3.1.

thanks.


On Thu, Oct 28, 2010 at 12:28 PM, Rajkumar Seenivasan  wrote:
> switching from LFUDA to GDSF didn't make much of a difference.
>
> I assume the following is happening...
> I pre-cache around 2 to 3GB of data everyday and get 40 to 50% HITS everyday.
> Once the cache_dir size reaches the cache_swap_low threshold, squid is
> not aggressive enough in removing the old objects. In fact, I think
> squid is not doing anything to remove the old objects.
>
> So the pre-caching requests are not getting into the store and the HIT
> rate goes down big time.
> When this happens, if I increase the store size, I can see better HIT rates.
>
> What can be done to resolve this issue? Is there a equivalent of
> "reference_age" for squid V3.1.8?
> cache mgr always reports swap.files_cleaned = 0.
> my understanding is that this counter will show the # of objects
> removed from the disk store based on the replacement policy.
>
> I changed the cache_replacement_policy from "heap GDSF" to "lru"
> yesterday to see if it makes a difference.
> Removal policy: lru
> LRU reference age: 1.15 days
>
>
> issues with memory usage:
> both squids are running with 100% mem usage (15GB). Nothing else is
> running on these 2 servers.
> Stopping and starting the squid doesn't bring down the memory usage.
>
> The only way to release memory is to stop squid, move the cache dir to
> something_old, recreate and start squid with empty cache
> AND DELETE the old cache dirs.
> If I don't delete the old cache dir, memory is not getting released.
>
> squid runs in accel mode and serves only sqllite and xml files. nothing else.
>
> Squid Cache: Version 3.1.8
> configure options:  '--enable-icmp'
> '--enable-removal-policies=heap,lru' '--enable-useragent-log'
> '--enable-referer-log' '--enable-follow-x-forwarded-for'
> '--enable-default-hostsfile=/etc/hosts' '--enable-x-accelerator-vary'
> '--disable-ipv6' '--enable-htcp' '--enable-icp'
> '--enable-storeio=diskd,aufs' '--with-large-files'
> '--enable-http-violations' '--disable-translation'
> '--disable-auto-locale' '--enable-async-io'
> --with-squid=/root/downlaods/squid/squid-3.1.8
> --enable-ltdl-convenience
>
> Please help.
>
> thanks.
>
>
>
>
>
>
>
>
>
>
> On Fri, Sep 24, 2010 at 1:16 PM, Rajkumar Seenivasan  
> wrote:
>> Hello Amos,
>> see below for my responses... thx.
>>
>> ? 50% empty cache required so as not to fill RAM? => cache is too big or 
>> RAM not enough.
>> cache usage size is approx. 6GB per day.
>> We have 15GB of physical memory on each box and the cache_dir is set for 
>> 20GB.
>> I had cache_swap_low 65 and cache_swap_high 70% and the available
>> memory went down to 50MB out of 15GB when the cache_dir used was 14GB
>> (reached the high threshold).
>>
>> What was the version in use before this happened? 3.1.8 okay for a 
>> while? or did it start discarding right at the point of upgrade from 
>> another?
>> We started testing with 3.1.6 and then used 3.1.8 in production. This
>> issue was noticed even during the QA. We didn't have any caching
>> servers before.
>>
>> Server advertised the content-length as unknown then sent 279307 bytes. 
>> (-1/279307) Squid is forced to store it to disk immediately (could be a 
>> TB
>> about to arrive for all Squid knows).
>> I looked further into the logs and the log entry I pointed out was
>> from the SIBLING request. sorry about that.
>>
>> These tell squid 50% of the cache allocated disk space MUST be empty at 
>> all times. Erase content if more is used. The defaults for these are less
>> than 100% in order to leave some small buffer of space for use by 
>> line-speed stuff still arriving while squid purged old objects to fit 
>> them.
>> Since our data changes every day, I don't need a cache dir with more
>> than 11GB to give enough buffer. On an average, 6GB of disk cache is
>> used per day.
>>
>> filesystem is reiserfs with RAID-0. only 11GB used for the cache.
>> Used or available?
>> 11GB used out of 20GB.
>>
>> The 10MB/GB of RAM usage by the in-memory index is calculated from an 
>> average object size around 4KB. You can check your available RAM roughly
>> meets Squid needs with:  10MB/GB of disk cache + the size of cache_mem + 
>> 10MB/GB of cache_mem + about 256 KB per number of concurrent clients at
>> peak traffic. This will give you a rough ceiling.
>>
>> Yesterday morning, we changed the cache_replacement_policy from "heap
>> LFUDA" to "heap GDSF", cleaned up the cache_dir and started squid
>> fresh.
>>
>> current disk cache usage is 8GB (out of 20GB). ie. after 30 hours.
>> Free memory is 1.7GB out of 15GB.
>>
>> Based on your math, the memory usage shouldn't be more than 3 or 4GB.
>> In this case, the used mem is far too high.
>>
>>
>> On Thu, Sep 23, 2010 at 12:21 AM, Amos Jeffries  wrote:
>>> On Wed, 22 Sep 2010 15:09:31 -0400, "Cha

[squid-users] Kerberos auth with Active Directory.

2010-11-01 Thread Rolf Loudon
hello

I am trying to set up kerberos auth against Active Directory - Windows 2000 - in 
squid 2.7.  This is primarily so that the username is captured in the access 
log, but user-based access control will also occasionally be used.

I've installed the squid_kerb_auth software from 
http://squidkerbauth.sourceforge.net/

The relevant squid config looks like this:

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d
auth_param negotiate children 10
auth_param negotiate keep_alive on

external_acl_type squid_kerb_ldap ttl=3600 negative_ttl=3600 %LOGIN 
/usr/local/squid/squid_kerb_ldap -d -g active-directory-gr...@my.domain

acl ldap_group_check external squid_kerb_ldap

acl k_test src [some.test.host.address]
http_access allow k_test ldap_group_check
http_access deny k_test


Initially I used the msktutil package to create the AD account keytab, thus:

msktutil -c -b "CN=COMPUTERS" -s HTTP/squidhost.my.domain -k 
/etc/squid/HTTP.keytab --computer-name squidhost --upn HTTP/squidhost.my.domain 
--server windows_ad_host.my.domain --verbose

This produced the desired keytab, but the verbose output noted that the 
ticket version number was not returned ("must be Windows 2000" - it is) and so 
it set the kvno to zero.  This is reflected in the output of kvno 
HTTP/squidhost.my.domain

When the client connected (Mac OS X 10.6) using the Chrome browser, squid's 
cache.log reported that the ticket version number didn't match:

squid_kerb_auth: gss_accept_sec_context() failed: Unspecified GSS failure.  
Minor code may provide more information. Key version number for principal in 
key table is incorrect.

Using kvno HTTP/squidhost.my.domain on this client, the version number was 3, 
while doing the same on the proxy the version was zero.  So that made sense.

I fixed this by not using msktutil, and instead using ktpass on the Active Directory 
server and specifying -kvno 3.  I installed the resulting keytab on the proxy host 
and that error went away.
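
For reference, the ktpass invocation being described is along these lines; the 
flags, principal and account names are only an approximation and should be 
checked against Microsoft's documentation:

  ktpass -princ HTTP/squidhost.my.domain@MY.DOMAIN -mapuser squidhost -crypto RC4-HMAC-NT -ptype KRB5_NT_PRINCIPAL -kvno 3 -pass * -out HTTP.keytab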

Reading about ktpass and kerberos auth in Microsoft's KB, it said that the 
(squid) host needs to have an account created for it as a user in the domain.  
Weird, but I did this, using the host name as the user shortname.  I used this 
hostname in ktpass with -mapuser.

Now in squid's cache.log the logs show, in part,

2010/11/02 12:01:55| squid_kerb_auth: parseNegTokenInit failed with rc=102
2010/11/02 12:01:55| squid_kerb_auth: AF AA== r...@my.domain
2010/11/02 12:01:55| squid_kerb_ldap: Got User: rolf Domain: MY.DOMAIN
2010/11/02 12:01:55| squid_kerb_ldap: User domain loop: gr...@domain 
actiive-directory-gr...@my.domain
2010/11/02 12:01:55| squid_kerb_ldap: Found gr...@domain 
active-directory-gr...@my.domain
2010/11/02 12:01:55| squid_kerb_ldap: Setup Kerberos credential cache
2010/11/02 12:01:55| squid_kerb_ldap: Get default keytab file name
2010/11/02 12:01:55| squid_kerb_ldap: Got default keytab file name 
/etc/squid/HTTP.keytab
2010/11/02 12:01:55| squid_kerb_ldap: Get principal name from keytab 
/etc/squid/HTTP.keytab
2010/11/02 12:01:55| squid_kerb_ldap: Keytab entry has realm name: MY.DOMAIN
2010/11/02 12:01:55| squid_kerb_ldap: Found principal name: 
HTTP/squidhost.my.dom...@my.domain
2010/11/02 12:01:55| squid_kerb_ldap: Set credential cache to 
MEMORY:squid_ldap_20411
2010/11/02 12:01:55| squid_kerb_ldap: Got principal name 
HTTP/squidhost.my.dom...@my.domain
2010/11/02 12:01:55| squid_kerb_ldap: Stored credentials
2010/11/02 12:01:55| squid_kerb_ldap: Initialise ldap connection
2010/11/02 12:01:55| squid_kerb_ldap: Canonicalise ldap server name for domain 
MY.DOMAIN

Apart from the first line ... "failed with rc=102"  this looks ok.

Then there are many (from debugging I presume) instances of:

squid_kerb_ldap: Resolved SRV _ldap._tcp.MY.DOMAIN record to 
ad-domain-controller.my.domain
for various domain controllers on the network.

Then lots of 

2010/11/02 12:02:09| squid_kerb_ldap: Setting up connection to ldap server 
various-domain-servers-and-workstati...@my.domain:389
2010/11/02 12:02:09| squid_kerb_ldap: SASL not supported on system

Finally, these log entries show the deny reason - that I'm not a member of 
the group. But I can confirm that I am a member of the group:

2010/11/02 12:02:09| squid_kerb_ldap: Error during initialisation of ldap 
connection: Success
2010/11/02 12:02:09| squid_kerb_ldap: Error during initialisation of ldap 
connection: Success
2010/11/02 12:02:09| squid_kerb_ldap: User rolf is not member of gr...@domain 
active-directory-gr...@my.domain
2010/11/02 12:02:09| squid_kerb_ldap: Default domain loop: gr...@domain 
active-directory-gr...@my.domain
2010/11/02 12:02:09| squid_kerb_ldap: Default group loop: gr...@domain 
active-directory-gr...@my.domain
2010/11/02 12:02:09| squid_kerb_ldap: ERR

I have tried many combinations of service keytab creation and so on, but I 
cannot get any further than this.  Any help most appreciated.

thanks

rolf.








Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Amos Jeffries
On Tue, 2 Nov 2010 00:55:38 +, Declan White 
wrote:
> On Tue, Nov 02, 2010 at 12:10:25AM +, Amos Jeffries wrote:
>> On Mon, 1 Nov 2010 22:55:12 +, Declan White 
>> wrote:
>> > On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
>> >> On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
>> 
>> Looks like one of the side effects of 3090:
>> http://www.squid-cache.org/Versions/v3/3.1/changesets/
>> 
>> (just fixing the reply text makes squid produce a regular error page
>> where
>> it should have produced an auth challenge to get some usable Basic-auth
>> credentials).
> 
> Ah, does that mean there's a cleaner fix somewhere I should be stealing?

Yes. The patch at that link I gave.

> 
>> 64KB is about the buffer size Squid uses internally, so that is about
>> right for keeping a completely full buffer I think.
> 
> Is there any wisdom in me subtracting a few bytes to account for some
> memory overhead anywhere?

Not sure. The overheads I'm aware of are already accounted for separately
from the actual buffers.

>  
> By the by, I *think* I've gotten to the bottom of my 2046 byte read()
> buffer
> issue (which wasn't tcp_recv_bufsize after all). *If* I am reading this
> right:
> 
> http.cc:79: HttpStateData::HttpStateData 
> readBuf = new MemBuf;
> readBuf->init();
> 
> With no arguments, init() will cook up an empty 2048 size buffer, which
> gets
> carried all the way through the OO to the read() call (wherever that is)
> that dutifully only reads as much as the buffer says it currently has.
> 
> So I'm thinking of hardwiring something in there tomorrow and seeing
what
> explodes.

I agree. I came to the same conclusion. It *should* be grown exponentially,
so the reads are 2K, 2K, 4K. But that 4K read seems not to happen.


> 
>> Okay. I think from the resource comments above you want it OFF. Squid
>> will
>> respond to HTTP/1.1 "Expect:" requests immediately and broken clients
>> that
>> can't handle the required HTTP/1.1 replies disappear with error pages.
> 
> Ah, by 'broken clients' are we talking IE6 by any chance? :)

Mostly PDF readers and Java applets, from what I've heard.

Amos


Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 23:20:52 +, Declan White 
wrote:
> On Mon, Nov 01, 2010 at 10:55:12PM +, Declan White wrote:
>> On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
>> > On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
>> > > I went for a rummage in the code for the buffer size decisions, but
>> > > got
>> > > very very lost in the OO abstractions very quickly. Can anyone
point
>> > > me at
>> > > anything I can tweak to fix this?
>> > 
>> > It's a global macro defined by auto-probing your operating systems
TCP
>> > receiving buffer when building. Default is 16KB and max is 64KB.
There
>> > may
>> > also be auto-probing done at run time.
>> > 
>> > It is tunable at run-time with
>> > http://www.squid-cache.org/Doc/config/tcp_recv_bufsize/
>> 
>> Oh thank God! Thanks :) (and annoyed with myself that I missed that)
> 
> Nuts.. actually, that didn't do anything :(
> 
> 17314:  write(16, " G E T   / c f g m a n .".., 639)= 639
> 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
> 17314:  write(6, "\0\0\010\b\0\0\0\0\0\010".., 16)  = 16
> 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
> 17314:  read(11, " H T T P / 1 . 1   2 0 0".., 2046)= 2046
> 17314:  write(6, "\0\0\0\n\004\0\0", 8) = 8
> 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 2
> 17314:  write(10, " H T T P / 1 . 0   2 0 0".., 2180)   = 2180
> 17314:  read(11, " f o n t - s i z e :   3".., 2046)= 834
> 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 2
> 17314:  write(10, " f o n t - s i z e :   3".., 834)= 834
> 17314:  read(11, "   n o n e ;\n }\n\n # m".., 2046)= 1066
> 17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
> 17314:  write(10, "   n o n e ;\n }\n\n # m".., 1066)   = 1066
> 17314:  write(8, " [ 0 1 / N o v / 2 0 1 0".., 403) = 403
> 
> It's still reading from the remote server in 2046 byte lumps, which
meant
> three trips round the event loop where it might only have needed one.
> 
> I'm guessing that setting is for the kernel level TCP receive buffer,
and
> not the application read-from-that-buffer size.
> 
> I even doubled HTTP_REQBUF_SZ in defines.h for fun, and that did nothing
> either. I can't find where the read() size might be decided in the code.

Hmm. I think I've tracked this down. It appears to be MemBuf::init()
defaulting to 2KB of buffer. The default is defined in src/Membuf.cc.
Though I'm unsure why an exponential grow() is not triggered to make that
third read 4K.

If you can confirm that I'll work on making that config setting affect the
buffer initial values for the future.

Amos



Re: [squid-users] High cpu load with squid

2010-11-01 Thread Michał Prokopiuk
Hello,

On Mon, Nov 01, 2010 at 01:17:46 +, Amos Jeffries wrote:

Thank you for reply.

> > redirect_program /usr/local/bin/redirector.pl
> > redirect_children 80
> 
> Replace those with:
>   url_rewrite_program /usr/local/bin/redirector.pl
>   url_rewrite_children 80
> 
> > 
> > store_avg_object_size 8 kB
> > minimum_object_size 1 KB
> > maximum_object_size 100 MB
> > maximum_object_size_in_memory  1 MB
> > cache_mem 1000 MB
> > 
> > cache_swap_low 80%
> > cache_swap_high 100%
> 
> cache_swap_high should be something less than 100% (99% may be better).
> Squid will ONLY begin the aggressive space clearing when cache_swap_high
> threshold is passed. So with 100% this may cause active connections to be
> stopped while 20% of the cache is discarded.

I tried setting cache_swap_high to 95%, and today squid again used a lot of
CPU.

> 
> > # previous cache
> > #cache_dir ufs /var/spool/squid-cache/cache 3 12 256
> > cache_dir aufs /var/spool/squid-cache/cache 3 60 100
> > 
> > dns_nameservers 192.168.1.100 192.168.1.200
> > ipcache_size 8192
> > fqdncache_size 1024
> > positive_dns_ttl 2 hours
> > negative_dns_ttl 1 minutes
> > ipcache_low 90
> > ipcache_high 95
> > 
> > emulate_httpd_log on
> > access_log /var/log/squid/access.log
> 
> Remove emulate... and change access_log to:
>   access_log /var/log/squid/access.log common
> 
> > cache_log /var/log/squid/cache.log
> > cache_store_log /dev/null
> 
> Set this to "cache_store_log none".
> 
> > 
> > 
> > Rest of squid.conf are acls. I have about 30 - 40 mbps of traffic. On
> 
> You have "http_access allow sloneczko" at the top of this config.  so its
> possible your other ACL are not working.
> 
> > board 
> > are core2duo 1.8 ghz 4 gb ram on intel chipset, 2x SATA on soft raid
> (for
> > cache - md0).
> 
> by "soft raid" you mean *software* raid? That is a disk IO killer for
> Squid.

Yes, software raid. 

> 
> > 
> > Any ideas? When I run -k debug I can't do anything - load has increased,
> 
> > and nothing could be seen, thousands lines per second was write to
> > cache.log.
> 
> During a time when things go slow what seems to be the most common
> thing(s) logged?
> 

Nothing strange, for example "Invalid request". I will try your
suggestions and write back in five days.

-- 
Regards
Michał Prokopiuk
mich...@sloneczko.net
http://www.sloneczko.net


Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Declan White
On Tue, Nov 02, 2010 at 12:10:25AM +, Amos Jeffries wrote:
> On Mon, 1 Nov 2010 22:55:12 +, Declan White  wrote:
> > On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
> >> On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
> 
> Looks like one of the side effects of 3090:
> http://www.squid-cache.org/Versions/v3/3.1/changesets/
> 
> (just fixing the reply text makes squid produce a regular error page where
> it should have produced an auth challenge to get some usable Basic-auth
> credentials).

Ah, does that mean there's a cleaner fix somewhere I should be stealing?

> 64KB is about the buffer size Squid uses internally, so that is about
> right for keeping a completely full buffer I think.

Is there any wisdom in me subtracting a few bytes to account for some memory 
overhead anywhere?
 
By the by, I *think* I've gotten to the bottom of my 2046 byte read() buffer
issue (which wasn't tcp_recv_bufsize after all). *If* I am reading this right:

http.cc:79: HttpStateData::HttpStateData 
readBuf = new MemBuf;
readBuf->init();

With no arguments, init() will cook up an empty 2048 size buffer, which gets
carried all the way through the OO to the read() call (wherever that is)
that dutifully only reads as much as the buffer says it currently has.

So I'm thinking of hardwiring something in there tomorrow and seeing what 
explodes.

> Okay. I think from the resource comments above you want it OFF. Squid will
> respond to HTTP/1.1 "Expect:" requests immediately and broken clients that
> can't handle the required HTTP/1.1 replies disappear with error pages.

Ah, by 'broken clients' are we talking IE6 by any chance? :)

DeclanW


Re: [squid-users] TPROXY - possible in such network setup (hanging connections)?

2010-11-01 Thread Amos Jeffries
On Mon, 01 Nov 2010 23:55:27 +0100, Tomasz Chmielewski 
wrote:
> I'm trying to configure Squid to work in tproxy mode (IPv4, when it 
> works, IPv6), but my connections are hanging and I'm not sure how to 
> debug this.
> 
> 
> Perhaps my network setup won't just work with tproxy?
> 
> 
> My network setup looks like below:
> 
> 
> internet gateway - squid - client
> 
> 
> Internet gateway, squid, client - all have public IPv4 addresses.
> 
> 
> The client has squid IP address set as a gateway for addresses I'd like 
> to proxy.
> If I ping the destination from the client, all packets go through the 
> proxy, but the replies don't go through the proxy.

This is called asymmetrical routing. Your network routing structure needs
to be altered to symmetrical routing for the reply traffic to work with
TPROXY.

"ping" is also different protocol entirely (ICMP) to the ones which TPROXY
works on (TCP/UDP). There are known bugs in the ICMP bits related to
TPROXY. The kernel guys have patches which are coming out alongside IPv6
support in kernel 2.6.37.

> 
> I see the website in the internet gets TCP packets with client IP and 
> replies to them. Client receives packets with website IPs.

Good.

> 
> However, the connection hangs:
> 
> $ wget -O /dev/null example.com
> --2010-11-02 06:48:51--  http://example.com
> Resolving example.com... 1.2.3.4
> Connecting to example.com|1.2.3.4|:80... connected.
> HTTP request sent, awaiting response...
> 
> 
> If I press ctrl+c on the client, Squid logs the page I tried to access:
> 
> 1288651691.229  29850 client_ip TCP_MISS/000 0 GET http://example.com/ -

> DIRECT/1.2.3.4 -
> 
> 
> What is wrong in my setup? It works when I use NAT, but I'd like to use 
> IPv6 too, so I have to use TPROXY.

Find out why the reply packets are not coming back to Squid. Fix that and
this should start working.

Amos


Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 22:55:12 +, Declan White 
wrote:
> On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
>> On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
>> > I went for a rummage in the code for the buffer size decisions, but
got
>> > very very lost in the OO abstractions very quickly. Can anyone point
>> > me at
>> > anything I can tweak to fix this?
>> 
>> It's a global macro defined by auto-probing your operating systems TCP
>> receiving buffer when building. Default is 16KB and max is 64KB. There
>> may
>> also be auto-probing done at run time.
>> 
>> It is tunable at run-time with
>> http://www.squid-cache.org/Doc/config/tcp_recv_bufsize/
> 
> Oh thank God! Thanks :) (and annoyed with myself that I missed that)
> 
>> The others have already covered the main points of this. ufdbGuard is
>> probably the way to go once you have restricted the size down by
>> elminiating all the entries which can be done with dstdomain and other
>> faster ACL types.
> 
> Aye, I've got much to ruminate over, but it does all sounds promising.
>  
>> > Beyond that, I assume, to get the most out of a multi-cpu system I
>> > should
>> > be running one squid per CPU, which means I need more IP's and that
>> > they
>> > can't share their memory or disk caches with each other directly, and
I
>> > would need to switch on HTCP to try and re-merge them?
>> 
>> Possibly. You may want to test out 3.2 with SMP support. Reports have
>> been
>> good so far (for a beta).
> 
> Alas I'm already flying a little too close to the wind just running
3.1.9. 
> This'll all be live soon, now we traced a ftp code nullref coredump :
> 
> +++ ../squid-3.1.8/src/ftp.cc   Wed Oct 27 14:21:01 2010
> @@ -3707,1 +3707,1 @@
> -else
> +else if (ctrl.last_reply)
> @@ -3709,0 +3709,2 @@
> +else
> +reply = "" ; 

Looks like one of the side effects of 3090:
http://www.squid-cache.org/Versions/v3/3.1/changesets/

(just fixing the reply text makes squid produce a regular error page where
it should have produced an auth challenge to get some usable Basic-auth
credentials).

> 
>> > Build: Sun Solaris 9
>> > PATH=~/sunstudio12.0/bin:$PATH ./configure CC=cc CXX=CC CFLAGS="-fast
>> > -xtarget=ultra3i -m64 -xipo" CXXFLAGS="-fast -xtarget=ultra3i -m64
>> > -xipo"
>> > --enable-cache-digests --enable-removal-policies=lru,heap
>> > --enable-storeio=aufs,ufs --enable-devpoll
>> 
>> Ah. You will definitely be wanting 3.1.9. /dev/poll support is included
>> and several ACL problems specific to the S9 are fixed.
> 
> Aye, I'm the one that whined at my local dev to patch devpoll back in
;-)
> 
> Actually, I *just* found out my freshly deployed 3.1.9 with
> --enable-devpoll
> does NOT use devpoll, as configure prioritises poll() above it, which
> kinda defeats the point of the exercise :)

Gah. My fault. Sorry. Fix applied. It *may* have been in time for today's
snapshot.

> 
> --- configure~  Mon Nov  1 21:26:53 2010
> +++ configure   Mon Nov  1 21:26:53 2010
> @@ -46912,10 +46912,10 @@
> SELECT_TYPE="epoll"
>  elif test -z "$disable_kqueue" && test "$ac_cv_func_kqueue" = "yes" ;
then
> SELECT_TYPE="kqueue"
> -elif test -z "$disable_poll" && test "$ac_cv_func_poll" = "yes" ; then
> -SELECT_TYPE="poll"
>  elif test "x$enable_devpoll" != "xno" && test "x$ac_cv_devpoll_works" =
>  "xyes"; then
>  SELECT_TYPE="devpoll"
> +elif test -z "$disable_poll" && test "$ac_cv_func_poll" = "yes" ; then
> +SELECT_TYPE="poll"
>  elif test -z "$disable_select" && test "$ac_cv_func_select" = "yes" ;
then
> case "$host_os" in
> mingw|mingw32)
> 
> has fixed that. Yes, I should have edited the .in and autoconfed, but
I'm
> scared of autoconf.
> 
>> > Tuney bits of Config:
>> > htcp_port 0
>> > icp_port 0
>> > digest_generation off   
>> > quick_abort_min 0 KB
>> > quick_abort_max 0 KB
>> > read_ahead_gap 64 KB
>> > store_avg_object_size 16 KB 
>> > read_timeout 5 minutes  
>> > request_timeout 30 seconds  
>> > persistent_request_timeout 30 seconds   
>> > pconn_timeout 3 seconds
>> 
>> NOTE: pconn_timeout tuning can no longer be done based on info from
older
>> versions. There have been a LOT of fixes that make 3.1.8+ pconn support
>> HTTP compliant, used more often and less resources hungry than older
>> versions.
> 
> Oh I hadn't measured it or anything :) I've just seen linux servers
> collapse
> from complications with SYN queues and client exponential backoff. I
just
> need a hint of a permanent connection to avoid that connection-thrashing
> scenario, but I don't have the resources to keep things around 'just in
> case'.

This reminds me we don't have a max limit on active pconns.

>  
>> > cache_mem 512 MB
>> > maximum_object_size_in_memory 64 KB 
>> 
>> NP: It's worth noting that 3.x has fixed the large file in memory
>> problems
>> which 2.x suffers from. 3.x will handle them in linear time instead of
>> with

Re: [squid-users] TPROXY - possible in such network setup (hanging connections)?

2010-11-01 Thread Tomasz Chmielewski

On 01.11.2010 23:55, Tomasz Chmielewski wrote:

I'm trying to configure Squid to work in tproxy mode (IPv4, when it
works, IPv6), but my connections are hanging and I'm not sure how to
debug this.


Perhaps my network setup won't just work with tproxy?


My network setup looks like below:


internet gateway - squid - client


Internet gateway, squid, client - all have public IPv4 addresses.


(...)


If I press ctrl+c on the client, Squid logs the page I tried to access:

1288651691.229 29850 client_ip TCP_MISS/000 0 GET http://example.com/ -
DIRECT/1.2.3.4 -


What is wrong in my setup? It works when I use NAT, but I'd like to use
IPv6 too, so I have to use TPROXY.


I figured this entry on the gateway helps:

route add -host  gw 
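
With placeholder addresses, that amounts to giving the gateway an explicit
return route for the client via the Squid box, so the reply packets flow back
through the proxy:

  # on the internet gateway; 203.0.113.10 = client, 198.51.100.5 = squid box
  route add -host 203.0.113.10 gw 198.51.100.5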


--
Tomasz Chmielewski
http://wpkg.org


Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Declan White
On Mon, Nov 01, 2010 at 10:55:12PM +, Declan White wrote:
> On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
> > On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
> > > I went for a rummage in the code for the buffer size decisions, but got
> > > very very lost in the OO abstractions very quickly. Can anyone point me at
> > > anything I can tweak to fix this?
> > 
> > It's a global macro defined by auto-probing your operating systems TCP
> > receiving buffer when building. Default is 16KB and max is 64KB. There may
> > also be auto-probing done at run time.
> > 
> > It is tunable at run-time with
> > http://www.squid-cache.org/Doc/config/tcp_recv_bufsize/
> 
> Oh thank God! Thanks :) (and annoyed with myself that I missed that)

Nuts.. actually, that didn't do anything :(

17314:  write(16, " G E T   / c f g m a n .".., 639)= 639
17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
17314:  write(6, "\0\0\010\b\0\0\0\0\0\010".., 16)  = 16
17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
17314:  read(11, " H T T P / 1 . 1   2 0 0".., 2046)= 2046
17314:  write(6, "\0\0\0\n\004\0\0", 8) = 8
17314:  ioctl(6, DP_POLL, 0x100459B90)  = 2
17314:  write(10, " H T T P / 1 . 0   2 0 0".., 2180)   = 2180
17314:  read(11, " f o n t - s i z e :   3".., 2046)= 834
17314:  ioctl(6, DP_POLL, 0x100459B90)  = 2
17314:  write(10, " f o n t - s i z e :   3".., 834)= 834
17314:  read(11, "   n o n e ;\n }\n\n # m".., 2046)= 1066
17314:  ioctl(6, DP_POLL, 0x100459B90)  = 1
17314:  write(10, "   n o n e ;\n }\n\n # m".., 1066)   = 1066
17314:  write(8, " [ 0 1 / N o v / 2 0 1 0".., 403) = 403

It's still reading from the remote server in 2046 byte lumps, which meant
three trips round the event loop where it might only have needed one.

I'm guessing that setting is for the kernel level TCP receive buffer, and
not the application read-from-that-buffer size.

I even doubled HTTP_REQBUF_SZ in defines.h for fun, and that did nothing
either. I can't find where the read() size might be decided in the code.

DeclanW


Re: [squid-users] Problem with ACL (disabling download)

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 23:01:42 +0100, Konrado Z 
wrote:
> Thanks for your response.
> 
>>> acl officeFiles urlpath_regex "/etc/squid/officeFiles"
>>>
>>> http_access deny clients workingHours funWebsites
>>> http_access deny clients !officeFiles
>>> http_access allow all
>>
>> NP: "allow all" means traffic from the entire Internet. That should be
>> "allow clients".
>>
> 
> Thanks that is a useful tip
> 
>>
>> As requested earlier:
>>  "Please list the exact fill set of patterns you are using. One of them
>> is probably wrong."
>>
>> That means the exact and full content of /etc/squid/officeFiles. Sorry
if
>> I was unclear.
> 
> \.[Dd][Oo][Cc]$
> \.[Pp][Dd][Ff]$
> \.[Xx][Ll][Ss]$
> \.[Zz][Ii][Pp]$
> \.[Gg][Ii][Ff]$
> \.[Pp][Pp][Tt]$
> 
> And jpg, rar, tiff, bmp, txt in the same style.
> 
> I know that using this in "http_access deny clients !officeFiles"
> blocks the whole WWW service (clients are allowed to download
> only these types of files), but I'm not able to list every extension
> such as html, htm, php, asp etc. I want to make the Internet service

The pattern to match for the common web files is quite short:

 # defined white-list of acceptable web file extensions
 acl webFiles urlpath_regex -i
[^?]*(\.([xd]?html?|aspx?|php[345]?|cgi|css|js|jpe?g|gif|png|x[ms]l||xst|swf)|/)(\?.*)?$


> available for clients but I want to deny DOWNLOADING of files which are
> not typical office files. And how to do it? I have no idea :)

You face a concept problem:
  In HTTP *everything* including the HTML structure of the page is a
DOWNLOAD. There is zero difference in file type between a "Download"
button, a menu bar and some porn. Only the browser controls whether it asks
to save the object or displays it (eg. opening an XHTML web page in IE4
will ask you where to save it).

  Consider as well: how does one find these office files in order to
download them, when the HTML page (or HTML email), download button graphics,
captcha security, search scripts and layout CSS are all blocked?


I really think you need to clarify which object types and sizes are to be
limited. Then use http_reply_access on the reply's rep_mime_type, and
probably restrict the source websites in http_access.
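
As a rough illustration of that approach (the mime types are only examples
and would need tuning to the actual files involved):

  acl officeTypes rep_mime_type -i application/pdf application/msword text/plain
  acl officeTypes rep_mime_type -i application/vnd.ms-excel application/zip
  http_reply_access deny clients !officeTypes
  http_reply_access allow all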

> 
> P.S I was probably unclear earlier. Unfortunately my English is not so
> well, so sorry :)
> Konradoz
> 
>> Amos
>>
>>>
>>> 2010/11/1 Amos Jeffries :
 On 01/11/10 12:46, Konrado Z wrote:
>
> Hello,
>
> I have encountered a problem with ACL. I want to disable download
all
> kinds of files for subnet specified except pdf, doc, xls, txt, zip.
I
> have created officeFile file wich is shown below:
>
> \.[Dd][Oo][Cc]$
> \.[Tt][Xx][Tt]$
> etc.
>
> but,
>
> acl clients 192.168.56.0/24
> acl officeFiles urlpath_regex "/etc/squid/officeFiles"

 Using -i makes the pattern non-case-sensitive.
  acl officeFiles urlpath_regex -i \.(doc|txt)$


>
> and
>
> http_access deny clients !officeFiles
> http_access allow all #It has to be here because it is the last line
> in my config which is associated with other ACLS
>
>
> doesn't work because clients cannot open even google.com. I have no
> idea, how to overcome that problem. How to write this ACL and
> http_access to work properly.
> Please help.

 Please list the exact fill set of patterns you are using. One of them
>> is
 probably wrong.


 You could also match the actual reply mime types. This reply ACL
allows
 some
 types and denies the rest:

  acl webMime rep_mime_type -i text/html image/jpeg image/png
image/gif
 text/css
  http_reply_access deny !webMime


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2

>>


[squid-users] TPROXY - possible in such network setup (hanging connections)?

2010-11-01 Thread Tomasz Chmielewski
I'm trying to configure Squid to work in tproxy mode (IPv4, when it 
works, IPv6), but my connections are hanging and I'm not sure how to 
debug this.



Perhaps my network setup won't just work with tproxy?


My network setup looks like below:


internet gateway - squid - client


Internet gateway, squid, client - all have public IPv4 addresses.


The client has squid IP address set as a gateway for addresses I'd like 
to proxy.
If I ping the destination from the client, all packets go through the 
proxy, but the replies don't go through the proxy.



I see the website in the internet gets TCP packets with client IP and 
replies to them. Client receives packets with website IPs.


However, the connection hangs:

$ wget -O /dev/null example.com
--2010-11-02 06:48:51--  http://example.com
Resolving example.com... 1.2.3.4
Connecting to example.com|1.2.3.4|:80... connected.
HTTP request sent, awaiting response...


If I press ctrl+c on the client, Squid logs the page I tried to access:

1288651691.229  29850 client_ip TCP_MISS/000 0 GET http://example.com/ - 
DIRECT/1.2.3.4 -



What is wrong in my setup? It works when I use NAT, but I'd like to use 
IPv6 too, so I have to use TPROXY.



--
Tomasz Chmielewski
http://wpkg.org


Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Declan White
On Mon, Nov 01, 2010 at 09:36:53PM +, Amos Jeffries wrote:
> On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
> > I went for a rummage in the code for the buffer size decisions, but got
> > very very lost in the OO abstractions very quickly. Can anyone point me at
> > anything I can tweak to fix this?
> 
> It's a global macro defined by auto-probing your operating systems TCP
> receiving buffer when building. Default is 16KB and max is 64KB. There may
> also be auto-probing done at run time.
> 
> It is tunable at run-time with
> http://www.squid-cache.org/Doc/config/tcp_recv_bufsize/

Oh thank God! Thanks :) (and annoyed with myself that I missed that)

> The others have already covered the main points of this. ufdbGuard is
> probably the way to go once you have restricted the size down by
> elminiating all the entries which can be done with dstdomain and other
> faster ACL types.

Aye, I've got much to ruminate over, but it does all sounds promising.
 
> > Beyond that, I assume, to get the most out of a multi-cpu system I should
> > be running one squid per CPU, which means I need more IP's and that they
> > can't share their memory or disk caches with each other directly, and I
> > would need to switch on HTCP to try and re-merge them?
> 
> Possibly. You may want to test out 3.2 with SMP support. Reports have been
> good so far (for a beta).

Alas I'm already flying a little too close to the wind just running 3.1.9. 
This'll all be live soon, now that we've traced an ftp code nullref coredump:

+++ ../squid-3.1.8/src/ftp.cc   Wed Oct 27 14:21:01 2010
@@ -3707,1 +3707,1 @@
-else
+else if (ctrl.last_reply)
@@ -3709,0 +3709,2 @@
+else
+reply = "" ; 

> > Build: Sun Solaris 9
> > PATH=~/sunstudio12.0/bin:$PATH ./configure CC=cc CXX=CC CFLAGS="-fast
> > -xtarget=ultra3i -m64 -xipo" CXXFLAGS="-fast -xtarget=ultra3i -m64 -xipo"
> > --enable-cache-digests --enable-removal-policies=lru,heap
> > --enable-storeio=aufs,ufs --enable-devpoll
> 
> Ah. You will definitely be wanting 3.1.9. /dev/poll support is included
> and several ACL problems specific to the S9 are fixed.

Aye, I'm the one that whined at my local dev to patch devpoll back in ;-)

Actually, I *just* found out my freshly deployed 3.1.9 with --enable-devpoll
does NOT use devpoll, as configure prioritises poll() above it, which
kinda defeats the point of the exercise :)

--- configure~  Mon Nov  1 21:26:53 2010
+++ configure   Mon Nov  1 21:26:53 2010
@@ -46912,10 +46912,10 @@
SELECT_TYPE="epoll"
 elif test -z "$disable_kqueue" && test "$ac_cv_func_kqueue" = "yes" ; then
SELECT_TYPE="kqueue"
-elif test -z "$disable_poll" && test "$ac_cv_func_poll" = "yes" ; then
-SELECT_TYPE="poll"
 elif test "x$enable_devpoll" != "xno" && test "x$ac_cv_devpoll_works" = 
"xyes"; then
 SELECT_TYPE="devpoll"
+elif test -z "$disable_poll" && test "$ac_cv_func_poll" = "yes" ; then
+SELECT_TYPE="poll"
 elif test -z "$disable_select" && test "$ac_cv_func_select" = "yes" ; then
case "$host_os" in
mingw|mingw32)

has fixed that. Yes, I should have edited the .in and autoconfed, but I'm 
scared of autoconf.

> > Tuney bits of Config:
> > htcp_port 0
> > icp_port 0
> > digest_generation off   
> > quick_abort_min 0 KB
> > quick_abort_max 0 KB
> > read_ahead_gap 64 KB
> > store_avg_object_size 16 KB 
> > read_timeout 5 minutes  
> > request_timeout 30 seconds  
> > persistent_request_timeout 30 seconds   
> > pconn_timeout 3 seconds
> 
> NOTE: pconn_timeout tuning can no longer be done based on info from older
> versions. There have been a LOT of fixes that make 3.1.8+ pconn support
> HTTP compliant, used more often and less resources hungry than older
> versions.

Oh I hadn't measured it or anything :) I've just seen linux servers collapse
from complications with SYN queues and client exponential backoff. I just
need a hint of a permanent connection to avoid that connection-thrashing
scenario, but I don't have the resources to keep things around 'just in case'.
 
> > cache_mem 512 MB
> > maximum_object_size_in_memory 64 KB 
> 
> NP: It's worth noting that 3.x has fixed the large file in memory problems
> which 2.x suffers from. 3.x will handle them in linear time instead of with
> exponential CPU load.

Good to hear :) But I don't have the memory to stretch much beyond 512 atm,
as squid seems to take 1.2GB VM with these settings alone, and no disk cache.
I do wonder if I overcooked the read_ahead_gap though...
 
> > memory_replacement_policy heap GDSF
> > ignore_expect_100 on
> 
> If this is actually a problem you may benefit extra from 3.2 beta here as
> well.

The GDSF is just to up the per-req hits. I'm hoping to get disk cache going
for larger objects later with the opposite policy emphasis.

To be frank, I don't know if I need ignore_expect_100 on or not :)
 
Thanks for the quick response!

DeclanW


Re: [squid-users] Kerb auth with LDAP groups

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 17:03:11 -0400, "Kelly, Jack"
 wrote:
> Hi everyone,
> I've successfully set up authentication to my proxy with squid_kerb_auth
> to get us away from using basic LDAP authentication for everything. I
> used the config guide from the squid-cache wiki (below) which worked
> perfectly.
> http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
> 
> 
> One thing I'd like to do is continue using LDAP Groups and/or
> Organizational Units to grant permissions to certain websites. So my
> question is in two parts:
> 
> Is there a way to use squid_ldap_auth such that it will only prompt for
> credentials when you try to visit a certain website? (Previously I've
> had it set up so it would prompt you right when the browser opens.)

This is merely a matter of ACL organization. http_access (and other
*_access lines) are tested left-to-right top-to-bottom. So place the group
ACL on the end of a line which starts by testing the website with a
dstdomain ACL.

  acl foo dstdomain .example.com
  acl people external ldapGroups ...
  http_access deny foo !people
  ...

> 
> Alternatively: Is there a straightforward equivalent to squid_ldap_group
> when using Kerberos authentication?

"squid_ldap_group -K" strips the Kerberos domain parts from the
credentials. Allowing group lookup against NTLM.

Markus' squid_kerb_auth helper is bundled with 3.2 under a slightly changed
name. It's available as a stand-alone helper for older Squid from
http://sourceforge.net/projects/squidkerbauth/files/

> 
> Running 3.1.1 on Ubuntu x64, installed from Synaptic.

You need an upgrade. If there is not a newer version of squid3 in synaptic
(Ubuntu supplies 3.0.STABLE25 and 3.1.6) there are ported source packages
for 3.1.9 up at https://launchpad.net/~yadi/+archive/ppa

Amos


Re: [squid-users] Problem with ACL (disabling download)

2010-11-01 Thread Konrado Z
Thanks for your response.

>> acl officeFiles urlpath_regex "/etc/squid/officeFiles"
>>
>> http_access deny clients workingHours funWebsites
>> http_access deny clients !officeFiles
>> http_access allow all
>
> NP: "allow all" means traffic from the entire Internet. That should be
> "allow clients".
>

Thanks that is a useful tip

>
> As requested earlier:
>  "Please list the exact fill set of patterns you are using. One of them
> is probably wrong."
>
> That means the exact and full content of /etc/squid/officeFiles. Sorry if
> I was unclear.

\.[Dd][Oo][Cc]$
\.[Pp][Dd][Ff]$
\.[Xx][Ll][Ss]$
\.[Zz][Ii][Pp]$
\.[Gg][Ii][Ff]$
\.[Pp][Pp][Tt]$

And jpg, rar, tiff, bmp, txt in the same style.

I know that using this in "http_access deny clients !officeFiles"
blocks the whole WWW service (clients are allowed to download
only these types of files), but I'm not able to list every extension
such as html, htm, php, asp etc. I want to make the Internet service
available for clients, yet deny DOWNLOADING of files which are
not typical office files. And how to do it? I have no idea :)

P.S. I was probably unclear earlier. Unfortunately my English is not so
good, so sorry :)
Konradoz

> Amos
>
>>
>> 2010/11/1 Amos Jeffries :
>>> On 01/11/10 12:46, Konrado Z wrote:

 Hello,

 I have encountered a problem with ACL. I want to disable download all
 kinds of files for subnet specified except pdf, doc, xls, txt, zip. I
 have created officeFile file wich is shown below:

 \.[Dd][Oo][Cc]$
 \.[Tt][Xx][Tt]$
 etc.

 but,

 acl clients 192.168.56.0/24
 acl officeFiles urlpath_regex "/etc/squid/officeFiles"
>>>
>>> Using -i makes the pattern non-case-sensitive.
>>>  acl officeFiles urlpath_regex -i \.(doc|txt)$
>>>
>>>

 and

 http_access deny clients !officeFiles
 http_access allow all #It has to be here because it is the last line
 in my config which is associated with other ACLS


 doesn't work because clients cannot open even google.com. I have no
 idea, how to overcome that problem. How to write this ACL and
 http_access to work properly.
 Please help.
>>>
>>> Please list the exact fill set of patterns you are using. One of them
> is
>>> probably wrong.
>>>
>>>
>>> You could also match the actual reply mime types. This reply ACL allows
>>> some
>>> types and denies the rest:
>>>
>>>  acl webMime rep_mime_type -i text/html image/jpeg image/png image/gif
>>> text/css
>>>  http_reply_access deny !webMime
>>>
>>>
>>> Amos
>>> --
>>> Please be using
>>>  Current Stable Squid 2.7.STABLE9 or 3.1.8
>>>  Beta testers wanted for 3.2.0.2
>>>
>


Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 15:00:21 +, decl...@is.bbc.co.uk wrote:
> Greetings!
> 
> I am poking a potential squid upgrade from squid 2 to 3.1.8 with a new
> config, but it's added around 40% more CPU load, and I'm looking for 
> tune-ups.
> 
> One thing I notice it doing when I truss is read()ing HTTP responses
with
> only a 2046 byte buffer, whereas squid2 used 24Kb. This is making for a
> lot of
> unnecessary system calls and go-arounds in the main polling loop.
> 
> I went for a rummage in the code for the buffer size decisions, but got
> very
> very lost in the OO abstractions very quickly. Can anyone point me at
> anything I can tweak to fix this?

It's a global macro defined by auto-probing your operating system's TCP
receive buffer when building.
The default is 16KB and the max is 64KB. There may also be auto-probing done
at run time.

It is tunable at run-time with
http://www.squid-cache.org/Doc/config/tcp_recv_bufsize/
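
For example (the value is illustrative, and note that this only tunes the
kernel-level receive buffer, which as the rest of this thread shows turned
out not to be the source of the 2 KB reads):

  tcp_recv_bufsize 64 KB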

> 
> Besides that, I have a laaarge url_regexp file to process, and I was
> wondering if there was any benefit to trying to break this regexp out to
a
> perl helper process (and if anyone has a precooked setup doing this I
can
> borrow)

The others have already covered the main points of this. ufdbGuard is
probably the way to go once you have reduced the list size by
eliminating all the entries which can be handled with dstdomain and other
faster ACL types.

> 
> Beyond that, I assume, to get the most out of a multi-cpu system I
should
> be running one squid per CPU, which means I need more IP's and that they
> can't share their memory or disk caches with each other directly, and I
> would need to switch on HTCP to try and re-merge them?

Possibly. You may want to test out 3.2 with SMP support. Reports have been
good so far (for a beta).

> 
> While I'm musing here, is there any way to make an ACL construct that
makes
> a decision based on whether something is already cached? I have a lot of
> heavy ACL's, and if something is cached, in my case, it will prove that
it
> has previously passed all the ACL's and I can just return the cached
copy.

Not as such.  The "acl source hier_code NONE" ACL could do this in the
http_reply_access tests where the source is known. The cache index lookup
is one of the heavier operations and depends on all request adaptation
being done first. It may be possible to make a cache_status ACL or such
which works in adapted_http_access and later tests.

All ACLs with external data sources (DNS, external_acl_type, ident, etc.)
cache their results for faster re-use. Tuning the TTL for each of these can
reduce their impact.
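
For instance, the result-cache lifetimes are set per helper definition; the
helper path and times below are placeholders:

  external_acl_type myGroups ttl=3600 negative_ttl=60 %LOGIN /usr/local/bin/check_group.pl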

> 
> Build: Sun Solaris 9
> PATH=~/sunstudio12.0/bin:$PATH ./configure CC=cc CXX=CC CFLAGS="-fast
> -xtarget=ultra3i -m64 -xipo" CXXFLAGS="-fast -xtarget=ultra3i -m64
-xipo"
> --enable-cache-digests --enable-removal-policies=lru,heap
> --enable-storeio=aufs,ufs --enable-devpoll

Ah. You will definitely be wanting 3.1.9. /dev/poll support is included
and several ACL problems specific to the S9 are fixed.

> 
> Tuney bits of Config:
> htcp_port 0
> icp_port 0
> digest_generation off   
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> read_ahead_gap 64 KB
> store_avg_object_size 16 KB 
> read_timeout 5 minutes  
> request_timeout 30 seconds  
> persistent_request_timeout 30 seconds   
> pconn_timeout 3 seconds

NOTE: pconn_timeout tuning can no longer be done based on info from older
versions. There have been a LOT of fixes that make 3.1.8+ pconn support
HTTP compliant, used more often and less resources hungry than older
versions.

> cache_mem 512 MB
> maximum_object_size_in_memory 64 KB 

NP: It's worth noting that 3.x has fixed the large file in memory problems
which 2.x suffers from. 3.x will handle them in linear time instead of with
exponential CPU load.

> memory_replacement_policy heap GDSF
> ignore_expect_100 on

If this is actually a problem you may benefit extra from 3.2 beta here as
well.


Amos


RE: [squid-users] forward and reverse proxy in 3.1.x https forward proxy failing

2010-11-01 Thread Dean Weimer
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, November 01, 2010 3:57 PM
> To: Dean Weimer
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] forward and reverse proxy in 3.1.x https forward
> proxy failing
> 
> On Mon, 1 Nov 2010 12:41:44 -0500, "Dean Weimer" wrote:
> > I had an older machine that was still running 3.0 STABLE 12, that was
> > functioning as a forward and reverse proxy using port 80 for both, and a
> > reverse proxy for one site on port 443. The machine sits in a DMZ; the
> > forward proxy only directs out to web sites for machines connected through
> > WAN connections, and functions as a reverse proxy for those machines when
> > connecting to a couple of internal sites.  This machine had a hardware
> > failure last night and I was forced to put in place the newer machine that
> > had already had the software installed but wasn't configured or tested yet.
> >
> > The problem I am having is that this machine running squid 3.1.9 functions
> > fine as both forward and reverse proxy for http websites, and is working
> > for the reverse HTTPS site, though I had to use the sslproxy_cert_error acl
> > method to bypass a cert error; even though the cert is valid, it's not
> > accepting it.  That's a minor problem though, as it's functioning.  The
> > more pressing problem is that the HTTPS forward proxy is not working; the
> > logs show an error every time stating that a CONNECT method was received
> > on an accelerator port.
> >
> > 2010/11/01 12:26:43| clientProcessRequest: Invalid Request
> > 2010/11/01 12:26:44| WARNING: CONNECT method received on http Accelerator
> > port 80
> > 2010/11/01 12:26:44| WARNING: for request: CONNECT armmf.adobe.com:443
> > HTTP/1.0
> > User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR
> > 1.1.4322)
> > Host: armmf.adobe.com
> > Content-Length: 0
> > Proxy-Connection: Keep-Alive
> > Pragma: no-cache
> >
> > Is using the same port for both forward of http & https not allowed while
> > using it for a reverse proxy anymore?
> 
> It's never been allowed. The ability in older Squid was a bug.
> You will need a separate http_port line for the two modes if you want
> CONNECT tunnels.
> 
> It's a good idea to keep each of the four modes (forward, reverse,
> intercept and transparent) on separate http_port. From 3.1 onwards this is
> being enforced where possible.
> 
> Amos

Thanks for the reply Amos, I had come to that conclusion myself, about it not 
working anyway; I didn't realize it was a bug that allowed it in the old 
version though.  I have already configured an additional port and verified 
that it worked shortly after sending the first post.  The majority of our PCs' 
browsers are set to use a configuration script, and that has been corrected 
with the new port.  We have one application that has the proxy in an INI file, 
which will be delivered in our nightly polling process.  Now we just have to 
find the machines that are incorrectly set with a manual proxy setting and 
have them updated.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] Kerb auth with LDAP groups

2010-11-01 Thread Kelly, Jack
Hi everyone,
I've successfully set up authentication to my proxy with squid_kerb_auth
to get us away from using basic LDAP authentication for everything. I
used the config guide from the squid-cache wiki (below) which worked
perfectly.
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos


One thing I'd like to do is continue using LDAP Groups and/or
Organizational Units to grant permissions to certain websites. So my
question is in two parts:

Is there a way to use squid_ldap_auth such that it will only prompt for
credentials when you try to visit a certain website? (Previously I've
had it set up so it would prompt you right when the browser opens.)

Alternatively: Is there a straightforward equivalent to squid_ldap_group
when using Kerberos authentication?

Running 3.1.1 on Ubuntu x64, installed from Synaptic.

Thanks!
Jack
 




Re: [squid-users] forward and reverse proxy in 3.1.x https forward proxy failing

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 12:41:44 -0500, "Dean Weimer" wrote:
> I had an older machine that was still running 3.0 STABLE 12, that was
> functioning as a forward and reverse proxy using port 80 for both, and a
> reverse proxy for one site on port 443. The machine sits in a DMZ; the
> forward proxy only directs out to web sites for machines connected through
> WAN connections, and functions as a reverse proxy for those machines when
> connecting to a couple of internal sites.  This machine had a hardware
> failure last night and I was forced to put in place the newer machine that
> had already had the software installed but wasn't configured or tested yet.
> 
> The problem I am having is that this machine running squid 3.1.9 functions
> fine as both forward and reverse proxy for http websites, and is working
> for the reverse HTTPS site, though I had to use the sslproxy_cert_error acl
> method to bypass a cert error; even though the cert is valid, it's not
> accepting it.  That's a minor problem though, as it's functioning.  The
> more pressing problem is that the HTTPS forward proxy is not working; the
> logs show an error every time stating that a CONNECT method was received
> on an accelerator port.
> 
> 2010/11/01 12:26:43| clientProcessRequest: Invalid Request
> 2010/11/01 12:26:44| WARNING: CONNECT method received on http Accelerator
> port 80
> 2010/11/01 12:26:44| WARNING: for request: CONNECT armmf.adobe.com:443
> HTTP/1.0
> User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR
> 1.1.4322)
> Host: armmf.adobe.com
> Content-Length: 0
> Proxy-Connection: Keep-Alive
> Pragma: no-cache
> 
> Is using the same port for both forward of http & https not allowed while
> using it for a reverse proxy anymore?

It's never been allowed. The ability in older Squid was a bug.
You will need a separate http_port line for the two modes if you want
CONNECT tunnels.
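
Roughly, something like this (the second port number is only an example; use
whatever port your clients are already configured to send proxy requests to):

  # reverse proxy (accelerator) requests
  http_port 10.40.1.254:80 accel vhost
  # forward proxy requests, including CONNECT tunnels
  http_port 10.40.1.254:3128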

It's a good idea to keep each of the four modes (forward, reverse,
intercept and transparent) on separate http_port. From 3.1 onwards this is
being enforced where possible.

Amos


Re: [squid-users] Squid LOAD

2010-11-01 Thread Mr. Issa(*)
So mates, no more analysis?

On Sat, Oct 30, 2010 at 1:56 AM, Luis Daniel Lucio Quiroz
 wrote:
> On Friday 29 October 2010 15:34:46, Mr. Issa(*) wrote:
>> Where's IO 2.5% ?
>> Well I don't think the cache_log will cause that
> If you don't log, you won't know who is going where
>


Re: [squid-users] Problem with ACL (disabling download)

2010-11-01 Thread Amos Jeffries
On Mon, 1 Nov 2010 14:41:19 +0100, Konrado Z wrote:
> Hello,
> Thanks for the reply but I still have a problem
> 
> All my acls and http_access:
> acl clients 192.168.56.0/24
> acl funWebsites dstdom_regex "/etc/squid/funWebsites"
> acl workingHours time M T W H F 8:00-16:00

There are not meant to be any spaces between the day letters above. The
above with spaces will likely be blocking all day only on Mondays or not at
all.

> acl officeFiles urlpath_regex "/etc/squid/officeFiles"
> 
> http_access deny clients workingHours funWebsites
> http_access deny clients !officeFiles
> http_access allow all

NP: "allow all" means traffic from the entire Internet. That should be
"allow clients".

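Putting those two notes together, a sketch of how the rules might look (the
"src" type is an assumption that it was simply dropped when pasting the
config; this does not yet address the officeFiles pattern question):

  acl clients src 192.168.56.0/24
  acl funWebsites dstdom_regex "/etc/squid/funWebsites"
  # day letters run together, with no spaces
  acl workingHours time MTWHF 8:00-16:00
  acl officeFiles urlpath_regex "/etc/squid/officeFiles"

  http_access deny clients workingHours funWebsites
  http_access deny clients !officeFiles
  http_access allow clients
  http_access deny all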
> 
> But the problem for sure is here: http_access deny clients
> !officeFiles (the officeFiles file is presented in the 1st post) - I have
> checked it.
> 
> I want clients to have access to the Internet but to download only the
> files listed in the officeFiles file. But when I write
> http_access deny clients !officeFiles they cannot browse any website
> and can only download the files given. How should I write it, to give them
> access to the Internet and allow them to download only those 4 or 5 types
> of files?

As requested earlier:
  "Please list the exact full set of patterns you are using. One of them
is probably wrong."

That means the exact and full content of /etc/squid/officeFiles. Sorry if
I was unclear.

Amos

> 
> 2010/11/1 Amos Jeffries :
>> On 01/11/10 12:46, Konrado Z wrote:
>>>
>>> Hello,
>>>
>>> I have encountered a problem with ACL. I want to disable download all
>>> kinds of files for subnet specified except pdf, doc, xls, txt, zip. I
>>> have created officeFile file which is shown below:
>>>
>>> \.[Dd][Oo][Cc]$
>>> \.[Tt][Xx][Tt]$
>>> etc.
>>>
>>> but,
>>>
>>> acl clients 192.168.56.0/24
>>> acl officeFiles urlpath_regex "/etc/squid/officeFiles"
>>
>> Using -i makes the pattern non-case-sensitive.
>>  acl officeFiles urlpath_regex -i \.(doc|txt)$
>>
>>
>>>
>>> and
>>>
>>> http_access deny clients !officeFiles
>>> http_access allow all #It has to be here because it is the last line
>>> in my config which is associated with other ACLS
>>>
>>>
>>> doesn't work because clients cannot open even google.com. I have no
>>> idea, how to overcome that problem. How to write this ACL and
>>> http_access to work properly.
>>> Please help.
>>
>> Please list the exact full set of patterns you are using. One of them is
>> probably wrong.
>>
>>
>> You could also match the actual reply mime types. This reply ACL allows
>> some
>> types and denies the rest:
>>
>>  acl webMime rep_mime_type -i text/html image/jpeg image/png image/gif
>> text/css
>>  http_reply_access deny !webMime
>>
>>
>> Amos
>> --
>> Please be using
>>  Current Stable Squid 2.7.STABLE9 or 3.1.8
>>  Beta testers wanted for 3.2.0.2
>>


[squid-users] Re: Re: Re: squid_ldap_group against nested groups/Ous

2010-11-01 Thread Markus Moeller
Let me see if I can get an 8.0/7.x build. Does it compile AND work on 8.1, or 
do you still see the crash when reading the keytab?


Markus

"Eugene M. Zheganin"  wrote in message 
news:4ccd5f0e.9080...@zhegan.in...

 Hi.

On 30.10.2010 00:14, Markus Moeller wrote:

Hi,

I now have a 64bit freebsd box and cannot replicate the error. Also, the 
compile errors I got were only a duplicate symbol problem in support_group and 
the sasl prototype error.


Yeah, I agree, on a fresh 8.1 installation it does compile (with -Werror 
commented out).

On non-fresh 8.0/7.x it doesn't.

8.0 has heimdal 1.1.0 and 7.x has 0.6.3; however the symptoms are the 
same.


Is there something I can do to narrow the scope, or is the supposed decision 
to upgrade everywhere to 8.1?


Thanks.
Eugene.







Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Marcus Kool


I am the author of ufdbGuard, a free URL filter for Squid.
You may want to check it out: ufdbGuard is multithreaded and supports
POSIX regular expressions.

If you do not want to use ufdbGuard, here is a tip:
ufdbGuard composes large REs from a set of "simple" REs:
largeRE = (RE1)|(RE2)|...|(REn)
which reduces the CPU time for the RE matching logic considerably.
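
As a tiny illustration of that composition in squid.conf terms (the patterns
are made up):

  # one combined RE instead of three separate patterns
  acl combinedSites dstdom_regex -i (\.example-a\.com$)|(\.example-b\.com$)|(\.example-c\.com$)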

Marcus


Henrik K wrote:

On Mon, Nov 01, 2010 at 03:00:21PM +, decl...@is.bbc.co.uk wrote:

Besides that, I have a laaarge url_regexp file to process, and I was
wondering if there was any benefit to trying to break this regexp out to a
perl helper process (and if anyone has a precooked setup doing this I can
borrow)


The golden rule is to run as few regexps as possible... no matter how big they
are.

Dump your regexps through Regexp::Assemble:
http://search.cpan.org/dist/Regexp-Assemble/Assemble.pm

Then compile Squid with PCRE support (LDFLAGS="-lpcre -lpcreposix") for
added performance.

I've only modified Squid2 myself, but for Squid3 you probably need to change
this in cache_cf.cc:

- while (fgets(config_input_line, BUFSIZ, fp)) {
+ while (fgets(config_input_line, 65535, fp)) {

... because Squid can't read a huge regexp in a single line otherwise.
Of course your script must not emit so many regexes that a single line goes
over that limit.

I'm also assuming you've converted as many rules as possible to dstdomain
etc, which is the first thing to do.





Re: [squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread Henrik K
On Mon, Nov 01, 2010 at 03:00:21PM +, decl...@is.bbc.co.uk wrote:
> 
> Besides that, I have a laaarge url_regexp file to process, and I was
> wondering if there was any benefit to trying to break this regexp out to a
> perl helper process (and if anyone has a precooked setup doing this I can
> borrow)

The golden rule is to run as few regexps as possible... no matter how big they
are.

Dump your regexps through Regexp::Assemble:
http://search.cpan.org/dist/Regexp-Assemble/Assemble.pm

Then compile Squid with PCRE support (LDFLAGS="-lpcre -lpcreposix") for
added performance.

I've only modified Squid2 myself, but for Squid3 you probably need to change
this in cache_cf.cc:

- while (fgets(config_input_line, BUFSIZ, fp)) {
+ while (fgets(config_input_line, 65535, fp)) {

... because Squid can't read a huge regexp in a single line otherwise.
Of course your script must not emit so many regexes that a single line goes
over that limit.

I'm also assuming you've converted as many rules as possible to dstdomain
etc, which is the first thing to do.



[squid-users] forward and reverse proxy in 3.1.x https forward proxy failing

2010-11-01 Thread Dean Weimer
I had an older machine that was still running 3.0 STABLE 12, that was 
functioning as a forward and reverse proxy using port 80 for both, and a 
reverse proxy for one site on port 443. The machine sits in a DMZ; the forward 
proxy only directs out to web sites for machines connected through WAN 
connections, and functions as a reverse proxy for those machines when 
connecting to a couple of internal sites.  This machine had a hardware failure 
last night and I was forced to put in place the newer machine that had already 
had the software installed but wasn't configured or tested yet.

The problem I am having is that this machine running squid 3.1.9 functions fine 
as both forward and reverse proxy for http websites, and is working for the 
reverse HTTPS site, though I had to use the sslproxy_cert_error acl method to 
bypass a cert error; even though the cert is valid, it's not accepting it.  
That's a minor problem though, as it's functioning.  The more pressing problem 
is that the HTTPS forward proxy is not working; the logs show an error every 
time stating that a CONNECT method was received on an accelerator port.

2010/11/01 12:26:43| clientProcessRequest: Invalid Request
2010/11/01 12:26:44| WARNING: CONNECT method received on http Accelerator port 
80
2010/11/01 12:26:44| WARNING: for request: CONNECT armmf.adobe.com:443 HTTP/1.0
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET CLR 
1.1.4322)
Host: armmf.adobe.com
Content-Length: 0
Proxy-Connection: Keep-Alive
Pragma: no-cache

Is using the same port for both forward of http & https not allowed while using 
it for a reverse proxy anymore?

I tried adding the new allow-direct option to my http_port line with no change 
in behavior.

Current line is:
http_port 10.40.1.254:80 accel vhost allow-direct

Anyone have any ideas as to what I am doing wrong here?


Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co
 Phone: (660) 269-3448
 Fax: (660) 269-3950




[squid-users] Squid network read()'s only 2k long?

2010-11-01 Thread declanw
Greetings!

I am poking a potential squid upgrade from squid 2 to 3.1.8 with a new
config, but it's added around 40% more CPU load, and I'm looking for 
tune-ups.

One thing I notice it doing when I truss is read()ing HTTP responses with
only a 2046 byte buffer, whereas squid2 used 24Kb. This is making for a lot of
unnecessary system calls and go-arounds in the main polling loop.

I went for a rummage in the code for the buffer size decisions, but got very
very lost in the OO abstractions very quickly. Can anyone point me at
anything I can tweak to fix this?

Besides that, I have a laaarge url_regexp file to process, and I was
wondering if there was any benefit to trying to break this regexp out to a
perl helper process (and if anyone has a precooked setup doing this I can
borrow)

Beyond that, I assume, to get the most out of a multi-cpu system I should
be running one squid per CPU, which means I need more IP's and that they
can't share their memory or disk caches with each other directly, and I
would need to switch on HTCP to try and re-merge them?

While I'm musing here, is there any way to make an ACL construct that makes
a decision based on whether something is already cached? I have a lot of
heavy ACL's, and if something is cached, in my case, it will prove that it
has previously passed all the ACL's and I can just return the cached copy.

Build: Sun Solaris 9
PATH=~/sunstudio12.0/bin:$PATH ./configure CC=cc CXX=CC CFLAGS="-fast 
-xtarget=ultra3i -m64 -xipo" CXXFLAGS="-fast -xtarget=ultra3i -m64 -xipo" 
--enable-cache-digests --enable-removal-policies=lru,heap 
--enable-storeio=aufs,ufs --enable-devpoll

Tuney bits of Config:
htcp_port 0
icp_port 0
digest_generation off   
quick_abort_min 0 KB
quick_abort_max 0 KB
read_ahead_gap 64 KB
store_avg_object_size 16 KB 
read_timeout 5 minutes  
request_timeout 30 seconds  
persistent_request_timeout 30 seconds   
pconn_timeout 3 seconds 
cache_mem 512 MB
maximum_object_size_in_memory 64 KB 
memory_replacement_policy heap GDSF
ignore_expect_100 on
client_db off   

Grateful for any tips and pointers.

DeclanW


Re: [squid-users] Problem with ACL (disabling download)

2010-11-01 Thread Konrado Z
Hello,
Thanks for the reply but I still have a problem

All my acls and http_access:
acl clients 192.168.56.0/24
acl funWebsites dstdom_regex "/etc/squid/funWebsites"
acl workingHours time M T W H F 8:00-16:00
acl officeFiles urlpath_regex "/etc/squid/officeFiles"

http_access deny clients workingHours funWebsites
http_access deny clients !officeFiles
http_access allow all

But the problem for sure is here: http_access deny clients
!officeFiles (the officeFiles file is presented in the 1st post) - I have
checked it.

I want clients to have access to the Internet but to download only the
files listed in the officeFiles file. But when I write
http_access deny clients !officeFiles they cannot browse any website
and can only download the files given. How should I write it, to give them
access to the Internet and allow them to download only those 4 or 5 types
of files?

?

2010/11/1 Amos Jeffries :
> On 01/11/10 12:46, Konrado Z wrote:
>>
>> Hello,
>>
>> I have encountered a problem with ACL. I want to disable download all
>> kinds of files for subnet specified except pdf, doc, xls, txt, zip. I
>> have created officeFile file which is shown below:
>>
>> \.[Dd][Oo][Cc]$
>> \.[Tt][Xx][Tt]$
>> etc.
>>
>> but,
>>
>> acl clients 192.168.56.0/24
>> acl officeFiles urlpath_regex "/etc/squid/officeFiles"
>
> Using -i makes the pattern non-case-sensitive.
>  acl officeFiles urlpath_regex -i \.(doc|txt)$
>
>
>>
>> and
>>
>> http_access deny clients !officeFiles
>> http_access allow all #It has to be here because it is the last line
>> in my config which is associated with other ACLS
>>
>>
>> doesn't work because clients cannot open even google.com. I have no
>> idea, how to overcome that problem. How to write this ACL and
>> http_access to work properly.
>> Please help.
>
> Please list the exact full set of patterns you are using. One of them is
> probably wrong.
>
>
> You could also match the actual reply mime types. This reply ACL allows some
> types and denies the rest:
>
>  acl webMime rep_mime_type -i text/html image/jpeg image/png image/gif
> text/css
>  http_reply_access deny !webMime
>
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.8
>  Beta testers wanted for 3.2.0.2
>