RE: [squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Martin Sperl
Would it change anything if we reduced the epoll_wait timeouts from 1000ms (but 
effectively from 10ms) to 1ms?

E.g. via this patch:
--- src/comm_epoll.cc.orig  2010-09-15 20:06:11.0 +
+++ src/comm_epoll.cc   2010-09-15 20:07:32.0 +
@@ -66,7 +66,7 @@
 #include 

 static int kdpfd;
-static int max_poll_time = 1000;
+static int max_poll_time = 1;

 static struct epoll_event *pevents;

@@ -333,7 +333,7 @@
 void
 comm_quick_poll_required(void)
 {
-max_poll_time = 10;
+max_poll_time = 1;
 }

 #endif /* USE_EPOLL */
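
For reference, applying it would look something like this (the patch file name
and build steps are examples; adjust to your tree and packaging):

  cd squid-3.0.STABLE9/
  patch -p0 < epoll_timeout.patch   # hypothetical file holding the diff above
  make && make install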

Thanks,
Martin

> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Donnerstag, 16. September 2010 01:02
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] Strange performance effects on squid during off
> peak hours
> 
> On Wed, 15 Sep 2010 20:53:04 +0100, Martin Sperl 
> wrote:
> > Hi Amos!
> >
> > Thanks for your feedback.
> >
> >> Squid is still largely IO event driven. If the network IO is less than
> >> say 3-4 req/sec Squid can have a queue of things waiting to happen which
> >> get delayed a long time (hundreds of ms) waiting to be kicked off.
> >>   Your overview seems to show that behaviour clearly.
> >>
> >> There have been some small improvements and fixes to several of the
> >> lagging things, but I think it's still there in even the latest Squid.
> >
> > Here the Hit/s statistics on this specific server for the time:
> > +--+---+---+
> > | h| allHPS| cssART|
> > +--+---+---+
> > |0 | 48.34 | 0.016 |
> > |1 | 49.80 | 0.015 |
> > |2 | 49.01 | 0.015 |
> > |3 | 47.08 | 0.018 |
> > |4 | 17.34 | 0.024 |
> > |5 |  4.00 | 0.042 |
> > |6 |  0.52 | 0.054 |
> > |7 |  9.02 | 0.034 |
> > |8 |  7.18 | 0.038 |
> > |9 |  8.25 | 0.035 |
> > |   10 |  9.45 | 0.034 |
> > |   11 | 14.71 | 0.030 |
> > |   12 | 23.94 | 0.023 |
> > |   13 | 31.04 | 0.021 |
> > |   14 | 35.02 | 0.020 |
> > |   15 | 38.87 | 0.019 |
> > |   16 | 40.92 | 0.019 |
> > |   17 | 43.39 | 0.017 |
> > |   18 | 45.62 | 0.016 |
> > |   19 | 47.58 | 0.017 |
> > |   20 | 51.91 | 0.014 |
> > |   21 | 53.65 | 0.014 |
> > |   22 | 40.87 | 0.016 |
> > |   23 | 47.40 | 0.016 |
> > +--+---+---+
> >
> > So to summarize it: we need to keep the number of hits above 30 hits/s for
> > squid, so that we get an acceptable response time.
> >
> > I believe it will need some convincing of management to get this
> > assumption tested in production ;)
> >
> > One other Question: is squid 3.1 "better" in this respect than 3.0?
> 
> Than 3.0? I believe so, though I have no data on it.
> The upper req/sec cap, where the most effort has gone, is 15%-20% higher; I
> have not done any serious testing like this with the lower limits before.
> 
> If you are able to, it would be very enlightening and helpful for many, I
> think.
> 
> Amos


Re: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Danil Nafikov
I had a very similar problem. Mine was that CSS and JavaScript files were not
loading. After looking into it more closely, I found out that it was due to a
content filter (DansGuardian): the CSS and JavaScript files were on a
different domain, which was blocked.

Do you have content filter installed on your machine?


--
DaniL


Re: [squid-users] A few questions about squid

2010-09-15 Thread Jordon Bedwell
On 09/15/2010 10:24 PM, Amos Jeffries wrote:
> On Wed, 15 Sep 2010 19:24:40 -0500, Jordon Bedwell 
> Not DDoS in the malicious sense. 

I don't know if you've actually read and comprehended most of what I
said throughout all of my emails, so I'm just going to cut this long
debate based on assumptions short and call it a day.  While I appreciate
the explanation of how networks work (even though I know how networks
work) and the assumption that I contradicted myself (when I didn't), it's
just not worth it to continue a lengthy debate that is more assumptive
than it is productive or informative.


Good day to you and happy trails.


Re: [squid-users] Re: Re: squid client authentication against AD computer account

2010-09-15 Thread Manoj Rajkarnikar
On Thu, Sep 16, 2010 at 3:28 AM, Markus Moeller  wrote:
>
>> "Manoj Rajkarnikar"  wrote in message
>> news:aanlktimrpzfwid0ehc0cbfchndc7nv=-jstxtngmm...@mail.gmail.com...
>> Thanks for the quick response Marcus.
>>
>> The reason I need to limit the computer account and not the user account is
>> that people here move out to distant branches, and the internet access
>> policy is tied to the position they hold, and thus the computer
>> they will use.
>>
>> I've successfully set up the Kerberos authentication, but I don't see
>> how squid will fetch the computer information from the client request and
>> authorize it based on the group membership in AD. What I wish to
>> accomplish is:
>>
>> 1. create a security group in AD
>> 2. add computer accounts to this security group
>> 3. squid checks if the computer trying to access the internet is a member
>> of this security group.
>> 4. if not, don't allow access to the internet, or request an AD user login
>> that is allowed.
>>
>> I'm not sure if this is achievable.
>>
>
> I don't think this is possible with Kerberos as the ticket does not have
> (usable) information about the client computer.
>
Is there any other way that I can achieve this, kerberos or no
kerberos? I will have multiple layers of auth ACLs, and the major
portion will be handled by this auth (if possible; if not, I will
have to use user-based auth).

This is how I plan to do it:
1. sites allowed to all (internal sites + some update sites)
2. privileged users: all sites allowed (computer account if
possible, or IP based or user based)
3. semi-privileged users: some sites like facebook/hotmail/gmail
etc. allowed to computer accounts or user accounts
4. whitelist allowed to all
5. blacklist denied to all (porn/video sites and many others that are blocked)
6. other authenticated users allowed to the rest of the sites (this is
the main ACL, where I want it to be computer-account based if possible)

Thanks
Manoj


[squid-users] Re: Trouble between Squid and SSL proxied host

2010-09-15 Thread mikek


Amos Jeffries-2 wrote:
> 
> Close, there are some problems:
> 
> https_port still needs accel and maybe vhost options to be a real
> accelerator.
> 
> always_direct prevents the cache_peer config ever being used.
> 
> Is the public DNS that clients are connecting to x.appspot.com or
> secure.x.com?
> 
> You may need to add the forcedomain=x.appspot.com option to
> cache_peer and remove the always_direct.
> 
> Amos
> 

Thanks very much Amos.

The public clients are connecting to secure.x.com, and then squid is
proxying the request to x.appspot.com.

My understanding is that to use vhost or accel with https_port, you need a
wildcard SSL cert, which I don't have. Is that right?

I'm not sure what you mean here: "always_direct prevents the cache_peer
config ever being used."

Thanks again! :)


[squid-users] When is the url_rewrite_program called?

2010-09-15 Thread David Parks
When is the url_rewrite_program called?

Is it before ACL matches occur? Or after the http_access tag is matched?

I'm just trying to figure out the flow of events that occur.

Looking for an answer like:
1) http_access is matched, if denied end
2) url_rewrite_program called
3) acls are matched a second time
4) http_access is matched a second time

Thanks,
David




Re: [squid-users] A few questions about squid

2010-09-15 Thread Amos Jeffries
On Wed, 15 Sep 2010 19:24:40 -0500, Jordon Bedwell 
wrote:
> On 09/15/2010 06:54 PM, Amos Jeffries wrote:
>> 
>> Yes. A problem with the meaning of the word "accelerate". What people tend
>> to mean when they say that is "reverse-proxy", which has more in relation
>> to a router than a race horse.
> 
> That's intriguing, but understood now.
> 
>> 
>> Be aware that for a period after starting with no cached items the full
>> traffic load hits the backend web server. When using more than one proxy
>> this can DoS the origin server. Look at your proxy HIT ratios to see the
>> amount of traffic reduction Squid is doing in bytes and req/sec.
> 
> The only time I would really worry about a DDoS is if said person is
> trying to saturate the network, which our firewall and IDS systems
> handle. As far as the cache creating a DoS situation on the origin
> servers goes, it would take a lot, since each region has dedicated origin
> servers under it.  So acc01 would not share the same origin servers as
> acc02; we designed it like that because it's not a single site, it's
> actually several loaded clients who need power without control.

Not DDoS in the malicious sense.

Imagine that you had 4 proxies each handling say 500req/T on behalf of a
web app which had a maximum capacity of 1000req/T. At a 75% HIT ratio
(relatively low for reverse proxies) that means 2000req/T coming in from
the Internet and 500req/T hitting the web app.
Now turn off caching in any one of the proxies and the web app suddenly
faces 875req/T. Mighty close to its capacity. Disable caching in two and
it's overloaded.
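
Spelled out (T being whatever time unit the rates are measured in):

  4 proxies x 500 req/T = 2000 req/T arriving from the Internet
  2000 req/T x (1 - 0.75) = 500 req/T reaching the web app
  3 caching + 1 uncached: 3 x 500 x 0.25 + 500 = 875 req/T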


> 
>> name= MUST be unique for each entry.
>> 
> 
> Even if they are meant to balance?

Yes. It's a peer ID. The balancing algorithms depend on being able to do
accounting for each peerage.

Siblings are unbalanced and by default chosen based on which responds with
an "I have it" ICP message (if ICP is on); if none, then the parent is
chosen. You can alter that by adding specific algorithm flags to the
cache_peer line. See "PEER SELECTION METHODS" in here:
 http://www.squid-cache.org/Doc/config/cache_peer

From your description I suspect the siblings are in fact the remote
proxies which can be chosen as alternatives? In that type of topology you
want a selection method to pick the close web server and use the siblings
as fallback. (The following assumes that is the case.)

It's not clear how to reliably do that from the config available. Squid
will split them into a parent and a sibling groups before applying the
selection algorithms, with a preference for asking siblings first (a
by-product of being optimized for use in clusters).

I think having all as parents with the web server listed first is best. The
default first-up algorithm causes most requests to go straight to the first
listed cache_peer, falling back to the other sources in sequence if they
get overloaded or die.
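
A minimal sketch of that layout, reusing the IPs from your config (and
assuming the 0.0.0.2-0.0.0.5 hosts really are remote proxies, not origin
servers):

  cache_peer 0.0.0.1 parent 80 0 no-query originserver name=origin01
  cache_peer 0.0.0.2 parent 80 0 no-query name=acc02
  cache_peer 0.0.0.3 parent 80 0 no-query name=acc03
  cache_peer_access origin01 allow our_sites
  cache_peer_access acc02 allow our_sites
  cache_peer_access acc03 allow our_sites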


> 
>> refresh_pattern is used to *extend* the caching times when no Expires: or
>> Cache-Control max-age are provided by the server. ie when squid has to
>> calculate and guess a good storage time it uses refresh_pattern to make
>> better guesses.
>> 
> 
> We don't send caching headers with our content on the server, even
> static content (unless it comes from the CDN). But a good case of what we
> want to do is force HIT/HIT, not MISS/HIT: we don't want the clients

"HIT/HIT" ? "MISS/HIT"? you mean hit on the proxy, hit on the web server?

> caching any content; we want the server to cache the final output and
> serve it, not hitting the origin servers. We only want to do it this
> way so we can reduce latency and TTL, not load.

You contradict yourself there. The ultimate in low-latency is for the
clients browser to fetch the object out of its own cache (some
nanoseconds). The next best is for it to be cached by their ISP (very low
milliseconds), then by their national proxies, then your CDN frontend
(whole ms may have passed at this stage, maybe even hundreds), then a cache
on your web server, then generated dynamically by your web server. Worst
case is for client browser to have to generate the page via scripts after
waiting for multiple non-cacheable bits to download first.
Load reduction is only a nice by-product of reducing the latency, which is
directly linked to the distance between the client browser and the data.


With Cache-Control: the s-maxage can set caching time for proxies (yours
and the ISP ones) separately from browsers, which use max-age (proxies will
use max-age if s-maxage is not present). Emitting "Cache-Control:
stale-while-revalidate" (Squid-2.7 only at present, sadly) is also a
powerful tool in reducing client-visible delays on the requests which need
to re-check for updated content.

If the Squid supports Surrogate/1.0 protocol extensions (Squid-3.1+), the
Surrogate-Control header can provide a full alternative version of
Cache-Control which applies to a specific reverse-proxy.
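
For illustration, a response along those lines might carry headers like the
following (the values, and combining them this way, are examples only;
stale-while-revalidate is honoured by Squid-2.7 only):

  Cache-Control: max-age=60, s-maxage=600, stale-while-revalidate=30
  Surrogate-Control: max-age=3600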


As a corollary to this: if you follow several big providers' policy of
globally emit

Re: [squid-users] A few questions about squid

2010-09-15 Thread Jordon Bedwell
On 09/15/2010 06:54 PM, Amos Jeffries wrote:
> 
> Yes. A problem with the meaning of the word "accelerate". What people tend
> to mean when they say that is "reverse-proxy", which has more in relation to
> a router than a race horse.

That's intriguing, but understood now.

> 
> Be aware that for a period after starting with no cached items the full
> traffic load hits the backend web server. When using more than one proxy
> this can DoS the origin server. Look at your proxy HIT ratios to see the
> amount of traffic reduction Squid is doing in bytes and req/sec.

The only time I would really worry about a DDoS is if said person is
trying to saturate the network, which our firewall and IDS systems
handle. As far as the cache creating a DoS situation on the origin
servers goes, it would take a lot, since each region has dedicated origin
servers under it.  So acc01 would not share the same origin servers as
acc02; we designed it like that because it's not a single site, it's
actually several loaded clients who need power without control.

> name= MUST be unique for each entry.
> 

Even if they are meant to balance?

> refresh_pattern is used to *extend* the caching times when no Expires: or
> Cache-Control max-age are provided by the server. ie when squid has to
> calculate and guess a good storage time it uses refresh_pattern to make
> better guesses.
> 

We don't send caching headers with our content on the server, even
static content (unless it comes from the CDN). But a good case of what we
want to do is force HIT/HIT, not MISS/HIT: we don't want the clients
caching any content; we want the server to cache the final output and
serve it, not hitting the origin servers. We only want to do it this
way so we can reduce latency and TTL, not load.

> One thing to look into is HTCP instead of ICP and cache digests on top.
> This will let each proxy know much more details about what its siblings can
> provide and reduce the time spent waiting for their replies before
> fetching.
> 
> If you are really needing nauseatingly high request rates look into the
> "ExtremeCarpFrontend" pages of the Squid wiki and be prepared for buying
> 4-core / 8-core servers. The TCP stacks will need tuning for higher speed
> as well.
> 

Way ahead of the curve: some of these servers have 32 cores and, oddly
enough, 32GB of RAM too. Others have only 16 cores but they're still
considered beefy today. TCP is already optimized.  Even my personal site
has 16 cores, though if this works out, I might put my own site on this
setup if my client will let me. Talk about power.


Re: [squid-users] A few questions about squid

2010-09-15 Thread Amos Jeffries
On Wed, 15 Sep 2010 17:50:03 -0500, Jordon Bedwell 
wrote:
> I have a few questions about squid that the documentation doesn't seem
> to cover easily, and some people don't seem to cover either when I do a
> Google search. I hope I don't have to adjust the upstream source to get
> what I want.
> 
> 1.) When I used Squid as an accelerator it didn't accelerate anything; as
> a matter of fact, I got lower latency from PHP and Apache, and even lower
> latency when I used NGINX and PHP, vs. Squid accelerators. Even under
> high loads.

Yes. A problem with the meaning of the word "accelerate". What people tend
to mean when they say that is "reverse-proxy" which has more in relation to
a router than a race horse.

What the speed differences are depends on which version of Squid you use,
how it's configured and where it's running. The rules of thumb for config
performance are all about avoiding slow ACL types. I've recently had a
conversation on IRC with several people seeing a 1-2 orders of magnitude
slower Squid simply because they ran it in a VM clone (as opposed to an
identical VM).

So far we have records of Squid-2.7 pushing 990 req/sec. Squid-3.1 has not
been put to the same test, but sits roughly 5% behind it on others. I have
not yet seen any numbers for 3.2 beta which has multi-CPU support.

Which brings back into memory the catch-22 when comparing Squid with
Apache or Nginx. To get a genuinely fair comparison you have to hobble those
others down to a single CPU and single worker process, just like Squid. Last
I tried, single-threaded Apache could handle under 20 req/sec and Nginx not
much more than 100, with variation by file size.

> 
> 2.) Is there a way I can force it to reduce latency? The entire goal of
> the accelerator is to cache and reduce latency or to see if we could
> reduce latency more.  Before we were using homegrown solutions to
> balance mixed with some patched Memcached severs to sync PHP Sessions.

As a rough approximation, the cache_mem directive in Squid acts like
memcached, and the disk cache acts like static files on the proxy machine.
The newer the Squid you can get, the better its HTTP/1.1 compliance and
reverse-proxy controls. I'll comment on your config below if anything
stands out.

> 
> 3.) Is there a way to easily clear the cache? The entire cache? While
> Squid claims it's not possible, we can clear it manually but I was
> wondering if there might be a client that can do it?

It's not. The cache has an index which is split between memory and disk,
and is constantly in use.
There are tricks around that involve creating an empty disk cache, updating
squid.conf, and restarting or reconfiguring Squid to swap it over to the new
location.
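
A rough sketch of that swap (the paths and sizes are examples; adjust to
your layout):

  # in squid.conf, point cache_dir at a fresh location:
  #   cache_dir ufs /var/spool/squid3.new 10000 16 256
  squid -z                   # create the new, empty cache structure
  squid -k reconfigure       # or do a full restart
  rm -rf /var/spool/squid3   # old cache, once Squid has let go of it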

Be aware that for a period after starting with no cached items the full
traffic load hits the backend web server. When using more than one proxy
this can DoS the origin server. Look at your proxy HIT ratios to see the
amount of traffic reduction Squid is doing in bytes and req/sec.

> 
> We get the same latency from squid as we do from Apache 2, so maybe I have
> the configuration file wrong? I mean, I might have it a bit confused
> since I've just started working with squid extensively, to see if we can
> reduce latency more than we already have.
> 
> Squid can't blame the origin servers, since they are accessed through the
> local network with a local switch, and each region has a matching
> accelerator that attaches itself to local origin servers. Example: UK
> has acc04, EU General has acc05-06 and US has acc01-03; each acc0* is a
> squid server with several origin servers under it. Previously it was our
> homegrown balancer.

Sounds very normal.


> 
> Squid CONF:
> 
> acl our_sites dstdomain example.com
> http_access allow our_sites
> http_port 0.0.0.0:80 accel defaultsite=example.com
> cache_peer 0.0.0.1 parent  80 0 no-query originserver name=acc01
> cache_peer 0.0.0.2 sibling 80 0 no-query originserver name=acc01
> cache_peer 0.0.0.3 sibling 80 0 no-query originserver name=acc01
> cache_peer 0.0.0.4 sibling 80 0 no-query originserver name=acc01
> cache_peer 0.0.0.5 sibling 80 0 no-query originserver name=acc01

name= MUST be unique for each entry.

> cache_peer_access acc01 allow our_sites

Duplicate for each peer.
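
A corrected sketch of that block, with unique names and a cache_peer_access
line per peer (dropping originserver on the siblings, since they appear to
be proxies rather than origins):

  cache_peer 0.0.0.1 parent  80 0 no-query originserver name=acc01
  cache_peer 0.0.0.2 sibling 80 0 no-query name=acc02
  cache_peer 0.0.0.3 sibling 80 0 no-query name=acc03
  cache_peer_access acc01 allow our_sites
  cache_peer_access acc02 allow our_sites
  cache_peer_access acc03 allow our_sites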

> refresh_pattern . 60 100% 4320

refresh_pattern is used to *extend* the caching times when no Expires: or
Cache-Control max-age are provided by the server. ie when squid has to
calculate and guess a good storage time it uses refresh_pattern to make
better guesses.
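
For reference, the fields in that line break down as (your values):

  #                 min  percent  max    (minutes, %, minutes)
  refresh_pattern .  60   100%    4320

i.e. objects with no explicit freshness information are treated as fresh for
at least 60 minutes and at most 4320 minutes (3 days), with 100% of the
object's age since last modification used as the heuristic in between.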

> hosts_file /etc/hosts
> coredump_dir /var/spool/squid3
> visible_hostname example.com
> dns_defnames off
> reply_header_access X-Cache deny all
> reply_header_access X-Cache-Lookup deny all
> reply_header_access Via deny all

These are sibling proxy communication headers, used to optimize and trace
success/failure and routing loops.
If you have to, limit the removal to port 80 replies and change the
sibling-sibling peerage off port 80 to some other port; 3128 is the default
for anything

Re: [squid-users] Upload of files not working

2010-09-15 Thread Chema cueto


Chema cueto mailto:chemacg_at_gmail.com?Subject=Re:%20[squid-users]%20Upload%20of%20files%20not%20working>>
 writes:

> Hi, I've been looking for any information on this problem I have and
> have found nothing. My problem isn't the well-known problem of the 1MB
> limit; it's true that I can upload small files, but even files of
> 800KB fail sometimes.
> The problem is that I can't upload any file (of a certain size; the
> smallest do upload) using megaupload and its kind.
> I've seen that when uploading with dropbox (https://www.dropbox.com) I
> can upload any file of any size without problems if I use the "basic
> uploader".
> I've also seen that the file I want to upload is sent from my PC to the
> PC with squid running on it at LAN speed (8-10Mb/s) (per gkrellm and
> the upload bars of megaupload, dropbox and such), but when I use the
> "basic uploader" of dropbox the file is sent directly from my PC at WAN
> speed (~100Kb/s).
> When I stop squid this problem disappears.
> I'm running squid version 3.1.6 on Gentoo and have
> arno-iptables-firewall enabled.
> Can anyone help me or say where to look? access.log, cache.log and
> store.log are total gibberish.
> Thanks.

Nobody knows??



RE: [squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Amos Jeffries
On Wed, 15 Sep 2010 20:53:04 +0100, Martin Sperl 
wrote:
> Hi Amos!
> 
> Thanks for your feedback.
> 
> Squid is still largely IO event driven. If the network IO is less than
> say 3-4 req/sec Squid can have a queue of things waiting to happen which
> get delayed a long time (hundreds of ms) waiting to be kicked off.
>   Your overview seems to show that behaviour clearly.
>> 
> There have been some small improvements and fixes to several of the
> lagging things, but I think it's still there in even the latest Squid.
> 
> Here the Hit/s statistics on this specific server for the time:
> +--+---+---+
> | h| allHPS| cssART|
> +--+---+---+
> |0 | 48.34 | 0.016 |
> |1 | 49.80 | 0.015 |
> |2 | 49.01 | 0.015 |
> |3 | 47.08 | 0.018 |
> |4 | 17.34 | 0.024 |
> |5 |  4.00 | 0.042 |
> |6 |  0.52 | 0.054 |
> |7 |  9.02 | 0.034 |
> |8 |  7.18 | 0.038 |
> |9 |  8.25 | 0.035 |
> |   10 |  9.45 | 0.034 |
> |   11 | 14.71 | 0.030 |
> |   12 | 23.94 | 0.023 |
> |   13 | 31.04 | 0.021 |
> |   14 | 35.02 | 0.020 |
> |   15 | 38.87 | 0.019 |
> |   16 | 40.92 | 0.019 |
> |   17 | 43.39 | 0.017 |
> |   18 | 45.62 | 0.016 |
> |   19 | 47.58 | 0.017 |
> |   20 | 51.91 | 0.014 |
> |   21 | 53.65 | 0.014 |
> |   22 | 40.87 | 0.016 |
> |   23 | 47.40 | 0.016 |
> +--+---+---+
> 
> So to summarize it: we need to keep the number of hits above 30 hits/s for
> squid, so that we get an acceptable response time.
> 
> I believe it will need some convincing of management to get this
> assumption tested in production ;)
> 
> One other Question: is squid 3.1 "better" in this respect than 3.0?

Than 3.0? I believe so, though I have no data on it.
The upper req/sec cap, where the most effort has gone, is 15%-20% higher; I
have not done any serious testing like this with the lower limits before.

If you are able to, it would be very enlightening and helpful for many, I
think.

Amos


[squid-users] A few questions about squid

2010-09-15 Thread Jordon Bedwell
I have a few questions about squid that the documentation doesn't seem
to cover easily, and some people don't seem to cover either when I do a
Google search. I hope I don't have to adjust the upstream source to get
what I want.

1.) When I used Squid as an accelerator it didn't accelerate anything; as
a matter of fact, I got lower latency from PHP and Apache, and even lower
latency when I used NGINX and PHP, vs. Squid accelerators. Even under
high loads.

2.) Is there a way I can force it to reduce latency? The entire goal of
the accelerator is to cache and reduce latency or to see if we could
reduce latency more.  Before we were using homegrown solutions to
balance mixed with some patched Memcached severs to sync PHP Sessions.

3.) Is there a way to easily clear the cache? The entire cache? While
Squid claims it's not possible, we can clear it manually but I was
wondering if there might be a client that can do it?

A few notes: Squid has no way of knowing if PHP code is dynamic or
static, since we don't serve PHP headers, and we certainly don't serve
anything but Apache (Debian ~ EnvyGeeks) and NGINX (Debian ~ EnvyGeeks),
so even if it's PHP it looks as if it's a normal HTML file. We don't use
query strings at all; as a matter of fact, if a query string is used the
server pretends it doesn't know what's going on.

We get the same latency from squid as we do from Apache 2, so maybe I have
the configuration file wrong? I mean, I might have it a bit confused
since I've just started working with squid extensively, to see if we can
reduce latency more than we already have.

Squid can't blame the origin servers, since they are accessed through the
local network with a local switch, and each region has a matching
accelerator that attaches itself to local origin servers. Example: UK
has acc04, EU General has acc05-06 and US has acc01-03; each acc0* is a
squid server with several origin servers under it. Previously it was our
homegrown balancer.

Squid CONF:

acl our_sites dstdomain example.com
http_access allow our_sites
http_port 0.0.0.0:80 accel defaultsite=example.com
cache_peer 0.0.0.1 parent  80 0 no-query originserver name=acc01
cache_peer 0.0.0.2 sibling 80 0 no-query originserver name=acc01
cache_peer 0.0.0.3 sibling 80 0 no-query originserver name=acc01
cache_peer 0.0.0.4 sibling 80 0 no-query originserver name=acc01
cache_peer 0.0.0.5 sibling 80 0 no-query originserver name=acc01
cache_peer_access acc01 allow our_sites
refresh_pattern . 60 100% 4320
hosts_file /etc/hosts
coredump_dir /var/spool/squid3
visible_hostname example.com
dns_defnames off
reply_header_access X-Cache deny all
reply_header_access X-Cache-Lookup deny all
reply_header_access Via deny all
cache_effective_user 101
cache_effective_group 103
minimum_object_size 0 KB
cache_mem 1000 MB


Hopefully somebody can help me figure out what's going on before I flip
the switch and go back to the way we used to have it; it's sad that I
can't figure out how to get it to accelerate...


Re: [squid-users] WCCP + Squid with Cisco 2811. Not working

2010-09-15 Thread Henrik Nordström
On Wed 2010-09-15 at 17:09 -0400, Chris Abel wrote:

> I only have those 2 iptables rules set on my squid box so I'm not sure how
> my iptables could be the problem. This is all of my active iptables
> printed out:

iptables-save is recommended for inspecting iptables rules.

But there is nothing obviously wrong in your iptables rules that I could see
from the -L outputs.

> wccp0 Link encap:UNSPEC  HWaddr
> C0-A8-00-15-00-00-65-74-00-00-00-00-00-00-00-00  
>   inet addr:192.168.0.21  P-t-P:192.168.0.21  Mask:255.255.255.255
>   UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
>   RX packets:285 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0 
>   RX bytes:22823 (22.2 KiB)  TX bytes:0 (0.0 B)

What does the following say about the wccp interface?

  ip tunnel show wccp0
  ip addr show dev wccp0


One thing about WCCP/GRE: make sure that the router sends its GRE
packets with the source & destination address you think it's using. The
GRE tunnel definition must match this. Depending on router model and
version it's not always entirely obvious which address the router will
be using for the WCCP GRE traffic.

The GRE tunnel addresses used by the router are easily visible with:

  tcpdump -n -p -i eth0 proto gre

If you also see TCP packets on the wccp0 interface then the GRE tunnel
is defined correctly.

  tcpdump -n -p -i wccp0

If you see GRE packets on eth0 but no TCP packets on wccp0 then the GRE
tunnel is not correctly defined.


Basic requirements for WCCP/GRE intercept mode operation (proxy mode
assumed to work already):

 - The WCCP configuration needs to be correct, so that router & proxy agree
on using WCCP, resulting in the router sending any port 80 traffic to the
cache server using WCCP redirection (GRE or layer2, depending on config &
router capabilities)
 - When using the GRE redirection method, the GRE tunnel must be defined to
match the GRE tunnel profile used by the router
 - rp_filter must be disabled on the wccp0 GRE interface
 - A valid IP address needs to be assigned on the wccp0 GRE interface
 - The iptables nat table needs a rule to redirect incoming port 80
traffic on the wccp0 interface to the Squid proxy port
 - Squid must be listening on the address of the wccp0 interface, or the
default wildcard address
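
For the rp_filter and NAT items above, a minimal sketch (the Squid port 3129
and the interface name are examples; adjust to your setup):

  sysctl -w net.ipv4.conf.wccp0.rp_filter=0
  iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3129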


For TPROXY operation the requirements are similar, plus some more:

 - the iptables rule is different
 - a policy routing table is required (ip rule & ip route)
 - the wccp configuration is more complex
 all three are detailed in the wiki pages relating to TPROXY.

 In addition:
 - the proxy is preferably on a separate leg from the router (physical or vlan)

I recommend verifying intercept mode operation before trying tproxy.
Most of the concept is the same, just a bit more complex when doing
tproxy.

  
Regards
Henrik



Re: [squid-users] Persistent Server connections, pipelining and matching responses

2010-09-15 Thread Henrik Nordström
On Wed 2010-09-15 at 11:27 -0700, cachenewbie wrote:

>  I am trying to understand Squid behavior when server side connection is
> pinned (persistent) and pipelining is enabled on both client and server side
> in a transparent proxy configuration.

In default configuration Squid serializes pipelined requests, processing
them one at a time.

> If there are multiple HTTP requests coming from multiple clients for the
> same server and if the requests are sent on the same TCP connection to the
> server, how will the proxy match the responses to those requests to
> appropriate clients?

Squid just sends one request at a time per server connection, and reuses
the same connection for another request once the complete response has
been seen.

In future we may pipeline many requests concurrently under specific
conditions. HTTP defines that replies match up with their requests by
order: if requests A, B, C are sent pipelined to a server then the
server MUST respond in the same order, responses A, B, C.

Regards
Henrik




[squid-users] Re: Re: squid client authentication against AD computer account

2010-09-15 Thread Markus Moeller


"Manoj Rajkarnikar"  wrote in message 
news:aanlktimrpzfwid0ehc0cbfchndc7nv=-jstxtngmm...@mail.gmail.com...

Thanks for the quick response Marcus.

The reason I need to limit the computer account and not the user account is
that people here move out to distant branches, and the internet access
policy is tied to the position they hold, and thus the computer
they will use.

I've successfully set up the Kerberos authentication, but I don't see
how squid will fetch the computer information from the client request and
authorize it based on the group membership in AD. What I wish to
accomplish is:

1. create a security group in AD
2. add computer accounts to this security group
3. squid checks if the computer trying to access the internet is a member of
this security group.
4. if not, don't allow access to the internet, or request an AD user login
that is allowed.

I'm not sure if this is achievable.



I don't think this is possible with Kerberos as the ticket does not have 
(usable) information about the client computer.



Thanks for the help.
Manoj

On Wed, Sep 15, 2010 at 12:28 AM, Markus Moeller
 wrote:


"Manoj Rajkarnikar"  wrote in message
news:aanlktingxtowx+aysrvgoaseiqrs1qrmx2vym8t5i...@mail.gmail.com...


Hi all.

I've been trying to setup this squid box with authentication to AD
2003 server. The need in our situation is to allow the workstation
allow access to internet and not the user since the users are always
moving from station to station. I've already setup kerberos
authentication successfully. I've searched through the list for any
thing related to authorizing computer account but found none..



Why do you want to limit the computer, not the user? I assume the users
log in to the stations with their credentials, so moving stations should
not be an issue, or?


I'm not very familiar with LDAP queries. Any help would be greatly
appreciated. I'm trying to use squid_kerb_ldap for LDAP
authorization...




squid_kerb_ldap will connect to AD and determine if a user is a member of
an AD group. The connection to AD is authenticated using the Kerberos key
from the squid keytab file, and the AD server is found by using SRV DNS
records, which are usually defined in a Windows environment with AD.


Thank you very much for your help.

Regards
Manoj












[squid-users] Re: Persistent Server connections, pipelining and matching responses

2010-09-15 Thread Chad Naugle
The answer you are looking for holds for almost any server daemon that is
not multi-threaded.  Imagine every "Request" and "Response" as a separate
task that is done independently of all the others, bound together via some
type of identification tags or pointers in memory that are then compared to
match them up to each other.  If Squid sends a single GET across a
connection, it will wait for the data to be received on that connection before
reusing it, if at all.  Otherwise, just like any other daemon, the connection
is dropped, and a new connection is opened for more data.  It's all done on a
"case-by-case" basis.

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


>>> cachenewbie  9/15/2010 5:00 PM >>>

Hi Chad - Thanks. How does it work for a single client that has pipelining
implemented? If two GET requests are sent to the server and if,
hypothetically, a poorly implemented server responds only to the second
GET, how does squid (and the client) map the response to the second request?
If squid happens to cache that response, any subsequent client requesting
that resource could get the wrong page served. This is possible, right?
Essentially, squid continues to depend on the "sequence" of the responses
from the origin server and has no way of matching responses to a particular
request (even from the same client).

Thanks,




Re: [squid-users] WCCP + Squid with Cisco 2811. Not working

2010-09-15 Thread Chris Abel
Amos Jeffries  writes:
>In the wiki our example sets routing table 100 only on "lo". Does changing
>that to "eth0" or "wccp0" make any difference? You can test by creating a
>table 100 on all of them individually.

The squid wiki example I am looking at does not have any routing tables.
I'm using the one you gave me here
http://wiki.squid-cache.org/Features/Wccp2#Squid_configuration_for_WCCP_version_2

Following the instructions for Cisco IOS 12.4(6) T2 router.

I only have those 2 iptables rules set on my squid box so I'm not sure how
my iptables could be the problem. This is all of my active iptables
printed out:

fortress:/var/log/squid# iptables -L   
Chain INPUT (policy ACCEPT)
target prot opt source   destination 

Chain FORWARD (policy ACCEPT)
target prot opt source   destination 

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination 
fortress:/var/log/squid# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source   destination 
REDIRECT   tcp  --  anywhere anywheretcp dpt:www
redir ports 3129 

Chain POSTROUTING (policy ACCEPT)
target prot opt source   destination 
MASQUERADE  all  --  anywhere anywhere

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination 

And just so we know this isn't the problem, I will also give you my ifconfig:

fortress:/var/log/squid# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:03:47:ea:a4:b9  
  inet addr:192.168.0.21  Bcast:192.168.0.23  Mask:255.255.255.252
  inet6 addr: fe80::203:47ff:feea:a4b9/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:1776 errors:0 dropped:0 overruns:0 frame:0
  TX packets:1703 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:426053 (416.0 KiB)  TX bytes:431866 (421.7 KiB)

loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:734 errors:0 dropped:0 overruns:0 frame:0
  TX packets:734 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:293220 (286.3 KiB)  TX bytes:293220 (286.3 KiB)

wccp0 Link encap:UNSPEC  HWaddr
C0-A8-00-15-00-00-65-74-00-00-00-00-00-00-00-00  
  inet addr:192.168.0.21  P-t-P:192.168.0.21  Mask:255.255.255.255
  UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
  RX packets:285 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:22823 (22.2 KiB)  TX bytes:0 (0.0 B)

Thanks for all the help thus far. I'm not very good when it comes to
iptables so it could very well be that. Let me know if you see something
or want me to try anything else.

-Chris

___
Chris Abel
Systems and Network Administrator
Wildwood Programs 
2995 Curry Road Extension
Schenectady, NY  12303
518-836-2341



[squid-users] Re: Persistent Server connections, pipelining and matching responses

2010-09-15 Thread cachenewbie

Hi Chad - Thanks. How does it work for a single client that has pipelining
implemented? If two GET requests are sent to the server and if,
hypothetically, a poorly implemented server responds only to the second
GET, how does squid (and the client) map the response to the second request?
If squid happens to cache that response, any subsequent client requesting
that resource could get the wrong page served. This is possible, right?
Essentially, squid continues to depend on the "sequence" of the responses
from the origin server and has no way of matching responses to a particular
request (even from the same client).

Thanks,


RE: [squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Martin Sperl
Hi Amos!

Thanks for your feedback.

> Squid is still largely IO event driven. If the network IO is less than
> say 3-4 req/sec Squid can have a queue of things waiting to happen which
> get delayed a long time (hundreds of ms) waiting to be kicked off.
>   Your overview seems to show that behaviour clearly.
> 
> There have been some small improvements and fixes to several of the
> lagging things, but I think it's still there in even the latest Squid.

Here the Hit/s statistics on this specific server for the time:
+--+---+---+
| h| allHPS| cssART|
+--+---+---+
|0 | 48.34 | 0.016 |
|1 | 49.80 | 0.015 |
|2 | 49.01 | 0.015 |
|3 | 47.08 | 0.018 |
|4 | 17.34 | 0.024 |
|5 |  4.00 | 0.042 |
|6 |  0.52 | 0.054 |
|7 |  9.02 | 0.034 |
|8 |  7.18 | 0.038 |
|9 |  8.25 | 0.035 |
|   10 |  9.45 | 0.034 |
|   11 | 14.71 | 0.030 |
|   12 | 23.94 | 0.023 |
|   13 | 31.04 | 0.021 |
|   14 | 35.02 | 0.020 |
|   15 | 38.87 | 0.019 |
|   16 | 40.92 | 0.019 |
|   17 | 43.39 | 0.017 |
|   18 | 45.62 | 0.016 |
|   19 | 47.58 | 0.017 |
|   20 | 51.91 | 0.014 |
|   21 | 53.65 | 0.014 |
|   22 | 40.87 | 0.016 |
|   23 | 47.40 | 0.016 |
+--+---+---+

So to summarize it: we need to keep the number of hits above 30 hits/s for
squid, so that we get an acceptable response time.

I believe it will need some convincing of management to get this assumption 
tested in production ;)

One other Question: is squid 3.1 "better" in this respect than 3.0?

Thanks,
Martin




[squid-users] RE: EXTERNAL: Re: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Bucci, David G
I just tried it on my Ubuntu 10.04 running the std repo squid 3.0.STABLE19-1.  
I see the same behavior, running Google Chrome (nightly experimental build).

Interestingly, through Squid, it hangs trying to retrieve something from
http://directgov.stcllctrs.com ... which is in a <noscript> block.
So something is causing Chrome to think it should execute noscripts (?)

The only other thing I notice is that your certificate is invalid for SSL to 
the smallsteps4life.direct.gov.uk ... the server name is a wildcarded 
*.workingwithprofero.com.  Don't think that should be affecting Squid though.

When I save the html source before/after squid, they have a huge amount of 
difference ... oh, interesting, the noscript isn't correct, it's 

RE: [squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Martin Sperl
> On Wed 2010-09-15 at 14:01 +0100, Martin Sperl wrote:
> > Essentially we see that during peak hours the Average response time is
> better than during off-peak hours.
> 
> Average response time measured on what?
Measured via the access log of squid.

> 
> Every request being handled, or the response time of some well known
> synthetic request?
All requests show this statistical behavior, as do our "synthetic tests"
that are measured externally every 5 minutes.

> 
> If measured over all requests then it may well be normal. Under low
> traffic conditions a couple slow requests such as downloads to a low
> bandwidth client has big impact on the statistics, while under high
> traffic those drowns in the rest of the traffic.
> 
> I am afraid you need to drill down a bit in the data to tell what this
> really is about. May be perfectly normal or may be a sign of problems.
> Can't tell from the statistics alone.

I know; that is why I have created and added the histogram data, which shows
that at, say, 6am UTC >90% of the requests are above 0.03 seconds, while during
peak hours (say 6pm) we have a peak at 0.011 seconds and only <5% above 0.030
seconds.

As Amos has said: if there are not enough "requests", the event driven design 
may be the "culprit" for the introduced latencies...

I will need to investigate this...

Thanks,
Martin




Re: [squid-users] Persistent Server connections, pipelining and matching responses

2010-09-15 Thread Chad Naugle
I do not believe squid uses a single TCP "Persistent Connection" for more than 
1 client at a time, hence the functionality of "Connection Pinning".  This is 
required to perform things such as NTLM web-based logins.

-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
 


>>> cachenewbie  9/15/2010 2:27 PM >>>

Hi:

I am trying to understand Squid behavior when server side connection is
pinned (persistent) and pipelining is enabled on both client and server side
in a transparent proxy configuration.

If there are multiple HTTP requests coming from multiple clients for the
same server and if the requests are sent on the same TCP connection to the
server, how will the proxy match the responses to those requests to the
appropriate clients?  HTTP is stateless, so the response will not identify
the request. If the proxy gets the replies to all those requests, how will it
send the right response to the right client?

Thanks in advance.




[squid-users] Persistent Server connections, pipelining and matching responses

2010-09-15 Thread cachenewbie

Hi:

 I am trying to understand Squid behavior when server side connection is
pinned (persistent) and pipelining is enabled on both client and server side
in a transparent proxy configuration.

If there are multiple HTTP requests coming from multiple clients for the
same server and if the requests are sent on the same TCP connection to the
server, how will the proxy match the responses to those requests to the
appropriate clients?  HTTP is stateless, so the response will not identify
the request. If the proxy gets the replies to all those requests, how will it
send the right response to the right client?

Thanks in advance.


Re: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Jordon Bedwell
On 09/15/2010 08:45 AM, Seb Harrington wrote:
> 
>> did check it with SQUID 3.1.8 on FreeBSD, and have no problems
>> whatsoever.
> 
> Thank you.
> 
> Is there anyone else on the list using squid3 as packaged by ubuntu that
> could also test the site for me?
> 
> Thanks,
> 
> Seb
> 
> 

I'm on my Ubuntu machine right now, if you want to send me a direct
email with the problem you were having I can use my local squid install
to check it out.


Re: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Amos Jeffries

On 15/09/10 23:24, Seb Harrington wrote:


Hi everyone,

I have a problem when accessing http://smallsteps4life.direct.gov.uk/
through squid.

When accessing the site directly the site is properly formatted, when
accessing through squid the site appears 'unformatted', some of the
images do not load and it looks as if the CSS has not been applied.

I thought this behaviour was a little strange so I've tested it on two
more instances of squid, one a default fresh install allowing everything
through (the all acl).

When accessing the site these are the logs:
access.log: http://pastebin.com/HtkyfjUJ
store.log: http://pastebin.com/0NjnDZzW
cache.log: did not output anything useful or informative.

I'm using the ubuntu version of squid3 (apt-get squid3) and I'm using
ubuntu 10.04 Lucid Lynx.

Squid version: Squid Cache: Version 3.0.STABLE19

Could someone please run that website through their version of squid for
me and let me know if this is a squid issue, a website issue or a bug in
the ubuntu packaged version of squid.

Cheers,

Seb



Is that trace from the "working" squid? There is zero CSS in it, just
JavaScript files that generate page content on the fly. Most of the
content seems to be going through HTTPS, which passes straight through Squid.


The all ACL working where regular config catches only some occasional 
files makes me think either those files are on a domain being blocked, 
or you have regex patterns that are catching more than you are aware of.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Reverse proxy, what to do with requests to it's IP addres?

2010-09-15 Thread Amos Jeffries

On 16/09/10 01:57, Jordon Bedwell wrote:

On 09/15/2010 08:12 AM, Amos Jeffries wrote:


Then you face the problem of what the real web servers do with
http://10.0.0.0/something or whatever the IP is. Most likely you see a
fancy error page saying Host does not exist with the server logo and
server details.
Amos


I don't get what you're trying to say.  I suggested he redirect any IP-based
entries to the hostname; I do it on all my servers, and doing so
would not cause *any errors*. There are certain cases where the error
would happen, but not in the context I suggested it, and if it did, it
would be caused by improper configuration.


Oh sorry. I mis-read what you wrote.

Seems we three (including Henrik's response) all point at the same answer
with two different ways to configure it.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Reverse proxy, what to do with requests to it's IP addres?

2010-09-15 Thread Jordon Bedwell
On 09/15/2010 08:12 AM, Amos Jeffries wrote:

> Then you face the problem of what the real web servers do with
> http://10.0.0.0/something or whatever the IP is. Most likely you see a
> fancy error page saying Host does not exist with the server logo and
> server details.
> Amos

I don't get what you're trying to say.  I suggested he redirect any IP-based
entries to the hostname; I do it on all my servers, and doing so
would not cause *any errors*. There are certain cases where the error
would happen, but not in the context I suggested it, and if it did, it
would be caused by improper configuration.


RE: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Seb Harrington

> did check it with SQUID 3.1.8 on FreeBSD, and have no problems
> whatsoever.

Thank you.

Is there anyone else on the list using squid3 as packaged by ubuntu that
could also test the site for me?

Thanks,

Seb




Re: [squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Henrik Nordström
On Wed 2010-09-15 at 14:01 +0100, Martin Sperl wrote:
> Hi everyone,
> 
> we are seeing a strange response-time effect over 24 hours when delivering
> content via Squid+ICAP service (3.0.STABLE9 - I know, old, but getting
> something changed in a production environment can be VERY hard...). The ICAP
> server we use rewrites some URLs and also rewrites some of the content
> response.
> 
> Essentially we see that during peak hours the Average response time is better 
> than during off-peak hours.

Average response time measured on what?

Every request being handled, or the response time of some well known
synthetic request?

If measured over all requests then it may well be normal. Under low
traffic conditions a couple slow requests such as downloads to a low
bandwidth client has big impact on the statistics, while under high
traffic those drowns in the rest of the traffic.

I am afraid you need to drill down a bit in the data to tell what this
really is about. May be perfectly normal or may be a sign of problems.
Can't tell from the statistics alone.

Regards
Henrik



Re: [squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Amos Jeffries

On 16/09/10 01:01, Martin Sperl wrote:

Hi everyone,

we are seeing a strange response-time effect over 24 hours when delivering
content via Squid+ICAP service (3.0.STABLE9 - I know, old, but getting something
changed in a production environment can be VERY hard...). The ICAP server we use
rewrites some URLs and also rewrites some of the content response.

Essentially we see that during peak hours the Average response time is better 
than during off-peak hours.
Here a report for one day for all CSS files that are delivered with CacheStatus 
TCP_MEM_HIT (as taken from the extended access-logs of squid) for a single 
server (all servers show similar effects):

Here the quick overview:
+--+--+---+
| hour | hits | ART   |
+--+--+---+
|0 | 4232 | 0.016 |
|1 | 4553 | 0.015 |
|2 | 4238 | 0.015 |
|3 | 4026 | 0.018 |
|4 | 1270 | 0.024 |
|5 |  390 | 0.042 |
|6 |   61 | 0.054 |
|7 |  591 | 0.034 |
|8 |  445 | 0.038 |
|9 |  505 | 0.035 |
|   10 |  716 | 0.034 |
|   11 | 1307 | 0.030 |
|   12 | 2552 | 0.023 |
|   13 | 3197 | 0.021 |
|   14 | 3567 | 0.020 |
|   15 | 4095 | 0.019 |
|   16 | 4037 | 0.019 |
|   17 | 4670 | 0.017 |
|   18 | 5349 | 0.016 |
|   19 | 5638 | 0.017 |
|   20 | 6262 | 0.014 |
|   21 | 5634 | 0.014 |
|   22 | 4809 | 0.016 |
|   23 | 5393 | 0.016 |
+--+--+---+

You can see that for off-peak hours (at 6am UTC, 91% of all requests with
TCP_MEM_HIT for CSS files are >0.030 seconds).
As for "peak" hours, most requests are answered at 0.011s and 0.001s (@18:00
with 5.5% of all requests).

I know that the numbers reported by squid also include some "effects" of the
network itself.
But we also see similar effects in active monitoring of html+image downloads
within our span of control (this is one of our KPIs, which we are exceeding
during graveyard-shift hours...).

We have tried a lot of things:
* virtualized versus real HW (0.002s improvement during peak hours)
* removing diskcache (uses the default settings compiled into squid when no 
diskcache is defined - at least the version of squid that we have)
* moving diskcache to ramdisk and increasing it (this has a negative effect!!!) 
- I wanted to change to aufs, but the binary we have does not support it..
* tuning some linux kernel parameters for increasing TCP buffers

Has someone experienced similar behavior, and has anyone got recommendations
on what else we can do/test (besides upgrading to squid 3.1, which is a major
effort from the testing perspective and may not resolve the issue either)?



Squid is still largely IO event driven. If the network IO is less than 
say 3-4 req/sec Squid can have a queue of things waiting to happen which 
get delayed a long time (hundreds of ms) waiting to be kicked off.

 Your overview seems to show that behaviour clearly.

There have been some small improvements and fixes to several of the
lagging things, but I think it's still there in even the latest Squid.



With the knowledge that it only happens under very low loads and
self-corrects as soon as traffic picks up, is it still a problem? If so,
you may want to contact The Measurement Factory and see if they have
anything to help for 3.0.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Reverse proxy, what to do with requests to it's IP addres?

2010-09-15 Thread Henrik Nordström
On Wed 2010-09-15 at 13:20 +0100, twintu...@f2s.com wrote:

> What is the best way to return a blank page? Or is there an easy way to
> rewrite the request, other than in squirm?

Normally you do not need to do any rewrites in a reverse proxy.

Just map the requests to a suitable web server using cache_peer_access.

Regards
Henrik



Re: [squid-users] vhost for reverse proxy - two web applications with one Squid

2010-09-15 Thread Henrik Nordström
On Mon 2010-09-13 at 11:22 +0200, Michael Grimm wrote:

> cache_peer 192.168.1.100 parent 8080 0 originserver no-query
> name=server1 forceddomain=server1.mydomain.de

Don't use forceddomain= unless you absolutely have to. And from the rest
of your config it looks like you don't need this.

forceddomain= is overriding the hostname requested by the client when
forwarding requests to this specific peer.

Regards
Henrik



Re: [squid-users] Reverse proxy, what to do with requests to it's IP addres?

2010-09-15 Thread Amos Jeffries

On 16/09/10 00:23, Jordon Bedwell wrote:

On 09/15/2010 07:20 AM, twintu...@f2s.com wrote:

Dear Squidders

I am setting up a reverse proxy so we can move from a temporary Apache Reverse
proxy.

It works fine for all the Domains/Urls Hosted etc..

But if I go to the IP of the Proxy I get the "URL could not be retrieved" page with
the proxy details; obviously I would rather not have this presented to the
general public.



The squid error messages can be branded easily nowadays. No need to be 
ashamed of them. http://www.squid-cache.org/Versions/langpack/ has 
updated and HTML compliant templates with CSS hooks.


Squid version information can be removed leaving only the anonymous text 
"squid" http://www.squid-cache.org/Doc/config/httpd_suppress_version_string/




I tried squirm rewriting the IP to our default domain, but that did not seem to
work. (squirm does rewrite some other stuff ok though)

So.

What is the best way to either return a blank page? or is there an easy way to
rewrite the request other than in squirm?


Free your mind from the concept of re-writing whenever bad things happen. :)


For requests sent to Squid without a Host: header specifying the domain, 
Squid provides the defaultsite= option on your http_port. This will 
pretend that the Host: header contains whatever domain is set there, 
using it for the Host: header passed to the web servers.
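A sketch for an accelerator port (example.com is a placeholder for your domain):

  http_port 80 accel defaultsite=example.com vhost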


To cleanly redirect a request to your main domain home page, change your 
terminal "http_access deny all" to this:


  acl bounce src all
  http_access deny bounce
  deny_info 303:http://example.com/ bounce

with example.com being whatever your domain is.


deny_info could also be set to "TCP_RESET" to abandon the client's 
request, leaving them with whatever their browser presents.



If you have the latest squid beta you can do trickier things like 
preserving the path or http/https portions. :)

  http://wiki.squid-cache.org/Features/CustomErrors



Cheers

Rob


Add the IP to the ACL and it *should* in theory work, though I've never
actually done it since I redirect before it even hits Squid. After you
do the previously mentioned you can use Apache or whatever other server
you so choose to use to redirect to the domain name.


Then you face the problem of what the real web servers do with 
http://10.0.0.0/something or whatever the IP is. Most likely you see a 
fancy error page saying Host does not exist with the server logo and 
server details.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


[squid-users] can't increase Filedescriptor

2010-09-15 Thread flm

Hi,
I got this message in cache.log : "Your cache is running out of
filedescriptors"
So I increased the FD, but now when starting squid, I got this message : 
NOTICE: Could not increase the number of filedescriptor

Steps I followed to increase this FD:
squid.conf : 
added instruction "max_filedesc 4096"
/etc/security/limits.conf :
added instruction "* - nofile 4096"

$ squid restart

Information:
Squid Cache version 2.7.STABLE6 for i686-pc-linux-gnu
OS RedHAT  2.6.9-34.ELsmp
$ ulimit -n
4096
$ squidclient mgr:info | grep -A 1 "File descriptor"
File descriptor usage for squid:
Maximum number of file descriptors: 1024
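One thing worth verifying (a sketch; the binary path is an example) is that the
raised limit is actually in effect in the environment that launches Squid - on
many systems limits.conf only applies to PAM login sessions, not to init scripts:

  # run in the context that actually starts squid:
  ulimit -HSn 4096
  /usr/sbin/squid
  squidclient mgr:info | grep "Maximum number of file descriptors"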

Thanks a lot


 



Re: [squid-users] Re: Re: Re: Squid 3.0 STABLE 19 and SPNEGO with Windows Firefox 3.6.3

2010-09-15 Thread Henrik Nordström
Thu 2010-09-09 at 23:32 +0100, Markus Moeller wrote:
> So it looks like a Firefox issue. Unfortunately I don't have a setup to test 
> on.

Firefox only speaks SPNEGO to "trusted servers". There is a setting in
about:config you need to set to define what is trusted.
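For reference, the key involved is usually network.negotiate-auth.trusted-uris;
a sketch, with the value being a placeholder for your own host pattern:

  network.negotiate-auth.trusted-uris = .example.com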

Regards
Henrik



[squid-users] Strange performance effects on squid during off peak hours

2010-09-15 Thread Martin Sperl
Hi everyone,

we are seeing a strange response-time effect over 24 hours when delivering 
content via Squid+icap service (3.0.STABLE9 - I know old, but getting something 
changed in a production environment can be VERY hard...). Icap server we use is 
rewriting some URLs and also rewriting some of the content response.

Essentially we see that during peak hours the Average response time is better 
than during off-peak hours.
Here a report for one day for all CSS files that are delivered with CacheStatus 
TCP_MEM_HIT (as taken from the extended access-logs of squid) for a single 
server (all servers show similar effects):

Here the quick overview:
+--+--+---+
| hour | hits | ART   |
+--+--+---+
|0 | 4232 | 0.016 |
|1 | 4553 | 0.015 |
|2 | 4238 | 0.015 |
|3 | 4026 | 0.018 |
|4 | 1270 | 0.024 |
|5 |  390 | 0.042 |
|6 |   61 | 0.054 |
|7 |  591 | 0.034 |
|8 |  445 | 0.038 |
|9 |  505 | 0.035 |
|   10 |  716 | 0.034 |
|   11 | 1307 | 0.030 |
|   12 | 2552 | 0.023 |
|   13 | 3197 | 0.021 |
|   14 | 3567 | 0.020 |
|   15 | 4095 | 0.019 |
|   16 | 4037 | 0.019 |
|   17 | 4670 | 0.017 |
|   18 | 5349 | 0.016 |
|   19 | 5638 | 0.017 |
|   20 | 6262 | 0.014 |
|   21 | 5634 | 0.014 |
|   22 | 4809 | 0.016 |
|   23 | 5393 | 0.016 |
+--+--+---+

Obviously there are statistical effects involved, but when looking at the % 
histograms for the hours we see the following:
Hour, then % of hits in each 0.001s bucket from 0.000s to 0.030s, then % of hits >0.030s
00   0.2 6.0 3.2 3.3 3.3 2.7 4.0 3.2 3.4 3.0 4.4 4.7 4.2 3.9 3.6 3.9 3.1 3.1 
3.0 2.6 3.7 2.8 2.9 2.1 1.7 2.0 1.7 1.5 1.5 1.6 1.5 8.2
01   0.2 6.1 3.7 3.3 3.4 3.2 3.3 3.3 3.9 3.8 4.3 5.1 4.0 4.4 3.4 3.7 3.3 2.8 
3.1 2.7 3.6 2.6 2.3 2.1 2.1 1.8 1.5 1.7 1.4 1.8 1.4 6.6
02   0.1 6.5 3.4 3.5 3.2 3.3 3.4 3.1 3.0 3.3 4.5 5.0 4.3 3.8 3.4 3.6 3.7 3.1 
3.3 2.6 2.9 3.1 2.8 2.0 2.3 1.9 2.0 1.4 1.1 1.6 1.7 7.3
03   0.3 6.1 3.2 2.9 3.2 3.2 3.6 3.5 3.2 3.4 4.5 5.1 3.8 4.0 4.2 4.0 3.4 3.2 
3.1 3.2 2.8 3.1 2.6 1.5 1.7 1.9 1.7 1.6 1.1 1.5 1.5 8.0
04   0.4 3.6 1.9 1.5 1.8 1.3 2.0 2.0 1.9 2.0 3.8 3.5 2.8 3.1 1.7 3.1 2.4 2.8 
2.4 2.2 3.0 3.6 2.6 1.7 2.5 1.7 1.3 2.1 1.7 3.4 3.9 26.3
05   0.0 0.5 0.8 0.0 0.0 0.3 0.3 0.0 0.0 0.8 1.0 0.8 0.5 0.3 1.0 0.5 1.5 0.0 
1.0 0.3 1.8 1.8 0.3 0.8 1.0 0.8 0.5 1.5 0.8 4.6 5.1 71.5
06   0.0 0.0 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.6 1.6 3.3 91.8
07   0.5 2.4 0.3 0.2 0.7 0.3 0.5 0.8 0.2 0.7 1.2 2.2 2.4 2.0 0.7 1.0 1.5 0.3 
0.8 2.2 2.0 3.2 2.5 1.9 2.0 0.8 2.2 1.0 1.4 2.4 5.6 54.0
08   0.4 1.6 1.1 0.0 1.1 0.9 1.1 0.4 0.2 0.4 0.9 0.9 0.9 0.7 1.3 1.1 0.7 1.6 
1.1 1.6 1.1 3.6 1.8 2.0 1.8 1.8 1.8 1.3 1.8 1.3 5.2 58.2
09   0.0 1.6 0.8 1.0 0.8 0.6 0.4 0.8 0.4 0.2 1.4 1.0 0.6 0.6 0.8 1.8 1.4 1.0 
1.2 1.0 2.8 3.0 1.0 2.2 2.8 1.0 1.6 1.4 1.6 3.2 5.5 56.8
10   0.3 1.3 0.6 0.7 0.4 0.0 0.3 0.1 0.6 0.8 0.3 1.1 1.3 1.1 1.1 1.4 1.7 0.8 
1.1 1.4 3.8 2.8 1.1 1.5 2.2 1.0 2.2 1.1 1.4 4.5 8.4 53.6
11   0.4 3.0 1.0 0.8 0.9 0.5 0.7 0.4 0.8 0.3 1.8 2.1 2.0 1.4 1.2 1.3 1.8 1.3 
1.3 1.0 3.3 3.3 2.5 2.7 2.2 2.4 1.8 2.3 1.8 4.3 4.8 44.6
12   0.2 3.4 1.8 1.4 1.3 1.7 1.5 1.5 1.3 1.2 3.0 3.6 2.8 2.9 2.7 2.4 2.7 2.2 
2.3 2.3 3.7 3.7 2.9 2.5 2.5 2.6 2.2 2.4 2.2 2.8 4.4 25.7
13   0.2 4.6 1.5 1.8 2.2 1.6 1.6 2.0 2.2 1.8 3.7 4.2 2.7 2.9 3.1 3.1 2.8 2.9 
2.7 3.1 3.6 3.8 2.9 2.6 2.3 2.6 2.3 1.9 1.7 2.7 3.7 19.3
14   0.4 4.3 1.9 1.9 2.4 1.7 2.6 1.8 2.2 1.8 3.8 4.3 3.2 3.2 3.1 2.5 3.4 2.7 
3.0 2.9 3.8 4.0 2.7 2.9 2.4 2.6 2.2 2.5 2.0 2.5 3.0 16.7
15   0.4 5.3 2.0 2.3 2.1 2.2 2.5 2.5 2.5 2.7 3.9 4.7 3.4 3.5 3.3 3.0 3.2 3.0 
3.0 2.8 4.4 3.8 2.7 2.1 2.6 1.9 2.2 1.5 2.1 1.9 2.1 14.3
16   0.3 5.3 3.1 2.7 2.4 2.5 3.0 2.6 2.5 2.9 3.4 4.9 4.0 3.5 3.3 3.5 3.5 2.9 
2.7 2.8 3.4 3.1 2.6 2.7 2.3 2.7 2.1 1.9 1.5 1.9 1.8 12.5
17   0.2 5.4 2.7 2.6 2.8 3.0 2.5 2.5 3.0 3.2 4.5 4.8 3.9 3.6 4.2 3.3 3.3 3.3 
3.2 3.5 3.5 3.3 2.5 2.3 1.9 2.0 2.0 1.6 1.5 1.9 2.0 9.8
18   0.4 5.5 3.0 3.2 2.8 3.1 3.0 3.5 2.9 2.9 4.3 5.5 3.8 3.1 3.9 3.1 3.2 3.3 
2.7 3.4 3.3 3.1 2.7 2.3 2.2 1.9 1.6 1.5 1.7 1.6 1.8 9.4
19   0.3 6.2 3.1 3.0 3.4 3.4 3.1 2.9 3.5 2.9 4.2 5.0 4.5 3.5 3.4 3.6 2.9 3.2 
2.9 3.0 3.4 3.1 2.7 2.1 1.9 1.6 1.6 1.7 1.3 1.7 2.0 8.6
20   0.4 6.8 3.7 3.4 3.6 3.7 3.3 3.4 3.4 3.4 5.1 5.3 4.1 3.5 4.0 3.6 3.6 2.5 
3.1 2.8 3.0 2.8 2.5 2.1 2.1 1.7 1.4 1.2 1.1 1.3 1.5 6.5
21   0.4 7.1 3.6 4.1 3.7 3.3 3.6 3.8 3.6 3.3 4.4 4.6 5.1 4.5 4.1 3.1 3.2 3.2 
2.7 2.9 3.3 2.9 2.3 1.9 1.8 1.5 1.5 1.2 1.2 1.3 1.2 5.6
22   0.3 6.8 3.6 3.2 2.7 3.1 3.2 3.0 3.4 2.9 4.2 4.4 3.8 3.2 3.5 3.9 3.0 3.1 
2.9 2.5 3.7 3.2 2.7 2.4 2.1 1.5 1.9 1.4 1.4 1.6 2.0 9.4
23   0.4 6.0 3.8 3.2 3.4 3.0 3.0 2.8 3.0 3.1 4.5 4.5 4.1 3.9 3.6 3.7 3.4 2.8 
2.8 3.1 4.0 2.9 2.7 1.9 2.1 2.1 1.6 1.6 1.3 1.6 1.5 8.5

You can see that for off-peak hours (at 6am UTC) 91% of all requests with 
TCP_MEM_HIT for css files are >0.030 seconds.
As for "peak" hours, the histograms peak at 0.001s and 0.011s (at 18:00 each of 
those buckets holds 5.5% of all requests).

I know that the numbers reported by squid also include some "effects" of the network itself.

Re: [squid-users] WCCP + Squid with Cisco 2811. Not working

2010-09-15 Thread Henrik Nordström
Thu 2010-09-09 at 17:25 -0400, Chris Abel wrote:

> Thanks. After spending a lot of time with wccp and trying the tutorial on
> squids wiki, I think I have given up. It "seems" to work before I play
> around with my iptables. I say seems because I can actually see gre
> traffic on the squid server and I see wccp packets being sent to the squid
> server on the cisco router, but I am not sure if this is actually working
> though.

Then the WCCP part works fully, and you only need to get the GRE
interface & related iptables interception/REDIRECT rules correct.
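For reference, a sketch of the usual setup on the Squid box (router and proxy
addresses are placeholders; the interface name matches your rule below):

  # bring up the GRE tunnel the router sends intercepted traffic into
  modprobe ip_gre
  ip tunnel add gre1 mode gre remote 192.0.2.1 local 192.0.2.2 dev eth0
  ip link set gre1 up
  # reverse-path filtering usually has to be off on the tunnel interface
  echo 0 > /proc/sys/net/ipv4/conf/gre1/rp_filter
  # then the interception rule
  iptables -t nat -A PREROUTING -i gre1 -p tcp --dport 80 -j REDIRECT --to-port 3129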

> Is there a way I can actually check squid logs to see if it's
> getting anything? For some reason I don't have an access.log. I have an
> access.log.1, but not an access.log.

Odd.

Is Squid running?

Try restarting it.

> When I put this in:
> iptables -t nat -A PREROUTING -i gre1 -p tcp --dport 80 -j REDIRECT
> --to-port 3129
> It seems to break it and I'm left with the same problem I had before.

Which was?

Regards
Henrik



Re: [squid-users] 522 error missing protocol negotiation hints?

2010-09-15 Thread Ralf Hildebrandt
* Amos Jeffries :

> If you set debug_options 9,2 squid will list the FTP messages going on.

That FTP server is publicly reachable; incidentally, it's
ftp.hu-berlin.de :)

It's running vsFTPd 2.0.1

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Amos Jeffries

On 15/09/10 23:24, Seb Harrington wrote:


Hi everyone,

I have a problem when accessing http://smallsteps4life.direct.gov.uk/
through squid.

When accessing the site directly the site is properly formatted, when
accessing through squid the site appears 'unformatted', some of the
images do not load and it looks as if the CSS has not been applied.

I thought this behaviour was a little strange so I've tested it on two
more instances of squid, one a default fresh install allowing everything
through (the all acl).

When accessing the site these are the logs:
access.log: http://pastebin.com/HtkyfjUJ
store.log: http://pastebin.com/0NjnDZzW
cache.log: did not output anything useful or informative.

I'm using the ubuntu version of squid3 (apt-get squid3) and I'm using
ubuntu 10.04 Lucid Lynx.

Squid version: Squid Cache: Version 3.0.STABLE19


You may find a backport of 3.1 useful:
 https://launchpad.net/~yadi/+archive/ppa

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] DNS config - squid

2010-09-15 Thread Amos Jeffries

On 15/09/10 17:18, viswa wrote:

Hi All


Is it possible to configure squid to use a different DNS server for
different clients? For example, if the request comes from 172.16.1.25, resolve
via dns-server-1; otherwise via dns-server-2?


No, it's not.

Fundamentally there is no point to it. The visitor will never be using 
the DNS lookups squid performs. *Squid itself* is the client when 
fetching data from the servers those lookups resolve.
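What can be configured is a single global resolver set for Squid itself, e.g.
(a sketch; the addresses are placeholders):

  dns_nameservers 192.0.2.53 198.51.100.53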


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] 522 error missing protocol negotiation hints?

2010-09-15 Thread Amos Jeffries

On 15/09/10 20:47, Ralf Hildebrandt wrote:

In my log, I'm getting:

Sep 13 08:10:39 proxy-cbf-1 squid[13350]: Broken FTP Server at 141.20.1.43. 522 
error missing protocol negotiation hints
Sep 13 08:12:09 proxy-cbf-1 squid[13350]: Broken FTP Server at 141.20.1.43. 522 
error missing protocol negotiation hints

What exactly am I supposed to tell the admin of vinus.rz.hu-berlin.de
= 141.20.1.43?


If you set debug_options 9,2 squid will list the FTP messages going on.

Squid thinks the server is sending Squid a 522 "try again" message in 
response to EPRT, but apparently not sending the flags to indicate 
whether the data channel should be made using IPv4 or IPv6.


EPRT and 522 are covered in RFC 2428 section 2. The (bracketed) protocol list 
at the end of the line is not optional, brackets mid-line are forbidden, and 
the brackets *should* contain 1 or 2, or both separated by a comma.
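Per RFC 2428, a well-formed reply looks like:

  522 Network protocol not supported, use (1)

where 1 means IPv4 and 2 means IPv6 for the data connection.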


Squid should be defaulting to the same protocol as the server's IP 
address. That is worth checking before reporting. It's a bit strange 
that a server listening on 141.20.1.43 cannot make IPv4 data connections.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Reverse proxy, what to do with requests to its IP address?

2010-09-15 Thread Jordon Bedwell
On 09/15/2010 07:20 AM, twintu...@f2s.com wrote:
> Dear Squidders
> 
> I am setting up a reverse proxy so we can move from a temporary Apache Reverse
> proxy.
> 
> It works fine for all the Domains/Urls Hosted etc..
> 
> But if I go to the IP of the Proxy I get the "URL could not be retrieved" page with
> the proxy details; obviously I would rather not have this presented to the
> general public.
> 
> I tried squirm rewriting the IP to our default domain, but that did not seem
> to work. (squirm does rewrite some other stuff ok though)
> 
> So.
> 
> What is the best way to either return a blank page? or is there an easy way to
> rewrite the request other than in squirm?
> 
> Cheers
> 
> Rob
> 
> 
> 
> 
> 

Add the IP to the ACL and it *should* in theory work, though I've never
actually done it since I redirect before it even hits Squid. After you
do the previously mentioned you can use Apache or whatever other server
you so choose to use to redirect to the domain name.


[squid-users] Reverse proxy, what to do with requests to its IP address?

2010-09-15 Thread twinturbo
Dear Squidders

I am setting up a reverse proxy so we can move from a temporary Apache Reverse
proxy.

It works fine for all the Domains/Urls Hosted etc..

But if I go to the IP of the Proxy I get the "URL could not be retrieved" page with
the proxy details; obviously I would rather not have this presented to the
general public.

I tried squirm rewriting the IP to our default domain, but that did not seem to
work. (squirm does rewrite some other stuff ok though)

So.

What is the best way to either return a blank page? or is there an easy way to
rewrite the request other than in squirm?

Cheers

Rob







Re: [squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Goetz R Schultz
Hi,

I checked it with Squid 3.1.8 on FreeBSD and have no problems whatsoever.

HTH,

Thanks and regards

  Goetz R. Schultz

"I intend to live forever - so far, so good."
===
Verify the GnuPG-Sig at www.goetz.co.uk
Get the rootcertificate at www.cacert.org
===
 /"\
 \ /ASCII Ribbon Campaign
  X  against HTML e-mail
 / \

"Si forte in alienas manus oberraverit hec peregrina epistola
incertis ventis dimissa, sed Deo commendata, precamur ut ei
reddatur cui soli destinata, nec preripiat quisquam non sibi parata."

On 15/09/2010 12:24, Seb Harrington wrote:
>  
> Hi everyone,
> 
> I have a problem when accessing http://smallsteps4life.direct.gov.uk/
> through squid.
> 
> When accessing the site directly the site is properly formatted, when
> accessing through squid the site appears 'unformatted', some of the
> images do not load and it looks as if the CSS has not been applied.
>  
> I thought this behaviour was a little strange so I've tested it on two
> more instances of squid, one a default fresh install allowing everything
> through (the all acl).
> 
> When accessing the site these are the logs: 
>   access.log: http://pastebin.com/HtkyfjUJ
>   store.log: http://pastebin.com/0NjnDZzW
>   cache.log: did not output anything useful or informative.
> 
> I'm using the ubuntu version of squid3 (apt-get squid3) and I'm using
> ubuntu 10.04 Lucid Lynx.
> 
> Squid version: Squid Cache: Version 3.0.STABLE19
> configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
> '--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
> '--infodir=${prefix}/share/info' '--sysconfdir=/etc'
> '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
> '--disable-maintainer-mode' '--disable-dependency-tracking'
> '--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3'
> '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man'
> '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8'
> '--enable-storeio=ufs,aufs,diskd,null'
> '--enable-removal-policies=lru,heap' '--enable-delay-pools'
> '--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
> '--enable-follow-x-forwarded-for'
> '--enable-auth=basic,digest,ntlm,negotiate'
> '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,getpwnam,mul
> ti-domain-NTLM' '--enable-ntlm-auth-helpers=SMB'
> '--enable-digest-auth-helpers=ldap,password'
> '--enable-negotiate-auth-helpers=squid_kerb_auth'
> '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbi
> nfo_group' '--enable-arp-acl' '--enable-snmp'
> '--with-filedescriptors=65536' '--with-large-files'
> '--with-default-user=proxy' '--enable-epoll' '--enable-linux-netfilter'
> 'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2'
> 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -Wall
> -O2' 'FFLAGS=-g -O2'
> 
> Could someone please run that website through their version of squid for
> me and let me know if this is a squid issue, a website issue or a bug in
> the ubuntu packaged version of squid.
> 
> Cheers,
> 
> Seb
> 
> 
> This email carries a disclaimer, a copy of which may be read at 
> http://learning.longhill.org.uk/disclaimer
> 





[squid-users] Problem accessing a particular site through squid

2010-09-15 Thread Seb Harrington
 
Hi everyone,

I have a problem when accessing http://smallsteps4life.direct.gov.uk/
through squid.

When accessing the site directly the site is properly formatted, when
accessing through squid the site appears 'unformatted', some of the
images do not load and it looks as if the CSS has not been applied.
 
I thought this behaviour was a little strange so I've tested it on two
more instances of squid, one a default fresh install allowing everything
through (the all acl).

When accessing the site these are the logs: 
access.log: http://pastebin.com/HtkyfjUJ
store.log: http://pastebin.com/0NjnDZzW
cache.log: did not output anything useful or informative.

I'm using the ubuntu version of squid3 (apt-get squid3) and I'm using
ubuntu 10.04 Lucid Lynx.

Squid version: Squid Cache: Version 3.0.STABLE19
configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc'
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
'--disable-maintainer-mode' '--disable-dependency-tracking'
'--disable-silent-rules' '--srcdir=.' '--datadir=/usr/share/squid3'
'--sysconfdir=/etc/squid3' '--mandir=/usr/share/man'
'--with-cppunit-basedir=/usr' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,null'
'--enable-removal-policies=lru,heap' '--enable-delay-pools'
'--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
'--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,getpwnam,mul
ti-domain-NTLM' '--enable-ntlm-auth-helpers=SMB'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbi
nfo_group' '--enable-arp-acl' '--enable-snmp'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-epoll' '--enable-linux-netfilter'
'build_alias=i486-linux-gnu' 'CFLAGS=-g -O2 -g -Wall -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -Wall
-O2' 'FFLAGS=-g -O2'

Could someone please run that website through their version of squid for
me and let me know if this is a squid issue, a website issue or a bug in
the ubuntu packaged version of squid.

Cheers,

Seb


This email carries a disclaimer, a copy of which may be read at 
http://learning.longhill.org.uk/disclaimer


[squid-users] 522 error missing protocol negotiation hints?

2010-09-15 Thread Ralf Hildebrandt
In my log, I'm getting:

Sep 13 08:10:39 proxy-cbf-1 squid[13350]: Broken FTP Server at 141.20.1.43. 522 
error missing protocol negotiation hints
Sep 13 08:12:09 proxy-cbf-1 squid[13350]: Broken FTP Server at 141.20.1.43. 522 
error missing protocol negotiation hints

What exactly am I supposed to tell the admin of vinus.rz.hu-berlin.de 
= 141.20.1.43?
-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] Re: squid client authentication against AD computer account

2010-09-15 Thread Manoj Rajkarnikar
Thanks for the quick response Markus.

The reason I need to limit the computer account and not the user account is
that people here move out to distant branches, and the internet access
policy is tied to the position they hold, and thus to the computer
they will use.

I've successfully set up the kerberos authentication, but I don't see
how squid will fetch the computer information from the client request and
authorize it based on group membership in AD. What I wish to
accomplish is:

1. create a security group in AD
2. add computer accounts to this security group
3. squid checks whether the computer trying to access the internet is a
member of this security group.
4. if not, deny access to the internet, or request an AD user login that
is allowed.

I'm not sure if this is achievable.
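For reference, group authorization with squid_kerb_ldap is usually wired up
roughly like this (a sketch only; the helper path and group name are examples,
and since it checks the *authenticated* principal, whether a machine-account
principal ends up in %LOGIN is exactly the open question):

  external_acl_type ad_group ttl=3600 %LOGIN /usr/lib/squid/squid_kerb_ldap -g AllowedComputers
  acl allowed_machines external ad_group
  http_access allow allowed_machines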

Thanks for the help.
Manoj

On Wed, Sep 15, 2010 at 12:28 AM, Markus Moeller
 wrote:
>
> "Manoj Rajkarnikar"  wrote in message
> news:aanlktingxtowx+aysrvgoaseiqrs1qrmx2vym8t5i...@mail.gmail.com...
>>
>> Hi all.
>>
>> I've been trying to setup this squid box with authentication to AD
>> 2003 server. The need in our situation is to allow the workstation
>> access to the internet and not the user, since the users are always
>> moving from station to station. I've already setup kerberos
>> authentication successfully. I've searched through the list for anything
>> related to authorizing computer accounts but found none..
>>
>
> Why do you want to limit the computer and not the user? I assume the users log in
> to the stations with their credentials, so moving stations should not be an
> issue, or?
>
>> I'm not very familiar with ldap queries. Any help would be greatly
>> appreciated.. I'm trying to use squid_kerb_ldap for ldap
>> authorization...
>>
>>
>
> squid_kerb_ldap will connect to AD and determine if a user is a member of
> an AD group. The connection to AD is authenticated using the Kerberos key
> from the squid keytab file, and the AD server is found by using SRV DNS
> records, which are usually defined in a Windows environment with AD.
>
>> Thank you very much for your help.
>>
>> Regards
>> Manoj
>>
>
>
>