On Fri, 2014-06-27 at 19:06 +1200, Amos Jeffries wrote:
> On 27/06/2014 6:53 p.m., Jasper Van Der Westhuizen wrote:
On Thu, 2014-06-26 at 18:03 +0300, Eliezer Croitoru wrote:
> On 06/25/2014 04:06 PM, Jasper Van Der Westhuizen wrote:
> > As a matter of interest, in my cache logs I see many lines like these
> >
> > 2014/06/25 14:52:58 kid1| WARNING: swapfile header inconsistent with
>
>
> Are you using SMP workers with an AUFS, UFS or diskd cache_dir?
> UFS/AUFS/diskd are not SMP-aware and this is how it shows up when two
> or more workers are over-writing cache disk files and corrupting each
> other's records.
>
> Amos
>
Hi Amos
No, I don't make use of multiple SMP workers.
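For anyone hitting the same warning: Amos's point about UFS/AUFS/diskd not being SMP-aware can be addressed either by keeping a single worker, or by switching to the SMP-aware rock store. A minimal squid.conf sketch (paths and sizes below are made up for illustration, not taken from Jasper's config):

```
# Sketch: SMP-safe cache_dir setup (hypothetical paths/sizes)
workers 2

# rock store is shared safely between SMP workers
cache_dir rock /var/spool/squid/rock 4096 max-size=32768

# alternatively, give each worker its own private AUFS directory
# cache_dir aufs /var/spool/squid/aufs${process_number} 8192 16 256
```

The `${process_number}` macro expands differently per worker, so each worker writes to its own directory and the swapfile corruption stops.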
Hi all
I'm running a compiled version of Squid 3.4.4 and I'm having some
strange behavior lately. I have a two node cluster load balancing via a
F5 LB and at times one of the two servers will simply not complete a
connection. Squid is running, the logs keep rolling (although much slower
and most en
On Tue, 2014-04-15 at 14:38 +0100, Nick Hill wrote:
> URLs with query strings have traditionally returned dynamic content.
> Consequently, http caches by default tend not to cache content when
> the URL has a query string.
>
> In recent years, notably Microsoft and indeed many others have adopte
> > On Tue, 2014-04-15 at 13:11 +0100, Nick Hill wrote:
> >> This may be the culprit
> >>
> >> hierarchy_stoplist cgi-bin ?
> >>
> >> I believe this will prevent caching of any URL containing a ?
> >>
> >
> > Should I remove the "?" and leave cgi-bin?
>
> You can remove the whole line quite safely.
On Tue, 2014-04-15 at 13:11 +0100, Nick Hill wrote:
> This may be the culprit
>
> hierarchy_stoplist cgi-bin ?
>
> I believe this will prevent caching of any URL containing a ?
>
Should I remove the "?" and leave cgi-bin?
Regards
Jasper
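For reference, on current Squid the whole `hierarchy_stoplist` line (and the old QUERY acl / `cache deny` pair that used to accompany it) can simply be dropped; query-string URLs are then cached or not according to the normal refresh rules. A sketch of the old lines versus the current defaults:

```
# Old configs often shipped these three lines; all can be removed:
# hierarchy_stoplist cgi-bin ?
# acl QUERY urlpath_regex cgi-bin \?
# cache deny QUERY

# Current defaults: dynamic content is only refreshed, not banned
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
```

With these defaults a query-string response is still stored when the server sends proper validators or expiry information.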
On Tue, 2014-04-15 at 12:09 +0100, Nick Hill wrote:
> Hi Jasper
>
> I use an expression like this, which will work on almost all Linux
> machines, Cygwin on Windows, and I expect Mac OS X or a terminal in
> Android, so long as you have a version of grep similar to GNU grep.
>
> echo
> "http://cac
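Nick's command is cut off above; as a rough sketch of the same idea (entirely a reconstruction, not his exact pipeline), one can fetch the response headers and grep for directives that forbid a shared cache from storing the reply:

```shell
#!/bin/sh
# Sketch: decide from raw response headers whether a reply looks
# cacheable by a shared cache. In practice the headers would come
# from something like: curl -sI "$url"
looks_cacheable() {
  # no-store and private both forbid a shared cache from storing it
  if printf '%s\n' "$1" | grep -qiE 'Cache-Control:.*(no-store|private)'; then
    echo no
  else
    echo yes
  fi
}
```

This only checks Cache-Control; a fuller test would also look at Expires, Vary and validators, but it is enough to spot obviously uncacheable replies.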
> >
> > Hi Pieter
> >
> > No, that gives me an incorrect regular expression error.
>
> NP: regex has an implied .* prefix and suffix on patterns unless you use
> the ^ and $ endpoint anchors.
>
>
> What are the HTTP headers for these requests and replies?
> The 206 status indicates a Range request
> > refresh_pattern -i pack.google.com/.*\.(exe|crx) 10080 80% 43200
> > override-expire override-lastmod ignore-no-cache ignore-reload
> > reload-into-ims ignore-private
> >
> > I see the following behavior in my logs. This is for the same
> > client(source). Multiple entries, like it gets downl
> I had an issue where ICAP settings delayed the page loading, but what you
> describe is not a blank page but an error page.
> Can you look at the development console of IE11 and see what is
> happening in the network layer?
>
> Eliezer
>
> On 04/09/2014 01:05
Hi all
I'm trying to cache chrome updates, but I see it always fetches over and
over again.
I have the following refresh pattern in my config.
refresh_pattern -i pack.google.com/.*\.(exe|crx) 10080 80% 43200
override-expire override-lastmod ignore-no-cache ignore-reload
reload-into-ims ignore-private
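One common reason these update downloads never stay cached is that the Google updater fetches them with Range requests (the 206 responses mentioned earlier in the thread), which Squid passes through without storing by default. A hedged squid.conf sketch (the acl name here is made up):

```
# Fetch the whole object even when the client only asks for a range,
# so the full reply becomes cacheable (costs extra bandwidth up front)
acl chromepack url_regex -i pack\.google\.com/.*\.(exe|crx)
range_offset_limit -1 chromepack
```

`range_offset_limit -1` tells Squid to start the download from the beginning regardless of the requested offset; combined with the existing refresh_pattern, subsequent range requests can then be satisfied from the cached object.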
> >
> > You could avoid that by upgrading Squid, preferably to the current
> > supported release (3.4.4). I have a client running many IE11 with their
> > default settings behind a Squid-3.4 and not seeing problems.
> >
> > Amos
> >
> >
>
> Thank you Amos. I will go to 3.4 then.
>
Hi Amos
> > With the first two options enabled in IE and SPDY/3 disabled, google
> > loads fine first time. IE9 doesn't have a SPDY/3 setting, but disabling
> > the HTTP1.1 settings works.
> >
> > So to me it seems that HTTP1.1 is the problem here(as well as the SPDY/3
> > problem). We run Squid 3.1.
>
>
> > Do you see anything coming back *from* the webserver?
> > Is anything being delivered by Squid to the client?
>
> Hi Amos
>
> Yes I do see traffic coming back from the server.
>
> What I found, though, was that when going to http://www.google.co.za or
> even http://www.google.com, it redi
On Mon, 2014-04-07 at 18:42 +1200, Pieter De Wit wrote:
> >> My setup is 3 servers running squid 3-3.1.12-8.12.1 behind an F5 load
> >> balancer. From there I send all traffic to a ZScaler cache peer. In my
> >> testing I have bypassed the cache peer but without any success.
> >>
> >> Has anyone
> > In my squid logs I can see the request going to the website. The client
> > just gets a blank page until they reload it.
>
> Do you see anything coming back *from* the webserver?
> Is anything being delivered by Squid to the client?
Hi Amos
Yes I do see traffic coming back from the server.
Hi all
I have a problem with some of my users getting blank pages when loading
sites like google and MSN. They would open the site and get a blank
page, but when refreshing it loads. These users mostly use IE11, but we have
seen it with browsers like Safari as well. Although I have to say that 98% of the
time
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Sunday, May 27, 2012 1:22 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Connection pinning (NTLM pass through)
>
> On 26/05/2012 8:31 a.m., Petter Abrahamsson wrote:
> > Hi,
> >
> > I'm t
> To proxy FTP well you need an FTP proxy; FTP was never designed to be
> proxied. There is one called Frox which handles FTP with some tricks.
Thank you for the tip Amos. I will have a look at Frox now.
There is another requirement though. Ideally I would like, as in the case with
Squid,
I don't think that is the problem.. If I ftp directly from the squid server to
my test ftp site, it works fine. Via a browser it works fine. Only when using an
FTP client, such as FileZilla, does it fail.
> -Original Message-
> From: Jakob Curdes [mailto:j...@info-systems.de]
>
Hi
I'm trying to force all FTP connections direct. I have a parent cache and at
the moment ftp connections via a browser work fine and are sent directly, but my
problem is that when using a client like FileZilla it sends the connection to
the parent cache and not directly.
I have enabled the fo
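The directive Jasper's message is cut off before showing is presumably `always_direct`, which is the usual way to force protocol-based traffic past a parent. A minimal sketch:

```
# Send all FTP requests directly to the origin, never to the parent cache
acl ftp proto FTP
always_direct allow ftp
```

Note this only controls requests that actually reach Squid; a native FTP client speaking the FTP protocol (rather than HTTP-over-proxy, as browsers do) never talks to Squid at all, which matches the FileZilla behaviour described in this thread.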
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Saturday, June 23, 2012 12:18 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] IP based ACL - regex?
>
> On 22/06/2012 11:30 p.m., Jasper Van Der Westhuizen wrote:
>
Hi all
Could anyone give me some pointers on how to set up an ACL for the following:
if I want an ACL that includes all hosts (in different subnets) whose addresses
end in .105, how would I go about it?
Any help is appreciated.
Regards
Jasper
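Squid's `src` acl matches addresses and CIDR subnets, not patterns, so there is no regex form for this. One option is to enumerate the .105 address from each subnet in an `src` list; another is an external ACL helper. A sketch of such a helper (script name and acl names below are hypothetical):

```shell
#!/bin/sh
# Hypothetical external_acl_type helper: reads one client IP per line
# and answers OK when the last octet is 105, ERR otherwise.
check_ip() {
  case "$1" in
    *.105) echo OK ;;   # shell glob: anything ending in ".105"
    *)     echo ERR ;;
  esac
}

# Helper main loop (the stdin/stdout protocol external_acl_type uses):
# while read ip; do check_ip "$ip"; done
```

Wired up in squid.conf with something like `external_acl_type lastoct %SRC /usr/local/bin/check105.sh` and `acl dot105 external lastoct` (names mine, adjust to taste).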
>-Original Message-
>From: Greg Whynott [mailto:greg.whyn...@gmail.com]
>Sent: Wednesday, April 04, 2012 5:04 PM
>To: Squid Users
>Subject: [squid-users] does a match on an ACL stop or continue?
>
>If I have a list of 10 ACLs and a client matches on ACL #4, will ACLs
>#6-10 be considered
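For `http_access` rules the general answer is first match wins: Squid evaluates the lines top-down and stops at the first allow/deny line whose ACLs all match; later lines are never consulted. A sketch:

```
acl lan src 192.168.0.0/16
acl badsites dstdomain .example.net

http_access deny badsites    # if this matches, the request is denied: stop
http_access allow lan        # only reached when badsites did not match
http_access deny all         # catch-all for everything else
```

(The example acl names and addresses are illustrative, not from the thread.)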
-Original Message-
From: Jasper Van Der Westhuizen [mailto:javanderwesthui...@shoprite.co.za]
Sent: Wednesday, April 04, 2012 11:13 AM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Allowing linked sites - NTLM and un-authenticated
users
> This allows my un-authenticated users access to the whitelisted domains and
> blocks any links in the sites that are not whitelisted(like facebook and
> youtube). It also allows my authenticated users access to all sites,
> including whitelisted sites, as well as allowing linked sites like fa
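The behaviour described above is usually expressed as an ordered pair of rules: allow the whitelist to everyone first, then require authentication for everything else. A sketch (file path and acl names are mine, not from Jasper's config):

```
# Hypothetical names/paths; adjust to the real setup
acl whitelist dstdomain "/etc/squid/whitelist.txt"
acl authed proxy_auth REQUIRED

http_access allow whitelist   # no credentials needed for these domains
http_access allow authed      # everything else requires (NTLM) auth
http_access deny all
```

Because http_access is first-match, unauthenticated users get the whitelist only, while authenticated users fall through to the second rule and get everything.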
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Tuesday, April 03, 2012 8:43 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing linked sites - NTLM and un-authenticated
users
On 3/04/2012 6:12 p.m., Jasper Van Der Westhuizen wrote
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Monday, April 02, 2012 9:27 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing linked sites - NTLM and un-authenticated
users
On 2/04/2012 5:54 p.m., Jasper Van Der Westhuizen wrote
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Saturday, March 31, 2012 10:11 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Allowing linked sites - NTLM and un-authenticated
users
On 30/03/2012 11:45 p.m., Jasper Van Der Westhuizen wrote
Hi everyone
I've been struggling to get a very specific setup going.
Some background: Our users are split into "Internet" users and "Non-Internet"
users. Everyone in a specific AD group is allowed to have full internet access.
I have two SQUID proxies with squidGuard load balanced with NTLM au