On 5/02/2014 4:58 p.m., babajaga wrote:
> Thanx, I did not think about the simple solution :-)
>
> However, this does not work for CONNECT. Not because of squid, but because
> of "a new standard" set by the browser developers, NOT to display a proxy's
> custom error message in this special case.
> R
Thanx, I did not think about the simple solution :-)
However, this does not work for CONNECT. Not because of squid, but because
of "a new standard" set by the browser developers, NOT to display a proxy's
custom error message in this special case.
Ref. here, for example:
https://bugzilla.mozilla.org/
Hi Amos,
"Amos Jeffries" wrote in message
news:b596a7df3abbf894689873c1a4bda...@treenet.coz...
On 2014-02-05 10:06, Markus Moeller wrote:
> Hi Amos,
>
> I tried 3.4.3 and it didn't change. I attach an access.log, a cache.log
> and a wireshark capture file. You will see the first Negotiate/N
I am trying to set up Squid as a proxy with HTTPS support.
No matter what I try, I cannot get CONNECT methods to work (via both HTTP
and HTTPS protocols).
The problem seems to be very strange and unique, because the connection URL
gets converted to something odd.
When I have enabled *never_direc
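For anyone reading along: `never_direct` forces Squid to route requests through a cache_peer instead of contacting origin servers directly. A minimal squid.conf sketch of that setup (the parent proxy address and port here are made-up placeholders, not taken from this thread):

```
# Hypothetical parent proxy; replace with your real upstream
cache_peer 192.0.2.10 parent 3128 0 no-query default
# Never fetch directly from origin servers
never_direct allow all
```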
On 2014-02-05 11:20, Hussam Al-Tayeb wrote:
On Wednesday 05 February 2014 11:05:55 Amos Jeffries wrote:
On 2014-02-05 07:38, Hussam Al-Tayeb wrote:
> same thing happens with
>
http://cdn.static6.opensubtitles.org/gfx/thumbs/5/1/2/3/2193215.jpg
>
> it generates a warning:
> Warning: 113 SERV
On 2014-02-05 10:06, Markus Moeller wrote:
Hi Amos,
I tried 3.4.3 and it didn't change. I attach an access.log, a cache.log
and a wireshark capture file. You will see the first Negotiate/NTLM
authentication attempt is declined and the Negotiate/Kerberos attempt
is not processed by the auth helper
On Wednesday 05 February 2014 11:05:55 Amos Jeffries wrote:
> On 2014-02-05 07:38, Hussam Al-Tayeb wrote:
> > same thing happens with
> >
http://cdn.static6.opensubtitles.org/gfx/thumbs/5/1/2/3/2193215.jpg
> >
> > it generates a warning:
> > Warning: 113 SERV1 (squid) This cache hit is still f
On 2014-02-05 07:38, Hussam Al-Tayeb wrote:
same thing happens with
http://cdn.static6.opensubtitles.org/gfx/thumbs/5/1/2/3/2193215.jpg
it generates a warning:
Warning: 113 SERV1 (squid) This cache hit is still fresh and more than 1 day old
Any way to tell squid not to cache objects that
The message was introduced recently to display when this failure
happens. The issue may or may not be occurring in your 3.2 as well, but
not showing anything about it (just a mystery line about using 1024 FD
on startup even when you configured more).
The most recent emails from me are probably
Hi Amos,
I tried 3.4.3 and it didn't change. I attach an access.log, a cache.log and
a wireshark capture file. You will see the first Negotiate/NTLM
authentication attempt is declined and the Negotiate/Kerberos attempt is not
processed by the auth helper ( I assume because it is on the same se
Hi Eliezer,
Sorry yes - Debian Wheezy 64 bit, no SELinux (or no SELinux
configuration - I think it's pretty much disabled by default). It starts
as root and spawns a child process as proxy.
Jim
On 04/02/2014 17:00, Eliezer Croitoru wrote:
Hey Jim,
I have seen the last email and it depends o
Hello,
On Jan 31, 2014, at 12:12 PM, Amos Jeffries wrote:
On 31/01/2014 11:56 a.m., Al Zick wrote:
Hi,
I am considering switching to authentication via a web page. Are
there
examples of how to do this somewhere? What are the pros and cons
of this
configuration? I am very concerned about
For me, version 3.4.3 has the same behavior. It uses 100% CPU (on
one core; the others are normal). For the users, it just means slowed
down navigation. As soon as I change back to 3.3.8, everything
works fine.
Actually I'm not sure the problem is caused by ntlm or kerberos or
external_acl_
same thing happens with
http://cdn.static6.opensubtitles.org/gfx/thumbs/5/1/2/3/2193215.jpg
it generates a warning:
Warning: 113 SERV1 (squid) This cache hit is still fresh and more than 1 day old
Any way to tell squid not to cache objects that would generate this
warning?
thank you
sig
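The `Warning: 113` header is "Heuristic Expiration" from RFC 2616: a cache must attach it when it serves a hit whose freshness was chosen heuristically for more than 24 hours and whose age exceeds 24 hours. One way to avoid it is simply not to cache the objects in question; a hedged squid.conf sketch (the ACL name is made up):

```
# Don't cache objects from this CDN at all, so no heuristically
# fresh hits (and no Warning: 113) can be served for it
acl no_cache_cdn dstdomain .opensubtitles.org
cache deny no_cache_cdn
```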
Hey Jim,
I have seen the last email and it depends on the OS.
squid starts as root or as another user, and/or may have SELinux restrictions
in different runtime situations (as an example).
What OS are you using?
Eliezer
On 02/04/2014 06:31 PM, Mr J Potter wrote:
Hi all,
More on this - squid -k
Hi all,
More on this - squid -k parse gives this warning about my
file_descriptors line in squid.conf
WARNING: max_filedescriptors disabled. Operating System
setrlimit(RLIMIT_NOFILE) is missing
I've seen in a previous post from Amos that this is an issue. But how
do I fix it? I've got another sys
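Two things are usually involved with that WARNING: the kernel/ulimit ceiling, and whether Squid itself was built with setrlimit(RLIMIT_NOFILE) support (the message suggests the build could not use setrlimit, in which case `max_filedescriptors` is ignored and a rebuild, e.g. with `--with-filedescriptors=N`, may be needed). The shell side can be checked and raised like this (a sketch; values depend on your system):

```shell
# Show the current soft and hard open-file limits
ulimit -Sn
ulimit -Hn
# Raise the soft limit up to the hard limit (always permitted),
# then start squid from this shell so it inherits the new limit
ulimit -n "$(ulimit -Hn)"
ulimit -Sn
```

On Debian the usual place to do this permanently is the init script or /etc/security/limits.conf, so the limit is in effect before Squid starts.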
On 02/04/2014 03:34 AM, Yury Paykov wrote:
> MY QUESTION IS - Is there a way to use CN information from server
> certificate which is retrieved with /server-first/ method? Can I construct
> an ACL rule based on it?
Yes, but only after the Peek and Splice project is finished. And, as
discussed on tha
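For later readers: once the peek-and-splice work shipped (Squid 3.5 onward, not the 3.3/3.4 versions discussed in this thread), matching on the server certificate's name became expressible roughly like this. A sketch of the newer syntax, with a placeholder domain; it was not available at the time of this exchange:

```
acl trusted_servers ssl::server_name .example.com
ssl_bump peek all
ssl_bump splice trusted_servers
ssl_bump bump all
```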
On 02/04/2014 05:36 AM, Юрий Пайков wrote:
> I found a feature request here
> http://wiki.squid-cache.org/Features/SslPeekAndSplice
> Its status says: "stalled due to lack of sponsor interest", though there
> are several commits with quite a few improvements. Does anybody know how
> "usable" that code is?
On 4/02/2014 11:28 p.m., Bhagwat Yadav wrote:
> Thanks Amos for quick response.
>
> Actually I need to make a decision in the code where the HTTP response is
> handled.
>
> Could you please guide me to the correct location in the code where I need
> to implement my check for the above processing?
If you
On Tue, 04 Feb 2014 19:17:51 +0600, Amos Jeffries
wrote:
On 4/02/2014 11:34 p.m., Yury Paykov wrote:
Hello, squid users, I'm currently having an issue trying to configure
That would be because the IP address is all Squid has to work with from
the TCP packet and the best domain that can be
On 4/02/2014 11:34 p.m., Yury Paykov wrote:
> Hello, squid users, I'm currently having an issue trying to configure Squid
> (using 3.3) to bypass a handful of sites.
> I mean, I want squid to NOT bump the connection.
>
> I employ the following in the config :
>
> acl https_proxy dstdomain www.goo
I use a pac file that points some domains to an ssl-bump proxy and
some to a non-ssl bump. works for me:
function FindProxyForURL(url, host) {
if (
dnsDomainIs(host, ".because.org.uk") ||
dnsDomainIs(host, ".bec.lan") ||
dnsDomainIs(host, ".n
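The PAC snippet above is cut off mid-list; a self-contained version along the same lines might look like the following. The proxy host:ports are invented placeholders (the original's third domain ends in a truncated ".n…" entry that is not recoverable, so it is omitted), and `dnsDomainIs()` is normally supplied by the browser, so a minimal stand-in is included only to make the sketch runnable:

```javascript
// Minimal stand-in for the browser-provided PAC helper dnsDomainIs():
// true when host ends with the given domain suffix.
function dnsDomainIs(host, domain) {
  return host.length >= domain.length &&
         host.substring(host.length - domain.length) === domain;
}

// Route listed domains to an ssl-bump proxy, everything else to a
// non-bump proxy. Proxy names below are hypothetical placeholders.
function FindProxyForURL(url, host) {
  if (
    dnsDomainIs(host, ".because.org.uk") ||
    dnsDomainIs(host, ".bec.lan")
  ) {
    return "PROXY bumpproxy.example.lan:3128";   // ssl-bump proxy
  }
  return "PROXY plainproxy.example.lan:3128";     // non-bump proxy
}
```

In a real PAC file you would drop the `dnsDomainIs` stand-in (the browser provides it) and keep only `FindProxyForURL`.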
On 5/02/2014 1:36 a.m., Юрий Пайков wrote:
> I found a feature request here
> http://wiki.squid-cache.org/Features/SslPeekAndSplice
> Its status says: "stalled due to lack of sponsor interest", though there
> are several commits with quite a few improvements. Does anybody know how
> "usable" that code is?
>
I found a feature request here
http://wiki.squid-cache.org/Features/SslPeekAndSplice
Its status says: "stalled due to lack of sponsor interest", though there are
several commits with quite a few improvements. Does anybody know how "usable"
that code is?
--
Sincerely Yours,
Yury Paykov, aka Cry
Hello, squid users, I'm currently having an issue trying to configure Squid
(using 3.3) to bypass a handful of sites.
I mean, I want squid to NOT bump the connection.
I employ the following in the config :
acl https_proxy dstdomain www.google.com
acl https_proxy dstdomain google.ru
ssl_bump non
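In the Squid 3.3 ssl_bump syntax, exempting listed domains while bumping the rest is typically written as below. This is a sketch reconstructed around the truncated config above; the completion of the cut-off directive is an assumption, not the poster's confirmed config:

```
acl https_proxy dstdomain www.google.com
acl https_proxy dstdomain google.ru
# Assumed completion: do not bump matching sites, bump the rest
ssl_bump none https_proxy
ssl_bump server-first all
```

Note that in an intercept setup Squid may only see the destination IP of the TLS connection, in which case dstdomain falls back to reverse DNS and may not match the names above.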
On Fri, Jan 10, 2014 at 10:38:30AM +1300, Amos Jeffries wrote:
> On 2014-01-10 01:17, Peter Benko wrote:
> >Hi squid users,
> >
> >I'm trying to upgrade squid from 3.3.11 to 3.4.2 on Debian 6. Both
> >squids are
> >compiled from source with the same compilation flags and their
> >config files are
>
Thanks Amos for the quick response.
Actually I need to make a decision in the code where the HTTP response is handled.
Could you please guide me to the correct location in the code where I need
to implement my check for the above processing?
TIA,
Bhagwat
On Tue, Feb 4, 2014 at 2:27 PM, Amos Jeffries wrote:
On 4/02/2014 9:37 p.m., Marko Cupać wrote:
> On Tue, 04 Feb 2014 10:29:59 +1300
> Amos Jeffries wrote:
>
>> Try clamav as an ICAP service used by Squid. Whitelisting is done
>> either in clamav or in the squid.conf adaptation_access rules. ICAP
>> also allows the scanner to "step out" of the tran
On 4/02/2014 6:40 a.m., Peter Warasin wrote:
> Hi guys
>
> OMG, found the issue. It was a stupid config mistake.
> For the records: Setup is squid on a bridge. I configured as default
> gateway the ip address of the bridge instead of the hop behind the bridge.
>
Maybe it was you or maybe not. Th
Hello Marko,
Please take a look at qlproxy (an ICAP server for Squid) - this might do what
you describe.
Best regards,
Raf
From: Marko Cupać
Sent: Tuesday, February 4, 2014 9:37 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squidclamav reg
On 4/02/2014 9:21 p.m., Bhagwat Yadav wrote:
> Thanks Amos,
>
> Actually in my deployment I have a URL filtering engine after squid.
> When anybody accesses any blocked URL, the filtering engine responds
> with a page showing that the accessed URL is blocked. I want to check
> that if the error cod
On Tue, 04 Feb 2014 10:29:59 +1300
Amos Jeffries wrote:
> Try clamav as an ICAP service used by Squid. Whitelisting is done
> either in clamav or in the squid.conf adaptation_access rules. ICAP
> also allows the scanner to "step out" of the transaction at any time
> it determines a pass result (l
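Wiring clamav in via ICAP (for example through c-icap with the squidclamav module) looks roughly like this in squid.conf. The service URL and port are the conventional c-icap defaults, used here as placeholder assumptions:

```
icap_enable on
# Scan HTTP responses before they are cached
icap_service clamav_scan respmod_precache icap://127.0.0.1:1344/squidclamav
# Whitelisting can be done here instead of in clamav, e.g.:
#   acl scan_exempt dstdomain .trusted.example
#   adaptation_access clamav_scan deny scan_exempt
adaptation_access clamav_scan allow all
```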
Thanks Amos,
Actually in my deployment I have a URL filtering engine after squid.
When anybody accesses any blocked URL, the filtering engine responds
with a page showing that the accessed URL is blocked. I want to check
that if the error code is that particular code in the case of a blocked URL,
then cu