Re: [squid-users] ssl bump and url_rewrite_program (like squidguard)

2015-11-04 Thread Edouard Gaulué

Hi Marcus,

Well, it's just a URL rewriter program. You can test it from the 
command line:

echo "URL" | /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf

Before I understood it was possible to specify the redirect status code, I got this:
#> echo 
"https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386? 
- - GET"|/usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
#> OK 
rewrite-url="https://proxyweb.X.X/cgi-bin/squidGuard-simple.cgi?clientaddr=-pipo===default=unknown=https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386?;


After a little change in the squidguard.conf, I get:
#> OK status=302 
url="https://proxyweb.echoppe.lan/cgi-bin/squidGuard-simple.cgi?clientaddr=-pipo===default=unknown=https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386?;
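
For reference, squidGuard lets you prefix a redirect target with an HTTP
status code, which is how the "status=302" form above is produced. A minimal
sketch of the relevant squidGuard.conf fragment (the acl name and CGI URL
below are placeholders, not the poster's actual configuration):

```
# Hypothetical squidGuard.conf fragment. "blocked" and the CGI URL are
# placeholders; the 302: prefix makes squidGuard emit a redirect with
# that status code instead of a plain rewrite. %u expands to the
# requested URL.
acl {
    default {
        pass !blocked all
        redirect 302:https://proxyweb.X.X/cgi-bin/squidGuard-simple.cgi?url=%u
    }
}
```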


It's not handled much better by my browser, which shows a "can't connect to 
https://ad.doubleclick.net" message. But I don't get the Squid message 
about http/https anymore.


It may be that url_rewrite_program runs after the peek-and-splice steps, 
leaving Squid in an unpredictable situation. Is there a way to control 
the order in which things happen in Squid?


Regards, EG


On 04/11/2015 14:10, Marcus Kool wrote:

You need to know what squidGuard actually sends to Squid.
squidGuard does not have a debug option for this, so you have to set
   debug_options ALL,1 61,9
in squid.conf to see what Squid receives.
I bet that what Squid receives is what it complains about:
the URL starts with 'https://http'

Marcus

On 11/04/2015 10:55 AM, Edouard Gaulué wrote:

On 04/11/2015 11:00, Amos Jeffries wrote:

On 4/11/2015 12:48 p.m., Marcus Kool wrote:
I suspect that the problem is that you redirect an HTTPS-based URL to an
HTTP URL and Squid does not like that.

Marcus
To give it a try in that direction, I now redirect to an https server. 
And I get:


The following error was encountered while trying to retrieve the URL: 
https://https/*


*Unable to determine IP address from host name "https"*

The DNS server returned:

Name Error: The domain name does not exist.


Moreover, this would sometimes lead an HTTP-based URL to an HTTPS URL, 
and I don't know how much Squid likes that either.


No, it is apparently the fact that the domain name being redirected 
to is "http".

As in: "http://http/something"

I can assure you my rewritten URL looks like 
"https://proxyweb.x.x/var1=&...".


And this confirms that ssl_bump parses this result and takes the part 
before the ":". To experiment, I also redirected to 
"proxyweb.x.x:443/var1=&..." (i.e. I removed the 
"https://" and added ":443") to force the parsing. Then I don't get this 
message anymore, but Mozilla goes crazy, expecting the ad.doubleclick.net 
certificate and getting the proxyweb.x.x one. And of course it 
breaks my SG configuration, so it can't be a production solution.

Which brings up the question of why you are using SG to block adverts?

squid.conf:
  acl ads dstdomain .doubleclick.net
  http_access deny ads

Amos


I don't use SG specifically to block adverts; I use it to block 90% 
of the web. Ads are just an example here; it could be so many 
other things...


I just want to try to make SG and ssl_bump live together.

Is it possible to have a rule like "if it has been rewritten, then 
don't try to ssl_bump"?
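
For what it's worth, there is no ssl_bump ACL meaning "this request was
rewritten": the bump decision is made on the CONNECT, before the URL
rewriter ever sees the decrypted requests. The closest approximation is to
exclude specific sites from bumping so the rewriter never handles them. A
sketch, assuming Squid 3.5's ssl::server_name ACL (.example.net is a
placeholder):

```
# Sketch only. Spliced sites are tunnelled untouched; everything else
# is bumped, and the decrypted requests then pass through
# url_rewrite_program as usual.
acl step1 at_step SslBump1
acl nobump ssl::server_name .example.net
ssl_bump peek step1
ssl_bump splice nobump
ssl_bump bump all
```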


Regards, EG


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users





Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread HackXBack
Dear Yuri,
Mr Amos is sure !!
Will we see a solution, dear Amos?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674378.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Is ntlm_fake_auth known to work?

2015-11-04 Thread Edouard Gaulué

Dear community,

ntlm_fake_auth looks to be the authentication helper I'm looking for, 
but trying to set it up as mentioned here doesn't work:

 * http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly
 * 
http://dsysadm.blogspot.fr/2012/03/my-book-live-with-squid-and-fakeauth.html


The last information I found is here: 
http://www.squid-cache.org/mail-archive/squid-users/201310/0087.html


Browser keeps asking for credentials. Is this a configuration matter or 
could it be deeper?


Regards, EG




Re: [squid-users] ssl_bump with cache_peer problem: Handshake fail after Client Hello.

2015-11-04 Thread Amos Jeffries
On 5/11/2015 3:47 p.m., maple wrote:
> sorry, I post my question again since last time I was not a subscriber yet.
> 
> 
> 
> Hi,
> 
> after a lot of google, I finally got this post, I met the exactly same
> problem as you, and can't use squid  to handle https traffic behind parent
> proxy. I also tried with proxychains + squid, but without luck, it didn't
> work, so could I ask your configuration about proxychains + squid ? this is
> mine:
> 
> for proxychains, it's very easy:
> strict_chain
> [ProxyList]
> http  127.0.0.1 12345 (for some reason, I must use ssh reverse tunnel to map
> my parent http proxy to my local port 12345)
> 
> for squid 3.4:

Please upgrade to the latest Squid.

SSL-Bump in particular is a feature that is taking part in an arms-race.
It changes, and it changes fast. Sometimes on a daily or weekly basis.

This particular use-case issue was resolved in the current Squid 3.5
and 4.x. But it does remain for traffic received by explicit proxies in
the middle of a chain of 3+ proxies.

Amos



Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-04 Thread Amos Jeffries
On 5/11/2015 4:30 p.m., Amos Jeffries wrote:
> On 5/11/2015 3:43 p.m., Prashanth Prabhu wrote:
>> Hi folks,
>>
>> I have a setup with ICAP running a custom server alongside Squid.
>> While testing file upload scenarios, I ran into a slow upload issue
>> and have narrowed it down to slowness between squid and icap,
>> especially in the request handling path.
> 
> 
> Hi Prashanth.
> 
> This is bugs 4353 and 4206. There is a workaround patch in bug 4353.

Sorry, here is the link 

Amos



[squid-users] Squid: Small packets and low performance between squid and icap

2015-11-04 Thread Prashanth Prabhu
Hi folks,

I have a setup with ICAP running a custom server alongside Squid.
While testing file upload scenarios, I ran into a slow upload issue
and have narrowed it down to slowness between squid and icap,
especially in the request handling path.

The slowness is down to extremely small packets sent by squid towards
the ICAP server. These packets are a few 10s of bytes in size. This
despite receiving large-sized packets from the client over the HTTPS
connection. The ICAP server responds with the ACK quickly enough, so
this isn't a case of small packets being generated because the server
isn't quick enough to read.

The debugs haven't shown any hints. It appears that there are times
when Squid allocates only small buffers to read from the HTTPS
connection. I see buffers of even a single byte being allocated during
message processing. I am new to the Squid code, so I might be reading
it all wrong.

I have pasted below a sample TCP dump (incomplete) showing the
behavior, resulting from a curl request. The request was generated on
the same node where squid and ICAP are resident.

Any hints/tips on what may be going wrong here? Appreciate any help in
this matter. Thank you.

Regards.
Prashanth


TCP dump:
Note that on my setup, squid is running on port 443.
Note also that the packets to-from both ports 443 and 1344 are
available in this sequence.

20:53:31.479166 IP localhost.56475 > localhost.https: Flags [S], seq
2166915705, win 32792, options [mss 16396,sackOK,TS val 3300947254 ecr
0,nop,wscale 12], length 0
20:53:31.479178 IP localhost.https > localhost.56475: Flags [S.], seq
728006122, ack 2166915706, win 32768, options [mss 16396,sackOK,TS val
3300947254 ecr 3300947254,nop,wscale 12], length 0
20:53:31.479186 IP localhost.56475 > localhost.https: Flags [.], ack
1, win 9, options [nop,nop,TS val 3300947254 ecr 3300947254], length 0
20:53:31.479308 IP localhost.56475 > localhost.https: Flags [P.], seq
1:221, ack 1, win 9, options [nop,nop,TS val 3300947254 ecr
3300947254], length 220
20:53:31.479317 IP localhost.https > localhost.56475: Flags [.], ack
221, win 9, options [nop,nop,TS val 3300947254 ecr 3300947254], length
0
20:53:31.483620 IP localhost.https > localhost.56475: Flags [P.], seq
1:40, ack 221, win 9, options [nop,nop,TS val 3300947255 ecr
3300947254], length 39
20:53:31.483636 IP localhost.56475 > localhost.https: Flags [.], ack
40, win 9, options [nop,nop,TS val 3300947255 ecr 3300947255], length
0
20:53:31.497413 IP localhost.56475 > localhost.https: Flags [P.], seq
221:534, ack 40, win 9, options [nop,nop,TS val 3300947259 ecr
3300947255], length 313
20:53:31.530394 IP localhost.https > localhost.56475: Flags [P.], seq
40:3153, ack 534, win 9, options [nop,nop,TS val 3300947267 ecr
3300947259], length 3113
20:53:31.531331 IP localhost.56475 > localhost.https: Flags [P.], seq
534:1108, ack 3153, win 10, options [nop,nop,TS val 3300947267 ecr
3300947267], length 574
20:53:31.549229 IP localhost.https > localhost.56475: Flags [P.], seq
3153:3204, ack 1108, win 9, options [nop,nop,TS val 3300947272 ecr
3300947267], length 51
20:53:31.549589 IP localhost.56475 > localhost.https: Flags [P.], seq
1108:1453, ack 3204, win 10, options [nop,nop,TS val 3300947272 ecr
3300947272], length 345

20:53:31.556517 IP localhost.46489 > localhost.1344: Flags [S], seq
2773005283, win 32792, options [mss 16396,sackOK,TS val 3300947274 ecr
0,nop,wscale 12], length 0
20:53:31.556527 IP localhost.1344 > localhost.46489: Flags [S.], seq
2778855454, ack 2773005284, win 32768, options [mss 16396,sackOK,TS
val 3300947274 ecr 3300947274,nop,wscale 12], length 0
20:53:31.556534 IP localhost.46489 > localhost.1344: Flags [.], ack 1,
win 9, options [nop,nop,TS val 3300947274 ecr 3300947274], length 0
20:53:31.559075 IP localhost.46489 > localhost.1344: Flags [P.], seq
1:602, ack 1, win 9, options [nop,nop,TS val 3300947274 ecr
3300947274], length 601
20:53:31.559092 IP localhost.1344 > localhost.46489: Flags [.], ack
602, win 9, options [nop,nop,TS val 3300947274 ecr 3300947274], length
0

20:53:31.588467 IP localhost.https > localhost.56475: Flags [.], ack
1453, win 10, options [nop,nop,TS val 3300947282 ecr 3300947272],
length 0
20:53:32.550821 IP localhost.56475 > localhost.https: Flags [.], seq
1453:17837, ack 3204, win 10, options [nop,nop,TS val 3300947522 ecr
3300947282], length 16384
20:53:32.550849 IP localhost.https > localhost.56475: Flags [.], ack
17837, win 12, options [nop,nop,TS val 3300947522 ecr 3300947522],
length 0
20:53:32.550856 IP localhost.56475 > localhost.https: Flags [P.], seq
17837:17866, ack 3204, win 10, options [nop,nop,TS val 3300947522 ecr
3300947282], length 29
20:53:32.550859 IP localhost.https > localhost.56475: Flags [.], ack
17866, win 12, options [nop,nop,TS val 3300947522 ecr 3300947522],
length 0
20:53:32.550916 IP localhost.56475 > localhost.https: Flags [.], seq
17866:34250, ack 3204, win 10, options [nop,nop,TS val 3300947522 ecr
3300947522], length 16384
20:53:32.550938 IP 

Re: [squid-users] ssl bump and url_rewrite_program (like squidguard)

2015-11-04 Thread Amos Jeffries
On 5/11/2015 11:55 a.m., Edouard Gaulué wrote:
> Hi Marcus,
> 
> Well that just an URL rewriter program. You can just test it from the
> command line :
> echo "URL" | /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
> 
> Before I understood it was possible to specify the redirect status code,
> I got this:
> #> echo
> "https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386?
> - - GET"|/usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
> #> OK
> rewrite-url="https://proxyweb.X.X/cgi-bin/squidGuard-simple.cgi?clientaddr=-pipo===default=unknown=https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386?;
> 
> 
> After a little change in the squidguard.conf, I get:
> #> OK status=302
> url="https://proxyweb.echoppe.lan/cgi-bin/squidGuard-simple.cgi?clientaddr=-pipo===default=unknown=https://ad.doubleclick.net/N4061/adi/com.ythome/_default;sz=970x250;tile=1;ssl=1;dc_yt=1;kbsg=HPFR151103;kga=-1;kgg=-1;klg=fr;kmyd=ad_creative_1;ytexp=9406852,9408210,9408502,9417689,9419444,9419802,9420440,9420473,9421645,9421711,9422141,9422865,9423510,9423563,9423789;ord=968558538238386?;
> 
> 
> It's not handled much better by my browser, which shows a "can't connect to
> https://ad.doubleclick.net" message. But I don't get the Squid message
> about http/https anymore.


What Squid version?
 There was a bug about the wrong SNI being sent to servers on bumped
traffic that got re-written. That got fixed in Squid-3.5.7 and
re-writers should have been fully working since then.

Note that CONNECT requests should not be re-written though. We don't
prevent it automatically because it is sometimes actually useful, but SG
cannot handle them correctly.
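
That exemption is not automatic, but it can be configured with the standard
CONNECT method ACL; a hedged squid.conf sketch (directive names as in
Squid 3.x):

```
# Keep CONNECT requests away from the rewriter so squidGuard only sees
# ordinary absolute URLs; bumped-then-decrypted requests are still
# rewritten as normal.
acl CONNECT method CONNECT
url_rewrite_access deny CONNECT
url_rewrite_access allow all
```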

> 
> It may be that rewrite_rule_program come after peek and splice stuff
> leading squid to an unpredictable situation. Is there a way to play on
> order things happen in squid?

Use debug_options to raise the amount of output each part of Squid
produces in cache.log.

A list of the sections can be found at
 - slightly
outdated, but not much changes with these. Or see the latest list in
doc/debug-sections.txt of the Squid sources.

Amos



[squid-users] Transparent HTTPS Squid proxy with upstream parent

2015-11-04 Thread Michael Ludvig

Hi

I've got a network without direct internet access where I have Squid 
3.5.9 as a transparent proxy listening on tcp/8080 for HTTP and on 
tcp/8443 for HTTPS (redirected via iptables from tcp/80 and tcp/443 
respectively).
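
The iptables redirect described here would typically look something like the
following (a sketch; the chain, interface, and REDIRECT vs DNAT choices
depend on whether the rules run on the router or on the proxy box itself):

```
# Assumed NAT rules for the interception described above.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-ports 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
```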


This Squid (proxy-test) doesn't have direct Internet access either, but 
it can talk to a parent Squid (proxy-upstream) in another part of the 
network that does have Internet access.


With HTTP it works well - the client makes a request to 
http://www.example.com (port 80), the router and iptables redirect the 
connection to Squid's port 8080, which intercepts the request and makes a 
request to the upstream proxy that serves it as usual. Here are the 
config options used:


http_port 8080 intercept
cache_peer proxy-upstream parent 3128 0 no-query
never_direct allow all

Now I wanted to do a similar thing for HTTPS:

https_port 8443 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert=/etc/squid/myCA.pem

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 5
ssl_bump bump all

Without cache_peer it works as expected (when I temporarily enable 
internet access), i.e. it auto-generates a fake SSL cert and makes a 
direct connection to the target.


However, with cache_peer it doesn't work. I get an HTTP/503 error from the proxy:

1446684476.877 0 proxy-client TAG_NONE/200 0 CONNECT 198.51.100.10:443 - 
HIER_NONE/- -
1446684476.970 3 proxy-client TCP_MISS/503 4309 GET 
https://secure.example.com/ - FIRSTUP_PARENT/proxy-upstream text/html


Alternatively, if I change the ssl_bump setup to this:

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

I get a crash message in cache.log:

2015/11/05 01:07:11 kid1| assertion failed: PeerConnector.cc:116: 
"peer->use_ssl"


When I use this proxy in non-transparent mode, i.e. configuring the 
proxy on the client as proxy-test:3128, it works:


1446684724.879 141 proxy-client TCP_TUNNEL/200 1886 CONNECT 
secure.example.com:443 - FIRSTUP_PARENT/proxy-upstream -


So I need to somehow turn the HTTPS request that lands on proxy-test into 
a CONNECT request that's forwarded to proxy-upstream.
If Squid can't do that, is there any other transparent-to-non-transparent 
proxy software that can?


Thanks!

Michael


Re: [squid-users] Is ntlm_fake_auth known to work?

2015-11-04 Thread Amos Jeffries
On 5/11/2015 11:21 a.m., Edouard Gaulué wrote:
> Dear community,
> 
> ntlm_fake_auth looks to be the authentication helper I'm looking for,
> but trying to set it up as mentioned here doesn't work:
>  * http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly
>  *
> http://dsysadm.blogspot.fr/2012/03/my-book-live-with-squid-and-fakeauth.html
> 
> 
> Last information found is there :
> http://www.squid-cache.org/mail-archive/squid-users/201310/0087.html
> 
> Browser keeps asking for credentials. Is this a configuration matter or
> could it be deeper?


Depends on what Squid version you are using. It was broken for a few
years. We fixed that issue a few months back and it is apparently
working now. The good news is you can grab the latest Squid code (v4 or
3.5), build it, and use the generated helper on older Squid installations
if you need to use an old Squid for some reason.

It also depends on what software you are trying to authenticate. NTLM
was deprecated by MS in 2006; they started disabling it by default in
their software from then on, and fully removed it from some products
around 2010.

It also depends on what security level you have your NTLM set to. Use
with NTLMv2-only clients may vary. It will definitely not work with
NTLMv2 with security extensions.

Amos



Re: [squid-users] Squid: Small packets and low performance between squid and icap

2015-11-04 Thread Amos Jeffries
On 5/11/2015 3:43 p.m., Prashanth Prabhu wrote:
> Hi folks,
> 
> I have a setup with ICAP running a custom server alongside Squid.
> While testing file upload scenarios, I ran into a slow upload issue
> and have narrowed it down to slowness between squid and icap,
> especially in the request handling path.


Hi Prashanth.

This is bugs 4353 and 4206. There is a workaround patch in bug 4353.

Amos



Re: [squid-users] ssl_bump with cache_peer problem: Handshake fail after Client Hello.

2015-11-04 Thread maple
hi Amos,

what exactly did you refer to by "these particular use-case issues"? Does it
mean that in 3.5+, cache_peer can be used together with ssl_bump smoothly? Or
that it resolves the integration problem between squid and proxychains?

anyway, I have already upgraded my squid to 3.5.9, but neither
cache_peer with ssl_bump nor squid with proxychains works.

for cache_peer used with ssl_bump:
http_access allow all
http_port 3128 intercept
https_port 3129 cert=/etc/squid/ssl_cert/squid.crt
key=/etc/squid/ssl_cert/private.key ssl-bump intercept
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
ssl_bump peek all
ssl_bump bump all
cache_peer 127.0.0.1 parent 12345 0 no-query no-digest default
never_direct allow all

for squid with proxychains:
http_access allow all
http_port 3128 intercept
https_port 3129 cert=/etc/squid/ssl_cert/squid.crt
key=/etc/squid/ssl_cert/private.key ssl-bump intercept
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
ssl_bump peek all
ssl_bump bump all
always_direct allow all

proxychains4 -f proxychains.conf squid -f /etc/squid/squid.conf

for proxychains + squid, it looks like proxychains still can't chain squid
up with my parent proxy.

anything I did wrong?

best regards.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ssl-bump-with-cache-peer-problem-Handshake-fail-after-Client-Hello-tp4672064p4674388.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread HackXBack
Loool Joe, really, are you going back to v2.7?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674362.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] ssl bump and url_rewrite_program (like squidguard)

2015-11-04 Thread Marcus Kool

You need to know what squidGuard actually sends to Squid.
squidGuard does not have a debug option for this, so you have to set
   debug_options ALL,1 61,9
in squid.conf to see what Squid receives.
I bet that what Squid receives is what it complains about:
the URL starts with 'https://http'

Marcus

On 11/04/2015 10:55 AM, Edouard Gaulué wrote:

On 04/11/2015 11:00, Amos Jeffries wrote:

On 4/11/2015 12:48 p.m., Marcus Kool wrote:

I suspect that the problem is that you redirect an HTTPS-based URL to an
HTTP URL and Squid does not like that.

Marcus

To give it a try in that direction, I now redirect to an https server. And I 
get:

The following error was encountered while trying to retrieve the URL: 
https://https/*

*Unable to determine IP address from host name "https"*

The DNS server returned:

Name Error: The domain name does not exist.


Moreover, this would sometimes lead an HTTP-based URL to an HTTPS URL, and I 
don't know how much Squid likes that either.


No, it is apparently the fact that the domain name being redirected to is
"http".

As in: "http://http/something"


I can assure you my rewritten URL looks like 
"https://proxyweb.x.x/var1=&...".

And this confirms that ssl_bump parses this result and takes the part before the ":". To experiment, I 
also redirected to "proxyweb.x.x:443/var1=&..." (i.e. I removed the "https://" 
and added ":443")
to force the parsing. Then I don't get this message anymore, but 
Mozilla goes crazy, expecting the ad.doubleclick.net certificate and getting the 
proxyweb.x.x one. And of course it
breaks my SG configuration, so it can't be a production solution.

Which brings up the question of why you are using SG to block adverts?

squid.conf:
  acl ads dstdomain .doubleclick.net
  http_access deny ads

Amos



I don't use SG specifically to block adverts; I use it to block 90% of the web. 
Ads are just an example here; it could be so many other things...

I just want to try to make SG and ssl_bump live together.

Is it possible to have a rule like "if it has been rewritten, then don't try to 
ssl_bump"?

Regards, EG






Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread joe
if you notice, it's not only dynamic; static img as well



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674371.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread HackXBack

>>I've been trying to figure out how it happens for the last year or so.
>>Apparently everybody (all three of you...) but not me can see it
happening.

>>The proxies I manage do not have it happen, and I can't seem to force it
>>to happen either unless I unmount or delete the HDD cache directories
>>while Squid is still running - which is when SWAPFAIL is the expected
>>working behaviour.

with a basic squid.conf and a fresh system, without any additions, SWAPFAIL
happens. sorry, you are wrong: this problem is not just from the three of us,
but a lot of squid users don't post here, and a lot of squid users I know are
having the same issue.
if it is not from squid, then what is it from?
ReiserFS? gdisk? ext4?
from what? what do you use? which type?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674369.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Amos Jeffries
On 4/11/2015 11:51 p.m., joe wrote:
>> I don't think the two are the same at all. 
> right, they are 2 different problems and they are very bad to have on
> production
> 
>> REFRESH is (in jo's case) an indicator that the private content is being 
>> checked before use. If the server behaves itself the answer would be 
>> UNMODIFIED/304 not MODIFIED/200 status, and the transfer size under 1KB.
>
> i tested almost all the revs...  from 4.02 down 

Down to what?

> here is what i found: squid
> becomes like a browser

Squid does not do graphics rendering.

> that's all; it does not save the object, as i showed in the header at the
> top of the topic, so min/max are useless and override/reload are just extra
> words to type. as i said, squid is supposed to be a cache server, not a browser

I'm wondering what you think a browser does? It has caching, you know, of
a type called a "private cache". Which is part of the problem with
caching private content in Squid.

Putting two caches in a row and fetching objects through them from one
client will only use the first cache for HITs and the second cache will
have low traffic of mostly MISS. Consider that your browser cache is
always the first cache, Squid can only be the second.

A shared cache (such as Squid) requires multiple clients to be fetching
from it to have much chance of any HITs. This is indicated
partially by the word "shared" in the classification name.

It is kind of funny that you are configuring Squid to force it to store
private content in ways only a browser cache is technically allowed to
do, then turning around and complaining about how Squid acts like a
browser. When you told it to do so.


> following the rules of google or the rfc is bad; it depends on us admins to
> control which objects need to be stored and for how long
> so that's for the REFRESH
> 

You seem to be expecting and demanding a cache to act like an archive.
Serving up responses stored from a snapshot of what the Internet used to
look like some time ago. The words are different because the behaviour
is different. Squid is a caching proxy, not an archive.

The rules in the RFC are how HTTP *works*. Not following them breaks
things, sometimes very very badly.

If you really want software that *doesn't* do HTTP, by all means go and
use something other than Squid. Squid is an HTTP proxy.



>> SWAPFAIL is errors loading the on-disk file where the object was stored 
>> in the cache. Unless you want to serve random bytes out to the client 
>> that failure will always have a MISS/200 or DENIED/500 result. 
>> In your case the bug is that you are having the disk I/O failure at 
>> all. jo is not. 
>  the swap fail is another very very bad bug. it happened yesterday: i got
> almost every object scrolling one after the other with swap fail

How did that happen?

I've been trying to figure out how it happens for the last year or so.
Apparently everybody (all three of you...) but not me can see it happening.

The proxies I manage do not have it happen, and I can't seem to force it
to happen either unless I unmount or delete the HDD cache directories
while Squid is still running - which is when SWAPFAIL is the expected
working behaviour.


Amos



Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread HackXBack
You are right, Yuri,
it's like the proxy is a bypassed system ..



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674361.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] ssl bump and url_rewrite_program (like squidguard)

2015-11-04 Thread Edouard Gaulué

On 04/11/2015 11:00, Amos Jeffries wrote:

On 4/11/2015 12:48 p.m., Marcus Kool wrote:

I suspect that the problem is that you redirect an HTTPS-based URL to an
HTTP URL and Squid does not like that.

Marcus
To give it a try in that direction, I now redirect to an https server. 
And I get:


The following error was encountered while trying to retrieve the URL: 
https://https/*


   *Unable to determine IP address from host name "https"*

The DNS server returned:

   Name Error: The domain name does not exist.


Moreover, this would sometimes lead an HTTP-based URL to an HTTPS URL 
and I don't know how much Squid likes that either.



No, it is apparently the fact that the domain name being redirected to is
"http".

As in: "http://http/something"

I can assure you my rewritten URL looks like 
"https://proxyweb.x.x/var1=&...".


And this confirms that ssl_bump parses this result and takes the part before 
the ":". To experiment, I also redirected to 
"proxyweb.x.x:443/var1=&..." (i.e. I removed the "https://" 
and added ":443") to force the parsing. Then I don't get this message 
anymore, but Mozilla goes crazy, expecting the ad.doubleclick.net 
certificate and getting the proxyweb.x.x one. And of course it 
breaks my SG configuration, so it can't be a production solution.

Which brings up the question of why you are using SG to block adverts?

squid.conf:
  acl ads dstdomain .doubleclick.net
  http_access deny ads

Amos


I don't use SG specifically to block adverts; I use it to block 90% of 
the web. Ads are just an example here; it could be so many 
other things...


I just want to try to make SG and ssl_bump live together.

Is it possible to have a rule like "if it has been rewritten, then don't 
try to ssl_bump"?


Regards, EG


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread joe
translated: it acts like a browser



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674365.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] how to cache youtube videos

2015-11-04 Thread joe
>>its being more complex and complicated but even so every security can be
hacked ..
100% :)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-cache-youtube-videos-tp4674341p4674366.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Amos Jeffries
On 5/11/2015 3:26 a.m., joe wrote:
> if you notice   not only   dynamic static  img as well 
> 

Yeah, and hits and misses. Basically all possible processing codes are
replaced with "SWAPFAIL_MISS".

Though I do notice that the other log entries are showing things that
could not possibly happen on a real SWAPFAIL. Such as HIT responses
happening, or 304 status codes.

So I am thinking at least most of these are a logging error, not a Squid
caching error.

Amos



Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Yuri Voinov



04.11.15 21:59, Amos Jeffries пишет:
> On 5/11/2015 3:26 a.m., joe wrote:
>> if you notice   not only   dynamic static  img as well
>>
>
> Yeah, and hits and misses. Basically all possible processing codes are
> replaced with "SWAPFAIL_MISS".
>
> Though I do notice that the other log entries are showing things that
> could not possibly happen on a real SWAPFAIL. Such as HIT responses
> happening, or 304 status codes.
>
> So I am thinking at least most of these are a logging error, not a Squid
> caching error.
Are you thinking, or are you sure?
>
>
> Amos
>




[squid-users] Banner Insertation

2015-11-04 Thread Fahimeh Ashrafy
Hello
I am a new member of this mailing list. Is it possible to
insert banners with c-icap? Could you please help me get started?

Thank you


[squid-users] caching issues - caching traffic from another proxy, and caching https traffic

2015-11-04 Thread John Smith
Hi,

I'm trying to improve our cache hit ratio.  We have a fairly complicated
layer of squid 3.10 proxies as previously detailed.

Problem 1.  Some of the traffic is identified by domain to go to another
layer of proxies.  I've called this proxy otherl1proxy in the squid.conf
below.  I've noticed that this traffic is not cached at all on either set
of proxies.   I'd like it cached at the top layer if possible because these
will be the largest servers with the largest caches.  I've removed
'originserver' from the squid.conf to test but that didn't seem to help.

Problem 2.  We are not caching any https traffic.  Is it possible to cache
https traffic, and if so how would one do it?  As many websites are moving
towards https for all traffic this lowers the effectiveness of cache...
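On problem 2: a proxy can only cache HTTPS responses if it decrypts the traffic, which means SslBump with a locally generated CA certificate that every client machine trusts. A hedged sketch of the listening side for Squid 3.5 (file paths are placeholders, and the build must include SSL/crtd support):

```
# squid.conf (sketch): requires a Squid built with SSL and ssl_crtd support
https_port 3129 intercept ssl-bump \
    cert=/etc/squid/myCA.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
ssl_bump bump all   # decrypted responses become cacheable like plain HTTP
```

Be aware this has real policy and security implications (clients see certificates signed by your CA, not the origin's), so it is usually limited to tightly controlled networks.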

squid.conf below

Thanks,
John

# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl httpacl port 80
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

negative_ttl 3600 seconds

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128 transparent
http_port 3130

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

# Uncomment and adjust the following to add a disk cache directory.
access_log /logs/squid/access.log
cache_log /logs/squid/cache.log

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

visible_hostname domain.com

# Add any of your own refresh_pattern entries above these.
refresh_pattern -i (robots\.txt)$ 60 40% 240
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 259200
refresh_pattern -i \.(gif|png|jpg|jpeg|ico|otf|woff|eot|ttf|svg)$ 10080 90%
259200 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern ^ftp:   1440  20% 10080
refresh_pattern ^gopher:  1440  0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0 20% 4320

cache_peer otherl1proxy parent 3128 0 no-query originserver no-digest
name=other_l1_proxy
acl sites_other_l1_proxy dstdomain .othersite.com
cache_peer_access other_l1_proxy allow sites_other_l1_proxy
cache_peer_access other_l1_proxy deny all

cache_peer httpelb  parent 80 0 no-query no-digest name=http_peer
cache_peer_access http_peer allow httpacl
cache_peer httpselb  parent 3129 0 no-query no-digest name=https_peer
cache_peer_access https_peer deny httpacl
never_direct allow all


Re: [squid-users] "NF getsockopt(SO_ORIGINAL_DST)" filling cache.log due to AWS ELB healthchecks

2015-11-04 Thread John Smith
Hi,

Just to close the loop on this issue, I worked offline with Amos.  He was
able to help me to eliminate all the noise from cache.log, but only for
http traffic, not both http and https traffic using the same port, so I
ended up using my original configuration.  Amos indicated that I would need
to have http and https on different ports to make this work properly, but I
can't make that change.

My end result is that the AWS ELB healthcheck traffic is now pointed to a
different port so it does not get logged as 'noise' in cache.log, but every
single squid request still gets logged as 'noise'.  Still quite an
improvement.

Thanks Amos and Eliezer for reaching out!
John

On Thu, Oct 29, 2015 at 2:31 PM, Amos Jeffries  wrote:

> On 30/10/2015 9:51 a.m., John Smith wrote:
> > The outbound traffic from the L1proxy instance in question connects to a
> > public IP / DNS name of an ELB in another AWS region.
> > We need to send some traffic to a different AWS region, thus the mess
> below:
> >
> > AWS instances (clients) ->
> > AWS internal ELB for L1 proxies -> AWS L1 proxy instances ->
> > a different AWS internal ELB for  L1 proxy cluster -> a different AWS L1
> > proxy instance (this is where we have the problem is with 'intercept or
> > transparent) ->
> > *One AWS region above, a different AWS region below*
> > AWS external (publicly addressable) ELB for L2 proxies in a different AWS
> > region -> AWS L2 proxy instances -> the Internet
> >
> > These AWS instances have both internal IPs and public IPs, and they don't
> > really know about their own public IPs.  That may be part or all of the
> > confusion.
> >
> > AWS ELBs are published as DNS names, they have multiple IPs, and we are
> > using DNS to connect to them.
>
> Okay. I suspect I know what is going on now. Before I confuse things any
> more by mentioning it...
>
> Could you send me a wireshark trace of a small bunch of the connections
> coming to Squid?  Along with the DNS name for the ELB the clients are
> connecting to.
>
> Amos
>
>


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Amos Jeffries
On 4/11/2015 11:38 a.m., joe wrote:
> at least you pay attention on "gvs" :) +1
> 
> lets forget about youtube :)  im just asking why TCP_REFRESH_MODIFIED if
> i don't or did not force reload
> ignore-private is working but ignore-reload is not .. supposed to prevent
> TCP_REFRESH_MODIFIED from happening, right ?

No, ignore-reload means ignore the Ctrl+F5 / reload button headers
received from clients (if any). It has nothing to do with the server
response details.

> 
> and talking about private control its used for public control now not just
> privet content
> 

The design model for Squid caching is "shared cache". Which means
strictly following RFCs Cache-Control:private content is not allowed to
be stored. Doing so in a shared cache design causes big problems.

We get around that danger in Squid by doing the revalidations. For most
traffic it should work fine and reduce bandwidth. For some it does not.
But even the worst case there is no different from the proxy having done
what was mandatory to do in the first place.
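The revalidation outcome described above can be summarized in a small illustrative mapping — this is a simplification for readers of the thread, not Squid's actual logging code:

```python
def revalidation_log_code(status):
    """Simplified sketch of how a cache revalidation shows up in
    access.log: a well-behaved origin answers 304 with no body, so
    the transfer stays tiny; a 200 means the full object was resent."""
    if status == 304:
        return "TCP_REFRESH_UNMODIFIED"  # cheap: headers only
    if status == 200:
        return "TCP_REFRESH_MODIFIED"    # full new body transferred
    return "TCP_REFRESH_FAIL_OLD"        # error; stale copy may be served

print(revalidation_log_code(304))  # TCP_REFRESH_UNMODIFIED
print(revalidation_log_code(200))  # TCP_REFRESH_MODIFIED
```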

Amos



[squid-users] Refresh_pattern % bug ?

2015-11-04 Thread FredB
Hello

With 3.5.10 I can't use a percent value of more than 100%.

Something like 

refresh_pattern -i  \.gif$  1440 500% 262800
refresh_pattern -i  \.ram   2880 1000% 262800

Unless the % is reduced below 100%, Squid terminates abnormally.

Is this a new limit or a bug?

Regards 
Fred 


Re: [squid-users] Refresh_pattern % bug ?

2015-11-04 Thread Amos Jeffries
On 4/11/2015 10:45 p.m., FredB wrote:
> Hello
> 
> With 3.5.10 I can't add a value with more than 100 %
> 
> Something like 
> 
> refresh_pattern -i  \.gif$  1440 500% 262800
> refresh_pattern -i  \.ram   2880 1000% 262800
> 
> The % should be reduced below 100% - Squid Terminated abnormally - 
> 
> This is a new limit or a bug ?

Config parser bug I think. That is one place where % is legitimately
much higher than 100%.
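For context on why values above 100% are legitimate: the percent field scales the object's age at the time Squid stored it (the last-modified factor); it is not a fraction of anything capped at 100. A simplified illustrative sketch, not Squid's exact algorithm:

```python
def is_fresh(age_s, lm_age_s, min_minutes, pct, max_minutes):
    """Simplified refresh_pattern heuristic: fresh below MIN, stale
    above MAX, otherwise fresh while the cached copy's age is under
    PCT% of how old the object already was (per Last-Modified) when
    Squid stored it."""
    if age_s <= min_minutes * 60:
        return True
    if age_s > max_minutes * 60:
        return False
    return age_s < lm_age_s * (pct / 100.0)

# With pct=500 and MIN=0, an object that was 1 hour old when cached
# stays fresh for up to 5 hours:
print(is_fresh(4 * 3600, 3600, 0, 500, 262800))  # True  (4h < 5h)
print(is_fresh(6 * 3600, 3600, 0, 500, 262800))  # False (6h > 5h)
```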

Amos



Re: [squid-users] ssl bump and url_rewrite_program (like squidguard)

2015-11-04 Thread Amos Jeffries
On 4/11/2015 12:48 p.m., Marcus Kool wrote:
> I suspect that the problem is that you redirect an HTTPS-based URL to an
> HTTP URL and Squid does not like that.
> 
> Marcus
> 

No it is apparently the fact that the domain name being redirected to is
"http".

As in: "http://http/something"


Which brings up the question of why you are using SG to block adverts?

squid.conf:
 acl ads dstdomain .doubleclick.net
 http_access deny ads

Amos



Re: [squid-users] Banner Insertation

2015-11-04 Thread Amos Jeffries
On 4/11/2015 10:31 p.m., Fahimeh Ashrafy wrote:
> Hello
> I am new member of this mailing list. is it possible to
> insert banner by c-icap? could you please help how to start?

What is the law in your country about taking somebody else's copyrighted
property, altering it, then re-publishing it without their permission?

If you have to inject advertising into your clients data streams the
best practice method is to carefully select HTML requests and redirect
them with 302 to a different URL where your advertising is displayed.
Then allow them to continue their regular use after it has been visited.

That can be done in squid.conf with just some ACLs for request selection
and the provided session helper in active mode.
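The squid.conf pattern being described is essentially the wiki's splash-page recipe. A hedged sketch only — the banner URL and helper path are placeholders, and the session helper's options vary between versions, so check its man page:

```
# squid.conf (sketch): 302 each new client session to the banner page once
external_acl_type session ttl=60 %SRC \
    /usr/lib/squid/ext_session_acl -a -T 3600
acl seen_banner external session
http_access deny !seen_banner
# %u carries the originally requested URL so the banner page
# can send the client onward afterwards
deny_info 302:http://banners.example.lan/ad.html?back=%u seen_banner
```

In active mode (-a) the helper only marks a session as started when explicitly told to, which is what lets the banner page control when browsing resumes.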

Also, this is the Squid mailing list, not a C-ICAP mailing list or help forum.

Amos



Re: [squid-users] how to get c-icap url category from squid access log

2015-11-04 Thread Christos Tsantilas

On 11/04/2015 08:34 AM, Murat K wrote:

Hi guys,

please can someone tell me if it is possible to send url category info
from c-icap to squid access log?


The ICAP response headers can be logged using the "adapt::
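The reply above is cut off in the archive; the mechanism it refers to is Squid's adaptation logformat codes. A hedged sketch, assuming the ICAP service copies the category into a response header (the header name is up to the service) and that the `%adapt::<last_h` code from the Squid logformat documentation is available in your build (ICAP-enabled Squid 3.2+):

```
# squid.conf (sketch): append the last ICAP response header to access.log
logformat withcat %ts.%03tu %>a %Ss/%03>Hs %rm %ru %adapt::<last_h
access_log /var/log/squid/access.log withcat
```

Whether the category actually appears depends on the c-icap module exposing it in an ICAP response header at all; verify with your module's documentation.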

Re: [squid-users] Refresh_pattern % bug ?

2015-11-04 Thread FredB

> Config parser bug I think. That is one place where % is legitimately
> much higher than 100%.
> 
> Amos
> 

Hi 

Shall I open a bug?




Re: [squid-users] Refresh_pattern % bug ?

2015-11-04 Thread Amos Jeffries
On 4/11/2015 10:54 p.m., FredB wrote:
> 
>> Config parser bug I think. That is one place where % is legitimately
>> much higher than 100%.
>>
>> Amos
>>
> 
> Hi 
> 
> I open a bug ?
> 

If you would. I'm a little too busy to do it right away.

Amos



Re: [squid-users] how to cache youtube videos

2015-11-04 Thread Yuri Voinov



04.11.15 16:07, Amos Jeffries пишет:
> On 4/11/2015 6:40 p.m., linux admin wrote:
>> Can anyone please tell me how to cache youtube videos.??
>>
>
> Every time anyone publishes that info YT mysteriously change their
> system so it gets even more complex and difficult to cache.
>
> There are some closed source but freeware tools that can be used with
> Squid like squidvideosbooster to help caching.
It's fake, Amos. YT cannot be cached now except with a very special and very
complex rewriter. No solution can really cache YT anymore. I explain why in
the Squid wiki.
>
>
> Amos
>




Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Amos Jeffries
On 4/11/2015 11:53 a.m., HackXBack wrote:
What joe is going to tell us is that his HIT ratio decreased and he is seeing
TCP_REFRESH_MODIFIED instead of TCP_HIT since he moved to v4.
The same problem also occurs with TCP_SWAPFAIL_MISS.

I don't think the two are the same at all.

REFRESH is (in joe's case) an indicator that the private content is being
checked before use. If the server behaves itself the answer would be
UNMODIFIED/304, not MODIFIED/200 status, and the transfer size under 1KB.

SWAPFAIL is errors loading the on-disk file where the object was stored
in the cache. Unless you want to serve random bytes out to the client
that failure will always have a MISS/200 or DENIED/500 result.
 In your case the bug is that you are having the disk I/O failure at
all. joe is not.


> With v3.4 these strange problems do not exist ..
> 

3.4 does not cache several of the types of objects that 3.5+ do.

We are still in the process of converting Squid from HTTP/1.0 behaviour
to HTTP/1.1 behaviour. It is going slowly, one step at a time.

Amos



Re: [squid-users] how to cache youtube videos

2015-11-04 Thread HackXBack
FredT is right;
some people can't cache YouTube, but some can do it.
It has become more complex and complicated, but even so, every security
measure can be hacked ..



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-cache-youtube-videos-tp4674341p4674356.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread joe
> I don't think the two are the same at all.
Right, they are two different problems, and both are very bad to have in
production.

> REFRESH is (in joe's case) an indicator that the private content is being
> checked before use. If the server behaves itself the answer would be
> UNMODIFIED/304, not MODIFIED/200 status, and the transfer size under 1KB.
I tested almost all the revisions from 4.0.2 down, and here is what I found:
Squid behaves like a browser, that's all; it does not save the object, as I
showed with the headers at the top of this topic. So min/max are useless,
and override-reload is just an extra word to type. As I said, Squid is
supposed to be a cache server, not a browser. Blindly following Google's
rules or the RFC is bad; it should be up to us admins to control which
objects get stored, and for how long.
So much for the REFRESH.

> SWAPFAIL is errors loading the on-disk file where the object was stored
> in the cache. Unless you want to serve random bytes out to the client
> that failure will always have a MISS/200 or DENIED/500 result.
> In your case the bug is that you are having the disk I/O failure at
> all. joe is not.
The swap fail is another very, very bad bug. Just yesterday I got swap fails
on almost every object, scrolling past one after the other.

So I'm thinking of going back to v2.7, which caches better for me. Thanks.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-REFRESH-MODIFIED-tp4674325p4674358.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Amos Jeffries
On 4/11/2015 11:35 p.m., HackXBack wrote:
> and how we can cache Control:private content ?
> must be a choice ?

Yes. By adding the ignore-private refresh_pattern control.

Though be aware it still does very bad things to data in most Squid 3.x
versions for some configs. It is only fully safe in v4+.

Amos



Re: [squid-users] TCP_REFRESH_MODIFIED

2015-11-04 Thread Yuri Voinov



04.11.15 17:05, Amos Jeffries пишет:
> On 4/11/2015 11:35 p.m., HackXBack wrote:
>> and how we can cache Control:private content ?
>> must be a choice ?
>
> Yes. By adding the ignore-private refresh_pattern control.
>
> Though be aware it still does very bad things to data in most Squid 3.x
> versions for some configs. It is only fully safe in v4+.
v4+ is too cautious and dull: it refuses to cache anything that gives it even
the slightest doubt. Moreover, it does not allow the administrator to give it
direct instructions.

Which leads to an unprecedented reduction in the caching ratio.
>
>
> Amos
>

