Re: [squid-users] time based range_offset_limit

2016-07-12 Thread Alex Rousskov
On 07/12/2016 03:56 PM, Heiler Bemerguy wrote:

> (if the first one has already received at least the reply header from
> the server)

That is not collapsed forwarding. That is regular caching. Collapsed
forwarding covers the time range from the first parsed request header
until the corresponding response header is parsed.


> (without using a Range: header).

That's your squid.conf customization, I presume.


> After the server starts sending the data, Squid will correctly forward
> the traffic to the clients. Each one will get the range it asked for.

That's expected.


> That's why I don't understand why it does not work in a REAL
> environment.

Many things can go wrong -- the real requests may require collapsed
forwarding that you do not test, the real requests may have no-cache,
the real response may not be cachable, or there may be some Range
handling bug that your test scripts do not tickle (e.g., they request
ranges that are always close to each other and are always available at
the same time).

You need to figure out the difference between your test and the real
world. Comparing test and real access.log might help. If that does not
help, you can try capturing incoming/outgoing traffic.

You can also try your test script against the real Squid. Does it still
work?

Beyond that, you would have to do detailed traffic analysis (packet
captures; ALL,2; ALL,9).
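
For example, a rough sketch (adjust addresses to your setup; ALL,9
output is huge, so enable it only briefly):

# squid.conf: raise cache.log verbosity
debug_options ALL,2

# capture the Squid<->origin traffic while reproducing the problem
tcpdump -s 0 -w origin-side.pcap host 201.30.251.27

# or toggle full debugging at runtime, without editing squid.conf
squid -k debug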

Alex.



Re: [squid-users] time based range_offset_limit

2016-07-12 Thread Heiler Bemerguy


On 12/07/2016 18:30, Alex Rousskov wrote:


That said, some special clients do send concurrent *Range* requests for
the same URL! If Squid receives N concurrent Range requests for the same
URL, I believe Squid will try to collapse them, possibly with disastrous
results, especially if Squid does not strip the Range header when
forwarding a request. Somebody should test that and [at least] file a
bug report if there is indeed a Range request collapsing bug.



LOL Alex, I thought of that disaster too!! But in my tests it works
flawlessly.


Squid will accept all ranged requests from clients, with the second and
subsequent ones being HITs (if the first one has already received at
least the reply header from the server), and it will open only a single
connection to the server (without using a Range: header).


After the server starts sending the data, Squid will correctly forward
the traffic to the clients. Each one will get the range it asked for.


That's why I don't understand why it does not work in a REAL
environment.


--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751



Re: [squid-users] time based range_offset_limit

2016-07-12 Thread Heiler Bemerguy


10.1.10.9 is the proxy's IP, dude.

And it is connecting to the same server multiple times because a client
is doing a RANGE download (Windows Update, to be exact).


All GETs look like this:

GET http://au.v4.download.windowsupdate.com/d/msdownload/update/software/defu/2016/06/am_base_7684a3445029744f69529e465ac573f76bd68144.exe HTTP/1.1
Accept: */*
Accept-Encoding: identity
If-Unmodified-Since: Wed, 29 Jun 2016 22:14:50 GMT
Range: bytes=103416560-104234280
User-Agent: Microsoft BITS/7.8
Proxy-Connection: Keep-Alive
Host: au.v4.download.windowsupdate.com

But the Range increases every ~2 seconds, and Squid creates a new
connection to the server every time.


Anyway... I've been trying to figure this out by myself for months now.

In my "test lab" it seems collapsed_forwarding and range requests are
working together... but on this production server I always get this
behaviour: lots of connections to the same server, all fetching the
same file.


How do I get rid of this without giving up on caching? lol


--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751

On 12/07/2016 17:08, joe wrote:

root@proxy:~# netstat -n |grep 201.30.251.27 |grep ESTAB
tcp   243802      0  10.1.10.9:27788  201.30.251.27:80  ESTABELECIDA
tcp        0      0  10.1.10.9:15343  201.30.251.27:80  ESTABELECIDA
tcp    14480      0  10.1.10.9:32548  201.30.251.27:80  ESTABELECIDA
tcp        0      0  10.1.10.9:25426  201.30.251.27:80  ESTABELECIDA
tcp    48322      0  10.1.10.9:8560   201.30.251.27:80  ESTABELECIDA
tcp   329234      0  10.1.10.9:54205  201.30.251.27:80  ESTABELECIDA
tcp        0      0  10.1.10.9:1656   201.30.251.27:80  ESTABELECIDA
tcp      993      0  10.1.10.9:50820  201.30.251.27:80  ESTABELECIDA
tcp   330227      0  10.1.10.9:56519  201.30.251.27:80  ESTABELECIDA

10.1.10.9 is one client IP; I can't tell, he might be downloading more
than one file.
With range_offset_limit -1 (or none), a ranged request is forced into a
full, non-range download.


Look at my conf:
acl fullDLext urlpath_regex -i
\.(exe|ms(i|u|p)|deb|cab|rpm|bin|zip|ax|r(a|p)m|app|pkg|mar|nzp|dat|iop|xpi|dmg|dds|thor|nar|gpf|pdf|appx|appxbundle|esd)

ouchhh, too much unless you have plenty of bandwidth


acl fullDLurl url_regex -i \.microsoft\.com\/filestreamingservice
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 100
range_offset_limit -1 fullDLext
range_offset_limit -1 fullDLurl

Better to change that to one control; I don't know if that would be a
bad idea.



collapsed_forwarding on

This is good if you have other clients downloading the same file at the
same time. It has nothing to do with multiple connections to the same
IP; it only helps by saving bandwidth. For example, if you have 10
clients downloading the same file at the same time, it becomes one
download instead of 10. I don't know if I'm mistaken; Amos or someone
else will answer that.







Re: [squid-users] time based range_offset_limit

2016-07-12 Thread Alex Rousskov
On 07/12/2016 02:08 PM, joe wrote:

>>> collapsed_forwarding on

> This is good if you have other clients downloading the same file at the
> same time. It has nothing to do with multiple connections to the same
> IP; it only helps by saving bandwidth. For example, if you have 10
> clients downloading the same file at the same time, it becomes one
> download instead of 10.

collapsed_forwarding does not look at client IPs or even connections.
The feature collapses requests based on request URLs. If Squid gets N
more-or-less simultaneous requests for the same URL, then N-1 of those
requests will be "collapsed", subject to certain other conditions(*).

It is rare for the same client to send N concurrent requests for the
same URL, so, in practice, collapsed_forwarding is mostly about multiple
clients a.k.a. "flash crowds".

That said, some special clients do send concurrent *Range* requests for
the same URL! If Squid receives N concurrent Range requests for the same
URL, I believe Squid will try to collapse them, possibly with disastrous
results, especially if Squid does not strip the Range header when
forwarding a request. Somebody should test that and [at least] file a
bug report if there is indeed a Range request collapsing bug.
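
If somebody wants a starting point, here is a rough sketch (the proxy
address and URL are placeholders):

# two nearly-simultaneous Range requests for the same URL via the proxy
curl -s -o /dev/null -x http://127.0.0.1:3128 -r 0-1048575 http://example.com/big.bin &
curl -s -o /dev/null -x http://127.0.0.1:3128 -r 1048576-2097151 http://example.com/big.bin &
wait
# then check access.log: was one request collapsed onto the other, and
# did each client receive exactly the bytes it asked for? Vary the
# offsets and timing too; ranges that are far apart and arrive at
# different times exercise more code paths.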


HTH,

Alex.
P.S. (*) There are several special conditions that determine whether
Squid will collapse a request. For example, Squid will not collapse a
request if Squid thinks that the future response will not be cachable.



Re: [squid-users] url_rewrite_program shows IP addresses instead of domain name when rewriting SSL/HTTPS

2016-07-12 Thread Alex Rousskov
On 07/12/2016 11:46 AM, Moataz Elmasry wrote:
> All that was needed was to peek at the important domains in step2, in
> order not to cause them harm, and bump everything else in step3. In
> this case I'm able to read the DNS names in the redirect script and
> block them accordingly.

> ssl_bump peek step1
> ssl_bump peek step2 https_sites
> ssl_bump bump step3 !https_sites


That broken configuration does not tell Squid what to do with:

* non-https_sites during step2
* https_sites during step3

If that configuration actually bumps some traffic, it is due to an
unknown Squid bug! Technically, IIRC, Squid should splice everything
given the configuration above (but it would be wrong to rely on that).


If you want Squid to splice https_sites (as determined during step2) and
bump everything else, then you can try something like this:

  ssl_bump peek step1
  ssl_bump splice https_sites
  ssl_bump bump all
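
Annotated, with my assumptions in the comments:

  ssl_bump peek step1          # step1: read the TLS client Hello to learn the SNI
  ssl_bump splice https_sites  # step2: SNI matches, leave the tunnel untouched
  ssl_bump bump all            # otherwise, bump the connection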

In any case, make sure you tell Squid what to do at every step. Do not
leave Squid guessing because its guess is likely to mismatch your needs.


> The SslPeekAndSplice wiki page needs serious rework though, as much of
> the stuff discussed here is not explained on the page, which makes life
> really hard for noobs like me. Is there a way to contribute back a
> little bit by reworking that wiki page? I'll try to write a small post
> about SslPeekAndSplice in the next few days.

You are more than welcome to suggest documentation fixes and
improvements! SslPeekAndSplice page authors probably do not know what is
missing or wrong and, without others' help, the page may remain as it is now.

That said, if you keep adding "all" to the list of ACLs, then you may
want to start by fixing the wiki page that documents how to use ACLs.
That benign but distracting mistake has nothing to do with SslBump.
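
For example, a rule like

  ssl_bump peek step1 all

says the same thing as plain "ssl_bump peek step1"; the trailing "all"
matches everything and only adds noise.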


Thank you,

Alex.



Re: [squid-users] time based range_offset_limit

2016-07-12 Thread joe
>>root@proxy:~# netstat -n |grep 201.30.251.27 |grep ESTAB
>>tcp   243802      0  10.1.10.9:27788  201.30.251.27:80  ESTABELECIDA
>>tcp        0      0  10.1.10.9:15343  201.30.251.27:80  ESTABELECIDA
>>tcp    14480      0  10.1.10.9:32548  201.30.251.27:80  ESTABELECIDA
>>tcp        0      0  10.1.10.9:25426  201.30.251.27:80  ESTABELECIDA
>>tcp    48322      0  10.1.10.9:8560   201.30.251.27:80  ESTABELECIDA
>>tcp   329234      0  10.1.10.9:54205  201.30.251.27:80  ESTABELECIDA
>>tcp        0      0  10.1.10.9:1656   201.30.251.27:80  ESTABELECIDA
>>tcp      993      0  10.1.10.9:50820  201.30.251.27:80  ESTABELECIDA
>>tcp   330227      0  10.1.10.9:56519  201.30.251.27:80  ESTABELECIDA

10.1.10.9 is one client IP; I can't tell, he might be downloading more
than one file.
With range_offset_limit -1 (or none), a ranged request is forced into a
full, non-range download.

>>Look at my conf:

>>acl fullDLext urlpath_regex -i 
>>\.(exe|ms(i|u|p)|deb|cab|rpm|bin|zip|ax|r(a|p)m|app|pkg|mar|nzp|dat|iop|xpi|dmg|dds|thor|nar|gpf|pdf|appx|appxbundle|esd)
ouchhh, too much unless you have plenty of bandwidth

>>acl fullDLurl url_regex -i \.microsoft\.com\/filestreamingservice

>>quick_abort_min 0 KB
>>quick_abort_max 0 KB
>>quick_abort_pct 100

>>range_offset_limit -1 fullDLext
>>range_offset_limit -1 fullDLurl
Better to change that to one control; I don't know if that would be a
bad idea.


>>collapsed_forwarding on
This is good if you have other clients downloading the same file at the
same time. It has nothing to do with multiple connections to the same
IP; it only helps by saving bandwidth. For example, if you have 10
clients downloading the same file at the same time, it becomes one
download instead of 10. I don't know if I'm mistaken; Amos or someone
else will answer that.







Re: [squid-users] time based range_offset_limit

2016-07-12 Thread Heiler Bemerguy


Hello Joe, I've tried doing that but it seems collapsed_forwarding won't
work for Windows updates for some reason... it's really annoying.


Look at the number of simultaneous connections to the SAME IP... this
shouldn't happen, right?


root@proxy:~# netstat -n |grep 201.30.251.27 |grep ESTAB
tcp   243802      0  10.1.10.9:27788  201.30.251.27:80  ESTABELECIDA
tcp        0      0  10.1.10.9:15343  201.30.251.27:80  ESTABELECIDA
tcp    14480      0  10.1.10.9:32548  201.30.251.27:80  ESTABELECIDA
tcp        0      0  10.1.10.9:25426  201.30.251.27:80  ESTABELECIDA
tcp    48322      0  10.1.10.9:8560   201.30.251.27:80  ESTABELECIDA
tcp   329234      0  10.1.10.9:54205  201.30.251.27:80  ESTABELECIDA
tcp        0      0  10.1.10.9:1656   201.30.251.27:80  ESTABELECIDA
tcp      993      0  10.1.10.9:50820  201.30.251.27:80  ESTABELECIDA
tcp   330227      0  10.1.10.9:56519  201.30.251.27:80  ESTABELECIDA


Look at my conf:

acl fullDLext urlpath_regex -i 
\.(exe|ms(i|u|p)|deb|cab|rpm|bin|zip|ax|r(a|p)m|app|pkg|mar|nzp|dat|iop|xpi|dmg|dds|thor|nar|gpf|pdf|appx|appxbundle|esd)

acl fullDLurl url_regex -i \.microsoft\.com\/filestreamingservice

quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 100

range_offset_limit -1 fullDLext
range_offset_limit -1 fullDLurl
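
# note for the refresh_pattern rules below: min and max are in minutes,
# so 483840 minutes is roughly 336 days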

refresh_pattern -i (microsoft|windowsupdate)\.com.*\.(cab|exe|ms(i|u|f)|dat|zip|[ap]sf|appx|appxbundle|esd) 483840 100% 483840 override-expire ignore-reload ignore-must-revalidate ignore-private ignore-no-store store-stale
refresh_pattern -i \.microsoft.com\/filestreamingservice 483840 80% 483840 override-expire ignore-private ignore-no-store ignore-reload ignore-must-revalidate store-stale


reload_into_ims on

connect_retries 3

client_idle_pconn_timeout 30 seconds

client_persistent_connections on
server_persistent_connections on
pipeline_prefetch 10

collapsed_forwarding on
detect_broken_pconn on
negative_ttl 30 seconds
negative_dns_ttl 2 minutes
incoming_dns_average 8
incoming_tcp_average 16

connect_timeout 60 seconds
request_timeout 60 seconds
read_timeout 60 seconds


--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


On 12/07/2016 13:47, joe wrote:

acl download_until_end_by_ip dst 13.107.4.50
acl freetimes time 03:00-08:00
range_offset_limit none download_until_end_by_ip freetimes

When you have simultaneous connections on one large file update,

try that:

collapsed_forwarding on <- enabling that will help when there are lots
of connections to one file, so they can share it.

acl range_list_path urlpath_regex \.(mar|msp|esd|pkg\?)   

Re: [squid-users] url_rewrite_program shows IP addresses instead of domain name when rewriting SSL/HTTPS

2016-07-12 Thread Moataz Elmasry
Hi Amos,

I kinda solved the problem (Thanks to you!!!)
All that was needed was to peek at the important domains in step2, in
order not to cause them harm, and bump everything else in step3. In this
case I'm able to read the DNS names in the redirect script and block
them accordingly.

Here is the relevant part:
acl http_sites dstdomain play.google.com mydomain.com
acl https_sites ssl::server_name play.google.com mydomain.com

ssl_bump peek step1 all
ssl_bump peek step2 https_sites
ssl_bump bump step3 all !https_sites  # http_sites won't be bumped anyway, but just to be sure
url_rewrite_access allow all !http_sites

Of course I'm still not able to rewrite HTTPS addresses as discussed,
but this is a different story I guess.

The SslPeekAndSplice wiki page needs serious rework though, as much of
the stuff discussed here is not explained on the page, which makes life
really hard for noobs like me. Is there a way to contribute back a
little bit by reworking that wiki page? I'll try to write a small post
about SslPeekAndSplice in the next few days.

Many thanks again for the great help. Really appreciate it.

Cheers,
Moataz

On Sun, Jul 10, 2016 at 10:42 AM, Amos Jeffries wrote:

> On 10/07/2016 8:13 p.m., Moataz Elmasry wrote:
> > Hi Amos,
> >
> > Thanks I really learnt alot from your previous email.
> >
> > going on..
> >
> > On Fri, Jul 8, 2016 at 1:18 PM, Amos Jeffries 
> wrote:
> >
> >> On 8/07/2016 10:20 p.m., Moataz Elmasry wrote:
> >>> Hi Amos,
> >>>
> >>> Do you know any of those 'exceptional' redirectors that can handle
> https?
> >>>
> >>
> >> I know they exist, some of my clients wrote and use some. But I can't
> >> point you to any, if that's what you are asking.
> >>
> >> I can say though there are two things that can reliably be done with a
> >> CONNECT request by a URL-rewriter:
> >>
> >> 1) return ERR, explicitly telling Squid not to re-write those tunnels.
> >>
> >> This trades helper complexity for simpler squid.conf ACLs. Both simply
> >> telling Squid not to re-write.
> >>
> >> 2) re-write the URI from domain:port to be IP:port.
> >>
> > Funny thing is, when I'm getting the URL in redirect.bash, I'm not
> > getting an IP. I probed and logged many fields as described on the
> > logformat page, and I usually get either the IP or the DNS name inside
> > redirect.bash, but not both.
> >
> >>
> >> If the IP it gets re-written to is the one the client was going to, this
> >> is in effect telling Squid not to do DNS lookup when figuring out where
> >> to send it. That can be useful when you don't want Squid to use
> >> alternative IPs it might find via DNS.
> >>  (NP: This won't affect the host verify checking as it happens too late.
> >> This is actually just a fancy way to enforce the ORIGINAL_DST pass-thru
> >> behaviour based on more complex things than host-verify detects)
> >>
> >>
> >>> Ok. So let's ignore the redirection for now and just try to whitelist
> >> some
> >>> https urls and deny anything else.
> >>>
> >>> Now I'm trying to peek and bump the connection, just to obtain the
> >>> servername without causing much harm, but the https sites are now
> either
> >>> loading infinitely, or loading successfully, where they should have
> been
> >>> blacklisted, assuming the https unwrapping happened successfully. Could
> >> you
> >>> please have a look and tell me what's wrong with the following
> >>> configuration? BTW after playing with ssl_bump I realized that I didn't
> >>> really understand the steps(1,2,3) as well as when to peek/bump/stare
> >>> etc. The squid.conf contains some comments and questions.
> >>>
> >>> squid.conf
> >>>
> >>> "
> >>> acl http_sites dstdomain play.google.com mydomain.com
> >>> acl https_sites ssl::server_name play.google.com mydomain.com
> >>>
> >>> #match any url where the servername in the SNI is not empty
> >>> acl haveServerName ssl::server_name_regex .
> >>>
> >>>
> >>> http_access allow http_sites
> >>> http_access allow https_sites #My expectation is that this rule is
> >> matched
> >>> when the https connection has been unwrapped
> >>
> >> On HTTP traffic the "http_sites" ACL will match the URL domain.
> >>
> >> On HTTPS traffic without (or before finding) the SNI neither ACL will
> >> match. Because URL is a raw-IP at that stage.
> >>
> >> On HTTPS traffic with SNI the "http_sites" ACL will match. Because the
> >> SNI got copied to the request URI.
> >>
> >> The "https_sites" ACL will only be reached on traffic where the SNI does
> >> *not* contain the values its looking for. This test will always be a
> >> non-match / false.
> >>
> > Ouch, I now see in the docs that ssl::server_name is suitable for usage
> > within ssl_bump. So this is the only use case I suppose.
> >
> >>
> >>>
> >>> sslcrtd_program /lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
> >>>
> >>> http_access deny all
> >>>
> >>> http_port 3127
> >>> http_port 3128 intercept
> >>> https_port 3129 cert=/etc/squid/ssl/example.com.cert
> >>> key=/etc/squid/ssl/example.com.private ssl-bump intercept
> 

Re: [squid-users] time based range_offset_limit

2016-07-12 Thread joe
>acl download_until_end_by_ip dst 13.107.4.50
>acl freetimes time 03:00-08:00
>range_offset_limit none download_until_end_by_ip freetimes

When you have simultaneous connections on one large file update,

try that:

collapsed_forwarding on <- enabling that will help when there are lots
of connections to one file, so they can share it.

acl range_list_path urlpath_regex \.(mar|msp|esd|pkg\?)   

[squid-users] assertion failed: DestinationIp.cc:41: "checklist->conn() && checklist->conn()->clientConnection != NULL"

2016-07-12 Thread Omid Kosari
Hello,

Squid crashes with the following error:
assertion failed: DestinationIp.cc:41: "checklist->conn() &&
checklist->conn()->clientConnection != NULL"


From the error message, I guess that the following config may cause the problem:

#acl download_until_end_by_ip dst 13.107.4.50
acl freetimes time 03:00-08:00
#range_offset_limit none download_until_end_by_ip freetimes

As you can see, I have commented out the first and third lines to see
what happens. It is still too soon to be sure, but after commenting out
those lines the problem did not happen. Maybe a bug!
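
If the dst ACL is really the trigger, a possible workaround (untested;
the domain below is just a guess based on the Windows Update traffic in
the other thread) would be to match the destination by name instead of
by raw IP:

acl download_until_end_by_host dstdomain .download.windowsupdate.com
acl freetimes time 03:00-08:00
range_offset_limit none download_until_end_by_host freetimes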

Squid Version 3.5.12 (distribution default package)
Ubuntu 16.04 Linux 4.4.0-28-generic on x86_64







Re: [squid-users] time based range_offset_limit

2016-07-12 Thread Heiler Bemerguy


You're having problems with huge bandwidth being used by simultaneous
connections from the proxy? All of them to the same IP, getting the
same file in parallel??



--
Best Regards,

Heiler Bemerguy
Network Manager - CINBESA
55 91 98151-4894/3184-1751


On 12/07/2016 08:14, Omid Kosari wrote:

Hello,

I want to have "range_offset_limit none" for a specific ACL at specific
times. The config is below, and squid -k parse/check does not show any
error.

acl download_until_end_by_ip dst 13.107.4.50
acl freetimes time 03:00-08:00
range_offset_limit none download_until_end_by_ip freetimes

But could somebody please confirm that it is correct and should work?

Squid Version 3.5.12

Thanks





[squid-users] time based range_offset_limit

2016-07-12 Thread Omid Kosari
Hello,

I want to have "range_offset_limit none" for a specific ACL at specific
times. The config is below, and squid -k parse/check does not show any
error.

acl download_until_end_by_ip dst 13.107.4.50
acl freetimes time 03:00-08:00
range_offset_limit none download_until_end_by_ip freetimes
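
My understanding (please correct me) is that multiple ACL names on one
directive line are ANDed, so the rule should apply only to traffic that
matches both ACLs:

# intended effect: unlimited range fetches, but only for 13.107.4.50
# and only between 03:00 and 08:00
range_offset_limit none download_until_end_by_ip freetimes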

But could somebody please confirm that it is correct and should work?

Squid Version 3.5.12

Thanks


