Re: [squid-users] Ideas for better caching these popular urls

2018-04-11 Thread Omid Kosari
Eliezer Croitoru wrote
> You will need more than just the urls but also the response headers for
> these.
> I might be able to write an ICAP service that will log requests and
> response headers and it can assist Cache admins to improve their
> efficiency but this can take a while.

Hi Eliezer,

Nice idea. I am ready to test, help, and share whatever you need in a real
production environment. Please also make it a general tool that covers the
other domains in the first post's attachment; they are worth a try.

Thanks




--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Ideas for better caching these popular urls

2018-04-10 Thread Omid Kosari
Thanks for the reply.

I assumed the community, at scales from small ISPs to large ones, may have
common domains like the ones I highlighted, and so may have the same issue
as mine. So I skipped the common parts.

One of the problems with redbot is that it shows a timeout for big files like

http://gs2.ww.prod.dl.playstation.net/gs2/appkgo/prod/CUSA00900_00/2/f_2df8e321f37e2f5ea3930f6af4e9571144916013ee38893d881890b454b5fed6/f/UP9000-CUSA00900_00-BLOODBORNE00_4.pkg?downloadId=0187=018700e2291bda0f868f=us=ob=aa2cd9c8d1f359feb843ae4a6c99cfcdb6569ca9cc60ad6d28b6f8de3b5fac23=0=23.57.69.81=0027

http://gs2.ww.prod.dl.playstation.net/gs2/ppkgo/prod/CUSA07557_00/25/f_053bab8c9dec6fbc68a0bd9fc58793285ae350ccf7dadacb35b5840228a9d802/f/EP4001-CUSA07557_00-F12017EMASTER000-A0113-V0100_0.pkg?downloadId=0059=005900e22977e62f91a2=ob=0183=8.248.5.254=0032


I assumed anyone with a few thousand users may have the same problem, and
maybe they would like to share, for example, their refresh_pattern or StoreID
rules to solve it. You know better than I do that PlayStation is everywhere ;)

Here is part of the storeid_db file:

^http:\/\/.*\.sonycoment\.loris-e\.llnwd\.net\/(.*?\.pkg)    http://playstation.net.squidinternal/$1
^http:\/\/.*\.playstation\.net\/(.*?\.pkg)    http://playstation.net.squidinternal/$1

Almost all of the huge PlayStation downloads come back with a 206 status
code, but the client downloads the file from start to end; if I remember
correctly, Squid will cache the file correctly in this situation.
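As a hedged illustration only (the patterns and lifetimes below are my assumptions, not settings from this thread), a squid.conf fragment for caching .pkg downloads and forcing ranged (206) requests to be fetched to completion might look like:

```
# Illustrative values only: cache .pkg objects aggressively regardless of
# Cache-Control, and fetch Range (206) requests to the end of the object
# so the whole file becomes cacheable.
refresh_pattern -i \.pkg(\?.*)?$ 129600 100% 525600 ignore-no-store override-expires
acl psn_pkg urlpath_regex \.pkg
range_offset_limit none psn_pkg
quick_abort_min -1 KB
```

range_offset_limit with an ACL is the same mechanism used elsewhere in this thread for cdn.mozilla.net.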





[squid-users] Ideas for better caching these popular urls

2018-04-10 Thread Omid Kosari
Hello,

(attached image: squid-top-domains.JPG)

This image shows stats from one of my Squid boxes. I have a question about
the highlighted entries: I think they should have a better hit ratio,
because they are popular among clients.
I have checked a lot of things, like calamaris and the logs, and have
played with refresh_pattern, StoreID rules, etc.

I ask the gurus and the community to please help me get better HITs.

I am also ready to share specific parts of access.log and other files if
requested.

Thanks





Re: [squid-users] Cache poisoning vulnerability 3.5.23

2017-07-27 Thread Omid Kosari
Amos Jeffries wrote
> Cache poisoning (if it is that) is a serious security issue. Please 
> bring the details of security problems to the *squid-bugs* mailing list 
> so it can be investigated and solved, rather than blind-siding everyone 
> with a public announcement like this.
> 
> Amos

I tried that before posting here, but my message was not accepted even
after hours, so I posted here.
I'll try again there.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cache-poisoning-vulnerability-3-5-23-tp4683215p4683221.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Cache poisoning vulnerability 3.5.23

2017-07-26 Thread Omid Kosari
In my experience, if you see any output from the following command, you may
be a victim:

grep -a 'generate_204' /var/log/squid/access.log | grep -v '/204 ' | \
  grep -v '/000' | grep -v opera | grep -v ucweb | grep -v apple
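The same filter can be written as a single pass; this is a hedged equivalent (the log path is the Debian/Ubuntu default and can be overridden via $LOG, which is my addition):

```shell
# Print generate_204 requests whose status is neither 204 nor 000 and that
# did not come from clients known to rewrite the probe (opera/ucweb/apple).
grep -a 'generate_204' "${LOG:-/var/log/squid/access.log}" \
  | grep -Ev '/204 |/000|opera|ucweb|apple'
```

Any output from this pipeline is a candidate poisoned response worth inspecting.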





[squid-users] Cache poisoning vulnerability 3.5.23

2017-07-26 Thread Omid Kosari
Hello,

Recently I have seen some cache poisoning, especially on Android captive
portal detection sites.
My Squid was 3.5.19 (from https://packages.debian.org/stretch/squid) on
Ubuntu Linux 16.04. Then I upgraded to the latest version, 3.5.23 (from
https://packages.debian.org/stretch/squid), and purged the specific pages,
but again I can see cache poisoning on the same pages.

http://connectivitycheck.gstatic.com/generate_204
http://clients3.google.com/generate_204
http://172.217.20.206/generate_204
http://clients1.google.com/generate_204
http://google.com/generate_204






Re: [squid-users] What would be the maximum ufs\aufs cache_dir objects?

2017-07-26 Thread Omid Kosari
Interesting, because I was going to create a topic like this myself, but
Eliezer read my mind ;)

Nowadays I can see HTTP traffic shrinking further every day, and I keep
thinking about retiring Squid.

But currently I see that most of the remaining HTTP traffic worth caching is:
Microsoft (Windows updates + app updates)
Apple (iOS updates + app updates)
Game consoles (PlayStation + Xbox + game updates)
Google (Android apps + Chrome apps)
Samsung (firmware updates + app updates)
CDNs (Akamai + llnwd)
Antivirus updates

International HTTP traffic is less than 20% of all international traffic.
The sites mentioned above account for more than 60% of international HTTP
traffic, so they are more than 10% of all international traffic.

Now I prefer to cache only the sites mentioned. But each entry needs special
customization, like Eliezer's tool for Windows updates.

Squid is an advanced general-purpose caching platform, and customizing it
for each website is far from its roadmap. So I think supporting people like
Eliezer in creating custom helpers/services for each website may help Squid
become more popular and active.





Re: [squid-users] range_offset_limit not working as expected

2017-07-16 Thread Omid Kosari
Amos Jeffries wrote
> Squid DNS system can be updated to do things better.

Hi,

Any news or updates?





Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2017-05-11 Thread Omid Kosari
Eliezer Croitoru wrote
> You can try to use the atime and not the mtime.

Each time the fetcher script runs, all of the request files are accessed,
so their atime is refreshed.
I think for the "request" directory it should be "mtime", and for the
"body" directory it should be "atime".


Eliezer Croitoru wrote
> It is possible that some fetchers will consume lots of memory and some of
> the requests are indeed un-needed but... don’t delete them.
> Try to archive them and only then remove from them some by their age or
> something similar.
> Once you have the request you have the option to fetch files and since
> it's such a small thing(max 64k per request) it's better to save and
> archive first and later wonder if some file request is missing.

But currently there are more than 23 files in the old request directory.
Maybe Go's garbage collector does not release the memory after processing
each file.
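Eliezer's archive-then-delete suggestion could be sketched like this (the store path and archive name are my assumptions; requires GNU find and GNU tar):

```shell
# Archive request files older than 30 days into an .xz tarball, then delete
# the originals, instead of removing them outright.
archive_old_requests() {
  store="$1"            # e.g. the fetcher's store dir, containing request/
  ( cd "$store" || return 1
    find request -type f -mtime +30 -print0 \
      | tar --null -cJf "requests-$(date +%F).tar.xz" --files-from=-
    find request -type f -mtime +30 -delete )
}
```

Run as e.g. `archive_old_requests /path/to/store`; the archive lands next to the request directory.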



Eliezer Croitoru wrote
> * if you want me to test or analyze your archived requests archive them
> inside a xz and send them over to me.

I sent you the request directory in a previous private email.

Thanks






Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2017-05-10 Thread Omid Kosari
I deleted and recreated the request directory and saw a huge decrease in
the memory usage of the fetcher process.

Did I do the right thing? Is there anything I should do after a while?





Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2017-04-24 Thread Omid Kosari
Hello,

Thanks





Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2017-04-15 Thread Omid Kosari
Hello,

I sent the files you mentioned to your email 2 days ago.

A little more investigation shows that some big files (~2 GB) are
downloading slowly (~100 KB/s) while some others download much faster.
That problem is related to networking (BGP and IXP), and the fetcher
script cannot solve it.

But is there a way to run more than one fetcher script at the same time,
so downloads happen in parallel rather than one by one? There is free
bandwidth, but the fetcher script takes a long time for some downloads.

Thanks again for your support
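The parallel-download idea can be sketched generically (this is NOT Eliezer's fetcher; the function, list file, and FETCH variable are my assumptions):

```shell
# Run up to N fetch commands concurrently over a list of URLs, one per line.
# FETCH defaults to echo for a dry run; set FETCH='wget -q' to download.
parallel_fetch() {
  list="$1"
  jobs="${2:-4}"
  xargs -P "$jobs" -n 1 ${FETCH:-echo} < "$list"
}
```

For example, `FETCH='wget -q' parallel_fetch urls.txt 4` would download four files at a time instead of one by one.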





Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2017-04-08 Thread Omid Kosari
Thanks for reply.


Eliezer Croitoru wrote
> Also what is busy for you?

The fetcher script is always downloading. For example, right now I can see
a fetcher script that has been running for more than 3 days, downloading
files one by one.


Eliezer Croitoru wrote
> Also what is busy for you?
> Are you using a lock file ?( it might be possible that your server is
> downloading in some loop and this is what causing this load)

Yes. Everything looks fine in that regard.


Eliezer Croitoru wrote
> Did you upgraded to the latest version?

Yes


I will send you the files .

Thanks






Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2017-04-06 Thread Omid Kosari
Hey Eliezer,

Recently I have found that the fetcher script is very busy; it is always
downloading. It seems that Microsoft changed something. I am not sure, it
is just a guess.

What's happening on your servers?





Re: [squid-users] squid tproxy connection time out

2017-01-03 Thread Omid Kosari
Hello,

I think your problem is the topology. I suggest changing the position of
Squid so that the MikroTik router sits between the clients and the Squid box.

Also assign a private IP address to your Squid, and an IP from the same
range to your MikroTik router. Then try to mangle and route to that private
IP.
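A hedged RouterOS sketch of the mangle-and-route step (the subnet and addresses below are illustrative assumptions, not values from this thread):

```
# Mark port-80 traffic from the client subnet, then policy-route the marked
# traffic to the Squid box's private address.
/ip firewall mangle add chain=prerouting src-address=192.168.88.0/24 \
    protocol=tcp dst-port=80 action=mark-routing new-routing-mark=to-squid
/ip route add routing-mark=to-squid gateway=192.168.100.2
```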







Re: [squid-users] TProxy and client_dst_passthru

2016-09-13 Thread Omid Kosari
Amos Jeffries wrote
> ==> ORIGINAL_DST is should *only* ever be used on MISS or
> REFRESH/revalidate traffic. Never on a HIT. Thus zero (0%) hit-ratio is
> the expected behaviour.
> 
> For the same reason that a report of the log traffic using "grep -v HIT"
> will show zero cache ratio.

I have described my problem in another thread:
http://squid-web-proxy-cache.1019090.n4.nabble.com/range-offset-limit-not-working-as-expected-td4679355.html
Based on your suggestion, Squid now has only one DNS server, which is the
same one the users use.

I am sure that this URL
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
existed and was cached. So why are there lots of log lines with ORIGINAL_DST?







Re: [squid-users] TProxy and client_dst_passthru

2016-09-11 Thread Omid Kosari
Antony Stone wrote
> On Thursday 08 September 2016 at 12:27:42, Omid Kosari wrote:
> 
>> Hi Fred,
>> 
>> Same problem here . Do you found any solution or workaround ?
> 
> Please clarify which message you are reply / referring to.
> 
> Thanks,
> 
> 
> Antony.
> 
> -- 
> Archaeologists have found a previously-unknown dinosaur which seems to
> have 
> had a very large vocabulary.  They've named it Thesaurus.
> 
>Please reply to the
> list;
>  please *don't* CC
> me.
> ___
> squid-users mailing list

> squid-users@.squid-cache

> http://lists.squid-cache.org/listinfo/squid-users

I am referring to the following messages; I have the same problem.


FredT wrote
> Hi Amos,
> 
> We have done additional tests in production with ISPs and the ORIGINAL_DST
> in tproxy cannot be cached.
> In normal mode (not tproxy), ORIGINAL_DST can be cached, no problem.
> But once in tproxy (http_port 3128 tproxy), no way, it's impossible to get
> TCP_HIT.
> 
> We have played with the client_dst_passthru and the host_verify_strict,
> many combinaisons on/off.
> By settings client_dst_passthru ON and host_verify_strict OFF, we can
> reduce the number of ORIGINAL_DST (generating DNS "alerts" in the
> cache.log) but it makes issues with HTTPS websites (facebook, hotmail,
> gmail, etc...).
> We have also tried many DNS servers (internals and/or externals), same
> issue.
> 
> I read what you explain in your previous email but it seems there is
> something weird.
> The problem is that the ORIGINAL_DST could be up to 25% of the traffic
> with some installations meaning this part is "out-of-control" in term of
> cache potential.
> 
> All help is welcome here
> Thanks in advance.
> 
> Bye Fred 


FredT wrote
> Hi Eliezer,
> 
> Well, we have done many tests with Squid (3.1 to 3.5.x), disabling
> "client_dst_passthru" (off) will stop the DNS entry as explained in the
> wiki, the option directly acts on the flag "ORIGINAL_DST".
> As you know, ORIGINAL_DST switches the optimization off (ex: StoreID) then
> it's not possible to cache the URL (ex:
> http://cdn2.example.com/mypic.png).
> 
> In no tproxy/NAT mode, the client_dst_passthru works perfectly by
> disabling the DNS entry control, so optimization is done correctly.
> But in tproxy/NAT, the client_dst_passthru has no effect, we see
> ORIGINAL_DST in logs.
> 
> So, maybe I'm totaly wrong here the client_dst_passthru is not related to
> the ORIGINAL_DST, or there is an explaination why the client_dst_passthru
> does not act in tproxy/NAT...
> 
> Bye Fred

Please look at the following results.
As you know, the following command shows statistics for only the log lines
that contain ORIGINAL_DST:

tail -n 100 /var/log/squid/access.log | grep -a ORIGINAL_DST | \
  calamaris --config-file /etc/calamaris/calamaris.conf --all-useful-reports | more


----------------------------------------------------------------------
Proxy statistics
----------------------------------------------------------------------
Total amount:                                      requests    378310
unique hosts/users:                                hosts         1859
Total Bandwidth:                                   Byte        16453M
Proxy efficiency (HIT [kB/sec] / DIRECT [kB/sec]): factor        1.22
Average speed increase:                            %             0.39
TCP response time of 100% requests:                msec            0M
----------------------------------------------------------------------
Cache statistics
----------------------------------------------------------------------
Total amount cached:                               requests     11945
Request hit rate:                                  %             3.16
Bandwidth savings:                                 Byte          355M
Bandwidth savings in Percent (Byte hit rate):      %             2.16
Average cached object size:                        Byte            0M
Average direct object size:                        Byte            0M
Average object size:                               Byte            0M
----------------------------------------------------------------------

# Incoming TCP-requests by status
status        request      %  sec/req     Byte      %   kB/sec
------------ -------- ------ -------- -------- ------ --------
HIT             11945   3.16     1.94     355M   2.16    15.66
TCP_REFRESH_UNMODIFIED_ABORTED

Re: [squid-users] TProxy and client_dst_passthru

2016-09-08 Thread Omid Kosari
Hi Fred,

Same problem here. Have you found any solution or workaround?

Regards





Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-09-06 Thread Omid Kosari
Hey Eliezer,

According to these threads
http://squid-web-proxy-cache.1019090.n4.nabble.com/range-offset-limit-not-working-as-expected-td4679355.html

http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-td4670189.html

Is there any chance you could implement something that can be used for
other popular sites with 206 (partial content) responses, like
download.cdn.mozilla.net? I think it has the same problem as Windows
Update, with lots of uncacheable requests.

Thanks in advance.





Re: [squid-users] range_offset_limit not working as expected

2016-09-06 Thread Omid Kosari
It is a tproxy cache. Is there a way to force caching them?
Current config:
host_verify_strict off
client_dst_passthru on

dns_nameservers x.y.160.172
dns_nameservers 217.218.155.155
dns_nameservers 217.218.127.127
dns_nameservers 8.8.8.8
dns_nameservers 8.8.4.4
dns_nameservers 208.67.222.222
dns_nameservers 208.67.220.220
dns_nameservers 208.67.222.220
dns_nameservers 208.67.220.222

The first DNS server is our caching DNS server, and its parent DNS servers
are the others that appear in this config. Users' DNS traffic is
intercepted to x.y.160.172.

So the problem seems to be the CDN's DNS round-robin. Is there a way to
solve that?
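One common way to stop round-robin CDN addresses from fragmenting the cache is StoreID, which maps many mirror URLs onto one cache key. A hedged squid.conf sketch (helper path and internal key name are my assumptions; the database format matches the storeid_db lines shown elsewhere in this archive):

```
# Collapse all download.cdn.mozilla.net mirrors into one cache key.
store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_db
store_id_children 5 startup=1
acl mozilla_cdn dstdomain .cdn.mozilla.net
store_id_access allow mozilla_cdn
store_id_access deny all
```

with a matching tab-separated line in /etc/squid/storeid_db:

```
^http:\/\/download\.cdn\.mozilla\.net\/(.*)    http://mozilla.squidinternal/$1
```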






Re: [squid-users] TCP_RESET non http requests on port 80

2016-09-06 Thread Omid Kosari
Filed a bug report http://bugs.squid-cache.org/show_bug.cgi?id=4585





[squid-users] range_offset_limit not working as expected

2016-09-05 Thread Omid Kosari
Hello,

My config:
acl download_until_end dstdomain .cdn.mozilla.net
range_offset_limit none download_until_end


Squid 3.5.19




1473082997.229697 x.29.186.47 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.72 application/octet-stream
1473082999.365   1122 x.10.189.81 TCP_MISS/206 300512 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.80 application/octet-stream
1473083001.570573 x.29.184.205 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.81 application/octet-stream
1473083004.941   2196 x.10.184.141 TCP_MISS/206 300514 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.80 application/octet-stream
1473083005.383631 x.10.184.65 TCP_MISS/206 300514 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win64/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.80 application/octet-stream
1473083006.711970 x.10.191.220 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/49.0b6/update/win32/en-US/firefox-49.0b6.complete.mar
- ORIGINAL_DST/2.21.246.11 application/octet-stream
1473083008.124  74151 x.10.182.124 TCP_MISS/206 300627 GET
http://download.cdn.mozilla.net/pub/firefox/releases/12.0/update/win32/en-US/firefox-12.0.complete.mar
- ORIGINAL_DST/54.192.128.59 application/octet-stream
1473083012.317646 x.10.185.248 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.81 application/octet-stream
1473083015.680793 x.29.187.142 TCP_MISS/206 300474 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/2.21.246.11 application/octet-stream
1473083017.918515 x.10.184.114 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.81 application/octet-stream
1473083023.480514 x.29.185.194 TCP_MISS/206 300474 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win64/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.59 application/octet-stream
1473083027.521   1208 x.6.25.227 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/47.0.1/update/win32/en-US/firefox-47.0.1.complete.mar
- ORIGINAL_DST/188.43.76.80 application/octet-stream
1473083028.096   1017 x.6.24.5 TCP_MISS/206 300474 GET
http://download.cdn.mozilla.net/pub/firefox/releases/43.0.1/update/win32/en-US/firefox-43.0.1.complete.mar
- ORIGINAL_DST/188.43.76.81 application/octet-stream
1473083032.366   1288 x.10.183.80 TCP_MISS/206 300474 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.59 application/octet-stream
1473083038.171548 x.29.185.69 TCP_MISS/206 300514 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.80 application/octet-stream
1473083038.579   1298 x.10.182.90 TCP_HIT/206 300623 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win64/en-US/firefox-48.0.1-48.0.2.partial.mar
- HIER_NONE/- application/octet-stream
1473083038.706277 x.10.185.96 TCP_MISS/206 300476 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- ORIGINAL_DST/188.43.76.59 application/octet-stream
1473083039.127   7808 x.29.189.2 TCP_MISS/206 300487 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.2.complete.mar
- STANDBY_POOL/1.1.1.12 application/octet-stream
1473083044.820   1296 x.6.25.74 TCP_MISS/206 300473 GET
http://download.cdn.mozilla.net/pub/firefox/releases/48.0.2/update/win32/en-US/firefox-48.0.1-48.0.2.partial.mar
- ORIGINAL_DST/188.43.76.81 application/octet-stream
1473083047.174577 x.29.188.118 TCP_HIT/206 300622 GET
http://download.cdn.mozilla.net/pub/firefox/releases/47.0.1/update/win32/fa/firefox-47.0.1.complete.mar
- HIER_NONE/- application/octet-stream
1473083050.059623 x.10.188.231 TCP_MISS/206 300474 GET
http://download.cdn.mozilla.net/pub/firefox/releases/47.0.1/update/win32/en-US/firefox-47.0.1.complete.mar
- ORIGINAL_DST/188.43.76.72 application/octet-stream
1473083050.332436 x.6.25.60 TCP_MISS/206 300624 GET
http://download.cdn.mozilla.net/pub/firefox/releases/47.0.1/update/win32/en-US/firefox-47.0.1.complete.mar
- STANDBY_POOL/1.1.1.12 application/octet-stream
1473083053.730824 x.10.183.153 TCP_MISS/206 300621 GET

Re: [squid-users] reply_header_access Server deny (IF Server==squid)

2016-09-05 Thread Omid Kosari
Thanks but according to my other thread
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-td4679102.html
deny_info generates some other headers/footprints.





Re: [squid-users] reply_header_access Server deny (IF Server==squid)

2016-09-04 Thread Omid Kosari
Because Squid should be a truly transparent tproxy. I want to remove its
footprints.





[squid-users] reply_header_access Server deny (IF Server==squid)

2016-09-03 Thread Omid Kosari
Hello,

I want to hide Squid's OWN reply header. I have tested the following config:

acl squid_server rep_header Server squid
reply_header_access Server deny squid_server
reply_header_replace Server Foo/1.0

But I got the error "ACL is used in context without an HTTP response.
Assuming mismatch."

The only thing that works is the following, but it hides all Server
headers; I want to remove the header only when it is Squid's own:

reply_header_access Server deny all


HTTP/1.1 400 Bad Request
===
Server: squid
===
Mime-Version: 1.0
Date: Sun, 28 Aug 2016 09:00:12 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close





Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-29 Thread Omid Kosari
Alex Rousskov wrote
> On 08/28/2016 03:10 AM, Omid Kosari wrote:
>> Alex Rousskov wrote
>>> I understand that it works for regular requests. Does it also work
>>> (i.e.,
>>> does Squid reset the connection) when handling a non-HTTP request on
>>> port 80?
> 
>> No , when the request is non-HTTP it does not reset the connection .
> 
> Great. Now please go back to the simpler configuration I asked you to
> test some time ago:
> 
>   http_reply_access deny all
>   deny_info TCP_RESET all
> 
> Does that work for non-HTTP request on port 80?

config:
http_reply_access deny all
deny_info TCP_RESET all 

=
test type:
telnet 123.com 80
sgsdgsdgsdgsdg 

RESULT: 
HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Mon, 29 Aug 2016 13:30:47 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 5
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close

reset

Connection to host lost.
==




Alex Rousskov wrote
> I am confused. Earlier you said "As i mention before the deny_info works
> in other configs" and gave a very similar configuration example with
> dstdomain ACL. Now you are showing that this example does _not_ work
> even with regular requests (you are getting HTTP headers from Squid
> instead of a TCP connection reset). Am I missing something?

Sorry, I meant with adapted_http_access. Maybe it was a typo on my part.







Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-28 Thread Omid Kosari
Alex Rousskov wrote
> I understand that it works for regular requests. Does it also work (i.e.,
> does Squid
> reset the connection) when handling a non-HTTP request on port 80?

No, when the request is non-HTTP it does not reset the connection.

Here are my test results. I tested against 123.com's IP address, which is
69.58.188.49.






config:
acl test dst 69.58.188.49
deny_info TCP_RESET test
http_reply_access deny test 


=
test type:
telnet 123.com 80
GET / HTTP/1.1
host: 123.com


RESULT:
HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Sun, 28 Aug 2016 08:45:23 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 5
X-Cache: MISS from cache1
X-Cache-Lookup: MISS from cache1:3128
Connection: keep-alive

reset

note: telnet does not disconnect until I hit Enter a few times

=
test type:
telnet 123.com 80
sgsdgsdgsdgsdg

RESULT:
HTTP/1.1 400 Bad Request
Server: squid
Mime-Version: 1.0
Date: Sun, 28 Aug 2016 09:00:12 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close



Connection to host lost.




config:
acl test dst 69.58.188.49
deny_info TCP_RESET test
adapted_http_access deny test


=
test type:
telnet 123.com 80
GET / HTTP/1.1
host: 123.com



RESULT:
note: empty reply; the telnet session simply disconnects

=
test type:
telnet 123.com 80
sgsdgsdgsdgsdg

RESULT:
HTTP/1.1 400 Bad Request
Server: squid
Mime-Version: 1.0
Date: Sun, 28 Aug 2016 08:56:14 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close



Connection to host lost.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679222.html


Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-27 Thread Omid Kosari
Alex Rousskov wrote
> I recommend starting with something like this:
> 
>   http_reply_access deny all
>   deny_info TCP_RESET all
> 
> Does that reset all connections to Squid (after Squid fetches the reply)?

Thanks for the reply.

As I mentioned before, deny_info works in other configs. For example:

acl test dstdomain 123.com
deny_info TCP_RESET test
http_reply_access deny test

works fine and only resets the connection, without any additional headers.

But if you are after something specific, I will schedule a maintenance window
and apply the following config as you suggested:

  http_reply_access deny all
  deny_info TCP_RESET all






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679212.html


Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-26 Thread Omid Kosari
Alex Rousskov wrote
> I do not know why deny_info does not work
> in your tests.

Should I give up?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679207.html


Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-24 Thread Omid Kosari
Alex Rousskov wrote
> Thus, the existing implementation should cover non-HTTP
> requests on port 80 (or 3128). If it does not, it is a bug. We should
> polish the documentation to make this clear.

The problem is not Squid itself. The problem is that in some situations, for
example a DoS with malformed requests, infected clients send lots of requests
to a target server. The requests go through Squid's tproxy, so Squid sends
back about 250 bytes in reply to each request.

So I am looking for a way to send just a TCP reset, not those 250 bytes.

HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Wed, 24 Aug 2016 14:11:35 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 5
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close 




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679147.html


Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-24 Thread Omid Kosari
acl status_400 http_status 400
deny_info TCP_RESET status_400
http_reply_access deny status_400


It still sends the headers; only the 400 changed to a 403.


HTTP/1.1 403 Forbidden
Server: squid
Mime-Version: 1.0
Date: Wed, 24 Aug 2016 14:11:35 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 5
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close

reset



Isn't there a way for Squid to skip these headers and just send a reset?




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679139.html


Re: [squid-users] TCP_RESET non http requests on port 80

2016-08-24 Thread Omid Kosari
This config works for the dstdomain ACL type:

acl test dstdomain 123.com
deny_info TCP_RESET test
adapted_http_access deny test


but it is not what I want. I want:

acl status_400 http_status 400
deny_info TCP_RESET status_400 
adapted_http_access deny status_400 

OR

acl HTTP proto HTTP
acl PORT_80 port 80 
deny_info TCP_RESET PORT_80 !HTTP
adapted_http_access deny PORT_80 !HTTP 




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679126.html


[squid-users] TCP_RESET non http requests on port 80

2016-08-24 Thread Omid Kosari
Hello,

I want Squid to send a TCP reset in reply to non-HTTP requests on port 80.

I do not want Squid to reply with these headers:

HTTP/1.1 400 Bad Request
Server: squid
Mime-Version: 1.0
Date: Wed, 24 Aug 2016 12:08:02 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 0
X-Cache: MISS from cache1
X-Cache-Lookup: NONE from cache1:3128
Connection: close


but I want something like a firewall DROP instead.

acl HTTP proto HTTP
acl PORT_80 port 80
#acl status_400 http_status 400
#deny_info TCP_RESET status_400
#http_access deny PORT_80 !HTTP
#http_access deny !HTTP
deny_info TCP_RESET PORT_80 !HTTP
#adapted_http_access deny PORT_80 !HTTP

As you can see, I have tried other configs (commented out above) with no
success.


Squid 3.5.19 from debian repo 
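
What is being asked for here — an abortive close that puts a TCP RST on the
wire instead of a polite HTTP reply — can be illustrated at socket level with
SO_LINGER. A minimal Python sketch of the general technique, not Squid
internals:

```python
import socket
import struct

def reset_connection(conn: socket.socket) -> None:
    """Abort a connection with a TCP RST instead of a normal FIN close.

    SO_LINGER with l_onoff=1 and l_linger=0 makes close() discard any
    unsent data and emit an RST segment on the wire.
    """
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack('ii', 1, 0))  # l_onoff=1, l_linger=0
    conn.close()
```

A firewall rule such as iptables REJECT --reject-with tcp-reset produces the
same segment without involving the application at all.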



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TCP-RESET-non-http-requests-on-port-80-tp4679102p4679111.html


Re: [squid-users] Yet another store_id question HIT MISS

2016-08-20 Thread Omid Kosari
I have also tested with several browsers, PCs, etc. I have also disabled every
refresh_pattern except the default Squid rules, the same as yours. Same
result.

The only way I get HITs is the way I mentioned in
http://squid-web-proxy-cache.1019090.n4.nabble.com/Yet-another-store-id-question-HIT-MISS-tp4678972p4679025.html


Did you test with my storeid config (mentioned in my first email) enabled?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Yet-another-store-id-question-HIT-MISS-tp4678972p4679067.html


Re: [squid-users] Yet another store_id question HIT MISS

2016-08-18 Thread Omid Kosari
I was correct .

If either of the following conditions holds, the mentioned URLs are not
cached:

1. squid.conf contains this line:
acl storeiddomainregex dstdom_regex
^igcdn(\-photos|\-videos)[a-z0-9\-]{0,9}\.akamaihd\.net$ 

2. storeid_db contains this line:
^http:\/\/igcdn-.*\.akamaihd\.net/hphotos-ak-.*/(t5.*?)(?:\?|$)
http://instagramcdn.squid.internal/$1

If 1 OR 2, then
http://igcdn-photos-h-a.akamaihd.net/hphotos-ak-xap1/t51.2885-15/s640x640/sh0.08/e35/13702999_1008425479275495_76276919_n.jpg
is not cached at all, even if we open that URL many times.

But if I remove 1 and 2, the URL is cached.

My first email was incorrect; I later realized that the first URL was a HIT
and the second a MISS. The HIT was left over from before the store_id rules
were added.

So the problem is that with my mentioned squid.conf and store_id rules, the
mentioned URLs are not cached at all, even when the same URL is reopened many
times.

Thanks



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Yet-another-store-id-question-HIT-MISS-tp4678972p4679025.html


Re: [squid-users] Yet another store_id question HIT MISS

2016-08-18 Thread Omid Kosari
Simply open the following URL in Firefox:
http://igcdn-photos-h-a.akamaihd.net/hphotos-ak-xap1/t51.2885-15/s640x640/sh0.08/e35/13702999_1008425479275495_76276919_n.jpg

then rename the h to a, b, c, d, e, or f, for example:

http://igcdn-photos-a-a.akamaihd.net/hphotos-ak-xap1/t51.2885-15/s640x640/sh0.08/e35/13702999_1008425479275495_76276919_n.jpg

According to my store_id rules it should be a HIT, but it is not.

I am not even certain whether opening the same exact URL twice gives a HIT the
second time.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Yet-another-store-id-question-HIT-MISS-tp4678972p4679021.html


Re: [squid-users] Malformed HTTP on tproxy squid

2016-08-18 Thread Omid Kosari
Amos Jeffries wrote
> About the only thing you could do to speed it up is locate the error
> page templates (file paths: en/ERR_INVALID_REQ and
> templates/ERR_INVALID_REQ) and remove their contents. Then restart Squid.
> That should remove at least a few of the vprintf() syscalls that your
> earlier trace showed as being a significant source of CPU load.

Fine. This resolved the problem.
Thanks


samples  %image name   symbol name
190728   34.3901  squid/usr/sbin/squid
26003 4.6886  r8169/r8169
22958 4.1396  libc-2.23.so _int_malloc
13812 2.4904  nf_conntrack /nf_conntrack
11146 2.0097  libc-2.23.so re_search_internal
11044 1.9913  libc-2.23.so _int_free
8748  1.5774  libstdc++.so.6.0.21 
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
7240  1.3054  reiserfs /reiserfs
6087  1.0975  libc-2.23.so malloc_consolidate
5850  1.0548  libc-2.23.so malloc
4840  0.8727  libc-2.23.so vfprintf
4468  0.8056  ip_tables/ip_tables
4423  0.7975  libm-2.23.so __ieee754_log_avx
4364  0.7869  libc-2.23.so __memcpy_sse2_unaligned
3935  0.7095  kallsyms sys_epoll_ctl
3929  0.7084  libc-2.23.so free
3829  0.6904  libc-2.23.so build_upper_buffer
3562  0.6423  kallsyms __fget
3413  0.6154  kallsyms copy_user_generic_string
3169  0.5714  libc-2.23.so calloc
2815  0.5076  kallsyms delay_tsc
2767  0.4989  kallsyms csum_partial_copy_generic
2739  0.4939  kallsyms tcp_sendmsg
2454  0.4425  kallsyms memcpy
2192  0.3952  libc-2.23.so _wordcopy_fwd_dest_aligned
2139  0.3857  kallsyms _raw_spin_lock_irqsave
2108  0.3801  kallsyms _raw_spin_lock
2075  0.3741  kallsyms nf_iterate
1916  0.3455  libc-2.23.so __memset_sse2
1900  0.3426  [vdso] (tgid:12101 range:0x7fff9fbca000-0x7fff9fbcbfff)
1842  0.3321  libc-2.23.so __strcmp_sse2_unaligned
1794  0.3235  kallsyms sock_poll
1753  0.3161  libc-2.23.so strlen
1702  0.3069  kallsyms entry_SYSCALL_64_after_swapgs
1618  0.2917  kallsyms tcp_poll
1611  0.2905  kallsyms irq_entries_start
1593  0.2872  kallsyms ep_send_events_proc
1567  0.2825  kallsyms ___slab_alloc
1539  0.2775  kallsyms __local_bh_enable_ip
1523  0.2746  nf_conntrack_ipv4/nf_conntrack_ipv4
1467  0.2645  libc-2.23.so re_string_reconstruct
1455  0.2624  kallsyms tcp_transmit_skb
1425  0.2569  nf_nat_ipv4  /nf_nat_ipv4
1366  0.2463  kallsyms _raw_spin_lock_bh
1333  0.2404  kallsyms __alloc_skb
1319  0.2378  kallsyms mutex_spin_on_owner.isra.3
1313  0.2367  kallsyms tcp_recvmsg
1307  0.2357  kallsyms tcp_write_xmit
1279  0.2306  kallsyms __fget_light
1266  0.2283  libc-2.23.so __memmove_sse2
1234  0.2225  libnettle.so.6.2
/usr/lib/x86_64-linux-gnu/libnettle.so.6.2
1202  0.2167  kallsyms __inet_lookup_established
1177  0.2122  kallsyms __lock_text_start
1116  0.2012  kallsyms common_file_perm
1080  0.1947  kallsyms tcp_ack
1075  0.1938  kallsyms tcp_clean_rtx_queue
1046  0.1886  kallsyms tcp_v4_rcv





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Malformed-HTTP-on-tproxy-squid-tp4678951p4679009.html


Re: [squid-users] Yet another store_id question HIT MISS

2016-08-17 Thread Omid Kosari
Eliezer Croitoru-2 wrote
> StoreID is not the only thing which can affect a HIT or a MISS.
> A nice tool which was written to understand the subject is RedBot at:
> https://redbot.org/
> 
> From a simple inspection of the file it seems that it should get  hit but,
> why are you using StoreID for this object?

I already tested with redbot and then asked here.
This URL belongs to Instagram, which uses a lot of such URLs for the same file.


Eliezer Croitoru-2 wrote
> Also why are you using:
> refresh_pattern -i ^http:\/\/[a-zA-Z0-9\-\_\.]+\.squid\.internal\/.* 10080
> 95% 86400   override-lastmod override-expire reload-into-ims ignore-reload
> ignore-must-revalidate ignore-no-store ignore-private 
> 
> ??
> You would only need this for the specific case which the hostname is
> "dynamic".

Thanks removed it .


Eliezer Croitoru-2 wrote
> This url seems by default cache friendly and only if you have enough
> details on their cdn network you should try to use StoreID.
> Something that may help you is the next log format settings:
> logformat cache_headers %ts.%03tu %6tr %>a %Ss/%03>Hs %
> 
> h" "%{Cache-Control}>ha" Q-P: "%{Pragma}>h" "%{Pragma}>ha" REP-CC:
> "%{Cache-Control}h REP-EXP: %{Expires}h VARY:
> %{Vary}h %eui
> access_log daemon:/var/log/squid/access.log cache_headers
> 
> Try to see how the requests for these looks like in the logs.

Yes, we have enough details. Now I am investigating the HIT/MISS problem with
the logformat you provided. I will let you know the result.

Thanks






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Yet-another-store-id-question-HIT-MISS-tp4678972p4678979.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-17 Thread Omid Kosari
Aha. We have found that this request belongs to a cheap, popular satellite
receiver, www.starmax.co . Maybe it has been infected and has become a zombie
in a botnet. Maybe you should buy one device from them




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678978.html


[squid-users] Yet another store_id question HIT MISS

2016-08-17 Thread Omid Kosari
Why is the following link a HIT

X-Cache:"HIT from cache1"
X-Cache-Lookup:"HIT from cache1:3128"


http://igcdn-photos-c-a.akamaihd.net/hphotos-ak-xaf1/t51.2885-15/s150x150/e35/13649137_1547514802224163_950421795_n.jpg

but this one a MISS?

http://igcdn-photos-a-a.akamaihd.net/hphotos-ak-xaf1/t51.2885-15/s150x150/e35/13649137_1547514802224163_950421795_n.jpg



store_id_program "/usr/lib/squid/storeid_file_rewrite"
"/etc/squid/storeid_db"
store_id_children 50 startup=10 idle=5 concurrency=50
acl storeiddomainregex dstdom_regex
^igcdn(\-photos|\-videos)[a-z0-9\-]{0,9}\.akamaihd\.net$
store_id_access allow storeiddomainregex
store_id_access deny all
refresh_pattern -i ^http:\/\/[a-zA-Z0-9\-\_\.]+\.squid\.internal\/.* 10080
95% 86400  override-lastmod override-expire reload-into-ims ignore-reload
ignore-must-revalidate ignore-no-store ignore-private  

storeid_db content

^http:\/\/igcdn-.*\.akamaihd\.net/hphotos-ak-.*/(t5.*?)(?:\?|$)
http://instagramcdn.squid.internal/$1


root@cache:~# echo
'http://igcdn-photos-c-a.akamaihd.net/hphotos-ak-xaf1/t51.2885-15/s150x150/e35/13649137_1547514802224163_950421795_n.jpg'
| /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_db
OK
store-id=http://instagramcdn.squid.internal/t51.2885-15/s150x150/e35/13649137_1547514802224163_950421795_n.jpg
root@cache:~# echo
'http://igcdn-photos-a-a.akamaihd.net/hphotos-ak-xaf1/t51.2885-15/s150x150/e35/13649137_1547514802224163_950421795_n.jpg'
| /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid_db
OK
store-id=http://instagramcdn.squid.internal/t51.2885-15/s150x150/e35/13649137_1547514802224163_950421795_n.jpg
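
The rewrite the helper performs for this one storeid_db rule can be sketched
in a few lines — a hypothetical Python equivalent for illustration, not the
actual storeid_file_rewrite helper shipped with Squid:

```python
import re

# The single storeid_db rule from above, transcribed into Python regex syntax.
PATTERN = re.compile(
    r'^http://igcdn-.*\.akamaihd\.net/hphotos-ak-.*/(t5.*?)(?:\?|$)')

def store_id(url: str) -> str:
    """Return the normalized Store-ID for a URL, or the URL itself when the
    rule does not match (Squid then caches under the original URL)."""
    m = PATTERN.match(url)
    if m is None:
        return url
    return 'http://instagramcdn.squid.internal/' + m.group(1)
```

Both the igcdn-photos-c-a and igcdn-photos-a-a variants of the JPEG above map
to the same Store-ID, which is why the second request is expected to be a HIT.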



StoreId helper Statistics:
program: /usr/lib/squid/storeid_file_rewrite
number active: 10 of 50 (0 shutting down)
requests sent: 1755734
replies received: 1755734
queue length: 0
avg service time: 0 msec

Number of requests bypassed because all StoreId helpers were busy: 0





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Yet-another-store-id-question-HIT-MISS-tp4678972.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-17 Thread Omid Kosari
Matus UHLAR - fantomas wrote
> are you intercepting traffic for port 80 only?

yes



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678968.html


Re: [squid-users] Malformed HTTP on tproxy squid

2016-08-17 Thread Omid Kosari
Hi Eliezer,


Eliezer Croitoru-2 wrote
> If you know what domain or ip address causes and issue the first thing I
> can think about is bypassing the malicious traffic to allow other
> clients\users to reach the Internet.

The source IPs could be 70% of our customers, because it is a popular device,
so bypassing by source is not an option. There are too many destination IPs
and domains.

Unfortunately, because the requests are not normal HTTP, the Squid log does
not contain the destination URL/domain/IP, so finding them is hard work:
1. First, keep watching squid access.log to find a client making such
requests.
2. Then try to sniff that client's traffic from the router.
3. Separate the normal requests from the malformed ones.
4. Find the destinations of the malformed requests.
5. Put those IPs in a router ACL to exclude them from tproxy routing to Squid.

Nobody knows how many times this loop must be repeated, because nobody knows
the number of destinations.



Eliezer Croitoru-2 wrote
> And since squid is also being used as a http ACL enforcement tool
> malformed requests basically should be dropped and not bypassed
> automatically.

Then Squid should be able to simply drop them.
It would even be fine to have some iptables patterns, or something like
mod_security for Apache, published by the Squid gurus to prevent these kinds
of problems.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Malformed-HTTP-on-tproxy-squid-tp4678951p4678966.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-16 Thread Omid Kosari
Even one IP address sending fewer than 5 requests per second can push Squid's
CPU usage up to 30%, and 10 requests per second produced 100% CPU usage, while
nothing other than that client was going through Squid. The client's bandwidth
is less than 10 Kbps.

Isn't that crazy too?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678961.html


Re: [squid-users] Malformed HTTP on tproxy squid

2016-08-16 Thread Omid Kosari
Squid access.log and Wireshark PCAP attached:
access_(1).log
dump2.pcap



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Malformed-HTTP-on-tproxy-squid-tp4678951p4678952.html


[squid-users] Malformed HTTP on tproxy squid

2016-08-16 Thread Omid Kosari
According to my other post
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-td4678894.html


Squid CPU usage reaches 100% when it forwards a certain kind of malformed
HTTP traffic. Even one IP address sending fewer than 5 requests per second can
push Squid's CPU usage up to 30%.

We have found that these requests come from a cheap, popular satellite
receiver, www.starmax.co . Maybe it has been infected and has become a zombie
in a botnet.

Apart from the client type, my questions are:

Shouldn't Squid have a mechanism to defend against this type of problem? Isn't
it possible for Squid to simply ignore malformed HTTP requests?

Is there any workaround to prevent this problem?




Squid is in tproxy mode with routing

Ubuntu Linux 16.04 , 4.4.0-34-generic on x86_64
Squid Cache: Version 3.5.19 from debian repository


samples  %image name   symbol name
1532894  42.8190  libc-2.23.so _IO_strn_overflow
1028537  28.7306  libc-2.23.so _IO_default_xsputn
662802   18.5143  libc-2.23.so vfprintf
77019 2.1514  squid/usr/sbin/squid
28861 0.8062  libc-2.23.so __memset_sse2
26948 0.7528  r8169/r8169
25320 0.7073  libc-2.23.so __memcpy_sse2_unaligned
21712 0.6065  libc-2.23.so __GI___mempcpy
14918 0.4167  libc-2.23.so _int_malloc
8889  0.2483  nf_conntrack /nf_conntrack
8130  0.2271  libc-2.23.so __GI_strchr
6357  0.1776  libc-2.23.so _int_free
4152  0.1160  libc-2.23.so re_search_internal
4043  0.1129  libc-2.23.so strlen
2754  0.0769  libstdc++.so.6.0.21 
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
2753  0.0769  libc-2.23.so free
2704  0.0755  ip_tables/ip_tables
2560  0.0715  reiserfs /reiserfs
2332  0.0651  kallsyms ___slab_alloc
2284  0.0638  libc-2.23.so malloc_consolidate
2204  0.0616  libc-2.23.so malloc
2175  0.0608  kallsyms sys_epoll_ctl
2035  0.0568  kallsyms csum_partial_copy_generic
1614  0.0451  libc-2.23.so calloc
1552  0.0434  kallsyms _raw_spin_lock
1208  0.0337  kallsyms memcpy
1203  0.0336  kallsyms nf_iterate
1177  0.0329  kallsyms irq_entries_start
1165  0.0325  kallsyms __fget
1072  0.0299  kallsyms copy_user_generic_string
1037  0.0290  kallsyms __alloc_skb
1002  0.0280  kallsyms tcp_sendmsg
945   0.0264  libc-2.23.so build_upper_buffer
875   0.0244  kallsyms kmem_cache_free
873   0.0244  kallsyms tcp_rack_mark_lost
868   0.0242  nf_nat_ipv4  /nf_nat_ipv4
861   0.0241  kallsyms kfree
837   0.0234  kallsyms __inet_lookup_established
834   0.0233  kallsyms get_partial_node.isra.61
825   0.0230  kallsyms __slab_free
815   0.0228  kallsyms sock_poll
810   0.0226  kallsyms skb_release_data
802   0.0224  nf_conntrack_ipv4/nf_conntrack_ipv4
792   0.0221  kallsyms tcp_transmit_skb
771   0.0215  kallsyms kmem_cache_alloc
719   0.0201  kallsyms fib_table_lookup
704   0.0197  kallsyms _raw_spin_lock_irqsave
701   0.0196  kallsyms tcp_v4_rcv
699   0.0195  libm-2.23.so __ieee754_log_avx
686   0.0192  nf_nat   /nf_nat
684   0.0191  kallsyms tcp_write_xmit
674   0.0188  kallsyms __cmpxchg_double_slab.isra.44
626   0.0175  kallsyms __netif_receive_skb_core
621   0.0173  libnettle.so.6.2
/usr/lib/x86_64-linux-gnu/libnettle.so.6.2
608   0.0170  kallsyms delay_tsc
600   0.0168  kallsyms ksize
595   0.0166  kallsyms tcp_ack
592   0.0165  kallsyms __local_bh_enable_i



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Malformed-HTTP-on-tproxy-squid-tp4678951.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-15 Thread Omid Kosari
One of the server IP addresses I have found belongs to Cloudflare. Cloudflare
does not accept anything other than HTTP on port 80, so this looks like an
attack against some servers. Maybe our clients are infected and acting as
zombies.

Does anyone know a good way to defend Squid? When Squid forwards these
requests it goes crazy.

I managed to create some iptables rules on the Squid box to accept only the
HTTP protocol, but I know this will have at least 2 problems:
1. Performance will be degraded.
2. Some sites/apps may have problems.

Any suggestions?
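
The kind of check such a filter would perform can be sketched as follows — a
hypothetical pre-filter that accepts only data beginning with an HTTP/1.x
request line, matching the telnet experiments earlier in this thread; it is
not part of Squid or of any iptables rule set:

```python
import re

# Accept only payloads whose first line looks like an HTTP/1.x request line,
# e.g. "GET / HTTP/1.1". Anything else is treated as malformed.
REQUEST_LINE = re.compile(rb'^[A-Z]+ \S+ HTTP/1\.[01]\r?\n')

def looks_like_http(first_bytes: bytes) -> bool:
    """Return True if the start of a connection's data resembles HTTP."""
    return REQUEST_LINE.match(first_bytes) is not None
```

The caveats above still apply: inspecting every connection's first packet
costs performance, and any app speaking a non-standard dialect on port 80
would be cut off.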



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678937.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-14 Thread Omid Kosari
My investigation shows that even 1 randomly chosen IP address makes Squid's
CPU usage about 30%.
I chose that IP address based on users with TAG_NONE/400 errors.

I have found that one kind of request makes a loop in Squid. Wireshark shows
an infinite loop of

X-Squid-Error: ERR_INVALID_REQ 0

and

X-Squid-Error: ERR_INVALID_URL 0

which causes the high CPU usage.

Please find the attachments; the last file was edited to remove personal info:
squid-access-log.JPG
squid-access-log2.JPG
squid-problem.squid-problem



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678931.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-13 Thread Omid Kosari
debug_options ALL,1



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678901.html


Re: [squid-users] Squid cpu usage 100% from few days ago !!

2016-08-13 Thread Omid Kosari
The bandwidth was about 120 Mbps to each Squid box, but now even 10 Mbps
causes 100% CPU usage.

With 10% of users:
Average HTTP requests per minute since start:   2355.3

16 GB of RAM and an i3-2100 CPU @ 3.10GHz, 4 cores, and NO SMP, as before.

It seems like an attack to/from our clients to/from the internet which drives
Squid crazy.

The profiling result is also attached at the end of my first post.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894p4678899.html


[squid-users] Squid cpu usage 100% from few days ago !!

2016-08-13 Thread Omid Kosari
Hello,

Recently, 2 different Squid boxes grew from ~40% CPU usage to 100% without
any changes to config/bandwidth/number of clients/etc.

The problem forced me to bypass Squid until the cause is found.
Right now even 10% of the users can push Squid to 100%.

Info 

Squid is in tproxy mode with routing

Ubuntu Linux 16.04 , 4.4.0-34-generic on x86_64
Squid Cache: Version 3.5.19 from debian repository


samples  %image name   symbol name
1532894  42.8190  libc-2.23.so _IO_strn_overflow
1028537  28.7306  libc-2.23.so _IO_default_xsputn
662802   18.5143  libc-2.23.so vfprintf
77019 2.1514  squid/usr/sbin/squid
28861 0.8062  libc-2.23.so __memset_sse2
26948 0.7528  r8169/r8169
25320 0.7073  libc-2.23.so __memcpy_sse2_unaligned
21712 0.6065  libc-2.23.so __GI___mempcpy
14918 0.4167  libc-2.23.so _int_malloc
8889  0.2483  nf_conntrack /nf_conntrack
8130  0.2271  libc-2.23.so __GI_strchr
6357  0.1776  libc-2.23.so _int_free
4152  0.1160  libc-2.23.so re_search_internal
4043  0.1129  libc-2.23.so strlen
2754  0.0769  libstdc++.so.6.0.21 
/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21
2753  0.0769  libc-2.23.so free
2704  0.0755  ip_tables/ip_tables
2560  0.0715  reiserfs /reiserfs
2332  0.0651  kallsyms ___slab_alloc
2284  0.0638  libc-2.23.so malloc_consolidate
2204  0.0616  libc-2.23.so malloc
2175  0.0608  kallsyms sys_epoll_ctl
2035  0.0568  kallsyms csum_partial_copy_generic
1614  0.0451  libc-2.23.so calloc
1552  0.0434  kallsyms _raw_spin_lock
1208  0.0337  kallsyms memcpy
1203  0.0336  kallsyms nf_iterate
1177  0.0329  kallsyms irq_entries_start
1165  0.0325  kallsyms __fget
1072  0.0299  kallsyms copy_user_generic_string
1037  0.0290  kallsyms __alloc_skb
1002  0.0280  kallsyms tcp_sendmsg
945   0.0264  libc-2.23.so build_upper_buffer
875   0.0244  kallsyms kmem_cache_free
873   0.0244  kallsyms tcp_rack_mark_lost
868   0.0242  nf_nat_ipv4  /nf_nat_ipv4
861   0.0241  kallsyms kfree
837   0.0234  kallsyms __inet_lookup_established
834   0.0233  kallsyms get_partial_node.isra.61
825   0.0230  kallsyms __slab_free
815   0.0228  kallsyms sock_poll
810   0.0226  kallsyms skb_release_data
802   0.0224  nf_conntrack_ipv4/nf_conntrack_ipv4
792   0.0221  kallsyms tcp_transmit_skb
771   0.0215  kallsyms kmem_cache_alloc
719   0.0201  kallsyms fib_table_lookup
704   0.0197  kallsyms _raw_spin_lock_irqsave
701   0.0196  kallsyms tcp_v4_rcv
699   0.0195  libm-2.23.so __ieee754_log_avx
686   0.0192  nf_nat   /nf_nat
684   0.0191  kallsyms tcp_write_xmit
674   0.0188  kallsyms __cmpxchg_double_slab.isra.44
626   0.0175  kallsyms __netif_receive_skb_core
621   0.0173  libnettle.so.6.2
/usr/lib/x86_64-linux-gnu/libnettle.so.6.2
608   0.0170  kallsyms delay_tsc
600   0.0168  kallsyms ksize
595   0.0166  kallsyms tcp_ack
592   0.0165  kallsyms __local_bh_enable_ip




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-cpu-usage-100-from-few-days-ago-tp4678894.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-25 Thread Omid Kosari
Following config in squid does not log anything:

logformat nfmark %ts.%03tu %6tr %>a %Ss/%03>Hs %nfmark %[...]


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678676.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-25 Thread Omid Kosari
Hi,

Thanks for the support.

recently i have seen a problem with version beta 0.2 . when fetcher is
working the kernel logs lots of following error
TCP: out of memory -- consider tuning tcp_mem

I think the problem is about orphaned connections which i mentioned before .
Managed to try new version to see what happens.

Also I have a feature request: please provide a configuration file, for
example in /etc/foldername or even beside the binaries, with selective
options for both the fetcher and the logger.

I have seen following change log
beta 0.3 - 19/07/2016
+ Upgraded the fetcher to honour private and no-store cache-control headers
when fetching objects.

From my point of view more hits are better, and there is no problem storing
private and no-store objects if that helps achieve more hits and bandwidth
savings. So it would be good to have an option in the mentioned config file
so I can change this myself.

Thanks again



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678669.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-21 Thread Omid Kosari
Amos Jeffries wrote
> Note that it is the connection CONNMARK value not the packet MARK value
> that is copied.

Can you confirm my iptables rules ?

iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
iptables -t mangle -A OUTPUT -p tcp -d 127.0.0.1,1.1.1.12 --sport 8080 -j MARK --set-mark 0x30
iptables -t mangle -A OUTPUT -j CONNMARK --save-mark
iptables -t mangle -A OUTPUT -m mark --mark 0x30 -j LOG --log-prefix "connmark 0x30: "

If yes, then with the following squid.conf:

qos_flows tos local-hit=0x30
qos_flows tos sibling-hit=0x30
qos_flows tos parent-hit=0x30
qos_flows mark

should squid now send content from the port-8080 peer to clients with TOS
0x30? If I am wrong, please describe squid's behaviour. Thanks.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678633.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-21 Thread Omid Kosari
Amos Jeffries wrote
> 2) Squid can do pass-thru using Netfilter MARK flags. Each squid.conf
> directive that deals with TOS has both a 'tos' and a 'mark' variant. The
> 'mark' ones are able to pass-thru these netfilter markings the way you
> want.
> 
> However, since netfilter marks are local to the one machine and not
> transmitted externally. You need to use iptables rules to convert
> received TOS/DSCP values into local MARK values on packets arriving, and
> the reverse translation for packets leaving the machine.
> 
> IIRC there were some gotchas involved. I do remember specifically that
> the TOS needed to be converted to CONNMARK (not MARK) in mangle or
> earlier. Then the NF MARK values sync'd with CONNMARK at some stage just
> after that (sorry my memory of that particular bit is long gone). The
> sync'd NF MARK is what gets passed between Squid and the kernel.
> 
> It is a bit clumsy and annoying, but without any kernel API to receive
> the TOS/DSCP values on incoming packets it is what it is.
> 
> 
> Amos

First I am going to do it on the same server, which may be simpler and avoids
having to convert to/from TOS.

I have following iptables log

 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=4148 TOS=0x00 PREC=0x00 TTL=64
ID=57642 DF PROTO=TCP SPT=8080 DPT=12513 WINDOW=1495 RES=0x00 ACK PSH URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=29780 TOS=0x00 PREC=0x00 TTL=64
ID=57643 DF PROTO=TCP SPT=8080 DPT=12513 WINDOW=1495 RES=0x00 ACK PSH URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=32820 TOS=0x00 PREC=0x00 TTL=64
ID=57644 DF PROTO=TCP SPT=8080 DPT=12513 WINDOW=1495 RES=0x00 ACK PSH URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=32820 TOS=0x00 PREC=0x00 TTL=64
ID=57645 DF PROTO=TCP SPT=8080 DPT=12513 WINDOW=1495 RES=0x00 ACK PSH URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64
ID=16894 DF PROTO=TCP SPT=12513 DPT=8080 WINDOW=4671 RES=0x00 ACK URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=32820 TOS=0x00 PREC=0x00 TTL=64
ID=57646 DF PROTO=TCP SPT=8080 DPT=12513 WINDOW=1495 RES=0x00 ACK PSH URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=6700 TOS=0x00 PREC=0x00 TTL=64
ID=57647 DF PROTO=TCP SPT=8080 DPT=12513 WINDOW=1495 RES=0x00 ACK PSH URGP=0
MARK=0x30 
 IN= OUT=lo SRC=127.0.0.1 DST=127.0.0.1 LEN=52 TOS=0x00 PREC=0x00 TTL=64
ID=16895 DF PROTO=TCP SPT=12513 DPT=8080 WINDOW=4598 RES=0x00 ACK URGP=0
MARK=0x30 

Now please provide squid config side .

Thanks




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678630.html


[squid-users] Squid determine peer content to routers

2016-07-19 Thread Omid Kosari
Following up on my previous emails, I have created this topic to summarize my
need.

Squid has peer config as follow

acl wu dstdom_regex \.download\.windowsupdate\.com$ \.download\.microsoft\.com$
acl wu-rejects dstdom_regex stats
acl GET method GET
cache_peer 1.1.1.14 parent 8080 0 proxy-only no-tproxy no-digest no-query no-netdb-exchange name=ms1
cache_peer_access ms1 allow GET wu !wu-rejects
cache_peer_access ms1 deny all
never_direct allow GET wu !wu-rejects
never_direct deny all

The peer software is a web service from
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-td4678454.html

So far so good. The problem begins here.

Currently we exclude cache hits (based on TOS value) from our customers'
reserved bandwidth. For example, you have a 150Mbps internet link from our
company and we enforce that limit on our QoS routers, but we exclude cache
hits from your 150M, so you may get more than that while downloading
content that hits our cache.

The peer software is a web service and is not aware of hit/miss or TOS/DSCP,
so I need to do a trick with squid's help.

Squid (or iptables, linux, etc.), which knows which content goes to/comes
from the peer, should mark that content with a DSCP value and send it
towards the routers.
The router sees that DSCP and excludes the traffic from the user's limit.
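A minimal squid.conf sketch of one way to do that marking, using the qos_flows directive discussed elsewhere in this archive (0x60 is an assumed DSCP/TOS value; adjust it to whatever your routers match on):

```
# Mark responses served from a parent cache_peer before they leave squid
qos_flows tos parent-hit=0x60
```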

Thanks




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-determine-peer-content-to-routers-tp4678586.html


Re: [squid-users] rep_header not working

2016-07-19 Thread Omid Kosari
Amos Jeffries wrote
> Sure, if I can assist with that I will do so in a reply to that thread.

Thanks. My bad: you had replied to my last email in that thread, but the main
problem was in earlier emails. I will now describe it in a new topic.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561p4678585.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-19 Thread Omid Kosari
Amos Jeffries wrote
> On 18/07/2016 8:05 p.m., Omid Kosari wrote:
>> Maybe i should describe more .
>> The port 8080 is a parent peer of squid . It is
>> http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-td4678454.html
>> 
>> squid config is 
>> 
>> acl wu dstdom_regex \.download\.windowsupdate\.com$
>> acl wu-rejects dstdom_regex stats
>> acl GET method GET
>> cache_peer 127.0.0.1 parent 8080 0 proxy-only no-tproxy no-digest
>> no-query
>> no-netdb-exchange name=ms1
>> cache_peer_access ms1 allow GET wu !wu-rejects
>> cache_peer_access ms1 deny all
>> never_direct allow GET wu !wu-rejects
>> never_direct deny all
>> 
>> and
>> 
>> iptables -t mangle -A OUTPUT -p tcp -m tcp -d
>> 127.0.0.1,192.168.1.1,192.168.1.2 --sport 8080 -j DSCP --set-dscp 0x60
>> 
>> Now with this iptables rule i want to change the dscp of packets which
>> comes
>> from parent peer to squid . Then squid preserve that dscp and send it to
>> clients . With my description will everything work as i want ?
> 
> That is a clearer description. Thanks
> 
> Your answer is:  No. There are kernel patches required to allow Squid to
> load the DSCP TOS marking from *incoming* packets from the peer.
> 
> Last I heard those patches were not accepted into the kernel, no longer
> being maintained and no recent Linux kernel is compatible with them. You
> might be lucky and find out otherwise, but I am doubtful.
> 
> There are two alternatives though:
> 
>  1) your above iptables rule is no different in behaviour on the
> outgoing traffic side of Squid from what "qos_flows tos parent-hit=0x60"
> should be doing.
> 
> So modulo bugs, there is no need to do anything with TOS on incoming
> because Squid cache_peer line has the info saying that traffic was from
> a parent (a versus any random connection marked with DSCP 0x60 inbound).
> Data from the parent always arrives over connections associated by Squid
> with that cache_peer config.
> 
> 
> 2) Squid can do pass-thru using Netfilter MARK flags. Each squid.conf
> directive that deals with TOS has both a 'tos' and a 'mark' variant. The
> 'mark' ones are able to pass-thru these netfilter markings the way you
> want.
> 
> However, since netfilter marks are local to the one machine and not
> transmitted externally. You need to use iptables rules to convert
> received TOS/DSCP values into local MARK values on packets arriving, and
> the reverse translation for packets leaving the machine.
> 
> IIRC there were some gotchas involved. I do remember specifically that
> the TOS needed to be converted to CONNMARK (not MARK) in mangle or
> earlier. Then the NF MARK values sync'd with CONNMARK at some stage just
> after that (sorry my memory of that particular bit is long gone). The
> sync'd NF MARK is what gets passed between Squid and the kernel.
> 
> It is a bit clumsy and annoying, but without any kernel API to receive
> the TOS/DSCP values on incoming packets it is what it is.
> 
> 
> Amos
> 

About alternative 1: simpler English please; I could not understand what you
were saying.

About 2: it seems painful. I hope the other threads solve the problem.

Thanks



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678582.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-19 Thread Omid Kosari
Eliezer Croitoru-2 wrote
> Hey Omid,
> 
> Indeed my preference is that if you can ask ask and I will try to give you
> couple more details on the service and the subject.

Hey Eliezer,

1. I have refresh patterns from before your code. Currently I prefer not to
store windows updates in squid's internal storage, to avoid duplication. Now
what should I do: delete this refresh pattern, or even create a pattern not
to cache windows updates?

refresh_pattern -i (microsoft|windowsupdate)\.com/.*?\.(cab|exe|dll|ms[iuf]|asf|wm[va]|dat|zip|iso|psf)$ 10080 100% 172800 ignore-no-store ignore-reload ignore-private ignore-must-revalidate override-expire override-lastmod

2. Does the position of your squid config matter for avoiding logical
conflicts? For example, should it go before the refresh patterns above, to
prevent duplication?

acl wu dstdom_regex \.download\.windowsupdate\.com$
acl wu-rejects dstdom_regex stats
acl GET method GET
cache_peer 127.0.0.1 parent 8080 0 proxy-only no-tproxy no-digest no-query no-netdb-exchange name=ms1
cache_peer_access ms1 allow GET wu !wu-rejects
cache_peer_access ms1 deny all
never_direct allow GET wu !wu-rejects
never_direct deny all

3. Is it a good idea to change your squid config as below, to get more hits?
Or maybe it is a big mistake!

acl msip dst 13.107.4.50
acl wu dstdom_regex \.download\.windowsupdate\.com$ \.download\.microsoft\.com$
acl wu-rejects dstdom_regex stats
acl GET method GET
cache_peer 127.0.0.1 parent 8080 0 proxy-only no-tproxy no-digest no-query no-netdb-exchange name=ms1
cache_peer_access ms1 allow GET wu !wu-rejects
cache_peer_access ms1 allow GET msip !wu-rejects
cache_peer_access ms1 deny all
never_direct allow GET wu !wu-rejects
never_direct allow GET msip !wu-rejects
never_direct deny all

4. Current storage capacity is 500G; more than 50% of it is full and growing
fast. Is there any garbage-collection mechanism in your code? If not, is it
a good idea to remove files based on last access time (ls -ltu
/cache1/body/v1/)? Should I also delete the matching old files from the
header and request folders?
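Regarding point 4, a minimal sketch of a last-access-time prune, assuming the /cache1/{body,header,request}/v1 layout above and that the three folders share basenames per object (an assumption on my part, not verified against the store's code):

```shell
#!/bin/sh
# prune_cache ROOT DAYS: delete body files not accessed in DAYS days,
# plus the header/request files carrying the same basename.
prune_cache() {
    root="${1:-/cache1}"
    days="${2:-30}"
    [ -d "$root/body/v1" ] || return 0    # nothing to do
    find "$root/body/v1" -type f -atime +"$days" | while read -r body; do
        name=$(basename "$body")
        rm -f "$body" "$root/header/v1/$name" "$root/request/v1/$name"
    done
}

prune_cache /cache1 30
```

Note that on relatime/noatime mounts the access times may be stale, so mtime may be a safer criterion in practice.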




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678581.html


Re: [squid-users] rep_header not working

2016-07-19 Thread Omid Kosari
Amos Jeffries wrote
> On 19/07/2016 5:48 p.m., Omid Kosari wrote:
>> Amos Jeffries wrote
>>> On 19/07/2016 2:42 a.m., Omid Kosari wrote:
>>>> Hello,
>>>>
>>>> It seems rep_header does not work at all.
>>>>
>>>> acl mshit rep_header X-SHMSCDN .
>>>> acl mshit rep_header Content-Type -i text\/html
>>>> acl html rep_header Content-Type -i ^text\/html
>>>> acl apache rep_header Server ^Apache
>>>> debug_options 28,3
>>>>
>>>
>>> If thats all you put in the config, theres nothing telling Squid when to
>>> use the ACL.
>>>
>>> PS. the other thread where you posted better details of the problem and
>>> config has already been answered, so I wont repeat the details here.
>>>
>> 
>> I thought acl should match even if nothing to do with it . ok .
>> 
>> now
>> #acl mshit rep_header X-SHMSCDN HIT
>> #acl mshit rep_header X-SHMSCDN .
>> acl mshit rep_header X-Shmscdn -i HIT
>> acl testip src 192.168.1.10
>> http_access deny testip mshit
>> 
>> Maybe the problem is  "any of the known reply headers" as Eliezer
>> mentioned
>> in other thread . If so what is the meaning of  known (please refer me to
>> source file in squid to not ask more questions about it :) ) ? Also is
>> there
>> a way to work with unknown headers ?
>> 
> 
> The rep_header ACL code is at [1] which indicates the match()'ing
> function is the generic HTTP headers matching function from [2], applied
> to the HTTP reply object headers.
> 
> [1]
> http://bazaar.launchpad.net/~squid/squid/trunk/view/head:/src/acl/HttpRepHeader.cc;
> 
> [2]
> http://bazaar.launchpad.net/~squid/squid/trunk/view/head:/src/acl/HttpRepHeader.cc;
> 
> I see in [2] that both registered header ID (aka "known headers") and
> by-name (custom header lookup) are tested. So your ACL should be
> locating the custom header *if* it exists in the relevant reply headers.
> 
> That 'if' is important, the HTTP state is not always what one thinks it
> is. As demonstrated by the *real* traffic flow in my first reply to the
> "Wrong req_header result in cache_peer_access when using ssl_bump" thread.
> 
> Amos
> 

If I understand correctly, you mean the rule should work with custom headers,
but the problem is that squid is not at the right place to see that header.

May I ask you to please help me solve the problem from this other thread:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-td4678454.html

I think you now know what my problem is. The preferred way is via rep_header
and clientside_tos, if possible.
Right now, with Eliezer's help, I have injected a custom header into the
static header files. Eliezer's code (the peer on port 8080) successfully
sends that header to clients and to squid (I don't know how to be sure of
that; it's the important "if").

Is it possible to use rep_header and clientside_tos with each other? (Alex
says no in the other thread, but he is not deeply aware of my needs.)
If yes, how can squid be made aware of a rep_header from the peer?

Knocking my head to wall :(

Thanks ,



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561p4678580.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-19 Thread Omid Kosari
Also I have seen that another person did something like this successfully
(not exactly the same) in this thread:
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-hit-miss-and-reject-td4661928.html



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678574.html


Re: [squid-users] rep_header not working

2016-07-19 Thread Omid Kosari
Amos Jeffries wrote
> On 19/07/2016 2:42 a.m., Omid Kosari wrote:
>> Hello,
>> 
>> It seems rep_header does not work at all.
>> 
>> acl mshit rep_header X-SHMSCDN .
>> acl mshit rep_header Content-Type -i text\/html
>> acl html rep_header Content-Type -i ^text\/html
>> acl apache rep_header Server ^Apache
>> debug_options 28,3
>> 
> 
> If thats all you put in the config, theres nothing telling Squid when to
> use the ACL.
> 
> PS. the other thread where you posted better details of the problem and
> config has already been answered, so I wont repeat the details here.
> 
> Amos
> 

I thought an acl would be evaluated even if nothing uses it. OK.

now
#acl mshit rep_header X-SHMSCDN HIT
#acl mshit rep_header X-SHMSCDN .
acl mshit rep_header X-Shmscdn -i HIT
acl testip src 192.168.1.10
http_access deny testip mshit

Maybe the problem is "any of the known reply headers", as Eliezer mentioned
in the other thread. If so, what does "known" mean? (Please refer me to the
source file in squid so I don't have to ask more questions about it :) )
Also, is there a way to work with unknown headers?

Thanks





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561p4678573.html


Re: [squid-users] rep_header not working

2016-07-19 Thread Omid Kosari
Eliezer Croitoru-2 wrote
> Well I cannot say a thing until I will study the subject.
> One thing I was thinking about was:
> Can you analyze the squid access.log and to reduce from the account\user
> the HIT traffic?
> If so then I can recommend some log format special log to give you the
> needed details.
> 
> Eliezer

Because of the high traffic and the performance penalty, we have disabled
access.log. BTW, is it possible to parse the logs and make them TOS/DSCP
compatible? As I said before, the only way squid and the QoS routers can
talk is DSCP.

Thanks




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561p4678572.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-18 Thread Omid Kosari
Alex Rousskov wrote
> On 07/18/2016 05:39 AM, Omid Kosari wrote:
> 
>> acl mshit rep_header X-SHMSCDN HIT
>> clientside_tos 0x30 mshit
> 
> You cannot use response-based ACLs like rep_header with clientside_tos.
> That directive is currently evaluated only at request processing time,
> before there is a response.
> 
>> 2016/07/18 16:26:31.927 kid1| WARNING: mshit ACL is used in context
>> without
>> an HTTP response. Assuming mismatch.
> 
> ... which is what Squid is trying to tell you.
> 
> 
> HTH,
> 
> Alex.
> 

Apart from that, can you confirm that we may use a custom header with
rep_header?
Also, the problem is that the mshit acl does not match at all.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678566.html


Re: [squid-users] rep_header not working

2016-07-18 Thread Omid Kosari
Hey Eliezer,

I am aware of that sentence; I have read it carefully. But as you can see,
even the apache or html acl does not work.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561p4678565.html


[squid-users] rep_header not working

2016-07-18 Thread Omid Kosari
Hello,

It seems rep_header does not work at all.

acl mshit rep_header X-SHMSCDN .
acl mshit rep_header Content-Type -i text\/html
acl html rep_header Content-Type -i ^text\/html
acl apache rep_header Server ^Apache
debug_options 28,3

Other types of acl work fine.

The log is very large because of the thousands of clients.

Squid Object Cache: Version 3.5.19 Official Debian Package
Ubuntu Linux 16.04  4.4.0-28-generic on x86_64



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/rep-header-not-working-tp4678561.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-18 Thread Omid Kosari
Dear Eliezer,

Unfortunately, no success. I will describe what I did; maybe I missed
something.

I ran the command

perl -pi -e '$/=""; s/\r\n\r\n/\r\nX-SHMSCDN: HIT\r\n\r\n/;' /cache1/header/v1/*

and verified that the text was injected correctly.

squid config

acl mshit rep_header X-SHMSCDN HIT
clientside_tos 0x30 mshit

but got the following popular log
2016/07/18 16:26:31.927 kid1| WARNING: mshit ACL is used in context without
an HTTP response. Assuming mismatch.
2016/07/18 16:26:31.927 kid1| 28,3| Acl.cc(158) matches: checked: mshit = 0


One more thing: as I am not so familiar with perl, may I ask you to please
edit the one-liner to skip files which already contain the injected text?
Thanks




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678557.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-18 Thread Omid Kosari
Maybe I should describe this in more detail.
The port 8080 is a parent peer of squid . It is
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-td4678454.html

squid config is 

acl wu dstdom_regex \.download\.windowsupdate\.com$
acl wu-rejects dstdom_regex stats
acl GET method GET
cache_peer 127.0.0.1 parent 8080 0 proxy-only no-tproxy no-digest no-query no-netdb-exchange name=ms1
cache_peer_access ms1 allow GET wu !wu-rejects
cache_peer_access ms1 deny all
never_direct allow GET wu !wu-rejects
never_direct deny all

and

iptables -t mangle -A OUTPUT -p tcp -m tcp -d 127.0.0.1,192.168.1.1,192.168.1.2 --sport 8080 -j DSCP --set-dscp 0x60

Now with this iptables rule I want to change the DSCP of packets which come
from the parent peer to squid, and then have squid preserve that DSCP and
send it on to clients. With my description, will everything work as I want?

Thanks





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678547.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-17 Thread Omid Kosari
Apart from the previous email, maybe this is a bug or maybe not, but the
fetcher does not release open files/sockets; its number of open files just
grows. Currently I have added 'ulimit 65535' at line 4 of fetch-task.sh to
see what happens; before that, the process was being killed.
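A side note (my observation, worth verifying): in POSIX shells a bare `ulimit 65535` sets the maximum file size (the default -f option), not the descriptor limit; the open-files limit needs -n. A sketch of what that line could look like (fetch-task.sh is from the thread, not verified):

```shell
#!/bin/sh
# Raise the open-file-descriptor limit for this script and its children;
# ignore the error if the hard limit is lower than requested.
ulimit -n 65535 2>/dev/null || true
echo "open files limit: $(ulimit -n)"
```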



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678536.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-17 Thread Omid Kosari
Let's assume all of the parent's replies are hits. Now is there a way?

iptables -t mangle -A OUTPUT -p tcp -m tcp -d 192.168.1.1,192.168.1.2 --sport 8080 -j DSCP --set-dscp 0x60

Is this OK? (Note: the original had "-t mangle" twice; iptables rejects a duplicate table option.)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678534.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-17 Thread Omid Kosari
It looks like the guy there is having the same request as I have. 

http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-td4600931.html



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678532.html


Re: [squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2016-07-17 Thread Omid Kosari
Did you find any solution? I have the same problem and am looking for one.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4678531.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-15 Thread Omid Kosari
Hi,

Questions:
1. What happens if the disk or partition becomes full?
2. Is there a way to use more than one storage location?
3. Currently hits from your code cannot be counted. How can I use qos_flows
tos to mark those hits?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678524.html


Re: [squid-users] Windows Updates a Caching Stub zone, A windows updates store.

2016-07-14 Thread Omid Kosari
Hi,

Great idea. I was looking for something like this for years and was too lazy
to start it myself ;)

I am going to test your code in a multi-thousand-client ISP.

It would be even better to reuse the experience of http://www.wsusoffline.net/,
especially for your fetcher. It is GPL.

Also, the IP address 13.107.4.50 is mainly used by microsoft for its download
services. With services like
https://www.virustotal.com/en-gb/ip-address/13.107.4.50/information/ we have
found that other domains are also used for update/download services. It might
be worthwhile to create special handling for this IP address.

Thanks in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4678492.html


Re: [squid-users] time based range_offset_limit

2016-07-13 Thread Omid Kosari
Amos Jeffries wrote
> Though be aware that Squid being Internet software operates using UTC
> timezone. Not local wall-time.

After much trial and error I can confirm that the time is NOT UTC; it is
local time!!
Please recheck.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/time-based-range-offset-limit-tp4678462p4678481.html


Re: [squid-users] time based range_offset_limit

2016-07-13 Thread Omid Kosari
Amos Jeffries wrote
> Though be aware that Squid being Internet software operates using UTC
> timezone. Not local wall-time.

Good point, thanks.


Amos Jeffries wrote
> Whether that behaviour will "work" for whatever your problem actually is
> nobody knows, because you did not state what the problem you are
> attempting to solve is.
> 
> Amos

As you know, this IP relates to microsoft downloads and updates. I want to
use free-bandwidth times to let squid fully download large updates and serve
them at peak times, as a workaround for chunked (range) downloads.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/time-based-range-offset-limit-tp4678462p4678478.html


[squid-users] assertion failed: DestinationIp.cc:41: "checklist->conn() && checklist->conn()->clientConnection != NULL"

2016-07-12 Thread Omid Kosari
Hello,

squid crashes after the following error:
assertion failed: DestinationIp.cc:41: "checklist->conn() && checklist->conn()->clientConnection != NULL"


From the error message I guess that the following config may cause the problem:

#acl download_until_end_by_ip dst 13.107.4.50
acl freetimes time 03:00-08:00
#range_offset_limit none download_until_end_by_ip freetimes

As you can see, I have commented out the first and third lines to see what
happens. It is still too early to be sure, but since commenting out those
lines the problem has not recurred. Maybe a bug!

Squid Version 3.5.12 (distribution default package)
Ubuntu 16.04 Linux 4.4.0-28-generic on x86_64





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/assertion-failed-DestinationIp-cc-41-checklist-conn-checklist-conn-clientConnection-NULL-tp4678464.html


[squid-users] time based range_offset_limit

2016-07-12 Thread Omid Kosari
Hello,

I want to have "range_offset_limit none" for a specific ACL at specific times.
The config is as follows, and squid -k parse/check does not show any error.

acl download_until_end_by_ip dst 13.107.4.50
acl freetimes time 03:00-08:00
range_offset_limit none download_until_end_by_ip freetimes

Could somebody please confirm that it is correct and should work?

Squid Version 3.5.12

Thanks



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/time-based-range-offset-limit-tp4678462.html


Re: [squid-users] Hypothetically comparing SATA\SAS to NAS\SAN for squid.

2015-02-03 Thread Omid Kosari
The only reason to extend is more capacity.
Currently there is no problem with the current setup except capacity.
I could replace each SSD with a new 500 GB drive, which would double the
capacity, but that is still not enough, and the old SSDs would become
unusable. So I prefer a long-term solution like a NAS.


The current spec of the Squid boxes is a Core i3 (with the current 3.1.20
version only one core is utilized) and 16 GB of RAM. So far so good.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Hypothetically-comparing-SATA-SAS-to-NAS-SAN-for-squid-tp4664350p4669531.html


Re: [squid-users] Hypothetically comparing SATA\SAS to NAS\SAN for squid.

2015-02-03 Thread Omid Kosari
How can we test this?
Which protocol is suggested for Squid? NFS, iSCSI, ... ?

Apart from bandwidth, is there any important difference between 1 Gbit
Ethernet and 10 Gbit? Do you suggest I buy 1 Gbit storage and monitor it,
or do you think the money would be wasted?

Any news about this REALLY interesting thread?

@Eliezer, any benchmarks?

This topic is very important to me.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Hypothetically-comparing-SATA-SAS-to-NAS-SAN-for-squid-tp4664350p4669494.html


Re: [squid-users] Squid 3.4.9 RPM release

2014-12-01 Thread Omid Kosari
Any news about the Ubuntu version?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-4-9-is-available-tp4668181p4668576.html


Re: [squid-users] High cpu usage by re_search_internal

2014-10-06 Thread Omid Kosari
Thanks for the tip.
1. Is there any way to detect the current LANG without needing to restart
Squid?
2. Is there any way to put that setting inside /etc/init/squid.conf? How?

Thanks again .
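For question 2, here is a hedged sketch of what the change could look like,
assuming Ubuntu's Upstart-based squid package where /etc/init/squid.conf is
the Upstart job definition (the rest of the file's contents vary by package
version and are not shown):

```
# /etc/init/squid.conf (Upstart job) -- only the added stanza is shown.
# "env" exports an environment variable to the spawned process, so
# setting the C locale here avoids locale-aware regex slowdowns.
env LANG=C
```

After editing the job file, the service would still need a restart for the
new environment to take effect, since a running process does not re-read
its environment.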



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/High-cpu-usage-by-re-search-internal-tp4667550p4667700.html