Re: [squid-users] Transparent Squid on OpenVZ container?

2013-01-03 Thread Eliezer Croitoru

Tproxy or intercept\transparent?

Eliezer

On 1/1/2013 11:28 AM, Tim Bates wrote:

On 1/01/2013 4:18 PM, Amos Jeffries wrote:

On 30/12/2012 10:55 p.m., Tim Bates wrote:

Has anyone had experiences with running Squid *transparently* on an
OpenVZ container in combination with a Cisco router?
Can it be done?
Is there anything to watch out for, or any tricks?

TB


Which definition of "transparent" are you trying to achieve?


Just HTTP redirected from a Cisco router. I have tried doing this from a
Linux iptables-based firewall in the past and had issues.

I tried a quick test last night and it seemed to work fine with no
tricks. But I'd still like to know if there's anything odd to watch out
for.

TB


Re: [squid-users] Transparent Mode and WCCP

2013-01-03 Thread Eliezer Croitoru

Hey,

I have found this:
http://kb.fortinet.com/kb/viewContent.do?externalId=FD30096

which pretty much covers what needs to be done.

WCCP is supposed to be a layer 2 interception, and TPROXY is the closest
thing to that.


TPROXY uses the client's source IP for the outgoing traffic of each
client connection.
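
For reference, the usual TPROXY plumbing on Linux looks something like this (a minimal sketch along the lines of the wiki example linked later in this thread; the port and mark values are common defaults, not anything taken from this setup):

   # route packets marked by the TPROXY rule to the local machine
   ip rule add fwmark 1 lookup 100
   ip route add local 0.0.0.0/0 dev lo table 100
   # divert port 80 traffic to squid's tproxy port without NAT
   iptables -t mangle -A PREROUTING -p tcp --dport 80 \
     -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
   # and in squid.conf:
   http_port 3129 tproxy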


You can try to configure the FortiGate device, and maybe open a ticket
with the Fortinet guys in case you don't get it right.


WCCP works with most Catalyst devices I have tried.
There are other ways to intercept traffic; it only depends on your level
of skill and knowledge.


To me it seems like the FortiGate is the right place to integrate squid
interception.


I noticed that you didn't configure all the squid directives needed to
support automatic WCCP service registration.
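
For reference, a minimal sketch of the directives usually involved in automatic WCCPv2 registration (the router address here is a placeholder, not a value from this setup):

   wccp2_router 192.0.2.1
   wccp2_forwarding_method gre
   wccp2_return_method gre
   wccp2_service dynamic 90
   wccp2_service_info 90 protocol=tcp priority=240 ports=80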


Try to do it manually on the FortiGate and see the results.

Best regards,
Eliezer

On 1/4/2013 1:22 AM, Roman Gelfand wrote:

Thanks for your help.  Please, see attached configuration files and
topology picture.

I am not using a Cisco device. I configured the WCCP service on a
FortiGate 50B firewall using a GRE tunnel. In this case, I am using a
straight transparent proxy. I have never used TPROXY.

I do have a Catalyst router which supports WCCPv2. Should I use that
instead of the FortiGate?

How does using TPROXY instead of a plain transparent proxy improve WCCP routing?

Thanks again


On Wed, Jan 2, 2013 at 4:39 AM, Eliezer Croitoru  wrote:

How did you configure your Cisco router? What exactly did you configure on it?
What Cisco device are you using?

Did you have a chance to look at:
http://wiki.squid-cache.org/ConfigExamples/UbuntuTproxy4Wccp2

Please try to share more information about the infrastructure, and the whole
squid.conf with only the confidential info removed.

Did you have a chance to use TPROXY before?
Did you try to sniff with tcpdump?

Eliezer


On 1/2/2013 3:38 AM, Roman Gelfand wrote:


   I use a wccp/gre tunnel. Port 80
requests work but 443 don't. I am not sure if this is right, but even
though data was received on wccp, no data was transmitted back over
wccp. In other words, the squid server's response was routed back through
the eth0 interface rather than going through the wccp0 interface. Is this
expected behavior? If not, what do I do to make
the response go over wccp?

My iptables config looks like this:

iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 80 -j DNAT --to
192.168.5.81:3228
iptables -t nat -A PREROUTING -i wccp0 -p tcp --dport 443 -j DNAT --to
192.168.5.81:3229

and squid.conf

wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp priority=240 ports=80,443





--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Squid3 extremely slow for some website cnn.com

2013-01-06 Thread Eliezer Croitoru

Hey Muhammed,

Since it's not a squid issue but some other network-level thing, I was
wondering if you have tried testing anything around MSS\MTU?
Some ISPs use hardware that can cause these kinds of issues and make
them very difficult to find.


Hope it will help you.

Regards,
Eliezer

On 12/25/2012 2:38 PM, Muhammed Shehata wrote:




Dear Amos,
Are there any ideas that could help me with the JavaScript issue?

Best Regards,
*Muhammad Shehata*
IT Network Security Engineer
TEData
Building A11- B90, Smart Village
Km 28 Cairo - Alex Desert Road, 6th October, 12577, Egypt
T: +20 (2) 33 32 0700 | Ext: 1532
F: +20 (2) 33 32 0800 | M:
E: m.sheh...@tedata.net
On 12/19/2012 01:02 PM, Amos Jeffries wrote:


On 19/12/2012 7:24 a.m., Muhammad Shehata wrote:

Dear Amos,
Is there any update?


Hi,
 I am currently in the process of moving house. So the work I can do
on Squid is rather limited for 3-4 weeks. I hope to get to this soon,
but cannot promise anything.

Amos








Re: [squid-users] Squid crash on OpenBSD 5.2

2013-01-07 Thread Eliezer Croitoru
2013/01/07 13:52:46 kid1| Accepting HTTP Socket connections at
local=0.0.0.0:3128 remote=[::] FD 314 flags=9
2013/01/07 13:52:46 kid1| Store rebuilding is 5.97% complete
2013/01/07 13:52:46 kid1| Done reading /var/squid/cache swaplog (66957
entries)
2013/01/07 13:52:46 kid1| Finished rebuilding storage from disk.
2013/01/07 13:52:46 kid1| 66957 Entries scanned
2013/01/07 13:52:46 kid1| 0 Invalid entries.
2013/01/07 13:52:46 kid1| 0 With invalid flags.
2013/01/07 13:52:46 kid1| 66957 Objects loaded.
2013/01/07 13:52:46 kid1| 0 Objects expired.
2013/01/07 13:52:46 kid1| 0 Objects cancelled.
2013/01/07 13:52:46 kid1| 0 Duplicate URLs purged.
2013/01/07 13:52:46 kid1| 0 Swapfile clashes avoided.
2013/01/07 13:52:46 kid1|   Took 0.34 seconds (196973.49 objects/sec).
2013/01/07 13:52:46 kid1| Beginning Validation Procedure
2013/01/07 13:52:46 kid1|   Completed Validation Procedure
2013/01/07 13:52:46 kid1|   Validated 66957 Entries
2013/01/07 13:52:46 kid1|   store_swap_size = 2579500.00 KB
2013/01/07 13:52:47 kid1| storeLateRelease: released 0 objects
2013/01/07 13:52:48 kid1| ipcacheParse: No Address records in response
to 'ipv6.msftncsi.com'
2013/01/07 13:52:48 kid1| ipcacheParse: No Address records in response
to 'ipv6.msftncsi.com'
2013/01/07 13:52:48 kid1| Failed to select source for '[null_entry]'
2013/01/07 13:52:48 kid1|   always_direct = 0
2013/01/07 13:52:48 kid1|never_direct = 0
2013/01/07 13:52:48 kid1|timedout = 0
2013/01/07 13:52:49 kid1| Failed to select source for '[null_entry]'
2013/01/07 13:52:49 kid1|   always_direct = 0
2013/01/07 13:52:49 kid1|never_direct = 0
2013/01/07 13:52:49 kid1|timedout = 0
2013/01/07 13:52:50 kid1| Failed to select source for '[null_entry]'
2013/01/07 13:52:50 kid1|   always_direct = 0
2013/01/07 13:52:50 kid1|never_direct = 0
2013/01/07 13:52:50 kid1|timedout = 0
2013/01/07 13:53:11 kid1| Failed to select source for '[null_entry]'
2013/01/07 13:53:11 kid1|   always_direct = 0
2013/01/07 13:53:11 kid1|never_direct = 0
2013/01/07 13:53:11 kid1|timedout = 0
2013/01/07 13:53:26 kid1| Failed to select source for '[null_entry]'
2013/01/07 13:53:26 kid1|   always_direct = 0
2013/01/07 13:53:26 kid1|never_direct = 0
2013/01/07 13:53:26 kid1|timedout = 0
FATAL: Received Segment Violation...dying.
2013/01/07 13:53:35 kid1| Closing HTTP port 0.0.0.0:3128
2013/01/07 13:53:35 kid1| Closing HTTP port 0.0.0.0:3128
2013/01/07 13:53:35 kid1| storeDirWriteCleanLogs: Starting...
2013/01/07 13:53:35 kid1| 65536 entries written so far.
2013/01/07 13:53:35 kid1|   Finished.  Wrote 67222 entries.
2013/01/07 13:53:35 kid1|   Took 0.02 seconds (4038084.94 entries/sec).
CPU Usage: 6.710 seconds = 1.600 user + 5.110 sys
Maximum Resident Size: 69424 KB
Page faults with physical i/o: 0



--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Squid crash on OpenBSD 5.2

2013-01-07 Thread Eliezer Croitoru

I didn't have much time to look at the backtrace.
Also, I don't know what your knowledge is about assert and other things.
I don't know about this specific one, but in most cases it's there for a
reason.

It's there to make an assessment about the state of the code.
Maybe the server is not crashing; I am just throwing out the idea that
maybe some of your clients are getting the wrong content.


So it's good.
Even if there is no problem for most users, there might be one that you have
discovered; but that might not be the case, so you should try to work it out
from the ground up and not just remove the assert.


Have you tried without squidGuard at all?
Maybe this assert points at another bug that we should know about?


I hope you will get more guidance about it from one of the core developers.

Best regards,
Eliezer

On 1/7/2013 6:04 PM, Loïc Blot wrote:

Hello,
At first only 10 squidGuard helpers were used. Then I increased the
amount up to the 150/192 limit, because I thought squid didn't have
enough helpers.
But it was a big crash. I have posted the crash data and my fix on
Bugzilla (no crash since the fix). Assert is a bad thing :(.

http://bugs.squid-cache.org/show_bug.cgi?id=3732






Re: [squid-users] calculating hardware for 900 users for SQUID cache server

2013-01-10 Thread Eliezer Croitoru

Hey Joseph,

You meant SAS, yes?
To decide on hardware specs you will want to try to measure requests per
second rather than the number of users.
Another thing to take into account: is it a regular forward proxy or
intercept\tproxy?

Do you want to keep logs on local disc?
Do you have specific objects\sites you want to cache, rather than running
a general cache proxy?


Take into account that squid is a single process and not threaded.
Squid works faster (in general) with a separate disk per cache_dir
rather than having them in a RAID array.


Things about RAM:
About 10-15 MB of RAM per 1 GB of cache_dir.
Each live connection consumes about 60-70 KB of RAM.
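
For example (hypothetical numbers, just applying the rules of thumb above): a 100 GB cache_dir needs roughly 100 x 15 MB = ~1.5 GB of RAM for the index alone, and 1000 concurrent connections add roughly 1000 x 70 KB = ~70 MB, all on top of whatever you assign to cache_mem and what the OS itself needs.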

I hope this helps you make your calculations.

Regards,
Eliezer

On 1/10/2013 1:03 PM, John Joseph wrote:

Hi All
I am trying to draw up the hardware specs for a SQUID cache server.
I have around 600 users; they may each use bandwidth from 500 Kbps to 3 Mbps.
The expected annual increase in users is up to 20%. I would like to size the
server's hardware so that it will be enough for the coming three years
(users may be up to 900 by then).
What specs should I go for?
For fast r/w I will go for SCSI hard disks, but I am not sure about the amount
of RAM, CPU power and hard disk space for the disk cache.
I would like to request guidance on how to determine the hardware specs.
Guidance requested.
Thanks
Joseph John



Re: [squid-users] Squid 3.2.6 is available | Also in centos repo

2013-01-10 Thread Eliezer Croitoru

CentOS RPM BUILDS here:
http://repo.ngtech.co.il/rpm/centos/6/x86_64/

I changed the repo a bit so it can be used with yum to get updates.

Add the following to a repo file:
##squid.repo
[squid]
name=Squid repo for CentOS Linux 6 - $basearch
baseurl=http://repo.ngtech.co.il/rpm/centos/6/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
##
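
Assuming the file is saved as /etc/yum.repos.d/squid.repo, installing or updating is then the usual:

   yum clean metadata
   yum install squid    # or: yum update squid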

For now the files are signed with a detached .asc file rather than being signed inside the RPM.

An RPM for the 3.3 branch will come later.

Eliezer


Re: [squid-users] Squid as transparent proxy show squid error pages in browser

2013-01-10 Thread Eliezer Croitoru

Hey Frantisek,

This is not a squid problem.
Since it's an intercepting proxy, the client tries to do a DNS lookup for
www.example.com, but since it doesn't have any way to get a DNS result
for this domain, it shows the user the problem it has.

Firefox doesn't know which IP to send the requests to.
In this situation the client doesn't even send a request at all that squid
could intercept.


Handle this kind of network issue at another level than squid\the application.

Regards,
Eliezer

On 1/10/2013 3:11 PM, Frantisek Remias wrote:

OK, the problem is that it still shows the browser's default error message,
like before, when the domain cannot be resolved.

The error message is like:

"Server not found. Check if the address for typing errors such as
ww.example.com of www.example.com" in firefox.

I need to show a custom page instead of this browser default page when
there is no internet connection.

2013/1/10 Amos Jeffries 


On 10/01/2013 9:38 p.m., Frantisek Remias wrote:


Hello,

thank you for the response.

If I set the browser to use the proxy and there is temporarily no internet
connection, then it shows the custom page (the ERR_DNS_FAIL one). BTW:
is it possible to define another custom page for when the internet
connection is not available (so it shows a different one when there
is a DNS problem and another if the internet connection is unavailable)?



How is Squid to know the connection is unavailable? It is not making a
connection, just doing a DNS lookup at this point to determine where the
connection might go in future. It is entirely possible (and normal) that
routing makes the DNS go through one upstream link and the HTTP packets
through another.



If I don't set the browser to use the proxy (meaning it will go through
the transparent-mode proxy), it shows the default browser error message,
like "Internet Explorer cannot display the webpage" or "Server not
found... Firefox can't find the server at www." in Firefox. What I
need is to show the squid custom page in this case, when there is no
internet connection available.



Interfaces these days offer scripted hooks. So...

You configure Squid with:
   http_port 3128 intercept name=port1
   http_port 3129 intercept name=port2
   acl port2 myportname port2
   deny_info ERR_CUSTOM port2
   http_access deny port2

Then you make a script which changes the iptables rules sending traffic to
port #2 of Squid when the link goes down and sending traffic to port #1 of
Squid when it goes up.
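
A minimal sketch of such a script (the LAN interface name and the REDIRECT approach are assumptions; adapt them to the real interception rules):

   #!/bin/sh
   # usage: squid-port.sh 3128|3129   (3128 = link up, 3129 = link down)
   PORT=$1
   # drop whichever interception rule is present, then install the new one
   iptables -t nat -D PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128 2>/dev/null
   iptables -t nat -D PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3129 2>/dev/null
   iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports $PORT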

Amos


Re: [squid-users] Squid as transparent proxy show squid error pages in browser

2013-01-10 Thread Eliezer Croitoru
In any case it won't be a conservative solution, but it depends on what
you want to do in this situation. Do you have one WAN connection, etc.?
If you have two WAN connections and one of them can go down, use basic
round-robin load balancing as a base and a script to remove the faulty
route in case it's down.


Another way is to do some hacking on the clients' DNS resolution and
return them a CNAME\A answer pointing at a local server IP that will
inform them about the network status.
The above is a *very very very* bad solution, which can poison the
browser's DNS cache if anything goes wrong.


It also depends on the users and the environment.

Why is the DNS problem page from Firefox bad?
Have you tried using WPAD?

Eliezer


On 1/10/2013 5:46 PM, Frantisek Remias wrote:

Hello, thank you for your answer. I know that it's off topic now, but
can you give me some directions on how this can be done?

Thank You


Re: [squid-users] websites not responding

2013-01-10 Thread Eliezer Croitoru

On 1/10/2013 5:35 PM, Simon Matthews wrote:

Thanks. That solved the problem.

I still have a problem with linkedin, but it is rather different. Some
pages (including the home page) load with only a subset of what should
be on the page. I don't know if this is an issue with squid or my
browser.

It doesn't seem to be a squid issue from here.

Eliezer


Re: [squid-users] RE: Your cache is running out of filedescriptors

2013-01-14 Thread Eliezer Croitoru
Or use the proper limits\security settings for the squid process instead of
hacking the start-up script.
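
For example, a sketch of that approach on a PAM-based Linux (the numbers are the usual ones; daemons started directly by init may need the distro's own mechanism instead):

   # /etc/security/limits.conf
   squid  soft  nofile  65536
   squid  hard  nofile  65536

   # squid.conf
   max_filedescriptors 65536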


Eliezer

On 1/14/2013 4:10 AM, Alfred Ding wrote:

It is OK now; you need to add "ulimit -n 65536" to your squid startup script.

Thanks.


Re: [squid-users] Upgrade of SQUID from 3.1 to 3.2 on Freebsd 8.3

2013-01-14 Thread Eliezer Croitoru

On 1/14/2013 1:48 PM, Leslie Jensen wrote:


I've now upgraded squid to 3.2 and rewritten the firewall rule that
resulted in a forwarding loop.

Unfortunately I've got no access now and I can't see where I've made the
error.

The browser says squid is rejecting the requests:
Access control configuration prevents your request from being allowed at
this time.


1358162295.975  0 172.18.0.1 TCP_MISS/403 4052 GET
http://www.skatteverket.se/ - HIER_NONE/- text/html
1358162295.976 11 172.18.0.102 TCP_MISS/403 4137 GET
http://www.skatteverket.se/ - HIER_DIRECT/172.18.0.1 text/html
1358162296.110  0 172.18.0.1 TCP_MISS/403 4166 GET
http://www.squid-cache.org/Artwork/SN.png - HIER_NONE/- text/html
1358162296.110 99 172.18.0.102 TCP_MISS/403 4251 GET
http://www.squid-cache.org/Artwork/SN.png - HIER_DIRECT/172.18.0.1
text/html
1358162296.219  0 172.18.0.1 TCP_MISS/403 4058 GET
http://www.skatteverket.se/favicon.ico - HIER_NONE/- text/html
1358162296.219  1 172.18.0.102 TCP_MISS/403 4143 GET
http://www.skatteverket.se/favicon.ico - HIER_DIRECT/172.18.0.1 text/html
1358162296.239  0 172.18.0.1 TCP_MISS/403 4090 GET
http://www.skatteverket.se/favicon.ico - HIER_NONE/- text/html
1358162296.240  1 172.18.0.102 TCP_MISS/403 4175 GET
http://www.skatteverket.se/favicon.ico - HIER_DIRECT/172.18.0.1 text/html



Look closely... it's not squid.
If it were squid, you would have seen TCP_DENIED.
You get a TCP_MISS, which squid is fine with, but a remote server DENIES you
with a 403 response.


I would say it looks pretty bad, since every request seems to go into
squid from two IP addresses, which is like a loop, but one which squid
cannot recognize for some unknown reason.


What have you done in the firewall to prevent the forwarding loop?

By the way, did you try adding a rule so that web requests from the proxy
machine itself are not intercepted?
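
On FreeBSD with pf, that could look something like this sketch ($int_if and $proxy_ip are placeholders for your own macros):

   # in pf.conf, before the interception rule:
   no rdr on $int_if proto tcp from $proxy_ip to any port 80
   rdr on $int_if proto tcp from any to any port 80 -> 127.0.0.1 port 3128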


Regards,
Eliezer


Re: [squid-users] Back to Youtube Caching or any stream cache feature request/discuss

2013-01-17 Thread Eliezer Croitoru

On 1/16/2013 3:31 PM, David Touzeau wrote:

Dear

I would like to know if there is currently tips on Squid 3.2.x in order
to cache Youtube or other stream flows.
Probably not...
Google is your friend; just add "site:squid-cache.org" to the search
to filter out some garbage.



If there any plan on Squid 3.3 to supports the Store URL Rewriting 2.7
feature ( http://wiki.squid-cache.org/Features/StoreUrlRewrite )


There is a plan; see Amos's link.
The new feature is StoreID, a name that reflects what the feature really
does rather than giving the user wrong assumptions about it.


I have a working patch for trunk code revision 12552; if you want to use
it for basic tests I will be more than happy.

The patch syntax is not fully polished yet.

I have drawn the full picture of how the feature works in terms of
squid code, so feel free to polish it if you have some free time.


The patch works, with no bugs known to me yet, for a single squid
instance.
If you use a cluster: I have not yet tested HTCP and some other things,
which can act a bit differently than you would expect.


Anyway, feel free to follow the squid-dev list and contribute to the project.

Links about this specific subject\feature that you should be familiar with:
http://wiki.squid-cache.org/Features/StoreID
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator
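
For the curious, a minimal sketch of what a StoreID helper can look like (this uses the helper syntax that later stabilized, plus a made-up mirror set; the patch discussed here was not final):

   # squid.conf
   store_id_program /usr/local/bin/storeid.sh

   #!/bin/sh
   # storeid.sh: map identical content on several mirrors to one cache key
   while read url extras; do
     case "$url" in
       http://mirror[0-9]*.example.com/*)
         path="${url#http://*/}"   # strip scheme and host
         echo "OK store-id=http://mirrors.example.com.squid.internal/$path"
         ;;
       *) echo "ERR" ;;            # ERR means: leave the URL untouched
     esac
   done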

If you (or anyone) have any questions, feel free to contact me through
email\the lists and I will respond.


Best regards,
Eliezer




Re: [squid-users] How to fix a "Zero Sized Reply" error

2013-01-17 Thread Eliezer Croitoru

Hey Bastien,

I have seen this problem a couple of times, caused by a wrong MTU at
some points in the network infrastructure.


Try to make sure that an application other than squid succeeds in
fetching the URL.

Try to find out the path MTU.

If you are not sure about it, you can try the manual way: use ping
with a specific packet size and calculate the PMTU to a specific node\IP manually.
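
For example, on Linux (the 28 bytes are the IP + ICMP headers; shrink -s until the ping gets through, then add 28 to get the PMTU):

   ping -M do -s 1472 www.example.com   # 1472 + 28 = 1500, a standard MTU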


I had a small script which did the trick nicely a couple of times:
http://rcl-rs-vvg.blogspot.co.il/2011/09/discovering-largest-supported-mtu.html

and nmap also has a nice script for that:
nmap --script path-mtu www.example.com

By the way what OS are you using squid on?

Regards,
Eliezer


On 1/17/2013 8:47 AM, Amos Jeffries wrote:

On 17/01/2013 5:35 a.m., Bastien Ceriani wrote:

Hi,

We use Squid 3.1.20.


Firstly please upgrade this Squid if possible. 3.1 series is outdated
and up to a .23 release now anyway due to security bugs. Current Squid
release is 3.2.6.



We are often exposed to a problem which returns a "Zero Sized Reply" error.

Can I fix it with some options in the squid configuration file?


It depends on why "the server is not sending any information back to
Squid". If you can determine what type of HTTP requests *do* get a
response from this server and how they differ from what is being relayed
by your Squid you have a chance at discovering what settings might make
it work.



I tried all of the things mentioned below:

  - Delete or rename your cookie file and configure your browser to
prompt you before accepting any new cookies.
  - Disable HTTP persistent connections with the
server_persistent_connections and client_persistent_connections
directives.
  - Disable any advanced TCP features on the Squid system. Disable ECN
on Linux with echo 0 > /proc/sys/net/ipv4/tcp_ecn.

I will try to find the origin of the problem with a tcpdump on my
proxy, but I don't know how I could exploit it.
This is the result of the TCP stream between my proxy and the website :

GET / HTTP/1.1
Host: www.mopub.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101
Firefox/17.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Via: 1.1 localhost (squid/3.1.20)
X-Forwarded-For: 192.168.1.137
Cache-Control: max-age=259200
Connection: keep-alive


As you can see: NO reply.


And between the proxy and the client :

GET http://www.mopub.com/ HTTP/1.1
Host: www.mopub.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101
Firefox/17.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Proxy-Connection: keep-alive



Amos


--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Back to Youtube Caching or any stream cache feature request/discuss

2013-01-19 Thread Eliezer Croitoru

On 1/20/2013 1:30 AM, David Touzeau wrote:

So no chances to get it with the 3.2x ?

No.

The reason is very simple:
There is a big difference in the code parts which are critical for the 
feature to work.


If you are willing to try learning a lot of code and porting it from
3.3\3.HEAD, many will be happy; but consider that you will be the main
source for all the questions about the feature in the 3.2 branch for at least
a year...


Best regards,
Eliezer


Re: [squid-users] How to modify the process owner name in syslog

2013-01-21 Thread Eliezer Croitoru

Hey Bill,

Since squid 2.7 is not maintained anymore, I doubt you will get much
support for it; but if you share the relevant settings you have used,
maybe someone can help you.


Regards,
Eliezer

On 1/21/2013 12:09 PM, Bill Yuan wrote:

Hi all,
I just finished configuring my squid 2.7 to send all the
access log entries to an external syslog server. It is working properly.

Thanks very much for creating such nice software. But I want to know
whether I can change the name in the syslog, like below:

Jan 21 08:09:10 192.168.0.1 *squid[12345]*: log message

And when I trigger the logger via the command line, I get another syslog
record like below:

Jan 21 08:09:10 192.168.0.1 root: message via command line

So my question is whether I can change the "process name" in the system
log, or just not show it at all.


thanks in advance.  :)



Re: [squid-users] Squid as reverse proxy and PCI Tests

2013-01-21 Thread Eliezer Croitoru

On 1/21/2013 6:11 PM, Sébastien WENSKE wrote:

Hope this can help :)

http://www.sw-servers.net/how-to-pass-pci-tests-with-squid/

Best Regards,
Sebastien WENSKE


Just wondering, how does it help with these tests?

Since not everybody knows the reason, you should explain the cause and
the result of the patch.


Regards,
Eliezer



Re: [squid-users] Squid is crashing

2013-01-21 Thread Eliezer Croitoru

On 1/21/2013 5:19 PM, Farooq Bhatti wrote:

Thanks for the prompt response.

Actually I am a newbie to debugging; I have never used any debugging tool before,
so I have no idea about the error I am getting. Anyhow, I googled the last error
and was able to install the glibc debuginfo packages, but now the error has
changed, as below. So far I have not been able to run gdb, as my program
exited with code 01. Please check below:



On Monday, 21 January 2013 at 15:32 +0500, Farooq Bhatti wrote:

Hi all,

My squid is crashing and I am getting the following core
dump files; this has suddenly been happening for the last 2 weeks. Before that it was working fine.

[root@hostal-squid cache]# ls -lah /usr/local/squid/var/cache/
total 5.5G
drwxr-xr-x. 2 squid squid 4.0K Jan 21 14:55 .
drwxr-xr-x. 5 squid squid 4.0K Aug 29 03:44 ..
-rw---  1 squid squid 3.0G Jan 20 03:58 core.3878
-rw---  1 squid squid 3.0G Jan 20 04:06 core.3904

The version of squid with compiled option is as below:

[root@hostal-squid cache]# squid -v
Squid Cache: Version LUSCA_HEAD-r14809
configure options:
'--enable-delay-pools' '--disable-arp-acl'
'--enable-linux-netfilter' '--enable-large-cache-files'
'--enable-cache-digests' '--enable-external-acl-helpers=ip_user'
'--disable-ident-lookups' '--enable-removal-policies=heap,lru'
'--disable-snmp' '--disable-ssl' '--enable-storeio=aufs,coss' '--with-aio'
'--with-maxfd=1048576' '--with-dl' '--with-pthreads' '--with-large-files'
'--disable-unlinkd' '--disable-htcp'




Hey there,

This version of squid is not squid but LUSCA, which is a fork of squid 2.7.

If you need help with it, try contacting the LUSCA developers.

Since I am not following LUSCA, I don't know anything about their
revisions and maintenance; but there are many new features in squid 3+,
so as always I suggest you try the latest stable squid.


Best regards,
Eliezer


Re: [squid-users] Squid 3.2.6 - blocking hosts by regexp?

2013-01-23 Thread Eliezer Croitoru

On 1/23/2013 3:32 PM, Ralf Hildebrandt wrote:

1358927965.305  90252 141.42.xxx.69 TCP_MISS/200 2711 CONNECT 
download.teamviewer.com:443 - HIER_DIRECT/46.163.100.220 -
1358928992.439 74 141.42.xxx.115 TCP_MISS/200 9 CONNECT 
ping3.teamviewer.com:443 - HIER_DIRECT/95.211.37.197 -
Idea:

acl teamviewer-ssl url_regex ^(master|ping)[0-9]+\.teamviewer\.com
http_access deny teamviewer-ssl


If you want to block teamviewer totally, dstdomain would be faster:

acl teamviewer dstdomain .teamviewer.com
http_access deny teamviewer


OK, I still want to be able to access www.teamviewer.com :)



Then use a dstdomain + CONNECT method deny, and allow dstdomain + the
GET\POST\HEAD methods.
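
A sketch of that idea in squid.conf (untested; adapt as needed):

   acl teamviewer dstdomain .teamviewer.com
   acl tv_www dstdomain www.teamviewer.com
   acl CONNECT method CONNECT
   acl safe_methods method GET POST HEAD
   # block tunnels to the teamviewer servers, keep the website browsable
   http_access deny CONNECT teamviewer !tv_www
   http_access allow teamviewer safe_methods
   http_access deny teamviewer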


Eliezer


Re: [squid-users] access-lists from mysql ?

2013-01-23 Thread Eliezer Croitoru

On 1/24/2013 12:13 AM, Ali Jawad wrote:

Hi
Is it possible to load access-lists from a database ? I.e. I want to
read all the allowed src IPs from a database, all the examples I could
fine are around user authentication and not IP access-lists. If it is
possible can you please show me a few pointers ? Any example config /
howto ?
Thanks



For this kind of setup you'd be better off using an external_acl helper
backed by a DB, which is pretty simple to implement.
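
A minimal sketch of that approach (the table, DB name and paths are made up; a real helper should keep a persistent DB connection instead of one mysql call per lookup):

   # squid.conf
   external_acl_type ipdb ttl=60 %SRC /usr/local/bin/check_ip.sh
   acl allowed_ips external ipdb
   http_access allow allowed_ips

   #!/bin/sh
   # check_ip.sh: answer OK if the source IP is in the allowed_ips table
   while read ip; do
     if mysql -N -e "SELECT 1 FROM allowed_ips WHERE ip='$ip' LIMIT 1" acldb | grep -q 1
     then echo "OK"
     else echo "ERR"
     fi
   done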


Regards,
Eliezer


Re: [squid-users] Access Denied with transparent mode on FreeBSD

2013-01-24 Thread Eliezer Croitoru
dules'
'--enable-removal-policies=lru heap'
'--disable-epoll'
'--disable-linux-netfilter'
'--disable-linux-tproxy'
'--disable-translation'
'--enable-auth-basic=DB MSNT MSNT-multi-domain NCSA PAM POP3 RADIUS
fake getpwnam'
'--enable-auth-digest=file'
'--enable-external-acl-helpers=file_userip unix_group'
'--enable-auth-negotiate=none'
'--enable-auth-ntlm=fake smb_lm'
'--enable-storeio=diskd rock ufs aufs'
'--enable-disk-io=AIO Blocking DiskDaemon IpcIo Mmapped DiskThreads'
'--enable-log-daemon-helpers=file'
'--enable-url-rewrite-helpers=fake'
'--enable-icmp'
'--enable-htcp'
'--disable-forw-via-db'
'--disable-cache-digests'
'--enable-wccp'
'--enable-wccpv2'
'--disable-eui'
'--enable-ipfw-transparent'
'--enable-pf-transparent'
'--enable-ipf-transparent'
'--disable-follow-x-forwarded-for'
'--enable-ecap'
'--disable-icap-client'
'--disable-esi'
'--enable-kqueue'
'--prefix=/usr/local'
'--mandir=/usr/local/man'
'--infodir=/usr/local/info/'
'--build=amd64-portbld-freebsd9.1'
'build_alias=amd64-portbld-freebsd9.1' 'CC=cc' 'CFLAGS=-O2 -pipe
-I/usr/local/include -fno-strict-aliasing' 'LDFLAGS= -pthread
-L/usr/local/lib' 'CPPFLAGS=' 'CXX=c++' 'CXXFLAGS=-O2 -pipe
-I/usr/local/include -fno-strict-aliasing' 'CPP=cpp'
'PKG_CONFIG=pkgconf' --enable-ltdl-convenience
*** END ***

This is basically a working 2.7 installation config that has been moved
onto a 3.2 box with some minor tweaks in the new config.

Any help appreciated.

Iain.



--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Why squid instance' cpu load so high, can rebuild so frequency ?

2013-01-28 Thread Eliezer Croitoru

Hey,

What exact version of 3.2+ are you using?
What distro?
squid.conf..
Self compiled or from a Repo?
squid build options from "squid -v"

There is a problem, but this is not enough basic data to even look at it.
Squid 3.2+ has been patched a couple of times, and it might have been fixed
in one of those patches.


Regards,
Eliezer

On 1/29/2013 5:19 AM, 金 戈 wrote:

Greetings!
We use squid as a forward proxy for our project. Recently we found that one of
the instances rebuilds a lot.
We checked the configuration and found that all the instances use the same one,
but just this instance is always rebuilding (about 3-4 times per day).
And we found the cache log has something like the below.

2013/01/29 09:56:40 kid1| ctx: enter level  0: 
'http://bo.ok168.com/music//.wma'
2013/01/29 09:56:40 kid1| WARNING: unparseable HTTP header field {Bad Request (Invalid Hostname)}

# This is when I shut down the instance.
2013/01/29 10:03:38 kid1| Preparing for shutdown after 9796891 requests
2013/01/29 10:03:38 kid1| Waiting 30 seconds for active connections to finish
2013/01/29 10:03:38 kid1| Closing HTTP port 192.168.134.16:3128
2013/01/29 10:03:38 kid1| Closing SNMP receiving port 192.168.134.16:3401
2013/01/29 10:03:38 kid1| Shutdown: NTLM authentication.
2013/01/29 10:03:38 kid1| Shutdown: Negotiate authentication.
2013/01/29 10:03:38 kid1| Shutdown: Digest authentication.
2013/01/29 10:03:38 kid1| Shutdown: Basic authentication.
2013/01/29 10:03:38 kid1| assertion failed: errorpage.cc:608: "entry->isEmpty()"


Re: [squid-users] Why squid instance' cpu load so high, can rebuild so frequency ?

2013-01-29 Thread Eliezer Croitoru
'CXX=c++'
'CXXFLAGS=-O2 -pipe -I/usr/local/include -fno-strict-aliasing' 'CPP=cpp' 
--enable-ltdl-convenience


and my squid.conf

cache_mem 512 MB
memory_replacement_policy heap GDSF
#memory_replacement_policy heap LRU
memory_cache_shared off
minimum_object_size 0 KB
#cache_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
maximum_object_size 512 KB
cache_swap_low 85
cache_swap_high 95
logfile_daemon /usr/local/libexec/squid/log_file_daemon
buffered_logs on
negative_ttl 15 seconds
positive_dns_ttl 6 hours
negative_dns_ttl 30 seconds
store_avg_object_size 26 KB
store_objects_per_bucket 30
read_ahead_gap 64 KB
request_header_max_size 64 KB
reply_header_max_size 64 KB
via off
request_entities on
forward_timeout 1 minutes
connect_timeout 15 seconds
peer_connect_timeout 10 seconds
read_timeout 3 minutes
write_timeout 3 minutes
request_timeout 30 seconds
client_idle_pconn_timeout 1 minutes
client_lifetime 1 hours
server_idle_pconn_timeout 1 minute
cache_effective_user squid
cache_effective_group squid
httpd_suppress_version_string on
client_persistent_connections off
query_icmp off
accept_filter httpready
dns_v4_first on
check_hostnames off
ipcache_size 65535
fqdncache_size 65535
max_filedescriptors 5
memory_pools on
memory_pools_limit 50 MB
forwarded_for transparent
client_db off
http_port   10.10.1.1:3128 accel allow-direct ignore-cc
snmp_incoming_address   10.10.1.1
udp_incoming_address10.10.1.1
icp_port   0
htcp_port  0
snmp_port  3401
cache_dir diskd /cache1/aufs-32k 8000 32 256 max-size=32768 Q1=100 Q2=128
cache_dir diskd /cache2/aufs-32k 8000 32 256 max-size=32768 Q1=100 Q2=128
cache_dir diskd /cache3/aufs-32k 8000 32 256 max-size=32768 Q1=100 Q2=128
cache_dir diskd /cache4/aufs-512k 32000 16 256 min-size=32769  max-size=524288 
Q1=100 Q2=128





On 2013-1-29, at 11:34 AM, Eliezer Croitoru wrote:


Hey,

What exact version of 3.2+ are you using?
What distro?
squid.conf..
Self compiled or from a Repo?
squid build options from "squid -v"

There is a problem, but this is not enough basic data to even look at it.
Squid 3.2+ has been patched a couple of times, and it might have been fixed
in one of those patches.

Regards,
Eliezer

On 1/29/2013 5:19 AM, 金 戈 wrote:

Greetings!
We use squid as a forward proxy for our project. Recently we found that one of
the instances rebuilds a lot.
We checked the configuration and found that all the instances use the same one,
but just this instance is always rebuilding (about 3-4 times per day).
And we found the cache log has something like the below.

2013/01/29 09:56:40 kid1| ctx: enter level  0: 
'http://bo.ok168.com/music//.wma'
2013/01/29 09:56:40 kid1| WARNING: unparseable HTTP header field {Bad Request (Invalid Hostname)}

# This is when I shut down the instance.
2013/01/29 10:03:38 kid1| Preparing for shutdown after 9796891 requests
2013/01/29 10:03:38 kid1| Waiting 30 seconds for active connections to finish
2013/01/29 10:03:38 kid1| Closing HTTP port 192.168.134.16:3128
2013/01/29 10:03:38 kid1| Closing SNMP receiving port 192.168.134.16:3401
2013/01/29 10:03:38 kid1| Shutdown: NTLM authentication.
2013/01/29 10:03:38 kid1| Shutdown: Negotiate authentication.
2013/01/29 10:03:38 kid1| Shutdown: Digest authentication.
2013/01/29 10:03:38 kid1| Shutdown: Basic authentication.
2013/01/29 10:03:38 kid1| assertion failed: errorpage.cc:608: "entry->isEmpty()"




--
Eliezer Croitoru



Re: [squid-users] Windows Updates on 3.2.6

2013-01-31 Thread Eliezer Croitoru

Squid access logs?
What is the exact problem? You can't download at all, or something else?
Please share your squid.conf and "squid -v" output.
Where did you get your RPM? From my repo?

Please share more info; and if you can get tcpdump output, that can really
help to find your problem.
Note that I am using 3.2.6 + 3.3 + 3.HEAD on CentOS 6, and it works fine
with Windows Updates as of right now.


Regards,
Eliezer

On 1/31/2013 4:54 PM, Dave Burkholder wrote:

Are there any comments here? I've tried adding the following options from
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate (even though I don't
especially want to cache updates):

range_offset_limit -1
maximum_object_size 200 MB
quick_abort_min -1

No joy. I've tried transparent & standard proxy modes. Not using authentication 
anywhere. I've now tested on 4 LANs behind Squid 3.2.6 on CentOS 5 & 6 machines and 
WU isn't working on any of them.

On one machine I downgraded to 3.2.0.18 and was able to get WU to work. Was 
there a regression since 3.2.0.18?

Thanks,

Dave


Re: [squid-users] Windows Updates on 3.2.6

2013-01-31 Thread Eliezer Croitoru

On 1/31/2013 6:11 PM, Dave Burkholder wrote:

Here are links to squid access.log

www.thinkwelldesigns.com/access_log.txt

OK, this seems pretty normal to me from squid's point of view.
I see the same lines, where Windows tries to access things that don't exist.



And tcpdump for 10.0.2.150

www.thinkwelldesigns.com/tcpdump.zip
What format is it in? I have tried to read it with Wireshark and it
seems corrupted or something.

I think I understand what the problem is from your squid.conf:

range_offset_limit -1

Remove it...
Try to make the proxy as simple as possible.

The above can cause Windows to fail to fetch objects; and when it fails,
it tries to use SSL, which I don't know whether it can or cannot use.


Eliezer



Thanks,

Dave

-Original Message-
From: Dave Burkholder
Sent: Thursday, January 31, 2013 10:29 AM
To: Eliezer Croitoru; squid-users@squid-cache.org
Subject: RE: [squid-users] Windows Updates on 3.2.6

Hello Eliezer,

Thank you for your reply. My exact problem is that Windows Updates do not 
install or even download at all.

The squid RPMs were built by my partner for 2 architectures: CentOS 5 i386 and
CentOS 6 x86_64. Same nonfunctioning behavior on both.

I didn't realize you had a squid repo; I'd be glad to try your builds if 
they're compatible. Where is your repo hosted?


I had included the conf file in my first email, but a link would be better:

www.thinkwelldesigns.com/squid_conf.txt


###
squid -v: (Centos 6 x86_64)
---
Squid Cache: Version 3.2.6
configure options:  '--host=x86_64-unknown-linux-gnu' 
'--build=x86_64-unknown-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' 
'--enable-arp-acl' '--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,getpwnam'
 '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' '--enable-extern
al-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-http-violations' 
'--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' '--enable-snmp' 
'--enable-ssl' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' '--enable-ecap' 
'--with-aio' '--with-default-user=squid' '--with-filedescriptors=16384' 
'--with-dl' '--with-openssl' '--with-pthreads' 
'build_alias=x86_64-unknown-linux-gnu' 'host_alias=x86_64-unknown-linux-gnu' 
'CFLAGS=-O2 -g -fpie' 'CXXFLAGS=-O2 -g -fpie' 
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'

###
squid -v: (Centos 5 i386)
---
Squid Cache: Version 3.2.6
configure options:  '--host=i686-redhat-linux-gnu' 
'--build=i686-redhat-linux-gnu' '--target=i386-redhat-linux' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' 
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' 
'--includedir=/usr/include' '--libdir=/usr/lib' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--exec_prefix=/usr' '--libexecdir=/usr/lib/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with

Re: [squid-users] Reverse cache for HLS streaming

2013-01-31 Thread Eliezer Croitoru
I have seen your logs, and it seems you used curl to fetch with a
simple GET, while the clients are requesting partial content (ranges) of the
video, and no version of squid caches those yet.
You can try other alternatives that offer this kind of feature,
or reassess the way your application works.


Eliezer

On 1/31/2013 8:14 PM, Scott Baker wrote:

I'm trying to set up Squid as a reverse proxy to cache HLS segments. We
have a very controlled environment, so I'd like it to cache every .ts
file it sees, and not cache every .m3u8 file it sees. I have a pretty
generic configuration (I think) and it seems that it's not caching anything?

I don't see any reason it WOULDN'T cache the files. The headers all
indicate that it's cacheable I think.

-

http_port 80 accel defaultsite=hls2.domain.tv no-vhost ignore-cc
cache_peer master-streamer.domain.tv parent 80 0 no-query originserver
name=myAccel no-digest

acl our_sites dstdomain hls2.domain.tv
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 2000 16 256
cache_mem 1024 MB

-

1359655780.097 45 65.182.224.20 TCP_MISS/206 1607080 GET
http://hls2.domain.tv/katu/katu_996_92564.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655787.167 41 65.182.224.20 TCP_MISS/206 1607080 GET
http://hls2.domain.tv/katu/katu_996_92564.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655792.110 42 65.182.224.20 TCP_MISS/206 1563276 GET
http://hls2.domain.tv/katu/katu_996_92565.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655799.181 40 65.182.224.20 TCP_MISS/206 1563276 GET
http://hls2.domain.tv/katu/katu_996_92565.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655804.114 37 65.182.224.20 TCP_MISS/206 1565532 GET
http://hls2.domain.tv/katu/katu_996_92566.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655811.188 37 65.182.224.20 TCP_MISS/206 1565532 GET
http://hls2.domain.tv/katu/katu_996_92566.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655816.133 39 65.182.224.20 TCP_MISS/206 1610088 GET
http://hls2.domain.tv/katu/katu_996_92567.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655823.204 37 65.182.224.20 TCP_MISS/206 1610088 GET
http://hls2.domain.tv/katu/katu_996_92567.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655828.139 37 65.182.224.20 TCP_MISS/206 1580948 GET
http://hls2.domain.tv/katu/katu_996_92568.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T
1359655835.214 39 65.182.224.20 TCP_MISS/206 1580948 GET
http://hls2.domain.tv/katu/katu_996_92568.ts -
FIRSTUP_PARENT/65.182.224.89 video/MP2T

-

< HTTP/1.1 200 OK
< Date: Thu, 31 Jan 2013 18:11:44 GMT
< Server: Apache/2.2.22 (Fedora)
< Last-Modified: Thu, 31 Jan 2013 18:11:04 GMT
< ETag: "800182-181de4-4d4998cd170d4"
< Accept-Ranges: bytes
< Content-Length: 1580516
< Content-Type: video/MP2T
< X-Cache: MISS from hls2.domain.tv
< X-Cache-Lookup: MISS from hls2.domain.tv:80
< Via: 1.1 hls2.domain.tv (squid/3.2.5)
< Connection: keep-alive



--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Problem with squid and rp-pppoe serverver

2013-02-01 Thread Eliezer Croitoru

Hey Georgi,

It seems like an OS-level problem rather than any squid business.

You can try contacting the OS\distro mailing list or forums to get some
more help with that.


Eliezer

On 2/1/2013 4:18 PM, Georgi Maleshkov wrote:

Hi,
I have a problem implementing squid3 as a transparent proxy together with an
rp-pppoe server, for an extended home network. In iptables, inside the PREROUTING
section, I see that the rule for the extended users of my home network matches
(it shows some packets and traffic), but they can't access any web site; the
browsers get a timeout response. The iptables router, squid and the rp-pppoe
server are on the same Linux machine. Squid works with this configuration; I tested it
with my other computer, which is not connected through pppoe, and it works. Any idea
how to resolve the problem?
Thanks,
Georgi



Re: [squid-users] Re: Splitting objects by size into different cache_dir not working for me

2013-02-01 Thread Eliezer Croitoru

On 2/1/2013 2:33 AM, babajaga wrote:

Hi,

I am just starting to test with rock. And it could be that I have just the
opposite effect: only UFS is used, not rock, using the default of 512KB right now.
However, I will do more testing tomorrow,
but I am a bit suspicious regarding this line in your squid.conf:
maximum_object_size_in_memory 8 KB

Maybe ONLY objects <= 8KB are cached at all, and larger ones are never
cached.
Then your effect would be explainable: rock is always used.

It's worth giving it a try with something like
maximum_object_size_in_memory 64 KB

If the above is true, then there is a more serious bug that blocks squid
from caching objects, and it should be tested more deeply to make sure.

Anyone want to test it?

--
Eliezer Croitoru



Re: [squid-users] Re: Splitting objects by size into different cache_dir not working for me

2013-02-01 Thread Eliezer Croitoru

On 2/1/2013 8:04 PM, Luciano Ruete wrote:

I've already tested, and the above seems to be true. How can I know for
sure whether or not there are objects in the cache_dir greater than
maximum_object_size_in_memory?

I can run more tests if you give me further instructions, or try a
patch if provided.

Regards.


We need a set of HTTP-cacheable objects of different sizes:
tiny:
http://repo.ngtech.co.il/rpm/centos/6/x86_64/squid-debuginfo-3.2.5-1.el6.x86_64.rpm.asc

avg:
http://repo.ngtech.co.il/squid/cachecluster.png

avg download:
http://repo.ngtech.co.il/rpm/centos/6/others/openssh-6.1p1-81.el6.x86_64.rpm

big download: 
http://repo.ngtech.co.il/rpm/centos/6/x86_64/squid-debuginfo-3.2.6-1.el6.x86_64.rpm


large download:
http://dl.fedoraproject.org/pub/fedora/linux/releases/18/Fedora/x86_64/iso/Fedora-18-x86_64-netinst.iso

very large download:
http://dl.fedoraproject.org/pub/fedora/linux/releases/18/Fedora/x86_64/iso/Fedora-18-x86_64-DVD.iso

All of the above should have proper cache directives, which can be checked using:
http://redbot.org
(ignore my testing Link headers)

You can set these settings in squid.conf
store_dir_select_algorithm round-robin
#^^default is: "least-load" which can cause your problem.
cache_dir rock /var/spool/squid/rock 1000 min-size=1024 max-size=31000 
max-swap-rate=250 swap-timeout=350
cache_dir aufs /var/spool/squid/aufs 3 16 256 min-size=209715 
max-size=734003200

maximum_object_size_in_memory 512 KB #default
minimum_object_size 0 KB #default
maximum_object_size 300 MB #non default
##end conf

First, try only the round-robin, to make sure this is not the basic cause
of the problems.
Reload\restart, then check the cache_dir sizes before and after getting into a
website like Yahoo Movies or any site with many objects on it.
Then try to change only the maximum_object_size_in_memory from the 512 KB
default to 8 KB, which will result in almost no RAM cache and mostly dir
cache, to make sure that the reason for the low counters is not memory caching.

Check the cache_dir before and after...

If there is no straight answer whether it's OK or not, try using all the
directives I gave you together, and try to download each of the files in
the set I gave you.


The tiny one should be cached in memory.
All the others should be cached in a cache_dir according to their size.
The large ISO file should in any case be cached in the UFS cache_dir.

Feel free to ask me anything.
I am also in squid IRC channel here and there.

--
Eliezer Croitoru



Re: [squid-users] Re: Splitting objects by size into different cache_dir not working for me

2013-02-01 Thread Eliezer Croitoru
e to RoundRobin selection.
Don't change any object-size default other than maximum_object_size, to 700 MB.

If you have any visible problem after that, file a bug in Bugzilla
and also refer to this mail as part of the process.


http://bugs.squid-cache.org/

--
Eliezer Croitoru
http://www1.ngtech.co.il


Re: [squid-users] Re: Splitting objects by size into different cache_dir not working for me

2013-02-02 Thread Eliezer Croitoru

On 2/2/2013 5:10 PM, Luciano Ruete wrote:

OK, this one is my fault. The Debian/Ubuntu init script does a squid -z in a
pre-start hook if the cache_dir was not initialized. My rock cache_dir
was already initialized, but the script only knows about AUFS and COSS,
because it was written for squid-3.1 and not for squid-3.2.

Thanks for the answer.

Yes, I have seen that and didn't have time to send you a note about it yet.

Rock is a new type, and squid -z resets it even if it's OK.
I think that, as in the ufs case, squid -z should not reset the rock store
file and should just recreate it if it's not there.
The reason is to be consistent with ufs and also to prevent this kind of
problem.


This should be filed as a separate bug.

What I did in the CentOS init script is to start with:

# aufs/ufs cache_dir paths (comments stripped first)
CACHE_AUFS_SWAP=`sed -e 's/#.*//g' $SQUID_CONF | \
egrep "^cache_dir (auf|ufs)" | awk '{ print $3 }'`
# rock cache_dir paths
CACHE_ROCK_SWAP=`sed -e 's/#.*//g' $SQUID_CONF | \
grep "^cache_dir rock" | awk '{ print $3 }'`

And later it checks for directory or file existence according to the store type.

This helps the init script avoid false identification of a cache_dir,
but it does not prevent the very wrong squid -z action.


Regards,
--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] a log check script for Squid

2013-02-02 Thread Eliezer Croitoru

Nice one.

Did you have a chance to use one of the existing tools for that?
There are a bunch of them which are pretty simple to configure.

On 2/3/2013 3:52 AM, jeffrey j donovan wrote:

greetings

I was gathering some log info on several squid boxes and I wrote this little
tool to quickly gather access times and IP information. It's just a group of
log-reading tasks.
Basically it reads access.log, displays the current time, and lets you search the
logs for a particular IP address. Those results can be delivered to a remote
folder via scp.

It's not very elegant; it's actually pretty crude, and some lines may not be
complete. But I thought I should share this with others, and maybe someone could
use it or build off of it and make something better.
It worked on OSX and Ubuntu.





Modify whatever; default paths were followed.
One folder may need to be created: ../squid/var/logs/tmp. This is just to keep
the oranges away from the tomatoes, unless you like tormagetoes.


Hope this is useful to someone.
-j



--
Eliezer Croitoru
http://www1.ngtech.co.il


Re: [squid-users] access-lists from mysql ?

2013-02-02 Thread Eliezer Croitoru

Hey Matthew,

I wrote a more complex solution like that in Ruby, with MySQL,
Tokyo Cabinet, Redis and some others.

Just a couple of days ago I saw another very nice interface for Ruby,
with a very nice API, called "moneta".

It provides one interface to several DBs, such as the ones mentioned above.

Until now I have used BDB, MySQL, PostgreSQL and the ones mentioned above.
The problem with SQL DBs is the size of the DB and the speed compared to the
others.
Tokyo Cabinet takes about 50% of MySQL's time on any store, for a
simple hash DB.

I know people like SQL queries, but SQL DBs are very slow for this kind of computation.
The more precise you are, the more speed you get.

Eliezer

On 2/3/2013 7:27 AM, Matthew Goff wrote:

I didn't find that anyone has created a flexible solution for use with
MySQL, so I wrote a small C++ program that will execute a specified
query with token replacement. You will need the MySQL development
libraries installed to compile it, but otherwise nothing special. If
no result set is found ERR is returned, if a result set is found OK is
returned.

GitHub: https://github.com/Kline-/tools/tree/master/c++/mysquid

Example usage with only one token passed, %DST=test.com:
external_acl_type mysquid1 %DST /path/to/mysquid "SELECT `url` FROM
`blocked_domains` WHERE INSTR('##TOK##',url);"

Which would result in MySQL executing the following:
SELECT `url` FROM `blocked_domains` WHERE INSTR('test.com',url);

##TOK## will be updated in each query with whatever Squid passes along
as %DST. Any number of tokens are supported and you can name them
whatever you want as long as they are ##enclosed##.

Example usage with two tokens passed, %SRC=192.168.1.8, %DST=test.com:
external_acl_type mysquid2 %SRC %DST /path/to/mysquid "SELECT * FROM
`blocked_src_dst` WHERE `ip` LIKE '##source##%' AND
INSTR('##destination##',url);"

Which would result in MySQL executing the following:
SELECT * FROM `blocked_src_dst` WHERE `ip` LIKE '192.168.1.8%' AND
INSTR('test.com',url);

I only use this on my home LAN, so I have no data on how well it may
or may not scale. With a low ttl I can now update the ACLs I use for
blocking websites in my home via any number of different SQL tools
rather than having to login to my proxy box, su, update acl files, and
reload Squid. Comments or improvements are welcome, I hope some others
will find this useful.


Re: [squid-users] access-lists from mysql ?

2013-02-03 Thread Eliezer Croitoru

On 02/03/2013 08:02 AM, Matthew Goff wrote:

Ah... But is it floating on the web to be found by Google? ;) I
searched off and on a little for a way to easily tie Squid to MySQL,
and I found lots of people asking but very few practical examples
beyond user authentication using the supplied demo script.

I'm curious how much caching would really be necessary in the helper
program though given that Squid already caches external ACL lookup
results on its own. I haven't seen any slowdown using this on my own
LAN, but that's a fairly small traffic sample.

My end goal was something using as few external library dependencies
as possible in a compiled language, so I can say I achieved that at
least. I really was just tired of the whole process of: ssh, su, edit,
reload, test -- each time I needed to block a new domain one of my
kids stumbled on;)  The SQL tie-in is also nice because it can be
managed by so many different tools so you can create portal pages or
small GUI tools to allow less technical users to update their lists
without worrying about what file on disk to edit and what commands to
run afterwards.

Every solution will have pros and cons, just have to pick the best one
for your own use case:)

Indeed.

Well, if you are here you can always ask, and I will do my best if I can.
Portability is very good.
I have used Ruby since it's very intuitive to me.
The only systems where I couldn't use Ruby were embedded ones.

The cache for an external ACL is better limited to something.
Also, the external ACL caches by IP, by URL, or by a couple of keys together.
The application caches at the block\search level, which is far more
advanced and lower level than the squid helper cache.
Since squid doesn't have a "domain", a "path", etc. in the interface, the
app should handle that.
Since I have used only a list of domains and partial URL paths, there is
a pretty good reason for that.


In almost any case, options other than a static DB are better.
There are a couple of solutions which offer just that, for free.

There were a couple of guys here who talked about MySQL as an ACL backend,
but nobody sketched a design for it.
If you have something in mind for an LDAP or MySQL schema which an
application could use to check ACLs, I will be more than happy to think
about it.


The current options are:
- squidGuard's static DB, by category.
- other weighted categorization, such as -127 bad to +127 OK, where the user
chooses the level he wants to be on or is assigned a number.
This is a problem, since many will rate a malware site as -127 while
adult content gets -120 or whatever.

Eliezer


Re: [squid-users] what should squid -z do

2013-02-03 Thread Eliezer Croitoru

On 2/3/2013 8:29 PM, Alex Rousskov wrote:

The cause of this specific problem is_not_  rock behavior (correct or
not) but a mismatch between a startup script and squid.conf -- the
script does not test for all the right directories when running squid-z.
Even if rock code is changed, this mismatch still needs to be fixed.
Please consider filing a bug report with Debian/Ubuntu if it is their fault.


To be consistent with ufs, we should probably change rock behavior from
initializing the database to doing nothing if the configured database
directory already exists. Like ufs, rock will rely on directory
existence for that test (regardless of what may be inside that
configured directory). In other words, squid-z is not "validate and
initialize" but "initialize if the configured directory is not there".


Any objections to that rock change?


My starting assumption was that squid -z erases or resets any cache_dir.
Then I found out that's not the case.

The init script checks for directories AND FILES, but it is not smart
enough to verify the integrity of the content.


So now the question is:
if squid can verify the cache_dir structure and DB better than an init
script can, why do we even let the script make this kind of decision?

Squid in any case rebuilds ufs stores, or fixes them if corrupted, right?
Why shouldn't squid create a cache_dir if one doesn't exist at startup?
What side effects could come from that?

It could be more complex, but "check", "reset" and "build" flags could be added
to -z, like in -k parse|...|..., with the default being "build",
which is what it does now.

"build" would be the default, compatible with how the current -z flag
works.


The "check" can respond to an init or any other script a more 
informative way about the check like 1 is bad 2 is something dirty 3 
there is not store present 4 DB corrected.


I am just thinking out loud, since the subject was opened.


--
Eliezer Croitoru
http://www1.ngtech.co.il


Re: [squid-users] UDP_HIT/000 after TCP_MISS/504

2013-02-10 Thread Eliezer Croitoru

On 2/8/2013 12:51 PM, Sylvio Cesar wrote:

Sometimes I see the log host01, messages like:

0 10.22.152.171 UDP_HIT/000 79 ICP_QUERY
http://intranet.xx.com.br/video/video01.flv  - NONE/- -
1360240846.373  1 10.22.152.171 TCP_MISS/504 1605 GET
http://intranet.xx.com.br/video/video01.flv

504 = Gateway Timeout, i.e. an upstream problem.
And what do you see in the origin server's logs at that time?
What happens when you try to fetch the object with wget/curl or any other command line tool?
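For example, comparing a direct fetch with one through the proxy (the proxy port 3128 on host01 is an assumption):

curl -sv -o /dev/null http://intranet.xx.com.br/video/video01.flv
curl -sv -o /dev/null -x host01:3128 http://intranet.xx.com.br/video/video01.flv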




these messages sometimes appear also in siblings (host02 and host03)

When this happens, the siblings is going to get a new copy of the object


Have you tried looking at store.log when it happens?
Maybe raise the debug section levels?

Eliezer



--
Att,

Sylvio César,
LPIC1, LPIC2, RHCT, RHCE, NCLA, FreeBSD Committer.


Se vós estiverdes em mim, e as minhas palavras estiverem em vós, pedireis
tudo o que quiserdes, e vos será feito. João 15:7


--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: AW: [squid-users] Re: dns_v4_first on ignored?

2013-02-11 Thread Eliezer Croitoru

What distro?

On 2/11/2013 1:34 PM, Sandrini Christian (xsnd) wrote:

We only use RPM so I can not use the --disable-ipv6 parameter.

-Ursprüngliche Nachricht-
Von: babajaga [mailto:augustus_me...@yahoo.de]
Gesendet: Montag, 11. Februar 2013 11:56
An:squid-users@squid-cache.org
Betreff: [squid-users] Re: dns_v4_first on ignored?

I am not using IPv6, too. So I compiled squid 3.2.7 using


  ./configure --disable-ipv6



--
View this message in 
context:http://squid-web-proxy-cache.1019090.n4.nabble.com/dns-v4-first-on-ignored-tp4658427p4658428.html
Sent from the Squid - Users mailing list archive at Nabble.com.



--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: AW: AW: [squid-users] Re: dns_v4_first on ignored?

2013-02-11 Thread Eliezer Croitoru

My repo indeed.

I don't have a full IPv6 stack here, but IPv6 is enabled out of necessity.
It's a kind of global setting which seems to work for almost everyone.
If you ask me, I would deal with it at the DNS level rather than in squid.
Also take into account that there are domains whose DNS carries only an AAAA record.


If you have a specific site that behaves like that, I would consider debugging the problem deeper to make sure the cause is not a bug.


Notice that dns_v4_first may not be being ignored, but rather may simply be impossible to honor.

BIND can be started with the "-4" option to help you.
Just add a caching DNS server next to the squid instance and point squid at it.
There are other, less robust forwarders which could be used for this one purpose, but BIND is a very good choice.
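A minimal sketch of such a forward-only instance (named.conf; 192.0.2.53 stands in for your real upstream resolver):

options {
    listen-on { 127.0.0.1; };
    forward only;
    forwarders { 192.0.2.53; };
};

Start it with "named -4" so it only speaks IPv4 on the wire, and point squid at it in squid.conf with:

dns_nameservers 127.0.0.1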


Try first and let us know how it works for you.

Eliezer

P.S. You need to configure BIND to use only forwarders, pointing it at the local shared DNS server that serves the clients.


On 2/11/2013 2:06 PM, Sandrini Christian (xsnd) wrote:

Centos 6.3

Source:
http://repo.ngtech.co.il/rpm/centos/6/x86_64/


--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?

2013-02-11 Thread Eliezer Croitoru
I gave you the option of installing a BIND cache server on the squid server itself; I wasn't talking about your main DNS server.
Note that you can always use a secondary DNS instance for this purpose, to filter AAAA responses.



On 2/11/2013 2:48 PM, Sandrini Christian (xsnd) wrote:

Hi

Thanks for your reply.

I can't really mess around with our main DNS servers.

On our 3.1 squids we just disabled ipv6 module which does not sound right to me 
but works fine.

I suggest not disabling v6; work with it if you can.



What we see is

2013/01/30 09:52:00.296| idnsGrokReply: www2.zhlex.zh.ch AAAA query failed.
Trying A now instead.

We do not need any ipv6 support. I'd rather have a way to tell squid to look 
first for an A record.


Please take the time to file a bug report in bugzilla:
http://bugs.squid-cache.org

Describe the problem and add any logs you can to the report, to help the development team track it down and fix it.
It seems like a *big* issue to me, since it points to a dns_v4_first failure.


Try the BIND solution I am using.

I have been logging my DNS server, and it seems like squid 3.HEAD resolves A before AAAA, i.e. it only issues the AAAA query after the A record lookup.


You can try manually removing the IPv6 addresses from lo and the other devices, to make sure there is no v6 address initialized by the CentOS scripts.

On my testing server the system starts with the lo adapter holding
  inet6 addr: ::1/128 Scope:Host
and other devices also carry auto-configured link-local v6 addresses.
So remove them and try restarting the squid service to see what is going on.
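Something like this (iproute2 assumed; eth0 is an example interface name):

ip -6 addr show
ip -6 addr flush dev lo
ip -6 addr flush dev eth0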

Regards,
--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?

2013-02-12 Thread Eliezer Croitoru


On 2/12/2013 2:09 AM, Amos Jeffries wrote:

No. A bug report will not make any difference here. dns_v4_first is
about the sorting the results found, not the lookup order.  is
faster than A in most networks, so we perform that lookup first in 3.1.
This was altered in 3.2 to perform happy-eyeballs parallel lookups
anyway so most bugs in the lookup code of 3.1 will be closed as irrelevant.

Note that the current supported release is now 3.3.1.

Thanks,

The logic seemed odd to me, but now I understand the reason for what happens.

> This is VERY likely to be the problem. Squid tests for IPv6 ability 
automatically by opening a socket on a private IP address, if that works 
the socket options are noted and used. There is no way for Squid to 
identify in advance of opening upstream connections whether the NIC the 
kernel chooses to use will be v6-enabled or not.
> Notice that the method used to disable IPv6 was to simply not assign 
IPv6 address to the NIC, nothing at the sockets layer was actually 
disabled. So every NIC needs to be checked and disabled individually as 
well, and any sub-system loading IPv6 functionality into the kernel also 
needs disabling as well.


>(Warning: soapbox)
>  The big question is, why disable in the first place? v6 is faster 
and more efficient than v4 when you get it going properly. And one he*l 
of a lot easier to administrate. If any of your upstreams supply native 
connections it is well worth taking the option up. If not there is 
always 6to4 or other tunnel types that can be built right to the proxy 
box to get IPv6 at only a small initial latency on the SYN packet (ping 
192.88.99.1 to see what 6to4 adds for you). Note that these are IPv6 
connectivity initiated from the proxy to the Internet *only*, so 
firewall alterations are minimal to get Squid v6-enabled.


Amos

The main problem with IPv6 is that most ISPs around the world don't support/provide it yet.
While trying to use a 6to4 tunnel I have seen some weird stuff going on when a gateway is used.
A proxy is another matter, and speed is most likely the issue in the cases where a 6to4 tunnel is not being used.


Regards,
--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: AW: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?

2013-02-12 Thread Eliezer Croitoru
Try to contact the DNS server's maintainer using the postmaster or any other relevant address.

You can also ask about it on the ISOC mailing list.

BIND has very nice logging options for lazy and problematic DNS servers, which can help you prevent these issues.

It's a very common problem in the DNS world, not related just to IPv6.

Eliezer

On 2/12/2013 12:36 PM, Sandrini Christian (xsnd) wrote:

That is what I guessed as well. But we can not control their DNS, and the "solution" so far was to not check for AAAA records. It is silly for one domain, but it is quite an important one that is used a lot.

Not sure if there are any alternatives? I thought that squid 3.2 does parallel lookups for the AAAA and A records?


--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: AW: AW: AW: AW: [squid-users] Re: dns_v4_first on ignored?

2013-02-12 Thread Eliezer Croitoru

Many admins will be happy to know about these domains.
The domains should be properly maintained and fixed, or their admins should get some help finding the culprit.

As I posted before, the ISOC list is full of requests for help regarding similar problems, and of solutions other than the one you have used.


Eliezer

On 2/12/2013 7:01 PM, Petter Abrahamsson wrote:

Christian,

This sounds very similar to what I have seen with a few sites.
My solution was to add the problematic domains to /etc/hosts (only ipv4
address) and restart squid. I'm not proud or happy about this solution but
it does the trick for me.

Kind regards,
/petter
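For reference, the /etc/hosts pinning described above would look like this; the address is a placeholder for the site's real A record:

192.0.2.10   www2.zhlex.zh.ch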



--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Caching URLs with a ? in them?

2013-02-13 Thread Eliezer Croitoru

On 2/13/2013 5:59 PM, Scott Baker wrote:

The URL ONLY changes for logging purposes. The content being served is static. The serial number is ONLY present so I can comb the logs and find who/when picked up a resource.

Still, the URL is constantly changing, and the proxy cannot know any of the reasons that you, as the application developer, do.


If you are the one designing the URL, then consider changing this behavior.
There are many solutions for your specific needs, and if you are not familiar with them, try to get some help from someone with a bit more experience in the area.

It depends on your environment and application structure, etc.
If you can come up with a way to describe what you want to achieve, many here can try to help you.


Regards,
Eliezer


Re: [squid-users] query about --with-filedescriptors and ulimit

2013-02-14 Thread Eliezer Croitoru

On 2/14/2013 11:12 AM, Amm wrote:

ulimit -H -n gives 4096
ulimit -n gives 1024

These are standard Fedora settings, I have not made any changes.


So back to my question:
If I am compiling squid with --with-filedescriptors=16384
do I need to set ulimit before starting squid?

Or does squid automatically set ulimit?

This gives squid 16384 as its default limit, if available.
In a case where the system limit is lower (1k or 4k), the lower limit is enforced by the OS.

You need to raise the limit at the OS level for this specific service/user/process.

Many admins prefer to just add a line to the startup script:
ulimit -n 16384
(or another limit; note the -n, which selects the open-files limit)

It works fine, so feel free to use it, unless you prefer to do it the way the Fedora/Linux structure offers the admin, sketched below.
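Namely /etc/security/limits.conf; a sketch, assuming squid runs as the user "squid":

squid soft nofile 16384
squid hard nofile 16384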



Regards,
Eliezer




Thanks


Amm.


--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] StoreID helper example at the wiki.

2013-02-17 Thread Eliezer Croitoru

Just updated the wiki at:
http://wiki.squid-cache.org/Features/StoreID#Helper_Example

With a nice example of the helper input/output to illustrate the way it works:

http://wiki.squid-cache.org/Features/StoreID#Helper_Input.2BAFw-Output_Example

If you have a pattern that you would like to share with others, this is the place.

I will update the wiki in the coming weeks with a nice request example to illustrate a couple of usage options.


If you have a helper in any programming or scripting language, please feel free to post it as an example, in any of:

- python
- perl
- c
- c++
- lua
- bash
- java
- erlang
- lisp

Benchmarks for helpers are also welcome.
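To get the ball rolling, here is a minimal bash sketch modeled on the wiki example; the ytimg pattern and the .squid.internal key are only an illustration:

#!/bin/bash
# one URL (plus optional extras) per line on stdin; answer "OK store-id=<key>"
# to fold equivalent URLs into one cache object, or "ERR" to leave it alone.
while read url rest; do
  case "$url" in
    http://*.ytimg.com/*)
      path="${url#http://}"   # strip the scheme
      path="${path#*/}"       # strip the host, keep the path
      echo "OK store-id=http://ytimg.squid.internal/$path"
      ;;
    *) echo "ERR" ;;
  esac
done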

--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Re: Caching netflix by Mime headers

2013-02-17 Thread Eliezer Croitoru
You will need more than just one or two lines of logs and data to determine that.

I don't know a thing about how Netflix players do their stuff, but I doubt they make it as simple as "cache it using basic squid".


Eliezer

On 2/17/2013 9:01 PM, Luis Daniel Lucio Quiroz wrote:

I turn on more loggin and i realize this


1361126274.457 66976 192.168.7.134 TCP_MISS/206 18439445 GET
http://108.175.42.86/658595255.ismv?c=ca&n=812&v=3&e=1361155197&t=L_cj-INb4sDdWF9RHoaOwwjBg7o&d=android&p=5.c4MuCNB5I0-lmXZGQaxWaOpiwGX91JBhZqIvTbIHroM
- HIER_DIRECT/108.175.42.86 application/octet-stream


1361126280.021 72537 192.168.7.134 TCP_MISS/206 1095098 GET
http://108.175.42.86/658618947.isma?c=ca&n=812&v=3&e=1361155197&t=_I4PVA3JkFpFxS90V8qgmM1Q-OU&d=android&p=5.c4MuCNB5I0-lmXZGQaxWaOpiwGX91JBhZqIvTbIHroM
- HIER_DIRECT/108.175.42.86 application/octet-stream

My question is: if I force caching of \d+\.ism[av] files, will the ?-payloads be clashed together, or will it differentiate a?b and a?c, for example?

I hope to be clear

LD


--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] Weird CC header!

2013-03-03 Thread Eliezer Croitoru
While researching caching I was wondering how to react to this kind of header.

What do you think about this CC header (note the leading zero in max-age)?
Cache-Control: public,max-age=01209600,must-revalidate,proxy-revalidate

Eliezer


Re: [squid-users] Weird CC header!

2013-03-03 Thread Eliezer Croitoru



On 3/4/2013 4:34 AM, Amos Jeffries wrote:


Looks pretty normal for a server header to me, redundant details and all.
... cache for 14 days and revalidate on every use.

Where is that coming from? client or server? with what other headers
around it?

Amos

http://redbot.org/?id=H8MQPo

Eliezer
--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Blacklist Service for Squid Proxy - Squidblacklist.org

2013-03-17 Thread Eliezer Croitoru

A very nice idea.
Can you please share how you collect these lists?

Best regards,
Eliezer

On 3/17/2013 4:00 AM, Squidblacklist wrote:



  I am inviting you all to squidblacklist.org, a new service
   specializing in blacklists formatted specifically for use with squid
   proxy integrated acl support. Your criticism and contributions are
   not only welcomed, but requisite for success.


   Thank you.


   Signed.

   Fix Nichols

   http://squidblacklist.org


Re: [squid-users] Blacklist Service for Squid Proxy - Squidblacklist.org

2013-03-19 Thread Eliezer Croitoru

This is the place to get help.
Don't hesitate to just ask.

Eliezer

On 3/19/2013 9:27 AM, Squidblacklist wrote:

Scratch that, nothing wrong with -i in the directive, It appears my
test environment has developed a routing issue directing packets to the
squid proxy, I apologize for the wasted messages



Signed,

Fix Nichols

http://squidblacklist.org


On Tue, 19 Mar 2013 20:22:15 +1300
Amos Jeffries  wrote:


On 19/03/2013 8:07 p.m., Squidblacklist wrote:

Sir, if I do as you suggest and insert a -i

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320


the blacklist acls' only work half the time with the -i inserted,
do you have any suggestions for a solution? In the meantime I am
leaving -i out of the directive to retain functionality of the
blacklists


Very strange. All it does is make the regex pattern match
case-insensitive. Nothing related to ACLs.

I am talking about:
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

as per: http://www.squid-cache.org/Doc/config/refresh_pattern/

Amos





Re: [squid-users] not working tproxy in squid 3.2

2013-03-19 Thread Eliezer Croitoru

Hey Oleg,

I want to understand a couple of things about the situation.
What is the problem? A memory leak?
How do you see the memory leak, and where?
Does the memory leak you are talking about appear only when tproxy is used?

What is the load on the proxy cache?
Do you use it for filtering or just as a plain cache?
In what environment?
The more details you can give on the scenario, and the more precisely you can point at the problem, the happier I will be to help us find the culprit.


What linux distro are you using?

Regards,
Eliezer

On 3/19/2013 1:41 PM, Oleg wrote:

   Hi, all.

After squid 3.1 ate all of my memory, I installed squid 3.2 (which also ate all of my memory, but that is another story). It seems that in squid 3.2 tproxy does not work right. Squid replies to my requests, but the packet count is too small for a normal workflow. If I connect directly to squid (normal mode, port 3128), all works fine.

How can i debug this problem?

My config (3.2.8):

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access allow all
http_port 3128
http_port 3129 tproxy
access_log none
coredump_dir /usr/local/var/cache/squid
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
url_rewrite_children 30 startup=5 idle=10 concurrency=0
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
cache_effective_user proxy

iptables-save:

# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*raw
:PREROUTING ACCEPT [7824875024:8401335411812]
:OUTPUT ACCEPT [3675157306:6129226492352]
COMMIT
# Completed on Wed Mar  6 15:41:59 2013
# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*mangle
:PREROUTING ACCEPT [6770135987:6702261415787]
:INPUT ACCEPT [4838725878:6108754481433]
:FORWARD ACCEPT [2985099037:2292524666165]
:OUTPUT ACCEPT [3675156676:6129226454540]
:POSTROUTING ACCEPT [6660255713:8421751120705]
:tproxied - [0:0]
-A PREROUTING -p tcp -m socket --transparent -j tproxied
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3129 --on-ip 0.0.0.0 --tproxy-mark 0x1/0xffffffff
-A tproxied -j MARK --set-xmark 0x1/0xffffffff
-A tproxied -j ACCEPT
COMMIT
# Completed on Wed Mar  6 15:41:59 2013
# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*nat
:PREROUTING ACCEPT [166764142:12594892291]
:INPUT ACCEPT [88382392:5321491245]
:OUTPUT ACCEPT [54669707:3295422034]
:POSTROUTING ACCEPT [132896164:10559090386]
COMMIT
# Completed on Wed Mar  6 15:41:59 2013
# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*filter
:INPUT ACCEPT [14588788:12990241586]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [12967278:12836984550]
:block_ip - [0:0]
:fail2ban-ssh - [0:0]
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -s 10.232.0.0/16 -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -s 10.232.0.0/16 -p tcp -m tcp --dport 3129 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3129 -j DROP
-A FORWARD -o eth0 -j block_ip
-A fail2ban-ssh -j RETURN
COMMIT
# Completed on Wed Mar  6 15:41:59 2013

ip rule:
0:  from all lookup local
3:  from all fwmark 0x1 lookup tproxy
32766:  from all lookup main
32767:  from all lookup default

ip rou show table tproxy:
local default dev lo  scope host

This configuration works fine with squid 3.1.



--
Eliezer Croitoru
http://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] not working tproxy in squid 3.2

2013-03-20 Thread Eliezer Croitoru

On 3/19/2013 9:24 PM, Oleg wrote:

On Tue, Mar 19, 2013 at 08:49:25PM +0200, Eliezer Croitoru wrote:

Hey Oleg,

I want to understand couple things about the situation.
what is the problem? a memory leak?


   1 problem - memory leak;
   2 problem - tproxy doesn't work in squid 3.2.


I can think of ways a squid configuration could cause both of them.


How do you see the memory leak? and where?


   I just start squid, start top, and wait about an hour while squid grows from 40MB to 800MB and the kernel kills it.


The memory leak you are talking about is in a case of tproxy usage only?


   It's hard to say. I ran squid 3.2 with non-working tproxy (as I wrote) but with the normal proxy on TCP port 3128, and it ate my memory too. So tproxy was configured, but not used.


what is the load of the proxy cache?
do you use it for filtering or just plain cache?


   Only for filtering.


on what environment?


   What do you mean by environment?


ISP? OFFICE? HOME? ELSE...


the more details you can give on the scenario and point with your
finger on the problem I will be happy to assist us finding the
culprit.

What linux distro are you using?


   Debian 6 and also tried debian 7.
My opinion is that you don't need to test on 7 or run special tests, but it helped us understand the nature of the problem.

Try not using the filtering helper, running with only the defaults plus tproxy.
Also try using this script, with tproxy on port 3129 and http_port 127.0.0.1:3128:


##start of script
#!/bin/sh  -x
echo "loading modules requierd for the tproxy"
modprobe ip_tables
modprobe xt_tcpudp
modprobe nf_tproxy_core
modprobe xt_mark
modprobe xt_MARK
modprobe xt_TPROXY
modprobe xt_socket
modprobe nf_conntrack_ipv4
sysctl net.netfilter.nf_conntrack_acct
sysctl net.netfilter.nf_conntrack_acct=1
ip route flush table 100
ip rule del fwmark 1 lookup 100
ip rule add fwmark 1 lookup 100
ip -f inet route add local default dev lo table 100

echo "flushing any exiting rules"
iptables -t mangle -F
iptables -t mangle -X DIVERT

echo "creating rules"
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -s ___LAN -p tcp -m tcp --dport 80 
-j TPROXY --on-port 3129 --tproxy-mark 0x1/0x1

##end of script


--
Eliezer Croitoru


Re: [squid-users] StoreId

2013-03-21 Thread Eliezer Croitoru

The old helpers should be fine.
Do you have experience with the old store_url_rewrite??

Eliezer

On 3/21/2013 5:21 PM, Marcos A. Dzieva wrote:

Dear.

How do I test StoreID with dynamic page? (ex. youtube)
What is the command that I use in squid.conf ?
Need to compile squid with some specific configuration?

Thanks.
Dzieva


--
Eliezer Croitoru


Re: [squid-users] Why is this un-cacheable?

2013-03-22 Thread Eliezer Croitoru




 Original Message 
Subject:Re: [squid-users] Why is this un-cacheable?
Date:   Fri, 22 Mar 2013 11:09:52 +0200
From:   Eliezer Croitoru 
To: squid-users@squid-cache.org



On 03/22/2013 10:04 AM, csn233 wrote:

URL:http://armdl.adobe.com/pub/adobe/reader/win/9.x/9.5.0/en_US/AdbeRdr950_en_US.exe

It shows a MISS, regardless of how I tweak the refresh_pattern,
including the adding of all the override* and ignore* options:

Last-Modified: Wed, 04 Jan 2012 07:08:53 GMT
...
X-Cache: MISS from ...
X-Cache-Lookup: MISS from ...


What have I missed, so to speak?

http://redbot.org/

will help you.

Regards,
Eliezer




Re: [squid-users] Re: need help from somebody installed videocache with squid !

2013-03-23 Thread Eliezer Croitoru

On 03/23/2013 10:25 AM, Ahmad wrote:

hi Amos , thanks for reply,

do you think that videocache has bugs with squid 3.x ??

The answer is "we don't know".
I wrote StoreID for the new squid 3.HEAD, which is not available in binary form yet.

you might want to take a peek at that:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator

Eliezer

i mean that currenty  i have debian os with videocache 2.7 and works fine
with videoache ??



with my best regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/need-help-from-somebody-installed-videocache-with-squid-tp4659178p4659181.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Basic question on refresh_pattern

2013-03-25 Thread Eliezer Croitoru

On 03/25/2013 12:16 PM, Amos Jeffries wrote:


Since all URLs are one or more bytes long it always matches, ie "any 
URL".


But don't confuse that for meaning the pattern applies on *all* URLs. 
It only matches when refresh_pattern is applied and only if no earlier 
pattern matched the URL.


Amos 

It's the most basic rule, one that should always exist, right?

Eliezer


Re: [squid-users] Upgrading SQUID from 3.1.6 to 3.1.23

2013-03-29 Thread Eliezer Croitoru

On 03/28/2013 07:02 PM, Vernet Jerome wrote:

My question: can I simply:
-stop SQUID3/dansguardian
-swap binary (/usr/sbin/squid3) with the new version
-start SQUID3/dansguardian ?
  
Is there something to put somewhere else ? Helpers ?
  
Will it work like that ? If something fail, can I simply get the old squid3(.1.6) binary ?
  
Furthermore, upgrading from 3.1 to 3.2 (and may be 3.3) is a difficult task ? Is it worth ?
  
Thanks for help

What?
I cannot understand what you have done.

Restarted?
Can you please share iptables + squid.conf + "squid -v" output?

How exactly are you using dansguardian + squid?

Thanks,
Eliezer


Re: [squid-users] running 2.6 and 3.3 in parallel ?

2013-04-04 Thread Eliezer Croitoru

On 04/04/2013 12:10 PM, Per Jessen wrote:

fyi, I have not yet determined if I can split the traffic between two
squids in a reasonable way.

Why 2.6 again?? url_rewrite exists in 3.3, so what is the question?

Eliezer


Re: [squid-users] Warning squid -k parse

2013-04-04 Thread Eliezer Croitoru

On 04/04/2013 09:24 PM, Marcos A. Dzieva wrote:

Dear...

I am using squid 3.HEAD on a server ubuntu, and warning occurs before 
processing the configuration file.

I have removed them acls from squid.conf, and warnings keep popping up.

What could be wrong ?
This is a known BUG which should be reported and discussed in bugzilla / squid-dev.

http://bugs.squid-cache.org/

Eliezer


Re: [squid-users] Issue related to using Squid 3.1 or 3.29 and accessing a site that uses a recursive DNS record. (30 seconds to bring up site)

2013-04-08 Thread Eliezer Croitoru

On 4/9/2013 7:27 AM, Duncan, Brian M. wrote:

Squid 3.1 or 3.29 takes like 30 seconds just to resolve the name then bring up 
the page.

Probably DNS issues, not related to the squid version in any way.
This is a known issue that is not related to squid, but I'm happy you posted about it.


Regards,
Eliezer


Re: [squid-users] Issue related to using Squid 3.1 or 3.29 and accessing a site that uses a recursive DNS record. (30 seconds to bring up site)

2013-04-09 Thread Eliezer Croitoru

On 4/9/2013 4:12 PM, Duncan, Brian M. wrote:

I would really like to move on from the 2.x and take advantage of how much 
better the newer version supposedly scale.

Thanks again.

Sorry if I wasn't clear.

I will try to rephrase the logic.

I went from the bottom up.

curl + wget + a simple ruby script = slow response.
Notice that this address is a redirection.
I am now unsure about the DNS issue that I saw this morning.

The main problem is not the page but the SSL, which takes forever...

It might be an HTTP 1.0 vs 1.1 issue, which is the same for wget + curl + squid.

That leaves the main problems as DNS and the service (HTTP 1.1), rather than squid.


Regards,
Eliezer



Re: [squid-users] Issue related to using Squid 3.1 or 3.29 and accessing a site that uses a recursive DNS record. (30 seconds to bring up site)

2013-04-10 Thread Eliezer Croitoru

On 4/9/2013 10:46 PM, Duncan, Brian M. wrote:

Thanks for the reply and further clarification,

I still believe the issue I am reporting is specific to DNS and how Squid's 
internal DNS resolver works.

I forgot to mention if I bypass using the hostname in my test, and enter one of 
the resolved IP's instead of webapps.kattenlaw.com it is immediate.  There is 
no delay in bringing the page up.

I also tried another variation while testing today, I re-compiled Squid 3.2.9 
with --disable-internal-dns and it has different behavior indicating even 
further that the problem lies within the internal resolver Squid 3.x uses.

I am not sure, but I think it needs a squid-dev report and review.

Can you please report this bug on squid-dev or in bugzilla?
http://bugs.squid-cache.org/

Thanks,
Eliezer


Re: [squid-users] high traffic with google

2013-04-12 Thread Eliezer Croitoru
I suggest you contact them; adding some headers might also help in this case.

Regards,
Eliezer

- Original Message -
From: "Alexandre Chappaz" 
To: squid-users@squid-cache.org
Sent: Thursday, April 11, 2013 6:38:04 PM
Subject: [squid-users] high traffic with google

Hi,

we are handling a rather large network ( ~140Kusers ) and we use one
unique public IP address for internet traffic. This lead google to get
suspicious with us ( captcha with each search )

Do you know if google can whitelist us in some way? where to contact
them? any way to smartly bypass this behavior?


Thanks
Alex


Re: [squid-users] Re: Question about encryption of data

2013-04-12 Thread Eliezer Croitoru

On 4/12/2013 5:32 PM, mazik24 wrote:

Thanks for your answer, but unfortunately all vpn ports are blocked, even
openvpn is not working properly. I want to know if it is possible to encrypt
data using squid and bypass filtering using squid alone.
Squid doesn't provide this kind of function itself, but it can use a cache_peer with SSL encryption.
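For example (squid 3.x syntax; parent.example.com stands in for a peer you control on the far side of the filter):

cache_peer parent.example.com parent 443 0 no-query ssl
never_direct allow all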


Regards,
Eliezer


[squid-users] Re: squid 3.Head storeid

2013-04-18 Thread Eliezer Croitoru

On 4/16/2013 9:44 PM, syaifuddin wrote:

there have two output, first from your log
OK store-id=$url-out

This should be the right one.
You can add other stuff into it, like the URL, but that is not tested.
You can look through squid-dev to see my tests.

I will be happy to test your code.
I will post my ruby code in the next few weeks.
Can you add YouTube video caching to your code?

If yes, feel free to post it.
(I am doing it already.)

Eliezer


[squid-users] Re: squid 3.Head storeid

2013-04-18 Thread Eliezer Croitoru

On 4/16/2013 9:44 PM, syaifuddin wrote:


i have test my store-id for youtube, fbcdn, ytimg and sourceforge.
overall HIT
this my store-id
.
hope this store-id can help other


best regard

ucok_karnadi


why do you use this code??

my $ref_log = File::ReadBackwards->new('/var/log/squid/yt.log');
.

why do you need to read old logs?

Eliezer




[squid-users] Re: squid 3.Head storeid

2013-04-18 Thread Eliezer Croitoru

On 4/16/2013 9:44 PM, syaifuddin wrote:

but if read on bottom like this [channel-ID] [result] [kv-pair] [URL]
As I posted, since there is no concurrency support in squid for this helper yet, there is no need to send the channel-ID.

Just send:
"OK store-id=<the-store-id>"
or
"OK store-id=**blank**" (**blank** == nothing)

Eliezer


Re: [squid-users] Re: need help in cache_peer

2013-04-27 Thread Eliezer Croitoru

On 4/27/2013 9:11 PM, babajaga wrote:

It is always a good idea to post full squid.conf



Why not? unless you have something to hide... like passwords etc.




[squid-users] Little free consult about cache for the eyes of the users.

2013-05-06 Thread Eliezer Croitoru
I am doing a small not-for-profit consult for a company in my country whose site currently sends the headers below, and they result in a lot of TCP_MISS on 3.HEAD (two months old).


data can be seen at:
http://redbot.org/?uri=http%3A%2F%2Fagadastories.org.il%2Fnode%2F265

and at:
http://redbot.org/?descend=True&uri=http://agadastories.org.il/node/265

To make this site cacheable, a couple of things need to change, like the expiration, etc.

I would like to share this scenario to let others see good settings on both the web server and the cache proxy.


Thanks,
Eliezer



[squid-users] Looking for squid spec file

2013-05-13 Thread Eliezer Croitoru
Since I had a little trouble with my old spec file for creating squid RPMs for CentOS, I am looking for a new one.

I remember a nice guy from here that had a SPEC file.

If you do have one please post it or send it to my personal email.

Thanks,
Eliezer


Re: [squid-users] Looking for squid spec file

2013-05-13 Thread Eliezer Croitoru

On 5/13/2013 3:30 PM, Alex Domoradov wrote:

For which version of squid do you need spec file?

3.2
3.3
3.head

any of the above ^^
I had one for 3.2, but now 3.3 is stable, so I don't really mind which one of them I customize again.


Eliezer



On Mon, May 13, 2013 at 3:02 PM, Eliezer Croitoru  wrote:

Since I had a little trouble and my old spec file to create RPM for CentOS I
am looking for one.
I remember a nice guy from here that had a SPEC file.

If you do have one please post it or send it to my personal email.

Thanks,
Eliezer




Re: [squid-users] Looking for squid spec file

2013-05-13 Thread Eliezer Croitoru

On 5/13/2013 6:13 PM, Amm wrote:

Well one can modify it to require for init.d (or whatever that package is 
called)

Or even pick up spec file from previous Fedora releases.

Amm
And since someone on the users list has a ready-to-use spec file, just share it with me and I will use it.

Right now I don't have the headspace to work on it too much.
Why work hard for a long time only to find that someone else already has the file??


Eliezer


Re: [squid-users] Little free consult about cache for the eyes of the users.

2013-05-19 Thread Eliezer Croitoru
After analyzing the server, it seems there are some caching issues on the developer's side whose cause was not known to me.

The main problem with the page is:
the Last-Modified header's value isn't a valid HTTP date, since the headers are:
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Sun, 19 May 2013 19:47:08 +0000

which is a bit unreasonable.
The reasons: an expiration date far in the past, and an ever-changing Last-Modified header.
The main problem behind the slow site is not cache, HTML or DB issues; the main problem IS CPU over-usage, since there is an internal CAPTCHA thingy that works, but not like a charm.

Ideas on how to solve it??

My answer is to replace this internal CAPTCHA plugin inside Drupal with something like Google's CAPTCHA service, which will move a lot of load from this small server onto another one.
Notice that the main issue is the image computation, which is not really required on this specific server.

The above is one issue.
Another issue is that, in this specific case, I want to force caching of the static print pages.

I will be happy to hear the best way to force it.
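As a starting point, a forced refresh_pattern; the /print/ path is an assumption about the site's URL layout, and the numbers are placeholders:

refresh_pattern -i /print/ 10080 90% 43200 override-expire override-lastmod ignore-reload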

Thanks,
Eliezer

On 5/6/2013 6:52 PM, Eliezer Croitoru wrote:

I am doing a small non for profit consult for a company in my country
which currently have these headers and they result a lot of tcp_miss on
3.head (two month old).

data can be seen at:
http://redbot.org/?uri=http%3A%2F%2Fagadastories.org.il%2Fnode%2F265

and at:
http://redbot.org/?descend=True&uri=http://agadastories.org.il/node/265

To make this site cache-able there is a need to change couple things
like expiration etc.

I would like to share this scenario to allow others see good settings on
web server and cache proxy.

Thanks,
Eliezer





Re: [squid-users] Re: what is best method to connect two squid servers on the same router?

2013-05-20 Thread Eliezer Croitoru

On 5/20/2013 12:17 PM, Ahmad wrote:

hi Amos,
thanks for the reply, sorry for the late answer.
I don't think it's a hardware issue; I mean, I don't think my hardware router can't bear two squids. It can do it, and perfectly.

I went back to its reference manual and found it has a specific enhancement implemented for WCCP, and that it can load-balance a lot of clusters.
Again,

my platform is a Cisco 7604 multilayer switch.

About your question on CPU dissipation:
1- when no squid is working on the router ===> router CPU is 4%
2- when 1 squid is working on the router ===> router CPU is 35%
3- when 1 squid is running and I add the 2nd server while the 1st squid is running, I note that squid 1 fails, the 2nd squid doesn't work, and the router CPU reaches 90, 95, 97%
The problem is that WCCP works at the IP level, so you cannot register the same IP twice for the same transparent service.
Also, you need to configure exceptions on the Cisco so that traffic coming from one cache does not get redirected into the second one.
I don't know what your network load is, but squid uses epoll, which cuts the CPU cost dramatically compared to plain select()-based sockets.

Is this a self-compiled squid or one from the repositories?
If it's from one of the repos, which one?



Actually I don't know if I need to change the return, forwarding, or assignment methods to fix my problem!!!

Which method is better between squid and the router? Should I leave the interface between switch and router as a Layer 2 port and put the WCCP settings on the port's VLAN interface??
If you ask me, just put the squid machine in bridge mode to see the actual load on the network, and to understand whether WCCP or bridge mode fits better.




Or leave it as a routed "Layer 3" port and apply the WCCP settings on the gigabit interface??

Any documents from squid about my issue??

I have written about how to use squid as a transparent cache proxy with WCCP:
http://wiki.squid-cache.org/ConfigExamples/UbuntuTproxy4Wccp2



wish to help

regards


If you need more help just ask.

Regards,
Eliezer






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/what-is-best-method-to-connect-two-squid-servers-on-the-same-router-tp4659922p4660147.html
Sent from the Squid - Users mailing list archive at Nabble.com.





Re: [squid-users] Strange behavior in selection of tcp_outgoing_address

2013-05-20 Thread Eliezer Croitoru

On 5/20/2013 11:34 AM, Alex Domoradov wrote:

older than 3.2

Newer than the 3.2 betas, so 3.2 stable.
But now there is a 3.3 stable, so if you can use it, that's better.

Eliezer


Re: [squid-users] 3.HEAD and TCP_MEM_HIT_ABORTED/200, TCP_MISS_ABORTED/200

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 3:56 PM, csn233 wrote:

http://wiki.squid-cache.org/SquidFaq/SquidLogs#Squid_result_codes

*_ABORTED means the client TCP connection got closed on Squid, probably by
the client browser.

Amos


Thanks. So these are concatenated which explains why searching for the
full word doesn't return anything.

 From a browser point of view however, it works - ie no errors visible
on the browser, the streaming video runs to completion. It's just that
every now and then the _ABORTED code pops up in the access.log while
the video is running.

I'll try to narrow it down a bit more.

I have noticed that some people get wrong logs, but I cannot understand it, since on my build it actually works.

I am releasing new RPMs of 3.HEAD and 3.3.5.

Eliezer


[squid-users] I am pleased to release 3.3.5 + 3.HEAD RPM for CentOS 6.4.

2013-05-21 Thread Eliezer Croitoru

I am pleased to release 3.3.5 + 3.HEAD RPMs for CentOS 6.4.
After a great loss of data, which required me to rebuild the spec and script files from scratch, we now have a working squid package for CentOS which actually has StoreID: an enterprise-class proxy package.

The stable version is at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/

and Head is at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/

HEAD has a lot of features, and one of them is StoreID, which helps cache YouTube and other videos.


I am now releasing my work on the YouTube helper, which can help you cache YouTube videos with no problems.

You can find it on the squid wiki at:
http://wiki.squid-cache.org/Features/StoreID/Helper

Notice that a lot of blood and sweat went into the helper and StoreID, to make sure they work for everybody.


Best regards,
Eliezer


Re: [squid-users] I am pleased to release 3.3.5 + 3.HEAD RPM for CentOS 6.4.

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 5:04 PM, csn233 wrote:

Thanks for the great work on StoreID.

 From my testing, Youtube has more than just itag/range/redirect. I've
just found a /videoplayback of type text/plain rather than just
video/flv. If I excluded this text/plain, it works, otherwise I get a
blank window that doesn't play.

It could be just my setup of course...


I didn't share everything; there are a couple of domains:
s.youtube.com and
.c.youtube.com

There are also a couple of other things inside this scope that are excluded from using StoreID.

One sec, I will look at my settings.

acl ytcblcok urlpath_regex (begin\=)
acl ytcblockdoms dstdomain redirector.c.youtube.com
acl offlinedoms dstdomain "/etc/squid/offline.doms"
acl ytimg dstdomain .ytimg.com
acl img urlpath_regex (\.jpg)
acl video urlpath_regex (\.mp4|\.flv)
acl nocache urlpath_regex &non_cache\=1$
acl rewritedoms dstdomain .dailymotion.com .video-http.media-imdb.com .c.youtube.com av.vimeo.com .dl.sourceforge.net .ytimg.com .vid.ec.dmcdn.net .videoslasher.com

acl banned_methods method CONNECT POST DELETE PUT

refresh_pattern ^http://(youtube|ytimg|vimeo|[a-zA-Z0-9\-]+)\.squid\.internal/.* 10080 80% 79900 override-expire override-lastmod ignore-no-cache ignore-private ignore-reload ignore-must-revalidate
refresh_pattern ^http://imdbv\.squid\.internal/.*mp4.* 10080 80% 28800 override-expire override-lastmod ignore-no-cache ignore-private ignore-reload

store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow rewritedoms !banned_methods


For me the setup is pretty simple and works like a charm.
By default squid HEAD handles all the files which don't need to be cached, and forces only the ones which actually need to be cached.
Also, the helper I wrote is precise and does only what it needs to do, in order to prevent what you are describing.

Just install Ruby and a small XML gem; everything else is standard.

Best Regards,
Eliezer


Re: [squid-users] 3.HEAD and TCP_MEM_HIT_ABORTED/200, TCP_MISS_ABORTED/200

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 4:50 PM, csn233 wrote:

I'm interested in StoreID for videoplayback caching. Is this in 3.3.x?
I thought it's only in 3.4/3.HEAD?
It's in 3.HEAD only, but now you have an RPM, so it's easier for all the enterprises out there to have fun and save lots of bandwidth.


Eliezer


Re: [squid-users] Compiling squid-3.3.5 with SSL on RedHat EL 6

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 5:23 PM, Chris Ross wrote:


  I had gotten a patch for compiling with SSL on RHEL6 from the net, presumably 
by following something noted on this mailing list.  When 3.3.5 came out 
yesterday, and the change log noted that this issue had been addressed, I was 
pleased to upgrade to 3.3.5.

  However, with an unmodified tree, I seem to still be unable to compile 
certificate_db.cc on my x86_64 RedHat EL 6.3 host.  The following are the 
compilation errors:

g++ -DHAVE_CONFIG_H  -I../.. -I../../include -I../../lib -I../../src 
-I../../include   -I../../libltdl   -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Werror -pipe -D_REENTRANT -g -O2 -std=c++0x -MT certificate_db.o 
-MD -MP -MF .deps/certificate_db.Tpo -c -o certificate_db.o certificate_db.cc
certificate_db.cc: In static member function ‘static void 
Ssl::CertificateDb::sq_TXT_DB_delete(TXT_DB*, const char**)’:
certificate_db.cc:170: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:170: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteInvalidCertificate()’:
certificate_db.cc:520: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:520: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteOldestCertificate()’:
certificate_db.cc:551: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:551: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteByHostname(const std::string&)’:
certificate_db.cc:568: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:568: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’
make[3]: *** [certificate_db.o] Error 1


  Is anyone either in the core squid team, or in the user community, aware both 
of the short-coming of the fix for bug 3759, and a way to address the issue 
myself in the short term?

  Thanks…

- Chris


The above is a known issue with RHEL 6.3 and CentOS 6.3.
It requires you either to install custom OpenSSL libs and headers, or to upgrade to 6.4 (which is much more reasonable, to me) and use the fixed OpenSSL in 6.4.


Eliezer


Re: [squid-users] I am pleased to release 3.3.5 + 3.HEAD RPM for CentOS 6.4.

2013-05-21 Thread Eliezer Croitoru

Another mirror is at:
http://www2.ngtech.co.il/rpm/centos/6/x86_64/

which is much faster (GB over 0.7 KB)

Eliezer

On 5/21/2013 4:47 PM, Eliezer Croitoru wrote:

I am pleased to release 3.3.5 + 3.HEAD RPMs for CentOS 6.4.
After a great loss of data, which required me to rebuild the spec and
script files from scratch, we now have a working squid package for CentOS
which actually has StoreID: an enterprise-class proxy package.
The stable version is at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/

and Head is at:
http://www1.ngtech.co.il/rpm/centos/6/x86_64/head/

Head has a lot of features and one of them is StoreID which helps cache
youtube and other videos.

I am now releasing my work on the youtube helper which can help you
cache youtube videos with no problems.
You can find it on squid wiki at:
http://wiki.squid-cache.org/Features/StoreID/Helper

Notice that a lot of blood and sweat went into the helper and StoreID,
to make sure they work for everybody.

Best regards,
Eliezer




Re: [squid-users] Compiling squid-3.3.5 with SSL on RedHat EL 6

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 6:10 PM, Chris Ross wrote:


On May 21, 2013, at 10:28 , Eliezer Croitoru wrote:

The above is known issue with RHEL 6.3 and CentOS 6.3.
This issue requires you to either install some custom openssl libs and headers 
or upgrade to 6.4(which is much more reasonable to me) and use the fixed 
openssl in 6.4.


   Our systems team tells me that my build host already has the latest openssl 
and openssl-devel packages.  I see it has:

openssl-1.0.0-27.el6_4.2.x86_64
openssl-devel-1.0.0-27.el6_4.2.x86_64

   They say that the packages on the system have been updated to ones from 6.4. 
 I'm not 100% sure, and they acknowledge that the system still _identifies_ 
itself as 6.3, but that it's basically a 6.4 system.

   Do you have any information about what red hat errata or bug number is 
associated with the change they made to the shipped openssl-devel in 6.4?  Or, 
the version numbers of the package(s) that are known to work?

   Thanks much…

- Chris


Then there is another issue.
We can compare what packages we have installed and find out what is missing.

The squid wiki states:
http://wiki.squid-cache.org/KnowledgeBase/CentOS

In the spec file I have there is a list of the required libs and packages, and these are:

yum groupinstall "Development Tools"
yum install openldap-devel pam-devel openssl-devel krb5-devel db4-devel 
expat-devel libxml2-devel libcap-devel libtool libtool-ltdl-devel


Try to install all of them and see if something is missing; it should be one of the above.
I am using the same SSL packages you are using, with squid from trunk and stable 3.3.5, so it should work on yours too.


Eliezer


Re: [squid-users] I am pleased to release 3.3.5 + 3.HEAD RPM for CentOS 6.4.

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 10:01 PM, Alex Domoradov wrote:

Thanks for the rpm, but I can't find src.rpm for squid-3.3.5. Could
you point me?

I need to create it.
Since you asked, I will put it there.
If I have not posted it within two days, remind me.

Eliezer


Re: [squid-users] StoreID in 3.3.5

2013-05-21 Thread Eliezer Croitoru

On 5/21/2013 9:46 PM, Romulo Boschetti wrote:

Hi guys.

Can I patch my squid 3.3.5 for have the StoreID feature ?

I've tried to apply this patch 
http://www.squid-cache.org/Versions/v3/3.HEAD/changesets/squid-3-12655.patch on 
squid 3.3.5 sources, but some errors occured  There is another way to make 
this ?

There is another way to make this ?
There is a dirty way... but I recommend just trying the HEAD version, which is pretty stable after all the hard work Amos and the rest of the team have put in.

You cannot apply a patch to a very, very different code base, and this is the real reason for the errors.


Regards,
Eliezer




Thanks
Rômulo Giordani Boschetti





Re: [squid-users] StoreID in 3.3.5

2013-05-21 Thread Eliezer Croitoru

On 5/22/2013 4:41 AM, csn233 wrote:

On Wed, May 22, 2013 at 2:46 AM, Romulo Boschetti
 wrote:

There is another way to make this ?


Not a patching method, but I'm testing the use of 3.HEAD as a
cache_peer for traffic that would benefit from StoreID.


This way it works, but not yet the other way around.
What I mean is that if the first proxy uses StoreID, we still can't make it send the request URL, rather than the StoreID, to the second cache. But I hope that, now that I have a lot of free time, I can put real effort into it, and it will be available in some time.

Just follow along and ask me about it.

Regards,
Eliezer


Re: [squid-users] Compiling squid-3.3.5 with SSL on RedHat EL 6

2013-05-21 Thread Eliezer Croitoru

On 5/22/2013 8:20 AM, Amos Jeffries wrote:

Things to be aware of:

* feature-detection by ./configure can fail when there are more than one
library version installed. The feature detecion may test the OS default
library in /usr, then some unrelated library adds /usr/local or /opt or
similar where files for a second library version gets used by the build.
This is true for all libraries, not just OpenSSL.
   The fix here is to explicitly point --with-openssl= at the particular
path for the desired library whenever there are more than one installed.

* These feature-detection patches make allowance for most permutations
of the known problems. But cannot handle the event where both our
workaround and the official API are failing. If you can be certain you
are hitting one of these cases we would like to know what OpenSSL
version is doing it

* the FIPS library versions which were also failing earlier with
apparently the same build errors. I believe these FIPS builds are also
fixed as a result of the featrue detection, but that is so far
unconfirmed. I do know the FIPS libraries have significantly more chance
of hitting the above case where our workarounds do not work.

* There may be other OpenSSL API problems hidden in any given library
which we are still unaware of and unfixed. 3.3.5 changes only decouple
the existing version-based workarounds from the library version.

In any event, the only real fix for these problems is to replace the
broken library versions with update working ones.
Can it be that two exactly identical CentOS systems running the same configure and make produce different results?
The only thing I can think of is a corrupted library on the OS, and that is not a very nice thing to contemplate.
To make sure our libs are identical, we can compare SHA1 (or stronger) checksums and verify whether a corrupted lib is the reason on his system.
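For example, run on both hosts and compare:

sha1sum /usr/lib64/libssl.so.10 /usr/lib64/libcrypto.so.10
rpm -V openssl openssl-devel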


Eliezer



Amos




Re: [squid-users] Compiling squid-3.3.5 with SSL on RedHat EL 6

2013-05-22 Thread Eliezer Croitoru

On 5/22/2013 10:20 AM, Alex Domoradov wrote:

I think the easiest way to find out with which version of openssl was
link squid is to use ldd

# ldd /usr/sbin/squid | grep ssl
 libssl.so.10 => /usr/lib64/libssl.so.10 (0x7ff8b13d6000)

From mine.
# ldd /usr/sbin/squid  |grep ssl
libssl.so.10 => /usr/lib64/libssl.so.10 (0x7f2a8dbf1000)
it's not the same..

So what do we do?
I can send the openssl RPM I am using to someone if he needs it.

Eliezer


Re: [squid-users] kerberos auth failing behind a load balancer

2013-05-22 Thread Eliezer Croitoru

On 2/28/2013 2:57 PM, Sean Boran wrote:

Hi,

I’ve received (kemp) load balancers to put in front of squids to
provide failover.
The failover / balancing  works fine until I enable Kerberos auth on the squid.

It seems to me like a basic LB problem, since it's working at L7 and not L2.
Why do you use an L7 LB and not L2?
It's less load, less CPU, etc.
You can use HAProxy or even plain Linux for that.

Eliezer



Test setup:
Browser ==> Kemp balancer ==> Squid  ==> Internet
  proxy.example.com proxy3.example.com

  The client in Windows7 in an Active Directory domain.
If the browser proxy is set to proxy3.example.com  (bypassing the LB),
Kerberos auth works just fine, but via the kemp (proxy.example.com)
the browser prompts for a username/password which is not accepted
anyway

Googling on Squid+LBs, the key is apparently to add a principal for the LB, e.g.
net ads keytab add HTTP/proxy.example.com

In the logs (below), one can see the client sending back a Krb ticket
to squid, but it rejects it:
"negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
Unspecified GSS failure.  "
When I searched on that, one user suggested changing the encryption in
/etc/krb5.conf. In /etc/krb5.conf I tried with the recommended
squid settings (see below), and also with none at all. The results
were the same. Anyway, if encryption was the issue, it would not work,
via LB or directly.


Analysis:
-
When the client sent a request, squid replies with:

HTTP/1.1 407 Proxy Authentication Required
Server: squid
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
X-Cache: MISS from gsiproxy3.vptt.ch
Via: 1.1 gsiproxy3.vptt.ch (squid)

OK so far. The client answers with a Kerberos ticket:

Proxy-Authorization: Negotiate YIIWpgYGKwYBXXX

UserRequest.cc(338) authenticate: header Negotiate
YIIWpgYGKwYBXXX
UserRequest.cc(360) authenticate: No connection authentication type
Config.cc(52) CreateAuthUser: header = 'Negotiate YIIWpgYGKwYBBQUC
auth_negotiate.cc(303) decode: decode Negotiate authentication
UserRequest.cc(93) valid: Validated. Auth::UserRequest '0x20d68d0'.
UserRequest.cc(51) authenticated: user not fully authenticated.
UserRequest.cc(198) authenticate: auth state negotiate none. Received
blob: 'Negotiate
YIIWpgYGKwYBBQUCoIIWmjCCFpagMDAuBgkqhkiC9xIBAXX
..
UserRequest.cc(101) module_start: credentials state is '2'
helper.cc(1407) helperStatefulDispatch: helperStatefulDispatch:
Request sent to negotiateauthenticator #1, 7740 bytes
negotiate_wrapper: Got 'YR YIIWpgYGKwYBBQXXX
negotiate_wrapper: received Kerberos token
negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
Unspecified GSS failure.  Minor code may provide more information.


Logs for a (successful) auth without LB:
  .. as above 
  negotiate_wrapper: received Kerberos token
  negotiate_wrapper: Return 'AF oYGXXA== 
u...@example.net


- configuration ---
Ubuntu 12.04 + std kerberod. Squid 3.2 bzr head from lat Jan.
- squid.conf:
- debug_options ALL,2 29,9 (to catch auth)
auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth -d --kerberos
/usr/local/squid/libexec/negotiate_kerberos_auth -s GSS_C_NO_NAME
--ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param negotiate children 20 startup=20  idle=20 auth_param
negotiate keep_alive on

- The LB is configured as a Generic Proxy (does not try to interpret
the HTTP stream), with with Layer 7 transparency
   (it forwards traffic to the squid, the squid see the real client IP,
and squid traffic is routed back though the LB)
I've tried playing with the LB Layer 7 settings, to no avail.

Samba:
net ads join -U USER
net ads testjoin
   Join is OK

net ads keytab add HTTP -U USER
net ads keytab add HTTP/proxy.example.com  -U USER
chgrp proxy /etc/krb5.keytab
chmod 640 /etc/krb5.keytab
strings /etc/krb5.keytab   # check contents
net ads keytab list
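A useful cross-check, assuming the MIT Kerberos client tools are installed, is to list the principals and encryption types the keytab really holds:

klist -ke /etc/krb5.keytab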

/etc/krb5.conf
  [libdefaults]
 default_realm = EXAMPLE.NET
 kdc_timesync = 1
 ccache_type = 4
 forwardable = true
 proxiable = true
 fcc-mit-ticketflags = true
 default_keytab_name = FILE:/etc/krb5.keytab
 dns_lookup_realm = no
 ticket_lifetime = 24h

[realms]
 EXAMPLE.net = {
 kdc = ldap.EXAMPLE.net
 master_kdc = ldap.EXAMPLE.net
 admin_server = ldap.EXAMPLE.net
 default_domain = EXAMPLE.net
 }
[domain_realm]
 .corproot.net = EXAMPLE.NET
 corproot.net = EXAMPLE.NET


Any suggestions on where I could dig further?

Thanks in advance,

Sean Boran





Re: [squid-users] Squid: how to link inbound IPv4 + multiple port connections to unique outbound IPv6's

2013-05-22 Thread Eliezer Croitoru

On 5/22/2013 11:47 AM, bilderberger wrote:

Can anyone see what I've done wrong here? (using Squid 3.1.1 on Centos 6
64bit)


Squid 3.1.1 is very old; as a first step, try a current build from one of these repos:

[squid]
name=Squid repo for CentOS Linux 6 - $basearch
baseurl=http://www1.ngtech.co.il/rpm/centos/6/$basearch
failovermethod=priority
enabled=1
gpgcheck=0


or

[squid]
name=Squid repo for CentOS Linux 6 - $basearch
baseurl=http://www2.ngtech.co.il/rpm/centos/6/$basearch
failovermethod=priority
enabled=1
gpgcheck=0
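
(On the subject-line question itself, a minimal sketch, assuming Squid 3.2+
where the localport ACL type exists; the ports and IPv6 addresses below are
placeholders:)

http_port 3129
http_port 3130
acl in3129 localport 3129
acl in3130 localport 3130
tcp_outgoing_address 2001:db8::1 in3129
tcp_outgoing_address 2001:db8::2 in3130

Each inbound listening port then selects its own outbound IPv6 address.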



Re: [squid-users] Re: R: [squid-users] WARNING: no_suid: setuid(0): (1) Operation not permitted

2013-05-22 Thread Eliezer Croitoru

On 2/6/2013 11:49 PM, Alex Rousskov wrote:

Amos, bug 3763 is not about setuid(0) warnings, although both bugs may
have been caused by the same Coverity-inspired motivation to check the
return values of system calls.

Simone, yes, I think you should report the setuid warning bug. If you
do, please note that it appears to be BSD-specific.


Thank you,

Alex.

I have a case on FreeBSD, and it's confirmed only on FreeBSD and not Linux.
What can we do about it?
Do we want to handle it?

Eliezer


Re: [squid-users] Compiling squid-3.3.5 with SSL on RedHat EL 6

2013-05-22 Thread Eliezer Croitoru

On 5/22/2013 5:01 PM, Chris Ross wrote:

   From mine:

libssl.so.10 => /usr/lib64/libssl.so.10 (0x7f08f8eb2000)

   I think that last number is simply a memory address, so it could be located 
at a variety of different places depending on how squid was linked.  Using 
different options (for example, I'm not using Kerberos) would affect that.

   The important thing is the major version of the library for ABI 
compatibility.  We're all the same on that, they're all just variants of 1.0.0. 
 And, the issue in question isn't the library anyway.  Linking isn't a problem, 
it's compilation.  The headers are what would need to change, or perhaps the 
compiler or compilation options.

   I also got a report back from my systems team that --enable-ssl works fine 
on our systems, but --enable-ssl-crtd causes the compilation failure I'm 
seeing.  You used both of those, Eliezer?

  - Chris

Hey Chris,

Now I remember in more detail: the reason was the crtd, not SSL itself, 
which is another matter.
I didn't use the crtd, since there is a bug and also since most users 
don't really need it.
OK, so we have the same library and it's not corrupted, but now we know 
100%, once and for all, that the source of the problem is the crtd 
and not enable-ssl.
Since this bug was found I have encouraged people to use self-compiled 
OpenSSL libs and headers.
I am sorry for the Red Hat team, but they seem not to want an upgrade, 
because last time it cost them too much pain in many places.


Would it be hard for you to build squid against a custom-built OpenSSL 
specifically?
If this is the main issue and we can make it work in a more RPM-friendly 
way, such as using a good SPEC file to package a newer OpenSSL, I will 
be more than happy to host it in order to spare a lot of pain for many people.

Are you up for some of the task?

Eliezer
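
(A minimal sketch of the custom-OpenSSL build discussed above; the paths are
placeholders and this is an untested outline rather than a recipe:)

# in the OpenSSL source tree:
./config shared --prefix=/opt/openssl
make && make install

# in the squid source tree, point the build at the private OpenSSL:
CPPFLAGS="-I/opt/openssl/include" \
LDFLAGS="-L/opt/openssl/lib -Wl,-rpath,/opt/openssl/lib" \
./configure --enable-ssl --enable-ssl-crtd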


Re: [squid-users] Compiling squid-3.3.5 with SSL on RedHat EL 6

2013-05-22 Thread Eliezer Croitoru

On 5/22/2013 6:40 PM, Chris Ross wrote:


On May 22, 2013, at 11:32 , Eliezer Croitoru wrote:

  .. as above ..
Would it be hard for you to build squid against a custom-built OpenSSL 
specifically?
Are you up for some of the task?


   In my case, I found a way to work around the problem.  The following unruly patch will 
allow it to compile.  I don't think it's a "good" solution, as it's clearly a 
bit crude, but it does work for this one case.


Index: certificate_db.cc
===
--- certificate_db.cc   (revision 5213)
+++ certificate_db.cc   (working copy)
@@ -19,6 +19,10 @@
  #include 
  #endif

+#undef CHECKED_PTR_OF
+#define CHECKED_PTR_OF(type, p) \
+static_cast((void*) (1 ? p : (type*)0))
+
  #define HERE "(ssl_crtd) " << __FILE__ << ':' << __LINE__ << ": "

  Ssl::Lock::Lock(std::string const &aFilename) :
-
This is a nice and elegant solution; I do not know about the internals, 
but I do know that if it works it is worth something.


   I post this here so that it will be pulled into the archives and live on.  
I'm not suggesting anyone else use it, specifically.  Use at your own risk.

   I haven't tried experimenting with the ssl_crtd yet, so all I know is that 
it allows it to compile.




   Eliezer, you mention that there is a bug.  What is the bug?  And, it's not 
clear from the documentation or configure help, if you do not use that 
configure option to get this external program, is squid able to perform the 
dynamic SSL cert functionality internally?  If so, I may not need it either.  
But, I did want to try for SSLBump + DynamicSslCert…

 - Chris
I haven't compiled squid with ssl-bump (crtd) on CentOS yet, since there 
wasn't any big demand for it, but I have been considering it for a long time.
I can compile squid with static libs, which will make the RPM bigger and 
give a slightly larger memory footprint.
Since I am the maintainer of the repo I need to consider most of the 
users, and maybe provide a separate static build specifically for this 
case on CentOS.


I will probably publish the head version with static libs, which, IF I 
understood right, should solve the issue in a nicer way rather than 
forcing users to compile OpenSSL. (Right?)


Eliezer



Re: [squid-users] Option name doesn't work in cache_peer

2013-05-22 Thread Eliezer Croitoru

On 5/22/2013 7:11 PM, Alex Domoradov wrote:

Hello all, I have the following squid.conf

acl parent_squid peername PARENT_SQUID
acl FILE_TO_CACHE urlpath_regex \.(zip|iso|rar)$
acl TEST dstdomain storage.example.net

cache_peer 192.168.100.50 parent 3128 3130 name=PARENT_SQUID connect-timeout=7
cache_peer_access 192.168.100.50 allow TEST FILE_TO_CACHE

tcp_outgoing_address 192.168.100.1 parent_squid
tcp_outgoing_address 192.168.1.2

http_port 192.168.100.1:3128 intercept

When I run squid -k parse I got the following message

# squid -k parse -f /root/xxx/squid.conf
2013/05/23 02:07:55| Processing Configuration File:
/root/test/squid.conf (depth 0)
2013/05/23 02:07:55| squid.conf, line 107: No cache_peer '192.168.100.50'


Use name=PARENT_SQUID as in

cache_peer_access PARENT_SQUID allow TEST FILE_TO_CACHE

Eliezer

2013/05/23 02:07:55| Starting Authentication on port 192.168.100.1:3128
2013/05/23 02:07:55| Disabling Authentication on port
192.168.100.1:3128 (interception enabled)
2013/05/23 02:07:55| Disabling IPv6 on port 192.168.100.1:3128
(interception enabled)
2013/05/23 02:07:55| Initializing https proxy context

If I remove name= from cache_peer, everything works fine. Did I miss something?

P.S.
I have tested with squid-3.1.10 and squid-3.3.5
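
(For clarity, the corrected pair using the poster's own names: once a
cache_peer carries name=, later directives must reference that name rather
than the IP:)

cache_peer 192.168.100.50 parent 3128 3130 name=PARENT_SQUID connect-timeout=7
cache_peer_access PARENT_SQUID allow TEST FILE_TO_CACHE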





Re: [squid-users] kerberos auth failing behind a load balancer

2013-05-22 Thread Eliezer Croitoru

On 5/23/2013 8:42 AM, Brett Lymn wrote:

One problem with using L2 is that you then lose the ability to log the
client IP address, everything appears to come from the load balancer.
Using L7 you can, at least on some load balancers, insert a
X-FORWARDED-FOR header with the client IP in it so you can log this in
squid using a custom log line.
Unless you use TPROXY, which is very simple to use if you understand the 
concepts and ideas.

Also, in many cases there is the option of using LVS or the PROXY protocol.
I don't remember whether squid supports the PROXY protocol, but an L2 LB 
is far easier to debug and use than an L7 one, which needs much more 
CPU, RAM and other resources.


Eliezer
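
(A hedged squid.conf sketch of the X-FORWARDED-FOR logging Brett describes;
the LB address is a placeholder, and some builds need
--enable-follow-x-forwarded-for at compile time:)

acl lb src 192.0.2.10
follow_x_forwarded_for allow lb
logformat xff %>a %{X-Forwarded-For}>h [%tl] "%rm %ru HTTP/%rv" %>Hs %<st
access_log /var/log/squid/access.log xff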


Re: [squid-users] strange behaivour with netflix on PS3 with squid

2013-05-24 Thread Eliezer Croitoru

On 5/25/2013 3:49 AM, Luis Daniel Lucio Quiroz wrote:

X-Squid-Error: ERR_INVALID_REQ 0

as squid states in the response headers ^^^
Also try to upgrade to the latest CentOS RPM of 3.3.5 from:
http://www2.ngtech.co.il/rpm/centos/6/x86_64/

Regards,
Eliezer


[squid-users] Using REDBOT from bookmarklet to analyze page.

2013-05-24 Thread Eliezer Croitoru

You can use these JS as bookmarks in firefox:
javascript:location%20=%20'http://www1.ngtech.co.il/redbot/webui.py?uri='+encodeURIComponent(location);%20void%200
javascript:location%20=%20'http://redbot.org/webui.py?uri='+encodeURIComponent(location);%20void%200

One is mine and the other is Mark Nottingham's site.
These REDbot instances will help you analyze servers, sites and objects.
If you look at the byte level of every page, you will see something 
else entirely.


As an example, try this site:
http://www1.ngtech.co.il/redbot/webui.py?uri=http%3A%2F%2Fwww.djmaza.com%2F

Browse this site and see how a site should look.
I would like to hear whether you think this site's design for cachability 
could be done better.


Regards,
Eliezer






Re: [squid-users] can we know the ip of transparent proxy ??

2013-05-30 Thread Eliezer Croitoru

On 5/30/2013 9:49 AM, Ahmad wrote:

Hi,
I'm wondering if we can know the IP of a transparent proxy when we get
redirected to it?

I'm asking from the customer side: can we determine the IP of a WCCP
transparent squid?

Or can we know whether we are being redirected to a squid cache server or not?


All my questions are based on transparent proxying using WCCP.

Regards







You can detect it using an application-level PING.
It's a very complex hack, though, and in many cases the proxy can be made 
untraceable: some iptables and/or kernel hacks can prevent the client 
from knowing that there is a transparent proxy in the path.


Regards,
Eliezer
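
(One illustration of such an application-level probe, a sketch rather than
anything from the thread; 203.0.113.1 is a documentation address that should
not answer on port 80:)

# If HTTP headers come back, something on the path intercepted port 80:
curl -s -m 5 -D - -o /dev/null http://203.0.113.1/

# A Via: or X-Cache: header in responses from ordinary sites is another hint:
curl -s -D - -o /dev/null http://example.com/ | grep -i -E '^(via|x-cache):'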


Re: [squid-users] Same cache_dir for many squid process

2013-05-30 Thread Eliezer Croitoru

On 5/30/2013 10:00 AM, Sekar Duraisamy wrote:

Hello Friends,

I am running 3 squid processes on the same machine with different ports,
and I would like to use the same cache_dir for all 3 processes.

Can we use same cache_dir for all the processes?


No.
Even in an SMP setup, different processes cannot use the same cache_dir 
together.
If in the future there is a static way to handle HTTP requests, there 
will be no need for a DB (not sure that will ever happen), and then 
this might become possible.


Regards,
Eliezer


Thanks,
Sekar





Re: [squid-users] Can't stay logged in

2013-05-30 Thread Eliezer Croitoru

On 5/30/2013 3:20 AM, cac...@quantum-sci.com wrote:


Does anyone know why I can't stay logged in to this site, with headers paranoid?
http://www.cctvforum.com/

squid.conf:
http://pastebin.ca/2384770

Here's what happens when I give my username and password on the site and try to 
log in:
http://pastebin.ca/2384773




You should remove any important usernames, passwords and cookies from 
what you post on the internet, to prevent bad usage of your accounts.


Eliezer
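
(On the login problem itself, a hedged guess: a "paranoid" header setup
usually strips cookies, which breaks sessions. A squid 3.x sketch of
re-allowing just those headers:)

request_header_access Cookie allow all
reply_header_access Set-Cookie allow all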


Re: [squid-users] Same cache_dir for many squid process

2013-05-30 Thread Eliezer Croitoru

On 5/30/2013 5:24 PM, Alex Rousskov wrote:

Yes, provided you use SMP Squid and Rock cache_dir.

Nice to know!!!
This is a great feature, but most cached objects will be larger than what 
the rock cache_dir allows, no?


Eliezer
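
(For reference, a minimal sketch of the SMP-plus-rock combination Alex
mentions; note that in squid 3.2/3.3 a rock cache_dir only stores objects up
to 32 KB, which is exactly the limitation being asked about:)

workers 3
cache_dir rock /var/spool/squid/rock 4096 max-size=32768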

