Re: [squid-users] question in cache mem in squid 3

2013-07-31 Thread Pavel Kazlenka

Hi Ahmad,

On 07/31/2013 03:36 PM, Ahmad wrote:

hi,
i have a question.
i have a server with 48 GB of RAM,
and in the squid.conf file i've set the memory for squid to be only 1 GB.

but my question is: why does my total memory get full after some time??

result is below from my server :

root@drvirus:~# free -m
             total       used       free     shared    buffers     cached
Mem:         48296      47853        442          0       1893      38302
-/+ buffers/cache:       7658      40638
Swap:            0          0          0
root@drvirus:~# cat /etc/squid3/squid.conf | grep cache_mem
cache_mem 1000 MB
==

as we see, the free memory is just 442 MB even though i've configured only 1 GB
of memory for squid, and i only use this system for squid, so i don't think
that processes other than squid are eating my memory !!


Please note that most of the consumed memory is the OS page cache, i.e. 
memory that is not really in use at the moment and can be freed at any 
moment without harm to the system. Please see 
http://www.linuxatemyram.com/ for an easy explanation.
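You can see this in your own free output: the '-/+ buffers/cache' row shows 
only 7658 MB really used and 40638 MB still available. If you want to prove 
that the page cache is reclaimable (run as root; harmless, but it empties 
the cache and slows disk access briefly):

sync
echo 3 > /proc/sys/vm/drop_caches
free -m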





here is the output of the command top:
top

top - 14:35:06 up 2 days, 22:58,  2 users,  load average: 3.04, 2.25, 1.90
Tasks: 190 total,   1 running, 188 sleeping,   0 stopped,   1 zombie
Cpu(s):  5.2%us,  9.7%sy,  0.0%ni, 77.4%id,  4.3%wa,  0.0%hi,  3.4%si,
0.0%st
Mem:  49455732k total, 49092852k used,   362880k free,  1898256k buffers
Swap:        0k total,        0k used,        0k free, 39341964k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2936 proxy     20   0 4950m 4.8g 2244 D   81 10.1  3025:47  squid3
15837 proxy     20   0  305m 171m 2532 S   78  0.4   43:02.94 python
 2417 mysql     20   0  449m  61m 2724 S    7  0.1  208:01.06 mysqld
27535 proxy     20   0  148m  14m 2336 S    6  0.0   30:21.52 python
27536 proxy     20   0  147m  14m 2336 S    3  0.0   22:03.57 python
27542 proxy     20   0  147m  13m 2336 S    2  0.0    9:32.79 python
27539 proxy     20   0  147m  14m 2336 S    1  0.0   15:19.15 python
27549 proxy     20   0  148m  14m 2336 S    1  0.0    6:03.95 python
   15 root      20   0     0    0    0 S    1  0.0   35:56.55 ksoftirqd/2
   23 root      20   0     0    0    0 S    1  0.0   35:43.95 ksoftirqd/4
   31 root      20   0     0    0    0 S    1  0.0   35:04.29 ksoftirqd/6
   39 root      20   0     0    0    0 S    1  0.0   32:19.42 ksoftirqd/8
   55 root      20   0     0    0    0 S    1  0.0   32:55.90 ksoftirqd/12
   63 root      20   0     0    0    0 S    1  0.0   32:21.44 ksoftirqd/14
16225 proxy     20   0  3952  272  208 S    1  0.0    1:02.64 tail
27598 proxy     20   0  147m  13m 2344 S    1  0.0    3:35.83 python
    3 root      20   0     0    0    0 S    1  0.0   19:24.87 ksoftirqd/0
   47 root      20   0     0    0    0 S    1  0.0   34:32.83 ksoftirqd/10
 2862 root      20   0     0    0    0 S    1  0.0    2:39.92 flush-8:0
27601 proxy     20   0  147m  16m 2336 S    1  0.0    2:19.74 python
28405 www-data  20   0 1208m 3912 1072 S    1  0.0    0:01.67 apache2
   10 root      20   0     0    0    0 S    0  0.0   10:42.76 ksoftirqd/1
   19 root      20   0     0    0    0 S    0  0.0   10:58.87 ksoftirqd/3
   27 root      20   0     0    0    0 S    0  0.0   11:02.36 ksoftirqd/5
   35 root      20   0     0    0    0 S    0  0.0   11:21.07 ksoftirqd/7
   51 root      20   0     0    0    0 S    0  0.0   10:35.43 ksoftirqd/11
   59 root      20   0     0    0    0 S    0  0.0    5:56.19 ksoftirqd/13
   67 root      20   0     0    0    0 S    0  0.0    5:03.22 ksoftirqd/15
  542 root      20   0     0    0    0 S    0  0.0   13:25.53 kswapd0
  543 root      20   0     0    0    0 S    0  0.0    8:02.43 kswapd1
 2866 root      20   0     0    0    0 S    0  0.0    6:52.55 flush-8:48
 2867 root      20   0     0    0    0 S    0  0.0    8:07.08 flush-8:80
 8141 www-data  20   0 1145m 4092 1136 S    0  0.0    0:03.23 apache2
 9990 root      20   0     0    0    0 S    0  0.0    2:38.96 kworker/0:2
20502 www-data  20   0 1145m 5172 1152 S    0  0.0    0:10.88 apache2
27604 proxy     20   0  147m  13m 2336 S    0  0.0    1:27.98 python
30424 www-data  20   0 1145m 5124 1344 S    0  0.0    0:10.37 apache2
    1 root      20   0  8404  760  624 S    0  0.0    0:03.01 init
    2 root      20   0     0    0    0 S    0  0.0    0:00.00 kthreadd
    5 root      20   0     0    0    0 S    0  0.0    0:04.12 kworker/u:0

hope you can clarify


regards



Please note that the 'cache_mem' configuration directive only limits the 
amount of memory used by the memory cache itself, i.e. by the documents 
stored in cache. Squid as a whole consumes more RAM than that, of course.
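As a rough rule of thumb from the squid FAQ, the cache index alone needs 
about 10 MB of RAM per GB of cache_dir disk cache (more on 64-bit builds), 
on top of cache_mem, in-transit objects and per-connection state. So with a 
hypothetical 300 GB cache_dir (your actual cache_dir size may differ):

1000 MB (cache_mem) + 300 GB x ~10 MB/GB = roughly 4 GB resident

which is in the same ballpark as the 4.8g RES that your top output shows 
for squid3.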






Best wishes,
Pavel


Re: [squid-users] Refresh Problem

2013-08-01 Thread Pavel Kazlenka

Hi Gustavo,

It sounds like your internet link is not reliable enough. In any case, if you 
think the problem is in squid itself, you should analyse access.log and 
cache.log with debug output enabled.
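For example (log paths assume a Debian-style squid3 install; adjust to 
yours), raise the debug level temporarily and watch for failed requests:

debug_options ALL,1 11,2    # in squid.conf; debug section 11 traces HTTP
squid3 -k reconfigure
tail -f /var/log/squid3/access.log | grep -E '/(000|4[0-9][0-9]|5[0-9][0-9]) '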


Best wishes,
Pavel

On 08/01/2013 10:16 PM, Gustavo Esquivel wrote:

Hi,
actually i'm using squid proxy and everything works fine, but i have a little problem...
sometimes the users have to press the F5 button a couple of times to refresh
the website, because they receive a not-found page or an incomplete page.

anybody have an idea?
thanks a lot!!

Best Regards,
Gustavo Esquivel




Re: [squid-users] 3.4 and external_acl_type

2013-08-06 Thread Pavel Kazlenka

Hi Dmitry,

This is a known problem with configuration file parsing in 3.4.0.1. Just 
wait for the next stable version.
Details in this thread: 
http://www.squid-cache.org/mail-archive/squid-users/201308/0016.html
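Once you try a newer snapshot, a quick generic check that the line parses 
(not specific to this bug; the binary may be 'squid' or 'squid3' depending 
on your build) is:

squid3 -k parse -f /etc/squid3/squid.conf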


Best wishes,
Pavel

On 08/06/2013 03:00 PM, Dmitry Melekhov wrote:

Hello!

Just tried to start 3.4 instead of 3.3.8, just to check whether it works or 
not (sooner or later 3.3 will be deprecated ;-) ), and got:


2013/08/06 15:10:13| Macros are not supported here: 
%g)(memberUid=%u))" -F "(&(objectClass=account)(uid=%s))" -s sub
FATAL: Bungled /etc/squid3/squid.conf line 1929: external_acl_type 
LdapGroup children-max=30 children-startup=10 concurrency=0 ttl=600 
negative_ttl=10 grace=0 protocol=2.5 %LOGIN /usr/sbin/squid_ldap_group 
-v 3 -h 127.0.0.1 -b "o=Aspec,c=RU" -B 
"org=belkam,ou=People,o=Aspec,c=RU" -f "(&(cn=%g)(memberUid=%u))" -F 
"(&(objectClass=account)(uid=%s))" -s sub

Squid Cache (Version 3.4.0.1): Terminated abnormally.
CPU Usage: 0.008 seconds = 0.000 user + 0.008 sys
Maximum Resident Size: 19744 KB
Page faults with physical i/o: 0


Yes, I use an old squid_ldap_group which can't be compiled even with 3.3, 
because it is patched for our needs, but I guess it may still be usable 
with just some config changes for 3.4.


Here is line from config:

external_acl_type LdapGroup children-max=30 children-startup=10 
concurrency=0 ttl=600 negative_ttl=10 grace=0 protocol=2.5 %LOGIN 
/usr/sbin/squid_ldap_group -v 3 -h 127.0.0.1 -b "o=Aspec,c=RU" -B 
"org=belkam,ou=People,o=Aspec,c=RU" -f "(&(cn=%g)(memberUid=%u))" -F 
"(&(objectClass=account)(uid=%s))" -s sub



Could you tell me whether it is possible to change it for 3.4 compatibility?

Thank you!





Re: [squid-users] swapfile header inconsistent with available data

2013-08-17 Thread Pavel Kazlenka

Hi,

A segmentation violation is definitely a bug. Would you mind reporting it 
to squid's bug tracker (bugs.squid-cache.org)?


TIA,
Pavel

On 08/18/2013 12:56 AM, Golden Shadow wrote:

Hi,

I think I got the answer to my question and thought it would be nice to post 
what I've concluded to the list, just in case somebody has the same 
question in the future.

I got the following two cache.log entries whenever the server was improperly 
shut down via power failure (yes, it is odd, but this is what happens at that 
remote site!). Squid kept spawning a new worker, but it died after a short time.

kid1| WARNING: swapfile header inconsistent with available data
FATAL: Received Segment Violation...dying.


So to fix this, squid must first be shut down and the swap.state files 
deleted; once you start squid again, it will try to generate new, 
consistent swap.state files. One very important thing to keep in mind is that 
this process may take a very long time depending on your cache_dir size! Since 
I had no other option, I had to risk the 2 TB of cache contents and tried it, 
but squid took a long time (roughly 30 hours) to generate the new 
swap.state files.
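A minimal sketch of that procedure (the cache_dir path below is an 
assumption; use whatever your squid.conf names):

squid -k shutdown
rm /var/spool/squid/swap.state*
squid    # the DIRTY rebuild then re-scans every object on disk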

Best regards,
Firas



- Original Message -
From: Golden Shadow 
To: "squid-users@squid-cache.org" 
Cc:
Sent: Friday, August 16, 2013 7:10 PM
Subject: [squid-users] swapfile header inconsistent with available data

Hi Squid-users!

Recently squid started to restart its working kid and I see the following log 
entries in cache.log:

kid1| WARNING: swapfile header inconsistent with available data
FATAL: Received Segment Violation...dying.


Any advice on fixing this? Can I just shut down squid, remove swap.state and then 
start squid again?

Best regards,
Firas




Re: [squid-users] office 365 not accessible via squid proxy

2013-08-20 Thread Pavel Kazlenka

Hi Gaurav,

You probably have some problem with https traffic.
Does your squid work in forwarding or interception mode? What is your 
squid config? Do any other https sites work correctly (facebook, google)?


Best wishes,
Pavel

On 08/20/2013 10:57 AM, Gaurav Saxena wrote:

Hi,
I am not able to access o365 outlook/calendar/people using this url -
https://outlook.office365.com/owa/syntonicwireless.com - when I am accessing
the internet via squid proxy. Without the proxy this url works.
I can, though, access o365 outlook/calendar/people via proxy using these urls
- http://mail.office365.com and http://www.outlook.com/syntonicwireless.com.
Can anyone help me with this?

Thx
Gaurav







Re: [squid-users] office 365 not accessible via squid proxy

2013-08-20 Thread Pavel Kazlenka

Ok,

Could you find in your access.log what exactly is being rejected by squid when 
you try to access 
https://outlook.office365.com/owa/syntonicwireless.com ?


Pavel

On 08/20/2013 11:04 AM, Gaurav Saxena wrote:

Yes other https sites like facebook and google work well with my config.

Thx
Gaurav

-Original Message-
From: Pavel Kazlenka [mailto:pavel.kazle...@measurement-factory.com]
Sent: 20 August 2013 13:26
To: Gaurav Saxena
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] office 365 not accessible via squid proxy

Hi Gaurav,

You probably have some problem with https traffic.
Does your squid work in forwarding or interception mode? What is your squid
config? Do any other https sites work correctly (facebook, google)?

Best wishes,
Pavel

On 08/20/2013 10:57 AM, Gaurav Saxena wrote:

Hi,
I am not able to access o365 outlook/calendar/people using this url -
https://outlook.office365.com/owa/syntonicwireless.com - when I am
accessing the internet via squid proxy. Without the proxy this url works.
I can, though, access o365 outlook/calendar/people via proxy using
these urls
- http://mail.office365.com and
http://www.outlook.com/syntonicwireless.com.

Can anyone help me with this?

Thx
Gaurav










Re: [squid-users] error with file system

2013-08-21 Thread Pavel Kazlenka

Hi Carlos,

Please note that clients' requests also consume file descriptors. Use 
netstat to find the exact number.
If you use ubuntu you may be interested in this thread too: 
http://www.squid-cache.org/mail-archive/squid-users/201212/0276.html
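For example, something like this (run as root; the process name is an 
assumption, adjust to your build):

netstat -anp 2>/dev/null | grep squid | wc -l
ls /proc/$(pgrep -o -x squid)/fd | wc -l   # FDs held by the oldest squid process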


Best wishes,
Pavel

On 08/20/2013 09:57 PM, Carlos Defoe wrote:

Hello,

Look at this:

2013/08/20 07:55:26 kid1| ctx: exit level  0
2013/08/20 07:55:26 kid1| Attempt to open socket for EUI retrieval
failed: (24) Too many open files
2013/08/20 07:55:26 kid1| comm_open: socket failure: (24) Too many open files
2013/08/20 07:55:26 kid1| Reserved FD adjusted from 100 to 64542 due to failures
2013/08/20 07:55:26 kid1| WARNING! Your cache is running out of filedescriptors
2013/08/20 07:55:26 kid1| comm_open: socket failure: (24) Too many open files

ulimit -n = 65535 (i have configured it in limits.conf myself)

When squid starts, it shows no errors:

2013/08/20 13:38:11 kid1| Starting Squid Cache version 3.3.8 for
x86_64-unknown-linux-gnu...
2013/08/20 13:38:11 kid1| Process ID 8087
2013/08/20 13:38:11 kid1| Process Roles: worker
2013/08/20 13:38:11 kid1| With 65535 file descriptors available

Running lsof shows no more than 8000 files open when the problem occurs.

Why would it say "Too many open files"? Do you think SELinux could be
the cause of this issue?

thanks




Re: [squid-users] Exchange WebServices (EWS)

2013-08-21 Thread Pavel Kazlenka

Hi Matthew,

If squid doesn't block any http requests/responses, then it may be that 
some of the traffic from the client goes (or tries to go) directly to the 
server while the rest goes through squid. This could be caused by, 
e.g., incorrect NAT settings or routing. You could install a tool like 
firebug or fiddler and check which requests are not satisfied on the 
client side.
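On the gateway you can also watch for client traffic that bypasses the 
proxy entirely (the interface and client address below are assumptions):

tcpdump -ni eth0 'src host 192.168.1.50 and (dst port 80 or dst port 443)'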


Best wishes,
Pavel

On 08/21/2013 01:47 AM, Matthew Ceroni wrote:

Hi:

Let me start by saying I am not an Exchange expert by any means.
However we have two different network segments. One allows direct
access outbound without having to go through squid (used only for a
few select devices/users). The other needs to go through squid for
outbound services.

When on the segment that has to go through squid, Exchange Web Services
does not work. But when on the other segment (that doesn't need squid)
it works. Therefore I can only assume that squid is somehow blocking
or breaking the connection.

In checking the access log I do not see any DENIED messages for that
connection. From the googling I did, it seems that EWS does
RPC over HTTPS. Is there any configuration in squid that has to be done
to allow this?

Thanks




Re: [squid-users] how do I block facebook?

2013-08-21 Thread Pavel Kazlenka

Hi,

You can use the dstdomain acl type. See details at 
http://www.squid-cache.org/Doc/config/acl/
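A minimal sketch (place the deny above your allow rules):

acl facebook dstdomain .facebook.com
http_access deny facebook

For HTTPS this matches the CONNECT request when browsers are explicitly 
configured to use the proxy; transparently redirected port-443 traffic is 
a different problem and cannot be filtered by domain this way without 
ssl-bump.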


Best wishes,
Pavel

On 08/21/2013 03:21 AM, junio wrote:

I am able to block facebook at the company I work for, but I can not redirect
port 443 successfully.







Re: [squid-users] moving code from ecap 0.0.x to 0.2

2013-09-06 Thread Pavel Kazlenka

Hi Luis,

The eCAP project is not dead, but it is not being developed very quickly. 
Please try eCAP's mailing list 
(http://www.e-cap.org/mailman/listinfo/users/) or the project's Launchpad 
answers page.

There is no need to ask here, as squid's mailing list is not focused on eCAP.

Best wishes,
Pavel

On 09/06/2013 02:48 PM, Luis Daniel Lucio Quiroz wrote:

I'm having trouble moving old working code from ecap-samples and
ecap-clamav to the latest eCAP.

Do you have a guide? The eCAP project seems pretty dead.




Re: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-10 Thread Pavel Kazlenka

Hi Mohsen,

Please note that there is also a system limit on the number of open file 
descriptors (set by default to 1024 per user on most linux systems).
You can see/change the current system limits using the ulimit tool or in 
/etc/security/limits.d/ files.
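For example (the file name is arbitrary; 'proxy' is the user squid runs as 
here):

# /etc/security/limits.d/squid.conf
proxy soft nofile 65536
proxy hard nofile 65536

then check with 'ulimit -n' in a fresh login shell for that user.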


Best wishes,
Pavel

On 09/10/2013 11:34 AM, Mohsen Dehghani wrote:

I have compiled and installed squid 3.3.8.
I have about 160 Mbps of bandwidth and about 18000 http requests per minute.
The problem is that as soon as I redirect traffic to squid, its cpu usage
reaches 100% and it hangs, and even "squidclient" will not work.

What is weird is that when I remove traffic from squid, CPU usage does not
go down immediately, but with a delay of about 2 or 3 minutes!
In version 3.1.19 (which I was using previously), as soon as I removed
traffic from squid, its cpu usage went down. By the way, in 3.1.19 CPU usage
never exceeded 30%.

When I debug, I see some lines saying: WARNING! Your cache is running out
of filedescriptors

I don't know whether this is the cause or not, but I've already compiled squid
with 65536 filedescriptors.

I have disabled the disk cache (swap) for testing, but the problem still exists.

Any help is appreciated

I have attached my compile options and config

___

#squid -v
Squid Cache: Version 3.3.8
configure options:  '--prefix=/usr/local/squid' '--build=x86_64-linux-gnu'
'--enable-storeio=ufs,aufs,diskd' '--enable-follow-x-forwarded-for'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=x86_64-linux-gnu' --enable-ltdl-convenience


###config:
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

shutdown_lifetime 3 second
wccp2_router 172.22.122.254
wccp_version 2
wccp2_rebuild_wait on
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
# wccp2_service standard 0
wccp2_service dynamic 80
wccp2_service dynamic 90
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source
priority=240 ports=80

http_port 3129 tproxy
qos_flows local-hit=0x18
cache_mem 2000 MB
maximum_object_size_in_memory 10 MB

access_log none

snmp_port 3401
acl snmppublic snmp_community golabi
snmp_access allow snmppublic trusted
http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost

http_access deny all

http_port 3128

coredump_dir /usr/local/squid/var/cache/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320






Re: [squid-users] Squid and threads

2013-09-10 Thread Pavel Kazlenka

Hi Naira,

No, squid doesn't have native support for threads. See details at 
http://wiki.squid-cache.org/Features/SmpScale#Why_processes.3F_Aren.27t_threads_better.3F 
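Squid scales across CPUs with processes instead. A minimal SMP sketch for 
squid 3.2 and later (the worker count is just an example):

workers 4    # in squid.conf: start 4 kid worker processes

(Some disk I/O modules such as aufs do use internal threads, but that is 
not user-facing threading.)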



Best wishes,
Pavel

On 09/10/2013 07:53 PM, Naira Kaieski wrote:

Hi,

Sorry if this issue has already been addressed, but does squid have native 
support for threads?






Re: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-11 Thread Pavel Kazlenka

I don't see any logic here. Are you sure your squid is not started by root?
Does replacing 'root' with 'squid' or '*' solve the issue as well?

On 09/11/2013 07:19 AM, Mohsen Dehghani wrote:

Thanks everybody

[The problem is resolved]
After adding the following lines to /etc/security/limits.conf:
root soft nofile 6
root hard nofile 6

but I am eager to know the rationale behind it, because squid runs as user
"proxy", not "root".


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Tuesday, September 10, 2013 9:10 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] 100% CPU Load problem with squid 3.3.8

It seems like an endless loop is causing this effect.
I don't think it's related in any way to squid directly, but more to the
setup.

If we can establish some real order and debug the problem, I will be happy
to try to assist you.

Please share your iptables, route, ulimit -sA, ulimit -hA and any other
information you think is relevant to the cause and the solution.

If you can, share the uname -a output and "cat /proc/cpuinfo|grep model",
also "free -m", "vmstat", and the next script:
PID=`ps aux |grep squid-1|grep -v grep|awk '{print $2}'`;ls /proc/$PID/fd/
|wc

which should show how many FDs are being used by the first squid process.
Let's try our best and see what the result is.

We might well find the reason on the first or second try.

Eliezer
On 09/10/2013 11:34 AM, Mohsen Dehghani wrote:

I have compiled and installed squid 3.3.8.
I have about 160 Mbps of bandwidth and about 18000 http requests per minute.
The problem is that as soon as I redirect traffic to squid, its cpu
usage reaches 100% and it hangs, and even "squidclient" will not work.

What is weird is that when I remove traffic from squid, CPU usage does
not go down immediately, but with a delay of about 2 or 3 minutes!
In version 3.1.19 (which I was using previously), as soon as I
removed traffic from squid, its cpu usage went down. By the way, in
3.1.19 CPU usage never exceeded 30%.

When I debug, I see some lines saying: WARNING! Your cache is running
out of filedescriptors

I don't know whether this is the cause or not, but I've already compiled
squid with 65536 filedescriptors.

I have disabled the disk cache (swap) for testing, but the problem still exists.

Any help is appreciated

I have attached my compile options and config

___

#squid -v
Squid Cache: Version 3.3.8
configure options:  '--prefix=/usr/local/squid' '--build=x86_64-linux-gnu'
'--enable-storeio=ufs,aufs,diskd' '--enable-follow-x-forwarded-for'
'--with-filedescriptors=65536' '--with-large-files'
'--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=x86_64-linux-gnu' --enable-ltdl-convenience


###config:
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

shutdown_lifetime 3 second
wccp2_router 172.22.122.254
wccp_version 2
wccp2_rebuild_wait on
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_assignment_method 2
# wccp2_service standard 0
wccp2_service dynamic 80
wccp2_service dynamic 90
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240
ports=80 wccp2_service_info 90 protocol=tcp
flags=dst_ip_hash,ports_source
priority=240 ports=80

http_port 3129 tproxy
qos_flows local-hit=0x18
cache_mem 2000 MB
maximum_object_size_in_memory 10 MB

access_log none

snmp_port 3401
acl snmppublic snmp_community golabi
snmp_access allow snmppublic trusted
http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost

http_access deny all

http_port 3128

coredump_dir /usr/local/squid/var/cache/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320









[squid-users] Is ignore-private option from refresh_pattern broken?

2013-09-12 Thread Pavel Kazlenka

Hi gentlemen,

I'm trying to cache youtube videos following the 
http://wiki.squid-cache.org/Features/StoreID guide.
But it seems that squid refuses to cache the content because the origin 
server returns the header 'Cache-Control: private' and 'refresh_pattern ... 
ignore-private' doesn't take effect. Here are the debug log entries that I 
think are interesting:


2013/09/11 19:24:24.842 kid1| helper.cc(419) helperSubmit: 
buf[975]=http://r16---sn-4g57ln7e.c.youtube.com/videoplayback?ratebypass=yes&itag=43&ip=212.98.189.159&key=yt1&upn=6KCvBVLs-yM&mt=1378980214&fexp=919118%2C924606%2C929117%2C929121%2C929906%2C929907%2C929922%2C929127%2C929129%2C929131%2C929930%2C936403%2C92

5726%2C936310%2C925720%2C925722%2C925718%2C925714%2C929917%2C906945%2C929933%2C920302%2C906842%2C913428%2C920605%2C919811%2C935020%2C935021%2C913563%2C919373%
2C930803%2C908536%2C932211%2C938701%2C931924%2C934005%2C936308%2C909549%2C900816%2C912711%2C904494%2C904497%2C939903%2C900375%2C900382%2C934507%2C907231%2C936
312%2C906001&id=2c5e89c0af6c8804&expire=1379004381&sver=3&ipbits=8&cp=U0hWTlBLUl9NS0NONl9IRVZEOjNVQko4ZGFaMGcz&ms=au&source=youtube&sparams=cp%2Cid%2Cip%2Cipbits%2Citag%2Cratebypass%2Csource%2Cupn%2Cexpire&mv=m&cpn=Ph7LcSRYt1STlsoQ&signature=BB797D0EFC5182670EF89E95EFBB6E5D12F49B8F.6404F37419A96004F8DDCA2CAB901101A
30082CA&ptk=youtube_none&pltype=contentugc 192.168.10.8/- - GET 
myip=192.168.10.245 myport=3128

...
2013/09/11 19:24:24.843 kid1| helper.cc(919) helperHandleRead: 
accumulated[77]=OK 
store-id=http://video-srv.youtube.com.squid.internal/43&2c5e89c0af6c8804";
2013/09/11 19:24:24.843 kid1| store_dir.cc(786) get: none of 1 
cache_dirs have 028E8844ECA93A634459175C8C0D463D
2013/09/11 19:24:24.843 kid1| store.cc(840) storeCreateEntry: 
storeCreateEntry: 
'http://video-srv.youtube.com.squid.internal/43&2c5e89c0af6c8804'
2013/09/11 19:24:25.032 kid1| http.cc(396) cacheableReply: NO because 
server reply Cache-Control:private


My squid.conf:

acl rewritedoms dstdomain .c.youtube.com

store_id_program /home/tester/squid/libexec/storeid_file_rewrite 
/home/tester/squid/db.txt

store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow rewritedoms
store_id_access deny all
...
refresh_pattern ^http://video-srv\.youtube\.squid\.internal/.* 10080 
80%  79900 override-lastmod override-expire ignore-reload 
ignore-must-revalidate ignore-private

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
#refresh_pattern -i (/cgi-bin/|\?) 00%  0
refresh_pattern .   0   20% 4320

Squid version is squid-3.4.0.1-20130910-r12989.

So my questions are:

1) Am I right that refresh_pattern ... ignore-private should work here 
and help to cache a reply with CC: private?
2) Is there another (maybe more correct) way to force squid to cache 
replies with CC: private?
3) How can I check that the 'refresh_pattern' config string is parsed 
correctly and all options are remembered by squid?


TIA,
Pavel




Re: [squid-users] Is ignore-private option from refresh_pattern broken?

2013-09-12 Thread Pavel Kazlenka

Thank you Amos,

On 09/12/2013 02:53 PM, Amos Jeffries wrote:

On 12/09/2013 10:51 p.m., Pavel Kazlenka wrote:

Hi gentlemen,

I'm trying to cache youtube videos following the 
http://wiki.squid-cache.org/Features/StoreID guide.
But it seems that squid refuses to cache the content because the origin 
server returns the header 'Cache-Control: private' and 'refresh_pattern 
... ignore-private' doesn't take effect. Here are the debug log entries 
that I think are interesting:


2013/09/11 19:24:24.842 kid1| helper.cc(419) helperSubmit: 
buf[975]=http://r16---sn-4g57ln7e.c.youtube.com/videoplayback?ratebypass=yes&itag=43&ip=212.98.189.159&key=yt1&upn=6KCvBVLs-yM&mt=1378980214&fexp=919118%2C924606%2C929117%2C929121%2C929906%2C929907%2C929922%2C929127%2C929129%2C929131%2C929930%2C936403%2C92
5726%2C936310%2C925720%2C925722%2C925718%2C925714%2C929917%2C906945%2C929933%2C920302%2C906842%2C913428%2C920605%2C919811%2C935020%2C935021%2C913563%2C919373% 

2C930803%2C908536%2C932211%2C938701%2C931924%2C934005%2C936308%2C909549%2C900816%2C912711%2C904494%2C904497%2C939903%2C900375%2C900382%2C934507%2C907231%2C936 

312%2C906001&id=2c5e89c0af6c8804&expire=1379004381&sver=3&ipbits=8&cp=U0hWTlBLUl9NS0NONl9IRVZEOjNVQko4ZGFaMGcz&ms=au&source=youtube&sparams=cp%2Cid%2Cip%2Cipbits%2Citag%2Cratebypass%2Csource%2Cupn%2Cexpire&mv=m&cpn=Ph7LcSRYt1STlsoQ&signature=BB797D0EFC5182670EF89E95EFBB6E5D12F49B8F.6404F37419A96004F8DDCA2CAB901101A 

30082CA&ptk=youtube_none&pltype=contentugc 192.168.10.8/- - GET 
myip=192.168.10.245 myport=3128

...
2013/09/11 19:24:24.843 kid1| helper.cc(919) helperHandleRead: 
accumulated[77]=OK 
store-id=http://video-srv.youtube.com.squid.internal/43&2c5e89c0af6c8804";
2013/09/11 19:24:24.843 kid1| store_dir.cc(786) get: none of 1 
cache_dirs have 028E8844ECA93A634459175C8C0D463D
2013/09/11 19:24:24.843 kid1| store.cc(840) storeCreateEntry: 
storeCreateEntry: 
'http://video-srv.youtube.com.squid.internal/43&2c5e89c0af6c8804'
2013/09/11 19:24:25.032 kid1| http.cc(396) cacheableReply: NO because 
server reply Cache-Control:private


My squid.conf:

acl rewritedoms dstdomain .c.youtube.com

store_id_program /home/tester/squid/libexec/storeid_file_rewrite 
/home/tester/squid/db.txt

store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow rewritedoms
store_id_access deny all
...
refresh_pattern ^http://video-srv\.youtube\.squid\.internal/.* 10080 
80%  79900 override-lastmod override-expire ignore-reload 
ignore-must-revalidate ignore-private

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
#refresh_pattern -i (/cgi-bin/|\?) 00%  0
refresh_pattern .   0   20% 4320

Squid version is squid-3.4.0.1-20130910-r12989.

So my questions are:

1) Am I right that refresh_pattern ... ignore-private should work 
here and help to cache reply with CC:Private?


Yes the exact code is:
 if ( Cache-Control is present AND contains "private" AND 
ignore-private is *absent*)
   display that "NO because server reply Cache-Control:private" 
message etc.


2) Is there another (may be more correct) way to force squid to cache 
replies with CC:Private?


There is no _correct_ way to abuse the RFC protocol standard. This 
ignore-private behaviour is explicitly forbidden for any cache shared by 
more than one person.
It is officially only made available in Squid to allow single-person 
installations to operate a proxy between multiple devices.


Sure. I meant correct from squid's internals point of view (maybe a 
dedicated directive).


3) How can I check that 'refresh_pattern' config string is parsed 
correctly and all options are remembered by squid?


To check it is parsed correctly use:
  squid -k parse


To check what the running config is you can produce a config file dump 
using the cache manager interface:


* ensure that you have a cachemgr_passwd (or "none") explicitly 
defined for either the "config" or the "all" reports.

  http://www.squid-cache.org/Doc/config/cachemgr_passwd/

* ensure that your http_access rules involving "manager" ACL permit 
you access to the proxy management interface.


* fetch  http://example.com/squid-internal-mgr/config in your browser

The result should be a TXT format listing of all the squid.conf 
settings (including the defaults) which Squid is using.
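Equivalently, from the proxy host itself (host and port below are 
assumptions; append @password to the action if a cachemgr password is set):

squidclient -h 127.0.0.1 -p 3128 mgr:config | less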


My config obtained in this way includes 'refresh_pattern 
^http://video-srv\.youtube\.squid\.internal/.* 10080 80% 79900 
override-expire override-lastmod ignore-reload ignore-must-revalidate 
ignore-private', so I guess that this is definitely a bug. I'm going to 
open a new defect on bugzilla. We will see whether this gets fixed in 3.4.0.2, 
once the 'unknown_cfg_function' patch is accepted. Any objections?




Amos




Re: [squid-users] Is ignore-private option from refresh_pattern broken?

2013-09-12 Thread Pavel Kazlenka

On 09/12/2013 03:12 PM, Pavel Kazlenka wrote:

Thank you Amos,

On 09/12/2013 02:53 PM, Amos Jeffries wrote:

On 12/09/2013 10:51 p.m., Pavel Kazlenka wrote:

Hi gentlemen,

I'm trying to cache youtube videos following 
http://wiki.squid-cache.org/Features/StoreID guide.
But it seems that squid refuses to cache the content because the origin 
server returns the header 'Cache-Control: private' and 'refresh_pattern 
... ignore-private' doesn't take effect. Here are the debug log entries 
that I think are interesting:


2013/09/11 19:24:24.842 kid1| helper.cc(419) helperSubmit: 
buf[975]=http://r16---sn-4g57ln7e.c.youtube.com/videoplayback?ratebypass=yes&itag=43&ip=212.98.189.159&key=yt1&upn=6KCvBVLs-yM&mt=1378980214&fexp=919118%2C924606%2C929117%2C929121%2C929906%2C929907%2C929922%2C929127%2C929129%2C929131%2C929930%2C936403%2C92
5726%2C936310%2C925720%2C925722%2C925718%2C925714%2C929917%2C906945%2C929933%2C920302%2C906842%2C913428%2C920605%2C919811%2C935020%2C935021%2C913563%2C919373% 

2C930803%2C908536%2C932211%2C938701%2C931924%2C934005%2C936308%2C909549%2C900816%2C912711%2C904494%2C904497%2C939903%2C900375%2C900382%2C934507%2C907231%2C936 

312%2C906001&id=2c5e89c0af6c8804&expire=1379004381&sver=3&ipbits=8&cp=U0hWTlBLUl9NS0NONl9IRVZEOjNVQko4ZGFaMGcz&ms=au&source=youtube&sparams=cp%2Cid%2Cip%2Cipbits%2Citag%2Cratebypass%2Csource%2Cupn%2Cexpire&mv=m&cpn=Ph7LcSRYt1STlsoQ&signature=BB797D0EFC5182670EF89E95EFBB6E5D12F49B8F.6404F37419A96004F8DDCA2CAB901101A 

30082CA&ptk=youtube_none&pltype=contentugc 192.168.10.8/- - GET 
myip=192.168.10.245 myport=3128

...
2013/09/11 19:24:24.843 kid1| helper.cc(919) helperHandleRead: 
accumulated[77]=OK 
store-id=http://video-srv.youtube.com.squid.internal/43&2c5e89c0af6c8804";
2013/09/11 19:24:24.843 kid1| store_dir.cc(786) get: none of 1 
cache_dirs have 028E8844ECA93A634459175C8C0D463D
2013/09/11 19:24:24.843 kid1| store.cc(840) storeCreateEntry: 
storeCreateEntry: 
'http://video-srv.youtube.com.squid.internal/43&2c5e89c0af6c8804'
2013/09/11 19:24:25.032 kid1| http.cc(396) cacheableReply: NO 
because server reply Cache-Control:private


My squid.conf:

acl rewritedoms dstdomain .c.youtube.com

store_id_program /home/tester/squid/libexec/storeid_file_rewrite 
/home/tester/squid/db.txt

store_id_children 40 startup=10 idle=5 concurrency=0
store_id_access allow rewritedoms
store_id_access deny all
...
refresh_pattern ^http://video-srv\.youtube\.squid\.internal/.* 10080 
80%  79900 override-lastmod override-expire ignore-reload 
ignore-must-revalidate ignore-private

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
#refresh_pattern -i (/cgi-bin/|\?) 00%  0
refresh_pattern .   0   20% 4320

Squid version is squid-3.4.0.1-20130910-r12989.

So my questions are:

1) Am I right that refresh_pattern ... ignore-private should work 
here and help to cache reply with CC:Private?


Yes the exact code is:
 if ( Cache-Control is present AND contains "private" AND 
ignore-private is *absent*)
   display that "NO because server reply Cache-Control:private" 
message etc.


2) Is there another (may be more correct) way to force squid to 
cache replies with CC:Private?


There is no _correct_ way to abuse the RFC protocol standard. This 
ignore-private behaviour is explicitly forbidden for any cache shared by 
more than one person.
It is officially only made available in Squid to allow single-person 
installations to operate a proxy between multiple devices.


Sure. I meant correct from squid's internals point of view (maybe a 
dedicated directive).


3) How can I check that 'refresh_pattern' config string is parsed 
correctly and all options are remembered by squid?


To check it is parsed correctly use:
  squid -k parse


To check what the running config is you can produce a config file 
dump using the cache manager interface:


* ensure that you have a cachemgr_passwd (or "none") explicitly 
defined for either the "config" or the "all" reports.

  http://www.squid-cache.org/Doc/config/cachemgr_passwd/

* ensure that your http_access rules involving "manager" ACL permit 
you access to the proxy management interface.


* fetch  http://example.com/squid-internal-mgr/config in your browser

The result should be a TXT format listing of all the squid.conf 
settings (including the defaults) which Squid is using.


My config obtained in this way includes 'refresh_pattern 
^http://video-srv\.youtube\.squid\.internal/.* 10080 80% 79900 
override-expire override-lastmod ignore-reload ignore-must-revalidate 
ignore-private', so I guess that this is definitely a bug. I'm going 
to open a new defect on bugzilla. We will see whether this gets fixed in 
3.4.0.2, once the 'unknown_cfg_function' patch is accepted. Any objections?

Re: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-14 Thread Pavel Kazlenka
This could be ugly troubleshooting practice, but you can try to modify 
your init script (or upstart job, not sure how exactly squid is being 
started in ubuntu). The idea is to add 'ulimit -n > 
/tmp/squid.descriptors' and see if the number is really 65k.


On 09/14/2013 09:41 AM, Mohsen Dehghani wrote:

I don't see any logic here. Are you sure your squid is not started by root?
Does replacing 'root' with 'squid' or '*' solve the issue as well?


When I manually start the service as root, there is no file descriptor warning
and squid works as normal.
But when the system boots up and starts the service automatically, squid
runs out of FDs.

I've tested the following settings without any luck. Every time
the box reboots, I have to log in and restart the service manually.

root soft nofile 65000
root hard nofile 65000
proxy soft nofile 65000
proxy hard nofile 65000
squid soft nofile 65000
squid hard nofile 65000
* soft nofile 65000
* hard nofile 65000

It seems these settings only work if the user logs in to the system.
My squid user is "proxy" (I configured it at compile time).

Maybe some useful info:
OS:Ubuntu 12.04

# ulimit -n
65000

# squidclient mgr:info | grep 'file descri'
 Maximum number of file descriptors:   65536
 Available number of file descriptors: 65527
 Reserved number of file descriptors:   100








Re: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-14 Thread Pavel Kazlenka

On 09/14/2013 03:44 PM, Eliezer Croitoru wrote:

SORRY typo:
http://www.linuxtopia.org/online_books/linux_administrators_security_guide/16_Linux_Limiting_and_Monitoring_Users.html#PAM

the above can clarify more about the ulimit stuff.

The basic solution is to define the soft limit in the init script.


I don't think that this is a good solution; it could work as a temporary 
workaround. See my thoughts below.

I would first make sure that the hard and soft limits really are the problem,
like:
ulimit -Sa >>/tmp/ulimit_test
ulimit -Ha >>/tmp/ulimit_test

this will confirm whether the problem lies in the soft and hard limits.

It's a basic linux issue which is not related to squid but more to the
distribution and how you define ulimits.

I assume the limit is at the bash level rather than at the OS level.
http://www.linuxtopia.org/online_books/linux_administrators_security_guide/16_Linux_Limiting_and_Monitoring_Users.html#Bash

hope it helps clarify the issue.

An option could be added to the init.d script to specify the soft and hard 
ulimits via a config file or variable.

I hope to post a new script for centos in the coming weeks.

Eliezer

On 09/14/2013 03:33 PM, Eliezer Croitoru wrote:

As stated before, the suggested solution was to insert the ulimit into
the init script to make sure the limit is absolute!

There is a chance this will solve, or help to solve and find, the issue:
On 09/14/2013 12:05 PM, Mohsen Dehghani wrote:

Oh , no...it is 1024
thanks for the help
Now I added 'ulimit -n 65000' in squid init file and the problem is
resolved. But some questions:

1 - why is it 1024, while I've set 65535 FDs at compile time and the squid user,
which is "proxy", has that limit in the limits.conf file?
This is an interesting question. I guess we need someone like a 
package/distribution maintainer here, because I don't know why limits.d 
doesn't work.

2 - is it ok to increase the FD limit in this way?
No, that's not a good idea. You will hit the problem again each time you 
update your squid using the package manager (as the update will overwrite 
your modified init script).
You should set the limits in an /etc/security/limits.d/squid.conf file (or a 
file with another name). Of course, you still have to find out why this 
doesn't work at the moment.

3 - apparently, according to "# cat /proc/sys/fs/file-max", my OS FD limit is
400577. Can I increase squid's FD limit to that?
Not a really good idea, as that would allow squid to use all the 
available system-wide file descriptors. That value doesn't seem 
too high though, so you can increase both the system-wide file descriptor 
limit (fs.file-max) and squid's.

4 - what is the best FD limit for about 150 Mbps of bandwidth and 18000 RPM?
18000 requests per minute means you may need up to 18000 descriptors 
available. I guess it will not be hard to find an appropriate value by 
watching the squid log and increasing the nofile system limit each time you 
encounter the warning in squid's log.


This could be ugly troubleshooting practice, but you can try to modify your
init script (or upstart job, not sure how exactly squid is being started in
ubuntu). The idea is to add 'ulimit -n > /tmp/squid.descriptors' and see if
the number is really 65k.

On 09/14/2013 09:41 AM, Mohsen Dehghani wrote:

I don't see any logic here. Are you sure your squid is not started by
root?
Does replacing 'root' with 'squid' or '*' solve the issue as well?

When I manually start the service as root, there is no file descriptor
warning and squid works as normal.
But when the system boots up and starts the service automatically,
squid runs out of FDs.

I've tested the following settings without any luck. Every
time the box reboots, I have to log in and restart the service manually.

root soft nofile 65000
root hard nofile 65000
proxy soft nofile 65000
proxy hard nofile 65000
squid soft nofile 65000
squid hard nofile 65000
* soft nofile 65000
* hard nofile 65000


The values seem fine. What exactly is the name of the file you put them into?


It seems these settings only work if the user logs in to the system.
My squid user is "proxy" (I configured it at compile time).

Maybe some useful info:
OS:Ubuntu 12.04

# ulimit -n
65000

# squidclient mgr:info | grep 'file descri'
  Maximum number of file descriptors:   65536
  Available number of file descriptors: 65527
  Reserved number of file descriptors:   100











Re: [squid-users] 100% CPU Load problem with squid 3.3.8

2013-09-16 Thread Pavel Kazlenka

On 09/11/2013 07:19 AM, Mohsen Dehghani wrote:

Thanks everybody

[The problem is resolved]
After adding the following lines to /etc/security/limits.conf:
root soft nofile 6
root hard nofile 6

but I am eager to know the rationale behind it, because squid runs as user "proxy", not
"root".




On 09/16/2013 01:55 AM, Eliezer Croitoru wrote:

Well, for me it works when I start a shell using "su - user".
But there is a need to map the linux boot process and then
find out why bash is limited to 1024 FDs instead of 4k or 4m.
The basic issue is a security issue, which I support enforcing as it is
now.
What can squid do, as a process, to force an FD limit?
squid, as a limited process, can only reach the limit as it is now.
Since squid 3.2 does "forking" under the hood, it is indeed the
best practice to restrict access to and execution of the init.d script,
then raise the ulimit in the init.d script to make sure that the limit
makes sense.
If there is a start-stop-daemon feature, then this is what it was invented
for anyway.
So we can choose to either work with an execution proxy that will enforce
the options for all users, or force the ulimit in the init.d (bash) script.

I would not try to run a for loop under bash that opens more than 512 FDs
if I want to stay sane.

Eliezer


On 09/16/2013 12:58 AM, Carlos Defoe wrote:

Seems right, Kinkie. "ulimit provides control over the resources
available to the shell and to processes started by it". So that's why the
squid process inherits the configuration made in the initialization
script. I assume it doesn't matter which user runs the subprocess.

But limits.conf is a PAM feature. As you said, it should configure
limits on a system-wide basis, for all configured users. What we are saying
is that it does not work at boot time, when squid starts. The
configuration in the init script is the only way to make it work.


On Sun, Sep 15, 2013 at 4:59 PM, Kinkie  wrote:

On Sun, Sep 15, 2013 at 2:51 PM, Carlos Defoe  wrote:

I got the same result as Mohsen. The only thing that worked was adding
"ulimit -n mynumber" to the init script.

It was weird for me, because the script is run by root, not the squid
user, and I thought ulimit -n applied only to the currently logged-in
user. But I think it applies to any session that starts later.

Ulimits are inherited by all child processes; lowering them is always
possible, raising them may be an administrator-only action.
bash's manual (man 1 bash) has an informative chapter on ulimit.
Otherwise you may want to check setrlimit(2).
System-wide settings may be set in /etc/security/limits.conf (or
/etc/limits.conf, depending on your distro). Man 5 limits.conf has the
details (at least on my Ubuntu Raring system).

Kinkie
Setting limits in limits.conf is the correct solution, and it works for the OP. 
Now we have the answer to "Why should the limits be set for root and not for 
the proxy user?": because the squid init script is started by the init process, 
which runs as root, and squid inherits its limits from it.
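You can check what pid 1 will pass down to the services it starts (on any 
reasonably recent linux kernel):

cat /proc/1/limits | grep 'open files'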


As a cleaner solution (at least for ubuntu/debian) I like the approach 
used in the nginx init script: https://www.ruby-forum.com/topic/4414498#291


Re: [squid-users] Video played using vlc media player from a windows media server in not getting stored in cache

2013-09-16 Thread Pavel Kazlenka

Hi susu,

The most common reason in cases like this is that the video file is too big 
to be stored in the cache with the default squid disk cache settings. Please 
show your squid.conf, especially the cache_dir line.
You can also add 'debug_options 20,9 27,9 31,9 70,9 22,9 90,9' to your 
config file; I usually use these to find out why a file is not being 
cached. Look through cache.log for 'YES' and 'NO' (capital letters) to 
find the exact reason.
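For example (the log path is an assumption; adjust to your install):

grep -E 'cacheableReply|storeCheckCachable|CheckQuickAbort' /var/log/squid/cache.log | grep -E 'YES|NO'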


Best wishes,
Pavel
On 09/17/2013 09:23 AM, susu wrote:

Hi,

I am playing a video hosted on a windows media server using vlc
media player on a client, but the video is not getting stored in the squid
cache. I have checked that the video is being delivered over http. Do you have
any idea why it is not getting cached? The video file is a wmv file. If
you have any idea, please help me. Thank you.








Re: [squid-users] Re: Video played using vlc media player from a windows media server in not getting stored in cache

2013-09-17 Thread Pavel Kazlenka

Ok, these two lines matter here, I guess:

>CheckQuickAbort2: YES bad content length
>storeCheckCachable: NO: release requested

The relevant source is:

if (curlen > expectlen) {
    debugs(90, 3, "CheckQuickAbort2: YES bad content length");

If I understand correctly, squid downloaded more from the origin server 
than the server announced. But the developers can correct me/explain what's 
wrong.


Can you sniff and compare the http traffic on the proxy server when you watch 
the video with vlc and with wmp? I'm curious what difference causes 
squid to behave differently. The command should be something 
like:

sudo tcpdump -i any 'port 3128 or 80' -Afnn -s 1024
(I assume that squid is running on port 3128 here.)

On 09/17/2013 01:40 PM, susu wrote:

Hi Pavel,

Here the size of the file is not the issue. If I play the same file
using windows media player, it gets cached. I have tried the method
you suggested, but those messages are not so clear to me. I am posting
them below. Please have a look and tell me if you can make
something out.


2013/09/17 10:44:02| storeClientCopy: 3AC24A8523C630E0017D538FCF058E6D, seen
1893429, want 1893429, size 4096, cb 0x423170, cbdata 0x16d8fc48
2013/09/17 10:44:02| storeClientCopy2: 3AC24A8523C630E0017D538FCF058E6D
2013/09/17 10:44:02| storeClientCopy3: Copying from memory
2013/09/17 10:44:02| storeClientCopy: 3AC24A8523C630E0017D538FCF058E6D, seen
1895953, want 1895953, size 4096, cb 0x423170, cbdata 0x16d8fc48
2013/09/17 10:44:02| storeClientCopy2: 3AC24A8523C630E0017D538FCF058E6D
2013/09/17 10:44:02| storeClientCopy3: Waiting for more
2013/09/17 10:44:05| storeClientUnregister: called for
'3AC24A8523C630E0017D538FCF058E6D'
2013/09/17 10:44:05| storeClientUnregister: store_client for
http://10.102.78.163/Police_122.wmv?MSWMExt=.asf has a callback
2013/09/17 10:44:05| storeSwapOutMaintainMemObject: lowest_offset = 1895954
2013/09/17 10:44:05| storePendingNClients: returning 0
2013/09/17 10:44:05| CheckQuickAbort2: entry=0x16f3e930, mem=0x16cdbc90
2013/09/17 10:44:05| CheckQuickAbort2: YES bad content length
2013/09/17 10:44:05| storeAbort: 3AC24A8523C630E0017D538FCF058E6D
2013/09/17 10:44:05| storeLockObject: (store.c:1397): key
'3AC24A8523C630E0017D538FCF058E6D' count=6
2013/09/17 10:44:05| storeExpireNow: '3AC24A8523C630E0017D538FCF058E6D'
2013/09/17 10:44:05| storeReleaseRequest: '3AC24A8523C630E0017D538FCF058E6D'
2013/09/17 10:44:05| storeDirSwapLog: SWAP_LOG_DEL
3AC24A8523C630E0017D538FCF058E6D 0 031E
2013/09/17 10:44:05| storeKeyPrivate: GET
http://10.102.78.163/Police_122.wmv?MSWMExt=.asf
2013/09/17 10:44:05| storeHashInsert: Inserting Entry 0x16f3e930 key
'D6568710A5766951D833459D605758CE'
2013/09/17 10:44:05| InvokeHandlers: D6568710A5766951D833459D605758CE
2013/09/17 10:44:05| storeSwapOutFileClose: D6568710A5766951D833459D605758CE
2013/09/17 10:44:05| storeSwapOutFileClose: sio = 0x16e04518
2013/09/17 10:44:05| storeSwapOutFileClosed: SwapOut complete:
'http://10.102.78.163/Police_122.wmv?MSWMExt=.asf' to 0, 031E
2013/09/17 10:44:05| storeCheckCachable: NO: release requested
2013/09/17 10:44:05| storeSwapOutFileClosed: store_swapout.c:371
2013/09/17 10:44:05| storeSwapOutMaintainMemObject: lowest_offset = 1895954
2013/09/17 10:44:05| storeUnlockObject: (store_swapout.c:375): key
'D6568710A5766951D833459D605758CE' count=5
2013/09/17 10:44:05| storeUnlockObject: (store.c:1422): key
'D6568710A5766951D833459D605758CE' count=4
2013/09/17 10:44:05| storeUnlockObject: (store_client.c:575): key
'D6568710A5766951D833459D605758CE' count=3
2013/09/17 10:44:05| storeUnlockObject: (client_side.c:1271): key
'D6568710A5766951D833459D605758CE' count=2
2013/09/17 10:44:05| storePendingNClients: returning 0
2013/09/17 10:44:05| storeUnlockObject: (forward.c:119): key
'D6568710A5766951D833459D605758CE' count=1
2013/09/17 10:44:05| storeUnlockObject: (http.c:75): key
'D6568710A5766951D833459D605758CE' count=0
2013/09/17 10:44:05| storePendingNClients: returning 0
2013/09/17 10:44:05| storeRelease: Releasing:
'D6568710A5766951D833459D605758CE'
2013/09/17 10:44:05| destroy_StoreEntry: destroying 0x16f3e930
2013/09/17 10:44:05| ctx: enter level  0:
'http://10.102.78.163/Police_122.wmv?MSWMExt=.asf'
2013/09/17 10:44:05| destroy_MemObject: destroying 0x16cdbc90

Thanks,
Susu







Re: [squid-users] Re: Video played using vlc media player from a windows media server in not getting stored in cache

2013-09-17 Thread Pavel Kazlenka
You have to understand that you are the only one who can troubleshoot 
this; I can only point you at the next steps. At the moment I have no answer 
to your question, but there is a chance we will see something interesting 
by comparing the captures.


On 09/17/2013 02:51 PM, susu wrote:

Hi Pavel,

I have seen that. I have also noticed that these things are not
executed for windows media player, and that content does get cached. My
question is why this condition is hit for vlc media player. In both
cases the server does not specify any content length, so squid
internally marks it as -1 and it should save the content in the cache. If
you have any detailed information, please let me know.

Thanks,
Subhrangsu







Re: [squid-users] Connection reset by peer

2013-10-08 Thread Pavel Kazlenka

Hi John,

As Amos mentioned, it would be great to see the http payload of the packets 
(use the -Afnn switches for this). Also, the traffic on the proxy itself is 
more interesting.
IIRC, a 502 status code means that your proxy had some issue reading 
data from the origin server. Do you see anything suspicious in cache.log?


Best wishes,
Pavel

On 10/09/2013 09:00 AM, John Kenyon wrote:

What makes you think that? HTTP/1.1 does not require FIN and as far as I can
see those URL responses all contain explicit Connection:keep-alive.

The content of those packets between Squid and server may shed some more
light on it. Try saving a full packet dump and viewing it with wireshark.

Amos


Hi Amos,

I have rolled back to Squid 3.1.23 and all is working as expected now, but I 
would still like to find out what in Squid 3.3.x is causing this... I will need 
to run up another instance of squid to perform further testing.

The reason I thought it might be related to the FIN packet is that when it does 
work, the server seems to be able to send the final packet before the reset 
packet.

Here is the full tcpdump... I can install wireshark if required...

NOT WORKING

# tcpdump -i eth0 dst 66.151.79.155
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
16:53:53.462042 IP proxyserver.33784 > 66.151.79.155.http: S 264441315:264441315(0) 
win 5840 
16:53:53.665606 IP proxyserver.33784 > 66.151.79.155.http: . ack 258927824 win 23 

16:53:53.666037 IP proxyserver.33784 > 66.151.79.155.http: P 0:636(636) ack 1 win 23 

16:53:53.666217 IP proxyserver.33784 > 66.151.79.155.http: P 636:711(75) ack 1 win 23 

16:53:53.903639 IP proxyserver.33784 > 66.151.79.155.http: . ack 327 win 27 

16:53:54.028623 IP proxyserver.33784 > 66.151.79.155.http: P 711:1363(652) ack 327 
win 27 

# tcpdump -i eth0 src 66.151.79.155
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
16:55:17.007426 IP 66.151.79.155.http > proxyserver.34334: S 2581779361:2581779361(0) 
ack 350474126 win 16384 
16:55:17.225169 IP 66.151.79.155.http > proxyserver.34334: . ack 714 win 64822 

16:55:26.115901 IP 66.151.79.155.http > proxyserver.34334: P 1:327(326) ack 714 win 
64822 
16:55:26.552923 IP 66.151.79.155.http > proxyserver.34334: . ack 1366 win 64170 

16:55:26.943813 IP 66.151.79.155.http > proxyserver.34334: R 327:327(0) ack 
1366 win 0


WORKING

# tcpdump -i eth0 dst 66.151.79.155
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
17:07:28.092131 IP proxyserver.53294 > 66.151.79.155.http: S 1133726612:1133726612(0) 
win 5840 
17:07:31.091991 IP proxyserver.53294 > 66.151.79.155.http: S 1133726612:1133726612(0) 
win 5840 
17:07:31.295283 IP proxyserver.53294 > 66.151.79.155.http: . ack 1237499326 win 23 

17:07:31.295597 IP proxyserver.53294 > 66.151.79.155.http: P 0:636(636) ack 1 win 23 

17:07:31.295710 IP proxyserver.53294 > 66.151.79.155.http: P 636:711(75) ack 1 win 23 

17:07:31.529730 IP proxyserver.53294 > 66.151.79.155.http: . ack 327 win 27 

17:07:31.680187 IP proxyserver.53294 > 66.151.79.155.http: P 711:1363(652) ack 327 
win 27 
17:07:31.882188 IP proxyserver.53294 > 66.151.79.155.http: F 1363:1363(0) ack 328 win 
27 
17:07:31.882228 IP proxyserver.53320 > 66.151.79.155.http: S 1131177905:1131177905(0) 
win 5840 
17:07:32.076585 IP proxyserver.53320 > 66.151.79.155.http: . ack 3528185504 win 23 

17:07:32.076867 IP proxyserver.53320 > 66.151.79.155.http: P 0:652(652) ack 1 win 23 

17:07:32.423790 IP proxyserver.53320 > 66.151.79.155.http: . ack 136 win 27 

17:07:32.425107 IP proxyserver.53320 > 66.151.79.155.http: . ack 1504 win 38 

17:07:32.618487 IP proxyserver.53320 > 66.151.79.155.http: . ack 2872 win 49 

17:07:32.618532 IP proxyserver.53320 > 66.151.79.155.http: . ack 4240 win 60 

17:07:32.619921 IP proxyserver.53320 > 66.151.79.155.http: . ack 5608 win 70 

17:07:32.619976 IP proxyserver.53320 > 66.151.79.155.http: . ack 6976 win 81 

17:07:32.813212 IP proxyserver.53320 > 66.151.79.155.http: . ack 8344 win 92 

17:07:32.813261 IP proxyserver.53320 > 66.151.79.155.http: . ack 9712 win 102 

17:07:32.813381 IP proxyserver.53320 > 66.151.79.155.http: . ack 11080 win 113 

17:07:32.813503 IP proxyserver.53320 > 66.151.79.155.http: . ack 12448 win 124 

17:07:32.814275 IP proxyserver.53320 > 66.151.79.155.http: . ack 13816 win 134 

17:07:32.814372 IP proxyserver.53320 > 66.151.79.155.http: . ack 15184 win 145 

17:07:32.814485 IP proxyserver.53320 > 66.151.79.155.http: . ack 16528 win 156 

17:07:33.007797 IP proxyserver.53320 > 66.151.79.155.http: . ack 16536 win 156 

17:07:33.010420 IP proxyserver.53320 > 66.151.79.155.http: . ack 17904 win 166 

17:07:33.010536 IP proxyserver.53320 > 66.151.79.155.http: . ack 19272 win 177 

17:07:33.010653 IP proxyserver.53320 > 66.1

Re: [squid-users] Low performance even with low number of users

2013-10-11 Thread Pavel Kazlenka
Could you also check the availability of the primary DNS server on the proxy 
node? I suspect that it is not available, so squid makes a dns query to the 
primary server, waits for the timeout (5 seconds by default, IIRC) and then 
queries the secondary DNS server (which answers, and you get 
your page with a 5-7 second delay).
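A quick way to test this theory (take the nameserver addresses from 
/etc/resolv.conf; the address below is an example):

dig @192.168.0.1 www.example.com    # does the primary answer at all?

If the primary really is dead, you can also shrink the damage in squid.conf 
while you fix it, e.g.:

dns_timeout 2 seconds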



On 10/11/2013 03:05 AM, Amos Jeffries wrote:

On 11/10/2013 4:00 a.m., Luiz Felipe wrote:

Hi,

I have a "curious" case of low performance with Squid.

The symptom is that every request takes at least +/- 7 seconds, even
when there is a small number of clients.

The browser keeps waiting and when it load, it loads fast.

If I allow the client to access the web without proxy, it access
really fast. So, there's no bandwidth problems - and in fact it loads
fast after the "freeze" period. (with the same DNS, by the way).


The hardware is enough for the demand - but maybe the settings are 
wrong.

Version 2.6.STABLE21 (CentOS)

Any hints?


I suspect the HTTP/1.1 Expect: feature. That feature is being used by more 
and more services around the Internet as HTTP/1.1 support improves in 
middleware like Squid.

Please try an upgrade to the current supported stable Squid (3.3.9 as 
of today) and see if your problem simply disappears.


Amos




Re: [squid-users] Connection reset by peer

2013-10-12 Thread Pavel Kazlenka

Ok.

Is it possible for you to dump traffic into a file like this:

#tcpdump -i any 'port  or port 53 or host 
66.151.79.155' -w /tmp/squid.pcap

And upload /tmp/squid.pcap to some public hosting?
Also, please note that your dump contains plain-text passwords. This 
could be unsafe ;)


Best wishes,
Pavel.

On 10/12/2013 03:34 AM, Amos Jeffries wrote:

On 11/10/2013 5:53 p.m., John Kenyon wrote:

Here is what I do to get the required HTTP stream details from tcpdump:

* use the -s option to fetch unlimited packet payload (-s 0 or -s 65536
depending on your system).
* save the capture to a .cap file.
* open with wireshark
* locate any packet in the desired HTTP stream and select "follow 
TCP stream"

* cut-n-paste the HTTP details out of the resulting plain text document
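
Putting those steps together, the capture command would look something like
this (the interface and host filter are examples; substitute your own):

# tcpdump -i eth0 -s 0 -w /tmp/http.cap 'host 66.151.79.155'

then open /tmp/http.cap in wireshark and use "follow TCP stream".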

PS. if you happen to notice anything strange like binary characters 
in amongst

the HTTP protocol headers, they themselves could be the cause of the
problems. The only binary should be in payload/object/body blocks 
between the

message header blocks.

Amos


Hey Amos,

Here is the stream content:


Okay. The odd thing is these are all missing Date headers. But there is 
nothing obvious that would lead to disconnection.


Amos



POST /scripts/mms.dll/JAWS/MMS/acs/f_login HTTP/1.1

Host: www.cmmsau.com

User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 
Firefox/24.0


Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language: en,en-us;q=0.5

Accept-Encoding: gzip, deflate

Referer: http://www.cmmsau.com/mms/mm_login.htm

Cookie: 
__utma=257591705.1931310241.1381466348.1381466348.1381466348.1; 
__utmb=257591705.1.10.1381466348; __utmc=257591705; 
__utmz=257591705.1381466348.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)


Content-Type: application/x-www-form-urlencoded

Content-Length: 75

Cache-Control: max-age=259200

Connection: keep-alive



as_userid=asamuels&as_dbpass=as2013&as_store=00200021&submit.x=0&submit.y=0
HTTP/1.1 200 OK


Server: Jaguar Server Version 5.5.0

Connection: Keep-Alive

Content-Type: text/html

Content-Length: 200







window.location.href="http://www.cmmsau.com/scripts/mms.dll/JAWS/MMS/acs/f_redirect?as_sid=82A18A8F96938DA18A95737E72816AAF&as_proj=00200021&as_flag=RL";



GET /scripts/mms.dll/JAWS/MMS/acs/f_redirect?as_sid=82A18A8F96938DA18A95737E72816AAF&as_proj=00200021&as_flag=RL HTTP/1.1

Host: www.cmmsau.com

User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language: en,en-us;q=0.5

Accept-Encoding: gzip, deflate

Referer: http://www.cmmsau.com/scripts/mms.dll/JAWS/MMS/acs/f_login

Cookie: __utma=257591705.1931310241.1381466348.1381466348.1381466348.1; __utmb=257591705.1.10.1381466348; __utmc=257591705; __utmz=257591705.1381466348.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)

Cache-Control: max-age=0

Connection: keep-alive

Cheers, John

Re: [squid-users] kerberos annoyances


Hi Marko,

Squid's kerberos helper has a debug mode. Just add the '-d' switch to the 
'auth_param negotiate program /usr/sbin/squid_kerb_auth' line in the 
squid.conf file.
Also, here is some useful information and tips: 
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos#Troubleshooting_Tools
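
The line would then look something like this (helper path as in your current
config; any other helper options stay as they are):

auth_param negotiate program /usr/sbin/squid_kerb_auth -d

The helper's debug messages end up in cache.log.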


Best wishes,
Pavel

On 10/14/2013 06:10 PM, Marko Cupać wrote:

I am trying to set up kerberos authentication in the following environment:

Kerberos server: Windows 2008 R2 domain controller
Proxy OS: FreeBSD 9.2-RELEASE amd64
Squid version: squid-3.3.9

The problem is the fact that kerberos authentication sporadically starts
to work (no auth popups, cache log shows username of authenticated user)
without any apparent reason, and then later it stops working (popping up
auth window) showing the following in cache.log:

2013/10/14 17:00:10 kid1| ERROR: Negotiate Authentication validating user. 
Error returned 'BH gss_acquire_cred() failed:  No credentials were supplied, or 
the credentials were unavailable or inaccessible.. unknown mech-code 0 for mech 
unknown'

I have no idea how to start troubleshooting. Any tips?





Re: [squid-users] squid & squid3 running at the same time ! , can we get benefit ?


Hi Ahmad,

Please see my replies to your question inline:

On 10/22/2013 11:29 AM, Ahmad wrote:

hi all,
actually I'm asking about squid & squid 3 running at the same time
under an ubuntu or debian OS,


my question is:

can we get any benefit from doing this?

I don't think so. Is there any squid v2 feature that you lack in squid v3?


can we get the benefit of multiple cpu cores for squid by this method, without
using SMP?
No, you can't. It's better to run a number of squid 3 workers in an SMP 
configuration.
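
A minimal SMP setup in squid.conf is just this (the worker count is an
example; match it to your box):

workers 4

Each worker is a separate squid process, and they all share the configured
http_port(s).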




if not,

what is the benefit of using both of them at the same time?

There's no benefit. If you have nothing that requires squid2, use squid3.



regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-squid3-running-at-the-same-time-can-we-get-benefit-tp4662785.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Slow internet navigation squid vs blue coat


Hi,

Just want to put my two pennies in. 'Slow' internet navigation through 
squid is often caused by incorrect DNS server settings on the squid box. 
Common issues are:

- ipv6 DNS queries are performed first;
- the first DNS server in /etc/resolv.conf is not responsive.

Both of these cases add 5-7 seconds to each request (DNS request 
timeout). So I'd recommend making sure that DNS is not an issue here.
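
Both can be tuned in squid.conf, for example (the nameserver address is a
placeholder for your own working resolver):

dns_v4_first on
dns_nameservers 192.168.0.1

dns_v4_first makes squid prefer IPv4 results over IPv6 ones; dns_nameservers
makes squid query the listed servers instead of those in /etc/resolv.conf.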


The easiest way to find the place where the delay is added is to sniff 
the request-response on the squid box. The capture will show you which part 
of the path adds the delay.


Best wishes,
Pavel

On 11/25/2013 03:57 PM, Kinkie wrote:

On Mon, Nov 25, 2013 at 11:26 AM, Michele Mase'  wrote:

Problem: internet navigation is extremely slow.
I've used squid from 1999 with no problems at all; during last month,
one proxy gave me a lot of troubles.
First we upgraded the system, from RHEL5.x - squid 2.6.x to RHEL6.x
squid3.4.x with no improvements.
Second, we have bypassed the Trend Micro Interscan proxy (the parent
proxy) without success.
Third: I do not know what to do.
So what should be done?
Some configuration improvements (sysctl/squid)?
Could it be a network related problem? (bandwidth/delay/MTU/other)?

Hi Michele,
   "extremely slow" is quite a poor indication, unfortunately. Can you
measure it? e.g. by using apachebench (ab) or squidclient to measure
the time download a large file and a small file through the proxy from
a local source and from a remote source. Then repeating the same from
the box where Squid runs and then from a different box.
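
For example (the proxy address/port and URLs are placeholders):

time curl -o /dev/null http://example.com/big.bin                             (direct)
time curl -x http://127.0.0.1:3128/ -o /dev/null http://example.com/big.bin   (via proxy)
ab -n 50 -X 127.0.0.1:3128 http://example.com/small.gif                       (ab via proxy)

Comparing the direct and proxied timings shows how much latency the proxy
itself adds.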

Think about what has remained unchanged since the time when there were no
performance problems: e.g. network cables, switches, routers etc.

Kinkie




Re: [squid-users] Transparent proxy


On 11/30/2013 03:33 PM, Monah Baki wrote:

Hi Amos,

Thanks for the explanation. I switched to intercept, yet once I restart
squid, I am still seeing the "No forward-proxy ports configured" error.

The same machine later on will also be running IPtables since it has 2
NIC's in it.
You need both an 'intercept' port and a 'forward-proxy' port in the config, 
even if you don't use forward proxy:


http_port 3129
http_port 3128 intercept





Monah

On Sat, Nov 30, 2013 at 4:56 AM, Amos Jeffries  wrote:

On 30/11/2013 10:26 a.m., Monah Baki wrote:

Hi all,


I'm trying to setup a transparent proxy squid 3.3.9 using the following URL:


http://www.broexperts.com/2013/03/squid-as-transparent-proxy-on-centos-6-4/

What's the difference between

http_port 3128 transparent

The above expects all arriving traffic to be in HTTP port 80 origin
server format. Used for receiving intercept-proxy traffic.

Also, the TCP level details are assumed to have passed through some form
of NAT system and need to be un-NAT'd before use. In Squid since 3.2 if
the original TCP details are not found in the NAT records some
restrictions are placed on what happens with the request and response.



and
http_port 3128


This one expects all arriving traffic to be in HTTP proxy format. Used
for receiving forward-proxy traffic.


If I were to configure with http_port 3128 transparent and restart
squid, I get in my access.log file:
  ERROR: No forward-proxy ports configured.

If I were to then browse, nothing happens.

I am not running iptables by the way.

iptables or some other NAT system is mandatory for getting the traffic
to an intercept port. Squid is fetching the TCP details from the kernel
NAT records and using that as the preferred destination on outbound
connections.

As for the tutorial: it is broken in several major ways, which for an
8-line example is remarkable in itself. Consider following the official
wiki configuration example instead
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect


* The "transparent" option has been deprecated by "intercept" option
since 2010.

* Using DNAT rules without matching SNAT rules prevents TCP reply
packets working at all. I'm not surprised half the comments are about it
"not working".

* Having both REDIRECT and DNAT rules on the same box is overkill
anyway. DNAT is best for machines with a static IP address, REDIRECT for
machines with a dynamically assigned IP address or if writing examples for
complete newbies (see the REDIRECT sketch after this list).

* Using port 3128 for the intercept port is a very BAD idea. There are
active attacks in the wild scanning for open proxy ports and intercept
without firewall protection on the port is ripe for attack. It should be
a secret port which you can firewall away from all access beyond the
machine itself. Only the NAT firewall and Squid need to use it.
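
As a sketch of the REDIRECT approach (interface and port are illustrative;
see the wiki page above for the complete, supported recipe):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3129

with 'http_port 3129 intercept' on the squid side, and port 3129 firewalled
against everything except the machine itself.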


HTH
Amos




Re: [squid-users] can not connect squid from remote machine


A TCP (telnet) timeout means that you have a networking issue.
Check firewalls and routing, as well as whether squid is started and listening 
on the port (#netstat -ntpl on the squid node).
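
If squid is up you should see a listening socket, roughly like this (the
output shape is illustrative only):

# netstat -ntpl | grep 3128
tcp6       0      0 :::3128        :::*        LISTEN      1234/squid

No such line means squid isn't listening at all; a line here plus a remote
timeout points at a firewall or routing problem instead.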


On 12/01/2013 12:24 PM, janwen wrote:

I'm just trying to use squid; I've been trying to set it up for 2 days.
I use squidclient http://www.googe.com and get a response on the local machine,
but when I try to connect from a remote ip (any ip is allowed for the test),
I use:
telnet ip 3128 and just get a timeout exception.

my squid.conf as follow:
#
# Recommended minimum configuration:
#
#user
cache_effective_user squid
cache_effective_group squid

visible_hostname beerdark.com
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly 
plugged) machines

#acl all src 0.0.0.0/0.0.0.0
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl testip src 222.73.112.140

http_access allow all
http_access allow testip
#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access allow testip
# And finally deny all other access to this proxy
#http_access allow all
http_access deny all
# Squid normally listens to port 3128
http_port 3128

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /usr/local/squid/var/cache/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache/squid

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320





Re: [squid-users] can not connect squid from remote machine


On 12/01/2013 12:32 PM, janwen wrote:

thanks for your reply:
all your suggestions I tried before I sent the email for help.
netstat -tulpn | grep ':3128'
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp6 0 0 :::3128

telnet localhost 3128
Trying 127.0.0.1...
Connected to ubuntu.San.
Escape character is '^]'.

so squid starts ok. No firewall settings.




Then you have no IP connectivity between the client and the squid server. 
Check using ping.


P.S. Please, don't CC me, use 'reply to list' action (if available in 
your client).








Re: [squid-users] can not connect squid from remote machine


On 12/01/2013 12:41 PM, janwen wrote:

thanks, telnet to the ssh port is ok: telnet ip 22
and how can I ping squid proxy port 3142?

If telnet to the ssh port is ok, then the IP connection is fine and we get 
back to the firewall theory. If you need proof that there is some firewall 
between your client and the squid server, just start tcpdump on the squid 
proxy (e.g. # tcpdump -i any 'port 3128'). Then try telnet from the client 
again. If a firewall is dropping the traffic, you will not see anything in 
the capture on the server.










Re: [squid-users] can not connect squid from remote machine


On 12/01/2013 01:07 PM, janwen wrote:

tcpdump on squid server get follow:
janwen@ubuntu:/usr/local/squid$ sudo tcpdump -i any 'port 3128'
tcpdump: verbose output suppressed, use -v or -vv for full protocol 
decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 
65535 bytes


and then telnet from the client; the squid server output follows. It seems 
that the request connects to the squid server, but the client shows 
'Connection timed out'.

And when I use google chrome to connect to the squid server, it doesn't work 
either.


10:05:46.555224 IP xxx .57922 > ubuntu.San.3128: Flags [S], seq 
2416550145, win 14600, options [mss 1460,sackOK,TS val 422202410 ecr 
0,nop,wscale 7], length 0
10:05:46.555271 IP ubuntu.San.3128 > 222.73.112.140.57922: Flags [S.], 
seq 1908335791, ack 2416550146, win 14480, options [mss 1460,sackOK,TS 
val 3545032717 ecr 422202410,nop,wscale 8], length 0
Ok. The server-side capture shows that the server replies with a syn-ack, but 
this syn-ack doesn't reach the client (if it did, we would see the client's 
'ack' in the capture). Something prevents the server's TCP 'syn-ack' from 
reaching your client. So the most probable cause is a FIREWALL (on the 
server, on the client, or on some intermediate network device). However, 
looking at your client's address, I'm starting to suspect incorrect NAT 
settings (check that the conntrack module is loaded on your NAT box, if one 
exists).
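
A quick check on the NAT box would be something along these lines (module
names vary slightly between kernel versions):

# lsmod | grep conntrack
# modprobe nf_conntrack_ipv4   (if the list comes back empty)

Without connection tracking, DNAT/SNAT cannot translate reply packets back to
the client, which matches the symptom above.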




Re: [squid-users] Netfilter error when running configure

Here's something similar to your problem: 
http://www.squid-cache.org/mail-archive/squid-users/201005/0134.html


Do you have gcc-c++ and kernel-devel packages installed?
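
On CentOS that would be something like (the usual suspects; the exact set may
vary with your configure options):

# yum install gcc gcc-c++ kernel-devel

and then re-run ./configure.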

On 12/27/2013 06:27 PM, csn233 wrote:

I'm getting this netfilter error when compiling 3.3.11 on Centos 6.5:

checking if __va_copy is implemented... no
configure: IPF-based transparent proxying enabled: no
configure: error: Linux Netfilter support requested but needed headers not found

I've already got the following installed:

libnetfilter_conntrack.x86_64
libnetfilter_conntrack-devel.x86_64
libnfnetlink.x86_64 1.0.0-1.el6 @base
libnfnetlink-devel.x86_64

What else am I missing? There are some earlier Google hits about this
message, but they don't seem relevant (to me).




Re: [squid-users] Issue with Web Traffic through IPSEC Tunnel to a Squid Proxy


Hi,

I guess you are missing some information that is important for 
troubleshooting. Can you access web sites from location 1 using proxy 1? Can 
you access web sites from proxy1 directly (e.g. using curl)? For now, I'd 
suspect that the point of failure is between proxy1 and the internet.
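
For example, from the proxy1 box itself (the URL is a placeholder, and the
proxy address/port should match your http_port, which is 80 in the config
below):

curl -v http://www.example.com/                              (direct)
curl -v -x http://192.168.1.10:80/ http://www.example.com/   (through proxy1)

If the direct request works but the proxied one hangs, look at squid and its
ACLs; if both hang, the failure is on the path from proxy1 to the internet.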


Best wishes,
Pavel

On 01/16/2014 02:22 AM, RKGD512 wrote:

Hi All-
So I have a need to direct all web traffic through an IPSEC Tunnel to a
Squid Proxy server on the other end of the tunnel.

Sounds complicated, but the concept is really easy; however, I am having
issues.

So let me gather as much info as I can:

*Location 1 Subnet:* 192.168.1.0/24
*Location 1 Router 1:* Netgear WNR2000v3 running Firmware: DD-WRT v24-sp2
(02/09/12) std
*Location 1 Router 2:* TPLink TL-R600VPN - VPN Router Housing the IPSEC
Tunnel
  
*Location 2 Subnet:* 192.168.100.0/24

*Location 2 Router 1:* Linksys WRT310Nv2 running Firmware: DD-WRT v24-sp2
(08/12/10) std-nokaid-small
*Location 2 Router 2:* TPLink TL-R600VPN - VPN Router Housing the IPSEC
Tunnel

Location 1's proxy server is housed on VMware Workstation Version 10 with
Centos 6.4 Minimal with squid proxy installed.

*Description of Issue* So when I enter the proxy server info in System proxy
and open a webpage, the page sits there until it times out.  It never
displays anything.  I can see that the proxy server is interpreting the
request but on the client from Location 2 to location 1's proxy server is
unable to browse the internet.

Now the funny thing is, as a test I created the same proxy on location 2's
side, location 1 can browse the internet fine and I can tell from
whatismyip.com as well as from logs that everything is fine.  I checked all
required firewalls (iptables) and squid configs.  Even tried turning off
iptables on the router as well as on the proxy server and included
"http_access allow all" with no success.

Why it works in one direction versus the other, I have no idea.  I validated
every hop's config and they are all identical in their firewall settings and
squid proxy settings.

Any help would be greatly appreciated!

Showing configs below:

Here's the squid Config:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1 192.168.2.0/24 192.168.100.0/24
192.168.1.0/24
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localnet
http_access allow localhost

http_access deny all

http_port 80

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid

refresh_pattern ^ftp:   1440 20% 10080
refresh_pattern ^gopher: 1440 0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320


Here are some logs to show the request is hitting the squid server:
&user_id=150566193&nid=2&ts=1389816137 - NONE/- text/html
1389816227.699 58 192.168.100.73 TCP_MISS/200 360 GET
http://notify4.dropbox.com/subscribe? - DIRECT/108.160.162.51 text/plain
1389816279.774  0 192.168.100.73 TCP_MEM_HIT/301 736 GET
http://google.com/ - NONE/- text/html
1389816279.934    136 192.168.100.73 TCP_MISS/302 1186 GET
http://www.google.com/ - DIRECT/74.125.239.17 text/html
1389816285.846   5857 192.168.100.73 TCP_MISS/200 3539 CONNECT
www.google.com:443 - DIRECT/74.125.239.17 -
1389816288.123  0 192.168.100.73 TCP_MEM_HIT/301 736 GET
http://google.com/ - NONE/- text/html
1389816288.207 42 192.168.100.73 TCP_MISS/302 1186 GET
http://www.google.com/ - DIRECT/74.125.239.17 text/html
1389816294.935   6671 192.168.100.73 TCP_MISS/200 3539 CONNECT
www.google.com:443 - DIRECT/74.125.239.17 -
1389816378.040  60130 192.168.100.73 TCP_MISS/200 3828 CONNECT
client-lb.dropbox.com:443 - DIRECT/108.160.165.83 -
1389816387.059  60128 192.168.100.73 TCP_MISS/200 4242 CONNECT
d.dropbox.com:443 - DIRECT/108.160.165.189 -
1389816408.033 180281 192.168.100.73 TCP_MISS/200 3828 CONNECT
client-lb.dropbox.com:443 - DIRECT/108.160.166.9 -
1389816422.068  0 192.168.100.73 NONE/400 3874 GET
/subscribe?host_int=819546594&ns_map=241516770_170677946892514,261374389_5265891279285,24151

Re: [squid-users] Unbalaned Cpu cores with squid 3.4.3 with centos 6.4 64 bit


Hi,

Feel free to use 24 workers. There should not be a deficiency in squid 
performance.
For better performance, use the cpu_affinity_map configuration directive to 
bind each squid worker to a dedicated cpu core explicitly.
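
For example, for the 4 workers you mention (process and core numbers are
illustrative; cores are numbered from 1 in this directive):

workers 4
cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4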


Best wishes,
Pavel


On 02/12/2014 05:29 PM, Dr.x wrote:

hi all ,

I've tried CentOS 6.4 64-bit with 32 G RAM, with squid 3.4.3 with TPROXY,
but the cpu cores are not balanced!
This machine is a Dell R720; it has 24 cores.

Before trying this machine, I tried it on a quad-core machine with the same
squid config file, and it gave me equal sharing among the 8 cores of the cpu.

But when I tried the same config, same kernel, same squid and same OS on the
Dell R720, which has 24 cpu cores,
I found that the squid processes are only distributed over about 5 cores out
of the total of 24!

my question is:
does squid depend on the cpu hardware? Does that mean squid SMP is
compatible with some hardware and not with others?

==
ive pumped about 1000 ips to squid
and here is a snapshot with cores which are un balanced !
those are cores of delr720
  


I have only 4 workers and don't want to increase them, because I think if I
increased them I would have low bandwidth utilization.

===
here is last of dmesg file log :


EXT4-fs (sda1): re-mounted. Opts: (null)
iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
iTCO_wdt: Found a Patsburg TCO device (Version=2, TCOBASE=0x0860)
iTCO_wdt: Intel TCO WatchDog Timer Driver v1.10
iTCO_vendor_support: vendor-support=0
dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2)
perf_event_intel: PEBS enabled due to microcode update
microcode: CPU23 updated to revision 0x710, date = 2013-06-17
microcode: CPU22 updated to revision 0x710, date = 2013-06-17
microcode: CPU21 updated to revision 0x710, date = 2013-06-17
microcode: CPU20 updated to revision 0x710, date = 2013-06-17
microcode: CPU19 updated to revision 0x710, date = 2013-06-17
microcode: CPU18 updated to revision 0x710, date = 2013-06-17
microcode: CPU17 updated to revision 0x710, date = 2013-06-17
microcode: CPU16 updated to revision 0x710, date = 2013-06-17
microcode: CPU15 updated to revision 0x710, date = 2013-06-17
microcode: CPU14 updated to revision 0x710, date = 2013-06-17
microcode: CPU13 updated to revision 0x710, date = 2013-06-17
microcode: CPU12 updated to revision 0x710, date = 2013-06-17
microcode: CPU11 updated to revision 0x710, date = 2013-06-17
microcode: CPU10 updated to revision 0x710, date = 2013-06-17
microcode: CPU9 updated to revision 0x710, date = 2013-06-17
microcode: CPU8 updated to revision 0x710, date = 2013-06-17
microcode: CPU7 updated to revision 0x710, date = 2013-06-17
microcode: CPU6 updated to revision 0x710, date = 2013-06-17
microcode: CPU5 updated to revision 0x710, date = 2013-06-17
microcode: CPU4 updated to revision 0x710, date = 2013-06-17
microcode: CPU3 updated to revision 0x710, date = 2013-06-17
microcode: CPU2 updated to revision 0x710, date = 2013-06-17
microcode: CPU1 updated to revision 0x710, date = 2013-06-17
microcode: CPU0 updated to revision 0x710, date = 2013-06-17
microcode: Microcode Update Driver: v2.00 <tig...@aivazian.fsnet.co.uk>, Peter Oruba
microcode: CPU23 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU22 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU21 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU20 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU19 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU18 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU17 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU16 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU15 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU14 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU13 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU12 sig=0x206d7, pf=0x1, revision=0x70b
microcode: CPU11 sig=0x206d7, pf=0x1, revision=0x70b


[root@squid ~]# cat /etc/redhat-release
CentOS release 6.5 (Final)
[root@squid ~]# uname -a
Linux squid.ps 3.7.5 #1 SMP Tue Feb 11 21:31:21 EET 2014 x86_64 x86_64
x86_64 GNU/Linux
[root@squid ~]#
==
[root@squid ~]# cat /etc/squid/squid.conf


#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines
acl mysubnet src 
acl eliezer sr

Re: [squid-users] Squid selinux audit review needed.


Hi Eliezer,

I'm pretty far from understanding selinux, but I have two suggestions 
for you:

1) The sealert tool can be used to get human-readable output, e.g.

sealert -a /var/log/audit/audit.log > /path/to/mylogfile.txt

2) If you just want to start squid again and do not care about the reasons 
for the problem, you can just follow 
http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01
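
If you'd rather fix it properly, the usual route is to build a local policy
module from the recorded denials (standard policycoreutils tooling; the
module name below is arbitrary):

# grep squid /var/log/audit/audit.log | audit2allow -M squidlocal
# semodule -i squidlocal.pp

Or, as a blunt temporary check, 'setenforce 0' puts SELinux into permissive
mode so you can confirm it really is the blocker.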



Hope this will be helpful for you.

Best wishes,
Pavel

On 03/10/2014 04:34 PM, Eliezer Croitoru wrote:
Since I am not a selinux expert, and I am looking at a couple of issues, I am 
not sure what the issue is.
I have a glusterfs squid machine as a client, and then I restarted the 
squid instance.

All of a sudden I got a "Permission Denied(13)" in the logs.
I took an audit.log output for the time of the server restart.
Please take a look at it.
Could it be related to fusefs?

##START
tail /var/log/audit/audit.log -f
type=AVC msg=audit(1394456998.422:4293): avc:  denied  { search } for 
pid=17578 comm="squid" name="/" dev="fuse" ino=1 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=SYSCALL msg=audit(1394456998.422:4293): arch=c03e syscall=59 
success=no exit=-13 a0=7fffeb15b980 a1=7fffeb1598e0 a2=7fffeb15bce8 
a3=376e018240 items=0 ppid=17577 pid=17578 auid=0 uid=23 gid=23 euid=0 
suid=0 fsuid=0 egid=23 sgid=23 fsgid=23 ses=388 tty=(none) 
comm="squid" exe="/usr/sbin/squid" 
subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394456998.470:4294): avc:  denied  { getattr } for 
pid=17583 comm="squid" path="/mnt/gluster" dev="fuse" ino=1 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=SYSCALL msg=audit(1394456998.470:4294): arch=c03e syscall=4 
success=no exit=-13 a0=254d830 a1=7fff24caccf0 a2=7fff24caccf0 a3=0 
items=0 ppid=17577 pid=17583 auid=0 uid=23 gid=23 euid=23 suid=0 
fsuid=23 egid=23 sgid=23 fsgid=23 ses=388 tty=(none) comm="squid" 
exe="/usr/sbin/squid" subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394456998.509:4295): avc:  denied  { search } for 
pid=17582 comm="squid" name="/" dev="fuse" ino=1 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=SYSCALL msg=audit(1394456998.509:4295): arch=c03e syscall=2 
success=no exit=-13 a0=1bc4d30 a1=2 a2=1a4 a3=1 items=0 ppid=17577 
pid=17582 auid=0 uid=23 gid=23 euid=23 suid=0 fsuid=23 egid=23 sgid=23 
fsgid=23 ses=388 tty=(none) comm="squid" exe="/usr/sbin/squid" 
subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394456998.591:4296): avc:  denied  { create } for 
pid=17579 comm="squid" name="coordinator.ipc" 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1394456998.591:4296): arch=c03e syscall=49 
success=no exit=-13 a0=a a1=254f9ac a2=20 a3=98 items=0 ppid=17577 
pid=17579 auid=0 uid=23 gid=23 euid=23 suid=0 fsuid=23 egid=23 sgid=23 
fsgid=23 ses=388 tty=(none) comm="squid" exe="/usr/sbin/squid" 
subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394456998.611:4297): avc:  denied  { search } for 
pid=17580 comm="squid" name="/" dev="fuse" ino=1 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=SYSCALL msg=audit(1394456998.611:4297): arch=c03e syscall=2 
success=no exit=-13 a0=1375d30 a1=2 a2=1a4 a3=1 items=0 ppid=17577 
pid=17580 auid=0 uid=23 gid=23 euid=23 suid=0 fsuid=23 egid=23 sgid=23 
fsgid=23 ses=388 tty=(none) comm="squid" exe="/usr/sbin/squid" 
subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394456998.625:4298): avc:  denied  { create } for 
pid=17582 comm="squid" name="kid-2.ipc" 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1394456998.625:4298): arch=c03e syscall=49 
success=no exit=-13 a0=a a1=1ff4f0c a2=1a a3=98 items=0 ppid=17577 
pid=17582 auid=0 uid=23 gid=23 euid=23 suid=0 fsuid=23 egid=23 sgid=23 
fsgid=23 ses=388 tty=(none) comm="squid" exe="/usr/sbin/squid" 
subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394456998.675:4299): avc:  denied  { create } for 
pid=17580 comm="squid" name="kid-3.ipc" 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1394456998.675:4299): arch=c03e syscall=49 
success=no exit=-13 a0=a a1=17a5f0c a2=1a a3=98 items=0 ppid=17577 
pid=17580 auid=0 uid=23 gid=23 euid=23 suid=0 fsuid=23 egid=23 sgid=23 
fsgid=23 ses=388 tty=(none) comm="squid" exe="/usr/sbin/squid" 
subj=unconfined_u:system_r:squid_t:s0 key=(null)
type=AVC msg=audit(1394457000.930:4300): avc:  denied  { search } for 
pid=17589 comm="squid" name="/" dev="fuse" ino=1 
scontext=unconfined_u:system_r:squid_t:s0 
tcontext=system_u:object_r:fusefs_t:s0 tclass=dir
type=SYSCALL ms

Re: [squid-users] Re: Acl.cc(26) AuthenticateAcl: authentication not applicable on intercepted requests.


No.

On 05/18/2014 12:29 PM, anly.zhang wrote:

Hi!
I found that it returned to normal after cancelling the transparent proxy
settings, such as changing "http_port 3128 transparent" to "http_port 3128".

So it isn't applicable with a transparent proxy in squid 3.1.10.

Is transparent mode together with squid_ldap_group auth applicable in the
latest squid version?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Acl-cc-26-AuthenticateAcl-authentication-not-applicable-on-intercepted-requests-tp4665989p4665996.html
Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] Why not cached ?


Hi babajaga,

You can add 'debug_options 20,9 27,9 31,9 70,9 82,9 22,9 84,9 90,9' to 
your squid config to debug caching issues.
Search through the log for strings that contain 'NO' (in uppercase). These 
strings should explain why squid decided not to cache the http response.
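
With your config, that would look roughly like this (the cache.log path is
taken from the config you posted):

debug_options 20,9 27,9 31,9 70,9 82,9 22,9 84,9 90,9

then reproduce the MISS and search:

grep ' NO' /tmp/var/log/squid/cache.log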


Best wishes,
Pavel

On 05/27/2014 06:17 PM, babajaga wrote:

I was wondering about very few HITs in this squid installation, and did some
checking:

access.log:
1401203150.334   1604 10.1.10.121 TCP_MISS/200 718707 GET
http://l5.yimg.com/av/moneyball/ads/0-1399331780-5313.jpg -
ORIGINAL_DST/66.196.65.174 image/jpeg
1401203186.100   1327 10.1.10.121 TCP_MISS/200 718707 GET
http://l5.yimg.com/av/moneyball/ads/0-1399331780-5313.jpg -
ORIGINAL_DST/66.196.65.174 image/jpeg

cache.log:
2014/05/27 14:52:12 kid1| Starting Squid Cache version 3.4.5-20140514-r13135
for i686-pc-linux-gnu...
2014/05/27 14:52:12 kid1| Process ID 7477
2014/05/27 14:52:12 kid1| Process Roles: worker
2014/05/27 14:52:12 kid1| With 1024 file descriptors available
2014/05/27 14:52:12 kid1| Initializing IP Cache...
2014/05/27 14:52:12 kid1| DNS Socket created at [::], FD 7
2014/05/27 14:52:12 kid1| DNS Socket created at 0.0.0.0, FD 8
2014/05/27 14:52:12 kid1| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2014/05/27 14:52:12 kid1| Logfile: opening log
daemon:/tmp/var/log/squid/access.log
2014/05/27 14:52:12 kid1| Logfile Daemon: opening log
/tmp/var/log/squid/access.log
2014/05/27 14:52:12 kid1| Logfile: opening log
daemon:/tmp/var/log/squid/store.log
2014/05/27 14:52:12 kid1| Logfile Daemon: opening log
/tmp/var/log/squid/store.log
2014/05/27 14:52:12 kid1| Swap maxSize 0 + 2097152 KB, estimated 161319
objects
2014/05/27 14:52:12 kid1| Target number of buckets: 8065
2014/05/27 14:52:12 kid1| Using 8192 Store buckets
2014/05/27 14:52:12 kid1| Max Mem  size: 2097152 KB
2014/05/27 14:52:12 kid1| Max Swap size: 0 KB
2014/05/27 14:52:12 kid1| Using Least Load store dir selection
2014/05/27 14:52:12 kid1| Set Current Directory to /tmp
2014/05/27 14:52:12 kid1| Finished loading MIME types and icons.
2014/05/27 14:52:12 kid1| HTCP Disabled.
2014/05/27 14:52:12 kid1| Squid plugin modules loaded: 0
2014/05/27 14:52:12 kid1| Accepting HTTP Socket connections at
local=10.1.10.1:3129 remote=[::] FD 13 flags=9
2014/05/27 14:52:12 kid1| Accepting NAT intercepted HTTP Socket connections
at local=10.1.10.1:3128 remote=[::] FD 14 flags=41
2014/05/27 14:52:13 kid1| storeLateRelease: released 0 objects

squid.conf:
root@voyage:/usr/local/squid/etc# vi squid.conf
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 10.1.10.1:3129
http_port 10.1.10.1:3128 intercept
cache_mem 2048 MB
memory_cache_mode always
access_log daemon:/tmp/var/log/squid/access.log squid
cache_store_log daemon:/tmp/var/log/squid/store.log squid
logfile_rotate 3
pid_filename /var/run/squid.pid
cache_log /tmp/var/log/squid/cache.log
coredump_dir /tmp
refresh_pattern ^ftp:   1440 20% 10080
refresh_pattern ^gopher: 1440 0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
shutdown_lifetime 10 seconds


This squid is running on a scaled-down debian, no HDD, with a mobile internet
connection. So /tmp is in fact a RAM disk, and a good hit rate is very welcome.
The example above should be cacheable, should it not? Squid was accessed on
port 3128, intercept.






--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-not-cached-tp4666117.html
Sent from the Squid - Users mailing list archive at Nabble.com.