Re: [squid-users] Re: squid3 block all 443 ports request

2014-02-17 Thread Sachin Divekar
On Fri, Feb 14, 2014 at 9:39 PM, khadmin khalil.bens...@hotmail.com wrote:
 Hi,
 -For the client 192.168.1.53 I configured the browser not to use the proxy
 and it fetches the www.google.com web site.
 -For the local machine (the server where squid is installed): without the
 proxy I can fetch www.google.com; with the proxy configured on 127.0.0.1 I
 get these messages in the access.log file:
 1392393591.247  38412 127.0.0.1 TCP_MISS_ABORTED/000 0 GET
 http://www.google-analytics.com/__utm.gif? -
 HIER_DIRECT/2a00:1450:4002:804::1006 -
 1392393632.774  40544 127.0.0.1 TCP_MISS_ABORTED/000 0 GET
 http://googleads.g.doubleclick.net/pagead/ads? -
 HIER_DIRECT/2a00:1450:4006:802::100d -
 1392392594.681  59856 127.0.0.1 TCP_MISS/503 0 CONNECT www.google.tn:443 -
 HIER_NONE/- -


If it is not critical, just disable IPv6 temporarily and check whether
yahoo, google, etc. work. Most probably this is a case of non-working IPv6
infrastructure: note the IPv6 destinations (HIER_DIRECT/2a00:...) in the log above.
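If disabling IPv6 on every client is impractical, Squid itself can be told to try IPv4 destinations first. A minimal squid.conf sketch (the dns_v4_first directive is available from Squid 3.2 onward; this is a workaround, not a fix for the broken IPv6 path):

```
# Prefer IPv4 answers when a destination host has both A and AAAA records
dns_v4_first on
```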

Regards,
Sachin Divekar


Re: [squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-17 Thread Rajiv Desai
FWIW, from debug logs in cache.log, it seems like PUT responses are
being cached.
I am fairly new to using squid, so I may be completely misreading these;
just trying to understand caching.
So are PUT responses cached by design? Or am I completely missing
something simple here? :)
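For what it's worth, caching of a particular method can be refused explicitly in squid.conf. This is only a sketch of a workaround (the ACL name is made up), not an explanation of the log entries below:

```
# Hypothetical ACL; tell Squid never to cache replies to PUT requests
acl put_requests method PUT
cache deny put_requests
```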


logs
2014/02/17 00:06:54.977 kid1| store_dir.cc(1149) get: storeGet:
looking up AC671962CFC5644F4B22DA51C242DA50
2014/02/17 00:06:54.977 kid1| StoreMap.cc(293) openForReading: opening
entry with key AC671962CFC5644F4B22DA51C242DA50 for reading
/mnt/squid-cache_inodes
2014/02/17 00:06:54.977 kid1| StoreMap.cc(309) openForReadingAt:
opening entry 14877 for reading /mnt/squid-cache_inodes
2014/02/17 00:06:54.977 kid1| StoreMap.cc(322) openForReadingAt:
cannot open empty entry 14877 for reading /mnt/squid-cache_inodes
2014/02/17 00:06:54.977 kid1| store_dir.cc(820) find: none of 1
cache_dirs have AC671962CFC5644F4B22DA51C242DA50
2014/02/17 00:06:54.977 kid1| client_side_reply.cc(1626)
identifyFoundObject: StoreEntry is NULL -  MISS
2014/02/17 00:06:54.977 kid1| client_side_reply.cc(622) processMiss:
clientProcessMiss: 'PUT
https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104'
2014/02/17 00:06:54.977 kid1| store.cc(803) storeCreatePureEntry:
storeCreateEntry:
'https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104'
2014/02/17 00:06:54.977 kid1| store.cc(395) StoreEntry: StoreEntry
constructed, this=0x168f990
/logs

... and later

logs
2014/02/17 00:06:55.127 kid1| store_dir.cc(820) find: none of 1
cache_dirs have DBA199D500F44928560537BB0CAB0908
2014/02/17 00:06:55.127 kid1| refresh.cc(319) refreshCheck: checking
freshness of 
'https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104'
2014/02/17 00:06:55.127 kid1| refresh.cc(340) refreshCheck: Matched '.
7776000 100%% 7776000'
2014/02/17 00:06:55.127 kid1| refresh.cc(342) refreshCheck: age:60
2014/02/17 00:06:55.127 kid1| refresh.cc(344) refreshCheck:
check_time: Mon, 17 Feb 2014 08:07:55 GMT
2014/02/17 00:06:55.127 kid1| refresh.cc(346) refreshCheck:
entry-timestamp:   Mon, 17 Feb 2014 08:06:55 GMT
2014/02/17 00:06:55.127 kid1| refresh.cc(202) refreshStaleness: No
explicit expiry given, using heuristics to determine freshness
2014/02/17 00:06:55.128 kid1| refresh.cc(240) refreshStaleness: FRESH:
age (60 sec) is less than configured minimum (7776000 sec)
2014/02/17 00:06:55.128 kid1| refresh.cc(366) refreshCheck: Staleness = -1
2014/02/17 00:06:55.128 kid1| refresh.cc(486) refreshCheck: Object isn't stale..
2014/02/17 00:06:55.128 kid1| refresh.cc(501) refreshCheck: returning
FRESH_MIN_RULE
2014/02/17 00:06:55.128 kid1| http.cc(491) cacheableReply: YES because
HTTP status 200
2014/02/17 00:06:55.128 kid1| HttpRequest.cc(696) storeId: sent back
canonicalUrl:https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104
2014/02/17 00:06:55.128 kid1| store.cc(472) hashInsert:
StoreEntry::hashInsert: Inserting Entry e:=p2DV/0x168f990*3 key
'AC671962CFC5644F4B22DA51C242DA50'
2014/02/17 00:06:55.128 kid1| ctx: exit level  0
2014/02/17 00:06:55.128 kid1| store.cc(858) write: storeWrite: writing
17 bytes for 'AC671962CFC5644F4B22DA51C242DA50'
/logs

On Sun, Feb 16, 2014 at 11:07 PM, Rajiv Desai ra...@maginatics.com wrote:
 What is the authoritative source of cache statistics? The slots
 occupied due to PUT requests (as suggested by the mgr:storedir stats) are
 quite concerning.
 Is there some additional config that needs to be added to ensure that
 PUTs simply bypass the cache?

 NOTE: fwiw, I have verified that subsequent GETs for the same objects
 after PUTs do get a cache MISS.

 On Sun, Feb 16, 2014 at 3:45 PM, Rajiv Desai ra...@maginatics.com wrote:
 On Sun, Feb 16, 2014 at 3:39 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 17/02/2014 11:41 a.m., Rajiv Desai wrote:
 I am using Squid Cache:
 Version 3.HEAD-20140127-r13248

 My cache dir is configured to use rock (Large rock with SMP):
 cache_dir rock /mnt/squid-cache 256000 max-size=4194304

 My refresh pattern is permissive to cache all objects:
 refresh_pattern . 129600 100% 129600 ignore-auth

 I uploaded 30 GB of data via squid cache with PUT requests.
 From the storedir stats (squidclient mgr:storedir) it seems like each PUT
 is occupying 1 slot in the rock cache.

 Is this a known bug? PUT requests should not increase cache usage, right?


 Stats:

 by kid9 {

 Store Directory Statistics:

 Store Entries  : 53



 How many objects in that 30GB of PUT requests?

 That 53 looks more like the icons loaded by Squid for use in error pages
 and ftp:// directory listings.


 572557 objects were uploaded with PUT requests.
 I was looking at current size and used slots to interpret current
 cache occupancy. Perhaps I am interpreting these incorrectly?

 Current Size: 8960416.00 KB 4.27%
 Current entries:560025 4.27%
 Used slots: 560025 4.27%

 Amos



Re: [squid-users] hier_code acl and cache allow/deny

2014-02-17 Thread Nikolai Gorchilov
Dear Amos,

On Sat, Feb 15, 2014 at 3:12 PM, Nikolai Gorchilov n...@x3me.net wrote:
 On Sat, Feb 15, 2014 at 1:46 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 I'm trying to avoid the following scenario (excerpt from store.log):

 1392406208.398 SWAPOUT 00  8C2B9C51268EFEEDEB33FB9EC53030A1
 200 1392406217 1382373187 1394998217 image/jpeg 21130/21130 GET
 http://www.gnu.org/graphics/t-desktop-4-small.jpg
 1392406242.459 SWAPOUT 00  8C2B9C51268EFEEDEB33FB9EC53030A1
 200 1392406217 1382373187 1394998217 image/jpeg 21130/21130 GET
 http://www.gnu.org/graphics/t-desktop-4-small.jpg

 First request was served by kid1. It fetched the object via
 HIER_DIRECT, memory cached it, and stored it to its own storage (say
 /S1).
 Seconds later, the same request arrived at kid2. It retrieved the
 object from shared memory (hierarchy code NONE), then swapped it out to
 its own storage (say /S2).

 The question is how to prevent kid2 from saving the duplicate object.
 Is there any mechanism other than switching memory_cache_shared off?

Can you recommend a solution for the above-mentioned case?

Best,
Niki


Re: [squid-users] hier_code acl and cache allow/deny

2014-02-17 Thread Amos Jeffries
On 17/02/2014 10:27 p.m., Nikolai Gorchilov wrote:
 Dear Amos,
 
 On Sat, Feb 15, 2014 at 3:12 PM, Nikolai Gorchilov n...@x3me.net wrote:
 On Sat, Feb 15, 2014 at 1:46 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 I'm trying to avoid the following scenario (excerpt from store.log):

 1392406208.398 SWAPOUT 00  8C2B9C51268EFEEDEB33FB9EC53030A1
 200 1392406217 1382373187 1394998217 image/jpeg 21130/21130 GET
 http://www.gnu.org/graphics/t-desktop-4-small.jpg
 1392406242.459 SWAPOUT 00  8C2B9C51268EFEEDEB33FB9EC53030A1
 200 1392406217 1382373187 1394998217 image/jpeg 21130/21130 GET
 http://www.gnu.org/graphics/t-desktop-4-small.jpg

 First request was served by kid1. It fetched the object via
 HIER_DIRECT, memory cached it, and stored it to its own storage (say
 /S1).
 Seconds later, the same request arrived at kid2. It retrieved the
 object from shared memory (hierarchy code NONE), then swapped it out to
 its own storage (say /S2).

 The question is how to prevent kid2 from saving the duplicate object.
 Is there any mechanism other than switching memory_cache_shared off?
 
 Can you recommend a solution for the above-mentioned case?

Not at this point. Alex is the one to talk to about this.

Amos


[squid-users] negative values in mgr:info

2014-02-17 Thread Niki Gorchilov
Hello,

While using Squid 3.4.3 on 64-bit Ubuntu 12.04.3 with 64GB cache_mem,
I see negative values in some memory-related statistics:

===[cut]===
Memory usage for squid via mallinfo():
Total space in arena:  -972092 KB
Ordinary blocks:   -974454 KB   4472 blks
Small blocks:   0 KB  0 blks
Holding blocks:740328 KB 19 blks
Free Small blocks:  0 KB
Free Ordinary blocks:2362 KB
Total in use:2362 KB -1%
Total free:  2362 KB -1%
Total size:-231764 KB
Memory accounted for:
Total accounted:   1834725 KB -792%
memPool accounted: 77332197 KB -33367%
memPool unaccounted:   -77563961 KB  -0%
memPoolAlloc calls: 13874752170
memPoolFree calls:  13959152640
===[cut]===

Are these the result of integer overflow, or of using signed integers?

Best,
Niki


Re: [squid-users] negative values in mgr:info

2014-02-17 Thread Kinkie
On Mon, Feb 17, 2014 at 11:15 AM, Niki Gorchilov n...@gorchilov.com wrote:
 Hello,

 While using Squid 3.4.3 on 64-bit Ubuntu 12.04.3 with 64GB cache_mem,
 I see negative values in some memory-related statistics:

 ===[cut]===
 Memory usage for squid via mallinfo():
 Total space in arena:  -972092 KB
 Ordinary blocks:   -974454 KB   4472 blks
 Small blocks:   0 KB  0 blks
 Holding blocks:740328 KB 19 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:2362 KB
 Total in use:2362 KB -1%
 Total free:  2362 KB -1%
 Total size:-231764 KB
 Memory accounted for:
 Total accounted:   1834725 KB -792%
 memPool accounted: 77332197 KB -33367%
 memPool unaccounted:   -77563961 KB  -0%
 memPoolAlloc calls: 13874752170
 memPoolFree calls:  13959152640
 ===[cut]===

 Are these the result of integer overflow, or of using signed integers?

The former.
The OS API we rely on to collect those uses 32-bit signed integers.
There's nothing we can do about it, I'm sorry :(


-- 
Kinkie


Re: [squid-users] negative values in mgr:info

2014-02-17 Thread Amos Jeffries
On 17/02/2014 11:15 p.m., Niki Gorchilov wrote:
 Hello,
 
 While using Squid 3.4.3 on 64-bit Ubuntu 12.04.3 with 64GB cache_mem,
 I see negative values in some memory-related statistics:
 
 ===[cut]===
 Memory usage for squid via mallinfo():
 Total space in arena:  -972092 KB
 Ordinary blocks:   -974454 KB   4472 blks
 Small blocks:   0 KB  0 blks
 Holding blocks:740328 KB 19 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:2362 KB
 Total in use:2362 KB -1%
 Total free:  2362 KB -1%
 Total size:-231764 KB
 Memory accounted for:
 Total accounted:   1834725 KB -792%
 memPool accounted: 77332197 KB -33367%
 memPool unaccounted:   -77563961 KB  -0%
 memPoolAlloc calls: 13874752170
 memPoolFree calls:  13959152640
 ===[cut]===
 
 Are these the result of integer overflow, or of using signed integers?

32-bit overflows in the mallinfo() internals and/or data structures.
These are well-known old flaws in mallinfo().

On 64-bit Squid you can disregard the memory sections of the report. The
only reliable displays are the call counters and the memPool
accounted size value. The display of everything else is corrupted in some
way by mallinfo().

Amos


[squid-users] squidguard on special port

2014-02-17 Thread Grooz, Marc (regio iT)
Hi Squid Usergroup,

I want a redirector like squidGuard to be consulted only when a client connects
on port 3128; on port 8080 the request should be passed through without
rewriting. Is that possible with squid?

Kind regards

Marc


Re: [squid-users] squidguard on special port

2014-02-17 Thread Nikolai Gorchilov
Hi, Marc,

Yes, it is possible. RTFM about myport/myportname ACL at
http://www.squid-cache.org/Doc/config/acl/

Best,
Niki

On Mon, Feb 17, 2014 at 12:48 PM, Grooz, Marc (regio iT)
marc.gr...@regioit.de wrote:
 Hi Squid Usergroup,

 I want a redirector like squidGuard to be consulted only when a client
 connects on port 3128; on port 8080 the request should be passed through
 without rewriting. Is that possible with squid?

 Kind regards

 Marc


AW: [squid-users] squidguard on special port

2014-02-17 Thread Grooz, Marc (regio iT)
My suggestion was:

http_port 3128 name=squidguard

url_rewrite_access allow squidguard
url_rewrite_access deny all

or

http_port 8080 name=unfiltred

url_rewrite_access allow !unfiltred

Is that right?

-----Original Message-----
From: n...@gorchilov.com [mailto:n...@gorchilov.com] On behalf of Nikolai
Gorchilov
Sent: Monday, 17 February 2014 12:27
To: Grooz, Marc (regio iT)
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] squidguard on special port

Hi, Marc,

Yes, it is possible. RTFM about myport/myportname ACL at 
http://www.squid-cache.org/Doc/config/acl/

Best,
Niki

On Mon, Feb 17, 2014 at 12:48 PM, Grooz, Marc (regio iT) 
marc.gr...@regioit.de wrote:
 Hi Squid Usergroup,

 I want a redirector like squidGuard to be consulted only when a client
 connects on port 3128; on port 8080 the request should be passed through
 without rewriting. Is that possible with squid?

 Kind regards

 Marc


Re: [squid-users] squidguard on special port

2014-02-17 Thread Nikolai Gorchilov
You haven't defined the myportname ACLs. Corrections are embedded below.

On Mon, Feb 17, 2014 at 1:34 PM, Grooz, Marc (regio iT)
marc.gr...@regioit.de wrote:

 http_port 3128 name=squidguard

acl squidguard myportname squidguard

 url_rewrite_access allow squidguard
 url_rewrite_access deny all

 or

 http_port 8080 name=unfiltred

acl unfiltred myportname unfiltred

 url_rewrite_access allow !unfiltred

url_rewrite_access deny all
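Put together, the corrected configuration from this exchange would look roughly
like the sketch below (the squidGuard helper path is an assumption; with only
these two ports, the allow rule plus deny all already leaves port 8080
unfiltered, so the unfiltred ACL is optional):

```
http_port 3128 name=squidguard
http_port 8080 name=unfiltred

acl squidguard myportname squidguard
acl unfiltred myportname unfiltred

url_rewrite_program /usr/bin/squidGuard
url_rewrite_access allow squidguard
url_rewrite_access deny all
```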


[squid-users] Squid transparent proxy with one nic access denied problem.

2014-02-17 Thread Spyros Vlachos
Hello! Thank you in advance for your help.
I have a fairly simple home network setup.
I have a modem (192.168.2.254) that connects to the internet.
Connected to that modem through its own wan port
I have an openwrt router (192.168.1.1). My internal network is the
192.168.1.0/24 one. On the router I have connected
an ubuntu 13.10 box (192.168.1.20) that acts as a squid proxy and dns
among other things. The ubuntu box has one network card.
I had successfully installed a transparent squid proxy by using DNAT
and SNAT on the router with the 12.04 version of Ubuntu.
Because of some problems with my UPS I tried installing Ubuntu 13.10,
which solved the UPS problem but also
upgraded the squid package from 3.1.something to 3.3.8. My squid
configuration is as follows:

#--Squid server 192.168.1.20---
acl localnet src 192.168.1.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl squid-prime dstdomain /etc/squid3/squid-prime.acl
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access deny squid-prime
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128  #HAVE tried transparent and intercept but the problem persists
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320
dns_nameservers 8.8.8.8 # have tried the local dns 127.0.0.1 as well, but
the problem is the same
#---

I have tried disabling the Ubuntu DNS server because I have heard
of some problems it can cause for squid.

My router (192.168.1.1) SNAT DNAT configuration is (openwrt luci gui)
1) MATCH: From IP not 192.168.1.20 in lan Via any router IP at port 80
FORWARD TO: IP 192.168.1.20, port 3128 in lan
2)MATCH: From any host in lan To IP 192.168.1.20, port 3128 in lan
Rewrite to source IP 192.168.1.1
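For reference, the two LuCI rules above correspond roughly to the following
iptables rules on the router (a sketch; the br-lan interface name is an
assumption about the OpenWrt setup):

```
# 1) Redirect LAN port-80 traffic (except from the proxy itself) to squid
iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.20 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.1.20:3128
# 2) Rewrite the source so reply packets return via the router
iptables -t nat -A POSTROUTING -o br-lan -d 192.168.1.20 -p tcp --dport 3128 \
    -j SNAT --to-source 192.168.1.1
```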

The error I get with the above configuration is a constant Access
Denied error in the browser; the corresponding entries in the
squid access log are:
#-
92  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590851.593  1 192.168.1.1 TCP_MISS/403 4193 GET
http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
text/html
1392590856.653  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590856.653  1 192.168.1.1 TCP_MISS/403 4193 GET
http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
text/html
1392590861.742  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590861.742  1 192.168.1.1 TCP_MISS/403 4193 GET
http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
text/html
1392590866.878  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590866.878 26 192.168.1.1 TCP_MISS/403 4193 GET
http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
text/html
1392590871.903  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590871.903  1 192.168.1.1 TCP_MISS/403 4193 GET
http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
text/html
1392590876.893  0 192.168.1.20 TCP_MISS/403 3985 GET
http://notify7.dropbox.com/subscribe? - HIER_NONE/- text/html
1392590876.893  1 192.168.1.1 TCP_MISS/403 4090 GET
http://notify7.dropbox.com/subscribe? - HIER_DIRECT/192.168.1.20
text/html
1392590876.992  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590876.993  1 192.168.1.1 TCP_MISS/403 4193 GET
http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
text/html
1392590878.600  0 192.168.1.20 TCP_MISS/403 4390 POST
http://safebrowsing.clients.google.com/safebrowsing/downloads? -
HIER_NONE/- text/html
1392590878.601 26 192.168.1.1 TCP_MISS/403 4495 POST
http://safebrowsing.clients.google.com/safebrowsing/downloads? -
HIER_DIRECT/192.168.1.20 text/html
1392590882.093  0 192.168.1.20 TCP_MISS/403 4088 GET
http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
1392590882.093  1 

[squid-users] Upgrade to 3.4.3 and TCP Connections to parent failing more often

2014-02-17 Thread Paul Carew
Hi

I have recently upgraded our Squid servers from 3.3.11 to 3.4.3 and am
seeing the following error every few minutes in the cache log.

2014/02/17 13:43:02 kid1| TCP connection to wwwproxy02.domain.local/8080 failed

I have 2 servers configured on the LAN which handle connections over a
private WAN and 2 other servers on another WAN connected to the
internet. The first 2 servers use the second pair of servers connected
to the internet as a parent with the following lines in squid.conf:

cache_peer wwwproxy01.domain.local parent 8080 0 no-query no-digest carp
cache_peer wwwproxy02.domain.local parent 8080 0 no-query no-digest carp

With 3.3.11 I occasionally got the error, maybe two or three times daily.

Does anyone have any ideas why this might be occurring on 3.4.3 but
not 3.3.11? I've had a look at debug_options but can't see a section
that screams "debug me" for this particular error. Maybe section 11 or
15?
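For reference, several sections can be raised in one debug_options line; a
sketch targeting the code paths most likely involved here (section numbers as
assumed from the Squid debug sections list: 15 = neighbor routines, 44 = peer
selection):

```
# Keep everything else at the default level, raise peer-related sections
debug_options ALL,1 15,5 44,5
```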

Many Thanks

Paul


Re: [squid-users] squid 3.4.3 on Solaris Sparc

2014-02-17 Thread Monah Baki
Hi,


I did find /usr/lib/libdb.so but no results for libdb.a


Thanks

On Mon, Feb 17, 2014 at 12:42 AM, Francesco Chemolli gkin...@gmail.com wrote:

 On 17 Feb 2014, at 01:15, Monah Baki monahb...@gmail.com wrote:

 uname -a
 SunOS proxy 5.11 11.1 sun4v sparc SUNW,SPARC-Enterprise-T5220

 Here are the steps before it fails

 ./configure --prefix=/usr/local/squid --enable-async-io
 --enable-cache-digests --enable-underscores --enable-pthreads
 --enable-storeio=ufs,aufs --enable-removal-policies=lru,
 heap

 make

 c -I../../../include   -I/usr/include/gssapi -I/usr/include/kerberosv5
   -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x -MT ext_session_acl.o -MD -MP
 -MF .deps/ext_session_acl.Tpo -c -o ext_session_acl.o
 ext_session_acl.cc
 mv -f .deps/ext_session_acl.Tpo .deps/ext_session_acl.Po
 /bin/sh ../../../libtool --tag=CXX--mode=link g++ -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x   -g -o ext_session_acl
 ext_session_acl.o ../../../compat/libcompat-squid.la
 libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
 -Wshadow -Werror -pipe -D_REENTRANT -pthreads -g -O2 -std=c++0x -g -o
 ext_session_acl ext_session_acl.o
 ../../../compat/.libs/libcompat-squid.a -pthreads
 Undefined   first referenced
 symbol in file
 db_create   ext_session_acl.o
 db_env_create   ext_session_acl.o

 The build system is not able to find the Berkeley db library files (but
 for some reason it can find the header).
 Please check that libdb.a or libdb.so is available and found on the paths
 searched for libraries by your build system.

 Kinkie


[squid-users] Re: squid3 block all 443 ports request

2014-02-17 Thread khadmin
Hi,

I want to thank you all for your efforts. It finally works: I had to disable
the IPv6 protocol on the clients, and now it works perfectly.
Thank you again.

Regards,
Khalil




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid3-block-all-443-ports-request-tp4664735p4664884.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] block domains based on LDAP group and force re-authentication every 30 minutes

2014-02-17 Thread Wim Ramakers
I'm trying to configure squid3 (on a Debian server) to block certain (mostly
social media) websites based on the LDAP (age) group the users are in.
The devices are Apple iPads, Safari is used as the web browser, and apps are
installed with the Mobile Iron multiuser platform. The devices will be shared
among users of multiple groups, so I must FORCE the user to reauthenticate
every 30 minutes.

The problem we have now is that when a user authenticates correctly, the 
credentials never expire. For testing purposes I’ve set the ttl to 1 minute 
now, but after I authenticate a user successfully I never get a new challenge.
My current config:
-
authenticate_ttl 1 minute

auth_param basic program /usr/lib/squid3/squid_ldap_auth -v 3 -b 
dc=mydomain,dc=eu  -f uid=%s -h 10.11.12.13
auth_param basic children 5
auth_param basic realm Web-Proxy
auth_param basic credentialsttl 5 minutes
acl ldap-auth proxy_auth REQUIRED

external_acl_type ldapgroup ttl=60 %LOGIN /usr/lib/squid3/squid_ldap_group -b 
dc=mydomain,dc=eu  -f 
(&(objectClass=inetOrgPerson)(uid=%u)(memberOf=cn=%g,ou=subou,ou=mainou,dc=mydomain,dc=eu))
 -h 10.11.12.13
acl ldapgroup-age9- external ldapgroup leeftijdsgroep_tot_9_jaar
acl ldapgroup-age12- external ldapgroup leeftijdsgroep_tot_12_jaar
acl ldapgroup-age13- external ldapgroup leeftijdsgroep_tot_13_jaar
acl ldapgroup-age18- external ldapgroup leeftijdsgroep_tot_18_jaar
acl ldapgroup-age18+ external ldapgroup standaard_leeftijdsgroep

acl facebook dstdomain .facebook.com
# Deny access to facebook if not in 18+ or 18- (=16-18)group
http_access deny facebook !ldapgroup-age18+ !ldapgroup-age18- !ldap-auth
——

I've also tried other http_access allow/deny rules, following different
tutorials I found online, but that did not change anything.
Can anyone spot the problem in my config, or is it just the iPad caching
the correct credentials and automatically reusing them on subsequent
challenges? If it is a caching issue, what other options do I have to force
the user to enter his credentials again after a fixed period of time?

Thanks in advance for your help.

Re: [squid-users] block domains based on LDAP group and force re-authentication every 30 minutes

2014-02-17 Thread Scott Mayo
On Mon, Feb 17, 2014 at 9:45 AM, Wim Ramakers wim.ramak...@lucine-os.be wrote:
 I'm trying to configure squid3 (on a Debian server) to block certain (mostly
 social media) websites based on the LDAP (age) group the users are in.
 The devices are Apple iPads, Safari is used as the web browser, and apps are
 installed with the Mobile Iron multiuser platform. The devices will be shared
 among users of multiple groups, so I must FORCE the user to reauthenticate
 every 30 minutes.

 The problem we have now is that when a user authenticates correctly, the 
 credentials never expire. For testing purposes I’ve set the ttl to 1 minute 
 now, but after I authenticate a user successfully I never get a new challenge.
 My current config:
 -
 authenticate_ttl 1 minute

 auth_param basic program /usr/lib/squid3/squid_ldap_auth -v 3 -b 
 dc=mydomain,dc=eu  -f uid=%s -h 10.11.12.13
 auth_param basic children 5
 auth_param basic realm Web-Proxy
 auth_param basic credentialsttl 5 minutes
 acl ldap-auth proxy_auth REQUIRED

 external_acl_type ldapgroup ttl=60 %LOGIN /usr/lib/squid3/squid_ldap_group -b 
 dc=mydomain,dc=eu  -f 
 (&(objectClass=inetOrgPerson)(uid=%u)(memberOf=cn=%g,ou=subou,ou=mainou,dc=mydomain,dc=eu))
  -h 10.11.12.13
 acl ldapgroup-age9- external ldapgroup leeftijdsgroep_tot_9_jaar
 acl ldapgroup-age12- external ldapgroup leeftijdsgroep_tot_12_jaar
 acl ldapgroup-age13- external ldapgroup leeftijdsgroep_tot_13_jaar
 acl ldapgroup-age18- external ldapgroup leeftijdsgroep_tot_18_jaar
 acl ldapgroup-age18+ external ldapgroup standaard_leeftijdsgroep

 acl facebook dstdomain .facebook.com
 # Deny access to facebook if not in 18+ or 18- (=16-18)group
 http_access deny facebook !ldapgroup-age18+ !ldapgroup-age18- !ldap-auth
 ——

 I've also tried other http_access allow/deny rules, following different
 tutorials I found online, but that did not change anything.
 Can anyone spot the problem in my config, or is it just the iPad caching
 the correct credentials and automatically reusing them on subsequent
 challenges? If it is a caching issue, what other options do I have to force
 the user to enter his credentials again after a fixed period of time?

 Thanks in advance for your help.

I will say that I don't know a lot about the different parts of Squid, so
I'm not sure about this, but could it have something to do with
authenticate_cache_garbage_interval? The default is an hour.
(http://www.squid-cache.org/Versions/v3/3.1/cfgman/authenticate_cache_garbage_interval.html)

I don't know whether the authentication hangs around if it is greater than
the ttl or not. Just a suggestion, and I am guessing others will have
a better answer than me.

-- 
Scott Mayo
Mayo's Pioneer Seeds   PH: 573-568-3235   CE: 573-614-2138


Re: [squid-users] block domains based on LDAP group and force re-authentication every 30 minutes

2014-02-17 Thread Wim Ramakers
I forgot to paste the line in the first post: I've set
authenticate_cache_garbage_interval 5 minutes.

Even after an hour I stayed authenticated, so I've also changed it to a
lower value.


Wim

Re: [squid-users] block domains based on LDAP group and force re-authentication every 30 minutes

2014-02-17 Thread Scott Mayo
On Mon, Feb 17, 2014 at 10:39 AM, Wim Ramakers
wim.ramak...@lucine-os.be wrote:
 I forgot to paste the line in the first post: I've set
 authenticate_cache_garbage_interval 5 minutes.

 Even after an hour I stayed authenticated, so I've also changed it to a
 lower value.


I am curious about this too, then. I wonder if it is the browser. Is
there a setting for how often a browser asks for authentication?
My assumption would be that the browser asks Squid for authentication.
Once it has authenticated with your LDAP, it will not have to
authenticate again until the browser asks again. I may be totally
wrong though.

-- 
Scott Mayo
Mayo's Pioneer Seeds   PH: 573-568-3235   CE: 573-614-2138


Re: [squid-users] squid 3.4.3 on Solaris Sparc

2014-02-17 Thread Kinkie
That should be enough.
Check (you can use the nm -s tool) that libdb.so contains the
symbols db_create and db_env_create. It may be that the file is
corrupted, a wrong version, or a stub.
Alternatively, if you don't need the session helper, use squid's
configure flags to skip building it.

On Mon, Feb 17, 2014 at 4:23 PM, Monah Baki monahb...@gmail.com wrote:
 Hi,


 I did find /usr/lib/libdb.so but no results for libdb.a


 Thanks

 On Mon, Feb 17, 2014 at 12:42 AM, Francesco Chemolli gkin...@gmail.com 
 wrote:

 On 17 Feb 2014, at 01:15, Monah Baki monahb...@gmail.com wrote:

 uname -a
 SunOS proxy 5.11 11.1 sun4v sparc SUNW,SPARC-Enterprise-T5220

 Here are the steps before it fails

 ./configure --prefix=/usr/local/squid --enable-async-io
 --enable-cache-digests --enable-underscores --enable-pthreads
 --enable-storeio=ufs,aufs --enable-removal-policies=lru,
 heap

 make

 c -I../../../include   -I/usr/include/gssapi -I/usr/include/kerberosv5
   -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x -MT ext_session_acl.o -MD -MP
 -MF .deps/ext_session_acl.Tpo -c -o ext_session_acl.o
 ext_session_acl.cc
 mv -f .deps/ext_session_acl.Tpo .deps/ext_session_acl.Po
 /bin/sh ../../../libtool --tag=CXX--mode=link g++ -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x   -g -o ext_session_acl
 ext_session_acl.o ../../../compat/libcompat-squid.la
 libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
 -Wshadow -Werror -pipe -D_REENTRANT -pthreads -g -O2 -std=c++0x -g -o
 ext_session_acl ext_session_acl.o
 ../../../compat/.libs/libcompat-squid.a -pthreads
 Undefined   first referenced
 symbol in file
 db_create   ext_session_acl.o
 db_env_create   ext_session_acl.o

 The build system is not able to find the Berkeley db library files
 (but for some reason it can find the header).
 Please check that libdb.a or libdb.so is available and found on the paths
 searched for libraries by your build system.

 Kinkie



-- 
Francesco


[squid-users] cache.log Warnings

2014-02-17 Thread Scott Mayo
Just curious whether these are anything I should really worry about,
or whether I just need to keep an eye on my log file.

2014/02/16 03:08:01| helperOpenServers: Starting 40/40
'squid_ldap_auth' processes
2014/02/16 03:08:01| helperOpenServers: Starting 5/5
'squid_ldap_group' processes
2014/02/17 08:59:13| TunnelStateData::Connection::error: FD 747:
read/write failure: (32) Broken pipe
2014/02/17 09:46:05| squidaio_queue_request: WARNING - Queue congestion
2014/02/17 10:30:41| squidaio_queue_request: WARNING - Queue congestion
2014/02/17 11:38:04| squidaio_queue_request: WARNING - Queue congestion
2014/02/17 12:57:40| TunnelStateData::Connection::error: FD 1298:
read/write failure: (110) Connection timed out
2014/02/17 13:08:00| squidaio_queue_request: WARNING - Queue congestion
2014/02/17 14:03:09| TunnelStateData::Connection::error: FD 1000:
read/write failure: (32) Broken pipe
2014/02/17 14:07:12| squidaio_queue_request: WARNING - Queue congestion

I am assuming that I may just need a faster drive. My network has
been buzzing right along today and I have not seen any slowness at all.

Thanks for any suggestions.

-- 
Scott Mayo


[squid-users] Re: Squid transparent proxy with one nic access denied problem.

2014-02-17 Thread Spyros Vlachos
Hello! Sorry, but I am new to this list and I don't know whether I have sent
the mail correctly or whether anyone can see this. Is this the case?
Sorry, and thank you!

On Mon, Feb 17, 2014 at 2:24 PM, Spyros Vlachos spyro...@gmail.com wrote:
 Hello! Thank you in advance for your help.
 I have a fairly simple home network setup.
 I have a modem (192.168.2.254) that connects to the internet.
 Connected to that modem through its own wan port
 I have an openwrt router (192.168.1.1). My internal network is the
 192.168.1.0/24 one. On the router I have connected
 an ubuntu 13.10 box (192.168.1.20) that acts as a squid proxy and dns
 among other things. The ubuntu box has one network card.
 I had successfully  installed a transparent squid proxy by using DNAT
 and SNAT on the router using the 12.04 version of ubuntu.
 Because of some problems with my UPS I tried to install Ubuntu 13.10,
 which solved the UPS problem but also
 upgraded the squid package to 3.3.8 from 3.1.something. My squid
 configuration is as follows:

 #--Squid server 
 192.168.1.20---
 acl localnet src 192.168.1.0/24
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl squid-prime dstdomain /etc/squid3/squid-prime.acl
 acl CONNECT method CONNECT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost manager
 http_access deny manager
 http_access deny squid-prime
 http_access allow localnet
 http_access allow localhost
 http_access deny all
 http_port 3128  #HAVE tried transparent and intercept but the problem persists
 coredump_dir /var/spool/squid3
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?)  0    0%      0
 refresh_pattern (Release|Packages(.gz)*)$  0   20%     2880
 refresh_pattern .               0       20%     4320
 dns_nameservers 8.8.8.8 #have tried to use the local dns 127.0.0.1 but
 the same problem
 #---

 I have tried disabling the dns server of ubuntu because I have heard
 of some problem it can cause to squid.

 My router (192.168.1.1) SNAT DNAT configuration is (openwrt luci gui)
 1) MATCH: From IP not 192.168.1.20 in lan Via any router IP at port 80
 FORWARD TO: IP 192.168.1.20, port 3128 in lan
 2)MATCH: From any host in lan To IP 192.168.1.20, port 3128 in lan
 Rewrite to source IP 192.168.1.1

 The error I get by using the above configurations is a constant Access
 denied Error in the browser and in the
 squid access log is
 #-
 92  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590851.593  1 192.168.1.1 TCP_MISS/403 4193 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
 text/html
 1392590856.653  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590856.653  1 192.168.1.1 TCP_MISS/403 4193 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
 text/html
 1392590861.742  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590861.742  1 192.168.1.1 TCP_MISS/403 4193 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
 text/html
 1392590866.878  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590866.878 26 192.168.1.1 TCP_MISS/403 4193 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
 text/html
 1392590871.903  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590871.903  1 192.168.1.1 TCP_MISS/403 4193 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
 text/html
 1392590876.893  0 192.168.1.20 TCP_MISS/403 3985 GET
 http://notify7.dropbox.com/subscribe? - HIER_NONE/- text/html
 1392590876.893  1 192.168.1.1 TCP_MISS/403 4090 GET
 http://notify7.dropbox.com/subscribe? - HIER_DIRECT/192.168.1.20
 text/html
 1392590876.992  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590876.993  1 192.168.1.1 TCP_MISS/403 4193 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_DIRECT/192.168.1.20
 text/html
 1392590878.600  0 192.168.1.20 TCP_MISS/403 4390 POST
 http://safebrowsing.clients.google.com/safebrowsing/downloads? 

Re: [squid-users] squid 3.4.3 on Solaris Sparc

2014-02-17 Thread Monah Baki
I hope this is the right output.

root@proxy:~# nm -s /usr/lib/libdb.so | grep db_create
[2332]  |214036|   716|FUNC |GLOB |0|.text |__bam_db_create
[1495]  |   1098492|  2172|FUNC |GLOB |0|.text
|__db_create_internal
[2052]  |395884|   216|FUNC |GLOB |0|.text |__ham_db_create
[1755]  |511400|   112|FUNC |GLOB |0|.text |__qam_db_create
[1335]  |   1096060|  2416|FUNC |GLOB |0|.text |db_create
root@proxy:~# nm -s /usr/lib/libdb.so | grep db_env
[1072]  |   1265120|   104|FUNC |GLOB |0|.text |__db_env_destroy
[656]   |   1265240|  3208|FUNC |LOCL |0|.text |__db_env_init
[1300]  |   1264744|   376|FUNC |GLOB |0|.text |db_env_create
[1445]  |   1495184|52|FUNC |GLOB |0|.text
|db_env_set_func_close
[947]   |   1495252|52|FUNC |GLOB |0|.text
|db_env_set_func_dirfree
[1340]  |   1495320|52|FUNC |GLOB |0|.text
|db_env_set_func_dirlist
[915]   |   1495388|52|FUNC |GLOB |0|.text
|db_env_set_func_exists
[2567]  |   1495796|56|FUNC |GLOB |0|.text
|db_env_set_func_file_map
[1512]  |   1495456|52|FUNC |GLOB |0|.text
|db_env_set_func_free
[2384]  |   1495524|52|FUNC |GLOB |0|.text
|db_env_set_func_fsync
[1604]  |   1495592|52|FUNC |GLOB |0|.text
|db_env_set_func_ftruncate
[1909]  |   1495660|52|FUNC |GLOB |0|.text
|db_env_set_func_ioinfo
[2005]  |   1495728|52|FUNC |GLOB |0|.text
|db_env_set_func_malloc
[1795]  |   1496076|52|FUNC |GLOB |0|.text
|db_env_set_func_open
[904]   |   1495940|52|FUNC |GLOB |0|.text
|db_env_set_func_pread
[1377]  |   1496008|52|FUNC |GLOB |0|.text
|db_env_set_func_pwrite
[1238]  |   1496144|52|FUNC |GLOB |0|.text
|db_env_set_func_read
[2513]  |   1496212|52|FUNC |GLOB |0|.text
|db_env_set_func_realloc
[1901]  |   1495868|56|FUNC |GLOB |0|.text
|db_env_set_func_region_map
[1327]  |   1496280|52|FUNC |GLOB |0|.text
|db_env_set_func_rename
[1616]  |   1496348|52|FUNC |GLOB |0|.text
|db_env_set_func_seek
[983]   |   1496416|52|FUNC |GLOB |0|.text
|db_env_set_func_unlink
[2446]  |   1496484|52|FUNC |GLOB |0|.text
|db_env_set_func_write
[1956]  |   1496552|52|FUNC |GLOB |0|.text
|db_env_set_func_yield





On Mon, Feb 17, 2014 at 2:43 PM, Kinkie gkin...@gmail.com wrote:
 That should be enough.
 Check (you can use the nm -s tool) that libdb.so contains the
 symbols db_create and db_env_create. It may be that the file is
 corrupted, a wrong version or a stub.
 Alternatively, if you don't need the session helper, use squid's
 configure flags to skip building it.

 On Mon, Feb 17, 2014 at 4:23 PM, Monah Baki monahb...@gmail.com wrote:
 Hi,


 I did find /usr/lib/libdb.so but no results for libdb.a


 Thanks

 On Mon, Feb 17, 2014 at 12:42 AM, Francesco Chemolli gkin...@gmail.com 
 wrote:

 On 17 Feb 2014, at 01:15, Monah Baki monahb...@gmail.com wrote:

 uname -a
 SunOS proxy 5.11 11.1 sun4v sparc SUNW,SPARC-Enterprise-T5220

 Here are the steps before it fails

 ./configure --prefix=/usr/local/squid --enable-async-io
 --enable-cache-digests --enable-underscores --enable-pthreads
 --enable-storeio=ufs,aufs --enable-removal-policies=lru,
 heap

 make

 c -I../../../include   -I/usr/include/gssapi -I/usr/include/kerberosv5
   -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x -MT ext_session_acl.o -MD -MP
 -MF .deps/ext_session_acl.Tpo -c -o ext_session_acl.o
 ext_session_acl.cc
 mv -f .deps/ext_session_acl.Tpo .deps/ext_session_acl.Po
 /bin/sh ../../../libtool --tag=CXX --mode=link g++ -Wall
 -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
 -D_REENTRANT -pthreads -g -O2 -std=c++0x   -g -o ext_session_acl
 ext_session_acl.o ../../../compat/libcompat-squid.la
 libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
 -Wshadow -Werror -pipe -D_REENTRANT -pthreads -g -O2 -std=c++0x -g -o
 ext_session_acl ext_session_acl.o
 ../../../compat/.libs/libcompat-squid.a -pthreads
 Undefined   first referenced
 symbol in file
 db_create   ext_session_acl.o
 db_env_create   ext_session_acl.o

 The build system is not able to find the Berkeley DB library files 
 (though for some reason it can find the header).
 Please check that libdb.a or libdb.so is available on the paths your 
 build system searches for libraries.

 Kinkie



 --
 Francesco


[squid-users] Re: cache.log Warnings

2014-02-17 Thread Scott Mayo
On Mon, Feb 17, 2014 at 2:31 PM, Scott Mayo scotgm...@gmail.com wrote:
 Just curious if these are anything that I should really worry about,
 or just need to keep an eye on my log file?

 2014/02/16 03:08:01| helperOpenServers: Starting 40/40
 'squid_ldap_auth' processes
 2014/02/16 03:08:01| helperOpenServers: Starting 5/5
 'squid_ldap_group' processes
 2014/02/17 08:59:13| TunnelStateData::Connection::error: FD 747:
 read/write failure: (32) Broken pipe
 2014/02/17 09:46:05| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 10:30:41| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 11:38:04| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 12:57:40| TunnelStateData::Connection::error: FD 1298:
 read/write failure: (110) Connection timed out
 2014/02/17 13:08:00| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 14:03:09| TunnelStateData::Connection::error: FD 1000:
 read/write failure: (32) Broken pipe
 2014/02/17 14:07:12| squidaio_queue_request: WARNING - Queue congestion

 I am assuming that I may just need a faster drive?  My network has
 been buzzing right along today and I have not seen any slowness at all.

 Thanks for any suggestions.


On top of those few errors, I noticed at the end of school, my free
memory was down to about 2.5GB out of 8GB.  I restarted squid just to
see if that would affect anything.  I have a lot more things in my
cache.log now.  They may not be anything, but I just wanted to ask.  I
excluded a bit to make it not so long.

I'll be curious if my memory frees up a bit later. I thought that
maybe a lot was being used since I changed my auth_param basic
credentialsttl to 9 hours.  Not sure if that would cause it to hold
that much info in memory or not as far as logins go.

Anyways, below is what was in my cache.log when restarting.  Is this normal?

2014/02/17 16:15:03| Preparing for shutdown after 490522 requests
2014/02/17 16:15:03| Waiting 30 seconds for active connections to finish
2014/02/17 16:15:03| FD 105 Closing HTTP connection
2014/02/17 16:15:35| Shutting down...
2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
use - not freeing
2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
use - not freeing
2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
use - not freeing
.
.  ((EXCLUDED ALL THE SAME LINES HERE THAT WERE IN BETWEEN FOR READABILITY))
.
2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
use - not freeing
2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
use - not freeing

2014/02/17 16:15:35| basic/auth_basic.cc(97) done: Basic
authentication Shutdown.
2014/02/17 16:15:35| Closing unlinkd pipe on FD 103
2014/02/17 16:15:35| storeDirWriteCleanLogs: Starting...
2014/02/17 16:15:35| 65536 entries written so far.
2014/02/17 16:15:35|   Finished.  Wrote 95602 entries.
2014/02/17 16:15:35|   Took 0.03 seconds (2791951.40 entries/sec).
CPU Usage: 1335.548 seconds = 616.616 user + 718.932 sys
Maximum Resident Size: 2354176 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:  572856 KB
Ordinary blocks:   401955 KB  49716 blks
Small blocks:   0 KB  7 blks
Holding blocks:  3160 KB  6 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  170900 KB
Total in use:  405115 KB 71%
Total free:    170900 KB 30%
2014/02/17 16:15:35| Open FD UNSTARTED 7 DNS Socket IPv6
2014/02/17 16:15:35| Open FD READ/WRITE8 DNS Socket IPv4
2014/02/17 16:15:35| Open FD READ/WRITE9 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   11 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   13 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   14 clients3.google.com:443
2014/02/17 16:15:35| Open FD READ/WRITE   15 core.mochibot.com idle connection
2014/02/17 16:15:35| Open FD READ/WRITE   16 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   18 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   19 xmpp004.hpeprint.com:443
2014/02/17 16:15:35| Open FD READ/WRITE   20 mail.google.com:443
2014/02/17 16:15:35| Open FD READ/WRITE   24 safebrowsing.google.com:443
2014/02/17 16:15:35| Open FD READ/WRITE   25 Waiting for next request
2014/02/17 16:15:35| Open FD READ/WRITE   29 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   31 Waiting for next request
2014/02/17 16:15:35| Open FD READ/WRITE   32 squid_ldap_auth #1
2014/02/17 16:15:35| Open FD READ/WRITE   33 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   34 Waiting for next request
2014/02/17 16:15:35| Open FD READ/WRITE   35 static.poptropica.com
idle connection
2014/02/17 16:15:35| Open FD UNSTARTED36 squid_ldap_auth #2
2014/02/17 16:15:35| Open FD READ/WRITE   38 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   39 Reading next request
2014/02/17 16:15:35| Open FD READ/WRITE   40 Reading next 

Re: [squid-users] Re: Squid transparent proxy with one nic access denied problem.

2014-02-17 Thread Nikolai Gorchilov
Hi Spyros,

It seems you're experiencing request loops that are unrelated to your ACLs.

Looking at the logs, we can clearly see pairs of requests for the same
URL, like this:
1392590890.301  0 192.168.1.20 TCP_MISS/403 4158 GET
http://www.tvxs.gr/ - HIER_NONE/- text/html
1392590890.302  1 192.168.1.1 TCP_MISS/403 4263 GET
http://www.tvxs.gr/ - HIER_DIRECT/192.168.1.20 text/html

As logging happens at the end of a transaction, records are ordered
by finish time, not start time. They actually started in reverse order:
1. First came the request from 192.168.1.1 for http://www.tvxs.gr/.
2. As it was considered a MISS, your Squid decided to go directly to
the destination server (thus hierarchy code HIER_DIRECT)
3. PROBLEM! PROBLEM! Surprisingly, Squid resolves www.tvxs.gr as
192.168.1.20 and fires the request towards this IP!
4. Boom! This is how the same request arrives again, this time from
source IP 192.168.1.20 (Squid itself). We have a loop!
5. Squid detects the loop (look for something like WARNING: Forwarding loop
detected in cache.log) and generates an internal error response such as
HTTP 403 Forbidden, using ERR_ACCESS_DENIED or the like. Thus the
hierarchy code is HIER_NONE.
6. The error returns in the first instance of this request after 1ms,
and Squid returns it to the original caller (TCP_MISS/403).
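Squid's loop detection in step 5 relies on the Via header that each compliant proxy appends to forwarded requests: seeing its own identity there means the request has come back around. A rough, hypothetical Python model of that idea (not Squid's actual C++ implementation):

```python
# Each hop appends "1.1 <its-own-id>" to Via; finding your own id in the
# incoming header means the request has looped back to you.
def is_forwarding_loop(via_header: str, my_id: str) -> bool:
    hops = [h.strip() for h in via_header.split(",") if h.strip()]
    return any(my_id in hop for hop in hops)

# First pass through the proxy: no loop yet.
print(is_forwarding_loop("1.1 router.lan", "proxy.lan:3128"))        # False
# The request came back carrying our own Via entry: loop detected.
print(is_forwarding_loop("1.1 router.lan, 1.1 proxy.lan:3128",
                         "proxy.lan:3128"))                          # True
```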

I don't have a clear idea what the root cause of the loop is, but I'd:
1. make http_port 192.168.1.20:3128 intercept
2. study carefully DNS settings of both Ubuntu and OpenWRT:
- /etc/resolv.conf
- iptables: DNS interceptions and redirections (UDP & TCP port 53)
- try other public DNS services
- tcpdump as much as possible ;-)
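For point 1, the change could look like the fragment below. This is only a sketch, not a tested configuration: the extra plain port and its number 3129 are assumptions (newer Squid versions expect a separate non-intercept port for manager and locally generated requests).

```
# Sketch only -- 3129 is an arbitrary illustrative port number.
http_port 192.168.1.20:3128 intercept   # NATed traffic from the router
http_port 127.0.0.1:3129                # plain port for local/manager requests
```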

Hope this helps!

Best,
Niki

On Tue, Feb 18, 2014 at 12:05 AM, Spyros Vlachos spyro...@gmail.com wrote:
 Hello! Sorry but I am new to this list and I don't know if I have sent
 the mail correctly and if anyone can see this. Is this the case?
 Sorry and thank you!

 On Mon, Feb 17, 2014 at 2:24 PM, Spyros Vlachos spyro...@gmail.com wrote:
 Hello! Thank you in advance for your help.
 I have a fairly simple home network setup.
 I have a modem (192.168.2.254) that connects to the internet.
 Connected to that modem through its own wan port
 I have an openwrt router (192.168.1.1). My internal network is the
 192.168.1.0/24 one. On the router I have connected
 an ubuntu 13.10 box (192.168.1.20) that acts as a squid proxy and dns
 among other things. The ubuntu box has one network card.
 I had successfully  installed a transparent squid proxy by using DNAT
 and SNAT on the router using the 12.04 version of ubuntu.
 Because of some problems with my UPS I tried to install Ubuntu 13.10,
 which solved the UPS problem but also
 upgraded the squid package to 3.3.8 from 3.1.something. My squid
 configuration is as follows:

 #--Squid server 
 192.168.1.20---
 acl localnet src 192.168.1.0/24
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl squid-prime dstdomain /etc/squid3/squid-prime.acl
 acl CONNECT method CONNECT
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost manager
 http_access deny manager
 http_access deny squid-prime
 http_access allow localnet
 http_access allow localhost
 http_access deny all
 http_port 3128  #HAVE tried transparent and intercept but the problem 
 persists
 coredump_dir /var/spool/squid3
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?)  0    0%      0
 refresh_pattern (Release|Packages(.gz)*)$  0   20%     2880
 refresh_pattern .               0       20%     4320
 dns_nameservers 8.8.8.8 #have tried to use the local dns 127.0.0.1 but
 the same problem
 #---

 I have tried disabling the dns server of ubuntu because I have heard
 of some problem it can cause to squid.

 My router (192.168.1.1) SNAT DNAT configuration is (openwrt luci gui)
 1) MATCH: From IP not 192.168.1.20 in lan Via any router IP at port 80
 FORWARD TO: IP 192.168.1.20, port 3128 in lan
 2)MATCH: From any host in lan To IP 192.168.1.20, port 3128 in lan
 Rewrite to source IP 192.168.1.1

 The error I get by using the above configurations is a constant Access
 denied Error in the browser and in the
 squid access log is
 #-
 92  0 192.168.1.20 TCP_MISS/403 4088 GET
 http://stokokkino.live24.gr/stokokkino? - HIER_NONE/- text/html
 1392590851.593  1 192.168.1.1 TCP_MISS/403 4193 

Re: [squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-17 Thread Rajiv Desai
I think I found the problem. This applies only to HTTPS traffic being
cached with ssl-bump.
Basically, HttpRequest::maybeCacheable() does not check for PROTO_HTTPS.

The following patch fixes it:

patch
diff --git a/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
b/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
index dc18b33..ce6c411 100644
--- a/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
+++ b/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
@@ -596,6 +596,7 @@ HttpRequest::maybeCacheable()
 switch (protocol) {
 case AnyP::PROTO_HTTP:
+case AnyP::PROTO_HTTPS:
 if (!method.respMaybeCacheable())
 return false;

/patch
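As background for why PUT responses should never have been stored, the per-method part of the check can be modeled roughly as follows. This is a hypothetical simplification of the cacheable-by-method rule from RFC 7231, not Squid's actual respMaybeCacheable() code:

```python
# Whether a response may be cached depends (among other things) on the
# request method: GET and HEAD responses may be stored, PUT may not.
def resp_maybe_cacheable(method: str) -> bool:
    return method.upper() in {"GET", "HEAD"}

print(resp_maybe_cacheable("GET"))   # True
print(resp_maybe_cacheable("PUT"))   # False -- the symptom in this thread
```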

-Rajiv

On Mon, Feb 17, 2014 at 12:54 AM, Rajiv Desai ra...@maginatics.com wrote:
 FWIW, from debug logs in cache.log, it seems like PUT responses are
 being cached.
 I am fairly new to using squid, so I may be completely misreading these;
 just trying to understand caching.
 So are PUT responses cached by design, or am I completely missing
 something simple here? :)


 logs
 2014/02/17 00:06:54.977 kid1| store_dir.cc(1149) get: storeGet:
 looking up AC671962CFC5644F4B22DA51C242DA50
 2014/02/17 00:06:54.977 kid1| StoreMap.cc(293) openForReading: opening
 entry with key AC671962CFC5644F4B22DA51C242DA50 for reading
 /mnt/squid-cache_inodes
 2014/02/17 00:06:54.977 kid1| StoreMap.cc(309) openForReadingAt:
 opening entry 14877 for reading /mnt/squid-cache_inodes
 2014/02/17 00:06:54.977 kid1| StoreMap.cc(322) openForReadingAt:
 cannot open empty entry 14877 for reading /mnt/squid-cache_inodes
 2014/02/17 00:06:54.977 kid1| store_dir.cc(820) find: none of 1
 cache_dirs have AC671962CFC5644F4B22DA51C242DA50
 2014/02/17 00:06:54.977 kid1| client_side_reply.cc(1626)
 identifyFoundObject: StoreEntry is NULL -  MISS
 2014/02/17 00:06:54.977 kid1| client_side_reply.cc(622) processMiss:
 clientProcessMiss: 'PUT
 https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104'
 2014/02/17 00:06:54.977 kid1| store.cc(803) storeCreatePureEntry:
 storeCreateEntry:
 'https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104'
 2014/02/17 00:06:54.977 kid1| store.cc(395) StoreEntry: StoreEntry
 constructed, this=0x168f990
 /logs

 ... and later

 logs
 2014/02/17 00:06:55.127 kid1| store_dir.cc(820) find: none of 1
 cache_dirs have DBA199D500F44928560537BB0CAB0908
 2014/02/17 00:06:55.127 kid1| refresh.cc(319) refreshCheck: checking
 freshness of 
 'https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104'
 2014/02/17 00:06:55.127 kid1| refresh.cc(340) refreshCheck: Matched '.
 7776000 100%% 7776000'
 2014/02/17 00:06:55.127 kid1| refresh.cc(342) refreshCheck: age:60
 2014/02/17 00:06:55.127 kid1| refresh.cc(344) refreshCheck:
 check_time: Mon, 17 Feb 2014 08:07:55 GMT
 2014/02/17 00:06:55.127 kid1| refresh.cc(346) refreshCheck:
 entry-timestamp:   Mon, 17 Feb 2014 08:06:55 GMT
 2014/02/17 00:06:55.127 kid1| refresh.cc(202) refreshStaleness: No
 explicit expiry given, using heuristics to determine freshness
 2014/02/17 00:06:55.128 kid1| refresh.cc(240) refreshStaleness: FRESH:
 age (60 sec) is less than configured minimum (7776000 sec)
 2014/02/17 00:06:55.128 kid1| refresh.cc(366) refreshCheck: Staleness = -1
 2014/02/17 00:06:55.128 kid1| refresh.cc(486) refreshCheck: Object isn't 
 stale..
 2014/02/17 00:06:55.128 kid1| refresh.cc(501) refreshCheck: returning
 FRESH_MIN_RULE
 2014/02/17 00:06:55.128 kid1| http.cc(491) cacheableReply: YES because
 HTTP status 200
 2014/02/17 00:06:55.128 kid1| HttpRequest.cc(696) storeId: sent back
 canonicalUrl:https://s3-us-west-1.amazonaws.com/mag-1363987602-cmbogo/334677ce-9882104
 2014/02/17 00:06:55.128 kid1| store.cc(472) hashInsert:
 StoreEntry::hashInsert: Inserting Entry e:=p2DV/0x168f990*3 key
 'AC671962CFC5644F4B22DA51C242DA50'
 2014/02/17 00:06:55.128 kid1| ctx: exit level  0
 2014/02/17 00:06:55.128 kid1| store.cc(858) write: storeWrite: writing
 17 bytes for 'AC671962CFC5644F4B22DA51C242DA50'
 /logs
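The FRESH_MIN_RULE decision in those logs follows the refresh_pattern "min percent max" triple (here ". 7776000 100% 7776000"). A simplified, hypothetical model of that heuristic, ignoring explicit Expires/Cache-Control headers which take precedence, is:

```python
# All times in seconds; lm_age is the object's age relative to Last-Modified.
def is_fresh(age: int, lm_age: int, min_s: int, pct: float, max_s: int) -> bool:
    if age <= min_s:            # FRESH_MIN_RULE, as logged above
        return True
    if age > max_s:             # older than max: stale
        return False
    return age <= lm_age * pct  # Last-Modified percentage heuristic

# The logged case: pattern ". 7776000 100% 7776000", object age 60 s.
print(is_fresh(60, 0, 7776000, 1.0, 7776000))        # True (FRESH_MIN_RULE)
print(is_fresh(20000000, 0, 7776000, 1.0, 7776000))  # False
```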

 On Sun, Feb 16, 2014 at 11:07 PM, Rajiv Desai ra...@maginatics.com wrote:
 What is the authoritative source of cache statistics? The slots
 occupied due to PUT requests (as suggested by the mgr:storedir stats) are
 quite concerning.
 Is there some additional config that needs to be added to ensure that
 PUTs are simply bypassed for caching purposes?

 NOTE: fwiw, I have verified that subsequent GETs for the same objects
 after PUTs do get a cache MISS.

 On Sun, Feb 16, 2014 at 3:45 PM, Rajiv Desai ra...@maginatics.com wrote:
 On Sun, Feb 16, 2014 at 3:39 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 17/02/2014 11:41 a.m., Rajiv Desai wrote:
 I am using Squid Cache:
 Version 3.HEAD-20140127-r13248

 My cache dir is configured to use rock (Large rock with SMP):
 cache_dir rock /mnt/squid-cache 256000 max-size=4194304

 My refresh pattern is permissive to cache all objects:
 refresh_pattern . 129600 100% 129600 ignore-auth

 I uploaded 30 GB of data via squid cache with PUT 

[squid-users] Re: cache.log Warnings

2014-02-17 Thread Scott Mayo
Never mind about the memory usage.  It looks fine judging by the
buffers/cache line in free -m.

I am curious if any of the messages in the log look like something I
should worry about though.

Thanks.
Scott

On Mon, Feb 17, 2014 at 5:20 PM, Scott Mayo scotgm...@gmail.com wrote:
 On Mon, Feb 17, 2014 at 2:31 PM, Scott Mayo scotgm...@gmail.com wrote:
 Just curious if these are anything that I should really worry about,
 or just need to keep an eye on my log file?

 2014/02/16 03:08:01| helperOpenServers: Starting 40/40
 'squid_ldap_auth' processes
 2014/02/16 03:08:01| helperOpenServers: Starting 5/5
 'squid_ldap_group' processes
 2014/02/17 08:59:13| TunnelStateData::Connection::error: FD 747:
 read/write failure: (32) Broken pipe
 2014/02/17 09:46:05| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 10:30:41| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 11:38:04| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 12:57:40| TunnelStateData::Connection::error: FD 1298:
 read/write failure: (110) Connection timed out
 2014/02/17 13:08:00| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 14:03:09| TunnelStateData::Connection::error: FD 1000:
 read/write failure: (32) Broken pipe
 2014/02/17 14:07:12| squidaio_queue_request: WARNING - Queue congestion

 I am assuming that I may just need a faster drive?  My network has
 been buzzing right along today and I have not seen any slowness at all.

 Thanks for any suggestions.


 On top of those few errors, I noticed at the end of school, my free
 memory was down to about 2.5GB out of 8GB.  I restarted squid just to
 see if that would affect anything.  I have a lot more things in my
 cache.log now.  They may not be anything, but I just wanted to ask.  I
 excluded a bit to make it not so long.

 I'll be curious if my memory frees up a bit later. I thought that
 maybe a lot was being used since I changed my auth_param basic
 credentialsttl to 9 hours.  Not sure if that would cause it to hold
 that much info in memory or not as far as logins go.

 Anyways, below is what was in my cache.log when restarting.  Is this normal?

 2014/02/17 16:15:03| Preparing for shutdown after 490522 requests
 2014/02/17 16:15:03| Waiting 30 seconds for active connections to finish
 2014/02/17 16:15:03| FD 105 Closing HTTP connection
 2014/02/17 16:15:35| Shutting down...
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 .
 .  ((EXCLUDED ALL THE SAME LINES HERE THAT WERE IN BETWEEN FOR READABILITY))
 .
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing

 2014/02/17 16:15:35| basic/auth_basic.cc(97) done: Basic
 authentication Shutdown.
 2014/02/17 16:15:35| Closing unlinkd pipe on FD 103
 2014/02/17 16:15:35| storeDirWriteCleanLogs: Starting...
 2014/02/17 16:15:35| 65536 entries written so far.
 2014/02/17 16:15:35|   Finished.  Wrote 95602 entries.
 2014/02/17 16:15:35|   Took 0.03 seconds (2791951.40 entries/sec).
 CPU Usage: 1335.548 seconds = 616.616 user + 718.932 sys
 Maximum Resident Size: 2354176 KB
 Page faults with physical i/o: 0
 Memory usage for squid via mallinfo():
 total space in arena:  572856 KB
 Ordinary blocks:   401955 KB  49716 blks
 Small blocks:   0 KB  7 blks
 Holding blocks:  3160 KB  6 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:  170900 KB
 Total in use:  405115 KB 71%
 Total free:    170900 KB 30%
 2014/02/17 16:15:35| Open FD UNSTARTED 7 DNS Socket IPv6
 2014/02/17 16:15:35| Open FD READ/WRITE8 DNS Socket IPv4
 2014/02/17 16:15:35| Open FD READ/WRITE9 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   11 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   13 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   14 clients3.google.com:443
 2014/02/17 16:15:35| Open FD READ/WRITE   15 core.mochibot.com idle connection
 2014/02/17 16:15:35| Open FD READ/WRITE   16 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   18 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   19 xmpp004.hpeprint.com:443
 2014/02/17 16:15:35| Open FD READ/WRITE   20 mail.google.com:443
 2014/02/17 16:15:35| Open FD READ/WRITE   24 safebrowsing.google.com:443
 2014/02/17 16:15:35| Open FD READ/WRITE   25 Waiting for next request
 2014/02/17 16:15:35| Open FD READ/WRITE   29 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   31 Waiting for next request
 2014/02/17 16:15:35| Open FD READ/WRITE   32 squid_ldap_auth #1
 2014/02/17 16:15:35| Open FD READ/WRITE   33 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   34 Waiting for next 

Re: [squid-users] Re: cache.log Warnings

2014-02-17 Thread Carlos Defoe
http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion

You're probably using aufs; those messages are normal.

In the restart log I have never seen the AuthUserHashPointer ones, but
since Squid exits and starts normally, I don't think that is a
problem.

I was going to ask how you are checking your free memory, but I think
you just figured out that you have plenty of memory.

I looked quickly at your conf in the other message, and it seems that
you're using a very small cache_mem. With 8 GB, you can increase that
to at least 2 GB safely. The hot objects will be kept in RAM, thus
reducing disk activity and speeding up your proxy.
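A hypothetical squid.conf fragment implementing that suggestion (the values are illustrative examples, not tuned recommendations):

```
# Sketch: enlarge the memory cache on an 8 GB box; numbers are examples.
cache_mem 2048 MB
maximum_object_size_in_memory 512 KB
```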

On Mon, Feb 17, 2014 at 9:26 PM, Scott Mayo scotgm...@gmail.com wrote:
 Never mind about the memory usage.  It looks fine judging by the
 buffers/cache line in free -m.

 I am curious if any of the messages in the log look like something I
 should worry about though.

 Thanks.
 Scott

 On Mon, Feb 17, 2014 at 5:20 PM, Scott Mayo scotgm...@gmail.com wrote:
 On Mon, Feb 17, 2014 at 2:31 PM, Scott Mayo scotgm...@gmail.com wrote:
 Just curious if these are anything that I should really worry about,
 or just need to keep an eye on my log file?

 2014/02/16 03:08:01| helperOpenServers: Starting 40/40
 'squid_ldap_auth' processes
 2014/02/16 03:08:01| helperOpenServers: Starting 5/5
 'squid_ldap_group' processes
 2014/02/17 08:59:13| TunnelStateData::Connection::error: FD 747:
 read/write failure: (32) Broken pipe
 2014/02/17 09:46:05| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 10:30:41| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 11:38:04| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 12:57:40| TunnelStateData::Connection::error: FD 1298:
 read/write failure: (110) Connection timed out
 2014/02/17 13:08:00| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 14:03:09| TunnelStateData::Connection::error: FD 1000:
 read/write failure: (32) Broken pipe
 2014/02/17 14:07:12| squidaio_queue_request: WARNING - Queue congestion

 I am assuming that I may just need a faster drive?  My network has
 been buzzing right along today and I have not seen any slowness at all.

 Thanks for any suggestions.


 On top of those few errors, I noticed at the end of school, my free
 memory was down to about 2.5GB out of 8GB.  I restarted squid just to
 see if that would affect anything.  I have a lot more things in my
 cache.log now.  They may not be anything, but I just wanted to ask.  I
 excluded a bit to make it not so long.

 I'll be curious if my memory frees up a bit later. I thought that
 maybe a lot was being used since I changed my auth_param basic
 credentialsttl to 9 hours.  Not sure if that would cause it to hold
 that much info in memory or not as far as logins go.

 Anyways, below is what was in my cache.log when restarting.  Is this normal?

 2014/02/17 16:15:03| Preparing for shutdown after 490522 requests
 2014/02/17 16:15:03| Waiting 30 seconds for active connections to finish
 2014/02/17 16:15:03| FD 105 Closing HTTP connection
 2014/02/17 16:15:35| Shutting down...
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 .
 .  ((EXCLUDED ALL THE SAME LINES HERE THAT WERE IN BETWEEN FOR READABILITY))
 .
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing
 2014/02/17 16:15:35| AuthUserHashPointer::removeFromCache: entry in
 use - not freeing

 2014/02/17 16:15:35| basic/auth_basic.cc(97) done: Basic
 authentication Shutdown.
 2014/02/17 16:15:35| Closing unlinkd pipe on FD 103
 2014/02/17 16:15:35| storeDirWriteCleanLogs: Starting...
 2014/02/17 16:15:35| 65536 entries written so far.
 2014/02/17 16:15:35|   Finished.  Wrote 95602 entries.
 2014/02/17 16:15:35|   Took 0.03 seconds (2791951.40 entries/sec).
 CPU Usage: 1335.548 seconds = 616.616 user + 718.932 sys
 Maximum Resident Size: 2354176 KB
 Page faults with physical i/o: 0
 Memory usage for squid via mallinfo():
 total space in arena:  572856 KB
 Ordinary blocks:   401955 KB  49716 blks
 Small blocks:   0 KB  7 blks
 Holding blocks:  3160 KB  6 blks
 Free Small blocks:  0 KB
 Free Ordinary blocks:  170900 KB
 Total in use:  405115 KB 71%
 Total free:170900 KB 30%
 2014/02/17 16:15:35| Open FD UNSTARTED 7 DNS Socket IPv6
 2014/02/17 16:15:35| Open FD READ/WRITE8 DNS Socket IPv4
 2014/02/17 16:15:35| Open FD READ/WRITE9 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   11 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   13 Reading next request
 2014/02/17 16:15:35| Open FD READ/WRITE   14 clients3.google.com:443
 2014/02/17 16:15:35| Open FD READ/WRITE   15 core.mochibot.com idle 
 connection
 2014/02/17 16:15:35| Open FD 

Re: [squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-17 Thread Amos Jeffries
On 18/02/2014 1:23 p.m., Rajiv Desai wrote:
 I think I found the problem. This applies only to HTTPs traffic being
 cached with ssl-bump.
 Basically HttpRequest::maybeCacheable() does not check for PROTO_HTTPS.
 
 Following patch fixes it:
 
 patch
 diff --git a/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
 b/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
 index dc18b33..ce6c411 100644
 --- a/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
 +++ b/squid-3.HEAD-20140127-r13248/src/HttpRequest.cc
 @@ -596,6 +596,7 @@ HttpRequest::maybeCacheable()
  switch (protocol) {
  case AnyP::PROTO_HTTP:
 +case AnyP::PROTO_HTTPS:
  if (!method.respMaybeCacheable())
  return false;
 
 /patch
 
 -Rajiv


Aha! Thank you. Patch applied to Squid-3.

Amos



Re: [squid-users] Re: squid3 block all 443 ports request

2014-02-17 Thread Amos Jeffries
On 17/02/2014 8:55 p.m., khadmin wrote:
 Hi Amos,
 
 Thank you for the response; actually I'm working with IPv4 on my network
 architecture.

While Squid appears to be trying to use the half-working IPv6 network
you have available.

Note that your Squid is apparently *successfully* performing the TCP
SYN/SYN-ACK exchange to set up the remote server connections over IPv6,
*then* failing on the data packets.


As a friend of mine is becoming famous for saying:
 Welcome to your IPv6 transit network, whether you know it or not.


 All the clients are connected to a Windows Server 2012 DC that manages
 DNS, DHCP and AD.
 The proxy server is not under the domain controller and has a static IP
 address.
 Anyway, I will try to run MTU Path and I will give you feedback.
 Otherwise, would you advise me to install another version of Squid proxy?

I advise looking into fixing the IPv6 on your network.

Since Squid is getting as far as it does you can be sure there are other
software on your network doing same, or possibly even getting working
connections.

Start with the firewall rules on your routers ASAP, so that when you get
around to fixing packet transit your normal security policies do not
suddenly gain lots of holes.
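Amos's diagnosis above (handshake succeeds, data packets die) is the classic signature of a broken IPv6 path, often a path-MTU problem. Squid ends up on IPv6 in the first place because getaddrinfo() offers the AAAA result when the host has IPv6 configured. A minimal C sketch of that family selection follows; the family_of helper name and port "443" are illustrative, and AI_NUMERICHOST keeps it to literal addresses so it runs without any network:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>

/* Return the address family getaddrinfo() resolves a literal
 * address to: AF_INET, AF_INET6, or -1 on failure.  This only
 * illustrates family selection, not reachability. */
static int family_of(const char *host)
{
    struct addrinfo hints, *res;
    int fam;

    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_NUMERICHOST;   /* literals only in this demo */

    if (getaddrinfo(host, "443", &hints, &res) != 0)
        return -1;
    fam = res->ai_family;
    freeaddrinfo(res);
    return fam;
}

int main(void)
{
    printf("127.0.0.1 -> %s\n", family_of("127.0.0.1") == AF_INET  ? "IPv4" : "?");
    printf("::1       -> %s\n", family_of("::1")       == AF_INET6 ? "IPv6" : "?");
    return 0;
}
```

When a hostname resolves to both families and the IPv6 path is half-broken, the connection attempt that "wins" is the one that then stalls on large packets, which matches the TCP_MISS_ABORTED entries quoted in this thread.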

Amos



Re: [squid-users] Re: cache.log Warnings

2014-02-17 Thread Scott Mayo
On Mon, Feb 17, 2014 at 6:35 PM, Carlos Defoe carlosde...@gmail.com wrote:
 http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion

 You're probably using aufs, those messages are normal.

Yes, just changed that yesterday.



 On the restart log, I never saw the AuthUserHashPointer ones, but
 since squid exits and starts normally, I don't think that is a
 problem.


Okay, thanks.


 I was going to ask how are you checking your free memory, but I think
 you just figured out that you have plenty of memory.

 I looked quickly to your conf on the other message, and seems that
 you're using a very small cache_mem. With 8GB, you can increase that
 to at least 2 GB safely. The hot objects will be kept on RAM, thus
 reducing the disk activity and speeding your proxy.


Thanks, I was wondering about making that larger and if it would be
okay.  No one suggested it on that other post, so I figured I best
leave it where it was.  I'll increase it some tomorrow.
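Carlos's suggestion amounts to a one-line squid.conf change; 2 GB is his stated floor for an 8 GB box, and the exact figure is a judgment call:

```
# squid.conf -- keep hot objects in RAM (box has 8 GB total)
cache_mem 2048 MB
```

Note that cache_mem bounds only the in-memory object cache; total process memory will still be larger (index, buffers, in-transit objects).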


-- 
Scott Mayo
Mayo's Pioneer Seeds


Re: [squid-users] Re: squid3 block all 443 ports request

2014-02-17 Thread Amos Jeffries
On 18/02/2014 4:45 a.m., khadmin wrote:
 Hi,
 
 I want to thank you all for your efforts. Finally it works: I had to
 disable the IPv6 protocol on the clients and it works perfectly.

That is wrong. The clients were already working perfectly and disabling
IPv6 breaks more than just this one small problem.

See the FAQ: "What are Microsoft's recommendations about disabling IPv6?"
 http://technet.microsoft.com/en-us/network/cc987595.aspx



It was the Squid-to-server connections which were having trouble, and the
correct solution is to fix the brokenness by making IPv6 work (for values
of "work" which include "denied") rather than disabling things further.

Amos



[squid-users] squid-3.4.3-20140203-r13087 can not compile on freebsd 10-stable

2014-02-17 Thread k simon
Hi, List,
  The squid-3.4.3-20140203-r13087 cannot compile on FreeBSD 10-stable.
  When issuing ./configure, it reports "configure: Native pthreads
support disabled. DiskThreads module automaticaly disabled."
  And the compile cannot finish; it reports
"/usr/include/c++/v1/cstdio:139:9: error: no member named
'ERROR_sprintf_UNSAFE_IN_SQUID' in the global namespace" at "using ::sprintf;".
  It seems that aufs cannot work on FreeBSD 10, and I found some
discussions about it by the FreeBSD guys.

http://freebsd.1045724.n5.nabble.com/Re-Squid-aufs-crashes-under-10-0-tp5883849.html
http://freebsd.1045724.n5.nabble.com/Re-ports-184993-www-squid33-fails-to-build-on-stable-10-without-AUFS-or-why-AUFS-doesn-t-work-tp5885658.html



Simon






attached info:

configure: cbdata debugging enabled: no
configure: xmalloc stats display: no
configure: With 63 aufs threads
checking for library containing shm_open... none required
checking for DiskIO modules to be enabled...  AIO Blocking DiskDaemon
DiskThreads IpcIo Mmapped
checking aio.h usability... yes
checking aio.h presence... yes
checking for aio.h... yes
checking for aio_read in -lrt... yes
configure: Native POSIX AIO support detected.
configure: Enabling AIO DiskIO module
configure: Enabling Blocking DiskIO module
configure: Enabling DiskDaemon DiskIO module
configure: pthread library requires FreeBSD 7 or later
configure: Native pthreads support disabled. DiskThreads module
automaticaly disabled.
configure: Enabling IpcIo DiskIO module
configure: Enabling Mmapped DiskIO module
configure: IO Modules built:  AIO Blocking DiskDaemon IpcIo Mmapped
configure: Store modules built:  aufs diskd rock ufs
configure: Removal policies to build: lru heap
configure: Disabling ESI processor
Making all in base
/bin/sh ../../libtool --tag=CXX --mode=compile c++ -DHAVE_CONFIG_H
-I../.. -I../../include -I../../lib  -I../../src -I../../include
-I/usr/include  -I/usr/include -I../../libltdl   -I/usr/include
-I/usr/include  -g -O2 -march=native -std=c++0x -I/usr/local/include -MT
AsyncCall.lo -MD -MP -MF .deps/AsyncCall.Tpo -c -o AsyncCall.lo AsyncCall.cc
libtool: compile:  c++ -DHAVE_CONFIG_H -I../.. -I../../include
-I../../lib -I../../src -I../../include -I/usr/include -I/usr/include
-I../../libltdl -I/usr/include -I/usr/include -g -O2 -march=native
-std=c++0x -I/usr/local/include -MT AsyncCall.lo -MD -MP -MF
.deps/AsyncCall.Tpo -c AsyncCall.cc -o AsyncCall.o
In file included from AsyncCall.cc:2:
In file included from ./AsyncCall.h:6:
In file included from ./RefCount.h:40:
In file included from /usr/include/c++/v1/iostream:38:
In file included from /usr/include/c++/v1/ios:216:
In file included from /usr/include/c++/v1/__locale:15:
In file included from /usr/include/c++/v1/string:432:
/usr/include/c++/v1/cstdio:139:9: error: no member named
'ERROR_sprintf_UNSAFE_IN_SQUID' in the global namespace
using ::sprintf;
  ~~^
../../compat/unsafe.h:10:17: note: expanded from macro 'sprintf'
#define sprintf ERROR_sprintf_UNSAFE_IN_SQUID
^
1 error generated.
*** Error code 1

Stop.
make[3]: stopped in /root/kf/squid-3.4.3-20140203-r13087/src/base
*** Error code 1

Stop.
make[2]: stopped in /root/kf/squid-3.4.3-20140203-r13087/src
*** Error code 1

Stop.
make[1]: stopped in /root/kf/squid-3.4.3-20140203-r13087/src
*** Error code 1

Stop.


Re: [squid-users] cache.log Warnings

2014-02-17 Thread Amos Jeffries
On 18/02/2014 9:31 a.m., Scott Mayo wrote:
 Just curious if these are anything that I should really worry about,
 or just need to keep an eye on my log file?
 
 2014/02/16 03:08:01| helperOpenServers: Starting 40/40
 'squid_ldap_auth' processes
 2014/02/16 03:08:01| helperOpenServers: Starting 5/5
 'squid_ldap_group' processes
 2014/02/17 08:59:13| TunnelStateData::Connection::error: FD 747:
 read/write failure: (32) Broken pipe
 2014/02/17 09:46:05| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 10:30:41| squidaio_queue_request: WARNING - Queue congestion
 2014/02/17 11:38:04| squidaio_queue_request: WARNING - Queue congestion

see http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion

Maybe a faster drive would help reduce the queue lengths (and maybe
boost Squid speed). But since there was no noticible slowdown its not
particularly bad.
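The aufs request queue scales with the I/O thread count, which is fixed at build time (the configure output quoted elsewhere in this digest shows "With 63 aufs threads"). One hedged option is rebuilding with more threads; the option name below matches that output, but check ./configure --help for your version, and 128 is only an illustrative value:

```
./configure --enable-storeio=aufs,ufs --with-aufs-threads=128
make && make install
```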

Amos


Re: [squid-users] Re: cache.log Warnings

2014-02-17 Thread Amos Jeffries
On 18/02/2014 1:35 p.m., Carlos Defoe wrote:
 http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion
 
 You're probably using aufs, those messages are normal.
 
 On the restart log, I never saw the AuthUserHashPointer ones, but
 since squid exits and starts normally, I don't think that is a
 problem.

Those lines mean the credentials were still tied to an active
transaction when they were removed from the auth cache. Normal for a
shutdown with active clients at the time.

The long list of "Open FD ..." lines is possibly a worry. It indicates
open connections.

However, that said, most of them are port-443 (HTTPS tunnels) or idle
persistent connections. So not really a problem, just an artifact of the
not-so-nice shutdown process in Squid.

Amos