[squid-users] log_db_daemon - Advice ?

2013-04-06 Thread Roland RoLaNd
Dear all,

I have squid 2.7 stable9 running in transparent mode.
i'd like to store all logs into a mysql database.

i've created the requested squid_log (access_log) with its privileges as
mentioned.
and added the following to my squid.conf:
access_log daemon:/SomePassword squid
according to what i've googled thus far, i need to add a log daemon directive 
that points to logfile-daemon_mysql.pl
though i cannot find that script under these locations:

/etc/squid/*
/usr/local/*
nor under this 
ftp://mirror.aarnet.edu.au/pub/squid/squid/squid-2.7.STABLE9.tar.gz

advice on how to proceed, would be greatly appreciated

PS: i don't have a problem in upgrading to squid3 as this is a lab environment 
before switching to production.

[squid-users] big log files (cache.log) cause squid to go down !!

2013-04-06 Thread Ahmad
hi,
i have a problem with the large log files of squid, such as:
access.log
cache.log
store.log

they grow to sizes like 500 GB, which causes the squid partition to fill up
and squid to go down.

my question is: how do i set a size limit for a log file, and when that size
is exceeded, have the new logs replace the old logs in the same file?
=

here is my squid.conf file :
acl blockkeywords dstdomain /etc/squid3/squid-porn.acl
http_access deny blockkeywords

###
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines
acl mysubnet src xx
http_access allow mysubnet
###
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
##
#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports
###
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
#http_access deny to_localhost
##
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
###
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
##
# And finally deny all other access to this proxy
http_access deny all
###
# Squid normally listens to port 3128
http_port x
http_port 
http_port 1 tproxy
#http_port 
#http_port x tproxy
###
# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?
###
# Uncomment and adjust the following to add a disk cache directory.
cache_dir aufs /cache1 15 32 256
cache_dir aufs /cache2 15 32 256
cache_dir aufs /cache3 15 32 256
#cache_dir ufs /var/spool/squid3 100 16 256
###
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
###
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
###
cache_mem 500 MB
### WCCP2 Config#
wccp2_router x
wccp_version 2
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80
###
dns_nameservers x.x.x.x
cache_effective_user virus

[squid-users] Re: big log files (cache.log) cause squid to go down !!

2013-04-06 Thread localhost3128
Hi,

You can rotate the log files with the squid -k rotate command.
Just add the logfile_rotate directive to your squid.conf.
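
A minimal sketch of how these two fit together (the retention count and the
cron schedule below are assumptions, not values from this thread):

```shell
# In squid.conf: keep 10 rotated generations (access.log.0 .. access.log.9)
#   logfile_rotate 10
#
# Then trigger rotation periodically, e.g. from a daily cron entry:
#   0 0 * * * /usr/sbin/squid -k rotate
#
# or run it by hand; this tells the running squid to close, rename and
# reopen its log files:
/usr/sbin/squid -k rotate
```

Note that logfile_rotate only controls how many old generations are kept; it
is not a size limit, and rotation only happens when squid -k rotate is run.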



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/big-log-files-cache-log-cause-squid-to-get-down-tp4659395p4659396.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: big log files (cache.log) cause squid to go down !!

2013-04-06 Thread Ahmad
hi localhost,


thanks for the reply,

but what if i want to limit the size of the log files:
assume i want a max size of
access.log == 2 GB
cache.log == 2 GB
store.log == 2 GB

and when a file exceeds that size, i want to overwrite the same file.
how can i do that?

is logfile_rotate alone sufficient?

regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/big-log-files-cache-log-cause-squid-to-get-down-tp4659395p4659397.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: big log files (cache.log) cause squid to go down !!

2013-04-06 Thread Helmut Hullen
Hello, Ahmad,

You wrote on 06.04.13:

 but if i want to limit the size of log file :
 assume i want max size of
 access.log =2 G
 cache.log ==2G
 store.log==2G

 and if the files size exceeded , i wan to replace the same file .
 how can i do it ?

 does just  logfile_rotate sufficient ?

What about logrotate? Many distributions use this program for the job.

It may use a file like /etc/logrotate.d/squid.
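
A sketch of what such a file might contain, sized for the 2 GB limit asked
about above (the log path and the squid binary name are assumptions that vary
by distribution):

```shell
# /etc/logrotate.d/squid -- rotate any squid log that exceeds 2 GB
/var/log/squid3/*.log {
    size 2G            # only rotate once the file passes 2 GB
    rotate 1           # keep a single old generation, overwritten each time
    missingok          # don't complain if a log file is absent
    notifempty         # skip empty logs
    postrotate
        # tell the running squid to reopen its (now fresh) log files
        /usr/sbin/squid3 -k rotate
    endscript
}
```

With size-based rotation it helps to run logrotate from cron more often than
daily, since the size is only checked when logrotate runs.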

Best regards!
Helmut


Re: [squid-users] log_db_daemon - Advice ?

2013-04-06 Thread Amos Jeffries

On 6/04/2013 9:01 p.m., Roland RoLaNd wrote:

Dear all,

I have squid 2.7 stable9 running in transparent mode.
i'd like to store all logs into a mysql database.

i've created the requested squid_log (access_log) with its privileges as 
mentioned.
and added the following to my squid.conf:
access_log daemon:/SomePassword squid
according to what i've googled thus far, i need to add a log daemon directive 
that points to logfile-daemon_mysql.pl
though i cannot find that script under these locations:

/etc/squid/*
/usr/local/*
nor under this 
ftp://mirror.aarnet.edu.au/pub/squid/squid/squid-2.7.STABLE9.tar.gz


It is not part of Squid-2. The DB daemon was only added in squid-3.3, 
and the usage differs a bit.


That said, you can build the squid-3.3 sources and run the daemon helper 
it builds under 2.7 if you want to.
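
For reference, the 3.3-style configuration looks roughly like the sketch
below. Both values are illustrative placeholders, not values from this
thread; the exact format of the daemon: argument is defined by the
log_db_daemon helper itself, so check the documentation shipped with your
build:

```shell
# squid.conf sketch for the squid-3.3 database logging helper
# (helper path and connection details below are placeholders)
logfile_daemon /usr/lib/squid/log_db_daemon
access_log daemon:/connection-details-here squid
```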



advice on how to proceed, would be greatly appreciated

PS: i don't have a problem in upgrading to squid3 as this is a lab environment 
before switching to production.  


It really is well past time to attempt that switch.


Amos


[squid-users] Local Squid to Reverse Squid to keyserver.ubuntu.com

2013-04-06 Thread Christopher H. Laco
I'm currently in the process of testing some software installs behind
a proxy and ran into something I don't quite understand.
While running behind a Squid proxy, apt-key calls were failing to
process key requests. The same requests run fine directly connected to
the internet.

From a machine directly connected to the internet:

/usr/bin/apt-key adv --keyserver hkp://keyserver.ubuntu.com:80
--recv-keys 765C5E49F87CBDE0
Executing: gpg --ignore-time-conflict --no-options
--no-default-keyring --secret-keyring /tmp/tmp.5JakWGN817
--trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg
--primary-keyring /etc/apt/trusted.gpg --keyserver
hkp://keyserver.ubuntu.com:80 --recv-keys 765C5E49F87CBDE0
gpg: requesting key F87CBDE0 from hkp server keyserver.ubuntu.com
gpg: key F87CBDE0: RCB Build rcb-dep...@lists.rackspace.com not changed
gpg: Total number processed: 1
gpg:  unchanged: 1



From a machine with proxy-only access, http_proxy set in
/etc/environment and sudoers env_keep:

/usr/bin/apt-key adv --keyserver hkp://keyserver.ubuntu.com:80
--recv-keys 765C5E49F87CBDE0
Executing: gpg --ignore-time-conflict --no-options
--no-default-keyring --secret-keyring /tmp/tmp.9YzGDcyc3K
--trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg
--primary-keyring /etc/apt/trusted.gpg --keyserver
hkp://keyserver.ubuntu.com:80 --recv-keys 765C5E49F87CBDE0
gpg: requesting key F87CBDE0 from hkp server keyserver.ubuntu.com
gpgkeys: key 765C5E49F87CBDE0 not found on keyserver
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0


And in the access.log on the proxy:

1365271816.885    294 10.10.10.20 TCP_MISS/403 3827 GET
http://keyserver.ubuntu.com/pks/lookup?op=get&options=mr&search=0xF87CBDE0
- DIRECT/91.189.89.49 text/html


I've further distilled this down to a simple curl statement against
keyserver.ubuntu.com, removing apt-key and gpg from the picture:

curl -v http://keyserver.ubuntu.com
...output here is the squid access denied error message page


To be clear, at this point, this same server behind squid CAN access
other resources on the internet:

curl -v ubuntu.com
* About to connect() to proxy 10.10.10.10 port 3128 (#0)
*   Trying 10.10.10.10... connected
> GET HTTP://ubuntu.com HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: ubuntu.com
> Accept: */*
> Proxy-Connection: Keep-Alive
* HTTP 1.0, assume close after body
< HTTP/1.0 301 Moved Permanently
< Date: Sat, 06 Apr 2013 18:41:26 GMT
< Server: Apache/2.2.14 (Ubuntu)
< Location: http://www.ubuntu.com/
< Content-Length: 306
< Content-Type: text/html; charset=iso-8859-1
< X-Cache-Lookup: MISS from localhost:3128
< Via: 1.0 localhost (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
...output here is the rest of the / page...


And in the proxy access.log:

1365272030.197    161 10.10.10.20 TCP_MISS/403 3814 GET
http://keyserver.ubuntu.com/ - DIRECT/91.189.90.55 text/html


Now, on a machine with direct access to the internet:

curl -v http://keyserver.ubuntu.com
* About to connect() to keyserver.ubuntu.com port 80 (#0)
*   Trying 91.189.89.49... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: keyserver.ubuntu.com
> Accept: */*

* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Sat, 06 Apr 2013 18:16:49 GMT
< Server: sks_www/1.1.4
< Cache-Control: no-cache
< Pragma: no-cache
< Expires: 0
< Content-Length: 1417
< Content-Type: text/html; charset=UTF-8
< X-Cache: MISS from localhost
< X-Cache-Lookup: MISS from localhost:11371
< Via: 1.0 localhost (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive
...remaining content of the actual / page on the keyserver...


Just for fun, if I connect through a local TinyProxy instead of Squid, it works:

curl -v http://keyserver.ubuntu.com
* About to connect() to proxy 10.10.10.10 port  (#0)
*   Trying 10.10.10.10... connected
> GET http://keyserver.ubuntu.com HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: keyserver.ubuntu.com
> Accept: */*
> Proxy-Connection: Keep-Alive
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Via: 1.0 localhost (squid/3.1.19), 1.1 tinyproxy (tinyproxy/1.8.3)
< Content-Length: 1417
< Expires: 0
< Server: sks_www/1.1.4
< Date: Sat, 06 Apr 2013 18:33:46 GMT
< X-Cache-Lookup: MISS from localhost:11371
< X-Cache: MISS from localhost
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Type: text/html; charset=UTF-8
...remaining content of the actual / page on the keyserver...


I'm not an http/proxy guru. What I think is happening is that the
local Squid proxy receives the X-Cache MISS from upstream, and simply
halts the response as if it were a peer.

Can someone shed some insight on this? I always thought that a reverse
proxy should never expose its X-Cache headers.

[squid-users] Re: Local Squid to Reverse Squid to keyserver.ubuntu.com

2013-04-06 Thread claco
Some followup information.

The local proxy server was the 3.1.19 package under Ubuntu 12.04.
Just for fun, I compiled 3.3.3 and ran that instead. At that point, things
worked as expected:

curl -v http://keyserver.ubuntu.com
* About to connect() to proxy 10.10.10.10 port 3128 (#0)
*   Trying 10.10.10.10... connected
> GET http://keyserver.ubuntu.com HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: keyserver.ubuntu.com
> Accept: */*
> Proxy-Connection: Keep-Alive

< HTTP/1.1 200 OK
< Date: Sun, 07 Apr 2013 00:14:43 GMT
< Server: sks_www/1.1.4
< Cache-Control: no-cache
< Pragma: no-cache
< Expires: 0
< Content-Length: 1417
< Content-Type: text/html; charset=UTF-8
< X-Cache: MISS from localhost
< X-Cache-Lookup: MISS from localhost:11371
< X-Cache: MISS from proxy
< Via: 1.0 localhost (squid/3.1.19), 1.1 proxy (squid/3.3.3)
< Connection: keep-alive


And the original apt-key call:

/usr/bin/apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys
765C5E49F87CBDE0
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring
--secret-keyring /tmp/tmp.tEx1FQBbkI --trustdb-name /etc/apt/trustdb.gpg
--keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg
--keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 765C5E49F87CBDE0
gpg: requesting key F87CBDE0 from hkp server keyserver.ubuntu.com
gpg: key F87CBDE0: RCB Build rcb-dep...@lists.rackspace.com not changed
gpg: Total number processed: 1
gpg:  unchanged: 1


I'd still like to understand the original failure under 3.1.19. "Upgrade
your proxy" isn't the greatest answer for the next person who stumbles across
this issue.

Thanks!
-=Chris



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Local-Squid-to-Reverse-Squid-to-keyserver-ubuntu-com-tp4659400p4659401.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] yahoo messenger

2013-04-06 Thread Prathyush
Hi,
Yes, i did already

On Fri, Apr 5, 2013 at 2:43 PM, Nishant Sharma codemarau...@gmail.com wrote:

 On Apr 5, 2013 7:39 AM, Prathyush prathyus...@gmail.com wrote:
 allowed but still not working .
 Set the connection method through proxy in yahoo and in ie as well

 Did you add the yahoo messenger ports to the list of safe_ports? Long back
 it used to be 5050, but not sure about current status.

 Regards,
 Nishant

 --
 Regards,
 Prathyush



-- 
Regards,
Prathyush