[squid-users] Install Squid 3.3.10 on Slackware 14

2013-11-12 Thread Vukovic Ivan
Hello,

I need help to ./configure, make and install Squid 3.3.10 on Slackware
14.0. I installed Slackware 14 with these packages:

aaa_base-14.0-i486-5
aaa_elflibs-14.0-i486-4
acl-2.2.51-i486-1
attr-2.4.46-i486-1
autoconf-2.69-noarch-1
automake-1.11.5-noarch-1
bash-4.2.037-i486-1
bin-11.1-i486-1
bind-9.9.1_P3-i486-1
binutils-2.22.52.0.2-i486-2
bison-2.5.1-i486-1
bzip2-1.0.6-i486-1
clisp-2.49-i486-1
coreutils-8.19-i486-1
cxxlibs-6.0.17-i486-1
db42-4.2.52-i486-3
db44-4.4.20-i486-3
db48-4.8.30-i486-2
dcron-4.5-i486-4
devs-2.3.1-noarch-25
dialog-1.1_20100428-i486-2
diffutils-3.2-i486-1
e2fsprogs-1.42.6-i486-1
elvis-2.2_0-i486-2
etc-14.0-i486-1
expat-2.0.1-i486-2
findutils-4.4.2-i486-1
floppy-5.4-i386-3
gawk-3.1.8-i486-1
gcc-4.7.1-i486-1
gcc-g++-4.7.1-i486-1
gdbm-1.8.3-i486-4
gettext-0.18.1.1-i486-3
gettext-tools-0.18.1.1-i486-3
glib-1.2.10-i486-3
glib2-2.32.4-i486-1
glibc-2.15-i486-7
glibc-i18n-2.15-i486-7
glibc-solibs-2.15-i486-7
glibc-zoneinfo-2012f_2012f-noarch-7
gpm-1.20.1-i486-5
grep-2.14-i486-1
groff-1.21-i486-1
guile-1.8.8-i486-1
gzip-1.5-i486-1
hdparm-9.37-i486-1
infozip-6.0-i486-1
iproute2-3.4.0-i486-2
iptables-1.4.14-i486-1
joe-3.7-i486-1
kbd-1.15.3-i486-2
kernel-firmware-20120804git-noarch-1
kernel-headers-3.2.29_smp-x86-1
kernel-huge-3.2.29-i486-1
kernel-modules-3.2.29-i486-1
kmod-9-i486-3
less-451-i486-1
libexif-0.6.21-i486-1
libpcap-1.3.0-i486-1
libpng-1.4.12-i486-1
libtermcap-1.2.3-i486-7
libtool-2.4.2-i486-1
libxml2-2.8.0-i486-1
libxslt-1.1.26-i486-2
lilo-23.2-i486-3
links-2.7-i486-1
logrotate-3.8.2-i486-1
lsof-4.83-i486-1
m4-1.4.16-i486-1
make-3.82-i486-3
man-1.6g-i486-1
man-pages-3.41-noarch-1
mhash-0.9.9.9-i486-3
mkinitrd-1.4.7-i486-6
ncftp-3.2.5-i486-1
ncurses-5.9-i486-1
net-tools-1.60.20120726git-i486-1
netwatch-1.3.0-i486-1
network-scripts-14.00-noarch-3
openssh-6.1p1-i486-1
openssl-1.0.1c-i486-3
openssl-solibs-1.0.1c-i486-3
pciutils-3.1.9-i486-1
perl-5.16.1-i486-1
pkg-config-0.25-i486-1
pkgtools-14.0-noarch-2
popt-1.7-i486-3
procps-3.2.8-i486-3
readline-5.2-i486-4
samba-3.6.8-i486-1
screen-4.0.3-i486-3
sed-4.2.1-i486-1
shadow-4.1.4.3-i486-7
slocate-3.1-i486-4
strace-4.5.20-i486-1
sysklogd-1.5-i486-1
sysvinit-2.88dsf-i486-2
sysvinit-scripts-2.0-noarch-13
tar-1.26-i486-1
tcpdump-4.3.0-i486-1
texinfo-4.13a-i486-4
time-1.7-i486-1
traceroute-2.0.18-i486-1
tree-1.6.0-i486-1
udev-182-i486-5
util-linux-2.21.2-i486-5
vim-7.3.645-i486-1
wget-1.14-i486-1
whois-5.0.15-i486-1
zlib-1.2.6-i486-1
zsh-5.0.0-i486-1


I can boot and the installation is OK.
Now I want to install Squid 3.3.10 on this Slackware 14 installation, but every
time I run the ./configure command, this error comes up:

checking whether the C compiler works... no
gcc -v: command unrecognized
gcc -qversion: command unrecognized

But, and here is the point: when I install Slackware 14 full (with all
packages), then I can ./configure, make and install Squid 3.3.10 without any
problem.

So, which package of Slackware 14 is missing to ./configure, make and install
Squid 3.3.10? Here is the list of all packages included in Slackware 14:
http://mirror.netcologne.de/slackware/slackware-14.0/PACKAGES.TXT

Please help me get the Squid install process working. Thanks!
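When ./configure reports that the C compiler does not work, the exact commands it ran and their errors are recorded in config.log in the build directory, which usually names the missing piece directly. A small helper sketch for pulling out the first recorded failure (the grep pattern is a heuristic of mine, not part of autoconf):

```shell
# find_failed_check FILE: print the first compiler failure recorded in a
# configure log, with a little surrounding context.
find_failed_check() {
  grep -n -B 2 -A 6 -e 'command not found' -e 'error' "$1" | head -n 40
}
```

On a minimal Slackware 14 installation, an "error while loading shared libraries" in that output would point at a library the gcc binary links against; likely candidates (an assumption, not verified against this package list) are gmp, mpfr, and libmpc from the l/ series, none of which appear in the list above.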


Mit freundlichen Grüssen
Ivan Vukovic
Abteilung Informatik-Dienste
--
Schlatter Industries AG
Brandstrasse 24
CH-8952 Schlieren
Tel. +41 44 732 7111
Direct +41 44 732 7495
Fax +41 44 732 45 00
Email: ivan.vuko...@schlattergroup.com
Internet www.schlattergroup.com





[squid-users] File download fails through transparent Squid

2012-07-31 Thread Ivan Botnar
Hello,

I have Squid 3.1.19 installed on Ubuntu 12.04 x86_64 from packages. I
need a transparent proxy without a disk cache for users working over
Wi-Fi on non-Windows (mostly Apple) devices. I performed the
configuration and Squid works for web surfing and streaming, but
I’m experiencing issues with file downloads. Basically, every
download that lasts more than 10 seconds fails with an error. I've been
looking into logs, debug output, and tcpdump, but no luck.

Here’s my Squid:

# squid3 -v
Squid Cache: Version 3.1.19 configure options:
'--build=x86_64-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc'
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
'--srcdir=.' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
'--mandir=/usr/share/man' '--with-cppunit-basedir=/usr'
'--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--enable-wccpv2'
'--disable-translation' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy'
'--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g
-O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Wformat-security -Werror=format-security'
'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now'
'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE
-fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
-Werror=format-security' --with-squid=/build/buildd/squid3-3.1.19

iptables forwards everything from port 80 to Squid on port 3128:

-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

Here’s my config:

acl my_networks src 192.168.110.0/24 10.21.40.0/24 10.20.40.0/24
192.168.109.0/24
cache deny all
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow my_networks
http_access deny all
cache_store_log /dev/null
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320
httpd_suppress_version_string On
error_directory /usr/share/squid-langpack/en

Here are the last couple of records I see in the debug log:
2012/07/30 18:06:27.339| clientReplyContext::sendMoreData:
http://mirror.cst.temple.edu/opensuse/distribution/12.1/iso/openSUSE-12.1-DVD-x86_64.iso,
7131720 bytes (4096 new bytes)
2012/07/30 18:06:27.339| clientReplyContext::sendMoreData: FD 213
'http://mirror.cst.temple.edu/opensuse/distribution/12.1/iso/openSUSE-12.1-DVD-x86_64.iso'
out.offset=7127299
2012/07/30 18:06:27.339| clientStreamCallback: Calling 1 with cbdata
0x7fe40c97bc30 from node 0x7fe40c890008
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c35ff98
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=2
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=3
2012/07/30 18:06:27.339| The AsyncCall clientWriteBodyComplete
constructed, this=0x7fe40c34b630 [call778693]
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=4
2012/07/30 18:06:27.339| cbdataUnlock: 0x7fe40c97abc8=3
2012/07/30 18:06:27.339| cbdataUnlock: 0x7fe40c97abc8=2
2012


[squid-users] failed http redirection

2011-10-23 Thread Ivan Matala
Hello, this is my rule:

iptables -t nat -A PREROUTING -i tun0 -p tcp -m tcp --match multiport
--dports 80 -j DNAT --to-destination 118.67.78.136:80

What I'm trying to do is redirect all HTTP requests to a foreign proxy,
but it fails.
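When the DNAT target is on a different host, two companion pieces are commonly needed (an assumption about the setup, not stated in the post): forwarding must be enabled, and replies from the remote proxy must route back through the NAT box, e.g. via MASQUERADE:

```shell
# Rule from the post: rewrite inbound port-80 traffic to the remote proxy.
iptables -t nat -A PREROUTING -i tun0 -p tcp -m multiport --dports 80 \
  -j DNAT --to-destination 118.67.78.136:80
# Make replies come back through this box so conntrack can un-NAT them:
iptables -t nat -A POSTROUTING -p tcp -d 118.67.78.136 --dport 80 -j MASQUERADE
# And make sure forwarding is on:
sysctl -w net.ipv4.ip_forward=1
```

Note also that the remote proxy must be configured to accept intercepted (origin-form) requests on port 80; a plain forward proxy will reject them.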

thanks


[squid-users] Tutorial for Squid Splash Page

2011-10-02 Thread Ivan Matala
Hello guys, do you have any idea whether it is possible to display a splash
page to Squid proxy users? I would like it displayed at some specific
interval. Also, can we put up a license agreement which they have to accept
(press Yes) in order to browse any website? Thank you, Squid users.

Kindly include your ideas or tutorials. Thank you.
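For what it's worth, the usual approach is Squid's external session helper plus deny_info: the first request of a new session is denied and redirected to the splash/agreement page. A squid.conf sketch, assuming a 3.x build with the bundled helper (paths, TTLs, and the splash URL are placeholders; the helper is named squid_session in older 3.x releases, ext_session_acl later):

```
external_acl_type session ttl=300 negative_ttl=0 %SRC \
    /usr/lib/squid/ext_session_acl -t 7200
acl existing_session external session
http_access deny !existing_session
deny_info http://proxy.local/splash.html existing_session
```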


[squid-users] squid slow

2011-06-21 Thread Ivan Matala
I notice Squid is slowing down; browsing goes very slow, really slow.
Is there any way to boost it? How can I tweak the settings? Thanks.


[squid-users] Re: Read error Squid v2.6.stable21 www.microsofthup.com

2011-06-20 Thread Ivan .
Hi

Can you post this so I can get some feedback on whether people are
experiencing issues accessing the site via squid? thanks

The setup is

User>Squid-->http://www.microsofthup.com

The error is

++
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://www.microsofthup.com/hupus/chooser.aspx?
The following error was encountered:

   Read Error

The system returned:
   (104) Connection reset by peer
An error condition occurred while reading data from the network.
Please retry your request.
Your cache administrator is root.
Generated Fri, 17 Jun 2011 00:03:18 GMT by proxy.fqdn.com (squid/2.6.STABLE21)
++

in the access.log I see the site is load balanced


Re: [squid-users] squid SSL

2011-06-17 Thread Ivan Matala
This is what I want to achieve:

I have a server and I want all ports to be forwarded to a remote Squid
proxy, both UDP and TCP, ports 1 through 65535. Is that possible?

This means all Yahoo Messenger traffic, games, and Skype would be
forwarded to Squid.

thanks

On Fri, Jun 17, 2011 at 8:27 AM, Amos Jeffries  wrote:
> On 18/06/11 02:33, Ivan Matala wrote:
>>
>> how can i configure squid SSL?
>>
>> coz when i go to gmail.com, facebook.com, their require ssl support. i
>> got ssl error.
>>
>> pls help
>>
>> what should i do?
>
> You should start by telling us what the error is please.
>
> Note that HTTPS is by default relayed directly over Squid without being
> touched. So the error should be something in your browser or the website its
> contacting.
>  The error message will help us point you at what more to look at.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.8 and 3.1.12.2
>


[squid-users] squid SSL

2011-06-17 Thread Ivan Matala
How can I configure Squid for SSL?

When I go to gmail.com or facebook.com, they require SSL support, and I
get an SSL error.

Please help. What should I do?


[squid-users] yahoo messenger cant connect

2011-06-17 Thread Ivan Matala
Hello, I installed Squid (default config, didn't change anything) and
web browsing is OK, but when I connect to Yahoo Messenger, it doesn't
work. Please help.


Re: [squid-users] Read error Squid v2.6.stable21 www.microsofthup.com

2011-06-16 Thread Ivan .
Hi Amos

I obfuscated the fully qualified name of my proxy, so yes, it is
definitely from my proxy.

"Clearswift SECURE Web Gateway" is the internal proxy system which
chains to the Squid

CSwebGW>Squid-->http://www.microsofthup.com

I have tried a direct client to the Squid and had the same issues:

User>Squid-->http://www.microsofthup.com

thanks
Ivan

On Fri, Jun 17, 2011 at 11:49 AM, Amos Jeffries  wrote:
> On 17/06/11 11:54, Ivan . wrote:
>>
>> Hi,
>>
>> I am having a issue accessing a MS site, which is actually hosted via
>> Digitalriver content network, and via tcpdump I can see allot of
>> redirects, 301 perm moved etc. So I am wondering if someone can try
>> via their squid setup, or if any has any ideas what is at play.
>>
>> No issues when going to the site direct
>>
>> http://www.microsofthup.com/
>>
>> Seem to have some load balancers at work  "
>> BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;"
>>
>> Some greps from the access log
>>
>> 1308202767.527    746 10.xxx.xxx.xxx TCP_MISS/301 728 GET
>> http://microsofthup.com/ - DIRECT/209.87.184.136 text/html [Accept:
>> image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
>> application/x-shockwave-flash, application/x-ms-application,
>> application/x-ms-xbap, application/vnd.ms-xpsdocument,
>> application/xaml+xml, application/vnd.ms-excel,
>> application/vnd.ms-powerpoint, application/msword,
>> */*\r\nAccept-Language: en-au\r\nUA-CPU: x86\r\nUser-Agent:
>> Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322;
>> .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET
>> CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C;
>> .NET4.0E)\r\nHost: microsofthup.com\r\nCookie:
>> op390chooserhomedefaultpagesgum=a0o51kr1uy275pp0hk3dqa2766p06y38i56d7;
>>
>> fcP=C=2&T=1307338680214&DTO=1306305944285&U=717377145&V=1307338680214\r\nVia:
>> 1.0 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 301 Moved
>> Permanently\r\nDate: Thu, 16 Jun 2011 05:16:24 GMT\r\nContent-Length:
>> 191\r\nContent-Type: text/html\r\nCache-Control:
>> no-cache\r\nConnection: keep-alive\r\nProxy-Connection:
>> keep-alive\r\nServer: Microsoft-IIS/6.0\r\nPragma:
>> no-cache\r\nLocation:
>> http://www.microsofthup.com/hupus/chooser.aspx\r\nVia: 1.1
>> dc1c5cache01 (NetCache NetApp/6.0.3)\r\nSet-Cookie:
>> BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;
>> path=/\r\n\r]
>>
>> 1308202718.027    231 10.xxx.xxx.xxx TCP_MISS/200 2031 GET
>>
>> http://c5.img.digitalriver.com/gtimages/store-mc-uri/mshup/assets/local//en-US/css/style.css
>> - DIRECT/122.252.43.91 text/css [Host:
>> c5.img.digitalriver.com\r\nUser-Agent: Mozilla/5.0 (Windows; U;
>> Windows NT 5.1; en-GB; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2 (.NET
>> CLR 3.5.30729)\r\nAccept: text/css,*/*;q=0.1\r\nAccept-Language:
>> en-gb,en;q=0.5\r\nAccept-Charset:
>> ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nReferer:
>>
>> http://www.microsofthup.com/hupus/chooser.aspx?culture=en-US&resID=TfmRcwoHArEAADX4AfIb\r\nVia:
>> 1.1 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 200
>> OK\r\nContent-Type: text/css\r\nLast-Modified: Fri, 17 Dec 2010
>> 09:15:43 GMT\r\nETag: "8089fcfaca9dcb1:384"\r\nServer:
>> Microsoft-IIS/6.0\r\nX-Server-Name: dc1c5web07\r\nP3P: CP="CAO DSP
>> TAIa OUR IND PHY ONL UNI PUR COM NAV INT DEM CNT STA PRE
>> LOC"\r\nX-Powered-By: ASP.NET\r\nDate: Thu, 16 Jun 2011 05:15:34
>> GMT\r\nContent-Length: 1531\r\nConnection: keep-alive\r\n\r]
>>
>
> Both of those are successful transfers through your Squid.
>
>>
>> This is the squid error
>>
>>
>> ++
>> ERROR
>> The requested URL could not be retrieved
>> While trying to retrieve the URL:
>> http://www.microsofthup.com/hupus/chooser.aspx?
>> The following error was encountered:
>>
>> Read Error
>>
>> The system returned:
>>     (104) Connection reset by peer
>> An error condition occurred while reading data from the network.
>> Please retry your request.
>> Your cache administrator is root.
>> Generated Fri, 17 Jun 2011 00:03:18 GMT by proxy (squid/2.6.STABLE21)
>
> Are you sure this is being generated by your proxy?
>  "proxy" is not a FQDN indicating ownership, so it is a bit hard to tell who
> it belongs to.
>  "root" is an ambiguous email address, so good luck getting in touch with
> whoever runs it to report the problem.
>
>
> Your log indicates requests/replies are coming through a proxy with domain
> name "Clearswift SECURE Web Gateway" which is clearly also not available in
> DNS. So it could be broken in any number of other ways than just its FQDN
> hostname.
>
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.8 and 3.1.12.2
>


[squid-users] Read error Squid v2.6.stable21 www.microsofthup.com

2011-06-16 Thread Ivan .
Hi,

I am having an issue accessing an MS site, which is actually hosted on
the Digitalriver content network, and via tcpdump I can see a lot of
redirects, 301 permanently moved, etc. So I am wondering if someone can
try via their Squid setup, or if anyone has ideas about what is at play.

No issues when going to the site direct

http://www.microsofthup.com/

There seem to be some load balancers at work: "
BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;"

Some greps from the access log

1308202767.527    746 10.xxx.xxx.xxx TCP_MISS/301 728 GET
http://microsofthup.com/ - DIRECT/209.87.184.136 text/html [Accept:
image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
application/x-shockwave-flash, application/x-ms-application,
application/x-ms-xbap, application/vnd.ms-xpsdocument,
application/xaml+xml, application/vnd.ms-excel,
application/vnd.ms-powerpoint, application/msword,
*/*\r\nAccept-Language: en-au\r\nUA-CPU: x86\r\nUser-Agent:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322;
.NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET
CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C;
.NET4.0E)\r\nHost: microsofthup.com\r\nCookie:
op390chooserhomedefaultpagesgum=a0o51kr1uy275pp0hk3dqa2766p06y38i56d7;
fcP=C=2&T=1307338680214&DTO=1306305944285&U=717377145&V=1307338680214\r\nVia:
1.0 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 301 Moved
Permanently\r\nDate: Thu, 16 Jun 2011 05:16:24 GMT\r\nContent-Length:
191\r\nContent-Type: text/html\r\nCache-Control:
no-cache\r\nConnection: keep-alive\r\nProxy-Connection:
keep-alive\r\nServer: Microsoft-IIS/6.0\r\nPragma:
no-cache\r\nLocation:
http://www.microsofthup.com/hupus/chooser.aspx\r\nVia: 1.1
dc1c5cache01 (NetCache NetApp/6.0.3)\r\nSet-Cookie:
BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;
path=/\r\n\r]

1308202718.027    231 10.xxx.xxx.xxx TCP_MISS/200 2031 GET
http://c5.img.digitalriver.com/gtimages/store-mc-uri/mshup/assets/local//en-US/css/style.css
- DIRECT/122.252.43.91 text/css [Host:
c5.img.digitalriver.com\r\nUser-Agent: Mozilla/5.0 (Windows; U;
Windows NT 5.1; en-GB; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2 (.NET
CLR 3.5.30729)\r\nAccept: text/css,*/*;q=0.1\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nReferer:
http://www.microsofthup.com/hupus/chooser.aspx?culture=en-US&resID=TfmRcwoHArEAADX4AfIb\r\nVia:
1.1 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 200
OK\r\nContent-Type: text/css\r\nLast-Modified: Fri, 17 Dec 2010
09:15:43 GMT\r\nETag: "8089fcfaca9dcb1:384"\r\nServer:
Microsoft-IIS/6.0\r\nX-Server-Name: dc1c5web07\r\nP3P: CP="CAO DSP
TAIa OUR IND PHY ONL UNI PUR COM NAV INT DEM CNT STA PRE
LOC"\r\nX-Powered-By: ASP.NET\r\nDate: Thu, 16 Jun 2011 05:15:34
GMT\r\nContent-Length: 1531\r\nConnection: keep-alive\r\n\r]


This is the squid error

++
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://www.microsofthup.com/hupus/chooser.aspx?
The following error was encountered:

Read Error

The system returned:
    (104) Connection reset by peer
An error condition occurred while reading data from the network.
Please retry your request.
Your cache administrator is root.
Generated Fri, 17 Jun 2011 00:03:18 GMT by proxy (squid/2.6.STABLE21)
++


Re: [squid-users] Vary the bandwidth to stream

2011-04-26 Thread Ivan Maldonado Zambrano
Good day Rogelio,

Yes, it uses streaming over the web. To explain: on the device I create a
node (/dev/camera) into which I place the video to be sent, and via Live555
the information is sent over the web. The device has a fixed IP (server),
and through an application written in Qt/VLC I request the video (client).

Could you give me some information about delay_pools and delay_access?
That is, I am using Fedora, and I read on a page that to configure this
kind of feature it is necessary to install Squid manually rather than
using "yum install squid" (in my case).

Regards and thanks in advance
Iván Maldonado Zambrano


On Tue, 2011-04-26 at 16:17 -0500, Rogelio Sevilla Fernandez wrote:
> Hi Ivan,
> 
> Just to verify: does your Videotrak system use streaming over the web? If
> so, Squid with the use of delay_pools / delay_access could be your
> solution.
> 
> 
> 
> Ivan Maldonado Zambrano  wrote:
> 
> > Hi all,
> >
> > I'm new in Squid and I'd like to know if Squid can help me to solve this
> > problem:
> >
> > I'm Videotrak developer (Peek Traffic product) and I'm trying to vary my
> > bandwidth to simulate that my device is located in a remote location.
> > I'm streaming video from my videotrak device to a Linux PC (Fedora
> > distribution). I don't want to block streaming with Squid, I just want
> > to control/vary my bandwidth between Videotrak and PC (local network).
> >
> > Regards and thanks in advanced
> > Iván Maldonado Zambrano
> >
> >
> > --
> > This message has been scanned by the MailScanner of the
> > Government of the State of Colima for viruses and other
> > dangerous content, and is considered to be clean.
> >
> >
> 
> 
> 




[squid-users] Vary the bandwidth to stream

2011-04-26 Thread Ivan Maldonado Zambrano
Hi all,

I'm new to Squid and I'd like to know if Squid can help me solve this
problem:

I'm a Videotrak developer (Peek Traffic product) and I'm trying to vary my
bandwidth to simulate that my device is located in a remote location.
I'm streaming video from my Videotrak device to a Linux PC (Fedora
distribution). I don't want to block streaming with Squid; I just want
to control/vary my bandwidth between the Videotrak and the PC (local network).

Regards and thanks in advance
Iván Maldonado Zambrano
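Squid's delay pools can rate-limit this kind of traffic; a minimal squid.conf sketch (the client network and rate values are illustrative, not from the post):

```
# One class-1 pool: a single aggregate bucket for matching clients.
acl videotrak src 192.168.0.0/24     # illustrative client network
delay_pools 1
delay_class 1 1
# aggregate restore/max: roughly 64 KB/s sustained, 64 KB burst
delay_parameters 1 65536/65536
delay_access 1 allow videotrak
delay_access 1 deny all
```

Note this only shapes traffic that actually passes through Squid as HTTP; a stream delivered over RTSP (as Live555 often is) would bypass the proxy entirely.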



Re: [squid-users] File Descriptors

2010-07-05 Thread Ivan .
I used this how-to, and it did not require a recompile:

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

cheers
Ivan

On Tue, Jul 6, 2010 at 1:43 PM, Mellem, Dan  wrote:
>
> Did you set the limit before you compiled it? The upper limit is set at 
> compile time. I ran into this problem myself.
>
> -Dan
>
>
> -Original Message-
> From:   Superted666 [mailto:ruckafe...@gmail.com]
> Sent:   Mon 7/5/2010 3:33 PM
> To:     squid-users@squid-cache.org
> Cc:
> Subject:        [squid-users] File Descriptors
>
>
> Hello,
>
> Got an odd problem with file descriptors I'm hoping you guys could help me
> out with.
>
> Background
>
> I'm running CentOS 5.5 and squid 3.0 Stable 5.
> The system is configured with 4096 file descriptors with the following :
>
> /etc/security/limits.conf
> *                -       nofile          4096
> /etc/sysctl.conf
> fs.file-max = 4096
>
> Also /etc/init.d/squid has ulimit -HSn 4096 at the start.
>
> Problem
>
> Running ulimit -n on the box does indeed show 4096 descriptors, but Squid
> states it is using 1024 despite the above. I noticed this because I'm
> starting to get warnings in the logs about file descriptors...
>
> Any help greatly appreciated.
>
> Thanks
>
> Ed
>
> Ed
> --
> View this message in context: 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
>
>
>
>


Re: [squid-users] Increasing File Descriptors

2010-05-06 Thread Ivan .
worked for me

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

no recompile necessary
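The general no-recompile technique (a sketch of the approach from memory, not a quote of the linked page) is to raise the OS limit in the environment that starts Squid and then tell Squid how many descriptors to use:

```
# In the init script (or equivalent), before starting squid:
ulimit -HSn 8192

# In squid.conf (Squid 3.x):
max_filedescriptors 8192
```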


On Thu, May 6, 2010 at 7:13 PM, Bradley, Stephen W. Mr.
 wrote:
> I can't seem to increase the number above 32768 no matter what I do.
>
> Ulimit during compile, sysctl.conf and everything else but no luck.
>
>
> I have about 5,000 users on a 400mbit connection.
>
> Steve
>
> RHEL5 64bit with Squid 3.1.1


[squid-users] TIME_WAIT state

2010-05-03 Thread Ivan .
Hi

I see a lot of TIME_WAIT states when I run netstat -n.

I imagine that this points to some tcp parameters not quite tuned correctly.

Does anyone have some kernel TCP tuning parameters for a Squid proxy
running on RHEL 5 pushing around 30 Mb/s?
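Many TIME_WAIT entries are normal for a busy proxy; they only become a problem if the ephemeral port range is exhausted. Commonly suggested knobs, as a sketch (illustrative values, not a tested recommendation for this box):

```shell
# Widen the local port range so TIME_WAIT sockets don't starve new connections
sysctl -w net.ipv4.ip_local_port_range="10240 65535"
# Allow safe reuse of TIME_WAIT sockets for outgoing connections
sysctl -w net.ipv4.tcp_tw_reuse=1
# Shorten the FIN-WAIT-2 hold time
sysctl -w net.ipv4.tcp_fin_timeout=30
```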


Thanks
Ivan


Re: [squid-users] Proxy performance monitoring

2010-05-01 Thread Ivan .
not necessarily a test lab setup, but something that sits on a client
machine, pulls down some static content at regular intervals, and then
reports on the performance.

What I am trying to do is simulate the client experience, so to speak.

cheers
Ivan

On Sun, May 2, 2010 at 12:52 PM, Amos Jeffries  wrote:
> Ivan . wrote:
>>
>> thanks
>>
>> What am I looking for is something more along these lines.
>>
>>
>> http://www.webperformanceinc.com/library/files/proxy_server_performance.pdf
>>
>>
>> cheers
>> Ivan
>
> Oh.
>
> That paper describes requirements for a lab test. The good test software;
> polygraph etc, have not changed AFAIK so go with those mentioned if you want
> to.
>
> What your initial email seemed to describe was for monitoring live
> production installation performance.
>
> Be aware these are very different. Throwing lab data at a production server
> to a real remote web service is a very quick way to get yourself a huge
> bandwidth bill and annoyed phone calls.
>
> Amos
>
>
>>
>> On Sun, May 2, 2010 at 12:04 PM, Amos Jeffries 
>> wrote:
>>>
>>> Ivan . wrote:
>>>>
>>>> Hi
>>>>
>>>> I recently implemented a new proxy system. I am looking at doing is
>>>> setting a periodical test that
>>>> goes out to the Internet, pull some content down and record the
>>>> relevant metrics.
>>>>
>>>>
>>>>
>>>>
>>>> PC>Proxy--->FW-->Internet>Site-with-content
>>>>
>>>>
>>>> Some sort of scheduled process on a PC, that pulls down some static
>>>> content from the same website, which is repeatable. The application
>>>> would then record metrics such as speed, time taken to download the
>>>> static content and log that.
>>>>
>>> Squid native access.log contains transfer duration and size metrics.
>>> Some other options not in the default format provide additional metrics
>>> if
>>> you need them.
>>>  See http://www.squid-cache.org/Doc/config/logformat/ for a list of log
>>> metrics.
>>>
>>> Otherwise the SNMP counters can be used, but they do not go down as fine
>>> grained as indvidual requests.
>>>
>>> Amos
>>> --
>>> Please be using
>>>  Current Stable Squid 2.7.STABLE9 or 3.1.1
>>>
>
>
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.1
>


Re: [squid-users] Proxy performance monitoring

2010-05-01 Thread Ivan .
thanks

What I am looking for is something more along these lines:

http://www.webperformanceinc.com/library/files/proxy_server_performance.pdf


cheers
Ivan

On Sun, May 2, 2010 at 12:04 PM, Amos Jeffries  wrote:
> Ivan . wrote:
>>
>> Hi
>>
>> I recently implemented a new proxy system. What I am looking at doing is
>> setting up a periodic test that goes out to the Internet, pulls some
>> content down and records the relevant metrics.
>>
>>
>>
>> PC>Proxy--->FW-->Internet>Site-with-content
>>
>>
>> Some sort of scheduled process on a PC, that pulls down some static
>> content from the same website, which is repeatable. The application
>> would then record metrics such as speed, time taken to download the
>> static content and log that.
>>
>
> Squid native access.log contains transfer duration and size metrics.
> Some other options not in the default format provide additional metrics if
> you need them.
>  See http://www.squid-cache.org/Doc/config/logformat/ for a list of log
> metrics.
>
> Otherwise the SNMP counters can be used, but they are not as fine-grained
> as individual requests.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.1
>
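Amos's point that the native access.log already records duration and size suggests a post-processing approach: a short parser can pull those fields out for trending instead of generating synthetic test traffic. A minimal sketch, assuming the default Squid native log format (timestamp, elapsed ms, client, result/status, bytes, method, URL, ...):

```python
def parse_native_line(line):
    """Parse one default-format Squid access.log line into a dict.

    Assumes the native log layout:
    timestamp elapsed-ms client result/status bytes method URL ...
    """
    fields = line.split()
    result, _, status = fields[3].partition("/")
    return {
        "timestamp": float(fields[0]),
        "elapsed_ms": int(fields[1]),   # transfer duration in milliseconds
        "client": fields[2],
        "result": result,               # e.g. TCP_MISS, TCP_HIT
        "status": int(status),          # HTTP status code
        "bytes": int(fields[4]),        # reply size
        "method": fields[5],
        "url": fields[6],
    }

# Example line modelled on the log excerpts in this thread
sample = ("1269906612.930 17 10.0.0.5 TCP_MISS/200 851 GET "
          "http://advisories.internode.on.net/images/menu2-on.gif "
          "- DIRECT/192.231.203.146 image/gif")
rec = parse_native_line(sample)
print(rec["elapsed_ms"], rec["status"], rec["bytes"])  # → 17 200 851
```

Fed a whole access.log, records like these could be aggregated per hour or per destination to get the speed and download-time numbers without extra test traffic.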


[squid-users] Proxy performance monitoring

2010-05-01 Thread Ivan .
Hi

I recently implemented a new proxy system. What I am looking at doing is
setting up a periodic test that goes out to the Internet, pulls some
content down and records the relevant metrics.


PC>Proxy--->FW-->Internet>Site-with-content


Some sort of scheduled process on a PC, that pulls down some static
content from the same website, which is repeatable. The application
would then record metrics such as speed, time taken to download the
static content and log that.

I have been digging on this site
http://www.opensourcetesting.org/performance.php looking at tools that
are available, but I would appreciate any info if anyone has done
something similar.

Thanks
Ivan


[squid-users] client_lifetime

2010-04-29 Thread Ivan .
Hi

I chain from two internal Clearswift appliances to a Squid box in a DMZ.

I have noticed quite a few WARNING: Closing client 
connection due to lifetime timeout

The client_lifetime is set at default, but I was wondering if I should
stretch that right out to 365 days or the like, seeing as all my
connections to the Squid proxy come from two IP addresses only?

Any other parameters that I should tune in this sort of setup?
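For reference, client_lifetime is a squid.conf directive; a sketch of the relevant lines, with illustrative values rather than a recommendation (note client_lifetime is a hard cap on connection age, not an idle timeout):

```
# squid.conf fragment (illustrative, not a recommendation)
client_lifetime 1 day       # default hard cap on a client connection's age
read_timeout 15 minutes     # default timeout on server-side reads
```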

Thanks
Ivan


Re: [squid-users] Squid v3.0Stable16 memory leak

2010-04-05 Thread Ivan .
Amos

I can confirm that with the same kernel, v2.6STABLE21 works fine

cheers
Ivan

On Tue, Mar 30, 2010 at 6:21 PM, Amos Jeffries  wrote:
>
> Ivan . wrote:
>>
>> Hi
>>
>> Had this running on a RedHat EL5 64bit OS running Squid v3.0.STABLE16
>> for about 3 days, with 8GB of memory. Slowly but surely "top" shows
>> the available memory dropping down to 500MB, which concerned me a great
>> deal.
>>
>> I am not caching (using the cache_dir null directive), so I am not sure
>> what is going on other than a memory leak. Restarting the squid process
>> didn't help, so I bounced the box and, lo and behold, available memory is
>> around 7GB.
>
> Um ... Restarting Squid drops all the memory it has allocated, whether leaked 
> or not. Same as killing the process.
>
> This sounds very much like something I saw back in the 2.6.31 kernel last 
> year. Any app that used a lot of memory or connections slowly (relative) 
> leaked RAM into the kernel space somehow. Only a system restart or kernel 
> upgrade to 2.6.32 fixed it here.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Squid v3.0Stable16 memory leak

2010-03-30 Thread Ivan .
Hmm, I'll check tomorrow, but I am fairly certain that I am on the
latest kernel via the RHN support site for RH EL5

I was on Squid v2.6stablexx, which is the latest rpm available from
RedHat, and didn't have any issues. I upgraded hoping to solve my
persistent tcp_miss for a couple of sites

cheers
Ivan

On Tue, Mar 30, 2010 at 6:21 PM, Amos Jeffries  wrote:
> Ivan . wrote:
>>
>> Hi
>>
>> Had this running on a RedHat EL5 64bit OS running Squid v3.0.STABLE16
>> for about 3 days, with 8GB of memory. Slowly but surely "top" shows
>> the available memory dropping down to 500MB, which concerned me a great
>> deal.
>>
>> I am not caching (using the cache_dir null directive), so I am not sure
>> what is going on other than a memory leak. Restarting the squid process
>> didn't help, so I bounced the box and, lo and behold, available memory is
>> around 7GB.
>
> Um ... Restarting Squid drops all the memory it has allocated, whether
> leaked or not. Same as killing the process.
>
> This sounds very much like something I saw back in the 2.6.31 kernel last
> year. Any app that used a lot of memory or connections slowly (relative)
> leaked RAM into the kernel space somehow. Only a system restart or kernel
> upgrade to 2.6.32 fixed it here.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.1
>


[squid-users] Squid v3.0Stable16 memory leak

2010-03-30 Thread Ivan .
Hi

Had this running on a RedHat EL5 64bit OS running Squid v3.0.STABLE16
for about 3 days, with 8GB of memory. Slowly but surely "top" shows
the available memory dropping down to 500MB, which concerned me a great
deal.

I am not caching (using the cache_dir null directive), so I am not sure
what is going on other than a memory leak. Restarting the squid process
didn't help, so I bounced the box and, lo and behold, available memory is
around 7GB.

I just upgraded to v3.0.STABLE24, so hoping this is better

I am not running any other services on the box, apart from ssh for
access, no GUI etc

cheers
Ivan


Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
really? One site not working on each of the Squid boxes?

That would be very, very strange?

Ivan

On Tue, Mar 30, 2010 at 11:20 AM, Amos Jeffries  wrote:
> On Tue, 30 Mar 2010 10:50:53 +1100, "Ivan ."  wrote:
>> More odd tcp_miss
>>
>> Only a small portion of the site would work. It works fine from
>> the primary, but fails on the secondary squid.
>>
>
> It's at this point I'm suspecting the NIC or hardware.
> Though low level software such as the kernel or iptables version warrant a
> look as well.
>
> Amos
>


Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
More odd tcp_miss

Only a small portion of the site would work. It works fine from
the primary, but fails on the secondary squid.

1269906612.412   5464 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906612.930 17 10.xxx..xxx TCP_MISS/200 851 GET
http://advisories.internode.on.net/images/menu2-on.gif -
DIRECT/192.231.203.146 image/gif
1269906613.075  9 10.xxx..xxx TCP_REFRESH_MODIFIED/200 782 GET
http://advisories.internode.on.net/images/menu2.gif -
DIRECT/192.231.203.146 image/gif
1269906613.331221 10.xxx..xxx  TCP_MISS/200 819 GET
http://advisories.internode.on.net/images/menu1-on.gif -
DIRECT/192.231.203.146 image/gif
1269906614.487   1865 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/ - DIRECT/203.16.214.27 -
1269906696.702  60903 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906767.709  61004 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906840.719  60299 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/broadband/plan_changes/ -
DIRECT/203.16.214.27 -
1269906911.707  60981 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/broadband/plan_changes/ -
DIRECT/203.16.214.27 -



On Mon, Mar 29, 2010 at 5:56 PM, Ivan .  wrote:
> That is so odd, as I have two identical boxes, now running the same
> Squid version, going through the same infrastructure, one works, the
> other one doesn't?
>
> The only difference are the public addresses configured on each of the
> squid proxy systems.
>
> The TCP stats on the interface of the squid box that won't access that
> site don't look too bad at all
>
> [r...@pcr-proxy ~]# netstat -s
> Ip:
>    1593488410 total packets received
>    17991 with invalid addresses
>    0 forwarded
>    0 incoming packets discarded
>    1593318874 incoming packets delivered
>    1413863445 requests sent out
>    193 reassemblies required
>    95 packets reassembled ok
> Icmp:
>    22106 ICMP messages received
>    0 input ICMP message failed.
>    ICMP input histogram:
>        destination unreachable: 16
>        echo requests: 22090
>    155761 ICMP messages sent
>    0 ICMP messages failed
>    ICMP output histogram:
>        destination unreachable: 133671
>        echo replies: 22090
> IcmpMsg:
>        InType3: 16
>        InType8: 22090
>        OutType0: 22090
>        OutType3: 133671
> Tcp:
>    27785486 active connections openings
>    78777077 passive connection openings
>    68247 failed connection attempts
>    560600 connection resets received
>    569 connections established
>    1589479495 segments received
>    1403833081 segments send out
>    6034370 segments retransmited
>    0 bad segments received.
>    626711 resets sent
> Udp:
>    3817253 packets received
>    20 packets to unknown port received.
>    0 packet receive errors
>    3840233 packets sent
> TcpExt:
>    217 invalid SYN cookies received
>    15888 resets received for embryonic SYN_RECV sockets
>    42765 packets pruned from receive queue because of socket buffer overrun
>    7282834 TCP sockets finished time wait in fast timer
>    3 active connections rejected because of time stamp
>    11427 packets rejects in established connections because of timestamp
>    8682907 delayed acks sent
>    1268 delayed acks further delayed because of locked socket
>    Quick ack mode was activated 1227980 times
>    36 packets directly queued to recvmsg prequeue.
>    14 packets directly received from prequeue
>    538829561 packets header predicted
>    492906318 acknowledgments not containing data received
>    190275750 predicted acknowledgments
>    372 times recovered from packet loss due to fast retransmit
>    348117 times recovered from packet loss due to SACK data
>    174 bad SACKs received
>    Detected reordering 71 times using FACK
>    Detected reordering 963 times using SACK
>    Detected reordering 25 times using reno fast retransmit
>    Detected reordering 998 times using time stamp
>    560 congestion windows fully recovered
>    10689 congestion windows partially recovered using Hoe heuristic
>    TCPDSACKUndo: 921
>    197231 congestion windows recovered after partial ack
>    1316789 TCP data loss events
>    TCPLostRetransmit: 22
>    3020 timeouts after reno fast retransmit
>    78970 timeouts after SACK recovery
>    10665 timeouts in loss state
>    743644 fast retransmits
>    1003156 forward retransmits
>    1884003 retransmits in slow start
>    1604549 other TCP timeouts
>    TCPRenoRecoveryFail: 150
>    31151 sack retransmits failed
>    

Re: [squid-users] TCP MISS 502

2010-03-28 Thread Ivan .
That is so odd, as I have two identical boxes, now running the same
Squid version, going through the same infrastructure, one works, the
other one doesn't?

The only difference are the public addresses configured on each of the
squid proxy systems.

The TCP stats on the interface of the squid box that won't access that
site don't look too bad at all

[r...@pcr-proxy ~]# netstat -s
Ip:
1593488410 total packets received
17991 with invalid addresses
0 forwarded
0 incoming packets discarded
1593318874 incoming packets delivered
1413863445 requests sent out
193 reassemblies required
95 packets reassembled ok
Icmp:
22106 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
destination unreachable: 16
echo requests: 22090
155761 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 133671
echo replies: 22090
IcmpMsg:
InType3: 16
InType8: 22090
OutType0: 22090
OutType3: 133671
Tcp:
27785486 active connections openings
78777077 passive connection openings
68247 failed connection attempts
560600 connection resets received
569 connections established
1589479495 segments received
1403833081 segments send out
6034370 segments retransmited
0 bad segments received.
626711 resets sent
Udp:
3817253 packets received
20 packets to unknown port received.
0 packet receive errors
3840233 packets sent
TcpExt:
217 invalid SYN cookies received
15888 resets received for embryonic SYN_RECV sockets
42765 packets pruned from receive queue because of socket buffer overrun
7282834 TCP sockets finished time wait in fast timer
3 active connections rejected because of time stamp
11427 packets rejects in established connections because of timestamp
8682907 delayed acks sent
1268 delayed acks further delayed because of locked socket
Quick ack mode was activated 1227980 times
36 packets directly queued to recvmsg prequeue.
14 packets directly received from prequeue
538829561 packets header predicted
492906318 acknowledgments not containing data received
190275750 predicted acknowledgments
372 times recovered from packet loss due to fast retransmit
348117 times recovered from packet loss due to SACK data
174 bad SACKs received
Detected reordering 71 times using FACK
Detected reordering 963 times using SACK
Detected reordering 25 times using reno fast retransmit
Detected reordering 998 times using time stamp
560 congestion windows fully recovered
10689 congestion windows partially recovered using Hoe heuristic
TCPDSACKUndo: 921
197231 congestion windows recovered after partial ack
1316789 TCP data loss events
TCPLostRetransmit: 22
3020 timeouts after reno fast retransmit
78970 timeouts after SACK recovery
10665 timeouts in loss state
743644 fast retransmits
1003156 forward retransmits
1884003 retransmits in slow start
1604549 other TCP timeouts
TCPRenoRecoveryFail: 150
31151 sack retransmits failed
4198383 packets collapsed in receive queue due to low socket buffer
814608 DSACKs sent for old packets
33462 DSACKs sent for out of order packets
65506 DSACKs received
266 DSACKs for out of order packets received
215231 connections reset due to unexpected data
10630 connections reset due to early user close
76801 connections aborted due to timeout
IpExt:
InBcastPkts: 9199


On Mon, Mar 29, 2010 at 5:47 PM, Amos Jeffries  wrote:
> Ivan . wrote:
>>
>> Some even more strange access.log entries?
>>
>> This is odd? Does that mean no DNS record? Strange, as both squids use
>> the same DNS setup, with primary, secondary and tertiary servers.
>> 1269833940.167      0 127.0.0.1 NONE/400 1868 GET
>> www.environment.gov.au - NONE/- text/html
>>
>> 1269833960.464  60997 10.132.17.30 TCP_MISS/000 0 GET
>> http://www.environment.gov.au/ - DIRECT/155.187.3.81 -
>>
>> 1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
>> http://www.environment.gov.au - DIRECT/155.187.3.81 -
>>
>> This one is new?
>> 1269842635.028 295660 10.143.254.22 TCP_MISS/502 2514 GET
>> http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html
>>
>
> The TCP_MISS/000 are another version of the READ_ERROR you are receiving as
> TCP_MISS/502. The 000 ones are on the client facing side though, the TCP
> link read failing before the request headers are finished being received
> from the client.
>  The first line is received (to get the URL) but not the rest of the request
> headers.
>
> The NONE/400 might be yet another version of the read failing at some point
> of processing. It's hard to say.
>
> Something is

Re: [squid-users] TCP MISS 502

2010-03-28 Thread Ivan .
Some even more strange access.log entries?

This is odd? Does that mean no DNS record? Strange, as both squids use
the same DNS setup, with primary, secondary and tertiary servers.
1269833940.167  0 127.0.0.1 NONE/400 1868 GET
www.environment.gov.au - NONE/- text/html

1269833960.464  60997 10.132.17.30 TCP_MISS/000 0 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 -

1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
http://www.environment.gov.au - DIRECT/155.187.3.81 -

This one is new?
1269842635.028 295660 10.143.254.22 TCP_MISS/502 2514 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html



On Mon, Mar 29, 2010 at 4:56 PM, Ivan .  wrote:
> Hi Amos
>
> You can see the tcp_miss in the access.log here:-
>
> 1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
> http://www.environment.gov.au - DIRECT/155.187.3.81 -
>
> Here is tcpdump output from the connection. You can see the TCP
> handshake set up and then the http session just hangs. I have confirmed
> with the website admin there is no DDoS-type protection which would
> block multiple requests in quick succession.
>
> The tcp connection times out and then resets.
>
> [r...@squid-proxy ~]# tcpdump net 155.187.3
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
> 16:58:59.369482 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: S
> 1781942738:1781942738(0) win 5840  1321171542 0,nop,wscale 7>
> 16:58:59.418150 IP 155.187.3.81.http > xxx..xxx.xxx.41338: S
> 2343505326:2343505326(0) ack 1781942739 win 32768  0,nop,nop,timestamp 234270252 1321171542,sackOK,eol>
> 16:58:59.418167 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
> win 46 
> 16:58:59.418213 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: P
> 1:696(695) ack 1 win 46 
> 16:58:59.477692 IP 155.187.3.81.http > xxx..xxx.xxx.41338: P
> 2897:4081(1184) ack 696 win 33304  1321171591>
> 16:58:59.477700 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
> win 46  {2897:4081}>
>
>
> cheers
> Ivan
>
> On Mon, Mar 29, 2010 at 3:59 PM, Amos Jeffries  wrote:
>> Ivan . wrote:
>>>
>>> Hi,
>>>
>>> What would cause a TCP MISS 502, which would prevent a site from
>>> loading? The site works on squid v3.0 but not on v2.6?
>>>
>>
>> Any one of quite a few things. The ERR_READ_ERROR result means the remote
>> server or network is closing the TCP link on you for some unknown reason.
>>
>> Why it works in 3.0 is as much a mystery as why it does not in 2.6 until
>> details of the traffic on Squid->Server TCP link are known.
>>
>>
>> Amos
>> --
>> Please be using
>>  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>>  Current Beta Squid 3.1.0.18
>>
>


Re: [squid-users] TCP MISS 502

2010-03-28 Thread Ivan .
Hi Amos

You can see the tcp_miss in the access.log here:-

1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
http://www.environment.gov.au - DIRECT/155.187.3.81 -

Here is tcpdump output from the connection. You can see the TCP
handshake set up and then the http session just hangs. I have confirmed
with the website admin there is no DDoS-type protection which would
block multiple requests in quick succession.

The tcp connection times out and then resets.

[r...@squid-proxy ~]# tcpdump net 155.187.3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
16:58:59.369482 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: S
1781942738:1781942738(0) win 5840 
16:58:59.418150 IP 155.187.3.81.http > xxx..xxx.xxx.41338: S
2343505326:2343505326(0) ack 1781942739 win 32768 
16:58:59.418167 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
win 46 
16:58:59.418213 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: P
1:696(695) ack 1 win 46 
16:58:59.477692 IP 155.187.3.81.http > xxx..xxx.xxx.41338: P
2897:4081(1184) ack 696 win 33304 
16:58:59.477700 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
win 46 


cheers
Ivan

On Mon, Mar 29, 2010 at 3:59 PM, Amos Jeffries  wrote:
> Ivan . wrote:
>>
>> Hi,
>>
>> What would cause a TCP MISS 502, which would prevent a site from
>> loading? The site works on squid v3.0 but not on v2.6?
>>
>
> Any one of quite a few things. The ERR_READ_ERROR result means the remote
> server or network is closing the TCP link on you for some unknown reason.
>
> Why it works in 3.0 is as much a mystery as why it does not in 2.6 until
> details of the traffic on Squid->Server TCP link are known.
>
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>  Current Beta Squid 3.1.0.18
>


[squid-users] TCP MISS 502

2010-03-27 Thread Ivan .
Hi,

What would cause a TCP MISS 502, which would prevent a site from
loading? The site works on squid v3.0 but not on v2.6?

With this error on Squid v2.6STABLE21, I can't get the www.environment.gov.au site up:

1269582298.419 306252 10.xxx.xxx.xxx TCP_MISS/502 1442 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html [Host:
www.environment.gov.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\nCookie: tmib_res_layout=default-wide;
__utma=181583987.2132547050.1269488465.1269509672.1269569388.4;
__utmc=181583987;
__utmz=181583987.1269488465.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)\r\n]
[HTTP/1.0 502 Bad Gateway\r\nServer: squid\r\nDate: Fri, 26 Mar 2010
05:44:58 GMT\r\nContent-Type: text/html\r\nContent-Length:
1074\r\nExpires: Fri, 26 Mar 2010 05:44:58 GMT\r\nX-Squid-Error:
ERR_READ_ERROR 104\r\n\r]

But it works fine from a Squid v3.0STABLE16

Thanks
Ivan


[squid-users] TCP MISS 502

2010-03-25 Thread Ivan .
Man, Squid does my head in sometimes.

With this error on Squid v2.6STABLE21, I can't get the www.environment.gov.au site up:

1269582298.419 306252 10.xxx.xxx.xxx TCP_MISS/502 1442 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html [Host:
www.environment.gov.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\nCookie: tmib_res_layout=default-wide;
__utma=181583987.2132547050.1269488465.1269509672.1269569388.4;
__utmc=181583987;
__utmz=181583987.1269488465.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)\r\n]
[HTTP/1.0 502 Bad Gateway\r\nServer: squid\r\nDate: Fri, 26 Mar 2010
05:44:58 GMT\r\nContent-Type: text/html\r\nContent-Length:
1074\r\nExpires: Fri, 26 Mar 2010 05:44:58 GMT\r\nX-Squid-Error:
ERR_READ_ERROR 104\r\n\r]

But it works fine from a Squid v3.0STABLE16

Thanks
Ivan


[squid-users] Can someone check a site?

2010-03-25 Thread Ivan .
Hi,

Can someone running Squid v2.6 STABLE21 check this site for me?
http://www.usp.ac.fj

Nothing in the access.log to give me a hint as to where the issue is.

I can access it directly, but through Squid it just hangs there after
the initial TCP handshake?

[r...@proxy squid]# tcpdump -vvv host 144.120.8.2
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size
65535 bytes
21:35:24.227005 IP (tos 0x0, ttl  64, id 7227, offset 0, flags [DF],
proto: TCP (6), length: 60) xxx.xxx.xxx.xxx.33151 >
belo.usp.ac.fj.http: S, cksum 0xb78a (correct),
2265843403:2265843403(0) win 5840 
21:35:24.488001 IP (tos 0x0, ttl  53, id 0, offset 0, flags [DF],
proto: TCP (6), length: 60) belo.usp.ac.fj.http >
xxx.xxx.xxx.xxx.33151: S, cksum 0x1a89 (correct),
2369822436:2369822436(0) ack 2265843404 win 5792 
21:35:24.488013 IP (tos 0x0, ttl  64, id 7228, offset 0, flags [DF],
proto: TCP (6), length: 52) xxx.xxx.xxx.xxx.33151 >
belo.usp.ac.fj.http: ., cksum 0x5ebb (correct), 1:1(0) ack 1 win 46

21:35:24.488077 IP (tos 0x0, ttl  64, id 7229, offset 0, flags [DF],
proto: TCP (6), length: 482) xxx.xxx.xxx.xxx.33151 >
belo.usp.ac.fj.http: P, cksum 0x15be (incorrect (-> 0x870c),
1:431(430) ack 1 win 46 
21:35:24.729001 IP (tos 0x0, ttl  53, id 63867, offset 0, flags [DF],
proto: TCP (6), length: 52) belo.usp.ac.fj.http >
xxx.xxx.xxx.xxx.33151: ., cksum 0x4401 (correct), 1:1(0) ack 431 win
6432 

thanks
Ivan


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread Ivan .
Have you set the descriptor size in the squid startup script?

see here
http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
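The approach in that link boils down to raising the per-process file-descriptor limit in the shell that launches Squid, since the daemon inherits it. A sketch of such an init-script fragment (the 65535 value and squid path are illustrative):

```shell
#!/bin/sh
# Init-script fragment sketch: raise the FD limit before launching Squid
# so the daemon inherits it. 65535 and the squid path are illustrative.
SQUID_MAXFD=65535
# Raising the hard limit needs root; warn and continue otherwise.
ulimit -HSn "$SQUID_MAXFD" 2>/dev/null || echo "warning: could not raise FD limit" >&2
echo "FD limit for this shell: $(ulimit -n)"
# exec /usr/sbin/squid -D    # then start Squid from this same shell
```

If I recall correctly, Squid 2.x also has a compile-time descriptor cap (a configure option along the lines of --with-maxfd), so the binary may need rebuilding as well; treat that as an assumption to verify for your version.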

cheers
Ivan

On Tue, Mar 23, 2010 at 12:45 PM, a...@gmail  wrote:
>
> I have partly solved the problem: I managed to increase the file descriptor
> limit, and my system now reads 65535.
> But Squid still says only 1024 file descriptors are available
>
> What can I do to fix this please, I have rebooted the system and Squid 
> several times
> I am running out of ideas
>
> Any help would be appreciated
> Regards
> Adam
>


Re: [squid-users] Squid cache_dir failed - can squid survive?

2010-03-18 Thread Ivan .
I noticed an improvement when I disabled it, which may have something
to do with my cache settings, but I tried a number of config combos
without much success

2010/3/18 Henrik Nordström :
> tor 2010-03-18 klockan 17:25 +1100 skrev Ivan .:
>> I wonder about the value of http cache, when the majority of high
>> volume sites used in the corporate environment are dynamic.
>> http://www.mnot.net/cache_docs/
>
> Hit ratios have not declined that much in the last decade. It's still
> around a 25-30% byte hit ratio, and significantly more as a request hit
> ratio.
>
> While it's true that a lot of the HTML content is more dynamic than
> before, there is also a lot more inlined content such as images, which
> is plain static, caches just fine, and makes up the majority of the
> traffic.
>
>> How is the no-cache HTTP header handled by Squid?
>
> By default as if the response is not cacheable. Somewhat stricter than
> the specifications require, but more in line with what web authors
> expect when using this directive.
>
> Regards
> Henrik
>
>


Re: [squid-users] Squid cache_dir failed - can squid survive?

2010-03-17 Thread Ivan .
I wonder about the value of http cache, when the majority of high
volume sites used in the corporate environment are dynamic.
http://www.mnot.net/cache_docs/

How is the no-cache HTTP header handled by Squid?

I didn't see the value in it, and used the cache_dir null /tmp to stop it
http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid

cheers
Ivan


On Thu, Mar 18, 2010 at 5:16 PM, GIGO .  wrote:
>
> Dear henrik,
>
> If you have only one physical machine, what is the best strategy for
> minimizing the downtime: rebuild the cache directory again, or start
> using Squid without the cache directory? I assume we have to
> reinstall the Squid software? Please guide
>
>
>
>
>
> 
>> From: hen...@henriknordstrom.net
>> To: gina...@gmail.com
>> CC: squid-users@squid-cache.org
>> Date: Sat, 13 Mar 2010 09:32:30 +0100
>> Subject: Re: [squid-users] Squid cache_dir failed - can squid survive?
>>
>> fre 2010-03-12 klockan 14:28 -0800 skrev Maykeen:
>>> I want to know if squid is able to survive if it suddenly loses access to
>>> its cache directories, for example by ceasing to cache requests and just
>>> serving as a proxy. Is there a way to do this, instead of squid terminating
>>> when this happens?
>>
>> Squid is not currently designed to handle this and will terminate.
>>
>> What you can do to handle this situation is to run two Squids, one just
>> as a proxy and the other with the cache. The proxy only one uses the
>> cache one as parent.
>>
>> Regards
>> Henrik
>>


Re: [squid-users] Warning your cache is running out of file descriptors

2010-03-17 Thread Ivan .
I used this

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

On Thu, Mar 18, 2010 at 6:21 AM, william  wrote:
> See this thread:
> http://www.mail-archive.com/squid-users@squid-cache.org/msg70230.html
>
>
> please search the archives
>
>
> with kind regards
>
> William van de Velde
>
>
> On 03/17/2010 07:20 PM, Mariel Sebedio wrote:
>>
>> Hello, the file descriptors must be increased in
>> /etc/security/limits.conf and the cache rebuilt. For RHEL you must change
>>
>> *  - nofile 1024
>> to
>> *  - nofile 2048
>> Sorry my english!!
>>
>> Bye, Mariel
>>
>> Gmail wrote:
>>>
>>> Hello All,
>>>
>>> This is the first time I am using this mailing list, and I do apologise
>>> if I sent a copy of this email to another address by mistake
>>>
>>> I am desperately seeking some help, I have googled in a hope to find an
>>> answer, but all I could find was about the previous versions, which don't
>>> apply to the version I am using and to my OS:
>>>
>>>
>>> I am running Squid3.0 Stable
>>> OS Ubuntu Hardy
>>>
>>> I am currently getting this warning:
>>>
>>> Warning: your cache is running out of file descriptors, but I couldn't find
>>> where to increase the size from 1024 to any number.
>>>
>>> On the previous versions and other OS systems, it's apparently located
>>> here /etc/default/squid but on my system it doesn't exist.
>>>
>>> Can anyone please point me to where I can change that?
>>>
>>> I have checked Ubuntu forums, I have checked several other forums, but
>>> the only links I seem to get on google are related to the previous versions
>>> of squid or other operating systems.
>>>
>>> Can you help please, since I started using Squid I had problem after
>>> problem, a lot of other applications are not working, and I still can't access my
>>> backend HTTP servers, but that's another problem for another day.
>>>
>>> Any help would be very much appreciated
>>> Thank you all
>>>
>>
>>
>
>


Re: [squid-users] squid consuming too much processor/cpu

2010-03-17 Thread Ivan .
run a cron job to restart Squid once a week?
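As a sketch, such a workaround could be a root crontab entry (the init-script path is illustrative; note a restart drops all in-flight connections, so this papers over the CPU issue rather than fixing it):

```
# illustrative root crontab entry: restart Squid every Sunday at 03:00
0 3 * * 0  /etc/init.d/squid restart
```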

On Wed, Mar 17, 2010 at 11:09 PM, Muhammad Sharfuddin
 wrote:
>
> On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:
> > you might want to check out this thread
> >
> > http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html
>
> I checked, but it's not clear to me.
> Do I need to install some packages/RPMs? And then what?
> I mean, how can I resolve this issue?
>
> --
> Regards
> Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823
>
> >
> >
> > cheers
> > ivan
> >
> > On Wed, Mar 17, 2010 at 4:55 PM, Muhammad Sharfuddin
> >  wrote:
> > > Squid Cache: Version 2.7.STABLE5(squid-2.7.STABLE5-2.3)
> > > kernel version: 2.6.27 x86_64
> > > CPU: Xeon 2.6 GHz CPU
> > > Memory: 2 GB
> > > /var/cache/squid is ext3, mounted with 'noacl' and 'noatime' options
> > > number of users using this proxy: 160
> > > number of users using simultaneously/concurrently using this proxy: 72
> > >
> > > I found that squid is consuming too much CPU; the average CPU idle time
> > > is only 49.
> > >
> > > I have attached the output of 'top -b -n 7' and 'vmstat 1'
> > >
> > > below is the output of squid.conf
> > >
> > > squid.conf:
> > > -
> > >
> > > http_port 8080
> > > cache_mgr administra...@test.com
> > > cache_mem 1024 MB
> > > cache_dir aufs /var/cache/squid 2 32 256
> > > visible_hostname gateway.test.com
> > > refresh_pattern ^ftp: 1440 20% 10080
> > > refresh_pattern ^gopher: 1440 0% 1440
> > > refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
> > > refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90%
> > > 432000
> > > refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$
> > > 10080 90% 43200
> > > refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
> > > refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
> > > refresh_pattern . 0 40% 40320
> > > cache_swap_low 78
> > > cache_swap_high 90
> > >
> > > maximum_object_size_in_memory 100 KB
> > > maximum_object_size 12288  KB
> > >
> > > fqdncache_size 2048
> > > ipcache_size 2048
> > >
> > > acl myFTP port   20  21
> > > acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
> > > http_access allow ftp_ipes myFTP
> > > http_access deny myFTP
> > >
> > > acl porn_deny url_regex "/etc/squid/domains.deny"
> > > http_access deny porn_deny
> > >
> > > acl vip src "/etc/squid/vip_ipes.txt"
> > > http_access allow vip
> > >
> > > acl entweb url_regex "/etc/squid/entwebsites.txt"
> > > http_access deny entweb
> > >
> > > acl mynet src "/etc/squid/allowed_ipes.txt"
> > > http_access allow mynet
> > >
> > >
> > > please help, why squid is utilizing so much of cpu
> > >
> > >
> > > --
> > > Regards
> > > Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823
> > >
> > >
> >
>
>


Re: [squid-users] Squid v2.6 error accessing site

2010-03-17 Thread Ivan .
It's all sorted, an issue on the hosted site.

On Wed, Mar 17, 2010 at 8:27 PM, Matus UHLAR - fantomas
 wrote:
>> >> On Tue, 16 Mar 2010 11:12:44 +1100, "Ivan ."  wrote:
>> >>> I am having some trouble accessing the site
>> >>> http://www.efirstaid.com.au/. I confirm the TCP SYN packet leaves our
>> >>> edge router, but I don't see anything back?
>
>> > On Tue, Mar 16, 2010 at 11:27 AM, Amos Jeffries  
>> > wrote:
>> >> And what makes you think packets failing to return to your network is
>> >> caused by Squid?
>
>> On Tue, Mar 16, 2010 at 11:35 AM, Ivan .  wrote:
>> > Because the site is accessible directly, without going via squid, I am
>> > trying to eliminate the most obvious cause.
>> >
>> > and the window scaling issues
>> > http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize
>
> On 16.03.10 12:46, Ivan . wrote:
>> Now I am even more convinced it is a squid issue
>>
>> I just built up another RedHat ELv5 box, with Squid v 3.0 stable 16
>> and the site works
>>
>> The existing squid proxy running Squid v2.6 stable21 does not work
>
> did you try to connect from the host running squid, and did it work?
>
> I am sure it is not your squid that sends packets back to your network.
>
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> The 3 biggest disasters: Hiroshima 45, Tschernobyl 86, Windows 95
>


Re: [squid-users] squid consuming too much processor/cpu

2010-03-17 Thread Ivan .
You might want to check out this thread:

http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html

cheers
ivan

On Wed, Mar 17, 2010 at 4:55 PM, Muhammad Sharfuddin
 wrote:
> Squid Cache: Version 2.7.STABLE5(squid-2.7.STABLE5-2.3)
> kernel version: 2.6.27 x86_64
> CPU: Xeon 2.6 GHz CPU
> Memory: 2 GB
> /var/cache/squid is ext3, mounted with 'noacl' and 'noatime' options
> number of users using this proxy: 160
> number of users using simultaneously/concurrently using this proxy: 72
>
> I found that Squid is consuming too much CPU; the average CPU idle time
> is only 49%.
>
> I have attached the output 'top -b -n 7', and 'vmstat 1'
>
> below is the output of squid.conf
>
> squid.conf:
> -
>
> http_port 8080
> cache_mgr administra...@test.com
> cache_mem 1024 MB
> cache_dir aufs /var/cache/squid 2 32 256
> visible_hostname gateway.test.com
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
> refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000
> refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200
> refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
> refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
> refresh_pattern . 0 40% 40320
> cache_swap_low 78
> cache_swap_high 90
>
> maximum_object_size_in_memory 100 KB
> maximum_object_size 12288  KB
>
> fqdncache_size 2048
> ipcache_size 2048
>
> acl myFTP port   20  21
> acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
> http_access allow ftp_ipes myFTP
> http_access deny myFTP
>
> acl porn_deny url_regex "/etc/squid/domains.deny"
> http_access deny porn_deny
>
> acl vip src "/etc/squid/vip_ipes.txt"
> http_access allow vip
>
> acl entweb url_regex "/etc/squid/entwebsites.txt"
> http_access deny entweb
>
> acl mynet src "/etc/squid/allowed_ipes.txt"
> http_access allow mynet
>
>
> Please help: why is Squid using so much CPU?
>
>
> --
> Regards
> Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823
>
>
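A couple of things are worth checking on a setup like the one quoted: large url_regex lists (such as the domains.deny file above) are matched against every request and are a common source of Squid CPU load, and the cache manager shows where the time is going. A sketch of the latter, as a session on the proxy host (the port follows the http_port above; squidclient ships with Squid):

```
$ squidclient -p 8080 mgr:info | grep -i cpu
$ squidclient -p 8080 mgr:5min | grep -i cpu
```

If the regex ACLs turn out to dominate, the usual advice is to convert them to dstdomain lists where possible.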


Re: [squid-users] Squid v2.6 error accessing site

2010-03-15 Thread Ivan .
Now I am even more convinced it is a squid issue

I just built up another RedHat ELv5 box, with Squid v 3.0 stable 16
and the site works

The existing squid proxy running Squid v2.6 stable21 does not work

On Tue, Mar 16, 2010 at 11:35 AM, Ivan .  wrote:
> Because the site is accessible directly, without going via squid, I am
> trying to eliminate the most obvious cause.
>
> and the window scaling issues
> http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize
>
> Ivan
> On Tue, Mar 16, 2010 at 11:27 AM, Amos Jeffries  wrote:
>> On Tue, 16 Mar 2010 11:12:44 +1100, "Ivan ."  wrote:
>>> Hi,
>>>
>>> I am having some trouble accessing the site
>>> http://www.efirstaid.com.au/. I confirm the TCP SYN packet leaves our
>>> edge router, but I don't see anything back?
>>
>> And what makes you think packets failing to return to your network is
>> caused by Squid?
>>
>> Amos
>>
>>
>


Re: [squid-users] Squid v2.6 error accessing site

2010-03-15 Thread Ivan .
Because the site is accessible directly, without going via squid, I am
trying to eliminate the most obvious cause.

and the window scaling issues
http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize

Ivan
On Tue, Mar 16, 2010 at 11:27 AM, Amos Jeffries  wrote:
> On Tue, 16 Mar 2010 11:12:44 +1100, "Ivan ."  wrote:
>> Hi,
>>
>> I am having some trouble accessing the site
>> http://www.efirstaid.com.au/. I confirm the TCP SYN packet leaves our
>> edge router, but I don't see anything back?
>
> And what makes you think packets failing to return to your network is
> caused by Squid?
>
> Amos
>
>


[squid-users] Squid v2.6 error accessing site

2010-03-15 Thread Ivan .
Hi,

I am having some trouble accessing the site
http://www.efirstaid.com.au/. I can confirm the TCP SYN packet leaves our
edge router, but I don't see anything come back.

If I try to go direct without the squid it works fine.

1268696419.311 113830 10.xxx.xxx.xxx TCP_MISS/503 1444 GET
http://www.efirstaid.com.au/ - DIRECT/70.86.101.210 text/html [Host:
www.efirstaid.com.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\n] [HTTP/1.0 503 Service Unavailable\r\nServer:
squid\r\nDate: Mon, 15 Mar 2010 23:40:19 GMT\r\nContent-Type:
text/html\r\nContent-Length: 1066\r\nExpires: Mon, 15 Mar 2010
23:40:19 GMT\r\nX-Squid-Error: ERR_CONNECT_FAIL 111\r\n\r]

1268696516.331 114023 10.xxx.xxx.xxx  TCP_MISS/503 1444 GET
http://www.efirstaid.com.au/ - DIRECT/70.86.101.210 text/html [Host:
www.efirstaid.com.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\n] [HTTP/1.0 503 Service Unavailable\r\nServer:
squid\r\nDate: Mon, 15 Mar 2010 23:41:56 GMT\r\nContent-Type:
text/html\r\nContent-Length: 1066\r\nExpires: Mon, 15 Mar 2010
23:41:56 GMT\r\nX-Squid-Error: ERR_CONNECT_FAIL 111\r\n\r]

Trying a wget on the squid box gets the same error:

[r...@proxy squid]# wget http://www.efirstaid.com.au/
--2010-03-16 11:16:45--  http://www.efirstaid.com.au/
Resolving www.efirstaid.com.au... 70.86.101.210
Connecting to www.efirstaid.com.au|70.86.101.210|:80... failed:
Connection refused.
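The trailing 111 in `X-Squid-Error: ERR_CONNECT_FAIL 111` is the operating system errno, which matches the wget result above. A quick check (Python, assuming a Linux host, where ECONNREFUSED is 111):

```python
import errno
import os

# errno 111 on Linux is ECONNREFUSED ("Connection refused"):
# the origin server actively rejected the TCP connection,
# matching wget's "Connection refused" above.
print(errno.ECONNREFUSED)
print(os.strerror(errno.ECONNREFUSED))
```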


Thanks
Ivan


[squid-users] Https traffic

2009-10-05 Thread Ivan . Galli
Hi, 
my company is going to buy the Websense web security suite. 
It seems to be able to decrypt and check content inside the SSL tunnel. 
Is it really important to do this to prevent malicious code or dangerous 
threats?

Thanks and regards.

Ivan

On Wed, 30 Sep 2009 14:58:08 +0200, Ivan.Galli_at_aciglobal.it wrote: 
> Hi, i have a question about https traffic content. 
> There is some way to check what pass through ssl tunnel? 
> Can squidguard or any other programs help me? 
The 'S' in HTTPS means Secure or SSL encrypted. 
Why do you want to do this? 
Depends on the type of service environment you are working with... 
* ISP-like where 'random' people use the proxy? 
 - don't bother. This is a one-way road to serious trouble. 
* reverse-proxy where you own or manage the HTTPS website itself? 
 - use https_port and decrypt as things enter Squid. Re-encrypt if needed to the peer. 
* Enterprise setup where you have full control of the workstation configuration? 
 - use Squid-3.1 and SslBump. Push out settings to all workstations to trust the local proxy keys (required). 
Amos 

Ivan 
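For the reverse-proxy case Amos describes, a decrypt-at-Squid setup looks roughly like the following squid.conf sketch; the certificate paths, site name, and peer address are placeholders, not anything from this thread:

```
# hypothetical reverse-proxy fragment: terminate SSL at Squid,
# then re-encrypt towards the origin peer
https_port 443 accel cert=/etc/squid/site.pem key=/etc/squid/site.key defaultsite=www.example.com
cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl
```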


[squid-users] Https contents

2009-09-30 Thread Ivan . Galli
Hi, I have a question about https traffic content.
Is there some way to check what passes through the SSL tunnel?
Can SquidGuard or any other program help me?

Thanks and regards.

Ivan 


[squid-users] ntlm

2006-12-13 Thread ivan re

I use Squid 2.5 stable14 and Samba 3.0.2

squid conf:
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 15 minutes
auth_param ntlm use_ntlm_negotiate on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

When I try to access the internet I get "cache denied":

cache.log: [2006/12/12 16:42:07, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(672)
 Got user=[ivan] domain=[EGOBIANCHI] workstation=[ARTISTICO1] len1=24 len2=0
[2006/12/12 16:42:07, 3] utils/ntlm_auth.c:winbind_pw_check(429)
 Login for user [EMAIL PROTECTED] failed due to [Logon server]


What does it mean?


TIA
Ivan
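A "Logon server" failure from winbind_pw_check usually points at the Samba/winbindd side rather than at Squid, so it can help to test that layer by hand first. A sketch of such a session (the account name is just the one from the log above):

```
$ wbinfo -t                  # verify the trust secret with the domain controller
$ wbinfo -u                  # list domain users via winbindd
$ ntlm_auth --username=ivan  # prompts for a password; prints NT_STATUS_OK on success
```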


[squid-users] sqid auth problem

2006-12-04 Thread ivan re

I have configured Samba 3 and Squid 2.5 stable 14 on FC5.

With my Windows client I log into the domain with user, password and domain.

I can't access the internet. Why?
Do I need winbind or not?

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 15 minutes
auth_param ntlm use_ntlm_negotiate on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off


acl password proxy_auth REQUIRED
http_access allow password


Re: Re: [squid-users] Making ACL for an IP range

2005-05-10 Thread Ivan Petrushev

Thanks for the comment :)
 >http://squid.visolve.com/squid/squid24s1/access_controls.htm
 >
 >acl aclname src 172.16.1.25-172.16.1.35/32
 >
 >Ryan Lamberton
 >FamiLink Company
 >Family Safe Internet Access
 That's exactly what I need :) In that example, what is the purpose of the 
subnet mask? Does it have to match the subnet mask configured on the PCs over 
the network? Or is it only for determining the IP range parameters?
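As far as I understand it, the /32 in that form belongs to the ACL itself, not to the clients: it is the mask applied to each address in the range (so /32 means "match each address exactly"), and it does not need to match the netmask configured on the PCs. For example:

```
# each address from .25 through .35 is matched as a single host
acl mynet src 172.16.1.25-172.16.1.35/32
```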

-
http://host.GBG.bg - leader in Web Hosting solutions and Domain name registration


Re: Re: [squid-users] Making ACL for an IP range

2005-05-10 Thread Ivan Petrushev
Thanks for the comment :)
 >Dear Ivan 
 >   For an IP-to-IP range you can define as follows:
 >
 >acl pc1 src 192.168.1.30/255.255.255.255
 >http_access allow pc1
 >acl pc2 src 192.168.1.31/255.255.255.255
 >http_access allow pc2
 >
But that would allow access only for two IPs. If I had to describe every IP in 
that way, imagine what my squid.conf would look like for about 40 
IPs :) There has to be a shorter way.
Thanks again :)



[squid-users] Making ACL for an IP range

2005-05-09 Thread Ivan Petrushev
Hello :-) That's my first mailing list posting, but I hope I'll get the
basics soon. Please excuse my poor English.
The problem I'm trying to solve is how to make an ACL that matches a
range of IPs (not a whole subnet). If I wanted the ACL to match the
whole subnet, I would use CIDR or dotted notation like:
acl mynetwork src 192.168.1.1/255.255.255.0
or
acl mynetwork src 192.168.1.1/24
I want the acl 'mynetwork' to match only the IPs 192.168.1.30 -
192.168.1.47 (for example). That is not a whole subnetwork, so it can't
be done with the examples above. Can I use a (from IP)-(to IP) range in
squid.conf, and what is the exact syntax? I haven't seen anything
like that in the online documentation, but that doesn't mean it
doesn't exist :-)
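The range form does exist in Squid's src ACL; applied to the example addresses above, a sketch would be:

```
# matches only 192.168.1.30 through 192.168.1.47
acl mynetwork src 192.168.1.30-192.168.1.47/32
http_access allow mynetwork
```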

Greetings, Ivan Petrushev.



Re: [squid-users] SquidNT crash on startup.

2004-09-28 Thread Ivan Doitchinov
Przemek Czerkas wrote:
Ivan Doitchinov wrote:
 

Hello all,
I just installed 
http://albaweb.albacom.net/acmeconsulting.it/download/squid-2.5.STABLE6-NT-bin.zip

When I did
squid -i
to register squid as a service, it says it registered successfully but 
then crashes:

Faulting application squid.exe, version 2.5.4.0, faulting module 
advapi32.dll, version 5.1.2600.2180, fault address 0x0002869a.

"squid -r" or trying to start the service crashes as well.
I'm running WinXP professional SP2.
Any known issue?
Ivan Doitchinov
esmertec ag
   

Looks like http://www.squid-cache.org/bugs/show_bug.cgi?id=1064
Przemek Czerkas
 

That was it. Thanks.
Ivan Doitchinov
esmertec ag


[squid-users] SquidNT crash on startup.

2004-09-28 Thread Ivan Doitchinov
Hello all,
I just installed 
http://albaweb.albacom.net/acmeconsulting.it/download/squid-2.5.STABLE6-NT-bin.zip

When I did
squid -i
to register squid as a service, it says it registered successfully but 
then crashes:

Faulting application squid.exe, version 2.5.4.0, faulting module 
advapi32.dll, version 5.1.2600.2180, fault address 0x0002869a.

"squid -r" or trying to start the service crashes as well.
I'm running WinXP professional SP2.
Any known issue?
Ivan Doitchinov
esmertec ag


[squid-users] SQUID + LDAP HELP

2004-07-15 Thread Ivan Romero
Hi guys:

I want to set up Squid + LDAP to authenticate my W2k3 Active Directory
users. I've searched all over the internet for info (including posted
messages on this list), but I haven't been able to make it work.

When I use ldapsearch it works just fine. Same when I use
squid_ldap_auth from the command line. When using Squid, it shows the
auth window, and every time I enter a user name and password and click
OK, the Fedora server communicates with the w2k3 server (I use
ethereal & iptraf to check that).

The weird thing is that when I put in the wrong user or password, the
window keeps asking for it, but when I put in the right user and
password it generates a TCP_DENIED/407 and won't show anything in the
browser.
 
My squid.conf file shows:
auth_param basic program /usr/lib/squid/squid_ldap_auth -b cn=users,dc=dom1,dc=info,dc=co -D cn=user1o,cn=users,dc=dom1,dc=info,dc=co -h 10.10.1.25 -w pass1 -u cn
auth_param basic children 5
auth_param basic realm Squid LDAP
auth_param basic credentialsttl 2 hours
acl localusers proxy_auth REQUIRED
external_acl_type AD_Group %LOGIN /usr/lib/squid/squid_ldap_auth -b cn=users,dc=dom1,dc=info,dc=co -D cn=user1,cn=users,dc=dom1,dc=info,dc=co -h 10.10.1.25 -w pass1 -S -f "(&(cn=%u)(memberOf=cn=internet,cn=users,dc=dom1,dc=info,dc=co))"
acl wwwusers external AD_Group cn=internet,cn=users,dc=dom1,dc=info,dc=co
http_access deny !wwwusers
http_access allow localusers
http_reply_access allow all
icp_access allow all

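One way to narrow this down is to drive the basic-auth helper by hand: Squid basic helpers read "username password" lines on stdin and answer OK or ERR, so if the helper says OK but the browser still gets TCP_DENIED/407, the problem is more likely in the external_acl_type group lookup than in the password check. A sketch of such a session, using the command line from the config above (the credentials are placeholders):

```
$ /usr/lib/squid/squid_ldap_auth -b cn=users,dc=dom1,dc=info,dc=co \
    -D cn=user1o,cn=users,dc=dom1,dc=info,dc=co -h 10.10.1.25 -w pass1 -u cn
someuser somepassword
OK
someuser wrongpassword
ERR
```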






[squid-users] Squid + SSL CA.

2004-05-05 Thread Ivan Doitchinov
Hello all,

I am using squid V2.5.STABLE1 on Red Hat linux and I am trying to set up 
an SSL proxy (CONNECT method). It all works fine except that I can't 
figure out how to add my own CA certificate in order to prevent a TLS 
Unknown CA fatal error. I googled a bit and found out that this should 
be configurable through "sslproxy_*" directive in the squid config file, 
but I could not find a list/description on these directives... I only 
found a mention to "sslproxy_cafile" which does not seem to be 
recognized by my squid.

My squid was compiled with "--enable-ssl" and "--with-openssl=/usr/kerberos".
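For reference, in Squid versions that have it (it appeared after 2.5, as far as I know), the directive takes a PEM file of CA certificates; the path here is a placeholder:

```
# hypothetical: extra CA(s) Squid should trust when acting as an SSL client
sslproxy_cafile /etc/squid/my-ca.pem
```

If `squid -k parse` rejects the line, the running build simply predates the directive, and an upgrade would be needed to use it.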

Thanks,

Ivan Doitchinov
esmertec ag


[squid-users] keep ip source address in logs

2003-07-18 Thread Ivan Rodriguez
Hello list, I have a little problem.
I use Squid version 2.5.STABLE1, and all my users use the proxy server.
I host the web page for my intranet, so I use a proxy.pac file where we
have configured the local IP addresses to be routed directly. However,
not all users use proxy.pac, and in the Apache logs for my
intranet.domain web page the requests appear with the IP address of the
proxy server.
What can I do to get the IP address of the machine that generated the
request, instead of my proxy's IP address? In other words, how can I
send the requests directly to my intranet web page?

for example
ip address for proxy server 
192.168.64.20 

Logs for the Apache intranet.domain 

192.168.64.20 - - [30/Sep/2002:13:28:30 -0500] "GET 
(this should be the real IP of the client making the request, e.g. 192.168.64.16)


Iptables is not an option because I have only one ethernet interface.

Thanks a lot. Excuse me, my English is not good.



[squid-users] transparent proxy using wb_ntlm auth

2003-02-03 Thread Ivan de Gusmão Apolonio
Hi all

I'm trying to use transparent proxy, but if I'm using some authentication
scheme it always shows an authentication popup, even though I'm a member
of the allowed group, and when I put in my username/password it is
rejected. If I disable the auth scheme, it works normally. My question
is: is it possible to use transparent proxy with wb_ntlmauth
authentication? Follows part of my squid.conf:

auth_param ntlm program /usr/local/squid/libexec/wb_ntlmauth
auth_param ntlm children 10
auth_param ntlm max_challenge_reuses 10
auth_param ntlm max_challenge_lifetime 8 minutes
auth_param basic program /usr/local/squid/libexec/wb_auth
auth_param basic children 5
auth_param basic realm Squid proxy-cach

httpd_accel_host virtual
httpd_accel_port 0
httpd_accel_with_proxy  on
httpd_accel_uses_host_header on
httpd_accel_single_host off

acl domainusers proxy_auth "/etc/squid/internet_users.local.txt"
http_access allow liberados

Thanks
Ivan



[squid-users] no login popup when using wb_group

2003-01-29 Thread Ivan de Gusmão Apolonio
Hi all!

I've compiled samba 2.2.5 with options --with-winbind
--with-winbind-auth-challenge to be able to use Squid's winbind
authentication method. I've followed the steps on
http://www.squid-cache.org/Doc/FAQ/FAQ-23.html#ss23.5 and everything works
fine, but...

Before using winbind, I was using ntlm_auth, and when a user tried to access
the internet without being a member of the authorized group, a popup window
appeared asking the user for a login allowed to use the proxy.

My problem is that when I'm using wb_group there's no popup window; an
access denied page is shown, with no possibility to access the internet with
another user instead of the one logged in. Is there some configuration for
wb_group to allow a popup to authenticate to the proxy with another
user?

Thanks

Ivan de Gusmão Apolonio