Re: [squid-users] squid compilation error in Docker

2022-04-26 Thread Ivan Larionov
Based on the compilation log, I think it's not used by squid directly
but by libtool. I went through the whole log again and found the following
errors, which I missed originally:

"libtool: line 4251: find: command not found"

On Mon, Apr 25, 2022 at 1:08 PM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 4/25/22 15:41, Ivan Larionov wrote:
> > Seems like "findutils" is the package which fixes the build.
> >
> > Binaries in this package:
> >
> > # rpm -ql findutils | grep bin
> > /bin/find
> > /usr/bin/find
> > /usr/bin/oldfind
> > /usr/bin/xargs
> >
> > If the build depends on some of these, then the configure script should
> > probably check that they're available.
>
>
> ... and/or properly fail when their execution/use fails. I do not know
> whether this find/xargs dependency is inside Squid or inside something
> that Squid is using though, but I could not quickly find any direct uses
> by Squid sources (that would fail the build).
>
>
> Alex.
>
>
> > On Wed, Apr 13, 2022 at 9:38 PM Amos Jeffries wrote:
> >
> > On 14/04/22 14:59, Ivan Larionov wrote:
> >  > There were no errors earlier.
> >  >
> >  > Seems like installing openldap-devel fixes the issue.
> >  >
> >  > There were other dependencies installed together with it, not sure if
> >  > they also affected the build or not.
> >
> >
> > I suspect one or more of those other components is indeed the source of
> > the change. Some of them are very low-level OS functionality updates
> > (e.g. /proc and filesystem utilities).
> >
> > FWIW, the gist you posted looks suspiciously like reports we used to see
> > when BSD people were having issues with the linker not receiving all the
> > arguments passed to it. I would focus on the ones which interact with the
> > OS filesystem or the autotools / compiler / linker.
> >
> >
> > HTH
> > Amos
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > <mailto:squid-users@lists.squid-cache.org>
> > http://lists.squid-cache.org/listinfo/squid-users
> > <http://lists.squid-cache.org/listinfo/squid-users>
> >
> >
> >
> > --
> > With best regards, Ivan Larionov.
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid compilation error in Docker

2022-04-25 Thread Ivan Larionov
Seems like "findutils" is the package which fixes the build.

Binaries in this package:

# rpm -ql findutils | grep bin
/bin/find
/usr/bin/find
/usr/bin/oldfind
/usr/bin/xargs

If the build depends on some of these, then the configure script should
probably check that they're available.
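
(A configure-time guard could look roughly like the sketch below. This is
only an illustration using standard autoconf macros, not a patch against
Squid's actual configure.ac.)

  dnl check for the findutils tools that libtool invokes during the build
  AC_CHECK_PROGS([FIND], [find], [no])
  AC_CHECK_PROGS([XARGS], [xargs], [no])
  AS_IF([test "x$FIND" = xno || test "x$XARGS" = xno],
        [AC_MSG_ERROR([find and xargs (findutils) are required to build])])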

On Wed, Apr 13, 2022 at 9:38 PM Amos Jeffries  wrote:

> On 14/04/22 14:59, Ivan Larionov wrote:
> > There were no errors earlier.
> >
> > Seems like installing openldap-devel fixes the issue.
> >
> > There were other dependencies installed together with it, not sure if
> > they also affected the build or not.
>
>
> I suspect one or more of those other components is indeed the source of
> the change. Some of them are very low-level OS functionality updates (e.g.
> /proc and filesystem utilities).
>
> FWIW, the gist you posted looks suspiciously like reports we used to see
> when BSD people were having issues with the linker not receiving all the
> arguments passed to it. I would focus on the ones which interact with the
> OS filesystem or the autotools / compiler / linker.
>
>
> HTH
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid compilation error in Docker

2022-04-13 Thread Ivan Larionov
There were no errors earlier.

Seems like installing openldap-devel fixes the issue.

There were other dependencies installed together with it, not sure if they
also affected the build or not. I assume the ldap one is the main reason:

cracklib
cracklib-dicts
libpwquality
pam
cpio
dbus-libs
libudev
libblkid
libmount
libnih
upstart
libuser
sysvinit
xz
libutempter
util-linux
net-tools
procps
ethtool
mingetty
psmisc
iptables
iproute
kmod-libs
kmod
hwdata
udev
findutils
iputils
initscripts
cyrus-sasl
cyrus-sasl-devel
openldap-devel

Basically this was enough for the build to succeed:

yum install -y gcc gcc-c++ libtool libtool-ltdl-devel make pkgconfig
automake autoconf wget diffutils file openldap-devel

but removing just openldap-devel from that line results in the errors I posted.
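
(For reproducing this, the whole experiment fits in a small Dockerfile. The
sketch below is illustrative only: the base image, source path and configure
flags are assumptions, not part of the original report.)

  FROM centos:7
  RUN yum install -y gcc gcc-c++ libtool libtool-ltdl-devel make pkgconfig \
          automake autoconf wget diffutils file openldap-devel
  COPY squid-src/ /root/build/
  WORKDIR /root/build
  RUN autoreconf -ivf && ./configure && make -j"$(nproc)"
  # dropping openldap-devel from the yum line above reproduces the failure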

On Wed, Apr 13, 2022 at 7:19 PM Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 4/13/22 20:07, Ivan Larionov wrote:
> > Yes this worked. Thanks Eliezer.
> >
> > This means some of these dependencies are required but not caught by the
> > configure script.
> >
> > I'll try to figure out which specific one was the culprit.
>
> And maybe find the earlier error in the make log? The errors you shared
> did not look like a direct effect of some missing package, more like a
> side effect of something that went wrong earlier...
>
>
> Thank you both,
>
> Alex.
>
>
> > On Wed, Apr 13, 2022 at 4:36 PM Eliezer Croitoru wrote:
> >
> > For CentOS 7 use the next:
> >
> > RUN yum install -y epel-release \
> >
> > &&  yum clean all \
> >
> > &&  yum update -y \
> >
> > &&  yum install -y gcc gcc-c++ libtool libtool-ltdl make cmake
> \
> >
> > git pkgconfig sudo automake autoconf yum-utils
> > rpm-build \
> >
> > &&  yum install -y libxml2 expat-devel openssl-devel libcap
> > ccache \
> >
> > libtool-ltdl-devel cppunit cppunit-devel bzr git
> > autoconf \
> >
> > automake libtool gcc-c++ perl-Pod-MinimumVersion
> > bzip2 ed \
> >
> >  make openldap-devel pam-devel db4-devel
> > libxml2-devel \
> >
> > libcap-devel screen vim nettle-devel redhat-lsb-core
> > \
> >
> > autoconf-archive libtdb-devel libtdb
> > redhat-rpm-config rpm-build rpm-devel \
> >
> > &&  yum install -y perl-libwww-perl ruby ruby-devel \
> >
> > &&  yum clean all
> >
> >
> >
> > RUN yum update -y \
> >
> > &&  yum install -y systemd-units openldap-devel pam-devel \
> >
> > openssl-devel krb5-devel db4-devel expat-devel \
> >
> > libxml2-devel libcap-devel libtool
> > libtool-ltdl-devel \
> >
> > redhat-rpm-config libdb-devel
> > libnetfilter_conntrack-devel \
> >
> > gnutls-devel rpmdevtools wget \
> >
> > &&  yum clean all
> >
> >
> >
> >
> >
> > For CentOS 8 Stream:
> >
> > RUN dnf install -y epel-release dnf-plugins-core \
> >
> > &&  dnf config-manager --set-enabled powertools \
> >
> > &&  dnf clean all \
> >
> > &&  dnf update -y \
> >
> > &&  dnf install -y gcc gcc-c++ libtool libtool-ltdl make cmake
> \
> >
> > git pkgconfig sudo automake autoconf yum-utils
> rpm-build \
> >
> > &&  dnf install -y libxml2 expat-devel openssl-devel libcap
> ccache \
> >
> > libtool-ltdl-devel git autoconf \
> >
> > automake libtool gcc-c++ bzip2 ed \
> >
> > make openldap-devel pam-devel libxml2-devel \
> >
> > libcap-devel screen vim nettle-devel redhat-lsb-core
> \
> >
> > libtdb-devel libtdb redhat-rpm-config rpm-build
> rpm-devel \
> >
> > libnetfilter_conntrack-devel \
> >
> > &&  dnf install -y perl-libwww-perl ruby ruby-devel \
> >
> > &&  dnf clean all
> >
> >
> >
> > RUN dnf update -y \
> >
>

Re: [squid-users] squid compilation error in Docker

2022-04-13 Thread Ivan Larionov
Yes this worked. Thanks Eliezer.

This means some of these dependencies are required but not caught by the
configure script.

I'll try to figure out which specific one was the culprit.

On Wed, Apr 13, 2022 at 4:36 PM Eliezer Croitoru 
wrote:

> For CentOS 7 use the next:
>
> RUN yum install -y epel-release \
>
>&&  yum clean all \
>
>&&  yum update -y \
>
>&&  yum install -y gcc gcc-c++ libtool libtool-ltdl make cmake \
>
>git pkgconfig sudo automake autoconf yum-utils rpm-build \
>
>&&  yum install -y libxml2 expat-devel openssl-devel libcap ccache \
>
>libtool-ltdl-devel cppunit cppunit-devel bzr git autoconf \
>
>automake libtool gcc-c++ perl-Pod-MinimumVersion bzip2 ed \
>
> make openldap-devel pam-devel db4-devel libxml2-devel \
>
>libcap-devel screen vim nettle-devel redhat-lsb-core \
>
>autoconf-archive libtdb-devel libtdb redhat-rpm-config
> rpm-build rpm-devel \
>
>&&  yum install -y perl-libwww-perl ruby ruby-devel \
>
>&&  yum clean all
>
>
>
> RUN yum update -y \
>
>&&  yum install -y systemd-units openldap-devel pam-devel \
>
>openssl-devel krb5-devel db4-devel expat-devel \
>
>libxml2-devel libcap-devel libtool libtool-ltdl-devel \
>
>redhat-rpm-config libdb-devel libnetfilter_conntrack-devel \
>
>gnutls-devel rpmdevtools wget \
>
>&&  yum clean all
>
>
>
>
>
> For CentOS 8 Stream:
>
> RUN dnf install -y epel-release dnf-plugins-core \
>
>&&  dnf config-manager --set-enabled powertools \
>
>&&  dnf clean all \
>
>&&  dnf update -y \
>
>&&  dnf install -y gcc gcc-c++ libtool libtool-ltdl make cmake \
>
>git pkgconfig sudo automake autoconf yum-utils rpm-build \
>
>&&  dnf install -y libxml2 expat-devel openssl-devel libcap ccache \
>
>libtool-ltdl-devel git autoconf \
>
>automake libtool gcc-c++ bzip2 ed \
>
>make openldap-devel pam-devel libxml2-devel \
>
>libcap-devel screen vim nettle-devel redhat-lsb-core \
>
>libtdb-devel libtdb redhat-rpm-config rpm-build rpm-devel \
>
>libnetfilter_conntrack-devel \
>
>&&  dnf install -y perl-libwww-perl ruby ruby-devel \
>
>&&  dnf clean all
>
>
>
> RUN dnf update -y \
>
>&&  dnf install -y systemd-units openldap-devel pam-devel \
>
>openssl-devel krb5-devel expat-devel \
>
>libxml2-devel libcap-devel libtool libtool-ltdl-devel \
>
>redhat-rpm-config libdb-devel \
>
>gnutls-devel rpmdevtools wget \
>
>&&  dnf clean all
>
>
>
>
>
> 
>
> Eliezer Croitoru
>
> NgTech, Tech Support
>
> Mobile: +972-5-28704261
>
> Email: ngtech1...@gmail.com
>
>
>
> *From:* squid-users  *On
> Behalf Of *Ivan Larionov
> *Sent:* Thursday, April 14, 2022 01:34
> *To:* squid-users@lists.squid-cache.org
> *Subject:* [squid-users] squid compilation error in Docker
>
>
>
> Hi.
>
>
>
> I have no issues building squid normally, but when I try to do exactly the
> same steps in Docker I get the following errors:
>
>
>
> https://gist.github.com/xeron/5530fe9aa1f5bdcb6a72c6edd6476467
>
>
>
> Example from that log:
>
>
>
> cache_cf.o: In function `configFreeMemory()':
>
> /root/build/src/cache_cf.cc:2982: undefined reference to
> `Adaptation::Icap::TheConfig'
>
>
>
> I can't figure out what exactly is wrong. Doesn't look like any
> dependencies are missing.
>
>
>
> Here's my build script:
>
>
>
>   yum install -y autoconf automake file gcc72 gcc72-c++ libtool
> libtool-ltdl-devel pkgconfig diffutils \
> libxml2-devel libcap-devel openssl-devel
>
>   autoreconf -ivf
>
>   ./configure --program-prefix= --prefix=/usr --exec-prefix=/usr \
> --bindir=/usr/sbin --sbindir=/usr/sbin --sysconfdir=/etc/squid \
> --libdir=/usr/lib --libexecdir=/usr/lib/squid \
> --includedir=/usr/include --datadir=/usr/share/squid \
> --sharedstatedir=/usr/com --localstatedir=/var \
> --mandir=/usr/share/man --infodir=/usr/share/info \
> --enable-epoll --enable-removal-policies=heap,lru \
> --enable-storeio=aufs,rock \
> --enable-delay-pools --with-pthreads --enable-cache-digests \
> --with-large-files --with-filedescriptors=65536 \
> --enable-htcp
>
>   make -j$(nproc) install DESTDIR=$PWD/_destroot
>
>
>
> Any ideas?
>
>
>
> --
>
> With best regards, Ivan Larionov.
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>


-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid compilation error in Docker

2022-04-13 Thread Ivan Larionov
Hi.

I have no issues building squid normally, but when I try to do exactly the
same steps in Docker I get the following errors:

https://gist.github.com/xeron/5530fe9aa1f5bdcb6a72c6edd6476467

Example from that log:

cache_cf.o: In function `configFreeMemory()':
/root/build/src/cache_cf.cc:2982: undefined reference to
`Adaptation::Icap::TheConfig'

I can't figure out what exactly is wrong. Doesn't look like any
dependencies are missing.

Here's my build script:

  yum install -y autoconf automake file gcc72 gcc72-c++ libtool
libtool-ltdl-devel pkgconfig diffutils \
libxml2-devel libcap-devel openssl-devel

  autoreconf -ivf

  ./configure --program-prefix= --prefix=/usr --exec-prefix=/usr \
--bindir=/usr/sbin --sbindir=/usr/sbin --sysconfdir=/etc/squid \
--libdir=/usr/lib --libexecdir=/usr/lib/squid \
--includedir=/usr/include --datadir=/usr/share/squid \
--sharedstatedir=/usr/com --localstatedir=/var \
--mandir=/usr/share/man --infodir=/usr/share/info \
--enable-epoll --enable-removal-policies=heap,lru \
--enable-storeio=aufs,rock \
--enable-delay-pools --with-pthreads --enable-cache-digests \
--with-large-files --with-filedescriptors=65536 \
--enable-htcp

  make -j$(nproc) install DESTDIR=$PWD/_destroot

Any ideas?

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] client_delay_pools doesn't work as expected

2021-03-31 Thread Ivan Larionov
Hello.

We've recently had an incident where a misbehaving cluster of clients started
fetching a 4MB file from the squid cache at ~1200 RPS (later slowing to 600
RPS), which resulted in up to 2Gb/s of traffic sent to clients from each
of our squid hosts and quickly overloaded squid.

I'm trying to use client_delay_pools to limit bandwidth per client and
prevent misbehaving actors from saturating client-side network / CPU on
squid hosts.

However, I can't get it to work reliably. It seems to work as expected for a
cache MISS, e.g. I get a speed limit of 10MB/s. But it's completely broken
for a cache HIT: the speed I'm getting is ~5KB/s!

I'm using the following configuration:

client_delay_pools 1
client_delay_access 1 allow localnet
client_delay_access 1 deny all
client_delay_parameters 1 1000 2000
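
(For reference: per the client_delay_parameters documentation, the two
numbers are a restore rate in bytes/second and a maximum bucket size in
bytes, so an illustrative 10 MB/s per-client cap with a 20 MB burst bucket
would look like the line below; the values are an example, not my exact
production config.)

client_delay_parameters 1 10000000 20000000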

I'm testing with a big object that is already cached (a 2GB ISO file).

client_delay_pools disabled MISS: 20MB/s (probably speed limit on origin
side)
client_delay_pools disabled HIT: 110MB/s (probably EBS disk speed)

client_delay_pools enabled MISS: 10MB/s (limit from client_delay_parameters)
client_delay_pools enabled HIT: 5KB/s (what ???)

I retested with a smaller file (337MB) but it made no difference. I still got
~5KB/s download speed on a cache HIT.

Any ideas? Am I doing something wrong? Any other ways to limit client-side
bandwidth?

Squid version:

Squid Cache: Version 4.14
Service Name: squid
configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
'--includedir=/usr/include' '--datadir=/usr/share/squid'
'--sharedstatedir=/usr/com' '--localstatedir=/var'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs,rock'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--with-large-files' '--with-maxfd=16384' '--enable-htcp'

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-08-07 Thread Ivan Bulatovic
Hi Amos,

> > From what i remember there is a calculation for how much k per conn
> > should squid use.
>
> Aye;
>  256KB * number of currently open FD
>  + read_ahead_gap
>  + received size of current in-transit response (if cacheable MISS)

I tried to reduce the number of in-memory objects using the following (I
read somewhere that 4KB is the minimum block of memory that squid can
allocate):
  cache_mem 8 MB
  maximum_object_size 4 KB
  maximum_object_size_in_memory 4 KB
  read_ahead_gap 4 KB

But the above settings did not help much.

At the moment I am running a much lighter load on the squid VM to see
how it behaves.

So, right now, the machine has about 110,000 open TCP connections (I
guess half are from clients and the other half is to the Internet,
which my firewall also confirms). It has been running like this for
the last 4 hours or so.

Here is the situation (in the attachment you will find full command
print-outs and config file):

- Running squid 4.12 from diladele repository on Ubuntu 18.04 LTS

- RAM used: around 9 GB out of 16 GB (no swap is used)

- I am running 2 squid workers at the moment (see attached squid.conf)

- Top reports this (removed other processes, they practically have no
impact on the memory listing):

top - 10:22:03 up 18:00,  1 user,  load average: 1.88, 1.58, 1.43
Tasks: 169 total,   1 running,  94 sleeping,   0 stopped,   0 zombie
%Cpu(s): 13.6 us,  9.6 sy,  0.0 ni, 72.7 id,  0.1 wa,  0.0 hi,  3.9 si,  0.0 st
KiB Mem : 16380456 total,  5011560 free,  9205660 used,  2163236 buff/cache
KiB Swap: 12582904 total, 12582904 free,0 used.  7121124 avail Mem

   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
  1515 proxy 20   0 5713808 4.396g  14188 S  36.5 28.1 130:03.34 squid
  1514 proxy 20   0 4360348 3.329g  14380 S  28.9 21.3 104:15.21 squid

 - mgr:info shows some weird stats:
Number of clients accessing cache:  156   (which is exactly
twice the number of actual clients, but this is probably due to the
number of workers)
Maximum Resident Size: 32467680 KB   (which is 32GB. At no time
during these 4 hours has this value of RAM consumption ever been
reached. The memory is steadily, but slowly increasing to where it is
now - at 9 GB. I have no idea what this value is.)

I have no idea if the rest of the stats from mgr:info are OK or not, I
really have no way of checking that.

I added the memory_pools option to the configuration; we will see if it
helps (I think I already tried this, but I cannot be sure, as I ran a
lot of tests trying to fix this myself before I reached out to you):
memory_pools off

If there is anything else I can do to help with debugging this, please
let me know.

Thank you for your time and help,
Ivan


On Fri, Aug 7, 2020 at 2:23 AM Amos Jeffries  wrote:
>
> On 6/08/20 11:06 am, NgTech LTD wrote:
> > Hey Ivan,
> >
> > From what i remember there is a calculation for how much k per conn
> > should squid use.
>
> Aye;
>  256KB * number of currently open FD
>  + read_ahead_gap
>  + received size of current in-transit response (if cacheable MISS)
>
>
> > another thing is that squid is not returning memory once it took it.
>
> The calculation for this is _minimum_ 5MB per type of memory allocating
> object is retained by Squid for quick re-use. The mgr:mem report lists
> details of those allocations.
>
>
>
> Alex didn't mention this earlier but what I am seeing in your "top" tool
> output is that there are 5x 'squid' processes running. It looks like 4
> of them are SMP worker or disker processes each using 2.5GB of RAM.
>
> The "free" tool is confirming this with its report of "used: 10G" (4x
> 2.5GB) of memory actually being used on the machine.
>
> Most kernels' fork() implementation is terrible with virtual memory
> calculations. Most of that number will never actually be used. So they
> can be ignored so long as the per-process number does not exceed the
> actual physical RAM installed (beyond that kernel refuses to spawn with
> fork()).
>  The numbers your tools are reporting are kind of reasonable - maximum
> about 7GB *per process* allocated.
>
>
> The 41GB "resident size" is from old memory allocation APIs in the
> kernel which suffer from 32-bit issues. When this value has odd numbers
> and/or conflicts with the system tools - believe the tools instead.
>
>
>
> So to summarize; what I am seeing there is that during *Peak* load times
> your proxy workers (combined) are *maybe* using up to 41GB of memory. At
> the off-peak time you are doing your analysis reports they have dropped
> down to 10GB.
>  With one data point there is no sign of a memory leak happening. Just a
> normal machine handling far more peak traffic than its available amount
> of memory can cope with.
>

Re: [squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-08-05 Thread Ivan Bulatovic
Hi Eliezer,

In the original message I sent to the squid-users mailing list, I
attached listings from mgr:info and mgr:mem. The server is definitely
using a lot of connections (close to 200K), which is why I increased
the open-files limits in Linux as well as in squid.conf. And there are
a lot of requests per second when the server is running. I could
understand that it needs a lot of memory for the in-transit cache, but
that memory should later be released back to the OS once the request
load goes down. However, that is not the case: even days after there
is no load on the server, it stays at 11GB of RAM and 5.5GB of swap
used. The second I restart the squid process, everything goes back to
normal and the memory is released. That is why I suspect there is a
memory leak somewhere.

The server is an Ubuntu 18.04 LTS VM (running on a Hyper-V 2019 server),
with 8 virtual processors and 12GB of RAM (although I can increase
that if that is the problem, but I thought that without caching this
would be more than enough).

I am not using dynamic memory on Hyper-V (it is turned off for this VM).

Best regards,
Ivan

On Wed, Aug 5, 2020 at 7:14 PM NgTech LTD  wrote:
>
> I think that the mgr:info or another page there contains the amount of 
> requests per second etc.
> also netstat or ss -ntp might give some basic understanding about this server 
> size.
>
> are you using dynamic memory on the hyper-v hypervisor?
>
> Eliezer
>
> On Wed, Aug 5, 2020, 19:59 Ivan Bulatovic  wrote:
>>
>> Hi Alex,
>>
>> Thank you very much for your help.
>>
>> I opened a bug on bugs.squid-cache.org
>> (https://bugs.squid-cache.org/show_bug.cgi?id=5071).
>>
>> Best regards,
>> Ivan
>>
>> On Mon, Aug 3, 2020 at 10:02 PM Alex Rousskov
>>  wrote:
>> >
>> > On 8/3/20 9:11 AM, Ivan Bulatovic wrote:
>> >
>> > > Looks like squid has some serious memory issues when under heavy load
>> > > (90 servers that crawl Internet sites).
>> >
>> > > Maximum Resident Size: 41500720 KB
>> >
>> > If the above (unreliable) report matches your observations using system
>> > tools like "top", then it is indeed likely that your Squid is suffering
>> > from a memory leak -- 41GB is usually too much for most non-caching
>> > Squid instances.
>> >
>> > Identifying the leak may take some time, and I am not volunteering to do
>> > the necessary legwork personally, but the Squid Project does fix
>> > virtually all runtime leaks that we know about. If you want to speed up
>> > the process, one of the best things you can do is to run Squid under
>> > valgrind with a good suppression file. This requires building Squid with
>> > a special ./configure option. Several testing iterations may be
>> > necessary. If you are willing to do this, please file a bug report and
>> > somebody will guide you through the steps.
>> >
>> >
>> > > It just eats up memory, and
>> > > does not free it up even days after it is being used (with no load on
>> > > the proxy for days).
>> >
>> > Some memory retention is expected by default. See
>> > http://www.squid-cache.org/Doc/config/memory_pools/
>> >
>> > Unfortunately, AFAICT, your mgr:mem output does not show any obvious
>> > leaks -- all numbers are very small. If something is leaking a lot, then
>> > it is probably not pooled by Squid.
>> >
>> >
>> > HTH,
>> >
>> > Alex.
>> >
>> >
>> > > On Mon, Jul 20, 2020 at 10:46 PM Ivan Bulatovic wrote:
>> > >>
>> > >> Hi all,
>> > >>
>> > >> I am trying to configure squid to run as a forward proxy with no
>> > >> caching (cache deny all) with an option to choose the outgoing IP
>> > >> address based on the username. So all squid has to do is to use a
>> > >> certain outgoing IP address for a certain user, return the data from
>> > >> the server to that user and cache nothing.
>> > >>
>> > >> For that I created a special authentication helper and used the ACLs
>> > >> and tcp_outgoing_address to create a lot of users and outgoing IP
>> > >> addresses (about 260 at the moment). Example (not the real IP I use,
>> > >> of course):
>> > >>
>> > >> acl use_IP1 proxy_auth user1
>> > >> tcp_outgoing_address 1.2.3.4   use_IP1
>> > >>
>> > >> I also configured the squid to use 4 workers, but this happens even
>> &

Re: [squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-08-05 Thread Ivan Bulatovic
Hi Alex,

Thank you very much for your help.

I opened a bug on bugs.squid-cache.org
(https://bugs.squid-cache.org/show_bug.cgi?id=5071).

Best regards,
Ivan

On Mon, Aug 3, 2020 at 10:02 PM Alex Rousskov
 wrote:
>
> On 8/3/20 9:11 AM, Ivan Bulatovic wrote:
>
> > Looks like squid has some serious memory issues when under heavy load
> > (90 servers that crawl Internet sites).
>
> > Maximum Resident Size: 41500720 KB
>
> If the above (unreliable) report matches your observations using system
> tools like "top", then it is indeed likely that your Squid is suffering
> from a memory leak -- 41GB is usually too much for most non-caching
> Squid instances.
>
> Identifying the leak may take some time, and I am not volunteering to do
> the necessary legwork personally, but the Squid Project does fix
> virtually all runtime leaks that we know about. If you want to speed up
> the process, one of the best things you can do is to run Squid under
> valgrind with a good suppression file. This requires building Squid with
> a special ./configure option. Several testing iterations may be
> necessary. If you are willing to do this, please file a bug report and
> somebody will guide you through the steps.
>
>
> > It just eats up memory, and
> > does not free it up even days after it is being used (with no load on
> > the proxy for days).
>
> Some memory retention is expected by default. See
> http://www.squid-cache.org/Doc/config/memory_pools/
>
> Unfortunately, AFAICT, your mgr:mem output does not show any obvious
> leaks -- all numbers are very small. If something is leaking a lot, then
> it is probably not pooled by Squid.
>
>
> HTH,
>
> Alex.
>
>
> > On Mon, Jul 20, 2020 at 10:46 PM Ivan Bulatovic wrote:
> >>
> >> Hi all,
> >>
> >> I am trying to configure squid to run as a forward proxy with no
> >> caching (cache deny all) with an option to choose the outgoing IP
> >> address based on the username. So all squid has to do is to use a
> >> certain outgoing IP address for a certain user, return the data from
> >> the server to that user and cache nothing.
> >>
> >> For that I created a special authentication helper and used the ACLs
> >> and tcp_outgoing_address to create a lot of users and outgoing IP
> >> addresses (about 260 at the moment). Example (not the real IP I use,
> >> of course):
> >>
> >> acl use_IP1 proxy_auth user1
> >> tcp_outgoing_address 1.2.3.4   use_IP1
> >>
> >> I also configured the squid to use 4 workers, but this happens even
> >> when I use only one worker (default)
> >>
> >> And this works. However, under heavy load, Squid eats all of the RAM
> >> and then starts going to swap. And the memory usage does not drop when
> >> I remove all the load from squid (I shut down all clients).
> >>
> >> I left it to see if the memory will be freed but even after leaving it
> >> for an hour the info page reports this:
> >> Cache information for squid:
> >> Hits as % of all requests:  5min: 0.0%, 60min: 0.0%
> >> Hits as % of bytes sent:5min: 0.0%, 60min: 1.1%
> >> Memory hits as % of hit requests:   5min: 0.0%, 60min: 0.0%
> >> Disk hits as % of hit requests: 5min: 0.0%, 60min: 100.0%
> >> Storage Swap size:  0 KB
> >> Storage Swap capacity:   0.0% used, 100.0% free
> >> Storage Mem size:   0 KB
> >> Storage Mem capacity:0.0% used, 100.0% free
> >> Mean Object Size:   0.00 KB
> >> Requests given to unlinkd:  0
> >>
> >> Resource usage for squid:
> >> UP Time:255334.875 seconds
> >> CPU Time:   7122.436 seconds
> >> CPU Usage:  2.79%
> >> CPU Usage, 5 minute avg:0.05%
> >> CPU Usage, 60 minute avg:   37.66%
> >> Maximum Resident Size: 41500720 KB
> >> Page faults with physical i/o: 1003410
> >>
> >> And here is the listing of free and top commands (with no load on the 
> >> server):
> >>
> >> # free -h
> >>   totalusedfree  shared  buff/cache   
> >> available
> >> Mem:11G 10G791M676K491M
> >> 1.0G
> >> Swap:   11G5.5G6.5G
> >>
> >> # top
> >> top - 14:12:32 up 3 days,  1:30,  1 user,  load average: 0.00, 0.00, 0.00
> >&

Re: [squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-08-03 Thread Ivan Bulatovic
Hi all,

I tried with the stock squid from Ubuntu 18.04 (version 4.10) and a basic
config, but still no luck. Here is the config I tried:
-
acl localnet src 10.20.0.0/16   # My network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local
(directly plugged) machines

acl SSL_ports port 443

acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

acl CONNECT method CONNECT

cache deny all

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access allow localnet manager
http_access deny manager
http_access deny to_localhost
http_access allow localnet
http_access allow localhost
http_access deny all

http_port localhost:3128
http_port 10.20.6.8:50505

coredump_dir /var/spool/squid

shutdown_lifetime 2 seconds
max_filedescriptors 262143

server_persistent_connections off
client_persistent_connections off

request_header_access Via deny all
request_header_access X-Forwarded-For deny all
dns_v4_first on


It looks like squid has some serious memory issues under heavy load
(90 servers that crawl Internet sites). It just eats up memory and
does not free it even days after it was used (with no load on
the proxy for days). So I guess I have to look for another solution.

Best regards,
Ivan Bulatovic


On Mon, Jul 20, 2020 at 10:46 PM Ivan Bulatovic
 wrote:
>
> Hi all,
>
> I am trying to configure squid to run as a forward proxy with no
> caching (cache deny all) with an option to choose the outgoing IP
> address based on the username. So all squid has to do is to use a
> certain outgoing IP address for a certain user, return the data from
> the server to that user and cache nothing.
>
> For that I created a special authentication helper and used the ACLs
> and tcp_outgoing_address to create a lot of users and outgoing IP
> addresses (about 260 at the moment). Example (not the real IP I use,
> of course):
>
> acl use_IP1 proxy_auth user1
> tcp_outgoing_address 1.2.3.4   use_IP1
>
> I also configured the squid to use 4 workers, but this happens even
> when I use only one worker (default)
>
> And this works. However, under heavy load, Squid eats all of the RAM
> and then starts going to swap. And the memory usage does not drop when
> I remove all the load from squid (I shut down all clients).
>
> I left it to see if the memory will be freed but even after leaving it
> for an hour the info page reports this:
> Cache information for squid:
> Hits as % of all requests:  5min: 0.0%, 60min: 0.0%
> Hits as % of bytes sent:5min: 0.0%, 60min: 1.1%
> Memory hits as % of hit requests:   5min: 0.0%, 60min: 0.0%
> Disk hits as % of hit requests: 5min: 0.0%, 60min: 100.0%
> Storage Swap size:  0 KB
> Storage Swap capacity:   0.0% used, 100.0% free
> Storage Mem size:   0 KB
> Storage Mem capacity:0.0% used, 100.0% free
> Mean Object Size:   0.00 KB
> Requests given to unlinkd:  0
>
> Resource usage for squid:
> UP Time:255334.875 seconds
> CPU Time:   7122.436 seconds
> CPU Usage:  2.79%
> CPU Usage, 5 minute avg:0.05%
> CPU Usage, 60 minute avg:   37.66%
> Maximum Resident Size: 41500720 KB
> Page faults with physical i/o: 1003410
>
> And here is the listing of free and top commands (with no load on the server):
>
> # free -h
>   totalusedfree  shared  buff/cache   
> available
> Mem:11G 10G791M676K491M
> 1.0G
> Swap:   11G5.5G6.5G
>
> # top
> top - 14:12:32 up 3 days,  1:30,  1 user,  load average: 0.00, 0.00, 0.00
> Tasks: 177 total,   1 running, 102 sleeping,   0 stopped,   0 zombie
> %Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 
> st
> %Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 
> st
> %Cpu2  :  0.0 us,  0.3 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 
> st
> %Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 
> st
> %Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 
> st
> %Cpu5  :  0.0 us,  0.

[squid-users] High memory usage under load with caching disabled, memory is not being freed even with no load

2020-07-20 Thread Ivan Bulatovic
 diladele repository)
Hardware: Hyper-V virtual machine with 8 vCPU, 12GB of RAM

I can not understand what is eating all of the memory, if I disabled the cache.

Maybe I configured something wrong but I can not find what.

Thank you for any help you can provide.

Best regards,
Ivan
HTTP/1.1 200 OK
Server: squid
Mime-Version: 1.0
Date: Mon, 20 Jul 2020 19:45:24 GMT
Content-Type: text/plain
Expires: Mon, 20 Jul 2020 19:45:24 GMT
Last-Modified: Mon, 20 Jul 2020 19:45:24 GMT
Connection: close

by kid1 {
Current memory usage:
Pool Obj Size   Chunks  
Allocated   In Use  
IdleAllocations Saved   Rate
 (bytes)KB/chobj/ch (#)  usedfreepart%Frag  
 (#) (KB)high (KB)   high (hrs)  %Tot   (#)  (KB)high 
(KB)   high (hrs)  %alloc (#)  (KB)high (KB)  (#)  %cnt 
   %vol   (#)/sec 
mem_node 4136   
 130 526 796 1.5824.513  127 513 796 1.58   
 97.692  3   13  291 133952  0.217   2.426   0.179
net_db_name32   
 5712179 180 1.478.333   5712179 180 1.47   
 100.000 0   0   3   129 0.000   0.000   0.000
netdbEntry168   
 983 162 165 2.077.529   983 162 165 2.07   
 100.000 0   0   17  66  0.000   0.000   0.000
cbdata idns_query (18)   8696   
 15  128 247 1.865.947   0   0   247 1.86   
 0.000   15  128 247 93211   0.151   3.550   0.000
Short Strings  40   
 3237127 22532   2.335.903   3072120 22532   2.33   
 94.903  165 7   11522570606341.632  4.503   0.583
ipcache_entry 128   
 930 117 126 1.795.427   920 115 126 1.79   
 98.925  10  2   11  60137   0.097   0.034   0.000
HttpRequest  1880   
 63  116 75579   2.335.400   51  94  75579   2.33   
 80.952  12  23  3884717076  1.161   5.904   0.000
cbdata clientReplyContext (20)   4352   
 12  51  174739  2.332.381   0   0   174739 
 2.330.000   12  51  8989717004  1.161   13.666  0.000
HttpHeaderEntry56   
 914 50  90432.332.333   861 48  90432.33   
 94.201  53  3   463 6280435 10.171  1.540   0.000
Stream   4216   
 12  50  169279  2.332.306   0   0   169279  2.33   
 0.000   12  50  8708717004  1.161   13.239  0.000
4KB Strings  4096   
 12  48  164460  2.332.241   0   0   164460  2.33   
 0.000   12  48  8456721270  1.168   12.938  0.000
cbdata Tree (1)   176   
 274 48  48  71.60   2.199   274 48  48  71.60  
 100.000 0   0   48  274 0.000   0.000   0.000
ClientInfo448   
 98  43  43  0.652.002   98  43  43  0.65   
 100.000 0   0   0   0   0.000   0.000   0.000
MemObject 344   
 124 42  43  1.681.945   124 42  43  1.68   
 100.000 0   0   3   128878  0.209   0.194   0.045
HttpReply 288   
 124 35  38  1.681.628   124 35  38  1.68   
 100.000 0   0   4   388311  0.629   0.490   0.045
MemBlob48   
 737 35  58232.331.613   700 33  58232.33   
 94.980  37  2   298 3967500 6.426   0.834   0.090
AndNode   120   
 282 34  34  0.961.543   282 34  34  0.96   
 100.000 0   0   33  281 0.000   0.000   0.000
32K Buffer   32768

[squid-users] Bad HTTP header error on non-standard HTTP response code

2019-02-06 Thread Ivan Larionov
Hello.

We've recently noticed a difference in behavior between squid v3 and v4.

On an HTTP response with a non-standard 4-digit HTTP status code, for example
something like this:

HTTP/1.1 5009 Update Error
Connection: Closed

{"code":500911,"message":"update record error"}

squid 3 just passes this response to the client, but squid 4 returns a 502
with the ERR_INVALID_RESP template and writes the following to cache.log:

WARNING: HTTP: Invalid Response: Bad header encountered from … AKA …

While I understand that a 4-digit response code is not standard, I'd like to
know:

Is this expected behavior, and is there an option to change squid 4's
behavior to match squid 3?

Thanks!

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tcp_outgoing_address issue how to deny traffic to other IPs

2018-02-22 Thread Ivan Larionov
Your balancing rules are incorrect. This is how we balance ~33% per IP:

# 33% of traffic per local IP
acl third random 1/3
acl half random 1/2

tcp_outgoing_address X.X.X.2 third
tcp_outgoing_address X.X.X.3 half
tcp_outgoing_address X.X.X.4

Read https://wiki.squid-cache.org/Features/AclRandom.

Basically, for 1/5 per address you need a cascade like this:

acl fifth random 1/5
acl fourth random 1/4
acl third random 1/3
acl half random 1/2

tcp_outgoing_address XX.3X.YYY.10 fifth
tcp_outgoing_address XX.X3.YYY.21 fourth
tcp_outgoing_address XX.5X.YYY.31 third
tcp_outgoing_address XX.X9.YYY.34 half
tcp_outgoing_address XX.5X.YYY.38
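
The arithmetic behind the cascade (each acl is only evaluated for requests
not already matched by the lines above it, so each address ends up with
roughly 20% of the traffic):

fifth:  1/5 of everything              = 20%
fourth: 1/4 of the remaining 4/5       = 20%
third:  1/3 of the remaining 3/5       = 20%
half:   1/2 of the remaining 2/5       = 20%
last line (no acl): the remaining 1/5  = 20%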


On Thu, Feb 22, 2018 at 10:15 AM, Patrick Chemla <
patrick.che...@performance-managers.com> wrote:

> Hi,
>
> I have googled for days and can't find the right settings to distribute
> outgoing requests over part of the local IPs of my server.
>
> This is my conf I built according to what I found on docs and forums:
>
>
> Squid Cache: Version 4.0.17
>
> 
>
> blablabla
>
> blablabla
>
> blablabla
>
> 
>
> acl Percent001 random 1/5
> acl Percent002 random 1/5
> acl Percent003 random 1/5
> acl Percent004 random 1/5
> acl Percent005 random 1/5
>
> server_persistent_connections off
>
>
> tcp_outgoing_address XX.3X.YYY.10 Percent001
> tcp_outgoing_address XX.X3.YYY.21 Percent002
> tcp_outgoing_address XX.5X.YYY.31 Percent003
> tcp_outgoing_address XX.X9.YYY.34 Percent004
> tcp_outgoing_address XX.5X.YYY.38 Percent005
>
> balance_on_multiple_ip on
>
> forwarded_for delete
> via off
>
> My problem is that this server has
>
> - a main IP MA.IN.IP.00 of course
>
> - a localhost 127.0.0.1 of course
>
> - some secondary IPs attached to the same interface as the main IP
>
>
> The input traffic comes through one of the secondaries, and I need the
> output traffic to go out randomly through the other secondary IPs, with no
> squid traffic from the main IP.
>
> When I look at the log, or use a network tcpdump analyzer, I can see that
> there is outgoing squid traffic on this IP, and I can't find how to prevent
> tcp_outgoing_address from using the main IP.
>
> I hope it's clear; I need help after searching many combinations for days.
>
> Many thanks
>
> Patrick
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Cache reference age for heap LRU/LFUDA and rock/aufs

2018-02-12 Thread Ivan Larionov
On Fri, Feb 9, 2018 at 7:50 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

>
> I cannot answer your question for aufs, but please note that rock
> cache_dirs do not support/have/use a configurable replacement policy:
> Each incoming object is assigned a slot based on its key hash. With
> modern rock code, it is possible to remove that limitation IIRC, but
> nobody have done that.
>

Yeah, I figured this out from the source code, and I'm extremely surprised
that it was never mentioned in the documentation. I think it will be a huge
blocker in our squid 4 + SMP + rock migration plan.

So what does rock do when storage is full then?


>
>
> > If you're wondering why would we need to know that – it's related to
> > GDPR and removing data of closed customer's accounts. We need to make
> > sure that we don't have any "not being accessed anymore" objects older
> > than "data retention period" days.
>
> If it is important to get this right, then I would not trust replacement
> policy metadata with this: The corresponding code interfaces look
> unreliable to me, and access counts/timestamps for a ufs-based cache_dir
> are not updated across Squid restarts when the swap log is lost (at least).
>
>
It's actually fine: we never restart squid, and if it restarts for any
unexpected reason (host reboot, crash or whatever) we just replace the host.


> I would instead configure Squid to prohibit serving hits that are too
> old. That solution does not match your problem exactly, but it may be
> good enough and should work a lot more reliably across all cache_dirs.
> If there is no "age" ACL to use with the send_hit directive, then you
> may need to add one.
>
> http://www.squid-cache.org/Doc/config/send_hit/
>
> You may also be able to accomplish the same using refresh_pattern, but I
> am a little worroed about various exceptional/special conditions
> implemented on top of that directive. Others on this list may offer
> better guidance in this area.
>
>
I was thinking about a similar solution, but this is exactly why I wasn't able
to use it: there seems to be no ACL suitable for such a task.

We could always just replace the host every month or so, but that would mean
starting with a cold cache every time, which I wanted to avoid.

I found this debug output for heap which could probably help in estimating
the approximate cache age, but it doesn't work with rock because rock uses a
"simple scan" policy.

> src/repl/heap/store_repl_heap.cc:debugs(81, 3, "Heap age set to "
<< h->theHeap->age);


> HTH,
>
> Alex.
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Cache reference age for heap LRU/LFUDA and rock/aufs

2018-02-09 Thread Ivan Larionov
Hello!

Is it possible to get a metric similar to "LRU reference age" (or "LRU
expiration") when using heap LRU/LFUDA and aufs/rock?

What we need to do is to figure out the age of the oldest least accessed
object in the cache. Or the age of the last replaced object.

If my description is somehow unclear: we need to answer the question "How
many days ago was the oldest object that is no longer being accessed put
into the cache?"

With aufs/lru we had "LRU reference age" or something like it in the mgr:info
report, but with the heap lru/lfuda and rock/aufs we currently use I don't
see it there. The SNMP metric also shows:

> SQUID-MIB::cacheCurrentLRUExpiration.0 = Timeticks: (0) 0:00:00.00

If you're wondering why we would need to know that: it's related to GDPR
and removing data of closed customers' accounts. We need to make sure that
we don't have any "not being accessed anymore" objects older than "data
retention period" days.


Thanks!

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] rock storage and max-swap-rate

2018-01-18 Thread Ivan Larionov
Thanks Amos.

According to AWS docs:

> I/O size is capped at 256 KiB for SSD volumes
> When small I/O operations are physically contiguous, Amazon EBS attempts
to merge them into a single I/O up to the maximum size. For example, for
SSD volumes, a single 1,024 KiB I/O operation counts as 4 operations
(1,024÷256=4), while 8 contiguous I/O operations at 32 KiB each count as
1operation (8×32=256). However, 8 random I/O operations at 32 KiB each
count as 8 operations. Each I/O operation under 32 KiB counts as 1
operation.

So it's not so easy to figure out the correlation between squid swap ops and
AWS EBS ops. What I see from here is:

* Multiple squid swap in or swap out ops reading/writing contiguous blocks
could be merged into one 256KB IO operation.
* Random squid operations could be handled as single 32KB IO operation.
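
So, as a rough back-of-the-envelope estimate (assuming each squid swap op
moves at most one 32 KiB I/O page and therefore never counts as more than
one EBS op on its own):

max-swap-rate=1200, fully random I/O:                     up to ~1200 EBS ops/sec
max-swap-rate=1200, 8 contiguous ops merged per 256 KiB:  as few as ~150 EBS ops/sec

The real number can still drift above the configured rate if a single write
request turns into multiple pwrite() calls, as Alex described.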

On Thu, Jan 18, 2018 at 3:20 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 19/01/18 12:04, Ivan Larionov wrote:
>
>> Thank you for the fast reply!
>>
>> read_ops and write_ops is AWS EBS metric and in general it correlates
>> with OS-level reads/s writes/s stats which iostat shows.
>>
>> So if I understand you correctly max-swap-rate doesn't limit disk IOPS
>> but limits squid swap ops instead and every squid operation could in theory
>> use more than 1 disk IO operation. This means we can't really say "limit
>> swap ops to 1500 because our disk can handle 1500 iops" but should figure
>> out the number after testing different values.
>>
>> Ok, I suppose I'll just do what Rock documentation says – will test
>> different values and figure out what works for us.
>>
>>
>
> If you know what the OS level IOP size is (eg usually 4KB) and the Squid
> rock IOP size Alex mentioned of 32KB. That should give you a number to
> divide the disk IOPS limit you want with to get a rough estimate for the
> appropriate Squid setting.
>
> The tuning bit is just to see how much variance from that is caused by
> your traffic objects being different from the 32KB slot size.
>
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] rock storage and max-swap-rate

2018-01-18 Thread Ivan Larionov
Thank you for the fast reply!

read_ops and write_ops are AWS EBS metrics, and in general they correlate with
the OS-level reads/s and writes/s stats which iostat shows.

So if I understand you correctly, max-swap-rate doesn't limit disk IOPS but
limits squid swap ops instead, and every squid operation could in theory use
more than 1 disk I/O operation. This means we can't really say "limit swap
ops to 1500 because our disk can handle 1500 IOPS" but should figure out
the number by testing different values.

OK, I suppose I'll just do what the Rock documentation says: test
different values and figure out what works for us.

Thanks again.

On Thu, Jan 18, 2018 at 2:54 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 01/18/2018 03:16 PM, Ivan Larionov wrote:
>
> > cache_dir max-swap-rate documentation says that swap in requests
> > contribute to measured swap rate. However in our squid 4 load test we
> > see that read_ops + write_ops significantly exceeds the max-swap-rate we
> > set and squid doesn't limit it.
>
> In this context, a single Squid "op" is a read or write request from
> worker to the disker process. These requests are up to one I/O page in
> size. A single I/O page is 32*1024 bytes. See Ipc::Mem::PageSize().
>
> * A single read request usually ends up being a single pread(2) system
> call that reads at most one I/O page worth of data from disk. See
> diskerRead().
>
> * A single write request usually ends up being a single pwrite(2) system
> call that writes at most one I/O page worth of data to disk. However, if
> that single pwrite() does not write everything a worker has requested to
> write, then Squid will make more pwrite() calls, up to 10 calls total.
> See diskerWrite().
>
> Within a single cache miss transaction, the rock code should accumulate
> small swapout requests from Store into page-size write requests to
> disker, but I do not remember how complete those optimizations are: It
> is possible that smaller-than-page writes get through to diskers,
> increasing the number of write requests. Same for reading cache hits.
>
>
> What is the "op" in read_ops and write_ops you have measured?
>
>
> Since Squid does not (and does not want to) have access to low-level
> disk stats and since Squid cannot assume exlusive disk ownership, the
> rate-limiting feature for rock cache_dirs does not know how many
> low-level disk operations the disk is doing and how those operations
> correspond to what Squid is asking the disk to do.
>
>
> HTH,
>
> Alex.
>
>
> > I tried to set it to 200 to confirm that it actually works and saw that
> > it does. Squid started warning about exceeding max-swap-rate. But looks
> > like real limit is higher than the value we set in configuration.
> >
> > Hardware:
> >
> > AWS GP2 EBS (SSD) 600GB, 1500 iops baseline performance, 3000 iops
> > burstable.
> >
> > Config:
> >
> > cache_dir rock /mnt/services/squid/cache 435200 swap-timeout=500
> > max-swap-rate=1200 slot-size=16384
> >
> > IOPS squid pushes under our load test:
> >
> > read ~800 ops/sec
> > write ~1100 ops/sec
> >
> > In summary it gives us ~1900 ops/sec which exceeds AWS limit of 1500
> > ops/sec and after spending too much "burst balance" we started getting
> > throttled from AWS side.
> >
> > Could you please comment on this behavior? What the limit should we set
> > to stay under 1500 ops/sec for swap in + swap out operations?
> >
> > Thanks.
> >
> > Squid version:
> >
> > Squid Cache: Version 4.0.22
> > Service Name: squid
> > configure options:  '--program-prefix=' '--prefix=/usr'
> > '--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
> > '--sysconfdir=/etc/squid' '--libdir=/usr/lib'
> > '--libexecdir=/usr/lib/squid' '--includedir=/usr/include'
> > '--datadir=/usr/share' '--sharedstatedir=/usr/com'
> > '--localstatedir=/var' '--mandir=/usr/share/man'
> > '--infodir=/usr/share/info' '--enable-epoll'
> > '--enable-removal-policies=heap,lru' '--enable-storeio=aufs,rock'
> > '--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
> > '--with-large-files' '--with-maxfd=16384' '--enable-htcp'
> >
> > --
> > With best regards, Ivan Larionov.
> >
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
> >
>
>


-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] rock storage and max-swap-rate

2018-01-18 Thread Ivan Larionov
Hello.

The cache_dir max-swap-rate documentation says that swap-in requests contribute
to the measured swap rate. However, in our squid 4 load test we see that
read_ops + write_ops significantly exceeds the max-swap-rate we set, and
squid doesn't limit it.

I tried setting it to 200 to confirm that it actually works and saw that it
does: squid started warning about exceeding max-swap-rate. But it looks like
the real limit is higher than the value we set in the configuration.

Hardware:

AWS GP2 EBS (SSD) 600GB, 1500 iops baseline performance, 3000 iops
burstable.

Config:

cache_dir rock /mnt/services/squid/cache 435200 swap-timeout=500
max-swap-rate=1200 slot-size=16384

IOPS squid pushes under our load test:

read ~800 ops/sec
write ~1100 ops/sec

In total that gives us ~1900 ops/sec, which exceeds the AWS limit of 1500
ops/sec, and after spending too much "burst balance" we started getting
throttled on the AWS side.

Could you please comment on this behavior? What limit should we set to
stay under 1500 ops/sec for swap-in + swap-out operations?

Thanks.

Squid version:

Squid Cache: Version 4.0.22
Service Name: squid
configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
'--includedir=/usr/include' '--datadir=/usr/share'
'--sharedstatedir=/usr/com' '--localstatedir=/var'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs,rock'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--with-large-files' '--with-maxfd=16384' '--enable-htcp'

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid doesn't cache objects in memory when using SMP and shared memory cache

2018-01-16 Thread Ivan Larionov
Yeah, I meant Vary in the responses to the requests I tried.

The fact that you can't reproduce it with this URL and the fact that it
mostly affects mp3/wav files gave me an idea.

The specific thing about our environment is squid's cache_peer parent, which
transcodes audio files to ulaw (a format our backend supports). This explains
why it's audio files. It doesn't explain why it works with squid 3 non-shared
memory and squid 4 shared memory.

I verified that for a direct request the memory cache works with squid 3 +
shared memory.

The difference between direct and non-direct (transcoded) response for
http://techslides.com/demos/samples/sample.mp3:

* "Content-Type: audio/mpeg" for direct, "Content-Type: audio/ulaw" for
non-direct.
* No "Content-Length" header for non-direct.

What do you think? Could these 2 points lead to the issue I see? Why does
it work in all situations except squid 3 + shared memory?

On Tue, Jan 16, 2018 at 3:17 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 01/16/2018 02:40 PM, Ivan Larionov wrote:
> > So it's definitely not related to Vary, there's no such header in
> > requests I tried.
>
> Just to avoid misunderstanding, please note that Vary is a response header.
>
>
> > Also this issue affects squid even with 1 worker if
> > shared memory is forced to on.
>
> This matches the current suspicion that there is something wrong with
> the shared memory cache (in your environment).
>
> > It's not like it makes any sense but how about random example url:
> >
> > http://techslides.com/demos/samples/sample.mp3
> >
> > squid 4, 2 workers, shared cache, no disk cache – MEM_HIT
> > squid 3, same config – MISS every time
> > squid 3, no shared cache – MEM_HIT
> >
> > Could you do a brief test with this URL may be and confirm that I'm not
> > the only one who see this issue?
>
> I cannot confirm that: In primitive wget tests with Squid v3.5 (bzr
> r14182), I am getting shared memory hits with the above URL, with or
> without cache_dirs, with or without SMP.
>
> Alex.
>
>
> > On Tue, Jan 16, 2018 at 6:25 AM, Alex Rousskov wrote:
> >
> > On 01/15/2018 09:25 PM, Ivan Larionov wrote:
> > > My total hit ratio decreased in ~2 times from 40% to 20%
> >
> > > On 01/14/2018 10:53 PM, Ivan Larionov wrote:
> > >
> > > > After migrating squid from non-SMP/aufs to SMP/rock memory
> cache hit
> > > > ratio dropped significantly. Like from 50-100% to 1-5%.
> > >
> > > > And disk cache hit ratio went up from 15-50% to stable
> 60-65%.
> >
> >
> > The combination of the three statements above may be a sign of a
> problem
> > unrelated to Vary: Since the disk cache can cache everything the
> memory
> > cache can and is typically much larger than the memory cache, the
> > incapacitation of a memory cache (due to Vary) should not have a
> > significant effect on overall hit ratio. It should only affect hit
> > response time.
> >
> > The only known culprit I can think of in this context are hits for
> > being-cached objects: Rock lacks code that allows Squid to read
> > being-written objects. The shared memory cache has that code
> already. If
> > your workload has a lot of cases where clients request a
> being-written
> > object, then the overall hit ratio should go down after the memory
> cache
> > incapacitation (due to Vary).
> >
> > I suspect something else is in play here though, especially if you
> see a
> > different picture with Squid v4 -- the known problem discussed above
> is
> > present in all Squid versions. I second Amos's recommendation to
> focus
> > on v4 because it is unlikely that any complicated problems are going
> to
> > be fixed in v3, even if you triage them well.
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> >
> >
> > --
> > With best regards, Ivan Larionov.
>
>


-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid doesn't cache objects in memory when using SMP and shared memory cache

2018-01-16 Thread Ivan Larionov
So it's definitely not related to Vary; there's no such header in the requests
I tried. Also, this issue affects squid even with 1 worker if shared memory is
forced on.

An interesting thing I noticed is that, according to the log file, a lot of
images are actually cached in memory but sound files (mp3/wav) are not. It
doesn't make much sense, but here's a random example URL:

http://techslides.com/demos/samples/sample.mp3

squid 4, 2 workers, shared cache, no disk cache – MEM_HIT
squid 3, same config – MISS every time
squid 3, no shared cache – MEM_HIT

Could you maybe do a brief test with this URL and confirm that I'm not the
only one who sees this issue?
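
For reference, the test itself is trivial; something like this, assuming squid
is on 127.0.0.1:3128 and the default access.log location, where the second
request should be logged as TCP_MEM_HIT when memory caching works:

curl -s -o /dev/null -x 127.0.0.1:3128 http://techslides.com/demos/samples/sample.mp3
curl -s -o /dev/null -x 127.0.0.1:3128 http://techslides.com/demos/samples/sample.mp3
grep sample.mp3 /var/log/squid/access.log | tail -n 2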

On Tue, Jan 16, 2018 at 6:25 AM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 01/15/2018 09:25 PM, Ivan Larionov wrote:
> > My total hit ratio decreased in ~2 times from 40% to 20%
>
> > On 01/14/2018 10:53 PM, Ivan Larionov wrote:
> >
> > > After migrating squid from non-SMP/aufs to SMP/rock memory cache
> hit
> > > ratio dropped significantly. Like from 50-100% to 1-5%.
> >
> > > And disk cache hit ratio went up from 15-50% to stable 60-65%.
>
>
> The combination of the three statements above may be a sign of a problem
> unrelated to Vary: Since the disk cache can cache everything the memory
> cache can and is typically much larger than the memory cache, the
> incapacitation of a memory cache (due to Vary) should not have a
> significant effect on overall hit ratio. It should only affect hit
> response time.
>
> The only known culprit I can think of in this context are hits for
> being-cached objects: Rock lacks code that allows Squid to read
> being-written objects. The shared memory cache has that code already. If
> your workload has a lot of cases where clients request a being-written
> object, then the overall hit ratio should go down after the memory cache
> incapacitation (due to Vary).
>
> I suspect something else is in play here though, especially if you see a
> different picture with Squid v4 -- the known problem discussed above is
> present in all Squid versions. I second Amos's recommendation to focus
> on v4 because it is unlikely that any complicated problems are going to
> be fixed in v3, even if you triage them well.
>
>
> HTH,
>
> Alex.
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid doesn't cache objects in memory when using SMP and shared memory cache

2018-01-15 Thread Ivan Larionov
My total hit ratio decreased by about 2x, from 40% to 20% (it could be a cold
cache, but it stayed at this level for a day without any sign of improvement).

I'll retry the tests, making sure there's no Vary header, and will also try
1 worker with shared cache tomorrow. But even if it is this bug, caching works
as expected in squid 4, so that will probably be a better solution than messing
with headers.

On Mon, Jan 15, 2018 at 10:26 AM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 01/14/2018 10:53 PM, Ivan Larionov wrote:
>
> > After migrating squid from non-SMP/aufs to SMP/rock memory cache hit
> > ratio dropped significantly. Like from 50-100% to 1-5%.
>
> This could be a side effect of not supporting Vary caching in shared
> memory: https://bugs.squid-cache.org/show_bug.cgi?id=3806#c9
>
>
> > And disk cache hit ratio went up from 15-50% to stable 60-65%.
>
> I hope your total/combined hit ratio improved overall.
>
>
> > it looks like in SMP/rock mode squid avoids using memory for small
> > files like 1-3KB but uses it for 10KB+ files.
>
> No, there is no such size-discrimination code in Squid.
>
>
> > I started tracking down the issue with disabling disk cache completely
> > and it didn't change anything, I just started to get MISS every time for
> > the URL which was getting MEM_HIT with an old configuration. Then I
> > changed "workers 2" to "workers 1" and started getting memory hits as
> > before.
>
> For a clean apples-to-apples test, make sure you use
> "memory_cache_shared on" when using a single worker without rock
> cache_dirs.
>
>
> > Am I doing anything wrong? Which debug options should I enable to
> > provide more information if it seems like a bug?
>
>
> Vary caching should be fixed as well, of course, but perhaps there is
> another problem we do not know about. I would start by eliminating Vary
> as the known problem. When using a test transaction, make sure the
> response does not have a Vary header. Or configure Squid to log the Vary
> header and remove the corresponding transactions when computing
> adjusted-for-Vary memory cache hit ratio.
>
>
> HTH,
>
> Alex.
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid doesn't cache objects in memory when using SMP and shared memory cache

2018-01-15 Thread Ivan Larionov
My disks are fast (SSD), so I didn't see performance issues, but that doesn't
change the fact that the memory hit ratio decreased more than 10x. And with
both rock and the shared memory cache enabled, most of the files were saved
into the disk cache rather than the memory cache, and most of the hits were
disk hits (according to the log file).

I already tried squid 4 and it works as expected.

So. Let's forget about rock because the issue I see is related to shared
memory. For my test file with only memory cache enabled:

squid 3.5.27 non-SMP - MISS first then always MEM_HIT
squid 3.5.27 SMP any amount of workers shared memory off - always MEM_HIT
after all workers handled the request once
squid 3.5.27 SMP any amount of workers shared memory on - MISS every time.
squid 4.0.22 SMP 2 workers shared memory on - MISS first then always MEM_HIT
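
For these tests the config is stripped down to roughly the following sketch
(the cache_mem value here is arbitrary, and the shared-memory-off runs simply
flip memory_cache_shared to off):

workers 2
cpu_affinity_map process_numbers=1,2 cores=1,2
memory_cache_shared on
cache_mem 256 MB
maximum_object_size_in_memory 64 KB
# no cache_dir lines at all, so the disk cache stays disabled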

I would like to use squid 4 in production, and I probably will since it looks
like SMP/shared_cache is broken in 3, but the fact that you still haven't
released it confuses me. I don't know why it's still in beta/rc/whatever stage.

On Mon, Jan 15, 2018 at 7:22 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 15/01/18 18:53, Ivan Larionov wrote:
>
>> Hello!
>>
>> After migrating squid from non-SMP/aufs to SMP/rock memory cache hit
>> ratio dropped significantly. Like from 50-100% to 1-5%. And disk cache hit
>> ratio went up from 15-50% to stable 60-65%. From the brief log file check
>> it looks like in SMP/rock mode squid avoids using memory for small files
>> like 1-3KB but uses it for 10KB+ files.
>>
>
> AIUI, SMP-mode rock operates as a fully separate process (a "Disker" kid)
> which delivers its results as objects already in shared memory to the
> worker process.
>
> There should be little or no gain from that promotion process anymore -
> which would only be moving the object between memory locations. In fact if
> cache_mem were not operating as shared memory even with SMP active (which
> is possible) the promotion would be an actively bad idea as it prevents
> other workers using the object in future.
>
> They show up as non- MEM_HIT because they are either REFRESH or stored in
> the Disker shared memory instead of the cache_mem shared memory. The Squid
> logging is not quite up to recording the slim distinction between which of
> multiple memory areas are being used.
>
>
>
>> I started tracking down the issue with disabling disk cache completely
>> and it didn't change anything, I just started to get MISS every time for
>> the URL which was getting MEM_HIT with an old configuration. Then I changed
>> "workers 2" to "workers 1" and started getting memory hits as before.
>>
>> So it seems like the issue is with shared memory:
>>
>> When squid doesn't use shared memory it works as expected. Even with
>> multiple workers.
>> When squid uses shared memory it caches very small amount of objects.
>>
>> Am I doing anything wrong? Which debug options should I enable to provide
>> more information if it seems like a bug?
>>
>>
> Are you seeing an actual performance difference? if not I would not worry
> about it.
>
> FYI: if you really want to track this down I suggest using Squid-4 to do
> that. Squid-3 is very near the end of its support lifetime and changes of a
> deep nature do not have much chance at all of getting in there now.
>
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid doesn't cache objects in memory when using SMP and shared memory cache

2018-01-14 Thread Ivan Larionov
Hello!

After migrating squid from non-SMP/aufs to SMP/rock memory cache hit ratio
dropped significantly. Like from 50-100% to 1-5%. And disk cache hit ratio
went up from 15-50% to stable 60-65%. From the brief log file check it
looks like in SMP/rock mode squid avoids using memory for small files like
1-3KB but uses it for 10KB+ files.

I started tracking down the issue by disabling the disk cache completely, and
it didn't change anything; I just started to get a MISS every time for the URL
which was getting a MEM_HIT with the old configuration. Then I changed
"workers 2" to "workers 1" and started getting memory hits as before.

So it seems like the issue is with shared memory:

When squid doesn't use shared memory it works as expected, even with multiple
workers.
When squid uses shared memory it caches a very small number of objects.

Am I doing anything wrong? Which debug options should I enable to provide
more information if it seems like a bug?
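
What I would try first, as a sketch (assuming debug sections 20, the storage
manager, and 54, inter-process communication, are the relevant ones):

debug_options ALL,1 20,5 54,5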

Config diff:

--- squid.conf.old  2018-01-14 02:01:19.0 -0800
+++ squid.conf.new  2018-01-14 02:01:16.0 -0800
@@ -1,5 +1,8 @@
 http_port 0.0.0.0:3128

+workers 2
+cpu_affinity_map process_numbers=1,2 cores=1,2
+
 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
@@ -94,13 +97,12 @@

 never_direct allow all

-cache_mem 9420328 KB
-maximum_object_size_in_memory 32 KB
+cache_mem 12560438 KB
+maximum_object_size_in_memory 64 KB
 memory_replacement_policy heap LRU
 cache_replacement_policy heap LRU

-cache_dir aufs /mnt/services/squid/cache/cache0 261120 16 256
-cache_dir aufs /mnt/services/squid/cache/cache1 261120 16 256
+cache_dir rock /mnt/services/squid/cache 522240 swap-timeout=500
max-swap-rate=1200 slot-size=16384

 minimum_object_size 64 bytes # non-zero so we don't cache mistakes
 maximum_object_size 102400 KB


All relevant config options together (SMP/rock):

workers 2
cpu_affinity_map process_numbers=1,2 cores=1,2

cache_mem 12560438 KB
maximum_object_size_in_memory 64 KB
memory_replacement_policy heap LRU
cache_replacement_policy heap LRU

cache_dir rock /mnt/services/squid/cache 522240 swap-timeout=500
max-swap-rate=1200 slot-size=16384

minimum_object_size 64 bytes # non-zero so we don't cache mistakes
maximum_object_size 102400 KB

negative_ttl 0 seconds
range_offset_limit none


Squid version:

Squid Cache: Version 3.5.27
Service Name: squid
configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
'--includedir=/usr/include' '--datadir=/usr/share'
'--sharedstatedir=/usr/com' '--localstatedir=/var'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs,rock'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--with-large-files' '--with-maxfd=16384' '--enable-htcp'

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] SMP mode and StoreID rewriter

2018-01-10 Thread Ivan Larionov
Hello.

We're currently testing squid in SMP mode. One of our services uses the Store
ID feature. The interesting thing we see is that the store_id_program is started
for every squid process (except the main one). The process tree looks like this:

> squid
>  \_ (squid-coord-4)
>  |   \_ (rewriter_3)
>  |   \_ (rewriter_3)
>  \_ (squid-disk-3)
>  |   \_ (rewriter_3)
>  |   \_ (rewriter_3)
>  \_ (squid-2)
>  |   \_ (rewriter_3)
>  |   \_ (rewriter_3)
>  \_ (squid-1)
>  \_ (rewriter_3)
>  \_ (rewriter_3)

From my brief testing it seems like the rewrite is working as expected, but I
just wanted to make sure it's OK to see the store_id_program started for every
child, or whether it's a bug of sorts.

Maybe it should be started only for "worker" kids or only for "disk" kids?

Relevant parts of the config:

> workers 2
> store_id_program /mnt/services/squid-url-rewriter/rewriter_3
> store_id_children 5 startup=2 idle=2 concurrency=10
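
For context, a Store ID helper in concurrency mode reads "channel-ID URL ..."
lines and answers roughly "channel-ID OK store-id=..." or "channel-ID ERR". A
minimal shell sketch (the hostnames are made-up placeholders; this is not our
actual rewriter_3):

#!/bin/sh
# Minimal Store ID helper sketch: one request per line in concurrency mode,
# "channel-ID URL [extras]", answered on stdout.
while read channel url rest; do
    case "$url" in
        http://cdn[0-9].example.com/*)
            # collapse the per-node CDN hostnames into a single cache key
            echo "$channel OK store-id=http://cdn.example.internal/${url#http://*/}" ;;
        *)
            echo "$channel ERR" ;;
    esac
done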

From the log file:

2018/01/10 16:56:12 kid4| helperOpenServers: Starting 2/5 'rewriter_3'
processes
2018/01/10 16:56:12 kid2| helperOpenServers: Starting 2/5 'rewriter_3'
processes
2018/01/10 16:56:12 kid3| helperOpenServers: Starting 2/5 'rewriter_3'
processes
2018/01/10 16:56:12 kid1| helperOpenServers: Starting 2/5 'rewriter_3'
processes

Squid Cache: Version 3.5.27
Service Name: squid
configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
'--includedir=/usr/include' '--datadir=/usr/share'
'--sharedstatedir=/usr/com' '--localstatedir=/var'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs,rock'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--with-large-files' '--with-maxfd=16384' '--enable-htcp'

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] forward_max_tries 1 has no effect

2017-11-29 Thread Ivan Larionov
Thanks Alex.

Unfortunately I don't have enough C/C++ skills to fix it.

I've created a bug report –
https://bugs.squid-cache.org/show_bug.cgi?id=4788

We've also changed the parent's behavior so it will not silently close the
connection but will return 502 in this exact situation, and it seems like that
fixes the unexpected squid re-forwarding.

On Tue, Nov 28, 2017 at 7:12 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 11/28/2017 02:27 PM, Ivan Larionov wrote:
>
> > Another interesting fact is that I can't reproduce this issue if squid
> > has no other traffic except my testing requests. But it's easy to
> > reproduce when server has other traffic.
>
> I did not check your logs carefully, but I believe that (when things do
> not work the way you expect) your Squid is retrying a failed persistent
> connection (rather than re-forwarding the request after receiving a bad
> response). See the "pconn race happened" line below:
>
> > 1_retry.log:2017/11/28 12:55:12.731| 17,3| FwdState.cc(416) fail:
> ERR_ZERO_SIZE_OBJECT "Bad Gateway"
> > 1_retry.log:2017/11/28 12:55:12.731| 17,5| FwdState.cc(430) fail: pconn
> race happened
> > 1_retry.log:2017/11/28 12:55:12.731| 93,5| AsyncJob.cc(84) mustStop:
> HttpStateData will stop, reason: HttpStateData::continueAfterParsingHeader
> > 1_retry.log:2017/11/28 12:55:12.731| 17,3| FwdState.cc(618) retryOrBail:
> re-forwarding (1 tries, 40 secs)
> > 1_retry.log:2017/11/28 12:55:12.731| 17,4| FwdState.cc(621) retryOrBail:
> retrying the same destination
>
>
> Assuming you tested with forward_max_tries set to 1, the retryOrBail
> lines above confirm the off-by-one problem I was describing in my
> previous response.
>
> AFAICT, compensating by setting forward_max_tries to zero will _not_
> work (for reasons unrelated to the problem at hand).
>
>
> FWIW, your current options include those outlined at
> http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
>
>
> Alex.
>
>
> > On Tue, Nov 28, 2017 at 7:32 AM, Alex Rousskov wrote:
> >
> > On 11/27/2017 05:19 PM, Ivan Larionov wrote:
> >
> > > I see retries only when squid config has 2 parents. If I comment
> out
> > > everything related to "newproxy" I can't reproduce this behavior
> anymore.
> >
> > The posted logs are not detailed enough to confirm or deny that IMO,
> but
> > I suspect that you are dealing with at least one bug.
> >
> >
> > > https://wiki.squid-cache.org/SquidFaq/InnerWorkings#When_does_Squid_re-forward_a_client_request.3F
> > >
> > >> Squid does not try to re-forward a request if at least one of the
> following conditions is true:
> > >>
> > >> The number of forwarding attempts exceeded forward_max_tries. For
> > >> example, if you set forward_max_tries to 1 (one), then no requests
> > >> will be re-forwarded.
> >
> >
> > AFAICT, there is an off-by-one bug in Squid that violates the above:
> >
> > > if (n_tries > Config.forward_max_tries)
> > > return false;
> >
> > The n_tries counter is incremented before Squid makes a request
> > forwarding attempt. With n_tries and Config.forward_max_tries both
> set
> > to 1, the quoted FwdState::checkRetry() code will not prevent
> > re-forwarding. There is a similar problem in FwdState::reforward().
> This
> > reasoning needs confirmation/testing.
> >
> > Please note that simply changing the ">" operator to ">=" may break
> > other things in a difficult-to-detect-by-simple-tests ways. The
> correct
> > fix may be more complex than it looks and may involve making policy
> > decisions regarding forward_max_tries meaning. The best fix would
> remove
> > checkRetry() and reforward() duplication. This code is difficult to
> work
> > with; many related code names are misleading.
> >
> >
> > >> Squid has no alternative destinations to try. Please note that
> > >> alternative destinations may include multiple next hop IP
> addresses
> > >> and multiple peers.
> >
> > The fact that Squid sends two requests to the same peer with only one
> > peer address selected suggests that Squid is retrying a failed
> > persistent connection rather than re-forwarding after receiving a bad
>

Re: [squid-users] forward_max_tries 1 has no effect

2017-11-28 Thread Ivan Larionov
Thanks Alex, this is very helpful.

Another interesting fact is that I can't reproduce this issue if squid has no
other traffic except my testing requests. But it's easy to reproduce when the
server has other traffic.

The problem is that with other traffic I can't provide the whole log file
with debug ALL,7 enabled because it has other requests.

So I tried to select only parts related to my test request (this is ALL,7):

https://www.dropbox.com/s/udzeipeerf5o38t/squid_retry_logs.tgz?dl=1


On Tue, Nov 28, 2017 at 7:32 AM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 11/27/2017 05:19 PM, Ivan Larionov wrote:
>
> > I see retries only when squid config has 2 parents. If I comment out
> > everything related to "newproxy" I can't reproduce this behavior anymore.
>
> The posted logs are not detailed enough to confirm or deny that IMO, but
> I suspect that you are dealing with at least one bug.
>
>
> > https://wiki.squid-cache.org/SquidFaq/InnerWorkings#When_does_Squid_re-forward_a_client_request.3F
> >
> >> Squid does not try to re-forward a request if at least one of the
> following conditions is true:
> >>
> >> The number of forwarding attempts exceeded forward_max_tries. For
> >> example, if you set forward_max_tries to 1 (one), then no requests
> >> will be re-forwarded.
>
>
> AFAICT, there is an off-by-one bug in Squid that violates the above:
>
> > if (n_tries > Config.forward_max_tries)
> > return false;
>
> The n_tries counter is incremented before Squid makes a request
> forwarding attempt. With n_tries and Config.forward_max_tries both set
> to 1, the quoted FwdState::checkRetry() code will not prevent
> re-forwarding. There is a similar problem in FwdState::reforward(). This
> reasoning needs confirmation/testing.
>
> Please note that simply changing the ">" operator to ">=" may break
> other things in a difficult-to-detect-by-simple-tests ways. The correct
> fix may be more complex than it looks and may involve making policy
> decisions regarding forward_max_tries meaning. The best fix would remove
> checkRetry() and reforward() duplication. This code is difficult to work
> with; many related code names are misleading.
>
>
> >> Squid has no alternative destinations to try. Please note that
> >> alternative destinations may include multiple next hop IP addresses
> >> and multiple peers.
>
> The fact that Squid sends two requests to the same peer with only one
> peer address selected suggests that Squid is retrying a failed
> persistent connection rather than re-forwarding after receiving a bad
> response. Again, the logs are not detailed enough to distinguish the two
> cases. I can only see that a single peer/destination address was
> selected (not two), which is correct/expected behavior. I cannot see
> what happened next with sufficient detail.
>
> Going forward, you have several options, including:
>
> A. Post a link to compressed ALL,7+ logs to confirm bug(s).
> B. Fix the broken condition(s) in FwdState. See above.
>
> HTH,
>
> Alex.
>
>
> > 2017/11/27 15:53:40.542| 5,2| TcpAcceptor.cc(220) doAccept: New
> connection on FD 15
> > 2017/11/27 15:53:40.542| 5,2| TcpAcceptor.cc(295) acceptNext: connection
> > on local=0.0.0.0:3128 remote=[::] FD 15 flags=9
> > 2017/11/27 15:53:40.543| 11,2| client_side.cc(2372) parseHttpRequest:
> > HTTP Client local=127.0.0.1:3128 remote=127.0.0.1:53798 FD 45 flags=1
> > 2017/11/27 15:53:40.543| 11,2| client_side.cc(2373) parseHttpRequest:
> > HTTP Client REQUEST:
> > -
> > GET http://HOST:12345/ HTTP/1.1
> > Host: HOST:12345
> > User-Agent: curl/7.51.0
> > Accept: */*
> > Proxy-Connection: Keep-Alive
> >
> >
> > --
> > 2017/11/27 15:53:40.543| 85,2| client_side_request.cc(745)
> > clientAccessCheckDone: The request GET http://HOST:12345/ is ALLOWED;
> > last ACL checked: localhost
> > 2017/11/27 15:53:40.543| 85,2| client_side_request.cc(721)
> > clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
> > 2017/11/27 15:53:40.543| 85,2| client_side_request.cc(745)
> > clientAccessCheckDone: The request GET http://HOST:12345/ is ALLOWED;
> > last ACL checked: localhost
> > 2017/11/27 15:53:40.543| 17,2| FwdState.cc(133) FwdState: Forwarding
> > client request local=127.0.0.1:3128 remote=127.0.0.1:53798 FD 45 flags=1,
> > url=http://HOST:12345/
> > 2017/11/27 15:53:40.543| 

Re: [squid-users] forward_max_tries 1 has no effect

2017-11-27 Thread Ivan Larionov
://HOST:12345/
2017/11/27 15:54:20.627| 11,2| http.cc(2229) sendRequest: HTTP Server local=
127.0.0.3:41355 remote=127.0.0.1:18070 FD 40 flags=1
2017/11/27 15:54:20.627| 11,2| http.cc(2230) sendRequest: HTTP Server
REQUEST:
-
GET http://HOST:12345/ HTTP/1.1
User-Agent: curl/7.51.0
Accept: */*
Host: HOST:12345
Cache-Control: max-age=259200
Connection: keep-alive


--

[SKIPPED 40 seconds again until parent closes TCP connection with FIN,ACK]

2017/11/27 15:55:00.728| ctx: enter level  0: 'http://HOST:12345/'
2017/11/27 15:55:00.728| 11,2| http.cc(719) processReplyHeader: HTTP Server
local=127.0.0.3:41355 remote=127.0.0.1:18070 FD 40 flags=1
2017/11/27 15:55:00.728| 11,2| http.cc(720) processReplyHeader: HTTP Server
REPLY:
-
HTTP/1.0 502 Bad Gateway
Cache-Control: no-cache
Connection: close
Content-Type: text/html

502 Bad Gateway
The server returned an invalid or incomplete response.


--
2017/11/27 15:55:00.728| ctx: exit level  0
2017/11/27 15:55:00.728| 20,2| store.cc(996) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2017/11/27 15:55:00.728| 20,2| store.cc(996) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2017/11/27 15:55:00.728| 88,2| client_side_reply.cc(2073)
processReplyAccessResult: The reply for GET http://HOST:12345/ is ALLOWED,
because it matched (access_log stdio:/var/log/squid/access.log line)
2017/11/27 15:55:00.728| 11,2| client_side.cc(1409) sendStartOfMessage:
HTTP Client local=127.0.0.1:3128 remote=127.0.0.1:53798 FD 45 flags=1
2017/11/27 15:55:00.728| 11,2| client_side.cc(1410) sendStartOfMessage:
HTTP Client REPLY:
-
HTTP/1.1 502 Bad Gateway
Date: Mon, 27 Nov 2017 23:54:20 GMT
Cache-Control: no-cache
Content-Type: text/html
X-Cache: MISS from ip-172-23-18-130
X-Cache-Lookup: MISS from ip-172-23-18-130:3128
Transfer-Encoding: chunked
Connection: keep-alive


--


On Thu, Nov 23, 2017 at 11:43 PM, Amos Jeffries <squ...@treenet.co.nz>
wrote:

>
> On 24/11/17 10:03, Ivan Larionov wrote:
>
>>
>>> On Nov 23, 2017, at 12:32 AM, Amos Jeffries <squ...@treenet.co.nz>
>>> wrote:
>>>
>>> On 23/11/17 14:20, Ivan Larionov wrote:
>>>
>>>> Hello.
>>>> We have an issue with squid when it tries to re-forward / retry failed
>>>> request even when forward_max_tries is set to 1. The situation when it
>>>> happens is when there's no response, parent just closes the connection.
>>>>
>>> ...
>>>
>>>> It doesn't happen 100% times. Sometimes squid returns 502 after the 1st
>>>> try, sometimes it retries once. Also I haven't seen more than 1 retry.
>>>>
>>>
>>> Please enable debug_options 44,2 to see what destinations your Squid is
>>> actually finding.
>>>
>>
>> I'll check this on Monday.
>>
>>
>>> max_forward_tries is just a rough cap on the number of server names
>>> which can be found when generating that list. The actual destinations count
>>> can exceed it if one or more of the servers happens to have multiple IPs to
>>> try.
>>>
>>> The overall transaction can involve retries if one of the other layers
>>> (TCP or HTTP) contains retry semantics to a single server.
>>>
>>>
>>>
>>> Could it be a bug? We'd really like to disable these retries.
>>>>
>>>
>>> Why are trying to break HTTP?
>>> What is the actual problem you are trying to resolve here?
>>>
>>>
>> Why do you think I'm trying to break HTTP?
>>
>> squid forwards the request to parent but parent misbehaves and just
>> closes the connection after 40 seconds. I'm trying to prevent retry of
>> request in such situation. Why squid retries if I never asked him to do it
>> and specifically said "forward_max_tries 1".
>>
>> And this is not a connection failure, squid successfully establishes the
>> connection and sends the request, parent ACKs it, just never responses back
>> and proactively closes the connection.
>>
>>
> This is not misbehaviour on the part of either Squid nor the parent.
> <https://tools.ietf.org/html/rfc7230#section-6.3.1>
> "Connections can be closed at any time, with or without intention."
>
>
> As has been discussed in other threads recently there are servers out
> there starting to greylist TCP connections, closing the first one some time
> *after* SYN+ACK regardless of what the proxy sends and accepting any
> followup connection attempts.
>
> NP: That can result in exactly the behaviour you describe from the peer as
> Squid does not wait for a FIN to arrive before sending its upstream HTTP
> request - Squid will "randomly" get a F

[squid-users] forward_max_tries 1 has no effect

2017-11-22 Thread Ivan Larionov
Hello.

We have an issue with squid where it tries to re-forward / retry a failed
request even when forward_max_tries is set to 1. It happens when there's no
response and the parent just closes the connection.

Relevant parts of configuration so you understand the architecture:

cache_peer 127.0.0.1 parent 18070 0 no-query no-digest no-netdb-exchange
name=proxy
never_direct allow all
negative_ttl 0 seconds
forward_max_tries 1
retry_on_error off

The traffic flow from tcpdump is like this:

squid to parent
GET http://HOST/
parent to squid
ACK

waiting (no response for ~40 seconds)

parent to squid
FIN, ACK
squid to parent
FIN, ACK
parent to squid
FIN

Immediately after that:

squid to parent
GET http://HOST/ (again)

Debug logs from ALL,2:

…
http.cc(2229) sendRequest: HTTP Server local=127.0.0.2:46867 remote=
127.0.0.1:18070 FD 26 flags=1
http.cc(2230) sendRequest: HTTP Server REQUEST:
-
GET http://HOST:12345/ HTTP/1.1
…
http.cc(1299) continueAfterParsingHeader: WARNING: HTTP: Invalid Response:
No object data received for http://HOST:12345/ AKA HOST/
FwdState.cc(655) handleUnregisteredServerEnd: self=0x430a438*2
err=0x445fcf8 http://HOST:12345/
http.cc(2229) sendRequest: HTTP Server local=127.0.0.2:34417 remote=
127.0.0.1:18070 FD 26 flags=1
http.cc(2230) sendRequest: HTTP Server REQUEST:
-
GET http://HOST:12345/ HTTP/1.1
…

It doesn't happen 100% of the time. Sometimes squid returns 502 after the 1st
try, sometimes it retries once. Also, I haven't seen more than 1 retry.

Could it be a bug? We'd really like to disable these retries.

Squid Cache: Version 3.5.27
Service Name: squid
configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
'--includedir=/usr/include' '--datadir=/usr/share'
'--sharedstatedir=/usr/com' '--localstatedir=/var'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs,rock'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--with-large-files' '--with-maxfd=16384' '--enable-htcp'

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Huge amount of time_wait connections after upgrade from v2 to v3

2017-07-14 Thread Ivan Larionov
Ok, mystery solved.

Patch "HTTP: do not allow Proxy-Connection to override Connection header"
changes the behavior. And we indeed send from our clients:

Connection: close
Proxy-Connection: Keep-Alive
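
A rough way to reproduce what the clients send, assuming the proxy is on
127.0.0.1:3128 and using example.com as a stand-in origin:

curl -s -o /dev/null -D - -x 127.0.0.1:3128 \
     -H 'Connection: close' -H 'Proxy-Connection: Keep-Alive' \
     http://example.com/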


On Sat, Jul 8, 2017 at 9:51 AM, Ivan Larionov <xeron.os...@gmail.com> wrote:

> RPS didn't change. Throughput didn't change. Our prod load is 200-700 RPS
> per server (changes during the day) and my load test load was constant 470
> RPS.
>
> Clients didn't change. Doesn't matter if they use HTTP 1.1 or 1.0, because
> the only thing which changed is squid version. And as I figured out, it's
> not actually about 2.7 to 3.5 update, it's all about difference between
> 3.5.20 and 3.5.21.
>
> I'm sorry but anything you say about throughput doesn't make any sense.
> Load pattern didn't change. Squid still handles the same amount of requests.
>
> I think I'm going to load test every patch applied to 3.5.21 from this
> page: http://www.squid-cache.org/Versions/v3/3.5/
> changesets/SQUID_3_5_21.html so I'll be able to point to exact change
> which introduced this behavior. I'll try to do it during the weekend or may
> be on Monday.
>
> On Sat, Jul 8, 2017 at 5:46 AM, Amos Jeffries <squ...@treenet.co.nz>
> wrote:
>
>> On 08/07/17 02:06, Ivan Larionov wrote:
>>
>>> Thank you for the fast reply.
>>>
>>> On Jul 7, 2017, at 01:10, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>>>
>>>> On 07/07/17 13:55, Ivan Larionov wrote:
>>>>>
>>>> >>>
>>
>>> However I assumed that this is a bug and that I can find older version
>>>>> which worked fine. I started testing from 3.1.x all the way to 3.5.26 and
>>>>> this is what I found:
>>>>> * All versions until 3.5.21 work fine. There no issues with huge
>>>>> amount of TIME_WAIT connections under load.
>>>>> * 3.5.20 is the latest stable version.
>>>>> * 3.5.21 is the first broken version.
>>>>> * 3.5.23, 3.5.25, 3.5.26 are broken as well.
>>>>> This effectively means that bug is somewhere in between 3.5.20 and
>>>>> 3.5.21.
>>>>> I hope this helps and I hope you'll be able to find an issue. If you
>>>>> can create a bug report based on this information and post it here it 
>>>>> would
>>>>> be awesome.
>>>>>
>>>>
>>>> The changes in 3.5.21 were fixes to some common crashes and better
>>>> caching behaviour. So I expect at least some of the change is due to higher
>>>> traffic throughput on proxies previously restricted by those problems.
>>>>
>>>>
>>> I can't imagine how throughput increase could result in 500 times more
>>> TIME_WAIT connections count.
>>>
>>>
>> More requests per second generally means more TCP connections churning.
>>
>> Also when going from Squid-2 to Squid-3 there is a change from HTTP/1.0
>> to HTTP/1.1 and the accompanying switch from MISS to near-HIT
>> revalidations. Revalidations usually only have headers without payload so
>> the same bytes/sec can contain orders of magnitude more of those than MISS -
>> which is the point of having them.
>>
>>
>> In our prod environment when we updated from 2.7.x to 3.5.25 we saw
>>> increase from 100 to 1. This is 100x.
>>>
>>>
>> Compared to what RPS change? Given the above traffic change this may be
>> reasonable for a v2 to v3 jump. Or own very rough tests on old hardware lab
>> tests have shown rates for Squid-2 at ~900 RPS and Squid-3 at around 1900
>> RPS.
>>
>>
>> When I was load testing different versions yesterday I was always sending
>>> the same amount of RPS to them. Update from 3.5.20 to 3.5.21 resulted in
>>> jump from 20 to 1 TIME_WAIT count. This is 500x.
>>>
>>> I know that time_wait is fine in general. Until you have too many of
>>> them.
>>>
>>>
>> At this point I'd check that your testing software supports HTTP/1.1
>> pipelines. It may be giving you worst-case results with per-message TCP
>> churn rather than what will occur normally (pipelines of N requests per TCP
>> connection).
>> Though seeing such a jump between Squid-3 releases is worrying.
>>
>> Amos
>>
>
>
>
> --
> With best regards, Ivan Larionov.
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Huge amount of time_wait connections after upgrade from v2 to v3

2017-07-08 Thread Ivan Larionov
RPS didn't change. Throughput didn't change. Our prod load is 200-700 RPS
per server (changes during the day) and my load test load was constant 470
RPS.

Clients didn't change. It doesn't matter if they use HTTP 1.1 or 1.0, because
the only thing which changed is the squid version. And as I figured out, it's
not actually about the 2.7 to 3.5 update; it's all about the difference between
3.5.20 and 3.5.21.

I'm sorry but anything you say about throughput doesn't make any sense.
Load pattern didn't change. Squid still handles the same amount of requests.

I think I'm going to load test every patch applied to 3.5.21 from this page:
http://www.squid-cache.org/Versions/v3/3.5/changesets/SQUID_3_5_21.html so
I'll be able to point to the exact change which introduced this behavior. I'll
try to do it during the weekend or maybe on Monday.

On Sat, Jul 8, 2017 at 5:46 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 08/07/17 02:06, Ivan Larionov wrote:
>
>> Thank you for the fast reply.
>>
>> On Jul 7, 2017, at 01:10, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>>
>>> On 07/07/17 13:55, Ivan Larionov wrote:
>>>>
>>> >>>
>
>> However I assumed that this is a bug and that I can find older version
>>>> which worked fine. I started testing from 3.1.x all the way to 3.5.26 and
>>>> this is what I found:
>>>> * All versions until 3.5.21 work fine. There no issues with huge amount
>>>> of TIME_WAIT connections under load.
>>>> * 3.5.20 is the latest stable version.
>>>> * 3.5.21 is the first broken version.
>>>> * 3.5.23, 3.5.25, 3.5.26 are broken as well.
>>>> This effectively means that bug is somewhere in between 3.5.20 and
>>>> 3.5.21.
>>>> I hope this helps and I hope you'll be able to find an issue. If you
>>>> can create a bug report based on this information and post it here it would
>>>> be awesome.
>>>>
>>>
>>> The changes in 3.5.21 were fixes to some common crashes and better
>>> caching behaviour. So I expect at least some of the change is due to higher
>>> traffic throughput on proxies previously restricted by those problems.
>>>
>>>
>> I can't imagine how throughput increase could result in 500 times more
>> TIME_WAIT connections count.
>>
>>
> More requests per second generally means more TCP connections churning.
>
> Also when going from Squid-2 to Squid-3 there is a change from HTTP/1.0 to
> HTTP/1.1 and the accompanying switch from MISS to near-HIT revalidations.
> Revalidations usually only have headers without payload so the same
> bytes/sec can contain orders of magnitude more of those than MISS - which is
> the point of having them.
>
>
> In our prod environment when we updated from 2.7.x to 3.5.25 we saw
>> increase from 100 to 1. This is 100x.
>>
>>
> Compared to what RPS change? Given the above traffic change this may be
> reasonable for a v2 to v3 jump. Our own very rough tests on old hardware lab
> tests have shown rates for Squid-2 at ~900 RPS and Squid-3 at around 1900
> RPS.
>
>
> When I was load testing different versions yesterday I was always sending
>> the same amount of RPS to them. Update from 3.5.20 to 3.5.21 resulted in
>> jump from 20 to 1 TIME_WAIT count. This is 500x.
>>
>> I know that time_wait is fine in general. Until you have too many of them.
>>
>>
> At this point I'd check that your testing software supports HTTP/1.1
> pipelines. It may be giving you worst-case results with per-message TCP
> churn rather than what will occur normally (pipelines of N requests per TCP
> connection).
> Though seeing such a jump between Squid-3 releases is worrying.
>
> Amos
>



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Huge amount of time_wait connections after upgrade from v2 to v3

2017-07-07 Thread Ivan Larionov

> On Jul 7, 2017, at 07:20, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
> 
> Hey Ivan,
> 
> How do you run these tests?
> With what application "ab" ?
> 

Apache JMeter, with a test case written by our load test engineer. I'm not at
work right now so I can't say the exact scenario, but AFAIK we were trying to
reproduce our production load, so it should be somewhat close to real-life
traffic.

> Thanks,
> Eliezer
> 
> 
> Eliezer Croitoru
> Linux System Administrator
> Mobile: +972-5-28704261
> Email: elie...@ngtech.co.il
> 
> 
> 
> -Original Message-
> From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On 
> Behalf Of Ivan Larionov
> Sent: Friday, July 7, 2017 17:07
> To: Amos Jeffries <squ...@treenet.co.nz>
> Cc: squid-users@lists.squid-cache.org
> Subject: Re: [squid-users] Huge amount of time_wait connections after upgrade 
> from v2 to v3
> 
> Thank you for the fast reply.
> 
>>> On Jul 7, 2017, at 01:10, Amos Jeffries <squ...@treenet.co.nz> wrote:
>>> 
>>> On 07/07/17 13:55, Ivan Larionov wrote:
>>> Hi. Sorry that I'm answering to the old thread. I was on vacation and 
>>> didn't have a chance to test the proposed solution.
>>> Dieter, yes, I'm on the old CentOS 6 based OS (Amazon Linux) but with a new 
>>> kernel 4.9.27.
>>> Amos, thank you for the suggestions about configure flags and squid config 
>>> options, I fixed all issues you pointed to.
>>> Unfortunately following workarounds didn't help:
>>> * client_idle_pconn_timeout 30 seconds
>>> * half_closed_clients on
>>> * client_persistent_connections off
>>> * server_persistent_connections off
>> 
>> TIME_WAIT is a sign that Squid is following the normal TCP process for 
>> closing connections, and doing so before the remote endpoint closes.
>> 
>> Disabling persistent connections increases the number of connections going 
>> through that process. So you definitely want those settings ON to reduce the 
>> WAIT states.
>> 
> 
> I understand that. I just wrote that I tried this options and they had no 
> effect. They didn't increase nor decrease number of TIME_WAIT connections. I 
> removed them when I started testing older versions.
> 
>> If the remote end is the one doing the closure, then you will see less 
>> TIME_WAIT, but CLOSE_WAIT will increase instead. The trick is in finding the 
>> right balance of timeouts on both client and server idle pconn to get the 
>> minimum of total WAIT states. That is network dependent.
>> 
>> Generally though forward/explicit and intercept proxies want 
>> client_idle_pconn_timeout to be shorter than server_idle_pconn_timeout. 
>> Reverse proxy want the opposite.
>> 
>> 
>> 
>>> However I assumed that this is a bug and that I can find older version 
>>> which worked fine. I started testing from 3.1.x all the way to 3.5.26 and 
>>> this is what I found:
>>> * All versions until 3.5.21 work fine. There no issues with huge amount of 
>>> TIME_WAIT connections under load.
>>> * 3.5.20 is the latest stable version.
>>> * 3.5.21 is the first broken version.
>>> * 3.5.23, 3.5.25, 3.5.26 are broken as well.
>>> This effectively means that bug is somewhere in between 3.5.20 and 3.5.21.
>>> I hope this helps and I hope you'll be able to find an issue. If you can 
>>> create a bug report based on this information and post it here it would be 
>>> awesome.
>> 
>> The changes in 3.5.21 were fixes to some common crashes and better caching 
>> behaviour. So I expect at least some of the change is due to higher traffic 
>> throughput on proxies previously restricted by those problems.
>> 
> 
> I can't imagine how throughput increase could result in 500 times more 
> TIME_WAIT connections count.
> 
> In our prod environment when we updated from 2.7.x to 3.5.25 we saw increase 
> from 100 to 1. This is 100x.
> 
> When I was load testing different versions yesterday I was always sending the 
> same amount of RPS to them. Update from 3.5.20 to 3.5.21 resulted in jump 
> from 20 to 1 TIME_WAIT count. This is 500x.
> 
> I know that time_wait is fine in general. Until you have too many of them.
> 
>> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Huge amount of time_wait connections after upgrade from v2 to v3

2017-07-07 Thread Ivan Larionov
Thank you for the fast reply.

> On Jul 7, 2017, at 01:10, Amos Jeffries <squ...@treenet.co.nz> wrote:
> 
>> On 07/07/17 13:55, Ivan Larionov wrote:
>> Hi. Sorry that I'm answering to the old thread. I was on vacation and didn't 
>> have a chance to test the proposed solution.
>> Dieter, yes, I'm on the old CentOS 6 based OS (Amazon Linux) but with a new 
>> kernel 4.9.27.
>> Amos, thank you for the suggestions about configure flags and squid config 
>> options, I fixed all issues you pointed to.
>> Unfortunately following workarounds didn't help:
>> * client_idle_pconn_timeout 30 seconds
>> * half_closed_clients on
>> * client_persistent_connections off
>> * server_persistent_connections off
> 
> TIME_WAIT is a sign that Squid is following the normal TCP process for 
> closing connections, and doing so before the remote endpoint closes.
> 
> Disabling persistent connections increases the number of connections going 
> through that process. So you definitely want those settings ON to reduce the 
> WAIT states.
> 

I understand that. I just wrote that I tried these options and they had no
effect. They didn't increase or decrease the number of TIME_WAIT connections. I
removed them when I started testing older versions.

> If the remote end is the one doing the closure, then you will see less 
> TIME_WAIT, but CLOSE_WAIT will increase instead. The trick is in finding the 
> right balance of timeouts on both client and server idle pconn to get the 
> minimum of total WAIT states. That is network dependent.
> 
> Generally though forward/explicit and intercept proxies want 
> client_idle_pconn_timeout to be shorter than server_idle_pconn_timeout. 
> Reverse proxy want the opposite.
> 
> 
> 
>> However I assumed that this is a bug and that I can find older version which 
>> worked fine. I started testing from 3.1.x all the way to 3.5.26 and this is 
>> what I found:
>> * All versions until 3.5.21 work fine. There no issues with huge amount of 
>> TIME_WAIT connections under load.
>> * 3.5.20 is the latest stable version.
>> * 3.5.21 is the first broken version.
>> * 3.5.23, 3.5.25, 3.5.26 are broken as well.
>> This effectively means that bug is somewhere in between 3.5.20 and 3.5.21.
>> I hope this helps and I hope you'll be able to find an issue. If you can 
>> create a bug report based on this information and post it here it would be 
>> awesome.
> 
> The changes in 3.5.21 were fixes to some common crashes and better caching 
> behaviour. So I expect at least some of the change is due to higher traffic 
> throughput on proxies previously restricted by those problems.
> 

I can't imagine how a throughput increase could result in a 500-times-higher
TIME_WAIT connection count.

In our prod environment, when we updated from 2.7.x to 3.5.25, we saw an
increase from 100 to 10000. This is 100x.

When I was load testing different versions yesterday I was always sending the
same amount of RPS to them. The update from 3.5.20 to 3.5.21 resulted in a jump
from 20 to 10000 in TIME_WAIT count. This is 500x.

I know that time_wait is fine in general. Until you have too many of them.

> Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Huge amount of time_wait connections after upgrade from v2 to v3

2017-07-06 Thread Ivan Larionov
Hi. Sorry for replying to an old thread. I was on vacation and didn't have a
chance to test the proposed solution.

Dieter, yes, I'm on the old CentOS 6 based OS (Amazon Linux) but with a new
kernel 4.9.27.

Amos, thank you for the suggestions about configure flags and squid config
options, I fixed all issues you pointed to.

Unfortunately following workarounds didn't help:

* client_idle_pconn_timeout 30 seconds
* half_closed_clients on
* client_persistent_connections off
* server_persistent_connections off

However, I assumed that this is a bug and that I could find an older version
which worked fine. I started testing from 3.1.x all the way to 3.5.26 and this
is what I found:

* All versions until 3.5.21 work fine. There are no issues with a huge amount
of TIME_WAIT connections under load.
* 3.5.20 is the latest stable version.
* 3.5.21 is the first broken version.
* 3.5.23, 3.5.25, 3.5.26 are broken as well.

This effectively means that the bug is somewhere between 3.5.20 and 3.5.21.

I hope this helps and I hope you'll be able to find an issue. If you can
create a bug report based on this information and post it here it would be
awesome.

Thank you.
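
A simple way to watch the count during a load-test run is just to repeat the
netstat pipeline from the original report every second:

watch -n 1 'netstat -tn | grep TIME_WAIT | grep 3128 | wc -l'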

On Wed, Jun 7, 2017 at 4:34 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 07/06/17 12:13, Ivan Larionov wrote:
>
>> Hi!
>>
>> We recently updated from squid v2 to v3 and now see huge increase in
>> connections in TIME_WAIT state on our squid servers (verified that this is
>> clients connections).
>>
>
> The biggest change between 2.7 and 3.5 in this area is that 2.7 was
> HTTP/1.0 which closed TCP connections after each request by default, and
> 3.5 is HTTP/1.1 which does not. So connections are more likely to persist
> until they hit some TCP timeout then enter the slow TIME_WAIT process.
>
> There were also some other bugs identified in older 3.5 releases which
> increased the TIME_WAIT specifically. I thought those were almost all fixed
> by now, but YMMV whether you hit the remaining issues.
> A workaround is to set client_idle_pconn_timeout
> (http://www.squid-cache.org/Doc/config/client_idle_pconn_timeout/) to a
> shorter value than the default 2min. eg you might want it to be 30sec or so.
>
>
>
>
>> See versions and amount of such connections under the same load with the
>> same configs (except some incompatible stuff):
>>
>> squid 2.7.STABLE9
>>
>> configure options:  '--program-prefix=' '--prefix=/usr'
>> '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
>> '--sysconfdir=/etc' '--includedir=/usr/include' '--libdir=/usr/lib'
>> '--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com'
>> '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
>> '--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--localstatedir=/var'
>> '--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-epoll'
>> '--enable-removal-policies=heap,lru' '--enable-storeio=aufs'
>> '--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
>> '--enable-useragent-log' '--enable-referer-log' '--with-large-files'
>> '--with-maxfd=16384' '--enable-err-languages=English'
>>
>> # netstat -tn | grep TIME_WAIT | grep 3128 | wc -l
>> 95
>>
>> squid 3.5.25
>>
>> configure options:  '--program-prefix=' '--prefix=/usr'
>> '--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
>> '--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
>> '--includedir=/usr/include' '--datadir=/usr/share'
>> '--sharedstatedir=/usr/com' '--localstatedir=/var'
>> '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
>> '--enable-removal-policies=heap,lru' '--enable-storeio=aufs'
>> '--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
>> '--enable-useragent-log' '--enable-referer-log' '--with-large-files'
>> '--with-maxfd=16384' '--enable-err-languages=English' '--enable-htcp'
>>
>
> FYI, these options are not doing anything for Squid-3:
>   '--enable-useragent-log' '--enable-referer-log'
> '--enable-err-languages=English'
>
>
>
>> # netstat -tn | grep TIME_WAIT | grep 3128 | wc -l
>> 11277
>>
>> Config:
>>
>> http_port 0.0.0.0:3128
>>
>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>>
>> acl localnet src fc00::/7   # RFC 4193 local private network range
>> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
>> mach

[squid-users] Huge amount of time_wait connections after upgrade from v2 to v3

2017-06-06 Thread Ivan Larionov
Hi!

We recently updated from squid v2 to v3 and now see a huge increase in
connections in the TIME_WAIT state on our squid servers (verified that these
are client connections).

See versions and amount of such connections under the same load with the
same configs (except some incompatible stuff):

squid 2.7.STABLE9

configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc' '--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--localstatedir=/var'
'--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--enable-useragent-log' '--enable-referer-log' '--with-large-files'
'--with-maxfd=16384' '--enable-err-languages=English'

# netstat -tn | grep TIME_WAIT | grep 3128 | wc -l
95

squid 3.5.25

configure options:  '--program-prefix=' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc/squid' '--libdir=/usr/lib' '--libexecdir=/usr/lib/squid'
'--includedir=/usr/include' '--datadir=/usr/share'
'--sharedstatedir=/usr/com' '--localstatedir=/var'
'--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-epoll'
'--enable-removal-policies=heap,lru' '--enable-storeio=aufs'
'--enable-delay-pools' '--with-pthreads' '--enable-cache-digests'
'--enable-useragent-log' '--enable-referer-log' '--with-large-files'
'--with-maxfd=16384' '--enable-err-languages=English' '--enable-htcp'

# netstat -tn | grep TIME_WAIT | grep 3128 | wc -l
11277

Config:

http_port 0.0.0.0:3128

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl SSL_ports port 443

acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1025-65535  # unregistered ports

acl CONNECT method CONNECT

### START CUSTOM
acl Purge_method method PURGE

# Allow localhost to selectively flush the cache
http_access allow localhost Purge_method
http_access deny Purge_method
### END CUSTOM

### ALLOW ACCESS TO ALL PORTS
# http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

http_access allow localnet
http_access allow localhost
http_access deny all

### START CUSTOM
# Disable icp
icp_port 0
# Allow ICP queries from local networks only
icp_access allow localnet
icp_access allow localhost
icp_access deny all

# Disable htcp
htcp_port 0
# Allow HTCP queries from local networks only
htcp_access allow localnet
htcp_access allow localhost
htcp_access deny all

# Check for custom request header
acl custom_acl req_header x-use-custom-proxy -i true
# Check for x-use-new-proxy request header
acl custom_new_acl req_header x-use-new-proxy -i true

# first_proxy
cache_peer 127.0.0.1 parent 18070 0 no-query no-digest name=first_proxy
cache_peer_access first_proxy deny custom_acl
cache_peer_access first_proxy deny custom_new_acl

# second_proxy
cache_peer 127.0.0.1 parent 18079 0 no-query no-digest name=second_proxy
cache_peer_access second_proxy allow custom_acl
cache_peer_access second_proxy allow custom_new_acl
cache_peer_access second_proxy deny all

never_direct allow all

cache_mem 4620591 KB
maximum_object_size_in_memory 8 KB
memory_replacement_policy heap LRU
cache_replacement_policy heap LRU

cache_dir aufs /mnt/services/squid/cache 891289 16 256

minimum_object_size 64 bytes # non-zero so we don't cache mistakes
maximum_object_size 102400 KB

logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] ECAP: How to Add header to get request

2017-03-30 Thread Ivan Kolesnikov
Hi Everyone,

I need to add a Cookie header to some GET requests via squid.
I can add 'request_header_add Cookie "My_cookie_value" all' in squid.conf,
but in this case the Cookie header will be added to all requests and I can't
manipulate my "My_cookie_value". I updated adapter_modifying.cc and added the
following code in the Adapter::Xaction::start() function:
static const libecap::Name name_cookie("Cookie");
const libecap::Header::Value value_cookie =
libecap::Area::FromTempString("video_key=My_cookie_value");
adapted->header().add(name_cookie, value_cookie);
In that case the Cookie header was added to all responses.

Please see my squid.conf:
loadable_modules /usr/local/lib/ecap/ecap_adapter_modifying.so
ecap_enable on
ecap_service ecapModifier respmod_precache \
uri=ecap://e-cap.org/ecap/services/sample/modifying \
victim=awerewrewrewrwerefbfcvglkflds9349rdsgfdk9dfgkj95tfnvxcncbnbv
\
replacement=$$$
adaptation_access ecapModifier allow all


Please advise how to correctly add a header to GET requests.
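
Note that respmod_precache vectors responses; if the goal is to modify
requests, the service would presumably need to be registered at the
reqmod_precache vectoring point instead, roughly like this (the service name
below is just a placeholder):

ecap_service ecapReqModifier reqmod_precache \
    uri=ecap://e-cap.org/ecap/services/sample/modifying
adaptation_access ecapReqModifier allow all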


Best Regards,
Ivan Kolesnikov
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 3.5.23 memory usage

2017-01-19 Thread Ivan Larionov
Hello.

I'm pretty sure this question has been asked multiple times already, but
after reading everything I found I still can't figure out squid memory
usage patterns.

We're currently trying to upgrade from squid 2.7 to squid 3.5 and memory
usage on squid 3 is much much higher compared to squid 2 with the same
configuration.

What do I see:

squid running for several days with low traffic:

# top
 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 7367 squid 20   0 4780m 4.4g 5224 S  6.0 60.6 105:01.76 squid -N

So it uses 4.4GB resident memory. Ok, let's see important config options:

cache_mem 2298756 KB
maximum_object_size_in_memory 8 KB
memory_replacement_policy lru
cache_replacement_policy lru

cache_dir aufs /mnt/services/squid/cache 445644 16 256

minimum_object_size 64 bytes # non-zero so we don't cache mistakes
maximum_object_size 102400 KB

So we configured 2.2GB memory cache and 500GB disk cache. Disk cache is
quite big but current usage is only 3GB:

# du -sh /mnt/services/squid/cache # cache_dir
3.0G  /mnt/services/squid/cache

Now I'm looking into this page
http://wiki.squid-cache.org/SquidFaq/SquidMemory and see:

14 MB of memory per 1 GB on disk for 64-bit Squid

Which means the index for the current ~3 GB of on-disk objects should only use about 3 × 14 ≈ 42 MB (call it ~50 MB) of RAM.

All this means we have ~2.2GB of RAM used for everything else except
cache_mem and the disk cache index.

Let's see top pools from mgr:mem:

Pool                 (KB)      %Tot
mem_node             2298833   55.082
Short Strings        622365    14.913
HttpHeaderEntry      404531    9.693
Long Strings         284520    6.817
MemObject            182288    4.368
HttpReply            155612    3.729
StoreEntry           73965     1.772
Medium Strings       71152     1.705
cbdata MemBuf (12)   35573     0.852
LRU policy node      30403     0.728
MD5 digest           11380     0.273
16K Buffer           1056      0.025

These pools consume ~35% of total squid memory usage: Short Strings,
HttpHeaderEntry, Long Strings, HttpReply. Looks suspicious. On squid 2 same
pools use 10 times less memory.

I found a bug which looks similar to our experience:
http://bugs.squid-cache.org/show_bug.cgi?id=4084.

I'm attaching our config, mgr:info, mgr:mem and some system info I
collected.

Could someone say if this is normal and why it's so much different from
squid 2?

-- 
With best regards, Ivan Larionov.
HTTP/1.1 200 OK
Server: squid/3.5.23
Mime-Version: 1.0
Date: Thu, 19 Jan 2017 23:39:50 GMT
Content-Type: text/plain;charset=utf-8
Expires: Thu, 19 Jan 2017 23:39:50 GMT
Last-Modified: Thu, 19 Jan 2017 23:39:50 GMT
X-Cache: MISS from ip-172-22-10-120
X-Cache-Lookup: MISS from ip-172-22-10-120:3128
Connection: close

Squid Object Cache: Version 3.5.23
Build Info: 
Service Name: squid
Start Time: Fri, 13 Jan 2017 23:35:32 GMT
Current Time:   Thu, 19 Jan 2017 23:39:50 GMT
Connection information for squid:
Number of clients accessing cache:  (client_db off)
Number of HTTP requests received:   8195690
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   948.1
Average ICP messages per minute since start:0.0
Select loop called: 73529108 times, 7.054 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 29.2%, 60min: 28.9%
Hits as % of bytes sent:5min: 89.0%, 60min: 89.1%
Memory hits as % of hit requests:   5min: 0.0%, 60min: 0.0%
Disk hits as % of hit requests: 5min: 100.0%, 60min: 100.0%
Storage Swap size:  2915344 KB
Storage Swap capacity:   0.6% used, 99.4% free
Storage Mem size:   2276524 KB
Storage Mem capacity:   99.0% used,  1.0% free
Mean Object Size:   4.00 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.01745  0.01745
Cache Misses:  0.02899  0.02451
Cache Hits:0.00091  0.00091
Near Hits: 0.0  0.0
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.0  0.00094
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:518657.265 seconds
CPU Time:   6265.444 seconds
CPU Usage:  1.21%
CPU Usage, 5 minute avg:6.43%
CPU Usage, 60 minute avg:   5.11%
Maximum Resident Size: 18579360 KB
Page faults with physical i/o: 0
Memory accounted for:
Total accounted:   -20826 KB
memPoolAlloc calls: 2192400061
memPoolFree calls:  2194290230
File descriptor usage for squid:
Maximum number of file descriptors:   524288
Largest file desc currently in use

Re: [squid-users] acls with the same name, last wins

2016-12-30 Thread Ivan Larionov
I'm a bit confused now. Examples from default config:

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines

acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1025-65535  # unregistered ports

All these ACLs work as OR, right?

Why is req_header different?

On Thu, Dec 29, 2016 at 9:44 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 2016-12-29 21:01, Ivan Larionov wrote:
>
>> I see behavior change after update from squid 2.7 to 3.5:
>>
>> I have following ACLs which I later use for cache_peer_access:
>>
>> acl header req_header header_a -i true
>> acl header req_header header_b -i true
>>
>> # name1 parent
>> cache_peer 127.0.0.1 parent 18070 0 no-query no-digest name=name1
>> cache_peer_access name1 deny header
>>
>> # name2 parent
>> cache_peer 127.0.0.1 parent 18079 0 no-query no-digest name=name2
>> cache_peer_access name2 allow header
>> cache_peer_access name2 deny all
>>
>> With squid 2.7 it was working as expected (requests with header_a OR
>> header_b were going to the second parent, all other requests to the
>> first one).
>>
>> However with squid 3.5 the same config doesn't work as expected. ONLY
>> requests with header_b are going to the second parent and debug logs
>> show that squid only does verification of header_b.
>>
>> My current workaround is to use 2 different ACL names:
>>
>> acl header_a req_header header_a -i true
>> acl header_b req_header header_b -i true
>>
>> # name1 parent
>> cache_peer 127.0.0.1 parent 18070 0 no-query no-digest name=name1
>> cache_peer_access name1 deny header_a
>> cache_peer_access name1 deny header_b
>>
>> # name2 parent
>> cache_peer 127.0.0.1 parent 18079 0 no-query no-digest name=name2
>> cache_peer_access name2 allow header_a
>> cache_peer_access name2 allow header_b
>> cache_peer_access name2 deny all
>>
>> But I think it could be a bug. Multiple ACLs with the same name should
>> work as OR, right? Do I understand it correctly? And it was working as
>> expected in 2.7.
>>
>> Has anyone saw similar behavior? Should I report a bug?
>>
>
> Good find. You are the first to mention it.
>
> I have had a look back into the code history and don't see this as ever
> being an intended behaviour for Squid-2. Just a side effect of how the
> Squid-2 ACL lists happened to be stored internally.
>
> The intended design for ACLs is that basic/primitive tests check one piece
> of state data and get chained explicitly in the access lines for AND/OR
> conditions. That way it is clear what is being processed and matched (or
> not matched).
>
> So for now I am making Squid produce a config ERROR when this config
> situation is found. The 'anyof' or 'allof' ACL types in 3.4+ can be used to
> assemble a more complex test set checking different ACL primitives.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
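
For reference, a minimal sketch of the any-of approach Amos mentions above (the
any-of / all-of ACL types are in 3.4+; the exact spelling used here is assumed,
so check the release notes for your version):

acl header_a req_header header_a -i true
acl header_b req_header header_b -i true
acl custom_headers any-of header_a header_b

cache_peer_access name1 deny custom_headers

cache_peer_access name2 allow custom_headers
cache_peer_access name2 deny all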



-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid sibling peers and digest requests

2016-12-29 Thread Ivan Larionov
Thank you for helping.

After some experiments and tcpdumping, it looks like it's not the sibling
sending the request to the parent, but the original squid!

So instead of asking the sibling about its digests, squid asks the parent.

And your trick with urlpath_regex didn't help. I even tried:

acl internal_digest urlpath_regex +i /.*store_digest.*/
always_direct allow internal_digest
never_direct deny internal_digest

but no luck. It still asks parent.


On Thu, Dec 29, 2016 at 1:00 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 2016-12-29 20:51, Ivan Larionov wrote:
>
>> I'm sure about forwarding because I see requests to
>> http://172.22.15.88:3128/squid-internal-periodic/store_digest [1] in
>> parent logs and my parent returns 502 because we do not allow requests
>> to internal IPs. Logs from the parent:
>>
>> Got request: GET
>> http://172.22.15.88:3128/squid-internal-periodic/store_digest
>> Not allowing blacklisted IP 172.22.15.88
>> GET http://172.22.15.88:3128/squid-internal-periodic/store_digest 502
>> 0ms
>>
>> I do not have "global_internal_static off" in my config and also I'm
>> able to get
>> http://172.22.15.88:3128/squid-internal-periodic/store_digest [1]
>> using curl or telnet (with telnet I do "GET
>> /squid-internal-periodic/store_digest" – note relative URL).
>>
>
> Okay, thats good.
>
>
>> However according to debug logs squid does this request using absolute
>> URL which probably works if target sibling can do direct requests (so
>> it will request itself for digest and return response to original
>> squid). But I do have "never_direct allow all" which probably makes
>> sibling to forward such request to a parent.
>>
>
> Hmm, I think you might be right about that.
> You can test it by adding:
>
>  acl foo urlpath_regex +i /squid.internal.digest/
>  never_direct deny foo
>
>
>
>> If my theory about absolute vs relative URL is correct then I believe
>> original squid should make store_digest request using relative URL
>> (like I can do with telnet) so sibling squid will return response
>> right away w/o asking itself for result.
>>
>
> Whats happening with the URL is that the sending peer generates it from
> the cache_peer IP/host name and port.
>
> The receiving peer checks the pathstarts with "/squid-internal-" and that
> the hostname portion matches its own visible_hostname or unique_hostname.
> If those match its marked for special handling as an internal request,
> otherwise global_internal_static is used to determine if the hostname not
> matching is ignored and it gets marked anyway.
>
> Since the digest needs to be targeted at the specific peer and not
> anything which may inject itself in between them the hostname does need to
> be sent. The relative URLs are for things that don't vary between proxies,
> like the Squid icons.
>
> If you configure cache_peer with the hostname of the receiving peer
> instead of its raw-IP the requests should be sent with that hostname
> instead of raw-IP.
>
>
>
> The config looks okay. Thanks for that.
>
> Amos
>
>
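
For illustration, a minimal sketch of the hostname-based peering described
above (hostnames are placeholders; the name used in each cache_peer line is
assumed to match the other box's visible_hostname / unique_hostname):

# on proxy-a.example.net (visible_hostname proxy-a.example.net)
cache_peer proxy-b.example.net sibling 3128 4827 htcp

# on proxy-b.example.net (visible_hostname proxy-b.example.net)
cache_peer proxy-a.example.net sibling 3128 4827 htcp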


-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] acls with the same name, last wins

2016-12-29 Thread Ivan Larionov
I see behavior change after update from squid 2.7 to 3.5:

I have following ACLs which I later use for cache_peer_access:

acl header req_header header_a -i true
acl header req_header header_b -i true

# name1 parent
cache_peer 127.0.0.1 parent 18070 0 no-query no-digest name=name1
cache_peer_access name1 deny header

# name2 parent
cache_peer 127.0.0.1 parent 18079 0 no-query no-digest name=name2
cache_peer_access name2 allow header
cache_peer_access name2 deny all

With squid 2.7 it was working as expected (requests with header_a OR
header_b were going to the second parent, all other requests to the first
one).

However with squid 3.5 the same config doesn't work as expected. ONLY
requests with header_b are going to the second parent and debug logs show
that squid only does verification of header_b.

My current workaround is to use 2 different ACL names:

acl header_a req_header header_a -i true
acl header_b req_header header_b -i true

# name1 parent
cache_peer 127.0.0.1 parent 18070 0 no-query no-digest name=name1
cache_peer_access name1 deny header_a
cache_peer_access name1 deny header_b

# name2 parent
cache_peer 127.0.0.1 parent 18079 0 no-query no-digest name=name2
cache_peer_access name2 allow header_a
cache_peer_access name2 allow header_b
cache_peer_access name2 deny all

But I think it could be a bug. Multiple ACLs with the same name should work
as OR, right? Do I understand it correctly? And it was working as expected
in 2.7.

Has anyone seen similar behavior? Should I report a bug?

-- 
With best regards, Ivan Larionov.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid sibling peers and digest requests

2016-12-28 Thread Ivan Larionov
I'm sure about forwarding because I see requests to
http://172.22.15.88:3128/squid-internal-periodic/store_digest in parent
logs and my parent returns 502 because we do not allow requests to internal
IPs. Logs from the parent:

Got request: GET
http://172.22.15.88:3128/squid-internal-periodic/store_digest
Not allowing blacklisted IP 172.22.15.88
GET http://172.22.15.88:3128/squid-internal-periodic/store_digest 502 0ms

I do not have "global_internal_static off" in my config and also I'm able
to get http://172.22.15.88:3128/squid-internal-periodic/store_digest using
curl or telnet (with telnet I do "GET /squid-internal-periodic/store_digest"
– note relative URL).

However, according to debug logs squid makes this request using an absolute URL,
which probably works if the target sibling can do direct requests (it will
request the digest from itself and return the response to the original squid). But I do
have "never_direct allow all", which probably makes the sibling forward such
requests to a parent.

If my theory about absolute vs. relative URLs is correct, then I believe the
original squid should make the store_digest request using a relative URL (like I
can do with telnet) so the sibling squid will return the response right away without
asking itself for the result.

This is a more complete config (I only stripped default things like the localnet acls
/ http_access). Note that I actually have 2 parents, which I select based on
a header (all requests without the header go to the first parent), and I also
have:

via off
never_direct allow all
forwarded_for off

# START CONFIG 

# Allow HTCP queries from local networks only
htcp_access allow localnet
htcp_access allow localhost
htcp_access deny all

# Other squids
cache_peer 172.22.15.88 sibling 3128 4827 htcp
cache_peer … sibling 3128 4827 htcp
acl siblings src 172.22.15.88/32
acl siblings src …/32
miss_access deny siblings

acl header_a req_header header_a -i true
acl header_b req_header header_b -i true

# name1 parent
cache_peer 127.0.0.1 parent 18070 0 no-query no-digest name=name1
cache_peer_access name1 deny header_a
cache_peer_access name1 deny header_b

# name2 parent
cache_peer 127.0.0.1 parent 18079 0 no-query no-digest name=name2
cache_peer_access name2 allow header_a
cache_peer_access name2 allow header_b
cache_peer_access name2 deny all

cache_mem …
maximum_object_size_in_memory …
memory_replacement_policy …
cache_replacement_policy …

cache_dir aufs … … 16 256

minimum_object_size … bytes # non-zero so we don't cache mistakes
maximum_object_size … KB

client_db off

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
# refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

# don't cache errors
negative_ttl 0 minutes
# always fetch object from the beginning regardless of Range requests
range_offset_limit none
via off
cache_effective_user squid
cache_effective_group squid
# disable icp
icp_port 0
never_direct allow all
forwarded_for off

# END CONFIG 

On Wed, Dec 28, 2016 at 11:15 PM, Amos Jeffries <squ...@treenet.co.nz>
wrote:

> On 2016-12-29 16:03, Ivan Larionov wrote:
>
>> Hello!
>>
>> I'm trying to setup multiple squids as siblings with a parent which is
>> not even a squid.
>>
>> But I'm getting following message in logs:
>>
>> temporary disabling (Bad Gateway) digest from 172.22.15.88
>>
>> temporary disabling (Bad Gateway) digest from …
>>
>> Squid 3.5.23, compiled with "--enable-cache-digests".
>>
>> For parent I'm setting no-digest, but I'd like to get digests between
>> siblings. However, it doesn't work and I probably found a reason after
>> reading debug logs:
>>
>> This is how squid does store_digest request from a sibling peer:
>>
>> GET http://172.22.15.88:3128/squid-internal-periodic/store_digest [1]
>> HTTP/1.1
>> Accept: application/cache-digest
>> Accept: text/html
>> X-Forwarded-For: unknown
>> Host: 172.22.15.88:3128 [2]
>> Cache-Control: max-age=259200
>> Connection: keep-alive
>>
>> Response (if I execute this request manually from telnet):
>>
>> HTTP/1.1 502 Bad Gateway
>> …
>>
>> This request has been forwarded to a parent and parent returned 502!
>>
>>
> Are you sure about that forwarding?
>  Its not being generated by the sibling?
>
>
> Now if I manually do the same request with a relative URL:
>>
>> GET /squid-internal-periodic/store_digest HTTP/1.1
>> …
>>
>> Response:
>>
>> HTTP/1.1 200 Cache Digest OK
>> …
>>
>> My setup:
>>
>> Multiple squids as siblings, one parent (not a squid).
>>
>> Peers configuration:
>>
>> # Other squids
>> cache_peer 172.22.15

[squid-users] Install Squid 3.3.10 on Slackware 14

2013-11-12 Thread Vukovic Ivan
Hello

Please, I need help to ./configure, make and install Squid 3.3.10 on Slackware
14.0. I installed Slackware 14 with these packages:

aaa_base-14.0-i486-5
aaa_elflibs-14.0-i486-4
acl-2.2.51-i486-1
attr-2.4.46-i486-1
autoconf-2.69-noarch-1
automake-1.11.5-noarch-1
bash-4.2.037-i486-1
bin-11.1-i486-1
bind-9.9.1_P3-i486-1
binutils-2.22.52.0.2-i486-2
bison-2.5.1-i486-1
bzip2-1.0.6-i486-1
clisp-2.49-i486-1
coreutils-8.19-i486-1
cxxlibs-6.0.17-i486-1
db42-4.2.52-i486-3
db44-4.4.20-i486-3
db48-4.8.30-i486-2
dcron-4.5-i486-4
devs-2.3.1-noarch-25
dialog-1.1_20100428-i486-2
diffutils-3.2-i486-1
e2fsprogs-1.42.6-i486-1
elvis-2.2_0-i486-2
etc-14.0-i486-1
expat-2.0.1-i486-2
findutils-4.4.2-i486-1
floppy-5.4-i386-3
gawk-3.1.8-i486-1
gcc-4.7.1-i486-1
gcc-g++-4.7.1-i486-1
gdbm-1.8.3-i486-4
gettext-0.18.1.1-i486-3
gettext-tools-0.18.1.1-i486-3
glib-1.2.10-i486-3
glib2-2.32.4-i486-1
glibc-2.15-i486-7
glibc-i18n-2.15-i486-7
glibc-solibs-2.15-i486-7
glibc-zoneinfo-2012f_2012f-noarch-7
gpm-1.20.1-i486-5
grep-2.14-i486-1
groff-1.21-i486-1
guile-1.8.8-i486-1
gzip-1.5-i486-1
hdparm-9.37-i486-1
infozip-6.0-i486-1
iproute2-3.4.0-i486-2
iptables-1.4.14-i486-1
joe-3.7-i486-1
kbd-1.15.3-i486-2
kernel-firmware-20120804git-noarch-1
kernel-headers-3.2.29_smp-x86-1
kernel-huge-3.2.29-i486-1
kernel-modules-3.2.29-i486-1
kmod-9-i486-3
less-451-i486-1
libexif-0.6.21-i486-1
libpcap-1.3.0-i486-1
libpng-1.4.12-i486-1
libtermcap-1.2.3-i486-7
libtool-2.4.2-i486-1
libxml2-2.8.0-i486-1
libxslt-1.1.26-i486-2
lilo-23.2-i486-3
links-2.7-i486-1
logrotate-3.8.2-i486-1
lsof-4.83-i486-1
m4-1.4.16-i486-1
make-3.82-i486-3
man-1.6g-i486-1
man-pages-3.41-noarch-1
mhash-0.9.9.9-i486-3
mkinitrd-1.4.7-i486-6
ncftp-3.2.5-i486-1
ncurses-5.9-i486-1
net-tools-1.60.20120726git-i486-1
netwatch-1.3.0-i486-1
network-scripts-14.00-noarch-3
openssh-6.1p1-i486-1
openssl-1.0.1c-i486-3
openssl-solibs-1.0.1c-i486-3
pciutils-3.1.9-i486-1
perl-5.16.1-i486-1
pkg-config-0.25-i486-1
pkgtools-14.0-noarch-2
popt-1.7-i486-3
procps-3.2.8-i486-3
readline-5.2-i486-4
samba-3.6.8-i486-1
screen-4.0.3-i486-3
sed-4.2.1-i486-1
shadow-4.1.4.3-i486-7
slocate-3.1-i486-4
strace-4.5.20-i486-1
sysklogd-1.5-i486-1
sysvinit-2.88dsf-i486-2
sysvinit-scripts-2.0-noarch-13
tar-1.26-i486-1
tcpdump-4.3.0-i486-1
texinfo-4.13a-i486-4
time-1.7-i486-1
traceroute-2.0.18-i486-1
tree-1.6.0-i486-1
udev-182-i486-5
util-linux-2.21.2-i486-5
vim-7.3.645-i486-1
wget-1.14-i486-1
whois-5.0.15-i486-1
zlib-1.2.6-i486-1
zsh-5.0.0-i486-1


I can boot and the installation is OK.
Now I want to install Squid 3.3.10 on this Slackware 14 installation, but every time
I run the ./configure command, this error comes up:
gcc error: C Compiler works ..no
gcc -v command unrecognized
gcc -qversion command unrecognized

But here is the point: when I install Slackware 14 full (with all packages),
then I can ./configure, make and install Squid 3.3.10 without any problem.

So,
which package of Slackware 14 is missing to ./configure, make and install Squid
3.3.10? Here is the list of all Slackware 14 included packages:
http://mirror.netcologne.de/slackware/slackware-14.0/PACKAGES.TXT

Please help me to get the squid install process working, Thanks!
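
For what it's worth, a minimal sketch of how to dig out the real compiler
failure behind a "C Compiler works ..no" result (config.log is written by
./configure into the build directory; the grep pattern and line counts are just
examples):

tail -n 50 config.log
grep -i -B 2 -A 10 'error' config.log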


With kind regards
Ivan Vukovic
Abteilung Informatik-Dienste
--
Schlatter Industries AG
Brandstrasse 24
CH-8952 Schlieren
Tel. +41 44 732 7111
Direct +41 44 732 7495
Fax +41 44 732 45 00
Email: ivan.vuko...@schlattergroup.com
Internet www.schlattergroup.com



NoSpam


[squid-users] File download fails through transparent Squid

2012-07-31 Thread Ivan Botnar
Hello,

I have Squid 3.1.19 installed on Ubuntu 12.04 x86_64 from packages. I
need a transparent proxy without disk cache for users working through
Wi-Fi on non-Windows (mostly Apple) devices. I performed a
configuration and Squid works for web surfing or streaming data, but
I'm experiencing issues with file downloads. Basically every
download that lasts more than 10 seconds fails with an error. I've been
looking into logs, debug output, and tcpdump, but no luck.

Here’s my Squid:

# squid3 -v
Squid Cache: Version 3.1.19 configure options:
'--build=x86_64-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc'
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
'--srcdir=.' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
'--mandir=/usr/share/man' '--with-cppunit-basedir=/usr'
'--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--enable-wccpv2'
'--disable-translation' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy'
'--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g
-O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Wformat-security -Werror=format-security'
'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now'
'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE
-fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
-Werror=format-security' --with-squid=/build/buildd/squid3-3.1.19

iptables forwards everything from port 80 to Squid on port 3128:

-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

Here’s my config:

acl my_networks src 192.168.110.0/24 10.21.40.0/24 10.20.40.0/24
192.168.109.0/24
cache deny all
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow my_networks
http_access deny all
cache_store_log /dev/null
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320
httpd_suppress_version_string On
error_directory /usr/share/squid-langpack/en

Here's the last couple records I see in debug:
2012/07/30 18:06:27.339| clientReplyContext::sendMoreData:
http://mirror.cst.temple.edu/opensuse/distribution/12.1/iso/openSUSE-12.1-DVD-x86_64.iso,
7131720 bytes (4096 new bytes)
2012/07/30 18:06:27.339| clientReplyContext::sendMoreData: FD 213
'http://mirror.cst.temple.edu/opensuse/distribution/12.1/iso/openSUSE-12.1-DVD-x86_64.iso'
out.offset=7127299
2012/07/30 18:06:27.339| clientStreamCallback: Calling 1 with cbdata
0x7fe40c97bc30 from node 0x7fe40c890008
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c35ff98
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=2
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=3
2012/07/30 18:06:27.339| The AsyncCall clientWriteBodyComplete
constructed, this=0x7fe40c34b630 [call778693]
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=4
2012/07/30 18:06:27.339| cbdataUnlock: 0x7fe40c97abc8=3
2012/07/30 18:06:27.339| cbdataUnlock: 0x7fe40c97abc8=2

[squid-users] File download fails through transparent Squid

2012-07-30 Thread Ivan Botnar
Hello,

I have Squid 3.1.19 installed on Ubuntu 12.04 86_64 from packages. I need a 
transparent proxy without disk cache for users working through Wi-Fi on 
non-Windows (mostly Apple) devices. I performed a configuration and Squid works 
for web surfing or streaming data but I'm experiencing issues with files 
downloading. Basically every download that lasts more than 10 seconds fails 
with error. I've been looking into logs and debugs, and tcpdump but no luck.

Here's my Squid:

# squid3 -v
Squid Cache: Version 3.1.19
configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr' 
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man' 
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' 
'--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' 
'--disable-dependency-tracking' '--disable-silent-rules' 
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' 
'--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline' 
'--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd' 
'--enable-removal-policies=lru,heap' '--enable-delay-pools' 
'--enable-cache-digests' '--enable-underscores' '--enable-icap-client' 
'--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
 '--enable-ntlm-auth-helpers=smb_lm,' 
'--enable-digest-auth-helpers=ldap,password' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
 '--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--enable-wccpv2' 
'--disable-translation' '--with-logdir=/var/log/squid3' 
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' 
'--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 
'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector 
--param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security' 
'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 
'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector 
--param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security' 
--with-squid=/build/buildd/squid3-3.1.19

IP tables forward everything from 80 port to Squid on 3128 port:

-A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

Here's my config:

acl my_networks src 192.168.110.0/24 10.21.40.0/24 10.20.40.0/24 
192.168.109.0/24
cache deny all
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow my_networks
http_access deny all
cache_store_log /dev/null
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320
httpd_suppress_version_string On
error_directory /usr/share/squid-langpack/en


Here's the last couple records I see in debug:

2012/07/30 18:06:27.339| clientReplyContext::sendMoreData: 
http://mirror.cst.temple.edu/opensuse/distribution/12.1/iso/openSUSE-12.1-DVD-x86_64.iso,
 7131720 bytes (4096 new bytes)
2012/07/30 18:06:27.339| clientReplyContext::sendMoreData: FD 213 
'http://mirror.cst.temple.edu/opensuse/distribution/12.1/iso/openSUSE-12.1-DVD-x86_64.iso'
 out.offset=7127299
2012/07/30 18:06:27.339| clientStreamCallback: Calling 1 with cbdata 
0x7fe40c97bc30 from node 0x7fe40c890008
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c35ff98
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataReferenceValid: 0x7fe40c5763d8
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=2
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=3
2012/07/30 18:06:27.339| The AsyncCall clientWriteBodyComplete constructed, 
this=0x7fe40c34b630 [call778693]
2012/07/30 18:06:27.339| cbdataLock: 0x7fe40c97abc8=4
2012/07/30 18:06:27.339| cbdataUnlock: 0x7fe40c97abc8=3
2012/07/30 

[squid-users] failed http redirection

2011-10-23 Thread Ivan Matala
Hello, this is my rule:

iptables -t nat -A PREROUTING -i tun0 -p tcp -m tcp --match multiport
--dports 80 -j DNAT --to-destination 118.67.78.136:80

What I'm trying to do is redirect all HTTP requests to a
foreign proxy, but it fails.

thanks


[squid-users] Tutorial for Squid Splash Page

2011-10-02 Thread Ivan Matala
Hello guys, do you have any idea whether it is possible to display a splash page
to Squid proxy users? I want it displayed at some specific
interval. Also, can we put up a license agreement which they have to
accept (press Yes) in order to browse any website? Thank you, Squid
users.

Kindly include your ideas or tutorials. Thank you.


[squid-users] squid slow

2011-06-21 Thread Ivan Matala
I notice Squid is slowing down. Browsing goes very slow, real slow.
Any way to boost it? How can I tweak the settings? Thanks


[squid-users] Re: Read error Squid v2.6.stable21 www.microsofthup.com

2011-06-20 Thread Ivan .
Hi

Can you post this so I can get some feedback on whether people are
experiencing issues accessing the site via squid? thanks

The setup is

User ---> Squid ---> http://www.microsofthup.com

The error is

++
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://www.microsofthup.com/hupus/chooser.aspx?
The following error was encountered:

   Read Error

The system returned:
   (104) Connection reset by peer
An error condition occurred while reading data from the network.
Please retry your request.
Your cache administrator is root.
Generated Fri, 17 Jun 2011 00:03:18 GMT by proxy.fqdn.com (squid/2.6.STABLE21)
++

in the access.log I see the site is load balanced


[squid-users] yahoo messenger cant connect

2011-06-17 Thread Ivan Matala
Hello, I installed Squid (default config, didn't change anything) and
web browsing is OK, but when I connect to Yahoo Messenger, it doesn't
work. Please help.


[squid-users] squid SSL

2011-06-17 Thread Ivan Matala
How can I configure Squid SSL?

Because when I go to gmail.com or facebook.com, they require SSL support and I
get an SSL error.

Please help.

What should I do?


Re: [squid-users] squid SSL

2011-06-17 Thread Ivan Matala
This is what I want to achieve:

I have a server and I want all ports to be forwarded to a remote Squid
proxy. I want UDP and TCP ports from 1 to 65535. Is that
possible?

This means all Yahoo Messenger traffic, games, and Skype will be
forwarded to Squid.

Thanks

On Fri, Jun 17, 2011 at 8:27 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 18/06/11 02:33, Ivan Matala wrote:

 how can i configure squid SSL?

 coz when i go to gmail.com, facebook.com, their require ssl support. i
 got ssl error.

 pls help

 what should i do?

 You should start by telling us what the error is please.

 Note that HTTPS is by default relayed directly over Squid without being
 touched. So the error should be something in your browser or the website its
 contacting.
  The error message will help us point you at what more to look at.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2



[squid-users] Read error Squid v2.6.stable21 www.microsofthup.com

2011-06-16 Thread Ivan .
Hi,

I am having an issue accessing an MS site, which is actually hosted via the
Digitalriver content network, and via tcpdump I can see a lot of
redirects, 301 permanently moved, etc. So I am wondering if someone can try
via their squid setup, or if anyone has any ideas what is at play.

No issues when going to the site direct

http://www.microsofthup.com/

Seem to have some load balancers at work  
BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;

Some greps from the access log

1308202767.527    746 10.xxx.xxx.xxx TCP_MISS/301 728 GET
http://microsofthup.com/ - DIRECT/209.87.184.136 text/html [Accept:
image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
application/x-shockwave-flash, application/x-ms-application,
application/x-ms-xbap, application/vnd.ms-xpsdocument,
application/xaml+xml, application/vnd.ms-excel,
application/vnd.ms-powerpoint, application/msword,
*/*\r\nAccept-Language: en-au\r\nUA-CPU: x86\r\nUser-Agent:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322;
.NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET
CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C;
.NET4.0E)\r\nHost: microsofthup.com\r\nCookie:
op390chooserhomedefaultpagesgum=a0o51kr1uy275pp0hk3dqa2766p06y38i56d7;
fcP=C=2T=1307338680214DTO=1306305944285U=717377145V=1307338680214\r\nVia:
1.0 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 301 Moved
Permanently\r\nDate: Thu, 16 Jun 2011 05:16:24 GMT\r\nContent-Length:
191\r\nContent-Type: text/html\r\nCache-Control:
no-cache\r\nConnection: keep-alive\r\nProxy-Connection:
keep-alive\r\nServer: Microsoft-IIS/6.0\r\nPragma:
no-cache\r\nLocation:
http://www.microsofthup.com/hupus/chooser.aspx\r\nVia: 1.1
dc1c5cache01 (NetCache NetApp/6.0.3)\r\nSet-Cookie:
BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;
path=/\r\n\r]

1308202718.027    231 10.xxx.xxx.xxx TCP_MISS/200 2031 GET
http://c5.img.digitalriver.com/gtimages/store-mc-uri/mshup/assets/local//en-US/css/style.css
- DIRECT/122.252.43.91 text/css [Host:
c5.img.digitalriver.com\r\nUser-Agent: Mozilla/5.0 (Windows; U;
Windows NT 5.1; en-GB; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2 (.NET
CLR 3.5.30729)\r\nAccept: text/css,*/*;q=0.1\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nReferer:
http://www.microsofthup.com/hupus/chooser.aspx?culture=en-USresID=TfmRcwoHArEAADX4AfIb\r\nVia:
1.1 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 200
OK\r\nContent-Type: text/css\r\nLast-Modified: Fri, 17 Dec 2010
09:15:43 GMT\r\nETag: 8089fcfaca9dcb1:384\r\nServer:
Microsoft-IIS/6.0\r\nX-Server-Name: dc1c5web07\r\nP3P: CP=CAO DSP
TAIa OUR IND PHY ONL UNI PUR COM NAV INT DEM CNT STA PRE
LOC\r\nX-Powered-By: ASP.NET\r\nDate: Thu, 16 Jun 2011 05:15:34
GMT\r\nContent-Length: 1531\r\nConnection: keep-alive\r\n\r]


This is the squid error

++
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://www.microsofthup.com/hupus/chooser.aspx?
The following error was encountered:

Read Error

The system returned:
    (104) Connection reset by peer
An error condition occurred while reading data from the network.
Please retry your request.
Your cache administrator is root.
Generated Fri, 17 Jun 2011 00:03:18 GMT by proxy (squid/2.6.STABLE21)
++


Re: [squid-users] Read error Squid v2.6.stable21 www.microsofthup.com

2011-06-16 Thread Ivan .
Hi Amos

I obfuscated the fully qualified name of my proxy, so yes, it is definitely
from my proxy.

Clearswift SECURE Web Gateway is the internal proxy system which
chains to the Squid

CSwebGW ---> Squid ---> http://www.microsofthup.com

I have tried direct client to the Squid and same issues

User ---> Squid ---> http://www.microsofthup.com

thanks
Ivan

On Fri, Jun 17, 2011 at 11:49 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 17/06/11 11:54, Ivan . wrote:

 Hi,

 I am having a issue accessing a MS site, which is actually hosted via
 Digitalriver content network, and via tcpdump I can see allot of
 redirects, 301 perm moved etc. So I am wondering if someone can try
 via their squid setup, or if any has any ideas what is at play.

 No issues when going to the site direct

 http://www.microsofthup.com/

 Seem to have some load balancers at work  
 BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;

 Some greps from the access log

 1308202767.527    746 10.xxx.xxx.xxx TCP_MISS/301 728 GET
 http://microsofthup.com/ - DIRECT/209.87.184.136 text/html [Accept:
 image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
 application/x-shockwave-flash, application/x-ms-application,
 application/x-ms-xbap, application/vnd.ms-xpsdocument,
 application/xaml+xml, application/vnd.ms-excel,
 application/vnd.ms-powerpoint, application/msword,
 */*\r\nAccept-Language: en-au\r\nUA-CPU: x86\r\nUser-Agent:
 Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322;
 .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET
 CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2; .NET4.0C;
 .NET4.0E)\r\nHost: microsofthup.com\r\nCookie:
 op390chooserhomedefaultpagesgum=a0o51kr1uy275pp0hk3dqa2766p06y38i56d7;

 fcP=C=2T=1307338680214DTO=1306305944285U=717377145V=1307338680214\r\nVia:
 1.0 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 301 Moved
 Permanently\r\nDate: Thu, 16 Jun 2011 05:16:24 GMT\r\nContent-Length:
 191\r\nContent-Type: text/html\r\nCache-Control:
 no-cache\r\nConnection: keep-alive\r\nProxy-Connection:
 keep-alive\r\nServer: Microsoft-IIS/6.0\r\nPragma:
 no-cache\r\nLocation:
 http://www.microsofthup.com/hupus/chooser.aspx\r\nVia: 1.1
 dc1c5cache01 (NetCache NetApp/6.0.3)\r\nSet-Cookie:
 BIGipServerp-dc1-c9-commerce5-pod1-pool4=2654814218.20480.;
 path=/\r\n\r]

 1308202718.027    231 10.xxx.xxx.xxx TCP_MISS/200 2031 GET

 http://c5.img.digitalriver.com/gtimages/store-mc-uri/mshup/assets/local//en-US/css/style.css
 - DIRECT/122.252.43.91 text/css [Host:
 c5.img.digitalriver.com\r\nUser-Agent: Mozilla/5.0 (Windows; U;
 Windows NT 5.1; en-GB; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2 (.NET
 CLR 3.5.30729)\r\nAccept: text/css,*/*;q=0.1\r\nAccept-Language:
 en-gb,en;q=0.5\r\nAccept-Charset:
 ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nReferer:

 http://www.microsofthup.com/hupus/chooser.aspx?culture=en-USresID=TfmRcwoHArEAADX4AfIb\r\nVia:
 1.1 Clearswift SECURE Web Gateway\r\n] [HTTP/1.0 200
 OK\r\nContent-Type: text/css\r\nLast-Modified: Fri, 17 Dec 2010
 09:15:43 GMT\r\nETag: 8089fcfaca9dcb1:384\r\nServer:
 Microsoft-IIS/6.0\r\nX-Server-Name: dc1c5web07\r\nP3P: CP=CAO DSP
 TAIa OUR IND PHY ONL UNI PUR COM NAV INT DEM CNT STA PRE
 LOC\r\nX-Powered-By: ASP.NET\r\nDate: Thu, 16 Jun 2011 05:15:34
 GMT\r\nContent-Length: 1531\r\nConnection: keep-alive\r\n\r]


 Both of those are successful transfers through your Squid.


 This is the squid error


 ++
 ERROR
 The requested URL could not be retrieved
 While trying to retrieve the URL:
 http://www.microsofthup.com/hupus/chooser.aspx?
 The following error was encountered:

 Read Error

 The system returned:
     (104) Connection reset by peer
 An error condition occurred while reading data from the network.
 Please retry your request.
 Your cache administrator is root.
 Generated Fri, 17 Jun 2011 00:03:18 GMT by proxy (squid/2.6.STABLE21)

 Are you sure this is being generated by your proxy?
  proxy is not a FQDN indicating ownership, so it is a bit hard to tell who
 it belongs to.
  root is an ambiguous email address, so good luck getting in touch with
 whoever runs it to report the problem.


 Your log indicates requests/replies are coming through a proxy with domain
 name Clearswift SECURE Web Gateway which is clearly also not available in
 DNS. So it could be broken in any number of other ways than just its FQDN
 hostname.


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2



[squid-users] Vary the bandwidth to stream

2011-04-26 Thread Ivan Maldonado Zambrano
Hi all,

I'm new in Squid and I'd like to know if Squid can help me to solve this
problem:

I'm a Videotrak developer (Peek Traffic product) and I'm trying to vary my
bandwidth to simulate that my device is located in a remote location.
I'm streaming video from my Videotrak device to a Linux PC (Fedora
distribution). I don't want to block streaming with Squid, I just want
to control/vary my bandwidth between the Videotrak and the PC (local network).

Regards and thanks in advance
Iván Maldonado Zambrano



Re: [squid-users] Vary the bandwidth to stream

2011-04-26 Thread Ivan Maldonado Zambrano
Good day Rogelio,

Yes, it uses streaming over the web. Let me explain: on the device I created a
node (/dev/camera) into which I place the video to be sent, and via Live555 the
information is sent across the web. The device has a fixed IP
(server), and through an application written in Qt/VLC I request the
video (client).

Can you give me some information about delay_pools and
delay_access? That is, I'm using Fedora and I read on a page that to
configure this kind of tool it is necessary to install
squid manually and not use "yum install squid" (in my case).

Regards and thanks in advance
Iván Maldonado Zambrano
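
For reference, a minimal sketch of the delay_pools idea being discussed (one
class-1 aggregate bucket; the numbers are example values in bytes per second,
not a recommendation):

delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 64000/64000

Whether a packaged squid was built with --enable-delay-pools varies by
distribution, which is presumably what the note above about installing manually
refers to.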


On Tue, 2011-04-26 at 16:17 -0500, Rogelio Sevilla Fernandez wrote:
 How are you, Ivan.
 
 Just to verify: does your Videotrak system use streaming over the web? If
 so, squid together with the use of delay_pools / delay_access could be your
 solution.
 
 
 
 Ivan Maldonado Zambrano imaldon...@semex.com.mx wrote:
 
  Hi all,
 
  I'm new in Squid and I'd like to know if Squid can help me to solve this
  problem:
 
  I'm Videotrak developer (Peek Traffic product) and I'm trying to vary my
  bandwidth to simulate that my device is located in a remote location.
  I'm streaming video from my videotrak device to a Linux PC (Fedora
  distribution). I don't want to block streaming with Squid, I just want
  to control/vary my bandwidth between Videotrak and PC (local network).
 
  Regards and thanks in advanced
  Iván Maldonado Zambrano
 
 
  --
  This message has been analyzed by MailScanner of the
  Government of the State of Colima for viruses and other
  dangerous content, and is considered clean.
 
 
 
 
 




Re: [squid-users] File Descriptors

2010-07-05 Thread Ivan .
I used this how to, and did not require a re-compile

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
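
For reference, later Squid releases also expose this as a squid.conf directive;
a minimal sketch (my recollection is that max_filedescriptors only appeared
around 3.2, so treat its availability on your version as an assumption to
verify):

max_filedescriptors 4096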

cheers
Ivan

On Tue, Jul 6, 2010 at 1:43 PM, Mellem, Dan dan.mel...@pomona.k12.ca.us wrote:

 Did you set the limit before you compiled it? The upper limit is set at 
 compile time. I ran into this problem myself.

 -Dan


 -Original Message-
 From:   Superted666 [mailto:ruckafe...@gmail.com]
 Sent:   Mon 7/5/2010 3:33 PM
 To:     squid-users@squid-cache.org
 Cc:
 Subject:        [squid-users] File Descriptors


 Hello,

 Got a odd problem with file descriptors im hoping you guys could help me out
 with?

 Background

 I'm running CentOS 5.5 and squid 3.0 Stable 5.
 The system is configured with 4096 file descriptors with the following :

 /etc/security/limits.conf
 *                -       nofile          4096
 /etc/sysctl.conf
 fs.file-max = 4096

 Also /etc/init.d/squid has ulimit -HSn 4096 at the start.

 Problem

 Running ulimit -n on the box does indeed show 4096 descriptors, but squid
 states it is using 1024 despite what is said above. I noticed this because
 I'm starting to get warnings in the logs about file descriptors...

 Any help greatly appreciated.

 Thanks

 Ed

 Ed
 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
 Sent from the Squid - Users mailing list archive at Nabble.com.






Re: [squid-users] Increasing File Descriptors

2010-05-06 Thread Ivan .
worked for me

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

no recompile necessary


On Thu, May 6, 2010 at 7:13 PM, Bradley, Stephen W. Mr.
bradl...@muohio.edu wrote:
 I can't seem to increase the number above 32768 no matter what I do.

 Ulimit during compile, sysctl.conf and everything else but no luck.


 I have about 5,000 users on a 400mbit connection.

 Steve

 RHEL5 64bit with Squid 3.1.1


[squid-users] TIME_WAIT state

2010-05-04 Thread Ivan .
Hi

I see a lot of TIME_WAIT states when I run netstat -n.

I imagine that this points to some tcp parameters not quite tuned correctly.

Anyone have some kernel tcp tuning parameters for a Squid proxy
running on RH EL5 pushing around 30Mbs?
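
For illustration, a minimal sketch of the sysctl knobs that usually come up in
this discussion (the values are examples only, not recommendations; whether
they are appropriate depends on the traffic profile):

net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65000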


Thanks
Ivan


[squid-users] Proxy performance monitoring

2010-05-01 Thread Ivan .
Hi

I recently implemented a new proxy system. What I am looking at doing is
setting up a periodic test that
goes out to the Internet, pulls some content down and records the
relevant metrics.


PC <--> Proxy --- FW -- Internet <--> Site-with-content


Some sort of scheduled process on a PC, that pulls down some static
content from the same website, which is repeatable. The application
would then record metrics such as speed, time taken to download the
static content and log that.
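
For illustration, a minimal sketch of that kind of scheduled check (the URL,
proxy address and log path are placeholders; run it from cron at whatever
interval suits):

#!/bin/sh
# Fetch a fixed, repeatable URL through the proxy and append timing metrics to a log.
URL="http://example.com/static/test-object.jpg"
PROXY="http://proxy.example.net:3128"
curl -s -o /dev/null -x "$PROXY" \
  -w "$(date '+%Y-%m-%d %H:%M:%S') total=%{time_total}s bytes=%{size_download} bps=%{speed_download}\n" \
  "$URL" >> /var/log/proxy-check.log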

I have been digging on this site
http://www.opensourcetesting.org/performance.php looking at tools that
are available, but I would appreciate any info if anyone has done
something similar.

Thanks
Ivan


Re: [squid-users] Proxy performance monitoring

2010-05-01 Thread Ivan .
thanks

What I am looking for is something more along these lines.

http://www.webperformanceinc.com/library/files/proxy_server_performance.pdf


cheers
Ivan

On Sun, May 2, 2010 at 12:04 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Ivan . wrote:

 Hi

 I recently implemented a new proxy system. I am looking at doing is
 setting a periodical test that
 goes out to the Internet, pull some content down and record the
 relevant metrics.



 PC <--> Proxy --- FW -- Internet <--> Site-with-content


 Some sort of scheduled process on a PC, that pulls down some static
 content from the same website, which is repeatable. The application
 would then record metrics such as speed, time taken to download the
 static content and log that.


 Squid native access.log contains transfer duration and size metrics.
 Some other options not in the default format provide additional metrics if
 you need them.
  See http://www.squid-cache.org/Doc/config/logformat/ for a lit of log
 metrics.

 Otherwise the SNMP counters can be used, but they do not go down as fine
 grained as indvidual requests.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1



Re: [squid-users] Proxy performance monitoring

2010-05-01 Thread Ivan .
Not necessarily a test lab setup, but something that sits on a client
machine, pulls down some static content at regular intervals and then
reports on the performance.

What I am trying to do is simulate the client experience, so to speak.

cheers
Ivan

On Sun, May 2, 2010 at 12:52 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Ivan . wrote:

 thanks

 What am I looking for is something more along these lines.


 http://www.webperformanceinc.com/library/files/proxy_server_performance.pdf


 cheers
 Ivan

 Oh.

 That paper describes requirements for a lab test. The good test software;
 polygraph etc, have not changed AFAIK so go with those mentioned if you want
 to.

 What your initial email seemed to describe was for monitoring live
 production installation performance.

 Be aware these are very different. Throwing lab data at a production server
 to a real remote web service is a very quick way to get yourself a huge
 bandwidth bill and annoyed phone calls.

 Amos



 On Sun, May 2, 2010 at 12:04 PM, Amos Jeffries squ...@treenet.co.nz
 wrote:

 Ivan . wrote:

 Hi

 I recently implemented a new proxy system. I am looking at doing is
 setting a periodical test that
 goes out to the Internet, pull some content down and record the
 relevant metrics.




 PC <--> Proxy --- FW -- Internet <--> Site-with-content


 Some sort of scheduled process on a PC, that pulls down some static
 content from the same website, which is repeatable. The application
 would then record metrics such as speed, time taken to download the
 static content and log that.

 Squid native access.log contains transfer duration and size metrics.
 Some other options not in the default format provide additional metrics
 if
 you need them.
  See http://www.squid-cache.org/Doc/config/logformat/ for a lit of log
 metrics.

 Otherwise the SNMP counters can be used, but they do not go down as fine
 grained as indvidual requests.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1



 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1



[squid-users] client_lifetime

2010-04-29 Thread Ivan .
Hi

I chain from two internal Clearswift appliances to a Squid box in a DMZ.

I have noticed quite a few WARNING: Closing client internal-proxy-ip
connection due to lifetime timeout

The client_lifetime is set at the default, but I was wondering if I should
stretch that right out to 365 days or the like, seeing as all my
connections to the Squid proxy come from only two IP addresses.

Any other parameters that I should tune in this sort of setup?

Thanks
Ivan


Re: [squid-users] Squid v3.0Stable16 memory leak

2010-04-05 Thread Ivan .
Amos

I can confirm that with the same kernel, v2.6.STABLE21 works fine

cheers
Ivan

On Tue, Mar 30, 2010 at 6:21 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 Ivan . wrote:

 Hi

 Had this running on a RedHat EL5 64bit OS running Squid v3.0.STABLE16
 for about 3 days, with 8GB of memory. Slowly but surely top shows
 the availble memory dropping down to 500MB, which concerned me a great
 deal.

 I am not caching, using the cache_dir null directive, so not sure what
 is going on other a memory leak. Restarting the squid process didn't
 help, so I bounced the box and low and behold available memory is
 around 7GB.

 Um ... Restarting Squid drops all the memory it has allocated, whether leaked 
 or not. Same as killing the process.

 This sounds very much like something I saw back in the 2.6.31 kernel last 
 year. Any app that used a lot of memory or connections slowly (relative) 
 leaked RAM into the kernel space somehow. Only a system restart or kernel 
 upgrade to 2.6.32 fixed it here.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


[squid-users] Squid v3.0Stable16 memory leak

2010-03-30 Thread Ivan .
Hi

Had this running on a RedHat EL5 64bit OS running Squid v3.0.STABLE16
for about 3 days, with 8GB of memory. Slowly but surely top shows
the available memory dropping down to 500MB, which concerned me a great
deal.

I am not caching (using the cache_dir null directive), so I'm not sure what
is going on other than a memory leak. Restarting the squid process didn't
help, so I bounced the box and lo and behold, available memory is
around 7GB.

I just upgraded to v3.0.STABLE24, so hoping this is better

I am not running any other services on the box, apart from ssh for
access, no GUI etc

cheers
Ivan


Re: [squid-users] Squid v3.0Stable16 memory leak

2010-03-30 Thread Ivan .
Hmm, I'll check tomorrow, but I am fairly certain that I am on the
latest kernel via the RHN support site for RH EL5.

I was on Squid v2.6.STABLExx, which is the latest RPM made available by
RedHat, and didn't have any issues. I upgraded hoping to solve my
persistent TCP_MISS for a couple of sites.

cheers
Ivan

On Tue, Mar 30, 2010 at 6:21 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Ivan . wrote:

 Hi

 Had this running on a RedHat EL5 64bit OS running Squid v3.0.STABLE16
 for about 3 days, with 8GB of memory. Slowly but surely top shows
 the availble memory dropping down to 500MB, which concerned me a great
 deal.

 I am not caching, using the cache_dir null directive, so not sure what
 is going on other a memory leak. Restarting the squid process didn't
 help, so I bounced the box and low and behold available memory is
 around 7GB.

 Um ... Restarting Squid drops all the memory it has allocated, whether
 leaked or not. Same as killing the process.

 This sounds very much like something I saw back in the 2.6.31 kernel last
 year. Any app that used a lot of memory or connections slowly (relative)
 leaked RAM into the kernel space somehow. Only a system restart or kernel
 upgrade to 2.6.32 fixed it here.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1



Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
Some even more strange access.log entries?

This is odd. Does that mean no DNS record? Strange, as both squids use
the same DNS setup, with a primary, secondary and tertiary server.
1269833940.167  0 127.0.0.1 NONE/400 1868 GET
www.environment.gov.au - NONE/- text/html

1269833960.464  60997 10.132.17.30 TCP_MISS/000 0 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 -

1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
http://www.environment.gov.au - DIRECT/155.187.3.81 -

This one is new?
1269842635.028 295660 10.143.254.22 TCP_MISS/502 2514 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html



On Mon, Mar 29, 2010 at 4:56 PM, Ivan . ivan...@gmail.com wrote:
 Hi Amos

 You can see the tcp_miss in the access.log here:-

 1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
 http://www.environment.gov.au - DIRECT/155.187.3.81 -

 Here is a tcpdump output from the connection. You can see the TCP
 handshake set up and then the HTTP session just hangs. I have confirmed
 with the website admin that there is no DDoS-type protection which would
 block multiple requests in quick succession.

 The tcp connection times out and then resets.

 [r...@squid-proxy ~]# tcpdump net 155.187.3
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
 16:58:59.369482 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: S
 1781942738:1781942738(0) win 5840 <mss 1460,sackOK,timestamp
 1321171542 0,nop,wscale 7>
 16:58:59.418150 IP 155.187.3.81.http > xxx..xxx.xxx.41338: S
 2343505326:2343505326(0) ack 1781942739 win 32768 <mss 1460,nop,wscale
 0,nop,nop,timestamp 234270252 1321171542,sackOK,eol>
 16:58:59.418167 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
 win 46 <nop,nop,timestamp 1321171591 234270252>
 16:58:59.418213 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: P
 1:696(695) ack 1 win 46 <nop,nop,timestamp 1321171591 234270252>
 16:58:59.477692 IP 155.187.3.81.http > xxx..xxx.xxx.41338: P
 2897:4081(1184) ack 696 win 33304 <nop,nop,timestamp 234270307
 1321171591>
 16:58:59.477700 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
 win 46 <nop,nop,timestamp 1321171591 234270252,nop,nop,sack 1
 {2897:4081}>

 cheers
 Ivan

 On Mon, Mar 29, 2010 at 3:59 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Ivan . wrote:

 Hi,

 What would cause a TCP MISS 502, which would prevent a site from
 loading? The site works on squidv3.0 but not on v2.6?


 Any one of quite a few things. The ERR_READ_ERROR result means the remote
 server or network is closing the TCP link on you for some unknown reason.

 Why it works in 3.0 is as much a mystery as why it does not in 2.6 until
 details of the traffic on Squid-Server TCP link are known.


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18




Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
That is so odd, as I have two identical boxes, now running the same
Squid version, going through the same infrastructure; one works, the
other one doesn't.

The only difference is the public addresses configured on each of the
squid proxy systems.

The TCP stats on the interface of the squid box that won't access that
site don't look too bad at all:

[r...@pcr-proxy ~]# netstat -s
Ip:
1593488410 total packets received
17991 with invalid addresses
0 forwarded
0 incoming packets discarded
1593318874 incoming packets delivered
1413863445 requests sent out
193 reassemblies required
95 packets reassembled ok
Icmp:
22106 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
destination unreachable: 16
echo requests: 22090
155761 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 133671
echo replies: 22090
IcmpMsg:
InType3: 16
InType8: 22090
OutType0: 22090
OutType3: 133671
Tcp:
27785486 active connections openings
78777077 passive connection openings
68247 failed connection attempts
560600 connection resets received
569 connections established
1589479495 segments received
1403833081 segments send out
6034370 segments retransmited
0 bad segments received.
626711 resets sent
Udp:
3817253 packets received
20 packets to unknown port received.
0 packet receive errors
3840233 packets sent
TcpExt:
217 invalid SYN cookies received
15888 resets received for embryonic SYN_RECV sockets
42765 packets pruned from receive queue because of socket buffer overrun
7282834 TCP sockets finished time wait in fast timer
3 active connections rejected because of time stamp
11427 packets rejects in established connections because of timestamp
8682907 delayed acks sent
1268 delayed acks further delayed because of locked socket
Quick ack mode was activated 1227980 times
36 packets directly queued to recvmsg prequeue.
14 packets directly received from prequeue
538829561 packets header predicted
492906318 acknowledgments not containing data received
190275750 predicted acknowledgments
372 times recovered from packet loss due to fast retransmit
348117 times recovered from packet loss due to SACK data
174 bad SACKs received
Detected reordering 71 times using FACK
Detected reordering 963 times using SACK
Detected reordering 25 times using reno fast retransmit
Detected reordering 998 times using time stamp
560 congestion windows fully recovered
10689 congestion windows partially recovered using Hoe heuristic
TCPDSACKUndo: 921
197231 congestion windows recovered after partial ack
1316789 TCP data loss events
TCPLostRetransmit: 22
3020 timeouts after reno fast retransmit
78970 timeouts after SACK recovery
10665 timeouts in loss state
743644 fast retransmits
1003156 forward retransmits
1884003 retransmits in slow start
1604549 other TCP timeouts
TCPRenoRecoveryFail: 150
31151 sack retransmits failed
4198383 packets collapsed in receive queue due to low socket buffer
814608 DSACKs sent for old packets
33462 DSACKs sent for out of order packets
65506 DSACKs received
266 DSACKs for out of order packets received
215231 connections reset due to unexpected data
10630 connections reset due to early user close
76801 connections aborted due to timeout
IpExt:
InBcastPkts: 9199


On Mon, Mar 29, 2010 at 5:47 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Ivan . wrote:

 Some even more strange access.log entries?

 This is odd? Does that mean no DNS record? strange as both squid's use
 the same DNS setup, with a primary, secondary and tertiary setup.
 1269833940.167      0 127.0.0.1 NONE/400 1868 GET
 www.environment.gov.au - NONE/- text/html

 1269833960.464  60997 10.132.17.30 TCP_MISS/000 0 GET
 http://www.environment.gov.au/ - DIRECT/155.187.3.81 -

 1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
 http://www.environment.gov.au - DIRECT/155.187.3.81 -

 This one is new?
 1269842635.028 295660 10.143.254.22 TCP_MISS/502 2514 GET
 http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html


 The TCP_MISS/000 are another version of the READ_ERROR you are receiving as
 TCP_MISS/502. The 000 ones are on the client facing side though, the TCP
 link read failing before the request headers are finished being received
 from the client.
  The first line is received (to get the URL) but not the rest of the request
 headers.

 The NONE/400 might be yet another version of the read failing at some point
 of processing. It's hard to say.

 Something is definitely very screwed at the TCP protocol level for those
 requests.
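Since those 000 entries point at the client-facing side, it may help to capture that
leg as well and compare it with the server-side trace already taken (a sketch; the
listening port and client address are placeholders for whatever this Squid actually
uses):

tcpdump -i eth0 -s 0 -w client-side.pcap 'port 3128 and host 10.132.17.30'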

 Amos



 On Mon, Mar 29, 2010 at 4:56 PM, Ivan . ivan...@gmail.com wrote:

 Hi Amos

 You can see the tcp_miss in the access.log

Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
More odd TCP_MISS entries.

Only a small portion of the site would work. It works fine from
the primary, but fails on the secondary squid.

1269906612.412   5464 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906612.930 17 10.xxx..xxx TCP_MISS/200 851 GET
http://advisories.internode.on.net/images/menu2-on.gif -
DIRECT/192.231.203.146 image/gif
1269906613.075  9 10.xxx..xxx TCP_REFRESH_MODIFIED/200 782 GET
http://advisories.internode.on.net/images/menu2.gif -
DIRECT/192.231.203.146 image/gif
1269906613.331221 10.xxx..xxx  TCP_MISS/200 819 GET
http://advisories.internode.on.net/images/menu1-on.gif -
DIRECT/192.231.203.146 image/gif
1269906614.487   1865 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/ - DIRECT/203.16.214.27 -
1269906696.702  60903 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906767.709  61004 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906840.719  60299 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/broadband/plan_changes/ -
DIRECT/203.16.214.27 -
1269906911.707  60981 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/broadband/plan_changes/ -
DIRECT/203.16.214.27 -



On Mon, Mar 29, 2010 at 5:56 PM, Ivan . ivan...@gmail.com wrote:
 That is so odd, as I have two identical boxes, now running the same
 Squid version, going through the same infrastructure, one works, the
 other one doesn't?

 The only difference are the public addresses configured on each of the
 squid proxy systems.

 The TCP stats on the interface on the squid box that won't access that
 site, don't look to bad at all

 [r...@pcr-proxy ~]# netstat -s
 Ip:
    1593488410 total packets received
    17991 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    1593318874 incoming packets delivered
    1413863445 requests sent out
    193 reassemblies required
    95 packets reassembled ok
 Icmp:
    22106 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 16
        echo requests: 22090
    155761 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 133671
        echo replies: 22090
 IcmpMsg:
        InType3: 16
        InType8: 22090
        OutType0: 22090
        OutType3: 133671
 Tcp:
    27785486 active connections openings
    78777077 passive connection openings
    68247 failed connection attempts
    560600 connection resets received
    569 connections established
    1589479495 segments received
    1403833081 segments send out
    6034370 segments retransmited
    0 bad segments received.
    626711 resets sent
 Udp:
    3817253 packets received
    20 packets to unknown port received.
    0 packet receive errors
    3840233 packets sent
 TcpExt:
    217 invalid SYN cookies received
    15888 resets received for embryonic SYN_RECV sockets
    42765 packets pruned from receive queue because of socket buffer overrun
    7282834 TCP sockets finished time wait in fast timer
    3 active connections rejected because of time stamp
    11427 packets rejects in established connections because of timestamp
    8682907 delayed acks sent
    1268 delayed acks further delayed because of locked socket
    Quick ack mode was activated 1227980 times
    36 packets directly queued to recvmsg prequeue.
    14 packets directly received from prequeue
    538829561 packets header predicted
    492906318 acknowledgments not containing data received
    190275750 predicted acknowledgments
    372 times recovered from packet loss due to fast retransmit
    348117 times recovered from packet loss due to SACK data
    174 bad SACKs received
    Detected reordering 71 times using FACK
    Detected reordering 963 times using SACK
    Detected reordering 25 times using reno fast retransmit
    Detected reordering 998 times using time stamp
    560 congestion windows fully recovered
    10689 congestion windows partially recovered using Hoe heuristic
    TCPDSACKUndo: 921
    197231 congestion windows recovered after partial ack
    1316789 TCP data loss events
    TCPLostRetransmit: 22
    3020 timeouts after reno fast retransmit
    78970 timeouts after SACK recovery
    10665 timeouts in loss state
    743644 fast retransmits
    1003156 forward retransmits
    1884003 retransmits in slow start
    1604549 other TCP timeouts
    TCPRenoRecoveryFail: 150
    31151 sack retransmits failed
    4198383 packets collapsed in receive queue due to low socket buffer
    814608 DSACKs sent for old packets
    33462 DSACKs sent for out of order packets
    65506 DSACKs received
    266 DSACKs for out of order packets received
    215231 connections reset due to unexpected data
    10630 connections reset due to early user close
    76801 connections aborted

Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
really? One site not working on each of the Squid boxes?

That would be very, very strange?

Ivan

On Tue, Mar 30, 2010 at 11:20 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Tue, 30 Mar 2010 10:50:53 +1100, Ivan . ivan...@gmail.com wrote:
 More odd tcp_miss

 Only had a small portion of the site which would work. Works fine from
 the primary, but fails on the secondary squid.


 It's at this point that I'm suspecting the NIC or hardware.
 Though low-level software such as the kernel or iptables version warrants a
 look as well.

 Amos
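Amos' hardware/kernel suspicion can be narrowed down from the shell before swapping
parts, by comparing the two boxes (a sketch; the interface name is a placeholder and
the interesting counters vary by driver):

ethtool -S eth0 | grep -iE 'err|drop'     # NIC error/drop counters
ethtool -k eth0                           # offload settings (TSO/GSO etc.)
sysctl net.ipv4.tcp_window_scaling        # kernel TCP tuning differences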



Re: [squid-users] TCP MISS 502

2010-03-28 Thread Ivan .
Hi Amos

You can see the tcp_miss in the access.log here:-

1269834108.182 120002 127.0.0.1 TCP_MISS/000 0 GET
http://www.environment.gov.au - DIRECT/155.187.3.81 -

Here is the tcpdump output from the connection. You can see the TCP
handshake complete and then the HTTP session just hangs. I have confirmed
with the website admin that there is no DDoS-type protection which would
block multiple requests in quick succession.

The tcp connection times out and then resets.

[r...@squid-proxy ~]# tcpdump net 155.187.3
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
16:58:59.369482 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: S
1781942738:1781942738(0) win 5840 <mss 1460,sackOK,timestamp
1321171542 0,nop,wscale 7>
16:58:59.418150 IP 155.187.3.81.http > xxx..xxx.xxx.41338: S
2343505326:2343505326(0) ack 1781942739 win 32768 <mss 1460,nop,wscale
0,nop,nop,timestamp 234270252 1321171542,sackOK,eol>
16:58:59.418167 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
win 46 <nop,nop,timestamp 1321171591 234270252>
16:58:59.418213 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: P
1:696(695) ack 1 win 46 <nop,nop,timestamp 1321171591 234270252>
16:58:59.477692 IP 155.187.3.81.http > xxx..xxx.xxx.41338: P
2897:4081(1184) ack 696 win 33304 <nop,nop,timestamp 234270307
1321171591>
16:58:59.477700 IP xxx..xxx.xxx.41338 > 155.187.3.81.http: . ack 1
win 46 <nop,nop,timestamp 1321171650 234270252,nop,nop,sack 1
{2897:4081}>


cheers
Ivan

On Mon, Mar 29, 2010 at 3:59 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Ivan . wrote:

 Hi,

 What would cause a TCP MISS 502, which would prevent a site from
 loading? The site works on squidv3.0 but not on v2.6?


 Any one of quite a few things. The ERR_READ_ERROR result means the remote
 server or network is closing the TCP link on you for some unknown reason.

 Why it works in 3.0 is as much a mystery as why it does not in 2.6 until
 details of the traffic on Squid-Server TCP link are known.


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18



[squid-users] TCP MISS 502

2010-03-27 Thread Ivan .
Hi,

What would cause a TCP_MISS/502, which would prevent a site from
loading? The site works on Squid v3.0 but not on v2.6.

This error is from Squid v2.6.STABLE21; I can't get the www.environment.gov.au site up:

1269582298.419 306252 10.xxx.xxx.xxx TCP_MISS/502 1442 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html [Host:
www.environment.gov.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\nCookie: tmib_res_layout=default-wide;
__utma=181583987.2132547050.1269488465.1269509672.1269569388.4;
__utmc=181583987;
__utmz=181583987.1269488465.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)\r\n]
[HTTP/1.0 502 Bad Gateway\r\nServer: squid\r\nDate: Fri, 26 Mar 2010
05:44:58 GMT\r\nContent-Type: text/html\r\nContent-Length:
1074\r\nExpires: Fri, 26 Mar 2010 05:44:58 GMT\r\nX-Squid-Error:
ERR_READ_ERROR 104\r\n\r]

But it works fine from Squid v3.0.STABLE16.

Thanks
Ivan


[squid-users] Can someone check a site?

2010-03-25 Thread Ivan .
Hi,

Can someone running Squid v2.6 STABLE21 check this site for me?
http://www.usp.ac.fj

There is nothing in the access.log to give me a hint as to where the issue is.

I can access it directly, but through Squid it just hangs there after
the initial TCP handshake.

[r...@proxy squid]# tcpdump -vvv host 144.120.8.2
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size
65535 bytes
21:35:24.227005 IP (tos 0x0, ttl  64, id 7227, offset 0, flags [DF],
proto: TCP (6), length: 60) xxx.xxx.xxx.xxx.33151 >
belo.usp.ac.fj.http: S, cksum 0xb78a (correct),
2265843403:2265843403(0) win 5840 <mss 1460,sackOK,timestamp 992156400
0,nop,wscale 7>
21:35:24.488001 IP (tos 0x0, ttl  53, id 0, offset 0, flags [DF],
proto: TCP (6), length: 60) belo.usp.ac.fj.http >
xxx.xxx.xxx.xxx.33151: S, cksum 0x1a89 (correct),
2369822436:2369822436(0) ack 2265843404 win 5792 <mss
1460,sackOK,timestamp 159278980 992156400,nop,wscale 0>
21:35:24.488013 IP (tos 0x0, ttl  64, id 7228, offset 0, flags [DF],
proto: TCP (6), length: 52) xxx.xxx.xxx.xxx.33151 >
belo.usp.ac.fj.http: ., cksum 0x5ebb (correct), 1:1(0) ack 1 win 46
<nop,nop,timestamp 992156661 159278980>
21:35:24.488077 IP (tos 0x0, ttl  64, id 7229, offset 0, flags [DF],
proto: TCP (6), length: 482) xxx.xxx.xxx.xxx.33151 >
belo.usp.ac.fj.http: P, cksum 0x15be (incorrect (-> 0x870c),
1:431(430) ack 1 win 46 <nop,nop,timestamp 992156661 159278980>
21:35:24.729001 IP (tos 0x0, ttl  53, id 63867, offset 0, flags [DF],
proto: TCP (6), length: 52) belo.usp.ac.fj.http >
xxx.xxx.xxx.xxx.33151: ., cksum 0x4401 (correct), 1:1(0) ack 431 win
6432 <nop,nop,timestamp 159279006 992156661>

thanks
Ivan


[squid-users] TCP MISS 502

2010-03-25 Thread Ivan .
Man, Squid does my head in sometimes.

This error is from Squid v2.6.STABLE21; I can't get the www.environment.gov.au site up:

1269582298.419 306252 10.xxx.xxx.xxx TCP_MISS/502 1442 GET
http://www.environment.gov.au/ - DIRECT/155.187.3.81 text/html [Host:
www.environment.gov.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\nCookie: tmib_res_layout=default-wide;
__utma=181583987.2132547050.1269488465.1269509672.1269569388.4;
__utmc=181583987;
__utmz=181583987.1269488465.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)\r\n]
[HTTP/1.0 502 Bad Gateway\r\nServer: squid\r\nDate: Fri, 26 Mar 2010
05:44:58 GMT\r\nContent-Type: text/html\r\nContent-Length:
1074\r\nExpires: Fri, 26 Mar 2010 05:44:58 GMT\r\nX-Squid-Error:
ERR_READ_ERROR 104\r\n\r]

But it works fine from Squid v3.0.STABLE16.

Thanks
Ivan


Re: [squid-users] FileDescriptor Issues

2010-03-22 Thread Ivan .
Have you raised the file descriptor limit in the squid start-up script?

see here
http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
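The usual shape of that fix is to raise the per-process limit in the start-up script
before squid is launched (a sketch, not a copy of that page; the value is illustrative,
and Squid 2.x also needs to have been built with a large enough --with-maxfd to use it):

# in the squid init/start-up script, before starting squid
ulimit -HSn 8192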

cheers
Ivan

On Tue, Mar 23, 2010 at 12:45 PM, a...@gmail adbas...@googlemail.com wrote:

 I have solved the problem, I managed to increase the file descriptor limit.
 My system now reads 65535.
 But Squid still says only 1024 file descriptors are available.

 What can I do to fix this please, I have rebooted the system and Squid 
 several times
 I am running out of ideas

 Any help would be appreciated
 Regards
 Adam



Re: [squid-users] Squid cache_dir failed - can squid survive?

2010-03-18 Thread Ivan .
I wonder about the value of an HTTP cache, when the majority of high
volume sites used in the corporate environment are dynamic.
http://www.mnot.net/cache_docs/

How is the no-cache HTTP header handled by Squid?

I didn't see the value in it, and used the cache_dir null /tmp to stop it
http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid
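For reference, the no-disk-cache setup mentioned above is just (a sketch; on Squid 2.x
the null store type must have been compiled in, e.g. --enable-storeio="null,aufs"):

cache_dir null /tmp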

cheers
Ivan


On Thu, Mar 18, 2010 at 5:16 PM, GIGO . gi...@msn.com wrote:

 Dear henrik,

 If you have only one physical machine, what is the best strategy for
 minimizing the downtime: rebuild the cache directory again, or start
 using Squid without the cache directory? I assume we have to
 reinstall the Squid software? Please guide.





 
 From: hen...@henriknordstrom.net
 To: gina...@gmail.com
 CC: squid-users@squid-cache.org
 Date: Sat, 13 Mar 2010 09:32:30 +0100
 Subject: Re: [squid-users] Squid cache_dir failed - can squid survive?

 On Fri, 2010-03-12 at 14:28 -0800, Maykeen wrote:
 I want to know if squid is able to survive if it suddenly loses access to
 its cache directories, for example by stopping caching requests and just serving
 as a proxy. Is there a way to do this, instead of squid terminating when
 this happens?

 Squid is not currently designed to handle this and will terminate.

 What you can do to handle this situation is to run two Squids, one just
 as a proxy and the other with the cache. The proxy only one uses the
 cache one as parent.
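A minimal sketch of that two-instance layout, assuming both run on the same host and
the ports are placeholders; the front instance caches nothing and forces everything
through the caching parent:

# front (no-cache) instance
http_port 3128
cache_dir null /tmp
cache_peer 127.0.0.1 parent 3129 0 no-query default
never_direct allow all

# the caching instance listens on 3129 with a normal cache_dir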

 Regards
 Henrik



Re: [squid-users] Squid cache_dir failed - can squid survive?

2010-03-18 Thread Ivan .
I noticed an improvement when I disabled it, which may have something
to do with my cache settings, but I tried a number of config combos
without much success

2010/3/18 Henrik Nordström hen...@henriknordstrom.net:
 tor 2010-03-18 klockan 17:25 +1100 skrev Ivan .:
 I wonder about the value of http cache, when the majority of high
 volume sites used in the corporate environment are dynamic.
 http://www.mnot.net/cache_docs/

 Hit ratios have not declined that much in the last decade. It's still
 around a 25-30% byte hit ratio and significantly more in request hit
 ratio.

 While it's true that a lot of the HTML content is more dynamic than
 before, there is also a lot more inlined content such as images etc. which
 is plain static and caches just fine, and these make up the majority
 of the traffic.

 How is the no-cache HTTP header handled by Squid?

 By default as if the response is not cachable. Somewhat stricter than
 the specifications require, but more in line with what web authors
 expect when using this directive.

 Regards
 Henrik




Re: [squid-users] squid consuming too much processor/cpu

2010-03-17 Thread Ivan .
you might want to check out this thread

http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html
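Before restarting anything, it can help to confirm where the CPU time goes via the
cache manager (a sketch, assuming squidclient is installed and cachemgr access from
localhost is allowed; the port matches the http_port in the config quoted below):

squidclient -p 8080 mgr:info | grep -i cpu
squidclient -p 8080 mgr:5min | grep -i cpu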

cheers
ivan

On Wed, Mar 17, 2010 at 4:55 PM, Muhammad Sharfuddin
m.sharfud...@nds.com.pk wrote:
 Squid Cache: Version 2.7.STABLE5(squid-2.7.STABLE5-2.3)
 kernel version: 2.6.27 x86_64
 CPU: Xeon 2.6 GHz CPU
 Memory: 2 GB
 /var/cache/squid is ext3, mounted with 'noacl' and 'noatime' options
 number of users using this proxy: 160
 number of users using simultaneously/concurrently using this proxy: 72

 I found that squid is consuming too much CPU; the average CPU idle time is
 only 49%.

 I have attached the output of 'top -b -n 7' and 'vmstat 1'.

 below is the output of squid.conf

 squid.conf:
 -

 http_port 8080
 cache_mgr administra...@test.com
 cache_mem 1024 MB
 cache_dir aufs /var/cache/squid 2 32 256
 visible_hostname gateway.test.com
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
 refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90%
 432000
 refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$
 10080 90% 43200
 refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
 refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
 refresh_pattern . 0 40% 40320
 cache_swap_low 78
 cache_swap_high 90

 maximum_object_size_in_memory 100 KB
 maximum_object_size 12288  KB

 fqdncache_size 2048
 ipcache_size 2048

 acl myFTP port   20  21
 acl ftp_ipes src /etc/squid/ftp_ipes.txt
 http_access allow ftp_ipes myFTP
 http_access deny myFTP

 acl porn_deny url_regex /etc/squid/domains.deny
 http_access deny porn_deny

 acl vip src /etc/squid/vip_ipes.txt
 http_access allow vip

 acl entweb url_regex /etc/squid/entwebsites.txt
 http_access deny entweb

 acl mynet src /etc/squid/allowed_ipes.txt
 http_access allow mynet


 Please help: why is squid using so much CPU?


 --
 Regards
 Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823




Re: [squid-users] Squid v2.6 error accessing site

2010-03-17 Thread Ivan .
It's all sorted; it was an issue on the hosted site.

On Wed, Mar 17, 2010 at 8:27 PM, Matus UHLAR - fantomas
uh...@fantomas.sk wrote:
  On Tue, 16 Mar 2010 11:12:44 +1100, Ivan . ivan...@gmail.com wrote:
  I am having some trouble accessing the site
  http://www.efirstaid.com.au/. I confirm the TCP SYN packet leaves our
  edge router, but I don't see anything back?

  On Tue, Mar 16, 2010 at 11:27 AM, Amos Jeffries squ...@treenet.co.nz 
  wrote:
  And what makes you think packets failing to return to your network is
  caused by Squid?

 On Tue, Mar 16, 2010 at 11:35 AM, Ivan . ivan...@gmail.com wrote:
  Because the site is accessible directly, without going via squid, I am
  trying to eliminate the most obvious.
 
  and the window scaling issues
  http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize

 On 16.03.10 12:46, Ivan . wrote:
 Now I am even more convinced it is a squid issue

 I just built up another RedHat ELv5 box, with Squid v 3.0 stable 16
 and the site works

 The existing squid proxy running Squid v2.6 stable21 does not work

 did you try to connect from the site running squid and did it work?

 I am sure it is not your squid who sends packets back to your network.

 --
 Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
 The 3 biggets disasters: Hiroshima 45, Tschernobyl 86, Windows 95



Re: [squid-users] squid consuming too much processor/cpu

2010-03-17 Thread Ivan .
run a cron job to restart Squid once a week?
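If you go that way, a plain cron entry is enough (a sketch; the init script path is a
placeholder and depends on the distro):

# root crontab: restart Squid every Sunday at 03:00
0 3 * * 0 /etc/init.d/squid restart >/dev/null 2>&1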

On Wed, Mar 17, 2010 at 11:09 PM, Muhammad Sharfuddin
m.sharfud...@nds.com.pk wrote:

 On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:
  you might want to check out this thread
 
  http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html

 I checked, but it's not clear to me.
 Do I need to install some packages/RPMs? And then?
 I mean, how can I resolve this issue?

 --
 Regards
 Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823

 
 
  cheers
  ivan
 
  On Wed, Mar 17, 2010 at 4:55 PM, Muhammad Sharfuddin
  m.sharfud...@nds.com.pk wrote:
   Squid Cache: Version 2.7.STABLE5(squid-2.7.STABLE5-2.3)
   kernel version: 2.6.27 x86_64
   CPU: Xeon 2.6 GHz CPU
   Memory: 2 GB
   /var/cache/squid is ext3, mounted with 'noacl' and 'noatime' options
   number of users using this proxy: 160
   number of users using simultaneously/concurrently using this proxy: 72
  
   I found that squid is consuming too much cpu, average cpu idle time is
   49 only.
  
   I have attached the output 'top -b -n 7', and 'vmstat 1'
  
   below is the output of squid.conf
  
   squid.conf:
   -
  
   http_port 8080
   cache_mgr administra...@test.com
   cache_mem 1024 MB
   cache_dir aufs /var/cache/squid 2 32 256
   visible_hostname gateway.test.com
   refresh_pattern ^ftp: 1440 20% 10080
   refresh_pattern ^gopher: 1440 0% 1440
   refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
   refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90%
   432000
   refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$
   10080 90% 43200
   refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
   refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
   refresh_pattern . 0 40% 40320
   cache_swap_low 78
   cache_swap_high 90
  
   maximum_object_size_in_memory 100 KB
   maximum_object_size 12288  KB
  
   fqdncache_size 2048
   ipcache_size 2048
  
   acl myFTP port   20  21
   acl ftp_ipes src /etc/squid/ftp_ipes.txt
   http_access allow ftp_ipes myFTP
   http_access deny myFTP
  
   acl porn_deny url_regex /etc/squid/domains.deny
   http_access deny porn_deny
  
   acl vip src /etc/squid/vip_ipes.txt
   http_access allow vip
  
   acl entweb url_regex /etc/squid/entwebsites.txt
   http_access deny entweb
  
   acl mynet src /etc/squid/allowed_ipes.txt
   http_access allow mynet
  
  
   please help, why squid is utilizing so much of cpu
  
  
   --
   Regards
   Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823
  
  
 




Re: [squid-users] Warning your cache is running out of file descriptors

2010-03-17 Thread Ivan .
I used this

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
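On Debian/Ubuntu packages the same idea is usually applied through the defaults file
rather than by editing the init script (a sketch; create the file if it does not exist,
and check that the packaged init script actually reads SQUID_MAXFD before relying on it):

# /etc/default/squid3 (or /etc/default/squid, depending on the package)
SQUID_MAXFD=4096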

On Thu, Mar 18, 2010 at 6:21 AM, william k...@cobradevil.org wrote:
 See this thread:
 http://www.mail-archive.com/squid-users@squid-cache.org/msg70230.html


 please search the archives


 with kind regards

 William van de Velde


 On 03/17/2010 07:20 PM, Mariel Sebedio wrote:

 Hello, the file descriptor limit must be increased in
 /etc/security/limits.conf and the cache rebuilt. For RHEL you must change

 *  -  nofile  1024
 to
 *  -  nofile  2048
 Sorry, my English!!

 Bye, Mariel

 Gmail wrote:

 Hello All,

 This is the first time I am using this mailing list, and I do apologise
 if I sent a copy of this email to another address by mistake.

 I am desperately seeking some help, I have googled in a hope to find an
 answer, but all I could find was about the previous versions, which don't
 apply to the version I am using and to my OS:


 I am running Squid3.0 Stable
 OS Ubuntu Hardy

 I am currently getting this warning:

 Warning: your cache is running out of file descriptors, but I couldn't find
 where to increase the limit from 1024 to a higher number.

 On the previous versions and other OS systems, it's apparently located
 here /etc/default/squid but on my system it doesn't exist.

 Can anyone please point me to where I can change that?

 I have checked Ubuntu forums, I have checked several other forums, but
 the only links I seem to get on google are related to the previous versions
 of squid or other operating systems.

 Can you help please, since I started using Squid I had problem after
 problem, lot of other applications are not working, I still can't access my
 backend HTTP servers, but that's another problem for another day.

 Any help would be very much appreciated
 Thank you all







[squid-users] Squid v2.6 error accessing site

2010-03-15 Thread Ivan .
Hi,

I am having some trouble accessing the site
http://www.efirstaid.com.au/. I can confirm the TCP SYN packet leaves our
edge router, but I don't see anything come back.

If I try to go direct, without squid, it works fine.

1268696419.311 113830 10.xxx.xxx.xxx TCP_MISS/503 1444 GET
http://www.efirstaid.com.au/ - DIRECT/70.86.101.210 text/html [Host:
www.efirstaid.com.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\n] [HTTP/1.0 503 Service Unavailable\r\nServer:
squid\r\nDate: Mon, 15 Mar 2010 23:40:19 GMT\r\nContent-Type:
text/html\r\nContent-Length: 1066\r\nExpires: Mon, 15 Mar 2010
23:40:19 GMT\r\nX-Squid-Error: ERR_CONNECT_FAIL 111\r\n\r]

1268696516.331 114023 10.xxx.xxx.xxx  TCP_MISS/503 1444 GET
http://www.efirstaid.com.au/ - DIRECT/70.86.101.210 text/html [Host:
www.efirstaid.com.au\r\nUser-Agent: Mozilla/5.0 (Windows; U; Windows
NT 5.2; en-GB; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12\r\nAccept:
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
en-gb,en;q=0.5\r\nAccept-Encoding: gzip,deflate\r\nAccept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\nKeep-Alive: 300\r\nProxy-Connection:
keep-alive\r\n] [HTTP/1.0 503 Service Unavailable\r\nServer:
squid\r\nDate: Mon, 15 Mar 2010 23:41:56 GMT\r\nContent-Type:
text/html\r\nContent-Length: 1066\r\nExpires: Mon, 15 Mar 2010
23:41:56 GMT\r\nX-Squid-Error: ERR_CONNECT_FAIL 111\r\n\r]

Trying a wget on the squid box gets the same timeout error:

[r...@proxy squid]# wget http://www.efirstaid.com.au/
--2010-03-16 11:16:45--  http://www.efirstaid.com.au/
Resolving www.efirstaid.com.au... 70.86.101.210
Connecting to www.efirstaid.com.au|70.86.101.210|:80... failed:
Connection refused.


Thanks
Ivan


[squid-users] Https traffic

2009-10-05 Thread Ivan . Galli
Hi,
my company is going to buy the Websense web security suite.
It seems to be able to decrypt and check contents in the SSL tunnel.
Is it really important to do this to prevent malicious code or dangerous
threats?

Thanks and regards.

Ivan

On Wed, 30 Sep 2009 14:58:08 +0200, Ivan.Galli_at_aciglobal.it wrote: 
 Hi, i have a question about https traffic content. 
 There is some way to check what pass through ssl tunnel? 
 Can squidguard or any other programs help me? 
The 'S' in HTTPS means Secure or SSL encrypted.
Why do you want to do this?
It depends on the type of service environment you are working with...
* ISP-like, where 'random' people use the proxy?
- don't bother. This is a one-way road to serious trouble.
* reverse-proxy, where you own or manage the HTTPS website itself?
- use https_port and decrypt as things enter Squid. Re-encrypt if needed
to the peer.
* Enterprise setup, where you have full control of the workstation
configuration?
- use Squid-3.1 and SslBump. Push out settings to all workstations to
trust the local proxy keys (required).
Amos
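For the enterprise/SslBump case, the corresponding configuration is roughly (a sketch
for Squid 3.1; the option spelling changed in later 3.x releases and the certificate
path is a placeholder, so check the release notes for the exact form):

http_port 3128 sslBump cert=/etc/squid/proxyCA.pem
ssl_bump allow all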

Ivan 


[squid-users] Https contents

2009-09-30 Thread Ivan . Galli
Hi, I have a question about HTTPS traffic content.
Is there some way to check what passes through the SSL tunnel?
Can squidGuard or any other program help me?

Thanks and regards.

Ivan 


[squid-users] ntlm

2006-12-13 Thread ivan re

I use squid 2.5 stable14 and samba 3.0.2.

squid conf:
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 15 minutes
auth_param ntlm use_ntlm_negotiate on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

When I try to access the internet I get a cache access denied error:

cache.log: [2006/12/12 16:42:07, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(672)
 Got user=[ivan] domain=[EGOBIANCHI] workstation=[ARTISTICO1] len1=24 len2=0
[2006/12/12 16:42:07, 3] utils/ntlm_auth.c:winbind_pw_check(429)
 Login for user [EMAIL PROTECTED] failed due to [Logon server]


What does it mean???


TIA
Ivan


[squid-users] sqid auth problem

2006-12-04 Thread ivan re

I have configured samba 3 and squid 2.5 stable 14 on FC5.

With my Windows client I log into the domain with user, password and domain.

I can't access the internet. Why?
Do I need winbind or not?

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 15 minutes
auth_param ntlm use_ntlm_negotiate on
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off


acl password proxy_auth REQUIRED
http_access allow password
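Yes, ntlm_auth needs a running, joined winbindd behind it; a quick way to check that
side before blaming Squid (a sketch; the user name and password are placeholders, and
the privileged-pipe path varies by build):

wbinfo -t                                  # verify the machine trust account with the DC
wbinfo -u                                  # list domain users through winbindd
ntlm_auth --username=someuser --password=secret   # should report NT_STATUS_OK
# the squid user also needs read access to winbindd's winbindd_privileged directory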


[squid-users] Making ACL for an IP range

2005-05-10 Thread Ivan Petrushev
Hello :-) This is my first mailing list posting, but I hope I'll get the
basics soon. Please excuse my poor English.
The problem I'm trying to solve is how to make an ACL that responds to a
range of IPs (not the whole subnet). If I wanted to make the ACL
respond to the whole subnet I would use CIDR or dotted notation
like:
acl mynetwork src 192.168.1.1/255.255.255.0
or
acl mynetwork src 192.168.1.1/24
I want the acl 'mynetwork' to respond only to the IPs 192.168.1.30 -
192.168.1.47 (for example). That is not a subnetwork and can't be
done via the examples above. So can I use a (from IP)-(to IP) range in
squid.conf, and what is the exact syntax? I haven't seen anything
like that in the online documentation, but that doesn't mean it
doesn't exist :-)

Greetings, Ivan Petrushev.

-
http://host.GBG.bg -  


Re: Re: [squid-users] Making ACL for an IP range

2005-05-10 Thread Ivan Petrushev
Thanks for the comment :)
 Dear Ivan 
For and IP to IP you can define as follow
 
 acl pc1 src 192.168.1.30/255.255.255.255
 http_access allow pc1
 acl pc2 src 192.168.1.31/255.255.255.255
 http_access allow pc2
 
But that would allow access only for two IPs. If I have to describe every IP in
that way, imagine what my squid.conf would look like for about 40
IPs :) There has to be a shorter way.
Thanks again :)

-
http://host.GBG.bg -  


Re: Re: [squid-users] Making ACL for an IP range

2005-05-10 Thread Ivan Petrushev

Thanks for the comment :)
 http://squid.visolve.com/squid/squid24s1/access_controls.htm
 
 acl aclname src 172.16.1.25-172.16.1.35/32
 
 Ryan Lamberton
 FamiLink Company
 Family Safe Internet Access
 That's exactly what I need :) In that example, what is the purpose of the
subnet mask? Does it have to match the subnet mask configured on the PCs over
the network? Or is it only for determining the IP range parameters?
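For reference, a sketch of that range syntax applied to the earlier example: the /32
only controls how the two endpoint addresses are interpreted (each address matched
individually), so it does not have to match the clients' own configured netmask.

acl mynetwork src 192.168.1.30-192.168.1.47/32
http_access allow mynetwork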

-
http://host.GBG.bg -  


[squid-users] SquidNT crash on startup.

2004-09-28 Thread Ivan Doitchinov
Hello all,
I just installed 
http://albaweb.albacom.net/acmeconsulting.it/download/squid-2.5.STABLE6-NT-bin.zip

When I did
squid -i
to register squid as a service, it says it registered successfully but 
then crashes:

Faulting application squid.exe, version 2.5.4.0, faulting module 
advapi32.dll, version 5.1.2600.2180, fault address 0x0002869a.

squid -r or trying to start the service crashes as well.
I'm running WinXP professional SP2.
Any known issue?
Ivan Doitchinov
esmertec ag


Re: [squid-users] SquidNT crash on startup.

2004-09-28 Thread Ivan Doitchinov
Przemek Czerkas wrote:
Ivan Doitchinov wrote:
 

Hello all,
I just installed 
http://albaweb.albacom.net/acmeconsulting.it/download/squid-2.5.STABLE6-NT
-bin.zip

When I did
squid -i
to register squid as a service, it says it registered sucessfully but 
then crashes:

Faulting application squid.exe, version 2.5.4.0, faulting module 
advapi32.dll, version 5.1.2600.2180, fault address 0x0002869a.

squid -r or tring to start the service crashes as well.
I'm running WinXP professional SP2.
Any known issue?
Ivan Doitchinov
esmertec ag
   

Looks like http://www.squid-cache.org/bugs/show_bug.cgi?id=1064
Przemek Czerkas
 

That was it. Thanks.
Ivan Doitchinov
esmertec ag


[squid-users] Squid + SSL CA.

2004-05-05 Thread Ivan Doitchinov
Hello all,

I am using squid V2.5.STABLE1 on Red Hat Linux and I am trying to set up
an SSL proxy (CONNECT method). It all works fine except that I can't
figure out how to add my own CA certificate in order to prevent a TLS
Unknown CA fatal error. I googled a bit and found out that this should
be configurable through the sslproxy_* directives in the squid config file,
but I could not find a list/description of these directives... I only
found a mention of sslproxy_cafile, which does not seem to be
recognized by my squid.

My squid was compiled with --enable-ssl and --with-openssl=/usr/kerberos.
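A sketch of what that would look like once on a Squid new enough to know the directive
(sslproxy_cafile appeared after 2.5.STABLE1, which is why this squid rejects it); note
it only applies where Squid itself opens the TLS connection, not to plain CONNECT
tunnels, which Squid merely relays:

sslproxy_cafile /etc/squid/my-ca-bundle.pem   # path is a placeholder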

Thanks,

Ivan Doitchinov
esmertec ag


[squid-users] keep ip source address in logs

2003-07-18 Thread Ivan Rodriguez
Hello list, I have a little problem.
I use squid version 2.5.STABLE1,
and all my users use the proxy server,
but I have a web page for my intranet,
so I use a proxy.pac file where
we configured the local IP addresses to be
routed directly.
However, not all users use proxy.pac, and the Apache
logs for my intranet.domain web page show the
IP address of the proxy server.
What can I do to get the IP address of the machine that
generated the request, instead of my proxy's IP
address?
In other terms, how can I send the requests
directly to my intranet web page?

for example
ip address for proxy server 
192.168.64.20 

Logs for the Apache intranet.domain 

192.168.64.20 - - [30/Sep/2002:13:28:30 -0500] GET
(this should be the real IP of the client that made the request,
for example 192.168.64.16)


Iptables is not an option since I have only one ethernet
interface.
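If moving every client onto proxy.pac is not practical, another option is to let Apache
log the client address that Squid forwards (a sketch, assuming Squid's forwarded_for is
left at its default of on and Apache's mod_log_config is available; the format name and
log path are placeholders):

# squid.conf (default behaviour, shown for clarity)
forwarded_for on

# Apache config for intranet.domain: log X-Forwarded-For instead of the peer IP
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" xff_common
CustomLog logs/intranet_access_log xff_common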

Thanks a lot, and excuse me, my English is not good.



[squid-users] transparent proxy using wb_ntlm auth

2003-02-03 Thread Ivan de Gusmão Apolonio
Hi all

I'm trying to use a transparent proxy, but if I'm using some authentication
scheme it always shows me an authentication popup, even if I'm a member
of the allowed group, and when I enter my username/password it's rejected. If I
disable the auth scheme, it works normally. My question is: is it possible
to use a transparent proxy with wb_ntlmauth authentication?? Part of my
squid.conf follows:

auth_param ntlm program /usr/local/squid/libexec/wb_ntlmauth
auth_param ntlm children 10
auth_param ntlm max_challenge_reuses 10
auth_param ntlm max_challenge_lifetime 8 minutes
auth_param basic program /usr/local/squid/libexec/wb_auth
auth_param basic children 5
auth_param basic realm Squid proxy-cach

httpd_accel_host virtual
httpd_accel_port 0
httpd_accel_with_proxy  on
httpd_accel_uses_host_header on
httpd_accel_single_host off

acl domainusers proxy_auth /etc/squid/internet_users.local.txt
http_access allow liberados

Thanks
Ivan