Re: [squid-users] Compile error from port on FreeBSD 11.2-RELEASE r342572

2019-06-25 Thread oleg palukhin

> On 25/06/19 4:22 am, oleg palukhin wrote:
> > Hi list.
> > Trying to update to squid3-3.5.28_2 from squid3-3.5.28_1 (port on
> > FreeBSD 11.2-RELEASE):
> > "--- support.lo ---
> > support.cc:2203:9: error: no matching function for call to
> > 'SSL_CTX_sess_set_get_cb' SSL_CTX_sess_set_get_cb(ctx,
> > get_session_cb); ^~~
> > /usr/local/include/openssl/ssl.h:737:6: note: candidate function not
> > viable: no known conversion from 'SSL_SESSION *(SSL *, unsigned
> > char *, int, int *)' (aka 'ssl_session_st *(ssl_st *, unsigned char
> > *, int, int *)') to 'SSL_SESSION *(*)(struct ssl_st *, const
> > unsigned char *, int, int *)' (aka 'ssl_session_st *(*)(ssl_st *,
> > const unsigned char *, int, int *)') for 2nd argument void
> > SSL_CTX_sess_set_get_cb(SSL_CTX *ctx, ^ 1 error generated."
> > 
> > My DEFAULT_VERSIONS+=ssl=libressl; maybe that is the breaking point? All
> > previous update builds were clean until now.
> > Any kick in the right direction, please.
> >   
> 
> 
> libressl claims to be "OpenSSL version 2.0".
> 
> Please try the current stable / production release of Squid, which
> today is v4.
> 
> PS. If you are building your own Squid and using TLS/SSL
> functionality, please follow the latest release version. TLS has been a
> very volatile environment these past few years, and almost every Squid
> release has improvements for things like this.
> 
> Amos

So, time to move to v4.

Thank you, Amos.

-- 

Regards,
Oleg Palukhin
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Compile error from port on FreeBSD 11.2-RELEASE r342572

2019-06-24 Thread oleg palukhin
Hi list.
Trying to update to squid3-3.5.28_2 from squid3-3.5.28_1 (port on FreeBSD
11.2-RELEASE):
"--- support.lo ---
support.cc:2203:9: error: no matching function for call to
'SSL_CTX_sess_set_get_cb' SSL_CTX_sess_set_get_cb(ctx, get_session_cb);
^~~
/usr/local/include/openssl/ssl.h:737:6: note: candidate function not
viable: no known conversion from 'SSL_SESSION *(SSL *, unsigned char *,
int, int *)' (aka 'ssl_session_st *(ssl_st *, unsigned char *, int, int
*)') to 'SSL_SESSION *(*)(struct ssl_st *, const unsigned char *, int,
int *)' (aka 'ssl_session_st *(*)(ssl_st *, const unsigned char *, int,
int *)') for 2nd argument void SSL_CTX_sess_set_get_cb(SSL_CTX *ctx, ^
1 error generated."

My DEFAULT_VERSIONS+=ssl=libressl; maybe that is the breaking point? All
previous update builds were clean until now.
Any kick in the right direction, please.

-- 

Regards,
Oleg Palukhin


Re: [squid-users] Squid-3.5.21: filter FTP content or FTP commands

2016-10-04 Thread oleg gv
Thank you very much. It's my fault - I wrote the wrong ACL.

That did it! Yahooo! LIST and C.?D are blocked OK.

2016-10-04 17:55 GMT+03:00 Alex Rousskov <rouss...@measurement-factory.com>:

> On 10/04/2016 06:24 AM, oleg gv wrote:
>
> > Then I try to block FTP-Command and nothing happens. Some lines from my config:
> >
> > acl rh req_header -i ^FTP-Command
>
> Wrong syntax. Please read req_header documentation carefully and try
> something like:
>
>   acl rh req_header FTP-Command -i LIST
>
> I also recommend renaming the "rh" ACL to something more meaningful like
> "ForbiddenCommand".
>
> Finally, since a regular HTTP request might have an FTP-Command header
> field, you should probably limit your rh-based http_access deny rule to
> transactions accepted at ftp_port(s).
>
>
> > http_access permit all
>
> There is no "permit" action AFAIK. Please use documented "allow" and
> "deny" actions only and copy-paste exact configuration lines when asking
> questions.
>
>
> > request_header_access  "FTP-Command: LIST" deny all
>
> Wrong syntax and wrong option. You want to deny a transaction, not to
> remove a header from that transaction.
>
>
> HTH,
>
> Alex.
>
>
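Putting Alex's corrections together, a minimal configuration along these lines should deny FTP LIST. This is an untested sketch: the port number and ACL names are illustrative, and `localport` is used here to limit the rule to traffic accepted on the ftp_port, as Alex recommends.

```
ftp_port 192.168.0.1:2121
acl FtpPort localport 2121
acl ForbiddenCommand req_header FTP-Command -i LIST
http_access deny FtpPort ForbiddenCommand
http_access allow localnet
http_access deny all
```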


Re: [squid-users] Squid-3.5.21: filter FTP content or FTP commands

2016-10-04 Thread oleg gv
Finally I've managed to reach ftp.intel.com using FileZilla through my
squid gateway in standard (proxy) mode.

Squid conf:
ftp_port  x.x.x.x  2122

Then I tried to block FTP-Command and nothing happens. Some lines from my config:

acl rh req_header -i ^FTP-Command
http_access deny rh
http_access permit all

And also add following:

request_header_access  "FTP-Command: LIST" deny all


Connecting to and browsing the remote ftp.intel.com is OK - nothing is blocked.

In squid log i see (fragment):


2016/10/04 15:23:04.177 kid1| 9,2| FtpServer.cc(495) writeReply: FTP Client
REPLY:
-
227 Entering Passive Mode (192,168,33,254,230,30).

--
2016/10/04 15:23:04.177 kid1| 20,2| store.cc(949) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2016/10/04 15:23:04.177 kid1| 20,2| store.cc(949) checkCachable:
StoreEntry::checkCachable: NO: not cachable
2016/10/04 15:23:04.178 kid1| 33,2| FtpServer.cc(699) parseOneRequest:
>>ftp LIST
2016/10/04 15:23:04.178 kid1| 9,2| FtpServer.cc(1320) handleRequest: FTP
Client local=192.168.33.254:2122 remote=192.168.33.10:60838 FD 9 flags=1
2016/10/04 15:23:04.178 kid1| 9,2| FtpServer.cc(1322) handleRequest: FTP
Client REQUEST:
-
GET / HTTP/1.1
FTP-Command: LIST
FTP-Arguments:

--
2016/10/04 15:23:04.178 kid1| 85,2| client_side_request.cc(744)
clientAccessCheckDone: The request GET ftp://ftp.intel.com/ is ALLOWED;
last ACL checked: net33
2016/10/04 15:23:04.178 kid1| 85,2| client_side_request.cc(720)
clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2016/10/04 15:23:04.178 kid1| 85,2| client_side_request.cc(744)
clientAccessCheckDone: The request GET ftp://ftp.intel.com/ is ALLOWED;
last ACL checked: net33
2016/10/04 15:23:04.178 kid1| 17,2| FwdState.cc(133) FwdState: Forwarding
client request local=192.168.33.254:2122 remote=192.168.33.10:60838 FD 9
flags=1, url=ftp://ftp.intel.com/
2016/10/04 15:23:04.178 kid1| 44,2| peer_select.cc(258) peerSelectDnsPaths:
Find IP destination for: ftp://ftp.intel.com/' via ftp.intel.com
2016/10/04 15:23:04.178 kid1| 44,2| peer_select.cc(258) peerSelectDnsPaths:
Find IP destination for: ftp://ftp.intel.com/' via ftp.intel.com
2016/10/04 15:23:04.178 kid1| 44,2| peer_select.cc(280) peerSelectDnsPaths:
Found sources for 'ftp://ftp.intel.com/'



But I need to block FTP-Command: LIST (for example)


2016-10-03 20:34 GMT+03:00 Alex Rousskov <rouss...@measurement-factory.com>:

> Please ask these questions on squid-users...
>
> On 10/03/2016 05:51 AM, oleg gv wrote:
> > Thanks, but problems still exist - FTP doesn't work through proxy.
> >
> > 1. I've set in proxy
> > ftp_port 192.168.0.1:2121 <http://192.168.0.1:2121>
> > 2. set in client browser to use proxy for FTP on 192.168.0.1:2121
> > <http://192.168.0.1:2121>
> >
> > Trying to go ftp://ftp.intel.com  and In log of squid i see:
> >
> > FTP Client REPLY:
> > -
> > 530 Must login first
> >
> > 
> >
> > Another variant: set up an interception ftp_proxy (with NAT redirect), and
> > it also doesn't work; last commands in the log:
> > 2016/10/03 14:43:09.929 kid1| 9,2| FtpRelay.cc(733)
> > dataChannelConnected: connected FTP server data channel:
> > local=8x.xxx.xxx.xxx:41231 remote=192.198.164.82:36034
> > <http://192.198.164.82:36034> FD 19 flags=1
> > 2016/10/03 14:43:09.929 kid1| 9,2| FtpClient.cc(791) writeCommand: ftp<<
> > LIST
> >
> > 2016/10/03 14:43:10.125 kid1| 9,2| FtpClient.cc(1108) parseControlReply:
> > ftp>> 125 Data connection already open; Transfer starting.
> >
> > And ftp.intel.com hangs while trying to open...
> >
> >
> >
> >
> >
> > 2016-10-01 2:12 GMT+03:00 Alex Rousskov
> > <rouss...@measurement-factory.com
> > <mailto:rouss...@measurement-factory.com>>:
> >
> > On 09/30/2016 10:42 AM, oleg gv wrote:
> >
> > > Hello, I've found that NativeFtpRelay appeared in squid 3.5 . Is it
> > > possible to apply http-access acl for FTP proto concerning
> filtering of
> > > FTP methods(commands)
> >
> > Yes, it should be possible.
> >
> >
> > > by analogy of HTTP methods ?
> >
> > Not quite. IIRC, when the HTTP message representing the FTP
> transaction
> > is relayed through Squid, the FTP command name is _not_ stored as an
> > HTTP method. The FTP command name is stored as HTTP "FTP-Command"
> header
> > value. See http://wiki.squid-cache.org/Features/FtpRelay
> > <http://wiki.squid-cache.org/Features/FtpRelay>
> >
> > You should be able to block FTP commands using a req_header ACL.
> >
> >
> > > what other possibilities in squid exist to do this ?
> >
> > An ICAP or eCAP service can also filter relayed FTP messages.
> >
> > Alex.
> >
> >
>
>


[squid-users] Squid-3.5.21: filter FTP content or FTP commands

2016-09-30 Thread oleg gv
Hello, I've found that NativeFtpRelay appeared in squid 3.5. Is it
possible to apply http_access ACLs to the FTP protocol for filtering FTP
methods (commands), by analogy with HTTP methods?

For example, I need to deny FTP CD command:

acl m method CD
acl p proto FTP
http-access deny m p
http-access permit all

If it is not possible, what other options exist in squid to do this?

Maybe in the future?

Thanks!


[squid-users] Squid 3.3.12, Multiple process, requests serviced by process.

2014-11-10 Thread Oleg Chomenko
Hello,

We use a Squid cache for our robots to collect information from
clients' web sites.

Squid is running on FreeBSD 9.3, squid version 3.3.13.

The configuration is like this:

if ${process_number} = 1
http_port 3001
cache_peer 1.1.1.1 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.2 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.3 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.4 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.5 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
endif

if ${process_number} = 2
http_port 3001
cache_peer 1.1.1.1 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.2 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.3 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.4 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.1.5 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
endif


if ${process_number} = 3
http_port 3002
cache_peer 1.1.2.1 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.2 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.3 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.4 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.5 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
endif

if ${process_number} = 4
http_port 3002
cache_peer 1.1.2.1 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.2 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.3 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.4 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
cache_peer 1.1.2.5 parent 4567 0 no-query no-digest no-netdb-exchange
round-robin connect-fail-limit=3
endif

.
# COORDINATOR
if ${process_number} = 16
http_port 3099
endif

workers 15


In total, 15+1 processes are running; traffic load is over 100 Mbit/s,
around 50K req/min in total.

The problem:
when we restart Squid, all requests to port 3001 are served only by the
upstream proxies defined for that process. After a couple of hours, we see
requests served by upstream caches NOT belonging to the 3001 ports
(as in the example above, they can be served by 1.1.2.4).

The rate depends on the load: up to 15% of all requests can be served by
other upstream proxies NOT belonging to that port.
We use a Java application and our website to log all requests we
generate and pass through the cache server.

This behavior is serious trouble for us.

Thanks in advance for any tips to solve it. (We think an internal
request-distribution mechanism produces the fault.)
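If the intent is a strict mapping from entry port to peer set, one hedged workaround (an untested sketch; ports and peer addresses are illustrative) is to give every worker its own unique listening port, so that no two workers ever share one listening socket and the entry port identifies exactly one worker and its peer list:

```
workers 2

if ${process_number} = 1
http_port 3001
cache_peer 1.1.1.1 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
endif

if ${process_number} = 2
http_port 3002
cache_peer 1.1.2.1 parent 4567 0 no-query no-digest no-netdb-exchange round-robin connect-fail-limit=3
endif
```

Clients (or a load balancer in front) would then need to spread connections across the per-worker ports themselves.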

-- 


[squid-users] problem with squid-users maillist

2014-08-21 Thread Oleg Motienko
Hello,

Due to the DMARC policy of several domains, some mail is blocked (see an
example below).

I suppose the mailing list software (ezmlm) needs some tuning: it should
forward email to the list with its own sender address (@squid-cache.org).

An example:

--

Return-Path: 
Received: (qmail 8574 invoked for bounce); 9 Aug 2014 15:48:22 -
Date: 9 Aug 2014 15:48:22 -
From: mailer-dae...@squid-cache.org
To: squid-users-return-1235...@squid-cache.org
Subject: failure notice

Hi. This is the qmail-send program at squid-cache.org.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

motie...@gmail.com:
74.125.142.27 failed after I sent the message.
Remote host said: 550-5.7.1 Unauthenticated email from yahoo.com is
not accepted due to domain's
550-5.7.1 DMARC policy. Please contact administrator of yahoo.com domain if
550-5.7.1 this was a legitimate mail. Please visit
550-5.7.1 http://support.google.com/mail/answer/2451690 to learn about DMARC
550 5.7.1 initiative. o17si27260806icl.100 - gsmtp

--

-- 
Regards,
Oleg


Re: [squid-users] not working tproxy in squid 3.2

2013-04-11 Thread Oleg
On Tue, Apr 02, 2013 at 12:52:58AM +1300, Amos Jeffries wrote:
 On 1/04/2013 7:40 p.m., Oleg wrote:
 In your case with kernel limits of 800MB per-process this config
 will guarantee it gets killed quickly. No memory leak required:
 
   cache_mem 900 MB
 
 From your config I see Squid is using its default 256 MB of
 cache_mem. So you should expect to see at least 300MB of Squid RAM
 usage normally.

  I'm ready for 300MB, but I'm not ready for 800MB.

 The difference between 6 and 7 is the kernel version. Some Kernels
 are known to have TPROXY bugs.
 Also, the Debian kernels had non-working TPROXY for many releases
 after the apparently identical upstream kernel version was working
 very well. This affects Debian 6 for certain I'm not sure about 7.

  This is not an issue for us - we use a custom 3.2.38 kernel.

 modprobe xt_MARK
 FATAL: Module xt_MARK not found.
 
 I would guess this is related to the problem.

 Theory: without MARK support in the kernel the TPROXY connections
 are looping through Squid until the active connection state consumes
 800MB and gets killed.
 Can you verify that at all?

  What kernel config option is responsible for this?

# grep MARK /boot/config-3.2.38-my
CONFIG_NETWORK_SECMARK=y
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_CLS_U32_MARK=y

For now, we have stayed on squid 3.1.20 from Debian 7. And, as before, we see
TCP packets, but the client browser doesn't open any site (maybe the packets are broken?)

tcpdump of one HTTP request (10.232.194.5 is the client):

16:12:57.971120 IP 10.232.194.5.3733 > 87.251.132.181.80: Flags [S], seq 1252681145, win 65535, options [mss 1348,nop,nop,sackOK], length 0
16:12:57.971165 IP 87.251.132.181.80 > 10.232.194.5.3733: Flags [S.], seq 2610504694, ack 1252681146, win 14600, options [mss 1460,nop,nop,sackOK], length 0
16:12:57.971569 IP 10.232.194.5.3734 > 87.251.132.181.80: Flags [S], seq 3247035523, win 65535, options [mss 1348,nop,nop,sackOK], length 0
16:12:57.971608 IP 87.251.132.181.80 > 10.232.194.5.3734: Flags [S.], seq 901187601, ack 3247035524, win 14600, options [mss 1460,nop,nop,sackOK], length 0
16:12:57.973064 IP 10.232.194.5.3733 > 87.251.132.181.80: Flags [.], ack 1, win 65535, length 0
16:12:57.973195 IP 87.251.132.181.80 > 10.232.194.5.3733: Flags [F.], seq 1, ack 1, win 14600, length 0
16:12:57.973379 IP 10.232.194.5.3734 > 87.251.132.181.80: Flags [.], ack 1, win 65535, length 0
16:12:57.973458 IP 87.251.132.181.80 > 10.232.194.5.3734: Flags [F.], seq 1, ack 1, win 14600, length 0
16:12:57.975361 IP 10.232.194.5.3733 > 87.251.132.181.80: Flags [P.], seq 1:301, ack 1, win 65535, length 300
16:12:57.975388 IP 87.251.132.181.80 > 10.232.194.5.3733: Flags [R], seq 2610504695, win 0, length 0
16:12:57.975396 IP 10.232.194.5.3733 > 87.251.132.181.80: Flags [.], ack 2, win 65535, length 0
16:12:57.975409 IP 87.251.132.181.80 > 10.232.194.5.3733: Flags [R], seq 2610504696, win 0, length 0
16:12:57.975612 IP 10.232.194.5.3734 > 87.251.132.181.80: Flags [.], ack 2, win 65535, length 0
16:12:57.977060 IP 10.232.194.5.3734 > 87.251.132.181.80: Flags [F.], seq 1, ack 2, win 65535, length 0
16:12:57.977085 IP 87.251.132.181.80 > 10.232.194.5.3734: Flags [.], ack 2, win 14600, length 0
16:12:58.004864 IP 10.232.194.5.3735 > 87.251.132.181.80: Flags [S], seq 641201190, win 65535, options [mss 1348,nop,nop,sackOK], length 0
16:12:58.004897 IP 87.251.132.181.80 > 10.232.194.5.3735: Flags [S.], seq 2722793776, ack 641201191, win 14600, options [mss 1460,nop,nop,sackOK], length 0
16:12:58.014947 IP 10.232.194.5.3735 > 87.251.132.181.80: Flags [.], ack 1, win 65535, length 0
16:12:58.015059 IP 87.251.132.181.80 > 10.232.194.5.3735: Flags [F.], seq 1, ack 1, win 14600, length 0
16:12:58.016445 IP 10.232.194.5.3735 > 87.251.132.181.80: Flags [.], ack 2, win 65535, length 0
16:12:58.196105 IP 10.232.194.5.3735 > 87.251.132.181.80: Flags [P.], seq 1:301, ack 2, win 65535, length 300
16:12:58.196133 IP 87.251.132.181.80 > 10.232.194.5.3735: Flags [R], seq 2722793778, win 0, length 0



Re: [squid-users] not working tproxy in squid 3.2

2013-04-01 Thread Oleg
On Wed, Mar 20, 2013 at 11:35:21AM +0200, Eliezer Croitoru wrote:
 On 3/19/2013 9:24 PM, Oleg wrote:
 On Tue, Mar 19, 2013 at 08:49:25PM +0200, Eliezer Croitoru wrote:
 Hey Oleg,
 
 I want to understand couple things about the situation.
 what is the problem? a memory leak?
 
1 problem - memory leak;
2 problem - tproxy doesn't work in squid 3.2.
 
 I can think of a way you can configure squid to cause them both.

  I think this is a bug in the software, if a bad config can cause a memory
leak and a crash.

 How do you see the memory leak? and where?
 
I just start squid, start top and wait about an hour while squid grows from
 40MB to 800MB and the kernel kills it.
 
 The memory leak you are talking about is in a case of tproxy usage only?
 
It's hard to say. I ran squid 3.2 with tproxy not working (as I wrote),
 but with a normal proxy on TCP port 3128, and it ate my memory too. So tproxy
 is configured, but not used.
 
 what is the load of the proxy cache?
 do you use it for filtering or just plain cache?
 
Only for filtering.
 
 on what environment?
 
What do you mean by environment?
 
 ISP? OFFICE? HOME? ELSE...

  ISP

 The more details you can give on the scenario, and the more precisely
 you can point to the problem, the easier it will be to find the
 culprit.
 
 What linux distro are you using?
 
Debian 6, and I also tried Debian 7.
 My opinion is that you don't need to test on 7 or do special tests,
 but it helped us to understand the nature of the problem.
 
 Try not to use the filtering helper: use only the defaults and tproxy.
 Also try to use this script with tproxy on port 3129 and
 http_port 127.0.0.1:3128
 
 ##start of script
 #!/bin/sh  -x
 echo loading modules requierd for the tproxy
 modprobe ip_tables
 modprobe xt_tcpudp
 modprobe nf_tproxy_core
 modprobe xt_mark
 modprobe xt_MARK

FATAL: Module xt_MARK not found.

 modprobe xt_TPROXY
 modprobe xt_socket
 modprobe nf_conntrack_ipv4
 sysctl net.netfilter.nf_conntrack_acct
 sysctl net.netfilter.nf_conntrack_acct=1
 ip route flush table 100
 ip rule del fwmark 1 lookup 100
 ip rule add fwmark 1 lookup 100
 ip -f inet route add local default dev lo table 100
 
 echo flushing any exiting rules
 iptables -t mangle -F
 iptables -t mangle -X DIVERT
 
 echo creating rules
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A PREROUTING -s ___LAN -p tcp -m tcp --dport
 80 -j TPROXY --on-port 3129 --tproxy-mark 0x1/0x1
 ##end of script
 
 
 -- 
 Eliezer Croitoru
 



[squid-users] memory leakage in squid 3.1, 3.2, 3.3.

2013-03-19 Thread Oleg
  Hi, all.

I have a strange problem with memory leakage in squid 3.1, 3.2 and 3.3.
Squid grows fast and is killed by the OOM killer every day, using about
600MB of memory before being killed. Interestingly, squid 3.2 doesn't fail
fully: only its child process fails, and the parent process restarts it.

Please tell me how I can debug this problem.

  My config (for 3.2.8):

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access allow all
http_port 3128
http_port 3129 tproxy
access_log none
coredump_dir /usr/local/var/cache/squid
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
url_rewrite_children 30 startup=5 idle=10 concurrency=0
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
cache_effective_user proxy

  Thanks.

P.S. Please, CC me.


[squid-users] not working tproxy in squid 3.2

2013-03-19 Thread Oleg
  Hi, all.

After squid 3.1 ate all of my memory, I installed squid 3.2 (which also ate
all of my memory, but that is another story). It seems tproxy does not work
right in squid 3.2: squid replies to my request, but the packet count is too
small for a normal workflow. If I connect directly to squid (normal mode,
port 3128), everything works fine.

How can I debug this problem?

My config (3.2.8):

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access allow all
http_port 3128
http_port 3129 tproxy
access_log none
coredump_dir /usr/local/var/cache/squid
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf
url_rewrite_children 30 startup=5 idle=10 concurrency=0
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
cache_effective_user proxy

iptables-save:

# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*raw
:PREROUTING ACCEPT [7824875024:8401335411812]
:OUTPUT ACCEPT [3675157306:6129226492352]
COMMIT
# Completed on Wed Mar  6 15:41:59 2013
# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*mangle
:PREROUTING ACCEPT [6770135987:6702261415787]
:INPUT ACCEPT [4838725878:6108754481433]
:FORWARD ACCEPT [2985099037:2292524666165]
:OUTPUT ACCEPT [3675156676:6129226454540]
:POSTROUTING ACCEPT [6660255713:8421751120705]
:tproxied - [0:0]
-A PREROUTING -p tcp -m socket --transparent -j tproxied
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3129 --on-ip 0.0.0.0 
--tproxy-mark 0x1/0x
-A tproxied -j MARK --set-xmark 0x1/0x
-A tproxied -j ACCEPT
COMMIT
# Completed on Wed Mar  6 15:41:59 2013
# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*nat
:PREROUTING ACCEPT [166764142:12594892291]
:INPUT ACCEPT [88382392:5321491245]
:OUTPUT ACCEPT [54669707:3295422034]
:POSTROUTING ACCEPT [132896164:10559090386]
COMMIT
# Completed on Wed Mar  6 15:41:59 2013
# Generated by iptables-save v1.4.14 on Wed Mar  6 15:41:59 2013
*filter
:INPUT ACCEPT [14588788:12990241586]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [12967278:12836984550]
:block_ip - [0:0]
:fail2ban-ssh - [0:0]
-A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
-A INPUT -s 10.232.0.0/16 -p tcp -m tcp --dport 3128 -j ACCEPT
-A INPUT -s 10.232.0.0/16 -p tcp -m tcp --dport 3129 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 3129 -j DROP
-A FORWARD -o eth0 -j block_ip
-A fail2ban-ssh -j RETURN
COMMIT
# Completed on Wed Mar  6 15:41:59 2013

ip rule:
0:  from all lookup local 
3:  from all fwmark 0x1 lookup tproxy 
32766:  from all lookup main 
32767:  from all lookup default 

ip rou show table tproxy:
local default dev lo  scope host

This configuration works fine with squid 3.1.


Re: [squid-users] not working tproxy in squid 3.2

2013-03-19 Thread Oleg
On Tue, Mar 19, 2013 at 09:39:34AM -0700, Squidblacklist wrote:
 Is there any reason why you require tproxy? I use a transparent proxy
 installed on a linux router that does forwarding via iptables to squid
 internally, and I do not need or use tproxy.

  Because tproxy doesn't touch the client source IP address. For us this is
important behaviour. We can't change the client source IP.


Re: [squid-users] not working tproxy in squid 3.2

2013-03-19 Thread Oleg
On Tue, Mar 19, 2013 at 08:49:25PM +0200, Eliezer Croitoru wrote:
 Hey Oleg,
 
 I want to understand couple things about the situation.
 what is the problem? a memory leak?

  Problem 1 - a memory leak;
  Problem 2 - tproxy doesn't work in squid 3.2.

 How do you see the memory leak? and where?

  I just start squid, start top, and wait about an hour while squid grows from
40MB to 800MB, until the kernel kills it.

 The memory leak you are talking about is in a case of tproxy usage only?

  It's hard to say. I ran squid 3.2 with tproxy not working (as I wrote),
but with a normal proxy on TCP port 3128, and it ate my memory too. So tproxy
is configured but not used.

 what is the load of the proxy cache?
 do you use it for filtering or just plain cache?

  Only for filtering.

 on what environment?

  What do you mean by environment?

 The more details you can give on the scenario, and the more precisely you
 can point at the problem, the happier I will be to help find the
 culprit.
 
 What linux distro are you using?

  Debian 6, and I also tried Debian 7.


Re: [squid-users] Rearranging squid.conf file

2009-05-04 Thread Oleg

1 and 2 - yes; 3 - no (I haven't seen that in a while).

acl name type "/path/to/file"
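For instance (a sketch; the file paths and ACL names are assumptions, not from the thread):

```
# squid.conf - the quoted path makes squid read values from a file
acl user1 src "/etc/squid/user1_ips.txt"
acl authusers proxy_auth "/etc/squid/proxy_users.txt"
```

Each file holds one entry per line (IP addresses for `src`, usernames for `proxy_auth`).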

Jagdish Rao пишет:

Hi,

The squid.conf is now increased to a very long listing. Can I have the
following

1. Instead of the ACL

acl user1 src user1IP

Can I use an external file which will have the listing ?

2. Instead of the ACL 


acl user1 proxy_auth user1

Can I use an external file which will have the listing ?


3. Instead of the ACL

http_access allow useer1 Time1 user1IP

Can I use an external file which will have the listing ?

Thanks

Regards

Jagdish








[squid-users] Authenticator processes after reconfigure.

2009-04-22 Thread Oleg

Hello.

Version: Squid 3.0.STABLE13 on Gentoo 2.6.22-vs2.2.0.7

`squid -k reconfigure` does not close old authenticator processes if they 
were serving clients. So my 'NTLM Authenticator Statistics' looks like below.

Does anybody have the same symptom?

Oleg.

NTLM Authenticator Statistics:
program: /usr/bin/ntlm_auth
number running: 23 of 15
requests sent: 8896
replies received: 8896
queue length: 0
avg service time: 0 msec


#   FD  PID # Requests  # Deferred Requests Flags   Time
Offset  Request
1   12  23079   459 0   RS  0.002   0   (none)
2   13  23080   89  0   RS  0.000   0   (none)
3   14  23081   37  0   RS  0.000   0   (none)
4   15  23082   36  0   RS  0.002   0   (none)
5   16  23083   342 0   RS  0.000   0   (none)
6   17  23084   1057    0   RS  0.000   0   (none)
7   18  23085   97  0   RS  0.000   0   (none)
10  21  23089   71  0   RS  0.000   0   (none)
1   20  17695   653 0   0.003   0   (none)
2   22  17696   114 0   0.004   0   (none)
3   23  17697   22  0   0.008   0   (none)
4   24  17698   4   0   0.020   0   (none)
5   25  17699   0   0   0.000   0   (none)
6   26  17700   0   0   0.000   0   (none)
7   27  17701   0   0   0.000   0   (none)
8   28  17702   0   0   0.000   0   (none)
9   29  17703   0   0   0.000   0   (none)
10  30  17713   0   0   0.000   0   (none)
11  31  17714   0   0   0.000   0   (none)
12  32  17715   0   0   0.000   0   (none)
13  33  17716   0   0   0.000   0   (none)
14  34  17717   0   0   0.000   0   (none)
15  35  17718   0   0   0.000   0   (none)

Flags key:

   B = BUSY
   C = CLOSING
   R = RESERVED or DEFERRED
   S = SHUTDOWN
   P = PLACEHOLDER


Re: [squid-users] Authenticator processes after reconfigure.

2009-04-22 Thread Oleg

Done. http://www.squid-cache.org/bugs/show_bug.cgi?id=2648

Amos Jeffries пишет:

Oleg wrote:

Hello.

Version: Squid 3.0.STABLE13 on Gentoo 2.6.22-vs2.2.0.7

`squid -k reconfigure` does not close old authenticator processes if 
they were serving clients. So my 'NTLM Authenticator Statistics' looks like 
below.

Does anybody have the same symptom?


Maybe.  The 23 of 15 issue has been resolved recently.

But the repeated use of FD some with RS set is a bug anyways. Please 
open a bugzilla entry so we don't lose track of this. With details on 
where that output was found.


Thanks

Amos



Oleg.



Amos


[squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Oleg

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user uses MSIE 6.0, redirect him to a page with a browser 
update on the IT site.

I found only access rules for the browser string (User-Agent), but that's not what I mean.

Oleg.


Re: [squid-users] Can I rewrite URL on browser?

2009-04-21 Thread Oleg

Yes, that's what I need! Thank you.

Amos Jeffries пишет:

Oleg wrote:

Hi2All.

Can Squid redirect a user's request to another URL depending on the browser?
For example, if the user uses MSIE 6.0, redirect him to a page with a browser 
update on the IT site.
I found only access rules for the browser string (User-Agent), but that's not 
what I mean.



I'd use a custom deny_info redirect for this.

  acl msie6 browser .. whatever the pattern for IE6 is ...
  deny_info http://example.com/msie_update.html msie6
  http_access deny msie6


Amos
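Filled in, Amos's outline might look like this (a sketch; the User-Agent pattern and URL are assumptions, not from the thread - adjust the regex to the IE versions you want to catch):

```
# squid.conf sketch: match IE 6 by its User-Agent, show a custom page on denial
acl msie6 browser -i MSIE.6
deny_info http://example.com/msie_update.html msie6
http_access deny msie6
```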


Re: [squid-users] Headers control in Squid 3.0

2009-04-20 Thread Oleg

Oops. Yes - it works with the Negotiate-NTLM-Basic sequence...
Strange - Firefox for Windows on Squid 2.7.6 didn't work properly with 
that sequence. That's why I began experimenting with header controls. Now 
everything works right.

Thanks for the reply.

Amos Jeffries пишет:
 Oleg wrote:
 Hi.

 Before Squid 3.0 I could change the Proxy-Authenticate header with this
pair:


 header_access Proxy-Authenticate deny browserFirefox osLinux
 header_replace Proxy-Authenticate Negotiate

 That's because the first authentication method is NTLM, for IE.

 After upgrading to Squid 3.0, the header_access directive was split into
 request_header_access and reply_header_access. So in my case I
 changed the header_access directive to reply_header_access. BUT! Now
 the header_replace directive works only with request_header_access and doesn't
 change the Proxy-Authenticate headers.

 How can I resolve this problem without downgrading to Squid 2.7.6? Or maybe
 bypass it another way?


  From Squid-3, the proxy-authenticate headers are only relevant between
 browser and Squid. So they are always removed from the client request before that
 request is passed to the web server.
  The *_header_access directives only apply to headers which are passed through from
 browser/client to server.


 Why is auth method an issue at all?

 Configuring the auth methods in right order should make the browsers use
 either method they prefer.

 I'd test the capability in 3.0 and see if it works okay now, before
 attempting to re-create an old hack.

 FWIW, IE on Vista prefers Negotiate to work most efficiently.


 Amos


[squid-users] Headers control in Squid 3.0

2009-04-19 Thread Oleg

Hi.

Before Squid 3.0 I could change the Proxy-Authenticate header with this pair:

header_access Proxy-Authenticate deny browserFirefox osLinux
header_replace Proxy-Authenticate Negotiate

That's because the first authentication method is NTLM, for IE.

After upgrading to Squid 3.0, the header_access directive was split into
request_header_access and reply_header_access. So in my case I
changed the header_access directive to reply_header_access. BUT! Now
the header_replace directive works only with request_header_access and doesn't
change the Proxy-Authenticate headers.

How can I resolve this problem without downgrading to Squid 2.7.6? Or maybe
bypass it another way?

Oleg.





Re: [squid-users] Re: RE : [squid-users] coss

2009-01-10 Thread Oleg Motienko
2.6 works fine (default Ubuntu 8.04 package)

$ squid -v
Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/
squid' '--datadir=/usr/share/squid' '--enable-async-io'
'--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null'
'--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm' '--enable-carp'
'--enable-follow-x-forwarded-for' '--with-large-files'
'--with-maxfd=65536' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux' 'CFLAGS=-Wall -g -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='

$ uname -a
Linux proxygw 2.6.24-21-server #1 SMP Wed Oct 22 00:18:13 UTC 2008
i686 GNU/Linux


On Sat, Jan 10, 2009 at 5:23 PM, Heinz Diehl h...@fancy-poultry.org wrote:

 At Sat, 10 Jan 2009 11:53:47 +0100,
 Emmanuel Pelerin wrote:

  I have download this version : 
  http://people.redhat.com/mnagy/squid/squid-2.7.STABLE5-1.el4/i386/

 What I want to know is: is anybody here running squid-2.7-STABLE4/5 with the
 coss storage scheme, and does it work well, it is stable and safe to use on
 a production machine?




--
Regards,
Oleg


Re: [squid-users] cached MS updates !

2008-12-21 Thread Oleg Motienko
On Tue, Jun 17, 2008 at 1:24 AM, Henrik Nordstrom
hen...@henriknordstrom.net wrote:
 On mån, 2008-06-16 at 08:16 -0700, pokeman wrote:
 thanks henrik for you reply
 any other way to save bandwidth windows updates almost use 30% of my entire
 bandwidth

 Microsoft has a update server you can run locally. But you need to have
 some control over the clients to make them use this instead of windows
 update...

 Or you could look into sponsoring some Squid developer to add caching of
 partial objects with the goal of allowing http access to windows update
 to be cached. (the versions using https can not be done much about...)

I implemented such caching by removing the Range header from requests
(a transparent redirect to an nginx webserver in proxy mode in front of squid).
It works fine for my ~1500 users. The cache size is 4G for now and growing.
Additionally, it's possible to make a static cache (I made it on the same
nginx, via proxy_store), so big files like service packs will be stored
in the filesystem. It's also possible to put already downloaded service
packs and fixes into the filesystem, which will save bandwidth.

Squid is running a transparent port on http://127.0.0.1:1 .
HTTP requests from the LAN to windowsupdate networks are redirected to 127.0.0.4:80.

Nginx caches .cab, .exe and .psf files and cuts off the Range header; other
requests are proxied on to the MS sites.
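The LAN-side diversion could be done with a rule along these lines (a sketch with several assumptions not in the post: it runs on the router that also hosts nginx, `eth1` is the LAN interface, and `192.0.2.0/24` stands in for the resolved windowsupdate networks, which a real setup would list explicitly):

```sh
# divert HTTP bound for the update servers to the local nginx listener
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
    -d 192.0.2.0/24 -j DNAT --to-destination 127.0.0.4:80
```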

Here is nginx config for caching site:

server {
listen       127.0.0.4:80;
server_name  au.download.windowsupdate.com
www.au.download.windowsupdate.com;

access_log /var/log/nginx/access-au.download.windowsupdate.com-cache.log  main;


# root url - don't cache here

location /  {
proxy_pass http://127.0.0.1:1;
proxy_set_header   Host $host;
}


# ? urls - don't cache here
location ~* \?  {
proxy_pass http://127.0.0.1:1;
proxy_set_header   Host $host;
}


# here is static caching

location ~* ^/msdownload.+\.(cab|exe|psf)$ {
root /.1/msupd/au.download.windowsupdate.com;
error_page   404 = @fetch;
}


location @fetch {
internal;

proxy_pass http://127.0.0.1:1;
proxy_set_header   Range '';
proxy_set_header   Host $host;

proxy_store  on;
proxy_store_access   user:rw  group:rw  all:rw;
proxy_temp_path  /.1/msupd/au.download.windowsupdate.com/temp;

root /.1/msupd/au.download.windowsupdate.com;
}

# error messages (if got err from squid)

error_page   500 502 503 504  /50x.html;
location = /50x.html {
root   html;
}


}


[squid-users] POST/PUT request Content-Length

2008-05-27 Thread Oleg Motienko
Hello,

We are using Squid as a transparent proxy, and our users report problems
with several AJAX applications.

If the Content-Length header is absent from a POST request, Squid sends an error reply.

AFAIR, according to
http://www.faqs.org/rfcs/rfc2616.html (Chapter 4.3 Message Body)
the Content-Length header is not required.


   The presence of a message-body in a request is signaled by the
   inclusion of a Content-Length or Transfer-Encoding header field in
   the request's message-headers. A message-body MUST NOT be included in
   a request if the specification of the request method (section 5.1.1)
   does not allow sending an entity-body in requests. A server SHOULD
   read and forward a message-body on any request; if the request method
   does not include defined semantics for an entity-body, then the
   message-body SHOULD be ignored when handling the request.


Am I wrong?

Why does squid block POST/PUT requests without a Content-Length header?
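For reference, RFC 2616 lets a client signal a message body with Transfer-Encoding instead of Content-Length. A minimal Python sketch of such a request (illustrative only; not related to Squid's code, and the URL/host are placeholders):

```python
def chunked_body(data: bytes) -> bytes:
    # One chunk: hex size, CRLF, data, CRLF, then the terminating zero chunk.
    return b"%x\r\n%s\r\n0\r\n\r\n" % (len(data), data)

# A POST that carries a body with no Content-Length header at all.
request = (
    b"POST /api HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    + chunked_body(b'{"key":"value"}')
)
```

Per the quoted RFC text, such a request is legal; the question is whether the proxy accepts it.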


Thanks in advance.

--
Regards,
Oleg


[squid-users] Free-SA: Why can't I add my logfile processing software to the web list?

2008-02-04 Thread Oleg
Would you like to add my software to this list:
http://www.squid-cache.org/Scripts/

Home page:
http://free-sa.sourceforge.net

Description:
Free-SA is a tool for statistical analysis of daemons' log files, similar to 
SARG. Its main advantages over SARG are much better speed (7x-20x), support 
for more reports, and W3C compliance of the generated HTML/CSS reports. It can 
be used to help control traffic usage, to enforce Internet access security 
policies, to investigate security incidents, to evaluate server efficiency, 
and to detect configuration problems.

I've tried many times to add it via the form at the end of the list page, but 
nothing happens. All generated reports meet W3C requirements and have 
<meta name="robots" content="noindex,nofollow"> in their head section.

If you can't or don't want to add it, please tell me why.

-- 
Best regards, Oleg


[squid-users] Hello.

2006-06-01 Thread Belykh Oleg

Question:

NTLM Authentification with Mac OS X 10.4.6 Server

I need to replace the character which separates the DOMAIN name and the user
name in the Squid protocol from '+' to a single '\'. I can't change the
domain separator in Samba (smb.conf, winbind separator); that option doesn't work.


When I test ntlm_auth with the string
DOMAIN+USER password
I get an 'Invalid user' error,

but DOMAIN\USER password works well.


Re: [squid-users] delay pools and ident access lists.

2005-10-15 Thread Oleg Sharoiko

On Fri, 14 Oct 2005, Henrik Nordstrom wrote:

HNYes, please file a bug report.

Done (Bug #1428). I submitted both the old patch and a new one, which fixes 
acl.c instead of delay_pools.c.

-- 
Oleg Sharoiko.
Software and Network Engineer
Computer Center of Rostov State University.


Re: [squid-users] delay pools and ident access lists.

2005-10-14 Thread Oleg Sharoiko
Hello!

Could you please review the following patch, which solves the problem for 
me? Do I have to file a PR?

diff -ur squid-2.5.STABLE11/src/delay_pools.c 
squid-2.5.STABLE11.patched/src/delay_pools.c
--- squid-2.5.STABLE11/src/delay_pools.c	Sun Sep 11 05:49:53 2005
+++ squid-2.5.STABLE11.patched/src/delay_pools.c	Fri Oct 14 11:28:31 2005
@@ -323,6 +323,7 @@
     ch.my_port = r->my_port;
     ch.conn = http->conn;
     ch.request = r;
+    xstrncpy(ch.rfc931, http->conn->rfc931, USER_IDENT_SZ);
     if (r->client_addr.s_addr == INADDR_BROADCAST) {
	debug(77, 2) ("delayClient: WARNING: Called with 'allones' address, 
ignoring\n");
	return delayId(0, 0);

On Thu, 13 Oct 2005, Oleg Sharoiko wrote:

OS
OSOn Thu, 13 Oct 2005, Oleg Sharoiko wrote:
OS
OSOS2.5STABLE2
OSOSI'll update it to latest stable today and see if it helps.
OS
OSIt doesn't work with 2.5.STABLE11 either. The same results in debug 
OS(http_access sees ident response, delay_access doesn't). Any suggestions 
OSon how to debug this further are welcome.
OS
OS2005/10/13 07:59:28| aclCheck: checking 'http_access allow sunray2 user01'
OS2005/10/13 07:59:28| aclMatchAclList: checking sunray2
OS2005/10/13 07:59:28| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
OS2005/10/13 07:59:28| aclMatchIp: '195.208.251.171' found
OS2005/10/13 07:59:28| aclMatchAclList: checking user01
OS2005/10/13 07:59:28| aclMatchAcl: checking 'acl user01 ident user01'
OS2005/10/13 07:59:28| aclMatchUser: user is user01, case_insensitive is 0
OS2005/10/13 07:59:28| Top is 0x81fb8c0, Top->data is user01
OS2005/10/13 07:59:28| aclMatchUser: returning 1, Top is 0x81fb8c0, Top->data 
is user01
OS2005/10/13 07:59:28| aclMatchAclList: returning 1
OS2005/10/13 07:59:28| aclCheck: match found, returning 1
OS2005/10/13 07:59:28| aclCheckCallback: answer=1
OS
OS2005/10/13 07:59:28| aclCheckFast: list: 0x82397f0
OS2005/10/13 07:59:28| aclMatchAclList: checking sunray2
OS2005/10/13 07:59:28| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
OS2005/10/13 07:59:28| aclMatchIp: '195.208.251.171' found
OS2005/10/13 07:59:28| aclMatchAclList: checking user01
OS2005/10/13 07:59:28| aclMatchAcl: checking 'acl user01 ident user01'
OS2005/10/13 07:59:28| aclMatchAclList: no match, returning 0
OS2005/10/13 07:59:28| aclCheckFast: no matches, returning: 0
OS
OS

-- 
Oleg Sharoiko.
Software and Network Engineer
Computer Center of Rostov State University.


Re: [squid-users] delay pools and ident access lists.

2005-10-13 Thread Oleg Sharoiko

On Thu, 13 Oct 2005, Oleg Sharoiko wrote:

OS2.5STABLE2
OSI'll update it to latest stable today and see if it helps.

It doesn't work with 2.5.STABLE11 either. The same results in debug 
(http_access sees ident response, delay_access doesn't). Any suggestions 
on how to debug this further are welcome.

2005/10/13 07:59:28| aclCheck: checking 'http_access allow sunray2 user01'
2005/10/13 07:59:28| aclMatchAclList: checking sunray2
2005/10/13 07:59:28| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
2005/10/13 07:59:28| aclMatchIp: '195.208.251.171' found
2005/10/13 07:59:28| aclMatchAclList: checking user01
2005/10/13 07:59:28| aclMatchAcl: checking 'acl user01 ident user01'
2005/10/13 07:59:28| aclMatchUser: user is user01, case_insensitive is 0
2005/10/13 07:59:28| Top is 0x81fb8c0, Top->data is user01
2005/10/13 07:59:28| aclMatchUser: returning 1, Top is 0x81fb8c0, Top->data is 
user01
2005/10/13 07:59:28| aclMatchAclList: returning 1
2005/10/13 07:59:28| aclCheck: match found, returning 1
2005/10/13 07:59:28| aclCheckCallback: answer=1

2005/10/13 07:59:28| aclCheckFast: list: 0x82397f0
2005/10/13 07:59:28| aclMatchAclList: checking sunray2
2005/10/13 07:59:28| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
2005/10/13 07:59:28| aclMatchIp: '195.208.251.171' found
2005/10/13 07:59:28| aclMatchAclList: checking user01
2005/10/13 07:59:28| aclMatchAcl: checking 'acl user01 ident user01'
2005/10/13 07:59:28| aclMatchAclList: no match, returning 0
2005/10/13 07:59:28| aclCheckFast: no matches, returning: 0

-- 
Oleg Sharoiko.
Software and Network Engineer
Computer Center of Rostov State University.


[squid-users] delay pools and ident access lists.

2005-10-12 Thread Oleg Sharoiko

Hello!

I'd like to set up separate delay pools for different users of a multi-user 
box. Are delay pools supposed to work with ident acls? I tried the following 
setup:


---
acl sunray2 src 195.208.251.171
acl user01 ident user01

ident_lookup_access allow sunray2
ident_lookup_access deny all

http_access allow sunray2 user01

delay_class 2 1
delay_access 2 allow sunray2 user01
delay_parameters 2 16384/16384
---

And it doesn't work. If I change delay_access 2 to be

delay_access 2 allow sunray2

then all traffic for sunray2 is limited to 16Kbps. So it looks like the acl 
user01 doesn't work. But usernames are being logged in the access log:


1129142174.297   5437 195.208.251.171 TCP_MISS/200 3014657 GET 
http://ftp.rsu.ru/pub/FreeBSD/releases/i386/ISO-IMAGES/5.4/5.4-RELEASE-i386-disc1.iso
 user01 DIRECT/195.208.245.253 application/octet-stream

Enabling debug gives me this:

2005/10/12 18:36:08| aclCheck: checking 'http_access allow sunray2 user01'
2005/10/12 18:36:08| aclMatchAclList: checking sunray2
2005/10/12 18:36:08| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
2005/10/12 18:36:08| aclMatchIp: '195.208.251.171' found
2005/10/12 18:36:08| aclMatchAclList: checking user01
2005/10/12 18:36:08| aclMatchAcl: checking 'acl user01 ident user01'
2005/10/12 18:36:08| aclMatchAclList: returning 0
2005/10/12 18:36:08| aclCheck: Doing ident lookup
2005/10/12 18:36:08| aclCheck: checking 'http_access allow sunray2 user01'
2005/10/12 18:36:08| aclMatchAclList: checking sunray2
2005/10/12 18:36:08| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
2005/10/12 18:36:08| aclMatchIp: '195.208.251.171' found
2005/10/12 18:36:08| aclMatchAclList: checking user01
2005/10/12 18:36:08| aclMatchAcl: checking 'acl user01 ident user01'
2005/10/12 18:36:08| aclMatchUser: user is user01, case_insensitive is 0
2005/10/12 18:36:08| Top is 0x820e8e0, Top->data is user01
2005/10/12 18:36:08| aclMatchUser: returning 1, Top is 0x820e8e0, Top->data is 
user01
2005/10/12 18:36:08| aclMatchAclList: returning 1
2005/10/12 18:36:08| aclCheck: match found, returning 1
2005/10/12 18:36:08| aclCheckCallback: answer=1

2005/10/12 18:36:08| aclCheckFast: list: 0x827e830
2005/10/12 18:36:08| aclMatchAclList: checking sunray2
2005/10/12 18:36:08| aclMatchAcl: checking 'acl sunray2 src 195.208.251.171'
2005/10/12 18:36:08| aclMatchIp: '195.208.251.171' found
2005/10/12 18:36:08| aclMatchAclList: checking user01
2005/10/12 18:36:08| aclMatchAcl: checking 'acl user01 ident user01'
2005/10/12 18:36:08| aclMatchAclList: returning 0
2005/10/12 18:36:08| aclCheckFast: no matches, returning: 0

As far as I can understand, the 1st is the http_access check and the 2nd is the 
delay_access check. I took a quick look at the sources and found that 
delay_pools calls only aclCheckFast, which checks ident access lists only if 
the result of an ident lookup already exists. I was hoping that forcing an 
ident lookup with http_access would cache the username somewhere, but this 
doesn't seem to work either. :( Am I doing something wrong, or will this setup 
not work by design?


--
Oleg Sharoiko.
Software and Network Engineer
Computer Center of Rostov State University.
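The behaviour described above can be modelled in a few lines (a toy model, not Squid code; it mirrors the fix later posted in this thread, which copies the connection's ident result into the delay checklist):

```python
# Toy model, not Squid code: the connection knows the ident result,
# but each ACL check reads it from its own checklist structure.
def make_checklist(conn, copy_ident):
    # In the 2.5 code, delayClient() built its checklist without copying
    # the connection's rfc931 (ident) field; the thread's patch adds that copy.
    return {"src": conn["src"],
            "ident": conn["ident"] if copy_ident else None}

def match_ident_acl(checklist, wanted_user):
    # an ident ACL can only see what the checklist carries
    return checklist["ident"] == wanted_user

conn = {"src": "195.208.251.171", "ident": "user01"}  # ident already looked up
http_checklist = make_checklist(conn, copy_ident=True)    # http_access: matches
delay_checklist = make_checklist(conn, copy_ident=False)  # delay_access: misses
```

With the ident result missing from the delay checklist, the fast check reports "no match" even though http_access matched the same user moments earlier.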


Re: [squid-users] delay pools and ident access lists.

2005-10-12 Thread Oleg Sharoiko

On Thu, 13 Oct 2005, Henrik Nordstrom wrote:

HNYes it is supposed to work, but only provided the ident lookup is also
HNenforced in http_access.

That's good news. Do I understand it right that, according to the debug 
output, my config enforces ident lookups?

HNWhich version of Squid are you using?

2.5STABLE2
I'll update it to latest stable today and see if it helps.

-- 
Oleg Sharoiko.
Software and Network Engineer
Computer Center of Rostov State University.


[squid-users] sigsegv again :-(

2003-11-13 Thread oleg
 debug(84, 3) ("helperHandleRead: end of reply found\n");
749 *t = '\0';
750 if (cbdataValid(r->data))
751 r->callback(r->data, srv->buf);
752 srv->flags.busy = 0;
753 srv->offset = 0;
754 helperRequestFree(r);
755 srv->request = NULL;
(gdb) list *0x8064839
0x8064839 is in comm_poll (comm_select.c:447).
442 #endif
443 else {
444 F->read_handler = NULL;
445 hdl(fd, F->read_data);
446 statCounter.select_fds++;
447 if (commCheckICPIncoming)
448 comm_poll_icp_incoming();
449 if (commCheckDNSIncoming)
450 comm_poll_dns_incoming();
451 if (commCheckHTTPIncoming)
(gdb) list *0x808410f
0x808410f is in main (main.c:743).
738 }
739 eventRun();
740 if ((loop_delay = eventNextTime()) < 0)
741 loop_delay = 0;
742 #if HAVE_POLL
743 switch (comm_poll(loop_delay)) {
744 #else
745 switch (comm_select(loop_delay)) {
746 #endif
747 case COMM_OK:
===

my build options were
=
./configure \
--enable-storeio="ufs diskd" \
--enable-removal-policies="lru heap" \
--enable-delay-pools \
--disable-icmp  \
--disable-wccp  \
--disable-snmp  \
--enable-arp-acl \
--disable-htcp  \
--enable-err-languages="English Russian-1251 Russian-koi8-r" \
--enable-default-err-language=Russian-koi8-r  \
--enable-poll   \
--disable-ident-lookups \
--enable-truncate   \
--enable-auth="basic digest" \
--enable-stacktraces
=

AFAICS squid does not exit; it just starts a new child instead of the one that
died, and there are no core files in coredump_dir.

this is squid 2.5.STABLE4 with all availible patches applied (except some
latest cosmetic)
Red Hat 9.0
2.4.22 kernel from kernel.org

Can someone tell me something about this?

oleg



Re: [squid-users] Squid Traffic Accounting

2003-10-25 Thread oleg-s
On Sat, 25 Oct 2003 11:11:31 +0200 (CEST)
Henrik Nordstrom [EMAIL PROTECTED] wrote:

 He then need to wait 
 as much time for the quota to refill until he has access to the Internet 
 again.

there is another unsolvable situation (typical for internet cafes and clubs) - 
one-time generated and used logins.

  Maybe this could be solved by sending a HEAD request in the self-written
  redirector program and comparing the Content-Length field, if present, with
  the current user quota. I don't know.
 
 No, this is not a good approach for technical reasons, but it can be
 solved by adding native quota support to Squid.

are there any plans in the development team about it?
 
 However, personally I would prefer the relaxed approach where users are
 allowed to temporarily go above quota. If not users will not ever be able
 to download very large objects as they will always be above their quota.
 With the relaxed approach they will be able to download this large object,
 but then won't be able to access the Internet for a longer time
 compensating for the fact that they overused their quota.

again, think of one time used user logins.
 
 What is important in both is that the user when denied access due to quota
 gets a clear message indicating what is the problem and when he will get
 access again, and optionally a link to where to purchase/negotiate
 additional bandwidth.

this is where deny_info comes into play. we use it.

olegs


Re: [squid-users] Squid Traffic Accounting

2003-10-24 Thread oleg-s
On Fri, 24 Oct 2003 19:38:50 +0800
Fadjar Tandabawana [EMAIL PROTECTED] wrote:

 
 Is there any tools or another technique to reach my goal?
 

With Squid you *can't* get *real-time* control.
It always depends on how often the user clicks URLs (auth_ttl or external acl ttl), 
and not on how many real bytes he or she receives in real time.

olegs


Re: [squid-users] Squid Traffic Accounting

2003-10-24 Thread oleg-s
 True, but you can get quite near real-time, and if your accounting
 calculates quota over a longer period of time then this is not really an
 issue (users who manage to get above their download quota will be denied
 access longer, until their quota has been refilled)
 Regards

but we can't block big over-quota downloads.
Say a user is trying to download an ISO image; at the time of the request we *don't* 
know the size of the requested object.
If we want to block over-quota downloads we *must* know it, but we don't.
Maybe this could be solved by sending a HEAD request in a self-written redirector 
program
and comparing the Content-Length field, if present, with the current user quota.
I don't know.
Also, we can't handle the situation where a user asked for a big download and 
cancelled it during the process.

olegs

P.S. We are using self-coded billing with Squid, and since there are no Squid hackers 
among us
we don't know how to work around this limitation.
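The HEAD-request idea could be sketched like this (a hypothetical helper, not the poster's billing code; only the Content-Length header name comes from the thread, everything else is assumed):

```python
def over_quota(response_headers, remaining_bytes):
    """Decide up front whether a download would exceed the user's quota.

    If Content-Length is absent (the unsolved case in this thread),
    we cannot decide before the transfer, so we let the request through
    and account for the bytes afterwards.
    """
    size = response_headers.get("Content-Length")
    if size is None:
        return False          # unknown size: allow, settle up later
    return int(size) > remaining_bytes
```

A redirector would issue a HEAD request for the URL, feed the reply headers to this check, and rewrite the request to a deny_info-style error page when it returns True.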


[squid-users] aclMatchProxyAuth: unauthorized ip address

2003-10-16 Thread oleg-s
hello,
Sometimes, for some users, I see the following line in cache.log:
---
aclMatchProxyAuth: unauthorized ip address 'addr' for user 'user_name'
--
where 'addr' and 'user_name' vary.

What should I fix in my configuration?
Squid 2.4.STABLE7 + squidGuard + auth_type_basic
Thanks for answers.
oleg


[squid-users] grsecurity and squid

2003-09-30 Thread oleg-s
hello.
Question: I plan to install the grsecurity kernel patch (www.grsecurity.net).
Can it harm Squid?
Thanks for answers.
oleg


[squid-users] consequent SIGSEGVs

2003-09-19 Thread oleg-s
| 0 Objects cancelled.
2003/09/19 15:32:22| 2 Duplicate URLs purged.
2003/09/19 15:32:22| 1 Swapfile clashes avoided.
2003/09/19 15:32:22|   Took 46.8 seconds (4391.6 objects/sec).
2003/09/19 15:32:22| Beginning Validation Procedure
2003/09/19 15:32:22|   Completed Validation Procedure
2003/09/19 15:32:22|   Validated 205354 Entries
2003/09/19 15:32:22|   store_swap_size = 2414380k
2003/09/19 15:32:23| storeLateRelease: released 0 objects

--
The core file is missing (ulimit -c unlimited is set, and coredump_dir also); 
maybe this is because the squid main process doesn't die at all?
I'm totally puzzled by this behavior.
oleg


[squid-users] deny_info kills squid

2003-09-13 Thread oleg-s
hello.
from my squid.conf
-
external_acl_type bill_acl ttl=120 concurrency=8 %LOGIN %SRC /etc/billing/new/bill_acl
acl password proxy_auth REQUIRED
acl double_ip max_user_ip -s 1 REQUIRED
acl billing external bill_acl REQUIRED
acl all src 0.0.0.0/0.0.0.0
..
http_access allow all !double_ip password billing
http_access deny all
-
I tried to configure deny_info pages for each of the !double_ip password billing acls 
with these lines:

deny_info ERR_DOUBLE_IP !double_ip (not sure about this; I also tried double_ip, but 
that doesn't work either)
deny_info ERR_BAD_PASSWORD password
deny_info ERR_NO_QUOTA billing

All the corresponding files are placed in the errors/ dir,
but I faced a strange problem - squid dies with SIGSEGV and dumps a core file.
gdb output is :
---
#0  strcmp (p1=0x88 <Address 0x88 out of bounds>, p2=0x829c9c2 "ERR_DOUBLE_IP")
at ../sysdeps/generic/strcmp.c:38
38  ../sysdeps/generic/strcmp.c: No such file or directory.
---
What does it all mean?
thanks for answers.
olegs


Re: [squid-users] FTP over squid

2003-09-12 Thread oleg-s
On Fri, 12 Sep 2003 02:25:52 -0700 (PDT)
Abdul Khader [EMAIL PROTECTED] wrote:

 Hi,
 I am a newbi to squid. I would like to know if I can
 do ftp over squid. By default it does not do ftp. I
 would be obliged of any early help.

Check your Safe_ports acl and see if port 20 (ftp data) is among them.

olegs



[squid-users] mozilla vs. digest auth

2003-09-10 Thread oleg-s
Hello, list.
Is there any workaround patch for squid 2.5.STABLE3 to let Mozilla work with the
digest auth scheme in Squid?
Up to 1.5beta1, Mozilla annoys the user with a username/password window every time
a new nonce is sent to the client.
here is my auth_param config for digest auth:
-
auth_param digest children 7
auth_param digest realm md5
auth_param digest nonce_strictness off
auth_param digest check_nonce_count off
auth_param digest post_workaround on
auth_param digest nonce_garbage_interval 2 minutes
auth_param digest nonce_max_duration 2 minutes
auth_param digest nonce_max_count 2
-
thanks for answers.

P.S. IE works, but Mozilla fanatics are shouting and blaming all the rest of the 
world :-)


[squid-users] authenticateDecodeAuth error

2003-09-01 Thread oleg-s
hello.
I recently switched to the Digest auth scheme with squid 2.5.STABLE3,
and found this in my cache.log:
---
WARNING: failed to unpack meta data
authenticateDecodeAuth: Unsupported or unconfigured proxy-auth scheme, 'Basic 
base64_encoded_string'
--
Is this misconfigured/buggy/broken client software, or should I do something with 
squid.conf?
All clients work without problems, but this message is printed to the log periodically.
Thanks for answers.
olegs


[squid-users] digest auth TTL

2003-08-30 Thread oleg-s
hello.
When using the digest scheme to authenticate users, how often will the external auth 
program be called?
For the 'basic' scheme it's the credentialsttl parameter; what is it for digest?
It now seems to me that the overall control happens after 
authenticate_cache_garbage_interval, right?
The nonce_* parameters don't touch the external auth program, right?
And it's annoying Mozilla users to be asked for username/password every time the TTL 
expires. OK, this is a browser issue, but maybe someone already has a recipe?
Thanks for answers.
olegs


[squid-users] CFLAGS tuninig

2003-08-29 Thread oleg-s
hello.
What CFLAGS can someone who has played with them recommend for squid?
Or is this system dependent/independent?
Maybe -O3 -funroll-loops? Something else?
thanks for answers.
olegs


[squid-users] x-squid-internal/vary

2003-08-22 Thread oleg-s
hello, list.

I'm facing a strange problem.
When I try to browse .jsp pages with any browser, instead of displaying the page I 
get a file download window with
type x-squid-internal/vary.
I patched my old 2.4.STABLE4 up to 2.4.STABLE7 recently.
thanks for answers.

oleg