[squid-users] Are you on mobile/handset?

2009-06-16 Thread Luis Daniel Lucio Quiroz
Hi Squids,

What do you think is the best way to detect whether a user is surfing the
internet through a mobile handset?

TIA

LD


Re: [squid-users] Are you on mobile/handset?

2009-06-16 Thread Gavin McCullagh

Hi

On 16 Jun 2009, at 08:05, Luis Daniel Lucio Quiroz luis.daniel.lu...@gmail.com 
 wrote:



Hi Squids,

What do you think is the best way to detect whether a user is
surfing the internet through a mobile handset?


The user agent string sounds like the obvious answer?

Gavin
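
A sketch of what that could look like in squid.conf, using the browser
acl type (which matches the request's User-Agent header against a
regular expression; the patterns below are illustrative only, not a
complete list of mobile agents):

```
# match common mobile User-Agent substrings (illustrative, not exhaustive)
acl mobile browser -i iphone ipod android blackberry symbian opera.mini

# e.g. log mobile requests to their own file (Squid 2.6+ supports acls on access_log)
access_log /var/log/squid/mobile.log squid mobile
```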


Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-16 Thread Gontzal
Hi Abdul,

As has been said, the simplest solution is a PAC file. I'm using one at
my company to balance connections by subnet: subnet A goes through
proxy1 and subnet B through proxy2. When proxy1 goes down, connections
go to proxy2, but the connection state is not synchronized, so clients
will have to establish new connections to proxy2. There are many
examples of configuring a PAC file on the internet.

Obviously this is not the best solution, since it does not balance load
according to how busy each proxy is. For that you may need a solution
such as Linux Virtual Server (LVS) + Heartbeat (like UltraMonkey), with
two virtual/physical machines acting as load balancers in active/passive
mode (with Heartbeat) in front of two machines acting as proxies. To the
end user it appears as a single machine, with one virtual IP for the
whole structure. It has other advantages: the load balancers synchronize
connection information via UDP multicast, so if one server goes down,
the other proxy has the connection information and the client doesn't
have to restart the connection. It is also an HA solution.

It is also good for downtime due to updates, upgrades, failures, etc. on
your servers, since it is completely transparent to the users. And you
can easily increase the number of servers acting as proxies.

Hope it can help you.

Gontzal

2009/6/15 K K kka...@gmail.com

  1. Use the WPAD protocol: let's say PROXY squid1; PROXY squid2
  (this is failover)

 IMHO, using PAC (with or without WPAD) is the simplest and most
 effective approach to failover, requiring no additional software
 beyond a web server to host the PAC file.

 With PAC, the browser will automatically switch to the second proxy in
 the list if the first stops responding.  All modern graphical browsers
 support PAC, and nearly all support WPAD.

 The PAC script is very powerful; you can use many, but not all,
 JavaScript string and numeric functions.  With a little effort you can
 have PAC distribute user load across multiple proxy servers, or even
 hash the request URL so, for example, all requests for dilbert.com
 first go to squid1, to get the most value from cached content.

 For more on PAC, see http://wiki.squid-cache.org/Technology/ProxyPac
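
A minimal PAC sketch combining the subnet-based split and failover ideas
discussed above (proxy hostnames and subnets are hypothetical; a real
PAC would use the engine-provided myIpAddress()/isInNet() helpers, so
the subnet test here is a plain string prefix check to keep the sketch
self-contained):

```javascript
// Hypothetical proxy names; replace with your own.
// Each return value is a failover list: try the first proxy,
// fall back to the second, then go direct.
function chooseProxy(clientIp) {
  if (clientIp.indexOf("192.168.0.") === 0) {
    // subnet A prefers squid1
    return "PROXY squid1.example.com:3128; PROXY squid2.example.com:3128; DIRECT";
  }
  // everyone else prefers squid2
  return "PROXY squid2.example.com:3128; PROXY squid1.example.com:3128; DIRECT";
}

function FindProxyForURL(url, host) {
  // a real PAC would call myIpAddress() here instead of hard-coding
  return chooseProxy("192.168.0.23");
}
```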


[squid-users] Authntication loop

2009-06-16 Thread csampath

Hi All,

I am using squid 3.0 STABLE15.

I am hitting an authentication loop. For a page to load, squid asks for
credentials 3 to 5 times (maybe once per AJAX request).

When I give a wrong password it says:

Sorry, you are not currently allowed to request http://yahoo.com from this
cache until you have authenticated yourself.

When I give the correct password it keeps asking (on every click).

Here is my squid configuration.


http_port 3128 accel vport vhost

auth_param basic program /usr/lib64/squid/squid_radius_auth -f
/etc/squid/squid_radius_conf
auth_param basic children 2
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
acl radius-auth proxy_auth REQUIRED
http_access deny all !radius-auth
http_access deny  !radius-auth all
http_access allow  all
http_reply_access allow all
visible_hostname localhost
#miss_access allow all
cache deny all
always_direct allow all

Can anyone suggest the correct order of http_access entries in the
configuration file?
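
For what it's worth, the usual pattern for "require authentication, deny
everyone else" is a single allow followed by a final deny (squid checks
http_access lines top to bottom, and the acls on one line are ANDed):

```
acl radius-auth proxy_auth REQUIRED
http_access allow radius-auth
http_access deny all
```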

Appreciate your response.
 
Thanks
-Sampath.

-- 
View this message in context: 
http://www.nabble.com/Authntication-loop-tp24052068p24052068.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Squid on DMZ

2009-06-16 Thread João Kuchnier
Thanks for your help!

I managed to configure the Shorewall rules for running squid in the DMZ:
http://www.shorewall.net/Shorewall_Squid_Usage.html

Besides adding HTTP traffic load, will this extra flow interfere with
internet browsing speed?

João

  Hi everyone!
 
  Today I'm running squid on the firewall and it is very easy to manage.
  Despite that, we are trying to decentralize services, adding new
  virtual machines in the DMZ for each of the servers we need.
 
  I would like to know whether you recommend installing Squid in the
  DMZ, whether it is easy to manage there, and how I could manage the
  rules on the firewall (we use Shorewall).

  I don't have any recommendations either way. The pros and cons balance out
  for most intents and purposes. If its working fine for you as-is then there
  really isn't anything to fix.
 
  If you do make the move, be aware that with interception the firewall will
  need to take into account the squid box IP and make exceptions. There is
  also an added flow of traffic (client → router → squid → router →
  internet) which does not currently occur on the internal router
  interface. This effectively doubles or triples the internal HTTP
  traffic load on the router.

  Amos

João K.


[squid-users] Help with squid cache dir

2009-06-16 Thread Juan Manuel Perrote

Hello:

I have a problem with a squid cache: squid does not create files in the
cache dir.

* I run squid as a reverse proxy.
* My squid version is 3.0.RC1.
* The cache dir structure contains a lot of directories, but all are empty.
* The cache dir has 777 permissions on the whole structure.
* The store log has content like this:
1245151352.631 RELEASE -1  289A075B563E3C64D402EB58150B4B28  304
1245151388 -1 -1 text/html 0/0 GET
http://extranet.p/i/1px_trans.gif
1245151442.021 RELEASE -1  E26713FA0492A44B9DB8DD9E9A472615  200
1245151475 -1 -1 text/html 34223/34223 GET
http://extranet.p/prod/apex/f?
1245151444.920 RELEASE -1  C9251F1B9EC4A88A0EC41EFDFA014A35  404
1245151480 -1 -1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif
1245151470.982 RELEASE -1  DE94CD1480F08E3361199B0F762C2173  200
1245151504 -1 -1 text/html 34184/34184 GET
http://extranet.p/prod/apex/f?
1245151474.116 RELEASE -1  1EACA84E18C97E2E964185A85B4EACC0  404
1245151510 -1 -1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif
1245151499.351 RELEASE -1  44A1536A48A6290890D5E3331C4B9B9A  200
1245151533 -1 -1 text/html 34142/34142 GET
http://extranet.p/prod/apex/f?
1245151501.805 RELEASE -1  76DBCDF798B911EC157E388F2280843B  404
1245151537 -1 -1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif
1245151520.811 RELEASE -1  64C9C1BB78A21DE890D65D654836BDFF  200
1245151554 -1 -1 text/html 34221/34221 GET
http://extranet.p/prod/apex/f?
1245151523.960 RELEASE -1  2C115C9A2730865E874723F05BC13994  404
1245151560 -1 -1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif

* My operating system is Ubuntu 8.04 LTS.
* Squid has been working fine for about a year.
* We average 180 users per day.

Juan Manuel Perrote




[squid-users] Bypassing Squid using ACL's

2009-06-16 Thread Jamie Orzechowski
I am trying to avoid using iptables to bypass some sites which have
issues with squid.

I have created the following but the sites are still broken ... any
ideas how to force these sites to go direct?

acl directurls url_regex "/etc/squid3/direct-urls"
cache deny directurls

Contents of /etc/squid3/direct-urls

.watchtvsitcoms.com
.blackboard.sentara.com
.verizon.net
.hotmail.com
.msn.com
.live.com
.megaupload.com
.rapidshare.com
.tehparadox.com
.megavideo.com
.dealerconnection.com
.teamsfa.ca
.fusedsolutions.com
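
Note that cache deny only stops Squid from caching those sites; the
requests still pass through Squid. To make Squid itself fetch them
directly (bypassing any cache_peer parents), something like the sketch
below applies; and since the list above is domain-based, dstdomain is a
better fit than url_regex:

```
acl directurls dstdomain "/etc/squid3/direct-urls"
always_direct allow directurls
cache deny directurls
```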


[squid-users] Applying ACLs to access_log directive

2009-06-16 Thread Jon Gregory

I am using SquidNT 2.7 STABLE5 on WinXP SP3, running as a service, and would 
like to sanity-check what I am attempting but failing to achieve.  From all the 
documentation I have read (Visolve, the squid-cache.org FAQ and this list's 
history) I believe I am creating a valid set of directives in the format below.

access_log filepath [logformat name [acl acl ...]]



I want to direct logging to individual files depending on the source 
network, while still capturing all requests in access.log.  The example 
below is how I have attempted to implement this, but the result is that 
access.log logs all events (which is fine) while the network-specific logs 
remain empty.

acl NET_A src 192.168.0.0/24
acl NET_A src 10.20.30.0/24
acl NET_B src 192.168.1.0/24
acl NET_C src 192.168.2.0/24

access_log c:/squid/var/logs/access_NET_A.log squid NET_A
access_log c:/squid/var/logs/access_NET_B.log squid NET_B
access_log c:/squid/var/logs/access_NET_C.log squid NET_C
access_log c:/squid/var/logs/access.log squid



As a test I also implemented a usergroup-based ACL; with that I can get 
logging to individual files and to the catch-all access.log, which works as I 
would expect.

acl Admins external NT_local_group Administrators

access_log c:/squid/var/logs/access_ADMINS.log squid Admins
access_log c:/squid/var/logs/access.log squid



What am I not understanding?  Is there a dependence on the acl type when using 
access_log?






Re: [squid-users] Bypassing Squid using ACL's

2009-06-16 Thread Tim Bates
Is the issue that they complain you aren't allowed to view content due 
to your location?
If so, what you want to do is prevent the X-Forwarded-For header from 
being sent, using the header_access directive.


To disable sending it for any site use: header_access X-Forwarded-For 
deny all
To do it for just the sites you listed, change the "all" to the acl 
name you want to use (in your example, directurls, but I'd change that name).
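
As a concrete squid.conf fragment of the above (header_access is the
Squid 2.x directive; in Squid 3.x it was split into
request_header_access/reply_header_access):

```
# strip X-Forwarded-For on all outgoing requests
header_access X-Forwarded-For deny all

# or only for a named acl of problem sites
# acl noxff dstdomain .example.com
# header_access X-Forwarded-For deny noxff
```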


TB

Jamie Orzechowski wrote:

I am trying to avoid using iptables to bypass some sites which have
issues with squid.

I have created the following but the sites are still broken ... any
ideas how to force these sites to go direct?

acl directurls url_regex /etc/squid3/direct-urls
cache deny directurls

Contents of /etc/squid3/direct-urls

.watchtvsitcoms.com
.blackboard.sentara.com
.verizon.net
.hotmail.com
.msn.com
.live.com
.megaupload.com
.rapidshare.com
.tehparadox.com
.megavideo.com
.dealerconnection.com
.teamsfa.ca
.fusedsolutions.com

  


[squid-users] Squid rules analyser

2009-06-16 Thread Alberto Cappadonia

Dear squid users,

we are developing a Java-based tool to analyse content filtering rules
(acl, http_access,...) for squid.

The objective is to provide administrators with a tool able to help them
in identifying potential mistakes in the squid configuration.

In more detail, the aims are:
- identifying conflicts and anomalies in squid configuration file
- presenting anomalies to the administrators for further decisions
(e.g., mistakenly empty rules, acl intersection areas, hidden rules)
- optimising rules by removing redundant or shadowed rules

The conflict model is the geometric/algebraic one presented in this paper:
http://security.polito.it/doc/pub_r/policy2008.pdf

The tool fully supports basic set operations for all the acl types in
squid v3.0 (IP addresses, ports, proto and all the ones based on regular
expressions, ...).


The workflow of the tool is briefly:
- read and parse squid.conf for content filtering rules (internal
geometric rule representation)
- analyse rules for potential conflicts and anomalies
- interact with the administrators
- export the optimised and anomaly-free squid.conf


We have finished the conflict detection and resolution engine and the
parser, and we are improving the GUI for reporting the anomalies to
administrators. We expect to have a beta version in a couple of weeks.


We would be glad to hear your opinion about the tool (especially about
improvements and integrations) so we can make it as effective as
possible. If any developer/administrator is interested in using/testing
it (or at least in providing us with a few real configuration files),
that would be very helpful.

Regards,
Cataldo Basile
Alberto Cappadonia





[squid-users] optimize squid

2009-06-16 Thread squid proxy
hi

I'd like to optimize according to this webpage

http://www.linux-faqs.com/squid.php

my squid 2.6.STABLE5 installed on Debian Etch (PC 4, 2GHz, 2GB RAM)
for about 150 users.

I should put the following two lines:

ulimit -HSn 8192
echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range

in /etc/init.d/squid, but I don't know where exactly.
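
One plausible placement (an assumption, not the only option) is at the
top of the start() function, so the limits apply to the daemon about to
be launched; the port-range setting could equally go into
/etc/sysctl.conf:

```
start () {
   # raise the FD limit and widen the ephemeral port range
   # before squid itself is started
   ulimit -HSn 8192
   echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range

   cdr=`grepconf2 cache_dir /var/spool/$NAME`
   # ... rest of the existing start() body unchanged
}
```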

my /etc/init.d/squid:

#! /bin/sh
#
# squid  Startup script for the SQUID HTTP proxy-cache.
#
# Version:   @(#)squid.rc  2.20  01-Oct-2001  miqu...@cistron.nl
#
### BEGIN INIT INFO
# Provides:  squid
# Required-Start:$local_fs $network
# Required-Stop: $local_fs $network
# Should-Start:  $named
# Should-Stop:   $named
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: Squid HTTP Proxy
### END INIT INFO

NAME=squid
DAEMON=/usr/sbin/squid
LIB=/usr/lib/squid
PIDFILE=/var/run/$NAME.pid
SQUID_ARGS="-D -sYC"

[ ! -f /etc/default/squid ] || . /etc/default/squid

. /lib/lsb/init-functions

PATH=/bin:/usr/bin:/sbin:/usr/sbin

[ -x $DAEMON ] || exit 0

grepconf () {
   w=" 	" # space tab
   sq=/etc/squid/squid.conf
   # sed is cool.
   res=`sed -ne '
  s/^'$1'['$w']\+\([^'$w']\+\).*$/\1/p;
  t end;
  d;
  :end q' < $sq`
   [ -n "$res" ] || res=$2
   echo $res
}

grepconf2 () {
   w=" 	" # space tab
   sq=/etc/squid/$NAME.conf
   # sed is cool.
   res=`sed -ne '
  s/^'$1'['$w']\+[^'$w']\+['$w']\+\([^'$w']\+\).*$/\1/p;
  t end;
  d;
  :end q' < $sq`
   [ -n "$res" ] || res=$2
   echo $res
}

#
#   Try to increase the # of filedescriptors we can open.
#
maxfds () {
   [ -n "$SQUID_MAXFD" ] || return
   [ -f /proc/sys/fs/file-max ] || return 0
   [ $SQUID_MAXFD -le 4096 ] || SQUID_MAXFD=4096
   global_file_max=`cat /proc/sys/fs/file-max`
   minimal_file_max=$(($SQUID_MAXFD + 4096))
   if [ "$global_file_max" -lt $minimal_file_max ]
   then
  echo $minimal_file_max > /proc/sys/fs/file-max
   fi
   ulimit -n $SQUID_MAXFD
}

start () {
   cdr=`grepconf2 cache_dir /var/spool/$NAME`

   case "$cdr" in
  [0-9]*)
 log_failure_msg "squid: squid.conf contains 2.2.5 syntax - not starting!"
 log_end_msg 1
 exit 1
 ;;
   esac

   #
   # Create spool dirs if they don't exist.
   #
   if [ -d "$cdr" -a ! -d "$cdr/00" ]
   then
  log_warning_msg "Creating squid spool directory structure"
  $DAEMON -z
   fi

   if [ "$CHUID" = "" ]; then
  CHUID=root
   fi

   maxfds
   umask 027
   cd $cdr
   start-stop-daemon --quiet --start \
  --pidfile $PIDFILE \
  --chuid $CHUID \
  --exec $DAEMON -- $SQUID_ARGS < /dev/null
   return $?
}

stop () {
   PID=`cat $PIDFILE 2>/dev/null`
   start-stop-daemon --stop --quiet --pidfile $PIDFILE --name squid
   #
   #   Now we have to wait until squid has _really_ stopped.
   #
   sleep 2
   if test -n "$PID" && kill -0 $PID 2>/dev/null
   then
  log_action_begin_msg "Waiting"
  cnt=0
  while kill -0 $PID 2>/dev/null
  do
 cnt=`expr $cnt + 1`
 if [ $cnt -gt 24 ]
 then
log_action_end_msg 1
return 1
 fi
 sleep 5
 log_action_cont_msg ""
  done
  log_action_end_msg 0
  return 0
   else
  return 0
   fi
}

case "$1" in
start)
   log_daemon_msg "Starting Squid HTTP proxy" "squid"
   if start ; then
  log_end_msg $?
   else
  log_end_msg $?
   fi
   ;;
stop)
   log_daemon_msg "Stopping Squid HTTP proxy" "squid"
   if stop ; then
  log_end_msg $?
   else
  log_end_msg $?
   fi
   ;;
reload|force-reload)
   log_action_msg "Reloading Squid configuration files"
   start-stop-daemon --stop --signal 1 \
  --pidfile $PIDFILE --quiet --exec $DAEMON
   log_action_end_msg 0
   ;;
restart)
   log_daemon_msg "Restarting Squid HTTP proxy" "squid"
   stop
   if start ; then
  log_end_msg $?
   else
  log_end_msg $?
   fi
   ;;
*)
   echo "Usage: /etc/init.d/$NAME {start|stop|reload|force-reload|restart}"
   exit 3
   ;;
esac

exit 0


Piotr


[squid-users] commBind Errors

2009-06-16 Thread twinturbo
SLES10
Squid 3.1.0.7

I am seeing continual errors

commBind: Cannot bind socket FD 34 to 10.106.88.65:3128: (98) Address already in
use


I can't seem to find any useful information about these errors.

Can anyone shed any light?

Thanks

Rob





[squid-users] SUID as inbound https proxy and out bound http proxy

2009-06-16 Thread csampath

Hello,

Has anyone been successful in configuring squid as both an inbound and an
outbound proxy?

client ---(always https)--- squid ---(http or https)--- origin server

All web traffic between the client and squid should be https; between
squid and the origin server it may be http or https, and the response to
the client is always https.

Could anyone point me to the right configuration?

Thanks in advance..

-Sampath
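
For the inbound (HTTPS-terminating reverse proxy) half, a hedged
squid.conf sketch; certificate paths, site and origin names are
placeholders, and the outbound half is just an ordinary http_port with
your usual http_access rules:

```
# clients always speak https to squid
https_port 443 accel cert=/etc/squid/cert.pem key=/etc/squid/key.pem defaultsite=www.example.com
# squid fetches from the origin over plain http
cache_peer origin.example.com parent 80 0 no-query originserver name=origin
```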




-- 
View this message in context: 
http://www.nabble.com/SUID-as--inbound-https-proxy-and-out-bound-http-proxy-tp24057044p24057044.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Computers on my network cannot access internet via Squid cache

2009-06-16 Thread Mark Lodge


I have installed Debian to run Squid as a caching proxy.
I've been bashing away for 2 days now, and I have managed to install squid 
(I first tried manually, but that did not work, so I used the Synaptic 
package manager to install it, from the Administration menu).

That went well; thereafter I installed Webmin to work with squid in a GUI.

I have managed to start squid and added my range of IP addresses to the 
ACL list (as mentioned here: http://doxfer.com/Webmin/SquidProxyServer).

I have added the proxy restriction too.

Now I tried to test it.
I opened the Iceweasel web browser (on the same machine) and set it to use 
the proxy server localhost, port 3128.

That works fine.

But when I try to change the proxy setting to my machine's IP (where 
squid is installed):

Proxy server: 10.0.0.35, port 3128

that does not work.
Am I missing something? Please help.
I then tried to set another Windows PC on the network to:
Proxy server: 10.0.0.35, port 3128
That also does not work.

I also edited the conf file to http_access allow all, but I do not know 
if I have done it correctly; maybe there is another problem?


I would really appreciate your comments and help
Thank you in advance
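
For reference, the minimal squid.conf pattern for allowing a LAN looks
like this (the subnet is a guess based on the 10.0.0.35 address; the
allow lines must appear before the final deny):

```
acl localnet src 10.0.0.0/24
http_access allow localnet
http_access allow localhost
http_access deny all
```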


[squid-users] Squid Slowing down flash based speed tests

2009-06-16 Thread Jamie Orzechowski
For some reason, when squid is running, a speed test (www.speedtest.net)
will run fine through the download phase, but when it tries the upload test
there is a 20-second pause before it starts.  With squid disabled
everything runs as normal.

There are no delay pools in my configs.  It affects other flash-based
speed tests as well.

any ideas?


Re: [squid-users] AD groups / wbinfo_group.pl problem

2009-06-16 Thread Kevin Blackwell
Jakob,

recently I've been having the same problem. You find a fix?

Kevin

On Tue, Oct 7, 2008 at 11:50 AM, Jakob Curdesj...@info-systems.de wrote:
 Hi,

 when trying to setup NTLM authentication  against an AD controller I ran
 into an issue with testing against Windows Group membership.

 Here's what works:
 - authorizing against AD controller via winbindd and ntlm_auth helper from
 samba package
 i.e. without group restrictions the authorization works

 - testing group membership with wbinfo_group.pl via the command line:

 [r...@fw libexec]# ./wbinfo_group.pl
 DOMAIN+guest DOMAIN+WebEnabled
 ERR
 DOMAIN+service DOMAIN+WebEnabled
 OK

 What does not work is letting squid check the group membership.
 Here are the relevant conf settings:

 external_acl_type nt_group ttl=0 concurrency=5 %LOGIN
 /usr/local/squid/libexec/wbinfo_group.pl -d
 acl WebEnabled  external nt_group WebEnabled
 acl allowed_users proxy_auth REQUIRED
 (...)
 http_access allow WebEnabled
 http_access allow allowed_users
 http_access deny all

 What happens in cache.log is (wbinfo_group.pl debug is on) :
 [2008/10/07 18:30:57, 3] libsmb/ntlmssp.c:debug_ntlmssp_flags(63)
  Got NTLMSSP neg_flags=0xa208b207
 [2008/10/07 18:30:57, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(739)
  Got user=[guest] domain=[DOMAIN] workstation=[WS1] len1=24 len2=24
 [2008/10/07 18:30:57, 3] libsmb/ntlmssp_sign.c:ntlmssp_sign_init(338)
  NTLMSSP Sign/Seal - Initialising with flags:
 [2008/10/07 18:30:57, 3] libsmb/ntlmssp.c:debug_ntlmssp_flags(63)
  Got NTLMSSP neg_flags=0xa2088205
 Got 0 guest2 WebEnabled from squid
 Could not convert sid S- to gid
 User:  -0-
 Group: -guest-
 SID:   -
 GID:   --
 Could not get groups for user 0
 Sending OK to squid
 2008/10/07 18:30:58| helperHandleRead: unexpected reply on channel -1 from
 nt_group #1 'OK'

 Why is squid not able to lookup the groups if wbinfo on the commandline can?
 I changed the permissions of the winbindd_privileged directory to match the
 squid_effective group.

 Any ideas ?

 Regards,
 Jakob
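
The "unexpected reply on channel -1" line in the log above is typically
what appears when concurrency= is configured on an external_acl_type but
the helper does not speak Squid's concurrent helper protocol (which the
stock wbinfo_group.pl did not). A hedged sketch of the change, assuming
that is the cause here:

```
# before: concurrency=5 requires a helper that echoes the channel number
# external_acl_type nt_group ttl=0 concurrency=5 %LOGIN /usr/local/squid/libexec/wbinfo_group.pl -d

# after: run several non-concurrent helper children instead
external_acl_type nt_group ttl=0 children=5 %LOGIN /usr/local/squid/libexec/wbinfo_group.pl -d
acl WebEnabled external nt_group WebEnabled
```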



Re: [squid-users] Bypassing squid for certain sites

2009-06-16 Thread Chris Robertson

Jamie Orzechowski wrote:

I am having issues with a few sites like megavideo, hotmail, etc and
looking to bypass them entirely via IPTables ... I have added some
rules to IPTables but I still see the traffic hitting the caches.  Any
ideas?

Strange thing is that when running an iptables --list it shows no
rules configured at all ..
  


iptables --list only shows the filter table (the INPUT, FORWARD and 
OUTPUT chains).  You'll need to run iptables -t mangle --list to see 
the mangle table.



Here is my iptables rules

/usr/local/sbin/iptables -t mangle -N DIVERT
/usr/local/sbin/iptables -t mangle -A DIVERT -j MARK --set-mark 1
/usr/local/sbin/iptables -t mangle -A DIVERT -j ACCEPT
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

#Bypass These subnets
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 65.54.186.0/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 65.54.165.0/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 72.32.79.195/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 64.4.20.0/24 -j RETURN
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp -m tcp --dport
80 -d 69.5.88.0/24 -j RETURN

# Redirect to squid
/usr/local/sbin/iptables -t mangle -A PREROUTING -p tcp --dport 80 -j
TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
  


You might need to add "/usr/local/sbin/iptables -t mangle -F" to the top 
of those rules to flush the mangle table before adding any other rules.


Chris





[squid-users] wbinfo and wbinfo_group.pl broken

2009-06-16 Thread Kevin Blackwell
I'm not  sure what happened, but all of a sudden wbinfo and
wbinfo_group.pl are not returning group data.

./testwb
++ wbinfo -n USER
+ sid='S-1-5-21-1607859618-1323328405-3834754132-2081 User (1)'
+ wbinfo -Y 'S-1-5-21-1607859618-1323328405-3834754132-2081 User (1)'
Could not convert sid S-1-5-21-1607859618-1323328405-3834754132-2081
User (1) to gid
+ wbinfo -Y S-1-5-21-1607859618-1323328405-3834754132-2081
Could not convert sid S-1-5-21-1607859618-1323328405-3834754132-2081 to gid
++ wbinfo -n GROUP
+ sid='S-1-5-21-1607859618-1323328405-3834754132-2829 Domain Group (2)'
+ wbinfo -Y 'S-1-5-21-1607859618-1323328405-3834754132-2829 Domain Group (2)'
Could not convert sid S-1-5-21-1607859618-1323328405-3834754132-2829
Domain Group (2) to gid
+ wbinfo -Y S-1-5-21-1607859618-1323328405-3834754132-2829
1
+ wbinfo -r USER
Could not get groups for user USER


wbinfo_group.pl -d
Debugging mode ON.
USER GROUP
Got USER GROUP from squid
User:  -USER-
Group: -GROUP-
SID:   -S-1-5-21-1607859618-1323328405-3834754132-2829-
GID:   -1-
Could not get groups for user USER
Sending ERR to squid
ERR


I've seen the problem a lot, but no fix.

Thanks in advance.


[squid-users] NTLM authentication + Administrator user

2009-06-16 Thread Sergio - Embalatec
Hello, this is my first post to the list. I wonder if someone has already run 
into this problem.

I'm using Squid 2.7.STABLE3 with NTLM authentication. Windows users 
authenticate without problems, but when logged in as the Administrator 
user they cannot access the internet; I have not managed to get the 
Administrator user to authenticate.

What happens in this situation for users who are not in the domain?

Is there any way around this situation?

below my configuration:
auth_param ntlm program /etc/squid/squid/ntlm_auth brk/dcbrk
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes


Thanks, any idea will be very useful!


Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-16 Thread Chris Robertson

Gontzal wrote:

Hi Abdul,

As has been said, the simplest solution is a PAC file. I'm using one at
my company to balance connections by subnet: subnet A goes through
proxy1 and subnet B through proxy2. When proxy1 goes down, connections
go to proxy2, but the connection state is not synchronized, so clients
will have to establish new connections to proxy2.


Squid does not have connection synchronization capabilities between 
peers.  No matter what form of load balancing/high availability you use, 
if one of your Squid servers dies, any active connections with that 
server will be dropped and the client will have to reestablish a new 
connection.



There are many examples of configuring a PAC file on the internet.

Obviously this is not the best solution; it does not balance load
according to how busy each proxy is.


A PAC file can be load balancing.  See the Super Proxy Script from Sharp 
(http://naragw.sharp.co.jp/sps/).



For that you may need a solution such as Linux Virtual Server (LVS) +
Heartbeat (like UltraMonkey), with two virtual/physical machines acting
as load balancers in active/passive mode (with Heartbeat) in front of
two machines acting as proxies. To the end user it appears as a single
machine, with one virtual IP for the whole structure.


Okay so far.


It has other advantages: the load balancers synchronize connection
information via UDP multicast, so if one server goes down, the other
proxy has the connection information and the client doesn't have to
restart the connection.


The load balancer might very well send the continuation of the TCP 
stream to Squid, but Squid will dump it due to the fact that it has no 
accounting of the connection.  If you have an active/active Linux-HA 
setup (or even an active/passive) and one of the load balancing machines 
(or processes) dies, the existing connections will be maintained (as 
long as the Squid process is not affected).



It is also an HA solution.

It is also good for downtime due to updates, upgrades, failures, etc. on
your servers; it is completely transparent to the users.


For true transparency, you have to remove the Squid server from the 
cluster (which will prevent NEW connections from being established) and 
then wait for active connections to finish (which if you have customers 
listening to Internet Radio, this step can take a while).  Then you can 
perform maintenance on it.  Just shutting the Squid service down will 
disrupt active connections.



And you can easily increase the number of servers acting as proxies.
  


Changing a PAC file is just as easy (if not more so).  The disadvantage 
the PAC file has is that it is only loaded when the browser starts.



Hope it can help you.

Gontzal
  


Be aware, if you decide to go the multiple-active-proxies route, that there 
are any number of sites which don't understand (or accept) that HTTP is 
stateless and attempt to maintain a session based on source IP.  If 
you load balance your traffic without some attempt at keeping 
connections "sticky" (such as using a source-hash algorithm) or without 
NATing all of your proxies' outgoing traffic, you will experience trouble 
with such sites.  Ask me how I know...  :o)


Chris


Re: [squid-users] squid_ldap_auth failure

2009-06-16 Thread Chris Robertson

Benjamin Fleckenstein wrote:

Hi there,

I've tried to set up a connection from a Squid Proxy (Version 2.6.STABLE10) to 
our AD Server (Windows 2003 Server). I've already tried several commands but 
there always appears an error. I already checked different forums and manuals 
but I don't get the connection to work.

For testing the connection I've tried the following command:

./squid_ldap_auth -R -b "dc=my,dc=domain" -D "cn=username,dc=my,dc=domain" -w password -f "sAMAccountName=%s" -h hostname:389
username password
squid_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'
ERR Invalid credentials

The username and password are correct.


The Wiki shows different options used when querying a Win2k3 server:

http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap#head-3793850746c1c1e7a0108faa8ae46f33bdd57bd9

I'd suggest trying...

./squid_ldap_auth -v 3 -b "dc=my,dc=domain" -D "cn=username,ou=Generic 
User Accounts,dc=my,dc=domain" -w password -f "sAMAccountName=%s" -h 
hostname


...or just going with the Windows AD authentication: 
http://wiki.squid-cache.org/ConfigExamples/Authenticate/WindowsActiveDirectory



 I've installed the AD Snapshot tool to test whether the user is able to query 
the ldap server. That works!

Does anybody have an idea why I always get that error, and what I could try to 
get this to work? Could it be a bug, or is there something wrong with my query?

I'd be thankful for any help or ideas!

Lukas
  


Chris



Re: [squid-users] Authntication loop

2009-06-16 Thread Chris Robertson

csampath wrote:

Hi All,

I am using squid 3.0 STABLE15.

I am hitting an authentication loop. For a page to load, squid asks for
credentials 3 to 5 times (maybe once per AJAX request).

When I give a wrong password it says:

Sorry, you are not currently allowed to request http://yahoo.com from this
cache until you have authenticated yourself.

When I give the correct password it keeps asking (on every click).


Here is my squid configuration.


http_port 3128 accel vport vhost

auth_param basic program /usr/lib64/squid/squid_radius_auth -f
/etc/squid/squid_radius_conf
auth_param basic children 2
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
acl radius-auth proxy_auth REQUIRED
http_access deny all !radius-auth
http_access deny  !radius-auth all
http_access allow  all
http_reply_access allow all
visible_hostname localhost
#miss_access allow all
cache deny all
always_direct allow all

Can anyone suggest the correct order of http_access entries in the
configuration file?
  


From the information given, I gather that you are running an 
interception proxy.  The "accel vport vhost" arguments to http_port are 
meant for acceleration setups, not for interception setups.  I further 
surmise that you chose to go the "accel vport vhost" route because using 
"transparent" gave configuration errors with authentication.


There is a reason for that.  
http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#head-e56904dd4dfe0e21e5c2903473c473d401533ac7



Appreciate your response.
 
Thanks

-Sampath.


Chris


Re: [squid-users] Help with squid cache dir

2009-06-16 Thread Chris Robertson

Juan Manuel Perrote wrote:

Hello:

I have a problem with a cache squid, squid not generate files on the cache
dir.

* I have a squid as a reverse proxy.
* My version of squid is 3.0.RC1.
  


Yikes.  3.0.RC1 is at least better than the PRE versions, but it's still 
not a Stable release.  It's also nearly two years old.  There have been 
a number of bug fixes and security patches since.



* On the cache dir structure I have a lot of directories, but all are empty.
* The cache dir have 777 permissions on all structure
  


Yikes again.  777 permissions are not a good idea.  Set the ownership 
correctly, and put the directory permissions back to 750.
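A sketch of that fix, assuming a Debian/Ubuntu-style install where Squid runs as user "proxy" and the cache lives in /var/spool/squid (both are assumptions; check cache_effective_user and cache_dir in your squid.conf first):

```
chown -R proxy:proxy /var/spool/squid
find /var/spool/squid -type d -exec chmod 750 {} \;
find /var/spool/squid -type f -exec chmod 640 {} \;
```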



* The store log have a content like these
1245151352.631 RELEASE -1  289A075B563E3C64D402EB58150B4B28  304
1245151388-1-1 text/html 0/0 GET
http://extranet.p/i/1px_trans.gif
  


Are you sure these objects are cacheable?  What is the output of 
"squidclient -m HEAD http://extranet.p/i/1px_trans.gif"?



1245151442.021 RELEASE -1  E26713FA0492A44B9DB8DD9E9A472615  200
1245151475-1-1 text/html 34223/34223 GET
http://extranet.p/prod/apex/f?
  


Old squid configurations contained an ACL defining anything with 
"cgi-bin" or a question mark in the URL as a "QUERY" and denying caching for 
matching requests.
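Those old defaults looked roughly like this (quoted from memory, so treat it as a sketch); if your squid.conf still carries an equivalent pair, consider removing it, since Squid 2.6+/3.0 can cache dynamic replies whose headers permit it:

```
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
```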



1245151444.920 RELEASE -1  C9251F1B9EC4A88A0EC41EFDFA014A35  404
1245151480-1-1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif
1245151470.982 RELEASE -1  DE94CD1480F08E3361199B0F762C2173  200
1245151504-1-1 text/html 34184/34184 GET
http://extranet.p/prod/apex/f?
1245151474.116 RELEASE -1  1EACA84E18C97E2E964185A85B4EACC0  404
1245151510-1-1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif
1245151499.351 RELEASE -1  44A1536A48A6290890D5E3331C4B9B9A  200
1245151533-1-1 text/html 34142/34142 GET
http://extranet.p/prod/apex/f?
1245151501.805 RELEASE -1  76DBCDF798B911EC157E388F2280843B  404
1245151537-1-1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif
1245151520.811 RELEASE -1  64C9C1BB78A21DE890D65D654836BDFF  200
1245151554-1-1 text/html 34221/34221 GET
http://extranet.p/prod/apex/f?
1245151523.960 RELEASE -1  2C115C9A2730865E874723F05BC13994  404
1245151560-1-1 text/html 328/328 GET
http://extranet.p/prod/apex/blank.gif

* My operating system is Ubuntu 8.04 LTS
* Squid is working fine since 1 years
* We have a average of 180 users per day

Juan Manuel Perrote


Chris


Re: [squid-users] wbinfo and wbinfo_group.pl broken

2009-06-16 Thread Kinkie
On Tue, Jun 16, 2009 at 8:27 PM, Kevin Blackwell akblack...@gmail.com wrote:
 I'm not  sure what happened, but all of a sudden wbinfo and
 wbinfo_group.pl are not returning group data.

You may want to talk to some Samba users group. Squid uses Samba's
services for AD-related activities.
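Before that, it may be worth confirming the winbind side by hand. This is a sketch — the domain, user and group names are placeholders, and wbinfo_group.pl expects the same "user group" pairs on stdin that Squid's external_acl_type helper interface would send:

```
wbinfo -t                              # verify the machine trust account
wbinfo -g                              # list domain groups
wbinfo -u                              # list domain users
echo "EXAMPLE\\jdoe Domain+Admins" | ./wbinfo_group.pl   # expect OK or ERR
```

If wbinfo itself returns nothing, the problem is in winbind/Samba, not in Squid.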
-- 
/kinkie


Re: [squid-users] Applying ACLs to access_log directive

2009-06-16 Thread Chris Robertson

Jon Gregory wrote:

I am using SquidNT 2.7 STABLE 5 on WinXP SP3, running as a service, and would 
like to sense-check what I am attempting but failing to achieve.  From all the 
documentation I have read from ViSolve, the squid-cache.org FAQ and this list's 
history, I believe I am creating a valid set of directives in the below format.

access_log <filepath> [<logformat name> [acl acl ...]]



I want to direct logging to individual files depending on the source 
network, while still capturing all requests in access.log.  The example 
below is how I have attempted to implement this, but the result is that 
access.log logs all events (which is okay) while the network-specific logs 
remain empty.

acl NET_A src 192.168.0.0/24
acl NET_A src 10.20.30.0/24
acl NET_B src 192.168.1.0/24
acl NET_C src 192.168.2.0/24

access_log c:/squid/var/logs/access_NET_A.log squid NET_A
access_log c:/squid/var/logs/access_NET_B.log squid NET_B
access_log c:/squid/var/logs/access_NET_C.log squid NET_C
access_log c:/squid/var/logs/access.log squid
  


That looks right...


In an attempt to test, I also implemented a user-group-based ACL, and with it I 
can get logging to the individual file and to the catch-all access.log, which 
works as I would expect.

acl Admins external NT_local_group Administrators

access_log c:/squid/var/logs/access_ADMINS.log squid Admins
access_log c:/squid/var/logs/access.log squid
  


So it works...


What am I not understanding?  Is there a dependency on the ACL type when using 
access_log?
  


Do the entries in c:/squid/var/logs/access.log show the remote host IP in 
the third column?


Chris


Re: [squid-users] optimize squid

2009-06-16 Thread Chris Robertson

squid proxy wrote:

hi

I'd like to optimize according to this webpage

http://www.linux-faqs.com/squid.php
  


This article talks about patching ReiserFS into the 2.2 kernel and 
adding UDMA 66 support.  Neat.



my squid 2.6.STABLE5 installed on Debian Etch (PC 4, 2GHz, 2GB RAM)
for about 150 users.
  


For 150 users, you won't likely need these tweaks.


I should put the following two lines:

ulimit -HSn 8192 


This will only make a difference (in Squid 2.6) if squid was compiled 
with this limit in place, or specified --with-max-fd=8192 (or 
something like that). 
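For the record, the Squid 2.x configure option is spelled --with-maxfd (the 2.7 build quoted later in this digest uses '--with-maxfd=16384'); a rebuild sketch, with the remaining options left to your existing build:

```
./configure --with-maxfd=8192 [your other configure options]
make && make install
```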


echo 1024 32768 > /proc/sys/net/ipv4/ip_local_port_range

in /etc/init.d/squid, but I don't know where exactly.

my /etc/init.d/squid:

#! /bin/sh
#
# squid  Startup script for the SQUID HTTP proxy-cache.
#
# Version:   @(#)squid.rc  2.20  01-Oct-2001  miqu...@cistron.nl
#
### BEGIN INIT INFO
# Provides:  squid
# Required-Start:$local_fs $network
# Required-Stop: $local_fs $network
# Should-Start:  $named
# Should-Stop:   $named
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: Squid HTTP Proxy
### END INIT INFO

NAME=squid
DAEMON=/usr/sbin/squid
LIB=/usr/lib/squid
PIDFILE=/var/run/$NAME.pid
SQUID_ARGS=-D -sYC

[ ! -f /etc/default/squid ] || . /etc/default/squid
  


Check this file for a line defining SQUID_MAXFD.  It gets used later 
to set ulimit.



. /lib/lsb/init-functions

PATH=/bin:/usr/bin:/sbin:/usr/sbin

[ -x $DAEMON ] || exit 0

grepconf () {
   w=" 	" # space tab
   sq=/etc/squid/squid.conf
   # sed is cool.
   res=`sed -ne '
  s/^'$1'['$w']\+\([^'$w']\+\).*$/\1/p;
  t end;
  d;
  :end q' < $sq`
   [ -n "$res" ] || res=$2
   echo $res
}

grepconf2 () {
   w=" 	" # space tab
   sq=/etc/squid/$NAME.conf
   # sed is cool.
   res=`sed -ne '
  s/^'$1'['$w']\+[^'$w']\+['$w']\+\([^'$w']\+\).*$/\1/p;
  t end;
  d;
  :end q' < $sq`
   [ -n "$res" ] || res=$2
   echo $res
}

#
#   Try to increase the # of filedescriptors we can open.
#
maxfds () {
   [ -n "$SQUID_MAXFD" ] || return
   [ -f /proc/sys/fs/file-max ] || return 0
   [ "$SQUID_MAXFD" -le 4096 ] || SQUID_MAXFD=4096
   global_file_max=`cat /proc/sys/fs/file-max`
   minimal_file_max=$(($SQUID_MAXFD + 4096))
   if [ "$global_file_max" -lt "$minimal_file_max" ]
   then
  echo $minimal_file_max > /proc/sys/fs/file-max
   fi
   ulimit -n $SQUID_MAXFD
}

start () {
   cdr=`grepconf2 cache_dir /var/spool/$NAME`

   case "$cdr" in
  [0-9]*)
 log_failure_msg "squid: squid.conf contains 2.2.5 syntax - not starting!"
 log_end_msg 1
 exit 1
 ;;
   esac

   #
   # Create spool dirs if they don't exist.
   #
   if [ -d "$cdr" -a ! -d "$cdr/00" ]
   then
  log_warning_msg "Creating squid spool directory structure"
  $DAEMON -z
   fi

   if [ "$CHUID" = "" ]; then
  CHUID=root
   fi

   maxfds
  


Right here is where the script is attempting to raise the ulimit.  Just 
below this line would be a fine place to set ip_local_port_range.


   umask 027
   cd $cdr
   start-stop-daemon --quiet --start \
  --pidfile $PIDFILE \
  --chuid $CHUID \
  --exec $DAEMON -- $SQUID_ARGS < /dev/null
   return $?
}

stop () {
   PID=`cat $PIDFILE 2>/dev/null`
   start-stop-daemon --stop --quiet --pidfile $PIDFILE --name squid
   #
   #   Now we have to wait until squid has _really_ stopped.
   #
   sleep 2
   if test -n "$PID" && kill -0 $PID 2>/dev/null
   then
  log_action_begin_msg " Waiting"
  cnt=0
  while kill -0 $PID 2>/dev/null
  do
 cnt=`expr $cnt + 1`
 if [ $cnt -gt 24 ]
 then
log_action_end_msg 1
return 1
 fi
 sleep 5
 log_action_cont_msg ""
  done
  log_action_end_msg 0
  return 0
   else
  return 0
   fi
}

case "$1" in
start)
   log_daemon_msg "Starting Squid HTTP proxy" "squid"
   if start ; then
  log_end_msg $?
   else
  log_end_msg $?
   fi
   ;;
stop)
   log_daemon_msg "Stopping Squid HTTP proxy" "squid"
   if stop ; then
  log_end_msg $?
   else
  log_end_msg $?
   fi
   ;;
reload|force-reload)
   log_action_msg "Reloading Squid configuration files"
   start-stop-daemon --stop --signal 1 \
  --pidfile $PIDFILE --quiet --exec $DAEMON
   log_action_end_msg 0
   ;;
restart)
   log_daemon_msg "Restarting Squid HTTP proxy" "squid"
   stop
   if start ; then
  log_end_msg $?
   else
  log_end_msg $?
   fi
   ;;
*)
   echo "Usage: /etc/init.d/$NAME {start|stop|reload|force-reload|restart}"
   exit 3
   ;;
esac

exit 0


Piotr
  


Chris


Re: [squid-users] commBind Errors

2009-06-16 Thread Chris Robertson

twintu...@f2s.com wrote:

SLES10
Squid 3.1.0.7

I am seeing continual errors

commBind: Cannot bind socket FD 34 to 10.106.88.65:3128: (98) Address already in
use


I can't seem to find any useful information about these errors.

Can anyone shed any light?
  


Something is already using port 3128 on IP address 10.106.88.65.  How 
are you starting Squid?
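A quick way to find out what already owns the socket (a sketch; flags per the net-tools version commonly shipped with SLES10):

```
netstat -tlnp | grep ':3128'    # shows the PID/program bound to port 3128
ps ax | grep '[s]quid'          # a second squid instance is a common culprit
```

Starting Squid both from an init script and by hand, or listing the same address twice on http_port lines, will produce exactly this commBind error.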



Thanks

Rob
  


Chris


Re: [squid-users] Squid as inbound https proxy and outbound http proxy

2009-06-16 Thread Chris Robertson

csampath wrote:

Hello,

Has anyone been successful in configuring Squid as both an inbound and
outbound proxy?
  


Are you looking to use one Squid instance as both an accelerator AND a 
caching web proxy?



client)---
squidorigin server 
  All web traffic should go as httpshttp or

https|
  
|
   
squid-|
  response always in https 
http or https


  


Something bad happened to your diagram.


Could anyone point me to the right configuration?
  


Start with...

http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

...and add another http_port line, and some http_access rules that allow 
your clients to surf places other than the accelerated site.
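A combined sketch along those lines, with all names and addresses invented for illustration:

```
# accelerator side (reverse proxy for one site)
http_port 80 accel defaultsite=www.example.com
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
acl our_site dstdomain www.example.com
cache_peer_access origin allow our_site

# forward-proxy side for internal clients
http_port 3128
acl localnet src 192.168.0.0/24

http_access allow our_site
http_access allow localnet
http_access deny all
```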



Thanks in advance..

-Sampath
  


Chris



Re: [squid-users] Computers on my network cannot access internet via Squid cache

2009-06-16 Thread Chris Robertson

Mark Lodge wrote:


I have installed debian to run Squid cache as a caching proxy.
I've been bashing away now for 2 days and I have managed to install 
squid (I first tried manually, but that did not work, so I used the 
Synaptic software packager to install it from the Administration menu).
That went well; thereafter I installed Webmin to work with squid in a 
GUI.


I have managed to start squid and added my range of IP addresses to 
the ACL list ( as  mentioned here: 
http://doxfer.com/Webmin/SquidProxyServer )

I have added the proxy restriction too.

Now, I tried to test it.
I opened the Iceweasel web browser (on the same machine) and set it to use 
the proxy server localhost and port 3128.

That works fine.


That's a good start.



But when I try to change the proxy setting to my machine's IP (where 
squid is installed):

Proxy server: 10.0.0.35 and port:3128
That does not work.


In what way does it not work?  Do you get a Squid error page?  Does your 
browser time out?  Are there any hints in the access.log?



Am I missing something? Please help.
I then tried to set another Windows PC on the network to:
Proxy server: 10.0.0.35 and port:3128
That also does not work.

I also edited the conf file to "http_access allow all", but I do not 
know if I have done it correctly,


Displaying your config file (preferably devoid of comments and blank 
lines) would help.



but maybe there is another problem?


There could very well be.  Given the information provided, I'd guess (in 
decreasing order of likelihood):

1) your server has a firewall preventing access to port 3128
2) the http_access rules are ordered incorrectly
or
3) your server is only listening on localhost
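Quick checks for each guess, in the same order (a sketch for a Debian box; command availability may vary):

```
iptables -L INPUT -n | grep 3128         # 1) firewall rule blocking the port?
grep http_access /etc/squid/squid.conf   # 2) rule order (first match wins)
netstat -tln | grep 3128                 # 3) bound to 0.0.0.0 or only 127.0.0.1?
```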


I would really appreciate your comments and help
Thank you in advance


Chris


Re: [squid-users] Squid Slowing down flash based speed tests

2009-06-16 Thread Chris Robertson

Jamie Orzechowski wrote:

For some reason when squid is running, a speed test (www.speedtest.net)
will run fine through the download, but when it tries the upload test
there will be a 20 second pause before it starts.  With squid
disabled everything runs as normal.

No delay pools in my configs.  It affects other flash-based speed tests as well.

any ideas?
  


The test runs several downloads and then immediately runs a number of 
POSTs for the upload evaluation...


--
http://speedtest.server.net/speedtest/random2000x2000.jpg?x=1245184571902-2

GET /speedtest/random2000x2000.jpg?x=1245184571902-2 HTTP/1.1
Host: speedtest.server.net
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11) 
Gecko/2009060215 Firefox/3.0.11

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive

HTTP/1.x 200 OK
Date: Tue, 16 Jun 2009 20:36:12 GMT
Server: Apache/1.3.34 (Unix) PHP/5.2.5 mod_perl/1.29
Cache-Control: max-age=-1
Expires: Tue, 16 Jun 2009 20:36:11 GMT
Last-Modified: Fri, 18 Jul 2008 21:09:37 GMT
Etag: 1a749e2-78a99c-48810691
Accept-Ranges: bytes
Content-Length: 7907740
Content-Type: image/jpeg
X-Cache: MISS from proxypool-2.mydomain.net
Via: 1.1 proxypool-2.mydomain.net:8080 (squid/2.7.STABLE6)
Connection: keep-alive
Proxy-Connection: keep-alive
--
http://speedtest.server.net/speedtest/upload.php?x=0.762562383431941

POST /speedtest/upload.php?x=0.762562383431941 HTTP/1.1
Host: speedtest.server.net
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11) 
Gecko/2009060215 Firefox/3.0.11

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer: http://cdn.speedtest.net/flash/speedtest.swf?v=2.1.1.1
Content-type: application/x-www-form-urlencoded
Content-length: 220199

content3=RRFJOD...[SNIP]
HTTP/1.x 200 OK
Date: Tue, 16 Jun 2009 20:36:17 GMT
Server: Apache/1.3.34 (Unix) PHP/5.2.5 mod_perl/1.29
X-Powered-By: PHP/5.2.5
Content-Type: text/html
X-Cache: MISS from proxypool-2.mydomain.net
Via: 1.1 proxypool-2.mydomain.net:8080 (squid/2.7.STABLE6)
Connection: close
--

I have not adjusted the time stamps of the headers above.  As you can 
see, there is a 5 second difference between when the speedtest server 
responds to the last download test request, and when it responds to the 
first upload test.  My proxy is set explicitly in my browser.


Could it be related to interception?

Chris



Re: [squid-users] AD groups / wbinfo_group.pl problem

2009-06-16 Thread Chris Robertson

Kevin Blackwell wrote:

Jakob,

Recently I've been having the same problem. Did you find a fix?

Kevin


Perhaps related to 
http://www.mail-archive.com/debian-bugs-d...@lists.debian.org/msg349766.html?


Chris


Re: [squid-users] extracting icp_query_timeout info?

2009-06-16 Thread Chris Robertson

Ross J. Reedstrom wrote:

On Mon, Jun 15, 2009 at 12:16:27PM -0800, Chris Robertson wrote:
  
Using "squidclient mgr:server_list" there is an AVG RTT value.  In 
"mgr:digest_stats" there is icp.query_median_svc_time.



Which are both useful values. However, they don't tell me what value the
server is using for the dynamic timeout. AFAICT, it's being kept on a
per-server basis. The mystery is that I've got a load balancing setup,
  


How did you set load balancing up?


and squid seems to be favoring servers whose server_list entries claim an AVG RTT
in the 200-500 ms range, when other, less favored servers are showing
5-20 ms.  I'm walking the code now (which is much easier once I realized my
editor's tabstop was set to the Python-default 4 spaces, not the C-friendly
8. Oops.)  Has anyone got pointers to any sort of higher-level design for any
of this? I've spent significant time poking around the wiki and FAQ, and
haven't surfaced much.

Ross
  


Chris


[squid-users] 2.7.Stable6 httpReadReply: Excess data

2009-06-16 Thread Quin Guin

Hi,

 I am in need of some assistance in looking into a high number of 
"httpReadReply: Excess data" entries -- around 3000 cache.log entries per day per 
SQUID server. The "httpReadReply: Excess data from GET http://xx" message is 
happening for many sites; from my reading, in most cases it is an issue with 
the content site/server. I am a bit concerned: is this a sign of a memory or 
disk issue?  From the cache manager, things look to be running well. I also 
see some other errors, which I have included below with more information on 
my setup. The "urlParse: Illegal character in hostname 
'www.google-%20analytics.com'" message is just annoying, and if anyone has a 
way to fix it besides blocking it I would appreciate any ideas.

 I am starting to see latency.  I have a 3-node cluster of SQUID servers 
set up as standard reverse proxies.

Cache.log entries:

2009/06/16 21:21:43| httpReadReply: Excess data from GET 
http://www.myyearbook.com/apps/home;
2009/06/16 21:22:03| clientTryParseRequest: FD 14 (10.22.0.64:40881) Invalid 
Request
2009/06/16 21:22:04| clientTryParseRequest: FD 49 (10.22.0.63:40894) Invalid 
Request
2009/06/16 21:22:06| clientTryParseRequest: FD 36 (10.22.0.63:40938) Invalid 
Request
2009/06/16 21:22:21| clientTryParseRequest: FD 290 (10.22.0.65:41114) Invalid 
Request
2009/06/16 21:22:21| clientTryParseRequest: FD 415 (10.22.0.65:41124) Invalid 
Request
2009/06/16 21:22:22| clientTryParseRequest: FD 361 (10.22.0.63:41168) Invalid 
Request
2009/06/16 21:22:35| clientTryParseRequest: FD 109 (10.22.0.64:41418) Invalid 
Request
2009/06/16 21:22:35| clientTryParseRequest: FD 129 (10.22.0.63:41431) Invalid 
Request
2009/06/16 21:22:36| clientTryParseRequest: FD 477 (10.22.0.65:41458) Invalid 
Request
2009/06/16 21:22:50| clientTryParseRequest: FD 356 (10.22.0.63:41707) Invalid 
Request
2009/06/16 21:22:51| clientTryParseRequest: FD 180 (10.22.0.64:41719) Invalid 
Request
2009/06/16 21:22:51| clientTryParseRequest: FD 197 (10.22.0.63:41744) Invalid 
Request
2009/06/16 21:23:01| clientTryParseRequest: FD 49 (10.22.0.64:41875) Invalid 
Request
2009/06/16 21:23:01| clientTryParseRequest: FD 104 (10.22.0.63:41887) Invalid 
Request
2009/06/16 21:23:02| clientTryParseRequest: FD 399 (10.22.0.64:41921) Invalid 
Request
2009/06/16 21:23:03| httpReadReply: Excess data from GET 
http://www.myyearbook.com/apps/home;
2009/06/16 21:23:04| httpReadReply: Excess data from GET 
http://www.myyearbook.com/apps/home;
2009/06/16 21:23:21| clientTryParseRequest: FD 117 (10.22.0.63:42346) Invalid 
Request
2009/06/16 21:23:21| clientTryParseRequest: FD 457 (10.22.0.65:42394) Invalid 
Request
2009/06/16 21:23:22| clientTryParseRequest: FD 328 (10.22.0.63:42458) Invalid 
Request
2009/06/16 21:23:23| urlParse: Illegal character in hostname 
'www.google-%20analytics.com'
2009/06/16 21:23:25| httpReadReply: Excess data from GET 
http://sugg.search.yahoo.net/sg/?output=fxjsonpnresults=10command=horny%20granies;
2009/06/16 21:23:45| clientTryParseRequest: FD 544 (10.22.0.65:42839) Invalid 
Request
2009/06/16 21:23:46| clientTryParseRequest: FD 228 (10.22.0.64:42852) Invalid 
Request
2009/06/16 21:23:47| clientTryParseRequest: FD 54 (10.22.0.64:42874) Invalid 
Request
2009/06/16 21:23:49| urlParse: Illegal character in hostname 
'www.google-%20analytics.com'
2009/06/16 21:24:03| clientTryParseRequest: FD 35 (10.22.0.63:43094) Invalid 
Request

Squid Cache: Version 2.7.STABLE6-20090511
configure options:  '--prefix=/usr/local/squid-2.7.STABLE6-20090511' 
'--enable-epoll' '--with-pthreads' '--enable-snmp' 
'--enable-storeio=ufs,aufs,coss' '-with-large-files' 
'--enable-large-cache-files' '--enable-follow-x-forwarded-for' 
'--with-maxfd=16384' '--disable-dependency-tracking' '--disable-ident-lookups' 
'--enable-removal-policies=heap,lru' '--disable-wccp' 'CFLAGS=-fPIE -Os -g 
-pipe -fsigned-char -O2 -g -pipe -m64' 'LDFLAGS=-pie'


Connection information for squid:
Number of clients accessing cache: 9
Number of HTTP requests received: 431867579
Number of ICP messages received: 0
Number of ICP messages sent: 0
Number of queued ICP replies: 0
Request failure ratio: 0.00
Average HTTP requests per minute since start: 14133.0
Average ICP messages per minute since start: 0.0
Select loop called: 1303826448 times, 1.406 ms avg
Cache information for squid:
Request Hit Ratios:  5min: 59.2%, 60min: 60.8%
Byte Hit Ratios:  5min: 65.7%, 60min: 65.5%
Request Memory Hit Ratios:  5min: 27.0%, 60min: 26.9%
Request Disk Hit Ratios:  5min: 61.1%, 60min: 61.4%
Storage Swap size: 207997068 KB
Storage Mem size: 262592 KB
Mean Object Size: 19.94 KB
Requests given to unlinkd: 0
Median Service Times (seconds)  5 min  60 min:
HTTP Requests (All):   0.01745  0.01469
Cache Misses:  0.10281  0.10857
Cache Hits:  0.0  0.0
Near Hits:  0.07825  0.08729
Not-Modified

[squid-users] Problems with H/W SSL acceleration

2009-06-16 Thread Steven Paster
Hi,

We are trying to use a Cavium H/W SSL acceleration card to accelerate SSL 
encryption.  The Cavium driver builds and installs without complaint.  Cavium 
supplies an SDK for building libcrypto.a and libssl.a.  These too built without 
issue.  

We compiled Squid 3.1.0.4 statically using the Cavium supplied libraries and 
the configuration options:  --with-openssl=<cavium-base-directory> and  
--enable-ssl. (We used ldd to confirm that Squid built statically with the 
correct libraries.) In our squid.conf file we added "ssl_engine cavium" as per 
information provided by Cavium; but we get the message: "FATAL: Unable to find 
SSL engine 'cavium'".  Cavium has tested with Apache, but never with Squid. 

Questions:
1) Does Squid require a patch for SSL crypto h/w acceleration?
2) Are there any Squid settings I need to know about?
3) Has anyone been successful with another h/w card? We are not wedded to 
Cavium.


Forgive me if this territory has been covered in the past; I'm new to Squid.  
Thank you in advance for any help,

Steven Paster
FaceTime Communications




[squid-users] Squid for Windows users **Best Practice**

2009-06-16 Thread Beavis
All,

  I just want to get some views from folks that use squid in a Windows
environment. I'm looking at the following scenario:

a.) running squid that can be used by Windows users (auth via LDAP, NTLM, AD)
b.) site access on a per-group basis (squid auth or through squidGuard)
c.) squid redundancy



any help will be awesomely appreciated.


-b

-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


[squid-users] Squid - WCCP and ASA

2009-06-16 Thread Parvinder Bhasin
I have a setup of squid which was compiled with the --enable-delay-pools  
option.  It works really well, but without WCCP.
I enabled WCCP support in the squid config and also enabled WCCP  
support on my ASA, and set up the GRE tunnel etc.
For testing purposes I am only having ONE client IP go through  
WCCP.  The problem is that I can see the client's requests on the gre1  
interface of the proxy server, but that client is not getting any  
reply back.  Do I need anything in iptables to allow this, etc.?  Do I  
need to compile with some transparent support?  If so, which one would  
I use for ASA?


 Any help is highly appreciated.


Here is part of my config:

http_port 3128 transparent

wccp2_router 192.168.100.250
wccp_version 4
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0

Additionally here is what I did to setup tunnel:

modprobe ip_gre
iptunnel add gre1 mode gre remote $ASA_IP local $LOCAL_IP dev eth0
ifconfig gre1 inet 127.0.0.2 netmask 255.255.255.0 up

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/gre1/rp_filter

iptables -t nat -A PREROUTING -i gre1 -p tcp -m tcp --dport 80 -j  
REDIRECT --to-port 3128

I do see the RX counter going up but not the TX on gre1:

gre1  Link encap:UNSPEC  HWaddr C0-A8-64-CF-B7-BF-C8- 
C2-00-00-00-00-00-00-00-00

  inet addr:127.0.0.2  P-t-P:127.0.0.2  Mask:255.255.255.0
  UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
  RX packets:1559 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:83432 (81.4 KiB)  TX bytes:0 (0.0 b)

Here is tcpdump output:

[r...@squidnclamav etc]# tcpdump -i gre1 host 192.168.100.175 and port  
not ssh
tcpdump: WARNING: arptype 778 not supported by libpcap - falling back  
to cooked socket
tcpdump: verbose output suppressed, use -v or -vv for full protocol  
decode
listening on gre1, link-type LINUX_SLL (Linux cooked), capture size 96  
bytes
14:13:37.615862 IP 192.168.100.175.52257 > cf-in-f99.google.com.http:  
S 3689381709:3689381709(0) win 65535 <mss 1460,sackOK,eol>
14:13:45.524999 IP 192.168.100.175.52256 >  
bs2.ads.vip.sp1.yahoo.com.http: S 2516726129:2516726129(0) win 65535  
<mss 1460,sackOK,eol>
14:13:45.525001 IP 192.168.100.175.52255 >  
bs2.ads.vip.sp1.yahoo.com.http: S 878462413:878462413(0) win 65535  
<mss 1460,sackOK,eol>
14:13:45.525002 IP 192.168.100.175.52254 >  
bs2.ads.vip.sp1.yahoo.com.http: S 1528706489:1528706489(0) win 65535  
<mss 1460,sackOK,eol>
14:13:45.525003 IP 192.168.100.175.52253 >  
bs2.ads.vip.sp1.yahoo.com.http: S 1578413587:1578413587(0) win 65535  
<mss 1460,sackOK,eol>
14:13:47.427509 IP 192.168.100.175.52252 >  
mc2b.mail.vip.re1.yahoo.com.http: S 3796070861:3796070861(0) win 65535  
<mss 1460,sackOK,eol>
14:13:47.886251 IP 192.168.100.175.52259 >  
f1.www.vip.sp1.yahoo.com.http: S 547104:547104(0) win 65535  
<mss 1460,nop,wscale 3,nop,nop,timestamp 322113293 0,sackOK,eol>
14:13:48.127001 IP 192.168.100.175.52260 > hp-core.ebay.com.http: S  
357937093:357937093(0) win 65535 <mss 1460,nop,wscale  
3,nop,nop,timestamp 322113295 0,sackOK,eol>
14:13:48.829652 IP 192.168.100.175.52259 >  
f1.www.vip.sp1.yahoo.com.http: S 547104:547104(0) win 65535  
<mss 1460,nop,wscale 3,nop,nop,timestamp 322113302 0,sackOK,eol>
14:13:49.029600 IP 192.168.100.175.52260 > hp-core.ebay.com.http: S  
357937093:357937093(0) win 65535 <mss 1460,nop,wscale  
3,nop,nop,timestamp 322113304 0,sackOK,eol>
14:13:49.820922 IP 192.168.100.175.52259 >  
f1.www.vip.sp1.yahoo.com.http: S 547104:547104(0) win 65535  
<mss 1460,nop,wscale 3,nop,nop,timestamp 322113312 0,sackOK,eol>
14:13:50.030914 IP 192.168.100.175.52260 > hp-core.ebay.com.http: S  
357937093:357937093(0) win 65535 <mss 1460,nop,wscale  
3,nop,nop,timestamp 322113314 0,sackOK,eol>


FW: [squid-users] Tproxy Help // Transparent works fine

2009-06-16 Thread Alexandre DeAraujo
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Monday, June 15, 2009 9:21 PM
To: Alexandre DeAraujo
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Tproxy Help // Transparent works fine

Should just be an upgrade Squid to 3.1 release and follow the instructions at:
http://wiki.squid-cache.org/Features/Tproxy4
Amos

I downloaded and installed squid-3.1.0.8.tar.gz with the configure build option 
'--enable-linux-netfilter'. 
Made sure squid.conf was configured with 
http_port 3128
http_port 3129 tproxy

The following modules are enabled on the kernel config file:
NF_CONNTRACK
NETFILTER_TPROXY
NETFILTER_XT_MATCH_SOCKET
NETFILTER_XT_TARGET_TPROXY

After typing the following lines:
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 
0x1/0x1 --on-port 3129

my iptables-save output:
# Generated by iptables-save v1.4.3.2 on Tue Jun 16 16:16:27 2009
*nat
:PREROUTING ACCEPT [33:2501]
:POSTROUTING ACCEPT [1:76]
:OUTPUT ACCEPT [1:76]
-A PREROUTING -i wccp2 -p tcp -j REDIRECT --to-ports 3128 
COMMIT
# Completed on Tue Jun 16 16:16:27 2009
# Generated by iptables-save v1.4.3.2 on Tue Jun 16 16:16:27 2009
*mangle
:PREROUTING ACCEPT [35:2653]
:INPUT ACCEPT [158:8713]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [123:11772]
:POSTROUTING ACCEPT [123:11772]
:DIVERT - [0:0]
-A PREROUTING -p tcp -m socket -j DIVERT 
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3129 --on-ip 0.0.0.0 
--tproxy-mark 0x1/0x1 
-A DIVERT -j MARK --set-xmark 0x1/0x 
-A DIVERT -j ACCEPT 
COMMIT
# Completed on Tue Jun 16 16:16:27 2009

Then I entered the following lines:
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
echo 1 > /proc/sys/net/ipv4/ip_forward

The client could not browse after that. I see the connections coming in with 
tcpdump, but all connections just time out.

P.S. After compiling squid-3.1.0.8, I did a search for 'tproxy' on the console 
output and found this line:
checking for linux/netfilter_ipv4/ip_tproxy.h... no
I don't know if this has anything to do with it.

Thanks,

Alex



Re: FW: [squid-users] Tproxy Help // Transparent works fine

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 17:06:19 -0700, Alexandre DeAraujo al...@cal.net
wrote:
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, June 15, 2009 9:21 PM
 To: Alexandre DeAraujo
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Tproxy Help // Transparent works fine
 
Should just be an upgrade Squid to 3.1 release and follow the
instructions
at:
http://wiki.squid-cache.org/Features/Tproxy4
Amos
 
 I downloaded and installed squid-3.1.0.8.tar.gz with the configure build
 option '--enable-linux-netfilter'. 
 Made sure squid.conf was configured with 
 http_port 3128
 http_port 3129 tproxy
 
 The following modules are enabled on the kernel config file:
 NF_CONNTRACK
 NETFILTER_TPROXY
 NETFILTER_XT_MATCH_SOCKET
 NETFILTER_XT_TARGET_TPROXY
 
 After typing the following lines:
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark
 0x1/0x1 --on-port 3129
 
 my iptables-save output:
 # Generated by iptables-save v1.4.3.2 on Tue Jun 16 16:16:27 2009
 *nat
 :PREROUTING ACCEPT [33:2501]
 :POSTROUTING ACCEPT [1:76]
 :OUTPUT ACCEPT [1:76]
 -A PREROUTING -i wccp2 -p tcp -j REDIRECT --to-ports 3128 
 COMMIT
 # Completed on Tue Jun 16 16:16:27 2009
 # Generated by iptables-save v1.4.3.2 on Tue Jun 16 16:16:27 2009
 *mangle
 :PREROUTING ACCEPT [35:2653]
 :INPUT ACCEPT [158:8713]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [123:11772]
 :POSTROUTING ACCEPT [123:11772]
 :DIVERT - [0:0]
 -A PREROUTING -p tcp -m socket -j DIVERT 
 -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3129 --on-ip
 0.0.0.0 --tproxy-mark 0x1/0x1 
 -A DIVERT -j MARK --set-xmark 0x1/0x 
 -A DIVERT -j ACCEPT 
 COMMIT
 # Completed on Tue Jun 16 16:16:27 2009
 
 Then I entered the following lines:
 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100
 echo 1 > /proc/sys/net/ipv4/ip_forward
 
 Client could not browse after that. I see the connections coming in with
 tcpdump, but all connections just timeout
 
 ps. after compiling squid-3.1.0.8, I did a search for 'tproxy' on the
 console screen and found this line:
 checking for linux/netfilter_ipv4/ip_tproxy.h... no
 I don’t know if this would have anything to do with it..

No. that is just squid build scripts checking that you need tproxy4 instead
of tproxy2.

Does access.log say anything is arriving at Squid?
Are you able to track the packets anywhere else? 

Amos



Re: [squid-users] 2.7.Stable6 httpReadReply: Excess data

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 15:25:05 -0700 (PDT), Quin Guin quing...@yahoo.com
wrote:
 Hi,
 
  I am in the need of some assistance in looking into a high number of 
  httpReadReply: Excess data  entries around 3000 cache.log entries per
  day per SQUID server. The  httpReadReply: Excess data from GET
  http://xx; this is happening on many sites from my reading it is in
  most cases it is an issue with the content site/server.

The server supplying Squid did one of two things:

1) sent a request/reply with Content-Length: N, then pushed more than N
bytes of data into Squid.

2) sent a request/reply with unknown Content-Length: followed some time
later by the HTTP byte sequence for end-of-object. Which was in turn
followed by even more bytes of data.

Both of these are old signs of malware overflow attacks. Squid responds
by unconditionally closing the link to that server.
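As a toy illustration (plain shell, not Squid internals), case (1) amounts to a reply body longer than its declared length:

```shell
# A reply body longer than its declared Content-Length is "excess data".
DECLARED=11
REPLY_BODY='hello world plus trailing garbage'
ACTUAL=$(printf '%s' "$REPLY_BODY" | wc -c)
if [ "$ACTUAL" -gt "$DECLARED" ]; then
    echo "excess data: $((ACTUAL - DECLARED)) bytes beyond Content-Length"
fi
```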

 I am a bit
  concerned: is this a sign of a memory or disk issue? Because from the
  cache manager things look to be running well. I also see some other
  errors, and I have included them below with more information on my setup.
  The urlParse: Illegal character in hostname
  'www.google-%20analytics.com' is just annoying, and if anyone has a way
  to fix it besides blocking it I would appreciate any ideas on that.

This is another issue, possibly the cause of the above. The HTTP headers
being received by squid are mangled beyond repair.  The client sending the
request is severely broken.

If possible please get a binary dump of the stream going into squid. NP:
tcpdump requires -s65535 option to grab it all. From that you should be
able to see exactly how the headers are broken.
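A capture invocation of the kind described might look like this (interface and port filter are assumptions):

```
tcpdump -i eth0 -s 65535 -w /tmp/squid-client.pcap tcp port 3128
# inspect the headers later with: tcpdump -r /tmp/squid-client.pcap -A
```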

 
  I am starting to see latency and I have a 3 node cluster of SQUID
servers
  setup as standard reverse proxies.

Latency is to be expected if many connections are failing with
these errors and being aborted incomplete.

 
 Cache.log entries:
 
 2009/06/16 21:21:43| httpReadReply: Excess data from GET
 http://www.myyearbook.com/apps/home;
 2009/06/16 21:22:03| clientTryParseRequest: FD 14 (10.22.0.64:40881)
 Invalid Request
 2009/06/16 21:22:04| clientTryParseRequest: FD 49 (10.22.0.63:40894)
 Invalid Request
 2009/06/16 21:22:06| clientTryParseRequest: FD 36 (10.22.0.63:40938)
 Invalid Request
 2009/06/16 21:22:21| clientTryParseRequest: FD 290 (10.22.0.65:41114)
 Invalid Request
 2009/06/16 21:22:21| clientTryParseRequest: FD 415 (10.22.0.65:41124)
 Invalid Request
 2009/06/16 21:22:22| clientTryParseRequest: FD 361 (10.22.0.63:41168)
 Invalid Request
 2009/06/16 21:22:35| clientTryParseRequest: FD 109 (10.22.0.64:41418)
 Invalid Request
 2009/06/16 21:22:35| clientTryParseRequest: FD 129 (10.22.0.63:41431)
 Invalid Request
 2009/06/16 21:22:36| clientTryParseRequest: FD 477 (10.22.0.65:41458)
 Invalid Request
 2009/06/16 21:22:50| clientTryParseRequest: FD 356 (10.22.0.63:41707)
 Invalid Request
 2009/06/16 21:22:51| clientTryParseRequest: FD 180 (10.22.0.64:41719)
 Invalid Request
 2009/06/16 21:22:51| clientTryParseRequest: FD 197 (10.22.0.63:41744)
 Invalid Request
 2009/06/16 21:23:01| clientTryParseRequest: FD 49 (10.22.0.64:41875)
 Invalid Request
 2009/06/16 21:23:01| clientTryParseRequest: FD 104 (10.22.0.63:41887)
 Invalid Request
 2009/06/16 21:23:02| clientTryParseRequest: FD 399 (10.22.0.64:41921)
 Invalid Request
 2009/06/16 21:23:03| httpReadReply: Excess data from GET
 http://www.myyearbook.com/apps/home;
 2009/06/16 21:23:04| httpReadReply: Excess data from GET
 http://www.myyearbook.com/apps/home;
 2009/06/16 21:23:21| clientTryParseRequest: FD 117 (10.22.0.63:42346)
 Invalid Request
 2009/06/16 21:23:21| clientTryParseRequest: FD 457 (10.22.0.65:42394)
 Invalid Request
 2009/06/16 21:23:22| clientTryParseRequest: FD 328 (10.22.0.63:42458)
 Invalid Request
 2009/06/16 21:23:23| urlParse: Illegal character in hostname
 'www.google-%20analytics.com'

Hint:  %20 (aka whitespace) is not part of the google domain name.

 2009/06/16 21:23:25| httpReadReply: Excess data from GET

http://sugg.search.yahoo.net/sg/?output=fxjsonpnresults=10command=horny%20granies;
 2009/06/16 21:23:45| clientTryParseRequest: FD 544 (10.22.0.65:42839)
 Invalid Request
 2009/06/16 21:23:46| clientTryParseRequest: FD 228 (10.22.0.64:42852)
 Invalid Request
 2009/06/16 21:23:47| clientTryParseRequest: FD 54 (10.22.0.64:42874)
 Invalid Request
 2009/06/16 21:23:49| urlParse: Illegal character in hostname
 'www.google-%20analytics.com'
 2009/06/16 21:24:03| clientTryParseRequest: FD 35 (10.22.0.63:43094)
 Invalid Request
 
 Squid Cache: Version 2.7.STABLE6-20090511
 configure options:  '--prefix=/usr/local/squid-2.7.STABLE6-20090511'
 '--enable-epoll' '--with-pthreads' '--enable-snmp'
 '--enable-storeio=ufs,aufs,coss' '-with-large-files'
 '--enable-large-cache-files' '--enable-follow-x-forwarded-for'
 '--with-maxfd=16384' '--disable-dependency-tracking'
 '--disable-ident-lookups' '--enable-removal-policies=heap,lru'
 '--disable-wccp' 'CFLAGS=-fPIE -Os -g 

Re: [squid-users] Squid - WCCP and ASA

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 16:49:56 -0700, Parvinder Bhasin
parvinder.bha...@gmail.com wrote:
 I have setup of squid ..which was compiled with --enable-delay-pools  
 option.  Works really well but without WCCP.
 I enabled WCCP support in the squid config and also enabled wccp  
 support on my ASA.  Setup GRE tunnel etc.
 For my testing purpose I am only having ONE client IP go through  
 WCCP.  The problem is I am able to see that client's requests on the gre1  
 interface of the proxy server, but that client is not  
 getting any reply back.  Do I need anything in iptables to  
 allow this, etc.?  Do I need to compile with some transparent support? If  
 so, which one would I use for ASA?
 
   Any help is highly appreciated.
 
 
 Here is part of my config:
 
 http_port 3128 transparent
 
 wccp2_router 192.168.100.250
 wccp_version 4
 wccp2_forwarding_method 1
 wccp2_return_method 1
 wccp2_service standard 0
 
 Additionally here is what I did to setup tunnel:
 
 modprobe ip_gre
 iptunnel add gre1 mode gre remote $ASA_IP local $LOCAL_IP dev eth0
 ifconfig gre1 inet 127.0.0.2 netmask 255.255.255.0 up
 

IIRC localhost IPs in 127.0.0.0/8 are restricted by the kernel to traffic
internal to the box.
If WCCP is going over a tunnel it will likely need an externally visible IP
for the router to send to.
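A rebuilt tunnel along these lines might work; the addresses here are placeholders for the squid box's real routable eth0 address:

```
ASA_IP=192.168.100.250
LOCAL_IP=192.168.100.10     # placeholder: use the box's real eth0 address
iptunnel del gre1 2>/dev/null
modprobe ip_gre
iptunnel add gre1 mode gre remote $ASA_IP local $LOCAL_IP dev eth0
ip addr add $LOCAL_IP/32 dev gre1
ip link set gre1 up
```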

 echo 1 > /proc/sys/net/ipv4/ip_forward
 echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
 echo 0 > /proc/sys/net/ipv4/conf/default/rp_filter
 echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
 echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
 echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
 echo 0 > /proc/sys/net/ipv4/conf/gre1/rp_filter
 
 iptables -t nat -A PREROUTING -i gre1 -p tcp -m tcp --dport 80 -j  
 REDIRECT --to-port
 3128
 
 I do see the RX counter going up but not the TX on gre1:
 
 gre1  Link encap:UNSPEC  HWaddr C0-A8-64-CF-B7-BF-C8- 
 C2-00-00-00-00-00-00-00-00
inet addr:127.0.0.2  P-t-P:127.0.0.2  Mask:255.255.255.0
UP POINTOPOINT RUNNING NOARP  MTU:1476  Metric:1
RX packets:1559 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:83432 (81.4 KiB)  TX bytes:0 (0.0 b)
 
 Here is tcpdump output:
 
 [r...@squidnclamav etc]# tcpdump -i gre1 host 192.168.100.175 and port  
 not ssh
 tcpdump: WARNING: arptype 778 not supported by libpcap - falling back  
 to cooked socket
 tcpdump: verbose output suppressed, use -v or -vv for full protocol  
 decode
 listening on gre1, link-type LINUX_SLL (Linux cooked), capture size 96  
 bytes
 14:13:37.615862 IP 192.168.100.175.52257 > cf-in-f99.google.com.http:  
 S 3689381709:3689381709(0) win 65535 <mss 1460,sackOK,eol>
 14:13:45.524999 IP 192.168.100.175.52256 >  
 bs2.ads.vip.sp1.yahoo.com.http: S 2516726129:2516726129(0) win 65535  
 <mss 1460,sackOK,eol>
 14:13:45.525001 IP 192.168.100.175.52255 >  
 bs2.ads.vip.sp1.yahoo.com.http: S 878462413:878462413(0) win 65535  
 <mss 1460,sackOK,eol>
 14:13:45.525002 IP 192.168.100.175.52254 >  
 bs2.ads.vip.sp1.yahoo.com.http: S 1528706489:1528706489(0) win 65535  
 <mss 1460,sackOK,eol>
 14:13:45.525003 IP 192.168.100.175.52253 >  
 bs2.ads.vip.sp1.yahoo.com.http: S 1578413587:1578413587(0) win 65535  
 <mss 1460,sackOK,eol>
 14:13:47.427509 IP 192.168.100.175.52252 >  
 mc2b.mail.vip.re1.yahoo.com.http: S 3796070861:3796070861(0) win 65535  
 <mss 1460,sackOK,eol>
 14:13:47.886251 IP 192.168.100.175.52259 >  
 f1.www.vip.sp1.yahoo.com.http: S 547104:547104(0) win 65535  
 <mss 1460,nop,wscale 3,nop,nop,timestamp 322113293 0,sackOK,eol>
 14:13:48.127001 IP 192.168.100.175.52260 > hp-core.ebay.com.http: S  
 357937093:357937093(0) win 65535 <mss 1460,nop,wscale  
 3,nop,nop,timestamp 322113295 0,sackOK,eol>
 14:13:48.829652 IP 192.168.100.175.52259 >  
 f1.www.vip.sp1.yahoo.com.http: S 547104:547104(0) win 65535  
 <mss 1460,nop,wscale 3,nop,nop,timestamp 322113302 0,sackOK,eol>
 14:13:49.029600 IP 192.168.100.175.52260 > hp-core.ebay.com.http: S  
 357937093:357937093(0) win 65535 <mss 1460,nop,wscale  
 3,nop,nop,timestamp 322113304 0,sackOK,eol>
 14:13:49.820922 IP 192.168.100.175.52259 >  
 f1.www.vip.sp1.yahoo.com.http: S 547104:547104(0) win 65535  
 <mss 1460,nop,wscale 3,nop,nop,timestamp 322113312 0,sackOK,eol>
 14:13:50.030914 IP 192.168.100.175.52260 > hp-core.ebay.com.http: S  
 357937093:357937093(0) win 65535 <mss 1460,nop,wscale  
 3,nop,nop,timestamp 322113314 0,sackOK,eol>


Re: [squid-users] Squid for Windows users **Best Practice**

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 17:29:33 -0600, Beavis pfu...@gmail.com wrote:
 All,
 
   I just want to get some views from folks that use squid on a windows
 environment. I'm looking at the following scenario.
 
 a.) running squid that can be use by windows users (auth via ldap, ntlm.
 AD)
 b.) site access is on a per group basis (squid auth or through
squidguard)
 c.) Squid Redundancy.
 

Being a squid linux admin with many users on windows I can say that none of
the above require Squid to run on a windows box. Samba + the provided squid
helpers handle windows authentications just fine from most non-windows OS.
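For instance, a common squid.conf arrangement on a non-Windows box uses Samba's ntlm_auth helper (paths and details vary per Samba install; this is a sketch, not a drop-in config):

```
# squid.conf sketch; the squid box must already be joined to the
# domain ("net ads join") and winbindd running.
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic realm proxy
auth_param basic children 5
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```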

Amos



Re: [squid-users] Problems with H/W SSL acceleration

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 15:30:54 -0700, Steven Paster spas...@facetime.com
wrote:
 Hi,
 
 We are trying to use a Cavium H/W SSL acceleration card to accelerate SSL
 encryption.  The Cavium driver builds and installs without complaint. 
 Cavium supplies an SDK for building libcrypto.a and libssl.a.  These too
 built without issue.  
 
 We compiled Squid 3.1.0.4 statically using the Cavium supplied libraries
 and the configuration options:  --with-openssl=cavium-base-directory
 and  --enable-ssl. (We used ldd to confirm that Squid built statically
 with the correct libraries.) In our squid.conf file we added ssl_engine
 cavium as per information provided by Cavium; but, we get the message:
 FATAL Unable to find SSL engine 'cavium'.  Cavium has tested with Apache
 but never with Squid.

Please try with the current 3.1 release or a snapshot to be sure this is not
already fixed.
A few thousand lines of code change every Squid beta release;
3.1.0.4 is now quite old.

 
 Questions:
 1) Does Squid require a patch for SSL crypto h/w acceleration?
 2) Are there any Squid settings I need to know about?
 3) Has anyone been successful with another h/w card? We are not wedded to
 Cavium.
 
 
 Forgive me if this territory has been covered in the past; I'm new to
 Squid.  Thank you in advance for any help,
 
 Steven Paster
 FaceTime Communications

Amos
Squid-3 Release Maintainer



Re: [squid-users] AD groups / wbinfo_group.pl problem

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 13:02:53 -0800, Chris Robertson crobert...@gci.net
wrote:
 Kevin Blackwell wrote:
 Jakob,

 recently I've been having the same problem. You find a fix?

 Kevin
 
 Perhaps related to 

http://www.mail-archive.com/debian-bugs-d...@lists.debian.org/msg349766.html?
 
 Chris

That was due to the concurrency=N parameter being used on a non-concurrent
helper.

If it is the same issue, a simple config change from concurrency= to
children= will fix it.

FWIW: most of the helpers bundled with squid so far are non-concurrent.
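For wbinfo_group.pl specifically, the non-concurrent form would look something like this (the helper path and group name are examples only):

```
external_acl_type nt_group ttl=300 children=10 %LOGIN /usr/lib/squid/wbinfo_group.pl
acl InetAllowed external nt_group InternetUsers
http_access allow InetAllowed
```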

Amos



Re: [squid-users] Help with squid cache dir

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 09:23:36 -0300, Juan Manuel Perrote
jperr...@educacion.rionegro.gov.ar wrote:
 Hello:
 
 I have a problem with a squid cache: squid does not generate files in the
 cache dir.
 
 * I have a squid as a reverse proxy.
 * My version of squid is 3.0.RC1.
 * On the cache dir structure I have a lot of directories, but all are
 empty.
 * The cache dir have 777 permissions on all structure
 * The store log have a content like these
   1245151352.631 RELEASE -1  289A075B563E3C64D402EB58150B4B28  304
 1245151388-1-1 text/html 0/0 GET
 http://extranet.p/i/1px_trans.gif
   1245151442.021 RELEASE -1  E26713FA0492A44B9DB8DD9E9A472615  200
 1245151475-1-1 text/html 34223/34223 GET
 http://extranet.p/prod/apex/f?
   1245151444.920 RELEASE -1  C9251F1B9EC4A88A0EC41EFDFA014A35  404
 1245151480-1-1 text/html 328/328 GET
 http://extranet.p/prod/apex/blank.gif
   1245151470.982 RELEASE -1  DE94CD1480F08E3361199B0F762C2173  200
 1245151504-1-1 text/html 34184/34184 GET
 http://extranet.p/prod/apex/f?
   1245151474.116 RELEASE -1  1EACA84E18C97E2E964185A85B4EACC0  404
 1245151510-1-1 text/html 328/328 GET
 http://extranet.p/prod/apex/blank.gif
   1245151499.351 RELEASE -1  44A1536A48A6290890D5E3331C4B9B9A  200
 1245151533-1-1 text/html 34142/34142 GET
 http://extranet.p/prod/apex/f?
   1245151501.805 RELEASE -1  76DBCDF798B911EC157E388F2280843B  404
 1245151537-1-1 text/html 328/328 GET
 http://extranet.p/prod/apex/blank.gif
   1245151520.811 RELEASE -1  64C9C1BB78A21DE890D65D654836BDFF  200
 1245151554-1-1 text/html 34221/34221 GET
 http://extranet.p/prod/apex/f?
   1245151523.960 RELEASE -1  2C115C9A2730865E874723F05BC13994  404
 1245151560-1-1 text/html 328/328 GET
 http://extranet.p/prod/apex/blank.gif
 
 * My operating system is Ubuntu 8.04 LTS
 * Squid has been working fine for 1 year
 * We have an average of 180 users per day
 
 Juan Manuel Perrote

Please upgrade:
  apt-get update
  apt-get install squid3
OR:
  rebuild from current 3.0 sources.

 Ubuntu    version       Upstream version
 karmic    development   3.0.STABLE13-1
 jaunty    current       3.0.STABLE8-3
 intrepid  supported     3.0.STABLE7-1ubuntu1
 hardy     supported     3.0.STABLE1-1ubuntu1

http://www.squid-cache.org/Advisories/
Note the top three. Current official Ubuntu releases have been patched to
close all advisories.

Amos



Re: [squid-users] NTLM authentication + Administrator user

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 15:37:52 +, Sergio - Embalatec
sergio.so...@embalatec.com.br wrote:
 Hello, this is my first post to the list. I wonder if someone has already
 run into this problem.
 
 I'm using Squid 2.7.STABLE3 with NTLM authentication. Windows users
 authenticate without problems, but when logged in as the Administrator
 user they cannot access the Internet. How can I manage to authenticate
 the Administrator user?
 
 What happens in this situation for users who are not in the domain?
 
 Is there any way around this situation? 
 
 below my configuration:
 auth_param ntlm program /etc/squid/squid/ntlm_auth brk/dcbrk
 auth_param ntlm children 5
 auth_param ntlm max_challenge_reuses 0
 auth_param ntlm max_challenge_lifetime 2 minutes
 
 
 Thanks, any idea will be very useful!

Most times I've seen this issue is when the users are logged into the local
machine admin account instead of the domain admin account.

Which means the DC Squid uses to test the authentication has no record of
them ever logging in. You can guess the result.

Most times this is not a serious issue; it just means those users have to
fill out a login popup. Is there something in your use of ACLs which
prevents that popup from appearing?

Amos



Re: [squid-users] Squid rules analyser

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 16:14:27 +0200, Alberto Cappadonia
alberto.cappado...@polito.it wrote:
 Dear squid users,
 
 we are developing a Java-based tool to analyse content filtering rules
 (acl, http_access,...) for squid.
 
 The objective is to provide administrators with a tool able to help them
 in identifying potential mistakes in the squid configuration.
 
 More in detail, the aims are:
 - identifying conflicts and anomalies in squid configuration file
 - presenting anomalies to the administrators for further decisions
 (e.g., mistakenly empty rules, acl intersection areas, hidden rules)
 - optimising rules by removing redundant or shadowed rules
 
 The conflict model is the geometric/algebraic one presented in this
paper:
 http://security.polito.it/doc/pub_r/policy2008.pdf
 
 The tool fully supports basic set operations for all the acl types in
 squid v3.0 (IP addresses, ports, proto and all the ones based on regular
 expressions, ...).
 
 
 The workflow of the tool is briefly:
 - read and parse squid.conf for content filtering rules (internal
 geometric rule representation)
 - analyse rules for potential conflicts and anomalies
 - interact with the administrators
 - export the optimised and anomaly-free squid.conf
 
 
 We finished the conflict detector and resolver engine, the parser and we
 are improving the GUI for reporting the anomalies to administrators. We
 guess we will have the beta version in a couple of week.
 
 
 We will be glad if you can give your opinion about the tool (especially
 about improvement and integrations) in order to make it as effective as
 possible. For this, if there is some developer/administrator that is
 interested in using/testing it (or at least providing us with a few real
 configuration files) it will be very useful.
 
 Regards,
 Cataldo Basile
 Alberto Cappadonia

Wonderful. This will make a perfect companion to the online config
validator I wrote for 3.0 (and must get to upgrading again soon for 3.1).

Will the tool be published for general public use anywhere? If so, I can
probably refer interested people to it.

Does it handle all the options that take an ACL list?

Amos



Re: [squid-users] Squid on DMZ

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 08:43:29 -0300, João Kuchnier
joao.kuchn...@gmail.com
wrote:
 Thanks for your help!
 
 I managed to configure rules on shorewall with squid on the DMZ:
 http://www.shorewall.net/Shorewall_Squid_Usage.html
 
 On top of the extra HTTP traffic load, does this extra flow interfere
 with Internet browsing speed?

Some small transfer time increase, but nothing serious unless it saturates
the bandwidth pipe.
Just be aware of it in your network design and monitoring (some graphs can
show a 'huge' mysterious jump in bandwidth when it's turned on).

Amos

 
 João
 
  Hi everyone!
 
  Today I'm running squid on firewall and it is very easy to manage.
  Despite of that, we are trying to decentralize services and adding new
  virtual machines on DMZ for each of the servers we need.
 
  I would like to know if you recommend to install Squid on DMZ, if it
  is use to manage and how I could manage rules on firewall (we use
  shorewall).
 
  I don't have any recommendations either way. The pros and cons balance
  out for most intents and purposes. If it's working fine for you as-is
  then there really isn't anything to fix.
 
  If you do make the move, be aware that with interception the firewall
  will need to take into account the squid box IP and make exceptions.
  Also an added flow of traffic client->router->squid->router->internet
  which does not currently occur on the internal router interface. This
  effectively doubles or triples the internal HTTP traffic load on the
  router.
 
  Amos
 
 João K.


Re: [squid-users] Are you on mobile/handset?

2009-06-16 Thread Amos Jeffries
On Tue, 16 Jun 2009 08:05:17 +0200, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:
 Hi Squids,
 
 How do you think should be the best way to detect if a user is surfing
inet
 
 throut its mobile/handset?
 
 TIA
 
 LD

Pretty much the purpose of the User-Agent: header.
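In squid.conf that maps onto the 'browser' acl type, which matches the User-Agent header against a regex. The patterns below are examples only; real handset strings vary widely:

```
acl handset browser -i iPhone|iPod|Android|BlackBerry|SymbianOS|Opera.Mini
# then apply handset-specific policy, e.g.
http_access allow handset
```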

Amos



Re: [squid-users] Squid for Windows users **Best Practice**

2009-06-16 Thread Beavis
thanks for the reply amos..

I'm sorry, it seems that I have not been clear on how I want to do this.

I'm not planning to put squid on windows; my plan is to get some best
practices from folks that have experience using squid as a proxy
for their windows network (with AD and all).

I'm looking for some suggestions or common setups for their squid, where:

a.) squid can determine the AD user's group and give them their own
list of ACLs
b.) redundancy setups
c.) recommended most common way of authenticating AD users to squid
(NTLM, LDAP, ADS)


thanks again,
-b


On Tue, Jun 16, 2009 at 6:54 PM, Amos Jeffriessqu...@treenet.co.nz wrote:
 On Tue, 16 Jun 2009 17:29:33 -0600, Beavis pfu...@gmail.com wrote:
 All,

   I just want to get some views from folks that use squid on a windows
 environment. I'm looking at the following scenario.

 a.) running squid that can be use by windows users (auth via ldap, ntlm.
 AD)
 b.) site access is on a per group basis (squid auth or through
 squidguard)
 c.) Squid Redundancy.


 Being a squid linux admin with many users on windows I can say that none of
 the above require Squid to run on a windows box. Samba + the provided squid
 helpers handle windows authentications just fine from most non-windows OS.

 Amos





-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


Fw: Re: [squid-users] NONE/411 Length Required

2009-06-16 Thread Bijayant Kumar

Bijayant Kumar


--- On Mon, 15/6/09, Bijayant Kumar bijayan...@yahoo.com wrote:

 From: Bijayant Kumar bijayan...@yahoo.com
 Subject: Re: [squid-users] NONE/411 Length Required
 To: squid users squid-users@squid-cache.org
 Date: Monday, 15 June, 2009, 6:48 PM
 
 
 --- On Mon, 15/6/09, Amos Jeffries squ...@treenet.co.nz
 wrote:
 
  From: Amos Jeffries squ...@treenet.co.nz
  Subject: Re: [squid-users] NONE/411 Length Required
  To: Bijayant Kumar bijayan...@yahoo.com
  Cc: squid users squid-users@squid-cache.org
  Date: Monday, 15 June, 2009, 6:06 PM
  Bijayant Kumar wrote:
   Hello list,
   
   I have Squid version 3.0.STABLE 10 installed on
 Gentoo
  linux box. All things are working fine, means caching
  proxying etc. There is a problem with some sites. When
 I am
  accessing one of those sites, in access.log I am
 getting
   
   NONE/411 3692 POST 
   http://.justdial.com/autosuggest_category_query_main.php?
  - NONE/- text/html
   
   And on the webpage I am getting whole error page
 of
  squid. Actually its a search related page. In the
 search
  criteria field as soon as I am typing after two words
 I am
  getting this error. The website in a question is http://justdial.com;. But 
  it works without the Squid.
   
   
   I tried to capture the http headers also which
 are as
  below
   
   http://.justdial.com/autosuggest_category_query_main.php?city=Bangaloresearch=Ka
   
   
   
   POST
 
 /autosuggest_category_query_main.php?city=Bangaloresearch=Ka
  HTTP/1.1
   
   Host: .justdial.com
   
   User-Agent: Mozilla/5.0 (X11; U; Linux i686;
 en-US;
  rv:1.8.1.16) Gecko/20080807 Firefox/2.0.0.16
   
   Accept:
 
 text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
   
   Accept-Language: en-us,en;q=0.7,hi;q=0.3
   
   Accept-Encoding: gzip,deflate
   
   Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
   
   Keep-Alive: 300
   
   Connection: keep-alive
   
   Referer: http://.justdial.com/
   
   Cookie:
 PHPSESSID=d1d12004187d4bf1f084a1252ec46cef;
 
 __utma=79653650.2087995718.1245064656.1245064656.1245064656.1;
  __utmb=79653650; __utmc=79653650;
 
 __utmz=79653650.1245064656.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none);
  CITY=Bangalore
   
   Pragma: no-cache
   
   Cache-Control: no-cache
   
   
   
   HTTP/1.x 411 Length Required
   
   Server: squid/3.0.STABLE10
   
   Mime-Version: 1.0
   
   Date: Mon, 15 Jun 2009 11:18:10 GMT
   
   Content-Type: text/html
   
   Content-Length: 3287
   
   Expires: Mon, 15 Jun 2009 11:18:10 GMT
   
   X-Squid-Error: ERR_INVALID_REQ 0
   
   X-Cache: MISS from bijayant.kavach.blr
   
   X-Cache-Lookup: NONE from
 bijayant.kavach.blr:3128
   
   Via: 1.0 bijayant.kavach.blr
 (squid/3.0.STABLE10)
   
   Proxy-Connection: close
   
   Please suggest me what could be the reason and
 how to
  resolve this. Any help/pointer can be a very helpful
 for me.
  
   
   Bijayant Kumar
   
   
 Get your new
 Email
  address!
   Grab the Email name you've always wanted before
  someone else does!
   http://mail.promotions.yahoo.com/newdomains/aa/
  
  
  NONE - no upstream source.
  411  - Content-Length missing
  
  HTTP requires a Content-Length: header on POST
 requests.
  
 
How can I resolve this issue? The website is on the internet and works fine 
without Squid. When I am bypassing the proxy, I am not getting any 
type of error.
 
Can't this website be accessed through Squid?
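For reference, a POST the proxy will accept must declare its body length up front. A minimal sketch of building one (URL, host, and body are placeholders):

```shell
# Squid answers 411 when a POST arrives without this header.
BODY='city=Bangalore&search=Ka'
LEN=$(printf '%s' "$BODY" | wc -c)
printf 'POST /autosuggest_category_query_main.php HTTP/1.1\r\n'
printf 'Host: example.com\r\n'
printf 'Content-Length: %d\r\n\r\n%s\n' "$LEN" "$BODY"
```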
 
  Amos
  --
  Please be using
    Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
    Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1
  
 
 


