Re: [squid-users] squid 3.1 ldap authentication

2016-01-30 Thread Eliezer Croitoru

Just to update the thread.

A basic CLI test showed it's not an issue related to anything in the 
LDAP helpers or settings.
The issue was an IPv6 network-level issue: there was a default gateway, 
but for some unknown reason there was no IPv6 connectivity.
The test host could be any host with both IPv6 and IPv4 DNS records, 
i.e. at least one IPv6 (AAAA) record. Due to the request_start_timeout 
default of 5 minutes, the site took about 5 minutes to show up after 
the IPv6 attempt timed out.
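
If the long stall is indeed governed by this timeout, lowering it in 
squid.conf makes the failure visible faster; a sketch (the value is an 
arbitrary example, not a recommendation):

# lower the 5 minute default mentioned above
request_start_timeout 30 seconds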
The basic way to test it is running a simple script on the host machine 
that will test IPv6 connectivity. The right way to do that should be 
using a basic IPv6 ping, like these scripts:

- http://paste.ngtech.co.il/pxizenek2
- http://ngtech.co.il/squid/ipv6_test.sh
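
In case the links go away, a minimal sketch of such a test (the published 
scripts may differ; the target host is only an example of a dual-stacked 
site):

#!/bin/sh
# Minimal ICMPv6 connectivity test (sketch only).
HOST="ipv6.google.com"   # any host with an AAAA record will do
if ping6 -c 3 "$HOST" >/dev/null 2>&1; then
    echo "IPv6 connectivity: OK"
else
    echo "IPv6 connectivity: FAILED (check default gateway and ICMPv6 rules)"
    exit 1
fi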

But since it is known that opening the whole ICMPv6 protocol in 
firewalls can expose network vulnerabilities, it is commonly disabled 
(rather than being opened properly), and that makes it an issue to test 
IPv6 connectivity based only on ICMP.


Example ip6tables ICMPv6 rules that will allow a router to pass a basic 
ping6 test:
ip6tables -A FORWARD -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT

ip6tables -A FORWARD -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT
ip6tables -A FORWARD -p icmpv6 -j DROP

Later I will upgrade the script to test TCP/HTTP-level connectivity so 
it will be more useful as a debugging tool.
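
A sketch of what that could look like, using curl to force IPv6 at the 
TCP/HTTP level so it works even where ICMPv6 echo is filtered (the test 
URL is only an example):

# -6 forces IPv6, -m caps the whole attempt at 10 seconds
curl -6 -sS -o /dev/null -m 10 http://www.google.com/ \
  && echo "IPv6 HTTP: OK" || echo "IPv6 HTTP: FAILED"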


* http://www.squid-cache.org/Doc/config/request_start_timeout/
* https://www.cert.org/downloads/IPv6/ip6tables_rules.txt
* https://www.sixxs.net/wiki/IPv6_Firewalling

On 29/01/2016 03:50, Nando Mendonca wrote:

Thanks! I ran tcpdump, didn't really notice anything. Any other suggestions?

Thanks,
Nando


On Jan 25, 2016, at 10:07 AM, Anders Gustafsson  
wrote:

Do a packet trace on the LDAP connection. I bet the delay happens there. Also: 
I suspect that it might do the same LDAP lookup for EVERY HTTP session of which 
there might be thousands for a complex page.



nando mendonca  2016-01-25 17:52 >>>

I'm running squid 3.5.12, i'm using ldap for authentication. When trying to
browse the internet from clients it takes up to 10 minutes for the website
to load. Can you please assist me in troubleshooting what the issue is?
Below is my squid.conf file.


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.1 ldap authentication

2016-01-28 Thread Eliezer Croitoru

  
  
Hey Nando,

Can you test something?
On 25/01/2016 17:52, nando mendonca wrote:

external_acl_type ldap_group %LOGIN /usr/local/squid1/libexec/ext_ldap_group_acl
-R -b "ou=groups,dc=gcsldap,dc=corp,dc=domain,dc=com" -D
"cn=cost,ou=admin,dc=gcsldap,dc=corp,dc=domain,dc=com" -f
"(&(memberuid=%u) (cn=%a))" -w password -h ldap.corp.domain.com

In the above replace the "%LOGIN" with "%un" and see what happens.
The differences are mentioned at:
http://www.squid-cache.org/Doc/config/external_acl_type/
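
For clarity, a sketch of the resulting line; only the format code changes, 
every other option stays exactly as in your config:

external_acl_type ldap_group %un /usr/local/squid1/libexec/ext_ldap_group_acl
-R -b "ou=groups,dc=gcsldap,dc=corp,dc=domain,dc=com" ... -h ldap.corp.domain.com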
  
Also, comparing your command to what I have tested with, I see
something different.
My test command can be seen in this ML thread:
-
http://lists.squid-cache.org/pipermail/squid-users/2015-July/004874.html
I do not have the executable at hand, so I don't know the meaning
of the "-R" flag, but compared to the command I have used it's
different.
  
Try the above and we will see the results,
Eliezer

  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.1 ldap authentication

2016-01-25 Thread nando mendonca
Hi All,

I'm running squid 3.5.12, i'm using ldap for authentication. When trying to
browse the internet from clients it takes up to 10 minutes for the website
to load. Can you please assist me in troubleshooting what the issue is?
Below is my squid.conf file.




cache_mem 1048 MB

cache_log /usr/local/squid1/var/logs/cache.log

cache_swap_high 95

cache_swap_low 90

dns_nameservers x.x.x.x



#acl manager proto cache_object

#acl localhost src 127.0.0.1/32 ::1

#acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


# Example rule allowing access from your local networks.

# Adapt to list your (internal) IP networks from where browsing

# should be allowed

#acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

#acl localnet src x.x.x.x.0/24

#acl localnet src 172.16.0.0/12 # RFC1918 possible internal network

#acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

#acl localnet src fc00::/7   # RFC 4193 local private network range

#acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
machines


## Ports to allow:

acl Safe_ports port 443 # https

acl Safe_ports port 80  # http

acl Safe_ports port 8080

#acl Safe_ports port 21 # ftp

#acl Safe_ports port 70 # gopher

#acl Safe_ports port 210 # wais

#acl Safe_ports port 1025-65535 # unregistered ports

#acl Safe_ports port 280 # http-mgmt

#acl Safe_ports port 488 # gss-http

#acl Safe_ports port 591 # filemaker

#acl Safe_ports port 777 # multiling http


## CONNECT method:

#acl CONNECT method CONNECT


## LDAP Authentication ##

auth_param basic program /usr/local/squid1/libexec/basic_ldap_auth -b
"dc=ldap,dc=corp,dc=domain,dc=com" -f "uid=%s" ldapserv.corp.domain.com

auth_param basic children 5

#auth_param basic realm Web-Proxy

auth_param basic credentialsttl 30 minutes

acl ldap-auth proxy_auth REQUIRED


## Visible Hostname ##

visible_hostname proxy-01



external_acl_type ldap_group %LOGIN
/usr/local/squid1/libexec/ext_ldap_group_acl -R -b
"ou=groups,dc=gcsldap,dc=corp,dc=domain,dc=com" -D
"cn=cost,ou=admin,dc=gcsldap,dc=corp,dc=domain,dc=com" -f "(&(memberuid=%u)
(cn=%a))" -w password -h ldap.corp.domain.com



#external_acl_type ldap_group %LOGIN /usr/lib64/squid/squid_ldap_group -R
-b "ou=groups,dc=mydomain,dc=net" -D "cn=root,dc=mydomain,dc=net" -f
"(&(sn=%u) (cn=%a))" -w password -h localhost


#http_access allow ldap-auth


## ACL's for group checking ##


acl yumrepo external ldap_group yumrepo

acl winupdate external ldap_group winupdate

acl network-update external ldap_group network-update


## ACL's for url domains ##


acl rule1 url_regex -i "/usr/local/squid1/etc/allowed/yumrepo/domains"

acl rule2 url_regex -i "/usr/local/squid1/etc/allowed/winupdate/domains"

acl rule3 url_regex -i
"/usr/local/squid1/etc/allowed/network-update/domains"



# Only allow cachemgr access from localhost

http_access allow manager localhost

http_access deny manager


# Deny requests to certain unsafe ports

http_access deny !Safe_ports


# Deny CONNECT to other than secure SSL ports

#http_access deny CONNECT !SSL_ports


# We strongly recommend the following be uncommented to protect innocent

# web applications running on the proxy server who think the only

# one who can access services on "localhost" is a local user

http_access deny to_localhost


#

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

#


# Example rule allowing access from your local networks.

# Adapt localnet in the ACL section to list your (internal) IP networks

# from where browsing should be allowed

#http_access allow localnet

#http_access allow localhost


http_access allow rule1 ldap-auth yumrepo

http_access allow rule2 ldap-auth winupdate

http_access allow rule3 ldap-auth network-update


# And finally deny all other access to this proxy

#http_access deny all


# Squid normally listens to port 3128

http_port 8080


# Uncomment and adjust the following to add a disk cache directory.

maximum_object_size 1000 MB

cache_dir ufs /var/spool/squid 1000 16 256


# Leave coredumps in the first cache dir

coredump_dir /var/spool/squid




# Add any of your own refresh_pattern entries above these.

#refresh_pattern ^ftp: 1440 20% 10080

#refresh_pattern ^gopher: 1440 0% 1440

#refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

#refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90%
432000 override-expire override-lastmod ignore-no-cache ignore-no-store
ignore-private

#refresh_pattern -i .(deb|rpm|exe|zip|tar|tgz|bz2|ram|rar|bin)$  129600
100% 129600 override-expire ignore-no-cache ignore-no-store


refresh_pattern .   0   20% 4320

debug_options ALL,1 33,2 28,9

On Wed, Oct 7, 2015 at 12:18 PM, nando mendonca 
wrote:

> Hi,
>
> I have squid 3.1 

Re: [squid-users] squid 3.1 ldap authentication

2015-10-10 Thread Amos Jeffries
On 10/10/2015 8:16 a.m., nando mendonca wrote:
> Hi Amos,
> 
> Below is my squid.conf configuration. I can login and browse any site
> entering my ldap username. This is working fine.
> 
> Below i would like to use squid_ldap_group -R to allow certain ldap groups
> to browse only certain sites. Below "admins" and "sales" are two ldap
> groups, can i allow the "admins" group to browse a couple of sites and deny
> all others, and also have the "sales" group browse different sites and deny
> all other ldap groups access?
> 
> When i run 'squid -k parse', i'm not seeing any configuration errors.

Then your Squid is a bit outdated. Please consider an upgrade.
The current Squid will at least complain about the manager and localhost
ACL definitions being built-in.


> #
> # Recommended minimum configuration:
> #
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32 ::1
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
> 
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 192.168.30.0/24 # RFC1918 possible internal network
> acl localnet src 192.168.20.0/24
> #acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
> #acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7   # RFC 4193 local private network range
> acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged)
> machines
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl Safe_ports port 8080
> acl CONNECT method CONNECT
> auth_param basic program /usr/lib64/squid/squid_ldap_auth -b
> "dc=test,dc=corp,dc=domain,dc=com" -f "uid=%s" test.corp.domain.com
> auth_param basic children 5
> #auth_param basic realm Web-Proxy
> auth_param basic credentialsttl 30 minutes
> acl ldap-auth proxy_auth REQUIRED
> http_access allow ldap-auth

The problem you have is that you are allowing access to anyone who is
authenticated. End of story. No other permissions required. The
remainder of your access control config does nothing.

You need to do this instead:

 http_access deny !ldap-auth


> 
> #http_access deny all
> visible_hostname proxy-server-01
> 
> 
> ## Block access to Google ##
> #external_acl_type ldap_group %LOGIN /usr/lib64/squid/squid_ldap_group -R
> -b "dc=test,dc=corp,dc=domain,dc=com" -D
> "ou=Groups,dc=test,dc=corp,dc=domain,dc=com" -f "(&(objectclass=person)
> (sAMAccountName=%v) (memberof=cn=%a,
> ou=Groups,dc=test,dc=corp,dc=domain,dc=com))" -h test.corp.domain.com
> 
> #acl admin external ldap_group admin
> #acl sales external ldap_group sales
> 
> #acl rule1 url_regex -i "/etc/squid/blacklists/admin/domains"
> #acl rule2 url_regex -i "/etc/squid/blacklists/sales/domains"
> 
> #http_access allow admin rule1
> #http_access allow sales rule2
> #http_access deny all
> 

Once you are using "deny !ldap-auth" for the auth check, these group rules
will have a chance of doing something.


However, all of the above http_access lines should be placed below the
line which says "INSERT YOUR OWN RULE(S) HERE"

> 
> #
> # Recommended minimum Access Permission configuration:
> #
> # Only allow cachemgr access from localhost
> http_access allow manager localhost
> http_access deny manager
> 

Current best practice is to place these manager rules below the "CONNECT
!SSL_Ports" line.


> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
> 
> # Deny CONNECT to other than secure SSL ports
> http_access deny CONNECT !SSL_ports
> 
> # We strongly recommend the following be uncommented to protect innocent
> # web applications running on the proxy server who think the only
> # one who can access services on "localhost" is a local user
> http_access deny to_localhost
> 
> #
> # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
> #

Notice what the line above says, and how your authentication rules are
all up top, well above the default rules that protect your system against
DoS and protocol abuse attacks.


> 
> # Example rule allowing access from your local networks.
> # Adapt localnet in the ACL section to list your (internal) IP networks
> # from where browsing should be allowed
> http_access allow localnet
> http_access allow localhost
> 

Once you have authentication going you may want to remove these.


> 
> # And finally deny all other access to this proxy
> #http_access deny all
> 

Re-enable that "deny all" rule as the last http_access line.
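
Put together, the intended ordering is roughly this sketch (using the ACL
names from the config above; adjust to taste):

# deny dangerous ports first
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
# manager access below the CONNECT rule
http_access allow manager localhost
http_access deny manager
http_access deny to_localhost
# INSERT YOUR OWN RULE(S) HERE
http_access deny !ldap-auth
http_access allow admin rule1
http_access allow sales rule2
# and finally
http_access deny all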

Amos

___
squid-users 

Re: [squid-users] squid 3.1 ldap authentication

2015-10-07 Thread Amos Jeffries
On 8/10/2015 8:18 a.m., nando mendonca wrote:
> Hi,
> 
> I have squid 3.1 installed using ldap authentication. When i access a
> browser i enter my ldap credentials and it works fine. I’m able to browse
> all sites without any issues.
> 
> 
> Is there a way to use ldap groups to allow certain groups access to a few
> sites on the internet and then pretty much block everything else?

Please read this page 

Particularly the sections titled "Common Mistakes".

> 
> I’m able to restrict access to only a couple of sites and block everything
> else without using ldap group authentication, was just hoping this can be
> done with ldap group authentication.

Well, no, because you cannot authenticate a whole group. There is no such
thing as "ldap group authentication".

There is group *authorization*, with LDAP protocol used to fetch the
group details.
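
A sketch of how the two parts fit together in squid.conf (helper paths and
group names are taken from this thread; treat it as an outline, not a
drop-in config):

# authentication: who the user is
auth_param basic program /usr/lib64/squid/squid_ldap_auth ...
acl ldap-auth proxy_auth REQUIRED
# authorization: what the user's LDAP group is allowed to reach
external_acl_type ldap_group %LOGIN /usr/lib64/squid/squid_ldap_group ...
acl admins external ldap_group admins
http_access deny !ldap-auth
http_access allow admins rule1
http_access deny all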

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 3.1 ldap authentication

2015-10-07 Thread nando mendonca
Hi,

I have squid 3.1 installed using ldap authentication. When i access a
browser i enter my ldap credentials and it works fine. I’m able to browse
all sites without any issues.


Is there a way to use ldap groups to allow certain groups access to a few
sites on the internet and then pretty much block everything else?


I’m able to restrict access to only a couple of sites and block everything
else without using ldap group authentication, was just hoping this can be
done with ldap group authentication.


Thanks,
Nando
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.1 access_log and log module syslog sets program-name as (squid)

2015-06-25 Thread Amos Jeffries
On 25/06/2015 6:49 p.m., YogiBearNL aka Ronald wrote:
 Squid v2.7: 
 
 Jun 25 08:36:37 proxy SQUID[16271]:
 192.168.2.85 - - [25/Jun/2015:08:36:37 +0200] GET
 http://tpc.googlesyndication.com/safeframe/1-0-2/html/container.html
 HTTP/1.1 200 2439 http://tweakers.net/; Mozilla/5.0 (Macintosh; Intel
 Mac OS X 10_8_0) AppleWebKit/400.5.3 (KHTML, like Gecko) Version/5.2.3
 Safari/427.8.5 TCP_MISS:DIRECT 
 
 Squid v3.1.6: 
 
 Jun 24 21:47:56 proxy
 (SQUID): 192.168.2.85 - - [24/Jun/2015:21:47:56 +0200] GET
 http://cdn.viglink.com/images/pixel.gif? HTTP/1.1 200 639
 http://www.zdnet.com/blog/central-europe/; Mozilla/5.0 (Macintosh;
 Intel Mac OS X 10_8_0) AppleWebKit/400.5.3 (KHTML, like Gecko)
 Version/5.2.3 Safari/427.8.5 TCP_MISS:DIRECT 
 
 When I try to parse the
 syslog lines, the ones with the (squid) as a program name fail because
they are not normal syslog lines.
 Why is this happening ? And is this
 fixed in a later release ? Or maybe it's some configuration problem
 ?

Squid (both versions) is using the OS syslog() API to deliver these log
entries. The bits up to and including the '(SQUID):' and 'SQUID[16271]:'
are all generated by the syslog kernel daemon.

This is weird output, but I think it's due to a change in the syslog
application.
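
If the parser cannot be taught to accept it, a sketch of a pre-processing
workaround (file path and pattern are assumptions based on the samples
above):

# normalize the '(SQUID):' tag to a conventional 'SQUID:' before parsing
sed 's/ (SQUID): / SQUID: /' /var/log/squid-syslog.log > normalized.log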

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 3.1 access_log and log module syslog sets program-name as (squid)

2015-06-25 Thread YogiBearNL aka Ronald
 

Dear Squid users, 

I have a problem with Squid 3.1 on Debian
Squeeze. 

squid3 -v
Squid Cache: Version 3.1.6 

When I use the syslog
log module for access_log, the syslog lines have a funky program name,
(squid), instead of squid.
This is different from syslog lines of
Squid v2. ( Squid Cache: Version 2.7.STABLE9 ).
I will provide an
example here:

Squid v2.7: 

Jun 25 08:36:37 proxy SQUID[16271]:
192.168.2.85 - - [25/Jun/2015:08:36:37 +0200] GET
http://tpc.googlesyndication.com/safeframe/1-0-2/html/container.html
HTTP/1.1 200 2439 http://tweakers.net/; Mozilla/5.0 (Macintosh; Intel
Mac OS X 10_8_0) AppleWebKit/400.5.3 (KHTML, like Gecko) Version/5.2.3
Safari/427.8.5 TCP_MISS:DIRECT 

Squid v3.1.6: 

Jun 24 21:47:56 proxy
(SQUID): 192.168.2.85 - - [24/Jun/2015:21:47:56 +0200] GET
http://cdn.viglink.com/images/pixel.gif? HTTP/1.1 200 639
http://www.zdnet.com/blog/central-europe/; Mozilla/5.0 (Macintosh;
Intel Mac OS X 10_8_0) AppleWebKit/400.5.3 (KHTML, like Gecko)
Version/5.2.3 Safari/427.8.5 TCP_MISS:DIRECT 

When I try to parse the
syslog lines, the ones with the (squid) as a program name fail because
they are not normal syslog lines.
Why is this happening ? And is this
fixed in a later release ? Or maybe it's some configuration problem
?

squid.conf (interesting parts only) 

logformat combined %a %ui %un
[%tl] %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h
%Ss:%Sh
access_log syslog:local7 combined 

I've googled around and some
other guy had the same issue:
http://serverdown.ttwait.com/que/410957


Thanks,

Ronald 

 ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 3.1 with https traffic and delay pools is flooding network with hundreds of thousands 65-70 bytes packets (and killing the routers, anyway)

2015-06-17 Thread Amos Jeffries
On 17/06/2015 10:11 p.m., Horváth Szabolcs wrote:
 Hello!
 
 We're having serious problems with a squid proxy server. 
 
 The good news is the problem can be reproduced at any time in our production 
 squid system.
 
 Environment:
 - CentOS release 6.5 (Final) with Linux kernel 2.6.32-431.29.2.el6.x86_64
 - squid-3.1.10-22.el6_5.x86_64 (a bit old, CentOS ships this version)
 
 Problem description:
 - if we have a few mbytes/sec https traffic AND
 - delay_classes are in place AND
delay pools are full (I mean the available bandwidth for the customer is 
used)
 
 - then squid is trickling https traffic down to the clients in 65-70 byte 
 packets.
 
 Our WAN routers are not designed to handle thousands of 65-70 byte packets 
 per seconds and therefore we have some network stability issues.
 
 I tracked down the following:
- if delay_pools are commented out (clients can go at full speed as they 
like) - the problem disappears, https traffic flows with ~1500 byte packets
 - if we use only http traffic, there is no problem: http traffic flows with 
 ~1500 byte packets even if the delay pools are full
 
 Our test URL is www.opengroup.org/infosrv/DCE/dce122.tar.gz, which is 
 available both on http and https protocol.
 
 Resources can be found at http://support.iqsys.hu/logs/
 
 1. squid.conf - squid configuration file
 2. http-delaypool.pcap: 
   - wget -c http://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
   - delay pools are active
   - http flows with 1500 byte packets
 3. http-nodelaypool.pcap: 
   - wget -c http://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
   - delay pools are INACTIVE
   - http flows with 1500 byte packets
 4. https-delaypool.pcap:
   - wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
   - delay pools are active
   - http flows with 69 byte packets - this is extremely bad
 5. https-nodelaypool.pcap:
   - wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
   - delay pools are INACTIVE
   - http flows with 1500 byte packets
 
 My question is: is it a known bug?

Sounds like http://bugs.squid-cache.org/show_bug.cgi?id=2907,
 which was fixed in Squid-3.5.3.

see comment #16 in the bug report for a 3.1 workaround patch. Though if
your production server has high performance requirements the sleep(1)
workaround is not the best.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid 3.1 with Tproxy and WCCP on Cisco 3550

2013-10-28 Thread mudasirmirza
Hi,

I am working on setting up Squid 3.1 with Tproxy using WCCP on Cisco 3550.

Configs that I am using is below

Router and Proxy both are on Public IPs, traffic coming in from clients are
also Public IP
But for some reason the Router Identifier IP is showing as Local IP which is
being used to access router from local network. 

=
[root@proxy squid]# cat squid.conf
##start of config

http_port 3127 tproxy

icp_port 3130
icp_query_timeout 5000

pid_filename /var/run/squid-3127.pid
cache_effective_user squid
cache_effective_group squid
visible_hostname proxy.local
unique_hostname proxy.local
cache_mgr noc@proxy.local

access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 1
shutdown_lifetime 10 seconds

acl localnet src X.X.X.X/X # Public IP range for clients
acl squidlocal src 127.0.0.1

uri_whitespace strip
request_header_max_size 120 KB
dns_nameservers 127.0.0.1
cache_mem 8 GB
maximum_object_size_in_memory 1 MB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
max_filedesc 65500

cache_dir aufs /cache1 85 64 256 max-size=20971520
cache_dir aufs /cache2 85 64 256 max-size=20971520
cache_dir aufs /cache3 85 64 256 max-size=20971520
cache_dir aufs /cache4 85 64 256 max-size=20971520

minimum_object_size 512 bytes
maximum_object_size 100 MB
offline_mode off
cache_swap_low 98
cache_swap_high 99


# No redirector configured

wccp2_router 192.168.50.4
wccp2_rebuild_wait off
wccp2_forwarding_method 2
wccp2_return_method 1
wccp2_assignment_method 1
# Setup some default acls
acl all src all
acl localhost src 127.0.0.1/255.255.255.255
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901 81 3128 3127
1025-65535
acl sslports port 443 563 81
acl manager proto cache_object
acl purge method PURGE
acl connect method CONNECT
acl dynamic urlpath_regex cgi-bin \?

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports

# Always allow localhost connections
http_access allow localhost

# Allow local network(s) on interface(s)
http_access allow localnet
http_access allow squidlocal

# Default block all to be sure
http_access deny all

qos_flows local-hit=0x30
qos_flows sibling-hit=0x31
qos_flows parent-hit=0x32

##end of config
=

Router config related to WCCP


Switch-3550#sh ru

ip wccp web-cache

interface FastEthernet0/15
 description PPTP-Server
 no switchport
 ip address X.X.X.X 255.255.255.252
 ip wccp web-cache redirect in

interface GigabitEthernet0/2
 description ***Squid-Proxy***
 no switchport
 ip address X.X.X.X 255.255.255.248



Switch-3550#sh ip wccp
Global WCCP information:
Router information:
Router Identifier:               192.168.50.4
Protocol Version:                2.0

Service Identifier: web-cache
Number of Service Group Clients: 0
Number of Service Group Routers: 0
Total Packets s/w Redirected:    0
  Process:                       0
  CEF:                           0
Redirect access-list:            -none-
Total Packets Denied Redirect:   0
Total Packets Unassigned:        0
Group access-list:               -none-
Total Messages Denied to Group:  0
Total Authentication failures:   0
Total Bypassed Packets Received: 0

Switch-3550#
=


As I am new to WCCP with Squid, I do not know in great detail how to
configure WCCP and Squid.

With the above config, I do not see any traffic being redirected to squid.
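
A quick check worth running on the switch (a standard IOS command; the
output fields are as in the paste above): if squid has registered with the
router, "Number of Service Group Clients" should become 1.

Switch-3550#sh ip wccp web-cache detail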

Any help is greatly appreciated.




-
Regards,
Mudasir Mirza
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-with-Tproxy-and-WCCP-on-Cisco-3550-tp4662987.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid-3.1 failed to select source

2013-04-28 Thread Doug
Hello,

I have the reverse proxy config as:

 cache_peer  175.6.1.216  parent  80 0  no-query  originserver name=caiyuan
acl resdomain dstdomain  www.52caiyuan.com www.52huayuan.cn
52caiyuan.com 52huayuan.cn huayuan.52caiyuan.com
cache_peer_access caiyuan allow resdomain

When accessing to the cache, the domains www.52caiyuan.com and
52caiyuan.com work fine.
But huayuan.52caiyuan.com got failed, the cache.log says:

2013/04/28 16:36:13| Failed to select source for
'http://huayuan.52caiyuan.com/'
2013/04/28 16:36:13|   always_direct = 0
2013/04/28 16:36:13|   never_direct = 1
2013/04/28 16:36:13|   timedout = 0

For the same originserver, why do some domains work but some not?

The squid and OS version:

 Squid Cache: Version 3.1.6
Debian GNU/Linux 6.0

(apt-get install squid3)

Can you help? thanks.


Re: [squid-users] Squid-3.1 failed to select source

2013-04-28 Thread Amos Jeffries

On 28/04/2013 8:55 p.m., Doug wrote:

Hello,

I have the reverse proxy config as:

  cache_peer  175.6.1.216  parent  80 0  no-query  originserver name=caiyuan
acl resdomain dstdomain  www.52caiyuan.com www.52huayuan.cn
52caiyuan.com 52huayuan.cn huayuan.52caiyuan.com
cache_peer_access caiyuan allow resdomain

What does squid -k parse throw out at you?

I would expect some warnings about something to do with splay trees.
Which means ...
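
If it does warn about overlapping domains, a sketch of the usual fix is to
keep only the wildcard entries (a leading dot matches the domain itself and
all its subdomains), e.g.:

acl resdomain dstdomain .52caiyuan.com .52huayuan.cn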


When accessing to the cache, the domains www.52caiyuan.com and
52caiyuan.com work fine.
But huayuan.52caiyuan.com got failed, the cache.log says:

2013/04/28 16:36:13| Failed to select source for
'http://huayuan.52caiyuan.com/'
2013/04/28 16:36:13|   always_direct = 0
2013/04/28 16:36:13|   never_direct = 1
2013/04/28 16:36:13|   timedout = 0


The latest version should work much better. There is a package of 3.3.3 
now available in the Debian sid repository that you should try out.


Amos



Re: [squid-users] Squid-3.1 failed to select source

2013-04-28 Thread Doug
Hello,

# squid3 -k parse
2013/04/29 10:10:15| Processing Configuration File:
/etc/squid3/squid.conf (depth 0)

This is the info it gives.

2013/4/29 Amos Jeffries squ...@treenet.co.nz:
 On 28/04/2013 8:55 p.m., Doug wrote:

 Hello,

 I have the reverse proxy config as:

   cache_peer  175.6.1.216  parent  80 0  no-query  originserver
 name=caiyuan
 acl resdomain dstdomain  www.52caiyuan.com www.52huayuan.cn
 52caiyuan.com 52huayuan.cn huayuan.52caiyuan.com
 cache_peer_access caiyuan allow resdomain

 What does squid -k parse throw out at you?

 I would expect some warnings about something to do with splay trees.
 Which means ...


 When accessing to the cache, the domains www.52caiyuan.com and
 52caiyuan.com work fine.
 But huayuan.52caiyuan.com got failed, the cache.log says:

2013/04/28 16:36:13| Failed to select source for
'http://huayuan.52caiyuan.com/'
2013/04/28 16:36:13|   always_direct = 0
2013/04/28 16:36:13|   never_direct = 1
2013/04/28 16:36:13|   timedout = 0


 The latest version should work much better. There is a package of 3.3.3 now
 available in the Debian sid repository you should try out.

 Amos



[squid-users] Squid 3.1 Client Source Port Identity Awareness

2012-10-23 Thread Alexander.Eck
Hi everyone,

is it possible to have squid use the same Source Port to connect to the Web
server as the client uses to connect to squid?

My problem is the following setup:

Various Citrix Server
URL Filtering with Identity Awareness
Squid 3.1 as Cache Proxy

I had to install a Terminal Server Identity Agent on every Citrix Server to
distinguish the users.

The Identity Agent assigns port ranges to every user, to distinguish them.

Problem is:
In my firewall logs I can see the identity of the user for the request from
the citrix server to the proxy (proxy is in the dmz). But I can't see the
identity from the request from the proxy to the Internet.

My guess is, that this is because squid isn't using the same Source Port as
the client, or is not forwarding the Source Port.

Did anybody try something similar and got it working? Is squid capable of
doing this or do I have an error in reasoning about my setup?

Any help is appreciated :)

Best Regards

Alex




Re: [squid-users] Squid 3.1 Client Source Port Identity Awareness

2012-10-23 Thread Eliezer Croitoru

On 10/23/2012 8:55 PM, alexander@heidelberg.de wrote:

Any help is appreciated :)

Best Regards

Alex

Take a peek at TPROXY.
If you can share your squid.conf you can get better help.
(notice that your email looks bad with lots of spaces)

Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid 3.1 Client Source Port Identity Awareness

2012-10-23 Thread Amos Jeffries

On 24.10.2012 07:55, Alexander.Eck wrote:

Hi everyone,

is it possible to have squid use the same Source Port to connect to
the Web server as the client uses to connect to squid?



No. One gets errors when bind() is used on an already open port.
connect() and sendto() do not supply the OS with IP:port details.




My problem is the following setup:

Various Citrix Server
URL Filtering with Identity Awareness
Squid 3.1 as Cache Proxy

I had to install a Terminal Server Identity Agent on every Citrix
Server to distinguish the users.

The Identity Agent assigns port ranges to every user, to distinguish 
them.



Problem is:
In my firewall logs I can see the identity of the user for the
request from the citrix server to the proxy (proxy is in the dmz).
But I can't see the identity from the request from the proxy to the
Internet.

My guess is, that this is because squid isn't using the same Source
Port as the client, or is not forwarding the Source Port.


client also does not mean what you think it means. Squid is a client 
in HTTP and can generate new or different requests along with those 
aggregated from its inbound clients.


HTTP/1.1 is also stateless with multiplexing and pipelines. Any 
outgoing connection can be shared by requests received between multiple 
inbound client connections. There is no relationship between inbound and 
outbound - adding a stateful relationship (pinning) degrades performance 
a LOT.


How does your fancy client identification system correlate them?


PS: the TCP/IP firewall level is not a good place to log HTTP level 
client details.




Did anybody try something similar and got it working? Is squid
capable of doing this or do I have an error in reasoning about my
setup?

Any help is appreciated :)



Amos


Re: [squid-users] Squid 3.1 vmware.com access

2012-09-21 Thread Amos Jeffries

On 20/09/2012 11:46 p.m., Bambang Sumitra wrote:



On Sep 20, 2012 5:12 PM, Amos Jeffries squ...@treenet.co.nz wrote:


 On 20/09/2012 7:03 p.m., Bambang Sumitra wrote:

 Hi All,

I'm using Squid Cache: Version 3.1.19 on Ubuntu 12.04, and I have a
problem accessing vmware.com: it never completely loads, always showing a
white blank page. I have tried different browsers as well: Firefox,
Chrome and IE.


 here is my squid config

 -- squid config
 --
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localnet
 http_access allow localhost
 http_access deny all
http_port 192.168.1.3:3128 transparent
 cache_dir aufs /squid-cache/squid 1 16 256
 coredump_dir /squid-cache/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern (Release|Packages(.gz)*)$  0 20% 2880
 refresh_pattern .   0   20% 4320

 # custom config
 logfile_rotate 10
 cache_mem 1024 MB
 pipeline_prefetch on
 dns_nameservers 192.168.1.1 8.8.8.8 8.8.4.4

acl langsung dstdomain .vmware.com
 always_direct allow langsung

 ## quinto lab configuration
 icap_enable on
 icap_preview_enable on
 icap_preview_size 4096
 icap_persistent_connections on
 icap_send_client_ip on
 icap_send_client_username on
 icap_client_username_header X-Client-Username
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod

 adaptation_access qlproxy1 allow all
 adaptation_access qlproxy2 allow all
 -- squid config
 --


 -- access.log clip
 --


 1348124282.802 158646 192.168.1.65 TCP_MISS/200 338 GET
 http://www.vmware.com/files/templates/inc/baynote_global.js - NONE/-
 application/javascript
 1348124283.102 159150 192.168.1.65 TCP_MISS/200 338 GET
 http://www.vmware.com/files/templates/inc/baynote_observer.js - NONE/-
 application/javascript


 It looks like it may be these which take 159 seconds *each* before 
they timeout and 338 bytes are returned to the client. No server is 
contacted. Everything else is only a few milliseconds.


 Amos

Hi Amos,

Thank you for replying on my issue.
do you have any idea what is causing squid to timeout?
I have test to by pass squid and i can open vmware.com 
http://vmware.com with no problem and its open so fast.




No idea. It's very strange. A MISS where no server was contacted 
(NONE/-) resulting in a 200 response with 338 bytes of data.
Unless that 200 response is coming out of the ICAP RESPMOD for some 
unknown reason?


Amos


[squid-users] Squid 3.1 vmware.com access

2012-09-20 Thread Bambang Sumitra
Hi All,

I'm using Squid Cache: Version 3.1.19 on Ubuntu 12.04, and I have a
problem accessing vmware.com: it never completely loads, always showing a
white blank page. I have tried different browsers as well: Firefox,
Chrome and IE.


here is my squid config

-- squid config
--
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 192.168.1.3:3128 transparent
cache_dir aufs /squid-cache/squid 1 16 256
coredump_dir /squid-cache/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320

# custom config
logfile_rotate 10
cache_mem 1024 MB
pipeline_prefetch on
dns_nameservers 192.168.1.1 8.8.8.8 8.8.4.4

acl langsung dstdomain .vmware.com
always_direct allow langsung

## quinto lab configuration
icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Client-Username
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all
-- squid config
--


-- access.log clip
--

1348124124.131  2 192.168.1.65 TCP_HIT/200 1271 GET
http://www.vmware.com/files/include/shadowbox/vmw_cust.js - NONE/-
application/javascript
1348124124.135    277 192.168.1.65 TCP_REFRESH_UNMODIFIED/200 1481 GET
http://www.vmware.com/files/templates/inc/oo_engine.js -
DIRECT/96.16.179.51 application/javascript
1348124124.287    198 192.168.1.65 TCP_REFRESH_UNMODIFIED/200 1924 GET
http://www.vmware.com/files/include/ga/downloads-tracker.js -
DIRECT/96.16.179.51 application/javascript
1348124124.431    277 192.168.1.65 TCP_REFRESH_UNMODIFIED/200 2886 GET
http://www.vmware.com/files/include/common_min.js -
DIRECT/96.16.179.51 application/javascript
1348124282.802 158646 192.168.1.65 TCP_MISS/200 338 GET
http://www.vmware.com/files/templates/inc/baynote_global.js - NONE/-
application/javascript
1348124283.102 159150 192.168.1.65 TCP_MISS/200 338 GET
http://www.vmware.com/files/templates/inc/baynote_observer.js - NONE/-
application/javascript
1348124297.668    396 192.168.1.65 TCP_MISS/301 704 GET
http://vmware.com/ - DIRECT/165.193.233.120 text/html
1348124297.876 33 192.168.1.65 TCP_HIT/200 15735 GET
http://www.vmware.com/ - NONE/- text/html
1348124297.968  2 192.168.1.65 TCP_HIT/200 6185 GET
http://www.vmware.com/files/include/ga/ga-code.js - NONE/-
application/javascript
1348124297.994 13 192.168.1.65 TCP_HIT/200 88191 GET
http://www.vmware.com/files/templates/inc/library_framework.js -
NONE/- application/javascript
1348124298.000  3 192.168.1.65 TCP_HIT/200 371 GET
http://www.vmware.com/files/templates/inc/s_define.js - NONE/-
application/javascript
1348124298.002 20 192.168.1.65 TCP_HIT/200 137831 GET
http://www.vmware.com/files/templates/inc/fce.css - NONE/- text/css
1348124298.022  1 192.168.1.65 TCP_HIT/200 2619 GET
http://www.vmware.com/files/js/demand.js - NONE/-
application/javascript
1348124298.054  2 192.168.1.65 TCP_HIT/200 2338 GET
http://www.vmware.com/files/include/shadowbox303/shadowbox.css -
NONE/- text/css
1348124298.064  9 192.168.1.65 TCP_HIT/200 106044 GET
http://www.vmware.com/files/include/location-popup/jquery.tools.min.jq164.js
- NONE/- application/javascript
1348124298.119  6 192.168.1.65 TCP_HIT/200 65355 GET
http://www.vmware.com/files/include/shadowbox303/shadowbox.js - NONE/-
application/javascript
1348124298.130  2 192.168.1.65 TCP_HIT/200 3886 GET
http://www.vmware.com/files/include/location-popup/geo_redirect_min.js
- NONE/- application/javascript
1348124298.132  2 192.168.1.65 TCP_HIT/200 6294 GET
http://www.vmware.com/files/include/location-popup/location-popup-api.js
- 

Re: [squid-users] Squid 3.1 vmware.com access

2012-09-20 Thread Amos Jeffries

On 20/09/2012 7:03 p.m., Bambang Sumitra wrote:

Hi All,

I'm using Squid Cache: Version 3.1.19 on Ubuntu 12.04, and I have a
problem accessing vmware.com: it never completely loads, always showing a
white blank page. I have tried different browsers as well: Firefox,
Chrome and IE.


here is my squid config

-- squid config
--
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 192.168.1.3:3128 transparent
cache_dir aufs /squid-cache/squid 1 16 256
coredump_dir /squid-cache/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern (Release|Packages(.gz)*)$  0   20% 2880
refresh_pattern .   0   20% 4320

# custom config
logfile_rotate 10
cache_mem 1024 MB
pipeline_prefetch on
dns_nameservers 192.168.1.1 8.8.8.8 8.8.4.4

acl langsung dstdomain .vmware.com
always_direct allow langsung

## quinto lab configuration
icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Client-Username
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all
-- squid config
--


-- access.log clip
--



1348124282.802 158646 192.168.1.65 TCP_MISS/200 338 GET
http://www.vmware.com/files/templates/inc/baynote_global.js - NONE/-
application/javascript
1348124283.102 159150 192.168.1.65 TCP_MISS/200 338 GET
http://www.vmware.com/files/templates/inc/baynote_observer.js - NONE/-
application/javascript


It looks like it may be these which take 159 seconds *each* before they 
timeout and 338 bytes are returned to the client. No server is 
contacted. Everything else is only a few milliseconds.


Amos


[squid-users] Squid 3.1.x and Kemp loadbalancer.

2012-06-20 Thread Josef Karliak

Hi there,
we use a Kemp load balancer for balancing proxies (active-backup). All
users have the IP of the Kemp load balancer set. But in the squid
access_log there is the IP of the load balancer; I want the IP of the
user that is accessing the web pages (we use webalizer for analyzing top
browsing users).

  My logformat defined in squid.conf:
logformat combined %a %ui %un [%{%d/%b/%Y:%H:%M:%S +}tl] \
  %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h %Ss:%Sh

  Do I've some bad variable in the logformat ?
  Thank you very much and best regards
  J.Karliak

--
Ma domena pouziva zabezpeceni a kontrolu SPF (www.openspf.org) a
DomainKeys/DKIM (with ADSP) . Pokud mate problemy s dorucenim emailu,
zacnete pouzivat metody overeni puvody emailu zminene vyse. Dekuji.
My domain use SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policy and check. If you've problem with sending emails to me, start
using email origin methods mentioned above. Thank you.


This message was sent using IMP, the Internet Messaging Program.





Re: [squid-users] Squid 3.1.x and Kemp loadbalancer.

2012-06-20 Thread Amos Jeffries

On 20.06.2012 22:40, Josef Karliak wrote:

Hi there,
  we use Kemp loadbalancer for balancing proxy (active-backup). All
users has set IP of kemp loadbalancer. But in the squid access_log is
IP of the loadbalancer, I want there an IP of the user that is
accessing the web pages (we use webalizer for analyzing top browsing
users).
  My logformat defined in squid.conf:
logformat combined %a %ui %un [%{%d/%b/%Y:%H:%M:%S +}tl] \
  %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h 
%Ss:%Sh


  Do I've some bad variable in the logformat ?



Your format is accurate.

The kemp load balancer apparently operates in one of two ways:

layer 4, using NAT alteration of packets before delivery to the Squid 
box. The real clients' addresses are gone. There is no recovery possible.


 layer 7, using a proxy which itself makes HTTP requests through Squid. 
So it is the one and only *client* to Squid. It *might* be able to set 
X-Forwarded-For headers and inform Squid about the clients original IP 
address. If so configure:


  acl kemp src ... IP of kemp load balancer(s)
  follow_x_forwarded_for allow kemp
  follow_x_forwarded_for deny all
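
A filled-in sketch, assuming the balancer sits at 192.168.0.10 (substitute
your own address) and that Squid was built with
--enable-follow-x-forwarded-for:

acl kemp src 192.168.0.10
follow_x_forwarded_for allow kemp
follow_x_forwarded_for deny all
# log_uses_indirect_client defaults to on, so %a in the logformat above
# will then show the original client address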



NOTE: You have the alternative option of active-passive load balancing 
in a PAC file which is performed directly in the client browser.



Amos



Re: [squid-users] Squid 3.1 and https ssl aes256 issue

2012-06-04 Thread Amos Jeffries

On 03.06.2012 22:23, alextouch wrote:

Hi

this is my first post... last month I installed a linux ubuntu server 12.04
LTS machine with Squid3 in my organization. This machine works as a proxy
(not transparent proxy) for the web access from clients.
Proxy is connected to a gateway for internet connection.
Clients are configured so that all web (http, https, ftp, socks) traffic
goes through the squid proxy.
All works fine, clients are able to access all types of internet traffic,
including https sites encrypted with aes128 (like gmail, or
https://www1.directatrading.com/).
But no client is able to access sites encrypted with aes256 (like
https://www.unicredit.it/)... the browser locks with "Connecting to
https://www..." and nothing else is displayed on the browser itself.
I searched the net but I wasn't able to find a thread about this issue.
squid.conf is the original one; I added only support for delay-pools and
acls to deny some clients access to certain sites. But even with these
options disabled, the problem is still present.

Does anyone have any idea?


In the standard setup like this Squid has nothing to do with the SSL or 
TLS operations. The browser simply opens a CONNECT tunnel through Squid. 
The encryption details are negotiated directly between the browser and 
origin server.


It is most likely that your clients' browsers or SSL libraries are 
missing AES-256 support or are getting stuck negotiating to use a 
version of TLS/SSL which supports it.
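
A sketch of how to check that from a client machine, talking to the origin
server directly (the site is the one from this thread; openssl cipher names
may vary by version):

# attempt a handshake restricted to an AES-256 cipher
openssl s_client -connect www.unicredit.it:443 -cipher AES256-SHA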


Amos


[squid-users] Squid 3.1 and https ssl aes256 issue

2012-06-03 Thread alextouch
Hi 

this is my first post... last month I installed a linux ubuntu server 12.04
LTS machine with Squid3 in my organization. This machine works as a proxy
(not transparent proxy) for the web access from clients. 
Proxy is connected to a gateway for internet connection. 
Clients are configured so that all web (http, https, ftp, socks) traffic goes
through the squid proxy.
All works fine, clients are able to access all types of internet traffic,
including https sites encrypted with aes128 (like gmail, or
https://www1.directatrading.com/).
But no client is able to access sites encrypted with aes256 (like
https://www.unicredit.it/)... the browser locks with "Connecting to
https://www..." and nothing else is displayed on the browser itself.
I searched the net but I wasn't able to find a thread about this issue.
squid.conf is the original one; I added only support for delay-pools and
acls to deny some clients access to certain sites. But even with these
options disabled, the problem is still present.

Does anyone have any idea? 

Thank you. 
Alex

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-1-and-https-ssl-aes256-issue-tp4655249.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid 3.1 and TPROXY 4 Problems

2012-05-05 Thread Dave Blakey
Hi all,
 I'm busy working on a tproxy setup with the latest squid on Ubuntu
12.04; tproxy is enabled, squid is compiled with tproxy support etc.
The difference with this setup is that traffic is being sent to the
host using route-map on a cisco as opposed to WCCP but it seems that
should work. Unfortunately it seems there is very little documentation
about the latest tproxy+squid3.1 setup method - but this is what I
have --

# IP
ip -f inet rule add fwmark 1 lookup 100
ip -f inet route add local default dev eth0 table 100

# Sysctl
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 2 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter

# IP Tables
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129


In squid.conf the relevant line for http_port 3129 tproxy is set etc.
With this setup I get hits on the iptables rules, and see a request in
the access log, but it fails to fill it; it also looks very strange --

1336146295.076  56266 x.x.x.x TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -
1336146337.969  42875 x.x.x.x TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -

As you can see it's a TCP_MISS/000 and the DIRECT/www.google.com in my
experience should have an IP not a hostname? Additionally the sizes
seem very weird. The client just hangs.

Should this setup be working or is there some obvious error?

Thank you in advance
Dave


Re: [squid-users] Squid 3.1 and TPROXY 4 Problems

2012-05-05 Thread Amos Jeffries

On 5/05/2012 7:58 p.m., Dave Blakey wrote:

Hi all,
  I'm busy working on a tproxy setup with the latest squid on Ubuntu
12.04; tproxy is enabled, squid is compiled with tproxy support etc.
The difference with this setup is that traffic is being sent to the
host using route-map on a cisco as opposed to WCCP but it seems that
should work. Unfortunately it seems there is very little documentation
about the latest tproxy+squid3.1 setup method - but this is what I
have --

# IP
ip -f inet rule add fwmark 1 lookup 100
ip -f inet route add local default dev eth0 table 100

# Sysctl
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 2 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter

# IP Tables
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129


In squid.conf the relevant line for http_port 3129 tproxy is set etc.
With this setup I get hits on the iptables rules, and see a request in
the access log but it fails to fill it, it also looks very strange --

1336146295.076  56266 x.x.x.x TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -
1336146337.969  42875 x.x.x.x TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -

As you can see it's a TCP_MISS/000 and the DIRECT/www.google.com in my
experience should have an IP not a hostname? Additionally the sizes
seem very weird. The client just hangs.


Depends on your squid version: the 3.2+ are IP-only, while the older ones 
display the FQDN when it's available and log_fqdn is on.
Size is zero because upstream was contacted, but things went bad before 
any bytes were transferred to the client.


This is the usual log signature for a forwarding loop. With TPROXY those 
are a greater risk than with NAT, and harder to track down. You may need 
to take a very close look at the TCP packets in the different network 
link places and see what is going on. NP: the port number is the only way 
to tell client and server connections apart at the TCP/IP level.
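
A sketch of such a capture on the squid box (the interface name is an
assumption; 3129 is the tproxy port from the config above). With TPROXY
both legs carry the client's IP, so the ports are what tell them apart:

tcpdump -ni eth0 'tcp port 80 or tcp port 3129'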





Should this setup be working or is there some obvious error?


I'm not entirely sure about the rp_filter sysctl. I've had trouble on 
recent Debian myself with TPROXY hanging. It may be worth experimenting 
with those a bit.


Amos


[squid-users] Squid 3.1 and TPROXY 4 Problems

2012-05-04 Thread Dave
Hi all,
 I'm busy working on a tproxy setup with the latest squid on Ubuntu
12.04; tproxy is enabled, squid is compiled with tproxy support etc.
The difference with this setup is that traffic is being sent to the
host using route-map on a cisco as opposed to WCCP but it seems that
should work. Unfortunately it seems there is very little documentation
about the latest tproxy+squid3.1 setup method - but this is what I
have --

# IP
ip -f inet rule add fwmark 1 lookup 100
ip -f inet route add local default dev eth0 table 100

# Sysctl
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 2 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter

# IP Tables
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129


In squid.conf the relevant line for http_port 3129 tproxy is set etc.
With this setup I get hits on the iptables rules, and see a request in
the access log, but it fails to fill it; it also looks very strange --

1336146295.076  56266 69.77.128.218 TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -
1336146337.969  42875 69.77.128.218 TCP_MISS/000 0 GET
http://www.google.com/url? - DIRECT/www.google.com -

As you can see it's a TCP_MISS/000 and the DIRECT/www.google.com in my
experience should have an IP not a hostname? Additionally the sizes
seem very weird. The client just hangs.

Should this setup be working or is there some obvious error?

Thank you in advance
Dave


[squid-users] Squid 3.1 + Accel conf + ETag = ignoring ETag

2012-04-27 Thread Daniele Segato

Hi,

I'm experimenting with squid and a service I'm writing.

my service compute ETag and return it along with other Http headers:

Headers returned by a request to my service:

HTTP/1.1 200 OK
Content-Language: it
Cache-Control: public, max-age=60, s-maxage=60
ETag: 32b71ecde17592a1d6ef696f5ae78216
Last-Modified: Fri, 27 Apr 2012 14:09:08 GMT
Date: Fri, 27 Apr 2012 17:43:52 GMT
Vary: Accept, Accept-Language
Age: 0
Content-Type: application/json;charset=UTF-8
Content-Length: 932
Server: Jetty(6.1.21)


here's what happens if I pass
If-Modified-Since: Fri, 27 Apr 2012 14:09:08 GMT

HTTP/1.1 304 Not Modified
Content-Language: it
Cache-Control: public, max-age=60, s-maxage=60
ETag: 32b71ecde17592a1d6ef696f5ae78216
Last-Modified: Fri, 27 Apr 2012 14:09:08 GMT
Date: Fri, 27 Apr 2012 17:44:49 GMT
Vary: Accept, Accept-Language
Age: 0
Content-Type: application/json;charset=UTF-8
Content-Length: 932
Server: Jetty(6.1.21)



And if I pass:
If-None-Match: 32b71ecde17592a1d6ef696f5ae78216

HTTP/1.1 304 Not Modified
Content-Language: it
Cache-Control: public, max-age=60, s-maxage=60
ETag: 32b71ecde17592a1d6ef696f5ae78216
Last-Modified: Fri, 27 Apr 2012 14:09:08 GMT
Date: Fri, 27 Apr 2012 17:46:20 GMT
Vary: Accept, Accept-Language
Age: 0
Content-Type: application/json;charset=UTF-8
Content-Length: 932
Server: Jetty(6.1.21)



Nothing special...
But squid is not sending me the If-None-Match header

I have a situation where the Last-Modified date doesn't change, but the 
ETag works in identifying what's a 304 and what's not.
The Last-Modified date check fails there (it gives a 304 when the content 
has actually been modified).
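
A quick way to see what squid actually forwards is to replay the
conditional request through it; a sketch (the proxy address and resource
URL are placeholders, the ETag value is the one from above):

curl -x http://localhost:3128 \
     -H 'If-None-Match: 32b71ecde17592a1d6ef696f5ae78216' \
     -i http://myservice.example/resource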



So I need squid to give me If-None-Match

is there some config to enable?
Am I doing something wrong?

thanks,
Daniele


[squid-users] Squid 3.1: access.log did not log authenticated members

2012-04-20 Thread David Touzeau


Dear

I have tested all log formats on my squid 3.1.19 and the member 
information is still logged as "IP - -",

e.g.: 192.168.1.212 - - [

Is this normal?

I notice that squid 3.2 correctly logs members' uid in access.log

Best regards


[squid-users] squid 3.1 and HTTPS (and probably ipv6)

2012-03-13 Thread Eugene M. Zheganin

Hi.

I'm using squid 3.1.x on FreeBSD. Squid is built from ports.

Recently I was hit by a weird issue: my users cannot open HTTPS pages. 
This is not something constant - if they hit the F5 button in the browser, 
the pages load, sometimes showing a message like 'Unable to connect. 
Firefox can't establish a connection to the server at access.ripe.net.' 
(for example; most of them are using FF). At the same time, plain HTTP 
pages are working fine.


I did some investigation and it appears that squid really thinks it 
cannot connect to the HTTPS-enabled web server:


===Cut===
2012/03/13 14:08:39.661| ACL::ChecklistMatches: result for 'all' is 1
2012/03/13 14:08:39.661| ACLList::matches: result is true
2012/03/13 14:08:39.661| aclmatchAclList: 0x285e4810 returning true (AND 
list satisfied)
2012/03/13 14:08:39.661| ACLChecklist::markFinished: 0x285e4810 
checklist processing finished
2012/03/13 14:08:39.661| ACLChecklist::check: 0x285e4810 match found, 
calling back with 1

2012/03/13 14:08:39.661| ACLChecklist::checkCallback: 0x285e4810 answer=1
2012/03/13 14:08:39.661| peerCheckAlwaysDirectDone: 1
2012/03/13 14:08:39.661| peerSelectFoo: 'CONNECT access.ripe.net'
2012/03/13 14:08:39.661| peerSelectFoo: direct = DIRECT_YES
2012/03/13 14:08:39.661| The AsyncCall SomeCommConnectHandler 
constructed, this=0x286e6740 [call1916]
2012/03/13 14:08:39.661| commConnectStart: FD 14, cb 0x286e6740*1, 
access.ripe.net:443
2012/03/13 14:08:39.661| The AsyncCall SomeCloseHandler constructed, 
this=0x2956c2c0 [call1917]

2012/03/13 14:08:39.661| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.661| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| ipcacheMarkBadAddr: access.ripe.net 
[2001:67c:2e8:22::c100:685]:443
2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at 
193.0.6.133 (2 of 2)

2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x285e4810

2012/03/13 14:08:39.662| ACLChecklist::~ACLChecklist: destroyed 0x285e4810
2012/03/13 14:08:39.662| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x285e4910

2012/03/13 14:08:39.662| ACLChecklist::~ACLChecklist: destroyed 0x285e4910
2012/03/13 14:08:39.662| The AsyncCall SomeCommReadHandler constructed, 
this=0x28ce9100 [call1918]
2012/03/13 14:08:39.662| leaving SomeCommReadHandler(FD 150, 
data=0x286b6710, size=4, buf=0x28d1e000)

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : family=28
2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at 
193.0.6.133 (2 of 2)

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| ipcacheMarkAllGood: Changing ALL 
access.ripe.net addrs to OK (1/2 bad)

2012/03/13 14:08:39.662| commConnectCallback: FD 14
2012/03/13 14:08:39.662| comm.cc(1195) commSetTimeout: FD 14 timeout -1
2012/03/13 14:08:39.662| comm.cc(1206) commSetTimeout: FD 14 timeout -1
2012/03/13 14:08:39.662| comm.cc(934) will call 
SomeCommConnectHandler(FD 14, errno=22, flag=-8, data=0x28f6bdd0, ) 
[call1916]

2012/03/13 14:08:39.662| commConnectFree: FD 14
2012/03/13 14:08:39.662| entering SomeCommConnectHandler(FD 14, 
errno=22, flag=-8, data=0x28f6bdd0, )
2012/03/13 14:08:39.662| AsyncCall.cc(32) make: make call 
SomeCommConnectHandler [call1916]

2012/03/13 14:08:39.662| errorSend: FD 12, err=0x28f995d0
2012/03/13 14:08:39.662| errorpage.cc(1051) BuildContent: No existing 
error page language negotiated for ERR_CONNECT_FAIL. Using default error 
file.

===Cut==

But why? I did some telnetting from this server to 
access.ripe.net:443, and it succeeded 10 times out of 10 (squid's error 
rate is far more frequent). The only thing that bothers me is that 
telnet also tries ipv6 first, but then switches to ipv4, and 
connects.


Now some suggestions (probably a shot in the dark). This only happens on 
ipv6-enabled machines without actual ipv6 connectivity (no ipv6 
default route or no public ipv6 address; for example I have unique-local 
addresses for testing purposes). At the same time this issue can be 
easily solved by restoring ipv6 connectivity to the outside world. So, 
can it be some dual-stack behaviour bug? Or is it 'by design'? Do I 
need to report it?


Thanks.
Eugene.


Re: [squid-users] squid 3.1 and HTTPS (and probably ipv6)

2012-03-13 Thread Amos Jeffries

On 13.03.2012 22:10, Eugene M. Zheganin wrote:

Hi.

I'm using squid 3.1.x on FreeBSD. Squid is built from ports.

Recently I was hit by a weird issue: my users cannot open HTTPS
pages. This is not constant - if they hit the F5 button in the
browser, the pages load, sometimes showing a message like 'Unable to
connect. Firefox can't establish a connection to the server at
access.ripe.net.' (for example; most of them are using FF). At the
same time plain HTTP pages are working fine.

I did some investigation and it appears that squid really thinks it
cannot connect to the HTTPS-enabled web server:




As you guessed this does seem to be a stack issue. Dual-stack systems 
can be configured to operate as hybrid stacks or as split stacks (two 
distinct socket handling paths). Recently there has been a trend away 
from the simpler hybrid stacks towards split stacks.


Squid-3.1 was written for hybrid stacks with v4-mapping ability. When 
run on stack without mapping (split) it cannot reset the FD protocol to 
switch stack types. Workaround/Support for split stacks has been added 
incrementally across the 3.1 series, with some of the deeper changes 
only in 3.2.




===Cut===

snip
2012/03/13 14:08:39.661| ipcache_nbgethostbyname: HIT for 
'access.ripe.net'


Found the site IPs.

assuming: connect to the first one (IPv6).


2012/03/13 14:08:39.662| ipcacheMarkBadAddr: access.ripe.net
[2001:67c:2e8:22::c100:685]:443


 Didn't work. Mark it bad.


2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at
193.0.6.133 (2 of 2)
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : 
family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : 
family=28


Reset the socket FD to convert for the new IP (v4).

assuming: socket still IPv6 and trying to use for IPv4?
assuming: connect to this IP also failed.

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 
'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 
'access.ripe.net'
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : 
family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14-16 : 
family=28


 Reset the socket FD (again).. Bit strange that this is still 14-16.


2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at
193.0.6.133 (2 of 2)
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 
'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 
'access.ripe.net'

2012/03/13 14:08:39.662| ipcacheMarkAllGood: Changing ALL
access.ripe.net addrs to OK (1/2 bad)


Failed a second time. tries > number of IPs (huh? 3 or 2 tries?)

Instead of retrying yet again, cycle the IPs ...


2012/03/13 14:08:39.662| errorpage.cc(1051) BuildContent: No existing
error page language negotiated for ERR_CONNECT_FAIL. Using default
error file.


... and respond with error.


===Cut==

But why? I did some telnetting from this server to
access.ripe.net:443, and it succeeded 10 times out of 10 (squid's
error rate is far more frequent). The only thing that bothers me is
that telnet also tries ipv6 first, but then switches to ipv4,
and connects.

Now some suggestions (probably a shot in the dark). This only happens
on ipv6-enabled machines without actual ipv6 connectivity (no
ipv6 default route or no public ipv6 address; for example I have
unique-local addresses for testing purposes). At the same time
this issue can be easily solved by restoring ipv6 connectivity to
the outside world. So, can it be some dual-stack behaviour bug? Or is 
it

'by design'? Do I need to report it?



Squid opens an IPv6 socket by default, attempts the IPv6 destination 
(route down, IPv6 socket). Fails. Then attempts to reset the socket 
protocol family and contact the IPv4 destination (route fine, IPv6 
socket [oops!]).
You can avoid this in 3.1 by enabling v4-mapping capability in your 
kernel or using tcp_outgoing_address 0.0.0.0 to force the sockets to 
be IPv4-only from the start. 3.2 series has better split-stack support 
so should have this behaviour problem fixed now.
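In squid.conf terms that workaround is a single line:

tcp_outgoing_address 0.0.0.0

and on FreeBSD the v4-mapping capability is toggled by (assuming the
stock sysctl name):

sysctl net.inet6.ip6.v6only=0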



Amos


Re: [squid-users] squid 3.1 - endless loop IIS webserver

2012-03-12 Thread Amos Jeffries

On 12/03/2012 6:53 p.m., kadvar wrote:

Hi,

I have searched for other posts with the same problem but the workarounds
that worked for them didn't work for me. I am trying to configure a squid
reverse proxy with ssl support. I have squid on 192.168.124.41 with apache
on 127.0.0.1 on the same box. I also have two other webservers (1 apache, 1
IIS). Squid is configured to direct any requests for asp pages to iis and
the rest to the apache machine.

I have also configured squid to use https; the programmer has set up a 302
redirect on the iis machine so that visiting http://example.com/Login.aspx
redirects to https://example.com/Login.aspx. Squid redirects fine but after
that gives me a The page isn't redirecting properly error. Running wget shows
that squid is going into an endless loop. I have reproduced squid.conf and
also the wget output below.

$wget --no-check http://192.168.124.41/Login.aspx
--2012-03-12 11:06:53--  http://192.168.124.41/Login.aspx
Connecting to 192.168.124.41:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://example.com/Login.aspx [following]
--2012-03-12 11:06:53--  https://example.com/Login.aspx
Resolving example.com... 192.168.124.41
Connecting to example.com|192.168.124.41|:443... connected.
WARNING: cannot verify example.com’s certificate, issued by
“/C=IN/ST=AP/L=Default City/O=Default Company
Ltd/CN=example.com/emailAddress=ad...@example.com”:
   Unable to locally verify the issuer’s authority.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://example.com/memberplanet/Login.aspx [following]

and so on..


The problem is that Squid is sending HTTPS traffic to an HTTP port on 
IIS. Requests to origin servers do not include anything specifically 
saying HTTP or HTTPS. The server tells that from the port it is receiving 
the request on.


There is a trick you can add to your squid.conf to split traffic between 
two ports on the IIS peer




##
squid.conf
#
http_port 192.168.124.41:80 accel defaultsite=example.com

https_port 192.168.124.41:443 accel
cert=/usr/newrprgate/CertAuth/testcert.cert
key=/usr/newrprgate/CertAuth/testkey.pem defaultsite=example.com

acl rx_aspx urlpath_regex -i \.asp[x]*


acl HTTPS proto HTTPS


cache_peer 192.168.124.169 parent 80 0 no-query no-digest originserver
name=aspserver

cache_peer_access aspserver deny HTTPS


cache_peer_access aspserver allow rx_aspx
cache_peer_access aspserver deny all


cache_peer 192.168.124.169 parent 443 0 no-query no-digest originserver 
name=aspserverSSL

cache_peer_access aspserverSSL allow  HTTPS rx_aspx
cache_peer_access aspserverSSL deny all




cache_peer 127.0.0.1 parent 80 0 no-query originserver name=wb1
cache_peer_access wb1 deny rx_aspx

acl origin_servers dstdomain .example.com
http_access allow origin_servers
http_access deny all
###

I'd appreciate it if someone could give me some clues as to what I'm doing
wrong.



That should fix the looping.

Amos


[squid-users] squid 3.1 - endless loop IIS webserver

2012-03-11 Thread kadvar
Hi,

I have searched for other posts with the same problem but the workarounds
that worked for them didn't work for me. I am trying to configure a squid
reverse proxy with ssl support. I have squid on 192.168.124.41 with apache
on 127.0.0.1 on the same box. I also have two other webservers (1 apache, 1
IIS). Squid is configured to direct any requests for asp pages to iis and
the rest to the apache machine.

I have also configured squid to use https; the programmer has set up a 302
redirect on the iis machine so that visiting http://example.com/Login.aspx
redirects to https://example.com/Login.aspx. Squid redirects fine but after
that gives me a The page isn't redirecting properly error. Running wget shows
that squid is going into an endless loop. I have reproduced squid.conf and
also the wget output below.

$wget --no-check http://192.168.124.41/Login.aspx
--2012-03-12 11:06:53--  http://192.168.124.41/Login.aspx
Connecting to 192.168.124.41:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://example.com/Login.aspx [following]
--2012-03-12 11:06:53--  https://example.com/Login.aspx
Resolving example.com... 192.168.124.41
Connecting to example.com|192.168.124.41|:443... connected.
WARNING: cannot verify example.com’s certificate, issued by
“/C=IN/ST=AP/L=Default City/O=Default Company
Ltd/CN=example.com/emailAddress=ad...@example.com”:
  Unable to locally verify the issuer’s authority.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://example.com/memberplanet/Login.aspx [following]
 
and so on..

##
squid.conf
#
http_port 192.168.124.41:80 accel defaultsite=example.com

https_port 192.168.124.41:443 accel
cert=/usr/newrprgate/CertAuth/testcert.cert
key=/usr/newrprgate/CertAuth/testkey.pem defaultsite=example.com

acl rx_aspx urlpath_regex -i \.asp[x]*

cache_peer 192.168.124.169 parent 80 0 no-query no-digest originserver
name=aspserver
cache_peer_access aspserver allow rx_aspx
cache_peer_access aspserver deny all

cache_peer 127.0.0.1 parent 80 0 no-query originserver name=wb1
cache_peer_access wb1 deny rx_aspx

acl origin_servers dstdomain .example.com
http_access allow origin_servers
http_access deny all
###

I'd appreciate it if someone could give me some clues as to what I'm doing
wrong.

Thanks,
Adi

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-1-endless-loop-IIS-webserver-tp4465329p4465329.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid 3.1.x and detect/disable http tunneling over proxe web sites

2012-03-08 Thread Josef Karliak

  Good afternoon,
  is it possible to somehow detect (and disable) tunneling of regular  
web traffic through proxy web sites? For example a porn web site through  
hidemyass.com. There are a lot of web proxies; we couldn't locate  
every one and disable them :). How do you solve it?

  Thanks and best regards
  J.K.

--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policy and checking. If you have problems delivering email to me, start
using the email origin verification methods mentioned above. Thank you.


This message was sent using IMP, the Internet Messaging Program.





Re: [squid-users] Squid 3.1.x and detect/disable http tunneling over proxe web sites

2012-03-08 Thread Amos Jeffries

On 9/03/2012 1:01 a.m., Josef Karliak wrote:

  Good afternoon,
  is it possible to somehow detect (and disable) tunneling of regular 
web traffic through proxy web sites? For example a porn web site through 
hidemyass.com. There are a lot of web proxies; we couldn't locate 
every one and disable them :). How do you solve it?

  Thanks and best regards
  J.K.



It is not possible to get them all. You can look for public lists and/or 
commercial lists. Even so it is a full time job or more just to stay 
updated.


The better solution is to work out policies that the users can agree to 
and are willing to work within. Educate where possible about why you do the 
things you need to do and what the benefits are for the users in 
following along. And get management on-side to assist with enforcing 
restrictions when people are caught going against the agreement. A 
policy without teeth is just so much hot air.


Compare your network setup against 
http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers#Recommended_network_configuration 
to see if you have missed a useful layer.


Amos



Re: [squid-users] Squid 3.1.x and detect/disable http tunneling over proxe web sites

2012-03-08 Thread Helmut Hullen
Hello, Josef,

You wrote on 08.03.12:

is it possible to somehow detect (and disable) tunneling of regular
 web traffic through proxy web sites? For example a porn web site through
 hidemyass.com. There are a lot of web proxies; we couldn't locate
 every one and disable them :). How do you solve it?

I use squidGuard with its database, e.g. for porn and/or proxies. It's  
simple to use under squid.

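A minimal hook-up sketch, assuming distribution-default paths and that
your blacklist provides the matching categories:

# squid.conf
url_rewrite_program /usr/sbin/squidGuard -c /etc/squidguard/squidGuard.conf
url_rewrite_children 8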
Best regards!
Helmut


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-12 Thread 巍俊葛
Thanks Amos,

Currently we use a VM (VMware) to host RHEL with squid running.
I changed the back-end site to a simple IIS test web site which is
hosted on the same IIS system.
It's just a png image file, and it seems to be working.

On the RHEL side, there are no iptables limitations on outgoing traffic.

Regards,
~Kimi


On 12/01/2012, Amos Jeffries squ...@treenet.co.nz wrote:
 On 12.01.2012 02:28, kimi ge wrote:
 Hi Amos,

 Really appreciate your help.

 I made the changes you suggested.

 Some debug logs are here:

 2012/01/11 13:21:58.167| The request GET
 http://ids-ams.elabs.eds.com/
 is ALLOWED, because it matched 'origin_servers'

 2012/01/11 13:21:58.168| client_side_request.cc(547)
 clientAccessCheck2: No adapted_http_access configuration.

 2012/01/11 13:21:58.168| The request GET
 http://ids-ams.elabs.eds.com/
 is ALLOWED, because it matched 'origin_servers'

 2012/01/11 13:21:58.170| ipcacheMarkBadAddr:
 wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

 2012/01/11 13:21:58.171| TCP connection to
 wtestsm1.asiapacific.hpqcorp.net/80 failed


 There you go. Squid is unable to even connect to the IIS server using TCP.

 Bit strange that it should use 404 instead of 500 status. But that TCP
 connection failure is the problem.

 snip
 My squid environment information:
 RHEL6.0 64bit.
 squid v 3.1.4

 A very outdated Squid release version, even for RHEL (which is on
 3.1.8 or so now).

 * start with checking your firewall and packet routing configurations
 to ensure that Squid outgoing traffic is actually allowed and able to
 connect to IIS.

   * if that does not resolve the problem, please try a newer 3.1
 release. You will likely have to self-build or use non-RHEL RPM, there
 seem to be no recent packages for RHEL.


 Amos




[squid-users] R: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread Guido Serassio
Hi,

Look at this bug:
http://bugs.squid-cache.org/show_bug.cgi?id=3141

Likely it's the same problem.
I hope that it will be fixed in the upcoming 3.2.

Regards

Guido Serassio
Acme Consulting S.r.l.
Microsoft Silver Certified Partner
VMware Professional Partner
Via Lucia Savarino, 1 - 10098 Rivoli (TO) - ITALY
Tel. : +39.011.9530135   Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it


 -----Original Message-----
 From: kimi ge(巍俊葛) [mailto:weiju...@gmail.com]
 Sent: Wednesday, 11 January 2012 8:47
 To: Amos Jeffries
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.
 
 Thanks Amos.
 
 I did a lynx test of the back-end web site on the squid system like this:
 sudo lynx http://wtestsm1.asiapacific.hpqcorp.net
 
 First, it shows the message:
 Alert!: Invalid header 'WWW-Authenticate: NTLM'
 
 Then it shows the following message:
 Show the 401 message body? (y/n)
 
 By domain auth I mean the back-end web site needs a corp domain
 user for access.
 To put it this way: if I log on with my corp domain on my laptop,
 then I can access IIS SharePoint without any credentials window popping
 up. If not, I have to enter my domain account in a credentials window to
 access the SharePoint site.
 
 
 The following is my squid configuration for this case; I have omitted
 some default sections.
 #added by kimi
 acl hpnet src 16.0.0.0/8# RFC1918 possible internal network
 #added by kimi
 acl origin_servers dstdomain ids-ams.elabs.eds.com
 http_access allow origin_servers
 http_access allow hpnet
 
 http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com
 connection-auth=on
 
 forwarded_for on
 
 request_header_access WWW-Authenticate allow all
 
 cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query
 no-digest originserver name=main connection-auth=on login=PASS
 
 cache_peer_domain main .elabs.eds.com
 
 hierarchy_stoplist cgi-bin ?
 
 coredump_dir /var/spool/squid
 
 # Add any of your own refresh_pattern entries above these.
 refresh_pattern ^ftp:   144020% 10080
 refresh_pattern ^gopher:14400%  1440
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern .   0   20% 4320
 
 cache_dir aufs /data/squid/cache 12000 64 256
 cache_mem 1024 MB
 maximum_object_size_in_memory 1024 KB
 maximum_object_size 51200 KB
 
 visible_hostname ids-ams.elabs.eds.com
 debug_options ALL,5
 http_access deny all
 
 While squid is running, I do a test like this:
 http://ids-ams.elabs.eds.com
 
 The 404 error page is shown.
 That's why I am wondering whether squid can act as a reverse-proxy with
 IIS SharePoint as the back-end.
 
 Thanks,
 ~Kimi
 
 
 
 On 11/01/2012, Amos Jeffries squ...@treenet.co.nz wrote:
  On 11/01/2012 6:28 p.m., kimi ge(巍俊葛) wrote:
  Hi,
 
   I have an issue making squid 3.1.x work with IIS SharePoint as the
  back-end.
  The details are listed below.
 
  1. squid 3.1.x is running as a reverse-proxy.
  2. The back-end is an IIS SharePoint site with domain authentication
  required.
   That means only a valid domain user can access this SharePoint site.
   The issue is it always returns a 404 error page. And the logon window is
   not prompted.
 
  What is this domain authentication you mention? All of the HTTP auth
  mechanisms count as domain auth to a reverse proxy, and none of them
  are named Domain.
 
 
   My question is whether squid supports this kind of case or not?
   If it does, how should I configure the squid.conf file?
 
Thanks in advance.
~Kimi
 
  404 status is about the resource being requested _not existing_. Login
  only operates when there is something to be authorized for fetching. So I
  think auth is not relevant at this point in your testing.
 
  Probably the URL being passed to IIS is not what you are expecting to be
  passed and IIS is not set up to handle it. You will need to share your
  squid.conf details for more help.
 
  Amos
 


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread Amos Jeffries

On 11/01/2012 8:46 p.m., kimi ge(巍俊葛) wrote:

Thanks Amos.

I did a lynx test of the back-end web site on the squid system like this:
sudo lynx http://wtestsm1.asiapacific.hpqcorp.net

First, it shows the message:
Alert!: Invalid header 'WWW-Authenticate: NTLM'

Then it shows the following message:
Show the 401 message body? (y/n)


Aha. NTLM authentication. Very probably it's that login=PASS then.



By domain auth I mean the back-end web site needs a corp domain
user for access.
To put it this way: if I log on with my corp domain on my laptop,
then I can access IIS SharePoint without any credentials window popping
up. If not, I have to enter my domain account in a credentials window to
access the SharePoint site.


The following is my squid configuration for this case; I have omitted
some default sections.
#added by kimi
acl hpnet src 16.0.0.0/8# RFC1918 possible internal network
#added by kimi
acl origin_servers dstdomain ids-ams.elabs.eds.com
http_access allow origin_servers
http_access allow hpnet

http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com
connection-auth=on

forwarded_for on

request_header_access WWW-Authenticate allow all


This is not needed. The Squid default is to relay www-auth headers 
through. www-authenticate is a reply header anyway, to inform the client 
agent what types of auth it can use.




cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query
no-digest originserver name=main connection-auth=on login=PASS


connection-auth=on should be enough. Try without login=PASS.
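i.e. a sketch of the peer line with login=PASS dropped:

cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query
no-digest originserver name=main connection-auth=on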



cache_peer_domain main .elabs.eds.com

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

cache_dir aufs /data/squid/cache 12000 64 256
cache_mem 1024 MB
maximum_object_size_in_memory 1024 KB
maximum_object_size 51200 KB

visible_hostname ids-ams.elabs.eds.com
debug_options ALL,5
http_access deny all

While squid is running, I do a test like this:
http://ids-ams.elabs.eds.com

The 404 error page is shown.


Okay. Which error page?  Squid sends three different ones with that 
status code. "Invalid request" or "Invalid URL" or something else?



That's why I am wondering whether squid can act as a reverse-proxy with
IIS SharePoint as the back-end.


It can be. There is normally no trouble. But the newer features MS have 
been adding for IPv6 and cloud support recently are not widely tested yet.


Amos


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread 巍俊葛
Hi Amos,

Really appreciate your help.

I made the changes you suggested.

Some debug logs are here:

2012/01/11 13:21:58.167| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.168| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:21:58.168| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.170| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.171| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.171| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.177| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.177| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.177| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.183| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.184| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.184| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.190| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.191| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.191| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.197| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.197| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.197| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.203| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.204| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.204| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.210| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.210| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.210| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.216| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.216| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.217| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.222| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.223| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.223| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.229| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.229| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.229| Detected DEAD Parent: main

2012/01/11 13:21:58.229| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.235| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.236| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed

2012/01/11 13:21:58.236| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 dead

2012/01/11 13:21:58.236| fwdServerClosed: FD 11 http://ids-ams.elabs.eds.com/

2012/01/11 13:21:58.238| The reply for GET
http://ids-ams.elabs.eds.com/ is ALLOWED, because it matched 'all'

2012/01/11 13:21:58.240| ConnStateData::swanSong: FD 9

2012/01/11 13:22:07.406| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:07.406| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:22:07.406| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:07.407| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:22:07.408| Failed to select source for
'http://ids-ams.elabs.eds.com/'

2012/01/11 13:22:07.408|   always_direct = 0

2012/01/11 13:22:07.408|never_direct = 0

2012/01/11 13:22:07.408|timedout = 0

2012/01/11 13:22:07.410| The reply for GET
http://ids-ams.elabs.eds.com/ is ALLOWED, because it matched 'all'

2012/01/11 13:22:07.410| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 dead

2012/01/11 13:22:07.412| ConnStateData::swanSong: FD 9

2012/01/11 13:22:09.381| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:09.381| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:22:09.381| The request GET http://ids-ams.elabs.eds.com/
is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:22:09.383| ipcacheMarkBadAddr:

Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-11 Thread Amos Jeffries

On 12.01.2012 02:28, kimi ge wrote:

Hi Amos,

Really appreciate your help.

I made the changes you suggested.

Some debug logs are here:

2012/01/11 13:21:58.167| The request GET 
http://ids-ams.elabs.eds.com/

is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.168| client_side_request.cc(547)
clientAccessCheck2: No adapted_http_access configuration.

2012/01/11 13:21:58.168| The request GET 
http://ids-ams.elabs.eds.com/

is ALLOWED, because it matched 'origin_servers'

2012/01/11 13:21:58.170| ipcacheMarkBadAddr:
wtestsm1.asiapacific.hpqcorp.net 16.173.232.237:80

2012/01/11 13:21:58.171| TCP connection to
wtestsm1.asiapacific.hpqcorp.net/80 failed



There you go. Squid is unable to even connect to the IIS server using TCP.

Bit strange that it should use 404 instead of 500 status. But that TCP 
connection failure is the problem.


snip

My squid environment information:
RHEL6.0 64bit.
squid v 3.1.4


A very outdated Squid release version, even for RHEL (which is on 
3.1.8 or so now).


* start with checking your firewall and packet routing configurations 
to ensure that Squid outgoing traffic is actually allowed and able to 
connect to IIS.


 * if that does not resolve the problem, please try a newer 3.1 
release. You will likely have to self-build or use non-RHEL RPM, there 
seem to be no recent packages for RHEL.
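(Re the first point: a quick way to verify it from the squid host itself,
with Squid out of the picture -- a sketch:

telnet wtestsm1.asiapacific.hpqcorp.net 80
# or: nc -vz wtestsm1.asiapacific.hpqcorp.net 80

If these cannot connect either, the problem is routing or firewall, not
Squid.)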



Amos



[squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-10 Thread 巍俊葛
Hi,

 I have an issue making squid 3.1.x work with IIS SharePoint as the
 back-end.
The details are listed below.

1. squid 3.1.x is running as a reverse-proxy.
2. The back-end is an IIS SharePoint site with domain authentication required.
 That means only a valid domain user can access this SharePoint site.
 The issue is it always returns a 404 error page. And the logon window is
 not prompted.

 My question is whether squid supports this kind of case or not?
 If it does, how should I configure the squid.conf file?

 Thanks in advance.
 ~Kimi


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-10 Thread Amos Jeffries

On 11/01/2012 6:28 p.m., kimi ge(巍俊葛) wrote:

Hi,

  I have an issue making squid 3.1.x work with IIS SharePoint as the
  back-end.
The details are listed below.

1. squid 3.1.x is running as a reverse-proxy.
2. The back-end is an IIS SharePoint site with domain authentication required.
  That means only a valid domain user can access this SharePoint site.
  The issue is it always returns a 404 error page. And the logon window is
  not prompted.


What is this domain authentication you mention? All of the HTTP auth 
mechanisms count as domain auth to a reverse proxy, and none of them 
are named Domain.




  My question is whether squid supports this kind of case or not?
  If it does, how should I configure the squid.conf file?

  Thanks in advance.
  ~Kimi


404 status is about the resource being requested _not existing_. Login 
only operates when there is something to be authorized for fetching. So I 
think auth is not relevant at this point in your testing.


Probably the URL being passed to IIS is not what you are expecting to be 
passed and IIS is not set up to handle it. You will need to share your 
squid.conf details for more help.


Amos


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-10 Thread 巍俊葛
Thanks Amos.

I did a lynx test of the back-end web site on the squid system like this:
sudo lynx http://wtestsm1.asiapacific.hpqcorp.net

First, it shows the message:
Alert!: Invalid header 'WWW-Authenticate: NTLM'

Then it shows the following message:
Show the 401 message body? (y/n)

By domain auth I mean the back-end web site needs a corp domain
user for access.
To put it this way: if I log on with my corp domain on my laptop,
then I can access IIS SharePoint without any credentials window popping
up. If not, I have to enter my domain account in a credentials window to
access the SharePoint site.


The following is my squid configuration for this case; I have omitted
some default sections.
#added by kimi
acl hpnet src 16.0.0.0/8# RFC1918 possible internal network
#added by kimi
acl origin_servers dstdomain ids-ams.elabs.eds.com
http_access allow origin_servers
http_access allow hpnet

http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com
connection-auth=on

forwarded_for on

request_header_access WWW-Authenticate allow all

cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query
no-digest originserver name=main connection-auth=on login=PASS

cache_peer_domain main .elabs.eds.com

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

cache_dir aufs /data/squid/cache 12000 64 256
cache_mem 1024 MB
maximum_object_size_in_memory 1024 KB
maximum_object_size 51200 KB

visible_hostname ids-ams.elabs.eds.com
debug_options ALL,5
http_access deny all

While squid is running, I do a test like this:
http://ids-ams.elabs.eds.com

The 404 error page is shown.
That's why I am wondering whether squid can act as a reverse-proxy with
IIS SharePoint as the back-end.

Thanks,
~Kimi



On 11/01/2012, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/01/2012 6:28 p.m., kimi ge(巍俊葛) wrote:
 Hi,

   I have an issue making squid 3.1.x work with IIS SharePoint as the
   back-end.
 The details are listed below.

 1. squid 3.1.x is running as a reverse-proxy.
 2. The back-end is an IIS SharePoint site with domain authentication
 required.
   That means only a valid domain user can access this SharePoint site.
   The issue is it always returns a 404 error page. And the logon window is
   not prompted.

 What is this domain authentication you mention? All of the HTTP auth
 mechanisms count as domain auth to a reverse proxy, and none of them
 are named Domain.


   My question is whether squid supports this kind of case or not?
   If it does, how should I configure the squid.conf file?

   Thanks in advance.
   ~Kimi

 404 status is about the resource being requested _not existing_. Login
 only operates when there is something to be authorized for fetching. So I
 think auth is not relevant at this point in your testing.

 Probably the URL being passed to IIS is not what you are expecting to be
 passed and IIS is not set up to handle it. You will need to share your
 squid.conf details for more help.

 Amos



[squid-users] Squid 3.1.x and authentification against AD Windows 2008R2

2011-12-19 Thread Josef Karliak

  Hi there,
  We plan to activate authorization of users to the internet  
against Windows AD, running on Windows Server 2008R2. I'm running  
squid on openSUSE 11.4 64-bit. I've found some how-tos; many of them  
solve it with ntlm-auth (not in openSUSE, but there is a similarly named  
ntlm_smb_lm_auth for squid, I suppose). Another choice is via LDAP.
  What is better? What are your experiences or recommendations?  
And - please - some step-by-step how-to ...

   Thanks and best regards
   J.Karliak.


--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policy and checking. If you have problems delivering email to me, start
using the email origin verification methods mentioned above. Thank you.



This message was sent using IMP, the Internet Messaging Program.





Re: [squid-users] Squid 3.1.x and authentification against AD Windows 2008R2

2011-12-19 Thread Amos Jeffries

On 19/12/2011 9:00 p.m., Josef Karliak wrote:

  Hi there,
  We plan to activate authorization of users to the internet 
against Windows AD, running on Windows Server 2008R2. I'm running 
squid on openSUSE 11.4 64-bit. I've found some how-tos; many of them 
solve it with ntlm-auth (not in openSUSE, but there is a similarly named 
ntlm_smb_lm_auth for squid, I suppose


Nope. ntlm_smb_lm_auth does the ancient LM-over-SMB protocol (using 
the HTTP NTLM auth scheme) for Windows98/CE/ME and similar older 
software, and is considered dangerous to use in today's network environment. 
NTLM is best done using the ntlm_auth helper from the Samba project.  An 
even better alternative, if you can use it, is Kerberos authentication, 
which is supported by WindowsXP SP2 and later software.



). Another choice is via LDAP.
  What is better? What are your experiences or recommendations? And 
- please - some step-by-step how-to ...


LDAP is just the interface to the credentials database. It can be used 
with most of the auth schemes in HTTP.


The recommendation in this area is to go with whichever AD interface you 
are most familiar with and can implement securely. Pick the auth 
scheme(s) to suit your needs, then find which helper(s) plug the two 
together.



http://wiki.squid-cache.org/Features/Authentication has the overview of 
how auth works for Squid and link for more info and the config examples.
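For example, a minimal Basic-over-LDAP sketch -- the squid_ldap_auth
helper ships with 3.1 (its path varies by distro), and every DN, host
name and password below is a placeholder to adapt:

auth_param basic program /usr/sbin/squid_ldap_auth -R -b "dc=example,dc=local" -D "cn=proxy,cn=Users,dc=example,dc=local" -w secret -f "sAMAccountName=%s" -h ad.example.local
auth_param basic realm Internet access
acl ad_users proxy_auth REQUIRED
http_access allow ad_users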


Amos


Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-02 Thread Josef Karliak

  Hi,
  I use a 64-bit machine, an HP DL380 G7. I thought it would be  
better to use tmpfs (part of the memory). After a reboot it is clean and  
empty, and squid creates the directories again automatically.

  So you recommend using only a little disk capacity and setting caching to memory only?
  Thanks
  J.K.

Cituji Matus UHLAR - fantomas uh...@fantomas.sk:


On 01.12.11 15:05, Josef Karliak wrote:
I want to use tmpfs for the squid cache; is 8GB enough or too big? We have  
about 3000 computers behind squid; 16GB is sufficient for the OS,  
which is why I used 8GB for the squid tmpfs.


what is the point of using tmpfs as squid cache? I think using only  
memory cache would be much more efficient (unless you are running  
32-bit squid).

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety. -- Benjamin Franklin, 1759





--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policy and checking. If you have problems delivering email to me, start
using the email origin verification methods mentioned above. Thank you.



This message was sent using IMP, the Internet Messaging Program.





Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-02 Thread Amos Jeffries

On 2/12/2011 11:10 p.m., Josef Karliak wrote:

  Hi,
  I use a 64-bit machine, an HP DL380 G7. I thought it would be 
better to use tmpfs (part of the memory). After a reboot it is clean and 
empty, and squid creates the directories again automatically.
  So you recommend using only a little disk capacity and setting caching to 
memory only?


Yes.

Amos



[squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-01 Thread Josef Karliak

  Hi there,
  I want to use tmpfs for the squid cache; is 8GB enough or too big? We have  
about 3000 computers behind squid; 16GB is sufficient for the OS, which is  
why I used 8GB for the squid tmpfs.

  Thanks for answers.
  J.K.

--
My domain uses SPF (www.openspf.org) and DomainKeys/DKIM (with ADSP)
policy and checking. If you have problems delivering email to me, start
using the email origin verification methods mentioned above. Thank you.



This message was sent using IMP, the Internet Messaging Program.





Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-01 Thread Matus UHLAR - fantomas

On 01.12.11 15:05, Josef Karliak wrote:
 I want to use tmpfs for the squid cache; is 8GB enough or too big? We have 
about 3000 computers behind squid; 16GB is sufficient for the OS, which is 
why I used 8GB for the squid tmpfs.


what is the point of using tmpfs as squid cache? I think using only memory 
cache would be much more efficient (unless you are running 32-bit 
squid).

--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety. -- Benjamin Franklin, 1759


Re: [squid-users] Squid 3.1.x and right configuration parameters for tmpfs 8GB

2011-12-01 Thread Amos Jeffries

On 2/12/2011 5:13 a.m., Matus UHLAR - fantomas wrote:

On 01.12.11 15:05, Josef Karliak wrote:
 I want to use tmpfs for the squid cache; is 8GB enough or too big? We have 
about 3000 computers behind squid; 16GB is sufficient for the OS, which is 
why I used 8GB for the squid tmpfs.


what is the point of using tmpfs as squid cache? I think using only 
memory cache would be much more efficient (unless you are running 
32-bit squid).


Yes, consider why a disk cache is better than a RAM cache: 
objects are not erased when Squid or the system restarts.


== tmpfs data is erased when Squid or the system restarts. So why bother?

All you gain from tmpfs is a drop in speed accessing the data, from RAM 
speeds down to disk speeds - whether it is SSD or HDD, that is slower 
than RAM.

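A memory-only sketch, assuming the 3.1 default that omitting cache_dir
leaves no disk cache at all:

# no cache_dir directive; cache in RAM only
cache_mem 8192 MB
maximum_object_size_in_memory 512 KB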

Amos



[squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread Bartschies, Thomas
Hi,

our setup is:
Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS. 
In Firefox we've already set:
network.automatic-ntlm-auth.trusted-uris to the server address
network.automatic-ntlm-auth.allow-proxies = true (default)

in squid.conf, we've tried some combinations of the following settings;
the current settings are:
client_persistent_connections on
server_persistent_connections on
pipeline_prefetch off

Every time we try to connect to the sharepoint site, the browser
authentication box pops up. Even when we supply
correct credentials, the request for them pops up again, making it
impossible to log on to the site.

Internet Explorer 8/9 works fine. Google Chrome 15 also requests
credentials once and then logon works.

First question is: Should this even work with Firefox, or is it known
not to?

If it should work, what other settings we've possibly missed?

Connection pinning seems to be working, if I'm reading the traces
correctly. Sharepoint answers with HTTP Code 401.

Our Proxy Setup is open. There are absolutely no client address
restrictions and we're also not using proxy authentication.
So there's no ntlm_auth helper in use.

Kind regards,
Thomas


Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread Amos Jeffries

On 9/11/2011 1:11 a.m., Bartschies, Thomas wrote:

Hi,

our setup is:
Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS.
In Firefox we've already set:
network.automatic-ntlm-auth.trusted-uris to the server address
network.automatic-ntlm-auth.allow-proxies = true (default)

in squid.conf, we've tried some combinations of the following settings;
the current settings are:
client_persistent_connections on
server_persistent_connections on


Right the above need to be on for NTLM to work properly.


pipeline_prefetch off

Every time we try to connect to the sharepoint site, the browser
authentication box pops up. Even when we supply
correct credentials, the request for them pops up again, making it
impossible to log on to the site.

Internet Explorer 8/9 works fine. Google Chrome 15 also requests
credentials once and then logon works.

First question is: Should this even work with Firefox, or is it known
not to?


It is known to work as seamlessly as IE when set up properly.

This sounds like



If it should work, what other settings we've possibly missed?


There is nothing special for Firefox. Since the other browsers are 
working fine (through the proxy?) it suggests a config issue setting up 
firefox.




Connection pinning seems to be working, if I'm reading the traces
correctly. Sharepoint answers with HTTP Code 401.

Our Proxy Setup is open. There are absolutely no client address
restrictions and we're also not using proxy authentication.
So there's no ntlm_auth helper in use.

Kind regards,
Thomas


Amos


Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread Bartschies, Thomas

Hi,

I should add that we're running squid NOT in transparent mode and that the proxy 
port is 8080, NOT 80 as one may have guessed.
I don't know of any other Firefox config settings than the ones I've already 
mentioned, with the exception of the network settings for
Kerberos authentication. The squid traces clearly show that NTLM 
authentication is used, so Kerberos shouldn't be relevant.

Here is an excerpt from my config, without some access rules and acls. Even 
without the cache_peer, no change.

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443 1025-65535 22
acl Safe_ports port 80 81 83 85 # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 22  # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 631 # cups
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
http_access deny msnmessenger
http_access deny to_localhost
http_access allow localhost
http_access allow manager localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
cache_peer 10.x.x.x parent 8080 3130 no-query default
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
http_reply_access deny aim_http
http_reply_access allow all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 8080 connection-auth=on
hierarchy_stoplist cgi-bin ?
cache_mem 500 MB
cache_dir aufs /var/spool/squid 2000 16 256
maximum_object_size 1 KB
ftp_list_width 64
url_rewrite_children 15
url_rewrite_access deny localhost
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern (cgi-bin|\?)0   0%  0
refresh_pattern .   0   20% 4320
quick_abort_pct 95
negative_dns_ttl 1 seconds
request_header_access Accept-Encoding deny support.microsoft.com
reply_header_access Accept-Encoding deny support.microsoft.com
forward_timeout 15 minutes
request_timeout 30 minutes
shutdown_lifetime 10 seconds
client_persistent_connections on
server_persistent_connections on
log_icp_queries off
error_directory /usr/share/squid/errors/de
always_direct allow local-intranet
icap_enable off
icap_preview_enable on
icap_preview_size 128
icap_send_client_ip on
dns_nameservers 127.0.0.1 212.202.215.1 212.202.215.2
ignore_unknown_nameservers off
forwarded_for off
pipeline_prefetch off
ignore_expect_100 on

Regards, Thomas

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, 8 November 2011 13:45
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

On 9/11/2011 1:11 a.m., Bartschies, Thomas wrote:
 Hi,

 our setup is:
 Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS.
 In Firefox we've already set:
 network.automatic-ntlm-auth.trusted-uris to the server address
 network.automatic-ntlm-auth.allow-proxies = true (default)

 in squid.conf, we've tried some combinations of the following settings;
 the current settings are:
 client_persistent_connections on
 server_persistent_connections on

Right the above need to be on for NTLM to work properly.

 pipeline_prefetch off

 Every time we try to connect to the sharepoint site, the browser
 authentication box pops up. Even when we supply
 correct credentials, the request for them pops up again, making it
 impossible to log on to the site.

 Internet Explorer 8/9 works fine. Google Chrome 15 also requests
 credentials once and then logon works.

 First question is: Should this even work with Firefox, or is it known
 not to?

It is known to work as seamlessly as IE when set up properly.

This sounds like


 If it should work, what other settings we've possibly missed?

There is nothing special for Firefox. Since the other browsers are 
working fine (through the proxy?) it suggests a config issue setting up 
firefox.


 Connection pinning seems to be working, if I'm reading the traces
 correctly. Sharepoint answers with HTTP Code 401.

 Our Proxy Setup is open. There are absolutely no client address
 restrictions and we're also not using proxy authentication.
 So there's no ntlm_auth helper in use.

 Kind regards,
 Thomas

Amos


Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread E.S. Rosenberg
For us, Firefox on Windows machines that are not in the domain only
worked properly when we switched it to using its own NTLM
implementation instead of the native one; this is done by setting
network.auth.force-generic-ntlm to true.

I am no big NTLM/AD guru (my field is the linux/unix machines in our
school), but from what I gleaned Mozilla encourages *not* using their
NTLM implementation since they see it as less secure than using the
native implementation. I could be wrong here; if anyone can
enlighten me I'd be happy :).

As far as I recall, on a Windows machine using native NTLM and not
in the domain you also have to add the domain part in front of the
username, because otherwise it sends the local machine name as the
'domain' (i.e. domain\username), but I think even with that it still
would continue to pop up when using native instead of mozilla.

I have also noticed that when using ntlm-auth on a client that is not
in the domain (windows/linux) you may be presented with multiple
authentication dialogs when you start to browse. My theory on that has
always been that the browser sends multiple requests, squid replies
to each request with a 407, and since the browser doesn't yet have
authentication details it fires up a dialog for every 407
received.

Hopefully this was helpful, good luck,
Eli

2011/11/8 Bartschies, Thomas thomas.bartsch...@cvk.de:

 Hi,

 I should add that we're running squid NOT in transparent mode and that the 
 proxy port is 8080, NOT 80 as one may have guessed.
 I don't know of any other Firefox config settings than the ones I've already 
 mentioned, with the exception of the network settings for
 Kerberos authentication. The squid traces clearly show that NTLM 
 authentication is used, so Kerberos shouldn't be relevant.

 Here is an excerpt from my config, without some access rules and acls. Even 
 without the cache_peer, no change.

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
 acl SSL_ports port 443 1025-65535 22
 acl Safe_ports port 80 81 83 85 # http
 acl Safe_ports port 21          # ftp
 acl Safe_ports port 443 22      # https, snews
 acl Safe_ports port 70          # gopher
 acl Safe_ports port 631         # cups
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280         # http-mgmt
 acl Safe_ports port 488         # gss-http
 acl Safe_ports port 591         # filemaker
 acl Safe_ports port 777         # multiling http
 http_access deny msnmessenger
 http_access deny to_localhost
 http_access allow localhost
 http_access allow manager localhost
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow all
 cache_peer 10.x.x.x parent 8080 3130 no-query default
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours
 auth_param basic casesensitive off
 http_reply_access deny aim_http
 http_reply_access allow all
 icp_access allow localnet
 icp_access deny all
 htcp_access allow localnet
 htcp_access deny all
 http_port 8080 connection-auth=on
 hierarchy_stoplist cgi-bin ?
 cache_mem 500 MB
 cache_dir aufs /var/spool/squid 2000 16 256
 maximum_object_size 1 KB
 ftp_list_width 64
 url_rewrite_children 15
 url_rewrite_access deny localhost
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern (cgi-bin|\?)    0       0%      0
 refresh_pattern .               0       20%     4320
 quick_abort_pct 95
 negative_dns_ttl 1 seconds
 request_header_access Accept-Encoding deny support.microsoft.com
 reply_header_access Accept-Encoding deny support.microsoft.com
 forward_timeout 15 minutes
 request_timeout 30 minutes
 shutdown_lifetime 10 seconds
 client_persistent_connections on
 server_persistent_connections on
 log_icp_queries off
 error_directory /usr/share/squid/errors/de
 always_direct allow local-intranet
 icap_enable off
 icap_preview_enable on
 icap_preview_size 128
 icap_send_client_ip on
 dns_nameservers 127.0.0.1 212.202.215.1 212.202.215.2
 ignore_unknown_nameservers off
 forwarded_for off
 pipeline_prefetch off
 ignore_expect_100 on

 Regards, Thomas

 -----Original Message-----
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Tuesday, 8 November 2011 13:45
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with 
 Firefox

 On 9/11/2011 1:11 a.m., Bartschies, Thomas wrote:
 Hi,

 our setup is:
 Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS.
 In Firefox we've set already:
 network.automatic-ntlm-auth.trusted-uris to the server address
 network.automatic-ntlm-auth.allow-proxies = true (default)

 in squid.conf, we've tried some combinations of the following settings,
 having the current settings this way:
 client_persistent_connections on
 server_persistent_connections on

 Right, the above need to be on for NTLM to work properly.
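
A minimal sketch of the directives involved, pulled together from the
config above (the port number is site-specific):

  client_persistent_connections on
  server_persistent_connections on
  http_port 8080 connection-auth=on

NTLM authenticates the TCP connection rather than each individual
request, so the handshake only survives while both persistent-connection
settings and connection-auth remain enabled.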

[squid-users] squid 3.1 (w/ TPROXY/WCCP) and increased 502 and 206 codes

2011-07-22 Thread Ritter, Nicholas
I am doing extended testing of a CentOS v6 TPROXY/SQUID3/WCCP setup and
I am noticing higher than usual TCP_MISS/502 codes. I am also seeing some
206 codes, but it is the 502s that are much higher than normal. I think
it is transport-related inside the TPROXY/SQUID side of things but I am
not sure.

I am seeing the 502 codes on both GETs and POSTs. Can anyone provide
more insight on this condition and what/where I should start
troubleshooting?

I am running the stock CentOS v6 kernel (2.6.32-71.29.1) and Squid
3.1.10 as packaged by RHEL 6 (specifically a RHEL 6 rebuilt source rpm of
squid-3.1.10-1.el6).

Should I update to the most recent release of squid 3.1 as a starting
point?

Nick



Re: [squid-users] squid 3.1 (w/ TPROXY/WCCP) and increased 502 and 206 codes

2011-07-22 Thread Amos Jeffries

On 23/07/11 04:24, Ritter, Nicholas wrote:

I should add one important point. When the error occurs, it most
often does not affect the entire site or transaction. That is to say,
I can visit a site, get content, and then at some point fill out a form
on the site, which then generates the 502. I don't want anyone to assume
that the 502 is being generated because of an obvious path connectivity
error where the site being surfed was down all along.

I should also note that I am not running any unique refresh patterns in
the squid.conf.

-Original Message-
From: Ritter, Nicholas [mailto:nicholas.rit...@americantv.com]
Sent: Friday, July 22, 2011 11:16 AM
To: squid-users@squid-cache.org
Subject: [squid-users] squid 3.1 (w/ TPROXY/WCCP) and increased 502 and
206 codes

I am doing extended testing of a CentOS v6 TPROXY/SQUID3/WCCP setup and
I am noticing higher than usual TCP_MISS/502 codes. I am also seeing some
206 codes, but it is the 502s that are much higher than normal. I think
it is transport-related inside the TPROXY/SQUID side of things but I am
not sure.

I am seeing the 502 codes on both GETs and POSTs. Can anyone provide
more insight on this condition and what/where I should start
troubleshooting?


With the message presented in that 502 error page. A 502 is sent on 
several outbound connection problems, from TCP connect through to reply 
parsing.




I am running the stock CentOS v6 kernel (2.6.32-71.29.1) and Squid
3.1.10 as packaged by RHEL 6 (specifically a RHEL 6 rebuilt source rpm of
squid-3.1.10-1.el6).

Should I update to the most recent release of squid 3.1 as a starting
point?



Always a good choice, to know if it's been fixed. Though I don't recall 
anything major having changed since .10 regarding connectivity.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9


[squid-users] Squid 3.1 failover problem from IPv6 to IPv4?

2011-06-22 Thread Peter Olsson
Hello!

We use Squid 3.1.12 on a couple of servers with IPv4 and IPv6.
The servers are FreeBSD 8.1. Squid is installed from ports.

This works fine, except for this website: www.informator.se
www.informator.se has an AAAA address, but it doesn't seem to
listen on it. Eventually the browser times out with this error:
(51) Network is unreachable

But shouldn't Squid try the IPv4 address when the IPv6 address
fails? If so, there is maybe something wrong with our config.
The only IPv6 specific config we have is this (taken from the
release notes of Squid 3.1):
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all
tcp_outgoing_address x:x:x::x to_ipv6
tcp_outgoing_address x.x.x.x !to_ipv6

Is the failure on www.informator.se a bug/feature in Squid,
or is the problem in our setup/config?

Thanks!

-- 
Peter Olsson    p...@leissner.se


Re: [squid-users] Squid 3.1 failover problem from IPv6 to IPv4?

2011-06-22 Thread Amos Jeffries

On 23/06/11 01:44, Peter Olsson wrote:

Hello!

We use Squid 3.1.12 on a couple of servers with IPv4 and IPv6.
The servers are FreeBSD 8.1. Squid is installed from ports.

This works fine, except for this website: www.informator.se
www.informator.se has an AAAA address, but it doesn't seem to
listen on it. Eventually the browser times out with this error:
(51) Network is unreachable



On BSD you should only hit this if the site has no A address either. 
Split-stack 3.1 uses IPv4-only links to servers unless the hack you 
found (below) is added.



But shouldn't Squid try the IPv4 address when the IPv6 address
fails? If so, there is maybe something wrong with our config.
The only IPv6 specific config we have is this (taken from the
release notes of Squid 3.1):
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all
tcp_outgoing_address x:x:x::x to_ipv6
tcp_outgoing_address x.x.x.x !to_ipv6

Is the failure on www.informator.se a bug/feature in Squid,
or is the problem in our setup/config?


That hack requires its http_access line to be run. So preferably it 
is placed at the top of the http_access list. This ensures that the 
destination IP is always resolved early in processing and, with luck, 
is available to the outgoing address selection.
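
As a placement sketch (keeping the placeholder addresses from the
release-notes hack):

  acl to_ipv6 dst ipv6
  # This line never actually matches (!all is always false); it exists
  # only to force the destination DNS lookup early.
  http_access allow to_ipv6 !all
  # ... the rest of the http_access rules follow ...
  tcp_outgoing_address x:x:x::x to_ipv6
  tcp_outgoing_address x.x.x.x !to_ipv6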


The solution to all these split-stack problems has just hit the 3.2 series 
this week in 3.2.0.9. We are working through a bunch of unexpected 
problems right now. Any help welcome.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3


Re: [squid-users] squid 3.1.

2011-06-11 Thread Amos Jeffries

On 11/06/11 05:39, Shenavandeh wrote:

Hi,

I have a squid installation that crashes twice a day under a load of
7 Mbps bandwidth, with the following message in squid.out and no
specific traces in cache.log

Startup: Fri Jun 10 15:46:20
dying from an unhandled exception: !theConsumer
terminate called after throwing an instance of 'TextException'
   what():  !theConsumer
Startup: Fri Jun 10 19:55:29

It is compiled using following options:
  sbin]# ./squid -v
Squid Cache: Version 3.1.12.1
configure options:  '--enable-linux-netfilter'
'--enable-storeio=ufs,aufs' '--enable-poll'
'--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
'--enable-ssl' '--enable-snmp' '--enable-removal-policies'
'--enable-gnuregex' '--with-large-files' '--enable-async-io'
'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
--enable-ltdl-convenience

the platform is as follows:

CPU :4 cores of  Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
RAM : 8GB
OS: CentOS 5.6 :
Kernel: Linux version 2.6.25 compiled with tproxy option.

the Squid configuration:

cache_mem 4000 MB

dead_peer_timeout 30 seconds
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY


hierarchy_stoplist and the QUERY bits are outdated. It is worth removing 
these.
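
For reference, the caching side of those lines (the QUERY acl) is
already covered by the dynamic-content rule further down this same
config:

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0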




maximum_object_size 50 MB
maximum_object_size_in_memory 500 KB
minimum_object_size 0 KB

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow to_localhost


This is dangerous.
 to_localhost is designed to match a class of attack signatures and 
prevent DoS. It is intended for use in a deny line.
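
A sketch of the intended usage, mirroring the default squid.conf:

  http_access deny to_localhost
  http_access allow localnet
  http_access deny all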





http_access allow localhost
http_access allow localnet
http_access allow to_localhost


A second allow to_localhost is useless. The first will stop processing 
when it gets tested and matches.




http_access deny all

http_port 3128 tproxy

hierarchy_stoplist cgi-bin ?


repeat directive, worth removing.



cache_dir aufs /cache 24000 16 256

coredump_dir cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

cache_effective_user squid
cache_effective_group squid

half_closed_clients off
buffered_logs on
client_db off
quick_abort_max 0 KB
quick_abort_min 0 KB
memory_pools off

cache_swap_high 95%
cache_swap_low 90
logfile_rotate 10%


logfile_rotate is not a percentage. It is a count of how many log files to 
keep. A new one is generated every time you run squid -k rotate
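
A corrected sketch (the count of 10 is an arbitrary example):

  logfile_rotate 10

and then, from cron or by hand:

  squid -k rotate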




visible_hostname        Cache


Interesting FQDN.

The admin@Cache contact address, for example, does not resolve here.



quick_abort_min 32 KB
quick_abort_max 32 KB
quick_abort_pct 95

negative_ttl 3 minutes
positive_dns_ttl 6 hours

pipeline_prefetch on

acl snmpkey snmp_community public
snmp_port 3401
snmp_access allow snmpkey localhost
snmp_access deny all

refresh_pattern -i
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$
26 90% 260009 override-expire
refresh_pattern -i
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26
90% 260009 override-expire


These refresh_pattern lines are useless. The query and dot patterns 
above catch all URLs in existence. Squid never gets past them to match these.





I would be most grateful if somebody helps me out.
Yours Faithfully,
---
Amir H Sh


A few seconds' search in bugzilla shows this:
 http://bugs.squid-cache.org/show_bug.cgi?id=3117

Perhaps you can help provide a trace (debug_options ALL,6) and help 
track down where it is coming from.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


Re: [squid-users] squid 3.1.

2011-06-11 Thread Shenavandeh
Hi Amos,

Thanks for your time and nice comments on the config file

It is impossible for me to use debug_options ALL,6 under such load.
The log file grows wildly and quickly eats the whole hard disk!
Any other solutions to capture the log?

Yours Faithfully,
---
Amir H Shenavandeh




On 11 June 2011 10:35, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/06/11 05:39, Shenavandeh wrote:

 Hi,

 I have a squid installation that crashes twice a day under a load of
 7 Mbps bandwidth, with the following message in squid.out and no
 specific traces in cache.log

 Startup: Fri Jun 10 15:46:20
 dying from an unhandled exception: !theConsumer
 terminate called after throwing an instance of 'TextException'
   what():  !theConsumer
 Startup: Fri Jun 10 19:55:29

 It is compiled using following options:
  sbin]# ./squid -v
 Squid Cache: Version 3.1.12.1
 configure options:  '--enable-linux-netfilter'
 '--enable-storeio=ufs,aufs' '--enable-poll'
 '--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
 '--enable-ssl' '--enable-snmp' '--enable-removal-policies'
 '--enable-gnuregex' '--with-large-files' '--enable-async-io'
 'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
 --enable-ltdl-convenience

 the platform is as follows:

 CPU :4 cores of  Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
 RAM : 8GB
 OS: CentOS 5.6 :
 Kernel: Linux version 2.6.25 compiled with tproxy option.

 the Squid configuration:

 cache_mem 4000 MB

 dead_peer_timeout 30 seconds
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 no_cache deny QUERY

 hierarchy_stoplist and the QUERY bits are outdated. It is worth removing
 these.


 maximum_object_size 50 MB
 maximum_object_size_in_memory 500 KB
 minimum_object_size 0 KB

 cache_replacement_policy heap LFUDA
 memory_replacement_policy heap LRU

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32 ::1
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


 acl localnet src 10.0.0.0/8    # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12    # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16    # RFC1918 possible internal network
 acl localnet src fc00::/7       # RFC 4193 local private network range
 acl localnet src fe80::/10      # RFC 4291 link-local (directly
 plugged) machines

 acl SSL_ports port 443
 acl Safe_ports port 80        # http
 acl Safe_ports port 21        # ftp
 acl Safe_ports port 443        # https
 acl Safe_ports port 70        # gopher
 acl Safe_ports port 210        # wais
 acl Safe_ports port 1025-65535    # unregistered ports
 acl Safe_ports port 280        # http-mgmt
 acl Safe_ports port 488        # gss-http
 acl Safe_ports port 591        # filemaker
 acl Safe_ports port 777        # multiling http
 acl CONNECT method CONNECT

 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow to_localhost

 This is dangerous.
  to_localhost is designed to match a class of attack signatures and prevent
 DoS. It is intended for use in a deny line.



 http_access allow localhost
 http_access allow localnet
 http_access allow to_localhost

 A second allow to_localhost is useless. The first will stop processing
 when it gets tested and matches.


 http_access deny all

 http_port 3128 tproxy

 hierarchy_stoplist cgi-bin ?

 repeat directive, worth removing.


 cache_dir aufs /cache 24000 16 256

 coredump_dir cache

 # Add any of your own refresh_pattern entries above these.
 refresh_pattern ^ftp:        1440    20%    10080
 refresh_pattern ^gopher:    1440    0%    1440
 refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
 refresh_pattern .        0    20%    4320

 cache_effective_user squid
 cache_effective_group squid

 half_closed_clients off
 buffered_logs on
 client_db off
 quick_abort_max 0 KB
 quick_abort_min 0 KB
 memory_pools off

 cache_swap_high 95%
 cache_swap_low 90
 logfile_rotate 10%

 logfile_rotate is not a percentage. It is a count of how many log files to keep.
 A new one is generated every time you run squid -k rotate


 visible_hostname        Cache

 Interesting FQDN.

 The admin@Cache contact address for example, does not resolve here.


 quick_abort_min 32 KB
 quick_abort_max 32 KB
 quick_abort_pct 95

 negative_ttl 3 minutes
 positive_dns_ttl 6 hours

 pipeline_prefetch on

 acl snmpkey snmp_community public
 snmp_port 3401
 snmp_access allow snmpkey localhost
 snmp_access deny all

 refresh_pattern -i
 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$
 26 90% 260009 override-expire
 refresh_pattern -i
 \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26
 90% 260009 override-expire

 These refresh_pattern lines are useless. The query and dot patterns above
 catch all URLs in existence. Squid never gets past them to match these.



 I would be most grateful if somebody helps me out.
 Yours Faithfully,
 ---
 Amir H Sh

 A few seconds search in bugzilla shows 

Re: [squid-users] squid 3.1.

2011-06-11 Thread Amos Jeffries

On 12/06/11 06:28, Shenavandeh wrote:

Hi Amos,

Thanks for your time and nice comments on the config file

It is impossible for me to use debug_options ALL,6 under such load.
The log file grows wildly and quickly eats the whole hard disk!
Any other solutions to capture the log?



There are two tricky alternatives.

One is the -l command line option to send the log to syslog. You need a 
syslog server that can handle the traffic though.


The other is using -X (which unfortunately generates a great deal more 
log output) and piping the results out to somewhere that can handle it.
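
For example (the syslog facility name and the log destination are
assumptions, not tested recipes):

  # 1) send the debug output to syslog; the syslog server must be able
  #    to absorb the volume
  squid -l local4

  # 2) full debugging in no-daemon mode, piping the output off-box
  squid -N -X 2>&1 | ssh loghost 'cat >> squid-debug.log'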


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.8 and 3.1.12.2


[squid-users] squid 3.1.

2011-06-10 Thread Shenavandeh
Hi,

I have a squid installation that crashes twice a day under a load of
7 Mbps bandwidth, with the following message in squid.out and no
specific traces in cache.log

Startup: Fri Jun 10 15:46:20
dying from an unhandled exception: !theConsumer
terminate called after throwing an instance of 'TextException'
  what():  !theConsumer
Startup: Fri Jun 10 19:55:29

It is compiled using following options:
 sbin]# ./squid -v
Squid Cache: Version 3.1.12.1
configure options:  '--enable-linux-netfilter'
'--enable-storeio=ufs,aufs' '--enable-poll'
'--enable-x-accelerator-vary' '--enable-follow-x-forwarded-for'
'--enable-ssl' '--enable-snmp' '--enable-removal-policies'
'--enable-gnuregex' '--with-large-files' '--enable-async-io'
'CFLAGS=-DNUMTHREADS=300' --with-squid=/root/squid-3.1.12.1
--enable-ltdl-convenience

the platform is as follows:

CPU :4 cores of  Intel(R) Xeon(R) CPU   E5504  @ 2.00GHz
RAM : 8GB
OS: CentOS 5.6 :
Kernel: Linux version 2.6.25 compiled with tproxy option.

the Squid configuration:

cache_mem 4000 MB

dead_peer_timeout 30 seconds
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

maximum_object_size 50 MB
maximum_object_size_in_memory 500 KB
minimum_object_size 0 KB

cache_replacement_policy heap LFUDA
memory_replacement_policy heap LRU

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1


acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow to_localhost


http_access allow localhost
http_access allow localnet
http_access allow to_localhost

http_access deny all

http_port 3128 tproxy

hierarchy_stoplist cgi-bin ?

cache_dir aufs /cache 24000 16 256

coredump_dir cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

cache_effective_user squid
cache_effective_group squid

half_closed_clients off
buffered_logs on
client_db off
quick_abort_max 0 KB
quick_abort_min 0 KB
memory_pools off

cache_swap_high 95%
cache_swap_low 90
logfile_rotate 10%

visible_hostname        Cache

quick_abort_min 32 KB
quick_abort_max 32 KB
quick_abort_pct 95

negative_ttl 3 minutes
positive_dns_ttl 6 hours

pipeline_prefetch on

acl snmpkey snmp_community public
snmp_port 3401
snmp_access allow snmpkey localhost
snmp_access deny all

refresh_pattern -i
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$
26 90% 260009 override-expire
refresh_pattern -i
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff|pdf|uxx)$ 26
90% 260009 override-expire


I would be most grateful if somebody helps me out.
Yours Faithfully,
---
Amir H Sh


[squid-users] squid 3.1 android ebuddy

2011-04-25 Thread Gerson Barreiros
Hi,

I'm using Squid 3.1.12.1 (Amos ppa-maverick) and I've got a weird problem.

Users with Android2 can't get 'ebuddy' to work, but for iPhone users,
it works. (?)

I've made an exception on the firewall (for 38.99.73.0/24) so ebuddy
connections skip squid; now it works for both.

Anyone know anything related? My squid.conf doesn't block anything
related to ebuddy.


Re: [squid-users] squid 3.1 android ebuddy

2011-04-25 Thread Amos Jeffries

On 26/04/11 12:19, Gerson Barreiros wrote:

Hi,

I'm using Squid 3.1.12.1 (Amos ppa-maverick) and I've got a weird problem.

Users with Android2 can't get 'ebuddy' to work, but for iPhone users,
it works. (?)

I've made an exception on the firewall (for 38.99.73.0/24) so ebuddy
connections skip squid; now it works for both.

Anyone know anything related? My squid.conf doesn't block anything
related to ebuddy.


Can you diagnose anything about it from cache.log and/or access.log? 
Or from the app error message?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1


Re: [squid-users] Squid 3.1.x do not open certain sites

2011-03-24 Thread Francesco

  Site looks okay from here.
  http://redbot.org/?descend=True&uri=http://itinerari.mondodelgusto.it/

  Of course, there is no page content. Just a flash media object and a
  stat counter.
  So there could be something broken in the site code that the HTTP
  tools can't identify.

Hi Amos,
thank you for your reply!

But is there a way to tell Squid to let users view this, and other, websites?
Users complain that, at home, they can surf this and other sites!

Thank you again!!!

Francesco, from Italy




Re: [squid-users] Squid 3.1.x do not open certain sites

2011-03-24 Thread Amos Jeffries

On 24/03/11 23:44, Francesco wrote:



  Site looks okay from here.
  http://redbot.org/?descend=True&uri=http://itinerari.mondodelgusto.it/

  Of course, there is no page content. Just a flash media object and a
  stat counter.
  So there could be something broken in the site code that the HTTP
  tools can't identify.


Hi Amos,
thank you for your reply!

But is there a way to tell Squid to let users view this, and other, websites?
Users complain that, at home, they can surf this and other sites!


Depends on why they can't get to it...

 * do they get an HTTP error message from squid loading the base page 
HTML object?

 * does the flash object get pulled in okay?
  - we can look at these and see what is broken if either is a bad 
transfer.


 * does the page content displayed by the flash object not show up or 
otherwise display badly?

  - nothing we can do about this. It is a website bad-code problem.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


Re: [squid-users] Squid 3.1.x do not open certain sites

2011-03-24 Thread Francesco
   * do they get an HTTP error message from squid loading the base page
 HTML object?
   * does the flash object get pulled in okay?
- we can look at these and see what is broken if either is a bad
 transfer.

Hi Amos, and thank you again!

The answer is generated by squid and it is: Zero sized reply, when
accessing itinerari.mondodelgusto.it

See you!
Francesco



[squid-users] Squid 3.1.x do not open certain sites

2011-03-21 Thread Francesco
Hello,

Some websites, perhaps not compliant with W3C standards, are not accessible
by users with a Squid proxy set in the browser's connection settings.

For example, one of these is: http://itinerari.mondodelgusto.it
It waits for some seconds, and then the connection hangs...

Is there a workaround or bugfix?

Thank you!!!
Francesco



Re: [squid-users] Squid 3.1.x do not open certain sites

2011-03-21 Thread Amos Jeffries

On Mon, 21 Mar 2011 13:42:47 +0100 (CET), Francesco wrote:

Hello,

Some websites, perhaps not compliant with W3C standards, are not 
accessible by users with a Squid proxy set in the browser's connection 
settings.

For example, one of these is: http://itinerari.mondodelgusto.it
It waits for some seconds, and then the connection hangs...

Is there a workaround or bugfix?

Thank you!!!
Francesco


Site looks okay from here.
http://redbot.org/?descend=True&uri=http://itinerari.mondodelgusto.it/

Of course, there is no page content. Just a flash media object and a 
stat counter.
So there could be something broken in the site code that the HTTP 
tools can't identify.


Amos



[squid-users] Squid 3.1 and winbind 3.4.7 permissions issue on winbindd_privileged

2011-03-18 Thread Go Wow
Hi,

 I'm trying squid 3.1.10 with ntlm and kerberos. The kinit and klist
steps work fine; even the net join is working. The problem I'm facing is
when trying to start the winbind service and use wbinfo. The service
never starts, giving the error message:

lib/util_sock.c:1771(create_pipe_sock)   invalid permissions on socket
directory /var/run/samba/winbindd_privileged
winbindd/winbindd.c:1412(main)  winbindd_setup_listeners() failed


Right now the ownership of /var/run/samba/winbindd_privileged is set
to proxy:winbindd_priv with permissions of 0777 (for testing only), and
still the service doesn't start. I also made the permission change in
the service script, /etc/init.d/winbind. I'm using
ubuntu 10.04 (lucid).
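
For what it's worth, winbind normally refuses a world-writable socket
directory; the usual arrangement is root ownership with mode 0750 and
the proxy user added to the privileged group. A sketch (the user and
group names are taken from the post above):

  # restore the ownership/mode winbindd expects, then grant squid's user access
  chown root:winbindd_priv /var/run/samba/winbindd_privileged
  chmod 0750 /var/run/samba/winbindd_privileged
  usermod -a -G winbindd_priv proxy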

On a side note, after editing the winbind service script, when I run
the command sudo update-rc.d winbind start 21 2 3 4 5 .  I get a
warning saying

update-rc.d: warning: winbind stop runlevel arguments (none) do not
match LSB Default-Stop values (0 1 6)

System start/stop links for /etc/init.d/winbind already exist.



Is there a known solution for this issue?


Regards


[squid-users] Squid 3.1 SSL bump and transparent mode

2011-03-08 Thread Francesco
Hello,

by activating SSL bump in Squid 3.1, is it possible to transparently proxy
https requests?

I have read some documentation and posts, but it is not clear to me
whether it is possible (with a browser warning) or not...

Any workaround/ideas?

Thank you!
Francesco



Re: [squid-users] Squid 3.1 SSL bump and transparent mode

2011-03-08 Thread Amos Jeffries

On Tue, 8 Mar 2011 17:10:43 +0100 (CET), Francesco wrote:

Hello,

by activating SSL bump in Squid 3.1, is it possible to transparently 
proxy https requests?



No. It is not.


I have read some documentation and posts, but it is not clear to me
whether it is possible (with a browser warning) or not...

Any workaround/ideas?


WPAD transparent configuration for browsers. Both DNS and DHCP 
methods are recommended for best browser coverage.

http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers#Fully_Automatically_Configuring_Browsers_for_WPAD
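
As an illustration (all hostnames and the proxy address below are
placeholder assumptions): for the DNS method, publish a host named
wpad in the clients' DNS domain and serve a PAC file from it at
http://wpad.example.com/wpad.dat, along these lines:

  function FindProxyForURL(url, host) {
      // send everything via the proxy; fall back to direct if it is down
      return "PROXY proxy.example.com:3128; DIRECT";
  }

For the DHCP method, option 252 can carry the same URL (ISC dhcpd
syntax):

  option wpad code 252 = text;
  option wpad "http://wpad.example.com/wpad.dat";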

Amos



Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

2011-03-01 Thread Gordon McKee

Hi

Okay - sorry I am just using our website as a test - it is on the same
server as the exchange box and is reverse proxied.  Browse the site and you
will see what I mean (how slow it is).  Something is going on causing the
images to be sent really slowly.  www.optimalprofit.com  is the website and
www.optimalprofit.com/owa is the exchange domain.  The exchange login page
should be really fast - it takes about 4 min to load.  If I browse to the
site internally it is really fast.

I am kind of clutching at straws as to what is wrong.  Text comes down fast
and images are really slow.  SQUID worked a treat with version 2.6, but 2.7,
3.0 and 3.1 all make the reverse proxy really slow.

Many thanks

Gordon

-Original Message- 
From: Amos Jeffries

Sent: Monday, February 28, 2011 9:38 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

On Mon, 28 Feb 2011 16:18:27 -, Gordon McKee wrote:

Hi

The GET / HTTP/1.1 returns:

GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close


:) I hope not. That is the initial request.



and the GET /images/op-hwynit-ad1.gif HTTP/1.1 to pull an image
file returns:

HTTP/1.0 200 OK
Content-Type: image/gif
Content-Encoding: gzip
Last-Modified: Wed, 08 Dec 2004 15:34:12 GMT
Accept-Ranges: bytes
ETag: a0d3e25d3bddc41:0
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Mon, 28 Feb 2011 16:13:28 GMT
Content-Length: 264171
X-Cache: MISS from kursk.gdmckee.home
Via: 1.0 kursk.gdmckee.home (squid/3.1.11)
Connection: close

I have tried the telnet codes to access the OWA folder and the
scripts come back very fast and the images take forever.  Not sure
what is going wrong.


It's 258 KB after compression and not being cached. Size may have
something to do with it if the scripts are much smaller.


Amos 





Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

2011-02-28 Thread Gordon McKee

Hi

The GET / HTTP/1.1 returns:

GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

and the GET /images/op-hwynit-ad1.gif HTTP/1.1 to pull an image file 
returns:


HTTP/1.0 200 OK
Content-Type: image/gif
Content-Encoding: gzip
Last-Modified: Wed, 08 Dec 2004 15:34:12 GMT
Accept-Ranges: bytes
ETag: a0d3e25d3bddc41:0
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Mon, 28 Feb 2011 16:13:28 GMT
Content-Length: 264171
X-Cache: MISS from kursk.gdmckee.home
Via: 1.0 kursk.gdmckee.home (squid/3.1.11)
Connection: close

I have tried the telnet codes to access the OWA folder and the scripts come 
back very fast and the images take forever.  Not sure what is going wrong.


Many thanks

Gordon


-Original Message- 
From: Amos Jeffries

Sent: Monday, February 28, 2011 12:19 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

On Sun, 27 Feb 2011 17:19:33 -, Gordon McKee wrote:

Hi

I had FreeBSD 6.3 and squid 2.6 running fine reverse proxying my OWA
server. I have now upgraded to FreeBSD 8 and squid 3.1 as the old
software was getting rather old.  I have copied the config file off
the old server onto the new server.  All is working except OWA.  The
images come down very very slowly (it does work really slowly).

I was thinking it might be a DNS issue, but if I telnet (outside
network) to www.optimalprofit.com 80 and enter:
GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

the page comes back really fast, but if I telnet to
www.optimalprofit.com 80 and enter (to get a gif file off the server):

GET /images/op-hwynit-ad1.gif HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

the text comes back really really slowly.  I am not sure what is
wrong as even my mobile connects in and over active sync and picks up
my emails just fine.  I have tried different browsers and they don't
make any difference.


What are the reply headers for each of these tests?

Amos




Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

2011-02-28 Thread Amos Jeffries

On Mon, 28 Feb 2011 16:18:27 -, Gordon McKee wrote:

Hi

The GET / HTTP/1.1 returns:

GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close


:) I hope not. That is the initial request.



and the GET /images/op-hwynit-ad1.gif HTTP/1.1 to pull an image
file returns:

HTTP/1.0 200 OK
Content-Type: image/gif
Content-Encoding: gzip
Last-Modified: Wed, 08 Dec 2004 15:34:12 GMT
Accept-Ranges: bytes
ETag: a0d3e25d3bddc41:0
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Mon, 28 Feb 2011 16:13:28 GMT
Content-Length: 264171
X-Cache: MISS from kursk.gdmckee.home
Via: 1.0 kursk.gdmckee.home (squid/3.1.11)
Connection: close

I have tried the telnet codes to access the OWA folder and the
scripts come back very fast and the images take forever.  Not sure
what is going wrong.


It's 258 KB after compression and not being cached. Size may have 
something to do with it if the scripts are much smaller.



Amos



[squid-users] Squid 3.1 reverse proxy to OWA on IIS7

2011-02-27 Thread Gordon McKee

Hi

I had FreeBSD 6.3 and squid 2.6 running fine reverse proxying my OWA server. 
I have now upgraded to FreeBSD 8 and squid 3.1 as the old software was 
getting rather old.  I have copied the config file off the old server onto 
the new server.  All is working except OWA.  The images come down very very 
slowly (it does work really slowly).


I was thinking it might be a DNS issue, but if I telnet (outside network) to 
www.optimalprofit.com 80 and enter:

GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

the page comes back really fast, but if I telnet to www.optimalprofit.com 80 
and enter (to get a gif file off the server):


GET /images/op-hwynit-ad1.gif HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

the text comes back really really slowly.  I am not sure what is wrong as 
even my mobile connects in and over active sync and picks up my emails just 
fine.  I have tried different browsers and they don't make any difference.


I have tried squid 2.7 and 3 and I get the same issue.  Can't try 2.6 as 
there is no port on FreeBSD 8 anymore!!


Any help is much appreciated.

Gordon






Re: [squid-users] Squid 3.1 reverse proxy to OWA on IIS7

2011-02-27 Thread Amos Jeffries

On Sun, 27 Feb 2011 17:19:33 -, Gordon McKee wrote:

Hi

I had FreeBSD 6.3 and squid 2.6 running fine reverse proxying my OWA
server. I have now upgraded to FreeBSD 8 and squid 3.1 as the old
software was getting rather old.  I have copied the config file off
the old server onto the new server.  All is working except OWA.  
The

images come down very very slowly (it does work really slowly).

I was thinking it might be a DNS issue, but if I telnet (outside
network) to www.optimalprofit.com 80 and enter:
GET / HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

the page comes back really fast, but if I telnet to
www.optimalprofit.com 80 and enter (to get a gif file off the 
server):


GET /images/op-hwynit-ad1.gif HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: www.optimalprofit.com
Connection: Close

the text comes back really really slowly.  I am not sure what is
wrong as even my mobile connects in and over active sync and picks up
my emails just fine.  I have tried different browsers and they don't
make any difference.


What are the reply headers for each of these tests?

Amos



[squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Clemente Aguiar
I am running squid 3.1.9, and I would like to know if this version is
able to cache youtube content?

I did check the wiki
(http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
and I must say that it is not clear what bits apply to version 3.1.

Can somebody give me some pointers to what exactly I should configure.

Thanks,
Clemente



Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Luis Daniel Lucio Quiroz
On Wednesday 2 February 2011 09:49:23, Clemente Aguiar wrote:
 I am running squid 3.1.9, and I would like to know if this version is
 able to cache youtube content?
 
 I did check the wiki
 (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
 and I must say that it is not clear what bits apply to version 3.1.
 
 Can somebody give me some pointers to what exactly I should configure.
 
 Thanks,
 Clemente

Clemente,

there is no 100%-sure way because 3.1 lacks 2.7's capabilities; the 
only options for now are:
use 2.7, or
use an ICAP server capable of managing those types of URLs

Regards,

LD


Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Clemente Aguiar
Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
 On Wednesday 2 February 2011 09:49:23, Clemente Aguiar wrote:
  I am running squid 3.1.9, and I would like to know if this version is
  able to cache youtube content?
  
  I did check the wiki
  (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
  and I must say that it is not clear what bits apply to version 3.1.
  
  Can somebody give me some pointers to what exactly I should configure.
  
  Thanks,
  Clemente
 
 Clemente,
 
 there is no 100%-sure way because 3.1 lacks 2.7's capabilities; the 
 only options for now are:
 use 2.7, or
 use an ICAP server capable of managing those types of URLs
 
 Regards,
 
 LD

Ok, thanks.

Maybe somebody should make that (perfectly) clear in the wiki ... and
maybe add an example of how to implement an ICAP server.

Well, now for the next question. Which ICAP server, and how do I implement it?
Can you help me?

Regards,
Clemente




Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Hasanen AL-Bana
No need for ICAP; a storeurl script should be enough.
The problem is that youtube's internal links change from time to
time, so we need to update our scripts from time to time.

On Wed, Feb 2, 2011 at 8:23 PM, Clemente Aguiar
ca-li...@madeiratecnopolo.pt wrote:

 Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
  On Wednesday 2 February 2011 09:49:23, Clemente Aguiar wrote:
   I am running squid 3.1.9, and I would like to know if this version is
   able to cache youtube content?
  
   I did check the wiki
   (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
   and I must say that it is not clear what bits apply to version 3.1.
  
   Can somebody give me some pointers to what exactly I should configure.
  
   Thanks,
   Clemente
 
  Clemente,
 
  there is no 100%-sure way because 3.1 lacks 2.7's capabilities; the
  only options for now are:
  use 2.7, or
  use an ICAP server capable of managing those types of URLs
 
  Regards,
 
  LD

 Ok, thanks.

 Maybe somebody should make that (perfectly) clear in the wiki ... and
 maybe add an example of how to implement an ICAP server.

 Well, now for the next question. Which ICAP server, and how do I implement it?
 Can you help me?

 Regards,
 Clemente




Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Luis Daniel Lucio Quiroz
On Wednesday 2 February 2011 12:29:58, Hasanen AL-Bana wrote:
 No need for ICAP; a storeurl script should be enough.
 The problem is that youtube's internal links change from time to
 time, so we need to update our scripts from time to time.
 
 On Wed, Feb 2, 2011 at 8:23 PM, Clemente Aguiar
 
 ca-li...@madeiratecnopolo.pt wrote:
  Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
   On Wednesday 2 February 2011 09:49:23, Clemente Aguiar wrote:
I am running squid 3.1.9, and I would like to know if this version is
able to cache youtube content?

I did check the wiki
(http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
and I must say that it is not clear what bits apply to version 3.1.

Can somebody give me some pointers to what exactly I should
configure.

Thanks,
Clemente
   
   Clemente,
   
   there is no 100%-sure way because 3.1 lacks 2.7's capabilities;
   the only options for now are:
   use 2.7, or
   use an ICAP server capable of managing those types of URLs
   
   Regards,
   
   LD
  
  Ok, thanks.
  
  Maybe somebody should make that (perfectly) clear in the wiki ... and
  maybe add an example of how to implement an ICAP server.
  
  Well, now for the next question. Which ICAP server, and how do I implement it?
  Can you help me?
  
  Regards,
  Clemente

store_url is for 2.7, not for 3.1; he must use 3.1 + ICAP if he wants to get 
similar results.

I can recommend i-cap for Linux, but it lacks what you want; however, it 
has some templates so you can code the things you want.

LD


RE: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Saurabh Agarwal
Hi Luis

I have recently successfully cached youtube videos using Squid-2.7.Stable7 and 
posted the solution on the squid mailing list as well. I tested it yesterday and 
youtube videos were still being cached. For Squid 3.1 I have not tried it yet.

Please google for squid mails with the subject Caching youtube videos 
problem / always getting TCP_MISS

Regards,
Saurabh

-Original Message-
From: Luis Daniel Lucio Quiroz [mailto:luis.daniel.lu...@gmail.com] 
Sent: Thursday, February 03, 2011 12:09 AM
To: Hasanen AL-Bana
Cc: Clemente Aguiar; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1 youtube caching

On Wednesday 2 February 2011 12:29:58, Hasanen AL-Bana wrote:
 No need for ICAP; a storeurl script should be enough.
 The problem is that youtube's internal links change from time to
 time, so we need to update our scripts from time to time.
 
 On Wed, Feb 2, 2011 at 8:23 PM, Clemente Aguiar
 
 ca-li...@madeiratecnopolo.pt wrote:
  Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
   On Wednesday 2 February 2011 09:49:23, Clemente Aguiar wrote:
I am running squid 3.1.9, and I would like to know if this version is
able to cache youtube content?

I did check the wiki
(http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
and I must say that it is not clear what bits apply to version 3.1.

Can somebody give me some pointers to what exactly I should
configure.

Thanks,
Clemente
   
   Clemente,
   
   there is no 100%-sure way because 3.1 lacks 2.7's capabilities;
   the only options for now are:
   use 2.7, or
   use an ICAP server capable of managing those types of URLs
   
   Regards,
   
   LD
  
  Ok, thanks.
  
  Maybe somebody should make that (perfectly) clear in the wiki ... and
  maybe add an example of how to implement an ICAP server.
  
  Well, now for the next question. Which ICAP server, and how do I implement it?
  Can you help me?
  
  Regards,
  Clemente

store_url is for 2.7, not for 3.1; he must use 3.1 + ICAP if he wants to get 
similar results.

I can recommend i-cap for Linux, but it lacks what you want; however, it 
has some templates so you can code the things you want.

LD


RE: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Amos Jeffries
On Thu, 3 Feb 2011 10:03:03 +0530, Saurabh Agarwal
saurabh.agar...@citrix.com wrote:
 Hi Luis
 
 I have recently successfully cached youtube videos using
Squid-2.7.Stable7
 and posted the solution on squid mailing list as well. I tested it
 yesterday and youtube videos were still being cached.

AFAICT your 23rd Nov 2010 posted configuration differs from the wiki
example by:
 * passing every single URL that goes through your proxy to the
storeurl program (not just the relevant YT URLs)
 * ignoring updates and changes to the HTML pages (forcing people to think
profiles are not being updated etc)
 * ignoring users' force-refresh (so that if somebody does notice a page
problem caused by the above they can't manually force the cache to update
the page)

None of these has an obvious or explained reason relating to the .FLV
video, which is the only relevant piece to be de-duplicated.

Your re-writer adds two interesting URLs to the altered pot.

 * If that generate_204 is what I think then you are preventing users from
fast-forwarding videos, forcing them to re-download the entire thing from
cache if they try.

 * the docid= pages. Can you explain what those are and how their URLs
result in a .FLV object response?



I'm ignoring the 10th Nov and 1st Nov and April and July and August
configurations because YT changes their URLs occasionally. That is the point
of using the wiki to publish the latest details instead of a long-term mailing
list.

 If you find the wiki setup is not working please get yourself an editing
account and add a message to the *Discussion* page outlining the YT
operations which are being missed and what changes will catch them. When
somebody can independently verify their success we add the changes to the
main config.


 For Squid3.1 I have
 not tried yet.

3.x do not yet have the storeurl feature these hacks all rely upon.

Amos



[squid-users] squid-3.1 couldn't be installed.

2010-12-27 Thread Seok Jiwoo
Dear all,

I have a problem installing squid-3.1.8-1.

When I installed 'squid' on Redhat 5.2, I got the error message in
cache.log, Failed dependencies: perl(DBI) is needed by
squid-3.1.8-1.el5.x86_64.

Please let me know the reason.

best regards. J.


Re: [squid-users] squid-3.1 couldn't be installed.

2010-12-27 Thread Orestes Leal R.

I think that you just need to install the Perl DBI package and you're done.
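
For example, on RHEL-family systems (the package names here are the
usual ones, assumed rather than verified against this exact setup):

  yum install perl-DBI
  rpm -ivh squid-3.1.8-1.el5.x86_64.rpm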

LeaL



Dear all,

I have a problem installing squid-3.1.8-1.

When I installed 'squid' on Redhat 5.2, I got the error message in
cache.log, Failed dependencies: perl(DBI) is needed by
squid-3.1.8-1.el5.x86_64.

Please let me know the reason.

best regards. J.





--
Using Opera's revolutionary email client: http://www.opera.com/mail/




Re: [squid-users] squid-3.1 client POST buffering

2010-12-02 Thread Graham Keeling
On Wed, Dec 01, 2010 at 11:40:52AM +, Graham Keeling wrote:
 Hello,
 
 I am convinced that this is a serious bug, so I have entered a proper bug
 report.
 
 It is bug 3113:
 
 http://bugs.squid-cache.org/show_bug.cgi?id=3113

I have created a simple patch that seems to fix the problem for me on
squid-3.1.9, and I have attached it to the bug report.



Re: [squid-users] squid-3.1 client POST buffering

2010-12-01 Thread Graham Keeling
Hello,

I am convinced that this is a serious bug, so I have entered a proper bug
report.

It is bug 3113:

http://bugs.squid-cache.org/show_bug.cgi?id=3113



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Amos Jeffries

On 30/11/10 04:04, Oguz Yilmaz wrote:

Graham,

This is the best explanation I have seen about ongoing upload problem
in proxy chains where squid is one part of the chain.

On our systems, we use Squid 3.0.STABLE25. Before squid a
dansguardian (DG) proxy exists to filter. Results of my tests:

1-
DG+Squid 2.6.STABLE12: No problem of uploading
DG+Squid 3.0.STABLE25: Problematic
DG+Squid 3.1.8: Problematic
DG+Squid 3.2.0.2: Problematic

2- We have mostly problems with the sites with web based upload status
viewers, like rapidshare, youtube etc...

3- If Squid is the only proxy, no problem of uploading.

4- read_ahead_gap 16 KB does not resolve the problem


Dear Developers,

Can you propose some other workarounds for us to test? The problem is
encountered with most active sites of the net, unfortunately.


This sounds like the same problem as 
http://bugs.squid-cache.org/show_bug.cgi?id=3017


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Oguz Yilmaz
On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/11/10 04:04, Oguz Yilmaz wrote:

 Graham,

 This is the best explanation I have seen about ongoing upload problem
 in proxy chains where squid is one part of the chain.

 On our systems, we use Squid 3.0.STABLE25. Before squid a
 dansguardian (DG) proxy exists to filter. Results of my tests:

 1-
 DG+Squid 2.6.STABLE12: No problem of uploading
 DG+Squid 3.0.STABLE25: Problematic
 DG+Squid 3.1.8: Problematic
 DG+Squid 3.2.0.2: Problematic

 2- We have mostly problems with the sites with web based upload status
 viewers, like rapidshare, youtube etc...

 3- If Squid is the only proxy, no problem of uploading.

 4- read_ahead_gap 16 KB does not resolve the problem


 Dear Developers,

 Can you propose some other workarounds for us to test? The problem is
 encountered with most active sites of the net, unfortunately.

 This sounds like the same problem as
 http://bugs.squid-cache.org/show_bug.cgi?id=3017


In my tests, no NTLM auth was used.
The browser has a proxy configuration targeting DG, and DG uses squid as
the provider proxy. If you think it will work, I can try the patch
located in the bug report.
Upload will stop at about 1MB, so is it about SQUID_TCP_SO_RCVBUF?



 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Amos Jeffries

On 30/11/10 21:23, Oguz Yilmaz wrote:

On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffriessqu...@treenet.co.nz  wrote:

On 30/11/10 04:04, Oguz Yilmaz wrote:


Graham,

This is the best explanation I have seen about ongoing upload problem
in proxy chains where squid is one part of the chain.

On our systems, we use Squid 3.0.STABLE25. Before squid a
dansguardian (DG) proxy exists to filter. Results of my tests:

1-
DG+Squid 2.6.STABLE12: No problem of uploading
DG+Squid 3.0.STABLE25: Problematic
DG+Squid 3.1.8: Problematic
DG+Squid 3.2.0.2: Problematic

2- We have mostly problems with the sites with web based upload status
viewers, like rapidshare, youtube etc...

3- If Squid is the only proxy, no problem of uploading.

4- read_ahead_gap 16 KB does not resolve the problem


Dear Developers,

Can you propose some other workarounds for us to test? The problem is
encountered with most active sites of the net, unfortunately.


This sounds like the same problem as
http://bugs.squid-cache.org/show_bug.cgi?id=3017




Sorry, crossing bug reports in my head.

This one is closer to the suck-everything behaviour you have seen:
http://bugs.squid-cache.org/show_bug.cgi?id=2910

both have an outside chance of working.



In my tests, no NTLM auth was used.
The browser has a proxy configuration targeting DG, and DG uses squid as
the provider proxy. If you think it will work, I can try the patch
located in the bug report.
Upload will stop at about 1MB, so is it about SQUID_TCP_SO_RCVBUF?


AIUI, Squid is supposed to read SQUID_TCP_SO_RCVBUF + read_ahead_gap and 
wait for some of that to pass on to the server before grabbing some more.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Oguz Yilmaz
--
Oguz YILMAZ



On Tue, Nov 30, 2010 at 10:46 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/11/10 21:23, Oguz Yilmaz wrote:

 On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 On 30/11/10 04:04, Oguz Yilmaz wrote:

 Graham,

 This is the best explanation I have seen about ongoing upload problem
 in proxy chains where squid is one part of the chain.

 On our systems, we use Squid 3.0.STABLE25. Before squid a
 dansguardian (DG) proxy exists to filter. Results of my tests:

 1-
 DG+Squid 2.6.STABLE12: No problem of uploading
 DG+Squid 3.0.STABLE25: Problematic
 DG+Squid 3.1.8: Problematic
 DG+Squid 3.2.0.2: Problematic

 2- We have mostly problems with the sites with web based upload status
 viewers, like rapidshare, youtube etc...

 3- If Squid is the only proxy, no problem of uploading.

 4- read_ahead_gap 16 KB does not resolve the problem


 Dear Developers,

 Can you propose some other workarounds for us to test? The problem is
 encountered with most active sites of the net, unfortunately.

 This sounds like the same problem as
 http://bugs.squid-cache.org/show_bug.cgi?id=3017


 Sorry, crossing bug reports in my head.

 This one is closer to the suck-everything behaviour you have seen:
 http://bugs.squid-cache.org/show_bug.cgi?id=2910

 both have an outside chance of working.


I have tried the patch proposed (BodyPipe.h). However, it does not work.
Note: my system is Linux-based.


 In my tests, no NTLM auth was used.
 The browser has a proxy configuration targeting DG, and DG uses squid as
 the provider proxy. If you think it will work, I can try the patch
 located in the bug report.
 Upload will stop at about 1MB, so is it about SQUID_TCP_SO_RCVBUF?

 AIUI, Squid is supposed to read SQUID_TCP_SO_RCVBUF + read_ahead_gap and
 wait for some of that to pass on to the server before grabbing some more.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Graham Keeling
On Tue, Nov 30, 2010 at 09:46:47PM +1300, Amos Jeffries wrote:
 On 30/11/10 21:23, Oguz Yilmaz wrote:
 On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffriessqu...@treenet.co.nz  wrote:
 On 30/11/10 04:04, Oguz Yilmaz wrote:

 Graham,

 This is the best explanation I have seen about ongoing upload problem
 in proxy chains where squid is one part of the chain.

 On our systems, we use Squid 3.0.STABLE25. Before squid a
 dansguardian (DG) proxy exists to filter. Results of my tests:

 1-
 DG+Squid 2.6.STABLE12: No problem of uploading
 DG+Squid 3.0.STABLE25: Problematic
 DG+Squid 3.1.8: Problematic
 DG+Squid 3.2.0.2: Problematic

 2- We have mostly problems with the sites with web based upload status
 viewers, like rapidshare, youtube etc...

 3- If Squid is the only proxy, no problem of uploading.

 4- read_ahead_gap 16 KB does not resolve the problem


 Dear Developers,

 Can you propose some other workarounds for us to test? The problem is
 encountered with most active sites of the net, unfortunately.

 This sounds like the same problem as
 http://bugs.squid-cache.org/show_bug.cgi?id=3017


 Sorry, crossing bug reports in my head.

 This one is closer to the suck-everything behaviour you have seen:
 http://bugs.squid-cache.org/show_bug.cgi?id=2910

 both have an outside chance of working.

I have tried both suggestions, and neither of them makes a difference
(changes to BodyPipe.h and client_side_request.cc).

I am keen to try any further suggestions, or provide you with debug output,
or whatever you like. 

This problem is extremely easy for me to reproduce.
It happens without any authentication, and with squid as the only proxy between
my browser and the website.

Shall I enter a proper bug report?



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Graham Keeling
On Tue, Nov 30, 2010 at 11:31:45AM +, Graham Keeling wrote:
 On Tue, Nov 30, 2010 at 09:46:47PM +1300, Amos Jeffries wrote:
  On 30/11/10 21:23, Oguz Yilmaz wrote:
  On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffriessqu...@treenet.co.nz  
  wrote:
  On 30/11/10 04:04, Oguz Yilmaz wrote:
 
  Graham,
 
  This is the best explanation I have seen about ongoing upload problem
  in proxy chains where squid is one part of the chain.
 
  On our systems, we use Squid 3.0.STABLE25. Before squid a
  dansguardian (DG) proxy exists to filter. Results of my tests:

  1-
  DG+Squid 2.6.STABLE12: No problem of uploading
  DG+Squid 3.0.STABLE25: Problematic
  DG+Squid 3.1.8: Problematic
  DG+Squid 3.2.0.2: Problematic

  2- We have mostly problems with the sites with web based upload status
  viewers, like rapidshare, youtube etc...

  3- If Squid is the only proxy, no problem of uploading.

  4- read_ahead_gap 16 KB does not resolve the problem
 
 
  Dear Developers,
 
  Can you propose some other workarounds for us to test? The problem is
  encountered with most active sites of the net, unfortunately.
 
  This sounds like the same problem as
  http://bugs.squid-cache.org/show_bug.cgi?id=3017
 
 
  Sorry, crossing bug reports in my head.
 
  This one is closer to the suck-everything behaviour you have seen:
  http://bugs.squid-cache.org/show_bug.cgi?id=2910
 
  both have an outside chance of working.
 
 I have tried both suggestions, and neither of them makes a difference
 (changes to BodyPipe.h and client_side_request.cc).
 
 I am keen to try any further suggestions, or provide you with debug output,
 or whatever you like. 
 
 This problem is extremely easy for me to reproduce.
 It happens without any authentication, and with squid as the only proxy 
 between my browser and the website.
 
 Shall I enter a proper bug report?

To demonstrate the problem happening, I turned on 'debug_options 33,2' and
re-ran my test. This shows that ConnStateData::makeSpaceAvailable() in
client_side.cc will eat memory forever.
I can turn on more debug if needed, but others should be able to easily
reproduce this.

2010/11/30 11:57:17.482| growing request buffer: notYetUsed=4095 size=8192
2010/11/30 11:57:17.483| growing request buffer: notYetUsed=8191 size=16384
2010/11/30 11:57:17.483| growing request buffer: notYetUsed=16383 size=32768
2010/11/30 11:57:17.484| growing request buffer: notYetUsed=32767 size=65536
2010/11/30 11:57:17.486| growing request buffer: notYetUsed=65535 size=131072
2010/11/30 11:57:17.488| growing request buffer: notYetUsed=131071 size=262144
2010/11/30 11:57:17.506| growing request buffer: notYetUsed=262143 size=524288
2010/11/30 11:57:17.533| growing request buffer: notYetUsed=524287 size=1048576
2010/11/30 11:57:17.586| growing request buffer: notYetUsed=1048575 size=2097152
2010/11/30 11:57:17.692| growing request buffer: notYetUsed=2097151 size=4194304
2010/11/30 11:57:17.884| growing request buffer: notYetUsed=4194303 size=8388608
2010/11/30 11:57:18.308| growing request buffer: notYetUsed=8388607 size=16777216
2010/11/30 11:57:19.136| growing request buffer: notYetUsed=16777215 size=33554432
2010/11/30 11:57:20.792| growing request buffer: notYetUsed=33554431 size=67108864
2010/11/30 11:57:23.957| growing request buffer: notYetUsed=67108863 size=134217728
2010/11/30 11:57:31.176| growing request buffer: notYetUsed=134217727 size=268435456
2010/11/30 11:57:58.433| growing request buffer: notYetUsed=268435455 size=536870912
...
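
For anyone wanting to capture the same trace, the only configuration needed is
the debug_options directive mentioned above; a common sketch keeps all other
debug sections quiet at level 1 and raises only section 33 (client-side):

# squid.conf: general logging at level 1, client-side details at level 2
debug_options ALL,1 33,2

The resulting messages appear in cache.log.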



Re: [squid-users] squid-3.1 client POST buffering

2010-11-29 Thread Oguz Yilmaz
Graham,

This is the best explanation I have seen of the ongoing upload problem
in proxy chains where squid is one part of the chain.

On our systems, we use Squid 3.0.STABLE25. In front of squid, a
dansguardian (DG) proxy does the filtering. Results of my tests:

1-
DG+Squid 2.6.STABLE12: No problem with uploading
DG+Squid 3.0.STABLE25: Problematic
DG+Squid 3.1.8: Problematic
DG+Squid 3.2.0.2: Problematic

2- We mostly have problems with sites that show a web-based upload status
viewer, like Rapidshare, YouTube, etc.

3- If Squid is the only proxy, there is no problem with uploading.

4- read_ahead_gap 16 KB does not resolve the problem


Dear Developers,

Can you propose some other workarounds for us to test? The problem is
encountered with most of the busiest sites on the net, unfortunately.


Best Regards,

--
Oguz YILMAZ


On Thu, Nov 25, 2010 at 6:36 PM, Graham Keeling <gra...@equiinet.com> wrote:

 Hello,

 I have upgraded to squid-3.1 recently, and found a change of behaviour.
 I have been using dansguardian in front of squid.

 It appears to be because squid now buffers uploaded POST data slightly
 differently.
 In versions < 3.1, it would take some data, send it through to the website,
 and then ask for some more.
 In the 3.1 version, it appears to take as much from the client as it can
 without waiting for what it has already received to be uploaded to the website.

 This means that dansguardian quickly uploads all the data into squid, and
 then waits for a reply, which is a long time in coming because squid still
 has to upload everything to the website.
 And then dansguardian times out on squid after two minutes.


 I noticed the following squid configuration option. Perhaps what I need is
 a similar thing for buffering data sent from the client.

 #  TAG: read_ahead_gap  buffer-size
 #       The amount of data the cache will buffer ahead of what has been
 #       sent to the client when retrieving an object from another server.
 #Default:
 # read_ahead_gap 16 KB

 Comments welcome!

 Graham.



Re: [squid-users] squid-3.1 client POST buffering

2010-11-26 Thread Graham Keeling
On Thu, Nov 25, 2010 at 04:36:49PM +, Graham Keeling wrote:
 Hello,
 
 I have upgraded to squid-3.1 recently, and found a change of behaviour.
 I have been using dansguardian in front of squid.
 
 It appears to be because squid now buffers uploaded POST data slightly
 differently.
 In versions < 3.1, it would take some data, send it through to the website,
 and then ask for some more.
 In the 3.1 version, it appears to take as much from the client as it can
 without waiting for what it has already received to be uploaded to the website.
 
 This means that dansguardian quickly uploads all the data into squid, and
 then waits for a reply, which is a long time in coming because squid still
 has to upload everything to the website.
 And then dansguardian times out on squid after two minutes.
 
 
 I noticed the following squid configuration option. Perhaps what I need is
 a similar thing for buffering data sent from the client.
 
 #  TAG: read_ahead_gap  buffer-size
 #   The amount of data the cache will buffer ahead of what has been
 #   sent to the client when retrieving an object from another server.
 #Default:
 # read_ahead_gap 16 KB
 
 Comments welcome!
 
 Graham.


Upon further experimentation, I have found that squid-3.1.x (specifically,
I have tried squid-3.1.8 and squid-3.1.9) behaves very badly with POST uploads.

It just increases the input buffer forever, until the upload is finished, or
the machine runs out of memory.

This problem exists when connecting directly to squid without dansguardian
in the way.

This problem doesn't exist on my old squid-2.5 installation.



[squid-users] squid-3.1 client POST buffering

2010-11-25 Thread Graham Keeling
Hello,

I have upgraded to squid-3.1 recently, and found a change of behaviour.
I have been using dansguardian in front of squid.

It appears to be because squid now buffers uploaded POST data slightly
differently.
In versions < 3.1, it would take some data, send it through to the website,
and then ask for some more.
In the 3.1 version, it appears to take as much from the client as it can
without waiting for what it has already received to be uploaded to the website.

This means that dansguardian quickly uploads all the data into squid, and
then waits for a reply, which is a long time in coming because squid still
has to upload everything to the website.
And then dansguardian times out on squid after two minutes.


I noticed the following squid configuration option. Perhaps what I need is
a similar thing for buffering data sent from the client.

#  TAG: read_ahead_gap  buffer-size
#   The amount of data the cache will buffer ahead of what has been
#   sent to the client when retrieving an object from another server.
#Default:
# read_ahead_gap 16 KB

Comments welcome!

Graham.



Re: [squid-users] Squid 3.1 with MRTG, Not able to get Graphs- squid upgraded to 3.1.8 ( Resolved at last)

2010-10-11 Thread Babu Chaliyath
Hi List,
At last I got MRTG running with squid 3.1.8, though it took a long time.
I will write a howto soon on setting up mrtg on FreeBSD.
It was SNMP_util.pm that caused all the trouble, as the port maintainers
made some changes when merging the ports. Anyone struggling with mrtg
should additionally install the p5-SNMP_Session port and save themselves
the time.
Hope that may help someone in the future.
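
For reference, a minimal sketch of that extra step (the port origin is assumed
to be net-mgmt/p5-SNMP_Session; confirm the path in your own ports tree):

# install the SNMP_Session distribution, which provides SNMP_util.pm
cd /usr/ports/net-mgmt/p5-SNMP_Session
make install clean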

Thank you so much to those who took their valuable time to reply to my
silly doubts and to clear them up and guide me.

Regards
Babs

On 10/4/10, Babu Chaliyath <babu.chaliy...@gmail.com> wrote:
 It's well worth upgrading to 3.1.8. Many of the 3.1 betas had broken
 SNMP.

 Also check that the squid.mib being loaded came from the 3.1 install.

 We now have a full map of what the OIDs are and what versions they work
 for. You may find this useful:
 http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs


 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.8
Beta testers wanted for 3.2.0.2


 Hi List,
 As suggested by Amos, I have upgraded the squid box to 3.1.8 and
 everything is working fine except the graph part with mrtg.
 mrtg version: mrtg-2.16.4

 My mrtg.cfg is as below

 LoadMIBs: /usr/local/etc/mrtg/squid.mib
 EnableIPv6: no
 WorkDir: /usr/local/www/apache22/data
 Options[_]: bits,growright

 Target[proxy-hit]: cacheHttpHits&cacheServerRequests:pub...@127.0.0.1:3401
 MaxBytes[proxy-hit]: 10
 Title[proxy-hit]: HTTP Hits
 Suppress[proxy-hit]: y
 LegendI[proxy-hit]: HTTP hits
 LegendO[proxy-hit]: HTTP requests
 Legend1[proxy-hit]: HTTP hits
 Legend2[proxy-hit]: HTTP requests
 YLegend[proxy-hit]: perminute
 ShortLegend[proxy-hit]: req/min
 Options[proxy-hit]: nopercent, perminute, dorelpercent, unknaszero,
 growright, pngdate
 #PNGTitle[proxy-hit]: Proxy Hits

 Target[proxy-srvkbinout]:
 cacheServerInKb&cacheServerOutKb:pub...@127.0.0.1:3401
 MaxBytes[proxy-srvkbinout]: 76800
 Title[proxy-srvkbinout]: Cache Server Traffic In/Out
 Suppress[proxy-srvkbinout]: y
 LegendI[proxy-srvkbinout]: Traffic In
 LegendO[proxy-srvkbinout]: Traffic Out
 Legend1[proxy-srvkbinout]: Traffic In
 Legend2[proxy-srvkbinout]: Traffic Out
 YLegend[proxy-srvkbinout]: per minute
 ShortLegend[proxy-srvkbinout]: b/min
 kMG[proxy-srvkbinout]: k,M,G,T
 kilo[proxy-srvkbinout]: 1024
 Options[proxy-srvkbinout]: nopercent, perminute, unknaszero, growright,
 pngdate

 I have verified that squid snmp is working through the following command

 #snmpget -On -m /usr/local/etc/mrtg/squid.mib -v 2c -c public
 127.0.0.1:3401 cacheHttpHits cacheServerRequests cacheServerInKb
 cacheServerOutKb cacheUptime cacheSoftware cacheVersionId

 This gives me results without any errors, so I think the SNMP part of squid
 is working fine.
 Now when I run mrtg I see the following errors in the mrtg.log file:

 2010-10-04 12:37:33 -- Started mrtg with config
 '/usr/local/etc/mrtg/mrtg.cfg'
 2010-10-04 12:37:33 -- Unknown SNMP var cacheHttpHits
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheServerRequests
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheUptime
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheSoftware
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheVersionId
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Use of uninitialized value $ret[-2] in
 concatenation (.) or string at /usr/local/bin/mrtg line 2261.
 2010-10-04 12:37:33 -- Use of uninitialized value $ret[-1] in
 concatenation (.) or string at /usr/local/bin/mrtg line 2261.
 2010-10-04 12:37:33 -- Unknown SNMP var cacheServerInKb
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheServerOutKb
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheUptime
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheSoftware
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Unknown SNMP var cacheVersionId
  at /usr/local/bin/mrtg line 2242
 2010-10-04 12:37:33 -- Use of uninitialized value $ret[-2] in
 concatenation (.) or string at /usr/local/bin/mrtg line 2261.
 2010-10-04 12:37:33 -- Use of uninitialized value $ret[-1] in
 concatenation (.) or string at /usr/local/bin/mrtg line 2261.
 2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
 Target[proxy-hit][_IN_] ' $target->[0]{$mode} ' did not eval into
 defined data
 2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
 Target[proxy-hit][_OUT_] ' $target->[0]{$mode} ' did not eval into
 defined data
 2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
 Target[proxy-srvkbinout][_IN_] ' $target->[1]{$mode} ' did not eval
 into defined data
 2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
 Target[proxy-srvkbinout][_OUT_] ' $target->[1]{$mode} ' did not eval
 into defined data

 All I could make out from these errors was that mrtg is not reading the
 squid.mib file. Am I right?
 Now I am stuck and I suspect a broken mrtg? or did 

Re: [squid-users] Squid 3.1 with MRTG, Not able to get Graphs- squid upgraded to 3.1.8

2010-10-04 Thread Babu Chaliyath
 It's well worth upgrading to 3.1.8. Many of the 3.1 betas had broken SNMP.

 Also check that the squid.mib being loaded came from the 3.1 install.

 We now have a full map of what the OIDs are and what versions they work
 for. You may find this useful:
 http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs


 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.8
Beta testers wanted for 3.2.0.2


Hi List,
As suggested by Amos, I have upgraded the squid box to 3.1.8 and
everything is working fine except the graph part with mrtg.
mrtg version: mrtg-2.16.4

My mrtg.cfg is as below

LoadMIBs: /usr/local/etc/mrtg/squid.mib
EnableIPv6: no
WorkDir: /usr/local/www/apache22/data
Options[_]: bits,growright

Target[proxy-hit]: cacheHttpHits&cacheServerRequests:pub...@127.0.0.1:3401
MaxBytes[proxy-hit]: 10
Title[proxy-hit]: HTTP Hits
Suppress[proxy-hit]: y
LegendI[proxy-hit]: HTTP hits
LegendO[proxy-hit]: HTTP requests
Legend1[proxy-hit]: HTTP hits
Legend2[proxy-hit]: HTTP requests
YLegend[proxy-hit]: perminute
ShortLegend[proxy-hit]: req/min
Options[proxy-hit]: nopercent, perminute, dorelpercent, unknaszero,
growright, pngdate
#PNGTitle[proxy-hit]: Proxy Hits

Target[proxy-srvkbinout]: cacheServerInKb&cacheServerOutKb:pub...@127.0.0.1:3401
MaxBytes[proxy-srvkbinout]: 76800
Title[proxy-srvkbinout]: Cache Server Traffic In/Out
Suppress[proxy-srvkbinout]: y
LegendI[proxy-srvkbinout]: Traffic In
LegendO[proxy-srvkbinout]: Traffic Out
Legend1[proxy-srvkbinout]: Traffic In
Legend2[proxy-srvkbinout]: Traffic Out
YLegend[proxy-srvkbinout]: per minute
ShortLegend[proxy-srvkbinout]: b/min
kMG[proxy-srvkbinout]: k,M,G,T
kilo[proxy-srvkbinout]: 1024
Options[proxy-srvkbinout]: nopercent, perminute, unknaszero, growright, pngdate

I have verified that squid snmp is working through the following command

#snmpget -On -m /usr/local/etc/mrtg/squid.mib -v 2c -c public
127.0.0.1:3401 cacheHttpHits cacheServerRequests cacheServerInKb
cacheServerOutKb cacheUptime cacheSoftware cacheVersionId

This gives me results without any errors, so I think the SNMP part of squid
is working fine.
Now when I run mrtg I see the following errors in the mrtg.log file:

2010-10-04 12:37:33 -- Started mrtg with config '/usr/local/etc/mrtg/mrtg.cfg'
2010-10-04 12:37:33 -- Unknown SNMP var cacheHttpHits
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheServerRequests
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheUptime
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheSoftware
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheVersionId
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Use of uninitialized value $ret[-2] in
concatenation (.) or string at /usr/local/bin/mrtg line 2261.
2010-10-04 12:37:33 -- Use of uninitialized value $ret[-1] in
concatenation (.) or string at /usr/local/bin/mrtg line 2261.
2010-10-04 12:37:33 -- Unknown SNMP var cacheServerInKb
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheServerOutKb
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheUptime
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheSoftware
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Unknown SNMP var cacheVersionId
 at /usr/local/bin/mrtg line 2242
2010-10-04 12:37:33 -- Use of uninitialized value $ret[-2] in
concatenation (.) or string at /usr/local/bin/mrtg line 2261.
2010-10-04 12:37:33 -- Use of uninitialized value $ret[-1] in
concatenation (.) or string at /usr/local/bin/mrtg line 2261.
2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
Target[proxy-hit][_IN_] ' $target->[0]{$mode} ' did not eval into
defined data
2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
Target[proxy-hit][_OUT_] ' $target->[0]{$mode} ' did not eval into
defined data
2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
Target[proxy-srvkbinout][_IN_] ' $target->[1]{$mode} ' did not eval
into defined data
2010-10-04 12:37:33 -- 2010-10-04 12:37:33: ERROR:
Target[proxy-srvkbinout][_OUT_] ' $target->[1]{$mode} ' did not eval
into defined data

All I could make out from these errors was that mrtg is not reading the
squid.mib file. Am I right?
Now I am stuck: is mrtg broken, or did I go wrong somewhere?
Do kindly let me know what went wrong and how to proceed further.
Thanx in advance
Babs
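
For completeness, the squid.conf side implied by the working snmpget above is
just the SNMP port plus an ACL; a sketch using the values from this thread:

# squid.conf: answer SNMP queries on port 3401 for community "public"
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic localhost
snmp_access deny all

Note that Squid must also have been built with --enable-snmp for these
directives to be accepted.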


[squid-users] Squid 3.1 with MRTG, Not able to get Graphs

2010-09-09 Thread Babu Chaliyath
Hi List,
I am trying to get mrtg graphing of my squid box running FreeBSD 7.2
with squid 3.1.0.13. I was able to get the mrtg graphs while running the
2.6 version of squid, but once I moved to the 3.1 version I could not get
the mrtg graphs at all. I would greatly appreciate any suggestions/clues
about what might have gone wrong in my mrtg setup.

System details as follows
OS verion FreeBSD 7.2
Squid version 3.1.0.13
mrtg version 2.16.2
my mrtg.cfg

##MRTG Configuration file ###
WorkDir: /home/www/mrtg/
Options[_]: bits,growright
logFormat: rrdtool

Target[proxy-hit]: cacheHttpHits&cacheServerRequests:pub...@localhost:3401
MaxBytes[proxy-hit]: 10
Title[proxy-hit]: HTTP Hits
Suppress[proxy-hit]: y
LegendI[proxy-hit]: HTTP hits
LegendO[proxy-hit]: HTTP requests
Legend1[proxy-hit]: HTTP hits
Legend2[proxy-hit]: HTTP requests
YLegend[proxy-hit]: perminute
ShortLegend[proxy-hit]: req/min
Options[proxy-hit]: nopercent, perminute, dorelpercent, unknaszero,
growright, pngdate
PNGTitle[proxy-hit]: Proxy Hits

Target[proxy-srvkbinout]: cacheServerInKb&cacheServerOutKb:pub...@localhost:3401
MaxBytes[proxy-srvkbinout]: 76800
Title[proxy-srvkbinout]: Cache Server Traffic In/Out
Suppress[proxy-srvkbinout]: y
LegendI[proxy-srvkbinout]: Traffic In
LegendO[proxy-srvkbinout]: Traffic Out
Legend1[proxy-srvkbinout]: Traffic In
Legend2[proxy-srvkbinout]: Traffic Out
YLegend[proxy-srvkbinout]: per minute
ShortLegend[proxy-srvkbinout]: b/min
kMG[proxy-srvkbinout]: k,M,G,T
kilo[proxy-srvkbinout]: 1024
Options[proxy-srvkbinout]: nopercent, perminute, unknaszero, growright, pngdate
PNGTitle[proxy-srvkbinout]: Proxy Traffic In/Out

## End of MRTG Configuration ###

Kindly note that I can successfully run the following command too

#snmpwalk -m /usr/local/etc/squid/mib.txt -v2c -Cc -c public
localhost:3401 .1.3.6.1.4.1.3495.1.1

SQUID-MIB::cacheSysVMsize.0 = INTEGER: 16348
SQUID-MIB::cacheSysStorage.0 = INTEGER: 7535652
SQUID-MIB::cacheUptime.0 = Timeticks: (162270170) 18 days, 18:45:01.70

Pls let me know how I can get the graphing started.
Thanx & Regards
Babs
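
Once the targets resolve, mrtg is normally driven from cron; a generic example,
with the paths taken from the log excerpts earlier in this thread:

# crontab entry: poll squid and regenerate the graphs every five minutes
*/5 * * * * /usr/local/bin/mrtg /usr/local/etc/mrtg/mrtg.cfg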

