[squid-users] Squid Cache (Version 3.0.STABLE20) Windows SBS 2008 Reverse Proxy over Https

2011-01-19 Thread Hakan Cosar
Hello,

we are trying to get a reverse proxy working for Windows SBS 2008. ActiveSync and 
OWA work fine on SBS itself.
I've exported the certificate from SBS as .pfx and converted it to .pem format. 
The domain name remote.sci.de is not public; instead we use the public 
IP address. 
Any ideas?
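For reference, the .pfx-to-.pem split is usually done with OpenSSL along these lines (a sketch; all filenames are placeholders, and the first two commands merely fabricate a throwaway .pfx so the example is self-contained):

```shell
# Fabricate a throwaway key + certificate and bundle them as PKCS#12,
# standing in for the .pfx exported from SBS (filenames are placeholders).
openssl req -x509 -newkey rsa:2048 -nodes -keyout tmp.key -out tmp.crt \
    -subj "/CN=remote.sci.de" -days 1
openssl pkcs12 -export -in tmp.crt -inkey tmp.key -passout pass:secret -out sbs2008.pfx

# The conversion itself: split the .pfx into the certificate and key files
# that the https_port line in squid.conf points at.
openssl pkcs12 -in sbs2008.pfx -passin pass:secret -clcerts -nokeys -out sbs2008.pem
openssl pkcs12 -in sbs2008.pfx -passin pass:secret -nocerts -nodes -out sbs2008.key
```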


Cosar



--squid.conf
visible_hostname revproxy
debug_options ALL,1
extension_methods RPC_IN_DATA RPC_OUT_DATA

https_port 192.168.50.199:443 accel cert=/etc/squid/cert/sbs2008.pem 
key=/etc/squid/cert/sbs2008.key defaultsite=remote.sci.de

cache_peer 192.168.5.34 parent 443 0 no-query originserver login=PASS 
front-end-https=on name=exchangeServer

acl owa dstdomain remote.sci.de
cache_peer_access exchangeServer allow owa
cache_peer_access exchangeServer allow all
never_direct allow owa
http_access allow owa
http_access allow all
miss_access allow owa
miss_access allow all
--squid.conf

Cache.log says:

2011/01/18 16:24:57| Squid Cache (Version 3.0.STABLE20): Exiting normally.
2011/01/18 16:24:58| Starting Squid Cache version 3.0.STABLE20 for 
i386-redhat-linux-gnu...
2011/01/18 16:24:58| Process ID 10381
2011/01/18 16:24:58| With 1024 file descriptors available
2011/01/18 16:24:58| DNS Socket created at 0.0.0.0, port 38483, FD 7
2011/01/18 16:24:58| Adding domain sci.de from /etc/resolv.conf
2011/01/18 16:24:58| Adding nameserver 192.168.5.34 from /etc/resolv.conf
2011/01/18 16:24:58| User-Agent logging is disabled.
2011/01/18 16:24:58| Referer logging is disabled.
2011/01/18 16:24:58| Unlinkd pipe opened on FD 11
2011/01/18 16:24:58| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec
2011/01/18 16:24:58| Swap maxSize 102400 + 8192 KB, estimated 8507 objects
2011/01/18 16:24:58| Target number of buckets: 425
2011/01/18 16:24:58| Using 8192 Store buckets
2011/01/18 16:24:58| Max Mem  size: 8192 KB
2011/01/18 16:24:58| Max Swap size: 102400 KB
2011/01/18 16:24:58| Version 1 of swap file with LFS support detected... 
2011/01/18 16:24:58| Rebuilding storage in /var/spool/squid (CLEAN)
2011/01/18 16:24:58| Using Least Load store dir selection
2011/01/18 16:24:58| Current Directory is /
2011/01/18 16:24:58| Loaded Icons.
2011/01/18 16:24:58| Accepting HTTPS connections at 192.168.50.199, port 443, 
FD 13.
2011/01/18 16:24:58| HTCP Disabled.
2011/01/18 16:24:58| Configuring Parent 192.168.5.34/443/0
2011/01/18 16:24:58| Ready to serve requests.
2011/01/18 16:24:58| Done reading /var/spool/squid swaplog (48 entries)
2011/01/18 16:24:58| Finished rebuilding storage from disk.
2011/01/18 16:24:58|    48 Entries scanned
2011/01/18 16:24:58| 0 Invalid entries.
2011/01/18 16:24:58| 0 With invalid flags.
2011/01/18 16:24:58|    48 Objects loaded.
2011/01/18 16:24:58| 0 Objects expired.
2011/01/18 16:24:58| 0 Objects cancelled.
2011/01/18 16:24:58| 0 Duplicate URLs purged.
2011/01/18 16:24:58| 0 Swapfile clashes avoided.
2011/01/18 16:24:58|   Took 0.03 seconds (1918.31 objects/sec).
2011/01/18 16:24:58| Beginning Validation Procedure
2011/01/18 16:24:58|   Completed Validation Procedure
2011/01/18 16:24:58|   Validated 121 Entries
2011/01/18 16:24:58|   store_swap_size = 308
2011/01/18 16:24:59| storeLateRelease: released 0 objects

-BEGIN SSL SESSION PARAMETERS-
MFECAQECAgMBBAIAhAQABDAgagjWSe3u/7aXYFMw117Ty+i+g2VyHR1hRYLV/PND
yxtyiDO7NYN7MVbNoZ+TOw6hBgIETTWxLqIEAgIBLKQCBAA=
-END SSL SESSION PARAMETERS-
2011/01/18 16:26:54| TCP connection to 192.168.5.34/443 failed











[squid-users] Re: What http headers required for squid to work?

2011-01-19 Thread diginger

Hi 

I have updated my Squid to version 3.0.STABLE25, but it is caching the bad
responses, i.e. TCP_NEGATIVE_HIT/204 or TCP_NEGATIVE_HIT/400, and not caching
responses with status code 200, i.e. TCP_MISS/200.


Following the
http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator online
guide, I have done the following configuration: 

#These two lines are on top of config file 
http_port 80 accel defaultsite=mysite DNS:8081 
cache_peer 192.234.172.25 parent 8081 0 no-query originserver name=myAccel 

# And finally deny all other access to this proxy 
#http_access allow localhost 
#http_access deny all 
http_access allow our_sites 
cache_peer_access myAccel allow our_sites 
cache_peer_access myAccel deny all 

Does Squid need to be configured differently for different content types? I have 
the Content-Type header set to application/json, as I am expecting a JSON response.

Please guide me; thanks in advance.
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/What-http-headers-required-for-squid-to-work-tp3223434p3225134.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid Cache (Version 3.0.STABLE20) Windows SBS 2008 Reverse Proxy over Https

2011-01-19 Thread Amos Jeffries

On 19/01/11 21:41, Hakan Cosar wrote:

Hello,

we are trying to get a reverse proxy working for Windows SBS 2008. ActiveSync and 
OWA work fine on SBS itself.
I've exported the certificate from SBS as .pfx and converted it to .pem format. 
The domain name remote.sci.de is not public; instead we use the public 
IP address.
Any ideas?


Cosar



--squid.conf
visible_hostname revproxy


visible_hostname is supposed to be the public hostname by which the 
public see your proxy machine identified.  I would expect it to be 
remote.sci.de in this case.
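A minimal sketch of that change, assuming remote.sci.de is the name the public should see:

```
visible_hostname remote.sci.de
```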




debug_options ALL,1
extension_methods RPC_IN_DATA RPC_OUT_DATA

https_port 192.168.50.199:443 accel cert=/etc/squid/cert/sbs2008.pem 
key=/etc/squid/cert/sbs2008.key defaultsite=remote.sci.de

cache_peer 192.168.5.34 parent 443 0 no-query originserver login=PASS 
front-end-https=on name=exchangeServer



You need at minimum to flag ssl on the cache_peer line to turn on SSL 
encryption on that link.
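A sketch of the adjusted line; the sslflags=DONT_VERIFY_PEER part is an assumption, useful only if the backend certificate is not one the proxy already trusts:

```
cache_peer 192.168.5.34 parent 443 0 no-query originserver login=PASS \
    ssl sslflags=DONT_VERIFY_PEER front-end-https=on name=exchangeServer
```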




acl owa dstdomain remote.sci.de
cache_peer_access exchangeServer allow owa
cache_peer_access exchangeServer allow all
never_direct allow owa
http_access allow owa
http_access allow all
miss_access allow owa
miss_access allow all
--squid.conf

Cache.log says:

2011/01/18 16:24:57| Squid Cache (Version 3.0.STABLE20): Exiting normally.
2011/01/18 16:24:58| Starting Squid Cache version 3.0.STABLE20 for 
i386-redhat-linux-gnu...

snip

2011/01/18 16:24:59| storeLateRelease: released 0 objects

-BEGIN SSL SESSION PARAMETERS-
MFECAQECAgMBBAIAhAQABDAgagjWSe3u/7aXYFMw117Ty+i+g2VyHR1hRYLV/PND
yxtyiDO7NYN7MVbNoZ+TOw6hBgIETTWxLqIEAgIBLKQCBAA=
-END SSL SESSION PARAMETERS-
2011/01/18 16:26:54| TCP connection to 192.168.5.34/443 failed




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Re: What http headers required for squid to work?

2011-01-19 Thread Amos Jeffries

On 20/01/11 00:33, diginger wrote:


Hi

I have updated my Squid to version 3.0.STABLE25, but it is caching the bad
responses, i.e. TCP_NEGATIVE_HIT/204 or TCP_NEGATIVE_HIT/400, and not caching
responses with status code 200, i.e. TCP_MISS/200.


Following the
http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator online
guide, I have done the following configuration:

#These two lines are on top of config file
http_port 80 accel defaultsite=mysite DNS:8081


It's generally problematic to change ports in transit. The backend 
application needs support, and maybe extra configuration, to be aware that 
the client is using port 80.


You will likely need:
  http_port 80 accel vport=8081 defaultsite=$public_domain
  http_port 8081 accel defaultsite=$public_domain

or
  http_port 80 accel vhost vport=8081 defaultsite=$public_domain
  http_port 8081 accel vhost defaultsite=$public_domain


cache_peer 192.234.172.25 parent 8081 0 no-query originserver name=myAccel

# And finally deny all other access to this proxy
#http_access allow localhost
#http_access deny all
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

Does Squid need to be configured differently for different content types? I have
the Content-Type header set to application/json, as I am expecting a JSON response.

Please guide me; thanks in advance.


Please supply your full config file (without the comment lines). There 
are many things which *might* be affecting this problem.


Also did you read the reference pages in my reply from yesterday?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


[squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Rafal Zawierta
Hello,

I'm trying to set up squid to auth against AD.

AD is on 2008 server (but functionality level of 2003).
Kerberos works fine; from the Linux machine (Debian), kinit, klist and
ktutil all behave correctly. I have also created krb5.keytab, and for my proxy
user I have:

ktutil:  rkt /etc/krb5.keytab
ktutil:  l
slot KVNO Principal
---- ---- --------------------------------
   1    2 HTTP/squid.pfsee@pfsee.net
   2    2 HTTP/squid.pfsee@pfsee.net
   3    2 HTTP/squid.pfsee@pfsee.net
   4    2 HTTP/sq...@pfsee.net
   5    2 HTTP/sq...@pfsee.net
   6    2 HTTP/sq...@pfsee.net
ktutil:  q

squid - hostname of linux machine
pfsee.net - my AD domain

Squid3 cache.log (at startup)
2011/01/19 13:07:43| Process ID 1782
2011/01/19 13:07:43| With 65535 file descriptors available
2011/01/19 13:07:43| Initializing IP Cache...
2011/01/19 13:07:43| helperOpenServers: Starting 10/10
'squid_kerb_auth' processes
(is it working now?)

First try: IE8 from my AD server (2008R2).
In the LAN proxy settings I have: squid.pfsee.net

When I try to open a page, I get a basic auth prompt (I really should
not!) and cache.log says:
authenticateNegotiateHandleReply: Error validating user via Negotiate.
Error returned 'BH received type 1 NTLM token'

What is wrong? Is the problem with Squid on the Linux machine, or on the
win2k8 machine (IE client side)?

Regards
R.


Re: [squid-users] What http headers required for squid to work?

2011-01-19 Thread Henrik Nordström
tis 2011-01-18 klockan 08:41 -0800 skrev diginger:

 Please tell me what HTTP headers are required in a response for Squid
 caching to work. 

At least one of:
Last-Modified: datetime
Cache-Control: max-age=seconds
Expires: datetime

and no other headers which forbid caching, i.e. Cache-Control:
no-store / no-cache etc.
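For illustration, a response Squid could store might carry headers like these (a sketch; values are made up):

```
HTTP/1.1 200 OK
Date: Wed, 19 Jan 2011 14:19:36 GMT
Last-Modified: Tue, 18 Jan 2011 10:00:00 GMT
Cache-Control: public, max-age=600
Content-Type: application/json
Content-Length: 132
```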

Regards
Henrik



Re: [squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Amos Jeffries

On 20/01/11 01:12, Rafal Zawierta wrote:

Hello,

I'm trying to set up squid to auth against AD.

AD is on 2008 server (but functionality level of 2003).
Kerberos works fine; from the Linux machine (Debian), kinit, klist and
ktutil all behave correctly. I have also created krb5.keytab, and for my proxy
user I have:

ktutil:  rkt /etc/krb5.keytab
ktutil:  l
slot KVNO Principal
---- ---- --------------------------------
   1    2 HTTP/squid.pfsee@pfsee.net
   2    2 HTTP/squid.pfsee@pfsee.net
   3    2 HTTP/squid.pfsee@pfsee.net
   4    2 HTTP/sq...@pfsee.net
   5    2 HTTP/sq...@pfsee.net
   6    2 HTTP/sq...@pfsee.net
ktutil:  q

squid - hostname of linux machine
pfsee.net - my AD domain

Squid3 cache.log (at startup)
2011/01/19 13:07:43| Process ID 1782
2011/01/19 13:07:43| With 65535 file descriptors available
2011/01/19 13:07:43| Initializing IP Cache...
2011/01/19 13:07:43| helperOpenServers: Starting 10/10
'squid_kerb_auth' processes
(is it working now?)

First try: IE8 from my AD server (2008R2).
In the LAN proxy settings I have: squid.pfsee.net

When I try to open a page, I get a basic auth prompt (I really should
not!) and cache.log says:
authenticateNegotiateHandleReply: Error validating user via Negotiate.
Error returned 'BH received type 1 NTLM token'

What is wrong? Is the problem with Squid on the Linux machine, or on the
win2k8 machine (IE client side)?


As you can see, the browser is sending an NTLM handshake instead of the 
Kerberos token. The current Squid auth system does not support 
Negotiate/NTLM, only Negotiate/Kerberos, but has no way to tell IE8 that.


* Check that all your auth_param lines for the Negotiate type come before 
those for any other auth types.


* Check that IE is configured to use Kerberos by reference.
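A sketch of that ordering; the helper path and principal below are hypothetical placeholders, not taken from this thread:

```
# Negotiate (Kerberos) listed before any other auth_param scheme
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 10
auth_param negotiate keep_alive on
# any basic/ntlm/digest auth_param lines would only come after this
```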

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Henrik Nordström
ons 2011-01-19 klockan 13:12 +0100 skrev Rafal Zawierta:

 authenticateNegotiateHandleReply: Error validating user via Negotiate.
 Error returned 'BH received type 1 NTLM token'

That means the client selected NTLM, not Kerberos. The squid_kerb_auth
helper only supports Kerberos. To support NTLM you also need to
configure NTLM authentication support in Squid. The Negotiate scheme as
such supports, on the wire, any authentication method Windows SPNEGO
supports.

I can only guess at why the client did not select Kerberos:
* it did not find the right Kerberos principal in the domain directory;
* it does not trust the requested proxy server for Kerberos authentication;
* perhaps Kerberos auth failed somehow and it fell back to NTLM?

Regards
Henrik



Re: [squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Henrik Nordström
tor 2011-01-20 klockan 01:26 +1300 skrev Amos Jeffries:

 As you can see, the browser is sending an NTLM handshake instead of the 
 Kerberos token. The current Squid auth system does not support 
 Negotiate/NTLM, only Negotiate/Kerberos, but has no way to tell IE8 that.

Technically Squid does not care which SPNEGO (Negotiate scheme) method is
used, but squid_kerb_auth is Kerberos-only.

In this case Negotiate/NTLM was used by the client (not to be confused
with bare NTLM).

Regards
Henrik



Re: [squid-users] Problem with squid_kerb_auth

2011-01-19 Thread Rafal Zawierta
OK, I'll try to focus on the client side.

Now I've installed XP SP3 with IE8 and FF 3.6, and the problem is the same.

* Check that IE is configured to use Kerberos by reference.
How do I check that?


In addition:
When I start IE on XP machine, with Wireshark I get:
KRB Error: KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN

R.


[squid-users] Re: What http headers required for squid to work?

2011-01-19 Thread diginger

Hello, 

I have gone through the references you provided and, following those, I have
updated my Squid version too, but still no luck. I have even made the Squid
and origin-server ports the same. 

Here is my full squid.conf 

http_port 80 accel defaultsite=xxx.xx.xxx.118
cache_peer xxx.xx.xxx.118 parent 80 0 no-query originserver name=myAccel
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl our_sites dst xxx.xx.xxx.118
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?)   0   0%  0
refresh_pattern .   0   20% 4320
icp_port 3130
coredump_dir /var/spool/squid

Here is my HTTP response header:

Status=OK - 200
Content-Type=application/json
Cache-Control=max-age=6000
Server=Jetty(6.1.25)
X-Cache=MISS from cache001.com
X-Cache-Lookup=MISS from cache001.com:80
Via=1.0 cache001.com (squid/3.0.STABLE25), 1.0 localhost.localdomain
Date=Wed, 19 Jan 2011 14:19:36 GMT
Content-Length=132
Age=0

Please guide me.

Thanks 





 

-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/What-http-headers-required-for-squid-to-work-tp3223434p3225415.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Problem with squid_kerb_auth

2011-01-19 Thread Rafal Zawierta
Update.

Fortunately I was able to reinstall my proxy machine, and now it works fine.

The steps on Ubuntu 10.04 are almost the same as in:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

but please pay attention to the pathnames - they are a little bit
different on Ubuntu.

Regards


Re: [squid-users] Re: What http headers required for squid to work?

2011-01-19 Thread Amos Jeffries

On 20/01/11 03:37, diginger wrote:


Hello,

I have gone through the references you provided and, following those, I have
updated my Squid version too, but still no luck. I have even made the Squid
and origin-server ports the same.



Aha, think about this...


 acl our_sites dst xxx.xx.xxx.118

 cache_peer_access myAccel allow our_sites
 cache_peer_access myAccel deny all

If the client browser was going to xxx.xx.xxx.118, what IP would it 
connect to? xxx.xx.xxx.118 or the Squid one?


dst matches the IP the client was connecting to.
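In other words, if Squid and the origin sit on different IPs, the acl probably needs to also cover the address clients actually hit - the Squid box's own public IP. A sketch with a placeholder:

```
acl our_sites dst xxx.xx.xxx.118
acl our_sites dst <squid-public-IP>   # hypothetical: the IP clients connect to
```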



Here is my full squid.conf

http_port 80 accel defaultsite=xxx.xx.xxx.118


Why hide this?
The defaultsite value is one of the publicly visible names for things Squid 
generates on behalf of your website - for example, the value which goes on 
web pages pointing people to http://xxx.xx.xxx.118/index.html etc.


Think like a user: "What the..? I'm trying to reach example.com, which is 
at *.*.*.20."



cache_peer xxx.xx.xxx.118 parent 80 0 no-query originserver name=myAccel


snip

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow our_sites


Um, unrelated problem.

Line 1 of all the reverse-proxy configuration guides mentions:
    Warning: the reverse-proxy configuration MUST be placed first in 
the config file, above any regular forward-proxy configuration.

That means most reverse-proxy configuration needs to be at the very top 
of the config file, before anything else;
*particularly*, the http_access allow our_sites must be at the 
top of the http_access list.


The forward-proxy limits have not done you noticeable harm, but it's worth 
fixing.
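In outline (placeholders only, not a complete config):

```
# --- reverse-proxy section first ---
http_port 80 accel defaultsite=<public-name>
cache_peer <backend-ip> parent 80 0 no-query originserver name=myAccel
acl our_sites dst <public-ip>
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all

# --- standard forward-proxy rules only after that ---
http_access allow manager localhost
http_access deny manager
```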



icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 3128
hierarchy_stoplist cgi-bin ?


hierarchy_stoplist will be doing bad things. Removing it from reverse 
proxies is useful.



access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?)   0   0%  0
refresh_pattern .   0   20% 4320
icp_port 3130
coredump_dir /var/spool/squid

Here is my HTTP response header:

Status=OK - 200
Content-Type=application/json
Cache-Control=max-age=6000
Server=Jetty(6.1.25)
X-Cache=MISS from cache001.com
X-Cache-Lookup=MISS from cache001.com:80
Via=1.0 cache001.com (squid/3.0.STABLE25), 1.0 localhost.localdomain
Date=Wed, 19 Jan 2011 14:19:36 GMT
Content-Length=132
Age=0

Please guide me.



Nothing visible there explains why that response would not be stored.

Perhaps you are testing with a forced refresh or reload? The F5 browser 
action, the refresh-page button and shift-refresh (aka force reload) all send 
headers that will force this result to be a MISS.


You can get around this by using squidclient, wget or similar tools, or by 
just clicking on the address bar and pressing Enter to re-load the page.
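For example, a sketch with squidclient (hypothetical host and proxy address; squidclient ships with Squid):

```
squidclient -h 127.0.0.1 -p 80 http://example.com/some/object
```

Run it twice and compare the X-Cache response header: the second fetch should report a HIT if the object was stored.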



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Re: Problem with squid_kerb_auth

2011-01-19 Thread Amos Jeffries

On 20/01/11 03:51, Rafal Zawierta wrote:

Update.

Fortunately I was able to reinstall my proxy machine, and now it works fine.

The steps on Ubuntu 10.04 are almost the same as in:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

but please pay attention to the pathnames - they are a little bit
different on Ubuntu.

Regards


Which paths were causing trouble?
 This may be worth an extra note in the wiki.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


[squid-users] Why is Cache-Control: max-age added to forwarded HTTP requests?

2011-01-19 Thread John Craws
Hi,

After observing this, I have been going through RFC 2616, the squid
documentation, mailing list archives and various google results.

It's still not completely clear to me: why is squid adding a
Cache-Control with max-age defined in cases where the original client
request does not contain one.
Why for a request? What is the intended behavior / desired result?

Thank you!

John Craws


[squid-users] errors on Make squid 3.1.10, ubuntu 10.04.1 server

2011-01-19 Thread mbruell

Hi,

I'm trying to build a transparent proxy using squid 3.1.10, iptables 1.4.10,
and a custom kernel based on the standard kernel for ubuntu server 10.04
LTS, but using 2.6.37 source with the following additional kernel configs:

NF_CONNTRACK=m
NETFILTER_TPROXY=m
NETFILTER_XT_MATCH_SOCKET=m
NETFILTER_XT_TARGET_TPROXY=m

To build squid, I run the .configure with the following options:

--enable-linux-netfilter
--prefix=/usr
--localstatedir=/var
--libexecdir=${prefix}/lib/squid
--srcdir=.
--datadir=${prefix}/share/squid
--sysconfdir=/etc/squid
--with-default-user=proxy
--with-logdir=/var/log
--with-pidfile=/var/run/squid.pid

and it appears to work okay (though I can post the config.log if that's
helpful).

However, when I run make I see errors. These are last few lines of the
output from make:

mv -f $depbase.Tpo $depbase.Po
tools.cc: In function ‘void restoreCapabilities(int)’:
tools.cc:1233: error: ‘cap_t’ was not declared in this scope
tools.cc:1233: error: expected ‘;’ before ‘caps’
tools.cc:1235: error: ‘caps’ was not declared in this scope
tools.cc:1235: error: ‘cap_get_proc’ was not declared in this scope
tools.cc:1237: error: ‘caps’ was not declared in this scope
tools.cc:1237: error: ‘cap_init’ was not declared in this scope
tools.cc:1238: error: ‘caps’ was not declared in this scope
tools.cc:1243: error: ‘cap_value_t’ was not declared in this scope
tools.cc:1243: error: expected ‘;’ before ‘cap_list’
tools.cc:1244: error: ‘cap_list’ was not declared in this scope
tools.cc:1253: error: ‘CAP_EFFECTIVE’ was not declared in this scope
tools.cc:1253: error: ‘cap_clear_flag’ was not declared in this scope
tools.cc:1254: error: ‘CAP_SET’ was not declared in this scope
tools.cc:1254: error: ‘cap_set_flag’ was not declared in this scope
tools.cc:1255: error: ‘CAP_PERMITTED’ was not declared in this scope
tools.cc:1257: error: ‘cap_set_proc’ was not declared in this scope
tools.cc:1260: error: ‘cap_free’ was not declared in this scope
make[3]: *** [tools.o] Error 1
make[3]: Leaving directory `/usr/src/squid-3.1.10/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/squid-3.1.10/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/squid-3.1.10/src'
make: *** [all-recursive] Error 1
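Those undeclared cap_t / cap_get_proc symbols usually mean the libcap development headers were missing when ./configure ran; a likely fix on Ubuntu (package name assumed, unverified for this setup) is:

```
sudo apt-get install libcap2-dev
```

then re-running ./configure and make.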

Thanks for any help you can give.

Marc
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/errors-on-Make-squid-3-1-10-ubuntu-10-04-1-server-tp3225450p3225450.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] SSL Stops responding

2011-01-19 Thread James P. Ashton
Hi all,
 It appears that after about two months of uptime, I had a pair of Squid 
servers stop servicing SSL at the same time. Both are running CentOS 5.5, fully updated.

Version: 3.0.STABLE25-1.el5 (from the rpmforge repository)

Servers are default CentOS 5.5 install with no packages or package groups 
installed outside of base.  Only squid from rpmforge.
They are Dell 2950s with Solid state cache drives. 16G of ram each.
They are running in accelerator mode. The config is posted below.
They are behind a load balancer. The traffic to about a dozen sites is 
balanced across these 2 servers.

No errors in the error log, no errors in the cache log, and nothing in the 
access log other than no requests for any SSL domains. It appears as if the 
requests were simply not getting to Squid.

Netstat showed 2 connections to port 443; both were off-site addresses.  

Restarting squid solved the issue. Connections were getting through immediately.

All this time non SSL (Port 80 / HTTP) requests were working with no problems.


Any thoughts on this?  

Thanks in advance for any ideas.
James




Config



http_port 80 accel vhost   #For IP xxx.xxx.xxx.101

https_port xxx.xxx.xxx.101:443 cert=/root/SSL/9696421.crt 
key=/root/SSL/xmediagroup.com.key cafile=/root/SSL/9696421.ca-bundle 
options=NO_SSLv2 accel vhost 
cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

https_port xxx.xxx.xxx.103:443 cert=/root/SSL/multi-domain.crt 
key=/root/SSL/multi-domain.key cafile=/root/SSL/multi-domain.ca-bundle 
options=NO_SSLv2 accel vhost 
cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

# Test Server
# Production Servers
cache_peer xxx.xxx.xxx.21 parent 80 0 no-query no-digest originserver 
login=PASS name=default1 round-robin
cache_peer xxx.xxx.xxx.22 parent 80 0 no-query no-digest originserver 
login=PASS name=default2 round-robin
cache_peer xxx.xxx.xxx.23 parent 80 0 no-query no-digest originserver 
login=PASS name=default3 round-robin
cache_peer xxx.xxx.xxx.24 parent 80 0 no-query no-digest originserver 
login=PASS name=default4 round-robin
cache_peer xxx.xxx.xxx.25 parent 80 0 no-query no-digest originserver 
login=PASS name=default5 round-robin
#
# xuser
cache_peer xxx.xxx.xxx.61 parent 80 0 no-query no-digest originserver 
login=PASS name=puser1 round-robin
cache_peer xxx.xxx.xxx.62 parent 80 0 no-query no-digest originserver 
login=PASS name=puser2 round-robin
cache_peer xxx.xxx.xxx.63 parent 80 0 no-query no-digest originserver 
login=PASS name=puser3 round-robin
cache_peer xxx.xxx.xxx.64 parent 80 0 no-query no-digest originserver 
login=PASS name=puser4 round-robin
cache_peer xxx.xxx.xxx.72 parent 80 0 no-query no-digest originserver 
login=PASS name=puser5 round-robin
#
# xMedia
cache_peer xxx.xxx.xxx.51 parent 80 0 no-query no-digest originserver 
login=PASS name=kmedia1 round-robin
cache_peer xxx.xxx.xxx.52 parent 80 0 no-query no-digest originserver 
login=PASS name=kmedia2 round-robin
cache_peer xxx.xxx.xxx.53 parent 80 0 no-query no-digest originserver 
login=PASS name=kmedia3 round-robin
cache_peer xxx.xxx.xxx.54 parent 80 0 no-query no-digest originserver 
login=PASS name=kmedia4 round-robin
cache_peer xxx.xxx.xxx.70 parent 80 0 no-query no-digest originserver 
login=PASS name=kmedia5 round-robin
#
# xworld
cache_peer xxx.xxx.xxx.66 parent 80 0 no-query no-digest originserver 
login=PASS name=pworld1 round-robin
cache_peer xxx.xxx.xxx.67 parent 80 0 no-query no-digest originserver 
login=PASS name=pworld2 round-robin
cache_peer xxx.xxx.xxx.68 parent 80 0 no-query no-digest originserver 
login=PASS name=pworld3 round-robin
cache_peer xxx.xxx.xxx.69 parent 80 0 no-query no-digest originserver 
login=PASS name=pworld4 round-robin
cache_peer xxx.xxx.xxx.73 parent 80 0 no-query no-digest originserver 
login=PASS name=pworld5 round-robin
#
# xTraining
cache_peer xxx.xxx.xxx.56 parent 80 0 no-query no-digest originserver 
login=PASS name=ktrain1 round-robin
cache_peer xxx.xxx.xxx.57 parent 80 0 no-query no-digest originserver 
login=PASS name=ktrain2 round-robin
cache_peer xxx.xxx.xxx.58 parent 80 0 no-query no-digest originserver 
login=PASS name=ktrain3 round-robin
cache_peer xxx.xxx.xxx.59 parent 80 0 no-query no-digest originserver 
login=PASS name=ktrain4 round-robin
cache_peer xxx.xxx.xxx.71 parent 80 0 no-query no-digest originserver 
login=PASS name=ktrain5 round-robin
#
# Ad Server
cache_peer xxx.xxx.xxx.30 parent 80 0 no-query no-digest originserver 
login=PASS name=adserver1 round-robin
#
acl PURGE method PURGE
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
#acl all src 0.0.0.0/0.0.0.0
#

acl our_sites dstdomain origin.xmediagroup.com
acl our_sites dstdomain streamorigin.xmediagroup.com
acl our_sites dstdomain xtrainingonline.com
acl our_sites dstdomain www.xtrainingonline.com
acl our_sites dstdomain images.xmediagroup.com
acl our_sites dstdomain 

[squid-users] Re: squid_kerb_ldap question

2011-01-19 Thread Markus Moeller
For squid_kerb_ldap to work, the AD entry must have a userPrincipalName 
attribute set to one of the keytab entry names, e.g. 
HTTP/ubuntu.pfsee@pfsee.net.

This is one of the differences between msktutil with --upn and net ads join.
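A hypothetical msktutil invocation (hostnames and paths are placeholders, and the flag spellings should be checked against your msktutil version):

```
msktutil -c -s HTTP/proxy.example.com -k /etc/squid/HTTP.keytab \
    --computer-name proxy --upn HTTP/proxy.example.com --server dc.example.com
```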

Markus


- Original Message - 
From: Rafal Zawierta zawie...@gmail.com

To: hua...@moeller.plus.com
Sent: Wednesday, January 19, 2011 11:39 PM
Subject: squid_kerb_ldap question



Hello Markus!

If you don't mind I'd like to ask you for help with my squid_kerb_ldap 
problem.

After 2 long days I have squid_kerb_auth working.

I have an Ubuntu host, which was joined to AD by the net join command, AND
krb5.keytab was also created that way.

Now, when I start my squid with kerb_ldap helper I get:
2011/01/20 00:20:14| squid_kerb_ldap: Error while initialising
credentials from keytab : Client not found in Kerberos database
2011/01/20 00:20:14| squid_kerb_ldap: Error during setup of Kerberos
credential cache

AFAIK the problem is with my keytab - am I right? Is it possible to fix
it without running msktutil? Or is the only good way to delete (?) my
keytab and create a new one with msktutil and the --upn option?

ktutil on proxy server shows me:
ktutil:  rkt /etc/squid/HTTP.keytab
ktutil:  l
slot KVNO Principal
---- ---- --------------------------------
   1    2 host/ubuntu.pfsee@pfsee.net
   2    2 host/ubuntu.pfsee@pfsee.net
   3    2 host/ubuntu.pfsee@pfsee.net
   4    2 host/ubu...@pfsee.net
   5    2 host/ubu...@pfsee.net
   6    2 host/ubu...@pfsee.net
   7    2 UBUNTU$@PFSEE.NET
   8    2 UBUNTU$@PFSEE.NET
   9    2 UBUNTU$@PFSEE.NET
  10    2 HTTP/ubuntu.pfsee@pfsee.net
  11    2 HTTP/ubuntu.pfsee@pfsee.net
  12    2 HTTP/ubuntu.pfsee@pfsee.net
  13    2 HTTP/ubu...@pfsee.net
  14    2 HTTP/ubu...@pfsee.net
  15    2 HTTP/ubu...@pfsee.net

But on the AD server, in AD Users and Computers, there is NO HTTP or
similar entry in Users - just ubuntu in Computers.

Regards
Rafal







[squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-19 Thread Max Feil
I'm wondering if anybody knows what might be causing this. I've
confirmed this problem in linux builds of Squid 3.0, 3.1.1, 3.1.10 and
3.2.0.4.

Using firefox (or probably any browser - it also happens in a webkit
based browser under development) clear the browser's disk cache and try
to load or reload http://ireport.cnn.com (with proxy address/port set to
Squid of course). Loading the page takes a very long time (several
minutes) even on a fast network connection. Take Squid out of the mix
and everything loads in seconds.

This is using the default squid.conf file. The problem does not happen
in Squid 2.7!

Thanks,
Max


Re: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-19 Thread Amos Jeffries

On 20/01/11 13:31, Max Feil wrote:

I'm wondering if anybody knows what might be causing this. I've
confirmed this problem in linux builds of Squid 3.0, 3.1.1, 3.1.10 and
3.2.0.4.

Using firefox (or probably any browser - it also happens in a webkit
based browser under development) clear the browser's disk cache and try
to load or reload http://ireport.cnn.com (with proxy address/port set to
Squid of course). Loading the page takes a very long time (several
minutes) even on a fast network connection. Take Squid out of the mix
and everything loads in seconds.

This is using the default squid.conf file. The problem does not happen
in Squid 2.7!

Thanks,
Max


There are 101 different objects assembled into that one page, coming from 
10 different domains.


Browsers set a very low limit on the number of connections and objects 
fetched in parallel when using a proxy, as compared to going direct. 
Large pages like this make the speed difference more noticeable.


That will account for some of the extra time, but it should not be taking 
that much longer. You will need to find out which objects are taking too 
long (Firebug or the WebKit dev tools should help) and then figure out 
why.
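One way to do that outside the browser is to time each object's fetch through the proxy and compare against a direct fetch. A minimal sketch (helper names are mine, and the proxy address 127.0.0.1:3128 is an assumption; adjust to your setup):

```python
# Time individual object fetches, optionally through an HTTP proxy,
# to spot which URL is hanging. Sketch only; not a Squid tool.
import time
import urllib.request

def make_opener(proxy=None):
    """Build a urllib opener, optionally routed through an HTTP proxy."""
    handlers = []
    if proxy:
        handlers.append(urllib.request.ProxyHandler({"http": proxy,
                                                     "https": proxy}))
    return urllib.request.build_opener(*handlers)

def time_fetch(url, proxy=None, timeout=130):
    """Fetch one URL and return (status_code, elapsed_seconds)."""
    opener = make_opener(proxy)
    start = time.monotonic()
    with opener.open(url, timeout=timeout) as resp:
        resp.read()
    return resp.status, time.monotonic() - start
```

Run it over the list of object URLs from the page (e.g. taken from access.log), once with `proxy="http://127.0.0.1:3128"` and once with `proxy=None`, and the slow objects stand out immediately.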


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Welcome page on first access to web ever

2011-01-19 Thread Amos Jeffries

On 20/01/11 12:01, Rafal Zawierta wrote:

Hello,

Is it possible, with Squid, to show a new user (and only the first
time he accesses the Web) some kind of welcome page with rules, which
he must accept to enter the Web?

Users are authorized by AD and squid_kerb_auth.

Regards
R.


What you describe is called a captive portal splash page.

http://wiki.squid-cache.org/ConfigExamples/Portal/Splash
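The approach on that wiki page is built around the session helper; a minimal squid.conf sketch (helper path, splash URL and TTLs are placeholders, and the helper name varies between Squid releases, so check the wiki page for yours):

```
# mark clients the helper has seen recently as "existing"
external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC \
    /usr/lib/squid/squid_session -t 7200
acl existing_users external session

# first-time clients are denied and redirected to the splash page
http_access deny !existing_users
deny_info http://your.server/splash.html existing_users
```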

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] Why is Cache-Control: max-age added to forwarded HTTP requests?

2011-01-19 Thread Amos Jeffries

On 20/01/11 08:29, John Craws wrote:

Hi,

After observing this, I have been going through RFC 2616, the squid
documentation, mailing list archives and various google results.

It's still not completely clear to me why Squid adds a Cache-Control
header with max-age defined in cases where the original client request
does not contain one.
Why on a request? What is the intended behavior / desired result?

Thank you!

John Craws


Which Squid version? There are different behaviours for different versions.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


Re: [squid-users] errors on Make squid 3.1.10, ubuntu 10.04.1 server

2011-01-19 Thread Amos Jeffries

On 20/01/11 10:07, mbruell wrote:


Hi,

I'm trying to build a transparent proxy using squid 3.1.10, iptables 1.4.10,
and a custom kernel based on the standard kernel for ubuntu server 10.04
LTS, but using 2.6.37 source with the following additional kernel configs:

NF_CONNTRACK=m
NETFILTER_TPROXY=m
NETFILTER_XT_MATCH_SOCKET=m
NETFILTER_XT_TARGET_TPROXY=m
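Those kernel options line up with the usual TPROXY interception recipe; for reference, the routing/iptables side of that setup looks roughly like this (a sketch assuming Squid listens with `tproxy` on port 3129; marks, table numbers and ports are placeholders):

```
# route marked packets to the local machine
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# divert packets belonging to existing proxied sockets
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

# redirect new port-80 connections into Squid's tproxy port
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
    --tproxy-mark 0x1/0x1 --on-port 3129
```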

To build squid, I run ./configure with the following options:

--enable-linux-netfilter
--prefix=/usr
--localstatedir=/var
--libexecdir=${prefix}/lib/squid
--srcdir=.
--datadir=${prefix}/share/squid
--sysconfdir=/etc/squid
--with-default-user=proxy
--with-logdir=/var/log
--with-pidfile=/var/run/squid.pid

and it appears to work okay (though I can post the config.log if that's
helpful).

However, when I run make I see errors. These are last few lines of the
output from make:

mv -f $depbase.Tpo $depbase.Po
tools.cc: In function ‘void restoreCapabilities(int)’:
tools.cc:1233: error: ‘cap_t’ was not declared in this scope
tools.cc:1233: error: expected ‘;’ before ‘caps’
tools.cc:1235: error: ‘caps’ was not declared in this scope
tools.cc:1235: error: ‘cap_get_proc’ was not declared in this scope
tools.cc:1237: error: ‘caps’ was not declared in this scope
tools.cc:1237: error: ‘cap_init’ was not declared in this scope
tools.cc:1238: error: ‘caps’ was not declared in this scope
tools.cc:1243: error: ‘cap_value_t’ was not declared in this scope
tools.cc:1243: error: expected ‘;’ before ‘cap_list’
tools.cc:1244: error: ‘cap_list’ was not declared in this scope
tools.cc:1253: error: ‘CAP_EFFECTIVE’ was not declared in this scope
tools.cc:1253: error: ‘cap_clear_flag’ was not declared in this scope
tools.cc:1254: error: ‘CAP_SET’ was not declared in this scope
tools.cc:1254: error: ‘cap_set_flag’ was not declared in this scope
tools.cc:1255: error: ‘CAP_PERMITTED’ was not declared in this scope
tools.cc:1257: error: ‘cap_set_proc’ was not declared in this scope
tools.cc:1260: error: ‘cap_free’ was not declared in this scope
make[3]: *** [tools.o] Error 1
make[3]: Leaving directory `/usr/src/squid-3.1.10/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/src/squid-3.1.10/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/squid-3.1.10/src'
make: *** [all-recursive] Error 1

Thanks for any help you can give.

Marc


You have a problem with your libcap package.

Make sure it is a libcap2 library package and matching libcap-dev 
headers package version.


You could use the Lucid 3.1.10 source package for an easier build:
  https://launchpad.net/~yadi/+archive/ppa

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4


RE: [squid-users] Squid 3.x very slow loading on ireport.cnn.com

2011-01-19 Thread Max Feil
Thanks. I am looking at the squid access.log and the delay is caused by
a GET which for some reason does not result in a response from the
server. Either there is no response or Squid is missing the response.
After a 120 second time-out the page continues loading, but the end
result may be malformed due to the object which did not load. 

The error object is different every time and seems random! So the page
never loads properly with Squid 3.x and takes about 125 seconds to load.
It always loads properly without Squid and takes about 5 seconds to
load. It always loads properly using Squid 2.7 and takes about 5 seconds
to load.

For consistency in tracking the problem down, I have Squid's disk and
memory caches disabled so every client request is a cache miss.
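For anyone reproducing this, one way to get that all-miss behaviour in squid.conf (one approach; exact directive support varies slightly across 3.x):

```
# refuse to cache anything; every request becomes a MISS
cache deny all
# keep the memory cache negligible as well (and configure no cache_dir)
cache_mem 0 MB
```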

Strange eh?

Max

P.S. I am debugging natively on my Ubuntu 10.10 64 bit laptop using
Firefox, but the original problem comes from an embedded device running
the QNX RTOS using a libcurl based WebKit browser (both the browser and
Squid are running on 127.0.0.1 in each case, but this problem happens
across the network as well).

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, January 19, 2011 9:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.x very slow loading on
ireport.cnn.com

On 20/01/11 13:31, Max Feil wrote:
 I'm wondering if anybody knows what might be causing this. I've
 confirmed this problem in linux builds of Squid 3.0, 3.1.1, 3.1.10 and
 3.2.0.4.

 Using firefox (or probably any browser - it also happens in a webkit
 based browser under development) clear the browser's disk cache and
 try to load or reload http://ireport.cnn.com (with proxy address/port
 set to Squid of course). Loading the page takes a very long time
 (several minutes) even on a fast network connection. Take Squid out of
 the mix and everything loads in seconds.

 This is using the default squid.conf file. The problem does not happen
 in Squid 2.7!

 Thanks,
 Max

There are 101 different objects assembled into that one page coming from
10 different domains.

Browsers set a very low limit on the amount of connections and objects 
fetched in parallel when using a proxy as compared to going direct. 
Large pages like this make the speed difference more noticeable.

That will account for some of the extra time. But should not be taking 
that much longer. You will need to find out which objects are taking too 
long (firebug or the webkit dev tools should help) and then figure out 
why them.

Amos
-- 
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.10
  Beta testers wanted for 3.2.0.4