Re: [squid-users] Authentication bug in 3.1.19 solved in 3.2.0.17

2012-05-27 Thread Amos Jeffries

On 25/05/2012 8:37 p.m., Alan wrote:

Hello,

I'm implementing a proxy server that authenticates users via radius,
and then based on the source ip, login and the destination, grants
access or not to the requested objects.

The relevant section of squid.conf is:

auth_param basic program /usr/lib/squid/squid_radius_auth -f
/etc/squid/radius.conf -t 5
auth_param basic children 5
auth_param basic realm Web Proxy
auth_param basic credentialsttl 1 hour
external_acl_type my_acl_type %SRC %LOGIN %DST /var/www/htdocs/acl.php
acl my_acl external my_acl_type
http_access allow my_acl
http_access deny all

Both IE and Firefox have the same behavior: they popup the
authentication prompt, then they can make requests for a while, and
randomly popup the authentication prompt again.  I type the same
username and password, and it works fine.
In Konqueror there is no popup, I guess it tries again one more time
with the last username and password before prompting the user.

A network capture reveals that the client is always sending the right
Proxy-Authentication header with its requests, but squid randomly
replies with a 407 status code, without even asking the radius server
(the authentication result is presumably still cached).

In squid 3.2.0.17 this problem is gone and I don't get the
authentication prompts anymore, but since it is labeled Beta instead
of Stable, I wonder if this can be solved in the 3.1 series.

Has anybody else been affected by this?


Very likely.

The 3.2 code has had a near-complete re-write to resolve a large number
of mysterious little bugs in pointer reference handling. That change is
BIG, is still being debugged itself, and I'm not willing to repeat the
years of voluntary work spent on fixing 3.2. So the auth bug fixes are not
going to be automatically ported to 3.1 the way most others are. Patches
are welcome, though, if you wish to spend the time or money to locate and
fix this in 3.1.


Amos


Re: [squid-users] How to disable squid nat

2012-05-27 Thread Amos Jeffries

On 25/05/2012 10:31 a.m., Luis Candia wrote:

Hi, we have a proxy to control access to the Internet for the LAN users
and it is working right. Now I want to install a router capable of doing
traffic shaping using the original IP addresses of the LAN users, but
the router can only see the external IP of the squid server. I tried
to disable this kind of NATing that the squid server is doing, but with no
success. Please, I need your help.

Luis Candia


Several points you need to understand before I give any solution:
 1) This is how HTTP operates. It is a protocol with explicit
specification for multiple hops during both request and response
delivery. Those hops are in the HTTP protocol layer, not the TCP or IP
packet layers. Any given TCP connection carrying HTTP may contain multiple
HTTP requests, each of which comes from one or more end-users.


2) Squid *is* the client of the router. Requests sent by Squid are *not*
the same requests sent by the clients contacting Squid. They have been
received and processed by Squid; some of them may be served by Squid,
some of them may need requests to another server to fulfill. Squid will
then alter the request and pass it on. These latter are the requests from
Squid to the Internet which you are trying to control; at the TCP/QoS level
any relation they have to an end-user is mere coincidence.


Once you are clear on those, it should be clear that removing the
NAT-like behaviour from HTTP requires disabling one of the major
performance benefits offered by HTTP (persistent connections with
pipelining and multiplexing).



You have several options:

1) Use Squid's HTTP-level access and service controls. These include
several ways of QoS tagging of traffic by Squid, i.e. no need for the
router to know the client's internal IP, just to process the transaction
TOS properly. A sketch follows below.
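
As a sketch of option 1, assuming a hypothetical client subnet and an
arbitrary TOS value (the tcp_outgoing_tos directive itself is standard
squid.conf; the addresses and 0x20 are placeholders to adapt):

  # tag outgoing traffic for these clients so the router can shape on TOS
  acl lan_users src 192.168.0.0/16
  tcp_outgoing_tos 0x20 lan_users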


2) Use router software which supports HTTP relay/proxy itself and can
identify the HTTP X-Forwarded-For headers added by Squid (enabling
forwarded_for in squid.conf if it is disabled).


3) Set up Squid as a TPROXY interception proxy, which retains the
TCP-level IP address info across the Squid software processing stage.
This does exactly what you ask for, but breaks the HTTP multiplexing
features annoying you, and a few others such as authentication, DNS
offloading, and non-HTTP protocol gatewaying.


Amos


Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-27 Thread Amos Jeffries

On 25/05/2012 7:50 a.m., Ruiyuan Jiang wrote:

Hi, Clem

I am reading your post

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth does not support:

It's facing the double hop issue, ntlm credentials can be sent only on one hop, and
is lost with 2 hops like: client -> squid (hop1) -> IIS6 rpc proxy (hop2) ->
exchange 2007

That is not true. Here we have the setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; I just could not have the latest Apache. Otherwise I will
continue to use the Apache reverse proxy. The latest Apache does not support
MS RPC over http, which is posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.


Squid does support relaying any type of WWW-auth headers to the backend
over multiple hops. What Squid does not support is logging *itself* into
a peer proxy with NTLM (using proxy-auth headers).


There are also various minor but annoying bugs in NTLM pinning support
and persistent connections handling in some Squid releases. With those,
basically the newer the Squid release the better, but it's still not 100%
clean.


I am noting a LOT of complaints in the areas of Squid-IIS and
SharePoint, and a few other MS products this year. But nobody has yet
been able to supply a patch for anything (I don't have MS products or
time to work on this stuff myself). There is a hint that it is related
to Squid-3.1 persistent connection keep-alive to the server, if that
helps anyone; one way to test it is sketched below.
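
For anyone wanting to test that hint, a diagnostic sketch (not a
recommended permanent setting, since it costs performance) is to turn off
server-side persistent connections in squid.conf and see whether the
failures stop:

  # diagnostic only: disable server-side keep-alive to test the theory
  server_persistent_connections off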


Amos


Re: [squid-users] Error to test connectivity to internal MS Exchange server

2012-05-27 Thread Amos Jeffries

On 23/05/2012 10:08 a.m., Ruiyuan Jiang wrote:

Hi, all

I am trying to setup MS webmail over rpc Exchange server access through squid 
(squid 3.1.19, SPARC, Solaris 10) from internet. Here is my pilot squid 
configuration (squid.conf):

https_port 156.146.2.196:443 accel 
cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
defaultsite=webmail.juicycouture.com

cache_peer 10.150.2.15 parent 443 0 no-query originserver login=PASS ssl 
sslcert=/opt/squid-3.1.19/ssl.crt/webmail_katespade_com.crt 
sslkey=/opt/squid-3.1.19/ssl.crt/webmail_katespade_com.key 
sslcafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt name=exchangeServer

snip

2012/05/22 17:44:15| fwdNegotiateSSL: Error negotiating SSL connection on FD 
13: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed (1/-1/0)
2012/05/22 17:44:15| TCP connection to 10.150.2.15/443 failed
2012/05/22 17:44:15| fwdNegotiateSSL: Error negotiating SSL connection on FD 
13: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify 
failed (1/-1/0)

 From the packet capture, the internal Exchange server reset the connection from the squid
proxy server with either Alert (Level: Fatal, Description: Unknown CA) when I used the
above official certificates, or Alert (Level: Fatal, Description: Certificate Unknown)
when I used an internal CA-signed certificate, after the initial https handshake between
squid and the exchange server. Can anyone tell me how to correctly configure the
cache_peer statement to make it work?


In case you did not figure this out already... Squid is unable to
validate the exchange server certificate using either the OpenSSL
library's trusted CA certificates or the certificate given via the
sslcafile= parameter.


* Check that your OpenSSL library's trusted CA certificates are up to date
on the Squid machine - this is the most common cause of validation errors.


* Check that your /opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt file on
the Squid machine contains the CA used to sign the Exchange server's
certificate. A sketch for checking both points is below.
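
A quick way to test both points from the Squid machine is a sketch like the
following (openssl s_client is standard; the peer address and CA file path
are taken from the configuration above):

  # does that CA file validate what the Exchange server presents?
  openssl s_client -connect 10.150.2.15:443 \
      -CAfile /opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt
  # look for "Verify return code: 0 (ok)" in the output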


Amos


Re: [squid-users] can't access cachemgr

2012-05-27 Thread Amos Jeffries

On 24/05/2012 6:45 a.m., Jeff MacDonald wrote:

Hi,

I can't put the access rules above the acl definition, if that was what you
meant. But in case that isn't what you meant... I did re-order it a bit and
this is what I have now... still no access.

FYI, I'm trying to access it using the cache manager CGI, which runs on the
same server.


If you have a current Squid (3.1 series), localhost is also using the
IP address ::1. This may need adding to your ACL definition.


For your current problem though see below ...



root@proxy:~# !gre
grep -e ^acl -e ^http_acc /etc/squid3/squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl westhants proxy_auth REQUIRED
acl westhants-network src 192.168.11.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT


In general you can consider squid.conf somewhat of a script, programming
Squid in what to do with a request.


As such, when Squid needs to check whether an HTTP request is allowed to be
processed, it does the following...




http_access allow westhants

Step 1)
 1a) test the westhants ACL.
 1b) send a 407 message to locate the client credentials.

Step 2) - there is no 2; see 1b for why.



http_access allow localhost
http_access allow westhants-network
http_access allow manager localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all



Consider the logic of:

 deny A
 deny B
 deny everything

Why bother denying A and B individually if everything is denied anyway?

There is also a disconnection between your westhants authentication
test and the westhants network IPs.


Simply put, IMHO your ACLs should be configured as:

  http_access allow manager localhost
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost
  http_access allow westhants-network westhants
  http_access deny all


If you want particulars about why, I'm happy to provide them. But it should
be clear if you understand that Squid tests http_access lines top-down,
left-to-right, on a first-line-to-match-wins basis. Lines where one ACL does
not match are skipped immediately in favour of the next line. An annotated
sketch follows below.
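
To make that evaluation order concrete, a short annotated sketch using the
ACL names from above:

  http_access allow manager localhost  # used only if BOTH ACLs match;
                                       # if either fails, testing falls
                                       # through to the next line
  http_access deny all                 # reached only when nothing above matched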


Amos


Re: [squid-users] squid slow response time

2012-05-27 Thread Amos Jeffries

On 24/05/2012 10:05 a.m., Ali Esf wrote:

Hello list and hello dear Amos,
thanks for your help.
Some of my problems with squid are solved, but some of them are not.

I compared squid on Linux CentOS 5.8 with CC Proxy on Microsoft Windows
Server 2003,

and understood that ccproxy is faster than squid on a machine of the same
specification, and supports more users.

I captured the screen of the cc proxy and of squid.

http://up98.org/upload/server1/02/j/bpufq054uyf1qeamraj.jpg

The above picture shows cc proxy on Windows. As you see, it supports 64
users and 1264 connections, and even more.

http://up98.org/upload/server1/02/j/kqlr5fcr2fvk1jafqva4.jpg

The above picture shows port 9090, which is configured for the http proxy
by squid, via the netstat command.
It shows there are 574 connections through port 9090 and squid.

http://up98.org/upload/server1/02/j/hprnte4gldvsylb19xf.jpg

The above picture shows the number of users on port 9090, which is 37 users.


Ah, I see. You are confusing users with TCP connections. There is no
relationship in HTTP between the number of users supported and the number
of connections supported.


The number of TCP connections as measured by netstat has only one limit:
65535 TCP connections per receiving IP:port on the box. This will be
true for both proxies, I'm sure. What will be different is the HTTP
keep-alive support, which determines how and when connections are
closed, and how many requests happen inside each before closure.
Pipelining of requests also determines whether any requests are aborted
and have to be retried.



What you are looking at depends entirely on what those 1264/64 numbers mean
to ccproxy. Is that 1264 authenticated users using 64 concurrent TCP
connections? Or 1264 TCP connections with 64 currently alive? Or is it
1264 requests received over 64 TCP connections?


Squid faces the same ambiguity when looking solely at netstat numbers. One
user can open 1 or more TCP connections, and any or none of them can be
kept alive.



The amount of slowdown you can expect from one or both depends
entirely on the number of requests sent over each TCP connection, which
is where the questions above become very important:

 * Authenticating requires a minimum of 1-2 requests per user.
 * The HTTP keep-alive feature permits one single TCP connection (netstat
== 1) to handle many thousands or millions of requests. This will be
different for each of the proxies, and depends on the type of requests
being sent by the clients.



Say, for example, you have 1500 users. 1480 connect at once and both
proxies handle them fast. Those clients disconnect. *Only* 1 of them
connects later, but this one has a virus. The infected user can turn on
the PC, not even open the browser, and the virus opens a TCP connection
and fills it with 10,000,000 small HTTP requests.
  How long is it going to take to process and reject 10 million requests?
netstat shows 1 connection total. For that period all other users will
see degraded service to some degree.



HTTP software (of any type) is measured in requests-per-second as a simple,
consistent measure that avoids all these fuzzy boundary and calculation
issues.




When the number of users increases, the response time of squid becomes so
slow that it sometimes takes 11-15 seconds to load the Google web page.
But I tested that the speed of downloading files through squid is great; the
problem is loading pages once the users get to around 40.


Hmm, 40 (clients) * 2 (FD per client) * 65536 (buffer bytes per
connection) == 5242880 (bytes of buffer) + size of objects requested ==
? How many MB/GB of RAM do you have free?
DO NOT count swap and virtual memory; if Squid swaps, the first thing to
start thrashing I/O speed is the VMem pages used for the memory cache and
its index.






And also in cc proxy, even with 64 users and more, the speed of loading
pages is great. It is as if there were no proxy at all.


the machines specification is the same and are :
ram = 1 GB
port = 1 Gbps
cpu = Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz, 2 cores


The current stable releases of Squid are single-core software. ccproxy
has multi-core support.

Which versions you test is *important* when comparing these things.


os = CentOS Linux 5.8
hard disk space = 30 GB

We use squid just as a proxy and not for caching, and we need
authentication just by user name and password through a mysql database.
Here is the configuration:



cache deny all

snip
...


cache_mem 800 MB


Um, you have caused Squid to allocate itself 800 MB of your 1024 MB on
the box. Just for the memory cache ... when caching there is disabled (deny
all).


Either remove the huge cache_mem allocation (non-caching proxy), or
re-enable caching (caching proxy) to see what Squid can actually do when
sufficient RAM is available. A sketch of the first option is below.
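
For the non-caching option, a minimal sketch (the 64 MB figure is an
arbitrary illustration, not a tuned value; cache_mem is still used for
in-transit objects even with caching denied):

  cache deny all
  cache_mem 64 MB   # modest allocation leaves RAM free on a 1 GB box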


Amos


Re: [squid-users] Connection pinning (NTLM pass through)

2012-05-27 Thread Amos Jeffries

On 26/05/2012 8:31 a.m., Petter Abrahamsson wrote:

Hi,

I'm trying to get NTLM pass through to work with squid 3.1.19. I have
followed the instructions found on the wiki[1] on connection pinning
but I just keep receiving 401 status messages.
Below is the very simple squid.conf that I'm using for this test.

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 8080 connection-auth=on
hierarchy_stoplist cgi-bin ?
coredump_dir /var/cache/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

And below is the corresponding access.log entries with obfuscated ip
addresses and host names.

1337976537.852 63 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976550.714 29 192.168.12.214 TCP_MISS/401 1074 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976551.025 57 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976554.627 57 192.168.12.214 TCP_MISS/401 1074 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976558.006   3128 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976559.462 59 192.168.12.214 TCP_MISS/401 1074 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976559.760 56 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html

I feel like I'm missing something obvious, since the instructions on
the wiki are quite simple.
When I try the same website through a v2.7 squid it lets me log in.
Let me know if any other information is needed.
Any help would be very much appreciated.


Check the HTTP headers at each point before/after Squid for keep-alive.
There is something a little strange going on with HTTP/1.1 connections
to servers and NTLM keep-alive in 3.1.19. If you are able to do some
code digging, that would help as well. A sketch of one header check is
below.
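
One way to see the headers on the client side of Squid, as a sketch (curl's
flags are standard; the proxy address and URL are placeholders matching the
test setup above):

  # -v prints request and response headers; watch the Connection,
  # Proxy-Connection and WWW-Authenticate values across the hops
  curl -v -x http://127.0.0.1:8080 http://www.example.net/directory/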


Amos


Re: [squid-users] how to check squid slowness?

2012-05-27 Thread Amos Jeffries

On 25/05/2012 4:03 p.m., Julie Xu wrote:

Hi

I have some complaints that squid sometimes fetches web pages from the
internet very slowly.

On the server side, I can not see any performance issue.

So, I would like some way to confirm this on the squid application side.

What should I do to confirm whether the traffic squid passes is slow or not?


Your Squid access.log should contain service-time information. See
http://wiki.squid-cache.org/SquidFaq/SquidLogs for more details of the
log format.


You can look for unusually long request times there; it is relative to the
size of the object, of course. Ignore CONNECT entries, as each is an
encrypted tunnel of multiple HTTPS requests. A sketch of such a search
follows below.
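
For example, a one-liner sketch assuming the default native access.log
format (elapsed milliseconds in field 2, status in field 4, method in field
6, URL in field 7) and the usual log path; adjust both to your install:

  # print requests that took longer than 10 seconds, skipping CONNECT tunnels
  awk '$2 > 10000 && $6 != "CONNECT" {print $2, $4, $7}' /var/log/squid/access.log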





And is it possible to monitor this for a few days?


Your proxy; only you can answer that.

Unless you mean you are looking for log processors which can handle
several days of data?


Amos


Re: [squid-users] Squid Restarting

2012-05-27 Thread Amos Jeffries

On 25/05/2012 7:54 p.m., Kamarudin Samsudin wrote:

Hi Amos,

I'm looking into this now. Could you guide me on how to get the two values
in those FDs from the core dump file?
I did something... hope it will give you clues. This is from Solaris.


Sorry, I don't have much time any more. I will do what I can though.




pflags  core_mib01_01-19_04_2012
core 'core-19_04_2012' of 22938:   (squid) -f /squid_dir/etc/squid.conf
 data model = _ILP32  flags = MSACCT|MSFORK
  /1:flags = 0
 sigmask = 0xbefc,0x  cursig = SIGABRT


pstack core_mib01_01-19_04_2012
core 'core-19_04_2012' of 22938:   (squid) -f /squid_dir/etc/squid.conf
-----------------  lwp# 1 / thread# 1  -----------------
  feccc5f8 _lwp_kill (6, 0, fed34f98, fecabf18, , 6) + 8
  fec4194c abort(23dd4e28, 1, 286790, eeb60, fed333d8, 0) + 110
  000c2aa4 xassert  (23b3b0, 23aca0, 1f0, 49, 1, 2) + c4
  000d6d10 _ZN8FwdState12serverClosedEi (1f706cd0, e, 0, fed392a4, 3, 47c) + 48


They should show up where the xassert function parameters are
listed. If not there, then as local variables of
_ZN8FwdState12serverClosedEi (aka FwdState::serverClosed).


GDB shows a good display format for these things if you have it
available. I can't help with pstack, sorry. A sketch of the GDB steps is
below.
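
If GDB is available on the Solaris box, a sketch of the usual steps (the
binary path is a guess from the core header above; adjust to your install):

  gdb /squid_dir/sbin/squid core_mib01_01-19_04_2012
  (gdb) bt full     # backtrace with local variables for every frame
  (gdb) frame 2     # select the xassert frame (number comes from the backtrace)
  (gdb) info args   # show the arguments passed into that frame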


Amos


Re: [squid-users] Squid 3.2.0.14 using 100% cpu and not responding

2012-05-27 Thread Amos Jeffries

On 25/05/2012 1:33 p.m., Will Roberts wrote:

Hi,

I've found my squid 'stuck' a couple of times in the past week. It'll be
spinning on one of my cores and not responding to connections or any
signals, and I have to kill -9 it and restart.


I know the 3.2 branch has moved on a bit, but I'd like to try and 
debug this to make sure it's something that's gotten fixed. What's my 
best approach? Use gdb and try and get a backtrace or something? (not 
normally a C/C++ developer so some tips here would be good)


strace is usually the best for this type of issue. That will show where
it's looping, and you can then look up changes to the component in the
changeset archive to see if anything similar has been fixed. A sketch of
the invocation follows below.
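
For reference, a sketch of attaching strace to the spinning process
(standard strace flags; the PID lookup assumes a single squid worker):

  # -f follows threads, -tt adds timestamps, -o writes the log to a file
  strace -f -tt -o /tmp/squid-strace.log -p $(pidof squid)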


3.2 has not just moved on; the .17 daily is now a fair bit more stable than
.14 was. If you are in a position to update, it's good to try anyway, if
not for this one bug.


Amos


Re: [squid-users] segfault at 0

2012-05-27 Thread Amos Jeffries

On 22/05/2012 3:20 p.m., Pablo Lionel Arturi wrote:

Program received signal SIGSEGV, Segmentation fault.
0x0053eae3 in storeRead ()
#0  0x0053eae3 in storeRead ()
#1  0x0053fbe9 in store_client::fileRead() ()
#2  0x005a1418 in UFSStoreState::readCompleted(char const*,
int, int, RefCount<ReadRequest>) ()


This is being tracked in 
http://bugs.squid-cache.org/show_bug.cgi?id=3549


Thanks for the info that this is being seen in 3.1.19.

Amos


Amos, sorry for this question, but I'm not familiar with bug reporting
and the time it takes to fix this kind of issue.


Is it advisable that I wait for a resolution, or should I find some
other way to fix this problem?


Sorry for the slow reply.

We rely on community help both for finding behaviour problems and for
fixes, sponsorship etc. It is up to you whether to be proactive or wait.
It's a bit of a gamble whether and when anyone will fix it unless someone
steps up.


Amos


Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-27 Thread Amos Jeffries

On 22/05/2012 6:36 p.m., Nishant Sharma wrote:

Hi Amos,

Thanks for your detailed response.

On Tue, May 22, 2012 at 4:56 AM, Amos Jeffries wrote:

external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
acl loggedin external hosted_auth
deny_info https://hostedserver/auth.html loggedin
http_access deny !loggedin
http_access allow all


Please be aware there is no authentication in this setup, despite the login
on your portal page.
What you have is session-based *authorization*.
It is a razor-thin line, but critical to be aware of, since NAT erases and
plays with the %SRC key which you are using to identify clients. 1) NAT
hides unwanted visitors on the POP networks. 2) The XFF workaround to undo
the NAT is header-based, with risks of header forgery. So NAT introduces
multiple edge cases where attacks can leak through and hijack sessions.

I understand the difference between Authentication and Authorization,
but here the prime motive is to enforce user based access rules and
perform AuthN / AuthZ over a secured channel against IMAP.

If we segregate the zones as Trusted and Non-Trusted where the
trusted zone is our HO and a proxy forwards the requests to our
publicly hosted squid with XFF header while Non-Trusted zones are
our spokes and roadwarrior users who are behind a simple NAT. Trusted
zone users are allowed to access the proxy with just authorization
(session / form based) and Non-Trusted zone users need to authenticate
compulsorily (explicit proxy-auth). This way, we could enforce the
policies based on users instead of IPs.

Again, the problem is the secured authentication against IMAPS. Mail
is hosted on google and we can't use DIGEST that we receive from
browsers. BASIC auth is ruled out again due to security reasons. VPN /
Stunnel is not considered due to user credential / machine management.


  While the HTML file displays a login
form over HTTPS and sends request to a CGI script which authenticates
against IMAPS and populates the DB with session information. I
understand that I can not use cookies for authentication as browser
will not include cookie set by our authentication page for request to
other domains.

Correct.

On some more googling, I found something called Surrogate Cookies here:
https://kb.bluecoat.com/index?page=content&id=KB3407
https://kb.bluecoat.com/index?page=content&id=KB2877

 From what I could understand, their primary usage is with a reverse proxy
in front of webservers with limited domains behind them, but it is being
used for surrogate authentication with normal proxy deployments by forcing
proxies to accept cookies for any domain?
Even the commercial proxies advise against using surrogate credentials
wherever possible. The major disadvantage I can see is they can't be
used with wget, lynx, elinks, java applets etc. which expect usual
proxy authentication.


a bit lacking in how to merge the format %SRC %{X-Forwarded-For} into one
UUID token. There is the space between the two tokens, and the XFF header is
likely to contain spaces internally which the script as published can't
handle.
HINT: If anyone has a fix for that *please* let me know. I know it's
possible; I stumbled on a Perl trick ages back that would do it, then lost
the script it was in :(

The following snippet should help if you just want to strip spaces in the
$token string:

my $token = '%SRC %{X-Forwarded-For}';
$token =~ s/ //;  # This removes only the first space
$token =~ s/ //g; # This removes all the spaces in the string

If you could send in sample strings - received and final expected
result, I can help with hacking Perl code.


Thank you. Expected input is strings like:

1 foo bar  -> channel-ID=1, UUID=foo bar
2 hello    -> channel-ID=2, UUID=hello

Only numerics in the channel-ID, followed by one SP to separate them,
then anything, including more SP characters, in the UUID portion.


I think my initial attempt was something nasty like explode on SP, then
strip the channel-ID followed by the space from the original and call the
remainder the UUID. Any improvements on that would be great; one possible
approach is sketched below.
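
One possible Perl sketch for that split, keeping any embedded spaces in the
UUID (a minimal illustration against the sample strings above, not tested
in a live helper):

  my $line = "1 foo bar";      # e.g. one helper input line, already chomped
  if ($line =~ /^(\d+) (.*)$/) {
      my ($channel_id, $uuid) = ($1, $2);   # "1" and "foo bar"
  }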





I have also written an auth helper based on the existing POP3 auth
helper. It authenticates against IMAP and IMAPS depending on the
arguments provided e.g.:

## IMAPS against google but return ERR if user tries to authenticate with @gmail.com
imap_auth imaps://imap.google.com mygooglehostedmail.com

## IMAP auth against my own IMAP server
imap_auth imap://imap.mydomain.com mydomain.com

Where should I submit that as contribution to Squid?


Code submission and auditing procedures are detailed at:
http://wiki.squid-cache.org/MergeProcedure

Essentially, email a patch or the helper sub-folder to squid-dev at
squid-cache.org with a description of what it's for. Under our naming
scheme this would be basic_imap_auth.


I'm also asking helper contributors to be willing to support their
helper for a reasonable period (a year or so) here in squid-users, to
reduce the workload for everyone and get issues fixed faster.



Amos


Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-27 Thread Amos Jeffries

On 26/05/2012 1:34 a.m., Ruiyuan Jiang wrote:

Hi, Clem

In the Apache link that I provided, it is stated that Apache versions below
v2.0.58 support RPC over HTTP. Any version of Apache above that does not
support RPC. Two reasons:

1. It is not a standard.
2. Patents by Microsoft if Apache uses it.


Patents?

RPC over HTTP is required to fit within HTTP standard operational
behaviour. If it were breaking protocol requirements, that would explain
why Squid, which does obey HTTP standards, was breaking as an
RPC-over-HTTP relay.


FYI: The body content of the HTTP messages is the RPC protocol under
patent, and possibly the method names themselves. Neither Squid nor Apache,
when proxying, has any reason to touch those details, and thus they are not
affected by any such patents (unless they are made to do so).


Amos



Ruiyuan Jiang


-Original Message-
From: Clem [mailto:clemf...@free.fr]
Sent: Friday, May 25, 2012 2:19 AM
To: Ruiyuan Jiang
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

Hi Ruiyuan,

Client -> Apache (hop1) -> IIS 7 -> exchange 2007. The setup works; I just
could not have the latest Apache. Otherwise I will continue to use the
Apache reverse proxy. The latest Apache does not support MS RPC over http,
which is posted on the internet.

What do you mean when you say that the latest Apache does not support MS
RPC OVER HTTP, whereas your version supports it? That does not make sense?

If I can do Client -> Apache reverse proxy -> IIS RPC -> exchange 2007,
I'll install it as soon as possible!

Thx

Clem


On 24/05/2012 21:52, Ruiyuan Jiang wrote:

By the way, NTLM works with a Windows 7 client through Apache here.


Hi, Clem

I am reading your post

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth does not support:

It's facing the double hop issue, ntlm credentials can be sent only on one hop, and
is lost with 2 hops like: client -> squid (hop1) -> IIS6 rpc proxy (hop2) ->
exchange 2007

That is not true. Here we have the setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; I just could not have the latest Apache. Otherwise I will
continue to use the Apache reverse proxy. The latest Apache does not support
MS RPC over http, which is posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.

Ruiyuan










Re: [squid-users] Squid 3.2.0.14 using 100% cpu and not responding

2012-05-27 Thread Will Roberts



On 05/27/2012 07:35 AM, Amos Jeffries wrote:

strace is usually the best for this type of issue. That will show where
it's looping, and you can then look up changes to the component in the
changeset archive to see if anything similar has been fixed.


Amos, thanks I'll try that next time it happens.


3.2 has not just moved on; the .17 daily is now a fair bit more stable than
.14 was. If you are in a position to update, it's good to try anyway, if
not for this one bug.


I ended up upgrading to the 3.2.0.17 release (not the daily), and it
happened again, so I'll see what I can find next time.


--Will


[squid-users] 3.1.x compile errors using ssl_crtd

2012-05-27 Thread Linos
Hi,
I am using Ubuntu Server 12.04 with libssl-dev 1.0.1 and I can't compile
squid. I have downloaded the last daily auto-generated release (which should
have this bug fixed) but I still have the same problem.

./configure --enable-ssl --enable-ssl-crtd
make

(after a while)
/bin/bash ../../libtool --tag=CXX   --mode=link g++ -Wall -Wpointer-arith
-Wwrite-strings -Wcomments -Werror -fhuge-objects -D_REENTRANT -g -O2  -g -o
ssl_crtd ssl_crtd.o certificate_db.o -lssl -lcrypto -lsslutil
../../compat/libcompat.la ../../lib/libmiscutil.a
libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror
-fhuge-objects -D_REENTRANT -g -O2 -g -o ssl_crtd ssl_crtd.o certificate_db.o
-lssl -lcrypto -lsslutil ../../compat/.libs/libcompat.a ../../lib/libmiscutil.a
g++: warning: switch '-fhuge-objects' is no longer supported
/usr/bin/ld: cannot find -lsslutil
collect2: ld returned 1 exit status

I have tried adding the direct path .libs/libsslutil.a to the linker line in
src/ssl/Makefile, but I get the original bug error after this change:

libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror
-fhuge-objects -D_REENTRANT -g -O2 -g -o ssl_crtd ssl_crtd.o certificate_db.o
-lssl -lcrypto .libs/libsslutil.a ../../compat/.libs/libcompat.a
../../lib/libmiscutil.a
g++: warning: switch '-fhuge-objects' is no longer supported
../../lib/libmiscutil.a(MemPool.o): In function `MemPools::flushMeters()':
/tmp/squid-3.1.19-20120526-r10446/lib/MemPool.cc:224: undefined reference to
`squid_curtime'
/tmp/squid-3.1.19-20120526-r10446/lib/MemPool.cc:225: undefined reference to
`squid_curtime'
/tmp/squid-3.1.19-20120526-r10446/lib/MemPool.cc:223: undefined reference to
`squid_curtime'
../../lib/libmiscutil.a(MemPoolChunked.o): In function
`MemPoolChunked::deallocate(void*, bool)':
/tmp/squid-3.1.19-20120526-r10446/lib/MemPoolChunked.cc:357: undefined reference
to `squid_curtime'
../../lib/libmiscutil.a(MemPoolChunked.o): In function
`MemPoolChunked::convertFreeCacheToChunkFreeCache()':
/tmp/squid-3.1.19-20120526-r10446/lib/MemPoolChunked.cc:380: undefined reference
to `squid_curtime'
../../lib/libmiscutil.a(MemPoolChunked.o):/tmp/squid-3.1.19-20120526-r10446/lib/MemPoolChunked.cc:406:
more undefined references to `squid_curtime' follow
collect2: ld returned 1 exit status

I don't know how to fix this and I am in a hurry with this problem :(

Thanks.

Regards,
Miguel Angel.




Re: [squid-users] 3.1.x compile errors using ssl_crtd

2012-05-27 Thread Amos Jeffries

On 28.05.2012 07:31, Linos wrote:

Hi,
I am using Ubuntu Server 12.04 with libssl-dev 1.0.1 and I can't compile
squid. I have downloaded the last daily auto-generated release (which
should have this bug fixed) but I still have the same problem.

./configure --enable-ssl --enable-ssl-crtd
make

(after a while)
/bin/bash ../../libtool --tag=CXX --mode=link g++ -Wall -Wpointer-arith
-Wwrite-strings -Wcomments -Werror -fhuge-objects -D_REENTRANT -g -O2 -g -o
ssl_crtd ssl_crtd.o certificate_db.o -lssl -lcrypto -lsslutil
../../compat/libcompat.la ../../lib/libmiscutil.a
libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror
-fhuge-objects -D_REENTRANT -g -O2 -g -o ssl_crtd ssl_crtd.o certificate_db.o
-lssl -lcrypto -lsslutil ../../compat/.libs/libcompat.a
../../lib/libmiscutil.a
g++: warning: switch '-fhuge-objects' is no longer supported
/usr/bin/ld: cannot find -lsslutil
collect2: ld returned 1 exit status

I have tried adding the direct path .libs/libsslutil.a to the linker line in
src/ssl/Makefile, but I get the original bug error after this change:



The dependencies seem a bit out of sync with the library linkage order.

I think it should actually be: ssl/libsslutil.la $(SSLLIB)

Amos


[squid-users] Testers wanted

2012-05-27 Thread Amos Jeffries
I am looking for people to assist with testing some major changes to
HTCP, ICP and SNMP which, if working, will complete the IPv6 split-stack
support in Squid.


I would like at least one tester for MacOS and/or OpenBSD. But I also
need others to verify there is nothing broken on other systems, due to
the nature of the changes.


If you are interested and able to help please speak up. Thank you.

Amos


Re: [squid-users] 3.1.x compile errors using ssl_crtd

2012-05-27 Thread Linos
On 28/05/12 01:13, Amos Jeffries wrote:
 
 
 The dependencies seem a bit out of sync with the library linkage order.
 
 I think it should actually be ssl/libsslutil.la $(SSLLIB)
 
 Amos

Thanks Amos,
I am not sure if I have understood you correctly, but I have tried:

ssl_crtd_LDADD = ssl/libsslutil.la $(SSLLIB) -lsslutil ../../compat/libcompat.la
../../lib/libmiscutil.a

that fails with:
libtool: link: cannot find the library `ssl/libsslutil.la' or unhandled argument
`ssl/libsslutil.la'

and with:

ssl_crtd_LDADD = .libs/libsslutil.la $(SSLLIB) -lsslutil
../../compat/libcompat.la ../../lib/libmiscutil.a

that fails with the same error about undefined reference to `squid_curtime' as
before.

Regards,
Miguel Angel.