Re: [squid-users] Problems with hotmail and facebook - rev

2010-11-25 Thread Landy Landy
After a while spent looking for solutions to this problem, I still haven't resolved it. I added an extra DSL line to our network and things are going the same way. I also tried another mailing list, posted on WISPA, and got this response:

"Could be your squid cache. "

Someone replied to that with:

"Agreed, everyone gets different photo and messages depending who their
associated to. it would probably drive the squid nuts, especially when FB is
busy and slow and squid is trying to compare.  "

I don't know if that is true, but I would like to confirm with this list before 
acknowledging it.

Thanks for your time and continued help.
--- On Mon, 11/15/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook - rev
> To: "Landy Landy" 
> Cc: squid-users@squid-cache.org
> Date: Monday, November 15, 2010, 5:00 PM
> On Mon, 15 Nov 2010 06:25:10 -0800 (PST), Landy Landy wrote:
> > --- On Mon, 11/15/10, Landy Landy wrote:
> 
> > 
> > Just discovered another site I can't log on to: my bank's website.
> > Looks like there's a problem with HTTPS and squid that I can't pin down.
> > 
> > Sorry to insist on this issue but, please understand
> my frustration.
> > 
> > Thanks.
> 
> I understand. It is one of the built-in problems with NAT interception.
> The IPs change. Websites that depend on IP will break.
> 
> I think you need to give TPROXY a try. It does everything that NAT does
> without this IP change.
> 
> Amos
> 
> 
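
For reference, the difference Amos describes shows up in squid.conf roughly like this. This is only a sketch: the port numbers are assumptions, the `intercept`/`tproxy` keywords are squid 3.1 syntax (2.x uses `transparent` instead of `intercept`), and TPROXY additionally requires a TPROXY-capable kernel plus matching iptables/routing rules that are not shown here.

```
# NAT interception: the TCP connection is rewritten, so the origin
# server sees the proxy's IP instead of the client's.
http_port 3128 intercept

# TPROXY interception: traffic is still intercepted transparently,
# but the client's original IP is preserved toward the origin server.
http_port 3129 tproxy
```

With `tproxy`, sites that tie a session to the client IP (banks, Hotmail, Facebook) no longer see every user arriving from the one proxy address.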


  


[squid-users] Squid 2.6 (centos 5.5) ntlm active directory

2010-11-25 Thread JC Putter
I am running squid 2.6.STABLE21 on CentOS 5.5; the box authenticates
against the Active Directory domain using winbind.

wbinfo -t tells me that the RPC call was successful and everything is
working well; my NTLM SSO works with Chrome, Firefox, IE6, IE7 and IE8 on
Windows XP and Windows Vista.

My only problem is Windows 7 with IE8 (Firefox and Chrome work 100%).

When a user accesses normal HTTP pages from Windows 7 with IE8 everything
works, but as soon as they try to access HTTPS sites the browser refuses to
open those pages and just hangs. I can't see anything unusual in my logs.

Just for testing, when I disable proxy authentication (NTLM), the Windows 7
machine loads HTTPS pages, but refuses when it's enabled.

I also tried changing the NT LAN Manager authentication setting.

Has anyone experienced this issue? 





[squid-users] squid-3.1 client POST buffering

2010-11-25 Thread Graham Keeling
Hello,

I have upgraded to squid-3.1 recently, and found a change of behaviour.
I have been using dansguardian in front of squid.

It appears to be because squid now buffers uploaded POST data slightly
differently.
In versions < 3.1, it would take some data, send it through to the website,
and then ask for some more.
In 3.1, it appears to take as much from the client as it can, without
waiting for what it has already got to be uploaded to the website.

This means that dansguardian quickly uploads all the data into squid, and
then waits for a reply, which is a long time in coming because squid still
has to upload everything to the website.
And then dansguardian times out on squid after two minutes.


I noticed the following squid configuration option. Perhaps what I need is
a similar thing for buffering data sent from the client.

#  TAG: read_ahead_gap  buffer-size
#   The amount of data the cache will buffer ahead of what has been
#   sent to the client when retrieving an object from another server.
#Default:
# read_ahead_gap 16 KB

Comments welcome!

Graham.



Re: [squid-users] Re: squid receives (null) instead of http

2010-11-25 Thread Knop Uwe
Hi Amos,

I have found the problem addressed here in my log file too.
You indicated a solution; can you say more about it?
Thanks
Uwe


>Re: [squid-users] Re: squid receives (null) instead of http
>
>Sun, 17 Jan 2010 17:29:23 -0800, Amos Jeffries wrote:
>
>On Mon, 18 Jan 2010 00:22:09 +0200, Arthur Titeica wrote:
>> On 09.12.2009 03:43, Amos Jeffries wrote:
>>> On Tue, 08 Dec 2009 20:40:12 +0200, Arthur Titeica wrote:

 On 06.12.2009 16:20, Arthur Titeica wrote:
> Hi,
>
> Recently I see lots of the following in my squid logs
>
> 1259130624.131  0 89.42.191.44 NONE/400 3172 GET (null)://example.com/Contab/Rapoarte/Rapoarte_obisnuite.asp?strCategorie=ContPart - NONE/- text/html
>
> 1259141404.195  0 89.122.203.185 NONE/400 3200 POST (null)://example.com/Contab/NoteContabile/NoteContabUpd.asp?op=MODPOZ&NotaContabId=185 - NONE/- text/html
>
>
> Squid is: Squid Cache: Version 3.1.0.14-20091120 on Ubuntu Server 9.04
> x86, and the clients are mostly Windows with IE8, but also older IE,
> Opera and Firefox.
>
> Squid is acting as a reverse proxy in this case, and it has worked like
> that for at least a year now. Only recently I started seeing these kinds
> of errors. The client receives the usual squid error page, and a refresh
> generally solves it.
>

 And here below is the full error text:

 ERROR
 The requested URL could not be retrieved

 -


 The following error was encountered while trying to retrieve the URL:
 (null)://example.com/test/

 Invalid URL

 Some aspect of the requested URL is incorrect.

 Some possible problems are:

 .Missing or incorrect access protocol (should be "http://" or similar)

 .Missing hostname

 .Illegal double-escape in the URL-Path

 .Illegal character in hostname; underscores are not allowed.

 Your cache administrator is em...@example.com.

>>>
>>> Usually spotted when Squid is sent a request for:
>>>
>>>  GET example.com/somepage HTTP/1.0
>>>
>>> Instead of the correct:
>>>  GET http://example.com/somepage HTTP/1.0
>>>
>>> May also appear for other custom or unknown protocol namespace URIs
>such
>>> as:
>>>   GET randomjoes://example.com/somepage HTTP/1.0
>>>
>>> Amos
>>>
>>>
>>
>> [Once again for the list. Sorry Amos.]
>>
>> For further reference this was caused by resolvconf on debian which
>> triggered a 'squid -k reconfigure' every hour or so (hourly reconnection
>> of a VPN).
>>
>> During the time of the reconfigure (10 secs or so) squid displayed the
>> above message which, I have to say, is way too misleading.
>
>Aha. I have an idea, may take a while to check and fix though.
>
>Amos
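
For what it's worth, the "(null)" in the log is just what gets printed when the request-URI carries no scheme, as in Amos's quoted explanation. A toy illustration (this is not Squid's actual parser; the function name is made up):

```python
from urllib.parse import urlsplit

def logged_scheme(request_uri):
    """Return what a log line would show as the URL scheme.

    An absolute-form request-URI such as "http://example.com/x" has a
    scheme; the bad requests quoted above ("example.com/x") do not,
    which is why the access.log shows "(null)"."""
    scheme = urlsplit(request_uri).scheme
    return scheme if scheme else "(null)"
```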



Re: [squid-users] no cache 404 for a special domain

2010-11-25 Thread Amos Jeffries

On 25/11/10 19:52, Compu Serve wrote:

Hello list,

I'm running squid for reverse proxy.

I have set "negative_ttl 0 seconds" so that 404-type pages are not cached.
But now I want 404 responses from one particular domain to stay uncached,
while 404s from other domains can be cached.

For example:

a 404 from www.abc.com should not be cached,
but the 404 page from www.def.com can be cached.

How do I set this up? Thanks in advance.


Configure the web server at www.def.com to send Expires, Last-Modified 
and/or Cache-Control headers as appropriate to match your caching needs.
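
For instance, if www.def.com were served by nginx, something along these lines would mark its 404 page as cacheable. This is only a sketch: the paths and max-age are assumptions, and the `always` flag on `add_header` (needed so the header is attached to a 404 response) requires nginx 1.7.5 or later.

```
server {
    server_name www.def.com;
    error_page 404 /404.html;

    location = /404.html {
        internal;
        # Give the 404 page an explicit freshness lifetime so the
        # reverse proxy may cache it.
        add_header Cache-Control "public, max-age=600" always;
    }
}
```

Domains that send no such headers on their 404s keep following negative_ttl as before.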


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] squid cache not updating?

2010-11-25 Thread J Webster

I have my cache mounted on a drive at /var/spool/squid.
The other day I tried to mount a new folder on the same drive, which is 
apparently not the best thing to do.
Since then, I am not sure whether my squid cache is updating or not. It 
seems to be stuck at 35 GB used and 16% capacity.
Is there any way to check whether the cache is updating? 



Re: [squid-users] STDERR is closed? So no std::cerr?

2010-11-25 Thread declanw
On Thu, Nov 25, 2010 at 12:27:50AM +, Amos Jeffries wrote:
> On Wed, 24 Nov 2010 13:26:03 +, Declan White 
> wrote:
> > I've got some 'uncaught exception' coredumping squids which are leaving no
> > clues about their deaths.
> > They are *meant* to be sending an SOS via:
> > 
> > main.cc:1162:std::cerr << "dying from an unhandled exception: " <<
> > e.what() << std::endl;
> > 
> > but std::cerr isn't the cache_log is it. It's STDERR, aka FD 2.
> > 
> > COMMAND   PID   USER   FD   TYPEDEVICE SIZE/OFFNODE NAME
> > squid   22444  squid2u  VCHR  13,2  0t03398
> > /devices/pseudo/m...@0:null
> > 
> > .. which according to lsof has been /dev/nulled, which is odd, as I had it
> > redirected to a file when it was started.
> > 
> > Should the fallback exception handler not be using another reporting
> > channel?
> > 
> > I also notice that the root parent squid which waits for the child
> > eventually disappears, after restarting crashes, making the next crash
> > fatal. Is that normal? Does it react badly if it catches a HUP sent by a
> > 'pkill -HUP squid' ?
> > 
> > DW
> 
> hmm, how many and what particular processes are running? which particular
> sub-process(es) is this happening to? how are you starting squid? etc. etc.
> 
> For background, by default only the master process uses stderr as itself.
> All sub-processes have their stderr redirected to cache.log.

It looks like it's decided by whether or not you use the -N non-daemonise
startup flag. The auth sub processes always have STDERR correctly redirected
to cache_log, but without -N, the worker squid in the squid/root-squid pair
leaves no STDERR open for itself.

I'll get my farm using 'squid -N &' when they next hit a quiet period (and
I'm awake). This will also fix my HUP problem, the non-worker root-squid
does indeed drop dead on HUP.

squid 3.1.9 on Solaris 9 64bit btw.

DW

> Amos



[squid-users] Monitoring 407 authentications

2010-11-25 Thread Nick Cairncross
Hi List,

I have nailed a few niggles relating to extremely high CPU usage by my 
authenticators, and I can now clearly see the requests coming in on the 
access.log. I use a combination of Kerberos and NTLM helpers for my 700 users, 
the majority Kerberos (70/30). I started tailing the log yesterday and noticed 
some clients repeatedly attempting to authenticate but failing due to having no 
credentials: Mac/PC system accounts, or local rather than domain accounts.
The frequency of these requests is very high and is therefore hogging some 
helpers. I can increase the helper counts, but there is a ratio (CPU/auth) that 
I need to bear in mind. The clients are mainly trying to get out onto the 
internet to update various software packages but don't have any credentials to 
do so, hence the repeated, frequent 407s. Short of visiting these clients to see 
what's going on (a possibility), is there a way to monitor these 407 auth 
requests and flag high-request users that are constantly failing? Some clients 
occur VERY often and must be hogging helpers, maybe even multiple ones.

I appreciate this is probably more of a *nix question, but any help or pointers 
would be great.
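
On the monitoring question: failing clients show up in access.log as lines with a /407 status, so a small log filter can flag the worst offenders. A sketch (assuming the native Squid log format; field positions will differ if logformat has been customised):

```python
from collections import Counter

def count_407s(lines):
    """Tally 407 responses per client IP from native-format access.log
    lines (timestamp elapsed client action/code size method url ...)."""
    hits = Counter()
    for line in lines:
        fields = line.split()
        # fields[2] is the client IP, fields[3] is e.g. TCP_DENIED/407
        if len(fields) > 3 and fields[3].endswith("/407"):
            hits[fields[2]] += 1
    return hits
```

Run it over a day's log and print hits.most_common(10) to see which machines to visit first; the same filter is a one-liner in awk ($4 ~ /\/407$/).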

Nick

The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be unlawful.  
Disclosure to any party other than the addressee, whether inadvertent or 
otherwise, is not intended to waive privilege or confidentiality.  Internet 
communications are not secure and therefore Conde Nast does not accept legal 
responsibility for the contents of this message.  Any views or opinions 
expressed are those of the author.

The Conde Nast Publications Ltd (No. 226900), Vogue House, Hanover Square, 
London W1S 1JU