RE: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Amos Jeffries
On Thu, 3 Feb 2011 10:03:03 +0530, Saurabh Agarwal
 wrote:
> Hi Luis
> 
> I have recently successfully cached youtube videos using Squid-2.7.Stable7
> and posted the solution on the squid mailing list as well. I tested it
> yesterday and youtube videos were still being cached.

AFAICT your 23rd Nov 2010 posted configuration differs from the wiki
example by:
 * sending every single URL passing through your proxy to the
storeurl program (not just the relevant YT URLs)
 * ignoring updates and changes to the HTML pages (forcing people to think
profiles are not being updated, etc.)
 * ignoring users' force-refresh (so that if somebody does notice a page
problem caused by the above, they can't manually force the cache to update
the page)

None of these have any obvious or explained reason relating to the .FLV
video, which is the only relevant piece to be de-duplicated.

Your re-writer adds two interesting URLs to the altered set.

 * If that generate_204 is what I think then you are preventing users from
fast-forwarding videos, forcing them to re-download the entire thing from
cache if they try.

 * the "docid=" pages. Can you explain what those are and how their URLs
result in a .FLV object response?



I'm ignoring the 10th Nov and 1st Nov and April and July and August
configurations because YT change their URLs occasionally. That is the point
of using the wiki to publish the latest details instead of a long-term
mailing list.

 If you find the wiki setup is not working please get yourself an editing
account and add a message to the *Discussion* page outlining the YT
operations which are being missed and what changes will catch them. When
somebody can independently verify their success we add the changes to the
main config.


> For Squid 3.1 I have
> not tried yet.

The 3.x series does not yet have the storeurl feature these hacks all rely upon.
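For reference, the 2.7-only wiring those hacks depend on looks roughly like
this (a sketch only; the ACL pattern and helper path are placeholder
assumptions, and the current patterns live on the wiki):

```
# Squid 2.7 only -- none of these directives exist in 3.x
acl store_rewrite_list urlpath_regex \/(get_video\?|videoplayback\?)
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program /usr/local/bin/storeurl-helper  # placeholder path
storeurl_rewrite_children 5
```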

Amos



Re: [squid-users] Re: not caching ecap munged body

2011-02-02 Thread Amos Jeffries
On Wed, 2 Feb 2011 20:09:05 -0800, Jonathan Wolfe wrote:
> Oops, here's the diff from Alex which I forgot to include below:
> 
> === modified file 'src/adaptation/ecap/MessageRep.cc'
> --- src/adaptation/ecap/MessageRep.cc 2010-05-26 03:06:02 +
> +++ src/adaptation/ecap/MessageRep.cc 2011-01-26 21:43:36 +
> @@ -44,24 +44,30 @@
>  void
>  Adaptation::Ecap::HeaderRep::add(const Name &name, const Value &value)
>  {
>  const http_hdr_type squidId = TranslateHeaderId(name); // HDR_OTHER OK
>  HttpHeaderEntry *e = new HttpHeaderEntry(squidId,
>  name.image().c_str(),
>  value.toString().c_str());
>  theHeader.addEntry(e);
> +
> + // XXX: this is too often and not enough, on many levels
> + theMessage.content_length = theHeader.getInt64(HDR_CONTENT_LENGTH);
>  }
> 
>  void
>  Adaptation::Ecap::HeaderRep::removeAny(const Name &name)
>  {
>  const http_hdr_type squidId = TranslateHeaderId(name);
>  if (squidId == HDR_OTHER)
>  theHeader.delByName(name.image().c_str());
>  else
>  theHeader.delById(squidId);
> +
> + // XXX: this is too often and not enough, on many levels
> + theMessage.content_length = theHeader.getInt64(HDR_CONTENT_LENGTH);
>  }
> 
>  libecap::Area
>  Adaptation::Ecap::HeaderRep::image() const
>  {
>  MemBuf mb;
>  mb.init();
> 
> 
> This only works, I think, because when removing Content-Length we end up
> with a -1 in the length for the whole message, instead of the old (too
> large) content length.  (There's a
> Adaptation::Ecap::XactionRep::setAdaptedBodySize function that's commented
> out in the eCAP support, presumably because it's common to not know the
> length of the final adapted body, like in the gzip case.)


I think that patch should be:

    if (squidId == HDR_CONTENT_LENGTH)
        theMessage.content_length = theHeader.getInt64(HDR_CONTENT_LENGTH);

in both places, to avoid that "too often" case.

> 
> There are a few options - assume the content-length for adapted messages
> is just the total object_len - hdr_sz; leave it always -1 unless the eCAP
> module tells you otherwise by setting the Content-Length header; or,
> finally, store it by accumulating the size of returned chunks along the
> way while receiving the adapted body and set it accordingly when there's
> no more adapted body to receive.

We almost always end up with problems caused by large bodies extending
past any possible buffer window, so sending these headers after object
completion is not an available choice.

AFAIK if the adaptor sends its reply with chunked encoding and no
Content-Length header at all it would work out. Squid should handle the
dechunking and connection termination requirements for old clients.

Amos



RE: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Saurabh Agarwal
Hi Luis

I have recently successfully cached youtube videos using Squid-2.7.Stable7 and 
posted the solution on the squid mailing list as well. I tested it yesterday and 
youtube videos were still being cached. For Squid 3.1 I have not tried yet.

Please do a Google search for the squid mails with the subject "Caching youtube 
videos problem/ always getting TCP_MISS"

Regards,
Saurabh

-Original Message-
From: Luis Daniel Lucio Quiroz [mailto:luis.daniel.lu...@gmail.com] 
Sent: Thursday, February 03, 2011 12:09 AM
To: Hasanen AL-Bana
Cc: Clemente Aguiar; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1 youtube caching

On Wednesday, 2 February 2011 at 12:29:58, Hasanen AL-Bana wrote:
> No need for ICAP , storeurl script should be enough.
> The problem is that youtube internal links are changing from time to
> time, so we need to update our scripts from time to time.
> 
> On Wed, Feb 2, 2011 at 8:23 PM, Clemente Aguiar
> 
>  wrote:
> > Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
> > > On Wednesday, 2 February 2011 at 09:49:23, Clemente Aguiar wrote:
> > > > I am running squid 3.1.9, and I would like to know if this version is
> > > > able to cache youtube content?
> > > > 
> > > > I did check the wiki
> > > > (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
> > > > and I must say that it is not clear what bits apply to version 3.1.
> > > > 
> > > > Can somebody give me some pointers to what exactly I should
> > > > configure.
> > > > 
> > > > Thanks,
> > > > Clemente
> > > 
> > > Clemente,
> > > 
> > > it is not 100% certain because 3.1 lacks 2.7's capabilities;
> > > the only ways for now are:
> > > use 2.7, or
> > > use an ICAP server capable of managing those types of urls
> > > 
> > > Regards,
> > > 
> > > LD
> > 
> > Ok, thanks.
> > 
> > Maybe somebody should make that (perfectly) clear in the wiki ... and
> > maybe add an example of how to implement an ICAP server.
> > 
> > Well, now for the next question. Which ICAP server and how to implement?
> > Can you help me?
> > 
> > Regards,
> > Clemente

store_url is for 2.7, not for 3.1; he must use 3.1's ICAP if he wants to get 
similar results.

I can recommend i-cap for Linux, but it lacks what you want; however, it 
has some templates so you can code the things you want.

LD


[squid-users] Re: not caching ecap munged body

2011-02-02 Thread Jonathan Wolfe
Oops, here's the diff from Alex which I forgot to include below:

=== modified file 'src/adaptation/ecap/MessageRep.cc'
--- src/adaptation/ecap/MessageRep.cc 2010-05-26 03:06:02 +
+++ src/adaptation/ecap/MessageRep.cc 2011-01-26 21:43:36 +
@@ -44,24 +44,30 @@
 void
 Adaptation::Ecap::HeaderRep::add(const Name &name, const Value &value)
 {
 const http_hdr_type squidId = TranslateHeaderId(name); // HDR_OTHER OK
 HttpHeaderEntry *e = new HttpHeaderEntry(squidId, name.image().c_str(),
 value.toString().c_str());
 theHeader.addEntry(e);
+
+ // XXX: this is too often and not enough, on many levels
+ theMessage.content_length = theHeader.getInt64(HDR_CONTENT_LENGTH);
 }

 void
 Adaptation::Ecap::HeaderRep::removeAny(const Name &name)
 {
 const http_hdr_type squidId = TranslateHeaderId(name);
 if (squidId == HDR_OTHER)
 theHeader.delByName(name.image().c_str());
 else
 theHeader.delById(squidId);
+
+ // XXX: this is too often and not enough, on many levels
+ theMessage.content_length = theHeader.getInt64(HDR_CONTENT_LENGTH);
 }

 libecap::Area
 Adaptation::Ecap::HeaderRep::image() const
 {
 MemBuf mb;
 mb.init();


This only works, I think, because when removing Content-Length we end up with a 
-1 in the length for the whole message, instead of the old (too large) content 
length.  (There's a Adaptation::Ecap::XactionRep::setAdaptedBodySize function 
that's commented out in the eCAP support, presumably because it's common to not 
know the length of the final adapted body, like in the gzip case.)

There are a few options - assume the content-length for adapted messages is 
just the total object_len - hdr_sz, leave it always -1 unless the eCAP module 
tells you otherwise by setting the Content-Length header maybe, or finally, 
store it by accumulating the size of returned chunks along the way receiving 
the adapted body and set it accordingly when there's no more adapted body to 
receive.

Any suggestions for the best route to pursue?

Regards,
Jonathan Wolfe

On Jan 26, 2011, at 2:35 PM, Jonathan Wolfe wrote:

> Okay, I narrowed this down a bit more with some help from Alex Rousskov:
> 
> When it works (doing a string replace from "asdf" to "fdsa" for example, so 
> same total content length):
> 
> 2011/01/26 16:07:21.312| storeEntryValidLength: Checking 
> '1078B4E8EC1D17CFEBCD533EE19F7FD6'
> 2011/01/26 16:07:21.312| storeEntryValidLength: object_len = 20366
> 2011/01/26 16:07:21.312| storeEntryValidLength: hdr_sz = 360
> 2011/01/26 16:07:21.312| storeEntryValidLength: content_length = 20006
> 2011/01/26 16:07:21.317| StoreEntry::setMemStatus: inserted mem node 
> http://www.example.com/squid-test
> 
> When it doesn't work ("asdf" to just "a"):
> 
> 2011/01/26 16:05:59.878| storeEntryValidLength: Checking 
> '1078B4E8EC1D17CFEBCD533EE19F7FD6'
> 2011/01/26 16:05:59.878| storeEntryValidLength: object_len = 14843
> 2011/01/26 16:05:59.878| storeEntryValidLength: hdr_sz = 360
> 2011/01/26 16:05:59.878| storeEntryValidLength: content_length = 20006
> 2011/01/26 16:05:59.878| storeEntryValidLength: 5523 bytes too small; 
> '1078B4E8EC1D17CFEBCD533EE19F7FD6'
> 2011/01/26 16:05:59.879| StoreEntry::checkCachable: NO: wrong content-length
> 
> The headers returned in both cases don't actually include a Content-Length 
> header, which is removed by the module using adapted->header().removeAny.
> 
> It looks like squid is restoring the content length in the second case, and 
> declaring it too small.
> 
> See https://answers.launchpad.net/ecap/+question/142965 for my discussion 
> with Alex on this.  The diff he provided, which is repeated here, seems to 
> have the effect of setting the message content length to -1 when removing the 
> content length header from within the ecap module, and that results in this:
> 
> 2011/01/26 17:21:46.539| storeEntryValidLength: Checking 
> '1078B4E8EC1D17CFEBCD533EE19F7FD6'
> 2011/01/26 17:21:46.539| storeEntryValidLength: object_len = 16190
> 2011/01/26 17:21:46.539| storeEntryValidLength: hdr_sz = 360
> 2011/01/26 17:21:46.539| storeEntryValidLength: content_length = -1
> 2011/01/26 17:21:46.539| storeEntryValidLength: Unspecified content length: 
> 1078B4E8EC1D17CFEBCD533EE19F7FD6
> 2011/01/26 17:21:46.544| StoreEntry::setMemStatus: inserted mem node 
> http://www.example.com/squid-test
> 
> Not the best behavior, but it does cache as expected now.
> 
> Likely there's a better place to reset the content length, right?  Perhaps in 
> src/adaptation/ecap/XactionRep.cc, in moveAbContent() when we've received the 
> full adapted body?
> 
> Regards,
> -Jon
> 
> On Jan 23, 2011, at 8:46 PM, Amos Jeffries wrote:
> 
>> On 24/01/11 13:43, Henrik Nordström wrote:
>>> lör 2011-01-22 klockan 23:04 +1300 skrev Amos Jeffries:
>>> 
 Squid caches only one of N variants so the expected behviour is that
 each new variant is a MISS but becomes a HIT on repeated duplicate
 requests until a new variant pushes it out 

Re: [squid-users] Re: Configuring Squid to Proxy HTTPS

2011-02-02 Thread Amos Jeffries
On Wed, 2 Feb 2011 11:15:31 -0500, "Martin \(Jake\) Jacobson" wrote:
> Hi,
> 
> I need to configure a proxy box that will proxy a site that requires a
> PKI cert.  The site requires a chained cert and fails if the cert
> presented is unchained.  We have a bot that is only presenting its
> cert and not the complete chain so it fails the connection.

Sounds like you need to figure out why a non-chained cert was loaded into
the bot in the first place.

> 
> I am wondering if we could have squid make the request for the
> resource and instead of using the bot's cert, the squid client would
> use the chained cert that I have loaded with squid?
> 
> Jake Jacobson

To use Squid certs you will need the bot to communicate over unsecured
HTTP with Squid.
Then you just configure a cache_peer line in Squid presenting the relevant
cert to the website.
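A sketch of such a setup (hostnames and file paths here are placeholder
assumptions, not details from this thread):

```
# Bot speaks plain HTTP to Squid; Squid opens SSL/TLS to the origin
# server and presents the full chained client certificate.
cache_peer secure.example.com parent 443 0 no-query originserver ssl \
    sslcert=/etc/squid/client-chain.pem \
    sslkey=/etc/squid/client-key.pem
```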

Amos


Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Luis Daniel Lucio Quiroz
On Wednesday, 2 February 2011 at 12:29:58, Hasanen AL-Bana wrote:
> No need for ICAP , storeurl script should be enough.
> The problem is that youtube internal links are changing from time to
> time, so we need to update our scripts from time to time.
> 
> On Wed, Feb 2, 2011 at 8:23 PM, Clemente Aguiar
> 
>  wrote:
> > Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
> > > On Wednesday, 2 February 2011 at 09:49:23, Clemente Aguiar wrote:
> > > > I am running squid 3.1.9, and I would like to know if this version is
> > > > able to cache youtube content?
> > > > 
> > > > I did check the wiki
> > > > (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
> > > > and I must say that it is not clear what bits apply to version 3.1.
> > > > 
> > > > Can somebody give me some pointers to what exactly I should
> > > > configure.
> > > > 
> > > > Thanks,
> > > > Clemente
> > > 
> > > Clemente,
> > > 
> > > it is not 100% certain because 3.1 lacks 2.7's capabilities;
> > > the only ways for now are:
> > > use 2.7, or
> > > use an ICAP server capable of managing those types of urls
> > > 
> > > Regards,
> > > 
> > > LD
> > 
> > Ok, thanks.
> > 
> > Maybe somebody should make that (perfectly) clear in the wiki ... and
> > maybe add an example of how to implement an ICAP server.
> > 
> > Well, now for the next question. Which ICAP server and how to implement?
> > Can you help me?
> > 
> > Regards,
> > Clemente

store_url is for 2.7, not for 3.1; he must use 3.1's ICAP if he wants to get 
similar results.

I can recommend i-cap for Linux, but it lacks what you want; however, it 
has some templates so you can code the things you want.

LD


Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Hasanen AL-Bana
No need for ICAP , storeurl script should be enough.
The problem is that youtube internal links are changing from time to
time, so we need to update our scripts from time to time.
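A minimal store_url_rewrite helper for 2.7 can be sketched like this. The
YouTube URL pattern and the canonical "SQUIDINTERNAL" key below are
illustrative assumptions only; as noted above, the real patterns change over
time, so check the wiki for current ones.

```python
#!/usr/bin/env python
# Sketch of a store_url_rewrite helper for Squid 2.7.
# NOTE: the URL pattern here is an illustrative assumption, not the
# current production pattern.
import re
import sys

# Hypothetical pattern: key cached videoplayback objects by their "id" value.
VIDEO_RE = re.compile(
    r'^http://[^/]+\.youtube\.com/videoplayback\?.*\bid=([0-9a-zA-Z_-]+)')

def rewrite(url):
    """Return a canonical store URL, or '' to leave the URL unchanged."""
    m = VIDEO_RE.match(url)
    if m:
        # Collapse every mirror of the same video onto one cache key.
        return ('http://video-srv.youtube.com.SQUIDINTERNAL/videoplayback?id='
                + m.group(1))
    return ''

def main():
    # Squid writes one request per line: "URL client_ip/fqdn user method ..."
    for line in sys.stdin:
        url = line.split(' ', 1)[0]
        sys.stdout.write(rewrite(url) + '\n')
        sys.stdout.flush()  # reply line by line; Squid waits for each answer

if __name__ == '__main__':
    main()
```

Wired in via storeurl_rewrite_program, with a storeurl_access ACL restricted
to the video URLs so that unrelated traffic never reaches the helper.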

On Wed, Feb 2, 2011 at 8:23 PM, Clemente Aguiar
 wrote:
>
> Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
> > On Wednesday, 2 February 2011 at 09:49:23, Clemente Aguiar wrote:
> > > I am running squid 3.1.9, and I would like to know if this version is
> > > able to cache youtube content?
> > >
> > > I did check the wiki
> > > (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
> > > and I must say that it is not clear what bits apply to version 3.1.
> > >
> > > Can somebody give me some pointers to what exactly I should configure.
> > >
> > > Thanks,
> > > Clemente
> >
> > Clemente,
> >
> > it is not 100% certain because 3.1 lacks 2.7's capabilities; the
> > only ways for now are:
> > use 2.7, or
> > use an ICAP server capable of managing those types of urls
> >
> > Regards,
> >
> > LD
>
> Ok, thanks.
>
> Maybe somebody should make that (perfectly) clear in the wiki ... and
> maybe add an example of how to implement an ICAP server.
>
> Well, now for the next question. Which ICAP server and how to implement?
> Can you help me?
>
> Regards,
> Clemente
>
>


Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Clemente Aguiar
Wed, 2011-02-02 at 10:01 -0600, Luis Daniel Lucio Quiroz wrote:
> On Wednesday, 2 February 2011 at 09:49:23, Clemente Aguiar wrote:
> > I am running squid 3.1.9, and I would like to know if this version is
> > able to cache youtube content?
> > 
> > I did check the wiki
> > (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
> > and I must say that it is not clear what bits apply to version 3.1.
> > 
> > Can somebody give me some pointers to what exactly I should configure.
> > 
> > Thanks,
> > Clemente
> 
> Clemente,
> 
> it is not 100% certain because 3.1 lacks 2.7's capabilities; the 
> only ways for now are:
> use 2.7, or
> use an ICAP server capable of managing those types of urls
> 
> Regards,
> 
> LD

Ok, thanks.

Maybe somebody should make that (perfectly) clear in the wiki ... and
maybe add an example of how to implement an ICAP server.

Well, now for the next question. Which ICAP server and how to implement?
Can you help me?

Regards,
Clemente




[squid-users] Re: Configuring Squid to Proxy HTTPS

2011-02-02 Thread Martin (Jake) Jacobson
Hi,

I need to configure a proxy box that will proxy a site that requires a
PKI cert.  The site requires a chained cert and fails if the cert
presented is unchained.  We have a bot that is only presenting its
cert and not the complete chain so it fails the connection.

I am wondering if we could have squid make the request for the
resource and instead of using the bot's cert, the squid client would
use the chained cert that I have loaded with squid?

Jake Jacobson


Re: [squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Luis Daniel Lucio Quiroz
On Wednesday, 2 February 2011 at 09:49:23, Clemente Aguiar wrote:
> I am running squid 3.1.9, and I would like to know if this version is
> able to cache youtube content?
> 
> I did check the wiki
> (http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
> and I must say that it is not clear what bits apply to version 3.1.
> 
> Can somebody give me some pointers to what exactly I should configure.
> 
> Thanks,
> Clemente

Clemente,

it is not 100% certain because 3.1 lacks 2.7's capabilities; the 
only ways for now are:
use 2.7, or
use an ICAP server capable of managing those types of urls

Regards,

LD


[squid-users] Squid 3.1 youtube caching

2011-02-02 Thread Clemente Aguiar
I am running squid 3.1.9, and I would like to know if this version is
able to cache youtube content?

I did check the wiki
(http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube)
and I must say that it is not clear what bits apply to version 3.1.

Can somebody give me some pointers to what exactly I should configure.

Thanks,
Clemente



Re: [squid-users] TCP send/receive buffer tuning

2011-02-02 Thread Jack Falworth

On 01.02.2011 23:03, Amos Jeffries wrote:
> On Tue, 01 Feb 2011 14:31:02 +0100, Jack Falworth wrote:
>> On 31.01.2011 23:53, Amos Jeffries wrote:
>>> On Mon, 31 Jan 2011 10:57:57 +0100, "Jack Falworth" wrote:
>>>> Hi squid-users,
>>>>
>>>> I have a question regarding the TCP send/receive buffer size Squid uses.
>>>> For my high-performance setup I increased both buffer sizes on my Ubuntu
>>>> 10.04 system. Unfortunately I found out that Squid 2.7 (as well as 3.x)
>>>> limits the receive buffer to 64K and the send buffer to 32K in the
>>>> configure.in script.
>>>>
>>>> In addition I found this bug report regarding this check:
>>>> http://bugs.squid-cache.org/show_bug.cgi?id=1075
>>>>
>>>> I couldn't really figure out the problem with Squid using higher buffer
>>>> sizes if it is the intention of the administrator to increase those
>>>> values. This check was included in CVS rev. 1.303 back in 2005, thus
>>>> it's quite old. Is this some legacy check or is it still important with
>>>> today's systems? Can I safely remove this check or will this have some
>>>> side-effects, e.g. some internal data structures won't be able to cope
>>>> with higher values?
>>>
>>> Note that this setting ONLY affects the TCP buffers, so 64K worth of
>>> packets can sit outside of Squid in the networking stack.
>>> This has side-effects on the ACK packets. While they are waiting in that
>>> buffer they are possibly ACKed but not actually received by Squid. If
>>> anything causes Squid to stop, crash or slow down on its read()'s and
>>> accept()'s the client can be left with incorrect information about the
>>> state of those bytes.
>>
>> But this could also happen with a 64K buffer. If Squid crashes or goes
>> down for some reason then information is lost anyway.
>> Thus the only reason increasing the buffer size in a high-traffic
>> scenario would be a bad idea is if Squid is somehow overloaded and
>> slowed down on its read()'s and accept()'s? But if I make sure that
>> Squid can handle some peak traffic values without being overloaded it
>> would be safe to increase buffer sizes?
>
> It's the only reason I'm aware of. There may be another I'm not.
> Feel free to experiment and see for yourself of course.
>
> Amos

I dug a little deeper and found some problems that would occur when 
increasing the receive buffer.

The receive buffer size is stored in SQUID_TCP_SO_RCVBUF. There are some 
code parts where this size is assigned to variables of smaller integer 
types, where a greater value would wrap around. Critical parts are in the 
functions httpReadReply (http.c), sslSetSelect (ssl.c) and 
clientEatRequestBodyHandler (client_side.c).
So in my opinion there is no way to get this working without (some) code 
rewrite and without verifying these critical sections. Correct me if I'm wrong.
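Independent of Squid's compile-time cap, it is easy to check what receive
buffer the kernel will actually grant for a given request. A small sketch
(the doubling behaviour described in the comment is Linux-specific):

```python
# Check what SO_RCVBUF the kernel actually grants for a requested size.
# (On Linux the returned value is usually double the request, capped by
# net.core.rmem_max; this is kernel behaviour, not Squid's.)
import socket

def granted_rcvbuf(requested):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)
        return s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    finally:
        s.close()

if __name__ == '__main__':
    for size in (64 * 1024, 256 * 1024, 1024 * 1024):
        print('%7d requested -> %7d granted' % (size, granted_rcvbuf(size)))
```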


Re: [squid-users] SQUID transparent, HTTP/1.0, HTTP/1.1

2011-02-02 Thread Giles Coochey

On 01/02/2011 11:13, Amos Jeffries wrote:

> The CVE is applicable to all proxies doing interception. They generate 
> their URL from the Host: header instead of the TCP link details from 
> the client. Neither being a reliable source of information. The one 
> saving grace so far is that the client TCP IP gets logged and 
> countermeasures can be placed to block nasties.
> 
> In the case of remote NAT the TCP link details are themselves wrong, 
> indicating that the router IP is the client origin. So there is zero 
> traceability for a network-wide poisoning attack and zero ways to 
> protect against it.
> 
> The problem has apparently been known since around the time NAT 
> interception was created. 2009 is merely the year infections were 
> identified that use it. There is no reliable fix.
> All we can do is stress "avoid NAT" and take the (slightly) more 
> difficult road of configuring the network to use so-called "zero-conf" 
> auto-detection of the proxy. It is worth it in both the medium and long term.




Thanks for elaborating.

So the case is, and I think this is true for all NAT operations, that NAT 
at least obfuscates connections, and in the case of Squid that can cause 
real problems. I agree that policy routing is a better implementation, 
although a lot more complex for some to set up and, I'm guessing, it would 
probably require a custom kernel recompile on many distributions.



--
Best Regards,

Giles Coochey
NetSecSpec Ltd
NL T-Systems Mobile: +31 681 265 086
NL Mobile: +31 626 508 131
GIB Mobile: +350 5401 6693
Email/MSN/Live Messenger: gi...@coochey.net
Skype: gilescoochey







RE: [squid-users] Authentication to Sharepoint not happening

2011-02-02 Thread Saurabh Agarwal
I used "pipeline_prefetch off" setting in squid.conf and it works.

Regards,
Saurabh

-Original Message-
From: Senthilkumar [mailto:senthilkumaar2...@gmail.com] 
Sent: Wednesday, February 02, 2011 12:48 PM
To: Saurabh Agarwal
Subject: Re: [squid-users] Authentication to Sharepoint not happening

Hi Saurabh Agarwal,

We have also have the same issue. Could you please share us the steps to 
be followed to make it to work.

Thanks
Senthil

Saurabh Agarwal wrote:
> It works now! I followed the code and then turned "pipeline_prefetch" off. In 
> the code there was this check, which was setting the no_connection_auth flag to 1.
>
> if (Config.onoff.pipeline_prefetch)
> request->flags.no_connection_auth = 1;
>
> I don't understand it completely but I can move forward. Thank You Amos!
>
> Regards,
> Saurabh
> 
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
> Sent: Tuesday, February 01, 2011 6:30 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Authentication to Sharepoint not happening
>
> On 02/02/11 00:43, Saurabh Agarwal wrote:
>   
>> Looks like we are making progress. Yeah there is a condition in the code 
>> client_side.c that relates to when "WWW-Authenticate" header is being 
>> deleted. Condition checks for no_connection_auth flag in the request.
>>
>> This is the code. It checks if there is no_connection_auth in incoming 
>> request then that header is being deleted. I think it relates to pinning 
>> connections as you said earlier.
>>
>>  if (request->flags.no_connection_auth) {
>>  httpHeaderDelAt(hdr, pos);
>>  connection_auth_blocked = 1;
>>  continue;
>>  }
>>
>> But in Squid-2.7.Stable7 there is support only for specifying 
>> no-connection-auth in http_port directive. In Squid 3.1 we can turn it 
>> on|off using connection-auth=[on|off].
>>
>> How to not set the no_connection_auth flag in Squid-2.7.Stable.7?
>> 
>
> It is supposed to be on by default in both versions, with the 
> configuration option there to turn it off and enable stripping of the 
> header.
>
> Amos
>