Re: [squid-users] LVS & Reverse Proxy Squid

2007-09-18 Thread Ding Deng
David Lawson <[EMAIL PROTECTED]> writes:

>> You need as many public addresses as the number of Squid instances
>> you'd like to run on a single box, and configure each instance to
>> listen on a different public address, e.g.:
>
> This is untrue in an LVS environment, though true if the Squids are
> bare on the network.  In the case where you're load balancing with

I have to admit that I am running a setup where LVS routes traffic to the
Squid boxes over a private network, and Squid responds directly to clients
;-(

> LVS, the simplest way to achieve this is to have each squid instance
> simply listen on a unique port.  Instance A on port 80, Instance B on
> port 81, etc.  Then set up the LVS VIPs and RIPs to direct traffic
> appropriately.


Re: [squid-users] Squid setup questions

2007-09-18 Thread Tek Bahadur Limbu

Hi Antonio,


On Tue, 18 Sep 2007 17:00:25 -0400
"Antonio Pereira" <[EMAIL PROTECTED]> wrote:

> Ok Great.
> 
> I have a hardware based firewall.
> 
> What is the best physical setup for the squid box: take the cable from
> the firewall, put 2 nics on the squid box, and plug 1 nic into the
> firewall and the other into the backbone switch? Or just use 1 nic on the
> squid box and put a rule in the firewall to allow only outbound http
> traffic from the squid box?
> Right now everyone defaults to the firewall and all http traffic goes
> out to the internet. We also have VPN and web and ssl traffic coming in
> via inbound http.


I think the best layout would be to put 2 NICs on the Squid box. Like you 
said, plug the 1st cable into the firewall and the 2nd cable into your 
backbone switch where the 4 other sites connect.

The following diagram shows the simple layout:


 
                 Internet
                     |
      Transparent Squid Bridge Box
                     |
              Backbone Switch
                     |
      +--------+--------+--------+
      |        |        |        |
    Site1    Site2    Site3    Site4


I would like the Squid box to run in transparent bridging mode. This way, you 
don't have to change anything on your network. Furthermore, if your Squid box 
should go down, which is unlikely, you just reconnect the cable from your 
backbone switch to your firewall and everything becomes normal again!

Since we won't be running any firewall except for intercepting web requests to 
Squid's port, your VPN and SSL traffic should not get hampered.

In fact, I am using this setup on a Debian shaper box and so far it is working 
great.
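
For reference, the bridge plus interception setup on such a box might look
roughly like this (interface names, the Squid port, and the use of
bridge-netfilter are assumptions, not a tested recipe):

 # build the bridge: eth0 to the firewall, eth1 to the backbone switch
 brctl addbr br0
 brctl addif br0 eth0
 brctl addif br0 eth1
 ifconfig br0 up

 # let iptables see bridged packets, then divert port 80 to Squid
 echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
 iptables -t nat -A PREROUTING -i br0 -p tcp --dport 80 -j REDIRECT --to-port 3128

 # squid.conf (Squid 2.6 interception syntax)
 http_port 3128 transparent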

Hope it helps.


Thanking you...


> 
> Thanks again
> 
> -Original Message-
> From: Tek Bahadur Limbu [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, September 18, 2007 4:13 PM
> To: Antonio Pereira
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Squid setup questions
> 
> Hi Antonio,
> 
> Antonio Pereira wrote:
> > Hello,
> > 
> > I have a pretty much redundant question but I would like some opinions
> > before I venture into this possible solution.
> > 
> > I have 4 sites on an MPLS network that access the internet via 1
> > location; at this 1 location there is already a firewall. What I would
> > like to do is start blocking web sites and web traffic.
> > 
> > What is the best setup with squid for this type of setup? What
> > documents should I read for this type of setup?
> 
> Not sure about MPLS networking. However, in your case, it should be 
> simple. Just run Squid transparently on the gateway (firewall) from 
> where all 4 sites get access to the internet.
> 
> Adding SquidGuard or DansGuardian or even custom ACLs will provide you 
> with all the web blocking functionalities.
> 
> Thanking you...
> 
> 
> > 
> > Thanks in advance


-- 

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator 

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal
http://wlink.com.np/



Re: [squid-users] How can I clean out old objects - refresh patt

2007-09-18 Thread Adrian Chadd
On Wed, Sep 19, 2007, Ding Deng wrote:

> > It'd mean more RAM used to track expired objects, and more CPU to walk
> > the list and delete unneeded objects..
> 
> And probably longer disk seek time.

Depends how it's done. Doing it on a busy UFS might mess with your
service times.

> Agreed. We still have to make sure that cache_dirs match server memory,
> however, as I'm seeing a scenario right now where, if we make full use of
> all the available disks, a single Squid instance will eat up dozens of
> gigabytes of physical memory, which is far more than what we've
> installed on the server.

Squid isn't exactly memory-frugal at the present time. I've been thinking
of ways to improve its index memory usage.

> I'm also seeing a scenario where 10GB of cache_dirs gets roughly the same
> hit ratio as 30GB of cache_dirs due to cache pollution, so a cache_dir
> larger than necessary is not always a good idea.

Yup! Well, caching has diminishing returns with cache size if you're
just caching small objects.



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


[squid-users] Simple authentication on a home-based (ie no domain controller) WinXP box

2007-09-18 Thread Jeffery Chow
Hi Gang,

I need to set up a squid for the purpose of letting my friend visit sites 
anonymously. I downloaded the WinXP port of Squid from Acme Consulting and have 
Squid set up adequately to run as a proxy server, but I also want to slap on 
some authentication so that only my friend can proxy through my machine. 
Ideally I would store a username/password pair in a text file somewhere on my 
system (plaintext or not, doesn't matter), but the authentication helpers that 
I see in my distro (mswin_auth, mswin_negotiate_auth, mswin_ntlm_auth) don't 
come with enough documentation to tell me which one is the right one to try. 

Does anyone know how I can set up a simple user/pwd authentication scheme on a 
WinXP box, one that doesn't require a windows domain, NCSA, NTLM, or any other 
buzzword?
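
The closest I've come up with (untested; paths are from the Acme build
layout) is to create a local Windows account for my friend and point the
basic mswin_auth helper at it, since mswin_auth seems to check credentials
against the local account database when there's no domain:

 auth_param basic program c:/squid/libexec/mswin_auth.exe
 auth_param basic children 5
 auth_param basic realm my proxy
 acl friend proxy_auth REQUIRED
 http_access allow friend
 http_access deny all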

Thanks in advance,
Jeff


   



Re: [squid-users] LVS & Reverse Proxy Squid

2007-09-18 Thread David Lawson


On Sep 19, 2007, at 12:00 AM, Ding Deng wrote:

> "Brad Taylor" <[EMAIL PROTECTED]> writes:
>
>> We use LVS (load balancer) to send traffic to multiple Squid 2.5
>> servers in reverse proxy mode. We want to put multiple Squid instances
>> on one box and have successfully done that by changing: http_port 80 to
>> http_port 192.168.60.7:80 in the squid.conf file. We tested against that
>
> Squid is listening only on a private address now, what will the source
> address of response from Squid be?

LVS NATs outbound responses; as long as the response to a client
request goes from the cache through the load balancer, it'll be NATed
fine.

>> instance of squid and it worked successfully. Once it is added to the
>> LVS load balancer the site no longer works. I'll check with the LVS
>> group also.
>
> You need as many public addresses as the number of Squid instances you'd
> like to run on a single box, and configure each instance to listen on a
> different public address, e.g.:

This is untrue in an LVS environment, though true if the Squids are
bare on the network.  In the case where you're load balancing with
LVS, the simplest way to achieve this is to have each squid instance
simply listen on a unique port.  Instance A on port 80, Instance B on
port 81, etc.  Then set up the LVS VIPs and RIPs to direct traffic
appropriately.


VIP A: 1.1.1.1:80
RIP A: 2.2.2.2:80
RIP A: 2.2.2.3:80

VIP B: 1.1.1.2:80
RIP B: 2.2.2.2:81
RIP B: 2.2.2.3:81

Etc.  This assumes you're using LVS NAT routing; for DR and TUN some
details are slightly different, but the basic concept is the same.  I'll
be more than happy to answer Brad's specific questions about the LVS/Squid
relationship in more depth off list if he wants, since this is really less
a Squid question and more a "How do I make LVS and Squid play well
together?" question.


--Dave
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]





Re: [squid-users] How can I clean out old objects - refresh patt

2007-09-18 Thread Ding Deng
Adrian Chadd <[EMAIL PROTECTED]> writes:

> On Tue, Sep 18, 2007, Nicole wrote:
>
>>  Thanks for the clarification, but Eeek!
>
> What's eek about it!
>
>>  So then, I guess this raises the question: If you have plenty of
>> disk, there really is nothing keeping ancient files from hanging
>> around, using up space and enlarging your swap.state file?
>
> Squid will clean it for you! Relax.
>
>>  I thought that either not enough space or being older than the expire
>>  time would cause objects to be deleted.
>
> It'd mean more RAM used to track expired objects, and more CPU to walk
> the list and delete unneeded objects..

And probably longer disk seek time.

>>  So it seems like I either have to manually purge old files every so
>> often, set my disk space artificially low to limit the object count to
>> what my server's memory can handle, or increase my memory?
>
> Nope, you just need to:
>
> * cron a squid -k rotate once a day to make sure swap.state (and your
> other log files) don't grow to infinity! So many people forget this
> step.
>
> * Relax, and let squid handle deleting files when it wants to. It'll
> delete old files to make room for others as appropriate!

Agreed. We still have to make sure that cache_dirs match server memory,
however, as I'm seeing a scenario right now where, if we make full use of
all the available disks, a single Squid instance will eat up dozens of
gigabytes of physical memory, which is far more than what we've
installed on the server.

I'm also seeing a scenario where 10GB of cache_dirs gets roughly the same
hit ratio as 30GB of cache_dirs due to cache pollution, so a cache_dir
larger than necessary is not always a good idea.
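
As a sanity check, the old rule of thumb from the Squid FAQ is roughly
10 MB of index RAM per GB of cache_dir (more if the average object is
small):

 # rough index-memory estimate for a single instance
 #   300 GB of cache_dir  ->  ~3 GB of index RAM
 #   plus cache_mem and per-connection overhead on top of that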

> Now, the question you should be asking is "will legitimate but
> infrequently used files be deleted in preference to "stale" files, and
> will this make my disk use suboptimal?"
>
> The answer, thankfully, is no - if you think about it, if a file is
> stale then:
>
> * it hasn't been accessed in a while (or its freshness would've been
> "updated" and it suddenly isn't stale/expired anymore!), and
>
> * if it hasn't been accessed in a while, it'll be at the tail end of
> the LRU or Heap anyway.
>
>
> Adrian
>
> -- 
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
> -
> - $25/pm entry-level bandwidth-capped VPSes available in WA -


Re: [squid-users] LVS & Reverse Proxy Squid

2007-09-18 Thread Ding Deng
"Brad Taylor" <[EMAIL PROTECTED]> writes:

> We use LVS (load balancer) to send traffic to multiple Squid 2.5
> servers in reverse proxy mode. We want to put multiple Squid instances
> on one box and have successfully done that by changing: http_port 80 to
> http_port 192.168.60.7:80 in the squid.conf file. We tested against that

Squid is listening only on a private address now, what will the source
address of response from Squid be?

> instance of squid and it worked successfully. Once it is added to the LVS
> load balancer the site no longer works. I'll check with the LVS group
> also.

You need as many public addresses as the number of Squid instances you'd
like to run on a single box, and configure each instance to listen on a
different public address, e.g.:

Instance 1:

 http_port 192.168.60.7:80
 http_port 168.215.102.126:80
 udp_incoming_address 192.168.60.7
 udp_outgoing_address 255.255.255.255

Instance 2:

 http_port 192.168.60.8:80
 http_port 168.215.102.127:80
 udp_incoming_address 192.168.60.8
 udp_outgoing_address 255.255.255.255

Hope that helps.


Re: [squid-users] header_access debug, pam_appl.h, digest-auth-helper, storeio

2007-09-18 Thread Amos Jeffries

> Finally, question 5) that I've meant to ask for a long time: I find I
> always have to issue "squid -k shutdown" at least twice, before squid
> would shut down.
> Not too surprisingly "squid -k kill" only needs to be issued once. I'm
> curious what's causing squid's "resiliency" in the face of "squid -k
> shutdown"?

Some versions of squid had a bug that made them ignore the 'I've shut down
properly' signal from the child to the restarter process. That has been
fixed for several months now.

I find, since that fix, that even the default shutdown_lifetime of 30
seconds is far too long for an active cache; I've had more success setting
it to 5 seconds.
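
That is, in squid.conf:

 # shorten the graceful shutdown window (default is 30 seconds)
 shutdown_lifetime 5 seconds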

> Does it have anything to do with the 8 squidGuard redirect_children in my
> setup?

Yes, it's waiting at shutdown for them to close. Maybe even when it does
not have to.

Amos



Re: [squid-users] How can I clean out old objects - refresh patt

2007-09-18 Thread Adrian Chadd
On Tue, Sep 18, 2007, Nicole wrote:

>  Thanks for the clarification, but Eeek!

What's eek about it!

>  So then, I guess this raises the question: If you have plenty of disk, there
> really is nothing keeping ancient files from hanging around, using up space
> and enlarging your swap.state file? 

Squid will clean it for you! Relax.

>  I thought that either not enough space or being older than the expire time
>  would cause objects to be deleted.

It'd mean more RAM used to track expired objects, and more CPU to walk the
list and delete unneeded objects..

>  So it seems like I either have to manually purge old files every so often,
> or set my disk space artificially low to limit the object count to what my
> server's memory can handle, or increase my memory?

Nope, you just need to:

* cron a squid -k rotate once a day to make sure swap.state (and your other log
  files) don't grow to infinity! So many people forget this step (see the
  crontab sketch after this list).

* Relax, and let squid handle deleting files when it wants to. It'll delete old
  files to make room for others as appropriate!
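
For the rotate step, a minimal crontab sketch (the install path is an
assumption):

 # rotate squid logs, including swap.state, every day at 04:00
 0 4 * * * /usr/local/squid/sbin/squid -k rotate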

Now, the question you should be asking is "will legitimate but infrequently used
files be deleted in preference to "stale" files, and will this make my disk
use suboptimal?"

The answer, thankfully, is no - if you think about it, if a file is stale then:

* it hasn't been accessed in a while (or its freshness would've been "updated"
  and it suddenly isn't stale/expired anymore!), and
* if it hasn't been accessed in a while, it'll be at the tail end of the LRU or
  Heap anyway.




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


Re: [squid-users] New Squid user help required with setup

2007-09-18 Thread nick w
Hi,

What does your conf file look like?

On 9/18/07, Abd-Ur-Razzaq Al-Haddad <[EMAIL PROTECTED]> wrote:
> Hi,
>
>
>
> I've just installed squid on OpenSuse 10.2 installation.
>
> I have configured squid and Suse to use samba and have added it to the
> Windows Active Directory network successfully.
>
>
>
> The problem I am now facing is ACLs - nothing seems to work and I can't
> get the error messages that I should be getting for blocking
> sites/content.
>
> Please can you tell me where I am going wrong.
>
>
>
> Thanks
>
>
>
>
> Abd-Ur-Razzaq Al-Haddad
> IT Analyst
>
>
> 9 Queen Street London W1J 5PE
>
> Tel: +44 (0)207 659 6620Fax: +44 (0)207 659 6621
> Direct: +44 (0)207 659 6632 Mob: +44 (0)7738 787881
> [EMAIL PROTECTED]
>
>
>
>
>


Re: [squid-users] How can I clean out old objects - refresh patt

2007-09-18 Thread Nicole

On 19-Sep-07 My Secret NSA Wiretap Overheard Adrian Chadd Saying:
> Files aren't deleted when they expire.
> 
> Files are deleted when:
> 
> * A request occurs and squid checks the file for freshness, or
> * Squid issues a validation request and determines the local copy is
>   stale, or
> * Squid needs to make space (as the disk store is full) and starts running
>   the object replacement policy to purge objects - but then, it doesn't
>   maintain a list of "stale" objects to purge; it just deletes the
>   'oldest' objects.
>
> Adrian

 Thanks for the clarification, but Eeek!

 So then, I guess this raises the question: If you have plenty of disk, there
really is nothing keeping ancient files from hanging around, using up space
and enlarging your swap.state file?

 I thought that either not enough space or being older than the expire time
would cause objects to be deleted.

 So it seems like I either have to manually purge old files every so often,
set my disk space artificially low to limit the object count to what my
server's memory can handle, or increase my memory?

 Is that what people generally do?


  Nicole


 
> On Tue, Sep 18, 2007, Nicole wrote:
>> Hate to respond to myself,  but I wanted to add more info..
>> 
>> In a well duh moment I ran find and found objects going back to July.
>>   find /cache -type f -mtime +30 -exec ls {} \;
>> 
>>  If my headers from my web servers are set to expire in 2 weeks:
>> Cache-Control: max-age=1728000
>> Connection: close
>> Date: Wed, 19 Sep 2007 00:38:51 GMT
>> Accept-Ranges: bytes
>> ETag: "1155193587"
>> Server: lighttpd
>> Content-Length: 68424
>> Content-Type: image/jpeg
>> Expires: Tue, 09 Oct 2007 00:38:51 GMT
>> Last-Modified: Mon, 17 Sep 2007 20:03:00 GMT
>> Client-Date: Wed, 19 Sep 2007 00:38:51 GMT
>> Client-Response-Num: 1
>> 
>> How did my expires make it keep files for so long and why did my new ones
>> not
>> start a purge?
>> 
>> 
>> Thanks
>> 
>> 
>>   Nicole
>> 
>> 
>> >  Hello all
>> > 
>> >  I have a few squid servers that seem to have gotten a bit out of control.
>> > 
>> >  They are using up all the system's memory and starting to serve items
>> >  slowly.
>> > 
>> >  As near as I can tell, it seems to just want more memory than I have to
>> > serve
>> > and manage all the objects in the cache. 
>> > 
>> >  Internal Data Structures:
>> > 20466526 StoreEntries
>> >  24888 StoreEntries with MemObjects
>> >  24870 Hot Object Cache Items
>> > 20466434 on-disk objects
>> > 
>> > 
>> > I have tried reducing my refresh pattern from:
>> > refresh_pattern -i \.jpg 10080 150% 40320 ignore-reload
>> > to:
>> > refresh_pattern -i \.jpg 5040 100% 4320 ignore-reload
>> > 
>> > and doing a reload.
>> > 
>> >  However, I have not noticed it expiring out old objects and freeing up
>> >  disk
>> > space like I thought it would.
>> >  
>> >  Do objects get stored based on their original refresh pattern? So even if
>> >  I
>> > change it, they won't expire until they expire based on the pattern they
>> > were
>> > stored with?
>> > 
>> >  Is there any way to tell the age of the objects eating up my cache
>> > storage space?  Any recommendations on how to reduce my object count
>> > besides reducing disk space? This is for a reverse proxy cache and we
>> > have the cache header set to expire objects in 2 weeks. I really can't
>> > believe that I have 20 million two-week-old objects.
>> > 
>> > 
>> > 
>> >   Thanks!
>> > 
>> > 
>> > 
>> >   Nicole
>> > 
>> 
>> 
> 
> -- 
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support
> -
> - $25/pm entry-level bandwidth-capped VPSes available in WA -


--
 |\ __ /|   (`\
 | o_o  |__  ) )   
//  \\ 
  -  [EMAIL PROTECTED]  -  Powered by FreeBSD  -
--
 "The term "daemons" is a Judeo-Christian pejorative.
 Such processes will now be known as "spiritual guides"
  - Politically Correct UNIX Page





Re: [squid-users] How can I clean out old objects - refresh patterns and really old items

2007-09-18 Thread Adrian Chadd
Files aren't deleted when they expire.

Files are deleted when:

* A request occurs and squid checks the file for freshness, or
* Squid issues a validation request and determines the local copy is stale, or
* Squid needs to make space (as the disk store is full) and starts running
  the object replacement policy to purge objects - but then, it doesn't maintain
  a list of "stale" objects to purge; it just deletes the 'oldest' objects.




Adrian

On Tue, Sep 18, 2007, Nicole wrote:
> Hate to respond to myself,  but I wanted to add more info..
> 
> In a well duh moment I ran find and found objects going back to July.
>   find /cache -type f -mtime +30 -exec ls {} \;
> 
>  If my headers from my web servers are set to expire in 2 weeks:
> Cache-Control: max-age=1728000
> Connection: close
> Date: Wed, 19 Sep 2007 00:38:51 GMT
> Accept-Ranges: bytes
> ETag: "1155193587"
> Server: lighttpd
> Content-Length: 68424
> Content-Type: image/jpeg
> Expires: Tue, 09 Oct 2007 00:38:51 GMT
> Last-Modified: Mon, 17 Sep 2007 20:03:00 GMT
> Client-Date: Wed, 19 Sep 2007 00:38:51 GMT
> Client-Response-Num: 1
> 
> How did my expires make it keep files for so long and why did my new ones not
> start a purge?
> 
> 
> Thanks
> 
> 
>   Nicole
> 
> 
> >  Hello all
> > 
> >  I have a few squid servers that seem to have gotten a bit out of control.
> > 
> >  They are using up all the system's memory and starting to serve items 
> > slowly.
> > 
> >  As near as I can tell, it seems to just want more memory than I have to
> > serve
> > and manage all the objects in the cache. 
> > 
> >  Internal Data Structures:
> > 20466526 StoreEntries
> >  24888 StoreEntries with MemObjects
> >  24870 Hot Object Cache Items
> > 20466434 on-disk objects
> > 
> > 
> > I have tried reducing my refresh pattern from:
> > refresh_pattern -i \.jpg 10080 150% 40320 ignore-reload
> > to:
> > refresh_pattern -i \.jpg 5040 100% 4320 ignore-reload
> > 
> > and doing a reload.
> > 
> >  However, I have not noticed it expiring out old objects and freeing up disk
> > space like I thought it would.
> >  
> >  Do objects get stored based on their original refresh pattern? So even if I
> > change it, they won't expire until they expire based on the pattern they 
> > were
> > stored with?
> > 
> >  Is there any way to tell the age of the objects eating up my cache storage
> > space?  Any recommendations on how to reduce my object count besides 
> > reducing disk space? This is for a reverse proxy cache and we have the cache
> > header set to expire objects in 2 weeks. I really can't believe that I have
> > 20 million two-week-old objects.
> > 
> > 
> > 
> >   Thanks!
> > 
> > 
> > 
> >   Nicole
> > 
> 
> 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


Re: [squid-users] header_access debug, pam_appl.h, digest-auth-helper, storeio

2007-09-18 Thread vollkommen
> > 1) I got "pam_auth.c:74:31: error: security/pam_appl.h: No such file
> > or directory" when compiling squid-2.6.STABLE16-20070916. I found a
> > nearly identical instance in the list archive more than a year ago.
> > That got me looking into the pam-devel on my host os--Mac OS X 10.4.
> > It turns out pam_appl.h is located in /usr/include/pam/ on OS X 10.4
> > and 10.3, rather than /usr/include/security. A symbolic link takes
> > care of it. I wonder, however, if the developers are open to
> > accommodating these types of OS-specific peculiarities by adjusting
> > during ./configure based on --host=.
>
> so we need a configure test to see which of the two is available, and
> include the proper one..
>
> (should not make that decision based on the host type)

Thanks, Henrik.

> > 2) I narrowed down the cause of my inability to log into several sites
> > to the last line in the 'http_anonymizer paranoid' emulation of
> > squid-2.6 that I was using, namely: "header_access All deny all". I'd
> > like to find out what headers these sites need to see. Could anyone
> > let me know the debug_options number for header_access without going
> > full bore to "debug_options ALL,9"? Currently I'm aware of 33 for
> > reply_mime_type and 28 for ACL debugging. Is there a quick list of all
> > the debug option numbers, without resorting to reading the source
> > code?
>
> Usually login problems means you have blocked cookies..
>

I find "header_access All deny all" appears to be responsible for the cookie 
blocking.
I'd like to find out what additional header_access I need to allow to let these 
cookies through. Would enabling
header_access debug help in this regard? Could you point me to a list of all 
the possible debug_options, other than the source code? =D

Here's the header_access portion of my squid.conf

#Default:
# none
header_access User-Agent deny all
header_access Allow allow all
header_access Authorization allow all
header_access WWW-Authenticate allow all
header_access Cache-Control allow all
header_access Content-Encoding allow all
# to reproduce the old 'http_anonymizer paranoid' feature, as shown in the 
default squid.conf
header_access Allow allow all
(snipped for brevity)
header_access All deny all
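
My current guess (untested) is that explicitly allowing the cookie headers
before the final deny might be enough:

 header_access Cookie allow all
 header_access Set-Cookie allow all
 header_access All deny all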

I used the Firefox extension LiveHTTPHeaders to capture the headers while 
(trying to) log into youtube, once with "header_access All deny all" enabled 
and once with it disabled for the session. The two captures showed the same 
request headers:

POST /login?next=/index HTTP/1.1
Host: www.youtube.com
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US;
Accept: text/xml,application/xml,application/xhtml+xml,text/h
Accept-Language: en,en-us;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: gb18030,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer: http://www.youtube.com/login?next=/index
Cookie: GEO=4dbf49b28f5f6763908f946191912f49cxUAAABVUyxuaixhd

[squid-users] cache_dir

2007-09-18 Thread alexus
I have increased the value from

cache_dir ufs /usr/local/squid/var/cache 100 16 256

to

cache_dir ufs /usr/local/squid/var/cache 1000 16 256

but I didn't see any changes other than the cache dir getting 10 times
bigger. Among other reasons, one of the main reasons was to save traffic;
most of our users go to one website, but after we started using squid I
didn't see any traffic saving - it's the same as it was... I then tried
2000 and 3000, same thing...

Also, whenever I need to restart, squid does this "Store rebuilding" which
takes forever. Is there a way to skip this, or is it a necessary step? And
on top of all this I get these statistics:

2007/08/02 12:18:04| Done reading /usr/local/squid/var/cache swaplog
(31497889 entries)
2007/08/02 12:18:04| Finished rebuilding storage from disk.
2007/08/02 12:18:04|   15742193 Entries scanned
2007/08/02 12:18:04| 0 Invalid entries.
2007/08/02 12:18:04| 0 With invalid flags.
2007/08/02 12:18:04|   12982474 Objects loaded.
2007/08/02 12:18:04| 0 Objects expired.
2007/08/02 12:18:04|   12977266 Objects cancelled.
2007/08/02 12:18:04|   1284480 Duplicate URLs purged.
2007/08/02 12:18:04|   1475239 Swapfile clashes avoided.
2007/08/02 12:18:04|   Took 785.7 seconds (16523.7 objects/sec).
2007/08/02 12:18:04| Beginning Validation Procedure
2007/08/02 12:18:05|   Completed Validation Procedure
2007/08/02 12:18:05|   Validated 5208 Entries
2007/08/02 12:18:05|   store_swap_size = 119566k
2007/08/02 12:18:09| storeLateRelease: released 711 objects


2007/08/02 12:18:04|   12982474 Objects loaded.
2007/08/02 12:18:04|   12977266 Objects cancelled.

That part I really don't get... it loaded X amount of objects and
nearly 99% get cancelled...

What am I missing?

Thanks in advance

-- 
http://alexus.org/


RE: [squid-users] How can I clean out old objects - refresh patterns and really old items

2007-09-18 Thread Nicole
Hate to respond to myself,  but I wanted to add more info..

In a well duh moment I ran find and found objects going back to July.
  find /cache -type f -mtime +30 -exec ls {} \;

 If my headers from my web servers are set to expire in 2 weeks:
Cache-Control: max-age=1728000
Connection: close
Date: Wed, 19 Sep 2007 00:38:51 GMT
Accept-Ranges: bytes
ETag: "1155193587"
Server: lighttpd
Content-Length: 68424
Content-Type: image/jpeg
Expires: Tue, 09 Oct 2007 00:38:51 GMT
Last-Modified: Mon, 17 Sep 2007 20:03:00 GMT
Client-Date: Wed, 19 Sep 2007 00:38:51 GMT
Client-Response-Num: 1

How did my expires make it keep files for so long and why did my new ones not
start a purge?


Thanks


  Nicole


>  Hello all
> 
>  I have a few squid servers that seem to have gotten a bit out of control.
> 
>  They are using up all the system's memory and starting to serve items slowly.
> 
>  As near as I can tell, it seems to just want more memory than I have to
> serve
> and manage all the objects in the cache. 
> 
>  Internal Data Structures:
> 20466526 StoreEntries
>  24888 StoreEntries with MemObjects
>  24870 Hot Object Cache Items
> 20466434 on-disk objects
> 
> 
> I have tried reducing my refresh pattern from:
> refresh_pattern -i \.jpg 10080 150% 40320 ignore-reload
> to:
> refresh_pattern -i \.jpg 5040 100% 4320 ignore-reload
> 
> and doing a reload.
> 
>  However, I have not noticed it expiring out old objects and freeing up disk
> space like I thought it would.
>  
>  Do objects get stored based on their original refresh pattern? So even if I
> change it, they won't expire until they expire based on the pattern they were
> stored with?
> 
>  Is there any way to tell the age of the objects eating up my cache storage
> space?  Any recommendations on how to reduce my object count besides reducing
> disk space? This is for a reverse proxy cache and we have the cache
> header set to expire objects in 2 weeks. I really can't believe that I have
> 20 million two-week-old objects.
> 
> 
> 
>   Thanks!
> 
> 
> 
>   Nicole
> 
> 


--
 |\ __ /|   (`\
 | o_o  |__  ) )   
//  \\ 
  -  [EMAIL PROTECTED]  -  Powered by FreeBSD  -
--
 "The term "daemons" is a Judeo-Christian pejorative.
 Such processes will now be known as "spiritual guides"
  - Politically Correct UNIX Page





[squid-users] How can I clean out old objects - refresh_patterns, extra memory usage and more..

2007-09-18 Thread Nicole

 Hello all

 I have a few squid servers that seem to have gotten a bit out of control.

 They are using up all the system's memory and starting to serve items slowly.

 As near as I can tell, it seems to just want more memory than I have to serve
and manage all the objects in the cache. 

 Internal Data Structures:
20466526 StoreEntries
 24888 StoreEntries with MemObjects
 24870 Hot Object Cache Items
20466434 on-disk objects


I have tried reducing my refresh pattern from:
refresh_pattern -i \.jpg 10080 150% 40320 ignore-reload
to:
refresh_pattern -i \.jpg 5040 100% 4320 ignore-reload

and doing a reload.
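
For reference, my understanding of the refresh_pattern fields (all times
in minutes):

 # refresh_pattern [-i] regex  min  percent  max  [options]
 # old rule: min 10080 (7 days),  max 40320 (28 days)
 # new rule: min 5040 (3.5 days), max 4320 (3 days) - note the max is
 #           now smaller than the min, which may not be what I intended
 refresh_pattern -i \.jpg 5040 100% 4320 ignore-reload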

 However, I have not noticed it expiring out old objects and freeing up disk
space like I thought it would.
 
 Do objects get stored based on their original refresh pattern? So even if I
change it, they won't expire until they expire based on the pattern they were
stored with?

 Is there any way to tell the age of the objects eating up my cache storage
space?  Any recommendations on how to reduce my object count besides reducing
disk space? This is for a reverse proxy cache and we have the cache
header set to expire objects in 2 weeks. I really can't believe that I have
20 million two-week-old objects.



  Thanks!



  Nicole


--
 |\ __ /|   (`\
 | o_o  |__  ) )   
//  \\ 
  -  [EMAIL PROTECTED]  -  Powered by FreeBSD  -
--
 "The term "daemons" is a Judeo-Christian pejorative.
 Such processes will now be known as "spiritual guides"
  - Politically Correct UNIX Page





Re: [squid-users] FW: Java authentication under SquidNT 2.6 STABLE 14 using NTLM

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 20:34 +0100, Paul Cocker wrote:
> Under the advice of the 3rd party I have added the following to
> squid.conf
> 
> acl Java browser Java/1.4 Java/1.5 Java/1.6
> http_access allow Java 
> 
> This appears to resolve the issue. However I would like to better
> understand the above line, and whether it is an acceptable full-time
> solution, or merely a workaround.

It's a workaround, allowing Java clients access without the need to
authenticate to the proxy.
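
Note that rule order matters; the exemption only works if it comes before
whatever http_access rule requires login, roughly (the surrounding rules
here are placeholders):

 acl Java browser Java/1.4 Java/1.5 Java/1.6
 http_access allow Java
 http_access allow authenticated_users
 http_access deny all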

Regards
Henrik




Re: [squid-users] Java authentication under SquidNT 2.6 STABLE 14using NTLM

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 23:13 +0100, Paul Cocker wrote:
> How so? I didn't see anything in the change logs which jumped out at
> me.
> 
- Bug #2057: NTLM stop work in messengers after upgrade to 2.6.STABLE14


Regards
Henrik




Re: [squid-users] Java authentication under SquidNT 2.6 STABLE 14 using NTLM

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 19:51 +0100, Paul Cocker wrote:
> Last week (Thursday/Friday) my organisation moved from SquidNT 2.5 to
> SquidNT 2.6 STABLE 14.

> Java 6 Update 2 and users connect using NTLM passthrough authentication,
> squid looks to see that they are a member of group X before allowing

Upgrade to 2.6.STABLE16; it should work better.

Regards
Henrik



RE: [squid-users] Squid setup questions

2007-09-18 Thread Antonio Pereira
Ok Great.

I have a hardware based firewall.

What is the best physical setup for the squid box: take the cable from the
firewall, put 2 nics on the squid box, and plug 1 nic into the firewall and
the other into the backbone switch? Or just use 1 nic on the squid box and
put a rule in the firewall to allow only outbound http traffic from the
squid box?
Right now everyone defaults to the firewall and all http traffic goes
out to the internet. We also have VPN and web and ssl traffic coming in
via inbound http.

Thanks again

-Original Message-
From: Tek Bahadur Limbu [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 18, 2007 4:13 PM
To: Antonio Pereira
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid setup questions

Hi Antonio,

Antonio Pereira wrote:
> Hello,
> 
> I have a pretty much redundant question but I would like some opinions
> before I venture into this possible solution.
> 
> I have 4 sites on an MPLS network that access the internet via 1
> location; at this 1 location there is already a firewall. What I would
> like to do is start blocking web sites and web traffic.
> 
> What is the best setup with squid for this type of setup? What
documents
> should I read for this type of setup?

Not sure about MPLS networking. However, in your case, it should be 
simple. Just run Squid transparently on the gateway (firewall) from 
where all 4 sites get access to the internet.

Adding SquidGuard or DansGuardian or even custom ACLs will provide you 
with all the web blocking functionalities.

Thanking you...


> 
> Thanks in advance


-- 

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np




Re: [squid-users] Squid setup questions

2007-09-18 Thread Tek Bahadur Limbu

Hi Antonio,

Antonio Pereira wrote:

> Hello,
> 
> I have a pretty much redundant question but I would like some opinions
> before I venture into this possible solution.
> 
> I have 4 sites on an MPLS network that access the internet via 1
> location; at this 1 location there is already a firewall. What I would
> like to do is start blocking web sites and web traffic.
> 
> What is the best setup with squid for this type of setup? What documents
> should I read for this type of setup?

Not sure about MPLS networking. However, in your case, it should be 
simple. Just run Squid transparently on the gateway (firewall) from 
where all 4 sites get access to the internet.

Adding SquidGuard or DansGuardian or even custom ACLs will provide you 
with all the web blocking functionalities.

Thanking you...

> Thanks in advance


--

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np


Re: [squid-users] RPC over HTTPS

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 20:31 +0100, Gordon McKee wrote:

> 19: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate 
> verify failed (1/-1/0)

Your Squid is not trusting the CA that has issued the server certificate
of the web server.

As you have already exported the certificate the easiest "fix" is to
specify cafile=/path/to/certificate.pem, which will work until the
certificate is renewed.
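
With a cache_peer based setup that might look roughly like this (option
name as in the 2.6 cache_peer documentation; untested):

 cache_peer 192.168.0.11 parent 443 0 no-query originserver ssl sslcafile=/path/to/certificate.pem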

Regards
Henrik






Re: [squid-users] RPC over HTTPS

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 17:38 +0100, Gordon McKee wrote:

> After switching a bit of debugging on, I have found out that squid is not
> passing https traffic correctly.

Or your server is not accepting it from an https frontend...

> Would a cache_peer 443 entry work and drop the auto frontend?

Most likely.

Regards
Henrik




Re: [squid-users] Caching Expired Objects

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 09:25 -0700, Solomon Asare wrote:
> Hi Henrik,
> since you say so, I have rather been toying with the
> idea of saving these supposedly expired objects in an
> apache document root and using the url_rewrite of the
> squid to fetch the objects from my apache server. I
> hope the bandwidth savings will justify the bandwidth
> cost in repopulating the apache with these objects.
> It's about bandwidth!

That's pretty much the same as using refresh_pattern to give Squid a
long freshness for those objects, or actually worse as you give the
objects completely new HTTP meta information.

If I were you I would use refresh_pattern, overriding the expiry
information of these objects. Much less intrusive to the HTTP flow.
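
E.g., something like (pattern and times are placeholders):

 # treat matching objects as fresh for up to two weeks (20160 minutes),
 # overriding their expiry information
 refresh_pattern -i \.jpg$ 0 50% 20160 override-expire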

Regards
Henrik




[squid-users] FW: Java authentication under SquidNT 2.6 STABLE 14 using NTLM

2007-09-18 Thread Paul Cocker
Under the advice of the 3rd party I have added the following to
squid.conf

acl Java browser Java/1.4 Java/1.5 Java/1.6
http_access allow Java 

This appears to resolve the issue. However I would like to better
understand the above line, and whether it is an acceptable full-time
solution, or merely a workaround.

Paul Cocker
IT Systems Administrator
IT Security Officer

01628 81(6647)

TNT Post (Doordrop Media) Ltd.
1 Globeside Business Park
Fieldhouse Lane
Marlow
Bucks
SL7 1HY

-Original Message-
From: Paul Cocker 
Sent: 18 September 2007 19:52
To: squid-users@squid-cache.org
Subject: Java authentication under SquidNT 2.6 STABLE 14 using NTLM

Last week (Thursday/Friday) my organisation moved from SquidNT 2.5 to
SquidNT 2.6 STABLE 14. We use a Java applet which generates parcel tags
and prints them off. It was working fine... until today. We are running
Java 6 Update 2 and users connect using NTLM passthrough authentication,
squid looks to see that they are a member of group X before allowing
them access. Java is set up to use the same settings as the browser. We
are seeing the following in the console output:

java.lang.NullPointerException
at
sun.net.www.protocol.http.NTLMAuthentication.setHeaders(Unknown Source)
at
sun.net.www.protocol.http.HttpURLConnection.doTunneling(Unknown Source)
at
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Un
known Source)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown
Source)
at
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(Unknown
Source)
at sun.plugin.PluginURLJarFileCallBack$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.plugin.PluginURLJarFileCallBack.retrieve(Unknown Source)
at sun.net.www.protocol.jar.URLJarFile.retrieve(Unknown Source)
at sun.net.www.protocol.jar.URLJarFile.getJarFile(Unknown
Source)
at sun.net.www.protocol.jar.JarFileFactory.get(Unknown Source)
at sun.net.www.protocol.jar.JarURLConnection.connect(Unknown
Source)
at
sun.plugin.net.protocol.jar.CachedJarURLConnection.connect(Unknown
Source)
at
sun.plugin.net.protocol.jar.CachedJarURLConnection.getJarFileInternal(Un
known Source)
at
sun.plugin.net.protocol.jar.CachedJarURLConnection.getJarFile(Unknown
Source)
at sun.misc.URLClassPath$JarLoader.getJarFile(Unknown Source)
at sun.misc.URLClassPath$JarLoader.access$600(Unknown Source)
at sun.misc.URLClassPath$JarLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath$JarLoader.ensureOpen(Unknown Source)
at sun.misc.URLClassPath$JarLoader.(Unknown Source)
at sun.misc.URLClassPath$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath.getLoader(Unknown Source)
at sun.misc.URLClassPath.getLoader(Unknown Source)
at sun.misc.URLClassPath.getResource(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at sun.applet.AppletClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadCode(Unknown Source)
at sun.applet.AppletPanel.createApplet(Unknown Source)
at sun.plugin.AppletViewer.createApplet(Unknown Source)
at sun.applet.AppletPanel.runLoader(Unknown Source)
at sun.applet.AppletPanel.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

Having spoken to a chap at the company behind the software he indicated
that this is a problem with the passthrough authentication, which is
further supported by the fact that if we take a workstation which runs
this application and give it a direct connection to the Internet,
everything works just fine. Yet, as I say, we upgraded last week and it
was working fine on Monday and nothing has been changed in the config
since, though the service was restarted this morning.

I am seeing quite a few TCP_DENIED entries in the access.log file
relating to the site in question:

TCP_DENIED/407 1789 CONNECT web.site.com:443 - NONE/- text/html
TCP_DENIED/407 2035 CONNECT web.site.com:443 - NONE/- text/html

I note from the logs that where we register NONE, there should be the
username of the individual in question.

Any help would be much appreciated.

Paul Cocker
IT Systems Administrator
IT Security Officer

01628 81(6647)

TNT Post (Doordrop Media) Ltd.
1 Globeside Business Park
Fieldhouse Lane
Marlow
Bucks
SL7 1HY





Re: [squid-users] RPC over HTTPS

2007-09-18 Thread Gordon McKee

Hi

I have switched off HTTP on port 80 to make sure the HTTPS reverse proxy is 
working.  This must be the problem!!


I have exported the certificate from iis and used the instructions below:

http://www.petefreitag.com/item/16.cfm

Now I get :

2007/09/18 20:21:51| Detected DEAD Parent: opls
2007/09/18 20:21:51| SSL unknown certificate error 20 in /C=GB/ST=West 
Midlands/L=Solihull/O=Optimal Profit Ltd/OU=StartCom Free Certificate 
Member/OU=Domain validated 
only/CN=www.optimalprofit.com/[EMAIL PROTECTED]
2007/09/18 20:21:51| fwdNegotiateSSL: Error negotiating SSL connection on FD 
19: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate 
verify failed (1/-1/0)

2007/09/18 20:21:51| TCP connection to 192.168.0.11/443 failed
2007/09/18 20:23:31| Detected REVIVED Parent: opls

Has anyone got any ideas how to get the certificates talking to each other?

Many thanks

Gordon


- Original Message - 
From: "Henrik Nordstrom" <[EMAIL PROTECTED]>

To: "Gordon McKee" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, September 18, 2007 4:30 PM
Subject: Re: [squid-users] RPC over HTTPS





Re: [squid-users] LVS & Reverse Proxy Squid

2007-09-18 Thread David Lawson
I use a similar setup, what you want to do is have multiple  
squid.conf files for each instance, with each instance listening on a  
different http_port and icp_port, then point your real servers at the  
appropriate instances.  It's worked out very well for me.
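
For concreteness, the per-instance configs might differ in only a few
lines, roughly (ports and paths are assumptions):

 # instance A: squid-a.conf
 http_port 80
 icp_port 3130
 pid_filename /var/run/squid-a.pid
 cache_dir ufs /cache/a 10000 16 256

 # instance B: squid-b.conf
 http_port 81
 icp_port 3131
 pid_filename /var/run/squid-b.pid
 cache_dir ufs /cache/b 10000 16 256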


--Dave
On Sep 18, 2007, at 2:42 PM, Brad Taylor wrote:

> We use LVS (load balancer) to send traffic to multiple Squid 2.5 servers
> in reverse proxy mode. We want to put multiple Squid instances on one
> box and have successfully done that by changing: http_port 80 to http_port
> 192.168.60.7:80 in the squid.conf file. We tested against that instance of
> squid and it worked successfully. Once it is added to the LVS load balancer
> the site no longer works. I'll check with the LVS group also.





[squid-users] Java authentication under SquidNT 2.6 STABLE 14 using NTLM

2007-09-18 Thread Paul Cocker
Last week (Thursday/Friday) my organisation moved from SquidNT 2.5 to
SquidNT 2.6 STABLE 14. We use a Java applet which generates parcel tags
and prints them off. It was working fine... until today. We are running
Java 6 Update 2 and users connect using NTLM passthrough authentication,
squid looks to see that they are a member of group X before allowing
them access. Java is set up to use the same settings as the browser. We
are seeing the following in the console output:

java.lang.NullPointerException
at
sun.net.www.protocol.http.NTLMAuthentication.setHeaders(Unknown Source)
at
sun.net.www.protocol.http.HttpURLConnection.doTunneling(Unknown Source)
at
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Un
known Source)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown
Source)
at
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(Unknown
Source)
at sun.plugin.PluginURLJarFileCallBack$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.plugin.PluginURLJarFileCallBack.retrieve(Unknown Source)
at sun.net.www.protocol.jar.URLJarFile.retrieve(Unknown Source)
at sun.net.www.protocol.jar.URLJarFile.getJarFile(Unknown
Source)
at sun.net.www.protocol.jar.JarFileFactory.get(Unknown Source)
at sun.net.www.protocol.jar.JarURLConnection.connect(Unknown
Source)
at
sun.plugin.net.protocol.jar.CachedJarURLConnection.connect(Unknown
Source)
at
sun.plugin.net.protocol.jar.CachedJarURLConnection.getJarFileInternal(Un
known Source)
at
sun.plugin.net.protocol.jar.CachedJarURLConnection.getJarFile(Unknown
Source)
at sun.misc.URLClassPath$JarLoader.getJarFile(Unknown Source)
at sun.misc.URLClassPath$JarLoader.access$600(Unknown Source)
at sun.misc.URLClassPath$JarLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath$JarLoader.ensureOpen(Unknown Source)
at sun.misc.URLClassPath$JarLoader.(Unknown Source)
at sun.misc.URLClassPath$3.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.misc.URLClassPath.getLoader(Unknown Source)
at sun.misc.URLClassPath.getLoader(Unknown Source)
at sun.misc.URLClassPath.getResource(Unknown Source)
at java.net.URLClassLoader$1.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(Unknown Source)
at sun.applet.AppletClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.applet.AppletClassLoader.loadCode(Unknown Source)
at sun.applet.AppletPanel.createApplet(Unknown Source)
at sun.plugin.AppletViewer.createApplet(Unknown Source)
at sun.applet.AppletPanel.runLoader(Unknown Source)
at sun.applet.AppletPanel.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

Having spoken to a chap at the company behind the software he indicated
that this is a problem with the passthrough authentication, which is
further supported by the fact that if we take a workstation which runs
this application and give it a direct connection to the Internet,
everything works just fine. Yet, as I say, we upgraded last week and it
was working fine on Monday and nothing has been changed in the config
since, though the service was restarted this morning.

I am seeing quite a few TCP_DENIED entries in the access.log file
relating to the site in question:

TCP_DENIED/407 1789 CONNECT web.site.com:443 - NONE/- text/html
TCP_DENIED/407 2035 CONNECT web.site.com:443 - NONE/- text/html

I note from the logs that where we register NONE, there should be the
username of the individual in question.

Any help would be much appreciated.

Paul Cocker
IT Systems Administrator
IT Security Officer

01628 81(6647)

TNT Post (Doordrop Media) Ltd.
1 Globeside Business Park
Fieldhouse Lane
Marlow
Bucks
SL7 1HY







[squid-users] LVS & Reverse Proxy Squid

2007-09-18 Thread Brad Taylor
We use LVS (load balancer) to send traffic to multiple Squid 2.5 servers
in reverse proxy mode. We want to put multiple Squid instances on one
box and have successfully done that by changing: http_port 80 to http_port
192.168.60.7:80 in the squid.conf file. We tested against that instance of
squid and it worked successfully. Once it is added to the LVS load balancer
the site no longer works. I'll check with the LVS group also.


Re: [squid-users] Caching Expired Objects

2007-09-18 Thread Solomon Asare
Hi Henrik,
since you say so, I have rather been toying with the
idea of saving these supposedly expired objects in an
apache document root and using the url_rewrite of the
squid to fetch the objects from my apache server. I
hope the bandwidth savings will justify the bandwidth
cost in repopulating the apache with these objects.
It's about bandwidth!

Regards,
solomon.
 
--- Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

> On tis, 2007-09-18 at 02:55 -0700, Solomon Asare wrote:
> 
> > This is the exact problem I have that I am trying to
> > resolve, not query string issues. If only I can
> > override the lack of Last-Modified, Etag and not
> > meeting minimum_expiry_time conditions.
> 
> There would be no use doing so. All you would get is more disk I/O as
> Squid would be unable to reuse the cached copy on the next request.
> 
> Without a cache validator you MUST assign freshness to the object for it
> to be of any use.
> 
> Think of it, what do you want Squid to do with the expired object if it
> can not check if the object has changed (validator required), and you do
> not allow it to consider the object as fresh?
> 
> Regards
> Henrik



Re: [squid-users] Multi-ISP / Squid 2.6 Problem going DIRECT

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 14:50 +0200, Philipp Rusch wrote:
> Sorry to bother you, but I don't get it.
> 
> We have a SuSE 10.1 system and have our www-traffic going through squid.
> Since upgrade from 2.5 to version 2.6 STABLE5-30 (SuSE versions) we notice
> that Squid is behaving strange. After running normally a while Squid seems
> to go "DIRECT" only and the browsers on the clients seem to hang and or
> surfing is ultra slow.

Are you behind a parent proxy firewall? If so see the FAQ...

Regards
Henrik




Re: [squid-users] Allowing links with specified ports

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 03:23 -0700, Nadeem Semaan wrote:
> I have noticed that whenever a url contains a port, squid does not allow it.
> For example the webpage http://www.sns2.dns2go.com:81/helpdesk/
> Is there a way to allow all pages when a port is specified in the link?

See the Safe_Ports ACL.
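
I.e., extend the Safe_ports acl from the default squid.conf with the
extra port, for example:

 acl Safe_ports port 81
 # the default config already contains: http_access deny !Safe_ports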

Regards
Henrik




Re: [squid-users] RPC over HTTPS

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 10:00 +0100, Gordon McKee wrote:

> When I try to connect in I get the following error:
> 
> 2007/09/18 09:35:38| httpReadReply: Request not yet fully sent "RPC_IN_DATA 
> https://www.optimalprofit.com/rpc/rpcproxy.dll?nt-opro-h3.gdmckee.home:6002";

This message is seen if the response is sent by the server before the
POSTed data has been transmitted.

A guess is that the server doesn't like you, or that you are forwarding
the request to the wrong server...

What does access.log say?

Regards
Henrik




Re: [squid-users] Caching Expired Objects

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 02:55 -0700, Solomon Asare wrote:

> This is the exact problem I have that I am trying to
> resolve, not query string issues. If only I can
> override the lack of Last-Modified, Etag and not
> meeting minimum_expiry_time conditions.

There would be no use doing so. All you would get is more disk I/O as
Squid would be unable to reuse the cached copy on the next request.

Without a cache validator you MUST assign freshness to the object for it
to be of any use.

Think of it, what do you want Squid to do with the expired object if it
can not check if the object has changed (validator required), and you do
not allow it to consider the object as fresh?

Regards
Henrik




Re: [squid-users] No Error pages for transparent caching

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 14:29 +0800, Adrian Chadd wrote:

> I've thought about it. I jotted down some brainstorming ideas when
> thinking about how to handle asymmetric TCP flows during transparent
> interception - http://www.creative.net.au/node/72 - it'd possibly
> also "solve" your issues. I don't think its possible with current
> kernels btw, you'd have to modify them to do the splicing.

There is also the option of early access: delaying the SYN-ACK until the
proxy has been able to contact the intended destination..

That does not fit well with caching, however...

Regards
Henrik




Re: [squid-users] 2.5 -> 2.6 accel migration

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 16:12 +0100, Craig Skinner wrote:

> And can get inbound requests from the Internet working with the above 
> plus the following, but it kills local outbound access as all requests are sent to apache:
> 
> http_port 3128 vhost (packet filter redirect)
> cache_peer 127.0.0.1 parent 80 0 no-query originserver

See cache_peer_access/domain.
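
Something along these lines (a sketch reusing the accel_host ACL from your
config; the "apache" peer name is only an illustrative label):

cache_peer 127.0.0.1 parent 80 0 no-query originserver name=apache
cache_peer_access apache allow accel_host
cache_peer_access apache deny all

That way only requests for the accelerated site are routed to Apache, while
LAN traffic keeps going direct.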

Regards
Henrik




Re: [squid-users] Squid submit problem

2007-09-18 Thread Henrik Nordstrom
On tis, 2007-09-18 at 18:09 +0400, Fedor Trusov wrote:
> My Squid version is 2.6.STABLE11. I have a problem when I browse some pages 
> with a submit button (mail.ru, icq.com). When I press such a button I receive 
> an error message.

Are you inside a parent proxy firewall? If so see the FAQ...

Regards
Henrik




[squid-users] 2.5 -> 2.6 accel migration

2007-09-18 Thread Craig Skinner
I have a general purpose box that acts as a caching firewall for a small 
LAN, and also it reverse proxies (httpd accel) for apache on the 
localhost to the web.


I do not use transparent mode; users load a proxy.pac file.

In 2.5 my config was:

acl accel_host dst 127.0.0.1/32 an.ip.address/32
acl accel_port port 80
http_access deny to_localhost
acl our_networks src 192.168.6.0/24 a.network.address/29 127.0.0.1/32
http_access allow our_networks
http_access deny !accel_port
acl local-servers dstdomain .example.org
http_access allow local-servers
httpd_accel_host 127.0.0.1
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
forwarded_for off




In 2.6, I can get outbound caching working for the LAN with:

allow_underscore off
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
acl accel_host dst 127.0.0.1/32 an.ip.address/32
acl accel_port port 80
http_access deny to_localhost
acl our_networks src 192.168.6.0/24 a.network.address/29 127.0.0.1/32
http_access allow our_networks
http_access deny !accel_port
acl local-servers dstdomain .example.org
http_access allow local-servers
forwarded_for off


And can get inbound requests from the Internet working with the above 
plus the following, but it kills local outbound access as all requests are sent to apache:


http_port 3128 vhost (packet filter redirect)
cache_peer 127.0.0.1 parent 80 0 no-query originserver


I've followed various suggestions on 
http://wiki.squid-cache.org/SquidFaq/ReverseProxy but these seem to be 
for use with squid hosts that only work in 1 direction.



Any ideas?

Ta,
--
Craig


Re: [squid-users] store.log filling up

2007-09-18 Thread Henrik Nordstrom
On mån, 2007-09-17 at 16:30 -0500, [EMAIL PROTECTED]
wrote: 
> Could spyware or adware cause the store.log to fill up very quickly? 
> Another tech has had trouble with this in the last couple of days and was
> asking.  He says that they can clear it out and in no time (not sure how
> long, but under an hour) it is filled up and causing problems.
> 
> Here is a small excerpt of what was in it.  Why does it list all the '?' fields?
> 
> Thanks for any info.
> 
> 1190033958.390 RELEASE -1  7B1287005AF9902646FDACC9F3EA9C7F   ?   
>  ? ? ? ?/? ?/? ? ?

Looks a bit odd.. the ? is shown when the information is unknown, but these
objects were in memory, so the information should have been known, I
think..

What does access.log say?
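
As an aside, if store.log is not actually needed it can be disabled outright
with a one-line squid.conf change:

cache_store_log none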

Regards
Henrik




[squid-users] Squid submit problem

2007-09-18 Thread Fedor Trusov

My Squid version is 2.6.STABLE11. I have a problem when I browse some pages with 
a submit button (mail.ru, icq.com). When I press such a button I receive an error 
message. For example:
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://win.mail.ru/cgi-bin/auth

The following error was encountered:

* Connection to 194.67.57.200 Failed 

The system returned:

(110) Connection timed out

The remote host or network may be down. Please try the request again.

Your cache administrator is webmaster. 




Re: [squid-users] Multi-ISP / Squid 2.6 Problem going DIRECT

2007-09-18 Thread Tek Bahadur Limbu

Hi Philipp,

On Tue, 18 Sep 2007 14:50:54 +0200
Philipp Rusch <[EMAIL PROTECTED]> wrote:

> Sorry to bother you, but I don't get it.
> 
> We have a SuSE 10.1 system and have our www-traffic going through Squid.
> Since upgrading from 2.5 to version 2.6 STABLE5-30 (SuSE versions) we have
> noticed that Squid is behaving strangely. After running normally for a
> while, Squid seems to go "DIRECT" only, the browsers on the clients seem
> to hang, and/or surfing is ultra slow. This is happening every three or
> four websites we try to access; it seems to work normally for one or two,
> then the next four or five GETs are very slow again, and the circle
> begins again.
> In /var/logs/Squid/access.log I see that most of the connections are going
> DIRECT, sometimes we get connection timeouts (110), and sometimes we
> see that "somehow" a :443 is added to the URL lines. STRANGE.
> Any hints appreciated.

Since you upgraded from version 2.5 to 2.6, your squid.conf must have changed 
too. Do you have a local caching DNS server running on your Squid box? 

Posting your squid.conf and the output of "squidclient mgr:info" and "squid -v" 
might help.

If you have large ACLs, then Squid might be busy processing them rather than 
serving web requests!

Are you running Squid transparently and do you also have parent caches?

What does cache.log say?

Upgrading to the latest stable version of Squid might also help.

Check out the URL below:

http://www.squid-cache.org/Versions/v2/2.6/squid-2.6.STABLE16.tar.gz


Thanking you...



> 
> Regards from Germany,
> Mit freundlichen Grüßen
> Philipp Rusch
> 
> 


-- 

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

System Administrator 

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal
http://wlink.com.np/



[squid-users] New Squid user help required with setup

2007-09-18 Thread Abd-Ur-Razzaq Al-Haddad
Hi, 

 

I've just installed squid on OpenSuse 10.2 installation.

I have configured squid and Suse to use samba and have added it to the
Windows Active Directory network successfully.

 

The problem I am now facing is ACLs - nothing seems to work, and I can't
get the error messages that I should be getting for blocked
sites/content. 

Please can you tell me where I am going wrong?
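
For reference, a minimal blocking setup looks something like this (a sketch
only - the domain and the "localnet" ACL are placeholders, and the deny must
come before the catch-all allow):

acl blocked_sites dstdomain .example.com
http_access deny blocked_sites
http_access allow localnet
http_access deny all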

 

Thanks




Abd-Ur-Razzaq Al-Haddad 
IT Analyst 

9 Queen Street London W1J 5PE 

Tel: +44 (0)207 659 6620    Fax: +44 (0)207 659 6621
Direct: +44 (0)207 659 6632 Mob: +44 (0)7738 787881 
[EMAIL PROTECTED] 






[squid-users] Squid setup questions

2007-09-18 Thread Antonio Pereira
Hello,

I have a pretty much redundant question, but I would like some opinions
before I venture into this possible solution.

I have 4 sites on an MPLS network that access the internet via 1
location; at this 1 location there is already a firewall. What I would
like to do is start blocking web sites and filtering web traffic. 

What is the best setup with squid for this type of setup? What documents
should I read for this type of setup?

Thanks in advance







[squid-users] Multi-ISP / Squid 2.6 Problem going DIRECT

2007-09-18 Thread Philipp Rusch

Sorry to bother you, but I don't get it.

We have a SuSE 10.1 system and have our www-traffic going through Squid.
Since upgrading from 2.5 to version 2.6 STABLE5-30 (SuSE versions) we have
noticed that Squid is behaving strangely. After running normally for a
while, Squid seems to go "DIRECT" only, the browsers on the clients seem
to hang, and/or surfing is ultra slow. This is happening every three or
four websites we try to access; it seems to work normally for one or two,
then the next four or five GETs are very slow again, and the circle
begins again.
In /var/logs/Squid/access.log I see that most of the connections are going
DIRECT, sometimes we get connection timeouts (110), and sometimes we
see that "somehow" a :443 is added to the URL lines. STRANGE.
Any hints appreciated.

Regards from Germany,
Mit freundlichen Grüßen
Philipp Rusch



Re: [squid-users] squid pre-pending blank line

2007-09-18 Thread Adrian Chadd
On Tue, Sep 18, 2007, John Moylan wrote:
> Hi,
> 
> Pages served via our reverse proxy squid seem to have a blank line
> pre-pended to them. Is this normal? We are trying to validate mobile
> XHTML and this is causing us issues.

Got a test case you can stuff into bugzilla?



Adrian

> 
> Version 2.6.STABLE6 on Centos
> 
> Thanks,
> 
> J
> 
> 
> 
> On Tue, 2007-09-18 at 03:23 -0700, Nadeem Semaan wrote:
> > I have noticed that whenever a URL contains a port, Squid does not allow 
> > it.  For example, the webpage http://www.sns2.dns2go.com:81/helpdesk/
> > Is there a way to allow all pages when a port is specified in the link?
> > 
> > 
> >
> > 



-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


Re: [squid-users] squid pre-pending blank line

2007-09-18 Thread John Moylan
Hi,

Please disregard; the issue was being caused by a web server module.

J

On Tue, 2007-09-18 at 11:57 +0100, John Moylan wrote:
> Hi,
> 
> Pages served via our reverse proxy squid seem to have a blank line
> pre-pended to them. Is this normal? We are trying to validate mobile
> XHTML and this is causing us issues.
> 
> Version 2.6.STABLE6 on Centos
> 
> Thanks,
> 
> J
> 
> 
> 
> On Tue, 2007-09-18 at 03:23 -0700, Nadeem Semaan wrote:
> > I have noticed that whenever a URL contains a port, Squid does not allow 
> > it.  For example, the webpage http://www.sns2.dns2go.com:81/helpdesk/
> > Is there a way to allow all pages when a port is specified in the link?
> > 
> > 
> >
> > 



[squid-users] squid pre-pending blank line

2007-09-18 Thread John Moylan
Hi,

Pages served via our reverse proxy squid seem to have a blank line
pre-pended to them. Is this normal? We are trying to validate mobile
XHTML and this is causing us issues.

Version 2.6.STABLE6 on Centos

Thanks,

J



On Tue, 2007-09-18 at 03:23 -0700, Nadeem Semaan wrote:
> I have noticed that whenever a URL contains a port, Squid does not allow it.  
> For example, the webpage http://www.sns2.dns2go.com:81/helpdesk/
> Is there a way to allow all pages when a port is specified in the link?
> 
> 
>
> 



[squid-users] Allowing links with specified ports

2007-09-18 Thread Nadeem Semaan
I have noticed that whenever a URL contains a port, Squid does not allow it.  
For example, the webpage http://www.sns2.dns2go.com:81/helpdesk/
Is there a way to allow all pages when a port is specified in the link?


   



Re: [squid-users] Caching Expired Objects

2007-09-18 Thread Solomon Asare
Adrian,
sorry, but this is not a query (?) issue. I think
Henrik explained why I am not caching. Just in case you
did not read his response, I repeat it for your info:
a) The object must have a cache validator
(Last-Modified or ETag). If there is no cache
validator then the response must be fresh for at least
minimum_expiry_time to get cached, this to avoid
wasting disk I/O 

This is the exact problem I have that I am trying to
resolve, not query string issues. If only I could
override the lack of Last-Modified, ETag and not
meeting minimum_expiry_time conditions.

Thanks,
solomon.

--- Adrian Chadd <[EMAIL PROTECTED]> wrote:

> On Tue, Sep 18, 2007, Solomon Asare wrote:
> > Hi Henrik,
> > thanks for your insightful response. However, the
> > object is a .flv file that hasn't changed in
> months.
> > The origin server certainly doesn't want the
> object
> > cached, but I want to. Any leads that can help me
> > achieve this?
> 
> * set your refresh_patterns right, you can override almost all the
>   relevant headers in there;
> * if the URL has a ? in it then you need to look at the cache/no_cache
>   directives
> * if in doubt, compile with the option to log request/reply headers (I
>   forget what it is, ./configure --help will tell you) and take a look
>   at exactly what headers they're sending back.
> 
> 
> 
> 
> Adrian
> 
> > Regards,
> > solomon.
> > 
> > --- Henrik Nordstrom <[EMAIL PROTECTED]>
> > wrote:
> > 
> > > On mån, 2007-09-17 at 11:55 -0700, Solomon Asare
> > > wrote:
> > > > Hi Amos,
> > > > I am not sure if refresh_pattern is the sole
> > > > determinant in caching an object, that is if
> it
> > > has
> > > > any influence at all.
> > > 
> > > It has influence, both directly by assigning
> > > freshness information when
> > > there is none, and indirectly by overriding
> various
> > > HTTP controls..
> > > 
> > > Requirements to cache stale objects:
> > > 
> > > a) The object must have a cache validator
> > > (Last-Modified or ETag). If
> > > there is no cache validator then the response
> must
> > > be fresh for at least
> > > minimum_expiry_time to get cached, this to avoid
> > > wasting disk I/O for
> > > caching content which can not be reused.
> > > 
> > > b) There must not be other headers preventing it
> > > from getting cached.
> > > refresh_pattern can override most of these if
> > > needed.
> > > 
> > > > I am not discussing getting a
> > > > HIT for a cached object, but rather caching an
> > > expired
> > > > object from an origin server. If this object
> is
> > > > expired, by say 60 seconds before being served
> > > from
> > > > the origin server, how do  I cache it? Date
> and
> > > > Last-Modified dates are also not set.
> > > 
> > > If there is no Last-Modified and no ETag then
> it's
> > > useless to cache an
> > > expired object, as it can not be reused on any
> > > future request and all
> > > you get is extra disk I/O for writing the object
> > > out.
> > > 
> > > A cache validator (Last-Modified or ETag) is
> > > required to be able to
> > > verify with the origin server if an expired
> object
> > > is still valid or
> > > not. Without a cache validator there is nothing
> to
> > > relate to and there
> > > is no other choice than to fetch the complete
> object
> > > again when
> > > expired..
> > > 
> > > Regards
> > > Henrik
> > > 
> 
> -- 
> - Xenion - http://www.xenion.com.au/ - VPS Hosting -
> Commercial Squid Support -
> - $25/pm entry-level bandwidth-capped VPSes
> available in WA -
> 



Re: [squid-users] Caching Expired Objects

2007-09-18 Thread Adrian Chadd
On Tue, Sep 18, 2007, Solomon Asare wrote:
> Hi Henrik,
> thanks for your insightful response. However, the
> object is a .flv file that hasn't changed in months.
> The origin server certainly doesn't want the object
> cached, but I want to. Any leads that can help me
> achieve this?

* set your refresh_patterns right, you can override almost all the relevant
  headers in there;
* if the URL has a ? in it then you need to look at the cache/no_cache
  directives
* if in doubt, compile with the option to log request/reply headers (I forget
  what it is, ./configure --help will tell you) and take a look at exactly what
  headers they're sending back.
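
As a concrete sketch of the first point (untested; the pattern and times are
only examples), a refresh_pattern that assigns freshness to such objects
could look like:

refresh_pattern -i \.flv$ 10080 90% 43200 override-expire ignore-reload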




Adrian

> Regards,
> solomon.
> 
> --- Henrik Nordstrom <[EMAIL PROTECTED]>
> wrote:
> 
> > On mån, 2007-09-17 at 11:55 -0700, Solomon Asare
> > wrote:
> > > Hi Amos,
> > > I am not sure if refresh_pattern is the sole
> > > determinant in caching an object, that is if it
> > has
> > > any influence at all.
> > 
> > It has influence, both directly by assigning
> > freshness information when
> > there is none, and indirectly by overriding various
> > HTTP controls..
> > 
> > Requirements to cache stale objects:
> > 
> > a) The object must have a cache validator
> > (Last-Modified or ETag). If
> > there is no cache validator then the response must
> > be fresh for at least
> > minimum_expiry_time to get cached, this to avoid
> > wasting disk I/O for
> > caching content which can not be reused.
> > 
> > b) There must not be other headers preventing it
> > from getting cached.
> > refresh_pattern can override most of these if
> > needed.
> > 
> > > I am not discussing getting a
> > > HIT for a cached object, but rather caching an
> > expired
> > > object from an origin server. If this object is
> > > expired, by say 60 seconds before being served
> > from
> > > the origin server, how do  I cache it? Date and
> > > Last-Modified dates are also not set.
> > 
> > If there is no Last-Modified and no ETag then it's
> > useless to cache an
> > expired object, as it can not be reused on any
> > future request and all
> > you get is extra disk I/O for writing the object
> > out.
> > 
> > A cache validator (Last-Modified or ETag) is
> > required to be able to
> > verify with the origin server if an expired object
> > is still valid or
> > not. Without a cache validator there is nothing to
> > relate to and there
> > is no other choice than to fetch the complete object
> > again when
> > expired..
> > 
> > Regards
> > Henrik
> > 

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


[squid-users] RPC over HTTPS

2007-09-18 Thread Gordon McKee

Hi

I have got the vast majority of this working by reading the FAQ etc.  I have 
set up RPC over HTTP on SBS 2003 boxes before, so I am confident that the 
Exchange server is set up correctly.


When I try to connect in I get the following error:

2007/09/18 09:35:38| httpReadReply: Request not yet fully sent "RPC_IN_DATA 
https://www.optimalprofit.com/rpc/rpcproxy.dll?nt-opro-h3.gdmckee.home:6002"
2007/09/18 09:35:38| httpReadReply: Request not yet fully sent "RPC_OUT_DATA 
https://www.optimalprofit.com/rpc/rpcproxy.dll?nt-opro-h3.gdmckee.home:6002"


Does any one know how to resolve this?

My squid.conf looks as follows:
http_port proxy.gdmckee.home:3128
http_port 82..17:80 vhost vport

https_port 443 cert=/usr/local/etc/squid/op7.crt 
key=/usr/local/etc/squid/pre.key cafile=/usr/local/etc/squid/crt.crt 
defaultsite=www.optimalprofit.com


### Optimal Profit
cache_peer 192.168.0.11 parent 80 0 no-query originserver login=PASS 
name=opl front-end-https=auto

cache_peer_domain opl www.optimalprofit.com

acl hosted_domains dstdomain .optimalprofit.com

http_access allow hosted_domains
http_access allow our_networks

extension_methods RPC_IN_DATA RPC_OUT_DATA

Here is the output when squid starts:

2007/09/16 17:15:18| Reconfiguring Squid Cache (version 2.6.STABLE14)...
2007/09/16 17:15:18| FD 9 Closing HTTP connection
2007/09/16 17:15:18| FD 11 Closing HTTP connection
2007/09/16 17:15:18| FD 12 Closing HTTP connection
2007/09/16 17:15:18| FD 13 Closing ICP connection
2007/09/16 17:15:18| FD 14 Closing HTCP socket
2007/09/16 17:15:18| Initialising SSL.
2007/09/16 17:15:18| Using certificate in /usr/local/etc/squid/op*7.crt
2007/09/16 17:15:18| Using private key in /usr/local/etc/squid/p*.key
2007/09/16 17:15:18| Cache dir '/usr/local/squid/cache' size remains 
unchanged at 8388608 KB
2007/09/16 17:15:18| Extension method 'RPC_IN_DATA' added, enum=30
2007/09/16 17:15:18| Extension method 'RPC_OUT_DATA' added, enum=31
2007/09/16 17:15:18| Initialising SSL.
2007/09/16 17:15:18| User-Agent logging is disabled.
2007/09/16 17:15:18| Referer logging is disabled.
2007/09/16 17:15:18| DNS Socket created at 0.0.0.0, port 49795, FD 8
2007/09/16 17:15:18| Adding domain gdmckee.home from /etc/resolv.conf
2007/09/16 17:15:18| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2007/09/16 17:15:18| Accepting proxy HTTP connections at 192.168.0.1, port 
3128, FD 9.
2007/09/16 17:15:18| Accepting accelerated HTTP connections at 82.36.186.17, 
port 80, FD 11.
2007/09/16 17:15:18| Accepting HTTPS connections at 0.0.0.0, port 443, FD 12.

2007/09/16 17:15:18| Accepting ICP messages at 0.0.0.0, port 3130, FD 13.
2007/09/16 17:15:18| Accepting HTCP messages on port 4827, FD 14.
2007/09/16 17:15:18| WCCP Disabled.
2007/09/16 17:15:18| Configuring Parent 192.168.0.11/80/0
2007/09/16 17:15:18| Configuring Parent 192.168.0.1/80/0
2007/09/16 17:15:18| Loaded Icons.
2007/09/16 17:15:18| Ready to serve requests.


Any help would be much appreciated.

Many thanks

Gordon 





Re: [squid-users] Caching Expired Objects

2007-09-18 Thread Solomon Asare
Hi Henrik,
thanks for your insightful response. However, the
object is a .flv file that hasn't changed in months.
The origin server certainly doesn't want the object
cached, but I want to. Any leads that can help me
achieve this?

Regards,
solomon.

--- Henrik Nordstrom <[EMAIL PROTECTED]>
wrote:

> On mån, 2007-09-17 at 11:55 -0700, Solomon Asare
> wrote:
> > Hi Amos,
> > I am not sure if refresh_pattern is the sole
> > determinant in caching an object, that is if it
> has
> > any influence at all.
> 
> It has influence, both directly by assigning
> freshness information when
> there is none, and indirectly by overriding various
> HTTP controls..
> 
> Requirements to cache stale objects:
> 
> a) The object must have a cache validator
> (Last-Modified or ETag). If
> there is no cache validator then the response must
> be fresh for at least
> minimum_expiry_time to get cached, this to avoid
> wasting disk I/O for
> caching content which can not be reused.
> 
> b) There must not be other headers preventing it
> from getting cached.
> refresh_pattern can override most of these if
> needed.
> 
> > I am not discussing getting a
> > HIT for a cached object, but rather caching an
> expired
> > object from an origin server. If this object is
> > expired, by say 60 seconds before being served
> from
> > the origin server, how do  I cache it? Date and
> > Last-Modified dates are also not set.
> 
> If there is no Last-Modified and no ETag then it's
> useless to cache an
> expired object, as it can not be reused on any
> future request and all
> you get is extra disk I/O for writing the object
> out.
> 
> A cache validator (Last-Modified or ETag) is
> required to be able to
> verify with the origin server if an expired object
> is still valid or
> not. Without a cache validator there is nothing to
> relate to and there
> is no other choice than to fetch the complete object
> again when
> expired..
> 
> Regards
> Henrik
> 



RE: [squid-users] Compiling Squid to auth on ldap server

2007-09-18 Thread Paul Cocker
Just a reminder to copy in the squid-users group, otherwise you're not
going to get much of a response ;) 
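
For the archives, a minimal LDAP basic-auth sketch for squid.conf (the helper
path, directory host, base DN and filter below are placeholders - adjust them
for your directory):

auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -b "dc=example,dc=com" -f "uid=%s" ldap.example.com
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

acl ldap_users proxy_auth REQUIRED
http_access allow ldap_users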


Paul Cocker
IT Systems Administrator
IT Security Officer

01628 81(6647)

TNT Post (Doordrop Media) Ltd.
1 Globeside Business Park
Fieldhouse Lane
Marlow
Bucks
SL7 1HY

-Original Message-
From: Mauricio Paulo de Sousa [mailto:[EMAIL PROTECTED] 
Sent: 17 September 2007 17:50
To: Paul Cocker
Subject: Re: [squid-users] Compiling Squid to auth on ldap server

I'm using the latest squid stable version on slackware 11.0



2007/9/17, Paul Cocker <[EMAIL PROTECTED]>:
> While I can't help with the compile side of things, using SquidNT 
> myself, I can lend a hand with the LDAP authentication within an AD 
> environment.
>
> Using Squid 2.6 STABLE 14 we use the following lines (filed in the 
> usual
> places):
>
> # Where InternetAccess is a group in Active Directory and GProxyUsers 
> is a name we give the group for reference within squid.conf acl 
> GProxyUsers external NT_global_group InternetAccess
>
> # Before http_access deny all
> http_access allow password GProxyUsers
>
> # If you're using NTLM you'll need something like the following 
> auth_param ntlm program D:/squid2614/libexec/mswin_ntlm_auth.exe
> auth_param ntlm children 5
> auth_param ntlm keep_alive on
> # If not you'll need to list your auth_param of choice
>
> Hope this helps :)
>
> Paul Cocker
> IT Systems Administrator
> IT Security Officer
>
> 01628 81(6647)
>
> TNT Post (Doordrop Media) Ltd.
> 1 Globeside Business Park
> Fieldhouse Lane
> Marlow
> Bucks
> SL7 1HY
>
>
> -Original Message-
> From: Mauricio Paulo de Sousa [mailto:[EMAIL PROTECTED]
> Sent: 17 September 2007 15:14
> To: squid-users@squid-cache.org
> Subject: [squid-users] Compiling Squid to auth on ldap server
>
> Hello all,
> I would like to compile my squid to do authentication against an LDAP 
> server, can anybody help me?
>
> If possible, show me how to define the authentication ACL.
> thanks :D
>
>
> --
> Mauricio Paulo de Sousa
>
>
>
>
>
>


--
Mauricio Paulo de Sousa



