RE: [squid-users] proxy errors

2004-07-22 Thread Elsen Marc

 
 
 Hi All
 
 On one of our biggest proxy servers we have the following errors:
 
 TCP_SWAPFAIL_MISS/304
 
 What could cause this?
 

  TCP_SWAPFAIL_MISS

The object was believed to be in the cache, but could not be accessed. 


 Possibly due to swap.state corruption from a previous unexpected shutdown
of squid, for example.
It can be ignored if it is only seen sporadically, since squid will re-fetch the object.

M.


[squid-users] Remove Squid signature (%S) on FTP indexes

2004-07-22 Thread lists
Hi,

I am trying to figure out how to remove or change the Squid signature that is
appended to the bottom of FTP indexes (same as what is appended to error pages
but I know how to get rid of those).

eg: if I visit ftp://ftp.download.com/ I get the following signature at the
bottom of my page:

Generated Thu, 22 Jul 2004 08:45:45 GMT by xxx.xxx.xxx (squid/2.5.STABLE6)

Any help with this would be appreciated.

Regards,
Rob Hadfield


[squid-users] load balancing

2004-07-22 Thread Gert Brits
Hi All

Please could I have either some ideas or solutions to the following:

I work for a BIG company, providing internet services for many customers.

I am planning to install a load balancing plan for our cache server, which
will obviously load balance, but will also have a redundant feature.

What can I do this with?

Basically have two servers service the caching, but they must look like one
server ... I am sure I sound stupid, but just to make sure.


Regards

Gert Brits



RE: [squid-users] Re: Implementation issues

2004-07-22 Thread Chris Perreault
Here at my work all employees sign a form stating that they realize email,
internet usage, file storage systems, etc are for work purposes and that
none of it is truly private. That email will not be directly read except for
troubleshooting/maintenance issues unless there are illegal or non-condoned
company activities taking place.

All traffic encrypted? Encryption puts a burden on the servers and if all
the traffic is being encrypted, then what exactly *is* being monitored?

Thoughts? Set your PC's gateway to your Apache server, visit www.google.com,
get your disclaimer page to come up, sign in, and then have it make you end
up at google. If this works, insert Squid afterwards. (one piece at a time,
see what works, what doesn't)

Chris Perreault


-Original Message-
From: Rick Whitley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 21, 2004 5:37 PM
To: [EMAIL PROTECTED]; Chris Perreault
Subject: RE: [squid-users] Re: Implementation issues


The disclaimer page gives us the opportunity to inform the users that their
traffic is being monitored. It's the law. I am open to suggestions as to a
better way to accomplish this. I agree it's ugly. The disclaimer page would
have a login link that went to the 2nd proxy. On top of everything, they
want the traffic encrypted. I need to get a workable process in place first.
What if the first stop was a web server?

client - Apache - Proxy - internet
   |
   signon

Here Apache would trap the traffic unless authenticated (session variable?).
We would have apache be the gateway.

Thoughts?

rick...
Rom.5:8

 Chris Perreault [EMAIL PROTECTED] 7/21/2004 3:55:21 PM

First attempt:

client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet

User clicks on a page, sending another request, which then will do
this:

client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet

If proxy1 sends all traffic to the signon page, then all traffic passes to
it. All of it. Each request is coming from the browser, not the signon
server, so each request hits proxy1 first and gets resent to the logon
page. Ok...so if a session variable knows the user logged in, then the
signon server can redirect the request for the website through proxy2. All
requests would pass through proxy1--signon server--proxy2. If that's
acceptable, it looks like it would work. You'd want to redo the session
variable on each hit, or else you could have the user relogging in every 20
minutes or so. If they were doing anything on the web (filling in a form or
taking a test) and didn't submit another request for a while, their session
could time out and they sure wouldn't be happy. It looks doable, but it
looks ugly too.

Once a user has signed up, you want them to get the disclaimer page every
time they fire up a web browser?

It would be much easier to have them just sign an internet use policy and
in exchange tell them their username and password:)



Chris Perreault
Webmaster/MCSE
The Wiremold Company
West Hartford, CT 06010
860-233-6251 ext 3426


-Original Message-
From: Rick Whitley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 21, 2004 3:44 PM
To: [EMAIL PROTECTED]; Chris Perreault
Subject: RE: [squid-users] Re: Implementation issues


What we are thinking of doing is:

client - Proxy - sign-on/disclaimer - Proxy - ldap - Internet

The 1st proxy will be open, require no auth, and redirect all traffic to
the sign-on/disclaimer site. The user has the option to Activate account or Visit
Internet. The Visit Internet link will then go to the 2nd proxy which
will have proxy_auth enabled for ldap. The client will be prompted with a
userid and passwd dialog to be authenticated and sent to the internet.

Does this seem possible? I have the gateway for the segment set up to be the
1st proxy so I may still have a loop issue. Is there an automated way to
modify the gateway on client systems? Like a forced dhcp without the
request. If we could make this behave as two separate segments (seg1 = 1st
proxy with no auth required, seg2 = 2nd proxy with auth
required), the client would always start in seg1 and have to request seg2. Any
thoughts or suggestions are greatly appreciated!!

thanks

rick...
Rom.5:8

 Chris Perreault [EMAIL PROTECTED] 7/21/2004 2:02:23
PM

Doh, there is a referer_regex ACL type, but I don't see that helping here
anyway.

1) Browser wants to visit website.com and hits the proxy.
2) Request gets redirected to the signup_disclaimer.htm website.
3) User signs up and/or logs in.
4) The page that verifies the username/password then redirects to the
   originally requested site with the login information stored in the
   correct header (via the proxy).
5) The proxy sees the referring site and proxies the request.

A better 5) would be: Squid sees it has an already-authenticated request, so it
passes it through. Otherwise the user would have to log in, on the
disclaimer page, every time they clicked a link. Open a new window/session,
though, and then the user would have to 

[squid-users] RE : [squid-users] load balancing

2004-07-22 Thread mathieu . millet
 -Original Message-
 From: Gert Brits [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, 22 July 2004 13:40
 To: [EMAIL PROTECTED] Org
 Subject: [squid-users] load balancing
 
 
 Hi All
 
 Please could I have either some ideas or solutions to the following:
 
 I work for a BIG company, providing internet services for 
 many customers.
 
 I am planning to install a load balancing plan for our cache 
 server, which will obviously load balance, but will also have 
 a redundant feature.
 
 What can I do this with?

You can do this with either a software solution like LVS (Linux Virtual Server),
http://www.linuxvirtualserver.org, or dedicated load-balancing equipment such
as Radware Cache Server Director (or Web Server Director), F5 BIG-IP, Nortel
Alteon, or Cisco (which I wouldn't recommend, since they are really not good
in this market).

The idea is that you have a piece of equipment (either special switches or
Linux servers) that dispatches the incoming requests to the real servers. The
load balancer knows the state of each real server, so it avoids dispatching
requests to an overloaded server.

And to avoid a single point of failure, the dispatcher itself is doubled in an
active/passive setup.

 Basically have two servers service the caching, but they must 
 look like one server ... I am sure I sound stupid, but just 
 to make sure.
 

I hope it will help you.

Sincerely yours, Mathieu.


RE: [squid-users] load balancing

2004-07-22 Thread Chris Perreault
Do a search for round-robin DNS, multiple A records, clustering,
redundancy, and failover. That should give good coverage of the techniques.

Also...visiting www.squid-cache.org and searching for failover will help.

Chris Perreault

-Original Message-
From: Gert Brits [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 22, 2004 7:40 AM
To: [EMAIL PROTECTED] Org
Subject: [squid-users] load balancing


Hi All

Please could I have either some ideas or solutions to the following:

I work for a BIG company, providing internet services for many customers.

I am planning to install a load balancing plan for our cache server, which
will obviously load balance, but will also have a redundant feature.

What can I do this with?

Basically have two servers service the caching, but they must look like one
server ... I am sure I sound stupid, but just to make sure.


Regards

Gert Brits


[squid-users] forward intranet traffic to a second squid proxy

2004-07-22 Thread PraktikantMPI Toni (MPI)

Hi !

Is it possible to configure a squid proxy to forward all http traffic directed to the 
intranet to a second squid proxy located in the same network ?

Thanks for any answer.

Jens Bauer


RE: [squid-users] RE : [squid-users] load balancing

2004-07-22 Thread Gert Brits
Awesome help thanks

I knew about the hardware way ... let me investigate the Linux way ;-)


Regards

Gert Brits
Senior Engineer
Technology Concepts
Tel +27 11 803 2169
Fax +27 11 803 2189
http://www.techconcepts.co.za

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 22, 2004 1:55 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] RE : [squid-users] load balancing


 -Original Message-
 From: Gert Brits [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 22 July 2004 13:40
 To: [EMAIL PROTECTED] Org
 Subject: [squid-users] load balancing


 Hi All

 Please could I have either some ideas or solutions to the following:

 I work for a BIG company, providing internet services for
 many customers.

 I am planning to install a load balancing plan for our cache
 server, which will obviously load balance, but will also have
 a redundant feature.

 What can I do this with?

You can do this with either a software solution like LVS (Linux Virtual Server),
http://www.linuxvirtualserver.org, or dedicated load-balancing equipment such
as Radware Cache Server Director (or Web Server Director), F5 BIG-IP, Nortel
Alteon, or Cisco (which I wouldn't recommend, since they are really not good
in this market).

The idea is that you have a piece of equipment (either special switches or
Linux servers) that dispatches the incoming requests to the real servers. The
load balancer knows the state of each real server, so it avoids dispatching
requests to an overloaded server.

And to avoid a single point of failure, the dispatcher itself is doubled in an
active/passive setup.

 Basically have two servers service the caching, but they must
 look like one server ... I am sure I sound stupid, but just
 to make sure.


I hope it will help you.

Sincerely yours, Mathieu.



RE: [squid-users] forward intranet traffic to a second squid proxy

2004-07-22 Thread Van Hoorenbeeck, Peter (RST/Hammerstone EMEA)
Hello Toni,

Did you consider using cache_peer in squid.conf?  I might be
interpreting your problem too simplistically; if so, please provide more
details.

Peter

 -Original Message-
 From: PraktikantMPI Toni (MPI) [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 22 July 2004 14:05
 To: squid mailingliste (E-Mail)
 Subject: [squid-users] forward intranet traffic to a second squid
proxy
 
 
 Hi !
 
 Is it possible to configure a squid proxy to forward all http traffic
 directed to the intranet to a second squid proxy located in the same
 network ?
 
 Thanks for any answer.
 
 Jens Bauer



RE: [squid-users] forward intranet traffic to a second squid proxy

2004-07-22 Thread Elsen Marc
 
 
 
 Hi !
 
 Is it possible to configure a squid proxy to forward all http 
 traffic directed to the intranet to a second squid proxy 
 located in the same network ?
 
  Check :

   cache_peer_domain
   cache_peer_access

 directives and comments in squid.conf.default.
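
 For illustration, a minimal sketch of how those directives can be combined
 with cache_peer (the peer name and intranet domain below are placeholders,
 not taken from the question):

   # second proxy that should handle intranet traffic (hypothetical name/port)
   cache_peer intranet-proxy.example.lan parent 3128 0 no-query
   cache_peer_domain intranet-proxy.example.lan .intranet.example.lan

   # send intranet requests through the peer instead of going direct
   acl intranet dstdomain .intranet.example.lan
   never_direct allow intranet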

 M.


[squid-users] High CPU

2004-07-22 Thread Scott Phalen
I have searched the archives and can't seem to find a solution to my issue.
My server sits at 92% or higher for CPU utilization 24 hours a day.  During
my peak hours it is at 99% with a load of about 35 requests per second.  My
plan is to convert this to reiserfs in the next month and spread the cache
over both disks instead of using RAID0.

I am using a Dell PowerEdge 650 with a P4 3.0GHz processor
2 - 145Gig 15K RPM hard disks - striped
2 Gig RAM
Linux 2.1AS
Squid-2.5STABLE6

In my squid.conf I have only 4 regex statements (I've seen in the archives that 20+
can cause issues).  Many thanks to anyone who can look at this and possibly
solve my issue!

Below is my squid.conf minus the ACLs and cachemgr info:

cache_mem 128 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 128 MB
minimum_object_size 0 KB
maximum_object_size_in_memory 8 KB
ipcache_size 0
ipcache_low 90
ipcache_high 95
fqdncache_size 0
cache_replacement_policy lru
memory_replacement_policy lru
cache_dir diskd /big/squid/var/cache 13000 16 256
cache_access_log /big/squid/var/logs/access.log
cache_store_log none
log_ip_on_direct on
mime_table /big/squid/etc/mime.conf
log_fqdn off
client_netmask 255.255.255.255
diskd_program /big/squid/libexec/diskd
unlinkd_program /big/squid/libexec/unlinkd
request_header_max_size 10 KB
request_body_max_size 0 KB
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
quick_abort_min 0 KB
quick_abort_max 0 KB
range_offset_limit 0 KB
negative_ttl 30 seconds
connect_timeout 2 minutes
peer_connect_timeout 30 seconds
half_closed_clients off
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_single_host off
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
logfile_rotate 5
memory_pools on
memory_pools_limit 100 MB
forwarded_for off
store_avg_object_size 13 KB
store_objects_per_bucket 20
client_db on
wccp_router 172.17.1.1
wccp_version 4
coredump_dir /big/squid/var/cache


Squid Object Cache: Version 2.5.STABLE6
Start Time: Thu, 22 Jul 2004 11:34:38 GMT
Current Time: Thu, 22 Jul 2004 12:50:53 GMT
Connection information for squid:
Number of clients accessing cache:  284
Number of HTTP requests received:   58354
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   765.4
Average ICP messages per minute since start:0.0
Select loop called: 188047689 times, 0.024 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 53.1%, 60min: 59.1%
Byte Hit Ratios:5min: 47.9%, 60min: 26.5%
Request Memory Hit Ratios:  5min: 3.9%, 60min: 1.8%
Request Disk Hit Ratios:5min: 35.7%, 60min: 22.6%
Storage Swap size:  11980796 KB
Storage Mem size:   18396 KB
Mean Object Size:   12.75 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.01745  0.00091
Cache Misses:  0.15048  0.14252
Cache Hits:0.00091  0.00091
Near Hits: 0.10281  0.10281
Not-Modified Replies:  0.0  0.0
DNS Lookups:   0.00091  0.00091
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:4574.613 seconds
CPU Time:   4456.430 seconds
CPU Usage:  97.42%
CPU Usage, 5 minute avg:99.82%
CPU Usage, 60 minute avg:   99.65%
Process Data Segment Size via sbrk(): 114795 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 474
Memory usage for squid via mallinfo():
Total space in arena:  114795 KB
Ordinary blocks:   114774 KB 59 blks
Small blocks:   0 KB  0 blks
Holding blocks:  2108 KB  6 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  21 KB
Total in use:  116882 KB 100%
Total free:21 KB 0%
Total size:116903 KB
Memory accounted for:
Total accounted:93135 KB
memPoolAlloc calls: 200223317
memPoolFree calls: 197304376
File descriptor usage for squid:
Maximum number of file descriptors:   1024
Largest file desc currently in use:154
Number of file desc currently in use:  132
Files queued for open:   0
Available number of file descriptors:  892
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
940096 StoreEntries
  4179 StoreEntries with MemObjects
  4171 Hot Object Cache Items
939946 on-disk objects


Regards,
Scott



RE: [squid-users] Re: Implementation issues

2004-07-22 Thread Merton Campbell Crockett
Out of curiosity, why is the notification that computer and network usage 
will be monitored being deferred until this late in the game?

The monitoring began the moment the individual logged into the workstation 
or laptop and requested to use the Windows NOS.  The pop-up authentication 
window can be modified to display the notification message and inform the 
user that he is agreeing to be monitored by submitting his credentials.

At one site where I did some work, the login process was modified so that 
a separate pop-up window was used.  If you clicked on the accept button, 
the authentication window was displayed.

Merton Campbell Crockett


On Thu, 22 Jul 2004, Chris Perreault wrote:

 Here at my work all employees sign a form stating that they realize email,
 internet usage, file storage systems, etc are for work purposes and that
 none of it is truly private. That email will not be directly read except for
 troubleshooting/maintenance issues unless there are illegal or non-condoned
 company activities taking place.
 
 All traffic encrypted? Encryption puts a burden on the servers and if all
 the traffic is being encrypted, then what exactly *is* being monitored?
 
 Thoughts? Set your PC's gateway to your Apache server, visit www.google.com,
 get your disclaimer page to come up, sign in, and then have it make you end
 up at google. If this works, insert Squid afterwards. (one piece at a time,
 see what works, what doesn't)
 
 Chris Perreault
 
 
 -Original Message-
 From: Rick Whitley [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, July 21, 2004 5:37 PM
 To: [EMAIL PROTECTED]; Chris Perreault
 Subject: RE: [squid-users] Re: Implementation issues
 
 
 The disclaimer page gives us the opportunity to inform the users that their
 traffic is being monitored. It's the law. I am open to suggestions as to a
 better way to accomplish this. I agree its ugly. The disclaimer page would
 have a login link that went to the 2nd proxy. On top of everything, they
 want the traffic encrypted. I need to get a workable process working first.
 What if the first stop was a web server?
 
 client - Apache - Proxy - internet
|
signon
 
 Here apache would trap the traffic unless authenticated(session variable?).
 We would have apache be the gateway.
 
 Thoughts?
 
 rick...
 Rom.5:8
 
  Chris Perreault [EMAIL PROTECTED] 7/21/2004 3:55:21 PM
 
 First attempt:
 
 client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet
 
 User clicks on a page, sending another request, which then will do
 this:
 
 client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet
 
 If proxy1 sends all traffic to the signon page, then all traffic passes to
 it. All of it. Each request is coming from the browser, not the signon
 server, so each request hits the proxy1 first and gets resent to the logon
 page. Ok...so if a session variable knows the user logged in, then the
 signon server can redirect the request for the website through proxy2. All
 requests would pass through proxy1--signon server--proxy2. If that's
 acceptable, it looks like it would work. You'd want to redo the session
 variable on each hit, or else you could have the user relogging in every 20
 minutes or so. If they were doing anything on the web (filling in a form or
 taking a test) and didn't submit another request for a while, their session
 could time out and they sure wouldn't be happy. It looks doable, but it
 looks ugly too.
 
 Once a user has signed up, you want them to get the disclaimer page every
 time they fire up a web browser?
 
 It would be much easier to have them just sign an internet use policy and
 in exchange tell them their username and password:)
 
 
 
 Chris Perreault
 Webmaster/MCSE
 The Wiremold Company
 West Hartford, CT 06010
 860-233-6251 ext 3426
 
 
 -Original Message-
 From: Rick Whitley [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, July 21, 2004 3:44 PM
 To: [EMAIL PROTECTED]; Chris Perreault
 Subject: RE: [squid-users] Re: Implementation issues
 
 
 What we are thinking of doing is:
 
 client - Proxy - sign-on/disclaimer - Proxy - ldap - Internet
 
 The 1st proxy will be open and require no auth and redirect all traffic to
 the sign-on/disclaim site. User has option to Activate account or visit
 Internet. The Visit Internet link will then go to the 2nd proxy which
 will have proxy_auth enabled for ldap. The client will be prompted with a
 userid and passwd dialog to be authenticated and sent to the internet.
 
 Does this seem possible? I have the gateway for the segment set up to be the
 1st proxy so I may still have a loop issue. Is there an automated way to
 modify the gateway on client systems? Like a forced dhcp without the
 request. If we could make this behave as two separate segments ( seg1 = 1st
 proxy with no auth required, seg2 = 2nd proxy with auth
 required.) client would always start in seg1 and have to request seg2. Any
 thoughts or suggestions are greatly appreciated!!
 
 

AW: [squid-users] forward intranet traffic to a second squid proxy

2004-07-22 Thread PraktikantMPI Toni (MPI)

That's what I was looking for.

Thanks, also to Peter and Chris.

Jens Bauer

-Original Message-
From: Elsen Marc [mailto:[EMAIL PROTECTED]
Sent: Thursday, 22 July 2004 14:17
To: PraktikantMPI Toni (MPI); squid mailingliste (E-Mail)
Subject: RE: [squid-users] forward intranet traffic to a second squid
proxy


 
 
 
 Hi !
 
 Is it possible to configure a squid proxy to forward all http 
 traffic directed to the intranet to a second squid proxy 
 located in the same network ?
 
  Check :

   cache_peer_domain
   cache_peer_access

 directives and comments in squid.conf.default.

 M.


[squid-users] ICAP Support

2004-07-22 Thread Yohann Eyer
Hi All,

I want to enable ICAP support in Squid (2.5.STABLE6-20040722 version).
I compiled it with ./configure ... --enable-icap-support ; make all ;
make install
No error, all is working correctly.

When I add the following lines at the end of my squid.conf
icap_enable on
icap_mode both
icap_service service1 reqmod_precache 0 icap://localhost:1344/reqmod
icap_service service2 respmod_preache 0
icap://localhost:1344/respmod
icap_class class1 service1 service2
icap_access class1 allow all
and run Squid, I receive this error :
2004/07/22 15:41:30| parseConfigFile: line 567 unrecognized:
'icap_enable on'
..

So Squid runs properly, but without ICAP support.

Must I patch or recompile my Squid installation? What should I do?


I have tried to patch the source with icap-2.5.patch.
$SQUID-INSTALL-DIR# patch -p1 < /tmp/icap-2.5.patch 

but I get this error message:
patching file acconfig.h
patching file configure.in
Hunk #1 succeeded at 438 (offset 5 lines).
Hunk #2 succeeded at 1749 (offset 28 lines).
Hunk #3 FAILED at 1782.
1 out of 3 hunks FAILED -- saving rejects to file configure.in.rej
can't find file to patch at input line 69
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--
|Index: squid/errors/list
|diff -u squid/errors/list:1.1.1.1 squid/errors/list:1.1.1.1.182.1
|--- squid/errors/list:1.1.1.1  Tue Jan 25 19:21:47 2000
|+++ squid/errors/list  Mon Dec  1 06:23:44 2003
--
File to patch: 


Any advice is welcome...
Yohann



RE: [squid-users] Re: Implementation issues

2004-07-22 Thread Rick Whitley
From the user perspective, the first thing they will see is the
disclaimer page. Clicking either the login or activation link will be
the same as accepting the conditions. Where would I modify the
authentication window?

thanks

rick...
Rom.5:8

 Merton Campbell Crockett [EMAIL PROTECTED] 7/22/2004 8:20:00 AM

Out of curiosity, why is the notification that computer and network
usage 
will be monitored being deferred until this late in the game?

The monitoring began the moment the individual logged into the
 workstation 
or laptop and requested to use the Windows NOS.  The pop-up
authentication 
window can be modified to display the notification message and inform
the 
user that he is agreeing to be monitored by submitting his
credentials.

At one site where I did some work, the login process was modified so
that 
a separate pop-up window was used.  If you clicked on the accept
button, 
the authentication window was displayed.

Merton Campbell Crockett


On Thu, 22 Jul 2004, Chris Perreault wrote:

 Here at my work all employees sign a form stating that they realize
email,
 internet usage, file storage systems, etc are for work purposes and
that
 none of it is truly private. That email will not be directly read
except for
 troubleshooting/maintenance issues unless there are illegal or
non-condoned
 company activities taking place.
 
 All traffic encrypted? Encryption puts a burden on the servers and if
all
 the traffic is being encrypted, then what exactly *is* being
monitored?
 
 Thoughts? Set your PC's gateway to your Apache server, visit
www.google.com,
 get your disclaimer page to come up, sign in, and then have it make
you end
 up at google. If this works, insert Squid afterwards. (one piece at a
time,
 see what works, what doesn't)
 
 Chris Perreault
 
 
 -Original Message-
 From: Rick Whitley [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, July 21, 2004 5:37 PM
 To: [EMAIL PROTECTED]; Chris Perreault
 Subject: RE: [squid-users] Re: Implementation issues
 
 
 The disclaimer page gives us the opportunity to inform the users that
their
 traffic is being monitored. It's the law. I am open to suggestions as
to a
 better way to accomplish this. I agree its ugly. The disclaimer page
would
 have a login link that went to the 2nd proxy. On top of everything,
they
 want the traffic encrypted. I need to get a workable process working
first.
 What if the first stop was a web server?
 
 client - Apache - Proxy - internet
|
signon
 
 Here apache would trap the traffic unless authenticated(session
variable?).
 We would have apache be the gateway.
 
 Thoughts?
 
 rick...
 Rom.5:8
 
  Chris Perreault [EMAIL PROTECTED] 7/21/2004 3:55:21
PM
 
 First attempt:
 
 client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet
 
 User clicks on a page, sending another request, which then will do
 this:
 
 client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet
 
 If proxy1 sends all traffic to the signon page, then all traffic
passes to
 it. All of it. Each request is coming from the browser, not the
signon
 server, so each request hits the proxy1 first and gets resent to the
logon
 page. Ok...so if a session variable knows the user logged in, then
the
 signon server can redirect the request for the website through
proxy2. All
 requests would pass through proxy1--signon server--proxy2. If
that's
 acceptable, it looks like it would work. You'd want to redo the
session
 variable on each hit, or else you could have the user relogging in
every 20
 minutes or so. If they were doing anything on the web (filling in a
form or
 taking a test) and didn't submit another request for a while, their
session
 could time out and they sure wouldn't be happy. It looks doable, but
it
 looks ugly too.
 
 Once a user has signed up, you want them to get the disclaimer page
every
 time they fire up a web browser?
 
 It would be much easier to have them just sign an internet use
policy and
 in exchange tell them their username and password:)
 
 
 
 Chris Perreault
 Webmaster/MCSE
 The Wiremold Company
 West Hartford, CT 06010
 860-233-6251 ext 3426
 
 
 -Original Message-
 From: Rick Whitley [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, July 21, 2004 3:44 PM
 To: [EMAIL PROTECTED]; Chris Perreault
 Subject: RE: [squid-users] Re: Implementation issues
 
 
 What we are thinking of doing is:
 
 client - Proxy - sign-on/disclaimer - Proxy - ldap - Internet
 
 The 1st proxy will be open and require no auth and redirect all
traffic to
 the sign-on/disclaim site. User has option to Activate account or
visit
 Internet. The Visit Internet link will then go to the 2nd proxy
which
 will have proxy_auth enabled for ldap. The client will be prompted
with a
 userid and passwd dialog to be authenticated and sent to the
internet.
 
 Does this seem possible? I have the gateway for the segment set up to
be the
 1st proxy so I may still have a loop issue. Is there an automated way
to
 modify the gateway 

RE: [squid-users] Re: Implementation issues

2004-07-22 Thread Merton Campbell Crockett
On Thu, 22 Jul 2004, Rick Whitley wrote:

 From the user perspective, the first thing they will see is the
 disclaimer page. Clicking either the login or activation link will be
 the same as accepting the conditions. Where would I modify the
 authentication window?

Unfortunately, I was the unix guy on the projects and dealt, primarily, 
with the network infrastructure and perimeter security.  Windows was 
treated as a shared application.  The applications support group made the 
modifications.  Basically, I can only tell you that it can be done but 
nothing about the how.

Merton Campbell Crockett


 
 thanks
 
 rick...
 Rom.5:8
 
  Merton Campbell Crockett [EMAIL PROTECTED] 7/22/2004 8:20:00 AM
 
 Out of curiosity, why is the notification that computer and network
 usage 
 will be monitored being deferred until this late in the game?
 
 The monitoring began the moment the individual logged into the
 workstation 
 or laptop and requested to use the Windows NOS.  The pop-up
 authentication 
 window can be modified to display the notification message and inform
 the 
 user that he is agreeing to be monitored by submitting his
 credentials.
 
 At one site where I did some work, the login process was modified so
 that 
 a separate pop-up window was used.  If you clicked on the accept
 button, 
 the authentication window was displayed.
 
 Merton Campbell Crockett
 
 
 On Thu, 22 Jul 2004, Chris Perreault wrote:
 
  Here at my work all employees sign a form stating that they realize
 email,
  internet usage, file storage systems, etc are for work purposes and
 that
  none of it is truly private. That email will not be directly read
 except for
  troubleshooting/maintenance issues unless there are illegal or
 non-condoned
  company activities taking place.
  
  All traffic encrypted? Encryption puts a burden on the servers and if
 all
  the traffic is being encrypted, then what exactly *is* being
 monitored?
  
  Thoughts? Set your PC's gateway to your Apache server, visit
 www.google.com,
  get your disclaimer page to come up, sign in, and then have it make
 you end
  up at google. If this works, insert Squid afterwards. (one piece at a
 time,
  see what works, what doesn't)
  
  Chris Perreault
  
  
  -Original Message-
  From: Rick Whitley [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, July 21, 2004 5:37 PM
  To: [EMAIL PROTECTED]; Chris Perreault
  Subject: RE: [squid-users] Re: Implementation issues
  
  
  The disclaimer page gives us the opportunity to inform the users that
 their
  traffic is being monitored. It's the law. I am open to suggestions as
 to a
  better way to accomplish this. I agree its ugly. The disclaimer page
 would
  have a login link that went to the 2nd proxy. On top of everything,
 they
  want the traffic encrypted. I need to get a workable process working
 first.
  What if the first stop was a web server?
  
  client - Apache - Proxy - internet
 |
 signon
  
  Here apache would trap the traffic unless authenticated(session
 variable?).
  We would have apache be the gateway.
  
  Thoughts?
  
  rick...
  Rom.5:8
  
   Chris Perreault [EMAIL PROTECTED] 7/21/2004 3:55:21
 PM
  
  First attempt:
  
  client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet
  
  User clicks on a page, sending another request, which then will do
  this:
  
  client - Proxy1 - sign-on/disclaimer - Proxy2 - ldap - Internet
  
  If proxy1 sends all traffic to the signon page, then all traffic
 passes to
  it. All of it. Each request is coming from the browser, not the
 signon
  server, so each request hits the proxy1 first and gets resent to the
 logon
  page. Ok...so if a session variable knows the user logged in, then
 the
  signon server can redirect the request for the website through
 proxy2. All
  requests would pass through proxy1--signon server--proxy2. If
 that's
  acceptable, it looks like it would work. You'd want to redo the
 session
  variable on each hit, or else you could have the user relogging in
 every 20
  minutes or so. If they were doing anything on the web (filling in a
 form or
  taking a test) and didn't submit another request for a while, their
 session
  could time out and they sure wouldn't be happy. It looks doable, but
 it
  looks ugly too.
  
  Once a user has signed up, you want them to get the disclaimer page
 every
  time they fire up a web browser?
  
  It would be much easier to have them just sign an internet use
 policy and
  in exchange tell them their username and password:)
  
  
  
  Chris Perreault
  Webmaster/MCSE
  The Wiremold Company
  West Hartford, CT 06010
  860-233-6251 ext 3426
  
  
  -Original Message-
  From: Rick Whitley [mailto:[EMAIL PROTECTED] 
  Sent: Wednesday, July 21, 2004 3:44 PM
  To: [EMAIL PROTECTED]; Chris Perreault
  Subject: RE: [squid-users] Re: Implementation issues
  
  
  What we are thinking of doing is:
  
  client - Proxy - sign-on/disclaimer - Proxy - ldap - Internet
  
  

RE: [squid-users] Re: Implementation issues

2004-07-22 Thread Rick Whitley
Sorry to be vague. The users have a student userid and password. Access
to email, local network and internet is granted via this id. We have a
single signon process which takes advantage of an ldap directory. This
is what the 2nd proxy uses to authenticate the users. Sorry I left that
part out. Is there a log file that lists the communication to the ldap
server? 

rick...
Rom.5:8

 Chris Perreault [EMAIL PROTECTED] 7/22/2004 9:41:45 AM


Rick said:

From the user perspective, the first thing they will see is the
disclaimer
page. Clicking either the login or activation link will be the same as
accepting the conditions. Where would I modify the authentication
window?

thanks

rick...
Rom.5:8

++
++

 Merton Campbell Crockett [EMAIL PROTECTED] 7/22/2004 8:20:00
AM

Out of curiosity, why is the notification that computer and network
usage 
will be monitored being deferred until this late in the game?

The monitoring began the moment the individual logged into the
 workstation 
or laptop and requested to use the Windows NOS.  The pop-up
authentication 
window can be modified to display the notification message and inform
the 
user that he is agreeing to be monitored by submitting his
credentials.

At one site where I did some work, the login process was modified so
that 
a separate pop-up window was used.  If you clicked on the accept
button, 
the authentication window was displayed.

Merton Campbell Crockett

+++
+++

Chris replies:
I know I asked this before, but what are they authenticating to? These
students/guests/faculty/etc, do they log into a windows network, unix
system, etc? Or can anyone who plugs their computer into a dorm room
have
access to your computer network? Are these personally owned PC's or
campus
computers? Login scripts can be used to pop up a disclaimer page. If
all
users of the network need a userid and password, then can you use the
same
one for internet access? Ie: if you want internet access, you need to
log
onto the campus network first. 
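
If the same LDAP credentials are reused for the proxy, a minimal proxy_auth
sketch for the second proxy could look like this (the helper path, base DN and
LDAP server are placeholders, not taken from this thread):

  auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -b "ou=people,dc=example,dc=edu" ldap.example.edu
  auth_param basic children 5
  auth_param basic realm Campus Internet Access
  acl ldap_users proxy_auth REQUIRED
  http_access allow ldap_users
  http_access deny all

Failures from the helper normally show up in Squid's cache.log.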

Chris Perreault


RE: [squid-users] load balancing

2004-07-22 Thread Mohsin Khan
Ah ha! Well, do a search for WCCP, and don't just
search for it, implement it. You can also use LVS, which
works well too.
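
For what it's worth, the Squid side of WCCP is only a couple of lines in
squid.conf (the router address is a placeholder; the router itself must also
be configured for WCCP, and the usual interception settings are still needed):

  wccp_router 192.0.2.1
  wccp_version 4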

--- Chris Perreault [EMAIL PROTECTED]
wrote:
 Do a search for round-robin DNS, multiple A
 records, clustering,
 redundancy, and failover. That should give good
 coverage of the techniques.
 
 Also...visiting www.squid-cache.org and searching
 for failover will help.
 
 Chris Perreault
 
 -Original Message-
 From: Gert Brits [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, July 22, 2004 7:40 AM
 To: [EMAIL PROTECTED] Org
 Subject: [squid-users] load balancing
 
 
 Hi All
 
 Please could I have either some ideas or solutions
 to the following:
 
 I work for a BIG company, providing internet
 services for many customers.
 
 I am planning to install a load balancing plan for
 our cache server, which
 will obviously load balance, but will also have a
 redundant feature.
 
 What can I do this with?
 
 Basically have two servers service the caching, but
 they must look like one
 server ... I am sure I sound stupid, but just to
 make sure.
 
 
 Regards
 
 Gert Brits
 


=
Regards, 
Mohsin Khan 
CCNA ( Cisco Certified Network Associate 2.0 ) 
http://portal.aaghaz.net (Your attention is needed)

Happy is the one who can smile








[squid-users] Problems while downloading pages from some sites

2004-07-22 Thread Luis Eduardo Cortes
I'm running Squid 2.5.5-2 on Debian.
My problem is that with www.driverguide.com and other sites, the browser freezes
while downloading the login page; it just keeps me waiting. If I don't use the
proxy server, the page appears immediately. Is this problem related to
the MTU configuration of the interface (MTU=1500)?

Thanks in advance.



RE: [squid-users] load balancing

2004-07-22 Thread David Rippel
Check out http://naragw.sharp.co.jp/sps/. I think this is the simplest and most
effective way, as long as you don't mind using proxy.pac files and/or WPAD. Your other
option is to use LVS with keepalived and ICP.

Regards,
David

 Chris Perreault [EMAIL PROTECTED] 7/22/2004 8:02:00 AM 
Do a search for round-robin DNS, multiple A records, clustering,
redundancy, and failover. That should give good coverage of the techniques.

Also...visiting www.squid-cache.org and searching for failover will help.

Chris Perreault

-Original Message-
From: Gert Brits [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 22, 2004 7:40 AM
To: [EMAIL PROTECTED] Org
Subject: [squid-users] load balancing


Hi All

Please could I have either some ideas or solutions to the following:

I work for a BIG company, providing internet services for many customers.

I am planning to install a load balancing plan for our cache server, which
will obviously load balance, but will also have a redundant feature.

What can I do this with?

Basically have two servers service the caching, but they must look like one
server ... I am sure I sound stupid, but just to make sure.


Regards

Gert Brits








Re: [squid-users] High CPU

2004-07-22 Thread maes
Hello Scott,

SP> maximum_object_size_in_memory 8 KB
8 KB??
I do think you need to increase this...

SP> Storage Mem size:   18396 KB
I think it's too low because of maximum_object_size_in_memory.
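
For example, something along these lines (the value is only an illustration,
not a tuned recommendation):

  # keep somewhat larger objects in memory (example value)
  maximum_object_size_in_memory 64 KB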

hope it helps.

-- 
Best regards,
 maes    mailto:[EMAIL PROTECTED]



RE: [squid-users] forward intranet traffic to a second squid proxy

2004-07-22 Thread Van Hoorenbeeck, Peter (RST/Hammerstone EMEA)
You are welcome, sorry I called you Toni though ;)



 -Original Message-
 From: PraktikantMPI Toni (MPI) [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 22 July 2004 15:58
 To: squid mailingliste (E-Mail)
 Subject: AW: [squid-users] forward intranet traffic to a second squid
 proxy
 
 
 That's what I was looking for.
 
 Thanks, also to Peter and Chris.
 
 Jens Bauer
 
 -Original Message-
 From: Elsen Marc [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 22 July 2004 14:17
 To: PraktikantMPI Toni (MPI); squid mailingliste (E-Mail)
 Subject: RE: [squid-users] forward intranet traffic to a second squid
 proxy
 
 
 
 
 
  Hi !
 
  Is it possible to configure a squid proxy to forward all http
  traffic directed to the intranet to a second squid proxy
  located in the same network ?
 
   Check :
 
cache_peer_domain
cache_peer_access
 
  directives and comments in squid.conf.default.
 
  M.



[squid-users] Squid 2.5.STABLE6 transparent authentication question

2004-07-22 Thread stephanos . kotsakis




Hello,

My current setup is Squid 2.5.STABLE5 and Samba 3.0.4-1 on RedHat Linux
9.0. Transparent authentication with AD clients using IE was set up using
Samba helpers as specified in the Squid documentation. Things were working
fine until the proxy was placed under a heavy load; then I started
receiving the assertion failed: helper.c:323 error in my cache.log and the
squid service kept restarting.

I want to upgrade to Squid 2.5.STABLE6 but I am confused about the ntlm_auth
helpers that we use for transparent authentication: do I continue to use
the ones from the Samba package, or after installing Squid 2.5.STABLE6 can I
now use the ones that come with Squid? I want to preserve my transparent
authentication and correct the assertion errors.

Any help is greatly appreciated!

Thank You,
Steph

 END 


-
The information contained in this email message is intended only for use of the 
individual or entity named above. If the reader of this message is not the intended 
recipient, or the employee or agent responsible to deliver it to the intended 
recipient, you are hereby notified that any dissemination, distribution or copying of 
this communication is strictly prohibited. If you have received this communication in 
error, please immediately notify us by email ([EMAIL PROTECTED]), and destroy the 
original message. Thank you



[squid-users] Proxy Selection Problem

2004-07-22 Thread Merton Campbell Crockett
Background:

   (1)  Companies A, B, and C are working on a joint project.
   (2)  Company A is the lead for the joint project and is responsible
for maintaining a web-accessible database of information about
the project that is accessible to all team members.
   (3)  Private, dedicated links have been established between A, B,
and C to allow access to the database.
   (4)  The web server for the database is only accessible through the
private network connection and is not visible to the Internet.
   (5)  Company B installs a Squid proxy server to provide restricted
access to the web server at company A.
   (6)  All HTTP requests for PROJECT.A.COM need to be forwarded to
this Squid proxy server while all other HTTP requests for A.COM
are forwarded to the default proxy server that provides Internet
access.

Using 2.4-STABLE7, this appeared to work when using cache_peer_domain.  
Under 2.5-STABLE6, the requests are always being routed to the default
parent cache that provides access to the Internet.

Which debug sections need to be enabled in the debug options to determine
why the right proxy server is not being selected?

Merton Campbell Crockett

-- 
BEGIN:  vcard
VERSION:3.0
FN: Merton Campbell Crockett
ORG:General Dynamics Advanced Information Systems;
Intelligence and Exploitation Systems
N:  Crockett;Merton;Campbell
EMAIL;TYPE=internet:[EMAIL PROTECTED]
TEL;TYPE=work,voice,msg,pref:   +1(805)497-5045
TEL;TYPE=work,fax:  +1(805)497-5050
TEL;TYPE=cell,voice,msg:+1(805)377-6762
END:vcard



[squid-users] multiple wan links

2004-07-22 Thread Abdock

Hello,

I need some help in configuring squid to utilise 2 links.

We have 2 links from 2 different ISPs. Is there a way to configure Squid so that it can 
utilise both links equally when they are up, and if one fails, continue to 
serve with the other?


Many thanks.


Re: [squid-users] Remove Squid signature (%S) on FTP indexes

2004-07-22 Thread Christoph Haas
On Thu, Jul 22, 2004 at 06:48:14PM +1000, [EMAIL PROTECTED] wrote:
 I am trying to figure out how to remove or change the Squid signature that is
 appended to the bottom of FTP indexes (same as what is appended to error pages
 but I know how to get rid of those).
 
 eg: if I visit ftp://ftp.download.com/ I get the following signature at the
 bottom of my page:
 
 Generated Thu, 22 Jul 2004 08:45:45 GMT by xxx.xxx.xxx (squid/2.5.STABLE6)
 
 Any help with this would be appreciated.

Doing this correctly would mean altering the source.

Dirty tricks include putting a <!-- at the end of your error message,
thus suppressing the output of the signature (although it is still in
the HTML source). This will, however, make the error message invalid HTML,
as closing tags will be missing. That will probably lead to severe
display problems with some browsers.

This has been discussed a couple of times in this mailing list. You may
want to search the archives.

 Christoph

-- 
~
~
.signature [Modified] 3 lines --100%--3,41 All


Re: [squid-users] multiple wan links

2004-07-22 Thread Christoph Haas
On Thu, Jul 22, 2004 at 08:18:19PM +0300, Abdock wrote:
 I need some help in configuring squid to utilise 2 links.
 
 We have 2 links from 2 different ISP, is there a way to configure
 squid so that it can utilise both the links equally when they are up,
 and if one fails it continues to serve with one.

If both ISPs provide a proxy service you could set both up in
cache_peer and tag them round-robin so they will both be used.
If one goes down the other will be used for all requests.
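
A minimal sketch, assuming both ISPs offered a proxy on port 8080 (hostnames
and port are placeholders):

  cache_peer proxy.isp-a.example parent 8080 0 no-query round-robin
  cache_peer proxy.isp-b.example parent 8080 0 no-query round-robin
  # force requests through the peers so the round-robin selection applies
  never_direct allow all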

 Christoph

-- 
~
~
.signature [Modified] 3 lines --100%--3,41 All


RE: [squid-users] multiple wan links

2004-07-22 Thread Chris Perreault


-Original Message-
From: Christoph Haas [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 22, 2004 2:05 PM
To: [EMAIL PROTECTED]
Subject: Re: [squid-users] multiple wan links


On Thu, Jul 22, 2004 at 08:18:19PM +0300, Abdock wrote:
 I need some help in configuring squid to utilise 2 links.
 
 We have 2 links from 2 different ISP, is there a way to configure 
 squid so that it can utilise both the links equally when they are up, 
 and if one fails it continues to serve with one.

If both ISPs provide a proxy service you could set both up in cache_peer
and tag them round-robin so they will both be used. If one goes down the
other will be used for all requests.

 Christoph

-- 
~
~
.signature [Modified] 3 lines --100%--3,41 All



Depending on your network you might want to load balance things closer to
the internet too, so other things such as email can take advantage of the
same benefits.

Chris Perreault


RE: [squid-users] multiple wan links

2004-07-22 Thread Abdock

Neither of the ISPs has a proxy service.


 
 
 On Thu, Jul 22, 2004 at 08:18:19PM +0300, Abdock wrote:
  I need some help in configuring squid to utilise 2 links.
  
  We have 2 links from 2 different ISP, is there a way to configure 
  squid so that it can utilise both the links equally when they are up, 
  and if one fails it continues to serve with one.
 
 If both ISPs provide a proxy service you could set both up in cache_peer
 and tag them round-robin so they will both be used. If one goes done the
 other will be used for all requests.
 
  Christoph
 
 .signature [Modified] 3 lines --100%--3,41 All
 
 
 
 Depending on your network you might want to load balance things closer to
 the internet too, so other things such as email can take advantage of the
 same benefits.
 
 Chris Perreault



RE: [squid-users] multiple wan links

2004-07-22 Thread Chris Perreault
How are these two ISPs linked to you? 2 different firewall/routers? 1 with
two NICs? If all you have connecting you to the internet is Squid, then the
round-robin technique, on Squid, should work fine. If your ISPs both
connect to your infrastructure and you have more back-end servers, then you
could put a load balancer closer to the door to the internet, which is what I
meant. The ISPs wouldn't need to have a proxy running for this to happen.

isp one - nic |               /---squid
              |(load balancer)----other
isp two - nic |               \---email



Chris Perreault
Webmaster/MCSE
The Wiremold Company
West Hartford, CT 06010
860-233-6251 ext 3426


-Original Message-
From: Abdock [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 22, 2004 2:52 PM
To: [EMAIL PROTECTED]
Subject: RE: [squid-users] multiple wan links



None of the ISP has proxy service.


 
 
 On Thu, Jul 22, 2004 at 08:18:19PM +0300, Abdock wrote:
  I need some help in configuring squid to utilise 2 links.
  
  We have 2 links from 2 different ISP, is there a way to configure
  squid so that it can utilise both the links equally when they are up, 
  and if one fails it continues to serve with one.
 
 If both ISPs provide a proxy service you could set both up in 
 cache_peer and tag them round-robin so they will both be used. If 
 one goes down the other will be used for all requests.
 
  Christoph
 
 .signature [Modified] 3 lines --100%--3,41 All
 
 
 
 Depending on your network you might want to load balance things closer 
 to the internet too, so other things such as email can take advantage 
 of the same benefits.
 
 Chris Perreault


Re: [squid-users] Proxy Selection Problem

2004-07-22 Thread Merton Campbell Crockett
On Thu, 22 Jul 2004, Merton Campbell Crockett wrote:

 Background:
 
  (1)  Companies A, B, and C are working on a joint project.
  (2)  Company A is the lead for the joint project and is responsible
   for maintaining a web-accessible database of information about
   the project that is accessible to all team members.
  (3)  Private, dedicated links have been established between A, B,
   and C to allow access to the database.
  (4)  The web server for the database is only accessible through the
   private network connection and is not visible to the Internet.
  (5)  Company B installs a Squid proxy server to provide restricted
   access to the web server at company A.
  (6)  All HTTP requests for PROJECT.A.COM need to be forwarded to
   this Squid proxy server while all other HTTP requests for A.COM
   are forwarded to the default proxy server that provides Internet
   access.
 
 Using 2.4-STABLE7, this appeared to work when using cache_peer_domain.  
 Under 2.5-STABLE6, the requests are always being routed to the default
 parent cache that provides access to the Internet.
 
 Which debug sections need to be enabled in the debug options to determine
 why the right proxy server is not being selected?

The solution to this problem was to set the right debug_options values.  
After 'grep'ing the source code, I found that I needed 44,5 set to debug
the peer selection process.

With the right section and level specified, the problem was quickly
identified as a forward reference to an ACL.  Hate when that happens!

There still seems to be an error in the peer selection algorithm.  I had
initially defined the peer as a sibling.

 cache_peer proxy.b.com sibling 8080 3130 allow-miss no-digest no-query

Then, I redefined it using neighbor_type_domain.

 neighbor_type_domain proxy.b.com parent .project.a.com

I had expected this to force selection of the proxy.b.com as the peer.  
The default parent continued to be selected over the peer that was
explicitly defined as a parent for the specified destination domain.

I had to add a cache_peer_access to explicitly deny the use of the default
proxy as a peer.
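
Put together, a sketch of that combination might look like this (the name of
the default Internet parent is a placeholder; proxy.b.com and .project.a.com
are as above):

  # section 44 at level 5 shows the peer selection decisions
  debug_options ALL,1 44,5

  cache_peer proxy.b.com sibling 8080 3130 allow-miss no-digest no-query
  neighbor_type_domain proxy.b.com parent .project.a.com

  # keep the default Internet parent away from the project domain
  acl project dstdomain .project.a.com
  cache_peer_access default-proxy.example deny project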

Merton Campbell Crockett


-- 
BEGIN:  vcard
VERSION:3.0
FN: Merton Campbell Crockett
ORG:General Dynamics Advanced Information Systems;
Intelligence and Exploitation Systems
N:  Crockett;Merton;Campbell
EMAIL;TYPE=internet:[EMAIL PROTECTED]
TEL;TYPE=work,voice,msg,pref:   +1(805)497-5045
TEL;TYPE=work,fax:  +1(805)497-5050
TEL;TYPE=cell,voice,msg:+1(805)377-6762
END:vcard



[squid-users] Microsoft Outlook Mobile Access and SSL acceleration

2004-07-22 Thread Mike Kelson
Has anybody attempted to use Squid as an SSL accelerator for Outlook Mobile Access 
running on Exchange 2003?
 
I have Outlook Web Access running well using the front-end-https=on option, but have 
had no success in getting the OMA WAP site working.
 
Thanks in advance.
 
Regards
 
Mike Kelson
Atlanta Technology Limited
20a Brownlow Mews
London WC1 2LA
020 7692 7000
07976 731953
 
 
 




[squid-users] Re: Squid 2.5.STABLE6 transparent authentication question

2004-07-22 Thread Adam Aube
[EMAIL PROTECTED] wrote:

 My current setup is Squid 2.5.STABLE5 and Samba 3.0.4-1 On RedHat Linux
 9.0. Transparent authentication with AD clients using IE was setup using
 Samba helpers as specified in the Squid documentation.

For future reference, you are using NTLM authentication, which IE makes
transparent to the end user. Transparent authentication does not exist -
transparent proxying cannot be combined with authentication.

 Things were working fine until the proxy was placed under a heavy load,
 then I started receiving the assertion failed: helper.c:323 error in my
 cache.log and the squid service kept restarting.

 I want to upgrade to Squid 2.5.STABLE6 but I am confused about the
 helpers, ntlm_auth that we use for transparent authentication, do I
 continue to use the ones from the Samba package or after installing Squid
 2.5.STABLE6 can I now use the ones that come with squid?

Continue to use the Samba helpers. The Squid helpers are only used with
Samba 2.2.x.
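
For reference, the usual squid.conf lines for the Samba 3 ntlm_auth helper
look roughly like this (the helper path varies by distribution):

  auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 30

If the assertion under heavy load turns out to be helper-related, raising the
children value is one knob worth trying.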

[email disclaimer snipped]

If it is possible to turn off that disclaimer when posting to public mailing
lists, please do so. Besides wasting space and potentially annoying list
members, the disclaimer is rather pointless when posted to a public forum.

Adam



[squid-users] Re: multiple wan links

2004-07-22 Thread Adam Aube
Abdock wrote:

 We have 2 links from 2 different ISP, is there a way to configure squid so
 that it can utilise both the links equally when they are up, and if one
 fails it continues to serve with one.

Set up your OS to utilize both links, so Squid will use them automatically.

You didn't mention your OS, but for Linux you can Google for the Linux Advanced
Routing HOWTO. The BSDs have similar capabilities.

Adam



[squid-users] Re: Proxy Selection Problem

2004-07-22 Thread Adam Aube
Merton Campbell Crockett wrote:

(1)Companies A, B, and C are working on a joint project.
(2)Company A is the lead for the joint project and is responsible
 for maintaining a web-accessible database of information about
 the project that is accessible to all team members.
(3)Private, dedicated links have been established between A, B,
 and C to allow access to the database.
(4)The web server for the database is only accessible through the
 private network connection and is not visible to the Internet.
(5)Company B installs a Squid proxy server to provide restricted
 access to the web server at company A.
(6)All HTTP requests for PROJECT.A.COM need to be forwarded to
 this Squid proxy server while all other HTTP requests for A.COM
 are forwarded to the default proxy server that provides Internet
 access.

This can be done easily with a proxy auto-config script. See the Squid FAQ
for more details:

http://www.squid-cache.org/Doc/FAQ/FAQ-5.html#ss5.2

Despite the title of the section, it works with most browsers, not just
Netscape.

Adam



Re: [squid-users] Remove Squid signature (%S) on FTP indexes

2004-07-22 Thread lists
Quoting Christoph Haas [EMAIL PROTECTED]:

 On Thu, Jul 22, 2004 at 06:48:14PM +1000, [EMAIL PROTECTED] wrote:
  I am trying to figure out how to remove or change the Squid signature that is
  appended to the bottom of FTP indexes (same as what is appended to error pages
  but I know how to get rid of those).
 
  eg: if I visit ftp://ftp.download.com/ I get the following signature at the
  bottom of my page:
 
  Generated Thu, 22 Jul 2004 08:45:45 GMT by xxx.xxx.xxx (squid/2.5.STABLE6)
 
  Any help with this would be appreciated.

 Doing this correctly would mean to alter the source.

 Dirty tricks include putting a <!-- at the end of your error message
 thus suppressing the output of the signature (although it is still in
 the HTML source). This will however make the error message be non-HTML
 as closing tags will be missing. That will probably lead to severe
 display problems with some browsers.

 This has been discussed a couple of times in this mailing list. You may
 want to search the archives.

  Christoph

 --
 ~
 ~
 .signature [Modified] 3 lines --100%--3,41 All


Thanks Christoph, I have already searched the archives and that is how I discovered
how to get rid of the signature in error messages (BTW, including <!-- %s --> will
remove the signature without breaking the rest of the page - yes, it is still
viewable in the source though).

What my question was about is *not* how to change error pages, but how to change the
page returned when you visit an ftp:// style URL.

I am happy to hack the source, but if there is an easier way I'd like to use that.

Regards,
Rob Hadfield


Re: [squid-users] Re: Proxy Selection Problem

2004-07-22 Thread Merton Campbell Crockett
On Thu, 22 Jul 2004, Adam Aube wrote:

 Merton Campbell Crockett wrote:
 
 (1) Companies A, B, and C are working on a joint project.
 (2) Company A is the lead for the joint project and is responsible
 for maintaining a web-accessible database of information about
 the project that is accessible to all team members.
 (3) Private, dedicated links have been established between A, B,
 and C to allow access to the database.
 (4) The web server for the database is only accessible through the
 private network connection and is not visible to the Internet.
 (5) Company B installs a Squid proxy server to provide restricted
 access to the web server at company A.
 (6) All HTTP requests for PROJECT.A.COM need to be forwarded to
 this Squid proxy server while all other HTTP requests for A.COM
 are forwarded to the default proxy server that provides 
 Internet access.
 
 This can be done easily with a proxy auto-config script. See the Squid FAQ
 for more details:
 
 http://www.squid-cache.org/Doc/FAQ/FAQ-5.html#ss5.2
 
 Despite the title of the section, it works with most browsers, not just
 Netscape.

I've been using proxy.pac files for this purpose for years.  As a result 
of corporate acquisitions, there has been an influx of staff into our IT 
organisation that simply doesn't understand autoconfiguration.  They are 
deathly afraid to use it and think that using intercept proxies is the 
answer.

As you correctly point out, an autoconfiguration file provides a simple, 
straightforward solution to the problem.

Given the IT organisation's fears, the question is how do you solve the 
problem when your only option is to use an intercept proxy?

Merton Campbell Crockett


-- 
BEGIN:  vcard
VERSION:3.0
FN: Merton Campbell Crockett
ORG:General Dynamics Advanced Information Systems;
Intelligence and Exploitation Systems
N:  Crockett;Merton;Campbell
EMAIL;TYPE=internet:[EMAIL PROTECTED]
TEL;TYPE=work,voice,msg,pref:   +1(805)497-5045
TEL;TYPE=work,fax:  +1(805)497-5050
TEL;TYPE=cell,voice,msg:+1(805)377-6762
END:vcard