Re: [squid-users] transparent proxyng works but...

2003-03-25 Thread Henrik Nordstrom
Neither Squid nor iptables cares how many hops away the station is, only
which IP address it is using.

What I can think of is that your Squid server does not know where to
route the return traffic back to those networks.
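If so, a static return route on the proxy host is the usual fix. A sketch with hypothetical addresses (assume the distant workstations sit on 192.168.2.0/24 behind an intermediate router at 192.168.1.254; substitute your own networks):

```shell
# Hypothetical addresses -- adjust to your topology.
# Teach the proxy/firewall box how to reach the 2-3 hop networks:
route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.254
# or, with iproute2:
ip route add 192.168.2.0/24 via 192.168.1.254
```

Without such a route, redirected connections reach Squid but its replies never find their way back to the distant subnets.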

Regards
Henrik




SSCR Internet Admin wrote:
> 
> I have already set up transparent proxying on my squid server. The
> workstations' IP addresses are masqueraded by iptables and invisibly
> redirected to squid on port 3128 if anyone tries to bypass squid, so
> those workstations can already connect to the internet without
> specifying squid port 3128 in their browsers. But the workstations
> that are 2 to 3 hops away from my proxy/firewall server can't connect
> to the internet directly and aren't even redirected to port 3128,
> unlike the workstations that are 1 hop away from my server. What's
> happening? Is there a bug in iptables, or something I have to tweak in
> squid?
> 
> Thanks.
> ---
> Outgoing mail is certified Virus Free.
> Checked by AVG anti-virus system (http://www.grisoft.com).
> Version: 6.0.463 / Virus Database: 262 - Release Date: 3/17/2003
> 
> --
> This message has been scanned for viruses and
> dangerous contents on SSCR Email Scanner Server, and is
> believed to be clean.


Re: [squid-users] make pinger error

2003-03-25 Thread Henrik Nordstrom
SSCR Internet Admin wrote:
> 
> thanks, it works now.  Just one question: why is it that pinger does
> not have root privileges, since I installed and compiled squid as root?

Because the make files cannot assume you are installing Squid as root,
and would give errors to everyone not doing so.

The recommended procedure is to make and install Squid as a
non-privileged user and only run "make installpinger" as root, or
alternatively to manually make the pinger binary setuid root.
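In other words (paths assume the default /usr/local/squid prefix; adjust for yours):

```shell
# As the unprivileged build user:
make && make install

# Then, as root, install just the pinger setuid helper:
make installpinger

# ...or set it up by hand:
chown root /usr/local/squid/libexec/pinger
chmod 4711 /usr/local/squid/libexec/pinger   # setuid root, world-executable
```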

Regards
Henrik


Re: [squid-users] NTLM Authentication using the SMB helper - need help with access log problems

2003-03-25 Thread Henrik Nordstrom
Ken Thomson wrote:

> The server operates fine, and the authentication works as
> expected.  My problem lies with the access.log file.
> Every request from a client is first denied and then
> accepted after being authenticated.  This happens to
> *EVERY* request.

Yes, this is because of how NTLM authentication works.

On each new TCP connection from the browser the following happens

1a. Browser sends request without authentication
1b. Rejected by Squid as there is no authentication; Squid proposes
NTLM
2a. Browser sends request with a NTLM NEGOTIATE packet embedded in the
headers
2b. Rejected by Squid with a NTLM CHALLENGE packet embedded in the
headers
3a. Browser sends request with a NTLM AUTHENTICATE packet embedded in
the headers
3b. Connection accepted by Squid if the authentication is successful.
This request and any future requests on the same TCP connection are
forwarded.

All responses by Squid are logged.

If this disturbs your log statistics then filter out TCP_DENIED/407
lines with no username before processing the logs.
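For example, a minimal filter along those lines (the log lines and names below are made up; field positions assume Squid's native access.log format, where field 4 is the result code and field 8 the username, "-" when absent):

```shell
# Two sample lines in Squid's native log format (field 4 = result code,
# field 8 = username); URLs and the user name are made up.
cat > access.log <<'EOF'
1048619816.535 12 192.168.1.4 TCP_DENIED/407 1741 GET http://example.com/ - NONE/- text/html
1048619817.100 34 192.168.1.4 TCP_MISS/200 5120 GET http://example.com/ jsmith DIRECT/10.0.0.1 text/html
EOF
# Keep everything except the unauthenticated NTLM handshake rejections:
awk '!($4 == "TCP_DENIED/407" && $8 == "-")' access.log > access.filtered
```

Point the statistics tool at access.filtered instead of access.log.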

Regards
Henrik


[squid-users] block ftp access using browser (IE)

2003-03-25 Thread Patrick Kwan
Hello:

I am trying to block users from using a browser (IE) to access FTP.

I set the following acl in squid.conf:

acl ftpusers proto FTP
http_access deny ftpusers

But I can still access FTP sites via the browser.

Can anyone give an example?
Thanks for your help!


Patrick






Re: [squid-users] block ftp access using browser (IE)

2003-03-25 Thread Marc Elsen


Patrick Kwan wrote:
> 
> Hello:
> 
> I am tring to block users using browser (IE) to access ftp.
> 
> I set the following acl in squid.conf
> 
> acl ftpusers proto FTP
> http_access deny ftpusers
> 
> But I still can access ftp site via browser.

 Could you de-capitalize 'FTP' in the acl directive and
 try again?

 M.

> 
> Can anyone can give some example?
> Thanks your help!
> 
> Patrick

-- 

 'Time is a consequence of Matter thus
 General Relativity is a direct consequence of QM
 (M.E. Mar 2002)


RE: [squid-users] Timeouts details and Retry problems

2003-03-25 Thread Fabrice DELHOSTE

First of all, thanks for your help.

So, here is the behavior that our tests show:
1) when connect_timeout < read_timeout
=> No retries. Response time = read_timeout

2) when connect_timeout > read_timeout
=> Always 1 Retry = 2 requests to the content server. Response time = 
connect_timeout + read_timeout

Could you give us details about this behavior?
Do you plan to add a retry property to the configuration?

Fabrice

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: mardi 25 mars 2003 08:47
To: Victor Tsang
Cc: Fabrice DELHOSTE; [EMAIL PROTECTED]
Subject: Re: [squid-users] Timeouts details and Retry problems


Only by editing the source.

1. Make sure server-side persistent connections are disabled in
squid.conf.

2. Modify fwdCheckRetry() in forward.c to always return 0.
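Step 2 amounts to short-circuiting the function; a sketch, not a drop-in patch (the real 2.5-era function body contains several eligibility checks that this simply bypasses):

```c
/* forward.c (sketch): make every forwarding failure final. */
static int
fwdCheckRetry(FwdState * fwdState)
{
    return 0;               /* 0 = never retry this request */
}
```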

Regards
Henrik


Victor Tsang wrote:
> 
> Is there a way to turn off such feature, or control the number of retry
> squid does?
> 
> Thanks.
> Tor
> 
> Henrik Nordstrom wrote:
> >
> > mån 2003-03-24 klockan 12.11 skrev Fabrice DELHOSTE:
> >
> > > So after installing Squid, we modified connect_timeout and
> > > read_timeout. We found configurations that works but we would like to
> > > understand precisely why. Moreover, due to our misunderstanding, we
> > > sometimes have strange effects such as double requests (or even more)
> > > to the content server whereas the application correctly receives the
> > > timeout error by sending only one request. Any idea?
> >
> > Squid automatically retries requests if the first attempt fails, to
> > increase the likelyhood of a successful reply. Depending on the
> > conditions there may be as much as up to 10 retries of the same request.
> >
> > The same is also done by most browsers.
> >
> > --
> > Henrik Nordstrom <[EMAIL PROTECTED]>
> > MARA Systems AB, Sweden



RE: [squid-users] Internet Access

2003-03-25 Thread Rick Matthews
> I know someone else has probably implemented something similar, is 
> there a well-known solution to this? 

Here are a few places to check:

http://www.kioskcom.com/index.php
(check product resource guide)

http://www.kioskmarketplace.com/

http://www.kiosks.org/

http://www.nnu.com/



> -Original Message-
> From: Clayton Hicklin [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 24, 2003 11:41 PM
> To: [EMAIL PROTECTED]
> Subject: [squid-users] Internet Access
> 
> 
> Hi,
> I'm helping someone develop a kiosk-type Internet access station.  I 
> need to be able to ask for a name and credit card information, allow the 
> user access once that information is given, and time the session.  All 
> of this information needs to be recorded and transmitted to another 
> server.  This is a dialup kiosk, so there is no Internet connection 
> until the user has entered their CC information.  I know a little of 
> Squid and squidGuard, and have played with basic authentication, but I 
> need a little help getting started.  I know someone else has probably 
> implemented something similar, is there a well-known solution to this?  
> I will be implementing on linux boxes with Mozilla, pppd, etc.  I'm very 
> comfortable with the other aspects (dialup, file transmission, etc), but 
> I need help with regulated Internet access.  Thanks.
> 
> -- 
> Clayton Hicklin
> [EMAIL PROTECTED]
> 
> 


[squid-users] customizing error messages (client hostname?)

2003-03-25 Thread DANNY KHALIL
Hello,

I am trying to customize the error messages. I mainly want to report the
client hostname and not its IP address. I know that the tags supported by
squid can give me the IP address: %i

The question is then: how can I do a reverse DNS lookup on the %i to
obtain the client hostname? Keep in mind that all this has to be done
inside the error files. Is there a tag that can report the client
hostname?

btw, I am running squid-2.5.STABLE2

thanx

Danny


Re: [squid-users] customizing error messages (client hostname?)

2003-03-25 Thread Christoph Haas
Hi, Danny...

> I am trying to customize the error messages. I mainly want to report
> the the client hostname and not its IP address. I know that the tags
> supported by squid can give me the IP address : %i

According to 'errorpage.c' these tags are allowed:

* B - URL with FTP %2f hack
* c - Squid error code
* e - errno
* E - strerror()
* f - FTP request line
* F - FTP reply line
* g - FTP server message
* h - cache hostname
* H - server host name
* i - client IP address
* I - server IP address
* L - HREF link for more info/contact
* M - Request Method
* m - Error message returned by external Auth.
* p - URL port #
* P - Protocol
* R - Full HTTP Request
* S - squid signature from ERR_SIGNATURE
* s - caching proxy software with version
* t - local time
* T - UTC
* U - URL without password
* u - URL with password
* w - cachemgr email address
* z - dns server error message

According to this list I see no way to print out the client host name.

 Christoph



[squid-users] identify files ?

2003-03-25 Thread Messner, Alexander
Hi all,

is there a possibility in the logs to identify the name of an uploaded file?
Websweeper for Windows can do this - can Squid?

Thank you

With kind regards :-)
Alexander Messner
---
Graffinity Pharmaceuticals AG
Im Neuenheimer Feld 518-519  ---  D-69120 Heidelberg
Tel: 06221/6510-152 --- Fax: 06221/6510-111
mailto: [EMAIL PROTECTED] --- http://www.graffinity.com
 


**
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed. If you have received this email in error please notify
the system manager.
This footnote also confirms that this email message has been swept by
MIMEsweeper for the presence of computer viruses.
www.graffinity.com
**



RE: [squid-users] NTLM Authentication using the SMB helper - need help with access log problems

2003-03-25 Thread James Ambursley
Could you send me a sample squid.conf file, please.


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 25, 2003 2:59 AM
To: Ken Thomson
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] NTLM Authentication using the SMB helper -
need help with access log problems


Ken Thomson wrote:

> The server operates fine, and the authentication works as
> expected.  My problem lies with the access.log file.
> Every request from a client is first denied and then
> accepted after being authenticated.  This happens to
> *EVERY* request.

Yes, this is because of how NTLM authentication works.

On each new TCP connection from the browser the following happens

1a. Browser sends request without authentication
1b. Rejected by Squid as there is no authentication, squid proposing to
use NTLM
2a. Browser sends request with a NTLM NEGOTIATE packet embedded in the
headers
2b. Rejected by Squid with a NTLM CHALLENGE packet embedded in the
headers
3a. Browser sends request with a NTLM AUTHENTICATE packet embedded in
the headers
3b. Connection accepted by Squid if the authentication is successful.
This request and any future requests on the same TCP connection is
forwarded.

All responses by Squid is logged.

If this disturbs your log statistics then filter out TCP_DENIED/407
lines with no username before processing the logs.

Regards
Henrik


[squid-users] ssl between squid accellerators

2003-03-25 Thread mlister
I have been poking around in the FAQs and whitepapers trying to find how
I can configure one squid accelerator server to speak SSL to another.  I
have the SSL patch installed and both accelerators are configured as SSL
servers using identical certs.  I am not sure if I should be trying to
configure some kind of peer or what.  cache_peer looks like it's in the
general direction, but looks more suited to a proxy.



Re: [squid-users] block ftp access using browser (IE)

2003-03-25 Thread Patrick Kwan
Hello Marc, Tan:

Thanks for your reply!
I already solved the problem, which was caused by my carelessness.

I set IE's connection settings to use the proxy for all protocols (incl.
ftp), so I assumed all ftp connections would go through squid, but they
did not. The connections went out through my default gateway instead.
After I removed the default gateway, squid's acl works.

Thanks for your kind help

Patrick


>
>
> Patrick Kwan wrote:
>>
>> Hello:
>>
>> I am tring to block users using browser (IE) to access ftp.
>>
>> I set the following acl in squid.conf
>>
>> acl ftpusers proto FTP
>> http_access deny ftpusers
>>
>> But I still can access ftp site via browser.
>
>  Could you de-capitalize 'FTP' in the acl directive ,
>  try again -> ?
>
>  M.
>
>>
>> Can anyone can give some example?
>> Thanks your help!
>>
>> Patrick
>
> --
>
>  'Time is a consequence of Matter thus
>  General Relativity is a direct consequence of QM
>  (M.E. Mar 2002)





RE: [squid-users] Timeouts details and Retry problems

2003-03-25 Thread Henrik Nordstrom
tis 2003-03-25 klockan 10.58 skrev Fabrice DELHOSTE:
> First of all, thanks for your help.
> 
> So, here is the behavior that our tests show:
> 1) when connect_timeout < read_timeout
>   => No retries. Response time = read_timeout
> 
> 2) when connect_timeout > read_timeout
>   => Always 1 Retry = 2 requests to the content server. Response time = 
> connect_timeout + read_timeout


This depends on the reason why Squid retried the request. The above is
consistent with the request being retried because the server is too slow
to respond, not with cases where the server refuses the request or fails
to process it.

Squid will only retry the request if still within connect_timeout from
the start of the request.

> Do you plan to add a retry property to the configuration?

I have no need for such a property myself, but a patch would most likely
be accepted.

Regards
Henrik

-- 
Henrik Nordstrom <[EMAIL PROTECTED]>
MARA Systems AB, Sweden



Re: [squid-users] ssl between squid accellerators

2003-03-25 Thread Henrik Nordstrom
tis 2003-03-25 klockan 16.09 skrev mlister:
> I have been poking around in the FAQ's and whitepapers trying to find how I
> can
> configure one squid accel. server to speak SSL to another.  I have the SSL
> patch
> installed and both accellerators are configured as SSL servers using
> identical certs.
> I am not sure if I should be trying to configure some kind of peer or what.
> cache_peer look like its in the general direction but looks more for a
> proxy.

See the cache_peer directive in your patched Squid, or have a redirector
rewrite the requested URLs to https://
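With the SSL patch applied, the cache_peer route might look roughly like this in the front accelerator's squid.conf (backend.example.com and the port are placeholders, and the exact option names should be checked against the SSL patch's documentation):

```
cache_peer backend.example.com parent 443 0 no-query ssl
never_direct allow all
```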

This kind of thing will be a lot easier in Squid-3 with its new
accelerator support.

Regards
Henrik

-- 
Henrik Nordstrom <[EMAIL PROTECTED]>
MARA Systems AB, Sweden



[squid-users] no_cache and Process Filedescriptors Allocation menu don't agree

2003-03-25 Thread Adam
Hello,

I am wondering both why the Process Filedescriptor Allocation table logs
file types set to "no_cache", and whether Nread and Nwrite mean the
number of FDs, bytes read and written, or something else.  If it is
bytes, it does not seem accurate, since the file is 5+ MB.

Short background (the why): our Squid server is 3-4 times slower than
surfing directly to the internet.  From our reading and from using the
cachemgr CGI, it seems that much of our bottleneck is I/O, and much of
that is streaming audio/video (users listening to internet radio).  The
server is a Sun Ultra 60 running Solaris 8 with only one internal SCSI
controller driving the two internal disks; they are mirrored using
DiskSuite.
Squid version: Squid Cache: Version 2.5.STABLE1-20030307
configure options: --enable-dlmalloc --enable-async-io
--enable-storeio=aufs,diskd,ufs --enable-removal-policies=heap,lru
--enable-delay-pools --disable-icmp --disable-ident
--enable-cachemgr-hostname=ourproxy

Using the cachemgr.cgi script's "Process Filedescriptor Allocation"
printout we can see many users doing streaming media (.asx, .wmv, etc.).
We can't arbitrarily block these until a policy has been developed and
rolled out to each and every user, so in the meantime I want to stop
caching the streaming media to reduce disk writes.  Everyone is
listening to different (radio) stations, and since the content changes,
our feeling is "why cache it?"  We also figure this will alleviate some
of the I/O contention for writing to the internal disks.  If this
reasoning is wrong-headed, please advise.  We plan on rolling out delay
pools soon but are trying one thing at a time.

So my question is: is the server no longer caching, now that I have added
these acl/no_cache directives to the squid.conf file:
  acl zipfiles urlpath_regex -i \.zip$ \.asf$  \.tar$ \.asx$  \.wmv$
\.mpg$  \.rm$  \.mov$  \.iso$  \.mpeg$
  no_cache deny zipfiles

From reading the FAQ and the mailing list via groups.google, I think the
answer is YES, it's no longer caching.  cache.log has nothing in it (a
good sign) except my own accesses to the cachemgr.cgi, and access.log
logs this line once the transfer is completed: 1048619816.535 595467
192.168.1.4 TCP_MISS/206 3283803 GET http://205.225.141.21/BerkeleyDB.tar
- DIRECT/205.225.141.21 application/x-tar

However, the Process Filedescriptor Allocation table still actively
shows the file, and the Nread and Nwrite columns continue growing as the
file downloads.  The numbers increment as the file is further
downloaded, though they don't match the byte count.  Since the ratio is
about 1.6:1 (bytes in BerkeleyDB.tar to Nread or Nwrite), I am wondering
what the number could represent.  It's not bytes, and I can't believe
there are 1.6 FDs per byte (that wouldn't be efficient).  So what are
Nread and Nwrite supposed to represent?

Lastly, since the idea of enabling no_cache is to avoid any additional
disk reads/writes, I am wondering why this is still logged in the
sub-menu.  Can anyone tell me why, and/or whether I am doing something
wrong or making incorrect assumptions?

thanks,

Adam


Here is the Cachemgr.cgi Process Filedescriptor Allocation just prior to the
download finishing:
Active file descriptors:
File Type    Tout    Nread *  Nwrite *  Remote Address      Description
---- ------  ----  --------  --------  ------------------  -----------
   3 Log        0         0         0                      /logs/cache.log
   6 Socket     0      1291       412  .0                  DNS Socket
   7 File       0         0    347721                      /logs/access.log
   8 Pipe       0         0         0                      unlinkd -> squid
   9 Socket     0        0*         0  .0                  HTTP Socket
  10 Socket     0        0*         0  .0                  HTTP Socket
  11 Pipe       0         0         0                      squid -> unlinkd
  12 Socket     0        0*         0  .0                  HTTP Socket
  13 Socket     0        0*         0  .0                  HTTP Socket
  14 File       0         0       192                      /cache/swap.state
  15 File       0         0       192                      /cache2/swap.state
  16 Socket  1430      572*   3254047  192.168.1.4.2028    http://extern.site.com/BerkeleyDB.tar
  17 Socket     4  3254043*      1310  extern.site.com.80  http://extern.site.com/BerkeleyDB.tar
  18 Socket  1440      162*         0  192.168.1.4.53586   cache_object://squid.mydom.com/filedescriptors
  24 Pipe       0        0*         0                      async-io completion event: main
  25 Pipe       0         0         0                      async-io completion event: threads



Re: [squid-users] no_cache and Process Filedescriptors Allocation menu don't agree

2003-03-25 Thread Marc Elsen


Adam wrote:
> 
> Hello,
> 
> I am wondering both why the Process Filedescriptor Allocation table logs
> file types set to "no_cache" and whether Nread and Nwrite means Number FD's
> or Bytes read and written or something else.  If it is Bytes, it does not
> seem to be accurate since the file is 5+MB.
> 
> Short background (the why):  Our Squid server is 3-4 times slower than
> surfing directly to the internet.   From our reading and using the Cachmgr
> cgi, it seems
> that much of our bottleneck is I/O and much of that is streaming audio/video
> (users listening to internet radio).  The server Ultra 60 Sun server running
> Solaris 8
> only has one internal SCSI controller that runs the two internal disks: they
> are mirrored using disksuite.
> Squid version is:  Squid Cache: Version 2.5.STABLE1-20030307
> configure
> options:  --enable-dlmalloc --enable-async-io --enable-storeio=aufs,diskd,uf
> s --enable-removal-policies=heap,lru --enable-delay-pools
> --disable-icmp --disable-ident --enable-cachemgr-hostname=ourproxy
> 
> Using the Cachemgr.cgi script's "Process Filedescriptor Allocation" printout
> we can see many users doing streaming media (.asx, .wmv, etc.).  We can't
> arbitrarily block these until a policy has been developed and rolled out to
> each and every user.  So in the meantime I want to not cache the streaming
> media to reduce disk writes.Everyone is listening to different (radio)
> stations and since the content changes, our feeling is "why cache it?"
> Also we figure that this will alleviate some of the I/O contention for
> writing to internal the internal disk. If this reasoning is wrong-headed,
> please advise.  We plan on rolling out delay pools soon but are trying one
> thing at a time.
> 
> So my question is: is the server no longer caching, now that I have added
> these acl/no_cache directives to the squid.conf file:
>   acl zipfiles urlpath_regex -i \.zip$ \.asf$  \.tar$ \.asx$  \.wmv$
> \.mpg$  \.rm$  \.mov$  \.iso$  \.mpeg$
>   no_cache deny zipfiles
> 
> From reading the FAQ and the mailing list via groups.google, I think the
> answer would be YES it's no longer caching. cache.log has nothing in it
> (good sign) except my own accesses to the cachemgr.cgi and access.log logs
> this line once the transfer is completed: 1048619816.535 595467 192.168.1.4
> TCP_MISS/206 3283803 GET http://205.225.141.21/BerkeleyDB.tar -
> DIRECT/205.225.141.21 application/x-tar
> 
>  However the Process Filedescriptor Allocation table still actively shows
> the file and the Nread and Nwrite columns continue growing as the file
> continues downloading.  The numbers increment as the file is further
> downloaded though they don't match the bytes. Since the ratio is about 1.6:1
> (bytes in BerkeleyDB.tar to the Nread or Nwrite) I am wondering what the
> number could represent.  It's not bytes and I can't believe there is 1.6 FD
> per byte (that wouldn't be efficient).  So what are Nread and Nwrite
> supposed to represent?
> 
> Lastly, since the idea of enabling no_cache is to not have any additional
> disk reads/writes I am wondering why this is still logging in the sub-menu?
> Can anyone tell me why and/or if I am doing something wrong or making
> incorrect assumptions?

  I think the 'fd mechanism' is always used in Squid to read data for a
 particular object, so this is unrelated to a no_cache directive for
 certain extensions.

 M.

> 
> thanks,
> 
> Adam
> 
> Here is the Cachemgr.cgi Process Filedescriptor Allocation just prior to the
> download finishing:
> Active file descriptors:
> File Type   Tout Nread  * Nwrite * Remote AddressDescription
>  --    - ---
> ---
>3 Log 0   0   0
> /logs/cache.log
>6 Socket01291  412  .0DNS Socket
>7 File 0   0   347721
> /logs/access.log
>8 Pipe0   0   0unlinkd ->
> squid
>9 Socket0   0* 0  .0HTTP Socket
>   10 Socket   0   0* 0  .0HTTP Socket
>   11 Pipe   0   0   0squid ->
> unlinkd
>   12 Socket   0   0* 0  .0HTTP Socket
>   13 Socket   0   0* 0  .0HTTP Socket
>   14 File0   0 192
> /cache/swap.state
>   15 File0   0 192
> /cache2/swap.state
>   16 Socket 1430   572* 3254047  192.168.1.4.2028
> http://extern.site.com/BerkeleyDB.tar
>   17 Socket   43254043*1310  extern.site.com.80
> http://extern.site.com/BerkeleyDB.tar
>   18 Socket 1440 162*   0  192.168.1.4.53586
> cache_object://squid.mydom.com/filedescriptors
>   24 Pipe  0   0*  0async-io
> completetion event: main
>   25 Pipe  0   00async-io
> completetion event:

Re: [squid-users] cache_peer_access and NTLM groups.

2003-03-25 Thread Christopher Weimann
This looks to me to be the same problem as bug 556 but
I'm using 2.5.STABLE2 not 2.5.STABLE1.

On Mon 03/24/2003-03:09:45PM -0500, Christopher Weimann wrote:
>
> I am running 2.5.STABLE2 with squid-2.5.STABLE2-concurrent_external_acl.patch
> This did not work without the conncurrent_external_acl.patch either.
>
> I am having trouble using cache_peer_access with groups and
> NTLM. This works perfectly with Mozilla and Basic auth but
> not with NTLM (IE6 or IE5.5). Sometimes the page comes up
> with broken images and sometimes I get "Unable to forward
> this request at this time."
>
> It seems that it is failing to pick a cache_peer. I turned on
> some debug_options and it appears that the auth info is lost
> at some point when using NTLM.
>
[snip]



[squid-users] Re: sample squid.conf

2003-03-25 Thread Henrik Nordstrom
# Configure Basic authentication with radius as backend
# (see Related Software)
auth_param basic program /path/to/squid_rad_auth ...
[... is options as per the squid_rad_auth documentation]
[other auth_param basic directives as per squid.conf.default]

# Require users to log in
acl login proxy_auth REQUIRED
http_access deny !login


As an alternative to squid_rad_auth you can use the PAM authenticator
shipped with Squid, but this assumes you are familiar with how to
configure the PAM radius client.

Regards
Henrik


James Ambursley wrote:
> 
> radius
> 
> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 25, 2003 3:18 PM
> To: James Ambursley
> Subject: RE: sample squid.conf
> 
> What password source are you trying to connect to? (NCSA / LDAP /
> Windows Domain / Radius / UNIX(PAM) / ...)
> 
> Which authentication scheme? (Basic / NTLM / Digest? If unsure Basic..)
> 
> Which Squid version?
> 
> Regards
> Henrik
> 
> tis 2003-03-25 klockan 19.37 skrev James Ambursley:
> > i am trying to create a working squid.conf which shows authentication.  I have 
> > tried with various parameters and have not been successful.
> >
> >
> > -Original Message-
> > From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, March 25, 2003 11:55 AM
> > To: James Ambursley
> > Subject: RE: sample squid.conf
> >
> >
> > What for?
> >
> >
> > tis 2003-03-25 klockan 16.01 skrev James Ambursley:
> > > Could you send me a sample squid.conf file, please.
> > >
> > >
> > > -Original Message-
> > > From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> > > Sent: Tuesday, March 25, 2003 2:59 AM
> > > To: Ken Thomson
> > > Cc: [EMAIL PROTECTED]
> > > Subject: Re: [squid-users] NTLM Authentication using the SMB helper -
> > > need help with access log problems
> > >
> > >
> > > Ken Thomson wrote:
> > >
> > > > The server operates fine, and the authentication works as
> > > > expected.  My problem lies with the access.log file.
> > > > Every request from a client is first denied and then
> > > > accepted after being authenticated.  This happens to
> > > > *EVERY* request.
> > >
> > > Yes, this is because of how NTLM authentication works.
> > >
> > > On each new TCP connection from the browser the following happens
> > >
> > > 1a. Browser sends request without authentication
> > > 1b. Rejected by Squid as there is no authentication, squid proposing to
> > > use NTLM
> > > 2a. Browser sends request with a NTLM NEGOTIATE packet embedded in the
> > > headers
> > > 2b. Rejected by Squid with a NTLM CHALLENGE packet embedded in the
> > > headers
> > > 3a. Browser sends request with a NTLM AUTHENTICATE packet embedded in
> > > the headers
> > > 3b. Connection accepted by Squid if the authentication is successful.
> > > This request and any future requests on the same TCP connection is
> > > forwarded.
> > >
> > > All responses by Squid is logged.
> > >
> > > If this disturbs your log statistics then filter out TCP_DENIED/407
> > > lines with no username before processing the logs.
> > >
> > > Regards
> > > Henrik
> --
> Henrik Nordstrom <[EMAIL PROTECTED]>
> MARA Systems AB, Sweden


[squid-users] Error Messages

2003-03-25 Thread Riza Tantular
Hi all,

Can we hide error messages in squid?
How can we do that?
Thanks

Riza






Re: [squid-users] Error Messages

2003-03-25 Thread Ben White
Maybe you can refer to the link below:

http://www.squid-cache.org/mail-archive/squid-users/200303/0459.html

--- Riza Tantular <[EMAIL PROTECTED]> wrote: > Hi all,
> 
> Can we hide an error messages in squid ?
> How can we do that ?
> Thanks
> 
> Riza


__
Do You Yahoo!?
Promote your business from just $5 a month!
http://sg.biztools.yahoo.com


[squid-users] multiple lines for same port acl

2003-03-25 Thread Gary Price (ICT)
Hi 

I get the impression that the following default from squid.conf

acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210  # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280  # http-mgmt
acl Safe_ports port 488  # gss-http
acl Safe_ports port 591  # filemaker
acl Safe_ports port 777  # multiling http

is the same as

acl Safe_ports port 80 21 443 563 70 210 1025-65535 280 488 591 777

Is this so? If so, is this a property of any other acl types?

Thanks
Gary Price
ICT





[squid-users] SSL problem on cache hierarchy

2003-03-25 Thread José Luis Serrano Rivera
I hope you can help me

I've a squid parent cache server running SuSE Linux 7.3; it serves HTTP
to Windows clients and to a secondary server.

I can configure clients to browse web pages using either the parent or
the secondary cache server, but the problem comes when they try to
access an https (SSL) site through the secondary server.  I don't have
this problem when I use the parent cache, only when I use the secondary
cache server.  I don't know what to do.

I await your help


_
Do You Yahoo!?
La mejor conexión a internet y 25MB extra a tu correo por $100 al mes. 
http://net.yahoo.com.mx


[squid-users] Underscore not allowed

2003-03-25 Thread Ben White
Hi,

Squid won't let me go to this web site, which has an
underscore in the URL:

http://dear_raed.blogspot.com/

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL:
http://dear_raed.blogspot.com/ 

The following error was encountered: 

 Invalid URL 

Some aspect of the requested URL is incorrect.
Possible problems: 

Missing or incorrect access protocol (should be
`http://'' or similar) 
Missing hostname 
Illegal double-escape in the URL-Path 
Illegal character in hostname; underscores are not
allowed

When I use another proxy which is not squid, I have no
problem accessing the website.

How do I make squid allow the access of urls with
underscore ?

Thanks.


__
Do You Yahoo!?
Promote your business from just $5 a month!
http://sg.biztools.yahoo.com


Re: [squid-users] Timeouts details and Retry problems

2003-03-25 Thread Victor Tsang
I see, thank you very much.  

Tor.

Henrik Nordstrom wrote:
> 
> Only by editing the source.
> 
> 1. Make sure server side persistent connections are disabled in
> squid.conf.
> 
> 2. modify fwdCheckRetry() in forward.c to always return 0.
> 
> Regards
> Henrik
> 
> Victor Tsang wrote:
> >
> > Is there a way to turn off such feature, or control the number of retry
> > squid does?
> >
> > Thanks.
> > Tor
> >
> > Henrik Nordstrom wrote:
> > >
> > > mån 2003-03-24 klockan 12.11 skrev Fabrice DELHOSTE:
> > >
> > > > So after installing Squid, we modified connect_timeout and
> > > > read_timeout. We found configurations that works but we would like to
> > > > understand precisely why. Moreover, due to our misunderstanding, we
> > > > sometimes have strange effects such as double requests (or even more)
> > > > to the content server whereas the application correctly receives the
> > > > timeout error by sending only one request. Any idea?
> > >
> > > Squid automatically retries requests if the first attempt fails, to
> > > increase the likelyhood of a successful reply. Depending on the
> > > conditions there may be as much as up to 10 retries of the same request.
> > >
> > > The same is also done by most browsers.
> > >
> > > --
> > > Henrik Nordstrom <[EMAIL PROTECTED]>
> > > MARA Systems AB, Sweden
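For reference, the change Henrik describes in step 2 could look roughly like this against Squid 2.5's forward.c. This is a sketch, not a tested patch; the exact function body and context lines depend on your source version, so verify against your own tree before applying:

```diff
--- forward.c.orig
+++ forward.c
@@
 static int
 fwdCheckRetry(FwdState * fwdState)
 {
+    /* local modification: never retry a failed forwarding attempt */
+    return 0;
```

With server-side persistent connections disabled as in step 1, this makes each client request map to at most one forwarding attempt.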


[squid-users] header_access design problem?

2003-03-25 Thread Gerhard Wiesinger
Hello!

I'm trying to get the header_access feature to work (squid 2.5.STABLE2).
It works well except for the following:

1.) header_access works well for request headers, but response headers are
cut as well!!!
2.) Adding a currently unknown header (e.g. Depth for WebDAV) requires
modifying and recompiling the source. Adding other new headers should also
be much easier (e.g. via config). Maybe there should be a warning message,
but filtering should still be done correctly.

If this is not done, the following warning occurs:
2003/03/26 07:45:38| squid.conf line 2851: header_access Depth allow all
2003/03/26 07:45:38| parse_http_header_access: unknown header name Depth.

HttpHeader.c:
--- HttpHeader.c.orig   Tue Mar 25 17:25:03 2003
+++ HttpHeader.cTue Mar 25 17:25:44 2003
@@ -126,6 +126,7 @@
 {"X-Request-URI", HDR_X_REQUEST_URI, ftStr},
 {"X-Squid-Error", HDR_X_SQUID_ERROR, ftStr},
 {"Negotiate", HDR_NEGOTIATE, ftStr},
+{"Depth", HDR_DEPTH, ftStr},
 #if X_ACCELERATOR_VARY
 {"X-Accelerator-Vary", HDR_X_ACCELERATOR_VARY, ftStr},
 #endif

--- enums.h.origTue Mar 25 17:22:46 2003
+++ enums.h Tue Mar 25 17:24:37 2003
@@ -236,6 +236,7 @@
 HDR_X_REQUEST_URI, /* appended if ADD_X_REQUEST_URI is
#defined */
 HDR_X_SQUID_ERROR,
 HDR_NEGOTIATE,
+HDR_DEPTH,
 #if X_ACCELERATOR_VARY
 HDR_X_ACCELERATOR_VARY,
 #endif

Thank you for the answer.

Ciao,
Gerhard


Re: [squid-users] Underscore not allowed

2003-03-25 Thread Henrik Nordstrom
See the FAQ.

Regards
Henrik


Ben White wrote:
> 
> Hi,
> 
> Squid won't let me go to this web site which has
> underscore in the url :
> 
> http://dear_raed.blogspot.com/
> 
> ERROR
> The requested URL could not be retrieved
> 
> While trying to retrieve the URL:
> http://dear_raed.blogspot.com/
> 
> The following error was encountered:
> 
>  Invalid URL
> 
> Some aspect of the requested URL is incorrect.
> Possible problems:
> 
> Missing or incorrect access protocol (should be
> `http://'' or similar)
> Missing hostname
> Illegal double-escape in the URL-Path
> Illegal character in hostname; underscores are not
> allowed
> 
> When I use another proxy that is not Squid, I have no
> problem accessing the website.
> 
> How do I make Squid allow access to URLs with
> underscores?
> 
> Thanks.
> 
> __
> Do You Yahoo!?
> Promote your business from just $5 a month!
> http://sg.biztools.yahoo.com
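The FAQ entry Henrik points to boils down to a build-time option in Squid 2.x: hostname underscores are rejected by default and tolerated only if Squid was compiled with the corresponding configure flag. A sketch of the rebuild, assuming the 2.x flag name (verify against your version's `./configure --help`):

```
# Rebuild Squid with underscore-tolerant hostname parsing
./configure --enable-underscores
make
make install
```

After reinstalling and restarting Squid, http://dear_raed.blogspot.com/ should no longer trigger the "Illegal character in hostname" error.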


Re: [squid-users] SSL problem on cache hierarchy

2003-03-25 Thread Henrik Nordstrom
Which Squid version?

What symptoms do you get?

What is said in access.log?

Can your Squid go direct, or must the parent be used? If the parent must
be used, have you told Squid that you are inside a firewall? (see the
FAQ)

Regards
Henrik




José Luis Serrano Rivera wrote:
> 
> I hope you can help me
> 
> I have a Squid parent cache server running SuSE Linux
> 7.3; it serves HTTP to Windows clients and to another,
> secondary server.
> 
> Actually I can configure clients to browse web pages
> using the parent or the secondary cache server, but
> the problem is when they try to access a https (SSL
> protocol) using the secondary server. I don't have
> this problem when I use the parent cache only when I
> use the secondary cache server. I don't know what to
> do.
> 
> I wait for your help
> 
> _
> Do You Yahoo!?
> The best internet connection and 25 MB extra for your mail, for $100 a month.
> http://net.yahoo.com.mx


Re: [squid-users] multiple lines for same port acl

2003-03-25 Thread Henrik Nordstrom
"Gary Price (ICT)" wrote:

> I get the impression that the following default from squid.conf
> 
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
[...]
> 
> is the same as
> 
> acl Safe_Ports 80 21 443 563 70 210 1025-65535 280 488 591 777
> 
> Is this so? If so, is this a property of any other acl types?

Yes, yes. All, unless broken (as was the case for the time acl in
2.5.STABLE1 and earlier..).

Regards
Henrik
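Spelled out in squid.conf syntax, the two equivalent forms look like this. Note that the single-line form still needs the `port` type keyword, which the example in the question omitted:

```
# Multi-line form (the squid.conf default style):
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews

# Equivalent single-line form:
acl Safe_ports port 80 21 443 563 70 210 1025-65535 280 488 591 777
```

Squid simply merges all values given for ACL lines that share the same name and type.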


Re: [squid-users] header_access design problem?

2003-03-25 Thread Henrik Nordstrom
Gerhard Wiesinger wrote:
> 
> Hello!
> 
> I'm trying to get the header_access feature to work (squid 2.5.STABLE2).
> It works well except for the following:
> 
> 1.) header_access works well for request headers, but response headers are
> cut as well!!!

Yes. This is intentional.

> 2.) Adding a currently unknown header (e.g. Depth for WebDAV) requires
> modifying and recompiling the source. Adding other new headers should also
> be much easier (e.g. via config). Maybe there should be a warning message,
> but filtering should still be done correctly.

This is a known problem with Squid. If you want to filter new headers
you currently either need to use paranoid mode only allowing what should
be allowed, or modify the source.

A patch correcting this is welcome if it bothers you.

Regards
Henrik
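The "paranoid mode" Henrik mentions is a default-deny header policy in squid.conf: explicitly re-allow the headers your applications need, then deny everything else, so unknown headers like Depth are stripped without touching the source. A sketch modeled on the paranoid example shipped in the 2.5 squid.conf.default (trim the allow list to taste):

```
# Paranoid mode: re-allow needed headers, deny everything else
header_access Allow allow all
header_access Authorization allow all
header_access Cache-Control allow all
header_access Content-Length allow all
header_access Content-Type allow all
header_access Host allow all
header_access User-Agent allow all
header_access All deny all
```

The trade-off is the inverse of Gerhard's problem: instead of unknown headers passing through unfiltered, anything not explicitly allowed is removed.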


Re: [squid-users] SSL problem on cache hierarchy

2003-03-25 Thread Matthias Henze
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I have exactly the same problem.

I can see the HTTPS request in the logs of the secondary Squid (which also
does proxy auth), but the log entry does not appear immediately; it appears
only after the secondary proxy's timeout. I think the secondary proxy tries
to connect DIRECTLY to the requested SSL host instead of through its parent.

How could this be resolved?

TIA

mh

- --On Wednesday, 26 March 2003 08:00 +0100 Henrik Nordstrom 
<[EMAIL PROTECTED]> wrote:

> Which Squid version?
>
> What symptoms do you get?
>
> What is said in access.log?
>
> Can your Squid go direct, or must the parent be used? If the parent must
> be used, have you told Squid that you are inside a firewall? (see the
> FAQ)
>
> Regards
> Henrik
>
>
>
>
> José Luis Serrano Rivera wrote:
>>
>> I hope you can help me
>>
>> I have a Squid parent cache server running SuSE Linux
>> 7.3; it serves HTTP to Windows clients and to another,
>> secondary server.
>>
>> Actually I can configure clients to browse web pages
>> using the parent or the secondary cache server, but
>> the problem is when they try to access a https (SSL
>> protocol) using the secondary server. I don't have
>> this problem when I use the parent cache only when I
>> use the secondary cache server. I don't know what to
>> do.
>>
>> I wait for your help
>>
>> _
>> Do You Yahoo!?
>> The best internet connection and 25 MB extra for your mail, for $100 a month.
>> http://net.yahoo.com.mx
>
>






Matthias Henze                     [EMAIL PROTECTED]

Use PGP!! http://www.mhcsoftware.de/MatthiasHenze.asc
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
MHC SoftWare GmbH        voice: +49-(0)9533-92006-0
Fichtera 17                fax: +49-(0)9533-92006-6
96274 Itzgrund/Germany  e-mail: [EMAIL PROTECTED]
- - - - - - - - - - - - - - - - - - - - - - - - - - - -
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.0.6 (MingW32)
Comment: Weitere Infos: siehe http://www.gnupg.org

iEYEARECAAYFAj6BVbMACgkQkuyUDXwkmpZgRACeOgfZkp5jEBIUrurKKkPw+57S
GesAnRpkStl53Hw+ESCooNW6NsxbQwRD
=3rb8
-END PGP SIGNATURE-





[squid-users] median_svc_time

2003-03-25 Thread atit jariwala
In the cachemgr.cgi script there is a cache utilization section displaying
various statistics, including the following:

client_http.all_median_svc_time
client_http.miss_median_svc_time
client_http.nm_median_svc_time
client_http.nh_median_svc_time
client_http.hit_median_svc_time

I want to get better performance in terms of response time for users
accessing the cache.

For that:
should all_median_svc_time be high or low?
should hit_median_svc_time be high or low?
should miss_median_svc_time be high or low?

I also think (not sure) that all_median_svc_time is an average service
time derived from the other four service times.
Am I making sense?

waiting for reply.
regards
==atit






[squid-users] top n site listing

2003-03-25 Thread atit jariwala
I want to know the top N sites accessed by my clients.
I am using squid 2.5 STABLE1.

How can I achieve this via Squid?

regards
==atit



[squid-users] want to freeze some sites permanently in RAM

2003-03-25 Thread atit jariwala
I am using SQUID 2.5 STABLE1

The following are some sites frequently used by clients:

www.yahoo.com
www.mail.yahoo.com
www.rediff.com
www.updates.microsoft.com



I want to freeze all static content of these sites permanently in memory.
It should not be evicted even when other content is swapped out, so that I
get TCP_MEM_HIT for all of these and my clients have better response times.

Is this possible?
If so, how?

waiting for reply

regards
atit




[squid-users] hotmail problem

2003-03-25 Thread Raja R
Hi,
I am posting this question again as I did not get any reply. Please help
me. I have a strange problem with Hotmail: the page says DONE before it
fully loads, and nothing is displayed. I am using squid 2.5 stable 1. I
think Squid is signalling completion too fast, before actually loading the
page. I am facing this problem only with Hotmail. I have also allowed
connections to port 443 (https) in the config, as Hotmail requires it, but
still no luck. Any pointers?

Regards,
Raja.



RE: [squid-users] top n site listing

2003-03-25 Thread Boniforti Flavio
> I want to know top n sites accessed by my clients
> I am using squid 2.5 STABLE1
> 
> how to achieve this via squid...
> waiting for reply

I use Webalizer, SARG, and Calamaris; you can find them all with
Google.
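In addition to the log analyzers above, a rough top-N list can be pulled straight from access.log with standard tools. This assumes Squid's default native log format, where the request URL is the 7th whitespace-separated field; adjust the field numbers if you use a custom log format:

```shell
# Top 20 requested hosts from Squid's native access.log.
# Field 7 is the URL; splitting it on "/" makes the hostname field 3.
LOG=${1:-/var/log/squid/access.log}
awk '{print $7}' "$LOG" \
  | awk -F/ '{print $3}' \
  | sort | uniq -c | sort -rn | head -20
```

This counts requests per hostname, not bytes; for traffic volume you would sum field 5 (the size) per host instead.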