Re: [squid-users] make pinger error

2003-03-24 Thread Henrik Nordstrom
Marc Elsen wrote:
 
 SSCR Internet Admin wrote:
 
  I just run make pinger on the src tree.. and have this result
 
  [EMAIL PROTECTED] src]# make pinger
  gcc  -g -O2 -Wall  -g -o pinger  pinger.o debug.o
  globals.o -L../lib -lmiscutil -lpthread -ldl -lm -lresolv -lbsd -lnsl
  /usr/bin/ld: cannot find -lmiscutil
  collect2: ld returned 1 exit status
  make: *** [pinger] Error 1
 
  where shall I begin looking?
 
  Did you execute an appropriate configure command for squid ?


You also need to first build Squid.

Note: pinger is automatically built and installed when ICMP pinging is
enabled by configure, but you will need to finish the installation as
root. See the Squid FAQ for pinger installation instructions.
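For reference, the usual sequence looks roughly like the sketch below (the install-pinger target is the one the Squid FAQ documents; pinger must end up setuid root to open raw ICMP sockets, which is why the last step runs as root):

```shell
./configure --enable-icmp   # enable ICMP support in the build
make                        # builds squid, and pinger along with it
make install
su                          # become root, then finish with:
make install-pinger         # installs pinger setuid root
```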

Regards
Henrik


Re: [squid-users] Directives removed ?

2003-03-24 Thread Henrik Nordstrom
Ben White wrote:

 Have these directives been removed :
 
 cache_stoplist, cache_host_acl

See the no_cache and cache_peer_access directives.

 Is local_domain now replaced by always_direct,
 inside_firewall by never_direct ?

Approximately. The new directives work slightly differently and are more
flexible.

Regards
Henrik


RE: [squid-users] make pinger error

2003-03-24 Thread SSCR Internet Admin
Well, I just compiled and installed Squid as root, and I had --enable-icmp
in configure... but pinger still exits when Squid is running. That's why
I tried make pinger... but no luck.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
Henrik Nordstrom
Sent: Sunday, March 23, 2003 11:49 PM
To: Marc Elsen
Cc: SSCR Internet Admin; squid-mailing list
Subject: Re: [squid-users] make pinger error


Marc Elsen wrote:

 SSCR Internet Admin wrote:
 
  I just run make pinger on the src tree.. and have this result
 
  [EMAIL PROTECTED] src]# make pinger
  gcc  -g -O2 -Wall  -g -o pinger  pinger.o debug.o
  globals.o -L../lib -lmiscutil -lpthread -ldl -lm -lresolv -lbsd -lnsl
  /usr/bin/ld: cannot find -lmiscutil
  collect2: ld returned 1 exit status
  make: *** [pinger] Error 1
 
  where shall I begin looking?

  Did you execute an appropriate configure command for squid ?


You also need to first build Squid.

Note: pinger is automatically built and installed when ICMP pinging is
enabled by configure, but you will need to finish the installation as
root. See the Squid FAQ for pinger installation instructions.

Regards
Henrik

--
This message has been scanned for viruses and
dangerous contents on SSCR Email Scanner Server, and is
believed to be clean.
---
Incoming mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.463 / Virus Database: 262 - Release Date: 3/17/2003






[squid-users] ymessenger

2003-03-24 Thread arun

 I am using ymessenger version .99 and its installation is OK on the Linux
client. I have tried configuring the proxy with the IP/port 8080 defined for
the Squid proxy (I am using Squid 2.4.STABLE6 on RHL 7.3 with PAM
authentication on the server), but when I give my Yahoo ID/password it
stays in connecting mode and never finishes.
 
 I have also tried GAIM, which gives the error "Unable to sign on:
 Connection Problem" while connecting.
 
Can anybody tell me what I need to set in these to run ymessenger
properly?
 
 Arun
 
 




[squid-users] Timeouts details and Retry problems

2003-03-24 Thread Fabrice DELHOSTE
Hi *,

We are new to Squid and have some problems around timeouts.
First, please tell us if there is more information about timeouts than what the
guides and FAQ provide; we do not clearly understand their behaviour even though
they seem well documented.

In particular, we have an application that does some browsing through a Squid
proxy to a content server. Unfortunately, the third-party HTTP client layer in
our application does not support timeouts, which is why we would like the proxy
to enforce request timeouts for us. So after installing Squid, we modified
connect_timeout and read_timeout. We found configurations that work, but we
would like to understand precisely why. Moreover, due to our misunderstanding,
we sometimes see strange effects such as duplicate requests (or even more)
reaching the content server even though the application sent only one request
and correctly received the timeout error. Any idea?
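The two directives mentioned above are set in squid.conf; a sketch with purely illustrative values (not recommendations):

```conf
# squid.conf (example values only)
connect_timeout 30 seconds   # limit on establishing the server-side TCP connection
read_timeout 2 minutes       # limit on waiting for further data from the server
```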

Thanks for your help.

Fabrice Delhoste
[EMAIL PROTECTED]



[squid-users] How to enable upload of > 1 MB file using Squid?

2003-03-24 Thread Tan, Kian Tiong
Hi all,

Would like to know if there is any way to allow > 1 MB via FTP in Squid?

I have successfully uploaded a file of 1,001 KB to a remote site running
a Notes application, but the connection is refused when uploading a file
of 1,263 KB.

No ACL has been imposed on FTP.

Warmest Regards,
Tan Kian Tiong





Re: [squid-users] your cache is Running out of filedescriptors

2003-03-24 Thread Marc Elsen


Jeff Donovan wrote:
 
 greetings
 Can someone give me an explanation of this error? I understand
 (limited) that it has something to do with the OS reaching some limit.
 
 Currently I am running
 Squid 2.5 Stable1
 cache mem = 256mb
 cache size = 16gb
 
 SquidGuard 1.2.0
 BekeleyDB 2.7.7
 OSX 10.2.4 server
 dual 1ghz PowerPC G4
 Memory 2 GB
 
  Just before the file descriptor error I receive these notices:
 
   parsehttpRequest ; requestheader contains NULL characters
 ClientReadRequest : FD {somenumber} Invalid request
 WARNING! Your cache is running out of filedescriptors

 Unless someone is launching some kind of denial of service
 attack against your Squid, those two lines are normally unrelated
 to the out-of-file-descriptors problem.
 Check access.log to see which kind of requests are being processed
 by Squid at the time of these errors.

 However, you may need to increase the available number of file
 descriptors. I do not know how to do this on OSX, however.


 M.


 
 Any insight?
 
 --jeff

-- 

 'Time is a consequence of Matter thus
 General Relativity is a direct consequence of QM
 (M.E. Mar 2002)


Re: [squid-users] your cache is Running out of filedescriptors

2003-03-24 Thread MASOOD AHMAD
It seems that your OS supports up to 12288 file descriptors,
but you have not started Squid with more than 1024.
Kill the Squid process and then restart it with a command like:

ulimit -HSn 2048 (or more than that)

and then start Squid.
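A sketch of that sequence (the squid binary path is an assumption; adjust to your install):

```shell
squid -k shutdown                # stop the running squid cleanly
ulimit -HSn 2048                 # raise hard+soft fd limits in this shell
/usr/local/squid/sbin/squid      # start squid, which inherits the new limit
```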

Best Regards,
Masood Ahmad Shah
System Administrator
Fibre Net 
Cell #   923004277367


--- Jeff Donovan [EMAIL PROTECTED] wrote:
 Silly me, I found the relevant part in FAQ 11.4,
 FreeBSD
 
 by Torsten Sturm
 How do I check my maximum filedescriptors?
 
 Do sysctl -a and look for the value of
 kern.maxfilesperproc .
 How do I increase them?
 sysctl -w kern.maxfiles=
  sysctl -w kern.maxfilesperproc=
 Warning: You probably want maxfiles >
 maxfilesperproc if you're going 
 to be pushing the limit.
 What is the upper limit?
 
 I don't think there is a formal upper limit inside
 the kernel. All the 
 data structures are dynamically allocated.  In
 practice there might be 
 unintended metaphenomena (kernel spending too much
 time searching 
 tables, for example).
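 The FAQ recipe above, as a concrete sketch (the numbers are placeholders;
 keep maxfiles above maxfilesperproc):

```shell
sysctl kern.maxfilesperproc          # inspect the current per-process limit
sysctl -w kern.maxfiles=16384        # raise the system-wide ceiling first
sysctl -w kern.maxfilesperproc=8192  # then the per-process ceiling
```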
 
 Here is my kernel output: I would assume I could
 increase maxproc and maxfiles.
 
 
 
 [squidx:~] root# sysctl -a | more
 kern.ostype = Darwin
 kern.osrelease = 6.4
 kern.osrevision = 199506
 kern.version = Darwin Kernel Version 6.4:
 Wed Jan 29 18:50:42 PST 2003;
 root:xnu/xnu-344.26.obj~1/RELEASE_PPC
 
 
 kern.maxvnodes = 33584
 kern.maxproc = 2048
 kern.maxfiles = 12288
 
 any suggestions on how much to increase this by?
 
 kern.argmax = 65536
 kern.securelevel = 1
 kern.hostname = squidx
 kern.hostid = 3223847169
 kern.clockrate: hz = 100, tick = 1, profhz =
 100, stathz = 100
 kern.posix1version = 198808
 kern.ngroups = 16
 kern.job_control = 1
 kern.saved_ids = 0
 kern.boottime = Sat Mar 22 19:52:28 2003
 
 {snip}--not relative
 
 On Monday, March 24, 2003, at 10:19 AM, Marc Elsen
 wrote:
 
 
 
  Jeff Donovan wrote:
 
parsehttpRequest ; requestheader contains NULL
 characters
  ClientReadRequest : FD {somenumber} Invalid
 request
  WARNING! Your cache is running out of
 filedescriptors
 
   Unless someone would launch some kind of denial
 of service
   attack against your squid. The 2 lines are
 normally unrelated
   to the out of file desc. problem.
   Check access.log to see which kind of requests
 are being processed
   by squid during the time of these error(s).
 
   However you may need to increase the available no
 of file descriptors.
   I do not know how to do this on OSX however.
 
 
   M.
 


__
Do you Yahoo!?
Yahoo! Platinum - Watch CBS' NCAA March Madness, live on your desktop!
http://platinum.yahoo.com


Re: [squid-users] your cache is Running out of filedescriptors

2003-03-24 Thread Jeff Donovan
OK, let me get this straight:

send a HUP signal to squid
and restart with
./squid ulimit -HSn 2048
(doesn't look right to me)

./squid -h
doesn't show much in line with what you are saying. Do I need to
recompile squid, maybe?

--jeff

On Monday, March 24, 2003, at 11:16 AM, MASOOD AHMAD wrote:

It seems that your OS supports up to 12288 file descriptors,
but you have not started Squid with more than 1024.
Kill the Squid process and then restart it with a command like:
ulimit -HSn 2048 (or more than that)

and then start Squid.

Best Regards,
Masood Ahmad Shah
System Administrator
Fibre Net
Cell #   923004277367
--- Jeff Donovan [EMAIL PROTECTED] wrote:
Silly me , i found a part in the  FAQ-11.4
FreeBSD
by Torsten Sturm
How do I check my maximum filedescriptors?
Do sysctl -a and look for the value of
kern.maxfilesperproc .
How do I increase them?
sysctl -w kern.maxfiles=
 sysctl -w kern.maxfilesperproc=
Warning: You probably want maxfiles >
maxfilesperproc if you're going
to be pushing the limit.
What is the upper limit?
I don't think there is a formal upper limit inside
the kernel. All the
data structures are dynamically allocated.  In
practice there might be
unintended metaphenomena (kernel spending too much
time searching
tables, for example).
Here is my kernel output: i would assume i could
increase the
maxproc and the maxfiles.


[squidx:~] root# sysctl -a | more
kern.ostype = Darwin
kern.osrelease = 6.4
kern.osrevision = 199506
kern.version = Darwin Kernel Version 6.4:
Wed Jan 29 18:50:42 PST 2003;
root:xnu/xnu-344.26.obj~1/RELEASE_PPC
kern.maxvnodes = 33584
kern.maxproc = 2048
kern.maxfiles = 12288
any suggestions on how much to increase this by?

kern.argmax = 65536
kern.securelevel = 1
kern.hostname = squidx
kern.hostid = 3223847169
kern.clockrate: hz = 100, tick = 1, profhz =
100, stathz = 100
kern.posix1version = 198808
kern.ngroups = 16
kern.job_control = 1
kern.saved_ids = 0
kern.boottime = Sat Mar 22 19:52:28 2003
{snip}--not relative

On Monday, March 24, 2003, at 10:19 AM, Marc Elsen
wrote:


Jeff Donovan wrote:
  parsehttpRequest ; requestheader contains NULL
characters
ClientReadRequest : FD {somenumber} Invalid
request
WARNING! Your cache is running out of
filedescriptors
 Unless someone would launch some kind of denial
of service
 attack against your squid. The 2 lines are
normally unrelated
 to the out of file desc. problem.
 Check access.log to see which kind of requests
are being processed
 by squid during the time of these error(s).

 However you may need to increase the available no
of file descriptors.
 I do not know how to do this on OSX however.

 M.






[squid-users] Squid-2.5.STABLE2 compile

2003-03-24 Thread Peter Smith
I copied and altered a squid-2.5.STABLE1-2.src.rpm which I'd cobbled 
together into a squid-2.5.STABLE2-1.src.rpm.  However, upon building, 
I now get an 'aufs/aiops.c:36:2: #error _REENTRANT MUST be defined to 
build squid async io support.' error.  Any ideas as to why I would get 
this with Squid-2.5.STABLE2 and not with Squid-2.5.STABLE1?

Btw, here is my %configure line for the SRPM...  (note that pthreads is 
enabled.)

%configure \
  --exec_prefix=/usr --bindir=/usr/sbin --libexecdir=/usr/lib/squid \
  --localstatedir=/var --sysconfdir=/etc/squid --datadir=/usr/lib/squid \
  --enable-poll --enable-snmp --enable-removal-policies=heap,lru \
  --enable-delay-pools --enable-linux-netfilter \
  --enable-carp --with-pthreads \
  --enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,MSNT \
  --enable-storeio=aufs,coss,diskd,ufs,null
This doesn't make sense as I've read 
'http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE1-aufs_reentrant' 
and already have pthreads enabled.

Peter Smith



RE: [squid-users] Redirecting Squid traffic to another Proxy. How ?

2003-03-24 Thread Chris Val Bamber
Hi,

Thanks ever so much for the info, but I need to change the way this
works because the access to the other proxy is actually obtained
through another gateway, and of course you cannot have more than one
default gateway on a box.

So what I would like to do is set up Squid Box 1 to redirect any
traffic for companydomain.com to Squid Box 2. Squid Box 2 will then
proxy the normal way.

Thanks ever so much

Chris

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Henrik Nordstrom
Sent: 01 February 2003 11:29
To: Chris  Val Bamber
Cc: [EMAIL PROTECTED]
Subject: Re: [squid-users] Redirecting Squid traffic to another Proxy.
How ?


Something like this should work:

cache_peer proxy-jp.your.domain parent jpproxyportnumber 0 no-query
acl intranet dstdomain intranet.your.domain
cache_peer_access proxy-jp.your.domain allow intranet
never_direct allow intranet

You need to replace proxy-jp.your.domain, jpproxyportnumber and
intranet.your.domain with values suitable for your setup.

Regards
Henrik

Chris  Val Bamber wrote:
 
 Hi,
 
 We presently have a Squid 2.5 server and are happy with it. We now
 need to make our company intranet (based in Japan) available on the
 network. It is not reachable through the normal T1 line, only via a
 frame relay link.
 
 What I would like to do is have Squid automatically forward requests
 for our intranet site to the proxy box based in Japan rather than
 going through the T1.
 
 We did it in a test environment using an ISA server and a Squid proxy
 together. Everyone pointed to the ISA, and a set of rules was
 configured on the ISA to direct the traffic (upstream servers, I
 believe ISA calls them).
 
 I am hoping I can do this with two Squid boxes instead, rather than
 using the ISA. Buying hardware and software for the ISA is very
 expensive, so I want to avoid it if I can.
 
 I have looked through squid.conf, but I am not really sure what I need
 to read up on. Is it the cache_peer sections, or perhaps redirectors?
 If I can merely direct all requests that match a certain IP range or
 domain name then I think I will be onto
a winner.
 
 Thanks in advance.
 Chris
 
 PS After lots of reading on the FAQs I managed to get NTLM 
 authentication working,  a nice feature to have!




RE: [squid-users] Redirecting Squid traffic to another Proxy. How ?

2003-03-24 Thread Henrik Nordstrom
Mon 2003-03-24 at 19.16, Chris Val Bamber wrote:
 Hi,
 
 Thanks ever so much for the Info, but I need to change the way this
 works because
 The access to the other proxy is actually obtained through another
 gateway, and
 Of course you can not have more than 1 default gateway on a box.

Sure you can. But in this case I don't see why you would need two
default gateways; a normal network route to the network/host of proxy 2
via the gateway there should be sufficient.

Regards
Henrik
-- 
Henrik Nordstrom [EMAIL PROTECTED]
MARA Systems AB, Sweden



Re: [squid-users] Is it possible ?

2003-03-24 Thread Henrik Nordstrom
Yes, you can do this with the help of a redirector which, when it detects
a download link, redirects the user to a CGI program returning the
customizable error message and other options, including a link for
downloading.

The download link is based on the originally requested URL, but modified
in such a manner that the redirector will recognize it as a download link
and rewrite it back to the originally requested URL. This can be done,
for example, by modifying the host component, or by adding a ';'-separated
download tag to the end of the URL.
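As a sketch of that scheme (the ";download" tag convention and the blocked.cgi URL are invented for illustration; Squid feeds a redirector one "URL ip/fqdn ident method" line per request on stdin):

```shell
# rewrite_url holds the rewriting logic as a function, so it can be
# exercised without wiring it into squid.
rewrite_url() {
  url=$1
  case "$url" in
    *\;download)   # tagged link: strip the tag to recover the original URL
      printf '%s\n' "${url%;download}" ;;
    *.mp3)         # download link: send the user to a hypothetical CGI page
      printf '%s\n' "http://proxy.example.com/cgi-bin/blocked.cgi?url=$url" ;;
    *)             # everything else passes through unchanged
      printf '%s\n' "$url" ;;
  esac
}

# A real redirector would wrap this in the stdin loop squid expects:
#   while read url rest; do rewrite_url "$url"; done
```

The blocked.cgi page would then present the error message plus a link to "$url;download", which this same function rewrites back to the original URL.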

Regards
Henrik


Tue 2003-02-25 at 10.17, akira wrote:
 Dear All,
 
 I have following scenario for squid, is it possible ?
 
 1. User request file download, eg. mysong.mp3
 2. Squid block the file and display customize error message,  that user can
 :
 2.1 Email the link so that admin/system can download the file and send the
 file to user's email address.
 2.2 Download the file anyway.
 
 Thank You.
-- 
Henrik Nordstrom [EMAIL PROTECTED]
MARA Systems AB, Sweden



Re: [squid-users] blocking FTP access on IP address and timing basis

2003-03-24 Thread Henrik Nordstrom
Mon 2003-03-24 at 11.07, Pragati Dahisarkar wrote:
 Dear Everyone,
 I have a simple question that I do not know
 the answer to. What I was wondering is it possible to
 limit the download size via the cache to say
 10MB during 9am to 5pm Monday - Friday (or just 9am -
 5pm if the days of the week cannot be set). If so is
 someone willing to let me in on the secret to
 do this.

Yes. See the http_reply_max_size directive of Squid-2.5.
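If the limit should only apply weekdays 9am-5pm, Squid's time ACL can express the window. A hedged sketch (day letters are M T W H F A S; whether the size directive can be combined with an ACL depends on your Squid version, so check your squid.conf documentation):

```conf
# squid.conf sketch: Monday-Friday, 09:00-17:00
acl workhours time MTWHF 09:00-17:00
```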

 I am running SQUID version squid-2.3.STABLE4-1.

Then upgrade. There is no reason to be limited by almost two-year-old
software and miss out on later developments.

The current Squid version is 2.5.STABLE2 and includes countless
improvements and bugfixes relative to the very old 2.3.STABLE4
version.

-- 
Henrik Nordstrom [EMAIL PROTECTED]
MARA Systems AB, Sweden



Re: [squid-users] How to enable upload of > 1 MB file using Squid?

2003-03-24 Thread Henrik Nordstrom
Mon 2003-03-24 at 12.13, Tan, Kian Tiong wrote:
 Hi all,
 
 Would like to know if there is any way to allow > 1 MB via FTP in Squid?

Either upgrade to Squid-2.5, where there is no default limit, or see
the request_body_max_size directive in squid.conf.
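A squid.conf sketch of that directive (the value is illustrative; 0 disables the limit, which is the Squid-2.5 default):

```conf
request_body_max_size 10240 KB   # allow request bodies (uploads) up to ~10 MB
```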

-- 
Henrik Nordstrom [EMAIL PROTECTED]
MARA Systems AB, Sweden



Re: [squid-users] only using one first-level cache directory

2003-03-24 Thread Henrik Nordstrom
It does look correct to me, as you have told Squid to put up to 4194304
(2048 * 2048) files in each L1 directory, and since Squid fills the
first directory first, all your files will be in the first L1 directory.

With this setting, all directories from 00/100 to 1F/7FF do not even
have a theoretical chance of being used.

Generally the default L2 parameter of 256 is recommended for all setups
unless you have very specific reasons to change it (I usually use
smaller values when testing with a very small cache, but 256 for larger
caches > 200MB). With an L2 parameter of 256, the default L1 parameter of
16 is good up to about 7GB of cache, IIRC.
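A quick arithmetic check of that guideline: with L1 = 16 and L2 = 256, the directory tree offers

```shell
echo $(( 16 * 256 ))         # second-level directories → 4096
echo $(( 16 * 256 * 256 ))   # object slots at ~256 objects per L2 directory → 1048576
```

which, at typical average web object sizes of a few KB, lands in the single-digit-GB range, consistent with the ~7GB figure.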

Regards
Henrik





Mon 2003-03-24 at 18.47, Leon wrote:
 Hi there!
 
 I spotted something rather odd with my squid today. Using the config line
 
 cache_dir aufs /usr/local/squid/cache 2048 32 2048
 
 with squid-2.5.STABLE1-20030309, my cache directory looks like
 
 [17:45:43 /usr/local/squid/cache]# du -sh *
 2.0G  00
 1.1M  01
 1.1M  02
 1.1M  03
 1.1M  04
 1.1M  05
 1.1M  06
 1.1M  07
 1.1M  08
 1.1M  09
 1.1M  0A
 1.1M  0B
 1.1M  0C
 1.1M  0D
 1.1M  0E
 1.1M  0F
 1.1M  10
 1.1M  11
 1.1M  12
 1.1M  13
 1.1M  14
 1.1M  15
 1.1M  16
 1.1M  17
 1.1M  18
 1.1M  19
 1.1M  1A
 1.1M  1B
 1.1M  1C
 1.1M  1D
 1.1M  1E
 1.1M  1F
 131M  store.log
 15M   swap.state
 
 
 
 Surely this isn't the right behaviour! Has anybody got any ideas why this
 has happened?
 
 
 
 Cheers,  Leon
-- 
Henrik Nordstrom [EMAIL PROTECTED]
MARA Systems AB, Sweden



Re: [squid-users] Squid-2.5.STABLE2 compile

2003-03-24 Thread Henrik Nordstrom
Maybe the spec file overrides CFLAGS when running make, inhibiting the
settings done by configure..

Regards
Henrik


Mon 2003-03-24 at 19.15, Peter Smith wrote:
 I copied and altered a squid-2.5.STABLE1-2.src.rpm which I'd cobbled 
 together to be a squid-2.5.STABLE2-1.src.rpm.  However, upon building, 
 I now get a 'aufs/aiops.c:36:2: #error _REENTRANT MUST be defined to 
 build squid async io support. ' error.  Any ideas as to why I would get 
 this with Squid-2.5.STABLE2 and not with Squid-2.5.STABLE1?
 
 Btw, here is my %configure line for the SRPM...  (note that pthreads is 
 enabled.)
 
 %configure \
--exec_prefix=/usr --bindir=/usr/sbin --libexecdir=/usr/lib/squid \
--localstatedir=/var --sysconfdir=/etc/squid --datadir=/usr/lib/squid \
--enable-poll --enable-snmp --enable-removal-policies=heap,lru \
--enable-delay-pools --enable-linux-netfilter \
--enable-carp --with-pthreads \
--enable-basic-auth-helpers=LDAP,NCSA,PAM,SMB,MSNT \
--enable-storeio=aufs,coss,diskd,ufs,null
 
 This doesn't make sense as I've read 
 'http://www.squid-cache.org/Versions/v2/2.5/bugs/#squid-2.5.STABLE1-aufs_reentrant' 
 and already have pthreads enabled.
 
 Peter Smith
-- 
Henrik Nordstrom [EMAIL PROTECTED]
MARA Systems AB, Sweden



Re: [squid-users] Squid-2.5.STABLE2 compile

2003-03-24 Thread Peter Smith
Yes, this is most likely the case.  Thanks for the tip!  I am working on 
making the CFLAGS more transparent..

Peter

Henrik Nordstrom wrote:

Maybe the spec file overrides CFLAGS when running make, inhibiting the
settings done by configure..
Regards
Henrik




Fw: [squid-users] dual accellerators; navigation woes

2003-03-24 Thread mlister
Debug verbosity helped me see in cache.log
that it was trying port 80 on Squid 2. When I set
Squid 2's http_port to 8003 and
Squid 1's httpd_accel_port to 8003, things started
working.
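Spelled out, the working pair of configurations would look roughly like this (a sketch reconstructed from the ports and addresses in the original post below):

```conf
# SQUID 1 (front accelerator)
http_port 80
httpd_accel_host 10.10.1.73        # SQUID 2's address
httpd_accel_port 8003

# SQUID 2 (back accelerator)
http_port 8003
httpd_accel_host webserver.domain.org
httpd_accel_port 8003
```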


- Original Message -
From: mlister [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, March 24, 2003 3:38 PM
Subject: [squid-users] dual accellerators; navigation woes


 I've setup two accelerators in the following configuration.
 client-SQUID1-SQUID2-webserver
 below is the configurations (away from default) and the
 navigations I am testing.  I need the second navigation
 listed for SQUID 1 to work.

 SQUID 1
 ---
 http_port 80
 httpd_accel_host 10.10.1.73
 httpd_accel_port 80
 http_access allow all

 http://10.10.1.77/OA_HTML/US/ICXINDEX.htm ~~works
 http://10.10.1.77/OA_HTML/jtflogin.jsp~~doesn't work

 SQUID 2
 ---
 http_port 80
 httpd_accel_host webserver.domain.org
 httpd_accel_port 8003
 http_access allow all

 http://10.10.1.73/OA_HTML/US/ICXINDEX.htm ~~works
 http://10.10.1.73/OA_HTML/jtflogin.jsp~~works

 Squid shows the typical "Access control configuration prevents
 your request from being allowed at this time. Please contact
 your service provider if you feel this is incorrect." error page.

 SQUID 1 access.log shows 1048537841.283987 10.10.1.92
 TCP_NEGATIVE_HIT/403 1468 GET http://10.10.1.73/OA_HTML/jtflogin.jsp -
 NONE/- text/html

 SQUID 2 doesn't show an entry for this attempt in its log.

 Above 10.10.1.92 is the client pc, 10.10.1.73 is SQUID 2, and again this
is
 coming from SQUID 1's access log.
 all above navigations work when plugging in the real webserver.domain.org

 Any ideas?




[squid-users] Transparent Proxy, Bridged interfaces SQUID

2003-03-24 Thread Steven Bourque
Hello,

I was hoping someone could help me:

I have linux (debian) kernel 2.4.20 compiled with everything mentioned 
in the transparent proxy/squid HOWTO and iptables working properly:

eth0 is connected to the LAN
eth1 is connected to the WAN
both are set up as members of the bridge br0
br0 has an IP address of 10.10.6.231/24 (part of our local IP's for 
monitoring and configuration)

the Bridging is working, however, it will not grab the port 80 traffic:

I have added the following as stated in the howto:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT 
--to-port 3128

iptables -A INPUT -i br0 -p tcp -d 10.10.6.231 -s 10.10.6.0/24 --dport 
3128 -m state --state NEW,ESTABLISHED -j ACCEPT

(so I can SSH to the box)
iptables -A INPUT -i br0 -p tcp -d 10.10.6.231 -s 10.10.6.0/24 --dport 
22 -m state --state NEW,ESTABLISHED -j ACCEPT

I have also tried the first iptable with -j DNAT --to 10.10.6.231:3128

Neither table gets a hit when viewed with iptable -t nat -v -n -L or 
iptable -v -n -L

Those are the only entries in the iptables, the SSH command does work.
Squid is configured with the entries as noted in the HOWTO; otherwise 
the defaults are used.

Squid is version 2.5.STABLE1

iptables -L -n -v -t nat

Chain PREROUTING (policy ACCEPT 31 packets, 5420 bytes)
pkts  bytes target prot opt inout source destination
00   REDIRECT   tcp  --  eth0 *0.0.0.0/0  0.0.0.0/0
 tcp  dpt:80 redir ports 3128
Chain POSTROUTING (policy ACCEPT)
...
(empty)
Chain OUTPUT (policy ACCEPT)
...
(empty)
iptables -L -n -v
Chain DROP (policy ACCEPT 136 packets, 16195 bytes)
pkts  bytes target prot opt in  out source destination
00   ACCEPT   tcp  --br0 *  0.0.0.0/0  10.10.6.231
 tcp  dpt:3128 state NEW,ESTABLISHED
14 1651  ACCEPT   tcp  --br0 *  0.0.0.0/0  10.10.6.231
 tcp  dpt:22 state NEW,ESTABLISHED
Chain FORWARD (policy ACCEPT)
...
(empty)
Chain OUTPUT (policy ACCEPT)
...
(empty)
We do not want any firewalling on this box, hence the defaults are all 
ACCEPT except the actual connections to the box, which has two accepts 
(SQUID and SSH).

With this setup, I am able to surf the web, but it is bypassing SQUID. 
Everything is continuing to be bridged.

I spent a few days reading everything I can about this.

I found the program divert (I have divert enabled in my kernel)  does 
that have anything to do with it?

I tried it with divert on eth0 enable tcp add dst 80;
that just seemed to kill my browsing as well as not hitting squid or the
filters. Although in a tcpdump -ne -i eth0 tcp dst port 80 I do see the 
MAC address change from that of my next-hop router to the MAC of 
eth0 (which should then get redirected by the iptables rule, shouldn't it?)

any help would be much appreciated! :)

Thanks
--
\Steven.

/*
  | Steven R. Bourque, CCNA
/\   | Network Engineer
\ /  ASCII ribbon campaign| Packet Works Inc.
 X   against HTML email   | p:519.579.4507. f:519.579.8475.
/ \   | http://www.packetworks.net
  | PGP ID: 0x373AB23B
*\


Re: [squid-users] Squid-2.5.STABLE2 compile

2003-03-24 Thread Henrik Nordstrom
Robert Collins wrote:

 Yeah. Just an additional data point: I don't precisely recall when we
 added that #error to the aufs code, but I think it was after
 2.5.STABLE1.

It was long after 2.5.STABLE1. I first wondered why we should have this
check, but as it does not hurt I did not comment, and now I am convinced
;-)

 So: that rpm *may* have been broken for 2.5S1, but we didn't detect the
 breakage.

Quite likely.

Regards
Henrik


Re: [squid-users] only using one first-level cache directory

2003-03-24 Thread Henrik Nordstrom
I suppose you could write a small script which moves the files into
correct place.

Each filename is a hexadecimal number. With second-level parameter L2,
the file belongs in

  dir1 = number / L2 / L2
  dir2 = (number / L2) % L2

  cache_dir/dir1/dir2/number

With L2 = 256 the calculation becomes very easy. Each filename can then
be read

  XXL1L2XX

So I suppose the following script would work

#!/bin/sh
find [0-9]* -type f -print |
  sed -e 's%.*/.*/\(..\)\(..\)\(..\)\(..\)%& \2/\3/\1\2\3\4%' |
while read line; do
  mv $line;
done

Then clean up from the extra L2 directories by running

rm -rf ??/???
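A worked example of the mapping above, assuming L2 = 256 and an arbitrary (hypothetical) swap file number 0x0001ABCD:

```shell
n=$((16#0001ABCD))            # the file number, 109517 decimal
L2=256
l1=$(( n / L2 / L2 ))         # first-level directory
l2=$(( (n / L2) % L2 ))       # second-level directory
printf '%02X/%02X/%08X\n' "$l1" "$l2" "$n"   # → 01/AB/0001ABCD
```

Reading the filename as XXL1L2XX gives the same answer: digits 3-4 (01) are the L1 directory and digits 5-6 (AB) are the L2 directory.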

Regards
Henrik


Leon wrote:
 
 Ah ok then :) Thanks for your quick reply!
 
 Is there a way of moving to the suggested 16/256 structure from what I
 currently have without losing all the currently stored objects?
 
 Cheers,  Leon
 
 - Original Message -
 From: Henrik Nordstrom [EMAIL PROTECTED]
 To: Leon [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Monday, March 24, 2003 7:44 PM
 Subject: Re: [squid-users] only using one first-level cache directory
 
 Does look correct to me as you have told Squid to put up to 4194304
 (2048 * 2048) files in each L1 directory.. and as Squid starts with the
 first directory first all your files will be in the first L1 directory..
 
 With this setting all directories from 00/100 to 1F/7FF will not even
 have a theoretical chance of being used.
 
 Generally the default L2 parameter of 256 is recommended for all setups
 unless you have very specific reasons to change it (I usually use
 smaller values in testing with a very small cache, but 256 for larger
  caches > 200MB). With an L2 parameter of 256 the default L1 parameter of
 16 is good up to about 7GB of cache IIRC.
 
 Regards
 Henrik
 
  Mon 2003-03-24 at 18.47, Leon wrote:
  Hi there!
 
  I spotted something rather odd with my squid today. Using the config line
 
  cache_dir aufs /usr/local/squid/cache 2048 32 2048
 
  with squid-2.5.STABLE1-20030309, my cache directory looks like


Re: [squid-users] Redirecting Squid traffic to another Proxy. How ?

2003-03-24 Thread Henrik Nordstrom
The cache_peer line expects the address or name of the proxy to use.

You then control with cache_peer_access and never_direct which requests
are sent there.

Regards
Henrik


Chris  Val Bamber wrote:
 
 I have replaced the values as shown below.
 
 cache_peer jp.sonix.net parent 8080 0 no-query
 acl intranet dstdomain uk.sonix.com
 cache_peer_access allow jp.sonix.net
 never_direct allow intranet
 
 I do not see from here where I would specify the IP address
 for the proxy in Japan that should handle any requests for
 anything with jp.sonix.net in the URL. Looking at the
 Documentation it seems I might have to specify it on the
 cache_peer line ?
 
 I decided to go back to the original idea of adding an extra
 network card and creating an static route rather than routing
 to another squid box which I was first planning.
 
  Basically, to re-cap the original requirement: I have a single
  Squid box which is used for normal internet access. If a user
  decides to browse for anything in the domain jp.sonix.net then
  it should be directed to a proxy in Japan for processing.
 
 If I wanted to add additional domains for processing by the
 Proxy in Japan I assume I could add additional cache_peer
 lines ?
 
 The uk.sonix.com is my domain.
 
 Thanks
 Chris
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
 Henrik Nordstrom
 Sent: 01 February 2003 11:29
 To: Chris  Val Bamber
 Cc: [EMAIL PROTECTED]
 Subject: Re: [squid-users] Redirecting Squid traffic to another Proxy.
 How ?
 
  Something like this should work:
  
  cache_peer proxy-jp.your.domain parent jpproxyportnumber 0 no-query
  acl intranet dstdomain intranet.your.domain
  cache_peer_access proxy-jp.your.domain allow intranet
  never_direct allow intranet
  
  You need to replace proxy-jp.your.domain, jpproxyportnumber and
  intranet.your.domain with values suitable for your setup.
 
 Regards
 Henrik


Re: [squid-users] SSL-SSL-unencrypted, (was: provide external access)

2003-03-24 Thread Henrik Nordstrom
mlister wrote:
 
  Henrik, I'm making progress due to your help.
 
 I've setup two squid servers for use as follows:
 
 client-SQUID1-SQUID2-webserver
 
 SQUID1 has the following:
 https_port 443 cert=/etc/httpd/conf/ssl.crt/server.crt
 key=/etc/httpd/conf/ssl.key/server.key
 
 SQUID2 has no SSL configuration.
 
 From the client an SSL connection is established and maintained during
 navigation as expected.
 
 How can I determine that communication between SQUID1 and SQUID2 is SSL ??

With the above configuration it is not.

To use SSL between SQUID1 and SQUID2 you must configure SQUID2 as an SSL
server just as SQUID1, and also configure SQUID1 to use SSL when
speaking to SQUID2 (requires the discussed SSL update patch, or to wait
for Squid-3).
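A sketch of the SQUID2 side once it terminates SSL itself (certificate paths are examples, mirroring the SQUID1 line quoted above):

```conf
# SQUID2 squid.conf: accept SSL just as SQUID1 does
https_port 443 cert=/etc/httpd/conf/ssl.crt/server.crt key=/etc/httpd/conf/ssl.key/server.key
```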

Regards
Henrik


[squid-users] NTLM Authentication using the SMB helper - need help with access log problems

2003-03-24 Thread Ken Thomson
Hi everyone,

I have setup a test server using Redhat Linux 8 and Squid 2.5STABLE2 from the source 
distribution.  Squid was configured to use NTLM authentication and in particular the 
SMB helper.  Test clients are using IE 6.0 SP1 (all current patches) on Windows 2000.

The server operates fine, and the authentication works as expected.  My problem lies 
with the access.log file.  Every request from a client is first denied and then 
accepted after being authenticated.  This happens to *EVERY* request.  The log files 
are twice the size they need to be and the huge number of denieds makes analysing the 
logs more difficult.

All of this is transparent to the client.  IE is able to display the websites with no 
problems (apart from the duplicated requests in the background).  I assume that IE is just 
re-authenticating when it receives the denied reply to every request.

My previous experience using Basic authentication and squid access logs showed that 
only the 1st request was denied, prompting the authentication prompt.  After 
successful authentication all requests were allowed.  ie. the browser seemed to hold 
the authentication.

My questions are:
1) Does anyone else with a similar setup using NTLM authentication and SMB experience 
this log problem?
2) Is the problem with the client or with the squid setup?
3) Is there a way to fix it?
4) What is the winbind NTLM helper? How does it differ to SMB?

Thanks in advance to any help or discussion people can provide.

Regards,
Ken.


Re: [squid-users] Timeouts details and Retry problems

2003-03-24 Thread Victor Tsang
Is there a way to turn off such feature, or control the number of retry
squid does?

Thanks.
Tor

Henrik Nordstrom wrote:
 
  Mon 2003-03-24 at 12.11, Fabrice DELHOSTE wrote:
 
  So after installing Squid, we modified connect_timeout and
  read_timeout. We found configurations that works but we would like to
  understand precisely why. Moreover, due to our misunderstanding, we
  sometimes have strange effects such as double requests (or even more)
  to the content server whereas the application correctly receives the
  timeout error by sending only one request. Any idea?
 
  Squid automatically retries requests if the first attempt fails, to
  increase the likelihood of a successful reply. Depending on the
  conditions there may be as many as 10 retries of the same request.
 
 The same is also done by most browsers.
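
For reference, the two directives mentioned earlier in the thread are set in squid.conf; the values below are illustrative only, not recommendations:

```
# Give up sooner on a TCP connect that is not answered
connect_timeout 30 seconds

# Abort if the server goes silent in the middle of a reply
read_timeout 5 minutes
```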
 
 --
 Henrik Nordstrom [EMAIL PROTECTED]
 MARA Systems AB, Sweden


[squid-users] transparent proxyng works but...

2003-03-24 Thread SSCR Internet Admin
I have already set up transparent proxying on my Squid server. Workstations'
IP addresses are masqueraded by iptables and invisibly redirected to Squid on
port 3128 if anyone tries to bypass it, so those workstations can already
connect to the internet without specifying port 3128 in their browsers. But
the workstations that are 2 to 3 hops away from my proxy/firewall server
can't connect to the internet directly and aren't redirected to port 3128,
unlike the workstations that are 1 hop away. What's happening? Is there a bug
in iptables, or something I have to tweak in Squid?
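
For comparison, a typical interception rule looks like the following (the interface name is a placeholder). Note that a PREROUTING rule only sees traffic actually routed through this box, so hosts several hops away must have a default route that passes through it:

```
# Redirect outbound HTTP arriving on the LAN interface to Squid on 3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```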

Thanks.
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.463 / Virus Database: 262 - Release Date: 3/17/2003


-- 
This message has been scanned for viruses and
dangerous contents on SSCR Email Scanner Server, and is
believed to be clean.



[squid-users] blocking kazaa and imesh

2003-03-24 Thread wesley deypalan
 Hi,
 
Is it possible to block Kazaa, iMesh and other p2p
software using Squid? Newer p2p software uses
port 80, and I'm having problems blocking it.
 
TIA
Wesley

__
Do you Yahoo!?
Yahoo! Platinum - Watch CBS' NCAA March Madness, live on your desktop!
http://platinum.yahoo.com


RE: [squid-users] blocking kazaa and imesh

2003-03-24 Thread Mark A Lewis
The User-Agent header would be a good start, I would guess. Look at your
access.log and see what the requests have in common. You may have to enable
user-agent logging.
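
As a sketch of the user-agent approach (the regex is a guess at what the Kazaa client sends, not a verified signature):

```
# Match clients whose User-Agent contains "kazaa", case-insensitively
acl p2p_agents browser -i kazaa
http_access deny p2p_agents
```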

-Original Message-
From: wesley deypalan [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 24, 2003 11:05 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] blocking kazaa and imesh

 Hi,
 
Is it possible to block kazaa,imesh and other p2p
software using squid? Newer p2p software are using
port 80 Im having problems blocking this software.
 
TIA
Wesley


**
This message was virus scanned at siliconjunkie.net and
any known viruses were removed. For a current virus list
see http://www.siliconjunkie.net/antivirus/list.html



[squid-users] Internet Access

2003-03-24 Thread Clayton Hicklin
Hi,
   I'm helping someone develop a kiosk-type Internet access station.  I 
need to be able to ask for a name and credit card information, allow the 
user access once that information is given, and time the session.  All 
of this information needs to be recorded and transmitted to another 
server.  This is a dialup kiosk, so there is no Internet connection 
until the user has entered their CC information.  I know a little of 
Squid and squidGuard, and have played with basic authentication, but I 
need a little help getting started.  I know someone else has probably 
implemented something similar; is there a well-known solution to this?  
I will be implementing on linux boxes with Mozilla, pppd, etc.  I'm very 
comfortable with the other aspects (dialup, file transmission, etc), but 
I need help with regulated Internet access.  Thanks.

--
Clayton Hicklin
[EMAIL PROTECTED]



RE: [squid-users] blocking kazaa and imesh

2003-03-24 Thread S ý è d F ú r q à n
hi ..
First, block the Kazaa login host through your acl:
desktop.kazaa.com
Every Kazaa client connects to this host first, before using the p2p
network.
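
In squid.conf this could look like the following (the acl name is arbitrary):

```
# Deny access to the Kazaa login host mentioned above
acl kazaa_login dstdomain desktop.kazaa.com
http_access deny kazaa_login
```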

Thanks

Furqan






From: Mark A Lewis [EMAIL PROTECTED]
To: 'wesley deypalan' [EMAIL PROTECTED],[EMAIL PROTECTED]
Subject: RE: [squid-users] blocking kazaa and imesh
Date: Mon, 24 Mar 2003 23:28:20 -0600
User agent would be a good start I would guess. Look at your access.log
and see what the requests have in common. You may have to enable user
agent logging.
-Original Message-
From: wesley deypalan [mailto:[EMAIL PROTECTED]
Sent: Monday, March 24, 2003 11:05 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] blocking kazaa and imesh
 Hi,

Is it possible to block kazaa,imesh and other p2p
software using squid? Newer p2p software are using
port 80 Im having problems blocking this software.
TIA
Wesley


_
MSN 8 helps eliminate e-mail viruses. Get 2 months FREE*. 
http://join.msn.com/?page=features/virus



Re: [squid-users] blocking kazaa and imesh

2003-03-24 Thread Maciej Kuczkowski
On Tue, 25 Mar 2003 12:23:04 +0500
S ý è d F ú r q à n [EMAIL PROTECTED] wrote:

 hi ..
 Firstly Block the Kazaa platform thru your acl
 desktop.kazaa.com
 because every kazaa user firstly connect to this platform then use the p2p 
 tech.
 
 Thanks
 
 Furqan
 
 
 
 
 
 
 From: Mark A Lewis [EMAIL PROTECTED]
 To: 'wesley deypalan' [EMAIL PROTECTED],[EMAIL PROTECTED]
 Subject: RE: [squid-users] blocking kazaa and imesh
 Date: Mon, 24 Mar 2003 23:28:20 -0600
 
 User agent would be a good start I would guess. Look at your access.log
 and see what the requests have in common. You may have to enable user
 agent logging.
 
 -Original Message-
 From: wesley deypalan [mailto:[EMAIL PROTECTED]
 Sent: Monday, March 24, 2003 11:05 PM
 To: [EMAIL PROTECTED]
 Subject: [squid-users] blocking kazaa and imesh
 
   Hi,
 
 Is it possible to block kazaa,imesh and other p2p
 software using squid? Newer p2p software are using
 port 80 Im having problems blocking this software.
 
 TIA
 Wesley
 
 
 
 
 

Hi ..
try this..

acl Safe_ports port 80
http_access deny !Safe_ports

(it's paranoid, but it works)

This way, your proxy server can only establish connections to port 80 (HTTP)
.. and no Kazaa :)

[EMAIL PROTECTED]