Re: [squid-users] https questions

2008-06-06 Thread Henrik Nordstrom
On lör, 2008-06-07 at 09:58 +0800, Ken W. wrote:
> 2008/6/7 Henrik Nordstrom <[EMAIL PROTECTED]>:
> 
> >
> > But you are quite likely to run into issues with the server sending out
> > http:// URLs in its responses unless the server has support for running
> > behind an SSL frontend. See for example the front-end-https cache_peer
> > option.
> >
> 
> Thanks Henrik.
> Under my setting, can squid work correctly for this flow?
> 
> clients  --https-->  squid  --http-->  webserver
> webserver  --http-->  squid  --https-->  clients

Again, yes, provided your web server application has support for being
used in this manner.





RE: [squid-users] How to not cache a site?

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 15:48 -0700, Jerome Yanga wrote:

> I believe some do but others don't.  I just responded to Chris with the
> http headers.  The captured log is from a mere mouse-over of an icon on
> the site.

Yes, but are those headers from an object which you found was cached by
Squid?

Regards
Henrik



Re: [squid-users] High tcp_hit times

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 15:30 -0700, leongmzlist wrote:

> Does squid still use dns for reverse proxy requests?  All my requests
> go to http://cache-int/, but cache-int is not in /etc/hosts nor in
> DNS.  I have one origin server defined and it is used as the default, so
> shouldn't squid just go to the backend without dns lookups?

Also depends on if you have any acls relying on DNS, such as a dst acl.



Regards
Henrik



Re: [squid-users] High tcp_hit times

2008-06-06 Thread Amos Jeffries

leongmzlist wrote:

I think it's due to dns. Here was the squid manager output:

Median Service Times (seconds):    5 min    60 min
HTTP Requests (All):             8.68295   2.37608
Cache Misses:                   10.20961   0.03066
Cache Hits:                      8.22659   2.79397
Near Hits:                       0.0       0.0
Not-Modified Replies:            0.0       0.0
DNS Lookups:                    10.60242   9.70242
ICP Queries:                     0.0       0.0

Does squid still use dns for reverse proxy requests?  All my requests
go to http://cache-int/, but cache-int is not in /etc/hosts nor in
DNS.  I have one origin server defined and it is used as the default, so
shouldn't squid just go to the backend without dns lookups?


If you have ACLs which require rDNS then yes. 'dst', for example, when
'dstdomain' should have been used.
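To illustrate the difference Amos is pointing at, a minimal squid.conf sketch (the network and domain names here are hypothetical placeholders, not taken from this thread):

```
# 'dst' matches by destination IP, so Squid must first resolve the
# requested host name via DNS on every request:
acl backend dst 10.0.0.0/24

# 'dstdomain' is a string match against the requested host name,
# so no DNS lookup is needed:
acl backend_dom dstdomain .example.com
http_access allow backend_dom
```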


Amos



thx,
mike


At 03:10 PM 6/6/2008, Henrik Nordstrom wrote:

On fre, 2008-06-06 at 14:38 -0700, leongmzlist wrote:
> My cache performance is acting strange; I'm getting extremely high
> tcp_hit times for cached objects:
>
> 1212787643.465  50343 10.2.7.22 TCP_HIT/200 19290 GET http://cache-int/
> 1212787737.740  15212 10.2.7.25 TCP_HIT/200 11511 GET http://cache-int/

>
>
> Those high times come in bursts, e.g. a bunch of high response times
> followed by normal response times.  Normal response times
> are sub-100ms.

Could be cache validations. Sometimes TCP_HIT is logged when it really
should have been TCP_REFRESH_HIT. This can happen if the object uses
Vary, if I remember correctly.

Another possibility is that the Squid server is swapping, causing Squid to
delay everything while waiting for swap activity.

A third possibility is if you have ACLs which may cause delays, such as
DNS dependencies or external acl lookups.

Regards
Henrik





--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


Re: [squid-users] Transparent proxy with MSN

2008-06-06 Thread Amos Jeffries

Sergio Belkin wrote:

2008/6/5 Amos Jeffries <[EMAIL PROTECTED]>:

Sergio Belkin wrote:

Hi,
I'd like to know if it's possible to allow MSN usage through a transparent proxy.

Possible. But not always easy. It depends highly on the type of network you
have set up (a layer of NAT between the client and squid kills it fairly
well).


The schema is as follows:

A user connect with his notebook via Access Point which has OpenWRT
installed. OpenWRT has DNAT rules:

iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 80 -j DNAT
--to-destination $SQUID_IP:8080

iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 1863 -j DNAT
--to-destination SQUID_IP:8080


That NAT happening on the AP would break squid transparency.
The AP needs to do policy-routing to pass only the port-80 packets to 
the squid box.

  http://wiki.squid-cache.org/ConfigExamples/LinuxPolicyRouteWebTraffic

The NAT part appears to be right, but the Squid box should be the one 
doing it.
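A rough sketch of the split Amos describes, following the policy-routing approach from the wiki link above (interface names, addresses, and the Squid port are assumptions carried over from Sergio's rules, not a tested configuration):

```
# On the AP (OpenWRT): do NOT rewrite the packets; just route port-80
# traffic towards the Squid box via a separate routing table.
iptables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via 192.168.1.2 table 100   # 192.168.1.2 = Squid box (assumed)

# On the Squid box: intercept the arriving port-80 traffic locally.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```

This keeps the original client and server addresses visible to Squid, which is what the NAT-on-the-AP setup destroys.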


There is something about authentication too with MSN, full TPROXY may be 
needed for that one.




(I've tried the last one and even redirecting 1050, but I'm not sure
if that's right)

Users can browse the web with no problems using transparent proxy
(except SSL sites of course) but they fail to use MSN.



MSN is _supposed_ to have automatic failovers to port 80 that use HTTP. But
that depends on what other paths it can find through your network first.



Amos
--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


Re: [squid-users] Squid Performance, VMware vs Physical Machine

2008-06-06 Thread Adrian Chadd
On Sat, Jun 07, 2008, Brodsky, Jared S. wrote:
> In my instance it is not an ESX server but rather their free offering. When I
> did my testing I did it on my desktop, which was a P4 with 3 GB of RAM, and I saw a hit
> of 25-30 percent usage with 6 users and myself working on the desktop.

Right. VMware Server will normally perform slightly worse, and will suffer
badly if you're doing lots of I/O.



Adrian

> --
> Jared Brodsky
> 212.647.6303
> Sent via Blackberry.  
> 
> -Original Message-
> From: Adrian Chadd <[EMAIL PROTECTED]>
> 
> Date: Sat, 7 Jun 2008 10:44:58 
> To:"Brodsky, Jared S." <[EMAIL PROTECTED]>
> Cc:squid-users@squid-cache.org
> Subject: Re: [squid-users] Squid Performance, VMware vs Physical Machine
> 
> 
> On Fri, Jun 06, 2008, Brodsky, Jared S. wrote:
> > Getting ready to roll out a squid server in my organization after doing
> > about a month of testing on it on a virtual machine in VMware server.
> > Is running squid in a virtual environment recommended, or is having a
> > dedicated box a safer way to go?  I'll have about 30 users that hit
> > YouTube and other streaming media sites throughout the day and I am
> > hoping to cache a lot of it since many watch the same ones more than
> > once.  I do, however have a box set aside that I can use which is a P4 3
> > Ghz w/ 1GB ram and was going to drop in two 10,000 RPM drives for the
> > cache.  I know that the squid wiki says JBOD is preferable, however is
> > RAID 0 a bad way to go?
> 
> RAID 0 is fine; losing a disk will have exactly the results that losing one
> JBOD disk will have in Squid at the moment.
> 
> The best way to know is to test it out. I ran Squid/Linux under VMWare ESX
> 3.mumble a while ago for a small LAN (~150 users) w/ NTLM authentication and
> besides the clock drift issues, things ran quite fine.
> 
> YMMV,
> 
> 
> 
> Adrian
> 
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
> -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Re: Problem with some Microsoft Sites

2008-06-06 Thread Amos Jeffries

Linda W wrote:

Leonardo Rodrigues Magalhães wrote:
   probably the problem reported is chunked-encoding related. Please 
check:

http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/



Blog entry "http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/" posted
on April 29, 2008 at 2:24pm says:

  [ The 3.0 code is just different enough that it would need a whole new ]
  [ back-port project to get it going well. The time and work that would ]
  [ take is being used instead to get 3.1 out faster. Which should be    ]
  [ within a month of this writing so procrastinating could solve the    ]
  [ problem for you.                                                     ]

---

Curious -- how's 3.1 progressing? ...or about how much of
the month of work has been completed?

(I'm presuming the "within a month" figure you gave was probably
an estimate of work left divided by the percentage of work time (WT) devoted
to the project  --and--  that non-project-related demands increased
(reducing the percentage of WT available to spend on the SQ3.1 project,
effectively multiplying the time),  --or--  unforeseen or unexpected
complexities (or 'gotchas'? :-)) arose, creating more work than initially
planned when giving the estimate.)

Seems like software always expands to exceed the available
time allocated for doing it... *ouch*...  That trend will likely
continue or get 'worse', as the software 'universe' expands and newer
software relies on more layers of software that have
come before.

Hmmm... reminds me of the accelerating expansion of the
universe that some posit will eventually result in 'the big rip'...
let's hope software doesn't mirror that theory... :-)



Sorry the timetable has slipped a little. We are down to 15 bugs and 4 
'features' (well underway). Things have gone slightly quiet these last 
few weeks. Hopefully that means more code done and tested ;-)


A pre-release is due out after those features are merged.

Amos
--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


[squid-users] disk request!

2008-06-06 Thread Adrian Chadd
Hi everyone,

I hate to ask for "stuff" on the public mailing list but I'm a little stuck
right now.

I've been loaned a pair of compaq storageworks arrays - 14 U160 disks
per array - but I don't have disks for them. I have some older 10krpm
disks to partially fill one array but those disks are failing.

I've bought the rest of the hardware (SCSI cards, servers, switches,
etc) which I've been using to benchmark Squid with but I've been unable
to do any serious storage related testing. People are now rolling
out Squid on 73gig and 146gig U320 disks and asking about performance.

I've been trying to acquire some usefully sized 10k and 15k rpm disks for
further testing, but it's just far too expensive to source them myself
just yet.

So, does anyone have any 10k/15k rpm disks, 18gig or above (preferably
above!) which you'd like to donate? I'm looking to fill both of these
arrays (so 28 disks there) plus 6 more in a Dell 2650 2ru server.

Please contact me in private if you can help!

Thanks,



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Squid Performance, VMware vs Physical Machine

2008-06-06 Thread Adrian Chadd
On Fri, Jun 06, 2008, Brodsky, Jared S. wrote:
> Getting ready to roll out a squid server in my organization after doing
> about a month of testing on it on a virtual machine in VMware server.
> Is running squid in a virtual environment recommended, or is having a
> dedicated box a safer way to go?  I'll have about 30 users that hit
> YouTube and other streaming media sites throughout the day and I am
> hoping to cache a lot of it since many watch the same ones more than
> once.  I do, however have a box set aside that I can use which is a P4 3
> Ghz w/ 1GB ram and was going to drop in two 10,000 RPM drives for the
> cache.  I know that the squid wiki says JBOD is preferable, however is
> RAID 0 a bad way to go?

RAID 0 is fine; losing a disk will have exactly the same results that losing
one JBOD disk has in Squid at the moment.
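For reference, the JBOD layout the wiki recommends simply means one cache_dir per physical disk; a hedged sketch (mount points and sizes are placeholders):

```
# One cache_dir per disk: Squid spreads I/O across them itself,
# and losing a disk loses only that cache_dir's objects.
cache_dir aufs /cache1/squid 60000 64 256
cache_dir aufs /cache2/squid 60000 64 256
```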

The best way to know is to test it out. I ran Squid/Linux under VMWare ESX
3.mumble a while ago for a small LAN (~150 users) w/ NTLM authentication and
besides the clock drift issues, things ran quite fine.

YMMV,



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] https questions

2008-06-06 Thread Ken W.
2008/6/7 Henrik Nordstrom <[EMAIL PROTECTED]>:

>
> But you are quite likely to run into issues with the server sending out
> http:// URLs in its responses unless the server has support for running
> behind an SSL frontend. See for example the front-end-https cache_peer
> option.
>

Thanks Henrik.
Under my setting, can squid work correctly for this flow?

clients  --https-->  squid  --http-->  webserver
webserver  --http-->  squid  --https-->  clients

Thank you again.


[squid-users] Re: squid_kerb_auth on mac os x

2008-06-06 Thread Markus Moeller
BTW, if you download the CVS source from sourceforge at
http://squidkerbauth.cvs.sourceforge.net/squidkerbauth you can use
./configure and it should check everything for Mac.


Markus

"Alex Morken" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

Hello,

This is the first time I have posted on this list, so hello to  everyone. 
I have been trying to get squid_kerb_auth to work on Mac  OS X 10.4.11 and 
I cannot seem to figure out the reason it fails.


Here are the options I had set for the configure part of squid:
Squid Cache: Version 2.7.STABLE2
configure options:  '--enable-auth=basic negotiate'
'--enable-basic-auth-helpers=LDAP'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-esternal-acl-helpers=ldap_group' '--prefix=/usr/local/squid-2.7'


Everything compiles nicely and produces no errors.

I set up and tested my kerberos configuration per below:

Set up a local keytab for squid - HTTP/[EMAIL PROTECTED]

Tested it by issuing the following command and it worked correctly:

`kinit -k -t /etc/squid/squid.keytab HTTP/[EMAIL PROTECTED]`

Set and exported KRB5_KTNAME pointing to the local keytab.  I wrote a
bash script that does this, and I have also tried setting the environment
variable in the current shell and running it from there.  Both work as
expected.


I added authentication to squid.conf

auth_param negotiate program /usr/libexec/squid_kerb_auth -d -s HTTP/[EMAIL PROTECTED]


I then started squid and it looks like everything is starting  correctly. 
But it is still not dealing with kerberos correctly.


I downloaded and compiled squid_kerb_auth by hand as I had found  someone 
else on this list that was running into a problem similar to  mine.  I 
recompiled squid_kerb_auth with a few different options as  mentioned in 
the thread.  They are listed below.


Compiled by hand:
gcc -o squid_kerb_auth -DHAVE_SPNEGO -D__LITTLE_ENDIAN__ -Ispnegohelp \
    squid_kerb_auth.c base64.c spnegohelp/derparse.c spnegohelp/spnego.c \
    spnegohelp/spnegohelp.c spnegohelp/spnegoparse.c \
    -lgssapi_krb5 -lkrb5 -lcom_err


root# ./squid_kerb_auth -d
2008/06/03 13:37:59| squid_kerb_auth: Starting version 1.0.1
[EMAIL PROTECTED]
2008/06/03 13:38:01| squid_kerb_auth: Got 'username' from squid  (length: 
15).
2008/06/03 13:38:01| squid_kerb_auth: gss_accept_sec_context()  failed: A 
token was invalid. Token header is malformed or corruptBH 
gss_accept_sec_context() failed: A token was invalid. Token header is 
malformed or corrupt



Results from just using ./configure and no options specified:
host:/tmp/kerb/squid_kerb_auth root# ./squid_kerb_auth -d -s HTTP/[EMAIL PROTECTED]

2008/06/03 13:47:38| squid_kerb_auth: Starting version 1.0.1
[EMAIL PROTECTED]
2008/06/03 13:47:39| squid_kerb_auth: Got '[EMAIL PROTECTED]' from  squid 
(length: 15).
2008/06/03 13:47:39| squid_kerb_auth: parseNegTokenInit failed with 
rc=108

2008/06/03 13:47:39| squid_kerb_auth: Token is possibly a GSSAPI token
2008/06/03 13:47:39| squid_kerb_auth: gss_accept_sec_context()  failed: A 
token was invalid. Token header is malformed or corruptBH 
gss_accept_sec_context() failed: A token was invalid. Token header is 
malformed or corrupt


I have also tried all combinations of -DHAVE_SPNEGO, -D__LITTLE_ENDIAN__
and -D__BIG_ENDIAN__.  All have failed in similar ways.
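One way to narrow down a "Token header is malformed" error is to look at what the client actually sent: a raw GSSAPI/SPNEGO token starts with the ASN.1 tag byte 0x60, while a browser silently falling back to NTLM sends a blob beginning with "NTLMSSP", which a Kerberos-only helper cannot parse. The following is a hypothetical diagnostic sketch (not part of squid_kerb_auth) for classifying the base64 token from a Proxy-Authorization: Negotiate header:

```python
import base64

def classify_negotiate_token(b64_token: str) -> str:
    """Guess what kind of token a client sent in 'Proxy-Authorization: Negotiate <b64>'."""
    raw = base64.b64decode(b64_token)
    if raw.startswith(b"NTLMSSP\x00"):
        return "ntlm"           # browser fell back to NTLM instead of Kerberos
    if raw[:1] == b"\x60":
        return "gssapi/spnego"  # ASN.1 APPLICATION 0 tag used by GSSAPI tokens
    return "unknown"
```

If the token turns out to be NTLM, the problem is on the client side (no Kerberos ticket, or the proxy host not trusted for Negotiate) rather than in the helper build flags.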


So the obvious questions are: what am I doing wrong? Am I using
squid_kerb_auth correctly from the command line (can I use it at all that
way)? Is there anywhere I can look for more verbose logs from squid? I
have been running squid with the -d 9 -N options and it doesn't log errors
to the logs or to the screen in any sort of verbose way (the way I would
expect it to work). Any help would be much appreciated, and I would be
happy to provide any information you request!


Thank you,

Alex Morken







[squid-users] https questions

2008-06-06 Thread Ken W.

Hello members,

I want to set up squid to accept https from clients, then forward the
request to the origin server over the http protocol.

This is the setting I considered:

https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key protocol=http

cache_peer 10.0.0.1 parent 80 0 no-query originserver name=origin_1
acl service_1 dstdomain .xxx.com
cache_peer_access origin_1 allow service_1


Then I access to squid with this way:
https://www.xxx.com/

Can squid accept this https request and forward it to original server with
http correctly?
btw, what's the usage of "protocol=http"? I don't understand it well
enough.

Thanks in advance.
[ this is my second message for the same content, because the first
message I sent to the list was lost. ]





[squid-users] Re: Problem with some Microsoft Sites

2008-06-06 Thread Linda W

Leonardo Rodrigues Magalhães wrote:

   probably the problem reported is chunked-encoding related. Please check:
http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/



Blog entry "http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/" posted
on April 29, 2008 at 2:24pm says:

  [  The 3.0 code is just different enough that it would need a whole new   ]
  [  back-port project to get it going well. The time and work that would   ]
  [  take is being used instead to get 3.1 out faster. Which should be  ]
  [  within a month of this writing so procrastinating could solve the  ]
  [  problem for you.   ]
---

Curious -- how's 3.1 progressing? ...or about how much of
the month of work has been completed?

(I'm presuming the "within a month" figure you gave was probably
an estimate of work left divided by the percentage of work time (WT) devoted
to the project  --and--  that non-project-related demands increased
(reducing the percentage of WT available to spend on the SQ3.1 project,
effectively multiplying the time),  --or--  unforeseen or unexpected
complexities (or 'gotchas'? :-)) arose, creating more work than initially
planned when giving the estimate.)

Seems like software always expands to exceed the available
time allocated for doing it... *ouch*...  That trend will likely
continue or get 'worse', as the software 'universe' expands and newer
software relies on more layers of software that have
come before.

Hmmm... reminds me of the accelerating expansion of the
universe that some posit will eventually result in 'the big rip'...
let's hope software doesn't mirror that theory... :-)









RE: [squid-users] How to not cache a site?

2008-06-06 Thread Jerome Yanga
Henrik,

I believe some do but others don't.  I just responded to Chris with the
http headers.  The captured log is from a mere mouse-over of an icon on
the site.

I apologize for my noobness.

Regards,
Jerome

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 06, 2008 2:41 PM
To: Jerome Yanga
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] How to not cache a site?

On tor, 2008-06-05 at 17:22 -0700, Jerome Yanga wrote:
> #/cache/usr/bin/purge -n -v -c /etc/squid/cachepurge.conf -p 127.0.0.1:80 -P 1 -e site_address\.com > /var/log/site_address.com_purge.log
> 
> I grep'ed the log created from the command above and I can find instances of site_address.com being deleted.  Hence, it is being cached.

And you are positively sure those objects do have the mentioned
Cache-Control headers?

Quite often there are different caching requirements for different kinds of
objects.

Regards
Henrik




[squid-users] Squid Performance, VMware vs Physical Machine

2008-06-06 Thread Brodsky, Jared S.
Getting ready to roll out a squid server in my organization after doing
about a month of testing on it on a virtual machine in VMware server.
Is running squid in a virtual environment recommended, or is having a
dedicated box a safer way to go?  I'll have about 30 users that hit
YouTube and other streaming media sites throughout the day and I am
hoping to cache a lot of it since many watch the same ones more than
once.  I do, however have a box set aside that I can use which is a P4 3
Ghz w/ 1GB ram and was going to drop in two 10,000 RPM drives for the
cache.  I know that the squid wiki says JBOD is preferable, however is
RAID 0 a bad way to go?

Jared


Re: [squid-users] Re: squid_kerb_auth on mac os x

2008-06-06 Thread Alex Morken


On Jun 6, 2008, at 2:55 PM, Henrik Nordstrom wrote:

On fre, 2008-06-06 at 14:33 -0700, Alex Morken wrote:


I have done a bit more testing and shut off my ldap authentication,
and it seems that it is still trying to use basic auth.  I have shut
squid completely down and restarted it each time I change auth methods,
per the documentation.  How can I verify that it is indeed hitting
squid_kerb_auth?


Use squidclient and look at the response headers sent by Squid.

What is your auth_param settings?


auth_param negotiate program /usr/local/squid/libexec/squid_kerb_auth -d
auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param basic program /usr/local/squid/libexec/pam_auth
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours



 I have my debugging level set to 9 and have tried to
squid -k debug to see what I can get but I can't find where it is
trying to pass anything to squid_kerb_auth.


It will only talk to squid_kerb_auth when there is a client trying to
perform a kerberos handshake. Before that it's complete silence on the
helper side..



When I comment out the auth_param basic part of the file and restart
squid, I get authentication denied and it doesn't look like it is
passing anything to kerberos.  I do have ACLs in place that require
auth, and they work correctly when just using pam_auth.  Am I missing
something for getting it to hit kerberos, either on the ACL side of
things or on the auth_param side?
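For context, the ACL side of a setup like this is usually scheme-agnostic: a `proxy_auth REQUIRED` ACL makes Squid send a 407 challenge listing every configured auth scheme, and the browser picks one; if it picks basic, squid_kerb_auth is never consulted. A minimal hedged sketch (names are placeholders, not Alex's actual config):

```
# Any successfully authenticated user, whichever scheme the client chose:
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
```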


Thanks
Alex Morken



Re: [squid-users] High tcp_hit times

2008-06-06 Thread leongmzlist

I think it's due to dns. Here was the squid manager output:

Median Service Times (seconds):    5 min    60 min
HTTP Requests (All):             8.68295   2.37608
Cache Misses:                   10.20961   0.03066
Cache Hits:                      8.22659   2.79397
Near Hits:                       0.0       0.0
Not-Modified Replies:            0.0       0.0
DNS Lookups:                    10.60242   9.70242
ICP Queries:                     0.0       0.0

Does squid still use dns for reverse proxy requests?  All my requests
go to http://cache-int/, but cache-int is not in /etc/hosts nor in
DNS.  I have one origin server defined and it is used as the default, so
shouldn't squid just go to the backend without dns lookups?


thx,
mike


At 03:10 PM 6/6/2008, Henrik Nordstrom wrote:

On fre, 2008-06-06 at 14:38 -0700, leongmzlist wrote:
> My cache performance is acting strange; I'm getting extremely high
> tcp_hit times for cached objects:
>
> 1212787643.465  50343 10.2.7.22 TCP_HIT/200 19290 GET http://cache-int/
> 1212787737.740  15212 10.2.7.25 TCP_HIT/200 11511 GET http://cache-int/
>
>
> Those high times come in bursts, e.g. a bunch of high response times
> followed by normal response times.  Normal response times
> are sub-100ms.

Could be cache validations. Sometimes TCP_HIT is logged when it really
should have been TCP_REFRESH_HIT. This can happen if the object uses
Vary, if I remember correctly.

Another possibility is that the Squid server is swapping, causing Squid to
delay everything while waiting for swap activity.

A third possibility is if you have ACLs which may cause delays, such as
DNS dependencies or external acl lookups.

Regards
Henrik




Re: [squid-users] High tcp_hit times

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 14:38 -0700, leongmzlist wrote:
> My cache performance is acting strange; I'm getting extremely high 
> tcp_hit times for cached objects:
> 
> 1212787643.465  50343 10.2.7.22 TCP_HIT/200 19290 GET http://cache-int/
> 1212787737.740  15212 10.2.7.25 TCP_HIT/200 11511 GET http://cache-int/
> 
> 
> Those high times come in bursts, e.g. a bunch of high response times
> followed by normal response times.  Normal response times
> are sub-100ms.

Could be cache validations. Sometimes TCP_HIT is logged when it really
should have been TCP_REFRESH_HIT. This can happen if the object uses
Vary, if I remember correctly.

Another possibility is that the Squid server is swapping, causing Squid to
delay everything while waiting for swap activity.

A third possibility is if you have ACLs which may cause delays, such as
DNS dependencies or external acl lookups.

Regards
Henrik



Re: [squid-users] Squid hangs after some time -testing-

2008-06-06 Thread Carlos Alberto Bernat Orozco
Hi

Thanks to all for the answers. I will describe my configuration. I have
a Debian Etch box with 2 network interfaces, one for the WAN and the
other for the LAN. I have a script with iptables rules to redirect
traffic to the proxy as a transparent proxy.

When I started the Squid service, ping started failing after 10 or 15
minutes. When I stopped the service, ping worked normally. So I believed
something was wrong with Squid.

But I just changed the order of the interfaces, so eth0 is now eth1
and eth1 is eth0, and everything works fine, for now.

It seems to be an issue with the interfaces; I don't know exactly what.
If someone has had this issue and has an explanation, it is welcome.

Thanks. I will continue posting my results if someone needs them.


Have a nice day!

2008/6/6 Henrik Nordstrom <[EMAIL PROTECTED]>:
> On tor, 2008-06-05 at 23:15 -0500, Carlos Alberto Bernat Orozco wrote:
>
>> I'm new to Squid, so please be patient. I've installed Squid 2.6 on a
>> Debian Etch as a transparent proxy. I can go to many web sites but
>> suddenly I can't surf the web. I realize the problem when I make ping
>> to gmail.com and after 10 or 20 minutes gives me Request Time Out.
>
> If ping suddenly fails but normally works then you have a network
> failure of some kind, completely outside of Squid.
>
> Regards
> Henrik
>
>


Re: [squid-users] Re: squid_kerb_auth on mac os x

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 14:33 -0700, Alex Morken wrote:

> I have done a bit more testing and shut off my ldap authentication,
> and it seems that it is still trying to use basic auth.  I have shut
> squid completely down and restarted it each time I change auth methods,
> per the documentation.  How can I verify that it is indeed hitting
> squid_kerb_auth?

Use squidclient and look at the response headers sent by Squid.

What is your auth_param settings?


>  I have my debugging level set to 9 and have tried to  
> squid -k debug to see what I can get but I can't find where it is  
> trying to pass anything to squid_kerb_auth.

It will only talk to squid_kerb_auth when there is a client trying to
perform a kerberos handshake. Before that it's complete silence on the
helper side..

Regards
Henrik



Re: [squid-users] debug_options reference

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 18:56 +0200, Anton Melser wrote:
> Hi all,
> I feel like a complete fool but I just can't seem to use the squid
> docs... could someone point me to the list of sections? ALL,1 33,2
> seems to be a common setting - but wtf is the doc that says what 33
> is?!?

doc/debug-sections.txt in the source distribution. Also printed at the
top of each source file.

The recommended default is ALL,1 unless you get told to increase some
debugging by a developer looking into some problem for you.
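So a line answering Anton's question might look like the one he quoted (the meaning of each section number lives in doc/debug-sections.txt and can vary between versions, so treat the mapping as version-dependent):

```
# Everything at level 1, plus one specific debug section at level 2:
debug_options ALL,1 33,2
```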

Regards
Henrik



Re: [squid-users] https questions

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 22:59 +0800, Ken W. wrote:

> I want to set up squid to accept https from clients, then forward the
> request to the origin server over the http protocol.
> 
> This is the setting I considered:
> 
> https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key
> protocol=http

Don't use protocol= unless you absolutely need it.

> cache_peer 10.0.0.1 parent 80 0 no-query originserver name=origin_1
> acl service_1 dstdomain .xxx.com
> cache_peer_access origin_1 allow service_1

Looks fine.

> Then I access to squid with this way:
> https://www.xxx.com/
> 
> Can squid accept this https request and forward it to original server with
> http correctly?

Yes.

But you are quite likely to run into issues with the server sending out
http:// URLs in its responses unless the server has support for running
behind an SSL frontend. See for example the front-end-https cache_peer
option.

> btw, what's the usage of "protocol=http"? I don't understand it well
> enough.

It's the protocol Squid should internally assign to the requested URL.
When acting as a web server / accelerator the request does not contain
information on the protocol used, just the request-path.

It has only marginal practical importance, and is best left at the
default automatic setting unless you have very special reasons to change
it.
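Putting Henrik's advice together, a minimal accelerator sketch (certificate paths and the domain are taken from Ken's earlier message; front-end-https is the cache_peer option Henrik refers to, which tells the backend the client connection was HTTPS):

```
https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key

cache_peer 10.0.0.1 parent 80 0 no-query originserver front-end-https=on name=origin_1
acl service_1 dstdomain .xxx.com
cache_peer_access origin_1 allow service_1
```

Note the protocol= option is omitted, per Henrik's recommendation to leave it at the automatic default.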

Regards
Henrik



Re: [squid-users] Transparent proxy with DansGuardian using IDENT

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 07:40 -0700, modulok wrote:

> Also, we have clients who go offsite often (salesmen are barely here), if
> they have proxies, when they go offsite they will not be able to work online
> without using the VPN and proxy through that. And if they are at a hotel
> that requires registration, that may not work either.

That's easy. Configure your network to autodiscover the proxies using
WPAD.

Regards
Henrik



Re: [squid-users] performances ... again

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 17:06 +0200, Matus UHLAR - fantomas wrote:
> is using of
> isInNet(host, "127.0.0.0", "255.0.0.0")
> 
> not working?

That relies on DNS lookups.
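A common PAC-file workaround is to test the host name textually so the browser never has to resolve it; a hedged sketch (the proxy address is a placeholder):

```
function FindProxyForURL(url, host) {
    // shExpMatch() and dnsDomainIs() compare strings only, so unlike
    // isInNet() they never trigger a DNS lookup in the browser.
    if (shExpMatch(host, "localhost") || shExpMatch(host, "127.*"))
        return "DIRECT";
    return "PROXY proxy.example.com:3128";
}
```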

Regards
Henrik



Re: [squid-users] Squid hangs after some time

2008-06-06 Thread Henrik Nordstrom
On tor, 2008-06-05 at 23:15 -0500, Carlos Alberto Bernat Orozco wrote:

> I'm new to Squid, so please be patient. I've installed Squid 2.6 on
> Debian Etch as a transparent proxy. I can go to many web sites but
> suddenly I can't surf the web. I notice the problem when I ping
> gmail.com and after 10 or 20 minutes it gives me Request Timed Out.

If ping suddenly fails but normally works then you have a network
failure of some kind, completely outside of Squid.

Regards
Henrik



RE: [squid-users] How to not cache a site?

2008-06-06 Thread Henrik Nordstrom
On tor, 2008-06-05 at 17:22 -0700, Jerome Yanga wrote:
> #/cache/usr/bin/purge -n -v -c /etc/squid/cachepurge.conf -p 127.0.0.1:80 -P 1 -e site_address\.com > /var/log/site_address.com_purge.log
> 
> I grep'ed the log created from the command above and I can find instances of site_address.com being deleted.  Hence, it is being cached.

And you are positively sure those objects do have the mentioned
Cache-Control headers?

Quite often there are different caching requirements for different kinds of
objects.

Regards
Henrik



[squid-users] High tcp_hit times

2008-06-06 Thread leongmzlist
My cache performance is acting strange; I'm getting extremely high 
tcp_hit times for cached objects:


1212787643.465  50343 10.2.7.22 TCP_HIT/200 19290 GET http://cache-int/
1212787737.740  15212 10.2.7.25 TCP_HIT/200 11511 GET http://cache-int/


Those high times come in bursts, e.g. a bunch of high response times
followed by normal response times.  Normal response times
are sub-100ms.


1212787849.726 20 10.2.7.62 TCP_HIT/200 24149 GET http://cache-int/
1212787850.352 30 10.2.7.62 TCP_HIT/200 25469 GET http://cache-int/g

The cache server is part of a reverse proxy pool.  We have 8 servers
and only this one is acting weird.  Any ideas on how to find the cause
of the issue?
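One way to quantify the bursts is to pull the response-time field out of access.log (the second column, in milliseconds, in the default "squid" log format used above) and list the slow entries. A small illustrative sketch, not an official tool:

```python
def slow_requests(log_lines, threshold_ms=1000):
    """Return (timestamp, elapsed_ms, url) for log entries slower than threshold_ms."""
    slow = []
    for line in log_lines:
        fields = line.split()
        # default squid log format: time elapsed client action/code bytes method url ...
        ts, elapsed, url = fields[0], int(fields[1]), fields[6]
        if elapsed > threshold_ms:
            slow.append((ts, elapsed, url))
    return slow
```

Run over the log lines quoted earlier, the 50343 ms and 15212 ms hits would be flagged while the sub-100 ms ones would not, making it easy to see whether the slow entries cluster in time.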


mike

config:
6GB of RAM
64bit debian 4.0
32bit squid, 2.6stable20
3ware raid controller, sets of 2 disk mirrors

output from top:
top - 14:36:19 up  1:43,  1 user,  load average: 1.27, 0.77, 2.57
Tasks:  88 total,   1 running,  87 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  0.0%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   6111580k total,  4968848k used,  1142732k free,   211724k buffers
Swap:  1959888k total,  116k used,  1959772k free,  1326964k cached


squid conf:

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log /apps/squid/var/logs/access.log squid
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_reply_access allow all
icp_access allow all
coredump_dir /apps/squid/var/cache
cache_effective_user squid
url_rewrite_host_header off
http_port 80 vhost defaultsite=c2.globexplorer.com
http_port 81
refresh_pattern -i gexservlets 525948 100 1051896 override-expire ignore-reload
refresh_pattern -i cgi-bin 525948 100 1051896 override-expire ignore-reload
acl textfiles rep_mime_type -i ^text/plain$
acl htmlfiles rep_mime_type -i ^text/html$
acl jpegfiles rep_mime_type -i ^image/jpeg$
acl pngfiles rep_mime_type -i ^image/png$
acl giffiles rep_mime_type -i ^image/gif$
cache deny textfiles
cache deny htmlfiles
cache_store_log none
pid_filename /var/run/squid.pid
cache_dir aufs /mnt/cache_a/squid-cache 85332 320 256
cache_dir aufs /mnt/cache_b/squid-cache 85332 320 256
cache_dir aufs /mnt/cache_c/squid-cache 85332 320 256
cache_dir aufs /mnt/cache_d/squid-cache 85332 320 256
cache_dir aufs /mnt/cache_e/squid-cache 85332 320 256
cache_dir aufs /mnt/cache_f/squid-cache 85332 320 256

cache_mem 512 MB
minimum_object_size 1 KB
maximum_object_size_in_memory 1 MB
store_avg_object_size 20 KB
offline_mode on
icp_hit_stale on
acl acceleratedHosts dst 10.2.14.1 10.2.13.1
http_access allow acceleratedHosts
acl mynet src 10.2.7.0/24
acl mynet src 10.2.13.0/24
acl mynet src 10.2.14.0/24
acl mynet src 10.9.101.0/24
http_access allow mynet
http_access allow localhost
acl PURGE method PURGE
acl purge_group src 10.10.10.213 10.9.101.60 10.10.10.14 10.10.10.15 
10.10.10.16 10.10.10.176

http_access allow PURGE purge_group
http_access allow PURGE localhost
http_access deny PURGE
http_access deny all
acl snmppublic snmp_community public
snmp_access allow snmppublic all
nonhierarchical_direct off
icp_query_timeout 7000
maximum_icp_query_timeout 1
strip_query_terms off
log_icp_queries off
cache_peer 10.2.14.1 parent 80 0 no-query originserver 
no-netdb-exchange no-digest name=mw_pool

hierarchy_stoplist cmd=info cmd=checkextent
visible_hostname cache4
unique_hostname cache4
cache_peer 10.2.7.20 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache1
cache_peer 10.2.7.21 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache2
cache_peer 10.2.7.22 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache3
cache_peer 10.2.7.24 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache5
cache_peer 10.2.7.25 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache6
cache_peer 10.2.7.62 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache7-1
cache_peer 10.2.7.63 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-digest name=cache7-2
cache_peer 10.2.7.64 sibling 80 3130 proxy-only no-delay allow-miss 
weight=1 no-netdb-exchange no-d

RE: [squid-users] Apple Computers jam my NTLM Helpers.

2008-06-06 Thread Henrik Nordstrom
On tor, 2008-06-05 at 20:10 -0400, Jonathan Chretien wrote:

> It's very strange. I really don't know if it's a Mac problem or if it's a 
> problem with the Helper that has difficulty talking to Mac Computers.

Should be easy to see with a wireshark capture of the traffic. Each new
connection starting an NTLM handshake reserves a helper until the
authentication completes or the connection is closed.

My guess on what happens is that the client opens a connection, sends
the initial negotiate blob, and gets the challenge from the helper and
then just sits there doing nothing with the connection, when it's
expected to send an authentication blob (final NTLM packet).

Regards
Henrik





[squid-users] Re: RE : [squid-users] performances ... again

2008-06-06 Thread Henrik Nordstrom
Is there more than one DNS server in /etc/resolv.conf or squid.conf? If
so then you need to test both.



On fre, 2008-06-06 at 21:55 +0200, GARDAIS Ionel wrote:
> Okay ...
> It's been the hardest 20 minutes of the day : find a few domain names that 
> "should" have not been accessed and cached by our DNS.
> 
> Well, from Paris, France, time given by dig stats :
> - mana.pf (French Polynesia, other side of the Earth, satellite link) : 
> around 700ms
> - aroundtheworld.com, astaluego.com, apple.is, dell.nl, Volvo.se : between 
> 100 and 150ms
> - nintendo.co.jp, Yamaha.co.jp, pioneer.co.jp : around 300ms
> 
> Cached entries are returned in less than 1ms.
> 
> Ionel
> 
> 
> -Message d'origine-
> De : Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
> Envoyé : vendredi 6 juin 2008 21:05
> À : GARDAIS Ionel
> Cc : Squid Users
> Objet : Re: [squid-users] performances ... again
> 
> On fre, 2008-06-06 at 14:37 +0200, Ionel GARDAIS wrote:
> > I got a user (whom I can trust) who uses an explicit proxy configuration: 
> > there are no improvements.
> 
> Ok. Then it's at the proxy, or the DNS servers it uses.
> 
> Remember that to diagnose DNS slowness you need to query for hosts and
> domains which has not yet been visited, as the DNS server also caches a
> lot. Lookups of already visited domains/hosts are not valid as proof to
> say that the DNS is fine.
> 
> > I tried to avoid use of calls which cause DNS lookups (hence the 
> > host.match() and host.indexOf() ).
> 
> Good.
> 
> Regards
> Henrik



Re: [squid-users] Re: squid_kerb_auth on mac os x

2008-06-06 Thread Alex Morken


On Jun 6, 2008, at 2:19 PM, Markus Moeller wrote:
I can create a simple test tool to create blobs. I will post it  
later next week.


Markus

"Henrik Nordstrom" <[EMAIL PROTECTED]> wrote in message  
news:[EMAIL PROTECTED]

On ons, 2008-06-04 at 15:41 -0700, Alex Morken wrote:


Thank you Henrik.  I kind of figured it needed something else, but I
wasn't sure what to put there.  Where can I get or generate the
Kerberos GSSAPI blob I need for the input?  I have been digging
around kerberos docs and haven't found what I needed.


Not sure. It's a kerberos authentication handshake, and initially
depends on a challenge sent by the helper...


I have done a bit more testing and shut off my ldap authentication,  
and it seems that it is still trying to use basic auth.  I have shut  
squid completely down and restarted it each time I change auth methods,  
per the documentation.  How can I verify that it is indeed hitting  
squid_kerb_auth? I have my debugging level set to 9 and have tried  
squid -k debug to see what I can get, but I can't find where it is  
trying to pass anything to squid_kerb_auth.


I am using the newest Safari to test this out so there shouldn't be  
anything to configure on the web browser end of things.


Any help would be much appreciated!

Alex Morken


[squid-users] Re: squid_kerb_auth on mac os x

2008-06-06 Thread Markus Moeller
I can create a simple test tool to create blobs. I will post it later next 
week.


Markus

"Henrik Nordstrom" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

On ons, 2008-06-04 at 15:41 -0700, Alex Morken wrote:


Thank you Henrik.  I kind of figured it needed something else, but I
wasn't sure what to put there.  Where can I get or generate the
Kerberos GSSAPI blob I need for the input?  I have been digging
around kerberos docs and haven't found what I needed.


Not sure. It's a kerberos authentication handshake, and initially
depends on a challenge sent by the helper...

Regards
Henrik







[squid-users] https->http setting

2008-06-06 Thread Ken W.
Hello members,

I want to set up squid so that it accepts https from clients, then forwards the
request to the origin server with the http protocol.

This is the setting I considered:

https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key
protocol=http 

cache_peer 10.0.0.1 parent 80 0 no-query originserver name=origin_1
acl service_1 dstdomain .xxx.com
cache_peer_access origin_1 allow service_1


Then I access to squid with this way:
https://www.xxx.com/

Can squid accept this https request and forward it to the origin server over
http correctly?
btw, what's the usage of "protocol=http"? I can't quite understand it.

Thanks in advance.





Re: [squid-users] How to not cache a site?

2008-06-06 Thread Chris Robertson

Jerome Yanga wrote:

Thanks for the quick response, Chris.

Here are my attempts to answer your questions.  :)


Using the Live HTTP Headers plugin for Firefox.  It seems to show the 
Cache-Control and Pragma settings.

http://site_address.com/help/jssamples_start.htm

GET /help/jssamples_start.htm HTTP/1.1
Host: site_address.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.14) 
Gecko/20080404 Firefox/2.0.0.14
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: CFID=1234567890; CFTOKEN=1234567890; SESSIONID=1234567890; 
__utma=.1.1.1.1.3; __utmc=1; 
__utmz=1.1.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); 
__utmb=1.4.10. 1

HTTP/1.x 200 OK
Date: Thu, 05 Jun 2008 23:41:00 GMT
Server: Apache
Last-Modified: Thu, 05 Jun 2008 09:03:27 GMT
Etag: "1-1-1"
Accept-Ranges: bytes
Content-Type: text/html; charset=UTF-8
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Expires: Thu, 05 Jun 2008 23:41:00 GMT
  


These two lines ("Cache-Control: no-store", and an Expires with the same 
time as the request) should stop any (compliant) shared cache from 
caching the content.  Have you modified the refresh_pattern in your 
squid.conf?
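For reference, overrides of this sort in squid.conf can make Squid cache responses despite such headers; a hypothetical sketch (the pattern is illustrative only, and flag behaviour depends on the Squid version):

```
# Illustrative only: these flags tell Squid to disregard the origin's
# Expires header and client no-cache reloads for matching URLs.
refresh_pattern -i \.htm$ 1440 100% 10080 override-expire ignore-reload
```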



Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Pragma: no-cache
Content-Length: 811
Connection: keep-alive


I purge the cache using a purge command.

#file /cache/usr/bin/purge
/cache/usr/bin/purge: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), 
for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped

...and the syntax I use is below.

#/cache/usr/bin/purge -n -v -c /etc/squid/cachepurge.conf -p 127.0.0.1:80 -P 1 -e 
site_address\.com > /var/log/site_address.com_purge.log

I grep'ed the log created from the command above and I can find instances of 
site_address.com being deleted.  Hence, it is being cached.
  


Have you checked the headers returned with requests for those objects 
that are being cached?



I have also reviewed the access.log and I found some TCP_MEM_HIT:NONE, 
TCP_REFRESH_HIT, TCP_IMS_HIT, TCP_HIT, TCP_REFRESH_MISS.
  


Same story here, have you verified the headers on these objects?  
Especially the objects that result in TCP_REFRESH_HIT and TCP_IMS_HIT as 
(I think) those are requests that are being validated with the origin 
server.



I cannot review the store.log as it is disabled.

I shall try the syntax you have provided on the next available downtime.

acl cacheDenyAclName dstdomain .site_address.com 
acl otherCacheDenyAclName urlpath_regex ^/help/ 
cache deny cacheDenyAclName otherCacheDenyAclName 


Thanks again, Chris.

Regards,
Jerome
  


Chris


[squid-users] RE : [squid-users] performances ... again

2008-06-06 Thread Dean Weimer
Your DNS responses were similar to what I saw on those same domains, but how is 
squid querying DNS?  It can be set to use different DNS servers than the host 
servers that dig would be using.

Do you have any of the following options set in your squid.conf?  If so what 
are they set to?

DNS OPTIONS
-----------

* check_hostnames
* allow_underscore
* cache_dns_program
* dns_children
* dns_retransmit_interval
* dns_timeout
* dns_defnames
* dns_nameservers
* hosts_file
* dns_testnames
* append_domain
* ignore_unknown_nameservers
* ipcache_size
* ipcache_low
* ipcache_high
* fqdncache_size

Also if you haven't already, set up cachemgr.cgi, look at the general runtime 
information page, and see what the median service times are reporting for DNS 
Lookups.  Also look at the IP Cache statistics; that will show you all cached 
domains, which should not have the delay when accessed if it is purely a 
DNS issue causing the performance hit.
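To measure raw resolver latency from the proxy host independently of Squid, a quick sketch (Python, using the system resolver; which host names to probe is up to you, and "localhost" below is just a placeholder):

```python
import socket
import time

def lookup_ms(name):
    """Time one system-resolver lookup (includes /etc/hosts and OS caching)."""
    start = time.monotonic()
    socket.getaddrinfo(name, 80)
    return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    # Probe names that have NOT been looked up recently; repeating a
    # lookup shows the effect of caching (the second call should be fast).
    for host in ["localhost"]:  # replace with uncached external names
        print("%s: %.1f ms" % (host, lookup_ms(host)))
```

Slow first lookups with fast repeats point at the upstream DNS servers rather than at Squid itself.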

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: GARDAIS Ionel [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 06, 2008 2:56 PM
To: Henrik Nordstrom
Cc: Squid Users
Subject: [squid-users] RE : [squid-users] performances ... again

Okay ...
It's been the hardest 20 minutes of the day: finding a few domain names that 
"should" not have been accessed and cached by our DNS.

Well, from Paris, France, time given by dig stats :
- mana.pf (French Polynesia, other side of the Earth, satellite link) : around 
700ms
- aroundtheworld.com, astaluego.com, apple.is, dell.nl, Volvo.se : between 100 
and 150ms
- nintendo.co.jp, Yamaha.co.jp, pioneer.co.jp : around 300ms

Cached entries are returned in less than 1ms.

Ionel


-Message d'origine-
De : Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Envoyé : vendredi 6 juin 2008 21:05
À : GARDAIS Ionel
Cc : Squid Users
Objet : Re: [squid-users] performances ... again

On fre, 2008-06-06 at 14:37 +0200, Ionel GARDAIS wrote:
> I got a user (whom I can trust) who uses an explicit proxy configuration: 
> there are no improvements.

Ok. Then it's at the proxy, or the DNS servers it uses.

Remember that to diagnose DNS slowness you need to query for hosts and
domains which has not yet been visited, as the DNS server also caches a
lot. Lookups of already visited domains/hosts are not valid as proof to
say that the DNS is fine.

> I tried to avoid use of calls which cause DNS lookups (hence the 
> host.match() and host.indexOf() ).

Good.

Regards
Henrik


[squid-users] RE : [squid-users] performances ... again

2008-06-06 Thread GARDAIS Ionel
Okay ...
It's been the hardest 20 minutes of the day: finding a few domain names that 
"should" not have been accessed and cached by our DNS.

Well, from Paris, France, time given by dig stats :
- mana.pf (French Polynesia, other side of the Earth, satellite link) : around 
700ms
- aroundtheworld.com, astaluego.com, apple.is, dell.nl, Volvo.se : between 100 
and 150ms
- nintendo.co.jp, Yamaha.co.jp, pioneer.co.jp : around 300ms

Cached entries are returned in less than 1ms.

Ionel


-Message d'origine-
De : Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Envoyé : vendredi 6 juin 2008 21:05
À : GARDAIS Ionel
Cc : Squid Users
Objet : Re: [squid-users] performances ... again

On fre, 2008-06-06 at 14:37 +0200, Ionel GARDAIS wrote:
> I got a user (whom I can trust) who uses an explicit proxy configuration: 
> there are no improvements.

Ok. Then it's at the proxy, or the DNS servers it uses.

Remember that to diagnose DNS slowness you need to query for hosts and
domains which has not yet been visited, as the DNS server also caches a
lot. Lookups of already visited domains/hosts are not valid as proof to
say that the DNS is fine.

> I tried to avoid use of calls which cause DNS lookups (hence the 
> host.match() and host.indexOf() ).

Good.

Regards
Henrik


Re: [squid-users] performances ... again

2008-06-06 Thread Henrik Nordstrom
On fre, 2008-06-06 at 14:37 +0200, Ionel GARDAIS wrote:
> I got a user (whom I can trust) who uses an explicit proxy configuration: 
> there are no improvements.

Ok. Then it's at the proxy, or the DNS servers it uses.

Remember that to diagnose DNS slowness you need to query for hosts and
domains which has not yet been visited, as the DNS server also caches a
lot. Lookups of already visited domains/hosts are not valid as proof to
say that the DNS is fine.

> I tried to avoid use of calls which cause DNS lookups (hence the 
> host.match() and host.indexOf() ).

Good.

Regards
Henrik



[squid-users] debug_options reference

2008-06-06 Thread Anton Melser
Hi all,
I feel like a complete fool, but I just can't seem to find my way around
the squid docs... could someone point me to the list of debug sections?
ALL,1 33,2 seems to be a common setting - but where is the doc that says
what 33 is?!?
Cheers
Anton
ps. Do I have to read through the source for this?

-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...
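For the record, debug_options takes section,level pairs, with ALL setting the default level; the section-to-module mapping ships with the Squid source (doc/debug-sections.txt in recent source trees). A sketch of the syntax:

```
# Default every debug section to level 1, raise section 33 to level 2.
# The list of section numbers is in doc/debug-sections.txt in the
# Squid source tree.
debug_options ALL,1 33,2
```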


Re: [squid-users] ESI questions

2008-06-06 Thread Luciano Ramalho
>> Amos Jeffries said:
> It's a bit before my time but I believe the initial sponsors were
> using it. At least until development on it stopped.

The initial sponsors for ESI support in Squid were Zope Corp, and they
never used it, because Squid 3, which was supposed to implement it, was a
few years late.

-- 
Luciano Ramalho <[EMAIL PROTECTED]>
http://occam.com.br
+55 11 3628-9723 (com.)
+55 11 8432-0333 (cel.)


[squid-users] RE : [squid-users] performances ... again

2008-06-06 Thread GARDAIS Ionel
I will try the host != "some.url.com" part.

For the isInNet() trick, the problem is that it triggers a DNS resolution call 
for every request, in order to compare with the IP/mask parameters.
I was thinking to myself that it was useless overhead...

Ionel


-Message d'origine-
De : Matus UHLAR - fantomas [mailto:[EMAIL PROTECTED] 
Envoyé : vendredi 6 juin 2008 17:07
À : squid-users@squid-cache.org
Objet : Re: [squid-users] performances ... again

On 06.06.08 14:37, Ionel GARDAIS wrote:
> function FindProxyForURL(url,host) {
>if (
>(
>!(
>host.indexOf('www.ifp.fr') == 0
>|| host.indexOf('validation.ifp.fr') == 0
>|| host.indexOf('project.ifp.fr') == 0
>|| host.indexOf('ogst.ifp.fr') == 0
>)
>)

wouldn't it be easier to compare host with those strings?

host != "www.ifp.fr" && host != "validation.ifp.fr" ... 

>|| host.match('127.0.0.1')

is using of
isInNet(host, "127.0.0.0", "255.0.0.0")

not working?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I just got lost in thought. It was unfamiliar territory. 
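The suggestion above amounts to replacing the indexOf() prefix tests with exact string comparisons, which avoids regex work and is also stricter (an indexOf() == 0 prefix test would also match e.g. www.ifp.fr.example.net). A minimal sketch, using the host names from the pac quoted in this thread:

```javascript
// Exact-match test for the hosts the example pac routes through the
// proxy, replacing host.indexOf('www.ifp.fr') == 0 style prefix checks.
function isProxiedIfpHost(host) {
    return host === "www.ifp.fr"
        || host === "validation.ifp.fr"
        || host === "project.ifp.fr"
        || host === "ogst.ifp.fr";
}

console.log(isProxiedIfpHost("www.ifp.fr"));             // true
console.log(isProxiedIfpHost("www.ifp.fr.example.net")); // false
```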


[squid-users] Squid hangs after a time

2008-06-06 Thread Carlos Alberto Bernat Orozco
 Hi group

 Thanks for your answer. I just checked resolv.conf. It seems normal to
 me, with the nameserver list from my ISP.

 Could it be another problem?

 Thanks in advance

 2008/6/6 Mario Salazar Baños <[EMAIL PROTECTED]>:
>
> > Carlos Alberto Bernat Orozco escribió:
> >
> >  Hi
> >>
> >> I'm new to Squid, so please be patient. I've installed Squid 2.6 on a
> >> Debian Etch as a transparent proxy. I can go to many web sites but
> >> suddenly I can't surf the web. I realize the problem when I make ping
> >> to gmail.com and after 10 or 20 minutes gives me Request Time Out.
> >>
> >> I saw the link
> >>
> >> http://squidproxy.wordpress.com/2007/06/05/thinsg-to-look-at-if-websites-are-hanging/
> >>
> >> (Things to look at if websites are hanging!) and I've applied those
> >> solutions but still have that problem. What other thing can I check?
> >>
> >> Please tell me if you need more info
> >>
> >> Thanks in advance.
> >>
> > check your dns list (resolv.conf)
> >
> > --
> >
>


Re: [squid-users] performances ... again

2008-06-06 Thread Matus UHLAR - fantomas
On 06.06.08 14:37, Ionel GARDAIS wrote:
> function FindProxyForURL(url,host) {
>if (
>(
>!(
>host.indexOf('www.ifp.fr') == 0
>|| host.indexOf('validation.ifp.fr') == 0
>|| host.indexOf('project.ifp.fr') == 0
>|| host.indexOf('ogst.ifp.fr') == 0
>)
>)

wouldn't it be easier to compare host with those strings?

host != "www.ifp.fr" && host != "validation.ifp.fr" ... 

>|| host.match('127.0.0.1')

is using of
isInNet(host, "127.0.0.0", "255.0.0.0")

not working?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I just got lost in thought. It was unfamiliar territory. 


[squid-users] https questions

2008-06-06 Thread Ken W.
( I'm sorry that this is my third message to the list with the same question,
  b/c the previous two messages sent from yahoo got lost...)

Hello members,

I want to set up squid so that it accepts https from clients, then forwards the
request to the origin server with the http protocol.

This is the setting I considered:

https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key
protocol=http

cache_peer 10.0.0.1 parent 80 0 no-query originserver name=origin_1
acl service_1 dstdomain .xxx.com
cache_peer_access origin_1 allow service_1


Then I access to squid with this way:
https://www.xxx.com/

Can squid accept this https request and forward it to the origin server over
http correctly?
btw, what's the usage of "protocol=http"? I can't quite understand it.

Thanks in advance.
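A minimal sketch of the https-in, http-out accelerator setup being asked about (the certificate paths, domain and origin IP are the poster's own placeholders; the front-end-https peer option, which makes Squid tell the origin application that the client-facing side is SSL, is an assumption here and its availability varies by Squid version):

```
https_port 443 accel vhost cert=/squid/etc/xxx.crt key=/squid/etc/xxx.key
cache_peer 10.0.0.1 parent 80 0 no-query originserver front-end-https=on name=origin_1
acl service_1 dstdomain .xxx.com
cache_peer_access origin_1 allow service_1
```

The origin application must be prepared to emit https:// URLs in its responses even though it is reached over plain http, which is what front-end-https helps signal.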


Re: [squid-users] Squid hangs after some time

2008-06-06 Thread Mario Salazar Baños

Carlos Alberto Bernat Orozco escribió:

Hi

I'm new to Squid, so please be patient. I've installed Squid 2.6 on a
Debian Etch as a transparent proxy. I can go to many web sites but
suddenly I can't surf the web. I realize the problem when I make ping
to gmail.com and after 10 or 20 minutes gives me Request Time Out.

I saw the link

http://squidproxy.wordpress.com/2007/06/05/thinsg-to-look-at-if-websites-are-hanging/

(Things to look at if websites are hanging!) and I've applied those
solutions but still have that problem. What other thing can I check?

Please tell me if you need more info

Thanks in advance.


  

check your dns list (resolv.conf)

--


Re: [squid-users] Transparent proxy with DansGuardian using IDENT

2008-06-06 Thread modulok

This seems like more work, for both the admin and the clients.

Also, we have clients who go offsite often (the salesmen are barely here); if
they have proxy settings configured, when they go offsite they will not be able
to work online without using the VPN and proxying through that. And if they are
at a hotel that requires registration, that may not work either.

Correct me if I am wrong, but I think it will be much easier to move
Squid and the firewall onto one box.


Henrik Nordstrom-5 wrote:
> 
> On fre, 2008-05-30 at 05:56 -0700, [EMAIL PROTECTED] wrote:
> 
>> The whole ident setup (every client having an ident service running)
>> is a little bit annoying, but I haven't found any other way to log
>> usernames without forcing clients to enter a user/pass to access the
>> internet. If there is another way please let me know.
> 
> Clients configured to use the proxy, and NTLM authentication for
> identification. Works for computers who are members of the windows
> domain.
> 
> Guests with computers outside of the domain have to enter a user/pass to
> get access.
> 
>> How can I make sure nobody uses a third party browser to get on the
>> internet?
> 
> Use the interception technique you already have played with, but instead
> of intercepting in the proxy, send it to a web page explaining how to
> configure the proxy settings in their browser.
> 
> Regards
> Henrik
> 
>  
> 

-- 
View this message in context: 
http://www.nabble.com/Transparent-proxy-with-DansGuardian-using-IDENT-tp17544098p17693766.html
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] RE: performances ... again

2008-06-06 Thread Dean Weimer
Could you possibly give us the pac script you are using?  I once tried using a 
rule of the form "if DNS does not resolve, use the proxy, else go direct", since 
internal clients can't resolve outside DNS.  This caused a very similar symptom 
to what you are seeing, as clients had to wait for local DNS timeouts before 
going through the proxy on every page.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Ionel GARDAIS [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 06, 2008 12:55 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] RE: performances ... again

We are using a pac script, mostly with Windows XP clients ...

@Dean : the second page does not load faster than the first one. 
Browser response times are much better early in the morning (with fewer 
connections, obviously). When a blank page takes too much time to load, 
doing a "refresh" or revalidating the URL in the address bar often 
"unlocks" page loading, and the page is displayed with a good response time.

I'll try to investigate if DNS is involved and maybe find a workaround 
to the pac autoconfiguration to do transparent proxy.

Ionel


Ritter, Nicholas wrote:
>  I had a problem similar to this at another job site a couple of years ago. 
> The clients were windows xp machines, and they were using wpad/pac style 
> configuration. The fix was transparent caching.
>
> -Nick
>
>
> -Original Message-
> From: Dean Weimer [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, June 05, 2008 1:53 PM
> To: GARDAIS Ionel; squid-users@squid-cache.org
> Subject: [squid-users] Re:[squid-users] performances ... again
>
> How are your browsers configured to use the proxy? Manual, wpad script, 
> transparent?
>  
> Could be a problem with discovering proxy settings.
>
> What about a second page on the same server, ie. http://some.domain.com then 
> http://some.domain.com/nextpage.html?  Could be a DNS response issue, perhaps 
> your first server is timing out, and the clients have to wait for the second 
> to respond.  If the second page comes up right away, this would be a good 
> indicator of that.  As Squid would have cached the DNS lookup from the first 
> request.
>
> Most servers are not going to use 12Mb/s of bandwidth; that is a decent chunk, 
> and I wouldn't expect to see it maxed out all the time, so averaging 
> under 2Mb/s is not in itself cause for concern.  The fact that you are 
> hitting it on large downloads means the link is performing well.
>
> I am seeing about 180ms median response time on misses and 5ms median 
> response time on hits, 87ms response time on DNS Lookups.  The server is 
> running 2G cpu and 1G ram, with an average of 900 req/min.  The server is 
> servicing about 500 clients connected behind 2 T1 lines.  Both lines are 
> consistently running at 1.2 to 1.5Mb/s from 7am to 6pm when most users are at 
> work.  Disk cache is 8 gigs on the same disk as the system, which is actually a 
> pair of hardware-mirrored Ultra 160 10K SCSI disks (not ideal, as I have learned a 
> lot more since I first built this system), but the performance is excellent, 
> so I haven't found cause to change it.  The server is running FreeBSD 5.4, 
> squid the cache and logs are installed on their own mount point using ufs 
> file system, Mount point is on a single Disk slice encompassing entire hard 
> drive, and to top that off, the file system runs about 90% of capacity, yet 
> another no no.
>
> Thanks,
>  Dean Weimer
>  Network Administrator
>  Orscheln Management Co
>
> -Original Message-
> From: GARDAIS Ionel [mailto:[EMAIL PROTECTED]
> Sent: Thursday, June 05, 2008 12:11 PM
> To: chris brain; squid-users@squid-cache.org
> Subject: [squid-users] RE : [squid-users] Re:[squid-users] performances ... 
> again
>
> Hi Chris,
>
> The internet link is not congested.
> As I wrote, we use less than 2Mb/s of the 12Mb/s we can reach (but yes, 
> upload seems to be limited to 512Kb/s (somewhere around the maximum of the 
> line), this might be a bottleneck).
> When downloading large files (from ten to hundereds of megabytes), the whole 
> 12Mb are used (showing a 1100KB/s download speed).
>
> After rereading my post, I saw that I did not finish a line :
> "[...] cache-misses median service times are around 200ms and cache-hits are 
> around 3ms" but we often see a 10-second lag for browser to start loading the 
> page.
>
> Ionel
>
>
> -Message d'origine-
> De : chris brain [mailto:[EMAIL PROTECTED]
> Envoyé : jeudi 5 juin 2008 18:34
> À : squid-users@squid-cache.org
> Objet : [squid-users] Re:[squid-users] performances ... again
>
> Hi Ionel,
>
> Your performance doesn't look that bad. Our stats roughly work out to be:
>
> 1000+ users
> NTLM auth
> Average HTTP requests per minute since start: 2990.8
> with max 30% hits (so your hits look comparable to ours). Our cache miss 
> service time averages about 160ms and cache hit service time about 10ms, 
> running IBM blade P4 3G c

Re: [squid-users] performances ... again

2008-06-06 Thread Ionel GARDAIS
I got a user (whom I can trust) who uses an explicit proxy configuration: 
there are no improvements.
The pac we use is mostly made of a huge "if" which instructs the user's 
browser to bypass the proxy and go direct to some servers.


Here is the pac :

function FindProxyForURL(url,host) {
   if (
   (
   !(
   host.indexOf('www.ifp.fr') == 0
   || host.indexOf('validation.ifp.fr') == 0
   || host.indexOf('project.ifp.fr') == 0
   || host.indexOf('ogst.ifp.fr') == 0
   )
   )
   &&
   (
   isPlainHostName(host)
   || host.match('.ifp.fr')
   || host.match('.cegedim-srh.com')
   || host.match('.cegedim-srh.net')
   || host.match('.private.cegedim.com')
   || host.match('graphidoc.cvp.fr')
   || host.match('127.0.0.1')
   || host.match('192.168.9.204')
   || host.match('172.16')
   || host.match('172.17.2')
   || host.match('172.17.3')
   || host.match('172.20')
   || host.match('172.29')
   || host.match('172.30')
   || host.match('172.31')
   || host.match('192.168.1')
   || host.match('156.118')
   || host.match('83.173.66.219')
   || host.match('89.148.17.193')
   || host.match('194.5.133')
   || host.match('194.5.134')
   || host.match('80.94.191')
   )
   )
   return "DIRECT";

   return "PROXY 192.168.9.200:3328";
}

I tried to avoid calls which cause DNS lookups (hence the 
host.match() and host.indexOf() calls).


Ionel
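
One way to narrow down where the 10-second lag comes from is to time the same fetch direct and through the proxy. This is a diagnostic sketch, not part of the original thread: it assumes curl is available on a client, and the proxy address is the one named in the PAC above.

```shell
# Diagnostic sketch (assumes curl). Prints total transfer time for a
# URL, fetched either direct or via a given proxy.
fetch_time() {
    # $1 = URL, $2 = optional proxy (e.g. http://192.168.9.200:3328)
    if [ -n "${2:-}" ]; then
        curl -o /dev/null -s -w '%{time_total}\n' -x "$2" "$1"
    else
        curl -o /dev/null -s -w '%{time_total}\n' "$1"
    fi
}

# Example, run from a client on the LAN (addresses from the PAC above):
# fetch_time http://www.squid-cache.org/                            # direct
# fetch_time http://www.squid-cache.org/ http://192.168.9.200:3328  # proxied
```

If the proxied fetch is consistently several seconds slower on the first request to a new host, the delay is likely name resolution on the Squid box rather than the PAC itself.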


Henrik Nordstrom wrote:

Is there any difference if you configure the proxy explicitly without
using a PAC?

Do you have any rules in the PAC depending on the destination IP of the
requested server?


On Fri, 2008-06-06 at 08:56 +0200, Ionel GARDAIS wrote:

Configured proxy for now.
I'm doing some network work to see how I can use squid in transparent 
interception without breaking the exclude rules from the current pac we 
use.


Ionel


Henrik Nordstrom wrote:


Configured proxy, or transparent interception?


On fre, 2008-06-06 at 08:29 +0200, Ionel GARDAIS wrote:

DNS issues ... client side? proxy side?
Clients resolve via Windows Server 2003 DNS for internal domain names.
These servers forward to the DMZ DNS (running bind) for the internal view of
the DNS (private IPs). The DMZ DNS forwards to the world for all internet
name resolution.
The squid box uses the DMZ DNS.

Thanks,
Ionel

Henrik Nordstrom wrote: 



On Thu, 2008-06-05 at 19:10 +0200, GARDAIS Ionel wrote:

After rereading my post, I saw that I did not finish a line :
"[...] cache-misses median service times are around 200ms and cache-hits are around 
3ms" but we often see a 10-second lag for browser to start loading the page.




That's usually DNS issues. For example if you have two DNS servers
configured where one can not resolve external names...

Regards
Henrik
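
Henrik's point about a half-broken resolver set is easy to check from the Squid box. A small sketch (the probe loop assumes dig from bind-utils and network access; the parsing part is plain awk):

```shell
# Sketch: list the resolvers this host is configured with. The probe
# loop below (commented out, network-dependent) then asks each one
# for an external name; a resolver that times out is the kind of
# misconfiguration Henrik describes.
resolvers() {
    awk '/^nameserver/ { print $2 }' "${1:-/etc/resolv.conf}"
}

# for ns in $(resolvers); do
#     echo "== $ns =="
#     dig +time=2 +tries=1 @"$ns" www.squid-cache.org +short \
#         || echo "resolver $ns cannot resolve external names"
# done
```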

--
Ionel GARDAIS
System-Network Engineer








Re: [squid-users] Transparent proxy with MSN

2008-06-06 Thread Sergio Belkin
2008/6/5 Amos Jeffries <[EMAIL PROTECTED]>:
> Sergio Belkin wrote:
>>
>> Hi,
>> I'd want to know if it's possible to allow MSN usage alongside a transparent proxy.
>
> Possible. But not always easy. It depends highly on the type of network you
> have setup (a level of NAT between the client and squid kills it fairly
> well).

The schema is as follows:

A user connects with his notebook via an Access Point which has OpenWRT
installed. OpenWRT has these DNAT rules:

iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 80 -j DNAT
--to-destination $SQUID_IP:8080

iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 1863 -j DNAT
--to-destination $SQUID_IP:8080

(I've tried the last one and even redirecting 1050, but I'm not sure
if that's right)

Users can browse the web with no problems using transparent proxy
(except SSL sites of course) but they fail to use MSN.


>
> MSN is _supposed_ to have automatic failovers to port 80 that use HTTP. But
> that depends on what other paths it can find through your network first.
>
> Amos
> --
> Please use Squid 2.7.STABLE1 or 3.0.STABLE6
>

Thanks in advance!

-- 
--
Open Kairos http://www.openkairos.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -


[squid-users] Squid hangs after some time

2008-06-06 Thread Carlos Alberto Bernat Orozco
Hi

I'm new to Squid, so please be patient. I've installed Squid 2.6 on a
Debian Etch box as a transparent proxy. I can reach many web sites, but
suddenly I can't surf the web at all. I notice the problem when I ping
gmail.com: after 10 or 20 minutes it gives me Request Time Out.

I saw the link

http://squidproxy.wordpress.com/2007/06/05/thinsg-to-look-at-if-websites-are-hanging/

(Things to look at if websites are hanging!) and I've applied those
solutions but still have that problem. What other thing can I check?

Please tell me if you need more info

Thanks in advance.


Re: [squid-users] block the http tunnel...

2008-06-06 Thread Amos Jeffries

[EMAIL PROTECTED] wrote:

dear all...

I have a big problem with my squid-2.6-stable19 transparent setup.
I can't filter HTTP tunnels with squid.


What http-tunnel? There are many ways of doing it. Your transparent 
setup is one.



Usually I use the acls
dstdom_regex, dstdom, and src, but I think they are not useful for
filtering access when my client uses an http-proxy.

Please explain to me how the http-proxy works, and how to filter
http-tunnel connections with squid without entering the domain names or
IPs in an acl.

thanks in advance.



tommy



--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


Re: [squid-users] performances ... again

2008-06-06 Thread Henrik Nordstrom
Is there any difference if you configure the proxy explicitly without
using a PAC?

Do you have any rules in the PAC depending on the destination IP of the
requested server?


On Fri, 2008-06-06 at 08:56 +0200, Ionel GARDAIS wrote:
> Configured proxy for now.
> I'm doing some network work to see how I can use squid in transparent 
> interception without breaking the exclude rules from the current pac we 
> use.
> 
> Ionel
> 
> 
> Henrik Nordstrom wrote:
> > Configured proxy, or transparent interception?
> >
> >
> > On fre, 2008-06-06 at 08:29 +0200, Ionel GARDAIS wrote:
> >   
> >> DNS issues ... client side ? proxy side ?
> >> clients resolve to Windows Server 2003 DNS for internal domain names.
> >> These servers forward to DMZ DNS (running bind) for internal view of
> >> the DNS (private IPs). DMZ DNS forward to the world for all internet
> >> name resolution.
> >> The squid box uses the DMZ DNS.
> >>
> >> Thanks,
> >> Ionel
> >>
> >> Henrik Nordstrom wrote: 
> >> 
> >>> tor 2008-06-05 klockan 19:10 +0200 skrev GARDAIS Ionel:
> >>>   
> >>>   
>  After rereading my post, I saw that I did not finish a line :
>  "[...] cache-misses median service times are around 200ms and cache-hits 
>  are around 3ms" but we often see a 10-second lag for browser to start 
>  loading the page.
>  
>  
> >>> That's usually DNS issues. For example if you have two DNS servers
> >>> configured where one can not resolve external names...
> >>>
> >>> Regards
> >>> Henrik
> >>>   
> >>>   
> >> -- 
> >> Ionel GARDAIS
> >> System-Network Engineer
> >> 
> 



Re: [squid-users] Squid 2.6 Access Log Not showing access to websites

2008-06-06 Thread Indunil Jayasooriya
>> On the squid box, there is a utility Guarddog used for port forwarding, so
>> it forwards all traffic on port 80 to Squid port 3128.
>
> I'd say your problem is here. You have port forwarded port 80 on the
> server itself to port 3128 on the server itself. Same as configuring
> Squid to listen on port 80 directly.

I think Henrik is right. Please do not use such a GUI tool; please enter
the iptables commands by hand.

> What you need is a rule which intercepts (NATs) any outgoing traffic to
> port 80 on servers out on the Internet and redirects it to Squid. This
> is different from port 80 on the server itself.

Please try the rules below.

# On the squid box, open the squid port (3128) for LAN IPs
iptables -A INPUT -i eth0 -d ipofsquidbox -p tcp -s ipofLANs/24
--dport 3128 -j ACCEPT

# Redirect traffic destined to port 80 to port 3128
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j REDIRECT
--to-port 3128
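
Once rules like those are in place, a quick way to confirm the REDIRECT is actually matching is to watch its packet counters while a client browses. A diagnostic sketch (needs root on the Squid box; netstat is from the net-tools package):

```shell
# List the nat PREROUTING rules with packet/byte counters; the
# REDIRECT rule's counters should climb as clients browse.
iptables -t nat -L PREROUTING -n -v --line-numbers

# Also confirm Squid really is listening on 3128:
netstat -lnt | grep ':3128 '
```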


Hope to hear from you.

Happy squiding

-- 
Thank you
Indunil Jayasooriya


[squid-users] block the http tunnel...

2008-06-06 Thread [EMAIL PROTECTED]
dear all...

I have a big problem with my squid-2.6-stable19 transparent setup.
I can't filter HTTP tunnels with squid. Usually I use the acls
dstdom_regex, dstdom, and src, but I think they are not useful for
filtering access when my client uses an http-proxy.

Please explain to me how the http-proxy works, and how to filter
http-tunnel connections with squid without entering the domain names or
IPs in an acl.

thanks in advance.



tommy


Re: [squid-users] Squid 2.6 Access Log Not showing access to websites

2008-06-06 Thread Kirtimaan

Henrik,

Thanks for the details. I will try these and reply with the results.

Regards,
Kirtimaan



Henrik Nordstrom wrote:

Protocol: TCP
Source IP: LAN
Source port: ANY
Destination IP: ANY
Destination port: 80
Action: DNAT to serverip:port, or alternatively REDIRECT to proxy port

You can find iptables rule templates in the Squid FAQ.

I can not help you with the GUI tool you are using as I have never seen
it or used it, and from what I have read Guarddog DOES NOT support NAT
or even port forwarding.

Regards
Henrik

On fre, 2008-06-06 at 11:42 +0530, Kirtimaan wrote:

Henrik,

Thanks for the reply. Can you please provide me the rule which I have to
add at the (NAT:s) step?


Regards,
Kirtimaan

Henrik Nordstrom wrote:

On Thu, 2008-06-05 at 11:37 +0530, Kirtimaan wrote:
On the squid box, there is a utility Guarddog used for port forwarding, so
it forwards all traffic on port 80 to Squid port 3128.

I'd say your problem is here. You have port forwarded port 80 on the
server itself to port 3128 on the server itself. Same as configuring
Squid to listen on port 80 directly.

What you need is a rule which intercepts (NATs) any outgoing traffic to
port 80 on servers out on the Internet and redirects it to Squid. This
is different from port 80 on the server itself.

Regards
Henrik


Re: [squid-users] performances ... again

2008-06-06 Thread Ionel GARDAIS

Configured proxy for now.
I'm doing some network work to see how I can use squid in transparent 
interception without breaking the exclude rules from the current pac we 
use.


Ionel


Henrik Nordstrom wrote:

Configured proxy, or transparent interception?


On fre, 2008-06-06 at 08:29 +0200, Ionel GARDAIS wrote:

DNS issues ... client side? proxy side?
Clients resolve via Windows Server 2003 DNS for internal domain names.
These servers forward to the DMZ DNS (running bind) for the internal view of
the DNS (private IPs). The DMZ DNS forwards to the world for all internet
name resolution.
The squid box uses the DMZ DNS.

Thanks,
Ionel

Henrik Nordstrom wrote: 


On Thu, 2008-06-05 at 19:10 +0200, GARDAIS Ionel wrote:

After rereading my post, I saw that I did not finish a line :
"[...] cache-misses median service times are around 200ms and cache-hits are around 
3ms" but we often see a 10-second lag for browser to start loading the page.



That's usually DNS issues. For example if you have two DNS servers
configured where one can not resolve external names...

Regards
Henrik

--
Ionel GARDAIS
System-Network Engineer



