[squid-users] RPC over HTTPS for Terminal Services Gateway

2008-11-24 Thread Andreas Adler
Hi there

I am running Squid 3.0 PRE6 as a reverse proxy for many applications and
services. RPC over HTTPS for Exchange/OWA has been running fine for a long
time. Recently I tried to pass the TS Gateway through Squid, but this is giving
me a very hard time. TS Gateway uses RPC over HTTPS just like Exchange does,
but I always get an authentication error. Here is what I get:

--
TCP_MISS/401 399 RPC_IN_DATA https://server.domain.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/server.domain.com text/plain
--

Here is my access rule:
cache_peer server.domain.com parent 443 0 proxy-only no-query originserver 
front-end-https=on ssl login=PASS sslflags=DONT_VERIFY_PEER

Does anybody run a Terminal Services Gateway (TS Gateway) being proxied through 
squid?  Could there be something wrong with some NTLM passthrough? I am pretty 
clueless on this, so any help is very appreciated!

Thanks a lot!
Andreas Adler


Re: [squid-users] RPC over HTTPS for Terminal Services Gateway

2008-11-24 Thread Guido Serassio

Hi,

At 10.23 24/11/2008, Andreas Adler wrote:

Hi there

I am running Squid 3.0 PRE6 as a reverse proxy for many applications 
and services. RPC over HTTPS for Exchange/OWA is running fine for a 
long time. Recently I tried to pass the TS Gateway through Squid, 
but this is giving me a very hard time. TS Gateway is using RPC over 
HTTPS just like Exchange does, but I always get  an authentication 
error. Here is what I get:


--
TCP_MISS/401 399 RPC_IN_DATA 
https://server.domain.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/server.domain.com text/plain

--

Here is my access rule:
cache_peer server.domain.com parent 443 0 proxy-only no-query 
originserver front-end-https=on ssl login=PASS sslflags=DONT_VERIFY_PEER


Does anybody run a Terminal Services Gateway (TS Gateway) being 
proxied through squid?  Could there be something wrong with some 
NTLM passthrough? I am pretty clueless on this, so any help is very 
appreciated!


I never tested TS Gateway with Squid, but Exchange RPC over HTTPS usually
works better using Basic authentication over SSL.


Another thing to verify is the reverse proxy's SSL certificate: with
self-signed certificates for Exchange RPC over HTTPS, Outlook fails
silently if the CA is not trusted.
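One way to check that last point (a diagnostic sketch; server.domain.com is just the placeholder hostname from the config above) is to inspect the certificate chain the proxy actually presents:

```
# Show the certificate chain served on port 443; it should terminate in
# a CA that the Outlook / TS Gateway client machines actually trust.
openssl s_client -connect server.domain.com:443 -showcerts </dev/null
```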


Regards

Guido



Thanks a lot!
Andreas Adler



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] * Zero Sized Reply

2008-11-24 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
When I open it (from the LAN and from hidemyass.com), my
squid/access.log says:

 10917 67.159.44.138 TCP_HIT/000 0 GET http://castor.gpi-g.com/ -
FIRST_UP_PARENT/mySite -

and at browser :
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://w2.hidemyass.com/index.php?

The following error was encountered:

* Zero Sized Reply

Squid did not receive any data for this request.

Your cache administrator is [EMAIL PROTECTED]
Generated Mon, 24 Nov 2008 09:50:57 GMT by castor.gpi-g.com
(squid/2.6.STABLE18)



# Squid #
http_port 80 accel vhost defaultsite=castor.gpi-g.com
cache_peer 202.169.51.122 parent 80 0 no-query name=mySite
acl hostedSites dstdomain castor.gpi-g.com

http_port 80 accel vhost defaultsite=hr.gpi-g.com
cache_peer 192.168.222.111 parent 80 0 no-query name=mySite2
acl hostedSites2 dstdomain hr.gpi-g.com


acl manager proto cache_object
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1 192.168.222.100
acl SSL_ports port 443 563
acl Safe_ports port 21 80 81 53 143 2443 443 563 70 210 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT

cache_peer_access mySite allow hostedSites
cache_peer_access mySite2 allow hostedSites2
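For what it's worth, the excerpt above declares http_port 80 twice; only one listener can bind a given port, so the second directive is likely ignored. A consolidated sketch of what was probably intended (adding originserver on the accel peers is an assumption, using the same names):

```
# One accel listener; vhost routes requests by Host header
http_port 80 accel vhost defaultsite=castor.gpi-g.com

cache_peer 202.169.51.122 parent 80 0 no-query originserver name=mySite
cache_peer 192.168.222.111 parent 80 0 no-query originserver name=mySite2

acl hostedSites dstdomain castor.gpi-g.com
acl hostedSites2 dstdomain hr.gpi-g.com

cache_peer_access mySite allow hostedSites
cache_peer_access mySite2 allow hostedSites2
```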



-- 
-=-=-=-=
http://amyhost.com
Hot news!!!: Due to the high demand for domain registration, the balance
stock has now been updated to the current exchange rate, i.e. Rp. 85,000
for non-resellers | Rp. 82,000 for resellers

Want your own PREMIUM SMS service?
Contact me ASAP. Get a MAXIMUM revenue share with no traffic requirements...


[squid-users] Re: * Zero Sized Reply

2008-11-24 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
Sorry, I forgot: I am using HAVP too.

this is the msg from LAN ( HAVP Error MSG )
WARNING


The following server is down:
Could not read headers


On Mon, Nov 24, 2008 at 4:52 PM, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
[EMAIL PROTECTED] wrote:
 when i open ( from LAN and from hide  my ass . com )
 my squid/access.log say :

  10917 67.159.44.138 TCP_HIT/000 0 GET http://castor.gpi-g.com/ -
 FIRST_UP_PARENT/mySite -

 and at browser :
 ERROR
 The requested URL could not be retrieved

 While trying to retrieve the URL: http://w2.hidemyass.com/index.php?

 The following error was encountered:

* Zero Sized Reply

 Squid did not receive any data for this request.

 Your cache administrator is [EMAIL PROTECTED]
 Generated Mon, 24 Nov 2008 09:50:57 GMT by castor.gpi-g.com
 (squid/2.6.STABLE18)



 # Squid #
 http_port 80 accel vhost defaultsite=castor.gpi-g.com
 cache_peer 202.169.51.122 parent 80 0 no-query name=mySite
 acl hostedSites dstdomain castor.gpi-g.com

 http_port 80 accel vhost defaultsite=hr.gpi-g.com
 cache_peer 192.168.222.111 parent 80 0 no-query name=mySite2
 acl hostedSites2 dstdomain hr.gpi-g.com


 acl manager proto cache_object
 acl all src 0.0.0.0/0.0.0.0
 acl localhost src 127.0.0.1 192.168.222.100
 acl SSL_ports port 443 563
 acl Safe_ports port 21 80 81 53 143 2443 443 563 70 210 1025-65535
 acl Safe_ports port 280
 acl Safe_ports port 488
 acl Safe_ports port 591
 acl Safe_ports port 777
 acl CONNECT method CONNECT

 cache_peer_access mySite allow hostedSites
 cache_peer_access mySite2 allow hostedSites2









[squid-users] SSL Question

2008-11-24 Thread jhnns

I have configured Squid with only an http_port; I am not using --enable-ssl.
Yet when I go to an https URL in a browser, it works. How is this possible?
I expected that the browser would have to create an https connection to Squid.

Regards,

Johannes
-- 
View this message in context: 
http://www.nabble.com/SSL-Question-tp20658569p20658569.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] cache_peer, time based

2008-11-24 Thread John Doe
Maybe something like the following would work?

acl daytime time 08:00-23:59
acl nighttime time 00:00-07:59

cache_peer_access mydaycache.net allow daytime
cache_peer_access mydaycache.net deny all
cache_peer_access mynightcache.net allow nighttime
cache_peer_access mynightcache.net deny all
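Since the original poster wants to go direct during the day and use the peer only at night, the peer rules probably also need routing directives, or Squid may still prefer going direct. A sketch under that assumption (the peer name is invented):

```
acl nighttime time 00:00-07:59

cache_peer nightcache.example.net parent 3128 0 no-query
cache_peer_access nightcache.example.net allow nighttime
cache_peer_access nightcache.example.net deny all

# Force traffic through the peer at night, and straight to the
# origin servers the rest of the time.
never_direct allow nighttime
always_direct allow !nighttime
```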

JD


- Original Message 
 From: Marcel Grandemange [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Sunday, November 23, 2008 1:46:33 AM
 Subject: [squid-users] cache_peer, time based
 
 Hey does anybody know how I would do something like the following...
 
 I have a squid server that under normal circumstances will always fetch
 objects directly.
 
 However from 12:00 at night to 8:00am , the line is saturated from p2p use.
 Now the same time we have another cache on a different line for different
 purposes that will be entirely free during non working hours.
 
 What I would like to do is use the cache_peer directive for this cache but
 it must ONLY be used between 12:00 and 8:00 , the rest of the time it must
 go directly.
 
 
 Possible? Scripts?
 
 
 Any advice?



  



[squid-users] improve flow capacity for Squid

2008-11-24 Thread Ken DBA
Hello,

I am finding that Squid's flow capacity (throughput) is quite limited;
it is hard to get past an upper limit of 150 Mbit/s.

How can I improve Squid's throughput in reverse-proxy mode?
Thanks in advance.

Ken


  


Re: [squid-users] squid-3.1.0.2 compilation error

2008-11-24 Thread Mario Remy Almeida
Thanks,

Yes I had to do that.

I downloaded the latest daily snapshot last night and copied the file to
squid-3.1.0.2 and it worked fine

now no problem

Regards,
Remy


On Mon, 2008-11-24 at 10:34 +1300, Amos Jeffries wrote:
  Hi All,
 
 
  Tried to compile squid on Ubuntu 8.04
  got the below error message
 
  squid_kerb_auth.c:121:20: error: base64.h: No such file or directory
 
  base64.h file not found under the below directory
  helpers/negotiate_auth/squid_kerb_auth/
  instead base64.c file found
 
  would like to know if anyone had the above problem and how it was solved
 
  downloaded the squid-3.1.0.2.tar.bz2 file twice from the below location
 
  http://www1.it.squid-cache.org/Versions/v3/3.1/
 
 
 Thank you for testing. This is a known problem with the bundle.
 I believe it to be corrected in the daily snapshots.
 
 Please also note the updated compile details for 3.1 on Debian & Ubuntu
 http://wiki.squid-cache.org/SquidFaq/CompilingSquid
 
 Amos
 
 



[squid-users] tcp_outgoing_tos 0x02 not working

2008-11-24 Thread Evelio Vila
Hi everyone!

I have a 2.6 STABLE squid and
something like this in my conf file:

tcp_outgoing_tos 0x02 !top_users

However, the 0x02 mark doesn't seem to be applied. I've
tried other values and they do work.

This post describes the same issue:
http://www.squid-cache.org/mail-archive/squid-users/200508/0466.html

Any suggestions?

thanks in advance,
evelio





[squid-users] Squid vs httpd mod_cache

2008-11-24 Thread Neil Gunton

Hi all,

I'm running a LAMP community website (Debian Lenny, Apache 2.2.9, MySQL, 
mod_perl) which gets around 100,000 page requests per day. I currently 
use two builds of apache - one lightweight front end caching reverse 
proxy, and a heavy back-end mod_perl. This worked well for years while I 
was using Apache 1.3, since I was using Igor Sysoev's mod_accel and 
mod_deflate modules to do the reverse proxy and caching. Now I have 
upgraded to Apache 2.2, I can't use his modules any more, so I've been 
trying to use the stock mod_cache. The server is a dual Opteron 265 
(i.e. 4 cores), 4GB RAM, 4x10k SCSI drives in RAID0 (I know it's risky, 
but I need the space and performance, and backup is instantaneous with 
MySQL replication).


Everything's working fine, mostly, but I'm having some issues with the 
cache management. In a nutshell, htcacheclean just doesn't seem to be 
able to keep up with managing the cache pruning (i.e. keeping it down to 
a reasonable size). If I run htcacheclean in cron mode, then it takes 
hours to complete its run, and while running it hogs the disks and 
produces big iowait times. If I run it in daemon mode, then it just sits 
there and produces about half the iowait (if I run with the -n nice 
option), in which case it just isn't keeping up with the cache growth.


I'm concerned about the cache structure - it's a 3-level directory, and 
it seems to take a long time just to traverse it. Even doing a simple du 
on it seems to take forever, currently about 3 hours or more, and that's 
for about 10GB of cache. I'd prefer to keep the cache down to more like 
1GB at the most. In fact, that's what I have htcacheclean set to - 
1000MB. But it doesn't seem to be doing the job.


I've been asking around the Apache and mod_perl lists about ways to 
improve this. Someone suggested using Squid instead. So here I am - I've 
never used Squid, mostly because I always used Apache and really need 
the mod_rewrite capabilities for doing things like blocking image 
hotlinking from other sites. I really need a front-end reverse proxy 
that has capability to do access control stuff like this, as well as 
redirects for old content etc - you know, all the things you can do with 
mod_rewrite. I really don't want to have to pass all that back to the 
mod_perl processes.


I would like to know how good Squid's cache management (i.e. pruning) 
is. I get the impression that mod_cache in Apache 2.2 is not very mature 
- some of the cache management features don't even seem to be 
implemented yet. I assume that Squid is a much more mature product, and 
thus I'd hope that it has cache management pretty much down pat.


How does Squid manage its disk cache? Does it consume a lot of disk io 
when doing it?


Has anybody else here migrated from using Apache's mod_cache to Squid, 
and if so do you have any insights?


Lastly, if I do decide to use Squid, is the O'Reilly book from 2004 
still relevant, or is it out of date now? I know there's a lot of stuff 
online, but I like to have a handy book reference, plus a well-written 
book often has a good intro to the tool. This book seems to get only 
5-star reviews on Amazon. Is it still up to date?


Thanks in advance,

Neil


Re: [squid-users] Recommended Store Size

2008-11-24 Thread Chris Robertson

Nyamul Hassan wrote:
Is there any recommended store size for a forward proxy for a given 
link speed?  We have a 70 Mbps link, and a client base of 
approximately 4,000 to 4,500.


The rule of thumb I've read previously is storage equivalent to a 
week's traffic.  If you pass an average of 30GB per day, a storage size 
of 210GB is a good start.




Thanks in advance for your suggestions.

Regards
NYAMUL HASSAN


Chris



Re: [squid-users] SSL Question

2008-11-24 Thread Chris Robertson

jhnns wrote:

I have configured Squid only with the http port. I'm not using --enable-ssl.
When I go to an https URL in a browser the URL works. How is this possible?
I expected that the browser should create an https connection to squid.

Regards,

Johannes
  


http://muffin.doit.org/docs/rfc/tunneling_ssl.html
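In short, the mechanism the linked page describes: for an https URL the browser never speaks TLS to the proxy at all. It sends a plaintext CONNECT request, and Squid then relays raw bytes in both directions, so Squid needs no SSL support of its own. Roughly:

```
CONNECT www.example.com:443 HTTP/1.1      <- browser to proxy, plain HTTP
Host: www.example.com:443

HTTP/1.1 200 Connection established       <- proxy has opened the TCP tunnel

[TLS handshake and encrypted traffic now flow through the tunnel untouched]
```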

Chris


Re: [squid-users] Recommended Store Size

2008-11-24 Thread Nyamul Hassan

Thank you Chris for your response.  Greatly appreciate your advice.

However, in our system, the LRU Reference Age is around 3.5 days, and our 
combined cache_dir is 256GB.  Does it mean we should upgrade?


Regards
HASSAN



- Original Message - 
From: Chris Robertson [EMAIL PROTECTED]

To: Squid Users squid-users@squid-cache.org
Sent: Tuesday, November 25, 2008 01:47
Subject: Re: [squid-users] Recommended Store Size



Nyamul Hassan wrote:
Is there any recommended store size for a forward proxy for a given link 
speed?  We have a 70 Mbps link, and a client base of approximately 4,000 
to 4,500.


The rule of thumb I've read previously is storage equivalent to a week's 
traffic.  If you pass an average of 30GB per day, a storage size of 210GB 
is a good start.




Thanks in advance for your suggestions.

Regards
NYAMUL HASSAN


Chris






[squid-users] squid_ldap_group against nested groups/Ous

2008-11-24 Thread Mike Ely
Hi there,

I've tried this a couple of different ways without any success.  I have
squid_ldap_group working fine if the user queried is a member of the listed
group, but if I try to pick up the user by reference (putting a group
containing that user within the effective group), it fails.  Users are
authenticated via squid_ldap_auth without any trouble.

Here's the use case: acl to prevent unauthorized users from accessing the
web, adding users to specific groups gives them access to specific websites
or the entire Internet.  Since we have a few hundred users, adding each one
individually to whichever group they need to be in is not practical.

Right now, it works fine if I put the acl groups in the Users container or
create a top-level OU and put the groups in there, but only if the user is
directly a member of the acl group.  What I need is to be able to use
nested groups.  (Sorry, it looks like I've stated this two different ways.)

Here's the relevant portion of squid.conf.  Our DC is a Win2k server (for now
at least); the proxy box is running squid 2.6 on CentOS, but I have reproduced
the problem with squid 3.0 on Debian:

Group lookup:

external_acl_type InetGroup %LOGIN /usr/lib/squid/squid_ldap_group \
-b dc=example,dc=com \
-D cn=lookupuser,cn=Users,dc=example,dc=com \
-w password \
-f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,ou=WebAccess,dc=example,dc=com))" \
-h ldapserver

And one of the ACLs:

acl AllWebAccess external InetGroup allweb
http_access allow AllWebAccess
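For what it's worth, Active Directory can expand nested group membership server-side with the LDAP_MATCHING_RULE_IN_CHAIN operator (OID 1.2.840.113556.1.4.1941), but only on Windows Server 2003 SP2 or later, so a Win2k DC would first need upgrading. A sketch of the helper line under that assumption:

```
external_acl_type InetGroup %LOGIN /usr/lib/squid/squid_ldap_group \
-b dc=example,dc=com \
-D cn=lookupuser,cn=Users,dc=example,dc=com \
-w password \
-f "(&(objectclass=person)(sAMAccountName=%v)(memberof:1.2.840.113556.1.4.1941:=cn=%a,ou=WebAccess,dc=example,dc=com))" \
-h ldapserver
```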


I'd rather use LDAP for this than winbind for various reasons.

Thanks in advance,
Mike Ely



[squid-users] redirect_children and Squid 3

2008-11-24 Thread Tom Williams
I'm running Squid 3.0STABLE10 on RedHat Enterprise Linux 5 and I noticed 
the following message in my cache.log file;


2008/11/16 15:05:07| client_side.cc(2701) WARNING! Your cache is running 
out of filedescriptors


I was doing a performance test using Apache's ab tool at the time the 
message was generated.


I did a Google search on the message and found this mailing list thread:

http://www.squid-cache.org/mail-archive/squid-users/200402/0560.html

My question is: is the redirect_children option supported in Squid 3?

Are the suggestions for resolving the issue causing the "Your cache is 
running out of filedescriptors" message applicable to Squid 3?  The 
suggestions are:


*)  Setting /ulimit -HSn 8192/ when compiling and starting Squid:
http://www.squid-cache.org/mail-archive/squid-users/200402/0563.html

*)  Turning off half_closed_clients, server_persistent_connections, and 
client_persistent_connections:

http://www.squid-cache.org/mail-archive/squid-users/200402/0570.html

*)  Increase the setting of the redirect_children tag:
http://www.squid-cache.org/mail-archive/squid-users/200402/0581.html

Thanks!

Peace...

Tom


[squid-users] Fwd: website problem via squid

2008-11-24 Thread Joe Pukepail
Hello,
I'm having problems with www.morgankeegan.com, especially the page
http://www.morgankeegan.com/ECM/ECMHome.htm, when it is accessed through
squid.  When we attempt to view this site it does not finish rendering
(we are using IE; tested with 6.0 and 7.0).  I have checked whether there
was any Java trying to connect directly, have set up a test system with a
bare config (normally we use NTLM), and have not been able to find out
what is different about this site.


We are running 2.6STABLE22 in production and I have tested this with
3.0STABLE10 in a test environment.


My testing config:

http_port 3128
acl all src 0.0.0.0/0.0.0.0
http_access allow  all
icp_access  allow  all
cache_effective_user squid
cache_effective_group squid
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
cache_dir ufs /squid/cache 5000 16 256

If anyone could test and give any pointers or suggestions I would appreciate it.


Re: [squid-users] Recommended Store Size

2008-11-24 Thread Chris Robertson

Nyamul Hassan wrote:

Thank you Chris for your response.  Greatly appreciate your advice.

However, in our system, the LRU Reference Age is around 3.5 days, and 
our combined cache_dir is 256GB.  Does it mean we should upgrade?


Regards
HASSAN 


I can't really give a yes or no answer, as there are a number of factors 
at play, including cost, added load on the spindles (drives), memory 
constraints, etc.  That said, as long as your hit response time is lower 
than your current miss response time, adding more cache is not hurting 
anything.


Chris


Re: [squid-users] redirect_children and Squid 3

2008-11-24 Thread Chris Robertson

Tom Williams wrote:
I'm running Squid 3.0STABLE10 on RedHat Enterprise Linux 5 and I 
noticed the following message in my cache.log file;


2008/11/16 15:05:07| client_side.cc(2701) WARNING! Your cache is 
running out of filedescriptors


I was doing a performance test using Apache's ab tool at the time 
the message was generated.


I did a Google search on the message and found this mailing list thread:

http://www.squid-cache.org/mail-archive/squid-users/200402/0560.html

My question is: is the redirect_children option supported in Squid 3?

Are the suggestions for resolving the issue causing the  Your cache 
is running out of filedescriptors message applicable to Squid 3?  The 
suggestions are:


*)  Setting /ulimit -HSn 8192/ when compiling and starting Squid:
http://www.squid-cache.org/mail-archive/squid-users/200402/0563.html


Probably your best bet.  You might just try running...

ulimit -n 8192 && /path/to/squid -Nd9

...to see if you really need to recompile.



*)  Turning off half_closed_clients, server_persistent_connections, 
and client_persistent_connections:

http://www.squid-cache.org/mail-archive/squid-users/200402/0570.html


While these have a good chance of lowering your file descriptor usage, 
they may also affect performance.




*)  Increase the setting of the redirect_children tag:
http://www.squid-cache.org/mail-archive/squid-users/200402/0581.html


This suggestion is for eliminating the...


WARNING: All redirector processes are busy.
WARNING: 5 pending requests queued
Consider increasing the number of redirector processes

...error.  Running more redirectors is likely to utilize more file 
descriptors, not less.


And for what it's worth, for Squid 2.6+ and 3.0+ the recognized options 
are url_rewrite_children and url_rewrite_concurrency.
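A sketch of those directives (the values here are purely illustrative, not recommendations):

```
# Number of URL-rewrite helper processes to start
url_rewrite_children 10
# Requests each helper may handle in parallel (requires a helper
# that speaks the concurrency protocol)
url_rewrite_concurrency 5
```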




Thanks!

Peace...

Tom


Chris



Re: [squid-users] tcp_outgoing_tos 0x02 not working

2008-11-24 Thread Diego Woitasen
On Mon, Nov 24, 2008 at 01:12:23PM -0500, Evelio Vila wrote:
 Hi everyone!
 
 I have a 2.6 stable squid and 
 something like this in my conf file..
 
 tcp_outgoing_tos 0x02 !top_users
 
 however the 0x02 mark doesn't seem to work. I've 
 tried with others and they do work
 
 This post has the following issue 
 http://www.squid-cache.org/mail-archive/squid-users/200508/0466.html
 
 any suggestions..
 
 thanks in advance,
 evelio
 
 
 


Using tcp_outgoing_tos with proxy_auth ACLs only works if the ACL is
used in http_access first. In other words, the ACL must match in
http_access to work in tcp_outgoing_tos.

tcp_outgoing_tos doesn't support external lookup, so the user   
information must be cached.   
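A minimal illustration of that ordering constraint, assuming top_users is a proxy_auth acl (the file path is invented):

```
acl top_users proxy_auth "/etc/squid/top_users.txt"

# Evaluate the proxy_auth acl in http_access first, so the username
# is already cached by the time tcp_outgoing_tos consults it.
http_access allow top_users
http_access deny all

tcp_outgoing_tos 0x02 !top_users
```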

-- 
---
Diego Woitasen - XTECH
www.xtech.com.ar


Re: [squid-users] Fwd: website problem via squid

2008-11-24 Thread Mike Ely
I can confirm this with squid 2.6 running on Centos 5.2, browser was Firefox
3 on a Mac.

The site in question loads, but very slowly.  More alarmingly, it seems to
hose other connections within the same authenticated session.  I can see the
page elements crawling along in access.log, so it is working, just very very
slowly.  Killing the browser window frees things up.


On 11/24/08 3:46 PM, Joe Pukepail [EMAIL PROTECTED] wrote:

 Hello, 
 I'm having problems with  www.morgankeegan.com especially this page
 http://www.morgankeegan.com/ECM/ECMHome.htm when accessed through
 squid.  When we attempt to view this site it does not finish rendering
 (we are using IE, have tested it with 6.0 and 7.0).  I have checked to
 see if it is there was any java trying to connect directly, have setup
 a test system with a bare config (normally we use NTLM) and have not
 been able to find out what is different about this site.
 
 
 We are running 2.6STABLE22 in production and I have tested this with
 3.0STABLE10 in a test environment.
 
 
 My testing config:
 
 http_port 3128 
 acl all src 0.0.0.0/0.0.0.0
 http_access allow  all
 icp_access  allow  all
 cache_effective_user squid
 cache_effective_group squid
 cache_access_log /var/log/squid/access.log
 cache_log /var/log/squid/cache.log
 cache_store_log /var/log/squid/store.log
 cache_dir ufs /squid/cache 5000 16 256
 
 If anyone could test and give any pointers or suggestions I would appreciate
 it. 
 



[squid-users] Re: DG and Squid 1 Machine

2008-11-24 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
Yes, this is solved.
I enabled X-Forwarded-For in both (DG and Squid) and
pointed iptables at DG's port.
Thanks for your help, all.


close

On Tue, Nov 25, 2008 at 5:27 AM, Chuck Kollars [EMAIL PROTECTED] wrote:
 --- In [EMAIL PROTECTED], ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ [EMAIL PROTECTED] wrote:

 hi all
 sorry for my cross posting but this is urgent :(
 i have problem here

 eth0 192.168.222.100 = Go to LAN and act as Client's GW and DNS (
 Installed DG and Squid )
 eth1 10.0.0.2 = Go to LoadBalancing + DMZ server ( IP PUBLIC
 forwarded ( got DMZ to this machine )

 squid.conf :
 http_port 2210 transparent

 dansguardian.conf :
 filterport = 2211
 proxyip = 127.0.0.1
 proxyport = 2210

 rc.local
 /sbin/iptables --table nat --append POSTROUTING --out-interface eth1
 -j MASQUERADE
 /sbin/iptables --append FORWARD --in-interface  eth1 -j ACCEPT
 /sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -s
 192.168.0.0/255.255.0.0 --dport 80 -j DNAT --to 192.168.222.100:2211
 /sbin/iptables -t nat -A PREROUTING -p tcp -i eth1 -d 10.0.0.2 --dport
 2210 -j DNAT --to-destination 192.168.222.100


 output :
 ERROR
 The requested URL could not be retrieved

 
 While trying to retrieve the URL: http://google.com/
 The following error was encountered:
 Access Denied.
 


 what wrong ?

 Who knows? The error message "The requested URL could not be
 retrieved" is a very generic one that could mean almost anything. (In
 fact, roughly translated it probably means nothing more than "doesn't
 work". I seriously doubt this can get resolved in your original
 timeframe.)

 "Access Denied" suggests some sort of permissions problem. What user
 is the Squid process running as? Does that user have write permission
 to the Squid cache files? How about the Squid log files? How about the
 Squid configuration files?

 What distribution are you using? Is Squid actually running? Is
 DansGuardian actually running? Did DansGuardian issue any error
 messages when it was started? Are there any messages in any of the
 logs (Squid, DansGuardian, or system)?

 What does the rest of your Squid configuration look like? (In
 particular, do you have anything like "acl localnet src 10.0.0.0/24" or
 "acl localnet src 192.168.200.0/24" and "http_access allow localnet"?)

 Can you isolate the problem to either interface 192.168.222.100 or
 10.0.0.2 and discard the other one as irrelevant to this problem? (It
 looks to me like 10.0.0.2 isn't involved in this problem, but I'm not
 sure I've understood your description correctly.

 Perhaps the tips at
 http://contentfilter.futuragts.com/wiki/doku.php?id=general_troubleshooting_strategies
 will be helpful.

 thanks! -Chuck Kollars







[squid-users] URGENT : How to limit some ext

2008-11-24 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
How can I limit .zip and .swf files from squid.conf, in two variants?

1. Global rule (all users get the limit on zip and swf)
2. Individual rule (only certain listed people)

Thanks in advance.

This is urgent, ASAP :(

It's a matter of life and death :(
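No answer appears in this digest, but the usual building block is a urlpath_regex acl. A sketch, assuming "limit" means block (the file path and acl names are invented; for bandwidth-limiting rather than blocking, delay_pools would be the tool instead):

```
# Match URLs whose path ends in .zip or .swf
acl archives urlpath_regex -i \.(zip|swf)$

# 1. Global rule: block the extensions for everyone
#http_access deny archives

# 2. Individual rule: block them only for listed source addresses
acl limited_users src "/etc/squid/limited_ips.txt"
http_access deny archives limited_users
```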



Re: [squid-users] Recommended Store Size

2008-11-24 Thread Nyamul Hassan
Thx Chris.  The cost of hardware is not a big factor here, as it is
directly related to the amount of BW that we save, and also to the customer
experience of getting pages faster from the cache.


After looking at many of the threads here, I've found that some people are
using cache stores measured in terabytes.  I was wondering if a bigger store
would improve the byte hit ratio, which seems to indicate how much BW was
saved.


If I wanted to increase my store size by adding a JBOD of 12 x 160 GB SATA
disks over eSATA, putting a 130GB cache_dir on each disk for a total 2 TB
cache store, would that improve the hit ratio?


I understand that patterns of user behavior greatly change the hit ratio, 
as we ourselves see it drop during off-peak hours (late into the night), as 
users who are online probably visit more and more diverse web content.  I 
just wanted to check how all the guys out here who are using Squid as a 
forward proxy are doing in terms of saving BW, and for regular broadband 
internet users, how much BW they were saving with how big of a cache store.


Thanks once again for your response, and hope you and the guys running squid 
as I am would share some of their experiences.


Regards
HASSAN



- Original Message - 
From: Chris Robertson [EMAIL PROTECTED]

To: Squid Users squid-users@squid-cache.org
Sent: Tuesday, November 25, 2008 05:46
Subject: Re: [squid-users] Recommended Store Size



Nyamul Hassan wrote:

Thank you Chris for your response.  Greatly appreciate your advice.

However, in our system, the LRU Reference Age is around 3.5 days, and our 
combined cache_dir is 256GB.  Does it mean we should upgrade?


Regards
HASSAN


I can't really give a yes or no answer, as there are a number of factors 
at play, including cost, added load on the spindles (drives), memory 
constraints, etc.  That said, as long as your hit response time is lower 
than your current miss response time, adding more cache is not hurting 
anything.


Chris




