Re: [squid-users] Help please

2007-09-21 Thread Muhammad Tayseer Alquoatli
On 9/19/07, Abd-Ur-Razzaq Al-Haddad <[EMAIL PROTECTED]> wrote:
> Hi all I'd like to set up squid so that it can block sites and streaming
> content
>
> What changes must I make to the squid.conf file?
>

Check whether the "acl" and "http_access" directives would fulfil your requirements.
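A minimal sketch of that idea (untested; the domain list file and the MIME patterns are placeholders):

```
# block listed sites outright
acl blocked_sites dstdomain "/etc/squid/blocked-domains.txt"
http_access deny blocked_sites

# block streaming content by reply MIME type
acl streaming rep_mime_type -i ^video/ ^audio/
http_reply_access deny streaming
```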
Regards,

> thanks
>
>
> Abd-Ur-Razzaq Al-Haddad
> IT Analyst
>
>
> 9 Queen Street London W1J 5PE
>
> Tel: +44 (0)207 659 6620Fax: +44 (0)207 659 6621
> Direct: +44 (0)207 659 6632 Mob: +44 (0)7738 787881
> [EMAIL PROTECTED]
>
>
>
>
>
>
> The information contained in this email or any of its attachments may be 
> privileged or confidential and is intended for the exclusive use of the 
> addressee. Any unauthorised use may be unlawful. If you received this email 
> by mistake, please advise the sender immediately by using the reply facility 
> in your email software and delete the email from your system.
>
> Carron Energy Limited.  Registered Office 9 Queen Street, London W1J 5PE. 
> Incorporated in England and Wales with company number 5150453
>
> __
> This email has been scanned by the MessageLabs Email Security System.
> For more information please visit http://www.messagelabs.com/email
> __
>



-- 
Muhammad Tayseer Alquoatli


Re: [squid-users] Help please

2007-09-21 Thread nima sadeghian
It depends on many factors, such as your network configuration, your
organization's policy, the version of Squid, etc.
Please explain further.

On 9/19/07, Abd-Ur-Razzaq Al-Haddad <[EMAIL PROTECTED]> wrote:
> Hi all I'd like to set up squid so that it can block sites and streaming
> content
>
> What changes must I make to the squid.conf file?
>
> thanks


-- 
Best Regards
Nima Sadeghian
Person in charge of ICT department,
Iranian Fisheries Org.
No. 250, Dr. Fatemi Ave.,
Tehran, IRAN
Tel: 0098-21-66941360
FAX: 0098-21-66943885
Mobile: 0098-912-5603698
www.fisheries.ir


Re: [squid-users] Banner page for certain users in squid

2007-09-21 Thread Amos Jeffries

Henrik Nordstrom wrote:
> On lör, 2007-09-22 at 00:02 +1200, Amos Jeffries wrote:
>> Henrik Nordstrom wrote:
>>> On tor, 2007-09-20 at 10:45 +0800, Adrian Chadd wrote:
>>>>> I run SARG against my access.log every day to get a list of top 30
>>>>> users, and would like to know if there is a way of redirecting these top
>>>>> 30 users to a notice page upon first login in squid, where they are
>>>>> notified of their high usage? After which they can continue surfing of
>>>>> course.
>>>> I'm sure people have done it in the past. I've not done it. Henrik?
>>> An acl containing these users combined with the session helper would do
>>> the trick fine.
>> The idea behind most of these is that it is a dynamic process rather than
>> a fixed one, and "squid -k reconfigure" is too chunky a process to want
>> running every, say, minute, to be fast enough.
>
> Why would you be running "squid -k reconfigure" every minute for this?
> It is only needed when the list of users to alert changes..
>
> And yes, even that can easily be eliminated by using a simple helper..


Oh, I didn't read the initial post too well. I missed the "every day" and
jumped to the conclusion it was a live system like mine :-(.
Here I check traffic counters by the minute, and for a few other things on
the order of "immediately", and the acls HAVE to be dynamic enough to cope
with changes at any time.

Praise be to whoever created the external acl types.

Amos


Re: [squid-users] acl definitions and delay_pools

2007-09-21 Thread
Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> Please post in plain-text. HTML is a bit hard to read...

A little hard to read?!  I'd say even I couldn't read it, and I sent it!  :-)
So sorry, folks.  I use the Pegasus email client, and it is supposed to be set
to send plain text only.  No idea why it came out as all that HTML.  Forgive
me if it happens again.  I sent the following:

~~

At the risk of the list beating me with a stick, I cannot otherwise find an 
answer to what I am doing and whether my config will work.

I have an aggregated 20 Mbit link (2 x 10 Mbit) feeding a SmoothWall
firewall.  It is working nicely, but I need to kill off some leeching, and
the best option is delay_pools.

I've tried a number of different combinations and think I've hit on the proper 
configuration, but would like confirmation or a kick in the butt and an answer 
to what I'm doing wrong.

I have one subnet (same netmask) which I've split into three IP pools with
DHCPd static assignments.  The set x.x.3.1 through x.x.3.79 is the "fast"
pool, x.x.3.80 through x.x.3.120 is the "medium" pool, and x.x.3.200
through x.x.3.250 is for leechers and hackers (dynamically assigned).

The config which I hope will work follows.  It seems no one is using the
bandwidth right now (Friday, I guess), and the late results I have, from
someone in the "fast" pool, are positive.  So, do the following acl and
delay pool definitions look OK?  (Thanks in advance; Kevin):


acl fast src 192.168.3.1-192.168.3.79/255.255.255.0
acl medium src 192.168.3.80-192.168.3.120/255.255.255.0
acl slow src 192.168.3.200-192.168.3.250/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl all src 0.0.0.0/0.0.0.0

acl SSL_ports port 445 443 441 563
acl Safe_ports port 80  # http
acl Safe_ports port 81  # smoothwall http
acl Safe_ports port 21  # ftp 
acl Safe_ports port 445 443 441 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais  
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http 
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all


# delay_pools config



# define 3 class 2 pools
delay_pools 3

# fast follows the rules of pool 1
delay_class 1 2
delay_access 1 allow fast
delay_access 1 deny all
delay_parameters 1 -1/-1 25/6000

# medium follows the rules of pool 2
delay_class 2 2
delay_access 2 allow medium
delay_access 2 deny all
delay_parameters 2 -1/-1 125000/3000

# slow follows the rules of pool 3
delay_class 3 2
delay_access 3 allow slow
delay_access 3 deny all
delay_parameters 3 -1/-1 8000/8000

# everyone's bucket starts out full
delay_initial_bucket_level 100
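
For anyone sanity-checking the numbers: in a class 2 pool, delay_parameters takes an aggregate pair and a per-host pair, each written restore-rate/bucket-size in bytes. A purely illustrative reading (the values below are an example, not a recommendation):

```
# delay_parameters <pool> <aggregate restore/max> <per-host restore/max>
# restore is bytes/second, max is the bucket size in bytes; -1 means unlimited.
# e.g. roughly 64 kB/s per host with a 256 kB burst would be:
#   delay_parameters 2 -1/-1 65536/262144
```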

v^v^v^v^v^v^v^v^v^v^v^v^v^v^v^v

Beausejour news - http://beausejour.yia.ca/
~~~
Uvea tech news and forums - http://tech.uveais.ca/
~~~
Beausejour LUG - http://bjlug.yia.ca/



Re: [squid-users] Re: Non-permanent Internet Connection Question

2007-09-21 Thread Amos Jeffries

RW wrote:

On Fri, 21 Sep 2007 07:36:05 -0600
Blake Grover <[EMAIL PROTECTED]> wrote:


We are working on a new project where we will distribute Linux
machines in different areas that will be connected to the Internet.
But these machines might not always have an Internet connection.  We
would like these machines to show certain web pages from a web server
on a loop. For example, I have 7 pages that jump from one to another
after 7 - 10 seconds.  But if the Internet connection goes down we
want squid to keep showing the loop of HTML pages until the
connection gets restored, and then squid could update the pages in the
cache.



You could write a script to switch squid into offline mode when the
connection goes down, but there will always be race condition problems
with this.

Have you considered running local webservers instead?



What I'd do is check to see if the following works (note: I have not
tested any of this):

 - use a deny_info override for that particular ERROR_PAGE
 - have the new error page refresh to the next slide-show page in sequence.

If that works, any pages broken during the downtime will simply be 
skipped in favour of pages that do work.


You will most likely need a small http daemon/script to provide the new 
deny_info page and keep track of what was meant to be next.
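
A minimal sketch of the deny_info part (untested, as noted above; the domain and the local daemon URL are placeholders):

```
# send denied slide-show requests to a local "next slide" page
acl slides dstdomain slides.example.local
deny_info http://127.0.0.1:8000/next-slide.html slides
```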


Amos


Re: [squid-users] maximum size of cache_mem

2007-09-21 Thread Henrik Nordstrom
On fre, 2007-09-21 at 05:16 -0700, zulkarnain wrote:
> -
> I've 24GB of memory with this configuration: 
> - cache_mem = 4GB
> - system = 20GB
> 
> average of web traffic a day is around 25GB, cache_mem
> still not moving from 1.8GB.

On 64-bit platforms cache_mem should be unlimited, limited only by the
amount of memory you have. But I do not think configurations with more
than 2GB or so have been tested..

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] ICAP - not sending Respmod

2007-09-21 Thread Henrik Nordstrom
On fre, 2007-09-21 at 12:55 -0300, Thiago Cruz wrote:
> Instead of using multiple services, could I use ICAP with cache_peer?

should work fine.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] squid 2.6 and squidguard

2007-09-21 Thread Henrik Nordstrom
On fre, 2007-09-21 at 17:08 -0400, Benjamin Gonzalez wrote:
> I have just finished installing squid and squidGuard on an openSUSE
> 10.2 platform. I have squid running fine and I have it (I think)
> redirecting to squidGuard. SquidGuard is not blocking anything, even if I
> set a rule to block everything.
> 
> Since there is no redirect_program option anymore, I used:
> url_rewrite_program /usr/sbin/squidGuard -c /etc/squidguard.conf
> 
> Can anyone help me? Am I missing something?

Probably a permission error. SquidGuard enters passive passthru mode if
it encounters any error on startup.

See cache.log and/or SquidGuard log files.
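
One way to check that outside Squid (an untested sketch; the URL and client IP are placeholders) is to feed squidGuard a request line in the redirector format it reads from stdin:

```
echo "http://example.com/ 10.0.0.1/- - GET" | /usr/sbin/squidGuard -c /etc/squidguard.conf
```

A blocked URL should print the rewrite/deny URL; an empty line means squidGuard is passing the request through unchanged.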

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] acl definitions and delay_pools

2007-09-21 Thread
At the risk of the list beating me with a stick, I cannot otherwise find an
answer to what I am doing and whether my config will work.

I have an aggregated 20 Mbit link (2 x 10 Mbit) feeding a SmoothWall
firewall.  It is working nicely, but I need to kill off some leeching, and
the best option is delay_pools.

I've tried a number of different combinations and think I've hit on the proper
configuration, but would like confirmation or a kick in the butt and an answer
to what I'm doing wrong.

I have one subnet (same netmask) which I've split into three IP pools with
DHCPd static assignments.  The set x.x.3.1 through x.x.3.79 is the "fast"
pool, x.x.3.80 through x.x.3.120 is the "medium" pool, and x.x.3.200
through x.x.3.250 is for leechers and hackers (dynamically assigned).

The config which I hope will work follows.  It seems no one is using the
bandwidth right now (Friday, I guess), and the late results I have, from
someone in the "fast" pool, are positive.  So, do the following acl and
delay pool definitions look OK?  (Thanks in advance; Kevin):


acl fast src 192.168.3.1-192.168.3.79/255.255.255.0
acl medium src 192.168.3.80-192.168.3.120/255.255.255.0
acl slow src 192.168.3.200-192.168.3.250/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl all src 0.0.0.0/0.0.0.0

acl SSL_ports port 445 443 441 563
acl Safe_ports port 80  # http
acl Safe_ports port 81  # smoothwall http
acl Safe_ports port 21  # ftp
acl Safe_ports port 445 443 441 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all


# delay_pools config

# define 3 class 2 pools
delay_pools 3

# fast follows the rules of pool 1
delay_class 1 2
delay_access 1 allow fast
delay_access 1 deny all
delay_parameters 1 -1/-1 25/6000

# medium follows the rules of pool 2
delay_class 2 2
delay_access 2 allow medium
delay_access 2 deny all
delay_parameters 2 -1/-1 125000/3000

# slow follows the rules of pool 3
delay_class 3 2
delay_access 3 allow slow
delay_access 3 deny all
delay_parameters 3 -1/-1 8000/8000

# everyone's bucket starts out full
delay_initial_bucket_level 100

Re: [squid-users] Simple authentication on a home-based (ie no domain controller) WinXP box

2007-09-21 Thread Guido Serassio

Hi,

At 09.46 19/09/2007, Henrik Nordstrom wrote:

> On tis, 2007-09-18 at 22:34 -0700, Jeffery Chow wrote:
> > Ideally I would store a username/password pair in a text file
> > somewhere on my system (plaintext or not, doesn't matter), but the
> > authentication helpers that I see in my distro (mswin_auth,
> > mswin_negotiate_auth, mswin_ntlm_auth) don't come with enough
> > documentation to tell me which one is the right one to try.
>
> Neither; from your description you want ncsa_auth. It should be included
> as well, I hope; if not, let's ask Guido to include it.

ncsa_auth is included in the Windows binary kit.
If needed, the NCSA support tools (htpasswd and chpasswd.cgi) for Windows
are available here:

http://squid.acmeconsulting.it/download/NCSAsupport.zip

> The mswin_* helpers are for authenticating against the Windows user
> services, which may be the local accounts on your XP box if you like.
> The three mswin_* helpers are one per authentication scheme (see the
> auth_param directive).

Local account authentication can be done using mswin_auth (basic) and
mswin_ntlm_auth (NTLM). For Negotiate, a Kerberos KDC is needed, so it
cannot be used without an AD Windows domain.


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



[squid-users] squid 2.6 and squidguard

2007-09-21 Thread Benjamin Gonzalez
I have just finished installing squid and squidGuard on an openSUSE
10.2 platform. I have squid running fine and I have it (I think)
redirecting to squidGuard. SquidGuard is not blocking anything, even if I
set a rule to block everything.

Since there is no redirect_program option anymore, I used:
url_rewrite_program /usr/sbin/squidGuard -c /etc/squidguard.conf

Can anyone help me? Am I missing something?


Re: [squid-users] Repeated LDAP auth login prompt problem.

2007-09-21 Thread Chris Rosset
Just checking back.
Does anyone know how to solve this repeated authentication problem?
I meant to say that I think it's a problem with squid_ldap_group, not
squid_ldap_auth.

Trying it on the command line returns successful info:

/usr/local/squid2.6.16/libexec/squid_ldap_group -d -b
"cn=Organization,cn=Businesswire Employees,o=businesswire.com" -f
cn=nointernet uniquemember=uid=dmerrill* -h sfapp2.businesswire.com
dmerrill 

returns results such as:
connected OK
group filter 'cn=nointernet', searchbase
'cn=Organization,cn=Businesswire Employees,o=businesswire.com'
OK


>>> "Chris Rosset" <[EMAIL PROTECTED]> 9/19/2007 3:41:54 PM
>>>
I am having a problem with LDAP auth and Squid: certain restricted users
are being prompted multiple times for authentication, even though it
should be cached.

This behaviour happens with a site such as
http://www.euroinvestor.co.uk/ and many others as well. It prompts them
for the initial webpage and then for various ads on the page as well.

This also did not happen in squid 2.5.6 but happens constantly in
squid 2.6.14 or 2.6.16. I am guessing it's some LDAP config setting,
but who knows.

Config info etc:
Squid Cache: Version 2.6.STABLE16
configure options:  '--prefix=/usr/local/squid2.6.16'
'--enable-basic-auth-helpers=LDAP'
'--enable-external-acl-helpers=ldap_group' '--enable-storeio=aufs,ufs'

entries in squid.conf:
auth_param basic program
/usr/local/squid2.6.16/libexec/squid_ldap_auth
-d -P -b o=businesswire.com -h servername.businesswire.com -f (uid=%s)

auth_param basic children 15
auth_param basic realm Business Wire Internet logon - Use InsideTrak
username and password to log on
auth_param basic credentialsttl 1 minute
auth_param basic casesensitive off

external_acl_type sfapp2ldapgroup %LOGIN
/usr/local/squid2.6.16/libexec/squid_ldap_group -d -b
"cn=Organization,cn=Businesswire Employees,o=businesswire.com" -f
(&(cn=%a)(uniquemember=uid=%v*)) -h sfapp2.businesswire.com

Or maybe it's an ACL thing, but I can't figure out why it worked in
squid 2.5.6 but not in 2.6.16 with the same squid.conf config
parameters.
Debugging is on in the logs and login activity is shown.

Thanks


Re: [squid-users] More ACL issues.

2007-09-21 Thread Chris Robertson

Tom Vivian wrote:

Hi,

SquidNT 2.5
ntlm auth
Windows Server 2003

Everything is nearly working. The authentication against AD is fine, I can
see the domain name\username in the logs etc. However when I add the acl for
my tomtom software it allows the tomtom software to connect to their site,
but I stop seeing the domain name\username in the access logs.

acl tomtom src 192.168.2.100 
http_access allow tomtom 
  


Instead of the above, use...

acl tomtom dstdomain .tomtom.com
acl tomsIP src 192.168.2.100
http_access allow tomsIP tomtom

...so 192.168.2.100 is required to authenticate to other sites.

acl localnet proxy_auth REQUIRED src 192.168.2.0/24 
  


Does this even parse?  :o)


http_access allow localnet
  


Here's what I would use...

acl localnet src 192.168.2.0/24
acl proxyauth proxy_auth REQUIRED
http_access allow localnet proxyauth

...assuming you don't want people outside of localnet to use the proxy, 
even with proper authentication.



There must be a way so that I can login to the tomtom site and still
authenticate in AD?

Thanks,

Tom.
  


Chris


Re: [squid-users] using squid to mirror files?

2007-09-21 Thread Chris Robertson

Henrik Nordstrom wrote:

On ons, 2007-09-19 at 15:55 +0200, Greg Kellum wrote:

  

from the main server.  Um...  I'm writing to ask whether anyone thinks
I will encounter any unexpected problems if I try to do this.  As far
as I can tell, other people have been using Squid as an accelerator to
take the load of dynamically generated websites, but no one seems to
be using it for file mirroring.  Is there a deeper reason for this?



Works quite fine for that as well.

What quite a few heavily loaded sites do is use Squid to offload all
static content (images etc.), leaving the web server to only render the
pages into HTML. And if the content is only semi-dynamic, the rendered
pages can be allowed to stay in the cache for some time to further
offload the web server.

If these audio files are a little larger, you may need to allow larger
files to be cached in squid.conf. See the maximum_object_size option.
The default is 4 MB; there is no upper limit.
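
For example (the 512 MB value here is just an illustration):

```
# squid.conf: raise the largest cacheable object from the 4 MB default
maximum_object_size 512 MB
```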

Regards
Henrik
  


You might also look into using the Coral Content Distribution Network 
(http://www.coralcdn.org/).  It's free, and already widely distributed.


Chris


[squid-users] Ignoring redirects if the redirector wants squid to do so?

2007-09-21 Thread Richard Hartmann
Hi all,


I am using a redirector script to rewrite requests according to several
rules. Let's suppose the user wants to connect to user.com, while my
redirector rewrites the request to point to redirect.com. user.com
itself has a 301 pointing to 301.com.
In this case, 301.com will be loaded instead of redirect.com, which is
definitely not the expected behaviour. Is there any way to force _my_
choice over the initial target's own?



Thanks for any help :)
Richard


[squid-users] Re: Non-permanent Internet Connection Question

2007-09-21 Thread RW
On Fri, 21 Sep 2007 07:36:05 -0600
Blake Grover <[EMAIL PROTECTED]> wrote:

> We are working on a new project where we will distribute Linux
> machines in different areas that will be connected to the Internet.
> But these machines might not always have an Internet connection.  We
> would like these machines to show certain web pages from a web server
> on a loop. For example, I have 7 pages that jump from one to another
> after 7 - 10 seconds.  But if the Internet connection goes down we
> want squid to keep showing the loop of HTML pages until the
> connection gets restored, and then squid could update the pages in the
> cache.


You could write a script to switch squid into offline mode when the
connection goes down, but there will always be race condition problems
with this.

Have you considered running local webservers instead?
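
The offline switch mentioned above is a squid.conf directive; a connectivity watchdog could toggle it and run "squid -k reconfigure" (a sketch only, with the race-condition caveats already noted):

```
# serve whatever is cached without validating or fetching from origin servers
offline_mode on
```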



[squid-users] squid_ldap_auth : Can't contact LDAP Server

2007-09-21 Thread Darren Durbin
Hello,

I'm trying to get Squid 2.6.STABLE13 from FC6 to authenticate against a
Windows 2003 Active Directory Domain but I'm getting the following error

squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP
server'

I'm using the following in the squid.conf (edited to remove site info) :

auth_param basic program /usr/lib/squid/squid_ldap_auth -f
"SamAccountName=%s" -d -b "dc=company,dc=co,dc=uk" -D
"cn=,cn=Users,dc=company,dc=co,dc=uk" -P -w "" -h
"dc-1.company.co.uk" -p 3268

If I enter this from the command line, enter a suitable
username/password then I get:

user filter 'SamAccountName=', searchbase
'dc=company,dc=co,dc=uk'
attempting to authenticate user
'CN=,CN=Users,DC=company,DC=co,DC=uk'
OK

Which seems great, but I can't get it to work in squid!

Any ideas greatly appreciated!

Thanks,
Darren
 
iCode Systems Ltd is registered in England. Company No. 3428325. VAT Reg. No. 
699 4246 74


Re: [squid-users] Cache_dir

2007-09-21 Thread Andreas Pettersson

Netmail wrote:

Hi,
I have set 10 GB for the cache dir, but I have a question:
When the cache dir reaches the 10 GB, does it clean automatically or not?
  


Automatic clean, yes.
The least recently used entries get deleted.

--
Andreas




[squid-users] Cache_dir

2007-09-21 Thread Netmail
Hi,
I have set 10 GB for the cache dir, but I have a question:
When the cache dir reaches the 10 GB, does it clean automatically or not?



Re: [squid-users] ICAP - not sending Respmod

2007-09-21 Thread Thiago Cruz
Instead of using multiple services, could I use ICAP with cache_peer?
Something like this:

...
acl USERS external NTGroup @USERS
acl sites_1 url_regex "/etc/squid/sites"

http_access allow sites_1
http_access allow all USERS
http_access deny all
icp_access deny all

always_direct allow sites_1
never_direct allow all

icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/wwreqmod
icap_service service_2 respmod_precache 0 icap://127.0.0.1:1344/wwrespmod
icap_class filtro_url service_1 service_2

icap_access filtro_url deny sites_1
icap_access filtro_url allow all

cache_peer 172.1.1.16 parent 8088 7 no-query no-delay no-digest default

When I use this configuration, Respmod doesn't work. I can only see
Reqmod in the track file.

Regards,
Thiago Cruz


On 9/21/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On tor, 2007-09-20 at 18:39 -0300, Thiago Cruz wrote:
>
> > Have you tried a configuration like this? It seems that service_3 will
> > never be activated.
> >
> > icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/wwreqmod
> > icap_service service_2 respmod_precache 0 icap://127.0.0.1:1344/wwrespmod
> > icap_service service_3 respmod_precache 0 icap://172.1.1.16:1344/respmod
> >
> > icap_class filtro_url service_1 service_2 service_3
>
> Currently, chaining of multiple services at the same service point is not
> supported, which means you can have at most two ICAP services per
> request: one at reqmod_precache and one at respmod_precache.
>
> Regards
> Henrik
>


[squid-users] Non-permanent Internet Connection Question

2007-09-21 Thread Blake Grover
We are working on a new project where we will distribute Linux machines
in different areas that will be connected to the Internet.  But these
machines might not always have an Internet connection.  We would like
these machines to show certain web pages from a web server on a loop.
For example, I have 7 pages that jump from one to another after 7 - 10
seconds.  But if the Internet connection goes down, we want squid to keep
showing the loop of HTML pages until the connection gets restored, and
then squid could update the pages in the cache.  I have tried to go
through the documentation and see if I could get it all configured by
myself, but I am still having some issues.

The problem I am having is that one of the pages, which has Flash, is
always trying to get the latest version.  It does this as well for a
couple of pages that have a graphic on them.  If I unplug the Internet
connection to this machine and let it run through the loop, it will
always stop on the pages it wants to fetch content for and says in the
browser "(101) Error: Network is Unreachable".  I had thought that using
negative_ttl would stop that, but I am not sure what to do.  I have the
following setup in my squid.conf file; I know I will have some things
wrong, and if I could find out why it isn't caching the page, or why it
isn't using the cached page, I would appreciate it.


http_port 80 
cache_mem 64 MB
maximum_object_size 8182 KB
cache_dir ufs /cache 100 16 256
access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
refresh_pattern .  14400 80% 43200 ignore-no-cache
negative_ttl 720 minutes   # 12 hours
negative_dns_ttl 30 minute
connect_timeout 45 seconds 



Blake Grover
IT Manager
EZ-NetTools

www.eznettools.com
800-627-4780 X2003

EZ-NetTools - We make it easy!




Re: [squid-users] Problem with squid 2.6 on a single computer as a transparent proxy

2007-09-21 Thread Henrik Nordstrom
On fre, 2007-09-21 at 05:37 -0700, hk- wrote:
> I have configured squid to run on a single computer as a transparent proxy.
> I used this mail from the archive as an install guide:
> http://www.mail-archive.com/squid-users@squid-cache.org/msg48149.html
> 
> Adding this as a iptables rule
> iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner
> root -j ACCEPT

This needs to be your cache_effective_user, not root...  (default nobody
if you are using a standard Squid source build)

> iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -j REDIRECT --to-port
> 3128
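
Combining the two rules with that correction (untested, and assuming the
default cache_effective_user of "nobody"):

```
iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner nobody -j ACCEPT
iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```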


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Banner page for certain users in squid

2007-09-21 Thread Henrik Nordstrom
On lör, 2007-09-22 at 00:02 +1200, Amos Jeffries wrote:
> Henrik Nordstrom wrote:
> > On tor, 2007-09-20 at 10:45 +0800, Adrian Chadd wrote:
> >  
> >>> I run SARG against my access.log every day to get a list of top 30
> >>> users, and would like to know if there is a way of redirecting these top
> >>> 30 users to a notice page upon first login in squid, where they are
> >>> notified of their high usage? After which they can continue surfing of
> >>> course.
> >> I'm sure people have done it in the past. I've not done it. Henrik?
> > 
> > An acl containing these users combined with the session helper would do
> > the trick fine.
> 
> The idea behind most of these is that it is a dynamic process rather than
> a fixed one, and "squid -k reconfigure" is too chunky a process to want
> running every, say, minute, to be fast enough.

Why would you be running "squid -k reconfigure" every minute for this?
It is only needed when the list of users to alert changes..

And yes, even that can easily be eliminated by using a simple helper..
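
A sketch of that approach with the session helper (untested; the paths, the user-list file and the notice URL are placeholders, and the helper name varies by build):

```
# mark users who already have an active session; session lifetime 1 day
external_acl_type session ttl=60 negative_ttl=0 %LOGIN /usr/local/squid/libexec/squid_session -t 86400
acl top30 proxy_auth "/etc/squid/top30-users.txt"
acl existing_session external session

# the first request from a listed user is denied to the notice page,
# which establishes the session; after that they surf normally
deny_info http://intranet.example.com/usage-notice.html existing_session
http_access deny top30 !existing_session
```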

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] HTTPS Reverse Proxy

2007-09-21 Thread Henrik Nordstrom
On fre, 2007-09-21 at 12:31 +0100, Gordon McKee wrote:

> here are the squid.conf lines:
> https_port 82.36.186.17:443
> cert=/usr/local/etc/squid/sslcert/opl20070919.pem
> cafile=/usr/local/etc/squid/sslcert/opl-all.pem name=opls
> defaultsite=www.optimalprofit.com
>
> cache_peer 192.168.0.11 parent 443 0 no-query originserver login=PASS
> name=opls ssl sslcert=/usr/local/etc/squid/sslcert/opl20070919.pem
> cache_peer_domain opls www.optimalprofit.com



> 2007/09/21 12:24:41| fwdNegotiateSSL: Error negotiating SSL connection on FD 
> 19: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate 
> verify failed (1/-1/0)
> 2007/09/21 12:24:41| TCP connection to 192.168.0.11/443 failed
> 

You need to move cafile from https_port to cache_peer. It's the peers
certificate which is rejected.

It's not needed in https_port.
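
In squid.conf terms, the fix might look like this (an untested sketch: the cafile moves to the peer as its sslcafile= option):

```
cache_peer 192.168.0.11 parent 443 0 no-query originserver login=PASS name=opls ssl sslcert=/usr/local/etc/squid/sslcert/opl20070919.pem sslcafile=/usr/local/etc/squid/sslcert/opl-all.pem
cache_peer_domain opls www.optimalprofit.com
```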

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] after squid shut down redirector still working

2007-09-21 Thread Arthur Tumanyan



Arthur Tumanyan wrote:
>
> Gonzalo Arana-2 wrote:
>>
>> On 9/21/07, Arthur Tumanyan <[EMAIL PROTECTED]> wrote:
>>
>> You should break the main loop when fgets() returns NULL.
>>
>> Regards,
>>
>> --
>> Gonzalo A. Arana
>>
> OK, I did it, thanks! Now the redirectors shut down with squid. But I
> noticed that when I try to open some page in a browser, I get a timeout
> error, and the following lines in cache.log.
>
> Otherwise, the timeout error still occurs, but without the "2007/09/20
> 16:38:33| Open FD READ/WRITE6 shaga_redir #1" line, and "Page faults
> with physical i/o: 6" becomes "Page faults with physical i/o: 0".
>
> 
> 2007/09/20 16:30:31| FD 23 Closing ICP connection
> 2007/09/20 16:30:31| WARNING: Closing client 127.0.0.1 connection due to
> lifetime timeout
> 2007/09/20 16:30:31|http://forum.sysadmins.ru/viewtopic.php?p=7817630
> 2007/09/20 16:30:31| Closing unlinkd pipe on FD 20
> 2007/09/20 16:30:31| storeDirWriteCleanLogs: Starting...
> 2007/09/20 16:30:31|   Finished.  Wrote 0 entries.
> 2007/09/20 16:30:31|   Took 0.0 seconds (   0.0 entries/sec).
> CPU Usage: 0.055 seconds = 0.041 user + 0.014 sys
> Maximum Resident Size: 4000 KB
> Page faults with physical i/o: 6
> 2007/09/20 16:30:31| Open FD READ/WRITE6 shaga_redir #1
> 2007/09/20 16:30:31| Squid Cache (Version 2.6.STABLE14): Exiting normally.
> 
> What can this mean?
>
I found the reason for the error! There must be a "\n" at the end of the
returned line.
Now everything is working normally! Thanks for the help! :)
-- 
View this message in context: 
http://www.nabble.com/after-squid-shut-down-redirector-still-working-tf4488899.html#a12819969
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Problem with squid 2.6 on a single computer as a transparent proxy

2007-09-21 Thread hk-

I fixed the problem by running squid as a non-root user. :)


hk- wrote:
> 
> I have configured squid to run on a single computer as a transparent
> proxy.
> I used this mail from the archive as an install guide:
> http://www.mail-archive.com/squid-users@squid-cache.org/msg48149.html
> 
> Adding this as a iptables rule
> iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner
> root -j ACCEPT
> iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -j REDIRECT --to-port
> 3128
> 
> And using this as my config
> http_port 3128 transparent
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> access_log /usr/local/squid/var/logs/access.log squid
> hosts_file /etc/hosts
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443  # https
> acl SSL_ports port 563  # snews
> acl SSL_ports port 873  # rsync
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl Safe_ports port 631 # cups
> acl Safe_ports port 873 # rsync
> acl Safe_ports port 901 # SWAT
> acl purge method PURGE
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> http_access allow purge localhost
> http_access deny purge
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> acl ME src 10.0.0.5
> http_access allow ME
> http_access deny all
> http_reply_access allow all
> icp_access allow all
> coredump_dir /usr/local/squid/var/cache
> 
> Squid Cache: Version 2.6.STABLE14
> configure options: '--prefix=/usr/local/squid' '--enable-linux-netfilter'
> 
> 
> But when I use lynx I get this output in the browser:
> 
>  This request could not be forwarded to the origin server or to any
>  parent caches. The most likely cause for this error is that:
>* The cache administrator does not allow this cache to make direct
>  connections to origin servers, and
>* All configured parent caches are currently unreachable.
> 
> and this in my cache.log
> 
> 
> 2007/09/21 14:00:24| WARNING: Forwarding loop detected for:
> Client: 10.0.0.5 http_port: 127.0.0.1:3128
> GET http://www.nytimes.com/ HTTP/1.0
> Host: www.nytimes.com
> Accept: text/html, text/plain, text/css, text/sgml, */*;q=0.01
> Accept-Encoding: gzip, bzip2
> Accept-Language: en
> User-Agent: Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8d
> Via: 1.0 linux.niap.no:3128 (squid/2.6.STABLE14), 1.0 linux.niap.no:3128
> (squid/2.6.STABLE14)
> X-Forwarded-For: 10.0.0.5, 10.0.0.5
> Cache-Control: max-age=259200
> Connection: keep-alive
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Problem-with-squid-2.6-on-a-single-computer-as-a-transparent-proxy-tf4495099.html#a12819841
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Problem with squid 2.6 on a single computer as a transparent proxy

2007-09-21 Thread hk-

I have configured squid to run on a single computer as a transparent proxy.
I used this mail from the archive as an install guide:
http://www.mail-archive.com/squid-users@squid-cache.org/msg48149.html

Adding this as an iptables rule:
iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --uid-owner
root -j ACCEPT
iptables -t nat -A OUTPUT -o eth0 -p tcp --dport 80 -j REDIRECT --to-port
3128

And using this as my config
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log /usr/local/squid/var/logs/access.log squid
hosts_file /etc/hosts
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
acl ME src 10.0.0.5
http_access allow ME
http_access deny all
http_reply_access allow all
icp_access allow all
coredump_dir /usr/local/squid/var/cache

Squid Cache: Version 2.6.STABLE14
configure options: '--prefix=/usr/local/squid' '--enable-linux-netfilter'


But when I use lynx I get this output in the browser:

 This request could not be forwarded to the origin server or to any
 parent caches. The most likely cause for this error is that:
   * The cache administrator does not allow this cache to make direct
 connections to origin servers, and
   * All configured parent caches are currently unreachable.

and this in my cache.log


2007/09/21 14:00:24| WARNING: Forwarding loop detected for:
Client: 10.0.0.5 http_port: 127.0.0.1:3128
GET http://www.nytimes.com/ HTTP/1.0
Host: www.nytimes.com
Accept: text/html, text/plain, text/css, text/sgml, */*;q=0.01
Accept-Encoding: gzip, bzip2
Accept-Language: en
User-Agent: Lynx/2.8.6rel.4 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8d
Via: 1.0 linux.niap.no:3128 (squid/2.6.STABLE14), 1.0 linux.niap.no:3128
(squid/2.6.STABLE14)
X-Forwarded-For: 10.0.0.5, 10.0.0.5
Cache-Control: max-age=259200
Connection: keep-alive

-- 
View this message in context: 
http://www.nabble.com/Problem-with-squid-2.6-on-a-single-computer-as-a-transparent-proxy-tf4495099.html#a12818686
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Re: maximum size of cache_mem

2007-09-21 Thread RW
On Thu, 20 Sep 2007 19:01:16 -0700 (PDT)
zulkarnain <[EMAIL PROTECTED]> wrote:

> But squid is unable to use 4GB of cache_mem. Did I
> miss something?

A couple of points:

- The memory cache only holds objects fetched over the network, so objects
that were already in the disk cache when the daemon started won't get
cached in memory.

- It only caches objects smaller than maximum_object_size_in_memory
(which defaults to 8 KB).
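A hedged illustration of how those two directives interact (the values here are hypothetical examples, not recommendations):

```
# hypothetical squid.conf fragment
cache_mem 4096 MB
# objects larger than this never enter the memory cache; the default
# of 8 KB keeps most responses disk-only
maximum_object_size_in_memory 512 KB
```

With the default 8 KB in-memory object limit, a large cache_mem mostly sits unused because few responses are small enough to qualify.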



Re: [squid-users] maximum size of cache_mem

2007-09-21 Thread zulkarnain
--- Matus UHLAR - fantomas <[EMAIL PROTECTED]> wrote:
> On 20.09.07 19:01, zulkarnain wrote:
> > Yes! I'm running 64-bit CPU, 64-bit OS and 64-bit
> > squid. But squid is unable to use 4GB of cache_mem.
> > Did I miss something?
> 
> do you have more than 4GB of RAM? (how much?)
> did your users fetch more than 4GB of content? It has to be:
> - different pages, so they don't replace each other
> - cacheable, e.g. not excluded by "cache deny", e.g.
> containing .cgi or ?
> 

I've 24GB of memory with this configuration:
- cache_mem = 4GB
- system = 20GB

Average web traffic per day is around 25GB, yet cache_mem usage still
won't move past 1.8GB.

Zul



   



Re: [squid-users] Banner page for certain users in squid

2007-09-21 Thread Amos Jeffries

Henrik Nordstrom wrote:

On tor, 2007-09-20 at 10:45 +0800, Adrian Chadd wrote:
 

I run SARG against my access.log every day to get a list of top 30
users, and would like to know if there is a way of redirecting these top
30 users to a notice page upon first login in squid, where they are
notified of their high usage? After which they can continue surfing of
course.

I'm sure people have done it in the past. I've not done it. Henrik?


An acl containing these users, combined with the session helper, would do
the trick fine.


The idea behind most of these schemes is that it is a dynamic process
rather than a fixed one, and squid -k reconfigure is too heavyweight a
process to want running every minute, say, in order to be fast enough.


Amos
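A hedged sketch of that acl + session-helper combination (helper path, file names, and options here are illustrative assumptions; check the squid_session documentation for your build):

```
# hypothetical example: send the top-30 users to a notice page once per session
external_acl_type session ttl=60 negative_ttl=0 children=1 %LOGIN /usr/local/squid/libexec/squid_session -t 7200
acl topusers proxy_auth "/etc/squid/topusers.txt"
acl session external session
http_access deny topusers !session
deny_info http://intranet.example.com/high-usage.html session
```

The idea: topusers.txt is regenerated from the SARG report, and the session helper denies (and so redirects via deny_info) only the first request of each browsing session, after which the user continues normally.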


Re: [squid-users] after squid shut down redirector still working

2007-09-21 Thread Arthur Tumanyan



Gonzalo Arana-2 wrote:
> 
> On 9/21/07, Arthur Tumanyan <[EMAIL PROTECTED]> wrote:
> 
> You should break the main loop when fgets() returns NULL.
> 
> Regards,
> 
> -- 
> Gonzalo A. Arana
> 
> 
Ok, I did it, thanks! Now the redirectors shut down along with squid. But I
notice that when I try to open some page in a browser, I get a timeout
error, and the following line in cache.log.

Otherwise, the timeout error still occurs, but without the "2007/09/20
16:38:33| Open FD READ/WRITE6 shaga_redir #1" line.

And "Page faults with physical i/o: 6" = "Page faults with physical i/o: 0"

2007/09/20 16:30:31| FD 23 Closing ICP connection
2007/09/20 16:30:31| WARNING: Closing client 127.0.0.1 connection due to
lifetime timeout
2007/09/20 16:30:31|http://forum.sysadmins.ru/viewtopic.php?p=7817630
2007/09/20 16:30:31| Closing unlinkd pipe on FD 20
2007/09/20 16:30:31| storeDirWriteCleanLogs: Starting...
2007/09/20 16:30:31|   Finished.  Wrote 0 entries.
2007/09/20 16:30:31|   Took 0.0 seconds (   0.0 entries/sec).
CPU Usage: 0.055 seconds = 0.041 user + 0.014 sys
Maximum Resident Size: 4000 KB
Page faults with physical i/o: 6
2007/09/20 16:30:31| Open FD READ/WRITE6 shaga_redir #1
2007/09/20 16:30:31| Squid Cache (Version 2.6.STABLE14): Exiting normally.

What can this mean?

-- 
View this message in context: 
http://www.nabble.com/after-squid-shut-down-redirector-still-working-tf4488899.html#a12815041
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] HTTPS Reverse Proxy

2007-09-21 Thread Gordon McKee

Hi

I am still having trouble with my certificate!  Am I doing something
stupid?  Here is the openssl output to verify the cert paths:


kursk# openssl verify -CAfile opl-all.pem  -verbose opl20070919.pem
opl20070919.pem: OK
kursk#

here are the squid.conf lines:

https_port 82.36.186.17:443 cert=/usr/local/etc/squid/sslcert/opl20070919.pem cafile=/usr/local/etc/squid/sslcert/opl-all.pem name=opls defaultsite=www.optimalprofit.com

cache_peer 192.168.0.11 parent 443 0 no-query originserver login=PASS name=opls ssl sslcert=/usr/local/etc/squid/sslcert/opl20070919.pem
cache_peer_domain opls www.optimalprofit.com

I am still getting this error:
2007/09/21 12:24:41| SSL unknown certificate error 20 in /C=GB/ST=West 
Midlands/L=Solihull/O=Optimal Profit Ltd/OU=StartCom Free Certificate 
Member/OU=Domain validated 
only/CN=www.optimalprofit.com/[EMAIL PROTECTED]
2007/09/21 12:24:41| fwdNegotiateSSL: Error negotiating SSL connection on FD 
19: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate 
verify failed (1/-1/0)

2007/09/21 12:24:41| TCP connection to 192.168.0.11/443 failed

Does anyone know how to fix this?  Do I need to post the certificates?  Not 
very secure though!!
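A hedged guess at the fix (untested): the log shows squid failing to verify the backend's certificate when it connects to the cache_peer, so tell it which CA to trust on that connection, e.g.:

```
# hypothetical: point the peer at the CA chain used to verify the backend
cache_peer 192.168.0.11 parent 443 0 no-query originserver login=PASS name=opls ssl sslcert=/usr/local/etc/squid/sslcert/opl20070919.pem sslcafile=/usr/local/etc/squid/sslcert/opl-all.pem
# or, less securely, skip backend verification entirely by adding:
#   sslflags=DONT_VERIFY_PEER
```

sslcafile= and sslflags= are documented cache_peer ssl options in squid 2.6; the paths above merely reuse the ones from the original post.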


Many thanks

Gordon 





Re: [squid-users] Transparent proxy

2007-09-21 Thread [EMAIL PROTECTED]
Show us your interception rules in iptables.
Don't forget to enable forwarding too, in the ip_forward file.
 -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT
--to-ports 3128
If your squid box is a router, think twice: it can be eth0 or
eth1.
Ronny
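Put together, the usual recipe looks roughly like this (the interface name and squid port are assumptions — adjust for your topology):

```
# allow the box to route between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward

# intercept web traffic arriving on the LAN interface and hand it to squid
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128
```

REDIRECT only makes sense on the interface facing the clients; traffic squid itself originates must not be caught by the rule, or you get a forwarding loop.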

- Original Message Follows -
From: "Indunil Jayasooriya" <[EMAIL PROTECTED]>
To: Netmail <[EMAIL PROTECTED]>
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Transparent proxy
Date: Fri, 21 Sep 2007 15:31:59 +0530

> > On 9/21/07, Netmail <[EMAIL PROTECTED]> wrote:
> > >  Hi guys
> > > I'm configuring a transparent proxy but I have a
> > > difficulty... Interception doesn't work if I
> > > configure only http_port 3128 transparent?
> > >
> > > Must I also set up iptables rules by hand?
> 
>  pls see below
> 
>
http://www.squid-cache.org/mail-archive/squid-users/200708/0232.html
> 
> 
> 
> -- 
> Thank you
> Indunil Jayasooriya 


If I have seen further it's by standing on shoulders of
Giants --> Newton
:::


Re: [squid-users] maximum size of cache_mem

2007-09-21 Thread Matus UHLAR - fantomas
> > Are you running a 64-bit CPU, 64-bit OS and 64-bit
> > Squid program?
> > Otherwise I don't think your squid can use the full
> > 4G of memory for cache_mem.

On 20.09.07 19:01, zulkarnain wrote:
> Yes! I'm running 64-bit CPU, 64-bit OS and 64-bit
> squid. But squid is unable to use 4GB of cache_mem.
> Did I miss something?

do you have more than 4GB of RAM? (how much?)
did your users fetch more than 4GB of content? It has to be:
- different pages, so they don't replace each other
- cacheable, e.g. not excluded by "cache deny", e.g. containing .cgi or ?

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
"Where do you want to go to die?" [Microsoft]


RE: [squid-users] SquidNT - Compressing rotated logs

2007-09-21 Thread Paul Cocker
So simple I'm disgusted I didn't think of it.

Thanks. 


Paul Cocker
IT Systems Administrator
IT Security Officer

01628 81(6647)

TNT Post (Doordrop Media) Ltd.
1 Globeside Business Park
Fieldhouse Lane
Marlow
Bucks
SL7 1HY

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 20 September 2007 21:51
To: Paul Cocker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] SquidNT - Compressing rotated logs

On tor, 2007-09-20 at 12:31 +0100, Paul Cocker wrote:
> Since I'm running SquidNT there's no native log rotation method. I'm 
> writing a batch file to handle this, but I would like to zip up the 
> archive copies of the log to save space, seeing as how store.log and 
> access.log are 500MB+ each in less than a week. Thing is, I want to 
> run this from the command line, preferably without relying no any 3rd 
> party solutions.

Here is one idea using only Windows:

Create a log archival directory and configure it as compressed, so that
files stored there are automatically compressed. Then, in your batch job
that rotates the logs, copy the logs there and let the filesystem
compress them for you.

Regards
Henrik
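A rough sketch of that approach as a batch job (the paths are illustrative; compact.exe marks the folder so NTFS transparently compresses anything placed in it afterwards, and squid must have released the log files — e.g. be stopped — before the move):

```
rem one-time setup: mark the archive folder as NTFS-compressed
compact /c /s:C:\squid\var\logs\archive

rem on each rotation: move the logs into the compressed folder
move C:\squid\var\logs\access.log C:\squid\var\logs\archive\access-%DATE:/=-%.log
move C:\squid\var\logs\store.log C:\squid\var\logs\archive\store-%DATE:/=-%.log
```

This keeps everything native to Windows, as requested: no third-party zip tool, just filesystem compression.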




TNT Post is the trading name for TNT Post UK Ltd (company number: 04417047), 
TNT Post (Doordrop Media) Ltd (00613278), TNT Post Scotland Ltd (05695897),TNT 
Post North Ltd (05701709) and TNT Post South West Ltd (05983401). Emma's Diary 
and Lifecycle are trading names for Lifecycle Marketing (Mother and Baby) Ltd 
(02556692). All companies are registered in England and Wales; registered 
address: 1 Globeside Business Park, Fieldhouse Lane, Marlow, Buckinghamshire, 
SL7 1HY.



Re: [squid-users] Transparent proxy

2007-09-21 Thread Indunil Jayasooriya
> On 9/21/07, Netmail <[EMAIL PROTECTED]> wrote:
> >  Hi guys
> > I'm configuring a transparent proxy but I have a difficulty...
> > Interception doesn't work if I configure only
> > http_port 3128 transparent?
> >
> > Must I also set up iptables rules by hand?

 pls see below

http://www.squid-cache.org/mail-archive/squid-users/200708/0232.html



-- 
Thank you
Indunil Jayasooriya


[squid-users] Transparent proxy

2007-09-21 Thread Netmail
Hi guys
I'm configuring a transparent proxy but I have a difficulty...
Interception doesn't work if I configure only
http_port 3128 transparent?

Must I also set up iptables rules by hand?

http://wiki.squid-cache.org/SquidFaq/InterceptionProx 

thanks 



Re: [squid-users] after squid shut down redirector still working

2007-09-21 Thread Gonzalo Arana
On 9/21/07, Arthur Tumanyan <[EMAIL PROTECTED]> wrote:
>
>
>
> Henrik Nordstrom-5 wrote:
> >
> > Probably your redirectors are broken. What redirector are you using?
> >
> > Regards
> > Henrik
> >
> >
> I think so, because the redirector is still being written, and all this is
> working in test mode only.
>
> 
> if(fgets(p_string, LINE_MAXLEN, stdin) != NULL){
> manageOutput(p_string);//bug with this function,need check
>
> }
> //
> usleep(1000);

You should break the main loop when fgets() returns NULL.

Regards,

-- 
Gonzalo A. Arana


Re: [squid-users] maximum size of cache_mem

2007-09-21 Thread Gonzalo Arana
On 9/20/07, zulkarnain <[EMAIL PROTECTED]> wrote:
> --- tech user <[EMAIL PROTECTED]> wrote:
> >
> > Are you running a 64-bit CPU, 64-bit OS and 64-bit
> > Squid program?
> > Otherwise I don't think your squid can use the full
> > 4G of memory for cache_mem.
> >
>
> Yes! I'm running 64-bit CPU, 64-bit OS and 64-bit
> squid. But squid is unable to use 4GB of cache_mem.
> Did I miss something?
>
> Zul
>

Sorry to come back with the same suggestion, but have you:
 - looked into cache.log? (particularly when squid starts)
 - looked at the cache manager's 'Current squid configuration'? (and
checked the cache_mem value there)
 - checked ulimits? You may want to modify your squid rc.d script to
increase the hard & soft ulimits.

Hope this helps,

-- 
Gonzalo A. Arana
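The ulimit suggestion might look like this near the top of the rc.d script (a sketch; which flags exist, and whether you may raise hard limits, varies by shell and OS):

```
# raise per-process memory limits before launching squid
ulimit -d unlimited   # data segment size
ulimit -v unlimited   # address space, where the shell supports -v
/usr/local/squid/sbin/squid
```

Limits set this way are inherited by squid, so they must be raised in the same shell that starts the daemon.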


[squid-users] - SOLVED - Multi-ISP / Squid 2.6 Problem going DIRECT

2007-09-21 Thread Philipp Rusch

SOLVED - Updating to Squid 2.6.STABLE14-8.5 and applying patches to
our firewall (Shorewall 4.0.3) did the trick for us.
Now this works flawlessly.

--
Sorry to bother you, but I don't get it.

We have a SuSE 10.1 system and have our www-traffic going through squid.
Since upgrading from 2.5 to version 2.6 STABLE5-30 (SuSE versions) we have
noticed that Squid behaves strangely. After running normally for a while,
Squid seems to go "DIRECT" only, the browsers on the clients seem to hang,
and surfing is ultra slow. This happens every three or four websites we
try to access: it seems to work normally for one or two, then the next
four or five GETs are very slow again and the cycle begins again.
In /var/logs/Squid/access.log I see that most of the connections are going
DIRECT, sometimes we get connection timeouts (110), and sometimes we
see that "somehow" a :443 is appended to the URL lines. STRANGE.
Any hints appreciated.
---

Regards from Germany,
Kind regards,
Philipp Rusch






Re: [squid-users] after squid shut down redirector still working

2007-09-21 Thread Arthur Tumanyan



Henrik Nordstrom-5 wrote:
> 
> Probably your redirectors are broken. What redirector are you using?
> 
> Regards
> Henrik
> 
> 
I think so, because the redirector is still being written, and all this is
working in test mode only.

Here is the source code of the "main" function.

int main(int argc, char **argv)
{
    log_debug("Starting");

    if (argc < 2) {
        strncpy(config, def_config, 255);
        log_debug("Wrong usage of program! The second argument must be the config file's absolute path");
        snprintf(TMP_MSG, LINE_MAXLEN, "Using default config name '%s'", config);
        log_debug(TMP_MSG);
    } else if (argc == 2) {
        if (!(cnf_file = fopen(argv[1], "r"))) {
            strncpy(config, def_config, 255);
            snprintf(TMP_MSG, LINE_MAXLEN,
                "The specified path was not found! Using default config name '%s'", config);
            log_debug(TMP_MSG);
        } else {
            strncpy(config, argv[1], 255);
        }
    }

    if (config_read()) {
        log_debug("Read config error!");
        emergMode = 2;
    }

    log_debug("Config reading done.");
    checkEmergMode();
    log_debug("Ready to serve requests");
    /* my_debug(); */
    set_sig_handler();

    for (;;) {
        now = time(NULL);
        time_t diff_time = difftime(now, last_configure);
        if (((int) diff_time == reconfigure) ||
            (((int) diff_time % reconfigure) == 0 && (int) diff_time != 0)) {
            reConfigure();
            checkEmergMode();
            sleep(1);
        }
        if (chdir(root_dir) != 0) {
            snprintf(TMP_MSG, Q_LINE_MAXLEN,
                "Can not change working directory to %s", root_dir);
            log_debug(TMP_MSG);
            exit(EXIT_FAILURE);
        }
        if (setvbuf(stdout, NULL, _IOLBF, 0) != 0) {
            snprintf(TMP_MSG, Q_LINE_MAXLEN, "Cannot configure stdout buffer");
            log_debug(TMP_MSG);
            exit(EXIT_FAILURE);
        }
        if (fgets(p_string, LINE_MAXLEN, stdin) != NULL) {
            manageOutput(p_string); /* bug with this function, needs checking */
        }
        usleep(1000);
    }

    exit(EXIT_SUCCESS);
}

void manageOutput(char *stream)
{
    char *sym;

    sym = strtok(stream, " ");
    snprintf(rEntry.url, 255, "%s", sym);
    sym = strtok(NULL, "");
    snprintf(rEntry.src_address, 255, "%s", sym);
    sym = strtok(NULL, "");
    snprintf(rEntry.ident, 255, "%s", sym);
    sym = strtok(NULL, "");
    snprintf(rEntry.user, UNAME_MAXLEN, "%s", sym);
    sym = strtok(NULL, "");
    snprintf(rEntry.method, 32, "%s", sym);

    if (isBlocked(rEntry.user) == 1) {
        snprintf(rEntry.url, LINE_MAXLEN, "%s%s", reurl_blocked, rEntry.user);
    } else if (isOverdrafted(rEntry.user) != 0) {
        snprintf(rEntry.url, LINE_MAXLEN, "%s%s", reurl_overdrafted, rEntry.user);
    } /* else redirect_all = 2; */

    if (redirect_all == 0) {
        fprintf(stdout, "%s %s %s %s %s",
            rEntry.url,
            rEntry.src_address,
            rEntry.ident,
            rEntry.user,
            rEntry.method);
    } else {
        fprintf(stdout, "%s", "");
    }

    fflush(stdout);
}

I think all the problems are in the "manageOutput" function.
-- 
View this message in context: 
http://www.nabble.com/after-squid-shut-down-redirector-still-working-tf4488899.html#a12811785
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Maximum Cachedir COSS size

2007-09-21 Thread Henrik Nordstrom
On fre, 2007-09-21 at 09:23 +0700, Sunin Thaveethamsavee wrote:

> 
> cache_dir coss /dev/cciss/c0d5 10 max-size=33554431 overwrite-percent=40
> 
>  
> And use command squid -z I got this error
> FATAL: COSS cache_dir size exceeds largest offset

> I tried varying the max size, and it can't be more than 7500. What's
> happening? How can I use the maximum disk space for a single partition?


From the cache_dir directive documentation regarding coss:

block-size=n defines the "block size" for COSS cache_dir's.
Squid uses file numbers as block numbers.  Since file numbers
are limited to 24 bits, the block size determines the maximum
size of the COSS partition.  The default is 512 bytes, which
leads to a maximum cache_dir size of 512<<24, or 8 GB.  Note
you should not change the COSS block size after Squid
has written some objects to the cache_dir.

Regards
Henrik

