Re: [squid-users] cache everything except flv mp3 mpeg etc

2010-02-09 Thread Amos Jeffries

J. Webster wrote:

Is there a way to cache everything in squid except for the video
files and mp3s? I had a problem before when I turned caching on and
it made the performance very poor because it was trying to cache
large video files. Right now I have a problem on the server: even when
iptraf lists inbound+outbound traffic at 5000 kbit/s, viewing video
works fine, but normal webpages take a while to load and I can't
figure out where the bottleneck is.
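
(An aside: the usual way to exempt large media from caching is a
"cache deny" rule over a path ACL. A minimal squid.conf sketch; the
extension list here is illustrative only:

  acl media urlpath_regex -i \.(flv|mp3|mp4|mpe?g|avi|wmv)$
  cache deny media

Matching requests are still proxied, just never stored.)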


Configuration?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Error 111

2010-02-09 Thread Amos Jeffries

Abdul Khan wrote:

Hi guys,
some of my users are getting the error below when they try to log on
to a webpage. The website is allowed through the ACLs.
They can get to the website, but when they try to make a purchase and
log on, they get this error:

Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED): Unknown error



That is not a Squid error. Something at the website end is broken.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Cisco switches and cache interception and TPROXY

2010-02-09 Thread Amos Jeffries

Clemente Aguiar wrote:

I am in the process of upgrading our network infrastructure.

At present I have a Cisco Router and I have a working interception cache
(Squid) using WCCPv2 protocol for this purpose.

In this upgrade process, I am considering the use of Cisco Catalyst 3560
or 3750 switches instead of the router, but from the documentation,
these switches support WCCPv2, but there are some unsupported WCCP
features.

The WCCP features that are not supported in the software release
12.2(52)SE are:
- Packet redirection on an outbound interface that is configured by
using the "ip wccp redirect out" interface configuration command. This
command is not supported.
- The GRE forwarding method for packet redirection is not supported.
- The hash assignment method for load balancing is not supported.
- There is no SNMP support for WCCP.



Then use the L2 forwarding method; Squid supports both GRE and L2.
AFAIK the L2 transport method is not a problem for Squid, though I'm not 
sure how that gets configured on the receiving system.
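
(For reference, a minimal squid.conf sketch of WCCPv2 with L2
forwarding and mask assignment, matching those switch restrictions;
the router address is an example:

  http_port 3128 intercept
  wccp2_router 192.0.2.1
  wccp2_forwarding_method 2   # 2 = L2 (keyword "l2" on newer Squid)
  wccp2_return_method 2
  wccp2_assignment_method 2   # 2 = mask; hash is unsupported here
  wccp2_service standard 0

On Squid 2.7/3.0 the port option is "transparent" rather than
"intercept".)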




Taking into consideration these unsupported features, would it be possible
to:

1) use Squid in "interception" mode?


Yes.



2) use Squid in "TProxy interception" mode?



Yes.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Missing Cache on Requests.

2010-02-09 Thread Amos Jeffries

John Villa wrote:

Hello,
I have finished setting up squid but I do not believe it is working 
properly. It appears as though when requests are made they are missing 
the cache:

 X-Cache-Lookup: MISS from localhost:3128

I am running squid3 with a pretty much out of the box config and a few 
refresh_pattern variables.

Any help would be great.
Thanks,
-John


A miss rate of 50-60% of objects can be expected, due to the design of 
the websites involved, or simply because nobody has visited the site 
recently enough for a cached object to be reusable.


You need to look at the exact URLs producing TCP_MISS and check out why 
they are that way.
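
(A quick way to rank the URLs behind the misses, assuming the default
native access.log format and a default log path:

  awk '$4 ~ /MISS/ {print $7}' /var/log/squid/access.log \
    | sort | uniq -c | sort -rn | head

In that format field 4 is the result code and field 7 is the URL.)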

  http://redbot.org can help with that.

NP: facebook and google hosted websites are known to be very cache 
unfriendly.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Client browser perpetually 'Loading'

2010-02-09 Thread Amos Jeffries

Dave Coventry wrote:

OK, Amos, this is what I've thought of:

I'll install squid on the server.

The 2 users can access the Apache front page ("It Works!") but not the
Drupal directories as defined by the Apache httpd.conf aliases.

If I set up the proxy to allow access from the 2 users (who will be on
Dynamic IPs and accessing from the WAN), will that allow access to
http://localhost/drupal/ ?

I don't think I would need the proxy to cache anything, but I would
like it to be as lightweight as possible.

Does that sound like it would work?


If the users can't get to it, I doubt Squid would do any better.

To be as lightweight as possible, you don't need Squid here at all. Just 
a correct Apache configuration. One that lets its visitors access the 
hosted site content would be a good start.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Read, Clean, Cache, Re-Serve a Blog

2010-02-09 Thread Amos Jeffries

Grover Cleveland wrote:

I want to create a single purpose website to:

1.  Read in a libelous blog
2.  Strip the web-bugs and traffic analytics
3.  Cache the sterile version
4.  Serve the sterile version.  Untouched by me except for removal of
bugs and analytic devices

Goals

1.  Reduce page views that the blog author sees
2.  Protect privacy of readers

The blog is not read by many; the author has used social engineering
to backtrack and harass readers.


Nasty. I'd advise you to advertise such bad netizen behavior and 
encourage the readership to protect themselves by shunning the abuser.


If you can prove the libelous nature you claim, then complain. The blog 
hosts and resource providers, and the blogger's ISP, have an obligation to 
disconnect them, and often leave themselves grounds in contracts to do so.




Would Squid be suitable for this task?  Would this be hard for a
newbie? Recommended reading or other ideas?


Squid is a proxy, not a web server. It does not alter a single byte of 
the content in transit beyond the basic compression/decompression 
required to send something. End-to-end, the content remains accurate.


What you are asking us to advise on is an illegal copyright violation.
Sorry, can't help any further.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Redirecting user based on user agent string or incoming URL

2010-02-09 Thread Amos Jeffries

Reuben S. wrote:

Hi,

I have an existing url_rewrite_program script which redirects the user
based on the host they came in on.  I also want to redirect the user
to different URLs based on their user agent string (for example, if
the string has the word "iphone" in it).  Does anyone know how I'd
accomplish this with a single url_rewrite_program?  I've checked out a
few squid options, but I haven't found what I'm looking for yet.
Should I be using urlgroups or something?

I've found some information on how to do this, but no details.  For
instance, this email says what I'd like to do is possible, but doesn't
say how:

http://www.squid-cache.org/mail-archive/squid-users/200803/0802.html

Thanks,

Reuben


Before I start, please reconsider any use of URL re-writing carefully. 
It introduces a number of problems that are avoidable by proper use of 
HTTP messages.



What Henrik means there is that you can use ACLs to control which 
requests get re-written. For example;


 acl iphone browser ...
 url_rewrite_access allow iphone
 url_rewrite_access deny all

... will cause only the requests with a User-Agent: header matching the 
pattern you enter to be re-written.



To do redirection of more than one type, you will need something more 
sophisticated than just a URL-rewriter.



The best alternative we have is deny_info 'redirecting' requests 
matching an ACL to some other URL. For example;


 acl iphone browser ...
 acl site dstdomain www.example.com
 http_access deny site iphone
 deny_info http://iphone.example.com/ iphone

... will redirect any requests made from an iphone agent for 
www.example.com to iphone.example.com


NP: deny_info is limited in current Squid to one destination URL. 
However, if you are willing to use a build of 3.2 (alpha code right now) 
you can dynamically generate the deny_info URL and, for example, change 
the domain name but keep the path and other parts of the URL.



A more tricky and complex setup would be to mix an external_acl_type helper 
with a url-rewrite helper. The external_acl_type helper registers 
some background key outside Squid indicating that the next URL X from 
client IP Y needs to be re-written to Z. Then the URL-rewriter, when it 
sees IP Y requesting URL X, does the re-write.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Is there ICAP evidence log in any log files?

2010-02-09 Thread Amos Jeffries

Henrik Nordström wrote:

On Tue, 2010-02-09 at 15:18 -0600, Luis Daniel Lucio Quiroz wrote:

On Wednesday 30 July 2008 22:24:35, Henrik Nordstrom wrote:

On Thu, 2008-07-31 at 11:26 +0900, S.KOBAYASHI wrote:

Hello developer,

I'm looking for evidence of access to the ICAP server. Is there a log
of it in any log files, such as access.log or cache.log?

The ICAP server should have logs of its own.

There is no information in the Squid logs on which iCAP servers were
used for the request/response.

Regards
Henrik

I wonder if using squidclient mngr:xxXX  we could see some info about icap,
where?


Seems not.

You can however increase the debug level of section 93 to have ICAP spew
out lots of information in cache.log.

debug_options ALL,1 93,5

should do the trick I think.

Regards
Henrik



Squid-3.1 and later provide some basic ICAP service logging in access.log.
http://www.squid-cache.org/Doc/config/logformat/


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


[squid-users] Read, Clean, Cache, Re-Serve a Blog

2010-02-09 Thread Grover Cleveland
I want to create a single purpose website to:

1.  Read in a libelous blog
2.  Strip the web-bugs and traffic analytics
3.  Cache the sterile version
4.  Serve the sterile version.  Untouched by me except for removal of
bugs and analytic devices

Goals

1.  Reduce page views that the blog author sees
2.  Protect privacy of readers

The blog is not read by many; the author has used social engineering
to backtrack and harass readers.

Would Squid be suitable for this task?  Would this be hard for a
newbie? Recommended reading or other ideas?

Thanks
Grover


RE: [squid-users] Usernames in capitals converted lowercase and denied

2010-02-09 Thread Amos Jeffries
On Wed, 10 Feb 2010 00:12:29 +, Jenny Lee  wrote:
> Hmm, this mailing list seems to ignore me.
>  
> auth_param basic casesensitive on 
>  
> as per: http://bugs.squid-cache.org/show_bug.cgi?id=431
>  
> fixed the problem for usernames in capitals.
>  
> However, devising a solution for ncsa_auth to accept user/pass
> case-insensitively still remains an open problem.
>  
> Anyone with a similar requirement and/or solution?

The problem is that most security systems use hashes at some point. The
NCSA helper's secure storage of the password data is where you hit it.

At that point the exact binary values of each character matter a *lot*.
Changing the case of a single character will generate a completely
different hash to compare against the one stored.

You seem to be trying to fix a user problem by technical means. If you
make all passwords lower case, somebody will still insist on pressing shift
at some point to capitalize their pet's name or something.
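
(If only the username needs case-folding, since passwords cannot be
for the hashing reason above, one untested option is a tiny wrapper
helper; paths are hypothetical and helper I/O buffering needs care:

  #!/bin/sh
  # lowercase the username, pass the password through untouched
  while read user pass; do
    printf '%s %s\n' "$(printf '%s' "$user" | tr 'A-Z' 'a-z')" "$pass"
  done | /usr/local/squid/libexec/ncsa_auth /etc/squid/passwd

Point auth_param basic program at the wrapper instead of ncsa_auth.)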

Amos



Re: [squid-users] using parent cache and parent dns

2010-02-09 Thread Amos Jeffries
On Tue, 09 Feb 2010 16:42:18 -0900, Chris Robertson 
wrote:
> im notreal wrote:
>> Basically here is my setup:
>>
>> I have a cache located on an oil rig
>> down south and one in the home office.  There is a dns in the home
>> office that I don't have access to on the rig, and there is one on the
>> rig.  There are webservers in both locations (management and control
>> stuff) that I need access to.  What I'd like to do is be able to
>> forward requests to the parent without having to resolve them locally.
>>
>> if
>> i do a dns_nameserver 127.0.0.1 I won't be able to split the requests
>> between both sites (i.e. !somewebserver.local won't be resolved locally
>> and will fail)
>>
>> ideas ?
>>   
> 
> Use never_direct (http://www.squid-cache.org/Doc/config/never_direct/) 
> in association with a dstdomain ACL.  Any matching domains should not 
> attempt resolution, but will send the request to the appropriate peer 
> (as defined by your cache_peer_access rules).

Um, this is a fairly basic reverse-proxy requirement.

"im notreal" take a look through
http://wiki.squid-cache.org/ConfigExamples#Reverse_Proxy_.28Acceleration.29
under reverse proxy. You can decide whether the peer configured is the
other proxy, or the web server at the other site. Depending on your network
layout and security, one or the other will work.

All you have to be careful of is that the ACLs used in cache_peer_access
and http_access use dstdomain instead of dst or anything else based on DNS.

Amos


Re: [squid-users] using parent cache and parent dns

2010-02-09 Thread Chris Robertson

im notreal wrote:

Basically here is my setup:

I have a cache located on an oil rig
down south and one in the home office.  There is a dns in the home
office that I don't have access to on the rig, and there is one on the
rig.  There are webservers in both locations (management and control
stuff) that I need access to.  What I'd like to do is be able to
forward requests to the parent without having to resolve them locally.

if
i do a dns_nameserver 127.0.0.1 I won't be able to split the requests
between both sites (i.e. !somewebserver.local won't be resolved locally
and will fail)

ideas ?
  


Use never_direct (http://www.squid-cache.org/Doc/config/never_direct/) 
in association with a dstdomain ACL.  Any matching domains should not 
attempt resolution, but will send the request to the appropriate peer 
(as defined by your cache_peer_access rules).
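
(A sketch of that, with hypothetical names; the peer is the home-office
proxy, and .local domains are forwarded without local DNS resolution:

  cache_peer homeoffice.example.com parent 3128 0 no-query name=home
  acl home_sites dstdomain .somewebserver.local
  cache_peer_access home allow home_sites
  never_direct allow home_sites

dstdomain matches on the URL's host text, so no lookup is needed to
pick the route.)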



Tim 


Chris



Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread Amos Jeffries
On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
 wrote:
> On Tuesday 9 February 2010 17:29:23, Landy Landy wrote:
>> I don't know what to do with my current squid, I even upgraded to
>> 3.0.STABLE21 but the problem persists every three days:
>> 
>> /usr/local/squid/sbin/squid -v
>> Squid Cache: Version 3.0.STABLE21
>> configure options:  '--prefix=/usr/local/squid'
'--sysconfdir=/etc/squid'
>> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
>> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
>> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
>> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
>> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
>> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
>> 
>> I built with the --with-maxfd=32768 option but, when squid is started, it
>> says it is working with only 1024 filedescriptors.
>> 
>> I even added the following to the squid.conf:
>> 
>> max_open_disk_fds 0
>> 
>> But it hasn't resolved anything. I'm using squid on Debian Lenny. I
>> don't know what to do. Here's part of cache.log:
>>

> 
> 
> You've got a bug! That behaviour happens when a coredump occurs in squid;
> please 
> file a ticket with gdb output, and raise debug to maximum if you can.

WTF are you talking about Luis? None of the above problems have anything
to do with crashing Squid.

They are in order:

"WARNING! Your cache is running out of filedescriptors"
 * either the system limits being set too low during run-time operation.
 * or the system limits were too small during the configure and build
process.
   -> Squid may drop new client connections to maintain lower than desired
traffic levels.

  NP: patching the kernel headers to artificially trick squid into
believing the kernel supports more by default than it does is not a good
solution. The ulimit utility exists for that purpose instead.



"Unsupported method attempted by 172.16.100.83"
 * The machine at 172.16.100.83 is pushing non-HTTP data into Squid.
  -> Squid will drop these connections.

"clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (2) No such file
or directory"
 * NAT interception is failing to locate the NAT table entries for some
client connection.
 * usually due to configuring the same port with "transparent" option and
regular traffic.
 -> for now Squid will treat these connections as if the directly
connecting box was the real client. This WILL change in some near future
release.


As you can see, in none of those handling operations does Squid crash or
core dump.


Amos


Re: [squid-users] Authentication Browser Dialog

2010-02-09 Thread Chris Robertson

Christian Weiligmann wrote:

On Tuesday, 09.02.2010 at 17:10 +1300, Amos Jeffries wrote:
  

Christian Weiligmann wrote:


Hello,

I have used the Squid proxy for over 10 years, and I am very happy to
have this program for internet access; the users may see it
differently. But I have a question concerning the authentication
dialogs.

I want to authenticate internet access for my users against a MySQL
backend, but not with a browser dialog; with a webpage instead.


Similar to the question "Re: [squid-users] Proxy subscription on-line":
where is the error page I can modify?


Thanks a lot for viewing, and please give me an answer...

  
So ... what error page? in response to what action? in which squid 
version? under what circumstances? with what information?


Amos



Hello,

I'm using Squid 2.6.18-1ubuntu3, non-transparent, on Ubuntu LTS
8.04.4. I want to use a website for my authentication process; I don't
want to use the authentication dialog in the browser. Is this possible?


My "Similar to the question" remark was because I understood it as the
same question as mine, sorry.

Thank you for your answer!


Using external_acl_type 
(http://www.squid-cache.org/Doc/config/external_acl_type/) and deny_info 
(http://www.squid-cache.org/Doc/config/deny_info/) you can redirect 
those clients that are not authenticated to the page that performs the 
authentication.  Your external_acl_type can return a username which will 
be used in the logs.


Perhaps my response to a similar query at 
http://www.squid-cache.org/mail-archive/squid-users/201001/0331.html 
will give you a good starting point.
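
(A rough squid.conf sketch of that approach; the helper path, ACL names
and login URL are hypothetical:

  external_acl_type session ttl=60 %SRC /usr/local/bin/check_session
  acl logged_in external session
  deny_info http://auth.example.com/login logged_in
  http_access deny !logged_in
  http_access allow logged_in

The helper answers OK, optionally with user=, for client IPs holding a
session; everyone else gets redirected to the login page.)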


Chris



Re: [squid-users] cache manager access from web

2010-02-09 Thread Chris Robertson

Amos Jeffries wrote:

J. Webster wrote:
I have followed the tutorial here: 
http://wiki.squid-cache.org/SquidFaq/CacheManager
and set up acls to access the cache manager cgi on my server. I have 
to access this externally for the moment as that is the only access 
to the server that I have (SSH or web). The cache manager login 
appears when I access: http://myexternalipaddress/cgi-bin/cachemgr.cgi

I have set the cache manager login and password in the squid.conf
#  TAG: cache_mgr
#   Email-address of local cache manager who will receive
#   mail if the cache dies. The default is "root".
#
#Default:
# cache_mgr root
cache_mgr a...@aaa.com
cachemgr_passwd aaa all
#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl cacheadmin src 88.xxx.xxx.xx9/255.255.255.255 #external IP address?


You don't need the /255.255.255.255 bit. Just a single IP address will 
do.



acl to_localhost dst 127.0.0.0/8
# Only allow cachemgr access from localhost


As a side note


http_access allow ncsa_users
http_access allow manager localhost
http_access allow manager cacheadmin
http_access deny manager


cache_manager access (any access, really) is already allowed to 
ncsa_users, no matter if they are accessing from localhost, 
88.xxx.xxx.xx9 or any other IP.  You might want to have a gander at the 
FAQ section on ACLs (http://wiki.squid-cache.org/SquidFaq/SquidAcl).
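
(A sketch of the usual ordering, with the manager rules placed before
the general allow so they actually take effect:

  http_access allow manager localhost
  http_access allow manager cacheadmin
  http_access deny manager
  http_access allow ncsa_users

http_access rules are evaluated top to bottom and the first match wins.)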




However, whenever I enter the password and select localhost port 8080 
from the cgi script I get:

The following error was encountered:
Cache Access Denied.
Sorry, you are not currently allowed to request:
cache_object://localhost/
from this cache until you have authenticated yourself.


Looks like the CGI script does its own internal access to Squid to 
fetch the page data, but does not have the right login details to pass 
your "http_access allow ncsa_users" security config.


Amos


Chris



RE: [squid-users] Squid Clustering

2010-02-09 Thread Michael Bowe
> -Original Message-
> From: John Villa [mailto:john.joe.vi...@gmail.com]

> Basically I have two nodes and I am trying to make it so
> that if I hit one twice (to store the cache) and then I hit the other
> node, the ICP lookups will work to deliver content. Here is what I have
> minus all the acl rules and what not. Let me know if you have any
> recommendations.
> node 1: cache_peer staging-ss2 sibling 4228 4220
> node 2: cache_peer staging-ss1 sibling 4228 4220

I would suggest you set it to be 

 node 1: cache_peer staging-ss2 sibling 4228 4220 proxy-only
 node 2: cache_peer staging-ss1 sibling 4228 4220 proxy-only

As this will prevent duplication of objects on the two servers' disks.

Michael.



RE: [squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread Michael Bowe
I run some busy installed-from-source Squid v3.1 instances on Lenny.

To get more filedescriptors I did this :

* configure with --with-filedescriptors=65536

* Modify the init.d startup script (which I stole from the packaged
  squid version) so that it includes "ulimit -n 65535"

To check if your tweaks worked, look in cache.log after starting squid.
In my case it reports "With 65535 file descriptors available"

Hope that helps!

Michael.

> -Original Message-
> From: Landy Landy [mailto:landysacco...@yahoo.com]
> Sent: Wednesday, 10 February 2010 10:29 AM
> To: Squid-Users
> Subject: [squid-users] Ongoing Running out of filedescriptors
> 
> I don't know what to do with my current squid, I even upgraded to
> 3.0.STABLE21 but the problem persists every three days:
> 
> /usr/local/squid/sbin/squid -v
> Squid Cache: Version 3.0.STABLE21
> configure options:  '--prefix=/usr/local/squid' '--
> sysconfdir=/etc/squid' '--enable-delay-pools' '--enable-kill-parent-
> hack' '--disable-htcp' '--enable-default-err-language=Spanish' '--
> enable-linux-netfilter' '--disable-ident-lookups' '--
> localstatedir=/var/log/squid3.1' '--enable-stacktraces' '--with-
> default-user=proxy' '--with-large-files' '--enable-icap-client' '--
> enable-async-io' '--enable-storeio=aufs' '--enable-removal-
> policies=heap,lru' '--with-maxfd=32768'
> 
> I built with the --with-maxfd=32768 option but, when squid is started, it
> says it is working with only 1024 filedescriptors.
> 
> I even added the following to the squid.conf:
> 
> max_open_disk_fds 0
> 
> But it hasn't resolved anything. I'm using squid on Debian Lenny. I
> don't know what to do. Here's part of cache.log:



RE: [squid-users] Usernames in capitals converted lowercase and denied

2010-02-09 Thread Jenny Lee

Hmm, this mailing list seems to ignore me.
 
auth_param basic casesensitive on 
 
as per: http://bugs.squid-cache.org/show_bug.cgi?id=431
 
fixed the problem for usernames in capitals.
 
However, devising a solution for ncsa_auth to accept user/pass 
case-insensitively still remains an open problem.
 
Anyone with a similar requirement and/or solution?
 
J

 
> From: bodycar...@live.com
> To: squid-users@squid-cache.org
> Date: Tue, 9 Feb 2010 09:49:50 +
> Subject: [squid-users] Usernames in capitals converted lowercase and denied
> 
> 
> Can someone assist with this? Apparently users were not typing their 
> usernames wrong.
> 
> Squid is apparently not authenticating capital letter usernames.
> 
> 3.1.0.15
> 
> 2010/02/09 13:08:12.781| basic/auth_basic.cc(412) decodeCleartext: 
> 'JEN133:xs39ds'
> 2010/02/09 13:08:12.781| basic/auth_basic.cc(364) 
> authBasicAuthUserFindUsername: Looking for user 'jen133'
> 2010/02/09 13:08:12.781| basic/auth_basic.cc(510) makeCachedFrom: Creating 
> new user 'jen133'
> 
> 2010/02/09 13:08:52.745| helperSubmit: jen133 xs39ds
> 
> 2010/02/09 13:08:52.746| helperHandleRead: 17 bytes from basicauthenticator #1
> 2010/02/09 13:08:52.746| helperHandleRead: 'ERR No such user
> '
> 
> 2010/02/09 13:08:52.746| basic/auth_basic.cc(246) 
> authenticateBasicHandleReply: {ERR No such user}
> 
> ncsa_auth is accepting username/pass in capitals fine from command line and 
> returning OK.
> 
> squid.conf: acl USERS proxy_auth REQUIRED
> 
> Everything else works and has been working. And same setup was working before 
> with older squids.
> 
> Also please advise how to make squid/ncsa_auth accept user/pass 
> case-insensitively. Can't put "-i REQUIRED" on the proxy_auth line.
> 
> Thanks.
> 
> J 

[squid-users] Redirecting user based on user agent string or incoming URL

2010-02-09 Thread Reuben S.
Hi,

I have an existing url_rewrite_program script which redirects the user
based on the host they came in on.  I also want to redirect the user
to different URLs based on their user agent string (for example, if
the string has the word "iphone" in it).  Does anyone know how I'd
accomplish this with a single url_rewrite_program?  I've checked out a
few squid options, but I haven't found what I'm looking for yet.
Should I be using urlgroups or something?

I've found some information on how to do this, but no details.  For
instance, this email says what I'd like to do is possible, but doesn't
say how:

http://www.squid-cache.org/mail-archive/squid-users/200803/0802.html

Thanks,

Reuben


Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread George Herbert
Secret compile-time gotcha: your build environment needs the max fd limit
raised during configure, make, and compile, or the binary doesn't actually
end up able to use the higher maxfd limit.

I do a script with roughly "ulimit -HSn 32768; ./configure (long
options string included from a file)"

(On CentOS 5.1-5.3 build servers, and presumably 5.4; the same should
apply to other Linux + Gnu configure/make environments)
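
(Roughly, as a sketch; the options file mirroring the "included from a
file" above is hypothetical:

  #!/bin/sh
  # raise the fd limit before configure so the build detects it
  ulimit -HSn 32768
  ./configure --with-maxfd=32768 $(cat /usr/local/etc/squid-options)
  make && make install

Run it as root so the hard limit can be raised.)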


-george

On Tue, Feb 9, 2010 at 3:29 PM, Landy Landy  wrote:
> I don't know what to do with my current squid, I even upgraded to 
> 3.0.STABLE21 but the problem persists every three days:
>
> /usr/local/squid/sbin/squid -v
> Squid Cache: Version 3.0.STABLE21
> configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid' 
> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp' 
> '--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1' 
> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files' 
> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs' 
> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
>
> I built with the --with-maxfd=32768 option but, when squid is started, it says 
> it is working with only 1024 filedescriptors.
>
> I even added the following to the squid.conf:
>
> max_open_disk_fds 0
>
> But it hasn't resolved anything. I'm using squid on Debian Lenny. I don't know 
> what to do. Here's part of cache.log:
>
> 2010/02/09 17:14:29| ctx: exit level  0
> 2010/02/09 17:14:29| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> [snip: remainder of cache.log]

Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread Luis Daniel Lucio Quiroz
On Tuesday 9 February 2010 17:29:23, Landy Landy wrote:
> I don't know what to do with my current squid, I even upgraded to
> 3.0.STABLE21 but the problem persists every three days:
> 
> /usr/local/squid/sbin/squid -v
> Squid Cache: Version 3.0.STABLE21
> configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
> 
> I built with the --with-maxfd=32768 option but, when squid is started, it says
> it is working with only 1024 filedescriptors.
> 
> I even added the following to the squid.conf:
> 
> max_open_disk_fds 0
> 
> But it hasn't resolved anything. I'm using squid on Debian Lenny. I don't
> know what to do. Here's part of cache.log:
> 
> 2010/02/09 17:14:29| ctx: exit level  0
> 2010/02/09 17:14:29| client_side.cc(2843) WARNING! Your cache is running
> out of filedescriptors
> [snip: remainder of cache.log]

[squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread Landy Landy
I don't know what to do with my current squid, I even upgraded to 3.0.STABLE21 
but the problem persists every three days:

/usr/local/squid/sbin/squid -v
Squid Cache: Version 3.0.STABLE21
configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid' 
'--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--localstatedir=/var/log/squid3.1' 
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files' 
'--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs' 
'--enable-removal-policies=heap,lru' '--with-maxfd=32768'

I built with the --with-maxfd=32768 option but, when squid is started, it says 
it is working with only 1024 filedescriptors.

I even added the following to the squid.conf:

max_open_disk_fds 0

But it hasn't resolved anything. I'm using squid on Debian Lenny. I don't know 
what to do. Here's part of cache.log:

2010/02/09 17:14:29| ctx: exit level  0
2010/02/09 17:14:29| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:16:50| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:18:45| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:20:01| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:20:17| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:20:38| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:21:33| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:22:26| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:22:41| clientParseRequestMethod: Unsupported method attempted by 
172.16.100.83: This is not a bug. see squid.conf extension_methods
2010/02/09 17:22:41| clientParseRequestMethod: Unsupported method in request 
'_...@.#c5u_e__:___{_Q_"___L_r'
2010/02/09 17:22:41| clientProcessRequest: Invalid Request
2010/02/09 17:22:43| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:22:59| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:23:16| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:23:36| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:23:52| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:24:19| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:24:23| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: 
(2) No such file or directory
2010/02/09 17:24:38| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:24:41| clientParseRequestMethod: Unsupported method attempted by 
172.16.100.83: This is not a bug. see squid.conf extension_methods
2010/02/09 17:24:41| clientParseRequestMethod: Unsupported method in request 
'_E__&_b_%_w__pw__m_}z%__i_...@_t__q___d__?_g'
2010/02/09 17:24:41| clientProcessRequest: Invalid Request
2010/02/09 17:24:54| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:25:12| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:25:12| clientParseRequestMethod: Unsupported method attempted by 
172.16.100.83: This is not a bug. see squid.conf extension_methods
2010/02/09 17:25:12| clientParseRequestMethod: Unsupported method in request 
'_Z___|G3_7^_%U_r_1.h__gd__8C'
2010/02/09 17:25:12| clientProcessRequest: Invalid Request
2010/02/09 17:25:29| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:25:41| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: 
(2) No such file or directory
2010/02/09 17:25:45| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:26:01| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:26:18| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:26:34| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:26:59| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:27:26| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:27:29| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: 
(2) No such file or directory
2010/02/09 17:27:56| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:28:12| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:28:35| client_side.cc(2843) WARNING! Your cache is running out of 
filedescriptors
2010/02/09 17:28:35| clientNatLookup: 

Re: [squid-users] Is there ICAP evidence log in any log files?

2010-02-09 Thread Henrik Nordström
On Tue, 2010-02-09 at 15:18 -0600, Luis Daniel Lucio Quiroz wrote:
> On Wednesday 30 July 2008 22:24:35, Henrik Nordstrom wrote:
> > On Thu, 2008-07-31 at 11:26 +0900, S.KOBAYASHI wrote:
> > > Hello developer,
> > > 
> > > I'm looking for evidence of access to the ICAP server. Is there a log
> > > of it in any log files, such as access.log or cache.log?
> > 
> > The ICAP server should have logs of its own.
> > 
> > There is no information in the Squid logs on which iCAP servers were
> > used for the request/response.
> > 
> > Regards
> > Henrik
> I wonder if using squidclient mngr:xxXX  we could see some info about icap,
> where?

Seems not.

You can however increase the debug level of section 93 to have ICAP spew
out lots of information in cache.log.

debug_options ALL,1 93,5

should do the trick I think.

Regards
Henrik



Re: [squid-users] DNUMTHREADS

2010-02-09 Thread Marcus Kool

It depends on the number of disks that you use for the on-disk cache.
As a rule of thumb, 10 I/Os per disk is fine, so 10 threads per disk.
Only if you use very high-performance disk arrays should you
increase the number of threads per (logical) disk.
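
(So for, say, two cache disks a build-time sketch would be:

  ./configure --enable-storeio=aufs --enable-async-io=20

i.e. 2 disks x 10 threads; that thread count is what DNUMTHREADS ends
up set to.)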

Marcus


J. Webster wrote:

Would this dramatically improve performance, or is it best left at default?



Date: Tue, 9 Feb 2010 17:01:46 +1300
From: squ...@treenet.co.nz
To: squid-users@squid-cache.org
Subject: Re: [squid-users] DNUMTHREADS

J. Webster wrote:

Is it recommended to recompile squid and increase the DNUMTHREADS value?
I read that 30 could easily be used on a 500MHz machine, and my machine is more 
than 2GHz, so would it give an improvement to squid performance?
I have been reading through this document here, which recommends various 
changes including using the reiserfs filesystem.
My machine is CentOS.

http://blog.last.fm/2007/08/30/squid-optimization-guide


Not sure how he got that info; Squid has provided the ./configure
--enable-async-io[=N_THREADS] option as far back as I can see.

It only affects AUFS disk storage.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
Current Beta Squid 3.1.0.16
 		 	   		  



RE: [squid-users] kernel 2.6.32

2010-02-09 Thread Michael Bowe
I tried this exact same combination recently and failed.

I could ping/traceroute, but TCP traffic wasn't working properly. Did a fair
bit of poking around but couldn't work out why this was the case.

2.6.31 kernels work fine though.
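
(For reference, TPROXYv4 wants iptables 1.4.3 or later alongside the
kernel support. A sketch of the usual mangle-table rules, with the
port numbers illustrative:

  iptables -t mangle -N DIVERT
  iptables -t mangle -A DIVERT -j MARK --set-mark 1
  iptables -t mangle -A DIVERT -j ACCEPT
  iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
  iptables -t mangle -A PREROUTING -p tcp --dport 80 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

plus matching "ip rule"/"ip route" entries for fwmark 1.)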


> -Original Message-
> From: Ariel [mailto:lauchafernan...@gmail.com]
> Sent: Wednesday, 10 February 2010 12:59 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] kernel 2.6.32
> 
>  List, can anyone guide me on whether kernel 2.6.32 can be compiled with
> tproxy? Which version of iptables should I use?
> 
> debian lenny 64 bits
> kernel 2.6.32
> iptables  ??
> 
> Thanks




[squid-users] reverse proxying for sharepoint ??

2010-02-09 Thread Jakob Curdes
Can anybody comment on protecting a SharePoint server with squid as 
reverse proxy?
I worked my way through some reports, also on the squid list, and it 
seems that there are two possible problems:

- SharePoint seems to rely on HTTP/1.1
- SharePoint uses absolute URLs, which would have to be rewritten (but 
newer versions seem to have options to remedy that)
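
(A minimal accelerator sketch for that layout; hostnames, the backend
address and the certificate path are hypothetical:

  https_port 443 accel cert=/etc/squid/sp.pem defaultsite=sp.example.com
  cache_peer 10.0.0.5 parent 80 0 no-query originserver \
      front-end-https=on name=sharepoint
  acl sp_site dstdomain sp.example.com
  cache_peer_access sharepoint allow sp_site
  http_access allow sp_site

front-end-https=on adds the Front-End-Https header some Microsoft
servers expect when TLS is offloaded at the proxy.)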




Best regards,
Jakob Curdes



Re: [squid-users] Is there ICAP evidence log in any log files?

2010-02-09 Thread Luis Daniel Lucio Quiroz
On Wednesday 30 July 2008 22:24:35, Henrik Nordstrom wrote:
> On Thu, 2008-07-31 at 11:26 +0900, S.KOBAYASHI wrote:
> > Hello developer,
> > 
> > I'm looking for evidence of access to the ICAP server. Is there a log
> > of it in any log files, such as access.log or cache.log?
> 
> The ICAP server should have logs of it's own.
> 
> There is no information in the Squid logs on which iCAP servers were
> used for the request/response.
> 
> Regards
> Henrik
I wonder if using squidclient mngr:xxXX  we could see some info about icap,
where?

TIA


Re: [squid-users] Squid Clustering

2010-02-09 Thread John Villa

Hello,
Thank you for your response. It seems to be working now, for I RTFM (I  
usually do, I am sorry). Playing around with the ICP setting and the  
cache_peer setting I was able to make some progress and it appears to  
be working. Basically I have two nodes and I am trying to make it so  
that if I hit one twice (to store the cache) and then I hit the other  
node, the ICP lookups will work to deliver content. Here is what I have  
minus all the acl rules and what not. Let me know if you have any  
recommendations.

node 1: cache_peer staging-ss2 sibling 4228 4220
node 2: cache_peer staging-ss1 sibling 4228 4220
Again it appears to be working but if there is anything you can  
recommend or if in fact I am reading my test wrong please feel free to  
let me know.

Thanks in advance,
-John

On Feb 9, 2010, at 1:25 PM, Luis Daniel Lucio Quiroz wrote:


On Tuesday 9 February 2010 10:36:04, John Villa wrote:

Hello,
Can someone point me to some good documentation on how to set up a
squid cluster? I have been looking but have not found anything  
useful.

I appreciate any help.
Thank You,
-John
It is better if you tell us your requirements; clustering differs
depending on whether you use, for example, external ACLs, digest or
simple auth.




RE: [squid-users] DNUMTHREADS

2010-02-09 Thread J. Webster

Would this dramatically improve performance, or is it best left at default?


> Date: Tue, 9 Feb 2010 17:01:46 +1300
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] DNUMTHREADS
>
> J. Webster wrote:
>> Is it recommended to recompile squid and increase the DNUMTHREADS value?
>> I read that 30 could easily be used on a 500MHz machine and my machine is 
>> more than 2GHz, so would it give an improvement to squid performance?
>> I have been reading through this document here, which recommends various 
>> changes including using the reiserfs filesystem.
>> My machine is CentOS.
>>
>> http://blog.last.fm/2007/08/30/squid-optimization-guide
>>
>
> Not sure how he got that info; Squid has provided the ./configure
> --enable-async-io[=N_THREADS] option as far back as I can see.
>
> It only affects AUFS disk storage.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
> Current Beta Squid 3.1.0.16
  

[squid-users] Re: Re: Re: Re:Problem with SQUID_KERB_LDAP

2010-02-09 Thread Markus Moeller
squid_kerb_auth is for transparent authentication (no popup). Maybe you 
want to use another authentication module. squid_kerb_ldap will still work 
independently of squid_kerb_auth.
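
(For reference, a typical negotiate setup looks roughly like this; the
helper path and service principal are site-specific examples:

  auth_param negotiate program /usr/local/libexec/squid_kerb_auth \
      -s HTTP/proxy.example.com@EXAMPLE.COM
  auth_param negotiate children 10
  auth_param negotiate keep_alive on
  acl kerb_auth proxy_auth REQUIRED
  http_access allow kerb_auth

Browsers that cannot do Kerberos, like the IE6 mentioned below, will
fail unless a fallback basic or NTLM scheme is also offered.)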


Markus

"Fruehauf"  wrote in message 
news:4b713c27.8020...@googlemail.com...

So far, I have tested it with IE8 and Firefox 3.5.

When I test it with IE6, no popup occurs and I immediately get the error 
message.

Therefore I have this problem with all browsers.

The best would be that a popup always appears when I start a new 
browser session, and the user has to authenticate against the domain,
because I have not only domain clients; I work with workgroup clients too. 
I thought I had picked the right howto for that. What do I have to change
to always get the authentication screen?

I know these are 2 different problems; I hope that's ok.

Rainer


Am 09.02.2010 00:14, schrieb Markus Moeller:

Ralf,

The lines:

2010/02/08 20:59:08| squid_kerb_auth: received type 1 NTLM token

means that your browser is not using Kerberos authentication, which is why 
you get the popup.


Markus

"Ralf Fruehauf"  wrote in message 
news:4b706e39.9050...@googlemail.com...

Am 05.02.2010 19:03, schrieb Markus Moeller:
If  you have only a directory not an executable then you don't really 
have squid_kerb_ldap installed.


The script is a standalone script somewhere on your filesystem 
accessible by the squid process.


Markus

"Ralf Fruehauf"  wrote in message 
news:ff35590e1002050714q1bd0432bje929e96818924...@mail.gmail.com...

For my understanding:

Do I take this script and put it into my /etc/init.d/squid start script?

With strace, I thought I needed an executable file/program, but I have
no squid_kerb_ldap file, only a directory!?
Sorry for this simple question.

Rainer




Ok, that was my mistake; I had a problem during the make command with 
squid_kerb_ldap. Now I have a squid_kerb_ldap file and squid starts 
successfully, which is some progress at least.


Now I have a problem with the authentication. The registration box 
appears on the screen, but it doesn't accept my user/password entry. 
The user is located in the SQUID_USERS group in my Active Directory. 
After 4 or 5 attempts, I get an error - Cache Access Denied -
"Sorry, you are not currently allowed to request http://www.google.de/ 
from this cache until you have authenticated yourself."

__

access.log:

1265659148.810  2 192.168.100.130 TCP_DENIED/407 2462 GET 
http://www.google.de/ - NONE/- text/html
1265659148.856  1 192.168.100.130 TCP_DENIED/407 2565 GET 
http://www.google.de/ - NONE/- text/html
1265659158.206  1 192.168.100.130 TCP_DENIED/407 2565 GET 
http://www.google.de/ - NONE/- text/html


__

cache.log:

2010/02/08 20:38:35| Starting Squid Cache version 3.0.STABLE18 for 
i686-pc-linux-gnu...

2010/02/08 20:38:35| Process ID 2292
2010/02/08 20:38:35| With 1024 file descriptors available
2010/02/08 20:38:35| DNS Socket created at 0.0.0.0, port 46847, FD 7
2010/02/08 20:38:35| Adding domain homebase.local from /etc/resolv.conf
2010/02/08 20:38:35| Adding domain homebase.local from /etc/resolv.conf
2010/02/08 20:38:35| Adding nameserver 192.168.100.1 from 
/etc/resolv.conf
2010/02/08 20:38:35| Adding nameserver 192.168.100.254 from 
/etc/resolv.conf
2010/02/08 20:38:35| helperOpenServers: Starting 10/10 'squid_kerb_auth' 
processes
2010/02/08 20:38:36| helperOpenServers: Starting 5/5 'squid_kerb_ldap' 
processes

2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID

Re: [squid-users] None Existing File; Repeating Request Timeout

2010-02-09 Thread Joe P.H. Chiang
Ok, thank you very much for taking your time and answering my questions.

On Tue, Feb 9, 2010 at 6:40 PM, Amos Jeffries  wrote:
> Joe P.H. Chiang wrote:
>>
>> What I meant is:
>>
>> This way, when a DDoS attack occurs and the attacker is requesting
>> something that doesn't exist on my squid servers and backend servers,
>>
>> my backend server doesn't have to respond to it; squid will
>> block the request and apply a timeout interval of 30 seconds.
>>
>> so it goes like this
>> Squid is accepting the request for no-existing file
>> --> Squid doesn't have such file
>> -> Squid Pass the request to backend servers
>> ---> backend server says I don't have it either
>> -> Squid say okay next time such request will be timeout for 30
>> seconds
>>
>> Possible? are there such config?
>>
>
> Not in the way you seems to be asking for.
>
> You can send an Expires: header with the 404 error reply message.
> That should make Squid do the not asking again part. During that period
> Squid will send back its own stored copy of the 404 to the visitor, without
> contacting the web server.
>  Any well-behaved proxies between you and the attacker will also be
> protected and help lift the load on your Squid. Sadly there are a lot of
> admin out there who set ignore-expires for things.
>
> Just be aware that any real attacker will disobey the HTTP header
> instructions anyway, and some badly configured proxies will as well.
>
>
>>
>>
>> On Tue, Feb 9, 2010 at 12:26 PM, Amos Jeffries 
>> wrote:
>>>
>>> Joe P.H. Chiang wrote:

 Hi All Im New to squid..

 I've scanned through squid 2.6 & 3.0 Manual and Definitive guide, but
 i still can't find information about this question..

 Is it possible to have a request_timeout when the requested file doesn't
 exist on the squid cache and peer server?
 E.g. if a client requests www.example.com/dontexist.html and then
 receives an HTTP 404,
 then the client will have to wait for a 30-second request_timeout before
 being able to request
 www.example.com/dontexist.html again.
 Could this be done? Is there such a setting/configuration?
>>>
>>> This is a "wetware" problem. You need to teach all your users to press
>>> the
>>> refresh button at exactly 30 seconds after any failure.
>>>
>>>
>>> Seriously though, not the way you describe. You can't prevent people
>>> being
>>> "able" to make requests. You can only change the result if they do one
>>> you
>>> don't like.
>>>
>>> What exactly are you trying to accomplish?
>
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
>  Current Beta Squid 3.1.0.16
>



-- 
Thanks,
Joe


Re: [squid-users] Squid Clustering

2010-02-09 Thread Luis Daniel Lucio Quiroz
On Tuesday 9 February 2010 10:36:04, John Villa wrote:
> Hello,
> Can someone point me to some good documentation on how to set up a
> squid cluster? I have been looking but have not found anything useful.
> I appreciate any help.
> Thank You,
> -John
It is better if you tell us your requirements; clustering differs
depending on whether you use, for example, external ACLs, digest or
simple auth.


Re: [squid-users] Authentication Browser Dialog

2010-02-09 Thread Christian Weiligmann
On Tuesday, 09.02.2010 at 17:10 +1300, Amos Jeffries wrote:
> Christian Weiligmann wrote:
> > Hello,
> > 
> > i use the squidproxy over 10 years, an i am very happy to have this
> > programm for internet access, the user may look different about
> > this.
> > But, I have a demand concerning the authentication dialogs
> > 
> > I want to authenticate the internet access for my users by mysql
> > backend, but not with a browser dialog, else with a webpage. 
> > 
> > Similar to the question "Re: [squid-users] Proxy subscription on-line":
> > where is the error page I can modify?
> > 
> > Thanks a lot for viewing, and please give me an answer...
> > 
> 
> So ... what error page? in response to what action? in which squid 
> version? under what circumstances? with what information?
> 
> Amos

Hello,

I'm using Squid 2.6.18-1ubuntu3, non-transparent, on Ubuntu LTS
8.04.4. I want to use a website for my authentication process; I don't
want to use the authentication dialog in the browser. Is this possible?

My "Similar to the question" remark was because I understood it as the
same question as mine, sorry.

Thank you for your answer!









[squid-users] Squid Clustering

2010-02-09 Thread John Villa

Hello,
Can someone point me to some good documentation on how to set up a  
squid cluster? I have been looking but have not found anything useful.  
I appreciate any help.

Thank You,
-John


Re: [squid-users] Client browser perpetually 'Loading'

2010-02-09 Thread Matus UHLAR - fantomas
> On 4 February 2010 14:10, Amos Jeffries  wrote:
> > Not really. I only know how to diagnose from the box which is having the
> > issue. Which means the ISP proxy in your case.

On 04.02.10 14:35, Dave Coventry wrote:
> These guys are currently suffering a lot of complaints about poor
> service levels and lack of support, so I'm not sure that they will be
> that helpful. http://mybroadband.co.za/news/Broadband/11359.html

This may also indicate that _this_ is the poor service level...

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I just got lost in thought. It was unfamiliar territory. 


[squid-users] kernel 2.6.32

2010-02-09 Thread Ariel
List, can anyone guide me on whether kernel 2.6.32 can be compiled with TPROXY?
Which version of iptables do I need to use?

Debian Lenny 64-bit
kernel 2.6.32
iptables ??

Thanks
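
For reference: TPROXY v4 has been in mainline kernels since 2.6.28, so
2.6.32 needs no patching; you need an iptables 1.4.x build with the TPROXY
target and socket match compiled in. A hedged sketch of the usual rules,
assuming Squid 3.1 listening with "http_port 3129 tproxy":

# divert packets that already belong to a local proxied socket
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# redirect new port-80 flows to Squid without rewriting the destination
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# deliver marked packets locally
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100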


[squid-users] using parent cache and parent dns

2010-02-09 Thread im notreal

Basically here is my setup:

I have a cache located on an oil rig
down south and one in the home office. There is a DNS server in the home
office that I don't have access to from the rig, and there is one on the
rig. There are webservers in both locations (management and control
stuff) that I need access to.  What I'd like to do is be able to
forward requests to the parent without having to resolve them locally.

If I do a dns_nameserver 127.0.0.1 I won't be able to split the requests
between both sites (i.e. !somewebserver.local won't be resolved locally
and will fail)

ideas ?

Tim   
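
For what it's worth, a hedged sketch of one way to split this without
resolving remote names locally (peer name and domain hypothetical):

# send everything to the home-office parent by default, unresolved
cache_peer parent.homeoffice.example parent 3128 0 no-query default
never_direct allow all
# except rig-local sites, which go direct and use the rig's DNS
acl rig_sites dstdomain .rig.local
always_direct allow rig_sites

With never_direct in force Squid hands the request to the parent without
having to resolve the destination itself.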
_
Hotmail: Trusted email with powerful SPAM protection.
http://clk.atdmt.com/GBL/go/201469227/direct/01/

[squid-users] Delay-pools (class2) limit more than specified

2010-02-09 Thread Jose Lopes
Hi,

I'm  using delay pools to limit bandwidth.
My version of squid is "Squid Cache: Version 3.1.0.15".

My configs are:

delay_pools 2
delay_access 1 allow client_hosts1
delay_access 1 deny all
delay_class 1 2
delay_parameters 1 131072/131072 65536/65536

delay_access 2 allow client_hosts2
delay_access 2 deny all
delay_class 2 1
delay_parameters 2 131072/131072

Delay pool 2 works well: one host in client_hosts2 downloads at ~130KB/s.
In delay pool 1, with one host downloading, it downloads at ~33KB/s.
In delay pool 1, with all hosts (client_hosts1) downloading, the global
maximum download rate is ~66KB/s.

It seems like the class-2 delay pool limits at half of the specified bandwidth.

How do I sort out this problem?

Thanks in advance.
Regards
Jose Lopes
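
As a unit check on the configuration above (delay_parameters takes
bytes/second to restore and bytes of bucket size):

# pool 1 (delay_class 1 2): aggregate 131072 B/s ~ 128 KB/s, per-host 65536 B/s ~ 64 KB/s
# pool 2 (delay_class 2 1): aggregate 131072 B/s ~ 128 KB/s

So the configured limits are roughly twice the observed ~33 KB/s per host
and ~66 KB/s aggregate, which matches the reporter's "half the specified
bandwidth" suspicion.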



Re: [squid-users] Re: Re: Re:Problem with SQUID_KERB_LDAP

2010-02-09 Thread Amos Jeffries

Fruehauf wrote:

> So far, I have tested it with IE8 and Firefox 3.5.
> 
> When I test it with IE6, no popup occurs and I immediately get the error
> message.
> 
> Therefore I have this problem with all browsers.


What? No. Therefore ... you have a problem with IE6.

Which makes sense, since IE6 is not capable of Kerberos authentication 
and displays an error page instead.
The other two you tested are capable of Kerberos and will use it as 
requested by that same page.




> The best way would be that a popup always appears when I start a new
> browser session and the user has to authenticate against the domain,
> because I have not only domain clients; I work with workgroup clients
> too. I thought I had picked the right howto for that. What do I have to
> change to always get the authentication prompt?


You need to change the web browser being used. It was a bug in IE6 (lack 
of Kerberos support) which has been fixed by Microsoft in their IE7 and 
IE8 releases.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] squid 3.1 and error_directory

2010-02-09 Thread Amos Jeffries

Eugene M. Zheganin wrote:

> Hi.
> 
> Recently I decided to try the 3.1 branch on my test proxy. Everything
> seems to work fine, but I'm stuck on a problem with the error messages.
> Whatever I do with the error_directory/error_default_language settings
> (leaving them commented out, or setting them to something), in my browser I
> see corrupted symbols. These are neither Latin nor Cyrillic. They look
> like UTF-8 treated as Cp1251, for example. Changing the encoding of the
> page in the browser doesn't help.
> 
> And the charset in the <meta> tag of such a page is always "us-ascii" (why?).


Um, thank you. I've seen something like this before. Will get on and 
check the fix.


The symbols you are seeing are probably UTF-8 treated as us-ascii. I've
seen it as an artifact of 'tidy html', which is used by default by the
translation toolkit we build the error pages with and leaves the
generated files slightly mangled. I just have to check that this is true
and update the sources.




> How can I make the pages display at least in English? I thought this
> could be achieved by setting error_default_language to en, but I was
> wrong again.
> 
> I thought I was familiar with the squid error directory and creating my own
> templates for the 2.x/3.0 branches, but I'm definitely not with 3.1.


They are almost the same. The base templates are in templates/ERR_* for
copying, and you add your own ones in templates/* too.

That is the big difference: your local templates now always go in
templates/* or in a custom directory (with error_default_language pointing
at it).
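
A minimal sketch of the 3.1 directives in question (custom path hypothetical):

# pick a language for the built-in templates
error_default_language en
# or point Squid at a directory of custom templates instead
# error_directory /usr/local/squid/share/errors/custom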


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] Re: Re: Re:Problem with SQUID_KERB_LDAP

2010-02-09 Thread Fruehauf

So far, I have tested it with IE8 and Firefox 3.5.

When I test it with IE6, no popup occurs and I immediately get the error
message.

Therefore I have this problem with all browsers.

The best way would be that a popup always appears when I start a new
browser session and the user has to authenticate against the domain,
because I have not only domain clients; I work with workgroup clients
too. I thought I had picked the right howto for that. What do I have to
change to always get the authentication prompt?

I know these are 2 different problems; I hope that's OK.

Rainer


On 09.02.2010 00:14, Markus Moeller wrote:

Ralf,

The lines:

2010/02/08 20:59:08| squid_kerb_auth: received type 1 NTLM token

mean that your browser is not using Kerberos authentication, which is why
you get the popup.


Markus

"Ralf Fruehauf"  wrote in message 
news:4b706e39.9050...@googlemail.com...

On 05.02.2010 19:03, Markus Moeller wrote:
If you have only a directory, not an executable, then you don't
really have squid_kerb_ldap installed.

The script is a standalone program somewhere on your filesystem,
accessible by the squid process.


Markus

"Ralf Fruehauf"  wrote in message 
news:ff35590e1002050714q1bd0432bje929e96818924...@mail.gmail.com...

For my understanding:

Do I take this script and put it into my /etc/init.d/squid start script?

With strace I thought I needed an executable file/program, but I have
no squid_kerb_ldap file, only a directory!?
Sorry for this simple question.

Rainer




OK, that was my mistake: I had a problem during the make command with
squid_kerb_ldap. Now I have a squid_kerb_ldap file and squid successfully
starts, which is some progress at least.

Now I have a problem with the authentication. The login box appears on
the screen, but it doesn't accept my user/password entry. The user is
located in the SQUID_USERS group in my Active Directory. After 4 or 5
attempts, I get an error - Cache Access Denied -
"Sorry, you are not currently allowed to request
http://www.google.de/ from this cache until you have authenticated
yourself."
__________

access.log:

1265659148.810  2 192.168.100.130 TCP_DENIED/407 2462 GET 
http://www.google.de/ - NONE/- text/html
1265659148.856  1 192.168.100.130 TCP_DENIED/407 2565 GET 
http://www.google.de/ - NONE/- text/html
1265659158.206  1 192.168.100.130 TCP_DENIED/407 2565 GET 
http://www.google.de/ - NONE/- text/html


__________

cache.log:

2010/02/08 20:38:35| Starting Squid Cache version 3.0.STABLE18 for 
i686-pc-linux-gnu...

2010/02/08 20:38:35| Process ID 2292
2010/02/08 20:38:35| With 1024 file descriptors available
2010/02/08 20:38:35| DNS Socket created at 0.0.0.0, port 46847, FD 7
2010/02/08 20:38:35| Adding domain homebase.local from /etc/resolv.conf
2010/02/08 20:38:35| Adding domain homebase.local from /etc/resolv.conf
2010/02/08 20:38:35| Adding nameserver 192.168.100.1 from 
/etc/resolv.conf
2010/02/08 20:38:35| Adding nameserver 192.168.100.254 from 
/etc/resolv.conf
2010/02/08 20:38:35| helperOpenServers: Starting 10/10 
'squid_kerb_auth' processes
2010/02/08 20:38:36| helperOpenServers: Starting 5/5 
'squid_kerb_ldap' processes

2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| squid_kerb_ldap: Starting version 1.1.2
2010/02/08 20:38:36| squid_kerb_ldap: Group list SQUID_USERS
2010/02/08 20:38:36| squid_kerb_ldap: Group SQUID_USERS  Domain NULL
2010/02/08 20:38:36| squid_kerb_ldap: Netbios list NULL
2010/02/08 20:38:36| squid_kerb_ldap: No netbios names defined.
2010/02/08 20:38:36| Unlinkd pipe opened on FD 27
2010/02/08 20:38:36| Swap maxSize 102400 + 8192 KB, estimated 8507

Re: [squid-users] None Existing File; Repeating Request Timeout

2010-02-09 Thread Amos Jeffries

Joe P.H. Chiang wrote:

> What I meant is:
> 
> This way, when a DDoS attack occurs and the attacker is requesting
> something that doesn't exist on my squid servers and backend servers,
> 
> my server in the backend doesn't have to respond to it; squid will
> block the request and give a timeout interval of 30 seconds.
> 
> So it goes like this:
> Squid accepts the request for a non-existing file
> --> Squid doesn't have such a file
> -> Squid passes the request to the backend servers
> ---> backend server says "I don't have it either"
> -> Squid says okay, next time such a request will be timed out for 30 seconds
> 
> Possible? Is there such a config?



Not in the way you seem to be asking for.

You can send an Expires: header with the 404 error reply message.
That should make Squid do the not asking again part. During that period 
Squid will send back its own stored copy of the 404 to the visitor, 
without contacting the web server.
 Any well-behaved proxies between you and the attacker will also be
protected and help lift the load on your Squid. Sadly there are a lot of
admins out there who set ignore-expires on things.


Just be aware that any real attacker will disobey the HTTP header 
instructions anyway, and some badly configured proxies will as well.
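
For illustration, a 404 reply carrying explicit expiry could look like this
on the wire (dates hypothetical); with Expires: set 30 seconds ahead, Squid
may serve its stored copy of the error for that long without contacting the
origin:

HTTP/1.1 404 Not Found
Date: Tue, 09 Feb 2010 12:00:00 GMT
Expires: Tue, 09 Feb 2010 12:00:30 GMT
Content-Type: text/html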






> On Tue, Feb 9, 2010 at 12:26 PM, Amos Jeffries  wrote:
>> Joe P.H. Chiang wrote:
>>> Hi All, I'm new to squid.
>>>
>>> I've scanned through the squid 2.6 & 3.0 manual and the Definitive Guide, but
>>> I still can't find information about this question.
>>>
>>> Is it possible to have a request_timeout when the requested file doesn't
>>> exist on the squid cache and peer server?
>>> E.g. if a client requests www.example.com/dontexist.html and then
>>> receives an HTTP 404,
>>> then the client will have to wait for a request_timeout of 30 seconds to
>>> be able to request
>>> www.example.com/dontexist.html again.
>>> Could this be done? Is there such a setting/configuration?
>>
>> This is a "wetware" problem. You need to teach all your users to press the
>> refresh button at exactly 30 seconds after any failure.
>>
>> Seriously though, not the way you describe. You can't prevent people being
>> "able" to make requests. You can only change the result if they do one you
>> don't like.
>>
>> What exactly are you trying to accomplish?



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE23
  Current Beta Squid 3.1.0.16


Re: [squid-users] problem

2010-02-09 Thread Jeff Peng
On Tue, Feb 9, 2010 at 4:49 PM, David C. Heitmann  wrote:
> So I have all I need, but one thing is left:
> when I want to download something where I have to enter a keyword
> (e.g. rapidshare or sharingmatrix),
> the keyword is always wrong ^^
>

Then I have a question: why do you need the request_header_access ACL there?

Jeff.


Re: [squid-users] dns?

2010-02-09 Thread Matus UHLAR - fantomas
On 01.02.10 14:44, David C. Heitmann wrote:
> when I ping www.google.de while connected to the squid proxy (cache
> version 2.7 STABLE5),
> then I get "can't find hostname"
>
> why?
> do I have to configure a DNS service on my squid server?
> or do I have to route something?

> surfing over the proxy works ;)

Apparently squid offers you the HTTP service, but not the DNS service. Squid
is only an HTTP proxy; you are missing a DNS server.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
On the other hand, you have different fingers. 


[squid-users] Usernames in capitals converted lowercase and denied

2010-02-09 Thread Jenny Lee

Can someone assist with this? Apparently users were not typing their usernames
wrong.
 
Squid is apparently not authenticating capital letter usernames.
 
3.1.0.15
 
2010/02/09 13:08:12.781| basic/auth_basic.cc(412) decodeCleartext: 
'JEN133:xs39ds'
2010/02/09 13:08:12.781| basic/auth_basic.cc(364) 
authBasicAuthUserFindUsername: Looking for user 'jen133'
2010/02/09 13:08:12.781| basic/auth_basic.cc(510) makeCachedFrom: Creating new 
user 'jen133'
 
2010/02/09 13:08:52.745| helperSubmit: jen133 xs39ds

2010/02/09 13:08:52.746| helperHandleRead: 17 bytes from basicauthenticator #1
2010/02/09 13:08:52.746| helperHandleRead: 'ERR No such user
'

2010/02/09 13:08:52.746| basic/auth_basic.cc(246) authenticateBasicHandleReply: 
{ERR No such user}
 
ncsa_auth is accepting username/pass in capitals fine from command line and 
returning OK.
 
squid.conf: acl USERS proxy_auth REQUIRED
 
Everything else works and has been working. And the same setup was working
before with older squids.
 
Also, please advise how to make squid/ncsa_auth accept user/pass
case-insensitively. I can't put "-i REQUIRED" on the proxy_auth line.
 
Thanks.
 
J 
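
For reference, a hedged pointer (worth verifying against squid.conf.documented
for 3.1.0.15): basic auth has a case-sensitivity switch, and the lowercasing
in the trace looks like its default "off" behaviour:

# preserve username case instead of lowercasing before lookup
auth_param basic casesensitive on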
_
Hotmail: Trusted email with Microsoft’s powerful SPAM protection.
http://clk.atdmt.com/GBL/go/201469226/direct/01/

Re: [squid-users] WARNING: redirector .....

2010-02-09 Thread Matus UHLAR - fantomas
> Landy Landy wrote:
>> 2010/01/31 13:35:18| clientParseRequestMethod: Unsupported method attempted 
>> by 172.16.100.56: This is not a bug. see squid.conf extension_methods
>> 2010/01/31 13:35:18| clientParseRequestMethod: Unsupported method in request 
>> '<_g_g?^=__  
>> k__...@_m_l_q_m___da_c7yx___3h___"__~_;P__P_&'_
>> \!hn\_O_X'
>> 2010/01/31 13:35:18| clientProcessRequest: Invalid Request

On 01.02.10 13:15, Amos Jeffries wrote:
> The machine at 172.16.100.56 is broken and pushing garbage into Squid  
> instead of HTTP.
> This is not fatal for you or Squid, but may cause the person who uses  
> that machine to have a very bad web experience.

Maybe the wrong protocol is being intercepted, e.g. https.
Note that squid is an HTTP proxy; the only protocol it supports from clients is
HTTP, and with some tuning, HTTPS.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Save the whales. Collect the whole set.


Re: [squid-users] squid is closing sessions after 1 hour

2010-02-09 Thread Matus UHLAR - fantomas
On 31.01.10 23:19, Mr. Issa(*) wrote:
> We have an Intel SSR212MMC2 system with 32 GB of RAM and 12 SATA HDs
> (each 2 are made into raid0)

raid0 as stripes or raid1 as mirrors?

> port 80 traffic is routed from a mikrotik to the squid 2.7 stable7 box
> (running on debian lenny)
> 
> cache_dir aufs /cache1 120 16 256
> cache_dir aufs /cache2 120 16 256
> cache_dir aufs /cache3 120 16 256
> cache_dir aufs /cache4 120 16 256
> 
> fdisk -l
> /dev/sdc1 1.8T   72G  1.7T   5% /cache1
> /dev/sdd1 1.8T   72G  1.7T   5% /cache2
> /dev/sde1 1.8T   72G  1.7T   5% /cache3
> /dev/sdf1 1.8T   72G  1.7T   5% /cache4
> 
> we noticed that when the cache reaches about 280GB (70GB on each
> cache_dir), squid closes all sessions every hour for 30 seconds,
> then it goes back to working normally...

is there anything in cache_log ?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Fighting for peace is like fucking for virginity...


[squid-users] problem

2010-02-09 Thread David C. Heitmann

Hello,

I have configured the reply_header_access and request_header_access rules:

request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access All deny all

reply_header_access Allow allow all
reply_header_access Authorization allow all
reply_header_access WWW-Authenticate allow all
reply_header_access Proxy-Authorization allow all
reply_header_access Proxy-Authenticate allow all
reply_header_access Cache-Control allow all
reply_header_access Content-Encoding allow all
reply_header_access Content-Length allow all
reply_header_access Content-Type allow all
reply_header_access Date allow all
reply_header_access Expires allow all
reply_header_access Host allow all
reply_header_access If-Modified-Since allow all
reply_header_access Last-Modified allow all
reply_header_access Location allow all
reply_header_access Pragma allow all
reply_header_access Accept allow all
reply_header_access Accept-Charset allow all
reply_header_access Accept-Encoding allow all
reply_header_access Accept-Language allow all
reply_header_access Content-Language allow all
reply_header_access Mime-Version allow all
reply_header_access Retry-After allow all
reply_header_access Title allow all
reply_header_access Connection allow all
reply_header_access Proxy-Connection allow all
reply_header_access All deny all


So I have all I need, but one thing is left:
when I want to download something where I have to enter a keyword
(e.g. rapidshare or sharingmatrix),
the keyword is always wrong ^^

When I delete the "All deny all" rule, I can access it!
What do I have to put under allow to fix it?

Thanks in advance
greets dave
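
For reference, a hedged guess at what the blanket "All deny all" breaks (not
confirmed in the thread): keyword/captcha flows on such sites usually depend
on cookies and referers, which the lists above strip. Allowing them before
the final deny would look like:

request_header_access Cookie allow all
request_header_access Referer allow all
reply_header_access Set-Cookie allow all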