Re: [squid-users] Re: Squid3 issues

2010-03-19 Thread Amos Jeffries

Linda Walsh wrote:

Gmail wrote:

I have used many software packages and compiled things for years, and never
had an experience such as this one: it's a package full of
headaches, problem after problem. And to be honest, the feedback I
get always blames other things. Why can't you people just admit that
Squid doesn't work at all and that you are not providing any help
whatsoever, as if you expect everyone to be an expert?


 I've only seen one post by you on this list -- and that was about



Gmail (Adam?),
  I think most of the problem communicating with us is that your 
replies outlining the problems are going to individual people, not to 
the list itself. Those of us here who might be able to help with the 
secondary problems are not even hearing about them.


What I've seen is:
  you post a problem description, somebody posts a solution that 
_should_ work under some circumstances and could act as a pointer for 
further fixes or research if you understood them right.
 Then no further response from you. Which in these parts indicates you 
are happy with the solution and have moved on to other problems at your 
workplace.

 The rest of us make that assumption and move on to other people's problems.


To fix this breakdown in communication:

 If you are using the gmail interface there are advanced reply 
options that need to be set up. If you can do "Reply to List" or 
"Reply to All", the list should start getting the mails (check that the 
list address 'squid-users' is in the recipient set before sending 
anyway, just to be sure).


 Other mailers tend to have those reply-to-all features somewhere as 
well, and more easily available.




increasing your Linux file descriptors at process start time in Linux
-- not something in the squid software, but something you do in Linux
before you call squid.  It ***SHOULD*** be in your squid's
/etc/init.d/squid startup script -- you should see a command like
"ulimit -n number".

I have ulimit -n 4096 in my squid's rc script.

ulimit is a builtin in the bash shell.  I don't know where else it is
documented, but if you use the Linux-standard shell, bash, it should
just work.  -n sets the number of open file descriptors.
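To make that advice concrete, here is a minimal sketch of the kind of fragment an init script might contain. The path and the value 4096 are illustrative, not taken from any particular package:

```shell
# Hypothetical fragment of /etc/init.d/squid: raise the soft FD limit
# before launching squid, so the daemon inherits the higher limit.
ulimit -n                          # print the current soft limit
if ulimit -n 4096 2>/dev/null; then
    echo "soft FD limit raised to 4096"
else
    echo "hard limit too low; raise it via limits.conf or fs.file-max first"
fi
```

Since ulimit applies per shell, the init script must run it before exec'ing the squid binary; running it in a separate shell afterwards has no effect on the daemon.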



FWIW, Myself or Luigi of Debian are the contact people for Squid 
problems on Ubuntu.


In Ubuntu and other Debian-derived OSes it seems to be limited by both 
the ulimit and the setting inside /proc/sys/fs/file-max.


The squid package from 2.7+ alters /proc/sys/fs/file-max as needed, 
and provides a max_fd configuration option for run-time settings. In 
addition, setting SQUID_MAXFD=1024 in /etc/default/squid increases the 
global limit set into /proc.


The squid3 package does not alter /proc, but changing ulimit in 
/etc/init.d/squid3 can allow up to the /proc amount of FD to be used.
 The early 3.0 packages were built with an absolute max of 1024 FD; I 
think the newer ones are built with a higher limit.
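For reference, the two knobs being described could look like the following sketch. The values are illustrative; SQUID_MAXFD is the Debian/Ubuntu packaging variable mentioned above:

```
# /etc/default/squid  (Debian/Ubuntu squid 2.7+ packaging)
SQUID_MAXFD=4096

# Kernel-wide ceiling that the per-process limit cannot exceed:
#   cat /proc/sys/fs/file-max
#   sysctl -w fs.file-max=65536    # as root, if the ceiling is too low
```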





I uninstalled the version that was packaged with Ubuntu Hardy. I am
trying to compile it so I won't have the same problem with the file
descriptors. I followed exactly the suggestions in the configure --help
output, yet I am getting an error like "Compile cannot create executable",
or something to that effect.


Maybe you should try a distribution where it is 1) known to work, or
2) already has a pre-compiled binary.


Linda,
 Ubuntu Hardy is one such. But the old packages have low FD built-in.

Gmail,
  Regarding your earlier complaints which Nyamul Hassan kindly 
forwarded back to the list for the rest of us to see...


 Yes, we know squid-3.0 (particularly the early releases) was very 
problematic. These problems have mostly been fixed over the last few 
years as people reported them. You seem to have been stuck with an old 
OS distribution release, and thus an old, non-changeable squid version.


 If you are not tied to the LTS support, I would suggest trying an 
upgrade to Ubuntu Jaunty or Karmic. The Hardy squid3 package has a lot 
of known and fixed issues.


 Yes, I read your reply to Nyamul indicating you were trying to build 
your own. Squid 3.x is mostly developed on Debian and Ubuntu. Your build 
problems are a mystery.  Self-builds usually fail due to wanting 
features built but not having the development libraries needed. If you 
want to continue the self-build route we can help, but will need to know 
exactly what the error messages are that you face.


I'm awaiting your response to Nyamul Hassan's last question before 
commenting on the config details for your setup.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Squid proxy Setup in fail-over mode

2010-03-19 Thread Amos Jeffries

GIGO . wrote:

How to set up a squid proxy to run in fail-over mode? Any guide?
 


There is no such mode in Squid.

As the other respondents have said so far, to have fail-over from your 
users' perspective when squid dies, you need multiple squids and some load 
balancer setup (WPAD counts as a load balancer).


To have squid perform fail-over between multiple web servers where 
data is sourced, you need to do nothing in particular. This is how squid is 
designed to work.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Squid Accelerator mode: HTTP/1.0 and 'defaultsite' header

2010-03-19 Thread Amos Jeffries

Riccardo Castellani wrote:
Most clients these days will do so regardless of their version. 
defaultsite is completely optional; in your case, if you omit it, broken 
clients will get the squid "invalid request" error page instead of the 
tomcat front page.


If I insert 'defaultsite', I think that is so for HTTP/1.0 clients:

the host header (http://pages.example.com) is present in the request, but I 
think the HTTP packet contains a GET command with the complete URL (e.g. 
http://pages.example.com/mkLista.do?code=A), so they will be able to ask for 
the correct URL.

Why do you say ... instead of the tomcat front page?
The Tomcat front page appears only if you request http://pages.example.com.



HTTP standards require clients to send the Host: header.
If they do not, squid looks for a configured defaultsite= and uses that 
instead; if neither is present, the client gets an error page.


When defaultsite is set, squid will use it and pass the broken 
request on to tomcat, resulting in the tomcat response for whatever URL 
was requested.
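A minimal accelerator setup illustrating this behaviour might look like the following sketch (hostname, IP and ports are assumptions, using 2.7/3.0-era syntax):

```
# Requests arriving without a Host: header fall back to defaultsite.
http_port 80 accel defaultsite=pages.example.com vhost
# Forward everything to the Tomcat origin server on the same host.
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=tomcat
cache_peer_access tomcat allow all
```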


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Cancelled downloads

2010-03-19 Thread Amos Jeffries

CASALI COMPUTERS - Michele Brodoloni wrote:

Hello,
is it possible to stop squid from continuing to download a file when a user stops the 
download from his browser?
If a user initiates a 1GB web download and then hits “cancel”, squid 
doesn’t mind and continues to download until it finishes, and this is a 
waste of bandwidth.

Is there a solution for this behavior?



This is the default behaviour of Squid.

Check your configuration settings for:
 http://www.squid-cache.org/Doc/config/quick_abort_max/
 http://www.squid-cache.org/Doc/config/quick_abort_min/
 http://www.squid-cache.org/Doc/config/quick_abort_pct/
 http://www.squid-cache.org/Doc/config/range_offset_limit/
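As a sketch of how those four directives interact (the values shown are the documented defaults; tune to taste):

```
# When a client aborts, finish the fetch only if little remains:
quick_abort_min 16 KB    # finish if less than 16 KB is left to fetch
quick_abort_max 16 KB    # abort if more than 16 KB is left to fetch
quick_abort_pct 95       # ...but finish once 95% has already arrived
range_offset_limit 0     # never turn a range request into a full fetch
```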


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Fwd: [squid-users] Squid3 issues

2010-03-19 Thread Nyamul Hassan
Hi,

As a normal courtesy on regular mailing lists, it is more appropriate
to use your regular name, rather than just GMail.  The answers on
this list still come from humans, and it's always nice to know the
name of the person we're communicating with.

Also, in one of your emails, you said that you had a FD problem, which
can only happen if you have a working Squid, which is processing a lot
of requests.  Please confirm if that is correct.

And, if you're seeing this, then I believe you have already read
Amos's post.  I'm forwarding this to the list.  I'm more of a forward
proxy guy, so the more adept members of the list would be more
helpful in your scenario.

Regards
HASSAN




-- Forwarded message --
From: Gmail adbas...@googlemail.com
Date: Fri, Mar 19, 2010 at 3:29 AM
Subject: Re: [squid-users] Squid3 issues
To: Nyamul Hassan mnhas...@usa.net


I'd rather use it in a hosting-like setup, considering I have other
clients, not only the webservers.
So, if it's possible, which I believe it is, I'd like to use it as a hosting setup.
Thanks

Let me give you a quick insight of my network

All my machines run Ubuntu Hardy 8; my network is based on 192.168.1.0/24
1) DNS / DHCP   Examples (192.168.1.1)
2) Router (Squid) Proxy    (192.168.1.4)
3) Webserver  xxx.xxx.x.5
4) Webserver  xxx.xxx.x.6
5) Webserver  xxx.xxx.x.7
6) IRC Server xxx.xxx.x.110
7) Digichat 100% (java) / Flash Servers xxx.xxx.x.112
8) Windows XP clients range 192.168.1.3 - 192.168.1.2 - 192.168.1.8 -
192.168.1.111 - 192.168.1.113
Other machines are not connected yet
The above are just examples
Two network switches

Hope that helps
Thanks



- Original Message - From: Nyamul Hassan mnhas...@usa.net
To: Squid Users squid-users@squid-cache.org
Sent: Thursday, March 18, 2010 9:05 PM
Subject: Re: [squid-users] Squid3 issues


So, do you want to use the proxy in an ISP-like setup?  Or in a web-hosting-like setup?

Regards
HASSAN




On Fri, Mar 19, 2010 at 2:25 AM, Gmail adbas...@googlemail.com wrote:

 Ok, I'll try and clarify it (thanks btw).
 I have been running 3 websites on one single machine for a few years;
 then the load started to grow, and I decided to have a go at a proxy
 server.
 I was actually putting it off for a couple of years, simply because I am very
 restricted time-wise.
 I have, as I said, 3 different websites running on one single machine in
 vhost mode

 three websites with three different domain names.

 Let's say 1) example.com, example.net, example.org, all pointing eventually
 to the same IP address.
 As I said, it worked perfectly, but it started to slow down a bit as the load
 got too much for one machine to handle.
 On top of that I run other servers on different machines, such as chat
 servers (IRC, Flash, DigiChat), and various other applications.

 Now, I am using this machine as a proxy server (reverse proxy server) and a
 router at the same time using iptables, and I use another machine as a
 DNS/DHCP server, all configured and working fine, indeed no problems at all.

 Now, I really struggled to get the clients on my network access to
 the internet -- I mean just to browse the net. I did in the end, but of every
 single example I followed, not a single one worked for me. I don't know how
 many forums and articles I read.
 I have applied so many examples, with no luck.

 So basically no requests were passed to the backend server. All I wanted was
 to get those requests forwarded to the web-server, and if that worked then I
 would add three more machines as backend servers, and each machine would hold
 one website with its DB and so on.

 That was my plan anyway. And I found myself in an ever-decreasing circle,
 going around in circles, following some people's examples, and nothing worked. I
 tried to find information, for example, about how to set up a cache parent,
 sibling and so on -- not a single word about it, and I even read O'Reilly's
 articles.


 In those examples, for instance, they mention a parent in order to forward a
 request, without telling you how to set up a parent. And if you don't have a
 parent, does that mean you can't use a proxy server? And if I had a parent,
 where would it be? And how do you decide which one is the parent and which one
 is the child, etc.? NO indication, not a single word. They expect you to
 know all that, as if you had spent all your life working on their project; it
 never occurred to them that maybe some people won't know what a parent is or
 how to set it up and so on.


 I can go on like this for a whole night. I know you're trying to help, but to
 be perfectly honest I am put off by this whole thing. I don't think I want
 to use Squid at all; I've reached saturation point now.

 You see, I know even if I get the thing off the ground now, I am sure in a
 few weeks' time it will whinge at me, or even in a few days' time.

 Maybe one day, if I have the time, I can look into it in more detail and take
 the time to understand its concept first and the way it works; it seems to
 have its own logic.

 If not I will 

Re: [squid-users] error libcap2 --

2010-03-19 Thread Amos Jeffries

Ariel wrote:

Hello.. please, can someone help me with this error? I have been swearing at
it for more than a week, and I just realized I should ask about it.

On CentOS 5.4 i386, kernel 2.6.30, iptables 1.4.5:

it asks for libcap2 and libcap2-dev, but they are not there in CentOS 5.3, and I am
following this guide to install:
http://www.eu.squid-cache.org/mail-archive/squid-users/200906/0602.html
Does someone have a way to fix this?



What error?

As I understand it libcap2 is a piece of system software, not an error.

Could you clarify please what problem you have hit?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Requests through proxy take 4x+ longer than direct to the internet

2010-03-19 Thread Amos Jeffries

David Parks wrote:

Hi, I set up a dev instance of squid on my windows system.

I've configured 2 browsers (Chrome & Firefox): chrome direct to the
internet, firefox through the locally running instance of squid.

I expected similar response times from the two browsers, but I consistently
see firefox (configured to proxy through squid) takes 4x+ longer.

Below are the logs showing response times from a hit on yahoo.com, the
chrome browser opened the page in ~2 seconds.

I have used the windows binaries of squid and configured digest password
authentication, everything else (other than default port) is left as default
in the config file.

After doing a packet capture I noted the following behavior:

   - When going through the proxy: 9 GET requests are made, and 9 HTTP
responses are received in a reasonable time period (2sec)
   - After the 9th HTTP response is sent, there is a 4 second delay until
the next GET request is made
   - Then 6 GET requests are made, and 6 HTTP responses are received in a
reasonable amount of time.
   - After the 6th GET request in this second group there is a 5 second
delay until the next GET request is made.
   - This pattern repeats itself when the proxy is in use.
   - This pattern does not occur when I am not connected through the proxy.

Any thoughts on this behavior?



This blog article explains the issues involved:

http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Squid not caching anything

2010-03-19 Thread Amos Jeffries

jayesh chavan wrote:

Hi,
 My squid is working but not caching anything. What is the
problem? Whenever I use the purge command for any url, it replies 404 not
found.


 * Run some of the URLs through www.redbot.org and see if there is any 
particular reason for that.


 * check your configuration. Using cache deny will prevent storage of 
responses.


 * PURGE is hobbled somewhat in the presence of a Vary: header. You must 
specify the exact same variant conditions in your PURGE to match the 
request which stored the object.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] R: [squid-users] Cancelled downloads

2010-03-19 Thread CASALI COMPUTERS - Michele Brodoloni
Hmmm.. 
So I guess this behaviour is caused by the following lines:

range_offset_limit -1
maximum_object_size 200 MB
quick_abort_min -1

Which are used to cache as much as possible from Windows Update... (from: 
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate)
At this point I'm asking if there's any workaround for this. I mean: is it 
possible to have quick_abort_min set to -1 only for Windows Update,
and have it behave normally for the rest of the websites?

Thanks a lot

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, 19 March 2010 07:19
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Cancelled downloads

CASALI COMPUTERS - Michele Brodoloni wrote:
 Hello,
 is it possible to stop squid from keep downloading a file when a user stops 
 the download from his browser?
 If an user initiates a 1GB of web download and then hits “cancel”, squid 
 doesn’t mind it and continues to download until it finishes, and this is a 
 waste of bandwidth.
 
 Is there a solution for this behavior?
 

This is the default behaviour of Squid.

Check your configuration settings for:
  http://www.squid-cache.org/Doc/config/quick_abort_max/
  http://www.squid-cache.org/Doc/config/quick_abort_min/
  http://www.squid-cache.org/Doc/config/quick_abort_pct/
  http://www.squid-cache.org/Doc/config/range_offset_limit/


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18




Re: [squid-users] squid consuming too much processor/cpu

2010-03-19 Thread Matus UHLAR - fantomas
  On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:
   you might want to check out this thread
  
   http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html

 On Wed, Mar 17, 2010 at 11:09 PM, Muhammad Sharfuddin
 m.sharfud...@nds.com.pk wrote:
  I checked, but it's not clear to me.
  Do I need to install some packages/rpms? And then?
  I mean, how can I resolve this issue?

On 18.03.10 08:46, Ivan . wrote:
 run a cron job to restart Squid once a week?

or simply not using tons of regular expressions within squid access lists
and leaving the job to external program(s).
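One hedged sketch of what delegating to an external program can look like (the helper path and ACL name are hypothetical):

```
# Hand URL matching to a long-running helper process instead of keeping
# hundreds of url_regex lines inside squid.conf.
external_acl_type urlcheck ttl=60 %URI /usr/local/bin/url_check.pl
acl blocked external urlcheck
http_access deny blocked
```

The helper receives one URI per line and answers OK/ERR, so the expensive matching runs outside squid's main loop and its verdicts are cached for ttl seconds.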
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
42.7 percent of all statistics are made up on the spot. 


Re: [squid-users] R: [squid-users] Cancelled downloads

2010-03-19 Thread Amos Jeffries

CASALI COMPUTERS - Michele Brodoloni wrote:
Hmmm.. 
So I guess this behaviour is caused by these following lines:


range_offset_limit -1
maximum_object_size 200 MB
quick_abort_min -1

Which are used to cache the most possible from windows update... (from: 
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate)
At this point I'm asking if there's any workaround for this.. I mean: is it possible to 
make quick_abort_min to be set to -1 only for windows updates,
and have it behave normally for the rest of the websites?

Thanks a lot



Not with the current Squid. Sorry.
There is a patch on my TODO list to add ACL support to 
range_offset_limit, but nothing yet for quick-abort.


Amos


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, 19 March 2010 07:19

To: squid-users@squid-cache.org
Subject: Re: [squid-users] Cancelled downloads

CASALI COMPUTERS - Michele Brodoloni wrote:

Hello,
is it possible to stop squid from keep downloading a file when a user stops the 
download from his browser?
If an user initiates a 1GB of web download and then hits “cancel”, squid 
doesn’t mind it and continues to download until it finishes, and this is a 
waste of bandwidth.

Is there a solution for this behavior?



This is the default behaviour of Squid.

Check your configuration settings for:
  http://www.squid-cache.org/Doc/config/quick_abort_max/
  http://www.squid-cache.org/Doc/config/quick_abort_min/
  http://www.squid-cache.org/Doc/config/quick_abort_pct/
  http://www.squid-cache.org/Doc/config/range_offset_limit/


Amos



--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] R: [squid-users] R: [squid-users] Cancelled downloads

2010-03-19 Thread CASALI COMPUTERS - Michele Brodoloni
In this case, are you aware of some third-party software/squid plugin which 
could do the job?
I'm still crawling the entire internet without luck... I've seen a redirector 
written in perl, but it seems to use another
caching mechanism, so it would render useless my windows updates collection fetched 
until now... :)

For who is interested:
http://www.glob.com.au/windowsupdate_cache/


Thanks


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, 19 March 2010 09:25
To: squid-users@squid-cache.org
Subject: Re: [squid-users] R: [squid-users] Cancelled downloads

CASALI COMPUTERS - Michele Brodoloni wrote:
 Hmmm.. 
 So I guess this behaviour is caused by these following lines:
 
 range_offset_limit -1
 maximum_object_size 200 MB
 quick_abort_min -1
 
 Which are used to cache the most possible from windows update... (from: 
 http://wiki.squid-cache.org/SquidFaq/WindowsUpdate)
 At this point I'm asking if there's any workaround for this.. I mean: is it 
 possible to make quick_abort_min to be set to -1 only for windows updates,
 and have it behave normally for the rest of the websites?
 
 Thanks a lot
 

Not with the current Squid. Sorry.
There is a patch on my TODO list to add ACL support to 
range_offset_limit, but nothing yet for quick-abort.

Amos

 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: Friday, 19 March 2010 07:19
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Cancelled downloads
 
 CASALI COMPUTERS - Michele Brodoloni wrote:
 Hello,
 is it possible to stop squid from keep downloading a file when a user stops 
 the download from his browser?
 If an user initiates a 1GB of web download and then hits “cancel”, squid 
 doesn’t mind it and continues to download until it finishes, and this is a 
 waste of bandwidth.

 Is there a solution for this behavior?

 
 This is the default behaviour of Squid.
 
 Check your configuration settings for:
   http://www.squid-cache.org/Doc/config/quick_abort_max/
   http://www.squid-cache.org/Doc/config/quick_abort_min/
   http://www.squid-cache.org/Doc/config/quick_abort_pct/
   http://www.squid-cache.org/Doc/config/range_offset_limit/
 
 
 Amos


-- 
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18




Re: [squid-users] Reverse Proxy SSL Options

2010-03-19 Thread Matus UHLAR - fantomas
On 18.03.10 13:12, Dean Weimer wrote:
 We have multiple websites using a certificate that has subject
 alternative names set to use SSL for the multiple domains.  That part is
 working fine, and traffic will pass through showing with Valid
 certificates.  However, I need to Disable it from answering with weak
 ciphers and SSLv2 to pass the scans.

check https_port options cipher= and options=

for the latter you can play with openssl ciphers.
I use (not on squid), DEFAULT:!EXP
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I feel like I'm diagonally parked in a parallel universe. 


[squid-users] Absolute Beginner help required on concepts related to Cache_effective_user.

2010-03-19 Thread GIGO .


On a compiled squid3 stable24, I am unable to run squid as root in Ubuntu, so 
the cache_effective_user defined in squid.conf never comes into play. Is this a 
security concern? What good is cache_effective_user?
 
Is it right to run squid with the default Ubuntu user with which one installed 
the OS?
 
On Ubuntu there is another user, proxy (13), with group proxy. For what 
purpose does this user exist, and does it have any relation with squid?
 
Startup scripts in /etc/init.d run with root privilege on system startup; 
however, my startup script never succeeds because permission is denied to run 
squid as root. Is there a way to fix this issue?
 
Please, if somebody could enlighten me about these concepts I would be really 
thankful, as I am unable to get them right myself.
 
regards,
_
Hotmail: Free, trusted and rich email service.
https://signup.live.com/signup.aspx?id=60969

[squid-users] Yet another IMAP support request

2010-03-19 Thread Sabyasachi Ruj
I went through this thread:
http://www.mail-archive.com/squid-users@squid-cache.org/msg59892.html.
I also need IMAP to work via Squid.  There was no conclusion in
that thread.

Is it possible to use Squid for IMAP traffic using HTTP's CONNECT
method? If not, can anybody tell us the reason? I thought CONNECT can
be used to achieve the same functionality that SOCKS can provide? Am I
missing something?

--
Sabyasachi


Re: [squid-users] Cancelled downloads

2010-03-19 Thread John Doe
From: CASALI COMPUTERS - Michele Brodoloni m.brodol...@casalicomputers.com
 In this case, are you aware of some third-party software/squid plugin which 
 could do the job?
 I'm still crawling the entire internet without luck... I've seen a redirector 
 written in perl, but it seems to use another caching mechanism, so it renders 
 useless my windows updates collection fetched until now... :)

Not a direct solution to your problem, but what about using delay pools?
While it would still download the whole file, it would limit the bandwidth.

Or maybe running a second squid dedicated to windows updates...
The first squid would send a request to the second squid if it is a windows 
update URL.
Would that work?
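The two-squid idea could be sketched like this on the front squid (the domains, port and peer name are assumptions):

```
# Route Windows Update traffic to a second squid on port 3129 which
# carries the aggressive WindowsUpdate caching settings; everything
# else keeps the normal quick_abort behaviour.
acl wupdate dstdomain .windowsupdate.com .update.microsoft.com
cache_peer 127.0.0.1 parent 3129 0 no-query no-digest name=wu_cache
cache_peer_access wu_cache allow wupdate
cache_peer_access wu_cache deny all
never_direct allow wupdate     # force those requests through the peer
```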

JD


  


Re: [squid-users] Absolute Beginner help required on concepts related to Cache_effective_user.

2010-03-19 Thread John Doe
From: GIGO . gi...@msn.com
 On a compiled squid3 stable24. I am unable to run squid as root in 
 Ubuntu. So the cache_effective_user defined in squid.conf never comes into 
 play. 
 Is this a security concern? what good is cache_effective_user for? 
 Is it right to run squid with the default ubuntu user one has installed 
 the OS? 
 On ubuntu there lies another user proxy(13) having 
 group proxy? For what purpose this user exist if this has any relation with 
 squid?
 Startup scripts in etc/init.d run with root privilege 
 on system startup? however my startup script never succeeds because 
 permission 
 is denied to run squid as root? is there a way to fix this issue.
 please if somebody enlighten me about these concepts i would be really 
 thankful as unable to get this concept right myself.

I suggest you look at these: http://tinyurl.com/yfrjkdc
Basically, install the standard packaged squid and look at the init.d script and 
conf file they are using.
Then remove it, use your compiled version, and adapt it.

JD


  


Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Jakob Curdes

Sabyasachi Ruj wrote:

I went through this thread:
http://www.mail-archive.com/squid-users@squid-cache.org/msg59892.html.
I also needed that IMAP to work via Squid.  There was no conclusion on
that thread.

Is it possible to use Squid for IMAP traffic using HTTP's CONNECT
method? If not, can anybody tell us the reason? I thought CONNECT can
be used to achieve the same functionality that SOCKS can provide? Am I
missing something?
  

No, you are not missing anything; currently this is not possible.

Squid concentrates on being a good HTTP proxy with some limited 
functionality regarding HTTPS.

The tunnel CONNECT patch mentioned in the thread might help, but only
if you have a target that accepts what has been tunneled.
E.g. you could try to tunnel IMAP via SSL, but only if the target 
mailserver accepts SSL.
It is probably impossible to tunnel a plain IMAP connection in this way, 
as your target mailserver will not understand the protocol being delivered.
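For context, a CONNECT tunnel is just the exchange below; after the 200 reply the proxy relays raw bytes in both directions. The host and port are examples, and squid's default SSL_ports/Safe_ports ACLs would have to be adjusted to permit port 993:

```
CONNECT imap.example.com:993 HTTP/1.1
Host: imap.example.com:993

HTTP/1.0 200 Connection established
```

From that point on the client speaks TLS (and IMAP inside it) straight to the mailserver; the proxy never interprets the tunneled protocol.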


HTH,
Jakob Curdes




Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Sabyasachi Ruj
Okay. Any pointers to what can be achieved with Squid's HTTP CONNECT method?

Quote:
The tunnel CONNECT patch mentioned in the thread might help, but only
if you have a target that accepts what has been tunneled.

If I understand you correctly, the IMAP server would have to wrap IMAP
responses in HTTP responses, and accept IMAP requests wrapped in
HTTP requests?

On Fri, Mar 19, 2010 at 4:05 PM, Jakob Curdes j...@info-systems.de wrote:
 Sabyasachi Ruj wrote:

 I went through this thread:
 http://www.mail-archive.com/squid-users@squid-cache.org/msg59892.html.
 I also needed that IMAP to work via Squid.  There was no conclusion on
 that thread.

 Is it possible to use Squid for IMAP traffic using HTTP's CONNECT
 method? If not, can anybody tell us the reason? I thought CONNECT can
 be used to achieve the same functionality that SOCKS can provide? Am I
 missing something?


 No, you are not missing something, currently this is not possible.

 Squid concentrateds on being a good HTTP proxy with some limited
 functionality RE https.
 The tunnel CONNECT patch mentioned in the thread might help, but only
 if you have a target that accepts what has been tunneled.
 I.E. you could try to tunnel IMAP via SSL but only if the target mailserver
 accepts SSL.
 It is probably impossible to tunnel a plain IMAP connection in this way as
 your target mailserver will not understand the protocol being delivered.

 HTH,
 Jakob Curdes






-- 
Sabyasachi


Re: [squid-users] Reverse Proxy SSL Options

2010-03-19 Thread Amos Jeffries

Matus UHLAR - fantomas wrote:

On 18.03.10 13:12, Dean Weimer wrote:

We have multiple websites using a certificate that has subject
alternative names set to use SSL for the multiple domains.  That part is
working fine, and traffic will pass through showing with Valid
certificates.  However, I need to Disable it from answering with weak
ciphers and SSLv2 to pass the scans.


check https_port options cipher= and options=

for the latter you can play with openssl ciphers.
I use (not on squid), DEFAULT:!EXP



@Dean: Thanks for bringing this up. I've now updated the config 
documentation to actually mention those details.


In short for options:
NO_SSLv2  Disallow the use of SSLv2
NO_SSLv3  Disallow the use of SSLv3
NO_TLSv1  Disallow the use of TLSv1
SINGLE_DH_USE
Always create a new key when using
temporary/ephemeral DH key exchanges

These options vary depending on your SSL engine.
See the OpenSSL SSL_CTX_set_options documentation for a
complete list of possible options.

ciphers is a comma separated list of ciphers which are to be accepted. 
I'm only going on second-hand info but think it's like SHA1,SHA256 etc.
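Putting it together, a hardened accelerator port might look like the following sketch. The certificate paths and cipher list are examples; the cipher= value is handed to OpenSSL, which uses its colon-separated cipher-list syntax:

```
# Refuse SSLv2 and weak/export-grade ciphers on the reverse-proxy port.
https_port 443 accel defaultsite=www.example.com \
    cert=/etc/squid/site.pem key=/etc/squid/site.key \
    options=NO_SSLv2 cipher=ALL:!EXPORT:!LOW:!aNULL:!eNULL
```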


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] RE: NTLM error

2010-03-19 Thread Dawie Pretorius
Hi, is it possible for someone to come back to me on this request?

Thank you

Dawie Pretorius


-Original Message-
From: Dawie Pretorius [mailto:da...@tradebridge.co.za] 
Sent: 11 March 2010 10:40 AM
To: squid-users@squid-cache.org
Subject: [squid-users] NTLM error

Hi, 

I continually have this error inside my /var/log/squid/cache.log:

[2010/03/05 12:40:02, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
  got NTLMSSP command 3, expected 1

And I am getting an authentication pop-up.

I found this article about this issue:

http://www1.il.squid-cache.org/mail-archive/squid-dev/200906/0041.html

This article states that there is a workaround:

The workaround is pretty simple - just enable the IP auth cache.

I need to know how to enable the IP auth cache to work around this problem. 
Please advise me if I'm interpreting this incorrectly.

Here is my squid.conf:

http_port 0.0.0.0:3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
dns_nameservers 168.210.2.2 196.14.239.2
refresh_pattern -i (/cgi-bin/|\?)     0     0%        0
refresh_pattern ^ftp:              1440    20%    10080
refresh_pattern ^gopher:           1440     0%     1440
refresh_pattern .                     0    20%     4320
half_closed_clients off
acl manager proto cache_object
acl localnet src 172.16.0.0/12
acl localhost src 127.0.0.1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443 21
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 89
acl Safe_ports port 119
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl update-micro-dom dstdomain .microsoft.com
acl update-micro-dom dstdomain .windowsupdate.com
http_access allow update-micro-dom
acl cape_town src 172.16.38.0/23
http_access allow cape_town
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --require-membership-of=S-1-5-21-1070830588-1373467647-793153460-513
auth_param ntlm children 150
auth_param ntlm keep_alive on
acl ntlm proxy_auth REQUIRED
http_access allow localhost ntlm
http_access allow localhost
http_reply_access allow all
icp_access allow all
cache_mgr hb...@.co.za
visible_hostname ZATBIMPROXY01
client_db on
acl FTP proto FTP
always_direct allow FTP
snmp_port 3401
coredump_dir /var/spool/squid

Thanks in advance! 

Regards,
Dawie
 
Note: Privileged/Confidential information may be contained in this message and 
may be subject to legal privilege. Access to this e-mail by anyone other than 
the intended is unauthorised. If you are not the intended recipient (or 
responsible for delivery of the message to such person), you may not use, copy, 
distribute or deliver to anyone this message (or any part of its contents ) or 
take any action in reliance on it. All reasonable precautions have been taken 
to ensure no viruses are present in this e-mail. As our company cannot accept 
responsibility for any loss or damage arising from the use of this e-mail or 
attachments we recommend that you subject these to your virus checking 
procedures prior to use. The views, opinions, conclusions and other information 
expressed in this electronic mail are not given or endorsed by the company 
unless otherwise indicated by an authorized representative independent of this 
message.



Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Jakob Curdes




If I understand you correctly, the IMAP server should wrap IMAP
responses with HTTP responses, and accept IMAP requests wrapped with
HTTP requests?
  

Right, but I am not aware of an IMAP server capable of doing this.

JC


Re: [squid-users] Squid3 issues

2010-03-19 Thread a...@gmail

Hi,
As a common courtesy I did give my name at the end, with "Best Regards, 
Adam", if you really looked.
And when I created this account years ago, I named it Gmail because I have 
many other accounts, it helps me filter through
my email boxes, second of all I am new to the mailing list system, I 
receive an email I hit reply to the person that answered me
And please just forget it, will you, I am no longer seeking any help I told 
you before, you asked me to describe my scenario, so I did

but I really don't need help thanks all the same.
If you looked at my reply I did say "Best Regards, Adam".

Thanks for your time and good luck
Regards ADAM

- Original Message - 
From: Nyamul Hassan mnhas...@usa.net

To: Squid Users squid-users@squid-cache.org
Sent: Friday, March 19, 2010 6:33 AM
Subject: Fwd: [squid-users] Squid3 issues


Hi,

As a normal courtesy on regular mailing lists, it is more appropriate
to use your regular name, rather than just "Gmail".  The answers on
this list still come from humans, and it's always nice to know the
name of the person we're communicating with.

Also, in one of your emails, you said that you had a FD problem, which
can only happen if you have a working Squid, which is processing a lot
of requests.  Please confirm if that is correct.

And, if you're seeing this, then I believe you have already read
Amos's post.  I'm forwarding this to the list.  I'm more of a forward
proxy guy, so the more adept members of the list would be more
helpful in your scenario.

Regards
HASSAN




-- Forwarded message --
From: Gmail adbas...@googlemail.com
Date: Fri, Mar 19, 2010 at 3:29 AM
Subject: Re: [squid-users] Squid3 issues
To: Nyamul Hassan mnhas...@usa.net


I'd rather use it in a hosting-like setup, considering I have other
clients, not only the webservers,
so if it's possible, which I believe it is, to use it as a hosting setup.
Thanks

Let me give you a quick insight of my network

All my machines run Ubuntu Hardy (8.04); my network is based on 192.168.1.0/24
1) DNS / DHCP Examples (192.168.1.1)
2) Router (Squid) Proxy (192.168.1.4)
3) Webserver xxx.xxx.x.5
4) Webserver xxx.xxx.x.6
5) Webserver xxx.xxx.x.7
6) IRC Server xxx.xxx.110
7) Digichat 100% (java) / Flash Servers xxx.xxx.x.112
8) Windows XP clients range 192.168.1.3 - 192.168.1.2 - 192.168.1.8 -
192.168.1.111 - 192.168.1.113
Other machines are not connected yet
The above are just examples
Two network switches

Hope that helps
Thanks



- Original Message - From: Nyamul Hassan mnhas...@usa.net
To: Squid Users squid-users@squid-cache.org
Sent: Thursday, March 18, 2010 9:05 PM
Subject: Re: [squid-users] Squid3 issues


So, do you want to use the proxy in an "ISP"-like setup? Or in a "Web
Hosting"-like setup?

Regards
HASSAN




On Fri, Mar 19, 2010 at 2:25 AM, Gmail adbas...@googlemail.com wrote:


Ok I'll try and clarify it (thanks btw)
I am running 3 websites on one single machine and have been for a few years,
then the load started to grow, then I decided to have a go at a proxy
server:
I was actually putting it off for a couple of years, simply because I am very
restricted time wise
I have as I said 3 different websites running on one single machine in a
vhost mode

three websites with three different domain names.

Let's say 1) example.com, example.net, example.org all pointing eventually
to the same IP address
as I said it worked perfectly, but it started to slow down a bit as the load
gets too much for one machine to handle.
On top of that I run other servers on different machines, such as Chat
servers (IRC, Flash, DigiChat) , and various other applications.

Now, I am using this machine as a proxy server (reverse proxy server) and a
router at the same time using iptables, and I use another machine as a
DNS/DHCP server, all configured and working fine indeed, no problems at all.


Now, I really struggled to get the clients on my network to have access to
the internet, I mean just to browse the net, I did in the end, but every
single example I followed not a single one worked for me, I don't know how
many forums and articles I read.
I have applied so many examples no luck.

So basically no requests were passed to the backend server; all I wanted is
to get those requests forwarded to the web-server, and if that works then I
will add three more machines as backend servers and each machine will hold
one website with its DB and so on..

That was my plan anyway, and I found myself in an ever-decreasing circle, going
around in circles, following some people's examples and nothing worked, I
tried to find information for example about, how to setup a cache parent,
sibling and so on, not a single word about it, I even read O'Reilly's
articles.


In those examples for instance they mention a parent in order to forward a
request, without telling you how to set a parent, and if you don't have a
parent, does that mean you can't use a proxy server, and If I had a parent
where would it be? and how to decide which one is the parent and 

[squid-users] squid, squirm, clamav, viralator 0.9.8, Invoked with the arguments

2010-03-19 Thread Stefan Reible

Hey,

I am using squid 3.0.19 with squirm 1.23, clamav 0.95.3, viralator  
0.9.8 from svn, and Mozilla Firefox with a configured proxy.


If I put the following url in my Firefox:

http://squid1.testdomain.de/cgi-bin/viralator.cgi?action=http://putty.very.rulez.org/latest/x86/putty.exe

I get this Output:


squid1 log # tail -f viralator.log

2010/03/19 13:47:28 INFO viralator.cgi: 1637 main::config_app -  
Reading configuration file /etc/viralator/viralator.conf
2010/03/19 13:47:28 INFO viralator.cgi: 1668 main::config_app -  
Configuration file was read successfully
2010/03/19 13:47:28 DEBUG viralator.cgi: 1679 main::config_app -  
Values recovered from configuration file

popupwidth - 600
filechmod - 0644
popupback - false
maximum_size - 1689600
css_file - style.css
virusscanner - clamdscan
dirmask - 0022
scannersummary - true
scannerpath - /usr/bin
progress_indicator - progress.png
downloadsdir - /downloads
default_language - english.txt
alert - FOUND
downloads - /var/www/localhost/htdocs/downloads
lang - en-US
viruscmd - --verbose --stdout
secret - sdfjkjk438sdfh234Hasdh73
charset - ISO-8859-1
skip_downloads - true
popupheight - 400
popupfast - false
progress_unit - bar.png
2010/03/19 13:47:28 INFO viralator.cgi: 1683 main::config_app -  
Testing configuration values
2010/03/19 13:47:28 INFO viralator.cgi: 1717 main::config_app -  
Configuration is OK
2010/03/19 13:47:28 INFO viralator.cgi: 1731 main::config_lang -  
Trying to read language file /etc/viralator/languages/english.txt
2010/03/19 13:47:28 INFO viralator.cgi: 1755 main::config_lang -  
Language file read successfully
2010/03/19 13:47:28 INFO viralator.cgi: 101 main:: - Client  
192.9.200.32 connected to Viralator
2010/03/19 13:47:28 INFO viralator.cgi: 140 main:: - Charset is  
defined as ISO-8859-1
2010/03/19 13:47:28 INFO viralator.cgi: 156 main:: - Presenting  
initial page to user
2010/03/19 13:47:28 DEBUG viralator.cgi: 162 main:: - Parameters  
received action
2010/03/19 13:47:28 DEBUG viralator.cgi: 1356 main::test_param -  
Invoked with the arguments: action,  
http://putty.very.rulez.org/latest/x86/putty.exe
2010/03/19 13:47:28 ERROR viralator.cgi: 676 main::error - Invalid  
value for action parameter:  
http://putty.very.rulez.org/latest/x86/putty.exe - requested by  
192.9.200.32


And when I put the url normally:

http://putty.very.rulez.org/latest/x86/putty.exe

I get:

()
2010/03/19 13:49:16 INFO viralator.cgi: 1683 main::config_app -  
Testing configuration values
2010/03/19 13:49:16 INFO viralator.cgi: 1717 main::config_app -  
Configuration is OK
2010/03/19 13:49:16 INFO viralator.cgi: 1731 main::config_lang -  
Trying to read language file /etc/viralator/languages/english.txt
2010/03/19 13:49:16 INFO viralator.cgi: 1755 main::config_lang -  
Language file read successfully
2010/03/19 13:49:16 INFO viralator.cgi: 101 main:: - Client  
192.9.200.32 connected to Viralator
2010/03/19 13:49:16 INFO viralator.cgi: 140 main:: - Charset is  
defined as ISO-8859-1
2010/03/19 13:49:16 INFO viralator.cgi: 156 main:: - Presenting  
initial page to user

2010/03/19 13:49:16 DEBUG viralator.cgi: 162 main:: - Parameters received url
2010/03/19 13:49:16 DEBUG viralator.cgi: 1356 main::test_param -  
Invoked with the arguments: url,  
http://putty.very.rulez.org/latest/x86/putty.exe

2010/03/19 13:49:16 INFO viralator.cgi: 197 main:: - No referer is available
2010/03/19 13:49:16 DEBUG viralator.cgi: 1459 main::WinOpen - Invoked  
with the arguments:  
http://192.9.200.32/cgi-bin/viralator.cgi?action=popupfileurl=http://putty.very.rulez.org/latest/x86/putty.exe, 1269002956,  
width=600,height=400,scrollbars=1,resize=no


The download button didn't work. Here is my squirm.patterns:

abortregexi ^http://192.9.200.32.* # e.g. (^http://192\.168\.100\.1/.*)
abortregexi ^http://squid1.testdomain.de.*
regexi ^(.*\.zip)$ http://192.9.200.32/cgi-bin/viralator.cgi?url=\1
regexi ^(.*\.exe)$ http://192.9.200.32/cgi-bin/viralator.cgi?url=\1
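Read as a sketch (Python used purely for illustration; squirm's own matching may differ in details, and the rule ordering here is an assumption), the two regexi rules above rewrite any .zip/.exe URL to the viralator.cgi redirect, while the abortregexi rules pass local URLs through untouched:

```python
import re

# Illustrative model of the squirm.patterns file above (assumption:
# abort rules are checked first, then the first matching rewrite wins).
ABORT = [re.compile(r"^http://192\.9\.200\.32.*", re.I),
         re.compile(r"^http://squid1\.testdomain\.de.*", re.I)]
REWRITE = [(re.compile(r"^(.*\.zip)$", re.I),
            r"http://192.9.200.32/cgi-bin/viralator.cgi?url=\1"),
           (re.compile(r"^(.*\.exe)$", re.I),
            r"http://192.9.200.32/cgi-bin/viralator.cgi?url=\1")]

def squirm_rewrite(url: str) -> str:
    for pat in ABORT:
        if pat.match(url):
            return url                 # abortregexi: leave local URLs alone
    for pat, repl in REWRITE:
        if pat.match(url):
            return pat.sub(repl, url)  # first matching rule wins
    return url

print(squirm_rewrite("http://putty.very.rulez.org/latest/x86/putty.exe"))
# -> http://192.9.200.32/cgi-bin/viralator.cgi?url=http://putty.very.rulez.org/latest/x86/putty.exe
```

That rewritten URL matches the entry in the squirm match log from this message, so the redirector side appears to behave as configured.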

squirm match log:

Fri Mar 19 13:49:16  
2010:http://putty.very.rulez.org/latest/x86/putty.exe:http://192.9.200.32/cgi-bin/viralator.cgi?url=http://putty.very.rulez.org/latest/x86/putty.exe


My viralator config:

default_language - english.txt
charset - ISO-8859-1
lang - en-US
servername -
proxy_address -
proxy_port -
maximum_size - 1689600
virusscanner - clamdscan
scannerpath - /usr/bin
viruscmd - --verbose --stdout
alert - FOUND
scannersummary - true
downloads - /var/www/localhost/htdocs/downloads
skip_downloads - true
downloadsdir - /downloads
()




I can't find an error in my config. I'm running the whole system under  
Gentoo Linux, and in future the proxy server will run in transparent  
mode. Squid and squirm are running as user squid.


Regards, Stefan



Re: [squid-users] error libcap2 --

2010-03-19 Thread Victor Javier Brizuela
On Fri, Mar 19, 2010 at 03:46, Amos Jeffries squ...@treenet.co.nz wrote:

 What error?

 As I understand it libcap2 is a piece of system software, not an error.

 Could you clarify please what problem you have hit?

The proper translation of his email would be:

-
Hello... can anybody please help me with this error, I've been
fighting with this for over a week and just now I realise that it asks
me this:

in Centos 5.4 i386 kernel 2.6.30 iptables 1.4.5

it is asking me to install libcap2 and libcap2-dev, but it doesn't
exist in centos 5.3 and I'm following this guide to install it
http://www.eu.squid-cache.org/mail-archive/squid-users/200906/0602.html

does anybody have any way to solve this?
-

-- 
Victor Javier Brizuela
http://w2bh.com.ar/

BOFH excuse #38:
secretary plugged hairdryer into UPS


Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Amos Jeffries

Jakob Curdes wrote:




If I understand you correctly, the IMAP server should wrap IMAP
responses with HTTP responses, and accept IMAP requests wrapped with
HTTP requests?
  

Right, but I am not aware of an IMAP server capable of doing this.



Other way around I would have thought. The client usually makes 
connection to server.


One of the reasons CONNECT is so dangerous is that the receiving server 
does not need to know HTTP to communicate once the client has setup the 
tunnel.


Still, I don't know of any IMAP client software which wraps its IMAP 
requests in HTTP either


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Jakob Curdes




Other way around I would have thought. The client usually makes 
connection to server.


One of the reasons CONNECT is so dangerous is that the receiving 
server does not need to know HTTP to communicate once the client has 
setup the tunnel.
Oh, right, I did not read the OP's message correctly. What I was trying 
to say is that if he can somehow manage to encapsulate his IMAP 
requests in HTTP, he may get them through squid, but this does not 
really help, as he is probably unable to decode this on the server side 
unless he sets up a special "http tunnel" endpoint. Probably possible, but 
then I would rather go for an IMAP proxy - that would be a 
straightforward solution that achieves what he actually wants to do.


JC


[squid-users] What version of squid in the upcoming ubuntu 10.4 repo

2010-03-19 Thread tcygne

The new LTS Ubuntu coming in April will be version 10.04. I'm wondering if
anyone knows what version of squid will be in the repos and thus
apt-get'able.
-- 
View this message in context: 
http://n4.nabble.com/What-version-of-squid-in-the-upcoming-ubuntu-10-4-repo-tp1599340p1599340.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] RE: NTLM error

2010-03-19 Thread Amos Jeffries

Dawie Pretorius wrote:

Hi is it possible that someone can come back to me on this request.

Thank you

Dawie Pretorius



Maybe yes, maybe no.

You did add this:

 message and may be subject to legal privilege. Access to this e-mail
 by anyone other than the intended is unauthorised. If you are not the
 intended recipient (or responsible for delivery of the message to
 such person), you may not use, copy, distribute or deliver to anyone
 this message (or any part of its contents ) or take any action in


Sigh. Some people who might have answered will be legally bound not to 
or risk their employment.


/joke.



-Original Message-
From: Dawie Pretorius [mailto:da...@tradebridge.co.za] 
Sent: 11 March 2010 10:40 AM

To: squid-users@squid-cache.org
Subject: [squid-users] NTLM error

Hi, 


I continually have this error inside my /var/log/squid/cache.log:

[2010/03/05 12:40:02, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
  got NTLMSSP command 3, expected 1


A client is using kerberos (aka 3) to respond to your NTLM (aka 1) 
challenge.
 * Find out which client browser this is; it's really rather broken, and 
if possible why it's acting this way.
 * Look into implementing Kerberos auth in your network. NTLM is 
officially deprecated by MS now, and apparently not supported in Windows 7.




And getting a authentication pop up.

I found this article about this issue:

http://www1.il.squid-cache.org/mail-archive/squid-dev/200906/0041.html

This article states that there is a workaround:

The workaround is pretty simple - just enable the IP auth cache.



I think they mean that storing the auth credentials and re-using them 
for the IP gets around it.


Not a good solution at all. And squid does not support auth cache for 
NTLM type protocols anyway. Which means you need to be using insecure 
Basic auth for it to work.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Matus UHLAR - fantomas
On 19.03.10 15:00, Sabyasachi Ruj wrote:
 I went through this thread:
 http://www.mail-archive.com/squid-users@squid-cache.org/msg59892.html.
 I also needed that IMAP to work via Squid.  There was no conclusion on
 that thread.

"need"? Why do you _need_ it?

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Nothing is fool-proof to a talented fool. 


[squid-users] Squid3 issues

2010-03-19 Thread a...@gmail

Hi Amos,
Thanks for your comments, All I was doing is hit reply, this is the very 
first time ever I used any mailing list
It doesn't matter anymore, I am sorry if I offended anyone, it was not my 
intention, when I get an email I simply hit reply
I will try and solve my problems, and if I do get it to work I will 
certainly post the solution for future users who might face the same problem


As for now, I just want to thank you all

I have previously installed an older version of Squid compiled it manually 
it wasn't the one packaged with the OS (Ubuntu hardy)
after few days trying to get it to work, I mean as a reverse proxy, with no 
luck, I removed it, tried the version 3.0 the one that was packaged with the 
OS. I got as far as allowing clients on my network to have access to the 
internet, but most other applications on Windows XP couldn't connect.


anyway this time around I have downloaded it again configured it compiled it 
and installed it, it's not starting but this is a minor problem, it's a 
permission issue rather than anything else.


I just want to say, thank you all, If I do get it to work I will post the 
solution as promised if not that means I have moved on and no longer using 
Squid3.


I will break it down for others to see and it will hopefully help others:

Here it is:

1) Machine A Proxy-Router
2) Machine DNS/DHCP
3) Web-server One    www.example.com
4) Web-server Two    www.example.org
5) Web-server Three  www.example.net
6) IRC-server / Digichat server
Plus 5 Windows clients

I wanted a proxy server for two good reasons: one is 
load balancing, and the second is an extra layer of security.
Currently I have all three of the websites above running on a single machine 
as virtualhosts, but it's too much for one machine to handle all the 
requests.


I always wanted to use a proxy server but I was putting it off.
a) I knew it was going to be a challenge
b) I was trying to get sometime off in order to do it properly
Basically all I wanted for now is to forward all requests to the relevant 
backend servers, which I knew was going to be a challenge.


Once again I am sorry if I offended anyone it wasn't my intention
I will manage to sort it out or simply move on and try something else
Thank you all
Best Regards
Adam






Re: [squid-users] R: [squid-users] R: [squid-users] Cancelled downloads

2010-03-19 Thread Amos Jeffries

CASALI COMPUTERS - Michele Brodoloni wrote:

In this case, are you aware of some third-party software or squid plugin which 
could do the job?
I'm still crawling the entire internet without luck... I've seen a redirector 
written in perl, but it seems to use another
caching mechanism, so it renders useless my windows updates collection fetched 
until now... :)

For who is interested:
http://www.glob.com.au/windowsupdate_cache/


Thanks



I'm not aware of any sorry. With Squid-3 this might work (warning theory 
only at present):


  acl windowsUpdate dstdomain .windowsupdate.com 
  request_header_access Accept-Ranges deny windowsUpdate

In theory at least, that strips away the ability of WU to get ranges of 
data.


Caveats,
  Squid might end up sending the whole object back. Under 
range_offset_limit it fetches that whole thing and sends just the 
requested range back. I'm not sure how WU software deals with full-data 
responses to range requests.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] R: [squid-users] R: [squid-users] Cancelled downloads

2010-03-19 Thread Leonardo Carneiro - Veltrac

Well, if you have any Windows server on your network, you could use WSUS.

http://technet.microsoft.com/en-us/wsus/default.aspx



CASALI COMPUTERS - Michele Brodoloni wrote:

In this case, are you aware of some third-party software/squid plugin which may 
could do the job?
I'm still crawling the entire internet without luck... I've seen a redirector 
written in perl, but it seems to use other
caching mechanism, so it renders useless my windows updates collection fetched 
until now... :)

For who is interested:
http://www.glob.com.au/windowsupdate_cache/


Thanks


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, 19 March 2010 9:25 AM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] R: [squid-users] Cancelled downloads

CASALI COMPUTERS - Michele Brodoloni wrote:
  
Hmmm.. 
So I guess this behaviour is caused by these following lines:


range_offset_limit -1
maximum_object_size 200 MB
quick_abort_min -1

Which are used to cache the most possible from windows update... (from: 
http://wiki.squid-cache.org/SquidFaq/WindowsUpdate)
At this point I'm asking if there's any workaround for this.. I mean: is it possible to 
make quick_abort_min to be set to -1 only for windows updates,
and have it behave normally for the rest of the websites?

Thanks a lot




Not with the current Squid. Sorry.
There is a patch on my TODO list to add ACL support to 
range_offset_limit, but nothing yet for quick-abort.


Amos

  

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, 19 March 2010 7:19 AM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] Cancelled downloads

CASALI COMPUTERS - Michele Brodoloni wrote:


Hello,
is it possible to stop squid from continuing to download a file when a user stops the 
download from his browser?
If a user initiates a 1GB web download and then hits "cancel", squid 
doesn't mind it and continues to download until it finishes, and this is a 
waste of bandwidth.

Is there a solution for this behavior?

  

This is the default behaviour of Squid.

Check your configuration settings for:
  http://www.squid-cache.org/Doc/config/quick_abort_max/
  http://www.squid-cache.org/Doc/config/quick_abort_min/
  http://www.squid-cache.org/Doc/config/quick_abort_pct/
  http://www.squid-cache.org/Doc/config/range_offset_limit/


Amos




  


Re: [squid-users] RE: NTLM error

2010-03-19 Thread Jeff Foster
Dawie,

Welcome to the squid "It's Microsoft and it's broke, so it's not our
fault" list.

I had the same problem and did find a work around that seems to stop
the pop-up authentication.
The hack is to change the registry setting MaxConnectionsPerServer to
1. This is a
link for setting the registry value: http://support.microsoft.com/kb/282402
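A hypothetical .reg sketch of that change (the key path is taken from general knowledge of IE's per-user settings, not from this thread; treat it as an assumption and verify against the linked KB 282402 article before importing, since it throttles all of IE's parallel connections):

```
Windows Registry Editor Version 5.00

; Sketch of the workaround described above: limit IE to one
; keep-alive connection per server for the current user.
; Verify the exact key and value against the KB article first.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:00000001
```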

I believe the squid connection pinning code is wrong but I can't get
anyone to believe me.

Jeff F

On Fri, Mar 19, 2010 at 8:30 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Dawie Pretorius wrote:

 Hi is it possible that someone can come back to me on this request.

 Thank you

 Dawie Pretorius


 Maybe yes, maybe no.

 You did add this:
 
 message and may be subject to legal privilege. Access to this e-mail
 by anyone other than the intended is unauthorised. If you are not the
 intended recipient (or responsible for delivery of the message to
 such person), you may not use, copy, distribute or deliver to anyone
 this message (or any part of its contents ) or take any action in
 

 Sigh. Some people who might have answered will be legally bound not to or
 risk their employment.

 /joke.


 -Original Message-
 From: Dawie Pretorius [mailto:da...@tradebridge.co.za] Sent: 11 March 2010
 10:40 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] NTLM error

 Hi,
 I continually have this error inside my /var/log/squid/cache.log:

 [2010/03/05 12:40:02, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
  got NTLMSSP command 3, expected 1

 A client is using kerberos (aka 3) to respond to your NTLM (aka 1)
 challenge.
  * Find out what client browser this is its really rather broken, and if
 possible why it's acting this way.
  * Look into implementing Kerberos auth in your network. NTLM is officially
 deprecated by MS now, and apparently not supported in Windows 7.


 And getting a authentication pop up.

 I found this article about this issue:

 http://www1.il.squid-cache.org/mail-archive/squid-dev/200906/0041.html

 This article states that there is a workaround:

 The workaround is pretty simple - just enable the IP auth cache.


 I think they mean that storing the auth credentials and re-using them for
 the IP gets around it.

 Not a good solution at all. And squid does not support auth cache for NTLM
 type protocols anyway. Which means you need to be using insecure Basic auth
 for it to work.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18



Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Sabyasachi Ruj
Does that mean that if I modify the client to use the HTTP proxy's CONNECT
method, it can connect to any standard IMAP server? Say, the Gmail IMAP
server?

I also think the client only has to set up the tunnel once. Then there
is no need to wrap the requests in HTTP requests. It can just write to
the socket. Right?
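Roughly, yes, that is how CONNECT tunnelling works, with one caveat: squid only permits CONNECT to ports listed in its SSL_ports ACL (often just 443), so IMAP ports such as 993 would need to be added. A minimal Python sketch of the client side, offered as an illustration only (the proxy address and destination in the usage comment are assumptions):

```python
import socket

def build_connect_request(dest_host: str, dest_port: int) -> bytes:
    """The one HTTP message the client sends. Everything after the
    proxy's 200 reply is raw bytes, with no further HTTP wrapping."""
    return ("CONNECT %s:%d HTTP/1.1\r\nHost: %s:%d\r\n\r\n"
            % (dest_host, dest_port, dest_host, dest_port)).encode("ascii")

def open_tunnel(proxy_addr, dest):
    """Connect to the proxy, issue CONNECT, and return the socket once
    the proxy answers 200.  The caller then speaks plain IMAP (or
    TLS-wrapped IMAP, e.g. for imap.gmail.com:993) directly on it."""
    sock = socket.create_connection(proxy_addr)
    sock.sendall(build_connect_request(*dest))
    status_line = sock.recv(4096).split(b"\r\n", 1)[0]
    if b" 200" not in status_line:
        sock.close()
        raise OSError("proxy refused CONNECT: %r" % status_line)
    return sock

# e.g. open_tunnel(("192.168.1.4", 3128), ("imap.gmail.com", 993))
```

Once open_tunnel returns, the tunnel is set up exactly once, and the client writes IMAP commands straight to the socket, as the question above suggests.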

On Fri, Mar 19, 2010 at 6:48 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Jakob Curdes wrote:


 If I understand you correctly, the IMAP server should wrap IMAP
 responses with HTTP responses, and accept IMAP requests wrapped with
 HTTP requests?


 Right, but I am not aware of an IMAP server capable of doing this.


 Other way around I would have thought. The client usually makes connection
 to server.

 One of the reasons CONNECT is so dangerous is that the receiving server does
 not need to know HTTP to communicate once the client has setup the tunnel.

 Still, I don't know of any IMAP client software which wraps its IMAP
 requests in HTTP either

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18




-- 
Sabyasachi


RE: [squid-users] RE: NTLM error

2010-03-19 Thread Dawie Pretorius
Hello Amos

Thanks I will look into that.

And I apologize for adding that, didn't even know that was added :D

Have a good weekend... :D

Dawie Pretorius

Dawie Pretorius wrote:
 Hi is it possible that someone can come back to me on this request.
 
 Thank you
 
 Dawie Pretorius
 

Maybe yes, maybe no.

You did add this:

  message and may be subject to legal privilege. Access to this e-mail
  by anyone other than the intended is unauthorised. If you are not the
  intended recipient (or responsible for delivery of the message to
  such person), you may not use, copy, distribute or deliver to anyone
  this message (or any part of its contents ) or take any action in


Sigh. Some people who might have answered will be legally bound not to 
or risk their employment.

/joke.

 
 -Original Message-
 From: Dawie Pretorius [mailto:da...@tradebridge.co.za] 
 Sent: 11 March 2010 10:40 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] NTLM error
 
 Hi, 
 
 I continually have this error inside my /var/log/squid/cache.log:
 
 [2010/03/05 12:40:02, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
   got NTLMSSP command 3, expected 1

A client is using kerberos (aka 3) to respond to your NTLM (aka 1) 
challenge.
  * Find out what client browser this is its really rather broken, and 
if possible why it's acting this way.
  * Look into implementing Kerberos auth in your network. NTLM is 
officially deprecated by MS now, and apparently not supported in Windows 7.

 
 And getting a authentication pop up.
 
 I found this article about this issue:
 
 http://www1.il.squid-cache.org/mail-archive/squid-dev/200906/0041.html
 
 This article states that there is a workaround:
 
 The workaround is pretty simple - just enable the IP auth cache.
 

I think they mean that storing the auth credentials and re-using them 
for the IP gets around it.

Not a good solution at all. And squid does not support auth cache for 
NTLM type protocols anyway. Which means you need to be using insecure 
Basic auth for it to work.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18



RE: [squid-users] RE: NTLM error

2010-03-19 Thread Dawie Pretorius
Hello Jeff

Thanks for the help and the link, finally an answer that I can work with :D

Thanks again.

Dawie Pretorius

Dawie,

Welcome to the squid "It's Microsoft and it's broken, so it's not our
fault" list.

I had the same problem and did find a work around that seems to stop
the pop-up authentication.
The hack is to change the registry setting MaxConnectionsPerServer to
1. This is a
link for setting the registry value: http://support.microsoft.com/kb/282402

I believe the squid connection pinning code is wrong but I can't get
anyone to believe me.

Jeff F

On Fri, Mar 19, 2010 at 8:30 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Dawie Pretorius wrote:

 Hi, is it possible that someone can come back to me on this request?

 Thank you

 Dawie Pretorius


 Maybe yes, maybe no.

 You did add this:
 
 message and may be subject to legal privilege. Access to this e-mail
 by anyone other than the intended is unauthorised. If you are not the
 intended recipient (or responsible for delivery of the message to
 such person), you may not use, copy, distribute or deliver to anyone
 this message (or any part of its contents ) or take any action in
 

 Sigh. Some people who might have answered will be legally bound not to or
 risk their employment.

 /joke.


 -Original Message-
 From: Dawie Pretorius [mailto:da...@tradebridge.co.za] Sent: 11 March 2010
 10:40 AM
 To: squid-users@squid-cache.org
 Subject: [squid-users] NTLM error

 Hi,
 I continually have this error inside my /var/log/squid/cache.log:

 [2010/03/05 12:40:02, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
  got NTLMSSP command 3, expected 1

 A client is using kerberos (aka 3) to respond to your NTLM (aka 1)
 challenge.
  * Find out what client browser this is (it's really rather broken) and, if
 possible, why it's acting this way.
  * Look into implementing Kerberos auth in your network. NTLM is officially
 deprecated by MS now, and apparently not supported in Windows 7.


  And getting an authentication pop-up.

 I found this article about this issue:

 http://www1.il.squid-cache.org/mail-archive/squid-dev/200906/0041.html

 This article states that there is a workaround:

 The workaround is pretty simple - just enable the IP auth cache.


 I think they mean that storing the auth credentials and re-using them for
 the IP gets around it.

 Not a good solution at all. And squid does not support auth cache for NTLM
 type protocols anyway. Which means you need to be using insecure Basic auth
 for it to work.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18




Re: [squid-users] Squid3 issues

2010-03-19 Thread Amos Jeffries

a...@gmail wrote:

Hi Amos,
Thanks for your comments. All I was doing was hitting reply; this is the 
very first time I have ever used a mailing list.
It doesn't matter anymore. I am sorry if I offended anyone, it was not 
my intention; when I get an email I simply hit reply.
I will try to solve my problems, and if I do get it to work I will 
certainly post the solution for future users who might face the same 
problem.


As for now, I just want to thank you all

I have previously installed an older version of Squid, compiled manually; 
it wasn't the one packaged with the OS (Ubuntu Hardy).
After a few days trying to get it to work as a reverse proxy, with no 
luck, I removed it and tried version 3.0, the one packaged with the OS. 
I got as far as allowing clients on my network to have access to the 
internet, but most other applications on Windows XP couldn't connect.


Windows apps sadly often have to be individually configured for the 
proxy. A lot are not able to use proxies at all.


For the MS software on Windows XP, set the IE Internet Options, then at 
the command line run proxycfg -u.
 That proxycfg -u seems trivial, but it is seriously important for 
Windows XP or a lot of HTTP service stuff in the background will not 
work even with IE set correctly.
 Also worth noting is that proxy auto-detect is not done by several of 
the back-end libraries either. Including windows update :(




Anyway, this time around I have downloaded it again, configured it, 
compiled it and installed it. It's not starting, but this is a minor 
problem; it's a permission issue rather than anything else.


I just want to say thank you all. If I do get it to work I will post 
the solution as promised; if not, that means I have moved on and am no 
longer using Squid3.


I will break it down for others to see and it will hopefully help others:

Here it is:

1) Machine A  Proxy-Router
2) Machine  DNS / DHCP
3) Web-server One    www.example.com
4) Web-server Two    www.example.org
5) Web-server Three  www.example.net
6) IRC-server / Digichat server
Plus 5 Windows clients

I wanted a proxy server for two good reasons: one is load balancing, and 
the second is an extra layer of security.
Currently I have all three of the websites above running on a single 
machine as virtual hosts, but it's too much for one machine to handle 
all the requests.


I always wanted to use a proxy server but I was putting it off:
a) I knew it was going to be a challenge
b) I was trying to get some time off in order to do it properly
Basically all I wanted for now is to forward all requests to the 
relevant backend servers, which I knew was going to be a challenge.



The IRC-server / Digichat server may not be proxy-able at all through 
Squid. It depends if they use HTTP services, or if they are accessible 
via HTTP.



For the reverse proxying of your websites:
 pick one of the web servers to start with and this is the wiki article 
you need for that website:

  http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

Note: the config settings must go in above all the default 
http_access lines currently in your config. The default http_access rules 
are for forward-proxy use and will block external access.
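A minimal sketch of what that wiki article describes (all addresses and 
names here are assumed placeholders, not from this thread):

```
# reverse-proxy basics, placed ABOVE the default http_access rules
http_port 80 accel defaultsite=www.example.com
cache_peer 192.168.0.10 parent 80 0 no-query originserver name=web1
acl site1 dstdomain www.example.com
cache_peer_access web1 allow site1
http_access allow site1
# ...the default forward-proxy http_access deny rules remain below...
```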


Then, when that's tested and working, this config describes what to add to 
the above to get multiple websites from multiple servers:

  http://wiki.squid-cache.org/ConfigExamples/Reverse/MultipleWebservers


At this point or even with just one server setup you may hit the FD 
overload problem again.


Why: Squid uses 2-3 FDs for every request (client, cache file, and maybe 
server connections), and clients these days like making 4-16 requests in 
parallel, each persistent for many minutes at a stretch. FDs run out fast.
 For reverse proxies on a fairly busy site it may be a good idea to have 
many FDs available to Squid (64K or even 128K has been cited as needed).
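As a rough illustration of how fast FDs run out (the client count and
per-client figures below are assumed for the sketch, not from the thread):

```shell
# FD budget: clients * parallel requests each * FDs per request
clients=500
parallel=8
fds_per_request=3
echo $((clients * parallel * fds_per_request))   # 12000 FDs in flight
```

So even a modest client population can exceed the common 1024-FD default,
which is why the ulimit -n in the startup script matters.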



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Cancelled downloads

2010-03-19 Thread Marcello Romani

CASALI COMPUTERS - Michele Brodoloni ha scritto:

Hello,
is it possible to stop Squid from continuing to download a file when a user 
stops the download from his browser?
If a user initiates a 1 GB web download and then hits “cancel”, Squid 
doesn’t mind and continues the download until it finishes, and this is a 
waste of bandwidth.

Is there a solution for this behavior?

Thanks



Hello,
I have the same problem here. I have set quick_abort_min and _max 
to 0 to avoid any (useless, in my situation) downloads.


But what to do with downloads that were interrupted before the 
config change?


I.e., I now have 5-6 huge ISO files that are being downloaded by Squid as 
leftovers from previously interrupted downloads.


Can I tell Squid to abort them via some kind of administrative interface 
(cachemgr doesn't seem to provide such a command) or should I go the 
iptables route?


Thanks in advance.

--
Marcello Romani


RE: [squid-users] Reverse Proxy SSL Options

2010-03-19 Thread Dean Weimer
On 18.03.10 13:12, Dean Weimer wrote:
 We have multiple websites using a certificate that has subject 
 alternative names set to use SSL for the multiple domains.  That part

 is working fine, and traffic will pass through showing with Valid 
 certificates.  However, I need to Disable it from answering with weak

 ciphers and SSLv2 to pass the scans.

Check the https_port options cipher= and options=.

For the cipher list you can play with openssl ciphers.
I use (not on Squid): DEFAULT:!EXP
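To see exactly which ciphers a given string expands to, you can ask
openssl directly (the string here is just the one mentioned above, not a
recommendation):

```shell
# -v prints one cipher per line with protocol and key-exchange details
openssl ciphers -v 'DEFAULT:!EXP'
```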
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I feel like I'm diagonally parked in a parallel universe. 

Thanks for the info; that worked, almost. I added the following entries.

sslproxy_options NO_SSLv2
sslproxy_cipher
ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

I stole the cipher options from an Apache server that was passing the
PCI scans.  This still caused it to fail the scans.

When I entered the same configuration in the https_port line, however, it
worked.

Example(IP and domain name has been changed):
https_port 192.168.1.2:443 accel
cert=/usr/local/squid/etc/certs/test.crt
key=/usr/local/squid/etc/certs/test.key defaultsite=www.default.com
vhost options=NO_SSLv2
cipher=ALL:!aNULL:!eNULL:!LOW:!EXP:!ADH:!RC4+RSA:+HIGH:+MEDIUM:!SSLv2

Do the sslproxy_* lines only affect Squid's outbound connections to
the back-end servers?
Or are both settings possibly required?  In the successful test scan I
had both set.

I am willing to test some other options if anyone wants me to. I have
until Tuesday before the system needs to be live; it's currently only
accessible to internal clients with a hosts file entry and is being
tested with a Rapid7 Nexpose scanner.

Thanks,
Dean Weimer



Re: [squid-users] Yet another IMAP support request

2010-03-19 Thread Amos Jeffries

Sabyasachi Ruj wrote:

Does that mean that if I modify the client to use HTTP proxy's CONNECT
method, it can connect to any standard IMAP server? Say, Gmail IMAP
server?

I also think the client only has to set up the tunnel once. Then there
is no need to wrap the requests in HTTP. It can just write to
the socket. Right?



Yes.

You will also have to explicitly add the destination ports to the 
SSL_ports and Safe_ports lists in squid.conf so they are not blocked 
(as, by design, they otherwise would be).
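For example, if the client were tunnelling IMAPS (port 993 is an assumed
destination here, e.g. for Gmail's IMAP server), squid.conf would need
additions along these lines:

```
# hypothetical additions: permit CONNECT tunnels to IMAPS (993)
acl SSL_ports port 993
acl Safe_ports port 993
```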


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] RE: NTLM error

2010-03-19 Thread Amos Jeffries

Jeff Foster wrote:

Dawie,

Welcome to the squid It's Microsoft and it's broke, so it's not our
fault list.

I had the same problem and did find a work around that seems to stop
the pop-up authentication.
The hack is to change the registry setting MaxConnectionsPerServer to
1. This is a
link for setting the registry value: http://support.microsoft.com/kb/282402

I believe the squid connection pinning code is wrong but I can't get
anyone to believe me.

Jeff F



You would be the guy whose server kept closing connections on Squid 
after one object, right?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Cache_dir size considerations

2010-03-19 Thread Marcello Romani

GIGO . ha scritto:

Well, I want to make sure that my settings are optimized and want to
learn more about the cache_dir settings. Let me go into details.


Gigo,
you are asking a lot of questions all at once.
This is a volunteer-based support list, so your chances of getting
(good) responses are maximized if you ask specific questions, one or two
per post (possibly related).

That said, I'll try to answer with what I know...




I have installed squid 3.0.STABLE24 on Ubuntu 8.04 on an IBM 3650 X series
server with two hard disks on which physical RAID1 is implemented. I
am to use the Squid server for 1000 users, out of which 250 are power
users; the rest are normal users for whom there are many
restrictions (youtube, facebook, msn messenger, yahoo messenger, mp3/mpg
etc...).


OK



I have done my settings specifically to ensure that Windows updates
are cached, and my maximum_object_size is 256 MB. Also I am looking
forward to caching YouTube content (for which I have no updated script
and settings so far; the one on the internet uses the storeurl directive,
which is deprecated)...


Now my cache directory size is 50 GB with 16 L1 and 256 L2. I think
better would be

cache_dir aufs 50 GB 48(L1) 768(L2)


As far as L1 & L2 settings, I am clear that there should be no more
than around 100 files in the L2 directories, so one's settings should be
adjusted accordingly. However I am confused whether setting your
cache (50 GB) to too large a size will have anything to do with your
performance. Secondly, at the moment the cache directory is
implemented on the same hard drive on which the OS is installed. I know
that the cache would be better moved to a spare hard drive. But what
about high availability? Failure of a disk could result in the
failure of the proxy?


To maximize performance you want 1 disk for OS and logs, and one disk 
per cache_dir, without any RAID.
With only two disks, obviously if either one dies you have an outage.
So to achieve HA you'd need to have two physical Squid boxes, I 
think. I haven't tried it myself, so I cannot guide you on how to set 
that up...
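As a rough sketch of the "~100 files per L2 directory" rule discussed
above, assuming Squid's classic 13 KB average object size (an assumption;
tune it for your real traffic). Note the actual directive is cache_dir,
with the size given in MB (e.g. 51200 for 50 GB):

```shell
# estimate how many L1 dirs keep each L2 dir near 100 files
cache_kb=$((50 * 1024 * 1024))     # 50 GB expressed in KB
avg_object_kb=13                   # assumed average object size
objects=$((cache_kb / avg_object_kb))
l2_dirs=256
files_per_l2=100
# ceiling division: objects / (l2_dirs * files_per_l2)
l1_dirs=$(( (objects + l2_dirs * files_per_l2 - 1) / (l2_dirs * files_per_l2) ))
echo "$objects objects -> L1=$l1_dirs"
```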




Another confusion I have is about the
cache_effective_user. I have set mine to

cache_effective_user proxy, but I don't have much concept of what it
does. I have read on the SANS Institute site a white paper published in
2003 saying that Squid should not be run as the nobody user but as a
sandbox user with no shell. However I am not sure what it is all about
and whether this information is still valid after 7 years have passed.


Squid should not be run as root.
You should have a dedicated user account for it.
Squid cache dirs should be rw by that Squid account, obviously.
I believe most distros (at least server-oriented ones) take care of this 
setup when you install Squid via the package manager.




Please also guide me on the risks involved with these settings, which I
have done for Windows update:

range_offset_limit -1
maximum_object_size 256 MB
quick_abort_min -1


No risk, but if a user interrupts a huge download, Squid will continue 
it until it finishes, possibly wasting a lot of bandwidth on the WAN side.
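For comparison, the continue-on-abort behaviour can be reined in with the
quick_abort_* directives (the values below are illustrative, not from this
thread):

```
# abort the server-side fetch when the client disconnects, unless
# less than 16 KB, or 95% or more of the object, remains outstanding
quick_abort_min 16 KB
quick_abort_max 16 KB
quick_abort_pct 95
```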





Further, after giving Squid too long a list of blocked sites, say
containing 100+ sites, I have noticed that it slowed down. However I am
not sure if that is the reason? Please guide..



Well, blocking sites involves checking every request's URL against all 
the sites in the blacklist. This might have a noticeable impact on the 
server load. Also, if you have many regexes in the blacklist(s) the load 
will be significantly higher.
You might want to have a look at squidGuard or another external helper, to 
take advantage of the multiple CPU cores your server might have.




Please guide in detail; it will be really beneficial for me as concept
building... I would be really thankful.

regards,




HTH








Date: Wed, 17 Mar 2010 11:00:22 +0100 From: mrom...@ottotecnica.com
 CC: squid-users@squid-cache.org Subject: Re: [squid-users]
Cache_dir size considerations

GIGO . ha scritto:

The total amount of Ram on the server is 4 GB with cache_mem
parameter set to 1 GB.


IMHO there's plenty of HW for squid to run smoothly. But it also
depends on the amount of traffic.

I'm sorry but I think I don't get your point... what is exactly the
 problem you're having ?

-- Marcello Romani



--
Marcello Romani


[squid-users] tcp_outgoing_address binding to wrong address

2010-03-19 Thread john

Hi,
I seem to be running into a problem with tcp_outgoing_address binding to 
the incorrect interface address when sending traffic.


I have a private subnet which is not routable, which I use Squid to reach 
things on. This is on a separate network interface on the server. Squid 
also sends other traffic out to the internet (which seems to work fine).


What I find is that when trying to connect to things on the non-routable 
subnet, it takes two requests from the browser to access them.


I have squid configured with an acl:

acl local_network dst 10.0.0.0/16

and with the tcp_outgoing_address section as follows:

tcp_outgoing_address 10.0.0.254 local_network
tcp_outgoing_address real ip !local_network


netstat shows that Squid sends out a SYN but with the wrong source 
address (the real IP) on the first attempt, and this fails as it 
can't route to that network on that interface. If I re-send the request in 
the browser (hit enter in the address bar), it then sends the request from 
the correct local IP and subsequently works.


Can anyone suggest what's wrong?

Thanks,

john


RE: [squid-users] Ignore requests from certain hosts in access_log

2010-03-19 Thread Baird, Josh
Amos,

Do you think that what I am trying to achieve is possible?

Thanks,

Josh

-Original Message-
From: Baird, Josh 
Sent: Tuesday, March 16, 2010 9:25 AM
To: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] Ignore requests from certain hosts in access_log

Hi Amos,

Same results.  Nothing coming from the load balancers is being logged (even 
requests using X-Forwarded-For).  Here is my configuration:

acl loadbalancers src x.x.x.y/255.255.255.255
acl loadbalancers src x.x.x.z/255.255.255.255

follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny !loadbalancers

Without the log_access directive enabled, all requests are logged using their 
X-Forwarded-For source address:

1268749629.423354 172.26.100.23 TCP_MISS/200 1475 GET 
http://webmail.blah.net/? - DIRECT/72.29.72.189 text/plain

These are the types of requests that I am trying to prevent from being logged:

1268749630.481  0 x.x.x.y TCP_DENIED/400 2570 GET error:invalid-request - 
NONE/- text/html

(where x.x.x.y is the load balancer, and the request is a health check of the 
web proxy service)

Thanks,

Josh

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, March 15, 2010 6:52 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Ignore requests from certain hosts in access_log

On Mon, 15 Mar 2010 12:15:49 -0500, Baird, Josh jba...@follett.com
wrote:
 Ok, that sort of worked.  I have a pair of load balancers sitting in
 front of my Squid proxy farm. The load balancers insert the
 X-Forwarded-For header into each HTTP request which allows Squid to log
 their connections using their real client source IP (extracted from
 X-Forwarded-For).  In reality, the connections to the squid servers are
 being made directly from the load balancers.
 
 When I use log_access to deny logging to the load balancer's IP
 addresses, -nothing- gets logged to access_log.  I am attempting to not
 log the health HTTP checks from 10.26.100.130/10.26.100.131 but still
 log the other traffic.  It doesn't seem that log_access is
 X-Forwarded-For aware?  Any ideas?
 
 acl loadbalancers src 10.26.100.130/255.255.255.255
 acl loadbalancers src 10.26.100.131/255.255.255.255
 log_access deny !loadbalancers

Ah, you will require these as well:
 # to trust what the load balancers report for XFF
 follow_x_forwarded_for allow loadbalancers

 # to use the XFF details in the logs
 log_uses_indirect_client on

 # to use the XFF details in ACL tests
 # telling loadbalancer generated requests from relayed
 acl_uses_indirect_client on


Amos


RE: [squid-users] Cache_dir size considerations

2010-03-19 Thread GIGO .

Yes, you are right about asking a lot of questions at once. I'll be careful.

Thank you


 Date: Fri, 19 Mar 2010 16:44:18 +0100
 From: mrom...@ottotecnica.com
 To: gi...@msn.com
 CC: squid-users@squid-cache.org
 Subject: Re: [squid-users] Cache_dir size considerations


Re: [squid-users] Squid3 issues

2010-03-19 Thread a...@gmail

Hi Amos,
Thanks again for your reply. I have tried those two links; I used them 
for one server at a time. Or maybe the issue is that I was trying to access 
the backend server which is currently running in virtualhost mode and holds 
the 3 websites.


As I said before, I completely uninstalled the previous Squid, then 
reinstalled it, this time configuring and compiling it manually.
I had some issues with permissions, first the cache logs and then the swap 
file directory, but it's all sorted.

Now whenever I start Squid with
squid -NCd 10
and check that everything is running OK, I get this warning:

ClientParseRequestMethod: Unsupported method attempted by : 111.118.144.225
This is not a bug. see Squid.conf  extension methods
ClientProcess Invalid Request.

Let me just point out that, first, I have no idea where this IP originates 
from. I tried DNSstuff to figure out where it's coming from; I am not sure 
if it's a Google crawler or someone else, the information wasn't clear.

But it's definitely not one of my IPs
Second, the proxy at the moment is behind a router and is not connected to 
any of the local clients yet; I wanted to run it first before I connect it 
as a Proxy-Router.
How can I prevent this IP from accessing it? Because it keeps a persistent 
connection, it will soon cripple the server.


Does anyone know who owns this IP address please? 111.118.144.225

All I got as info is this
Location: Cambodia [City: Phnom Penh, Phnum Penh]

Maybe I need to block their IP if I can. At the moment the proxy server is 
set up as a standalone machine connected through a router, so I can't 
understand why it is getting these requests from outside.

Any ideas please?

Regards
Adam

- Original Message - 
From: Amos Jeffries squ...@treenet.co.nz

To: squid-users@squid-cache.org
Sent: Friday, March 19, 2010 2:53 PM
Subject: Re: [squid-users] Squid3 issues




[squid-users] How to forward CONNECT and POST to parent cache.

2010-03-19 Thread Krist van Besien
I have the following in my Squid (the rest is standard)

cache_peer p.somewhere.com parent 443 0 no-digest no-query proxy-only
cache_peer_domain p.somewhere.com .domain.net

In my logfile I see that GET requests are forwarded to the peer, but
POST and CONNECT still go DIRECT.

1269021794.161 71 192.168.1.185 TCP_MISS/304 439 GET
http://www.domain.net/favicon.ico - FIRST_UP_PARENT/p.somewhwere.net
image/x-icon
1269021794.338986 192.168.1.185 TCP_MISS/200 4704 CONNECT
www.domain.net:443 - DIRECT/x.x.x.x -

What have I overlooked? Or am I asking too much?
 Basically I want everything for domain.net (GET, CONNECT and POST) to
go over the parent proxy.
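For reference, the directive that forbids Squid from going DIRECT for
matching requests is never_direct; whether it is the missing piece in this
particular config is an assumption:

```
# hypothetical: force everything for .domain.net through the parent peer
acl parent_sites dstdomain .domain.net
never_direct allow parent_sites
```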

Thanks in advance,

Krist


-- 
krist.vanbes...@gmail.com
kr...@vanbesien.org
Bremgarten b. Bern, Switzerland
--
A: It reverses the normal flow of conversation.
Q: What's wrong with top-posting?
A: Top-posting.
Q: What's the biggest scourge on plain text email discussions?


[squid-users] Re: How to forward CONNECT and POST to parent cache.

2010-03-19 Thread Krist van Besien
Forgot to add: This is squid 2.7 on Ubuntu Jaunty.

Krist



Re: [squid-users] Squid3 issues

2010-03-19 Thread a...@gmail

Hi Amos, I forgot to ask you about this comment.

Amos wrote:
 The IRC-server / Digichat server may not be proxy-able at all through
Squid. It depends if they use HTTP services, or if they are accessible via 
HTTP.


According to you, or from what I understand, a proxy server (Squid) can only 
allow HTTP/HTTPS requests, correct?

If that's a yes, what are we going to do with all hundreds of requests then?

You know as well as I do, running servers and services, you don't just run 
programmes and applications that are passed through HTTP.
So if the only access to a network is through 3128 (HTTP), what happens to 
the rest of the services that we can provide?


I am a little confused, so correct me if I am wrong: we must 
allow through iptables DNAT all other services that don't use HTTP, for 
the simple reason that those requests will be rejected by the proxy server.


For instance IRC servers mainly use ports in the -7000 range; the standard 
port is 6667.

Is the proxy server able to handle these ports?

As for the Digichat server, here is what is said about it on their website:

Will DigiChat work through firewalls and proxy servers?

All DigiChat licenses and chat hosting plans allow you to customize the 
ports used, providing your users access through firewalls. Additionally, 
DigiChat offers HTTP Tunneling functionality on select server licenses. 
This feature allows your chatters to use DigiChat from behind protective 
proxy servers. It is important that you understand the proper configuring 
of server ports in order for this feature to perform optimally. To ensure 
proper performance of DigiChat, please refer to the product documentation 
or consult a DigiChat support representative. NOTE: Some advanced features 
such as Audio chat (voice) or Video chat (web cam chat) make use of UDP 
ports for proper operation and as such are NOT tunnelled. Please configure 
your firewall so that such advanced features will work without interruption.

If anyone is interested in finding out more about this, here is the link:

http://www.digichat.com/PDF/DC_FAQ.pdf

Regards
Adam



RE: [squid-users] Requests through proxy take 4x+ longer than direct to the internet

2010-03-19 Thread David Parks
Ah brilliant, thank you for passing this link along, it's very helpful!

Question then: Does the proxy server have a similar functionality as the
browser, that of limiting concurrent requests to a given domain (as
described in this article)?

What I want to know really is: Can I have my users bump up the number of
connections to the proxy server, or, by doing so, do I risk the proxy server
flooding a site and getting the proxy's IP blocked?

What solutions have been employed in other scenarios, or are proxy servers
just inherently slower than direct connections due to this concurrent
connection issue?

Thanks,
David



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, March 19, 2010 1:06 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Requests through proxy take 4x+ longer than
direct to the internet

David Parks wrote:
 Hi, I set up a dev instance of squid on my windows system.
 
 I've configured 2 browsers (Chrome & Firefox), Chrome direct to the 
 internet, Firefox through the locally running instance of squid.
 
 I expected similar response times from the two browsers, but I 
 consistently see firefox (configured to proxy through squid) takes 4x+
longer.
 
 Below are the logs showing response times from a hit on yahoo.com, the 
 chrome browser opened the page in ~2 seconds.
 
 I have used the windows binaries of squid and configured digest 
 password authentication, everything else (other than default port) is 
 left as default in the config file.
 
 After doing a packet capture I noted the following behavior:
 
- When going through the proxy: 9 GET requests are made, and 9 HTTP 
 responses are received in a reasonable time period (2sec)
- After the 9th HTTP response is sent, there is a 4 second delay 
 until the next GET request is made
- Then 6 GET requests are made, and 6 HTTP responses are received 
 in a reasonable amount of time.
- After the 6th GET request in this second group there is a 5 
 second delay until the next GET request is made.
- This pattern repeats itself when the proxy is in use.
- This pattern does not occur when I am not connected through the
proxy.
 
 Any thoughts on this behavior?
 

This blog article explains the issues involved:

http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18




[squid-users] What version of squid in the upcoming ubuntu 10.4 repo

2010-03-19 Thread a...@gmail

Have you tried to ask the question on Ubuntu forums?
You're more likely to get an answer, I believe it will be version 3.0 Stable 
25

I am only guessing

Regards
Adam



[squid-users] Squid-2.7STABLE7: problem with Vary

2010-03-19 Thread Krzysztof Olędzki
Hello,

I have been trying to configure Squid to store and provide two
versions of the same object, but so far with no luck.

I configured my load balancer to append an additional header to
a request depending on a client status, something like:
 X-ASP-CFlag: Yes or X-ASP-CFlag: No

I also configured my servers to append Vary: X-ASP-CFlag and to
set a different ETag for both responses.
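For illustration, the two variants would look roughly like this on the wire (the header and ETag values here are made up):

```
# Request relayed by the load balancer for a "Yes"-type client:
GET /xml/EF.001.xml HTTP/1.1
Host: www.example.com
X-ASP-CFlag: Yes

# Corresponding origin response:
HTTP/1.1 200 OK
Vary: X-ASP-CFlag
ETag: "variant-yes"
```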

Squid is able to cache such responses and always provide a correct
version, so I believe I did everything correctly related to handling
Vary & ETag.

My problem is that each time, when a different type of client comes,
the object is RELEASED and Squid fetches a new one. So, Squid
is able to provide a cached version of such an object as long as
consecutive requests come from the same type of client. If they come
from a different type, then I get a 0% hit rate. :(

 1269025015.033 23 192.168.162.1/192.168.152.2 TCP_MISS/200 16857 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml
 1269025022.400 27 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025022.863 81 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025022.967 25 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025023.456  1 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025024.015 21 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025024.101 16 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16887 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025025.836  1 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16887 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025028.506 27 192.168.162.1/192.168.152.2 TCP_MISS/200 100934 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml
 1269025031.030 37 192.168.162.1/192.168.152.2 TCP_MISS/200 16904 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml
 1269025033.208 11 192.168.162.1/192.168.152.2 TCP_MISS/200 100934 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml

According to the store.log I have:

- request from a client type A:
1269025015.023 RELEASE 00 00032DDE 3670265D41E40D46FB58467B0A406016  200 
1269025002 1268659805 1269025062 text/css -1/16369 GET 
http://www.example.com/css/EF.001.css
1269025015.023 SWAPOUT 00 000332CA 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025015-1 1269125015 x-squid-internal/vary -1/220 GET 
http://www.example.com/css/EF.001.css
1269025015.023 RELEASE 00 00032F03 BA73564A12C40FB51174FE3CD14F2BDA  200 
1269025005-1 1269125005 x-squid-internal/vary -1/220 GET 
http://www.example.com/css/EF.001.css
1269025015.033 SWAPOUT 00 000332CC E3A743051428428E9D4D45836CB2719C  200 
1269025014 1268659805 1269025074 text/css -1/16338 GET 
http://www.example.com/css/EF.001.css

- request from a client type B:
1269025028.491 RELEASE 00 00032F04 F7F9BF630687B86AFAA4D5CD729E6F15  200 
1269025005 1268659805-1 text/xml 100483/100483 GET 
http://www.example.com/xml/EF.001.xml
1269025028.491 SWAPOUT 00 00033BEA 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025028-1 1269125028 x-squid-internal/vary -1/220 GET 
http://www.example.com/xml/EF.001.xml
1269025028.491 RELEASE 00 000332CA 80E0AD812ADE72183FD2BF19D3D1F251  200 
1269025015-1 1269125015 x-squid-internal/vary -1/-218 GET 
http://www.example.com/xml/EF.001.xml
1269025028.506 SWAPOUT 00 00033BF0 A070DC36FD8ED3452573EE7DC398DF53  200 
1269025028 1268659805 1269025088 text/xml 100483/100483 GET 
http://www.example.com/xml/EF.001.xml

- request from a client type A:
1269025031.015 RELEASE 00 000332CC BCB90FADA1A3A323B25925C4776B64AB  200 
1269025014 1268659805 1269025074 text/xml -1/16338 GET 
http://www.example.com/xml/EF.001.xml
1269025031.015 SWAPOUT 00 00033D2D 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025031-1 1269125031 x-squid-internal/vary -1/220 GET 
http://www.example.com/xml/EF.001.xml
1269025031.015 RELEASE 00 00033BEA C4D3004363864A9BC877E75165903539  200 
1269025028-1 1269125028 x-squid-internal/vary -1/220 GET 
http://www.example.com/xml/EF.001.xml
1269025031.028 SWAPOUT 00 00033D2A E3A743051428428E9D4D45836CB2719C  200 
1269025030 1268659805 1269025090 text/xml -1/16385 GET 
http://www.example.com/xml/EF.001.xml

- request from a client type B:
1269025033.204 RELEASE 00 00033BF0 4A2CE9062319A7E381086826978BCBB3  200 
1269025028 1268659805 1269025088 text/xml 100483/100483 GET 
http://www.example.com/xml/EF.001.xml
1269025033.204 SWAPOUT 00 00033EE7 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025033-1 1269125033 x-squid-internal/vary -1/220 GET 

Re: [squid-users] Ignore requests from certain hosts in access_log

2010-03-19 Thread Amos Jeffries

Baird, Josh wrote:

Amos,

Do you think that what I am trying to achieve is possible?


Yes.  Do exactly the same myself with a simple !aclname at the end of 
access_log directives.


I can't figure out why neither that nor the longer log_access is working 
for you.


Amos


-Original Message-
From: Baird, Josh 
Sent: Tuesday, March 16, 2010 9:25 AM

To: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] Ignore requests from certain hosts in access_log

Hi Amos,

Same results.  Nothing coming from the load balancers is being logged (even 
requests using X-Forwarded-For).  Here is my configuration:

acl loadbalancers src x.x.x.y/255.255.255.255
acl loadbalancers src x.x.x.z/255.255.255.255

follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny !loadbalancers

Without the log_access directive enabled, all requests are logged using their 
X-Forwarded-For source address:

1268749629.423    354 172.26.100.23 TCP_MISS/200 1475 GET 
http://webmail.blah.net/? - DIRECT/72.29.72.189 text/plain

These are the types of requests that I am trying to prevent from being logged:

1268749630.481  0 x.x.x.y TCP_DENIED/400 2570 GET error:invalid-request - 
NONE/- text/html

(where x.x.x.y is the load balancer, and the request is a health check of the 
web proxy service)

Thanks,

Josh

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, March 15, 2010 6:52 PM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] Ignore requests from certain hosts in access_log

On Mon, 15 Mar 2010 12:15:49 -0500, Baird, Josh jba...@follett.com
wrote:

Ok, that sort of worked.  I have a pair of load balancers sitting in
front of my Squid proxy farm. The load balancers insert the
X-Forwarded-For header into each HTTP request which allows Squid to log
their connections using their real client source IP (extracted from
X-Forwarded-For).  In reality, the connections to the squid servers are
being made directly from the load balancers.

When I use log_access to deny logging to the load balancer's IP
addresses, -nothing- gets logged to access_log.  I am attempting to not
log the health HTTP checks from 10.26.100.130/10.26.100.131 but still
log the other traffic.  It doesn't seem that log_access is
X-Forwarded-For aware?  Any ideas?

acl loadbalancers src 10.26.100.130/255.255.255.255
acl loadbalancers src 10.26.100.131/255.255.255.255
log_access deny !loadbalancers


Ah, you will require these as well:
 # to trust what the load balancers report for XFF
 follow_x_forwarded_for allow loadbalancers

 # to use the XFF details in the logs
 log_uses_indirect_client on

 # to use the XFF details in ACL tests
 # telling loadbalancer generated requests from relayed
 acl_uses_indirect_client on


Amos




RE: [squid-users] Ignore requests from certain hosts in access_log

2010-03-19 Thread Baird, Josh
And, you still see the non-healthcheck, normal traffic logged using the 
X-Forwarded-For information?

Here is my entire config, maybe this will help:

# What port do we want to listen on?
http_port 80

# Define refresh patterns for content types
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

# Define network ACL's
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl loadbalancers src 10.26.100.136/255.255.255.255
acl loadbalancers src 10.26.100.137/255.255.255.255

# Define access ACL's.  To allow SSL tunneling to a new port, add that port
# to the ssl_ports ACL.  To allow HTTP access over new ports, add that port
# to the safe_ports ACL, and so on.
acl manager proto cache_object
acl ssl_ports port /etc/squid/acl-ssl_ports
acl safe_ports port /etc/squid/acl-safe_ports
acl deny_sites dstdomain /etc/squid/acl-deny_sites
acl deny_browsers browser /etc/squid/acl-deny_browsers
acl CONNECT method CONNECT

# Define HTTP access rules
http_access deny manager !localhost
http_access deny !safe_ports
http_access deny CONNECT !ssl_ports
http_access deny deny_sites
http_access deny deny_browsers
http_access allow localhost
http_access allow localnet
http_access deny all

# Allow icp_access to allowed_src_hosts
# icp_access allow allowed_src_hosts
# icp_access deny all_src

# We want to append the X-Forwarded-For header for Websense
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny !loadbalancers
coredump_dir /var/spool/squid
pid_filename /var/run/squid.pid
httpd_suppress_version_string on
shutdown_lifetime 5 seconds
# We don't cache, so there is no need to waste disk I/O on cache logging
cache_store_log none

# Define SNMP properties
# We will proxy requests to Squid's internal agent from net-snmp
acl snmpprivate snmp_community fcsnmp1ro
snmp_port 3401
snmp_access allow snmpprivate localhost
snmp_access deny all

# Allow non-FQDN hostnames, even though they are bad bad bad!
dns_defnames on

# Disable all caching
cache deny all
cache_dir null /tmp

# Misc Configuration
negative_ttl 0


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, March 19, 2010 6:55 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Ignore requests from certain hosts in access_log

Baird, Josh wrote:
 Amos,
 
 Do you think that what I am trying to achieve is possible?

Yes.  Do exactly the same myself with a simple !aclname at the end of 
access_log directives.

I can't figure out why neither that nor the longer log_access is working 
for you.

Amos

 -Original Message-
 From: Baird, Josh 
 Sent: Tuesday, March 16, 2010 9:25 AM
 To: Amos Jeffries; squid-users@squid-cache.org
 Subject: RE: [squid-users] Ignore requests from certain hosts in access_log
 
 Hi Amos,
 
 Same results.  Nothing coming from the load balancers is being logged (even 
 requests using X-Forwarded-For).  Here is my configuration:
 
 acl loadbalancers src x.x.x.y/255.255.255.255
 acl loadbalancers src x.x.x.z/255.255.255.255
 
 follow_x_forwarded_for allow loadbalancers
 log_uses_indirect_client on
 acl_uses_indirect_client on
 
 # Define Logging (do not log loadbalancer health checks)
 access_log /var/log/squid/access.log squid
 log_access deny !loadbalancers
 
 Without the log_access directive enabled, all requests are logged using 
 their X-Forwarded-For source address:
 
 1268749629.423    354 172.26.100.23 TCP_MISS/200 1475 GET 
 http://webmail.blah.net/? - DIRECT/72.29.72.189 text/plain
 
 These are the types of requests that I am trying to prevent from being logged:
 
 1268749630.481  0 x.x.x.y TCP_DENIED/400 2570 GET error:invalid-request - 
 NONE/- text/html
 
 (where x.x.x.y is the load balancer, and the request is a health check of 
 the web proxy service)
 
 Thanks,
 
 Josh
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
 Sent: Monday, March 15, 2010 6:52 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Ignore requests from certain hosts in access_log
 
 On Mon, 15 Mar 2010 12:15:49 -0500, Baird, Josh jba...@follett.com
 wrote:
 Ok, that sort of worked.  I have a pair of load balancers sitting in
 front of my Squid proxy farm. The load balancers insert the
 X-Forwarded-For header into each HTTP request which allows Squid to log
 their connections using their real client source IP (extracted from
 X-Forwarded-For).  In reality, the connections to the squid servers are
 being made directly from the load balancers.

 When I use log_access to deny logging to the load balancer's IP
 addresses, -nothing- gets 

Re: [squid-users] Squid3 issues

2010-03-19 Thread Amos Jeffries

a...@gmail wrote:

Hi Amos, I forgot to ask you about this comment

Amos Wrote:
 The IRC-server / Digichat server may not be proxy-able at all through
Squid. It depends if they use HTTP services, or if they are accessible 
via HTTP




I said that because, from my reading of one of your earlier messages, it 
appeared that you were getting frustrated by Squid not proxying traffic 
for those services.


 I'm not sure if you are wanting Squid to gateway access for your 
client machines to those server(s), which is possible with some client 
configuration. DigiWeb sounds like it needs special licenses to be 
configured that way.


 I'm not sure if you are wanting to gateway traffic from the general 
public to those servers. Which is not possible for IRC and seems not for 
DigiWeb either.


According to you or from what I understand, proxy server (Squid) can 
only allow HTTP/HTTPS requests, correct?


Yes.

If that's a yes, what are we going to do with all hundreds of requests 
then?


I don't understand what you mean by hundreds of requests. What type of 
requests and for what? user requests for access? software requests for 
non-HTTP stuff?




You know as well as I do, running servers and services, you don't just 
run programmes and applications that are passed through HTTP.
So if the only access to a network is through 3128 (HTTP), what happens 
to the rest of the services that we can provide?


Your public (externally visible) services should not be published on 
port 3128 unless you are offering proxy services.




I am a little confused, so correct me if I am wrong: we must 
allow through iptables DNAT all other services that don't use 
HTTP, for the simple reason that those requests will be rejected by the 
proxy server.


Maybe. It gets complicated.

 1) Squid can only handle HTTP inbound to Squid.

 2) You could do routing or port forwarding (DNAT) with iptables, or 
use other non-Squid proxy software for each publicly provided protocol.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Squid3 issues

2010-03-19 Thread Amos Jeffries

a...@gmail wrote:

Hi Amos,
Thanks again for your reply, I have tried these two links, I have used 
them for one server at a time, or maybe the issue is that I was trying 
to access the backend Server which is currently running in virtualhost 
mode and holds the 3 websites.


As I said before I have completely uninstalled the previous Squid, I 
reinstalled it again this time, configured it and compiled it (manually)
I had some issues with permissions, first the cache logs and then the 
swap file directory but it's all sorted.

Now whenever I start Squid with
squid -NCd 10
I check if everything is running ok, so I get this warning:

clientParseRequestMethod: Unsupported method attempted by: 111.118.144.225
This is not a bug. see squid.conf extension_methods
clientProcessRequest: Invalid Request


The line above (or maybe below) should indicate what request method was 
used. If it looks like garbage it is not HTTP.
 This is commonly caused by apps which send their non-HTTP stuff 
through port 80.
 Or, by overly wide DNAT / interception rules grabbing non-80 ports and 
pushing their data into Squid.
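If the method really is a legitimate non-standard HTTP extension, it can be declared in squid.conf via the extension_methods directive (the method names below are only examples, not something this user necessarily needs):

```
# squid.conf: accept these additional request methods (illustrative list)
extension_methods REPORT MKACTIVITY CHECKOUT MERGE
```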




Let me just point out that first I have no idea where this IP originate 
from, I tried Dnsstuff to figure out where it's coming from, I am not 
sure if it's a Google crawler or someone else, the information wasn't 
clear.


Well, it is probably coming from outside your network and being sent to 
your Squid.





But it's definitely not one of my IPs
Second, the proxy at the moment is behind a router and is not connected 
to any of Local clients yet, I wanted to run it first before I can 
connect it as a Proxy-Router
How can I prevent this from accessing it? Because it keeps persistently 
connecting, it will soon cripple the server.


Does anyone know who owns this IP address please? 111.118.144.225



The whois tool is a first step to finding out:

 whois 111.118.144.225

I won't publish their contact details here, but the whois command will 
show them to you if you really need them. It's one of their customers 
probably.




All I got as info is this:
Location: Cambodia [City: Phnom Penh, Phnum Penh]

Maybe I need to block their IP if I can. At the moment the proxy server is 
set up as a standalone machine connected through a router, so I can't 
understand why it is getting these requests from outside. Any ideas please?


Firstly, check your firewall rules that public traffic really is not 
being explicitly sent to the proxy yet.


If you can confirm that it really should not, add an iptables rule to 
DROP packets coming from it before they go anywhere.
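A minimal sketch of such a rule, assuming the packets arrive on the INPUT chain of the proxy host (adjust the chain and interface to your setup):

```
# drop everything from the offending address before Squid ever sees it
iptables -I INPUT -s 111.118.144.225 -j DROP
```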


Maybe you face an attack or an infected/insecure machine already on your 
network. Either way it's worth finding out more about what that IP is/was 
doing and why.


Amos


Amos wrote:



a...@gmail wrote:

Hi Amos,
Thanks for your comments, All I was doing is hit reply, this is the 
very first time ever I used any mailing list
It doesn't matter anymore, I am sorry if I offended anyone, it was 
not my intention, when I get an email I simply hit reply
I will try and solve my problems, and if I do get it to work I will 
certainly post the solution for future users who might face the same 
problem


As for now, I just want to thank you all

I have previously installed an older version of Squid, compiled manually; 
it wasn't the one packaged with the OS (Ubuntu Hardy). After a few days 
trying to get it to work as a reverse proxy, with no luck, I removed it 
and tried version 3.0, the one packaged with the OS. I got as far as 
allowing clients on my network to have access to the internet, but most 
of the other applications on Windows XP couldn't connect.


Windows apps sadly often have to be individually configured for the 
proxy. A lot are not able to use proxies at all.


For the MS software on Windows XP, set the IE Internet Options, then at 
the command line run proxycfg -u.
 That proxycfg -u seems trivial, but it is seriously important for 
Windows XP or a lot of HTTP service stuff in the background will not 
work even with IE set correctly.
 Also worth noting is that proxy auto-detect is not done by several of 
the back-end libraries either. Including windows update :(




Anyway, this time around I have downloaded it again, configured it, 
compiled it and installed it. It's not starting, but this is a minor 
problem; it's a permission issue rather than anything else.


I just want to say, thank you all, If I do get it to work I will post 
the solution as promised if not that means I have moved on and no 
longer using Squid3.


I will break it down for others to see and it will hopefully help 
others:


Here it is:

1) Machine A Proxy-Router
2) Machine DNS DHCP
3) Web-server One www.example.com
4) Web-server Two    www.example.org
5) Web-server Three  www.example.net
6) IRC-server / Digichat server
Plus 5 Windows clients

I wanted a proxy server in there for two good reasons: one is for 
load balancing and the second is for an extra layer of security.
Currently I have all of 

Re: [squid-users] Ignore requests from certain hosts in access_log

2010-03-19 Thread Amos Jeffries

Baird, Josh wrote:

And, you still see the non-healthcheck, normal traffic logged using the 
X-Forwarded-For information?


Yes.



Here is my entire config, maybe this will help:

snip


# We want to append the X-Forwarded-For header for Websense
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# Define Logging (do not log loadbalancer health checks)
access_log /var/log/squid/access.log squid
log_access deny !loadbalancers


Gah. Stupid me not reading that right earlier.

Means: deny all requests that are NOT loadbalancers.

You are wanting:
  log_access deny loadbalancers

So sorry.
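Putting the correction together with the directives already posted in this thread, the logging section of Josh's config would then read (a sketch; addresses as in his earlier message):

```
# trust XFF from the load balancers, and use it in logs and ACL tests
follow_x_forwarded_for allow loadbalancers
log_uses_indirect_client on
acl_uses_indirect_client on

# log everything EXCEPT requests originating from the loadbalancers
access_log /var/log/squid/access.log squid
log_access deny loadbalancers
```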


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] tcp_outgoing_address binding to wrong address

2010-03-19 Thread Amos Jeffries

john wrote:

Hi,
I seem to be running in to a problem with tcp_outgoing_address binding 
to the incorrect interface address when sending traffic.


I have a private subnet which is not routable which I use squid to reach 
stuff on. This is on a seperate network interface on the server. Squid 
also sends other traffic out to the Internet (which seems to work fine).


What I find is that when trying to connect to stuff on the non-routable 
subnet, it takes two requests from the browser to access it.


I have squid configured with an acl:

acl local_network dst 10.0.0.0/16


dst requires a DNS lookup. This is a slow category ACL as we call it 
in Squid.




and with the tcp_outgoing_address section as follows:

tcp_outgoing_address 10.0.0.254 local_network
tcp_outgoing_address real ip !local_network


tcp_outgoing_address is a fast category lookup. Which has no guarantee 
of working when using slow category ACL types.


You need to get the dst lookup results cached in squid memory by an 
earlier slow category lookup. http_access is good for this.


One http_access line which does the lookup (for example, the line which 
permits that client access to the local_network area) will make the 
address lookup work in most requests (emphasis on most, no guarantees).
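A sketch of how the pieces fit together (addresses taken from John's post; the http_access line primes the slow dst lookup before the fast tcp_outgoing_address check runs):

```
acl local_network dst 10.0.0.0/16

# slow ACL evaluated here; the DNS result is cached for later fast checks
http_access allow local_network

tcp_outgoing_address 10.0.0.254 local_network
tcp_outgoing_address 192.0.2.1 !local_network   # "real ip" placeholder
```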



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] How to forward CONNECT and POST to parent cache.

2010-03-19 Thread Amos Jeffries

Krist van Besien wrote:

I have the following in my Squid (the rest is standard)

cache_peer p.somewhere.com parent 443 0 no-digest no-query proxy-only
cache_peer_domain p.somewhere.com .domain.net

In my logfile I see that GET requests are forwarded to the peer, but
POST and CONNECT still go DIRECT.

1269021794.161 71 192.168.1.185 TCP_MISS/304 439 GET
http://www.domain.net/favicon.ico - FIRST_UP_PARENT/p.somewhwere.net
image/x-icon
1269021794.338    986 192.168.1.185 TCP_MISS/200 4704 CONNECT
www.domain.net:443 - DIRECT/x.x.x.x -

What have I overlooked? Or am I asking too much?
 Basically I want everything for domain.net, GET, CONNECT and POST to
go over the parent proxy.

Thanks in advance,

Krist



There is something really funky with your setup if POST is not going the 
same way as GET.


CONNECT is semantically a request to make a tunnel CONNECTion directly 
to a service. It needs to be forced indirect with the never_direct 
access controls.

  never_direct allow CONNECT
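To send everything for domain.net (GET, POST and CONNECT) via the parent, a sketch along these lines may work (the ACL name is illustrative; the cache_peer line is as in Krist's config):

```
acl to_parent dstdomain .domain.net

# never go direct for these destinations
never_direct allow to_parent

cache_peer p.somewhere.com parent 443 0 no-digest no-query proxy-only
cache_peer_access p.somewhere.com allow to_parent
```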

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] Unsupported method attempted by

2010-03-19 Thread a...@gmail

Hi,
I was wondering if anyone here could help with this problem
I have just finished reinstalling my proxy server Squid3.0STABLE25
As soon as I start it with Squid -NCd 10
I check if everything is running ok, so I get this warning:

clientParseRequestMethod: Unsupported method attempted by: 111.118.144.225
This is not a bug. see squid.conf extension_methods
clientProcessRequest: Invalid Request

And the proxy server is not yet connected to any client at this time, but I 
get these invalid requests one after another. Is there any way of stopping 
this?

It's almost like a flood, it is an outside IP address.

These are the information related to the above IP address:
All I got as info is this
Location: Cambodia [City: Phnom Penh, Phnum Penh]

If you have any suggestions please let me know
Regards
Adam 



Re: [squid-users] Requests through proxy take 4x+ longer than direct to the internet

2010-03-19 Thread Amos Jeffries

David Parks wrote:

Ah brilliant, thank you for passing this link along, it's very helpful!

Question then: Does the proxy server have a similar functionality as the
browser, that of limiting concurrent requests to a given domain (as
described in this article)?


Not certain about other proxies.

Squid does not limit server connections AFAIK.

Client connections can be limited with the maxconn ACL, or in newer releases 
you also have a limit you can set on total connections for each client IP.
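A sketch of the maxconn approach (the threshold of 25 is illustrative, not a recommendation):

```
# refuse new requests from clients already holding 25+ connections
acl toomany maxconn 25
http_access deny toomany
```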




What I want to know really is: Can I have my users bump up the number of
connections to the proxy server, or, by doing so, do I risk the proxy server
flooding a site and getting the proxies IP blocked?


It's a risk, yes. Squid will use as many server-facing connections as 
needed to meet the client demand. So simultaneous concurrent client 
connections are a problem even if you only have one connection per client.


Making sure persistent connections are enabled on the server side makes 
the total connection count drop dramatically for working HTTP/1.1 servers.


To be extra sure, make sure the X-Forwarded-For and Via headers are working 
correctly so the sites can tell you are a proxy serving many clients. 
The strict but reasonable sites like Wikipedia will detect that and 
measure against each client individually.




What solutions have been employed in other scenarios, or are proxy servers
just inherently slower than direct connections due to this concurrent
connection issue?

Thanks,
David



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, March 19, 2010 1:06 AM

To: squid-users@squid-cache.org
Subject: Re: [squid-users] Requests through proxy take 4x+ longer than
direct to the internet

David Parks wrote:

Hi, I set up a dev instance of squid on my windows system.

I've configured 2 browsers (Chrome & Firefox), Chrome direct to the 
internet, Firefox through the locally running instance of squid.


I expected similar response times from the two browsers, but I 
consistently see firefox (configured to proxy through squid) takes 4x+

longer.
Below are the logs showing response times from a hit on yahoo.com, the 
chrome browser opened the page in ~2 seconds.


I have used the windows binaries of squid and configured digest 
password authentication, everything else (other than default port) is 
left as default in the config file.


After doing a packet capture I noted the following behavior:

   - When going through the proxy: 9 GET requests are made, and 9 HTTP 
responses are received in a reasonable time period (2sec)
   - After the 9th HTTP response is sent, there is a 4 second delay 
until the next GET request is made
   - Then 6 GET requests are made, and 6 HTTP responses are received 
in a reasonable amount of time.
   - After the 6th GET request in this second group there is a 5 
second delay until the next GET request is made.

   - This pattern repeats itself when the proxy is in use.
   - This pattern does not occur when I am not connected through the

proxy.

Any thoughts on this behavior?



This blog article explains the issues involved:

http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18





--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] What version of squid in the upcoming ubuntu 10.4 repo

2010-03-19 Thread Amos Jeffries

a...@gmail wrote:

Have you tried to ask the question on the Ubuntu forums?
You're more likely to get an answer there. I believe it will be version 
3.0.STABLE25.

I am only guessing

Regards
Adam



Ubuntu 10.04 Lucid went into freeze with 3.0.STABLE19 + a few security 
patches.


3.1 will make Debian Squeeze and Ubuntu 10.10 (Maverick?) if things 
continue the way they are now.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Squid3 issues

2010-03-19 Thread a...@gmail

Well, IRC can be accessed with IRC clients such as mIRC and so on, but 
it can also be accessed via the web with Java applets, in fact using a 
web browser.

That's why I am asking the question: whether anyone has done this.

As for Digichat, it is a programme written 100% in Java, and it also uses 
the web browser for clients to connect to it from outside with a Java applet.
It uses HTTP; what they were saying there was about hosting the server on 
their servers.

I have my own Digichat server, which is hosted in my house.
So if they can do it even with a proxy I am sure I can do it.

And if I get it to work, then I will post how I did it, in case someone else 
is looking for a solution of the same nature or the same service.


These services were running fine on port 80 with no problems; I mean 
clients could easily access these servers via HTTP port 80 and were then 
redirected to the servers' ports:


IRC on 7000 and Digichat usually on 8396
So I will post back if I get it up and running
Regards
Adam
- Original Message - 
From: Amos Jeffries squ...@treenet.co.nz

To: squid-users@squid-cache.org
Sent: Saturday, March 20, 2010 12:12 AM
Subject: Re: [squid-users] Squid3 issues



a...@gmail wrote:

Hi Amos, I forgot to ask you about this comment

Amos Wrote:
 The IRC-server / Digichat server may not be proxy-able at all through
Squid. It depends if they use HTTP services, or if they are accessible 
via HTTP




I said that because from my reading of one of your earlier messages it 
appeared that you were getting frustrated by Squid not proxying traffic 
for those services.


 I'm not sure if you are wanting Squid to gateway access for your client 
machines to those server(s), which is possible with some client 
configuration. DigiWeb sounds like it needs special licenses to be 
configured that way.


 I'm not sure if you are wanting to gateway traffic from the general 
public to those servers. Which is not possible for IRC and seems not for 
DigiWeb either.


According to you, or from what I understand, a proxy server (Squid) can 
only allow HTTP/HTTPS requests, correct?


Yes.

If that's a yes, what are we going to do with all hundreds of requests 
then?


I don't understand what you mean by hundreds of requests. What type of 
requests and for what? user requests for access? software requests for 
non-HTTP stuff?




You know as well as I do, running servers and services, you don't just 
run programmes and applications that are passed through HTTP.
So if the only access to a network is through 3128 (HTTP), what happens 
to the rest of the services that we can provide?


Your public (externally visible) services should not be published on port 
3128 unless you are offering proxy services.




I am a little confused, so correct me if I am wrong: we must allow 
through iptables DNAT all other services that don't use HTTP, for the 
simple reason that those requests will be rejected by the proxy 
server.


Maybe. It gets complicated.

 1) Squid can only handle HTTP inbound to Squid.

 2) You could do routing or port forwarding (DNAT) with iptables, or use 
other non-Squid proxy software for each publicly provided protocol.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18 




Re: [squid-users] Squid-2.7STABLE7: problem with Vary

2010-03-19 Thread Amos Jeffries

Krzysztof Olędzki wrote:

Hello,

I have been trying to configure Squid to store and provide two
versions of the same object, but so far with no luck.

I configured my load balancer to append an additional header to
a request depending on a client status, something like:
 X-ASP-CFlag: Yes or X-ASP-CFlag: No

I also configured my servers to append Vary: X-ASP-CFlag and to
set a different ETag for both responses.

Squid is able to cache such responses and always provide a correct
version, so I believe I did everything correctly related to handling
Vary & ETag.

My problem is that each time a different type of client comes,
the object is RELEASED and Squid fetches a new one. So Squid
is able to provide a cached version of such an object as long as
consecutive requests come from the same type of client. If they come
from a different type, then I get a 0% hit rate. :(


Is Vary always set regardless of X-ASP-CFlag presence? A missing 
X-ASP-CFlag is considered to be one of the three variant cases by Squid 
(missing, Yes, No).
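The behaviour being described, one stored variant per distinct value of the header named in Vary with "missing" as its own case, can be sketched as a toy cache. This is a simplified illustration of the general HTTP Vary mechanism, not Squid's actual data structures:

```python
# Toy model of Vary-based variant caching: the cache keeps one vary-header
# list per URL and keys each stored variant by the values of those headers
# in the request. An absent header is a distinct variant value, matching
# the "missing" case alongside "Yes" and "No" for X-ASP-CFlag.

class VaryCache:
    def __init__(self):
        self.store = {}   # (url, variant_key) -> cached body
        self.vary = {}    # url -> tuple of header names from Vary

    def _variant_key(self, url, req_headers):
        names = self.vary.get(url, ())
        # None marks an absent header: "missing" is its own variant.
        return tuple(req_headers.get(n) for n in names)

    def fetch(self, url, req_headers, origin):
        key = self._variant_key(url, req_headers)
        if (url, key) in self.store:
            return "HIT", self.store[(url, key)]
        body, vary_names = origin(req_headers)     # go to the origin server
        self.vary[url] = tuple(vary_names)         # remember its Vary list
        self.store[(url, self._variant_key(url, req_headers))] = body
        return "MISS", body

# Hypothetical origin server: body differs by X-ASP-CFlag, and the
# response carries "Vary: X-ASP-CFlag".
def origin(req):
    return "body-for-%s" % req.get("X-ASP-CFlag"), ("X-ASP-CFlag",)

cache = VaryCache()
for hdrs in ({"X-ASP-CFlag": "Yes"}, {"X-ASP-CFlag": "Yes"},
             {"X-ASP-CFlag": "No"}, {}, {}):
    print(cache.fetch("/xml/EF.001.xml", hdrs, origin)[0])
# Prints: MISS, HIT, MISS, MISS, HIT -- each variant caches independently.
```

In a correctly working cache, a second request from the same client type is a HIT even if a different client type fetched the object in between; the RELEASE behaviour in the report suggests the variants are evicting each other instead.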


Is the ETag the same for both variants?

What does Cache-Control: header contain for each/both?




 1269025015.033 23 192.168.162.1/192.168.152.2 TCP_MISS/200 16857 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml
 1269025022.400 27 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025022.863 81 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025022.967 25 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025023.456  1 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025024.015 21 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16886 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025024.101 16 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16887 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025025.836  1 192.168.162.1/192.168.152.2 TCP_MEM_HIT/200 16887 GET 
http://www.example.com/xml/EF.001.xml - NONE/- text/xml
 1269025028.506 27 192.168.162.1/192.168.152.2 TCP_MISS/200 100934 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml


... client B first request.


 1269025031.030 37 192.168.162.1/192.168.152.2 TCP_MISS/200 16904 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml
 1269025033.208 11 192.168.162.1/192.168.152.2 TCP_MISS/200 100934 GET 
http://www.example.com/xml/EF.001.xml - FIRST_UP_PARENT/192.168.162.1 text/xml


... client B second request.

Notice how the object size keeps changing with each TCP_MISS. Might be 
related.




According to the store.log I have:

- request from a client type A:
1269025015.023 RELEASE 00 00032DDE 3670265D41E40D46FB58467B0A406016  200 
1269025002 1268659805 1269025062 text/css -1/16369 GET 
http://www.example.com/css/EF.001.css
1269025015.023 SWAPOUT 00 000332CA 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025015-1 1269125015 x-squid-internal/vary -1/220 GET 
http://www.example.com/css/EF.001.css
1269025015.023 RELEASE 00 00032F03 BA73564A12C40FB51174FE3CD14F2BDA  200 
1269025005-1 1269125005 x-squid-internal/vary -1/220 GET 
http://www.example.com/css/EF.001.css
1269025015.033 SWAPOUT 00 000332CC E3A743051428428E9D4D45836CB2719C  200 
1269025014 1268659805 1269025074 text/css -1/16338 GET 
http://www.example.com/css/EF.001.css

- request from a client type B:
1269025028.491 RELEASE 00 00032F04 F7F9BF630687B86AFAA4D5CD729E6F15  200 
1269025005 1268659805-1 text/xml 100483/100483 GET 
http://www.example.com/xml/EF.001.xml
1269025028.491 SWAPOUT 00 00033BEA 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025028-1 1269125028 x-squid-internal/vary -1/220 GET 
http://www.example.com/xml/EF.001.xml
1269025028.491 RELEASE 00 000332CA 80E0AD812ADE72183FD2BF19D3D1F251  200 
1269025015-1 1269125015 x-squid-internal/vary -1/-218 GET 
http://www.example.com/xml/EF.001.xml
1269025028.506 SWAPOUT 00 00033BF0 A070DC36FD8ED3452573EE7DC398DF53  200 
1269025028 1268659805 1269025088 text/xml 100483/100483 GET 
http://www.example.com/xml/EF.001.xml

- request from a client type A:
1269025031.015 RELEASE 00 000332CC BCB90FADA1A3A323B25925C4776B64AB  200 
1269025014 1268659805 1269025074 text/xml -1/16338 GET 
http://www.example.com/xml/EF.001.xml
1269025031.015 SWAPOUT 00 00033D2D 26DF93F5ACF8EFF960D1ABD01F1D9509  200 
1269025031-1 1269125031 x-squid-internal/vary -1/220 GET 
http://www.example.com/xml/EF.001.xml
1269025031.015 RELEASE 00 00033BEA C4D3004363864A9BC877E75165903539  200 
1269025028-1 1269125028 x-squid-internal/vary -1/220 GET 
http://www.example.com/xml/EF.001.xml
1269025031.028 SWAPOUT 00 00033D2A E3A743051428428E9D4D45836CB2719C  200 
1269025030 1268659805 

Re: [squid-users] Squid3 issues

2010-03-19 Thread Amos Jeffries

a...@gmail wrote:

Well, IRC can be accessed with IRC clients such as mIRC and so on, but 
it can also be accessed via the web with Java applets, in fact using a 
web browser.

That's why I am asking the question: whether anyone has done this.



Ah, okay. I think you will find that those IRC Java applets use the IRC 
protocol natively in the background, only using the browser for a GUI. 
The ones I've seen were like that.



As for Digichat, it is a programme written 100% in Java, and it also uses 
the web browser for clients to connect to it from outside with a Java applet.
It uses HTTP; what they were saying there was about hosting the server 
on their servers.

I have my own Digichat server, which is hosted in my house.
So if they can do it even with a proxy I am sure I can do it.

And If I get it to work then I will post how I did it in case someone 
else is looking for a solution of the same nature or same service.


These services were running fine on port 80 with no problems; I mean 
clients could easily access these servers via HTTP port 80 and were then 
redirected to the servers' ports:


IRC on 7000 and Digichat usually on 8396
So I will post back if I get it up and running
Regards
Adam


Oh, okay. It sounds like they should keep working then even if Squid is 
in front. The Digichat server (port 80 of Digichat at least) may be just 
another cache_peer entry for Squid.
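If the Digichat HTTP-tunnel port were published through Squid that way, the wiring might look roughly like this in squid.conf. This is a hypothetical sketch: the hostname, internal IP, and the 8080 tunnel port are assumptions for illustration, not values confirmed in this thread:

```
# Hypothetical sketch: Squid accelerates the Digichat HTTP tunnel.
http_port 80 accel defaultsite=chat.example.com
cache_peer 192.168.0.20 parent 8080 0 no-query originserver name=digichat
acl chat_site dstdomain chat.example.com
cache_peer_access digichat allow chat_site
http_access allow chat_site
```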


Amos


- Original Message - From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Saturday, March 20, 2010 12:12 AM
Subject: Re: [squid-users] Squid3 issues



a...@gmail wrote:

Hi Amos, I forgot to ask you about this comment

Amos Wrote:
 The IRC-server / Digichat server may not be proxy-able at all 
through
Squid. It depends if they use HTTP services, or if they are 
accessible via HTTP




I said that because from my reading of one of your earlier messages it 
appeared that you were getting frustrated by Squid not proxying 
traffic for those services.


 I'm not sure if you are wanting Squid to gateway access for your 
client machines to those server(s), which is possible with some client 
configuration. DigiWeb sounds like it needs special licenses to be 
configured that way.


 I'm not sure if you are wanting to gateway traffic from the general 
public to those servers. Which is not possible for IRC and seems not 
for DigiWeb either.


According to you or from what I understand, proxy server (Squid) can 
only allow HTTP/HTTPS requests, correct?


Yes.

If that's a yes, what are we going to do with all hundreds of 
requests then?


I don't understand what you mean by hundreds of requests. What type 
of requests and for what? user requests for access? software requests 
for non-HTTP stuff?




You know as well as I do, running servers and services, you don't 
just run programmes and applications that are passed through http
So if the only access to A network is through 3128 (http) what 
happens to the rest of the services that we can provide?


Your public (externally visible) services should not be published on 
port 3128 unless you are offering proxy services.




I am a little confused, so correct me if I am wrong: we must allow 
through iptables DNAT all other services that don't use HTTP, for the 
simple reason that those requests will be rejected by the proxy 
server.


Maybe. It gets complicated.

 1) Squid can only handle HTTP inbound to Squid.

 2) You could do routing or port forwarding (DNAT) with iptables, or 
use other non-Squid proxy software for each publicly provided protocol.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18 





--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


Re: [squid-users] Squid3 issues

2010-03-19 Thread a...@gmail


- Original Message - 
From: Amos Jeffries squ...@treenet.co.nz

To: squid-users@squid-cache.org
Sent: Saturday, March 20, 2010 1:38 AM
Subject: Re: [squid-users] Squid3 issues



a...@gmail wrote:

Well, IRC can be accessed with IRC clients such as mIRC and so on, but 
it can also be accessed via the web with Java applets, in fact using a 
web browser.

That's why I am asking the question: whether anyone has done this.





Ah, okay. I think you will find that those IRC Java applets use the IRC 
protocol natively in the background, only using the browser for a GUI. The 
ones I've seen were like that.


Yes, the applet is configured to connect to any of these ports, 6667-7000 
for argument's sake; it's usually 6667.
And yes, the browser is used for the GUI.



As for Digichat, it is a programme written 100% in Java, and it also uses 
the web browser for clients to connect to it from outside with a Java applet.
It uses HTTP; what they were saying there was about hosting the server on 
their servers.

I have my own Digichat server, which is hosted in my house.
So if they can do it even with a proxy I am sure I can do it.

And If I get it to work then I will post how I did it in case someone 
else is looking for a solution of the same nature or same service.


These services were running fine on port 80 with no problems; I mean 
clients could easily access these servers via HTTP port 80 and were then 
redirected to the servers' ports:


IRC on 7000 and Digichat usually on 8396
So I will post back if I get it up and running
Regards
Adam


Oh, okay. It sounds like they should keep working then even if Squid is in 
front. The Digichat server (port 80 of Digichat at least) may be just another 
cache_peer entry for Squid.


This is what it says in the documentation anyway:

HTTP Tunneling Servlet Configuration

The DigiChat client connects to the DigiChat server through six default 
TCP ports: 8396, 58396, 443, 110, 119, 25. Users that access the 
Internet from behind a firewall or proxy server will generally have 
those ports blocked on their systems. DigiChat will display an error 
when it is not able to access the necessary ports. In order to allow 
access to the applet for users behind firewalls and proxy servers, HTTP 
Tunneling functionality has been implemented with the DigiChat software. 
Generally, ports 80 and 8080 are available to users behind such systems. 
The HTTP Tunneling Servlet can listen on these ports and pass the 
connection to the DigiChat Server.


Regards
Adam


- Original Message - From: Amos Jeffries squ...@treenet.co.nz
To: squid-users@squid-cache.org
Sent: Saturday, March 20, 2010 12:12 AM
Subject: Re: [squid-users] Squid3 issues



a...@gmail wrote:

Hi Amos, I forgot to ask you about this comment

Amos Wrote:
 The IRC-server / Digichat server may not be proxy-able at all 
through
Squid. It depends if they use HTTP services, or if they are accessible 
via HTTP




I said that because from my reading of one of your earlier messages it 
appeared that you were getting frustrated by Squid not proxying traffic 
for those services.


 I'm not sure if you are wanting Squid to gateway access for your client 
machines to those server(s), which is possible with some client 
configuration. DigiWeb sounds like it needs special licenses to be 
configured that way.


 I'm not sure if you are wanting to gateway traffic from the general 
public to those servers. Which is not possible for IRC and seems not for 
DigiWeb either.


According to you or from what I understand, proxy server (Squid) can 
only allow HTTP/HTTPS requests, correct?


Yes.

If that's a yes, what are we going to do with all hundreds of requests 
then?


I don't understand what you mean by hundreds of requests. What type of 
requests and for what? user requests for access? software requests for 
non-HTTP stuff?




You know as well as I do, running servers and services, you don't just 
run programmes and applications that are passed through http
So if the only access to A network is through 3128 (http) what 
happens to the rest of the services that we can provide?


Your public (externally visible) services should not be published on 
port 3128 unless you are offering proxy services.




I am a little confused, so correct me if I am wrong: we must allow 
through iptables DNAT all other services that don't use HTTP, for the 
simple reason that those requests will be rejected by the proxy 
server.


Maybe. It gets complicated.

 1) Squid can only handle HTTP inbound to Squid.

 2) You could do routing or port forwarding (DNAT) with iptables, or use 
other non-Squid proxy software for each publicly provided protocol.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18





--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18