Re: [squid-users] set-cookie header and rfc2109

2010-05-19 Thread Angelo Höngens
On 20-5-2010 8:22, Henrik Nordström wrote:
> Wed 2010-05-19 at 22:22 +0200, Angelo Höngens wrote:
> 
>> http://wiki.squid-cache.org/SquidFaq/InnerWorkings
>>
>> "The proper way to deal with Set-Cookie reply headers, according to RFC
>> 2109 is to cache the whole object, EXCEPT the Set-Cookie header lines."
> 
> Wrong reference.
> 
> This is from the original Netscape Cookie specification. At that time
> Cache-Control did not exist.

So if I understand you correctly, squid follows the behaviour dictated
in the Netscape Cookie Specification (undated), which says Set-Cookie
headers should never be cached. However, that was superseded by RFC 2109
(1997), which says they may be cached unless told not to.

So by that reasoning, I would say Squid does not follow the RFC. Not that I
care that much, but perhaps it would warrant an update in the
documentation or the FAQ page?

-- 


With kind regards,


Angelo Höngens
systems administrator

MCSE on Windows 2003
MCSE on Windows 2000
MS Small Business Specialist
--
NetMatch
tourism internet software solutions

Ringbaan Oost 2b
5013 CA Tilburg
+31 (0)13 5811088
+31 (0)13 5821239

a.hong...@netmatch.nl
www.netmatch.nl
--




Re: [squid-users] WARNING cache_mem is larger than total disk cache space!

2010-05-19 Thread Georg Höllrigl

On 19.05.2010 16:40, Peng, Jeff wrote:


You can set cache_dir to a memory filesystem.



So it's not possible to just tell squid to store the cache items in ram without 
using a ramdisk?


Re: [squid-users] set-cookie header and rfc2109

2010-05-19 Thread Henrik Nordström
Wed 2010-05-19 at 22:22 +0200, Angelo Höngens wrote:

> http://wiki.squid-cache.org/SquidFaq/InnerWorkings
> 
> "The proper way to deal with Set-Cookie reply headers, according to RFC
> 2109 is to cache the whole object, EXCEPT the Set-Cookie header lines."

Wrong reference.

This is from the original Netscape Cookie specification. At that time
Cache-Control did not exist.

Regards
Henrik



Re: [squid-users] Testing website I have set not to cache.

2010-05-19 Thread Peng, Jeff
2010/5/20 Henrik Nordström :
> Wed 2010-05-19 at 14:03 -0500, Ryan McCain wrote:
>> I have this set in my Squid 2.7 conf file..
>>
>>
>> #5/19/10 - Added to bypass Webex caching
>> acl webex dstdomain .webex.com
>>
>> #5/19/20 - Added to not cache webex
>> cache deny webex
>>
>> ...How can I verify Squid isn't caching anything going to Webex.com?
>
> Monitor cache.log and look for requests with TCP_HIT in their status
> code.
>

Should that be access.log? :)
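
Either way, checking is just a matter of watching that log. For example
(assuming the default log location; adjust the path to your install),
something like

  grep webex.com /var/log/squid/access.log | grep HIT

should come back empty if the "cache deny webex" rule is doing its job.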


-- 
Tech support agency in China
http://duxieweb.com/


RE: [squid-users] RE: Anacron log entries

2010-05-19 Thread Amos Jeffries
On Wed, 19 May 2010 16:28:08 +0200, Simon Brereton
 wrote:
>> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
>> Sent: Monday, May 17, 2010 9:59 PM
> 
> 
>> >> Well, there you go. Debug level #2 is full of debugging traces.
>> >>
>> >> FWIW:
>> >>   level 0 - critical failure messages.
>> >>   level 1 - warnings and important notices
>> >>   level 2 thru 9 - debug traces (section specific)
>> >>
>> >> This is why the recommended level is 1 and not 2 or higher.
>> >
>> > Amos
>> >
>> > I'll try that - but there are two things to note..
>> >
>> > 1) I initially increased the debugging to see the auth failures -
>> which
>> I
>> > couldn't see - despite going to 9.  In fact, I saw no difference
>> > between
>> 1
>> > and 2 so that's why I left it at that.
>> >
>> > 2) My logging options are to output to:
>> > 1128 access_log /var/log/squid3/access.log combined
>> > 1137 cache_log /var/log/squid3/cache.log
>> >
>> >
>> > I meant to send this out on Friday.  Anacron doesn't seem to have
>> sent
>> me
>> > the notice since I made the change, but nonetheless, I'm curious as
>> to
>> why
>> > that would make a difference.  My assumption is that no matter what
>> I
>> put
>> > the debugging level at, it should log to file, not to anacron.
>> 
>> They are part of the configuration file loading. The system log is
>> used for initial startup messages before the cache.log file is
>> configured for use. debug_options takes effect immediately on being
>> read in, but cache.log opening is done after the config load is
>> finished and the final cache.log location is known (it can currently
>> be specified twice or more with different filenames).
> 
> That would imply that squid is also being restarted on a daily basis.. Is
> that implication correct?  Is that behaviour correct?
> 

Yes, it does appear so.

Behaviour correctness depends on what is being done at the time of
restart. The only "normal" operation which is done daily by external
processes is log rotation. That should be using "squid -k rotate".

However, there may be other operations somewhere in your setup that mean a
full restart is required.
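
For reference, a daily log rotation that does not restart squid is normally
just a cron job along these lines (the path is an assumption, adjust to your
install), paired with a suitable logfile_rotate value in squid.conf:

  # e.g. /etc/cron.daily/squid -- rotate logs in place, no restart
  /usr/sbin/squid -k rotate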

Amos


[squid-users] set-cookie header and rfc2109

2010-05-19 Thread Angelo Höngens
Hey guys,

I have a question about RFC compliance with regard to caching Set-Cookie
headers. According to the FAQ, squid does not return Set-Cookie headers
for hits, and I am very happy that it works this way.

It does not really make sense to me for an application to send a
Cache-Control: public header and then set a cookie, but that aside: I had
a discussion with one of our developers, and I want to see where in the
RFCs this behaviour is dictated.

The Squid faq says:

http://wiki.squid-cache.org/SquidFaq/InnerWorkings

"The proper way to deal with Set-Cookie reply headers, according to RFC
2109 is to cache the whole object, EXCEPT the Set-Cookie header lines."


However, when I read RFC 2109, I cannot find the directive that states
proxies must not cache these headers. On the contrary, the RFC talks
about "A Set-cookie header that is intended to be shared by multiple
users may be cached", and tells you that if you don't want this header
cached, you should indicate so by sending the header Cache-control:
no-cache="set-cookie":

http://www.ietf.org/rfc/rfc2109.txt

"4.2.3  Controlling Caching

   An origin server must be cognizant of the effect of possible caching
   of both the returned resource and the Set-Cookie header.  Caching
   "public" documents is desirable.  For example, if the origin server
   wants to use a public document such as a "front door" page as a
   sentinel to indicate the beginning of a session for which a Set-
   Cookie response header must be generated, the page should be stored
   in caches "pre-expired" so that the origin server will see further
   requests.  "Private documents", for example those that contain
   information strictly private to a session, should not be cached in
   shared caches.

   If the cookie is intended for use by a single user, the Set-cookie
   header should not be cached.  A Set-cookie header that is intended to
   be shared by multiple users may be cached.

   The origin server should send the following additional HTTP/1.1
   response headers, depending on circumstances:

   * To suppress caching of the Set-Cookie header: Cache-control: no-
 cache="set-cookie".
"
and a little down:

"4.5  Caching Proxy Role

   One reason for separating state information from both a URL and
   document content is to facilitate the scaling that caching permits.
   To support cookies, a caching proxy must obey these rules already in
   the HTTP specification:

   * Honor requests from the cache, if possible, based on cache validity
 rules.

   * Pass along a Cookie request header in any request that the proxy
 must make of another server.

   * Return the response to the client.  Include any Set-Cookie response
 header.

   * Cache the received response subject to the control of the usual
 headers, such as Expires, Cache-control: no-cache, and Cache-
 control: private,

   * Cache the Set-Cookie subject to the control of the usual header,
 Cache-control: no-cache="set-cookie".  (The Set-Cookie header
 should usually not be cached.)

   Proxies must not introduce Set-Cookie (Cookie) headers of their own
   in proxy responses (requests)."


Is this a 'bug' in the documentation/squid, is it a well-considered
deviation from the RFC, or am I missing something?
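
For illustration only (the values are my own example, not taken from the RFC
text above), a reply that lets the body be cached but keeps the cookie out of
shared caches would look something like:

  HTTP/1.1 200 OK
  Cache-Control: public, no-cache="Set-Cookie"
  Set-Cookie: session=abc123; Path=/
  Content-Type: text/html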
-- 


With kind regards,


Angelo Höngens
systems administrator

MCSE on Windows 2003
MCSE on Windows 2000
MS Small Business Specialist
--
NetMatch
tourism internet software solutions

Ringbaan Oost 2b
5013 CA Tilburg
+31 (0)13 5811088
+31 (0)13 5821239

a.hong...@netmatch.nl
www.netmatch.nl
--




Re: [squid-users] Testing website I have set not to cache.

2010-05-19 Thread Henrik Nordström
Wed 2010-05-19 at 14:03 -0500, Ryan McCain wrote:
> I have this set in my Squid 2.7 conf file.. 
> 
> 
> #5/19/10 - Added to bypass Webex caching
> acl webex dstdomain .webex.com
> 
> #5/19/20 - Added to not cache webex
> cache deny webex
> 
> ...How can I verify Squid isn't caching anything going to Webex.com?

Monitor cache.log and look for requests with TCP_HIT in their status
code.

Regards
Henrik



Re: [squid-users] Logging web traffic only

2010-05-19 Thread Henrik Nordström
Wed 2010-05-19 at 13:47 -0500, Kevin Blackwell wrote:
> Is it possible with squid to just log web traffic on a PC, but if it
> does not match a restricted site via squidguard and a blacklist, have
> it surf on it's own internet connection instead of it going through
> the proxy?

Unfortunately not. Once the request has been sent to the proxy, the proxy
has to process it. The protocol does not enable proxies to say "not my
business, please find some other way of accessing this resource"; it
permits only terminal responses.

Regards
Henrik



Re: [squid-users] Squid 3.1.3 crashes

2010-05-19 Thread Henrik Nordström
Wed 2010-05-19 at 09:16 -0500, Luis Daniel Lucio Quiroz wrote:
> Hello,
> 
> I'm having this under 3.1.3 (unfortunately the server is in production)
> 
> 
> 2010/05/18 23:39:57| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 00:42:12| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 01:22:57| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 02:13:03| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 03:24:26| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 04:05:13| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 05:09:55| NETDB state saved; 0 entries, 0 msec
> FATAL: Received Segment Violation...dying.
> 2010/05/19 06:15:04| storeDirWriteCleanLogs: Starting...
> 2010/05/19 06:15:04| WARNING: Closing open FD   25
> 2010/05/19 06:15:04|   Finished.  Wrote 34985 entries.
> 2010/05/19 06:15:04|   Took 0.01 seconds (5138807.29 entries/sec).
> CPU Usage: 43.150 seconds = 28.280 user + 14.870 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> Memory usage for squid via mallinfo():

Did you get a core dump? If so then extract a stack backtrace from it and
file a bug report.

See also the FAQ on how to file bug reports.
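
Roughly (the binary and core file paths will differ on your system), getting
the backtrace is just:

  gdb /usr/local/squid/sbin/squid /path/to/core
  (gdb) backtrace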

Regards
Henrik



[squid-users] Testing website I have set not to cache.

2010-05-19 Thread Ryan McCain
I have this set in my Squid 2.7 conf file.. 


#5/19/10 - Added to bypass Webex caching
acl webex dstdomain .webex.com

#5/19/20 - Added to not cache webex
cache deny webex

...How can I verify Squid isn't caching anything going to Webex.com?

Thanks..


[squid-users] Logging web traffic only

2010-05-19 Thread Kevin Blackwell
Is it possible with squid to just log web traffic on a PC, but if it
does not match a restricted site via squidguard and a blacklist, have
it surf on it's own internet connection instead of it going through
the proxy?



-- 
Kevin Blackwell


[squid-users] Re: SQUID 3.1 + sslBump https interception and decryption

2010-05-19 Thread James Tan
Franz Angeli  gmail.com> writes:

> And what about ICAP configuration? Some suggestion?
> 
> 

Hi Franz Angeli,

Here's the link to my recent attempt to decrypt SSL and use ICAP with Squid:
http://jez4christ.com/view/archives/127

I'm new to GMANE, so my earlier response did not get through to you.

I chanced upon your post while digging through Squid- and ICAP-related
postings for a personal project.


thanks,
James Tan




[squid-users] Re: SQUID 3.1 + sslBump https interception and decryption

2010-05-19 Thread James Tan
Hi Franz Angeli, 

take a look at my recent attempt to decrypt (terminate) SSL using Squid and
ICAP; it might be useful to you.

I chanced upon your message while digging for more information relating to
Squid and ICAP solutions for a personal project.

thanks,
James Tan





[squid-users] Re: SQUID 3.1 + sslBump https interception and decryption

2010-05-19 Thread James Tan
Here is the link - http://jez4christ.com/view/archives/127

Left that out in my earlier response to you.

thanks,
James Tan



[squid-users] refresh patterns for Caching Media

2010-05-19 Thread Jumping Mouse

Hello everyone,

We are using Squid 2.7 for caching educational media files. We are only using
the cache for users who need to access these files. For other internet
traffic the cache will be bypassed.

The media files will not be changed for at least a year, at which point I will
run a script to pre-load the cache with the new media files.

1. How can I set the refresh pattern to never refresh these media files? The
files are swf (Flash), flv, and mp3, etc.
This is what I currently have for media:

refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-no-store ignore-private

2. If  I have already pre-loaded media files into the cache, will changes to 
the refresh patterns work retroactively on these files, or will I have to load 
them into the cache again?

Thanks. 

Kafriki
  

Re: [squid-users] Squid 3.1.3 crashes

2010-05-19 Thread Luis Daniel Lucio Quiroz
On Wednesday 19 May 2010 09:37:43, Peng, Jeff wrote:
> 2010/5/19 Luis Daniel Lucio Quiroz :
> > Hello,
> > 
> > I'm having this under 3.1.3 (unfortunately the server is in production)
> > 
> > 
> > 2010/05/18 23:39:57| NETDB state saved; 0 entries, 0 msec
> > 2010/05/19 00:42:12| NETDB state saved; 0 entries, 0 msec
> > 2010/05/19 01:22:57| NETDB state saved; 0 entries, 0 msec
> > 2010/05/19 02:13:03| NETDB state saved; 0 entries, 0 msec
> > 2010/05/19 03:24:26| NETDB state saved; 0 entries, 0 msec
> > 2010/05/19 04:05:13| NETDB state saved; 0 entries, 0 msec
> > 2010/05/19 05:09:55| NETDB state saved; 0 entries, 0 msec
> > FATAL: Received Segment Violation...dying.
> > 2010/05/19 06:15:04| storeDirWriteCleanLogs: Starting...
> > 2010/05/19 06:15:04| WARNING: Closing open FD   25
> > 2010/05/19 06:15:04|   Finished.  Wrote 34985 entries.
> > 2010/05/19 06:15:04|   Took 0.01 seconds (5138807.29 entries/sec).
> > CPU Usage: 43.150 seconds = 28.280 user + 14.870 sys
> > Maximum Resident Size: 0 KB
> > Page faults with physical i/o: 0
> 
> > Memory usage for squid via mallinfo():
> Maybe run squid with some debug level and watch what the output is.

What debug level do you recommend? Remember, I cannot afford to slow this
server down too much.


[squid-users] mswin_ntlm_auth specify default domain

2010-05-19 Thread Ryan How -I.T. HEROES-

Hi,

I'm using mswin_ntlm_auth to authenticate users and it appears to be 
working correctly.


However, when a non-domain user accesses the proxy, or when using Firefox
for example, they get the login dialog (which is fine) and they need to
enter their username in the form DOMAIN\user. Is there any way I can
specify the domain in the config somewhere so they don't need to enter it?


I am using the basic mswin_auth as a fallback. I do not need to enter
the domain when using this. But it only pops up after I click cancel
on the NTLM auth dialog.


Many thanks,
Ryan


Re: [squid-users] WARNING cache_mem is larger than total disk cache space!

2010-05-19 Thread Peng, Jeff
2010/5/19 Georg Höllrigl :
> Hello,
>
> I've tried to set the disc cache smaller than memory size - because I'm
> observing reduced performance with too much disc cache.
>
> So now, I ask myself - if it would be a good idea to disable the whole disk
> cache thing and only use RAM? And if so - how will I do it - with squid 3.0
> there are always warnings about too little disc cache.
>
>

You can set cache_dir to a memory filesystem.
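
A rough sketch (mount point and sizes below are only placeholders):

  # mount a 2 GB tmpfs and point a cache_dir at it
  mount -t tmpfs -o size=2048m tmpfs /var/spool/squid-ram
  # in squid.conf, keep the cache a bit smaller than the tmpfs
  cache_dir ufs /var/spool/squid-ram 1800 16 256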


-- 
Tech support agency in China
http://duxieweb.com/


Re: [squid-users] Squid 3.1.3 crashes

2010-05-19 Thread Peng, Jeff
2010/5/19 Luis Daniel Lucio Quiroz :
> Hello,
>
> I'm having this under 3.1.3 (unfortunately the server is in production)
>
>
> 2010/05/18 23:39:57| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 00:42:12| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 01:22:57| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 02:13:03| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 03:24:26| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 04:05:13| NETDB state saved; 0 entries, 0 msec
> 2010/05/19 05:09:55| NETDB state saved; 0 entries, 0 msec
> FATAL: Received Segment Violation...dying.
> 2010/05/19 06:15:04| storeDirWriteCleanLogs: Starting...
> 2010/05/19 06:15:04| WARNING: Closing open FD   25
> 2010/05/19 06:15:04|   Finished.  Wrote 34985 entries.
> 2010/05/19 06:15:04|   Took 0.01 seconds (5138807.29 entries/sec).
> CPU Usage: 43.150 seconds = 28.280 user + 14.870 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 0
> Memory usage for squid via mallinfo():
>
>

Maybe run squid with some debug level and watch what the output is.

-- 
Tech support agency in China
http://duxieweb.com/


RE: [squid-users] RE: Anacron log entries

2010-05-19 Thread Simon Brereton
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, May 17, 2010 9:59 PM


> >> Well, there you go. Debug level #2 is full of debugging traces.
> >>
> >> FWIW:
> >>   level 0 - critical failure messages.
> >>   level 1 - warnings and important notices
> >>   level 2 thru 9 - debug traces (section specific)
> >>
> >> This is why the recommended level is 1 and not 2 or higher.
> >
> > Amos
> >
> > I'll try that - but there are two things to note..
> >
> > 1)  I initially increased the debugging to see the auth failures -
> which
> I
> > couldn't see - despite going to 9.  In fact, I saw no difference
> > between
> 1
> > and 2 so that's why I left it at that.
> >
> > 2)  My logging options are to output to:
> > 1128 access_log /var/log/squid3/access.log combined
> > 1137 cache_log /var/log/squid3/cache.log
> >
> >
> > I meant to send this out on Friday.  Anacron doesn't seem to have
> sent
> me
> > the notice since I made the change, but nonetheless, I'm curious as
> to
> why
> > that would make a difference.  My assumption is that no matter what
> I
> put
> > the debugging level at, it should log to file, not to anacron.
> 
> They are part of the configuration file loading. The system log is
> used for initial startup messages before the cache.log file is
> configured for use. debug_options takes effect immediately on being
> read in, but cache.log opening is done after the config load is
> finished and the final cache.log location is known (it can currently
> be specified twice or more with different filenames).

That would imply that squid is also being restarted on a daily basis..  Is that 
implication correct?  Is that behaviour correct?


Simon




[squid-users] Squid 3.1.3 crashes

2010-05-19 Thread Luis Daniel Lucio Quiroz
Hello,

I'm having this under 3.1.3 (unfortunately the server is in production)


2010/05/18 23:39:57| NETDB state saved; 0 entries, 0 msec
2010/05/19 00:42:12| NETDB state saved; 0 entries, 0 msec
2010/05/19 01:22:57| NETDB state saved; 0 entries, 0 msec
2010/05/19 02:13:03| NETDB state saved; 0 entries, 0 msec
2010/05/19 03:24:26| NETDB state saved; 0 entries, 0 msec
2010/05/19 04:05:13| NETDB state saved; 0 entries, 0 msec
2010/05/19 05:09:55| NETDB state saved; 0 entries, 0 msec
FATAL: Received Segment Violation...dying.
2010/05/19 06:15:04| storeDirWriteCleanLogs: Starting...
2010/05/19 06:15:04| WARNING: Closing open FD   25
2010/05/19 06:15:04|   Finished.  Wrote 34985 entries.
2010/05/19 06:15:04|   Took 0.01 seconds (5138807.29 entries/sec).
CPU Usage: 43.150 seconds = 28.280 user + 14.870 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():


any comment?

LD


Re: [squid-users] acl aclname browser and wget

2010-05-19 Thread Amos Jeffries

Andreas Moroder wrote:

Hello,

all our users have to authenticate via LDAP. I now would like to open 
the access from one machine but only for download via wget.


Does "acl aclname browser" work with wget



Yes. Any standards compliant HTTP client has a User-Agent name and sends 
it in requests. Wget naturally uses "Wget" with its version.


  acl wget browser Wget/
or
 acl wget browser ^Wget/



And how can I combine this acl together with the IP src acl?


see the FAQ documentation on how to configure Squid access controls.
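
As a rough sketch (the source address below is only a placeholder), combining
them means listing both ACLs on one http_access line, placed before the rules
that require authentication:

  acl wgethost src 192.0.2.10
  acl wget browser ^Wget/
  http_access allow wgethost wget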

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


Re: [squid-users] TR: ACL squid error - unable to restart

2010-05-19 Thread Amos Jeffries

Dumon Sylvain (THALES GROUP) wrote:

Hi,

I got this 'fatal' error :
2010/04/21 16:10:25| aclParseIpData: Bad host/IP: 'ws.spotimage.com'
FATAL: Bungled squid.conf line 344: acl TO_SPOTIMAGE dst
ws.spotimage.com
Squid Cache (Version 3.0.STABLE13): Terminated abnormally. 


Other topics explain that this error is not fatal and squid may
continue running.

In my case, squid refuses to restart with this ACL.
If I delete the line, the squid daemon restarts normally.

any idea ?

Sylvain D.



You have specified a hostname in an ACL which matches IP addresses.

Squid can normally resolve the name into addresses when loading the 
config. DNS was broken or unavailable at the time that Squid started.
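
As a side note, if DNS availability at startup cannot be guaranteed, an ACL
type that matches the request hostname itself rather than resolved addresses
avoids the resolve-at-load step, for example:

  acl TO_SPOTIMAGE dstdomain ws.spotimage.com

Note the matching semantics differ slightly from the IP-based "dst" type, so
treat this only as a possible workaround.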


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.3


[squid-users] TR: ACL squid error - unable to restart

2010-05-19 Thread Dumon Sylvain (THALES GROUP)
> Hi,
> 
> I got this 'fatal' error :
> 2010/04/21 16:10:25| aclParseIpData: Bad host/IP: 'ws.spotimage.com'
> FATAL: Bungled squid.conf line 344: acl TO_SPOTIMAGE dst
> ws.spotimage.com
> Squid Cache (Version 3.0.STABLE13): Terminated abnormally. 
> 
> Other topics explain that this error is not fatal and squid may
> continue running.
> 
> In my case, squid refuses to restart with this ACL.
> If I delete the line, the squid daemon restarts normally.
> 
> any idea ?
> 
> Sylvain D.
> 


[squid-users] WARNING cache_mem is larger than total disk cache space!

2010-05-19 Thread Georg Höllrigl

Hello,

I've tried to set the disc cache smaller than the memory size - because I'm observing reduced
performance with too much disc cache.


So now I ask myself - would it be a good idea to disable the whole disk cache thing and only use
RAM? And if so - how would I do it? With squid 3.0 there are always warnings about too little disc cache.




Georg


Re: [squid-users] Squid 3.1.3 & squid 2.7 running together on the same server.

2010-05-19 Thread Kinkie
On Wed, May 19, 2010 at 2:20 PM, GIGO .  wrote:
>
> Hi All,
>
> I was running multiple instances of squid 3.0 Stable 25 on the same server
> successfully. However, I intend to run squid 2.7 & 3.1.3 on the same server
> now, the reason being 2.7's enhanced support for dynamic content caching.
> (Earlier the main intention of using multiple instances was to give fault
> tolerance against cache failure.)
>
>
> My question is: is this possible? Are there any special changes I would
> require?

It is. If you build squid 3.1 on your own, all you have to do is
specify a distinct --prefix path to the configure script.
The two copies will share nothing (you can have them share included
parts of the configuration afterwards).

There are other ways to obtain the result, but they are more complex
and of limited gain.
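
A minimal sketch of that (the prefix is only an example path):

  ./configure --prefix=/usr/local/squid31
  make && make install
  /usr/local/squid31/sbin/squid -f /usr/local/squid31/etc/squid.conf

while the existing 2.7 install keeps its own prefix, config, logs and cache
directories.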
-- 
/kinkie


[squid-users] Squid 3.1.3 & squid 2.7 running together on the same server.

2010-05-19 Thread GIGO .

Hi All,
 
I was running multiple instances of squid 3.0 Stable 25 on the same server
successfully. However, I intend to run squid 2.7 & 3.1.3 on the same server now,
the reason being 2.7's enhanced support for dynamic content caching. (Earlier
the main intention of using multiple instances was to give fault tolerance
against cache failure.)
 
 
My question is: is this possible? Are there any special changes I would
require?
 
 
 
Below is a copy of the config for squid instance 2, which I will be using for
caching; please peruse it specifically in the context of YouTube/Facebook
caching. If you notice any other drawback/discrepancy, please do guide me
about it as well; I would be really thankful.
 
(I have also altered client_side.c as per the guide available on the
squid-cache web site.)
-
visible_hostname squidl...@virtual.local
unique_hostname squidlhr1cache
pid_filename /var/run/inst2squid.pid
http_port 1975
icp_port 0
snmp_port 7172
access_log /var/logs/inst2access.log squid
cache_log /var/logs/inst2cache.log
cache_store_log /var/logs/inst2store.log
cache_effective_user proxy 
cache_mgr squidadm...@virtual.local
# If peering with ISA then the following options will be required; otherwise not.
#cache_peer 10.1.82.205 parent 8080 0 default no-digest no-query no-delay 
#never_direct allow all 
 
# Hard disk: 71 GB SAS 15k dedicated for caching. Operating system is on RAID1.
cache_dir aufs /cachedisk1/var/spool/squid 5 128 256
coredump_dir /cachedisk1/var/spool/squid
cache_swap_low 75
#should be 1/4 of the physical memory installed in the system
cache_mem 1000 MB
 
 
range_offset_limit -1 KB
maximum_object_size 4 GB
minimum_object_size 10 KB
quick_abort_min -1 KB
 
# not yet sure what options should be provided during compilation and whether I
# have defined this directive correctly
cache_replacement_policy heap
 
 
 
#-Refresh Pattern Portion--

# Custom Refresh patterns will come first
# specific custom refresh patterns for youtube are the ones below
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487 % 5259487 override-expire ignore-reload
 
# Break HTTP standard for flash videos. Keep them in cache even if asked not to.

refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache override-expire ignore-private
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

# This portion is not yet well understood - what does it mean?
# Let the clients favorite video site through with full caching
# - they can come from any of a number of youtube.com subdomains.
# - this is NOT ideal, the 'merging' of identical content is really needed here
acl youtube dstdomain .youtube.com
cache allow youtube

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|videoplayback.*id)
# storeurl rewrite helper program
storeurl_rewrite_program /usr/local/etc/squid/storeurl.pl
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_children 1
storeurl_rewrite_concurrency 10
#Allow access from localhost only
http_access allow localhost
http_access deny all
-
 
This is the script I am looking forward to using, as per the configuration guide.
--
#!/bin/perl
# (put your perl location in the shebang above; mine is /bin/perl)
# With storeurl_rewrite_concurrency set, squid sends "<channel-id> <URL> ...";
# the helper must echo the channel-id back in front of its answer.
$|=1;
while (<>) {
    @X = split;
    $x = $X[0];
    $_ = $X[1];
    if (m/^http:\/\/([0-9.]{4}|.*\.youtube\.com|.*\.googlevideo\.com|.*\.video\.google\.com).*?\&(itag=[0-9]*).*?\&(id=[a-zA-Z0-9]*)/) {
        # canonicalize YouTube/Google Video URLs onto one internal store URL
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/" . $2 . "&" . $3 . "\n";
    } else {
        # anything else passes through unchanged
        print $x . " " . $_ . "\n";
    }
}

 
 
 
 
Just for completeness' sake, here is the copy of my squid.conf that is
user facing... However, if somebody could give suggestions on it as well,
I would definitely be really thankful.
 
 
# This is the configuration file for instance1, which serves the user
# requests by forwarding them to the local parent peer. All of the
# authentication/access control logic is built here. Name this file squidinst1.conf
 
#---Administrative Section-
visible_hostname squidLhr1
unique_hostname squidlhr1main
pid_filename /var/run/inst1squid.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/inst1access.log squid
cache_log /var/logs/inst1cache.log
cache_store_log /var/logs/inst1store.log
cache_effective_user proxy 
cache_mgr squid

RE: [squid-users] SELINUX issue(confined>unconfined)

2010-05-19 Thread GIGO .

Hi,
 
I use CentOS 5.3 and currently have no knowledge of SELinux, as yesterday was
the first time I studied it. As you could have guessed, I am a newbie in the
Linux field. Yes, I have been assigned the project of migrating from ISA to
squid (management, having confidence in my capability to learn/understand
things, assigned it to me...).
 
I assume it would take quite some time to be able to build the policy myself,
and time is short. So I am thinking of postponing it to some future date and
concentrating on the other issues/stabilization that are necessary for the
required basic functionality. Once the project is piloted and management shows
confidence in me, I can do more challenging tasks like this.
 
But if you think it is really necessary, then I will definitely look to
complete this task before piloting. Any tips/guidance will be warmly welcomed.
 
 
Thanking you
 
&
 
regards,
 
Bilal 
 
 
 



> Date: Wed, 19 May 2010 11:33:40 +0200
> From: tiery.de...@gmail.com
> To: gi...@msn.com
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>
> Hi,
>
> In permissive mode, you only get logs, but selinux will not be active
> (it will not forbid unauthorized access). Usually you put selinux in
> permissive mode only in order to collect all the access-denied entries in
> audit.log, so you can build a policy module or adjust file contexts.
>
> I suggest you spend some time on selinux; it can really increase the
> security of your proxy server.
>
> But you will need to build a policy module for squid_kerb_auth, which
> is not currently supported by the selinux policy on redhat-like systems.
>
> Which distribution do you use?
>
>
> Tiery
>
>
> On Wed, May 19, 2010 at 6:17 AM, GIGO . wrote:
>>
>> Thank you i will give it a try. However i am also thinking of running 
>> SELinux in permissive mode for my proxy server. what do you say about it?
>>
>>
>> regards,
>>
>> Bilal
>>
>> 
>>> Date: Tue, 18 May 2010 15:00:05 +0200
>>> From: tiery.de...@gmail.com
>>> To: gi...@msn.com
>>> CC: squid-users@squid-cache.org
>>> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>>>
>>> okay,
>>>
>>> I have also worked on a similar project (squid/kerberos/selinux).
>>> I installed squid in /usr/local/squid but I had to modify
>>> /etc/selinux/targeted/contexts/files/file_contexts and adapt it to my
>>> squid directory.
>>>
>>> /usr/local/squid/etc(/.*)? system_u:object_r:squid_conf_t:s0
>>> /usr/local/squid/var/logs(/.*)? system_u:object_r:squid_log_t:s0
>>> /usr/local/squid/share(/.*)? system_u:object_r:squid_conf_t:s0
>>> /usr/local/squid/var/cache(/.*)? system_u:object_r:squid_cache_t:s0
>>> /usr/local/squid/sbin/squid -- system_u:object_r:squid_exec_t:s0
>>> /usr/local/squid/var/logs/squid\.pid -- system_u:object_r:squid_var_run_t:s0
>>> /usr/local/squid/libexec(/.*)? system_u:object_r:lib_t:s0
>>> /usr/local/squid -d system_u:object_r:bin_t:s0
>>> /usr/local/squid/var -d system_u:object_r:var_t:s0
>>>
>>> Then restore context (with restorecon or .autorelabel and reboot).
>>>
>>> But I am not sure modifying this file is the best way.
>>> If you update your selinux policy, the change will not be persistent.
>>>
>>> I think it is better to build a selinux module for our squid.
>>>
>>> Tiery
>>>
>>>
>>>
>>> On Tue, May 18, 2010 at 2:34 PM, GIGO . wrote:

 Yes i am using a compiled version. I have used this command chcon -t 
 unconfined_exec_t /usr/sbin/squid and its working now. Is this a security 
 issue?

 regards,

 Bilal







 
> Date: Tue, 18 May 2010 14:26:06 +0200
> From: tiery.de...@gmail.com
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>
> Hi,
>
> ps -Z => squid_t and getenforce => enforcing
> squid is started with selinux
>
> Redhat/centos platform:
> If squid is installed with yum, squid will be started with a squid_t
> selinux context.
>
> If you compiled your squid and installed it, you will have to change
> the squid file contexts manually.
>
> As I see you have squid_kerb_plugin, you should have compiled your squid
> to support kerberos, no?
>
> ---
>
> For your problem:
>
> try to check selinux log:
> audit2allow -al
> or cat /var/log/audit/audit.log | audit2allow
>
> You can also try to restore selinux context for all squid files:
> restorecon -R /etc/squid
> restorecon -R /var/log/squid
>
> etc...
>
> or touch /.autorelabel and reboot
>
>
> Tiery
>
> On Tue, May 18, 2010 at 9:47 AM, GIGO . wrote:
>>
>> Dear All,
>>
>> Your guidance is required. Please help.
>>
>> It looks that squid process run by default as a confined process whether 
>> its a com

Re: [squid-users] SELINUX issue(confined>unconfined)

2010-05-19 Thread Tiery DENYS
Hi,

In permissive mode, you only get logs, but selinux will not be active
(it will not forbid unauthorized access). Usually you put selinux in
permissive mode only in order to collect all the access-denied entries in
audit.log, so you can build a policy module or adjust file contexts.

I suggest you spend some time on selinux; it can really increase the
security of your proxy server.

But you will need to build a policy module for squid_kerb_auth, which
is not currently supported by the selinux policy on redhat-like systems.
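
For what it's worth, a minimal way to turn those audit.log denials into a
loadable module (the module name is only an example) is:

  grep squid /var/log/audit/audit.log | audit2allow -M mysquidlocal
  semodule -i mysquidlocal.pp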

Which distribution do you use?


Tiery


On Wed, May 19, 2010 at 6:17 AM, GIGO .  wrote:
>
> Thank you i will give it a try. However i am also thinking of running SELinux 
> in permissive mode for my proxy server. what do you say about it?
>
>
> regards,
>
> Bilal
>
> 
>> Date: Tue, 18 May 2010 15:00:05 +0200
>> From: tiery.de...@gmail.com
>> To: gi...@msn.com
>> CC: squid-users@squid-cache.org
>> Subject: Re: [squid-users] SELINUX issue(confined>unconfined)
>>
>> okay,
>>
>> I have also worked on a similar project (squid/kerberos/selinux).
>> I installed squid in /usr/local/squid but I had to modify
>> /etc/selinux/targeted/contexts/files/file_contexts and adapt it to my
>> squid directory.
>>
>> /usr/local/squid/etc(/.*)? system_u:object_r:squid_conf_t:s0
>> /usr/local/squid/var/logs(/.*)? system_u:object_r:squid_log_t:s0
>> /usr/local/squid/share(/.*)? system_u:object_r:squid_conf_t:s0
>> /usr/local/squid/var/cache(/.*)? system_u:object_r:squid_cache_t:s0
>> /usr/local/squid/sbin/squid -- system_u:object_r:squid_exec_t:s0
>> /usr/local/squid/var/logs/squid\.pid -- system_u:object_r:squid_var_run_t:s0
>> /usr/local/squid/libexec(/.*)? system_u:object_r:lib_t:s0
>> /usr/local/squid -d system_u:object_r:bin_t:s0
>> /usr/local/squid/var -d system_u:object_r:var_t:s0
>>
>> Then restore context (with restorecon or .autorelabel and reboot).
>>
>> But I am not sure modifying this file is the best way.
>> If you update your selinux policy, the change will not be persistent.
>>
>> I think it is better to build a selinux module for our squid.
>>
>> Tiery
>>
>>
>>
>> On Tue, May 18, 2010 at 2:34 PM, GIGO . wrote:
>>>
>>> Yes i am using a compiled version. I have used this command chcon -t 
>>> unconfined_exec_t /usr/sbin/squid and its working now. Is this a security 
>>> issue?
>>>
>>> regards,
>>>
>>> Bilal
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> 
 Date: Tue, 18 May 2010 14:26:06 +0200
 From: tiery.de...@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] SELINUX issue(confined>unconfined)

 Hi,

 ps -Z => squid_t and getenforce => enforcing
 squid is started with selinux

 Redhat/centos platform:
 If squid is installed with yum, squid will be started with a squid_t
 selinux context.

 If you compiled your squid and installed it, you will have to change
 the squid file contexts manually.

 As I see you have squid_kerb_plugin, you should have compiled your squid
 to support kerberos, no?

 ---

 For your problem:

 try to check selinux log:
 audit2allow -al
 or cat /var/log/audit/audit.log | audit2allow

 You can also try to restore selinux context for all squid files:
 restorecon -R /etc/squid
 restorecon -R /var/log/squid

 etc...

 or touch /.autorelabel and reboot


 Tiery

 On Tue, May 18, 2010 at 9:47 AM, GIGO . wrote:
>
> Dear All,
>
> Your guidance is required. Please help.
>
> It looks like the squid process runs by default as a confined process, whether
> it is a compiled version or a version that comes with the linux distro. That
> means the squid software is SELinux aware. Am I right?
>
> [r...@squidlhr ~]# ps -eZ | grep squid
> system_u:system_r:squid_t 3173 ? 00:00:00 squid
> system_u:system_r:squid_t 3175 ? 00:00:00 squid
> system_u:system_r:squid_t 3177 ? 00:00:00 squid
> system_u:system_r:squid_t 3179 ? 00:00:00 squid
> system_u:system_r:squid_t 3222 ? 00:00:00 unlinkd
> system_u:system_r:squid_t 3223 ? 00:00:00 unlinkd
>
>
> It was successful before I changed selinux to enforcing. Now I cannot even
> start the squid process that accesses the parent at localhost(3128)
> manually. The other process starts normally if I start it manually.
>
> When running it as an unconfined process via the following command, the
> problem was resolved:
>
> chcon -t unconfined_exec_t /usr/sbin/squid
>
> However it does not feel appropriate to me. Please guide me on this.
>
>
>
> I am starting squid with the following init script, in case it has something
> to do with the problem:
>
> #!/bin/sh
> #
> #my script
> case "$1" in
> start)
> /usr/sbin/squid -D -sYC -f /etc/squid/squidcache.conf
> /usr/sbin/squid -D -sYC -f /etc/squid/squid.c

[squid-users] acl aclname browser and wget

2010-05-19 Thread Andreas Moroder

Hello,

all our users have to authenticate via LDAP. I now would like to open 
the access from one machine but only for download via wget.


Does "acl aclname browser" work with wget

And how can I combine this acl together with the IP src acl?

Thanks
Andreas



RE: [squid-users] SELINUX issue

2010-05-19 Thread Henrik Nordström
Wed 2010-05-19 at 04:22 +, GIGO . wrote:
> Mine is a compiled version of squid; does it matter? Is it true that
> binaries available through a distro by default run in a confined domain,
> and if squid is compiled it will run in an unconfined domain?

This appears to depend on how you start the service and a couple of
other things. I haven't quite understood the exact details.

Regards
Henrik




RE: [squid-users] Squid 2.6 - Deny all users in a specific Active Directory OU (not group)

2010-05-19 Thread Henrik Nordström
Wed 2010-05-19 at 13:17 +1000, Kris Glynn wrote:

> Can the same be achieved with the NTLM helper given this initial 
> configuration ?
> 
> external_acl_type ldap_group ttl=300 children=40 %LOGIN 
> /usr/lib/squid/wbinfo_group.pl

That's a winbind NT domain helper, not NTLM.

> Can we allow/deny users in a specific OU with NTLM ?

Not to an OU, but to a user group within the Windows domain.
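
For example (the group name is only a placeholder), group-based control with
that helper looks something like:

  external_acl_type ldap_group ttl=300 children=40 %LOGIN /usr/lib/squid/wbinfo_group.pl
  acl denied_group external ldap_group InternetDenied
  http_access deny denied_group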

Regards
Henrik



[squid-users] Squid Reverse proxy and https

2010-05-19 Thread Rakesh Jha
Hi experts,

We are running Squid version 2.7.STABLE8 in acceleration mode. What we
want to achieve is that when the site is accessed through the squid
reverse proxy, the web site should prompt with an authentication window. The
authentication request is sent to Active Directory by the IIS server before
granting further access to the web site.

This works perfectly OK with HTTP - we go to http://squid-rev.domain.com, we
get the authentication window, and after correctly entering the user name and
password we get full access to the site.

Now the problem - when we configure an SSL certificate and go to
https://squid-rev.domain.com, we get the authentication window and after that
nothing appears on screen.

We tried various options but with no success. Please help. The squid.conf is
as follows -

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl all_dst dst 0.0.0.0/0.0.0.0
acl all src 0.0.0.0/0.0.0.0
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow all
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow owa_host
http_access deny all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 80 accel defaultsite=iishost vhost

# For HTTPS access -
https_port ip-add:443 cert=/path/selfsigned_cert.pem key=/path/key.pem

ssl_unclean_shutdown on
cache_peer iishost parent 80 0 no-query originserver login=PASS
hierarchy_stoplist cgi-bin ?
access_log /usr/local/squid/var/logs/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .   0   20% 4320
cache_effective_user squid
cache_effective_group squid
visible_hostname squid-Rev
icp_port 3130
coredump_dir /usr/local/squid/var/cache


Thanks & regards,
Rakesh Jha
---
Burgan Bank S.A.K
www.burgan.com