Re: [squid-users] Adding an extra header to TLS connection

2024-05-23 Thread Robin Wood
On Thu, 23 May 2024 at 18:00, Jonathan Lee  wrote:

> I do use ssl bump; again, it requires certificates installed on the devices,
> and a bump for some connections and a splice for the others. You must also
> add a URL list for items that must never be intercepted, like banks etc. I
> agree it is not an easy task; it took me years to get it to work correctly
> for what I needed. When it does work, it works beautifully: you can cache
> updates and reuse them, and you can use ClamAV on HTTPS traffic. It's not
> for everyone, but it will make you a wizard level 1000 if you can get it
> going.
>

Jonathan, can you give me an example of it working?

Oddly, you are replying to a message from Alex that I never received.

Alex, in answer to your questions...

I'm doing some testing against a client's site; they require a custom
header to allow my connections through their WAF. I could try to do this
manually with all my tools, but it would be easier to just have Squid do it
for me and then have the tools use Squid as their proxy. I can tell the
tools not to do cert checking, or I can use my own CA and import it into
the system store, so that is not a problem.

I've tried searching for Squid and SslBump and not found anything useful
that works with the current version; that is why I'm asking here. I was
hoping someone could point me at an example that would definitely work with
the current version of Squid.
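
For reference, the shape of config I am imagining is something like the
sketch below - untested, and the header name and token are placeholders for
whatever the client actually requires:

http_port 3128 ssl-bump cert=/etc/squid/myCA.pem generate-host-certificates=on
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump bump step1
ssl_bump bump all
# add the custom header the WAF expects to every bumped request
request_header_add X-WAF-Token "placeholder-token" all

I just don't know whether request_header_add behaves this way on bumped
traffic in the current version, hence the question.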

Robin


> Sent from my iPhone
>
> > On May 23, 2024, at 08:49, Alex Rousskov <
> rouss...@measurement-factory.com> wrote:
> >
> > On 2024-05-22 03:49, Robin Wood wrote:
> >
> >> I'm trying to work out how to add an extra header to a TLS connection.
> >
> > I assume that you want to add a header field to an HTTP request or
> response that is being transmitted inside a TLS connection between a TLS
> client (e.g., a user browser) and an HTTPS origin server.
> >
> > Do you control the client that originates that TLS connection (or its
> OS/environment) or the origin server? If you do not, then what you want is
> impossible -- TLS encryption exists, in part, to prevent such traffic
> modifications.
> >
> > If you control the client that originates that TLS connection (or its
> OS/environment), then you may be able to, in _some_ cases, add that header
> by configuring the client (or its OS/environment) to trust you as a
> Certificate Authority, minting your own X509 certificates, and configuring
> Squid to perform a "man in the middle" attack on client-server traffic,
> using your minted certificates. You can search for Squid SslBump to get
> more information about this feature, but the area is full of insurmountable
> difficulties and misleading advice. Avoid it if at all possible!
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> >> I've found information on how to do it on what I think is the pre-3.5
> release, but I can't find any useful information on doing it on the current
> version.
> >> Could someone give me an example or point me at some documentation on
> how to do it.
> >> Thanks
> >> Robin
> >> ___
> >> squid-users mailing list
> >> squid-users@lists.squid-cache.org
> >> https://lists.squid-cache.org/listinfo/squid-users
> >
> > ___
> > squid-users mailing list
> > squid-users@lists.squid-cache.org
> > https://lists.squid-cache.org/listinfo/squid-users
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Adding an extra header to TLS connection

2024-05-22 Thread Robin Wood
Hi
I'm trying to work out how to add an extra header to a TLS connection.

I've found information on how to do it on what I think is the pre-3.5
release, but I can't find any useful information on doing it on the current
version.

Could someone give me an example or point me at some documentation on how
to do it?

Thanks

Robin
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] stale-if-error returning a 502

2024-02-12 Thread Robin Carlisle
Thanks for the clarification on max-stale, although it is unintentionally
ideal for my use-case.
Best,
Robin


On Mon, 12 Feb 2024 at 16:06, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 2024-02-12 10:13, Robin Carlisle wrote:
>
> > I have been having success so far with the config workaround. Config
> > snippet :-
> >
> > max_stale 31536000 seconds
> > refresh_pattern . 0  20% 4320 max-stale=31536000
> >
> > When an object has expired due to max-age and the PC is offline
> > (ethernet unplugged), squid attempts an origin refresh and gives me :
> >
> > 0 ::1 TCP_REFRESH_FAIL_OLD/200 35965 GET
> > https://widgets.api.labs.dev.framestoresignage.com/api/v1/instagram/labs/posts.json
> > - HIER_NONE/- application/json
> >
> > Previously it had been passing the 502 through to the client application.
>
> Glad this workaround helps. Just keep in mind that the configuration
> snippet above changes max-stale for _all_ responses.
>
>
> > I am continuing to test this - but it looks like I have a working
> solution.
>
> Meanwhile, the fix for the underlying Squid bug was officially accepted
> and should become a part of v6.8 release (at least).
>
>
> Thank you,
>
> Alex.
>
>
> > On Fri, 9 Feb 2024 at 14:31, Alex Rousskov wrote:
> >
> > On 2024-02-09 08:53, Robin Carlisle wrote:
> >
> >  > I am trying the config workaround approach.
> >
> > Please keep us posted on your progress.
> >
> >  >  Below is the config snippet I have added. I made the
> >  > assumption that for the refresh_pattern max-stale=NN config,
> > the NN
> >  > is in minutes as per the rest of that config directive.
> >
> > That assumption is natural but incorrect: Unlike the anonymous
> > positional min and max parameters (that use minutes), refresh_pattern
> > max-stale=NN uses seconds. Documentation improvements are welcome.
> >
> > That said, the workaround should still prevent the application of the
> > broken default refresh_pattern max-stale=0 rule, so you should still
> > see
> > positive results for the first NN seconds of the response age.
> >
> > Instead of specifying max-stale=NN, consider adding refresh_pattern
> > rules recommended by squid.conf.documented (and included in
> > squid.conf.default). Those rules do not have max-stale options at
> all,
> > and, hence, Squid will use (explicit or default) max_stale directive
> > instead.
> >
> > HTH,
> >
> > Alex.
> >
> >
> >  > I am testing this right now
> >  >
> >  > # this should allow stale objects up to 1 year if allowed by
> >  > Cache-Control response headers ...
> >  >
> >  > # ... setting both options just in case
> >  >
> >  > max_stale 525600 minutes
> >  >
> >  > refresh_pattern . 0  20% 4320 max-stale=525600
> >  >
> >  >
> >  > Thanks again for your help
> >  >
> >  >
> >  > Robin
> >  >
> >  >
> >  >
> >  >
> >  > On Thu, 8 Feb 2024 at 17:42, Alex Rousskov
> >  > <rouss...@measurement-factory.com> wrote:
> >  >
> >  > Hi Robin,
> >  >
> >  >   AFAICT from the logs you have privately shared and your
> >  > squid.conf
> >  > that you have posted earlier, your Squid overwrites
> >  > stale-if-error=31536000 in the response with "refresh_pattern
> >  > max-stale=0" default. That 0 value is wrong. The correct value
> >  > should be
> >  > taken from max_stale directive that defaults to 1 week, not
> zero:
> >  >
> >  >   refresh_pattern
> >  >   ...
> >  >   max-stale=NN provide a maximum staleness factor. Squid
> > won't
> >  >   serve objects more stale than this even if it failed to
> >  >   validate the object. Default: use the max_stale global
> > limit.
> >  >
> >  > This wrong default is a Squid bug AFAICT. I posted an
> > 

Re: [squid-users] stale-if-error returning a 502

2024-02-12 Thread Robin Carlisle
Hi,

I have been having success so far with the config workaround. Config
snippet :-


max_stale 31536000 seconds
refresh_pattern . 0  20% 4320 max-stale=31536000

When an object has expired due to max-age and the PC is offline (ethernet
unplugged), squid attempts an origin refresh and gives me :

0 ::1 TCP_REFRESH_FAIL_OLD/200 35965 GET
https://widgets.api.labs.dev.framestoresignage.com/api/v1/instagram/labs/posts.json
- HIER_NONE/- application/json

Previously it had been passing the 502 through to the client application.

I am continuing to test this - but it looks like I have a working solution.
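
For anyone following along, my test loop is roughly the following (URL and
cert path as in my earlier posts; the exact commands are from memory):

# online: populate the cache
curl --proxy http://localhost:3129 --cacert /etc/squid/maul.pem \
  -v "https://widgets.api.labs.dev.framestoresignage.com/api/v1/instagram/labs/posts.json" \
  --output /tmp/posts.json
# then unplug the ethernet, wait past max-age, and repeat: access.log should
# show TCP_REFRESH_FAIL_OLD/200 instead of TCP_REFRESH_FAIL_ERR/502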

Thanks again for all your help on this,

Robin




On Fri, 9 Feb 2024 at 14:31, Alex Rousskov 
wrote:

> On 2024-02-09 08:53, Robin Carlisle wrote:
>
> > I am trying the config workaround approach.
>
> Please keep us posted on your progress.
>
> >  Below is the config snippet I have added. I made the
> > assumption that for the refresh_pattern max-stale=NN config, the NN
> > is in minutes as per the rest of that config directive.
>
> That assumption is natural but incorrect: Unlike the anonymous
> positional min and max parameters (that use minutes), refresh_pattern
> max-stale=NN uses seconds. Documentation improvements are welcome.
>
> That said, the workaround should still prevent the application of the
> broken default refresh_pattern max-stale=0 rule, so you should still see
> positive results for the first NN seconds of the response age.
>
> Instead of specifying max-stale=NN, consider adding refresh_pattern
> rules recommended by squid.conf.documented (and included in
> squid.conf.default). Those rules do not have max-stale options at all,
> and, hence, Squid will use (explicit or default) max_stale directive
> instead.
>
> HTH,
>
> Alex.
>
>
> > I am testing this right now
> >
> > # this should allow stale objects up to 1 year if allowed by
> > Cache-Control response headers ...
> >
> > # ... setting both options just in case
> >
> > max_stale 525600 minutes
> >
> > refresh_pattern . 0  20% 4320 max-stale=525600
> >
> >
> > Thanks again for your help
> >
> >
> > Robin
> >
> >
> >
> >
> > On Thu, 8 Feb 2024 at 17:42, Alex Rousskov
> > <rouss...@measurement-factory.com> wrote:
> >
> > Hi Robin,
> >
> >   AFAICT from the logs you have privately shared and your
> > squid.conf
> > that you have posted earlier, your Squid overwrites
> > stale-if-error=31536000 in the response with "refresh_pattern
> > max-stale=0" default. That 0 value is wrong. The correct value
> > should be
> > taken from max_stale directive that defaults to 1 week, not zero:
> >
> >   refresh_pattern
> >   ...
> >   max-stale=NN provide a maximum staleness factor. Squid won't
> >   serve objects more stale than this even if it failed to
> >   validate the object. Default: use the max_stale global limit.
> >
> > This wrong default is a Squid bug AFAICT. I posted an _untested_ fix
> as
> > Squid PR 1664: https://github.com/squid-cache/squid/pull/1664
> >
> > If possible, please test the corresponding patch:
> >
> https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch
> >
> > AFAICT, you can also work around that bug by configuring an explicit
> > refresh_pattern rule with an explicit max-stale option (see
> > squid.conf.documented for examples). I have not tested that theory
> > either.
> >
> >
> > HTH,
> >
> > Alex.
> >
> >
> > On 2024-02-07 13:45, Robin Carlisle wrote:
> >  > Hi,
> >  >
> >  > I have just started my enhanced logging journey and have a small
> > snippet
> >  > below that might illuminate the issue ...
> >  >
> >  > 2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507)
> >  > handleIMSReply: origin replied with error 502, forwarding to
> > client due
> >  > to fail_on_validation_err
> >  >
> >  > A few lines below in the log it looks like squid sent :-
> >  >
> >  > 2024/02/07 17:06:39.212 kid1| 11,2| Stream.cc(280)
> >

Re: [squid-users] stale-if-error returning a 502

2024-02-09 Thread Robin Carlisle
Hi,
Thanks for the info Alex.  Patching code and building is a little beyond me
tbh, especially as I would need this as a Debian package to deploy to many
machines.  With that in mind I am trying the config workaround approach.
Below is the config snippet I have added. I made the assumption that
for the refresh_pattern max-stale=NN config, the NN is in minutes as
per the rest of that config directive.
I am testing this right now

# this should allow stale objects up to 1 year if allowed by Cache-Control
response headers ...

# ... setting both options just in case

max_stale 525600 minutes

refresh_pattern . 0  20% 4320 max-stale=525600
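
For reference, the stock rules from squid.conf.documented (quoting them from
memory, so treat as approximate) carry no max-stale option at all, which
means the global max_stale directive would apply to them:

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320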


Thanks again for your help


Robin




On Thu, 8 Feb 2024 at 17:42, Alex Rousskov 
wrote:

> Hi Robin,
>
>  AFAICT from the logs you have privately shared and your squid.conf
> that you have posted earlier, your Squid overwrites
> stale-if-error=31536000 in the response with "refresh_pattern
> max-stale=0" default. That 0 value is wrong. The correct value should be
> taken from max_stale directive that defaults to 1 week, not zero:
>
>  refresh_pattern
>  ...
>  max-stale=NN provide a maximum staleness factor. Squid won't
>  serve objects more stale than this even if it failed to
>  validate the object. Default: use the max_stale global limit.
>
> This wrong default is a Squid bug AFAICT. I posted an _untested_ fix as
> Squid PR 1664: https://github.com/squid-cache/squid/pull/1664
>
> If possible, please test the corresponding patch:
>
> https://github.com/squid-cache/squid/commit/571973589b5a46d458311f8b60dcb83032fd5cec.patch
>
> AFAICT, you can also work around that bug by configuring an explicit
> refresh_pattern rule with an explicit max-stale option (see
> squid.conf.documented for examples). I have not tested that theory either.
>
>
> HTH,
>
> Alex.
>
>
> On 2024-02-07 13:45, Robin Carlisle wrote:
> > Hi,
> >
> > I have just started my enhanced logging journey and have a small snippet
> > below that might illuminate the issue ...
> >
> > 2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507)
> > handleIMSReply: origin replied with error 502, forwarding to client due
> > to fail_on_validation_err
> >
> > A few lines below in the log it looks like squid sent :-
> >
> > 2024/02/07 17:06:39.212 kid1| 11,2| Stream.cc(280) sendStartOfMessage:
> > HTTP Client REPLY:
> > -
> > HTTP/1.1 502 Bad Gateway
> > Server: squid/5.7
> > Mime-Version: 1.0
> > Date: Wed, 07 Feb 2024 17:06:39 GMT
> > Content-Type: text/html;charset=utf-8
> > Content-Length: 3853
> > X-Squid-Error: ERR_READ_ERROR 0
> > Vary: Accept-Language
> > Content-Language: en
> > X-Cache: MISS from labs-maul-st-15
> > X-Cache-Lookup: HIT from labs-maul-st-15:3129
> > Via: 1.1 labs-maul-st-15 (squid/5.7)
> > Connection: close
> >
> >
> > The rest of the logs are quite large and contain URLs I cannot put
> > here.   The logs were generated with debug_options to ALL,3.
> >
> > Any ideas?   Or should I generate more detailed logs and send them
> > privately?
> >
> > Thanks again,
> >
> > Robin
> >
> >
> >
> >
> > On Fri, 2 Feb 2024 at 11:20, Robin Carlisle
> > <robin.carli...@framestore.com> wrote:
> >
> > Hi, thanks for your reply.
> >
> > I have been looking at :
> >
> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
> >
> > The stale-if-error response directive indicates that the cache can
> > reuse a stale response when an upstream server generates an error,
> > or when the error is generated locally. Here, an error is considered
> > any response with a status code of 500, 502, 503, or 504.
> >
> > Cache-Control: max-age=604800, stale-if-error=86400
> > In the example above, the response is fresh for 7 days (604800s).
> > Afterwards, it becomes stale, but can be used for an extra 1 day
> > (86400s) when an error is encountered.
> >
> > After the stale-if-error period passes, the client will receive any
> > error generated
> >
> > Given what you have said and what the above docs say - I am still
> > confused as it looks like (in my test cases) the cached response can
> > be used for 3600 secs (this works), after which the cached response
> > can still be used for an additional 31536000 seconds on an error
> > (this doesnt work).
> >
&

Re: [squid-users] stale-if-error returning a 502

2024-02-07 Thread Robin Carlisle
Hi,

I have just started my enhanced logging journey and have a small snippet
below that might illuminate the issue ...

2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507)
handleIMSReply: origin replied with error 502, forwarding to client due to
fail_on_validation_err

A few lines below in the log it looks like squid sent :-

2024/02/07 17:06:39.212 kid1| 11,2| Stream.cc(280) sendStartOfMessage:
HTTP Client REPLY:
-
HTTP/1.1 502 Bad Gateway
Server: squid/5.7
Mime-Version: 1.0
Date: Wed, 07 Feb 2024 17:06:39 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3853
X-Squid-Error: ERR_READ_ERROR 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from labs-maul-st-15
X-Cache-Lookup: HIT from labs-maul-st-15:3129
Via: 1.1 labs-maul-st-15 (squid/5.7)
Connection: close


The rest of the logs are quite large and contain URLs I cannot put here.
The logs were generated with debug_options set to ALL,3.
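
The exact directive used was:

debug_options ALL,3

Raising just the sections seen in the snippets above to a higher level is an
untested alternative if more detail is needed, e.g.:

debug_options ALL,1 11,5 88,5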

Any ideas?   Or should I generate more detailed logs and send them
privately?

Thanks again,

Robin




On Fri, 2 Feb 2024 at 11:20, Robin Carlisle 
wrote:

> Hi, thanks for your reply.
>
> I have been looking at :
> https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
>
> The stale-if-error response directive indicates that the cache can reuse
> a stale response when an upstream server generates an error, or when the
> error is generated locally. Here, an error is considered any response with
> a status code of 500, 502, 503, or 504.
>
> Cache-Control: max-age=604800, stale-if-error=86400
>
> In the example above, the response is fresh for 7 days (604800s).
> Afterwards, it becomes stale, but can be used for an extra 1 day (86400s)
> when an error is encountered. After the stale-if-error period passes, the
> client will receive any error generated.
>
> Given what you have said and what the above docs say - I am still confused
> as it looks like (in my test cases) the cached response can be used for
> 3600 secs (this works), after which the cached response can still be used
> for an additional 31536000 seconds on an error (this doesn't work).
>
> I am going to dig into the error logging you suggested to see if I can
> make sense of that - and will send on if I can't.
>
> Thanks v much for your help again,
>
> Robin
>
>
>
>
>
> On Thu, 1 Feb 2024 at 18:27, Alex Rousskov <
> rouss...@measurement-factory.com> wrote:
>
>> On 2024-02-01 12:03, Robin Carlisle wrote:
>> > Hi, I am having trouble with stale-if-error response.
>>
>> If I am interpreting Squid code correctly, in primary use cases:
>>
>> * without a Cache-Control:stale-if-error=X in the original response,
>> Squid sends a stale object if revalidation results in a 5xx error;
>>
>> * with a Cache-Control:stale-if-error=X and object age at most X, Squid
>> sends a stale object if revalidation results in a 5xx error;
>>
>> * with a Cache-Control:stale-if-error=X and object age exceeding X,
>> Squid forwards the 5xx error response if revalidation results in a 5xx
>> error;
>>
>> In other words, stale-if-error=X turns on a "fail on validation errors"
>> behavior for stale objects older than X. It has no other effects.
>>
>> In your test case, the stale objects are much younger than
>> stale-if-error value (e.g., Age~=3601 vs. stale-if-error=31536000).
>> Thus, stale-if-error should have no relevant effect.
>>
>> Something else is probably preventing your Squid from serving the stale
>> response when facing a 5xx error. I do not know what that something is.
>>
>> I recommend sharing (privately if you need to protect sensitive info) a
>> pointer to a compressed ALL,9 cache.log collected while reproducing the
>> problem (using two transactions similar to the ones you have shared
>> below -- a successful stale hit and a problematic one):
>>
>> https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction
>>
>> Alternatively, you can try to study cache.log yourself after setting
>> debug_options to ALL,3. Searching for "refresh" and "handleIMSReply" may
>> yield enough clues.
>>
>>
>> HTH,
>>
>> Alex.
>>
>>
>>
>>
>> > # /etc/squid/squid.conf :
>> >
> >> > acl to_aws dstdomain .amazonaws.com
>> >
>> > acl from_local src localhost
>> >
>> > http_access allow to_aws
>> >
>> > http_access allow from_local
>> >
>> > cache allow all
>> >
>> > cache_dir ufs /var/cache/squid 1024 16 256
>> >
>> > http_port 3129 ssl-bump cert=/etc/squid/ma

Re: [squid-users] stale-if-error returning a 502

2024-02-02 Thread Robin Carlisle
Hi, thanks for your reply.

I have been looking at :
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control

The stale-if-error response directive indicates that the cache can reuse a
stale response when an upstream server generates an error, or when the
error is generated locally. Here, an error is considered any response with
a status code of 500, 502, 503, or 504.

Cache-Control: max-age=604800, stale-if-error=86400

In the example above, the response is fresh for 7 days (604800s).
Afterwards, it becomes stale, but can be used for an extra 1 day (86400s)
when an error is encountered. After the stale-if-error period passes, the
client will receive any error generated.

Given what you have said and what the above docs say - I am still confused
as it looks like (in my test cases) the cached response can be used for
3600 secs (this works), after which the cached response can still be used
for an additional 31536000 seconds on an error (this doesn't work).
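
To put numbers on my reading of those docs: with Cache-Control: max-age=3600,
stale-if-error=31536000 and an object fetched at time T, I would expect:

T       .. T+3600            fresh, served from cache
T+3600  .. T+3600+31536000   stale, but still served if the refresh errors
after that                   the error is passed through to the client

It is that middle window that is not working for me.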

I am going to dig into the error logging you suggested to see if I can make
sense of that - and will send on if I can't.

Thanks v much for your help again,

Robin





On Thu, 1 Feb 2024 at 18:27, Alex Rousskov 
wrote:

> On 2024-02-01 12:03, Robin Carlisle wrote:
> > Hi, I am having trouble with stale-if-error response.
>
> If I am interpreting Squid code correctly, in primary use cases:
>
> * without a Cache-Control:stale-if-error=X in the original response,
> Squid sends a stale object if revalidation results in a 5xx error;
>
> * with a Cache-Control:stale-if-error=X and object age at most X, Squid
> sends a stale object if revalidation results in a 5xx error;
>
> * with a Cache-Control:stale-if-error=X and object age exceeding X,
> Squid forwards the 5xx error response if revalidation results in a 5xx
> error;
>
> In other words, stale-if-error=X turns on a "fail on validation errors"
> behavior for stale objects older than X. It has no other effects.
>
> In your test case, the stale objects are much younger than
> stale-if-error value (e.g., Age~=3601 vs. stale-if-error=31536000).
> Thus, stale-if-error should have no relevant effect.
>
> Something else is probably preventing your Squid from serving the stale
> response when facing a 5xx error. I do not know what that something is.
>
> I recommend sharing (privately if you need to protect sensitive info) a
> pointer to a compressed ALL,9 cache.log collected while reproducing the
> problem (using two transactions similar to the ones you have shared
> below -- a successful stale hit and a problematic one):
>
> https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction
>
> Alternatively, you can try to study cache.log yourself after setting
> debug_options to ALL,3. Searching for "refresh" and "handleIMSReply" may
> yield enough clues.
>
>
> HTH,
>
> Alex.
>
>
>
>
> > # /etc/squid/squid.conf :
> >
> > acl to_aws dstdomain .amazonaws.com
> >
> > acl from_local src localhost
> >
> > http_access allow to_aws
> >
> > http_access allow from_local
> >
> > cache allow all
> >
> > cache_dir ufs /var/cache/squid 1024 16 256
> >
> > http_port 3129 ssl-bump cert=/etc/squid/maul.pem
> > generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> >
> > sslcrtd_program /usr/lib/squid/security_file_certgen -s
> > /var/lib/squid/ssl_db -M 4MB
> >
> > acl step1 at_step SslBump1
> >
> > ssl_bump bump step1
> >
> > ssl_bump bump all
> >
> > sslproxy_cert_error deny all
> >
> > cache_store_log stdio:/var/log/squid/store.log
> >
> > logfile_rotate 0
> >
> > shutdown_lifetime 3 seconds
> >
> >
> > # /usr/bin/proxy-test :
> >
> > #!/bin/bash
> >
> > curl --proxy http://localhost:3129 \
> >
> >--cacert /etc/squid/stuff.pem \
> >
> >-v "https://stuff.amazonaws.com/api/v1/stuff/stuff.json
> > <https://stuff.amazonaws.com/api/v1/stuff/stuff.json>" \
> >
> >-H "Authorization: token MYTOKEN" \
> >
> >-H "Content-Type: application/json" \
> >
> >--output "/tmp/stuff.json"
> >
> >
> >
> > Tests  ..
> >
> >
> > At this point in time the network cable is unattached.  Squid returns
> > the cached object it got when the network was online earlier. The Age of
> > this object is just still under the max_age of 3600. Previously I
> > was using offline_mode but I found that it did not try to revalidate
> > from the origin after the o

[squid-users] stale-if-error returning a 502

2024-02-01 Thread Robin Carlisle
Hi, I am having trouble with stale-if-error response.  I am making calls
using curl to an API (under my control) on Amazon AWS.  Config and details
below ...


# /etc/squid/squid.conf :

acl to_aws dstdomain .amazonaws.com

acl from_local src localhost

http_access allow to_aws

http_access allow from_local

cache allow all

cache_dir ufs /var/cache/squid 1024 16 256

http_port 3129 ssl-bump cert=/etc/squid/maul.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

sslcrtd_program /usr/lib/squid/security_file_certgen -s
/var/lib/squid/ssl_db -M 4MB

acl step1 at_step SslBump1

ssl_bump bump step1

ssl_bump bump all

sslproxy_cert_error deny all

cache_store_log stdio:/var/log/squid/store.log

logfile_rotate 0

shutdown_lifetime 3 seconds

# /usr/bin/proxy-test :

#!/bin/bash

curl --proxy http://localhost:3129 \

  --cacert /etc/squid/stuff.pem \

  -v "https://stuff.amazonaws.com/api/v1/stuff/stuff.json; \

  -H "Authorization: token MYTOKEN" \

  -H "Content-Type: application/json" \

  --output "/tmp/stuff.json"


Tests  ..

At this point in time the network cable is unattached.  Squid returns the
cached object it got when the network was online earlier. The Age of this
object is still just under the max-age of 3600. Previously I was using
offline_mode but I found that it did not try to revalidate from the origin
after the object expired (as defined by the max-age response directive).
My understanding is that stale-if-error should work under my circumstances.

# /var/log/squid/access.log

1706799404.440  6 127.0.0.1 NONE_NONE/200 0 CONNECT
stuff.amazonaws.com:443 - HIER_NONE/- -

1706799404.440  0 127.0.0.1 TCP_MEM_HIT/200 20726 GET
https://stuff.amazonaws.com/stuff.json - HIER_NONE/- application/json

# extract from /usr/bin/proxy-test

< HTTP/1.1 200 OK

< Date: Thu, 01 Feb 2024 13:57:11 GMT

< Content-Type: application/json

< Content-Length: 20134

< x-amzn-RequestId: 3a2d3b26-df73-4b30-88cb-1a9268fa0df2

< Last-Modified: 2024-02-01T13:00:45.000Z

< Access-Control-Allow-Origin: *

< x-amz-apigw-id: SdZwpG7qiYcERUQ=

< Cache-Control: public, max-age=3600, stale-if-error=31536000

< ETag: "cec102b43372840737ab773c2e77858b"

< X-Amzn-Trace-Id: Root=1-65bba337-292be751134161b03555cdd6

< Age: 3573

< X-Cache: HIT from labs-maul-st-31

< X-Cache-Lookup: HIT from labs-maul-st-31:3129

< Via: 1.1 labs-maul-st-31 (squid/5.7)

< Connection: keep-alive



Below, the curl script executes again.  The Age has gone over the max-age,
so squid attempted to refresh from the origin.  The machine is still
offline, so the refresh failed.  I expected that the stale-if-error
response would instruct squid to return the cached object as a 200.
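
In other words, I expected the second fetch to log a stale hit along the
lines below (my guess at the tag squid uses when serving a stale object
after a failed refresh):

1706799434.464  0 127.0.0.1 TCP_REFRESH_FAIL_OLD/200 20726 GET
https://stuff.amazonaws.com/stuff.json - HIER_NONE/- application/json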

# /var/log/squid/access.log

1706799434.464  5 127.0.0.1 NONE_NONE/200 0 CONNECT
stuff.amazonaws.com:443 - HIER_NONE/- -

1706799434.464  0 127.0.0.1 TCP_REFRESH_FAIL_ERR/502 4235 GET
https://stuff.amazonaws.com/stuff.json - HIER_NONE/- text/html

# extract from /usr/bin/proxy-test

< HTTP/1.1 502 Bad Gateway

< Server: squid/5.7

< Mime-Version: 1.0

< Date: Thu, 01 Feb 2024 14:57:14 GMT

< Content-Type: text/html;charset=utf-8

< Content-Length: 3853

< X-Squid-Error: ERR_READ_ERROR 0

< Vary: Accept-Language

< Content-Language: en

< X-Cache: MISS from labs-maul-st-31

< X-Cache-Lookup: HIT from labs-maul-st-31:3129

< Via: 1.1 labs-maul-st-31 (squid/5.7)

< Connection: close


Hope someone can help me with this.  All the best,

Robin Carlisle
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-21 Thread Robin Carlisle
Ah OK - understood.  Thanks for the explanation.

Robin

On Sat, 20 Jan 2024 at 12:17, Amos Jeffries  wrote:

> On 20/01/24 02:05, Robin Carlisle wrote:
> >
> > I do have 1 followup question which I think is unrelated; let me know if
> > etiquette demands I create a new post for this. When I test using the
> > chromium browser, chromium sends OPTIONS requests - which I think is
> > something to do with CORS. These always cause a cache MISS from squid... I
> > think because the return code is 204?
> >
>
> No, the reason is the HTTP specification (RFC 9110 section 9.3.7):
> "Responses to the OPTIONS method are not cacheable."
>
> If these actually are CORS (might be several other things also), then
> there are important differences in the response headers per-visitor.
> These cannot be cached, and Squid does not know how to correctly
> generate those headers. So having Squid auto-respond is not a good
> idea.
>
>
> Cheers
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-19 Thread Robin Carlisle
Thanks for the explanations Amos, much appreciated.

On Thu, 18 Jan 2024 at 16:24, Amos Jeffries  wrote:

> On 19/01/24 03:53, Robin Carlisle wrote:
> > Hi, Hoping someone can help me with this issue that I have been
> > struggling with for days now.   I am setting up squid on an ubuntu PC to
> > forward HTTPS requests to an API and an s3 bucket under my control on
> > amazon AWS.  The reason I am setting up the proxy is two-fold...
> >
> > 1) To reduce costs from AWS.
> > 2) To provide content to the client on the ubuntu PC if there is a
> > networking issue somewhere in between the ubuntu PC and AWS.
> >
> > Item 1 is going well so far.   Item 2 is not going well.   Setup details
> ...
> >
> ...
>
> >
> > When network connectivity is BAD, I get errors and a cache MISS.   In
> > this test case I unplugged the ethernet cable from the back on the
> > ubuntu-pc ...
> >
> > *# /var/log/squid/access.log*
> > 1705588717.420 11 127.0.0.1 NONE_NONE/200 0 CONNECT
> > stuff.amazonaws.com:443 - HIER_DIRECT/3.135.162.228 -
> > 1705588717.420  0 127.0.0.1 NONE_NONE/503 4087 GET
> > https://stuff.amazonaws.com/api/v1/stuff/stuff.json - HIER_NONE/-
> > text/html
> >
> > *# extract from /usr/bin/proxy-test output*
> > < HTTP/1.1 503 Service Unavailable
> > < Server: squid/5.7
> > < Mime-Version: 1.0
> > < Date: Thu, 18 Jan 2024 14:38:37 GMT
> > < Content-Type: text/html;charset=utf-8
> > < Content-Length: 3692
> > < X-Squid-Error: ERR_CONNECT_FAIL 101
> > < Vary: Accept-Language
> > < Content-Language: en
> > < X-Cache: MISS from ubuntu-pc
> > < X-Cache-Lookup: NONE from ubuntu-pc:3129
> > < Via: 1.1 ubuntu-pc (squid/5.7)
> > < Connection: close
> >
> > I have also seen it error in a different way with a 502 but with the
> > same ultimate result.
> >
> > My expectation/hope is that squid would return the cached object on any
> > network failure in between ubuntu-pc and the AWS endpoint - and continue
> > to return this cached object forever.   Is this something squid can do?
> >It would seem that offline_mode should do this?
> >
>
>
> FYI,  offline_mode is not a guarantee that a URL will always HIT. It is
> simply a form of "greedy" caching - where Squid will take actions to
> ensure that full-size objects are fetched whenever it lacks one, and
> serve things as stale HITs when a) it is not specifically prohibited,
> and b) a refresh/fetch is not working.
>
>
> The URL you are testing with should meet your expected behaviour due to
> the "Cache-Control: public, stale-if-error" header alone,
> regardless of offline_mode configuration.
>
>
> That said, getting a 5xx response when there is an object already in
> cache seems like something is buggy to me.
>
> A high level cache.log will be needed to figure out what is going on
> (see https://wiki.squid-cache.org/SquidFaq/BugReporting#full-debug-output
> ).
> Be aware this list does not permit large posts so please provide a link
> to download in your reply not attachment.
>
>
> Cheers
> Amos
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> https://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-19 Thread Robin Carlisle
Hi, thanks so much for the detailed response.  I chose to test option 2
from your recommendations as I am new to squid and I do not understand how
to set it up as a reverse proxy anyway.  I made the change to my squid.conf
:


#ssl_bump peek step1

ssl_bump bump step1

ssl_bump bump all


This made it work - which is great news.   My curl requests now are
satisfied by the cache when the pc is offline!


I do have 1 followup question which I think is unrelated; let me know if
etiquette demands I create a new post for this.  When I test using the
chromium browser, chromium sends OPTIONS requests - which I think is
something to do with CORS.  These always cause a cache MISS from squid... I
think because the return code is 204?


1705669236.776113 ::1 TCP_MISS/204 680 OPTIONS
https://stuff.amazonaws.com/api/v1/stuff/stuff.json - HIER_DIRECT/
3.135.146.17 application/json
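
Such a preflight can be reproduced by hand with something like the following
(the Origin and requested method are my guess at what chromium sends):

curl --proxy http://localhost:3129 --cacert /etc/squid/stuff.pem \
  -X OPTIONS \
  -H "Origin: http://localhost" \
  -H "Access-Control-Request-Method: GET" \
  -v "https://stuff.amazonaws.com/api/v1/stuff/stuff.json"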


I can prevent my chromium instance from making these (pointless?) OPTIONS
calls using the following args, but I would rather not have to do this.


--disable-web-security  --disable-features=IsolateOrigins,site-per-process


Any way I can get squid to cache these calls?


Thanks again and all the best,


Robin





On Thu, 18 Jan 2024 at 16:03, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 2024-01-18 09:53, Robin Carlisle wrote:
>
> > My expectation/hope is that squid would return the cached object on
> > any network failure in between ubuntu-pc and the AWS endpoint - and
> > continue to return this cached object forever.   Is this something
> > squid can do? It would seem that offline_mode should do this?
>
> Yes and yes. The reasons you are getting errors are not related to cache
> hits or misses. Those errors happen _before_ Squid gets the requested
> resource URL and looks up that resource in Squid cache.
>
> > ssl_bump peek step1
> > ssl_bump bump all
>
> To get that URL (in your configuration), Squid must bump the connection.
> To bump the connection at step2, Squid must contact the origin server.
> When the cable is unplugged, Squid obviously cannot do that: The attempt
> to open a Squid-AWS connection fails.
>
>  > .../200 0 CONNECT stuff.amazonaws.com:443 - HIER_DIRECT
>  > .../503 4087 GET https://stuff.amazonaws.com/api/... - HIER_NONE
>
> Squid reports bumping errors to the client using HTTP responses. To do
> that, Squid remembers the error response, bumps the client connection,
> receives GET from the client on that bumped connection, and sends that
> error response to the client. This is why you see both CONNECT/200 and
> GET/503 access.log records. Note that Squid does not check whether the
> received GET request would have been a cache hit in this case -- the
> response to that request has been preordained by the earlier bumping
> failure.
>
>
> Solution candidates to consider include:
>
> * Stop bumping: https_port 443 cert=/etc/squid/stuff.pem
>
> Configure Squid as (a reverse HTTPS proxy for) the AWS service. Use
> https_port. No SslBump rules/options! The client would think that it is
> sending HTTPS requests directly to the service. Squid will forward
> client requests to the service. If this works (and I do not have enough
> information to know that this will work in your specific environment),
> then you will get a much simpler setup.
>
>
> * Bump at step1, before Squid contacts AWS: ssl_bump bump all
>
> Bugs notwithstanding, there will be no Squid-AWS connection for cache
> hits. The resulting certificate will not be based on AWS service info,
> but it looks like your client is ignorant enough to ignore related
> certificate problems.
>
>
> HTH,
>
> Alex.
>
>
> > Hi, Hoping someone can help me with this issue that I have been
> > struggling with for days now.   I am setting up squid on an ubuntu PC to
> > forward HTTPS requests to an API and an s3 bucket under my control on
> > amazon AWS.  The reason I am setting up the proxy is two-fold...
> >
> > 1) To reduce costs from AWS.
> > 2) To provide content to the client on the ubuntu PC if there is a
> > networking issue somewhere in between the ubuntu PC and AWS.
> >
> > Item 1 is going well so far.   Item 2 is not going well.   Setup details
> ...
> >
> > *# squid - setup cache folder*
> > mkdir -p /var/cache/squid
> > chown -R proxy:proxy  /var/cache/squid
> >
> > *# ssl - generate key*
> > apt --yes install squid-openssl libnss3-tools
> > openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
> >-subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com
> > <http://www.example.com>" \
> >-keyout /etc/squid/stuff.pem -out /etc/squid/stuff.pem
> > chown root:proxy /etc/squid/stuff.pem

[squid-users] offline mode not working for me

2024-01-18 Thread Robin Carlisle
Hi, Hoping someone can help me with this issue that I have been struggling
with for days now.   I am setting up squid on an ubuntu PC to forward HTTPS
requests to an API and an s3 bucket under my control on amazon AWS.  The
reason I am setting up the proxy is two-fold...

1) To reduce costs from AWS.
2) To provide content to the client on the ubuntu PC if there is a
networking issue somewhere in between the ubuntu PC and AWS.

Item 1 is going well so far.   Item 2 is not going well.   Setup details ...

*# squid - setup cache folder*
mkdir -p /var/cache/squid
chown -R proxy:proxy  /var/cache/squid

*# ssl - generate key*
apt --yes install squid-openssl libnss3-tools
openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
  -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com" \
  -keyout /etc/squid/stuff.pem -out /etc/squid/stuff.pem
chown root:proxy /etc/squid/stuff.pem
chmod 644  /etc/squid/stuff.pem

*# ssl - ssl DB*
mkdir -p /var/lib/squid
rm -rf /var/lib/squid/ssl_db
/usr/lib/squid/security_file_certgen -c -s /var/lib/squid/ssl_db -M 4MB
chown -R proxy:proxy /var/lib/squid/ssl_db

*# /etc/squid/squid.conf :*
acl to_aws dstdomain .amazonaws.com
acl from_local src localhost
http_access allow to_aws
http_access allow from_local
cache allow all
cache_dir ufs /var/cache/squid 1024 16 256
offline_mode on
http_port 3129 ssl-bump cert=/etc/squid/stuff.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/security_file_certgen -s
/var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
sslproxy_cert_error deny all
cache_store_log stdio:/var/log/squid/store.log
logfile_rotate 0

*# /usr/bin/proxy-test :*
#!/bin/bash
curl --proxy http://localhost:3129 \
  --cacert /etc/squid/stuff.pem \
  -v "https://stuff.amazonaws.com/api/v1/stuff/stuff.json; \
  -H "Authorization: token MYTOKEN" \
  -H "Content-Type: application/json" \
  --output "/tmp/stuff.json"



When network connectivity is GOOD, everything works well and I get cache
HITS ...

*# /var/log/squid/access.log*
1705587538.837238 127.0.0.1 NONE_NONE/200 0 CONNECT
stuff.amazonaws.com:443 - HIER_DIRECT/3.136.246.238 -
1705587538.838  0 127.0.0.1 TCP_MEM_HIT/200 32818 GET
https://stuff.amazonaws.com/api/v1/stuff/stuff.json - HIER_NONE/-
application/json

*# extract from /usr/bin/proxy-test output*
< HTTP/1.1 200 OK
< Date: Thu, 18 Jan 2024 13:38:01 GMT
< Content-Type: application/json
< Content-Length: 32187
< x-amzn-RequestId: 8afba80e-6df7-4d5b-a34b-a70bd9b54380
< Last-Modified: 2024-01-03T11:23:19.000Z
< Access-Control-Allow-Origin: *
< x-amz-apigw-id: RvN1CF2_iYcEokA=
< Cache-Control: max-age=2147483648,public,stale-if-error
< ETag: "53896156c4e8e26933188a092c4e40f1"
< X-Amzn-Trace-Id: Root=1-65a929b9-3bd3285934151c1a2495481a
< Age: 2578
< Warning: 110 squid/5.7 "Response is stale"
< X-Cache: HIT from ubuntu-pc
< X-Cache-Lookup: HIT from ubuntu-pc:3129
< Via: 1.1 ubuntu-pc (squid/5.7)
< Connection: keep-alive


When network connectivity is BAD, I get errors and a cache MISS.   In this
test case I unplugged the ethernet cable from the back on the ubuntu-pc ...

*# /var/log/squid/access.log*
1705588717.420 11 127.0.0.1 NONE_NONE/200 0 CONNECT
stuff.amazonaws.com:443 - HIER_DIRECT/3.135.162.228 -
1705588717.420  0 127.0.0.1 NONE_NONE/503 4087 GET
https://stuff.amazonaws.com/api/v1/stuff/stuff.json - HIER_NONE/- text/html

*# extract from /usr/bin/proxy-test output*
< HTTP/1.1 503 Service Unavailable
< Server: squid/5.7
< Mime-Version: 1.0
< Date: Thu, 18 Jan 2024 14:38:37 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3692
< X-Squid-Error: ERR_CONNECT_FAIL 101
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from ubuntu-pc
< X-Cache-Lookup: NONE from ubuntu-pc:3129
< Via: 1.1 ubuntu-pc (squid/5.7)
< Connection: close

I have also seen it error in a different way with a 502 but with the same
ultimate result.

My expectation/hope is that squid would return the cached object on any
network failure in between ubuntu-pc and the AWS endpoint - and continue to
return this cached object forever.   Is this something squid can do?   It
would seem that offline_mode should do this?

Hope you can help,

Robin
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid Cache NEVER HIT's Only get TCP_MISS/200 and TCP_MISS/304

2014-03-07 Thread Robin Gwynne

I am struggling with my Squid Reverse proxy cache.  I have been all round the 
forums with no success in getting my Squid Proxy Cache to actually do any 
caching.  I am running Squid 3.1 on Debian 6

Can anyone suggest what might be wrong with my Squid.conf file?  I have 
verified that the correct permissions exist on the cache folder, cache folders 
are initialized, no errors are returned from running squid3 -k parse
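
For reference, the checks described above were along these lines (Debian
paths assumed):

squid3 -k parse              # validate squid.conf; reports no errors
ls -ld /var/spool/squid3     # cache folder owned by the proxy user
squid3 -z                    # (re)initialize the cache folders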

Regards,

Robin

--Squid.conf--
http_port 80 accel ignore-cc defaultsite=richmedia.mydomain.com
cache_mem 500 MB
maximum_object_size_in_memory 5 KB
cache_dir ufs /var/spool/squid3 1 32 512 max-size=10485760
minimum_object_size 2 KB
maximum_object_size 5000 MB
refresh_pattern -i \.(gif|png|jpg|jpeg|ico|bmp|xml)$ 26 90% 260009
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$ 26 90% 260009
refresh_pattern . 26 90% 260009

acl xdomain urlpath_regex ^/crossdomain.xml

cache_peer 94.125.16.13 parent 80 0 no-query no-digest originserver name=server1
cache_peer_access server1 deny xdomain
cache_peer 162.13.17.12 parent 8080 0 no-query no-digest originserver name=server2
cache_peer_access server2 allow xdomain
cache_peer_access server2 deny all
cache allow all
http_access allow all
cache_effective_user proxy
cache_effective_group proxy

--Access.log output--
1394187754.972    108 195.157.14.29 TCP_MISS/200 118376 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187754.992 30 62.232.36.16 TCP_MISS/200 1004 GET 
http://richmedia.mydomain.com/favicon.ico - FIRST_UP_PARENT/server1 image/x-icon
1394187755.163 94 62.232.36.16 TCP_MISS/200 68954 GET 
http://richmedia.mydomain.com/media/webinar/supplier/I-Holland-2013Webinar/slides/Slide00029.swf
 - FIRST_UP_PARENT/server1 application/x-shockwave-flash
1394187765.378   9794 195.157.14.29 TCP_MISS/200 1696587 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187768.885    136 195.157.14.29 TCP_MISS/200 169077 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187782.779 38 62.232.3.16 TCP_MISS/200 1611 GET 
http://richmedia.mydomain.com/media/webinar/supplier/I-Holland-2013Webinar/slides/Slide624911.htm
 - FIRST_UP_PARENT/server1 text/html
1394187783.461 35 79.171.8.14 TCP_MISS/200 8811 GET 
http://richmedia.mydomain.com/media/webinar/supplier/Kampffmeyer-14Nov13/index.htm
 - FIRST_UP_PARENT/server1 text/html
1394187788.851  19370 195.157.14.29 TCP_MISS/200 3110156 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187792.101  66784 195.157.14.29 TCP_MISS/206 3961057 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187793.415    100 195.157.14.29 TCP_MISS/200 154126 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187807.537  13461 195.157.14.29 TCP_MISS/200 2109420 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187819.670  3 95.131.10.18 TCP_MISS/200 607 GET 
http://richmedia.mydomain.com/crossdomain.xml - FIRST_UP_PARENT/server2 
application/xml
1394187838.664    144 195.157.14.29 TCP_MISS/200 115568 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187855.303  35596 95.131.10.18 TCP_MISS/200 75550871 GET 
http://richmedia.mydomain.com/content/download/424921/8844388/file/Apprenticeships.mp4
 - FIRST_UP_PARENT/server1 video/mp4
1394187867.488  28168 195.157.14.29 TCP_MISS/200 3961100 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg



[squid-users] RE: Squid Cache NEVER HIT's Only get TCP_MISS/200 and TCP_MISS/304

2014-03-07 Thread Robin Gwynne
My copy and paste was not correct in the original post.  I have corrected my 
conf file below.

Robin

-Original Message-
From: Robin Gwynne [mailto:robin.gwy...@wrbm.com] 
Sent: 07 March 2014 10:46
To: squid-users@squid-cache.org
Subject: [squid-users] Squid Cache NEVER HIT's Only get TCP_MISS/200 and 
TCP_MISS/304


I am struggling with my Squid Reverse proxy cache.  I have been all round the 
forums with no success in getting my Squid Proxy Cache to actually do any 
caching.  I am running Squid 3.1 on Debian 6

Can anyone suggest what might be wrong with my Squid.conf file?  I have 
verified that the correct permissions exist on the cache folder, cache folders 
are initialized, no errors are returned from running squid3 -k parse

Regards,

Robin

--Squid.conf--
http_port 80 accel ignore-cc defaultsite=richmedia.mydomain.com
cache_mem 500 MB
maximum_object_size_in_memory 5 KB
cache_dir ufs /var/spool/squid3 1 32 512 max-size=10485760
minimum_object_size 2 KB
maximum_object_size 5000 MB
refresh_pattern -i \.(gif|png|jpg|jpeg|ico|bmp|xml)$ 26 90% 260009 
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv|mpg|wma|ogg|wmv|asx|asf)$ 26 90% 260009
refresh_pattern . 26 90% 260009
acl xdomain urlpath_regex ^/crossdomain.xml
cache_peer 94.125.16.13 parent 80 0 no-query no-digest originserver name=server1
cache_peer_access server1 deny xdomain
cache_peer 162.13.17.12 parent 8080 0 no-query no-digest originserver name=server2
cache_peer_access server2 allow xdomain
cache_peer_access server2 deny all
cache allow all 
http_access allow all
--Access.log output--
1394187754.972    108 195.157.14.29 TCP_MISS/200 118376 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187754.992 30 62.232.36.16 TCP_MISS/200 1004 GET 
http://richmedia.mydomain.com/favicon.ico - FIRST_UP_PARENT/server1 image/x-icon
1394187755.163 94 62.232.36.16 TCP_MISS/200 68954 GET 
http://richmedia.mydomain.com/media/webinar/supplier/I-Holland-2013Webinar/slides/Slide00029.swf
 - FIRST_UP_PARENT/server1 application/x-shockwave-flash
1394187765.378   9794 195.157.14.29 TCP_MISS/200 1696587 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187768.885    136 195.157.14.29 TCP_MISS/200 169077 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187782.779 38 62.232.3.16 TCP_MISS/200 1611 GET 
http://richmedia.mydomain.com/media/webinar/supplier/I-Holland-2013Webinar/slides/Slide624911.htm
 - FIRST_UP_PARENT/server1 text/html
1394187783.461 35 79.171.8.14 TCP_MISS/200 8811 GET 
http://richmedia.mydomain.com/media/webinar/supplier/Kampffmeyer-14Nov13/index.htm
 - FIRST_UP_PARENT/server1 text/html
1394187788.851  19370 195.157.14.29 TCP_MISS/200 3110156 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187792.101  66784 195.157.14.29 TCP_MISS/206 3961057 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187793.415    100 195.157.14.29 TCP_MISS/200 154126 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187807.537  13461 195.157.14.29 TCP_MISS/200 2109420 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187819.670  3 95.131.10.18 TCP_MISS/200 607 GET 
http://richmedia.mydomain.com/crossdomain.xml - FIRST_UP_PARENT/server2 
application/xml
1394187838.664    144 195.157.14.29 TCP_MISS/200 115568 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg
1394187855.303  35596 95.131.10.18 TCP_MISS/200 75550871 GET 
http://richmedia.mydomain.com/content/download/424921/8844388/file/Apprenticeships.mp4
 - FIRST_UP_PARENT/server1 video/mp4
1394187867.488  28168 195.157.14.29 TCP_MISS/200 3961100 GET 
http://richmedia.mydomain.com/content/download/383683/8226052/file/Tim%20Storer.mp3
 - FIRST_UP_PARENT/server1 audio/mpeg



[squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Robin Bonin
I have a squid reverse proxy working for all the domains that I
specify in the squid.conf file. I would like to add an additional
default rule, if the domain does not match one of the known domains.

I am mapping the domains to the particular servers using the following
config lines.

 cache_peer 10.10.20.15 parent 80 0 no-query no-digest originserver 
 name=lamp_server login=PASS
 acl sites_lamp dstdomain (list of domain names here)
 cache_peer_access lamp_server allow sites_lamp

is there an additional acl line that I can use for "other"?


Re: [squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Robin Bonin
My goal is to get a handful of domains redirected to a lamp server and
the rest defaulted to my windows server.

I tried adding "all" to the windows server cache_peer_access line, then
all traffic went to my windows server. I also tried playing with the
position of that line. It seems like no matter where it is, when I have
"all" in there, all traffic is redirected there.



On Tue, Jul 5, 2011 at 3:06 PM, Kinkie <gkin...@gmail.com> wrote:
 On Tue, Jul 5, 2011 at 9:58 PM, Robin Bonin <rbo...@gmail.com> wrote:
 I have a squid reverse proxy working for all the domains that I
 specify in the squid.conf file. I would like to add an additional
 default rule, if the domain does not match one of the known domains.

 I am mapping the domains to the particular servers using the following
 config lines.

 cache_peer 10.10.20.15 parent 80 0 no-query no-digest originserver 
 name=lamp_server login=PASS
 acl sites_lamp dstdomain (list of domain names here)
 cache_peer_access lamp_server allow sites_lamp

 is there an additional acl line that I can use for other?

 "all" will do; just place it at the end of your cache_peer_access lines.

 --
     /kinkie



Re: [squid-users] Need help with multiple web server reverse proxy

2011-07-05 Thread Robin Bonin
Thanks, that did it, I appreciate your help.

On Tue, Jul 5, 2011 at 7:27 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
 On Tue, 5 Jul 2011 17:41:32 -0500, Robin Bonin wrote:

 My goal is to get a handful of domains redirected to a lamp server and
 the rest defaulted to my windows server.

 I tried adding "all" to the windows server cache_peer_access line, then
 all traffic went to my windows server. I also tried playing with the
 position of that line. It seems like no matter where it is, when I have
 "all" in there, all traffic is redirected there.


 Like this:
  cache_peer_access lamp_server allow sites_lamp
  cache_peer_access lamp_server deny all

  cache_peer_access windows_server deny sites_lamp
  cache_peer_access windows_server allow all


 Amos




Re: [squid-users] RE: problem with 1 site?

2011-05-18 Thread robin
It's not going to HTTPS, is it?

TT

On Wed, 2011-05-18 at 15:44 -0500, Brian Tuley wrote:
 I have an update on this...
 
 It appears that Firefox as a browser works just fine.  The problem only 
 manifests itself on IE (both XP & Win7 experience the same problem).
 
 Is there something that causes IE to behave strangely on some sites when 
 going through squid?  Are there IE incompatibilities under some 
 circumstances? Any settings to improve this?
 When not using squid, it works fine.  After a click or 2 it literally takes 
 30sec to a minute to pull up a page on this 1 site.
 
 Thanks
 -Brian
 
 
 
 
 -Original Message-
 From: Brian Tuley [mailto:btu...@midcoconnections.com]
 Sent: Tuesday, May 03, 2011 11:50 AM
 To: 'squid-users@squid-cache.org'
 Subject: [squid-users] problem with 1 site?
 
 I've got a squid proxy server 2.7 on an Ubuntu 64 (10.04.02) server.   4gb 
 ram, plenty of drive space...
 
 It runs all of my 200 users just fine... Except for 1 site.
 When I go to the site below, it comes up in a browser (IE), then just dies 
 after a few navigations.
 When I remove Squid proxy from the equation, the site is slow, but not 
 horrible.
 
 When using squid, I get the site, click a few links then it seems to time out.
 
 Any suggestions on improving performance?  Is it squid or the site?
 
 
 Thanks
 -Brian
 
 
 
 Here's the site:
 http://www.natlallergy.com/
 
 
 The site is heavily cross linked with marketing crap.  I don't manage the 
 site, just some of the infrastructure in a my call center.
 
 
 I see some high load times  in access.log:
 
 




Re: [squid-users] performance question, 1 or 2 NIC's?

2010-08-29 Thread Robin
I only use 2 cards when one is on a private lan and the other on a
public routed interface.

And I usually run up to 700 users on average on the boxes I build.


What are you expecting the 300 users to be doing?

Rob


On 29/08/2010 18:10, Jose Ildefonso Camargo Tolosa wrote:

Hi!

On Sat, Aug 28, 2010 at 11:11 PM, Andreifunactivit...@gmail.com  wrote:
   

Ooo... the line between Squid and the clients is 1000 MB. My internet
connection is 12MB. Not sure if that changes things. Does it? Would it
make a difference in that situation if clients (from 1000Mb) come on
one line, eth0 and get cached on eth1 which is only 12MB.
 

I assume that MB=Mega Bits (and *not* Megabytes).

If that's the case: is the squid NIC 1Gbps? if so: these are usually
full-duplex (and = to clients connection speed), so: no, you will not
have a real benefit from adding another NIC, but, if you insist, you
could do it without changing most of your configuration, by adding two
NICs together with bonding (and a port-channel on your switch, if it
supports it).



   

Sorry if I wasn't clear before




On Sat, Aug 28, 2010 at 5:12 PM, Amos Jeffriessqu...@treenet.co.nz  wrote:
 

Leonardo Rodrigues wrote:
   

On 28/08/2010 12:29, Andrei wrote:
 

I'm setting up a transparent Squid box for 300 users. All requests
from the router are sent to the Squid box. Squid box has one NIC,
eth0. This box receives requests (from clients) and catches content
from the web using this one NIC on its one WAN port, eth0.

Question: would it improve performance of the Squid box if I was
receiving requests (from the clients) on eth0 and caching content on
eth1? In other words, is there a benefit of using two NIC's vs. one?
This is a public IP/WAN Squid box. Both eth0 and eth1 would have a WAN
(public IP) address.


I'm on a 12Mb line.

   


Your limitation is your 12Mb line & any decent hardware can handle
that with no problem at all. ANY 100Mbit NIC, even onboard and
cheaper/generic ones, can handle 12Mbit with no problem at all.

I really don't think adding another NIC will improve your performance,
given your 12Mbit WAN limitation.


 

Indeed.

Andrei wrote:
  Whether anything can be done by Squid depends on whether the clients using
Squid are on the outside of that 12Mb line or on some faster connection
between them and Squid.

  For a faster internal connection and slower Internet connection you can
look towards raising the hit ratio, probably the byte hits specifically.
That will drop the load on the Internet line and make the whole network
appear faster to users. The holy grail for forward proxies seems to be 50%,
with reality coming in between 20% and 45% depending on your clients and
storage space.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.7
  Beta testers wanted for 3.2.0.1
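
For reference, the squid.conf directives usually tuned when chasing byte
hit ratio look like this; the values below are illustrative placeholders,
not recommendations:

  # give the cache enough disk and memory to be useful
  cache_dir ufs /var/spool/squid 20000 16 256
  cache_mem 256 MB
  # allow large objects (e.g. software updates) to be cached
  maximum_object_size 200 MB
  # the stock refresh rule; raising percentages trades freshness for hits
  refresh_pattern . 0 20% 4320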

   
 


   




Re: [squid-users] Who uses squid

2009-06-20 Thread ROBIN
Do you mean people using it, or programs that can use it?

Either way you're not going to get a complete list, as any program that
uses the HTTP protocol should work with Squid.

Rob


On Sat, 2009-06-20 at 13:05 -0400, Alain Guerrero Enamorado wrote:
 ¿How can I get a complete list of squid clients?
 



Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread ROBIN
Do you actually require these directives? If not, then comment them out.

Rob


On Tue, 2009-04-07 at 10:02 -0700, Henrique M. wrote:
 I'm trying to run squid but I'm getting a few error msgs:
 
  * Starting Squid HTTP proxy squid

 2009/04/07 13:25:53| parseConfigFile: squid.conf:67 unrecognized:
 'wais_relay_port'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:100 unrecognized:
 'incoming_icp_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:101 unrecognized:
 'incoming_http_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:102 unrecognized:
 'incoming_dns_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:103 unrecognized:
 'min_icp_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:104 unrecognized:
 'min_dns_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:105 unrecognized:
 'min_http_poll_cnt'
 
 Could you guys help me solve this?
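
The unrecognized names look like Squid 2.x-era tuning directives that the
installed release no longer accepts, so the quick fix is the one suggested
above: comment out the offending lines (67 and 100-105) in squid.conf. A
sketch, with the old default values standing in for whatever the file
actually contains:

  # wais_relay_port 0
  # incoming_icp_average 6
  # incoming_http_average 4
  # incoming_dns_average 4
  # min_icp_poll_cnt 8
  # min_dns_poll_cnt 8
  # min_http_poll_cnt 8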



Re: [squid-users] Getting error msgs when trying to start squid

2009-04-07 Thread ROBIN
Also, what version are you running? Is this a hand-crafted config or one
borrowed from somewhere else?

Post up the config from lines 66 to 106.

Rob


On Tue, 2009-04-07 at 10:02 -0700, Henrique M. wrote:
 I'm trying to run squid but I'm getting a few error msgs:
 
  * Starting Squid HTTP proxy squid

 2009/04/07 13:25:53| parseConfigFile: squid.conf:67 unrecognized:
 'wais_relay_port'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:100 unrecognized:
 'incoming_icp_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:101 unrecognized:
 'incoming_http_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:102 unrecognized:
 'incoming_dns_average'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:103 unrecognized:
 'min_icp_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:104 unrecognized:
 'min_dns_poll_cnt'
 2009/04/07 13:25:53| parseConfigFile: squid.conf:105 unrecognized:
 'min_http_poll_cnt'
 
 Could you guys help me solve this?



Re: [squid-users] Redirector startup output being logged to cache.log

2009-03-28 Thread ROBIN
Ok, will give that a whirl on monday when I am back in the office and
report back.

Thanks

Rob


On Sat, 2009-03-28 at 12:03 +1300, Amos Jeffries wrote:
 ROBIN wrote:
  Sorry Amos... I guess the old adage 'one man's meat is another man's poison'
  applies here.
  
  
  Where do I specify the debug options? Is it a build-time or run-time setting?
  
  Cheers, have a nice weekend
  
  Rob.
  
 
 run time in squid.conf
 
 Amos
 
  
  
  
  
  
  On Sat, 2009-03-28 at 11:47 +1300, Amos Jeffries wrote:
  twintu...@f2s.com wrote:
  3.0-Stable13
  SquidGuard 1.4
 
  Our SquidGuard configuration changes every minute.
 
  The problem is that with 3.0.S13 and SquidGuard 1.4 it seems to 
  constantly log
  the startup output of SquidGuard to the cache.log
 
 
  on our old server with S 2.5.S12 and SG 1.2 it did not log the redirector
  startup output.
 
  Any ideas as it makes looking for problems in the cache log a nightmare...
 
  Ta
 
  Rob
 
  Sigh, one user's bug fix is another's nightmare.
 
  Does debug_options 61,1 84,1 stop the flood?
 
  Amos
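
For reference, the run-time setting Amos refers to is a single squid.conf
line; a minimal sketch using his suggested section levels (ALL,1 is the
usual baseline):

  # keep everything at level 1, explicitly including the redirector/helper sections
  debug_options ALL,1 61,1 84,1

A squid -k reconfigure picks the change up without a restart.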
  
 
 



Re: [squid-users] Redirector startup output being logged to cache.log

2009-03-27 Thread ROBIN
Sorry Amos... I guess the old adage 'one man's meat is another man's poison'
applies here.


Where do I specify the debug options? Is it a build-time or run-time setting?

Cheers, have a nice weekend

Rob.






On Sat, 2009-03-28 at 11:47 +1300, Amos Jeffries wrote:
 twintu...@f2s.com wrote:
  3.0-Stable13
  SquidGuard 1.4
  
   Our SquidGuard configuration changes every minute.
  
  The problem is that with 3.0.S13 and SquidGuard 1.4 it seems to constantly 
  log
  the startup output of SquidGuard to the cache.log
  
  
  on our old server with S 2.5.S12 and SG 1.2 it did not log the redirector
  startup output.
  
  Any ideas as it makes looking for problems in the cache log a nightmare...
  
  Ta
  
  Rob
  
 
  Sigh, one user's bug fix is another's nightmare.
 
 Does debug_options 61,1 84,1 stop the flood?
 
 Amos



Re: [squid-users] Squid exitin periodicaly ( Preparing for shut down after )

2009-03-16 Thread ROBIN
I will have a look; the basic config file has been in use for about 10
years with no major issues. (God, I feel old now thinking about that.)

Will examine and post the manager ACLs.


Rob


On Tue, 2009-03-17 at 10:59 +1200, Amos Jeffries wrote:
  (Amos) Sorry did not reply to list Ignore..
 
  I wish SLES10 was more up to date on a few packages!!
 
  I can't find anything that may be shutting down squid; certainly there
  seem to be no cron jobs, and the issues are happening at approximately
  22 minute intervals, which is not consistent with a cron schedule.

  It's very odd, and it's been happening for a while, but we had not noticed.
 
  I may just try a full restart on the system.
 
  Thanks
 
  Rob
 
 You could also check what type of controls you have around the 'manager'
 ACL in squid.conf. Every visitor with an allow line before the deny
 manager line may have the option to restart Squid with an HTTP request.
 
 Amos
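
 For reference, the protective pattern Amos describes is the stock
 squid.conf arrangement; the point is that the manager lines come before
 any broad allow rules:

  acl manager proto cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  http_access allow manager localhost
  http_access deny manager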
 
 
 
 
  twintu...@f2s.com wrote:
  Squid 2.5STABLE12 on SLES10
 
  I know this is quite an old version but it's on our production machine.
 
 
  Yes. Please bug SLES about using a newer release.
 
 
  Anyway, we have a strange issue where squid seems to be shutting down
  every 22 minutes or so; the logs say 'Preparing for shut down after XXX
  requests'.
 
  Now every minute we do a squid -k reconfigure, as we run squidGuard and
  its config can change all the time. This has never seemed to be a problem
  in the past.
 
  I am building up a fresh machine to take over but would like to get this
  one
  working properly too.
 
  So far I have stopped the store.log being written and got the other logs
  rotating more than once a day to keep them small.
 
  I was previously getting errors about there being too few redirectors, so
  I upped that to 30; I have now set it back down to 10 to see what happens.
 
  Rob
 
  'Preparing for shut down after XXX requests' occurs when Squid receives
  its proper shutdown signal. A clean/graceful shutdown then follows.
 
  Amos
  --
  Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.6
 
 
 
 
 
 



[squid-users] fqdn blocking by...

2008-07-10 Thread Robin Clayton
Hi Guys.

If I enable log_fqdn,

and it actually worked (so far I don't seem to have any reverse lookup
working...)

would squidGuard be able to block by the c

[squid-users] filtering on FQDN

2008-07-08 Thread Robin Clayton
Hi Guys,

Can I filter on a source that is the Windows machine name rather than the
source IP?

would turning

[squid-users] Logging/Blocking URLs with question marks ?

2008-03-17 Thread Robin Clayton
Dear all

2.5-Stable-5

I have used squid for probably 8 years. 

It has recently come to my attention that sites with dynamic content, as
denoted by a '?' question mark, are not being logged or blocked in full.

So, for example, searches on Google do not show the full URL.

Is there any way to switch this on, as it's important to our filtering of
unwanted sites?

Thanks

Rob
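
For the logging half of this, later Squid releases expose a directive for
it; a one-line sketch, assuming a version that supports strip_query_terms
(it defaults to on, for privacy):

  # log complete URLs, query strings included
  strip_query_terms off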


[squid-users] Invalid Response: Blank HTTP Reply HDR

2007-11-08 Thread Robin
Hi,

I have recently set up a squid proxy on CentOS 4.5 using the default
package which is 2.5.STABLE14.

I have some problems with some pages on certain web apps (like
monster.co.uk) which result in an Invalid Response error page. When I
go through the logs it seems that the web server is returning a blank
response header (see below).

Unfortunately the web app providers are unresponsive and refuse to
make any changes since it is working for the other users. Is there any
configuration change/ workaround I can apply to allow my users to
access these sites?

Here is the error message, please let me know if I should supply any
more information.

2007/11/08 14:42:27| ctx: enter level  0:
'http://client.thomasinternational.net/client/Reports_ViewExisting.aspx?~VGVzdFR5cGU9MSY='
2007/11/08 14:42:27| httpProcessReplyHeader: key
'9F7879BB02C18F9C4CD9D7D227286BD6'
2007/11/08 14:42:27| GOT HTTP REPLY HDR:
-


--
2007/11/08 14:42:27| cleaning hdr: 0x8f9e458 owner: 2
2007/11/08 14:42:27| init-ing hdr: 0x8f9e458 owner: 2
2007/11/08 14:42:27| 0x8f9e458 lookup for 38
2007/11/08 14:42:27| 0x8f9e458 lookup for 9
2007/11/08 14:42:27| 0x8f9e458 lookup for 22
2007/11/08 14:42:27| cleaning hdr: 0x8f9e458 owner: 2
2007/11/08 14:42:27| init-ing hdr: 0x8f9e458 owner: 2
2007/11/08 14:42:27| 0x8f9e458 lookup for 38
2007/11/08 14:42:27| 0x8f9e458 lookup for 9
2007/11/08 14:42:27| 0x8f9e458 lookup for 22
2007/11/08 14:42:27| httpProcessReplyHeader: Non-HTTP-compliant header: '
'
2007/11/08 14:42:27| ctx: exit level  0

Any information/help/advice gratefully received.

Thanks,

Robin


Re: [squid-users] Active Rule Changing

2007-11-02 Thread Robin-Vossen

Thanks a lot, that helps...
If there is anything I can help you with, tell me.
I owe you one ;)

Cheers,
Robin



Angela Williams-2 wrote:
 
 On Friday 02 November 2007, Robin-Vossen wrote:
  I just set up my first Squid conf.
  I love it already. ^^
  But, well, I have a new problem now.
  I did define:
  acl Badwords url_regex -i /usr/local/etc/words.squid
  and
  http_access deny Badwords
  Since I thought I could change the words.squid file while squid was
  running, I tried that, and that didn't really work...
  So, my question now is:
  Is this possible? Or do I have to restart Squid every time that the words
  file changes?
 
 squid -kcheck
 squid -kreconfigure
 
 I always run the check in case I made a fluff of a file!
  The reconfigure simply tells squid to reread all its config files!
 
 Cheers
 Ang
 
 
 -- 
 Angela Williams   Enterprise Outsourcing
 Unix/Linux  Cisco spoken here!   Bedfordview
 [EMAIL PROTECTED] Gauteng South Africa
 
 Smile!! Jesus Loves You!!
 
 




Re: [squid-users] First Time squid Config Problem

2007-11-02 Thread Robin-Vossen

Thanks all.
It's working now =)

I think I am going to buy that O'Reilly book about Squid.
I only get some warnings now, all about netmasks, so
I think after some Googling that will be fixed as well.
Anyhow, thanks again.
I love Squid ^^ Oh, and what can be used in place of http_access?
Can that also be something else?
Well, thanks again... a whole lot.
I can make the login thing now ^^



[squid-users] Active Rule Changing

2007-11-02 Thread Robin-Vossen

I just set up my first Squid conf.
I love it already. ^^
But, well, I have a new problem now.
I did define:
acl Badwords url_regex -i /usr/local/etc/words.squid
and
http_access deny Badwords
Since I thought I could change the words.squid file while squid was running,
I tried that, and that didn't really work...
So, my question now is:
Is this possible? Or do I have to restart Squid every time that the words
file changes?

Please let me know. ^^

Cheers,
Robin



[squid-users] First Time squid Config Problem

2007-11-01 Thread Robin-Vossen

Hello, I am a first-time user of Squid.
I think it's great, and I want to get a certificate or something that shows
that I can fully operate Squid.
But that's not what my question is about.
My question is about my config.
My /etc/squid/squid.conf file is written by myself, and I think I made a
mistake somewhere, since when I start Squid it crashes.
It might be important that I run GNU/Linux with Gentoo 2007.0 with my own
configured kernel, so that might be a problem.
Anywho, my configuration is like this...

#Squid Config
#Used Doc http://www.visolve.com/squid/squid26/contents.php
 
http_port 5629
cache_mem 75 MB
visable_hostname firegate
cache_dir ufs /var/cache/squid 500 16 256
offline_mode on
maximun_object_size 102400 KB
reload_into_ims on
pipeline_prefetch on
 
##Define ACL
acl WAN src 192.168.24.0/255.255.255.0
acl LAN src 192.168.42.0/255.255.255.0
acl all src 0.0.0.0/0.0.0.0
acl busness_hours time M T W H F 8:30-18:00
acl break_time time M T W H F 11:00-14:00
acl BadSites dstdomain /usr/local/etc/restricted-sites.squid
acl BadWords url_regex -i /usr/local/etc/restricted-keywords.squid
acl BadFiles urlpath_regex -i /usr/local/etc/restricted-files.squid
acl ftp proto FTP
acl http proto HTTP
acl ssl proto SSL
acl ssh_port port 22 443 1
acl Admin-IP src /usr/local/etc/Admin-IP.squid
acl Admin-MAC arp /usr/local/etc/Admin-MAC.squid
acl User-IP src /usr/local/etc/User-IP.squid
acl User-MAC arp /usr/local/etc/User-MAC.squid
 
##Laws
allow ssh_ports LAN CONNECT
deny !USer-IP !Admin-IP
deny !User-MAC !Admin-MAC
deny !break_time BadSites User-IP
deny !break_time BadWords User-IP
deny !break_time BadFiles User-IP
allow User-IP business-hours
deny all

That's it...
I think I made some mistakes in the laws part.
And well, the Admin-IP thing is made this way since the IPs in that file
change...
People have to log on to the PC before they have access to the Internet.
This is done since this is a firewall box only (Squid + Snort + iptables).
Well, can somebody tell me what I've done wrong?
And, well, what books shall I buy to learn Squid? Since it really looks like
a promising project,
I think I want to get a certificate or something for it. (As I do want one
for Snort, Wireshark, iptables.)
But anywho, that aside: what have I done wrong,
and how can I fix it?
Thanks a lot already! (even if only for reading)

Cheers,
Robin :-)



[squid-users] Squid to Log DNS Querys

2007-11-01 Thread Robin-Vossen

Hello,
I wonder, is there a way to log all DNS requests that go out of our network
with Squid?
I noticed that we had a Trojan horse on our company network,
and well, it didn't send the data out itself;
it sent DNS queries to their DNS server,
and a firewall doesn't detect that.
Is there a way to log the DNS queries with Squid so I can monitor that
myself?

Thanks a lot.
Cheers,
Robin



RE: [squid-users] Squid to Log DNS Querys

2007-11-01 Thread Robin-Vossen

OK, damn... :(
I'll just have to find something else to do that with, then...

Thanks for telling me :(



traef06 wrote:
 
 Hello,
 I wonder, is there a way to log all DNS requests that go out of our
 network with Squid?
 I noticed that we had a Trojan horse on our company network,
 and well, it didn't send the data out itself;
 it sent DNS queries to their DNS server,
 and a firewall doesn't detect that.
 Is there a way to log the DNS queries with Squid so I can monitor that
 myself?
 
 
 [Tom replied with:] 
 
 Squid doesn't ever see DNS queries from your network.
 
 Answer is no.
 
 Thomas J. Raef
 e-Based Security, LLC
 www.ebasedsecurity.com
 1-866-838-6108
 You're either hardened, or you're hacked!
 
 




Re: [squid-users] Squid to Log DNS Querys

2007-11-01 Thread Robin-Vossen

Well, I have no idea what the name of the Trojan horse was,
but our DNS server was down
and I still had DNS queries over the network.
I thought that was strange, but I thought... oh well.
So, some time later some PCs started to show Trojan behavior
(Minesweeper autostarting, etc).
I thought, oh damn.
So I started scanning for problems,
till I found something with a sniffer:
we did send a DNS query that held critical data...
Our workstations do run a virus scanner,
but I think it's not yet logged. I confiscated a PC that did show that weird
behavior and I am looking for the infected files.
If found, I'll share it with the net.



Tek Bahadur Limbu wrote:
 
 Hi Robin,
 
 Robin-Vossen wrote:
  Hello,
  I wonder, is there a way to log all DNS requests that go out of our
  network with Squid?
  I noticed that we had a Trojan horse on our company network,
  and well, it didn't send the data out itself;
  it sent DNS queries to their DNS server,
  and a firewall doesn't detect that.
  Is there a way to log the DNS queries with Squid so I can monitor that
  myself?
 
  Are you running Squid transparently? As Thomas pointed out, Squid does 
 not see DNS queries on your network. That's the job of your DNS servers 
 and your gateway firewall.
 
 You can only log the DNS queries that your Squid box actually makes to 
 your DNS servers.
 
 You can use the following option in your squid.conf:
 
 dns_nameservers IP.OF.YOUR.DNSSERVER
 
 One way is to run a local DNS caching name server on the Squid box 
 itself and point your clients machines to this caching name server which 
 then forwards the DNS requests to your actual DNS servers.
 
 Probably the better way is to block the unwanted DNS queries on your DNS 
 servers or gateway firewall.
 
 Just curious, which Trojan Horse did you detect in your network? When 
 you say that your firewall does not detect them, do you mean a firewall 
 running on your clients' machines or on your Gateway firewall itself?
 
 Thanking you...
 
 
 
  Thanks a lot.
 Cheers,
 Robin
 
 
 -- 
 
 With best regards and good wishes,
 
 Yours sincerely,
 
 Tek Bahadur Limbu
 
 System Administrator
 
 (TAG/TDG Group)
 Jwl Systems Department
 
 Worldlink Communications Pvt. Ltd.
 
 Jawalakhel, Nepal
 
 http://www.wlink.com.np
 
 http://teklimbu.wordpress.com
 
 




Re: [squid-users] First Time squid Config Problem

2007-11-01 Thread Robin-Vossen

Thanks a lot, I'll look into that ASAP.
And well, the typo errors are there because I am building Squid on a Gentoo box
without a graphical shell or a web browser ;)

Thanks again, I'll look into that ASAP.


Michael Alger-3 wrote:
 
 On Thu, Nov 01, 2007 at 03:06:38AM -0700, Robin-Vossen wrote:
  My /etc/squid/squid.conf file is written by myself. And I think I
  made a mistake somewhere, since when I start Squid it crashes.
 
 Did you check the squid logs to see what the problem was? The cache
 log is the one you'll be looking for. Since you didn't define a
  value for it, it'll use the default, which is most likely:
 
   /var/log/squid/cache.log
 
 You can configure it explicitly using this syntax in the squid
 config:
 
 cache_log /var/log/squid/cache.log
 
 (on that subject, I'd also recommend making sure you have an
 access_log configured as well)
 
 #Squid Config
 #Used Doc http://www.visolve.com/squid/squid26/contents.php
  
 http_port 5629
 cache_mem 75 MB
 visable_hostname firegate
 cache_dir ufs /var/cache/squid 500 16 256
 offline_mode on
 maximun_object_size 102400 KB
 reload_into_ims on
 pipeline_prefetch on
  
 ##Define ACL
 acl WAN src 192.168.24.0/255.255.255.0
 acl LAN src 192.168.42.0/255.255.255.0
 acl all src 0.0.0.0/0.0.0.0
 acl busness_hours time M T W H F 8:30-18:00
 acl break_time time M T W H F 11:00-14:00
 acl BadSites dstdomain /usr/local/etc/restricted-sites.squid
 acl BadWords url_regex -i /usr/local/etc/restricted-keywords.squid
 acl BadFiles urlpath_regex -i /usr/local/etc/restricted-files.squid
 acl ftp proto FTP
 acl http proto HTTP
 acl ssl proto SSL
 acl ssh_port port 22 443 1
 acl Admin-IP src /usr/local/etc/Admin-IP.squid
 acl Admin-MAC arp /usr/local/etc/Admin-MAC.squid
 acl User-IP src /usr/local/etc/User-IP.squid
 acl User-MAC arp /usr/local/etc/User-MAC.squid
  
 ##Laws
 allow ssh_ports LAN CONNECT
 deny !USer-IP !Admin-IP
 deny !User-MAC !Admin-MAC
 deny !break_time BadSites User-IP
 deny !break_time BadWords User-IP
 deny !break_time BadFiles User-IP
 allow User-IP business-hours
 deny all
 
 Thats it..
 
 If this is a verbatim dump of your config, then the first problem I
  spotted was that you define an acl called busness_hours, but then
 later reference business-hours.
 
 The squid log should give you some info about the problem. If you
 still can't solve it, include the relevant part of the log when you
 ask for help here, as not everyone will copy your config to a test
 squid to see what happens. It's also usually a good idea to include
 the exact version number of the squid you're using.
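 
  For what it's worth, a sketch of how that 'Laws' section would normally
  be spelled out: every rule needs the http_access directive, acl names
  must match their definitions exactly (note ssh_port vs ssh_ports, and
  the busness_hours spelling), and CONNECT must be defined as a method
  acl before it is referenced. Ordering below is illustrative only:
 
   acl CONNECT method CONNECT
   http_access allow ssh_port LAN CONNECT
   http_access deny !User-IP !Admin-IP
   http_access deny !User-MAC !Admin-MAC
   http_access deny !break_time BadSites User-IP
   http_access deny !break_time BadWords User-IP
   http_access deny !break_time BadFiles User-IP
   http_access allow User-IP busness_hours
   http_access deny all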
 
 




[squid-users] professional support recomendations

2007-11-01 Thread Robin Mordasiewicz
Can anyone recommend a good professional support organization for Squid,
i.e. one providing 24x7 phone support?

The company I work for requires a support contract for all software.

-- 



Re: [squid-users] IE versus firefox problems

2007-10-29 Thread Robin Mordasiewicz
On Fri, 26 Oct 2007, Amos Jeffries wrote:

 Robin Mordasiewicz wrote:
  On Thu, 25 Oct 2007, Amos Jeffries wrote:
 
   Robin Mordasiewicz wrote:
  I have a site which will not load properly while I am using IE7, but
  using
  firefox it loads instantly.
 
   for example, while visiting http://www.thestar.com IE will pause and wait
   for a very long time for the site to open, while firefox is able to load
   the page instantly. Can someone else verify for me whether or not they
   are seeing the same problem.
 
   http://www.thestar.com
 
  I've noticed that on the right side there are two square areas where
  some images initially load. I used firefox to visit the site and
  everything went fine, but admittedly those two boxes needed a bunch of
  seconds to come up with content.
   Could it be a difference in plugins between FF and IE?
 
  A quick check of the site presents me with *at least* 6 dynamic adverts or
  multimedia objects just on the front page. The ads look like an
   ad-sharing include. Those could quite possibly be presenting the rest of
   us with different content to the stuff that is troubling the original
  user.
 
   Amos, while visiting the site with IE+squid did you notice that IE
  hangs/freezes for about a minute ?
 

 I did not use IE for that test.
 FF did hang for a noticeable time when it should not have though. The
 hang with browser title/status-bar set then super-fast display of page
 title graphics, another small delay for the rest of the page. And more
 short delays for each big object on the page.


Yeah, my problem is that IE hangs/freezes for almost 60 seconds, but
firefox just shows a small delay.

-- 



Re: [squid-users] IE versus firefox problems

2007-10-26 Thread Robin Mordasiewicz
On Thu, 25 Oct 2007, Amos Jeffries wrote:

  Robin Mordasiewicz wrote:
  I have a site which will not load properly while I am using IE7, but
  using
  firefox it loads instantly.
 
   for example, while visiting http://www.thestar.com IE will pause and wait
   for a very long time for the site to open, while firefox is able to load
   the page instantly. Can someone else verify for me whether or not they
   are seeing the same problem.
 
   http://www.thestar.com
 
 
  I've noticed that on the right side there are two square areas where
  some images initially load. I used firefox to visit the site and
  everything went fine, but admittedly those two boxes needed a bunch of
  seconds to come up with content.
  Could it be a difference in plugins between FF and IE?
 

 A quick check of the site presents me with *at least* 6 dynamic adverts or
 multimedia objects just on the front page. The ads look like an
  ad-sharing include. Those could quite possibly be presenting the rest of
  us with different content to the stuff that is troubling the original
 user.

Amos, while visiting the site with IE+squid did you notice that IE
hangs/freezes for about a minute ?

-- 



RE: [squid-users] IE versus firefox problems

2007-10-23 Thread Robin Mordasiewicz
On Mon, 22 Oct 2007, Amos Jeffries wrote:

  Same here, bug in IE7; it seems to try to load an ActiveX control, cause I can
  get the top of the page, then after that it froze and seems to load and load
  and load.
 
  thanks for confirming that for me.
  The site does work going directly to it without a proxy, and it also
  works while using M$ ISA proxy.
 
  any tips on troubleshooting this for a newb are appreciated.

 1) try an upgrade to the latest available version of squid.
  - BTW which version are you seeing the problem with?

I have tried with squid-2.5.STABLE14, and now I am on squid-2.6.STABLE16,
the fedora rpm.
Both gave the same problem.

 2) find out exactly what URI is causing squid problems
- is the domain doing a 302 redirect or just loading an object?

I have not been able to figure that out. I have tried copying the page
locally and everything appears fine, but accessing it from the original
server is not. I am at a bit of a loss for figuring out which URI is
causing the problem. I have turned up the debugging, but I don't see any
evidence of any errors.

 3) try to locate what squid's doing from cache.log and debug_options ALL,5

 4) ask for help again given any new info you have gleaned in the above.
- others may see something in the log you missed.

 If it's still occurring after (1) and the rest don't lead to a configuration
 fix, it should probably be brought up in squid-dev or reported as a bug. We
 do want squid to work properly on every site.

ok, well I will wait to see if anyone else has any comments, and then I
guess I will escalate it to a bug report.

-- 



RE: [squid-users] IE versus firefox problems

2007-10-22 Thread Robin Mordasiewicz
On Mon, 22 Oct 2007, Frenette, Jean-Sébastien wrote:

 Same here, bug in IE7; it seems to try to load an ActiveX control, cause I can get 
 the top of the page, then after that it froze and seems to load and load and load.

thanks for confirming that for me.
The site does work going directly to it without a proxy, and it also
works while using M$ ISA proxy.

any tips on troubleshooting this for a newb are appreciated.

-- 


[squid-users] IE versus firefox problems

2007-10-19 Thread Robin Mordasiewicz
I have a site which will not load properly while I am using IE7, but using
firefox it loads instantly.

for example, while visiting http://www.thestar.com IE will pause and wait
for a very long time for the site to open, while firefox is able to load
the page instantly. Can someone else verify for me whether or not they are
seeing the same problem.

 http://www.thestar.com

-- 



[squid-users] https not working

2007-10-17 Thread Robin Mordasiewicz
I have squid working for http, but https connections just fail.

I have tried squid 2.5 on centos 3 via rpm, as well as squid 2.6 on centos
5 via rpm as well, but neither work for me.

Can someone please let me know what I am missing.

In my access log I see the following when trying to access a site
https://mail.domain.com, but the site does not appear and firefox/IE error
out with 'The connection was reset'.

snip /var/log/squid/access.log
1192640469.146 5 192.168.0.118 TCP_MISS/200 39 CONNECT mail.domain.com:443 - 
DIRECT/24.10.210.133 -
/snip

Here is my config

snip /etc/squid/squid.conf for squid 2.6

http_port 3128

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log /var/log/squid/access.log squid

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

external_acl_type nt_group ttl=0 concurrency=5 %LOGIN 
/usr/lib64/squid/wbinfo_group.pl

refresh_pattern ^ftp: 1440  20%   10080
refresh_pattern ^gopher:  1440  0%1440
refresh_pattern .   0 20%   4320

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80# http
acl Safe_ports port 21# ftp
acl Safe_ports port 443   # https
acl Safe_ports port 70# gopher
acl Safe_ports port 210   # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280   # http-mgmt
acl Safe_ports port 488   # gss-http
acl Safe_ports port 591   # filemaker
acl Safe_ports port 777   # multiling http
acl CONNECT method CONNECT
acl local-servers dstdomain .domain.com
acl FTP proto FTP
acl smtcorp_pub snmp_community public
acl unrestrictedusers external nt_group INTERNETOK_NT_GROUP
acl NTLMUsers proxy_auth REQUIRED

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost

http_access allow unrestrictedusers
http_access deny !NTLMUsers

http_access deny all
http_reply_access allow all

icp_access allow all
cache_mgr [EMAIL PROTECTED]
visible_hostname proxy.domain.com
unique_hostname smtcorpx07.domain.com
append_domain .domain.com
always_direct allow local-servers
always_direct allow FTP
snmp_port 3401
snmp_access allow snmp_community all
mail_from [EMAIL PROTECTED]
coredump_dir /var/spool/squid

end /etc/squid/squid.conf
-- 



Re: [squid-users] https not working

2007-10-17 Thread Robin Mordasiewicz
On Wed, 17 Oct 2007, Robin Mordasiewicz wrote:

 I have squid working for http, but https connections just fail.

 I have tried squid 2.5 on centos 3 via rpm, as well as squid 2.6 on centos
 5 via rpm as well, but neither work for me.

 Can someone please let me know what I am missing.


Replying to myself.

I did not mention that my squid server is behind an F5 BigIP load balancer.
I retested the squid server while not behind the load balancer, and it
worked.

If anyone else is successfully load balancing squid without a problem
accessing https sites please provide some insight.


-- 



Re: [squid-users] https not working

2007-10-17 Thread Robin Mordasiewicz
On Wed, 17 Oct 2007, Robin Mordasiewicz wrote:
 On Wed, 17 Oct 2007, Robin Mordasiewicz wrote:

  I have squid working for http, but https connections just fail.
 
  I have tried squid 2.5 on centos 3 via rpm, as well as squid 2.6 on centos
  5 via rpm as well, but neither work for me.
 
  Can someone please let me know what I am missing.
 

 Replying to myself.

 I did not mention that my squid server is behind an F5 BigIP load balancer.
 I retested the squid server while not behind the load balancer, and it
 worked.

 If anyone else is successfully load balancing squid without a problem
 accessing https sites please provide some insight.


Replying to myself again.

Squid is working perfectly fine now for load balancing; I needed to tweak
my load balancer to make it work.

-- 



[squid-users] Re: cache_peer_domain wildcards?

2006-12-05 Thread Robin Bowes
Henrik Nordstrom wrote:
 Mon 2006-12-04 at 18:20 +, Robin Bowes wrote:
 
 What does the additional acl do? i.e. this:

 acl all src 0.0.0.0/0.0.0.0
 never_direct deny all
 
 This tells Squid that it must not attempt to go directly to the
 requested host, always using a peer.
 
 If this is a 2.6 accelerator setup then generally this isn't needed as
 there is an built-in hardcoded rule denying direct access on accelerated
 content unless overridden with always_direct.

Yes, this is a 2.6 accelerator setup.

Thanks for all the help - this is now working using cache_peer_access
instead of cache_peer_domain.

Cheers,

R.



[squid-users] cache_peer_domain wildcards?

2006-12-04 Thread Robin Bowes
Hi all.

A question about cache_peer_domain in squid 2.6: Can it accept wildcards?

For example, instead of writing something like this:

cache_peer_domain images \
images.example.com \
images1.example.com \
images2.example.com \
images3.example.com

Can I use something like:

cache_peer_domain images *.example.com

Thanks,

R.



[squid-users] Re: cache_peer_domain wildcards?

2006-12-04 Thread Robin Bowes
Pablo García wrote:
 Robin
 
 You have to define the servers, url, domains, etc you're
 going to accelerate in the configuration using a combination of
 cache_peer + acl + cache_peer_access + never_direct
 E.g.: you have to accelerate www.example.com, which resides on 10.1.1.1;
 then your config should be like this:
 
 http_port 80 defaultsite=www.example.com vhost
 cache_peer 10.1.1.1 parent 80 0 no-query originserver
 acl accel_host dstdomain .example.com
 cache_peer_access 10.1.1.1 allow accel_host
 
 acl all src 0.0.0.0/0.0.0.0
 never_direct deny all

OK, so my current config is:

http_port   172.28.28.20:80  vhost
cache_peer 172.28.28.40 parent 80 0 \
 no-query \
 no-digest \
 originserver \
 name=sites
cache_peer_domain sites sites.example.com


This would become:

http_port 172.28.28.20:80 vhost
cache_peer 172.28.28.40 parent 80 0 \
 no-query \
 no-digest \
 originserver \
 name=sites
acl accel_host dstdomain .sites.example.com
cache_peer_access sites allow accel_host

What does the additional acl do? i.e. this:

 acl all src 0.0.0.0/0.0.0.0
 never_direct deny all

Thanks,

R.



[squid-users] Rewrite https to http

2006-09-11 Thread Robin Bowes
Hi,

A question about https ...

Is it possible to get squid to convert an https request to an http request?

This is for an application that is running under https and wants to
include google maps content (http) within itself but avoid the browser
warning the site includes encrypted and unencrypted content

So, I'm wondering if it's possible to hit squid with a link like
https://gmaps.example.com/?blahfoo and for squid to convert that to
http://maps.google.com/?blahfoo ?

I've googled and found this link:

http://safari.oreilly.com/0596001622/squid-CHP-1

I've actually got that book - but I can't find anywhere in the book that
explains how to do this ...

Can someone give me a pointer?

R.
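
One way this is sometimes approached, sketched on the assumption of a Squid
built with SSL support and 2.6-style accelerator syntax (the certificate
paths are placeholders; gmaps.example.com is the name from the question):

  # terminate TLS on squid for the published https name
  https_port 443 cert=/etc/squid/gmaps.pem key=/etc/squid/gmaps.key defaultsite=gmaps.example.com
  # fetch the actual content over plain http from the origin
  cache_peer maps.google.com parent 80 0 no-query originserver name=gmaps

The browser then speaks only https, while Squid fetches over plain http
behind the scenes.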



[squid-users] Different backend server based on URL?

2006-08-28 Thread Robin Bowes
Hi,

Given the following URL scheme:

  example.com/x/123456
  example.com/y/123456

Is it possible to forward the requests to different hosts based on the
URL, e.g.:

  example.com/x/123456 - foo.com
  example.com/y/123456 - bar.com

(foo.com and bar.com would most likely be specified as IP addresses)

Thanks,

R.
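
It is possible; the common pattern is one cache_peer per backend plus
urlpath_regex ACLs, sketched here with the names from the question standing
in for the real hosts:

  cache_peer foo.com parent 80 0 no-query originserver name=peer_x
  cache_peer bar.com parent 80 0 no-query originserver name=peer_y
  acl path_x urlpath_regex ^/x/
  acl path_y urlpath_regex ^/y/
  cache_peer_access peer_x allow path_x
  cache_peer_access peer_x deny all
  cache_peer_access peer_y allow path_y
  cache_peer_access peer_y deny all

Requests whose path starts with /x/ go to the first peer, /y/ to the second.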



[squid-users] Re: How to set multiple namebased virtual reverse proxy?

2006-08-25 Thread Robin Bowes
Henrik Nordstrom wrote:
 Wed 2006-08-23 at 16:49 +0100, Robin Bowes wrote:
 
 One other thing I'm not sure about is DNS resolution.
 
 Only DNS involved is the client browser making a DNS lookup to find the
 server IP. This should give the IP of your Squid (or load balancer
 infront).

[snip]

 Works here without the internal DNS.
 
 Maybe you have something in http_access relying on DNS?

Hi,

Thanks for the reply.

I think it was a misconfiguration on the load balancer that was causing
the problem.

I've yet to test extensively but it seems to be working OK now.

Just need to tighten up the acls...

I presume it's possible to configure squid to only respond to a specific
set of domains, e.g. cache.example.com and images.example.com ?

R.
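
It is; the usual sketch is a dstdomain acl gating http_access (domain names
taken from the question):

  acl accel_domains dstdomain cache.example.com images.example.com
  http_access allow accel_domains
  http_access deny all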



[squid-users] Re: How to set multiple namebased virtual reverse proxy?

2006-08-23 Thread Robin Bowes
Henrik Nordstrom wrote:
 Mon 2006-08-21 at 06:40 +, Monty Ree wrote:
 
 Is there any problem to set this?
 
 Exacly how it's meant to be done, except that perhaps you want to use
 the real server IP addresses in squid.conf rather than DNS.

Henrik,

I too want to set up something with exactly this configuration.

Whereabouts do the IPs go?

Here's a stab at the configuration:

http_port 192.168.26.26:80 vhost

cache_peer 192.168.0.41 parent 80 0 no-query originserver name=cache
cache_peer_domain cache cache.example.com
cache_peer 192.168.0.42 parent 80 0 no-query originserver name=images
cache_peer_domain images images.example.com


One other thing I'm not sure about is DNS resolution.

I currently have this configuration:

client - LB1 - squid farm - LB2 - apache farm

LB1 & LB2 are load-balancers

So, clients access cache.example.com which externally (i.e. public IP
address) resolves to LB1.
LB1 passes the request to a machine in the squid farm (squid01,02,03)
The squid instances peer with each other and are configured as
accelerators for the apache farm via LB2
proxy.example.com resolves to LB2 (192.168.0.41)
LB2 passes the request on to a machine in the apache farm
(proxy01,02,03) which are configured with cache.example.com as
ServerAliases in httpd.conf.

On each of the squid machines, I'm currently using this config (IP
address different per machine):

http_port 192.168.26.26:80 vhost
cache_peer 192.168.0.41 parent 80 0 no-query originserver

LB2 has address 192.168.0.41

However, I find that this only works if cache.example.com resolves
internally to 192.168.0.41.

Is this how it's supposed to work, or am I missing something?

Basically, what I'd like to happen is :

 * all incoming requests for cache.example.com get passed to 192.168.0.41
 * all incoming requests for images.example.com get passed to 192.168.0.42

This should happen regardless of what cache.example.com and
images.example.com resolve to internally.

Thanks,

R.



[squid-users] Re: Selective cache flush?

2006-06-14 Thread Robin Bowes
John Oliver wrote:
 Is there a way to flush only certain items out of the Squid cache when
 those item(s) are updated on the web server?
 
 Let's say the web server DocumentRoot has:
 
 /content/1/a.txt
 /content/1/b.txt
 /content/2/a.txt
 /content/2/b.txt
 
 If something in /content/1/ is changed, I'd like to be able to flush
 those items (everything in /content/1/ would be OK since it's likely
 that one change would involve several files) without affecting
 /content/2/ or anything else in the cache.

John,

I'm looking at the same thing.

I've not actually done it yet, but I plan to use the purge tool.

HTH,

R.
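
The purge tool drives Squid's PURGE request method, which is disabled by
default; the usual squid.conf prerequisite is a sketch like this
(localhost-only, as a conservative default):

  acl PURGE method PURGE
  http_access allow PURGE localhost
  http_access deny PURGE

With that in place, one PURGE request per cached URL (issued by the purge
tool, or by hand with squidclient) evicts the matching object.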



[squid-users] Pipe log output to program

2006-06-13 Thread Robin Bowes
Hi,

Is it possible to pipe squid log output to a custom logging program, as
it is with Apache [1]?

I've googled, but could only find this reference to the topic [2]

Thanks,

R.

[1] http://httpd.apache.org/docs/2.2/logs.html#piped
[2] http://www.squid-cache.org/mail-archive/squid-dev/200509/0021.html



[squid-users] Re: AW: Pipe log output to program

2006-06-13 Thread Robin Bowes
[EMAIL PROTECTED] wrote:
 Use the linux command
 
 tail -f access.log | program

Werner,

Thanks for the suggestion, but that's not quite what I'm looking for.

I'd like to do something like:

 CacheAccessLog |/path/to/logprocessor

where logprocessor is some program that processes the log as it is
written, line by line, and does something like update a database.

R.

 
 -----Original Message-----
 From: news [mailto:[EMAIL PROTECTED] On Behalf Of Robin Bowes
 Sent: Tuesday, 13 June 2006 11:31
 To: squid-users@squid-cache.org
 Subject: [squid-users] Pipe log output to program
 
 
 Hi,
 
 Is it possible to pipe squid log output to a custom logging program as it is 
 with apache [1].
 
 I've googled, but could only find this reference to the topic [2]
 
 Thanks,
 
 R.
 
 [1] http://httpd.apache.org/docs/2.2/logs.html#piped
 [2] http://www.squid-cache.org/mail-archive/squid-dev/200509/0021.html
 
 



[squid-users] Re: Squid logs to named pipe?

2006-06-13 Thread Robin Bowes
Henrik Nordstrom wrote:
 Tue 2006-06-13 at 13:08 -0700, John Oliver wrote:
 Googling turned up a three-year-old post indicating that there would be
 problems with sending logs to a named pipe under load. Well, I'm hoping
 that's been resolved... we expect to have to handle dozens or hundreds
 of hits per second :-) And accurate log handling is critical for
 billing. What are my best options?
 
 Sending logs to a pipe (named, or explicitly opened by Squid if one is to
 add such patch) always has issues under load as the performance gets
 limited to the performance of your log processor. If the log processor
 can't keep up Squid will soon stop all activities until it catches up.
 The same also happens if the log processor should exit for some reason..
 
 My recommendation is therefore to use the perl module File::Tail to
 monitor the on-disk file. File::Tail is quite similar to tail -f but
 automatically detects when the logfile has been rotated so it can be
 kept running completely independent of Squid, and will process log
 records as fast as it can without delaying Squid under peak load.

tail -F has the same effect - it monitors the filename rather than the
inode.

R.



[squid-users] Manually expire content

2006-05-30 Thread Robin Bowes
Hi,

I'm planning to use squid to cache content from a content-rewriting
proxy (running apache).

The proxy sucks content from a live site and replaces specific text strings.

So, http://proxy.example.com/?id=12345 might map to the site
http://squid-cache.org replacing all instances of the word squid with
foobar. I want squid to cache this.

Is it possible to manually expire content in the squid cache when
changes are made to the content-rewriting in the proxy?

Basically, I'd like to be able to say something like:

  Expire all content containing the query string id=12345

Thanks for any suggestions.

R.



[squid-users] Re: Manually expire content

2006-05-30 Thread Robin Bowes
Robin Bowes wrote:

 So, http://proxy.example.com/?id=12345 might map to the site
 http://squid-cache.org replacing all instances of the word squid with
 foobar. I want squid to cache this.

Related to this, is it possible to get the whole URL logged in the
access.log, i.e. including the whole query string? e.g.:

http://proxy.example.com/?id=12345

R.



[squid-users] Re: Manually expire content

2006-05-30 Thread Robin Bowes
Chris Robertson wrote:

 So, http://proxy.example.com/?id=12345 might map to the site
 http://squid-cache.org replacing all instances of the word squid with
 foobar. I want squid to cache this.
   
 The purge tool, from related software
 (http://www.squid-cache.org/related-software.html), while old, is
 purported to do this.

Thanks. Will check it out.

 Related to this, is it possible to get the whole URL logged in the
 access.log, i.e. including the whole query string? e.g.:

 http://proxy.example.com/?id=12345

  

 Look into the strip_query_terms directive in squid.conf.default.  It
 defaults to on.

Again, thanks. I thought there would be a config directive.

R.



[squid-users] Reverse proxy multiple sites with re-writing

2006-03-17 Thread Robin Bowes
Hi,

It's been a long time since I used squid - late '90s to proxy an entire
organisation's internet access over a 33.6k modem! But I digress...

I'm working on a targeted Google/Yahoo ad system.

For the sake of illustration, assume that Pizza Hot is a Pizza company
with a website at pizzahot.com and that my company's website is
adcounter.net.

An ad campaign would aim to get links for Pizza Hot at the top of
search engine results.

Instead of a link to pizzahot.com, the link would be a link to
pizzahot.adcounter.net.

The idea is that pizzahot.adcounter.net would be served by a squid
instance which would serve up content from pizzahot.com, re-writing
certain key parts (telephone numbers, some URLs, etc.).

I have a couple of questions:

1. Can squid do content rewriting?
2. Would it be possible to proxy multiple such sites on one squid host?

R.



[squid-users] Are you interested in working for Yahoo!?

2005-10-19 Thread Robin Ren
Hi,

My name is Robin Ren.   I am an engineering manager in the Yahoo! Mail
backend team.  As you probably know, Y! Mail is the largest email provider
in the world.  We handle more than a billion messages a day and have
hundreds of millions of active users.

We are looking for energetic developers with great ideas, and with
experiences in the areas of distributed computing, caching, storage system,
among others.  If you want to find challenges, we may have the perfect
platform for you to launch your next big idea.

If you are interested, please contact me via email, or call me at
408-349-2840.

Robin Ren
Yahoo! Inc.
701 First Ave.
Sunnyvale, CA 94085



[squid-users] squid as incoming proxy?

2003-11-10 Thread Robin Bowes
Hi,

It's been a long time since I've used squid (over 4 years!), so be gentle with me!


I run a web server on my broadband connection at home. I run NAT on my gateway router 
and have a small internal network.
I currently have all my web services hosted on a single box because of the 1-2-1 
nature of NAT.
I maintain my own internal DNS service which is different to the publicly visible DNS 
information (hosted at dyndns.org).

Here's what I would like to do:

Internet --- router --- proxy --+-- web1.robinbowes.com
                        (squid) |
                                +-- web2.robinbowes.com

Externally, web1.robinbowes.com and web2.robinbowes.com will resolve to my single 
external IP address.
Internally, web1.robinbowes.com and web2.robinbowes.com will resolve to different 
internal IP addresses.
The router will map these incoming requests to the proxy server, which looks up the IP 
of the URL host internally and passes the request on to the relevant machine.

Can squid do this?

Is there any special sort of set up I need to consider?

I am also considering implementing some sort of outgoing access control - pah, kids! 
Would I be able to use the same instance of
squid for this or would I be better considering a separate instance?

Thanks for any help,

Cheers,

R.
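
For the incoming half, a sketch in the accelerator style discussed elsewhere
in this archive (2.6-era directives; the internal IPs are placeholders):

  http_port 80 vhost
  cache_peer 192.168.1.11 parent 80 0 no-query originserver name=web1
  cache_peer 192.168.1.12 parent 80 0 no-query originserver name=web2
  acl web1_dom dstdomain web1.robinbowes.com
  acl web2_dom dstdomain web2.robinbowes.com
  cache_peer_access web1 allow web1_dom
  cache_peer_access web1 deny all
  cache_peer_access web2 allow web2_dom
  cache_peer_access web2 deny all

For the outgoing access-control question, a separate Squid instance (or at
least a separate http_port) is the safer choice, so the forward and reverse
roles do not share access rules.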