On 11/09/2010 04:35 AM, Amos Jeffries wrote:
Unique ETag headers are required for that.
Hi Amos,
I have understood so far, but I still can't get my setup to behave as
intended: to keep distinct versions of the same URL in cache until they
expire, distinct based on ETag value / Vary header value.
Will do.
With single requests it would get memory and sibling hits, but then revert
to pulling from the backend even though I replayed the same logs with httperf 30
mins later. All objects have a 3600s (60 min) expiry so it's not that either.
Will post configs for comments
Chris Toft
On 03/11/10 21:53, Chris Toft wrote:
Thanks for the reply, I actually fixed it. Removed the multicast-responder
option and just left multicast-sibling.
Man this thing flies on 5 boxes with 64gb memory and 10x 50gb solid state
drives for the cache :-)
I will post working config tomorrow for an
On 04/11/10 02:27, My LinuxHAList wrote:
Hi,
I may run multiple instances of squid inside a box.
Those instances may be serving out of the same eth0 or some bonded interface.
I have a question on the icp multicast option.
Is squid icp multicast called with "loopback" option on, so that wh
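For context, the usual multicast-ICP pairing looks roughly like this (the group address and sibling hostname are illustrative, not from the thread):

```
# Sender side: address ICP queries to a multicast group.
cache_peer 239.128.16.128 multicast 3128 3130 ttl=16

# Receiver side: join the group so the queries are heard...
mcast_groups 239.128.16.128
# ...and declare each real sibling so its ICP replies are trusted.
cache_peer sibling1.example.com sibling 3128 3130 multicast-responder
```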
On 09/11/10 03:08, donovan jeffrey j wrote:
On Nov 5, 2010, at 7:37 PM, Amos Jeffries wrote:
On 06/11/10 03:28, donovan jeffrey j wrote:
On Nov 5, 2010, at 10:24 AM, Amos Jeffries wrote:
On 06/11/10 03:20, donovan jeffrey j wrote:
does this look right ?
#redirect_program /usr/
On 03/11/10 09:57, Konrado Z wrote:
But how do I properly write something like this:
'http_access allow clients|managers|clients2 #Squid cannot start with that line'
I want to replace 'http_access allow all' line with this given above.
Best
http://wiki.squid-cache.org/SquidFaq/SquidAcl#Common_Mistakes
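The short version of what that FAQ entry covers: multiple ACL names on one http_access line are AND-ed together, so an OR needs one line per ACL. A sketch using the ACL names from the question:

```
# "allow clients OR managers OR clients2" is written as three lines;
# http_access stops at the first line that matches.
http_access allow clients
http_access allow managers
http_access allow clients2
http_access deny all
```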
On 09/11/10 00:11, Leonardo wrote:
Hi Amos,
On Sun, Nov 7, 2010 at 5:12 AM, Amos Jeffries wrote:
http_port 3128 intercept
I have changed the config from "http_port 3128 transparent" to
"http_port 3128 intercept", but I see no change in the behaviour.
You will also need a separate port for
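The usual shape of that split, as a sketch (port numbers illustrative):

```
# Receives only NAT-redirected traffic; the firewall REDIRECT/DNAT
# rules must point at this port.
http_port 3129 intercept
# Separate plain port for normal browser-configured (forward) traffic.
http_port 3128
```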
On 09/11/10 05:30, mrmmm wrote:
Thanks for your response. For example, I have the file with the following
entries:
.site1.com
.site2.com
123.123.123.123
234.234.234.234
Type: Web Server Hostname
And still I am able to browse all these sites from behind the proxy...
Anything I might be missing
RES grows to 14.7GB. Looks like this patch does not fix the problem...
2010/10/26 Kaiwang Chen :
> Currently running two instances behind round-robin load balanced DNS,
> one with the following patch(bug3068_mk2.patch with several twists to
> apply to 3.1.6), the other without. Wish to get some te
On Mon, 8 Nov 2010 18:32:52 -0500, Kevin Wilcox
wrote:
> Hi all.
>
> This is currently a test environment so making changes isn't an issue.
>
> Initially I had issues with hosts updating but solved that with the
> included squid.conf. I'm even getting real cache hits on some of the Win
On Mon, 08 Nov 2010 18:33:37 +0200, Adrian Dascalu
wrote:
> Done some new tests and I found out that caching an URL that has a
> different X-Username header will invalidate the other version of that
> object.
Aha, you can disregard my earlier reply.
>
> Is this the intended behaviour?
Yes.
On Mon, 08 Nov 2010 14:33:31 +0200, Adrian Dascalu
wrote:
> Hi,
>
> I'm out of ideas trying to debug cache misses that I cannot explain. As
a
> last resort I'm sending this problem to the list with the hope that you
> could come up with some explanation and/or cure for this.
>
> the setup is: s
On Mon, 08 Nov 2010 16:15:24 +0200, karj wrote:
> Dear Expert,
>
> I'm using:
> - Squid Cache: Version 2.7.STABLE9
>
> My Problem is.
>
> When i'm using
> Cache-Control headers in the origin iis ( post-check=3600,
> pre-check=43200 )
>
> Squid is caching the 404 Error Ms
On Mon, 8 Nov 2010 09:02:37 -0500, david robertson
wrote:
>> What is your digest rebuild time set to?
>> your cache_dir and cache_mem sizes?
>> and your negative_ttl setting?
>
> digest_rebuild_period 60 minutes
> negative_ttl 1 minute
> backends use a cache_dir of 20gb (8mb cache_mem)
> fronte
Hi all.
This is currently a test environment so making changes isn't an issue.
Initially I had issues with hosts updating but solved that with the included squid.conf. I'm even
getting real cache hits on some of the Windows XP and Windows 7
updates in my test lab, so the amount of effort I've pu
So, the problem boils down to:
why a cached version of the page with URL X will be invalidated by an
access to the same URL with a different value in one of the headers
listed in the Vary header? The store log consistently logs:
Mon 08 Nov 2010 11:27:50 PM CET RELEASE 200 text/html GET
http:/
Em 07/11/2010 01:45, Amos Jeffries escreveu:
Indicating that your NAT rules are incorrect.
The above line is simply forcing Squid to send from 127.0.0.1. It
would only have any effect if your NAT intercept rules were forcing
all localhost traffic back into Squid.
Removing the above line ma
This is what you're looking for:
# TAG: negative_ttl  time-units
# Time-to-Live (TTL) for failed requests. Certain types of
# failures (such as "connection refused" and "404 Not Found") are
# negatively-cached for a configurable amount of time. The
# default is 5 minut
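So if the goal is to stop Squid from serving cached 404s at all, the hedged one-liner would be:

```
# Disable negative caching entirely (trade-off: every failed
# request goes back to the origin again).
negative_ttl 0 seconds
```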
Hi!
I wouldn't think you need multiple network cards to use Squid unless your
internet connection is at or above 1 Gb/s. If your ISP provides you less, I
would think a regular gigabit NIC would do the job.
Your hard drives probably won't be fast enough to cache data on multiple NICs
anyway.
We
Done some new tests and I found out that caching an URL that has a
different X-Username header will invalidate the other version of that
object.
Is this the intended behaviour? I mean, Vary header will just inform
there is a new version so everything else is discarded? If so, is there
a metho
Thanks for your response. For example, I have the file with the following
entries:
.site1.com
.site2.com
123.123.123.123
234.234.234.234
Type: Web Server Hostname
And still I am able to browse all these sites from behind the proxy...
Anything I might be missing?
Thank you,
Thanks.
I still have users connecting at around 1.91Mb and faster on the server, so
the delay pools don't seem to be working.
The only thing I can think of is that it's not registering the NCSA users?
--
From: "Chad Naugle"
Sent: Monday, November 08,
Yes, sorry, at work. See below. I am not 100% sure on fill rate versus the
other numbers, so I'll leave that for someone else to reply. I would
just tinker with the values until you get acceptable results.
acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3
do I need to add this:
delay_access 2 deny all
delay_access 1 deny all
?
Also, what is the difference between fill rate and reserve?
I think I have a fill rate of 256, maybe I should increase this for watching
video?
I am using iftop on the server, and users still seem to be connecting at
mor
I also forgot to mention that you forgot to "deny all" the first two
pools.
-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
Your problem here is that you are trying to layer delay_pool 1 twice, so
I corrected the config below adding a third delay_pool for your
ncsa_users.
-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
-
I have done this but I am not sure if it will pick up the ncsa users.
This should restrict max bandwidth for any 1 user to 1024 (1Mbps)?
acl magic_words1 url_regex -i 192.168
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar
.avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .
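For comparison, a minimal class-2 sketch of a per-client cap (delay_parameters rates are in bytes/second, so ~1 Mbit/s is about 131072 bytes/s; ncsa_users is the ACL name assumed from this thread):

```
delay_pools 1
delay_class 1 2
# aggregate bucket unlimited (-1/-1); each client IP held to ~1 Mbit/s
delay_parameters 1 -1/-1 131072/131072
delay_access 1 allow ncsa_users
delay_access 1 deny all
```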
Anyway, I apologize for the short response, I was busy on the phone. I
would research delay_pools and try to figure out / tweak your config to
meet your needs. It's not a real straight forward config, but that's
because it is very flexible in how users are limited. The only thing
that it does no
Use delay_pools ...
-
Chad E. Naugle
Tech Support II, x. 7981
Travel Impressions, Ltd.
>>> "J Webster" 11/8/2010 9:46 AM >>>
I have put in some controls for downloading files like iso, mp3 etc but
I
would like to limit the connection per ip address
I have put in some controls for downloading files like iso, mp3 etc but I
would like to limit the connection per ip address?
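If the goal is literally to limit connections per IP address (rather than bandwidth), Squid's maxconn ACL is the usual tool; a sketch (the limit of 10 is arbitrary):

```
# maxconn matches once the client IP already has that many
# connections open, so denying it caps each IP at 10.
acl too_many_conns maxconn 10
http_access deny too_many_conns
```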
--
From: "J Webster"
Sent: Sunday, November 07, 2010 9:18 PM
To:
Subject: Bandwidth split?
It is becoming apparent that
Dear Expert,
I'm using:
- Squid Cache: Version 2.7.STABLE9
My problem is:
When I'm using
Cache-Control headers on the origin IIS (post-check=3600,
pre-check=43200),
Squid is caching the 404 error msg.
In the first two or three requests I have
TCP_MISS:FIRST_UP_PARENT
On Nov 5, 2010, at 7:37 PM, Amos Jeffries wrote:
> On 06/11/10 03:28, donovan jeffrey j wrote:
>>
>> On Nov 5, 2010, at 10:24 AM, Amos Jeffries wrote:
>>
>>> On 06/11/10 03:20, donovan jeffrey j wrote:
>>>
does this look right ?
#redirect_program /usr/local/bin/squ
> What is your digest rebuild time set to?
> your cache_dir and cache_mem sizes?
> and your negative_ttl setting?
digest_rebuild_period 60 minutes
negative_ttl 1 minute
backends use a cache_dir of 20gb (8mb cache_mem)
frontends use a cache_mem of 2gb (no cache_dir)
> What do you get back when
Hello,
On Mon, Nov 01, 2010 at 01:17:46 +0000, Amos Jeffries wrote:
> by "soft raid" you mean *software* raid? That is a disk IO killer for
> Squid.
Squid has now worked perfectly for a day, so I think that was the point. Thank you.
--
Regards
Michał Prokopiuk
mich...@sloneczko.net
http://www.slonec
Hi,
I'm out of ideas trying to debug cache misses that I cannot explain. As a last
resort I'm sending this problem to the list with the hope that you could come
up with some explanation and/or cure for this.
the setup is: squid 2.7stable9 on RHEL 5, configured as accel, 12 parents 1
sibling
Hi list,
I'm looking at building a couple more 3.1.8 servers on RHEL 5.5 x86. The
servers are nicely high-powered and have multiple Gb NICs (4 in total). My previous
proxy server (bluecoat) had two NICs. I understand that one was used to listen
to requests and send to our upstream accelerator and o
Hi Amos,
On Sun, Nov 7, 2010 at 5:12 AM, Amos Jeffries wrote:
> http_port 3128 intercept
I have changed the config from "http_port 3128 transparent" to
"http_port 3128 intercept", but I see no change in the behaviour.
> You will also need a separate port for the normal browser-configured and
>
On 08/11/10 03:58, Fabiano Carlos Heringer wrote:
Hey guys, is ZPH already enabled by default in Squid, or is it
necessary to enable something?
I've installed Squid with the --enable-zph-qos option, and put three
good.
options in my squid.conf to mark packets with different ToS, but i
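Assuming a 3.1 build (which the --enable-zph-qos configure flag suggests), the ZPH marking is not on by default and is set with qos_flows; a minimal sketch (the 0x30 value is illustrative):

```
# Mark responses served from the local cache with ToS 0x30 so a
# downstream router/shaper can treat hits differently from misses.
qos_flows local-hit=0x30
```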
On Sat, 6 Nov 2010, Luis Enrique Sanchez Arce wrote:
When squid resolve the resource from cache does not send the answer to ICAP.
How I can change this behavior?
You need a respmod_postcache hook, which unfortunately hasn't been
implemented yet. The workaround I use is to run two separate Sq