We are currently unable to upgrade to squid6 due to a serious problem we found
with collapsed_forwarding (https://bugs.squid-cache.org/show_bug.cgi?id=5332),
and our applications need collapsed_forwarding for reasonable performance.
So we want to build a version of squid5 with as many
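For reference, collapsed_forwarding is a single squid.conf directive; a minimal sketch (the port and accel mode here are illustrative, not from the message):

```
# squid.conf fragment (illustrative values)
http_port 3128 accel
# Merge concurrent requests for the same URL into one upstream fetch:
collapsed_forwarding on
```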
On Wed, Sep 21, 2022 at 11:43:41PM +1200, Amos Jeffries wrote:
> Subject: Re: [squid-users] Missing squid 5.6 & 5.7 announcements
> On 21/09/22 10:33, Dave Dykstra wrote:
> > I tried sending this directly to Amos twice over the last week or so but
> > it bounced each time.
I noticed that 5.7 has been on the website since 5 September, but I have not
seen a release announcement for it, or for 5.6 from June. I would like
to know if it is considered to be in a stable enough state
I was surprised to see an announcement of squid-3.5.5 for Windows, when
I never saw a squid-3.5.5 release announcement. Indeed the squid-cache.org
website has it, but there's no announcement in
http://lists.squid-cache.org/pipermail/squid-announce/2015-May/thread.html
I do recall having a
On Thu, May 12, 2011 at 01:37:13PM +1200, Amos Jeffries wrote:
On 12/05/11 08:18, Dave Dykstra wrote:
...
So it's a choice of being partially vulnerable to slow-loris-style
attacks (timeouts etc. prevent full vulnerability) or packet
amplification on a massive scale.
Just to make sure I
On Sat, May 07, 2011 at 02:32:22PM +1200, Amos Jeffries wrote:
On 07/05/11 08:54, Dave Dykstra wrote:
Ah, but as explained here
http://www.squid-cache.org/mail-archive/squid-users/200903/0509.html
this does risk using up a lot of memory because squid keeps all of the
read-ahead data
On Wed, May 04, 2011 at 02:52:12PM -0500, Dave Dykstra wrote:
I found the answer: set read_ahead_gap to a buffer larger than the
largest data chunk I transfer.
- Dave
On Wed, May 04, 2011 at 09:11:59AM -0500, Dave Dykstra wrote:
I have a reverse proxy squid on the same machine as my origin server.
Sometimes queries from squid are sent around the world and can be very
slow, for example today there is one client taking 40 minutes to
transfer 46MB. When the data is being transferred from the origin
server, the connection
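The fix described in this message is one squid.conf directive; a hedged sketch (the 64 MB value is illustrative, chosen to exceed a hypothetical largest chunk, and is not taken from the message):

```
# squid.conf fragment (illustrative value)
# Buffer up to 64 MB from the origin ahead of what the client has read,
# so a slow client does not hold the origin transfer to its own pace:
read_ahead_gap 64 MB
```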
On Thu, Sep 03, 2009 at 09:48:43PM +0200, Henrik Nordstrom wrote:
Thu 2009-09-03 09:46 -0500, Dave Dykstra wrote:
When is the next squid-2.7 stable release expected? I am very eager
for the fix in http://www.squid-cache.org/bugs/show_bug.cgi?id=2451
(regarding 304 Not Modified responses).
- Dave Dykstra
On Thu, May 21, 2009 at 01:57:37PM +1200, Amos Jeffries wrote:
I would like to forward an scp session from one internal machine through
the Squid proxy to an external machine. I have found many documents
about running Squid over SSH, but not the other way around.
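One common way to do this (my assumption, not from the thread) is to tunnel ssh/scp through squid's CONNECT method with a ProxyCommand; this sketch assumes OpenBSD netcat and a squid listening at proxyhost:3128:

```
# ~/.ssh/config fragment (hostname, proxy address, and port are illustrative)
Host external-machine
    # Carry the ssh TCP stream through squid's HTTP CONNECT method:
    ProxyCommand nc -X connect -x proxyhost:3128 %h %p
```

Note that squid's default ACLs only allow CONNECT to port 443 (the SSL_ports ACL), so the proxy would also need its ACLs adjusted to permit CONNECT to port 22.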
On Sat, Oct 04, 2008 at 12:55:15PM -0400, Chris Nighswonger wrote:
On Tue, Sep 30, 2008 at 6:13 PM, Dave Dykstra [EMAIL PROTECTED] wrote:
On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
I am running squid on over a thousand computers that are filtering data
coming out of one
- Dave
On Fri, Oct 03, 2008 at 11:21:19AM +1000, Mark Nottingham wrote:
Have you considered setting squid up to know about both origins, so it
can fail over automatically?
On 26/09/2008, at 5:04 AM, Dave Dykstra wrote:
I am running squid on over a thousand computers that are filtering data
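Mark's suggestion could be expressed as a squid.conf sketch (hostnames are illustrative, and I am not certain of the exact failover semantics; `default` marks the last-resort parent):

```
# squid.conf fragment (illustrative hostnames)
cache_peer origin1.example.org parent 80 0 no-query originserver name=primary
cache_peer origin2.example.org parent 80 0 no-query originserver name=backup default
```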
Henrik,
Thanks so much for your very informative reply!
On Thu, Oct 02, 2008 at 12:31:03PM +0200, Henrik Nordstrom wrote:
By default Squid tries to use a parent 10 times before declaring it
dead.
Ah, I never would have guessed that I needed to try 10 times before
negative_ttl would take
On Tue, Oct 07, 2008 at 08:38:12PM +0200, Henrik Nordstrom wrote:
On Tue, 2008-10-07 at 11:49 -0500, Dave Dykstra wrote:
Ah, I never would have guessed that I needed to try 10 times before
negative_ttl would take effect for a dead host. That wouldn't be
bad at all.
You don't. Squid
Meanwhile the '-I' option to squid makes it possible to run multiple
squids serving the same port on the same machine, so you can make use of
more CPUs. I've got scripts surrounding squid startups to take
advantage of that. Let me know if you're interested in having them.
Currently I run a
to have been a dual dual-core 64-bit 2Ghz Opteron,
although I saw some Intel machines with similar performance per CPU but
on those I had only one gigabit network interface and one squid.
- Dave
On Mon, Oct 06, 2008 at 08:09:17PM +0200, Marcin Mazurek wrote:
Dave Dykstra ([EMAIL PROTECTED
from a squid.conf based on the number of subdirectories
under the cache_dir of the form 0, 1, etc (up to 4) exist. It makes
squid 0 a cache_peer parent of the others so it's the only one that
makes upstream connections, but they all can serve clients.
- Dave
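The layout described above (squid 0 as the only instance opening upstream connections) might look roughly like this in the squid.conf of each non-zero instance; the address and port are illustrative, not taken from the scripts mentioned:

```
# squid.conf fragment for a non-zero instance (illustrative values)
# Route all misses through the squid 0 instance on the same host:
cache_peer 127.0.0.1 parent 3128 0 no-query no-digest default
# Never go directly to the origin from this instance:
never_direct allow all
```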
Dave Dykstra wrote:
Meanwhile the '-I
On Mon, Sep 29, 2008 at 03:41:33PM -0500, Dean Weimer wrote:
I am looking at implementing a new proxy configuration using multiple peers
and load balancing. I have been looking through the past archives, but I
haven't found the answers to some questions I have.
...
Now the other question is
Do any of the squid experts have any answers for this?
- Dave
On Thu, Sep 25, 2008 at 02:04:09PM -0500, Dave Dykstra wrote:
I am running squid on over a thousand computers that are filtering data
coming out of one of the particle collision detectors on the Large
Hadron Collider
server to another
at the application level.
- Dave
On Thu, Sep 25, 2008 at 08:51:00AM -0400, jeff donovan wrote:
On Sep 24, 2008, at 11:38 AM, Kinkie wrote:
On Wed, Sep 24, 2008 at 5:16 PM, jeff donovan
[EMAIL PROTECTED] wrote:
greetings
How could I go about load balancing two or more transparent proxy
squid servers?
No caching
I am running squid on over a thousand computers that are filtering data
coming out of one of the particle collision detectors on the Large
Hadron Collider. There are two origin servers, and the application
layer is designed to try the second server if the local squid returns a
5xx HTTP code
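The application-layer failover described above can be sketched as follows; the function and parameter names are mine, not from the actual deployment, and fetch(server) is assumed to return an (http_status, body) pair:

```python
# Minimal sketch of client-side failover on 5xx responses.
def failover_get(servers, fetch):
    """Return (server, status, body) from the first server whose
    reply is not a 5xx; fall back to the last reply otherwise."""
    last = None
    for server in servers:
        status, body = fetch(server)
        if status < 500:
            # First non-5xx reply wins; stop trying further servers.
            return server, status, body
        last = (server, status, body)
    return last
```

In the deployment described, servers would be the local squid followed by the second origin.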
On Mon, Jun 25, 2007 at 11:57:21PM +0200, Henrik Nordstrom wrote:
Mon 2007-06-25 15:02 -0500, Dave Dykstra wrote:
...
I considered that, but wouldn't multicasted ICP queries tend to get many
hundreds of replies (on average, half the total number of squids)?
Right.. so not so good
I have now posted the patch at
http://www.squid-cache.org/bugs/show_bug.cgi?id=1996
I decided to have only an option that uses stdin, as it is simpler for
users.
- Dave
On Tue, Jun 12, 2007 at 10:52:08PM +0200, Henrik Nordstrom wrote:
tis 2007-06-12 klockan 11:24 -0500 skrev Dave Dykstra
On Wed, Jun 13, 2007 at 09:33:19AM -0300, Michel Santos wrote:
Dave Dykstra said in the last message:
Hi,
I wanted more throughput for my application than I was able to get with
one gigabit connection, so we have put in place a bonded interface with
two one-gigabit connections
On Tue, Jun 12, 2007 at 12:16:35AM +0200, Henrik Nordstrom wrote:
Mon 2007-06-11 14:42 -0500, Dave Dykstra wrote:
Two different processes
can't open the same address port on Linux, but one process can open a
socket and pass it to two forked children. So, I have modified
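The shared-socket trick mentioned above (one process opens the port, forked children all accept on it) can be sketched in Python; this is my own illustration of the mechanism, not the actual squid modification:

```python
import os
import socket

def demo():
    # Two processes cannot bind() the same address/port, but children
    # forked after the bind all inherit the listening fd and can each
    # call accept() on it -- the kernel hands every new connection to
    # exactly one of them.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", 0))     # port 0: let the kernel pick one
    lsock.listen(8)
    port = lsock.getsockname()[1]

    pids = []
    for i in range(2):
        pid = os.fork()
        if pid == 0:
            # Child: serve one connection on the shared socket, then exit.
            conn, _ = lsock.accept()
            conn.sendall(b"child %d\n" % i)
            conn.close()
            os._exit(0)
        pids.append(pid)

    # Make two client connections; each is served by one of the children.
    replies = []
    for _ in range(2):
        c = socket.create_connection(("127.0.0.1", port))
        replies.append(c.recv(64))
        c.close()
    for pid in pids:
        os.waitpid(pid, 0)
    lsock.close()
    return sorted(replies)
```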
On Tue, Jun 12, 2007 at 12:19:26AM +0200, Henrik Nordstrom wrote:
Mon 2007-06-11 15:17 -0500, Dave Dykstra wrote:
of jobs. It quickly becomes impractical to distribute all the data from
just a few nodes running squid, so I am thinking about running squid on
every node, especially
through the 4 Gbits of network
connections with a single squid?
- Dave Dykstra
On Mon, Jun 11, 2007 at 06:13:32PM -0700, Michael Puckett wrote:
My squid application is doing large file transfers only. We have
(relatively) few clients doing (relatively) few transfers of very large
files
squid
distribution so I don't need to maintain it myself?
- Dave Dykstra
them. It's quite
a bit like the approach that peer-to-peer systems like bittorrent use,
although I haven't found any existing implementations that would be
appropriate for this application and I think it is probably more
appropriate to extend squid.
- Dave Dykstra