On Jan 24, 2010, at 10:40 AM, Angelo Höngens wrote:
>
> According to top, the CPU usage for the varnishd process is 0.0% at 400
> req/sec. The load over the past 15 minutes is 0.45, probably mostly
> because of haproxy running on the same machine. So I don't think load is
> a problem.. My problem
On Jan 24, 2010, at 7:23 AM, Angelo Höngens wrote:
>> What is thread_pool_max set to? Have you tried lowering it? We have
>> found that on systems with very high cache-hit ratios, 16 threads per
>> CPU is the sweet spot to avoid context-switch saturation.
>
> [ang...@nmt-nlb-03 ~]$ varnishadm -
On Jan 19, 2010, at 12:46 AM, Poul-Henning Kamp wrote:
> In message , "Michael S. Fischer" writes:
>
>> Does Varnish already try to utilize CPU caches efficiently by employing =
>> some sort of LIFO thread reuse policy or by pinning thread pools to =
>> specific CPUs? If not, there might b
On Jan 18, 2010, at 4:35 PM, Poul-Henning Kamp wrote:
> In message <97f066dd-4044-46a7-b3e1-34ce928e8...@slide.com>, Ken Brownfield writes:
>
>> Ironically and IMHO, one of the barriers to Varnish scalability
>> is its thread model, though this problem strikes in the thousands
>> of connect
On Jan 18, 2010, at 4:15 PM, Ken Brownfield wrote:
> Ironically and IMHO, one of the barriers to Varnish scalability is its thread
> model, though this problem strikes in the thousands of connections.
Agreed. In an early thread on varnish-misc in February 2008 I concluded that
reducing thread_
On Jan 18, 2010, at 4:06 PM, Poul-Henning Kamp wrote:
> In message <02d0ec1a-d0b0-40ee-b278-b57714e54...@dynamine.net>, "Michael S. Fischer" writes:
>
>> But we are not discussing serving dynamic content in this thread
>> anyway. We are talking about binary files, aren't we? Yes? Blobs
>
On Jan 18, 2010, at 3:54 PM, Ken Brownfield wrote:
> Adding unnecessary software overhead will add latency to requests to the
> filesystem, and obviously should be avoided. However, a cache in front of a
> general web server will 1) cause an object miss to have additional latency
> (though sma
On Jan 18, 2010, at 3:47 PM, Poul-Henning Kamp wrote:
> In message , "Michael S. Fischer" writes:
>
>> That's why you don't use those webservers as origin servers for
>> that purpose. But you don't use Varnish for it either. It's not
>> an origin server anyway.
>
> Actually, for protocol
On Jan 18, 2010, at 3:37 PM, pub crawler wrote:
>> Differences in latency of serving static content can vary widely based on
>> the web server in use, easily tens of milliseconds or more. There are
>> dozens of web servers out there, some written in interpreted languages, many
>> custom-written f
On Jan 18, 2010, at 3:08 PM, Ken Brownfield wrote:
>> I have a hard time believing that any difference in the total response time
>> of a cached static object between Varnish and a general-purpose webserver
>> will be statistically significant, especially considering typical Internet
>> network
On Jan 18, 2010, at 2:16 PM, pub crawler wrote:
>> Most kernels cache recently-accessed files in RAM, and so common web servers
>> such as Apache can already serve up static objects very quickly if they
>> are located in the buffer cache. (Varnish's apparent speed is largely
>> based on the
On Jan 18, 2010, at 1:52 PM, Poul-Henning Kamp wrote:
> In message , "Michael S. Fischer" writes:
>
>> What VM can overcome page-thrashing incurred by constantly referencing a
>> working set that is significantly larger than RAM?
>
> No VM can "overcome" the task at hand, but some work a l
On Jan 18, 2010, at 12:58 PM, pub crawler wrote:
> This is an inquiry for the Varnish community.
>
> Wondering how many folks are using Varnish purely for binary storage
> and caching (graphic files, archives, audio files, video files, etc.)?
>
> Interested specifically in large Varnish installa
On Jan 18, 2010, at 1:05 PM, Poul-Henning Kamp wrote:
> In message <43a238d7-433d-4000-8aa5-6c574882d...@dynamine.net>, "Michael S. Fischer" writes:
>
>> I should have been more clear. If you overcommit and use disk you
>> will die. Even SSD is a problem as the write latencies are high.
On Jan 18, 2010, at 12:31 PM, Ken Brownfield wrote:
On Jan 16, 2010, at 7:32 AM, Michael Fischer wrote:
On Sat, Jan 16, 2010 at 1:54 AM, Bendik Heltne
wrote:
Our Varnish servers have ~ 120.000 - 150.000 objects cached in ~ 4GB
memory and the backends have a much easier life than before
On Jan 18, 2010, at 5:20 AM, Tollef Fog Heen wrote:
> we are considering changing the defaults on how the cache-control header
> is handled in Varnish. Currently, we only look at s-maxage and maxage
> to decide if and how long an object should be cached. (We also look at
> expires, but that's not
Varnish does keep a log if you ask it to.
On Jan 10, 2010, at 10:37 PM, pub crawler
wrote:
> Alright, up and running with Varnish successfully. Quite happy with
> Varnish. Our app servers no longer are failing / overwhelmed.
>
> Here's our new problem...
>
> We have a lot of logging going on
It has been my experience that anti-DoS is usually easiest to implement at the
origin server level, where the request handlers are typically more flexible and
easier to program. Even forking servers like Apache can issue 4xx responses
lightning fast, without consuming many resources.
--M
That kind of VM overcommit (400GB on an 8GB box) is hazardous for performance
anyway. I strongly advise configuring Varnish cache sizes at slightly under
the actual RAM size of the box. If your working set size is larger, you need
more boxes or more RAM anyway, as paging I/O will significantly
Are you returning a "Vary: Accept-Encoding" in your origin server's
response headers?
--Michael
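A toy illustration of why the question matters (the cache and key function here are hypothetical, not Varnish internals): a cache that keys only on the URL will hand a gzipped body to clients that never asked for gzip, while keying on the header named by Vary keeps the variants apart.

```python
# Hypothetical minimal cache keyed on URL plus the header named by Vary,
# illustrating why "Vary: Accept-Encoding" matters: without it, a gzipped
# body could be served to a client that cannot decode gzip.
cache = {}

def cache_key(url, request_headers, vary="Accept-Encoding"):
    return (url, request_headers.get(vary, ""))

# Store the gzipped variant under its encoding-aware key.
cache[cache_key("/page", {"Accept-Encoding": "gzip"})] = b"<gzipped bytes>"

# A client without Accept-Encoding misses instead of receiving gzip.
print(cache_key("/page", {}) in cache)  # False
```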
On Nov 17, 2009, at 4:01 PM, Daniel Rodriguez wrote:
> Hi guys,
>
> I'm having a problem with a varnish implementation that we are testing
> to replace an ugly appliance. We were almost ready to pla
amd64 refers to the architecture (AKA x86_64), not the particular CPU
vendor. (As a matter of fact, I was unaware of this limitation;
AFAIK it does not exist in FreeBSD.)
In any event, mmap()ing 340GB even on a 64GB box is a recipe for
disaster; you will probably suffer death by paging if
If you'd like to examine the source, you can find it at:
http://svn.apache.org/repos/asf/incubator/trafficserver/
(I'm a Yahoo! employee, though I'm not here to represent them in any
way.)
--Michael
On Nov 2, 2009, at 4:26 PM, Ask Bjørn Hansen wrote:
> I thought this might be of interest:
>
On Sep 20, 2009, at 6:20 AM, Nils Goroll wrote:
>> tcp_tw_recycle is incompatible with NAT on the server side
>
> ... because it will enforce the verification of TCP time stamps.
> Unless all
> clients behind a NAT (actually PAD/masquerading) device use
> identical timestamps
> (within a certa
On Jul 28, 2009, at 3:09 PM, Rob S wrote:
> Michael S. Fischer wrote:
>> On Jul 28, 2009, at 2:35 PM, Rob S wrote:
>>> Thanks Darryl. However, I don't think this solution will work in
>>> our
>>> usage. We're running a blog. Administrators get
On Jul 28, 2009, at 2:35 PM, Rob S wrote:
> Thanks Darryl. However, I don't think this solution will work in our
> usage. We're running a blog. Administrators get un-cached access,
> straight through varnish. Then, when they publish, we issue a purge
> across the entire site. We need to do thi
What's the purpose of these requirements? Just curious.
--Michael
On Jul 25, 2009, at 9:10 PM, Ryan Chan wrote:
>
> Hello,
>
> I have serveral web sites running on Apache/PHP, I want to install a
> Transparent Reverse Proxy (e.g. squid, varnish) to cache the static
> stuff. (By looking at e
I think you mean 1 week :)
--Michael
On Jun 15, 2009, at 11:02 AM, Jauder Ho wrote:
Well, Velocity is in 2 weeks in San Jose if anyone wants to meet.
It's short notice but probably an appropriate conference.
http://en.oreilly.com/velocity2009
--Jauder
On Mon, Jun 15, 2009 at 3:07 AM, Poul
Ok, so your average latency is 16ms. At a concurrency of 10, you can
obtain at most 625 req/s.
(1 request/connection / 0.016 s = 62.5 requests/s/connection * 10
connections = 625 requests/s)
Try increasing your benchmark concurrency.
--Michael
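The arithmetic above, written out (numbers are the ones assumed in the reply):

```python
# Back-of-envelope throughput ceiling for a closed-loop benchmark:
# each connection can issue at most 1/latency requests per second.
latency_s = 0.016        # 16 ms average per request
concurrency = 10         # simultaneous benchmark connections

per_connection = 1 / latency_s               # ~62.5 req/s per connection
max_throughput = per_connection * concurrency
print(round(max_throughput))  # 625
```

This is why raising benchmark concurrency, not server tuning, is the first thing to try when the measured rate plateaus well below the server's capacity.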
On Jun 1, 2009, at 11:10 PM, Andreas Jung wrote:
>
I think the lesson of these cases is pretty clear: make sure your
cacheable working set fits into the proxy server's available memory --
or, if you want to exceed your available memory, make sure your hit
ratio is sufficiently high that the cache server rarely resorts to
paging in the data. Ot
On Apr 29, 2009, at 9:30 AM, Nick Loman wrote:
> Michael S. Fischer wrote:
>> On Apr 29, 2009, at 9:22 AM, Poul-Henning Kamp wrote:
>>> In message <49f87de4.3040...@loman.net>, Nick Loman writes:
>>>
>>>> Has Varnish got a solution to this problem wh
On Apr 29, 2009, at 9:22 AM, Poul-Henning Kamp wrote:
> In message <49f87de4.3040...@loman.net>, Nick Loman writes:
>
>> Has Varnish got a solution to this problem which does not involve
>> time-wait recycling? One thing I've thought of is perhaps
>> SO_REUSEADDR
>> is used or could be used when
Not that I have an answer, but I'd be curious to see the differences
in 'pmap -x <pid>' output for the different children.
--Michael
On Apr 7, 2009, at 6:27 PM, Darryl Dixon - Winterhouse Consulting wrote:
>> Hi All,
>>
>> I have an odd problem that I have only noticed happening since
>> moving f
On Feb 12, 2009, at 3:34 AM, Poul-Henning Kamp wrote:
> Well, if people in general think our defaults should be that way, we
> can change them, our defaults are whatever the consensus can agree on.
I'm with the OP. Regardless of the finer details of the RFC, if I'm a
web developer and I set the
On Feb 3, 2009, at 6:25 AM, Tollef Fog Heen wrote:
> If it has expired, the client just won't send it, so just check
> req.http.cookie for the relevant cookie and you'll be fine.
I strongly advise against this, as it could subject you to replay
attacks.
That said, the client does not include a
On Jan 28, 2009, at 10:04 AM, Niall O'Higgins wrote:
>> This is a typical indication of raw overload, what levels of traffic
>> are you hitting it with ?
>
> This kind of thing:
>
> Transaction rate: 3776.65 trans/sec
> Throughput: 1.68 MB/sec
> Concurrency:
On Jan 28, 2009, at 4:30 AM, Poul-Henning Kamp wrote:
> Your question is -exactly- why I want the rename: purge sounds like
> something happens to the object right now, and that is not possible
> from the CLI context.
How about 'qpurge' ?
--Michael
On Jan 9, 2009, at 1:59 AM, Tollef Fog Heen wrote:
> | What about CARP-like cache routing (i.e., where multiple cache
> servers
> | themselves are hash buckets)? This would go a LONG way towards
> | scalability.
>
> http://varnish.projects.linpro.no/wiki/PostTwoShoppingList second item
> sounds
+1. This is a very good idea for optimizing RAM utilization.
--Michael
On Jan 8, 2009, at 11:25 AM, Jeff Anderson wrote:
> Thanks for the response.
>
> I think inline page compression would be great too. Store gzipped
> objects in the persistent cache and unzip if uncompressed objects are
> re
What about CARP-like cache routing (i.e., where multiple cache servers
themselves are hash buckets)? This would go a LONG way towards
scalability.
--Michael
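A simplified sketch of the hash-bucket idea (real CARP uses highest-random-weight hashing so that adding or removing a server remaps only a fraction of objects; plain modulo is shown here only for brevity, and the server names are made up):

```python
import hashlib

# Route each URL to exactly one cache server, so an object is cached once
# across the whole array instead of once per server.
servers = ["cache-a", "cache-b", "cache-c"]

def route(url: str) -> str:
    digest = hashlib.md5(url.encode()).digest()
    return servers[int.from_bytes(digest, "big") % len(servers)]

# The same URL deterministically lands on the same server:
print(route("/images/logo.png") == route("/images/logo.png"))  # True
```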
On Jan 8, 2009, at 2:29 AM, Tollef Fog Heen wrote:
>
> Hi,
>
> a short while before Christmas, I wrote up a small document pointing
On Jan 6, 2009, at 7:42 AM, Marcus Smith wrote:
> "The build system will automatically detect the availability of
> epoll()
> and build the corresponding cache_acceptor. It will also automatically
> detect the availability of sendfile(), though its use is discouraged
> (and disabled by default) d
On Dec 8, 2008, at 9:03 AM, Per Buer wrote:
> Rebert Luc wrote:
>> Hello,
>>
>> In our studies we have a project which consists in testing the
>> performance of Varnish in order to make a comparative with and
>> without
>> the proxy cache.
>> Does anyone know which utilities to employ ? (knowing
How many CPUs (including all cores) are in your systems?
--Michael
On Nov 20, 2008, at 12:06 PM, Michael wrote:
> Hi,
>
> PF> What does "overflowed work requests" in varnishstat signify ? If
> this
> PF> number is large is it a bad sign ?
>
> I have similar problem. "overflowed work requests"
> fine. What could be the issue?
>
> On Thu, Nov 20, 2008 at 4:12 PM, Michael S. Fischer
> <[EMAIL PROTECTED]> wrote:
>> Smells like an architecture mismatch. Any chance you're running a
>> 32-bit Varnish build?
>>
>> --Michael
>>
>> On
Smells like an architecture mismatch. Any chance you're running a
32-bit Varnish build?
--Michael
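A quick way to check for this kind of mismatch from a script (a sketch; it inspects the current interpreter, not the varnishd binary, so treat it as illustrative):

```python
import platform
import struct

# A 32-bit build caps the virtual address space at 2-4 GB regardless of
# how much RAM or disk you configure; pointer width reveals the build.
pointer_bits = struct.calcsize("P") * 8   # pointer size of this process
print(f"{pointer_bits}-bit process on a {platform.machine()} machine")
```

For a binary on disk, `file /usr/sbin/varnishd` gives the same answer.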
On Thu, Nov 20, 2008 at 1:34 AM, Paras Fadte <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have installed varnish 2.0.2 on openSUSE 10.3 (X86-64) , but it
> doesn't seem to start and I get "VCL compilation
I assume this is for logging daemon metadata/error conditions and not
actual traffic?
If this is for request/response logging, consider implementing a
bridge daemon that reads from the SHM like varnishlog or varnishncsa
does, and which then sends the output via liblogging. This will
provide the f
Nearly every modern webserver has optimized file transfers using
sendfile(2). You're not going to get any better performance by shifting the
burden of this task to your caching proxies.
--Michael
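For illustration, sendfile(2) from Python (a minimal sketch: file-to-file copies work on Linux, while some platforms require the destination to be a socket):

```python
import os
import tempfile

# sendfile(2) moves bytes between descriptors inside the kernel, with no
# userspace buffer -- the optimization the reply above refers to.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"static content " * 1000)
src.flush()

dst_path = src.name + ".copy"
with open(src.name, "rb") as fin, open(dst_path, "wb") as fout:
    size = os.fstat(fin.fileno()).st_size
    sent = 0
    while sent < size:
        sent += os.sendfile(fout.fileno(), fin.fileno(), sent, size - sent)

print(os.path.getsize(dst_path) == size)  # True
```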
On Tue, Aug 12, 2008 at 12:53 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm cert
This sounds an awful lot like "no PAE kernel" -- i.e., 32 bits and a really
old OS.
--Michael
On Fri, Jun 20, 2008 at 2:42 AM, kuku li <[EMAIL PROTECTED]> wrote:
> Hello,
>
> we have been running varnish for a while but noticed that varnish will just
> restart itself as the virtual memory goes t
On Thu, Jun 19, 2008 at 5:37 AM, Rafael Umann <[EMAIL PROTECTED]>
wrote:
> > What is your request:connection ratio?
>
> Unfortunately now i dont have servers doing 2 hits/second, and
> thats why i dont have stats for you.
Actually, it's right there in your varnishstat output:
36189916
On Wed, Jun 18, 2008 at 4:51 AM, Rafael Umann <[EMAIL PROTECTED]>
wrote:
>
> If it is a 32-bit system, probably the problem is that your stack size
> is 10MB. So 238 * 10MB = ~2GB
>
> I decreased my stack size to 512KB. Using 1GB storage files I can now
> open almost 1900 threads using all the 2GB
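The arithmetic in the quoted message, sketched out (the 2 GB figure is an assumed usable address space for a 32-bit process, ignoring storage-file mappings):

```python
# Thread stacks compete with everything else for a 32-bit process's
# limited virtual address space, so stack size caps the thread count.
usable_mb = 2 * 1024          # assumed usable address space, MB
default_stack_mb = 10         # common 32-bit default (ulimit -s)
small_stack_mb = 0.5          # 512 KB, as suggested in the message

print(usable_mb // default_stack_mb)      # ~204 threads before exhaustion
print(int(usable_mb / small_stack_mb))    # 4096 (before storage mappings)
```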
Raising the number of threads will not significantly improve Varnish
concurrency in most cases. I did a test a few months ago using 4 CPUs on
RHEL 4.6 with very high request concurrency and a very low
request-per-connection ratio (i.e., 1:1, no keepalives) and found that the
magic number is about
On Mon, Jun 2, 2008 at 7:57 AM, Chris Shenton <[EMAIL PROTECTED]>
wrote:
> We have to fill out pounds of paperwork in order to take any outage on
> a public server, no matter how small. Is there a way to restart
> Varnish without any downtime -- to continue accepting but holding
> connections unt
Why are you using Varnish to serve primarily images? Modern webservers
serve static files very efficiently off the filesystem.
Best regards,
--Michael
On Sun, Jun 1, 2008 at 8:58 AM, Barry Abrahamson <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> Is anyone running multiple varnish instances per server (o
On Sun, Apr 20, 2008 at 10:25 AM, Timothy Ball <[EMAIL PROTECTED]> wrote:
> Does anyone have a script that takes varnishlog output and munges it into
> something that looks combinedlog-ish? Queries to google-tube have not been
> useful.
varnishncsa(1) comes in the box.
--Michael
On Tue, Apr 15, 2008 at 11:53 PM, Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> In message <[EMAIL PROTECTED]>, "Michael S. Fischer" writes:
>
> >> Varnish for instance assumes that the administrator is not a total
> >> madman, who would do something as patently stupid as you propose
>
On Tue, Apr 15, 2008 at 1:16 AM, Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> >Well-engineered software doesn't make potentially false assumptions
> >about the environment in which it runs.
>
> And they don't.
>
> Varnish for instance assumes that the administrator is not a total
> madman, w
On Tue, Apr 15, 2008 at 12:25 AM, Ricardo Newbery
<[EMAIL PROTECTED]> wrote:
> Assuming that "nobody" is an available user on your system, then is
> the "-u user" option for varnishd superfluous?
Who's to say that "nobody" is an unprivileged user?
/etc/passwd:
nobody:*:0:0:alias for root:...
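The check the reply implies can be scripted (a sketch; it verifies the account actually passed to varnishd -u rather than trusting its name):

```python
import pwd

# The *name* "nobody" guarantees nothing; check the actual UID/GID of
# the account before handing it to a daemon's -u option.
def is_unprivileged(username: str) -> bool:
    entry = pwd.getpwnam(username)
    return entry.pw_uid != 0 and entry.pw_gid != 0

print(is_unprivileged("root"))  # False
```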
On Tue, Apr 8, 2008 at 4:34 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> > I should add a qualifier to my vote, that stale-while-revalidate
> > generally is used to mask suboptimal backend performance and so I
> > discourage it in favor of fixing the backend.
>
> Of course the main premise of
On Tue, Apr 8, 2008 at 4:25 PM, Michael S. Fischer <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> > +1 on stale-while-revalidate. I found this one to be real handy.
>
> Another +1
I should add a qualifier t
On Tue, Apr 8, 2008 at 4:18 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> +1 on stale-while-revalidate. I found this one to be real handy.
Another +1
--Michael
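For readers unfamiliar with the extension being voted on, a hypothetical in-memory sketch of stale-while-revalidate semantics (the Entry class and refresh hook are made up for illustration):

```python
import time

# Within the stale-while-revalidate window, a stale object is served
# immediately while a refresh is kicked off, so clients never block
# on the backend.
class Entry:
    def __init__(self, body, ttl, swr):
        self.body = body
        self.expires = time.time() + ttl
        self.swr = swr  # extra seconds during which stale is servable

def lookup(entry, refresh):
    now = time.time()
    if now < entry.expires:
        return entry.body                  # fresh hit
    if now < entry.expires + entry.swr:
        refresh(entry)                     # would run in the background
        return entry.body                  # serve stale immediately
    return None                            # fully expired: synchronous miss

stale = Entry(b"cached page", ttl=-1, swr=60)   # already past its TTL
print(lookup(stale, lambda e: None))            # b'cached page'
```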
___
varnish-misc mailing list
varnish-misc@projects.linpro.no
http://projects.linpro.no/
On Fri, Apr 4, 2008 at 3:31 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> > > Again, "static" content isn't only the stuff that is served from
> > > filesystems in the classic static web server scenario. There are plenty
> of
> > > "dynamic" applications that process content from database -- a
On Mon, Apr 7, 2008 at 2:14 PM, Simon Lyall <[EMAIL PROTECTED]> wrote:
> On Mon, 7 Apr 2008, Michael S. Fischer wrote:
> > That said, it wouldn't make sense to entirely deallocate your swap
> > space, since the kernel may decide to page or swap out processes other
&
On Mon, Apr 7, 2008 at 9:00 AM, Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> Sascha Ottolski <[EMAIL PROTECTED]> writes:
> > now that my varnish processes start to reach the RAM size, I'm wondering
> > what a dimension of swap would be wise? I currently have about 30 GB
> > swap space for 32
On Fri, Apr 4, 2008 at 11:05 AM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> Again, "static" content isn't only the stuff that is served from
> filesystems in the classic static web server scenario. There are plenty of
> "dynamic" applications that process content from database -- applying skin
On Fri, Apr 4, 2008 at 3:20 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> you are right, _if_ the working set is small. in my case, we're talking
> 20+ million small images (5-50 KB each), 400+ GB in total size, and it's
> growing every day. access is very random, but there still is a good
> am
On Thu, Apr 3, 2008 at 8:59 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> Well, first of all you're setting up a false dichotomy. Not everything
> fits neatly into your apparent definitions of dynamic versus static. Your
> definitions appear to exclude the use case where you have cacheable c
On Thu, Apr 3, 2008 at 7:37 PM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> URL versioning is usually not appropriate for html
> pages or other primary resources that are intended to be reached directly by
> the end user and whose URLs must not change.
Back to square one. Are these latter reso
On Thu, Apr 3, 2008 at 11:53 AM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> On Apr 3, 2008, at 11:04 AM, Michael S. Fischer wrote:
> > On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> >
> > > and I don't wan't upstre
On Thu, Apr 3, 2008 at 10:58 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> and I don't wan't upstream caches or browsers to cache that long, only
> varnish, so setting headers doesn't seem to fit.
Why not? Just curious. If it's truly cachable content, it seems as
though it would make sense
On Thu, Apr 3, 2008 at 10:26 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> All this with 1.1.2. It's vital to my setup to cache as many objects as
> possible, for a long time, and that they really stay in the cache. Is
> there anything I could do to prevent the cache being emptied? May be
>
On Mon, Mar 31, 2008 at 11:08 AM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> probably not exactly the same, but may be someone finds it useful: If
> just started to dive a bit into HAProxy (http://haproxy.1wt.eu/): the
> development version has the ability to calculate the loadbalancing
> based
On Mon, Mar 31, 2008 at 10:34 PM, Stig Sandbeck Mathisen <[EMAIL PROTECTED]>
wrote:
> On Mon, 31 Mar 2008 20:10:06 +0200, Sascha Ottolski <[EMAIL PROTECTED]>
> said:
>
> > is there anything like a snapshot release that is worth giving it a
> > try, especially if my configuration will hopefully sta
On Fri, Mar 28, 2008 at 4:58 AM, Florian Engelhardt <[EMAIL PROTECTED]>
wrote:
> You could store the sessions on a separate server, for instance on a
> memcache or in a database
Good idea. (Though if you use memcached, you'd probably want to
periodically copy the backing store to a file to surv
The Transfer-Encoding: header is missing from the Varnish response as well.
--Michael
On Thu, Mar 27, 2008 at 7:55 AM, Florian Engelhardt <[EMAIL PROTECTED]>
wrote:
> Hello,
>
> i've got a problem with the X-JSON HTTP-Header not being delivered by
> varnish in pipe and pass mode.
> My applicati
On Fri, Mar 21, 2008 at 3:36 AM, Ricardo Newbery <[EMAIL PROTECTED]> wrote:
> and I'm wondering if the first part of this is unnecessary. For
> example, what happens if I have this...
>
>
> if (req.http.Cookie ~ "(__ac=|_ZopeId=)") {
> pass;
> }
>
> but no Cookie header is p
On Mon, Mar 17, 2008 at 3:32 PM, DHF <[EMAIL PROTECTED]> wrote:
> This is called CARP/"Cache Array Routing Protocol" in squid land.
> Here's a link to some info on it:
>
> http://docs.huihoo.com/gnu_linux/squid/html/x2398.html
>
> It works quite well for reducing the number of globally duplicat
On Mon, Mar 17, 2008 at 8:57 AM, Poul-Henning Kamp <[EMAIL PROTECTED]> wrote:
> >No, we were talking about how long an idle backend connection is kept
> >open (or at least I was).
>
> Yes I know :-)
>
> And we don't do anything to close those before the backend closes on
> us, we have no reas
On Mon, Mar 17, 2008 at 12:42 AM, Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> "Michael S. Fischer" <[EMAIL PROTECTED]> writes:
>
> > Dag-Erling Smørgrav <[EMAIL PROTECTED]> writes:
> > > I think the default timeout on backends connection ma
On Feb 13, 2008 7:41 AM, Dag-Erling Smørgrav <[EMAIL PROTECTED]> wrote:
> I believe varnishlog -w /var/log/varnish.log is enabled by default if
> you install from packages on !FreeBSD. We may want to change this.
This was true for my RHEL 4 installation. I was only able to achieve
16,000 connec
On Sun, Mar 16, 2008 at 10:02 AM, Michael S. Fischer
<[EMAIL PROTECTED]> wrote:
I don't know why I'm having such a problem with this. Sigh! I think
I got it right this time.
> > If I were designing such a service, my choices would be:
>
> Corrections:
>
>
On Sun, Mar 16, 2008 at 10:00 AM, Michael S. Fischer
<[EMAIL PROTECTED]> wrote:
> If I were designing such a service, my choices would be:
Corrections:
> (1) 4 machines, each with 4-disk RAID 1 (fast, but dangerous)
> (2) 4 machines, each with 5-disk RAID 5 (safe, fast
On Fri, Mar 14, 2008 at 1:37 PM, Sascha Ottolski <[EMAIL PROTECTED]> wrote:
> The challenge is to serve 20+ million image files, I guess with up to
> 1500 req/sec at peak.
A modern disk drive can service 100 random IOPS (@ 10ms/seek, that's
reasonable). Without any caching, you'd need 15 disks
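The seek-budget arithmetic above, written out (100 IOPS/disk at ~10 ms/seek is the assumption from the reply):

```python
import math

# With no cache hits, every request is a random read, so the spindle
# count is simply peak request rate divided by per-disk random IOPS.
peak_req_per_s = 1500
iops_per_disk = 100

disks_needed = math.ceil(peak_req_per_s / iops_per_disk)
print(disks_needed)  # 15
```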
On Mon, Mar 10, 2008 at 7:41 AM, Michael S. Fischer
<[EMAIL PROTECTED]> wrote:
>
> On Mon, Mar 10, 2008 at 3:57 AM, Gsm Lock <[EMAIL PROTECTED]> wrote:
> > I have a few backend servers . Static documents on servers has ugly
> > addresses as http://my-next-
On Mon, Mar 10, 2008 at 3:57 AM, Gsm Lock <[EMAIL PROTECTED]> wrote:
> I have a few backend servers. Static documents on the servers have ugly
> addresses such as http://my-next-back.end/111../785643../blabla/.../my.doc
> (mostly unstructured).
> Some of them do not have unique names.
> I need them to be ac
On Tue, Mar 4, 2008 at 1:53 AM, Henning Stener <[EMAIL PROTECTED]> wrote:
>
> Are you sending one request per connection and closing it, or are you
> serving a number of requests to 10K different connections? In the last
> case how many requests/sec are you seeing?
In our test, we sent about 20
On Thu, Feb 28, 2008 at 9:52 PM, Mark Smallcombe <[EMAIL PROTECTED]> wrote:
> What tuning recommendations do you have for varnish to help it handle high
> load?
Funny you should ask, I've been spending a lot of time with Varnish in
the lab. Here are a few observations I've made:
(N.B. We're
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
> > Of Michael S. Fischer
>
>
> > Sent: Thursday, February 28, 2008 1:57 PM
> > To: Andrew Knapp
> > Cc: varnish-misc@projects.linpro.no
> > S
> Anyone have any ideas? I'm running the 1.1.2-5 rpms from sf.net on
> Centos 5.1.
>
> Thanks,
> Andy
>
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:varnish-misc-
> > [EMAIL PROTECTED] On Behalf Of Andrew Knapp
>
> >
What does 'sysctl fs.file-max' say? It should be >= the ulimit.
--Michael
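The comparison the reply suggests can be scripted (a Linux-specific sketch; /proc paths do not exist elsewhere):

```python
import resource

# Compare the per-process descriptor limit against the kernel-wide
# ceiling: the soft limit should never exceed fs.file-max.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"RLIMIT_NOFILE: soft={soft} hard={hard}")

try:
    with open("/proc/sys/fs/file-max") as f:
        file_max = int(f.read().split()[0])
    print("fs.file-max:", file_max)
except FileNotFoundError:
    print("no /proc here; run `sysctl fs.file-max` instead")
```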
On Wed, Feb 20, 2008 at 4:04 PM, Andrew Knapp <[EMAIL PROTECTED]> wrote:
>
>
>
>
> Hello,
>
>
>
> I'm getting this error when running varnishd:
>
>
>
> >>
>
> Child said (2, 15369): < 217:
>
> Condition((pipe(w->pipe)) == 0)
(1) Feature request: Can a knob be added to turn down the verbosity of
Varnish logging? Right now on a quad-core Xeon we can service about
14k conn/s, which is good, but I wonder whether we could eke out even
more performance by quelling information that we don't need to log.
(2) HTTP/1.1 keep-al