Hello guys,
I have some doubts, and I would appreciate it if someone could help me :-)
I posted something about this some days ago [1].
Basically, in the node tree that keeps the objects in the cache, I
inserted a list that keeps all listeners [2] and an FD that points to
the temp file. In ngx_http_upstream.c,
The patch is not in the mailing list. We just discussed the same problem
before on the list with other developers. Unfortunately I cannot share the
patch because it was made for a commercial project. However, I am going
to ask for permission to share it.
On Fri, Aug 30, 2013 at 12:04 PM, Split
Hello Anatoli,
> I think this is asynchronous, and if the upstream is faster than the
> downstream it saves the data to the cached file faster and the downstream gets
> the data from the file instead of the mem buffers.
In this case, I don't need to worry about upstream/downstream speed. Very good!
>
Hello Mathew,
> This is an interesting idea, while I don't see it being all that useful for
> most applications there are some that could really benefit (large file
> proxying first comes to mind). If it could be achieved without introducing
> too much of a CPU overhead in keeping track of the requests & available parts
Is the patch on this mailing list (forgive me, I can't see it)?
I'll happily test it for you, although for me to get any personal benefit
there would need to be a size restriction, since 99.9% of requests are just
small HTML documents and would not benefit. Also the standard caching
(headers that resu
I discussed the idea years ago here in the mailing list, but none of the
main developers liked it. However, I developed a patch, and we have had this in
production for more than a year and it works fine.
Just consider the following case:
You have a new file which is 1 GB and it is located far from the c
This is an interesting idea, while I don't see it being all that useful for
most applications there are some that could really benefit (large file
proxying first comes to mind). If it could be achieved without introducing
too much of a CPU overhead in keeping track of the requests & available
parts
Hello,
On Wed, Aug 28, 2013 at 7:56 PM, Alex Garzão wrote:
> Hello Anatoli,
>
> Thanks for your reply. I would appreciate your help (a lot) :-)
>
> I'm trying to fix the code with the following requirements in mind:
>
> 1) We have upstreams/downstreams with good (and bad) links; in
> general, up
Hello Anatoli,
Thanks for your reply. I would appreciate your help (a lot) :-)
I'm trying to fix the code with the following requirements in mind:
1) We have upstreams/downstreams with good (and bad) links; in
general, upstream speed is higher than downstream speed but, in some
situations, the down
I had the same problem and I wrote a patch to reuse the file that I already
have in the tmp directory for the second stream (and for all streams started
before the file is completely cached). Unfortunately I cannot share it, but I
can give you an idea of how to do it.
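The core of that idea can be sketched as follows. This is a simplified, hypothetical illustration using plain stdio and polling (the real patch would be event-driven inside nginx): a second reader follows the temp file that the first request's writer is still filling, reading only up to the bytes already written.

```c
#include <stdio.h>

/* In nginx this state would live on the cache node; here it is a
 * plain struct shared between the "writer" and a "reader". */
typedef struct {
    FILE *tmp;      /* the temp file being filled */
    long  written;  /* bytes the writer has flushed so far */
    int   complete; /* set once the writer finishes */
} shared_file_t;

/* Copy whatever new data is available for one listener into buf.
 * Returns the number of bytes copied (0 means nothing new yet; in
 * nginx we would re-arm an event and wait instead of polling). */
static size_t read_available(shared_file_t *s, long *offset,
                             char *buf, size_t cap)
{
    long avail = s->written - *offset;
    if (avail <= 0) {
        return 0;
    }
    if ((size_t) avail > cap) {
        avail = (long) cap;
    }
    fseek(s->tmp, *offset, SEEK_SET);
    size_t n = fread(buf, 1, (size_t) avail, s->tmp);
    *offset += (long) n;
    return n;
}
```

The key point is that each listener only ever reads up to `written`, so it can safely stream the file while it is still growing, and it stops for good once `complete` is set and its offset reaches `written`.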
On Tue, Aug 27, 2013 at 8:43 PM, Alex Garzão
Hello Wandenberg,
Thanks for your reply.
Using proxy_cache_lock, when the second request arrives, it will wait
until the object is complete in the cache (or until
proxy_cache_lock_timeout expires). But, in many cases, my upstream has
a really slow link and NGINX needs more than 30 minutes to downl
Try using the proxy_cache_lock configuration; I think this is what you are
looking for.
Don't forget to configure proxy_cache_lock_timeout for your use case.
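The two directives above can be combined in a minimal configuration like this (a sketch only; `mycache` and `http://upstream_backend` are placeholder names and the values are illustrative):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m;

server {
    location / {
        proxy_pass http://upstream_backend;
        proxy_cache mycache;

        # Let only the first request for a key go to the upstream;
        # the others wait for the cache entry instead of duplicating
        # the fetch.
        proxy_cache_lock on;

        # How long the other requests wait before being passed to the
        # upstream themselves (the default is 5s; tune to your case).
        proxy_cache_lock_timeout 30m;
    }
}
```

Note that requests which hit the timeout are sent to the upstream anyway, which is exactly the problem described above for very slow upstream links.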
On Aug 26, 2013 6:54 PM, "Alex Garzão" wrote:
> Hello guys,
>
> This is my first post to nginx-devel.
>
> First of all, I would like to congratulate NGINX developers.
Hello guys,
This is my first post to nginx-devel.
First of all, I would like to congratulate NGINX developers. NGINX is
an amazing project :-)
Well, I'm using NGINX as a proxy server, with cache enabled. I noted
that, when two (or more) users try to download the same object in
parallel, and