An update on this - I ended up implementing support for asynchronous file
open, based on the thread pool feature that was added in nginx 1.7.11.
I copied nginx's ngx_open_file_cache.c (from 1.9.0) and made it
asynchronous; the source code is here:
https://github.com/kaltura/nginx-vod-module/blob/maste
Thank you all for your replies.
Since all three replies suggest some form of caching, I'll respond to them
together here -
The nginx servers that I mentioned in my post do not serve client requests
directly; the clients always hit the CDN first (we mostly use Akamai), and
the CDN then pulls from these
Hi all,
In our production environment, we have several nginx servers running on
Ubuntu that serve files from a very large (several PBs) NFS mounted storage.
Usually the storage responds pretty fast, but occasionally it can be very
slow. Since we are using aio, when a file read operation runs slow
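(For reference, the thread-pool approach mentioned elsewhere in this thread is driven entirely from nginx.conf. A minimal sketch; the pool name and the thread/queue sizes below are illustrative, not recommendations:)

```nginx
# main context: define a pool of worker threads for blocking reads
thread_pool nfs_pool threads=32 max_queue=65536;

http {
    server {
        location /videos/ {
            # offload file reads to the thread pool, so one slow NFS
            # read does not stall the entire worker process
            aio threads=nfs_pool;
        }
    }
}
```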
Thanks Maxim, that's very helpful, it works great
Eran
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,255292,255302#msg-255302
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Thank you Maxim, your suggestion will definitely work for me.
Are you familiar with any simple "non-core" module that does this? I will
need to test for the existence of this function or the need to explicitly
add librt, update CORE_LIBS accordingly, and define some preprocessor macro
that I can l
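(For anyone finding this later: nginx's build system ships a reusable feature-test helper that a module's config script can call. A hedged sketch; the ngx_feature_* variables and auto/feature are nginx's own configure mechanism, while the macro name below is made up for the example:)

```sh
# fragment of a module's "config" script, sourced by nginx's ./configure
ngx_feature="clock_gettime(CLOCK_MONOTONIC)"
ngx_feature_name="MY_MODULE_HAVE_CLOCK_GETTIME"   # hypothetical macro
ngx_feature_run=yes
ngx_feature_incs="#include <time.h>"
ngx_feature_path=
ngx_feature_libs=
ngx_feature_test="struct timespec ts; clock_gettime(CLOCK_MONOTONIC, &ts)"
. auto/feature

if [ $ngx_found = no ]; then
    # retry with librt; older glibc keeps clock_gettime() there
    ngx_feature="clock_gettime(CLOCK_MONOTONIC) in librt"
    ngx_feature_libs=-lrt
    . auto/feature

    if [ $ngx_found = yes ]; then
        CORE_LIBS="$CORE_LIBS -lrt"
    fi
fi
```

On success, auto/feature defines the macro for you, so the module code can test it with #if.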
Hi all,
Is it possible for an nginx module to define custom compilation switches
that add external libs / preprocessor macros? Is there an example of a
module that does this?
Specifically, what I'm trying to do is measure time accurately in my module
for benchmarking purposes. Since I use Linux,
Thank you very much, Maxim.
I implemented the solution as you advised.
Eran
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,253006,253116#msg-253116
Thank you very much, Maxim! This works!
However, I bumped into a new problem. I use two different types of asynchronous
operations in my code - file I/O and HTTP requests. When I call
ngx_http_run_posted_requests from the aio callback it works well, but when I
call it from the HTTP completion callba
Maxim, thank you very much for your response.
To clarify - the problem is not about freeing the request (I don't think
there's a resource leak here), the problem is that the connection is just
left hanging until the client closes it / the server is restarted.
It is true that write_event_handler g
Hi all,
In the module I'm developing, it is possible to encounter an error after the
response headers have already been sent. Since the headers went out with
status 200, the only way for me to signal the error to the client is to
close the connection. I tried calling
ngx_http_fi
Hi all,
I'm working on a native nginx module in which I want to read an input file
and perform some manipulations on its data. Since the files I'm reading are
big and accessed over NFS, I want to use asynchronous I/O for reading them,
and I want to implement it as a pipeline of chunks, i.e. read a
Thank you for your reply.
But, actually, I am looking for a native C module...
Eran
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,247481,247483#msg-247483
Hi All,
I want to develop an nginx HTTP module that gets several values from
memcache, performs some processing on them, and returns the result to the
client. I want all memcache operations to be performed asynchronously
without blocking the worker process for maximum scalability. For this
reason,