Hey,
I have two major types of requests for my app:
- long-running (10 seconds or more; I can differentiate them by URL)
- normal (less than 1 second)
The question is: I'd like to set up the server so that:
1) normal requests are served by 15 unicorn workers
2) long-running requests are served by 5 additional unicorn workers with
their own queue
The separate queue for long-running requests is there to prevent people who
run long requests from consuming all the workers (for example by hitting
refresh 20 times, or just making too many valid but long requests).
Here's a possible solution I came up with, and it seems to work.
What do you think about it? Does it have problems I didn't think of?
Are there better ways to do the same thing?
My solution so far:
- in nginx:
  - create two upstream servers
  - configure nginx to pass long-running requests to the long-running upstream
upstream unicorn {
  server unix:/tmp/unicorn.sock;
}

upstream long_requests_unicorn {
  server unix:/tmp/long_requests_unicorn.sock;
}

server {
  # long-running URLs go to the dedicated worker pool
  location ~ ^/(long_request_url1|long_request_url2) {
    if (!-f $request_filename) {
      proxy_pass http://long_requests_unicorn;
      break;
    }
  }

  # everything else is handled by the normal pool
  if (!-f $request_filename) {
    proxy_pass http://unicorn;
    break;
  }
}
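
Side note: I suspect the same file check could also be expressed without the
if-blocks, using try_files with a named location. This is only an untested
sketch, and @long_unicorn is just a name I made up here:

  location ~ ^/(long_request_url1|long_request_url2) {
    try_files $uri @long_unicorn;
  }

  location @long_unicorn {
    proxy_pass http://long_requests_unicorn;
  }
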
- in the unicorn configuration file:
  - listen on both sockets in the master
  - after forking a child, close the socket it doesn't need to listen on
worker_processes 20

# the master listens on both sockets
listen '/tmp/unicorn.sock'
listen '/tmp/long_requests_unicorn.sock'

def assign_to_queue(server, worker)
  queue = case worker.nr
          when 0...15  then '/tmp/unicorn.sock'
          when 15...20 then '/tmp/long_requests_unicorn.sock'
          else raise "Can't find a queue for worker ##{worker.nr}"
          end

  # keep only the listener that matches this worker's queue;
  # assigning server.listeners closes the other socket in the child
  server.listeners = Unicorn::HttpServer::LISTENERS.find_all do |io|
    server.send(:sock_name, io) == queue
  end
end

after_fork do |server, worker|
  assign_to_queue(server, worker)
end
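
To verify the split, I'm also thinking about extending after_fork to log
which socket each worker actually kept. A small sketch (assuming
server.logger is usable inside after_fork, and that LISTENERS only contains
the sockets that survived the assignment above):

after_fork do |server, worker|
  assign_to_queue(server, worker)

  # log the listener(s) this worker kept so the split is visible in the log
  kept = Unicorn::HttpServer::LISTENERS.map { |io| server.send(:sock_name, io) }
  server.logger.info "worker ##{worker.nr} listening on #{kept.join(', ')}"
end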
_______________________________________________
Unicorn mailing list - [email protected]
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying