Quique wrote:
Thanks for responding so soon. We assumed that with the -n option, named
creates #cpus worker threads.
If not specified, named will try to determine the number of CPUs present and create one thread per CPU.
If it is unable to determine the number of CPUs, a single worker thread will be
created.
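For reference, here is a minimal sketch (not named's actual code) of how that default can be derived on a POSIX system such as Solaris: count the online CPUs with sysconf() and fall back to a single worker if the count is unavailable.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Online CPUs; on a T2000 this counts hardware threads. */
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    /* Fall back to a single worker thread if detection fails. */
    long nworkers = (ncpus > 0) ? ncpus : 1;
    printf("detected %ld CPUs, would create %ld worker threads\n",
           ncpus, nworkers);
    return 0;
}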
In our case, we did not pass the -n parameter, and BIND brought up 35
threads (32 for the 32 CPUs, plus three others that we cannot account for).
I would like to know if it is possible to run more than one process per
CPU. We have these two machines configured with a single physical interface,
and at the moment they are serving 2000 qps of real traffic.
We specified the -n parameter just for testing to find an optimal
number. Generally having ~1 thread per processor thread worked well.
If you are going to run more than one named process then you'll likely
want to specify the number of threads because the threads start
thrashing when using the default. For example, if you have 4 interfaces
on a T2000 (8 cores with 4 hardware threads each) and run one named per
interface without specifying the number of threads, each named will try
to use 35 threads, leading to 140 threads total running on the box. It
will work, but we found it's not optimal.
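To make that concrete (-n and -c are standard named options, but the config file name below is only a placeholder): on a 32-hardware-thread T2000 running four named instances, starting each one with something like

    named -n 8 -c /etc/named-if0.conf

keeps the total at roughly one worker thread per hardware thread instead of 140.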
A few days ago we configured the load balancer to send it about 4000 qps,
and we saw the same behavior you described: many client queries timed out
or were dropped.
We also thought that more interfaces would increase the throughput of the
machine, because we have one machine with two interfaces and good
throughput, though we don't know why. But it is something we have not yet
tested on the T2000.
I read in the post something like “BIND puts more threads in recvmsg”.
Could you explain that, please, or tell me where I can read more about it?
I looked at the source code. It appears that BIND creates a few manager
threads and a number of worker threads. One of the manager threads
deals with communication from the interface. Most of the code is in
lib/isc/unix/socket.c. You can trace through the code from recv_msg to
the single isc_socketmgr_t per interface.
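As a rough illustration of that receive path (this is not BIND's code, just a minimal standalone sketch, and the port number is arbitrary), the reading side boils down to one thread sitting in recvmsg() on a UDP socket:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5300);   /* arbitrary test port, not 53 */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        perror("bind");
        return 1;
    }

    char buf[512];
    struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    for (;;) {
        /* The socket-manager thread blocks here for each packet. */
        ssize_t n = recvmsg(fd, &msg, 0);
        if (n < 0) {
            perror("recvmsg");
            break;
        }
        printf("received %zd bytes\n", n);
    }
    return 0;
}

If I'm reading the code right, that single reader per interface is why adding worker threads alone doesn't put more threads in recvmsg.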
Hope that helps.
-Andy
Thanks for everything.
_______________________________________________
networking-discuss mailing list
[email protected]