Hi Bobby,
In your case, what needs to be checked is whether you are maxing out the
CPU or not. It's very likely that your handler is waiting for Kafka to
process some operation, blocking the thread if that operation is
synchronous. In that case, the server where your MINA handler is running
will basically do nothing but wait, and the performance can be abysmal.
Increasing the number of threads capable of handling incoming messages
is the way to go, up to a point.
The acceptor thread pool, as you said, is capped at the number of
available processors. It can be increased if needed, up to a point. So
you are doing the right thing here, because the default
NioDatagramAcceptor constructor does not declare a thread pool.
All in all, that should flow. What I would suggest at this point (and
since I don't have your code, what I say is pure conjecture) would be to
mock the Kafka endpoint in your handler, to see how fast the MINA server
goes when the mock Kafka handler is doing nothing (perhaps printing a
message, but no more). That would narrow the search:
- if it's fast, MINA is not the culprit;
- if it's still slow, then it will be time to dig further on the MINA
side, and a piece of code I can debug would then be useful.
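To make the suggestion concrete, here is a minimal sketch of what such a mock might look like. The class name and the send() signature are assumptions based on your description, not your actual code; the point is simply that any throughput you measure with it is MINA plus your handler, with Kafka out of the picture:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical drop-in for the real Kafka wrapper: it only counts calls,
// so the measured throughput excludes Kafka entirely.
class MockKafkaProducer {
    private final AtomicLong sent = new AtomicLong();

    // Same shape as the real send(), but a no-op apart from counting.
    public void send(Object message) {
        sent.incrementAndGet();
    }

    public long sentCount() {
        return sent.get();
    }
}
```

Wire this in place of the real producer inside messageReceived() and compare the TPS you get against the current numbers.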
Thanks!
On 18/01/2024 08:37, Bobby R. Harsono wrote:
Hi Emmanuel,
Regarding increasing the number of handler threads, I believe I have
done so.

UDPListener --> MainAdapterHandler --> KafkaProducer

This is the basic flow of my application. To summarize:
1. UDPListener: this class holds the NioDatagram objects, etc.
2. MainAdapterHandler: this class holds all of my handling logic and
   extends IoHandlerAdapter. Inside this class, I call KafkaProducer
   to submit messages.
Things to note are:
1. I set a thread pool executor as a parameter of NioDatagramAcceptor;
   the executor has a max size of the number of available processors
   (acceptor = new NioDatagramAcceptor(IOExecutor.get());)
2. I set a thread pool in the DefaultIoFilterChainBuilder
   (chain.addLast("threadPool",
   new ExecutorFilter(filterChainBuilderExecutorMaxPoolSize));)
3. Also, I made the MainAdapterHandler class threaded by creating a
   class that extends Thread (ThreadedHandlerAdapter), so if I set
   another executor, this line is executed:
   (executor.get().execute(new ThreadedHandlerAdapter(this, session,
   message));)
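For reference, the way I read those three points, the wiring would look roughly like the sketch below. The names IOExecutor, filterChainBuilderExecutorMaxPoolSize and MainAdapterHandler are taken from your snippets; everything else (pool sizes, structure) is conjecture, not your actual code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.mina.core.filterchain.DefaultIoFilterChainBuilder;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.transport.socket.nio.NioDatagramAcceptor;

class UdpServerWiring {
    static NioDatagramAcceptor build(int filterChainBuilderExecutorMaxPoolSize) {
        // 1. I/O executor passed to the acceptor (standing in for IOExecutor.get()).
        ExecutorService ioExecutor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        NioDatagramAcceptor acceptor = new NioDatagramAcceptor(ioExecutor);

        // 2. ExecutorFilter in the chain, so the handler runs off the I/O threads.
        DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();
        chain.addLast("threadPool",
                new ExecutorFilter(filterChainBuilderExecutorMaxPoolSize));

        // 3. acceptor.setHandler(new MainAdapterHandler(...)); // your handler here
        return acceptor;
    }
}
```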
Or am I missing something here?
Thank you
On 17/01/2024 17:00, Emmanuel Lécharny wrote:
Hi,
Hard to tell.
Have you profiled your application? Are you CPU bound? If not, you
should probably increase the default number of threads capable of
handling the incoming messages (it defaults to the number of cores + 1).
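To give an idea of the numbers involved, the default count and an explicitly larger pool could look like this. The factor of 4 is purely illustrative, not a recommendation; the right size depends on how long the handler blocks:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class HandlerThreads {
    // The default mentioned above: number of cores + 1.
    static int defaultCount() {
        return Runtime.getRuntime().availableProcessors() + 1;
    }

    // If the handler blocks on Kafka, an oversized pool keeps cores busy
    // while some threads wait on I/O. This pool would back an ExecutorFilter.
    static ExecutorService oversizedPool() {
        return Executors.newFixedThreadPool(defaultCount() * 4);
    }
}
```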
On 16/01/2024 09:04, Bobby R. Harsono wrote:
Hi all,
I'm using MINA 2.2.3 to build a UDP server.
Currently I have implemented one IoHandlerAdapter class, let's call it
MainHandlerAdapter; inside it I have plenty of fields holding custom
instances. Several of them are @Autowired on the parent class, and when
creating the UDP server I set them through the constructor.
At the end of processing, I use a KafkaProducer class to send events to
a Kafka topic. All was fine and well until I found leaked threads, which
I fixed by:
1. supplying a custom thread pool executor in acceptor =
   new NioDatagramAcceptor(IOExecutor.get());
2. removing @Async from my send() method in KafkaProducer - this way it
   will not spawn any more threads;
3. making MainHandlerAdapter threaded;
4. supplying an executor in the filter chain builder:
   DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();
   chain.addLast("threadPool", new
   ExecutorFilter(Executors.newCachedThreadPool()));
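Thread leaks like the one above usually come from executors that nobody shuts down. A minimal sketch of the shutdown ordering, assuming a setup like mine (the MINA-specific calls are only in comments, since the exact wiring is application-specific):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class CleanShutdown {
    // Stops both pools in order and waits for their threads to finish.
    // In real code, dispose the acceptor first: acceptor.dispose(true);
    // otherwise its sessions may still be submitting tasks to the pools.
    static boolean shutdownAll(ExecutorService filterExecutor,
                               ExecutorService ioExecutor) throws InterruptedException {
        filterExecutor.shutdown();
        ioExecutor.shutdown();
        return filterExecutor.awaitTermination(5, TimeUnit.SECONDS)
            && ioExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```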
The thread leak is gone, but there is another problem.
My UDP listener receives around 500k TPS. An instance deployed in K8s
can handle around 30-50k TPS, but when measuring per-core utilization,
one core only handles around 1-4k TPS. This is very low compared to
another component (not using MINA) that achieves a minimum of 8k TPS.
Are there any suggestions on how to improve my UDP server's performance?
Another problem is that when I increase the number of deployment pods,
there is no significant TPS improvement.
--
-----------
Bobby R. Harsono
Software Developer
Tricada Intronik - Bandung
--
Emmanuel Lécharny P. +33 (0)6 08 33 32 61
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]