As a curiosity, if one were to create an EVAPI client to handle these requests, 
wouldn't the transaction still need to be stored in shared memory with somewhat 
similar memory usage?

I'm not quite clear why you keep stating that it's not going to be free.  I 
never claimed it to be.  In fact, I've consistently stated that the cost is in 
shared memory, and I don't see any way the requests could be processed without 
that state sitting in shared memory somewhere UNLESS the response time can be 
addressed.
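
(For illustration, a minimal sketch of the kind of EVAPI setup I have in mind -- 
the module parameters, payload format and event handling here are made up, and 
this is untested -- would still park the transaction in shared memory via tm:)

loadmodule "tm.so"
loadmodule "evapi.so"
loadmodule "xlog.so"

modparam("evapi", "workers", 1)
modparam("evapi", "bind_addr", "127.0.0.1:8448")

request_route {
    # suspend the transaction (held in shm by the tm module) and
    # push the event data out to whatever EVAPI client is connected
    evapi_async_relay("method=$rm;ruri=$ru");
    exit;
}

event_route[evapi:message-received] {
    # the client's answer lands here; the suspended transaction
    # still occupies shared memory until it is resumed or times out
    xlog("L_INFO", "evapi reply: $evapi(msg)\n");
}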


Kaufman
Senior Voice Engineer
E: [email protected]


________________________________
From: Alex Balashov via sr-users <[email protected]>
Sent: Monday, December 23, 2024 3:02 PM
To: [email protected] <[email protected]>
Cc: Alex Balashov <[email protected]>
Subject: [SR-Users] Re: Kamailio not receiving packets on high CPS


On Dec 23, 2024, at 3:14 pm, Ben Kaufman <[email protected]> wrote:

The difference in performance is substantial.  Your indication that the 
performance of the two modules is near equal is incorrect.  Both in theory and 
in practice it is the better of the two options.

If you confine the scope of your evaluation to Kamailio itself, then of course 
it's substantial; you've deputised the event mux/polling workload into kernel 
space. While that makes the cost invisible to Kamailio, it doesn't obviate the 
clock cycles. I'm talking about formal performance: the total cost of servicing 
the request, wherever it is incurred.

This is like saying that Node can handle a tremendous number of requests with a 
single process. I suppose it can, but only because it's able to farm out the 
work of monitoring sockets for data to the OS, and erase it, if you like, from 
the ledger of userspace costs. Those costs still go somewhere, and that 
somewhere is constrained by available resources and dimensioning.

Assuming you've used this repo and assets to make your case:

https://github.com/whosgonna/kamailio_http_async

this is not a fair or reasonable comparison. In one case, you're using an 
external polling loop, and in the other, you're blocking your worker processes 
by definition. I doubt you could get 3 CPS through that config if the async 
shvar is set to 0.
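
(To be concrete about the two paths being compared -- this is a sketch rather 
than your repo's actual config, and the URL, route name and shvar are 
illustrative:)

loadmodule "tm.so"
loadmodule "pv.so"
loadmodule "http_client.so"
loadmodule "http_async_client.so"

request_route {
    if ($shv(async) == 1) {
        # non-blocking path: the transaction is suspended (in shm) and
        # the HTTP query is handed off to the module's own worker loop
        http_async_query("http://127.0.0.1:8000/lookup", "HTTP_DONE");
        exit;
    }
    # blocking path: this SIP worker sits idle for the entire HTTP
    # round trip before it can pick up another message
    http_client_query("http://127.0.0.1:8000/lookup", "$var(result)");
    t_relay();
}

route[HTTP_DONE] {
    # execution resumes here once the async response (or a timeout) arrives
    t_relay();
}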

Furthermore, the idea of an HTTP service that responds like a metronome in 
exactly 1 second, over local sockets and without any of the overhead of 
connection setup over a real-world network, is so contrived as to be 
tautological. You've neutered the synchronous approach to the maximum possible 
extent, while testing the asynchronous one in highly idealised conditions. In 
fact, I tried your repo, followed your instructions on a hex-core server with 
16 GB of RAM (2 GB of SHM allocated to Kamailio), and was able to get about 
1600 CPS -- more than twice the OP's ask -- before seeing any retransmissions. 
The comparison amounts to saying that being rich, young and healthy is better 
than being old, ill and poor. I cannot agree more.

If you empower the synchronous http_client approach with a comparable degree of 
parallelism, i.e. a large pool of worker processes with minimal package memory, 
you'll get comparable throughput. I agree that there are memory limits around 
that in Kamailio's concurrency model, and that was never in dispute. But you're 
going to pay for it somewhere either way, as the process listing below and the 
sketch after it show:

Mem: 16175052K used, 217632K free, 266764K shrd, 219644K buff, 10489916K cached
CPU:  22% usr  15% sys   0% nic  58% idle   0% io   0% irq   3% sirq
Load average: 0.46 0.54 0.45 7/555 32
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
   14     1 root     R    2076m  12%   3  14% kamailio -dDDE -de -n 2 -m 2048
   10     1 root     R    2073m  12%   1   3% kamailio -dDDE -de -n 2 -m 2048
    9     1 root     S    2073m  12%   3   3% kamailio -dDDE -de -n 2 -m 2048
   12     1 root     S    2073m  12%   0   0% kamailio -dDDE -de -n 2 -m 2048
    1     0 root     S    2073m  12%   4   0% kamailio -dDDE -de -n 2 -m 2048
   17     1 root     S    2073m  12%   4   0% kamailio -dDDE -de -n 2 -m 2048
   13     1 root     S    2073m  12%   1   0% kamailio -dDDE -de -n 2 -m 2048
   16     1 root     S    2073m  12%   3   0% kamailio -dDDE -de -n 2 -m 2048
   11     1 root     S    2073m  12%   5   0% kamailio -dDDE -de -n 2 -m 2048
    7     1 root     S    2073m  12%   5   0% kamailio -dDDE -de -n 2 -m 2048
    8     1 root     S    2073m  12%   2   0% kamailio -dDDE -de -n 2 -m 2048
   15     1 root     S    2073m  12%   4   0% kamailio -dDDE -de -n 2 -m 2048
   18     1 root     S    2073m  12%   2   0% kamailio -dDDE -de -n 2 -m 2048
   26     0 root     S     1696   0%   1   0% ash
   32    26 root     R     1624   0%   0   0% top -d 1
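
(And "a comparable degree of parallelism" on the synchronous side would mean 
dimensioning roughly like this -- illustrative values only, and if your version 
doesn't accept these in kamailio.cfg, the equivalent -n/-m/-M command-line 
flags shown above do the same thing:)

children=64          # large pool of blocking SIP workers ("-n 64")
shm_mem_size=2048    # shared memory in MB ("-m 2048"); still holds every in-flight transaction
pkg_mem_size=8       # per-process package memory in MB ("-M 8"), kept small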

async is not magic. This is why we say it moves the problem around.

-- Alex

--
Alex Balashov
Principal Consultant
Evariste Systems LLC
Web: https://evaristesys.com
Tel: +1-706-510-6800

__________________________________________________________
Kamailio - Users Mailing List - Non Commercial Discussions -- 
[email protected]
To unsubscribe send an email to [email protected]
Important: keep the mailing list in the recipients, do not reply only to the 
sender!
