Never had to use it. If you work with roundtrip times in microseconds,
it becomes irrelevant.
I have been replacing request/reply situations with events, dataflows
and triggers, effectively removing the associated latencies.
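As an untested sketch of that pattern in Kamailio terms (the modules are real, but the evapi address, JSON shape and route names here are only illustrative): the request is suspended, an event is pushed to an external worker over the evapi socket, and the transaction is resumed when the worker answers.

```cfg
# Sketch: replace a blocking request/reply with an event plus resume.
loadmodule "tm.so"
loadmodule "tmx.so"      # exports t_continue()
loadmodule "jansson.so"  # JSON parsing of the worker's answer
loadmodule "evapi.so"
modparam("evapi", "bind_addr", "127.0.0.1:8448")

request_route {
    if (is_method("INVITE")) {
        # suspend the transaction and publish an event carrying the
        # coordinates needed to resume it later
        evapi_async_relay("{\"tindex\":$T(id_index),\"tlabel\":$T(id_label)}");
        exit;
    }
}

event_route[evapi:message-received] {
    # the external worker echoes tindex/tlabel in its answer
    jansson_get("tindex", "$evapi(msg)", "$var(tindex)");
    jansson_get("tlabel", "$evapi(msg)", "$var(tlabel)");
    t_continue("$var(tindex)", "$var(tlabel)", "RESUME");
}

route[RESUME] {
    # back in transaction context; forward as usual
    t_relay();
}
```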

Implementing ST/SH, or similar, services is interesting though. I can
see the value, but implemented as suggested previously.
There are probably others on the list who can suggest how processing
suspend/resume can be done in a good way.
It's suspend/resume you want; http_whatever is probably not.
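Since the question upthread was about async HTTP examples: the http_async_client module implements exactly this suspend/resume flow internally. A minimal, untested sketch (the URL and route names are placeholders):

```cfg
# Sketch: http_async_query() suspends the transaction and frees the SIP
# worker; the named route runs when the HTTP reply arrives.
loadmodule "tm.so"
loadmodule "http_async_client.so"

request_route {
    if (is_method("INVITE")) {
        http_async_query("http://stsh.example.com/attest", "HTTP_DONE");
        exit;  # the worker is free while the HTTP request is in flight
    }
}

route[HTTP_DONE] {
    # transaction resumed here; $http_ok/$http_rs/$http_rb describe the reply
    if ($http_ok && $http_rs == 200) {
        t_relay();
    } else {
        t_reply("503", "Attestation Service Unavailable");
    }
}
```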

Regards,
Stefan

On Sat, 2024-12-21 at 11:40 +0100, Sergio Charrua via sr-users wrote:
> Hello,
> 
> just wondering what would be the best approach to use
> http_async_client on a stateless redirect server, if that is even
> possible. Any examples to share?
> 
> Also, as a side note, the Kamailio Modules documentation/wiki states
> that HTTP_CLIENT is a "Sync and async HTTP client using CURL library",
> but there are no examples of how to use async HTTP with this
> module....
> 
> Atenciosamente / Kind Regards / Cordialement / Un saludo,
> 
> Sérgio Charrua 
>  
> On Fri, Dec 20, 2024 at 10:19 AM Stefan via sr-users
> <[email protected]> wrote:
> > 
> > Hi, I tried to comment yesterday, but it didn't go through.
> > 
> > <snip>
> > I would suggest using some message bus technology, working with
> > services accessed over that bus and data that lives in RAM. Choose
> > your favorite programming language and build. I typically work with
> > roundtrip times of microseconds. Then you can do pretty much
> > whatever you like, no problems.
> > 
> > //s
> > </snip>
> > 
> > Now that I see that the service accessed, ST/SH, is "HTTP across
> > town", nothing changes: whatever HTTP is to be processed should be
> > done at/as a service, accessed from Kamailio using your favorite
> > message bus technology, keeping the Kamailio processing async and
> > using an edge-triggered message bus.
> > 
> > Regards,
> > Stefan 
> > 
> > 
> > 
> > 
> > On Thu, 2024-12-19 at 20:34 -0500, Alexis Fidalgo via sr-users
> > wrote:
> > > The ‘aws hosted webservice’ mentioned in my email is a
> > > STIR/SHAKEN + call fraud scoring + eCNAM service; no caching is
> > > possible there.
> > > 
> > > Actually, the Diameter section is not cacheable either (it leads
> > > to false positives). I just tested it and mentioned it with the
> > > intention of illustrating the concept of ‘delete the wait’ (if
> > > possible).
> > > 
> > > Sometimes caching is not possible at all. And the price (as we
> > > pay it) is to assign resources and watch them sit occupied,
> > > waiting. Trust me, 9000 CAPS made me try a lot of paths.
> > > 
> > > 
> > > Sent from a mobile device
> > > 
> > > > On Dec 19, 2024, at 7:12 PM, Sergio Charrua
> > > > <[email protected]> wrote:
> > > > 
> > > > 
> > > > I understand the idea behind using cache facilities; however, in
> > > > this case, being a ST/SH attestation service, all calls need to
> > > > be attested, and the probability of multiple calls having
> > > > identical src and dst numbers is, IMHO, very low.
> > > > So I don't see how a caching system could be used here, hence
> > > > why HTTP is "a necessary evil"... unless I'm missing something
> > > > here....
> > > > 
> > > > Atenciosamente / Kind Regards / Cordialement / Un saludo,
> > > > 
> > > > Sérgio Charrua
> > > > 
> > > > On Thu, Dec 19, 2024 at 11:19 PM Alexis Fidalgo via sr-users
> > > > <[email protected]> wrote:
> > > > > To add some information, if useful: in our case there are 2
> > > > > main "actions" performed as processing when the HTTP service
> > > > > is called that take a lot of time (there are more, but they
> > > > > are negligible against these 2):
> > > > > 
> > > > > 1. A query to a Diameter DRA, which (for external or customer
> > > > > reasons) takes an average of 150ms (and in some other cities
> > > > > of the deployments can take more because of the delay in the
> > > > > links).
> > > > > 2. A query to a REST webservice in AWS; not as bad as point 1,
> > > > > but still bad (~70ms).
> > > > > 
> > > > > So, in the best of scenarios, we have ~225ms, maybe if the
> > > > > DRA load balances to another HSS we get ~180ms total call
> > > > > processing.
> > > > > 
> > > > > That is bad, really bad.
> > > > > 
> > > > > To prove the idea, and before any major changes in any part,
> > > > > I started keeping DRA responses in a Redis (we use Dragonfly).
> > > > > It's not consistent in terms of our call processing flow, but
> > > > > my idea was to “remove” the delay of the Diameter query: I run
> > > > > the query and keep the result in Redis; when a new request
> > > > > arrives for the same value, I use the cached value and send a
> > > > > new DRA query in another thread (to avoid blocking the current
> > > > > one) to update Redis.
> > > > > 
> > > > > With this, we removed the DRA query wait, so 220ms became
> > > > > 70/72ms.
> > > > > 
> > > > > Instantly, all the retransmissions disappeared and CPU and
> > > > > memory usage went down drastically.
> > > > > 
> > > > > That's why I mentioned ’wait time is the enemy’; HTTP is not
> > > > > (maybe not your best friend, but not the enemy). There's an
> > > > > analogy: you can try to improve the aerodynamics, the engine
> > > > > and whatever else in an F1 car, but if the driver is heavy ...
> > > > > 
> > > > > Hope it helps
> > > > > 
> > > > > 
> > > > > > On Dec 19, 2024, at 5:18 PM, Alex Balashov via sr-users
> > > > > > <[email protected]> wrote:
> > > > > > 
> > > > > > 
> > > > > >> On Dec 19, 2024, at 1:06 pm, Calvin E. via sr-users
> > > > > >> <[email protected]> wrote:
> > > > > >> 
> > > > > >> Consider scaling out instead of scaling up. Now that you
> > > > > >> know the apparent limit of a single node for your use case,
> > > > > >> put it behind a Kamailio dispatcher.
> > > > > > 
> > > > > > On the other hand, you might consider scaling up first,
> > > > > > since HTTP performs so badly that there is lots of
> > > > > > improvement to be squeezed out of optimising that even a
> > > > > > little.
> > > > > > 
> > > > > > You might think of scaling out as a form of premature
> > > > > > optimisation. :-)
> > > > > > 
> > > > > > -- Alex
> > > > > > 
> > > > > > -- 
> > > > > > Alex Balashov
> > > > > > Principal Consultant
> > > > > > Evariste Systems LLC
> > > > > > Web: https://evaristesys.com
> > > > > > Tel: +1-706-510-6800
> > > > > > 
> > > > > > __________________________________________________________
> > > > > > Kamailio - Users Mailing List - Non Commercial Discussions
> > > > > > -- [email protected]
> > > > > > To unsubscribe send an email to
> > > > > > [email protected]
> > > > > > Important: keep the mailing list in the recipients, do not
> > > > > > reply only to the sender!
> > > > > 
> > 
> > 
