We have experienced the same problem when batch messages are sent through the
HTTP interface to Kannel. This is probably due to an outgoing message queue
overflow in bearerbox, since flow control exists only at the SMSC <--> bearerbox
interface. Messages from the smsbox interface are pumped into bearerbox and
kept in the outgoing message queue until they are delivered to the SMSC. Now
that you are using stop-and-wait flow control between Kannel and the SMSC, the
outgoing message queue may be growing beyond the level that MAX_ALLOCATIONS
can support.
It would help if somebody could build flow control into the bearerbox <--> smsbox
interface. For the time being, Tim, you can get by with increasing
MAX_ALLOCATIONS (gwmem-check.c). The NATIVE malloc configure option will also
help.
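
For reference, here is a minimal sketch of what that change looks like, assuming
the limit is still a compile-time constant in gwlib/gwmem-check.c. The default
value and the exact surrounding code differ between CVS snapshots, so the value
and the helper function below are illustrative only, not the shipped source:

    /* Sketch of the allocation-count check in gwlib/gwmem-check.c.
     * The checking allocator tracks every live allocation in a fixed-size
     * table; once the count reaches MAX_ALLOCATIONS, the process panics with
     * "Too many concurrent allocations".  Raising the constant gives the
     * bearerbox outgoing queue more headroom for large batches. */
    #include "gwlib/gwlib.h"    /* for panic() */

    #define MAX_ALLOCATIONS (1024 * 1024)   /* placeholder; raise as needed */

    static void check_allocation_count(long num_allocations)
    {
        if (num_allocations >= MAX_ALLOCATIONS)
            panic(0, "Too many concurrent allocations");
    }

Alternatively, building with the native malloc wrapper skips this bookkeeping
entirely; the exact configure option name may vary between trees, so check
./configure --help (it is usually something like --with-malloc=native).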
-JS

----- Original Message -----
From: "Tim Hammonds" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, October 24, 2001 9:27 PM
Subject: PANIC: Too many concurrent allocations


> I am using a CVS version of Kannel as an SMS Gateway to a UCP/EMI SMSC.
>
> I was originally having throughput problems with Kannel V1.1.5, as the SMSC
> was ignoring messages sent if they received more than 5 a second.
>
> My earlier posting to this list resulted in the suggestion that I use the
> CVS version as it has flow control built in and the "stop-and-wait" protocol
> would sort things out. By setting flow-control to 1 in the config, Kannel
> now waits for the ACK from the SMSC and the maximum throughput is achieved.
> Great stuff.
>
> My next problem is that after submitting batches of (over 2000) messages to
> Kannel (using the http interface) I invariably get the message
>
> "PANIC: Too many concurrent allocations"
>
> in the log file followed by the Kannel processes dying and the remainder of
> the messages in the batch being lost.
>
> Is this a Kannel issue, Linux/Kernel issue, hardware issue, network issue or
> SMSC issue?
>
> Have any of you kind people got a suggestion as to what causes this problem
> and how I can overcome it, as I need to use Kannel in a live system!
>
> Regards,
>
> Tim.
>

