2014-11-05 18:27 GMT+01:00 Alvaro Cornejo <cornejo.alv...@gmail.com>:
>
>> > - use sqlbox.
>>
>> What's that?
>
> It is an additional module that allows you to send/receive SMS through a
> database. Instead of calling smsbox to send an SMS, you just do an insert
> into a specific table and sqlbox takes care of everything.

I need to study that a little bit, but not now, as I am running short on time.

>> > - hack kannel code to handle dlr as needed.
>>
>> This is what I have done with a three-line patch: I replaced the
>> sql_remove with an sql_update.
>> In my opinion, an SQL table like the one for DLRs should only ever get
>> INSERTs and SELECTs, never UPDATEs and DELETEs, also for the sake of
>> scalability.
>
> I am not sure what you have done is right. As far as I understand, the
> table you define in the Kannel config holds Kannel's queue towards the
> SMSCs. If you disable deletes, won't your message "queue" grow
> infinitely? You might want to duplicate the inserts into a second table
> and update that new table instead of poking at Kannel's queue table.

The motivation for all of this (little) work is that I wanted to give
the SQL DLR storage engine more power: why just keep track of DLRs
instead of building a complete history that can be mined later?
Yes, I know: I could have used either the dlr-url machinery or the
sqlbox to plug into the DLR storage mechanism.
But why waste resources when every needed piece of information was
already there, only to be discarded?
The SQL DLR storage engine insists on deleting rows as soon as an SMS
reaches its final status.
That seemed quite wrong to me, because I want to use that database
*also* to extract statistics about the traffic.
From the storage and computing power perspective this looked like no
problem at all, while writing new software (for the dlr-url) or
spending more time now (on the sqlbox) did not seem viable at this
stage.

It turned out to be a little more complex: not as trivial as I claimed
in my previous message. And it can lead to ever-growing tables.
Sure, I know. But please read along.

Two steps: one on the database (PostgreSQL in my case) and one on the
application.

First of all I raised the log level of my RDBMS and looked at the
actual SQL statements being run.
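
In my case that simply meant something like this in postgresql.conf
(followed by a reload); other logging settings would work just as well:

log_statement = 'all'
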
Basically, I turned the DLR table from a status table (with inserts,
updates and deletes) into a history table (with inserts only) by adding
a timestamp column (with microsecond precision).
So, yes, the table grows indefinitely; more about this later.
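
To give an idea, the underlying history table now looks more or less
like this. The column names are the default Kannel dlr-pgsql field
names; the table name dlr_history, the created_at column and the column
sizes are just the ones I use here for illustration:

CREATE TABLE dlr_history (
    smsc        VARCHAR(40),
    ts          VARCHAR(40),
    destination VARCHAR(40),
    source      VARCHAR(40),
    service     VARCHAR(40),
    url         VARCHAR(255),
    mask        INTEGER,
    status      INTEGER,
    boxc        VARCHAR(40),
    -- the extra column that turns the status table into a history table
    created_at  TIMESTAMP DEFAULT clock_timestamp()
);
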
Then I created a view on top of it so as to trick the application about
the OID. That OID is built at runtime by concatenating dlr.smsc, dlr.ts,
dlr.destination, dlr.source and the timestamp (down to the msec). The
application never uses that OID directly, only to look up the right row
in the dlr table:

DELETE FROM dlr WHERE oid = (SELECT oid FROM dlr WHERE smsc='mysmsc'
AND ts='21' LIMIT 1);
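
For the record, a view along these lines (again with my assumed names)
is enough to provide the fake OID the application expects:

CREATE VIEW dlr AS
    SELECT smsc || ts || destination || source
           || to_char(created_at, 'YYYYMMDDHH24MISSMS') AS oid,
           smsc, ts, destination, source, service, url, mask, status, boxc
      FROM dlr_history;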

Finally, a simple couple of triggers to "intercept" inserts and updates.
I could also intercept deletes, but that is useless, as explained below.
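
A sketch of those triggers, assuming PostgreSQL 9.1+ (for INSTEAD OF
triggers on views) and the names used above; both operations simply
become a new row in the history table:

CREATE FUNCTION dlr_append() RETURNS trigger AS $$
BEGIN
    -- A fresh submission (INSERT) and a status change (UPDATE) both
    -- end up as a brand new row in the history table.
    INSERT INTO dlr_history (smsc, ts, destination, source,
                             service, url, mask, status, boxc)
    VALUES (NEW.smsc, NEW.ts, NEW.destination, NEW.source,
            NEW.service, NEW.url, NEW.mask, NEW.status, NEW.boxc);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER dlr_insert INSTEAD OF INSERT ON dlr
    FOR EACH ROW EXECUTE PROCEDURE dlr_append();

CREATE TRIGGER dlr_update INSTEAD OF UPDATE ON dlr
    FOR EACH ROW EXECUTE PROCEDURE dlr_append();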

In gw/dlr.c, in the dlr_find() function, I changed the code so that it
"updates the view" with the new status instead of deleting the original
submission row. Only a few lines of code need to change.
I did this because the deletion code does not propagate the new status,
while the update code does.
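
So instead of the DELETE shown above, the storage layer ends up issuing
something along these lines (the exact statement depends on the Kannel
version, and the status value here is just an example), which the
INSTEAD OF UPDATE trigger then turns into a fresh history row:

UPDATE dlr SET status = 1
WHERE oid = (SELECT oid FROM dlr WHERE smsc='mysmsc'
AND ts='21' LIMIT 1);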

Et voilà: my history table receives every single update to an SMS's
history, thus allowing me to record it all.
From time to time (weekly? monthly? yearly?) I can clean that table up
in one of the several ways PostgreSQL provides.
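For instance something as simple as this (with my assumed created_at
column), possibly followed by a VACUUM, or table partitioning for
bigger setups:

DELETE FROM dlr_history WHERE created_at < now() - interval '3 months';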
My solution can be made better and more robust (I am working on it).
