Rafi Cohen wrote:
Hi Dennis, the first approach is clear now and I may proceed with it.
Good to hear.
The second approach is interesting and challenging, but leaves some
issues to clarify; if I find solutions to those issues, I may well
adopt it.
1. Format of the csv file: I have no idea how this csv file is created
or which database engine is used. I do know that I receive such a file
once or twice a day.
While reading the file into the list of structures, I also validate the
consistency of the data in the file.
Before the data lines (starting with "d,") there is a header line
"hdr,". This line contains the sequential number of the file, the number
of records in the file, and a checksum on a specific field of the data
(say column 2).
As I knew nothing until now about .import, I wonder: is there a way to
include those checks in the second approach?
Firstly, this does not sound like a standard CSV format file (see http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm ).

Secondly, your validation checks cannot be added to the normal csv .import command in the sqlite shell, but they could be added to a customized copy of the import routine in your own code. It might also be possible to convert some of the validation tests to SQL check constraints on the table columns.
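
For example, here is a minimal sketch of such constraints on the imported table. The column names match the statements later in this message, but the formats checked are only illustrative, since the actual file layout isn't known:

   -- hypothetical declaration of the import target table
   create table imported (
     id   integer primary key,
     date text check (date like '____-__-__'),
     time text check (time like '__:__:__'),
     col2 integer check (col2 >= 0)   -- the checksummed column
   );

With these in place, any imported row that violates a constraint would be rejected by the underlying insert. The whole-file checks (sequential file number, record count, header checksum) still need to be done in your own code, since they span rows rather than columns.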
2. The deletion of the future dates is incorrect. On the contrary, in
the first approach I re-examine the remaining structures every half
minute until one of them becomes past-dated; then I process it just like
any other past-date structure and free it.
When a new .csv file arrives, I add the new list of structures to the
remaining ones and continue to examine them every half minute.
I could do the same with the sqlite3_exec statement in the second
approach, but I need another approach for the future records.
I hope you have satisfying answers to these two issues; then I'll be
glad to proceed with the second approach.
In that case you could split the imported data into two tables using the date test, and then process only the table that contains the old records. The future records would remain in the second table, which would probably be the same one you import your new csv file records into. The processing of the old records would proceed as before.

   -- assumes table imported contains the imported records
   -- select records to process based on date and time
   create table process as
   select * from imported where (date || ' ' || time) <= datetime('now');
   delete from imported where id in (select id from process);

You could also skip the concatenation by splitting the date and time test, replacing the condition above with

   date < date('now') or (date = date('now') and time <= time('now'))

It's probably a case of six of one or half a dozen of the other.
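
Either way, the same pair of statements can simply be re-run on your half-minute cycle, so rows that have since become past-due move from the imported table into the process table. A sketch, reusing the names above (assuming the process table already exists from the first pass):

   -- each half-minute pass: move newly past-due rows
   insert into process
   select * from imported where (date || ' ' || time) <= datetime('now');
   delete from imported where id in (select id from process);
   -- handle the rows in process here, then empty it for the next pass
   delete from process;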

There really are a lot of different possibilities for processing the data once you have them in a table in the database.

HTH
Dennis Cote
