Just out of curiosity, does SQLite keep a record of how large the content
of a blob value is?  If I make a fictitious call like [ *select
length(BlobField) from A_Table where TheID=1* ], does SQLite actually count
each and every byte from start to finish, or does it consult an extra
bit/nibble/byte/word/integer/int64 internal record for that field?
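For what it's worth, the observable behavior is easy to probe from Python's
sqlite3 module.  This sketch just mirrors the fictitious table and column
names from my example above and checks that length() on a blob reports a
byte count (it says nothing about *how* SQLite computes it, which is the
part I'm asking about):

```python
# Sketch: probing what SQLite's length() returns for a blob.
# A_Table / BlobField / TheID mirror the fictitious query above;
# this is an illustration, not my real schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE A_Table (TheID INTEGER PRIMARY KEY, BlobField BLOB)")
con.execute("INSERT INTO A_Table VALUES (1, ?)",
            (b"\x00" * (10 * 1024 * 1024),))  # a 10 MiB blob

# For a blob, length() reports the size in bytes.
(n,) = con.execute(
    "SELECT length(BlobField) FROM A_Table WHERE TheID = 1"
).fetchone()
print(n)  # 10485760
```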

I know that in some languages, depending on the declared type of the
variable, getting the length of a string can be done by looking at the very
first few bytes of the string instead of counting characters from start to
finish.  For instance, back in the Delphi 7 days, I could use *
LengthOfString:=SubjectString[0];* or something to that effect.  I can't
recall whether that initial element was a byte, word, or integer (8-bit,
16-bit, 32-bit), but it never failed me. ;)
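(If I'm remembering the idea right, it was a length-prefixed layout: the
count lives in the first byte, so finding the length is a single lookup
rather than a scan.  A rough illustration in Python, with made-up helper
names:)

```python
# Sketch of a Delphi-style length-prefixed string: the length is stored
# in the first byte, so "how long is it?" is one lookup, not a scan.
# pack_shortstring / shortstring_len are hypothetical names.
import struct

def pack_shortstring(s: bytes) -> bytes:
    assert len(s) <= 255, "length must fit in the one-byte prefix"
    return struct.pack("B", len(s)) + s

def shortstring_len(packed: bytes) -> int:
    # Read the length prefix instead of counting characters.
    return packed[0]

buf = pack_shortstring(b"hello")
print(shortstring_len(buf))  # 5
```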

I'm in a self-imposed situation where I'm looking for the best method, for
the sake of speed, to design the table and the code that retrieves
information about the blob data.  Once the data is in the field, that
record will never change; it will only ever be deleted.  So I'm pondering
whether I should just use length(BlobField) or (as some would say, "best
practices") reference a numerical field that stores the length of the chunk
of data, assigned on the insert.

My thought is to go with the numerical field, but if there isn't any
difference speed-wise, I'll just stick with length().  I'm easy to sway,
though.
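For concreteness, the two designs I'm weighing look roughly like this; the
table and column names (Archive, ByteCount) are made up for the sketch:

```python
# Sketch comparing the two designs: length(BlobField) computed at query
# time vs. a ByteCount column written once at insert time.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Archive (
        TheID     INTEGER PRIMARY KEY,
        ByteCount INTEGER NOT NULL,   -- length recorded on insert
        BlobField BLOB NOT NULL
    )
""")
data = b"\xff" * 1_000_000
con.execute("INSERT INTO Archive VALUES (1, ?, ?)", (len(data), data))

# Both queries should agree; the open question is only which is faster.
a = con.execute("SELECT length(BlobField) FROM Archive WHERE TheID = 1").fetchone()[0]
b = con.execute("SELECT ByteCount FROM Archive WHERE TheID = 1").fetchone()[0]
print(a == b)  # True
```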

Considerations:
- This application will NOT be run on a limited OS, like Android or iOS.
- What I'm storing is in the 10s of megabytes per RECORD, so the size and
number of fields isn't exactly a serious concern.
- Writing the data to the table isn't a significant ordeal as far as speed
is concerned, but to verify I'm not writing multiple copies of the same
file to the database, I'm looking at the file's time stamp (accurate to the
minute) and the physical size of the file.  The application that writes
what I'm archiving, first of all, isn't mine, and second, saves data using
13 file names and overwrites them... hence my attempt at archiving.
- I have absolutely NO idea how many rows this particular table will
eventually contain.  I may be looking at maybe 30ish, or 300ish, or
3000ish.  (I'm kinda getting nauseous thinking about that 3000 mark and the
potential size of the database.  Not out of any doubt about the
capabilities of SQLite (I have total faith it can handle the sheer size),
but about the capacity of my SSD. ;))
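Incidentally, the duplicate check I described could also be pushed into the
schema itself with a unique index over the time stamp and size.  A sketch
with made-up column names (FileStamp, FileSize):

```python
# Sketch: letting SQLite reject duplicate files by (time stamp, size),
# matching the "same minute + same byte count" check described above.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE Archive (
        TheID     INTEGER PRIMARY KEY,
        FileStamp TEXT NOT NULL,      -- minute-resolution time stamp
        FileSize  INTEGER NOT NULL,
        BlobField BLOB NOT NULL,
        UNIQUE (FileStamp, FileSize)  -- one copy per stamp+size pair
    )
""")
row = ("2024-01-01 12:34", 3, b"abc")
sql = "INSERT OR IGNORE INTO Archive (FileStamp, FileSize, BlobField) VALUES (?, ?, ?)"
con.execute(sql, row)
con.execute(sql, row)  # silently skipped: same stamp and size

count = con.execute("SELECT count(*) FROM Archive").fetchone()[0]
print(count)  # 1
```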
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users