Re: [sqlite] Query help

2013-07-27 Thread Joseph L. Casale
> Will the SQL 1969 "EXCEPT" compound operator not work for some reason?

Worked perfectly; my SQL is weak, as I didn't even know of this one...
Thanks!
jlc


Re: [sqlite] Query help

2013-07-27 Thread Keith Medcalf
Will the SQL 1969 "EXCEPT" compound operator not work for some reason?
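
Something along these lines should do it (an untested sketch, reusing the two
SELECTs from your message):

  SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a
  EXCEPT
  SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_b;

EXCEPT keeps only the rows of the first SELECT that do not also appear in the
second, which is exactly "in table_a and not in table_b".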

> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Joseph L. Casale
> Sent: Saturday, 27 July, 2013 15:24
> To: 'sqlite-users@sqlite.org'
> Subject: [sqlite] Query help
> 
> Hey guys,
> I am trying to left join the results of two selects that both look
> exactly like this:
> 
>   SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM
> table_a
> 
> Both tables have exactly the same data type and format. I need to reformat
> each table's results, then join and return only what is in table_a and not
> in table_b.
> 
> Any guidance on how one might do this in sqlite?
> Thanks!
> jlc





[sqlite] Query help

2013-07-27 Thread Joseph L. Casale
Hey guys,
I am trying to left join the results of two selects that both look exactly like 
this:

  SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a

Both tables have exactly the same data type and format. I need to reformat each
table's results, then join and return only what is in table_a and not in table_b.

Any guidance on how one might do this in sqlite?
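
The left-join shape I have in mind is roughly this (untested, so I'm not sure
it's even the right approach):

  SELECT a.name
  FROM (SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a) AS a
  LEFT JOIN (SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_b) AS b
         ON b.name = a.name
  WHERE b.name IS NULL;
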
Thanks!
jlc


Re: [sqlite] Length of Blobs

2013-07-27 Thread RSmith

On 2013/07/27 16:35, Stephen Chrzanowski wrote:

...//verification purposes.  The file time stamp "should" be enough, but there
is that one time of the year when 2am comes twice in a day in most parts of
the world, so I'm covering that aspect by also looking at file sizes as part
of unique identification.  **As I write this, I just thought about reading a
chunk of the file on the drive and running it against an MD5 or CRC64/128
checksum algorithm, and storing that result in the database instead of
relying on a file size... hmmm.  That'd be a balance between speed and
accuracy.  I don't want to chew too much IO time on lower-end machines.
Neither case gives a 100% accurate assessment of a truly "unique" file, but I
wonder which would give the better rate of accuracy?  Maybe integrating all
three?  .. Sorry.. rambling.. tired... heh


It's very much possible for an MD5 hash to return the same result for two different files, more so than hitting the exact same
timestamp twice on timezone day, but less likely than a CRC doing the same, which in turn is less likely than a non-unique file size.
A timestamp+MD5 hash is the way to go to ensure uniqueness, or at least to render its recurrence likelihood to the annals of
oblivion. In this regard, you only need to hash the first n bytes of the file to save FLOPS, where n need only be as big as
experimentally determined to avoid hitting the same file header content more than once (the file header is likely to be quite unique
from save to save) - probably 4 KB to 8 KB would suffice.
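
For instance, the archive table itself could enforce that timestamp+hash combination (purely illustrative table and column names -
the hash over the first few KB would be computed in your application before the INSERT):

  CREATE TABLE save_archive (
      id        INTEGER PRIMARY KEY,
      saved_at  TEXT NOT NULL,        -- file timestamp, minute accuracy
      head_hash TEXT NOT NULL,        -- hash of the first 4-8 KB of the file
      data      BLOB NOT NULL,
      UNIQUE (saved_at, head_hash)    -- a second copy of the same save is rejected
  );

Any attempt to insert the same save twice then fails with a constraint error instead of silently duplicating the blob.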




As for saving to the SSD.. I hear ya...  I love the speed (Raid-0 dual
250gig SATA3) but hate that they'll die due to running out of good memory
blocks, but, at least not as 'randomly' as a platter drive, and ooohhh so
much quieter. ;)


Well yes, but the IO systems employed by these drives are smart; data does not get written if it doesn't change the underlying memory
state, etc. Even in strenuous use you should get a good 5 years out of a modern SSD, and when it dies, it will be gradual and with a
lot of warning. Beware those "security" utilities that promise to really wipe data by overwriting it several times and so on; they
will eat through a fresh SSD in a few months. Other than that, an SSD has some longevity, and my comment about NOT logging to it has
more to do with the normal space restrictions, and of course it is unnecessary to visit abuse on a drive known not to enjoy it.





Re: [sqlite] Length of Blobs

2013-07-27 Thread Stephen Chrzanowski
Good read.

So, SQLite keeps a record of the length of each field and answers a length()
call by returning a value stored at a 'black box' level, similar to the
Delphi/Pascal way of length-prefixed (rather than null-terminated) strings.
Perfect.  Also, I noted your comment that just using length() versus adding
extra code to manage an extra field shouldn't show any difference, so I'll
leave it at that.

The purpose of my application is an archiving setup for a game I play.
SQLite is perfect for this since I will hopefully be converting this app
to multi-platform for at least Linux, and maybe Mac if the game plays on
it.  Right now the code is Win32, written under D2010, as I'm
comfortable with the IDE and components, but once I get to the point where
the code functions, I'll be able to at least take a second stab at getting
something working in Lazarus/Free Pascal.

As mentioned, this game stores only 13 files (10 save slots for the user, 3
for "automatic" backups if the option is set).  The only indication of
where I'm at in the game universe at that time is the date/time I saved,
the amount of money, and what system I'm in, which is really pointless when
you hang around the same area a lot (home base advantage and all), or come
back to the game after a long period of time.  The game gives no option to
enter a description of the save, or a sign as to why I saved at that point,
except via a "game time dialog" hint that I'll get my rear end handed to me
on a silver platter by the enemies, or the better-than-even chance that
I'll run into a gate accidentally...

With this app I'm writing, it'll basically eliminate the 13-slot problem and
let me EASILY step back a few saves without leaving the game environment.
What I'm calling "rolling saves" will allocate a user-defined set of slots
that lets me go back a few "saves" while still in game.  As an example, if I
assign save slots 5-10 to be used as rolling saves, slot 5 would be the most
recent save, 6 would be the one previous, 7 would be the one previous to
that, and so on.  Of course, the direction of this sort is user-defined as
well.

So the question of how SQLite handles length() for a blob field comes into
play, as I didn't want it to count, byte for byte, potentially gigabytes of
data just to verify two key aspects of a physical file.  If it did count
bytes, the extra field obviously would have been the route to go, as I'm
already reading that data from the drive for verification purposes.  The
file time stamp "should" be enough, but there is that one time of the year
when 2am comes twice in a day in most parts of the world, so I'm covering
that aspect by also looking at file sizes as part of unique identification.
**As I write this, I just thought about reading a chunk of the file on the
drive and running it against an MD5 or CRC64/128 checksum algorithm, and
storing that result in the database instead of relying on a file size...
hmmm.  That'd be a balance between speed and accuracy.  I don't want to chew
too much IO time on lower-end machines.  Neither case gives a 100% accurate
assessment of a truly "unique" file, but I wonder which would give the
better rate of accuracy?  Maybe integrating all three?  .. Sorry.. rambling..
tired... heh
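
Coming back to the rolling saves for a second: with everything archived in one
table, the rolling slots are really just the N most recent rows, something
like this (purely hypothetical table and column names):

  SELECT id, saved_at FROM saves ORDER BY saved_at DESC LIMIT 6;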

The question about how frequently data gets added depends on the factors of
my remembering to save, my actually playing, and what options I have in
this tool to monitor.  Average size looks to be 13meg, but I've only seen
saves on my system, and not others.

As for saving to the SSD.. I hear ya...  I love the speed (Raid-0 dual
250gig SATA3) but hate that they'll die due to running out of good memory
blocks, but, at least not as 'randomly' as a platter drive, and ooohhh so
much quieter. ;)  However, while developing this app on another machine
with just a platter drive via eSATA, it took about 5 seconds to dump the 128meg
of data into the database.  I'm at home as I write this email, so I'll look
forward to seeing how well the tool performs while developing.


Re: [sqlite] question about proper use of prepare_v2()

2013-07-27 Thread Maxim Khitrov
On Sat, Jul 27, 2013 at 2:04 AM, Stephan Beal  wrote:
> Hi, all,
>
> i'm porting some code from one sqlite3-using project (Fossil SCM) to
> another sqlite3-using project (a prototype for Fossil v2) and i came across
> this code snippet:
>
>   while( rc==SQLITE_OK && z[0] ){
> pStmt = 0; // <-- type=(sqlite3_stmt*)
> rc = sqlite3_prepare_v2(g.db, z, -1, &pStmt, &zEnd);
> if( rc!=SQLITE_OK ) break;
> if( pStmt ){
>...
> }
> z = zEnd;
>   }
>
> My question is: is the if(pStmt) block there a case of too much error
> handling, or can it really happen that prepare() returns OK but also leaves
> pStmt as NULL (e.g. for an empty SQL statement)???

pStmt will be NULL if zSql contains only a comment or whitespace. There
is a comment about this in sqlite3_exec (legacy.c).
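
For example, a script that consists of nothing but comments or whitespace,
like the two lines below, still prepares "successfully": sqlite3_prepare_v2()
returns SQLITE_OK but leaves *ppStmt set to NULL.

  -- nothing but a comment in this script,
  -- so there is no statement for prepare() to produce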


Re: [sqlite] Length of Blobs

2013-07-27 Thread RSmith

Hi Stephen,

On 2013/07/27 13:22, Stephen Chrzanowski wrote:

Just out of curiosity, does SQLite keep a record of how large the content
of a blob value is?  If I make a fictitious call like [ *select
length(BlobField) from A_Table where TheID=1* ] does SQLite actually count
each and every byte from start to finish, or, does it reference an extra
bit/nibble/byte/word/integer/int64 internal record on that field?

SQLite has type identifiers and length specifiers for every datatype - i.e. it stores the length of variable-length types such as
text and blob. You can even determine exactly how many bytes a data field takes up from its type and length specifiers and its data
section. There is no byte-counting.

http://www.sqlite.org/different.html#flex
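
So either of these should be cheap and should perform about the same
(BlobLength here is a hypothetical extra column you would have to maintain
yourself on every insert):

  SELECT length(BlobField) FROM A_Table WHERE TheID = 1;  -- size read from the record header
  SELECT BlobLength        FROM A_Table WHERE TheID = 1;  -- manually maintained copy of that size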


I know in some languages, depending on the declaration type of the
variable, getting the length of a string can be done by looking at the very
first set of bytes of the string instead of counting characters from start
to finish.  For instance, back in the Delphi 7 days, I could use *
LengthOfString:=SubjectString[0];* or something to that effect.  I can't
recall if that initial character was a byte, word, or integer (8-bit,
16-bit, 32-bit) but it never failed me. ;)

In the first Pascal environments a string had a maximum length of 255 characters (256 bytes if you include the 1-byte length
specifier). From a programming point of view this worked a lot better for keeping shorter strings on the stack than the C habit of
the time of setting memory aside for every string and using direct memory pointers (where an extra zero character denotes the end of
the string, but the length is not limited to 255), or the more stack-compatible character arrays.  However, this fixed a problem that
was soon to be no longer a problem (memory usage and the speed of requesting memory from the OS, etc.), so very soon those kinds of
strings were relegated to the "ShortString" type in Delphi and more conventional memory strings were used (by Delphi 1, in fact),
along with another brilliant invention called reference counting. Since then all string types have really just been pointers to byte
lists in memory ending in a #00, very much compatible with the C model (and others).

The fact that you could still do that in Delphi 7 is only because of backward compatibility maintained by the folks at Borland, and
it had no significance to D7 at all, or indeed to any newer RAD Studio incarnation by Embarcadero.  Interestingly, this backward
compatibility did provide some advantage, in that a memory table was kept (as is still the case today) of strings and references to
multiple instances, so getting the length of a string was simply a reference lookup instead of byte-counting until #00 - which was
the normal C method back then. You can still declare a ShortString (or AnsiString) in Delphi, but they are no more efficient than a
long string these days; also, normal strings can now be almost 4 GB long (32-bit counter), or 2 GB for MBCS or WideString variants
allowing UTF-8 and other formats - none of which worked (or works) with shortstrings, which are ANSI-only.



I'm in a self-imposed situation where I'm kind of looking at the "best
method for the sake of speed" on how to most efficiently implement the
design of the table and the code to get information about the blob data.
Once the data is in the field, that record will never change, only be
deleted.  So I'm pondering whether I should just use length(BlobField)
or (as some would say "best practices") reference a numerical field that
stores the length of the chunk of data that is assigned on the insert.

Length() should be as efficient as any lookup system you could implement.


Considerations:
- This application will NOT be run on a limited OS, like Android or iOS.
- What I'm storing is in the 10s of megabytes per RECORD, so the size and
number of fields isn't exactly a serious concern.
- Writing the data to the table isn't a significant ordeal as far as speed
is concerned, but to validate that I'm not writing multiple copies of the same
file to the database, I'm looking at the file's time stamp (accuracy to the
minute) and the physical size of the file.  The application that writes
what I'm archiving, first of all, isn't mine, and second, it saves data using
13 file names, and overwrites the data... hence my attempt at archiving.
- I have absolutely NO idea how many rows this particular table is going to
contain eventually.  I may be looking at maybe 30ish, or 300ish, or
3000ish.  (I'm kinda getting nauseous thinking about that 3000 mark and the
potential size of the database - not because of the capabilities of SQLite
(I have total faith it can handle the sheer size), but because of the
capacity of my SSD. ;))

Well, rest assured, we all use SQLite databases with several million records and many gigabytes of data. The size should be easy
to guess at - what is the average size of the files you need to archive? How frequently does new data get added?

Re: [sqlite] How to create connection life-time object?

2013-07-27 Thread Dušan Paulovič
What I need is 'install and don't care', because I use many C++ classes
as lifetime objects.
I think that my solution works well now, but if there were a way to
avoid using the SQL parser, it would be better.


2013/7/27 Max Vlasov 

> On Fri, Jul 26, 2013 at 9:56 PM, Dušan Paulovič 
> wrote:
>
> > Thanks for suggestion, but:
> > 1.) one object is not linked to one connection
> >
>
> If you have your own memory management, it's not a problem, since the scheme
> I described is basically just a storage of pointers. To free or not to free
> (if the pointer points to a disposable entity) is your decision, at your
> chosen time. But if you want automatic reference counting (so that when a
> pointer is no longer referenced, it is automatically deallocated), I
> agree, the proposal is not good. Probably using your own global structure
> not related to sqlite is less painful.
>
>
> > 2.) object is not destroyed together with connection
> >
>
> The same applies; you're free to use the table just as pointer storage.
>
> Max


[sqlite] Length of Blobs

2013-07-27 Thread Stephen Chrzanowski
Just out of curiosity, does SQLite keep a record of how large the content
of a blob value is?  If I make a fictitious call like [ *select
length(BlobField) from A_Table where TheID=1* ] does SQLite actually count
each and every byte from start to finish, or, does it reference an extra
bit/nibble/byte/word/integer/int64 internal record on that field?

I know in some languages, depending on the declaration type of the
variable, getting the length of a string can be done by looking at the very
first set of bytes of the string instead of counting characters from start
to finish.  For instance, back in the Delphi 7 days, I could use *
LengthOfString:=SubjectString[0];* or something to that effect.  I can't
recall if that initial character was a byte, word, or integer (8-bit,
16-bit, 32-bit) but it never failed me. ;)

I'm in a self-imposed situation where I'm kind of looking at the "best
method for the sake of speed" on how to most efficiently implement the
design of the table and the code to get information about the blob data.
Once the data is in the field, that record will never change, only be
deleted.  So I'm pondering whether I should just use length(BlobField)
or (as some would say "best practices") reference a numerical field that
stores the length of the chunk of data that is assigned on the insert.

My thought is to go for the numerical field, but if there isn't any
difference speed-wise, I'll just stick with length(), though I'm easy to
sway.

Considerations:
- This application will NOT be run on a limited OS, like Android or iOS.
- What I'm storing is in the 10s of megabytes per RECORD, so the size and
number of fields isn't exactly a serious concern.
- Writing the data to the table isn't a significant ordeal as far as speed
is concerned, but to validate that I'm not writing multiple copies of the same
file to the database, I'm looking at the file's time stamp (accuracy to the
minute) and the physical size of the file.  The application that writes
what I'm archiving, first of all, isn't mine, and second, it saves data using
13 file names, and overwrites the data... hence my attempt at archiving.
- I have absolutely NO idea how many rows this particular table is going to
contain eventually.  I may be looking at maybe 30ish, or 300ish, or
3000ish.  (I'm kinda getting nauseous thinking about that 3000 mark and the
potential size of the database - not because of the capabilities of SQLite
(I have total faith it can handle the sheer size), but because of the
capacity of my SSD. ;))


[sqlite] Hi, I have a question about how to balance the memory which SQLite uses and the write speed which SQLite supports; if you have any advice, please tell me, thank you

2013-07-27 Thread wangqiong 00215988


Hi, I have a question about how to balance the memory which SQLite uses and
the write speed which SQLite supports; if you have any advice, please tell me,
thank you :)

The details:
As far as I know, SQLite supports an all-memory DB and an all-disk DB. When I
use the all-memory DB, SQLite can deliver the fastest write speed, but at the
same time it needs more memory than my program can afford.

I need to run tests to find the relationship between memory usage and write speed.

Can you tell me which parameters I can use to set how much memory SQLite may
use? Or which document describes how to configure SQLite's memory usage?
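
For example, I found these settings, but I am not sure they are the right ones
to adjust (the values below are only examples):

  PRAGMA cache_size = -64000;   -- page cache of roughly 64 MB (a negative value means size in KiB)
  PRAGMA journal_mode = WAL;    -- write-ahead logging, usually faster for writes
  PRAGMA synchronous = NORMAL;  -- fewer syncs to disk, at some durability cost
  PRAGMA temp_store = MEMORY;   -- keep temporary tables and indices in RAM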


Best wishes

Winnie
