Re: [sqlite] Re : Patch - Column names

2006-12-08 Thread Sandeep Suresh



Nikki Locke wrote:

Joe Wilson wrote:
  
Looks reasonable. Consider filing a ticket with your patch. 
 
A few "make test" tests that rely on the old short name behavior 
for SELECT * with joins will report failure. Whether the old way is 
necessarily correct in some tests is a matter of debate. 



As will all production code which uses column names on selects with joins! 
This patch had better not be included in the release code - perhaps it

could be enabled with a PRAGMA or something?

  


I can see, how this would break, when short_column_names flag is turned 
on ( because in that case, no matter what the column names shouldn't be 
prefixed with table names ) . This fix is not the right one, I guess !


- Sandeep.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: Unicode Help

2006-12-08 Thread Trevor Talbot

On 12/8/06, Kees Nuyt <[EMAIL PROTECTED]> wrote:

On Fri, 8 Dec 2006 15:54:45 +, you wrote:

> How do you set Notepad to Ecnoding = Unicode.
> I cant see an option for that ?

Perhaps it listens to a BOM?


It does, and will also try heuristics to detect the encoding if no BOM
is present.  But, what I was referring to is File->Open; there's a
dropdown at the bottom to choose the encoding type.

Anyway, glad you got it sorted :)

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sqlite3_column_double - problems when assigning values to variable

2006-12-08 Thread John Stanton

Maybe your statement does not reference a valid row.

Marten Feldtmann wrote:

I have written some interface code to the sqlite3 API using the
OpenWatcom 1.5 compiler and I wrapped several functions like
sqlite3_column_text, sqlite3_column_int and all went fine.

Then I tried to wrap sqlite3_column_double(..) and when I do something
like


...
sqlite3_column_double( stmt, 0);
...

the statement works without problems, but when I write:


double dVal;
...
dVal = sqlite3_column_double( stmt, 0);
...

I get a general protection fault.

I'm not that C guru - but what is the problem here ?


Marten

- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] is blob compression worth it

2006-12-08 Thread Daniel Önnerby

Thanks for pointing out the obvious :)

Seriously though, there are times when probably all of us has made "just 
a simple database" that was not normalized in the correct way that later 
turns out to be used a lot more than intended. Normalizing the database 
at a later state requires a lot of more reprogramming and rewriting a 
lot of sql. I could see a use of this kind of functionality but the best 
way would always be to normalize.
But then again I was just curios to see if anyone had tried or thought 
about something like this before. I'm not even sure I would like this 
type of functionality implemented in SQLite


Best regards
Daniel

John Stanton wrote:
Your solution here is to normalize your database.  Third normal form 
will do it for you.


Daniel Önnerby wrote:

Just out of curiosity.
If I for instants have 1000 rows in a table with a lot of blobs and a 
lot of them have the same data in them, is there any way to make a 
plugin to sqlite that in this case would just save a reference to 
another blob if it's identical. I guess this could save a lot of 
space without any fancy decompression algorithm, and if the 
blob-field is already indexed there would be no extra time to locate 
the other identical blobs :)


Just a thought :)

John Stanton wrote:


What are you using for compression?

Have you checked that you get a useful degree of compression on that 
numeric data?  You might find that it is not particularly amenable 
to compression.


Hickey, Larry wrote:

I have a blob structure which is primarily doubles. Is there anyone 
with
some experience with doing data compression to  make the blobs 
smaller?

Tests I have
run  so far indicate that compression is too slow on blobs of a 
few  meg to
be practical. I get now at least 20 to 40 inserts per  second but 
if a single compression
takes  over a second, it's clearly not worth the trouble. Does 
anybody have experience

with a compression scheme with blobs that consist of mostly arrays of
doubles?
Some  schemes ( ibsen) offer lightening speed decompression so if the
database was primarily used  to read, this would be good choice but 
very

expensive to do
the compression required  to make it.


- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 





- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 





-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] sqlite3_column_double - problems when assigning values to variable

2006-12-08 Thread Marten Feldtmann

I have written some interface code to the sqlite3 API using the
OpenWatcom 1.5 compiler and I wrapped several functions like
sqlite3_column_text, sqlite3_column_int and all went fine.

Then I tried to wrap sqlite3_column_double(..) and when I do something
like


...
sqlite3_column_double( stmt, 0);
...

the statement works without problems, but when I write:


double dVal;
...
dVal = sqlite3_column_double( stmt, 0);
...

I get a general protection fault.

I'm not that C guru - but what is the problem here ?


Marten

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: Unicode Help

2006-12-08 Thread Kees Nuyt
On Fri, 8 Dec 2006 15:54:45 +, you wrote:

> How do you set Notepad to Ecnoding = Unicode.
> I cant see an option for that ?

Perhaps it listens to a BOM?
http://unicode.org/unicode/faq/utf_bom.html#22

It would mean you have to initialize your textfile before
editing with some utility like awk:

 BOF file initutf.cmd  linewrapped by mail !!
@echo off
echo Build a few common BOM prefixed UTF files

echo BOM for UTF-8
awk "BEGIN{printf(\"\xEF\xBB\xBFUTF-8\"); exit 0}" >utf8.txt

echo BOM for UTF-16 Little Endian
awk
"BEGIN{printf(\"\xFF\xFE\x55\x00\x54\x00\x46\x00\x2D\x00\x31\x00\x36\x00\x4C\x00\x45\x00\");
exit 0}" >utf16LE.txt

echo BOM for UTF-16 Big Endian
awk
"BEGIN{printf(\"\xFE\xFF\x00\x55\x00\x54\x00\x46\x00\x2D\x00\x31\x00\x36\x00\x42\x00\x45\");
exit 0}" >utf16BE.txt

 EOF file initutf.cmd 

(tested, works with notepad.exe v5.1.2600.2180 Dutch)

HTH
-- 
  (  Kees Nuyt
  )
c[_]

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] enabling loading of extensions?

2006-12-08 Thread Richard Harding
I'm new to sqlite and trying to see about using sqlite with the fulltext 
plugin to test against what I already do in mysql.


I have the plugin .so and cannot load it since the default is to disable 
loading of extensions. What I can't figure out is how to flip that 
switch. There is reference to function that looks like you can do it 
programatically, but I just want to enable it on my test db via command 
line. Am I missing something obvious here? The end goal is to add sqlite 
support to a python script I have.


Thanks for any help.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Determining Data Types

2006-12-08 Thread Will Leshner

On 12/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

This gets into my ignorance of how sqlite works internally, but if I quote an 
"integer" value, it will still be stored correctly?  I have no problems doing 
that, I just wasn't sure if that was the case.  Also, how do I need to send date fields 
to sqlite to have them stored properly?


I think you said you were still using 2.8 and in 2.8 everything is
strings anyway, so it doesn't matter.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Determining Data Types

2006-12-08 Thread epankoke
This gets into my ignorance of how sqlite works internally, but if I quote an 
"integer" value, it will still be stored correctly?  I have no problems doing 
that, I just wasn't sure if that was the case.  Also, how do I need to send 
date fields to sqlite to have them stored properly?

--
Eric Pankoke
Founder / Lead Developer
Point Of Light Software
http://www.polsoftware.com/

 -- Original message --
From: "Will Leshner" <[EMAIL PROTECTED]>
> On 12/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> 
> > I can construct the update statement without much issue, but I need to know 
> whether or not I should put quotes around value, so I need to know whether or 
> not field3 is a text type field.
> 
> Why not just quote all the values?
> 
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
> 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] is blob compression worth it

2006-12-08 Thread John Stanton
Your solution here is to normalize your database.  Third normal form 
will do it for you.


Daniel Önnerby wrote:

Just out of curiosity.
If I for instants have 1000 rows in a table with a lot of blobs and a 
lot of them have the same data in them, is there any way to make a 
plugin to sqlite that in this case would just save a reference to 
another blob if it's identical. I guess this could save a lot of space 
without any fancy decompression algorithm, and if the blob-field is 
already indexed there would be no extra time to locate the other 
identical blobs :)


Just a thought :)

John Stanton wrote:


What are you using for compression?

Have you checked that you get a useful degree of compression on that 
numeric data?  You might find that it is not particularly amenable to 
compression.


Hickey, Larry wrote:


I have a blob structure which is primarily doubles. Is there anyone with
some experience with doing data compression to  make the blobs smaller?
Tests I have
run  so far indicate that compression is too slow on blobs of a few  
meg to
be practical. I get now at least 20 to 40 inserts per  second but if 
a single compression
takes  over a second, it's clearly not worth the trouble. Does 
anybody have experience

with a compression scheme with blobs that consist of mostly arrays of
doubles?
Some  schemes ( ibsen) offer lightening speed decompression so if the
database was primarily used  to read, this would be good choice but very
expensive to do
the compression required  to make it.


- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 





- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: Unicode Help

2006-12-08 Thread Da Martian

EUREKA!

Ok I got it working now. It turns out my source was UTF-8 Encoded, so even
when i used the utf-16 functions it wasnt comming out right. I am now doing
a converstion in delphi from UTF-8 to UTF16 and using all UTF-16 sqlite
functions as recommended.

Thanks a million for all your help, it was all your suggestions which lead
me to the solution.

Much appreciated.

Have a good weekend.

S


On 12/8/06, Trevor Talbot <[EMAIL PROTECTED]> wrote:


On 12/7/06, Da Martian <[EMAIL PROTECTED]> wrote:

> Yeah I am currently using VirtualTree from Mikes Delphi Gems. Its fully
> unicode enabled (I beleive). I use WideStrings through out the entire
> pipeline from xml I recieve into SQLite via the prepare16 back out
through
> column_text16 into virtual tree. Well thats true, the SQL APIs are
mapped to
> return PWideChar which is then copied via System.Move into a widestring
as
> follows:

[ DLL interfaces ]

> Previously (before my langauge headaches :-)  ) I was doing the above
> without the APIs ending in 16, and everything was string and PChar in
the
> above layer. The layer that used this class has always had "WideString".
>
> I realise your probably not delphi pros, but if you do spot something
stupid
> I am doing I would appreciate any help you can offer.

I've never used Delphi, "but I did sleep at a Holiday Inn last night"...

It looks fine to me.  To help check it, one thing you can try is
writing the result of FieldAsString directly to a file as raw bytes,
then in notepad open that with "encoding" set to "Unicode".  E.g.
something logically equivalent to:

  size := Length(field) * 2;
  SetLength(buffer, size );
  System.Move(field^, buffer^, size);
  file.Write(buffer, size);

I imagine you don't have to jump through hoops like that, but
hopefully you see what I have in mind.  If the result looks good in
notepad, then you know this layer is fine, so the problem must be
closer to the display layer.


-
To unsubscribe, send email to [EMAIL PROTECTED]

-




Re: [sqlite] Determining Data Types

2006-12-08 Thread Will Leshner

On 12/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


I can construct the update statement without much issue, but I need to know 
whether or not I should put quotes around value, so I need to know whether or 
not field3 is a text type field.


Why not just quote all the values?

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



RE: [sqlite] about call back of sqlite.

2006-12-08 Thread Pat Wibbeler
You might also be able to watch the database file itself for changes using a 
platform specific mechanism.  On UNIX, you could use kqueue's and on Windows a 
combination of FindFirstChangeNotification, FindNextChangeNotification and 
WaitForMultipleObjects.  This would allow you to watch for database changes 
without polling.  

Pat  

-Original Message-
From: Roberto [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 08, 2006 4:23 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] about call back of sqlite.

On 08/12/06, hongsion <[EMAIL PROTECTED]> wrote:
> Hi sqlite,
> I want to using sqlite this way.  Application  A  register callback
> to sqlite db. And application B  will modify sqlite db. I want each time
> sqlite was modified by B, A is notified by callback. Is this possible in
> sqlite?  A and B run in different process. Thanks!
>


No it is not possible, you have to implement your own cross process
mechanism to notify applications of changes. Another idea DRH brough
up in the past, is to poll a table which stores ID's of items that
have changed.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] doing everything sqlite warns against

2006-12-08 Thread Hickey, Larry
I have a system requirement that feeds thousands of keyed records, each of
which contains a blob of around a meg or so into an sqlite database. Ok so
far. But the records come from multiple
processes, and I can't afford to do have commits after each insertion for
obvious performance reasons (if you have ever tried that you understand why)
. 
So suppose we batch the commits, and via an external
method not discussed here, and we can serialize the order of the commits
from the various processes so they don't over-lap. A "inserting" process
will block until it has "commit "rights". 
Are we still ok- i.e. leave the database in a sound condition?
 (commits can be batched a few hundred at a time maybe more) Contention for
"commit" rights is unfortunate, but at the end of the day, if I can get 20
or so records inserted per second by
this rube Goldberg, I'm ok with that, cause that is certainly the slow part,
not the ability of the multiple feeders to supply records to insert.

Here is a bad extra ingredient that may be a deal breaker. The processes are
on machines over a network, and have opened the same database on the same
target machine where the database
lives.. Network based windows file systems and locking are a recipe for
sqlite corruption is my understanding.( Assume  the serialization trick I
mentioned earlier was made to work even
if the processes invoking the trick were on the network.)
 So even if we were ok without this network issue, are we now into a
unworkable design?
The common table into which insertions will be done does not have indexes on
it of course. We do that later after all processes are finished.
In summary:
 Alternative I; multiple sqlite processes on different machines accessing
the same database, but commits serialized (by a method not covered here).
assume disaster without this serialized commit trick.

Alternative II:
I could design a system service that runs a single instance of sqlite, and
make the users on other machines add records via the service on the target
machine.
, like designing my own little client server subsystem.  I 
have done similar things before but its a lot of complexity and if I don't
need it, I don't want to do it.

Alternative III Maybe I should just use another sql product (mysql or
sqlserver ...) and odbc to do this- I don't like that complexity either.

Alternative IV:If there was a lightening fast sqlite bulk loader, I could
let each process make its own database first, and then dump(merge) them all
together as a final step.
 That's an alternative design but without a fast back door bulk loader, this
alternative does not seem too good. (all that extra IO) ugh.

Pure insertion speed: ( pair of 4.8ms disks in a raid0 disk array)
The final size of the database is almost out of the range of sqlite, being
between 100 and 120 Gbytes. I have tested sqlite on databases of this size
for insertion speed, and it was not as bad
as I expected from the caveats in the Owens  sqlite book. Average of 12
inserts per second over the load of the entire 120G file file.16/second for
100 G a 100 G database, and for reasonable sizes
like 1 gig, you can do 100 inserts/second. (all 1 transaction of course).
If I can get this kind of speed with multiple load sources located on
different machines, I would have what I need

Pure retrieval speed:  at around 120 Gbytes, 125,000 records with blobs
between about 1 meg each , I could get over 600 retrievals per second so
while insertions hurt, retrievals are fine.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Determining Data Types

2006-12-08 Thread Scott Hess

On 12/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

I appreciate the insight, but I'm not sure it helps with my situation.
This is what I'm trying to accomplish:

1. User selects 1 or more fields from a table using basic "SELECT fields
FROM table" syntax, and the result is retrieved using sqlite_get_table
(this is still in v2.8x)

2. User calls a function that says "I want to update field 3 in row 10".
I have to construct a query that says "UPDATE table SET field3 = value
WHERE ROWID = "

I can construct the update statement without much issue, but I need
to know whether or not I should put quotes around value, so I need
to know whether or not field3 is a text type field.  I hope that explains
my situation a little better.


You'll need to do more than put quotes around the value!

Instead of taking this route, could you construct an UPDATE using bind
placeholders ("?"), then bind parameters using the appropriate type
you've received?  Then sqlite itself will take care of whatever
quoting is necessary.  This removes an entire class of sql injection
attack.

-scott

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Determining Data Types

2006-12-08 Thread epankoke
I appreciate the insight, but I'm not sure it helps with my situation.  This is 
what I'm trying to accomplish:

1. User selects 1 or more fields from a table using basic "SELECT fields FROM 
table" syntax, and the result is retrieved using sqlite_get_table (this is 
still in v2.8x)
2. User calls a function that says "I want to update field 3 in row 10".  I 
have to construct a query that says "UPDATE table SET field3 = value WHERE 
ROWID = "

I can construct the update statement without much issue, but I need to know 
whether or not I should put quotes around value, so I need to know whether or 
not field3 is a text type field.  I hope that explains my situation a little 
better.

--
Eric Pankoke
Founder / Lead Developer
Point Of Light Software
http://www.polsoftware.com/

 -- Original message --
From: Seth Falcon <[EMAIL PROTECTED]>
> [EMAIL PROTECTED] writes:
> 
> > Please forgive me, because I know this has been covered before, but
> > at the moment I'm at a loss.  I am writing an SQLite wrapper for a
> > relatively new language, and one of the requirements is that the
> > user be able to add and update date as if they were using an ADO
> > recordset.  As a result, I need to be able to build an SQL statement
> > behind the scenes that is aware of each field's data type. 
> 
> I'm not sure I understand what you want.  But here are some possible
> starting points:
> 
> When using the SQLite C API, you can determine if the columns in a
> result set correspond to a database table.  In this case, it is
> possible to access the table definition and the column types can be
> parsed [*1*].
> 
> If the result columns do not directly correspond to table columns,
> then, AFAIK, the best you can do is to use sqlite3_column_type.  As
> long as you don't have NULLs, this will tell you the affinity type of
> the column in the result set.
> 
> Aside: it would be really nice to be able to determine affinity type
> from a compiled SQL statement (result of sqlite3_prepare) and not have
> to actually _step() to get this info.
> 
> [*1*] See this thread on the SQLite list:
> http://marc.theaimsgroup.com/?l=sqlite-users=116416179332110=2
> 
> HTH,
> 
> + seth
> 
> -
> To unsubscribe, send email to [EMAIL PROTECTED]
> -
> 


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Determining Data Types

2006-12-08 Thread Seth Falcon
[EMAIL PROTECTED] writes:

> Please forgive me, because I know this has been covered before, but
> at the moment I'm at a loss.  I am writing an SQLite wrapper for a
> relatively new language, and one of the requirements is that the
> user be able to add and update date as if they were using an ADO
> recordset.  As a result, I need to be able to build an SQL statement
> behind the scenes that is aware of each field's data type. 

I'm not sure I understand what you want.  But here are some possible
starting points:

When using the SQLite C API, you can determine if the columns in a
result set correspond to a database table.  In this case, it is
possible to access the table definition and the column types can be
parsed [*1*].

If the result columns do not directly correspond to table columns,
then, AFAIK, the best you can do is to use sqlite3_column_type.  As
long as you don't have NULLs, this will tell you the affinity type of
the column in the result set.

Aside: it would be really nice to be able to determine affinity type
from a compiled SQL statement (result of sqlite3_prepare) and not have
to actually _step() to get this info.

[*1*] See this thread on the SQLite list:
http://marc.theaimsgroup.com/?l=sqlite-users=116416179332110=2

HTH,

+ seth

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: Unicode Help

2006-12-08 Thread Da Martian



It looks fine to me.  To help check it, one thing you can try is
writing the result of FieldAsString directly to a file as raw bytes,
then in notepad open that with "encoding" set to "Unicode".  E.g.
something logically equivalent to:

  size := Length(field) * 2;
  SetLength(buffer, size );
  System.Move(field^, buffer^, size);
  file.Write(buffer, size);

I imagine you don't have to jump through hoops like that, but
hopefully you see what I have in mind.  If the result looks good in
notepad, then you know this layer is fine, so the problem must be
closer to the display layer.



Hi

How do you set Notepad to Ecnoding = Unicode. I cant see an option for that
?


Re: [sqlite] is blob compression worth it

2006-12-08 Thread Dennis Cote

Daniel Önnerby wrote:

Just out of curiosity.
If I for instants have 1000 rows in a table with a lot of blobs and a 
lot of them have the same data in them, is there any way to make a 
plugin to sqlite that in this case would just save a reference to 
another blob if it's identical. I guess this could save a lot of space 
without any fancy decompression algorithm, and if the blob-field is 
already indexed there would be no extra time to locate the other 
identical blobs :)



Daniel,

This is exactly what relational database normalization is about. If you 
have many copies of the same blob you have redundant data. The best way 
to handle that is to normalize the database by moving one copy of the 
redundant data into a separate table. Then you store the id of that 
record in the original tables where you need a reference to the data. 
For blob data you would probably want to store a hash of the blob value 
to speed comparisons, but this isn't absolutely necessary. You can 
reconstruct the original data records by joining the original tables 
with the new blob table when needed.


You can do it now without any new plugin for sqlite, and it works for 
any relational database.


Normalization like this works just as well for non blob data.

Dennis Cote



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Determining Data Types

2006-12-08 Thread epankoke
Please forgive me, because I know this has been covered before, but at the 
moment I'm at a loss.  I am writing an SQLite wrapper for a relatively new 
language, and one of the requirements is that the user be able to add and 
update date as if they were using an ADO recordset.  As a result, I need to be 
able to build an SQL statement behind the scenes that is aware of each field's 
data type.  I vaguely recall discussions about this in the past, and I thought 
the consensus was that the SQL used to create the table needed to be parsed to 
determine data types of fields.  If this is the case, then (1) where can I find 
this SQL, and (2) does someone already have a routine to parse it that they 
could share?  If there is a better way to approach this, I'd appreciate that 
information as well, even if it's just a link to a different thread or a link 
to the SQLite web site.  Thanks for your time.

--
Eric Pankoke
Founder / Lead Developer
Point Of Light Software
http://www.polsoftware.com/

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] about call back of sqlite.

2006-12-08 Thread Roberto

On 08/12/06, hongsion <[EMAIL PROTECTED]> wrote:

Hi sqlite,
I want to using sqlite this way.  Application  A  register callback
to sqlite db. And application B  will modify sqlite db. I want each time
sqlite was modified by B, A is notified by callback. Is this possible in
sqlite?  A and B run in different process. Thanks!




No it is not possible, you have to implement your own cross process
mechanism to notify applications of changes. Another idea DRH brough
up in the past, is to poll a table which stores ID's of items that
have changed.

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] is blob compression worth it

2006-12-08 Thread Daniel Önnerby

Just out of curiosity.
If I for instants have 1000 rows in a table with a lot of blobs and a 
lot of them have the same data in them, is there any way to make a 
plugin to sqlite that in this case would just save a reference to 
another blob if it's identical. I guess this could save a lot of space 
without any fancy decompression algorithm, and if the blob-field is 
already indexed there would be no extra time to locate the other 
identical blobs :)


Just a thought :)

John Stanton wrote:

What are you using for compression?

Have you checked that you get a useful degree of compression on that 
numeric data?  You might find that it is not particularly amenable to 
compression.


Hickey, Larry wrote:

I have a blob structure which is primarily doubles. Is there anyone with
some experience with doing data compression to  make the blobs smaller?
Tests I have
run  so far indicate that compression is too slow on blobs of a few  
meg to
be practical. I get now at least 20 to 40 inserts per  second but if 
a single compression
takes  over a second, it's clearly not worth the trouble. Does 
anybody have experience

with a compression scheme with blobs that consist of mostly arrays of
doubles?
Some  schemes ( ibsen) offer lightening speed decompression so if the
database was primarily used  to read, this would be good choice but very
expensive to do
the compression required  to make it.


- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 






- 


To unsubscribe, send email to [EMAIL PROTECTED]
- 





-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] ip2long

2006-12-08 Thread Daniel Önnerby
I havn't followed the whole thread, but from what I understand you store 
the data as numbers and not IP-strings.
The most common way of checking a range is to use a IP-mask, and this 
should work fine in this case as well.

Something like:
SELECT * FROM myiptable WHERE (ip & "maskvalue") == ("subnet IP" & 
"maskvalue")


the maskvalue in this case could be something like 255.255.255.0 
(decimal value 4294967040) to check the subnet




Kevin Waterson wrote:

How can I SELECT an IP within a range?
All my IP's are stored as 1.2.3.4 but I need to check if
each is within a range of long IP's.

Can this be done with SQLite? or do I need some sort of
external ip2long() function to throw at it?

Kind regards
kevin

  


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] database locked

2006-12-08 Thread LEGRAND, Véronique
Hello,

 

I encounter the following difficulties with sqlite3:

 

I have 4 different processes working on an sqlite3 database. When the "sql
timeout" (set by "sqlite3_busy_timeout") is set to 5 seconds and that all
processes are running on a same windows machine, it seems OK (I am not 100%
sure of that, need to do more tests).

 

When the configuration is the following:

3 processes are running on a windows XP pro (service pack 2) machine (with
NTFS filesystem)

The 4th process and the database are located on another windows machine
(same OS), NTFS filesystem.

 

I cannot let the sql timeout set to 5 seconds because I get a lot of
"database locked" errors; doing the same tests with a timeout set to 30
seconds really improves the results: I get really less "database locked"
errors so, I have set it to 1 minute and it works better again ( I got only
2 "database locked" messages whereas the 4 processes had been running and
working on the database (reading/writing and doing immediate transaction)
for 18 hours.

To give you an idea, with an sql timeout of 5 seconds, I got the first
"database locked" after les than 30 minutes activity. With a timeout set to
30 seconds, I got the "database locked" problem after more than 3 hours.

 

As I read in the sqlite3 documentation that when a process locks the
database using a reserved lock (which is what I use as I do "immediate"
transactions) it took only a few milliseconds, I am very surprise to find
that I need a such long timeout.

Does anyone have an explanation or has someone already had that problem?

 

To try to understand what happens, i have added many traces in my code (each
trace is written with the hour in the format hour:minute:second; 

My code looks like this:

Trace1 

 sqlite3_exec(handle,"begin immediate transaction something");

trace2

...

Trace3

Sqlite3_exec(handle,"end transaction something");

Trace4

 

I have noticed that when a processus crashes dued to a "database locked", in
the log files of the other processes, at nearly the same hour, 20 seconds
pass between trace1 and trace2 or between trace3 and trace4 whereas usually
(I do many transactions like that) it doesn't even take a second.

What can cause a processus to take 20 seconds to execute "end transaction"?

Is the following scenario possible?

A process writes "trace3" in the log file then is put in a queue by the
windows OS. Transaction is still active so, the database is locked and other
processes ("woken up" by the windows OS) cannot do their "begin
transaction"=> they write "trace1" in their log file and start to wait (does
sqlite3 have its own process queue to handle that? I have the impression
that it is not the processus that has been waiting for the longuest time
that is given the lock).

20 seconds after, the windows OS finally wakes up our first process that is
now able to perform its "end transaction" and write "trace4" in the log
file. Other processes can now perform their "begin transaction" and write
"trace2" in the log file.

 

If that scenario is possible, then it should also happen when all 4
processes are running on the same machine...

 

Any information is welcome.

Thank you in advance.