Re: [sqlite] Seg Fault when using sqlite3_bind_blob

2007-07-01 Thread Rich Rattanni

I stand corrected, thank you Andrew.  I seriously doubt it is a bug in
SQLite, but I have had a hell of a time with SQLite and binding
dynamically allocated text and binary data.

On 7/1/07, Andrew Finkenstadt <[EMAIL PROTECTED]> wrote:

On 7/1/07, Rich Rattanni <[EMAIL PROTECTED]> wrote:
>
> I was trying to look through the SQLITE source code to see how the
> sqlite3_bind_blob routine worked.
>
> sqlite3_bind_blob passes the data pointer to bindText
> bindText passes the data pointer to sqlite3VdbeMemSetStr
> sqlite3VdbeMemSetStr then does...
> ...
> pMem->z = (char *)z;
>   if( xDel==SQLITE_STATIC ){
> pMem->flags = MEM_Static;
>   }else if( xDel==SQLITE_TRANSIENT ){
> pMem->flags = MEM_Ephem;
>   }else{
> pMem->flags = MEM_Dyn;
> pMem->xDel = xDel;
>   }
> ...
>
> I don't see anywhere that sqlite3 copies the data to a private buffer; I
> just see where sqlite3 saves a copy of the user pointer.
>


Further down in that function, after setting MEM_Ephem, there are these
lines of code:

 if( pMem->flags&MEM_Ephem ){
   return sqlite3VdbeMemMakeWriteable(pMem);
 }

which does the memory copy when SQLITE_TRANSIENT is used as a side-effect of
making it "writable".

In your original outline you issued sqlite3_step before freeing the memory.
If you leave it that way, you can get away with SQLITE_STATIC when binding
the blob... which might indicate something by whether/where the crash still
occurs.

--andy
 just a sqlite user, not really a knower-of-code



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Seg Fault when using sqlite3_bind_blob

2007-07-01 Thread Andrew Finkenstadt

On 7/1/07, Rich Rattanni <[EMAIL PROTECTED]> wrote:


I was trying to look through the SQLITE source code to see how the
sqlite3_bind_blob routine worked.

sqlite3_bind_blob passes the data pointer to bindText
bindText passes the data pointer to sqlite3VdbeMemSetStr
sqlite3VdbeMemSetStr then does...
...
pMem->z = (char *)z;
  if( xDel==SQLITE_STATIC ){
pMem->flags = MEM_Static;
  }else if( xDel==SQLITE_TRANSIENT ){
pMem->flags = MEM_Ephem;
  }else{
pMem->flags = MEM_Dyn;
pMem->xDel = xDel;
  }
...

I don't see anywhere that sqlite3 copies the data to a private buffer; I
just see where sqlite3 saves a copy of the user pointer.




Further down in that function, after setting MEM_Ephem, there are these
lines of code:

 if( pMem->flags&MEM_Ephem ){
   return sqlite3VdbeMemMakeWriteable(pMem);
 }

which does the memory copy when SQLITE_TRANSIENT is used as a side-effect of
making it "writable".

In your original outline you issued sqlite3_step before freeing the memory.
If you leave it that way, you can get away with SQLITE_STATIC when binding
the blob... which might indicate something by whether/where the crash still
occurs.

--andy
 just a sqlite user, not really a knower-of-code


Re: [sqlite] Seg Fault when using sqlite3_bind_blob

2007-07-01 Thread Rich Rattanni

On 7/1/07, Rich Rattanni <[EMAIL PROTECTED]> wrote:

On 7/1/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> "Rich Rattanni" <[EMAIL PROTECTED]> wrote:
> > All:
> > I am using SQLite to store several rows of binary data into a
> > database.  In my code I have several character arrays containing
> > data.  I then do the following...
> >
> > 1.Begin Transaction
> > 2.For each blob...
> > 2a.sqlite3_prepare("Insert statement...")
> > 2b.call sqlite3_bind_blob(stmt, col#, dataPointer, sizeOfData, 
SQLITE_TRANSIENT)
> > 2c.sqlite3_step()
> > 2d.end
> > 3.Free data pointers.
> > 4.Commit Transaction
> >
> > This code segfaults.  Now if I move the freeing of the data pointers to outside
> > the commit, everything is fine.  According to the API documentation,
> > "If the fifth argument has the value SQLITE_TRANSIENT, then SQLite
> > makes its own private copy of the data immediately, before the
> > sqlite3_bind_*() routine returns."  I may be misinterpreting the
> > documentation, or perhaps this is a bug in SQLite (of course I am
> > assuming the former is true).
> >
> > Could anyone shed some light on my mystery?  I am wondering if I need
> > to enable (when sqlite is compiled) support for SQLITE_TRANSIENT?
> >
>
> The documentation is correct - SQLITE_TRANSIENT causes SQLite
> to make a copy of the data before sqlite3_bind_blob() returns.
> You should be able to free the dataPointer prior to the sqlite3_step().
> No special compile-time optimizations are required.
>
> If you have a reproducible test case, we will look into the problem.
>
> --
> D. Richard Hipp <[EMAIL PROTECTED]>
>
>
>
>

Sir:
Thanks for the quick reply.  I will try to work up a test case in
my spare time.  I had GDB running when this problem occurred, and it
said the offending function was sqlite3pager_get().  I am not sure if
this helps in any way.  Again, thanks for your response.

--
Rich Rattanni



Sir:

I was trying to look through the SQLITE source code to see how the
sqlite3_bind_blob routine worked.

sqlite3_bind_blob passes the data pointer to bindText
bindText passes the data pointer to sqlite3VdbeMemSetStr
sqlite3VdbeMemSetStr then does...
...
pMem->z = (char *)z;
 if( xDel==SQLITE_STATIC ){
   pMem->flags = MEM_Static;
 }else if( xDel==SQLITE_TRANSIENT ){
   pMem->flags = MEM_Ephem;
 }else{
   pMem->flags = MEM_Dyn;
   pMem->xDel = xDel;
 }
...

I don't see anywhere that sqlite3 copies the data to a private buffer; I
just see where sqlite3 saves a copy of the user pointer.

I see that sqlite3VdbeMemDynamicify memcpy's from the private data to
the vdbe object.  Should the if ...else if ... else then look like
this?
if( xDel==SQLITE_STATIC ){
   pMem->flags = MEM_Static;
 }else if( xDel==SQLITE_TRANSIENT ){
   pMem->flags = MEM_Ephem;
   sqlite3VdbeMemDynamicify(pMem);
 }else{
   pMem->flags = MEM_Dyn;
   pMem->xDel = xDel;
 }

Just trying to help if I can...  Am I looking at this right?

--
Rich Rattanni




Re: [sqlite] Seg Fault when using sqlite3_bind_blob

2007-07-01 Thread Rich Rattanni

On 7/1/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

"Rich Rattanni" <[EMAIL PROTECTED]> wrote:
> All:
> I am using SQLite to store several rows of binary data into a
> database.  In my code I have several character arrays containing
> data.  I then do the following...
>
> 1.Begin Transaction
> 2.For each blob...
> 2a.sqlite3_prepare("Insert statement...")
> 2b.call sqlite3_bind_blob(stmt, col#, dataPointer, sizeOfData, 
SQLITE_TRANSIENT)
> 2c.sqlite3_step()
> 2d.end
> 3.Free data pointers.
> 4.Commit Transaction
>
> This code segfaults.  Now if I move the freeing of the data pointers to outside
> the commit, everything is fine.  According to the API documentation,
> "If the fifth argument has the value SQLITE_TRANSIENT, then SQLite
> makes its own private copy of the data immediately, before the
> sqlite3_bind_*() routine returns."  I may be misinterpreting the
> documentation, or perhaps this is a bug in SQLite (of course I am
> assuming the former is true).
>
> Could anyone shed some light on my mystery?  I am wondering if I need
> to enable (when sqlite is compiled) support for SQLITE_TRANSIENT?
>

The documentation is correct - SQLITE_TRANSIENT causes SQLite
to make a copy of the data before sqlite3_bind_blob() returns.
You should be able to free the dataPointer prior to the sqlite3_step().
No special compile-time optimizations are required.

If you have a reproducible test case, we will look into the problem.

--
D. Richard Hipp <[EMAIL PROTECTED]>






Sir:
Thanks for the quick reply.  I will try to work up a test case in
my spare time.  I had GDB running when this problem occurred, and it
said the offending function was sqlite3pager_get().  I am not sure if
this helps in any way.  Again, thanks for your response.

--
Rich Rattanni




Re: [sqlite] Semantic Database Design

2007-07-01 Thread brettg


FYI, here is a good article describing how to serialize .NET data (as a BLOB) to a
SQLite database:


http://www.windevtools.com/show_article.asp?pg=dotNET%20Serialization%20to%20SQLite%20Database/howto.html

cheers










Re: [sqlite] Semantic Database Design

2007-07-01 Thread Trevor Talbot

On 7/1/07, Mov GP 0 <[EMAIL PROTECTED]> wrote:

2007/7/1, Trevor Talbot <[EMAIL PROTECTED]>:
>
> On 7/1/07, Mov GP 0 <[EMAIL PROTECTED]> wrote:
>
> > I'm searching for a database backend for a .NET project that involves a
> > semantic database backend. The problem is that I want to store any
> generic
> > kind of custom .NET datatype and not only SQL-primitives.
>
> >CREATE TABLE @TableName
> >(
> >   Object INT,
> >   Data @Datatype
> >)
> >
> > The @Datatype Parameter is a special object that implements something
> like a
> > "ISQLiteDatatype" Interface that provides information about the class
> that
> > is needed for getting information about the used datatype. This
> includes:
> >
> >-   how to serialize (use also ISerializable)
> >-   how to sort (use also IComparable)
> >-   is it possible to do byte ordering with the serialized data?
> >-   minimal and maximal datasize and -length?
> >-   etc.
> >
> > SQL is not capable of handling this issue, but I think it should be
> possible
> > using the API. At least in MS SQL Server it is possible, though you can't
> > create new datatypes on the fly; you have to upload and register the
> > datatype's class to the server as a DLL.



> I'm not sure I understand what you're looking for here.  The SQL
> interface (for any DBMS) is not going to act like a .NET object; that
> behavior is contained within your application.  What exactly are you
> trying to gain with custom data types within the DB?  Are you trying
> to standardize an interface for other applications to use when talking
> to the same DB?



The implementation of this is also possible with C++. I just see it from the
.NET perspective because I'm a .NET developer. Currently I have no knowledge
of how SQLite works, because I've never used it. I'm currently
looking at several possible database candidates, including Oracle and MS
SQL Server, and want to learn about the strengths and weaknesses of each.


It's the same with any language.  An RDBMS is relational storage;
individual pieces of data are not intelligent.  Object behavior is an
application thing, not a database thing.  A typical ORM
(object-relational mapper) treats a class as a table and vice versa.


> SQLite is an embedded database engine, so when you need to make
> customizations like this, nobody other than your application is going
> to be using the DB.  Since you're not modifying fundamental DBMS
> functionality, you don't need to do anything to SQLite; just build a
> wrapper in your application that uses BLOBs to store everything, and
> build the rest of your application on top of that wrapper.
>
> The same approach will work with pretty much any other DBMS as well.



What I want is more efficient sorting and searching. Therefore I want tables
that are optimized for sorting, searching, and storing a specific datatype.
Using only BLOBs is a possible solution. But searching for, e.g., a number within
a table of unsorted, variable-size (from 1 B to 2 GiB) BLOBs is not as
efficient as searching in a specialized fixed-size-integer table.


I don't think we're on the same page here.  You originally asked to
store arbitrary data.  If you need to store something that doesn't map
to any other primitive, then you store it as a BLOB (or closest
reasonable equivalent for the specific DBMS).  What other option is
there?


As an example, think about sorting geographical coordinates. In a
classical RDBMS this would look like:

   CREATE TABLE Pos
   (
  Pos_ID INT,
  lat FLOAT,
  long FLOAT
   )


Okay.


It is very easy and effective to sort this table for latitude and longitude.
You can also create such a table automatically using XML Schema. But when
you have an OWL ontology you can't create the table automatically anymore.
You would have to write specialized proxy classes that are using SPARQL
queries against the data to result in tables. The queries would be
specialized on specific schemes.

Now we come to the possibility to store this as a BLOB. Because RDF can use
any datatype, you would have to store everything from booleans to images,
music and videos. You can't write any good algorithms for sorting and
searching that mess. Searching for a specific number would require searching
the whole database serially, resulting in O(N)
search time, where N is the size of the whole database. The real advantage
of using BLOBs is the speed of inserting data into the table, because
no lookup is needed to determine where to store the data.

So my idea is to partition the data into separate tables. This will also
result in a float table. Because you can sort this table you can also do a
binary search. That gives a search time of about O(log(N)),
where N is the number of integers rather than the size of the whole
database.


You've lost me in terminology.  I'll leave this to someone who knows
what you're talking about.


You can realize such a thing by creating a table for each primitive datatype
by hand and serializing the objects into these elementary datatypes.

Re: [sqlite] Seg Fault when using sqlite3_bind_blob

2007-07-01 Thread drh
"Rich Rattanni" <[EMAIL PROTECTED]> wrote:
> All:
> I am using SQLite to store several rows of binary data into a
> database.  In my code I have several character arrays containing
> data.  I then do the following...
> 
> 1.Begin Transaction
> 2.For each blob...
> 2a.sqlite3_prepare("Insert statement...")
> 2b.call sqlite3_bind_blob(stmt, col#, dataPointer, sizeOfData, 
> SQLITE_TRANSIENT)
> 2c.sqlite3_step()
> 2d.end
> 3.Free data pointers.
> 4.Commit Transaction
> 
> This code segfaults.  Now if I move the freeing of the data pointers to outside
> the commit, everything is fine.  According to the API documentation,
> "If the fifth argument has the value SQLITE_TRANSIENT, then SQLite
> makes its own private copy of the data immediately, before the
> sqlite3_bind_*() routine returns."  I may be misinterpreting the
> documentation, or perhaps this is a bug in SQLite (of course I am
> assuming the former is true).
> 
> Could anyone shed some light on my mystery?  I am wondering if I need
> to enable (when sqlite is compiled) support for SQLITE_TRANSIENT?
> 

The documentation is correct - SQLITE_TRANSIENT causes SQLite
to make a copy of the data before sqlite3_bind_blob() returns.
You should be able to free the dataPointer prior to the sqlite3_step().
No special compile-time optimizations are required.

If you have a reproducible test case, we will look into the problem.

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] SQL query assistance...

2007-07-01 Thread Jeff Godfrey


- Original Message - 
From: "Jeff Godfrey" <[EMAIL PROTECTED]>

Thanks Gerry,

Indeed, I have tried exactly that.


Hmmm... This seems to be a thread for eating crow... ;^)

Hi Gerry,

My actual query is a bit more complex than the sample I posted.  I 
thought my posted sample was a valid representation of what I'm 
actually trying to do (and I still do), though I had trouble with the 
"ID" and "Version" in the result set being constructed from differing 
records.  I've added a table containing exactly the data I showed in 
my example, and indeed, the query you posted does seem to work exactly 
as I requested.


So, I seem to have failed in my efforts to accurately "simplify" the 
problem.  Now, I need to figure out what's different between my real 
situation and the one I proposed here.


Thanks again for your assistance.

Jeff 






[sqlite] Seg Fault when using sqlite3_bind_blob

2007-07-01 Thread Rich Rattanni

All:
   I am using SQLite to store several rows of binary data into a
database.  In my code I have several character arrays containing
data.  I then do the following...

1.Begin Transaction
2.For each blob...
2a.sqlite3_prepare("Insert statement...")
2b.call sqlite3_bind_blob(stmt, col#, dataPointer, sizeOfData, SQLITE_TRANSIENT)
2c.sqlite3_step()
2d.end
3.Free data pointers.
4.Commit Transaction

This code segfaults.  Now if I move the freeing of the data pointers to outside
the commit, everything is fine.  According to the API documentation,
"If the fifth argument has the value SQLITE_TRANSIENT, then SQLite
makes its own private copy of the data immediately, before the
sqlite3_bind_*() routine returns."  I may be misinterpreting the
documentation, or perhaps this is a bug in SQLite (of course I am
assuming the former is true).

Could anyone shed some light on my mystery?  I am wondering if I need
to enable (when sqlite is compiled) support for SQLITE_TRANSIENT?
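For what it's worth, the outline above can be transliterated to Python's stdlib sqlite3 module, which always copies bound parameters (the equivalent of passing SQLITE_TRANSIENT). This is only a hedged sketch of the described steps, not the actual C code, and the table/column names are invented for illustration:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we can
# issue BEGIN/COMMIT ourselves, mirroring the C outline above.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE blobs (id INTEGER PRIMARY KEY, data BLOB)")

# Stand-ins for the dynamically allocated character arrays.
buffers = [bytes([i]) * 16 for i in range(4)]

con.execute("BEGIN")                            # 1. begin transaction
for i, buf in enumerate(buffers):               # 2a-2c: prepare/bind/step
    con.execute("INSERT INTO blobs VALUES (?, ?)", (i, buf))
del buffers                                     # 3. release the data
con.execute("COMMIT")                           # 4. commit transaction

count = con.execute("SELECT count(*) FROM blobs").fetchone()[0]
print(count)   # -> 4
```

Because the module copies each bound value before the INSERT executes, releasing the buffers before the commit is safe here, which is the behavior the documentation promises for SQLITE_TRANSIENT in C as well.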

--
Richard Rattanni




Re: [sqlite] SQL query assistance...

2007-07-01 Thread Jeff Godfrey


- Original Message - 
From: "Andrew Finkenstadt" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, July 01, 2007 4:45 PM
Subject: Re: [sqlite] SQL query assistance...



On 7/1/07, Jeff Godfrey <[EMAIL PROTECTED]> wrote:


Hi All,

Given the following sample data...

ID  Name  Version
--- - ---
1   name1  0.9
2   name1  1.0
3   name2  1.2
4   name3  1.0
5   name3  1.7
6   name3  1.5

I need to create a query that will group the data together by Name, 
but
for each group will return the record with the highest version 
number and a
count of the items in the group.  So, with the above data, I'd 
expect this

as output:

ID  Name  Version Count
--- - --- -
2   name1 1.0 2
3   name2 1.2 1
5   name3 1.7 3

Thanks for any assistance.




select ID, Name, max(version), count(*) Count
from table_name
group by ID, Name
order by ID, Name

... that looks to meet your criteria.


Hmmm... Looking closer, this doesn't seem to be right.  Sorry for my 
earlier report - I didn't look at the data closely enough.  Since ID is
always unique, won't GROUP BY ID effectively eliminate any possible 
grouping?  Sorry, still learning here...


Thanks,

Jeff 






Re: [sqlite] SQL query assistance...

2007-07-01 Thread Jeff Godfrey


- Original Message - 
From: "Andrew Finkenstadt" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, July 01, 2007 4:45 PM
Subject: Re: [sqlite] SQL query assistance...



On 7/1/07, Jeff Godfrey <[EMAIL PROTECTED]> wrote:


Hi All,

Given the following sample data...

ID  Name  Version
--- - ---
1   name1  0.9
2   name1  1.0
3   name2  1.2
4   name3  1.0
5   name3  1.7
6   name3  1.5

I need to create a query that will group the data together by Name, 
but
for each group will return the record with the highest version 
number and a
count of the items in the group.  So, with the above data, I'd 
expect this

as output:

ID  Name  Version Count
--- - --- -
2   name1 1.0 2
3   name2 1.2 1
5   name3 1.7 3

Thanks for any assistance.




select ID, Name, max(version), count(*) Count
from table_name
group by ID, Name
order by ID, Name

... that looks to meet your criteria.


Andrew,

Indeed, the above does seem to do exactly what I need.  I guess the 
missing link (in my experimentation), was adding ID to the GROUP BY 
step.  Without that, the returned ID and Version weren't guaranteed to 
come from the same original record.


It's always so easy when you see it done... Thanks for your 
assistance.


Jeff 






Re: [sqlite] SQL query assistance...

2007-07-01 Thread Jeff Godfrey


- Original Message - 
From: "Gerry Snyder" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, July 01, 2007 4:40 PM
Subject: Re: [sqlite] SQL query assistance...



Jeff Godfrey wrote:

Hi All,

Given the following sample data...

ID  Name  Version
--- - ---
1   name1  0.9
2   name1  1.0
3   name2  1.2
4   name3  1.0
5   name3  1.7
6   name3  1.5

I need to create a query that will group the data together by Name, 
but for each group will return the record with the highest version 
number and a count of the items in the group.  So, with the above 
data, I'd expect this as output:


ID  Name  Version Count
--- - --- -
2   name1 1.0 2
3   name2 1.2 1
5   name3 1.7 3

Thanks for any assistance.




It would seem that something like:

select ID, Name, max(Version),count(*) from table group by Name

should work.


Thanks Gerry,

Indeed, I have tried exactly that.  The problem is that max(Version) 
just returns the maximum version for the group, right?  So, the 
returned version doesn't necessarily "belong" to the ID that's 
returned, correct?  The result should be exact copies of the original 
records  that had the highest version for each group, along with a 
count for each group.  The above query seems to mix the "highest 
version" with whatever record was chosen to represent the group.  I 
need to ensure that the group record *is* the record with the highest 
version...


Thanks,

Jeff






Re: [sqlite] SQL query assistance...

2007-07-01 Thread Andrew Finkenstadt

On 7/1/07, Jeff Godfrey <[EMAIL PROTECTED]> wrote:


Hi All,

Given the following sample data...

ID  Name  Version
--- - ---
1   name1  0.9
2   name1  1.0
3   name2  1.2
4   name3  1.0
5   name3  1.7
6   name3  1.5

I need to create a query that will group the data together by Name, but
for each group will return the record with the highest version number and a
count of the items in the group.  So, with the above data, I'd expect this
as output:

ID  Name  Version Count
--- - --- -
2   name1 1.0 2
3   name2 1.2 1
5   name3 1.7 3

Thanks for any assistance.




select ID, Name, max(version), count(*) Count
from table_name
group by ID, Name
order by ID, Name

... that looks to meet your criteria.


Re: [sqlite] SQL query assistance...

2007-07-01 Thread Gerry Snyder

Jeff Godfrey wrote:

Hi All,

Given the following sample data...

ID  Name  Version
--- - ---
1   name1  0.9
2   name1  1.0
3   name2  1.2
4   name3  1.0
5   name3  1.7
6   name3  1.5

I need to create a query that will group the data together by Name, but for 
each group will return the record with the highest version number and a count 
of the items in the group.  So, with the above data, I'd expect this as output:

ID  Name  Version Count
--- - --- -
2   name1 1.0 2
3   name2 1.2 1
5   name3 1.7 3

Thanks for any assistance.

  


It would seem that something like:

select ID, Name, max(Version),count(*) from table group by Name

should work. What have you tried, and what went wrong?
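Gerry's query can be checked against the sample data with the stdlib sqlite3 module (table renamed to `t`, since `table` is a reserved word, and an ORDER BY added only to make the output deterministic). Note that SQLite documents — guaranteed since 3.7.11 — that a bare column in a max() aggregate query is taken from the row that holds the maximum, which is exactly the behavior Jeff needs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID INTEGER, Name TEXT, Version REAL)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "name1", 0.9), (2, "name1", 1.0), (3, "name2", 1.2),
    (4, "name3", 1.0), (5, "name3", 1.7), (6, "name3", 1.5),
])

# Gerry's query: the bare ID column is taken from the row that
# produced max(Version) within each group.
rows = con.execute(
    "SELECT ID, Name, max(Version), count(*) FROM t "
    "GROUP BY Name ORDER BY Name"
).fetchall()
for r in rows:
    print(r)
# -> (2, 'name1', 1.0, 2)
#    (3, 'name2', 1.2, 1)
#    (5, 'name3', 1.7, 3)
```

In 2007 this bare-column behavior was undocumented, so the hedge at the time would have been a correlated subquery; on any modern SQLite the query above is guaranteed correct.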

Gerry




[sqlite] SQL query assistance...

2007-07-01 Thread Jeff Godfrey
Hi All,

Given the following sample data...

ID  Name  Version
--- - ---
1   name1  0.9
2   name1  1.0
3   name2  1.2
4   name3  1.0
5   name3  1.7
6   name3  1.5

I need to create a query that will group the data together by Name, but for 
each group will return the record with the highest version number and a count 
of the items in the group.  So, with the above data, I'd expect this as output:

ID  Name  Version Count
--- - --- -
2   name1 1.0 2
3   name2 1.2 1
5   name3 1.7 3

Thanks for any assistance.



Jeff

[sqlite] error in step

2007-07-01 Thread Frederic de la Goublaye

Hi all,

Any idea about this error:

2007-07-01 17:01:48,145 (BotTopicData.java:getTopics:342) ERROR -
sSql=SELECT * FROM topic WHERE value LIKE 'O';
2007-07-01 17:01:48,155 (SQLite.java:execQuery:79) ERROR - SQLite.Exception:
error in step

After that (not sure), my Tomcat server goes to ERROR 404 and then crashes.

Best regards
Frederic


Re: [sqlite] Semantic Database Design

2007-07-01 Thread Mov GP 0

2007/7/1, Trevor Talbot <[EMAIL PROTECTED]>:


On 7/1/07, Mov GP 0 <[EMAIL PROTECTED]> wrote:

> I'm searching for a database backend for a .NET project that involves a
> semantic database backend. The problem is that I want to store any
generic
> kind of custom .NET datatype and not only SQL-primitives.

>CREATE TABLE @TableName
>(
>   Object INT,
>   Data @Datatype
>)
>
> The @Datatype Parameter is a special object that implements something
like a
> "ISQLiteDatatype" Interface that provides information about the class
that
> is needed for getting information about the used datatype. This
includes:
>
>-   how to serialize (use also ISerializable)
>-   how to sort (use also IComparable)
>-   is it possible to do byte ordering with the serialized data?
>-   minimal and maximal datasize and -length?
>-   etc.
>
> SQL is not capable of handling this issue, but I think it should be
possible
> using the API. At least in MS SQL Server it is possible, though you can't
> create new datatypes on the fly; you have to upload and register the
> datatype's class to the server as a DLL.

I'm not sure I understand what you're looking for here.  The SQL
interface (for any DBMS) is not going to act like a .NET object; that
behavior is contained within your application.  What exactly are you
trying to gain with custom data types within the DB?  Are you trying
to standardize an interface for other applications to use when talking
to the same DB?



The implementation of this is also possible with C++. I just see it from the
.NET perspective because I'm a .NET developer. Currently I have no knowledge
of how SQLite works, because I've never used it. I'm currently
looking at several possible database candidates, including Oracle and MS
SQL Server, and want to learn about the strengths and weaknesses of each.


SQLite might be better suited for this issue, because as part of the
application it can access the sorting and serialisation functions
directly rather than importing DLLs. But the question is how I can do that.

SQLite is an embedded database engine, so when you need to make
customizations like this, nobody other than your application is going
to be using the DB.  Since you're not modifying fundamental DBMS
functionality, you don't need to do anything to SQLite; just build a
wrapper in your application that uses BLOBs to store everything, and
build the rest of your application on top of that wrapper.

The same approach will work with pretty much any other DBMS as well.



What I want is more efficient sorting and searching. Therefore I want tables
that are optimized for sorting, searching, and storing a specific datatype.
Using only BLOBs is a possible solution. But searching for, e.g., a number within
a table of unsorted, variable-size (from 1 B to 2 GiB) BLOBs is not as
efficient as searching in a specialized fixed-size-integer table.

As an example, think about sorting geographical coordinates. In a
classical RDBMS this would look like:

  CREATE TABLE Pos
  (
 Pos_ID INT,
 lat FLOAT,
 long FLOAT
  )

It is very easy and effective to sort this table for latitude and longitude.
You can also create such a table automatically using XML Schema. But when
you have an OWL ontology you can't create the table automatically anymore.
You would have to write specialized proxy classes that are using SPARQL
queries against the data to result in tables. The queries would be
specialized on specific schemes.

Now we come to the possibility to store this as a BLOB. Because RDF can use
any datatype, you would have to store everything from booleans to images,
music and videos. You can't write any good algorithms for sorting and
searching that mess. Searching for a specific number would require searching
the whole database serially, resulting in O(N)
search time, where N is the size of the whole database. The real advantage
of using BLOBs is the speed of inserting data into the table, because
no lookup is needed to determine where to store the data.

So my idea is to partition the data into separate tables. This will also
result in a float table. Because you can sort this table you can also do a
binary search. That gives a search time of about O(log(N)),
where N is the number of integers rather than the size of the whole
database.

You can realize such a thing by creating a table for each primitive datatype
by hand and serializing the objects into these elementary datatypes plus the
semantics of the object. But because RDF has a very extensible datatype
system, an approach that also handles user-defined datatypes would boost the
performance of the serialisation and deserialisation process.

Further speedup would be possible by partitioning the tables of the primitive
types even further. E.g., there might be a partition that holds floats for
latitude and another partition holding floats for longitude. This
approach might reduce the searching time.
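The per-type partitioning idea can be sketched directly in SQLite (stdlib sqlite3; the table and column names below are hypothetical): an indexed float column gives the logarithmic lookup being described, and the query planner confirms the index is used instead of a full scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical per-type partition: one table per primitive datatype,
# here a float table (e.g. for latitudes), indexed for O(log N) lookup.
con.execute("CREATE TABLE float_part (obj_id INTEGER, val REAL)")
con.execute("CREATE INDEX idx_float_val ON float_part(val)")
con.executemany("INSERT INTO float_part VALUES (?, ?)",
                [(i, i / 10.0) for i in range(1000)])

# The planner reports a B-tree index search rather than a table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT obj_id FROM float_part WHERE val = 42.5"
).fetchall()
print(plan)

hit = con.execute(
    "SELECT obj_id FROM float_part WHERE val = 42.5"
).fetchone()[0]
print(hit)   # -> 425
```

The same value stored inside opaque BLOBs would force a serial scan and per-row deserialization, which is the O(N) cost described above.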

Re: [sqlite] sqlite testing with Boundschecker

2007-07-01 Thread drh
"Damian Slee" <[EMAIL PROTECTED]> wrote:
> 
> I have done some testing with 3.2 and now 3.4, Visual Studio 2003 and
> BoundsChecker (and XP).

Thank you for the report.

I have analyzed the errors reported by BoundsChecker and they
all appear to be false positives.  The SQLite 3.4.0 code base
is correct in all cases and BoundsChecker is complaining about
problems that do not exist.

The following is typical:

> 
> Dangling Pointer: Pointer 0x010D9250, allocated by HeapAlloc, has already
> been freed.
> 
> Vdbe.c sqlite3VdbeExec Line 469
> 
>   if( p->magic!=VDBE_MAGIC_RUN ) return SQLITE_MISUSE;
>   assert( db->magic==SQLITE_MAGIC_BUSY );
>   [pTos = p->pTos;]
>   if( p->rc==SQLITE_NOMEM ){
> 

There are two pointers on the offending line: "p" and "pTos".  The
p pointer appears to be valid.  Otherwise, the "return SQLITE_MISUSE"
two lines above would have been taken.  Or at worst, the dereference
of p two lines above should have triggered a similar error.  So
apparently BoundsChecker is upset because SQLite is merely making
a copy of a pointer to previously freed memory.  There really is no
harm in this.  Nothing bad can happen unless the program actually
tries to dereference the pointer - which it never does.

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] Re: In Mem Query Performance

2007-07-01 Thread RaghavendraK 70574
Hi Joe/John

Appreciate your reply.
Result: with your suggestions it is down to 3 secs.
But I need to get it to 1 sec. Possible?

Here are the findings:
1) With the mem db, default compile options plus your suggestions (bind + index),
performance stands at 10-11 secs.
2) With the file db and default compile options it stands at 13-14 secs.
3) begin and commit seem to have no effect on the query.
   A separate index on the mem table seems to have no effect.
4) Earlier Joe had suggested changing the page size; with that
it is down to 3 secs (along with your suggestions).
  Compile Option changes:
  ./configure --enable-threadsafe

  BCC = gcc -g -O2

  # C Compile and options for use in building executables that
  # will run on the target platform.  (BCC and TCC are usually the
  # same unless your are cross-compiling.)
  #
  TCC = gcc -g -O2 -I. -I${TOP}/src

  # Define -DNDEBUG to compile without debugging (i.e., for production 
usage)
  # Omitting the define will cause extra debugging code to be inserted and
  # includes extra comments when "EXPLAIN stmt" is used.
  #
  TCC += -DNDEBUG=1 -DSQLITE_DEFAULT_CACHE_SIZE=32768 
-DSQLITE_DEFAULT_PAGE_SIZE=8192 -DSQLITE_DEFAULT_TEMP_CACHE_SIZE=32768 
-DTEMP_STORE=3

regards
ragha


- Original Message -
From: Joe Wilson <[EMAIL PROTECTED]>
Date: Sunday, July 1, 2007 12:47 pm
Subject: Re: [sqlite] Re: In Mem Query Performance

> In addition,
> 
> - make a new index only on column1
> - move the prepare before the loop to avoid reparsing the SELECT 
> each time
> - use sqlite3_bind_* and sqlite3_reset in the loop.
> - move finalize after the loop.
> - query the test table directly - not the temporary ttest table.
> - don't SELECT * if you only need a column or two
> 
> --- RaghavendraK 70574 <[EMAIL PROTECTED]> wrote:
> > int main()
> > {
> >   sqlite3* db;
> >   int ret = sqlite3_open("abc",&db);
> > 
> >   char* errmsg;
> >   char buf[1024];
> >   sprintf(buf,"create temporary table ttest(column1 text,column2 
> text,primary key
> > (column1,column2));"); 
> >   ret = sqlite3_exec(db , buf, NULL ,NULL,&errmsg);
> > 
> > 
> >   sprintf(buf,"insert into ttest select * from test;");
> >   ret = sqlite3_exec(db , buf, NULL ,NULL,&errmsg);
> > 
> > 
> >   //read all the srvName from file and store it in arr and query.
> >   fstream fin("query.sql",ios::in);
> >   string data[10];
> >
> >for(int j=0;!fin.eof();j++)
> >{
> >   fin>>data[j];
> >   //cout<<data[j]<<endl;
> >} 
> >fin.close();
> > 
> >cout<<"Calling Test Now"<<endl;
> >sleep(1);
> > 
> >//Now Query Data.
> >time_t start = time(0);
> >list<string> lst;
> > 
> >char* szError=0;
> >const char* szTail=0;
> >sqlite3_stmt* pVM;
> > 
> >for(int k=0;k<10;k++)
> >{
> >   sprintf(buf,"select * from ttest where column1 = 
> '%s'",data[k].c_str());  
> >   
> > 
> >   ret = sqlite3_prepare(db, buf, -1, &pVM, &szTail);  
> >   ret = sqlite3_step(pVM); 
> >   //lst.push_back();
> >   ret = sqlite3_finalize(pVM);
> >}
> >//
> >time_t end = time(0);
> >cout<<"start="<<start<<endl;
> >cout<<"end="<<end<<endl;
> > 
> >   return 0;
> > }
> > 
> > //Result: It takes 17 sec to read 100,000 records. Can it be 
> reduced to 1 sec.???
> 
> 
> 




Re: [sqlite] Semantic Database Design

2007-07-01 Thread Trevor Talbot

On 7/1/07, Mov GP 0 <[EMAIL PROTECTED]> wrote:


I'm searching for a database backend for a .NET project that involves a
semantic database backend. The problem is that I want to store any generic
kind of custom .NET datatype and not only SQL-primitives.



   CREATE TABLE @TableName
   (
  Object INT,
  Data @Datatype
   )

The @Datatype Parameter is a special object that implements something like a
"ISQLiteDatatype" Interface that provides information about the class that
is needed for getting information about the used datatype. This includes:

   -   how to serialize (use also ISerializeable)
   -   how to sort (use also ICompareable)
   -   is it possible to do byte ordering with the serialized data?
   -   minimal and maximal data size and length?
   -   etc.

SQL is not capable of handling this, but I think it should be possible
using the API. In MS SQL Server it is possible, but you can't
create new datatypes on the fly; you have to upload and register the
datatype's class with the server as a DLL.


I'm not sure I understand what you're looking for here.  The SQL
interface (for any DBMS) is not going to act like a .NET object; that
behavior is contained within your application.  What exactly are you
trying to gain with custom data types within the DB?  Are you trying
to standardize an interface for other applications to use when talking
to the same DB?


SQLite might be better suited for this issue, because as part of the
application it can access the sorting and serialisation functions directly
rather than importing DLLs. But the question is how I can do that.


SQLite is an embedded database engine, so when you need to make
customizations like this, nobody other than your application is going
to be using the DB.  Since you're not modifying fundamental DBMS
functionality, you don't need to do anything to SQLite; just build a
wrapper in your application that uses BLOBs to store everything, and
build the rest of your application on top of that wrapper.

The same approach will work with pretty much any other DBMS as well.
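As a sketch of that BLOB-wrapper idea, here is one table per type with serialized objects as BLOBs. This is Python for brevity (the poster's context is .NET); the `ObjectStore` class, the `_TypeName` table-naming scheme, and pickle standing in for ISerializable are all illustrative assumptions, not an existing API:

```python
import pickle, sqlite3

class ObjectStore:
    """Hypothetical wrapper: one table per type, objects stored as BLOBs."""
    def __init__(self, con):
        self.con = con

    def _table(self, obj):
        name = "_" + type(obj).__name__      # e.g. dict -> "_dict"
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS %s (Object INTEGER, Data BLOB)" % name)
        return name

    def put(self, oid, obj):
        # pickle stands in for .NET's ISerializable here
        self.con.execute("INSERT INTO %s VALUES (?, ?)" % self._table(obj),
                         (oid, pickle.dumps(obj)))

    def get(self, oid, typ):
        row = self.con.execute(
            "SELECT Data FROM _%s WHERE Object = ?" % typ.__name__,
            (oid,)).fetchone()
        return pickle.loads(row[0]) if row else None

store = ObjectStore(sqlite3.connect(":memory:"))
store.put(1, {"a": 1})
print(store.get(1, dict))  # {'a': 1}
```

Custom sort orders could be layered on the same way, by registering a collation or sorting after deserialization in the application.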




Re: [sqlite] Re: In Mem Query Performance

2007-07-01 Thread John Stanton
It looks as if you would do better to run your job as one or more 
transactions and to use the Sqlite API, with the sequence:

  sqlite3_prepare
  begin transaction
loop
  sqlite3_bind
  sqlite3_step
  sqlite3_reset
end
  commit
  sqlite3_finalize
You may need to split your job into multiple transactions to manage 
their size.
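The same prepare-once / bind-per-row shape, sketched with Python's built-in sqlite3 module for brevity (the thread's code is C++; Python reuses the compiled statement across the loop the way sqlite3_bind/step/reset does, and the table and data names below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (serName TEXT, doid TEXT,"
            " PRIMARY KEY (serName, doid))")

rows = [("srv%06d" % i, "doid%06d" % i) for i in range(1000)]

# begin .. loop(bind, step, reset) .. commit:
with con:                                  # one transaction around the loop
    con.executemany("INSERT INTO test VALUES (?, ?)", rows)

# query with a bound parameter instead of sprintf'ing fresh SQL each time
cur = con.cursor()
found = 0
for name, _ in rows[:10]:
    cur.execute("SELECT doid FROM test WHERE serName = ?", (name,))
    if cur.fetchone():
        found += 1
print(found)  # 10
```

The point of the sequence is that parsing the SQL happens once, while only the binding and stepping repeat.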


RaghavendraK 70574 wrote:

Hi All,

Pls find the sample test below,
A) Using SQLite 3.3.17, SuSE 9, 8 GB RAM, 4-CPU machine:
1) Compile and run the test-data generator program to generate the data
2) Import the data into the DB
3) Compile the perf program and then generate the perf result
4) Result: for 100,000 records it takes 17 sec, finding one record at a time.

regards
ragha
@@1) Program to generate the test data:

//Compile: g++ -g gen.cpp -o gen

#include <fstream>
#include <iostream>
#include <string>
#include <cstdio>

using namespace std;


int main()
{
 fstream fout("data.sql",ios::out);
 fstream fout1("query.sql",ios::out);
  //Schema
  fout<<"create table test (serName text,doid text,primary key (serName,doid));";
  ...

   for(int j=0;!fin.eof();j++)
   {
  fin>>data[j];
  //cout<<data[j]<<endl;
   } 
   fin.close();

   cout<<"Calling Test Now"<<endl;
   sleep(1);
   list<string> lst;

   char* szError=0;
   const char* szTail=0;
   sqlite3_stmt* pVM;

   for(int k=0;k<10;k++)
   {
  sprintf(buf,"select * from ttest where column1 = '%s'",data[k].c_str());  
	  

  ret = sqlite3_prepare(db, buf, -1, &pVM, &szTail);  
  ret = sqlite3_step(pVM);		 
  //lst.push_back();
  ret = sqlite3_finalize(pVM);
   }

   //
   time_t end = time(0);
   cout<<"start="<<start<<" end="<<end<<endl;

   return 0;
}

- Original Message -
From: Aristotle Pagaltzis <[EMAIL PROTECTED]>
Date: Saturday, June 30, 2007 9:48 pm
Subject: [sqlite] Re: In Mem Query Performance



Hi RaghavendraK,

* RaghavendraK 70574 <[EMAIL PROTECTED]> [2007-06-25 08:45]:


When the DB is opened in "in Memory mode", query performance
does not improve, for a table which has 10 columns of type Text,
each column holding 128 bytes of data, with a total of
100,000 records.


that is small enough to fit into memory, and therefore small
enough to fit into the OS disk cache. In such a case, the
performance of SQLite does not differ significantly between
in-memory and on-disk databases.
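A quick way to check this for yourself: run the same point query against an in-memory and an on-disk copy of the data. Both return the same answer, and once the file is in the OS cache both are served from memory. A sketch in Python's built-in sqlite3 module (timings omitted since they depend on the machine; names are illustrative):

```python
import os, sqlite3, tempfile

def populate(con):
    con.execute("CREATE TABLE test (serName TEXT PRIMARY KEY, doid TEXT)")
    con.executemany("INSERT INTO test VALUES (?, ?)",
                    (("srv%d" % i, "d%d" % i) for i in range(10000)))
    con.commit()

mem = sqlite3.connect(":memory:")                           # in-memory DB
populate(mem)

disk = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "abc.db"))
populate(disk)                                              # on-disk DB

q = "SELECT doid FROM test WHERE serName = ?"
a = mem.execute(q, ("srv1234",)).fetchone()
b = disk.execute(q, ("srv1234",)).fetchone()
print(a, b)  # ('d1234',) ('d1234',)
```

Any large difference you measure between the two usually points at the query plan (missing index, re-parsed SQL), not at the storage layer.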

Your problem is elsewhere. If you provide your schema and
queries, someone might be able to tell you what about them makes
SQLite go so slow and how to make it faster.

Regards,
--
Aristotle Pagaltzis // 









[sqlite] Semantic Database Design

2007-07-01 Thread Mov GP 0

Hi,
I'm searching for a database backend for a .NET project that involves a
semantic database backend. The problem is that I want to store any generic
kind of custom .NET datatype and not only SQL-primitives.

First, the Database should have a triple store for the semantic relations
that is built of three 32-bit integer columns. This is easy to implement.

  CREATE TABLE Triples
  (
 Subject INT,
 Predicate INT,
 Object INT
  )
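A minimal sketch of that triple store in action, using Python's built-in sqlite3 module for brevity (the integer IDs below are hypothetical; in practice they would index into a literals table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Triples (Subject INT, Predicate INT, Object INT)")

# hypothetical IDs: 1 = Fido, 2 = is-a, 3 = Dog, 4 = Animal
con.executemany("INSERT INTO Triples VALUES (?, ?, ?)",
                [(1, 2, 3), (3, 2, 4)])

# one-hop lookup: everything Fido "is-a"
rows = con.execute(
    "SELECT Object FROM Triples WHERE Subject = ? AND Predicate = ?",
    (1, 2)).fetchall()
print(rows)  # [(3,)]
```

Indexes on (Subject, Predicate) and (Predicate, Object) would make both directions of traversal cheap.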

But next to this I want to have some additional tables for storing literals,
data, and files. The Tables should consist of a row storing an integer that
gives the ID and an additional column that is storing the generic data. I
might use BLOBs for this issue, but it's more efficient to use separate
tables. There should be a separate table for each datatype.

  @TableName = '_' + typeof( @Datatype ).ToString()

  CREATE TABLE @TableName
  (
 Object INT,
 Data @Datatype
  )

The @Datatype Parameter is a special object that implements something like a
"ISQLiteDatatype" Interface that provides information about the class that
is needed for getting information about the used datatype. This includes:

  -   how to serialize (use also ISerializeable)
  -   how to sort (use also ICompareable)
  -   is it possible to do byte ordering with the serialized data?
  -   minimal and maximal data size and length?
  -   etc.

SQL is not capable of handling this, but I think it should be possible
using the API. In MS SQL Server it is possible, but you can't
create new datatypes on the fly; you have to upload and register the
datatype's class with the server as a DLL.

SQLite might be better suited for this issue, because as part of the
application it can access the sorting and serialisation functions directly
rather than importing DLLs. But the question is how I can do that.


ys, MovGP0


[sqlite] RE: sqlite testing with Boundschecker

2007-07-01 Thread Damian Slee
Re-submitted because the red line color didn't come through. [] marks the
reported line.

 


Subject: sqlite testing with Boundschecker

 

Hi,

I have done some testing with 3.2 and now 3.4, visual studio2003 and
boundschecker (and XP).  I get a few reported errors when I simply do the
.dump command with no db.  It also happens at other times with a db, but this
case is easier to reproduce.
Line numbers are from the downloaded 3.4.0 source.  I don’t really know how
the vdbe works, so not sure how to start on solving the write overrun one…

 

Thanks,

damian

 

Vdbeaux.c sqlite3VdbeChangeP3 line 534

Memory block at address void* _Src = 0x0012C92C in argument 2 in memcpy is
too small, should be 17, was 16.

if( pKeyInfo ){

  unsigned char *aSortOrder;

  [memcpy(pKeyInfo, zP3, nByte);]

  aSortOrder = pKeyInfo->aSortOrder;

 

 

Write Overrun: Memory write to 0x010D8A88 (2) overruns destination block
0x010D89F8 (60) allocated by HeapAlloc.

Vdbe.c, sqlite3VdbeExec, Line 701.  This occurs when running sqlite3.exe
with no db, then executing .dump, then second breakpoint on the pTos->flags
line invokes this boundschecker error.  

case OP_Integer: {

  pTos++;

  [pTos->flags = MEM_Int;]

  pTos->u.i = pOp->p1;

  break;

}

 

Resource Leak Exiting Program: Handle 0x0014 allocated by TlsAlloc.

Os_win.c line 1730

if( !keyInit ){

sqlite3OsEnterMutex();

if( !keyInit ){

  [key = TlsAlloc();]

  if( key==0xffffffff ){

sqlite3OsLeaveMutex();

return 0;

  }

 

 

Dangling Pointer: Pointer 0x010D9250, allocated by HeapAlloc, has already
been freed.

Vdbe.c sqlite3VdbeExec Line 469

if( p->magic!=VDBE_MAGIC_RUN ) return SQLITE_MISUSE;

  assert( db->magic==SQLITE_MAGIC_BUSY );

  [pTos = p->pTos;]

  if( p->rc==SQLITE_NOMEM ){

 

 

 

Pointer Error:  Pointer 0x010DA9F0, used as an argument, is out of range; no
longer within block 0x010DAA30 (204), allocated by malloc.

Vdbeaux.c, sqlite3VdbeMakeReady line 889 

for(n=0; n<p->nMem; n++){

p->aMem[n].flags = MEM_Null;

  }

 

  [p->pTos = &p->aStack[-1];]

  p->pc = -1;

  p->rc = SQLITE_OK;

 

similar error Cleanup, line 967

static void Cleanup(Vdbe *p){

  int i;

  if( p->aStack ){

releaseMemArray(p->aStack, 1 + (p->pTos - p->aStack));

[p->pTos = &p->aStack[-1];]

  }

  closeAllCursors(p);

 

 

Similar error  vdbe.c sqlite3VdbeExec, line2681

sqlite3VdbeMemIntegerify(pTos);

  iDb = pTos->u.i;

  assert( (pTos->flags & MEM_Dyn)==0 );

  [pTos--;]

  assert( iDb>=0 && iDb<db->nDb );

 

 

Unrelated Pointer: Comparing two unrelated pointers 0x010D8A30 and
0x010D8A70.  Pointers are not within the same buffer.

Vdbe.c sqlite3VdbeExec line 983

static void Cleanup(Vdbe *p){

  int i;

  if( p->aStack ){

[releaseMemArray(p->aStack, 1 + (p->pTos - p->aStack));]

p->pTos = &p->aStack[-1];

  }

 

 

Unrelated Pointer: Comparing two unrelated pointers 0x010DAD10 and
0x010DACD0.  Pointers are not within the same buffer.

Vdbe.c sqlite3VdbeExec – line 983

  pFirstColumn = &pTos[0-pOp->p1];

  [for(pMem = p->aStack; pMem<pFirstColumn; pMem++){]

  if( p->aStack ){

[releaseMemArray(p->aStack, 1 + (p->pTos - p->aStack));]

p->pTos = &p->aStack[-1];

  }

 

 

Unrelated Pointer: Comparing two unrelated pointers 0x010DAD10 and
0x010DACD0.  Pointers are not within the same buffer.

Vdbe.c sqlite3VdbeExec line 983

pFirstColumn = &pTos[0-pOp->p1];

  [for(pMem = p->aStack; pMem<pFirstColumn; pMem++){]
