Re: [sqlite] Fwd: signal 6 during sqlite_step in WAL mode

2014-07-01 Thread Eduardo Morras
On Tue, 1 Jul 2014 08:07:52 +0300
Mattan Shalev <mat...@reduxio.com> wrote:

> Hey guys,
> I'm getting signal 6 during sqlite_step in WAL mode. Working on Ubuntu
> 12.04, sqlite3 3.7.9.
> One process is the writing continuously,  while other process reads
> from the DB in a multi threaded access. I made sure that sqlite is
> configured to serialised mode.
> 
> Here is the backtrace:
> 
> #0 0x7f4f78f0d08d in nanosleep ()
> #from /lib/x86_64-linux-gnu/libc.so.6 1 0x7f4f78f0cf2c in sleep
> #() from /lib/x86_64-linux-gnu/libc.so.6 2 0x7f4f7982d881 in
> #signal_catch (signo=6) at signal_masker.c:35 3  #called> 4 0x7f4f78e83425 in raise ()
> #called> from /lib/x86_64-linux-gnu/libc.so.6
> #5 0x7f4f78e86b8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
> #6 0x7f4f78ec139e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #7 0x7f4f78ecbb96 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
> #8 0x7f4f77661690 in sqlite3_free () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #9 0x7f4f77664b96 in ?? () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #10 0x7f4f77664c7c in ?? () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #11 0x7f4f77664cb0 in ?? () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #12 0x7f4f7768b477 in ?? () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #13 0x7f4f7769a249 in ?? () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #14 0x7f4f776a9689 in ?? () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #15 0x7f4f776ae451 in sqlite3_step () from
> /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> #16 0x7f4f79416557 in get_down_intervals_size
> #(start_time=1403711040,
> end_time=1577836800,
> size=0x7f3c49c68958) at stats_ext.c:202
> #17 0x00407f80 in ns__get_stats_performance (s=0x1e4cb00,
> start_time=1403711040, end_time=1577836800,
> time_unit=NS_MINUTES, resource_id=0, ret=0x7f3c49c68990) at
> msg_handlers/capi_wrappers.c:636
> #18 0x004461c1 in soap_serve_ns__get_stats_performance
> (soap=0x1e4cb00) at soapServer.c:1294
> #19 0x004415ae in soap_serve_request (soap=0x1e4cb00) at
> soapServer.c:119
> #20 0x00440f3e in soap_serve (soap=0x1e4cb00) at
> #soapServer.c:39 21 0x00403c90 in process_request
> #(s=0x1e4cb00) at be_server.c:111 22 0x7f4f79a3de9a in
> #start_thread () from
> /lib/x86_64-linux-gnu/libpthread.so.0
> #23 0x7f4f78f413fd in clone ()
> #from /lib/x86_64-linux-gnu/libc.so.6 24 0x in ?? ()
> 
> Does anyone has an idea?

Don't know with the information you show. But I suggest you to not use 
sqlite3.so from system and add sqlite3.c and sqlite3.h to your project. 
Therefore you can, among other improvements, build a newer and more error free 
sqlite3 version with debug information on and see what happens between the #8 
sqlite3_free() call and #14 sqlite3_step(). Increase in compile time is minimal 
(less than 2 seconds) in my laptop in a fresh rebuild.

> 
> Thanks!
> 
> Mattan.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Adding WHERE to query --> database disk image is malformed

2014-06-24 Thread Eduardo Morras
On Mon, 23 Jun 2014 20:16:44 -0700
Jerry Krinock <je...@ieee.org> wrote:

> How can it be that adding a WHERE clause to a successful query causes
> ‘database disk image is malformed’?
> 
> My database has one table named `itemTable`.  This table has two
> columns, `key` which is type text and `value` which is type blob.
> There are two rows of data.  Their `value` blobs are actually strings
> encoded as UTF16 little endian.
> 
> The issue is demonstrated in the following transcript, using the
> sqlite command line tool in Mac OS X.
> 
> Air2: jk$ sqlite3 Test.sql 
> SQLite version 3.7.13 2012-07-17 17:46:21
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> SELECT key from itemTable ;
> profileName
> extoreName
> sqlite> SELECT key FROM `itemTable` WHERE `key` = 'profileName' ;
> Error: database disk image is malformed
> sqlite> SELECT * FROM `itemTable` WHERE `key` = 'nonexistent' ;
> Error: database disk image is malformed
> 
> // Same succeed,fail result if I change query to "SELECT *" instead
> of "SELECT key".
> 
> sqlite> SELECT * FROM `itemTable` ;
> profileName|Fooobar
> extoreName|Baah
> sqlite> SELECT * FROM `itemTable` WHERE `key` = 'profileName' ;
> Error: database disk image is malformed
> 
> One thing I find rather surprising is that sqlite seems to know that
> the 14-byte and 24-byte blobs are UTF16-LE encoded strings, and
> prints them as “Fooobar” and “Baah”.
> 
> Is my database OK or malformed?

Did you create the db schema with newer version of sqlite3? Perhaps you did 
that and create a partial index, not supported on older sqlite3 versions.

Please, check index schemas for a where clause in them.

> 
> The same thing happens when I execute the failing query with the
> sqlite3 C Library, using years-old tested code.
> 
> Thank you!
> 
> Jerry Krinock

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What's the purpose of the "automatic index on" warning message?

2014-04-08 Thread Eduardo Morras
On Tue, 08 Apr 2014 15:22:18 +0200
Jens Miltner <j...@mac.com> wrote:

> CREATE INDEX a_idx1 ON a(b_id);
> CREATE INDEX a_idx2 ON a(identifier, b_id);
> 
> both of which could be used according to the JOIN statement and/or
> the CASE statement (if this part would use an index at all).
> 
> 
> I understand it's hard to tell where the problem is without knowing
> the complete query details, but maybe some hint on what would prevent
> the use of a regular index might help pointing me in the right
> direction...

Did you run ANALYZE?

Is b_id table a primary key? If it is then delete it because Sqlite adds 
primary key on indexs by default. Run analyze/reindex after change.

> 
> Thanks,
> -jens
> 
> 


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Improving Bulk Insert Speed (C/C++)

2014-04-02 Thread Eduardo Morras
On Wed, 2 Apr 2014 09:09:58 -0700
Kevin Xu <accol...@gmail.com> wrote:

> 
> On Apr 2, 2014, at 9:01 AM, Simon Slavin <slav...@bigfraud.org> wrote:
> 
> > 
> > On 2 Apr 2014, at 4:55pm, Kevin Xu <accol...@gmail.com> wrote:
> > 
> >> PRAGMA journal_mode=MEMORY;
> > 
> > Since you're not using WAL mode, try it using WAL mode.  If you
> > were already using WAL mode, I'd suggest trying with it off !
> > 
> > Given your answers to the questions so far, I can't think of
> > anything else to try you haven't already thought of.  The result
> > you report don't seem unusual in any way.  I'm not looking at them
> > and thinking you've done anything wrong.
> > 
> > Simon.
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> Yes, I have tried various combinations of WAL/OFF/MEMORY and they do
> not appear to affect the insert speed in any meaningful manner (e.g.
> +/- 0.3s on 500K inputs).
> 
> However, http://blog.quibb.org/2010/08/fast-bulk-inserts-into-sqlite/
> appear to to achieve an average of over 150K inserts per second over
> 4 million rows. I am aware that benchmark results are highly
> dependent on input size, platform, and hardware, I would imagine the
> SSD I am running on would not be such a tight bottleneck. Thus, I am
> trying to see if there is anything I can do to improve on this.

Perhaps he has 2 disks, first for read FASTQ, second for write sqlite db.

> 
> Kevin

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Use of AUTOINCREMENT

2014-03-28 Thread Eduardo Morras
On Fri, 28 Mar 2014 12:56:59 +0100
"Kleiner Werner" <sqliteh...@web.de> wrote:

> 
> 
> Hello
> I saw some discussions here about using the word "AUTOINCREMENT" for
> a primary key or not. 
> Sometimes the meaning is "INTEGER PRIMARY KEY" is enough, because
> this will also make an auto increment id. 
> Can someone give me a clarification if "AUTOINCREMENT" is needed or
> not? In the FAQ I can read this;
> " The new key will be unique over all keys currently in the table,
> but it might overlap with keys that have been previously deleted from
> the table." Does this mean, if there are 10 rows with id 1 -10 and I
> delete row with id 5, then the next insert will be ID = 5? Or is next
> ID = 11? 
> I understand that if I need absolutely unique IDs over the lifetime I
> need to use AUTOINCREMENT. That's correct? And with AUTOINCREMENT the
> last IDs will be stored in a table "sqlite_sequence"?
> [https://3c.web.de/mail/client/dereferrer?redirectUrl=http%3A%2F%2Fsqlite.org%2Ffileformat2.html%23seqtab]
> Are there sow disatvantages of using AUTOINCREMENT , like SQLite is
> slower or something else? best regard

As I'm who said "AUTOINCREMENT" is not needed, I'll answer too.

In your example, if it's defined as INTEGER PRIMARY KEY, next one may be 5, but 
not always. If you add "AUTOINCREMENT" it must be 11. You decide if you need it 
or not based on your use case. Other people add "AUTOINCREMENT" by default in 
all cases.

For example, if you store mails in a table, you want their id with 
"AUTOINCREMENT" because this manner they are always sorted in recepction order. 
If a mail is deleted, a new one will be the max(id)+1, and not between mails 3 
months ago.

But if you store other data and from time to time you truncate data tables and 
rebuild the database, you may want new data begin with 0 instead. Or if you 
need to have a top/last id and all others must be smaller.

Sqlite isn't slower if use "AUTOINCREMENT", if use it, it must query 
sqlite_secuence table, if not, it must search an empty id and both data are on 
memory often.

Summary, it's not safer nor unsafer, faster or slower, it depends on your use 
case.

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Primary key used in compound index

2014-03-26 Thread Eduardo Morras
On Tue, 25 Mar 2014 20:19:40 +
Pavel Vazharov <pa...@anchorworks.com> wrote:

> Hi guys,
> 
> 
> I've the following scenario (I'm presenting a simplified version of
> our scenario, but the main problem remains the same as in this
> scenario):
> 
> 
> CREATE TABLE test_event
> (
> "id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
> "type" INTEGER NOT NULL,
> "some_integer_data" INTEGER NOT NULL,
> "some_string_data" TEXT,
> "deleted" INTEGER NOT NULL,
> major INTEGER NOT NULL DEFAULT 0
> );
> CREATE INDEX IDX_test_event_1 ON test_event (deleted, major, id);

You have defined:

"id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT.

but,

a) as it's a primary key it cannot, never, be null, so "NOT NULL" has no 
meaning.
b) you added "AUTOINCREMENT", but, in Sqlite, if you insert a row whose primary 
key is a integer, as NULL, it's automatically autoincremented. So Autoincrement 
has no meaning neither.

> When I execute:
> 
> 
> explain query plan select * from test_event where deleted = 0 and
> major = 1 and id > 5 order by id asc limit 1;
> 
> It returns:
> 
> 
> ?selectid order from detail
> 0 0 0 SEARCH TABLE test_event USING INDEX IDX_test_event_1 (deleted=?
> AND major=?)
> 
> So I can see that the id is not used, only delete and major part of
> the compound index are used. In our scenario this leads to slow
> selects, really slow.

Yes, you have a "LIMIT 1", so not need to use the last index column because no 
need an additional filter on id.

> 
> I tested adding another column "dup_id" INTEGER NOT NULL, changing
> the index to be on (deleted, major, dup_id) and making all dup_ids
> equals to the corresponding ids. In this case the explain query plan
> 
> explain query plan select * from test_event where deleted = 0 and
> major = 1 and dup_id > 5 order by dup_id asc limit 1;
> 
> returns:
> 
> ?selectid order from detail
> 0 0 0 SEARCH TABLE test_event USING INDEX IDX_test_event_1 (deleted=?
> AND major=? AND dup_id>?)
> 
> And the select statement runs about 100 times faster than the
> previous one.
> 
> 
> My question is: Is there a way the primary key to be used as a part
> of the compound index? Adding additional column works, but it will
> increase the database size and will slow down the insert statements
> because this additional column needs to be updated with trigger or
> with additional query.

In indexs, the primary key is always added as last column. Sqlite uses it for 
retrieve the selected rows data from full table. In your first example, id is 
added twice to the index.

> Thanks in advance,
> Pavel.

L
---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Crash if "sqlite3_db_release_memory" has no memory to release

2014-03-13 Thread Eduardo Morras

Hi, I get app crash if sqlite3_db_release_memory(db) is called and Sqlite3 has 
no memory to release, for example calling it twice. 

The backtrace is:

#0  0x0042ab60 in sqlite3PcacheShrink (pCache=0x6f6c6f63207b2065)
at /usr/home/gowen/clang/scgi/src/sqlite3.c:37835
#1  0x00415f3c in sqlite3PagerShrink (pPager=0x801072308)
at /usr/home/gowen/clang/scgi/src/sqlite3.c:42949
#2  0x00415ee8 in sqlite3_db_release_memory (db=0x801007808)
at /usr/home/gowen/clang/scgi/src/sqlite3.c:120647
#3  0x004ab3e3 in db_get_binary_content (fn=0x7fffcf10 
"background.jpeg", 
size=0x7fffce4c, ret=0x7fffd6e8) at 
/usr/home/gowen/clang/scgi/src/db.c:116

Line 116 on #3 is the call to sqlite3_db_release_memory(db). 

I know it's an internal sqlite3 call and I should use sqlite3_release_memory 
instead.

Versions tried 3.8.3, 3.8.4, 3.8.4.1. amalgamation under FreeBSD 9.2 x86_64. 
using clang 3.4 compiler. 

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Out of memory error for SELECT char();

2014-03-09 Thread Eduardo Morras
On Sat, 8 Mar 2014 14:09:17 -0500
Richard Hipp <d...@sqlite.org> wrote:


> It isn't really running out of memory
> 
> The implementation of char() allocates 4 bytes of output buffer for
> each input character, which is sufficient to hold any valid unicode
> codepoint. But with zero input characters, that means it tries to
> allocate a zero-byte output buffer.  sqlite3_malloc() returns NULL
> when asked to allocate zero bytes, at which point the char()
> implementation thinks that the malloc() failed and reports the
> output-of-memory error.

It's OS dependant. From malloc FreeBSD man page, malloc.conf/_malloc_options, V 
option means:

 V   Attempting to allocate zero bytes will return a NULL pointer
 instead of a valid pointer.  (The default behavior is to make a
 minimal allocation and return a pointer to it.)  This option is
 provided for System V compatibility.  This option is incompatible
 with the ``X'' option.

> 
> The fix is to allocate 4*N+1 bytes instead of 4*N bytes.  Dan is
> checking in the fix even as I type this reply.
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Eduardo Morras
On Fri, 07 Mar 2014 15:39:57 +0100
Clemens Ladisch <clem...@ladisch.de> wrote:

> Actually, no change to SQLite itself would be needed.  It's possible
> to create an extension that provides a function that allows to
> register another function that executes a custom SQL expression:
> 
>  SELECT register_simple_function('rpad', 2, 'SELECT printf
> (''%-*s'', ?, ?)');
> 
> 
> In practice, the biggest problem probably is that SQLite doesn't have
> that many built-in functions; most useful functions would require more
> than that.

So, if a webapp that uses SQLite doesn't check it's input, functions that 
renames SQLite internals can be injected

SELECT register_simple_function('MAX', 1, 'DROP TABLE ?');

No?

> 
> 
> Regards,
> Clemens

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Eduardo Morras
On Thu, 6 Mar 2014 22:41:31 -0500
Gabor Grothendieck <ggrothendi...@gmail.com> wrote:

> On Thu, Mar 6, 2014 at 8:41 PM, RSmith <rsm...@rsweb.co.za> wrote:
> >
> > On 2014/03/07 01:59, Gabor Grothendieck wrote:
> >>
> >>
> >>>
> >>>>
> >>>>> A small enhancement request:
> >>>>>
> >>>>> It would be great if the RPAD and LPAD functions could be
> >>>>> implemented in
> >>>>> sqlite.
> >>>>>
> >>>> The SQLite you can get the effect of RPAD(x,y) using PRINTF
> >>>> ('%-*s',y,x). See
> >>>> http://www.sqlite.org/lang_corefunc.html#printf for details.
> >>>
> >>> Thanks, but you snipped the relevant part of my post:
> >>> "I know I can easily achieve the equivalent ... but if the
> >>> functions were available natively it would avoid the need to hack
> >>> third party SQL scripts."
> >>>
> >> I have also found that it was tedious to retarget MySQL scripts to
> >> SQLite because many of the function calls are different.  Its not
> >> just rpad and lpad but other functions too.
> >
> >
> > Speaking as someone who retargets (nice word btw.) SQL scripts
> > often, yes I agree, it's a bit of a chore to retarget SQL scripts
> > to SQLite sometimes, but not really moreso than retargeting a
> > script from MSSQL to PostGres or
> 
> I have also retargeted MySQL scripts to H2 and it was easier than to
> SQLite. 

Creating extensions in SQLite is not difficult nor hard. You can define yours 
with this line:

 sqlite3_create_function(db, "RPAD", SQLITE_UTF8, 
SQLITE_ANY||SQLITE_DETERMINISTIC, 0, sqlcmd_rpad, 0, 0);

And create your function with:

 static void sqlcmd_rpad(sqlite3_context *context, int argc,sqlite3_value 
**argv){

   // argc has number of parameters in **argv
   // parse them as you do within C main 
   // 
   sqlite3_result_text(context, char_to_return, length_char_to_return, 
SQLITE_TRANSIENT);
 }

Some weeks ago, there was a mail collecting several sites where find usefule 
functions && extensions.

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Solved Weird out of memory problem a prepare

2014-03-05 Thread Eduardo Morras
On Wed, 05 Mar 2014 11:12:45 +0200
RSmith <rsm...@rsweb.co.za> wrote:

> If that query fails in an SQLite tool too, then maybe there is a
> problem, or if you use a custom altered version of the SQLite code.
> Barring that, you need to hunt down the corrupting code - Good luck!

I find it and solve. It's a macro expansion in one function extension that 
collides and redefines a #define in sqlite3.h

Thanks to all.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Weird out of memory problem a prepare

2014-03-04 Thread Eduardo Morras
On Tue, 4 Mar 2014 15:19:24 +
Simon Slavin <slav...@bigfraud.org> wrote:

> 
> On 4 Mar 2014, at 3:15pm, Simon Slavin <slav...@bigfraud.org> wrote:
> 
> > On 4 Mar 2014, at 3:09pm, Eduardo Morras <emorr...@yahoo.es> wrote:
> > 
> >> zSql= "SELECT r.name, s.content FROM resource AS r, static AS s
> >> WHERE (r.ids = s.ids AND r.name = ?);";
> >> 
> >> [snip]
> >> 
> >> Error on query: out of memory
> > 
> > I think this might require comparing every row in resource with
> > every row in static.  Which is a lot of temporary data to hold in
> > memory.
> > 
> > You might try something like
> > 
> > SELECT r.name, s.content FROM resource AS r JOIN static AS s ON
> > s.ids = r.ids  WHERE r.name = ?
> 
> and, of course, an index
> 
> CREATE INDEX i1 on resource (name,ids)
> 
> will make it run extremely quickly.

I'll reply both answers here, if you don't mind.

Thanks Simon, for the answers.

The tables have 4 rows each one, that's why I got suprised with the Out of 
Memory error. The biggest row has 12KB and with the join I do, shouldn't use 
more than 200KB.

Changing the ',' with the join you propose, gives Out of Memory too. It happens 
on prepare phase, before binding the ? with my data. The query didn't reach the 
step call.

Trying simple "SELECT r.name FROM resource AS r WHERE r.name = ?" gets "Out of 
memory" too calling preparev2.

Surely something is rotten on my development platform...

> 
> Simon.

Thanks
---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Weird out of memory problem a prepare

2014-03-04 Thread Eduardo Morras

Hi,

I have this code that fails always with the error output next: 

=
  zSql= "SELECT r.name, s.content FROM resource AS r, static AS s WHERE (r.ids 
= s.ids AND r.name = ?);";

  stmt = NULL;
  rc = sqlite3_prepare_v2(db, zSql, strlen(zSql), , NULL);

  if ( rc != SQLITE_OK ){
dprintf( log, "Error on query: %s\n", sqlite3_errmsg(db));
dprintf( log, "Query : %s\n", zSql);
dprintf( log, " at %s %d:\n", __func__, __LINE__);
exit(1);
=

Error on query: out of memory

Query : SELECT r.nombre, s.content FROM resource AS r, static AS s WHERE (r.ids 
= s.ids AND r.nombre = ?);
 at process_request 66:
=

The tables schema are:
"CREATE TABLE IF NOT EXISTS resource(\n"
"  rid INTEGER PRIMARY KEY NOT NULL,\n"   // Resource ID
"  type INTEGER NOT NULL,\n"  // Type of resource 
(music,video,+18)
"  ids INTEGER,\n"// FK Static Resource ID (if 
applicable)
"  sys INTEGER DEFAULT 0,\n"  // Is a system file (do not 
delete)
"  replicated INTEGER DEFAULT 0,\n"   // Is resource replicated 
somewhere (and safe)
"  nombre TEXT NOT NULL,\n"   // Resource name (filename or 
appropiate)
"  path TEXT\n"   // Path to resource
");

and

"CREATE TABLE IF NOT EXISTS static(\n"
"  ids INTEGER PRIMARY KEY,\n"// Static resource ID
"  desc TEXT,\n"  // Description
"  content BLOB\n"// Main content
");\n"

I don't know why I get an Out of memory preparing the query. Vars are 
initialized, db points to an open sqlite3 db, and stmt is the first time used 
int the code. I use a similar code on other projects abd works with no 
problems. 

Any clue?

Thanks

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Eduardo Morras
On Mon, 03 Mar 2014 17:36:10 +0100
Jean-Christophe Deschamps <j...@antichoc.net> wrote:

> 
> >It's how RAID5 works. Check this page docs http://baarf.com/ about
> >it.
> 
> This is utter BS.

No.
 
> Serious RAID controllers perform parallel I/O on as many drives that 
> are making up a given array. Of course I'm talking of SAS drives here 
> with battery backed-up controller.
> 
> Kid sister RAID5-6 implementations using SATA drives and no dedicated 
> hardware are best avoided and have more drawbacks than are listed in 
> cited prose.
> 
> I run 24/7 an Areca 1882i controller with 6 SAS 15Krpm drives in
> RAID6 and a couple more in RAID1 and I've yet to witness any problem
> whatsoever.

RAID3-4-5 was great when disks were expensive, in 80's an 90's. Now not. A 
minimal RAID5 needs 3 disks. A minimal RAID10 4. An enterprise disk SAS 15Krpm 
146 GB 6G is $350, and a not enterprise grade cheaper and bigger. Now RAID1E 
and RAID10E give more flexibility and variable security, from "paranoid" to "i 
don't care" grades.

When something goes wrong:

RAID 3-4-5-6
When one of your disk brokes, replace it. 
Then rebuild the RAID3-4-5-6. 
You need read from all disks to recover the lost blocks. 
All disks are busy recovering it and your R/W performance drops. 
Recovery reads the same block on and the parity data, makes some computations 
and writes the lost block. 
If any of the RAID disks is near its MTBF and fails, you lost everything.

RAID 10
When one of your disks brokes, replace it.
Then rebuild the RAID10.
You need read from mirror disks to recover lost blocks.
Only the mirror disks are busy recovering and your R/W performance drops only 
when accessing data in those disks.
Recovery reads the same block and directly writes lost block.
If all disks that mirrors to broken one are near its MTBF and fail, you lost 
everything.

The time to recover a RAID 10 is less (lot less) than recreating a RAID3-4-5-6.

> It's just like talking BS on a language because of some obscure bug
> in a non-conformant compiler. 

No, it's talking BS on language that is bad designed for your actual needs and 
no matter which compiler you use because is not an implementation problem. 

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Eduardo Morras
On Mon, 3 Mar 2014 11:00:47 +
Simon Slavin <slav...@bigfraud.org> wrote:

> What the heck ?  Is this a particular implementation of RAID or a
> conceptual problem with how RAID is designed to work ?  It sounds
> like a bug in one particular model rather than a general problem with
> how RAID works.

It's how RAID5 works. Check this page docs http://baarf.com/ about it. 

> 
> Simon.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Network Storage

2014-02-24 Thread Eduardo Morras
On Mon, 24 Feb 2014 13:54:39 +0100
Richard Schülein <r...@noveltech.de> wrote:

> So that means, if there is only 1 machine accessing the network database,
> there should be no problem with the network access and the locking?

I ended using only one sqlite full accessing the db. All others submit requests 
to it via nanomsg.

> 
> Best regards/Mit freundlichen Grüßen

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] how to detect database change made by another application

2013-12-14 Thread Eduardo Morras
On Fri, 13 Dec 2013 23:19:55 + (UTC)
Will Parsons <varro@nodomain.invalid> wrote:

> I have two applications that access the same database and that can run
> at the same time.  If changes are made to the database by one
> application, I would like the other application to update its display
> to reflect the change.  The total_changes() function only works from a
> single connexion so doesn't help here, and it appears there is no API
> to access to the file change counter that's kept in an SQLite3 database.
> 
> Any suggestions on how to handle this?  In case it's relevant, the
> *only* purpose of the 2nd application's being able to detect a change
> is so that it can update its own display, and I will be using the Ruby
> bindings to SQLite3 in both programs.

There are many ways to do that, for example using pragma user_version. Each 
time you commit a change, upgrade the user_version and when you want to check 
if the other app has made a change check if it's equal to your user_version 
value.

You can do similar with a simple table with a single column row, every 
modification ends with updating the table row value.

I use user_version for other things, I put there the version of my app/db, but 
it can fit your needs.

> 
> -- 
> Will
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite scalability

2013-11-22 Thread Eduardo Morras
On Thu, 21 Nov 2013 11:43:32 +0200
Baruch Burstein <bmburst...@gmail.com> wrote:

> Hi all,
> 
> I know SQLite is supposed to support DB sizes in the TB (I think the
> default configuration can reach 1TB). I am curious if anyone actually uses
> SQlite at anywhere near this. Does anyone use it regularly for DBs 500GB+
> in size, or with tables containing 10 billion rows+? How much concurrency
> does your use require? How long do selects take (assuming indexes are set
> correctly?) Are there problems of locking ("normal" SQLite doesn't usually
> suffer from locking since transactions are very quick, but if transactions
> can be in the order of 100's of ms, I think locking can easily happen if
> the DB is accessed concurrently a few times a second, though I am not sure
> if this may only apply to writes).
> 
> I understand that the answer to most of these questions can be very
> hardware (and software) dependent, but I am just trying to get a feel for
> SQLite's applicability for a project I am working on that may reach limits
> like these.

Depends on what type/kind of use you need for your data. If you are going to do 
a lot insert, update, delete, perhaps sqlite isn't for you. 

If it's principal use is for select, when populate the tables, do it pre-sorted 
by the data primary key or by the colum which makes a better quality index for 
your use (look for 'low quality indexes' in sqlite docs, and do the opposite).

Normalize your db as much as you can, but not more ;) .

Sqlite allows you to attach up to 30 db files. Split your data between 2-3 db 
and put each one in different disks.

Increase cache size before creating any table, other RDBMS uses a lot of memory 
from several MBs to GBs, give Sqlite cache 500MB-1GB for example.

Write your queries in different ways and test which is better, there are some 
tricks with indexes, joins..., that can help you (study documentation)

Set STATS3 or 4 for analyze your db data before creating any table.

Set autovacuum full before creating any table, even if you don't plan to 
delete/update data. If I Remember Correctly (if not correct me please), 
autovacuum adds metadata to db file that allows Sqlite engine do some internal 
works faster.

> 
> Thanks,
> Baruch
> 
> -- 
> ˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite clusters?

2013-09-17 Thread Eduardo Morras
On Tue, 17 Sep 2013 22:19:57 +0200
Paolo Bolzoni <paolo.bolzoni.br...@gmail.com> wrote:

> > We have an app that uses SQLite, running in 4 servers, with 0mq (nanomsg 
> > soon) to get locking exclusive on writes on all databases and pass data 
> > sending raw sql inserts, updates and deletes. We don't have lot of nor big 
> > writes, never use triggers that modify data or calculate secundary data nor 
> > call now() on sql. We have not tried to run it on more servers or with 
> > bigger write load.
> Sorry for the out topic, but why you want to leave 0mq? We always
> found it great...

Principal differences that convinced us to change:

0mq is LGPL C++, nanomsg MIT/X11 C.
0mq is huge, nanomsg is light.

not everything is good, 0mq is mature while nanomsg is under active development 
and is incomplete but with time...

Martin Sústrik was 0mq architect/project developer/man behind the throne, and 
now is nanomsg architect/project developer/man behind the throne. 

http://zeromq.org/0mq
http://nanomsg.org/   nanomsg
http://250bpm.com/    Martin Sústrik Home Page

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite clusters?

2013-09-17 Thread Eduardo Morras
On Tue, 17 Sep 2013 06:58:06 -0700 (PDT)
Jason H <scorp...@yahoo.com> wrote:

> That's the whole point of using SQLite, it's not 'big iron' - it's a bunch of 
> iron filings working together.

You can get a nice surprise with SQLite when you feed it with the same RAM 
amount you put on MySQL server. SQLite usually gets worse benchmarks because it 
is configured with 2MB of ram and compare it with a MySQL with full server 
memory (4-6GB?)

> I'm just suggesting the amount of work to get such a thing going is not that 
> much work, but I wanted to float it here to see if there were any good 
> reasons why it was a bad idea. :-)
> 

We have an app that uses SQLite, running in 4 servers, with 0mq (nanomsg soon) 
to get locking exclusive on writes on all databases and pass data sending raw 
sql inserts, updates and deletes. We don't have lot of nor big writes, never 
use triggers that modify data or calculate secundary data nor call now() on 
sql. We have not tried to run it on more servers or with bigger write load.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite clusters?

2013-09-17 Thread Eduardo Morras
On Mon, 16 Sep 2013 14:04:13 -0700 (PDT)
Jason H <scorp...@yahoo.com> wrote:

> I'm transitioning my job from embedded space to Hadoop space. I was wondering 
> if it is possible to come up with a SQLite cluster adaptation.
> 
> I will give you a crash course in hadoop. Basically we get a very large CSV, 
> which is chopped up into 64MB chunks, and distributed to a number of nodes. 
> The file is actually replicated 2 times for a total of 3 copies of all chunks 
> on the cluster (no chunk is repeatedly stored on the same node). Then 
> MapReduce logic is run, and the results are combined. Instrumental to this is 
> the keys are returned in sorted order.
> 
> All of this is done in java (70% slower than C, on average, and with some 
> non-trivial start-up cost). Everyone is clamoring for SQL to be run on the 
> nodes. Hive attempts to leverage SQL, and is successful to some degree. But 
> being able to use Full SQL would be a huge improvement. Akin to Hadoop is 
> HBase 
> 
> HBase is similar with Hadoop, but it approaches things in a more conventional 
> columnar format It a copy of "BigTable" form google.. Here, the notion of 
> "column families" is important because column families are files. A row is 
> made up of keys, at leas one column family. There is an implied join between 
> the key, and each column family. As the table is viewed though, it is void as 
> a join between the key and all column families. What denotes a column family 
> (cf) is not specified, however the idea is to group columns into cfs by 
> usage. That is cf1 is your most commonly needed data, and cfN is the least 
> often needed.
> 
> HBase is queried by a specialized API. This API is written to work over very 
> large datasets, working directly with the data. However not all uses of HBase 
> need this. The majority of queries are distributed just because they are over 
> a huge dataset, with a modest amount of rows returned. Distribution allows 
> for much more paralleled disk reading.  For this case, a SQLite cluster makes 
> perfect sense. 
> 
> Mapping all of this to SQLite, I could see a bit of work could go a long way. 
> Column families can be implemented as separate files, which are ATTACHed and 
> joined as needed. The most complicated operation is a join, where we have to 
> coordinate the list of distinct values of the join to all other notes, for 
> join matching. We then have to move all of that data to the same node for the 
> join. 
> 
> The non-data input is a traditional SQL statement, but we will have to parse 
> and restructure the statement to join for the needed column families. Also 
> needed is a way to ship a row to another server for processing. 
> 
> I'm just putting this out there as me thinking out loud. I wonder how it 
> would turn out. Comments?

If you want nosql, look at hypertable. It's c++, faster (on our environment) 
than Hadoop/HBase and uses hql, similar to sql. It uses the same filesystem 
Hadoop uses.

About a cluster of sqlite, it depends on what you need, distribute data rows on 
all sqlite cluster or all sqlite dbs has the same rows (like raid0 and raid1).

The problem you describe with distribute data rows on all sqlite cluster 
(raid0) could be minimized applying the where conditions to tables before the 
join, except those ones that uses more than one joining table to evaluate. Then 
move, as you say, data to same node, for example the one with a LEFT table and 
more rows.

The other manner (raid1), having all sqlite servers the same data is easier and 
is done with virtual tables and nanomsg/0mq to send locks on write (only one 
server can write to all) and data. But if you need it, PostgreSQL don't need 
those tricks.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-03 Thread Eduardo Morras
On Mon, 2 Sep 2013 13:48:02 +0100
Simon Slavin <slav...@bigfraud.org> wrote:

> 
> On 2 Sep 2013, at 8:25am, Eduardo Morras <emorr...@yahoo.es> wrote:
> 
> > Or create the index with collate
> > 
> > CREATE INDEX idx_collated_column ON myTable ( column COLLATE NOCASE )
> 
> The problem with doing it in the index is that it's hard to predict when 
> SQLite will use a particular index. Better to think about the nature of your 
> data when you create the table.

But if you define it at table/column level, the collate will be applied to all 
operations that use the column, like group by or may altere other indexes 
(making "aAa" and "aaa" equal column has less distinct values and index stats 
aren't real ). Don't know if column collation overrides index collation or 
viceversa.

I see it as one of the reduced list of cases where developer should send hints 
or suggestions (mafia style) to sqlite to use a defined index, if more than one 
is defined on same columns with different collation.

> 
> On the other hand if you have already created your table and have lots of 
> foreign keys it can be a pain to try to reconstruct your data by DROPping and 
> recreating tables.  It may be easier just to add a new index.
> 
> Simon.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What am I missing when recovering a hot journal?

2013-09-03 Thread Eduardo Morras
On Mon, 2 Sep 2013 21:45:04 -0700
Michael Foss <foss.m...@gmail.com> wrote:

> You are right that new data is usually more than a single INSERT. I will
> try out your suggestion of opening and closing the database file outside of
> each transaction.
> 
> I am also going to look into WAL journalling mode since I am only writing
> data to the disk in the mode that my embedded system operates. I am not yet
> sure whether it is a better choice with respect to random power loss.

If you switch to wal mode, setting pragma wal_autocheckpointing to low value 
may help you. Default is 1000, in a similar environment (embedded device that 
reads data from external analogic devices) we setted it to 1.

> ~Mike Foss

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-02 Thread Eduardo Morras
On Mon, 2 Sep 2013 02:12:05 +0100
Simon Slavin <slav...@bigfraud.org> wrote:

> 
> On 2 Sep 2013, at 2:03am, Joseph L. Casale <jcas...@activenetwerx.com> wrote:
> 
> > I am using LIKE as the columns are indexed NOCASE and I need the
> > comparison case insensitive.
> 
> Have you tried using '=' ?
> 
> Also if you declare the columns as COLLATE NOCASE in your table definition, 
> then using '=' will definitely work the way you want it to.  An example would 
> be
> 
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)

Or create the index with collate

CREATE INDEX idx_collated_column ON myTable ( column COLLATE NOCASE )

> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Where Clause

2013-08-02 Thread Eduardo Morras
On Fri, 2 Aug 2013 14:04:40 +0530
techi eth <techi...@gmail.com> wrote:

> At the end of execution I am expecting SET will change the value but it
> actually doesn't SET the value due to wrong condition.
> 
> I will check return from sqlite3 & due to success return I am in wrong
> assumption.


You must separate errors calling sqlite3 lib and errors in sql. You get 
SQLITE_OK because sqlite3 lib executed your sql query sucessfully. If your sql 
where malformed, sintactically incorrect, etc... there's an error at sql level, 
not library and get an SQLITE_ERROR Sql error or unkown database (not database 
file, but database name in sql).

In this case, your sql code is correct, and when you execute it, there's no sql 
error nor library error. You can use sqlite3_update_hook(...) call to hook a 
callback code to count how many rows where affected on updating. It's 10-15 
lines of code + comments. Here's a pointer to it 
http://www.sqlite.org/capi3ref.html#sqlite3_update_hook , remember to free/fry 
it after use ;)

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Memory usage of sqlite3

2013-07-17 Thread Eduardo Morras
On Wed, 17 Jul 2013 12:04:52 +0200
Paolo Bolzoni <paolo.bolzoni.br...@gmail.com> wrote:

> On Tue, Jul 16, 2013 at 8:13 PM, Eduardo <emorr...@yahoo.es> wrote:
> 
> > Can you show us the query and/or schemas? If not:
> Sure, I appended everything in the bottom of this email.
> Unfortunately gmail will mess-up the layout, I hope it will be
> readable.
> 
> (See here, it seems google does not know the mean of "should")
> https://groups.google.com/forum/#!topic/google-mail-problem-solving-uk/p8KyYZR2e04
> 
> > a) Does it JOIN multiple big tables (in rows and/or columns)?
> No, but I got a sub query is probably the culprit.
> 
> > e) Did you normalize the database?
> Should be, yes.
> 
> > In both cases (if you can show us the query/schema or not) what do you 
> > really want to ask to the database? (not to us but to sqlite3, perhaps the 
> > query can be reformulated)
> 
> 
> Here is everything... I think the problem is shown in the:
> Explain: 0 0 0 SCAN TABLE tour AS t1 (~100 rows)
> that is part of a sub-query.
> 
> 
> -- First statement (sets the pragmas):
> 
> PRAGMA foreign_keys = ON;
> PRAGMA synchronous = OFF;
> PRAGMA journal_mode = MEMORY;
> PRAGMA cache_size = -10240;
> PRAGMA auto_vacuum = NONE;
> PRAGMA locking_mode = EXCLUSIVE;
> PRAGMA secure_delete = OFF;
> PRAGMA temp_store = 0;
> 
> -- Second statement (creates/clean the schema):
> BEGIN TRANSACTION;
> DROP TABLE IF EXISTS tour;
> CREATE TABLE tour (id integer,
>   score integer NOT NULL,
>   cost integer NOT NULL,
>   last_poi integer,
>   unsorted_path blob,
>   unsorted_path_tmp blob,
>   PRIMARY KEY(id));
> 

Some changes, if blob is bigger than a few bytes, you should normalize them. If 
2 blobs are equal, their id must be equal and you don't waste time comparing 
nor memory joining blob content. So you get:

 DROP TABLE IF EXISTS tour_blob;
 CREATE TABLE tour_blob (id INTEGER PRIMARY KEY,
   n_blob blob);

 DROP TABLE IF EXISTS tour;
 CREATE TABLE tour (id integer,
   score integer NOT NULL,
   cost integer NOT NULL,
   last_poi integer,
   FOREIGN KEY (unsorted_path) REFERENCES tour_blob(id) ON 
DELETE CASACADE,
   FOREIGN KEY (unsorted_path_tmp) REFERENCES tour_blob(id) ON 
DELETE CASACADE,
   PRIMARY KEY(id));

You can replace tour_unsorted_path_idx with a new index too:

> DROP INDEX IF EXISTS tour_unsorted_path_idx;
> CREATE INDEX tour_unsorted_path_idx ON tour (unsorted_path);

 DROP INDEX IF EXISTS tour_unsorted_path_last_poi_cost_idx;
 CREATE INDEX tour_unsorted_path_last_poi_cost_idx ON tour (unsorted_path, 
last_poi, cost);

Take care and make cost the last one, because cost is compared with inequality.

You use '==' instead '=', take care too. I made the same error in a mail some 
weeks ago.

For this query:

> 
> SQL STATEMENT: SELECT id FROM tour ORDER BY id LIMIT ?1
>  8< -
> Query:   SELECT id FROM tour ORDER BY id LIMIT ?1
> Explain: 0 0 0 SCAN TABLE tour USING INTEGER PRIMARY KEY (~100 rows)
>  - >8

I don't know why it doesn't use the primary index. Perhaps analyze statistics 
before solves the problem.

HTH
---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Memory usage of sqlite3

2013-07-15 Thread Eduardo Morras
On Mon, 15 Jul 2013 20:49:52 +0200
Paolo Bolzoni <paolo.bolzoni.br...@gmail.com> wrote:

> From 35-40MB to 940MB; I would put massif result but I think the
> list deletes attachments.

What does PRAGMA temp_store show? Set it to 0 and recheck. Did you compile with 
SQLITE_TEMP_STORE set to 3?


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Connection between SQLite - Other Databases

2013-06-20 Thread Eduardo Morras
On Thu, 20 Jun 2013 04:34:19 +
"Reddy C. balaji" <balaj...@spanservices.com> wrote:

> Hi All,
> Please let me know the options available to connect to other 
> databases ( Oracle, MSSQL , MySQL etc ) from SQLite. For example, to connect 
> from Oracle to other databases DBLink can be used. In the same way, from 
> MSSQL , Linked Server can be used to connect to other databases.

In postgres you can use a fdw with sqlite in it and access directly to sqlite 
dbs. There's also SQL/MED standard to interconnect ddbbmmss.
 
> In the same way are there any features available in SQLite ??

Use csv.
 
> Thank you
> Reddy Balaji C.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] select count(*) does not use primary key??

2013-06-07 Thread Eduardo Morras
On Fri, 07 Jun 2013 13:12:14 +0200
Clemens Ladisch <clem...@ladisch.de> wrote:

> Eduardo Morras wrote:
> > where t.a = NULL
> 
>   where t.a IS NULL
> 
> (NULL compares as not equal to any value, including itself.)

OPppss you're right. Thought too fast and wrote even faster :(

> 
> Regards,
> Clemens

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] select count(*) does not use primary key??

2013-06-07 Thread Eduardo Morras
On Thu, 6 Jun 2013 10:53:55 -0400
Richard Hipp <d...@sqlite.org> wrote:

> On Thu, Jun 6, 2013 at 10:52 AM, Gabriel Corneanu <gabrielcorne...@gmail.com
> > wrote:
> 
> > Strange is, count(*) uses the cover index for a but "select count(a)" does
> > NOT use the same cover index...
> >
> 
> count(a) has to check for NULL values of a, which are not counted.
> count(*) does not.

If I understand well , select count(a) from t = (select count(*) from t) - 
(select count(*) from t where t.a = NULL) and both selects will use cover 
indexs, doesn't it? 


> -- 
> D. Richard Hipp
> d...@sqlite.org

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Header Application-ID list

2013-06-07 Thread Eduardo Morras
On Thu, 6 Jun 2013 08:15:57 -0400
Richard Hipp <d...@sqlite.org> wrote:

> On Thu, Jun 6, 2013 at 8:05 AM, Eduardo Morras <emorr...@yahoo.es> wrote:
> 
> >
> > Hi,
> >
> > Is there an official list of assigned application id sqlite header? If
> > exist, How can I register my application-id?
> >
> 
> The official list is here:  www.sqlite.org/src/artifact/f2b23a6bde8f
> 
> Send a request to this mailing list to add new items to the official list.

Thanks, I'll wait a bit until decide what hex describe better the app.
 
> Ideally, this content would be picked up by unix "file" command and be
> distributed to all unix systems.  However, my repeated emails to the
> maintainer Christos Zoulas about this have gone unanswered.  So for now the
> unix "file" command won't recognize the app-id unless you configure it
> yourself.

Then the magic.txt file should have more 'advertising campaign'. A d/l link 
www.sqlite.org front page, distribute within amalgamation and/or installation 
inside port/package/pkgsrc/rpm/your_linux_install_format as sqlite3 man pages 
do.

About Microsoft Windows, there's file in cygwin. But perhaps it's a MS problem 
not have a similar tool. I remember that in MacOS 6 (1988), perhaps earlier 
versions, you must register 4bytes for developer and 4 bytes for application 
(negative values preassigned for Apple use only) so system can identify file 
types, group data files with apps and developers.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Header Application-ID list

2013-06-06 Thread Eduardo Morras

Hi,

Is there an official list of assigned application id sqlite header? If exist, 
How can I register my application-id?

Thanks

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to read log file format into sqlite database?

2013-05-06 Thread Eduardo Morras
On Mon, 6 May 2013 01:32:18 -0700 (PDT)
Newbie89 <sh_ta...@hotmail.com> wrote:

> Can you show me a simple tutorial?urgent...please...
> Is it the library u create I need to include only can function?

http://www.cprogramming.com/

You should run this code

while(!understand()){
  yourself = Read(SourceCode, Books, Documentation);
  yourself = TrytoUse(SourceCode, Snippets);
  yourself = Learn(, C);
}

Where yourself is a pointer to you.

Sorry for being so rude, your questions are not about sqlite, but about C basic 
programming.

If someone else has a better answer, it will welcome.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance HELP

2013-04-30 Thread Eduardo Morras
On Mon, 29 Apr 2013 11:08:48 -0400
"peter korinis" <kori...@earthlink.net> wrote:

> This question regards SQLite (3.6) performance. (Lengthy because I want to
> describe the environment.)
> 
> . Win-7 (64-bit, though I don't know if SQLite uses 64-bit). 
> 
> . 3 year old HP laptop with Intel Core Duo CPU P8400 @ 2.27GHz with
> only 4GB memory
> 
> . 286GB HD (50% full) + 1TB 7200rpm external eSata HD (90% free) -
> contains target SQLite DB.
> 
> . Target DB is single user, read-only, static . as follows
> 
> o   DB size = 24GB: in 8 tables. 2 most used tables contain 43million rows
> and 86 million rows - others are small look-up tables with 50-10,000 rows.
> 1-3 indices per table.
> 
> o   imported from 44GB CSV file with 45million rows of 600 columns
> 
> o   Used for: SQL query only (CLI or GUI), no updates, no deletes, no
> inserts, or no transactions, no apps.
> 
> . Future: Double the size . intend to clone existing DB and populate
> with another year's data. Additional years will likely be added at later
> time.
> 
> Now to the questions:
> 
> 1.   Is my current DB too large for SQLite to handle efficiently? I just
> read in O'Reilly, Using SQLite, book, "If you need to store and process
> several gigabytes or more of data, it might be wise to consider a more
> performance-oriented product."

Nowadays this decission is taken (in my opinion) if the database needs to feed 
a lot of users. The data size is not a priority.

> 
> 2.   Adding which resources would most improve performance ???  (Many
> queries, like a SELECT COUNT (and GROUP) of indexed column on a JOIN of the
> 2 large tables may take 30-60 or more minutes.)

Split data in 2 or more databases and attach them. They can reside on different 
disks. You can partition tables too, by date for example and use UNION on the 
queries.

You can build a temporal table with queries and its results or calculated 
counts. Do a select in this temp table searching for previous queries. You say 
that data is read only, it should work.

> 
> a.   add 4GB or 12GB more memory ?
>
> b.  get faster HD for target DB . ext. eSATA SSD (say 256GB) ?
> 
> c.   get faster ext. eSATA 10,000rpm HD for DB ?
> 
> d.  make performance changes to current DB settings ? (or learn to write
> better SQL ??)

Write better sql has two sides. It's about write better selects and find a 
better schema.
 
> e.  convert from SQLite to MySQL or something else? I like the
> simplicity and low admin or SQLite - so I prefer not to switch DBMS

If i need to upgrade dbms, i use Postgres. But i don't think you need it.

> f.Something else ?

Reread Simon's email and sail the documentation and test.

> Thank you so much for your help.
> 
> peterK

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fw: Fw: Is there a command to lock a database for a short while (0.25 sec, say)

2013-04-24 Thread Eduardo Morras
On Wed, 24 Apr 2013 16:35:21 +0200
"J Trahair" <j.trah...@foreversoftware.co.uk> wrote:

> I've found that either one of these will lock the database:
> 
> mstrSQL = "PRAGMA locking_mode = EXCLUSIVE;BEGIN EXCLUSIVE;"
> mstrSQL = "PRAGMA locking_mode = EXCLUSIVE;BEGIN 
> EXCLUSIVE;COMMIT;"
> 
> and that closing the connection a little while later (eg. 0.25sec)
> 
> mcmd.Connection.Close() 
> 
> releases it. I read in one place that you need a transaction eg. SELECT * 
> FROM Utilities WHERE RecNo = 1   for locking to occur, but this database 
> locks without one.

You are getting the lock twice, one with the pragma, that lock whole db, second 
wih the begin exclusive.

If the database is in wal mode, it needs a read or a write access to get lock. 
Surely your db isn't in wal mode. You can check it calling the correct pragma.

> 
> Jonathan
> ---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fw: Is there a command to lock a database for a short while (0.25 sec, say)

2013-04-24 Thread Eduardo Morras
On Wed, 24 Apr 2013 11:19:32 +0200
"J Trahair" <j.trah...@foreversoftware.co.uk> wrote:

> Thanks for the replies so far.
> 
> I've been trying to get the database to lock - I am using a program I 
> developed which runs on 2 of my computers but connected to the same SQLite 
> database:
> 
> mstrSQL = "PRAGMA locking_mode = EXCLUSIVE;BEGIN EXCLUSIVE;UPDATE 
> Utilities SET OwnersName = '" & Date.Now & "' WHERE RecNo = 1;COMMIT;"

I found, by bad experiences, that it's better to have a program/process that 
owns the database and others send petitions to it, something like a 
client-server, than have an always growing number of process/programs that 
works with the same db file. If some of them are remote process, its even worse.

You can think in a kitchen were lot of cookers work on one meal, using one 
knive, one fire, one pot, one... sooner or later the chaos arrives and the meal 
is carbonized.

> 
> On running this code (VB.Net), the record updates successfully, the message 
> 'Done!" displays, no error message  - but the same program on another 
> computer pointing to the same database across the network is still able to 
> open the database and run the program.
> 

Do you close the db connection after the commit? Or reset locking mode to 
normal? See recent posts about locking db files using nfs.


> I have missed something - but what is it? Thanks for your help.
> 
> Jonathan
> 
> 
>   On 17 April 2013 14:39, J Trahair <j.trah...@foreversoftware.co.uk> wrote:
>   > Hi everyone
>   >
>   > Can my application lock the database for its own exclusive use at the 
> beginning of a series of INSERTs UPDATEs and SELECTs, then unlock it again 
> afterwards? I've read about the five locking states (unlocked, shared, etc.) 
> but I assume the dll handles all that at INSERT or UPDATE time. I'm looking 
> for a way to ensure other users aren't allowed to write to the database 
> during this quarter-second period.
> 
>   http://www.sqlite.org/lang_transaction.html
>   BEGIN EXCLUSIVE
> 
>   >
>   > Thanks in advance.
>   >
>   > Jonathan Trahair
> 
>   Regards,
>   Simon
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-04-02 Thread Eduardo Morras
On Fri, 29 Mar 2013 15:17:52 -0400
Jeff Archer <jsarc...@nanotronicsimaging.com> wrote:

> I have previously made an apparently bad assumption about this so now I
> would like to go back to the beginning of the problem and ask the most
> basic question first without any preconceived ideas.
> 
> This use case is from an image processing application.  I have a large
> amount of intermediate data (way exceeds physical memory on my 24GB
> machine).  So, I need to store it temporarily on disk until getting to next
> phase of processing.  I am planning to use a large SSD dedicated to holding
> this temporary data.  I do not need any recoverability in case of hardware,
> power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
> 2 variable sized BLOBS which are images.
> 
> I could write directly to a file myself.  But I would need to provide some
> minimal indexing, some amount of housekeeping to manage variable
> sized BLOBS and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.
> 
> So, then I though that SQLite could manage these things nicely for me so
> that I don't have to write and debug indexing and housekeeping code that
> already exists in SQLite.
> 
> So, question is:  What is the way to get the fastest possible performance
> from SQLite when I am willing to give up all recoverability guarantees?
> Or, is it simple that I should just write directly to file myself?

Piping through gzip -6 or xz -2 will minimize the bytes to write. If you are
working with 5D images, xz (a 7-zip LZMA fork) will do best.

For processing you do zcat file | processing_application, or xzcat file |
processing_application.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] :memory: performance in sqlite 3.6

2013-02-22 Thread Eduardo Morras
On Wed, 20 Feb 2013 12:06:17 -0600
Seebs <se...@seebs.net> wrote:

> On 20 Feb 2013, at 11:47, Simon Slavin wrote:
> 
> > On 20 Feb 2013, at 5:32pm, Seebs <se...@seebs.net> wrote:
> >
> >> First: The SQL is completely trivial.
> >> Second: I am not having performance problems with sqlite, I am having 
> >> performance problems with :memory:. Performance on files is lovely.
> >
> > This normally means that your computer has very little actual memory 
> > to devote to 'memory' uses.  It is using a lot of swap space (or 
> > paging, or virtual memory, whatever you want to call it).  So when you 
> > think you are doing calls which reference ':memory:' it actually has 
> > to fetch and write those parts of memory to disk.
> 
> I am pretty sure that's not it.
> 
> I know I'm a newbie on the list, but please imagine for the sake of 
> argument that I am a basically competent programmer with enough sysadmin 
> background to be aware of the obvious problems.
> 
> In this case, I tested this pretty carefully over a period of about six 
> hours of testing across multiple machines. I was watching CPU load, 
> memory load, and all that stuff. And this is a machine with >4GB of 
> *free* memory -- that's over and above even the couple GB of disk cache 
> being used.
> 
> Process memory size is not appreciably different between sqlite 3.6 and 
> 3.7, or between page size of 1024 or 8192. Runtime is massively 
> different. I am pretty sure this is an actual computation-time issue, 
> and my intuition is that it's quite possible there's at least some other 
> performance issues lurking, because it appears that :memory: *used to 
> be* dramatically faster than disk, but something changed in the last 
> couple of years.


The only thing I can think of that explains it is a compilation option, specifically
SQLITE_TEMP_STORE=0. If you do PRAGMA temp_store=2, does it work better?


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] :memory: performance in sqlite 3.6

2013-02-20 Thread Eduardo Morras
On Tue, 19 Feb 2013 16:05:14 -0600
Seebs <se...@seebs.net> wrote:

> I'm afraid I haven't got a nicely isolated reproducer for this.
> 
> I maintain a package, called pseudo, which includes a server built 
> around an sqlite database. In the process of debugging some performance 
> issues, I did some casual benchmarking. One of the first things I tried 
> was an in-memory database. Now, as has been observed, this is not 
> necessarily expected to be dramatically faster than an on-disk database.
> 
> What I observed was a very, very, large slowdown. Time for an overall 
> task relying on the database was increased by a factor of 2-3 -- and the 
> database code is not that significant a part of the runtime, usually. I 
> used the sqlite3_profile() and observed that the sum of reported 
> processing time from that was within a few percent of the total increase 
> in execution time, which is at least suspicious.

Execution time doing what? Waiting for I/O? How do you measure execution time?
What SQL are you running?

> I did a bunch of testing trying to figure out more about this (and many 
> thanks to the friendly folks in #sqlite IRC who helped suggest some).
> 
> First: No, not swapping or paging. We're talking 10MB of database 
> against 12GB of RAM with several GB free. The database on disk was 
> running synchronous = OFF, so I wasn't necessarily expecting huge 
> improvements.

Don't run with synchronous off; it only calms the symptom, doesn't cure the
problem, and can mask the real one.

> In all cases, I was running against brand-new freshly created databases, 
> whether in memory or on disk.
> 
> What I found:

> 4. It scales roughly with database size; at 28,000 rows, it's quite 
> noticeable, and at 84,000 it's painful.

Are you using a join? 

> I did find one thing that made me at least a little suspicious even in 
> 3.7 (specifically 3.7.15.2). In sqlite3PagerMovepage, there's a comment 
> right up at the top about journaling the page we're moving from, so 
> there's a call to sqlite3PagerWrite() if MEMDB. There's no check for 
> journaling mode, and it seems to me that if journaling is off, this 
> shouldn't be needed.

I remember a previous discussion about this topic.

> But that's not nearly enough to explain this.
> 
> Admittedly, a performance issue which seems mostly fixed in 3.7 is 
> probably a lowish priority. What concerns me is that it seems to me that 
> the performance of :memory: may have taken a severe hit at some point, 
> leading to a flood of internet forum posts, stackoverflow questions, and 
> the like about poor performance of :memory:. Since sqlite is so fast to 
> begin with, this may not have gotten noticed.

I don't use :memory: dbs; when I need something like one, I use a normal db with a
page cache 10-20% bigger than the file size. In rare use cases, I use a RAM disk and
backup().
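For reference, a minimal sketch of the backup() route (untested; "data.db" is a placeholder file name), copying a file db into a :memory: handle:

#include <sqlite3.h>

int main(void) {
    sqlite3 *file_db, *mem_db;
    sqlite3_backup *b;
    if (sqlite3_open("data.db", &file_db) != SQLITE_OK) return 1;
    if (sqlite3_open(":memory:", &mem_db) != SQLITE_OK) return 1;
    /* copy every page of the file db into the in-memory db */
    b = sqlite3_backup_init(mem_db, "main", file_db, "main");
    if (b) {
        sqlite3_backup_step(b, -1);   /* -1 = copy all pages in one pass */
        sqlite3_backup_finish(b);
    }
    /* ... run queries against mem_db here ... */
    sqlite3_close(file_db);
    sqlite3_close(mem_db);
    return 0;
}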

> The test case I was using was pseudo version 1.4.5, on Linux hosts, 
> using the pseudo wrapper to untar a 28,000 file tarball. 

Surely I've misinterpreted it, but is the sqlite db in a directory with 28000
files? Each time a journal or temporary file is created, modified and deleted,
and the main db file is modified, the directory entry must be updated, and with
28000 files that's a very slow process.

Please, post an example of your sql, perhaps it can be tuned for sqlite.

> I am not sure 
> how easy or hard it would be to duplicate this with a simpler test case, 
> and won't have time to look more closely for a while, if ever. I'm 
> passing this on in case this rings a bell for someone, and to have it in 
> the archives if someone else comes looking.
> 
> -s


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite 4

2013-02-13 Thread Eduardo Morras
On Tue, 12 Feb 2013 10:26:09 -0600
"Marc L. Allen" <mlal...@outsitenetworks.com> wrote:

> I'm sorry if this isn't the right group, but an earlier message mentioned it, 
> and I found some stuff on the SQLite website.
> 

My answer may be a bit off topic, but if you search the subdomains under
sqlite.org you can find more stuff, like unql.sqlite.org.

Go here
http://www.magic-net.info/dns-and-ip-tools.dnslookup?subd=sqlite.org+subdomains=Find+subdomains
and check the other subdomains; some work, others don't.

> Although I've had a long-standing project to incorporate SQLite into our 
> product (and have done so), it has never been deployed.  I like the looks of 
> SQLite 4, but have not been able to find any proposed release schedules.
> 
> Can someone point me to an appropriate page or let me know if an official 
> release is planned anytime soon?  Our platform is proprietary, so I'll need 
> to be incorporating source, not binaries.
> 
> Thanks,
> 
> Marc
> ---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Strange eviction from Linux page cache

2013-02-07 Thread Eduardo Morras
On Thu, 7 Feb 2013 09:56:27 + (GMT)
James Vanns <james.va...@framestore.com> wrote:

> Hello list. I'd like to ask someone with more SQLite experience than me a 
> simple question. First, some background;
> 
> Distribution: Scientific Linux 6.3
> Kernel: 2.6.32-279.9.1.el6.x86_64
> SQLite version: 3.6.20
> 
> We have a single process that, given some data, does some processing and 
> writes it all to a single SQLite DB file. This is a write-once process. When 
> this task is finished, the file itself is marked as read only (0444).
> 
> This file exists on an NFS share for multiple users to read - nothing further 
> is ever written to it. The problem we're seeing is that when this DB file is 
> read from (over NFS) none of the pages are cached (despite ~12GB free for 
> page cache use) or at least immediately evicted. This is quite detrimental to 
> performance because our resulting data files (SQLite DB files) are between 
> 100 to 400 MB in size. We *want* it to be cached - the whole thing. The page 
> cache would do this nicely for us and allow multiple processes on the same 
> machine to share that data without any complication.
> 
> I understand that SQLite implements it's own internal page cache but why, on 
> a standard desktop machine, will it not use the page cache. Is there anyway 
> of forcing it or bypassing the internal page cache in favour of the job that 
> Linux already does? I cannot find any reference to O_DIRECT or madvise() or 
> favdise() etc. in the code. The following PRAGMAs don't help either;
> 
> PRAGMA writable_schema = OFF
> PRAGMA journal_mode = OFF
> PRAGMA synchronous = OFF
> 
> PRAGMA cache_size = -
> 
> Obviously that last one works - but only for a single process and for the 
> lifetime of that process. We want the pages to reside in RAM afterwards.
> 
> Anyone out there know how to correct this undesirable behaviour?

You should use these pragmas too:

PRAGMA temp_store = MEMORY;
PRAGMA read_uncommitted = TRUE;

If not, a big select with a big sort could try to use temporary files on your
NFS server. As you aren't doing any writes, there's no need to wait on write locking.

If you need the cache to persist between processes on the same server, you can
build a RAM disk, write the db there and use it from any process. This way you
read the db only once from NFS. Even better, you can shut down NFS, because a
simple FTP/HTTP server and wget/fetch can do what you want: serve and receive
read-only files.

> Regards,
> 
> Jim Vanns
> 
> -- 
> Jim Vanns
> Senior Software Developer
> Framestore
> _______
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Deletion slow?

2013-02-07 Thread Eduardo Morras
On Tue, 5 Feb 2013 12:54:13 +
Jason Gauthier <jgauth...@lastar.com> wrote:

> Hey Everyone,
> 
>  I am a fairly new user of sqlite, but not particularly new to SQL 
> principles.  I am developing an application that will run on a low end system.
> Not quite embedded, but not quite a PC.  In my application, I do frequent 
> table deletes.  My results have been poor, and I am looking for someone to 
> tell me "I'm > doing it wrong", or maybe "that's the best you're going to 
> get", etc.

> Any thoughts on why this may be so slow, or what I can do to improve it?

I don't know if the index is updated after each row delete or after the whole
delete transaction is committed. For the first case you can try:

time sqlite3 trip.db "PRAGMA automatic_index = FALSE; delete from trip where
key<=1400; PRAGMA automatic_index = TRUE; reindex trip"

If you delete a high percentage of the table's rows, it's faster to select the data
you want to keep into a new table, drop the original table, and "alter table temp
rename to trip"; see the sketch below. If you use a memory temp table and a power
loss happens, your data is lost, so use a non-temp table.
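A minimal sketch of that rebuild (untested), reusing the trip/key names from your command; note that CREATE TABLE ... AS SELECT does not copy indexes, so recreate them afterwards:

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = 0;
    const char *sql =
        "BEGIN;"
        "CREATE TABLE trip_new AS SELECT * FROM trip WHERE key > 1400;"  /* keep only the live rows */
        "DROP TABLE trip;"
        "ALTER TABLE trip_new RENAME TO trip;"
        "COMMIT;";
    if (sqlite3_open("trip.db", &db) != SQLITE_OK) return 1;
    if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
        fprintf(stderr, "error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}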

The trick of adding a new column to mark dirty rows will not help here because you
are using an SD card: the cost of marking those rows as deleted/dirty is greater
than deleting them.


> 
> Thanks,
> 
> Jason
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] special aggregate feature vs query optimization

2013-01-10 Thread Eduardo Morras
On Thu, 10 Jan 2013 14:53:52 +0100
E.Pasma <pasm...@concepts.nl> wrote:

> Hello,
> 
> this mail is about the aggregate feature that was unveiled in the  
> release log of version 3.17.11, http://www.sqlite.org/releaselog/3_7_11.html 
>   :
> 
> A query of the form: "SELECT max(x), y FROM table" returns the value  
> of y on the same row that contains the maximum x value.

True

> I just want to point to a construction where one would expect this to  
> work however it does not. I tried a query that returns only the value  
> of y and intuitively wrote:
> 
> select y from (select max(x), y from t);
> 

Select max(x), y from t will return 2 columns and n rows: the first column always
the same value, the maximum of x in table t, and the second column all the t.y values.
Something like this:

max(x) |  y
-------+----
   500 |  5
   500 |  3
   500 |  9
   500 |  2
   500 | 31
   500 |  1
   500 | 86
   500 | 64


From this result table, you are doing select y from (result table) and getting
only the y values, as you expected:

  y
----
  5
  3
  9
  2
 31
  1
 86
 64

> This however no longer returns the value of y corresponding to the  
> maximum x.

For me it works; it shows all y from t. Perhaps I have misunderstood something.

> 
> It looks a consequence of query optimization. The query satisfies all  
> conditions for "Query Flattening" in http://www.sqlite.org/optoverview.html 
>   . The max(x) column is then eliminated.
> 
> Hope this is useful to know for who is using the feature.
> 
> EPasma
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using sqlite in multi fork() without threads

2013-01-08 Thread Eduardo Morras
On Mon, 07 Jan 2013 20:09:02 +0700
Dan Kennedy <danielk1...@gmail.com> wrote:

> On 01/07/2013 03:22 PM, Eduardo Morras wrote:
> >
> > Hi, I use sqlite in some of my projects. In one it follows a
> > parent/multichild model (multifork). The database is managed by the
> > parent, open close backups etc, opened before the fork, and after it
> > the childs use the database connection. I don't want to corrupt the
> > database, but the documentation only talks about the use with
> > threads. I use a version compiled with -DSQLITE_THREADSAFE=1 and
> > shared_cache mode on setted by parent before open db.
> 
> I'm not sure I follow this exactly.
> 
> You don't want to open a connection in the parent, call fork,
> then continue to use the same connection in the child process.
> That will lead to problems.
> 
> On the other hand, if you are calling fork(), then exec() from
> within the child process, then opening a new connection to the
> same db, that's fine.

I'm still thinking about it, because now (5 minutes since the last reply to the
thread) I want to use shared cache, but it will not work (ref:
http://www.sqlite.org/sharedcache.html): each child will have its own shared
cache for its threads/subchildren, not a global cache shared with the parent and
all children.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using sqlite in multi fork() without threads

2013-01-08 Thread Eduardo Morras
On Mon, 7 Jan 2013 11:11:01 +0100
Pavlos Christoforou <pav...@p9ft.com> wrote:

> Hello Eduardo,
> 
> We do (succesfully) use sqlite in a similar manner as you describe in your
> post but the code ensures the DB is opened in each individual child process
> *after* a child is successfully forked. It is not a good idea to carry open
> database connection across the fork (or any other open file descriptor for
> that matter).
> 
> Cheers

I used it that way because when I made my multifork server, the INET connection
had to be set up by the parent and propagated to the children, otherwise we got
errors because the port was already in use by another pid. I thought the same
might happen with the db connection and/or db file descriptor.

> Pavlos
>
---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using sqlite in multi fork() without threads

2013-01-08 Thread Eduardo Morras
On Mon, 7 Jan 2013 12:27:54 +
Simon Slavin <slav...@bigfraud.org> wrote:

> 
> On 7 Jan 2013, at 8:22am, Eduardo Morras <emorr...@yahoo.es> wrote:
> 
> > The app works but, is anyone working with sqlite this way? Any advice to 
> > avoid db corrupts, deadlocks, whatever?
> 
> Read this page:
> 
> <http://www.sqlite.org/threadsafe.html>
> 
> and the parts of this page which mention threads:
> 
> <http://www.sqlite.org/howtocorrupt.html>
> 
> Apart from that you should be okay.

I read the advice in the FAQ Roger sent; it seems to work in serialized mode,
but in the future, when I don't remember about it or no longer maintain the code,
it may misbehave and corrupt the db. I prefer to open the db connection after the
fork, as in the sketch below.
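The pattern I mean, as a minimal sketch ("app.db" is a placeholder):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <sqlite3.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: open its own connection; never reuse the parent's handle */
        sqlite3 *db;
        if (sqlite3_open("app.db", &db) == SQLITE_OK) {
            /* ... child work ... */
            sqlite3_close(db);
        }
        _exit(0);
    }
    /* parent: open (or keep) its own connection here, after the fork */
    wait(NULL);
    return 0;
}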

> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using sqlite in multi fork() without threads

2013-01-08 Thread Eduardo Morras
On Mon, 07 Jan 2013 05:43:50 -0800
Roger Binns <rog...@rogerbinns.com> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On 07/01/13 00:22, Eduardo Morras wrote:
> > opened before the fork, and after it the childs use the database
> connection. I don't want to corrupt the database,
> 
> https://sqlite.org/faq.html#q6
> 
> You can't use the database across forks.  Parent and child would both
> consider themselves owners of open databases and silently corrupt each
> other's work and interfere with each other's locks.  You will corrupt the
> database. You need to close all databases and then do the fork, and then
> open databases in the children.

Years using sqlite (since the 3.0 days) and I had never seen that advice about fork.
It's the first time I use it with fork. Nothing wrong has happened yet, perhaps only
because I tested the app poorly, or because -DSQLITE_THREADSAFE=1 (serialized) was
set, and it may still bite me in the future. I'll change it so the children and the
parent open their connections after the fork.

> In APSW I provide a fork checker.  This is done by providing an
> alternative mutex implementation that records the process id a mutex was
> created in and then checking the mutex is only ever used in the same
> process id.  (Each database connection has its own mutex.)
> 
> The code that does this:
> 
>   https://code.google.com/p/apsw/source/browse/src/apsw.c#704
> 
> In my benchmark tests I measured a slowdown of 1%.  ie if your code did
> nothing but SQLite calls then you can expect it to be about 1% slower.
> 
> I strongly recommend you do something like this to ensure that no
> developer accidentally has databases used across forks.

Interesting way to catch the fork() problem and enhance sqlite. It may work with
other kinds of application architectures too, not only plain fork(); f.ex. using
exec*() after fork so you get 3 different apps using the same db. But I prefer to
fix the bug.

> Roger

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Using sqlite in multi fork() without threads

2013-01-07 Thread Eduardo Morras

Hi, I use sqlite in some of my projects. In one it follows a parent/multichild
model (multifork). The database is managed by the parent (open, close, backups,
etc.), opened before the fork, and after it the children use the database
connection. I don't want to corrupt the database, but the documentation only
talks about use with threads. I use a version compiled with
-DSQLITE_THREADSAFE=1, and shared_cache mode is turned on by the parent before
opening the db.

The app works, but is anyone working with sqlite this way? Any advice to avoid
db corruption, deadlocks, whatever? Please don't suggest changing to a
multithread model; I don't want to use threads for this project.


Thanks

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] C++ - nothing has been stored!!??

2012-07-25 Thread Eduardo Morras

At 03:27 25/07/2012, you wrote:

And here I am again, asking about what you find so easy and I... well, just
cannot understand.
...
but this does not store anything in the database table. I have used an
sqlite browser to determine if, in fact, some data was stored, but no,
nothing has been stored.

What am I doing wrong?



Where/when do you do the commit? 
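In case it helps, a minimal sketch of the usual pattern from the C API (the C++ wrappers end up calling the same functions; the table and file names here are made up). Statements run in autocommit mode unless you open a transaction, and inserts inside an explicit BEGIN are not visible in the file until the COMMIT:

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = 0;
    const char *sql =
        "CREATE TABLE IF NOT EXISTS t(x INTEGER);"
        "BEGIN;"
        "INSERT INTO t VALUES(1);"
        "INSERT INTO t VALUES(2);"
        "COMMIT;";   /* without this, the open transaction is rolled back at close */
    if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;
    if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
        fprintf(stderr, "%s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}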



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] (no subject)

2012-06-06 Thread Eduardo Morras

At 07:08 06/06/2012, you wrote:

Hi,
I have a few queries regarding SQLite; I have listed them below. Please go
through them and give your answers. Thanks.
Say I am working on a DB with 3 tables of 100 records each, and each record
with some 5 fields of data. Each field is varchar(20).

  1. What will be the Stack Size?


It depends on what you are doing with the db; it varies over time.


  2. What will be the Heap Size?


Again, it depends on what you are doing.


  3. What is the max no. of file pointers that will be used at a point of
  time?


If I remember correctly (IIRC) it's 2 per db file, but that was back in 3.2.x
times. Keep in mind that sqlite can access and manage up to 32
db files using ATTACH.



  4. What is the max no. of Mutexes that will be used at a point of time?


I don't know. Why do you need it?


  5. Is there a tool to calculate these?


Compile with debug mode on and use gdb.


Please bear with my ignorance, if any, and answer these questions.

Thanks Again,

Raghava Reddy
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 100% CPU utilization sqlite running in VMWare

2012-06-06 Thread Eduardo Morras

At 06:48 05/06/2012, you wrote:

Have a market data extractor running tick tapes and saving data to sqlite db.
All coded in C++ and previously ran fine and fast on bare metal.

Processes tick tapes from NAS and saved to local sqlite db's on c drive.

We moved program onto VMWare VM's... same nas tick tapes.  Writing
db's to a share.

On a 4 core vm we can run 3 extractors concurrently at 25 - 75% CPU
utilization.  The 4th pegs the CPU at 100%

Same on an 8 core VM ... seems we can run Cores - 1 without pegging the CPU.

The memory footprint for the program is a tiny 9mb...  On bare metal
we can run 20 extractors concurrently.   Looking for suggestions on
how to optimize for VMware.


So you want to minimize the CPU usage? It looks like your OS is running on the
last core; it has work to do too ;) Apart from using a hard disk directly
instead of a vmdk virtual hard disk, I don't know other tricks on VMware, sorry.


Did you create the indexes before the mass inserts? If so, sqlite is reindexing
during your 10.000 inserts. Create the tables without any indexes and recreate
them after the bulk insert (see the sketch below). Set PRAGMA
automatic_index=FALSE too, to disable automatic index creation on the primary
key of the not-yet-indexed table.
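As a minimal sketch of that order of operations (untested; the table and file names are made up):

#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    sqlite3_stmt *st;
    int i;
    if (sqlite3_open("ticks.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE ticks(ts INTEGER, price REAL);", 0, 0, 0);
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO ticks VALUES(?,?);", -1, &st, 0);
    for (i = 0; i < 10000; i++) {               /* the bulk insert, one transaction */
        sqlite3_bind_int(st, 1, i);
        sqlite3_bind_double(st, 2, i * 0.5);
        sqlite3_step(st);
        sqlite3_reset(st);
    }
    sqlite3_finalize(st);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    /* build the index only after the data is in place */
    sqlite3_exec(db, "CREATE INDEX ticks_ts ON ticks(ts);", 0, 0, 0);
    sqlite3_close(db);
    return 0;
}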


Calculate how much the database grows when you insert the 10.000 rows. If you
are using WAL mode, set the WAL file size to this size plus a bit more; this way
only one WAL file will be used. Other journal modes (try OFF) may be faster but
riskier.


9MB of memory: perhaps the system is copying data between kernel structures and
sqlite, like disk cache entries (I don't know what OS you are using, it's a shot
in the dark); try increasing the page cache with PRAGMA cache_size.


Increase the page size to 8K-32K; this way sqlite will reduce the amount of
internal administrative work on pages.


Of course, read the documentation, and if you have any question about it, ask.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Two potential memory leaks in sqlite-3.7.11

2012-04-11 Thread Eduardo Morras

At 14:06 11/04/2012, you wrote:

On Wed, Apr 11, 2012 at 02:01:29PM +0200, Eduardo Morras wrote:
> At 12:09 11/04/2012, you wrote:
> >Hi, all
> >I have applied a memory leak detection tool
> >Melton<http://lcs.ios.ac.cn/~xuzb/melton.html> to
> >find memory leaks in sqlite-3.7.11.
> >Two bugs were found, and I check them manually as the real ones.
> >Here is the url of the bugs:
> >http://lcs.ios.ac.cn/~xuzb/bugsfound/memleak/sqlite-3.7.11/realbugs/index.html

>
> For the first one, check line 6307. It does a trick to call free
> only if the pointer is not null.

That's zSelect. The leaked object is zTmp, which has been sneakily
allocated by the appendText function on 6276. Looks like it's actually
leaked on every call.


It's realloc'ed inside the appendText function; you're right.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Two potential memory leaks in sqlite-3.7.11

2012-04-11 Thread Eduardo Morras

At 12:09 11/04/2012, you wrote:

Hi, all
I have applied a memory leak detection tool
Melton to
find memory leaks in sqlite-3.7.11.
Two bugs were found, and I check them manually as the real ones.
Here is the url of the bugs:
http://lcs.ios.ac.cn/~xuzb/bugsfound/memleak/sqlite-3.7.11/realbugs/index.html


For the first one, check line 6307. It does a trick to call free only 
if the pointer is not null.


For the second one, perhaps the same trick as on line 6307 should be added at
line 7572, changing:

- 7572 free(zBuf);
+ 7572 if (zBuf) free(zBuf);

because the malloc on line 7555 is inside the if starting at line 7546.


Or you can download it at:
http://lcs.ios.ac.cn/~xuzb/bugsfound/memleak/sqlite-3.7.11/realbugs_sqlite_3.7.11.tar.gz


Hope for your replies if these bugs are the real ones. Thank you!

--
Zhenbo Xu
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] In-memory and temporary database

2012-03-29 Thread Eduardo Morras

At 11:32 28/03/2012, you wrote:

Hello,
   I'm doing some test with SQlite 3.7.11 in Windows Embedded industrial PC.
I need to store data in a table very quickly. Let's say new data each 10 msec.
I tryed with DB on disk but obviously was not possible to reach the correct
performance.
So I tryed with in-memory DB and in this case the performance results are
good.
The problem is that I can't share the in-memory database with other
applications.
I red a lot of article and documentation but I have not solved the problem.
Maybe with temporary DB I can have the same performance and also I can share
DB?
Have you some suggestion?


I don't know about Windows Embedded. I used Linux on embedded devices some years
ago. I used a ramdisk and, from time to time, created/opened a new db/table,
swapped insertions over to the new one and closed the old one, then did a simple
move from the ramdisk to the server. The last thing I did was drop sqlite from
the device and write an app that exports the data in CSV format and sends it to
the server directly; this worked because it moved the decisions/analysis of the
data from the device to the server.



Simone Pietro Lossani
Mezzomerico
Italy
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Optimize Table Access Efficiency

2012-03-21 Thread Eduardo Morras

At 07:53 21/03/2012, Max Vlasov wrote:

On Wed, Mar 21, 2012 at 1:33 AM, Tim Morton  wrote:

> So it seems the index is no help;
>


Are you sure you did as Simon explained?
Becausem my tests shows the trick works... (sqlite 3.7.10)


The trick works, but I think he didn't time what he should have timed.

He did:

cur.execute('SELECT topics FROM dict')
start = time.time()
for item in cur:
    tmp.append(item[0])
print time.time() - start, 'get WL'

So he timed only fetching the data from the Python cursor and moving it into a
Python list. I think he should time:

start = time.time()
cur.execute('SELECT topics FROM dict')
for item in cur:
    tmp.append(item[0])
print time.time() - start, 'get WL'

including the SELECT, because that's where the trick works.


The timing was unaffected by sqlite and system cache since the db was
reopened with a special trick on Windows forcing it to clear cache for this
file

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Amalgamation

2012-03-08 Thread Eduardo Morras

At 04:58 08/03/2012, you wrote:

Hi guys,

Decided to open a discussion on SQLite amalgamation. Please forgive 
us, Windows users, who unlucky enough to use Microsoft Visual 
Studio. As many of you know, it is year 2012 now, and Visual Studio 
is up to version 10, but the guys at Microsoft still use 16-bit 
indices for the debugger. As a result, the source files with the 
line count greater than 65535 cannot be debugged.


It is very annoying, and we, Windows users, should use a 
non-recommended source for SQLite in our projects.


Can you guys create a version with source split into files with each 
of them not greater than 65535 lines (call it as you like, VC 
special version, semi-amalgamated, etc.), but without the words "not 
recommended".


This would be very helpful.

Regards,

Andrew Cherednik

Senior Analyst / Application Developer | Healthscope Limited Level 
1, 312 St Kilda Road | Melbourne Victoria 3004


You can delete all the comments and blank lines; this way sqlite3.c will
be circa 7 lines.


HTH 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backup API handling of corrupt source DB

2012-03-07 Thread Eduardo Morras

At 12:22 07/03/2012, you wrote:

Dear list,

I'm using the backup api to frequently backup
a running sqlite database.

I'm wondering if the backup API is able to detect a corrupt
database or will it simply also backup a corrupt DB ?

I'm evaluating the need to issue a (time-consuming) "pragma integrity_check"
prior to running the backup, to avoid a good backup being overwritten with an
invalid database.


You can do the backup and, after that, run an integrity check on the backup.
Surely you're backing up to a different server, aren't you? If the backup passes
the integrity check, it's a real backup; if not, raise a warning.
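A minimal sketch of that sequence (untested; the file names are placeholders):

#include <stdio.h>
#include <string.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *src, *dst;
    sqlite3_backup *b;
    sqlite3_stmt *st;
    int ok = 0;
    if (sqlite3_open("live.db", &src) != SQLITE_OK) return 1;
    if (sqlite3_open("backup.db", &dst) != SQLITE_OK) return 1;
    b = sqlite3_backup_init(dst, "main", src, "main");
    if (b) {
        sqlite3_backup_step(b, -1);     /* copy everything in one pass */
        sqlite3_backup_finish(b);
    }
    /* verify the copy, not the live db */
    if (sqlite3_prepare_v2(dst, "PRAGMA integrity_check;", -1, &st, 0) == SQLITE_OK) {
        if (sqlite3_step(st) == SQLITE_ROW)
            ok = strcmp((const char*)sqlite3_column_text(st, 0), "ok") == 0;
        sqlite3_finalize(st);
    }
    if (!ok) fprintf(stderr, "backup failed the integrity check\n");
    sqlite3_close(src);
    sqlite3_close(dst);
    return ok ? 0 : 1;
}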



Sure I could try to simulate that, but probably somebody here
knows the answer.

Thank you.

Marcus



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Three small patches for lemon

2012-01-08 Thread Eduardo Morras

At 04:21 08/01/2012, you wrote:

[I sent this 3 months ago, but got no response - if this is the wrong
place to be sending patches, please tell me where they should be sent.]

Attached are three small patches for lemon:

lemon-remove-duplicate-prototype.patch - remove a duplicate prototype
for SetNew().

lemon-two-typos.patch - fix a typo in a message and another in a
comment.

lemon-unused-parameter.patch - remove the errsym parameter to
resolve_conflict() which is no longer used.


I don't see any attachments. Perhaps the mailing list software strips them.
You could upload the patches to your site and send the links.



Cheers,
Olly



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] search

2011-12-27 Thread Eduardo Morras

At 15:10 27/12/2011, Mohit Sindhwani wrote:

On 27/12/2011 9:25 PM, Eduardo Morras wrote:

At 11:27 27/12/2011, you wrote:

Thank you. I agree. It's correct.

I already have data base with /country/state/city/village format. Is it
possible to do that while virtual table creation time?

if yes, how?


Change '/' to ','. This way you get a CSV file which you can import
directly. Perhaps you need to add a line with the table column descriptions.


or at the SQLite3 shell do
sqlite> .separator /
before doing a .import

no?


Yes, I didn't remember that.


Cheers,
Mohit.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] search

2011-12-27 Thread Eduardo Morras

At 11:27 27/12/2011, you wrote:

Thank you. I agree. It's correct.

I already have data base with /country/state/city/village format. Is it
possible to do that while virtual table creation time?

if yes, how?


Change '/' to ','. This way you get a CSV file which you can import
directly. Perhaps you need to add a line with the table column descriptions.




In case I have 250 levels like this (ex: a file system), how do I do this?
Any idea?


You can use a shell script to do so. I leave it as an exercise for you ;)


Thanks in advance.


HTH 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How about a proper forum rather than an e-mail list

2011-10-18 Thread Eduardo Morras

At 14:04 18/10/2011, Igor Tandetnik wrote:

Frank Missel  wrote:
> I think that the sqlite-users e-mail list has enough traffic to warrant a
> proper forum.

For what it's worth, I'm using GMane (http://gmane.org/), which is a 
mailing list-to-NNTP gateway and happens to carry this list. I'm 
old-fashioned enough to believe that an NNTP newsgroup *is* the 
proper forum. Can't stand modern Web-based forum interfaces.


++


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Example of implementing undo/redo using sqlite3?

2011-08-04 Thread Eduardo Morras
At 19:58 04/08/2011, Stephan Beal wrote:
>Hi, all!
>
>http://www.sqlite.org/whentouse.html
>
>says:
>
>"Temporary triggers can be added to the database to record all changes into
>a (temporary) undo/redo log table. These changes can then be played back
>when the user presses the Undo and Redo buttons. Using this technique, an
>unlimited depth undo/redo implementation can be written in surprisingly
>little code."
>
>i'm wondering if anyone can point me to an example of implementing such a
>beast?

I have not done it, but you can mimic/implement it using 2 extra columns in each
table: the transaction number of the change (insert, update, delete only) and a
deleted flag. An UPDATE duplicates the row with a higher transaction number; a
DELETE duplicates the row and marks it as deleted. To restore a previous point:

select * from table where transaction_id <= N and deleted = 0;
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Threading makes SQLite 3x slower??

2011-08-04 Thread Eduardo Morras

OK, let's try another thing.

Try the select without the COUNT(*):

SELECT class FROM data_r JOIN data USING (rowNum) WHERE 57 < col0min 
AND col0max < 61 AND 52 < col1min AND col1max < 56 AND 66 < col2min 
AND col2max < 70 AND 88 < col3min AND col3max < 92 AND 133 < col4min 
AND col4max < 137 AND 57 < col0 AND col0 < 61 AND 52 < col1 AND col1 
< 56 AND 66 < col2 AND col2 < 70 AND 88 < col3 AND col3 < 92 AND 133 
< col4 AND col4 < 137 AND 81 < col5 AND col5 < 85 GROUP BY class;

In some RDBMSs (I don't know about sqlite), COUNT, AVG, SUM, etc. imply
a table scan, making the select very slow.

HTH


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Threading makes SQLite 3x slower??

2011-08-04 Thread Eduardo Morras
At 06:53 04/08/2011, you wrote:
>I was hoping they wouldn't block each other because it's a read 
>lock. I tried making an index on all the columns, but R*tree table + 
>JOIN that I'm using runs about 10x faster. I might have done 
>something wrong, so I'm open to suggestions on a better index, though.

They block each other not when trying to get a lock, but when trying to get
access to the disk and the cache. Disk access time and cache are shared
between all threads, and if all threads need access to different parts of the
db they will fight like barbarians; you should convert them into Roman cohorts
instead. You can make your page cache bigger using pragmas; check the mailing
list and the documentation.

>I don't think that UPDATEing 99.99% of 17 million columns is going 
>to run faster than the pure select statement that I have. Each 
>select statement is fairly random, a small selection out of an 
>N-dimensional space.

You don't need to update your table; you can use a temp memory table for the
candidates. First select the candidates (57 < col0min and so on) into the temp
table, then run the rest of the query against it.

>Am I understanding you? Any other ideas?
>~Seth


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] LevelDB benchmark

2011-07-29 Thread Eduardo Morras
At 14:19 29/07/2011, Black, Michael (IS) wrote:
>What they don't say explicitly is that if all you need is key/value 
>capability then an SQL database is overkill and only slows you down 
>(bit of a duh factor there though not obvious to 
>neophytes).  Generally speaking that's one thing they don't teach in 
>college is optimization.  I can't count the # of projects I've 
>worked on where what was implemented was so slow as to be useless by 
>the time it was done only to have to redesign the thing for 
>speed.  The one I'm working now I've sped up by 100X.

A key/value data store can work with O(1) algorithms, while
B-Tree indexes are O(log n) per operation. It can't do everything you can do
with a B-Tree, but it does what you need.

>The bad part is you'd better really know that all you need is 
>key/value otherwise you have to re-do things and kill your 
>performance which then could mean rearchitecting your solution.
>
>I know my boss is always changing requirements on our IR programs...:-)

Jejejeje, a boss is always O(n^2) or worse in implementation time :)


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] LevelDB benchmark

2011-07-29 Thread Eduardo Morras
At 02:53 29/07/2011, Richard Hipp wrote:
>On Thu, Jul 28, 2011 at 12:27 AM,  wrote:
>
>I suspect that I will come up with other suggestions once I have a chance to
>dig a little deeper into the benchmarks.  If you have suggestions, please
>publish them here.

They are comparing apples with oranges: a key/value data store with a SQL
relational database. From their home page(1):

* This is not a SQL database. It does not have a relational data model, it
does not support SQL queries, and it has no support for indexes.

So, they support simple queries (the = symbol) and not complex queries
(<=, >=, <, >, !=). Why not compare it with Hadoop, Pnuts/Sherpa,
or similar NoSQL data stores?

>Thanks for your help and support!


(1) http://code.google.com/p/leveldb/ 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Do I need to migrate to MySQL?

2011-06-03 Thread Eduardo Morras
At 12:58 03/06/2011, you wrote:
>Guys, the server for this game -
>
>http://www.frozensynapse.com
>
>uses SQLite.  We've had an unexpectedly successful launch which has
>resulted in the server being swamped with players, and I'm trying to
>optimise everywhere I can.   I've always been under the impression that
>SQLite is pefectly fast and it's the scripting language I wrote the
>server in which is too blame.  (Yes, I know writing a back-end in a
>single-threaded scripting language is an absolutely terrible idea).
>However, everyone in the industry I talk to says that SQLite must be one
>of the problems.
>
>I may be looking at a complete re-write.  I may also need to have a
>solution which scales beyond one machine.  Can anyone give me advice on
>this matter specifically?
>
>(The video on that website at 2.04 gives a good idea of what kind of
>functions are being powered by the database).

Do you only use sqlite for the player lists? What settings/pragmas do you use
for sqlite? Is the server a dedicated server or a shared VPS? Which hardware
configuration? Tell us more about your configuration and the metal so we can
help you.

>Thanks,
>Ian
>___
>sqlite-users mailing list
>sqlite-users@sqlite.org
>http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] insert speeds slowing down as database size increases (newb)

2008-10-29 Thread Eduardo Morras
At 13:10 29/10/2008, you wrote:
>Look up the implications of Sqlite's ACID feature and the use of
>transactions.  COMMITs are tied to disk rotation speed.  On our Sqlite
>databases where we look for performance we use 15,000 rpm disks and are
>diligent in wrapping INSERTs, UPDATEs and DELETEs in transactions and
>get very good results.
>
>The Sqlite B-Tree indices do slow down on insertion as extra levels are
>created in the index as it grows large.  That is an inherent feature of
>such structures.
>JS

I think the problem is that he creates the indexes before inserting the data.
He should insert the data inside BEGIN..COMMIT and create the indexes after the
inserts. If not, the indexes must be reworked for each insert, and the slowdown
happens.

HTH 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up read-only databases?

2008-10-28 Thread Eduardo Morras
At 13:47 15/10/2008, you wrote:
>Hi there,
>
>i am using a 120MB database in an embedded project (a DVD-ROM project)
>and was wondering what I can do to speed up its reading using diverse
>PRAGMA statements.
>The database is locked, meaning that no data is being inserted or
>deleted from it. I am solely after speeding up its reading performance.
>
>Indices have been set, would augmenting the cache size for Sqlite do
>something?

How much memory does your embedded project have? You can create a new
in-memory database and copy your data into it.
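A minimal sketch of the copy (untested; it assumes a single table named "t" with an "id" column, so adjust to your schema; "dvd.db" is a placeholder path):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = 0;
    const char *sql =
        "ATTACH 'dvd.db' AS src;"
        "CREATE TABLE t AS SELECT * FROM src.t;"   /* copies the data, not the indexes */
        "DETACH src;"
        "CREATE INDEX t_idx ON t(id);";            /* recreate the indexes by hand */
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
        fprintf(stderr, "%s\n", err);
        sqlite3_free(err);
    }
    /* queries against db now run from RAM */
    sqlite3_close(db);
    return 0;
}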

>Grateful for any info,
>
>--
>Christophe Leske

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] design question / discussion

2008-05-21 Thread Eduardo Morras
At 19:12 20/05/2008, you wrote:
>Actually my reason for writing into a seperate database is more...
>well crude.  I tar several databases together then encrypt using
>openSSL. Then an FTP like program transmits the data a central server.
>  I must suspend writing into the database for the duration of the tar
>operation since tar does not abide by sqlites file locking rules.

Perhaps I've missed something or don't understand it well. Are your databases
all in the same file, or do you have 2 separate sqlite sessions to 2 different
database files? In the first scenario you must be very fast; in the second you
can switch from one database to the other, unbind (close) the sqlite
connection, do FTP or whatever you want, and delete the database file.

You attach x+1 to x. Why do you need that? If you delete the old records in x
after the FTP, you can trash x, work with x+1 and recreate an empty x.

I think you only need 2 databases: while you add data to A, you copy and delete
B, then switch A and B. Perhaps you need 3 databases, and separate the download
and . On the other side you can attach the databases and reconstruct one big
database.

If you need compression you can check the lzp, lzo or lzrh algorithms; they are
very fast and compress the files "on the fly". These compression algorithms work
well with text data and badly with binary data. Take care: sqlite already stores
its data quite compactly in the database files, so the gains may be smaller than
you expect.

HTH



--
With sufficient thrust, pigs fly just fine. However, this is not
necessarily a good idea. It is hard to be sure where they are going to
land, and it could be dangerous sitting under them as they fly
overhead. -- RFC 1925  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Porting into a microcontroller, minimum requirements

2008-05-07 Thread Eduardo Morras
At 02:43 07/05/2008, you wrote:

>Hi,
>
>I was wondering if anyone has any basic guide lines on embedding SQLite into
>a microcontroller.  For example, I am considering using an 8/16 bit processor
>with 1 MB flash, 1 MB SRAM and 2 GB data storage (SD card).  Has anyone ported
>this before to an embedded system without an OS?

I have ported it to a 32-bit system. Take a look at Contiki: it's a nano
(femto) OS that works very well, is very portable, and it's easy to add sqlite
to it (in the 32-bit world).

>Thanks,
>Andy



One reason that life is complex is that it has a real part and an 
imaginary part
-Andrew Koenig  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] splite database under version control (subversion)?

2008-05-07 Thread Eduardo Morras
At 23:19 04/05/2008, you wrote:

>If you use svn only to sync over many pc maybe you can put your project on a
>usb device (or usb hd) so you don't need to "sync" with svn.
>
>If you WANT to use svn and you are under linux you can create an alias for
>commit that make a dump of your db then commit it and one alias that update
>and read the dump in db.
>
>Maybe in windows you can make a batch, I don't know windows very well.

In Windows you can change the extension to .sdb and Windows will do
all the work. Very slow, very risky, but it works.


One reason that life is complex is that it has a real part and an 
imaginary part
-Andrew Koenig  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] blob :: storing files within sqlite3

2008-04-24 Thread Eduardo Morras
At 09:57 24/04/2008, you wrote:

>Hi.
>
>I intend to use sqlite3 to store files.
>
>I've tested inserting an 7.6MB file in a blob column but it returns a
>SQLite3::TooBigException. I'm using ruby API.
>
>I've installed sqlite3 through a binary package. How can I know what is
>the maximum length permited to a blob column ?
>
>Is there any recomendation on how could I store the files ? Split it
>into permited length pieces ?

Don't store the files in the blob, just an absolute/relative path to the
file, or store a compressed/zipped file. If you choose the compressed file,
you can put more than one file in it if you add the file's number.

>Thanks in advance.

HTH

>-- João Macaíba.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Version 3.5.5 Released

2008-02-03 Thread Eduardo Morras
At 18:33 31/01/2008, you wrote:
>The big change from version 3.5.4 is that the internal virtual
>machine was reworked to use operands in registers rather than
>pulling operands from a stack.  The virtual machine stack has
>now been removed.  The removal of the VM stack will help prevent
>future stack overflow bugs and will also facilitate new optimizations
>in future releases.
>
>There should be no user-visible changes to the operation of SQLite
>in this release, except that the output of EXPLAIN looks different.
>
>In order to make this change, about 8.5% of the core SQLite code
>had to be reworked.  We thought this might introduce instability.
>But we have done two weeks of intensive testing, during which time
>we have increased the statement test coverage to 99% and during
>which we have found and fixed lots of minor bugs (mostly things
>like leaking memory following a malloc failure).  But for all of
>that testing, we have not detected a single bug in the new
>register-based VM.  And for that reason, we believe the new
>VM, and hence version 3.5.5, is stable and ready for production
>use.

Was the new VM designed with a possible (if improbable) port to VHDL/Verilog
in mind, for SQLite on an FPGA? Changing from a stack-based VM to a
register-based VM makes it easier to port.

Perhaps it's a very crazy idea...


One reason that life is complex is that it has a real part and an 
imaginary part
-Andrew Koenig  

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Implementation of TPCC benchmark

2008-01-31 Thread Eduardo Morras

At 23:53 30/01/2008, you wrote:


On Wed, 30 Jan 2008, Steve VanDeBogart wrote:


SQLite-ers,

OSDL-DBT is a set of TPC like benchmarks implemented by OSDL
(http://osdldbt.sourceforge.net/).  I've added SQLite support to 
the DBT2 benchmark (TPCC like).  The patch is attached in case anyone is

interested in running a TPCC like benchmark on SQLite.


Sorry, the patch can be obtained from here:
http://cs.ucla.edu/~vandebo/dbt2-sqlite.patch


And where can I compare the results? Does the DBT2 benchmark patch let me run
a TPC-C-like benchmark? Will the TPC-C benchmark include sqlite results?

Sorry, but I know nothing about this type of benchmark.

TIA 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Performance tuning using PRAGMA, other methods

2007-11-30 Thread Eduardo Morras

At 19:12 20/11/2007, you wrote:


What are the 'biggest bang for the buck' sqlite optimization techniques
to apply to a working system to tune performance?

Q1)) PRAGMA: Does anyone have experience and good results optimizing
sqlite performance using PRAGMA's? If so, which ones, how were they
used, and what was the performance increase?
Q2)) Other techniques: Any success stories on sqlite optimization
methods of any type would be appreciated.


On other techniques:
a) normalize the db
b) don't group two or more unrelated sets of tables in one db file; better,
give each group its own db file, and better still on separate physical disks
c) if you have a read-only db and enough RAM, use a memory disk (or a
swappable memory disk) and copy the db to it; disable sqlite's caches
d) use a libc with SIMD; for example, under PowerPC a libc with AltiVec code
for sorts, mem-copy, mem-compare, etc. can boost your sqlite (I have seen x7)
if it's CPU-bound (memory dbs)
e) I'm not sure, but using a journaling or softupdate-enabled filesystem
should mean you can disable the synchronous pragma


Also, you can tweak the sqlite code. For example, I tweaked an old version
(3.1.2 or something like that) not to use journal files; it just modified the
db file directly on writes. Very risky, but a lot faster.
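For Q1, a minimal sketch of pragmas worth trying (the values are starting points, not gospel; "app.db" is a placeholder):

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = 0;
    const char *sql =
        "PRAGMA cache_size = 10000;"      /* more pages in the page cache */
        "PRAGMA temp_store = MEMORY;"     /* temp tables and sorts in RAM */
        "PRAGMA synchronous = OFF;";      /* fast, but risky on power loss */
    if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;
    if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
        fprintf(stderr, "%s\n", err);
        sqlite3_free(err);
    }
    /* ... run the workload and measure ... */
    sqlite3_close(db);
    return 0;
}

Measure each pragma separately against your workload; the gains depend heavily on the access pattern.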


HTH 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Soft heap limit enforcement performance

2007-10-30 Thread Eduardo Morras

At 19:41 30/10/2007, you wrote:

Mostly I am interested in making sure that malloc(1000) does not
fail even though you have 5 bytes free and they just happen
to be scattered about as 100 discontinguous blocks of 500 bytes
each.


On the embedded device I worked on 2 years ago (I made only the micro-OS with
sqlite), I "designed" a pseudo-handle allocator. It worked with a maximum
number of masters (255) and divided the memory pool into 8192-byte blocks,
with a byte-map of 64KB (65536 blocks of 8192 bytes) over the memory pool:
0 for free, any other value for the owning master. Each block can be freed or
be owned by any of the 255 masters. Of course it had internal fragmentation
and minimal external fragmentation, but the block size (or the byte-map) can
be changed to any other value (at the cost of a linear search for free or
adjacent blocks), and blocks can be moved to minimize external fragmentation.
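Roughly, the idea in code; a toy sketch, not the original (the sizes are shrunk to keep it short):

#include <string.h>

#define NBLOCKS 1024          /* toy value; the real thing used 65536 blocks */
#define BLKSIZE 8192

static unsigned char owner[NBLOCKS];        /* the byte-map: 0 = free, 1..255 = owning master */
static unsigned char pool[NBLOCKS][BLKSIZE];

/* hand one free block to a master; returns its memory or 0 */
static void *blk_alloc(unsigned char master) {
    int i;
    for (i = 0; i < NBLOCKS; i++) {
        if (owner[i] == 0) {
            owner[i] = master;
            return pool[i];
        }
    }
    return 0;   /* pool exhausted */
}

/* release every block a given master owns */
static void blk_free_master(unsigned char master) {
    int i;
    for (i = 0; i < NBLOCKS; i++)
        if (owner[i] == master)
            owner[i] = 0;
}

int main(void) {
    void *p = blk_alloc(7);          /* master number 7 asks for a block */
    if (p) memset(p, 0, BLKSIZE);
    blk_free_master(7);
    return 0;
}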






-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Re: In Mem Query Performance

2007-07-02 Thread Eduardo Morras

At 12:16 02/07/2007, you wrote:

Hi,

Pls notify me if it can be brought down to 1 sec.
There is no mem constraint.


I don't know x86 asm or gcc for x86, but in PowerPC asm, if I use AltiVec
libraries (libfreevec, f.e.) for string comparisons, memory moves/copies,
sorts, etc., I get from x2 to x16 speed improvements (and more). Perhaps by
replacing the comparison routines with SSE/SSE2/3DNow or similar asm code you
can get a speed boost.



regards
ragha


**
This document represent my ideas.They are original from me.
It's forbidden think the same than me, without previous payment.
If you agree me, PAYME. LOOKOUT: Lawyer BITES. 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Cache invalidation after insert statements.

2007-06-22 Thread Eduardo Morras

At 11:20 22/06/2007, you wrote:



HI all.

Thanks for everyone's help; the problem is now solved. The memory drive worked
like a bomb. Basically the problem on that server was that the insanely high
IO prevented the OS from caching the file, which slowed down the performance.
After installing a mem drive (using mfs) and reducing the cache size,
multiple connections are now flying. I'm getting insane speeds. SQLite FTW!


It's better to write a custom RAM drive that knows about sqlite than to run
sqlite on a generic RAM drive. This way the RAM drive can take the database
lock and copy the RAM drive's contents to disk safely.




--
"General error, hit any user to continue." 






Re: [sqlite] How SQLite may work on AVR or similar mcu?

2007-06-15 Thread Eduardo Morras

At 14:18 13/06/2007, you wrote:

I do not need it actually, but I am so inquisitive.

I have heard that sqlite has been ported to AVR too.
How might the main storage problem be solved on it?
So for example, I have a system with an ATmega64 and 128KB of external SRAM.
The whole RAM is organized as a static 60KB plus 18 pages of 4KB (the last 4KB
page lies in the internal RAM shadow), but it can be simply reconfigured
because the memory address logic is provided by a programmable logic device
(CPLD).

As the firm data storage memory there is a 1MB SPI dataflash, but there is a
possibility to replace it with a 16MB chip with the same interface, for
example.

I am at the 230th page of Mike Owens' book The Definitive Guide to SQLite,
and with every next page my image of SQL on a chip w/o OS disappears in haze.

As I wrote before, I do not need it actually; I use sqlite in a linux
environment.

Thank you for helping me form a picture.

Jakub Ladman


128KB of RAM is very low. I have run it on a powerpc 440 device with 
uclinux, but with more RAM (1GB). There are minimalist OSes that fit in 
a few KB; check contiki and the contiki mailing list 
http://www.sics.se/contiki/ 
https://lists.sourceforge.net/lists/listinfo/contiki-developers (it's 
used on the Commodore 64 and other relics). I used it before on a 
development card (Virtex II Pro - xupv2p). You can cut down contiki 
and sqlite features. The problem will be the stack/heap that sqlite 
needs to run. I don't know about the ATmega64, but can't you add an 
fpga with a dram controller? Perhaps using a free i/o port or 
replacing the cpld with the fpga?


HTH

--
"General error, hit any user to continue."  






Re: [sqlite] Stack usage

2007-06-05 Thread Eduardo Morras

At 20:31 05/06/2007, you wrote:


> Excuse me for asking this question, but have you normalized your
> database? Try to design/define it in at least Third Normal Form.

My application doesn't create any databases itself. It allows users to
store any data. And users need to be able to store any number of columns in
1 table (the most I've heard about is about 1, but I wouldn't be
surprised if they had more). Trust me, they need it and they can't do
anything about it.
But the problem isn't actually caused by many columns in the table. The
query I posted before doesn't need any table in the database - it was
created based on a table, but any similar query will crash SQLite. The
query was created by a JDBC driver - I will change the driver's code and the
problem will be solved for now, but it doesn't mean that everything is ok.
Similar automatically created queries may be created based on anything, for
example rows of some table - 1 rows isn't too many, right? And that
query may cause any application to crash. And possibly there are other
innocent-looking queries that are implemented using recursion that will do
the same damage (maybe nested selects?).


The number of rows is not a problem; the number of columns may be. My 
database teacher said that "the best table has 2 columns" - too 
theoretical for me, but several hundred or thousand columns is too much 
in practice.




-
Antivirus Warning: User detected. Please keep away from keyboard or 
you will be erased. Thank you. 






Re: [sqlite] Stack usage

2007-06-05 Thread Eduardo Morras

At 16:40 05/06/2007, you wrote:

Joe Wilson wrote:
> Please respond to the mailing list in the future.

Sorry. Different client. I didn't notice the address.

> At least there's a known workaround, so no problem.

A workaround is not a solution.

>
> > > hence your problem.
> >
> > Sure it is. Just like any bug or missing feature in any application. But
> > I've always hoped that SQLite won't constrain me. It's sad that it does,
> > and horrible that it crashes instead of returning an error.
>
> You should ask for your money back.

Yes, it's free. But I think that no user (or author) of software likes it
when software crashes. Unfortunately SQLite does.


My last message wasn't sent to the mailing list, so once more:
does anyone know other places, apart from unions, where deep recursion
may be used?


Excuse me for asking this question, but have you normalized your 
database? Try to design/define it in at least Third Normal Form (a tiny 
illustrative sketch follows).
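
For instance, a minimal sketch of what such a normalization step looks like, 
with an illustrative schema (none of these names come from this thread): a 
wide table where customer_city depends on the customer, not the order, gets 
split in two:

#include <sqlite3.h>

void normalize_example(sqlite3 *db) {
    /* before: orders(order_id, customer_name, customer_city, item, qty) */
    sqlite3_exec(db,
        "CREATE TABLE customers("
        "  customer_id INTEGER PRIMARY KEY,"
        "  name TEXT UNIQUE,"
        "  city TEXT);"
        "CREATE TABLE orders("
        "  order_id INTEGER PRIMARY KEY,"
        "  customer_id INTEGER REFERENCES customers(customer_id),"
        "  item TEXT,"
        "  qty INTEGER);", 0, 0, 0);
}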



A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail? 






Re: [sqlite] sqlite internal structs don't make use of C bitfields?

2007-06-02 Thread Eduardo Morras
At 17:18 31/05/2007, you wrote:
>On 5/31/07, Eduardo Morras <[EMAIL PROTECTED]> wrote:
>>At 23:25 30/05/2007, you wrote:
>>>Setting and reading individual bytes (u8 in sqlite-speak) are not
>>>threadsafe either. Only reading/setting entire words
>>>are threadsafe on most architectures.
>>
>>Using a uint32 to store the flags is threadsafe. There are fewer than 32 
>>true/false values and read/set is simple. I see no difference between
>>
>>if (uint8 == 0) {   // read/test bit
>>    uint8 = 1;      // set bit
>>    /* whatever more */
>>}
>
>Not atomic, so not thread-safe.
>You have a race condition waiting to happen.
>
>>and
>>
>>if ((uint32 & MASK) == 0) { // read/test bit
>>    uint32 |= MASK;         // set bit
>>    /* whatever */
>>}
>
>Also not atomic, so not thread-safe.

I think I have not explained it well. I wanted to point out that if you can 
modify/set an 8-bit value atomically/thread-safely, then you can do it just 
as well with one 32-bit flag value. 
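
For what it's worth, a truly atomic set-bit needs a read-modify-write 
primitive. A minimal sketch, assuming GCC-style __sync builtins and an 
illustrative PAGE_DIRTY flag (neither is from this thread):

#include <stdint.h>

#define PAGE_DIRTY (1u << 3)   /* illustrative flag name */

static uint32_t flags;

void set_dirty(void) {
    /* atomic read-modify-write; a plain flags |= PAGE_DIRTY is not atomic */
    __sync_fetch_and_or(&flags, PAGE_DIRTY);
}

int is_dirty(void) {
    return (flags & PAGE_DIRTY) != 0;   /* single word-sized read */
}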



   FreeBSD+Xfce, OpenOffice and many more OSS user.
Microsoft declares that OSS violates 235 patents. Please, SUE ME. 





Re: [sqlite] Concurrency

2007-06-01 Thread Eduardo Morras
At 20:58 01/06/2007, you wrote:
>On 6/1/07, Eduardo Morras <[EMAIL PROTECTED]> wrote:
>
>>If i remember well, sqlite uses two databases for metadata/schema
>
>Databases or tables? Could you be thinking of the sqlite_master table?

 You're right: 1 db and 2 tables. Yes, I'm thinking of the sqlite_master table.

Thanks



   FreeBSD+Xfce, OpenOffice and many more OSS user.
Microsoft declares that OSS violates 235 patents. Please, SUE ME. 





Re: [sqlite] Does Attach improve Concurrency ?

2007-06-01 Thread Eduardo Morras
At 19:32 01/06/2007, you wrote:

>When you have a connection with multiple attached databases and the
>connection acquires an exclusive lock, does it always lock all attached
>databases or does it keep track of which databases require the lock?  Does
>using separate databases and attaching them improve concurrency (by
>providing finer-grained locking)?

It locks all attached databases. No, it does not improve concurrency, but it 
can improve speed if the database files are on more than one physical disk. 
The sqlite bottleneck is I/O access in most cases.



   FreeBSD+Xfce, OpenOffice and many more OSS user.
Microsoft declares that OSS violates 235 patents. Please, SUE ME. 





Re: [sqlite] Concurrency

2007-06-01 Thread Eduardo Morras
At 19:24 01/06/2007, you wrote:
>Why did you say less than 29?

SQLite has a soft limit of 10 attached databases and a hard limit of 32; you 
can change it at compile time. If I remember well, sqlite uses two databases 
for metadata/schema, so you get a max of 30 databases; you need another one 
as the master db, so you get 29 free databases. 


---
I never mix pills with alcohol, ... I dissolve them. 





Re: [sqlite] Concurrency

2007-06-01 Thread Eduardo Morras
At 11:17 01/06/2007, you wrote:
>Hi,
>
>I am having a scenario where I have one reader/writer and many writer threads.
>All writers are pretty basic (a single INSERT INTO; some sort of logging of
>what a thread has done).
>
>I believe I will receive many BUSY return codes and I don't like these
>spinlock-like retries. The problem I am having with this design is that I would
>like to complete the thread ASAP, so that I don't have many threads idling and
>consuming resources of my embedded system.
>
>I was thinking to either:
>
>a. Use mutex/semaphore before writting to the database or
>
>b. Have a (thread safe) list of INSERT INTO strings that every writer thread
>populates and the main reader/writer thread later executes.
>
>Is this a good approach? Does anyone have a better design? I don't want to use
>another database, because I think Sqlite is great for the embedded system that I
>am using.

How many threads do you have? If the number of threads is low (less than 29), 
you can use a database for each thread. Each one will have its own file and 
no write-lock problems. From time to time a simple SQL query can gather all 
the data from those databases, write it to the main one, and delete the 
per-thread databases; a sketch follows below.
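
A minimal sketch of that collection step, assuming a hypothetical per-thread 
file thread7.db and a log table with the same schema in both databases:

#include <stdio.h>
#include <sqlite3.h>

/* Merge one per-thread db into the main connection; on success the
 * caller can unlink() the per-thread file. */
int merge_thread_db(sqlite3 *db, const char *path) {
    char sql[512];
    char *err = 0;
    snprintf(sql, sizeof sql,
             "ATTACH DATABASE '%s' AS src;"
             "INSERT INTO main.log SELECT * FROM src.log;"
             "DETACH DATABASE src;", path);
    if (sqlite3_exec(db, sql, 0, 0, &err) != SQLITE_OK) {
        fprintf(stderr, "merge failed: %s\n", err);
        sqlite3_free(err);
        return -1;
    }
    return 0;
}

Usage would be one merge_thread_db(main_db, "thread7.db") call per 
per-thread file.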

HTH



   FreeBSD+Xfce, OpenOffice and many more OSS user.
Microsoft declares that OSS violates 235 patents. Please, SUE ME. 





Re: [sqlite] sqlite internal structs don't make use of C bitfields?

2007-05-30 Thread Eduardo Morras
At 23:25 30/05/2007, you wrote:
>--- [EMAIL PROTECTED] wrote:
>> - Original Message 
>> > MemPage bitfield patch below. 
>> > 
>> > sizeof(MemPage) on Linux: 
>> > 
>> >   original: 84
>> >   patched:  76
>> > ...
>> > Break-even for memory is 904/8 = 113 MemPage structs allocated.
>> 
>> I didn't look at the code, so mind me :)
>> 
>> If the MemPage are malloced individually (instead of being put in arrays), 
>> then they are 16 byte
>> aligned on most platforms, making the allocated block effectively the same 
>> size (well, that
>> depends on how many bytes are used by malloc before the user block in 
>> memory).
>
>This patch does indeed save memory - on Linux at least.
>Linux has a malloc minimum resolution of 8 bytes, not 16.

No; for example, under powerpc and linux with altivec enabled in the kernel 
it's 16 bytes (or use the www.freevec.org library). I think under other 
SIMD-capable processors it's 16 bytes too.

>> Also, my take on bitfields is that they are not thread/multi processor 
>> friendly (there is no
>> atomic "set bit"), and also compilers typically don't optimize well with 
>> that (so before
>> applying this patch, I would test on other platforms than gcc linux x86).
>
>Setting and reading individual bytes (u8 in sqlite-speak) are not 
>threadsafe either. Only reading/setting entire words 
>are threadsafe on most architectures.

Using a uint32 to store the flags is threadsafe. There are fewer than 32 
true/false values and read/set is simple. I see no difference between

if (uint8 == 0) {   // read/test bit
    uint8 = 1;      // set bit
    /* whatever more */
}

and 

if ((uint32 & MASK) == 0) { // read/test bit
    uint32 |= MASK;         // set bit
    /* whatever */
}

in speed, and a compiler should not generate worse code for the latter. So the claim:

>> Also, my take on bitfields is that they are not thread/multi processor 
>> friendly (there is no
>> atomic "set bit"), and also compilers typically don't optimize well with 
>> that (so before
>> applying this patch, I would test on other platforms than gcc linux x86).

is not true.


--
God is real, unless declared as an integer...





Re: [sqlite] R-Trees and SQLite

2007-05-22 Thread Eduardo Morras
At 18:09 20/05/2007, you wrote:
>In a previous post drh mentioned:
>
>  You need an R-Tree index to do something like this.  The
>  public-domain version of SQLite only supports B-Tree indices.
>
>  (http://www.mail-archive.com/sqlite-users%40sqlite.org/msg24643.html)
>
>Does this imply that there exists a commercial version of SQLite with
>R-Tree indexing?
>
>I did not see it mentioned here:
>
>  http://www.hwaci.com/sw/sqlite/prosupport.html

No, it just means that if you want an R-Tree SQLite version, you can contact 
him and contract him to add this feature.






Re: [sqlite] Efficiency and Pragmas

2007-04-18 Thread Eduardo Morras

>
>On 4/18/07, Samuel R. Neff <[EMAIL PROTECTED]> wrote:
>>
>>cache is per-connection so if you open and close connections a lot then a
>>large cache won't help your program.  The command line app is a single
>>connection so a large cache there will help (although not with the first
>>queries--only subsequent ones).
>>
>>Synchronous off is dangerous.  Search the history for details.
>>
>>Sam

If you are under Windows you can set PRAGMA cache_size to the default and try 
this: System Properties > Advanced > Performance > Advanced > Memory usage, 
and set it to "System cache". This way the system disk cache gets more 
(nearly all) of the memory. 





Re: [sqlite] Data structure

2007-04-12 Thread Eduardo Morras
At 17:35 11/04/2007, you wrote:
>Lloyd wrote:
>>
>>Sorry, I am not talking about the limitations of the system on our side,
>>but the end user who uses our software. I want the tool to run at its
>>best on a low-end machine also.
>>I don't want the capabilities of a database here. I just want to store
>>data, search for presence, and remove it when there is no more use for it.
>>Surely I will check out BerkeleyDB. The data set must be in RAM, because
>>the total size of it is very small (a few megabytes). I just want to
>>speed up the search, which is done millions of times.
>>Thanks,
>> LLoyd
>You might discover that you can craft a very effective memory resident storage 
>system using a compression system like Huffman Encoding and an index method 
>appropriate to the key you are using for retrieval.  That could work very well 
>in an embedded system, have a small footprint in data and code and be very 
>fast.

There is a book about that, Managing Gigabytes (1st and 2nd editions), which 
shows how to use compression techniques together with data search. The full 
source code is open source (or I remember it was). 





Re: [sqlite] Hashing question

2007-03-29 Thread Eduardo Morras
At 16:20 29/03/2007, you wrote:
>Eduardo Morras <[EMAIL PROTECTED]> wrote:
>> Hello:
>> 
>>   I'm trying to use altivec SIMD code in SQLite. I have read hash.c,
>> but it looks like hashing is done one by one when inserting data.
>> Is that true? When is the rehash function called? If I insert data with
>> an index, I think hashing happens one by one, but if I insert all
>> data and after that create an index, rehash works. Can anyone confirm
>> this? The rehash function is the only one I can convert to altivec
>> (with its own hashing functions).
>> 
>
>The hash.c code is only used for some internal housekeeping
>details within SQLite.  (Symbol tables and that kind of thing.)
>It does not come into play when actually executing SQL statements.
>SQL data is stored using B-Trees only.  No hashing.
>
>--
>D. Richard Hipp  <[EMAIL PROTECTED]>

Thanks D. Richard. I'm trying to speed up the "create index" code and thought 
that it creates a hash from the data and then inserts it (the hash/index) into 
the btree. AFAIK tree functions (insert, delete, move) cannot be converted to 
SIMD, but other code can. For example, in

CREATE TABLE tba1 (one varchar(15), two int, three real);
CREATE INDEX ind1 ON tba1(one, three);

an index ind1 is created. That's where I thought a hash was computed, 
ind1 = function(one, three), and it could be done with SIMD by working with 
4 (one, three) pairs at the same time.

Anyhow, thanks again. 





Re: [sqlite] Hashing question

2007-03-29 Thread Eduardo Morras
At 11:27 29/03/2007, you wrote:
>Hello:
>
>  I'm trying to use altivec SIMD code in SQLite. I have read hash.c, but it looks
> like hashing is done one by one when inserting data. Is that true? When is the
> rehash function called? If I insert data with an index, I think hashing happens
> one by one, but if I insert all data and after that create an index, rehash
> works. Can anyone confirm this? The rehash function is the only one I can
> convert to altivec (with its own hashing functions).
>
>Thanks 

Ok, I was wrong; it can be done, but it needs rework in other files too. The 
"void *sqlite3HashInsert(Hash *pH, const void *pKey, int nKey, void *data)" 
function must be changed to take 4 pKey,nKey pairs and an int saying whether 
there are 4, 3, 2 or 1 keys, or a special pKey,nKey pair with "no-op" meaning.

The best way, I think, is to make a *sqlite3HashInsertSimd or similar that 
checks which SIMD flavor (altivec, mmx, sse or similar) works, at startup or 
compile time, and the caller/user of sqlite3HashInsert decides, based on the 
amount of data to hash, whether to use the SIMD or the normal function (see 
the dispatch sketch below). I expect a x4 to x7 speed increase using altivec. 
I don't know for other SIMD architectures. 
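
A minimal sketch of that startup-time dispatch with a stand-in scalar hash; 
sqlite3HashInsertSimd and the batch variant are hypothetical names proposed 
above, not a real sqlite API:

typedef unsigned (*hash_fn)(const void *key, int nKey);

static unsigned hash_scalar(const void *key, int nKey) {
    const unsigned char *p = key;
    unsigned h = 0;
    while (nKey-- > 0) h = h * 31 + *p++;   /* stand-in scalar hash */
    return h;
}

/* On a real build this would be the altivec/sse batch version. */
static unsigned hash_simd(const void *key, int nKey) {
    return hash_scalar(key, nKey);          /* placeholder fallback */
}

/* Picked once, e.g. at startup; the caller only asks for the SIMD
 * routine when it has enough keys to batch. */
static hash_fn pick_hash(int cpu_has_simd, int nkeys) {
    return (cpu_has_simd && nkeys >= 4) ? hash_simd : hash_scalar;
}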








[sqlite] Hashing question

2007-03-29 Thread Eduardo Morras
Hello:

  I'm trying to use altivec SIMD code in SQLite. I have read hash.c, but it looks 
like hashing is done one by one when inserting data. Is that true? When is the 
rehash function called? If I insert data with an index, I think hashing happens 
one by one, but if I insert all data and after that create an index, rehash 
works. Can anyone confirm this? The rehash function is the only one I can 
convert to altivec (with its own hashing functions).

Thanks 





Re: [sqlite] One big database or various smaller ones?

2007-03-26 Thread Eduardo Morras
At 17:22 26/03/2007, you wrote:
>Hi
>
>I have a database with three tables (bigrams, trigrams and
>tetragrams). Basically, they are:
>  TABLE( A, B, Count)  IDX(A,B)  IDX(B)
>  TABLE( A, B, C, Count)  IDX(A,B,C)  IDX(B)  IDX(C)
>  TABLE( A, B, C, D, Count) IDX (A,B,C,D) IDX(B) IDX(C) IDX(D)
>
>Now, this database is quickly approaching 2GB of disk space.
>My question, about efficiency, is, it better to have:
>- one big database, one connection from C code to the database
>- three smaller databases, three connection from C code to the database
>
>Thank you
>Alberto

The problem with this database is that the three tables are related, no? 
A, B, C and D are the same data in all tables. If that's not true and the 
tables have separate data, you can easily split into three database files, 
permitting concurrent writing and getting faster read times if they are on 
distinct drives.

If the tables don't have separate data (the A in the first table is the same 
in the second and third), you must update the 3 databases in cascade.





Re: [sqlite] Index creation on huge table will never finish.

2007-03-22 Thread Eduardo Morras

At 04:47 22/03/2007, you wrote:


I don't think that solves my problem.  Sure, it guarantees that the IDs are
unique, but not the strings.

My whole goal is to be able to create a unique identifier for each string,
in such a way that I don't have the same string listed twice, with different
identifiers.

In your solution, there is no way to lookup a string to see if it already
exists, since there is no index on the string.

Thanks,
Chris


So you have a file with data: a large collection of strings, 112 
million of them, each at most 80 bytes, although typically 
shorter.

How do you manage repeated data? Replace? First in? Modify the string to be unique?

You want to put them in a sqlite3 database, but each string must appear only 
once. The problem I see here is whether your data file has repeated 
strings or not. I think that a grep or a perl script can help you a 
lot by cleaning your data first. Then the import into the database will be 
fast. (Alternatively, see the sketch below for letting sqlite do the dedup.)
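
A minimal sketch of that sqlite-side dedup, with an illustrative table name; 
INSERT OR IGNORE silently skips strings already present, so each string gets 
exactly one id:

#include <sqlite3.h>

void insert_unique(sqlite3 *db, const char *s) {
    sqlite3_stmt *stmt;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS strings("
        "  id INTEGER PRIMARY KEY,"
        "  s  TEXT UNIQUE);", 0, 0, 0);
    sqlite3_prepare_v2(db,
        "INSERT OR IGNORE INTO strings(s) VALUES(?);", -1, &stmt, 0);
    sqlite3_bind_text(stmt, 1, s, -1, SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
}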


HTH




--
"Hemos encontrado al enemigo y somos nosotros"






Re: [sqlite] Saving binary files

2007-03-18 Thread Eduardo Morras

At 19:00 18/03/2007, you wrote:

Hello John,

thanks for the valuable piece of advice. The idea is that either

1) I store data in tabular form and work with them
2) I create a table of blobs and each blob is the binary content of a file

(2) is my method in question; for (1) we all know it works. So I turned to
SQLite just because it seems that it is a lightweight single-file database.
So, even if I don't like (2), I can set up an implementation where I have a
file system inside a fully portable file.

BR

dimitris


You can use zlib to do what you want. It has functions to add and 
delete files (via its minizip contrib code), it's a flat file, and it 
provides medium/good compression. You can store your file metadata in 
SQLite: the zip filename, the name of the binary file, an abstract, or 
even a password for the zip file (a sketch of such a table follows).
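
A minimal sketch of that metadata table; all column names are illustrative:

#include <sqlite3.h>

void create_file_meta(sqlite3 *db) {
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS file_meta("
        "  zip_name  TEXT,"   /* which zip archive holds the file */
        "  file_name TEXT,"   /* name of the binary inside the zip */
        "  abstract  TEXT,"   /* short description */
        "  zip_pass  TEXT);", /* optional password for the zip */
        0, 0, 0);
}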


HTH 






Re: [sqlite] sqlite Performance

2007-03-16 Thread Eduardo Morras

At 15:51 16/03/2007, you wrote:

Dennis,

  Yes, the data will be read later by downstream processing.

  I do have the option of either putting the data into sqlite at 
the start (when it's read) or putting it into a flat file and then 
later loading it into a sqlite db via a downstream job.


  A great deal of the data columns are simple numeric values, and 
that's where sqlite really shines, in that it is portable between 
systems having differing endianness.


  Here is a summary of the entire processing, where a1/b1 are 
different processes and probably different host platforms.


  a1. Read from the producing system, generate output data (be it flat 
file or sqlite).

  a2. Compress data file.
  a3. Transfer  compressed datafile to target system.
  a4. Goto a1

  b1. Receive datafile and uncompress.
  b2. Read datafile and load into Master DB. (This might just be a 
simple attach).

  b3. Massage data in Master db ???
  b4. Read and process data from MasterDb. Delete or mark as deletable.
  b5. Delete processed data from MasterDb. (This could be in a 
separate thread.)

  b6. Goto step b1.

  The nice thing about simply attaching as a DB in step b2 is that 
when all data is processed in step b4, then step b5 to purge is a 
simple detach and operating-system unlink of the underlying 
datafile, which I suspect will be infinitely faster than a sql delete command.


  Thanks,
  Ken


Ken, I have your same scenario: the producing system generates data 
and it's read by an SBC card. But, as Dennis says in his last message, 
run sqlite on that card system. From time to time dump the database 
to the b system for backup. It's a lot easier than 
read/compress/transfer/insert on different machines.





---
Useful Acronymous : DMCA = Don't Make Content Accessible 






Re: [sqlite] Embedded Database Functions

2006-12-24 Thread Eduardo Morras

At 17:00 24/12/2006, you wrote:

On Sun, Dec 24, 2006 at 09:35:01AM -0600, John Stanton wrote:

> >There is even a starting grammar for you:
> >  http://www.antlr.org/grammar/1107752678378/PLSQLGrammar.g

> A compiler for a subset of PL/SQL would not be too arduous a project,

If what you want is something like PL/SQL, it might be both easier and
better to retarget PostgreSQL's PL/pgSQL for SQLite opcodes, rather
than starting from scratch with your own Oracle PL/SQL style language.
(And PL/pgSQL's BSD license is compatible with SQLite's.)

--
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/


I don't know how big the PL/SQL implementation in PostgreSQL is; I 
think SQLite (200-300KB) would at least double in size, to 500KB or 
more, by adding the PostgreSQL code. I doubt that the Postgres 
implementation is done with lemon, which is the LALR(1) parser 
generator used by SQLite, so some incompatibilities may/will occur; 
that means glue code or duplicated code (lemon + the pgsql PL/pgSQL 
parser) and makes the work harder and unmaintainable in the future.



Alien.org contacted...waiting for Ripley 






Re: [sqlite] Embedded Database Functions

2006-12-23 Thread Eduardo Morras

At 03:41 22/12/2006, you wrote:
There has been discussion about extending Sqlite to have more 
functions, but the risk is creating Sqlite-Bloat and losing the most 
endearing feature of the product, its light weight and simplicity.


Here is an interesting and thought provoking discussion on the 
general subject.  A good case is made for more extensive use of a 
data-specific language, specifically PL/SQL.


It would be a major restructure of Sqlite, but it would be 
intriguing to see the VDBE virtual machine extended to have the 
features to execute a subset of PL/SQL and to have the Sqlite 
compiler also compile the PL/SQL dialect into virtual machine 
code.  Then executables could be stored in the database, functions 
could be added and with ingenious design the footprint could be 
smaller by splitting off the compiler and having as many internal 
functions as possible realized in the dense virtual machine 
code.  It is not a new concept, having been the principle behind 
Java and many other similar projects, but it is one where the ideal 
solution is yet to emerge.


By using an efficient virtual machine with a dense target code 
performance should be good and be frugal on resources on embedded 
devices.  The virtual machine code is processor independent, making 
distribution of modules very straight forward.


http://www.viacognis.com/devicesql/ArticleDataCentricSoftware.pdf


Sounds very interesting, and after reading it I think there is no 
embedded database like sqlite that has PL/SQL; only Oracle and 
databases of similar size do. I think it's a very good idea and I can 
help develop it, but I don't know much about the VDBE code. I think 
adding control structures to SQLite's SQL will not make it much 
heavier, perhaps 20-40 KB of optional code in the parser/VDBE. 
Optimization will be another headache.


But the question is always the same: Who? Read the message from Dr. 
Richard Hipp on 23/10/2006 with the subject "Re: [sqlite] Extra 
functions - New Project?"




-
Useful Acronymous : FAQ = Frequently 'Answered' Questions   





