Re: [sqlite] PRAGMA cache_size=0 increases memory usage

2017-05-28 Thread Kim Gräsman
Hi Bob,

On 18 May 2017 at 9:15 pm, "Bob Friesenhahn" <
bfrie...@simple.dallas.tx.us> wrote:

On Thu, 18 May 2017, Kim Gräsman wrote:

>
>> The request is issued early on when the connection is first opened so no
>> actual queries have been issued at that time.
>>
>
> Then my (black-box) guess is that you're seeing the bump from
> allocating heap space for whatever structures the schema needs.
>

Our schema takes a bit over 1MB to load on a 32-bit CPU.  The increased
usage we are seeing is on the order of 200k so it is not the schema.
Regardless, the application is using the database immediately so it must
always consume the schema.


Well, when you run pragma cache_size before any other queries, it *will*
load the schema. So unless you are also seeing a 1MB bump besides the 200K,
it must be schema data. Maybe it occupies less memory than you've
estimated?

Perhaps the SQLite devs can think of a way to diagnose this more exactly.
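
In the meantime, here's the diagnostic I'd try first: ask the
connection itself how much memory its schema occupies. A sketch
(SQLITE_DBSTATUS_SCHEMA_USED is the op I have in mind; the db handle
is whatever your app holds):

  #include <stdio.h>
  #include "sqlite3.h"

  /* Print how much heap the parsed schema of an open connection
  ** occupies; compare against the ~200k bump you are seeing. */
  static void print_schema_memory(sqlite3 *db) {
    int cur = 0, hi = 0;
    if (sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED,
                          &cur, &hi, 0) == SQLITE_OK) {
      printf("schema memory: %d bytes\n", cur);
    }
  }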

- Kim


Re: [sqlite] PRAGMA cache_size=0 increases memory usage

2017-05-18 Thread Kim Gräsman
On Thu, May 18, 2017 at 8:27 PM, Bob Friesenhahn
<bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 18 May 2017, Kim Gräsman wrote:
>
>> On Thu, May 18, 2017 at 6:10 PM, Bob Friesenhahn
>> <bfrie...@simple.dallas.tx.us> wrote:
>>>
>>> Does anyone know why using 'PRAGMA cache_size=0' (or some other small
>>> value)
>>> to attempt to decrease memory usage (and it is reported as immediately
>>> decreased in the shell by .stats) actually significantly increases heap
>>> memory usage?
>>
>>
>> It sounds like you have an active operation with a larger cache size
>> before issuing the PRAGMA -- is that so?
>
>
> The request is issued early on when the connection is first opened so no
> actual queries have been issued at that time.

Then my (black-box) guess is that you're seeing the bump from
allocating heap space for whatever structures the schema needs.

> An earlier developer had tried the same thing almost 5 years ago (with a
> much older sqlite) and noticed a 200k jump in heap usage.

We're at 3.14.1, so I'm also speaking from an older timeframe.

- Kim


Re: [sqlite] PRAGMA cache_size=0 increases memory usage

2017-05-18 Thread Kim Gräsman
On Thu, May 18, 2017 at 6:10 PM, Bob Friesenhahn
<bfrie...@simple.dallas.tx.us> wrote:
> Does anyone know why using 'PRAGMA cache_size=0' (or some other small value)
> to attempt to decrease memory usage (and it is reported as immediately
> decreased in the shell by .stats) actually significantly increases heap
> memory usage?

It sounds like you have an active operation with a larger cache size
before issuing the PRAGMA -- is that so?

If not, 'PRAGMA cache_size' loads the schema under the hood, and
depending on how large/complex it is, this can make quite a footprint.

I noticed this first hand when I issued 'PRAGMA cache_size=128' with a
fixed heap on a large schema and immediately ran out of memory.
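
A quick way to see the effect for yourself, as a black-box sketch
("app.db" just stands in for your database file):

  #include <stdio.h>
  #include "sqlite3.h"

  int main(void) {
    sqlite3 *db;
    sqlite3_int64 before, after;
    if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;
    before = sqlite3_memory_used();
    /* First statement on the connection: forces the schema load. */
    sqlite3_exec(db, "PRAGMA cache_size=0;", 0, 0, 0);
    after = sqlite3_memory_used();
    printf("bump: %lld bytes\n", (long long)(after - before));
    sqlite3_close(db);
    return 0;
  }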

- Kim


[sqlite] Duplicate documentation

2017-05-04 Thread Kim Gräsman
Hi all,

I just recently realized that there are two sets of documentation for
the SQLite configuration options:
https://www.sqlite.org/c3ref/c_config_getmalloc.html (this looks outdated)
and
https://www.sqlite.org/c3ref/c_config_covering_index_scan.html

Similarly for the DB configuration verbs:
https://www.sqlite.org/c3ref/c_dbconfig_lookaside.html (old) vs
https://sqlite.org/c3ref/c_dbconfig_enable_fkey.html

The documentation seems to be internally self-consistent, but Google
still indexes the old content, and it's hard to keep track of which
version you're seeing depending on how you got there.

Is there a way to fix this somehow?

Thanks,
- Kim


Re: [sqlite] SQLite memory calculator (was Multiple sessions, page sizes and cache)

2017-04-27 Thread Kim Gräsman
On Wed, Apr 26, 2017 at 11:00 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
> I've used a spreadsheet internally to forecast how much memory SQLite
> will need in a given configuration.
>
> I've cleaned/generalised it a little and posted here:
> https://docs.google.com/spreadsheets/d/1cCawZE9AdF4jlmgAMjbz_pGZI0o9WkWddPyV2YglFCY/edit#gid=0
>
> Since we now get out-of-memory with a fixed heap, I'm guessing there's
> some kind of think-o in the calculations above.

Let me confirm that.

I'd completely neglected to perform the actual Robson calculation...
So the totals I'd calculated were just a detailed forecast for the M
part of Robson's proof.

I still need to scale it by n to get a proper heap size, N:
https://www.sqlite.org/malloc.html#_mathematical_guarantees_against_memory_allocation_failures

Sorry about the noise.

This is actually simple to add to the spreadsheet, so I'll extend it
to cover Robson end-to-end.
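
For the archives, the constraint as the memsys5 sources state it, with
n the largest allocation divided by the smallest (both rounded up to
powers of two), M the maximum outstanding memory, and N the heap size
needed:

  N >= M*(1 + log2(n)/2) - n + 1

As an illustration with our numbers: a 32-byte minimum allocation and
a largest allocation around 64KiB gives n = 65536/32 = 2048 and
log2(n) = 11, so N needs to be roughly 6.5 * M.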

- Kim


Re: [sqlite] SQLite memory calculator (was Multiple sessions, page sizes and cache)

2017-04-27 Thread Kim Gräsman
Hi Simon,

On Wed, Apr 26, 2017 at 11:46 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 26 Apr 2017, at 10:00pm, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Specifically, I wonder if 4MiB for the general-purpose heap is maybe
>> entirely unreasonable? Is there a way to forecast how much memory will
>> be necessary for transactions and query processing, or does that
>> depend entirely on the workload?
>
> Depends on a number of factors, including the data present in the
> database.  In other words, you can forecast memory-usage based on the
> data in your test database, but that does not allow you to predict
> memory-use for your customer’s databases unless you know how many rows
> they have in their tables, whether the chunkiness of their data has
> changed since the last ANALYZE and such things.  This isn’t the way
> you’re meant to use SQL.  You’re meant to issue your commands and have
> the SQL engine worry about executing them.

https://www.sqlite.org/malloc.html leads me to believe otherwise.
Especially the zero-malloc feature-set seems exactly aimed at
controlling this.

> So do you have good indexes ?

Yes. And we force index use explicitly to make sure we get consistent
results, instead of relying on the query planner.

> With them, SQLite can just iterate through the entries in the table, as
> indicated by index order.  This dramatically reduces usage of the cache.

Cache usage should not be a problem -- we can control that with the
soft heap limit to have it auto-adjust.

Compared to page cache, lookaside and scratch, general-purpose memory
is not as well-described. I think that makes sense -- it's essentially
"everything else" -- but I was expecting and hoping SQLite wouldn't
indiscriminately use general-purpose memory for transaction/query
processing. That said, it's hard to see through all the layers here.
Some of this is described in:
https://www.sqlite.org/malloc.html#_computing_and_controlling_parameters_m_and_n

I don't see anything there that applies directly to our situation.

Thanks,
- Kim


[sqlite] SQLite memory calculator (was Multiple sessions, page sizes and cache)

2017-04-26 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 5:18 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
> Hi all,
>
> On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
>> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>>
>>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>>> I just trade malloc heap fragmentation for SQLite private heap
>>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>>> well even in the face of multiple page sizes and connections?
>>
>> As described in the document that you read
>> (https://www.sqlite.org/malloc.html), if you keep your memory usage
>> below the Robson limit, then fragmentation will never be a problem.
>> Guaranteed.
>>
>> If you exceed the Robson limit, it is theoretically possible that
>> fragmentation could cause problems.  But in practice, we don't
>> normally have issues.
>>
>>> 2) What happens if the configured heap is depleted?
>>
>> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
>> your memory utilization and raise alarms if it gets too high.  And/or
>> call sqlite3_db_release_memory() to try to free up memory from caches
>> if you start to run low.
>
> I've now worked through our application to be able to use
> SQLITE_CONFIG_HEAP. I thought I'd share some observations:
>
> - With SQLITE_CONFIG_HEAP, different page sizes between connections
> are no longer a problem -- they are allocated from the same arena, but
> can co-exist nicely. That said, we've gone to the same page size for
> all connections/databases to keep the maths easier
> - Prepared statements were an unexpected problem. We had unbounded
> statement caches with a large number of statements, hogging memory
> from SQLite's private heap. The total size requirement in extreme
> situations came to almost 20MB. By turning the statement caches into
> LRU caches and limiting their size in bytes, we could cap the maximum
> number of bytes used for cached statements and use that number to
> forecast heap size requirements.
> - We've set the minimum allocation size to 32. This led to higher
> memory usage compared to malloc, because small allocations were
> rounded up, e.g. for statements.
> - We recompiled SQLite with SQLITE_DEFAULT_PAGE_SIZE= and
> SQLITE_DEFAULT_CACHE_SIZE=128. This means any ad-hoc connection that
> doesn't explicitly specify page/cache size will play well with the
> system at large. It may be slow by default, but can be trivially sped
> up by issuing "PRAGMA cache_size=n" after making system-wide
> adjustments to heap size, to cover for the additional memory pressure.
> - Besides forecast for scratch, page cache, lookaside and cached
> statements, I've added 4MB, arbitrarily, for general-purpose
> allocations
> - We use sqlite3_soft_heap_limit64 to get softer handling of heap
> depletion. I've set the soft heap limit to (heapsize - 1MB), to give
> SQLite ample room for small "soft" overruns, on the assumption that
> individual allocations typically aren't larger than 1MB.

I've used a spreadsheet internally to forecast how much memory SQLite
will need in a given configuration.

I've cleaned/generalised it a little and posted here:
https://docs.google.com/spreadsheets/d/1cCawZE9AdF4jlmgAMjbz_pGZI0o9WkWddPyV2YglFCY/edit#gid=0

Since we now get out-of-memory with a fixed heap, I'm guessing there's
some kind of think-o in the calculations above.

Specifically, I wonder if 4MiB for the general-purpose heap is maybe
entirely unreasonable? Is there a way to forecast how much memory will
be necessary for transactions and query processing, or does that
depend entirely on the workload?

The calculator seems generally useful, so if we can iron out good
guidelines it may come in handy for someone else.

Many thanks,
- Kim


Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
On Wed, Apr 26, 2017 at 5:58 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
> On Wed, Apr 26, 2017 at 5:34 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Great, that means the numbers add up. This is a monster transaction
>> updating 5M rows, and page size is 512 bytes, so I think we have roughly 1
>> row/page.
>
> Why such a small page size, though? What's the native filesystem page size
> in your situation?
> I suspect having a single row per page (or worse, fewer than 1 per page) is
> not ideal perf-wise, no?

Yes, the small page size was chosen for other reasons; I think we have
some work to do to find a better size here.

But that's not really germane to this -- I just wanted to understand why
the -shm grew so large, and now I understand the maths better.

Thanks,
- Kim


Re: [sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
On 26 Apr 2017 at 3:45 pm, "Richard Hipp" <d...@sqlite.org> wrote:

> On 4/26/17, Richard Hipp <d...@sqlite.org> wrote:
> >  That would imply you are changing about a
> > half million pages of your database inside a single transaction.
>
> Correction:  About 5 million pages.  Missed a zero.  (Time for coffee, I
> guess)
>

Always time for coffee.

Great, that means the numbers add up. This is a monster transaction
updating 5M rows, and page size is 512 bytes, so I think we have roughly 1
row/page.

I never got the sense that the -shm size was directly proportional to
the number of pages modified, but I can see that now.
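
If I read the wal-index format doc right, each 32 KiB block of the
-shm file maps 4096 frames: a 16 KiB array of 32-bit page numbers plus
a 16 KiB hash table. So, roughly:

  5,000,000 frames / 4096 frames per block ~= 1221 blocks
  1221 blocks * 32 KiB ~= 38 MiB

which lines up nicely with the ~40 MiB file we saw.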

Thanks,
- Kim


[sqlite] -shm grows with large transaction

2017-04-26 Thread Kim Gräsman
Hi again,

I've been experimenting with limiting memory usage in our SQLite-based
app. Ran into an unrelated oddity that I thought I'd ask about:

We're running a couple of massive upgrade steps on over 5 million
quite large (70+ columns) rows.

There are two unrelated steps;

1) DROP COLUMN-replacement where all data is copied to a table with
the new schema, the old table is dropped, and the new table is renamed
to replace
2) UPDATE statement of all rows

These are obviously huge transactions, so I was expecting to see the
WAL file grow to about the same size as the original database.

But for some reason, the WAL-index (-shm) file also grows to about
40MiB in size. From the docs, I've got the impression that it would
typically stay at around 32KiB. Does this seem normal? I just ran into
some high-level documentation for the WAL-index [1] where it indicates
there's a page correlation -- is it growing so much because we're
touching so many pages?

SQLite v.3.14.1.

Thanks,
- Kim

[1] https://www.sqlite.org/fileformat2.html#walindexformat


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-24 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 5:18 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
> I have a nagging suspicion that the tempdb is allocated from the
> private heap (we've set pragma temp_store=memory) -- what are the
> characteristics of that database? Can I forecast how large it will be?

The behavior is the same with pragma temp_store=file (turns out it was
`file` when this statement ran all along.)

I've tried dumping all available global memory statistics when I run
out of memory, but I don't see much I can use:

SQLite error 7: failed to allocate 1024 bytes
SQLite error 7: failed to allocate 1024 bytes
SQLite error 7: statement aborts at 3: [UPDATE mytable SET ... WHERE ...]
Effective soft heap limit: 22192128 bytes
Memory used 5064704 (max 22535968)
Number of outstanding allocations: 9807 (max 26872)
Number of pcache pages used: 0 (max 0)
Number of pcache overflow bytes: 3239936 (max 3239936)
Number of scratch allocations used: 0 (max 0)
Number of scratch overflow bytes: 0 (max 1024)
Largest allocation: 65540 bytes
Largest pcache allocation: 672 bytes
Largest scratch allocation: 560 bytes

It looks like the highwater for memory-used is when the heap was
depleted. But other than that, I'm stumped. What's a good way to
understand where memory is going in this statement?
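
(For reference, the dump above comes from a helper along these lines --
a sketch, showing only a few of the SQLITE_STATUS_* ops:)

  #include <stdio.h>
  #include "sqlite3.h"

  /* Print current/highwater values for one global status counter. */
  static void dump_status(int op, const char *zLabel) {
    int cur = 0, hi = 0;
    sqlite3_status(op, &cur, &hi, 0);
    printf("%s %d (max %d)\n", zLabel, cur, hi);
  }

  static void dump_all(void) {
    dump_status(SQLITE_STATUS_MEMORY_USED, "Memory used");
    dump_status(SQLITE_STATUS_MALLOC_COUNT,
                "Number of outstanding allocations:");
    dump_status(SQLITE_STATUS_PAGECACHE_OVERFLOW,
                "Number of pcache overflow bytes:");
    dump_status(SQLITE_STATUS_MALLOC_SIZE, "Largest allocation:");
    /* ...and so on for the remaining ops. */
  }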

Thanks,
- Kim


Re: [sqlite] SQLITE_OMIT_* vs amalgamation

2017-04-23 Thread Kim Gräsman
On Wed, Apr 19, 2017 at 1:18 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 4/19/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>> I'm building for both Windows and Linux and it's a reproducible build,
>> so I'd rather not depend on anything extra.
>
> The only thing extra you need is tclsh installed as it is used during
> the build process.  Besides TCL, just a C compiler and the ordinary
> tools.  On unix, just run "./configure; make".  On Windows with MSVC
> the command is "nmake /f makefile.msc"

OK, thanks! I'll see what I can do.

- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-21 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 6:21 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 21 Apr 2017, at 5:16pm, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Could be. Would that show up in EXPLAIN somehow?
>
> You would find it easier to spot using EXPLAIN QUERY PLAN.  'scan' means it’s 
> just looking through the entire table, but you’ll see mentions of indexes 
> there.
>
>> The statement I've
>> had trouble with so far is an UPDATE of a 5-million-row table.
>
> Does it have a WHERE clause ?  SQLite may decide to do ordering when you use 
> WHERE or ORDER BY.  If there is no convenience index then it will create a 
> temporary one.
>
> There are ways to minimise this, using ANALYZE and creating permanent 
> indexes, but it would probably be easier to start from your WHERE clause and 
> which indexes you have already created.

Thanks.

There is a WHERE clause, but EXPLAIN QUERY PLAN does not mention any indexes:

sele  order  from  deta
----  -----  ----  ----
0     0      0     SCAN TABLE mytable

Also, this is just one of many (hundreds of) statements, and I don't
have a good way to audit all of them by hand. You're essentially saying
that any statement may allocate pages corresponding to an index for
filtered columns... I can see how that can get large, quickly.
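
One way to audit them mechanically might be to prepend EXPLAIN QUERY
PLAN to each statement and scan the detail column for transient
structures. A sketch (needs <stdio.h> and <string.h>; the statement
list is whatever your app keeps around):

  /* Flag statements whose query plan builds transient structures. */
  static void audit_stmt(sqlite3 *db, const char *zSql) {
    char *zEqp = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", zSql);
    sqlite3_stmt *pStmt = 0;
    if (sqlite3_prepare_v2(db, zEqp, -1, &pStmt, 0) == SQLITE_OK) {
      while (sqlite3_step(pStmt) == SQLITE_ROW) {
        const char *zDetail = (const char *)sqlite3_column_text(pStmt, 3);
        if (zDetail && (strstr(zDetail, "AUTOMATIC")
                        || strstr(zDetail, "TEMP B-TREE"))) {
          printf("transient structure in [%s]: %s\n", zSql, zDetail);
        }
      }
    }
    sqlite3_finalize(pStmt);
    sqlite3_free(zEqp);
  }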

Reading https://sqlite.org/tempfiles.html, it looks like these
temporary/transient indexes go into the temp store. I'm guessing the
temp store is also allocated from the private heap if
temp_store=memory?

- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-21 Thread Kim Gräsman
On Fri, Apr 21, 2017 at 6:05 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 21 Apr 2017, at 4:18pm, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Unfortunately, we're getting occasional out-of-memory errors on
>> statement execution.
>
> Is there a chance that your statements are making up temporary indexes ?  
> They can be arbitrarily large depending on how many rows you have in a table.

Could be. Would that show up in EXPLAIN somehow? The statement I've
had trouble with so far is an UPDATE of a 5-million-row table.

Thanks,
- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-04-21 Thread Kim Gräsman
Hi all,

On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>> I just trade malloc heap fragmentation for SQLite private heap
>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>> well even in the face of multiple page sizes and connections?
>
> As described in the document that you read
> (https://www.sqlite.org/malloc.html), if you keep your memory usage
> below the Robson limit, then fragmentation will never be a problem.
> Guaranteed.
>
> If you exceed the Robson limit, it is theoretically possible that
> fragmentation could cause problems.  But in practice, we don't
> normally have issues.
>
>>
>> 2) What happens if the configured heap is depleted?
>
> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
> your memory utilization and raise alarms if it gets too high.  And/or
> call sqlite3_db_release_memory() to try to free up memory from caches
> if you start to run low.

I've now worked through our application to be able to use
SQLITE_CONFIG_HEAP. I thought I'd share some observations:

- With SQLITE_CONFIG_HEAP, different page sizes between connections
are no longer a problem -- they are allocated from the same arena, but
can co-exist nicely. That said, we've gone to the same page size for
all connections/databases to keep the maths easier
- Prepared statements were an unexpected problem. We had unbounded
statement caches with a large number of statements, hogging memory
from SQLite's private heap. The total size requirement in extreme
situations came to almost 20MB. By turning the statement caches into
LRU caches and limiting their size in bytes, we could cap the maximum
number of bytes used for cached statements and use that number to
forecast heap size requirements.
- We've set the minimum allocation size to 32. This led to higher
memory usage compared to malloc, because small allocations were
rounded up, e.g. for statements.
- We recompiled SQLite with SQLITE_DEFAULT_PAGE_SIZE= and
SQLITE_DEFAULT_CACHE_SIZE=128. This means any ad-hoc connection that
doesn't explicitly specify page/cache size will play well with the
system at large. It may be slow by default, but can be trivially sped
up by issuing "PRAGMA cache_size=n" after making system-wide
adjustments to heap size, to cover for the additional memory pressure.
- Besides forecast for scratch, page cache, lookaside and cached
statements, I've added 4MB, arbitrarily, for general-purpose
allocations
- We use sqlite3_soft_heap_limit64 to get softer handling of heap
depletion. I've set the soft heap limit to (heapsize - 1MB), to give
SQLite ample room for small "soft" overruns, on the assumption that
individual allocations typically aren't larger than 1MB.
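
For concreteness, the startup sequence now looks roughly like this (a
sketch: HEAP_SIZE is a placeholder for the forecast total, and
SQLITE_CONFIG_HEAP requires a build with SQLITE_ENABLE_MEMSYS5):

  #include <stdlib.h>
  #include "sqlite3.h"

  #define HEAP_SIZE   (32 * 1024 * 1024)  /* placeholder: forecast total */
  #define SOFT_MARGIN ( 1 * 1024 * 1024)  /* headroom for "soft" overruns */

  static int init_sqlite_memory(void) {
    void *heap = malloc(HEAP_SIZE);
    int rc;
    if (!heap) return SQLITE_NOMEM;
    /* All SQLite allocations come out of this block; 32 = min alloc. */
    rc = sqlite3_config(SQLITE_CONFIG_HEAP, heap, HEAP_SIZE, 32);
    if (rc != SQLITE_OK) return rc;
    rc = sqlite3_initialize();
    if (rc != SQLITE_OK) return rc;
    /* Make caches flush before a hard SQLITE_NOMEM near depletion. */
    sqlite3_soft_heap_limit64(HEAP_SIZE - SOFT_MARGIN);
    return SQLITE_OK;
  }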

Unfortunately, we're getting occasional out-of-memory errors on
statement execution.

I've started looking into `.stats` in the sqlite3 shell, and am
thinking about pulling out the memory statistics inside our
application to try and understand where memory is going. Any idea what
I might look out for?

I have a nagging suspicion that the tempdb is allocated from the
private heap (we've set pragma temp_store=memory) -- what are the
characteristics of that database? Can I forecast how large it will be?

Many thanks,
- Kim


Re: [sqlite] SQLITE_OMIT_* vs amalgamation

2017-04-18 Thread Kim Gräsman
I'm building for both Windows and Linux and it's a reproducible build,
so I'd rather not depend on anything extra.

Thanks,
- Kim

On Wed, Apr 19, 2017 at 2:10 AM, David Burgess <dburg...@gmail.com> wrote:
> If you are running Linux, the build from source (i.e. making your own
> amalgamation) is straightforward.
> You may have to install a few extras, but nothing onerous.
>
>
>
> On Wed, Apr 19, 2017 at 5:57 AM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>> Hi all,
>>
>> The docs at https://sqlite.org/compile.html are a little ominous when
>> it comes to the OMIT flags with the amalgamation:
>>
>>> Important Note: The SQLITE_OMIT_* options may not work with the 
>>> amalgamation.
>>> SQLITE_OMIT_* compile-time options usually work correctly only when SQLite 
>>> is
>>> built from canonical source files.
>>
>> Is this still true? I see there are some OMIT flags that affect parser
>> generation and plug into the build system, but most of them are really
>> just preprocessor toggles, right?
>>
>> I'm specifically looking to enable:
>>
>> * SQLITE_OMIT_AUTOINIT
>> * SQLITE_OMIT_SHARED_CACHE
>>
>> hoping for some speedups.
>>
>> Thanks,
>> - Kim


[sqlite] SQLITE_OMIT_* vs amalgamation

2017-04-18 Thread Kim Gräsman
Hi all,

The docs at https://sqlite.org/compile.html are a little ominous when
it comes to the OMIT flags with the amalgamation:

> Important Note: The SQLITE_OMIT_* options may not work with the amalgamation.
> SQLITE_OMIT_* compile-time options usually work correctly only when SQLite is
> built from canonical source files.

Is this still true? I see there are some OMIT flags that affect parser
generation and plug into the build system, but most of them are really
just preprocessor toggles, right?

I'm specifically looking to enable:

* SQLITE_OMIT_AUTOINIT
* SQLITE_OMIT_SHARED_CACHE

hoping for some speedups.

Thanks,
- Kim


Re: [sqlite] Query default lookaside pool size

2017-03-08 Thread Kim Gräsman
On Wed, Mar 8, 2017 at 2:41 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 8 Mar 2017, at 11:09am, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
>> Is there a way to query SQLite build parameters at runtime, more
>> specifically SQLITE_DEFAULT_LOOKASIDE?
>
> <https://www.sqlite.org/pragma.html#pragma_compile_options>

Ah. Love it, thanks!
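
For the archives, the C-level equivalent (note that only options set
explicitly at build time are recorded here, so a default
SQLITE_DEFAULT_LOOKASIDE won't show up):

  #include <stdio.h>
  #include "sqlite3.h"

  int main(void) {
    const char *zOpt;
    int i;
    /* Enumerate every compile-time option recorded in the build. */
    for (i = 0; (zOpt = sqlite3_compileoption_get(i)) != 0; i++) {
      printf("SQLITE_%s\n", zOpt);
    }
    return 0;
  }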

- Kim


[sqlite] Query default lookaside pool size

2017-03-08 Thread Kim Gräsman
Hi all,

Is there a way to query SQLite build parameters at runtime, more
specifically SQLITE_DEFAULT_LOOKASIDE?

Thanks,
- Kim


Re: [sqlite] Minor doc patches

2017-03-06 Thread Kim Gräsman
On Mon, Mar 6, 2017 at 12:36 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 3/6/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>> Hi all,
>>
>> I'm reading the documentation on memory
>> (https://www.sqlite.org/malloc.html) again and stumbled on some typos:
>
> Thanks.  Fixed now.

Thanks! Already live on the website, nice.

- Kim


[sqlite] Minor doc patches

2017-03-06 Thread Kim Gräsman
Hi all,

I'm reading the documentation on memory
(https://www.sqlite.org/malloc.html) again and stumbled on some typos:

> 3.3 Page cache memory
> [... ] It is not unusual to see _the database page cache consumes_ over 10 
> times more memory than the rest of SQLite combined.

This should read "the database page cache _consume_"

> 3.6. Setting memory usage limits
> [...]
> If attempts are made to _allocate more memory that specified_ by the soft 
> heap limit

Should read "allocate more memory _than_ specified"

> [...]
> If SQLite is not able to free up enough auxiliary memory to stay below the 
> limit, _it goes ahead and allocations_ the extra memory

Should read "it goes ahead and _allocates_ the extra memory"

For what it's worth,
- Kim


Re: [sqlite] Smallest reasonable cache size

2017-02-19 Thread Kim Gräsman
On Fri, Feb 17, 2017 at 8:37 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 2/17/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> Thanks, shrink_memory could come in handy. But this is not the problem
>> we're trying to solve -- rather we want to claim as much memory as possible
>> up-front to make mem usage deterministic and make it easier to diagnose
>> other subsystems with run-away memory usage.
>
> In that case, consider using MEMSYS5
> (https://www.sqlite.org/malloc.html#experimental_memory_allocators).
> You give SQLite a single big chunk of RAM up front, and that is all
> the memory it will ever use.  And it won't share any of it with any
> other subsystems.

As I've said earlier, we don't want to risk SQLITE_NOMEM.

But if we can combine sqlite3_soft_heap_limit64 with
SQLITE_CONFIG_HEAP, that might give us a nice self-adjusting system
(where caches are flushed when heap is running low) in the rare cases
that this might happen.

Do the two play nice together? I guess the challenge is to find a good
number for the limit, but we can afford to "waste" some memory to play
it safe here.

Thank you,
- Kim


Re: [sqlite] Smallest reasonable cache size

2017-02-17 Thread Kim Gräsman
On 17 Feb 2017 at 5:33 pm, "Richard Hipp" <d...@sqlite.org> wrote:

On 2/17/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>
> If we let it use malloc directly and control it
> with the soft heap limit, we'll have a steadily rising memory curve
> until all caches are fully loaded (could take weeks.)
>

Caches get flushed from time to time, for example when another process
writes to the database.

If that does not work out, you can always reclaim memory on demand
using "PRAGMA shrink_memory;"
(https://www.sqlite.org/pragma.html#pragma_shrink_memory) or
equivalently sqlite3_db_release_memory().  Both will call free() on
unused cache lines


Thanks, shrink_memory could come in handy. But this is not the problem
we're trying to solve -- rather we want to claim as much memory as possible
up-front to make mem usage deterministic and make it easier to diagnose
other subsystems with run-away memory usage.

- Kim


Re: [sqlite] Smallest reasonable cache size

2017-02-17 Thread Kim Gräsman
Thanks for your responses!

On Thu, Feb 16, 2017 at 8:56 PM, Dominique Pellé
<dominique.pe...@gmail.com> wrote:
>
> I think it's probably best to give a large cache_size to
> each connection, and limit the total amount of memory
> used by SQLite with sqlite3_soft_heap_limit64().
> This will effectively limit the global amount of cache
> used if cache pages comes from the heap.
>
> For example, if you give 20MB to each connection
> (PRAGMA cache_size=) and limit the global SQLite
> heap usage to 30 MB with sqlite3_soft_limit() then even
> if you have say 100 connections, SQLite will not use
> more than 30 MB of heap. If a connection needs to
> cache pages, unpinned cache pages from other
> connections will be discarded in LRU order. Strictly
> speaking, SQLite could use more than the 30 MB
> soft limit if it has no other choice to allocate memory
> (hence a soft limit), but in general that does not happen.

We want to limit/avoid incremental growth of caches, i.e. we prefer to
allocate X amount of memory up-front for caches, and have SQLite work
primarily with that. If we let it use malloc directly and control it
with the soft heap limit, we'll have a steadily rising memory curve
until all caches are fully loaded (could take weeks.)

Also, my mental model is that there's one memory arena, whether system
malloc or configured with SQLITE_CONFIG_PAGECACHE, but any number of
independent connection caches competing for memory from that arena. It
seems to me the behavior you're describing would require coordination
between caches, so maybe my assumptions are wrong here?

Thank you,
- Kim


[sqlite] Smallest reasonable cache size

2017-02-16 Thread Kim Gräsman
Hi all,

In my battles with lots of connections competing over precious cache
memory, I've considered giving some non-critical connections zero
cache using `PRAGMA cache_size=0`.

Is this a reasonable thing to do? If zero is too extreme, what might a
more moderate small cache size be? 32? 64? 100?

Some of these connections are only opened to do `PRAGMA quick_check;`.
I'm guessing that operation is not very cache-intensive, assuming it
reads pages sequentially and checks them?
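
Concretely, something like this sketch is what I have in mind for those
connections (path and error handling are illustrative):

  #include "sqlite3.h"

  /* Integrity-check a database on a throwaway, cache-less connection. */
  static int quick_check(const char *zPath) {
    sqlite3 *db = 0;
    int rc = sqlite3_open_v2(zPath, &db, SQLITE_OPEN_READONLY, 0);
    if (rc == SQLITE_OK) {
      sqlite3_exec(db, "PRAGMA cache_size=0;", 0, 0, 0);
      /* A real caller would capture the result rows via a callback. */
      rc = sqlite3_exec(db, "PRAGMA quick_check;", 0, 0, 0);
    }
    sqlite3_close(db);
    return rc;
  }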

Thanks for any advice on this,
- Kim


Re: [sqlite] Multiple connections, page sizes and cache

2017-02-16 Thread Kim Gräsman
I've now built an inventory of all our connections and their
lifetimes. They come in three primary classes:

1) Perpetual -- opened on app startup, closed on shutdown
2) Periodical, transient, serial -- only one at a time, happens quite
rarely (online backup)
3) Stochastic, transient, concurrent -- can happen whenever, and
potentially in parallel

Since 1 & 2 are unavoidable, it makes sense to reserve cache memory
for them up-front with SQLITE_CONFIG_PAGECACHE.

But the class-3 connections for the most part never happen. When they
do happen, they will definitely overlap with (1) and might overlap
with (2).

So I was thinking it would be useful for us to have these connections
bypass the global preallocated page cache buffer and just go straight
to malloc(). That way we don't risk that they steal pages from the
perpetual connections, and force *them* to go to malloc() for memory,
increasing overall memory usage.

Would something like "PRAGMA cache_allocator=malloc" be a crazy idea?

Thanks,
- Kim

On Sat, Feb 4, 2017 at 3:21 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
> Hi Richard,
>
> I got side-tracked with other work, but am coming back to this now.
>
> On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
>> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>>
>>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>>> I just trade malloc heap fragmentation for SQLite private heap
>>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>>> well even in the face of multiple page sizes and connections?
>>
>> As described in the document that you read
>> (https://www.sqlite.org/malloc.html), if you keep your memory usage
>> below the Robson limit, then fragmentation will never be a problem.
>> Guaranteed.
>>
>> If you exceed the Robson limit, it is theoretically possible that
>> fragmentation could cause problems.  But in practice, we don't
>> normally have issues.
>>
>>>
>>> 2) What happens if the configured heap is depleted?
>>
>> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
>> your memory utilization and raise alarms if it gets too high.  And/or
>> call sqlite3_db_release_memory() to try to free up memory from caches
>> if you start to run low.
>
> Thanks, makes sense!
>
> We are having a hard time estimating needed memory up-front, and we
> can't afford SQLITE_NOMEM (we'd rather take slower
> allocation/fragmentation). Especially since our app has a mix of
> long-running and transient connections, it's hard for us to tell
> exactly how many, with what cache needs, will be running concurrently
> at any one time.
>
> One thing that we've been discussing internally, and can't find a
> definitive answer to:
>
> Why are these arenas/buffers global, and not associated with each
> connection? It seems to me making a big chunk malloc() to match cache
> size when opening a connection would make it easier for an application
> such as ours, where some connections come and go. I guess it could be
> more fragmentation-/OOM-prone but with a reasonable malloc
> implementation it feels like it would be more flexible without
> sacrificing much of the benefits.
>
> Thank you,
> - Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-02-04 Thread Kim Gräsman
Hi Richard,

I got side-tracked with other work, but am coming back to this now.

On Sun, Jan 15, 2017 at 5:50 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 1/15/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> 1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
>> I just trade malloc heap fragmentation for SQLite private heap
>> fragmentation? Or does SQLite's fragmentation-prevention strategy work
>> well even in the face of multiple page sizes and connections?
>
> As described in the document that you read
> (https://www.sqlite.org/malloc.html), if you keep your memory usage
> below the Robson limit, then fragmentation will never be a problem.
> Guaranteed.
>
> If you exceed the Robson limit, it is theoretically possible that
> fragmentation could cause problems.  But in practice, we don't
> normally have issues.
>
>>
>> 2) What happens if the configured heap is depleted?
>
> You get SQLITE_NOMEM errors.  Use sqlite3_memory_used() to monitor
> your memory utilization and raise alarms if it gets too high.  And/or
> call sqlite3_db_release_memory() to try to free up memory from caches
> if you start to run low.

Thanks, makes sense!

We are having a hard time estimating needed memory up-front, and we
can't afford SQLITE_NOMEM (we'd rather take slower
allocation/fragmentation). Especially since our app has a mix of
long-running and transient connections, it's hard for us to tell
exactly how many, with what cache needs, will be running concurrently
at any one time.

One thing that we've been discussing internally, and can't find a
definitive answer to:

Why are these arenas/buffers global, and not associated with each
connection? It seems to me making a big chunk malloc() to match cache
size when opening a connection would make it easier for an application
such as ours, where some connections come and go. I guess it could be
more fragmentation-/OOM-prone but with a reasonable malloc
implementation it feels like it would be more flexible without
sacrificing much of the benefits.

Thank you,
- Kim


Re: [sqlite] BUG: Illegal initialization in icu.c : sqlite3IcuInit

2017-01-27 Thread Kim Gräsman
David,

On Fri, Jan 27, 2017 at 12:51 PM, David Empson <demp...@emptech.co.nz> wrote:
>
>> On 27/01/2017, at 9:09 PM, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> On Thu, Jan 26, 2017 at 10:08 PM, David Empson <demp...@emptech.co.nz 
>> <mailto:demp...@emptech.co.nz>> wrote:
>>>
>>>> On 26/01/2017, at 8:46 PM, Clemens Ladisch <clem...@ladisch.de> wrote:
>>>>
>>>>> …
>>>>>  {"icu_load_collation",  2, SQLITE_UTF8, (void*)db, icuLoadCollation},
>>>>> };
>>>
>>> The ANSI/ISO C 1990 standard states this in section 6.5.7, under 
>>> Constraints:
>>>
>>> “All the expressions in an initializer for an object that has static 
>>> storage duration or in an initializer list for an object that has aggregate 
>>> or union type shall be constant expressions.”
>>
>> But this object doesn't have static storage duration, does it?
>> `scalars` is just a local variable in a function:
>> https://sourcecodebrowser.com/sqlite3/3.6.21/icu_8c_source.html#l00449
>>
>> unless I'm looking at the wrong version.
>
> That version does have the text quoted above.
>
> The problem is that ANSI/ISO C 1990 stipulates that an initializer for any 
> object of aggregate type, whether or not it is static, must be constant.

Ah, I misread the part about aggregate types in general. Sorry about the noise!

- Kim


Re: [sqlite] BUG: Illegal initialization in icu.c : sqlite3IcuInit

2017-01-27 Thread Kim Gräsman
On Thu, Jan 26, 2017 at 10:08 PM, David Empson <demp...@emptech.co.nz> wrote:
>
>> On 26/01/2017, at 8:46 PM, Clemens Ladisch <clem...@ladisch.de> wrote:
>>
>> Ziemowit Laski wrote:
>>> Visual C++
>>
>> Which one?
>>
>>> correctly catches this.
>>
>> Oh?  What exactly is illegal about this?
>>
>>>  struct IcuScalar {
>>>const char *zName;/* Function name */
>>>int nArg; /* Number of arguments */
>>>int enc;  /* Optimal text encoding */
>>>void *pContext;   /* sqlite3_user_data() context */
>>>void (*xFunc)(sqlite3_context*,int,sqlite3_value**);
>>>  } scalars[] = {
>>>...
>>>{"icu_load_collation",  2, SQLITE_UTF8, (void*)db, icuLoadCollation},
>>>  };
>
> The ANSI/ISO C 1990 standard states this in section 6.5.7, under Constraints:
>
> “All the expressions in an initializer for an object that has static storage 
> duration or in an initializer list for an object that has aggregate or union 
> type shall be constant expressions.”

But this object doesn't have static storage duration, does it?
`scalars` is just a local variable in a function:
https://sourcecodebrowser.com/sqlite3/3.6.21/icu_8c_source.html#l00449

unless I'm looking at the wrong version.

Again, it would be nice to see the actual warning from MSVC.

FWIW,
- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-01-15 Thread Kim Gräsman
Hi Richard,

On Sat, Jan 14, 2017 at 2:17 PM, Richard Hipp <d...@sqlite.org> wrote:
> On 1/13/17, Kim Gräsman <kim.gras...@gmail.com> wrote:
>>
>> In an effort to reduce memory usage/fragmentation, we're trying to
>> configure SQLite to allocate as much memory as necessary up-front
>> (based on the excellent information in
>> https://www.sqlite.org/malloc.html).
>>
>
> The easiest way to do this is to compile with SQLITE_ENABLE_MEMSYS5
> then give SQLite 10 or 20MB of memory to use at start-time by invoking
>
>    sqlite3_config(SQLITE_CONFIG_HEAP, malloc(10000000), 10000000, 64);
>
> or similar.  The page-cache and scratch memory will both be
> automatically served from this pool if you take no further actions,
> and this usually works just fine.  Use sqlite3_memory_highwater(0) to
> see how close you are getting to memory exhaustion so that you can
> judge if 10MB is too little or too much and make appropriate changes.
>
> Note that you cannot increase or decrease the amount of memory
> available to memsys5 at runtime, except by completely closing all
> SQLite database connections and interfaces, running sqlite3_shutdown()
> and then redoing the sqlite3_config() call above with the new memory
> size.

I like the sound of 'easiest' :-)

Two concerns;

1) If I configure a global SQLite heap with SQLITE_CONFIG_HEAP, won't
I just trade malloc heap fragmentation for SQLite private heap
fragmentation? Or does SQLite's fragmentation-prevention strategy work
well even in the face of multiple page sizes and connections?

2) What happens if the configured heap is depleted?

Thanks,
- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-01-15 Thread Kim Gräsman
On Sat, Jan 14, 2017 at 1:09 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 14 Jan 2017, at 8:54am, Clemens Ladisch <clem...@ladisch.de> wrote:
>
>> If your use case is more complex than the simple mechanism offered by
>> SQLITE_CONFIG_PAGECACHE, consider using SQLITE_CONFIG_PCACHE2 (or
>> SQLITE_CONFIG_MALLOC).
>
> However, the improvement in time provided by this may not be as great as you 
> think.
> Before investing lots of time in building special page-size-handling into 
> your program,
> do some tests in a demo application running on your hardware.  You can spend 
> a forty
> hours learning all about the internals of SQLite just to end up with a speed 
> increase of 1%.
> And then you have a ton of complicated low-level code to maintain.

Yes, we're not looking for speed-ups, primarily, but rather reduced
heap fragmentation and control over memory usage.

SQLITE_CONFIG_PCACHE2 is definitely furthest down on my list, for
precisely the reasons you quote.

- Kim


Re: [sqlite] Multiple sessions, page sizes and cache

2017-01-14 Thread Kim Gräsman
Hi Clemens,

Thanks for your help!

On Sat, Jan 14, 2017 at 9:54 AM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Kim Gräsman wrote:
>> I would expect SQLite to allocate a page cache per session.
>
> There is typically one page cache instance per connection.
> (A connection is what you get from sqlite3_open*(); a session would be
> what you get from sqlite3session_create().)

Oh, I thought I'd seen 'session' somewhere, so went with that. Sorry
about the confusion.

>> So if we want to use SQLITE_CONFIG_PAGECACHE to provide a
>> preallocated buffer for the page cache, it looks like we have to
>> provide it for the worst case, i.e. max(page size) * sum(cache
>> requirements for all sessions). Is that the case?
>
> Yes; the SQLITE_CONFIG_PAGECACHE memory is used by all cache instances.
>
>> is SQLITE_CONFIG_PAGECACHE used to specify an arena for page caches,
>> or a buffer for pages between all sessions?
>
> A page cache instance is a list of pages; memory for each page is
> allocated individually.

... from the buffer provided to SQLITE_CONFIG_PAGECACHE, right?

Great, that confirms our suspicion.

>> Since we have different page sizes (512 bytes and 4K respectively) I'm
>> worried we'll have lots of dead space if SQLITE_CONFIG_PAGECACHE is
>> set up for 4K pages and requests for 512-byte pages are served from
>> there directly.
>
> If your use case is more complex than the simple mechanism offered by
> SQLITE_CONFIG_PAGECACHE, consider using SQLITE_CONFIG_PCACHE2 (or
> SQLITE_CONFIG_MALLOC).

As for SQLITE_CONFIG_MALLOC, we were considering using that to
configure a fixed-size arena for everything not covered by scratch +
page cache. Not sure if it's even necessary.

SQLITE_CONFIG_PCACHE2 looks comparatively difficult to get right, but
I guess it would allow us to keep two arenas, one for 512-byte pages
and one for 4K pages.

Are these reasonable approaches?

Also, having thought some more about this: regarding the places where
the docs warn that a page cache instance will fall back on
sqlite3_malloc() -- if the SQLITE_CONFIG_PAGECACHE buffer is configured
smaller than the cache sizes actually requested by connections, will
every excess page allocation hit sqlite3_malloc()?
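
For concreteness, the kind of setup I'm trying to size looks like this
sketch (counts illustrative; SQLITE_CONFIG_PCACHE_HDRSZ supplies the
per-page header overhead):

  #include <stdlib.h>
  #include "sqlite3.h"

  static int setup_pagecache(void) {
    int hdrsz = 0, sz, nPage, rc;
    void *buf;
    rc = sqlite3_config(SQLITE_CONFIG_PCACHE_HDRSZ, &hdrsz);
    if (rc != SQLITE_OK) return rc;
    sz = 4096 + hdrsz;   /* worst case: slots sized for the 4K databases */
    nPage = 2048;        /* illustrative: sum of all cache_size settings */
    buf = malloc((size_t)sz * (size_t)nPage);
    if (!buf) return SQLITE_NOMEM;
    /* Must run before sqlite3_initialize()/first database use. */
    return sqlite3_config(SQLITE_CONFIG_PAGECACHE, buf, sz, nPage);
  }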

Many thanks,
- Kim


[sqlite] Multiple sessions, page sizes and cache

2017-01-13 Thread Kim Gräsman
Hi all,

We have an application with multiple databases of different page
sizes. We keep a few long-running sessions, but also open transient
sessions occasionally.

In an effort to reduce memory usage/fragmentation, we're trying to
configure SQLite to allocate as much memory as necessary up-front
(based on the excellent information in
https://www.sqlite.org/malloc.html).

Naively, I would expect SQLite to allocate a page cache per session.
But reading the documentation, it sounds like there's only one page
cache buffer, shared between all sessions.

So if we want to use SQLITE_CONFIG_PAGECACHE to provide a
preallocated buffer for the page cache, it looks like we have to
provide it for the worst case, i.e. max(page size) * sum(cache
requirements for all sessions). Is that the case?

Moreover, if SQLITE_CONFIG_PAGECACHE is under-allocated, the docs say
SQLite goes to malloc (or some variety thereof):

> If SQLite needs a page-cache entry that is larger than "sz" bytes or if it 
> needs more than N entries,
> it falls back to using the general-purpose memory allocator.
https://www.sqlite.org/malloc.html#_page_cache_memory

and:

> If additional page cache memory is needed beyond what is provided by this 
> option, then
> SQLite goes to sqlite3_malloc() for the additional storage space.
https://www.sqlite.org/c3ref/c_config_getmalloc.html#sqliteconfigpagecache

Does it do this for every page that doesn't fit, or just up-front to
allocate a page cache block? That is, is SQLITE_CONFIG_PAGECACHE used
to specify an arena for page caches, or a buffer for pages between all
sessions?

Since we have different page sizes (512 bytes and 4K respectively) I'm
worried we'll have lots of dead space if SQLITE_CONFIG_PAGECACHE is
set up for 4K pages and requests for 512-byte pages are served from
there directly.

Many thanks,
- Kim


Re: [sqlite] SQLite logo usage

2012-12-10 Thread Bruno Kim Medeiros Cesar
Thanks for your answer.

Is this explicit somewhere on the site, or should it be? I guessed this
to be the case, but there's no notice as we see on NASA images, for
example.

Bruno Kim Medeiros Cesar
Eng. de Computação 2008 - EESC - USP
brunokim...@gmail.com
Skype: bkim.cesar

2012/12/10 Richard Hipp <d...@sqlite.org>

>
> You are welcomed to use a copy of the logo from the website, scaled to
> whatever size you need but otherwise unaltered, and to use the SQLite name
> as long as you do not imply that your product or service is officially
> endorsed by the SQLite developers.
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


[sqlite] SQLite logo usage

2012-12-10 Thread Bruno Kim Medeiros Cesar
Hello.

Is there a policy on how to use the SQLite logo? Is it copyrighted or in
the public domain? I found it as an SVG on a non-official site, and was
wondering if it is free to edit and modify.

I would like to pay a little homage to SQLite, and have inserted the
blessing within the blue box as in the attached image. I wonder if it is
OK to publish it.

Thanks for your attention,

Bruno Kim Medeiros Cesar
Eng. de Computação 2008 - EESC - USP
brunokim...@gmail.com
Skype: bkim.cesar


[sqlite] sqlite3_step hold about 0.5sec in iphone.

2011-05-21 Thread Hyoseok Kim
Hi, I'm Hyoseok Kim from Korea.

I'm developing an app on the iPhone. I use SQLite with about 120,000
rows of data. In my code, sqlite3_step holds for 0.5~1.0 sec.
Please give me a hint on how to get through the sqlite3_step loop
quickly.

Please help me.
Thank you so much.


My code is below...

    if (sqlite3_prepare_v2(db, query_stmt, -1, &statement, NULL) ==
        SQLITE_OK) {
        while (sqlite3_step(statement) == SQLITE_ROW) {
            // Some code to retrieve values...
            // The 0.5~1.0 sec hold happens after reading the last
            // row's data, i.e. on the final sqlite3_step() call that
            // returns SQLITE_DONE.
        }
        sqlite3_finalize(statement);
    }


[sqlite] Hello.. I want to translate SQlite.org into Korean language version.

2011-02-12 Thread SeungJin Kim
Hello.
My name is Seungjin Kim.
I'm from Korea, and my job is teaching English to elementary students.
My hobby is PHP programming, because I used to be a PHP programmer.
These days I program on Android and iOS, so I'm using SQLite now.
I appreciate the SQLite database. It has helped my programs.
If possible, may I translate your website into a Korean language
version? It could help revitalize SQLite in Korea.
There is no Korean-language SQLite book, because no one has wanted to
translate one. So I decided, or wish, to translate www.SQLite.org
<http://www.sqlite.org/> into the Korean language.
My e-mail address is gudo...@gmail.com. (My website is
http://www.jw-app.org but it is under construction now.)
I will wait for your reply.

Thank you very much.


[sqlite] Relationship class in sqlite?

2010-11-24 Thread Dekay Kim
Hi,

What I am trying to do is to basically define the relationship between
various ArcGIS shapefiles and tables that are contained in one
sqlite(spatialite extension) database.

After defining the relationship, I want to be able to navigate through the
related values in a GIS application (e.g. QGIS)

In an ArcGIS FGDB (Feature Geodatabase) setup, I can define something
called a "relationship class", which enables such a capability.

So far, I have figured out that I can relate two tables using the
REFERENCES syntax in sqlite, but it does not seem to have anything
similar to an explicit relationship class.

Assuming that I completed defining the relationship between the shapefiles
using REFERENCES keyword in sqlite, is there any visualization tool that
enables me the navigation between the related table attribute values?

I would appreciate your answer.

Thanks!

DK

-- 
Dekay Kim


Re: [sqlite] SQLITE_OMIT_WSD (3.6.23.1)

2010-05-28 Thread Albert Kim

Hi Dan,
It doesn't matter that it will never be written to.  Since the variable is a
non-const static it will get mapped into the WSD portion of memory.   

There are actually a few other global static variables that are getting
placed in the WSD section of memory.

Here is a list of non-const statics that exist even with SQLITE_OMIT_WSD
turned on:
(taken from a .map file)

sqlite3_temp_directory                      0x00040d24   Data    4   sqlite3.o(.data)
sqlite3PendingByte                          0x00040d20   Data    4   sqlite3.o(.data)
ata...@openstattable_0                      0x00040d28   Data    8   sqlite3.o(.data)
MemJournalMethods                           0x00040d30   Data   52   sqlite3.o(.data)
smu...@sqlite3defaultmutex_0                0x00040d64   Data   36   sqlite3.o(.data)
defaultmeth...@sqlite3pcachesetdefault_0    0x00040d88   Data   44   sqlite3.o(.data)
detach_f...@sqlite3detach_0                 0x00040db4   Data   32   sqlite3.o(.data)
attach_f...@sqlite3attach_0                 0x00040dd4   Data   32   sqlite3.o(.data)
a...@sqlite3savepoint_0                     0x00040df4   Data   12   sqlite3.o(.data)

-Albert


Dan Kennedy-4 wrote:
> 
> 
> On May 28, 2010, at 1:11 AM, Kim, Albert wrote:
> 
>> Hi,
>>
>> We are compiling sqlite 3.6.23.1 with the SQLITE_OMIT_WSD compile  
>> time flag turned on.  We are using the amalgamation.   We found that  
>> this didn't completely eliminate the writable static data in our  
>> binary, and some investigation led me to this static variable:
>>
>> SQLITE_PRIVATE VdbeOp *sqlite3VdbeGetOp(Vdbe *p, int addr){
>>  static VdbeOp dummy;
>>
>> Is this a known issue?
> 
> That variable will never be written to. A pointer to it
> may be returned to the caller, but the caller will only
> read, never modify the contents of the structure.
> 


[sqlite] SQLITE_OMIT_WSD (3.6.23.1)

2010-05-28 Thread Kim, Albert
Hi,

We are compiling sqlite 3.6.23.1 with the SQLITE_OMIT_WSD compile time flag 
turned on.  We are using the amalgamation.   We found that this didn't 
completely eliminate the writable static data in our binary, and some 
investigation led me to this static variable:

SQLITE_PRIVATE VdbeOp *sqlite3VdbeGetOp(Vdbe *p, int addr){
  static VdbeOp dummy;

Is this a known issue?

Thanks!
-Albert




Re: [sqlite] Strange SQLITE_CORRUPT error

2009-03-20 Thread Kim, Albert
Hi Richard,

Thanks for your quick response!  The call stack at the point when the 
SQLITE_CORRUPT_BKPT occurs looks like this:

sqlite3VdbeExec
  case OP_OpenWrite:
    sqlite3BtreeCursor
      btreeCursor
        getAndInitPage
          sqlite3BtreeGetPage
            sqlite3PagerAcquire
              pagerAcquire

It appears that the invalid page is operand p2 of an OpenWrite op.  It almost
seems as if the prepared statement is already faulty; is that possible?
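
As a side note, here is a standalone sketch of how a "-1" page number
reads once it lands in the unsigned Pgno type (the PAGER_MAX_PGNO value
is taken from the pager comment, which says the maximum is 2^31):

#include <stdio.h>

typedef unsigned int Pgno;          /* SQLite page numbers are unsigned */
#define PAGER_MAX_PGNO 2147483647u  /* per the pager comment */

int main(void){
  Pgno pgno = (Pgno)-1;  /* wraps to 4294967295 */
  printf("pgno=%u, out of range: %s\n",
         pgno, pgno > PAGER_MAX_PGNO ? "yes" : "no");
  return 0;
}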

-Albert

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of D. Richard Hipp
Sent: Thursday, March 19, 2009 5:48 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Strange SQLITE_CORRUPT error


On Mar 19, 2009, at 8:33 PM, Kim, Albert wrote:
>
> /* The maximum page number is 2^31. Return SQLITE_CORRUPT if a page
> ** number greater than this, or zero, is requested.
> */
> if( pgno>PAGER_MAX_PGNO || pgno==0 || pgno==PAGER_MJ_PGNO(pPager) ){
>   return SQLITE_CORRUPT_BKPT;  <<<<<<<<<<<<
> }
>
>
> With some debug printfs I was able to see that pgno is -1 at this  
> point!


pgno is an unsigned integer, so the value is really 4294967295.  That  
number is greater than PAGER_MAX_PGNO, which is what is causing the  
SQLITE_CORRUPT to be returned.

If you can figure out who is calling pagerAcquire() with the faulty  
page number, that would be helpful.

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Strange SQLITE_CORRUPT error

2009-03-19 Thread Kim, Albert
Hi,

I'm seeing a very strange, consistently reproducible case, in which
SQLITE_CORRUPT is being returned in response to a simple insert
statement.  There is only one connection to the db so it is unlikely a
concurrency issue.  The insert is immediately after the table is
created.  The table creation statement goes through successfully and
if I look at the db file afterward with the sqlite explorer, the db
and table look fine, but without inserted values, of course.   This
happens every time, in exactly the same way, which seems to rule out
random corruption of the db file.  I did a little debugging into the
sqlite code and this is where the "corruption" is caught first:

static int pagerAcquire(
 Pager *pPager,  /* The pager open on the database file */
 Pgno pgno,  /* Page number to fetch */
 DbPage **ppPage,/* Write a pointer to the page here */
 int noContent   /* Do not bother reading content from disk if true */
){
 PgHdr *pPg;
 int rc;

 assert( pPager->state==PAGER_UNLOCK || pPager->nRef>0 || pgno==1 );

 /* The maximum page number is 2^31. Return SQLITE_CORRUPT if a page
 ** number greater than this, or zero, is requested.
 */
 if( pgno>PAGER_MAX_PGNO || pgno==0 || pgno==PAGER_MJ_PGNO(pPager) ){
   return SQLITE_CORRUPT_BKPT;  <<<<<<<<<<<<
 }

With some debug printfs I was able to see that pgno is -1 at this point!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] .timer with windows cli

2009-02-25 Thread Kim Boulton
Roger,

Thanks for that.

However, how do you get the thing to compile with MinGW?
I installed MinGW and MSYS, fixed a problem with the MSYS batch file,
fixed the paths to MinGW so that gcc was found, then did a compile and a
make, but got linker failures at that point.

To get this far I referred to the compile guidelines on sqlite.org,
although that info seems a little out of date.

I have MS VC++6 stored away somewhere, although I'd rather get MinGW
working if that's the way it's normally compiled.

All this just to turn the timer function on; it might be wiser for the
sqlite bods to release Windows binaries which have the optional
functions available. The timer would be a popular feature :-)

Cheers.

kim

Roger Binns wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Kim Boulton wrote:
>   
>> Anywhere I can download one would be nice.
>> 
>
> You can get the free MinGW compiler from http://www.mingw.org - it is
> what I use to compile SQLite and other code on Windows.  MinGW is a
> Windows port of gcc plus other GNU tools.
>
> Roger
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
>
> iEYEARECAAYFAkmlFmcACgkQmOOfHg372QS+DACeNpfb7gF67XkSJGMQ749YHH+A
> CYcAoKlA/O112SUGYrY4LUTMuHq0xjN3
> =rnfb
> -END PGP SIGNATURE-
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>   
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] .timer with windows cli

2009-02-25 Thread Kim Boulton
Hello,

Is there a compiled v3.6.11 sqlite3.exe with the .timer facility 
enabled. I don't have a C compiler on this machine.

Anywhere I can download one would be nice.

Thanks

kim
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread Kim Boulton

As the topic suggests, I am new to Sqlite. The GUI I'd been using was
compiled with an older version than the 3.6.10 CLI I downloaded, so
things were a bit choppy. I'll stick to using just the downloaded Sqlite
CLI, now that I (sort of) know what I'm doing with it.

I'll post the results once I've finished trying all the options; it's
looking good so far though :-)

k


D. Richard Hipp wrote:
>
> On Feb 23, 2009, at 2:42 AM, Kim Boulton wrote:
>>
>> But OR is a lot slower than using UNION ALL on both Sqlite and Mysql
>
>
> The optimizer was significantly enhanced for version 3.6.8 in order to 
> better handle OR in WHERE clauses.  What version of SQLite did you 
> test this with?
>
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-23 Thread Kim Boulton
Hehe, probably a combination of a rubbish grep (I used the regex function
in a text editor) and vacuuming a 4GB table at the same time.

@echo off
setlocal
set starttime=%time%
egrep --count 
"(W[CEF][SZ]|..W[CEF]S|...W[CEF]S|W[3CEF]S[25]..|W3S..|.11[CEF]S.)," 
my-30-million-rows-of-data.txt
set stoptime=%time%
echo Started: %starttime%
echo Ended: %stoptime%

results in:
24561
Started:  9:00:58.82
Ended:  9:01:34.29

36-ish seconds. Obviously the regex needs a bit of work, as there are
supposed to be around 200,000 matches.

Interesting nonetheless; I'd never used grep before... useful.

k

John Machin wrote:
> On 23/02/2009 5:14 PM, Kim Boulton wrote:
>   
>> Hello,
>>
>> Thanks
>>
>> The grep regex on the text file found around 10,000 lines over 5 minutes 
>> (out of a total possible 200,000 rows), at which time I stopped it, 
>> interesting experiment anyway :-)
>> 
>
> Uh-huh ... so you'd estimate that it would take 5 minutes * (200K rows / 
> 10k rows) = 100 minutes to get through the lot, correct?
>
> I tried an experiment on a 161Mb CSV file with about 1.1M 
> name-and-address-etc rows in it. Because none of the patterns in your 
> query are likely to match my data, I added an extra pattern that would 
> select about 22% of the records (ended up with 225K output rows), 
> putting it at the end to ensure it got no unfair advantage from a regex 
> engine that tested each pattern sequentially.
>
> BTW, I had to use egrep (or grep -E) to get it to work.
>
> Anyway, it took about 6 seconds. Scaling up by number of input records: 
> 6 * 30M / 1M = 180 seconds = 3 minutes. Scaling up by file size: 6 * 500 
> / 161 = 19 seconds. By number of output rows: 6 * 200 / 225 ... forget 
> it. By size of output rows: ... triple forget it.
>
> Conclusion: something went drastically wrong with your experiment. 
> Swapping? Other processes hogging the disk or the CPU? A really duff grep??
>
> Anyway, here's my environment: 2.0 GHz single-core AMD Turion (64 bit 
> but running 32-bit Windows XP SP3), using GNU grep 2.5.3 from the 
> GnuWin32 project; 1 GB memory.
>
> Cheers,
> John
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>   
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-22 Thread Kim Boulton

Woo, that's a bit better. Setting the page_size = 4096 gets the query 
time down to 17 seconds, so it's now faster than Mysql. Result!

I also tried changing the query to:
SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN 
('S','Z')) OR
 (c3 = 'W' AND c4 IN ('C','E','F') AND c5 = 'S' AND c6 IS NULL) OR
 (c4 = 'W' AND c5 IN ('C','E','F') AND c6 = 'S' AND c7 IS NULL) OR
 (c1 = 'W' AND c2 IN ('3','C','E','F') AND c3 IN ('2','5') ) OR
 (c1 = 'W' AND c2 = '3' AND c3 = 'S' AND c6 IS NULL ) OR
 (c2 = '1' AND c3 = '1' AND c4 IN ('C','E','F') AND c5 = 'S' AND c7 IS NULL)
ORDER BY c1,c2,c3,c4,c5,c6,c7 ASC;

But OR is a lot slower than using UNION ALL on both Sqlite and Mysql

I'm going to try the page_size at 8192 and see if that works even better.

Thanks to everyone for the help!

kim

Kim Boulton wrote:
> Thanks for the hints so far.
>
> Here is my existing Mysql setup and what I've tried with Sqlite
>
> *Mysql Table structure:*
> CREATE TABLE  `mydb`.`mytable` (
>   `c1` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NOT NULL,
>   `c2` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NOT NULL,
>   `c3` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NOT NULL,
>   `c4` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NOT NULL,
>   `c5` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NOT NULL,
>   `c6` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NULL,
>   `c7` 
> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>  
> NULL,
>   `price` smallint(4) NOT NULL,  KEY `c1` (`c1`),  KEY `c2` (`c2`),  
> KEY `c3` (`c3`),  KEY `c4` (`c4`),  KEY `c5` (`c5`),  KEY `c6` 
> (`c6`),  KEY `c7` (`c7`),  KEY `price` (`price`) ) ENGINE=MyISAM 
> DEFAULT CHARSET=latin1 PACK_KEYS=1 ROW_FORMAT=COMPRESSED;
>
> *CSV Import Data e.g.:*
> 1,A,P,0,0,X,X,300
> 1,A,P,0,0,X,P,
> A,A,P,0,0,,,2000
> B,3,Y,0,1,X,,300
>  approx 30 million row, 500MB csv, text file
>
> If I compress the table it is approximately 900MB in size with an 
> index of approximately 550MB, which i can load into memory via LOAD 
> INDEX INTO CACHE
>
> *Mysql Query example:*
> SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 
> IN ('S','Z')) UNION ALL
> SELECT * FROM mytable WHERE (c3 = 'W' AND c4 IN ('C','E','F') AND c5 = 
> 'S' AND c6 IS NULL) UNION ALL
> SELECT * FROM mytable WHERE (c4 = 'W' AND c5 IN ('C','E','F') AND c6 = 
> 'S' AND c7 IS NULL) UNION ALL
> SELECT * FROM mytable WHERE (c1 = 'W' AND c2 IN ('3','C','E','F') AND 
> c3 IN ('2','5') ) UNION ALL
> SELECT * FROM mytable WHERE (c1 = 'W' AND c2 = '3' AND c3 = 'S' AND c6 
> IS NULL ) UNION ALL
> SELECT * FROM mytable WHERE (c2 = '1' AND c3 = '1' AND c4 IN 
> ('C','E','F') AND c5 = 'S' AND c7 IS NULL)
> ORDER BY c1,c2,c3,c4,c5,c6,c7 ASC;
> *
> The above query takes 37 seconds and produces approx 200,000 results 
> and uses 550MB RAM*
>
> *So, in Sqlite3 I created a similar table structure like this:*
> CREATE TABLE [mytable] ([c1] TEXT  NOT NULL,[c2] TEXT  NOT NULL,[c3] 
> TEXT  NOT NULL,[c4] TEXT  NOT NULL,[c5] TEXT  NOT NULL,[c6] TEXT  
> NULL,[c7] TEXT  NULL,[price] INTEGER  NOT NULL)
> CREATE INDEX [c1] ON [mytable]([c1]  ASC)
> CREATE INDEX [c2] ON [mytable]([c2]  ASC)
> CREATE INDEX [c3] ON [mytable]([c3]  ASC)
> CREATE INDEX [c4] ON [mytable]([c4]  ASC)
> CREATE INDEX [c5] ON [mytable]([c5]  ASC)
> CREATE INDEX [c6] ON [mytable]([c6]  ASC)
> CREATE INDEX [c7] ON [mytable]([c7]  ASC)
> CREATE INDEX [price] ON [mytable]([price]  ASC)
>
> *Then I imported the same csv data using:*
> .separator ","
> .import mycsv.csv mytable
>
> *Then fixed the NULL values in the last two columns with:*
> UPDATE mytable SET c6 = NULL where c6 = '';
> UPDATE mytable SET c7 = NULL where c7 = '';
>
> Then Vacuumed - took 6 hours!
>
> This leaves me with a 4GB Sqlite table
>
> *Then queried the Sqlite3 table with:*
> PRAGMA cache_size = 2000; */uses up 1.5GB RAM regardless*/
> PRAGMA page_size = 2000; /*this doesn'

Re: [sqlite] Newb-ish performance questions

2009-02-22 Thread Kim Boulton

Sorry, I forgot to mention that I was running the query twice, first to
cache it, second to measure the speed.

Yeah, it's a horrible query, but it produces the fastest results in Mysql.
Maybe using OR instead works better on Sqlite; working on that one.

The problem I have with indexes in sqlite is that the table balloons 400MB
in size with every index I add. At the moment it's 4GB with one index on
each column; if I add more indexes across other columns, I'm thinking
it'll get too big to cache. I'll also try collapsing them into one
composite index, as in the sketch below.

thanks
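
A sketch of what I mean (following your composite-index point; the
index shape matches one of the WHERE branches):

#include "sqlite3.h"

/* sketch: one composite index whose column order matches a WHERE
** branch (c5, c6, c7), instead of three single-column indexes */
static int add_composite_index(sqlite3 *db){
  return sqlite3_exec(db,
      "CREATE INDEX idx_c5c6c7 ON mytable(c5, c6, c7);", 0, 0, 0);
}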

Thomas Briggs wrote:
>For starters, I think that loading the index into the cache in
> MySQL is biasing your performance measures.  SQLite will automatically
> load pages of any necessary indexes into memory as part of executing
> the query, but doing so takes time.  By preloading the index on MySQL,
> you're removing that time from the amount of time MySQL spends
> executing that query, so you're not really comparing apples to apples.
>  Said another way: how long does the query take in MySQL if you don't
> pre-load the index?
>
>Secondly, the page size needs to be set before the database is
> created, i.e. before you execute the CREATE TABLE statement.
>
>Thirdly, that's a pretty horrible query. :)  Collapsing the
> different queries into fewer queries - even if it isn't a single query
> - will speed things up.  Each SELECT is a separate query that requires
> that the table data be read; reducing the number of SELECTs will thus
> reduce query time.
>
>Lastly, your indexes need to reflect how the data is accessed.  For
> example, if you're going to do "WHERE c5 = x AND c6 = y AND c7 = 7"
> then you need one index on (c5, c6, c7) - not three individual indexes
> on c5, c6, and c7.
>
>Performance tuning rule #1 - problems are almost always in your use
> of the database, not the database itself. :)
>
>-Tom
>
> On Sun, Feb 22, 2009 at 3:48 AM, Kim Boulton <k...@jesk.co.uk> wrote:
>   
>> Thanks for the hints so far.
>>
>> Here is my existing Mysql setup and what I've tried with Sqlite
>>
>> *Mysql Table structure:*
>> CREATE TABLE  `mydb`.`mytable` (
>>  `c1`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NOT NULL,
>>  `c2`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NOT NULL,
>>  `c3`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NOT NULL,
>>  `c4`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NOT NULL,
>>  `c5`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NOT NULL,
>>  `c6`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NULL,
>>  `c7`
>> enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
>> NULL,
>>  `price` smallint(4) NOT NULL,  KEY `c1` (`c1`),  KEY `c2` (`c2`),  KEY
>> `c3` (`c3`),  KEY `c4` (`c4`),  KEY `c5` (`c5`),  KEY `c6` (`c6`),  KEY
>> `c7` (`c7`),  KEY `price` (`price`) ) ENGINE=MyISAM DEFAULT
>> CHARSET=latin1 PACK_KEYS=1 ROW_FORMAT=COMPRESSED;
>>
>> *CSV Import Data e.g.:*
>> 1,A,P,0,0,X,X,300
>> 1,A,P,0,0,X,P,
>> A,A,P,0,0,,,2000
>> B,3,Y,0,1,X,,300
>>  approx 30 million row, 500MB csv, text file
>>
>> If I compress the table it is approximately 900MB in size with an index
>> of approximately 550MB, which i can load into memory via LOAD INDEX INTO
>> CACHE
>>
>> *Mysql Query example:*
>> SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN
>> ('S','Z')) UNION ALL
>> SELECT * FROM mytable WHERE (c3 = 'W' AND c4 IN ('C','E','F') AND c5 =
>> 'S' AND c6 IS NULL) UNION ALL
>> SELECT * FROM mytable WHERE (c4 = 'W' AND c5 IN ('C','E','F') AND c6 =
>> 'S' AND c7 IS NULL) UNION ALL
>> SELECT * FROM mytable WHERE (c1 = 'W' AND c2 IN ('3','C','E','F') AND c3
>> IN ('2','5') ) UNION ALL
>> SELECT * FROM mytable WHERE (c1 = 'W' AND c2 = '3' AND c3 = 'S' AND c6
>> IS NULL ) UNION ALL
>> SELECT * FROM mytable WHERE (c2 = '1' AND c3 = '1' AND c4 IN
>> ('C','E','F') AND c

Re: [sqlite] Newb-ish performance questions

2009-02-22 Thread Kim Boulton

You have a good point there; I've been using that wrongly. I'll try that.

Thanks
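
So the plan, as a sketch (this assumes a brand-new, empty database file,
since page_size only takes effect before the first table is created, or
via VACUUM; the file name is made up):

#include "sqlite3.h"

/* sketch: set page_size on the empty database first, then the cache.
** cache_size is counted in pages, so 25000 * 4096 bytes ~= 100MB. */
static int open_tuned(sqlite3 **pDb){
  int rc = sqlite3_open("parts.db", pDb);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_exec(*pDb, "PRAGMA page_size = 4096;", 0, 0, 0);
  sqlite3_exec(*pDb, "PRAGMA cache_size = 25000;", 0, 0, 0);
  return sqlite3_exec(*pDb,
      "CREATE TABLE mytable(c1,c2,c3,c4,c5,c6,c7,price);", 0, 0, 0);
}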


Kees Nuyt wrote:
> On Sun, 22 Feb 2009 08:48:00 +0000, Kim Boulton
> <k...@jesk.co.uk> wrote in k...@jesk.co.uk, General Discussion
> of SQLite Database <sqlite-users@sqlite.org>:
>
>   
>> *Then queried the Sqlite3 table with:*
>> PRAGMA cache_size = 2000; */uses up 1.5GB RAM regardless*/
>> 
>
> cache_size is expressed in number of pages.
> default_cache_size is useful too.
>
>   
>> PRAGMA page_size = 2000; /*this doesn't make any difference*/
>> 
>
> PRAGMA page_size will only make a difference if you use it
> when creating the database (before the first table is
> created), or just before a VACUUM statement.
> Don't make it too big. 4096 or 8192 are a good start to
> experiment with.
>
>   
>> Unless anyone has some good ideas I might 
>> have to give up on Sqlite.
>> 
>
> I'm sure you can get more performance if you tweak page_size
> and cache_size with some more understanding.
> Use whatever engine is best for the task.
>   
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-22 Thread Kim Boulton

Hello,

Thanks

The grep regex on the text file found around 10,000 lines over 5 minutes
(out of a total possible 200,000 rows), at which point I stopped it;
interesting experiment anyway :-)

Using OR instead of UNION in Mysql is definitely slower, although maybe, 
as someone has mentioned above, the Sqlite optimizer works better with 
OR. Will try that next.

Thanks.


> Try this:
>
> (1) ditch the redundant commas in your csv file
> (2) run this (suitably adjusted for my errors and typos, and for my
> ignorance of the minutiae of command-line and grep syntax on your
> [unspecified IIRC] OS):
>
> grep
> "(W[CEF][SZ]|..W[CEF]S|...W[CEF]S|W[3CEF]S[25]..|W3S..|.11[CEF]S.),"
> yourfile.csv
>
> How long does that take?
>
> Another suggestion: search for clues on whether it might be better
> instead of doing
> select * from mytable where 
> union all
> etc etc
> select * from mytable where 
> to do
> select * from mytable where  or  or etc etc
> and if you don't find a strong weight of opinion for one way or the
> other, give it a try.
>
> HTH,
>
> John
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>   
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Newb-ish performance questions

2009-02-22 Thread Kim Boulton
Thanks for the hints so far.

Here is my existing Mysql setup and what I've tried with Sqlite

*Mysql Table structure:*
CREATE TABLE  `mydb`.`mytable` (
  `c1` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NOT NULL,
  `c2` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NOT NULL,
  `c3` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NOT NULL,
  `c4` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NOT NULL,
  `c5` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NOT NULL,
  `c6` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NULL,
  `c7` 
enum('0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z')
 
NULL,
  `price` smallint(4) NOT NULL,  KEY `c1` (`c1`),  KEY `c2` (`c2`),  KEY 
`c3` (`c3`),  KEY `c4` (`c4`),  KEY `c5` (`c5`),  KEY `c6` (`c6`),  KEY 
`c7` (`c7`),  KEY `price` (`price`) ) ENGINE=MyISAM DEFAULT 
CHARSET=latin1 PACK_KEYS=1 ROW_FORMAT=COMPRESSED;

*CSV Import Data e.g.:*
1,A,P,0,0,X,X,300
1,A,P,0,0,X,P,
A,A,P,0,0,,,2000
B,3,Y,0,1,X,,300
 approx 30 million row, 500MB csv, text file

If I compress the table it is approximately 900MB in size with an index 
of approximately 550MB, which i can load into memory via LOAD INDEX INTO 
CACHE

*Mysql Query example:*
SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN 
('S','Z')) UNION ALL
SELECT * FROM mytable WHERE (c3 = 'W' AND c4 IN ('C','E','F') AND c5 = 
'S' AND c6 IS NULL) UNION ALL
SELECT * FROM mytable WHERE (c4 = 'W' AND c5 IN ('C','E','F') AND c6 = 
'S' AND c7 IS NULL) UNION ALL
SELECT * FROM mytable WHERE (c1 = 'W' AND c2 IN ('3','C','E','F') AND c3 
IN ('2','5') ) UNION ALL
SELECT * FROM mytable WHERE (c1 = 'W' AND c2 = '3' AND c3 = 'S' AND c6 
IS NULL ) UNION ALL
SELECT * FROM mytable WHERE (c2 = '1' AND c3 = '1' AND c4 IN 
('C','E','F') AND c5 = 'S' AND c7 IS NULL)
ORDER BY c1,c2,c3,c4,c5,c6,c7 ASC;
*
The above query takes 37 seconds and produces approx 200,000 results and 
uses 550MB RAM*

*So, in Sqlite3 I created a similar table structure like this:*
CREATE TABLE [mytable] ([c1] TEXT  NOT NULL,[c2] TEXT  NOT NULL,[c3] 
TEXT  NOT NULL,[c4] TEXT  NOT NULL,[c5] TEXT  NOT NULL,[c6] TEXT  
NULL,[c7] TEXT  NULL,[price] INTEGER  NOT NULL)
CREATE INDEX [c1] ON [mytable]([c1]  ASC)
CREATE INDEX [c2] ON [mytable]([c2]  ASC)
CREATE INDEX [c3] ON [mytable]([c3]  ASC)
CREATE INDEX [c4] ON [mytable]([c4]  ASC)
CREATE INDEX [c5] ON [mytable]([c5]  ASC)
CREATE INDEX [c6] ON [mytable]([c6]  ASC)
CREATE INDEX [c7] ON [mytable]([c7]  ASC)
CREATE INDEX [price] ON [mytable]([price]  ASC)

*Then I imported the same csv data using:*
.separator ","
.import mycsv.csv mytable

*Then fixed the NULL values in the last two columns with:*
UPDATE mytable SET c6 = NULL where c6 = '';
UPDATE mytable SET c7 = NULL where c7 = '';

Then Vacuumed - took 6 hours!

This leaves me with a 4GB Sqlite table

*Then queried the Sqlite3 table with:*
PRAGMA cache_size = 2000; */uses up 1.5GB RAM regardless*/
PRAGMA page_size = 2000; /*this doesn't make any difference*/
SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN 
('S','Z')) UNION ALL
SELECT * FROM mytable WHERE (c3 = 'W' AND c4 IN ('C','E','F') AND c5 = 
'S' AND c6 IS NULL) UNION ALL
SELECT * FROM mytable WHERE (c4 = 'W' AND c5 IN ('C','E','F') AND c6 = 
'S' AND c7 IS NULL) UNION ALL
SELECT * FROM mytable WHERE (c1 = 'W' AND c2 IN ('3','C','E','F') AND c3 
IN ('2','5') ) UNION ALL
SELECT * FROM mytable WHERE (c1 = 'W' AND c2 = '3' AND c3 = 'S' AND c6 
IS NULL ) UNION ALL
SELECT * FROM mytable WHERE (c2 = '1' AND c3 = '1' AND c4 IN 
('C','E','F') AND c5 = 'S' AND c7 IS NULL)
ORDER BY c1,c2,c3,c4,c5,c6,c7 ASC;

*The above query takes 57 seconds, 20 seconds slower than Mysql but 
produces the same results - but uses up 1.5GB RAM!*

Also, if I try some more complex queries it always runs out of steam at
2GB RAM and crashes. Maybe that is a limitation of my PC or XP though.

I have also tried...
1. Sqlite table with no indexes - very slow!
2. Sqlite table with one unique index using all columns - still slower 
than separate indexes.

So I've had a fair crack of the whip with Sqlite. I was hoping it would
be faster and use less memory; no luck though.

Unless anyone has some good ideas I might have to give up on Sqlite.

Cheers.

Kim

Kim Boulton wrote:

[sqlite] Newb-ish performance questions

2009-02-19 Thread Kim Boulton
Hello,

I'm trying out Sqlite3 with an eye to improving the performance of 
queries on an existing MySQL database.

I've imported the data into sqlite which is approx. 30 million rows of 
part numbers each with a price.

So far, it's approx. four times slower than the MySQL version, and the 
size of the sqlite database is too big to fit in memory (several GB) 
whereas I can get the MySQL data down to 900MB if it's compressed and 
read only.

I would appreciate some tips or pointers on getting sqlite3 performance 
up and the data size down. I googled but couldn't find much.

I don't need concurrency or inserts, it's single user, read only.

TIA

kimb




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] primary key is indexed by default ?

2008-12-10 Thread Richard Kim
Hi 
 
Just wondering: if you set a primary key on a table, is the column
indexed by default in SQLite?
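
For instance (hypothetical schema), would the two cases below behave
differently?

#include "sqlite3.h"

/* hypothetical schema for the question: does either primary key give
** an implicit index, without an explicit CREATE INDEX? */
static int make_tables(sqlite3 *db){
  return sqlite3_exec(db,
      "CREATE TABLE a(id INTEGER PRIMARY KEY, v TEXT);"   /* rowid alias */
      "CREATE TABLE b(name TEXT PRIMARY KEY, v TEXT);",   /* non-rowid key */
      0, 0, 0);
}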

thanks a lot
 
 
Richard K
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] concurrent write

2008-11-28 Thread Richard Kim
Hi Alexey,

Thanks for your link, but it doesn't really click for me.
Should I set this busy_timeout to more milliseconds than the current value?
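
Here is how I read it, as a sketch of what we would add on each
connection (the 5000 ms figure is just a guess for our write rate):

#include "sqlite3.h"

/* sketch: retry on SQLITE_BUSY for up to 5 seconds before a write
** actually fails, instead of failing immediately */
static int open_with_timeout(const char *path, sqlite3 **pDb){
  int rc = sqlite3_open(path, pDb);
  if( rc==SQLITE_OK ) rc = sqlite3_busy_timeout(*pDb, 5000);
  return rc;
}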


Richard K 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alexey Pechnikov
Sent: Friday, November 28, 2008 3:00 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] concurrent write

Hello!

In a message of Saturday 29 November 2008 01:16:17, Richard Kim wrote:
> We have a database table that gets written to, sometimes 30 times a
> second.
> Some of the writes succeed, but some don't.
>  
> How should we handle this sort of lock issue?

http://www.sqlite.org/c3ref/busy_timeout.html

Best regards, Alexey.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] concurrent write

2008-11-28 Thread Richard Kim
Hi,
 
We have a database table that gets written to, sometimes 30 times a
second.
Some of the writes succeed, but some don't.
 
How should we handle this sort of lock issue?

thanks
 
Richard K
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Malformed schema error on sqlite3odbc.dll with version 0.79

2008-10-15 Thread Richard Kim

Hi guys, 

We are using sqlite as our database and have noticed a weird odbc driver
problem.

"malformed database schema (deleteUserGroupsByUsers) - no such
table: main.Users (11)"

If I open the database with another tool such as SQLite Studio, or the
Firefox plug-in, there is no problem - the schema check seems to be okay,
all the tables are there, etc.
Also, if I use S3ODBC.dll, we don't see this malformed-database-schema
issue either.

It happens only if I use sqlite3odbc.dll with version 0.79.

Is this a known bug?

Thanks 

Richard K

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Iterating a result from a query

2005-07-25 Thread Kim Bendtsen

>
> I cache the items I'm displaying in an STL vector (C++). If the user moves
> to a page where I have no data I load it from the database as needed. I was
> considering loading data in the background in case the user might need it
> since I can predict what they might do relatively easily.

Prefetching sounds like a wonderful idea; it is definitely a phase or two
beyond the current one, but if the concept works it might make the flow a
lot smoother.

Cheers
Kim B.


Re: [sqlite] Iterating a result from a query

2005-07-25 Thread Kim Bendtsen
Thank you for your thoughts. 

> You can build the result set into an array, then step through, back and
> around the array.
>
> SQLite already provides this functionality:
> http://www.sqlite.org/capi3ref.html#sqlite3_get_table

Having the entire result in memory would be an easy solution, but
unfortunately it would take up too much memory. I think I will keep an
array of rowIds for the entries in the result, sort of a cache. Whenever
the user scrolls to a position where I don't have the data, I will fetch
it and cache the id. That way it is easy to create random access via the
cached rowIds, and I only need to store the ids, not the full rows.
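
Roughly what I have in mind, as a sketch (the "items" table and "name"
column are made up):

#include "sqlite3.h"

/* sketch: fetch one visible window of rows and remember each rowid so
** the row can be re-fetched cheaply when the user scrolls back */
static int fetch_window(sqlite3 *db, int limit, int offset,
                        sqlite3_int64 *aRowid){
  sqlite3_stmt *stmt;
  char *sql = sqlite3_mprintf(
      "SELECT rowid, name FROM items ORDER BY name "
      "LIMIT %d OFFSET %d;", limit, offset);
  if( sql==0 ) return SQLITE_NOMEM;
  int rc = sqlite3_prepare(db, sql, -1, &stmt, 0);
  sqlite3_free(sql);
  if( rc!=SQLITE_OK ) return rc;
  int i = 0;
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    aRowid[i++] = sqlite3_column_int64(stmt, 0);
    /* ...copy sqlite3_column_text(stmt, 1) into the display cache... */
  }
  return sqlite3_finalize(stmt);
}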

> The only downside is that you'll have the full result set in memory, even
> if you are only using a small portion of it.
>
> If you want to limit the amount of data at any time, you can use the:
>
>  SELECT ... LIMIT n OFFSET m;
>
> form of select, where n is the number of rows you want, and m is the
> offset you want to start from. For a table view, this may be appropriate,
> but you have to recompute the results for each change of page, potentially
> slowing your app down for large queries.

For the relatively simple tables that we have at the moment, I don't think it's
an issue to make one lookup per item when scrolling. But tests will show 
that. 

> >Cheers
> >Kim B.


Re: [sqlite] Iterating a result from a query

2005-07-25 Thread kim
> Hi There
>
> After executing a query I get the result back and can traverse it using
> sqlite3_step. This is fine for most systems, however for what I wish to
> use
> SQLite for, I would need something similar to stepForward, stepBackward,
> stepToFirst, stepToLast.
>
> The application where I'm going to use it is showing a list of items
> from a
> database table in a window. If the window can display 5 items, there is no
> need to load more than 5 rows in the beginning. Here sqlite3_step is
> sufficient. But
> if the user presses the up arrow, it should take them to the bottom of the
> list,
> and I would like to use something similar to setToLast. If the user keeps
> pressing up, I would need to stepBackward.
> I know this might not be something for the sqlite database. How would you
> go
> about wrapping this interface? I know it is a broad question and your
> suggestion is very much appreciated! :)

Any thoughts/brainstorming from anyone would be great :)

Personally I don't have much experience with database programming or the
theory behind it. But I suppose that in order to be able to go backward,
the results already iterated through must be kept (at least a reference
to them, something like a rowId). If setToLast is used, I must use
sqlite3_step to find the last entry and then cache the other results.

Is there an easier way?
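
One idea that comes to mind, as a sketch (again with a made-up "items"
table): setToLast could avoid stepping through everything by reversing
the ORDER BY and displaying the window bottom-up:

#include "sqlite3.h"

/* sketch: grab the last 5 rows by reversing the sort; rows arrive in
** reverse order, so prepend each one to the visible list */
static int fetch_last_window(sqlite3 *db){
  sqlite3_stmt *stmt;
  int rc = sqlite3_prepare(db,
      "SELECT rowid, name FROM items ORDER BY name DESC LIMIT 5;",
      -1, &stmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    /* prepend sqlite3_column_text(stmt, 1) to the display list */
  }
  return sqlite3_finalize(stmt);
}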


Cheers
Kim B.





[sqlite] Iterating a result from a query

2005-07-24 Thread Kim Bendtsen
Hi There 

After executing a query I get the result back and can traverse it using
sqlite3_step. This is fine for most systems; however, for what I wish to
use SQLite for, I would need something similar to stepForward,
stepBackward, stepToFirst, stepToLast.

The application where I'm going to use it is showing a list of items from
a database table in a window. If the window can display 5 items, there is
no need to load more than 5 rows in the beginning. Here sqlite3_step is
sufficient. But if the user presses the up arrow, it should take them to
the bottom of the list, and I would like to use something similar to
setToLast. If the user keeps pressing up, I would need to stepBackward.
I know this might not be something for the sqlite database. How would you
go about wrapping this interface? I know it is a broad question and your
suggestion is very much appreciated! :)


Cheers
Kim B.


[sqlite] sqllite remote server system.

2004-04-28 Thread Kim Min-Su
Hi.

Would anyone take a look at the source code of my friend's project, an
add-on DBMS server for SQLite?

http://sourceforge.net/projects/sqlitedbms/

I would like to see a remote-management feature like that project's
patched into SQLite in the future, as soon as possible.

Thank you for your work.


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [sqlite] New Mailing List: List-Id Header

2003-10-19 Thread JeeBak Kim
* D. Richard Hipp ([EMAIL PROTECTED]) [031019 14:08]:
> William Trenker wrote:
> >
> >Any chance of adding the standard List-Id header to the new mailing list?  
> 
> If I made the change correctly, this reply should have the List-Id header
> line.

I'm getting the backspace character in that header:

  List-Id: SQLite Users Mailing List 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]