Writing, registering, and using your own functions within SQLite is
pretty easy. That's probably your best bet for solving this problem.
-T
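A minimal sketch of registering a custom SQL function, using Python's built-in `sqlite3` module (the function name `reverse` and its body are purely illustrative; the C API equivalent is `sqlite3_create_function`):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Register a hypothetical one-argument SQL function named reverse().
con.create_function("reverse", 1, lambda s: s[::-1])

result = con.execute("SELECT reverse('sqlite')").fetchone()[0]
print(result)  # -> 'etilqs'
```

Once registered, the function can be used anywhere an expression is allowed, including WHERE clauses and indexes on expressions.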
> -Original Message-
> From: Fowler, Jeff [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 17, 2008 11:59 PM
> To: sqlite-users@sqlite.org
> Subje
Ahh yes, the obvious answer. Duh. Thanks. :)
> -Original Message-
> From: Trevor Talbot [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, December 18, 2007 8:27 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Re: SQLite and Columnar Databases
>
> On 12/18/0
> If a DBMS is smart enough, it can automatically pick the best storage
> method for performance and you don't have to think about it.
>
> However, many DBMS are not that smart and so typically users find
> themselves making explicit changes to their schemas, specifying the
> storage method
As my father was fond of saying: "Money talks and BS walks."
Unfortunately, that rule is decidedly invalid in the commercial
software world.
Bear in mind that DRH doesn't sell software, which is part of the
reason why he and Encirq go about doing things differently.
-T
> This model is completely removed from how the data is physically
> stored, eg whether in rows first or in columns first, and the
> physical store is determined just by the DBMS behind the scenes, and
> hence is an implementation detail. The DBMS can arrange how it likes
> in order to satis
> Sent: Thursday, December 13, 2007 11:51 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] SQLite and Columnar Databases
>
> --- Tom Briggs <[EMAIL PROTECTED]> wrote:
> >For clarity, my definition of small is about 200GB, so I'm not
> > selling
Based on my experience with SQLite, it would be a huge undertaking to
re-work it to use column-oriented storage. And I don't think it would
really fit with SQLite's goal, either; column oriented databases are
best suited to aggregate queries against large amounts of data, while
SQLite is best
> Something I will say about this, for people who don't know, is that
> this columnar thing is strictly an implementation detail. While
I think that this is an oversimplification. That's somewhat like
saying that the way you use a sledge hammer is no different from how you
use a claw hammer
> BTW, several PRAGMAS actually increase performance in my embedded app
> case - maybe 15-30% depending upon transaction activity and the way I
> structure transaction commits. Specific PRAGMAS that helped include:
This is exactly what irritates me about conversations like this - the
pragmas
ies on sqlite optimization
> methods of any type would be appreciated.
>
> -Original Message-
> From: Tom Briggs [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 20, 2007 10:40 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Performan
Which pragmas will be most effective (and what values you should use
for each) depends on what you're trying to do with the database.
Synchronous is important if you're writing frequently, for example, but
won't matter much in a read-only setting. Appropriate values for the
page_size and cache
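A short sketch of tuning those pragmas from Python's `sqlite3` module. The values below are illustrative only, not recommendations; as noted above, the right settings depend entirely on the workload:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Illustrative values only; the right settings depend on the workload.
con.execute("PRAGMA synchronous = NORMAL")  # fewer fsyncs than FULL; matters for frequent writes
con.execute("PRAGMA page_size = 4096")      # must be set before any tables are created
con.execute("PRAGMA cache_size = 2000")     # measured in pages, not bytes

print(con.execute("PRAGMA synchronous").fetchone()[0])  # -> 1 (NORMAL)
```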
I don't think that this is particularly hard to do: search the CVS
history for the addition of the REGEXP operator for a simple example.
Or, I'll find the link for you. :P
http://www.sqlite.org/cvstrac/chngview?cn=2478
This does more than just add the regexp operator, but it's still
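In Python the same idea needs no C patch at all: SQLite parses the REGEXP operator but supplies no implementation, so registering a two-argument `regexp()` user function enables it. (Note the argument order: `X REGEXP Y` calls `regexp(Y, X)`, pattern first.)

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")

# SQLite parses REGEXP but has no built-in implementation;
# a registered regexp(pattern, value) function supplies one.
con.create_function("regexp", 2, lambda pat, s: re.search(pat, s) is not None)

con.execute("CREATE TABLE words (w TEXT)")
con.executemany("INSERT INTO words VALUES (?)", [("apple",), ("banana",), ("cherry",)])
rows = con.execute("SELECT w FROM words WHERE w REGEXP '^b'").fetchall()
print(rows)  # -> [('banana',)]
```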
> Because my concern is this, I don't know how SQLite will do
>
> Delete from table where col not in (select from large temp dataset)
>
> How the delete will actually be walked, if it will create a serverside
> cursor and walk the values in the in statement then it will
> be fine and
> fast,
Your suggested temp table approach is how I would solve this; if
everything is properly indexed it won't be too bad. Even if it is bad,
it'll be better than updating columns within the table and then deleting
rows based on that.
Another potential alternative is to:
1. Load all new rows
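The indexed temp-table approach suggested above can be sketched as follows (table and column names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO data VALUES (?, ?)", [(i, str(i)) for i in range(10)])

# Load the ids to keep into an indexed temp table, then delete everything else.
# The PRIMARY KEY on keep(id) gives the NOT IN subquery an index to probe.
con.execute("CREATE TEMP TABLE keep (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO keep VALUES (?)", [(i,) for i in range(0, 10, 2)])
con.execute("DELETE FROM data WHERE id NOT IN (SELECT id FROM keep)")

print(con.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # -> 5
```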
In general, it's best to only include the columns you need in the
SELECT clause. And not just with SQLite - that's the best approach when
dealing with any database. SQLite is a bit more forgiving because
there's no network between the client and the database to slow things
down, but that's st
Hrmm... I wonder if this would work (complete guess, totally
untested)
INSERT OR REPLACE INTO core
SELECT Core.A, Updates.B, Core.C, Updates.D
FROM Core INNER JOIN Updates ON (Core.A = Updates.A)
Idea being, I guess, to get the rows that you ultimately want from
the sub-select and then use
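That guess does in fact work; here is a self-contained demonstration (the Core/Updates schemas are invented to match the column names in the query above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Core (A INTEGER PRIMARY KEY, B TEXT, C TEXT, D TEXT)")
con.execute("CREATE TABLE Updates (A INTEGER PRIMARY KEY, B TEXT, D TEXT)")
con.execute("INSERT INTO Core VALUES (1, 'old-b', 'c', 'old-d')")
con.execute("INSERT INTO Updates VALUES (1, 'new-b', 'new-d')")

# Replace each matching Core row with updated B and D, keeping A and C.
# The PRIMARY KEY on A is what makes OR REPLACE overwrite rather than duplicate.
con.execute("""
    INSERT OR REPLACE INTO Core
    SELECT Core.A, Updates.B, Core.C, Updates.D
    FROM Core INNER JOIN Updates ON (Core.A = Updates.A)
""")
print(con.execute("SELECT * FROM Core").fetchone())  # -> (1, 'new-b', 'c', 'new-d')
```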
I'm not sure I correctly understand your question, but: escaping the
% in your query may be what you're looking for, i.e.
delete from table where itemName like '\%.%' escape '\'
Maybe. :)
-Tom
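A quick check of that ESCAPE clause (the `items` table is invented for the demonstration): the escaped `\%` matches a literal percent sign, while the trailing `%` remains a wildcard.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (itemName TEXT)")
con.executemany("INSERT INTO items VALUES (?)",
                [("%.txt",), ("a.txt",), ("%.log",)])

# '\%' is a literal percent (via ESCAPE); the final '%' is still a wildcard,
# so this deletes only names beginning with a literal "%.".
con.execute(r"DELETE FROM items WHERE itemName LIKE '\%.%' ESCAPE '\'")
rows = con.execute("SELECT itemName FROM items").fetchall()
print(rows)  # -> [('a.txt',)]
```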
> -Original Message-
> From: RaghavendraK 70574 [mailto:[EMAIL PROTECTED]
> Sen
> Also, you may want to consider avoiding performing an IN on a UNION.
> As far as I know, SQLite doesn't optimize that, so will build the
> entire union before performing the IN. If you instead do the
> following, it should be a lot faster (if you have lots of data). But
> I may be wron
I don't think this would be very scalable, but you could do something
like:
DELETE FROM table
WHERE ROWID NOT IN
(
SELECT MIN(ROWID)
FROM table
GROUP BY NAME
)
This is totally untested, BTW - just a thought. :)
-Tom
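Testing the thought above confirms it works (sample table and data invented for the sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE names (name TEXT)")
con.executemany("INSERT INTO names VALUES (?)",
                [("alice",), ("bob",), ("alice",), ("bob",), ("carol",)])

# Keep the first row (lowest ROWID) for each name; delete the duplicates.
con.execute("""
    DELETE FROM names
    WHERE ROWID NOT IN (SELECT MIN(ROWID) FROM names GROUP BY name)
""")
print(con.execute("SELECT COUNT(*) FROM names").fetchone()[0])  # -> 3
```

As the original note says, this scans the whole table to build the subquery result, so it won't scale well to very large tables.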
> -Original Message-
> From: Andre du P
> -Original Message-
> From: Lee Crain [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, July 03, 2007 12:25 PM
> To: sqlite-users@sqlite.org
> Cc: [EMAIL PROTECTED]
> Subject: RE: [sqlite] Problem: Can't See Tables in Database
>
> Tom Briggs,
>
>
>
> Thank you for your r
Are the database files you're trying to open in the same directory as
the executable? What happens if you pass the full path and file name to
sqlite3_open? What is the full command line used to start the sqlite3
command prompt?
The problem will turn out to be something very simple... Thin
> sqlite3_temp_directory is hardcoded
> as 0 in main.c for all platforms.
> Would it be better to make it configurable for different
> platforms and use
> PRAGMA to override the default at runtime?
>
> I am looking forward to your opinions.
>
> thanks again
Why not just use PRAGMA temp_store_directory, as the comments
directly above that line suggest?
> -Original Message-
> From: weiyang wang [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, June 06, 2007 7:09 AM
> To: sqlite-users
> Subject: [sqlite] sqlite3_temp_directory in main.c
>
> hi
The insert will not be allowed until the query is closed, so you
can't get into this situation.
-T
> -Original Message-
> From: B V, Phanisekhar [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, June 06, 2007 8:01 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Does sqlite3_step
When querying the table be sure to put the indexed column(s) in the
ORDER BY clause though - otherwise the index won't do you any good. :)
-T
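The point above can be verified with EXPLAIN QUERY PLAN (the table and index names here are invented; the exact plan wording varies by SQLite version, but the index name appears when it is used):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
con.execute("CREATE INDEX idx_val ON t(val)")

# With ORDER BY on the indexed column, SQLite can walk idx_val in order
# and skip a separate sort step; the plan names the index it uses.
plan = con.execute("EXPLAIN QUERY PLAN SELECT val FROM t ORDER BY val").fetchall()
print(plan)
```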
> -Original Message-
> From: John Stanton [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 05, 2007 9:20 AM
> To: sqlite-users@sqlite.org
>
> If you require ACID type data integrity and have a single
> disk there is
> no such thing as a "high concurrency database". They all
Then don't blame me if he's asking the wrong questions. :)
-T
I have no suggestions (I'm not an embedded systems guy), but your initial
comment implied that there were other options. If there aren't then your
original statement is invalid and there's nothing to discuss. :)
-T
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL P
> I don't want to use
> other database, because I think Sqlite is great for an
> embedded system that I
> am using.
I think that your own questions about concurrency prove this
incorrect. If you need high concurrency and you don't like retries,
SQLite is not the database for you.
-T
You will likely be well served by a compound index on (value,key).
As the schema stands now, the indexes will help find records with
matching values, but not with matching keys; providing one index that
correlates the two should help.
Disclaimer: I haven't recreated your schema, added said
The short answer is that you cannot (as far as I'm aware, anyway)
turn off journal files, though it's a common question and I'm sure you
can find a lot of information about it by searching the archives.
TEMP_STORE controls where "truly" temp files (e.g. those needed for
sorting, and I belie
Would attaching a database mid-transaction, then making changes in
the newly-attached database, make it impossible to create the correct
master journal file for the overall transaction? Just a curious shot in
the dark.
-Tom
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[
> This is quick, simple, and portable to your application only.
> You can not
> use a 3rd party GUI database browser that is statically linked to a
> different SQLite library (such as the standard distribution)
> to view or
> modify your database. You must add specific support for any desired
> > thereby avoid
> > underkill and overkill.
> >
> > The add-on functions, and application interfaces are better being
> > contributed software than to bloat Sqlite distributions and
> be a boat
> > anchor on its continued development.
> >
> > Tom Briggs
> In the case of SQLite, I (arguably) have to use a 3rd party
> management
> tool, for which my custom functions are no longer available. I'm
> curious how others handle this.
>
> A. You don't need or use any custom SQL functionality
> B. You don't use a 3rd party SQLite management tool
>
You can't use BCP to load data without some form of input file, can
you? So what could you do with BCP that you can't do by landing the
data in a CSV file and then loading it using some other tool?
I think that DRH's point was that the functionality provided by the
command line tool is not
> There is a limit to the number of databases you can attach (I think
> it's 32?) and
> I might have more than that. So I thought the safer route was
> to attach as I
> go..
>
> Serena.
>
>
> On 10/4/06, Tom Briggs <[EMAIL PROTECTED]> wrote:
> >
> &
Do you know all of the databases that you want to attach to at the
start of processing? If so, there's no reason you can't simply attach
them all beforehand, start your transaction, and complete all your
processing. If you have to decide dynamically, based on the data, which
databases you'll
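The attach-everything-up-front approach can be sketched like this (using in-memory databases to keep the example self-contained; `isolation_level=None` puts Python's `sqlite3` in autocommit mode so the manual BEGIN/COMMIT are honored):

```python
import sqlite3

# Autocommit mode, so we control the transaction boundaries explicitly.
con = sqlite3.connect(":memory:", isolation_level=None)

# Attach the extra databases before starting the transaction,
# then run one transaction spanning all of them.
con.execute("ATTACH DATABASE ':memory:' AS db1")
con.execute("ATTACH DATABASE ':memory:' AS db2")

con.execute("BEGIN")
con.execute("CREATE TABLE db1.a (x INTEGER)")
con.execute("CREATE TABLE db2.b (x INTEGER)")
con.execute("INSERT INTO db1.a VALUES (1)")
con.execute("INSERT INTO db2.b SELECT x FROM db1.a")
con.execute("COMMIT")

print(con.execute("SELECT x FROM db2.b").fetchone()[0])  # -> 1
```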