On Tue, Aug 10, 2010 at 05:46:26PM -0700, Paweł Hajdan, Jr. wrote:
So this is another chromium patch I'd like to submit:
http://src.chromium.org/viewvc/chrome/trunk/src/third_party/sqlite/preload-cache.patch?revision=26596&view=markup
I'm not the author of that one, but the main idea seems to
Hello,
I want to import a big subset of data from one database to a new one. I
attach the two databases together and use
insert into customers select * from source.customers where name LIKE 'x%'
I can approximately calculate how big the new database will grow. Is
there a way to tell SQLite to
On 12.08.2010 12:08, TeDe wrote:
Hello,
I want to import a big subset of data from one database to a new one. I
attach the two databases together and use
insert into customers select * from source.customers where name LIKE 'x%'
I can approximately calculate how big the new database will grow. Is
there a way to tell SQLite to reserve an initial space or number of pages
instead of letting the database file grow again and again? I'm looking
for a way to speed up the import.
Why do you think that this kind of function
I can see where he is coming from. By reserving the appropriate number of pages
up front the import does not have to wait for disk IO or CPU cycles if it runs
out of pages.
--Original Message--
From: Pavel Ivanov
Sender: sqlite-users-boun...@sqlite.org
To: General Discussion of SQLite
I can see where he is coming from. By reserving the appropriate number of
pages up front the import does not have to wait for disk IO or CPU cycles if
it runs out of pages.
I wouldn't be so sure about that. Did anybody make any measurements?
1) I don't know where you think the CPU cycles are
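Whatever the verdict on preallocation, the single biggest win for a bulk import like this is usually wrapping the whole INSERT ... SELECT in one explicit transaction, so SQLite syncs to disk once at COMMIT instead of once per statement. A minimal sketch in Python (the file names and the customers schema here are invented for illustration):

```python
import os
import sqlite3
import tempfile

# Hypothetical file layout; 'source.db' stands in for the poster's
# attached source database.
tmpdir = tempfile.mkdtemp()
src_path = os.path.join(tmpdir, "source.db")
dst_path = os.path.join(tmpdir, "target.db")

# Build a small source database to copy from.
src = sqlite3.connect(src_path)
with src:
    src.execute("CREATE TABLE customers (name TEXT, city TEXT)")
    src.executemany("INSERT INTO customers VALUES (?, ?)",
                    [("xavier", "Berlin"), ("xenia", "Munich"),
                     ("anna", "Hamburg")])
src.close()

# isolation_level=None gives manual transaction control in Python.
dst = sqlite3.connect(dst_path, isolation_level=None)
dst.execute("CREATE TABLE customers (name TEXT, city TEXT)")
dst.execute("ATTACH DATABASE ? AS source", (src_path,))

# One big transaction: a single sync at COMMIT rather than per row.
dst.execute("BEGIN")
dst.execute("INSERT INTO customers "
            "SELECT * FROM source.customers WHERE name LIKE 'x%'")
dst.execute("COMMIT")

count = dst.execute("SELECT count(*) FROM customers").fetchone()[0]
```

On most setups the per-commit fsync dominates the cost of file growth, which is why measurements matter before reaching for preallocation.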
On 12.08.2010 12:16, Martin.Engelschalk wrote:
On 12.08.2010 12:08, TeDe wrote:
[...]
Hi all,
I would like to know how I can find out how many dirty pages are in the wal
cache after an update query (during a transaction).
My intention is to run a long transaction, and to end it when the cache/wal file
is getting too large.
Thanks,
Yoni.
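As far as I know there is no pragma that reports a dirty-page count mid-transaction; the C-level sqlite3_wal_hook() callback does report the number of pages in the WAL, but it fires only after each commit. A pragmatic workaround is to watch the size of the -wal file itself. A rough sketch in Python (the schema, row count, and 1 MB threshold are all invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
db = sqlite3.connect(path, isolation_level=None)
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE t (x INTEGER, payload TEXT)")

wal_path = path + "-wal"
LIMIT = 1 * 1024 * 1024  # hypothetical cap on the -wal file size

db.execute("BEGIN")
for i in range(5000):
    db.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 100))
    # If the -wal file has grown past the cap, end the transaction
    # and start a fresh one.
    if os.path.exists(wal_path) and os.path.getsize(wal_path) > LIMIT:
        db.execute("COMMIT")
        db.execute("BEGIN")
db.execute("COMMIT")

rows = db.execute("SELECT count(*) FROM t").fetchone()[0]
```

Note the caveat: dirty pages only reach the -wal file when the page cache spills or at COMMIT, so mid-transaction the file size understates the true count.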
On 12.08.2010 13:04, TeDe wrote:
On 12.08.2010 12:16, Martin.Engelschalk wrote:
On 12.08.2010 12:08, TeDe wrote:
[...]
On 12 Aug 2010, at 12:09pm, Yoni Londner wrote:
I would like to know how I can find out how many dirty pages are in the wal
cache after an update query (during a transaction).
My intention is to run a long transaction, and to end it when the cache/wal file
is getting too large.
Sorry, but the two
Hello Pawel,
you made some good points. I'm still in the evaluation stage; I don't
claim to know that it's faster. But I saw that behavior in a file manager: when
you copy a large file, it immediately reserves the whole space. The
same with STL vectors: initializing it with a size is faster than
On 12.08.2010 13:16, Martin.Engelschalk wrote:
On 12.08.2010 13:04, TeDe wrote:
On 12.08.2010 12:16, Martin.Engelschalk wrote:
On 12.08.2010 12:08, TeDe wrote:
[...]
The same with STL vectors: initializing it with a size is faster than
growing it element by element.
That's a pretty bad comparison. Initializing a vector with a size is
faster because it doesn't have to copy all the elements later when it
reallocates its memory. The file system doesn't work that way, it
TeDe tede_1...@gmx.de wrote:
[...]
On 12 Aug 2010, at 12:37pm, Pavel Ivanov wrote:
Here I (or we) think of the cycles the system needs when the small niche
of the initial database is exhausted and it has to look for another free
block on the filesystem. If you can tell the system in advance how big
the niche has to be, it
Michael -- I just wanted to follow up that we used your instructions to
patch sqlite and ran our tests again with success. Thanks so much for
your help!
Peter
On 8/11/10 8:17 AM, Black, Michael (IS) wrote:
My patches are awaiting moderator approval so I figured I'd just send out the
info in
Is there a way to get at the documentation for previous versions of
SQLite?
I'm running 3.5.9, and don't have much of an opportunity to upgrade. Is
there a way that I can get a snapshot of what the wiki / website
documentation looked like for 3.5.9?
Thanks,
Jon
On Thu, 12 Aug 2010 10:59:46 -0500, Jon Polfer
jpol...@forceamerica.com wrote:
Is there a way to get at the documentation for previous versions of
SQLite?
I'm running 3.5.9, and don't have much of an opportunity to upgrade. Is
there a way that I can get a snapshot of what the wiki / website
I am trying to alter the dump function of the command line shell. I am not
completely familiar with C Programming so I am sort of in the dark here.
As I understand its workings now, the .dump command performs a 'select *' on
each row in rowid order. If it encounters an error it skips to the end
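For anyone more comfortable outside the shell's C code, the row-at-a-time, skip-on-error idea being described can be sketched in Python (the table name and contents are invented; on a healthy database nothing is skipped):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, payload TEXT)")
db.executemany("INSERT INTO data (payload) VALUES (?)",
               [("a",), ("b",), ("c",)])

# Walk rows one rowid at a time so a corrupt row only loses itself,
# not the remainder of the dump.
recovered = []
for (rowid,) in db.execute("SELECT rowid FROM data ORDER BY rowid"):
    try:
        row = db.execute("SELECT * FROM data WHERE rowid = ?",
                         (rowid,)).fetchone()
        recovered.append(row)
    except sqlite3.DatabaseError:
        continue  # skip the bad row and keep going
```

The per-row SELECT is slower than a single scan, which is the trade-off for containing the damage from a bad row.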
Hi,
I would like to fetch or build the SQLite3 documentation. Its webpage
doesn't host it, and it has at least one change[1] over v3.7.0.
I've checked out the fossil repository of the docs, but I can't build
it; I don't know how to do it. The build process documentation doesn't
list it[2] at
I'll run the test again tonight to give you the size of the -wal and
-shm file at the point of failure.
On WinXP (32 bits) the memory used by the application is not a problem
as it stays also in the low hundreds of MB even though the virtual bytes
(which are explained like this by perfmon:
A couple of seconds before the failure occurs, the test.db-wal file is
5,502,389KB and test.db-shm is 10,688KB.
The private bytes (probably the best measure of how much memory a
Windows application is using) are perhaps a few megs above 130MB.
Making the change to have it commit every 1 records
Hi,
I'm quite new to SQL query syntax and my problem is:
I have a database that contains several objects and their properties.
Each object has its own table where its properties are stored ( TEXT
Keyword, REAL Value ).
So, in each table we find Keywords such as : Weight, Length, Cost,...
Is it safe to use sqlite if the db resides on an NFS share? I have ~200
servers in a cluster and was writing a script to keep a central
webpage of their status. My plan was to run cron jobs on each server
that at intervals update the status of the specific servers. Then a
script on the central
On Thu, Aug 12, 2010 at 11:59 AM, Jon Polfer jpol...@forceamerica.com wrote:
[...]
On Thu, Aug 12, 2010 at 1:16 PM, Laszlo Boszormenyi g...@debian.hu wrote:
Hi Richard,
Where can I download the documentation of SQLite version 3.7.0.1?
I can't build it from the fossil repository and your server doesn't seem
to host it in the same packaged format as v3.7.0 [1]. Please let
On 12 Aug 2010, at 2:31pm, John wrote:
I'm quite new to SQL query syntax and my problem is:
I have a database that contains several objects and their properties.
Each object has its own table where its properties are stored ( TEXT
Keyword, REAL Value ).
So, in each table we find
On 08/11/2010 10:28 PM, Kirk Clemons wrote:
I would like to be able to tell SQLite that if line 2 is bad just skip to
line 3 and so on until the database has been parsed all the way through,
maximizing the amount of data I retain.
It would require
INTEGER Object
TEXT Keyword
REAL Value
Or perhaps you'd prefer to give each object a name:
TEXT Object
TEXT Keyword
REAL Value
Or both
create table objects (
object_id integer,
object_name text,
object_keyword text,
object_value real
);
with best wishes
Artur
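With the single-table layout above, every lookup has the same shape regardless of object or keyword. A small Python sketch using that schema (the object names and values are invented, and the object_id column is dropped for brevity):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Same three-column layout as suggested above.
db.execute("""CREATE TABLE objects (
    object_name text,
    object_keyword text,
    object_value real
)""")
db.executemany("INSERT INTO objects VALUES (?, ?, ?)",
               [("box", "Weight", 2.5),
                ("box", "Length", 10.0),
                ("crate", "Weight", 7.0)])

# Fetch one property of one object; the query never changes shape.
w = db.execute("SELECT object_value FROM objects "
               "WHERE object_name = ? AND object_keyword = ?",
               ("box", "Weight")).fetchone()[0]
```

An index on (object_name, object_keyword) would make these lookups fast once the table grows.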
So my problem is the following:
I store some game data in my database, including time data for how long you
played the game.
That would be days, hours, minutes, seconds. My first try was to use an
integer value, but the numbers don't get inserted right and it changes the
numbers every time I save. Also
On 12 Aug 2010, at 12:37pm, Pavel Ivanov wrote:
[...]
Artur Reilin sql...@yuedream.de wrote:
So my problem is the following:
I store some game data in my database, including time data for how long you
played the game.
That would be days, hours, minutes, seconds. My first try was to use an
integer value, but the numbers don't get inserted right and it
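One common approach (not necessarily what the poster ended up with): store the played time as a single INTEGER count of seconds and convert to days/hours/minutes/seconds only for display, so SQLite never has to interpret the value. Sketch in Python with an invented schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE playtime (player TEXT, seconds INTEGER)")

# 1 day, 2 hours, 3 minutes, 4 seconds folded into one integer.
total = ((1 * 24 + 2) * 60 + 3) * 60 + 4
db.execute("INSERT INTO playtime VALUES (?, ?)", ("artur", total))

stored = db.execute("SELECT seconds FROM playtime WHERE player = ?",
                    ("artur",)).fetchone()[0]

# Convert back to days/hours/minutes/seconds for display only.
d, rem = divmod(stored, 86400)
h, rem = divmod(rem, 3600)
m, s = divmod(rem, 60)
```

A single integer also sorts and sums correctly ("total time played" is just SUM(seconds)), which split day/hour/minute columns never do.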
Hello everyone,
I had 2 questions:
1) Given the query:
SELECT col1 FROM table WHERE col2 = ? GROUP BY col3, col4 ORDER BY col5,
col6, col7, col8;
What would be the right index to create?
I was thinking it would be:
CREATE INDEX index1 ON table (col2, col3, col4, col5, col6, col7,
Aly Hirani alyhir...@gmail.com wrote:
I had 2 questions:
1) Given the query:
SELECT col1 FROM table WHERE col2 = ? GROUP BY col3, col4 ORDER BY col5,
col6, col7, col8;
What does this query mean? If col5, col6, col7 and col8 vary within a group
defined by col3, col4, how exactly should
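Setting that ambiguity aside, an index on (col2, col3, col4) can at least serve the WHERE and the GROUP BY; the ORDER BY will typically still need a temporary b-tree. EXPLAIN QUERY PLAN is a quick way to check, sketched here in Python with an invented table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (col1, col2, col3, col4, col5, col6, col7, col8)")
# Equality column first, then the GROUP BY columns.
db.execute("CREATE INDEX index1 ON t (col2, col3, col4)")

# The plan should show a SEARCH using index1 for col2 = ?,
# plus a temp b-tree for the ORDER BY.
plan = " ".join(str(row) for row in db.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT col1 FROM t WHERE col2 = ? "
    "GROUP BY col3, col4 "
    "ORDER BY col5, col6, col7, col8", (1,)))
```

Whether appending further columns to make the index covering pays off depends on the table, which is exactly the kind of thing EXPLAIN QUERY PLAN answers.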