Hi Bo,

> I wonder if it would be better to just have the data organized
> before loading it, so that the records in each of the 5000 tables
> would be contiguously stored. Of course, that also depends on how much
> new data will be added to the database.
2011/10/25 Bo Peng :
> Tables are added in batch and then kept unchanged. I mean, a database
> might have 1000 new tables one day, and 2000 later. All operations are
> on single tables.
>
> Each table is for one 'sample'. All tables have one column for 'item
> id', and optional
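Bo's per-sample layout can be sketched as follows. This is an assumption-laden illustration: the thread only says there is one table per 'sample' and that every table has an 'item id' column plus optional columns, so the table name and the value columns `b` and `c` here are invented.

```python
import sqlite3

# In-memory stand-in for Bo's 288 GB file; one table per 'sample'.
conn = sqlite3.connect(":memory:")

# Hypothetical per-sample table: only 'item_id' is guaranteed by the
# thread; 'b' and 'c' are assumed optional value columns.
conn.execute("""
    CREATE TABLE sample_0001 (
        item_id INTEGER PRIMARY KEY,
        b REAL,
        c REAL
    )
""")
conn.executemany(
    "INSERT INTO sample_0001 (item_id, b, c) VALUES (?, ?, ?)",
    [(1, 0.5, 10.0), (2, 1.5, 20.0), (3, 2.5, 30.0)],
)
conn.commit()
```

With ~1.4 million rows per table and 5000 such tables, the pages of any one table end up interleaved with thousands of others unless the load order is arranged table by table.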
> Doing a VACUUM on a 288 GB database is probably going to take some time.
I submitted the command last night and nothing seems to be
happening after 8 hours (sqlite3 is running and there is disk
activity, but I do not see a .journal file).
>
> I will do this multiple times, with different conditions (e.g. SELECT
> MAX(c) FROM TABLE_X WHERE b > 1.0), so maintaining the number of rows
> would not help. I intentionally avoided TRIGGERs because of the large
> amount of data (billions of rows) inserted.
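The per-table aggregate Bo describes can be sketched like this (the schema and data are invented for illustration; only the query shape comes from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample_0001 (item_id INTEGER, b REAL, c REAL)")
conn.executemany(
    "INSERT INTO sample_0001 VALUES (?, ?, ?)",
    [(1, 0.5, 10.0), (2, 1.5, 20.0), (3, 2.5, 30.0)],
)

# Without an index on b, each such query is a full scan of one table,
# which is why pages scattered across a 288 GB file hurt so much on
# spinning disks.
(max_c,) = conn.execute(
    "SELECT MAX(c) FROM sample_0001 WHERE b > 1.0"
).fetchone()
print(max_c)  # → 30.0
```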
On 24 Oct 2011, at 4:13pm, Bo Peng wrote:
>> Can I ask which file-system you were using on the SSD drive when you
>> obtained this result?
>
> It is ext4 on a 512 GB SSD on an Ubuntu system.
Wow. I don't know what hard disk hardware or driver you were using originally,
but it sucks. Even for
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
On 10/24/2011 09:20 PM, Bo Peng wrote:
> Other than using a SSD to speed up random access, I hope a VACUUM
> operation would copy tables one by one so the content of the tables would
> not be scattered around the whole database. If this is the case, disk
> caching should work much better after VACUUM... fingers crossed.

VACUUM will
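VACUUM rebuilds the database by copying its contents into a temporary file and swapping that in, which is why it needs roughly the database's size in extra free space and can run for many hours on 288 GB. A minimal sketch of its observable effect (toy schema, invented data):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [(i, "x" * 200) for i in range(5000)],
)
conn.commit()

# Deleting rows only moves pages to the freelist; the file stays big.
conn.execute("DELETE FROM t WHERE x % 2 = 0")
conn.commit()
size_before = os.path.getsize(path)

# VACUUM must run outside a transaction; it rewrites the database into
# a temporary file and copies it back, emptying the freelist.
conn.execute("VACUUM")
size_after = os.path.getsize(path)
free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(size_after <= size_before, free_pages)  # → True 0
```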
On Sun, Oct 23, 2011 at 8:57 AM, Simon Slavin wrote:
> It seems that this was the first problem he found with the way he arranged
> this database. But our solution to it would be different depending on
> whether he wanted to do this just the once, or it was a regular
On 23 Oct 2011, at 2:47pm, Bo Peng wrote:
> On Sun, Oct 23, 2011 at 8:12 AM, Black, Michael (IS)
> wrote:
>> #1 What's the size of your database?
>
> 288 GB, 5000 tables, each with ~1.4 million records
Worth adding here Bo's original post:
On 22 Oct 2011, at 8:52pm, Bo
On Sun, Oct 23, 2011 at 8:12 AM, Black, Michael (IS)
wrote:
> #1 What's the size of your database?
288 GB, 5000 tables, each with ~1.4 million records
> #2 What's your cache_size setting?
default
> #3 How are you loading the data? Are your table inserts interleaved or by
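Since the cache_size above is left at its default, one obvious experiment is a larger per-connection page cache. A sketch follows; the 1 GiB figure is only an example, not a recommendation from the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The default cache is small (2000 pages in 2011-era SQLite; -2000,
# i.e. about 2 MiB, in current versions).
(default_cache,) = conn.execute("PRAGMA cache_size").fetchone()

# A negative value sets the cache size in KiB: -1048576 asks for ~1 GiB.
conn.execute("PRAGMA cache_size = -1048576")
(new_cache,) = conn.execute("PRAGMA cache_size").fetchone()
print(new_cache)  # → -1048576
```

The setting is per connection and not persistent, so each of the concurrent readers would need to issue it after opening the database.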
From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
behalf of Bo Peng [ben@gmail.com]
Sent: Saturday, October 22, 2011 10:05 PM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Concurrent readonly access to a large database