Eric Smith writes:
>
> Jay A. Kreibich wrote:
>
> > Try getting rid of the PK definition and see how much that buys you.
> > It might be worth it, or it might not.
>
> and Simon Slavin wrote:
>
> > We know you are doing a huge amount of writing to this database. Are
> > you also reading it frequently ?  If not, then it might ...

Black, Michael (IS) wrote:
> Hmmm... a 6.5X speed diff between RAM and disk?  Sounds pretty good to
> me.  Not sure why you expect better.
I don't expect better. As I said, I'm "not really sure what I should
expect here, or how to find out what I should expect".
> 10,000/sec is crappy?
...

Eric Smith wrote:
> I'd like to show it to the forum -- is it possible to send emails with
> attachments here? It's a 60kb jpg file.
God bless the gnuplot developers, who provided an ascii output option:
[truncated ASCII gnuplot chart: "time (minutes) to insert 2m records"]

On Sat, Jun 19, 2010 at 12:58:45PM -0400, Eric Smith scratched on the wall:
> Jay A. Kreibich wrote:
>
> I think the use case will usually be (only) writes followed by (only)
> reads. There may be incremental writes later, but they will hopefully
> be small compared to the initial seed.  I in...

On Fri, Jun 18, 2010 at 07:01:25PM -0700, Scott Hess scratched on the wall:
> On Fri, Jun 18, 2010 at 5:24 PM, Eric Smith wrote:
> > Jay A. Kreibich wrote:
> >> Yes. Hence the "and this is the important part" comment. Most of
> >> the time when people are building billion-row files, they're building
> >> a new DB by importing a static source of data.  If things go wrong,
> >> you just throw out the database and try again.

From: sqlite-users-boun...@sqlite.org on behalf of Eric Smith
Sent: Sat 6/19/2010 11:58 AM
To: Jay A. Kreibich
Cc: sqlite-users@sqlite.org
Subject: Re: [sqlite] unexpected large journal file
Jay A. Kreibich wrote:
> Try getting rid of the PK definition and see how much that buys you.
> It might be worth it, or it might not.

On 6/18/2010 7:01 PM, Scott Hess wrote:
>
> The old-school solution to this problem is an external sort (*). The
> basic idea is that you process the incoming data in memory-sized
> chunks (ie, fast), then write out sorted subfiles. Then you process
> the sorted files in parallel, merging to the output file. ...
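
For concreteness, a minimal C sketch of the scheme Scott describes:
sort memory-sized chunks, spill each as a sorted run, then merge the
runs holding only one record per run in memory.  The fixed-width 8-byte
keys, the chunk size, and the 64-run cap are illustrative assumptions,
not details from the thread (Scott's "(*)" footnote is cut off above).

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK   1000000   /* records per in-memory chunk (assumption) */
    #define MAXRUNS 64        /* sorted subfiles allowed (assumption)     */

    static int cmp(const void *a, const void *b) {
        unsigned long long x = *(const unsigned long long *)a;
        unsigned long long y = *(const unsigned long long *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        unsigned long long *buf = malloc(CHUNK * sizeof *buf);
        unsigned long long head[MAXRUNS];
        FILE *runs[MAXRUNS];
        int live[MAXRUNS], nruns = 0;
        size_t n;

        if (buf == NULL) return 1;

        /* Pass 1: sort memory-sized chunks, spill each as a sorted run. */
        while (nruns < MAXRUNS &&
               (n = fread(buf, sizeof *buf, CHUNK, stdin)) > 0) {
            qsort(buf, n, sizeof *buf, cmp);
            runs[nruns] = tmpfile();
            fwrite(buf, sizeof *buf, n, runs[nruns]);
            rewind(runs[nruns]);
            nruns++;
        }

        /* Pass 2: merge the runs; only one record per run in memory. */
        for (int i = 0; i < nruns; i++)
            live[i] = fread(&head[i], sizeof head[i], 1, runs[i]) == 1;
        for (;;) {
            int min = -1;
            for (int i = 0; i < nruns; i++)
                if (live[i] && (min < 0 || head[i] < head[min]))
                    min = i;
            if (min < 0) break;   /* every run exhausted */
            fwrite(&head[min], sizeof head[min], 1, stdout);
            live[min] = fread(&head[min], sizeof head[min], 1, runs[min]) == 1;
        }
        free(buf);
        return 0;
    }
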
Jay A. Kreibich wrote:
> Try getting rid of the PK definition and see how much that buys you.
> It might be worth it, or it might not.
and Simon Slavin wrote:
> We know you are doing a huge amount of writing to this database. Are
> you also reading it frequently ?  If not, then it might ...
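
Jay's and Simon's suggestions combine into the sketch below: no PK
while seeding, then build the index once before the read phase starts.
The table and index names are invented for illustration.

    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("big.db", &db) != SQLITE_OK) return 1;

        /* Load phase: no PRIMARY KEY, so there is no second b-tree
           to maintain and rows append in arrival order. */
        sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b INTEGER)", 0, 0, 0);

        /* ... the billion-row seed INSERTs happen here ... */

        /* Read phase: pay for the index once, after the load. */
        sqlite3_exec(db, "CREATE UNIQUE INDEX t_ab ON t(a, b)", 0, 0, 0);

        return sqlite3_close(db);
    }
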
On 19 Jun 2010, at 1:24am, Eric Smith wrote:
> It's not in any way a result of my schema? My primary key is a pair of
> integers A,B. The first column in this particular use case is in the
> range A = [0, 2million) and the second is in the range B = [0, infinity).
> We
>
> insert records A=0-...
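
For reference, the schema as described reads like the sketch below
(column names a, b stand in for A, B).  SQLite enforces a composite
PRIMARY KEY like this through an implicit unique index kept in (a, b)
order, so rows inserted in any other order hit pages spread across
that index b-tree:

    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;

        /* The composite PK becomes an implicit unique index
           ordered by (a, b). */
        sqlite3_exec(db,
            "CREATE TABLE t("
            "  a INTEGER NOT NULL,"   /* A in [0, 2 million)  */
            "  b INTEGER NOT NULL,"   /* B in [0, unbounded)  */
            "  PRIMARY KEY(a, b))",
            0, 0, 0);

        return sqlite3_close(db);
    }
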
On Fri, Jun 18, 2010 at 5:24 PM, Eric Smith wrote:
> Jay A. Kreibich wrote:
>> Yes. Hence the "and this is the important part" comment. Most of
>> the time when people are building billion-row files, they're building
>> a new DB by importing a static source of data. If things go wrong,
>> you just throw out the database and try again.

On Fri, Jun 18, 2010 at 08:24:47PM -0400, Eric Smith scratched on the wall:
> Jay A. Kreibich wrote:
>
> > Yes. Hence the "and this is the important part" comment. Most of
> > the time when people are building billion-row files, they're building
> > a new DB by importing a static source of data.  If things go wrong,
> > you just throw out the database and try again.

Jay A. Kreibich wrote:
> Yes. Hence the "and this is the important part" comment. Most of
> the time when people are building billion-row files, they're building
> a new DB by importing a static source of data. If things go wrong,
> you just throw out the database and try again.
That's ...

Simon Slavin wrote:
> How do you pass the handle from your C code to your Tcl code ?
I don't. I pass it from Tcl to C.
The handle in Tcl is a command name registered with the interpreter.
SQLite attaches a structure to the registration that contains the
underlying handle as used by the C API.
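
A sketch of that lookup from the C side.  Note that the layout of the
struct behind the Tcl command's client data is an internal detail of
tclsqlite.c; taking the sqlite3* as its first member is an assumption
about that internal layout, not a public API.

    #include <tcl.h>
    #include <sqlite3.h>

    /* Recover the sqlite3* behind a handle created in Tcl with
       "sqlite3 db file.db". */
    sqlite3 *handle_from_tcl(Tcl_Interp *interp, const char *cmdName) {
        Tcl_CmdInfo info;
        if (!Tcl_GetCommandInfo(interp, cmdName, &info))
            return NULL;                  /* no such command */
        /* tclsqlite.c registers the handle name as a command whose
           client data is its per-connection struct (see caveat above). */
        return *(sqlite3 **)info.objClientData;
    }
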
On 18 Jun 2010, at 8:47pm, Eric Smith wrote:
> Don't know what I was thinking when I typed that. I'm sharing a
> connection in a single thread, mixing C API calls and Tcl API calls.
> The C API calls drive the INSERTs; Tcl API calls drive BEGIN/COMMIT
How do you pass the handle from your C code to your Tcl code ?

On Fri, Jun 18, 2010 at 04:07:53PM -0400, Eric Smith scratched on the wall:
> Jay A. Kreibich wrote:
>
> > > I'd really love to avoid writing a big journal file. And I'd love to
> > > avoid doing a billion-row insert in one transaction.
> >
> > So turn journaling off.
>
> ... which implies possible corruption on app failure, right?

Jay A. Kreibich wrote:
> > I'd really love to avoid writing a big journal file. And I'd love to
> > avoid doing a billion-row insert in one transaction.
>
> So turn journaling off.
... which implies possible corruption on app failure, right?
I want progress to be saved every once in a while ...
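
The two sides of that trade-off in pragma form; a minimal sketch, not
a recommendation.  With no journal there is nothing to roll back after
a crash, so the periodic COMMITs stop being safe restart points.

    #include <sqlite3.h>

    void configure_no_journal(sqlite3 *db) {
        /* No rollback journal is written at all; a crash in the middle
           of a transaction can leave the file corrupt. */
        sqlite3_exec(db, "PRAGMA journal_mode = OFF", 0, 0, 0);
        /* Optionally also skip fsync()s; faster still, same caveat. */
        sqlite3_exec(db, "PRAGMA synchronous = OFF", 0, 0, 0);
    }
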
On Fri, Jun 18, 2010 at 03:37:19PM -0400, Eric Smith scratched on the wall:
> I have no user-defined indices in my db, and want to do a largish number
> of inserts (a few billion). I COMMIT every 10 million INSERTs or so --
> so if my app dies (or I want to kill it) I don't have to start over.
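
A minimal C sketch of the batching pattern Eric describes: one prepared
INSERT, committed every BATCH rows, so a crash costs at most the current
batch.  The table t(a, b) and the bound values are stand-ins.

    #include <sqlite3.h>

    #define BATCH 10000000LL   /* COMMIT every 10 million INSERTs */

    void bulk_load(sqlite3 *db, long long nrows) {
        sqlite3_stmt *ins;
        sqlite3_prepare_v2(db, "INSERT INTO t(a, b) VALUES(?, ?)",
                           -1, &ins, 0);
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (long long i = 0; i < nrows; i++) {
            sqlite3_bind_int64(ins, 1, i % 2000000);  /* illustrative */
            sqlite3_bind_int64(ins, 2, i / 2000000);  /* values only  */
            sqlite3_step(ins);
            sqlite3_reset(ins);
            if ((i + 1) % BATCH == 0) {   /* checkpoint progress */
                sqlite3_exec(db, "COMMIT", 0, 0, 0);
                sqlite3_exec(db, "BEGIN", 0, 0, 0);
            }
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);
        sqlite3_finalize(ins);
    }
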
Terribly sorry to self-reply, but I have a correction:
> I'm using the Tcl API, which probably doesn't matter for this question.
Don't know what I was thinking when I typed that. I'm sharing a
connection in a single thread, mixing C API calls and Tcl API calls.
The C API calls drive the INSERTs; Tcl API calls drive BEGIN/COMMIT.
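
In sketch form, that mix looks like the following, assuming the
Tcl-side handle was created as "db" and the sqlite3* was recovered as
in the earlier snippet; the table is again a stand-in.

    #include <tcl.h>
    #include <sqlite3.h>

    /* One connection, two drivers: Tcl issues transaction control,
       the C API issues the INSERTs. */
    void insert_batch(Tcl_Interp *interp, sqlite3 *db) {
        Tcl_Eval(interp, "db eval BEGIN");                /* Tcl side */
        sqlite3_exec(db, "INSERT INTO t(a, b) VALUES(1, 2)",
                     0, 0, 0);                            /* C side   */
        Tcl_Eval(interp, "db eval COMMIT");               /* Tcl side */
    }
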