Use 'new' instead of 'new!', and chunk it up:

   (dbSync)                            # Take the DB lock once
   (for A As                           # 'As' is your list of input records
      (at (0 . 1000) (commit 'upd) (prune) (dbSync))  # Every 1000: write, clear cache, re-lock
      (new (db: +Article) '(+Article) 'key1 value1 'key2 value2 ...) )
   (commit 'upd)                       # Write the final chunk

With 'new!' you are locking the database and writing every single
object immediately, so it should only be used when you know you are
inserting one (or maybe a very few) objects.
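
For a one-off insert it's fine though, e.g. (with a made-up invoice
number):

   (new! '(+Invoice) 'nr 4711)   # One transaction around one 'new'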

In the 'new' version above we create the objects in memory and write
them out 1000 at a time.

If you have 12 million rows, you should probably use an even higher
chunk size than 1000.
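
Applied to your +Invoice example it could look something like this
(untested; the file name "invoices.txt", the one-number-per-line
format and the 10000 chunk size are just assumptions):

   (pool "foo.db")
   (class +Invoice +Entity)
   (rel nr (+Key +Number))

   (dbSync)
   (in "invoices.txt"                        # Read the flat file
      (while (format (line T))               # One number per line, NIL stops the loop
         (let Nr @                           # '@' holds the 'while' result
            (at (0 . 10000) (commit 'upd) (prune) (dbSync))
            (new T '(+Invoice) 'nr Nr) ) ) ) # T: first (and only) DB file
   (commit 'upd)

With a single 'pool' file, T as the first argument to 'new' puts the
object into that file; with a multi-file 'dbs' setup you'd use (db:
+Invoice) as in the pattern above.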

/Henrik



On Wed, May 30, 2012 at 10:36 AM, Joe Bogner <joebog...@gmail.com> wrote:
> I'm evaluating the use of PicoLisp for analyzing large datasets. Is it
> surprising that inserting a million rows into a simple DB would take 5+
> minutes on modern hardware? I killed it after about 500K rows were
> inserted (hit Ctrl+C and then inspected N). It seems to get
> progressively slower after about 100K records.
>
> (pool "foo.db")
> (class +Invoice +Entity)
> (rel nr (+Key +Number))
> (zero N)
> (do 1000000 (new! '(+Invoice) 'nr (inc 'N)))
>
> I was just testing out the concept. My input data will be a flat file
> of invoice data (12+ million rows).
>
> Thanks
> Joe