Hi Andrew:
Thanks for the explanation. I am going to assume that database neutrality
is the main selling point of sticking with the DAL.
I second your interest in a DAL method for genuine bulk loading, though I
reckon that would be quite a beastly project.
I appreciate everyone's input.
Hi Martin,
It depends on the RDBMS. Some are still one row at a time, which makes
insert and bulk_insert the same speed (bulk_insert just makes the statement
easier to write).
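To illustrate the adapter-level difference being described, here is a stdlib sqlite3 sketch (not web2py code; table and column names are made up) showing row-at-a-time insertion next to a single batched call. Whether the batched form is actually faster is up to the driver, which is exactly the point above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, age INTEGER)")
rows = [("Alice", 30), ("Bob", 25), ("Carol", 41)]

# Row-at-a-time: what a generic adapter ends up doing for bulk_insert.
for name, age in rows:
    conn.execute("INSERT INTO person (name, age) VALUES (?, ?)", (name, age))

# Batched: one call; the driver may (or may not) turn this into a faster path.
conn.executemany("INSERT INTO person (name, age) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
print(count)  # 6: three rows inserted each way
```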
Hi Mike,
One of the goals of the DAL is to make the API database neutral, allowing
you to switch between databases without chang
"bulk insert is a way faster than regular insert when you have many rows"
I think we need to clarify terms. By Massimo's own account in the web2py
book, the DAL bulk insert is not faster than db.insert unless you are using
the GAE. So are you talking about your db's native bulk methods or is the
Bulk insert is way faster than regular insert when you have many rows. If
you are using MySQL you can use LOAD DATA INFILE, which is incredibly fast,
but it requires special privileges in MySQL.
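For reference, a LOAD DATA INFILE statement might look like the sketch below. The table and file names are hypothetical, and actually running it needs a MySQL server plus the FILE privilege (or LOAD DATA LOCAL INFILE enabled); this snippet only builds the statement.

```python
# Hypothetical table and file names, for illustration only.
table = "person"
csv_path = "/tmp/person.csv"

load_sql = (
    f"LOAD DATA INFILE '{csv_path}' "
    f"INTO TABLE {table} "
    "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
    "LINES TERMINATED BY '\\n' "
    "IGNORE 1 LINES"  # skip a header row
)
print(load_sql)
```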
Is it possible that we add a "native bulk insert" function which is coded up in
each adapter? Even bulk_insert is an ODBC-style one-row-at-a-time operation,
which is slow for big files. I need to load huge files all the time, and I am
writing custom modules to do this with a native loader. Should this be a DAL
option?
If it is a one-time data load and I have the proper constraints set in
SQLite, I would use raw SQL.
If this were repetitive use and I wanted to use web2py validation, I would
use the DAL with bulk_insert (making sure that I validate the data before
adding to the insert queue).
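The validate-before-queueing step could be sketched like this. It is a plain-Python stand-in, not web2py's validators; the `valid` function and field names are made up for illustration.

```python
def valid(row):
    # Minimal stand-in for web2py validators: require a non-empty name
    # and a non-negative integer age.
    return (
        bool(row.get("name"))
        and isinstance(row.get("age"), int)
        and row["age"] >= 0
    )

raw_rows = [
    {"name": "Alice", "age": 30},
    {"name": "", "age": 22},      # rejected: empty name
    {"name": "Bob", "age": -5},   # rejected: negative age
    {"name": "Carol", "age": 41},
]

# Only clean rows reach the insert queue.
queue = [row for row in raw_rows if valid(row)]
print(len(queue))  # 2 rows survive validation
# db.person.bulk_insert(queue)  # then hand the clean queue to the DAL
```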
10 minutes is all right. Still, if there are two methods and they are
qualitatively equal apart from speed, I'd prefer to use the faster one.
So, posing the question once again: is there a compelling reason to use the
DAL for bulk uploads? More specifically, what extras, if any, are being
added?
On Friday, August 17, 2012 8:29:12 AM UTC-4, Mike Girard wrote:
The data will be coming from a large XML file, so my script will parse that and
make inserts into several different tables. It's fairly straightforward.
So is it correct to say that -
1. There is no compelling reason to do this without the DAL
2. My options in the DAL are bulk_insert, looping
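The parse-XML-then-insert-into-several-tables approach might look roughly like this stdlib sketch. The element names and the `db.*.bulk_insert` comment are hypothetical; a real large file would be read with `ET.parse()` or `ET.iterparse()` rather than an inline string.

```python
import xml.etree.ElementTree as ET

# Tiny stand-in document; element names are invented for illustration.
xml_doc = """
<library>
  <author id="1"><name>Tolkien</name></author>
  <book author="1"><title>The Hobbit</title></book>
  <book author="1"><title>The Silmarillion</title></book>
</library>
"""

root = ET.fromstring(xml_doc)
# Build one list of row dicts per target table.
authors = [{"id": a.get("id"), "name": a.findtext("name")}
           for a in root.iter("author")]
books = [{"author": b.get("author"), "title": b.findtext("title")}
         for b in root.iter("book")]

# In web2py this would become e.g.
# db.author.bulk_insert(authors); db.book.bulk_insert(books)
print(len(authors), len(books))
```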
You can enter data either via db.table.insert(name=value, ...) or from a CSV
file. Both are convenient. I suppose you could argue that CSV is convenient
if your data are dumped from some other db without computing an
intermediate value. If your data are less clean, perhaps, or you need to do
some com
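The CSV route could be sketched with the stdlib csv module. The `person.name`/`person.age` headers mimic web2py's table.field CSV convention, but the exact format here is an assumption; web2py itself offers `db.table.import_from_csv_file(...)` for this.

```python
import csv
import io

# Stand-in for a CSV dump with a table.field header row.
csv_text = "person.name,person.age\nAlice,30\nBob,25\n"

reader = csv.DictReader(io.StringIO(csv_text))
rows = [
    {"name": r["person.name"], "age": int(r["person.age"])}
    for r in reader
]
print(rows)
```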