Re: [Tutor] insert queries into related tables referencing foreign keys using python
On 12/24/2011 11:13 PM, Lie Ryan wrote:
> Querying .lastrowid is pretty much safe as long as you don't use a single cursor from multiple threads. The .lastrowid attribute belongs to a cursor, so write operations on one cursor will not affect the .lastrowid of other cursors. Note, however, that multiple cursors created from a single connection can see each other's changes immediately (as opposed to when committed). That may or may not be what you want.
>
> In sqlite, it is more common to create one **connection** per thread. One connection per thread prevents concurrency problems, since each thread will not see uncommitted data from another thread. The recommended approach, though, is to avoid multithreading altogether. The sqlite developers have a strong opinion against multithreading (http://www.sqlite.org/faq.html#q6), even though they claim that sqlite is the *embedded* SQL engine with the most concurrency (and it does do very well in multithreaded scenarios). A common pattern in sqlite-backed applications is to have a single thread do all the writes.

Okay... sounds like I should be safe for the most part. Down the road (way down the road) I have some thoughts of working on an application that would, in certain situations, have multiple users (1-10). I had hoped that as long as I kept the sqlite insert/update activity wrapped in transactions, there wouldn't be much problem with table locks, etc., or, in this case, with confusing the lastrowid from one transaction with that from another. By the time I'm ready/willing/able to write that particular app, I might have moved on to an ORM, though.

Thanks,

Monte

___
Tutor maillist - Tutor@python.org
To unsubscribe or change subscription options: http://mail.python.org/mailman/listinfo/tutor
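The per-cursor behavior of .lastrowid described above is easy to check with a short sketch; the `person` table and the names here are invented purely for illustration:

```python
import sqlite3

# Two cursors sharing one in-memory connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")

cur_a = conn.cursor()
cur_b = conn.cursor()

cur_a.execute("INSERT INTO person (name) VALUES (?)", ("alice",))
pk_a = cur_a.lastrowid  # auto-assigned key from cur_a's own insert

# A write on another cursor does not disturb cur_a's lastrowid ...
cur_b.execute("INSERT INTO person (name) VALUES (?)", ("bob",))
print(pk_a, cur_b.lastrowid)  # 1 2 -- each cursor remembers its own insert

# ... and, because they share a connection, cur_b already sees alice's
# uncommitted row.
print(cur_b.execute("SELECT COUNT(*) FROM person").fetchone()[0])  # 2
```

This also illustrates the second caveat above: the two rows are visible to both cursors even though nothing has been committed yet.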
Re: [Tutor] insert queries into related tables referencing foreign keys using python
On 12/25/2011 01:57 AM, Monte Milanuk wrote:
> Lie Ryan gmail.com> writes:
>> Be careful that in a multithreaded program, each thread should have its own cursor; otherwise another thread could do another insert before you can query the lastrowid.
>
> Okay, this touches on something that had been worrying me a bit... whether another insert could hit before I queried for the lastrowid, regardless of how I did it. So you're saying that as long as I have the one cursor open, the lastrowid it gets will be the lastrowid from its operations, regardless of other commits or transactions that may have happened in the meantime?

Querying .lastrowid is pretty much safe as long as you don't use a single cursor from multiple threads. The .lastrowid attribute belongs to a cursor, so write operations on one cursor will not affect the .lastrowid of other cursors. Note, however, that multiple cursors created from a single connection can see each other's changes immediately (as opposed to when committed). That may or may not be what you want.

In sqlite, it is more common to create one **connection** per thread. One connection per thread prevents concurrency problems, since each thread will not see uncommitted data from another thread. The recommended approach, though, is to avoid multithreading altogether. The sqlite developers have a strong opinion against multithreading (http://www.sqlite.org/faq.html#q6), even though they claim that sqlite is the *embedded* SQL engine with the most concurrency (and it does do very well in multithreaded scenarios). A common pattern in sqlite-backed applications is to have a single thread do all the writes.

>> In general, despite the superficial similarities, most database engine wrappers have their own ways of doing things. Generally, you need a full-blown ORM to smooth out the differences.
>
> So... what would be considered a 'full-blown' ORM? SQLObject or SQLAlchemy... or something else?
Most database engine thin-wrappers conform to the Python Database API Specification (PEP 249), including sqlite3. Even though these wrappers all conform to a common API, you still need to be familiar with each database engine to do many common things, and -- except in trivial cases -- code written for one PEP 249-conforming database engine generally cannot be ported to another PEP 249-conforming engine without modification. Almost any ORM that supports multiple DB engine backends should be able to abstract away those differences.
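The portability caveat shows up even at the placeholder level: PEP 249 lets each driver declare its own `paramstyle`, so the literal SQL text changes between engines. A small sketch using only the standard-library sqlite3 module (the table name `t` is invented; the MySQL comparison is noted in a comment, since that driver is not in the standard library):

```python
import sqlite3

# sqlite3 declares the 'qmark' paramstyle: placeholders are written '?'.
print(sqlite3.paramstyle)  # qmark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES (?)", ("alice",))

# A 'format'-style driver such as MySQLdb spells the same insert
# "INSERT INTO t (name) VALUES (%s)" -- one concrete reason code rarely
# moves between PEP 249 drivers without modification.
```

An ORM hides exactly this kind of difference by generating the driver-appropriate SQL for you.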
Re: [Tutor] insert queries into related tables referencing foreign keys using python
On Sat, Dec 24, 2011 at 9:57 AM, Monte Milanuk wrote:
> Lie Ryan gmail.com> writes:
>> In python-sqlite, the rowid of the last insert operation can be queried using cursor.lastrowid. Therefore, you can query the lastrowid right after the insert to find the primary key of the value you just inserted. So, in code:
>>
>>     cur = conn.execute('INSERT ... ')
>>     pk = cur.lastrowid
>>
>> or even:
>>
>>     pk = conn.execute('INSERT ... ').lastrowid
>>
>> Be careful that in a multithreaded program, each thread should have its own cursor; otherwise another thread could do another insert before you can query the lastrowid.
>
> okay, this touches on something that had been worrying me a bit... whether another insert could hit before I queried for the lastrowid, regardless of how I did it. So you're saying that as long as I have the one cursor open, the lastrowid it gets will be the lastrowid from its operations, regardless of other commits or transactions that may have happened in the meantime?
>
>> In general, despite the superficial similarities, most database engine wrappers have their own ways of doing things. Generally, you need a full-blown ORM to smooth out the differences.
>
> So... what would be considered a 'full-blown' ORM? SQLObject or SQLAlchemy... or something else?
>
> Thanks,
>
> Monte

Django has an ORM, and it works with sqlite3, MySQL, and (I think) PostgreSQL. Although Django is a full framework for writing web apps, its various modules can be used together or separately. http://www.djangobook.com/en/2.0/chapter05/ talks about models and the ORM.

--
Joel Goldstick
Re: [Tutor] insert queries into related tables referencing foreign keys using python
Lie Ryan gmail.com> writes:
> In python-sqlite, the rowid of the last insert operation can be queried using cursor.lastrowid. Therefore, you can query the lastrowid right after the insert to find the primary key of the value you just inserted. So, in code:
>
>     cur = conn.execute('INSERT ... ')
>     pk = cur.lastrowid
>
> or even:
>
>     pk = conn.execute('INSERT ... ').lastrowid
>
> Be careful that in a multithreaded program, each thread should have its own cursor; otherwise another thread could do another insert before you can query the lastrowid.

Okay, this touches on something that had been worrying me a bit... whether another insert could hit before I queried for the lastrowid, regardless of how I did it. So you're saying that as long as I have the one cursor open, the lastrowid it gets will be the lastrowid from its operations, regardless of other commits or transactions that may have happened in the meantime?

> In general, despite the superficial similarities, most database engine wrappers have their own ways of doing things. Generally, you need a full-blown ORM to smooth out the differences.

So... what would be considered a 'full-blown' ORM? SQLObject or SQLAlchemy... or something else?

Thanks,

Monte
Re: [Tutor] insert queries into related tables referencing foreign keys using python
On 12/24/2011 11:07 AM, Monte Milanuk wrote:
> So... most python-sqlite tutorials concentrate on single tables. The few that deal with multiple tables and mention foreign keys seem to demonstrate mainly hard-coded data instead of parameterized insert queries into tables with auto-increment primary keys. For the most part I'm able to figure things out as I go, using a variety of documents both print and electronic... but when I don't *know* the pk number (because it's automatically assigned), it makes it tough to supply it as a foreign key for another insert query into related tables.

In sqlite, if a table contains a column of type INTEGER PRIMARY KEY, that column becomes an alias for the ROWID (http://www.sqlite.org/autoinc.html). In python-sqlite, the rowid of the last insert operation can be queried using cursor.lastrowid. Therefore, you can query the lastrowid right after the insert to find the primary key of the row you just inserted. So, in code:

    cur = conn.execute('INSERT ... ')
    pk = cur.lastrowid

or even:

    pk = conn.execute('INSERT ... ').lastrowid

Be careful that in a multithreaded program, each thread should have its own cursor; otherwise another thread could do another insert before you can query the lastrowid.

> What's the 'right' way to do this sort of record insert or update query? Insert into the main table first, then do a select query to find the last rowid, store it in a python variable, and use that as a parameter for the rest of the insert queries to related tables? Pull the value from the seq column of the sqlite_sequence table for the table with the primary key, and use that (not sure how robust that would be down the road, or how portable it would be if I later moved to MySQL for the DB)? Or is this something an ORM like SQLAlchemy would smooth over for me? In part I'm (also) wondering if this may be an artificial problem, as I'm trying to import data from a csv file, i.e. one big table, and then break it up and insert it into multiple tables in the sqlite database...

In general, despite the superficial similarities, most database engine wrappers have their own ways of doing things. Generally, you need a full-blown ORM to smooth out the differences.
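Putting the lastrowid advice together with the original foreign-key question, the usual shape is: insert the parent row, capture cursor.lastrowid, and use it as the foreign key for the child inserts, all inside one transaction. A minimal sketch; the author/book schema is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite does not enforce FKs by default
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (
        id INTEGER PRIMARY KEY,
        author_id INTEGER NOT NULL REFERENCES author(id),
        title TEXT
    );
""")

with conn:  # one transaction: parent and children commit (or roll back) together
    cur = conn.execute("INSERT INTO author (name) VALUES (?)", ("Ursula K. Le Guin",))
    author_id = cur.lastrowid  # auto-assigned primary key of the new author row
    conn.executemany(
        "INSERT INTO book (author_id, title) VALUES (?, ?)",
        [(author_id, "A Wizard of Earthsea"), (author_id, "The Dispossessed")],
    )

rows = conn.execute("SELECT title FROM book WHERE author_id = ?", (author_id,)).fetchall()
print([title for (title,) in rows])
```

No separate SELECT against sqlite_sequence is needed: the cursor already knows the key it just generated.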
[Tutor] insert queries into related tables referencing foreign keys using python
So... most python-sqlite tutorials concentrate on single tables. The few that deal with multiple tables and mention foreign keys seem to demonstrate mainly hard-coded data instead of parameterized insert queries into tables with auto-increment primary keys. For the most part I'm able to figure things out as I go, using a variety of documents both print and electronic... but when I don't *know* the pk number (because it's automatically assigned), it makes it tough to supply it as a foreign key for another insert query into related tables.

What's the 'right' way to do this sort of record insert or update query? Insert into the main table first, then do a select query to find the last rowid, store it in a python variable, and use that as a parameter for the rest of the insert queries to related tables? Pull the value from the seq column of the sqlite_sequence table for the table with the primary key, and use that (not sure how robust that would be down the road, or how portable it would be if I later moved to MySQL for the DB)? Or is this something an ORM like SQLAlchemy would smooth over for me?

In part I'm (also) wondering if this may be an artificial problem, as I'm trying to import data from a csv file, i.e. one big table, and then break it up and insert it into multiple tables in the sqlite database...
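For the CSV-import side of the question, one workable sketch is: read the flat rows with the csv module, insert each distinct parent value once, remember its auto-assigned key in a dict, and use that key for the child rows. The table and column names below are invented for illustration, and io.StringIO stands in for the real CSV file:

```python
import csv
import io
import sqlite3

# Stand-in for the real CSV file: one flat table with repeated author names.
flat = io.StringIO("author,title\nAusten,Emma\nAusten,Persuasion\nOrwell,1984\n")

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE book (id INTEGER PRIMARY KEY,
                       author_id INTEGER REFERENCES author(id),
                       title TEXT);
""")

seen = {}  # author name -> auto-assigned primary key
with conn:  # wrap the whole import in one transaction
    for row in csv.DictReader(flat):
        name = row["author"]
        if name not in seen:
            seen[name] = conn.execute(
                "INSERT INTO author (name) VALUES (?)", (name,)
            ).lastrowid
        conn.execute("INSERT INTO book (author_id, title) VALUES (?, ?)",
                     (seen[name], row["title"]))

print(conn.execute("SELECT COUNT(*) FROM author").fetchone()[0])  # 2
print(conn.execute("SELECT COUNT(*) FROM book").fetchone()[0])    # 3
```

The dict means each parent's key is looked up only once per import, so there is no reliance on sqlite_sequence and no window for another insert to confuse the bookkeeping.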