> this is always easily worked around by specifying the "ON" clause to your
> join(), as the second argument of table1.join(table2,
> and_(table1.c.foo==table2.c.bar, table1.c.bat==table2.c.hoho, ...)).
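For anyone following along, here is a minimal runnable sketch of that suggestion, using hypothetical jobs/users tables modeled on the ones in this thread (the column names are assumptions, not taken from the original schema):

```python
from sqlalchemy import MetaData, Table, Column, Integer, String

metadata = MetaData()
jobsTable = Table("jobs", metadata,
    Column("jobId", Integer, primary_key=True),
    Column("userId", String),
)
usersTable = Table("users", metadata,
    Column("userId", String, primary_key=True),
    Column("userName", String, unique=True),
)

# With no ForeignKey declared, join() cannot infer the ON clause, so it
# is passed explicitly as the second argument; wrap multiple conditions
# in and_() if needed.
j = jobsTable.join(usersTable, jobsTable.c.userId == usersTable.c.userId)
print(j)  # renders roughly: jobs JOIN users ON jobs."userId" = users."userId"
```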

Ah, okay.  No foreign key, just a join.  For reference, this is what
I ended up going with:
j = jobsTable.join(usersTable, jobsTable.c.userId.like(usersTable.c.userId))

> as long as the sqlite table has only one column that is declared as
> primary key in the CREATE TABLE statement, sqlite can autoincrement.  a
> "userId" column declared as the PRIMARY KEY and a second "userName" column
> that only has a unique constraint on it does not impact SQLite's
> autoincrement capability.
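That behavior is easy to confirm directly with the stdlib sqlite3 module; this small sketch (table and column names are illustrative) shows the single INTEGER PRIMARY KEY column autoincrementing even with a second UNIQUE column present:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        userId INTEGER PRIMARY KEY,  -- the sole primary key: autoincrements
        userName TEXT UNIQUE         -- unique constraint does not interfere
    )
""")

# Omit userId entirely; SQLite assigns the next rowid automatically.
conn.execute("INSERT INTO users (userName) VALUES ('alice')")
conn.execute("INSERT INTO users (userName) VALUES ('bob')")

rows = conn.execute(
    "SELECT userId, userName FROM users ORDER BY userId"
).fetchall()
print(rows)  # [(1, 'alice'), (2, 'bob')]
```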

Ah, I see.  I think the only thing I still don't understand is using
this mapping with a session and handling inserts that aren't unique.
With a session, the problem shows up at commit time, which seems to
abort the entire insertion instead of just the single collision.
Maybe it's more of an SQL question, but with a session/mapper
configuration like this, if you were continually trying to insert data
that might already be in the database, how would you efficiently skip
those insertions?

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~----------~----~----~----~------~----~------~--~---