[sqlalchemy] Mapping serial(auto increment) Postgres to bigserial

2007-10-02 Thread voltron

How does one specify that the auto-incrementing field should map to
bigserial and not serial?

thanks





[sqlalchemy] Re: one-to-many access and modification

2007-10-02 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of Ken Pierce
 Sent: 29 September 2007 00:21
 To: sqlalchemy
 Subject: [sqlalchemy] one-to-many access and modification
 
 
 Hello all,
 
 I just got into using sqlalchemy today and I have a question about
 accessing data in a one-to-many relationship.
 
 I've got a simplified example below. Basically the system is for a
 group of us voting on DVDs we want to rent. There are users and films
 and for each film a user makes a yes/no choice (where '?' means
 undecided).
 
 # create tables
 users_table = Table('users', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(40)),
     mysql_engine='InnoDB')
 
 films_table = Table('films', metadata,
     Column('fid', Integer, primary_key=True),
     Column('title', String(128)),
     mysql_engine='InnoDB')
 
 choices_table = Table('choices', metadata,
     Column('fid', Integer, ForeignKey('films.fid'), primary_key=True),
     Column('id', Integer, ForeignKey('users.id'), primary_key=True),
     Column('choice', MSEnum('?', 'Y', 'N')),
     mysql_engine='InnoDB')
 
 # classes here
 class User(object):
     pass
 
 class Film(object):
     pass
 
 class Choice(object):
     pass
 
 # mappers
 mapper(User, users_table)
 mapper(Film, films_table, properties={'choices':relation(Choice)})
 mapper(Choice, choices_table)
 
 So, if I retrieve a Film object f, f.choices gives me a list of Choice
 objects. From that Film object, I want to look up (and possibly
 modify) a user's choice.
 
 My question is: is there a way to set up this relationship so that I
 could access a user's choice like a dictionary (keyed either by user id
 or a User object), or do I have to write a method to search the list and
 return the object -- in which case, would the changes be reflected in
 the session / database?
 
 Thanks in advance,
 Most impressed so far,
 
 Ken Pierce.
 

You may be interested in the 'Custom List Classes' section of the
documentation:

http://www.sqlalchemy.org/docs/03/adv_datamapping.html#advdatamapping_properties_customlist

If you are using 0.4, the mechanism has changed slightly:

http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections

Regardless of whether you do this or not, if you wrote a method that
searched the list of choices and returned the appropriate one, the
changes should always be reflected in the session.
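
For example, here's a rough sketch of dictionary-style access using the 0.4
collections machinery from the second link (untested; I'm just reusing the
table and class names from your example):

from sqlalchemy.orm import mapper, relation
from sqlalchemy.orm.collections import column_mapped_collection

# key the Film.choices collection on the user id column of the choices
# table, so film.choices behaves like a dict of {user_id: Choice}
mapper(User, users_table)
mapper(Choice, choices_table)
mapper(Film, films_table, properties={
    'choices': relation(Choice,
        collection_class=column_mapped_collection(choices_table.c.id))
    })

# then a particular user's choice can be read or modified in place:
# film.choices[some_user.id].choice = 'Y'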

Hope that helps,

Simon




[sqlalchemy] Re: IntegrityError during query?

2007-10-02 Thread [EMAIL PROTECTED]

Hi Michael, sorry about the lack of information; I wasn't clear on
what you were looking for.

The failing constraint is a custom one for email addresses:

CREATE DOMAIN base.email as TEXT CHECK
(VALUE ~ '[EMAIL PROTECTED](\\.[-\\w]+)*\\.\\w{2,4}$');

Thanks again!
Mark

On Oct 1, 5:46 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 1, 2007, at 2:29 PM, [EMAIL PROTECTED] wrote:





  Hi Michael,

  I'm creating the session by:

    Session = sessionmaker(bind=engine,
                           autoflush=True,
                           transactional=True)
    session = Session()

  and I'm not using any threading at all (therefore no thread-local
  storage). The only thing between the commit and the next query is some
  reporting of statistics (using sys.stdout).

  I'm getting a constraint violation IntegrityError.

 unique constraint? PK constraint? foreign key constraint? are
 you doing any explicit INSERT statements of your own independent of
 the session?

 I can't diagnose the problem any further on this end without an
 explicit example.





[sqlalchemy] Re: Mapping serial(auto increment) Postgres to bigserial

2007-10-02 Thread Ants Aasma

On Oct 2, 10:06 am, voltron [EMAIL PROTECTED] wrote:
 How does one specify that the auto-incrementing field should map to
 bigserial and not serial?
Use the sqlalchemy.databases.postgres.PGBigInteger datatype for that
field.
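
For example (a rough sketch against the 0.3/0.4-era API; the table and
column names here are made up):

from sqlalchemy import Table, MetaData, Column, String
from sqlalchemy.databases.postgres import PGBigInteger

metadata = MetaData()

# an integer primary key normally comes out as SERIAL on Postgres;
# with PGBigInteger it should be emitted as BIGSERIAL instead
events_table = Table('events', metadata,
    Column('id', PGBigInteger, primary_key=True),
    Column('description', String(200)))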





[sqlalchemy] Re: IntegrityError during query?

2007-10-02 Thread Michael Bayer


On Oct 2, 2007, at 7:56 AM, [EMAIL PROTECTED] wrote:


 Hi Michael, sorry about the lack of information; I wasn't clear on
 what you were looking for.

 The failing constraint is a custom one for email addresses:

 CREATE DOMAIN base.email as TEXT CHECK
 (VALUE ~ '[EMAIL PROTECTED](\\.[-\\w]+)*\\.\\w{2,4}$');


you'd have to ensure the text you're inserting meets that regular
expression.







[sqlalchemy] Re: IntegrityError during query?

2007-10-02 Thread Michael Bayer


On Oct 2, 2007, at 7:56 AM, [EMAIL PROTECTED] wrote:


 Hi Michael, sorry about the lack of information; I wasn't clear on
 what you were looking for.

 The failing constraint is a custom one for email addresses:

 CREATE DOMAIN base.email as TEXT CHECK
 (VALUE ~ '[EMAIL PROTECTED](\\.[-\\w]+)*\\.\\w{2,4}$');


oh.  also, you should not save() the object which represents this
insert (or at least not perform any additional queries) until its
fields are fully assembled.  since you are using autoflush=True, the
contents of the session will be flushed automatically before every
query.
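
in other words, something like this (a made-up sketch; 'Customer' and the
email value are only placeholders for whatever your model looks like):

# problematic ordering: the half-built object is already pending, and the
# intervening query triggers an autoflush that INSERTs it before 'email'
# is validly populated, which is what can trip the CHECK constraint
obj = Customer()
session.save(obj)
count = session.query(Customer).count()   # autoflush fires here
obj.email = 'someone@example.com'

# safer: assemble the object completely, then save it
obj = Customer()
obj.email = 'someone@example.com'
session.save(obj)
session.commit()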






[sqlalchemy] Re: Multiple database connections - Using ORM to copy table data from one DB to another

2007-10-02 Thread Cory Johns

I had originally tried the expunge method, as in the code I had attached,
but if I just use session.save(policy), I get the aforementioned exception,
and if I use session.save_or_update(policy) it simply does nothing.

When using session.merge(policy), I get the following exception:

  File "build\bdist.win32\egg\sqlalchemy\orm\session.py", line 483, in merge
NameError: global name 'mapperutil' is not defined

Regarding the eager-loading, I understand that the configuration of the
mappers will be in effect.  I meant to inquire as to whether there was a way
to recursively override the mappers, as this is a different use than the ORM
was originally intended for, and the usual behavior of lazy-loading is not
desired here.  I considered using the eagerload option, but my ORM has
several levels (e.g., Policy contains Insureds which each contain Addresses)
and it didn't seem like the option would apply all the way down.

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
Behalf Of Michael Bayer
Sent: Saturday, September 22, 2007 11:11 AM
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Re: Multiple database connections - Using ORM to
copy table data from one DB to another




On Sep 18, 2007, at 5:47 PM, Cory Johns wrote:

 I'm trying to make a small utility that uses a larger application's  
 ORM to
 copy an object from one database (dev) to another (qa) for testing  
 purposes.
 But I'm running in to trouble getting SQLAlchemy to use the multiple
 database connections.  I can get the object to load, and then open a
 connection to the other database, but when I try to call save, I  
 get the
 following error:

   sqlalchemy.exceptions.InvalidRequestError: Instance
 'thig.base.model.policy.Policy object at 0x018C5AB0' is a detached
 instance or is already persistent in a different Session

 Is it possible to re-attach an ORM instance to a new session in  
 order to
 duplicate the data to another database like I'm doing?  If so, how  
 do I go
 about that?

using session.merge() is probably the most straightforward way (it
returns a second object instance associated with the new session).  or,
you can expunge() the object from the first session, then
save_or_update() it with the second session.
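
roughly (an untested sketch; the session and class names are placeholders
for however your application sets them up):

# load from the dev database, detach, then attach to the qa database
policy = dev_session.query(Policy).get(policy_id)

dev_session.expunge(policy)
qa_session.save_or_update(policy)
qa_session.flush()

# or leave the original attached to dev_session and work with a copy:
# qa_policy = qa_session.merge(policy)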


 Additionally, as I suspect this will be a problem once I get it re- 
 attached,
 is there an easy way to specify that all properties, recursively,  
 should be
 eager loaded?

whatever the configuration is of the mappers() representing the  
involved classes will remain in effect.










[sqlalchemy] Re: Multiple database connections - Using ORM to copy table data from one DB to another

2007-10-02 Thread Michael Bayer


On Oct 2, 2007, at 10:32 AM, Cory Johns wrote:


 I had originally tried the expunge method, as in the code I had  
 attached,
 but if I just use session.save(policy), I get the aforementioned  
 exception,
 and if I use session.save_or_update(policy) it simply does nothing.

 When using session.merge(policy), I get the following exception:

 File "build\bdist.win32\egg\sqlalchemy\orm\session.py", line 483, in merge
 NameError: global name 'mapperutil' is not defined

that's a bug in the exception throw.  if you upgrade to 0.4 or the
latest trunk of 0.3 it's fixed (I recommend 0.4).  the error message
it would like to show you is:

Instance %s has an instance key but is not persisted

which means, you are artificially attaching an _instance_key to the
object but it's not actually present in the database.  if you're
copying over to a new database, remove the _instance_key attribute
from the object before merging it or saving to the new session;
otherwise it thinks no changes are needed.
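
roughly (an untested sketch; note that _instance_key is an internal
attribute, so this is a workaround specific to these versions):

policy = dev_session.query(Policy).get(policy_id)
dev_session.expunge(policy)

# strip the identity key carried over from the dev database so the
# target session treats the object as brand new
if hasattr(policy, '_instance_key'):
    del policy._instance_key

qa_session.save(policy)     # or qa_session.merge(policy)
qa_session.flush()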



 Regarding the eager-loading, I understand that the configuration of  
 the
 mappers will be in effect.  I meant to inquire as to whether there  
 was a way
 to recursively override the mappers, as this is a different use  
 than the ORM
 was originally intended for, and the usual behavior of lazy-loading  
 is not
 desired here.  I considered using the eagerload option, but my ORM has
 several levels (e.g., Policy contains Insureds which each contain
 Addresses)
 and it didn't seem like the option would apply all the way down.

the eagerload options do apply to multiple levels.  in 0.3 you need
to specify a separate eagerload() option for each path, i.e.

query.options(eagerload('a'), eagerload('a.b'), eagerload('a.b.c'))

in 0.4 just use query.options(eagerload_all('a.b.c'))






[sqlalchemy] Re: Mapping serial(auto increment) Postgres to bigserial

2007-10-02 Thread voltron

Thanks

On Oct 2, 3:54 pm, Ants Aasma [EMAIL PROTECTED] wrote:
 On Oct 2, 10:06 am, voltron [EMAIL PROTECTED] wrote:
  How does one specify that the auto-incrementing field should map to
  bigserial and not serial?

 Use the sqlalchemy.databases.postgres.PGBigInteger datatype for that
 field.





[sqlalchemy] Fwd: Migrate: svndump loaded into google code project

2007-10-02 Thread Mark Ramm

I thought this might be of interest to many of you.

We've been trying to gather up a few people to revive the
floundering SQLAlchemy migrations project.  There's actually some
pretty good code there already, it just needs a bit of love and
tenderness to get it up to date with SQLAlchemy 0.4.

It would be a huge benefit to have a clear story to tell in the Python
world about how to do agile database development, or just how to
manage database upgrades as your application evolves.

If you feel up to it, join the mailing list,
([EMAIL PROTECTED]) look over the code, ask some
questions, and get involved.

--Mark

-- Forwarded message --
From: Mark Ramm [EMAIL PROTECTED]
Date: Oct 2, 2007 11:39 AM
Subject: Re: Migrate: svndump loaded into google code project?
To: [EMAIL PROTECTED]


 I just loaded Evan's svndump into the shiny new google code repository
 at: http://sqlalchemy-migrate.googlecode.com/svn/

 Now everything is ready to start the work on reviving migrate.

Thanks Jan!

Just a reminder, I think we should work on getting the monkey patch
removal branch working with 0.3.10 first.

I think someone may have even completed this work, if so now would be
a great time to check that in. ;)

Once that's done I would propose moving that branch to trunk and
working on making it 0.4 ready.

--Mark


-- 
Mark Ramm-Christensen
email: mark at compoundthinking dot com
blog: www.compoundthinking.com/blog
