Re: [sqlalchemy] MapperExtension.append_result ....

2011-02-08 Thread Martijn Moeling
Michael,


I took a look at the recipe you indicated. It looks promising, but the check 
should be constructed from database results. Another issue is that this project 
is implemented in my web based desktop/OS, which uses SQLAlchemy from the bottom 
up, so globally modifying the session object with a PreFilteredQuery is not 
a real option. Creating a session for this program only might be an option, 
but I am not sure how that will turn out.

Since it is web based (and so handles a request and dies), persistence is (to me) 
not very useful and I need to reload everything for every action.

The @reconstructor hook seems outdated to me. I moved to 0.6.6 last week, and 
will only upgrade to stable/production versions, since in my case there is a lot 
to it.

I need to transparently add "being queried" functionality to mapped objects. 
This functionality will be mixed in and should be able to limit the results 
when being queried. Since my class definitions are so complex, I would like to 
make a (non-functional) example of what I am in search of, and I will not 
bother you with chemistry stuff...


class ACL(Base):
    Id = Column(Integer, primary_key=True)

    tablename = Column(Unicode(...))
    tableId = Column(Integer, ...)

    RecordId = ForeignKeyConstraint / ForeignKey  # (not sure yet)

    Record = relation(self.tablename, ...)

    User_Group = relation(...)  # to Person, group

    # Bool columns:
    MayRead = ...
    MayWrite = ...
    MayCreate = ...


class Mixinstuff(object):

    Rights = {}  # Rights[MayRead] etc. will be set upon load



class Person(Base, Mixinstuff):

    Id = Column(Integer, primary_key=True)

    # All ACL records which have tablename == 'person' and tableId == Person.Id,
    # with cascade delete of the ACL records along with me.  ACLs work on many
    # tables.
    ACLs = relation('ACL', ...)
    # I might not define the relation here but backref it from the ACL record,
    # depending on how I build what I want.

    addresses = relation(...)


class Address(Base, ACLMixinstuff):

    Id = Column(Integer, primary_key=True)

    # All ACL records which have tablename == 'address' and tableId ==
    # Address.Id, cascade delete.  ACLs work on many tables.
    ACLs = relation('ACL', ...)
    # I might not define the relation here but backref it from the ACL record,
    # depending on how I build what I want.


class ME(object):
    userId = 1             # foreign key to Person
    groups = [1, 2, 3, 4]  # relationship with groups (same polymorphic baseclass)


Now consider ME being a member of Everyone, but not of Guest.

ACLs for Person:

ME       | table = person | Id = 1 | MayRead = F
Everyone | table = person | Id = 1 | MayRead = T
Guest    | table = person | Id = 1 | MayRead = F


user = ME, GROUPS = [Everyone]

A query Session.query(Person).all() should NOT return the Person row with Id = 1. 
Although Everyone says True, personal permissions overrule group permissions: 
simple boolean operations. If no ACLs are found at all, it defaults to false or 
true; I am not sure yet how this will work on my real data model, since this 
will be the model by which atom and molecule connections are allowed.

If, however, the ACLs turn out such that ME.MayRead = T, I will only get the 
related addresses I actually may read. This should work automatically for each 
class that inherits Mixinstuff.

All this while I do not want the users of this model to be bothered with it; 
they should add data to their model and query to generate lists of possible 
new molecules.

I am somewhat clueless on how to do this properly.

The MapperExtension.append_result still seems the best way...

if calculate_ACLs(Session=object_session(self),
                  tablename=instance.__tablename__,
                  TableId=instance.__TableId__,
                  CheckFor=ME, Right=MayRead):
    return EXT_CONTINUE
else:
    return EXT_STOP

Don't you think?

One other thing: the calculate_ACLs query should be as light as possible. It 
only needs to return True or False, if possible using database functions, and 
if possible in a database independent way.
Can you help me with that one too?


def calculate_ACLs(...):
    # group ACLs: needs a boolean calculation in the query
    BOOLGROUP = Session.query(ACL).filter(and_(tablename == ..., tableId == ...,
                                               USER_GROUP.in_(...)))...
    # the same but now for ME; easy, no boolean calculation needed in the query
    BOOLME = ...

    if BOOLME is not None:   # a personal ACL overrules the group result
        return BOOLME
    else:
        return BOOLGROUP
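Stated as plain Python, the precedence rule ("a personal ACL decides; otherwise any granting group ACL wins") would look roughly like this. This is only a sketch, independent of how the rows are fetched, and the row/field names are invented:

```python
def effective_right(acl_rows, user_id, group_ids, right="MayRead", default=False):
    """Resolve one right from already-fetched ACL rows.

    acl_rows: iterable of dicts such as {"subject": 1, "MayRead": True}.
    A row whose subject is the user himself decides immediately; otherwise
    any group row granting the right wins; with no rows at all, `default`.
    """
    group_result = None
    for row in acl_rows:
        if row["subject"] == user_id:
            return row[right]                  # personal ACL overrules all
        if row["subject"] in group_ids:
            group_result = bool(group_result) or row[right]
    return default if group_result is None else group_result

# The table from above: ME (subject 1) says False, Everyone (subject 2) says True
rows = [
    {"subject": 1, "MayRead": False},
    {"subject": 2, "MayRead": True},
]
```

With these rows, effective_right(rows, user_id=1, group_ids=[2]) is False even though Everyone grants the right, which is exactly the Session.query(Person).all() case described above.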




Martijn











On Feb 7, 2011, at 5:55 PM, Michael Bayer wrote:

 
 On Feb 7, 2011, at 11:42 AM, Martijn Moeling wrote:
 
 I think, I might be helped with the create_instance event
 
 Assuming you're talking about when the ORM establishes an instance from a 
 newly fetched row, you can use the @reconstructor hook for that.   0.7 
 publishes this event additionally as the load event.
 
 
 
 
 I will never ever stop a class from being saved/persistent, 
 
 it is the other way around. I 

Re: [sqlalchemy] MapperExtension.append_result ....

2011-02-08 Thread Michael Bayer

On Feb 8, 2011, at 6:05 AM, Martijn Moeling wrote:

 Michael,
 
 
 I took a look at the recipe you indicated, it looks promising but the check 
 should be constructed from database results. Another issue is that this 
 project is implemented in my web based desktop/Os which uses SQLAlchemy from 
 the bottem up. So modifiing the session object globally is with a 
 PreFilteredQuery is not a real option. Creating a session for this program 
 only might be an option but I am not sure how that will turn out.

Well, a MapperExtension is also global to that class. Subclassing Query 
with rules for a specific mapper is fairly easy to isolate to those use cases.

 
 Being it a web based (and so Handle request and die), Persistence is (to me) 
 not very usefull and I need to reload everything for every action.

That is typical for a web application.

 
 the @reconstructor hook seems too outdated. I moved to 0.6.6 last week, and 
 only will upgrade to stable/production versions since in my case there is a 
 lot to it.

@reconstructor is a standard feature since 0.5 and continues to be.
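A minimal sketch of the hook, for reference; the import path tried first is the modern one, with a fallback to the 0.5/0.6-era location, and the model is invented:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import reconstructor, sessionmaker
try:
    from sqlalchemy.orm import declarative_base          # 1.4 and later
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # 0.5 - 1.3

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

    def __init__(self, name):
        self.name = name
        self.rights = {}       # set when we construct the object ourselves

    @reconstructor
    def init_on_load(self):
        # __init__ is NOT called when the ORM rebuilds an instance from a
        # row; this hook is called instead, so non-column state goes here.
        self.rights = {}
        self.loaded_from_db = True

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(Person(name="martijn"))
session.commit()
session.close()

loaded = Session().query(Person).first()   # fresh session: loads from the row
```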

 
 I need to transparently add being queried functionality to mapped objects. 
 This functionality is will be mixed in and should be able to limit the 
 results when being queried. Since my class definitions are so complex I would 
 like to make a (not functional) example on what I am in search of. and I will 
 not bother you with chemistry stuff...
 
 
 user = ME, GROUPS = [Everyone]
 
 A query for Session.query(Persons).all() should NOT return Person.Id although 
 Everyone says True, personal Permissions overrule group permissions , simple 
 boolean operations. If no ACLs are found It all defaults to false or true not 
 sure yet on how this will work on my real data model, since this will be the 
 model on which atoms and molecule connections are Allowed
 
 If However the ACL's turn out that ME.MayRead = T, I will only get related 
 addresses I actually may read. This should work automatically for each 
 class with Mixedinstuff inherited
 
 This is whilst I do not want the Users of this model to be bothered with 
 this, the should add data to their model and query to generate list of 
 possible new molecules.
 
 I am some sort of clueless on how to do this properly
 
 the MapperExtention.append_result still seems the best way... 
 
   if calculate_ACLs(Session = object_session(self), tablename = 
 instance.__tablename__, TableId = instance.__=TableId__, CheckFor = ME, Right 
 = MayRead ):
   EXT_CONTINUE
   else:
   EXT_STOP
 
 Dont you?

I guess what you're expressing is that your ACL rules need to fire off using 
Python code, not SQL expressions. The whole thing seems quite awkward to me, 
since there's nothing to stop someone from saying Query(MyACLObject.id, 
MyACLObject.name, ...), etc.; they get all the data from the ACL row anyway. 
Similarly, if they were to say Query(SomeClass, SomeOtherClass, MyACLObject) 
using a join, again the append_result() hook isn't used. If it were me I'd 
be using some filter function around query() in an explicit sense to do it, but 
this is just a matter of style.  The hook will work fine if its limitations are 
OK with you.
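To make the "filter function around query()" idea concrete, here is a self-contained sketch along the lines of the ACL model from the previous message; all table, column, and function names are invented for illustration:

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base          # 1.4 and later
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # 0.5 - 1.3

Base = declarative_base()

class ACL(Base):
    __tablename__ = "acl"
    id = Column(Integer, primary_key=True)
    tablename = Column(String(50))    # which table the rule is about
    record_id = Column(Integer)       # which row in that table
    subject_id = Column(Integer)      # a user or group id
    may_read = Column(Boolean)

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

def query_readable(session, cls, user_id, group_ids):
    """Explicit wrapper around session.query(cls): drop unreadable rows.

    A personal ACL decides outright; failing that, any granting group ACL
    makes the row visible.
    """
    acls = session.query(ACL).filter(
        ACL.tablename == cls.__tablename__,
        ACL.subject_id.in_([user_id] + list(group_ids))).all()
    granted, denied = set(), set()
    for acl in acls:
        if acl.subject_id == user_id:          # personal: overrules groups
            (granted if acl.may_read else denied).add(acl.record_id)
        elif acl.may_read:                     # group: any True grants
            granted.add(acl.record_id)
    return session.query(cls).filter(cls.id.in_(granted - denied))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Person(id=1, name="someone"),
    ACL(tablename="person", record_id=1, subject_id=1, may_read=False),  # ME
    ACL(tablename="person", record_id=1, subject_id=2, may_read=True),   # Everyone
])
session.commit()
```

With this data, query_readable(session, Person, user_id=1, group_ids=[2]) returns no rows (the personal denial wins), while a user who only matches through group 2 does see the person.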


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] MapperExtension.append_result ....

2011-02-08 Thread Martijn Moeling
Michael,

Thank you,

The final solution has nothing to do with ACLs or addresses, and security 
against others getting results by querying is a non-issue.
As mentioned before, I am building a database and tools to help chemists 
select molecule structures. It is all far more complex than you might think, 
since the ACL records have ACL records associated with them too.
Setting up relations and queries is a total nightmare, because almost all 
relations end up being circular over multiple tables. Controlling the eager 
loading, where possible for convenience and where not, has been a huge job, 
although SQLAlchemy is a huge help.

I only use this as an understandable data structure, since I know how hard it 
was to understand the terminology. I do not want to bring that to this group, 
and more importantly, since I search the mailing list myself a lot, it can help 
others find a solution to their needs.  I find that the deeper I dive into SA, 
the fewer examples are available, the harder it is to test functionality, and 
sometimes the documentation gets more sparse.

Thank you again...

Martijn

 


On Feb 8, 2011, at 4:21 PM, Michael Bayer wrote:

 
 On Feb 8, 2011, at 6:05 AM, Martijn Moeling wrote:
 
 Michael,
 
 
 I took a look at the recipe you indicated, it looks promising but the check 
 should be constructed from database results. Another issue is that this 
 project is implemented in my web based desktop/Os which uses SQLAlchemy from 
 the bottem up. So modifiing the session object globally is with a 
 PreFilteredQuery is not a real option. Creating a session for this program 
 only might be an option but I am not sure how that will turn out.
 
 Well a MapperExtension is also global to that class.Subclassing Query 
 with rules for a specific mapper is fairly easy to isolate to those use cases.
 
 
 Being it a web based (and so Handle request and die), Persistence is (to me) 
 not very usefull and I need to reload everything for every action.
 
 That is typical for a web application.
 
 
 the @reconstructor hook seems too outdated. I moved to 0.6.6 last week, and 
 only will upgrade to stable/production versions since in my case there is a 
 lot to it.
 
 @reconstructor is a standard feature since 0.5 and continues to be.
 
 
 I need to transparently add being queried functionality to mapped objects. 
 This functionality is will be mixed in and should be able to limit the 
 results when being queried. Since my class definitions are so complex I 
 would like to make a (not functional) example on what I am in search of. and 
 I will not bother you with chemistry stuff...
 
 
 user = ME, GROUPS = [Everyone]
 
 A query for Session.query(Persons).all() should NOT return Person.Id 
 although Everyone says True, personal Permissions overrule group permissions 
 , simple boolean operations. If no ACLs are found It all defaults to false 
 or true not sure yet on how this will work on my real data model, since this 
 will be the model on which atoms and molecule connections are Allowed
 
 If However the ACL's turn out that ME.MayRead = T, I will only get related 
 addresses I actually may read. This should work automatically for each 
 class with Mixedinstuff inherited
 
 This is whilst I do not want the Users of this model to be bothered with 
 this, the should add data to their model and query to generate list of 
 possible new molecules.
 
 I am some sort of clueless on how to do this properly
 
 the MapperExtention.append_result still seems the best way...
 
  if calculate_ACLs(Session = object_session(self), tablename = 
 instance.__tablename__, TableId = instance.__=TableId__, CheckFor = ME, 
 Right = MayRead ):
  EXT_CONTINUE
  else:
  EXT_STOP
 
 Dont you?
 
 I guess what you're expressing is that your ACL rules need to fire off using 
 Python code, not SQL expressions.The whole thing seems quite awkward to 
 me since there's nothing to stop someone from saying Query(MyACLObject.id, 
 MyACLObject.name, ...), etc., they get all the data from the ACL row anyway, 
 or similarly if they were to say Query(SomeClass, SomeOtherClass, 
 MyACLObject) using a join, again the append_result() hook isn't used.If 
 it were me I'd be using some filter function around query() in an explicit 
 sense to do it, but this is just a matter of style.  The hook will work fine 
 if its limitations are OK with you.
 
 
 


[sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Martijn Moeling
Hi,

I am having a small issue with multiple Python modules and declarative...

I might be missing something, but consider:

a.py:

8---
Base = declarative_base()

class A(Base):
    ...
8---

b.py:

8---
Base = declarative_base()

class B(Base):
    ...
8---

c.py:

8---
Base = declarative_base()

class C(Base):
    ...
8---

d.py:

8---
Base = declarative_base()

from a import *  # imports Base
from b import *  # imports Base
from c import *  # imports Base

class D1(Base):
    ...

class D2(A):
    ...
8---

In d.py I want to create:

def create_tables(engine):
    metadata = Base.metadata
    metadata.create_all(engine)


Is there any way to properly add the metadata from the modules imported in 
d.py to Base.metadata during the import?
Think of modules a, b, c and d as being together in a package, with d being 
imported alongside similar packages into something bigger.

Martijn











Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Michael Bayer

On Feb 8, 2011, at 1:19 PM, Martijn Moeling wrote:

 Hi,
 
 I am having a small issue with multiple python modules and declarative...
 
 I might miss something but
 
 Consider:
 
 a.py:
 
 8---
 Base = declarative_base()
 
 class A(Base):
   ...
 8---
 
 b.py
 Base = declarative_base()
 
 class B(Base):
   ...
 8---
 
 c.py
 Base = declarative_base()
 
 class C(Base):
   ...
 8---
 
 d.py
 
 Base = declarative_base()
 
 from A import * # imports base 
 from B import * # imports base
 from C import * # imports base
 
 Class D1(Base)
 ...
 
 Class D2(A)
 ...
 
 
 in d.py I want to create:
 
 def create_tables(engine):
   metadata= Base.metadata
   metadata.create_all(engine)
 
 
 Is there any way to properly add the metadata from the imported modules in 
 d.py  to the Base.metadata during the import..?
 Think of modules a,b,c and d are together in a package and d is imported with 
 similar packages into something bigger

Usually the convention is that all modules in an application share the same 
declarative base object (i.e. Base).   If you want multiple Base objects but 
have them share a common MetaData, you can declare the MetaData up front, then 
create each Base using declarative_base(metadata=my_metadata). Otherwise, if 
you're really looking to merge together multiple MetaData objects, that's not 
in the API in a clean way right now; you could perhaps call .tometadata() on 
each Table object, but that's really not what I would do here: it copies the 
whole Table object and isn't really for a use case like this.   If you can at 
least have a common MetaData object, or better a common Base object, that would 
be the best way to go.
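The "common MetaData, separate Bases" variant from the paragraph above looks like this; the module boundaries are collapsed into one file for the sketch, and the modern import path is tried first:

```python
from sqlalchemy import Column, Integer, MetaData, create_engine
try:
    from sqlalchemy.orm import declarative_base          # 1.4 and later
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # 0.5 - 1.3

shared_metadata = MetaData()

# a.py and b.py would each import shared_metadata from a common module
# and build their own Base on top of it:
BaseA = declarative_base(metadata=shared_metadata)
BaseB = declarative_base(metadata=shared_metadata)

class A(BaseA):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class B(BaseB):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)

# both tables collect on the one MetaData, so one create_all() emits both
engine = create_engine("sqlite://")
shared_metadata.create_all(engine)
```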





Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Martijn Moeling
Michael,

Do you ever sleep?

I am not sure I get your point. How do I set up a common Base? 

I could do Base = declarative_base()

from a import A  (or *; not sure how this differs in this case)
from b import B  (or *)

If I do not declare Base in module a, I get an ImportError on class A(Base); 
the same goes for importing b. 

This gets even more complicated when Base should be built from classes defined 
across modules.

At the end there is one main.py importing all modules, and this should be able 
to define the main Base.

Any suggestions on how to tackle this? I now have multiple modules and am 
gluing everything together. This gets even more problematic with inheritance.

One solution could be:

from a import A, BaseA
from b import B, BaseB

Base = declarative_base()

metadata = BaseA.metadata + BaseB.metadata  # hypothetically
metadata.create_all(engine)

What I do not get is how the mapper is configured. Normally, with declarative, 
Base is not used in the production phase, as far as I can see.
The mapper is part of the class, right? And Session does not use Base at all, 
but gets it when needed, as in Session.query(A)?
Or am I totally wrong about this?

Can I do something like this:

from a import *  # imports class A and the declarative_base BaseA

Base = declarative_base()

class C(Base):
    ...

BaseForB = declarative_base(metadata=BaseA.metadata)
class B(BaseForB):
    ...

Mixin classes are of type object, so there is no issue, since @declared_attr 
etc. works.

class D(Base, mixinclass) works without a Base at all, so there is no issue.

Am I right in understanding your comments on my first mail in this topic?

Martijn







On Feb 8, 2011, at 7:46 PM, Michael Bayer wrote:

 
 On Feb 8, 2011, at 1:19 PM, Martijn Moeling wrote:
 
 Hi,
 
 I am having a small issue with multiple python modules and declarative...
 
 I might miss something but
 
 Consider:
 
 a.py:
 
 8---
 Base = declarative_base()
 
 class A(Base):
  ...
 8---
 
 b.py
 Base = declarative_base()
 
 class B(Base):
  ...
 8---
 
 c.py
 Base = declarative_base()
 
 class C(Base):
  ...
 8---
 
 d.py
 
 Base = declarative_base()
 
 from A import * # imports base 
 from B import * # imports base
 from C import * # imports base
 
 Class D1(Base)
 ...
 
 Class D2(A)
 ...
 
 
 in d.py I want to create:
 
 def create_tables(engine):
  metadata= Base.metadata
  metadata.create_all(engine)
 
 
 Is there any way to properly add the metadata from the imported modules in 
 d.py  to the Base.metadata during the import..?
 Think of modules a,b,c and d are together in a package and d is imported 
 with similar packages into something bigger
 
 Usually the convention is that all modules in an application share the same 
 declarative base object (i.e. Base).   If you wanted multiple Base objects 
 but have them share a common MetaData, you can declare the MetaData up front, 
 then create each Base using declarative_base(metadata=my_metadata). 
 Otherwise if you're really looking to merge together multiple MetaData 
 objects, that's not really in the API right now in a clean way, you'd perhaps 
 call .tometadata() on each Table object but that's really not what I would do 
 here - it copies the whole Table object and isn't really for a use case like 
 this.   If you can at least have a common MetaData object, or better a common 
 Base object, that would be the best way to go.
 
 
 




Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Chris Withers

Hi Martijn,

On 08/02/2011 19:25, Martijn Moeling wrote:


I am not sure I get your point. How do I set up a common Base.

I could do Base= Declarative_base()

from a import A (or * not sure how this differs in this case)
from b import B (or *)

If I do not declare Base in module a I get an Import Error on class A(Base), 
the same for importing b.


The answer is: you define your Base somewhere and have both a and b 
import it from there.



This gets even more complicated when Base should be build from classes defined 
across modules.


...and then even more so when one Base needs to be shared between 
multiple Python packages ;-)



Any suggestions on how to tackle this.


I have a package I'm itching to release which will help with this.
Keep your eyes open for 'mortar_rdb' ;-)
(It's currently blocked on sqlalchemy-migrate getting a release, and on me 
doing some renaming from its customer-specific internal name...)


cheers,

Chris

--
Simplistix - Content Management, Batch Processing  Python Consulting
   - http://www.simplistix.co.uk




Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Michael Bayer
the idea is like this:

myproject/
myproject/__init__.py
myproject/meta.py
myproject/somepackage/__init__.py
myproject/somepackage/a.py
myproject/someotherpackage/__init__.py
myproject/someotherpackage/b.py

myproject/__init__.py:

from myproject.somepackage import a
from myproject.someotherpackage import b

meta.py:

Base = declarative_base()

a.py:

from myproject.meta import Base

b.py:

from myproject.meta import Base
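Collapsed into one file for illustration, the effect of the layout above is that every class lands on the single Base, so Base.metadata.create_all() covers all packages (import fallback shown for older versions):

```python
from sqlalchemy import Column, Integer, create_engine
try:
    from sqlalchemy.orm import declarative_base          # 1.4 and later
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # 0.5 - 1.3

# meta.py
Base = declarative_base()

# myproject/somepackage/a.py would do: from myproject.meta import Base
class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

# myproject/someotherpackage/b.py, likewise importing the same Base
class B(Base):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)

# once both modules are imported, one call creates every table
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```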










On Feb 8, 2011, at 2:25 PM, Martijn Moeling wrote:

 Michael,
 
 Do you ever sleep?
 
 I am not sure I get your point. How do I set up a common Base. 
 
 I could do Base= Declarative_base()
 
 from a import A (or * not sure how this differs in this case)
 from b import B (or *)
 
 If I do not declare Base in module a I get an Import Error on class 
 A(Base), the same for importing b. 
 
 This gets even more complicated when Base should be build from classes 
 defined across modules.
 
 at the end there is one main.py importing all modules and this should be able 
 to define the Main Base.
 
 Any suggestions on how to tackle this. I know have multiple modules and am 
 glueing everything together. This even gets more problematic with Inheritance.
 
 one solution could be..
 
 from a import A, BaseA 
 from b import B, BaseB
 
 Base = declarative_base() 
 
 metadata - BaseA.metadata + BaseB.metadata
 metadata.create_all(engine)
 
 What I do not get is how the mapper is configured. Normally with declarative 
 Base it is not used in Production fase for as far as I can see.
 The mapper is part of the Class right? and session does not use Base at all? 
 but gets it when needed - Session.query(A). ?
 Or am I totally wrong on this?
 
 Can I something like this:
 
 from a import * (imports class A and the declarative_base BaseA)
 
 Base = declarative_base()
 
 class C(Base):
  ...
 
 BaseForB = declarative_base(metadata=BaseA)
 class B(BaseForB)
 
 
 Mixin classes are of type object so there is no issue since @declared_attr 
 etc works
 
 class D(Base,mixinclass) works without a Base at all so there is no Issue
 
 Am I right? in understanding your comments on my first mail in this topic?
 
 Martijn
 
 
 
 
 
 
 
 On Feb 8, 2011, at 7:46 PM, Michael Bayer wrote:
 
 
 On Feb 8, 2011, at 1:19 PM, Martijn Moeling wrote:
 
 Hi,
 
 I am having a small issue with multiple python modules and declarative...
 
 I might miss something but
 
 Consider:
 
 a.py:
 
 8---
 Base = declarative_base()
 
 class A(Base):
 ...
 8---
 
 b.py
 Base = declarative_base()
 
 class B(Base):
 ...
 8---
 
 c.py
 Base = declarative_base()
 
 class C(Base):
 ...
 8---
 
 d.py
 
 Base = declarative_base()
 
 from A import * # imports base 
 from B import * # imports base
 from C import * # imports base
 
 Class D1(Base)
 ...
 
 Class D2(A)
 ...
 
 
 in d.py I want to create:
 
 def create_tables(engine):
 metadata= Base.metadata
 metadata.create_all(engine)
 
 
 Is there any way to properly add the metadata from the imported modules 
 in d.py  to the Base.metadata during the import..?
 Think of modules a,b,c and d are together in a package and d is imported 
 with similar packages into something bigger
 
 Usually the convention is that all modules in an application share the same 
 declarative base object (i.e. Base).   If you wanted multiple Base objects 
 but have them share a common MetaData, you can declare the MetaData up 
 front, then create each Base using declarative_base(metadata=my_metadata).   
   Otherwise if you're really looking to merge together multiple MetaData 
 objects, that's not really in the API right now in a clean way, you'd 
 perhaps call .tometadata() on each Table object but that's really not what I 
 would do here - it copies the whole Table object and isn't really for a use 
 case like this.   If you can at least have a common MetaData object, or 
 better a common Base object, that would be the best way to go.
 
 
 
 
 


[sqlalchemy] dynamic classes and tables

2011-02-08 Thread farcat
Hi everyone,

I am new to SQLAlchemy and am figuring out whether it is right for my
project. What I am looking for is the ability to change classes and
tables on the fly, with as much freedom as possible, potentially
having metaclasses figure out the difference between versions and
update the database accordingly. Additionally, I would like to import
databases and automatically generate class definitions. Some code I
tried:

code
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation, sessionmaker

Base = declarative_base()

def Init(self, title=None, year=None):
    self.title = title
    self.year = year

def Repr(self):
    return "Movie(%r, %r, %r)" % (self.title, self.year, self.director)

Movie = type("Movie", (Base,), {'__tablename__': "movies",
        "id": Column(Integer, primary_key=True),
        "title": Column(String(255), nullable=False),
        "year": Column(Integer),
        "directed_by": Column(Integer, ForeignKey('directors.id')),
        "director": relation("Director", backref='movies', lazy=False)})
setattr(Movie, "__init__", classmethod(Init))
setattr(Movie, "__repr__", classmethod(Repr))


class Director(Base):
    __tablename__ = 'directors'

    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False, unique=True)

    def __init__(self, name=None):
        self.name = name

    def __repr__(self):
        return "Director(%r)" % (self.name)

engine = create_engine('sqlite:///meta.db', echo=True)
Base.metadata.create_all(engine)

if __name__ == "__main__":
    Session = sessionmaker(bind=engine)
    session = Session()

    m1 = Movie("Star Trek", 2009)
    m1.director = Director("JJ Abrams")

    d2 = Director("George Lucas")
    d2.movies = [Movie("Star Wars", 1977), Movie("THX 1138", 1971)]

    try:
        session.add(m1)
        session.add(d2)
        session.commit()
    except:
        session.rollback()

    alldata = session.query(Movie).all()
    for somedata in alldata:
        print somedata
\code

which gives this error:


2011-02-08 21:50:47,553 INFO sqlalchemy.engine.base.Engine.0x...ef0L
PRAGMA table_info(directors)
2011-02-08 21:50:47,553 INFO sqlalchemy.engine.base.Engine.0x...ef0L
()
Traceback (most recent call last):
  File D:\Documents\Code\NetBeans\test\alchemy\src\alchemy.py, line
49, in module
m1.director = Director(JJ Abrams)
  File C:\Python27\lib\site-packages\sqlalchemy\orm\attributes.py,
line 158, in __set__
2011-02-08 21:50:47,555 INFO sqlalchemy.engine.base.Engine.0x...ef0L
PRAGMA table_info(movies)
2011-02-08 21:50:47,555 INFO sqlalchemy.engine.base.Engine.0x...ef0L
()
self.impl.set(instance_state(instance),
AttributeError: 'Movie' object has no attribute '_sa_instance_state'

Can anyone shed some light or explain the error message?




Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Martijn Moeling
Clear, if all packages are in the same project, that is...

And what if in a.py I want to inherit from some class mapped in b.py?

A mixin does not always work as a solution; subclassing DeclarativeMeta is an 
option, I am not sure.

Once I do Base = declarative_base(metadata=BaseB.metadata),

does every class x(Base) get the columns in BaseB?

I have been setting up some tests, and one way or the other I need to know how 
Session is using Base, whether I can have multiple declarative_base instances, 
and how Session relates to that.


Consider this:

Base = declarative_base()

class a(Base):
    ...


def create_table(engine):
    b = a()
    metadata = b.metadata
    metadata.create_all(engine)

Base = None  # IMPORTANT: is sessionmaker using introspection to find an
             # instance of declarative base?

engine = ...
Session = sessionmaker(bind=engine)

Would this, or a similar approach, work?


On Feb 8, 2011, at 9:12 PM, Michael Bayer wrote:

 the idea is like this:
 
 myproject/
 myproject/__init__.py
 myproject/meta.py
 myproject/somepackage/__init__.py
 myproject/somepackage/a.py
 myproject/someotherpackage/__init__.py
 myproject/someotherpackage/b.py
 
 myproject/__init__.py:
 
 from myproject.somepackage import a
 from myproject.someotherpackage import b
 
 meta.py:
 
 Base = declarative_base()
 
 a.py:
 
 from myproject.meta import Base
 
 b.py:
 
 from myproject.meta import Base
 
 
 
 
 
 
 
 
 
 
 On Feb 8, 2011, at 2:25 PM, Martijn Moeling wrote:
 
 Michael,
 
 Do you ever sleep?
 
 I am not sure I get your point. How do I set up a common Base. 
 
 I could do Base= Declarative_base()
 
 from a import A (or * not sure how this differs in this case)
 from b import B (or *)
 
 If I do not declare Base in module a I get an Import Error on class 
 A(Base), the same for importing b. 
 
 This gets even more complicated when Base should be build from classes 
 defined across modules.
 
 at the end there is one main.py importing all modules and this should be 
 able to define the Main Base.
 
 Any suggestions on how to tackle this. I know have multiple modules and am 
 glueing everything together. This even gets more problematic with 
 Inheritance.
 
 one solution could be..
 
 from a import A, BaseA 
 from b import B, BaseB
 
 Base = declarative_base() 
 
 metadata = BaseA.metadata + BaseB.metadata
 metadata.create_all(engine)
 
 What I do not get is how the mapper is configured. Normally with declarative 
 Base it is not used in Production fase for as far as I can see.
 The mapper is part of the Class right? and session does not use Base at all? 
 but gets it when needed - Session.query(A). ?
 Or am I totally wrong on this?
 
 Can I something like this:
 
 from a import * (imports class A and the declarative_base BaseA)
 
 Base = declarative_base()
 
 class C(Base):
 ...
 
 BaseForB = declarative_base(metadata=BaseA)
 class B(BaseForB)
 
 
 Mixin classes are of type object so there is no issue since @declared_attr 
 etc works
 
 class D(Base,mixinclass) works without a Base at all so there is no Issue
 
 Am I right? in understanding your comments on my first mail in this topic?
 
 Martijn
 
 
 
 
 
 
 
 On Feb 8, 2011, at 7:46 PM, Michael Bayer wrote:
 
 
 On Feb 8, 2011, at 1:19 PM, Martijn Moeling wrote:
 
 Hi,
 
 I am having a small issue with multiple python modules and declarative...
 
 I might miss something but
 
 Consider:
 
 a.py:
 
 8---
 Base = declarative_base()
 
 class A(Base):
...
 8---
 
 b.py
 Base = declarative_base()
 
 class B(Base):
...
 8---
 
 c.py
 Base = declarative_base()
 
 class C(Base):
...
 8---
 
 d.py
 
 Base = declarative_base()
 
 from A import * # imports base 
 from B import * # imports base
 from C import * # imports base
 
 Class D1(Base)
 ...
 
 Class D2(A)
 ...
 
 
 in d.py I want to create:
 
 def create_tables(engine):
metadata= Base.metadata
metadata.create_all(engine)
 
 
 Is there any way to properly add the metadata from the imported modules 
 in d.py  to the Base.metadata during the import..?
 Think of modules a,b,c and d are together in a package and d is imported 
 with similar packages into something bigger
 
 Usually the convention is that all modules in an application share the same 
 declarative base object (i.e. Base).   If you wanted multiple Base objects 
 but have them share a common MetaData, you can declare the MetaData up 
 front, then create each Base using declarative_base(metadata=my_metadata).  
Otherwise if you're really looking to merge together multiple MetaData 
 objects, that's not really in the API right now in a clean way, you'd 
 perhaps call .tometadata() on each Table object but that's really not what 
 I would do here - it copies the whole Table object and isn't really for a 
 use case like this.   If you can at 

Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Michael Bayer

On Feb 8, 2011, at 3:57 PM, Martijn Moeling wrote:

 Clear, if all packages are in the same project that is..
 
 and what if in a.py I want to inherit some class mapped with b.py  
 
 mixin does not allways work as a solution subclassing (DeclarativeMeta) is an 
 option, not sure
 
 Once I do a base=declarative_base(metadata=BaseB)
 
 every class x(Base) gets the Columns in BaseB to it ..?
 
 I have been setting up some tests and one way or the other I need to know how 
 Session is using Base, if I can have multiple declarative_base instances and 
 how Session relates to that.
 
 
 Consider this:
 
 Base = Declarative_base()
 
 class a(Base):
   
   
   
 
 def create_table(engine):
   b=a()
   metadata = b.metadata
   metadata.create_all(engine)
 
 Base = None  # IMPORTANT IS Sessionmaker using intropsection to find out 
 an instance of Declarative base?
 
 engine = 
 Session = SessionMaker(bind=engine)
 
 would this or a  similar approach work?

Session does not care about Base, nor does MetaData.

The Base gives you this:

class MyObject(Base):
    ...
    related = relationship("Related")

the string "Related" is looked up in a dictionary inside of Base.   The 
dictionary is called _decl_class_registry.

MetaData gives you this:

class MyObject(Base):
...
related_id = Column(Integer, ForeignKey('related.id'))

the string 'related.id' is broken into 'related' and 'id' and is looked up 
inside a dictionary inside of MetaData().   The dictionary is called .tables.   
There's some extra lookup helper mechanics surrounding this dictionary in 0.7, 
which is why we don't want you manipulating .tables directly.

Those two registries are the *only* thing you get from a Base and a MetaData 
that is dependent on how many of them are in use.   Neither is strictly needed. 
  relationship() accepts the real class itself, i.e. relationship(Related).   
ForeignKey accepts a real column object, i.e. ForeignKey(related_table.c.id).   
 The registries are strictly for the purpose of making it *easier* to organize 
table metadata and declarative classes without worrying about order of 
dependencies, allowing specification of related constructs via string name.  
Otherwise feel free to declare every single class and Table on its own Base and 
MetaData, it makes no difference.

If you have multiple projects each with their own set of unrelated tables, 
there is no need to merge any Base or MetaData objects together.  Simply call 
create_all() on each MetaData() object as needed.   The namespaces remain 
entirely separate, as they should.

If OTOH the multiple projects are linking to each other's tables and classes, 
then these projects have a dependency on each other.  You should change them 
such that a common MetaData, and optionally a Base, can be specified for all of 
them to use - unless you can get away with not declaring any inter-package 
relationships or foreign keys with string names.
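
As an illustration of the shared-MetaData arrangement described above, here is
a minimal runnable sketch (the table and class names are invented for the
example; the try/except import covers older SQLAlchemy versions, where
declarative_base lived in sqlalchemy.ext.declarative):

```python
from sqlalchemy import Column, Integer, MetaData, create_engine
try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base  # older versions

# one MetaData shared by two otherwise independent declarative Bases
shared_md = MetaData()
BaseA = declarative_base(metadata=shared_md)
BaseB = declarative_base(metadata=shared_md)

class A(BaseA):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)

class B(BaseB):
    __tablename__ = "b"
    id = Column(Integer, primary_key=True)

# both Tables land in the same MetaData, so one create_all() covers them
engine = create_engine("sqlite://")
shared_md.create_all(engine)
print(sorted(shared_md.tables))  # ['a', 'b']
```

Each Base still keeps its own class registry, but the table namespace is one,
which is what ForeignKey('...') string lookups need.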









 
 
 On Feb 8, 2011, at 9:12 PM, Michael Bayer wrote:
 
 the idea is like this:
 
 myproject/
 myproject/__init__.py
 myproject/meta.py
 myproject/somepackage/__init__.py
 myproject/somepackage/a.py
 myproject/someotherpackage/__init__.py
 myproject/someotherpackage/b.py
 
 myproject/__init__.py:
 
 from myproject.somepackage import a
 from myproject.someotherpackage import b
 
 meta.py:
 
 Base = declarative_base()
 
 a.py:
 
 from myproject.meta import Base
 
 b.py:
 
 from myproject.meta import Base
 
 
 
 
 
 
 
 
 
 
 On Feb 8, 2011, at 2:25 PM, Martijn Moeling wrote:
 
 Michael,
 
 Do you ever sleep?
 
 I am not sure I get your point. How do I set up a common Base. 
 
 I could do Base= Declarative_base()
 
 from a import A (or * not sure how this differs in this case)
 from b import B (or *)
 
 If I do not declare Base in module a I get an Import Error on class 
 A(Base), the same for importing b. 
 
 This gets even more complicated when Base should be build from classes 
 defined across modules.
 
 at the end there is one main.py importing all modules and this should be 
 able to define the Main Base.
 
 Any suggestions on how to tackle this. I know have multiple modules and am 
 glueing everything together. This even gets more problematic with 
 Inheritance.
 
 one solution could be..
 
 from a import A, BaseA 
 from b import B, BaseB
 
 Base = declarative_base() 
 
 metadata = BaseA.metadata + BaseB.metadata
 metadata.create_all(engine)
 
 What I do not get is how the mapper is configured. Normally with 
 declarative Base it is not used in Production fase for as far as I can 
 see.
 The mapper is part of the Class right? and session does not use Base at 
 all? but gets it when needed - Session.query(A). ?
 Or am I totally wrong on this?
 
 Can I something like this:
 
 from a import * (imports class A and the declarative_base BaseA)
 
 Base = declarative_base()
 
 class C(Base):
...
 
 BaseForB = declarative_base(metadata=BaseA)
 

Re: [sqlalchemy] Declarative, Imports and Base

2011-02-08 Thread Martijn Moeling
Clear!


On Feb 8, 2011, at 10:21 PM, Michael Bayer wrote:

 
 On Feb 8, 2011, at 3:57 PM, Martijn Moeling wrote:
 
 Clear, if all packages are in the same project that is..
 
 and what if in a.py I want to inherit some class mapped with b.py  
 
 mixin does not allways work as a solution subclassing (DeclarativeMeta) is 
 an option, not sure
 
 Once I do a base=declarative_base(metadata=BaseB)
 
 every class x(Base) gets the Columns in BaseB to it ..?
 
 I have been setting up some tests and one way or the other I need to know 
 how Session is using Base, if I can have multiple declarative_base instances 
 and how Session relates to that.
 
 
 Consider this:
 
 Base = Declarative_base()
 
 class a(Base):
  
  
  
 
 def create_table(engine):
  b=a()
  metadata = b.metadata
  metadata.create_all(engine)
 
 Base = None  # IMPORTANT IS Sessionmaker using intropsection to find out 
 an instance of Declarative base?
 
 engine = 
 Session = SessionMaker(bind=engine)
 
 would this or a  similar approach work?
 
 Session does not care about Base, nor does MetaData.
 
 The Base gives you this:
 
 class MyObject(Base):
...
related = relationship(Related)
 
 the string Related is looked up in a dictionary inside of Base.   The 
 dictionary is called _decl_class_registry.
 
 MetaData gives you this:
 
 class MyObject(Base):
...
related_id = Column(Integer, ForeignKey('related.id'))
 
 the string 'related.id' is broken into 'related' and 'id' and is looked up 
 inside a dictionary inside of MetaData().   The dictionary is called 
 tables.There's some extra lookup helper mechanics surrounding this 
 dictionary in 0.7 which is why we don't want you manipulating .tables 
 directly.
 
 Those two registries are the *only* thing you get from a Base and a MetaData 
 that is dependent on how many of them are in use.   Neither is strictly 
 needed.   relationship() accepts the real class itself, i.e. 
 relationship(Related).   ForeignKey accepts a real column object, i.e. 
 ForeignKey(related_table.c.id).The registries are strictly for the 
 purpose of making it *easier* to organize table metadata and declarative 
 classes without worrying about order of dependencies, allowing specification 
 of related constructs via string name.  Otherwise feel free to declare every 
 single class and Table on its own Base and MetaData, it makes no difference.
 
 If you have multiple projects each with their own set of unrelated tables, 
 there is no need to merge any Base or MetaData objects together.  Simply call 
 create_all() on each MetaData() object as needed.   The namespaces remain 
 entirely separate, as they should.
 
 If OTOH the multiple projects are linking to each other's tables and classes, 
 then these projects have a dependency on each other.  You should change them 
 such that a common MetaData, and optionally a Base, can be specified from 
 which they all make usage of - unless you can get away with not declaring any 
 inter-package relationships or foreign keys with string names.
 
 
 
 
 
 
 
 
 
 
 
 On Feb 8, 2011, at 9:12 PM, Michael Bayer wrote:
 
 the idea is like this:
 
 myproject/
 myproject/__init__.py
 myproject/meta.py
 myproject/somepackage/__init__.py
 myproject/somepackage/a.py
 myproject/someotherpackage/__init__.py
 myproject/someotherpackage/b.py
 
 myproject/__init__.py:
 
 from myproject.somepackage import a
 from myproject.someotherpackage import b
 
 meta.py:
 
 Base = declarative_base()
 
 a.py:
 
 from myproject.meta import Base
 
 b.py:
 
 from myproject.meta import Base
 
 
 
 
 
 
 
 
 
 
 On Feb 8, 2011, at 2:25 PM, Martijn Moeling wrote:
 
 Michael,
 
 Do you ever sleep?
 
 I am not sure I get your point. How do I set up a common Base. 
 
 I could do Base= Declarative_base()
 
 from a import A (or * not sure how this differs in this case)
 from b import B (or *)
 
 If I do not declare Base in module a I get an Import Error on class 
 A(Base), the same for importing b. 
 
 This gets even more complicated when Base should be build from classes 
 defined across modules.
 
 at the end there is one main.py importing all modules and this should be 
 able to define the Main Base.
 
 Any suggestions on how to tackle this. I know have multiple modules and am 
 glueing everything together. This even gets more problematic with 
 Inheritance.
 
 one solution could be..
 
 from a import A, BaseA 
 from b import B, BaseB
 
 Base = declarative_base() 
 
 metadata = BaseA.metadata + BaseB.metadata
 metadata.create_all(engine)
 
 What I do not get is how the mapper is configured. Normally with 
 declarative Base it is not used in Production fase for as far as I can 
 see.
 The mapper is part of the Class right? and session does not use Base at 
 all? but gets it when needed - Session.query(A). ?
 Or am I totally wrong on this?
 
 Can I something like this:
 
 from a import * (imports class A and the declarative_base BaseA)

[sqlalchemy] Problem with table reflection (version 0.6.6) with mysql database

2011-02-08 Thread Paul Rigor
Hello,

I have a table with the following schema:

+-+---+--+-+-++
| Field   | Type  | Null | Key | Default | Extra  |
+-+---+--+-+-++
| acc | varchar(1024) | YES  | | NULL||
| is_obsolete | int(11)   | YES  | | NULL||
| is_root | int(11)   | YES  | | NULL||
| term_type   | varchar(1024) | YES  | | NULL||
| id  | int(11)   | YES  | | NULL||
| cid | int(11)   | NO   | PRI | NULL| auto_increment |
| name| varchar(1024) | YES  | | NULL||
+-+---+--+-+-++


When attempting to run the following code to obtain column information
programmatically...

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.engine import reflection
dburi = "mysql://..."
engine = create_engine(dburi)
meta = MetaData(dburi)
user_table = Table('term', meta, useexisting=True)
engine.reflecttable(user_table, include_columns=True)  # More verbose error trace
insp = reflection.Inspector.from_engine(engine)
insp.reflecttable(user_table, include_columns=True)


I get the following problem:
python2.6/site-packages/SQLAlchemy-0.6.6-py2.6.egg/sqlalchemy/engine/reflection.pyc
in reflecttable(self, table, include_columns)
383 found_table = True
384 name = col_d['name']
-- 385 if include_columns and name not in include_columns:
386 continue
387

TypeError: argument of type 'bool' is not iterable

Is there a better way of obtaining table schemas?

Thanks,
Paul




Re: [sqlalchemy] Problem with table reflection (version 0.6.6) with mysql database

2011-02-08 Thread Michael Bayer
include_columns is a list of strings indicating the names of those columns 
which you'd like reflected.  If you want to reflect all columns from the table, 
leave that argument out.
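
For example, with include_columns given as a list of names (the 'term' table
here is a small stand-in built on SQLite so the snippet is self-contained;
autoload_with is the modern spelling of the reflection argument):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine)

engine = create_engine("sqlite://")

# build a small stand-in for the 'term' table
source_md = MetaData()
Table(
    "term", source_md,
    Column("cid", Integer, primary_key=True),
    Column("acc", String(1024)),
    Column("name", String(1024)),
)
source_md.create_all(engine)

# include_columns takes a list of column *names*; only those are reflected
reflected = Table(
    "term", MetaData(),
    autoload_with=engine,
    include_columns=["cid", "name"],
)
print(sorted(c.name for c in reflected.columns))  # ['cid', 'name']
```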

On Feb 8, 2011, at 4:36 PM, Paul Rigor wrote:

 Hello,
 
 I have a table with the following schema:
 
 +-+---+--+-+-++
 | Field   | Type  | Null | Key | Default | Extra  |
 +-+---+--+-+-++
 | acc | varchar(1024) | YES  | | NULL|| 
 | is_obsolete | int(11)   | YES  | | NULL|| 
 | is_root | int(11)   | YES  | | NULL|| 
 | term_type   | varchar(1024) | YES  | | NULL|| 
 | id  | int(11)   | YES  | | NULL|| 
 | cid | int(11)   | NO   | PRI | NULL| auto_increment | 
 | name| varchar(1024) | YES  | | NULL|| 
 +-+---+--+-+-++
 
 
 When attempting to run the following code to obtain column information 
 programmatically...
 
 from sqlalchemy import create_engine, MetaData, Table
 from sqlalchemy.engine import reflection
 dburi = mysql://...
 engine =  create_engine(dburi)
 meta = MetaData(dburi)
 user_table = Table('term', meta,useexisting=True)
 engine.reflecttable(user_table,include_columns=True) # More verbose error 
 trace
 insp = reflection.Inspector.from_engine(engine)
 insp.reflecttable(user_table, include_columns=True) 
 
 
 I get the following problem:
 python2.6/site-packages/SQLAlchemy-0.6.6-py2.6.egg/sqlalchemy/engine/reflection.pyc
  in reflecttable(self, table, include_columns)
 383 found_table = True
 384 name = col_d['name']
 -- 385 if include_columns and name not in include_columns:
 386 continue
 387
 
 TypeError: argument of type 'bool' is not iterable
 
 Is there a better way of obtaining table schemas?
 
 Thanks,
 Paul
 




Re: [sqlalchemy] dynamic classes and tables

2011-02-08 Thread Michael Bayer

On Feb 8, 2011, at 3:52 PM, farcat wrote:

 Hi everyone,
 
 I am new to sqlalchemy and figuring out whether it is right for my
 project. What I am looking for is the ability to change classes and
 tables on the flight, with as much freedom as possible, potentially
 having metaclasses figuring out the difference between versions and
 updating the database accordingly. Additionally I would like to import
 databases and automatically generate class definitions. Some code I
 tried:
 
 code
 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import relation, sessionmaker
 
 Base = declarative_base()
 
 def Init(self, title=None, year=None):
     self.title = title
     self.year = year
 def Repr(self):
     return "Movie(%r, %r, %r)" % (self.title, self.year, self.director)
 
 Movie = type("Movie", (Base,), {'__tablename__': "movies",
     "id": Column(Integer, primary_key=True),
     "title": Column(String(255), nullable=False),
     "year": Column(Integer),
     "directed_by": Column(Integer, ForeignKey('directors.id')),
     "director": relation("Director", backref='movies', lazy=False)})
 setattr(Movie, "__init__", classmethod(Init))
 setattr(Movie, "__repr__", classmethod(Repr))
 
 
 class Director(Base):
     __tablename__ = 'directors'
 
     id = Column(Integer, primary_key=True)
     name = Column(String(50), nullable=False, unique=True)
 
     def __init__(self, name=None):
         self.name = name
 
     def __repr__(self):
         return "Director(%r)" % (self.name)
 
 engine = create_engine('sqlite:///meta.db', echo=True)
 Base.metadata.create_all(engine)
 
 if __name__ == "__main__":
     Session = sessionmaker(bind=engine)
     session = Session()
 
     m1 = Movie("Star Trek", 2009)
     m1.director = Director("JJ Abrams")
 
     d2 = Director("George Lucas")
     d2.movies = [Movie("Star Wars", 1977), Movie("THX 1138", 1971)]
 
     try:
         session.add(m1)
         session.add(d2)
         session.commit()
     except:
         session.rollback()
 
     alldata = session.query(Movie).all()
     for somedata in alldata:
         print somedata
 
 \code
 
 with this error:
 
 
 2011-02-08 21:50:47,553 INFO sqlalchemy.engine.base.Engine.0x...ef0L
 PRAGMA table_info(directors)
 2011-02-08 21:50:47,553 INFO sqlalchemy.engine.base.Engine.0x...ef0L
 ()
 Traceback (most recent call last):
  File D:\Documents\Code\NetBeans\test\alchemy\src\alchemy.py, line
 49, in module
m1.director = Director(JJ Abrams)
  File C:\Python27\lib\site-packages\sqlalchemy\orm\attributes.py,
 line 158, in __set__
 2011-02-08 21:50:47,555 INFO sqlalchemy.engine.base.Engine.0x...ef0L
 PRAGMA table_info(movies)
 2011-02-08 21:50:47,555 INFO sqlalchemy.engine.base.Engine.0x...ef0L
 ()
self.impl.set(instance_state(instance),
 AttributeError: 'Movie' object has no attribute '_sa_instance_state'
 
 Can anyone shed some light or explain the error message?

your __init__ monkeypatch is interfering with SQLA's wrapping of this method.

Try

Movie = type("Movie", (Base,), {'__tablename__': "movies",
    "__init__": Init,
    "id": Column(Integer, primary_key=True),
    "title": Column(String(255), nullable=False),
    "year": Column(Integer),
    "directed_by": Column(Integer, ForeignKey('directors.id')),
    "director": relation("Director", backref='movies', lazy=False)})


instead.
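
For reference, a trimmed runnable sketch of the corrected pattern, with
__init__ supplied in the type() dictionary up front (the relationship to
Director is omitted to keep the example short; the import fallback covers
older SQLAlchemy versions):

```python
from sqlalchemy import Column, Integer, String, create_engine
try:
    from sqlalchemy.orm import declarative_base, sessionmaker  # 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

Base = declarative_base()

def movie_init(self, title=None, year=None):
    self.title = title
    self.year = year

# __init__ goes into the class dict as a plain function, not a post-hoc
# classmethod, so declarative can instrument it normally
Movie = type("Movie", (Base,), {
    "__tablename__": "movies",
    "__init__": movie_init,
    "id": Column(Integer, primary_key=True),
    "title": Column(String(255), nullable=False),
    "year": Column(Integer),
})

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Movie("Star Trek", 2009))
session.commit()
print(session.query(Movie).one().title)  # Star Trek
```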



 
 




[sqlalchemy] Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread chris e
To simplify date handling in a project on which I am working, I am
storing UTC dates in the database in a timestamp with timezone field,
however, because cx_Oracle does not have any timezone functionality, I
need to cast the UTC timestamp I'm inserting into the database as a
timestamp in UTC so that the database does not convert it to the db
timezone. This also needs to apply to default values.


I have the following, however, it is not called for default values:

from pytz import UTC
class UTCDateTime(TypeDecorator):
    impl = TIMESTAMP

    # add the UTC time zone info to naive timestamps
    def process_result_value(self, value, dialect):
        if value != None:
            value = UTC.localize(value)

        return value

@compiles(_BindParamClause)
def _compile_utc_date(element, compiler, **kw):
    if isinstance(element.type, UTCDateTime):
        return "from_tz(cast(%s as timestamp), 'UTC')" \
            % compiler.visit_bindparam(element, **kw)

    return compiler.visit_bindparam(element, **kw)




[sqlalchemy] Re: Problem with table reflection (version 0.6.6) with mysql database

2011-02-08 Thread Paul Rigor
Hello,

The same error happens with version 0.6.0 as well as 0.5.8, although for
0.5.8 the error message is different (see below). Note also that the
version of the MySQL Python driver is 1.2.3. Thanks!!!

SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/databases/mysql.pyc in reflect(self,
connection, table, show_create, charset, only)
   2133
   2134 if only:
- 2135 only = set(only)
   2136
   2137 for line in re.split(r'\r?\n', show_create):

TypeError: 'bool' object is not iterable

Cheers,
Paul



On Tue, Feb 8, 2011 at 1:36 PM, Paul Rigor paulri...@gmail.com wrote:

 Hello,

 I have a table with the following schema:

 +-+---+--+-+-++
 | Field   | Type  | Null | Key | Default | Extra  |
 +-+---+--+-+-++
 | acc | varchar(1024) | YES  | | NULL||
 | is_obsolete | int(11)   | YES  | | NULL||
 | is_root | int(11)   | YES  | | NULL||
 | term_type   | varchar(1024) | YES  | | NULL||
 | id  | int(11)   | YES  | | NULL||
 | cid | int(11)   | NO   | PRI | NULL| auto_increment |
 | name| varchar(1024) | YES  | | NULL||
 +-+---+--+-+-++


 When attempting to run the following code to obtain column information
 programmatically...

 from sqlalchemy import create_engine, MetaData, Table
 from sqlalchemy.engine import reflection
 dburi = mysql://...
 engine =  create_engine(dburi)
 meta = MetaData(dburi)
 user_table = Table('term', meta,useexisting=True)
 engine.reflecttable(user_table,include_columns=True) # More verbose error
 trace
 insp = reflection.Inspector.from_engine(engine)
 insp.reflecttable(user_table, include_columns=True)


 I get the following problem:
 python2.6/site-packages/SQLAlchemy-0.6.6-py2.6.egg/sqlalchemy/engine/reflection.pyc
 in reflecttable(self, table, include_columns)
 383 found_table = True
 384 name = col_d['name']
 -- 385 if include_columns and name not in include_columns:
 386 continue
 387

 TypeError: argument of type 'bool' is not iterable

 Is there a better way of obtaining table schemas?

 Thanks,
 Paul





Re: [sqlalchemy] Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread Michael Bayer

On Feb 8, 2011, at 4:56 PM, chris e wrote:

 To simplify date handling in a project on which I am working, I am
 storing UTC dates in the database in a timestamp with timezone field,
 however, because cx_Oracle does not have any timezone functionality, I
 need to cast the UTC timestamp I'm inserting into the database as a
 timestamp in UTC so that the database does not convert it to the db
 timezone. This also needs to apply to default values.
 
 
 I have the following, however, it is not called for default values:
 

The @compiles for _BindParamClause was never expected and is not covered within 
a visit_insert() right now.  Ticket #2042 is added.  However, you're better off 
using SQL level UTC functions for defaults in any case which would be the 
workaround here.  Here's one I use for PG + SQL Server.  You can add another 
for Oracle that includes your CAST expression if needed:

class utcnow(expression.FunctionElement):
    type = DateTime()

@compiles(utcnow, 'postgresql')
def _pg_utcnow(element, compiler, **kw):
    return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

@compiles(utcnow, 'mssql')
def _ms_utcnow(element, compiler, **kw):
    return "GETUTCDATE()"



 from pytz import UTC
 class UTCDateTime(TypeDecorator):
impl = TIMESTAMP
 
# add the UTC time zone info to naive timestamps
def process_result_value(self, value, dialect) :
if value != None :
value = UTC.localize(value)
 
return value
 
 @compiles(_BindParamClause)
 def _compile_utc_date(element, compiler, **kw):
if isinstance(element.type, UTCDateTime) :
   return from_tz(cast(%s as timestamp), 'UTC') \
   % compiler.visit_bindparam(element, **kw)
 
return compiler.visit_bindparam(element, **kw)
 
 




[sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread chris e
I now have the code below, but the _oracle_utc_timestamp function is
never called, even when I do explicitly set a value.

class UTCTimestamp(TypeDecorator):
impl = TIMESTAMP

# add the UTC time zone info to naive timestamps
def process_result_value(self, value, dialect) :
if value != None :
value = UTC.localize(value)

return value

def process_bind_param(self, value, dialect):
# if we have a value convert it to UTC if needed
if value != None :
if value.tzinfo :
value = value.astimezone(UTC)
return value

class utc_timestamp(FunctionElement):
type = DateTime()

@compiles(utc_timestamp)
def _oracle_utc_timestamp(element, compiler, **kw):
import pdb
pdb.set_trace()






On Feb 8, 6:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2011, at 4:56 PM, chris e wrote:

  To simplify date handling in a project on which I am working, I am
  storing UTC dates in the database in a timestamp with timezone field,
  however, because cx_Oracle does not have any timezone functionality, I
  need to cast the UTC timestamp I'm inserting into the database as a
  timestamp in UTC so that the database does not convert it to the db
  timezone. This also needs to apply to default values.

  I have the following, however, it is not called for default values:

 The @compiles for _BindParamClause was never expected and is not covered 
 within a visit_insert() right now.  Ticket #2042 is added.  However, you're 
 better off using SQL level UTC functions for defaults in any case which would 
 be the workaround here.  Here's one I use for PG + SQL Server.  You can add 
 another for Oracle that includes your CAST expression if needed:

 class utcnow(expression.FunctionElement):
     type = DateTime()

 @compiles(utcnow, 'postgresql')
 def _pg_utcnow(element, compiler, **kw):
     return TIMEZONE('utc', CURRENT_TIMESTAMP)

 @compiles(utcnow, 'mssql')
 def _ms_utcnow(element, compiler, **kw):
     return GETUTCDATE()

  from pytz import UTC
  class UTCDateTime(TypeDecorator):
     impl = TIMESTAMP

     # add the UTC time zone info to naive timestamps
     def process_result_value(self, value, dialect) :
         if value != None :
             value = UTC.localize(value)

         return value

  @compiles(_BindParamClause)
  def _compile_utc_date(element, compiler, **kw):
     if isinstance(element.type, UTCDateTime) :
        return from_tz(cast(%s as timestamp), 'UTC') \
            % compiler.visit_bindparam(element, **kw)

     return compiler.visit_bindparam(element, **kw)





Re: [sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread Michael Bayer


The TypeDecorator is not used for defaults.   You set default=utc_timestamp() 
on your Column.   The SQL expression is rendered directly in the INSERT when no 
value is given; no bind params are used.
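
A runnable sketch of that arrangement, with a SQLite compilation registered so
the example executes anywhere (an Oracle variant returning the
from_tz(cast(...)) expression would be registered the same way; the table and
column names here are invented):

```python
from sqlalchemy import (Column, DateTime, Integer, MetaData, Table,
                        create_engine, select)
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import FunctionElement

class utc_timestamp(FunctionElement):
    type = DateTime()
    inherit_cache = True  # quiets a 1.4+ caching warning; harmless earlier

@compiles(utc_timestamp, "sqlite")
def _sqlite_utc_timestamp(element, compiler, **kw):
    # stand-in dialect so the example runs; register an Oracle version
    # with the from_tz(cast(...)) expression the same way
    return "CURRENT_TIMESTAMP"

md = MetaData()
events = Table(
    "events", md,
    Column("id", Integer, primary_key=True),
    Column("created", DateTime, default=utc_timestamp()),
)

engine = create_engine("sqlite://")
md.create_all(engine)
with engine.begin() as conn:
    # no value supplied for 'created': the SQL default is rendered
    # inline in the INSERT, no bind parameter involved
    conn.execute(events.insert().values(id=1))
    row = conn.execute(select(events)).first()
print(row.created is not None)  # True
```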


On Feb 8, 2011, at 10:13 PM, chris e wrote:

 I now have the code below, but the _oracle_utc_timestamp function is
 never called, even when I do explicitly set a value.
 
 class UTCTimestamp(TypeDecorator):
impl = TIMESTAMP
 
# add the UTC time zone info to naive timestamps
def process_result_value(self, value, dialect) :
if value != None :
value = UTC.localize(value)
 
return value
 
def process_bind_param(self, value, dialect):
# if we have a value convert it to UTC if needed
if value != None :
if value.tzinfo :
value = value.astimezone(UTC)
return value
 
 class utc_timestamp(FunctionElement):
type = DateTime()
 
 @compiles(utc_timestamp)
 def _oracle_utc_timestamp(element, compiler, **kw):
import pdb
pdb.set_trace()
 
 
 
 
 
 
 On Feb 8, 6:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2011, at 4:56 PM, chris e wrote:
 
 To simplify date handling in a project on which I am working, I am
 storing UTC dates in the database in a timestamp with timezone field,
 however, because cx_Oracle does not have any timezone functionality, I
 need to cast the UTC timestamp I'm inserting into the database as a
 timestamp in UTC so that the database does not convert it to the db
 timezone. This also needs to apply to default values.
 
 I have the following, however, it is not called for default values:
 
 The @compiles for _BindParamClause was never expected and is not covered 
 within a visit_insert() right now.  Ticket #2042 is added.  However, you're 
 better off using SQL level UTC functions for defaults in any case which 
 would be the workaround here.  Here's one I use for PG + SQL Server.  You 
 can add another for Oracle that includes your CAST expression if needed:
 
 class utcnow(expression.FunctionElement):
 type = DateTime()
 
 @compiles(utcnow, 'postgresql')
 def _pg_utcnow(element, compiler, **kw):
      return "TIMEZONE('utc', CURRENT_TIMESTAMP)"
 
 @compiles(utcnow, 'mssql')
 def _ms_utcnow(element, compiler, **kw):
      return "GETUTCDATE()"
 
  from pytz import UTC
  class UTCDateTime(TypeDecorator):
     impl = TIMESTAMP

     # add the UTC time zone info to naive timestamps
     def process_result_value(self, value, dialect):
         if value is not None:
             value = UTC.localize(value)
         return value
 
 @compiles(_BindParamClause)
 def _compile_utc_date(element, compiler, **kw):
if isinstance(element.type, UTCDateTime) :
        return "from_tz(cast(%s as timestamp), 'UTC')" \
            % compiler.visit_bindparam(element, **kw)
 
return compiler.visit_bindparam(element, **kw)
 
 
 




[sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread chris e
I understand that's the case for defaults, but to test the code I
actually set a value for one of the columns that is of type
UTCTimestamp, and the compiler extension was never called. I'm
stumped.

That said, I can keep moving: I've decided to just use Oracle
timestamps without time zones, and always convert to UTC, since
cx_Oracle doesn't handle time zones properly.
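[Editor's note: the "naive UTC" approach settled on here can be sketched with a TypeDecorator. A hedged sketch only: it uses the stdlib `datetime.timezone` rather than pytz, and the class name `NaiveUTCTimestamp` is illustrative.]

```python
from datetime import datetime, timezone
from sqlalchemy.types import TypeDecorator, TIMESTAMP

class NaiveUTCTimestamp(TypeDecorator):
    """Store naive UTC timestamps; hand back aware UTC datetimes."""
    impl = TIMESTAMP
    cache_ok = True  # harmless on older versions; silences a 1.4+ warning

    def process_bind_param(self, value, dialect):
        # convert aware values to UTC and drop tzinfo before storing,
        # so the database never applies its own time zone conversion
        if value is not None and value.tzinfo is not None:
            value = value.astimezone(timezone.utc).replace(tzinfo=None)
        return value

    def process_result_value(self, value, dialect):
        # re-attach UTC on the way out
        if value is not None:
            value = value.replace(tzinfo=timezone.utc)
        return value
```

The round trip is lossless for aware inputs: an aware datetime goes in, a naive UTC value is stored, and an equal aware datetime comes back.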

On Feb 8, 7:18 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 The TypeDecorator is not used for defaults.   You set 
 default=utc_timestamp() on your Column.    The SQL expression is rendered 
 directly in the INSERT when no value is given; no bind parameters are used.

 On Feb 8, 2011, at 10:13 PM, chris e wrote:

  I now have the code below, but the _oracle_utc_timestamp function is
  never called, even when I do explicitly set a value.

  class UTCTimestamp(TypeDecorator):
     impl = TIMESTAMP

     # add the UTC time zone info to naive timestamps
     def process_result_value(self, value, dialect) :
         if value != None :
             value = UTC.localize(value)

         return value

     def process_bind_param(self, value, dialect):
         # if we have a value convert it to UTC if needed
         if value != None :
             if value.tzinfo :
                 value = value.astimezone(UTC)
         return value

  class utc_timestamp(FunctionElement):
     type = DateTime()

  @compiles(utc_timestamp)
  def _oracle_utc_timestamp(element, compiler, **kw):
     import pdb
     pdb.set_trace()

  On Feb 8, 6:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Feb 8, 2011, at 4:56 PM, chris e wrote:

  To simplify date handling in a project on which I am working, I am
  storing UTC dates in the database in a timestamp with timezone field,
  however, because cx_Oracle does not have any timezone functionality, I
  need to cast the UTC timestamp I'm inserting into the database as a
  timestamp in UTC so that the database does not convert it to the db
  timezone. This also needs to apply to default values.

  I have the following, however, it is not called for default values:

  The @compiles for _BindParamClause was never expected and is not covered 
  within a visit_insert() right now.  Ticket #2042 is added.  However, 
  you're better off using SQL level UTC functions for defaults in any case 
  which would be the workaround here.  Here's one I use for PG + SQL Server. 
   You can add another for Oracle that includes your CAST expression if 
  needed:

  class utcnow(expression.FunctionElement):
      type = DateTime()

  @compiles(utcnow, 'postgresql')
  def _pg_utcnow(element, compiler, **kw):
       return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

  @compiles(utcnow, 'mssql')
  def _ms_utcnow(element, compiler, **kw):
       return "GETUTCDATE()"

  from pytz import UTC
  class UTCDateTime(TypeDecorator):
     impl = TIMESTAMP

     # add the UTC time zone info to naive timestamps
     def process_result_value(self, value, dialect) :
         if value != None :
             value = UTC.localize(value)

         return value

  @compiles(_BindParamClause)
  def _compile_utc_date(element, compiler, **kw):
     if isinstance(element.type, UTCDateTime) :
         return "from_tz(cast(%s as timestamp), 'UTC')" \
             % compiler.visit_bindparam(element, **kw)

     return compiler.visit_bindparam(element, **kw)






Re: [sqlalchemy] Re: Custom UTC DateTime Type with Custom insert functionality

2011-02-08 Thread Michael Bayer
Yeah, that's because the compiler extension, wrapped directly around 
_BindParamClause, is not called on an INSERT when the bind is generated within 
the compile.  That is why I made ticket #2042.  Not going to work until I get 
around to that.

It would work if you generated the bindparam() yourself, as occurs with 
literal(), i.e. table.insert().values(date=literal(some_date, 
type_=UTCNowColumn())), but that's not the general INSERT form that gets used 
with the ORM and such.
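[Editor's note: to make the literal() form above concrete — a hedged sketch, with a minimal `UTCDateTime` stand-in for the custom type (the message's `UTCNowColumn` is the poster's own name):]

```python
from datetime import datetime
from sqlalchemy import literal
from sqlalchemy.types import TypeDecorator, TIMESTAMP

class UTCDateTime(TypeDecorator):
    """Minimal stand-in for the custom type discussed in the thread."""
    impl = TIMESTAMP
    cache_ok = True

# literal() constructs the bind parameter explicitly, carrying the
# custom type with it -- which is what lets a type-aware compiler
# hook recognize and rewrite it, unlike binds generated inside an
# ordinary ORM INSERT compile
expr = literal(datetime(2011, 2, 8), type_=UTCDateTime())
```

Inspecting `expr.type` confirms the custom type travels with the bind parameter.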




On Feb 8, 2011, at 11:25 PM, chris e wrote:

 I understand that's the case for defaults, but to test the code I
 actually set a value for one of the columns that is of type
 UTCTimestamp, and the compiler extension was never called. I'm
 stumped.
 
 That said, I can keep moving, I've decided to just use oracle
 timestamps without timezones, and always convert to UTC since
 cx_Oracle doesn't handle timezones properly.
 
 On Feb 8, 7:18 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 The TypeDecorator is not used for defaults.   You set 
 default=utc_timestamp() on your Column.The SQL expression is rendered 
 directly in the INSERT when no value given, no bind params used.
 
 On Feb 8, 2011, at 10:13 PM, chris e wrote:
 
 I now have the code below, but the _oracle_utc_timestamp function is
 never called, even when I do explicitly set a value.
 
 class UTCTimestamp(TypeDecorator):
impl = TIMESTAMP
 
# add the UTC time zone info to naive timestamps
def process_result_value(self, value, dialect) :
if value != None :
value = UTC.localize(value)
 
return value
 
def process_bind_param(self, value, dialect):
# if we have a value convert it to UTC if needed
if value != None :
if value.tzinfo :
value = value.astimezone(UTC)
return value
 
 class utc_timestamp(FunctionElement):
type = DateTime()
 
 @compiles(utc_timestamp)
 def _oracle_utc_timestamp(element, compiler, **kw):
import pdb
pdb.set_trace()
 
 On Feb 8, 6:09 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2011, at 4:56 PM, chris e wrote:
 
 To simplify date handling in a project on which I am working, I am
 storing UTC dates in the database in a timestamp with timezone field,
 however, because cx_Oracle does not have any timezone functionality, I
 need to cast the UTC timestamp I'm inserting into the database as a
 timestamp in UTC so that the database does not convert it to the db
 timezone. This also needs to apply to default values.
 
 I have the following, however, it is not called for default values:
 
 The @compiles for _BindParamClause was never expected and is not covered 
 within a visit_insert() right now.  Ticket #2042 is added.  However, 
 you're better off using SQL level UTC functions for defaults in any case 
 which would be the workaround here.  Here's one I use for PG + SQL Server. 
  You can add another for Oracle that includes your CAST expression if 
 needed:
 
 class utcnow(expression.FunctionElement):
 type = DateTime()
 
 @compiles(utcnow, 'postgresql')
 def _pg_utcnow(element, compiler, **kw):
      return "TIMEZONE('utc', CURRENT_TIMESTAMP)"
 
 @compiles(utcnow, 'mssql')
 def _ms_utcnow(element, compiler, **kw):
      return "GETUTCDATE()"
 
 from pytz import UTC
 class UTCDateTime(TypeDecorator):
impl = TIMESTAMP
 
# add the UTC time zone info to naive timestamps
def process_result_value(self, value, dialect) :
if value != None :
value = UTC.localize(value)
 
return value
 
 @compiles(_BindParamClause)
 def _compile_utc_date(element, compiler, **kw):
if isinstance(element.type, UTCDateTime) :
       return "from_tz(cast(%s as timestamp), 'UTC')" \
           % compiler.visit_bindparam(element, **kw)
 
return compiler.visit_bindparam(element, **kw)
 
 
 
 


[sqlalchemy] Re: dynamic classes and tables

2011-02-08 Thread farcat
Thank you, that works.

Is there any way to later add or remove attributes, using the
declarative system?

Cheers, Lars

On Feb 8, 10:46 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2011, at 3:52 PM, farcat wrote:

  Hi everyone,

  I am new to sqlalchemy and figuring out whether it is right for my
  project. What I am looking for is the ability to change classes and
  tables on the fly, with as much freedom as possible, potentially
  having metaclasses figuring out the difference between versions and
  updating the database accordingly. Additionally I would like to import
  databases and automatically generate class definitions. Some code I
  tried:

  code
  from sqlalchemy import *
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import relation, sessionmaker

  Base = declarative_base()

  def Init(self, title=None, year=None):
     self.title = title
     self.year = year
  def Repr(self):
     return "Movie(%r, %r, %r)" % (self.title, self.year,
  self.director)

  Movie = type("Movie", (Base,), {'__tablename__': "movies",
                     "id": Column(Integer, primary_key=True),
                     "title": Column(String(255), nullable=False),
                     "year": Column(Integer),
                     "directed_by": Column(Integer,
  ForeignKey('directors.id')),
                     "director": relation("Director", backref='movies',
  lazy=False)})
  setattr(Movie, "__init__", classmethod(Init))
  setattr(Movie, "__repr__", classmethod(Repr))

  class Director(Base):
     __tablename__ = 'directors'

     id = Column(Integer, primary_key=True)
     name = Column(String(50), nullable=False, unique=True)

     def __init__(self, name=None):
         self.name = name

     def __repr__(self):
         return "Director(%r)" % (self.name)

  engine = create_engine('sqlite:///meta.db', echo=True)
  Base.metadata.create_all(engine)

  if __name__ == "__main__":
     Session = sessionmaker(bind=engine)
     session = Session()

     m1 = Movie("Star Trek", 2009)
     m1.director = Director("JJ Abrams")

     d2 = Director("George Lucas")
     d2.movies = [Movie("Star Wars", 1977), Movie("THX 1138", 1971)]

     try:
         session.add(m1)
         session.add(d2)
         session.commit()
     except:
         session.rollback()

     alldata = session.query(Movie).all()
     for somedata in alldata:
         print somedata

  \code

  with as error:

  2011-02-08 21:50:47,553 INFO sqlalchemy.engine.base.Engine.0x...ef0L
  PRAGMA table_info(directors)
  2011-02-08 21:50:47,553 INFO sqlalchemy.engine.base.Engine.0x...ef0L
  ()
  Traceback (most recent call last):
   File "D:\Documents\Code\NetBeans\test\alchemy\src\alchemy.py", line
  49, in <module>
     m1.director = Director("JJ Abrams")
   File "C:\Python27\lib\site-packages\sqlalchemy\orm\attributes.py",
  line 158, in __set__
  2011-02-08 21:50:47,555 INFO sqlalchemy.engine.base.Engine.0x...ef0L
  PRAGMA table_info(movies)
  2011-02-08 21:50:47,555 INFO sqlalchemy.engine.base.Engine.0x...ef0L
  ()
     self.impl.set(instance_state(instance),
  AttributeError: 'Movie' object has no attribute '_sa_instance_state'

  Can anyone shed some light or explain the error message?

 your __init__ monkeypatch is interfering with SQLA's wrapping of this method.

 Try

 Movie = type("Movie", (Base,), {'__tablename__': "movies",
                     "__init__": Init,
                    "id": Column(Integer, primary_key=True),
                    "title": Column(String(255), nullable=False),
                    "year": Column(Integer),
                    "directed_by": Column(Integer, ForeignKey('directors.id')),
                    "director": relation("Director",
 backref='movies', lazy=False)})

 instead.
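[Editor's note: the corrected form above — passing __init__ in the class dict handed to type() — can be run end to end. A hedged sketch: the try/except import covers newer SQLAlchemy, where declarative_base moved to sqlalchemy.orm, and the table and data are illustrative.]

```python
from sqlalchemy import Column, Integer, String, create_engine
try:
    # SQLAlchemy >= 1.4
    from sqlalchemy.orm import declarative_base, sessionmaker
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

Base = declarative_base()

def init(self, name=None):
    self.name = name

# __init__ goes into the class dict, so declarative's metaclass
# instruments it normally instead of it being monkeypatched on later
Director = type("Director", (Base,), {
    "__tablename__": "directors",
    "__init__": init,
    "id": Column(Integer, primary_key=True),
    "name": Column(String(50)),
})

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Director(name="JJ Abrams"))
session.commit()
```

Instances of the dynamically built class persist and query like any hand-written declarative class.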

