[sqlalchemy] Re: DB Operational Error

2008-11-04 Thread Raoul Snyman

Hi Michael,

On Nov 2, 1:07 am, Michael Bayer [EMAIL PROTECTED] wrote:
 look into the pool_recycle option described 
 at http://www.sqlalchemy.org/docs/05/dbengine.html

I'm also getting these errors, and I have pool_recycle, pool_size and
max_overflow set. I'm using Pylons, SQLAlchemy 0.5 and MySQLdb 1.2.2.

# lines from my Pylons ini file
sqlalchemy.default.pool_recycle = 3600
sqlalchemy.default.pool_size = 32
sqlalchemy.default.max_overflow = 1024

I think the reason this happens is that this is not a stateful
app, but rather a stateless web site. Every request to the site is a
full but isolated execution of the app, which makes me think that
SQLAlchemy doesn't really have a Pool to work with.

I'm rather flummoxed on this one. No one seems to have an answer other
than pool_recycle - which is not working.

Kind regards,

Raoul Snyman

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: DB Operational Error

2008-11-04 Thread [EMAIL PROTECTED]



On Nov 4, 3:36 am, Raoul Snyman [EMAIL PROTECTED] wrote:
 Hi Michael,

 On Nov 2, 1:07 am, Michael Bayer [EMAIL PROTECTED] wrote:

  look into the pool_recycle option described 
  at http://www.sqlalchemy.org/docs/05/dbengine.html

 I'm also getting these errors, and I have pool_recycle, pool_size and
 max_overflow set. I'm using Pylons, SQLAlchemy 0.5 and MySQLdb 1.2.2.

 # lines from my Pylons ini file
 sqlalchemy.default.pool_recycle = 3600
 sqlalchemy.default.pool_size = 32
 sqlalchemy.default.max_overflow = 1024


You have a recycle time of 1 hour (3600 seconds).  That is usually
short enough.  I've encountered situations where a firewall closed
idle connections after 30 minutes, leading to "server has gone away"
errors.  Could the MySQL configuration be set to an idle timeout that
is less than an hour?  Could you somehow be opening connections from
outside the pool?  (I did that to myself when trying to add some
conversion functions to the MySQLdb module.)

 I think that the reason this happens is because this is not a stateful
 app, but rather a state-less web site. Every request to the site is a
 full but isolated execution of the app, and this makes me think that
 SQLAlchemy doesn't really have a Pool to work with.


Then you'd be starting fresh every time and there would be no
opportunity for a stale connection.

 I'm rather flummoxed on this one. No one seems to have an answer other
 than pool_recycle - which is not working.

You could try cutting the recycle value down to 60 seconds.  You need a
recycle time that is less than the idle timeout.  If 60 seconds fixes
the problem, then you know that you have a short idle timeout setting
elsewhere.
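
The mechanics being discussed can be sketched in a few lines of plain Python (hypothetical names; this is not SQLAlchemy's actual pool code, just the idea behind pool_recycle):

```python
import time

class RecyclingConnection:
    """Sketch of what pool_recycle does: reopen a connection past its age limit."""

    def __init__(self, connect, recycle=3600, clock=time.time):
        self._connect = connect    # factory that opens a raw DB connection
        self._recycle = recycle    # maximum age in seconds before reconnecting
        self._clock = clock
        self._conn = self._connect()
        self._opened_at = self._clock()

    def checkout(self):
        # If the connection is older than the recycle window, assume the
        # server (or an intervening firewall) may have silently dropped it.
        if self._clock() - self._opened_at > self._recycle:
            self._conn = self._connect()
            self._opened_at = self._clock()
        return self._conn

# Demo with a fake clock: a 60-second recycle and a connection counter.
now = [0.0]
opened = [0]
def open_conn():
    opened[0] += 1
    return opened[0]

pool = RecyclingConnection(open_conn, recycle=60, clock=lambda: now[0])
first = pool.checkout()    # fresh connection
now[0] = 61.0
second = pool.checkout()   # past the window: reconnects
```

This is why the recycle value has to be smaller than whatever idle timeout exists between the app and the server: the reconnect only happens if the window expires before the timeout kills the connection.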

 Kind regards,

 Raoul Snyman



[sqlalchemy] proposed extension to SessionExtension: after_bulk_operation

2008-11-04 Thread Martijn Faassen

Hi there,

I've been using zope.sqlalchemy's integration with SQLAlchemy and it's 
been working pretty well so far.

Today, however, I ran into a snag when using session.query(..).delete(). 
While a query immediately after the delete showed no more objects, in 
the next transaction the objects would reappear.

This turned out to be because zope.sqlalchemy tracks whether a session 
has changed over time, and it has no way of knowing that the transaction 
has changed when this (or session.query(..).update()) is in use, so it 
rolls back the transaction.

zope.sqlalchemy offers a way to manually mark a transaction as changed:

   zope.sqlalchemy.mark_changed()

It's annoying however to have to remember to call this when using these 
operations.

After discussion between myself and Laurence Rowe we came up with the 
following proposal (mostly it's Laurence's :):

   session.query(...).delete() and session.query(...).update() call
   a new method on SessionExtension, at least when result.rowcount != 0. We
   propose the name after_bulk_operation() for this new method.

   We can then modify zope.sqlalchemy's SessionExtension to mark the
   session as changed by hooking into this method.
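
A rough sketch of the shape being proposed (class and method names are illustrative, not actual SQLAlchemy or zope.sqlalchemy API):

```python
class SessionExtension:
    """Stand-in for SQLAlchemy's extension base; the proposed hook is a no-op."""

    def after_bulk_operation(self, session, query, result):
        pass

class MarkChangedExtension(SessionExtension):
    """zope.sqlalchemy-style extension that flags the transaction on bulk ops."""

    def __init__(self):
        self.changed = False

    def after_bulk_operation(self, session, query, result):
        # Only mark the transaction changed when rows were actually affected.
        if result.rowcount != 0:
            self.changed = True

class FakeResult:
    """Minimal stand-in for the result of a bulk delete()/update()."""
    def __init__(self, rowcount):
        self.rowcount = rowcount

# A bulk delete()/update() would invoke the hook after executing:
ext = MarkChangedExtension()
ext.after_bulk_operation(session=None, query=None, result=FakeResult(rowcount=3))
```

With this in place, zope.sqlalchemy would no longer need a manual mark_changed() call around bulk operations.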

What do people think?

Regards,

Martijn





[sqlalchemy] foreign key reference between the models of two different TurboGears projects?

2008-11-04 Thread JV

Hi

How can I have a foreign key reference between the models of two
different TurboGears  projects?
I am using sqlalchemy.

JV




[sqlalchemy] Re: DB Operational Error

2008-11-04 Thread Michael Bayer

the MySQL "server has gone away" error usually corresponds to a connection  
that's been held open without activity for 8 hours.   if that's not the  
case here, either the MySQL client config has the timeout turned way  
down or something is broken at the network level.

i'm not sure if by "isolated execution" you mean that you're running  
Pylons as a CGI or doing some other kind of teardown after requests,  
so some clarification on that may be helpful here.


On Nov 4, 2008, at 3:36 AM, Raoul Snyman wrote:


 Hi Michael,

 On Nov 2, 1:07 am, Michael Bayer [EMAIL PROTECTED] wrote:
 look into the pool_recycle option described 
 at http://www.sqlalchemy.org/docs/05/dbengine.html

 I'm also getting these errors, and I have pool_recycle, pool_size and
 max_overflow set. I'm using Pylons, SQLAlchemy 0.5 and MySQLdb 1.2.2.

 # lines from my Pylons ini file
 sqlalchemy.default.pool_recycle = 3600
 sqlalchemy.default.pool_size = 32
 sqlalchemy.default.max_overflow = 1024

 I think that the reason this happens is because this is not a stateful
 app, but rather a state-less web site. Every request to the site is a
 full but isolated execution of the app, and this makes me think that
 SQLAlchemy doesn't really have a Pool to work with.

 I'm rather flummoxed on this one. No one seems to have an answer other
 than pool_recycle - which is not working.

 Kind regards,

 Raoul Snyman

 





[sqlalchemy] Re: Sharding with 2 pools of connections

2008-11-04 Thread Michael Bayer


On Nov 4, 2008, at 4:10 PM, Lenn0x wrote:


 Hi

 I have 2 sessions created:

 a_engine = create_engine('mysql:///host1', echo=True)
 a_session = scoped_session(sessionmaker(autoflush=True,
 transactional=True, bind=a_engine))

 b_engine = create_engine('mysql:///host2', echo=True)
 b_session = scoped_session(sessionmaker(autoflush=True,
 transactional=True, bind=b_engine))

 The issue I run into is, what happens when b_session's server crashes?
 I am looking at this from a high availability standpoint. I want to
 round-robin between 2 pools of persistent connections. Both hosts share
 the same data. If b_session is down, don't use that session until its
 recovered.

 In my case, using a hardware vip would probably make this easier, but
 since we're dealing with persistent connections -- it makes it a tad
 more complex. I *could* create 1 pool thats connected directly to a
 hardware vip and let that round-robin through the DBs on the initial
 'connect'. But if I start adding more servers that host my application
 I could run into a scenario of more connections living on A vs B and
 it becomes unpredictable. Managing 2 pools allows me to control X
 connections per host and adjust as I add more servers.


how is your own in-application round-robining going to do a better job  
than an existing load balancing solution, such as 
http://dev.mysql.com/doc/refman/5.0/en/load-balancer.html ?
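
For what it's worth, the in-application approach being weighed here is only a few lines; a rough sketch (hypothetical names, not a SQLAlchemy API, with the liveness probe left abstract):

```python
import itertools

class RoundRobinRouter:
    """Sketch: rotate across engines, skipping any that fail a liveness probe."""

    def __init__(self, engines, is_alive):
        self._engines = list(engines)
        self._is_alive = is_alive          # in real code, e.g. a cheap "SELECT 1"
        self._cycle = itertools.cycle(self._engines)

    def get_engine(self):
        # Try each engine at most once per call; fail loudly if all are down.
        for _ in self._engines:
            engine = next(self._cycle)
            if self._is_alive(engine):
                return engine
        raise RuntimeError("no database hosts available")

# Demo: host "b" is down, so every checkout lands on "a".
router = RoundRobinRouter(["a", "b"], is_alive=lambda host: host != "b")
picks = [router.get_engine() for _ in range(3)]
```

The hard part is not the rotation itself but a reliable, cheap liveness probe and recovery detection, which is exactly what a dedicated load balancer already provides.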






[sqlalchemy] extending pre-existing (read only) tables

2008-11-04 Thread rdmurray

I've got a somewhat oddball situation, and I'm hoping for some pointers
to the right sqlalchemy features to handle this.  Or, failing that,
hints as to the places I'm going to need to write code to compensate
for the weirdness I have to deal with.

The situation is that I get periodic copies of a set of database tables.
In my local database, I need to provide supplemental data that is
maintained only in my copy.

In my application library, I need to support two models: a model that
includes only the data of which I get periodic copies, and a second
model that includes my supplemental data.  So, object wise what I think
I want to have is a set of objects for the imported data, and then
a second set of objects that subclasses the first and extends them.
(ex: a Publication object, and then an ExtendedPublication object).
So I have two sets of tables, the base tables (whose schema I would
prefer not to touch), and a second set of parallel tables containing
my local data.  On import, I just drop and replace the base tables,
leaving my tables untouched.

Right now I'm playing with mapper inheritance.  The first stumbling
block I've come to is the case where the local table doesn't yet have
a row for the object from the base table.  In that case, a query on my
subclassed object returns nothing.  Do I need to write an import program
that creates rows in the local tables for any items in the imported data
that don't yet have them?  That strikes me as rather fragile.  Is there
a way to automate the creation of the local table row with default values?
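
One way to avoid a fragile import step is to let the query itself supply defaults via a LEFT OUTER JOIN, so base rows without a local counterpart still come back. A sqlite3 sketch of the idea (table and column names are hypothetical stand-ins for the Publication case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE publication (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE publication_local (id INTEGER PRIMARY KEY, notes TEXT);
    INSERT INTO publication VALUES (1, 'Imported A'), (2, 'Imported B');
    -- Local supplemental data exists only for publication 1.
    INSERT INTO publication_local VALUES (1, 'my notes');
""")

# LEFT OUTER JOIN keeps base rows that have no local counterpart;
# COALESCE fills in a default for the missing local column.
rows = conn.execute("""
    SELECT p.id, p.title, COALESCE(l.notes, '') AS notes
    FROM publication p
    LEFT OUTER JOIN publication_local l ON l.id = p.id
    ORDER BY p.id
""").fetchall()
```

A mapper can be mapped against a join or select like this instead of against the local table alone, which sidesteps creating placeholder rows at import time; writes to the local side would still need a get-or-create step.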

Or am I approaching this all wrong?

--RDM




[sqlalchemy] Re: single table inheritance mapping

2008-11-04 Thread David Gardner

Maybe I did something wrong while loosely following the example at 
http://techspot.zzzeek.org/?p=4
which I realize is a bit old now. I was able to get my basic mappers 
set up and working.

class NodeLoader(MapperExtension):
def create_instance(self, mapper, selectcontext, row, class_):
#print row
if row['nodehierarchy_type'] == 'file':
return FileNode(None,None,None)
else:
return Node(None,None,None)

However the index in the RowProxy changes if I try to eagerload the 
children. So this works:

a = session.query(Node).filter(Node.name=='testytest').filter(Node.parentuid==uid).first()

But this fails:
a = session.query(Node).filter(Node.name=='testytest').filter(Node.parentuid==uid).options(eagerload_all('Children')).first()

I get:

  File "/users/dgardner/dev/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc2-py2.5.egg/sqlalchemy/engine/base.py", line 1508, in lookup_key
    raise exc.NoSuchColumnError("Could not locate column in row for column '%s'" % (str(key)))
sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for 
column 'nodehierarchy_type'


Is there a better way to do this? I still have the option of reverting 
to my original mapper setup where I mapped the classes against select() 
queries.

David Gardner wrote:
 Aha! This worked perfectly. I followed your create_instance() example at
 http://techspot.zzzeek.org/?p=4

 I definitely owe you a beer!

 Michael Bayer wrote:
   
 On Oct 30, 2008, at 5:30 PM, David Gardner wrote:


 
 This works. However although type=='file' is the only time I want/need
 to have a subclass for, I still need to know the original value of
 type.
 Is there a way in SA to basically get two copies of the column?
 Maybe I
 could return a tuple from MyRow.__getitem__ ?

 The best thing I can think of is to create a view (PostgreSQL):
 CREATE OR REPLACE VIEW nodehierarchy_view AS  SELECT *, type AS poly
 FROM nodehierarchy;

 and map my classes against the view, that way I have two copies of the
 type column.

 To complicate things slightly more I am joining against a type table,
 whose primary key is a text field. So typically when I just want to
 know
 the name of the type I do node.type, but sometimes I need additional
 information and do a node.Type.other_info. Right now the relation is
 commented out since it isn't working.

 node_mapper = mapper(Node, nodehierarchy_table, extension=MyExt(),
 polymorphic_on=nodehierarchy_table.c.type,
 polymorphic_identity='node',
properties = {
#'Type' : relation (Type,
 primaryjoin=(type_table.c.name==nodehierarchy_table.c.type),
 uselist=False),

   
 The way I'd want to add polymorphic_on as a callable would allow one
 to replace the entire mechanism of the discriminator, such that
 there's not even a specific column that SQLA knows about in that
 regard.   This is a moderate change to the mapper internals, but the
 big behavioral change is that the *population* of the column would
 also become something you'd have to do on your end, and I'd also have
 to figure out what its behavior with regards to querying becomes - it
 would probably be several functions the user needs to implement.

 Right now I'm wondering if it's not easier for you to just provide a
 create_instance() mapper extension that just returns the base class
 for all discriminators other than file, and you just do away with
 using polymorphic_on. The fact that you are compressing all those
 discriminators into one class means that you're losing a lot of the
 semantics of inheritance already - you can't query for them without
 explicitly saying filter(discriminator=='sometype'), for example.

 
 


 --
 David Gardner
 Pipeline Tools Programmer, Sid the Science Kid
 Jim Henson Creature Shop
 (323) 802-1717 [EMAIL PROTECTED]



 

   


-- 
David Gardner
Pipeline Tools Programmer, Sid the Science Kid
Jim Henson Creature Shop
(323) 802-1717 [EMAIL PROTECTED]






[sqlalchemy] Re: Composite columns with declarative base

2008-11-04 Thread Ritesh Nadhani
Unfortunately, it still gives me an error.

http://paste.pocoo.org/show/90191

Did I miss something?

PS: I added the __get__ method just for the fun of it, I have no idea what
it does. Looking at the docs:
http://www.sqlalchemy.org/docs/04/sqlalchemy_orm_interfaces.html#docstrings_sqlalchemy.orm.interfaces_PropComparator,
seems that I have to implement other methods but I am not sure which
one.

Any help is appreciated.

On Mon, Oct 27, 2008 at 7:45 AM, Michael Bayer [EMAIL PROTECTED]wrote:


 there's a "bug" in that the error message is misleading, but in fact a
 composite property owns the columns within it, which cannot be mapped
 separately, so to make that "work" you'd need to say:

 class User(Base):

 __tablename__ = 'user'

 house_address_id = Column('house_address', Integer,
 ForeignKey('address.id'))
 office_address_id = Column('office_address', Integer,
 ForeignKey('address.id'))
 house_address = relation(Address,
 primaryjoin=house_address_id==Address.id)
 office_address = relation(Address,
 primaryjoin=office_address_id==Address.id)
 comp = composite(Comp, Column('id', Integer, primary_key=True,
 autoincrement=True), Column('name', CHAR))

 but the way you're using Comp isn't going to work in any case;  you're
 actually looking for comparable_property() here:

 class MyComparator(sqlalchemy.orm.interfaces.PropComparator):
 def __eq__(self, other):
 return self.comp == other.comp

 class User(Base):

 __tablename__ = 'user'

 id = Column('id', Integer, primary_key=True, autoincrement=True)
 name = Column('name', CHAR)
 house_address_id = Column('house_address', Integer,
 ForeignKey('address.id'))
 office_address_id = Column('office_address', Integer,
 ForeignKey('address.id'))
 house_address = relation(Address,
 primaryjoin=house_address_id==Address.id)
 office_address = relation(Address,
 primaryjoin=office_address_id==Address.id)

 @property
 def comp(self):
 return self.id + self.name

 comp = comparable_property(MyComparator)


 On Oct 27, 2008, at 9:22 AM, riteshn wrote:

 
  Hello all
 
  New to SQLAlchemy and ORM and loving it. I am trying to use the
  declarative base extension with composite column.
 
  I have two very simple tables - user and address.
 
  My code at: http://python.pastebin.com/m6e032164 works without any
  problem.
 
  I am trying to put the same thing using declarative base:
  http://python.pastebin.com/m1a05e5c0 and it throws me the error.
 
  Any ideas?
 
  


 



-- 
Ritesh
http://www.riteshn.com




[sqlalchemy] Re: Sharding with 2 pools of connections

2008-11-04 Thread Lenn0x

After pondering this for awhile, I think the easiest solution would be
to do this:

1. Have it connect to a hardware vip that manages failovers etc to my
2 servers.
2. Have 1 dedicated pool, when the connect occurs on the vip, the vip
will evenly distribute.
3. Set the pool to have idle recycle timeouts

This would probably be the easiest and best way. Sacrifice trying to
set connection limits per host, and just let the connections recycle
themselves so that it eventually just 'fixes' itself.


This would also work when you need to take hosts out of rotation
for maintenance: the VIP will notice, and the app servers
will eventually knock the connections off with the recycle.

-Chris

On Nov 4, 3:39 pm, Lenn0x [EMAIL PROTECTED] wrote:
 Again that is like a hardware vip (that solution is a bit more
 complex) but the same issue still comes up. If the app connects to a
 mysql load balancer, and it goes down. Now the app can't connect to
 the other mysql servers. A hardware vip works fine but I am curious
 how others might be handling this? Since I don't want extra load on 1
 particular server since I am using persistent connections.

 On Nov 4, 2:40 pm, Michael Bayer [EMAIL PROTECTED] wrote:

  On Nov 4, 2008, at 4:10 PM, Lenn0x wrote:

   Hi

   I have 2 sessions created:

   a_engine = create_engine('mysql:///host1', echo=True)
   a_session = scoped_session(sessionmaker(autoflush=True,
   transactional=True, bind=a_engine))

   b_engine = create_engine('mysql:///host2', echo=True)
   b_session = scoped_session(sessionmaker(autoflush=True,
   transactional=True, bind=b_engine))

   The issue I run into is, what happens when b_session's server crashes?
   I am looking at this from a high availability standpoint. I want to
   round-robin between 2 pools of persistent connetions. Both hosts share
   the same data. If b_session is down, don't use that session until its
   recovered.

   In my cases using a hardware vip would probably make this easier, but
   since we're dealing with persistent connections -- it makes it a tad
   more complex. I *could* create 1 pool thats connected directly to a
   hardware vip and let that round-robin through the DBs on the initial
   'connect'. But if I start adding more servers that host my application
   I could run into a scenario of more connections living on A vs B and
   it becomes unpredictable. Managing 2 pools allows me to control X
   connections per host and adjust as I add more servers.

  how is your own in-application round robining going to do a better job  
  than an existing load balancing solution, such as
  http://dev.mysql.com/doc/refman/5.0/en/load-balancer.html ?



[sqlalchemy] Re: Composite columns with declarative base

2008-11-04 Thread Michael Bayer
attached is a script illustrating the usage of comparable_property, in roughly the same way you were using composite earlier.

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.orm.interfaces import PropComparator
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite://', echo=True)

Base = declarative_base(bind=engine)

class MyComparator(PropComparator):
def __eq__(self, other):
return and_(User.id==other.id, User.name==other.name)

class Comp(object):
def __init__(self, id, name):
self.id = id
self.name = name

def __eq__(self, other):
return self.id == other.id and self.name == other.name

class Address(Base):
__tablename__ = 'address'

id = Column('id', Integer, primary_key=True, autoincrement=True)
street = Column('street', CHAR)

class User(Base):
__tablename__ = 'user'

id = Column('id', Integer, primary_key=True, autoincrement=True)
name = Column('name', CHAR)
house_address_id = Column('house_address', Integer, ForeignKey('address.id'))
office_address_id = Column('office_address', Integer, ForeignKey('address.id'))
house_address = relation(Address, primaryjoin=house_address_id==Address.id)
office_address = relation(Address, primaryjoin=office_address_id==Address.id)

@property
def comp(self):
return Comp(self.id, self.name)

comp = comparable_property(MyComparator, comp)

Base.metadata.create_all()

session = sessionmaker()()
session.save(
User(
name='user1', 
house_address=Address(street='somestreet'), 
office_address=Address(street='someotherstreet')
)
)

rows = session.query(User).filter(User.comp==Comp(1, 'user1')).all()
assert rows[0].comp == Comp(1, 'user1')


[sqlalchemy] Re: Composite columns with declarative base

2008-11-04 Thread Michael Bayer
also, I wonder how the way you were doing it before, with composite,  
was actually working out?   It wasn't intended to hold half of a  
primary key like that, which is probably why I warned against it, but  
if it's working for you, there's no reason not to use it.

I.e. with declarative just put the Column objects inside the composite:

class User(Base):

 __tablename__ = 'user'

 house_address_id = Column('house_address', Integer,
ForeignKey('address.id'))
 office_address_id = Column('office_address', Integer,
ForeignKey('address.id'))
 house_address = relation(Address,
primaryjoin=house_address_id==Address.id)
 office_address = relation(Address,
primaryjoin=office_address_id==Address.id)
 comp = composite(Comp, Column('id', Integer, primary_key=True,
autoincrement=True), Column('name', CHAR))



On Nov 4, 2008, at 7:29 PM, Ritesh Nadhani wrote:

 Unfortunately, it still gives me an error.

 http://paste.pocoo.org/show/90191

 Did I miss something?

 PS: I added the __get__ method just for the fun of it, I have no  
 idea what it does. Looking at the docs: 
 http://www.sqlalchemy.org/docs/04/sqlalchemy_orm_interfaces.html#docstrings_sqlalchemy.orm.interfaces_PropComparator
  
  , seems that I have to implement other methods but I am not sure  
 which one.

 Any help is appreciated.

 On Mon, Oct 27, 2008 at 7:45 AM, Michael Bayer [EMAIL PROTECTED] 
  wrote:

 theres a bug in that the error message is misleading, but in fact a
 composite property owns the columns within it which cannot be mapped
 separately, so to make that work you'd need  to say:

 class User(Base):

 __tablename__ = 'user'

 house_address_id = Column('house_address', Integer,
 ForeignKey('address.id'))
 office_address_id = Column('office_address', Integer,
 ForeignKey('address.id'))
 house_address = relation(Address,
 primaryjoin=house_address_id==Address.id)
 office_address = relation(Address,
 primaryjoin=office_address_id==Address.id)
 comp = composite(Comp, Column('id', Integer, primary_key=True,
 autoincrement=True), Column('name', CHAR))

 but the way you're using Comp isn't going to work in any case;  you're
 actually looking for comparable_property() here:

 class MyComparator(sqlalchemy.orm.interfaces.PropComparator):
 def __eq__(self, other):
 return self.comp == other.comp

 class User(Base):

 __tablename__ = 'user'

 id = Column('id', Integer, primary_key=True, autoincrement=True)
 name = Column('name', CHAR)
 house_address_id = Column('house_address', Integer,
 ForeignKey('address.id'))
 office_address_id = Column('office_address', Integer,
 ForeignKey('address.id'))
 house_address = relation(Address,
 primaryjoin=house_address_id==Address.id)
 office_address = relation(Address,
 primaryjoin=office_address_id==Address.id)

 @property
 def comp(self):
 return self.id + self.name

 comp = comparable_property(MyComparator)


 On Oct 27, 2008, at 9:22 AM, riteshn wrote:

 
  Hello all
 
  New to SQLAlchemy and ORM and loving it. I am trying to use the
  declarative base extension with composite column.
 
  I have two very simple tables - user and address.
 
  My code at: http://python.pastebin.com/m6e032164 works without any
  problem.
 
  I am trying to put the same thing using declarative base:
  http://python.pastebin.com/m1a05e5c0 and it throws me the error.
 
  Any ideas?
 
  






 -- 
 Ritesh
 http://www.riteshn.com

 





[sqlalchemy] Re: foreign key reference between the models of two different TurboGears projects?

2008-11-04 Thread JV

Michael

Can you elaborate a bit on how to do each of the things you said -
sharing the metadata between 2 separate TurboGears projects, and placing
the actual column in the ForeignKey constructor instead of the string
'table.column.id'?

I read your earlier discussion
http://groups.google.com/group/sqlalchemy/browse_thread/thread/4db84b9f07236fed/5fa2ee7131798264?lnk=gst&q=metadata+sharing#5fa2ee7131798264
but I am still not clear on how to share the metadata. The only way I
can think of is to import the metadata object from one model1 to
model2, but that shall require the two pkgs to be present in fixed
relative positions. I don't see a common module where I can keep the
metadata object and use it in either pkg.

JV

On Nov 4, 6:19 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 either share the same MetaData object across those two models, or  
 place the actual Column object instead of a string in the ForeignKey()  
 constructor.

 On Nov 4, 2008, at 7:16 AM, JV wrote:



  Hi

  How can I have a foreign key reference between the models of two
  different TurboGears  projects?
  I am using sqlalchemy.

  JV



[sqlalchemy] Re: foreign key reference between the models of two different TurboGears projects?

2008-11-04 Thread JV

I got a solution. Thanks to Michael. Here is what he suggested.

*

You use the Column object:

from myproject.model import sometable

someothertable = Table('someothertable', metadata, Column('foobar',
Integer, ForeignKey(sometable.c.foobar)))


as you can see the common theme in both cases is that one project has
to get at the model of the other one - either the metadata object, or
the table/column object (you can get at the metadata of any table
using Table.metadata).   The problem you're laying out is essentially
one of one project having a dependency on the other, so unless you
wanted to redefine the same Table objects as are in the first project
as in the second project, you'd have an import dependency from the
first one to the second.

as far as importing modules across Turbogears projects, you'd have to
ask on their list how that might be accomplished.
*

Both ways work. :)
Just that I will have to maintain a fixed relative path between the
two packages and append the location of one pkg to sys.path while
accessing its model from the other.


JV


On Nov 5, 6:40 am, JV [EMAIL PROTECTED] wrote:
 Michael

 Can you elaborate a bit on how to do each of the thing you said -
 metadata sharing between 2 separate Turbogears projects and placing
 the actual column in the ForeignKey constructor, instead of string
 'table.column.id'.

 I read your earlier discussion
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/4db84b...
 but I am still not clear on how to share the metadata. The only way I
 can think of is to import the metadata object from one model1 to
 model2, but that shall require the two pkgs to be present in fixed
 relative positions. I don't see a common module where I can keep the
 metadata object and use it in either pkg.

 JV

 On Nov 4, 6:19 pm, Michael Bayer [EMAIL PROTECTED] wrote:

  either share the same MetaData object across those two models, or  
  place the actual Column object instead of a string in the ForeignKey()  
  constructor.

  On Nov 4, 2008, at 7:16 AM, JV wrote:

   Hi

   How can I have a foreign key reference between the models of two
   different TurboGears  projects?
   I am using sqlalchemy.

   JV