[sqlalchemy] Re: Declarative Models: Can they be used with two databases and two schema names?

2012-06-07 Thread Shawn Wheatley
Fantastic, that event hack was just what I needed. For any future readers: I did 
have to change sub to replace on the statement, but that was a trivial change. 
Thanks Michael!

Shawn
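
For future readers, here is a minimal sketch of the kind of event-based statement
rewrite being discussed; this is a reconstruction rather than Michael's exact code,
and the engine name sqlite_engine and the "MYSCHEMA" prefix are assumptions:

from sqlalchemy import event

# sqlite_engine and the "MYSCHEMA" owner prefix are assumed names for
# illustration; the idea is to strip the Oracle schema qualifier from the
# SQL emitted against the schema-less SQLite database.
@event.listens_for(sqlite_engine, "before_cursor_execute", retval=True)
def _strip_schema(conn, cursor, statement, parameters, context, executemany):
    statement = statement.replace("MYSCHEMA.", "")
    return statement, parameters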

On Wednesday, June 6, 2012 3:51:55 PM UTC-4, Shawn Wheatley wrote:

 Hi,

 I'm trying to use my declarative models to copy data from an Oracle 
 database with a non-default schema name to a SQLite database (which has no 
 schema name, or at least a default name that can't be changed). Copying 
 from Oracle to Oracle has not been a problem for me, but Oracle to SQLite 
 will not work. The problem for me is that the schema definition used for 
 SQL generation is on the table. I went through a fruitless exercise of 
 calling tometadata on every table in the metadata created by the 
 generated declarative base class, copying into a new MetaData object. I 
 then swapped the metadata on the declarative base and ran my query, with 
 the intention of swapping it back after. No luck.

 The purpose of my project is to surgically extract related data for a 
 small subset of accounts from our production database and bring it down to 
 a local SQLite database. Does anybody have experience doing this? Am I 
 going about this the wrong way?

 Thanks for any help,
 Shawn





Re: [sqlalchemy] Re: Declarative: Joined Inheritance + Two Tables To One Object

2011-05-24 Thread Michael Bayer

On May 23, 2011, at 10:32 PM, Israel Ben Guilherme Fonseca wrote:

 D'oh, I figured it out myself; it was very easy. I just followed the guide again 
 and it worked. 
 
 One question though. Let's use the guide example for this:
 
 Let's say that AddressUser inherits (joined inheritance) from another class, 
 and that class has an id with the same name (user_id). I get a warning like 
 this:
 
 Implicitly combining column address.user_id with column superclass.user_id 
 under attribute 'user_id'.  This usage will be prohibited in 0.7.  Please 
 configure one or more attributes for these same-named columns explicitly.
 
 All the ids are indeed the same id, so it seems OK to me. Why is it being 
 prohibited? Is this an issue with advanced cases, or just to avoid hard-to-debug errors?

The mapping of the attribute id to two columns named id is based on their name 
alone, not on the columns actually having anything to do with each other.  If they 
do relate to each other, then you're fine, but otherwise yes, it totally can 
confuse people.  This rule is suspended when using joined table inheritance, 
since column names are already considered to be significant in that situation.


 
 
 2011/5/23 Israel Ben Guilherme Fonseca israel@gmail.com
 I have the following:
 
 class Person(Base):
__tablename__ = 'pessoa'
    id = Column('id_person', Integer, primary_key=True)
name = Column(String)
 
 class Teacher(Person):
__tablename__ = 'teacher'
    id = Column('id_teacher', Integer, ForeignKey(Person.id), primary_key=True)
info = Column(String)
 
 class Salary(Base):
__tablename__ = 'salary'
   id = Column(String)
   value = Column(Numeric)
 
 That's ok, but I wanted to merge the Salary and Teacher objects following the 
 guide:
 
 http://www.sqlalchemy.org/docs/06/orm/mapper_config.html#mapping-a-class-against-multiple-tables
 
 Am I forced to map Teacher and Salary in the non-declarative mode to 
 achieve this? It's nice to keep things declarative, because it automatically 
 creates the __init__ method with the columns as parameters.
 
 I have other classes with relationships to those classes (and they are 
 declarative too), and things get nasty when I mix declarative with the 
 standard way.
 
 



[sqlalchemy] Re: Declarative: Joined Inheritance + Two Tables To One Object

2011-05-23 Thread Israel Ben Guilherme Fonseca
D'oh, I figured it out myself; it was very easy. I just followed the guide again
and it worked.

One question though. Let's use the guide example for this:

Let's say that AddressUser inherits (joined inheritance) from another class,
and that class has an id with the same name (user_id). I get a warning like
this:

Implicitly combining column address.user_id with column superclass.user_id
under attribute 'user_id'.  This usage will be prohibited in 0.7.  Please
configure one or more attributes for these same-named columns explicitly.

All the ids are indeed the same id, so it seems OK to me. Why is it being
prohibited? Is this an issue with advanced cases, or just to avoid hard-to-debug
errors?


2011/5/23 Israel Ben Guilherme Fonseca israel@gmail.com

 I have the following:

 class Person(Base):
__tablename__ = 'pessoa'
    id = Column('id_person', Integer, primary_key=True)
name = Column(String)

 class Teacher(Person):
__tablename__ = 'teacher'
    id = Column('id_teacher', Integer, ForeignKey(Person.id), primary_key=True)
info = Column(String)

 class Salary(Base):
__tablename__ = 'salary'
   id = Column(String)
   value = Column(Numeric)

 That's ok, but I wanted to merge the Salary and Teacher objects following
 the guide:


 http://www.sqlalchemy.org/docs/06/orm/mapper_config.html#mapping-a-class-against-multiple-tables

 Am I forced to map Teacher and Salary in the non-declarative mode to
 achieve this? It's nice to keep things declarative, because it automatically
 creates the __init__ method with the columns as parameters.

 I have other classes with relationships to those classes (and they are
 declarative too), and things get nasty when I mix declarative with the
 standard way.
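
For future readers, a minimal sketch of the declarative form of the technique in
that guide, mapping one class against a join of the two tables; the table and
column names below (id_teacher, id_salary, the foreign key) are assumptions filled
in around the snippet above:

from sqlalchemy import Table, Column, Integer, String, Numeric, ForeignKey, join
from sqlalchemy.orm import column_property
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

teacher = Table('teacher', Base.metadata,
    Column('id_teacher', Integer, primary_key=True),
    Column('info', String))

salary = Table('salary', Base.metadata,
    Column('id_salary', Integer, ForeignKey('teacher.id_teacher'), primary_key=True),
    Column('value', Numeric))

class TeacherWithSalary(Base):
    # one class mapped against the join of the two tables
    __table__ = join(teacher, salary)
    # the two key columns are combined under a single attribute
    id = column_property(teacher.c.id_teacher, salary.c.id_salary)

Other declarative classes can then build relationships to TeacherWithSalary like
any other mapped class.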





[sqlalchemy] Re: declarative - automatically add a primary key if the table doesn't have one

2010-09-22 Thread Yap Sok Ann


On Sep 22, 11:37 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 22, 2010, at 4:30 AM, Yap Sok Ann wrote:



  This is related to topic need 0.6_beta2-compat declarative meta
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/ae7cb9...

  Prior to version 0.6, I use the following code to automatically add a
  primary key if the table doesn't have one defined:

  from sqlalchemy.ext.declarative import declarative_base,
  DeclarativeMeta
  from sqlalchemy.schema import Column
  from sqlalchemy.types import Integer

  class Meta(DeclarativeMeta):
     def __init__(cls, classname, bases, dict_):
         for attr in dict_.itervalues():
             if isinstance(attr, Column) and attr.primary_key:
                 break
         else:
             dict_['id'] = Column(Integer, primary_key=True)
         return super(Meta, cls).__init__(classname, bases, dict_)

  Base = declarative_base(metaclass=Meta)

  Of course, that doesn't work anymore in 0.6. The suggestion from the
  aforementioned threads is to replace:

  dict_['id'] = Column(Integer, primary_key=True)

  with

  cls.id = Column(Integer, primary_key=True)

  Unfortunately, that alone doesn't work in this case. The problem is
  that the Base class itself will be the first one to go through the
  Meta.__init__() method, so the whole thing essentially becomes:

  Base.id = Column(Integer, primary_key=True)

  For it to work, I have to wrap the code in an if-block, i.e.

  class Meta(DeclarativeMeta):
     def __init__(cls, classname, bases, dict_):
         if classname != 'Base':
             for attr in dict_.itervalues():
                 if isinstance(attr, Column) and attr.primary_key:
                     break
             else:
                 cls.id = Column(Integer, primary_key=True)
         return super(Meta, cls).__init__(classname, bases, dict_)

  which looks rather ugly. Is there a cleaner way to achieve this?

 I didn't think metaclasses were supposed to be pretty ?    Checking that 
 you're not the base is pretty standard metaclass stuff.      If the 
 hardcoded name is the issue, you can look in bases:

         if object not in bases:

 or something more generic:

         for k in cls.__mro__[1:]:
             if isinstance(k, Meta):
                 # you're a Base subclass

Good point. I shall stick with the name checking solution then. Thank
you for your help.
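
For future readers, a minimal sketch of the alternative Michael describes,
folding the bases check into the original metaclass (Python 2 era, matching the
snippets above):

from sqlalchemy.ext.declarative import declarative_base, DeclarativeMeta
from sqlalchemy.schema import Column
from sqlalchemy.types import Integer

class Meta(DeclarativeMeta):
    def __init__(cls, classname, bases, dict_):
        # only classes that already have an instrumented base get the
        # automatic primary key; the Base class itself is skipped without
        # any hardcoded name check
        if any(isinstance(base, Meta) for base in bases):
            for attr in dict_.itervalues():
                if isinstance(attr, Column) and attr.primary_key:
                    break
            else:
                cls.id = Column(Integer, primary_key=True)
        super(Meta, cls).__init__(classname, bases, dict_)

Base = declarative_base(metaclass=Meta)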




[sqlalchemy] Re: declarative base - can a relationship be used within a column_property?

2010-08-21 Thread Yap Sok Ann
On Aug 22, 2:12 am, Michael Bayer mike...@zzzcomputing.com wrote:
 On Aug 21, 2010, at 1:38 PM, Michael Bayer wrote:

  On Aug 19, 2010, at 6:38 AM, Yap Sok Ann wrote:

  With declarative base, is it possible to use a relationship within a
  column_property?

  you mean, as I am seeing below, to use the any() operator produced by a 
  relationship...

  Here's some sample code to illustrate what I want to
  achieve:

    players = relation('Player', back_populates='team')

    # This doesn't work
    #has_star_player = column_property(players.any(star=True))

    # This works
    has_star_player = column_property(
        exists().where(id == Player.team_id).where(Player.star ==
  True))

  # This also works
  Team.__mapper__.add_property(
    'has_star_player2',
    column_property(Team.players.any(star=True)),
  )

  so that's your answer - the two approaches you have are fine.  For the 
  other one, you'd call players.comparator.any(), but that also doesn't work 
  since the relationship is not aware of its parent mapper at that point.

 oh well, lets make this easier, declarative documents this:

 Team.has_star_player_2 = column_property(Team.players.any(star=True))

Cool, that's definitely the best approach. Thank you.
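
For future readers, a minimal sketch of the complete pattern; the table and
column names are assumptions reconstructed from the snippets above:

from sqlalchemy import Column, Integer, Boolean, ForeignKey
from sqlalchemy.orm import relation, column_property
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Team(Base):
    __tablename__ = 'teams'
    id = Column(Integer, primary_key=True)
    players = relation('Player', back_populates='team')

class Player(Base):
    __tablename__ = 'players'
    id = Column(Integer, primary_key=True)
    team_id = Column(Integer, ForeignKey('teams.id'))
    star = Column(Boolean, default=False)
    team = relation('Team', back_populates='players')

# once both classes exist, the relationship-derived expression can be
# attached to the class as a column_property
Team.has_star_player = column_property(Team.players.any(star=True))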




[sqlalchemy] Re: declarative autoloading table class with composite foreign/primary key

2010-08-04 Thread jgs9000

Michael

thanks very much for your helpful advice - the problem seems to
actually involve the autoloading of the table in question.

The table structure is as follows:

CREATE TABLE wcs (
image_id   INTEGER NOT NULL,
amp  INTEGER NOT NULL,
ctype1   TEXT,
(other column defs deleted)
PRIMARY KEY (image_id, amp),
FOREIGN KEY (image_id, amp) REFERENCES science_amp
ON DELETE NO ACTION
ON UPDATE CASCADE)

and the class definition is as follows:

class Wcs(skymapper_db.db.TableBase):

  from sqlalchemy import Table, Column, Integer, String, MetaData,
ForeignKey, ForeignKeyConstraint
  from sqlalchemy.orm import relationship, backref

  __tablename__ = 'wcs'
  __table_args__ = {'autoload':True}

  image_id = Column(Integer, primary_key=True)
  amp = Column(Integer, primary_key=True)

  def __init__(self):
from sqlalchemy import ForeignKeyConstraint
self.__table__.append_constraint(ForeignKeyConstraint(['image_id',
'amp'], ['science_amp.image_id', 'science_amp.amp']))

  def __repr__(self):
    return "Wcs(%s, %s)" % (self.image_id, self.amp)

If I attempt to instantiate the class as defined, I get:

Traceback (most recent call last):
  File C:\Users\jgs900\Work\skymapper-alchemy\wcs.py, line 13, in
module
class Wcs(skymapper_db.db.TableBase):
  File C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py,
line 1017, in __init__
_as_declarative(cls, classname, cls.__dict__)
  File C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py,
line 926, in _as_declarative
**table_kw)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 209,
in __new__
table._init(name, metadata, *args, **kw)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 269,
in _init
self._init_items(*args)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 60,
in _init_items
item._set_parent(self)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 809,
in _set_parent
table.constraints.remove(fk.constraint)
KeyError: ForeignKeyConstraint()

but if I disable the autoloading, there is no problem

PS: sqlalchemy v0.6.3, Python 2.7, PostgreSQL 8.4.1

Thanks again - Jon

On Aug 4, 2:06 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Aug 3, 2010, at 10:24 PM, jgs9000 wrote:

  Hi

  I'm relatively new to SQLAlchemy, so thanks in advance for any help
  with this issue.

  I'm trying to construct a class to model a legacy table which has a
  composite primary key which is also
  a composite foreign key referencing the composite primary key of a
  second table. I'm trying to define this
  class declaratively, and also have it autoload the remaining table
  structure from the underlying table.

 i dont know that we have any tests which do a pure autoload plus a foreign 
 key constraint otherwise not associated with anything.   so its likely a bug. 
   you might want to try calling table.append_constraint(constraint) after the 
 autoload completes.



  This is what I have:

  class Wcs(skymapper_db.db.TableBase):

   from sqlalchemy import Column, Integer, ForeignKeyConstraint

   __tablename__ = 'wcs'
   __table_args__ = (
                              ForeignKeyConstraint(['image_id', 'amp'],
  ['science_amp.image_id', 'science_amp.amp']),
                              {'autoload':True}
                    )

   image_id = Column(Integer, primary_key=True)
   amp = Column(Integer, primary_key=True)

   def __init__(self):
     pass

   def __repr__(self):
      return "Wcs(%s, %s)" % (self.image_id, self.amp)

  As it stands, I get an error when I try to instantiate this class:

  C:\Users\jgs900\Work\skymapper-alchemywcs.py
  Traceback (most recent call last):
   File C:\Users\jgs900\Work\skymapper-alchemy\wcs.py, line 13, in
  module
     class Wcs(skymapper_db.db.TableBase):
   File C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py,
  line 1017, in __init__
     _as_declarative(cls, classname, cls.__dict__)
   File C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py,
  line 926, in _as_declarative
     **table_kw)
   File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 209,
  in __new__
     table._init(name, metadata, *args, **kw)
   File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 269,
  in _init
     self._init_items(*args)
   File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 60,
  in _init_items
     item._set_parent(self)
   File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 809,
  in _set_parent
     table.constraints.remove(fk.constraint)
  KeyError: ForeignKeyConstraint()

  but if I leave out the autoload instruction, there is no problem.

  Am i doing something fundamentally wrong? Or am I just making a syntax
  error of some sort. Any help
  would be greatly appreciated.

Re: [sqlalchemy] Re: declarative autoloading table class with composite foreign/primary key

2010-08-04 Thread Michael Bayer

On Aug 4, 2010, at 9:05 PM, jgs9000 wrote:

 
 Michael
 
 thanks very much for your helpful advice - the problem seems to
 actually involve the autoloading of the table in question.

so there's a small bug that is easy to fix, that is ticket #1865 and it is 
fixed in r742bd985b4e0, latest tip, so at the very least the code you have 
(minus your __init__ method) will work.

But, if you are on Postgresql, and you're using reflection, there is absolutely 
no reason to specify image_id and amp explicitly, nor is there a need to 
specify the composite foreign key constraint - all of that will be reflected.   
Postgresql reflection is very complete and will pull all those details in for 
you.  Perhaps though you're using reflection just to pull in extra columns.  
I hardly ever use reflection for non-trivial applications.

The __init__ method of a declarative class is not where you'd put things 
related to the configuration of the mapped table.   A Python class's __init__ 
method is called for each instantiation of the object, and in the case of an 
ORM, a mapped class instantiation corresponds to a row in the mapped table.
You can keep the ForeignKeyConstraint in the __table_args__ now (even though 
like I said none of that should be needed), but if you were to use 
append_constraint(), you'd do that outside of the class definition, right after 
the class has been declared.
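
For future readers, a minimal sketch of that arrangement; TableBase stands in for
the declarative base (skymapper_db.db.TableBase) from the original post:

from sqlalchemy import Column, Integer, ForeignKeyConstraint

class Wcs(TableBase):   # TableBase: the declarative base from the original post
    __tablename__ = 'wcs'
    __table_args__ = {'autoload': True}

    image_id = Column(Integer, primary_key=True)
    amp = Column(Integer, primary_key=True)

# appended once, right after the class (and its Table) has been declared,
# rather than in __init__, which runs for every instance/row
Wcs.__table__.append_constraint(
    ForeignKeyConstraint(['image_id', 'amp'],
                         ['science_amp.image_id', 'science_amp.amp']))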



 
 The table structure is as follows:
 
 CREATE TABLE wcs (
image_id   INTEGER NOT NULL,
amp  INTEGER NOT NULL,
ctype1   TEXT,
(other column defs deleted)
PRIMARY KEY (image_id, amp),
FOREIGN KEY (image_id, amp) REFERENCES science_amp
ON DELETE NO ACTION
ON UPDATE CASCADE)
 
 and the class definition is as follows:
 
 class Wcs(skymapper_db.db.TableBase):
 
  from sqlalchemy import Table, Column, Integer, String, MetaData,
 ForeignKey, ForeignKeyConstraint
  from sqlalchemy.orm import relationship, backref
 
  __tablename__ = 'wcs'
  __table_args__ = {'autoload':True}
 
  image_id = Column(Integer, primary_key=True)
  amp = Column(Integer, primary_key=True)
 
  def __init__(self):
from sqlalchemy import ForeignKeyConstraint
self.__table__.append_constraint(ForeignKeyConstraint(['image_id',
 'amp'], ['science_amp.image_id', 'science_amp.amp']))
 
  def __repr__(self):
     return "Wcs(%s, %s)" % (self.image_id, self.amp)
 
 If I attempt to instantiate the class as defined, I get:
 
 Traceback (most recent call last):
  File C:\Users\jgs900\Work\skymapper-alchemy\wcs.py, line 13, in
 module
class Wcs(skymapper_db.db.TableBase):
  File C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py,
 line 1017, in __init__
_as_declarative(cls, classname, cls.__dict__)
  File C:\Python27\lib\site-packages\sqlalchemy\ext\declarative.py,
 line 926, in _as_declarative
**table_kw)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 209,
 in __new__
table._init(name, metadata, *args, **kw)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 269,
 in _init
self._init_items(*args)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 60,
 in _init_items
item._set_parent(self)
  File C:\Python27\lib\site-packages\sqlalchemy\schema.py, line 809,
 in _set_parent
table.constraints.remove(fk.constraint)
 KeyError: ForeignKeyConstraint()
 
 but if I disable the autoloading, there is no problem
 
 PS: sqlalchemy v0.6.3, Python 2.7, PostgreSQL 8.4.1
 
 Thanks again - Jon
 
 On Aug 4, 2:06 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Aug 3, 2010, at 10:24 PM, jgs9000 wrote:
 
 Hi
 
 I'm relatively new to SQLAlchemy, so thanks in advance for any help
 with this issue.
 
 I'm trying to construct a class to model a legacy table which has a
 composite primary key which is also
 a composite foreign key referencing the composite primary key of a
 second table. I'm trying to define this
 class declaratively, and also have it autoload the remaining table
 structure from the underlying table.
 
 i dont know that we have any tests which do a pure autoload plus a foreign 
 key constraint otherwise not associated with anything.   so its likely a 
 bug.   you might want to try calling table.append_constraint(constraint) 
 after the autoload completes.
 
 
 
 This is what I have:
 
 class Wcs(skymapper_db.db.TableBase):
 
  from sqlalchemy import Column, Integer, ForeignKeyConstraint
 
  __tablename__ = 'wcs'
  __table_args__ = (
 ForeignKeyConstraint(['image_id', 'amp'],
 ['science_amp.image_id', 'science_amp.amp']),
 {'autoload':True}
   )
 
  image_id = Column(Integer, primary_key=True)
  amp = Column(Integer, primary_key=True)
 
  def __init__(self):
pass
 
  def __repr__(self):
     return "Wcs(%s, %s)" % (self.image_id, self.amp)
 
 As it stands, I get an error when 

[sqlalchemy] Re: Declarative with mix-in and __table__ fails [patch]

2010-06-04 Thread Gunnlaugur Briem
On Jun 4, 9:23 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 as long as all tests pass it is fine.  If you could give me a patch that 
 includes a test for this in test_declarative, that would be supremely helpful 
 (if you want to make a trac ticket and target it at the 0.6.2 milestone).

Will do.

 do you mean, if the new class overrides what the mixin provides ?   i'm not 
 sure why we'd need to do anything if the class overrides the mixin.

Because the mix-in is itself an API, a public interface defining
properties on which other code may depend. It is a superclass, and a
subclass should extend or elaborate what the superclass' definition
promises — not replace it. That was my thinking. And it was wrong.
Because there are further “shouldn't”s: yeah, a subclass shouldn't
break its superclass' API, but also a superclass API shouldn't be too
specific, and a framework shouldn't be too restrictive.

The mix-in specifies the names of its columns, and *maybe* it
specifies more detail (e.g. id must be Integer, or name column must be
unique), but then that's application-specific. (Sure, that id column
has type Integer, but it had to have *some* type; the mix-in author
may well mean it as a default and not a restriction.) Allowing
whatever property (type, uniqueness) of the columns to be overridden
is a valid application design choice, so DeclarativeMeta should not
needlessly enforce anything beyond the names.

So yeah, you're right, no need to do any more.

- Gulli




[sqlalchemy] Re: declarative + order_by of 2 columns: is it possible?

2010-05-18 Thread sandro dentella
Hi,

On 11 Mag, 18:23, sandro dentella san...@e-den.it wrote:
 Hi,

 i have a working declarative configuration that has a relation as
 this::

   client  = relation(Cliente, backref='jobs' , lazy=False,
 order_by=status.desc)

 now I'd like to add a second column in the order_by field but adding a
 list doesn't seem to work. I tried:

   client  = relation(Cliente, backref='jobs' , lazy=False,
 order_by=[status.desc, description])

 before posting the error I'd like to understand if that should be
 correct as I don't find in the docs the correct syntax, I just find
 the syntax for order_by method of query.

sorry for reposting, but I can't even understand if the above syntax
should be allowed or not.
According to docs for 'relation':

 order_by – indicates the ordering that should be applied when loading
these items.

That doesn't say whether it accepts more than one column; order_by for query
does...
It's not vital, but I'd like to know whether it's possible or I'm just
misinterpreting the syntax.

thanks

sandro
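
Since the list form never appears in the thread, here is a minimal sketch of what
I believe was intended; the class and table names are assumptions. Note that
.desc() has to be called rather than passed as a bound method, and the ordering
is attached to the 'jobs' backref because that is the collection being sorted:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relation, backref
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Cliente(Base):
    __tablename__ = 'clienti'
    id = Column(Integer, primary_key=True)

class Job(Base):
    __tablename__ = 'jobs'
    id = Column(Integer, primary_key=True)
    cliente_id = Column(Integer, ForeignKey('clienti.id'))
    status = Column(String)
    description = Column(String)

    # order_by accepts a list of column expressions, applied in order
    client = relation(Cliente, lazy=False,
                      backref=backref('jobs',
                                      order_by=[status.desc(), description]))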




[sqlalchemy] Re: declarative and session

2009-12-28 Thread karikris...@gmail.com
Thank you so much. I will try adding a base class derived from the
declarative base and implementing save, update and delete methods.

Regards,

Krish
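
For readers wanting the same thing, a minimal sketch of such a base class,
assuming a scoped_session attached at class level; the names Session and
ActiveRecordMixin are illustrative, not part of SQLAlchemy:

from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Session = scoped_session(sessionmaker())   # Session.configure(bind=engine) later

class ActiveRecordMixin(object):
    # shared scoped_session; plain class attributes are left alone by declarative
    session = Session

    def save(self, flush=False):
        self.session.add(self)
        if flush:
            self.session.flush()

    def delete(self):
        self.session.delete(self)

Base = declarative_base(cls=ActiveRecordMixin)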

On Dec 27, 2:19 pm, Serge Koval serge.ko...@gmail.com wrote:
 You can always do self.session.add(self) in save() without checks. If
 your model was already associated with the session before, it won't add
 it again.

 Here's sample code I use:

 def save(self, flush=False):
     self.session.add(self)

     if flush:
         self.session.flush()

 def delete(self):
     self.session.delete(self)

 Serge.

 karikris...@gmail.com wrote:
   I am using declarative-style models and am very happy about it.

   I see __table__ contains the classic sa.Table reference for advanced
   queries. Along the same lines, is a session attached to the model class
   derived from declarative_base?

   I am very curious to build Django/Rails-style save, update and delete
   methods on top of SQLAlchemy.

   I don't know much about Elixir, but I see it does things via smart
   matching, some Ruby-like stuff. Not sure about community support.

  If the session is attached, I can say like

  class MyModel(SQLAlchemyDecBase)
  ...
      def save(self):
            self.session.add(self)
           #or
            meta.session.add(self)





[sqlalchemy] Re: Declarative, single table inheritance and column type override

2009-12-21 Thread Sergey Koval
Sorry for the misleading question. I found the problem; it was related to
index management.

Here's sample:

...

class Person(DeclarativeBase):
  id = Column(Integer, primary_key=True)
  test = Column(Integer, index=True, unique=True)

... and another file:

from test import Person

class Engineer(Person):
  test = Column(Integer)

def drop_test_index():
for i in Engineer.__table__.indexes:
if Engineer.test in i.columns:
Engineer.__table__.indexes.remove(i)
break
drop_test_index()

Fixes it. Is there better way to do it?

Thanks,
Serge.

On Mon, Dec 21, 2009 at 12:03 PM, Serge Koval serge.ko...@gmail.com wrote:

 Hello,

 I'm trying to override a column type in single-table inheritance, using
 declarative syntax, and am a bit stuck. Is it possible at all?
 Sample code:

 class Person(DeclarativeBase):
   id = Column(Integer, primary_key=True)
   test = Column(Integer, unique=True)

 class Engineer(Person):
   test = Column(Integer)

 Instead of removing the unique flag (from the inherited column definition), I get
 a composite column test.

 Thanks,
 Serge.








[sqlalchemy] Re: Declarative base - Joined Table Inheritence

2009-09-27 Thread Jarrod Chesney

I might be getting a bit ambitious here, but is this possible?

I'm using a different polymorphic_on for the second level of
inheritance.

I tried it, but it only seems to polymorphically return records of type
_UtConfReconcilerActions.


class _UtConfActions(Base):

__tablename__ = 'tblActions'

# 1 to Many relationship to the managed
id = Column(Integer, primary_key=True)
managed_id = Column(Integer, ForeignKey('tblManagedDetails.id'))
component = Column(String)

__mapper_args__ = {'polymorphic_on': component, 'with_polymorphic':
'*'}


class _UtConfReconcilerActions(_UtConfActions):

__tablename__ = 'tblReconcilerActions'

# 1 to Many relationship to the managed
id = Column(Integer, ForeignKey('tblActions.id'), primary_key=True)
action = Column(String)

__mapper_args__ = {'polymorphic_identity': 'RECONCILER',
'polymorphic_on': action, 'with_polymorphic': '*'}




class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):

__tablename__ = 'tblReconcilerActionSnapshot'
__mapper_args__ = {'polymorphic_identity': 'SNAPSHOT'}

# Joined table inheritence
id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
primary_key=True)

revision = Column(String)
comment = Column(String)



On Sep 17, 10:48 am, Jarrod Chesney jarrod.ches...@gmail.com wrote:
 That worked, Thanks, ITS AWESOME :-)

 On Sep 17, 6:03 am, Conor conor.edward.da...@gmail.com wrote:



  On Sep 15, 11:03 pm,Jarrod Chesneyjarrod.ches...@gmail.com wrote:

   Hi All
   I've been reading the documentation for ages and i can't figure out
   why when i print the results a query from my inherited table, It just
   prints them as the base type.

   I was hoping someone here would be nice enough to help me solve this
   problem.

   I thought the last print statement would print an instance of the
   _UtConfReconcilerActionSnapshot class but it doesn't

   I've got one record in both tables and 'id' = 1 in each table.
   What am i doing wrong?

  You are missing polymorphic_on in
  _UtConfReconcilerActions.__mapper_args__. Without this, SQLAlchemy
  cannot do polymorphic loads. Try this as your __mapper_args__:
  {'polymorphic_on': 'object_type', 'with_polymorphic': '*'}

  Hope it helps,
  -Conor

Begin code 

   from sqlalchemy import Table, Column, Integer, String, MetaData,
   ForeignKey, CheckConstraint
   from sqlalchemy.orm import relation
   from sqlalchemy.ext.declarative import declarative_base
   from sqlalchemy import create_engine
   from sqlalchemy.orm import sessionmaker

   __DATABASE_NAME__='UtConfSom.sqlite'

   Base = declarative_base()

   # == Reconciler Actions
   ===

   class _UtConfReconcilerActions(Base):

           __tablename__ = 'tblReconcilerActions'
           __mapper_args__ = {'with_polymorphic': '*'}

           # 1 to Many relationship to the managed
           id = Column(Integer, primary_key=True)
           action = Column(String, CheckConstraint("action in ('SNAPSHOT', 'COMPARE', 'UPGRADE')"))
           object_type = Column(String, CheckConstraint("object_type in ('ALL', 'SCHEMA', 'TABLE', 'COLUMN', 'INDEX')"))

           def __repr__(self):
                   return ('%s' % _UtConfReconcilerActions.__name__
                                + "\n  id='%i'" % self.id
                                + "\n  managed_id='%i'" % self.managed_id
                                + "\n  action='%s'" % self.action
                                + "\n  object_type='%s'" % self.object_type
                               )

   class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):

           __tablename__ = 'tblReconcilerActionSnapshot'
           # __mapper_args__ = {'with_polymorphic': '*'}
           __mapper_args__ = {'polymorphic_identity': 'snapshot',
   'with_polymorphic': '*'}

           # Joined table inheritence
           id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
   primary_key=True)

           revision = Column(String)
           comment = Column(String)

           def __repr__(self):
                   return (_UtConfReconcilerActions.__repr__(self)
                                + "\n  '%s'" % _UtConfReconcilerActionSnapshot.__name__
                                + "\n  id='%s'" % self.revision
                                + "\n  revision='%s'" % self.revision
                               )

   __db_exists = os.path.exists(__DATABASE_NAME__)

   engine = create_engine('sqlite:///' + __DATABASE_NAME__)

   # New database, create the tables
   if not __db_exists:
           Base.metadata.create_all(engine)
            print >> sys.stderr, ("WARNING - Creating empty '%s' database" %
    __DATABASE_NAME__)

   Session = sessionmaker(bind=engine)
   session = Session()

   print session.query(_UtConfReconcilerActions).with_polymorphic
   ('*').first()

end code 

[sqlalchemy] Re: Declarative base - Joined Table Inheritence

2009-09-16 Thread Conor

On Sep 15, 11:03 pm, Jarrod Chesney jarrod.ches...@gmail.com wrote:
 Hi All
 I've been reading the documentation for ages and I can't figure out
 why, when I print the results of a query from my inherited table, it just
 prints them as the base type.

 I was hoping someone here would be nice enough to help me solve this
 problem.

 I thought the last print statement would print an instance of the
 _UtConfReconcilerActionSnapshot class but it doesn't

 I've got one record in both tables and 'id' = 1 in each table.
 What am i doing wrong?

You are missing polymorphic_on in
_UtConfReconcilerActions.__mapper_args__. Without this, SQLAlchemy
cannot do polymorphic loads. Try this as your __mapper_args__:
{'polymorphic_on': 'object_type', 'with_polymorphic': '*'}

Hope it helps,
-Conor

  Begin code 

 from sqlalchemy import Table, Column, Integer, String, MetaData,
 ForeignKey, CheckConstraint
 from sqlalchemy.orm import relation
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker

 __DATABASE_NAME__='UtConfSom.sqlite'

 Base = declarative_base()

 # == Reconciler Actions
 ===

 class _UtConfReconcilerActions(Base):

         __tablename__ = 'tblReconcilerActions'
         __mapper_args__ = {'with_polymorphic': '*'}

         # 1 to Many relationship to the managed
         id = Column(Integer, primary_key=True)
          action = Column(String, CheckConstraint("action in ('SNAPSHOT', 'COMPARE', 'UPGRADE')"))
          object_type = Column(String, CheckConstraint("object_type in ('ALL', 'SCHEMA', 'TABLE', 'COLUMN', 'INDEX')"))

         def __repr__(self):
                 return ('%s' % _UtConfReconcilerActions.__name__
                              + "\n  id='%i'" % self.id
                              + "\n  managed_id='%i'" % self.managed_id
                              + "\n  action='%s'" % self.action
                              + "\n  object_type='%s'" % self.object_type
                             )

 class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):

         __tablename__ = 'tblReconcilerActionSnapshot'
         # __mapper_args__ = {'with_polymorphic': '*'}
         __mapper_args__ = {'polymorphic_identity': 'snapshot',
 'with_polymorphic': '*'}

         # Joined table inheritence
         id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
 primary_key=True)

         revision = Column(String)
         comment = Column(String)

         def __repr__(self):
                 return (_UtConfReconcilerActions.__repr__(self)
                              + "\n  '%s'" % _UtConfReconcilerActionSnapshot.__name__
                              + "\n  id='%s'" % self.revision
                              + "\n  revision='%s'" % self.revision
                             )

 __db_exists = os.path.exists(__DATABASE_NAME__)

 engine = create_engine('sqlite:///' + __DATABASE_NAME__)

 # New database, create the tables
 if not __db_exists:
         Base.metadata.create_all(engine)
          print >> sys.stderr, ("WARNING - Creating empty '%s' database" %
  __DATABASE_NAME__)

 Session = sessionmaker(bind=engine)
 session = Session()

 print session.query(_UtConfReconcilerActions).with_polymorphic
 ('*').first()

  end code 



[sqlalchemy] Re: Declarative base - Joined Table Inheritence

2009-09-16 Thread Jarrod Chesney

That worked, Thanks, ITS AWESOME :-)

On Sep 17, 6:03 am, Conor conor.edward.da...@gmail.com wrote:
 On Sep 15, 11:03 pm, Jarrod Chesney jarrod.ches...@gmail.com wrote:

  Hi All
  I've been reading the documentation for ages and I can't figure out
  why, when I print the results of a query from my inherited table, it just
  prints them as the base type.

  I was hoping someone here would be nice enough to help me solve this
  problem.

  I thought the last print statement would print an instance of the
  _UtConfReconcilerActionSnapshot class but it doesn't

  I've got one record in both tables and 'id' = 1 in each table.
  What am i doing wrong?

 You are missing polymorphic_on in
 _UtConfReconcilerActions.__mapper_args__. Without this, SQLAlchemy
 cannot do polymorphic loads. Try this as your __mapper_args__:
 {'polymorphic_on': 'object_type', 'with_polymorphic': '*'}

 Hope it helps,
 -Conor



   Begin code 

  from sqlalchemy import Table, Column, Integer, String, MetaData,
  ForeignKey, CheckConstraint
  from sqlalchemy.orm import relation
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy import create_engine
  from sqlalchemy.orm import sessionmaker

  __DATABASE_NAME__='UtConfSom.sqlite'

  Base = declarative_base()

  # == Reconciler Actions
  ===

  class _UtConfReconcilerActions(Base):

          __tablename__ = 'tblReconcilerActions'
          __mapper_args__ = {'with_polymorphic': '*'}

          # 1 to Many relationship to the managed
          id = Column(Integer, primary_key=True)
          action = Column(String, CheckConstraint(action in ('SNAPSHOT',
  'COMPARE', 'UPGRADE')))
          object_type = Column(String, CheckConstraint(object_type in ('ALL',
  'SCHEMA', 'TABLE', 'COLUMN', 'INDEX')))

          def __repr__(self):
                  return ('%s' % _UtConfReconcilerActions.__name__
                              + \n  id='%i' % self.id
                              + \n  managed_id='%i' % self.managed_id
                              + \n  action='%s' % self.action
                              + \n  object_type='%s' % self.object_type
                              )

  class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):

          __tablename__ = 'tblReconcilerActionSnapshot'
          # __mapper_args__ = {'with_polymorphic': '*'}
          __mapper_args__ = {'polymorphic_identity': 'snapshot',
  'with_polymorphic': '*'}

          # Joined table inheritence
          id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
  primary_key=True)

          revision = Column(String)
          comment = Column(String)

          def __repr__(self):
                  return (_UtConfReconcilerActions.__repr__(self)
                              + \n  '%s' % 
  _UtConfReconcilerActionSnapshot.__name__
                              + \n  id='%s' % self.revision
                              + \n  revision='%s' % self.revision
                              )

  __db_exists = os.path.exists(__DATABASE_NAME__)

  engine = create_engine('sqlite:///' + __DATABASE_NAME__)

  # New database, create the tables
  if not __db_exists:
          Base.metadata.create_all(engine)
          print  sys.stderr, (WARINING - Creating empty '%s' database %
  __DATABASE_NAME__ )

  Session = sessionmaker(bind=engine)
  session = Session()

  print session.query(_UtConfReconcilerActions).with_polymorphic
  ('*').first()

   end code 



[sqlalchemy] Re: Declarative issues, with compound foreign key

2009-09-15 Thread Michael Bayer

Gregg Lind wrote:
 What I think I'm seeing is that an object can be created even without its
 ForeignKeyConstraint being fulfilled.

 To run the test code below:

 $ dropdb test18; createdb test18; python testcode.py

on is not defined:

ForeignKeyConstraint(['regstring_id', 'regstring','regstring_type'],
[A1String.id, A1String.string, A1String.origin], on),

when removing on, the row inserts with regstring_id as NULL.  PG
appears to accept this so I would assume PG considers a three-column
foreign key with one NULL to be NULL.  If I try it with all three columns
not null, then you get the constraint error.


SQLAlchemy itself relies upon the database to enforce constraints.  In
this case you should have the NOT NULL constraint on the Product
columns.







 This builds on
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f3f2555a5e7/
 .

 I understand that the foreign table can't create the referent (and finding
 the best idiom for "use one if it exists or create one" is yet to be
 determined). What I truly don't understand is how any instances of Product
 can be created, since there is a FK constraint that is not fulfilled.

 1.  Is the foreign key constraint fulfilled?
 2.  Is there a good "create the referent if it doesn't exist, else use it"
 idiom?
 3.  Is the polymorphic table business complicating it?  It seems like the
 compound primary key for A1String is.

 
 from sqlalchemy.ext.declarative import
 declarative_base
 from sqlalchemy import CheckConstraint, ForeignKey, MetaData,
 PrimaryKeyConstraint
 from sqlalchemy import ForeignKeyConstraint
 from sqlalchemy import Table, Column, Integer, Boolean,Unicode,String
 from sqlalchemy.orm import relation, backref
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy.schema import DDL

 import sys
 ECHO = bool((sys.argv + [False])[1])

 ## utilties for connecting the db, printing it, etc.
 def print_schema(T="postgres", Base=None):
 ''' print print_schema will print the schema in use '''
 from StringIO import StringIO
 buf = StringIO()
 engine = create_engine('%s://' % T, strategy='mock', executor=lambda
 s,
 p='': buf.write(str(s) + p))
 Base.metadata.create_all(engine)
 return buf.getvalue()

 def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
 engine = create_engine(connstring, echo=echo)
 Session = sessionmaker(bind=engine, autoflush=False, autocommit=False)
 session = Session()
 Base.metadata.bind = engine
 Base.metadata.create_all()
 return session, engine

 def _class_repr(self):
 ''' print our SA class instances in a nicer way '''
 # ugly, use sparingly, may have performance hit
 d = [(k,v) for k,v in self.__dict__.iteritems() if k[0] != "_"]
 d = sorted(d, key=lambda x: x[0].lower())
 return "%s, %s" % (self.__class__, d)


 Base = declarative_base()

 class Polystring(Base):
 __tablename__ = 'strings'
 id = Column(Integer, nullable=False, primary_key=True)
 string = Column(String, nullable=False, primary_key=True)
 origin = Column(String, nullable=False, primary_key=True)
 __mapper_args__ = {'polymorphic_on': origin}

 # subtype of string
 class A1String(Polystring):
 __mapper_args__ = {'polymorphic_identity': 'a1'}
 products = relation('Product', order_by=Product.id)

 class Product(Base):
 __tablename__ = 'product'
 __table_args__ = (
  ForeignKeyConstraint(['regstring_id', 'regstring',
 'regstring_type'], [A1String.id, A1String.string
 , A1String.origin], on),
 {}
 )
 id = Column(Integer,primary_key=True)
 regstring_id = Column(Integer)
 regstring = Column(String)
 regstring_type = Column(String, default="asn")



 ## test code
 session, eng = db_setup("postgres:///test18", Base=Base, echo=ECHO)
 add = session.add
 q = session.query
 c = session.commit
 r = _class_repr


 A = Product(id=192832, regstring="some part id")
 print r(A)
 add(A)
 c()  # commit
 print map(r,q(Product).all())
 print "somehow this managed to get in, without making a polystring, which it should be referencing."
 assert len(q(Polystring).all()) > 0, "So, where is the polystring?"



[sqlalchemy] Re: Declarative issues, with compound foreign key

2009-09-15 Thread Conor

On Sep 15, 4:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Gregg Lind wrote:
  What I think I'm seeing is that an object can be created even without it's
  ForeignKeyConstraint being filled.

  To run the test code below:

  $ dropdb test18; createdb test18; python testcode.py

 on is not defined:

 ForeignKeyConstraint(['regstring_id', 'regstring','regstring_type'],
 [A1String.id, A1String.string, A1String.origin], on),

 when removing on, the row inserts with regstring_id as NULL.  PG
 appears to accept this so I would assume PG considers a three-column
 foreign key with one NULL to be NULL.  If I try it with all three columns
 not null, then you get the constraint error.

 SQLalchemy itself relies upon the database to enforce constraints.    In
 this case you should have the NOT NULL constraint on the Product
 columns.


To expand on this:
Most (all?) databases default to a MATCH SIMPLE policy for foreign key
constraints: if any FK column is NULL then the FK constraint is
satisfied (regardless of the actual values of the non-null columns).
It looks like you want MATCH FULL behavior: if some but not all FK
columns are NULL then the FK constraint fails.

 Assuming you really do need the FK columns to be nullable, you
 have to either add MATCH FULL to your DDL (probably have to use DDL()
+ ALTER TABLE; also requires that your database actually supports
MATCH FULL) or add a check constraint that mimics the MATCH FULL
behavior, e.g.:
(regstring_id IS NULL) = (regstring IS NULL) = (regstring_type IS
NULL)
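
To spell that out (the chained comparison above is pseudo-notation), a minimal
sketch of the check-constraint variant against the Product table from the test
code above, written with PostgreSQL in mind:

from sqlalchemy import CheckConstraint, Column, ForeignKeyConstraint, Integer, String

class Product(Base):
    __tablename__ = 'product'
    __table_args__ = (
        ForeignKeyConstraint(
            ['regstring_id', 'regstring', 'regstring_type'],
            ['strings.id', 'strings.string', 'strings.origin']),
        # emulate MATCH FULL: either every FK column is NULL or none is
        CheckConstraint(
            "(regstring_id IS NULL) = (regstring IS NULL) "
            "AND (regstring IS NULL) = (regstring_type IS NULL)"),
        {})
    id = Column(Integer, primary_key=True)
    regstring_id = Column(Integer)
    regstring = Column(String)
    regstring_type = Column(String, default="asn")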

  This builds on
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f...
  .

  I understand that the foreign table can't create the referent.  (and
  finding
  the best idiom for use one if it exists or create one) is yet be
  determined.  What I truly don't understand is how any instances of
  Product
  can be created, since there is a FK constraint that is not fulfulled.

  1.  Is the foreign key constraint fulfilled?
  2.  Is there a good create the referent if it doesn't exist, else use it
  idiom?
  3.  Is the polymorphic table business complicating it?  It seems liek the
  compound primary key for A1String is.

  
  from sqlalchemy.ext.declarative import
  declarative_base
  from sqlalchemy import CheckConstraint, ForeignKey, MetaData,
  PrimaryKeyConstraint
  from sqlalchemy import ForeignKeyConstraint
  from sqlalchemy import Table, Column, Integer, Boolean,Unicode,String
  from sqlalchemy.orm import relation, backref
  from sqlalchemy import create_engine
  from sqlalchemy.orm import sessionmaker
  from sqlalchemy.schema import DDL

  import sys
  ECHO = bool((sys.argv + [False])[1])

  ## utilties for connecting the db, printing it, etc.
  def print_schema(T=postgres, Base=None):
      ''' print print_schema will print the schema in use '''
      from StringIO import StringIO
      buf = StringIO()
      engine = create_engine('%s://' % T, strategy='mock', executor=lambda
  s,
  p='': buf.write(str(s) + p))
      Base.metadata.create_all(engine)
      return buf.getvalue()

  def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
      engine = create_engine(connstring, echo=echo)
      Session = sessionmaker(bind=engine, autoflush=False, autocommit=False)
      session = Session()
      Base.metadata.bind = engine
      Base.metadata.create_all()
      return session, engine

  def _class_repr(self):
      ''' print our SA class instances in a nicer way '''
      # ugly, use sparingly, may have performance hit
      d = [(k,v) for k,v in self.__dict__.iteritems() if k[0] != _]
      d = sorted(d, key=lambda x: x[0].lower())
      return %s, %s % (self.__class__, d)

  Base = declarative_base()

  class Polystring(Base):
      __tablename__ = 'strings'
      id = Column(Integer, nullable=False, primary_key=True)
      string = Column(String, nullable=False, primary_key=True)
      origin = Column(String, nullable=False, primary_key=True)
      __mapper_args__ = {'polymorphic_on': origin}

  # subtype of string
  class A1String(Polystring):
      __mapper_args__ = {'polymorphic_identity': 'a1'}
      products = relation('Product', order_by=Product.id)

  class Product(Base):
      __tablename__ = 'product'
      __table_args__ = (
           ForeignKeyConstraint(['regstring_id', 'regstring',
  'regstring_type'], [A1String.id, A1String.string
  , A1String.origin], on),
          {}
      )
      id = Column(Integer,primary_key=True)
      regstring_id = Column(Integer)
      regstring = Column(String)
      regstring_type = Column(String,default=asn)

  ## test code
  session,eng = db_setup(postgres:///test18, Base=Base, echo=ECHO)
  add = session.add
  q = session.query
  c = session.commit
  r = _class_repr

  A = Product(id=192832, regstring=some part id)
  print r(A)
  add(A)
  c()  # commit
  print map(r,q(Product).all())
  print somehow this managed to get in, without making a polystring, which
  it
  should be referencing.
  assert 

[sqlalchemy] Re: Declarative issues, with compound foreign key

2009-09-15 Thread Gregg Lind
Thank you both for the advice.  Dern NULLs causing trouble again.

GL

On Tue, Sep 15, 2009 at 4:34 PM, Conor conor.edward.da...@gmail.com wrote:


 On Sep 15, 4:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  Gregg Lind wrote:
   What I think I'm seeing is that an object can be created even without
 it's
   ForeignKeyConstraint being filled.
 
   To run the test code below:
 
   $ dropdb test18; createdb test18; python testcode.py
 
  on is not defined:
 
  ForeignKeyConstraint(['regstring_id', 'regstring','regstring_type'],
  [A1String.id, A1String.string, A1String.origin], on),
 
  when removing on, the row inserts with regstring_id as NULL.  PG
  appears to accept this so I would assume PG considers a three-column
  foreign key with one NULL to be NULL.  If I try it with all three columns
  not null, then you get the constraint error.
 
  SQLalchemy itself relies upon the database to enforce constraints.In
  this case you should have the NOT NULL constraint on the Product
  columns.
 

 To expand on this:
 Most (all?) databases default to a MATCH SIMPLE policy for foreign key
 constraints: if any FK column is NULL then the FK constraint is
 satisfied (regardless of the actual values of the non-null columns).
 It looks like you want MATCH FULL behavior: if some but not all FK
 columns are NULL then the FK constraint fails.

 Assuming you really do need the the FK columns to be nullable, you
 have to either add MATCH FULL to your DDL (probably have to use DDL()
 + ALTER TABLE; also requires that your database actually supports
 MATCH FULL) or add a check constraint that mimics the MATCH FULL
 behavior, e.g.:
 (regstring_id IS NULL) = (regstring IS NULL) = (regstring_type IS
 NULL)

   This builds on
  http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f.
 ..
   .
 
   I understand that the foreign table can't create the referent.  (and
   finding
   the best idiom for use one if it exists or create one) is yet be
   determined.  What I truly don't understand is how any instances of
   Product
   can be created, since there is a FK constraint that is not fulfulled.
 
   1.  Is the foreign key constraint fulfilled?
   2.  Is there a good create the referent if it doesn't exist, else use
 it
   idiom?
   3.  Is the polymorphic table business complicating it?  It seems liek
 the
   compound primary key for A1String is.
 
   
   from sqlalchemy.ext.declarative import
   declarative_base
   from sqlalchemy import CheckConstraint, ForeignKey, MetaData,
   PrimaryKeyConstraint
   from sqlalchemy import ForeignKeyConstraint
   from sqlalchemy import Table, Column, Integer, Boolean,Unicode,String
   from sqlalchemy.orm import relation, backref
   from sqlalchemy import create_engine
   from sqlalchemy.orm import sessionmaker
   from sqlalchemy.schema import DDL
 
   import sys
   ECHO = bool((sys.argv + [False])[1])
 
   ## utilties for connecting the db, printing it, etc.
   def print_schema(T=postgres, Base=None):
   ''' print print_schema will print the schema in use '''
   from StringIO import StringIO
   buf = StringIO()
   engine = create_engine('%s://' % T, strategy='mock',
 executor=lambda
   s,
   p='': buf.write(str(s) + p))
   Base.metadata.create_all(engine)
   return buf.getvalue()
 
   def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
   engine = create_engine(connstring, echo=echo)
   Session = sessionmaker(bind=engine, autoflush=False,
 autocommit=False)
   session = Session()
   Base.metadata.bind = engine
   Base.metadata.create_all()
   return session, engine
 
   def _class_repr(self):
   ''' print our SA class instances in a nicer way '''
   # ugly, use sparingly, may have performance hit
   d = [(k,v) for k,v in self.__dict__.iteritems() if k[0] != _]
   d = sorted(d, key=lambda x: x[0].lower())
   return %s, %s % (self.__class__, d)
 
   Base = declarative_base()
 
   class Polystring(Base):
   __tablename__ = 'strings'
   id = Column(Integer, nullable=False, primary_key=True)
   string = Column(String, nullable=False, primary_key=True)
   origin = Column(String, nullable=False, primary_key=True)
   __mapper_args__ = {'polymorphic_on': origin}
 
   # subtype of string
   class A1String(Polystring):
   __mapper_args__ = {'polymorphic_identity': 'a1'}
   products = relation('Product', order_by=Product.id)
 
   class Product(Base):
   __tablename__ = 'product'
   __table_args__ = (
ForeignKeyConstraint(['regstring_id', 'regstring',
   'regstring_type'], [A1String.id, A1String.string
   , A1String.origin], on),
   {}
   )
   id = Column(Integer,primary_key=True)
   regstring_id = Column(Integer)
   regstring = Column(String)
   regstring_type = Column(String,default=asn)
 
   ## test code
   session,eng = db_setup(postgres:///test18, Base=Base, echo=ECHO)
   add = session.add
   q = 

[sqlalchemy] Re: : Declarative and association object same model

2009-09-04 Thread asrenzo

Here is a primaryjoin, secondaryjoin I tried without success:

neighbors = relation(Place,
    primaryjoin="(Place.id == neighbors_table.place_id)",
    secondaryjoin="(neighbors_table.neighbor_id == Place.id)",
    secondary="neighbors_table")

and the error is:

sqlalchemy.exc.InvalidRequestError: When compiling mapper Mapper|Shop|
shops, expression 'Shop.id == neighbours_table.shop_id' failed to
locate a name (name 'neighbours_table' is not defined). If this is a
class name, consider adding this relation() to the class
'shoplocator.orm.shop.Shop' class after both dependent classes have
been defined.

Which I can't understand, because the neighbors table is defined before the
Place class definition.

Regards,

Laurent



[sqlalchemy] Re: : Declarative and association object same model

2009-09-04 Thread Michael Bayer

asrenzo wrote:

 Here is a primaryjoin, secondaryjoin I tried without success:

 neighbors = relation(Place, primaryjoin=(Place.id ==
 neighbors_table.place_id), secondaryjoin=
 (neighbors_table.neighbor_id == Place.id),
 secondary=neighbors_table)

 and the error is:

 sqlalchemy.exc.InvalidRequestError: When compiling mapper Mapper|Shop|
 shops, expression 'Shop.id == neighbours_table.shop_id' failed to
 locate a name (name 'neighbours_table' is not defined). If this is a
 class name, consider adding this relation() to the class
 'shoplocator.orm.shop.Shop' class after both dependent classes have
 been defined.

 Which I can't understand because neighbors table is defined before the
 Place class definition.

the first strange thing is the message says Shop.id but the string you
are showing says Place.id.   Anyway, the Table object neighbors_table
is not part of the locals() when the string expressions are evaluated. 
therefore just don't use string arguments for
primaryjoin/secondaryjoin/secondary, use the expression directly (i.e.
secondary=neighbors_table, primaryjoin=id==neighbors_table.c.neighbor_id).
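For readers following along, here is a minimal sketch of that advice applied to a self-referential many-to-many; Place, places and the neighbors table are illustrative names rather than the poster's actual schema, and only the target class name remains a string:

from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

# define the association table first so the class body can reference it
# directly, instead of naming it inside a string
neighbors_table = Table(
    'neighbors', Base.metadata,
    Column('place_id', Integer, ForeignKey('places.id'), primary_key=True),
    Column('neighbor_id', Integer, ForeignKey('places.id'), primary_key=True),
)

class Place(Base):
    __tablename__ = 'places'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    # expressions, not strings, for the join conditions
    neighbors = relation(
        'Place',
        secondary=neighbors_table,
        primaryjoin=id == neighbors_table.c.place_id,
        secondaryjoin=id == neighbors_table.c.neighbor_id,
    )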






[sqlalchemy] Re: : Declarative and association object same model

2009-09-04 Thread Laurent Rahuel

Thanks Michael,

Everything is OK now.

Sorry for the typo, I was renaming my classes.

Regards,

Laurent

Le 04/09/2009 16:30, Michael Bayer a écrit :
 asrenzo wrote:

 Here is a primaryjoin, secondaryjoin I tried without success:

 neighbors = relation(Place, primaryjoin=(Place.id ==
 neighbors_table.place_id), secondaryjoin=
 (neighbors_table.neighbor_id == Place.id),
 secondary=neighbors_table)

 and the error is:

 sqlalchemy.exc.InvalidRequestError: When compiling mapper Mapper|Shop|
 shops, expression 'Shop.id == neighbours_table.shop_id' failed to
 locate a name (name 'neighbours_table' is not defined). If this is a
 class name, consider adding this relation() to theclass
 'shoplocator.orm.shop.Shop'  class after both dependent classes have
 been defined.

 Which I can't understand because neighbors table is defined before the
 Place class definition.
  
 the first strange thing is the message says Shop.id but the string you
 are showing says Place.id.   Anyway, the Table object neighbors_table
 is not part of the locals() when the string expressions are evaluated.
 therefore just don't use string arguments for
 primaryjoin/secondaryjoin/secondary, use the expression directly (i.e.
 secondary=neighbors_table, primaryjoin=id==neighbors_table.c.neighbor_id).



 





[sqlalchemy] Re: : Declarative and association object same model

2009-09-04 Thread Michael Bayer

Laurent Rahuel wrote:

 Thanks Michael,

 Everything is OK now.

 Sorry for the typo, I was renaming my classes.

it wouldn't be tough for us to enhance declarative such that you can name
Table objects in those strings as wellsince we have the MetaData
available.I'll add a ticket.



 Regards,

 Laurent

 Le 04/09/2009 16:30, Michael Bayer a écrit :
 asrenzo wrote:

 Here is a primaryjoin, secondaryjoin I tried without success:

 neighbors = relation(Place, primaryjoin=(Place.id ==
 neighbors_table.place_id), secondaryjoin=
 (neighbors_table.neighbor_id == Place.id),
 secondary=neighbors_table)

 and the error is:

 sqlalchemy.exc.InvalidRequestError: When compiling mapper Mapper|Shop|
 shops, expression 'Shop.id == neighbours_table.shop_id' failed to
 locate a name (name 'neighbours_table' is not defined). If this is a
 class name, consider adding this relation() to theclass
 'shoplocator.orm.shop.Shop'  class after both dependent classes have
 been defined.

 Which I can't understand because neighbors table is defined before the
 Place class definition.

 the first strange thing is the message says Shop.id but the string you
 are showing says Place.id.   Anyway, the Table object
 neighbors_table
 is not part of the locals() when the string expressions are evaluated.
 therefore just don't use string arguments for
 primaryjoin/secondaryjoin/secondary, use the expression directly (i.e.
 secondary=neighbors_table,
 primaryjoin=id==neighbors_table.c.neighbor_id).



 


 






[sqlalchemy] Re: declarative style many to many, possible fix

2009-09-02 Thread Michael Bayer

secondary requires a Table object as its argument.   it is not
recommended to create a relation that uses a mapped table as its
secondary unless the relation specifies viewonly=True.
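A rough sketch of the recommended shape, with User, Group and group_members as assumed names (they are not from the original post): the association stays a plain Table and is handed to secondary directly.

from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

# a plain Table, not a mapped class: this is what "secondary" expects
group_members = Table(
    'group_members', Base.metadata,
    Column('user_id', Integer, ForeignKey('users.id'), primary_key=True),
    Column('group_id', Integer, ForeignKey('groups.id'), primary_key=True),
)

class Group(Base):
    __tablename__ = 'groups'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(20))
    # writable many-to-many straight through the association Table
    groups = relation(Group, secondary=group_members, backref='users')

If the same group_members table is additionally mapped as a class, the relation above would instead be declared with viewonly=True and the rows written through that class, per the note above.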



Jae Kwon wrote:

 Is there a way to declaratively create many to many relationships
 where the 'secondary' parameter for the relationship is deferred ?

 I couldn't get this to work, e.g.

 class User(DeclarativeBase):
  id = Column(Integer, primary_key=True)
  name = Column(String(20))
  groups = relation(Group, primaryjoin=(User.id ==
 GroupMember.user_id), secondaryjoin=(GroupMember.group_id ==
 Group.id), secondary=GroupMember)

 (the other classes are defined later).

 I was able to get around this with the following patch.

 --- a/sqlalchemy0.5/lib/sqlalchemy/orm/properties.py  Mon Aug 31
 22:37:21 2009 -0700
 +++ b/sqlalchemy0.5/lib/sqlalchemy/orm/properties.py  Tue Sep 01
 22:11:07 2009 -0700
 @@ -736,7 +745,11 @@
   # accept callables for other attributes which may require
 deferred initialization
   for attr in ('order_by', 'primaryjoin', 'secondaryjoin',
 'secondary', '_foreign_keys', 'remote_side'):
   if util.callable(getattr(self, attr)):
 -setattr(self, attr, getattr(self, attr)())
 +called_value = getattr(self, attr)()
 +# the 'secondary' param requires a table, not a
 declarative class...
 +if attr == 'secondary' and hasattr(called_value,
 '__mapper__'):
 +called_value = called_value.__mapper__.mapped_table
 +setattr(self, attr, called_value)

   # in the case that InstrumentedAttributes were used to
 construct
   # primaryjoin or secondaryjoin, remove the _orm_adapt
 annotation so these

 - Jae

 






[sqlalchemy] Re: declarative style many to many, possible fix

2009-09-02 Thread Jae Kwon

Thanks for looking.

What happens when viewonly=False?

I tried appending/popping from the list of related secondary objects  
but I didn't see any duplicate inserts/deletes.

  - Jae



On Sep 2, 2009, at 8:16 AM, Michael Bayer wrote:


 secondary requires a Table object as its argument.   it is not
 recommended to create a relation that uses a mapped table as its
 secondary unless the relation specifies viewonly=True.





[sqlalchemy] Re: declarative style many to many, possible fix

2009-09-02 Thread Michael Bayer

Jae Kwon wrote:

 Thanks for looking.

 What happens when viewonly=False?

 I tried appending/popping from the list of related secondary objects
 but I didn't see any duplicate inserts/deletes.


if you create new entities on the secondary table, and also insert
records in the relation() with the secondary, it will persist them
separately.

I have found myself using this pattern, however, since relation +
secondary can create more efficient joins than an eagerload on an
association object (the latter is solvable but it is a bit complex).
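A sketch of the pattern being described, with User/Group/GroupMember as assumed names: the association table is mapped as its own class for writes, while a second, viewonly relation across the same table gives the cheaper join for reads.

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Group(Base):
    __tablename__ = 'groups'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

class GroupMember(Base):
    # association object: membership rows are created and deleted via this class
    __tablename__ = 'group_members'
    user_id = Column(Integer, ForeignKey('users.id'), primary_key=True)
    group_id = Column(Integer, ForeignKey('groups.id'), primary_key=True)
    role = Column(String(20))
    group = relation(Group)

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(20))
    memberships = relation(GroupMember, backref='user')
    # read-only convenience path across the association table
    groups = relation(Group,
                      secondary=GroupMember.__table__,
                      viewonly=True)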



   - Jae



 On Sep 2, 2009, at 8:16 AM, Michael Bayer wrote:


 secondary requires a Table object as its argument.   it is not
 recommended to create a relation that uses a mapped table as its
 secondary unless the relation specifies viewonly=True.


 






[sqlalchemy] Re: declarative style many to many, possible fix

2009-09-02 Thread Jae Kwon


 if you create new entities on the secondary table, and also insert
 records in the relation() with the secondary, it will persist them
 separately.

I see that now.

 I have found myself using this pattern, however, since relation +
 secondary can create more efficient joins than an eagerload on an
 association object (the latter is solvable but it is a bit complex).

I don't understand. Which pattern?

I'm going to use the pattern of setting viewonly=True, since it makes  
the code so much cleaner (w/ declarative base) to have all many-to- 
many relations
defined inside the class declaration.

  - Jae



  - Jae



 On Sep 2, 2009, at 8:16 AM, Michael Bayer wrote:


 secondary requires a Table object as its argument.   it is not
 recommended to create a relation that uses a mapped table as its
 secondary unless the relation specifies viewonly=True.






 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Declarative way of delete-orphan

2009-08-26 Thread Mike Conley
Add cascade='delete-orphan' to the relation definition for children.

cascade='all,delete-orphan' is also a fairly common option.

See the documentation for other options in cascade.
http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html#sqlalchemy.orm.relation
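A minimal sketch of the poster's Parent/Child pair with that cascade applied (using a plain declarative_base() Base in place of the TurboGears DeclarativeBase):

from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Child(Base):
    __tablename__ = 'children'
    id = Column(String(50), primary_key=True)
    parent_id = Column(String(50),
                       ForeignKey('parent.id', onupdate="CASCADE",
                                  ondelete="CASCADE"))
    name = Column(String(50))

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(String(50), primary_key=True)
    name = Column(String(50))
    # "all, delete-orphan": deleting the Parent, or detaching the Child,
    # now deletes the child row instead of nulling out parent_id
    children = relation(Child, uselist=False,
                        cascade='all, delete-orphan')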


-- 
Mike Conley



On Wed, Aug 26, 2009 at 11:20 AM, rajasekhar911 rajasekhar...@gmail.comwrote:


 Hi
 How do i define a delete-orphan using declarative base?
 I am using sqlite and SA0.5.5
 I have defined a one to one relation.
 class Child(DeclarativeBase):
__tablename__='children'
id=Column(String(50),primary_key=True)
parent_id=Column(String(50),ForeignKey
 ('parent.id',onupdate=CASCADE,ondelete=CASCADE))
name=Column(String(50))

 class Parent(DeclarativeBase):
__tablename__='parent'
id=Column(String(50),primary_key=True)
name=Column(String(50))
children=relation('Child', uselist=False)

 when i delete the parent it makes the parent_id None in Child.

 I tried giving ondelete=DELETE according to

 http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/schema.html#sqlalchemy.schema.ForeignKey
 ondelete – Optional string. If set, emit ON DELETE value when
 issuing DDL for this constraint. Typical values include CASCADE,
 DELETE and RESTRICT.
 But gave syntax error while trying to create the child table near
 DELETE

 I tried making parent_id as primarykey for Child.But that gave the
 error Constraint tried to blank out the
 PrimaryKey for instance

 what am i doing wrong?
 thnx in advance.
 





[sqlalchemy] Re: Declarative Base: remote_side single_parent

2009-08-06 Thread Michael Bayer

allen.fowler wrote:


 I tried:

 children = relation(Node, backref=backref(parent,
 remote_side=nodes.id))


 got it to work with:

 remote_side=[id]

 But:

 1) Why is remote_side a list?

in this case you could just say remote_side=id.   it's optionally a list if
multiple columns occur on the remote side of the join condition (i.e. as
in a composite primary key).


 2) Where does single_parent fit in to this?

single_parent is a flag that gets suggested to you if you attempt to use
delete-orphan cascade with a many-to-one or many-to-many foreign key
combination (meaning they are essentially distilled into one-to-one or
one-to-many).  it's basically requiring you to sign the agreement "I
promise not to connect this object to more than one parent of this type" so
that delete-orphan does what's advertised.   if it hasn't been suggested
to you, there's no need to use it (unless you want to enforce that
contract otherwise).
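As an illustration of when the flag does get suggested, here is a hypothetical many-to-one with delete-orphan (Order and Address are invented names, not from this thread):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)

class Order(Base):
    __tablename__ = 'orders'
    id = Column(Integer, primary_key=True)
    address_id = Column(Integer, ForeignKey('addresses.id'))
    # many-to-one with delete-orphan: each Address belongs to exactly one
    # Order and is deleted along with it, so SQLAlchemy asks for the
    # single_parent=True "agreement" described above
    shipping_address = relation(Address,
                                cascade='all, delete-orphan',
                                single_parent=True)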





[sqlalchemy] Re: Declarative Base: remote_side single_parent

2009-08-06 Thread allen.fowler



On Aug 6, 4:05 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 allen.fowler wrote:

  I tried:

  children = relation(Node, backref=backref(parent,
  remote_side=nodes.id))

  got it to work with:

  remote_side=[id]

  But:

  1) Why is remote_side a list?

 in this case you could just say remote_side=id.   its optionally a list if
 multiple columns occur on the remote side of the join condition (i.e. as
 in a composite primary key).

  2) Where does single_parent fit in to this?

 single_parent is a flag that gets suggested to you if you attempt to use
 delete-orphan cascade with a many-to-one or many-to-many foreign key
 combination (meaning they are essentially distilled into one-to-one or
 one-to-many).  it's basically requiring you to sign the agreement I
 promise not to connect this object to more than one parent of this type so
 that delete-orphan does what's advertised.   if it hasn't been suggested
 to you, there's no need to use it (unless you want to enforce that
 contract otherwise).

Thank you for the clarification, Mike.




[sqlalchemy] Re: Declarative base and Adjacency List Relationships

2009-08-04 Thread Werner F. Bruhin

Hi,

maxi wrote:
 Hi,

 I've just started using sqlalchemy 0.5.1 with python 2.6 and TurboGears, but I
 found a little problem trying to configure an adjacency relationship
 with declarative base.


 My object class is something like this:

 class QueryGroup(DeclarativeBase):
 __tablename__ = 'queries_group'

 qry_grp_id = Column(SmallInteger, primary_key=True)
 qry_grp_desc = Column(Unicode(20), nullable=False)
 parent_id = relation('QueryGroup', backref='parent')

 When I try to generate my tables, I get an error like this:

 sqlalchemy.exc.ArgumentError: Could not determine join condition
 between parent/
 child tables on relation QueryGroup.parent_id.  Specify a
 'primaryjoin' expressi
 on.  If this is a many-to-many relation, 'secondaryjoin' is needed as
 well.

 How can I specify the correct statement?

 Using traditional mapper approach, I can do:

 queries_group = Table(...)

 mapper(QueryGroup, queries_group, properties={
 'children': relation(QueryGroup, cascade=all,
 backref=backref(parent, remote_side=
 [queries_group.c.qry_grp_id]))
 }
 )


 Can I do the same using declarative style? How?
   
I use declarative like this to do the above.

class Lang(Base):
__table__ = sa.Table(u'lang', metadata,
sa.Column(u'id', sa.Integer(), sa.Sequence('gen_lang_id'), 
primary_key=True, nullable=False),
sa.Column(u'lang', sa.String(length=5, convert_unicode=False)),
)


class Users(Base):
__table__ = sa.Table(u'users', metadata,
sa.Column(u'id', sa.Integer(), sa.Sequence('gen_users_id'), 
primary_key=True, nullable=False),
sa.Column(u'name', sa.String(length=20, convert_unicode=False)),
sa.Column(u'fk_lang', sa.Integer(), sa.ForeignKey(u'lang.id')),
)
   
lang = sao.relation(Lang)

could also be:
lang = sao.relation(Lang, backref='users')

Werner
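For completeness, a sketch of the self-referential case maxi asked about, done declaratively; it assumes a parent_id foreign key column is added (the original QueryGroup class had no such column, which is why no join condition could be determined):

import sqlalchemy as sa
import sqlalchemy.orm as sao
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class QueryGroup(Base):
    __tablename__ = 'queries_group'
    qry_grp_id = sa.Column(sa.SmallInteger, primary_key=True)
    qry_grp_desc = sa.Column(sa.Unicode(20), nullable=False)
    # the self-referential foreign key column; without it there is nothing
    # for the relation to join on
    parent_id = sa.Column(sa.SmallInteger,
                          sa.ForeignKey('queries_group.qry_grp_id'))
    # same shape as the classic mapper() version quoted above
    children = sao.relation('QueryGroup', cascade='all',
                            backref=sao.backref('parent',
                                                remote_side=[qry_grp_id]))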

 Thanks in advance.
 ---
 Maxi.
















 

   





[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-24 Thread Gregg Lind

Mike, I totally appreciate the help, but it's just not working, for me.
I feel like you've given tons of time on trying to fix this, so if anyone
else wants to step in to hit me with the clue stick, that would be delightful.


More details

db's tried:  postgres, sqlite
sqlalchemy version:   0.5.5 on 64-bit Centos

--
connstring='postgres:///test_a'
def demo():
session.query(Route).delete()
for t,h,ts,s in samples :
session.add(Route(target=t,hop_id=h,ts=ts,startpoint=s))

session.flush()
session.commit()
sq = 
session.query(Route.ts,Route.startpoint,Route.target,func.max(Route.hop_id).label('max_hop'))\
.group_by(Route.ts,Route.startpoint,Route.target).subquery()
q = session.query(Route,sq.c.max_hop).join(sq,sq.c.max_hop==Route.hop_id)
q.all()
---

Which gives:

ArgumentError: Can't find any foreign key relationships between
'route' and '%(47624668442128 anon)s'

All the other code is as in the first email.  I'm not sure what's
supposed to be happening,
but something clearly isn't right, and I'm clearly having some core model
grokking fail.

In addtion:

 print join(Route,sq, Route.hop_id==sq.c.max_hop)
class '__main__.Route' JOIN (SELECT route.ts AS ts, route.startpoint
AS startpoint, route.target AS target, max(route.hop_id) AS max_hop
FROM route GROUP BY route.ts, route.startpoint, route.target) AS
anon_1 ON route.hop_id = anon_1.max_hop

This doesn't seem to connect any of the other parts, other than the hop id.



On Thu, Jul 23, 2009 at 8:13 PM, Michael Bayermike...@zzzcomputing.com wrote:


 On Jul 23, 2009, at 8:43 PM, Gregg Lind wrote:


 Hm.  I appreciate the help, but something is clearly still failing
 here.

 session.query(Route,*sq.c).join(sq.c.max_hop)
 ArgumentError: Can't find any foreign key relationships between
 'route' and 'max_hop'

 Maybe the filter based solution is just fine here :)

 that's not the call sig for query.join() .   if youre dealing with SQL
 expression components, its join((selectable, onclause), ... ), so here
 join((sq, sq.c.max_hop==Route.hop)) .



 





[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-24 Thread Michael Bayer

Gregg Lind wrote:
 session.query(Route.ts,Route.startpoint,Route.target,func.max(Route.hop_id).label('max_hop'))\
 .group_by(Route.ts,Route.startpoint,Route.target).subquery()
 q =
 session.query(Route,sq.c.max_hop).join(sq,sq.c.max_hop==Route.hop_id)
 q.all()

join takes tuples in this form:

join((sq,sq.c.max_hop==Route.hop_id))





[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-24 Thread Gregg Lind

Thank you!  That tuple thing was a fail on my part, clearly.

Doing it exactly as you describe still doesn't get things to be, for
lack of a better term, correlated.

This, however, achieves what I want:

session.query(Route,sq.c.max_hop).join((sq,
and_(Route.hop_id==sq.c.max_hop,
Route.ts==sq.c.ts,Route.startpoint==sq.c.startpoint,Route.target==sq.c.target))).all()

This seems no different than the filter based approach outlined in the
initial code.   Is this not the point of correlated sub queries, or am
I missing something?  If so, how do I achieve it?

Thanks again, for all the help, and for making SqlA such a great (and
powerful!) product.

Thanks!

GL

On Fri, Jul 24, 2009 at 11:59 AM, Michael Bayermike...@zzzcomputing.com wrote:

 Gregg Lind wrote:
 session.query(Route.ts,Route.startpoint,Route.target,func.max(Route.hop_id).label('max_hop'))\
         .group_by(Route.ts,Route.startpoint,Route.target).subquery()
     q =
 session.query(Route,sq.c.max_hop).join(sq,sq.c.max_hop==Route.hop_id)
     q.all()

 join takes tuples in this form:

 join((sq,sq.c.max_hop==Route.hop_id))


 





[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-23 Thread Michael Bayer

Gregg Lind wrote:

 I have read over
 http://www.sqlalchemy.org/docs/05/ormtutorial.html#using-subqueries
 and http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg11439.html,
 but I'm having trouble putting the pieces together.

 In the demo() below, I want to find the row in the database with the
 max for every unique combination of Route(target,startpoint,ts).  The
 code I have there *works*, but doesn't seem to use any subquery magic
 at all, nothing from 'correlated' subqueries.   What might I be
 missing?

I'm assuming you're using MySQL since the GROUP BY below doesn't
accommodate every column in the subquery (it would be rejected by most DBs).
You'll want to query each column individually that is part of what you are
grouping by.  I think you also need to use func.max() here and not
func.min().  The join of the subquery to the parent table is then probably
just on hop_id.  No correlation of the subquery is needed either, since you
are intersecting two complete sets together (all routes intersected with
all max-hop-id routes grouped by x, y, z).
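Condensing the pieces of this thread, a sketch of the shape being described, using the Route model and session from the original post quoted below; the join condition follows the version the thread eventually settles on:

from sqlalchemy import and_, func

# one row per (ts, startpoint, target) carrying the highest hop_id
sq = session.query(Route.ts, Route.startpoint, Route.target,
                   func.max(Route.hop_id).label('max_hop')) \
            .group_by(Route.ts, Route.startpoint, Route.target) \
            .subquery()

# join the full Route rows back against that grouped set
q = session.query(Route, sq.c.max_hop).join(
        (sq, and_(Route.ts == sq.c.ts,
                  Route.startpoint == sq.c.startpoint,
                  Route.target == sq.c.target,
                  Route.hop_id == sq.c.max_hop)))
rows = q.all()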



 Thanks!

 Gregg L.
 --
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import MetaData
 from sqlalchemy import Table, Column, Integer, String
 from sqlalchemy import create_engine
 from sqlalchemy.orm import sessionmaker
 from sqlalchemy import func

 Base = declarative_base()

 class Route(Base):
 __tablename__ = 'route'
 target = Column(String, nullable=False, primary_key=True)
 hop_id = Column(Integer, nullable=False, primary_key=True)
 ts = Column(Integer, nullable=False, primary_key=True)
 startpoint = Column(String, nullable=False, primary_key=True)
 # a bunch of other fields
 #data = Column(String, nullable=True, primary_key=False)
 #...
 def __repr__(self):
 return %s %s %s %s %(self.target, self.hop_id, self.ts,
 self.startpoint)

 connstring='sqlite:///:memory:'
 engine = create_engine(connstring, echo=False)
 session = sessionmaker(bind=engine, autoflush=False, autocommit=False)()
 Base.metadata.bind = engine
 Base.metadata.create_all()

 samples = [
 ('T1',1,1000,'S1'),
 ('T1',2,1000,'S1'),
 ('T1',3,1000,'S1'),
 ('T1',1,1000,'S2'),
 ('T1',2,1000,'S2'),
 ('T2',1,1000,'S1'),
 ('T2',2,1000,'S1'),
 ('T2',3,1000,'S1'),
 ('T2',4,1000,'S1'),
 ('T2',1,1500,'S1')]

 def demo():
 for t,h,ts,s in samples :
 session.add(Route(target=t,hop_id=h,ts=ts,startpoint=s))

 session.flush()
 session.commit()
 # row in the database with the max_hop for every unique
 combination of Route(target,startpoint,ts)
 sq = session.query(Route,func.min(Route.hop_id).label('max_hop'))
 sq = sq.group_by(Route.ts,Route.startpoint,Route.target).subquery()
 q = session.query(Route,sq.c.max_hop)
 q = q.filter(Route.target==sq.c.target)
 q = q.filter(Route.startpoint == sq.c.startpoint)
 q = q.filter(Route.hop_id == sq.c.hop_id)
 q.all()


 

 






[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-23 Thread Gregg Lind

On Thu, Jul 23, 2009 at 3:24 PM, Michael Bayermike...@zzzcomputing.com wrote:

 im assuming you're using MySQL since the GROUP BY below doesn't
 accommodate every column in the subquery (would be rejected by most DBs).

Corrected.  It was Sqlite, but good catch.

 youll want to query each column individually that is part of what you are
 grouping by.    i think you also need to use func.max() here and not
 func.min().

Yes, dumbpants on me there!

 the join of the subquery to parent table is then probably
 just on hop_id.   no correlation of subquery is needed either since you
 are intersecting two complete sets together (all routes intersected with
 all max hop id routes grouped by x, y, z).


How do I implement this join?  If I do this:

sq = 
session.query(Route.ts,Route.startpoint,Route.target,func.max(Route.hop_id).label('max_hop'))
sq = sq.group_by(Route.ts,Route.startpoint,Route.target).subquery()

then:

q = session.Query(Route,*sq.c).join(???)

What would that join be on?  Hop_id isn't in the subquery.

I don't mean to be dense, but I'm not quite getting your response.
Perhaps I don't understand what correlated subqueries in  SqlA are.
Is there is a reference that explains where they're used?




[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-23 Thread Michael Bayer


On Jul 23, 2009, at 5:20 PM, Gregg Lind wrote:


 How do I implement this join?  If I do this:

 sq = session.query(Route.ts, Route.startpoint, Route.target,
                    func.max(Route.hop_id).label('max_hop'))
 sq = sq.group_by(Route.ts, Route.startpoint, Route.target).subquery()

 then:

 q = session.Query(Route,*sq.c).join(???)

 What would that join be on?  Hop_id isn't in the subquery.


sq.c.max_hop






[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-23 Thread Michael Bayer


On Jul 23, 2009, at 8:43 PM, Gregg Lind wrote:


 Hm.  I appreciate the help, but something is clearly still failing  
 here.

 session.query(Route,*sq.c).join(sq.c.max_hop)
 ArgumentError: Can't find any foreign key relationships between
 'route' and 'max_hop'

 Maybe the filter based solution is just fine here :)

that's not the call sig for query.join() .   if youre dealing with SQL  
expression components, its join((selectable, onclause), ... ), so here  
join((sq, sq.c.max_hop==Route.hop)) .






[sqlalchemy] Re: Declarative, correlated subqueries

2009-07-23 Thread Gregg Lind

Hm.  I appreciate the help, but something is clearly still failing here.

 session.query(Route,*sq.c).join(sq.c.max_hop)
ArgumentError: Can't find any foreign key relationships between
'route' and 'max_hop'

Maybe the filter based solution is just fine here :)



On Thu, Jul 23, 2009 at 7:29 PM, Michael Bayermike...@zzzcomputing.com wrote:


 On Jul 23, 2009, at 5:20 PM, Gregg Lind wrote:


 How do I implement this join?  If I do this:

 sq = session.query(Route.ts, Route.startpoint, Route.target,
                    func.max(Route.hop_id).label('max_hop'))
 sq = sq.group_by(Route.ts, Route.startpoint, Route.target).subquery()

 then:

 q = session.Query(Route,*sq.c).join(???)

 What would that join be on?  Hop_id isn't in the subquery.


 sq.c.max_hop



 





[sqlalchemy] Re: declarative defer error

2009-06-02 Thread Michael Bayer

quirogaco wrote:

   File C:\Acappella_2Zero\thirdparty\sqlalchemy\orm\mapper.py, line
 1612, in _
 instance
 identitykey = identity_key(row)
   File C:\Acappella_2Zero\thirdparty\sqlalchemy\orm\mapper.py, line
 1553, in i
 dentity_key
 return (identity_class, tuple(row[column] for column in pk_cols))
   File C:\Acappella_2Zero\thirdparty\sqlalchemy\orm\mapper.py, line
 1553, in 
 genexpr
 return (identity_class, tuple(row[column] for column in pk_cols))
   File C:\Acappella_2Zero\thirdparty\sqlalchemy\engine\base.py, line
 1348, in
 __getitem__
 return self.__parent._get_col(self.__row, key)
   File C:\Acappella_2Zero\thirdparty\sqlalchemy\engine\base.py, line
 1609, in
 _get_col
 type_, processor, index = self._props[key]
   File C:\Acappella_2Zero\thirdparty\sqlalchemy\util.py, line 68, in
 __missing
 __
 self[key] = val = self.creator(key)
   File C:\Acappella_2Zero\thirdparty\sqlalchemy\engine\base.py, line
 1507, in
 fallback
 raise exc.NoSuchColumnError(Could not locate column in row for
 column '%s'
  % (str(key)))
 sqlalchemy.exc.NoSuchColumnError: Could not locate column in row for
 column 'd_
 users.name'


your code example doesn't illustrate this (and is also hard to understand
since it uses all kinds of methods with which nobody outside of your
organization could be familiar, such as
Fdb.Unicode(...Adb.Primary_Key(True))), but the stack trace reveals that
"name" is considered to be a primary key by the mapper.   Primary key
columns can't be deferred.
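For reference, deferring works on non-key columns; a small sketch loosely modelled on the d_users table in the traceback (the bio column is invented for illustration):

from sqlalchemy import Column, String, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import deferred

Base = declarative_base()

class User(Base):
    __tablename__ = 'd_users'
    # primary key columns are always loaded and cannot be deferred
    name = Column(String(50), primary_key=True)
    # a large, non-key column is a reasonable candidate for deferral
    bio = deferred(Column(Text))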




[sqlalchemy] Re: Declarative base -- only add a constraint if particular db engine

2009-05-29 Thread Michael Bayer

Gregg Lind wrote:

 I use declarative base for defining classes.

 I have a constraint that only works in Postgres.  How do I declare
 that constraint lowername_check only if the session is going
 postgres (and not to sqlite, for example).

 pg_only_constraint = CheckConstraint(lowername !~
 '[[:upper:]]',name='lowername_check'),
 class Data(Base):
 __tablename__ = 'Data'
 lowername=Column(Unicode, nullable=False)
 __table_args__ = (
 pg_only_constraint,  {}
 )



The cleanest way is to use the schema.DDL() construct which can filter
against various backends, but requires that you spell out the constraint
explicitly:

DDL(CREATE CONSTRAINT , on=postgres).execute_at('after-create',
Data.__table__)

Alternatively, if you want to stick with the CheckConstraint object you
can create a function create_pg_constraints() which is called at the
point your app calls create_engine(), that contains all PG specific
constructs - the function would be called based on engine.dialect.name ==
postgres.

We have a more flexible architecture in 0.6 for this sort of thing and I
think if we add an AddConstraint() construct there and also move most of
DDL()'s execute-at and on functionality into the base DDLElement class,
that would enable both constructs to be combined together as in
AddConstraint(CheckConstraint(...args...),
on=postgres)).execute_at('after-create', Data.__table__).
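A sketch of that second alternative, checking the dialect name before attaching the constraint; create_pg_constraints and the id column are illustrative additions, not part of the original code:

from sqlalchemy import CheckConstraint, Column, Integer, Unicode, create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Data(Base):
    __tablename__ = 'data'
    id = Column(Integer, primary_key=True)
    lowername = Column(Unicode, nullable=False)

def create_pg_constraints(engine):
    # attach the PG-only regex CHECK only when actually talking to postgres
    if engine.dialect.name == 'postgres':
        Data.__table__.append_constraint(
            CheckConstraint("lowername !~ '[[:upper:]]'",
                            name='lowername_check'))

engine = create_engine('sqlite:///:memory:')   # or a postgres:// URL
create_pg_constraints(engine)
Base.metadata.create_all(engine)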





 Thanks!

 Gregg Lind

 






[sqlalchemy] Re: Declarative base -- only add a constraint if particular db engine

2009-05-29 Thread Gregg Lind

As always, thank you for the complete, exhaustive answer.  This
particular thing is definitely an edge case, and rather non-obvious,
so thank you for walking me through it.

Either of those is clean enough for me!

Is there a more proper / general way to describe the problem, so
Google can make this answer easier to find?

Gregg



On Fri, May 29, 2009 at 12:10 PM, Michael Bayer
mike...@zzzcomputing.com wrote:

 Gregg Lind wrote:

 I use declarative base for defining classes.

 I have a constraint that only works in Postgres.  How do I declare
 that constraint lowername_check only if the session is going
 postgres (and not to sqlite, for example).

 pg_only_constraint = CheckConstraint(lowername !~
 '[[:upper:]]',name='lowername_check'),
 class Data(Base):
     __tablename__ = 'Data'
     lowername=Column(Unicode, nullable=False)
     __table_args__ = (
         pg_only_constraint,  {}
         )



 The cleanest way is to use the schema.DDL() construct which can filter
 against various backends, but requires that you spell out the constraint
 explicitly:

 DDL(CREATE CONSTRAINT , on=postgres).execute_at('after-create',
 Data.__table__)

 Alternatively, if you want to stick with the CheckConstraint object you
 can create a function create_pg_constraints() which is called at the
 point your app calls create_engine(), that contains all PG specific
 constructs - the function would be called based on engine.dialect.name ==
 postgres.

 We have a more flexible architecture in 0.6 for this sort of thing and I
 think if we add an AddConstraint() construct there and also move most of
 DDL()'s execute-at and on functionality into the base DDLElement class,
 that would enable both constructs to be combined together as in
 AddConstraint(CheckConstraint(...args...),
 on=postgres)).execute_at('after-create', Data.__table__).





 Thanks!

 Gregg Lind

 



 





[sqlalchemy] Re: Declarative base -- only add a constraint if particular db engine

2009-05-29 Thread Michael Bayer

Gregg Lind wrote:

 I used the DDL style

 DDL('''ALTER TABLE data ADD CONSTRAINT lowername_check  CHECK
 (lowername !~ '[[\:upper\:]]')''',
 on=postgres).execute_at('after-create',Data.__table__)

 and now my print_schema method (based on
 http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring)
  breaks (on PG only, because of the DDL), with this error:

 TypeError: unsupported operand type(s) for +: '_TextClause' and 'str'

 I escaped the colons in the DDL.   Workarounds?

escape them with \\: or use r'\:'



 Gregg


 Code:

 def print_schema(T=postgres):
 ''' print print_schema will print the schema in use '''
 global Base
 from StringIO import StringIO
 buf = StringIO()
 print '%s://' % T
 engine = create_engine('%s://' % T, strategy='mock',
 executor=lambda s, p='': buf.write(s + p))
 Base.metadata.create_all(engine)
 return buf.getvalue()



 On Fri, May 29, 2009 at 12:27 PM, Gregg Lind gregg.l...@gmail.com wrote:
 As always, thank you for the complete, exhaustive answer.  This
 particular thing is definitely and edge case, and rather non-obvious,
 so thank you for walking me through it.

 Either of those are clean enough for me!

 Is there are more proper / general way to describe the problem, so
 google and make this answer easier to find?

 Gregg



 On Fri, May 29, 2009 at 12:10 PM, Michael Bayer
 mike...@zzzcomputing.com wrote:

 Gregg Lind wrote:

 I use declarative base for defining classes.

 I have a constraint that only works in Postgres.  How do I declare
 that constraint lowername_check only if the session is going
 postgres (and not to sqlite, for example).

 pg_only_constraint = CheckConstraint(lowername !~
 '[[:upper:]]',name='lowername_check'),
 class Data(Base):
     __tablename__ = 'Data'
     lowername=Column(Unicode, nullable=False)
     __table_args__ = (
         pg_only_constraint,  {}
         )



 The cleanest way is to use the schema.DDL() construct which can filter
 against various backends, but requires that you spell out the
 constraint
 explicitly:

 DDL(CREATE CONSTRAINT , on=postgres).execute_at('after-create',
 Data.__table__)

 Alternatively, if you want to stick with the CheckConstraint object you
 can create a function create_pg_constraints() which is called at the
 point your app calls create_engine(), that contains all PG specific
 constructs - the function would be called based on engine.dialect.name
 ==
 postgres.

 We have a more flexible architecture in 0.6 for this sort of thing and
 I
 think if we add an AddConstraint() construct there and also move most
 of
 DDL()'s execute-at and on functionality into the base DDLElement
 class,
 that would enable both constructs to be combined together as in
 AddConstraint(CheckConstraint(...args...),
 on=postgres)).execute_at('after-create', Data.__table__).





 Thanks!

 Gregg Lind

 



 



 






[sqlalchemy] Re: Declarative base -- only add a constraint if particular db engine

2009-05-29 Thread Gregg Lind

Alas, that doesn't seem to matter or help.

Even this statement causes the same issue.  Odd.  Must not be related
to the colons, alas.

DDL(r''' ''', on=postgres).execute_at('after-create',Data.__table__)



On Fri, May 29, 2009 at 3:08 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 Gregg Lind wrote:

 I used the DDL style

 DDL('''ALTER TABLE data ADD CONSTRAINT lowername_check  CHECK
 (lowername !~ '[[\:upper\:]]')''',
         on=postgres).execute_at('after-create',Data.__table__)

 and now my print_schema method (based on
 http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring)
  breaks (on PG only, because of the DDL), with this error:

 TypeError: unsupported operand type(s) for +: '_TextClause' and 'str'

 I escaped the colons in the DDL.   Workarounds?

 escape them with \\: or use r'\:'



 Gregg


 Code:

 def print_schema(T=postgres):
     ''' print print_schema will print the schema in use '''
     global Base
     from StringIO import StringIO
     buf = StringIO()
     print '%s://' % T
     engine = create_engine('%s://' % T, strategy='mock',
 executor=lambda s, p='': buf.write(s + p))
     Base.metadata.create_all(engine)
     return buf.getvalue()



 On Fri, May 29, 2009 at 12:27 PM, Gregg Lind gregg.l...@gmail.com wrote:
 As always, thank you for the complete, exhaustive answer.  This
 particular thing is definitely and edge case, and rather non-obvious,
 so thank you for walking me through it.

 Either of those are clean enough for me!

 Is there are more proper / general way to describe the problem, so
 google and make this answer easier to find?

 Gregg



 On Fri, May 29, 2009 at 12:10 PM, Michael Bayer
 mike...@zzzcomputing.com wrote:

 Gregg Lind wrote:

 I use declarative base for defining classes.

 I have a constraint that only works in Postgres.  How do I declare
 that constraint lowername_check only if the session is going
 postgres (and not to sqlite, for example).

 pg_only_constraint = CheckConstraint(lowername !~
 '[[:upper:]]',name='lowername_check'),
 class Data(Base):
     __tablename__ = 'Data'
     lowername=Column(Unicode, nullable=False)
     __table_args__ = (
         pg_only_constraint,  {}
         )



 The cleanest way is to use the schema.DDL() construct which can filter
 against various backends, but requires that you spell out the
 constraint
 explicitly:

 DDL(CREATE CONSTRAINT , on=postgres).execute_at('after-create',
 Data.__table__)

 Alternatively, if you want to stick with the CheckConstraint object you
 can create a function create_pg_constraints() which is called at the
 point your app calls create_engine(), that contains all PG specific
 constructs - the function would be called based on engine.dialect.name
 ==
 postgres.

 We have a more flexible architecture in 0.6 for this sort of thing and
 I
 think if we add an AddConstraint() construct there and also move most
 of
 DDL()'s execute-at and on functionality into the base DDLElement
 class,
 that would enable both constructs to be combined together as in
 AddConstraint(CheckConstraint(...args...),
 on=postgres)).execute_at('after-create', Data.__table__).





 Thanks!

 Gregg Lind

 



 



 



 





[sqlalchemy] Re: Declarative base -- only add a constraint if particular db engine

2009-05-29 Thread Michael Bayer

Gregg Lind wrote:

 Alas, that doesn't seem to matter or help.

 Even this statement causes the same issue.  Odd.  Must not be related
 to the colons, alas.

 DDL(r''' ''', on=postgres).execute_at('after-create',Data.__table__)

didnt realize you're printing with mock.  its:

buf.write(str(s) + p)







 On Fri, May 29, 2009 at 3:08 PM, Michael Bayer mike...@zzzcomputing.com
 wrote:

 Gregg Lind wrote:

 I used the DDL style

 DDL('''ALTER TABLE data ADD CONSTRAINT lowername_check  CHECK
 (lowername !~ '[[\:upper\:]]')''',
         on=postgres).execute_at('after-create',Data.__table__)

 and now my print_schema method (based on
 http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring)
  breaks (on PG only, because of the DDL), with this error:

 TypeError: unsupported operand type(s) for +: '_TextClause' and 'str'

 I escaped the colons in the DDL.   Workarounds?

 escape them with \\: or use r'\:'



 Gregg


 Code:

 def print_schema(T=postgres):
     ''' print print_schema will print the schema in use '''
     global Base
     from StringIO import StringIO
     buf = StringIO()
     print '%s://' % T
     engine = create_engine('%s://' % T, strategy='mock',
 executor=lambda s, p='': buf.write(s + p))
     Base.metadata.create_all(engine)
     return buf.getvalue()



 On Fri, May 29, 2009 at 12:27 PM, Gregg Lind gregg.l...@gmail.com
 wrote:
 As always, thank you for the complete, exhaustive answer.  This
 particular thing is definitely and edge case, and rather non-obvious,
 so thank you for walking me through it.

 Either of those are clean enough for me!

 Is there are more proper / general way to describe the problem, so
 google and make this answer easier to find?

 Gregg



 On Fri, May 29, 2009 at 12:10 PM, Michael Bayer
 mike...@zzzcomputing.com wrote:

 Gregg Lind wrote:

 I use declarative base for defining classes.

 I have a constraint that only works in Postgres.  How do I declare
 that constraint lowername_check only if the session is going
 postgres (and not to sqlite, for example).

 pg_only_constraint = CheckConstraint(lowername !~
 '[[:upper:]]',name='lowername_check'),
 class Data(Base):
     __tablename__ = 'Data'
     lowername=Column(Unicode, nullable=False)
     __table_args__ = (
         pg_only_constraint,  {}
         )



 The cleanest way is to use the schema.DDL() construct which can
 filter
 against various backends, but requires that you spell out the
 constraint
 explicitly:

 DDL(CREATE CONSTRAINT ,
 on=postgres).execute_at('after-create',
 Data.__table__)

 Alternatively, if you want to stick with the CheckConstraint object
 you
 can create a function create_pg_constraints() which is called at
 the
 point your app calls create_engine(), that contains all PG specific
 constructs - the function would be called based on
 engine.dialect.name
 ==
 postgres.

 We have a more flexible architecture in 0.6 for this sort of thing
 and
 I
 think if we add an AddConstraint() construct there and also move most
 of
 DDL()'s execute-at and on functionality into the base DDLElement
 class,
 that would enable both constructs to be combined together as in
 AddConstraint(CheckConstraint(...args...),
 on=postgres)).execute_at('after-create', Data.__table__).





 Thanks!

 Gregg Lind

 



 



 



 


 






[sqlalchemy] Re: Declarative base -- only add a constraint if particular db engine

2009-05-29 Thread Michael Bayer

mock is on the way out as a general use tool.



Gregg Lind wrote:

 You got me there!   Updating the FAQ on it would fix the issue for others.

 For reference:

 ##
 from sqlalchemy import *
 from sqlalchemy.schema import DDL
 from sqlalchemy.ext.declarative import declarative_base

 Base = declarative_base()

 class Data(Base):
 __tablename__ = 'data'
 f1 = Column(Integer, nullable=False, primary_key=True)
 f2 = Column(Integer, nullable=False, primary_key=True)

 DDL(r''' ''', on=postgres).execute_at('after-create',Data.__table__)


 def print_schema_wrong(db):
 from StringIO import StringIO
 buf = StringIO()
 engine = create_engine('%s://' % db, strategy='mock',
 executor=lambda s, p='': buf.write(s + p))
 #meta = MetaData()
 meta = Base.metadata
 meta.create_all(engine)
 print buf.getvalue()


 def print_schema(db):
 from StringIO import StringIO
 buf = StringIO()
 engine = create_engine('%s://' % db, strategy='mock',
 executor=lambda s, p='': buf.write(str(s) + p))
 #meta = MetaData()
 meta = Base.metadata
 meta.create_all(engine)
 print buf.getvalue()


 # fine
 print_schema('sqlite')
 print_schema('postgres')
 print_schema_wrong('sqlite')

 # will throw an error
 print_schema_wrong('postgres')

 ##33


 On Fri, May 29, 2009 at 3:46 PM, Michael Bayer mike...@zzzcomputing.com
 wrote:

 Gregg Lind wrote:

 Alas, that doesn't seem to matter or help.

 Even this statement causes the same issue.  Odd.  Must not be related
 to the colons, alas.

 DDL(r''' ''', on=postgres).execute_at('after-create',Data.__table__)

 didnt realize you're printing with mock.  its:

 buf.write(str(s) + p)







 On Fri, May 29, 2009 at 3:08 PM, Michael Bayer
 mike...@zzzcomputing.com
 wrote:

 Gregg Lind wrote:

 I used the DDL style

 DDL('''ALTER TABLE data ADD CONSTRAINT lowername_check  CHECK
 (lowername !~ '[[\:upper\:]]')''',
         on=postgres).execute_at('after-create',Data.__table__)

 and now my print_schema method (based on
 http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring)
  breaks (on PG only, because of the DDL), with this error:

 TypeError: unsupported operand type(s) for +: '_TextClause' and 'str'

 I escaped the colons in the DDL.   Workarounds?

 escape them with \\: or use r'\:'



 Gregg


 Code:

 def print_schema(T=postgres):
     ''' print print_schema will print the schema in use '''
     global Base
     from StringIO import StringIO
     buf = StringIO()
     print '%s://' % T
     engine = create_engine('%s://' % T, strategy='mock',
 executor=lambda s, p='': buf.write(s + p))
     Base.metadata.create_all(engine)
     return buf.getvalue()



 On Fri, May 29, 2009 at 12:27 PM, Gregg Lind gregg.l...@gmail.com
 wrote:
 As always, thank you for the complete, exhaustive answer.  This
 particular thing is definitely and edge case, and rather
 non-obvious,
 so thank you for walking me through it.

 Either of those are clean enough for me!

 Is there are more proper / general way to describe the problem, so
 google and make this answer easier to find?

 Gregg



 On Fri, May 29, 2009 at 12:10 PM, Michael Bayer
 mike...@zzzcomputing.com wrote:

 Gregg Lind wrote:

 I use declarative base for defining classes.

 I have a constraint that only works in Postgres.  How do I declare
 that constraint lowername_check only if the session is going
 postgres (and not to sqlite, for example).

 pg_only_constraint = CheckConstraint(lowername !~
 '[[:upper:]]',name='lowername_check'),
 class Data(Base):
     __tablename__ = 'Data'
     lowername=Column(Unicode, nullable=False)
     __table_args__ = (
         pg_only_constraint,  {}
         )



 The cleanest way is to use the schema.DDL() construct which can
 filter
 against various backends, but requires that you spell out the
 constraint
 explicitly:

 DDL(CREATE CONSTRAINT ,
 on=postgres).execute_at('after-create',
 Data.__table__)

 Alternatively, if you want to stick with the CheckConstraint object
 you
 can create a function create_pg_constraints() which is called at
 the
 point your app calls create_engine(), that contains all PG
 specific
 constructs - the function would be called based on
 engine.dialect.name
 ==
 postgres.

 We have a more flexible architecture in 0.6 for this sort of thing
 and
 I
 think if we add an AddConstraint() construct there and also move
 most
 of
 DDL()'s execute-at and on functionality into the base DDLElement
 class,
 that would enable both constructs to be combined together as in
 AddConstraint(CheckConstraint(...args...),
 on=postgres)).execute_at('after-create', Data.__table__).





 Thanks!

 Gregg Lind

 



 



 



 


 



 


 




[sqlalchemy] Re: declarative and __table_args__ I must be missing something simple.

2009-04-08 Thread Wayne Witzel

Yep, there it is. Stupidly simple. I dug up some old known-working source;
a diff and a grep later, I found the cause.

__table_args__ needs to be given a tuple whose last element is a (possibly
empty) dictionary of table keyword arguments, like so.

__table_args__ = (ForeignKeyConstraint(['parent_id', 'parent_ref'],
['parent.id', 'parent.ref']), {})

Now all is well, sorry for the ML clutter. I am face palming in 3, 2,
1 
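Putting the two snippets together, a sketch of the corrected declarative mapping (Child placed first so the relation can reference it directly):

from sqlalchemy import Column, ForeignKeyConstraint, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Child(Base):
    __tablename__ = 'child'
    # a tuple whose last element is a (possibly empty) dict of table kwargs
    # is what declarative expects here
    __table_args__ = (
        ForeignKeyConstraint(['parent_id', 'parent_ref'],
                             ['parent.id', 'parent.ref']),
        {},
    )
    parent_id = Column(Integer, primary_key=True)
    parent_ref = Column(Integer, nullable=False)
    content = Column(String(10))

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    ref = Column(Integer, primary_key=True)
    children = relation(Child, lazy=False)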

On Apr 8, 4:49 pm, Wayne Witzel wwitz...@gmail.com wrote:
 I assume I am over looking some simple thing, but I just can't seem to
 find it. Thanks for the assist, I have palms open ready for face
 planting.

 Using a class and table with orm.mapper()

 class Child(object):
     pass
 child_table = Table('child', meta.metadata,
         Column('parent_id', Integer, primary_key=True),
         Column('parent_ref', Integer, nullable=False),
         Column('content', String(10)),
         ForeignKeyConstraint(['parent_id', 'parent_ref'],
 ['parent.id', 'parent.ref'])
 )
 orm.mapper(Child, child_table)

 class Parent(object):
     pass
 parent_table = Table('parent', meta.metadata,
         Column('id', Integer, primary_key=True),
         Column('ref', Integer, primary_key=True)
 )
 orm.mapper(Parent, parent_table, properties={
         'children':relation(Child, lazy=False)

 })

 Produces the following create

 2009-04-08 16:36:54,319 INFO sqlalchemy.engine.base.Engine.0x...a6b0
 CREATE TABLE parent (
         id INTEGER NOT NULL,
         ref INTEGER NOT NULL,
         PRIMARY KEY (id, ref)
 )
 2009-04-08 16:36:54,319 INFO sqlalchemy.engine.base.Engine.0x...a6b0
 {}
 2009-04-08 16:36:54,454 INFO sqlalchemy.engine.base.Engine.0x...a6b0
 COMMIT
 2009-04-08 16:36:54,456 INFO sqlalchemy.engine.base.Engine.0x...a6b0
 CREATE TABLE child (
         parent_id INTEGER NOT NULL,
         parent_ref INTEGER NOT NULL,
         content VARCHAR(10),
         PRIMARY KEY (parent_id),
          FOREIGN KEY(parent_id, parent_ref) REFERENCES parent (id, ref)
 )

 Using what I believe is the exact same thing with declarative produces
 the creates minus the composite foreign key and then of course is
 unable to establish the relation.

 class Parent(Base):
     __tablename__ = 'parent'
     id = Column(Integer, primary_key=True)
     ref = Column(Integer, primary_key=True)
     children = relation(Child, lazy=False)

 class Child(Base):
     __tablename__ = 'child'
     __table_args__ = ForeignKeyConstraint(['parent_id','parent_ref'],
 ['parent.id', 'parent.ref'])

     parent_id = Column(Integer, primary_key=True)
     parent_ref = Column(Integer, nullable=False)
     content = Column(String(10))

 The create output is

 2009-04-08 16:47:08,331 INFO sqlalchemy.engine.base.Engine.0x...a710
 CREATE TABLE child (
         parent_id INTEGER NOT NULL,
         parent_ref INTEGER NOT NULL,
         content VARCHAR(10),
         PRIMARY KEY (parent_id)
 )
 2009-04-08 16:47:08,331 INFO sqlalchemy.engine.base.Engine.0x...a710
 {}
 2009-04-08 16:47:08,464 INFO sqlalchemy.engine.base.Engine.0x...a710
 COMMIT
 2009-04-08 16:47:08,466 INFO sqlalchemy.engine.base.Engine.0x...a710
 CREATE TABLE parent (
         id INTEGER NOT NULL,
         ref INTEGER NOT NULL,
         PRIMARY KEY (id, ref)
 )



[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-25 Thread Christiaan Putter

Hi,

Right now I'm really only using multiple threads for speeding up
downloads over the internet (which I need to update the data).
Actually commiting the data is still done via the gui's thread
session.  At some point speeding up the algorithms would be nice, a
concurrent solution would be ideal seeing as how the calculations for
each record are independent.  Though I might look into RPC for that
instead to get actual speed ups.  That will probably solve the thread
/ session problem in any case.

The unit testing has revealed some serious problems, mainly to do with
traits and the declarative extension not knowing how to cooperate (e.g.
traits provides a mechanism to set up default values, though the
instrumented attribute obviously overrides that and returns what sql
says it should).  I really don't want to set up the mapper from a
class definition myself, declarative makes that much simpler so i
don't want to give up on that.  So I'll have to create my own
instrumented attribute / traits subclass.  Either a trait that wraps an
instrumented attribute, or an instrumented attribute that wraps a
trait.  I need to somehow provide both traits and SA with the
functionality they need in a single attribute on the instance.  What
do you think will be easiest?

That of course raises the question of what should take precedence.
The value provided by traits or the one from the database.  Or
whichever is not None.
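
A rough sketch, in plain Python with illustrative names only, of the
"trait that wraps an instrumented attribute" option and of the
"whichever is not None" precedence rule:

class DefaultingAttribute(object):
    """Descriptor that defers to a wrapped (e.g. SQLAlchemy-instrumented)
    attribute and falls back to a trait-style default when it is None."""

    def __init__(self, wrapped, default):
        self.wrapped = wrapped      # e.g. the InstrumentedAttribute
        self.default = default      # the default the trait would supply

    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        value = self.wrapped.__get__(obj, cls)
        # the database value wins whenever it is present
        return self.default if value is None else value

    def __set__(self, obj, value):
        self.wrapped.__set__(obj, value)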

I'll look into the mapper extensions.

I never thought some simple unit testing would cause so many headaches :-)

I'm talking to someone at enthought to maybe integrate that into their
sandbox for now.  I'm sure once someone else with more experience with
the traits framework looks at it we'll be able to figure out what the
simplest way to use sql with traits would be.   And then the
implementation should become clearer as well.

Hope you're having a nice day,
Christian




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-25 Thread az

ah. when u don't have an explicit spec, the testcases are the real 
spec. so make sure u really cover all them funny cases (-:

i had the default_values problem too, and to solve it i have split the 
attr.access into two layers: one that sits below SA (as a fake dict), 
and one thin that sits on top of it. First one handles basic stuff / 
data storage, latter does default_values and lazy-autosetting it (in 
terms of static_type/). Look for AutoSetter and related in 
dbcook/usage/static_type/sa2static.py

There is subtle difference in whether the traits/descriptor will 
auto-set the default value or the SA/sql-server, make sure u 
understand it. First case SA/sql would never know about missing 
value.

as of precedence... i have one declaration and i would not care who 
sets the values as long as it is what's in the declaration.

you choose yours
svilen

On Wednesday 25 March 2009 14:47:35 Christiaan Putter wrote:
 Hi,

 Right now I'm really only using multiple threads for speeding up
 downloads over the internet (which I need to update the data).
 Actually commiting the data is still done via the gui's thread
 session.  At some point speeding up the algorithms would be nice, a
 concurrent solution would be ideal seeing as how the calculations
 for each record are independent.  Though I might look into RPC for
 that instead to get actual speed ups.  That will probably solve the
 thread / session problem in any case.

 The unit testing has revealed some serious problems.  Mainly todo
 with traits and the declarative extension not knowing how to
 cooperate (eg. traits provides a mechanism to set up default
 values, though the instrumented attribute obviously overrides that
 and returns what sql says it should).  I really don't want to set
 up the mapper from a class definition myself, declarative makes
 that much simpler so i don't want to give up on that.  So I'll have
 to create my own intrumented attribute / traits subclass.  Either a
 trait that wraps an instrumented attribute, or an instrumented
 attribute that wraps a trait.  I need to somehow provide both
 traits and SA with the functionality they need in a single
 attribute on the instance.  What do you think will be easiest?

 That of course raises the question of what should take precedence.
 The value provided by traits or the one from the database.  Or
 whichever is not None.

 I'll look into the mapper extensions.

 I never thought some simple unit testing would cause so many
 headaches :-)

 I'm talking to someone at enthought to maybe integrate that into
 their sandbox for now.  I'm sure once someone else with more
 experience with the traits framework looks at it we'll be able to
 figure out what the simplest way to use sql with traits would be.  
 And then the implementation should become clearer as well.

 Hope you're having a nice day,
 Crhistian

 





[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-23 Thread az

let me think about it.
as i see it u want to have runtime-invented data-only extensions to 
some object - or is it data + some algo about it? would the algos be 
new and reside in the new thing or they're just there and used?

imo this should be done via joins ala pyprotocols; inheritance is more 
about constant structures while u want same base object to grow and 
grow. i think SA.query(A,B1,B2,...).join(A.b1).join(A.b2)... gives u 
such notion (made as A.b1 pointing to B1 etc)  - try? u'll get tuples 
of (A,B1,B2,B3...) objects there

hmm, one suggestion - try doing all this while forgetting about traits 
and meta-stuff first. once u get the dynamic=structure persistency 
working, add meta-stuff on top.

ciao
svil

On Sunday 22 March 2009 23:31:49 Christiaan Putter wrote:
 Hi,

 Maybe some better explanation is required from my side for you to
 be able to help me better.

 My app is basically for doing some plotting, analysis and filtering
 of securities on the stock market.

 The main class that does most of the work is
 Security(HasTraitsORM), where HasTraitsORM is my version of
 declarative's Base class which takes care of turning traits
 (attributes) into correctly typed Columns for declarative to set up
 the mapping.

 The Security class also handles retrieving historical data from an
 hdf5 file which in my opinion is better suited to storing such data
 in an hierarchical structure (with added transparent compression).

 SQLAlchemy comes into the picture for running queries mostly.  And
 storing precomputed values.

 For instance, using the above setup, I can add a child class
 SMAFields at runtime which computes some simple moving averages of
 closing prices.  Once computed this gets stored in its own table
 with columns sma10, sma20, sma50, etc. for the different periods.

 Using a combination of traits and SA any Security instance then has
 some new attributes, sec.sma10 for instance.  Which is either
 retrieved from the SMAFields table via SA or computed from methods
 on the SMAFields class if needed.

 Though the really useful bit is querying:
 sess.query(Security).filter(Security.sma10 = Security.sma20),
 which behaves as one would expect.

 The idea being that the user can easily add new functions for
 computing values from a security's data, and after the new table is
 created and filled with the results, he can run normal sql queries
 on it.  Any other child classes that get added also have access to
 the SMAFields class's fields through it's Security relation.  Thus
 a user can add as many child classes with as many columns as they
 want, and have access to all the fields they've already set up. 
 I'm not sure something like that is possible through inheritance?

 The new fields can then be added to the table viewer in the gui and
 basically be used as if they were part of the Security class from
 the beginning, running sql queries being the most important
 feature.

 It mostly works, though the really ugly hacking of SA I've done to
 get it to work might not be perfect.

 Is there a more elegant way to this?  Is this even the best
 approach at allowing a user to add new behaviour to the
 application?

 I'm new to SA and have never used any SQL before either so I'm sure
 there must be better design patterns for something like this
 already.

 I hope it's clear what my intent is now.

 Thanks again for you help svilen.

 Regards,
 Christian

 





[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-23 Thread Christiaan Putter

Hi,

You're quite right, the join was the tricky bit at first.  Basically I
construct an outer join over all the 'dynamic' extension classes.
Here's an extract:

def _get_selectable(self):
    j = None
    for key, val in HasTraitsORM._decl_class_registry.iteritems():
        if not val.__table__.exists():
            continue
        fks = val.__table__.foreign_keys
        if Security.id not in [fk.column for fk in fks]:
            continue
        if j != None:
            j = j.outerjoin(val.__table__)
        else:
            j = Security.__table__.outerjoin(val.__table__)

    return j

It iterates over all known mapped classes, then constructs the outer
join only with those that have a Security.id foreign key constraint.
The returned join j then gets used during queries as such:

q = sess.query(Security).select_from(j)


The 'dynamically' added classes look something like this:

class SMAFields(SecurityFields):

    sma10 = Float(sqldb=True)

    def _get_sma10(self):
        ##  some algo ###
        return res


That code just resides in some text editor provided for the user where
he can change it, add new class etc.  Then from within the main app I
use exec to compile it. The base class sets up the mapping, foreign
keys, creates the table and also adds the sma10 column to the Security
class.  Which is why I can then:

data = q.filter(Security.sma10 >= 5)[:]

So even though the sma10 column isn't on the Security table itself,
through the foreign key constraint and the outer join, it behaves as
one would expect.

And the query only returns Security instances because I use
sess.query(Security), without any of the other added classes.

That all works quite well.  The problems I'm having right now:

1. figuring out what the best practice is for working with data using
multiple threads
- only have one session in the gui thread? other threads call
commit on this session using a lock?
    - expunge instances from gui sess before working on them, then
commit changes in new session?  then add them back to gui's session?

2. Creation of SMAFields records if they don't exist.  Right now I'm
just creating them by hand.  Though automatic creation when loading a
Security would be nice.
- tried with orm.reconstruct decorator on Security class, doesn't
seem to work.


Maybe a bit too complicated for what it achieves, but the ability to
define new fields with custom algorithms on the fly can be quite
useful for users.




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-23 Thread az

On Monday 23 March 2009 23:51:26 Christiaan Putter wrote:
 Hi,

 You're quite right, the join was the tricky bit at first. 
...

 The 'dynamically' added classes look something like this:

 class SMAFields(SecurityFields):

  sma10   = Float(sqldb=True)

  def _get_sma10(self):
##  some algo ###
return res


 That code just resides in some text editor provided for the user
 where he can change it, add new class etc.  Then from within the
 main app I use exec to compile it. The base class sets up the
 mapping, foreign keys, creates the table and also adds the sma10
 column to the Security class.  Which is why I can then:

 data = q.filter(Security.sma10 = 5)[:]

 So even though the sma10 column isn't on the Security table itself,
 through the foreign key constraint and the outer join, it behaves
 as one would expect.

 And the query only returns Security instances because I use
 sess.query(Security), without any of the other added classes.

 That all works quite well.  The problems I'm having right now:

 1. figuring out what the best practice is for working with data
 using multiple threads
 - only have one session in the gui thread? other threads call
 commit on this session using a lock?
 - expunge instances from gui sess before working on then, then
 commit changes in new session?  then add them back to gui's
 session?

 2. Creation of SMAFields records if they don't exist.  Right now
 I'm just creating them by hand.  Though automatic creation when
 loading a Security would be nice.
 - tried with orm.reconstruct decorator on Security class,
 doesn't seem to work.

can't really help u here...
for multithread, try whichever is easier to make _now_, and think 
about redoing it for real later... why multithread though?
reconstruct is when loading from db to memory. IMO u 
need on_save-like thing - see mapper extensions / session 
extensions.

 Maybe a bit too complicated for what it achieves, but the ability
 to define new fields with custom algorithms on the fly can be quite
 useful for users.
the dynamic ORM road is not really travelled too much... keep going. 
maybe put what u did somewhere to be visible? with _good_ description 
what u need from it...

ciao
svil




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread az

good on yer!
one thing that i sorta failed is making a clean set of tests for this.
if u aren't too far in the usage, save some major future headaches and 
do yourself a favour, make such test set. All the things u've tried 
has to be (simple) testcases - it will be the _spec_ of how the thing 
works / what u want from it. Anyone else on the same path could also 
use it then ;-)

i hope that once u do that u'll get an idea of where exactly u have 
failed and fix it.

as of expiration, it also did me some trouble. i have not stepped on 
that session-gone one but i can see how it happens. The (expected) 
life-times of SA-touched things aren't that obvious, and aren't 
documented. e.g. Objects living longer than session or in an extreme 
case, the mappers, might be troublesome.

what i can suggest is, put debug statements and run with echo=True and 
watch what goes on behind and how SA-activities intertwist with 
traits' /your ones.

ciao
svilen

On Sunday 22 March 2009 04:00:27 Christiaan Putter wrote:
 Hi svilen,

 Thanks for your advice, going through your code helped a lot in
 understanding how SA works.

 I've gotten traited classes to behave like classes generated by
 SA's declarative extension.  After a lot of stepping through code I
 realised the main problem was that SA removes values from an
 instance __dict__ once they have been expired.  Traits smartly (or
 not so smartly) picks this up and on the next access to those
 attributes and returns their default values.  Whereas SA would
 normally refresh the values from SQL.  This caused some serious
 headaches, and really strange behaviour.

 My solution was to the overwrite the class's __getattribute__ with:

  def __getattribute__(self, key):
      try:
          dict = super(HasTraitsORM, self).__getattribute__('__dict__')

          if ('_sa_state' in dict):
              state = dict['_sa_state']
              if key in state.manager.keys():
                  return state.get_impl(key).get(state)

      except Exception, e:
          print "Exception in SA getattr for %s: %s" % (key, e)
          pass

      return super(HasTraitsORM, self).__getattribute__(key)


 Which tries to first get an attribute from the instance's state
 (which will issue SQL if needed), and if that fails pass the
 request along to its super class where Traits will do it's magic if
 needed.

 This seems to work in the tests if been trying.  Most of the work
 is done by SA declarative, and I've just added a thin layer on top
 to get it to behave well with Traited classes and map traits to SA
 columns.

 One issue I'm still having though is that after commits all
 attributes get expired.  If you close the session the instance was
 attached to the data you just commited can't be accessed any more. 
 This is also the behaviour of plain declarative classes.  This is
 rather unintuitive I believe since once you've comitted an instance
 and closed the session you can't use the data you just set any
 more. In my case this means records being displayed in a gui get
 corrupted.

 Though what's strange is that if you access the attributes after
 the commit and before you close the session, and only then close
 the session, the values remain in the instance's __dict__ and the
 class behaves normally again.  Is this intended for some reason? 
 Is there a workaround?  Personally I don't think quantum physics
 should apply to the bits of my programs and thus merely observing
 them should not change their behaviour. From going through the
 source I found a 'dont_expire_missing' attribute on attribute
 implementations, though this does unfortunately not do what I'd
 like.

 So what I'm doing now is keeping a session open on the gui's
 thread. Which causes some problems when trying to modify said
 instances from other threads and commiting the new values.

 How's this usually done is SA?  Right now I've put a lock on the
 gui thread's session and allow other threads to issue a commit on
 that main thread if need be.  I can't imagine this is the best way
 to do it...

 Hope you're all having a great weekend,

 Christian

 2009/2/8  a...@svilendobrev.com:
  afaiknow these traits are like my own static_types, i.e.
  descriptors holding metadata and applying it to attribute access.
  i have been combining SA with static_type for more than 2 years
  now, since SA 3.0.
 
  The approach i did in the beginning was to replace the object's
  __dict__ by something smart that is static_type-aware. When
  InstrumentionManager framework came into place, i did not find it
  any different, as SA still uses the __dict__ for data access. The
  difference is that now my  __dict__ replacement is created once
  and not at every attr.access.
 
  i did suggest one patch about replaceing the SA's
  obj.__dict__.whatever usage with an explicit set of methods (so
  one knows what access to mime), and that was making SA 1-2%
  faster, but it wasn't accepted.
 
  basicaly now there's 

[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread Christiaan Putter

Hi Svilen,

Setting up tests is a very good idea, I'll start on some unit testing
immediately.  Definitely the best way to ensure behaviour remains
constant as you're working on the implementation; I was just too lazy to
do so up till now.

Speaking of laziness, I've noticed that setting lazy=False on a
relation does not affect refresh operations on the relation, it only
loads on reconstruction.  Having cascade='all' doesn't change this
either.  Is there some setting that I'm missing?

Another little issue I'm trying to sort out is extending a
class's attributes at run time; let's call the class Parent.  The need for
this is allowing users to add custom fields (in a separate table from
the parent) at run time to Parent.  My approach is letting the user
set up a new class (ChildX), which inherits Base (not Parent), with
the attributes and methods that compute said attributes during run
time.  Then I compile that using exec, and do some magic in its
metaclass.  Basically I add a relation to Parent with a backref to
ChildX and uselist=False on both sides.  Enthought then allows me to
add new traits to the Parent class that delegate their value to the
ChildX instance through the backref on Parent.  This is basically just
mapping the Parent class over several tables, with the ability to
extend the class on the fly.
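
A rough sketch of that wiring, with hypothetical names (the real ChildX
classes are generated at runtime from user code):

from sqlalchemy import Column, Integer, Float, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation, backref

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)

class ChildX(Base):
    __tablename__ = 'childx'
    id = Column(Integer, ForeignKey('parent.id'), primary_key=True)
    x = Column(Float)

    # one-to-one in both directions: Parent.childx and ChildX.parent
    parent = relation(Parent,
                      backref=backref('childx', uselist=False),
                      uselist=False)

# a plain delegating property stands in for the runtime-added trait
Parent.x = property(lambda self: self.childx.x if self.childx else None)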

It's mostly working.  One problem though is that on reconstructing a
Parent instance (or creating a new one) I have to have a mechanism
that checks if all the Child classes have instances related to this
record of Parent.  Not a big deal though.

Is there a better way to do this?  I'm doing some really ugly hacking
to the Parent's mapper at run time, seeing as Mapper doesn't have a
delete_property method.

Should I try using normal inheritance instead? Will I be able to
access ChildX.x from ChildY for instance?  And what about changing
ChildX's definition and running exec on it again?

Perhaps someone has done something similar before and can give me some pointers.

Thanks for all the advice up till now, it's been really useful.

Have a great day,
Christian


2009/3/22  a...@svilendobrev.com:

 good on yer!
 one thing that i sorta failed is making a clean set of tests for this.
 if u aren't too far in the usage, save some major future headaches and
 do yourself a favour, make such test set. All the things u've tried
 has to be (simple) testcases - it will be the _spec_ of how the thing
 works / what u want from it. Anyone else on the same path could also
 use it then ;-)

 i hope that once u do that u'll get an idea of where exactly u have
 failed and fix it.

 as of expiration, it also did me some trouble. i have not stepped on
 that session-gone one but i can see how it happens. The (expected)
 life-times of SA-touched things aren't that obvious, and aren't
 documented. e.g. Objects living longer than session or in an extreme
 case, the mappers, might be troublesome.

 what i can suggest is, put debug statements and run with echo=True and
 watch what goes on behind and how SA-activities intertwist with
 traits' /your ones.

 ciao
 svilen

 On Sunday 22 March 2009 04:00:27 Christiaan Putter wrote:
 Hi svilen,

 Thanks for your advice, going through your code helped a lot in
 understanding how SA works.

 I've gotten traited classes to behave like classes generated by
 SA's declarative extension.  After a lot of stepping through code I
 realised the main problem was that SA removes values from an
 instance __dict__ once they have been expired.  Traits smartly (or
 not so smartly) picks this up and on the next access to those
 attributes and returns their default values.  Whereas SA would
 normally refresh the values from SQL.  This caused some serious
 headaches, and really strange behaviour.

 My solution was to the overwrite the class's __getattribute__ with:

     def __getattribute__(self, key):
         try:
             dict = super(HasTraitsORM,
 self).__getattribute__('__dict__')

             if ('_sa_state' in dict):
                 state = dict['_sa_state']
                 if key in state.manager.keys():
                     return state.get_impl(key).get(state)

         except Exception, e:
             print Exception in SA getattr for %s: %s % (key, e)
             pass

         return super(HasTraitsORM, self).__getattribute__(key)


 Which tries to first get an attribute from the instance's state
 (which will issue SQL if needed), and if that fails pass the
 request along to its super class where Traits will do it's magic if
 needed.

 This seems to work in the tests if been trying.  Most of the work
 is done by SA declarative, and I've just added a thin layer on top
 to get it to behave well with Traited classes and map traits to SA
 columns.

 One issue I'm still having though is that after commits all
 attributes get expired.  If you close the session the instance was
 attached to the data you just commited can't be accessed any more.
 This is also the behaviour of plain 

[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread az

On Sunday 22 March 2009 21:17:15 Christiaan Putter wrote:
 Hi Svilen,

 Setting up tests is a very good idea, I'll start on some unit
 testing immediately.  Definitely the best way to insure behaviour
 remains constant as you're working on the implementation, was just
 to lazy to do so up till now.
i'm sure if u do the tests first trying to cover as many use cases as 
u can, u may even better understand what u really need... TDD of 
sorts as first feedback to requirements-analysis.

 Speaking of laziness, I've noticed that setting lazy=False on a
 relation does not effect refresh operations on the relation, it
 only loads on reconstruction.  Having cascade='all' doesn't change
 this either.  Is their some setting that I'm missing?
dunno.

 Another little issue I'm trying to get fixed out is extending a
 class's attributes at run time, let's call it Parent.  The need for
 this is allowing users to add custom fields (in a seperate table as
 the parent) at run time to Parent.  My approach is letting the user
 set up a new class (ChildX), which inherits  Base (not parent), 
 with the attributes and methods that compute said attributes during
 run time.  Then I compile that using exec, and and do some magic in
 its metaclass.  Basically I add a relation to Parent with backref
 to ChildX and uselist=False on both sides.  Enthought then allows
 me to add new traits to the Parent class that delegate their value
 the ChildX instance through the backref on Parent.  This is
 basically just mapping the Parent class over several tables, with
 the ability to extend the class on the fly.

 It's mostly working.  One problem though is that on reconstructing
 a Parent instance (or creating a new one) I have to have a
 mechanism that checks if all the Child classes have instances
 related to this record of Parent.  Not a gig deal though.

 Is their a better way to do this?  I'm doing some really ugly
 hacking to the Parent's mapper at run time, seeing as Mapper
 doesn't have a delete_property method.

 Should I try using normal inheritance instead? Will I be able to
 access ChildX.x from ChildY for instance?  And what about changing
 ChildX's definition and running exec on it again?

i don't really get the reason of why u do all this. 
wanna extend existing class runtime or want to add new class?
IMO just subclassing on-the-fly would be easier. 


svil

 Perhaps someone has done something similar before and can give me
 some pointers.

 Thanks for all the advice up till now, it's been really useful.

 Have a great day,
 Christian

 2009/3/22  a...@svilendobrev.com:
  good on yer!
  one thing that i sorta failed is making a clean set of tests for
  this. if u aren't too far in the usage, save some major future
  headaches and do yourself a favour, make such test set. All the
  things u've tried has to be (simple) testcases - it will be the
  _spec_ of how the thing works / what u want from it. Anyone else
  on the same path could also use it then ;-)
 
  i hope that once u do that u'll get an idea of where exactly u
  have failed and fix it.
 
  as of expiration, it also did me some trouble. i have not stepped
  on that session-gone one but i can see how it happens. The
  (expected) life-times of SA-touched things aren't that obvious,
  and aren't documented. e.g. Objects living longer than session or
  in an extreme case, the mappers, might be troublesome.
 
  what i can suggest is, put debug statements and run with
  echo=True and watch what goes on behind and how SA-activities
  intertwist with traits' /your ones.
 
  ciao
  svilen
 
  On Sunday 22 March 2009 04:00:27 Christiaan Putter wrote:
  Hi svilen,
 
  Thanks for your advice, going through your code helped a lot in
  understanding how SA works.
 
  I've gotten traited classes to behave like classes generated by
  SA's declarative extension.  After a lot of stepping through
  code I realised the main problem was that SA removes values from
  an instance __dict__ once they have been expired.  Traits
  smartly (or not so smartly) picks this up and on the next access
  to those attributes and returns their default values.  Whereas
  SA would normally refresh the values from SQL.  This caused some
  serious headaches, and really strange behaviour.
 
  My solution was to the overwrite the class's __getattribute__
  with:
 
      def __getattribute__(self, key):
          try:
              dict = super(HasTraitsORM,
  self).__getattribute__('__dict__')
 
              if ('_sa_state' in dict):
                  state = dict['_sa_state']
                  if key in state.manager.keys():
                      return state.get_impl(key).get(state)
 
          except Exception, e:
              print Exception in SA getattr for %s: %s % (key,
  e) pass
 
          return super(HasTraitsORM, self).__getattribute__(key)
 
 
  Which tries to first get an attribute from the instance's state
  (which will issue SQL if needed), and if that fails pass the
  request along to its 

[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread Christiaan Putter

Oh yeah, before I forget...

Regarding the object expiration:  The behaviour as it is now in SA is
fine I guess, seeing as the purpose of having an ORM is having
instances reflect their status in the database at all times.  No
session means no database and so the behaviour is going to be strange.

What I'm thinking of doing is using the class manager to set up a
separate dict for sql state, say obj.__sa_dict__.  And thus on
expiration __sa_dict__ gets cleared instead of obj.__dict__ . The
trick will be in getting attribute access to behave correctly (I'm not
really sure what 'correct' is yet).

I'm thinking:

1.  On initialization __dict__ should be copied to __sa_dict__
 - Easily done through the class mapper.
 - __dict__ is mostly empty on initialization though... ?

2.  If there is an active object session values should be retrieved
via the object's state class, thus using __sa_dict__
 - Can be done through the object's __getattribute__ method...

3.  When setting an attribute copy it to both dicts?  Not sure about this yet...

4.  On attribute expiration first copy the value to __dict__ so
everyone else still has access to it in case the session gets closed.
 - Makes a bit more sense than 3.
 - How to implement it though?

Thus once the session is closed we can continue working with the
instance and choose to either:

a.  merge it back into a new session.  Thus saving any changes we made
while the session was closed.

b.  get a new instance from a new session via
sess.query(Class).get(id) and thus discard changes made in the mean
time.
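
A minimal sketch of (a) and (b), assuming a sessionmaker named Session, a
mapped class User, an instance obj left over from a session that has since
been closed, and its primary key value obj_id (all hypothetical names):

new_sess = Session()

# (a) merge the detached, possibly modified instance into the new
#     session so its pending changes can be committed
merged = new_sess.merge(obj)
new_sess.commit()

# (b) or discard the local changes and reload a fresh copy by primary key
fresh = new_sess.query(User).get(obj_id)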


Does this make sense?  Is there an existing mechanism to do this already?

Or are there really important reasons never to work with instances
that aren't attached to any sessions? I'm thinking mainly that having
several copies of essentially the same data is not a good idea perhaps
and should be treated with care.


Some thoughts from anyone perhaps?

Y'all have a great week,

Christian




2009/3/22 Christiaan Putter ceput...@googlemail.com:
 Hi Svilen,

 Setting up tests is a very good idea, I'll start on some unit testing
 immediately.  Definitely the best way to insure behaviour remains
 constant as you're working on the implementation, was just to lazy to
 do so up till now.

 Speaking of laziness, I've noticed that setting lazy=False on a
 relation does not effect refresh operations on the relation, it only
 loads on reconstruction.  Having cascade='all' doesn't change this
 either.  Is their some setting that I'm missing?

 Another little issue I'm trying to get fixed out is extending a
 class's attributes at run time, let's call it Parent.  The need for
 this is allowing users to add custom fields (in a seperate table as
 the parent) at run time to Parent.  My approach is letting the user
 set up a new class (ChildX), which inherits  Base (not parent),  with
 the attributes and methods that compute said attributes during run
 time.  Then I compile that using exec, and and do some magic in its
 metaclass.  Basically I add a relation to Parent with backref to
 ChildX and uselist=False on both sides.  Enthought then allows me to
 add new traits to the Parent class that delegate their value the
 ChildX instance through the backref on Parent.  This is basically just
 mapping the Parent class over several tables, with the ability to
 extend the class on the fly.

 It's mostly working.  One problem though is that on reconstructing a
 Parent instance (or creating a new one) I have to have a mechanism
 that checks if all the Child classes have instances related to this
 record of Parent.  Not a gig deal though.

 Is their a better way to do this?  I'm doing some really ugly hacking
 to the Parent's mapper at run time, seeing as Mapper doesn't have a
 delete_property method.

 Should I try using normal inheritance instead? Will I be able to
 access ChildX.x from ChildY for instance?  And what about changing
 ChildX's definition and running exec on it again?

 Perhaps someone has done something similar before and can give me some 
 pointers.

 Thanks for all the advice up till now, it's been really useful.

 Have a great day,
 Christian


 2009/3/22  a...@svilendobrev.com:

 good on yer!
 one thing that i sorta failed is making a clean set of tests for this.
 if u aren't too far in the usage, save some major future headaches and
 do yourself a favour, make such test set. All the things u've tried
 has to be (simple) testcases - it will be the _spec_ of how the thing
 works / what u want from it. Anyone else on the same path could also
 use it then ;-)

 i hope that once u do that u'll get an idea of where exactly u have
 failed and fix it.

 as of expiration, it also did me some trouble. i have not stepped on
 that session-gone one but i can see how it happens. The (expected)
 life-times of SA-touched things aren't that obvious, and aren't
 documented. e.g. Objects living longer than session or in an extreme
 case, the mappers, might be troublesome.


[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread Christiaan Putter

Hi,

Maybe some better explanation is required from my side for you to be
able to help me better.

My app is basically for doing some plotting, analysis and filtering of
securities on the stock market.

The main class that does most of the work is Security(HasTraitsORM),
where HasTraitsORM is my version of declarative's Base class which
takes care of turning traits (attributes) into correctly typed Columns
for declarative to set up the mapping.

The Security class also handles retrieving historical data from an
hdf5 file which in my opinion is better suited to storing such data in
an hierarchical structure (with added transparent compression).

SQLAlchemy comes into the picture for running queries mostly.  And
storing precomputed values.

For instance, using the above setup, I can add a child class SMAFields
at runtime which computes some simple moving averages of closing
prices.  Once computed this gets stored in its own table with columns
sma10, sma20, sma50, etc. for the different periods.

Using a combination of traits and SA any Security instance then has
some new attributes, sec.sma10 for instance.  Which is either
retrieved from the SMAFields table via SA or computed from methods on
the SMAFields class if needed.

Though the really useful bit is querying:
sess.query(Security).filter(Security.sma10 >= Security.sma20), which
behaves as one would expect.

The idea being that the user can easily add new functions for
computing values from a security's data, and after the new table is
created and filled with the results, he can run normal sql queries on
it.  Any other child classes that get added also have access to the
SMAFields class's fields through its Security relation.  Thus a user
can add as many child classes with as many columns as they want, and
have access to all the fields they've already set up.  I'm not sure
something like that is possible through inheritance?

The new fields can then be added to the table viewer in the gui and
basically be used as if they were part of the Security class from the
beginning, running sql queries being the most important feature.

It mostly works, though the really ugly hacking of SA I've done to get
it to work might not be perfect.

Is there a more elegant way to do this?  Is this even the best approach
at allowing a user to add new behaviour to the application?

I'm new to SA and have never used any SQL before either so I'm sure
there must be better design patterns for something like this already.

I hope it's clear what my intent is now.

Thanks again for your help, svilen.

Regards,
Christian




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-21 Thread Christiaan Putter

Hi svilen,

Thanks for your advice, going through your code helped a lot in
understanding how SA works.

I've gotten traited classes to behave like classes generated by SA's
declarative extension.  After a lot of stepping through code I
realised the main problem was that SA removes values from an instance
__dict__ once they have been expired.  Traits smartly (or not so
smartly) picks this up and on the next access to those attributes
returns their default values, whereas SA would normally refresh the
values from SQL.  This caused some serious headaches, and really
strange behaviour.

My solution was to overwrite the class's __getattribute__ with:

def __getattribute__(self, key):
    try:
        dict = super(HasTraitsORM, self).__getattribute__('__dict__')

        if ('_sa_state' in dict):
            state = dict['_sa_state']
            if key in state.manager.keys():
                return state.get_impl(key).get(state)

    except Exception, e:
        print "Exception in SA getattr for %s: %s" % (key, e)
        pass

    return super(HasTraitsORM, self).__getattribute__(key)


Which tries to first get an attribute from the instance's state (which
will issue SQL if needed), and if that fails pass the request along to
its super class where Traits will do its magic if needed.

This seems to work in the tests I've been trying.  Most of the work is
done by SA declarative, and I've just added a thin layer on top to get
it to behave well with Traited classes and map traits to SA columns.

One issue I'm still having though is that after commits all attributes
get expired.  If you close the session the instance was attached to,
the data you just committed can't be accessed any more.  This is also
the behaviour of plain declarative classes.  This is rather
unintuitive I believe, since once you've committed an instance and
closed the session you can't use the data you just set any more. In my
case this means records being displayed in a gui get corrupted.

Though what's strange is that if you access the attributes after the
commit and before you close the session, and only then close the
session, the values remain in the instance's __dict__ and the class
behaves normally again.  Is this intended for some reason?  Is there a
workaround?  Personally I don't think quantum physics should apply to
the bits of my programs and thus merely observing them should not
change their behaviour. From going through the source I found a
'dont_expire_missing' attribute on attribute implementations, though
this does unfortunately not do what I'd like.

So what I'm doing now is keeping a session open on the gui's thread.
Which causes some problems when trying to modify said instances from
other threads and committing the new values.

How's this usually done in SA?  Right now I've put a lock on the gui
thread's session and allow other threads to issue a commit on that
main thread if need be.  I can't imagine this is the best way to do
it...

Hope you're all having a great weekend,

Christian




2009/2/8  a...@svilendobrev.com:

 afaiknow these traits are like my own static_types, i.e. descriptors
 holding metadata and applying it to attribute access.
 i have been combining SA with static_type for more than 2 years now,
 since SA 3.0.

 The approach i did in the beginning was to replace the object's
 __dict__ by something smart that is static_type-aware. When
 InstrumentionManager framework came into place, i did not find it any
 different, as SA still uses the __dict__ for data access. The
 difference is that now my  __dict__ replacement is created once and
 not at every attr.access.

 i did suggest one patch about replaceing the SA's
 obj.__dict__.whatever usage with an explicit set of methods (so one
 knows what access to mime), and that was making SA 1-2% faster, but
 it wasn't accepted.

 basicaly now there's a thin layer on top of SA, then SA itself, then a
 thick layer underneath managing the data (the fake __dict__).

 declarative+traits... u'll end up where i was. dbcook.sf.net is doing
 that - since beginning. and it's switchable on/off.
 It all works well and stable, in project with 250-300 classes,
 although about 15% slower than without it (-:)

 The sa2static code:
 svn co
 http://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/usage/static_type/

 The static_type itself:
 svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/static_type

 whole dbcook:
 svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk

 i have quite some experience fighting this field, ask if u want.

 ciao
 svilen
 www.svilendobrev.com

 On Sunday 08 February 2009 01:51:20 cputter wrote:
 Hi guys and girls,

 I've recently discovered the joys of using sqlalchemy and would
 love to using it together with Traits.  A few months back there was
 an attempt to integrate sqlalchemy into traits, though it wasn't
 really comprehensive in exploiting all of sqlalchemy's potential.

 So 

[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-08 Thread az

afaiknow these traits are like my own static_types, i.e. descriptors 
holding metadata and applying it to attribute access.
i have been combining SA with static_type for more than 2 years now, 
since SA 3.0.

The approach i did in the beginning was to replace the object's 
__dict__ by something smart that is static_type-aware. When 
InstrumentationManager framework came into place, i did not find it any 
different, as SA still uses the __dict__ for data access. The 
difference is that now my  __dict__ replacement is created once and 
not at every attr.access. 

i did suggest one patch about replacing the SA's 
obj.__dict__.whatever usage with an explicit set of methods (so one 
knows what access to mime), and that was making SA 1-2% faster, but 
it wasn't accepted.

basically now there's a thin layer on top of SA, then SA itself, then a 
thick layer underneath managing the data (the fake __dict__).

declarative+traits... u'll end up where i was. dbcook.sf.net is doing 
that - since beginning. and it's switchable on/off.
It all works well and stable, in project with 250-300 classes, 
although about 15% slower than without it (-:)

The sa2static code:
svn co 
http://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/usage/static_type/

The static_type itself:
svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/static_type

whole dbcook: 
svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk

i have quite some experience fighting this field, ask if u want.

ciao
svilen
www.svilendobrev.com

On Sunday 08 February 2009 01:51:20 cputter wrote:
 Hi guys and girls,

 I've recently discovered the joys of using sqlalchemy and would
 love to using it together with Traits.  A few months back there was
 an attempt to integrate sqlalchemy into traits, though it wasn't
 really comprehensive in exploiting all of sqlalchemy's potential.

 So I'm trying to work on that and combine ext.Declarative with
 traits. The basic idea is to use the DeclarativeMeta type to
 generate Columns from Traits and pass those on for the Declarative
 extension to do its magic.  This would allow mixing of sqlalchemy
 attributes and trait attributes in a single class so that we could
 still make use of all the relational setup sqlalchemy does in any
 case.

 Reading through several threads and looking at Elixir's SA
 integration helped me a bit though I couldn't find any
 documentation on how to implement the InstrumentationManager
 interface.  I'm assuming this would be essential for letting Traits
 and SQLAlchemy play well together.

 There's still a lot of work to do, and I'm not really sure what
 needs to be done for everything to work properly.  Would really
 appreciate it if someone could help me out.


 Here's an example of how it's working at the moment, I'll add the
 actual implementation at the end.

 #
 class User(HasTraitsORM):
     __tablename__ = 'users'

     id = Column('id', Integer, primary_key=True)
     name = Str(sqldb=True)

     def _name_changed(self, old, new):
         print 'Changed name from %s to %s.' % (old, new)

     def __repr__(self):
         return 'User(%s, %s)' % (self.id, self.name)

 people = ['John', 'Charls','Steve','Smith','Jane']

 for per in people:
     obj = User(name=per)
     sess = sqlservice.Session()
     sess.add(obj)
     sess.commit()
     sess.close()
     print obj

 session = sqlservice.Session()
 print '\nQuery all users\n'
 for user in session.query(User).order_by(User.name).all():
     print user

 session.close()


 

 Which spits out:
 ###
 Changed name from  to John.
 User(users.id, )
 Changed name from  to Charls.
 User(users.id, )
 Changed name from  to Steve.
 User(users.id, Steve)
 Changed name from  to Smith.
 User(users.id, Smith)
 Changed name from  to Jane.
 User(users.id, Jane)

 Query all users

 User(2, Charls)
 User(1, John)

 ##

 Which is really strange behaviour.  There's obviously something
 wrong in my implementation of HasTraitsORM but why the different
 results within the same loop???  Why add only two instances?

 Totally baffles me.

 Here's the rest of my code, hope someone can help me out.  It's
 very messy, I've been hacking at it like crazy with no success :-)

 Hope you're all having a great weekend.
 -Chris


 ##

 # Standard library imports.
 import logging

 # Enthought library imports
 from enthought.preferences.api import Preferences
 from enthought.traits.api import \
 HasTraits, MetaHasTraits, Int, Str, Bool, Float, Any,\
 String, Enum, Python, \
 on_trait_change, TraitListObject

 # Package imports
 import sqlalchemy
 from sqlalchemy import Column, Integer
 from sqlalchemy.schema import MetaData
 from sqlalchemy.orm.interfaces import MapperProperty,
 InstrumentationManager
 from sqlalchemy.orm.attributes import get_attribute, set_attribute,
 is_instrumented
 from 

[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-07 Thread Christiaan Putter

Hi Michael,

Thanks for your swift reply.

I wasn't really sure which way to go with combining Traits and SA.
From reading through the source it seemed that I had to use
InstrumentationManager, I think it said somewhere it was the stable
public interface.  I had a look at the Trellis source as well, they
subclass the ClassManager though.  I'll give that a try next.  Either
way it's not quite clear from the source what I really need to
implement in my own interface and how SA expects that to behave.


At the moment I think the problem lies with how attributes are get and
set.  From what I understand from going through the SA source is that
the mapper places decorators on the attributes once Declarative has
created the appropriate tables.  And Traits does something similar
though I'm not quite sure how.  The entire process seems rather
complex, complicated by the fact that Traits sometimes writes directly
to the __dict__ of an instance without going through setattr first
(which I'm guessing is what's needed by SA for its bookkeeping).

I'm quite sure that the SQL SA produces is correct, it's just the
object's state that isn't being handled properly.  Is there some
documentation somewhere that explains what the various methods in the
interface (either InstrumentationManager or ClassManager) are supposed
to do?  For instance is install_state(self, class_, instance, state)
supposed to copy the data in the state parameter to the instance or
merely store it for future use by state_getter ?


I'll give the ClassManager a try now.  Hope you can make some sense of my code.

Enjoy your evening,
Chris




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-07 Thread Michael Bayer


On Feb 7, 2009, at 8:27 PM, Christiaan Putter wrote:


 Hi Michael,

 Thanks for your swift reply.

 I wasn't really sure which way to go with combining Traits and SA.
 From reading through the source it seemed that I had to use
 InstrumentationManager, I think it said somewhere it was the stable
 public interface.  I had a look at the Trellis source as well, they
 subclass the ClassManager though.  I'll give that a try next.  Either
 way it's not quite clear from the source what I really need to
 implement in my own interface and how SA expects that to behave.

uh thats a little weird since the InstrumentationManager is designed  
to be the thing you subclass.  you're not supposed to subclass  
ClassManager.   There should be no difference in behavior subclassing  
one or the other.


 At the moment I think the problem lies with how attributes are get and
 set.  From what I understand from going through the SA source is that
 the mapper places decoraters on the attributes once Declarative has
 created the appropriate tables.  And Traits does something similar
 though I'm not quite sure how.  The entire process seems rather
 complex, complicated by the fact that Traits sometimes writes directly
 to the __dict__ of an instance without going through setattr first
 (which I'm guessing is what's needed by SA for its bookkeeping).

yeah thats all true but the point of InstrumentationManager is that  
its all OK - you send SQLA the events it needs.

 I'm quite sure that the SQL SA produces is correct, it's just the
 object's state that isn't being handled properly.  Is there some
 documentation somewhere that explains what the various methods in the
 interface (either InstrumentationManager or ClassManager) are supposed
 to do?  For instance is install_state(self, class_, instance, state)
 supposed to copy the data in the state parameter to the instance or
 merely store it for future use by state_getter ?

well thats all internal-ish API for which you'd have to just trace out  
the flow of data.  install_state() for example just sticks an  
InstanceState attribute on a given instance.





[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-07 Thread Christiaan Putter

Hi,

 uh thats a little weird since the InstrumentationManager is designed
 to be the thing you subclass.  you're not supposed to subclass
 ClassManager.   There should be no difference in behavior subclassing
 one or the other.


It behaves a little weird too when I tried it myself.  I'll stick to
the InstrumentationManager then if that's the way to do it.



 At the moment I think the problem lies with how attributes are get and
 set.  From what I understand from going through the SA source is that
 the mapper places decoraters on the attributes once Declarative has
 created the appropriate tables.  And Traits does something similar
 though I'm not quite sure how.  The entire process seems rather
 complex, complicated by the fact that Traits sometimes writes directly
 to the __dict__ of an instance without going through setattr first
 (which I'm guessing is what's needed by SA for its bookkeeping).

 yeah thats all true but the point of InstrumentationManager is that
 its all OK - you send SQLA the events it needs.


The question is what events does it need?


 I'm quite sure that the SQL SA produces is correct, it's just the
 object's state that isn't being handled properly.  Is there some
 documentation somewhere that explains what the various methods in the
 interface (either InstrumentationManager or ClassManager) are supposed
 to do?  For instance is install_state(self, class_, instance, state)
 supposed to copy the data in the state parameter to the instance or
 merely store it for future use by state_getter ?

 well thats all internal-ish API for which you'd have to just trace out
 the flow of data.install_state() for example just sticks an
 InstanceState attributre on a given instance.


I've been stepping through the code for several days now and I'm
slowly starting to understand what SA needs and how it operates, not
there yet though.  Do you perhaps know of another project that uses
the InstrumentationManager interface?  Seeing some implemented code
would certainly help out my understanding.


I'm just glad that debugging python is much more fun than debugging c++.


Let me know if you see some flaws in my code.

Thanks for your help Michael.




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-07 Thread Michael Bayer


On Feb 7, 2009, at 8:57 PM, Christiaan Putter wrote:

 yeah thats all true but the point of InstrumentationManager is that
 its all OK - you send SQLA the events it needs.


 The question is what events does it need?

there's a demo here:

http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/custom_attributes/custom_management.py




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-07 Thread Christiaan Putter

Thanks for the link.  That's where I figured out most of what I'm trying to do.

Now that I've stepped through the code I noticed that 'install_state'
is only being called on the first 2 iterations of the loop.  Haven't
found out yet why that is.

What exactly is supposed to be in the state?  Is an instance.__dict__
supposed to be identical to that of state.dict?   It seems that after
a commit that state.dict is cleared so I'm guessing it's only used for
dirty attributes.  For some reason after the commit instance.__dict__
gets modified and the attributes that have been updated get cleared,
only for the first two iterations though.

I changed the loop to look like:

sess = sqlservice.Session()
for per in people:
    obj = User(name=per)
    sess.add(obj)
    print obj
sess.commit()
sess.close()


which behaves the way one would expect...   So I'm guessing it must be
in the way I'm storing the state right? At the moment the
InstrumentationManager stores the states in a dict self.states.  So I
tried to rewrite it to store it directly on the instance, say
instance.__sa_state__, but that doesn't work.  When add gets called,
install_state hasn't been called yet.  Is that supposed to be that
way?  Maybe my instances aren't getting instantiated correctly.

This is becoming really confusing.  I'm taking a quick smoke break.




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-07 Thread Michael Bayer

install_state is normally called during __init__ of your object.  the  
attributes package decorates __init__ for this purpose.   so you have  
to get __init__, or __new__, or whatever, to ensure that  
attributes.instance_state(obj) will return an InstanceState at all  
times.
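
as a small illustration of that invariant (assuming a mapped User class and the sqlalchemy.orm.attributes module from 0.5):

from sqlalchemy.orm import attributes

u = User(name='ed')                    # the instrumented __init__ installs the state
state = attributes.instance_state(u)   # must return an InstanceState at all times
print state.dict                       # the state's view of the instance's data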

On Feb 7, 2009, at 10:02 PM, Christiaan Putter wrote:


 Thanks for the link.  That's where I figured out most of what I'm  
 trying to do.

 Now that I've stepped through the code I noticed that 'install_state'
 is only being called on the first 2 iterations of the loop.  Haven't
 found out yet why that is.

 What exactly is supposed to be in the state?  Is an instance.__dict__
 supposed to be identical to that of state.dict?   It seems that after
 a commit that state.dict is cleared so I'm guessing it's only used for
 dirty attributes.  For some reason after the commit instance.__dict__
 gets modified and the attributes that have been updated get cleared,
 only for the first two iterations though.

 I changed the loop to look like:

 sess = sqlservice.Session()
 for per in people:
obj = User(name=per)
sess.add(obj)
print obj
 sess.commit()
 sess.close()


 which behaves the way one would expect...   So I'm guessing it must be
 in the way I'm storing the state right? At the moment the
 InstrumentationManager stores the states in a dict self.states.  So I
 tried to rewrite it to store it directly on the instance, say
 instance.__sa_state__, but that doesn't work.  When add gets called,
 install_state hasn't been called yet.  Is that supposed to be that
 way?  Maybe my instances aren't getting instantiated correctly.

 This is becoming really confusing.  I'm taking a quick smoke break.

 





[sqlalchemy] Re: Declarative Base ID

2009-02-01 Thread Michael Bayer


primary key identifiers are acquired using database-specific methods,
such as AUTOINCREMENT on mysql, SERIAL on postgres, and SQLite's implicit
OID behavior.   these methods all start at 1.   you can explicitly set
the primary key attributes on a pending object to 0 and flush to force
a zero.
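
a small sketch of that workaround (the class and session names here are illustrative):

obj = MyEntity(name='zeroth')
obj.id = 0            # explicitly set the primary key on the pending object
session.add(obj)
session.flush()       # the INSERT carries 0 instead of the database-generated value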

On Feb 1, 2009, at 7:24 AM, vctr...@gmail.com wrote:


 The ID field in a declarative base is a sequence that is not controled
 by the user (or is it?).

 Is there a way to get it to start the counting of the ID from 0 and
 not from 1?

 Thanks
 





[sqlalchemy] Re: Declarative Base ID

2009-02-01 Thread vctr...@gmail.com

Thanks

On Feb 1, 7:17 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 primary key identifiers are acquired using database-specific methods,  
 such as AUTOINCREMENT on mysql, SERIAL on postgres, SQLites implicit  
 OID behavior.   these methods all start at 1.   you can explicitly set  
 the primary key attributes on a pending object to 0 and flush to force  
 a zero.

 On Feb 1, 2009, at 7:24 AM, vctr...@gmail.com wrote:





  The ID field in a declarative base is a sequence that is not controled
  by the user (or is it?).

  Is there a way to get it to start the counting of the ID from 0 and
  not from 1?

  Thanks



[sqlalchemy] Re: Declarative and relation to self

2009-01-06 Thread Gennady Kovalev



On 6 Jan, 23:05, Michael Bayer zzz...@gmail.com wrote:
 this was a bug in 0.5.0rc4 and is fixed in 0.5.0.

 On Jan 6, 2:49 pm, Gennady Kovalev gennady.kova...@gmail.com wrote:

  Hi!

  I try to clean up my code, and read in google group about possibility
  create relation to self when class is not defined yet. I write
  example,
  but got an error (see below).

  My code is:
   cut 
  engine = create_engine('sqlite:///:memory:', echo=True)
  Base = declarative_base(bind=engine)

  class Node(Base):
      __tablename__ = 'node'
      id = Column(Integer, primary_key=True)
      title = Column(String(64))
      node_id = Column(Integer, ForeignKey('node.id'))
      children = relation('Node', \
              backref=backref('parent', remote_side='Node.id'))

  Base.metadata.create_all()

  Session = scoped_session( \
              sessionmaker(autocommit=False, \
                  autoflush=False, bind=engine))
  session = Session()

  root = Node(title='root')
  node = Node(title='node')
  root.children.append(node)

  session.save(root)
  session.commit()
   cut 

  Workaround worked correctly:

  class Node(Base):
    # define attrs without `children'
    # ...

  Node.__mapper__.add_property('children', relation(Node, \
              backref=backref('parent', remote_side=Node.id)))

  Last traceback lines (formated):

  ...
  File ...orm/properties.py, line 578, in do_init
    self._determine_local_remote_pairs()
  File ...orm/properties.py, line 820, in
  _determine_local_remote_pairs
    self.local_side, self.remote_side = \
      [util.OrderedSet(x) for x in zip(*list(self.local_remote_pairs))]

  ValueError: need more than 0 values to unpack

  (self.local_remote_pairs is `[]' here)

  Can I define relation inside non-defined class, or I must use some
  workarounds?

  Thank you.



[sqlalchemy] Re: Declarative and relation to self

2009-01-06 Thread Gennady Kovalev


On 6 Jan, 23:05, Michael Bayer zzz...@gmail.com wrote:
 this was a bug in 0.5.0rc4 and is fixed in 0.5.0.

Ohh, thank you.



[sqlalchemy] Re: declarative inheritance

2008-11-11 Thread Michael Bayer



sure its along these lines


class Content(my_declarative_base):
    __tablename__ = 'content'
    id = Column(Integer, primary_key=True)
    title = Column(String(80))
    parent_id = Column(Integer, ForeignKey('content.id'))
    type = Column(String(32))

    children = relation("Content", backref=backref('parent', remote_side=id))

    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'content'}
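
a joined-table subclass under that mapping would then just add its own table and identity; a rough sketch (the Article class and its columns are illustrative, not from the original question):

class Article(Content):
    __tablename__ = 'article'
    id = Column(Integer, ForeignKey('content.id'), primary_key=True)
    body = Column(String(2000))
    __mapper_args__ = {'polymorphic_identity': 'article'}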



On Nov 11, 2008, at 10:13 AM, cropr wrote:


 Does somebody know if is possible to use a declarative class
 definition for the schema below

 
 content  = Table('content', meta.metadata,
Column('id', types.Integer, primary_key=True, autoincrement=True),
Column('title', types.String(80)),
Column('parent_id', types.Integer, ForeignKey('content.id')),
Column('type',types.String(32)),
)

 class Content(object):

 mapper(Content, content, polymorphic_on=content.c.type,
 polymorphic_identity='content', properties={
'children': relation(Content, backref=backref('parent',
 remote_side=[content.c.id]))
})
 


 





[sqlalchemy] Re: declarative and self-referential table

2008-11-11 Thread Michael Bayer

always use remote_side=table.c.id on the many-to-one side of a self-referential
relation; in the case of declarative it would look like remote_side=id.   See
http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_selfreferential
for information on this.
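
spelled out as a rough declarative sketch (the Node class and column names are illustrative):

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('node.id'))
    children = relation('Node',
                        backref=backref('parent', remote_side=id))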


On Nov 11, 2008, at 2:15 PM, MikeCo wrote:


 I have a table that defines a self-referential hierarchy. My problem
 is figuring out how to specify that relationship in declarative
 syntax. From reading the documentation and looking at example
 basic_tree.py, I think I understand it when using tables and mappers,
 but can't get it right with declarative.

 Here is one way I tried it. Using 0.5rc2 or 0.5rc3 gives the traceback
 below on session.commit() telling me I have circular references.

 import sqlalchemy
 print 'SQLAlchemy version', sqlalchemy.__version__

 from sqlalchemy import create_engine, Column, Integer, String,
 MetaData, ForeignKey
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import sessionmaker, relation, backref

 engine = create_engine(r'sqlite:///:memory:')
 Base = declarative_base(bind=engine)
 class People(Base):
__tablename__ = 'people'
id = Column(Integer, primary_key=True)
parent_id = Column(Integer, ForeignKey('people.id'))
name = Column(String)
parent = relation('People', cascade='all',
 backref=backref('children'))

 Base.metadata.create_all()
 Session = sessionmaker()
 session = Session()

 p1 = People(name='john')
 p2 = People(name='susie')
 p3 = People(name='joannie')
 p1.children.append(p2)
 p2.children.append(p3)
 session = Session()
 session.add(p1)
 session.commit()

  File sample_recursive.py, line 30, in module
session.commit()
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\session.py, line 670, in commit
self.transaction.commit()
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\session.py, line 375, in commit
self._prepare_impl()
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\session.py, line 359, in _prepare_impl
self.session.flush()
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\session.py, line 1361, in flush
self._flush(objects)
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\session.py, line 1431, in _flush
flush_context.execute()
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 261, in execute
tasks = self._sort_dependencies()
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 302, in _sort_depen
 dencies
for t in task._sort_circular_dependencies(self,
 [self.get_task_by_mapper(i) for i in cycles]):
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\orm\unitofwork.py, line 568, in _sort_circu
 lar_dependencies
head = topological.sort_as_tree(tuples,
 object_to_original_task.keys())
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\topological.py, line 58, in sort_as_tree
return _organize_as_tree(_sort(tuples, allitems,
 allow_cycles=with_cycles))
  File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
 \sqlalchemy\topological.py, line 212, in _sort
raise CircularDependencyError(Circular dependency detected  +
 repr(edges) + repr(queue))
 sqlalchemy.exc.CircularDependencyError: Circular dependency detected
 [(sqlalchemy.orm.identity.IdentityManagedState object
 at 0x00E80510, sqlalchemy.orm.identity.IdentityManagedState object
 at 0x00E80490), (sqlalchemy.orm.identity.IdentityMa
 nagedState object at 0x00E80490,
 sqlalchemy.orm.identity.IdentityManagedState object at 0x00E80510),
 (sqlalchemy.orm.id
 entity.IdentityManagedState object at 0x00E80490,
 sqlalchemy.orm.identity.IdentityManagedState object at 0x00E80C70),
 (
 sqlalchemy.orm.identity.IdentityManagedState object at 0x00E80C70,
 sqlalchemy.orm.identity.IdentityManagedState object at
 0x00E80490)][]

 --
 Mike Conley

 





[sqlalchemy] Re: declarative and self-referential table

2008-11-11 Thread MikeCo

Thanks for the pointer about remote_side.

In my application, the table definitions are not visible at runtime,
and the class definitions are autoloaded from the engine. So, I can't
say remote_side=table.c.id because table is not available. I do have
this solution that works well:


class People(Base):
    __tablename__ = 'people'
    __table_args__ = {'autoload': True}

People.children = relation(People, cascade="all",
                           backref=backref("parent", remote_side=[People.id]))

but I am wondering if I can move the definition of children into the
class somehow. Using this form:


class People(Base):
    __tablename__ = 'people'
    __table_args__ = {'autoload': True}
    children = relation(People, cascade="all",
                        backref=backref("parent", remote_side=[People.id]))

gives a compile time error because People is being referenced before
it is completely defined. Trying to quote the word People (as either
'People' or 'people') also gives various runtime errors. I can
certainly live with adding the relation after defining the class, but
it would be cleaner to embed the relation in the class definition if
possible.



On Nov 11, 2:37 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 always use remote_side=table.c.id on the many to one side of a self  
 referential relation, in the case of declarative it would look like  
 remote_side=id.   See
 http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relatio...
   for information on this.

 On Nov 11, 2008, at 2:15 PM, MikeCo wrote:



  I have a table that defines a self-referential hierarchy. My problem
  is figuring out how to specify that relationship in declarative
  syntax. From reading the documentation and looking at example
  basic_tree.py, I think I understand it when using tables and mappers,
  but can't get it right with declarative.

  Here is one way I tried it. Using 0.5rc2 or 0.5rc3 gives the traceback
  below on session.commit() telling me I have circular references.

  import sqlalchemy
  print 'SQLAlchemy version', sqlalchemy.__version__

  from sqlalchemy import create_engine, Column, Integer, String,
  MetaData, ForeignKey
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import sessionmaker, relation, backref

  engine = create_engine(r'sqlite:///:memory:')
  Base = declarative_base(bind=engine)
  class People(Base):
     __tablename__ = 'people'
     id = Column(Integer, primary_key=True)
     parent_id = Column(Integer, ForeignKey('people.id'))
     name = Column(String)
     parent = relation('People', cascade='all',
  backref=backref('children'))

  Base.metadata.create_all()
  Session = sessionmaker()
  session = Session()

  p1 = People(name='john')
  p2 = People(name='susie')
  p3 = People(name='joannie')
  p1.children.append(p2)
  p2.children.append(p3)
  session = Session()
  session.add(p1)
  session.commit()

   File sample_recursive.py, line 30, in module
     session.commit()
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\session.py, line 670, in commit
     self.transaction.commit()
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\session.py, line 375, in commit
     self._prepare_impl()
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\session.py, line 359, in _prepare_impl
     self.session.flush()
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\session.py, line 1361, in flush
     self._flush(objects)
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\session.py, line 1431, in _flush
     flush_context.execute()
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\unitofwork.py, line 261, in execute
     tasks = self._sort_dependencies()
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\unitofwork.py, line 302, in _sort_depen
  dencies
     for t in task._sort_circular_dependencies(self,
  [self.get_task_by_mapper(i) for i in cycles]):
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\orm\unitofwork.py, line 568, in _sort_circu
  lar_dependencies
     head = topological.sort_as_tree(tuples,
  object_to_original_task.keys())
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\topological.py, line 58, in sort_as_tree
     return _organize_as_tree(_sort(tuples, allitems,
  allow_cycles=with_cycles))
   File C:\Python25\lib\site-packages\sqlalchemy-0.5.0rc2-py2.5.egg
  \sqlalchemy\topological.py, line 212, in _sort
     raise CircularDependencyError(Circular dependency detected  +
  repr(edges) + repr(queue))
  sqlalchemy.exc.CircularDependencyError: Circular dependency detected
  [(sqlalchemy.orm.identity.IdentityManagedState object
  at 0x00E80510, sqlalchemy.orm.identity.IdentityManagedState object
  at 0x00E80490), 

[sqlalchemy] Re: declarative and self-referential table

2008-11-11 Thread Michael Bayer


On Nov 11, 2008, at 4:52 PM, MikeCo wrote:


 Thanks for the pointer about remote_side.

 In my application, the table definitions are not visible at runtime,
 and the class definitions are autoloaded from the engine. So, I can't
 say remote_side=table.c.id because table is not available. I do have
 this solution that works well:


class People(Base):
__tablename__ = 'people'
__table_args__ = {'autoload':True}

People.children = relation(People, cascade=all,
backref=backref(parent, remote_side=[People.id]))

 but I am wondering if I can move the definition of children into the
 class somehow. Using this form:


class People(Base):
__tablename__ = 'people'
__table_args__ = {'autoload':True}
children = relation(People, cascade=all,
backref=backref(parent, remote_side=[People.id]))

class People(Base):
    ... arguments ...
    children = relation("People", backref=backref("parent",
                        remote_side="People.id"))

or my preference:

class People(Base):
    __table__ = people_table = Table('people', Base.metadata, autoload=True)
    ... arguments ...
    children = relation("People", backref=backref("parent",
                        remote_side=people_table.c.id))







[sqlalchemy] Re: declarative, autoload and late binding to engine

2008-11-06 Thread Martijn Faassen

Hi there,

MikeCo wrote:
 I have an application that will need to bind to several different
 databases with the same structure. The databases to be accessed are
 not known at import time, that will be determined later while the
 application is running. I want to use declarative and autoload the
 columns.  Autoload needs a database connection available when a class
 is first compiled to do the database introspection, that is the part I
 can't figure out. Maybe a custom metaclass?
 
 If this is already solved, can someone point me at the answer? I
 haven't found it yet.

I've solved it, but the solution is rather particular to Zope technology 
right now. It's spread around z3c.saconfig and megrok.rdb:

http://svn.zope.org/z3c.saconfig/trunk

http://svn.zope.org/megrok.rdb/trunk/

Let me sketch out what's going on:

* z3c.saconfig sets up a special scoped session, with a custom session 
factory and scopefunc. The session factory looks up in the Zope 3 
component architecture for a way to create an engine, but you could use 
some other strategy.

* the engine factory is also looked up dynamically by the session 
factory and in turn creates a SQLAlchemy engine (or returns an existing 
one if the engine is already created).

* when an engine is first created, an event is fired. In effect this 
event is fired when a session is needed for the first time in the 
application.

Now megrok.rdb hooks into this event. It will reflect any tables that 
need to be reflected and create any tables that need to be created (if 
their structure is defined in Python).

reflection in the simplest case can be done like this:

metadata.reflect(bind=engine)

and creation can be done like this:

metadata.create_all(engine)

By the time the event is handled, the various 
declarative classes have already been associated with the metadata object.

megrok.rdb actually doesn't use the declarative extension's metaclass 
approach, but instead drives the declarative extension's 
instrument_declarative from a grokker (see the martian library). In my 
approach I use an explicit metadata object. I believe the declarative 
extension creates one on the fly (but you can get to it with Base.metadata).

Anyway, all of this sounds very complicated, as the Zope and Grok stuff 
take particular approaches towards configurability. I think the core of 
this approach is:

* hook up your classes to a metadata (declarative does this for you)

* at the appropriate time, do metadata.reflect(bind=engine)
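
Stripped of the Zope/Grok machinery, a minimal sketch of that core idea might look like this (the function name and URL are illustrative):

from sqlalchemy import MetaData, create_engine

metadata = MetaData()          # classes/tables get hooked up to this at import time

def bind_when_known(db_url):
    # called later, once the target database is actually known
    engine = create_engine(db_url)
    metadata.reflect(bind=engine)    # fill in reflected tables by introspection
    metadata.create_all(engine)      # create any tables defined purely in Python
    return engine

engine = bind_when_known('sqlite:///:memory:')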

Regards,

Martijn






[sqlalchemy] Re: Declarative application example

2008-10-28 Thread Michael Bayer


On Oct 28, 2008, at 9:32 AM, writeson wrote:


 Hi everyone,

 I'm struggling to try and use SqlAlchemy (0.5rc2) with a project I'm
 working on. The database part isn't very hard, it's just a flat table,
 but I keep running into problems with SA throwing exceptions. The SA
 documentation is good, but as of yet I haven't gotten the ah ha
 moment of how to put it all together. What I need to see is a simple,
 complete application example that would show me how the pieces fit
 together as a whole. In particular I'm looking for things like this:

 1)When and how to update the database when modifying a Class
 instance.

the ORM tutorial steps through this process fairly methodically, and  
touches upon the full lifecycle of an object.

Also a good read of the session chapter explains exactly what the  
Session is doing.  0.5's default session configuration follows a model  
whereby you generally don't have to deal with anything except add/ 
delete/commit.
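
a bare-bones sketch of that model, assuming an engine and a mapped User class:

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()

user = User(name='ed')
session.add(user)        # pending
session.commit()         # flush and commit in one step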


 2)Can database object instances be maintained in a Python list, or
 should they be handled one at a time?

there's no restriction on how mapped instances are structured or  
persisted.


 3)Does deleting a database object instance (del obj) delete the
 row from the database, or should session.delete(obj) be called first
 and then del obj?

del obj just dereferences the variable named obj from the local  
namespace in the Python process.If no further references remain on  
the object, it will be garbage collected, including from SQLA's  
Session.  This does not communicate a database delete operation to the  
session.SQLAlchemy records an object for pending deletion using  
Session.delete(obj).  If you issue this call, the Session temporarily  
creates a strong reference to the object's mapped information so that  
it stays in scope, at least within the Session, until the transaction  
commits.
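
to make the distinction concrete, a small sketch (the query and names are illustrative):

obj = session.query(User).first()
del obj                       # only removes the local name; no SQL is emitted

obj = session.query(User).first()
session.delete(obj)           # marks the row for deletion within the session
session.commit()              # the DELETE is emitted here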

 4)Is it possible to apply a Python __cmp__() method to a list of
 database objects, or should SA order by operations be used instead?

either approach is feasible depending on the situation





[sqlalchemy] Re: Declarative application example

2008-10-28 Thread Doug Farrell

Michael,

Thanks for your response. I'll take another look at the ORM tutorial and
the Session chapter now that I've got a little experience and see if
some lights go on.

Your text about deleting instances matches my understanding of how
things work, but it's good to get confirmation that I'm not missing
something. I'm guessing the Session holding a strong reference keeps the
object alive even though I delete it from the list.

Again, thanks!
Doug

 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
 On Behalf Of Michael Bayer
 Sent: Tuesday, October 28, 2008 10:30 AM
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: Declarative application example
 
 
 
 On Oct 28, 2008, at 9:32 AM, writeson wrote:
 
 
  Hi everyone,
 
  I'm struggling to try and use SqlAlchemy (0.5rc2) with a project I'm
  working on. The database part isn't very hard, it's just a flat
 table,
  but I keep running into problems with SA throwing exceptions. The SA
  documentation is good, but as of yet I haven't gotten the ah ha
  moment of how to put it all together. What I need to see is a
simple,
  complete application example that would show me how the pieces fit
  together as a whole. In particular I'm looking for things like this:
 
  1)When and how to update the database when modifying a Class
  instance.
 
 the ORM tutorial steps through this process fairly methodically, and
 touches upon the full lifecycle of an object.
 
 Also a good read of the session chapter explains exactly what the
 Session is doing.  0.5's default session configuration follows a model
 whereby you generally don't have to deal with anything except add/
 delete/commit.
 
 
  2)Can database object instances be maintained in a Python list,
 or
  should they be handled one at a time?
 
 there's no restriction on how mapped instances are structured or
 persisted.
 
 
  3)Does deleting a database object instance (del obj) delete the
  row from the database, or should session.delete(obj) be called first
  and then del obj?
 
 del obj just dereferences the variable named obj from the local
 namespace in the Python process.If no further references remain on
 the object, it will be garbage collected, including from SQLA's
 Session.  This does not communicate a database delete operation to the
 session.SQLAlchemy records an object for pending deletion using
 Session.delete(obj).  If you issue this call, the Session temporarily
 creates a strong reference to the object's mapped information so that
 it stays in scope, at least within the Session, until the transaction
 commits.
 
  4)Is it possible to apply a Python __cmp__() method to a list of
  database objects, or should SA order by operations be used instead?
 
 either approach is feasable depending on the situation
 
 
  





[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread az

i don't know if elixir has such 'attribute/behaviour injectors'; i have 
DBCOOK_no_mapping=True in dbcook.sf.net that does that. the machinery 
behind it is somewhat tricky (x in class.__dict__ and not in 
base_class.__dict__ etc stuff)

you may try your stuff as a mixin; that may or may not work.

On Monday 29 September 2008 18:01:00 Joril wrote:
 Hi everyone!
 I'm new to SQLAlchemy and I'm using version 0.5rc1..
 I need every entity class to have a few common fields, so I tried
 writing an abstract base class, declarative-style, that every
 other entity class would subclass. So for example:

 ---
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, String, DateTime, Integer

 ORMBase = declarative_base()

 class BaseObject(ORMBase):
 id = Column(Integer, primary_key=True)
 creation_time = Column(DateTime)
 modify_time = Column(DateTime)

 class TestEntity(BaseObject):
 value = Column(String)
 ---

 But SQLAlchemy complains that
 sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does
 not have a mapped_table specified.  (Are you using the return value
 of table.create()?  It no longer has a return value.)

 Is there a way to tell SQLAlchemy to treat BaseObject like a non-
 mapped-class? I tried using ORMBase as mixin to TestEntity (so
 BaseObject extends object and TestEntity extends BaseObject and
 ORMBase), but now I get a

 sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could
 not assemble any primary key columns for mapped table 'tests'

 so I guess that maybe I'm going down the wrong road.. Am I doing
 something that Declarative doesn't like? :) Should I try Elixir
 instead?

 Many thanks for your time!
 





[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread Gaetan de Menten

On Mon, Sep 29, 2008 at 5:01 PM, Joril [EMAIL PROTECTED] wrote:

 Hi everyone!
 I'm new to SQLAlchemy and I'm using version 0.5rc1..
 I need every entity class to have a few common fields, so I tried
 writing an abstract base class, declarative-style, that every other
 entity class would subclass. So for example:

 ---
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, String, DateTime, Integer

 ORMBase = declarative_base()

 class BaseObject(ORMBase):
id = Column(Integer, primary_key=True)
creation_time = Column(DateTime)
modify_time = Column(DateTime)

 class TestEntity(BaseObject):
value = Column(String)
 ---

 But SQLAlchemy complains that
 sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does not
 have a mapped_table specified.  (Are you using the return value of
 table.create()?  It no longer has a return value.)

 Is there a way to tell SQLAlchemy to treat BaseObject like a non-
 mapped-class? I tried using ORMBase as mixin to TestEntity (so
 BaseObject extends object and TestEntity extends BaseObject and
 ORMBase), but now I get a

 sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could not
 assemble any primary key columns for mapped table 'tests'

 so I guess that maybe I'm going down the wrong road.. Am I doing
 something that Declarative doesn't like? :) Should I try Elixir
 instead?

I don't know whether this is currently possible with Declarative or
not. In the case it isn't, patching Declarative should be quite easy
(but I don't know if such a patch would be accepted or not). If you
don't want to go down that route, Elixir does support that pattern.

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread Joril

 I don't know whether this is currently possible with Declarative or
 not. In the case it isn't, patching Declarative should be quite easy
 (but I don't know if such a patch would be accepted or not).

I see.. I'll wait a bit then, maybe one of the developers will tell us
if it'd be acceptable :)

 If you don't want to go down that route, Elixir does support that pattern.

Do you mean these
http://elixir.ematia.de/trac/wiki/FAQ#HowdoIaddfunctionalitytoallmyentitiestothebaseclass
http://elixir.ematia.de/trac/wiki/FAQ#HowdoIprovideadifferentbaseclassthanEntity
?

The comment "Note that in this case you'll lose all default methods
provided by the Entity class" makes me think that extending Entity
isn't supported even in Elixir; have I got it wrong?
Thanks again!




[sqlalchemy] Re: Declarative and common fields

2008-09-29 Thread Michael Bayer


On Sep 29, 2008, at 11:01 AM, Joril wrote:


 Hi everyone!
 I'm new to SQLAlchemy and I'm using version 0.5rc1..
 I need every entity class to have a few common fields, so I tried
 writing an abstract base class, declarative-style, that every other
 entity class would subclass. So for example:

 ---
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy import Column, String, DateTime, Integer

 ORMBase = declarative_base()

 class BaseObject(ORMBase):
id = Column(Integer, primary_key=True)
creation_time = Column(DateTime)
modify_time = Column(DateTime)

 class TestEntity(BaseObject):
value = Column(String)
 ---

 But SQLAlchemy complains that
 sqlalchemy.exc.ArgumentError: Mapper 'Mapper|BaseObject|None' does not
 have a mapped_table specified.  (Are you using the return value of
 table.create()?  It no longer has a return value.)

 Is there a way to tell SQLAlchemy to treat BaseObject like a non-
 mapped-class? I tried using ORMBase as mixin to TestEntity (so
 BaseObject extends object and TestEntity extends BaseObject and
 ORMBase), but now I get a

 sqlalchemy.exc.ArgumentError: Mapper Mapper|TestEntity|tests could not
 assemble any primary key columns for mapped table 'tests'

 so I guess that maybe I'm going down the wrong road.. Am I doing
 something that Declarative doesn't like? :) Should I try Elixir
 instead?

 Many thanks for your time!

the Column objects that are present on each declarative class are  
unique to that class, so the creation of those three Column objects  
would have to occur for each class.  This suggests that the correct  
approach would be to extend the declarative metaclass to provide this  
behavior:

from sqlalchemy import *
from sqlalchemy.ext.declarative import DeclarativeMeta, declarative_base

class MyDefaults(DeclarativeMeta):
    def __init__(cls, classname, bases, dict_):
        dict_['id'] = Column(Integer, primary_key=True)
        dict_['creation_time'] = Column(DateTime)
        dict_['modify_time'] = Column(DateTime)
        return DeclarativeMeta.__init__(cls, classname, bases, dict_)

Base = declarative_base(metaclass=MyDefaults)

class TestEntity(Base):
    __tablename__ = 'test'
    value = Column(String)

print TestEntity.id == 5
print TestEntity.creation_time
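
a quick usage sketch of the result (session setup assumed; values illustrative): every class created from this Base picks up the three common columns in addition to its own.

e = TestEntity(value='hello')
session.add(e)
session.commit()
print e.id, e.value, e.creation_time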






[sqlalchemy] Re: Declarative documentation

2008-09-17 Thread Empty

Hi Doug,

 I'm a new user (like this week) of SqlAlchemy and I'm trying to find
 more information about using the Declarative system. In particular I'm
 trying to build a hierarchical table with one-to-many relationships
 within the table. So if anyone knows where there might be some
 additional documentation about this, examples or just some guidance,
 I'd very much appreciate it!!

There's extensive documentation online and in the ext/declarative.py
module itself.  Beyond that it's basically just straight SQLAlchemy.
So you would be handling a self referential hierarchy as demonstrated
here:

http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_selfreferential

I hope that helps.

Michael
http://blog.michaeltrier.com/




[sqlalchemy] Re: Declarative documentation

2008-09-17 Thread Doug Farrell

Michael,

Thanks for getting back to me so quickly. I've figured out how to get a
one-to-many, single table relationship working with one way relations,
here is that configuration:

class JobData(Base):
    __tablename__ = "jobs"
    id       = Column(Integer, primary_key=True, autoincrement=True)
    pid      = Column('pid', Integer, ForeignKey('jobs.id'))
    srcpath  = Column(String(128), default=None)
    press    = Column(Integer, default=None)
    priority = Column(Integer, default=None)
    created  = Column(DateTime, default=datetime.datetime.now)

    def __init__(self, srcpath=None):
        session = Session()
        self.srcpath = srcpath
        session.add(self)
        session.commit()

Jobs that are children of other jobs get their pid field initialized,
and this seems to work well.
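
A rough sketch of making the relation two-way, following the self-referential pattern linked above (the children/parent lines are illustrative, not from the original post):

class JobData(Base):
    __tablename__ = "jobs"
    id  = Column(Integer, primary_key=True, autoincrement=True)
    pid = Column(Integer, ForeignKey('jobs.id'))
    children = relation("JobData",
                        backref=backref("parent", remote_side=[id]))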

I wasn't sure if the link you sent was what you intended as that brought
up a page about eager loading. However, it was interesting reading all
the same!

Thanks again,
Doug


 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
 On Behalf Of Empty
 Sent: Wednesday, September 17, 2008 9:02 AM
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: Declarative documentation
 
 
 Hi Doug,
 
  I'm a new user (like this week) of SqlAlchemy and I'm trying to find
  more information about using the Declarative system. In particular
 I'm
  trying to build a hierarchical table with one-to-many relationships
  within the table. So if anyone knows where there might be some
  additional documentation about this, examples or just some guidance,
  I'd very much appreciate it!!
 
 There's extensive documentation online and in the ext/declarative.py
 module itself.  Beyond that it's basically just straight SQLAlchemy.
 So you would be handling a self referential hierarchy as demonstrated
 here:
 

http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_
 selfreferential
 
 I hope that helps.
 
 Michael
 http://blog.michaeltrier.com/
 
 




[sqlalchemy] Re: declarative

2008-08-21 Thread Jim Jones

On Tue, 2008-08-19 at 16:16 -0400, Michael Bayer wrote:
 
 On Aug 19, 2008, at 4:07 PM, Gaetan de Menten wrote:
 
  Simpler than Elixir? How so? If you are speaking about the internal
  guts, then you are right declarative is simpler (which is normal since
  it does less), but if you meant the usage (and I *think* it's what the
  original poster meant here), I have to disagree... Their simplicity is
  comparable, and I would even vote for Elixir as slightly simpler
  because of the little helper methods you get for free... If you meant
  usage here, please explain why you think so.
 
 As far as declarative being simpler I would say, source code wise,  
 architecture wise, things-that-can-go-wrong-wise (i.e., an  
 configurational error message in Elixir can be an elixir problem, or a  
 SQLA problem, or both, its hard to trace), as well as that you only  
 have to learn one paradigm, not two, in order to use it.   Though on  
 the last point perhaps Elixir is finally getting to the point where  
 you really don't need to know SQLA configuration in order to use it.

Just another datapoint: me too.
I started out with elixir but soon switched to declarative because
too many times I shot myself in the foot with the subtle
differences/interactions between elixir and SA.

Furthermore pretty much all SA documentation you can find is in
pure SA-speak. I found it easier to convert that to declarative-speak
than to elixir-speak because declarative is closer to the real thing.

IMHO elixir is a double-edged sword. If you know that your task at hand is
very simple (and will stay that way) then it can help you with the
syntactic sugar. Everyone else will eventually grow out of it.


regards
-jj





[sqlalchemy] Re: declarative

2008-08-21 Thread Gaetan de Menten

On Thu, Aug 21, 2008 at 2:06 PM, Jim Jones [EMAIL PROTECTED] wrote:

 On Tue, 2008-08-19 at 16:16 -0400, Michael Bayer wrote:

 On Aug 19, 2008, at 4:07 PM, Gaetan de Menten wrote:

  Simpler than Elixir? How so? If you are speaking about the internal
  guts, then you are right declarative is simpler (which is normal since
  it does less), but if you meant the usage (and I *think* it's what the
  original poster meant here), I have to disagree... Their simplicity is
  comparable, and I would even vote for Elixir as slightly simpler
  because of the little helper methods you get for free... If you meant
  usage here, please explain why you think so.

 As far as declarative being simpler I would say, source code wise,
 architecture wise, things-that-can-go-wrong-wise (i.e., an
 configurational error message in Elixir can be an elixir problem, or a
 SQLA problem, or both, its hard to trace), as well as that you only
 have to learn one paradigm, not two, in order to use it.   Though on
 the last point perhaps Elixir is finally getting to the point where
 you really don't need to know SQLA configuration in order to use it.

 Just another datapoint: me too.
 I started out with elixir but soon switched to declarative because
 too many times I shot myself in the foot with the subtle
 differences/interactions between elixir and SA.

 Furthermore pretty much all SA documentation you can find is in
 pure SA-speak. I found it easier to convert that to declarative-speak
 than to elixir-speak because declarative is closer to the real thing.

That's true... to some extent. Once you understand that all arguments to
relation can also be passed to the different Elixir relationships, and
column arguments passed to Field, translating SA-speak to Elixir-speak
becomes trivial.

 IMHO elixir is a two-bladed sword. If you know that your task at hand is
 very simple (and will stay that way) then it can help you with the
 syntactic sugar.

These days, Elixir can handle almost any situation that SA can handle.
The only limitation that will never be overcome by Elixir, (but
neither by declarative) is to be able to map one class several times
(to different selectables).

 Everyone else will eventually grow out of it.

That's quite a bold claim... Ok, Elixir is not your style, that's
fine.  Also, Elixir certainly doesn't suit every project out there,
I'm aware of that. But implying it's not worth it for anybody is
well... uninformed. Just don't bash something you probably don't
really know.

-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: declarative

2008-08-21 Thread Jim Jones

On Thu, 2008-08-21 at 16:08 +0200, Gaetan de Menten wrote:
  Everyone else will eventually grow out of it.
 
 That's quite a bold claim... Ok, Elixir is not your style, that's
 fine.  Also, Elixir certainly doesn't suit every project out there,
 I'm aware of that. But implying it's not worth it for anybody is
 well... uninformed. Just don't bash something you probably don't
 really know.

Sorry if that came across as harsh, I was only speaking about
my experience and impression of course.


regards
-jj





[sqlalchemy] Re: declarative

2008-08-21 Thread az

well, if you want something that looks simpler but is actually quite 
a bit more complex/twisted inside, try dbcook.sf.net. 
it tries to hide all the sql-schema stuff for you.
but it's definitely not for the faint-hearted; no time to brush it up... 
even the examples look like a battle field.
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/usage/example/

svilen




[sqlalchemy] Re: declarative

2008-08-19 Thread Jose Galvez

I take it back about Elixir and legacy databases; it seems to work with 
them just as easily as sqlalchemy does.  I'll have to look much closer at 
Elixir.
Jose

Jose Galvez wrote:
 What is the proposed stability of the declarative functions, which I guess 
 are pretty new?  From what I've read so far I really like it and was 
 thinking of using it, but was just wondering what the long term outlook 
 for it looked like?  After doing some reading on the new release of 
 Elixir, Elixir looks like a much simpler and more feature-complete option than 
 declarative, but it does not look like Elixir works with a legacy 
 database (but I'm still looking into that), so I was wondering about 
 declarative's long term stability. 

 Thanks Jose

 

   




[sqlalchemy] Re: declarative

2008-08-19 Thread Michael Bayer


On Aug 19, 2008, at 2:43 PM, Jose Galvez wrote:


 What is the proposed stability of the declarative functions, which I guess
 are pretty new?  From what I've read so far I really like it and was
 thinking of using it, but was just wondering what the long term outlook
 for it looked like?  After doing some reading on the new release of
 Elixir, Elixir looks like a much simpler and more feature-complete option
 than declarative, but it does not look like Elixir works with a legacy
 database (but I'm still looking into that), so I was wondering about
 declarative's long term stability.

declarative is intended to be a lot simpler than Elixir, so it's funny  
you see it the other way around.   I'm using it (declarative) on  
a production project and so are many others, and it forms the basis of  
the object-relational plugin for Grok.  It's fully stable since it is  
using the same SQLAlchemy constructs that regular mapper() and Table  
calls do.





[sqlalchemy] Re: Declarative 0.5 tutorial fails to create tables before insert

2008-07-18 Thread Kris Kennaway

Thanks, this wasn't clear from the tutorial.

Kris

On Jul 18, 6:04 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jul 17, 2008, at 4:44 PM, Kris Kennaway wrote:







  from sqlalchemy.orm import sessionmaker
  Session = sessionmaker(bind=engine)
  session = Session()

  ed_user = User('ed', 'Ed Jones', 'edspassword')
  session.add(ed_user)

  session.add_all([
          User('wendy', 'Wendy Williams', 'foobar'),
          User('mary', 'Mary Contrary', 'xxg527'),
          User('fred', 'Fred Flinstone', 'blah')])

  session.commit()

 you're missing a metadata.create_all() in there.   From your  
 declarative base call Base.metadata.create_all(engine).




[sqlalchemy] Re: Declarative 0.5 tutorial fails to create tables before insert

2008-07-17 Thread Michael Bayer


On Jul 17, 2008, at 4:44 PM, Kris Kennaway wrote:



 from sqlalchemy.orm import sessionmaker
 Session = sessionmaker(bind=engine)
 session = Session()

 ed_user = User('ed', 'Ed Jones', 'edspassword')
 session.add(ed_user)

 session.add_all([
     User('wendy', 'Wendy Williams', 'foobar'),
     User('mary', 'Mary Contrary', 'xxg527'),
     User('fred', 'Fred Flinstone', 'blah')])

 session.commit()


you're missing a metadata.create_all() in there.   From your  
declarative base, call Base.metadata.create_all(engine).
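
in context, a minimal sketch of where that call goes, reusing the tutorial's names:

Base.metadata.create_all(engine)    # emit CREATE TABLE statements first

Session = sessionmaker(bind=engine)
session = Session()
session.add(User('ed', 'Ed Jones', 'edspassword'))
session.commit()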


