[sqlalchemy] Re: getting data from primary keys

2009-09-15 Thread C.T. Matsumoto
Thanks for the help!

T

On Tue, Sep 15, 2009 at 12:33 PM, King Simon-NFHD78  wrote:

>
> > -Original Message-
> > From: sqlalchemy@googlegroups.com
> > [mailto:sqlalch...@googlegroups.com] On Behalf Of C.T. Matsumoto
> > Sent: 15 September 2009 07:21
> > To: sqlalchemy@googlegroups.com
> > Subject: [sqlalchemy] Re: getting data from primary keys
> >
> > That did the trick.
> >
> > Thanks a lot.
> >
> > Your solution uses the orm sessionmaker. Till now my script
> > was relying on sqlalchemy's expression
> > language. Is there some way of doing the same with the
> > expression language? Or would it get too
> > complicated? (Just curious)
> >
> > Cheers,
> >
> > T
> >
>
> How about:
>
>  import sqlalchemy as sa
>
>  key_cols = [c for c in table.primary_key.columns]
>   query = sa.select(key_cols)
>  print query.execute().fetchall()
>
> Or
>
>  print connection.execute(query).fetchall()
>
> Hope that helps,
>
> Simon
>
> >
>


-- 
Todd Matsumoto
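
For reference, a small self-contained sketch of Simon's expression-language suggestion. The users table, its columns, and the in-memory SQLite engine here are illustrative assumptions, not part of the original exchange:

import sqlalchemy as sa

engine = sa.create_engine('sqlite:///:memory:')
meta = sa.MetaData(bind=engine)
users = sa.Table('users', meta,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('name', sa.String(50)))
meta.create_all()
engine.execute(users.insert(), [{'name': 'a'}, {'name': 'b'}])

# select only the primary key columns, as in Simon's example
key_cols = list(users.primary_key.columns)
query = sa.select(key_cols)
print query.execute().fetchall()                   # relies on the bound metadata
print engine.connect().execute(query).fetchall()   # explicit connection form

Both forms run the same SELECT; the second just avoids relying on bound metadata.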




[sqlalchemy] Re: 0.55, orm, varying relation join on criteria

2009-09-15 Thread me

Thanks for the (insanely fast) help! I wanted to avoid doing something
unnecessarily odd.

I ended up wrapping the relation in an object proxy when passing it to
the join. The proxy ANDs additional criteria into the primaryjoin or
secondaryjoin attributes of the relation, based on what is supplied
when creating the proxy. Though clearly odd, it seems to work in general.
I'm not sure if this may break ORM magic somewhere down the line...

The code looks like this:

query(User).outerjoin((Email, RelationProxy(User.emails, Email.name !=
"bogus")))

The object proxy code is based on the following, if anyone's
interested:

http://code.activestate.com/recipes/496741/
http://code.activestate.com/recipes/519639/

On Sep 14, 2:13 pm, "Michael Bayer"  wrote:
> me wrote:
>
> > For certain orm queries with a 1-to-many relation i want to left outer
> > join and then update the "on-clause" for that relation. Since the
> > criteria changes between queries I cannot fix the join criteria when
> > specifying my object/table mappings.
>
> > For example:
> >     tables: user, email
> >     relation:  user.emails (1-many)
>
> >     select *
> >     from user
> >     left outer join email on email.user_id = user.id and email.name
> > like '%hello%'
>
> > While this is easy to write as a one off query in my case I need to be
> > able to add variable filtering to the join on-clause and in a way that
> > hopefully works for more complex relations.
>
> > So e.g. if I have a query built like this:
> >     query(user).outerjoin((email, emails))
>
> > Is there a general way to add to the primary/secondary join criteria
> > that is pulled from this emails relation? Or is there a better way to
> > express this in SA that I've missed?
>
> the contract of query.outerjoin(SomeClass.property) is that you're doing a
> plain join from A to B along pre-established routes.   If you'd like the
> criterion of the ON clause to be customized, the standard route is to
> spell out the entire thing you want completely.   The only potential time
> saver here would be if you held onto the primaryjoin aspect of the
> relation and used it in an AND clause, which at first looks like:
>
> query(User).outerjoin((Email, and_(email_primary_join, <additional criteria>)))
>
> the next level would be that you'd pull "email_primary_join" from the
> mapping.  You can get at this via User.emails.property.primaryjoin.
>
> at the moment that's as automated as it gets as far as what's built in.
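
A short sketch of the route described above, reusing the configured primaryjoin of the relation. It assumes the User/Email mapping from the example and is untested:

from sqlalchemy import and_

# the ON clause SQLAlchemy would normally use for User.emails
email_join = User.emails.property.primaryjoin

query(User).outerjoin(
    (Email, and_(email_join, Email.name != "bogus")))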



[sqlalchemy] Declarative base - Joined Table Inheritance

2009-09-15 Thread Jarrod Chesney

Hi all,
I've been reading the documentation for ages and I can't figure out
why, when I print the results of a query on my inherited table, it just
prints them as the base type.

I was hoping someone here would be nice enough to help me solve this
problem.

I thought the last print statement would print an instance of the
_UtConfReconcilerActionSnapshot class, but it doesn't.

I've got one record in both tables, with 'id' = 1 in each table.
What am I doing wrong?

<<< Begin code >

import os
import sys

from sqlalchemy import Table, Column, Integer, String, MetaData, \
    ForeignKey, CheckConstraint
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

__DATABASE_NAME__ = 'UtConfSom.sqlite'

Base = declarative_base()

# == Reconciler Actions ===

class _UtConfReconcilerActions(Base):

    __tablename__ = 'tblReconcilerActions'
    __mapper_args__ = {'with_polymorphic': '*'}

    # 1 to Many relationship to the managed
    id = Column(Integer, primary_key=True)
    action = Column(String, CheckConstraint(
        "action in ('SNAPSHOT', 'COMPARE', 'UPGRADE')"))
    object_type = Column(String, CheckConstraint(
        "object_type in ('ALL', 'SCHEMA', 'TABLE', 'COLUMN', 'INDEX')"))

    def __repr__(self):
        return ("'%s'" % _UtConfReconcilerActions.__name__
                + "\n  id='%i'" % self.id
                + "\n  managed_id='%i'" % self.managed_id
                + "\n  action='%s'" % self.action
                + "\n  object_type='%s'" % self.object_type
                )


class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):

    __tablename__ = 'tblReconcilerActionSnapshot'
    # __mapper_args__ = {'with_polymorphic': '*'}
    __mapper_args__ = {'polymorphic_identity': 'snapshot',
                       'with_polymorphic': '*'}

    # Joined table inheritance
    id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
                primary_key=True)

    revision = Column(String)
    comment = Column(String)

    def __repr__(self):
        return (_UtConfReconcilerActions.__repr__(self)
                + "\n  '%s'" % _UtConfReconcilerActionSnapshot.__name__
                + "\n  id='%s'" % self.revision
                + "\n  revision='%s'" % self.revision
                )


__db_exists = os.path.exists(__DATABASE_NAME__)

engine = create_engine('sqlite:///' + __DATABASE_NAME__)

# New database, create the tables
if not __db_exists:
    Base.metadata.create_all(engine)
    print >> sys.stderr, ("WARNING - Creating empty '%s' database" %
                          __DATABASE_NAME__)

Session = sessionmaker(bind=engine)
session = Session()

print session.query(_UtConfReconcilerActions).with_polymorphic('*').first()

<< end code >>
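
For comparison, a minimal joined-table inheritance sketch in the style of the SQLAlchemy declarative documentation. Note the discriminator column and the 'polymorphic_on' setting on the base class; that is the piece the code above does not define. Class, table, and column names here are illustrative only:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Action(Base):
    __tablename__ = 'actions'
    id = Column(Integer, primary_key=True)
    kind = Column(String(20))                 # discriminator column
    __mapper_args__ = {'polymorphic_on': kind,
                       'polymorphic_identity': 'action'}

class SnapshotAction(Action):
    __tablename__ = 'snapshot_actions'
    id = Column(Integer, ForeignKey('actions.id'), primary_key=True)
    revision = Column(String(40))
    __mapper_args__ = {'polymorphic_identity': 'snapshot'}

# With the discriminator in place, querying the base class returns
# SnapshotAction instances for rows whose kind is 'snapshot'.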




[sqlalchemy] Re: intersphinx docs

2009-09-15 Thread Brett

Sorry, I had put a "." in 05

On Sep 15, 4:22 pm, "Michael Bayer"  wrote:
> Brett wrote:
>
> > There is a Sphinx module called intersphinx that allows API docs to be
> > cross referenced between sites.  To make it work you have to upload
> > the objects.inv file to you web server so that when someone generates
> > their own Sphinx-based docs it knows how to reference the remote docs.
>
> > Does SQLAlchemy provide the objects.inv file on its webserver and if
> > not would it be possible to upload it?
>
> I wasn't familiar with this file but sphinx seems to be putting it where
> expected:
>
> http://www.sqlalchemy.org/docs/05/objects.inv
> http://www.sqlalchemy.org/docs/06/objects.inv
>
>



[sqlalchemy] Re: intersphinx docs

2009-09-15 Thread Michael Bayer

Brett wrote:
>
> There is a Sphinx module called intersphinx that allows API docs to be
> cross referenced between sites.  To make it work you have to upload
> the objects.inv file to you web server so that when someone generates
> their own Sphinx-based docs it knows how to reference the remote docs.
>
> Does SQLAlchemy provide the objects.inv file on its webserver and if
> not would it be possible to upload it?

I wasn't familiar with this file but sphinx seems to be putting it where
expected:

http://www.sqlalchemy.org/docs/05/objects.inv
http://www.sqlalchemy.org/docs/06/objects.inv
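
For anyone wiring this up locally, a sketch of the corresponding conf.py settings in a Sphinx project of your own; the mapping key and the choice of the 0.5 docs URL are assumptions:

# conf.py of a hypothetical local Sphinx project
extensions = ['sphinx.ext.intersphinx']
intersphinx_mapping = {
    'sqlalchemy': ('http://www.sqlalchemy.org/docs/05', None),
}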


>
>
> >
>





[sqlalchemy] intersphinx docs

2009-09-15 Thread Brett

There is a Sphinx module called intersphinx that allows API docs to be
cross referenced between sites.  To make it work you have to upload
the objects.inv file to you web server so that when someone generates
their own Sphinx-based docs it knows how to reference the remote docs.

Does SQLAlchemy provide the objects.inv file on its webserver and if
not would it be possible to upload it?





[sqlalchemy] Re: Declarative issues, with compound foreign key

2009-09-15 Thread Gregg Lind
Thank you both for the advice.  Dern NULLs causing trouble again.

GL

On Tue, Sep 15, 2009 at 4:34 PM, Conor  wrote:

>
> On Sep 15, 4:08 pm, "Michael Bayer"  wrote:
> > Gregg Lind wrote:
> > > What I think I'm seeing is that an object can be created even without
> it's
> > > ForeignKeyConstraint being filled.
> >
> > > To run the test code below:
> >
> > > $ dropdb test18; createdb test18; python testcode.py
> >
> > "on" is not defined:
> >
> > ForeignKeyConstraint(['regstring_id', 'regstring','regstring_type'],
> > [A1String.id, A1String.string, A1String.origin], on),
> >
> > when removing "on", the row inserts with "regstring_id" as NULL.  PG
> > appears to accept this so I would assume PG considers a three-column
> > foreign key with one NULL to be NULL.  If I try it with all three columns
> > not null, then you get the constraint error.
> >
> > SQLalchemy itself relies upon the database to enforce constraints.In
> > this case you should have the "NOT NULL" constraint on the Product
> > columns.
> >
>
> To expand on this:
> Most (all?) databases default to a MATCH SIMPLE policy for foreign key
> constraints: if any FK column is NULL then the FK constraint is
> satisfied (regardless of the actual values of the non-null columns).
> It looks like you want MATCH FULL behavior: if some but not all FK
> columns are NULL then the FK constraint fails.
>
> Assuming you really do need the the FK columns to be nullable, you
> have to either add MATCH FULL to your DDL (probably have to use DDL()
> + ALTER TABLE; also requires that your database actually supports
> MATCH FULL) or add a check constraint that mimics the MATCH FULL
> behavior, e.g.:
> (regstring_id IS NULL) = (regstring IS NULL) = (regstring_type IS
> NULL)
>
> > > This builds on
> > >http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f.
> ..
> > > .
> >
> > > I understand that the foreign table can't create the referent.  (and
> > > finding
> > > the best idiom for "use one if it exists or create one") is yet be
> > > determined.  What I truly don't understand is how any instances of
> > > "Product"
> > > can be created, since there is a FK constraint that is not fulfulled.
> >
> > > 1.  Is the foreign key constraint fulfilled?
> > > 2.  Is there a good "create the referent if it doesn't exist, else use
> it"
> > > idiom?
> > > 3.  Is the polymorphic table business complicating it?  It seems liek
> the
> > > compound primary key for A1String is.
> >
> > > 
> > > from sqlalchemy.ext.declarative import
> > > declarative_base
> > > from sqlalchemy import CheckConstraint, ForeignKey, MetaData,
> > > PrimaryKeyConstraint
> > > from sqlalchemy import ForeignKeyConstraint
> > > from sqlalchemy import Table, Column, Integer, Boolean,Unicode,String
> > > from sqlalchemy.orm import relation, backref
> > > from sqlalchemy import create_engine
> > > from sqlalchemy.orm import sessionmaker
> > > from sqlalchemy.schema import DDL
> >
> > > import sys
> > > ECHO = bool((sys.argv + [False])[1])
> >
> > > ## utilties for connecting the db, printing it, etc.
> > > def print_schema(T="postgres", Base=None):
> > > ''' print print_schema will print the schema in use '''
> > > from StringIO import StringIO
> > > buf = StringIO()
> > > engine = create_engine('%s://' % T, strategy='mock',
> executor=lambda
> > > s,
> > > p='': buf.write(str(s) + p))
> > > Base.metadata.create_all(engine)
> > > return buf.getvalue()
> >
> > > def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
> > > engine = create_engine(connstring, echo=echo)
> > > Session = sessionmaker(bind=engine, autoflush=False,
> autocommit=False)
> > > session = Session()
> > > Base.metadata.bind = engine
> > > Base.metadata.create_all()
> > > return session, engine
> >
> > > def _class_repr(self):
> > > ''' print our SA class instances in a nicer way '''
> > > # ugly, use sparingly, may have performance hit
> > > d = [(k,v) for k,v in self.__dict__.iteritems() if k[0] != "_"]
> > > d = sorted(d, key=lambda x: x[0].lower())
> > > return "<%s, %s>" % (self.__class__, d)
> >
> > > Base = declarative_base()
> >
> > > class Polystring(Base):
> > > __tablename__ = 'strings'
> > > id = Column(Integer, nullable=False, primary_key=True)
> > > string = Column(String, nullable=False, primary_key=True)
> > > origin = Column(String, nullable=False, primary_key=True)
> > > __mapper_args__ = {'polymorphic_on': origin}
> >
> > > # subtype of string
> > > class A1String(Polystring):
> > > __mapper_args__ = {'polymorphic_identity': 'a1'}
> > > products = relation('Product', order_by="Product.id")
> >
> > > class Product(Base):
> > > __tablename__ = 'product'
> > > __table_args__ = (
> > >  ForeignKeyConstraint(['regstring_id', 'regstring',
> > > 'regstring_type'], [A1String.id, A1String.string
> > > , A1String.origin], on),
> > > {}
> > > )
> > > id = 

[sqlalchemy] Re: Declarative issues, with compound foreign key

2009-09-15 Thread Conor

On Sep 15, 4:08 pm, "Michael Bayer"  wrote:
> Gregg Lind wrote:
> > What I think I'm seeing is that an object can be created even without it's
> > ForeignKeyConstraint being filled.
>
> > To run the test code below:
>
> > $ dropdb test18; createdb test18; python testcode.py
>
> "on" is not defined:
>
> ForeignKeyConstraint(['regstring_id', 'regstring','regstring_type'],
> [A1String.id, A1String.string, A1String.origin], on),
>
> when removing "on", the row inserts with "regstring_id" as NULL.  PG
> appears to accept this so I would assume PG considers a three-column
> foreign key with one NULL to be NULL.  If I try it with all three columns
> not null, then you get the constraint error.
>
> SQLalchemy itself relies upon the database to enforce constraints.    In
> this case you should have the "NOT NULL" constraint on the Product
> columns.
>

To expand on this:
Most (all?) databases default to a MATCH SIMPLE policy for foreign key
constraints: if any FK column is NULL then the FK constraint is
satisfied (regardless of the actual values of the non-null columns).
It looks like you want MATCH FULL behavior: if some but not all FK
columns are NULL then the FK constraint fails.

Assuming you really do need the FK columns to be nullable, you
have to either add MATCH FULL to your DDL (probably have to use DDL()
+ ALTER TABLE; also requires that your database actually supports
MATCH FULL) or add a check constraint that mimics the MATCH FULL
behavior, e.g.:
(regstring_id IS NULL) = (regstring IS NULL) = (regstring_type IS
NULL)
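
A hedged sketch of that check, written as a table-level constraint; the chained comparison is spelled out as two explicit equalities and the constraint name is arbitrary:

from sqlalchemy import CheckConstraint

match_full_check = CheckConstraint(
    "(regstring_id IS NULL) = (regstring IS NULL) "
    "AND (regstring IS NULL) = (regstring_type IS NULL)",
    name="product_regstring_match_full")

It would sit alongside the ForeignKeyConstraint in Product.__table_args__.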

> > This builds on
> >http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f...
> > .
>
> > I understand that the foreign table can't create the referent.  (and
> > finding
> > the best idiom for "use one if it exists or create one") is yet be
> > determined.  What I truly don't understand is how any instances of
> > "Product"
> > can be created, since there is a FK constraint that is not fulfulled.
>
> > 1.  Is the foreign key constraint fulfilled?
> > 2.  Is there a good "create the referent if it doesn't exist, else use it"
> > idiom?
> > 3.  Is the polymorphic table business complicating it?  It seems liek the
> > compound primary key for A1String is.
>
> > 
> > from sqlalchemy.ext.declarative import
> > declarative_base
> > from sqlalchemy import CheckConstraint, ForeignKey, MetaData,
> > PrimaryKeyConstraint
> > from sqlalchemy import ForeignKeyConstraint
> > from sqlalchemy import Table, Column, Integer, Boolean,Unicode,String
> > from sqlalchemy.orm import relation, backref
> > from sqlalchemy import create_engine
> > from sqlalchemy.orm import sessionmaker
> > from sqlalchemy.schema import DDL
>
> > import sys
> > ECHO = bool((sys.argv + [False])[1])
>
> > ## utilties for connecting the db, printing it, etc.
> > def print_schema(T="postgres", Base=None):
> >     ''' print print_schema will print the schema in use '''
> >     from StringIO import StringIO
> >     buf = StringIO()
> >     engine = create_engine('%s://' % T, strategy='mock', executor=lambda
> > s,
> > p='': buf.write(str(s) + p))
> >     Base.metadata.create_all(engine)
> >     return buf.getvalue()
>
> > def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
> >     engine = create_engine(connstring, echo=echo)
> >     Session = sessionmaker(bind=engine, autoflush=False, autocommit=False)
> >     session = Session()
> >     Base.metadata.bind = engine
> >     Base.metadata.create_all()
> >     return session, engine
>
> > def _class_repr(self):
> >     ''' print our SA class instances in a nicer way '''
> >     # ugly, use sparingly, may have performance hit
> >     d = [(k,v) for k,v in self.__dict__.iteritems() if k[0] != "_"]
> >     d = sorted(d, key=lambda x: x[0].lower())
> >     return "<%s, %s>" % (self.__class__, d)
>
> > Base = declarative_base()
>
> > class Polystring(Base):
> >     __tablename__ = 'strings'
> >     id = Column(Integer, nullable=False, primary_key=True)
> >     string = Column(String, nullable=False, primary_key=True)
> >     origin = Column(String, nullable=False, primary_key=True)
> >     __mapper_args__ = {'polymorphic_on': origin}
>
> > # subtype of string
> > class A1String(Polystring):
> >     __mapper_args__ = {'polymorphic_identity': 'a1'}
> >     products = relation('Product', order_by="Product.id")
>
> > class Product(Base):
> >     __tablename__ = 'product'
> >     __table_args__ = (
> >          ForeignKeyConstraint(['regstring_id', 'regstring',
> > 'regstring_type'], [A1String.id, A1String.string
> > , A1String.origin], on),
> >         {}
> >     )
> >     id = Column(Integer,primary_key=True)
> >     regstring_id = Column(Integer)
> >     regstring = Column(String)
> >     regstring_type = Column(String,default="asn")
>
> > ## test code
> > session,eng = db_setup("postgres:///test18", Base=Base, echo=ECHO)
> > add = session.add
> > q = session.query
> > c = session.commit
> > r = _class_repr
>
> > A = Product(id=192832, reg

[sqlalchemy] Re: Declarative issues, with compound foreign key

2009-09-15 Thread Michael Bayer

Gregg Lind wrote:
> What I think I'm seeing is that an object can be created even without it's
> ForeignKeyConstraint being filled.
>
> To run the test code below:
>
> $ dropdb test18; createdb test18; python testcode.py

"on" is not defined:

ForeignKeyConstraint(['regstring_id', 'regstring','regstring_type'],
[A1String.id, A1String.string, A1String.origin], on),

when removing "on", the row inserts with "regstring_id" as NULL.  PG
appears to accept this so I would assume PG considers a three-column
foreign key with one NULL to be NULL.  If I try it with all three columns
not null, then you get the constraint error.


SQLAlchemy itself relies upon the database to enforce constraints.  In
this case you should have the "NOT NULL" constraint on the Product
columns.






>
> This builds on
> http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f3f2555a5e7/
> .
>
> I understand that the foreign table can't create the referent.  (and
> finding
> the best idiom for "use one if it exists or create one") is yet be
> determined.  What I truly don't understand is how any instances of
> "Product"
> can be created, since there is a FK constraint that is not fulfulled.
>
> 1.  Is the foreign key constraint fulfilled?
> 2.  Is there a good "create the referent if it doesn't exist, else use it"
> idiom?
> 3.  Is the polymorphic table business complicating it?  It seems liek the
> compound primary key for A1String is.
>
> 
> from sqlalchemy.ext.declarative import
> declarative_base
> from sqlalchemy import CheckConstraint, ForeignKey, MetaData,
> PrimaryKeyConstraint
> from sqlalchemy import ForeignKeyConstraint
> from sqlalchemy import Table, Column, Integer, Boolean,Unicode,String
> from sqlalchemy.orm import relation, backref
> from sqlalchemy import create_engine
> from sqlalchemy.orm import sessionmaker
> from sqlalchemy.schema import DDL
>
> import sys
> ECHO = bool((sys.argv + [False])[1])
>
> ## utilties for connecting the db, printing it, etc.
> def print_schema(T="postgres", Base=None):
> ''' print print_schema will print the schema in use '''
> from StringIO import StringIO
> buf = StringIO()
> engine = create_engine('%s://' % T, strategy='mock', executor=lambda
> s,
> p='': buf.write(str(s) + p))
> Base.metadata.create_all(engine)
> return buf.getvalue()
>
> def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
> engine = create_engine(connstring, echo=echo)
> Session = sessionmaker(bind=engine, autoflush=False, autocommit=False)
> session = Session()
> Base.metadata.bind = engine
> Base.metadata.create_all()
> return session, engine
>
> def _class_repr(self):
> ''' print our SA class instances in a nicer way '''
> # ugly, use sparingly, may have performance hit
> d = [(k,v) for k,v in self.__dict__.iteritems() if k[0] != "_"]
> d = sorted(d, key=lambda x: x[0].lower())
> return "<%s, %s>" % (self.__class__, d)
>
>
> Base = declarative_base()
>
> class Polystring(Base):
> __tablename__ = 'strings'
> id = Column(Integer, nullable=False, primary_key=True)
> string = Column(String, nullable=False, primary_key=True)
> origin = Column(String, nullable=False, primary_key=True)
> __mapper_args__ = {'polymorphic_on': origin}
>
> # subtype of string
> class A1String(Polystring):
> __mapper_args__ = {'polymorphic_identity': 'a1'}
> products = relation('Product', order_by="Product.id")
>
> class Product(Base):
> __tablename__ = 'product'
> __table_args__ = (
>  ForeignKeyConstraint(['regstring_id', 'regstring',
> 'regstring_type'], [A1String.id, A1String.string
> , A1String.origin], on),
> {}
> )
> id = Column(Integer,primary_key=True)
> regstring_id = Column(Integer)
> regstring = Column(String)
> regstring_type = Column(String,default="asn")
>
>
>
> ## test code
> session,eng = db_setup("postgres:///test18", Base=Base, echo=ECHO)
> add = session.add
> q = session.query
> c = session.commit
> r = _class_repr
>
>
> A = Product(id=192832, regstring="some part id")
> print r(A)
> add(A)
> c()  # commit
> print map(r,q(Product).all())
> print "somehow this managed to get in, without making a polystring, which
> it
> should be referencing."
> assert len(q(Polystring).all()) > 0, "So, where is the polystring?"
> --
>
> >
>





[sqlalchemy] relation in object mapped to select statement?

2009-09-15 Thread Bryan

The following code models a simple system that tracks the transfer of
construction tools between jobs.  Equip (equipment) is transferred
between Jobs via Shipments.

Towards the end I attempt to map a class to a select statement in
order to make reporting simple.  Instead of dealing with sql to do the
reporting, I wanted to map an object to a summarizing sql statement,
and create a sort of object model that covers most of the summarizing
I will need to do.

I can't figure out how to map an object to a select statement and
include a relation in the object.  The code below should run in python
2.6:


from sqlalchemy import *
from sqlalchemy.orm import mapper, sessionmaker, relation
from sqlalchemy.sql import *
from datetime import date

# SA objects
db = create_engine('sqlite://', echo=True)
meta = MetaData()
session = sessionmaker(bind=db)()

# Table schema
job = Table('job', meta,
    Column('id', Integer, primary_key=True),
    Column('number', Integer))

equip = Table('equip', meta,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(255)))

equip_shipment = Table('equip_shipment', meta,
    Column('id', Integer, primary_key=True),
    Column('shipDate', Date),
    Column('fromJobId', Integer, ForeignKey('job.id')),
    Column('toJobId', Integer, ForeignKey('job.id')),
    Column('isBroken', Boolean))

equip_shipment_item = Table('equip_shipment_item', meta,
    Column('id', Integer, primary_key=True),
    Column('shipmentId', Integer, ForeignKey('equip_shipment.id')),
    Column('equipId', Integer, ForeignKey('equip.id')),
    Column('qty', Integer))
meta.create_all(db)

# Objects
class KeywordInitMixin(object):
    '''Fills object's attributes with whatever keyword args were given
    to init.

    As an example, allows me to simply inherit from this class like
    this:

        class Test(KeywordInitMixin):
            pass

    ...And then create objects like this:

        t = Test(foo=1, bar='spam')
        assert t.foo == 1
        assert t.bar == 'spam'
    '''
    def __init__(self, **kwargs):
        for attr in self.ATTRS:
            if attr in kwargs:
                setattr(self, attr, kwargs[attr])
            else:
                setattr(self, attr, None)
        # Set any properties
        for attr, val in kwargs.items():
            # See if class has a property by this name
            if (hasattr(self.__class__, attr) and
                    getattr(self.__class__, attr).__class__ is property):
                setattr(self, attr, val)

    def __repr__(self):
        args = ['%s=%s' % (arg, val) for arg, val in
                self.__dict__.items() if arg in self.ATTRS and val]
        args = ', '.join(args)
        name = self.__class__.__name__
        result = '%s(%s)' % (name, args)
        return result

class Job(KeywordInitMixin):
    ATTRS = ['number']
class Equip(KeywordInitMixin):
    ATTRS = ['name']
class Shipment(KeywordInitMixin):
    ATTRS = ['shipDate', 'fromJob', 'toJob', 'isBroken']
class ShipmentItem(KeywordInitMixin):
    ATTRS = ['shipment', 'equip', 'qty']

# Map schema to objects
mapper(Job, job)
mapper(Equip, equip)
mapper(Shipment, equip_shipment,
    properties={
        'fromJob': relation(Job,
            primaryjoin=equip_shipment.c.fromJobId==job.c.id),
        'toJob': relation(Job,
            primaryjoin=equip_shipment.c.toJobId==job.c.id),
    }
)
mapper(ShipmentItem, equip_shipment_item,
    properties={
        'shipment': relation(Shipment, backref='items'),
        'equip': relation(Equip)
    }
)

# -
# Create some test data
# -
# Jobs
warehouse1 = Job(number=10001)
job1 = Job(number=1)
job2 = Job(number=2)
# Equipment
bClamps = Equip(name=u'Bridge Clamps')
cLocks = Equip(name=u'420 Channel Lock')
smallLock = Equip(name=u'Small 3210 Lock')
toolChest = Equip(name=u'Tool Chest')
# Add to orm
session.add_all([warehouse1, job1, job2, bClamps, cLocks])
# Ship tools to job 1
ship1 = Shipment(fromJob=warehouse1, toJob=job1, shipDate=date.today())
ship1.items.append(ShipmentItem(qty=5, equip=bClamps))
# Transfer tools from job 1 to job 2
ship2 = Shipment(fromJob=job1, toJob=job2, shipDate=date.today())
ship2.items.append(ShipmentItem(qty=2, equip=bClamps))
# Job 1 returns some tools to the warehouse
ship3 = Shipment(fromJob=job1, toJob=warehouse1, shipDate=date.today())
ship3.loadedBy = ship3.deliveredBy = 'jane doe'
ship3.items.append(ShipmentItem(qty=2, equip=smallLock))
# Add to orm
session.add_all([ship1, ship2, ship3])
# Job 1 breaks some tools
broken = Shipment(fromJob=job1, isBroken=True, shipDate=date.today())
broken.items.append(ShipmentItem(qty=1, equip=smallLock))
# Break more of same equip, but in different line item to test aggregation
broken.items.append(ShipmentItem(qty=4, equip=smallLock))
# Job 2 breaks stuff too
broken2 = Shipment(fromJob=job2, isBroken=
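
To make the question concrete, here is a hedged sketch of one way such a reporting mapping might look: a class mapped to an aliased, aggregating SELECT, with view-only relations back to Job and Equip. The summary query, the ShipmentSummary name, and the relation options are illustrative assumptions, not part of the original post:

# total quantity shipped per (equipment, destination job)
shipment_summary = select(
    [equip_shipment_item.c.equipId.label('equip_id'),
     equip_shipment.c.toJobId.label('job_id'),
     func.sum(equip_shipment_item.c.qty).label('total_qty')],
    from_obj=[equip_shipment_item.join(equip_shipment)],
    group_by=[equip_shipment_item.c.equipId, equip_shipment.c.toJobId]
).alias('shipment_summary')

class ShipmentSummary(object):
    pass

mapper(ShipmentSummary, shipment_summary,
    primary_key=[shipment_summary.c.equip_id, shipment_summary.c.job_id],
    properties={
        # view-only: the summary rows are read-only reporting data
        'equip': relation(Equip, viewonly=True,
            primaryjoin=shipment_summary.c.equip_id == equip.c.id,
            foreign_keys=[shipment_summary.c.equip_id]),
        'job': relation(Job, viewonly=True,
            primaryjoin=shipment_summary.c.job_id == job.c.id,
            foreign_keys=[shipment_summary.c.job_id]),
    })

# e.g.
# for row in session.query(ShipmentSummary):
#     print row.equip.name, row.job.number, row.total_qty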

[sqlalchemy] Declarative issues, with compound foreign key

2009-09-15 Thread Gregg Lind
What I think I'm seeing is that an object can be created even without it's
ForeignKeyConstraint being filled.

To run the test code below:

$ dropdb test18; createdb test18; python testcode.py

This builds on
http://groups.google.com/group/sqlalchemy/browse_thread/thread/eb240f3f2555a5e7/
.

I understand that the foreign table can't create the referent (and finding
the best idiom for "use one if it exists or create one" is yet to be
determined).  What I truly don't understand is how any instances of "Product"
can be created, since there is a FK constraint that is not fulfilled.

1.  Is the foreign key constraint fulfilled?
2.  Is there a good "create the referent if it doesn't exist, else use it"
idiom?
3.  Is the polymorphic table business complicating it?  It seems like the
compound primary key for A1String is.


from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import CheckConstraint, ForeignKey, MetaData, PrimaryKeyConstraint
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import Table, Column, Integer, Boolean, Unicode, String
from sqlalchemy.orm import relation, backref
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import DDL

import sys
ECHO = bool((sys.argv + [False])[1])

## utilities for connecting the db, printing it, etc.
def print_schema(T="postgres", Base=None):
    ''' print print_schema will print the schema in use '''
    from StringIO import StringIO
    buf = StringIO()
    engine = create_engine('%s://' % T, strategy='mock',
                           executor=lambda s, p='': buf.write(str(s) + p))
    Base.metadata.create_all(engine)
    return buf.getvalue()

def db_setup(connstring='sqlite:///:memory:', echo=False, Base=None):
    engine = create_engine(connstring, echo=echo)
    Session = sessionmaker(bind=engine, autoflush=False, autocommit=False)
    session = Session()
    Base.metadata.bind = engine
    Base.metadata.create_all()
    return session, engine

def _class_repr(self):
    ''' print our SA class instances in a nicer way '''
    # ugly, use sparingly, may have performance hit
    d = [(k, v) for k, v in self.__dict__.iteritems() if k[0] != "_"]
    d = sorted(d, key=lambda x: x[0].lower())
    return "<%s, %s>" % (self.__class__, d)


Base = declarative_base()

class Polystring(Base):
    __tablename__ = 'strings'
    id = Column(Integer, nullable=False, primary_key=True)
    string = Column(String, nullable=False, primary_key=True)
    origin = Column(String, nullable=False, primary_key=True)
    __mapper_args__ = {'polymorphic_on': origin}

# subtype of string
class A1String(Polystring):
    __mapper_args__ = {'polymorphic_identity': 'a1'}
    products = relation('Product', order_by="Product.id")

class Product(Base):
    __tablename__ = 'product'
    __table_args__ = (
        ForeignKeyConstraint(
            ['regstring_id', 'regstring', 'regstring_type'],
            [A1String.id, A1String.string, A1String.origin], on),
        {}
    )
    id = Column(Integer, primary_key=True)
    regstring_id = Column(Integer)
    regstring = Column(String)
    regstring_type = Column(String, default="asn")


## test code
session, eng = db_setup("postgres:///test18", Base=Base, echo=ECHO)
add = session.add
q = session.query
c = session.commit
r = _class_repr


A = Product(id=192832, regstring="some part id")
print r(A)
add(A)
c()  # commit
print map(r, q(Product).all())
print "somehow this managed to get in, without making a polystring, which it should be referencing."
assert len(q(Polystring).all()) > 0, "So, where is the polystring?"
--




[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?

2009-09-15 Thread Michael Bayer

bojanb wrote:
>
> Yes, I want to map to a join between two classes which are parts of
> joined table inheritance. I don't think it's complex - it fits very
> naturally with the problem I am modeling.
>
> When I said it's efficient, I meant that the generated SQL is optimal,
> ie. the same as I would write if I were doing it by hand. "eagerload"
> and "with_polymorphic" result in SQL that also queries on fields of
> sibling classes (ie. which inherit from the same superclass but are
> not in the inheritance path of the final class) which are unnecessary,
> and contains a subquery (which I believe is not optimal).

with_polymorphic can be set against any subset of classes, not just '*'.

>
> My understanding is that performing joins on indexed fields is what
> RDBMS do well. However, if the query turned out to be too slow I can
> always switch to single-table inheritance - whether I use joined-table
> or single-table inheritance is just an implementation detail (as I
> understand it).

oh that query is going to be pretty slow for sure (though "slow" is a
relative term).

>
> The problem is not that sequence is not firing off, it's that it's
> firing for a sequence that doesn't exist. In the code above, it's
> trying to get the next value from "supervisor_relations_id" sequence,
> but that sequence doesn't exist because of inheritance. It should be
> trying to get from "relations_id_sequence" but for some reason it
> isn't. If you run the code you can see what's going on exactly in the
> SQL echo.

I don't have the time most of today to get into it so I can't confirm
what's going on.  Any chance you could map to a straight join of all four
tables instead of a join to two sub-joins ?






>
> I will play around with MapperExtension and single-table inheritance
> and see what I get. However, I just thought that since selects and
> updates work so nicely in this setup, create should also work in the
> same way.
>
> On Sep 15, 4:32 pm, "Michael Bayer"  wrote:
>> bojanb wrote:
>>
>> > The problem is when I have an object mapped against two tables, both
>> > of which are part of an inheritance hierarchy. I managed to
>> > synchronize the foreign key with the primary key (per the
>> > documentation link you provided). However, SQLAlchemy doesn't (or I
>> > can't instruct it how to) set the polymorphic discrimintaor fields
>> > appropriately. I can set them manually, but then insert fails because
>> > it looks for the sequence object on the inherited table, which doesn't
>> > exist (it exist only on the root table of the inheritance hierarchy).
>>
>> > Here's example code. In brief, I have a Person->Employee and Relation-
>> >>SupervisorRelation as two independent inheritance hierarchies.
>> > Relation is defined between two Persons, and SupervisorRelation
>> > between two Employees. I want to hide this as an implementation and
>> > have a Subordinate class that the programmer down the line will work
>> > with. Subordinate contains fields from Employee and
>> > SupervisorRelation. Querying on Subordinate works (efficiently, too),
>> > and so does attribute modification. I would like to be able to create
>> > it also (after populating the required fields and commit, the
>> > underlying engine should create both a new Employee and a new
>> > SupervisorRelation).
>>
>> let me get this straight.   you want to map to a JOIN, which itself is
>> JOINed against two joined-table inheritance subclasses.
>>
>> That is
>>
>>      +--- join  --+
>>      |                    |
>>     join                join
>>
>> and each call to Query() would emit a JOIN against two sub-JOINs.
>>
>> is this correct ?  is there a reason this need be so complex ?  (and its
>> efficient ? really ?  a query like that would bring any DB to a halt on
>> a
>> large dataset, I would think...)
>>
>> If the issue is just a sequence not firing off, an immediate workaround
>> would be to fire the sequence off yourself.  you can even do this in a
>> MapperExtension.before_insert() (good place for your polymorphic
>> identity
>> setting too).  I don't as yet understand why the normal sequence firing
>> wouldn't be working here, is one firing off and the other not ?
>>
>>
>>
>> > I hope this makes sense. Here's the code. When run, it throws
>> > "ProgrammingError: (ProgrammingError) relation
>> > "supervisor_relations_id_seq" does not exist"
>>
>> > 
>>
>> > from sqlalchemy import create_engine, Table, Column, Integer, String,
>> > MetaData, ForeignKey
>> > from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload,
>> > join
>> > from sqlalchemy.orm.mapper import validates
>>
>> > DB_URI='postgres://postg...@localhost/postgres' #Replace this
>> > accordingly
>> > db_engine=create_engine(DB_URI, echo=False)
>> > metadata = MetaData()
>>
>> > class Person(object):
>>
>> >     def __init__(self, name):
>> >         self.name = name
>>
>> > persons = Table('persons',
>> >                 metadata,
>> >                 Column('id', Integer, pri

[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?

2009-09-15 Thread bojanb

Yes, I want to map to a join between two classes which are parts of
joined table inheritance. I don't think it's complex - it fits very
naturally with the problem I am modeling.

When I said it's efficient, I meant that the generated SQL is optimal,
ie. the same as I would write if I were doing it by hand. "eagerload"
and "with_polymorphic" result in SQL that also queries on fields of
sibling classes (ie. which inherit from the same superclass but are
not in the inheritance path of the final class) which are unnecessary,
and contains a subquery (which I believe is not optimal).

My understanding is that performing joins on indexed fields is what
RDBMS do well. However, if the query turned out to be too slow I can
always switch to single-table inheritance - whether I use joined-table
or single-table inheritance is just an implementation detail (as I
understand it).

The problem is not that sequence is not firing off, it's that it's
firing for a sequence that doesn't exist. In the code above, it's
trying to get the next value from "supervisor_relations_id" sequence,
but that sequence doesn't exist because of inheritance. It should be
trying to get from "relations_id_sequence" but for some reason it
isn't. If you run the code you can see what's going on exactly in the
SQL echo.

I will play around with MapperExtension and single-table inheritance
and see what I get. However, I just thought that since selects and
updates work so nicely in this setup, create should also work in the
same way.
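
A hedged sketch of the MapperExtension route Michael describes; the sequence name, the discriminator column, and its value are assumptions about the example mapping, and this targets PostgreSQL:

from sqlalchemy.orm import MapperExtension, EXT_CONTINUE

class SubordinateExtension(MapperExtension):
    def before_insert(self, mapper, connection, instance):
        # fire the root-table sequence by hand
        if instance.id is None:
            instance.id = connection.execute(
                "select nextval('persons_id_seq')").scalar()
        # set the polymorphic discriminator explicitly
        instance.type = 'E'
        return EXT_CONTINUE

# attached as: mapper(Subordinate, ..., extension=SubordinateExtension())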

On Sep 15, 4:32 pm, "Michael Bayer"  wrote:
> bojanb wrote:
>
> > The problem is when I have an object mapped against two tables, both
> > of which are part of an inheritance hierarchy. I managed to
> > synchronize the foreign key with the primary key (per the
> > documentation link you provided). However, SQLAlchemy doesn't (or I
> > can't instruct it how to) set the polymorphic discrimintaor fields
> > appropriately. I can set them manually, but then insert fails because
> > it looks for the sequence object on the inherited table, which doesn't
> > exist (it exist only on the root table of the inheritance hierarchy).
>
> > Here's example code. In brief, I have a Person->Employee and Relation-
> >>SupervisorRelation as two independent inheritance hierarchies.
> > Relation is defined between two Persons, and SupervisorRelation
> > between two Employees. I want to hide this as an implementation and
> > have a Subordinate class that the programmer down the line will work
> > with. Subordinate contains fields from Employee and
> > SupervisorRelation. Querying on Subordinate works (efficiently, too),
> > and so does attribute modification. I would like to be able to create
> > it also (after populating the required fields and commit, the
> > underlying engine should create both a new Employee and a new
> > SupervisorRelation).
>
> let me get this straight.   you want to map to a JOIN, which itself is
> JOINed against two joined-table inheritance subclasses.
>
> That is
>
>      +--- join  --+
>      |                    |
>     join                join
>
> and each call to Query() would emit a JOIN against two sub-JOINs.
>
> is this correct ?  is there a reason this need be so complex ?  (and its
> efficient ? really ?  a query like that would bring any DB to a halt on a
> large dataset, I would think...)
>
> If the issue is just a sequence not firing off, an immediate workaround
> would be to fire the sequence off yourself.  you can even do this in a
> MapperExtension.before_insert() (good place for your polymorphic identity
> setting too).  I don't as yet understand why the normal sequence firing
> wouldn't be working here, is one firing off and the other not ?
>
>
>
> > I hope this makes sense. Here's the code. When run, it throws
> > "ProgrammingError: (ProgrammingError) relation
> > "supervisor_relations_id_seq" does not exist"
>
> > 
>
> > from sqlalchemy import create_engine, Table, Column, Integer, String,
> > MetaData, ForeignKey
> > from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload,
> > join
> > from sqlalchemy.orm.mapper import validates
>
> > DB_URI='postgres://postg...@localhost/postgres' #Replace this
> > accordingly
> > db_engine=create_engine(DB_URI, echo=False)
> > metadata = MetaData()
>
> > class Person(object):
>
> >     def __init__(self, name):
> >         self.name = name
>
> > persons = Table('persons',
> >                 metadata,
> >                 Column('id', Integer, primary_key=True),
> >                 Column('type', String(1), nullable=False),
> >                 Column('name', String(100), nullable=False))
>
> > class Employee(Person):
>
> >     def __init__(self, name, position):
> >         Person.__init__(self, name)
> >         self.position = position
>
> > employees = Table('employees',
> >                   metadata,
> >                   Column('id', Integer, ForeignKey('persons.id'),
> > primary_key=True),
> >                   Column('position', String(5

[sqlalchemy] query().all() OK, query().delete() can't locate bind

2009-09-15 Thread jeff.enderw...@gmail.com

I'm trying to delete in bulk using query().  query() seems to work
fine:

(Pdb) Session.query(TreeNode).filter(TreeNode.guid.in_(deadNodeGuids)).all()
[<TreeNode ...>, <TreeNode ...>]

But delete() is not happy:

(Pdb) Session.query(TreeNode).filter(TreeNode.guid.in_(deadNodeGuids)).delete()
*** UnboundExecutionError: Could not locate a bind configured on SQL
expression or this Session

I'm using 0.55, 'binds' (more than one engine), and scoped_session.
Any idea why the binds would 'stick' for the read query but not the
delete? Here's the binds:

Session.configure(binds={User:engineAccounts, TreeRoot:engineTrees,
TreeNode:engineTrees, Content:engineTrees})

TIA,
Jeff




[sqlalchemy] Re: Dynamic loader versus lazy=True

2009-09-15 Thread Alexandre Conrad

2009/9/10 Wolodja Wentland :
> Class Bar(object):
>
>    def all_foo(self):
>        foo_query.all()
>
>    def foo_startwith(self, search_string):
>        foo.query.filter(tbl.c.col.like('%s%%'% ...))

Note that a .startswith() method is already implemented in SA:

http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/schema.html?highlight=startswith#sqlalchemy.schema.Column.startswith
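
For example, with the dynamic foo_query relation from the thread (bar, tbl, and search_string as in the earlier posts), a sketch:

# same filter as the like() expression in the thread, using startswith()
bar.foo_query.filter(tbl.c.col.startswith(search_string)).all()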




[sqlalchemy] Re: Dynamic loader versus lazy=True

2009-09-15 Thread Alexandre Conrad

Keep in mind that the method on your Bar class:

    def all_foo(self):
        foo_query.all()

will return a raw *list* of Foo objects. If you append more Foo
objects to it, they won't be seen by SQLAlchemy's session, thus not
being committed. Although, if you have set on your mapper:

    properties={
        'all_foo': relation(Foo)
    })

"bar.all_foo" will return an InstrumentedList object, which somehow
knows it's related to Bar, and any new objects appended to it will be
seen by SA's session, thus changes will be committed.

(SA gurus, correct me if I'm wrong)

This behavior might help you make a choice on your lazy strategy.

Regards,
Alex
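
A short illustration of the difference, as a sketch using the thread's names (it assumes Bar and Foo are mapped as in the two approaches):

# Approach 2: 'all_foo' defined as relation(Foo) in the mapper
bar = session.query(Bar).first()
bar.all_foo.append(Foo())     # InstrumentedList: the session sees this
session.commit()              # the new Foo is flushed

# Approach 1: all_foo() as a plain method returning foo_query.all()
foos = bar.all_foo()          # plain Python list
foos.append(Foo())            # not tracked by the session; nothing is flushed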

2009/9/15 Wolodja Wentland :
> On Thu, Sep 10, 2009 at 23:27 +0200, Wolodja Wentland wrote:
>> Hi all,
>>
>> I observed that if I define a relation (foo_query) as lazy='dynamic' and
>> access all referenced entities with foo_query.all() that the query will
>> be executed every time i access it. That is not a big surprise ;-)
>>
>> In a library I am writing i want to provide methods that allow
>> pre-filtering of referenced entities and also on that provides access to
>> all entities. I am wondering if it is better/faster/.. to define *two*
>> relations for filtering and accessing all entities respectively.
>>
>> I can't really decide between the following two approaches and would be
>> happy if someone could provide some tips:
>>
>> Approach 1
>> --
>>
>> Class Bar(object):
>>
>>     def all_foo(self):
>>         foo_query.all()
>>
>>     def foo_startwith(self, search_string):
>>         foo.query.filter(tbl.c.col.like('%s%%'% ...))
>>
>> mapper(Bar,
>>        ...
>>        properties={
>>          'foo_query': relation(Foo, lazy='dynamic')
>>          })
>>
>> Approach 2
>> --
>>
>> Class Bar(object):
>>
>>     def foo_startwith(self, search_string):
>>         foo.query.filter(tbl.c.col.like('%s%%'% ...))
>>
>> mapper(Bar,
>>        ...
>>        properties={
>>          'all_foo': relation(Foo)
>>          })
>>        properties={
>>          'foo_query': relation(Foo, lazy='dynamic')
>>          })
>>
>> Which one is faster? Does it make a difference, given the
>> optimisation/cache in the database? Will it just mean more bloat in the
>> mapper definition?
>
> Nobody can help with the decision?
>
> Wolodja Wentland
>
>
>




[sqlalchemy] Re: Dynamic loader versus lazy=True

2009-09-15 Thread Wolodja Wentland
On Thu, Sep 10, 2009 at 23:27 +0200, Wolodja Wentland wrote:
> Hi all,
> 
> I observed that if I define a relation (foo_query) as lazy='dynamic' and
> access all referenced entities with foo_query.all(), the query will
> be executed every time I access it. That is not a big surprise ;-)
> 
> In a library I am writing I want to provide methods that allow
> pre-filtering of referenced entities and also one that provides access to
> all entities. I am wondering if it is better/faster/.. to define *two*
> relations for filtering and accessing all entities respectively.
> 
> I can't really decide between the following two approaches and would be
> happy if someone could provide some tips:
> 
> Approach 1
> --
> 
> Class Bar(object):
> 
> def all_foo(self):
> foo_query.all()
> 
> def foo_startwith(self, search_string):
> foo.query.filter(tbl.c.col.like('%s%%'% ...))
> 
> mapper(Bar,
>...
>properties={
>  'foo_query': relation(Foo, lazy='dynamic')
>  })
> 
> Approach 2
> --
> 
> Class Bar(object):
> 
> def foo_startwith(self, search_string):
> foo.query.filter(tbl.c.col.like('%s%%'% ...))
> 
> mapper(Bar,
>...
>properties={
>  'all_foo': relation(Foo)
>  })
>properties={
>  'foo_query': relation(Foo, lazy='dynamic')
>  })
> 
> Which one is faster? Does it make a difference, given the
> optimisation/cache in the database? Will it just mean more bloat in the
> mapper definition?

Nobody can help with the decision?

Wolodja Wentland




[sqlalchemy] Re: Mapping arbitrary selectables

2009-09-15 Thread Mike Conley
Submitted ticket #1542




[sqlalchemy] Re: do I need subqueries for this?

2009-09-15 Thread Conor

On Sep 15, 5:38 am, Crusty  wrote:
> Hey everyone,
>
> sorry for the title, I couldnt think of any way to describe this in
> short.
> I have 3 Classes, which have basically this relationship:
>
> 1 Class1 has n Class2 ( 1:n)
> 1 Class2 has n Class3 ( 1:n)
>
> So basically it looks like this:
>
> Class1
>   |-- Class2
>            |-- Class3
>
> Now if I join them all together, i get something like this:
>
> Class1     Class2       Class3
> --
>     1              1              1
>     1              1              2
>     1              2              1
>     1              2              2
>     2              1              1
>     2              1              2
>     2              2              1
>     2              2              2
>  etc
>
> so if I loop through the results i would have something like this:
>
>   for (class1, class2, class3) in results:
>         print class1, class2, class3
>
> But what I would really like to do is:
>
> for (class1, class2_results) in class1:
>        print "results for class1:
>        for (result, class3_results) in class2_results:
>                print "results for class2:"
>                for result in class3_results:
>                       print "result"
>
> which will give me an output more like this:
>
> results for class1:
>         result1
>         results for class2:
>                 result1
>      
>
> And so on.
> In short, I want to get get xxx rows of class1 repeating, but I want
> to get one result per class1, containing nested results.
>
> Is that possible and do I need subqueries for that?
>
> Greetings,
>
> Tom

As long as you have ORM relations set up (I will assume you have
Class1.class2_results and Class2.class3_results), you can use
eagerloading to get your nested loops while still sending only one
query to the database:
q = session.query(Class1)
q = q.options(eagerload_all("class2_results.class3_results"))
for class1 in q:
    print "results for class1:"
    for class2 in class1.class2_results:
        print "results for class2:"
        for class3 in class2.class3_results:
            print "result"

The generated SQL will look like:
SELECT <...>
FROM Class1 LEFT OUTER JOIN Class2 ON <...> LEFT OUTER JOIN Class3 ON <...>

If you need to join the classes manually (to use Class2 and/or Class3
in an ORDER BY clause, for example), you can use contains_eager to
notify sqlalchemy about those joins:

q = session.query(Class1)
q = q.outerjoin(Class1.class2_results)
q = q.outerjoin(Class2.class3_results)
q = q.options(contains_eager("class2_results"))
q = q.options(contains_eager("class2_results.class3_results"))
for class1 in q:
    print "results for class1:"
    for class2 in class1.class2_results:
        print "results for class2:"
        for class3 in class2.class3_results:
            print "result"

Hope it helps,
-Conor



[sqlalchemy] do I need subqueries for this?

2009-09-15 Thread Crusty

Hey everyone,

sorry for the title, I couldn't think of any way to describe this in
short.
I have 3 Classes, which have basically this relationship:

1 Class1 has n Class2 ( 1:n)
1 Class2 has n Class3 ( 1:n)

So basically it looks like this:

Class1
  |-- Class2
        |-- Class3

Now if I join them all together, I get something like this:

Class1     Class2     Class3
------------------------------
   1          1          1
   1          1          2
   1          2          1
   1          2          2
   2          1          1
   2          1          2
   2          2          1
   2          2          2
 etc

so if I loop through the results I would have something like this:

for (class1, class2, class3) in results:
    print class1, class2, class3

But what I would really like to do is:

for (class1, class2_results) in class1:
    print "results for class1:"
    for (result, class3_results) in class2_results:
        print "results for class2:"
        for result in class3_results:
            print "result"

which will give me an output more like this:

results for class1:
        result1
        results for class2:
                result1

And so on.
In short, instead of getting xxx rows of class1 repeating, I want
to get one result per class1, containing nested results.

Is that possible and do I need subqueries for that?

Greetings,

Tom




[sqlalchemy] Re: Mapping arbitrary selectables

2009-09-15 Thread Michael Bayer

Mike Conley wrote:
> When mapping an arbitrary selectable, does mapper's primary_key argument
> need to be a primary key in the base table?
> Using 0.5.6, but I seem to remember same behavior in earlier versions.
>
> This works and does not generate any errors:
>
> t1 = Table('t1', meta, Column('foo', Integer, primary_key=True))
> s1 = select([t1.c.foo])
> class One(object): pass
> mapper(One, s1, primary_key=[s1.c.foo])## Note: also OK without pk
> argument
>
> This raises an exception complaining about the primary key
>
> t2 = Table('t2', meta, Column('bar', Integer))
> s2 = select([t2.c.bar])
> class Two(object): pass
> mapper(Two, s2, primary_key=[s2.c.bar])  # same error using [t2.c.bar]
>
> ArgumentError: Mapper Mapper|Two|%(31476816 anon)s could not assemble any
> primary key columns for mapped table '%(31476816 anon)s'

that seems like a bug to me.   if you gave the mapper primary_key, and the
column is on the selectable, that error should never raise.   should not
matter what is present on the underlying Table objects.





[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?

2009-09-15 Thread Michael Bayer
              Column('id', Integer, primary_key=True),
>>                 Column('name', String(100), nullable=False))
>>
>> class Employee(object):
>>
>>     def __init__(self, name, position):
>>         self.name = name
>>         self.position = position
>>
>> employees = Table('employees',
>>                   metadata,
>>                   Column('id', Integer,
>> ForeignKey('persons.id'),primary_key=True),
>>                   Column('position', String(50), nullable=False))
>>
>> class Meeting(object):
>>
>>     def __init__(self, date, person_from, person_to):
>>         self.date = date
>>         self.person_from = person_from
>>         self.person_to = person_to
>>
>> meetings = Table('meetings',
>>                  metadata,
>>                  Column('id', Integer, primary_key=True),
>>                  Column('date', String(8), nullable=False),
>>                  Column('person_from_id', Integer,
>> ForeignKey('persons.id'), nullable=False),
>>                  Column('person_to_id', Integer,
>> ForeignKey('persons.id'),
>> nullable=False))
>>
>> mapper(Employee, employees.join(persons), properties={
>>     'id':[persons.c.id, employees.c.id]
>>
>> })
>>
>> mapper(Meeting, meetings, properties={
>>         'person_from': relation(Employee,
>> primaryjoin=(meetings.c.person_from_id==persons.c.id)),
>>         'person_to': relation(Employee,
>> primaryjoin=(meetings.c.person_to_id==persons.c.id)),
>>         })
>>
>> if __name__ == '__main__':
>>     metadata.create_all(db_engine)
>>     s=sessionmaker(bind=db_engine)()
>>
>>     john = Employee('John', 'person')
>>     peter = Employee('Peter', 'clerk')
>>     jack = Employee('Jack', 'manager')
>>     m1 = Meeting('20090914', peter, john)
>>     m2 = Meeting('20090915', peter, jack)
>>     s.add_all([john, peter, jack, m1, m2])
>>     s.commit()
>>
>>     #We now want to print the names and positions of everyone Peter has
>> ever met with
>>     peters_meetings =
>> s.query(Meeting).options(eagerload('person_to')).filter_by(person_from=peter).all()
>>     for meeting in peters_meetings:
>>         print meeting.date,
>> meeting.person_to.name,meeting.person_to.position
>>
>>
>>
>>
>>
>> > On Sep 14, 4:31 pm, "Michael Bayer"  wrote:
>> >> bojanb wrote:
>>
>> >> > The root of the problem is inheritance. Let's say that I have a
>> Person
>> >> > class and an Employee class that inherits from it. I also have a
>> >> > Meeting class that records meetings between two persons.
>>
>> >> > A query on Meeting will always lazy load Employee's attributes,
>> >> > regardless of any lazy/eagerload settings. E.g. if I want to print
>> the
>> >> > list of names of all persons somebody had meetings with and also
>> their
>> >> > position if they are employees (null if they're not), it will
>> always
>> >> > be done lazily. This is bad when I have, let's say, 100.000
>> Meetings.
>>
>> >> > I guess I can build a custom join and work from that, but if I have
>> >> > two levels of inheritance on one side and three levels on the other
>> >> > side, I will have to write a six-way join, and this, I'm sure
>> you'll
>> >> > agree, sort of defeats the purpose of an object-relational mapper.
>>
>> >> > Using classes mapped against multiple tables would elegantly solve
>> >> > this problem, if I could only instantiate them (see my original
>> post).
>>
>> >> > Here's the code that shows attributes of inherited objects are
>> loaded
>> >> > lazily:
>>
>> >> oh.  you want with_polymorphic() for this.
>>
>> >>http://www.sqlalchemy.org/docs/05/mappers.html#controlling-which-tabl...
>>
>> >> > 
>>
>> >> > from sqlalchemy import create_engine, Table, Column, Integer,
>> String,
>> >> > MetaData, ForeignKey
>> >> > from sqlalchemy.orm import mapper, relation, sessionmaker,
>> eagerload
>> >> > from sqlalchemy.orm.m

[sqlalchemy] Mapping arbitrary selectables

2009-09-15 Thread Mike Conley
When mapping an arbitrary selectable, does mapper's primary_key argument
need to be a primary key in the base table?
Using 0.5.6, but I seem to remember same behavior in earlier versions.

This works and does not generate any errors:

t1 = Table('t1', meta, Column('foo', Integer, primary_key=True))
s1 = select([t1.c.foo])
class One(object): pass
mapper(One, s1, primary_key=[s1.c.foo])  ## Note: also OK without pk argument

This raises an exception complaining about the primary key

t2 = Table('t2', meta, Column('bar', Integer))
s2 = select([t2.c.bar])
class Two(object): pass
mapper(Two, s2, primary_key=[s2.c.bar])  # same error using [t2.c.bar]

ArgumentError: Mapper Mapper|Two|%(31476816 anon)s could not assemble any
primary key columns for mapped table '%(31476816 anon)s'



-- 
Mike Conley




[sqlalchemy] Re: How to instantiate objects of a class mapped against multiple tables?

2009-09-15 Thread bojanb
if __name__ == '__main__':
    metadata.create_all(db_engine)
    s=sessionmaker(bind=db_engine)()
    try:
        jack = Employee('Jack', 'manager')
        s.add(jack)
        s.commit()
        # Here we try to create a Subordinate object which should
        # automatically create dependent objects
        db_engine.echo = True
        subordinate = Subordinate()
        subordinate.person_to = jack
        subordinate.name = 'Peter'
        subordinate.position = 'clerk'
        subordinate.additional_info = 'Works for Jack since 2007'
        subordinate.type = 'E'
        subordinate.relation_type='S'
        s.add(subordinate)
        s.commit() #Fails

    finally:
        db_engine.echo = False
        s.close()
        metadata.drop_all(db_engine)




 Here's the code that demonstrates the problem from my original
question (the previous code was in response to your reply which

On Sep 14, 6:52 pm, "Michael Bayer"  wrote:
> bojanb wrote:
>
> > Actually you can't use with_polymorphic() in the query because Meeting
> > is not an inherited object (one would get an InvalidRequestError if
> > one tried). But plugging:
>
> > with_polymorphic='*'
>
> > in the mapper for Person makes the eagerload work in the code above.
>
> > However, we're off on a tangent. I still don't know how to instantiate
> > objects of a class mapped against two tables when they contain both an
> > autogenerated primary key from the first table and a mandatory foreign
> > key from the second...
>
> Just to clarify, the mapper on Employee with the "with_polymorphic='*'"
> *is* a mapper that is mapped against two tables, in pretty much the same
> way as a map against a plain join is represented.  So I'm assuming this is
> unsuitable only because it's your observation that the joined tables in
> your particular system are more of an "implementation detail" and you
> don't really need to represent inheritance.
>
> So, as far as synchronizing the foreign key with the primary key of the
> two tables in a mapper that is mapped to a plain join, you just map two
> columns to one attribute.  This is also in the docs, 
> at http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-agains...
> .   The tables are populated in order of foreign key dependency, and after
> each primary key generation the value is synchronized to the mapped
> attribute, where it's then available for the insert into the second table.
>
> Since I didn't read your initial (very long) email carefully enough, here
> is your sample program using that style.
>
> from sqlalchemy import create_engine, Table, Column, Integer,
> String,MetaData, ForeignKey
> from sqlalchemy.orm import mapper, relation, sessionmaker, eagerload
> from sqlalchemy.orm.mapper import validates
>
> db_engine=create_engine('sqlite://', echo=True)
> metadata = MetaData()
>
> persons = Table('persons',
>                 metadata,
>                 Column('id', Integer, primary_key=True),
>                 Column('name', String(100), nullable=False))
>
> class Employee(object):
>
>     def __init__(self, name, position):
>         self.name = name
>         self.position = position
>
> employees = Table('employees',
>                   metadata,
>                   Column('id', Integer,
> ForeignKey('persons.id'),primary_key=True),
>                   Column('position', String(50), nullable=False))
>
> class Meeting(object):
>
>     def __init__(self, date, person_from, person_to):
>         self.date = date
>         self.person_from = person_from
>         self.person_to = person_to
>
> meetings = Table('meetings',
>                  metadata,
>                  Column('id', Integer, primary_key=True),
>                  Column('date', String(8), nullable=False),
>                  Column('person_from_id', Integer,
> ForeignKey('persons.id'), nullable=False),
>                  Column('person_to_id', Integer, ForeignKey('persons.id'),
> nullable=False))
>
> mapper(Employee, employees.join(persons), properties={
>     'id':[persons.c.id, employees.c.id]
>
> })
>
> mapper(Meeting, meetings, properties={
>         'person_from': relation(Employee,
> primaryjoin=(meetings.c.person_from_id==persons.c.id)),
>         'person_to': relation(Employee,
> primaryjoin=(meetings.c.person_to_id==persons.c.id)),
>         })
>
> if __name__ == '__main__':
>     metadata.create_all(db_engine)
>     

[sqlalchemy] Re: getting data from primary keys

2009-09-15 Thread King Simon-NFHD78

> -Original Message-
> From: sqlalchemy@googlegroups.com 
> [mailto:sqlalch...@googlegroups.com] On Behalf Of C.T. Matsumoto
> Sent: 15 September 2009 07:21
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: getting data from primary keys
> 
> That did the trick.
> 
> Thanks a lot.
> 
> Your solution uses the orm sessionmaker. Till now my script 
> was relying on sqlalchemy's expression 
> language. Is there some way of doing the same with the 
> expression language? Or would it get too
> complicated? (Just curious)
> 
> Cheers,
> 
> T
> 

How about:

  import sqlalchemy as sa

  key_cols = [c for c in table.primary_key.columns]
  query = sa.select(key_cols)
  print query.execute().fetchall()
  
Or

  print connection.execute(query).fetchall()

Hope that helps,

Simon
