Re: [sqlalchemy] is there a more proper way to chain dynamic or clauses ?

2013-02-22 Thread Conor
On 02/21/2013 07:31 PM, Jonathan Vanasco wrote:
 based on a bunch of error messages, this example works...

 criteria = (('male', 35), ('female', 35))
 query = session.query(model.Useraccount)
 ands = []
 for set_ in criteria:
     ands.append(
         sqlalchemy.sql.expression.and_(
             model.Useraccount.gender == set_[0],
             model.Useraccount.age == set_[1],
         )
     )
 query = query.filter(
     sqlalchemy.sql.expression.or_(*ands)
 )
 results = query.all()

 this seems really awkward though.  is there a better way to build up a
 set of dynamic or criteria ?

For this specific case, if your database supports it, you can use the
tuple_ construct:

criteria = (('male', 35), ('female', 35))
query = session.query(model.Useraccount)
query = query.filter(sa.tuple_(model.Useraccount.gender, 
model.Useraccount.age).in_(criteria))
results = query.all()

It's cleaner and should give better index usage.
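
If the backing database lacks tuple comparison support, the original
approach can still be tightened into one comprehension. A sketch using the
same model names, assuming or_/and_ are imported from sqlalchemy:

from sqlalchemy import and_, or_

criteria = (('male', 35), ('female', 35))
query = session.query(model.Useraccount)
query = query.filter(or_(*[
    and_(model.Useraccount.gender == gender,
         model.Useraccount.age == age)
    for gender, age in criteria
]))
results = query.all()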

-Conor





[sqlalchemy] Error when ordering query by column_property

2012-11-28 Thread Conor
I'm using SQLAlchemy 0.7.9 with PostgreSQL.

I have a column_property that I'm having trouble with (test case at
http://pastebin.com/hm5CDiV4, reformatted rendered SQL at
http://pastebin.com/hztBvLUU). The column_property is included twice in
the SELECT columns clause with the same alias, causing a 'column
reference "anon_2" is ambiguous' error when executed.

I can only get this to happen if:

 1. The column_property expression is Boolean-typed, e.g. somecol =
column_property(othercol > 0). I get no problems with a regular
boolean column or if the column_property is an arithmetic
expression, e.g. somecol = column_property(othercol + 1).
 2. I order the query by the column property, e.g. q = q.order_by(somecol)
 3. The query uses LIMIT + joinedload, which causes a subquery to be
used and forces the ORDER BY clause to be included in the subquery's
SELECT columns clause.

It seems like SQLAlchemy's "is this property already in the columns
clause" code fails to detect Boolean-typed column_property objects. Does
this sound right?
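
For reference, a stripped-down sketch of the kind of mapping described
above (class and column names here are invented; the real test case is in
the pastebin link):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import column_property, joinedload, relationship

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    gadget_id = Column(Integer, ForeignKey('gadget.id'))

class Gadget(Base):
    __tablename__ = 'gadget'
    id = Column(Integer, primary_key=True)
    othercol = Column(Integer)
    # condition 1: a Boolean-typed column_property; an arithmetic
    # expression such as othercol + 1 does not trigger the error
    somecol = column_property(othercol > 0)
    widgets = relationship(Widget)

# conditions 2 and 3: ORDER BY the column_property plus LIMIT and
# joinedload, which pushes the ORDER BY into the subquery's column list:
# q = (session.query(Gadget)
#      .options(joinedload(Gadget.widgets))
#      .order_by(Gadget.somecol)
#      .limit(10))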

-Conor




Re: [sqlalchemy] Returning query results with the result number

2012-03-15 Thread Conor
On 03/15/2012 04:52 AM, Eduardo wrote:
 Hi,
 In order to avoid bottlenecks I am force to limit the number of
 returned results using LIMIT and OFFSET.
 Since I am not returning all results upon a query I need to include
 the number of hits in the result.
 somequery.count()
 somequery.limit(n).offset(m).all()
 The problem is that response time takes twice as long as for either
 the count query or the query retrieving results.
 Is there any way to do this more efficiently, to make a query first,
 then to count results and return the result chunk defined with LIMIT
 and OFFSET?
 What is the best practice for this?
 Thanks
 ED

If your DB supports window functions and you are using SQLAlchemy 0.7+,
you can include func.count().over() in your query, which effectively
includes somequery.count() as a column in each row.

So this:

somequery = session.query(SomeClass).filter(...)
count = somequery.count()
some_class_instances = somequery.limit(n).offset(m).all()

becomes:

rows = (session.query(SomeClass, func.count().over().label("count"))
        .filter(...).limit(n).offset(m).all())
if rows:
    count = rows[0][1]
    some_class_instances = [row[0] for row in rows]
else:
    # Either no rows matched or the limit+offset is out of range.
    # We will assume the former.
    count = 0
    some_class_instances = []

-Conor




Re: [sqlalchemy] Only one record with parent_id of None; constraint possible?

2012-03-14 Thread Conor
On 03/14/2012 06:37 AM, Daniel Nouri wrote:
 On Tue, Mar 13, 2012 at 5:03 PM, Conor conor.edward.da...@gmail.com wrote:
 On 03/13/2012 09:21 AM, Daniel Nouri wrote:

 I have a node with a parent_id, which may be None (for the root node).
  Can I make a SQL table constraint that says: 'there may only be one
 node with the parent_id of None' (while it's fine if many nodes share
 a parent_id that's not None)?

 Thanks,
 Daniel

 You can use a functional unique index that takes advantage of multiple NULLs
 being allowed in a unique index (beware: older MS SQL versions did not
 follow this behavior):

 CREATE UNIQUE INDEX mytable_parent_id_un ON mytable (CASE WHEN parent_id IS
 NULL THEN 1 ELSE NULL END)
 Thanks very much.  This looks like what I want.  I tried this using
 the event/DDL, but with both SQLite and Postgres, I'm getting this
 error:

 OperationalError: (OperationalError) near "CASE": syntax error
 u'CREATE UNIQUE INDEX nodes_parent_id_un ON nodes (CASE WHEN parent_id
 IS NULL THEN 1 ELSE NULL END)' ()

 Did I get the syntax wrong?

Oops, PostgreSQL requires extra parens:

CREATE UNIQUE INDEX nodes_parent_id_un ON nodes ((CASE WHEN parent_id IS NULL 
THEN 1 ELSE NULL END))

I did not see a way to create functional indexes in SQLite. You can
probably achieve the same effect with triggers.
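
If you go the trigger route on SQLite, a rough, untested sketch (the
nodes_table Table object and the trigger/message names are assumptions)
could reuse the same DDL event mechanism:

from sqlalchemy import DDL, event

single_root_trigger = DDL("""
CREATE TRIGGER nodes_single_root_ins
BEFORE INSERT ON nodes
WHEN NEW.parent_id IS NULL
     AND EXISTS (SELECT 1 FROM nodes WHERE parent_id IS NULL)
BEGIN
    SELECT RAISE(ABORT, 'only one node may have a NULL parent_id');
END
""")

# A matching BEFORE UPDATE trigger would guard against updates that set
# parent_id back to NULL.
event.listen(nodes_table, "after_create",
             single_root_trigger.execute_if(dialect="sqlite"))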

-Conor




Re: [sqlalchemy] Only one record with parent_id of None; constraint possible?

2012-03-13 Thread Conor
On 03/13/2012 09:21 AM, Daniel Nouri wrote:
 I have a node with a parent_id, which may be None (for the root node).
  Can I make a SQL table constraint that says: 'there may only be one
 node with the parent_id of None' (while it's fine if many nodes share
 a parent_id that's not None)?

 Thanks,
 Daniel


You can use a functional unique index that takes advantage of multiple
NULLs being allowed in a unique index (beware: older MS SQL versions
did not follow this behavior):

CREATE UNIQUE INDEX mytable_parent_id_un ON mytable (CASE WHEN parent_id IS 
NULL THEN 1 ELSE NULL END)

AFAIK SQLAlchemy's Index class does not support functional indexes, but
you can work around that via DDL events:

event.listen(mytable, "after_create", DDL("CREATE UNIQUE INDEX ..."))

Also, if you want to trim down index size, some databases (e.g.
PostgreSQL) support partial indexes, while others (older Oracle versions
I believe) simply do not index NULLs at all. Partial index example:

CREATE UNIQUE INDEX mytable_parent_id_un ON mytable (1) WHERE parent_id IS NULL
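
Putting the pieces together, a sketch of attaching the PostgreSQL-flavored
index DDL to table creation (mytable is assumed to be the Table object;
the extra parentheses PostgreSQL wants around the expression are included):

from sqlalchemy import DDL, event

unique_root_index = DDL(
    "CREATE UNIQUE INDEX mytable_parent_id_un ON mytable "
    "((CASE WHEN parent_id IS NULL THEN 1 ELSE NULL END))")

event.listen(mytable, "after_create",
             unique_root_index.execute_if(dialect="postgresql"))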

-Conor




Re: [sqlalchemy] How do I change enum values after previously creating a table?

2012-02-08 Thread Conor
On 02/08/2012 07:29 PM, Jackson, Cameron wrote:

 One of my tables used to have something like:

 role = Column(Enum('user', 'admin', name="User_Roles"))

  

 Now I want to change it to:

 role = Column(Enum('user', 'superuser', name="User_Roles"))

  

 But I can't figure out how to change the enum values in the database.
 I've tried dropping the table from pgAdmin and recreating it with
 SQLAlchemy, which all seems to go fine, but then when I go back to
 pgAdmin and try to enter a new user with the role 'superuser', I get
 an error back:

  

 An error has occurred:

  

 ERROR: invalid input value for enum "User_Roles": "superuser"

 LINE 1: ..., password, role) VALUES ('john'::text, 'pass'::text,
 'superuser...

  
 ^

I'm guessing the old "User_Roles" type still exists in the database. You
can drop it with DROP TYPE "User_Roles".

 I've tried searching through pg_admin for where this enum is being
 defined, but I can't find it. Can anyone tell me where in pgAdmin I
 need to go to change or delete the enum, or how to make SQLAlchemy do
 so when it creates the table?

pgAdmin does not show types by default. You can enable it by going to
File > Options, the Browser tab, and checking the "Types" checkbox.

Since PostgreSQL types are not owned by tables, I don't think there is a
good way to make this automatic. You can try adding an after-drop DDL
event that drops the User_Roles type when you drop the table, but that
has its own problems.
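
A sketch of that after-drop event, assuming a declarative User class owns
the table and that no other table uses the type:

from sqlalchemy import DDL, event

event.listen(
    User.__table__,
    "after_drop",
    DDL('DROP TYPE IF EXISTS "User_Roles"').execute_if(dialect="postgresql"))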

If you are looking to migrate table data to use the new enum, a script
like this should work:

ALTER TYPE "User_Roles" RENAME TO "User_Roles_Old";
CREATE TYPE "User_Roles" AS ENUM ('user', 'superuser');
ALTER TABLE some_table ALTER COLUMN some_column TYPE "User_Roles" USING
  CASE some_column
    WHEN 'user'::"User_Roles_Old" THEN 'user'::"User_Roles"
    WHEN 'admin'::"User_Roles_Old" THEN 'superuser'::"User_Roles"
  END;
DROP TYPE "User_Roles_Old";

-Conor




Re: [sqlalchemy] a list as a named argument for an in clause

2012-01-24 Thread Conor
On 01/22/2012 01:49 PM, alex bodnaru wrote:
 hello friends,

 i'm using sa at a quite low level, with session.execute(text, dict)

 is it possible to do something in the spirit of:

 session.execute("select * from tablename where id in (:ids)",
 dict(ids=[1,2,3,4])) ?

 thanks in advance,
 alex


I'm not aware of a general way to do this. If you are using
PostgreSQL+psycopg2, you can use the = ANY(...) operator instead of the
IN operator:

session.execute("select * from tablename where id = ANY (:ids)",
                dict(ids=[1,2,3,4]))
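
As an aside for readers on much newer SQLAlchemy releases (1.2 and later),
text() also supports expanding bind parameters, which handle lists for IN
generically; a sketch:

from sqlalchemy import bindparam, text

stmt = text("select * from tablename where id in :ids").bindparams(
    bindparam("ids", expanding=True))
session.execute(stmt, {"ids": [1, 2, 3, 4]})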

-Conor




Re: [sqlalchemy] duplicate key trick

2012-01-20 Thread Conor
On 01/20/2012 10:42 AM, lestat wrote:
 Hi!

 Can anyone tell me how I can handle this error?


 class TmpTest(db.Model, UnicodeMixin):
     __tablename__ = 'tmp_test'

     id = db.Column(db.Integer, primary_key=True)
     name = db.Column(db.String(50), nullable=False, unique=True)
     count = db.Column(db.Integer, nullable=False, default=0,
                       server_default='0')
     time_create = db.Column(db.DateTime, nullable=False,
                             default=func.now(),
                             server_default=expression.text('now()'))




 from models import TmpTest
 from sqlalchemy.orm.exc import NoResultFound
 from app import db
 from multiprocessing import Pool, Process

 names = ['tmp_test_{0}'.format(num) for num in range(1000)]

 def sqlalchemy_test():
     for name in names:
         try:
             t = TmpTest.query.filter_by(name=name).one()
         except NoResultFound:
             t = TmpTest()
             t.name = name
             t.count = 0
         t.count += 1
         t = db.session.merge(t)
         db.session.add(t)
         db.session.commit()


 if __name__ == '__main__':
     p1 = Process(target=sqlalchemy_test)
     p1.daemon = True
     p2 = Process(target=sqlalchemy_test)
     p2.daemon = True
     p3 = Process(target=sqlalchemy_test)
     p3.daemon = True
     p4 = Process(target=sqlalchemy_test)
     p4.daemon = True
     p5 = Process(target=sqlalchemy_test)
     p5.daemon = True
     p6 = Process(target=sqlalchemy_test)
     p6.daemon = True
     p7 = Process(target=sqlalchemy_test)
     p7.daemon = True
     p8 = Process(target=sqlalchemy_test)
     p8.daemon = True
     p9 = Process(target=sqlalchemy_test)
     p9.daemon = True
     p10 = Process(target=sqlalchemy_test)
     p10.daemon = True
     p1.start()
     p2.start()
     p3.start()
     p4.start()
     p5.start()
     p6.start()
     p7.start()
     p8.start()
     p9.start()
     p10.start()
     p1.join()
     p2.join()
     p3.join()
     p4.join()
     p5.join()
     p6.join()
     p7.join()
     p8.join()
     p9.join()
     p10.join()



 IntegrityError: (IntegrityError) duplicate key value violates unique
 constraint "tmp_test_name_key"
 DETAIL:  Key (name)=(tmp_test_45) already exists.
  'INSERT INTO tmp_test (name, count, time_create) VALUES (%(name)s, %
 (count)s, now()) RETURNING tmp_test.id' {'count': 1, 'name':
 'tmp_test_45'}


 IntegrityError: (IntegrityError) duplicate key value violates unique
 constraint "tmp_test_name_key"
 DETAIL:  Key (name)=(tmp_test_26) already exists.
  'INSERT INTO tmp_test (name, count, time_create) VALUES (%(name)s, %
 (count)s, now()) RETURNING tmp_test.id' {'count': 1, 'name':
 'tmp_test_26'}
 Process Process-7:


 etc...

 Thanks!



AFAIK there is no standard way via SQLAlchemy/DBAPI to catch just a
particular constraint violation, or even to tell which constraint was
violated, given an IntegrityError. The only way I have been able to do
this is by extracting the constraint name from the error text via
regular expressions. Unfortunately this technique is specific to both
your database vendor and possibly your DBAPI implementation. For
PostgreSQL+psycopg2 I do something like:

try:
    [...]
except IntegrityError as e:
    if get_constraint_name(e) == "tmp_test_name_key":
        print "Duplicate name"
    else:
        raise

def get_constraint_name(e):
    # Unique constraint violations in PostgreSQL have error code 23505.
    if e.orig.pgcode == "23505":
        return re.search(r'^ERROR:  duplicate key value violates unique '
                         r'constraint "(.*?)"',
                         e.orig.pgerror).group(1)
    else:
        # handle other constraint types
        return None

-Conor




Re: [sqlalchemy] PG Sequence on non primary_key column and ForeignKey

2012-01-18 Thread Conor
On 01/18/2012 02:14 PM, Martijn Moeling wrote:
 I managed to get PG (9.0) installed and I am in the process over
 moving over from MySQL,
 I allready have stuff working and am now trying to get the main reason
 for the Quick move: Sequence.

 I have a object like:

 seq = Sequence('serialnumber', metadata=Base.metadata)

 class Object1(Base):
     Id = Column(Integer, primary_key=True)
     SerialNumber = Column(Integer, seq)  # Or whatever I try here


 class Object(Base):
     Id = Column(Integer, primary_key=True)
     SerialNumber = Column(Integer, ForeignKey('object1.SerialNumber'))


 with a create_all I get this:
 (ProgrammingError) there is no unique constraint matching given keys
 for referenced table "object1"

 I might be looking in the wrong direction here, the documentation only
 talks about Sequence with primary_key set to True

 Martijn

PostgreSQL requires the target of a foreign key (Object1.SerialNumber in
your case) to have a unique constraint on it. Adding unique=True to
the column definition would do it.
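
A sketch of that change (table names and the rest of the mapping are
assumed from the original post):

from sqlalchemy import Column, ForeignKey, Integer, Sequence
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
seq = Sequence('serialnumber', metadata=Base.metadata)

class Object1(Base):
    __tablename__ = 'object1'
    Id = Column(Integer, primary_key=True)
    # unique=True creates the unique constraint PostgreSQL requires on
    # the target of a foreign key
    SerialNumber = Column(Integer, seq, unique=True)

class Object2(Base):
    __tablename__ = 'object2'
    Id = Column(Integer, primary_key=True)
    SerialNumber = Column(Integer, ForeignKey('object1.SerialNumber'))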

-Conor




Re: [sqlalchemy] Group / Order by field in relationship?

2011-08-17 Thread Conor
On 08/17/2011 12:01 AM, Mark Erbaugh wrote:
 Is it possible to group or order by a field in a many to one related
 table?

 class Rental(Base):
     __tablename__ = 'rental'

     rental_id = Column(Integer, autoincrement=True, primary_key=True)
     inventory_id = Column(Integer,
                           ForeignKey(Inventory.inventory_id), nullable=False)

     inventory = relation(Inventory,
                          uselist=False,
                          backref='rentals',
                          )

 class Inventory(Base):
     __tablename__ = 'inventory'

     inventory_id = Column(Integer, autoincrement=True, primary_key=True)
     film_id = Column(Integer, ForeignKey(Film.film_id), nullable=False)

     film = relation(Film,
                     uselist=False,
                     backref='inventory',
                     )


 session.query(Rental).order_by(Rental.inventory.film_id)  generates
 the error:

 Neither 'InstrumentedAttribute' object nor 'Comparator' object has an
 attribute 'film_id'

You have to explicitly join to the related table, e.g.:

session.query(Rental).join(Rental.inventory).order_by(Inventory.film_id)

For bonus points, you can tell SQLAlchemy that Rental.inventory has been
eagerloaded. This may reduce the number of lazy loads when you access a
Rental instance's inventory:

q = session.query(Rental)
q = q.join(Rental.inventory)
q = q.options(sqlalchemy.orm.contains_eager(Rental.inventory))
q = q.order_by(Inventory.film_id)

-Conor




Re: [sqlalchemy] storedprocedure with input parameter

2011-07-27 Thread Conor
On 07/27/2011 10:42 AM, werner wrote:
 I would like to use a stored procedure which needs an input parameter in
 something like this:

 seltest = db.sa.select(["id",
 "name"]).select_from(db.sa.func.someStoredProc(2)).alias()
 seltestm = db.sao.mapper(ATest, seltest, primary_key=[seltest.c.id])
 result = session.query(seltestm).get(73)

 above works, but I would really need to replace the hardcoded "2" with
 a function, i.e.:

 seltest = db.sa.select(["id",
 "name"]).select_from(db.sa.func.someStoredProc(getSomeUserValue())).alias()
 seltestm = db.sao.mapper(ATest, seltest, primary_key=[seltest.c.id])

 # set the SomeUserValue here and then do
 result = session.query(seltestm).get(73)

 I tried using functools.partial but I get an InterfaceError exception.

 Werner

I believe you want to replace getSomeUserValue() with
sa.bindparam(callable_=getSomeUserValue). See the docs at
http://www.sqlalchemy.org/docs/core/expression_api.html#sqlalchemy.sql.expression.bindparam.
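
A sketch of how that might slot into the example above, keeping the OP's
db.sa naming (the bind parameter name here is made up):

seltest = db.sa.select(
    ["id", "name"]
).select_from(
    db.sa.func.someStoredProc(
        db.sa.bindparam("some_user_value", callable_=getSomeUserValue))
).alias()
seltestm = db.sao.mapper(ATest, seltest, primary_key=[seltest.c.id])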

As an aside, do you really want to map a class against a dynamic query?
I'm not sure how well the ORM deals with that. At the very least, I
think you need to ensure that SomeUserValue does not change while using
the session.

-Conor




Re: [sqlalchemy] Unexpected IntegrityError when trying to add new related element

2011-07-21 Thread Conor
On 07/21/2011 09:20 AM, Ben Sizer wrote:
 I have 2 classes:

 Base = declarative_base()

 class Parent(Base):
     __tablename__ = 'parent'
     id = Column(Integer, primary_key=True)
     children = relationship("Child", cascade="all")

 class Child(Base):
     __tablename__ = 'child'
     id = Column(Integer, primary_key=True)
     parent_id = Column(Integer, ForeignKey('parent.id'),
                        nullable=False)
     details = Column(Text(), nullable=False)

 I recently added nullable=False to the Child.parent_id column, and now
 the following code produces an error.

 p = self.session.query(Parent).filter_by(id=1234).one()
 p.children = [Child(details=x) for x in x_list]  # don't think content of x_list is relevant
 self.session.commit()

 File C:\Python27\lib\site-packages\sqlalchemy-0.7.1-py2.7.egg
 \sqlalchemy\engine\default.py, line 325, in do_execute
 cursor.execute(statement, parameters)
 IntegrityError: (IntegrityError) child.parent_id may not be NULL
 u'UPDATE child SET parent_id=? WHERE child.id = ?' (None, 1)

 Why is it generating this particular UPDATE statement? Shouldn't it
 add the correct parent_id from the Parent object, not None? Even if I
 add 'parent_id=1234' into the Child() constructor, it still attempts
 to set parent_id to None with this UPDATE.

 What am I doing wrong?

It means that there is a Child row already in the database with
parent_id=1234. When you reassign p.children to not include that child,
SQLAlchemy detects that the child object is now an orphan (it has no
parent). Based on your cascade rules (cascade="all"), SQLAlchemy will
NOT delete the child, but will instead set its parent_id to NULL (the
only sensible action short of deleting the child).

If you want the child to be deleted in this case, change the cascade to
"all, delete-orphan". Otherwise, you need to ensure that the child is
kept in p.children, e.g. p.children += [Child(details=x) for x in x_list].
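
In other words, a sketch of the delete-orphan variant of the parent mapping:

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # children removed from this collection are now DELETEd instead of
    # having their parent_id set to NULL
    children = relationship("Child", cascade="all, delete-orphan")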

-Conor




Re: [sqlalchemy] sorting a list of tables from a database based on their dependency order

2011-07-11 Thread Conor
On 07/11/2011 02:37 PM, Tony wrote:
 I'm doing this:
 sengine = create_engine(blah..blah)
 smeta = MetaData(bind=sengine)
 smeta.reflect(sengine)
 tables = smeta.tables.keys()
 to get a list of tables in a database, but unfortunately the table
 list are not in their dependency order. For example:
 A is depending on B (has a foreign key reference to B), B is
 depending on C; the correct table list should be [A, B, C]. Yet
 smeta.tables.keys() will only return an unsorted list. Is there an API
 or a code snippet to show me how to sort the returned list in their
 dependency order?

 I have this dumb code snippet, which I think is really inefficient.

 sorted_table = []

 #===========================================================
 # sort all table list on their dependency order
 #===========================================================
 while len(sorted_table) != len(tables):
     for table_name in tables:
         if table_name in sorted_table:
             continue
         table = Table(table_name, smeta, autoload=True)
         if len(table.foreign_keys) == 0:
             sorted_table.append(table_name)
         else:
             for foreign_key in table.foreign_keys:
                 if foreign_key.target_fullname.split(".")[0] in sorted_table:
                     pass
                 else:
                     break
             else:
                 sorted_table.append(table_name)

 Thanks  Regards
 Tony

SQLAlchemy provides this via smeta.sorted_tables.
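
A quick usage sketch, reusing the reflected smeta from the question:

# referenced tables come first; reverse the list if dependents should come first
ordered_names = [t.name for t in smeta.sorted_tables]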

-Conor




Re: [sqlalchemy] Managing one-to-one relation?

2011-07-05 Thread Conor
On 07/05/2011 03:09 PM, Vlad K. wrote:

 Hi.

 I have a situation where I have X data models, and while each has its
 own properties, they all share a set of fields with common meaning,
 like id, title, description, and some others.

 What way would you recommend to approach this problem? If I wasn't
 using SQLAlchemy, I'd have one main table with serial pkey and several
 subtables (not using database inheritance functionality) in one to one
 relation via the pkey. The backend is Postgres.


 Thanks!

SQLAlchemy can model this via joined table inheritance:
http://www.sqlalchemy.org/docs/orm/inheritance.html#joined-table-inheritance.
You are still responsible for setting up the tables in the way you
described, but SQLAlchemy will take care of the rest.
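
A sketch of what that can look like with declarative (class, table, and
column names here are invented):

from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    id = Column(Integer, primary_key=True)
    type = Column(String(32), nullable=False)
    title = Column(String(255))
    description = Column(Text)
    __mapper_args__ = {'polymorphic_on': type,
                       'polymorphic_identity': 'item'}

class Article(Base):
    __tablename__ = 'article'
    # shares the parent's primary key, giving the one-to-one subtable layout
    id = Column(Integer, ForeignKey('item.id'), primary_key=True)
    body = Column(Text)
    __mapper_args__ = {'polymorphic_identity': 'article'}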

-Conor




Re: [sqlalchemy] Problem using ResultProxy

2011-07-01 Thread Conor
On 07/01/2011 08:45 AM, RVince wrote:
 I have a query using join() where my tables have a few hundred
 thousand records in Postgres, that takes about 30-45 seconds
 regardless of how the tables are indexed:

 query =
 Session.query(SmartdataEligibilityRecord).join(Member).filter(Member.id==SmartdataEligibilityRecord.member_id).order_by(Member.last_name.asc()).all()

 If I rewrite this as:

 connection = engine.connect()
 trans = connection.begin()
 qs = "select * from smartdata_eligibility_records, members where members.id = smartdata_eligibility_records.member_id order by members.last_name asc;"
 query = connection.execute(qs)
 connection.close()

 it flies, taking only a few seconds. My problem is that now the query
 object is a ResultProxy type. It looks like under the former way I was
 doing this, using join() it returned tuples. Now I am getting errors
 such as:

 TypeError: Sorry, your collection type is not supported by the
 paginate module. You can provide a list, a tuple, a SQLAlchemy select
 object or a SQLAlchemy ORM-query object.

 Is there a way I can cast this ResultProxy object to a usable object
 here (like tuples)? Thanks, RVince

In your second example, you have not yet retrieved any rows out of the
DBAPI cursor. I believe psycopg2 collects all the rows for you anyway at
the execute() step, but this is not guaranteed. A more realistic
comparison would be to use .execute().fetchall() instead of just .execute().

Anyway, using fetchall() will give you the rows as a list, which you can
then hand off to paginate. However, since you are using pagination, it
does not make sense to fetch all the rows anyway: just let paginate
modify the query to fetch the correct subset of rows.

via ORM: remove the .all():

query = Session.query(SmartdataEligibilityRecord)
query = query.join(Member)
query = query.filter(Member.id==SmartdataEligibilityRecord.member_id)
query = query.order_by(Member.last_name.asc())

via SQL layer (assuming you are using declarative):

query = select([SmartdataEligibilityRecord.__table__, Member.__table__])
query = query.where(Member.id == SmartdataEligibilityRecord.member_id)
query = query.order_by(Member.last_name.asc())

Note how the above does not use .all() or .execute(), because we will
let paginate take care of that. When using paginate, the overhead of ORM
should be negligible, so I recommend staying with the ORM in this case.

-Conor




Re: [sqlalchemy] Relationship between a class with two different instances of other class (Newbie)

2010-10-29 Thread Conor
On 10/29/2010 09:43 AM, Hector Blanco wrote:
 Hello list...

 I wrote a couple of days ago about how to model a structure of three
 classes
 (http://groups.google.com/group/sqlalchemy/browse_thread/thread/5ba5c4ad16f789d6#).
 I think I almost have it, but I am still getting problems mapping a
 structure like this.

 class Child(rdb.Model):
     def __init__(self):
         self.field1 = "hello world"

 class Parent(rdb.Model):
     def __init__(self):
         self.child1 = Child()
         self.child2 = Child()

 The “Parent” class has two different instances of a Child() class. I
 am not even sure about how to treat this (two different 1:1
 relationships or a 1:2 relationship).

 My last try is this mapping:

 (don't let the rdb.Model thing fool you, it is just something that
 automatically maps the class to the table specified in rdb.tablename).

 class Child(rdb.Model):
     rdb.metadata(metadata)
     rdb.tablename("children_table")
     id = Column("id", Integer, primary_key=True)
     field1 = Column("field1", String(64))   # Irrelevant
     def __init__(self):
         self.field1 = "hello world"

 class Parent(rdb.Model):
     rdb.metadata(metadata)
     rdb.tablename("parent_table")

     id = Column("id", Integer, primary_key=True)
     child1_id = Column("child_1_id", Integer,
                        ForeignKey("children_table.id"))
     child2_id = Column("child_2_id", Integer,
                        ForeignKey("children_table.id"))

     child1 = relationship(Child,
         primaryjoin=(child1_id == "children_table.id")
     )

     child2 = relationship(Child,
         primaryjoin=(child2_id == "children_table.id")
     )

 I have tried (almost) everything. I say “almost” because obviously I
 haven't tried the right thing. I keep getting errors that sqlalchemy
 can't determine the relationship between the two tables.

 It looks very similar to:
 http://www.sqlalchemy.org/docs/orm/relationships.html#multiple-relationships-against-the-same-parent-child

 but I also understand that maybe I can't get this working that way,
 because I am actually putting two ForeignKeys from the same table
 (“children_table”) in the parent table. I am not sure how that will
 work out, or is correct or what... :-(

 Thank you!

Usually we consider the table with the foreign key as the child table,
but that's just being picky. The problem is that SQLAlchemy is treating
"children_table.id" as a literal instead of a clause, so your join would
be like (parent JOIN child ON parent.child1_id = 'children_table.id').
Obviously that is not what you want. There are several ways to formulate
primaryjoin/secondaryjoin.

Pass the whole thing in as a string:

child1 = relationship(Child, primaryjoin="Parent.child1_id == Child.id")

Use the column objects directly (this requires that Child be defined
before Parent):

child1 = relationship(Child, primaryjoin=child1_id==Child.id)

Use a callable (my favorite):

child1 = relationship(Child, primaryjoin=lambda: Parent.child1_id == Child.id)

-Conor




Re: [sqlalchemy] Simple relationship 1:1 fails with Foreign key associated with column ------- could not find table

2010-10-29 Thread Conor
 in Python,
 or an SqlAlchemy type). I don't know, something like:

 from whatever.repository.of.tables import parent_table

  so I can, without quotes, use:

 id = Column("id", Integer, ForeignKey(parent_table.id), primary_key=True)

 (I guess that may work)

 Thank you all.

In SQLAlchemy you get around circular dependencies by:

* Using strings as the target of ForeignKey()
* Using class name strings as the target of a relation (declarative
  only)
* Using strings or callables as primaryjoin/secondaryjoin arguments
  in a relationship()
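
A small sketch combining the first two techniques (declarative; names are
invented):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent_table'
    id = Column(Integer, primary_key=True)
    # class name given as a string, resolved after all classes are defined
    children = relationship('Child', backref='parent')

class Child(Base):
    __tablename__ = 'child_table'
    # table name given as a string target for ForeignKey
    id = Column(Integer, ForeignKey('parent_table.id'), primary_key=True)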

-Conor




Re: [sqlalchemy] Relationship between a class with two different instances of other class (Newbie)

2010-10-29 Thread Conor
On 10/29/2010 11:51 AM, Hector Blanco wrote:
 Thanks Conor!

 The callable works like a charm! It's great news! (I've been trying to
 figure out this for 3 days... yeah... I guess I'm not that smart)

 Now that I have it working, a “design” question pops up. Nothing
 technical, really.

 As Conor mentioned in his reply:
 “Usually we consider the table with the foreign key as the child
 table, but that's just being picky”

 That's very true, and now I don't know how to design it...

It would be easier to design it if we had more concrete names instead of
Parent and Child. What is the actual use case? Is this a tree
hierarchy? Does each parent have exactly two children?

 I can do it the way I asked or...

 class Child(rdb.Model):
     rdb.metadata(metadata)
     rdb.tablename("children_table")
     id = Column("id", Integer, primary_key=True)
     parent_id = Column("parent_id", Integer, ForeignKey("parent_table.id"))  # New!
     type = Column("type", ShortInteger)  # New!

     field1 = Column("field1", String(64))  # Irrelevant
     def __init__(self):
         self.field1 = "hello world"

 class Parent(rdb.Model):
     rdb.metadata(metadata)
     rdb.tablename("parent_table")

     id = Column("id", Integer, primary_key=True)

     child1 = relationship(
         # Well... this I still don't know how to write it down,
         # but it would be something like:
         #   Give me all the children whose "parent_id" is my "id"
         #   AND type == 1
         # I'll deal with the joins and that depending on your answer, guys
     )

     child2 = relationship(
         #  Would be same as above
         #  AND type == 2
     )

 This may be good for adding new children to the parent class... If I
 add a “Parent.child3”, I just need to create a new relationship very
 similar to the already existing ones.

 The way I asked in my former question would imply creating a new
 relationship AND adding a new foreign key to the parent.

 I'd like to know what people that know much more about databases think :)

I'm confused as to why you would want separate child1, child2, etc.
relationships instead of a single children relationship. Is Child.type
really something you want for distinguishing children, or is it
something you added to try and make the relationships work?

Assuming you really do want to keep separate child1 and child2
relationships, and they are both one-to-one relationships, they would
look like this:

# omit uselist=False if this is a one-to-many relationship
child1 = relationship(
Child,
primaryjoin=lambda: and_(Child.parent_id == Parent.id, Child.type == 1),
uselist=False)
child2 = relationship(
Child,
primaryjoin=lambda: and_(Child.parent_id == Parent.id, Child.type == 2),
uselist=False)

-Conor




Re: [sqlalchemy] Many to One vs session

2010-10-29 Thread Conor
On 10/29/2010 01:23 PM, Mark Erbaugh wrote:
 I have a table (T) that has a many-to-one relationship (via foreign key 
 inclusion) to a category table (C). Let's name the relationship category. 
 When I retrieve an existing T record, SA populates the category field with an 
 instance of C.  I can change the category to a different value by storing a 
 different instance of C into category.

Technically, SQLAlchemy by default does not populate the category
relation until you access it the first time (i.e. it is a lazy load).

 My question is does it matter if the various instances of C are associated 
 with the same session as the T instance?

I'm not sure what you mean here. There is no problem having multiple C
instances in the same session as the T instance. SQLAlchemy will
complain (hopefully; I haven't tried it) if multiple C instances in the
same session share a primary key.

 Can the C instances come from a different session, or can the be expunge'd 
 from the session that retrieved them?

The only restriction is, if the T instance is part of session S, then
the C instance that you assign to T.category must be part of S or not be
part of any session. It is allowed if you first expunge the C instance
from another session and merge it into S before assigning it to T.category.

Note that there are only a few use cases for moving instances from one
session to another that I am aware of:

* Caching query results. You expunge the instances from the original
  session, keep them around in memory (or serialized to disk,
  memcache, etc.). Then you can merge them back into subsequent
  sessions to avoid repeated DB queries.
* Passing objects between threads. Sessions are not thread-safe, so
  if you want to pass objects from thread A to thread B, you have to
  merge session A's objects into session B before thread B can use them.

 Can I store an entirely new C instance, and if so, will SA do an insert into 
 the C table?

Yes, this is part of the save-update cascade[1] which is enabled by
default on a relationship. Generally you only worry about cascade for
one-to-many or many-to-many relationships. At least that's my experience.

-Conor

[1]
http://www.sqlalchemy.org/docs/orm/relationships.html#the-relationship-api




Re: [sqlalchemy] Can't adapt type on a Foreign key

2010-10-28 Thread Conor
On 10/28/2010 06:57 AM, Fabien wrote:
 I have a problem with PostgreSQL 8.4 and table reflection. My metadata
 object seems to be ok (it has foreign keys, primary keys, every
 column and table). But when I try to associate an object with
 another one through a foreign key, I get:
 sqlalchemy.exc.ProgrammingError: (ProgrammingError) can't adapt type
 'EventParameters' 'INSERT INTO event (.

 Here, my code :
 http://nopaste.info/d3d74b436e.html

 And my database schema :
 http://nopaste.info/8798dcf247.html

 What's wrong ? :/

 Thank you in advance
 Fabien

In this line:

e.eventparametersid=ep

you are assigning an object instance to a column, which is invalid. You
need to add a relationship Event.eventparameters that will allow you
to assign object instances:

mapper(Event,
   metadata.tables['event'],
   properties={'eventparameters': relationship(EventParameters)})

[...]

e.eventparameters = ep

-Conor




Re: [sqlalchemy] how to use object_session properly

2010-10-28 Thread Conor
On 10/28/2010 08:16 PM, anusha k wrote:
 hello all

 I want to know how to create a Session with object_session which can be
 used to add data to the database, i.e. using Session.add_all(), so I can
 pass a list of variable length as the parameter to add_all. When I try
 to do that I am not able to associate the instance properly and I get
 the Session as None. I have already gone through the link below but am
 left with no clue. Please help me with this.


 http://www.sqlalchemy.org/docs/orm/session.html#sqlalchemy.orm.session.object_session

object_session only works for objects that have already been attached
to a session. To add objects to a session, you need to have the session
object available beforehand. Usually people do this using a global
Session variable that is created from a call to scoped_session():

Session = scoped_session(sessionmaker(some_engine))

[...]

Session.add_all([obj1, obj2, obj3])

-Conor




Re: [sqlalchemy] Query help

2010-10-16 Thread Conor
On 10/16/2010 03:31 PM, rake wrote:
 Table structure:

 A(id, name)
 B(id, name, A_id)
 C(id, name, B_id)

 one-to-many A-B and B-C

 I'm trying to use session.query() to select all rows of A such that
 none of the joined B rows have any joined C rows.

 A:
 (1,'A1')
 (2,'A2')

 B:
 (1,'B1',1)
 (2,'B2',1)
 (3,'B3',2)

 C:
 (1,'C1',1)

 So, it would return (2,'A2'). Any help would be appreciated.

   

Assuming you have relations A.bs and B.cs:

q = session.query(A)
q = q.filter(~A.bs.any(B.cs.any()))

which would translate roughly into this SQL:

SELECT A columns
FROM A WHERE NOT EXISTS (
  SELECT 1
  FROM B
  WHERE B.a_id = A.id
  AND EXISTS (
SELECT 1
FROM C
WHERE C.b_id = B.id))

-Conor




Re: [sqlalchemy] efficiently setting many to many collections when I have a sequence of ids.

2010-10-14 Thread Conor
 On 10/14/2010 01:30 PM, Chris Withers wrote:
 Hi All,

 I currently have code that looks like this:

 recipients = []
 if recipient_ids:
     for id in recipient_ids.split(','):
         recipients.append(
             session.query(recipient.Recipient).filter_by(id=id).one()
             )
 else:
     recipient_ids = ()
 feed.recipients = recipients

 Where the models are:

 feed_recipient = Table('feed_recipient', Base.metadata,
     Column('feed_id', String(length=32), ForeignKey('feed.id')),
     Column('recipient_id', Integer, ForeignKey('recipient.id')),
 )

 class Feed(Base):
     __tablename__ = 'feed'
     id = Column(String(length=32), primary_key=True)
     recipients = relation('Recipient', secondary=feed_recipient)

 class Recipient(Base, ComputedMapperArgs):
     __tablename__ = 'recipient'
     feeds = relation('Feed', secondary=feed_recipient)

 It feels like a horribly inefficient way of updating the many-to-many
 relationship. I guess I could just use the sql abstraction layer, but
 that feels like circumventing the ORM without just cause ;-)

 Am I missing something? If I have a sequence of ids where I want to
 update the many to many relationship as above, what's the best way of
 doing it?

You can at least reduce it to a single query:

recipient_id_list = recipient_ids.split(',')
q = session.query(recipient.Recipient)
q = q.filter(recipient.Recipient.id.in_(recipient_id_list))
recipients = q.all()
if len(recipients) != len(recipient_id_list):
    invalid_ids = (set(recipient_id_list)
                   - set(x.id for x in recipients))
    raise StandardError("Invalid recipient ids: %s" % sorted(invalid_ids))
feed.recipients = recipients

I wish there was an easy way to do, say:

feed.recipients.ids = recipient_id_list

which would let you avoid querying at all. The downside is you won't
notice invalid ids until the next flush(). Alternatively, you could also
accomplish this via:

feed.recipients = [session.merge(recipient.Recipient(id=id), load=False)
   for id in recipient_id_list]

if session.merge() allowed you to merge newly-created transient objects
with load=False. Maybe add a force parameter to merge, to tell
SQLAlchemy that you really know what you are doing?

-Conor




Re: [sqlalchemy] MapperExtension.[before|after]_update problem

2010-10-13 Thread Conor

 On 10/13/2010 10:55 AM, Christophe de Vienne wrote:

Hi all,

I am running into an issue with MapperExtension.[before|after]_update.

Context
---

SQLAlchemy 0.5.8
TurboGears 1.1.1

Description
---

I will attempt a simple description first, as I don't think my actual
code will help (I know it is not a thing to say, but really).

We have 2 mapped classes, Parent and Child.

Child is mapped this way :

mapper(Child, child_table, properties={
    'parent': relation(Parent, backref='children')
})

And Parent has a MapperExtension that defines an after_update:

def after_update(self, mapper, connection, instance):
    for child in instance.children:
        child.name = 'another name'


If I do 2 session flush() after modifying a Parent instance, the
modifications on the children should be reflected to the database.

It is the case in my unit tests if I use directly the DBSession and
manipulate the objects 'myself'.

BUT, if I go through the complete TG stack, in the unittests or in
real-life, the modifications done on child are never sent to the database.

One subtle thing though : if, before the first flush(), I access the
children attribute, the problem goes away.

Example :

parent = DBSession.query(Parent).get('myid')
parent.name = 'test'
# parent.children # Un-commenting this line solve the issue, but of
   # course it is not an acceptable solution
DBSession.flush()
DBSession.flush()

I could not reproduce in a simpler context, and don't know how to go
further in my investigation.

Help ?

Thanks

Christophe


AFAIK SQLAlchemy does not support the following in MapperExtensions:

   * lazy-loading related objects (maybe?)
   * changing the flush plan, which I believe means changing which
 objects are considered new, dirty, or deleted

Your code is possibly trying to do both. You need to instead create a 
SessionExtension and override before_flush, which allows you to modify 
the session however you want, e.g. (untested):


class MySessionExtension(object):
def before_flush(self, session, flush_context, instances):
for obj in session.dirty:
if isinstance(obj, Parent):
for child in obj.children:
child.name = 'another name'

-Conor




Re: [sqlalchemy] Self-referencing relation

2010-10-12 Thread Conor
On 10/12/2010 09:20 AM, Jim Steil wrote:
  Hi

 Using SQLAlchemy 0.5.8.

 I have the following model...


 class Link(DeclarativeBase):
     __tablename__ = 'link'

     linkId = Column(Integer, autoincrement=True, primary_key=True)
     name = Column(Unicode(50), nullable=False)
     parentLinkId = Column(Integer, ForeignKey('link.linkId'))
     url = Column(Unicode(255))
     permissionId = Column(Integer,
                           ForeignKey('tg_permission.permission_id'))
     description = Column(Text())
     parentLink = relation(Link, primaryjoin=parentLinkId==Link.linkId)
     permission = relation(Permission,
                           primaryjoin=permissionId==Permission.permission_id)

 The problem is with the second to the last line.  I'm trying to create
 a relation to point to the parent link, which points back to the same
 table.  I'm using TurboGears and when I try to start my server I get
 the following error:

 parentLink = relation(Link, primaryjoin=parentLinkId==Link.linkId)
 NameError: name 'Link' is not defined

 I'm new to SA.  Can someone help me out with this relation?

Due to Python semantics, Link is not bound to anything until the class
definition is complete. To get around this, SQLAlchemy provides several
options:

   1. In this case, you can simply use parentLinkId==linkId as the
      primaryjoin.
   2. You can wrap the primaryjoin in a function/lambda, e.g.
      primaryjoin=lambda: Link.parentLinkId==Link.linkId. SQLAlchemy
      will call this function sometime after the class definition is
      complete, at which point Link is bound to something.
   3. You can use a string as the primaryjoin, e.g.
      primaryjoin="Link.parentLinkId==Link.linkId". SQLAlchemy will
      evaluate this expression similar to option #2.

In any case, you need to include a remote_side argument to the relation
to indicate that this is the many-to-one side of the relationship[1], e.g.

parentLink = relation(Link, primaryjoin=lambda: Link.parentLinkId==Link.linkId, 
remote_side=lambda: [Link.linkId])

-Conor

[1]:
http://www.sqlalchemy.org/docs/orm/relationships.html?highlight=remote_side#adjacency-list-relationships




Re: [sqlalchemy] Reverse look-up of declarative classes

2010-10-12 Thread Conor
On 10/12/2010 01:16 PM, Branko Vukelic wrote:
 Hi, list,

 Given all models used the declarative syntax (and I assume they are
 now all somehow known to the base), and given a class name in string
 form, can I somehow retrieve the actual class?

 Regards,

   

The information is available in the declarative base class, via the
_decl_class_registry attribute:

Base = declarative_base()

class MappedClass(Base):
    __tablename__ = "mapped_class"

    id = Column(Integer, primary_key=True)

assert Base._decl_class_registry["MappedClass"] is MappedClass

-Conor




Re: [sqlalchemy] NUMERIC in sqlite

2010-10-06 Thread Conor
On 10/06/2010 10:00 AM, Christopher Singley wrote:
 [...]
 2. store the decimals as integers, using a type with a fixed exponent.   Use
 Integer, and place a TypeDecorator around it which multiplies Decimal
 objects upwards by the fixed exponent going in and back down going out.
 
 Is it as simple as this?
 
 class DecimalInt(sqlalchemy.types.TypeDecorator):
 impl = sqlalchemy.types.Integer

 def process_bind_param(self, value, dialect):
 return int(value * 10**4) # basis pt precision

 def process_result_value(self, value, dialect):
 return decimal.Decimal(value / 10**4) # basis pt precision
 
   

"value / 10**4" will either truncate to an integer (Python 2) or return
a float (Python 3 or with "from __future__ import division") if value is
not divisible by 10**4. I think you want "decimal.Decimal(value) /
10**4" instead.
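
That is, a sketch of the corrected decorator (same basis-point precision as
the original):

import decimal
import sqlalchemy.types

class DecimalInt(sqlalchemy.types.TypeDecorator):
    impl = sqlalchemy.types.Integer

    def process_bind_param(self, value, dialect):
        return int(value * 10**4)  # store basis points as an integer

    def process_result_value(self, value, dialect):
        # divide as a Decimal so nothing is truncated or turned into a float
        return decimal.Decimal(value) / 10**4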

-Conor




Re: [sqlalchemy] SQLAlchemy + Testing a webserver using InnoDB fails (MyISAM is okay)

2010-09-29 Thread Conor
On 09/29/2010 01:47 PM, Conradaroma wrote:
 Hi,

 I am currently trying to move my DB tables over to InnoDB from MyISAM.
 I am having timing issues with requests and cron jobs that are running
 on the server that is leading to some errors. I am quite sure that
 transaction support will help me with the problem. I am therefore
 transitioning to InnoDB.

 I have a suite of tests which make calls to our webservices REST API
 and receive XML responses. The test suite is fairly thorough, and it's
 written in Python and uses SQLAlchemy to query information from the
 database. When I change the tables in the system from MyISAM to InnoDB
 however, the tests start failing. However, the tests aren't failing
 because the system isn't working, they are failing because the ORM is
 not correctly querying the rows from the database I am testing on.
 when I step through the code I see the correct results, but the ORM is
 not returning the correct results at all.

 Basic flow is:

 class UnitTest(unittest.TestCase):
     def setUp(self):
         # Create a test object in DB that gets affected by the web server
         testObject = Obj(foo='one')
         self.testId = testObject.id
         session.add(testObject)
         session.commit()

     def tearDown(self):
         # Clean up after the test
         testObject = session.query(Obj).get(self.testId)
         session.delete(testObject)
         session.commit()

     def test_web_server(self):
         # Ensure the initial state of the object.
         objects = session.query(Obj).get(self.testId)
         assert objects.foo == 'one'

         # This will make a simple HTTP get call on an url that will modify the DB
         response = server.request.increment_foo(self.testId)

         # This one fails, the object still has a foo of 'one'
         # When I stop here in a debugger though, and look at the database,
         # the row in question actually has the correct value in the database.
         objects = session.query(Obj).get(self.testId)
         assert objects.foo == 'two'
 Using MyISAM tables to store the object and this test will pass.
 However, when I change to InnoDB tables, this test will not pass. What
 is more interesting is that when I step through the code in the
 debugger, I can see that the datbase has what I expect, so it's not a
 problem in the web server code. I have tried nearly every combination
 of expire_all, autoflush, autocommit, etc. etc, and still can't get
 this test to pass.

 I can provide more info if necessary.

 Thanks, Conrad

   

Although you have expunged everything from the unit test's session, it
is still holding onto a DB connection that is in a transaction that
began in your "Ensure the initial state of the object" block. Since the
server made its DB changes after that transaction started, the DB
connection will not see those changes. You need to call
session.close() just before your "This one fails" block. This will
cause the session to expunge all objects and roll back any transaction it
is in, and the next query will start in a new transaction which will see
the changes.

The reason (I assume) that setting autocommit=True did not work is that
SQLAlchemy only autocommits when it detects an
INSERT/UPDATE/DELETE-type query.

-Conor




Re: [sqlalchemy] Populating a self-referential table

2010-09-28 Thread Conor
On 09/28/2010 02:28 PM, Mark Erbaugh wrote:
 I have a self-referential table:

 class L3Acct(BASE):
     __tablename__ = 'l3_acct'

     id = Column(Integer, primary_key=True)
     parent = Column(ForeignKey('l3_acct.id'))

 []


 When adding new rows to the table, the id field is not assigned a value until 
 the data is actually written to the database.  When adding several rows to a 
 session object is there a way for a new row to reference a row that has 
 previously been added in the same batch, but hasn't been assigned an id yet?

 I'm using SQLAlchemy 0.5.8.

 Thanks,
 Mark
   

You can do this if there is a relationship between the parent/child
L3Acct objects, e.g.:

class L3Acct(BASE):
   [..existing declarations..]

   parent_obj = relation("L3Acct", backref="children", remote_side=[id])

Then you can add the object to the session without using ids:

parent = L3Acct()
child = L3Acct(parent_obj=parent)
session.add(parent) # child is added implicitly
session.flush()

The ORM will take care of inserting parents before children.

-Conor




Re: [sqlalchemy] current_year and current_week

2010-09-21 Thread Conor
On 09/21/2010 02:00 PM, Dave Fowler wrote:
 Hi,

 I'm new to SQLAlchemy but I've read the entire O'Reilly book and done a
 lot of googling, and I've found that there is support for the generic
 functions

 current_date, current_time and current_timestamp

 but i can find nothing for finding the year, or week of a date
 column.

 I need them in order to group results by week or year depending on the
 user input.


 Does current_year and current_week exist somewhere?  And if not is
 there a way to create your own custom functions based on Column
 types?  I've yet to run into any documentation on that processes
 either.

 Thanks!
   

Every DB has its own method for extracting parts out of a datetime. Here
are a few:

* PostgreSQL [1]: extract('ISOYEAR', some_timestamp_expr),
  extract('WEEK', some_timestamp_expr)
* MySQL [2]: func.year(some_timestamp_expr),
  func.week(some_timestamp_expr), func.yearweek(some_timestamp_expr)
* SQLite [3]: func.strftime('%Y-%W', some_timestamp_expr). I do not
  believe you get correct ISO year-week behavior here, so beware.
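
For example, to group a query by ISO year and week on PostgreSQL, something
along these lines should work (Event and Event.created_at are made-up names
standing in for your mapped class and its datetime column):

from sqlalchemy import extract, func

year = extract('ISOYEAR', Event.created_at).label('year')
week = extract('WEEK', Event.created_at).label('week')

q = (session.query(year, week, func.count(Event.id))
     .group_by(year, week)
     .order_by(year, week))

The MySQL and SQLite variants are the same query with the expressions from the
corresponding bullet swapped in.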

-Conor

[1]
http://www.postgresql.org/docs/8.4/static/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT
[2] http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html
[3] http://www.sqlite.org/lang_datefunc.html




Re: [sqlalchemy] Re: sql.expression.text in clause how to?

2010-09-16 Thread Conor
On 09/16/2010 04:28 PM, Pykler wrote:
 It seems it is not possible to do this with bindparams, my workaround
 was as follows

 On Sep 16, 2:01 pm, Pykler hnass...@gmail.com wrote:
   
 engine.execute(sql.expression.text('''
 select * from users where id in %s
 '''% SqlTuple([1,2,3,4]))
 
 Where SqlTuple is a tuple with a custom repr method to print without
 the trailing ,
   

If you are using PostgreSQL with psycopg2, you can pass arrays as
bindparams in certain situations. Basically, if you can put an
ARRAY[...]::some_type[] literal in the SQL text, then you use a
bindparam there instead.

engine.execute(
text('''select * from users where id = ANY (:ids)'''),
ids=[1,2,3,4])

I would imagine that other databases that support arrays would work
similarly. There is also a ghetto way where you format ids as a
delimited string, e.g. ,1,2,3,4, and use select * from users where
:ids LIKE '%,' || id || ',%'. But you are probably better off
formatting the SQL directly like your workaround.

-Conor




Re: [sqlalchemy] PostgreSQL, cascading and non-nullable ForeignKeys

2010-09-15 Thread Conor
On 09/15/2010 05:04 PM, BenH wrote:
 Hi,

 I'm using SqlAlchemy 0.6.3 and PostgreSQL 8.4 and I'm trying to setup
 a cascading delete between several levels of tables. The problem seems
 to be that I can't have a relationship with cascade=all and a column
 with ForeignKey that has nullable=False.
   

Your cascade clause is on the wrong side of the relationship. It means
you want to cascade TO the remote object, not FROM the remote object.

 Here is my example:

 from sqlalchemy import create_engine, Table, Column, Integer, String,
 MetaData, ForeignKey
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import scoped_session, sessionmaker, relationship

 engine = create_engine('postgresql://tsq:passt%qw...@localhost:5432/
 ce_cascade_test', echo=True)

 Session = scoped_session(sessionmaker())
 Base = declarative_base()

 Session.configure(bind=engine)

 s = Session()

 class User(Base):
 __tablename__ = users

 id = Column(Integer, primary_key=True)
 name = Column(String)

 device = relationship(Device, uselist=False)
   

This should be:

device = relationship("Device", cascade="all", passive_deletes=True, uselist=False)

Also, including uselist=False indicates that the User-Device
relationship is one-to-one. The lack of a unique constraint on
devices.user_id suggests a one-to-many relationship.

 @classmethod
 def create(cls, user_name, device_name, manufacturer):
 new_user = User()
 new_user.name = user_name

 new_user.device = Device.create(device_name)

 return new_user

 class Device(Base):
 __tablename__ = devices

 id = Column(Integer, primary_key=True)
 name = Column(String)
 user_id = Column(Integer,
  ForeignKey('users.id'),
  nullable=False)

 manufacturer_id = Column(Integer,
  ForeignKey('manufacturers.id'),
  nullable=False)

 user = relationship(User,
 uselist=False,
 cascade=all)
   

This should be:

user = relationship("User")

Adding uselist=False here is redundant, since that is the default for
the side which contains the foreign key. Also, by including
'cascade=all', you are telling SQLAlchemy to implicitly delete the
user when the device is deleted. This is probably not what you want.

 @classmethod
 def create(cls, name):
 new_device = Device()
 new_device.name = name

 return new_device


 Base.metadata.create_all(engine)

 user = User.create(bob, iphone)

 s.add(user)
 s.commit()

 s.delete(user)
 s.commit()

 If I run this then I get an Integrity error during the delete (because
 it is updating the user_id to null before deleting it). I would like
 to keep the nullable=False on the ForeignKey.

 I've tried adding ondelete=CASCADE to the ForeignKey and adding
 passive_deletes=True to the relationship but it always throw the same
 Integrity error.

 I know I'm missing something but I can't find it in the docs, what am
 I missing?

 Thanks for your help,

 BEN
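
Putting the inline comments together, a rough sketch of the corrected mappings
(it adds ondelete="CASCADE" on the foreign key, which is what lets
passive_deletes=True hand the actual row deletion to PostgreSQL):

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(String)

    device = relationship("Device", cascade="all",
                          passive_deletes=True, uselist=False)

class Device(Base):
    __tablename__ = "devices"

    id = Column(Integer, primary_key=True)
    name = Column(String)
    user_id = Column(Integer,
                     ForeignKey("users.id", ondelete="CASCADE"),
                     nullable=False)
    manufacturer_id = Column(Integer,
                             ForeignKey("manufacturers.id"),
                             nullable=False)

    user = relationship("User")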
   

-Conor




Re: [sqlalchemy] Combining aliases with labels

2010-09-09 Thread Conor
On 09/08/2010 01:05 PM, Jack Kordas wrote:
 When I try to use both aliases and labels, the results are not named
 as expected.

 Instead of being able to access the columns as label-name_column-
 name it appears as original-table-name_numeric-sequence_column-
 name

 Thanks,
   Jack

 Sample code follows:

 parent = Table('parent', metadata,
 Column('id', INTEGER(), primary_key=True),
 Column('name', VARCHAR(length=128)),
 Column('first_id', INTEGER(),
 ForeignKey(u'child.id')),
   )

 child = Table('child', metadata,
 Column('id', INTEGER(), primary_key=True),
 Column('name', VARCHAR(length=128))
   )

 def test_labels1(conn):
 s = select([parent,child], use_labels=True)
 s = s.where(parent.c.first_id==child.c.id)
 return conn.execute(s).fetchone()

 def test_alias1(conn):
 firstchild = child.alias()
 s = select([parent,firstchild], use_labels=True)
 s = s.where(parent.c.first_id==firstchild.c.id)
 return conn.execute(s).fetchone()

 conn = engine.connect()

 results = test_labels1(conn)
 print results.parent_name
 print results.child_name

 results = test_alias1(conn)
 print 'alias1 results: '
 print results.parent_name
 #print results.firstchild_name # expected this to work
 print results.child_1_name # this worked instead
   

You need to set an explicit name for the alias to prevent SQLAlchemy
from generating an anonymous name[1]:

firstchild = child.alias("firstchild")
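
With the named alias in place, the label follows the alias name:

def test_alias2(conn):
    firstchild = child.alias("firstchild")
    s = select([parent, firstchild], use_labels=True)
    s = s.where(parent.c.first_id == firstchild.c.id)
    return conn.execute(s).fetchone()

results = test_alias2(conn)
print results.firstchild_name   # now resolves as expected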

-Conor

[1]
http://www.sqlalchemy.org/docs/core/expression_api.html#sqlalchemy.sql.expression.FromClause.alias




Re: [sqlalchemy] Re: hierarchical data storage and searching

2010-09-09 Thread Conor
On 09/09/2010 02:18 AM, Chris Withers wrote:
 On 08/09/2010 19:23, Gunnlaugur Briem wrote:
 http://communities.bmc.com/communities/docs/DOC-9902

 Thanks for all of the references, but this one in particular.
 Materialized paths looks like its the closest to what I'm after.

 However, with materialized paths, I'm wondering with a structure like:

 /
 /a/
 /a/b
 /a/b/1
 /a/b/2
 /a/c
 /b/
 /b/1
 /b/2

 How to phrases the sql to answer the question:

 Does the current user have access to anything in /a or below

 ...particularly where the user is granted access only to /a/b/1, for
 example.

It's hard to give specifics without knowing what kind of access control
you are using (e.g. separate read/write permissions, inherited
permissions, full ACLs), but here is one attempt:

SELECT EXISTS (SELECT 1 FROM access_control WHERE (path = '/a' OR path LIKE 
'/a/%') AND user = :user AND permission = :permission)

Most materialized path queries use LIKE a lot. As long as you keep the %
character at the end, a good DB will be able to use an index to speed up
the query.
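
The same existence check written with the SQL expression layer might look
roughly like this (it assumes an access_control Table with the columns used in
the SQL above, plus a connection conn):

from sqlalchemy import select, and_, or_

stmt = select([access_control.c.path]).where(and_(
    or_(access_control.c.path == '/a',
        access_control.c.path.like('/a/%')),
    access_control.c.user == user,
    access_control.c.permission == permission)).limit(1)

has_access = conn.execute(stmt).first() is not None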

-Conor




Re: [sqlalchemy] string_agg() with order by clause

2010-08-27 Thread Conor
On 08/27/2010 05:06 PM, David Gardner wrote:
 I should have linked to the docs in question
 http://developer.postgresql.org/pgdocs/postgres/sql-expressions.html#SYNTAX-AGGREGATES


 On 08/27/2010 03:03 PM, David Gardner wrote:
 Recently Postgres added a new aggregate function called string_agg().
 I have been able to use it like:
 Session.query(Asset, func.string_agg(some_col, ','))

 This works, but according to the docs I should be able to do
 string_agg(some_col, ',' ORDER BY some_col)
 Is there a way to do this in SQLAlchemy?

I think you have to write your own compiler extension:

import sqlalchemy as sa
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnElement
from StringIO import StringIO

class string_agg(ColumnElement):
    type = sa.String()

    def __init__(self, expr, separator=None, order_by=None):
        self.expr = expr
        self.order_by = order_by
        self.separator = separator

@compiles(string_agg, 'mysql')
def _compile_string_agg_mysql(element, compiler, **kw):
    buf = StringIO()
    buf.write('group_concat(')
    buf.write(compiler.process(element.expr))
    if element.order_by is not None:
        buf.write(' ORDER BY ')
        buf.write(compiler.process(element.order_by))
    if element.separator is not None:
        buf.write(' SEPARATOR ')
        buf.write(compiler.process(sa.literal(element.separator)))
    buf.write(')')
    return buf.getvalue()

# Use 'postgres' for SQLAlchemy < 0.6.
@compiles(string_agg, 'postgresql')
def _compile_string_agg_postgresql(element, compiler, **kw):
    buf = StringIO()
    buf.write('string_agg(')
    buf.write(compiler.process(element.expr))
    if element.separator is not None:
        buf.write(', ')
        buf.write(compiler.process(sa.literal(element.separator)))
    if element.order_by is not None:
        buf.write(' ORDER BY ')
        buf.write(compiler.process(element.order_by))
    buf.write(')')
    return buf.getvalue()

if __name__ == '__main__':
    clause = string_agg(sa.literal_column('some_column'),
                        ', ',
                        order_by=sa.literal_column('some_other_column').asc())

    mysql_engine = sa.create_engine('mysql:///')
    print 'MySQL: %s' % clause.compile(dialect=mysql_engine.dialect)

    pg_engine = sa.create_engine('postgresql:///')
    print 'PostgreSQL: %s' % clause.compile(dialect=pg_engine.dialect)

-Conor




Re: [sqlalchemy] dictionary-like objects for ORM

2010-08-18 Thread Conor
On 08/17/2010 11:32 AM, yota wrote:
 Hello,

 sqlalchemy seems to be the proper tool for my needs but I can't figure
 out how to design my project or set the ORM properly.

 Let's say, I build a music database, storing tracks and their
 associated metadata in an sql-like database defined as such :

 TRACK_TABLE ( ident *, url , duration )
 METADATA_TABLE ( track_ident *, field_name *, field_content )

 (track_ident, field_name) being the primary key for METADATA_TABLE...


 intuitively, a meaningful object for a metadata set would be a
 dictionary-like object. The name and number of fields being unknown in
 advance but stored in the METADATA_TABLE as one row per field.

 Here is the question : how may I use the ORM to map
 a table :
 123 / title / waka waka
 123 / artist / shakira
 123 / featuring / my sister

 to an object like :
 metadata.title = waka waka
 metadata.artist = shakira
 metadata.featuring = my sister

 or a dictionary like :
 metadata = { title : waka waka, artist : shakira,
 featuring : my sister }

 or whatsoever ...

 the solution might also be in rewrite of the sql schema :)


 thanks for your advices
   

SQLAlchemy supports for this via the attribute_mapped_collection and
association_proxy classes:

from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm.collections import attribute_mapped_collection

def create_metadata(field_name, field_content):
    return TrackMetaData(field_name=field_name, field_content=field_content)

class Track(Base):
    __tablename__ = "track"

    id = Column(Integer, primary_key=True)
    [...]

    field2metadata = relationship("TrackMetaData",
                                  backref="track",
                                  collection_class=attribute_mapped_collection("field_name"))
    field2content = association_proxy("field2metadata",
                                      "field_content",
                                      creator=create_metadata)

# I'm only naming this class TrackMetaData to prevent confusion with
# sqlalchemy.MetaData.
class TrackMetaData(Base):
    __tablename__ = "metadata"

    track_id = Column(Integer, ForeignKey("track.id"), primary_key=True)
    field_name = Column(Unicode(...), primary_key=True)
    field_content = Column(Unicode(...), nullable=False)

In this way you can access field2content like a dictionary:

artist = track.field2content[u"artist"]
track.field2content[u"title"] = u"waka waka"

Note that association_proxy does not supply a comparator yet, so if you
want to join/query on metadata then you need to use the field2metadata
relationship:

# Find all tracks by artist Shakira.
q = Session.query(Track)
q = q.filter(Track.field2metadata.any(and_(
    TrackMetaData.field_name == u"artist",
    TrackMetaData.field_content == u"Shakira")))
tracks = q.all()

# Load all tracks, eager-loading their metadata.
q = Session.query(Track)
q = q.options(joinedload(Track.field2metadata))
tracks = q.all()

-Conor




Re: [sqlalchemy] How to get read-only objects from database?

2010-08-18 Thread Conor
On 08/18/2010 10:27 AM, Alvaro Reinoso wrote:
 Hello,

 I'd like to query the database and get read-only objects with session
 object. I need to save the objects in my server and use them through
 the user session. If I use a object outside of the function that calls
 the database, I get this error:

 DetachedInstanceError: Parent instance is not bound to a Session;
 lazy load operation of attribute 'items' cannot proceed

 I don't need to make any change in those objects, so I don't need to
 load them again.

 Is there any way that I can get that?

 Thanks in advance!
   

You have two options:

   1. Keep the DB session open longer so the objects can lazy-load
  attributes from the database.
   2. Ensure that all the attributes you will use are fully loaded in
  your function that calls the database. Generally this means
  adding joinedload() or subqueryload() options to your DB queries.
  This will prevent lazy-loads from occurring and makes it safe to
  use the objects without a DB session.
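
A sketch of option 2 (Parent and Parent.items stand in for your mapped class
and the collection named in the error message):

from sqlalchemy.orm import joinedload

def load_objects(session):
    objs = (session.query(Parent)
            .options(joinedload(Parent.items))
            .all())
    session.close()   # the objects are now detached but fully loaded
    return objs

objs = load_objects(session)
print objs[0].items   # already loaded, so no lazy load is attempted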

-Conor




Re: [sqlalchemy] how to dirty Session/Engine when connection string changes

2010-08-17 Thread Conor
On 08/17/2010 01:44 PM, fulv wrote:
 This is my stack:

 -   Mac OS X 10.6.4
 - freeTDS 0.82
 - unixODBC 2.3.0
 - pyodbc 2.1.7
 - z3c.saconfig 0.11
 - SQLAlchemy 0.6.3
 - Zope 2.10.11
 - Plone 3.3.5

 I have a control panel in my application that allows to change the
 connection parameters, i.e. generate a new connection string.  I'm
 trying to dirty the Session or Engine or both, so that when the
 connection string changes, the session does not use the old Engine,
 but will create a new one.

 This is what I do now when the connection parameters change:

 from z3c.saconfig import Session
 from z3c.saconfig.interfaces import IEngineFactory
 from zope.component import getUtility

 getUtility(IEngineFactory, name='').reset()
 Session().expire_all()
 Session().remove()

 Clearly, this is not enough, because the Session keeps using the old
 Engine.  Any tips would be appreciated!

 Thanks!

   

In vanilla SQLAlchemy (w/o Zope addons), you could do this:

Session.remove() # NOTE: No parens after Session
Session.configure(bind=<new engine object>)

Also, are you sure your IEngineFactory object is returning the correct
engine? The default EngineFactory class does not seem to have a public
API for changing engine args. Otherwise, that should be enough.

-Conor




Re: [sqlalchemy] Updating a Table after changing class based on Declarative in PostgreSQL

2010-08-16 Thread Conor
On 08/15/2010 11:36 AM, mclovin wrote:
 I am new to SQL and SQLalchemy, but say I have a class like this:

 class User(Base):
 __tablename__ = users

 id = Column(Integer, primary_key=True)
 name = Column(String, unique=True)
 join = Column(DateTime)
 infractions =Column(Integer)
 posts = Column(Integer)

 def __init__(self, name):
 self.name = name
 self.join = datetime.datetime.now()
 self.infractions = 0
 self.posts = 0


 but I wanted to change it to this:
 class User(Base):
 __tablename__ = users

 id = Column(Integer, primary_key=True)
 name = Column(String, unique=True)
 join = Column(DateTime)
 infractions =Column(Integer)
 #posts = Column(Integer) REMOVE POSTS
 bannedTill = Column(DateTime)   #ADD BANNEDTILL

 def __init__(self, name):
 self.name = name
 self.join = datetime.datetime.now()
 self.infractions = 0
 self.bannedTill = datetime.datetime.now()


 Where I remove the column posts and add a column bannedTill. What
 are the steps to update my table users to reflect these changes
 without losing the data that is already in the table (I will populate
 my new field manually).
   

What you want is called schema migration. It's a big topic, but the
simplest way is to manually execute raw SQL commands to add/remove
columns. For example, in PostgreSQL:

BEGIN;
ALTER TABLE users DROP COLUMN posts;
ALTER TABLE users ADD COLUMN bannedTill TIMESTAMP;
COMMIT;
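
If you would rather issue the same statements from Python, roughly (assuming
engine is the Engine you already created for this database):

conn = engine.connect()
trans = conn.begin()
conn.execute("ALTER TABLE users DROP COLUMN posts")
conn.execute("ALTER TABLE users ADD COLUMN bannedTill TIMESTAMP")
trans.commit()
conn.close()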

Alternatively, there is a package called sqlalchemy-migrate
http://code.google.com/p/sqlalchemy-migrate/ that offers a more
sophisticated way to migrate schemas.

-Conor




Re: [sqlalchemy] Re: Unknown Issue Causing Extremely Slow Query

2010-07-19 Thread Conor
On 07/19/2010 02:47 PM, Michael Bayer wrote:
 On Jul 19, 2010, at 1:53 PM, Adrian Price-Whelan wrote
 Here is some more detailed information trying the query multiple ways:

 Piping the command into psql and writing to a tmp file takes 12 seconds (tmp 
 file is 241MB):

 time echo SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS spectrum_ra, 
 spectrum.dec AS spectrum_dec, spectrum.values AS spectrum_values, 
 spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask 
 AS spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk 
 = spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
 'spPlate-3586-55181.fits'; | psql spectradb -U postgres  tmp

 real 0m12.052s
 user 0m2.501s
 sys  0m0.274s

 engine.execute on the same query took ~6 seconds:
 spectra = engine.execute(SELECT spectrum.pk AS spectrum_pk, spectrum.ra AS 
 spectrum_ra, spectrum.dec AS spectrum_dec, spectrum.values AS 
 spectrum_values, spectrum.spectrum_header_pk AS spectrum_spectrum_header_pk, 
 spectrum.fiber_number AS spectrum_fiber_number, spectrum.inv_var AS 
 spectrum_inv_var, spectrum.and_mask AS spectrum_and_mask, spectrum.or_mask 
 AS spectrum_or_mask FROM spectrum JOIN spectrum_header ON spectrum_header.pk 
 = spectrum.spectrum_header_pk WHERE spectrum_header.filename = 
 'spPlate-3586-55181.fits';)
 
 Call fetchall() on the result to get a better idea what's going on.
 [...]
   

Assuming fetchall() brings the execution time back to ~486 seconds, it
is likely your bottleneck is psycopg2's str-to-Decimal conversion. A big
part of this is that Python's Decimal class is written in pure Python.
You can override psycopg2's type conversion[1], but the only easy speed
increase I see is to leave the values as strings. Parsing them as floats
may help as well.
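
If you do go the override route, a rough sketch with psycopg2 (1700 is
PostgreSQL's type OID for NUMERIC; whether plain floats are acceptable depends
on your precision requirements):

import psycopg2.extensions

def cast_numeric_to_float(value, cur):
    return float(value) if value is not None else None

FLOAT_NUMERIC = psycopg2.extensions.new_type(
    (1700,), 'FLOAT_NUMERIC', cast_numeric_to_float)
psycopg2.extensions.register_type(FLOAT_NUMERIC)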

-Conor

[1] http://initd.org/psycopg/docs/faq.html#problems-with-type-conversions




Re: [sqlalchemy] Doing a UNION and aliasing columns

2010-07-13 Thread Conor
On 07/13/2010 02:43 PM, Andrew Bialecki wrote:
 Any thoughts on a workaround?
   

Since you're selecting individual columns instead of mapped classes, the
ORM doesn't provide much benefit over straight SQL expressions. Try this
(untested):

by_email = select([User.email.label("text"), User.email.label("email")])
by_email = by_email.where(User.email.like("%%%s%%" % search_term))

by_first_name = select([User.first_name.label("text"),
                        User.email.label("email")])
by_first_name = by_first_name.where(User.first_name.like("%%%s%%" % search_term))

q = union_all(by_email, by_first_name)
for (text, email) in session.execute(q).fetchall():
    ...

-Conor

 On Tue, Jul 13, 2010 at 3:40 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
   
 that's an intricate trick which currently doesn't work with the ORM.   #1852 
 is added to see if there's a quick way to fix this (very likely there isn't 
 something quick).

 You'd have to trick it using column(email) or something like that.


 On Jul 13, 2010, at 3:02 PM, Andrew Bialecki wrote:

 
 I tried to write the following code assuming a User class with
 first_name, last_name, and email:

 search_term = Andrew

 by_email = session.query(User.email.label(text),
 User.email.label(email)).filter(User.email.like(%%%s%% %
 search_term)
 by_first_name = session.query(User.first_name.label(text),
 User.email.label(email)).filter(User.first_name.like(%%%s%% %
 search_term)
 matches = by_email.union_all(by_first_name).all()

 with the hope that this would return rows like so:

 text   email
 -   ---
 and...@test.comand...@test.com
 Andrew some.ran...@email.com

 However, I get:

 text   email
 -   ---
 and...@test.comand...@test.com
 Andrew Andrew

 When I look at the SQL, the wrapper select for the UNION is wrong.
 Any idea how to get the behavior I want?  Thanks!
   




Re: [sqlalchemy] cascading a delete in a many-to-many relation

2010-07-12 Thread Conor
On 07/12/2010 02:38 PM, tom wrote:
 Hi,

 I have a problem with cascading a delete. I have two tables, and they
 are mapped many-to-many:

 class File(object): pass
 file_table = Table('file', metadata,
 Column('id', Integer, primary_key=True, autoincrement=True),
 Column('filename', String(255)),
 }

 class FileHost(object): pass
 file_host = Table('host', metadata,
 Column('id', Integer, primary_key=True, autoincrement=True ),
 Column('name', String(255)),
 )

 file_hosted = Table('file_hosted', metadata,
 Column('id_host', Integer, ForeignKey('host.id')),
 Column('id_file', Integer, ForeignKey('file.id'))
 )

 session.mapper(File, file_table, properties={
 'host': relation(FileHost, secondary=file_hosted, backref='files',
 cascade='all,delete-orphan', single_parent=True)
 })
 session.mapper(FileHost, file_host)


 This is the error I get:
 sqlalchemy.exc.IntegrityError: (IntegrityError) update or delete on
 table file violates foreign key constraint
 file_hosted_id_file_fkey on table file_hosted
 DETAIL:  Key (id)=(50905) is still referenced from table
 file_hosted.


 Can somebody please tell me what I'm doing wrong because I tried to
 find an answer and couldn't. This was the only somewhat related thing
 I found: http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg13198.html
   

You are telling SQLAlchemy to cascade File deletes to FileHost, but you
want it the other way around. You can fix this by moving the
cascade='all,delete-orphan' and single_parent=True clauses into the
backref. You also probably want uselist=False.

session.mapper(File, file_table, properties={
    'host': relation(FileHost,
                     backref=backref('files',
                                     cascade='all,delete-orphan',
                                     single_parent=True),
                     secondary=file_hosted,
                     uselist=False)
})

-Conor




Re: [sqlalchemy] Efficiency of adding to M:M relations

2010-06-30 Thread Conor
On 06/30/2010 03:32 AM, Paul Johnston wrote:
 Hi,

 I hope everyone's well here. It's been some time since I posted. Great
 to see it up to 0.6, and even more progress on MS-SQL (although I'm
 now unlikely to be using that).

 As always, I'm using SQLAlchemy as part of a web app. I have a set of
 checkboxes, which I'm saving into an M:M relation. The app receives a
 list of IDs from the client. To save these to the M:M, I need them as
 database objects. So I'm doing (roughly):

 myobj.relation = [OtherTable.get(i) for i in ids]

 The problem with this is it's causing a database query for each id.
 What I'd really like to do is somehow create a placeholder object
 with just the id, that doesn't cost a database query to create. After
 that, I'll trust flush() to do its magic as efficiently as possible.

 Paul

   

Here's an ugly way to do it, which injects a persistent OtherTable
instance into the session without verifying it exists in the database.

from sqlalchemy.orm import identity_key_from_primary_key
from sqlalchemy.orm.attributes import instance_state

def inject_persistent_instance(session, cls, primary_key):
temp = cls()
state = instance_state(temp)
state.key = identity_key_from_primary_key(primary_key)
return session.merge(temp)

myobj.relation = [inject_persistent_instance(session, OtherTable, i) for i in 
ids]
# You risk flush errors if the OtherTable rows did not exist in the database.
session.flush()

-Conor




Re: [sqlalchemy] Efficiency of adding to M:M relations

2010-06-30 Thread Conor
On 06/30/2010 02:10 PM, Conor wrote:
 On 06/30/2010 03:32 AM, Paul Johnston wrote:
 Hi,

 I hope everyone's well here. It's been some time since I posted. Great
 to see it up to 0.6, and even more progress on MS-SQL (although I'm
 now unlikely to be using that).

 As always, I'm using SQLAlchemy as part of a web app. I have a set of
 checkboxes, which I'm saving into an M:M relation. The app receives a
 list of IDs from the client. To save these to the M:M, I need them as
 database objects. So I'm doing (roughly):

 myobj.relation = [OtherTable.get(i) for i in ids]

 The problem with this is it's causing a database query for each id.
 What I'd really like to do is somehow create a placeholder object
 with just the id, that doesn't cost a database query to create. After
 that, I'll trust flush() to do its magic as efficiently as possible.

 Paul

   

 Here's an ugly way to do it, which injects a persistent OtherTable
 instance into the session without verifying it exists in the database.

 from sqlalchemy.orm import identity_key_from_primary_key
 from sqlalchemy.orm.attributes import instance_state

 def inject_persistent_instance(session, cls, primary_key):
 temp = cls()
 state = instance_state(temp)
 state.key = identity_key_from_primary_key(primary_key)
 return session.merge(temp)
   

That last line should have been:

return session.merge(temp, load=False)

 myobj.relation = [inject_persistent_instance(session, OtherTable, i) for i in 
 ids]
 # You risk flush errors if the OtherTable rows did not exist in the database.
 session.flush()
   

 -Conor





Re: [sqlalchemy] Efficiency of adding to M:M relations

2010-06-30 Thread Conor
On 06/30/2010 02:15 PM, Conor wrote:
 On 06/30/2010 02:10 PM, Conor wrote:
 On 06/30/2010 03:32 AM, Paul Johnston wrote:
 Hi,

 I hope everyone's well here. It's been some time since I posted. Great
 to see it up to 0.6, and even more progress on MS-SQL (although I'm
 now unlikely to be using that).

 As always, I'm using SQLAlchemy as part of a web app. I have a set of
 checkboxes, which I'm saving into an M:M relation. The app receives a
 list of IDs from the client. To save these to the M:M, I need them as
 database objects. So I'm doing (roughly):

 myobj.relation = [OtherTable.get(i) for i in ids]

 The problem with this is it's causing a database query for each id.
 What I'd really like to do is somehow create a placeholder object
 with just the id, that doesn't cost a database query to create. After
 that, I'll trust flush() to do its magic as efficiently as possible.

 Paul

   

 Here's an ugly way to do it, which injects a persistent OtherTable
 instance into the session without verifying it exists in the database.

 from sqlalchemy.orm import identity_key_from_primary_key
 from sqlalchemy.orm.attributes import instance_state

 def inject_persistent_instance(session, cls, primary_key):
 temp = cls()
 state = instance_state(temp)
 state.key = identity_key_from_primary_key(primary_key)
 return session.merge(temp)
   

 That last line should have been:

 return session.merge(temp, load=False)

Arrgh, one more time:

from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.attributes import instance_state

def inject_persistent_instance(session, cls, primary_key):
    mapper = class_mapper(cls)
    temp = cls()
    state = instance_state(temp)
    state.key = mapper.identity_key_from_primary_key(primary_key)
    return session.merge(temp, load=False)

myobj.relation = [inject_persistent_instance(session, OtherTable, i) for i in ids]
# You risk flush errors if the OtherTable rows did not exist in the database.
session.flush()

-Conor




Re: [sqlalchemy] Re: Questions about session

2010-06-28 Thread Conor
On 06/10/2010 04:32 PM, Az wrote:
 Let me take a guess:

 class Supervisor(object):
   def __init__(self, ee_id, name, original_quota, loading_limit):
   self.ee_id = ee_id
   self.name = name
   self.original_quota = original_quota
   self.loading_limit = loading_limit
   self.predecr_quota = 0
   self.offered_proj = set()
   self.total_prealloc_pop = 0
   self.total_postalloc_pop = 0

   def __repr__(self):
   return str(self)

   def __str__(self):
   return self.name
   return %s %s %s (Offered projects: %s) %(self.ee_id, 
 self.name,
 self.predecr_quota, self.offered_proj)

 So *inside* the Supervisor class would I define it like this (trying
 to have a go at it)?

 def __deepcopy__(self, memo):
 dc = type(self)()
 dc.__dict__.update(self.__dict__)
 for attr in dir(supervisor):
 if not attr.startswight('__'):
 self.attr = deepcopy(self.attr, memo)
   

The location of the __deepcopy__ method is correct, but there are
several problems with the implementation:

   1. You are modifying self, when you want to modify dc.
   2. You are not returning dc.
   3. dir(supervisor) (I assume you meant dir(self)) will include
  attributes from the class object itself, but you don't want to
  make copies of those attributes. I would recommend iterating over
  self.__dict__ instead or manually specifying the attributes to be
  copied.
   4. attr.startswith('__') will not catch '_sa_instance_state': you
  probably want attr.startswith('_') or attr.startswith('_sa_') instead.
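
A sketch that pulls those four points together (it reuses the __init__
arguments from your class above; adjust the startswith test if you have your
own underscore-prefixed attributes that should be copied):

import copy

class Supervisor(object):
    # ... __init__, __repr__, __str__ as before ...

    def __deepcopy__(self, memo):
        dc = type(self)(self.ee_id, self.name,
                        self.original_quota, self.loading_limit)
        memo[id(self)] = dc   # register early so reference cycles resolve to dc
        for attr, value in self.__dict__.items():
            if attr.startswith('_'):   # skips '_sa_instance_state'
                continue
            setattr(dc, attr, copy.deepcopy(value, memo))
        return dc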

 So this only overrides __deepcopy__ when I call it for a Supervisor
 and not for any of the other classes right?
   

Correct.

-Conor




Re: [sqlalchemy] Re: Questions about session

2010-06-28 Thread Conor
On 06/10/2010 12:33 PM, Az wrote:
 The pprintout was:

 {<type 'collections.defaultdict'>: 156,
  <type 'bool'>: 2,
  <type 'float'>: 1,
  <type 'int'>: 538,
  <type 'list'>: 1130,
  <type 'dict'>: 867,
  <type 'NoneType'>: 1,
  <type 'set'>: 932,
  <type 'str'>: 577,
  <type 'tuple'>: 1717,
  <type 'type'>: 5,
  <class 'sqlalchemy.util.symbol'>: 1,
  <class 'sqlalchemy.orm.state.InstanceState'>: 236,
  <class 'ProjectParties.Student'>: 156,
  <class 'ProjectParties.Supervisor'>: 39,
  <class 'ProjectParties.Project'>: 197}

 I think the InstanceStates come from the Supervisor and Project
 classes (197+39 = 236)
   

Sounds right. You will need to override __deepcopy__ on those classes as
well.

 Sounds pretty ugly. What if you add extra tables to represent runs
 and/or trials?

 class Run(Base):
 # Having a separate table here gives you nice auto-incrementing run ids
 # and lets you attach additional information to a run, such as timestamp,
 # human-supplied comment, etc.
 __tablename__ = 'run'
 id = Column(Integer, primary_key=True)
 timestamp = Column(DateTime, nullable=False)
 # comment = Column(UnicodeText(100), nullable=False)

 trials = relationship('Trial',
   back_populates='run',
   order_by=lambda: Trial.id.asc())

 class Trial(Base):
 # Having a separate table here is of dubious value, but hey it makes the
 # relationships a bit nicer!
 __tablename__ = 'trial'
 __table_args__ = (PrimaryKeyConstraint('run_id', 'id'), {})
 run_id = Column(Integer, ForeignKey('run.id'))
 id = Column(Integer)

 run = relationship('Run', back_populates='trials')
 sim_allocs = relationship('SimAllocation', back_populates='trial')

 class SimAllocation(Base):
 ...
 __table_args__ = (PrimaryKeyConstraint('run_id', 'trial_id', 'stud_id'),
   ForeignKeyConstraint(['run_id', 'trial_id'],
['trial.run_id', 'trial.id']),
   {})

 run_id = Column(Integer)
 trial_id = Column(Integer)
 stud_id = Column(Integer)

 trial = relationship('Trial', back_populates='sim_allocs')
 
 Ah true, my solution was rather hacky and not very elegant.

 Your class definitions... are you defining both table and Class in one
 go? Would I have to change the way my monteCarloBasic creates
 instances of SimAllocation?
   

I assumed you were using the declarative extension
(sqlalchemy.ext.declarative) to generate the table, class, and mapper in
one go. It's not at all necessary: you can define the tables, classes,
and mappers separately. Just use what you are most comfortable with.

-Conor




Re: [sqlalchemy] Primary key not unique?

2010-06-28 Thread Conor
On 06/14/2010 12:13 PM, Az wrote:
 I've got an error that says the primary key is not unique. This
 shouldn't be possible since my primary keys are unique IDs :S
   

It sounds like you are importing the same projects from your XML file to
the database multiple times. If you used session.merge instead of
session.add in addToTable, SQLAlchemy should be updating the existing
project rows instead of trying to insert new ones. I don't do this much
myself, so I could be wrong.
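
Roughly, that change looks like this inside addToTable (assuming the same
projects dictionary and session as before; the other loops would change the
same way):

for project in projects.itervalues():
    session.merge(project)   # updates the row if proj_id already exists, else inserts
session.commit()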

 This is only happening after I switched to a physical sqlite3 database
 from the :memory: one
   

This is because you are now starting from a populated database instead
of an empty one.

 #BEGIN#

 File Main.py, line 97, in module
 MCS.addToTable()
   File /XXX/MonteCarloSimulation.py, line 77, in addToTable
 session.flush()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 session.py, line 1354, in flush
 self._flush(objects)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 session.py, line 1432, in _flush
 flush_context.execute()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 261, in execute
 UOWExecutor().execute(self, tasks)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 753, in execute
 self.execute_save_steps(trans, task)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 768, in execute_save_steps
 self.save_objects(trans, task)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 759, in save_objects
 task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 mapper.py, line 1428, in _save_obj
 c = connection.execute(statement.values(value_params), params)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 824, in execute
 return Connection.executors[c](self, object, multiparams, params)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 874, in _execute_clauseelement
 return self.__execute_context(context)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 896, in __execute_context
 self._cursor_execute(context.cursor, context.statement,
 context.parameters[0], context=context)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 950, in _cursor_execute
 self._handle_dbapi_exception(e, statement, parameters, cursor,
 context)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 931, in _handle_dbapi_exception
 raise exc.DBAPIError.instance(statement, parameters, e,
 connection_invalidated=is_disconnect)
 sqlalchemy.exc.IntegrityError: (IntegrityError) PRIMARY KEY must be
 unique u'INSERT INTO projs (proj_id, proj_allocated, proj_blocked,
 proj_sup, presim_pop) VALUES (?, ?, ?, ?, ?)' [111, None, None,
 44, 0]

 #END#
   




Re: [sqlalchemy] Re: Questions about session

2010-06-09 Thread Conor
On 06/08/2010 10:54 PM, Az wrote:
 memo = {}
 copied_students = copy.deepcopy(students, memo)
 copied_supervisors = copy.deepcopy(supervisors, memo)
 copied_projects = copy.deepcopy(projects, memo)

 After you do this, memo will contain a record of all copied objects. You
 should examine memo.values() to see if it is copying more than you
 expected. If it did copy just what you expected, then my worries were
 unfounded.
 
 I'll let you know how that turns out soonish. While I know it's my
 data, is there anything you can suggest from your experience that you
 consider to be unexpected?
   

Expected: students, supervisors, projects, dictionaries of said objects,
and other attribute values (strings, ints, lists, etc.). Unexpected:
anything else, especially sessions, InstanceState objects, or other ORM
support objects.

 Yes, session_id/trial_id and stud_id can repeat, and you can still group
 things together by run_id. Alternatively, you could add an
 autoincrementing primary key to SimAllocation, but I believe it is
 redundant since the combination (run_id, session_id/trial_id, stud_id)
 should be unique anyway. run_id can definitely be a datetime, but I'm
 not sure how well sqlite (it sounds like you're using sqlite) supports
 datetimes in queries 
 (see http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html#date-an...).
 A GUID (or UUID) is just a 128-bit value (usually random); the benefit
 here is you can generate it on the client side and be confident that it
 will be unique on the server (to avoid duplicate primary key errors).
 Using datetimes or database sequences would also work. You can
 definitely pass the run_id as an argument to monteCarloBasic, or to each
 object's create_db_record method.
 
 Also I get why you mention three keys: run_id/guid/uuid and session_id/
 trial_id alone won't suffice... but since we know there are unique
 students (within each single allocation run, etc. So I can get rid of
 the ident then? It serves no other purpose really if I can get a key
 combo that's unique and works for.
   

Yes, ident is redundant if you have those three columns.

 I am indeed using SQLite3. I take it take my physical database has to
 something like:
 engine = create_engine('sqlite:///Database/spalloc.sqlite3',
 echo=False)?
   

Looks good.

 Also I take it I should generate the UUID (http://docs.python.org/
 library/uuid.html) when I call the MonteCarloBasic function right?
 Since it should be the same for each call, I take I'll have to
 generate it before the loop. Additionally, how would I actually query
 a 128-bit value? Say I have a bit in my GUI where the supervisor can
 put in a UUID to pull the data off the Database. How would he actually
 know which UUID to put in? Any ideas?
   

Yes, one UUID generation per call to monteCarloBasic. As for knowing
which UUID to query on, you can always query distinct values of the
run_id column, e.g.
session.query(SimAllocation.run_id).distinct().all(), and present them
as a list to the user. However that doesn't really help people know
which UUID to use. Using timestamps (i.e. columns of type
sqlalchemy.DateTime) instead of UUIDs for SimAllocation.run_id may
improve that situation.

 Also once I've got my stuff in the physical database and after my
 program is done, I'd call session.close() right? How do I access the
 DB data then? Would I have to write some separate functions that allow
 me to access the data without using (for example)
 'session.query(Student)...`? This way the user (i.e. my supervisor)
 won't have to keep running the readData, monteCarloBasic, etc
 functions just to access the DB (that would be poor indeed!).
   

My impression is that readData is only used to import/migrate data into
the database, and that you wouldn't call it very often.

Calling session.close() is not necessary if you have a single global
session like you do. You only need it if you are worried that the
database might get modified concurrently by another transaction (from a
different process, session, etc.). Having said this, session.close()
does not prevent you from using the session later on: it just closes out
any pending transaction and expunges all object instances (including any
student, supervisor, and project instances you may have added/loaded).
This ensures that it sees fresh data for any future queries.

In conclusion, using session.query(Student)... should work whether you
have run monteCarloBasic or not.

-Conor




Re: [sqlalchemy] Re: Questions about session

2010-06-09 Thread Conor
On 06/09/2010 12:44 AM, Az wrote:
 Traceback (most recent call last):
   File Main.py, line 39, in module
 MCS.monteCarloBasic(trials)
   File //MonteCarloSimulation.py, line 163, in monteCarloBasic
 session.merge(temp_alloc)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 session.py, line 1158, in merge
 self._autoflush()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 session.py, line 897, in _autoflush
 self.flush()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 session.py, line 1354, in flush
 self._flush(objects)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 session.py, line 1432, in _flush
 flush_context.execute()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 261, in execute
 UOWExecutor().execute(self, tasks)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 753, in execute
 self.execute_save_steps(trans, task)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 768, in execute_save_steps
 self.save_objects(trans, task)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 unitofwork.py, line 759, in save_objects
 task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 mapper.py, line 1428, in _save_obj
 c = connection.execute(statement.values(value_params), params)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 824, in execute
 return Connection.executors[c](self, object, multiparams, params)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 874, in _execute_clauseelement
 return self.__execute_context(context)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 896, in __execute_context
 self._cursor_execute(context.cursor, context.statement,
 context.parameters[0], context=context)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 950, in _cursor_execute
 self._handle_dbapi_exception(e, statement, parameters, cursor,
 context)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
 engine/base.py, line 931, in _handle_dbapi_exception
 raise exc.DBAPIError.instance(statement, parameters, e,
 connection_invalidated=is_disconnect)
 sqlalchemy.exc.IntegrityError: (IntegrityError) columns uid,
 session_id, stud_id are not unique u'INSERT INTO sim_alloc (ident,
 uid, session_id, stud_id, alloc_proj, alloc_proj_rank) VALUES
 (?, ?, ?, ?, ?, ?)' [1, '1d295f48-7386-11df-8e87-00264a052efc', 1,
 5796, 119, 1]

 

 Good news: Got the UUID working in a snap.
 Bad news: See error :(


 Note: This happened when I started using 'session.merge(temp_alloc)'
 instead of 'session.add'
   

The most likely cause is if you call session.add(temp_alloc) after
calling session.merge(temp_alloc) for the same temp_alloc object. I
noticed your original monteCarloBasic had two calls to
session.add(temp_alloc); did both get changed to
session.merge(temp_alloc)? If that doesn't work, can you verify that
SQLAlchemy's primary key for SimAllocation matches the database's
primary key for sim_alloc? What column type are you using for uid? Which
call to session.merge is failing (line 163 according to your traceback),
the one inside your "for rank in ranks" loop or the one outside?

Also, since you know you are creating new sim_alloc rows in the database
(instead of overwriting existing ones), you can use session.add instead
of session.merge. This will prevent unnecessary SELECTs to your database.

-Conor


Re: [sqlalchemy] Updating field by applying transformation function to current field value

2010-06-09 Thread Conor
On 06/09/2010 07:58 AM, bartomas wrote:
 Hi,

 I'm new to SqlAlchemy. I'd like to find the simplest way of updating
 the fields of a table by applying a transformation function to the
 current value of the field.
 I've tried the following:

 ## Code snippet

 engine = create_engine('mysql://root:t...@localhost:3306/Test1')
 metadata = MetaData()
 metadata.bind = engine
 employees = Table('employees', metadata, autoload=True)

 upd =
 employees.update(values={employees.c.fullname:transform2(employees.c.fullname)})

 engine.execute(upd)


 def transform2(currentValue):
 return re.sub('Peter','Paul',currentValue)


 def transform1(currentValue):
 return 'Mr ' + currentValue

 ## end code snippet



 When applying the function transform1  which just concatenates another
 string to the current value, it works fine. However if I apply
 transform2 that does a regular expression substitution on the value I
 get an error message saying that the re.sub function expects a string
 argument.
 How can I retrieve the current value from a Column object to transform
 it? Or is there a better way to this problem?

 Many thanks for any help.
   

In both transform functions, currentValue is the column object, not the
value for a particular row. transform1 happens to work because column
objects can turn the + operator into a SQL expression. To get the same
effect for transform2, you have to use database functions, e.g. for
PostgreSQL:

upd = employees.update(values={employees.c.fullname:
    func.regexp_replace(employees.c.fullname, 'Peter', 'Paul')})

If the transform function really does have to be run client-side, you
have no choice but to SELECT all the rows, apply the transform, and
issue many UPDATEs back to the database. Using the ORM can make this easier.
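
A rough sketch of that ORM round trip (Employee is a placeholder for a class
mapped to the employees table with a fullname attribute):

import re
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()
for employee in session.query(Employee):
    employee.fullname = re.sub('Peter', 'Paul', employee.fullname)
session.commit()   # flushes one UPDATE per row that actually changed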

-Conor







Re: [sqlalchemy] Re: Questions about session

2010-06-08 Thread Conor
, stud_id)
should be unique anyway. run_id can definitely be a datetime, but I'm
not sure how well sqlite (it sounds like you're using sqlite) supports
datetimes in queries (see
http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html#date-and-time-types).
A GUID (or UUID) is just a 128-bit value (usually random); the benefit
here is you can generate it on the client side and be confident that it
will be unique on the server (to avoid duplicate primary key errors).
Using datetimes or database sequences would also work. You can
definitely pass the run_id as an argument to monteCarloBasic, or to each
object's create_db_record method.

-Conor




Re: [sqlalchemy] Re: Questions about session

2010-06-07 Thread Conor
On 06/06/2010 02:58 PM, Az wrote:
 Hi Conor,

 Basically I sat down and made some decisions and changes.

 I've created an actual copy of the Student class as in I've now got
 two classes, Student and StudentUnmapped. The Unmapped one has the
 same attributes as the mapped one, except for being... well, unmapped.
 Now I can a) use deepcopy and b) change the objects without worry.
 resetData() will act on the unmapped dictionary as well so the mapped
 object remains safe and unchanged.
   

Sounds good. Just beware that deepcopy will try to make copies of all
the objects referenced by your StudentUnmapped objects (assuming you
didn't define __deepcopy__), so you may end up copying projects,
supervisors, etc.
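
If that extra copying becomes a problem, one option is a __deepcopy__
that shares most references and only deep-copies what the simulation
actually mutates; the 'preferences' attribute below is a made-up example:

import copy

class StudentUnmapped(object):
    # ... your existing __init__/attributes ...

    def __deepcopy__(self, memo):
        new = object.__new__(StudentUnmapped)
        memo[id(self)] = new
        new.__dict__.update(self.__dict__)   # share references by default
        # Only deep-copy the bits the Monte-Carlo run actually mutates.
        new.preferences = copy.deepcopy(self.preferences, memo)
        return new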

 Sorry for beating around the bush with questions that were a bit non-
 SQLA.

 Let's get back to some SQLA questions:

 1. The only changes I'd push onto the mapped object would be... after
 running my MC, I get a bunch of probabilities -- those I want to
 persist. How do I modify the field in a table I've already
 session.commit()-ed using the following function. This happens
 pretty much after I've finished reading in the dictionaries
 completely. After that I just add each thing to the relevant table.
 But I'd want to update some attributes of student because I want to be
 able to have in the database for access later.

 def addToTable():
     """Very simple SQLAlchemy function that populates the Student,
     Project and Supervisor tables."""
     for student in students.itervalues():
         session.add(student)
         session.flush()

     for project in projects.itervalues():
         session.add(project)
         session.flush()

     for supervisor in supervisors.itervalues():
         session.add(supervisor)
         session.flush()

     session.commit()
   

It sounds like you want to a) INSERT students/projects/supervisors that
don't yet exist in the database, and b) UPDATE
students/projects/supervisors that do exist in the database. If so, I
think you want to use session.merge instead of session.add.
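
A sketch of what that might look like, reusing your loop structure:

for student in students.itervalues():
    # merge() looks the object up by primary key: it INSERTs the row if
    # it does not exist yet, otherwise it UPDATEs the existing row.
    session.merge(student)

for project in projects.itervalues():
    session.merge(project)

for supervisor in supervisors.itervalues():
    session.merge(supervisor)

session.commit()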

 2. Say I've now got a physical database and I've run my Monte-Carlo
 multiple times. I think I'd either want to a) have the original M-C
 sessions be overwritten or b) create another set of data, perhaps
 using the data to differentiate the two. How can I do this? Can I
 query each one separately? Or am I better off just with an overwrite?
   

You can indeed append the new set of data to the existing data. You
would just need another column in SimAllocation to distinguish between
different calls to monteCarloBasic. I would recommend using a database
sequence or GUIDs to ensure that each call to monteCarloBasic gets a
unique value for this column.

 3. Finally, regarding the GUI. If each function indicates a separate
 thread, then in that case, yes with my GUI I'd be passing the
 session from thread to thread since I'm no longer just running Main.py
 but rather, the constituent functions one by one. How do I deal with
 this? The reason I used the database was because of persistence and I
 definitely want my data to persist between threads (and after I've
 closed my program) so I can use them for all manner of useful
 calculations, queries and output.
   

Just to be clear, by thread I mean actual system threads spawned by
the the thread or threading module. If this is indeed what you want,
then you probably have a UI thread and a worker thread that runs
monteCarloBasic. Since you should not share a single session object
between threads, you can:

   1. Change monteCarloBasic to not rely on sessions (including their
      persistent objects; see
      http://www.sqlalchemy.org/docs/reference/orm/sessions.html#sqlalchemy.orm.session.Session)
      at all. You would have to make copies of your students, projects,
      and supervisors before handing them over to monteCarloBasic; you
      are already sort-of on this track by using StudentUnmapped
      objects. In this way, monteCarloBasic returns its results as a
      set of objects that are not attached to any session (either
      because they are unmapped or are transient instances; the same
      Session documentation link covers both states), which the UI
      thread uses to update the database. How you pass data from worker
      threads to the UI thread depends on your GUI toolkit.
   2. Change monteCarloBasic to create its own session from the
      sessionmaker object (see the sketch below). This will let
      monteCarloBasic read and write from/to the database, but you will
      have to arrange for your UI thread session to expire_all or close
      itself appropriately so it can see the new data.

Again, this thread business is probably overkill for your project, so
you may want to avoid it altogether.
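
For completeness, option 2 might look roughly like this, where Session
is your existing sessionmaker and run_monte_carlo stands in for the
body of monteCarloBasic:

def worker():
    session = Session()   # a session private to this worker thread
    try:
        run_monte_carlo(session)
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()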

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send

Re: [sqlalchemy] How to load properties of table manually?

2010-06-07 Thread Conor
On 06/07/2010 12:34 AM, Victor Lin wrote:
 Hi,

 I use a cache on some pages of my TurboGears2 application,  and I
 encounter a problem.  There is a query in those page method, for
 example:

 @beaker_cache(expire=30, type='memory', query_args=True)
 @expose()
 def foo():
 data = DBSession.query(Bar).all()
 return dict(data=data)
   

Your expose decorator suggests to me that you want whole-page caching,
in which case you usually want to cached the rendered output of the
controller instead of the query results. I think you will get this
effect if you added a template argument to the expose decorator or
manually render the output in your controller.

If you do want to cache just the query results, then you should first
expunge each object in the results before adding them to the cache. This
will ensure that they don't hold on to your original session. Then you
will want to merge them back into the current session after loading them
from the cache. Example:

@expose()
def foo():
    @beaker_cache(expire=30, type='memory', query_args=True)
    def real_foo():
        data = DBSession.query(Bar).all()
        for obj in data:
            DBSession.expunge(obj)
        return data

    data = real_foo()
    data = [DBSession.merge(obj, load=False) for obj in data]
    return dict(data=data)

You should also look at SQLAlchemy's beaker_caching example (available
in 0.6) for a more automatic way to do caching.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: Questions about session

2010-06-07 Thread Conor
/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 interfaces.py, line 408, in init
 self.do_init()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 properties.py, line 714, in do_init
 self._get_target()
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 properties.py, line 726, in _get_target
 self.mapper = mapper.class_mapper(self.argument, compile=False)
   File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
 lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
 util.py, line 564, in class_mapper
 raise exc.UnmappedClassError(class_)
 sqlalchemy.orm.exc.UnmappedClassError: Class 'ProjectParties.Student'
 is not mapped

 #

 [Q2:] What's that all about? Something wrong with the inheritance?
   

I don't know if there is a way to get the inheritance to work the way
you want it, but not using inheritance like I did above sidesteps the issue.

 I would recommend using a database
 sequence or GUIDs to ensure that each call to monteCarloBasic gets a
 unique value for this column.
 
 As another key sequence different from the simple ident ==
 row_number I'm currently using right? I'll look into that.
   

The problem is that your ident always starts at 1 for each call to
monteCarloBasic. So, assuming your primary key for SimAllocation
consists of some combination of (session_id, ident, stud_id), you will
be reusing the same primary keys for each call to monteCarloBasic. If
you want to overwrite the rows with the same primary keys, then you should
either DELETE the old rows first or maybe use session.merge(temp_alloc)
to get the "find or create" behavior. If you do NOT want to overwrite
the rows, then you need to ensure that some set of columns in
SimAllocation is globally unique, regardless of how many times
monteCarloBasic has been called. An easy way to do this is to change
ident to use a database sequence or GUID, but there are many other
solutions. You probably want to group SimAllocations from a particular
call to monteCarloBasic together, in which case you would add
a run_id column to SimAllocation, where rows with the same run_id were
created in the same call to monteCarloBasic. I think a primary key of
(run_id, session_id/trial_id, stud_id) would be good.
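
A rough sketch of the kind of table I mean, in the declarative style;
column names and types are guesses based on your snippets, so adjust to
match your actual model:

import uuid
from sqlalchemy import Column, Integer, String

class SimAllocation(Base):
    __tablename__ = 'sim_allocation'
    # One GUID generated per monteCarloBasic() call, shared by all of
    # that call's rows.
    run_id = Column(String(36), primary_key=True)
    trial_id = Column(Integer, primary_key=True)
    student_id = Column(Integer, primary_key=True)
    alloc_proj = Column(Integer)

def new_run_id():
    # Call this once at the start of monteCarloBasic().
    return str(uuid.uuid4())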

 The thread business is indeed going over my head :S.

   
 In this way, monteCarloBasic returns its
  results as a set of objects that are not attached to any session
  (either because they are unmapped or are transient
  
 http://www.sqlalchemy.org/docs/reference/orm/sessions.html#sqlalchemy...
  instances), which the UI thread uses to update the database. How
  you pass data from worker threads to the UI thread is dependent on
  your GUI toolkit.
 
 My GUI toolkit is Tkinter?
   

Never used it, sorry. In general, every UI toolkit has a message/event
queue to which you can post messages from any thread. So you could do
something like:

result = monteCarloBasic(...)

def runs_in_ui_thread():
    update_database(result)

ui_toolkit.post_callback(runs_in_ui_thread)
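
I have not used Tkinter myself, but a common pattern there is to have
the worker push its result onto a Queue and have the UI thread poll that
queue with after(); a rough sketch, where root is your Tk root window
and monteCarloBasic/update_database/trials come from your code:

import Queue
import threading

results = Queue.Queue()

def worker():
    results.put(monteCarloBasic(trials))

def poll_results():
    try:
        result = results.get_nowait()
    except Queue.Empty:
        pass
    else:
        update_database(result)    # runs in the UI thread
    root.after(100, poll_results)  # poll again in 100 ms

threading.Thread(target=worker).start()
root.after(100, poll_results)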

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: Questions about session

2010-06-05 Thread Conor
On 06/05/2010 08:06 PM, Az wrote:
 Cheers!

 Creating a new instance of my mapped class and settings, manually.
 Gotcha. I think this will be an easier solution for me.

 Nah, I'm not in a web framework.

 Additional Q:

 +++

 Currently, my database is being stored in memory and it's fine like
 that since a) my data isn't very expansive and b) I'm running the
 program (python Main.py) from a command line where I can just comment
 out various functions in my Main file. However, I'm now designing a
 GUI for my code where I want to be able to call each function
 manually. Now, all functions that reference the session will reference
 session I defined as:

 Session = sessionmaker(bind=engine)
 session = Session()

 Thus after I run my finish my monteCarloBasic (defined way up at the
 top), I'd probably hit another button that would do what I did for
 CODE 2, i.e.

 def checkFor4545(trials):
     sid = 4545
     print sid
     project_id_list = list(students[sid].preferences)
     for project_id in project_id_list:
         gotcha = session.query(SimAllocation)\
             .filter(SimAllocation.student_id == sid)\
             .filter(PP.SimAllocation.alloc_proj == project_id).count()
         print project_id, gotcha/trials

 This basically counts the number of times student 4545 got each of his
 projects for entire Monte-Carlo simulation and prints the average over
 the trials.

 So basically when I'm in command line mode and go python Main.py my
 Main looks like:

 trials = 100
 monteCarloBasic(trials)
 checkFor4545(trials)

 So those will run one after the other.

 Now when I've got a GUI, I'll have the Monte-Carlo run separately and
 then afterwards, hit a button that corresponds to
 'checkFor4545(trials)'. Now, the reason I used SQLAlchemy in the first
 place (besides the general usefulness of SQL) is for data persistence.
 Will 'checkFor4545(trials)' display the same output as it would if it
 were run serially from Main.py? Will it reference the same session?
 (Probably a question you'll want to slap your forehead over, but I
 just want to verify I've got my understanding correct).
   

I see nothing that indicates that they would NOT see the same session,
but I do have some comments:

* GUIs usually run long tasks in background threads to keep the UI
  responsive. If you were to do this, you would not want to use a
  single global session, because sharing a session between threads
  is a big no-no.
* I'm concerned what the call to resetData does. If it resets
  student-project associations, then could it end up deleting the
  temp_allocs you just added in that trial?
* What should happen if you run monteCarloBasic multiple times? It
  seems like you would get duplicate primary keys on SimAllocation
  rows after the 1st call.

 Additionally, when I save to a physical database file, what happens
 everytime I run monteCarloBasic(trials) (since it writes to the
 database). Will it rewrite it every time? Or will it keep appending to
 it?
   

I don't see anything that would indicate rewriting the database in the
code that you have shown (except maybe as a side-effect of your
resetData function that I noted above). Also, you may get duplicate
primary key errors like I mentioned above.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] EXISTS statements with any(), but with a join

2010-06-04 Thread Conor
On 06/04/2010 03:13 PM, Kent wrote:
 Say I track Inventory with three classes: Product, Inventory, Location

 This any() expression yields the following output.

 session.query(Product).filter(Product.inventory.any(Location.siteid==u'EAST')).all()

 SELECT ...
 FROM products
 WHERE EXISTS (SELECT 1
 FROM inventory, locations
 WHERE products.productid = inventory.productid AND locations.siteid = %
 (siteid_1)s)

 What if I really need the inventory and locations tables to by JOINed.

 SELECT ...
 FROM products
 WHERE EXISTS (SELECT 1
 FROM inventory JOIN locations ON sqla_magic_clause  
 JOIN
 WHERE products.productid = inventory.productid AND locations.siteid = %
 (siteid_1)s)

 Can I get to this with the any() expression?

   

The quickest way is to add another any() clause. Assuming your
Inventory-Location relation is many-to-one or one-to-one (meaning you
would use has() instead of any()), you can use this query:

session.query(Product).filter(
Product.inventory.any(
Inventory.location.has(Location.siteid==u'EAST'))).all()

If you don't like nesting another EXISTS clause in your SQL, you can
create the inner query manually:

subq = session.query(Inventory)
subq = subq.join(Inventory.location)
subq = subq.filter(Inventory.productid == Product.productid)
subq = subq.filter(Location.siteid == u'EAST')
subq = subq.correlate(Product) # Probably not needed.
subq = subq.subquery()

session.query(Product).filter(exists(subq)).all()

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: Questions about session

2010-06-04 Thread Conor
On 06/03/2010 02:33 PM, Az wrote:
 Firstly, apologies if I'm demanding too much but basically I'm quite a
 beginner at Python programming and this is for a University project,
 which is why I'm keen to get this done (due in a few days!). So I hope
 you won't mind me asking some questions that may seem really basic.

   
 deepcopy has issues because SQLAlchemy places extra information on your 
 objects, i.e. an _sa_instance_state attribute, that you dont want in your
 copy.  You *do* however need one to exist on your object.  Therefore 
 deepcopy is not supported right now by SQLAlchemy ORM objects.
 
   
 There are ways to manually blow away the old _sa_instance_state and put a 
 new one on the object, but the most straightforward is to make a new
 object with __init__() and set up the attributes that are significant, 
 instead of doing a full deep copy.
 
 Could you explain what you mean by creating a new object with
 __init__() and setting up the attributes? Would this be a new class
 that isn't mapped using SQLA?

   

He just means creating a new instance of your mapped class and settings
its attributes manually, e.g.:

def copy(self):
   copy = MyMappedClass()
   copy.attr1 = self.attr1
   copy.attr2 = self.attr2
   return copy

 if you do really want to use deepcopy, you'd have to implement 
 __deepcopy__() on your objects and ensure that a new _sa_instance_state is 
 set up,
 there are functions in sqlalchemy.orm.attributes which can help with that.   
  This *should* be made an official SQLA recipe, but we haven't gotten
 around to it.
 
 Could you please explain what you mean by that? Would it be possible
 to give me an idea or an example of how such would work?

   

In theory you can use a generic __deepcopy__ implementation for ORM
classes. A very simple version might be:

def orm_deepcopy(self, memo):
    mapper = class_mapper(self.__class__)
    result = self.__class__()
    memo[id(self)] = result
    for prop in mapper.iterate_properties:
        value = getattr(self, prop.key)
        setattr(result, prop.key, deepcopy(value, memo))
    return result

class MyMappedClass(...):
    __deepcopy__ = orm_deepcopy

Beware that this implementation does not handle overlapping properties
well (e.g. relations and their corresponding foreign key columns),
lazy-loading properties, read-only properties, clearing out
auto-incrementing primary keys, etc. I would not recommend this
approach, as a use-case-specific copy() method will be much easier to
tailor to your needs.

 How can I stop it from closing the
 sessions?
   
   
 nothing in SQLA closes sessions.  Your program is doing that.
 
 I'm not issuing a session.close() anywhere (I checked). Are there any
 other ways of closing a session besides that? (If the answer is
 Plenty, don't worry about it... I'll try to track it down then)
   

If you are in a web framework, it may be closing the session for you
(usually by calling Session.remove() on a ScopedSession). Additionally,
are you sure that your object-to-copy is not transient when you make
your deepcopy?

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] EXISTS statements with any(), but with a join

2010-06-04 Thread Conor
On 06/04/2010 03:56 PM, Kent Bower wrote:
 Nice.  That might come in very useful, thanks.

 However, I can't quite get the second approach to work:

 exq=DBSession.query(Inventory).join(Location).filter(Location.siteid=='03').correlate(Product).subquery()
 DBSession.query(Product).filter(exists(exq)).all()

 Traceback (most recent call last):
   File console, line 1, in module
   File
 /home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py,
 line 512, in exists
 return _Exists(*args, **kwargs)
   File
 /home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py,
 line 2806, in __init__
 s = select(*args, **kwargs).as_scalar().self_group()
   File
 /home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py,
 line 237, in select
 return Select(columns, whereclause=whereclause, from_obj=from_obj,
 **kwargs)
   File
 /home/rarch/tg2env/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6-linux-x86_64.egg/sqlalchemy/sql/expression.py,
 line 3637, in __init__
 raise exc.ArgumentError(columns argument to select() must 
 ArgumentError: columns argument to select() must be a Python list or
 other iterable



 print exq
 SELECT inventory.productid, inventory.siteid, inventory.locationid,
 inventory.receiptdate, inventory.qty, inventory.lastmovedate
 FROM inventory JOIN locations ON locations.siteid = inventory.siteid
 AND locations.locationid = inventory.locationid
 WHERE locations.siteid = :siteid_1

 Any idea what I am doing wrong?

   1. Oops, looks like exists() does not detect the subquery as a select
  statement and tries to create its own. Try replacing .subquery()
  with .statement.
   2. exq is missing a WHERE clause that relates Inventory to Product.

exq = DBSession.query(Inventory).join(Location)\
    .filter(Location.siteid == '03')\
    .filter(Inventory.productid == Product.productid)\
    .correlate(Product).statement
DBSession.query(Product).filter(exists(exq)).all()

Another alternative is to construct the subquery outside of the ORM.
This at least gives you the nice SELECT 1 output:

exq = select([1], from_obj=orm.join(Inventory, Location))
exq = exq.where(Inventory.productid == Product.productid)
exq = exq.where(Location.siteid == '03')
exq = exq.correlate(Product.__table__)
DBSession.query(Product).filter(exists(exq)).all()

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Updating table w/ bindparam

2010-05-20 Thread Conor




On 05/20/2010 09:56 AM, Bryan wrote:

  I know this has got to be simple.  I am updating "table1" in MySQL.

u = table1.update()
u.values(col1=bindparam('_col1'), col2=bindparam('_col2') ...

updateVals = [
{'_col1': 5, '_col2': table1.col1 * 5}
]

engine.execute(u, updateVals)

I was expecting table1.col1 * 5 to show up as: `table1`.`col2` =
`table1`.`col1` * 5

But it shows up as: '`table1`.`col1` * %s'
in the query log.

Note the quotes around the actual result, and the missing 5 value.
The 5 is never being placed into the string being sent to the server.
  

You are not allowed to use expressions as bind params, only
constants. Bind params are not allowed to change the "structure" of the
SQL statement, because part of the reason for bind params in the first
place is the server only has to parse/plan the SQL statement once,
regardless of how many items you put in updateVals.

-Conor






-- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com.

For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.




Re: [sqlalchemy] Re: Updating table w/ bindparam

2010-05-20 Thread Conor
On 05/20/2010 10:28 AM, Bryan wrote:
 How about some sort of literal: table1.col1 * 5 (without the quotes)
 as the value of a bindparam?  So to the bindparam it would look like a
 constant, but when it got to the server, it would be interpreted as an
 expression?  I would be loosing the automatic table/column name
 insertion that SA provides.

   

Bind params don't work like that: you just cannot inject arbitrary SQL
via bind params. You have to include the expression as part of the
update statement itself. The only downside to this approach is you (may)
lose the executemany() behavior you may have been looking for
(executemany is when you pass an array of bind params to execute):

u = table1.update()
u = u.values(col1=5, col2=table1.c.col1 * 5)
engine.execute(u)

If you really need bind params, the best you can do is this, which is
probably not what you want:

u = table1.update()
u = u.values(col1=bindparam('_col1'), col2=table1.c.col1 * bindparam('_col2'))
engine.execute(u, [{'_col1': 5, '_col2': 5}])

I also just noticed that your original example:

u = table1.update()
u.values(col1=bindparam('_col1'), col2=bindparam('_col2') ...

tried to modify the update object in place, but the values() method
returns a new update object instead of modifying the original.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: Updating table w/ bindparam

2010-05-20 Thread Conor
On 05/20/2010 10:53 AM, Bryan wrote:

 Including the expressing in the values() clause w/ a bindparam like in
 your second example did the trick.  I haven't checked the MySQL logs
 if I loose the executemany() benefits (not sure off the top of my head
 how the MySQL logs would show those benefits), but at least my tests
 are passing now.

 I'm curious, isn't this still better than looping through all the
 updates I need to do and building an update object for each row?
 MySQL may see the same amount of unique UPDATE queries either way, but
 at least SA only needs to create one update object when using
 bindparams and a list of values.

   

The performance advantage of executemany is that the DBAPI client only
needs to send one UPDATE statement to the server along with a bunch of
values instead of a bunch of UPDATE statements. So you only pay the
parsing/planning/network latency penalty once.

The cost of SQLAlchemy generating update objects and compiling them to
SQL is generally negligible compared to the cost of actually executing
them. If all your updates are of the form col1=constant, col2=col1 *
constant, then by all means use executemany. I had just figured that
you wanted some rows to use col2=constant while other rows used
col2=col1 * constant, which is not supported by a single executemany
statement.

-Conor

 My actual code doesn't modify the update object in place, sorry about
 the typo.

 On May 20, 8:41 am, Conor conor.edward.da...@gmail.com wrote:
   
 On 05/20/2010 10:28 AM, Bryan wrote:

 
 How about some sort of literal: table1.col1 * 5 (without the quotes)
 as the value of a bindparam?  So to the bindparam it would look like a
 constant, but when it got to the server, it would be interpreted as an
 expression?  I would be loosing the automatic table/column name
 insertion that SA provides.
   
 Bind params don't work like that: you just cannot inject arbitrary SQL
 via bind params. You have to include the expression as part of the
 update statement itself. The only downside to this approach is you (may)
 lose the executemany() behavior you may have been looking for
 (executemany is when you pass an array of bind params to execute):

 u = table1.update()
 u = u.values(col1=5, col2=table1.c.col1 * 5)
 engine.execute(u)

 If you really need bind params, the best you can do is this, which is
 probably not what you want:

 u = table1.update()
 u = u.values(col1=bindparam('_col1'), col2=table1.c.col1 * 
 bindparam('_col2'))
 engine.execute(u, [{'_col1': 5, '_col2': 5}])

 I also just noticed that your original example:

 u = table1.update()
 u.values(col1=bindparam('_col1'), col2=bindparam('_col2') ...

 tried to modify the update object in place, but the values() method
 returns a new update object instead of modifying the original.

 -Conor
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] underscore in mssql column names?

2010-05-19 Thread Conor




On 05/19/2010 06:47 AM, mte wrote:

  Hi,
I need to connect to an existing MS SQL database with SqlAlchemy. I'm
using a declarative_base (haven't tried with manual mapping) and it
seems to work except for a few fields.
Those are all defined as "col1 =
Column('some_field_with_underscores_in_name', AnyType)"
(AnyType meaning Integer, String or whatever else).
If I do a session.query(MyClass).first().col1 I get None returned. I
get the correct value if I do session.query(MyClass).first().col2
(which doesn't have underscores in its name) or if I fetch the row
with session.execute("SELECT * FROM mytable").
Can this be considered a bug or is there a setting I haven't specified
correctly?

  

Here is a possible reason: SQLAlchemy has a "use_labels" feature
which adds table/alias prefixes to columns in the columns clause to
help disambiguate columns from different tables with the same name. It
could be that your col1 is named in such a way that makes
SQLAlchemy think that "use_labels" is in effect, which could cause it
to not recognize it properly. For example, if "MyClass" uses the "myclass"
table, and col1's name is "myclass_id", then SQLAlchemy may try to
interpret that as the "id" column of "myclass" instead of the
"myclass_id" column.

If this is indeed the case, the problem should go away if you really
enable the "use_labels" feature:

session.query(MyClass).with_labels().first().col1

If this does not fix it, please post the SQL that is emitted by
SQLAlchemy when you use session.query(MyClass).first(). You
can see the emitted SQL by passing echo=True to create_engine.
-Conor






-- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com.

For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.




Re: [sqlalchemy] Two subclasses (single table inheritance) both requiring the same columns

2010-05-07 Thread Conor




On 05/07/2010 09:59 AM, Oliver Beattie wrote:

  Hi there,

I have some (declarative, polymorphic) classes that use single-table
inheritance. My configuration is similar to the below:

class BasicObject(Base):
   col1 = Column(…)
   # discriminator, polymorphic setup etc

class ObjectOne(BasicObject):
  col2 = Column(…)
  col3 = Column(…)

class ObjectTwo(BasicObject):
  col2 = Column(…) # -- This is the same as col2 on ObjectOne
  col4 = Column(…)

Previously, this worked fine in SA 0.5, but in 0.6 it raises an
ArgumentError telling me it conflicts. For the moment, I have just
moved these properties to the `BasicObject`, but that's not really
where they should be. I have additional classes of `BasicObject` that
shouldn't have that `col2` attribute.

Is there a way I can get around this (and essentially declare them
twice on the same table?). I know I could probably set them as
different database columns and keep them as the same attribute on the
mapper, but I really need the ability to query against that column
across the different mappers (without having to do an OR). Is there a
way around this?

  

In this particular case, you can add an intermediate base class that
declares col2:

class BasicObject(Base):
col1 = Column(...)

class IntermediateBase(BasicObject):
col2 = Column(...)

class ObjectOne(IntermediateBase):
col3 = Column(...)

class ObjectTwo(IntermediateBase):
col4 = Column(...)

This approach fails when you have multiple columns to mix and match
between derived classes. The only way I have been able to do this is to
declare the table separately and selectively include ColumnPropertys
in each class:

basic_object = Table(
    Column("col1", ...),
    Column("col2", ...),
    Column("col3", ...),
    Column("col4", ...),
    discriminator column, etc.)

base_props = ("col1", discriminator column, etc.)

class BasicObject(Base):
    __table__ = basic_object
    __mapper_args__ = {
        "exclude_properties": None,
        "include_properties": base_props,
        ...}

class ObjectOne(BasicObject):
    __mapper_args__ = {
        "exclude_properties": None,
        "include_properties": base_props + ("col2", "col3"),
        ...}

class ObjectTwo(BasicObject):
    __mapper_args__ = {
        "exclude_properties": None,
        "include_properties": base_props + ("col2", "col4"),
        ...}

I don't like the idea of "merging" two Column definitions
into a single column, because then you have to decide which Column
instance "wins", check for non-identical columns, etc. However, it
would be nice to allow declarative classes to reference the same Column
instance multiple times, e.g.:

_col2 = Column("col2", ...)

class ObjectOne(BasicObject):
col2 = _col2
col3 = Column(...)

class ObjectTwo(BasicObject):
col2 = _col2
col3 = Column(...)

-Conor






-- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com.

For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en.




Re: [sqlalchemy] filter by backref field

2010-05-04 Thread Conor
On 05/03/2010 10:33 PM, a...@vurve.com wrote:
 Hi All,

 This might be a noob question, but I wasn't able to to find the answer
 combing through the docs and google search.  Given the following
 declarations

 Base = declarative_base()

 class A(Base):
     __tablename__ = 'A'
     id = Column(Integer, primary_key=True)

 class B(Base):
     __tablename__ = 'B'
     id = Column(Integer, primary_key=True)
     a_id = Column(Integer, ForeignKey('A.id'))
     a = relationship('A', backref=backref('b'))

 I want to query for all A where B is not null (essentially an inner
 join on A) with something like this

 session.query(A).options(joinedload('b')).filter(A.b != None)

 but it won't work because 'A.b' is a backref field.  If I try
 filter('A.b' != None) it won't work either.So 2 part question:

 1) is there a better way to do an inner join like this?
 2) in general, how do you use backref fields inside of filter
 criteria?

   

The problem is not due to the backref: it is because A.b is a
one-to-many relationship, and A.b != None does not make sense for a
one-to-many relationship. If you really want inner-join semantics, then
you can do this (note that no filter is needed because that is
inherently part of the inner join):

session.query(A).join(A.b).options(contains_eager('b'))

Alternatively, if you want to ensure that the database returns only one
row per A, you can use a query like this, which will use an EXISTS clause:

session.query(A).filter(A.b.any())

Note that the query object will filter out duplicate As on the client
side in any case, so you will not notice a difference between these queries.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Relationships in an inheritance heirarchy

2010-04-19 Thread Conor
Nickle wrote:
 I have the following python classes.

 class Instrument (object):

     def __init__ (self, name):
         self.name = name

 class Equity (Instrument):

     def __init__ (self, name, currency_id):
         Instrument.__init__(self, name)
         self.currency_id = currency_id

 class Currency (Instrument):
     pass

 Pretty simple cut down example. Equity has a relationship with a
 currency

 Mapping is as follows

 instruments = Table ('instruments', metadata,
 Column ('id', Integer, primary_key=True),
 Column ('type', String (10), nullable=False),
 Column ('name', String(50), nullable=False, unique=True),
 Column ('currency_id', Integer, ForeignKey ('instruments.id'))
 )

 instrument_mapper   = mapper (Instrument, instruments,
 polymorphic_on=instruments.c.type, polymorphic_identity='instrument')
 equity_mapper   = mapper (Equity,
 inherits=instrument_mapper, polymorphic_identity='equity')
 currency_mapper = mapper (Currency,
 inherits=instrument_mapper, polymorphic_identity='currency',
   properties={'equities': relationship
 (Instrument, backref=backref('currency',
 remote_side=[instruments.c.id]))}
   )


 One thing I'm not sure about. The currency mapper should have a
 relationship with Currency and not with Instruments. ie. I want to
 restrict the relationship to Currencies.

 Is there a standard way of going around this?
   

I'm assuming that you want Currency.equities to be a relationship to
Equity, not Currency. You should just be able to use Equity instead of
Instrument as the first argument to relationship(). This will cause
SQLAlchemy to include a clause similar to instruments.type IN
('equity') in your query when you load Currency.equities.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] 2 Sessions with same object...how do they interact?

2010-04-15 Thread Conor
NickPerkins wrote:
 Just when I thought I understood the Session object, I found this
 behavior:
 ( I am starting to think about concurrency and locking )

 I create 2 sessions, and load each with the same ( persisted ) object.
 The objects appear to be independent, even after both sessions have
 flushed,
 but when I commit one session, it suddenly picks up the change from
 the other session!

 This is confusing me!
 What is going on here?
 ( my business object is a Policy, and column test is not the PK )


 class PolicyNotFoundException(Exception):
     pass

 def open_policy(key):
     session = Session()
     try:
         policy = session.query(Policy).filter_by(key=key).all()[0]
         return policy, session
     except:
         raise PolicyNotFoundException

 def test_concurrent_updates():
     policy1, session1 = open_policy('NICK')
     policy2, session2 = open_policy('NICK')  # retrieves on pk...gets same record
     assert( policy1 != policy2 )
     assert( session1 != session2 )
     assert( policy1 in session1 )
     assert( policy1 not in session2 )
     assert( policy2 in session2 )
     assert( policy2 not in session1 )

     policy1.test = 'ONE'
     session1.flush()

     policy2.test = 'TWO'
     session2.flush()

     assert( policy1.test == 'ONE' )  # not affected by flush of session 2

     session1.commit()
     assert( policy1.test == 'TWO' )  #  really?
   

I see two possible explanations:

   1. (More likely) You are using a database that does not support
  transactions (e.g. MySQL with MyISAM tables). If this is the case,
  every insert/update/delete is immediately visible to other DB
  connections. In this case, assert(policy1.test == 'ONE') succeeds
  because the policy1 object has a cached value for its 'test'
  attribute. If you had put session1.expire(policy1)) between
  session2.flush() and assert(policy1.test == 'ONE'), the assertion
  would fail. Since session1.commit() expires every object in
  session1, policy1 will then pick up the new value from the DB.
   2. Your sessions have autocommit enabled on your sessions. I think
  SQLAlchemy would auto-expire objects in this case (causing
  assert(policy1.test == 'ONE') to fail), but I'm not sure.

I would recommend that you enable SQL logging on your engine so you can
observe exactly when policy1 fetches the new values from the DB, and
when COMMITs are getting issued to the DB.
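
For reference, the quickest way to turn that on is the echo flag on
create_engine (substitute your own connection URL):

engine = create_engine('mysql://user:password@localhost/mydb', echo=True)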

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Is it possible to combine Query results for sorting the output?

2010-04-13 Thread Conor
Richard de Koning wrote:

 I created a function where I can loop through a list of words and
 search for these words in a database which looks like:

 for instance in session.query(table).filter(or_(\
 table.logtext.like(term), table.titlelog.like(term)))\
 .order_by(desc(table.unixtime)):

 I sort the outcome descending on unixtime and later write it to a
 file.

 If there are different terms the sorting order isn't working. I can do
 this by storing the outcome in tuples or writing to a temp table, but
 that seems inefficient. I've checked the documentation but cannot find
 anything about it.

 Is it possible to merge or append the outcome of different queries
 together?
   

Is this what you want? The query below ORs all your term clauses
together, so it has the effect of merging the results.

clauses = []
for term in terms:
    clauses.append(table.logtext.like(term))
    clauses.append(table.titlelog.like(term))

q = session.query(table)
if len(clauses) > 0:
    q = q.filter(or_(*clauses))
q = q.order_by(desc(table.unixtime))
# Reduce client memory usage for non-MySQL databases.
q = q.yield_per(1000)
for instance in q:
    ...

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: Is it possible to combine Query results for sorting the output?

2010-04-13 Thread Conor
Richard de Koning wrote:
 Thanks Conor. It works like a charm.

 You gave me a lot of insight in using sqla more flexible. Up to now
 I'm having very long statements but your way is much more self-
 explanatory than my own long versions.

 I didn't now the yield_per. Why is it only for non-MySQL databases?

   

I don't really use MySQL myself, but I've heard on this list that the
mysqldb DBAPI implementation always stores the resultset in memory
before passing it on to the client (or SQLAlchemy in this case). AFAIK
most other DBAPI implementations properly fetch XXX rows at a time when
you do a fetchmany, so SQLAlchemy can take advantage of that.

-Conor

 On Apr 13, 8:30 pm, Conor conor.edward.da...@gmail.com wrote:


   
 Is this what you want? The query below ORs all your term clauses
 together, so it has the effect of merging the results.

 clauses = []
 for term in terms:
 clauses.append(table.logtext.like(term))
 clauses.append(table.titlelog.like(term))

 q = session.query(table)
 if len(clauses) > 0:
 q = q.filter(or_(*clauses))
 q = q.order_by(desc(table.unixtime))
 # Reduce client memory usage for non-MySQL databases.
 q = q.yield_per(1000)
 for instance in q:
 ...

 -Conor
 

   

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Inserting comments into SQL queries

2010-04-13 Thread Conor
George V. Reilly wrote:
 I'm at the MySQL conference. A couple of speakers have recommended
 adding SQL comments to queries for debugging; e.g., attributing a
 query to a higher-level operation, or that can be parsed by a slave
 during replication.

 Is there a way to do this in SQLAlchemy? That is, the generated SQL
 has a /* comment */ inserted by my code.
   

At the SQL expression layer, you have several ways to include comments
in the compiled SQL statement:

   1. The prefixes parameter to insert():
  
http://www.sqlalchemy.org/docs/reference/sqlalchemy/expressions.html#sqlalchemy.sql.expression.insert
   2. prefix_with() generative method on Insert objects:
  
http://www.sqlalchemy.org/docs/reference/sqlalchemy/expressions.html#sqlalchemy.sql.expression.Insert.prefix_with
   3. prefixes parameter to select():
  
http://www.sqlalchemy.org/docs/reference/sqlalchemy/expressions.html#sqlalchemy.sql.expression.select
   4. append_prefix() generative method on Select objects:
  
http://www.sqlalchemy.org/docs/reference/sqlalchemy/expressions.html#sqlalchemy.sql.expression.Select.append_prefix

e.g. select([table], prefixes=["/* some comment */"]) would compile as
SELECT /* some comment */ column1, column2 FROM table
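
The generative form on an insert works the same way, e.g. (the comment
text here is just an illustration):

ins = table.insert().prefix_with("/* from the nightly importer */")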

If you want to apply prefixes by default or at the ORM layer, check out
this thread:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/a7e05537ae504d61

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Newbie mapper question

2010-04-07 Thread Conor
Damien Tougas wrote:
 Hello,

 I am new at this, so I am sure that this is easy. I understand how the
 mapper works, and I can get it working fine when mapping tables to
 classes. What I need to do is map the following select statement to a
 class:

 SELECT function_name(column_name) AS c1 FROM table_name

 The usage of the function as part of the select statement is causing
 me some problems. Initially I tried this:

 table = Table('table_name', metadata, Column('column_name', Integer,
 primary_key=True))
 sel = select ([table.c.column_name, 'function_name(column_name) AS
 c1'], from_obj=[table]).alias()
 mapper(ClassName, sel)

 That didn't work correctly, so I am somewhat at a loss as to how to
 handle that function in the select statement. Does anyone have any
 suggestions?

   

See
http://www.sqlalchemy.org/docs/05/mappers.html#mapping-a-class-against-arbitrary-selects
for an example for mapping to arbitrary selects. In your case, it would
be something like:

sel = select([table.c.column_name,
              func.function_name(table.c.column_name).label('c1')],
             from_obj=table).alias('some_alias')

Not sure if the alias part is needed.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] where clause construction using variable names and values

2010-03-31 Thread Conor
Tejaswi wrote:
 I have a dict of keys to values which have to go into my where clause
 with an and_.

 Say dict = {key1: value1, key2: value2}

 my select statement should look like select * from blah where key1 =
 value1 and key2 = value2

 I know this has to do with constructing the right where clause
 element, but I cannot seem to find documentation on it.

 select([table], meta.c.columnname == value) doesn't take a variable
 key.

 Any help would be greatly appreciated.

 -T

   

You can treat meta.c as a dict, e.g. meta.c[key] == value.

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: where clause construction using variable names and values

2010-03-31 Thread Conor
Tejaswi wrote:
 I am not using sa.orm. I want to use only the sql expression syntax.

 @Conor: I tried the dict approach. The problem is, I don't know how
 many key value pairs I will have. I will have to use a map, or map* to
 construct the full set of where clauses. This is the syntax I am not
 able to figure out.

   

How about this:

clauses = [meta.c[key] == value for (key, value) in dict.iteritems()]
select([table], and_(*clauses))

or, generatively:

s = select([table])
for (key, value) in dict.iteritems():
    s = s.where(meta.c[key] == value)

-Conor

 On Mar 31, 10:39 am, werner wbru...@free.fr wrote:
   
 On 31/03/2010 08:19, Tejaswi wrote:



 
 I have a dict of keys to values which have to go into my where clause
 with an and_.
   
 Say dict = {key1: value1, key2: value2}
   
 my select statement should look like select * from blah where key1 =
 value1 and key2 = value2
   
 I know this has to do with constructing the right where clause
 element, but I cannot seem to find documentation on it.
   
 select([table], meta.c.columnname  == value) doesn't take a variable
 key.
   
 Any help would be greatly appreciated.
   
 Are you using SA.orm?

 If yes, then you probably want to look at query.Query.filter  and/or
 query.Query.filter_by.

 Werner
 

   

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: (Workaround found and SA bug with localhost?) error connecting to remote mysql database

2010-03-25 Thread Conor
Richard de Koning wrote:
 I've done some troubleshooting and these are my preliminary
 conclusions.

 A ssh-tunnel is used to reach the remote MySQL database server that
 only runs on 127.0.0.1 (localhost).
 Commands sent to 127.0.0.1 port 3307 are forwarded to remote server on
 port 3306.

 When I connect with  MySQLdb.connect and  create an normal SQL-
 statement from a Python-script it works as it should.
 sql = INSERT INTO data (fname, lname) VALUES ('%s', '%s')  % (fname,
 lname)

 So the problem must be somewhere in Sqlalchemy. Maybe it has anything
 to do with 'localhost'  somewhere in the code . When I initially ran
 the query on port 3307 the data was inserted in the local development-
 database that was running on the standard MySQL 3306 port. I'll use
 the normal SQL as a workaround, but I would prefer to do it in SA.

 Is this a known issue or should I report this as a bug?

   

My experience has been that the MySQL client library interprets
'localhost' to use the local UNIX socket (e.g. /var/run/mysql/...)
instead of TCP/IP. I would recommend using '127.0.0.1' or equivalent as
the host.
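
In other words, point the URL at the tunnel explicitly, something like
this (adjust user/password/database to match your setup):

engine = create_engine('mysql://user:password@127.0.0.1:3307/mydb')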

-Conor

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] implications of weak_identity_map

2010-03-24 Thread Conor
Sebastian Elsner wrote:
 My first question is: What exactly is the commit doing to the list
 returned from the query so that pointers to other objects are lost
 (python.exe will crash on me then)?

 The commit expires all attributes by default, since the transaction is
 committed, and upon next access will be loaded again from the database.
 Feel free to turn this flag off if you don't want the reload.

 There's no reason why any of this would crash the interpreter, however.
 It only means your model will refresh its information from the database.


 The expiration was the problem. As soon as I turned it off, the errors
 and crashes went away
 I was getting:
 Attribute Error: SomeClass object has no attribute '_sa_instance_state'
 This happened when:
 list=session.query(SomeClass).all()
 list.somerelation.append(SomeRelatedClassInstance)
 session.commit()

 The docs state:

 expire_on_commit
 Defaults to True. When True, all instances will be fully expired after
 each commit(), so that all attribute/object access subsequent to a
 completed transaction will load from the most recent database state.

 This means, when I access an expired attribute it will issue another
 query creating a new instance of the attribute/relation I wanted to
 follow? Subsequently the memory address will change?  Do I understand
 this right?
 I am asking this, because the Qt Tree i am using to display the data
 heavily relies on internal pointers, so you would have a dangling
 pointer pointing to nowhere, which would explain the crashes.

I believe the real problem is that these internal pointers are not
doing proper incref/decref on the python objects they point to, so your
python objects are getting garbage collected prematurely. The
expire-on-commit readily exposes this issue, but disabling it will not
really fix the underlying problem.

A newer version of pyQT may do the incref/decref for you. If not, your
QAbstractItemModel-derived object should keep strong references to
everything that was pointed to by an internal pointer in a private
list or set. This should prevent them from being garbage collected.
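
As a rough sketch (PyQt4; the class and method names other than
QAbstractItemModel and createIndex are made up, and the usual
rowCount/columnCount/index/parent/data methods are omitted):

from PyQt4.QtCore import QAbstractItemModel

class AssetTreeModel(QAbstractItemModel):
    def __init__(self, parent=None):
        QAbstractItemModel.__init__(self, parent)
        # Strong references to every object handed out as an internal
        # pointer, so Python cannot garbage-collect them prematurely.
        self._pinned = set()

    def index_for(self, row, column, obj):
        self._pinned.add(obj)
        return self.createIndex(row, column, obj)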

 Can you please explain a bit more what the expiration does (if I did
 not get it right)?

 Or is there another solution? Is the design of my data structure
 (meaning
 having mapped classes simultaneously as tree nodes) crap?

 oh.  What's this mean ?   I can't imagine what you'd be doing
 there.   If
 it were like:

 def foo(data):
     class Foo(object):
         data = data
     mapper(Foo, sometable)
     return Foo

 for x in (1, 2, 3):
     myobject.collection.append(foo(x))

 that would be more or less insane, sure.  But even then.
 myobject.collection is definitely not a SQLAlchemy instrumented
 attribute
 - because its contents would have to be instances of a mapped class.  So
 even with the above (entirely unnecessary and crazy) pattern, I don't
 see
 how expiration is getting in the way.


 This means: I have three declarative classes: Category, Asset and
 Note. An Asset belongs to one category and a note belongs to one
 asset. This is basically a tree structure, and can be reproduced with
 a relational database and therefore sqlalchemy. The classes just have
 additional methods like:

 def child(self, row):
     return self.somerelation[row]


 to tell the Qt API, that i am using as a gui toolkit, which instance
 to use in the tree. As said I think the problem lies there, because
 the api keeps internal pointers to the individual items of a query
 result, but on a commit they are expired and then changed by another
 query when accessed again, so the toolkit can't find them any more.
 Turning off expiration is the key.

 if you want to know more about the crashed I can send you an test script.

 Thank you

 Sebastian


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: in_() operator is not currently implemented for many-to-one-relations - alternatives?

2010-03-19 Thread Conor
Stodge wrote:
 Thanks. That doesn't quite work. Based on my data, the following
 should (and does) work because it only returns document id=1, which
 only has these two tags:

 tag_list = ['my document', 'source code']
 session.query(Document).\
   filter(Document.tags.any(Tag.tag.in_([t for t in tag_list]))).\
   filter(~Document.tags.any(~Tag.tag.in_([t for t in tag_list])))

 The following should return no records, as there is no document that
 has only these tags. Instead it returns document id=2, which only has
 the tag 'random stuff':

 tag_list = ['my document', 'source code', 'random stuff']
 session.query(Document).\
   filter(Document.tags.any(Tag.tag.in_([t for t in tag_list]))).\
   filter(~Document.tags.any(~Tag.tag.in_([t for t in tag_list])))

   

Try this query instead:

tag_list = ['my document', 'source code', 'random stuff']
q = session.query(Document)
for t in tag_list:
    q = q.filter(Document.tags.any(Tag.tag == t))
q = q.filter(~Document.tags.any(~Tag.tag.in_(tag_list)))

Alternatively, if your database supports aggregating row sets into
arrays/strings (e.g. PostgreSQL supports ARRAY(SELECT ...) to collect
row sets into an array, MySQL and others have GROUP_CONCAT() that I
believe you can use for this purpose), you may prefer a different
technique. I will show an example with PostgreSQL and ARRAY().

subq = session.query(Tag.tag)
subq = subq.join(document_tags)
subq = subq.filter(document_tags.c.document_id == Document.id)
subq = subq.order_by(Tag.tag.asc())
subq = subq.correlate(Document)
subq = subq.subquery()

q = session.query(Document)
q = q.filter(func.array(subq) == sorted(tag_list))
q = q.correlate(subq)  # May not be needed

This should generate SQL like:

SELECT Document columns
FROM document
WHERE ARRAY(
  SELECT tag.tag
  FROM tag
  JOIN document_tags ON document_tags.tag_id = tag.id
  WHERE document_tags.document_id = document.id
  ORDER BY tag.tag ASC
) = %(array_1)s

where array_1 would be ['my document', 'random stuff', 'source code'].
Note how both the subquery and array_1 have to be sorted. I tend to
prefer this type of query since its complexity doesn't grow as you add
more tags in your search criteria.

-Conor


 On Mar 19, 10:15 am, Michael Bayer mike...@zzzcomputing.com wrote:
   
 select document.* from document join tags on document.id=tags.document_id
 where tags.tag='foo' and tags.tag='bar' and tags.tag=

 am I missing something ?  that would return no rows in most cases.

 if you want to find documents that have an exact list of tags, you'd have to 
 do something like the IN query we started with, and additionally ensure no 
 extra tags remain.

 like:

 sess.query(Document).\
 filter(Document.tags.any(Tag.id.in_([t.id for t in 
 tag_list])).\
 filter(~Document.tags.any(~Tag.id.in_([t.id for t in 
 tag_list]))

 On Mar 19, 2010, at 8:31 AM, Stodge wrote:

 
 Now we're getting somewhere:
   
 expressions = []
 for tag in tag_list:
expressions += [Tag.tag==tag]
 documents =
 session.query(Document).join(Document.tags).filter(and_(*expressions))
   
 Thanks to a Storm example I found. :)
   
 On Mar 19, 8:12 am, Stodge sto...@gmail.com wrote:
   
 Ok so far I have this:
 
 expressions = []
 for tag in tag_list:
 expressions += session.query(Document).filter(Tag.tag==tag)
 documents =
 session.query(Document).join(Document.tags).filter(and_(*expressions))
 
 Doesn't work but it's progress! :)
 
 On Mar 18, 2:37 pm, Stodge sto...@gmail.com wrote:
 
 Thanks that worked beautifully.
   
 On a similar note, how would I match documents with only the tags that
 I specify in the list? My naive attempt is:
   
 for tag in tag_list:
 session.query(Document).join(Document.tags).filter_by(tag=tag)
   
 But that doesn't work.
   
 On Mar 15, 10:54 am, Michael Bayer mike...@zzzcomputing.com wrote:
   
 Stodgewrote:
 
 I have two classes with a third table:
   
 document_tags = Table('document_tags', metadata,
Column('document_id', Integer, ForeignKey('documents.id')),
Column('tag_id', Integer, ForeignKey('tags.id'))
 )
   
 class Document(Base):
__tablename__ = 'documents'
   
id = Column(Integer, primary_key=True)
title = Column(String)
filename = Column(String)
tags = relation('Tag', secondary=document_tags, backref='tags')
   
def __init__(self, title, filename):
self.title = title
self.filename = filename
   
 class Tag(Base):
__tablename__ = 'tags'
   
id = Column(Integer, primary_key=True)
tag = Column(String)
   
def __init__(self, tag):
self.tag = tag
   
 I want to find all documents with tags in a given list of tags:
   
 documents =
 session.query(Document).filter(Document.tags.in_(tag_list

[sqlalchemy] Single table inheritance subclass relations

2010-03-11 Thread Conor
I'm having trouble with many-to-one relationships to subclasses that use
single table inheritance. I have tried this in 0.5.8 and 0.6beta1.

Here is my test case:

import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

employee = sa.Table(employee, Base.metadata,
sa.Column(id, sa.Integer, primary_key=True),
sa.Column(type, sa.Integer, nullable=False))

class Employee(Base):
__table__ = employee
__mapper_args__ = {polymorphic_identity: 0,
   polymorphic_on: employee.c.type}

class Engineer(Employee):
__mapper_args__ = {polymorphic_identity: 1}

class Other(Base):
__tablename__ = other

# Fields
id = sa.Column(sa.Integer, primary_key=True)
engineer_id = sa.Column(sa.Integer,
sa.ForeignKey(employee.id))

# Relations
engineer = orm.relation(Engineer)

print Other.engineer.has()

I get this exception:

AttributeError: 'ClauseList' object has no attribute 'proxy_set'

This appears to be the same issue described at
http://groups.google.com/group/sqlalchemy/browse_thread/thread/7fcb0c0c6e1c809f.
Is this a bug? Currently, I am working around the issue by changing
Other.engineer to refer to Employee and using
Other.engineer.of_type(Engineer).has(), but this is less than ideal.
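
For reference, the workaround amounts to changing the relation and the query
roughly like this (a sketch building on the test case above):

# In Other, target the base class instead of the subclass:
engineer = orm.relation(Employee)

# ...then narrow back down to Engineer at query time:
print Other.engineer.of_type(Engineer).has()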

NOTE: In my real use case, I have compound foreign keys and check
constraints on Other to ensure that its engineer_id only points to
Engineer objects. However that was not necessary to demonstrate the
problem, so it has been omitted here.

Thanks,
-Conor




Re: [sqlalchemy] Re: preserving uselist relation instances in a session.refresh()

2010-03-04 Thread Conor
Kent wrote:
 I agree I shouldn't care, so maybe there is another way to attack my
 problem.  The reason I care is because I've extended the python object
 with some auxiliary information that I need.  After the refresh() in
 this case, I still need access to that data that is tied to the
 object, but not present in the database (it is transient data).  If
 sqla creates a new instance, I loose that data.

 Is there a better mechanism for doing that?

   

You need to either manually keep strong references to each object that
has the auxiliary information or disable the weak identity map. See
http://www.sqlalchemy.org/docs/05/session.html#frequently-asked-questions
or
http://www.sqlalchemy.org/docs/05/session.html#frequently-asked-questions
for more information.
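
For example, either of these would work (a rough sketch; Department, aux_data,
and keep_alive are just illustrative names):

from sqlalchemy.orm import sessionmaker

# 1) Keep your own strong references to the objects carrying extra state:
keep_alive = []
dept = session.query(Department).first()
dept.aux_data = "transient data not stored in the database"
keep_alive.append(dept)

# 2) Or build sessions whose identity map holds strong references:
Session = sessionmaker(bind=engine, weak_identity_map=False)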

-Conor


 On Mar 4, 3:38 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   
 refresh doesn't remove any objects from the session so its a matter of
 what is present in the session, not marked as dirty, and strongly
 referenced on the outside.   if you're using refresh you shouldn't care
 about how it gets data back into the collection.

 Kent wrote:
 
 What's strange is that I can't recreate the problem on more simple
 stage.  Every time I refresh() on the parent object, the list objects
 remain the same.  In other words, *sometimes* it behaves as I hope it
 to (by apparently refreshing the list's objects) and *sometimes* if
 throws them out and creates new ones.  The mystery to me is what
 determines when it will create new instances vs. refreshing the
 existing ones?
   
 On Mar 4, 3:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   
 Kent wrote:
 
 If I use session.refresh(obj) to re-load an obj that has a one-to-many
 relational property, the objects in the list are *replaced* instead of
 *refreshed* if they already exist.
   
 Suppose department has a list of employees:
   
 suppose dept.employees = [ emp1, emp2 ]
   
 session.refresh(dept)
   
 the dept.employees list's elements are replaced with new objects
 instead of reusing those that existed and refreshing them.
   
 Is it possible to have those same objects re-used and simply refreshed
 instead of replaced?
   
 you can only turn off refresh-expire cascade, which will prohibit the
 operation from traveling into the child objects. the collection is
 still
 refreshed for obvious reasons, its one of the attributes on your mapped
 object.
 
 To achieve your specified behavior, use session.refresh() given as its
 second argument the set of attribute names which are safe to be reloaded
 completely (in this case the scalars). Then for each uselist
 attribute,
 iterate the collection of each and call the desired version of
 session.refresh() for those.
 
 This is an easy refresh() function to create in a generalized way by
 inspecting the class-level attributes of the incoming object.




Re: [sqlalchemy] Re: preserving uselist relation instances in a session.refresh()

2010-03-04 Thread Conor
Conor wrote:
 Kent wrote:
   
 I agree I shouldn't care, so maybe there is another way to attack my
 problem.  The reason I care is because I've extended the python object
 with some auxiliary information that I need.  After the refresh() in
 this case, I still need access to that data that is tied to the
 object, but not present in the database (it is transient data).  If
 sqla creates a new instance, I loose that data.

 Is there a better mechanism for doing that?

   
 

 You need to either manually keep strong references to each object that
 has the auxiliary information or disable the weak identity map. See
 http://www.sqlalchemy.org/docs/05/session.html#frequently-asked-questions
 or
 http://www.sqlalchemy.org/docs/05/session.html#frequently-asked-questions
 for more information.
   

That second link should be
http://www.sqlalchemy.org/docs/05/session.html#session-attributes. Oops.

 On Mar 4, 3:38 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   
 
 refresh doesn't remove any objects from the session so its a matter of
 what is present in the session, not marked as dirty, and strongly
 referenced on the outside.   if you're using refresh you shouldn't care
 about how it gets data back into the collection.

 Kent wrote:
 
   
 What's strange is that I can't recreate the problem on more simple
 stage.  Every time I refresh() on the parent object, the list objects
 remain the same.  In other words, *sometimes* it behaves as I hope it
 to (by apparently refreshing the list's objects) and *sometimes* if
 throws them out and creates new ones.  The mystery to me is what
 determines when it will create new instances vs. refreshing the
 existing ones?
   
 On Mar 4, 3:24 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   
 
 Kent wrote:
 
   
 If I use session.refresh(obj) to re-load an obj that has a one-to-many
 relational property, the objects in the list are *replaced* instead of
 *refreshed* if they already exist.
   
 Suppose department has a list of employees:
   
 suppose dept.employees = [ emp1, emp2 ]
   
 session.refresh(dept)
   
 the dept.employees list's elements are replaced with new objects
 instead of reusing those that existed and refreshing them.
   
 Is it possible to have those same objects re-used and simply refreshed
 instead of replaced?
   
 
 you can only turn off refresh-expire cascade, which will prohibit the
 operation from traveling into the child objects. the collection is
 still
 refreshed for obvious reasons, its one of the attributes on your mapped
 object.
 
 To achieve your specified behavior, use session.refresh() given as its
 second argument the set of attribute names which are safe to be reloaded
 completely (in this case the scalars). Then for each uselist
 attribute,
 iterate the collection of each and call the desired version of
 session.refresh() for those.
 
 This is an easy refresh() function to create in a generalized way by
 inspecting the class-level attributes of the incoming object.
   

   




Re: [sqlalchemy] Querying with join and filter/order on second table

2009-12-23 Thread Conor
Daishy wrote:
 Hi again,

 I have two tables contacts(Fields id, typ_id, name) and types (Fields
 id, value). contacts has a typ_id which points to typ.id (Many-To-One-
 Relation) and an according relation (Using declarative-Style)
 Now i want to query the contacts-table, but i also want to filter and
 order on the related fields. For example (I'm using Turbogears2,
 DBSession is the sqlalchemy-instance):
 DBSession.query(Contact).???.order_by(Typ.value)
 should load the contact-table, eager load the typ-tables related to
 the contact-row and order the whole result-set by the typ-value
 column.
 (The sql would be something like: select * from contacts join typ on
 contacts.typ_id = typ.id order_by typ.value )

 But i cant get that to work :/ I tried using the .options(eager_load
 ('typ')) but that doesnt sort it right. Using .join(Typ) works, but it
 doesnt select the variables from typ, so no eager-loading.

 I'm pretty new to SQLAlchemy, so i hope i provided all relevant infos.
 Thanks very much for any answer, i'm kinda stuck on this problem :)

 Greetings,
 Daishy
   

You're almost there. Use the .join(Typ) approach, but also add
.options(contains_eager('typ')) to the query. This is the general
pattern to use when you want to eager-load related objects but also use
related fields in filter(), order_by(), etc.

In case you were wondering, .options(eager_load(...)) is designed to not
affect the semantics of the query in any way, i.e you could drop the
eager_load option and the query would select the same root objects in
the same order with the same limit, etc. To ensure this, SQLAlchemy does
not expose eager-loaded objects for you to use in the rest of the query.
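
Putting it together, assuming the relation on Contact is named 'typ'
(untested):

from sqlalchemy.orm import contains_eager

q = DBSession.query(Contact)
q = q.join(Typ)
q = q.options(contains_eager('typ'))
q = q.order_by(Typ.value)
contacts = q.all()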

-Conor





Re: [sqlalchemy] Querying for empty / non-empty collection (in many-to-many relation)

2009-12-14 Thread Conor
Marcin Krol wrote:
 Hello,


 rsvs = session.query(Reservation).filter(Reservation.email == 
 em).filter(Reservation.newhosts == [] ).options(eagerload('newhosts')).all()

 sqlalchemy.exc.InvalidRequestError: Can't compare a collection to an 
 object or collection; use contains() to test for membership.

 Well I can always do:

 rsvs = session.query(Reservation).filter(Reservation.email == 
 em).options(eagerload('newhosts')).all()


 for r in rsvs:
  if r.newhosts != []:
  print r.id, 'newhosts', r.newhosts

 or

 for r in rsvs:
  if r.newhosts == []:
  print r.id, 'newhosts', r.newhosts


 But, that's ugly like. So, the question is, is it possible to query for 
 empty or non-empty collection?

 Regards,
 mk
   

If you really do need to peek inside non-empty collections like the
example above, then I think your eagerloading query works best.

Otherwise, you can use this (which generates a NOT EXISTS subquery) to
find reservations without any newhosts:
q = session.query(Reservation)
q = q.filter(Reservation.email == em)
q = q.filter(~Reservation.newhosts.any())
rsvs = q.all()

or this (explicit anti-join, assumes that Reservation.newhosts is a
relation to a Host class):
q = session.query(Reservation)
q = q.outerjoin(Reservation.newhosts)
q = q.filter(Reservation.email == em)
q = q.filter(Host.id == None)
rsvs = q.all()

or, if you need to include both empty and non-empty newhosts, with a
flag indicating which:
q = session.query(Reservation, Reservation.newhosts.any())
q = q.filter(Reservation.email == em)
rsvs_with_flag = q.all()

-Conor





Re: [sqlalchemy] ConcurrentModificationError

2009-12-02 Thread Conor
Dave Paola wrote:
 Indeed, I do have TaskTags mapped to it's own class.  However, I never
 explicitly delete any TaskTag object, only create them.

 In any case, what would the preferred way to add a new tag to a task
 (a new entry in the association table)?  I was using the ORM to just
 create a new instance of TaskTag (the mapped class).  If having the
 association table mapped to its own class becomes problematic, what's
 the convention for accomplishing this?  

 Thanks for your feedback :-)


(I'm about 80% sure this paragraph is correct, so take it with a grain
of salt).
I'm guessing you also have a relation from Task to TaskTags (e.g.
Task.task_tags). By default, many-to-many relations will cascade the
delete to the the secondary table (so the Task.tags relation cascades
DELETEs to tasktags_table), and one-to-many relations will set foreign
key columns to NULL on the related table (so the Task.task_tags relation
cascades UPDATEs to tasktags_table). Add this all up and SQLAlchemy will
try to UPDATE a deleted row or DELETE an updated row, depending on which
cascade happens first. A similar situation occurs if the Task.task_tags
relation has cascade=delete set (SQLAlchemy would try to DELETE a
deleted row).

Usually a table like tasktags_table has no columns except for foreign
keys to the related tables (e.g. task_id and tag_id columns). If this is
the case for your tasktags_table table, you probably don't want to map
it at all: just use it as a secondary table in the relation.
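
If that is the case, the whole setup shrinks to something like this (sketch):

from sqlalchemy.orm import mapper, relation

mapper(Task, tasks_table, properties={
    'tags': relation(Tag, secondary=tasktags_table, lazy=False),
})
# No mapper for tasktags_table and no TaskTag class at all;
# my_task.tags.append(my_tag) maintains the association rows for you.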

Otherwise, I would recommend the association_proxy method over the
viewonly=True method, because the viewonly=True method leaves a lot of
room for things to get out of sync until you commit or expire the
session. To use the association_proxy, try this:

from sqlalchemy.ext.associationproxy import association_proxy

mapper(Task, tasks_table, properties = {
# Assumes you have a TaskTag.tag relation.
'tags' :  association_proxy(
'task_tags',
'tag',
creator=lambda tag: TaskTag(tag=tag)),
'task_tags': relation(TaskTag, lazy=False)
})

You only need the creator argument if you want to create TaskTag objects
implicitly, e.g. my_task.tags.append(my_tag). I'm also guessing your
TaskTag constructor accepts a tag keyword parameter.

Hope it helps,
-Conor


 On Wed, Dec 2, 2009 at 4:58 PM, Michael Bayer
 mike...@zzzcomputing.com mailto:mike...@zzzcomputing.com wrote:


 On Dec 2, 2009, at 5:46 PM, Dave Paola wrote:

 I'm getting this: ConcurrentModificationError: updated rowcount 0
 does not match number of objects updated 1 when I try to commit a
 simple deletion.  I'm using Sqlite locally but the error also
 occurs on a Postgres database in a live environment with only ONE
 user connected.

 I saw this post from Nov.
 11th: 
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/e7175c7351963128
 but because I'm not doing any copying (just deletion), it didn't
 seem to provide anything useful.  Both session.dirty and
 session.new are empty.

 I have a Task class mapped to an association table called TaskTag
 that has a task_id and tag_id.  I'm literally doing a
 session.delete(task) followed by a session.commit() and
 session.close().  Here's my mapper:

 mapper(Task, tasks_table, properties = {
 'tags' :  relation(Tag, secondary=tasktags_table,
 lazy = False)
 })

 I suspect this has something to do with the many-to-many
 relationship, but for the life of me I cannot figure out what's
 going on.  Thanks in advance.

 this can happen if you have tasktags_table explicitly mapped
 elsewhere.the mapper for tasktags_table will issue a DELETE,
 and then if Task.tags is also modified in some way that affects
 the same row, the DELETE issued corresponding to the relation()
 will not find its row.In that case the Concurrent name is
 referring to two different configurations within a single flush
 conflicting with each other.  If this is your issue, strategies to
 address include placing viewonly=True on the relation() or using
 the association proxy pattern (you can even use both if you want
 to pick and choose how the SQL to load records is emitted).

 P.S. I use SqlAlchemy so often, I love the framework.  Thanks to
 everyone for your hard work, it's greatly appreciated :-)

 The compliments are appreciated !






Re: [sqlalchemy] Filter on relation???

2009-11-24 Thread Conor
Christian Démolis wrote:
 Thx for your answer Thomas

 I want the attribute to not return the complete object just some of
 the attribute of the other table.

 In my case Utilisateur has some attributes : Login, Nom, Prenom, ...
 I dont want Verouillage.LeNomDuUtilisateur to return complete object
 Utilisateur
 I want it to return only Login attributes.

 Primaryjoin seems to work on condition (where clause) and not on
 select condition (select ... in a query)

 Is it possible to limit selected attributes in the relation?


You can use sqlalchemy.ext.associationproxy.association_proxy to turn a
collection of related objects into a collection of related object
attributes:
_LeUtilisateur = relation(Utilisateur, backref=backref('verrouillage'))
LeNomDuUtilisateur = association_proxy('_LeUtilisateur', 'Login')

-Conor

 2009/11/24 Tefnet Developers - Tomasz Jezierski develop...@tefnet.pl
 mailto:develop...@tefnet.pl

 Dnia 2009-11-24, Wt o godzinie 11:18 +0100, Christian Démolis pisze:
  Is it possible to put a filter on a relation in the declaration?
 
  Example :
  LeNomDuUtilisateur = relation(Utilisateur,
  filter_by=Utilisateur.Login, backref=backref('verrouillage'))

 I'm not sure what exactly your example means.. but if you want extra
 filters on relation, you can change primaryjoin
 
 http://www.sqlalchemy.org/docs/05/mappers.html#specifying-alternate-join-conditions-to-relation

 something like:
 LeNomDuUtilisateur = relation(Utilisateur,
 primaryjoin=sqlalchemy.and_(defaultjoincondition,
 yourextrafiltercondition), backref='verrouillage')


 Tomasz Jezierski
 Tefnet
 www.tefnet.pl http://www.tefnet.pl






Re: [sqlalchemy] Filter on relation???

2009-11-24 Thread Conor
Christian Démolis wrote:
 Cool, it's very powerful, it will allow me to save my Bandwidth
 because i take just what i want and not the entire object.

 Thanks Conor


I think I gave you the wrong impression: association_proxy does not
replace the original relation in any way. You are still using a full
SELECT when using the association_proxy, because the association_proxy
does its thing outside of SQL.

If you need to pick and choose columns in the SELECT statement, I would
advise doing that in the query:
q = session.query(Utilisateur.Login)
q = q.join(Utilisateur.verouillage)
q = q.filter(Verouillage.id == some_id)
logins = q.all()

Alternatively, you can look into using the sqlalchemy.orm.defer query
option to tell the query to NOT load a given column when loading a given
class. More info at
http://www.sqlalchemy.org/docs/05/reference/orm/query.html#query-options.

-Conor

 2009/11/24 Conor conor.edward.da...@gmail.com
 mailto:conor.edward.da...@gmail.com

 Christian Démolis wrote:
  Thx for your answer Thomas
 
  I want the attribute to not return the complete object just some of
  the attribute of the other table.
 
  In my case Utilisateur has some attributes : Login, Nom, Prenom, ...
  I dont want Verouillage.LeNomDuUtilisateur to return complete object
  Utilisateur
  I want it to return only Login attributes.
 
  Primaryjoin seems to work on condition (where clause) and not on
  select condition (select ... in a query)
 
  Is it possible to limit selected attributes in the relation?
 

 You can use sqlalchemy.ext.associationproxy.association_proxy to
 turn a
 collection of related objects into a collection of related object
 attributes:
 _LeUtilisateur = relation(Utilisateur,
 backref=backref('verrouillage'))
 LeNomDuUtilisateur = association_proxy('_LeUtilisateur', 'Login')

 -Conor

  2009/11/24 Tefnet Developers - Tomasz Jezierski
 develop...@tefnet.pl mailto:develop...@tefnet.pl
  mailto:develop...@tefnet.pl mailto:develop...@tefnet.pl
 
  Dnia 2009-11-24, Wt o godzinie 11:18 +0100, Christian
 Démolis pisze:
   Is it possible to put a filter on a relation in the
 declaration?
  
   Example :
   LeNomDuUtilisateur = relation(Utilisateur,
   filter_by=Utilisateur.Login, backref=backref('verrouillage'))
 
  I'm not sure what exactly your example means.. but if you
 want extra
  filters on relation, you can change primaryjoin
 
 
 http://www.sqlalchemy.org/docs/05/mappers.html#specifying-alternate-join-conditions-to-relation
 
  something like:
  LeNomDuUtilisateur = relation(Utilisateur,
  primaryjoin=sqlalchemy.and_(defaultjoincondition,
  yourextrafiltercondition), backref='verrouillage')
 
 
  Tomasz Jezierski
  Tefnet
  www.tefnet.pl http://www.tefnet.pl http://www.tefnet.pl
 






Re: [sqlalchemy] Filter on relation???

2009-11-24 Thread Conor
Christian Démolis wrote:
 I'm disappointed.

 I already use session.query everywhere in my code.

 Maybe this mechanism of a prefiltered select relation, without loading the
 whole object (a proxy), will appear in a future version of SQLAlchemy (we
 can already indicate the order_by, so why not the select columns)...

 I can't use the defer query option because I need to obtain my object
 in multiple ways (entire object, only some attributes...).

 What do you think of this (it's just an idea)?
 Declare an object twice (once with all the attributes and once with only
 some attributes).
 In my relation, when I want to access only some attributes, I bind
 the relation to the light object.
 It's a brutal method; I don't know if it can work...


I cannot speak to how feasible it is to add this feature to a relation,
but if it is added then it would likely be built on top of query options
since there is so much overlap. I think your multiple definition
approach is possible, but it is not something I would want to attempt.

When you use defer as a query option, you are telling SQLAlchemy to not
load that attribute for that query only, so I'm not sure why you are
hesitant to use query options. I'm assuming you need a bit more, so here
is another option (untested):

   1. Make the Verouillage.Utilisateur relation a dynamic relation by
      passing lazy='dynamic' to the relation(). This will make the
      relation return a Query instead of a list.
   2. Add a property to Verouillage to wrap the dynamic relation, e.g.:

          @property
          def Utilisateur_Login(self):
              q = self.Utilisateur
              # Repeat for all Utilisateur attributes you do not want to load
              q = q.options(defer('some_Utilisateur_attribute'))
              return q

   3. Use the new property instead of the original relation when you
      want to restrict your SELECT columns.

I'm not sure how well this technique is supported. You should test with
SQL echo enabled on your engine to confirm that it works.

I do have a side question: if bandwidth is such a concern, maybe it
makes more sense to make ALL mapped attributes deferred by default (via
sqlalchemy.orm.deferred), and selectively undefer attributes on a
per-query basis?
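
For example (a sketch assuming classical mapping, with Nom and Prenom standing
in for the heavy columns):

from sqlalchemy.orm import deferred, mapper, undefer

mapper(Utilisateur, utilisateur_table, properties={
    'Nom': deferred(utilisateur_table.c.Nom),
    'Prenom': deferred(utilisateur_table.c.Prenom),
})

# Then opt back in per query when the full object is really needed:
q = session.query(Utilisateur).options(undefer('Nom'), undefer('Prenom'))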

-Conor

 2009/11/24 Conor conor.edward.da...@gmail.com
 mailto:conor.edward.da...@gmail.com

 Christian Démolis wrote:
  Cool, it's very powerful, it will allow me to save my Bandwidth
  because i take just what i want and not the entire object.
 
  Thanks Conor
 

 I think I gave you the wrong impression: association_proxy does not
 replace the original relation in any way. You are still using a full
 SELECT when using the association_proxy, because the association_proxy
 does its thing outside of SQL.

 If you need to pick and choose columns in the SELECT statement, I
 would
 advise doing that in the query:
 q = session.query(Utilisateur.Login)
 q = q.join(Utilisateur.verouillage)
 q = q.filter(Verouillage.id == some_id)
 logins = q.all()

 Alternatively, you can look into using the sqlalchemy.orm.defer query
 option to tell the query to NOT load a given column when loading a
 given
 class. More info at
 http://www.sqlalchemy.org/docs/05/reference/orm/query.html#query-options.

 -Conor

  2009/11/24 Conor conor.edward.da...@gmail.com
 mailto:conor.edward.da...@gmail.com
  mailto:conor.edward.da...@gmail.com
 mailto:conor.edward.da...@gmail.com
 
  Christian Démolis wrote:
   Thx for your answer Thomas
  
   I want the attribute to not return the complete object
 just some of
   the attribute of the other table.
  
   In my case Utilisateur has some attributes : Login, Nom,
 Prenom, ...
   I dont want Verouillage.LeNomDuUtilisateur to return
 complete object
   Utilisateur
   I want it to return only Login attributes.
  
   Primaryjoin seems to work on condition (where clause) and
 not on
   select condition (select ... in a query)
  
   Is it possible to limit selected attributes in the relation?
  
 
  You can use sqlalchemy.ext.associationproxy.association_proxy to
  turn a
  collection of related objects into a collection of related
 object
  attributes:
  _LeUtilisateur = relation(Utilisateur,
  backref=backref('verrouillage'))
  LeNomDuUtilisateur = association_proxy('_LeUtilisateur',
 'Login')
 
  -Conor
 
   2009/11/24 Tefnet Developers - Tomasz Jezierski
  develop...@tefnet.pl mailto:develop...@tefnet.pl
 mailto:develop...@tefnet.pl mailto:develop...@tefnet.pl
   mailto:develop...@tefnet.pl mailto:develop...@tefnet.pl
 mailto:develop...@tefnet.pl mailto:develop...@tefnet.pl
  
   Dnia 2009-11-24, Wt o godzinie 11:18 +0100, Christian

Re: [sqlalchemy] Creating container relationship in declarative SQLAlchemy

2009-11-24 Thread Conor
jgarbers wrote:
 Hi! I've posted this question over on Stack Overflow -- thought
 perhaps the mailing list might be a better resource for help.  (If
 you're a Stack Overflow user and want the points, here's the link:
 http://stackoverflow.com/questions/1791713/creating-container-relationship-in-declarative-sqlalchemy
 )

 My Python / SQLAlchemy application manages a set of nodes, all derived
 from a base class Node. I'm using SQLAlchemy's polymorphism features
 to manage the nodes in a SQLite3 table. Here's the definition of the
 base Node class:

 class Node(db.Base):
 __tablename__ = 'nodes'
 id = Column(Integer, primary_key=True)
 node_type = Column(String(40))
 title = Column(UnicodeText)
 __mapper_args__ = {'polymorphic_on': node_type}
 and, as an example, one of the derived classes, NoteNode:

 class NoteNode(Node):
 __mapper_args__ = {'polymorphic_identity': 'note'}
 __tablename__ = 'nodes_note'
 id = Column(None,ForeignKey('nodes.id'),primary_key=True)
 content_type = Column(String)
 content = Column(UnicodeText)

 Now I need a new kind of node, ListNode, that is an ordered container
 of zero or more Nodes. When I load a ListNode, I want it to have its
 ID and title (from the base Node class) along with a collection of its
 contained (child) nodes. A Node may appear in more than one ListNode,
 so it's not a proper hierarchy. I would create them along these lines:

 note1 = NoteNode(title=uNote 1, content_type=text/text,
 content=uI am note #1)
 session.add(note1)

 note2 = NoteNode(title=uNote 2, content_type=text/text,
 content=uI am note #2)
 session.add(note2)

 list1 = ListNode(title=uMy List)
 list1.items = [note1,note2]
 session.add(list1)

   


From the "Node may appear in more than one ListNode" bit, I gather this
is a many-to-many relationship, so you need a secondary table to link
ListNodes with Nodes. Also, because it is an ordered list, your
secondary table needs a position/index column. Having an ordered
many-to-many relationship can be tricky. When both sides of the
relationship have a common base class, this is even more tricky. Here is
one way you can define your secondary table and ListNode:

from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.orderinglist import ordering_list

class ListNode(Node):
__mapper_args__ = {'polymorphic_identity': 'list'}
__tablename__ = 'nodes_list'

id = Column(Integer, ForeignKey('nodes.id'), primary_key=True)
contents = relation(
'ListNodeContents',
backref='list_node',
cascade='all,delete-orphan',
collection_class=ordering_list('position'),
order_by=lambda: [ListNodeContents.position],
primaryjoin=lambda: (
ListNodeContents.list_node_id == ListNode.__table__.c.id))
nodes = association_proxy(
'contents',
'node',
creator=lambda node: ListNodeContents(node=node))

# Usually you do not need a separate mapped class for a many-to-many
# relationship: you would just define a table. However, when we add the
# 'position' column to support ordering_list, we need a real mapped class.
class ListNodeContents(Base):
__tablename__ = 'nodes_list_contents'

list_node_id = Column(
Integer,
ForeignKey('nodes_list.id'),
primary_key=True)
node_id = Column(
Integer,
ForeignKey('nodes.id'),
primary_key=True)
position = Column(Integer, nullable=False)

node = relation(Node)

You can then access a ListNode's contents using my_list_node.nodes.

 The list of children should only consist of Node objects -- that is,
 all I need is their base class stuff. They shouldn't be fully realized
 into the specialized classes (so I don't get the whole graph at once,
 among other reasons).
   

You don't need to worry about loading the whole graph at once; you have
to go out of your way to do that (eagerloading). And unless you enable
polymorphic loading (using with_polymorphic method of the Query object
or mapper parameter), you will only load base class data when loading
the relation. You still get instances of derived classes (which is a
good thing), but their derived-table data will not be loaded until you
try to access those specific columns.
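
For example, if you ever do want the subclass columns loaded up front for a
batch of nodes (sketch):

q = session.query(Node).with_polymorphic([NoteNode, ListNode])
nodes = q.all()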

Hope it helps,
-Conor





Re: [sqlalchemy] SQLAlchemy 5.4.p1 - RuntimeError - why?

2009-11-24 Thread Conor
Hermann Himmelbauer wrote:
 Hi,
 I'm experiencing some interesting error here with SQLAlchemy:
 When I execute one of my functions, which do a simple session.query, the 
 following traceback occurs, but only sometimes - no clue why and when:

 --- snip ---
   File /home/bank/zbsp/buildout/src/bsp/bsp/torder.py, line 183, in 
 search_torder_tmpl
 customer = get_customer(bspconf, session, customerid=customerid)
   File /home/bank/zbsp/buildout/src/bsp/bsp/customer.py, line 71, in 
 get_custo
 mer
 customer = session.query(Customer).filter_by( 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/query.py,
  
 line 1226, in first
 ret = list(self[0:1]) 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/query.py,
  
 line 1147, in __getitem__
 return list(res) 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/query.py,
  
 line 1286, in __iter__
 self.session._autoflush() 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/session.py,
  
 line 899, in _autoflush
 self.flush() 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/session.py,
  
 line 1354, in flush
 self._flush(objects) 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/session.py,
  
 line 1359, in _flush
 if (not self.identity_map.check_modified() and 
 File 
 /home/bank/zbsp/buildout/eggs/SQLAlchemy-0.5.4p1-py2.4.egg/sqlalchemy/orm/identity.py,
  
 line 56, in check_modified
 for state in self._mutable_attrs:
 RuntimeError: dictionary changed size during iteration
 - snip --

 Any hints about what to do?
   

Looks like this was a bug and was fixed in 0.5.6. Time to upgrade?

You may be able to work around this issue somewhat by forcing a garbage
collection just before your query:
import gc
...
gc.collect()
customer = session.query(Customer).filter_by(
...

-Conor





Re: [sqlalchemy] get_or_create(**kwargs) ?

2009-11-23 Thread Conor
chaouche yacine wrote:
 --- On Sun, 11/22/09, Conor conor.edward.da...@gmail.com wrote:

   
 There is a problem with your code when the tag is in the
 cache: if the
 tag is added to the session via session.add or a relation
 add cascade,
 SQLAlchemy will try to INSERT the tag into the database on
 the next
 flush. 
 -Conor
 

 I don't know, it seems not. I created a new empty Tags table, I create a new 
 Pylons tag, and it creates it only on the first flush. If I add it a second 
 time to the session, and reflush it, it won't try to re-insert it in the db. 
 Here's my ipython session : 

 In [1]: from someproject.model.tag import *

 In [2]: setup_all()

 In [3]: create_all()
 12:59:34,878 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] SHOW VARIABLES 
 LIKE 'sql_mode'
 12:59:34,882 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] ()
 12:59:34,886 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] DESCRIBE `Tags`
 12:59:34,886 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] ()
 12:59:34,888 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] ROLLBACK
 12:59:34,889 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] 
 CREATE TABLE `Tags` (
   id INTEGER NOT NULL AUTO_INCREMENT, 
   name VARCHAR(64), 
   PRIMARY KEY (id)
 )


 12:59:34,907 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] ()
 12:59:34,959 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] COMMIT

 In [4]: pylons = Tag(Pylons)
 12:59:59,171 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] BEGIN
 /home/chaouche/PYTHONENV/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/engine/default.py:230:
  SAWarning: Unicode type received non-unicode bind param value 'Pylons'
   param.append(processors[key](compiled_params[key]))
 12:59:59,172 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] SELECT `Tags`.id 
 AS `Tags_id`, `Tags`.name AS `Tags_name` 
 FROM `Tags` 
 WHERE `Tags`.name = %s 
  LIMIT 0, 1
 12:59:59,172 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] ['Pylons']
 not in the database

 In [5]: pylons = Tag(Pylons)

 In [6]: session.add(pylons)

 In [7]: session.flush
 Out[7]: bound method ScopedSession.do of 
 sqlalchemy.orm.scoping.ScopedSession object at 0x98572ac

 In [8]: session.flush()
 13:00:14,116 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] INSERT INTO 
 `Tags` (name) VALUES (%s)
 13:00:14,116 INFO  [sqlalchemy.engine.base.Engine.0x...6fec] ['Pylons']

 In [9]: session.add(pylons)

 In [10]: session.flush()

 In [11]: pylons = Tag(Pylons)

 In [12]: pylons
 Out[12]: Tag Pylons

 In [13]: session.add
 session.add  session.add_all  

 In [13]: session.add(pylons)

 In [14]: session.flush()

 In [15]: 
   

I think the problem is still there, but it is only exposed if the
session that inserts the Pylons tag is different from the session that
gets the tag from a cache hit. If you are indeed using Pylons, you will
get a new session for each HTTP request, so you will see the error in
that situation.

Since your session object is a scoped session (note the ScopedSession in
your output), you should be able to trigger the problem by adding
session.remove() between lines 10 and 11 in your test case above.

-Conor





Re: [sqlalchemy] Warning: Can't create database 'TEST'; database exists

2009-11-23 Thread Conor
Peter wrote:
 Hi
 I do:

 engine = create_engine('mysql://r...@localhost/tmp)
 Session = sessionmaker()
 Session.configure(bind=engine)
 connection = engine.connect()
 session = Session()

 # suppose database TEST exists
 connection.execute('CREATE DATABASE IF NOT EXISTS TEST')

 Why does this gives a warning :

 /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.6-py2.5.egg/sqlalchemy/engine/default.py:123:
  
 Warning: Can't create database 'TEST'; database exists
   cursor.execute(statement, parameters)
   

It looks like the MySQLdb cursor is emitting the warning, and AFAIK
there is no way to prevent MySQLdb from generating warnings. You can
tell Python to suppress this specific warning via:
import MySQLdb
import warnings
warnings.filterwarnings(
action=ignore,
category=MySQLdb.Warning,
message=Can't create database 'TEST'; database exists)

or suppress all MySQLdb warnings via:
import MySQLdb
import warnings
warnings.filterwarnings(
action=ignore,
category=MySQLdb.Warning)

-Conor





Re: [sqlalchemy] get_or_create(**kwargs) ?

2009-11-22 Thread Conor
chaouche yacine wrote:
 --- On Fri, 11/20/09, Conor conor.edward.da...@gmail.com wrote:

   
 Also, there is a recipe that looks for a matching object in
 the session
 before querying the database:
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject

 -Conor

 

 Thank you Conor for you useful pointer. I have used this recipe and changed a 
 little bit to use beaker as a caching mechanism with a memcached backend. So 
 here's how it looks like :

 from elixir import Entity, EntityMeta,setup_all,create_all,metadata
 from pylons import cache

 class MetaTag(EntityMeta):
 cache = cache.get_cache(tags,type=memory)
 def __call__(cls,name):
 
 If it's in the cache, return the cached version
 If not in the cache :
 If it's in the database, retrieve it, cache it and return it
 If it's not there, create it, cache it and return it
 
 theTag = MetaTag.cache.get_value(key=name,createfunc=lambda:None)
 if not theTag :
 #not in the cache
 theTag = cls.query.filter_by(name=name).first()
 if not theTag:
 #not in the database either
 print not in the database
 theTag = type.__call__(cls,name)
 session.add(theTag)
 #Adding it to the cache, after creating it in the database if it 
 wasn't there
 MetaTag.cache.set_value(key=name,value=theTag)
 return theTag
 
 class Tag(Entity):
 __metaclass__ = MetaTag
 using_options (tablename=Tags)
 name   = Field(Unicode(64))

 def __init__(self,name,*args,**kw):
 Entity.__init__(self,*args,**kw)
 self.name = name

 def __repr__(self):
 return Tag %s %  self.name

 metadata.bind = mysql://username:passw...@localhost:3306/db
 metadata.bind.echo = True

 setup_all()
 create_all()

 And here's how it plays on the interpreter (ipython) :
 Alger is already in the database, Constantine is not. I find the metaclass 
 approach very API-friendly.

 ...

 In [4]: Tag(Alger)
 21:38:21,328 INFO  [sqlalchemy.engine.base.Engine.0x...2fec] BEGIN
 /home/chaouche/PYTHONENV/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/engine/default.py:230:
  SAWarning: Unicode type received non-unicode bind param value 'Alger'
   param.append(processors[key](compiled_params[key]))
 21:38:21,382 INFO  [sqlalchemy.engine.base.Engine.0x...2fec] SELECT `Tags`.id 
 AS `Tags_id`, `Tags`.name AS `Tags_name` 
 FROM `Tags` 
 WHERE `Tags`.name = %s 
  LIMIT 0, 1
 21:38:21,382 INFO  [sqlalchemy.engine.base.Engine.0x...2fec] ['Alger']
 Out[4]: Tag Alger

 In [5]: Tag(Alger)
 Out[5]: Tag Alger

 In [8]: Tag(Constantine)
 /home/chaouche/PYTHONENV/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/engine/default.py:230:
  SAWarning: Unicode type received non-unicode bind param value 'Constantine'
   param.append(processors[key](compiled_params[key]))
 21:39:17,487 INFO  [sqlalchemy.engine.base.Engine.0x...2fec] SELECT `Tags`.id 
 AS `Tags_id`, `Tags`.name AS `Tags_name` 
 FROM `Tags` 
 WHERE `Tags`.name = %s 
  LIMIT 0, 1
 21:39:17,487 INFO  [sqlalchemy.engine.base.Engine.0x...2fec] ['Constantine']
 not in the database
 Out[8]: Tag Constantine

 In [9]: Tag(Constantine)
 Out[9]: Tag Constantine

 Any comments are very appreciated.

 Y.Chaouche
   

There is a problem with your code when the tag is in the cache: if the
tag is added to the session via session.add or a relation add cascade,
SQLAlchemy will try to INSERT the tag into the database on the next
flush. To prevent this, you need to tell SQLAlchemy that the tag
object's state is persistent (already in the DB) instead of pending
(needs to be inserted). You can do this by adding this code for a cache hit:
theTag = session.merge(theTag, dont_load=True)

You can read more about session.merge at
http://www.sqlalchemy.org/docs/05/session.html#merging. Searching this
group for merge and dont_load is also a good idea.

-Conor





Re: [sqlalchemy] get_or_create(**kwargs) ?

2009-11-20 Thread Conor
Michael Bayer wrote:
 chaouche yacine wrote:
   
 But how can I be sure that the city of New Jersey will be inserted before
 the user in the database so that the new user row will get the proper city
 id ?
 

 SQLAlchemy takes care of that automatically once you configure the
 relationship between user and city using relation().


   
 So I figured to rewrite the city helper function like this :

 def city(name):
 theCity = City.query.filter_by(name=name).first()
 if not theCity :
theCity = City(name=name)
session.commit() # to be sure it will be inserted before the user
 return theCity
 

 if you did go this route (which you don't need to), you probably don't
 want to commit() in the middle of constructing your User object like that
 - the commit() should be held off until the full set of related things
 you're constructing have been all put in the database.  that's why its
 called a transaction.   More appropriate would be flush() which ensures
 everything pending is issued to the current transaction.   Once everything
 is inserted into the database the way you want, commit pushes the data
 out to the world, making that data available to other transactions.
   

Also, there is a recipe that looks for a matching object in the session
before querying the database:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject
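
Without the session-local lookup that the recipe adds, the basic shape is just
(sketch, assuming City.name is unique):

def get_or_create_city(session, name):
    city = session.query(City).filter_by(name=name).first()
    if city is None:
        city = City(name=name)
        session.add(city)
    return city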

-Conor





Re: [sqlalchemy] Automatically Creating Table Definitions

2009-11-20 Thread Conor
Rodney Haynie wrote:
 In SQLAlchemy, is there some process available that will create the
 following code from an existing database/table?  i.e. the database has
 one table, the name of the table is users.
 After running the process, the following code would be created in a file:

 users = Table('users', metadata,
 Column('user_id', Integer, primary_key=True),
 Column('name', String(40)),
 Column('age', Integer),
 Column('password', String),
 )

 I understand SQLAlchemy has Reflection, but I am basically talking about
 creating all of this metadata code automatically as a jump start for
 large existing databases.  And something that could be run to keep the
 metadata code in sync after running schema migrations.
   

I haven't used it myself, but here is a project that will auto-generate
your model:
http://code.google.com/p/sqlautocode/

Can't help you with the sync issue, unless rerunning sqlautocode counts.
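
If the runtime reflection you mentioned is enough, that part needs no
generated code at all, e.g. (connection URL is a placeholder):

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine('mysql://user:pass@localhost/mydb')
metadata = MetaData()
users = Table('users', metadata, autoload=True, autoload_with=engine)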

-Conor





[sqlalchemy] Re: Select as

2009-11-16 Thread Conor

Tomas Zulberti wrote:
 Hi. I am sort of a newbie on SQLAlchemy. Is there a way to do a query
 with the ORM, and doing an as on the select.

 For example:

 class Example(Base):
 name = Column(Unicode(512) )


 query = session.query(Example.name AS foo)
 query.all()

   
Every column object has a label method, e.g. Example.name.label(foo).
However, the label is lost when you do query.all(), so I'm not sure what
you're trying to do here.
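
For example:

foo_col = Example.name.label("foo")
query = session.query(foo_col)
# The alias only appears in the rendered SQL (... AS foo); str(query) shows it.
results = query.all()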

-Conor





[sqlalchemy] Re: Primary key is must in every table.

2009-11-16 Thread Conor

Niyas wrote:
 Dear All,

 In my application, I have a master table and a details table. The
 details table contains only a foreign key, no primary key. I get the
 following error while running:

 ArgumentError: Mapper Mapper|LpoIssueDetails|tbl_lpo_issue_details
 could not assemble any primary key columns for mapped table
 'tbl_lpo_issue_details'


 Is a primary key a must in the details table?

 Thanks in advance.

 Regards,
 Niyas
   

SQLAlchemy mappers require some sort of key to map rows to instances.
The key that it uses does not have to be the primary key of the table
you are using: any set of columns usable for a unique constraint would
work. See the allow_null_pks and primary_key mapper args at 
http://www.sqlalchemy.org/docs/05/reference/orm/mapping.html#defining-mappings
for more details.
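
For example (a sketch; lpo_id and line_no are made-up stand-ins for whatever
combination of columns is unique in your details table):

from sqlalchemy.orm import mapper

mapper(LpoIssueDetails, tbl_lpo_issue_details,
       primary_key=[tbl_lpo_issue_details.c.lpo_id,
                    tbl_lpo_issue_details.c.line_no])
# (With declarative, the same list can go in __mapper_args__['primary_key'].)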

BTW, is there a reason you cannot make your foreign key column also be
the primary key?

-Conor





[sqlalchemy] Re: Query on a related object's field

2009-11-13 Thread Conor

Mike Conley wrote:
 I haven't seen how to do this using the relation directly. I do:

 session.query(Someclass).join(OtherClass).filter_by(OtherClass.field==somevalue)




 I did do some experimenting with a more abstract approach, but did not
 find any need in my application. The only advantage is that it takes
 away the need to know up front what is the name of the other class.
 That might be useful if you are building a framework based on SA, but
 not in most applications.

 otherclas = SomeClass.relname.property.mapper
 session.query(Someclass).join(otherclas).filter_by(otherclas.c.field==somevalue)

Another approach is using the has/any methods of the relation:
If SomeClass.relation is many-to-one or one-to-one:
session.query(SomeClass).filter(SomeClass.relation.has(field=somevalue))
or if the relation is one-to-many or many-to-many:
session.query(SomeClass).filter(SomeClass.relation.any(field=somevalue))

This approach produces an EXISTS clause in the SQL, so it's usually
slower than the join approach. On the other hand, it may be easier to
use inside more complicated queries (e.g. inside AND, OR, or NOT
expressions).
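
For example (other_field is just an invented column to show the combination):

from sqlalchemy import or_

q = session.query(SomeClass).filter(or_(
    SomeClass.relation.has(field=somevalue),
    SomeClass.other_field == None))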

-Conor





[sqlalchemy] Re: TypeError when appending to Associationproxy

2009-11-13 Thread Conor

Matthew R wrote:
 Hello,

 Just getting started with association proxy and having trouble making
 it work. The lookups seem to work fine but when I try to add a new
 element to the association, I get TypeError: __init__() takes exactly
 1 argument (2 given). Code  test case below, I've left a bunch of
 columns out for brevity:


 from sqlalchemy import Column, Integer, String, DateTime, Boolean,
 ForeignKey, Text, Date
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import relation, backref
 from sqlalchemy.ext.associationproxy import association_proxy

 Base = declarative_base()

 class Org(Base):
 __tablename__ = 'tOrg'

 id = Column(Integer, primary_key=True, name='OrgID')
 name = Column(String(100), name='OrgName')

 def __repr__(self):
 return Org(%s: '%s') % (self.id, self.name,)


 class News(Base):
 __tablename__ = 'tNews'

 id = Column(Integer, primary_key=True, name='NewsID')
 title = Column(String(255), name='NewsTitle')
 body = Column(Text, name='NewsBody')
 author = Column(String(255), name='NewsAuthor')
 is_active = Column(Boolean, name='NewsActive')
 date = Column(Date, name='NewsDate')
 priority = Column(Integer, name='NewsPriority')

 orgs = association_proxy('newsorgs', 'org')

 def __repr__(self):
 return News(%s: '%s') % (self.id, self.title,)

 class NewsOrg(Base):
 __tablename__ = 'trefNewsOrg'

 id = Column(Integer, primary_key=True, name='NewsOrgID')
 news_id = Column(Integer, ForeignKey(News.id), name='NewsID')
 news = relation('News', backref=backref('newsorgs'))

 org_id = Column(Integer, ForeignKey(Org.id), name='OrgID')
 org = relation(Org)

 def __repr__(self):
 if self.org:
 orgname = self.org.name
 else:
 orgname = 'ALL'
 return NewsOrg(%s: '%s', (%s)) % (self.id,
 self.news.title, orgname,)

 def testcase(session):
 myorg = session.query(Org).filter(Org.id==6).one()
 otherorg_news_associations = session.query(NewsOrg).filter
 (NewsOrg.org_id==1).all()
 mystory = otherorg_news_associations[0].news
 mystory.orgs.append(myorg) # -- TypeError: __init__() takes
 exactly 1 argument (2 given)
 session.commit()
   
The association_proxy is trying to create the NewsOrg object by calling
NewsOrg(myorg). Since your NewsOrg class does not override __init__, the
default declarative __init__ is used, which takes only keyword
arguments. This causes the TypeError: __init__() takes exactly 1
argument (2 given) error.

You can fix this by either adding an __init__ method to NewsOrg like this:
def __init__(self, org=None, **kwargs):
super(NewsOrg, self).__init__(**kwargs)
self.org = org

or, preferably, add a creator argument to association_proxy:
orgs = association_proxy('newsorgs', 'org', creator=lambda org:
NewsOrg(org=org))

You can find out more about the 'creator' argument at:
http://www.sqlalchemy.org/docs/05/reference/ext/associationproxy.html#api

-Conor





[sqlalchemy] Re: [TYPO FIX] UNION with SQLAlchemy: Some questions

2009-11-05 Thread Conor

AF wrote:
 Hello,

 [Note: This is a revised edition of my previous post multi table
 select]

 I have events logged to several diffeerent tables, and I need to get
 the union of these different tables.  (The event types use different
 tables for various reasons.)

 Here is what I am doing now, and it seems to work:

 # Note the static 'event_type' columns
 stmt = 'SELECT * FROM (SELECT A AS event_type, user_id as user,
 notes, time_created '
 stmt = stmt + 'FROM events_a WHERE user_id = :uid '
 stmt = stmt + 'UNION ALL '
 stmt = stmt + 'SELECT B AS event_type, user_id as user,
 notes, time_created '
 stmt = stmt + 'FROM events_b WHERE  user_id = :uid) as events_all '
 stmt = stmt + 'ORDER BY events_all.time_created DESC'

 res = DBSession.execute(stmt, dict(uid = uid))
 records = res.fetchall()

 So I have several questions:

 1) Is the raw SQL I am using sane?
   

There are a few things that would make it more palatable to SQLAlchemy:
1. Move the WHERE user_id = :uid clauses to the outer SELECT
statement. This will make it easier to use in more complicated queries.
I don't think this will kill your performance, but hey what do I know?
2. Use explicit columns in the outer SELECT instead of the wildcard.
SQLAlchemy will do this for you if you use its SQL expression functions,
so don't worry about this. Both changes are sketched right below.
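
For reference, here is roughly what the raw query looks like with both
suggestions applied (a sketch only: still plain SQL executed the same way
as your original, with user_id left unaliased so the outer WHERE can use it):

stmt = """
    SELECT events_all.event_type, events_all.user_id,
           events_all.notes, events_all.time_created
    FROM (SELECT 'A' AS event_type, user_id, notes, time_created
          FROM events_a
          UNION ALL
          SELECT 'B' AS event_type, user_id, notes, time_created
          FROM events_b) AS events_all
    WHERE events_all.user_id = :uid
    ORDER BY events_all.time_created DESC
"""
res = DBSession.execute(stmt, dict(uid=uid))
records = res.fetchall()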

 2) How can I use SQLAlchemy to simplify things?

 3) How would I add tables events_c? events_d?
   

This thread describes a very similar situation:
http://groups.google.com/group/sqlalchemy/browse_thread/thread/21f47b750b672ea9

Using that as a template, you can do this (assuming event_a, event_b,
etc. are Table objects):
events_selectable = union_all(
    event_a.select().column(literal_column("'A'").label('event_type')),
    event_b.select().column(literal_column("'B'").label('event_type')),
    event_c.select().column(literal_column("'C'").label('event_type')),
    event_d.select().column(literal_column("'D'").label('event_type')))

# At this point, you can map class Event to events_selectable (a sketch
# of that mapping step follows after these examples).
# Now, query events for a given uid:
q = DBSession.query(Event)
q = q.filter_by(user_id=uid)
q = q.order_by(Event.time_created.desc())
records = q.all()

# or, without using the ORM:
record_rows = DBSession.execute(events_selectable.select(
    events_selectable.c.user_id == uid,
    order_by=events_selectable.c.time_created.desc())).fetchall()
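
The mapping step itself might look something like this (just a sketch,
using a classical mapper; the choice of primary key columns is an
assumption, since a union has no primary key of its own -- use whatever
uniquely identifies a row of the union in your schema):

from sqlalchemy.orm import mapper

# The union can't be mapped directly; give it a name first.
events_all = events_selectable.alias('events_all')

class Event(object):
    pass

# Assuming each event table carries its own id column, (event_type, id)
# should be unique across the union.
mapper(Event, events_all,
       primary_key=[events_all.c.event_type, events_all.c.id])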

 4) Since the events_x tables are already defined with
 declarative_base, is there a reasonable way to make an SQLAlchemy
 Events object that knows where to retrieve and insert event records
 based on the event_type field?
   

From your previous posts, I thought you wanted a single class (Event)
mapped to the union, which is what the ORM example above assumes. It
will handle retrieving event records just fine (e.g.
filter_by(event_type='A')), but will not work for inserting them. To do
that, you will need separate EventA, EventB, etc. classes. This will
probably clash with the union class in ways that I don't know how to
resolve, so watch out.

However, you can still call insert() directly on the table objects, e.g.
event_a.insert().values(user_id=uid, ...). You just don't get the nice
ORM interface.
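
For example (a sketch -- the extra column names and values are
assumptions based on the raw SQL from earlier in the thread):

import datetime

# Insert straight through the Table object; no ORM class involved.
DBSession.execute(
    event_a.insert().values(user_id=uid,
                            notes='something happened',
                            time_created=datetime.datetime.now()))
DBSession.commit()  # or flush, depending on how you manage transactions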

-Conor





[sqlalchemy] Re: Check if an item exists in a relation

2009-10-29 Thread Conor

Sergey V. wrote:
 Hi all,

 I must be missing something obvious here...

 Let's suppose I have the following class:

 class User(Base):
     # 
     addresses = relation(Address, backref="user")

 and I have a number which may be an ID of an Address object. How do I
 check if the number is an ID of one of Addresses of a given User?

 I could do that just iterating over the addresses:

 for address in user.addresses:
     if address.id == ID:
         print "TADA!"

 ... but this doesn't seem like a good solution. There must be a way to
 make SQLAlchemy return the value.

 (to make it a bit more interesting - the code needs to be generic,
 i.e. the function just gets some SA-mapped object and property name,
 so I can't just build a query manually like this -

 addr = session.query(Address).filter(id=address_id).filter(user_id =
 user.id).one()

 - because I don't know what the join fields are (and if possible I'd
 like this to work with many-to-many relations too)
 )

 Thanks!
   
Some assumptions:
1. SA-mapped object means the user object in the example
2. property name means addresses in the example
3. The function shouldn't assume that you want an Address object
4. The ID attribute is known ahead of time (e.g. it's always "id"). If
not, your function will need another parameter.
5. The function needs to work on many-to-many relationships in addition
to one-to-many.

Then this should work:
def get_related_by_id(obj, property_name, id):
    relation = getattr(obj.__class__, property_name)  # in example: User.addresses
    related_class = relation.property.argument  # in example: Address
    return Session.query(related_class).filter(relation.any(id=id)).first()

example usage:
address_exists = get_related_by_id(user, "addresses", 1234) is not None

-Conor





[sqlalchemy] Re: Check if an item exists in a relation

2009-10-29 Thread Conor

Sergey V. wrote:
 Some assumptions:
 1. SA-mapped object means the user object in the example
 2. property name means addresses in the example
 3. The function shouldn't assume that you want an Address object
 4. The ID attribute is known ahead of time (e.g. it's always "id"). If
 not, your function will need another parameter.
 5. The function needs to work on many-to-many relationships in addition
 to one-to-many.

 Then this should work:
 def get_related_by_id(obj, property_name, id):
     relation = getattr(obj.__class__, property_name)  # in example: User.addresses
     related_class = relation.property.argument  # in example: Address
     return Session.query(related_class).filter(relation.any(id=id)).first()

 example usage:
 address_exists = get_related_by_id(user, "addresses", 1234) is not None
 

 Cool, it almost solves my problem!

 However, it doesn't check if a User has a given Address; it just
 checks if an Address exists in general. It would be easy to add
 another filter() by User.id, but that wouldn't work for many-to-many
 relations.

 My hope was to somehow get a Query object from a relation property
 with everything already set up (imagining that a relation somewhere
 stores the query it itself uses) and then just attach another filter()
 to it.

 Imaginary code:

 relation = getattr(obj.__class__, property_name)
 q = relation.get_query(...)
 result = q.filter(id = address_id).first()

 This approach works in Django's ORM, so it definitely should be doable
 in SA :)
The relation.any(id=id) part will produce an EXISTS clause relating
the address to the given user. However, now that I think about it, I
believe it will only work if you are querying users, not addresses (due
to the way that any() correlates its EXISTS clause to the outer query).

Luckily, SQLAlchemy provides another query method, with_parent(), that
should work, and is a bit cleaner than my first approach:
q = session.query(Address)
q = q.with_parent(user, "addresses")
q = q.filter_by(id=address_id)
result = q.first()

or, as a generic function:
def get_related_by_id(obj, property_name, id):
    relation = getattr(obj.__class__, property_name)
    related_class = relation.property.argument
    q = session.query(related_class)
    q = q.with_parent(obj, property_name)
    q = q.filter_by(id=id)
    return q.first()

Finally, SQLAlchemy has a (deprecated, unfortunately) Query classmethod
that makes it cleaner still:
def get_related_by_id(obj, property_name, id):
    q = Query.query_from_parent(obj, property_name)
    q = q.filter_by(id=id)
    return q.first()
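
Usage is the same as with the earlier versions:

address_exists = get_related_by_id(user, "addresses", 1234) is not None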

-Conor




