[sqlalchemy] Re: Possible to build a query object from a relation property?
On Dec 9, 2007, at 10:55 PM, Allen Bierbaum wrote:

> I am using SA 0.3.11 and I would like to know if there is a way to get
> a query object from a relation property. I have several one-to-many
> relationships in my application. These are all set up and work very
> well, but I find that I often want to perform further filtering of the
> objects in the relationship list property. I could write python code
> to do it, but if I could get SA to do it on the server, then all the
> better.

it is the "dynamic" relation that you want, but for 0.3 you can write your own read-only property via:

    class MyClass(object):
        def _get_prop(self):
            return object_session(self).query(ChildClass).with_parent(self, 'attributename')
        attributename = property(_get_prop)

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
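The mechanics of that read-only query-property pattern can be sketched in plain Python. Everything below (FakeQuery, Address, User, address_query) is a made-up stand-in for illustration, not SQLAlchemy API; the real 0.3 version would return a session-bound Query as shown above.

```python
# Stand-in classes illustrating the read-only query-property pattern.
class FakeQuery:
    def __init__(self, items):
        self._items = list(items)

    def filter_by(self, **criteria):
        # keep only items matching every keyword criterion
        return FakeQuery(i for i in self._items
                         if all(getattr(i, k) == v for k, v in criteria.items()))

    def all(self):
        return list(self._items)


class Address:
    def __init__(self, state):
        self.state = state


class User:
    def __init__(self, addresses):
        self._addresses = addresses

    @property
    def address_query(self):
        # in real SA 0.3 this body would be:
        #   object_session(self).query(Address).with_parent(self, 'addresses')
        return FakeQuery(self._addresses)


fred = User([Address("NY"), Address("CA"), Address("NY")])
ny = fred.address_query.filter_by(state="NY").all()
```

Each property access hands back a fresh query, so callers can filter without mutating the underlying collection.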
[sqlalchemy] Possible to build a query object from a relation property?
I am using SA 0.3.11 and I would like to know if there is a way to get a query object from a relation property. I have several one-to-many relationships in my application. These are all set up and work very well, but I find that I often want to perform further filtering of the objects in the relationship list property. I could write python code to do it, but if I could get SA to do it on the server, then all the better.

To be more specific, here is an example from the SA documentation:

    mapper(Address, addresses_table)
    mapper(User, users_table, properties = {
        'addresses' : relation(Address)
    })

    user_fred = session.query(User).filter_by(user_name="fred")
    # Is it possible to do something like this?
    fred_ny_addresses = getQuery(user_fred.addresses).filter_by(state="NY")

I know that SA 0.4 has support for dynamic_loader properties, which would be fairly similar to this, but I am stuck with 0.3.11 for now. I think what I want is also a bit different than a dynamic_loader, because 95% of the time I want to use it as a standard relation property, and the performance is not such that I am worried about loading the entire list of Address entities. All I want is a shortcut for creating a query object with the same settings as those used for the query that builds the list for the relation property. Is this possible in any way?

Thanks,
Allen
[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?
I can't reproduce your problem, although I don't have access to MSSQL here and there may be some issue on that end. Attached is your script using an in-memory sqlite database, with the update inside of a while loop, and it updates regularly. A few things to try on the MSSQL side, if the issue is due to some typing issue: try not using autoload=True, try using generic types instead of the MSSQL-specific ones, etc., in an effort to narrow down what might be the problem. Also, I've added "MSSQL/pyodbc" to the subject line here in case any of the MSSQL crew wants to try out your script with pyodbc.

    import sqlalchemy as sa
    import datetime, time
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy import *

    sa_engine = sa.create_engine("sqlite://", echo=True)
    metadata = sa.MetaData(sa_engine)
    Session = sessionmaker(bind=sa_engine, autoflush=True, transactional=True)
    sa_session = Session()

    jobs = sa.Table('jobs', metadata,
        sa.Column('identifier', Integer, primary_key=True),
        sa.Column('section', Integer),
        sa.Column("start", DateTime, primary_key=True),
        sa.Column("stop", DateTime),
        sa.Column("station", sa.VARCHAR(20)))
    metadata.create_all()

    class Job(object):
        def __init__(self, identifier, start):
            self.identifier, self.start = identifier, start

    sa.orm.mapper(Job, jobs)

    j = Job("TEST1", datetime.datetime.now())
    sa_session.save(j)
    sa_session.commit()

    # The following part is here just to simulate my problem...
    # if I keep using j instead of getting j1 from query
    # the record is updated as well
    while True:
        sa_session.clear()
        time.sleep(1)
        j1 = sa_session.query(Job).all()[0]
        j1.stop = datetime.datetime.now()
        sa_session.save_or_update(j1)
        sa_session.commit()

On Dec 9, 2007, at 5:26 PM, Smoke wrote:

> On 9 Dic, 21:37, Michael Bayer <[EMAIL PROTECTED]> wrote:
>> they're entirely supported. try to provide a fully working example
>> illustrating the problem you're having.
>
> Here's a small example just to simulate the problem. The last part of
> this code is there just to simulate the problem... normally I would
> just keep using j and update it... and this updates the record in
> the db. But if I get an instance of the Job class from a query on the
> db and try to update (or save_or_update) it, the record is not updated
> in the db.
>
> Here is the sample code:
>
>     import sqlalchemy as sa
>     import datetime, time
>     from sqlalchemy.orm import sessionmaker
>
>     sa_engine = sa.create_engine("mssql://user:[EMAIL PROTECTED]/myDB", echo=True)
>     metadata = sa.MetaData(sa_engine)
>     Session = sessionmaker(bind=sa_engine, autoflush=True, transactional=True)
>     sa_session = Session()
>
>     jobs = sa.Table('jobs', metadata,
>         sa.Column('identifier', sa.databases.mssql.MSUniqueIdentifier, primary_key=True),
>         sa.Column('section', sa.databases.mssql.MSUniqueIdentifier),
>         sa.Column("start", sa.databases.mssql.MSDateTime_pyodbc, primary_key=True),
>         sa.Column("stop", sa.databases.mssql.MSDateTime_pyodbc),
>         sa.Column("station", sa.VARCHAR(20)),
>         autoload=True)
>
>     class Job(object):
>         def __init__(self, identifier, start):
>             self.identifier, self.start = identifier, start
>
>     sa.orm.mapper(Job, jobs)
>
>     j = Job("TEST1", datetime.datetime.now())
>     sa_session.save(j)
>     sa_session.commit()
>
>     # The following part is here just to simulate my problem... if I keep
>     # using j instead of getting j1 from query
>     # the record is updated as well
>     sa_session.clear()
>     time.sleep(1)
>     j1 = sa_session.query(Job).all()[0]
>     j1.stop = datetime.datetime.now()
>     sa_session.save_or_update(j1)
>     sa_session.commit()
[sqlalchemy] Re: Matching a DateTime-field
Any query using SQL expressions is going to want to use correctly typed data -- you're trying to query a date column with a string value, and the LIKE operator is for string data. I'm not up on my MSSQL date expressions, but the answer is going to resemble something like this:

    .filter(and_(func.datepart('year', List.expire) == 2007,
                 func.datepart('month', List.expire) == the_month_number))

On 12/9/07, Adam B <[EMAIL PROTECTED]> wrote:

> Hello!
>
> I'm trying to do a query that gets all lists within a specific month,
> but SQLAlchemy whines. :/
>
> The error:
>
>     /usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/databases/mysql.py:1475:
>     Warning: Incorrect datetime value: '"2007-"+str(month)+"%"' for column 'expire' at row 1
>       cursor.execute(statement, parameters)
>
> The code:
>
>     L = session.query(List).join('friends').filter(
>         Friend.username == identity.current.user_name).filter(
>         List.expire.like('"2007-"+str(month)+"%"')).all()
>
> The column "expire" is DateTime in the model.
>
> Any ideas how to do this?
>
> br
> Adam
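A portable alternative to datepart() is to compare the column against typed datetime bounds covering the month. The helper below is my own sketch (not from the thread); the commented filter shows roughly how it would plug into the query.

```python
import datetime

def month_bounds(year, month):
    """Return [start, end) datetime bounds covering one calendar month."""
    start = datetime.datetime(year, month, 1)
    if month == 12:
        end = datetime.datetime(year + 1, 1, 1)  # wrap into January of next year
    else:
        end = datetime.datetime(year, month + 1, 1)
    return start, end

# with SQLAlchemy this would become something like:
#   query.filter(List.expire >= start).filter(List.expire < end)
start, end = month_bounds(2007, 12)
```

Comparing typed values also lets the database use an index on the date column, which a function applied to the column generally prevents.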
[sqlalchemy] Re: Lazy ID Fetching/generation
> I hate to disagree here, and I can see what you're getting at, but
> honestly, the "INSERT on save()" approach is exactly the naive active-
> record-like pattern that SQLAlchemy's ORM was designed to get away from.
>
> The way the unit of work functions, we dont generate ids until a flush
> occurs. Flushes dont occur unless you say flush(), or if you have

I'm not saying flush on save. I'm saying flush at the last possible moment (which is what it does now), but I want "last possible moment" to include "the program tried to access a database-generated field":

    s1 = Something('foo1')
    session.save(s1)

    s2 = Something('foo2')
    session.save(s2)

    # Nothing flushed yet
    s3 = Something('foo3')
    session.save(s3)

    url_for_foo = "/something?id=%d" % s3.id
    # s3 should be flushed, nothing else though (since s3.id was accessed)

-Adam Batkin
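The behavior Adam is asking for — a flush triggered by reading a database-generated attribute — can be sketched with a plain Python descriptor and a toy session. All names here are invented for illustration; this is not SQLAlchemy's implementation, and unlike Adam's ideal, this toy flush assigns ids to every pending object, not just the one accessed.

```python
class ToySession:
    """Toy unit of work: pending objects get ids only at flush()."""
    def __init__(self):
        self.pending = []
        self._next_id = 1

    def save(self, obj):
        obj._session = self
        self.pending.append(obj)

    def flush(self):
        # assign ids to all pending objects, in save order
        for obj in self.pending:
            obj._id = self._next_id
            self._next_id += 1
        self.pending = []


class FlushOnAccessId:
    """Descriptor: reading .id forces a flush if the object is pending."""
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if obj._id is None and obj._session is not None:
            obj._session.flush()
        return obj._id


class Something:
    id = FlushOnAccessId()

    def __init__(self, name):
        self.name = name
        self._id = None
        self._session = None


session = ToySession()
s1 = Something('foo1')
session.save(s1)
s2 = Something('foo2')
session.save(s2)
# reading s2.id triggers the flush; with this toy session s1 is flushed too
url = "/something?id=%d" % s2.id
```

Flushing only the accessed object would require topologically sorting its dependencies out of the pending set, which is part of why the thread treats this as a deep change rather than a small one.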
[sqlalchemy] Re: From arbitrary SELECT to Query
On Dec 9, 2007, at 2:31 PM, Artur Siekielski wrote:

> I'm writing a DAO module for db access which must be independent of the rest
> of the system. I'm looking for a class which can be used as a proxy
> for SQL results. Query would be good, if it were possible to have a
> fully functional Query instance representing any SQL statement. But I
> cannot tell users - here you have a Query object, but filtering
> sometimes doesn't work! So it seems that if I want to have an object-
> oriented proxy for SQL results with lazy loading, I must write my own
> wrapper.

OK, I've got the initial implementation for this in r3904. Whatever you put into query.select_from(), that's what it's going to select from; all the fun starts when you start filter()ing and join()ing. select_from() also needs to be called before you set up any joins or criteria; it will issue a warning if you do otherwise. It still needs some work: it's not applied yet to count(), and edge cases like group_by()/having() don't work yet either (ticket 898 is a reminder). We will probably look into moving away from from_statement() and into select_from(), since it now handles a superset of use cases.
[sqlalchemy] Re: Lazy ID Fetching/generation
On Dec 9, 2007, at 4:57 PM, Adam Batkin wrote:

> Ahh, but session.save() was already called, so trying to fetch a
> database-generated attribute (such as the primary key in my case) should
> trigger a flush of the row itself. That can be done with any database.
> It wouldn't be done on __init__, nor would it be done on save(). It
> would be done only once you tried to fetch the id property (only for
> objects in the Pending state)
>
> Okay, as an example. Let's say you have:
>
>     something_table = Table('something', metadata,
>         Column('id', Integer, primary_key=True),
>         Column('name', String)
>     )
>
>     class Something(object):
>         def __init__(self, name):
>             self.name = name
>
>         def __repr__(self):
>             return "<Something(%d, %s)>" % (self.id, self.name)
>
>     mapper(Something, something_table)
>
>     obj = Something('blah')
>     session.save(obj)
>     print "Look ma, a something: %s" % obj
>
> In theory that will throw an exception since Something's __repr__ will
> have None for the id property, since id was never retrieved.
>
> (if I just did:
>
>     obj = Something('blah')
>     print "Bad idea: %s" % obj
>
> then I would expect an exception, since it's not saved)
>
> Does this description make more sense than what I said before?

I hate to disagree here, and I can see what you're getting at, but honestly, the "INSERT on save()" approach is exactly the naive active-record-like pattern that SQLAlchemy's ORM was designed to get away from.

The way the unit of work functions, we don't generate ids until a flush occurs. Flushes don't occur unless you say flush(), or if you have autoflush=True, in which case they occur right before a query.

Having flush() happen automatically for save() would be an enormous change to how we've been doing things for quite some time, the implications and side effects of which I have not thought through, but I can think of a few immediate ones: it would break compatibility with everyone that's using the Session.mapper extension, which saves automatically on __init__, and it would certainly break compatibility with any number of existing applications that are calling save() without the "autoflush on save" expectation.

But beyond that, it would wreak havoc with core features such as cascading... right now, you can append() items to a collection, and they are save()d automatically as they are added. By deferring the SQL until a single flush() stage, we don't incur the overhead of flush() for every item (which, depending on configuration, may include checking out connections, starting/committing transactions, sorting dependencies, etc.); the full graph of objects is assembled and *then* flushed - this is the entire point of the unit of work pattern, that many changes are pushed into a single cohesive transactional event, reducing database traffic, avoiding deadlocks, etc.

For the issue above, it seems a whole lot easier to just type "Something %r". I don't see any fundamental issue in the fact that primary keys are not immediately present upon save(), and additionally, if your __repr__ wanted to print out foreign key identifiers as well (like many-to-ones), you're back to the same problem again.
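The batching argument above can be illustrated with a toy counter: many save()s accumulate as pure bookkeeping, and exactly one "transactional event" happens at flush. This is an invented sketch of the unit-of-work idea, not SQLAlchemy internals.

```python
class CountingSession:
    """Toy unit of work that counts how many 'transactions' it opens."""
    def __init__(self):
        self.pending = []
        self.transactions = 0

    def save(self, obj):
        self.pending.append(obj)   # no SQL here, just bookkeeping

    def flush(self):
        self.transactions += 1     # one transactional event...
        flushed, self.pending = self.pending, []
        return flushed             # ...covering the whole pending graph


session = CountingSession()
for name in ["a", "b", "c", "d"]:
    session.save(name)

flushed = session.flush()
```

Under a flush-on-save model, the same loop would have opened four transactions instead of one, which is the overhead the unit-of-work pattern is designed to avoid.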
[sqlalchemy] Re: save_or_update and composite Primary Keys...
On 9 Dic, 21:37, Michael Bayer <[EMAIL PROTECTED]> wrote:
> they're entirely supported. try to provide a fully working example
> illustrating the problem you're having.

Here's a small example just to simulate the problem. The last part of this code is there just to simulate the problem... normally I would just keep using j and update it... and this updates the record in the db. But if I get an instance of the Job class from a query on the db and try to update (or save_or_update) it, the record is not updated in the db.

Here is the sample code:

    import sqlalchemy as sa
    import datetime, time
    from sqlalchemy.orm import sessionmaker

    sa_engine = sa.create_engine("mssql://user:[EMAIL PROTECTED]/myDB", echo=True)
    metadata = sa.MetaData(sa_engine)
    Session = sessionmaker(bind=sa_engine, autoflush=True, transactional=True)
    sa_session = Session()

    jobs = sa.Table('jobs', metadata,
        sa.Column('identifier', sa.databases.mssql.MSUniqueIdentifier, primary_key=True),
        sa.Column('section', sa.databases.mssql.MSUniqueIdentifier),
        sa.Column("start", sa.databases.mssql.MSDateTime_pyodbc, primary_key=True),
        sa.Column("stop", sa.databases.mssql.MSDateTime_pyodbc),
        sa.Column("station", sa.VARCHAR(20)),
        autoload=True)

    class Job(object):
        def __init__(self, identifier, start):
            self.identifier, self.start = identifier, start

    sa.orm.mapper(Job, jobs)

    j = Job("TEST1", datetime.datetime.now())
    sa_session.save(j)
    sa_session.commit()

    # The following part is here just to simulate my problem... if I keep
    # using j instead of getting j1 from query,
    # the record is updated as well
    sa_session.clear()
    time.sleep(1)
    j1 = sa_session.query(Job).all()[0]
    j1.stop = datetime.datetime.now()
    sa_session.save_or_update(j1)
    sa_session.commit()
[sqlalchemy] Re: Undeferring attributes off joined entities
On Dec 9, 2007, at 3:59 PM, Chris M wrote:

> I'll commit what I have and some tests sometime soon so you can see
> what's going on (unless you're by chance magical and already know
> what's going on!)

I'm doing some surgery on Query at the moment... might be better if you wait for the next checkin.
[sqlalchemy] Re: Lazy ID Fetching/generation
>> My thought is that sqlalchemy should force the object to be flushed
>> (or whatever must be done to determine the ID, possibly just selecting the
>> next value from a sequence) when the id property is retrieved.
>
> can't be done for mysql, sqlite, MSSQL, others, without issuing an
> INSERT. you cant INSERT on __init__ since not every attribute may be
> populated on the object, and additionally our session doesnt generally
> like to do things "automatically", with the exception of the
> "autoflush" feature. also we don't emit any modifying SQL externally
> to the flush. if youre using a database like postgres or oracle,
> you're free to execute the sequence yourself and apply the new value
> to the primary key attribute of your object, and it will be used as
> the primary key value when the INSERT does actually occur.

Ahh, but session.save() was already called, so trying to fetch a database-generated attribute (such as the primary key in my case) should trigger a flush of the row itself. That can be done with any database. It wouldn't be done on __init__, nor would it be done on save(). It would be done only once you tried to fetch the id property (only for objects in the Pending state).

Okay, as an example. Let's say you have:

    something_table = Table('something', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String)
    )

    class Something(object):
        def __init__(self, name):
            self.name = name

        def __repr__(self):
            return "<Something(%d, %s)>" % (self.id, self.name)

    mapper(Something, something_table)

    obj = Something('blah')
    session.save(obj)
    print "Look ma, a something: %s" % obj

In theory that will throw an exception since Something's __repr__ will have None for the id property, since id was never retrieved.

(if I just did:

    obj = Something('blah')
    print "Bad idea: %s" % obj

then I would expect an exception, since it's not saved)

Does this description make more sense than what I said before?

-Adam Batkin
[sqlalchemy] Matching a DateTime-field
Hello!

I'm trying to do a query that gets all lists within a specific month, but SQLAlchemy whines. :/

The error:

    /usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/sqlalchemy/databases/mysql.py:1475:
    Warning: Incorrect datetime value: '"2007-"+str(month)+"%"' for column 'expire' at row 1
      cursor.execute(statement, parameters)

The code:

    L = session.query(List).join('friends').filter(
        Friend.username == identity.current.user_name).filter(
        List.expire.like('"2007-"+str(month)+"%"')).all()

The column "expire" is DateTime in the model.

Any ideas how to do this?

br
Adam
[sqlalchemy] Re: Lazy ID Fetching/generation
On Dec 9, 2007, at 4:10 PM, Adam Batkin wrote:

> If I create an object, then save() it, potentially the object won't be
> actually persisted until sqlalchemy decides that it needs to (for
> example on flush/commit, or when some query involving Thing's table gets
> executed) which is good. But (in my opinion) the laziness is a bit too
> lazy when it comes to autogenerated primary keys:
>
>     t = Something('foo')
>     session.save(t)
>     assert t.id is None
>
> but if I then:
>
>     session.flush()
>     assert t.id is not None

whats the issue there? a lot of attributes get populated after flush, not just primary key attributes but also foreign key-holding attributes. if you need the PK because you're trying to link up related objects manually on their foreign keys, relation() takes care of that for you when using the ORM. (but if you choose, you can autogenerate the ID yourself if it's possible with your database).

> My thought is that sqlalchemy should force the object to be flushed
> (or whatever must be done to determine the ID, possibly just selecting the
> next value from a sequence) when the id property is retrieved.

can't be done for mysql, sqlite, MSSQL, others, without issuing an INSERT. you can't INSERT on __init__ since not every attribute may be populated on the object, and additionally our session doesn't generally like to do things "automatically", with the exception of the "autoflush" feature. also we don't emit any modifying SQL externally to the flush. if you're using a database like postgres or oracle, you're free to execute the sequence yourself and apply the new value to the primary key attribute of your object, and it will be used as the primary key value when the INSERT does actually occur.
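Michael's suggestion for sequence-capable databases — fetch the next sequence value yourself and assign it to the primary key attribute before the flush — looks roughly like this. The itertools counter stands in for a real database sequence, and Thing is an invented class; with a real 0.4-era engine the value would come from executing a Sequence against the database instead.

```python
import itertools

# stand-in for a database sequence (e.g. the sequence behind a Postgres SERIAL)
sequence = itertools.count(start=1)

class Thing:
    def __init__(self, name):
        self.name = name
        self.id = None   # normally populated by the database at flush time

t = Thing('foo')
# pre-assign the primary key before any flush; SQLAlchemy would then use
# this value in the eventual INSERT instead of asking the database for one
t.id = next(sequence)
```

This only works where the database can hand out key values independently of the INSERT (sequences on Postgres/Oracle); autoincrement columns on mysql, sqlite, and MSSQL produce the key as a side effect of the INSERT itself, which is exactly the limitation described above.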
[sqlalchemy] Lazy ID Fetching/generation
If I create an object, then save() it, potentially the object won't be actually persisted until sqlalchemy decides that it needs to (for example on flush/commit, or when some query involving Thing's table gets executed), which is good. But (in my opinion) the laziness is a bit too lazy when it comes to autogenerated primary keys:

    t = Something('foo')
    session.save(t)
    assert t.id is None

but if I then:

    session.flush()
    assert t.id is not None

My thought is that sqlalchemy should force the object to be flushed (or whatever must be done to determine the ID, possibly just selecting the next value from a sequence) when the id property is retrieved.

Thoughts?

-Adam Batkin
[sqlalchemy] Re: From arbitrary SELECT to Query
heres the output of:

    sel = users.select(users.c.id.in_([7, 8])).alias()
    sess.query(User).options(eagerload('addresses')).select_from(sel)[1]

    SELECT anon_1.anon_2_id AS anon_1_anon_2_id, anon_1.anon_2_name AS anon_1_anon_2_name,
           addresses_1.id AS addresses_1_id, addresses_1.user_id AS addresses_1_user_id,
           addresses_1.email_address AS addresses_1_email_address
    FROM (SELECT anon_2.id AS anon_2_id, anon_2.name AS anon_2_name, anon_2.id AS anon_2_oid
          FROM (SELECT users.id AS id, users.name AS name
                FROM users
                WHERE users.id IN (%(users_id_1)s, %(users_id_2)s)) AS anon_2
          ORDER BY anon_2.id LIMIT 1 OFFSET 1) AS anon_1
    LEFT OUTER JOIN addresses AS addresses_1 ON anon_1.anon_2_id = addresses_1.user_id
    ORDER BY anon_1.anon_2_id, addresses_1.id

thats a mouthful.
[sqlalchemy] Re: Undeferring attributes off joined entities
I'll commit what I have and some tests sometime soon so you can see what's going on (unless you're by chance magical and already know what's going on!)
[sqlalchemy] Re: Undeferring attributes off joined entities
Nope, eagerloads are a no-go. I tried changing line 901 of query.py again to:

    context.exec_with_path(m, value.key, value.setup, context,
                           parentclauses=clauses, parentmapper=m)

but that did not work either. The code around exec_with_path and setup_query confuses me; I'm not sure I can fix eagerloads by myself. Currently, without setting parentmapper=m, it tries to find the columns in the table of the main entity, so I think this is at least a step in the right direction.

On Dec 9, 2:15 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Dec 9, 2007, at 1:21 PM, Chris M wrote:
>
> > Is this how you want to do it? Unfortunately, just your fix alone
> > doesn't do the trick BUT if you change line 901 of query.py to
> >
> >     context.exec_with_path(m, value.key, value.setup, context,
> >                            parentclauses=clauses)
> >
> > it works, and all ORM tests run fine.
>
> I think we should go for it, if for no other reason than add_entity()
> is a fairly new method, so its better we start establishing the
> "ordered" behavior sooner rather than later.
>
> Id be curious to know if actual eager loads work off the second entity
> also (im thinking... maybe?).
>
> you can commit this change if you'd like, but id ask that a few (very
> short) tests be added to test/orm/query.py which validate the behavior
> of options() both with and without an add_entity() (i.e., a test that
> would fail if you didnt implement the feature).
[sqlalchemy] Re: save_or_update and composit Primary Keys...
On Dec 9, 2007, at 2:54 PM, Smoke wrote:

> When I create and save a j = Job(identifier, start), I have no
> problems and it saves the new record on the table, but when I want to
> update (update or save_or_update) the record with the stop time, it
> just doesn't update the record... It does not throw any new
> exception. I've also tried to change the table definition, putting
> the primary_key on both column definitions instead of using
> PrimaryKeyConstraint (as you can see by the comments..), but the
> result is the same...
> Am I missing something? Or maybe composite primary key tables are not
> supported for updating using the session?

they're entirely supported. try to provide a fully working example illustrating the problem you're having.
[sqlalchemy] Re: From arbitrary SELECT to Query
On Dec 9, 2007, at 2:31 PM, Artur Siekielski wrote:

>> no, from_statement replaces all filtering.
>
> Shouldn't it throw some exception then?

funny you should say that - this week we've been adding warnings for query methods that are called when they would ignore some part of the existing criterion, so we will add this.

> cannot tell users - here you have a Query object, but filtering
> sometimes doesn't work! So it seems that if I want to have object-
> oriented proxy for SQL results with lazy loading, I must write my own
> wrapper.

we've had requests for this before, and since we've recently greatly improved our ability to alias clauses against a new selectable, I'm going to try to commit this. select_from()'s behavior is going to change here, but I don't think the replaced behavior is anything anyone was using (i.e. it currently builds a list of clauses, but you don't really need that for anything).

note that we haven't had this feature before since it's quite complicated; if I say

    query.select_from(users.select(users.c.id.in_([7, 8])).alias()).filter(User.id == 7)

the generated query must be:

    SELECT anon_1.id AS anon_1_id, anon_1.name AS anon_1_name
    FROM (SELECT users.id AS id, users.name AS name
          FROM users
          WHERE users.id IN (%(users_id_1)s, %(users_id_2)s)) AS anon_1
    WHERE anon_1.id = %(users_id_3)s ORDER BY anon_1.id

i.e. the incoming filter() criterion has to be aliased, the actual columns which the mapper receives are now named differently so are also translated on a row-by-row basis, etc. currently I have it working for non-eager queries. when the feature is complete there still may be more complex queries that just don't come out correctly; we'll have to see.
[sqlalchemy] save_or_update and composite Primary Keys...
Hi,

These days I'm playing with sqlalchemy to see if it can fit my needs... I'm having some trouble with this (maybe it's a really dumb question.. or maybe a non-supported feature.. :) ): I have a database (mssql) with some tables with composite primary keys... something like this:

    t_jobs = sa.Table('jobs', metadata,
        sa.Column('identifier', sa.VARCHAR(20)),  #, primary_key=True),
        sa.Column('job_batch', sa.databases.mssql.MSUniqueIdentifier),
        sa.Column("start", _sql.MSDateTime_pyodbc),  #, primary_key=True),
        sa.Column("stop", _sql.MSDateTime_pyodbc),
        sa.Column("station", sa.VARCHAR(20)),
        sa.PrimaryKeyConstraint('identifier', 'inizio'),
        autoload=True)

and it's mapped to a class... like this:

    class Job(object):
        ...

    sa.orm.mapper(Job, t_jobs)

When I create and save a j = Job(identifier, start), I have no problems and it saves the new record on the table, but when I want to update (update or save_or_update) the record with the stop time, it just doesn't update the record... It does not throw any new exception. I've also tried to change the table definition, putting the primary_key on both column definitions instead of using PrimaryKeyConstraint (as you can see by the comments..), but the result is the same... Am I missing something? Or maybe composite primary key tables are not supported for updating using the session?

Thanks,
Fabio
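For background on what the session has to get right here: an ORM identity map tracks rows by the tuple of all primary-key columns, so an update only finds its target when every key component matches exactly (a datetime losing precision on a round trip, for instance, silently breaks the match). This is a simplified dict-based sketch of that idea, not SQLAlchemy's actual identity map.

```python
# Simplified sketch: an identity map keyed by a composite primary key.
identity_map = {}

def store(row):
    # the key is the tuple of ALL primary-key columns, in order
    identity_map[(row['identifier'], row['start'])] = row

job = {'identifier': 'TEST1', 'start': '2007-12-09 17:26:00', 'stop': None}
store(job)

# an UPDATE targets the row via *both* key columns; if either component
# differs from what was stored, the lookup misses and nothing is updated
key = ('TEST1', '2007-12-09 17:26:00')
identity_map[key]['stop'] = '2007-12-09 17:27:00'
```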
[sqlalchemy] Re: From arbitrary SELECT to Query
> no, from_statement replaces all filtering.

Shouldn't it throw some exception then?

> so, what is it you're trying to do exactly?

I'm writing a DAO module for db access which must be independent of the rest of the system. I'm looking for a class which can be used as a proxy for SQL results. Query would be good, if it were possible to have a fully functional Query instance representing any SQL statement. But I cannot tell users - here you have a Query object, but filtering sometimes doesn't work! So it seems that if I want to have an object-oriented proxy for SQL results with lazy loading, I must write my own wrapper.
[sqlalchemy] Re: Undeferring attributes off joined entities
On Dec 9, 2007, at 1:21 PM, Chris M wrote:

> Is this how you want to do it? Unfortunately, just your fix alone
> doesn't do the trick BUT if you change line 901 of query.py to
>
> context.exec_with_path(m, value.key, value.setup, context, parentclauses=clauses)
>
> it works, and all ORM tests run fine.

I think we should go for it, if for no other reason than that add_entity() is a fairly new method, so it's better we start establishing the "ordered" behavior sooner rather than later. I'd be curious to know if actual eager loads work off the second entity also (I'm thinking... maybe?). you can commit this change if you'd like, but I'd ask that a few (very short) tests be added to test/orm/query.py which validate the behavior of options() both with and without an add_entity() (i.e., a test that would fail if you didn't implement the feature).
[sqlalchemy] Re: From arbitrary SELECT to Query
On Dec 9, 2007, at 1:03 PM, Artur Siekielski wrote:

> But is the Query object constructed by from_statement fully functional?
> Using "filter" doesn't work for me - it returns the same query.

no, from_statement replaces all filtering. so, what is it you're trying to do exactly?
[sqlalchemy] Re: From arbitrary SELECT to Query
But is the Query object constructed by from_statement fully functional? Using "filter" doesn't work for me - it returns the same query.
[sqlalchemy] Re: Undeferring attributes off joined entities
Is this how you want to do it? Unfortunately, just your fix alone doesn't do the trick BUT if you change line 901 of query.py to

context.exec_with_path(m, value.key, value.setup, context, parentclauses=clauses)

it works, and all ORM tests run fine.

On Dec 8, 10:59 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Dec 8, 2007, at 9:55 PM, Chris M wrote:
>
> > One thing I'd be worried about is that after an add_entity there is no
> > way to set options on the main entity afterwards. You could provide a
> > reset_entitypoint, but it wouldn't work the same as with joins because
> > after a reset_joinpoint you can rejoin along the same path to filter
> > more criterion if necessary. Still, I think some functionality is
> > better than no functionality... it's not that big of a deal, is it?
>
> when the notion of "reset_entitypoint" comes in, things have just
> gotten out of hand. at some point we're going to have to decide
> that order is significant with query's generative behavior, and people
> will have to use it with that knowledge in mind. it's already been
> suggested as a result of other behaviors (such as
> query[5:10].order_by('foo') not generating a subquery, for example).
[sqlalchemy] Re: From arbitrary SELECT to Query
On Dec 9, 2007, at 12:31 PM, Artur Siekielski wrote:

> Hi again.
> Thanks for the hints on using the "instances" method. But is there any
> method to get a Query object representing the query result?
>
> I have spent more time on my problem. It's important for me whether I
> can use a Query object as a proxy to instances fetched from the DB, or
> if I must fall back to a raw list. An almost-working solution (I'm
> using PostgreSQL) is this:
>
> q = dbSession.query(DomainClass).select_from(compoundSelect.alias('myalias'))
>
> The problem is visible here: print q
> SELECT
> FROM DomainClassTable,
>
> The problem is that "DomainClassTable" is always added to the FROM
> clause, even if I throw it away by hand from the q._from_obj list...

hey there - sure, it's in the ORM tutorial, and I just fixed that it had no docstring in the pydoc in r3901; it's called from_statement(). select_from() is for adding additional FROM clauses to the generated query. from_statement() is used to entirely replace the compiled statement with that of your own, i.e. dbSession.query(DomainClass).from_statement(myselect). this is the equivalent of query(DomainClass).instances(myselect.execute()).
[sqlalchemy] Re: From arbitrary SELECT to Query
Hi again.

Thanks for the hints on using the "instances" method. But is there any method to get a Query object representing the query result?

I have spent more time on my problem. It's important for me whether I can use a Query object as a proxy to instances fetched from the DB, or if I must fall back to a raw list. An almost-working solution (I'm using PostgreSQL) is this:

q = dbSession.query(DomainClass).select_from(compoundSelect.alias('myalias'))

The problem is visible here:

>>> print q
SELECT
FROM DomainClassTable,

The problem is that "DomainClassTable" is always added to the FROM clause, even if I throw it away by hand from the q._from_obj list...
[sqlalchemy] Re: SA and MySQL replication.
On Dec 9, 2007, at 11:50 AM, Anton V. Belyaev wrote:

> On Dec 6, 11:51 pm, Andrew Stromnov <[EMAIL PROTECTED]> wrote:
>> I have a DB with one master server and several replicated slaves
>> (MySQL). How to implement this functionality: a read-only request
>> can be passed to any DB (master or slave), but any write request
>> with following read requests must be sent to the master only (to
>> avoid synchronization lag).
>
> This is an example of vertical partitioning. I am trying to find out
> how to implement this with SA too.
>
> Quite an often-used scheme. Strange that no one has replied to this
> thread yet.
>
> SA even has support for sharding (which is more complex than vertical
> partitioning IMHO) so there certainly should be a way to do a
> 1-master-N-slaves scheme.

"vertical" partitioning at the Session level is pretty straightforward.. this example:

http://www.sqlalchemy.org/docs/04/session.html#unitofwork_contextual_partitioning_vertical

is pretty short but that's really all there is to it: just map classes to different database connections.

partitioning among read-only or write is a different case - the easiest way is to use two Sessions, one bound to the read-only DB, one bound to the write DB. your application methods choose which session they want based on whether they are writers or not. I would not advise having a single session "switch" databases mid-stream since you lose transactional consistency in that case, but if you really want to do that, i.e. you'll ensure that no transactional state is present on the session after you've read, you can re-bind it to the writer engine by assigning session.bind, and then continue. you can create a very simple subclass of Session which overrides flush() to do this, and if you're using sessionmaker just send in your class with the "class_" keyword argument.

with MySQL there are a lot of third-party clustering tools out there which could obviate the need for SQLAlchemy to be aware of things like this.
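The routing rule behind the two-session approach (reads go to a slave, writes and all subsequent reads go to the master) can be sketched without any database at all. The `Engine` and `RoutingSessionPicker` classes below are plain stand-ins, not SQLAlchemy objects; this only illustrates the decision logic:

```python
class Engine:
    """Stand-in for a database engine; only carries a name."""
    def __init__(self, name):
        self.name = name


class RoutingSessionPicker:
    """Pick an engine per request: reads round-robin over replicas,
    writes go to the master, and once a session has written it stays
    on the master to avoid replication lag."""

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self._dirty = False  # set once a write has happened
        self._next = 0

    def engine_for(self, write=False):
        if write:
            self._dirty = True
        if write or self._dirty:
            return self.master
        engine = self.replicas[self._next % len(self.replicas)]
        self._next += 1
        return engine


picker = RoutingSessionPicker(Engine("master"),
                              [Engine("slave1"), Engine("slave2")])
print(picker.engine_for().name)             # slave1
print(picker.engine_for(write=True).name)   # master
print(picker.engine_for().name)             # master (sticky after write)
```

In real code the "stickiness" would map to Michael's advice of simply keeping two bound Sessions and choosing between them at the application-method level.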
[sqlalchemy] Re: Determining types of joined attributes
On Dec 9, 2007, at 10:10 AM, Brendan Arnold wrote:

> Ah I see, sorry, what I meant to ask was if there was a way to tell
> the difference with _instances_ of orm objects

you'd call object_mapper(instance) and then do the same thing using mapper.get_property().
[sqlalchemy] Re: SA and MySQL replication.
On Dec 6, 11:51 pm, Andrew Stromnov <[EMAIL PROTECTED]> wrote:
> I have a DB with one master server and several replicated slaves
> (MySQL). How to implement this functionality: a read-only request can
> be passed to any DB (master or slave), but any write request with
> following read requests must be sent to the master only (to avoid
> synchronization lag).

This is an example of vertical partitioning. I am trying to find out how to implement this with SA too.

Quite an often-used scheme. Strange that no one has replied to this thread yet.

SA even has support for sharding (which is more complex than vertical partitioning IMHO) so there certainly should be a way to do a 1-master-N-slaves scheme.
[sqlalchemy] Re: Determining types of joined attributes
Ah I see, sorry, what I meant to ask was if there was a way to tell the difference with _instances_ of orm objects

brendan

On Dec 8, 2007 7:32 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> On Dec 8, 2007, at 2:09 PM, Brendan Arnold wrote:
>
> > hmm strange, i tried this out with sqlalchemy version 0.4.1 but it
> > does not seem to work...
> >
> > >>> s.name
> > u'HALN01100601'
> > >>> isinstance(s.name, sqlalchemy.orm.PropertyLoader)
> > False
> > >>> s.targets
> > []
> > >>> isinstance(s.targets, sqlalchemy.orm.PropertyLoader)
> > False
> >
> > both attributes were loaded by sqlalchemy. also,
> >
> > >>> getattr(s.materials, "direction")
> > Traceback (most recent call last):
> >   File "", line 1, in ?
> > AttributeError: 'InstrumentedList' object has no attribute 'direction'
>
> here is a code example illustrating my previous email:
>
> from sqlalchemy import *
> from sqlalchemy.orm import *
> from sqlalchemy.orm.properties import PropertyLoader
> from sqlalchemy.orm.sync import ONETOMANY, MANYTOONE, MANYTOMANY
>
> metadata = MetaData()
>
> t = Table('foo', metadata, Column('id', Integer, primary_key=True))
> t2 = Table('bar', metadata, Column('id', Integer, primary_key=True),
>            Column('fooid', Integer, ForeignKey('foo.id')))
>
> class T(object): pass
> class T2(object): pass
>
> mapper(T, t, properties={
>     't2s': relation(T2)
> })
> mapper(T2, t2)
>
> prop = T.t2s.property
> if isinstance(prop, PropertyLoader):
>     if prop.direction == ONETOMANY:
>         print "onetomany"
>     elif prop.direction == MANYTOONE:
>         # etc
>
> On Dec 3, 2007 9:30 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
> >
> > i'd look at prop = MyClass.someattribute.property, use
> > isinstance(prop, PropertyLoader) to determine a relation and not a
> > scalar, combined with getattr(prop, "direction") ==
> > sqlalchemy.orm.sync.MANYTOMANY, ONETOMANY, etc. to get the type of
> > join.
> >
> > On Dec 3, 2007, at 3:43 PM, Brendan Arnold wrote:
> > >
> > > hi there,
> > >
> > > i'd like a way to determine if an attribute of an orm object is:
> > >
> > > a:) a sqlalchemy generated list of objects (i.e. many-to-many)
> > > b:) a single sqlalchemy joined object (i.e. one-to-many)
> > > c:) a 'scalar' loaded from the database (i.e. a string, float,
> > > integer)
> > >
> > > at present i'm copying the text generated by
> > > 'type(orm_obj.joined_list)' to determine a: and a 'type(float, int
> > > etc.)' for c:, what's left is b.
> > >
> > > this seems shaky, is there a better way? are there some 'types'
> > > defined in sqlalchemy?
> > >
> > > brendan
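Brendan's fallback of classifying attribute *values* by their Python type (rather than asking the mapper, which is the robust route Michael recommends) can be sketched without SQLAlchemy at all. The `Address` class and the cutoffs here are purely illustrative:

```python
# Rough value-based version of the a/b/c classification from the thread:
# a) a list of related objects, b) a single related object, c) a scalar.
def classify(value):
    if isinstance(value, (str, bytes, int, float, bool, type(None))):
        return "scalar"           # c) plain column value
    if isinstance(value, (list, tuple)):
        return "collection"       # a) one-to-many / many-to-many list
    return "related object"       # b) a single joined instance


class Address:
    """Hypothetical mapped class standing in for a related entity."""


print(classify(u"HALN01100601"))  # scalar
print(classify([Address()]))      # collection
print(classify(Address()))        # related object
```

The shakiness Brendan mentions is visible here: any attribute that happens to hold a list or an arbitrary object is misclassified, which is exactly why inspecting the mapper's properties is the better answer.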