[sqlalchemy] Re: Column metadata from mapped class

2008-07-22 Thread huy do



On Jul 22, 1:33 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jul 21, 2008, at 9:45 PM, huy do wrote:



  Because a mapped class can have an arbitrary select as its source, i.e.
  columns from any table theoretically, it would still be nice to know
  exactly which columns were used to map the properties of a given
  class. The .c on the model class used to give us the metadata (i.e.
  either the select or table, aka relation) which was used to populate
  the class. Can we get an extension to get this feature back (please) ?

 If you just want the Table, it's just
 class_mapper(class).mapped_table.    There's lots of ways to build your
 own .c. class attribute and such, including:

         MyClass.c = class_mapper(MyClass).columns
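
A minimal sketch of where that leads, assuming the 0.5 mapper API (MyClass and
its attribute foo are illustrative names):

from sqlalchemy.orm import class_mapper

table = class_mapper(MyClass).mapped_table       # the Table the class is mapped to
MyClass.c = class_mapper(MyClass).columns        # a stand-in for the old .c collection
foo_cols = class_mapper(MyClass).get_property('foo').columns  # Column(s) behind one attribute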

cool.


 the .c. really had to be removed... it's entirely different now if
 you say query.filter(MyClass.c.foo=='bar') in 0.5 since no adaptation
 will take place.

I don't mind replacing MyClass.c.foo == 'bar' with MyClass.foo == 'bar',
but is it possible to add MyClass.foo.c to get at the metadata behind
the column itself ?

Thanks

huy




[sqlalchemy] Re: threadlocal sessions and transactions

2008-07-19 Thread huy do



On Jul 20, 12:13 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jul 19, 2008, at 9:10 AM, Contact 42 wrote:





  Hi,

  I am using sa 0.5b2 under python 2.5.

  Should the following code commit myobj ?

  engine = create_engine(appconfig.dburi, strategy='threadlocal')
  Session = scoped_session(sessionmaker(bind=engine, autoflush=True,
  autocommit=True))

  sess = Session()
  engine.begin()
  sess.add(myobj)
  engine.commit()

  I am noticing that it does not. However, if I do

  sess = Session()
  sess.begin()
  sess.add(myobj)
  sess.commit()

  it works.

  Am I missing something ?

 The commit() method on Session has the additional functionality of  
 flushing itself when called.  If you want to use Session within a  
 threadlocal transaction at the engine level, you'd have to ensure that  
 you flush() the session manually before commit.
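
A small sketch of that, keeping the engine-level threadlocal transaction from
the snippet above (myobj as in the original example):

sess = Session()
engine.begin()
sess.add(myobj)
sess.flush()       # push the pending ORM changes onto the threadlocal transaction
engine.commit()    # the flushed INSERT now commits together with everything else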

I thought autoflush=True  would do that for me. This is not a problem,
I'll just use the session transaction interface.


 These days the Session has become more prominent as the center of  
 transactional control, however, so you might find it easier to just go  
 through scoped_session() for transaction control entirely.   Session  
 has an execute() and connection() method for issuing raw SQL too.
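
For example, a rough sketch of driving everything through the scoped session
(the stats table in the raw SQL is purely illustrative):

sess = Session()
sess.begin()
try:
    sess.add(myobj)
    sess.execute("UPDATE stats SET counter = counter + 1")  # raw SQL on the same connection/transaction
    sess.commit()
except:
    sess.rollback()
    raise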

Yeah this is great. I will move all my engine stuff to use the session
directly.

  If  
 you continue to use the threadlocal engine strategy, then implicit  
 executions (i.e. somestatement.execute()) should participate in the  
 same transaction as the Session.

Is there a warning in that "if" ? Is the threadlocal strategy not the
most popular for web programs ?

Thanks,

Huy




[sqlalchemy] Mapped class and c attribute in 0.5

2008-06-25 Thread Huy Do

Hi,

I read in the 0.5 release notes that the c attribute was no longer 
necessary when doing queries using the mapped class, but I did not see 
it mentioned that the c attribute was removed altogether.

It's just that I've been using the c attribute in my Mapped classes to 
access the database/table metadata and now all my code will need to be 
changed to use the table.c attribute instead.

Was there a reason for removing the c attribute from mapped classes 
altogether ?

Thanks,

Huy




[sqlalchemy] Re: Mapped class and c attribute in 0.5

2008-06-25 Thread Huy Do

Michael Bayer wrote:
 On Jun 25, 2008, at 10:48 AM, Huy Do wrote:

   
 Hi,

 I read in the 0.5 release notes that the c attribute was no longer
 necessary when doing queries using the mapped class, but I did not see
  it mentioned that the c attribute was removed altogether.

 It's just that I've been using the c attribute in my Mapped classes to
 access the database/table metadata and now all my code will need to be
 changed to use the table.c attribute instead.

 Was there a reason for removing the c attribute from mapped classes
 altogether ?

 

 in 0.5, the attributes on classes vs. the columns on Tables are very  
 different beasts now.   It's not at all the same thing if you say:

 sess.query(MyClass.id, MyClass.name)

 vs.

 sess.query(mytable.c.id, mytable.c.name)


 since in the former case, we know that we are dealing with the MyClass  
 mapper; in the latter case, we're not.  The behavior of things like  
 join(), polymorphic loading, other options, etc. are completely  
 different - in the case of joined table inheritance it's dramatically  
 different, where Subclass.id and subtable.c.id are literally  
 different columns.   So we really can't have a casual .c. attribute  
 hanging around on classes with the new behavior;  the user needs to be  
 very explicit when choosing between MyClass.attr vs. table.c.attr.

 That said, you can retroactively add your .c. attribute using a  
 MapperExtension that implements instrument_class(), and assigns the  
 c attribute from the table to the class.
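
A rough sketch of such an extension, assuming the 0.5-era MapperExtension API
(instrument_class() and EXT_CONTINUE):

from sqlalchemy.orm import MapperExtension, EXT_CONTINUE

class AddCAttribute(MapperExtension):
    # copy the mapped table's column collection onto the class as .c
    def instrument_class(self, mapper, class_):
        class_.c = mapper.mapped_table.c
        return EXT_CONTINUE

# hypothetical usage:
# mapper(MyClass, mytable, extension=AddCAttribute())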
   
Michael,

Thanks for the suggestion. Is there any metadata on the MyClass.id field 
at all ? Like which column it's mapped to ?

Lucky for me, I don't use any of the advanced SA features like 
inheritance and polymorphic stuff, so usually my MyClass.id is the same 
as my mytable.c.id (in the old SA).

Thanks again,

Huy




[sqlalchemy] Re: Adding a filter_by() clause by default.

2008-06-25 Thread Huy Do

Michael Bayer wrote:
 using 0.5:

 from sqlalchemy.orm import Query, sessionmaker

 class MyQuery(Query):
     def __new__(cls, entities, **kwargs):
         if hasattr(entities[0], 'deleted_at'):
             return Query(entities,
                          **kwargs).filter_by(deleted_at=None)
         else:
             return object.__new__(cls)

 Session = sessionmaker(query_cls=MyQuery)
   
This is cool stuff.

huy




[sqlalchemy] Re: threadlocal transactions, engine, and the Session

2007-11-13 Thread Huy Do

Hi Rick,
 I use a similar technique with a Pylons controller, but instead of 
 engine.begin(), I use session.begin(). Then by passing around the 
 session for all calls made by that controller, 
I had so much legacy SA code that I just couldn't easily retrofit this 
pattern throughout the code, but thanks for the tip.
 I can use Session.execute() for expression-based and text-based SQL 
 mixed with ORM ops, and it all commits in one shot on one thread 
 without using threadlocal. This allows the freedom to open another 
 session in a separate transaction for those odd places where it's 
 needed, and have arbitrary expression-based SQL execute in that other 
 transaction -- something I don't think 
 you'll be able to do with threadlocal + implicit execution.
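
A loose sketch of that pattern, with made-up names (the invoice object, the
orders table and the 0.4-era save() spelling are all assumptions here):

from sqlalchemy import text

sess = Session()
sess.begin()
try:
    sess.save(invoice)                                   # ORM work, flushed on commit
    sess.execute(text("UPDATE orders SET status = 'billed' WHERE id = :id"),
                 params={'id': 42})                      # text SQL in the same transaction
    sess.commit()                                        # one commit covers both
except:
    sess.rollback()
    raise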
I was hoping that I could create another session using another engine 
(non-threadlocal) for these special cases. I have not run into this 
need yet. Just out of interest, have you run into any such use cases ?

Thanks

Huy

 Rick


 





[sqlalchemy] Re: threadlocal transactions, engine, and the Session

2007-11-13 Thread Huy Do


 I was hoping that I could create another session using another engine
 (non-threadlocal) for these special cases. I have not run into this
 need yet. Just out of interest, have you run into any such use cases ?

 Thanks

 

 if you're using a threadlocal engine, you can still say engine.connect()  
 and get a Connection that is not part of the threadlocal context (and  
 start transactions off that connection, bind it to sessions,  
 whatever).  therefore you really don't lose anything when using  
 threadlocal.
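
A small sketch of that, assuming the threadlocal engine from earlier in the
thread (create_session is the 0.4 spelling):

conn = engine.connect()            # explicit Connection, outside the threadlocal context
trans = conn.begin()               # its own transaction
sess = create_session(bind=conn)   # a session bound to that connection
try:
    # ... work on conn / sess ...
    trans.commit()
except:
    trans.rollback()
    raise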
You've thought of everything :-)

Thanks.

Huy




[sqlalchemy] threadlocal transactions, engine, and the Session

2007-11-12 Thread Huy Do

Hi,

I've just had  a heck of a time getting transactions to behave correctly 
after upgrading to 0.4dev from 0.3.6, and I just wanted to make sure 
that I am doing things correctly.

I've found that to totally control transactions myself, and get ORM 
sessions (i.e. Session.flush()) to interact with SQL transactions, i.e. 
table.insert().execute(), I had to do the following.

engine = create_engine(appconfig.dburi, strategy='threadlocal', echo=False)
Session = scoped_session(sessionmaker(bind=engine, autoflush=False, 
transactional=False))
metadata = MetaData(engine)

then.

engine.begin()
try:
    # Session.flush()
    # mytable.insert().execute() stuff
    engine.commit()
except:
    engine.rollback()

Does this seem correct ?

Previously I used autoflush=True, transactional=True together with 
Session.begin(), Session.commit(), Session.rollback() and I ran into all 
sorts of issues, e.g. "transaction was started but never committed" etc.

Thanks

Huy






[sqlalchemy] Re: threadlocal transactions, engine, and the Session

2007-11-12 Thread Huy Do

Michael Bayer wrote:
 On Nov 12, 2007, at 5:37 AM, Huy Do wrote:

   
 Hi,

 I've just had  a heck of a time getting transactions to behave  
 correctly
 after upgrading to 0.4dev from 0.3.6, and I just wanted to make sure
 that I am doing things correctly.

 I've found that to totally control transactions myself, and get ORM
 sessions (i.e Session.flush()) to interact with SQL transactions i.e
 table.insert().execute(), I had to do the following.

 engine = create_engine(appconfig.dburi, strategy='threadlocal',  
 echo=False)
 Session = scoped_session(sessionmaker(bind=engine, autoflush=False,
 transactional=False))
 metadata = MetaData(engine)

 then.

 engine.begin()
 try:
   # Session.flush()
   # mytable.insert().execute() stuff
   engine.commit()
 except:
   engine.rollback()

 Does this seem correct ?

 Previously i used autoflush=True, transactional=True together with
 Session.begin(), Session.commit(), Session.rollback() and I ran into  
 all
 sorts of issues e.g transaction was started but never committed etc.

 

 The reason you have to use threadlocal in that case is because you  
 are relying upon implicit execution of things like mytable.insert(),  
 and you also want the transaction to propagate from engine to session  
 without explicitly passing the connection to it.  so what you're doing  
 above is OK.   it might actually be the easiest way to do it and is  
 the only way the implicit execution style can participate in the  
 transaction.
   
Great. Thanks for the confirmation.
 2. use the engine to manage the transaction but dont use threadlocal:

 conn = engine.connect()
 trans = conn.begin()
 Session(bind=conn)
 try:
   # 
   conn.execute(mytable.insert(), stuff)
   trans.commit()
 except:
   trans.rollback()
 finally:
   Session.close()
   
There's no way I'm giving up threadlocal :-) I love it (at least in my 
web programs).

I have this transaction decorator which I wrap all my data access code 
in, and with SA's cool transaction support, I don't have to worry about 
transaction commit/rollback handling again.

def transaction(func):
    '''
    This is a decorator for wrapping methods in a db transaction
    '''
    def trans_func(*args, **kws):
        engine.begin()
        try:
            f = func(*args, **kws)
            engine.commit()
            return f
        except:
            engine.rollback()
            raise
    return trans_func
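
A hypothetical usage sketch of the decorator (the method name and its body are
made up):

@transaction
def charge_storage(self, client_id):
    # every implicit execute()/flush() in here joins the threadlocal
    # transaction; commit on normal return, rollback on any exception
    job_insert.execute(client_id=client_id)
    Session.flush()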




[sqlalchemy] Re: plain python objects from ORM

2007-09-25 Thread Huy Do


 I think this would be a great feature to have because there are  
 many use
 cases in my application (mainly displaying/processing tables) where I
 don't want/need the overhead of the instrumentation (and it really  
 does
 add quite a bit), but would still love the excellent mapping abilities
  (i.e. have fully hydrated domain objects rather than ResultProxy).
 

 I think there's still going to be a lot of overhead even without  
 instrumentation.  anyway, this would be an enormous amount of effort  
 to establish and also to keep test coverage going, and would probably  
 be a significant complication to the internals.  I'd rather focus on  
 making our current, single approach faster and better (note that 0.4  
 uses about half the method call overhead of 0.3 for load operations).
   
Fair enough. I think I'm just a big fan of the ORM load features, but 
not of the (cascading) flush (save/update/delete) features. I am a 
control freak when it comes to the database. There was a stage when I 
did not like the SQL generated from the ORM load as well, but recently 
(not sure which version) the SQL it generates is almost ;-) as good as 
the one I write by hand.

 If you want, just create a rudimentary object creation layer over the  
 normal SQL constructs.  It would be more appropriate for this to be  
 something entirely separate from the existing orm module.
   
I was thinking along these lines. I am going to study your 
query.instances to get some hints.
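
For what it's worth, a very rough sketch of what such a layer could look like
(PlainInvoice and invoice_table are made-up names, and the rows are assumed to
expose keys(); no SA instrumentation is involved):

class PlainInvoice(object):
    # a dumb value holder with no instrumented attributes
    def __init__(self, **kw):
        self.__dict__.update(kw)

rows = select([invoice_table]).execute()
invoices = [PlainInvoice(**dict(zip(row.keys(), row))) for row in rows]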

Thanks for all the help.

Huy





[sqlalchemy] plain python objects from ORM

2007-09-24 Thread Huy Do

Hi,

Is it possible to get SA ORM to return plain python objects (with 
eagerloaded relations and all) but without any attribute instrumentation 
(or anything else magically added by SA) ?

Thanks,

Huy




[sqlalchemy] Re: plain python objects from ORM

2007-09-24 Thread Huy Do

Michael Bayer wrote:
 On Sep 24, 2007, at 11:48 AM, Huy Do wrote:

   
 Hi,

 Is it possible to get SA ORM to return plain python objects (with
 eagerloaded relations and all) but without any attribute  
 instrumentation
 (or anything else magically added by SA).

 


 not really.   unless you remove the instrumentation from the classes  
 themselves afterwards (a one way operation).
   
Any pointers on how to do this ? To the whole object hierarchy.
 of course you could argue that there's no technical reason the ORM  
 shouldn't be able to do this.  there could be some extremely  
 specialized rewrite of attributes.py that could do it perhaps, but  
 maintaining test coverage for that would be a whole project in itself.
Would a mapper extension allow me to do this ?

I think this would be a great feature to have because there are many use 
cases in my application (mainly displaying/processing tables) where I 
don't want/need the overhead of the instrumentation (and it really does 
add quite a bit), but would still love the excellent mapping abilities 
(i.e. have fully hydrated domain objects rather than ResultProxy).


Thanks

Huy




[sqlalchemy] Re: how to leave cascade up to the database

2007-07-01 Thread Huy Do

Michael Bayer wrote:

 On Jun 30, 6:54 am, Huy Do [EMAIL PROTECTED] wrote:
   
  This is my problem. I have cascade delete on one of my tables, and when
  I try to delete a record, I get the "null not allowed in foreign key"
  error. This happens because SA is trying to update my child records when
  I delete the main record. I was hoping to use
  relation(... cascade="expunge"),
 

 use relation(cascade="all, delete-orphan", passive_deletes=True).  it
 will then issue a DELETE for all child records that are present in the
 session, but leave those which aren't loaded alone, for your database
 CASCADE to take care of.
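
A minimal sketch of such a mapping, with made-up Parent/Child classes and
tables, assuming the foreign key in the database is declared ON DELETE CASCADE:

mapper(Child, child_table)
mapper(Parent, parent_table, properties={
    'children': relation(Child,
                         cascade="all, delete-orphan",
                         passive_deletes=True),
})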
   
This sounds like what I need. Thanks Michael.

 but it doesn't seem to do what I
   
 thought. I just want SA not to do an update on the child records (or
 anything else on the child records), just delete the main record.
 

 the only other option here is to set viewonly=True on the relation().
 but then it wouldn't process save operations either.

   
Thanks for the tip on viewonly. I like the sound of it.

huy




[sqlalchemy] Re: how to leave cascade up to the database

2007-06-30 Thread Huy Do

Michael Bayer wrote:
 On Jun 29, 2007, at 10:38 AM, Huy Do wrote:

   
 Hi,

  How do I configure my mapper/table to leave the cascade up to my
  database ? i.e. I don't want SA to auto-update or delete my child  
  objects
  when I delete or update my parent object.

 


 you are probably looking for passive_deletes=True on your  
 relations.   you can also prevent SQLAlchemy from marking child items  
 as deleted by just not using delete cascade, but you cannot prevent  
 it from updating the foreign key of child items with NULL... also if a  
 delete cascade takes place, the session's view of objects will be  
 incorrect if one of the deleted child items is still present in the  
 session... so best to leave your cascade matching that of the database.
   
Yeah...but I really don't want SA doing the delete or update for me, 
when my database can do it. I guess that's the price for using the ORM.
 the passive_deletes flag will just prevent it from defensively  
 loading in child items upon delete, which is probably what you're  
 looking for.
   
This is my problem. I have cascade delete on one of my tables, and when 
I try to delete a record, I get the "null not allowed in foreign key" 
error. This happens because SA is trying to update my child records when 
I delete the main record. I was hoping to use 
relation(... cascade="expunge"), but it doesn't seem to do what I 
thought. I just want SA not to do an update on the child records (or 
anything else on the child records), just delete the main record.

Thanks

Huy







[sqlalchemy] how to leave cascade up to the database

2007-06-29 Thread Huy Do

Hi,

How do I configure my mapper/table to leave the cascade up to my 
database ? i.e. I don't want SA to auto-update or delete my child objects 
when I delete or update my parent object.

Thanks

Huy




[sqlalchemy] Re: Group by? Still a problem

2007-06-27 Thread Huy Do

I think you should listen to that error message.

"user.id" must appear in the group by or be used in an aggregate 
function, i.e. count, sum, avg, etc.

The other problem is that you are using the ORM interface. You should be 
using the SQL select.

I'm not sure what you are trying to achieve, but your original query 
does not make sense from any SQL perspective.

This is an example from the docs on how to use group by from an SQL select.

s = select([customers,
            func.count(orders.c.customer_id).label('order_count'),
            func.max(orders.c.price).label('highest_order')],
           customers.c.customer_id == orders.c.customer_id,
           group_by=[c for c in customers.c]
           ).alias('somealias')

Huy


 Then it must be a bug, I still get an error

 _execute  build\bdist.win32\egg\sqlalchemy\engine\base.py 602
 SQLError: (ProgrammingError) column "user.id" must appear in the
 GROUP BY clause or be used in an aggregate function


 On Jun 27, 9:09 pm, Andreas Jung [EMAIL PROTECTED] wrote:
   
 --On 27. Juni 2007 12:00:13 -0700 voltron [EMAIL PROTECTED] wrote:



 
  I'm guessing a bit because I still could not find the group_by entry
  in the docs.

  This works:
  user.select(links.c.id > 3, order_by=[user.c.id]).execute()

  but this does not:
  user.select(links.c.id > 3, group_by=[user.c.dept]).execute()

  What should be the right syntax?
   
 Works for me:

 for row in session.query(am).select(am.c.hidx == 'HI1561203',
                                     group_by=[am.c.hidx]):
     print row.hidx, row.versionsnr

 -aj

 


 
   





[sqlalchemy] Re: group_by and column labels

2007-06-08 Thread Huy Do

Michael Bayer wrote:
 On Jun 7, 2007, at 7:17 AM, Huy Do wrote:

   
 Michael Bayer wrote:
 
 put strings into group_by:

 group_by = ["client_code", "client_name", ...]

   
 Hi Michael,

 Sorry I'm not sure what you are suggesting. I don't really want to
 retype my column names again in the group_by clause.

 

 i was suggesting a workaround.
   
Just thought I might have missed something. Thanks.
 My intention was to be able to reuse the actual column list in the  
 group
 by clause.
 

 would be easier if you can wait for 0.4 on that.

   
Not a problem.

Just thought it was a nice thing to have, because I have always hated 
retyping all those column names when I need group by in my SQL.

Thanks

Huy






[sqlalchemy] Re: group_by and column labels

2007-06-08 Thread Huy Do

Hi Rick,

If this works, that is fine for me. I just don't want to retype my 
columns to avoid errors.

I'll test and then send an update.

Thanks

Huy
 I think the .name property of a labeled column holds the label, so how 
 about

     db.job_table, db.client_table, db.service_type_table],
     group_by = [col.name for col in group_by],
 


 On 6/7/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jun 7, 2007, at 7:17 AM, Huy Do wrote:

 
  Michael Bayer wrote:
  put strings into group_by:
 
  group_by = ["client_code", "client_name", ...]
 
 
  Hi Michael,
 
  Sorry I'm not sure what you are suggesting. I don't really want to
  retype my column names again in the group_by clause.
 

 i was suggesting a workaround.

  My intention was to be able to reuse the actual column list in the
  group
  by clause.

 would be easier if you can wait for 0.4 on that.



 





[sqlalchemy] Re: group_by and column labels

2007-06-08 Thread Huy Do


 I think the .name property of a labeled column holds the label, so how 
 about

     db.job_table, db.client_table, db.service_type_table],
     group_by = [col.name for col in group_by],

 
This almost worked, but not quite. Using the .name attribute does give me 
the label, but for a plain column it only gives the column name, which is 
a problem when you have tables with the same column in the from clause 
("ambiguous column" error from the db server). The select column statement 
has table.column_name whilst the group by (using the method above) only 
has column_name or label_name.

If I use the use_labels option for the select statement, it works, because 
then the label names are unique across all columns.
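
For the archives, a rough sketch of one way to wire this up (table and column
names are illustrative, and this is not necessarily the exact code used here):
keep the unlabelled expressions in one list and reuse that list for GROUP BY,
so nothing is emitted there as "expr AS label":

group_cols = [client.c.code, client.c.name, service.c.date]
labelled   = [client.c.code.label('client_code'),
              client.c.name.label('client_name'),
              service.c.date.label('service_date')]
s = select(labelled + [func.sum(service.c.service_amt).label('service_amt')],
           group_by=group_cols,
           use_labels=True)   # keeps generated names unique across tables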

thanks

Huy
 On 6/7/07, Michael Bayer [EMAIL PROTECTED] wrote:



 On Jun 7, 2007, at 7:17 AM, Huy Do wrote:

 
  Michael Bayer wrote:
  put strings into group_by:
 
  group_by = ["client_code", "client_name", ...]
 
 
  Hi Michael,
 
  Sorry I'm not sure what you are suggesting. I don't really want to
  retype my column names again in the group_by clause.
 

 i was suggesting a workaround.

  My intention was to be able to reuse the actual column list in the
  group
  by clause.

 would be easier if you can wait for 0.4 on that.



 
 








[sqlalchemy] Re: group_by and column labels

2007-06-07 Thread Huy Do

Michael Bayer wrote:
 put strings into group_by:

 group_by = ["client_code", "client_name", ...]
   

Hi Michael,

Sorry I'm not sure what you are suggesting. I don't really want to 
retype my column names again in the group_by clause.

My intention was to be able to reuse the actual column list in the group 
by clause.

Thanks,


Huy



 On Jun 6, 2007, at 7:07 PM, Huy Do wrote:

   
 Hi,

 I am trying to reuse my column list in my group by clause but some  
 of my
 columns use label() which is causing a sql syntax error because of the
 column as label in the group by clause. Is it possible to get the
 group_by to only use the label side of a column .

 eg. (This doesn't work because of the labels
 group_by = [
 service.date,
 service.client_id,
 service_type.type_code,
 service.service_code,
 client.code.label('client_code'),
 client.name.label('client_name'),
 func.coalesce(func.nullif(client_rate.description,  
 ''),
 service_type.description).label('service_description'),
 service.rate_amt
 ]
 columns = group_by[:].extend([
 func.sum(service.quantity).label('quantity'),
 func.sum(service.service_amt).label('service_amt')
 ])

 s = select(columns, criteria,
 from_obj=[outerjoin(db.service_table,  
 db.client_rate_table),
 db.job_table, db.client_table, db.service_type_table],
 group_by=group_by,
 order_by=[service.date, client.name, service.service_code]
 )

 Many thanks,

 Huy

 








[sqlalchemy] group_by and column labels

2007-06-06 Thread Huy Do

Hi,

I am trying to reuse my column list in my group by clause, but some of my 
columns use label(), which is causing an SQL syntax error because of the 
"column AS label" in the group by clause. Is it possible to get the 
group_by to only use the label side of a column ?

e.g. (this doesn't work because of the labels):
group_by = [
    service.date,
    service.client_id,
    service_type.type_code,
    service.service_code,
    client.code.label('client_code'),
    client.name.label('client_name'),
    func.coalesce(func.nullif(client_rate.description, ''),
                  service_type.description).label('service_description'),
    service.rate_amt
]
columns = group_by[:]
columns.extend([
    func.sum(service.quantity).label('quantity'),
    func.sum(service.service_amt).label('service_amt')
])

s = select(columns, criteria,
           from_obj=[outerjoin(db.service_table, db.client_rate_table),
                     db.job_table, db.client_table, db.service_type_table],
           group_by=group_by,
           order_by=[service.date, client.name, service.service_code]
           )

Many thanks,

Huy




[sqlalchemy] Re: Generative style on SQL-API layer

2007-06-05 Thread Huy Do

Michael Bayer wrote:
 I want to wake this thread up again.  can we get some arguments pro/
 con to converting select() to work in a generative style ?

 generative means either just the first, or both of these two things:

 - methods like append_whereclause() return a select object with
 which to call further generative methods.  this is a good thing
 because you can say select.append_whereclause().order_by().group_by()
 etc.
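
A hypothetical illustration of the two styles under discussion (the users table
is made up, and the generative where()/order_by() spellings follow the proposal
rather than any released API):

# generative: each call hands back a select to keep chaining on
s = select([users]).where(users.c.id > 5).order_by(users.c.name)

# in-place: the same select object is mutated
s = select([users], order_by=[users.c.name])
s.append_whereclause(users.c.id > 5)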
   
I'm for returning self rather than a copy, because needing a copy in my 
experience thus far with SA has been the exception.
What's wrong with s1 = select.copy() to explicitly get a copy ?

Are you going to do the select.where() change :-)
 - the select object you get back is a *copy* of the object which you
 called.
   advantages include:
 * is more Pythonic (not sure why this is, Mike Orr said so,
 would like more exposition)
 * you might want to reuse the original object differently (is that
 such a common pattern ?  seems weird to me..more exposition here too)
 * would be consistent with orm's Query() object which has made its
 choice on the copy side of things

   disadvantages:
 * inconsistent ?  the select() function also takes a whole
 assortment of arguments which can construct the entire instance at
 once.  the generative API already adds more than one way to do it.
 * performance.  an application that builds queries on the fly and
 executes them will be generating many copies of a select(), most of
 which are thrown away.  if the ORM uses these approaches as well,
 latency is added throughout.
   
 for performance considerations, select() can implement both a
 generative and a non-generative API (in fact it will always have a non-
 generative API for adding modifiers anyway, just that its marked with
 underscores as semi-private).  this can be specified either via
 constructor argument or by a different set of methods that are non-
 generative.  however, either of these ideas complicate the select()
 object.  we might want to go with just a flag generative=False that
 quietly allows an application to optimize itself.  or we might want to
 say, build the select object all at once if the overhead of
 generativeness is to be avoided.








[sqlalchemy] Re: In Turbogears, using the same transaction for mapped objects and direct operations

2007-06-05 Thread Huy Do

Marco Mariani wrote:
 Sanjay ha scritto:

   
 Need help on how to do it. Being a turbogears application where db
 entities like metadata and session are imported rather than created by
 me, I am confused and could not successfully implement the pattern
 provided in the docs.
   
 

 I'm sure there are cleaner ways, but this worked for me

 conn = session.context.get_current().connection(SomeMappedClass)
 conn.execute(...)
   
Yeah, I wish this were more transparent for threadlocal engines.

Huy




[sqlalchemy] Re: Mapping existing structure of a database

2007-06-04 Thread Huy Do

Michael Bayer wrote:
 On Jun 4, 2007, at 6:47 PM, nathan harmston wrote:

   
 What kind of overhead is associated with using the autoload flag?

 What kind of overhead would be associated with this over a network?  
 (with a remote database). Is there a way to dump the structure of  
 a database to a file and import this as a kind of module?

 

 no, there's no built-in way to dump to a file and re-import. you'd have  
 to write that yourself.  please contribute it if you do so, since we  
 get this question once a month.
I would recommend using the AutoCode script from 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode rather than 
autoload directly. It just gives that little bit more control by 
tweaking the autocode.py script, but also the convenience of autoload.

However, if you still want to use autoload, this is what I do with 
autoloaded metadata. Remember to delete the dump file when you change 
your database.

==

import os
import pickle

from sqlalchemy import pool, create_engine, BoundMetaData, MetaData, Table
import psycopg2 as psycopg

# http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode
psycopg = pool.manage(psycopg, pool_size=10)
engine = create_engine('postgres://username:[EMAIL PROTECTED]/database',
                       strategy='threadlocal')
meta = BoundMetaData(engine)

if os.path.exists('dump.metadata'):
    meta_load = pickle.load(file('dump.metadata', 'rb'))
    for table in meta_load.tables.values():
        table.tometadata(meta)

branch_table = Table('branch', meta, autoload=True)
# define the rest of your tables.

if not os.path.exists('dump.metadata'):
    meta_save = MetaData()
    for table in meta.tables.values():
        table.tometadata(meta_save)
    pickle.dump(meta_save, file('dump.metadata', 'wb'))






[sqlalchemy] Re: sqlalchemy and running custom sql.

2007-06-04 Thread Huy Do

wongobongo wrote:
 You mean something like this?

 K

 ---

 # Running a SQL Statement using SQLAlchemy

 from sqlalchemy import *

 dsn = 'mysql://root:[EMAIL PROTECTED]:3306/tester'

 # Create a database engine
 db = create_engine(dsn)

 # Do some SQL
 text("INSERT INTO insertable SELECT NULL, txt FROM test_table;",
      engine=db).execute()
   
Or directly off the engine itself, including bind parameters.

curs = db.text("select * from my_table where col1 = :col1_value and col2 "
               "= :col2_value order by col1 limit 10 "
               "offset 20").execute(col1_value="hello", col2_value="world")
rs = curs.fetchall()

Even better is to use the sqlalchemy sql constructs. It may seem more 
long winded for short queries but complex queries are much easier to 
express. Especially once the generative selects are introduced :-)

from sqlalchemy import select, and_, Table
my_table = Table('my_table', metadata, )

col = my_table.c
criteria = and_(col.col1 == "hello", col.col2 == "world")
s = select([my_table], criteria, order_by=[col.col1], limit=10, offset=20)
curs = s.execute()
rs = curs.fetchall()

This will give you a list of rows from the ResultProxy (where each element 
can have its columns accessed as element[0], element['col1'], or element.col1).

or you can create a query object and get it to return instances of your 
model object (once you map them of course).

e.g. using the select above:

query = session.query(MyClass)
rs = query.instances(s.execute())

Huy
 On May 27, 1:18 pm, SamDonaldson [EMAIL PROTECTED] wrote:
   
 I'd like to be able to run a custom query without having to use any of
 the ORM stuff.  I'd like to pass in a sql query string to some API
 that returns back to me instances (it'd be great), or tuples.  How do
 I do this?

 Thanks.
 








[sqlalchemy] Re: query.instances and select statement

2007-05-30 Thread Huy Do

Michael Bayer wrote:
 On May 29, 2007, at 10:28 PM, Huy Do wrote:

   
 s = select()
 query = db.query(model.Invoice).options(contains_eager('client'))
 query = query.from_select(s)   #
 query = query.limit(10).offset(10)
 rs = query.list()

 What I'm doing, is pretty much setting up a query in one place of my
 program, then passing this query around, manipulating certain  
 aspects of
 that query, then finally executing the query in some other place in my
 program.
 

 sure you can use query.select_from(myselect).  but it's not going to  
 modify your select() statement, it's going to apply all those other  
 constraints to a "select * from (your select)" type of clause.  if  
 you want the SQL to be more finely tuned than that you'd have to build  
 up your select() separately first.
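
A rough sketch of that route, under the assumption that select_from() accepts a
select() construct as described (names reuse the original post, where the
hand-built statement itself is elided):

s = select()   # the hand-built select from the original post
query = db.query(model.Invoice).options(contains_eager('client'))
query = query.select_from(s)              # wraps s as "select * from (s)"
rs = query.limit(10).offset(10).list()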
   
Thanks Michael. I should have read the docs more carefully before 
asking. That's exactly what I want. As you describe, this should be fine 
for my use case, because I am only changing the limit, offset and order 
by clauses. The function which adds these constraints is solely dealing 
with paging so this is perfect.

Huy
 alternatively you can forego making a select() on your own and be  
 totally generative with the query itself, using filter(), order_by(),  
 limit(), etc.
   









[sqlalchemy] query.instances and select statement

2007-05-29 Thread Huy Do

Hi,

I am currently doing the following query:

s = select()
query = db.query(model.Invoice).options(contains_eager('client'))
rs = query.instances(s.execute())

I was wondering if it was possible, instead of query.instances(), to just 
attach the select to the query object, then continue to manipulate the 
query object further, before executing the query later.

eg.

s = select()
query = db.query(model.Invoice).options(contains_eager('client'))
query = query.from_select(s)   #
query = query.limit(10).offset(10)
rs = query.list()

What I'm doing is pretty much setting up a query in one place of my 
program, then passing this query around, manipulating certain aspects of 
that query, then finally executing the query in some other place in my 
program.

Thanks

Huy







[sqlalchemy] transaction commit problem

2007-04-19 Thread Huy Do

Hi,

I am having problems getting transactions working.

I am calling conn.begin() but before I call trans.commit(), something 
else commits for me.
It is committing right after I do the job_insert.execute(). Can someone 
shed some light ?

Many Thanks

Huy

I have the following code

trans = db.engine.contextual_connect().begin()
try:
    for client_id in slist.keys():
        job_insert.execute( )
        job_id = engine.func.currval('job_id_seq').scalar()
        for r in rows:
            service_insert.execute( )
    trans.commit()
    self.error('Storage Period Charged Successfully. You can now '
               'proceed with Invoicing')
    return self.redirect('/invoice')
except:
    self.error(sys.exc_info()[1])
    trans.rollback()
    return self.render('storage_charge.mak')

and the following SQL is generated:


sqlalchemy.engine.base.Engine.0x..f0 BEGIN
sqlalchemy.engine.base.Engine.0x..f0 select nextval('job_id_seq')
sqlalchemy.engine.base.Engine.0x..f0 None
sqlalchemy.engine.base.Engine.0x..f0 INSERT INTO job (id, job_type, client_reference, client_id, is_complete, complete_date, warehouse_code, updated_by) VALUES (%(id)s, %(job_type)s, %(client_reference)s, %(client_id)s, %(is_complete)s, %(complete_date)s, %(warehouse_code)s, %(updated_by)s)
sqlalchemy.engine.base.Engine.0x..f0 {'updated_by': 'admin', 'job_type': 'STO', 'id': 31L, 'warehouse_code': 'syd1', 'client_id': 19, 'complete_date': datetime.date(2007, 4, 20), 'is_complete': True, 'client_reference': '02-04-2007- 08-04-2007'}
sqlalchemy.engine.base.Engine.0x..f0 COMMIT
sqlalchemy.engine.base.Engine.0x..f0 SELECT currval(%(currval)s)
sqlalchemy.engine.base.Engine.0x..f0 {'currval': 'job_id_seq'}
sqlalchemy.engine.base.Engine.0x..f0 select nextval('service_id_seq')
sqlalchemy.engine.base.Engine.0x..f0 None
sqlalchemy.engine.base.Engine.0x..f0 INSERT INTO service (id, service_code, service_amt, quantity, updated_by, client_id, description, job_id, date, rate_amt) VALUES (%(id)s, %(service_code)s, %(service_amt)s, %(quantity)s, %(updated_by)s, %(client_id)s, %(description)s, %(job_id)s, %(date)s, %(rate_amt)s)
2007-04-20 01:46:51,546 INFO sqlalchemy.engine.base.Engine.0x..f0 {'description': None, 'service_code': 'O/S', 'service_amt': Decimal("187.50"), 'updated_by': 'admin', 'rate_amt': Decimal("12.50"), 'client_id': 19, 'date': datetime.date(2007, 4, 8), 'quantity': 15L, 'id': 21723L, 'job_id': 31L}
sqlalchemy.engine.base.Engine.0x..f0 ROLLBACK





[sqlalchemy] Re: transaction commit problem

2007-04-19 Thread Huy Do

Michael Bayer wrote:
 On Apr 19, 2007, at 11:52 AM, Huy Do wrote:

   
 I have the following code

  trans = db.engine.contextual_connect().begin()
 try:
 for client_id in slist.keys():
 job_insert.execute( )
 job_id = engine.func.currval('job_id_seq').scalar()
 for r in rows:
 service_insert.execute( )
 trans.commit()
 self.error('Storage Period Charged Successfully. You  
 can now
 proceed with Invoicing')
 return self.redirect('/invoice')
 except:
 self.error(sys.exc_info()[1])
 trans.rollback()
 return self.render('storage_charge.mak')
 

 this code assumes the usage of the threadlocal engine strategy,  
 since you are not explicitly using the connection from  
 contextual_connect() along with your execute() statements.
   
Sorry about that Michael. I changed my strategy to 'threadlocal' and 
everything is working fine now.

Huy




[sqlalchemy] Re: Opinion on correct use of Sqlalchemy

2007-04-09 Thread Huy Do

Michael Bayer wrote:
 On Apr 7, 2007, at 8:11 AM, HD Mail wrote:

   
 Hi,

 I was after some opinions on the following use of SA.

 1. Are there any problems using SA in this way ?
 2. Are there better ways of achieving this ?

 
 ...
   
 query = db.query(model.Asset).options(contains_eager('location'),
 contains_eager('type'))
 r = query.instances(s.execute())
 return r, count

 

 that you've constructed exactly the query you want and used it via  
 instances() is exactly how I want people to use SQLAlchemy when they  
 know what non-trivial SQL they want to issue.  query() only creates a  
 very specific kind of SQL and it could never support figuring out how  
 to construct the SQL that you already know how to do.

 Particularly for your query, you are doing an eager load between  
 asset and location yet a lot of your query criterion depends upon  
 location, so in that sense yes you have to use custom SQL, since  
 query() will never involve eager loaded joins in the query criterion.

 however, there's a reason query follows this behavior, which is that  
 if you are loading Asset objects with an eager loaded list of  
 Location objects, but you have placed limiting criterion on the list  
 of Locations specifically, you will now have Asset objects whose  
 loaded list of Locations may not be complete compared to what's in the  
 database (and they will remain that way until those instances are  
 expire()d or similar).   So you should be aware that that is the  
 effect you're getting in your code.
   

Hi Michael,

Everything you say makes perfect sense for 1:N relationships, but in my 
case, and in a lot of other cases where I need the order by or the 
criteria/filter on the joined table, it's a 1:1. In these cases I'm not 
sure why SA can't generate the same type of SQL statement that I am 
using above. It would make perfect sense for it to.

I understand the eagerload problem with a list of child objects but with 
1:1 relations I think the query interface should be querying in the same 
way that my manual SQL is.

 also the separate count() step may be problematic, since it won't  
 return just the number of Asset objects loaded, but the number  
 of rows total, which is Asset * Location * AssetType object rows.  if  
 you want just the number of Assets returned you'd just return len(r).
   
You're right, but because the joins are 1:1, len(r) and count() will 
give me the same result.

Thanks

Huy


