[sqlalchemy] Re: Trying to detect which class methods were added by the mapper.

2007-06-13 Thread sdobrev

are you using assign_mapper?
use plain mappers to have minimal impact; you will still get plenty of
new descriptors + __init__ replaced

On Wednesday 13 June 2007 07:35:16 Ian Charnas wrote:
 Inspired by the SQLAlchemy docs, I'm writing a documentation
 generator in python using a combination of epydoc (for
 parsing/introspection), genshi (templates), docutils (for
 restructured text), and pygments (syntax highlighting).. and I just
 noticed that the documentation for classes mapped by SQLAlchemy
 always includes the methods like "select", "count", "get_by", etc
 that were added by the mapper.  This is very undesirable, and I'm
 looking for a way to detect which methods were added to the class
 by the SQLAlchemy mapper, and which methods were there to begin
 with.

 Does anyone have any ideas?  I was hoping there would be something
 like Animal.select.mapper or Animal.select._sqlalchemy that I
 could use to differentiate which methods were added by the mapper
 and which were there originally, but I can't seem to find any such
 thing.

 many thanks in advance,
 -Ian Charnas from the Pagoda CMS team, www.pagodacms.org

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Deleting single association in many-to-many relationship

2007-06-13 Thread David Bolen

This seems like it would be a very common scenario, but it's got me
stumped and feeling a bit stupid at the moment - I would appreciate
anyone helping to point me in the right direction.

I'm using the ORM for a many-to-many relationship, for which over time
I need to be able to prune individual associations from items that may
(obviously) have several currently active.  But I can't seem to figure
out how to do it through ORM/session/object actions without fully
purging the object in question, also removing it from associations I
don't want to touch.  It seems far too fragile for what I would think
would be very common needs of a many-to-many relationship.

For example, here's a stripped down many-to-many setup for jobs which
contain files.  Files may be shared amongst jobs, thus the
many-to-many relationship.

  - - - - - - - - - - - - - - - - - - - - - - - - -

from sqlalchemy import *
from sqlalchemy.orm import *

meta = BoundMetaData('sqlite:///')

jobs = Table('jobs', meta,
 Column('id', Integer, primary_key=True),
 Column('name', String))

files = Table('files', meta,
  Column('id', Integer, primary_key=True),
  Column('name', String))

jobs_files = Table('jobs_files', meta,
   Column('job_id', Integer, ForeignKey('jobs.id')),
   Column('file_id', Integer, ForeignKey('files.id')))


class Job(object):
pass
class File(object):
pass

mapper(File, files)
mapper(Job, jobs,
   properties = { 'files': relation(File, lazy=False,
backref=backref('jobs', lazy=False),
secondary=jobs_files) })


def setup():
meta.create_all()
s = create_session()

f1 = File()
f1.name = 'File 1'
f2 = File()
f2.name = 'File 2'
fc = File()
fc.name = 'File Common'

j1 = Job()
j1.name = 'Job 1'
j2 = Job()
j2.name = 'Job 2'

s.save(j1)
s.save(j2)

j1.files.extend([f1, fc])
j2.files.extend([f2, fc])

s.flush()

  - - - - - - - - - - - - - - - - - - - - - - - - -

I've used eager loading for the relationships since that mimics my
actual code, but I believe the issues hold even with lazy loading.

(No loading, lazy=None, is a whole other thing, as that seems very
fragile with respect to relations: if you don't have a particular
relation loaded when you modify an instance, SA doesn't know to
follow it.  Not that I really blame SA in that case, I suppose.)
  
Anyway, assuming that setup() has been called, these are the sort of
activities that have me stumped:

* I want to remove the association between File Common and Job 1
  but without affecting Job 2.

  If I session.delete() the fc instance directly, SA purges the file
  completely, including links to both jobs.  I can understand SA
  thinking I want the file completely gone in this scenario.

  But if I remove the fc instance from the relation list (files) from
  either job, SA also fully purges fc, including the link to the other
  job.  This includes the case of deleting one of the jobs if I have
  the cascade on the files relation including delete.  This would
  seem to prevent me from using a delete cascade, since then deleting
  any job would remove files it contains from all other jobs also
  containing those files which sort of defeats the purpose (at least
  for me) of the many to many relationship.

  The only case where I'd want the fc instance in the database to be
  fully purged would be if I was deleting the last association with
  any jobs, something I thought delete-orphan would handle.

* Providing I can resolve the prior point, I was hoping to have a way
  that would let me remove a job completely, including any associated
  files, but have the file records only pruned if they did not belong
  to any other job.

  Originally I had hoped that including delete and delete-orphan in
  the cascade rules would accomplish this.  But I found that the delete
  cascade triggered behavior as above - fully removing all files even
  if they still belong to other jobs.  If I only leave the
  delete-orphan cascade, deleting the parent job has no impact on the
  files, leaving stranded files and the old associations around.

Most of the many-to-many examples I found tend to use "all,
delete-orphan" as the cascade rules, but at least in my experience
that makes it dangerous to delete, in my scenario, job instances, as it
purges all contained files even if they are still referenced in other
jobs.  And you'd never know later that they were ever part of the
other jobs (e.g., they are cleanly removed from all jobs during the
flush).  While I can understand why, assuming a simple iteration over
child container objects during the delete cascade, in a practical
sense that doesn't seem very useful for many-to-many relationships
(as opposed to one-to-one/one-to-many).

Am I just missing something blindingly obvious, or should I be trying
to manage the many-to-many relationships, at least deleting, in some
other way?

[sqlalchemy] Re: Deleting single association in many-to-many relationship

2007-06-13 Thread sdobrev

what version are you using?

i tried your thing, that is
$ python -i zz.py
>>> s = create_session()
>>> j = s.query(Job)[0]  # get first
>>> del j.files[0]
>>> s.flush()
and seems to work

before:
>>> for a in s.query(Job): print a.name, a.files
...
Job 1 [<__main__.File object at 0xb78ec72c>, <__main__.File object at
0xb78e666c>]
Job 2 [<__main__.File object at 0xb78ec72c>, <__main__.File object at
0xb78ec76c>]

after:
>>> for a in s.query(Job): print a.name, a.files
...
Job 1 [<__main__.File object at 0xb78e666c>]
Job 2 [<__main__.File object at 0xb78ec72c>, <__main__.File object at
0xb78ec76c>]


-
the only line removed was the "from .orm import *" - you shouldn't
use any of that internal stuff unless you know what you're doing.

ciao
svil


On Wednesday 13 June 2007 13:13:58 David Bolen wrote:
 from sqlalchemy import *
 from sqlalchemy.orm import *

 meta = BoundMetaData('sqlite:///')

 jobs = Table('jobs', meta,
  Column('id', Integer, primary_key=True),
  Column('name', String))

 files = Table('files', meta,
   Column('id', Integer, primary_key=True),
   Column('name', String))

 jobs_files = Table('jobs_files', meta,
Column('job_id', Integer,
 ForeignKey('jobs.id')), Column('file_id', Integer,
 ForeignKey('files.id')))


 class Job(object):
 pass
 class File(object):
 pass

 mapper(File, files)
 mapper(Job, jobs,
properties = { 'files': relation(File, lazy=False,
 backref=backref('jobs',
 lazy=False), secondary=jobs_files) }) 

 def setup():
 meta.create_all()
 s = create_session()

 f1 = File()
 f1.name = 'File 1'
 f2 = File()
 f2.name = 'File 2'
 fc = File()
 fc.name = 'File Common'

 j1 = Job()
 j1.name = 'Job 1'
 j2 = Job()
 j2.name = 'Job 2'

 s.save(j1)
 s.save(j2)

 j1.files.extend([f1, fc])
 j2.files.extend([f2, fc])

 s.flush()






[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread Roger Demetrescu

Hi Michael,

Some of my coworkers had the same need as Gaetan... And while I
understand your solution, I wonder if SA could have it natively
(detecting the presence of a dictionary)...

Something like:


query.get(dict(columnB='foo', columnA='bar'))


Lazy programmers are the best ones...  :)
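Outside of SA, the reordering itself is a one-liner. A hypothetical sketch (the `ordered_pk_values` name and signature are mine, not part of SA): given the primary-key column names in table order, it turns unordered keyword values into the positional list that Query.get() expects.

```python
def ordered_pk_values(pk_column_keys, **kw):
    """Return PK values ordered to match the table's primary-key order.

    pk_column_keys: column key names in table.primary_key order,
    e.g. [c.key for c in table.primary_key].
    """
    return [kw[key] for key in pk_column_keys]

# Hypothetical usage with Query.get, which takes PK values positionally:
#   session.query(Customer).get(
#       ordered_pk_values(['columnA', 'columnB'], columnB='foo', columnA='bar'))
```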



On 6/12/07, Michael Bayer [EMAIL PROTECTED] wrote:

 look through the keys in [c.key for c in table.primary_key], match
 those up

 i.e.

 query.get(*[mydict[c.key] for c in table.primary_key])


 On Jun 12, 1:07 pm, Gaetan de Menten [EMAIL PROTECTED] wrote:
  Hi,
 
  Anybody knows how I could emulate the behavior of Query.get (ie get
  the result from the session if possible instead of always fetching
  from the db) if I have the values for the different columns of the
  primary as keyword arguments (ie not in the order of the columns of
  the initial table)? I need a kind of a mix between get_by and get. Any
  idea?
 
  --
  Gaëtan de Mentenhttp://openhex.org


 





[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread Marco Mariani

Roger Demetrescu ha scritto:

 query.get(dict(columnB='foo', columnA='bar')

 Lazy programmers are the best ones...  :)
   

That's the reason lazy programmers share a superclass for all their 
domain objects... hint, hint :-)





[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread sdobrev


 Some of my coworkers had the same needs of Gaetan... And while I
 understand your solution, I figure out if SA could have it natively
 (detecting the presence of a dictionary)...

 Somethink like:


 query.get(dict(columnB='foo', columnA='bar')
 Lazy programmers are the best ones...  :)

why not query.get(**dict(columnB='foo', columnA='bar')) ?
it should work as is..



 On 6/12/07, Michael Bayer [EMAIL PROTECTED] wrote:
  look through the keys in [c.key for c in table.primary_key],
  match those up
 
  i.e.
 
  query.get(*[mydict[c.key] for c in table.primary_key])
 
  On Jun 12, 1:07 pm, Gaetan de Menten [EMAIL PROTECTED] 
wrote:
   Hi,
  
   Anybody knows how I could emulate the behavior of Query.get (ie
   get the result from the session if possible instead of always
   fetching from the db) if I have the values for the different
   columns of the primary as keyword arguments (ie not in the
   order of the columns of the initial table)? I need a kind of a
   mix between get_by and get. Any idea?
  
   --




[sqlalchemy] flushing and saving data using default_metadata

2007-06-13 Thread voltron

I have decided to use the global_metadata to setup the tables in my app,
and in one of the tables, users, I setup an admin account:

user = User('john doe', '[EMAIL PROTECTED]')

how do I flush the above object? user.flush() does not work in
this context because the User object does not have the attribute
flush (traceback)

thanks





[sqlalchemy] Re: PYTZ and SA

2007-06-13 Thread asm

I'm looking into this but have little to offer at this point.

My target stack involves Turbogears, SA and PostgreSQL (with SQLite
being used during development).

I'm not sure how many of the DBMS engines support datetimes that are
better than naive wrt timezones, which may limit the way SA has to work.

Anyways - do speak up - you have my full attention and probably
others.
   A

On Jun 12, 12:12 am, Michael Carter [EMAIL PROTECTED] wrote:
 Hello,

 I was having some issues using pytz (python timezone definitions) with
 sqlalchemy. Before I post the specifics of my problem, I'm curious if there
 is any documentation on this kind of thing or if anyone else here had tried
 it before.

 Thanks,

 Michael Carter





[sqlalchemy] postgresql rules and update

2007-06-13 Thread Antonio

hi all,

I've partitioned a table in PostgreSQL, with rules on UPDATE, DELETE and INSERT.
When I INSERT a row it's OK, but when I UPDATE a row, the program raises:

ConcurrentModificationError: Updated rowcount 0 does not match number of
objects updated 1

I know that when I use a RULE the command status returns 0:

ma=# SELECT * from smoduls.datirim_prev where id=6;
 turno | id | rim_id | prev_id
-------+----+--------+---------
 d     |  6 |      2 |      12
(1 row)

ma=# UPDATE smoduls.datirim_prev set rim_id=3 where id=6;
UPDATE 0
^^
ma=# SELECT * from smoduls.datirim_prev where id=6;
 turno | id | rim_id | prev_id
-------+----+--------+---------
 d     |  6 |      3 |      12
(1 row)

but is it sane to catch the exception with pass?  what can I do to resolve this?

thanks in advance ... I hope that you've understood ;-))

-- 
#include <stdio.h>
int main(void){char c[]={10,65,110,116,111,110,105,111,32,98,97,114,98,111,110,
101,32,60,104,105,110,100,101,109,105,116,64,116,105,115,99,97,108,105,110,101,
116,46,105,116,62,10,10,0};printf("%s",c);return 0;}




[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread Roger Demetrescu

On 6/13/07, Marco Mariani [EMAIL PROTECTED] wrote:

 Roger Demetrescu ha scritto:

  query.get(dict(columnB='foo', columnA='bar')
 
  Lazy programmers are the best ones...  :)
 

 That's the reason lazy programmers share a superclass for all their
 domain objects... hint, hint :-)

Yeah, I totally agree..  :)

But notice that this feature is related to SA's Query class, and not
my domain objects...
Unless I do something like (if it is possible at all):

<pseudo code>

class BaseClass(object):
    @staticmethod
    def give_me_pk_values_in_correct_order(**kw):
        table = ...  # don't know how to retrieve the correct table object
        return [kw[c.key] for c in table.primary_key]

class Customer(BaseClass): pass

</pseudo code>


But the use of this function is too ugly for my taste (I know, the
give_me_pk_values_in_correct_order name is too big here):

customer =
session.query(Customer).get(Customer.give_me_pk_values_in_correct_order(columnX=3,
columnY=4, columnZ=343))

Having to repeat the Customer reference twice annoys me...  :)


Or am I missing a better way of doing this ?


Cheers,

Roger





 





[sqlalchemy] Re: Strange InvalidRequestError when running 0.3.3 code on 0.3.8

2007-06-13 Thread Michael Bayer
you're expressing the secondary join condition on a many-to-many as
shoved into the primary join.  SA will be confused by that, as it  
requires knowledge of primary/secondary join conditions separately in  
order to properly construct lazy loading criterion.

Group.mapper = mapper(Group, groups, properties = {
 'members': relation(User, viewonly=True,
 secondary=membership,
 backref='groups',
 primaryjoin=and_(membership.c.level==100,  
membership.c.group_id==groups.c.id),
 secondaryjoin=membership.c.user_id==users.c.id)
})



On Jun 12, 2007, at 11:53 PM, Michael Carter wrote:

 from sqlalchemy import *
 db = create_engine('sqlite://', echo=True)
 meta = BoundMetaData(db)
 session = create_session()

 groups = Table("groups", meta,
     Column("id", Integer, primary_key=True),
     Column("name", Unicode(40)))

 membership = Table("membership", meta,
     Column("user_id", Integer, ForeignKey("users.id"),
         primary_key=True),
     Column("group_id", Integer, ForeignKey("groups.id"),
         primary_key=True),
     Column("level", Integer))

 users = Table("users", meta,
     Column("id", Integer, primary_key=True),
     Column("name", Unicode(255)))

 class User(object):
     def __repr__(self): return str(self)
     def __str__(self): return "User %s: %s" % (self.id, self.name)

 class Group(object):
     def __repr__(self): return str(self)
     def __str__(self): return "Group %s: %s" % (self.id, self.name)

 class Membership(object):
     def __repr__(self): return str(self)
     def __str__(self): return "Membership %s, %s: %s" %
         (self.user_id, self.group_id, self.level)


 Group.mapper = mapper(Group, groups, properties = {
 'members': relation(User, viewonly=True,
 secondary=membership, backref='groups',
 primaryjoin=and_(membership.c.user_id==users.c.id,  
 membership.c.level==100, membership.c.group_id==groups.c.id)),
 })

 User.mapper = mapper(User, users, properties = {
 })

 Membership.mapper = mapper(Membership, membership, properties = {
 'user': relation(User),
 'group': relation(Group)
 })

 meta.drop_all()
 meta.create_all()
 u = User()
 g = Group()
 m = Membership()
 u.name = 'Michael Carter'
 g.name = 'Home'
 m.group = g
 m.user = u
 m.level = 100
 session.save(u)
 session.save(g)
 session.save(m)
 session.flush()
 print u.groups
 print g.members





[sqlalchemy] Re: Trying to detect which class methods were added by the mapper.

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 12:35 AM, Ian Charnas wrote:


 Inspired by the SQLAlchemy docs, I'm writing a documentation generator
 in python using a combination of epydoc (for parsing/introspection),
 genshi (templates), docutils (for restructured text), and pygments
 (syntax highlighting).. and I just noticed that the documentation for
 classes mapped by SQLAlchemy always includes the methods like
 "select", "count", "get_by", etc that were added by the mapper.  This
 is very undesirable, and I'm looking for a way to detect which methods
 were added to the class by the SQLAlchemy mapper, and which methods
 were there to begin with.

 Does anyone have any ideas?  I was hoping there would be something
 like Animal.select.mapper or Animal.select._sqlalchemy that I
 could use to differentiate which methods were added by the mapper and
 which were there originally, but I can't seem to find any such thing.


SQLAlchemy does not add methods to classes; the assign_mapper
extension does.  Don't use the assign_mapper extension.
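For the documentation-generator use case, the injected names can still be detected generically by snapshotting the class's attributes before and after mapping. A minimal sketch; the helper and the fake mapper below are hypothetical stand-ins (a real run would pass assign_mapper itself as the mutator):

```python
def attributes_added_by(mutate, cls):
    """Return the attribute names that mutate(cls) adds to cls."""
    before = set(dir(cls))
    mutate(cls)
    return set(dir(cls)) - before

class Animal(object):
    def speak(self):
        return "..."

def fake_assign_mapper(cls):
    # Stand-in for assign_mapper, which injects query helpers like these.
    cls.select = classmethod(lambda c, *a, **kw: [])
    cls.get_by = classmethod(lambda c, **kw: None)

added = attributes_added_by(fake_assign_mapper, Animal)
# added == {'select', 'get_by'}
```

A doc generator could record this set at mapping time and skip those names during introspection.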







[sqlalchemy] Re: Deleting single association in many-to-many relationship

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 7:45 AM, [EMAIL PROTECTED] wrote:

 the only line removed was the from .orm import * - u shouldnt
 use any of those internal stuff unless u know what u do.

no, this will be required in 0.4, and it's mentioned in some of the
0.3 docs as well.  sqlalchemy.orm is not an internal module.





[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 7:54 AM, voltron wrote:


 I have decided to use the global_metada to setup the tables in my app,
 in one of the tables, users, I setup an admin account

 user = User(john doe, [EMAIL PROTECTED])

 how do I flush the the above object? user.flush() does not work in
 this context because the User object does not have the attribute
 flush ( traceback)


first of all, global_metadata is being removed in version 0.4.  not
to mention being able to say user.flush() (which only occurs now
when you use assign_mapper).  global_metadata is hardcoded to the
DynamicMetaData, which is an object most people shouldn't be using.
shortcuts to configuration like that lead to people not understanding
very well what they're doing.

but beyond that, global_metadata has nothing to do with the object
relational mapper.  you would still have to create an explicit
session with the example code above, in which to save your User object.





[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread Roger Demetrescu

On 6/13/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 
  Some of my coworkers had the same needs of Gaetan... And while I
  understand your solution, I figure out if SA could have it natively
  (detecting the presence of a dictionary)...
 
  Somethink like:
 
 
  query.get(dict(columnB='foo', columnA='bar')
  Lazy programmers are the best ones...  :)

 why not query.get(**dict(columnB='foo', columnA='bar')) ?
 it should work as is..

No, it doesn't...  :/

I have the following code:

duplicata_table = Table('DUPLICATA', metadata,
Column('dpl_loja', String(2), primary_key=True),
Column('dpl_id', Integer, Sequence('gen_dpl_id'), primary_key=True),
# other columns
)

mapper(Duplicata, duplicata_table, properties={
'loja': duplicata_table.c.dpl_loja,
'id': duplicata_table.c.dpl_id,
# other proprerties
})


Now trying your suggestion:

 d = session.query(Duplicata).get(**dict(loja='02', id=9))
Traceback (most recent call last):
  File pyshell#17, line 1, in module
d = session.query(Duplicata).get(**dict(loja='02', id=9))
TypeError: get() takes exactly 2 non-keyword arguments (1 given)


It expects me to do :
 d = session.query(Duplicata).get(['02', 9])

Dealing with a compound primary key of 2 columns is easy... but believe
me: I have worked with some legacy databases in the past which have
tables with more than 9 columns in their primary key... !!

I wrote my own ORM on that occasion, and remember that passing primary
key values was a pain in the a**... :)
To avoid having to pass 10 positional parameters, I made use of
records (Delphi was my language)... just to make it possible to name
each parameter (the same thing Python gives us with dict())

So, if I was to use SA with that database, I would have to make some
utility functions like:

def pkvalues(table, **kw):
return [kw[c.key] for c in table.primary_key]


and do searches in those monster tables ('monster' because of their primary key):

m = session.query(Monster).get(pkvalues(monster_table, col1=23,
col2='dsfs', col3=7, etc...))


The point is: if SA already knows which table is associated with each
ORM class, why not put that function inside it (in Session.get()
or a better place)?  And when I say "which table", it's because of the
complexity of using polymorphic mapping (which I haven't used yet, but
have the desire to)


But maybe I am over complicating things...  :)


Cheers,

Roger




 
  On 6/12/07, Michael Bayer [EMAIL PROTECTED] wrote:
   look through the keys in [c.key for c in table.primary_key],
   match those up
  
   i.e.
  
   query.get(*[mydict[c.key] for c in table.primary_key])
  
   On Jun 12, 1:07 pm, Gaetan de Menten [EMAIL PROTECTED]
 wrote:
Hi,
   
Anybody knows how I could emulate the behavior of Query.get (ie
get the result from the session if possible instead of always
fetching from the db) if I have the values for the different
columns of the primary as keyword arguments (ie not in the
order of the columns of the initial table)? I need a kind of a
mix between get_by and get. Any idea?
   
--

 





[sqlalchemy] Re: postgresql rules and update

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 8:32 AM, Antonio wrote:


 hi all,

 I've partition a table in PostgreSQL, with rules on UPDATE, DELETE  
 and INSERT.
 When I INSERT a row, ok, but when i UPDATE a row, the program raise:

 ConcurrentModificationError: Updated rowcount 0 does not match  
 number of
 objects updated 1

 I know that when i use a RULE the command status return 0:

 ma=# SELECT * from smoduls.datirim_prev where id=6;
  turno | id | rim_id | prev_id
 -------+----+--------+---------
  d     |  6 |      2 |      12
 (1 row)

 ma=# UPDATE smoduls.datirim_prev set rim_id=3 where id=6;
 UPDATE 0
 ^^
 ma=# SELECT * from smoduls.datirim_prev where id=6;
  turno | id | rim_id | prev_id
 -------+----+--------+---------
  d     |  6 |      3 |      12
 (1 row)

 but is sane to catch the exception with pass ?? what can I do to  
 resolve ?

 tanks in advance ... I hope that you've understood ;-))



SQLAlchemy's ORM relies upon cursor.rowcount after an UPDATE or
DELETE to get the number of rows affected.  Why exactly does your rule
cause this to fail?  if there's no way around that, I'd have to provide a
hook into the postgres.py dialect to disable rowcount.  you might
want to experiment with raw DBAPI code and see if you can get
cursor.rowcount to behave properly in conjunction with your database
setup.
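The raw-DBAPI experiment might look like the following; this sketch uses the stdlib sqlite3 module as a stand-in (with PostgreSQL you would run the same statements through psycopg2 against the ruled table), just to show what cursor.rowcount reports after an UPDATE on a plain table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE prev (id INTEGER PRIMARY KEY, rim_id INTEGER)")
cur.execute("INSERT INTO prev (id, rim_id) VALUES (6, 2)")

cur.execute("UPDATE prev SET rim_id = 3 WHERE id = 6")
# On a plain table the driver reports the rows affected; with an
# INSTEAD rule redirecting the UPDATE, PostgreSQL reports 0 instead.
print(cur.rowcount)  # 1
```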






[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 8:48 AM, Roger Demetrescu wrote:
 But the use of this function is to ugly to my taste (I know, the
 give_me_pk_values_in_correct_order is too big here):

 customer = session.query(Customer).get 
 (Customer.give_me_pk_values_in_correct_order(dict(columnX=3,
 columnY=4, columnZ=343))

 Having to repeat Customer reference twic annoys me...  :)


it's Python!  it's "there should be only one way to do it"; we're
taking away map() and reduce() because you already have list
comprehensions.  there's all kinds of things you might want to pass
to get(); how would I know how you want to translate it?




[sqlalchemy] Re: Deleting single association in many-to-many relationship

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 6:13 AM, David Bolen wrote:

 * I want to remove the association between File Common and Job 1
   but without affecting Job 2.

   If I session.delete() the fc instance directly, SA purges the file
   completely, including links to both jobs.  I can understand SA
   thinking I want the file completely gone in this scenario.

   But if I remove the fc instance from the relation list (files) from
   either job, SA also fully purges fc, including the link to the other
   job.  This includes the case of deleting one of the jobs if I have
   the cascade on the files relation including delete.  This would
   seem to prevent me from using a delete cascade, since then deleting
   any job would remove files it contains from all other jobs also
   containing those files which sort of defeats the purpose (at least
   for me) of the many to many relationship.

If I add this code to the bottom:

del j1.files[0]
s.flush()

the first flush is:

BEGIN
INSERT INTO files (name) VALUES (?)
['File 1']
INSERT INTO files (name) VALUES (?)
['File Common']
INSERT INTO files (name) VALUES (?)
['File 2']
INSERT INTO jobs (name) VALUES (?)
['Job 1']
INSERT INTO jobs (name) VALUES (?)
['Job 2']
INSERT INTO jobs_files (job_id, file_id) VALUES (?, ?)
[[1, 1], [1, 2], [2, 3], [2, 2]]
COMMIT


the second flush is:

BEGIN
DELETE FROM jobs_files WHERE jobs_files.job_id = ? AND  
jobs_files.file_id = ?
[1, 1]
COMMIT

association is deleted only.



 * Providing I can resolve the prior point, I was hoping to have a way
   that would let me remove a job completely, including any associated
   files, but have the file records only pruned if they did not belong
   to any other job.

that you have to do manually.  cascade isn't going to do that for
you.  namely, it would require descending into all child objects,
magically producing backreferences to them in order to load their
parent objects (in the case backrefs weren't defined), then issuing
SELECTs for all child objects to determine if they had any other
parents.  way out of scope.
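At the SQL level the manual version is straightforward: delete the job and its association rows, then prune only the files that no longer appear in the association table. A sketch against the stdlib sqlite3 module (table layout borrowed from the original post; with the ORM you would issue equivalent deletes and then expire or refresh the affected objects):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE jobs_files (job_id INTEGER, file_id INTEGER);
    INSERT INTO jobs VALUES (1, 'Job 1'), (2, 'Job 2');
    INSERT INTO files VALUES (1, 'File 1'), (2, 'File Common');
    INSERT INTO jobs_files VALUES (1, 1), (1, 2), (2, 2);
""")

# Delete Job 1 and its association rows...
cur.execute("DELETE FROM jobs_files WHERE job_id = 1")
cur.execute("DELETE FROM jobs WHERE id = 1")
# ...then prune only the files referenced by no remaining job.
cur.execute("DELETE FROM files WHERE id NOT IN (SELECT file_id FROM jobs_files)")

remaining = [name for (name,) in cur.execute("SELECT name FROM files")]
# remaining == ['File Common']: the shared file survives, the orphan is pruned
```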

 Am I just missing something blindingly obvious, or should I be trying
 to manage the many-to-many relationships, at least deleting, in some
 other way?  Should I be interacting with the relationship table
 directly (but then, how do I remove the files from the contained
 project objects without triggering the deletion behavior I don't  
 want)?

if you really wanted to manipulate the relationship table directly,  
that's fine, but ensure that you expire/refresh the relevant objects  
and/or clear out the whole session before continuing past that point,  
since SA wouldn't otherwise know you modified relationships behind its  
back.
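For what it's worth, that direct route can be sketched with the stdlib sqlite3 module; the jobs/files/jobs_files schema below mirrors the thread's example but is otherwise made up. The point is that only the association row is touched, which is exactly why the mapped objects would afterwards need an expire/refresh:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE jobs_files (job_id INTEGER, file_id INTEGER);
""")
conn.executemany("INSERT INTO jobs (id, name) VALUES (?, ?)",
                 [(1, "Job 1"), (2, "Job 2")])
conn.executemany("INSERT INTO files (id, name) VALUES (?, ?)",
                 [(1, "File 1"), (2, "File Common")])
conn.executemany("INSERT INTO jobs_files (job_id, file_id) VALUES (?, ?)",
                 [(1, 1), (1, 2), (2, 2)])

# Prune only the Job 1 <-> File Common link; Job 2's link survives.
conn.execute("DELETE FROM jobs_files WHERE job_id = ? AND file_id = ?", (1, 2))

remaining = conn.execute(
    "SELECT job_id, file_id FROM jobs_files ORDER BY job_id, file_id").fetchall()
print(remaining)   # the (2, 2) association is untouched
```

Any ORM instances loaded before that DELETE would still hold the old collection contents, hence the advice to expire/refresh or clear the session.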

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: postgresql rules and update

2007-06-13 Thread Antonio

* Wednesday, 13 June 2007, at 09:15, Michael Bayer wrote:
 SQLAlchemy's ORM relies upon cursor.rowcount after an UPDATE or  
 DELETE to get the number of rows affected. Why exactly does your rule  
 cause this to fail ?

because it is an 'INSTEAD' rule: the base table 'prev' has no rows ...
the table 'prev2007' INHERITS from 'prev' ...
and the db returns a rowcount of '0' although the row is updated in
'prev2007' ...

CREATE RULE previsione2007_upd AS ON UPDATE TO rsm.prev
WHERE NEW.data_m BETWEEN '1/1/2007' AND '31/12/2007'
DO INSTEAD UPDATE rsm.prev2007
SET nave_id = NEW.nave_id,
...

 if there's no way around that, I'd have to provide a  
 hook into the postgres.py dialect to disable rowcount.
 you might  
 want to experiment with raw DBAPI code and see if you can get  
 cursor.rowcount to behave properly in conjunction with your database  
 setup.

I'll try ... thanks for the answer ...
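As a starting point for that experiment, here is a minimal raw-DBAPI check of cursor.rowcount after an UPDATE, using the stdlib sqlite3 module rather than a PostgreSQL driver (the table and column names are made up). Against the INSTEAD-rule setup above, the interesting question is whether the equivalent check through psycopg returns 0:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prev (id INTEGER PRIMARY KEY, nave_id INTEGER)")
conn.execute("INSERT INTO prev (id, nave_id) VALUES (1, 10)")

cur = conn.cursor()
# A plain UPDATE matching one row: the DBAPI reports how many rows it touched.
cur.execute("UPDATE prev SET nave_id = ? WHERE id = ?", (20, 1))
print(cur.rowcount)  # 1 here -- an INSTEAD rule can make the server report 0
```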

-- 
#include <stdio.h>
int main(void){char c[]={10,65,110,116,111,110,105,111,32,98,97,114,98,111,110,
101,32,60,104,105,110,100,101,109,105,116,64,116,105,115,99,97,108,105,110,101,
116,46,105,116,62,10,10,0};printf("%s",c);return 0;}




[sqlalchemy] Re: Erroneous primary key of in-memory instance after session flush?

2007-06-13 Thread Michael Bayer

ticket #603 has been added for this issue.

On Jun 12, 2007, at 11:48 PM, David Bolen wrote:


 I was converting an older table definition from using an integer
 primary key to a string (representation of UUID), and ran into a bit
 of strange behavior, where my object instance's String primary key
 receives an integer value (which appears to be the internal sqlite
 rowid) after a flush.  From prior reading, I believed that I ought to
 be able to use a non-integer primary key.

 I was using 0.3.8 when I first ran into this, but it appears to hold
 true for the current SVN trunk as well.

 It may just be with the sqlite layer (the DB I've been using and the
 only one I have handy), or it may be an issue with handling default
 values for primary key columns, I'm not sure.

 For a short sample exhibiting the problem:

   - - - - - - - - - - - - - - - - - - - - - - - - -

 import uuid
 from sqlalchemy import *
 from sqlalchemy.orm import *

 engine = create_engine('sqlite:///')

 meta = BoundMetaData(engine)

 def default_uuid():
     return str(uuid.uuid4())

 test_table = Table(
     'test', meta,
     Column('uuid', String, default=default_uuid, primary_key=True),
 )

 class Test(object):
     pass

 test_mapper = mapper(Test, test_table)


 if __name__ == "__main__":

     meta.create_all()
     s = create_session(engine)

     test = Test()

     # Works if the line below is uncommented
     # test.uuid = default_uuid()
     s.save(test)
     s.flush()
     print 'Post-Flush:', test.uuid

     u = test_table.select().execute().fetchone()
     print 'DB:', u

   - - - - - - - - - - - - - - - - - - - - - - - - -


 When run as is above, the output for me looks like:

 Post-Flush: 1
 DB: (u'1cfcb156-2a90-42ec-9c96-75a4b8bf60e7',)

 (If you enable tracing on the engine, you can see that the actual uuid
  column value inserted into the database during the flush is, in fact,
  the data shown from the select, which doesn't match that in the  
 object
  instance in memory following the flush)

 Running with the commented line (manual key assignment)  
 uncommented, yields:

 Post-Flush: d05ebdde-267f-43ae-a7df-f6d588e431a2
 DB: (u'd05ebdde-267f-43ae-a7df-f6d588e431a2',)

 which is what I originally expected for the first case.

 Now, I know that sqlite is internally creating a rowid field since I
 don't have an Integer primary key, and I'm presuming that's what the
 erroneous 1 value is in the first case (it is, in fact, an 'int', not
 a string), but I don't know why it's being assigned to my in-memory
 copy of the object, and since that's the primary key field, it no
 longer maps properly to the database.
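The rowid suspicion is easy to confirm with the stdlib sqlite3 module alone: after an INSERT into a table with a text primary key, cursor.lastrowid is sqlite's implicit integer rowid, not the text key, which is presumably the value being written back onto the in-memory instance:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (uuid TEXT PRIMARY KEY)")

key = str(uuid.uuid4())
cur = conn.cursor()
cur.execute("INSERT INTO test (uuid) VALUES (?)", (key,))

print(cur.lastrowid)   # sqlite's integer rowid (1 for the first row), not the uuid
stored, = conn.execute("SELECT uuid FROM test").fetchone()
print(stored == key)   # True: the text key itself was stored correctly
```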

 Am I doing something wrong in the above?

 Thanks.

 -- David


 





[sqlalchemy] Re: Deleting single association in many-to-many relationship

2007-06-13 Thread David Bolen

Michael Bayer [EMAIL PROTECTED] writes:

 On Jun 13, 2007, at 6:13 AM, David Bolen wrote:

 * I want to remove the association between File Common and Job 1
   but without affecting Job 2.

   If I session.delete() the fc instance directly, SA purges the file
   completely, including links to both jobs.  I can understand SA
   thinking I want the file completely gone in this scenario.

   But if I remove the fc instance from the relation list (files) from
   either job, SA also fully purges fc, including the link to the other
   job.  This includes the case of deleting one of the jobs if I have
   the cascade on the files relation including delete.  This would
   seem to prevent me from using a delete cascade, since then deleting
   any job would remove files it contains from all other jobs also
   containing those files which sort of defeats the purpose (at least
   for me) of the many to many relationship.

 If I add this code to the bottom:

 del j1.files[0]
 s.flush()

(...)

 the second flush is:
 
 BEGIN
 DELETE FROM jobs_files WHERE jobs_files.job_id = ? AND  
 jobs_files.file_id = ?
 [1, 1]
 COMMIT
 
 association is deleted only.

Argh - my bad.  I should have left a cascade option commented in the
example, since it needs to change slightly to show different
behaviors.

With the default cascade (as in the example as posted, just
save-update), I also see the individual removal from the parent object
only affects the association.  But nothing other than the association
is ever touched, and if you remove it from all parents, you end up
with an orphaned file (no jobs).  E.g., in your case above, file 1 was
only associated with job 1, and is now orphaned.

If the cascade becomes "save-update, delete-orphan", then I get the
behavior where deleting from a single parent job flushes all
associations for the file.  Although it's true you never get an orphan
that way, files get orphaned before their time.

So, adding the cascade, and then changing your line to del j1.files[1]
(to delete the common file), I get on the second flush:

BEGIN
DELETE FROM jobs_files WHERE jobs_files.job_id = ? AND jobs_files.file_id = ?
[[1, 2], [2, 2]]
DELETE FROM files WHERE files.id = ?
[2]
COMMIT

which is removing the common file association with both jobs, and then
removing the file itself.  It's true that after doing the former, it's
an orphan which can be deleted, but I don't know why it's including
the second job in the association pruning.

So I was either getting orphans, or completely removed files when
trying to remove a single association.  I can't seem to get the two
desired behaviors together?
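One way to get both behaviors manually, given that cascade won't: keep the default cascade, and after pruning an association check the remaining parent count yourself before deleting the file. A sketch in plain DBAPI terms (stdlib sqlite3; the schema and helper name are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE jobs_files (job_id INTEGER, file_id INTEGER);
""")
conn.execute("INSERT INTO files VALUES (2, 'File Common')")
conn.executemany("INSERT INTO jobs_files VALUES (?, ?)", [(1, 2), (2, 2)])

def prune_association(conn, job_id, file_id):
    # Remove a single job<->file association...
    conn.execute("DELETE FROM jobs_files WHERE job_id = ? AND file_id = ?",
                 (job_id, file_id))
    # ...then delete the file only if no parent jobs remain.
    (parents,) = conn.execute(
        "SELECT COUNT(*) FROM jobs_files WHERE file_id = ?",
        (file_id,)).fetchone()
    if parents == 0:
        conn.execute("DELETE FROM files WHERE id = ?", (file_id,))

prune_association(conn, 1, 2)   # Job 2 still holds the file -> file kept
kept = conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]
prune_association(conn, 2, 2)   # last parent removed -> file deleted
left = conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]
print(kept, left)
```

In ORM terms the same check would happen between removing the item from the collection and the flush; the tradeoff is one extra COUNT query per pruned file.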


On the second point, the complexity of the full cascade recursion with
orphan detection makes sense.  I suppose I'm interested in any input
from anyone else as to how they are handling these sorts of operations
in many-to-many cases with changing associations.

-- David





[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread sdobrev

Well, if you don't want to write the same thing over and over, write one 
wrapping function, and publish it here.

e.g. something like (pseudocode):

def _get_pk_ordered(klas):
    table = orm.mapper.registry(klas).mapped_table  # or select_table
    ...
    return whatever-list-of-columns

def get_by_pk(session, klas, **columns):
    cols = _get_pk_ordered(klas)
    assert len(cols) == len(columns)
    return session.query(klas).get([columns[c.name] for c in cols])

I guess, if it's really useful, it may even go into some extension/
module.
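A runnable, SQLAlchemy-free sketch of the ordering step: the pk_order list is hard-coded here, whereas in the pseudocode above it would come from the mapper. The helper just reorders keyword-style values into the positional list that get() expects:

```python
def ordered_pk_values(pk_order, **columns):
    """Reorder keyword-style PK values into declared column order."""
    assert set(pk_order) == set(columns), "must supply every PK column exactly once"
    return [columns[name] for name in pk_order]

# The DUPLICATA table from the thread declares its PK as (dpl_loja, dpl_id),
# mapped to the 'loja' and 'id' properties.
pk = ordered_pk_values(["loja", "id"], id=9, loja="02")
print(pk)   # ['02', 9] -- safe to pass as query.get(pk)
```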


   Some of my coworkers had the same needs of Gaetan... And while
   I understand your solution, I figure out if SA could have it
   natively (detecting the presence of a dictionary)...
  
   Something like:
  
  
   query.get(dict(columnB='foo', columnA='bar'))
   Lazy programmers are the best ones...  :)
 
  why not query.get(**dict(columnB='foo', columnA='bar')) ?
  it should work as is..

 No, it doesn't...  :/

 I have the following code:

 duplicata_table = Table('DUPLICATA', metadata,
 Column('dpl_loja', String(2), primary_key=True),
 Column('dpl_id', Integer, Sequence('gen_dpl_id'),
 primary_key=True), # other columns
 )

 mapper(Duplicata, duplicata_table, properties={
 'loja': duplicata_table.c.dpl_loja,
 'id': duplicata_table.c.dpl_id,
 # other proprerties
 })

 Now trying your suggestion:
  d = session.query(Duplicata).get(**dict(loja='02', id=9))

 Traceback (most recent call last):
   File pyshell#17, line 1, in module
 d = session.query(Duplicata).get(**dict(loja='02', id=9))
 TypeError: get() takes exactly 2 non-keyword arguments (1 given)

 It expects me to do :
  d = session.query(Duplicata).get(['02', 9])

 Dealing with a compound primary key of 2 columns is easy... but
 believe me: I have worked with some legacy databases in the past
 which have tables with more than 9 columns in their primary key...
 !!





[sqlalchemy] Re: Deleting single association in many-to-many relationship

2007-06-13 Thread sdobrev


 On the second point, the complexity of the full cascade recursion
 with orphan detection makes sense.  I suppose I'm interested in any
 input from anyone else as to how they are handling these sorts of
 operations in many-to-many cases with changing associations.

As I need a history (bitemporal) of all things in my db, I did not 
have updates or deletions - all those operations become inserts (of 
the same record with a different status). For the same reason, the one-to-many 
rels have become many-to-many - so you can add new versions on either 
side of the relation without changing any previous versions - only 
manipulating the associations in between.

Then, for the reasons of nested user-transactions and their rollback 
(see the thread some weeks ago), I ended up updating and deleting 
these (many-to-many) associations. This transaction engine is not 
100% working yet, but so far I haven't run into any cascade 
problems - and I do rely on them working properly.

svil




[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread Michael Bayer


On Jun 13, 2007, at 12:07 PM, voltron wrote:


  aha, ok, thanks, but I saw it in the docs, is it deprecated?  Then
  I'm in a spot

DynamicMetaData is not deprecated.  but when you connect to it, it's  
a thread-local connection; other threads that access it won't have  
any engine.  you just want to use a regular MetaData, which these days  
has a connect() method.  but you might not even need to do that with  
a pylons configuration.


 If I go the route of creating meta first, I would have to create it in
 every table file: in users.py, addresses.py, etc.  I tried using a file
 called _initmodels.py that created the metadata only once and made all
 the separate table files import from it, but that ultimately failed:
 when I imported it in websetup.py in pylons so I could use it to
 set up my databases and app, it did not find the metadata properly, it
 assumed it was another metadata, and naturally, no databases were
 created

that sounds like you need to get control of your application  
modules.  you should declare your metadata in some common place, like  
for pylons in base.py or app_globals.py, where everyone else can get  
to it.

but additionally, Pylons connects the database to the session, so  
typically in a pylons app there's not even any need to connect the  
engine to the metadata.  to issue a create_all() just do  
metadata.create_all(connectable=sessioncontext.current.connect()).



[sqlalchemy] Re: Query.get with unordered multiple-column-primary-key

2007-06-13 Thread Gaetan de Menten

On 6/13/07, Roger Demetrescu [EMAIL PROTECTED] wrote:

 Hi Michael,

 On 6/13/07, Michael Bayer [EMAIL PROTECTED] wrote:
 
 
  On Jun 13, 2007, at 8:48 AM, Roger Demetrescu wrote:
   But the use of this function is to ugly to my taste (I know, the
   give_me_pk_values_in_correct_order is too big here):
  
   customer = session.query(Customer).get
   (Customer.give_me_pk_values_in_correct_order(dict(columnX=3,
   columnY=4, columnZ=343))
  
    Having to repeat the Customer reference twice annoys me...  :)
  
 
   it's python!  it's, "there should be only one way to do it; we're
   taking away map() and reduce() because you already have list
   comprehensions."  there's all kinds of things you might want to pass
   to get(), how would I know how you want to translate it?

Well, from my understanding, the spirit of get is to fetch something
that you already know the primary key for, so supporting them in the
form of a dictionary seems pretty natural to me.

 Indeed, different people will have different opinions on how things
 should work.
 In my case, passing a dict() to query.get() was so intuitive to me
 that I got a little
 frustrated when I noticed it didn't work ...

Same here.

 If query.get() is smart to
 deal with a sequence (eg: list or tuple), why it shouldn't be with a
 mapping ?   :)

 Anyway, a couple of custom utilities functions will definitely do the job 
 here..

Indeed. No worries, I'll survive this... It's not something I come
across that often anyway. It just felt natural to be able to do it.
-- 
Gaëtan de Menten
http://openhex.org




[sqlalchemy] Re: mssql reflection NoSuchTableError

2007-06-13 Thread one.person

Thanks a lot for the help guys.  I got this to work by specifying the
schema= argument properly.  It turns out I had to specify the
'schema' argument to Table() as what MSSQL refers to as the 'Owner' of
the table in enterprise manager.

Still confused over the difference between schema and owner in MSSQL,
but now at least it works!

Thanks again

On Jun 12, 7:05 pm, Rick Morrison [EMAIL PROTECTED] wrote:
 That looks OK to me.

 Try pasting that query (cleaned-up) into a query window on Enterprise
 Manager and see what kind of results you get. The ? arguments are
 positional, so the first would be the table 'zone'; the second the schema
 'dbo'.

 On 6/12/07, one.person [EMAIL PROTECTED] wrote:



  Thanks for all the quick replies.  Currently on pyodbc 2.0.3.6.

  Anyway, I tried this (the table 'zones' most definitely exists):

   metadata.engine.echo = True
   zones_table = Table('zones', metadata, autoload=True)

  2007-06-12 18:20:40,924 INFO sqlalchemy.engine.base.Engine.0x..b0 SET
  nocount ON
  2007-06-12 18:20:40,924 INFO sqlalchemy.engine.base.Engine.0x..b0 None
  2007-06-12 18:20:40,924 INFO sqlalchemy.engine.base.Engine.0x..b0
  SELECT [COLUMNS_eba4].[TABLE_SCHEMA], [COLUMNS_eba4].[TABLE_NAME],
  [COLUMNS_eba4].[COLUMN_NAME], [COLUMNS_eba4].[IS_NULLABLE],
  [COLUMNS_eba4].[DATA_TYPE], [COLUMNS_eba4].[ORDINAL_POSITION],
  [COLUMNS_eba4].[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_eba4].
  [NUMERIC_PRECISION], [COLUMNS_eba4].[NUMERIC_SCALE], [COLUMNS_eba4].
  [COLUMN_DEFAULT]
  FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_eba4]
  WHERE [COLUMNS_eba4].[TABLE_NAME] = ? AND [COLUMNS_eba4].
  [TABLE_SCHEMA] = ? ORDER BY [COLUMNS_eba4].[ORDINAL_POSITION]
  2007-06-12 18:20:40,940 INFO sqlalchemy.engine.base.Engine.0x..b0
  ['zones', 'dbo']

  Traceback (most recent call last):
    File "<pyshell#6>", line 1, in -toplevel-
      zones_table = Table('zones', metadata, autoload=True)
    File "build\bdist.win32\egg\sqlalchemy\schema.py", line 166, in __call__
    File "build\bdist.win32\egg\sqlalchemy\engine\base.py", line 809, in reflecttable
    File "build\bdist.win32\egg\sqlalchemy\databases\mssql.py", line 506, in reflecttable
  NoSuchTableError: zones

  I think that was what you guys meant by query logging





[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread voltron

Actually, I thought that the global metadata was deprecated, not
DynamicMetaData, since you said I should not be using it.

creating the metadata in base.py causes a traceback, using:

from sqlalchemy import *
g.metadata = MetaData()


D:\Projects\Pylons_projects\gameolymp>paster setup-app development.ini
Traceback (most recent call last):
  File "C:\Python24\Scripts\paster-script.py", line 7, in ?
    sys.exit(
  File "c:\python24\lib\site-packages\PasteScript-1.3.4-py2.4.egg\paste\script\command.py", line 76, in run
    invoke(command, command_name, options, args[1:])
  File "c:\python24\lib\site-packages\PasteScript-1.3.4-py2.4.egg\paste\script\command.py", line 115, in invoke
    exit_code = runner.run(args)
  File "c:\python24\lib\site-packages\PasteScript-1.3.4-py2.4.egg\paste\script\appinstall.py", line 65, in run
    return super(AbstractInstallCommand, self).run(new_args)
  File "c:\python24\lib\site-packages\PasteScript-1.3.4-py2.4.egg\paste\script\command.py", line 210, in run
    result = self.command()
  File "c:\python24\lib\site-packages\PasteScript-1.3.4-py2.4.egg\paste\script\appinstall.py", line 451, in command
    installer.setup_config(
  File "c:\python24\lib\site-packages\PasteScript-1.3.4-py2.4.egg\paste\script\appinstall.py", line 579, in setup_config
    mod = import_string.try_import_module(mod_name)
  File "c:\python24\lib\site-packages\Paste-1.3-py2.4.egg\paste\util\import_string.py", line 81, in try_import_module
    return import_module(module_name)
  File "c:\python24\lib\site-packages\Paste-1.3-py2.4.egg\paste\util\import_string.py", line 67, in import_module
    mod = __import__(s)
  File "D:\Projects\Pylons_projects\gameolymp\gameolymp\websetup.py", line 4, in ?
    from gameolymp.maf.models.models import *
  File "D:\Projects\Pylons_projects\gameolymp\gameolymp\maf\models\models.py", line 23, in ?
    from user import *
  File "D:\Projects\Pylons_projects\gameolymp\gameolymp\maf\models\user.py", line 6, in ?
    from gameolymp.lib.base import *
  File "D:\Projects\Pylons_projects\gameolymp\gameolymp\lib\base.py", line 16, in ?
    g.metadata = MetaData()
  File "c:\python24\lib\site-packages\Paste-1.3-py2.4.egg\paste\registry.py", line 128, in __setattr__
    setattr(self._current_obj(), attr, value)
  File "c:\python24\lib\site-packages\Paste-1.3-py2.4.egg\paste\registry.py", line 177, in _current_obj
    raise TypeError(
TypeError: No object (name: G) has been registered for this thread

On Jun 13, 6:15 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 13, 2007, at 12:07 PM, voltron wrote:



   aha, ok, thanks, but I saw it in the docs, is it deprecated?  Then
   I'm in a spot

 DynamicMetaData is not deprecated.   but when you connect to it, its
 a thread-local connection.  other threads that access it wont have
 any engine. you just want to use a regular MetaData, which these days
 has a connect() method.  but you might not even need to do that with
 a pylons configuration.



  If I go the route of creating meta first I would have to create it in
  every table file, in users.py, addresses.py e.tc. I tried using a file
  called _initmodels.py and created the metadata only once and made alle
  the separate table files import from it, but that ultimately failed
  because when I imported it in websetup.py in pylons so I can use it to
  setup my databses and app, it did not find the metadata properly, it
  assumed it was another metadata, and naturally, no dtabases were
  created

 that sounds like you need to get control of your application
 modules.  you should declare your metadata in some common place, like
 for pylons in base.py or app_globals.py, where everyone else can get
 to it.

 but additionally, Pylons connects the database to the session, so
 typically in a pylons app theres not even any need to connect the
 engine to the metadata.  to issue a create_all() just do
 metadata.create_all(connectable=sessioncontext.current.connect()).





[sqlalchemy] Re: How to run a query in an object's session/connection

2007-06-13 Thread Roger Demetrescu

On 6/13/07, kwarg [EMAIL PROTECTED] wrote:

 I don't explicitly create a transaction - it's all done by TG/SA
 behind the scenes.

Take a look at this thread (the 8th message):

http://tinyurl.com/39bytt

Where it says:


As of TG 1.0.2+ you can now get access to the SA transaction via
cherrypy.request.sa_transaction. You can also replace it with a new one
and TG will then commit that after the controller returns.

So after doing your own commit/rollback/flush/whatever-you-feel-like
just make sure you leave another valid transaction there for TG to
commit on the way out. Probably still not the cleanest way to handle
this but it should work.



Hope it helps...


Roger




 On Jun 13, 11:39 am, Michael Bayer [EMAIL PROTECTED] wrote:
  On Jun 13, 2007, at 11:21 AM, kwarg wrote:
 
 
 
 
 
 
 
   I defined a model with several DB mapped entities. Here's one of them:
   ...
    address_table = Table("address", metadata, autoload=True)
   ...
   class Address(object):
   pass
   ...
   assign_mapper(session.context, Address, address_table)
 
   Then in the controller an instance of Address class is modified ans
   saved:
   # 1. construct/read object
   a = Address.get(address_id)
   ...
   # 2. modification of object a properties
   ...
   # 3. save changes
   a.save()
   a.flush()
   All of 1., 2. and 3. use the same connection for interaction with the
   DB.
   I need to run my custom query in that connection before 3. executes.
   How can I get that connection?
   Thank you.
 
   how are you creating the transaction?  if via Connection, you have
   it.  if via SessionTransaction,
   trans.connection(Address).execute(your statement), or alternatively
   session.context.connection(Address).execute(your statement).


 





[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread Michael Bayer

don't use g.  just get at it via myapp.base.metadata.

if you want to use g, put it in app_globals.py.






[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread voltron

almost there:

I put this in base.py

from sqlalchemy import *
metadata = MetaData()

so anywhere I need it I just import:

from gameolymp.lib.base import *

I have no errors, but no databases are created or dropped, this is
what I added to my websetup.py

from sqlalchemy import *
from myapp.lib.base import *

uri = conf['sqlalchemy.dburi']
engine = create_engine(uri)
metadata = BoundMetaData(engine)

metadata.create_all()

no errors, but no reaction


Thanks for your patience Michael



On Jun 13, 7:58 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 dont use g.  just get at it via myapp.base.metadata.

 if you want to use g, put it in app_globals.py.





[sqlalchemy] Mapped class + querying for DISTINCT field

2007-06-13 Thread Christoph Haas

I use my assign_mapper'd classes with a lot of joy so far. But now I
feel trapped. My table:

records_table = Table(
'records', meta,
Column('id', Integer, primary_key=True),
Column('name', Unicode(80)),
Column('type', Unicode(10)),
Column('content', Unicode(200)),
Column('ttl', Integer),
Column('prio', Integer))

My mapping:

class DnsRecord(object): pass
assign_mapper(session_context, DnsRecord, records_table)

Now I'd like to select all objects from that table with unique values in
the name column. In old-school SQL that would be:

SELECT DISTINCT name,* FROM records;

When I start with...

DnsRecord.select(...)

...I can just change the WHERE clause but not the actually selected
data. Do I really have to use

select([DnsRecord.c.name], distinct=True)

? It's simple and working. But I'd rather like to get the result as a
mapped class so that I can work with other columns, too. This way I just
received a list of strings when I .execute() it.

On #sqlalchemy I was proposed to:
- use a subselect with .filter()
- map a select instead of a Table()
- use result-set mapping

But these solutions sounded like dirty workarounds and I couldn't even
figure out the proper syntax to try it.
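For what it's worth, the subselect idea expressed in raw SQL picks one representative id per distinct name and then fetches full rows for those ids; that's roughly what a .filter() against a subselect would generate. A stdlib sqlite3 sketch with a trimmed-down version of the records table and made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT, type TEXT)")
conn.executemany("INSERT INTO records (name, type) VALUES (?, ?)",
                 [("www", "A"), ("www", "AAAA"), ("mail", "A")])

# One full row per distinct name: pick the lowest id for each name,
# then select complete rows for just those ids.
rows = conn.execute("""
    SELECT * FROM records
    WHERE id IN (SELECT MIN(id) FROM records GROUP BY name)
    ORDER BY id
""").fetchall()
print(rows)   # [(1, 'www', 'A'), (3, 'mail', 'A')]
```

Because the outer query returns whole rows, an ORM could map them to full instances, unlike a bare SELECT DISTINCT on a single column.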

I have seen similar requests on the list that went unanswered. Is it an
obviously lacking feature in SA or is my idea just plain stupid? What is
the one truly right way (tm) to handle this?

Thanks,
 Christoph





[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread voltron

I have found a solution which works, but what is the correct way
Michael? I created in base.py:

metadata = DynamicMetaData()

then in my websetup.py :

uri = conf['sqlalchemy.dburi']
engine = create_engine(uri)
metadata.connect(uri)
metadata.create_all()

this works, but you mentioned that I should use MetaData() in my case,
which I am having problems with. Could you detail where and what I
should import to use MetaData()? Why the preference over DynamicMetaData?

thanks





[sqlalchemy] Re: flushing and saving data using default_metadata

2007-06-13 Thread voltron

another thing, g does not work from websetup.py, which I would have
liked

thanks





[sqlalchemy] Cached ORM instances and eagerload queries

2007-06-13 Thread Cory Johns

I find myself in a situation where I need certain (normally lazy-loaded)
properties of an ORM class to be eagerloaded for a particular query.  I pass
withoptions=[eagerload('property')] in to session.query(), and everything
works fine.  At least, it did until the query picked up a record that had
been previously returned by another query without eagerloading.  Because the
record was already cached in the session, it didn't pick up on the eagerload
option.

What I'd like to know is, is there a way to force the eagerloading query to
apply its eagerloading behavior, even if the record is already cached in the
session?

Thanks,


Cory Johns
Systems
Tower Hill Insurance Group, Inc.









[sqlalchemy] Re: Cached ORM instances and eagerload queries

2007-06-13 Thread Michael Bayer



On Jun 13, 4:23 pm, Cory Johns [EMAIL PROTECTED] wrote:
 I find myself in a situation where I need certain (normally lazy-loaded)
 properties of an ORM class to be eagerloaded for a particular query.  I pass
 options=[eagerload('property')] into session.query(), and everything
 works fine.  At least, it did until the query picked up a record that had
 been previously returned by another query without eagerloading.  Because the
 record was already cached in the session, it didn't pick up on the eagerload
 option.

 What I'd like to know is, is there a way to force the eagerloading query to
 apply its eagerloading behavior, even if the record is already cached in the
 session?

I'm assuming you mean that one of the lead instances in your selection
did not get its collection updated since it was already present (as
opposed to one of the eagerly loaded child items not getting its
attributes refreshed).

I normally answer this along the lines of "use load() to load the
instance, which reloads all of its attributes, or expire()/refresh()
an already loaded instance".  But you are trying to load a set of
instances here, and there's no public hook to indicate load() behavior
for a whole list.

This is something easy enough to add in version 0.4 of Query, so for
now I will show you the non-public way you can accomplish this:

query = session.query(MyClass).options(...).filter_by(...)  # etc.
result = query._select_statement(query.compile(),
    populate_existing=True)

The effect of the above is to completely disregard any attributes set
on elements that are already in the session; they'll all be
overwritten, i.e. all column-based attributes, collections, etc., all
the way down for everything accessed.  It's like calling refresh() on
every instance.

If you need finer-grained options than that, e.g.
options(overwrite_collection('foo')), that would take a lot more
tinkering under the hood.
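The populate_existing behavior described in the reply, in toy form: instead of skipping rows whose key is already in the identity map, the load overwrites the cached instance's attributes in place, like refresh() on each one. This is an illustrative sketch only, not SQLAlchemy internals; the names are made up:

```python
def load_rows(identity_map, rows, populate_existing=False):
    """Toy loader: rows are (key, attrs) pairs; the map caches plain dicts."""
    result = []
    for key, attrs in rows:
        obj = identity_map.get(key)
        if obj is None:
            obj = dict(attrs)
            identity_map[key] = obj
        elif populate_existing:
            # Overwrite every attribute on the cached instance in place,
            # like calling refresh() on each already-loaded object.
            obj.clear()
            obj.update(attrs)
        # else: default behavior, leave the cached instance untouched
        result.append(obj)
    return result

imap = {}
load_rows(imap, [(1, {"name": "cat"})])             # plain load, gets cached
stale = imap[1]
load_rows(imap, [(1, {"name": "cat", "legs": 4})])  # default: new data ignored
assert "legs" not in stale
load_rows(imap, [(1, {"name": "cat", "legs": 4})], populate_existing=True)
assert stale["legs"] == 4  # same object, attributes refreshed in place
```

Note that the refresh happens on the existing object, so every other reference to it in the program sees the new data too.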





[sqlalchemy] Re: Using bind parameters to execute a statement

2007-06-13 Thread Michael Bayer

Not really: execute_text() takes **params (and *args).  Here's some
unit test code:

conn.execute("insert into users (user_id, user_name) values (%(id)s, %(name)s)",
    id=4, name='sally')
conn.execute("insert into users (user_id, user_name) values (%(id)s, %(name)s)",
    {'id':2, 'name':'ed'}, {'id':3, 'name':'horse'})
conn.execute("insert into users (user_id, user_name) values (%s, %s)",
    [2, 'ed'], [3, 'horse'])
conn.execute("insert into users (user_id, user_name) values (%s, %s)",
    4, 'sally')

PostgreSQL is going to want pyformat params for what you're doing.


On Jun 13, 4:17 pm, kwarg [EMAIL PROTECTED] wrote:
 Calling a PostgreSQL stored procedure using a DB connection object:
 conn.execute("select * from my_stored_procedure(:p1)", p1=10)
 throws the following:
 TypeError: execute_text() got an unexpected keyword argument 'p1'.
 Any ideas?
 Thanks.
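The underlying issue is the DB-API paramstyle: the :p1 placeholder in the original call is the "named" style, while psycopg2 expects "pyformat" (%(name)s) placeholders, as the reply notes. A quick illustration of the two bind-parameter shapes using the stdlib sqlite3 driver, which happens to accept both qmark and named styles — sqlite3 is only a stand-in here for demonstration, not the poster's PostgreSQL setup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table users (user_id integer, user_name text)")

# qmark style: positional '?' placeholders bound from a sequence
conn.execute("insert into users (user_id, user_name) values (?, ?)", (4, "sally"))

# named style: ':name' placeholders bound from a dict
conn.execute(
    "insert into users (user_id, user_name) values (:id, :name)",
    {"id": 2, "name": "ed"},
)

rows = conn.execute("select user_id, user_name from users order by user_id").fetchall()
print(rows)  # [(2, 'ed'), (4, 'sally')]
```

Whatever style you write must match what the underlying driver declares in its paramstyle attribute, which is why the same statement text is not portable across drivers.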





[sqlalchemy] Re: Cached ORM instances and eagerload queries

2007-06-13 Thread Cory Johns

Excellent, thanks.  Works a treat.

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]]
On Behalf Of Michael Bayer
Sent: Wednesday, June 13, 2007 4:51 PM
To: sqlalchemy
Subject: [sqlalchemy] Re: Cached ORM instances and eagerload queries


...

query = session.query(MyClass).options(...).filter_by(...)  # etc.
result = query._select_statement(query.compile(),
    populate_existing=True)

...




