[sqlalchemy] Replacing a Session Engine

2008-11-24 Thread Koen Bok

Hey, I was wondering if it is possible to replace the engine in a
session. It does not seem to work, but maybe I'm doing something
really stupid.

http://pastie.org/322501

Kindest regards,

Koen Bok - madebysofa.com





[sqlalchemy] Re: Replacing a Session Engine

2008-11-24 Thread Koen Bok

Ok, that sounds logical, but that defeats the purpose of it for me :-)

Whenever a Checkout user logs out I clear the session. On a new login
I create a fresh session that has to re-cache a lot of data. The logins
depend on a valid postgres user. If I could find a way to replace the
engine/connection for a session, it would keep the objects around,
making login way faster the second time.

Any ideas? :-)

- Koen
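
To make the intent concrete, a minimal sketch (it assumes the Session's
bind can simply be reassigned and that the identity map survives the
swap, which is exactly what's in question):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

Session = sessionmaker()
session = Session(bind=create_engine('postgres://user_a@localhost/shop'))

# ... the session warms up its cache of objects here ...

# on the next login, point the same session at the new user's engine
session.bind = create_engine('postgres://user_b@localhost/shop')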

On Nov 24, 4:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 that is the way to do it as far as switching engines.  your example
 doesn't work because the state of p1 is persistent as opposed to
 pending, so the example updates a row that isn't there.

 you need to build a copy constructor on Person and make a new,
 non-persistent Person object for your second engine.   the Session can't
 implicitly figure out what your intent is.



 Koen Bok wrote:

  Hey I was wondering if it was possible to replace an engine in a
  session. It does not seem to work, but maybe I'm doing something
  really stupid.

 http://pastie.org/322501

  Kindest regards,

  Koen Bok - madebysofa.com



[sqlalchemy] Smart Caching Mapper Question

2008-05-28 Thread Koen Bok

Hey All,

I have a conceptual question.

You have two ways to get relations: lazy and nonlazy. Nonlazy works
great for saving queries, but can get pretty slow with complicated
joins. So I was wondering if there is a third way: pre-fetching all
the data for relations and letting the mapper get the relation data from
a cache instead of doing another query.

It's kinda hard to explain, so I wrote an example script at:
http://paste.pocoo.org/show/55145/

I guess this should be possible by writing some MapperExtension? Did
anyone do anything like this, or maybe has some pointers?

Thanks!

Koen
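
To make the idea concrete, a minimal sketch (Item/Order and id_item are
made-up names, and poking __dict__ bypasses the mapper, so this is
illustration only):

from collections import defaultdict

items = session.query(Item).all()       # one query for the parents
orders = session.query(Order).all()     # one query for all children

children = defaultdict(list)
for order in orders:
    children[order.id_item].append(order)

for item in items:
    # hand the relation its rows from the cache instead of lazy-loading;
    # a real version would go through the mapper/attribute machinery
    item.__dict__['orders'] = children[item.id]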



[sqlalchemy] Re: Smart Caching Mapper Question

2008-05-28 Thread Koen Bok

Yep, exactly that. It would speed up my (UI) app immensely. Any ideas
how to approach something like that?

On May 28, 5:07 pm, [EMAIL PROTECTED] wrote:
 some time ago i posted a list of my ideas along this:
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/d88696...



  beware: it's all pure theory.
   -1 (horizontal) eager loading ONLY of the needed row attributes,
  also hierarchically (a.b.c.d)
   -2 (vertical) simultaneous loading of columns - e.g. the lazy
  attributes - wholly, or in portions/slices (depending on UI
  visibility or other slice-size)
   -3 skipping creation of objects - only using the data, if time of
  creation gets critical. For example, for a simple report of a
  name.alias and age of person, the creation of 100,000 Persons can
  be omitted. To be able to do drill-down, the person.db_id would be
  needed+stored too.
   -4 caching of some aggregations/calculations in special
  columns/tables, so they're not re-invented every time
   -5 translate the whole report - calculations, aggregations,
  grouping etc. into sql and use the result as is (with the same thing
  about db_id's)

 except the #4/aggregation which is pretty automated now, i don't
 have an implementation of the rest yet.
 i think you're talking about #2?

 ciao
 svilen

  Hey All,

  I have a conceptual question.

  You have two ways to get relations; lazy and nonlazy. Nonlazy works
  great for saving queries but can get pretty slow with complicated
  joins. So I was wondering if there was a third way; pre fetching
  all the data for relations and let the mapper get the relation data
  from a cache instead of doing another query.

  It's kinda hard to explain, so I wrote an example script at:
 http://paste.pocoo.org/show/55145/

  I guess this should be possible by writing some MapperExtension?
  Did anyone do anything like this, or maybe has some pointers?

  Thanks!

  Koen



[sqlalchemy] Re: problems with py2app

2008-04-26 Thread Koen Bok

Hey Iain,

If you build apps with py2app, it tries to figure out which modules to
include automatically. If these modules are nested in some weird way,
it sometimes chokes. A solution is to import that module by hand in
your main script (your-app-name.py) or to tell py2app it needs to add
the module in the setup dict (see the manual).

Good luck!

Koen - madebysofa.com
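
For example, the setup-dict route looks like this (a sketch; 'includes'
is the py2app option that forces modules into the bundle):

# setup.py
from setuptools import setup

setup(
    app=['your-app-name.py'],
    # 'includes' forces py2app to bundle modules it failed to detect
    options=dict(py2app=dict(includes=['logging'])),
)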

On Apr 26, 7:53 am, iain duncan [EMAIL PROTECTED] wrote:
 Hi folks, I seem to be having a problem with sqlalchemy and py2app, but
 I am very new to OS X and py2app, so I could be doing something stupid.
 When we try to build the app we get this:

 ImportError: No module named logging
 Traceback (most recent call last):
    File "/Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/__boot__.py", line 137, in <module>
      _run('booking_main.py')
    File "/Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/__boot__.py", line 134, in _run
      execfile(path, globals(), globals())
    File "/Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/booking_main.py", line 10, in <module>
      from model_extern import *
    File "model_extern.pyc", line 15, in <module>
    File "sqlalchemy/__init__.pyc", line 29, in <module>
    File "sqlalchemy/engine/__init__.pyc", line 54, in <module>
    File "sqlalchemy/engine/base.pyc", line 16, in <module>
    File "sqlalchemy/logging.pyc", line 35, in <module>
 ImportError: No module named logging
 2008-04-25 22:43:27.066 booking_main[457] booking_main Error
 2008-04-25 22:43:27.067 booking_main[457] booking_main Error
 An unexpected error has occurred during execution of the main script

 I'm not importing logging, and the only other libraries being used are
 wxPython and formencode. wx is working ok.

 Any tips or even stories of success/failure would be much appreciated!
 Thanks
 Iain



[sqlalchemy] Re: Get Mapper for Table

2008-04-17 Thread Koen Bok

Got it, thanks!

On Apr 16, 4:26 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Apr 16, 2008, at 10:13 AM, Koen Bok wrote:



  Hey all,

  Before 0.4.5 I used this code to get the mapped class for a table. It
  worked fine, but mapper_registry was made private in 0.4.5 and the
  private function does not behave the same way for some reason. But as
  this code is pretty nasty anyway I was wondering if there was a better
  way to do this. If not, can anyone give me a hint on how to make this
  work again?

 from sqlalchemy.orm import mapper as _mapper

 my_table_registry = {}
 def mapper(cls, tbl, **kwargs):
     my_table_registry[tbl] = cls
     return _mapper(cls, tbl, **kwargs)
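
Usage of that wrapper would then look like this (a sketch; Person and
person_table are placeholder names for your own class and table):

mapper(Person, person_table)            # registers as a side effect
cls = my_table_registry[person_table]   # -> Person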



[sqlalchemy] Get Mapper for Table

2008-04-16 Thread Koen Bok

Hey all,

Before 0.4.5 I used this code to get the mapped class for a table. It
worked fine, but mapper_registry was made private in 0.4.5 and the
private function does not behave the same way for some reason. But as
this code is pretty nasty anyway, I was wondering if there is a better
way to do this. If not, can anyone give me a hint on how to make this
work again?

If you are wondering: I need this for asynchronous updates in
postgres. The modification table holds the table name for a
modification and is inserted by a trigger.

Koen

classTableCache = dict()

def classForTable(tableName):
    """Returns the mapped class for a particular table name."""

    if classTableCache == dict():
        for table in metadata.table_iterator(reverse=False):
            for mapper in mapperlib.mapper_registry.values():
                if table == mapper.base_mapper.mapped_table:
                    # This is an extremely ugly hack that only works
                    # when all parent mapper names are longer than
                    # inherited mapper names
                    if classTableCache.has_key(table.name):
                        if len(str(mapper.class_)) > len(str(classTableCache[table.name])):
                            classTableCache[table.name] = mapper.class_
                    else:
                        classTableCache[table.name] = mapper.class_

    return classTableCache[tableName]



[sqlalchemy] Re: Mapper Issue Upgrading to 0.4.5

2008-04-15 Thread Koen Bok

For anyone who needs this too, there is a small typo in the above
example. It should be:

mapper(PGUser, pg_user, properties={
    'groups': relation(PGGroup, viewonly=True,
        primaryjoin=pg_user.c.usesysid==func.any(pg_group.c.grolist),
        foreign_keys=[pg_group.c.grolist],
        _local_remote_pairs=[(pg_user.c.usesysid, pg_group.c.grolist)])})

On Apr 14, 11:26 pm, Koen Bok [EMAIL PROTECTED] wrote:
 Wow that is fast!

 Thanks man ;-)

 On Apr 14, 5:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:

  On Apr 14, 2008, at 7:44 AM, Koen Bok wrote:

   Hey All,

   I was upgrading our app from 0.4.1 to 0.4.5 and this code broke:

   pg_user = Table('pg_user', metadata,
      Column('usesysid', Integer, primary_key=True),
      Column('usename', Unicode(), unique=True))

   pg_group = Table('pg_group', metadata,
      Column('grosysid', Integer, primary_key=True),
      Column('groname', Unicode(), unique=True),
      Column('grolist', PGArray(unicode)))

   mapper(PGUser, pg_user, properties={
      'groups': relation(PGGroup, viewonly=True,
              primaryjoin=pg_user.c.usesysid==func.any(pg_group.c.grolist),
              foreign_keys=[pg_group.c.grolist])})

  use the trunk and say:

  mapper(PGUser, pg_user, properties={
       'groups': relation(PGGroup, viewonly=True,
           primaryjoin=pg_user.c.usesysid==func.any(pg_group.c.grolist),
           foreign_keys=[pg_group.c.grolist],
           _local_remote_pairs=[pg_user.c.usesysid, pg_group.c.grolist]
           )})

  this doesn't yet handle the full blown functions/casts in primaryjoin  
  conditions ticket but its part of the way there.



[sqlalchemy] Mapper Issue Upgrading to 0.4.5

2008-04-14 Thread Koen Bok

Hey All,

I was upgrading our app from 0.4.1 to 0.4.5 and this code broke:

pg_user = Table('pg_user', metadata,
    Column('usesysid', Integer, primary_key=True),
    Column('usename', Unicode(), unique=True))

pg_group = Table('pg_group', metadata,
    Column('grosysid', Integer, primary_key=True),
    Column('groname', Unicode(), unique=True),
    Column('grolist', PGArray(unicode)))

mapper(PGUser, pg_user, properties={
    'groups': relation(PGGroup, viewonly=True,
        primaryjoin=pg_user.c.usesysid==func.any(pg_group.c.grolist),
        foreign_keys=[pg_group.c.grolist])})

Specify the foreign_keys argument to indicate which columns on the
relation are foreign. % (self.primaryjoin, self))
ArgumentError: Could not determine relation direction for primaryjoin
condition 'pg_user.usesysid = any(pg_group.grolist)', on relation
PGUser.groups (PGGroup). Specify the foreign_keys argument to indicate
which columns on the relation are foreign.

I found this post (1), so I tried upgrading to SVN, but I still got the
error. Does it have anything to do with the any() function and the
PGArray column type?

Koen

(1)
http://groups.google.com/group/sqlalchemy/browse_thread/thread/f8e5b5fa07ed9ab9/749beae25624c2f8?lnk=gstq=foreign_keys#749beae25624c2f8





[sqlalchemy] Re: Mapper Issue Upgrading to 0.4.5

2008-04-14 Thread Koen Bok

Wow that is fast!

Thanks man ;-)

On Apr 14, 5:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Apr 14, 2008, at 7:44 AM, Koen Bok wrote:





  Hey All,

  I was upgrading our app from 0.4.1 to 0.4.5 and this code broke:

  pg_user = Table('pg_user', metadata,
     Column('usesysid', Integer, primary_key=True),
     Column('usename', Unicode(), unique=True))

  pg_group = Table('pg_group', metadata,
     Column('grosysid', Integer, primary_key=True),
     Column('groname', Unicode(), unique=True),
     Column('grolist', PGArray(unicode)))

  mapper(PGUser, pg_user, properties={
     'groups': relation(PGGroup, viewonly=True,
             primaryjoin=pg_user.c.usesysid==func.any(pg_group.c.grolist),
             foreign_keys=[pg_group.c.grolist])})

 use the trunk and say:

 mapper(PGUser, pg_user, properties={
      'groups': relation(PGGroup, viewonly=True,
          primaryjoin=pg_user.c.usesysid==func.any(pg_group.c.grolist),
          foreign_keys=[pg_group.c.grolist],
          _local_remote_pairs=[pg_user.c.usesysid, pg_group.c.grolist]
          )})

 this doesn't yet handle the full blown functions/casts in primaryjoin  
 conditions ticket but its part of the way there.



[sqlalchemy] Re: downtime

2008-03-24 Thread Koen Bok

Hey Mike,

We (Sofa) just sponsored a year of 512 MB hosting. Check the PayPal. And
keep up the good work!

Koen Bok

- madebysofa.com

On Mar 24, 8:39 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 another theory is that I had some vhost config that wasn't synced up  
 on both DNS servers that I use.   not sure if that could have created  
 some confusing situation on the apache side if something changed DNS-
 wise.

 On Mar 24, 2008, at 2:39 PM, Michael Bayer wrote:



  Well, I dont think anything exotic caused this downtime.  Looks pretty
  much like Django user traffic was suddenly pointed at the website via
  Michael Trier's blog and some child processes got hung.   So we
  basically got Django'ed :)

  The site is just a 256 meg slicehost running four hosts on only four
  child apache procs, so if our traffic is starting to hit the
  next level i might need to up it to 512



[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread Koen Bok

Hey Manlio,

My feeling is that this is out of the scope of the SQLAlchemy project.
It should support the official workings of each database backend, not
extend them. It is pretty easy to implement this yourself and use
SQLAlchemy on top of that (I use a lot of triggers in my app). There
may be something to say for pythonizing triggers so they become
database agnostic, but I am not even sure that is possible given
all the different triggers different databases can have.

What do you mean by 'unstable internals' in SQLAlchemy?

Kindest regards,

Koen Bok
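
For illustration, the wiki recipe comes down to emitting the trigger DDL
yourself, e.g. (a sketch for a hypothetical note.id_item -> item.id
reference):

engine.execute("""
    CREATE TRIGGER fki_note_id_item
    BEFORE INSERT ON note
    FOR EACH ROW BEGIN
        SELECT RAISE(ROLLBACK, 'insert on "note" violates foreign key')
        WHERE (SELECT id FROM item WHERE id = NEW.id_item) IS NULL;
    END;
""")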

On Jan 28, 12:03 pm, Manlio Perillo [EMAIL PROTECTED] wrote:
 Hi.

 In this wiki page: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers

 there is a recipe on how to add basic foreign key support to SQLite
 using triggers.

 Is it possible to implement this recipe in SQLAlchemy?

 Some time ago I have tried to implement it, by adding support for
 triggers in SQLAlchemy (so that they can be automatically
 created/dropped) but I have abandoned the project because the internals
 of SQLAlchemy are unstable.

 Thanks, Manlio Perillo



[sqlalchemy] Polymorphic from multiple columns

2007-12-03 Thread Koen Bok

I'd like to make a polymorphic mapper based on two columns. Is that
possible?

See example code here: http://paste.pocoo.org/show/13799/

Thanks, Koen
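
In case the mapper insists on a single polymorphic_on column, one
workaround sketch is to collapse both values into one discriminator
column (all names below are made up, and whether this fits depends on
the model):

from sqlalchemy import Table, Column, Integer, String, MetaData
from sqlalchemy.orm import mapper

metadata = MetaData()

item_table = Table('item', metadata,
    Column('id', Integer, primary_key=True),
    # the two type columns collapsed into one, e.g. 'order:sale'
    Column('discriminator', String(50)))

class Item(object): pass
class SaleItem(Item): pass

mapper(Item, item_table, polymorphic_on=item_table.c.discriminator)
mapper(SaleItem, inherits=Item, polymorphic_identity='order:sale')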



[sqlalchemy] Expire Objects from other Thread

2007-12-01 Thread Koen Bok

This is quite hackish, but just an experiment for me.

We use listen/notify in our app. A separate thread is listening for
notifications from updated rows in the table. If one is received I'd
like to expire an object in the Session cache so it gets refetched the
next time it is accessed. I have it working, but I get a lot of these:

AttributeError: 'ReceiveRequest' object has no attribute
'_instance_key'

I guess that's because the background thread is marking objects as
expired while the main thread is performing actions on them.

Is there a way to make this work? Or is it not a good idea at all?

Some example code of my implementation: http://paste.pocoo.org/show/13402/
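
One way to avoid mutating the session from the listener thread (a
sketch; on_notify, process_expirations and the classForTable lookup are
made-up names): queue the keys and expire on the main thread between
events.

import Queue  # Python 2 stdlib

pending = Queue.Queue()

# listener thread: just record what changed
def on_notify(table_name, row_id):
    pending.put((table_name, row_id))

# main thread, called periodically (e.g. from the UI idle loop)
def process_expirations(session):
    while not pending.empty():
        table_name, row_id = pending.get()
        obj = session.get(classForTable(table_name), row_id)
        if obj is not None:
            session.expire(obj)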




[sqlalchemy] Re: Expire Objects from other Thread

2007-12-01 Thread Koen Bok

Cool, thanks!

I got it working pretty well, I think. I replaced the iterator code with
a Session.get. Check out the code here: http://paste.pocoo.org/show/13434/

The only thing I am still wondering is whether Session.get also returns
objects in Session.new? Otherwise this won't fix the other bug. I
think it does, as my tests pass fine.

You just earned yourself another sushi meal ;-)

Koen

On Dec 1, 8:40 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Dec 1, 2007, at 9:33 AM, Koen Bok wrote:



  This is quite hackish, but just an experiment for me.

  We use listen/notify in our app. A seperate thread is listening for
  notifications from updated rows in the table. If one is received I'd
  like to expire an object in the Session cache so it gets refetched the
  next time it is accessed. I have it working, but I get a lot of these:

  AttributeError: 'ReceiveRequest' object has no attribute
  '_instance_key'

  I guess that's because the background thread is marking objects as
  expired while the main thread is performing actions on them.

 _instance_key doesn't get removed by the expire operation, so thats  
 something else.  i notice youre iterating through the whole session  
 which will return objects from new as well, so you might want to  
 iterate just through session.identity_map.values() there.   also i  
 dont understand the point of that little iterate through  
 session.registry.registry but the main idea seems OK.



[sqlalchemy] Polymorphic Mapper with Custom Column Type

2007-11-25 Thread Koen Bok

Hi, I am trying to store numeric values for an object inverted in the
database. For that I created a custom TypeDecorator. It works fine.
But I only want to apply it to one inherited object type for that
object. See example code at: http://paste.pocoo.org/show/12411/

Thanks, Koen



[sqlalchemy] Merge Questions

2007-11-20 Thread Koen Bok

I have some questions about pickling/merging objects.

I have written example code to demonstrate: http://pastie.caboo.se/120146

Kindest regards,

Koen Bok



[sqlalchemy] Re: Merge Questions

2007-11-20 Thread Koen Bok

Aight, thanks for the explanation!

Koen

On Nov 20, 5:02 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 20, 2007, at 6:19 AM, Koen Bok wrote:



  I have some questions about pickling/merging objects.

  I have written example code to demonstrate:http://pastie.caboo.se/120146

  Kindest regards,

 OK ive rewritten that error message in r3809 since it made no sense.  
 now it will say:

 Could not update instance '[EMAIL PROTECTED]', identity key (<class
 '__main__.User'>, (1,), None); a different instance with the same
 identity key already exists in this session.

 the error is that you need to use the return value of merge:

 user1 = Session.merge(user1, dont_load=True)

 as for the dirty list, i think we might need to put a more friendly  
 dirty accessor on there (or a note in the docs)...the merge process  
 sets attributes in the normal way so that things like backrefs fire  
 off, but a side effect is that its flipping on the modified flag on  
 every object.  the modified flag is just a cue to add the object to  
 the flush process, but if nothing actually changed on it then it wont  
 be updated.

 A more expensive check, i.e. comparing the attribute values on the  
 instances, would reveal that they arent dirty after all, and this is  
 the check that happens during flush.

 we cant *really* preserve the modified flag here from merged to
 already-present, since if A1.foo == 'ed', and A2.foo == 'jack', both
 have no modified flag, but then you merge A2 on top of A1, A1 *is*
 actually modified.



[sqlalchemy] Re: SQLAlchemy 0.4.1 released

2007-11-18 Thread Koen Bok

Thanks Mike. Cool stuff!

On Nov 19, 1:26 am, Michael Bayer [EMAIL PROTECTED] wrote:
 Hello alchemers -

 This is an awesome release.  I'm excited about this one. With our new  
 shiny clean 0.4 codebase, internals are starting to look a lot more  
 intelligent, and new things are becoming possible.  Call counts are  
 going down like a rock.   Intents and behaviors are clarifying and  
 sharpeningplus Super Mario Galaxy arrives tomorrow so its time for  
 a break.

 Some highlights of this release:

 - you might notice that some eager load operations are suddenly a lot  
 faster, particularly on MySQL.  This is because we've improved the  
 queries that are issued when you use eager loading with LIMIT and/or  
 OFFSET; whereas we previously would wrap the LIMITed query in a  
 subquery, join back to the mapped table, and then add the eager  
 criterion outer joined against the mapped table, a trick we've been  
 doing since 0.1.0, we now outer join the eager criterion directly  
 against the subquery, and the main mapper pulls rows straight from the  
 subquery columns.  Improved SQL expression functionality has allowed  
 this to be possible.  What it means is, an eager load with LIMIT/
 OFFSET uses one less JOIN in all cases.   This is an example of SA's  
 very rich expression constructs paying off - since a query that is  
 much more efficient on the database side trumps the hundred or so  
 method calls spent compiling the query anyday.

 - session.refresh() and session.expire() can now operate on individual  
 instance attributes.   Just say session.expire(myobject, ['items',  
 'description', 'name']), and all three of those attributes, whether  
 they're just columns or relations to other objects, will go blank  
 until you next access them on the instance, at which point they are  
 refreshed from the DB.  Column attributes will be grouped together in  
 a single select() statement and related tables will be lazy loaded  
 individually right now.  Also, the internal mechanisms used by  
 deferred() columns, refresh/expire operations, and polymorphically  
 deferred columns have all been merged into one system, which means  
 less internal complexity and more consistent behavior.

 - the API of the session has been hardened.  This means its going to  
 check more closely that operations make sense (and it also no longer  
 raises some errors that did not make sense in certain circumstances).  
 The biggest gotcha we've observed so far from people using trunk is  
 that session.save() is used *only* for entities that have not been  
 saved to the database yet.  If you put an already-stored instance in  
 save(), you'll get an error.  This has always been the contract, it  
 just hasn't complained previously. If you want to put things in the  
 session without caring if they've already been saved or not, use  
 session.save_or_update(myinstance).  We've also fixed things regarding  
 entities that have been de-pickled and placed back into the session -  
 some annoying errors that used to occur have been fixed.

 - still in the session category, the merge() method gets a  
 dont_load=True argument.  Everyone using caches like memcached can  
 now place copies of their cached objects back in the session using  
 myinstance = merge(mycachedinstance, dont_load=True), and the  
 instance will be fully copied as though it were loaded from the  
 database, *without* a load operation proceeding; it will trust that  
 you want that instance state in the session.

 - query.options() are way more intelligent.  Suppose you have a large  
 bidirectional chain of relations.   If you say something like  
 query.options(eagerload('orders.items.keywords.items.orders')), it  
 will accurately target the 'orders' relation at the end of that chain  
 and nothing else.  On a similar topic, self-referential eagerloads can  
 be set up on the fly, such as  
 query.options(eagerload_all('children.children.children')) without  
 needing to set the join_depth flag on relation().

 - method call overhead continues to be cut down.  Many expensive calls  
 in statement compilation, clauseelement construction, and statement  
 execution have been whacked away completely and replaced with simpler  
 and more direct behaviors, and results are more accurate and correct.  
 This continues along from all that we've done in 0.4 and at this point  
 most call counts should be half of what they were in 0.3.   I invite  
 everyone to take a tour around expression.py, compiler.py, and  
 critique; we've had a huge amount of housecleaning in these modules  
 (and others), and further suggestions/ideas/flames are entirely  
 welcome (though not too early in the morning) on sqlalchemy-devel.

 - in the fringe category, you can now define methods like __hash__(),  
 __nonzero__(), and __eq__() on your mapped instances and the ORM won't  
 get confused; we've rearranged things so that those methods are not  
 accessed by the ORM.

 - a 

[sqlalchemy] Temporary Disable Session.mapper with Scoping

2007-11-15 Thread Koen Bok

Hi,

I use scoped_session in our project for auto object saving, which is
great. But when I initialize a new database I need to flush some
objects with another engine. I could not find a nice way to do that, so
I created a new session with its own engine, merged the objects
into that one and cleared them from my Session object. That worked, but
all their relations were lost, and they have a parent-child relation.
How do I do this?

Sample code uploaded as test_scope.py

Koen



[sqlalchemy] Re: Temporary Disable Session.mapper with Scoping

2007-11-15 Thread Koen Bok

Ok, I could do that.

On Nov 15, 4:09 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 15, 2007, at 9:06 AM, Koen Bok wrote:



  Hi,

  I use scoped_session in our project for auto object saving which is
  great. But when I initialize a new database I need to flush some
  objects with another engine. I could not find a nice way to do that so
  I created a new session with it's own engine and merged the objects
  into that one and cleared the from my Session object. That worked, but
  all their relations are lost and they have a parent child relation.
  How to do this?

  Sample code uploaded as test_scope.py

 it would be easier if you just didnt use autosave, id really love to  
 remove that feature entirely.



[sqlalchemy] Re: Object Init Bug?

2007-11-14 Thread Koen Bok

Yep. That was it. Stupid me...

On Nov 13, 4:10 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 13, 2007, at 8:25 AM, Koen Bok wrote:

  Hi, I have this strange bug or problem...

  I have a basic class like this mapped to a table with the same name.
  Metatype is a relation to the metatype object.

  class MetaData(DBObject):

 def __init__(self, metatype, value, item):

 self.metatype = metatype
 self.item = item
 self.value = value

  class MetaType(DBObject):

 def __init__(self, name, parent=None, offset=None):

 self.parent = parent
 self.offset = offset
 self.name = name

  Now when I create a MetaData object like this I get the following
  error:

   >>> nameMetaType = MetaType('name')
   >>> product = Product(Decimal('20.00'), Decimal('15.00'))
   >>> Session.commit()

   >>> MetaData(nameMetaType, 'Koen', product)

  Traceback (most recent call last):
   method testMetabug in test_connection.py at line 16
 MetaData(nameMetaType, 'Koen', product)
   method init in attributes.py at line 1025
 oldinit(instance, *args, **kwargs)
  TypeError: __init__() takes at most 3 arguments (4 given)

 what happens if you change the name MetaData to something else ?  
 your error below seems to indicate that your MetaData class/object is  
 getting treated like a SQLAlchemy MetaData object.





[sqlalchemy] Object Init Bug?

2007-11-13 Thread Koen Bok

Hi, I have this strange bug or problem...

I have a basic class like this mapped to a table with the same name.
Metatype is a relation to the metatype object.

class MetaData(DBObject):

    def __init__(self, metatype, value, item):
        self.metatype = metatype
        self.item = item
        self.value = value

class MetaType(DBObject):

    def __init__(self, name, parent=None, offset=None):
        self.parent = parent
        self.offset = offset
        self.name = name

Now when I create a MetaData object like this I get the following
error:

>>> nameMetaType = MetaType('name')
>>> product = Product(Decimal('20.00'), Decimal('15.00'))
>>> Session.commit()

>>> MetaData(nameMetaType, 'Koen', product)

Traceback (most recent call last):
  method testMetabug in test_connection.py at line 16
MetaData(nameMetaType, 'Koen', product)
  method init in attributes.py at line 1025
oldinit(instance, *args, **kwargs)
TypeError: __init__() takes at most 3 arguments (4 given)

That seems odd to me, as it clearly takes 4 arguments: self, metatype,
value, item. But I tried it anyway with 3 arguments to see what it
did:

>>> MetaData(nameMetaType, 'Koen')

Traceback (most recent call last):
  method testMetabug in test_connection.py at line 16
MetaData(nameMetaType, 'Koen')
  method init in attributes.py at line 1025
oldinit(instance, *args, **kwargs)
  method __init__ in schema.py at line 1070
self.reflect()
  method reflect in schema.py at line 1193
available = util.OrderedSet(bind.engine.table_names(schema,
AttributeError: 'MetaType' object has no attribute 'engine'

Extra info: The item is a polymorphic mapper to product, employee
etc... I use scoped_session and created my mappers with
Session.mapper.

If you need more info let me know...

Thanks,

Koen





[sqlalchemy] Code Organisation

2007-11-07 Thread Koen Bok

Hi there,

We have a pretty large project by now and we run into import loops. So
I decided to restructure the code, and I hoped some people with more
experience can comment on this.

The basic problem is this:

We have the database object code, mappers and tables neatly organized
in one module (db). The controller code imports this module to get
access to these objects. All fine.

But we have another object called Connection which is a singleton
class that actually manages the connection to our database. It is
basically a wrapper for create_engine and contextual_session. But next
to that it keeps info about the current login state like the employee,
location etc. The mapped database objects in turn need this info,
to add the current user to a new object etc. So the Connection object
depends on the Mapped Database Objects, but the Mapped Database Objects
depend on the Connection object too.

Anyone got a good tip to solve this? Or designed something similar?

Thanks, Koen
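
One common fix is to defer the import to call time, so neither module
needs the other at import time (a sketch with made-up module names):

# db/objects.py
def _connection():
    # imported when first needed, not at module import time,
    # which breaks the db <-> connection import cycle
    from connection import Connection
    return Connection.shared()

class Note(object):
    def __init__(self, text):
        self.text = text
        self.employee = _connection().employee  # current login state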





[sqlalchemy] Load Lazy Relation in Separate Thread

2007-09-13 Thread Koen Bok

Hi, I am doing some optimizations on our app, and one thing I was
wondering is if I can have a lazy relation load in a separate
thread.

The way I'd like to use it is like this. I have a list with products,
and these products show a graph with stock levels. To draw this graph,
the object fetches its relation 'stock'. The stock object is built like
this:

SELECT
    amount - (SELECT sum(qty) FROM orders
              WHERE id_product=stock.id_product) as available
FROM stock
WHERE id_product=1234;

This is because we need to correct stock levels with the already
placed orders to get a good idea of availability.

This query takes fairly long (depending on the orders), so I'd like to
draw a gray bar initially, start a thread to fetch the stock object,
and draw it once we have the result. Is this possible? If so, can
anyone give me a hint how to start?

Thanks,

Koen
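
A sketch of the shape this could take (made-up names; each worker uses
its own session, and only plain values cross back to the UI thread,
where Stock is the mapped class behind the relation above):

import threading
from sqlalchemy.orm import create_session

def fetch_available(product_id, callback):
    def work():
        session = create_session()   # private session for this thread
        try:
            available = session.query(Stock).filter_by(
                id_product=product_id).first().available
        finally:
            session.close()
        callback(available)          # marshal back to the UI thread yourself
    threading.Thread(target=work).start()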





[sqlalchemy] Re: reviving migrate

2007-09-09 Thread Koen Bok

I have no info on this, but I have seen the current owner around on this
list and the migrate group.

It would be awesome if migrate would work again.

Koen

On Sep 9, 2:53 pm, Jan Dittberner [EMAIL PROTECTED]
wrote:
 I read the discussion regarding the status of the migrate tool and
 would like to help with this project.

 I built the Debian package for migrate
 (http://packages.qa.debian.org/m/migrate.html)
 which is now available in Debian testing and I use
 migrate with the SA version contained in etch (0.3.1). Maybe I could
 find some time to look at the existing code and create a version that
 works with current SA. Is someone here with a better insight on what
 is broken in migrate now, to get me started?

 Regards
 Jan Dittberner





[sqlalchemy] Re: Get Table/Column Sequence Names

2007-09-08 Thread Koen Bok

Thanks for the tip!

I solved it this way now and it works pretty well.

if True in [c.primary_key for c in tableName.c]:
    sequenceName = '%s_id_seq' % tableName

Koen
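
An alternative (assuming Postgres 8.0 or later) is to ask the database
itself for the sequence behind a serial column, e.g. for the id column
of a 'request' table:

seq_name = engine.execute(
    "SELECT pg_get_serial_sequence('request', 'id')").scalar()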

On Sep 9, 12:52 am, Orest Kozyar [EMAIL PROTECTED] wrote:
 I don't know if SQLAlchemy has an easy way of determining this metadata, but
 you can likely query the system tables directly.  pg_attrdef stores this
 information in the adbin/adsrc columns.  You can parse the adsrc column to
 get the sequence name.  You can filter your query easily by table and/or
 column.  
  See http://www.postgresql.org/docs/8.2/interactive/catalog-pg-attrdef.html
  for more info.

 Hopefully someone else who knows SQLAlchemy better will know whether it's
 possible to easily extract this info via metadata.  

  Orest

  -----Original Message-----
  From: sqlalchemy@googlegroups.com
  [mailto:[EMAIL PROTECTED] On Behalf Of Koen Bok
  Sent: Saturday, September 08, 2007 6:17 PM
  To: sqlalchemy
  Subject: [sqlalchemy] Get Table/Column Sequence Names

  For sqlalchemy auto generated sequences (postgres) is there a
  nice way to find out:

  - If a table/column has any
  - If so, what are the names

  Thanks! Koen





[sqlalchemy] max/coalesce bug in 0.4?

2007-09-06 Thread Koen Bok

Hi there, I am upgrading my app to 0.4 and while it's going pretty
well, I encountered something strange.

I have the following code:

request_table = Table('request', metadata,
    Column('id', Integer, primary_key=True),
    ...
    Column('metanumberstate', Integer, nullable=False,
        default=func.coalesce(func.max(metanumber_table.c.id), 0)),
    ...
)

And this is bound to the Request object. But when I init a new
instance and commit it I get an error (see below). But this always
worked (0.3.x) so I have no clue what I am doing wrong. Anyone has a
hint?

ProgrammingError: (ProgrammingError) missing FROM-clause entry for
table metanumber
LINE 1: ...uest), NULL, 5, 1, 2, 3, 2, NULL, 7,
coalesce(max(metanumber...
 ^
 'INSERT INTO request (id, number, id_parent, id_item, id_employee,
id_terminal, id_location, srcstocktype, dststocktype, metadatastate,
metanumberstate, metataxstate, quantity, discount, override_price,
allocation, date, is_business, has_tax, is_quote) VALUES (%(id)s,
(SELECT coalesce(max(number), 0) + 1 FROM request), %(id_parent)s, %
(id_item)s, %(id_employee)s, %(id_terminal)s, %(id_location)s, %
(srcstocktype)s, %(dststocktype)s, %(metadatastate)s,
coalesce(max(metanumber.id), %(coalesce)s), coalesce(max(metatax.id), %
(coalesce_1)s), %(quantity)s, %(discount)s, %(override_price)s, %
(allocation)s, now(), %(is_business)s, %(has_tax)s, %
(is_quote)s)' {'id_item': 5L, 'is_business': False, 'srcstocktype': 2,
'is_quote': False, 'coalesce_1': 0, 'id_terminal': 2L, 'id_parent':
None, 'id_employee': 1L, 'allocation': 0, 'coalesce': 0,
'id_location': 3L, 'has_tax': True, 'override_price': None,
'dststocktype': None, 'discount': None, 'quantity': 1, 'id': 1L,
'metadatastate': 7L}





[sqlalchemy] Re: max/coalesce bug in 0.4?

2007-09-06 Thread Koen Bok

Tried that, but it just places the select statement within the insert
statement without brackets:

2007-09-06 18:00:57,603 INFO sqlalchemy.engine.base.Engine.0x..90
INSERT INTO request (id, metanumberstate) VALUES (%(id)s, SELECT
coalesce(max(metanumber.id), %(coalesce)s)
FROM metanumber)
2007-09-06 18:00:57,604 INFO sqlalchemy.engine.base.Engine.0x..90
{'coalesce': 0, 'id': 1L}
2007-09-06 18:00:57,609 INFO sqlalchemy.engine.base.Engine.0x..90
ROLLBACK

If I put the brackets in by hand, it works fine...
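
A workaround sketch, assuming a text() clause is accepted as a column
default (not verified on 0.4): spell the default out with the
parentheses included:

from sqlalchemy import text

Column('metanumberstate', Integer, nullable=False,
    default=text('(SELECT coalesce(max(id), 0) FROM metanumber)'))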

On Sep 6, 5:53 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Sep 6, 2007, at 10:28 AM, Koen Bok wrote:



  Aight, that would be cool. So this still has to be implemented then,
  right?

 embedded select works eh ?  try making your default that:

 default=select([func.max(metanumber.id)])





[sqlalchemy] Connection Timeout

2007-09-05 Thread Koen Bok

My app saves bookmarks to different Postgres servers and tries to
connect at login. But if the server from the bookmark is not reachable
or down, it only times out after about 2 minutes. So I could either
check if the host is available before connecting (telnetlib connection
to the Postgres port) or put the test connection in a thread and kill
it if there is no connection after x seconds. Or is there a nice way
to solve this from within SQLAlchemy?

Ciao

Koen
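
For the record, the timeout can be set at connect time via libpq's
connect_timeout parameter (in seconds), passed through create_engine's
connect_args:

from sqlalchemy import create_engine

engine = create_engine('postgres://user@host/db',
    connect_args={'connect_timeout': 5})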





[sqlalchemy] Re: Generate new id

2007-08-19 Thread Koen Bok

Dear Gennady,

I'm afraid flushing is your only option, as you need the next number
from the database sequence. What I do is flush some objects in
their __init__ methods already.

Kindest regards,

Koen Bok
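
Concretely, firing the sequence yourself could look like this (a sketch;
'mytable_id_seq' stands for your sequence name, and it assumes executing
a Sequence standalone returns its next value, as in later SQLAlchemy
versions):

from sqlalchemy import Sequence

conn = engine.connect()
new_id = conn.execute(Sequence('mytable_id_seq'))  # nextval, safe across processes
obj = MyObject(id=new_id)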

On Aug 19, 7:22 am, Gennady [EMAIL PROTECTED] wrote:
 Hello,

 I'm a new sqlalchemy user. I have a question about generating ids.

 I have a class with an __init__ method, and after some initialization
 __init__ calls events. Event handlers must know the id of the new object.

 But if I use a Sequence I don't know the new id in the __init__ method
 before I flush changes to the database. And I can have two or more
 processes that connect to the database. If I generate the id in Python
 there may be a conflict when the two processes save objects.

 What is the right way to resolve this problem? How do I generate the id
 and use transactions?

 Thank you.

 Gennady.





[sqlalchemy] Serialize ClauseLists

2007-08-14 Thread Koen Bok

For our app we'd like the user to construct and save search queries
that we execute on the database. My first thought was to construct
something like:

predicate = and_(or_(item.c.id > 5, item.c.id < 3),
    item.c.name.like('aap'))

And save this to a pickle column in the database. But it gives:

raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle module objects

I could write wrappers around or_ and and_ and the binary expressions
etc., but I have the feeling there must be a better approach. Anyone got
a hint?
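
One approach that avoids pickling clause objects altogether: store the
criteria as plain data and rebuild the expression on load (a sketch; the
operator vocabulary is whatever your UI needs):

from sqlalchemy import and_, or_

OPS = {
    'gt':   lambda col, val: col > val,
    'lt':   lambda col, val: col < val,
    'like': lambda col, val: col.like(val),
}

def build(table, spec):
    kind = spec[0]
    if kind in ('and', 'or'):
        combine = and_ if kind == 'and' else or_
        return combine(*[build(table, child) for child in spec[1]])
    kind, col, val = spec
    return OPS[kind](table.c[col], val)

# plain tuples/lists pickle (or JSON) without trouble
saved = ('and', [('or', [('gt', 'id', 5), ('lt', 'id', 3)]),
                 ('like', 'name', 'aap')])
predicate = build(item, saved)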





[sqlalchemy] Cumulative Select

2007-07-24 Thread Koen Bok

I need to do a cumulative select on postgres, and I got it to work
with some extreme SQL query, but I was wondering if it could be done
more easily, without having to drop to SQL, using SQLAlchemy
statements.

Let's say I have a table with stock transactions like this:

Transaction
id
id_product
price
quantity

And it is filled like this:

1   1   12  10
2   1   13  5
3   1   12  3
4   1   11  6
5   1   10  5

Now at moment X my stock is 13 and I want to know the costs for each
product in my stock. So I add a cumulative column to select on and
expect to get the last three rows back, as their cumulative total is >=
my stock:

                        CUM
1   1   12  10  29
2   1   13  5   19
3   1   12  3   14  <- this
4   1   11  6   11  <- this
5   1   10  5   5   <- and this...

Extra info:

This is the query I currently use to get the transaction ID and offset
back:

SELECT
    f1.id,
    (
        SELECT
            coalesce(sum(quantity), 0)
        FROM transaction f2
        WHERE f2.id >= f1.id
        AND f2.id_item = %s
    ) - %s as offset
FROM
    transaction f1
WHERE f1.id_item = %s
AND %s <= (
    SELECT
        coalesce(sum(quantity), 0)
    FROM transaction f2
    WHERE f2.id >= f1.id
    AND f2.id_item = %s
)
ORDER BY f1.id DESC LIMIT 1





[sqlalchemy] Re: Cumulative Select

2007-07-24 Thread Koen Bok

And again you made my day...

On Jul 24, 7:17 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 I had to put one little trick in here to make the subquery-minus-
 something arithmetic work, which is something i should look into;
 otherwise it went straight in.

 from sqlalchemy import *

 transaction = table('transaction',
  column('id'),
  column('id_product'),
  column('price'),
  column('quantity')
  )

 f1 = transaction.alias('f1')
 f2 = transaction.alias('f2')

 subquery = select([func.coalesce(func.sum(f2.c.quantity))],
     and_(f2.c.id >= f1.c.id, f2.c.id_product==bindparam('f2item')),
     scalar=True)

 s = select([f1.c.id,
     subquery.label('foo') - bindparam('offset').label('offset')],
     and_(
         f1.c.id_product==bindparam('f1item'),
         bindparam('something') <= subquery
     ), order_by=[desc(f1.c.id)], limit=1
 )

 print s

 I think in 0.4 im going to deprecate scalar=True and instead have
 you say select(...).scalar()

 On Jul 24, 2007, at 9:38 AM, Koen Bok wrote:



  I need to do a cumulative select on postgres, and I got it to work
  with some extreme SQL query, but I was wondering if it could be done
  more easily without having to drop to SQL but with SQLAlchemy
  statements.

  Let's say I have a table with stock transactions like this:

  Transaction
 id
 id_product
 price
 quantity

  And it is filled like this:

  1  1   12  10
  2  1   13  5
  3  1   12  3
  4  1   11  6
  5  1   10  5

  Now at moment X my stock is 13 and I want to know the costs for each
  product in my stock. So I add a cumulative column to select on and
  expect to get the last three rows back, as their cumulative total is >=
  my stock:

                         CUM
  1  1   12  10  29
  2  1   13  5   19
  3  1   12  3   14  <- this
  4  1   11  6   11  <- this
  5  1   10  5   5   <- and this...

  Extra info:

  This is the query I currently use to get the transaction ID and offset
  back:

  SELECT
     f1.id,
     (
         SELECT
             coalesce(sum(quantity), 0)
         FROM transaction f2
         WHERE f2.id >= f1.id
         AND f2.id_item = %s
     ) - %s as offset
  FROM
     transaction f1
  WHERE f1.id_item = %s
  AND %s <= (
     SELECT
         coalesce(sum(quantity), 0)
     FROM transaction f2
     WHERE f2.id >= f1.id
     AND f2.id_item = %s
  )
  ORDER BY f1.id DESC LIMIT 1





[sqlalchemy] Many to many to same table

2007-07-20 Thread Koen Bok

Hello, I am trying to do something odd, but I was wondering if this
could be done with SQLAlchemy. See the script for details:

from sqlalchemy import *

metadata = BoundMetaData('sqlite://')

class Item(object):
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return '%s:%s' % (self.__class__.__name__, self.name)


item_table = Table('item', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)))

collection_table = Table('collection', metadata,
    Column('id_coll', Integer, ForeignKey('item.id'), nullable=False),
    Column('id_item', Integer, ForeignKey('item.id'), nullable=False))

item_mapper = mapper(Item, item_table, properties=dict(
    collections=relation(Item, secondary=collection_table, lazy=True,
        backref='items')))

metadata.create_all()

session = create_session()

session.save(Item('Koen Bok'))
session.save(Item('Dirk Stoop'))

session.flush()

# And now we should be able to do something like this:

items = session.query(Item).select()

for item in items:
    for i in items:
        item.items.append(i)
        item.collections.append(i)





[sqlalchemy] Re: Many to many to same table

2007-07-20 Thread Koen Bok

Thanks Michael, thats just too cool!

On Jul 20, 10:35 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 yup, just spell it out

 item_mapper = mapper(Item, item_table, properties=dict(
 collections=relation(Item, secondary=collection_table ,lazy=True,  
 primaryjoin=item_table.c.id==collection_table.c.id_coll,
 secondaryjoin=item_table.c.id==collection_table.c.id_item,
 backref='items')))

 ...

 # do a more reasonable collection setup than the original test

 (koen, dirk) = session.query(Item).order_by(desc
 (item_table.c.name)).all()

 koen.collections.append(dirk)
 assert dirk in koen.collections
 assert koen in dirk.items
 session.flush()
 session.clear()

 (koen, dirk) = session.query(Item).order_by(desc
 (item_table.c.name)).all()
 assert dirk in koen.collections
 assert koen in dirk.items

 On Jul 20, 2007, at 4:00 PM, Koen Bok wrote:

  from sqlalchemy import *

  metadata = BoundMetaData('sqlite://')

  class Item(object):
 def __init__(self, name):
 self.name = name
 def __repr__(self):
 return '%s:%s' % (self.__class__.__name__, self.name)

  item_table = Table('item', metadata,
 Column('id', Integer, primary_key=True),
 Column('name', String(50)))

  collection_table = Table('collection', metadata,
     Column('id_coll', Integer, ForeignKey('item.id'), nullable=False),
     Column('id_item', Integer, ForeignKey('item.id'), nullable=False))

  item_mapper = mapper(Item, item_table, properties=dict(
 collections=relation(Item, secondary=collection_table ,lazy=True,
  backref='items')))

  metadata.create_all()

  session = create_session()

  session.save(Item('Koen Bok'))
  session.save(Item('Dirk Stoop'))

  session.flush()

  # And now we should be able to do something like this:

  items = session.query(Item).select()

  for item in items:
 for i in items:
 item.items.append(i)
 item.collections.append(i)





[sqlalchemy] No update in cascade

2007-07-06 Thread Koen Bok

I have a strange situation. If someone _really_ needs to know why, I
could explain.

When an object gets deleted by SQLAlchemy, its related objects get
updated, setting the foreign keys that refer to the deleted
item to NULL. But what if I don't want that to happen? Viewonly is
not an option here, as I need to be able to make changes.

Example

Moment x: Creating item

Item id=3 name='test'

Moment x+1: Creating note

Item id=3 name='test'
Note id=9 id_item=3

Moment x+2: Deleting item

Note id=9 id_item=None

But I want it still to be:

Note id=9 id_item=3
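
One way to get that behavior (a sketch, assuming an Item/Note relation
and a SQLAlchemy version that has the passive_deletes flag on
relation(); the class names are from the example above):

mapper(Note, note_table)
mapper(Item, item_table, properties={
    'notes': relation(Note, passive_deletes=True)})

session.delete(item)
session.flush()   # no "UPDATE note SET id_item=NULL" is emitted

With passive_deletes=True the unit of work leaves dependent rows alone
on delete, so id_item keeps its value as long as the database itself
has no ON DELETE rule that rewrites it.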





[sqlalchemy] Re: Get URL from engine

2007-04-29 Thread Koen Bok

That would be very nice. You have my vote.

Koen

On Apr 29, 5:13 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Apr 28, 2007, at 10:34 AM, Koen Bok wrote:



  Is there a nice way to get the URL from a given engine?

 not at the moment...the url goes into a create function and gets  
 thrown away.  have been meaning to just connect url to engine  
 when the engine is created so its available as engine.url.
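
In the meantime a workable stopgap is simply to keep the URL yourself
when the engine is created (a sketch):

url = 'postgres://localhost/Test'
engine = create_engine(url)
engine_urls = {engine: url}   # our own bookkeeping until engine.url exists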





[sqlalchemy] GRANT weirdness.

2007-04-29 Thread Koen Bok

I want to grant permissions through sqlalchemy, but for some reason if
I let an engine execute the sql it does not work:

Code:

from sqlalchemy import *

engine = create_engine('postgres://localhost/Test')
engine.echo = True

engine.execute('CREATE USER testuser;')
engine.execute('GRANT SELECT ON TABLE testtable TO testuser;')

Result:

Test=# \z testtable
   Access privileges for database Test
 Schema |     Name      | Type  | Access privileges
--------+---------------+-------+--------------------
 public | checkout_info | table | {koen=arwdxt/koen}
(1 row)

And when I do the exact same sql in psql

Test=# CREATE USER testuser;
CREATE ROLE
Test=# GRANT SELECT ON TABLE testtable TO testuser;
GRANT
Test=# \z testtable
   Access privileges for database Test
 Schema | Name | Type  |         Access privileges
--------+------+-------+------------------------------------
 public | Test | table | {koen=arwdxt/koen,testuser=r/koen}
(1 row)

I never had any problems doing any other sql stuff like creating
triggers etc with this. Anyone got a hint?

Thanks,

Koen





[sqlalchemy] Re: GRANT weirdness.

2007-04-29 Thread Koen Bok

And of course the first result should be this in the above mail:

 Test=# \z testtable
Access privileges for database Test
  Schema |   Name    | Type  | Access privileges
 --------+-----------+-------+--------------------
  public | testtable | table | {koen=arwdxt/koen}
 (1 row)





[sqlalchemy] Re: GRANT weirdness.

2007-04-29 Thread Koen Bok

Yep that's it.

If anyone has the same just do this:

engine.execute('BEGIN;GRANT SELECT ON TABLE checkout_info TO
testuser;COMMIT;')

Ciao

Koen

On Apr 29, 4:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Apr 29, 2007, at 9:28 AM, Koen Bok wrote:



  I want to grant permissions through sqlalchemy, but for some reason if
  I let an engine execute the sql it does not work:

  Code:

  from sqlalchemy import *

  engine = create_engine('postgres://localhost/Test')
  engine.echo = True

  engine.execute('CREATE USER testuser;')
  engine.execute('GRANT SELECT ON TABLE testtable TO testuser;')

 my only guess is that the GRANT needs a COMMIT issued afterwards  
 (which is a postgres weirdness in itself, that schema changes are  
 part of transactions).  SA's autocommit logic is based on the  
 statement text containing INSERT, UPDATE, ALTER etc. but i dont think  
 GRANT is in there.   so use an explicit connection/transaction for now.
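
For reference, the explicit connection/transaction version Michael
suggests looks something like this (a sketch):

conn = engine.connect()
trans = conn.begin()
conn.execute('CREATE USER testuser;')
conn.execute('GRANT SELECT ON TABLE testtable TO testuser;')
trans.commit()   # the GRANT only sticks once the transaction commits
conn.close()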





[sqlalchemy] Catching username password errors

2007-04-23 Thread Koen Bok

I am trying to catch username/password errors to show in the
interface. But somehow I cannot catch the exception. I wrap everything
in a try/except, but the exception is still thrown and my app stops
functioning. What am I doing wrong?

Koen

loginInfo = (
    self.username,
    self.password,
    store.server.address,
    store.server.port,
    store.tableName.replace(' ', '\ '))

engineURL = 'postgres://%s:%s@%s:%s/%s' % loginInfo


try:
    engine = create_engine(engineURL)
    try:
        c = engine.connect()
        c.close()
    finally:
        engine.dispose()
except:
    self.showError(UserPasswordError)
    return False
else:
    self.showError(None)


# Make the actual connection here, we should improve this
try:
    engine.connect()
    result = engine.execute(text('SELECT NOW()'))
except:
    engine.dispose()
    self.showError(UserPasswordError)
    return False
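
One more thing that may help: catching SQLAlchemy's wrapped DB-API
error instead of using a bare except keeps unrelated bugs from being
swallowed (a sketch, assuming the 0.3-era sqlalchemy.exceptions
module):

from sqlalchemy import exceptions

try:
    c = engine.connect()   # a bad username/password fails here
    c.close()
except exceptions.DBAPIError:
    self.showError(UserPasswordError)
    return False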





[sqlalchemy] Postgres user management and SQLAlchemy

2007-04-16 Thread Koen Bok

I wondered if it was possible to manage users inside postgres with the
help of SQLAlchemy. But I guess Postgres users are special objects and
not just rows in a table. I tried to do this, but it did not work.

from sqlalchemy import *

metadata = BoundMetaData('postgres://127.0.0.1/template1')

pg_authid = Table('pg_authid', metadata, autoload=True)
pg_auth_members = Table('pg_auth_members', metadata, autoload=True)

For some reason, I get a KeyError back. Would this be possible at
all?

Koen
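
System catalogs such as pg_authid don't declare primary keys the way
ordinary tables do, which is a likely reason the reflection fails.
Plain SQL against the catalog is a sketch of a workaround (rolname
and rolcanlogin are real pg_authid columns):

result = engine.execute('SELECT rolname, rolcanlogin FROM pg_authid')
for row in result:
    print row.rolname, row.rolcanlogin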





[sqlalchemy] Re: Find out which objects were deleted

2007-04-15 Thread Koen Bok

Yep, and I want to know them after the flush :-)

On Apr 15, 7:11 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 session has a deleted accessor with all items marked as deleted.  
 (its cleared out on flush()).

 On Apr 15, 2007, at 11:20 AM, Koen Bok wrote:



  Is there a nice way to find out if an object has been deleted?
  Of course I could set a deleted = False on every mapped object and
  subclass sqlalchemy's session object to set it to true when a delete()
  is called on an object. But isn't there a less complicated way?

  Koen
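
Snapshotting the accessor right before the flush keeps the list
around afterwards (a sketch):

deleted_objects = list(session.deleted)   # copy; flush() clears the accessor
session.flush()
# deleted_objects still holds everything that was just deleted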





[sqlalchemy] Re: Column to default to itself

2007-04-13 Thread Koen Bok

Ok, I'd rather handle it on the database level. Is that just a matter
of creating a function and calling it on insert?

Koen

On Apr 13, 4:47 am, Ants Aasma [EMAIL PROTECTED] wrote:
 On Apr 13, 2:47 am, Jorge Godoy [EMAIL PROTECTED] wrote:

  IF you insist on doing that at your code, make the column UNIQUE (or a
  PK...) and write something like this pseudocode:

  def save_data():
      def insert_data():
          try:
              unique_column_value = get_max_from_unique_column
              Class(unique_column_value + 1, 'other data')
          except YourDBExceptionForConstraintViolation:
              sleep(random.random())
              insert_data()

  The 'sleep(random.random())' is there to avoid constant clashes and to
  be fair to all connections that are inserting data on your table.

 To get an uninterrupted number sequence you need to serialize your
 inserts to that specific entity, for which you basically need locking.
 The quoted approach is optimistic locking, where you hope that no one
 tries to insert another row between when you use the
 get_max_from_unique_column and do the database commit, but are ready
 to retry if that expectation fails. Another way would be to use
 pessimistic locking, by doing the get_max_from_unique_column query
 with lockmode='update'. Then any other thread trying to insert another
 row while you're busy inserting yours will have to wait.





[sqlalchemy] Re: Column to default to itself

2007-04-13 Thread Koen Bok

request_table = Table('request', metadata,
    Column('id', Integer, primary_key=True),
    Column('number', Integer, unique=True, nullable=True,
        default=text('(SELECT coalesce(max(number), 0) + 1 FROM request)')))

This seems to work well. But is this a good way to do it, or can it
cause complications?

On Apr 13, 11:23 am, Koen Bok [EMAIL PROTECTED] wrote:
 Ok, I'd rather handle it on the database level. Is that just a matter
 of creating a function and calling it on insert?

 Koen

 On Apr 13, 4:47 am, Ants Aasma [EMAIL PROTECTED] wrote:

  On Apr 13, 2:47 am, Jorge Godoy [EMAIL PROTECTED] wrote:

   IF you insist on doing that at your code, make the column UNIQUE (or a
   PK...) and write something like this pseudocode:

   def save_data():
       def insert_data():
           try:
               unique_column_value = get_max_from_unique_column
               Class(unique_column_value + 1, 'other data')
           except YourDBExceptionForConstraintViolation:
               sleep(random.random())
               insert_data()

   The 'sleep(random.random())' is there to avoid constant clashes and to
   be fair to all connections that are inserting data on your table.

  To get an uninterrupted number sequence you need to serialize your
  inserts to that specific entity, for which you basically need locking.
  The quoted approach is optimistic locking, where you hope that no one
  tries to insert another row between when you use the
  get_max_from_unique_column and do the database commit, but are ready
  to retry if that expectation fails. Another way would be to use
  pessimistic locking, by doing the get_max_from_unique_column query
  with lockmode='update'. Then any other thread trying to insert another
  row while you're busy inserting yours will have to wait.
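
A sketch of the pessimistic route Ants describes, serializing the
competing inserts with a PostgreSQL table lock (table and column
names taken from this thread):

from sqlalchemy import text

conn = engine.connect()
trans = conn.begin()
# block concurrent writers until commit so max(number) stays valid
conn.execute(text('LOCK TABLE request IN SHARE ROW EXCLUSIVE MODE'))
next_number = conn.execute(
    text('SELECT coalesce(max(number), 0) + 1 FROM request')).scalar()
conn.execute(request_table.insert(), {'number': next_number})
trans.commit()
conn.close()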





[sqlalchemy] Re: Putting Session.flush in a separate thread

2007-04-10 Thread Koen Bok

  the elegant way would be to not worry about the thread scope of a  
 connection and just use connection pooling normally.  pull out a  
 connection when needed, close it when complete (which is really just  
 a return to the connection pool).  keep the scope of those two  
 operations local to a single thread.

This is exactly what I'd like to do. But I have no clue how to assign
a new connection to a session in a new thread. This is what I'm doing:

from threading import Thread

class FlushThread(Thread):

    def __init__(self, session):
        super(FlushThread, self).__init__()
        self.session = session
        self.start()

    def run(self):
        print self.session
        #engine.connect()
        #self.session.bind_to(engine)
        self.session.flush()
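
A sketch of what Michael describes: check a connection out of the
pool inside the worker thread, bind the work to it, and return it
when done (bind_to per the 0.3-era create_session signature):

def run(self):
    conn = engine.connect()              # checkout happens in this thread
    try:
        session = create_session(bind_to=conn)
        # ... put the objects to be flushed into this session ...
        session.flush()
    finally:
        conn.close()                     # returns the connection to the pool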





[sqlalchemy] Putting Session.flush in a separate thread

2007-04-09 Thread Koen Bok

We are building a GUI app, and we were thinking about wrapping
session.flush() in a thread with a timer that detects a timeout. That
way we would have better performance and we can generate warnings if
the connection goes down. Do you guys think this is smart, or are
there complications?

I tried to build this already but I cannot figure out how to assign
the shared connection to a thread. I always get 'No connection
defined'.





[sqlalchemy] Re: Changing lazy property for primary mapper on the fly

2007-03-26 Thread Koen Bok

Ok, thanks!

On Mar 26, 4:26 am, Michael Bayer [EMAIL PROTECTED] wrote:
 no.  do a clear_mappers() and build your mappers again if you need to  
 change the base configuration.

 On Mar 25, 2007, at 3:35 PM, Koen Bok wrote:



  I get this, but that's only on a particular query object. That makes
  sense, but is there no way to 'globally' set this in the mapper?

  Koen

  On Mar 25, 4:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 http://www.sqlalchemy.org/docs/
  datamapping.html#datamapping_selectrelations_options

  On Mar 25, 2007, at 6:50 AM, Koen Bok wrote:

  Is it possible to change the lazy property of a primary mapper of an
  object's relation on the fly so from then on it will change the eager
  loading of that relation.

  I want to use this for a small debug window in my app where I can
  change these on the fly to test which settings are optimal for the
  connection.

  Koen
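
A sketch of that rebuild, toggling the base eager/lazy setting from
the debug window (mapper layout abbreviated from this thread):

def rebuild_mappers(lazy):
    clear_mappers()
    mapper(Item, item_table)
    mapper(Request, request_table, properties={
        'i': relation(Item, lazy=lazy, backref='requests')})

rebuild_mappers(lazy=False)   # the relation is now eager-loaded everywhere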





[sqlalchemy] Unwanted Cascaded Deletes

2007-03-25 Thread Koen Bok

I have a mapper like this:

mapper(Request, request_table, properties={
    [SOME MORE STUFF]
    'stock': relation(Stock, lazy=True, uselist=False,
        primaryjoin=and_(
            request_table.c.id_item==stock_request.c.id_product,
            request_table.c.id_location==stock_request.c.id_location,
            request_table.c.id_stocktype==stock_request.c.id_stocktype),
        foreign_keys=[request_table.c.id_item,
            request_table.c.id_location,
            request_table.c.id_stocktype])})

Whenever I remove a request object that has a stock object associated,
it deletes the stock object too. When I change the foreign_keys to
stock_request.c.id_product, stock_request.c.id_location,
stock_request.c.id_stocktype it tries to update the related object
with NULL values.

But I want it to do nothing at all! I played with the cascade
rules, but they all either delete or update.

Thanks!

Koen





[sqlalchemy] Changing lazy property for primary mapper on the fly

2007-03-25 Thread Koen Bok

Is it possible to change the lazy property of a primary mapper of an
object's relation on the fly, so from then on it will change the eager
loading of that relation?

I want to use this for a small debug window in my app where I can
change these on the fly to test which settings are optimal for the
connection.

Koen





[sqlalchemy] Re: Changing lazy property for primary mapper on the fly

2007-03-25 Thread Koen Bok

I get this, but that's only on a particular query object. That makes
sense, but is there no way to 'globally' set this in the mapper?

Koen

On Mar 25, 4:51 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 http://www.sqlalchemy.org/docs/
 datamapping.html#datamapping_selectrelations_options

 On Mar 25, 2007, at 6:50 AM, Koen Bok wrote:



  Is it possible to change the lazy property of a primary mapper of an
   object's relation on the fly so from then on it will change the eager
  loading of that relation.

  I want to use this for a small debug window in my app where I can
  change these on the fly to test which settings are optimal for the
  connection.

  Koen
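
The per-query form from those docs looks something like this (a
sketch; the exact import location of eagerload is an assumption for
the 0.3-era API):

from sqlalchemy.orm import eagerload

requests = session.query(Request).options(eagerload('i')).select()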





[sqlalchemy] Mapper with custom outerjoin and sum functions

2007-03-25 Thread Koen Bok

Hello again!

I need a mapper that does an outer join on another table.

stock_request_join = sql.outerjoin(request_table, stock_table, and_(
    request_table.c.id_item==stock_table.c.id_product,
    request_table.c.id_location==stock_table.c.id_location,
    request_table.c.id_stocktype==stock_table.c.id_stocktype))

stock_request = select(
    [stock_request_join] +
    [stock_table.c.quantity.op('-')(
        func.sum(request_table.c.quantity)).label('unordered')] +
    [stock_table.c.quantity.op('-')(
        func.sum(request_table.c.allocation)).label('unallocated')],
    group_by=[c for c in stock_request_join.c],
    correlate=False).alias('stock_request')

This generates the right SQL, but I can't get it to work with the
original column names; then I get name-ambiguity errors. When I do it
with the joined names, the mapper tries to insert a request too when I
want to create stock, which makes kind of sense.

When I remove [stock_request_join] and insert [c for c in
stock_request_join.c] the JOIN syntax is being removed and it does a
regular select against request and stock.

Basically I want to end up with a regular Stock object with the
aggregate columns request_table.c.quantity and
request_table.c.allocation. I have it working with a regular select,
but if there is no request in the table linked to a stock, it does not
return that stock when doing a session.query(Stock).select(). Left
outer joining request would do the trick.

Thanks in advance!

Koen





[sqlalchemy] Eager loading self referential mapper

2007-03-23 Thread Koen Bok

Could anyone please explain to me why self-referential mappers
cannot be eager-loaded? Is this not yet implemented in SQLAlchemy, or
is it theoretically impossible or impractical?

Ciao

Koen





[sqlalchemy] Re: Eager loading self referential mapper

2007-03-23 Thread Koen Bok

Yeah, I am sure the tree will never be deeper than 3 or 4 levels.

Koen

On Mar 23, 5:24 pm, Andreas Jung [EMAIL PROTECTED] wrote:
 --On 23 March 2007 16:14:26 +0000 Koen Bok [EMAIL PROTECTED] wrote:



   Could anyone please explain to me why self-referential mappers
   cannot be eager-loaded? Is this not yet implemented in SQLAlchemy, or
   is it theoretically impossible or impractical?

 Do you really want to load a possibly *huge* tree using eager loading?

 -aj

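
Worth noting: the eager loader of the day did not alias the table
once per recursion level, hence the restriction. Later SQLAlchemy
releases added a join_depth flag on relation() for exactly this
bounded-tree case; a sketch, assuming a version that has it:

mapper(Item, item_table, properties={
    'children': relation(Item, lazy=False, join_depth=3,
        backref=backref('parent', remote_side=[item_table.c.id]))})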





[sqlalchemy] Re: Using mapper with custom select creates unneeded subquery

2007-03-22 Thread Koen Bok

It worked! Thanks a bunch!
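
For anyone skimming the archive: the fix was passing correlate=False
to the nested select so its FROM list stays intact, along these lines
(tables as defined later in this thread; one aggregate shown for
brevity):

stock_request = select(
    [c for c in stock_table.c] +
    [(stock_table.c.quantity -
        func.sum(request_table.c.quantity)).label('unordered')],
    and_(request_table.c.id_item==stock_table.c.id_product,
        request_table.c.id_location==stock_table.c.id_location,
        request_table.c.id_stocktype==stock_table.c.id_stocktype),
    group_by=[c for c in stock_table.c],
    correlate=False).alias('stock_request')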

On Mar 22, 5:06 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 clearly correlate=False has to be more prominently mentioned, not  
 just in the FAQ but in the main docs, there should be a section  
 specifically on subqueries and their mechanics.

 On Mar 22, 2007, at 6:53 AM, King Simon-NFHD78 wrote:



  This caught me out a couple of weeks ago, and I've seen a couple of
  other similar questions as well. You need to add 'correlate=False' to
  the nested select.

  I wonder if this should be added to the FAQ?

  Hope that helps,

  Simon

  -----Original Message-----
  From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
  On Behalf Of Koen Bok
  Sent: 22 March 2007 10:47
  To: sqlalchemy
  Subject: [sqlalchemy] Re: Using mapper with custom select creates
  unneeded subquery

  Let me post some sample code with that:

  mapper(Request, request_table, properties={
 'children' : relation(
 Request,

  primaryjoin=request_table.c.id_parent==request_table.c.id,
  backref=backref('parent',
  remote_side=[request_table.c.id])),
 'i': relation(Item,
  primaryjoin=item_table.c.id==request_table.c.id_item,
  backref='requests', lazy=True),
 [SOME MORE STUFF]
 'stock': relation(Stock, primaryjoin=and_(

  request_table.c.id_item==stock_table.c.id_product,

  request_table.c.id_location==stock_table.c.id_location,

  request_table.c.id_stocktype==stock_table.c.id_stocktype),
 foreign_keys=[stock_table.c.id_product,
  stock_table.c.id_location,
  stock_table.c.id_stocktype])})

  stock_request = select(
 [c for c in stock_table.c] + \
 [stock_table.c.quantity.op('-')
  (func.sum(request_table.c.quantity)).label('unordered')] + \
 [stock_table.c.quantity.op('-')
  (func.sum(request_table.c.allocation)).label('unallocated')],
 and_(
 request_table.c.id_item==stock_table.c.id_product,
 request_table.c.id_location==stock_table.c.id_location,

  request_table.c.id_stocktype==stock_table.c.id_stocktype),
 group_by=[c for c in stock_table.c]).alias('stock_request')

  mapper(Stock, stock_request, properties={
 'product': relation(Item,
  primaryjoin=item_table.c.id==stock_table.c.id_product,
  backref='_stock'),
 'location': relation(Item,
  primaryjoin=item_table.c.id==stock_table.c.id_location),
 'stocktype': relation(StockType)})

  If you need more, just let me know!

  Koen

  On Mar 22, 11:42 am, Koen Bok [EMAIL PROTECTED] wrote:
  Thanks for the reply! If the performance is about equal, that's fine!

  But I think I might have found a bug.

  When I make a selection it generates the following (faulty) SQL  
  query:

  SELECT
  stock_request.id_stocktype AS stock_request_id_stocktype,
  stock_request.unordered AS stock_request_unordered,
  stock_request.id_location AS stock_request_id_location,
  stock_request.id_product AS stock_request_id_product,
  stock_request.unallocated AS stock_request_unallocated,
  stock_request.quantity AS stock_request_quantity,
  stock_request.id AS stock_request_id FROM
  (
  SELECT
  stock.id AS id,
  stock.id_stocktype AS id_stocktype,
  stock.id_product AS id_product,
  stock.id_location AS id_location,
  stock.quantity AS quantity,
  (stock.quantity - sum(request.quantity)) AS  
  unordered,
  (stock.quantity - sum(request.allocation)) AS
  unallocated
  FROM request
  WHERE
  request.id_item = stock.id_product
  AND
  request.id_location = stock.id_location
  AND
  request.id_stocktype = stock.id_stocktype
  GROUP BY
  stock.id,
  stock.id_stocktype,
  stock.id_product,
  stock.id_location,
  stock.quantity,
  stock.quantity
  ) AS stock_request, stock
  WHERE
  stock.id_product = 5
  AND
  stock.id_location = 7
  AND
  stock.id_stocktype = 1
  ORDER BY
  stock_request.id
  LIMIT 1

  The FROM in the subquery should be: FROM request, stock

  The strange thing is that whenever I print the subquery's SQL, it has
  stock in the FROM and therefore is correct.

  Or am I not understanding it right?

  Koen

  On Mar 22, 2:58 am, Michael Bayer [EMAIL PROTECTED] wrote:

  when you pass a selectable to the mapper, the mapper considers that
  selectable to be encapsulated, in the same way as a table is.  the
  Query cannot add any extra criterion to that selectable directly
  since it would modify the results and corrupt the meaning, if not
  the actual syntax, of the selectable itself.  therefore the mapper
  is always going to select * from (your selectable) - it's the only
  way to guarantee the correct results.


[sqlalchemy] Complicated Mapper with Count Function

2007-02-25 Thread Koen Bok

Dear all.

I have to make a complicated mapper, and I need a little help.

We have a list of products. These products have stock; each individual
stock item has an entry in the stockproduct table. So to get the total
stock we need to count the items in the stock database. We can filter
them by a particular stock.

So I made a function to create a mapper to do just that. But there are
two problems:

- It's not working well: if the count is 0 (no stock), the product
does not appear in the results at all (see the outer-join sketch
after the table definitions below).
- I have the feeling this can be optimized further, but I can't see
how (maybe put it in a join or subquery?)

The function

def productStockMapper(stockList):

    or_list = or_()
    for stock in stockList:
        or_list.append(stockproduct_table.c.stock_id==stock.id)

    s = select([product_table,
            func.count(stockproduct_table.c.id).label('stock')],
        and_(
            stockproduct_table.c.product_id==product_table.c.id,
            or_list),
        group_by=[c for c in product_table.c]).alias('count_select')

    return mapper(Product, s, non_primary=True)

The tables:

product_table = Table('products', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(255), nullable=False),
    Column('code', Unicode(255), unique=True, nullable=False))

stockproduct_table = Table('stockproducts', metadata,
    Column('id', Integer, primary_key=True),
    Column('stock_id', Integer, ForeignKey('stocks.id'), nullable=False),
    Column('product_id', Integer, ForeignKey('products.id'),
        nullable=False))

stock_table = Table('stocks', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', Unicode(255), nullable=False))
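
A sketch of the outer-join variant that keeps zero-stock products in
the result: move the stock filter into the join condition so that
unmatched products survive with a count of 0 (same tables and names
as above):

def productStockMapper(stockList):
    or_list = or_()
    for stock in stockList:
        or_list.append(stockproduct_table.c.stock_id==stock.id)

    # LEFT OUTER JOIN keeps products with no stockproduct rows; the
    # stock filter lives in the ON clause, not the WHERE clause
    join_cond = and_(
        stockproduct_table.c.product_id==product_table.c.id, or_list)
    s = select(
        [product_table,
            func.count(stockproduct_table.c.id).label('stock')],
        from_obj=[product_table.outerjoin(stockproduct_table, join_cond)],
        group_by=[c for c in product_table.c]).alias('count_select')

    return mapper(Product, s, non_primary=True)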

