[sqlalchemy] Re: [Twisted-Python] SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Chris Withers

Matthew Williams wrote:
 From previous posts to this and other lists, it seems that ORMs and  
threads don't get along too well...


What makes you think that?

and, as far as I can tell, there's  
no way to get away from threads if you don't want longish queries to  
block your entire application.


Right, SQLAlchemy doesn't play nicely with *non-threaded* environments, 
from my understanding, which may well be wrong ;-)


It took me quite some time to piece together everything I could find  
related to sAsync (which seems to be a dead but functional project),  
so I threw up a quick Trac page for it at http://sasync.org.


Cool. What is it you're doing that needs to mix Twisted and SQLAlchemy?

cheers,

Chris

PS: I've CC'ed the SQLAlchemy list in as well as I think people there 
may want to add to this discussion...


--
Simplistix - Content Management, Batch Processing & Python Consulting
   - http://www.simplistix.co.uk




[sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Matthew Williams


On Mar 26, 2010, at 3:20 AM, Chris Withers wrote:


Matthew Williams wrote:
From previous posts to this and other lists, it seems that ORMs  
and  threads don't get along too well...


What makes you think that?


First of all, most of my impressions about ORMs come from SQLAlchemy.   
This quote from this list (http://twistedmatrix.com/pipermail/twisted-python/2009-March/019359.html 
) sums up what I have found as well:


It's much trickier if you want to use the ORM, unless you are very
careful to fully eager load every thing in any possible database
operation if you have need of the information subsequently in your
twisted code. Otherwise you may block unexpectedly simply when
accessing your objects, and end up with database operations from the
main twisted thread.

So perhaps I should have said SQL Alchemy's ORM and threads don't get  
along too well... that's not to say it's impossible, you just have to  
be exceedingly careful how you use it.


and, as far as I can tell, there's  no way to get away from threads  
if you don't want longish queries to  block your entire application.


Right, SQLAlchemy doesn't play nicely with *non-threaded*  
environments, from my understanding, which may well be wrong ;-)


It took me quite some time to piece together everything I could  
find  related to sAsync (which seems to be a dead but functional  
project), so I threw up a quick Trac page for it at http://sasync.org.


Cool. What is it you're doing that needs to mix Twisted and  
SQLAlchemy?


The project (an internal project) doesn't really *need* to mix them...  
I could just use mysqldb.


Matthew




Re: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Chris Withers

Matthew Williams wrote:


It's much trickier if you want to use the ORM, unless you are very
careful to fully eager load every thing in any possible database
operation if you have need of the information subsequently in your
twisted code. Otherwise you may block unexpectedly simply when
accessing your objects, and end up with database operations from the
main twisted thread.

So perhaps I should have said SQL Alchemy's ORM and threads don't get 
along too well... that's not to say it's impossible, you just have to 
be exceedingly careful how you use it.


I think you have the wrong end of the stick.
SQLAlchemy and threads play fine, SQLAlchemy and Twisted's asynchronous 
model, which refuses to use threads on principle, do not, for the 
reasons you describe.


It took me quite some time to piece together everything I could find  
related to sAsync (which seems to be a dead but functional project),  
so I threw up a quick Trac page for it at http://sasync.org.


Cool. What is it you're doing that needs to mix Twisted and SQLAlchemy?


The project (an internal project) doesn't really *need* to mix them... I 
could just use mysqldb.


Heh, wrong end of the stick again; my question was why you needed to use 
Twisted ;-)


cheers,

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk




RE: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread King Simon-NFHD78
 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:sqlalch...@googlegroups.com] On Behalf Of Matthew Williams
 Sent: 26 March 2010 12:10
 To: sqlalchemy@googlegroups.com; twisted-pyt...@twistedmatrix.com
 Subject: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync
 
 
 On Mar 26, 2010, at 3:20 AM, Chris Withers wrote:
 
  Matthew Williams wrote:
  From previous posts to this and other lists, it seems that ORMs  
  and  threads don't get along too well...
 
  What makes you think that?
 
 First of all, most of my impressions about ORMs come from 
 SQLAlchemy.   
 This quote from this list 
 (http://twistedmatrix.com/pipermail/twisted-python/2009-March/
019359.html 
 ) sums up what I have found as well:
 
 It's much trickier if you want to use the ORM, unless you are very
 careful to fully eager load every thing in any possible database
 operation if you have need of the information subsequently in your
 twisted code. Otherwise you may block unexpectedly simply when
 accessing your objects, and end up with database operations from the
 main twisted thread.
 
 So perhaps I should have said SQL Alchemy's ORM and threads 
 don't get  
 along too well... that's not to say it's impossible, you 
 just have to  
 be exceedingly careful how you use it.
 

I think that point should be clarified, so that people don't later come
across this post and just accept it without understanding.

I imagine that SQLAlchemy is used in a lot of threaded applications. For
example, it is the recommended ORM in web frameworks such as Pylons and
TurboGears, which work fine in threaded environments. However, typically
in these libraries a web request is handled by a single thread, and all
the SQLAlchemy operations occur within the scope of that request. As
long as you don't share a Session instance between the threads, you
won't have any problems. SQLAlchemy provides a ScopedSession class which
helps in these situations, as you can call the constructor many times on
a single thread and always get the same session instance back for that
thread. Sessions themselves aren't thread-safe.
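
A minimal sketch of that per-thread pattern (the engine URL and the
request hook below are placeholders, not code from this thread):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('sqlite://')                    # placeholder URL
Session = scoped_session(sessionmaker(bind=engine))

def handle_request():
    session = Session()    # each thread gets, and keeps, its own session
    try:
        # ... all ORM work for this request uses `session` ...
        session.commit()
    finally:
        Session.remove()   # discard this thread's session when done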

When an instance is loaded from the database, it is linked to the
session that loaded it. This means that when you have lazy-loading
properties on that instance (such as related classes, or deferred column
properties), they will be automatically loaded when they are accessed,
in the same session.

This will cause a problem if you load an instance in thread A, hand the
object off to thread B, and then thread B accesses one of these
lazy-loading properties. The load will occur in thread A's session,
which might be in the middle of doing something else.

The solution to this is either to eager-load all the attributes you
think you are going to need before handing the instance off to another
thread (difficult), or (probably better) to detach (expunge) the
instance from thread A's session. Thread B should then merge the object
into its own session (using the load=False flag so that it doesn't
needlessly requery the database).

The Session docs at http://www.sqlalchemy.org/docs/session.html explain
the lifecycle of loaded instances.
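
A minimal sketch of that expunge/merge hand-off (SomeModel and the two
session arguments are placeholders, not code from this thread):

def detach_for_handoff(session_a, model, obj_id):
    obj = session_a.query(model).get(obj_id)
    session_a.expunge(obj)            # detach from thread A's session
    return obj

def attach_in_other_thread(session_b, obj):
    # re-attach without re-querying the row, per the load=False note above
    return session_b.merge(obj, load=False)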

I haven't actually done any of this - I've only ever used SA from TG and
command-line scripts, but I think the principles are about right. I hope
that helps,

Simon




Re: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Matthew Williams


On Mar 26, 2010, at 7:16 AM, Chris Withers wrote:
Cool. What is it you're doing that needs to mix Twisted and  
SQLAlchemy?
The project (an internal project) doesn't really *need* to mix  
them... I could just use mysqldb.


Heh, wrong end of the stick again; my question was why you needed to  
use Twisted ;-)


I'm using txjsonrpc to build a JSON-RPC server. If you have  
suggestions for alternatives to txjsonrpc, I'm certainly open to them,  
as this project is just getting started.


Matthew




Re: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Fernando Takai
Hi!,

I'm using SQLAlchemy on a heavily threaded env - something like 30~40
threads working with SQLAlchemy objects.
What you need to watch out for is:

* Eager-load objects - getting nasty lazy-load exceptions is not funny
* Take the objects off the session and, if you need to use them
later, merge them into the current thread's session - I did this because I
had some object-is-already-in-a-session exception
* Don't forget to use scoped_session

Beyond those points, working with threaded apps is quite easy with
SQLAlchemy. :)

On Fri, Mar 26, 2010 at 10:12 AM, Matthew Williams mgwilli...@gmail.com wrote:
 Thank you, Simon, for clarifying this and pointing out that part of the
 SQLAlchemy docs... somehow I missed that part :-).

 On Mar 26, 2010, at 7:30 AM, King Simon-NFHD78 wrote:

 I think that point should be clarified, so that people don't later come
 across this post and just accept it without understanding.

 I imagine that SQLALchemy is used in a lot of threaded applications. For
 example, it is the recommended ORM in web frameworks such as Pylons and
 TurboGears, which work fine in threaded environments. However, typically
 in these libraries a web request is handled by a single thread, and all
 the SQLAlchemy operations occur within the scope of that request. As
 long as you don't share a Session instance between the threads, you
 won't have any problems. SQLAlchemy provides a ScopedSession class which
 helps in these situations, as you can call the constructor many times on
 a single thread and always get the session instance back for that
 thread. Sessions themselves aren't thread-safe.

 When an instance is loaded from the database, it is linked to the
 session that loaded it. This means that when you have lazy-loading
 properties on that instance (such as related classes, or deferred column
 properties), they will be automatically loaded when they are accessed,
 in the same session.

 This will cause a problem if you load an instance in thread A, hand the
 object off to thread B, and then thread B accesses one of these
 lazy-loading properties. The load will occur in thread A's session,
 which might be in the middle of doing something else.

 The solution to this is either to eager-load all the attributes you
 think you are going to need before handing the instance off to another
 thread (difficult), or (probably better) to detach (expunge) the
 instance from thread A's session. Thread B should then merge the object
 into its own session (using the load=False flag so that it doesn't
 needlessly requery the database).

 The Session docs at http://www.sqlalchemy.org/docs/session.html explain
 the lifecycle of loaded instances.

 I haven't actually done any of this - I've only ever used SA from TG and
 command-line scripts, but I think the principles are about right. I hope
 that helps,

 Simon






-- 
Fernando Takai




Re: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Chris Withers

King Simon-NFHD78 wrote:

The solution to this is either to eager-load all the attributes you
think you are going to need before handing the instance off to another
thread (difficult), or (probably better) to detach (expunge) the
instance from thread A's session.


Are there any recommended code examples around for doing this?

Once detached, are these objects (I'm guessing for most people they'll 
be instances of declaratively mapped models) pickleable?



Thread B should then merge the object
into its own session (using the load=False flag so that it doesn't
needlessly requery the database).


Good code examples of this around too?

cheers

Chris

--
Simplistix - Content Management, Batch Processing & Python Consulting
- http://www.simplistix.co.uk




Re: [sqlalchemy] The correct usage of use_alt to avoid circular dependency

2010-03-26 Thread Michael Bayer
Tan Yi wrote:
 I want to create a table, say:
 employee_table = Table(
     'employee', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String(255))
 )
 staffGroup_Table = Table(
     'role', metadata,
     Column('manager', None, ForeignKey('employee.id')),
     Column('worker', None, ForeignKey('employee.id')),
     Column('janitor', None, ForeignKey('employee.id'))
 )
 metadata.create_all()

 however this will generate circular dependency, I tried to use use_alt
 = True with ForeignKey constraint , but no luck.
 What is the correct way of creating table for this kind of situation :
 a table refers to another table with a composite foreign keys on the
 same column?

the name of the flag is use_alter, and there is also no circular
dependency above.  employee is created first, role second.   there's
also no composite foreign key represented above; a composite foreign key
is one that references a composite primary key, i.e. a primary key that
consists of more than one column.
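
For reference, a minimal sketch of an actual composite foreign key in that
sense (the node/edge tables here are made up for illustration, not taken
from this thread):

from sqlalchemy import MetaData, Table, Column, Integer, ForeignKeyConstraint

metadata = MetaData()

# the parent has a composite (two-column) primary key
node = Table('node', metadata,
    Column('node_id', Integer, primary_key=True),
    Column('version', Integer, primary_key=True),
)

# the child references both columns at once - a composite foreign key
edge = Table('edge', metadata,
    Column('edge_id', Integer, primary_key=True),
    Column('node_id', Integer),
    Column('node_version', Integer),
    ForeignKeyConstraint(['node_id', 'node_version'],
                         ['node.node_id', 'node.version']),
)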





 Thank you!







Re: [sqlalchemy] Re: SQLAlchemy, Twisted, and sAsync

2010-03-26 Thread Michael Bayer
Fernando Takai wrote:
 Hi!,

 I'm using SQLAlchemy on a heavily threaded env - something like 30~40
 threads working with SQLAlchemy objects.
 What you need to watchout is:

 * Eager load objects - getting nasty lazyload exceptions is not funny
 * Take off the objects from the session and, if you need to use them
 later, merge to the current thread session - i did this because i had
 some object-is-already-on-session-foobar exception
 * Don't forget to use scoped_session

 Beside those two points, working with threaded apps is quite easy with
 SQLAlchemy. :)

even the eager advice above has nothing to do with threads.   that has
to do with using your objects after your session has been closed.   if you
think of your objects as agents of your current database transaction,
and a detached object as something that really needs to be
re-attached to a transaction somewhere so that it can become an agent
once again, no such issues occur.




 On Fri, Mar 26, 2010 at 10:12 AM, Matthew Williams mgwilli...@gmail.com
 wrote:
 Thank you, Simon, for clarifying this and pointing out that part of the
 SQLAlchemy docs... somehow I missed that part :-).

 On Mar 26, 2010, at 7:30 AM, King Simon-NFHD78 wrote:

 I think that point should be clarified, so that people don't later come
 across this post and just accept it without understanding.

 I imagine that SQLALchemy is used in a lot of threaded applications.
 For
 example, it is the recommended ORM in web frameworks such as Pylons and
 TurboGears, which work fine in threaded environments. However,
 typically
 in these libraries a web request is handled by a single thread, and all
 the SQLAlchemy operations occur within the scope of that request. As
 long as you don't share a Session instance between the threads, you
 won't have any problems. SQLAlchemy provides a ScopedSession class
 which
 helps in these situations, as you can call the constructor many times
 on
 a single thread and always get the session instance back for that
 thread. Sessions themselves aren't thread-safe.

 When an instance is loaded from the database, it is linked to the
 session that loaded it. This means that when you have lazy-loading
 properties on that instance (such as related classes, or deferred
 column
 properties), they will be automatically loaded when they are accessed,
 in the same session.

 This will cause a problem if you load an instance in thread A, hand the
 object off to thread B, and then thread B accesses one of these
 lazy-loading properties. The load will occur in thread A's session,
 which might be in the middle of doing something else.

 The solution to this is either to eager-load all the attributes you
 think you are going to need before handing the instance off to another
 thread (difficult), or (probably better) to detach (expunge) the
 instance from thread A's session. Thread B should then merge the object
 into its own session (using the load=False flag so that it doesn't
 needlessly requery the database).

 The Session docs at http://www.sqlalchemy.org/docs/session.html explain
 the lifecycle of loaded instances.

 I haven't actually done any of this - I've only ever used SA from TG
 and
 command-line scripts, but I think the principles are about right. I
 hope
 that helps,

 Simon






 --
 Fernando Takai







[sqlalchemy] Re: The correct usage of use_alt to avoid circular dependency

2010-03-26 Thread Tan Yi
Thanks for the clarification. Sorry, I do not think I understand
circular dependency and composite keys very well.

Also, can you help me read the circular dependency error message? I
mean, how do I find the cycle from the error message of the Circular
Dependency error?
Here is the error message:

sqlalchemy.exc.CircularDependencyError: Circular dependency detected
[(DataValues, Groups), (DataValues, DerivedFrom), (Sites,
TexasStream), (Sites, DataValues), (Sites, ReservoirRatingCurve),
(Sites, Reservoir), (TexasStream, HydrologicUnit), (TexasStream,
MajorWaterRight), (TexasStream, Reservoir), (Reservoir,
MajorWaterRight), (Reservoir, ReservoirRatingCurve), (Reservoir,
StorageInformation), (MajorWaterRight, Reservoir), (HydrologicUnit,
Sites)][]

Thanks a lot...




Re: [sqlalchemy] Clarification about performance and relation()

2010-03-26 Thread masetto
Excuse me,

After your valuable advice, I've modified my code and removed the
manual setting of the foreign key (which was completely wrong). But now
I have another problem, maybe due to another misunderstanding.
(I've moved all my ORM classes into the same module now.)

I am parsing an XML file where I can find something like:

<definition id="1">
  ..
  <platform>Windows XP</platform>
  ..
</definition>

<definition id="2">
  ..
  <platform>Linux</platform>
  ..
</definition>

<definition id="3">
  ..
  <platform>Windows 7</platform>
  <platform>Windows XP</platform>
  <platform>Windows Vista</platform>
  ..
</definition>

When a single platform is associated with a definition (1:1), I expect the
following table layout:

table_platform:

id | platform      | definitionId_fk
---+---------------+----------------
1  | Windows XP    | 1
2  | Linux         | 2

When N platforms are associated with the same definition (N:1), I expect the
following table layout:

id | platform      | definitionId_fk
---+---------------+----------------
3  | Windows 7     | 3
4  | Windows XP    | 3
5  | Windows Vista | 3
5  | Solaris       | 4

For the first case, everything works fine and I get exactly what I am
expecting but, for the second case, I get:

id | platform      | definitionId_fk
---+---------------+----------------
3  | Windows 7     | None
4  | Windows XP    | None
5  | Windows Vista | 3

Maybe it's a stupid problem but I can't figure it out at the moment :/

Code:
...
for definitions in ovalXML._childrenMap['definitions']:
    for definition in definitions.getchildren():
        defInst = ORM_Classes.DefinitionClass(definition)
        ...
        if subElem1.tag == mainNS + "platform":
            platf = ORM_Classes.PlatformClass()

            platf.setPlatform(str(subElem1))

            defInst.PlatformRel = [platf]

            session.add(defInst)
            session.add(platf)

# I perform a commit every 1000 definitions as you suggested :)

DefinitionClass:

class DefinitionClass(Base):
    __tablename__ = 'definitions'

    defId = Column(Integer, primary_key=True)
    ...
    version = Column(String)

    PlatformRel = relation(PlatformClass, backref="definitions")

    def __init__(self, node):
        self.version = node.get("version")
        ...

PlatformClass:

class PlatformClass(Base):
    __tablename__ = 'platform'

    platformId = Column(Integer, primary_key=True)
    platform = Column(String)

    platformId_fk = Column('definitionId_fk', Integer,
                           ForeignKey('definitions.defId'))

    def setPlatform(self, node):
        self.platform = node

What can I do or correct to get the expected result?

Thanks for your patience.
---
Masetto



On Thu, Mar 25, 2010 at 4:56 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 the relationship between two tables requires both the ForeignKey to be
 present as well as the relationship()  (relation() in 0.5) function to be
 present in the mapping.


 masetto wrote:
  From 30 mins to 2mins... shame :P
 
  Thanks Micheal !
 
  Forgive me, what about the other question about foreign keys?
 
 
  On Thu, Mar 25, 2010 at 3:43 PM, Michael Bayer
  mike...@zzzcomputing.comwrote:
 
  masetto wrote:
   Hi all,
  
   i am writing a python script which parse an xml file (python lxml) and
   import it into a sqlite db, and it works.
   The xml file size is about 30Mb and the import operation takes about
  15
   minutes (do you think is too much? is there something i can do to
  speed
  up
   the process?)
  
   This is a piece of the import function:
  
   ...
   for definition in definitions.getchildren(): #iterate for every xml
   children
   node
   defInst = SQLTableBuilder_Definition.DefinitionClass(definition)
  #read
   and write on db some attribute of the node
   ...
   if subbaElem1.tag == mainNS + platform: #another loop iterate
  for
   every sub-node of the definition node
   platf = SQLTableBuilder_Platform.PlatformClass()
   platf.setPlatform(str(subbaElem1))
   platf.platformId_fk = defInst.defId
  
   session.add(platf)
   session.commit()
...
session.add(defInst)
session.commit()
 
 
  don't commit on every node and on every sub-node.  Just commit once
  every
  1000 new objects or so.   will save a ton of processing.
 
 
 
  
   where DefinitionClass contains the attributes declaration
  (primary_key,
   column(string), etc.) and a Foreign Key.
   There is a relation between the definition table and the platform
  table
   (one
   or more platforms - Operating System - can be associated to a single
   definition) so,
   in the platform table, i've added the following: platformId_fk =
   Column('definitionId_fk', Integer, ForeignKey('definitions.defId'))
  
   All my ORM-Classes are declared within n different classes within n
   different 

Re: [sqlalchemy] Re: The correct usage of use_alt to avoid circular dependency

2010-03-26 Thread Michael Bayer
Tan Yi wrote:
 Thanks for clarification. Sorry that I do not think I understand
 circular dependency and compsite key very well.

 Also can you help me to read the circular dependency error message? I
 mean How to find the circle from the error message of Circular
 Dependency.
 Here is the error message:

 sqlalchemy.exc.CircularDependencyError: Circular dependency detected
 [(DataValues, Groups), (DataValues, DerivedFrom), (Sites,
 TexasStream), (Sites, DataValues), (Sites, ReservoirRatingCurve),
 (Sites, Reservoir), (TexasStream, HydrologicUnit), (TexasStream,
 MajorWaterRight), (TexasStream, Reservoir), (Reservoir,
 MajorWaterRight), (Reservoir, ReservoirRatingCurve), (Reservoir,
 StorageInformation), (MajorWaterRight, Reservoir), (HydrologicUnit,
 Sites)][]


this kind of issue is addressed here:

http://www.sqlalchemy.org/docs/mappers.html#rows-that-point-to-themselves-mutually-dependent-rows
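
For reference, a hedged sketch of the pattern that section describes:
mutually dependent rows, with the DDL cycle broken by use_alter on one
ForeignKey and the flush cycle broken by post_update on the relation.
The Widget/Entry names are illustrative only, not from this thread:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relation
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    favorite_entry_id = Column(Integer,
        ForeignKey('entry.id', use_alter=True, name='fk_favorite_entry'))
    # post_update defers this foreign key to an UPDATE issued after the
    # INSERTs, breaking the circular flush dependency
    favorite_entry = relation('Entry',
        primaryjoin='Widget.favorite_entry_id == Entry.id',
        post_update=True)

class Entry(Base):
    __tablename__ = 'entry'
    id = Column(Integer, primary_key=True)
    widget_id = Column(Integer, ForeignKey('widget.id'))
    widget = relation(Widget, primaryjoin='Entry.widget_id == Widget.id')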





 Thanks a lot...







[sqlalchemy] UniqueConstraint case sensitive

2010-03-26 Thread jose soares

Hi all,

I would like to create a UniqueConstraint like this one:

CREATE UNIQUE INDEX uniqinx ON prod(lower(name))

Could you help me to translate it to SQLAlchemy using UniqueConstraint ?

Thank you.

j
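
A hedged sketch of one possible approach: UniqueConstraint itself only
covers plain columns, so the functional index can be attached as raw DDL
that runs after the table is created (the column definitions below are
made up; only the index statement comes from the post):

from sqlalchemy import MetaData, Table, Column, Integer, String, DDL

metadata = MetaData()
prod = Table('prod', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
)

DDL("CREATE UNIQUE INDEX uniqinx ON prod (lower(name))").execute_at(
    'after-create', prod)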









Re: [sqlalchemy] Clarification about performance and relation()

2010-03-26 Thread werner

Hi Masetto,

On 26/03/2010 16:01, masetto wrote:


Maybe it's a stupid problem but i can't figure it out at the moment :/

Code:
...
for definitions in ovalXML._childrenMap['definitions']:
    for definition in definitions.getchildren():
        defInst = ORM_Classes.DefinitionClass(definition)
        session.add(defInst)  # I think this line should be here,
                              # you have it further down

        ...
        if subElem1.tag == mainNS + "platform":
            platf = ORM_Classes.PlatformClass()
            platf.setPlatform(str(subElem1))

            # defInst.PlatformRel = [platf]   # change this to
            platf.definitions = defInst

Werner




[sqlalchemy] Sequences support for CYCLE and MIN/MAX values

2010-03-26 Thread Kent
Any plans to support MINVALUE, MAXVALUE, CYCLE, NOCYCLE for sequences
(for both postgres and oracle)?

I've implemented a subclass of Sequence myself, but it isn't very
elegant, because I'm not familiar enough with the code to know which
methods to override for create() output.




Re: [sqlalchemy] Clarification about performance and relation()

2010-03-26 Thread masetto
First of all, thanks for your answer :)

#   defInst.PlatformRel = [platf]# change this to
platf.definitions = defInst

I don't have any definitions attribute within the PlatformClass; I suppose
you mean the foreign key, isn't it?
That is

platf.platformId_fk = defInst

However, this results in another error:

sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 1 -
probably unsupported type. u'INSERT INTO platform (platform,
definitionId_fk) VALUES (?, ?)' ['Microsoft Windows 2000',
<ORM_Classes.DefinitionClass object at 0x8f5278c>]

I've played a little with it, then I've moved the relation() from
DefinitionClass to PlatformClass:

class PlatformClass(Base):
    __tablename__ = 'platform'

    platformId = Column(Integer, primary_key=True)
    platform = Column(String)

    platformId_fk = Column('definitionId_fk', Integer,
                           ForeignKey('definitions.defId'))
    PlatformRel = relation(DefinitionClass, backref="platform")

and then:

platf.PlatformRel = defInst

Now I got the expected data! It WORKS :P Thanks Werner!

But I need to understand... why is it working now?

From the doc:

We are also free... to define the relationship()
(http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.relationship)
only on one class and not the other. It is also possible to define two
separate relationship() constructs for either direction, which is generally
safe for many-to-one and one-to-many relationships, but not for many-to-many
relationships.

Maybe I haven't well understood the role of the relation()/relationship()
function, but shouldn't it be the same thing to define the relation() within
the DefinitionClass? I've only changed the location of the relation() and
now it works.
Can you kindly explain the role of the relationship() function to me?

Mmm... please correct me if I'm wrong:

- The relationship between the User and Address classes is defined
separately using the relationship()
(http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.relationship)
function.
OK, and that is the only way to define a relation between two tables.

- If I put relationship() in both classes I get a *bidirectional*
relationship

- Because of the *placement* of the foreign key, from Address to User it is
*many to one*...   !!! Oh, is this the point, right? If, in the same class,
I define a foreign key AND a relationship(), I create a many-to-one relation
with the linked table

- ..., and from User to Address it is *one to many* - Is this valid only
in the bidirectional case, or is it automatic when I declare somewhere
foreign key + relationship()?

- Initially I defined the foreign key in the PlatformClass and the
relation() in the DefinitionClass. Which type of relation did I create in
that way?

Thanks again!


On Fri, Mar 26, 2010 at 4:50 PM, werner wbru...@free.fr wrote:

 Hi Masetto,

 On 26/03/2010 16:01, masetto wrote:
 

  Maybe it's a stupid problem but i can't figure it out at the moment :/

 Code:
 ...
 for definitions in ovalXML._childrenMap['definitions']:
for definition in definitions.getchildren():
defInst = ORM_Classes.DefinitionClass(definition)

session.add(defInst)  # I think this line should be here,
 you have it further down

...
if subElem1.tag == mainNS + platform:
platf = ORM_Classes.PlatformClass()
platf.setPlatform(str(subElem1))

 #defInst.PlatformRel = [platf]# change this to

   platf.definitions = defInst

 Werner







[sqlalchemy] Unifying large objects on commit/flush

2010-03-26 Thread Torsten Landschoff
Hi there!

We are considering using SQLAlchemy for a new project. Our first tests
look promising and it is fun to use SA. But I still have a problem
implementing some special features we would like to have.

For example, I want to store large (tens to hundreds of MB) objects into
the database but keep the option open to store them into the filesystem
later. I would prefer storing them into the DB to have them under
transaction protection but OTOH I know that this can become a
performance problem. So I want to build an interface to allow external
(file or extra DB) storage later.

So instead of the blob itself I want to store a cryptographic hash (like
git, Mercurial, Fossil SCM etc. do) and index the real data from that.
If somebody tries to import the same file twice, it should just reuse
the existing blob (and possibly sanity check if the content matches).

The following is a greatly simplified example of that approach.
It works like this, but I would like to do without the exception handler
at the end ;-)

--
import sqlalchemy as sa
import sqlalchemy.orm as orm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
import hashlib

Base = declarative_base()

class CommonStorage(Base):
    __tablename__ = "common"
    hashval = sa.Column(sa.String, primary_key=True)
    value   = sa.Column(sa.LargeBinary)
    def __init__(self, v):
        self.value = v
        self.hashval = hashlib.md5(v).hexdigest()

class StorageUser(Base):
    __tablename__ = "user"
    id    = sa.Column(sa.Integer, primary_key=True)
    ref   = sa.Column(None, sa.ForeignKey(CommonStorage.hashval))
    rel   = orm.relation(CommonStorage)
    value = association_proxy("rel", "value")

engine = sa.create_engine("sqlite:///", echo=True)
Base.metadata.create_all(engine)
session = orm.sessionmaker(bind=engine, autoflush=False)()

ua, ub = StorageUser(), StorageUser()
ua.value = ub.value = "Something"
session.add(ua)
session.commit()
session.add(ub)
try:
    session.commit()
except sa.exc.FlushError:
    # I really like this error handling - we have to rollback first to
    # acknowledge the error. Cool! Never ignore errors and carry on again!
    session.rollback()
    ub.rel = session.query(CommonStorage).filter(ub.rel.hashval ==
        CommonStorage.hashval).first()
    session.add(ub)
    session.commit()
---

I tried using a MapperExtension on the StorageUser and to replace the
ref inside before_insert and before_update by searching for a matching
hash:

-
class StorageExtension(interfaces.MapperExtension):
    def before_insert(self, mapper, connection, instance):
        if instance.rel != None:
            sess = orm.object_session(instance)
            existing = sess.query(CommonStorage).filter(instance.rel.hashval ==
                CommonStorage.hashval).first()
            if existing != None:
                instance.rel = existing
    before_update = before_insert


 class StorageUser(Base):
     __tablename__ = "user"
+    __mapper_args__ = dict(extension=StorageExtension())
-

While the extension gets called and tries to replace the relation, SA
still tries to insert the new entry.


Any way to get this implemented?

Greetings and thanks for any hint, Torsten

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Torsten Landschoff

Office Dresden
Tel: +49-(0)351-4519587
Fax: +49-(0)351-4519561

mailto:torsten.landsch...@dynamore.de
http://www.dynamore.de

Registergericht: Mannheim, HRB: 109659, Sitz: Karlsruhe,
Geschäftsführer:  Prof. Dr. K. Schweizerhof, Dipl.-Math. U. Franz




Re: [sqlalchemy] Clarification about performance and relation()

2010-03-26 Thread Michael Bayer
masetto wrote:

 defInst.PlatformRel = [platf]

don't you mean to be appending here?  defInst.PlatformRel.append(platf)






 session.add(defInst)
 session.add(platf)

 #i perform a commit every 1000 definitions as you suggested :)

 DefinitionClass:

 class DefinitionClass(Base):
 __tablename__ = 'definitions'

 defId = Column(Integer, primary_key=True)
 ...
 version = Column(String)

 PlatformRel = relation(PlatformClass, backref=definitions)

 def __init__(self, node):
 self.version = node.get(version)
 ...

 PlatformClass:

 class PlatformClass(Base):
 __tablename__ = 'platform'

 platformId = Column(Integer, primary_key=True)
 platform = Column(String)

 platformId_fk = Column('definitionId_fk', Integer,
 ForeignKey('definitions.defId'))

 def setPlatform(self, node):
 self.platform = node

 What can i do || correct to get the expected result?

 Thanks for your patience.
 ---
 Masetto



 On Thu, Mar 25, 2010 at 4:56 PM, Michael Bayer
 mike...@zzzcomputing.comwrote:

 the relationship between two tables requires both the ForeignKey to be
 present as well as the relationship()  (relation() in 0.5) function to
 be
 present in the mapping.


 masetto wrote:
  From 30 mins to 2mins... shame :P
 
  Thanks Micheal !
 
  Forgive me, what about the other question about foreign keys?
 
 
  On Thu, Mar 25, 2010 at 3:43 PM, Michael Bayer
  mike...@zzzcomputing.comwrote:
 
  masetto wrote:
   Hi all,
  
   i am writing a python script which parse an xml file (python lxml)
 and
   import it into a sqlite db, and it works.
   The xml file size is about 30Mb and the import operation takes
 about
  15
   minutes (do you think is too much? is there something i can do to
  speed
  up
   the process?)
  
   This is a piece of the import function:
  
   ...
   for definition in definitions.getchildren(): #iterate for every xml
   children
   node
   defInst =
 SQLTableBuilder_Definition.DefinitionClass(definition)
  #read
   and write on db some attribute of the node
   ...
   if subbaElem1.tag == mainNS + platform: #another loop iterate
  for
   every sub-node of the definition node
   platf =
 SQLTableBuilder_Platform.PlatformClass()
   platf.setPlatform(str(subbaElem1))
   platf.platformId_fk = defInst.defId
  
   session.add(platf)
   session.commit()
...
session.add(defInst)
session.commit()
 
 
  don't commit on every node and on every sub-node.  Just commit once
  every
  1000 new objects or so.   will save a ton of processing.
 
 
 
  
   where DefinitionClass contains the attributes declaration
  (primary_key,
   column(string), etc.) and a Foreign Key.
   There is a relation between the definition table and the platform
  table
   (one
   or more platforms - Operating System - can be associated to a
 single
   definition) so,
   in the platform table, i've added the following: platformId_fk =
   Column('definitionId_fk', Integer, ForeignKey('definitions.defId'))
  
   All my ORM-Classes are declared within n different classes within n
   different python modules so, i've included the needed imports
  everytime i
   needed it.
   And i suppose this is a problem, at least for me, sometime, because
  when
  i
   try to add: PlatformRel =
   relation(SQLTableBuilder_Definition.DefinitionClass,
  backref=platform)
   within my platformClass, i got: 'list' object has no attribute
   '_sa_instance_state' :/
  
   So, i've tried to manually set the foreign key, as you can see
  above.
  In
   the documentation (http://www.sqlalchemy.org/docs/ormtutorial.html)
 i
   read:
SQLAlchemy is automatically aware of many-to-one/one-to-many
 based
  on
   foreign keys. Does this mean that what i've done is correct or i'm
 a
   little
   confused? If i manually set a foreign key value, does sqlalchemy
   understand that a relation between two tables exists?
  
   Thanks for your attention.
   ---
   Masetto
  
  
  
 

Re: [sqlalchemy] Sequences support for CYCLE and MIN/MAX values

2010-03-26 Thread Michael Bayer
Kent wrote:
 Any plans to support MINVALUE, MAXVALUE, CYCLE, NOCYCLE for sequences
 (for both postgres and oracle)?

 I've implemented a subclass of Sequence myself, but it isn't very
 elegant, because I'm not familiar enough with the code to know which
 methods to override for create() output.

im not familiar with those options but to implement a subclass of Sequence
with additional options, you'd also implement a subclass of
sqlalchemy.schema.CreateSequence and use @compiles to define its
compilation, as in
http://www.sqlalchemy.org/docs/reference/ext/compiler.html#dialect-specific-compilation-rules










Re: [sqlalchemy] Sequences support for CYCLE and MIN/MAX values

2010-03-26 Thread Michael Bayer
Kent wrote:
 Any plans to support MINVALUE, MAXVALUE, CYCLE, NOCYCLE for sequences
 (for both postgres and oracle)?

 I've implemented a subclass of Sequence myself, but it isn't very
 elegant, because I'm not familiar enough with the code to know which
 methods to override for create() output.

correction:  redefine the compilation for CreateSequence:

from sqlalchemy import *
from sqlalchemy import schema
from sqlalchemy.ext.compiler import compiles


class MySeq(Sequence):
    def __init__(self, *args, **kw):
        self.cycle = kw.pop('cycle', False)
        super(MySeq, self).__init__(*args, **kw)

@compiles(schema.CreateSequence)
def compile(element, compiler, **kw):
    if isinstance(element.element, MySeq):
        return "CREATE SEQUENCE %s %s" % (element.element.name,
            element.element.cycle and "CYCLE" or "")
    else:
        return compiler.visit_create_sequence(element)
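
A hypothetical usage sketch of the recipe above (the engine URL and
sequence name are placeholders; Sequence.create() should route the DDL
through the custom compilation):

engine = create_engine('postgresql://scott:tiger@localhost/test')
seq = MySeq('my_cycling_seq', cycle=True)
seq.create(bind=engine)   # expected to emit: CREATE SEQUENCE my_cycling_seq CYCLE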












Re: [sqlalchemy] Unifying large objects on commit/flush

2010-03-26 Thread Michael Bayer
Torsten Landschoff wrote:
 While the extension gets called and tries to replace the relation, SA
 still tries to insert the new entry.


 Any way to get this implemented?

here's the relevant bit of documentation:


http://www.sqlalchemy.org/docs/reference/orm/interfaces.html?highlight=mapperextension#sqlalchemy.orm.interfaces.MapperExtension.before_insert

Column-based attributes can be modified within this method which will
result in the new value being inserted. However **no** changes to the
overall flush plan can be made, and manipulation of the Session  will not
have the desired effect. To manipulate the Session  within an extension,
use SessionExtension.


so here you need to use SessionExtension as you'd like to manipulate the
flush plan.
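
As a rough illustration only (not necessarily how the original poster
ended up solving it), a minimal sketch of a SessionExtension that redirects
a pending StorageUser at an already-persisted blob during before_flush,
reusing the class names from the example above:

from sqlalchemy.orm import interfaces

class DedupBlobExtension(interfaces.SessionExtension):
    def before_flush(self, session, flush_context, instances):
        # point new StorageUser rows at an existing CommonStorage row
        # with the same hash, if one is already in the database
        for obj in session.new:
            if isinstance(obj, StorageUser) and obj.rel is not None:
                existing = session.query(CommonStorage).get(obj.rel.hashval)
                if existing is not None and existing is not obj.rel:
                    if obj.rel in session.new:
                        session.expunge(obj.rel)   # drop the duplicate blob
                    obj.rel = existing

session = orm.sessionmaker(bind=engine, autoflush=False,
                           extension=DedupBlobExtension())()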





 Greetings and thanks for any hint, Torsten

 --
 DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
 Torsten Landschoff

 Office Dresden
 Tel: +49-(0)351-4519587
 Fax: +49-(0)351-4519561

 mailto:torsten.landsch...@dynamore.de
 http://www.dynamore.de

 Registergericht: Mannheim, HRB: 109659, Sitz: Karlsruhe,
 Geschäftsführer:  Prof. Dr. K. Schweizerhof, Dipl.-Math. U. Franz







[sqlalchemy] Re: want to suppress automatic refresh

2010-03-26 Thread keith cascio
Michael,

On Mar 25, 3:00 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 SQLAlchemy also doesn't issue BEGIN.  You might want to look at setting 
 autocommit to false on your MySQLdb connection, since that's the layer that 
 would be sending out BEGIN.

I looked into this.  BTW, I use SQLAlchemy v 0.5 in a corporate
environment, and upgrading is expensive (time/effort).

My belief that 0.5 issued BEGIN statements was based on inspection of
the echo log i.e. create_engine(..., echo=True).  I thought that the
echo log was a transparent trace of the exact statements sent to the
server.  Upon further investigation, that seems not the case.  The
BEGIN statements appear in the echo log, but they don't show up in the
mysqld server log.  We can see why at line 984 of engine/base.py
[ http://www.sqlalchemy.org/trac/browser/lib/sqlalchemy/engine/base.py#L984
]

982     def _begin_impl(self):
983         if self._echo:
984             self.engine.logger.info("BEGIN")
985         try:
986             self.engine.dialect.do_begin(self.connection)
987         except Exception, e:
988             self._handle_dbapi_exception(e, None, None, None, None)
989             raise

Therefore, I am no longer worried about the emission of BEGIN.
However, inspecting the mysqld general query log [
http://dev.mysql.com/doc/refman/5.0/en/query-log.html ], I noticed a
bunch of ROLLBACK statements.

   8 Query  INSERT INTO xxx (yy, zzz) VALUES (NULL, 'foobar')
   8 Query  commit
   8 Query  rollback
   8 Query  INSERT INTO jjj (yy, zzz, a, , ccc, dd,
ee, fff...
   8 Query  commit
   8 Query  rollback
   8 Query  UPDATE jjj SET a=1, =2, ccc=1, dd=now() WHERE
jjj.zzz = 193
   8 Query  INSERT INTO lll (dd, z, , ss, , w,
uuu) VALUES ('2010-03-15...
   8 Query  INSERT INTO  (yy, nn) VALUES (NULL, 'barfoo')
   8 Query  commit
   8 Query  rollback

Every COMMIT is followed by a ROLLBACK, which appears wasteful.  Which
software do I blame for that, SQLAlchemy or the MySQLdb DBAPI
connector?

Thanks,
Keith




Re: [sqlalchemy] Clarification about performance and relation()

2010-03-26 Thread werner

Hi Masetto,

On 26/03/2010 17:43, masetto wrote:

First of all, thanks for your answer :)

You are welcome.


#   defInst.PlatformRel = [platf]# change this to
platf.definitions = defInst

I don't have any definitions attribute within the PlatformClass, i 
suppose you mean the foreign key, isnt'it?

Yes you do:)

You have/had this in your model:
PlatformRel = relation(PlatformClass, backref="definitions")

backref creates a relation called definitions in the PlatformClass.

That is

platf.platformId_fk = defInst

However, this results in another error:

sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding 
parameter 1 - probably unsupported type. u'INSERT INTO platform 
(platform, definitionId_fk) VALUES (?, ?)' ['Microsoft Windows 
2000', ORM_Classes.DefinitionClass object at 0x8f5278c]


I've played a little with it, then i've moved the relation() from 
DefinitionClass to PlatformClass:


class PlatformClass(Base):
__tablename__ = 'platform'

platformId = Column(Integer, primary_key=True)
platform = Column(String)

platformId_fk = Column('definitionId_fk', Integer, 
ForeignKey('definitions.defId'))

PlatformRel = relation(DefinitionClass, backref=platform)

and then:

platf.PlatformRel = defInst

Now i got the expected data! It WORKS :P Thanks Werner!
I think it should have worked the other way round too.  But frankly I am 
not an expert on SA, nor am I too comfortable just looking at code 
fragments in an email.


But, i need to understand.. why now it's working?

From the doc:

We are also free... to define the relationship() 
http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.relationship 
only on one class and not the other. It is also possible to define two 
separate relationship() 
http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.relationship 
constructs for either direction, which is generally safe for 
many-to-one and one-to-many relationships, but not for many-to-many 
relationships.


Maybe i don't have well understood the role of the 
relation()/relationship() function but, shouldn't be the same thing to 
define the relation() within the DefinitionClass? I've only changed 
the location of the relation() and now it works.

Can you kindly better explain me the role of the relationship() function?


Let me try and I hope that others will jump in if I say something 
misleading.


relationship() (or its older but still valid equivalent relation()) allows
you to define relationships between two tables.


You could do it in one of the two tables like this (this is what I do
most of the time):

define it in PlatformClass:
PlatformRel = relation(DefinitionClass, backref="platform") - this sets
up both relationships: from PlatformClass to DefinitionClass (a
many-to-one) and from DefinitionClass to PlatformClass (a one-to-many).


or you could turn it around and define it in DefinitionClass:
DefinitionRel = relation(PlatformClass, backref="definitions")

or do it in both tables like this:
PlatformRel = relation(DefinitionClass) - sets up the one-to-many
DefinitionRel = relation(PlatformClass) - sets up the many-to-one
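
As a concrete sketch of the backref variant, using the class names from
this thread (the in-memory sqlite URL is only for illustration):

from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relation, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class DefinitionClass(Base):
    __tablename__ = 'definitions'
    defId = Column(Integer, primary_key=True)
    version = Column(String)

class PlatformClass(Base):
    __tablename__ = 'platform'
    platformId = Column(Integer, primary_key=True)
    platform = Column(String)
    platformId_fk = Column('definitionId_fk', Integer,
                           ForeignKey('definitions.defId'))
    # many-to-one towards DefinitionClass; the backref adds a 'platforms'
    # collection (the one-to-many side) to DefinitionClass
    definition = relation(DefinitionClass, backref='platforms')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

d = DefinitionClass(version='1')
p = PlatformClass(platform='Windows XP')
p.definition = d            # d.platforms now contains p as well
session.add(d)
session.commit()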


Mmm... please correct me if i'm wrong:

- The relationship between the User and Address classes is defined 
separately using the relationship() 
http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.relationship 
function

OK, and is the only way to define a relation between two tables.

- If i put relationship() in both classes i got a *bidirectional* 
relationship

Or use backref


- Because of the *placement* of the foreign key, from Address to User 
it is *many to one*...   !!! Oh, is this the point, right? If, in the 
same class, i define a foreign key AND a relationship() i create a 
many to one relation with the linked table


- ..., and from User to Address it is *one to many* - This is valid 
only in the bidirectional case or it's automatic when i declare 
somewhere foreign key + relationship() ?

Defining the foreign key does NOT setup/define a relation().


- Initially i've defined the foreign key in the PlatformClass and the 
relation() in the DefinitionClass. Which type of relation i've created 
in that way?
It really does not matter in which class you define the relation().
What type it will be I showed higher up, but you should also look at
uselist in the relationship() docs.


You might also want to look at the doc of:
http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.relationship
http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.backref

I hope it helps and didn't cause more confusion.

Werner


[sqlalchemy] Using reflection to create read-only classes and objects

2010-03-26 Thread mr
Hejhej,
first I'd like to thank the developers of SQLAlchemy for their great
effort. The library works like a treat and is well documented. It's a
pleasure working with it...
Nevertheless I've currently run into a problem when trying to create
classes that are based on read-only reflection from existing database
tables. Here *read-only* means that I have a database table with fixed
contents that I'm not going to change in the application. However, I need
this table and its contents to create other objects.
Let's assume we have a table *foos* with a number of rows - each of
them representing a distinct *foo*. There is the usual id column
*foo_id*, then there's *foo_name* and *foo_key*. I've been able to
come up with the following code that uses reflection to derive the
table metadata.

engine = create_engine('postgresql://...')
Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foos'
    __autoload__ = True
    __table_args__ = {'autoload_with': engine}

    def __init__(self):
        pass

Now when I create a foo object with f = Foo() I can see all columns of
the existing table by doing something like:
print f.metadata.tables[f.__tablename__].columns

What I want to do now is to initialize a *foo*-object by specifying
*foo_key* in the constructor so that I have exactly the one distinct
row containing *foo_key* at my disposal for further processing.

def __init__(self, foo_key):
...

However I was not able to do this... Am I missing something here? Or
did anybody else have a similar problem and was able to solve it...
Thank you very much in advance.
Regards,
Markus

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Using reflection to create read-only classes and objects

2010-03-26 Thread Michael Bayer
mr wrote:
 Hejhej,
 first I'd like to thank the developers of SQLAlchemy for their great
 effort. The library works like a treat and is well documented. It's a
 pleasure working with it...
 Nevertheless I've currently run into a problem when trying to create
 classes that are based on read-only reflection from existing database
 tables. Here *read-only* means that I have a database table with fixed
 contents that I'm not going to change in application. However, I need
 this table and its contents to create other objects.
 Let's assume we have a table *foos* with a number of rows - each of
 them representing a distinct *foo*. There is the usual id column
 *foo_id*, then there's *foo_name* and *foo_key*. I've been able to
 come up with the following code to that uses reflection to derive the
 table metadata.

 engine = create_engine('postgresql://...')
 Base = declarative_base()

 class Foo(Base):
 __tablename__ = 'foos
 __autoload__ = True
 __table_args__ = {'autoload_with': engine}

 def __init__(self):
 pass

 Now when I create a foo object with f = Foo() I can see all columns of
 the existing table by doing something like:
 print f.metadata.tables[f.__tablename__].columns

 What I want to do now is to initialize a *foo*-object by specifying
 *foo_key* in the constructor so that I have exactly the one distinct
 row containing *foo_key* at my disposal for further processing.

 def __init__(self, foo_key):
 ...

 However I was not able to do this... Am I missing something here? Or
 did anybody else had a similar problem and was able to solve it...
 Thank you very much in advance.

I think you are looking for this 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/UniqueObject
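
In rough terms (a hedged sketch of the general idea, not the recipe itself): 
instead of constructing Foo() directly, look the existing row up by its 
natural key, e.g.:

# assumes the reflected Foo class and a Session set up as in the question
def lookup_foo(session, foo_key):
    # fetch the one existing row matching the key; raises if none or many
    return session.query(Foo).filter_by(foo_key=foo_key).one()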




 Regards,
 Markus

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: want to suppress automatic refresh

2010-03-26 Thread Michael Bayer
keith cascio wrote:

 Every COMMIT is followed by a ROLLBACK, which appears wasteful.  Which
 software do I blame for that, SQLAlchemy or the MySQLdb DBAPI
 connector?

It's not wasteful at all in the usual case unless one wants to have
leftover row/table locks and transactional state sitting idle in their
connection pool, preventing other operations from proceeding.   In the
case of no statements executed since the previous COMMIT, it's simple and
100% reliable to issue a simple ROLLBACK instead of attempting to gauge if
any further statements *might* have been executed on the connection since
the last known COMMIT.   This is all within the usual realm of a ROLLBACK
costing almost nothing.

But since you're on MyISAM and have no transactional state, and somehow a
ROLLBACK is incurring unacceptable overhead (curious, do you have any
profiling data which illustrates how much time this takes ?), as I
mentioned earlier this is configurable: you want to set reset_on_return
to False in your Pool.   You'll need to create the QueuePool manually and
pass it to create_engine() using pool=QueuePool(...).
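
A minimal sketch of that configuration (hedged; MySQLdb and the connection
parameters here are purely illustrative):

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool
import MySQLdb

def getconn():
    # illustrative connection parameters
    return MySQLdb.connect(host="dbhost", user="user", passwd="secret", db="mydb")

# reset_on_return=False skips the ROLLBACK normally issued when a
# connection is returned to the pool
pool = QueuePool(getconn, reset_on_return=False)
engine = create_engine("mysql://", pool=pool)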




 Thanks,
 Keith

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Sequences support for CYCLE and MIN/MAX values

2010-03-26 Thread Kent Bower

Thanks for the info.

Since it is NOCYCLE in Oracle and NO CYCLE in Postgres, I would check 
the engine.dialect.name in the compile method, correct?


if eng.dialect.name == 'oracle':
    sql += " NOCYCLE"
elif eng.dialect.name == 'postgres':
    sql += " NO CYCLE"
else:
    raise Exception("RSequence is only implemented "
                    "for Oracle and PostgreSQL!")


How do I get a hold of the engine from within a Sequence object?


On 3/26/2010 2:26 PM, Michael Bayer wrote:

Kent wrote:
   

Any plans to support MINVALUE, MAXVALUE, CYCLE, NOCYCLE for sequences
(for both postgres and oracle)?

I've implemented a subclass of Sequence myself, but it isn't very
elegant, because I'm not familiar enough with the code to know which
methods to override for create() output.
 

correction:  redefine the compilation for CreateSequence:

from sqlalchemy import *
from sqlalchemy import schema
from sqlalchemy.ext.compiler import compiles


class MySeq(Sequence):
    def __init__(self, *args, **kw):
        self.cycle = kw.pop('cycle', False)
        super(MySeq, self).__init__(*args, **kw)

@compiles(schema.CreateSequence)
def compile(element, compiler, **kw):
    if isinstance(element.element, MySeq):
        return "CREATE SEQUENCE %s %s" % (element.element.name,
            element.element.cycle and "CYCLE" or "")
    else:
        return compiler.visit_create_sequence(element)





   

--
You received this message because you are subscribed to the Google Groups
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/sqlalchemy?hl=en.


 
   


--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: Sequences support for CYCLE and MIN/MAX values

2010-03-26 Thread Kent
Sorry!

Nevermind, your link answers that question.

Thanks.
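
For the record, a rough sketch of how that can look (hedged; the compiler
object handed to the @compiles function carries the dialect, so the engine
itself isn't needed):

@compiles(schema.CreateSequence)
def compile(element, compiler, **kw):
    if isinstance(element.element, MySeq):
        if element.element.cycle:
            extra = " CYCLE"            # same keyword on both databases
        elif compiler.dialect.name == 'oracle':
            extra = " NOCYCLE"
        else:                           # e.g. postgresql
            extra = " NO CYCLE"
        return "CREATE SEQUENCE %s%s" % (element.element.name, extra)
    return compiler.visit_create_sequence(element)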

On Mar 26, 4:23 pm, Kent Bower k...@retailarchitects.com wrote:
 Thanks for the info.

 Since it is NOCYCLE in oracle and NO CYCLE in postgres, I would check
 the engine.dialect.name in the compile, method correct?

                      if eng.dialect.name == 'oracle':
                          sql +=  NOCYCLE
                      elif eng.dialect.name == 'postgres':
                          sql +=  NO CYCLE
                      else:
                          raise Exception(RSequence is only implemented
 for Oracle and PostgreSQL!)

 How do I get a hold of the engine from within a Sequence object?

 On 3/26/2010 2:26 PM, Michael Bayer wrote:

  Kent wrote:

  Any plans to support MINVALUE, MAXVALUE, CYCLE, NOCYCLE for sequences
  (for both postgres and oracle)?

  I've implemented a subclass of Sequence myself, but it isn't very
  elegant, because I'm not familiar enough with the code to know which
  methods to override for create() output.

  correction:  redefine the compilation for CreateSequence:

  from sqlalchemy import *
  from sqlalchemy import schema
  from sqlalchemy.ext.compiler import compiles

  class MySeq(Sequence):
       def __init__(self, *args, **kw):
           self.cycle = kw.pop('cycle', False)
           super(MySeq, self).__init__(*args, **kw)

  @compiles(schema.CreateSequence)
  def compile(element, compiler, **kw):
       if isinstance(element.element, MySeq):
           return CREATE SEQUENCE %s %s % (element.element.name,
  element.element.cycle and CYCLE or )
       else:
           return compiler.visit_create_sequence(element)

  --
  You received this message because you are subscribed to the Google Groups
  sqlalchemy group.
  To post to this group, send email to sqlalch...@googlegroups.com.
  To unsubscribe from this group, send email to
  sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Unifying large objects on commit/flush

2010-03-26 Thread Torsten Landschoff
Hi Michael,

On Fri, 2010-03-26 at 14:30 -0400, Michael Bayer wrote:

 here's the relevant bit of documentation:
 
 http://www.sqlalchemy.org/docs/reference/orm/interfaces.html?highlight=mapperextension#sqlalchemy.orm.interfaces.MapperExtension.before_insert
 
 Column-based attributes can be modified within this method which will
 result in the new value being inserted. However **no** changes to the
 overall flush plan can be made, and manipulation of the Session  will not
 have the desired effect. To manipulate the Session  within an extension,
 use SessionExtension.

Thanks for the pointer. I read that part before but was not sure if I
have to modify the flush plan.

I attached the modified source code that actually works.

I dislike this solution for the following reasons:

* The extension scans through all new instances which could be quite a
number.
* The session must be modified (okay, no real problem).
* In case multiple classes use the CommonStorage class, the
StorageExtension must be adjusted. It would be better to operate on
CommonStorage instances but I don't know how to find the related classes
before the whole thing goes to the database.


Another question: Any idea when the second SA book will be published? I
bought the Essential SA book but it is a bit outdated, covering 0.4.x.

Thanks, Torsten

-- 
DYNAmore Gesellschaft fuer Ingenieurdienstleistungen mbH
Torsten Landschoff

Office Dresden

mailto:torsten.landsch...@dynamore.de
http://www.dynamore.de

Registration court: Mannheim, HRB: 109659, based in Karlsruhe,
Managing director:  Prof. Dr. K. Schweizerhof, Dipl.-Math. U. Franz

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.

import sqlalchemy as sa
import sqlalchemy.orm as orm
import sqlalchemy.orm.interfaces as interfaces
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.associationproxy import association_proxy
import hashlib

Base = declarative_base()

class CommonStorage(Base):
    __tablename__ = "common"
    hashval = sa.Column(sa.String, primary_key=True)
    value = sa.Column(sa.LargeBinary)

    def __init__(self, v):
        self.value = v
        self.hashval = hashlib.md5(v).hexdigest()

    def joinFrom(self, session):
        copy = session.query(CommonStorage).filter(self.hashval == CommonStorage.hashval).first()
        return copy or self

class StorageUser(Base):
    __tablename__ = "user"
    id = sa.Column(sa.Integer, primary_key=True)
    ref = sa.Column(None, sa.ForeignKey(CommonStorage.hashval))
    rel = orm.relation(CommonStorage)
    value = association_proxy("rel", "value")

class StorageExtension(interfaces.SessionExtension):
    def before_flush(self, session, flush_context, instances=None):
        for d in session.new:
            if isinstance(d, StorageUser) and d.rel is not None:
                original = d.rel
                d.rel = d.rel.joinFrom(session)
                if original is not d.rel:
                    session.expunge(original)

engine = sa.create_engine("sqlite:///", echo=True)
Base.metadata.create_all(engine)
session = orm.sessionmaker(bind=engine, extension=StorageExtension(), autoflush=False)()

ua, ub = StorageUser(), StorageUser()
ua.value = ub.value = "Something"
session.add(ua)
session.commit()
session.add(ub)
session.commit()


[sqlalchemy] Re: want to suppress automatic refresh

2010-03-26 Thread keith cascio
Michael,

I apologize if I came off at all rude.  I noticed how helpful you are
and I value your advice.  Thank you for your patience.

On Mar 26, 1:21 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 It's not wasteful at all in the usual case unless one wants to have leftover 
 row/table locks and transactional state sitting idle in their connection 
 pool, preventing other operations from proceeding.   In the case of no 
 statements executed since the previous COMMIT, its simple and 100% reliable 
 to issue a simple ROLLBACK instead of attempting to gauge if any further 
 statements *might* have been executed on the connection since the last known 
 COMMIT.   This is all within the usual realm of a ROLLBACK costing almost 
 nothing.

Now I understand the reason for it.  Good explanation.

 But since you're on MyISAM and have no transactional state, and somehow a 
 ROLLBACK is incurring unacceptable overhead (curious, do you have any 
 profiling data which illustrates how much time this takes ?)

Here is a ping from the real production client against the real
production server:

 $ /usr/sbin/ping -s a.bbb..d.com
 .
 .
 a.bbb..d.com PING Statistics
 17 packets transmitted, 17 packets received, 0% packet loss
 round-trip (ms)  min/avg/max/stddev = 112./120.2/127./5.34

120 milliseconds is 3/25ths of a second (> 1/9 of a second).  I can't
test on the production servers, but I tested my program between two
servers with 56ms ping:

 56ms avg ping, 14MiB input
 -
 reset_on_return=True:  1031.53 seconds (17.2 minutes)
 reset_on_return=False:  932.27 seconds (15.5 minutes)

So, with 56ms ping, the ROLLBACK statements impose a 10% slowdown on my
program.  I also tried it between two servers with 158ms ping (much
smaller input).

 158ms avg ping, 664KiB input
 -
 reset_on_return=True:   145.20 seconds (2.42 minutes)
 reset_on_return=False:  127.43 seconds (2.12 minutes)

So, with 158ms ping, the ROLLBACK statements impose a 12% slowdown on my
program.  We consider 10-12% *SIGNIFICANT*!  And that slowdown is for
a bunch of useless ROLLBACKs against a database with no transaction
support whatsoever.  They are not free.  They cost.  As I said before,
touching the database is expensive.  It doesn't matter if the traffic
accomplishes nothing useful.

 as I mentioned earlier this is configurable, you want to set 
 reset_on_return to False in your Pool.   you'll need to create the 
 QueuePool manually and pass it to create_engine() using pool=QueuePool(...).

This worked.  Now I don't see the ROLLBACK statements any more.  This
is extremely helpful.  It's saving us about 10-12% elapsed time.  Now
if we could get rid of the COMMIT statements, we would likely save
another 10%.

Thanks,
Keith

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: want to suppress automatic refresh

2010-03-26 Thread Michael Bayer

On Mar 26, 2010, at 7:36 PM, keith cascio wrote:

 Here is a ping from the real production client against the real
 production server:
 
 $ /usr/sbin/ping -s a.bbb..d.com
 .
 .
 a.bbb..d.com PING Statistics
 17 packets transmitted, 17 packets received, 0% packet loss
 round-trip (ms)  min/avg/max/stddev = 112./120.2/127./5.34
 
 120 milliseconds is 3/25ths of a second (  1/9 of a second).  I can't
 test on the production servers, but I tested my program between two
 servers with 56ms ping:
 
 56ms avg ping, 14MiB input
 -
 reset_on_return=True:  1031.53 seconds (17.2 minutes)
 reset_on_return=False:  932.27 seconds (15.5 minutes)
 
 So, with 56ms ping, the ROLLBACK statements incur my program a 10%
 slowdown.  I also tried it between two servers with 158ms ping (much
 smaller input).

How many sessions are you opening and closing?   There's exactly one connection 
pool return per session.commit().   Two whole minutes of rollback() would 
imply you're checking in/out of the pool hundreds of thousands of times.   My 
own tests show that MySQLdb can do on the order of 20,000 rollback calls per 
second on a local network connection, i.e. 2.4 million in a couple of minutes.  
Using OurSQL, the time cuts in half.   If you're on the ORM, keep a single 
connection checked out and use flush() to send changes over.   Then you'll 
have no COMMIT or ROLLBACK whatsoever sent over.
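
A hedged sketch of that suggestion (names here are illustrative, not from 
the thread):

from sqlalchemy.orm import sessionmaker

# hold one connection for the whole job and bind the Session to it;
# flush() emits the INSERT/UPDATE statements without any COMMIT/ROLLBACK,
# which is fine on MyISAM since there is no transaction to commit anyway
conn = engine.connect()
Session = sessionmaker(bind=conn, autoflush=False)
session = Session()

session.add(some_mapped_object)   # illustrative mapped object
session.flush()                   # the SQL goes over the wire here
# ... repeat add()/flush() as needed, then clean up:
session.close()
conn.close()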






 
 158ms avg ping, 664KiB input
 -
 reset_on_return=True:   145.20 seconds (2.42 minutes)
 reset_on_return=False:  127.43 seconds (2.12 minutes)
 
 So, with 158ms ping, the ROLLBACK statements incur my program a 12%
 slowdown.  We consider 10-12% *SIGNIFICANT*!  And that slowdown is for
 a bunch of useless ROLLBACKs against a database with no transaction
 support whatsoever.  They are not free.  They cost.  As I said before,
 touching the database is expensive.  It doesn't matter if the traffic
 accomplishes nothing useful.
 
 as I mentioned earlier this is configurable, you want to set 
 reset_on_return to False in your Pool.   you'll need to create the 
 QueuePool manually and pass it to create_engine() using pool=QueuePool(...).
 
 This worked.  Now I don't see the ROLLBACK statements any more.  This
 is extremely helpful.  It's saving us about 10-12% elapsed time.  Now
 if we could get rid of the COMMIT statements, we would likely save
 another 10%.
 
 Thanks,
 Keith
 
 -- 
 You received this message because you are subscribed to the Google Groups 
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to 
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at 
 http://groups.google.com/group/sqlalchemy?hl=en.
 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.