[sqlalchemy] Re: Insertion order not respecting FK relation

2008-06-11 Thread Malthe Borch

I can add to this that the issue occurs only on consecutive appends. 
Here's the excerpt that leads to the IntegrityError, demonstrating this.

collection = Collection()
session.save(collection)
session.flush()

vinyl = Vinyl()
collection.records.append(vinyl)
session.flush()

vinyl = Vinyl()
collection.records.append(vinyl)
session.flush()

The last two flushes are based on the following units-of-work:

First append and flush:

(Pdb) pp self.tasks.values()
[UOWTask(0x27a5f90) Mapper: 'Mapper/Join object on soup(40067600) and 
__builtin__:ICollection(41259088)',
  UOWTask(0x27a5ff0) Mapper: 'Mapper/Join object on Join object on 
soup(40067600) and __builtin__:IAlbum(40262960)(41171024) and 
__builtin__:IVinyl(41171728)',
  UOWTask(0x27a5fd0) Mapper: 'Relation/relation']

Second append and flush:

(Pdb) pp self.tasks.values()
[UOWTask(0x2799fd0) Mapper: 'Mapper/Join object on Join object on 
soup(40067600) and __builtin__:IAlbum(40262960)(41171024) and 
__builtin__:IVinyl(41171728)',
  UOWTask(0x27993b0) Mapper: 'Relation/relation']

For some reason, on the first append, there's a save task defined on 
the collection-object; I'm not sure what this means, since it should 
already be saved and flushed at this point.

\malthe

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: assocClass.linkA == A.id fails

2008-06-11 Thread az

well, it would be useful if, when a many-to-one relation (i.e. a plain 
reference) klas.descriptor is used in an expression context, it yielded 
just the respective column.


On Tuesday 10 June 2008 21:15:06 Michael Bayer wrote:
 these are the valid comparisons:

 print AB.this == A()
 print AB.this_id == A.name

 On Jun 10, 2008, at 8:32 AM, [EMAIL PROTECTED] wrote:
  funny... just a plain relation, i've removed all the assoc
  stuff... what am i missing???
 
  here's the case, attached.
  no matter what, it does not work; older sa versions give a different
  error...
 
  On Tuesday 10 June 2008 16:59:01 Michael Bayer wrote:
  manytoone should be fine.  try making a simplified test case
  (just any old many to one).
 
  On Jun 10, 2008, at 9:39 AM, [EMAIL PROTECTED] wrote:
  g'day.
  i stepped on strange behaviour (0.4 latest):
  class AB is an assoc proxy, .myb pointing to B;
  the clause
   AB.myb == B.dbid
  fails with NotImplementedError:
  File "sqlalchemy/sql/expression.py", line 1191, in __eq__
    return self.operate(operators.eq, other)
  File "sqlalchemy/sql/expression.py", line 1315, in operate
    return o[0](self, op, other[0], *o[1:], **kwargs)
  File "sqlalchemy/sql/expression.py", line 1277, in __compare
    obj = self._check_literal(obj)
  File "sqlalchemy/sql/expression.py", line 1421, in _check_literal
    return other.expression_element()
  File "sqlalchemy/orm/attributes.py", line 53, in expression_element
    return self.comparator.expression_element()
  File "sqlalchemy/orm/interfaces.py", line 432, in expression_element
    return self.clause_element()
  File "sqlalchemy/sql/expression.py", line 1170, in clause_element
    raise NotImplementedError()
  NotImplementedError
 
  these work (note the .property):
  AB.myb.property == B.dbid
  AB.myb == 3
 
  AB.myb.property.direction is MANYTOONE
  seems i'm missing something??
  any idea or prepare a test case?
  svilen
 
  aa.py

 





[sqlalchemy] Re: Performance problem with Psyco

2008-06-11 Thread az

if u replace back the offending function with its plain pythonic 
variant, will it work?

50% boost... just because of the python? u're constructing too many 
queries over and over. try caching them, and reusing them as building 
blocks... - or cache their results... or change the model.

e.g. on my model initially i had 4500 queries constructed and issued; 
after some caching they went to 4000 and then to about 1100. 
Now they're under 10... one big query for lots of objects instead of 
thousands of small per-piece queries. And it is still nearly the same 
model, which was made to allow efficient usage - but no one used that.

On Wednesday 11 June 2008 00:13:55 Artur Siekielski wrote:
 On Jun 10, 6:11 pm, [EMAIL PROTECTED] wrote:
  well.. see the difference, the function that eats the time is
  sql.sort_tables. all else is nearly same.
  compare the two sources and see for yourself.
  the sort_tables() itself seems same, but traversing (recursive)
  has changed a lot. from depth first into breadth first or
  similar.
 
  something is going on there.

 You're right. I have looked at 0.4.4 - 0.4.5 diff but don't see
 anything suspicious.

  AFAI remember, psyco can optimize loops and arithmetic but is very
  bad with func-calls.
 
  profile both versions of that function without psyco and see
  what's the difference in func-calls etc.

 I don't think such a huge performance decrease could be caused by
 more function calls. I think it must be Psyco's bug. As I've
 written I've also experienced some strange errors.

 I understand that it's very hard to debug. 50% performance boost is
 noticeable but I can (must :)) live without it.

  ciao
 
  svilen

 





[sqlalchemy] Re: Performance problem with Psyco

2008-06-11 Thread Michael Bayer


On Jun 10, 2008, at 3:13 PM, Artur Siekielski wrote:




 On Jun 10, 8:11 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 I would first take a look at the SQL
 being issued as the first source of speed differences;  if in 0.4.5
 there's suddenly a whole series of deletes occurring which do not
 occur within 0.4.4, then that's the source of the difference.

 SQL is the same. These differences are also when I don't flush the
 Session. Also performance when not using Psyco is the same in 0.4.4
 and 0.4.5. But switching Psyco on in 0.4.4 causes 50% performance
 increase and in 0.4.5 about 1000% decrease.

I'm afraid you'll have to dig deeper and/or work with some psyco  
developers.  We have performance suites that show very similar results  
across 0.4.4 and forward.






[sqlalchemy] Re: Insertion order not respecting FK relation

2008-06-11 Thread Michael Bayer

you'd have to work this into a full self-contained script which I can  
run locally, since it seems there's some specific usage pattern creating  
the issue.  (i.e. it's very difficult for me to piece together snippets  
and guess where the issue might be occurring.)



On Jun 11, 2008, at 5:43 AM, Malthe Borch wrote:


 I can add to this that the issue occurs only on consecutive appends.
 Here's the excerpt that leads to the IntegrityError, demonstrating  
 this.

 collection = Collection()
 session.save(collection)
 session.flush()

 vinyl = Vinyl()
 collection.records.append(vinyl)
 session.flush()

 vinyl = Vinyl()
 collection.records.append(vinyl)
 session.flush()

 The last two flushes are based on the following units-of-work:

 First append and flush:

 (Pdb) pp self.tasks.values()
 [UOWTask(0x27a5f90) Mapper: 'Mapper/Join object on soup(40067600) and
 __builtin__:ICollection(41259088)',
  UOWTask(0x27a5ff0) Mapper: 'Mapper/Join object on Join object on
 soup(40067600) and __builtin__:IAlbum(40262960)(41171024) and
 __builtin__:IVinyl(41171728)',
  UOWTask(0x27a5fd0) Mapper: 'Relation/relation']

 Second append and flush:

 (Pdb) pp self.tasks.values()
 [UOWTask(0x2799fd0) Mapper: 'Mapper/Join object on Join object on
 soup(40067600) and __builtin__:IAlbum(40262960)(41171024) and
 __builtin__:IVinyl(41171728)',
  UOWTask(0x27993b0) Mapper: 'Relation/relation']

 For some reason, on the first append, there's a save task defined on
 the collection-object; I'm not sure what this means, since it should
 already be saved and flushed at this point.

 \malthe

 





[sqlalchemy] Second DSN create_engine logic

2008-06-11 Thread fairwinds

Hi, I have been thinking about logic for a second DSN for failover
with some time interval to try the second source. If I am using
asynchronous replication, time is required to allow slave to become
master.

For application logic, I don't want to tolerate a failure. Rather, I
want to log the exception and set up a sleep-and-retry cycle until the
next DSN is available.

I am wondering if this logic ought to be in sqlalchemy's create_engine
to allow a second DSN and a parameter for time between tries to second
DSN. Overall, connections to original DSN would die and exception
would trigger trying the second source. Perhaps there are alternatives
also. Many thanks.

Regards,
David



[sqlalchemy] Re: Second DSN create_engine logic

2008-06-11 Thread Michael Bayer


On Jun 11, 2008, at 10:01 AM, fairwinds wrote:


 Hi, I have been thinking about logic for a second DSN for failover
 with some time interval to try the second source. If I am using
 asynchronous replication, time is required to allow slave to become
 master.

 For application logic, I don't want to tolerate a failure. Rather, I
 want to log the exception and set up a sleep-and-retry cycle until the
 next DSN is available.

 I am wondering if this logic ought to be in sqlalchemy's create_engine
 to allow a second DSN and a parameter for time between tries to second
 DSN. Overall, connections to original DSN would die and exception
 would trigger trying the second source. Perhaps there are alternatives
 also. Many thanks.


it's out of scope to be within create_engine itself.  create_engine  
doesn't even make any connections to the database.

an appropriate method here would be to create a function as follows:

from sqlalchemy import create_engine as _create_engine

def create_engine(url1, url2):
    e = _create_engine(url1)
    try:
        c = e.connect()
        c.close()
        return e
    except Exception:
        # log the error here
        return _create_engine(url2)
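The sleep-and-retry cycle David asked about can be layered on top of the same idea. A minimal sketch, assuming a generic `connect` callable stands in for the create-engine-and-connect step (the callable and DSN strings here are hypothetical):

```python
import time

def connect_with_failover(connect, urls, retries=3, delay=0.01):
    """Try each DSN in turn, sleeping between rounds, until one works.

    `connect` is any callable that returns an engine-like object or
    raises on failure (a hypothetical stand-in for creating an engine
    and testing a connection)."""
    last_error = None
    for attempt in range(retries):
        for url in urls:
            try:
                return connect(url)
            except Exception as exc:
                last_error = exc      # log here in a real application
        if attempt < retries - 1:
            time.sleep(delay)         # give the slave time to become master
    raise last_error

# Demo with a fake connector that only accepts the second DSN:
def fake_connect(url):
    if url != "db://replica":
        raise ConnectionError(url)
    return "engine:" + url

print(connect_with_failover(fake_connect, ["db://master", "db://replica"]))
# prints engine:db://replica
```

In a real application `connect` would wrap `_create_engine(url)` plus a test `connect()`/`close()`, exactly as in the function above.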






[sqlalchemy] Re: Cannot abort wxPython thread with SQLAlchemy

2008-06-11 Thread Peter Hansen

Dominique wrote:
 On 10 juin, 02:38, Peter Hansen [EMAIL PROTECTED] wrote:
 As Python has no way to actually terminate a thread, can you explain
 what you mean by stop this thread?  Are you simply cloning the code
 from the wxPython example, with the delayedresult.AbortEvent() object,
 and calling .set() on it?
 
 That's exactly what I do.
 My Abort button is linked to an abort function which calls
 abortEvent.set(), like in the demo.
 In the producer function, I launch the query.
 What I'd like to do  is to be able to stop the thread, while the query
 is being done.
 Is it possible or am I trying to do something impossible ?

As no one else has chimed in, I'll go out on a limb a bit and say that 
it's impossible.  Python itself definitely doesn't have any way to 
forcibly kill a thread, at least not one that is buried in an external 
call (e.g. in the sqlite library).

There is a mechanism that's been added in recent versions that can 
terminate (under certain conditions) pure Python code in another thread 
by asynchronously raising an exception: search for python asynchronous 
exception and make sure you understand the issues before trying to use it.

If you could restructure your application so the long-running query 
occurs in a separate process, you could kill the process using 
operating system support for that, though perhaps not in a clean fashion.

Aside from that, you don't have many options.  What about changing the 
query so that it will return its results in increments, rather than all 
at once?  If it's a long-running query but you can break it up that way, 
then the check-event-flag approach you're using would be able to work.

-Peter
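The incremental approach Peter suggests can be sketched as a pager that checks a `threading.Event` between chunks; the `run_query` callable is a hypothetical stand-in for issuing a LIMIT/OFFSET (or keyset) query:

```python
import threading

def fetch_in_chunks(run_query, chunk_size, abort_event):
    """Pull rows chunk by chunk, checking an abort flag between chunks.

    `run_query(offset, limit)` is a hypothetical stand-in for issuing
    the paged query against the database."""
    offset = 0
    results = []
    while not abort_event.is_set():
        chunk = run_query(offset, chunk_size)
        if not chunk:
            break                 # no more rows
        results.extend(chunk)
        offset += chunk_size
    return results

# Demo against an in-memory "table":
rows = list(range(10))

def query(offset, limit):
    return rows[offset:offset + limit]

print(fetch_in_chunks(query, 4, threading.Event()))  # all ten rows
aborted = threading.Event()
aborted.set()
print(fetch_in_chunks(query, 4, aborted))            # [] -- stops immediately
```

The thread never gets killed; it simply notices the flag at the next chunk boundary and returns, which is the cooperative pattern the delayedresult.AbortEvent demo uses.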




[sqlalchemy] Re: Performance problem with Psyco

2008-06-11 Thread Artur Siekielski

On Jun 11, 11:53 am, [EMAIL PROTECTED] wrote:
 if u replace back the offending function with its plain pythonic
 variant, will it work?

Which function do you mean?

 50% boost... just because of the python? u're constructing too many
 queries over and over.

No, the main module in which we have the 50% performance boost is mostly
number-crunching code which Psyco optimizes very well. Though the SA
engine is called quite often, there are about 20 INSERTs per second.

 try cache them, and reuse as building
 blocks... - or cache their results...

We use caching. BTW, something like builtin caching would be nice, eg.
we have a few tables with practically constant content (they are like
ENUM values) and they could be loaded once per application run.

 e.g. on my model initialy i had 4500 queries constructed and issued,
 after some caching they went to 4000 and then to about 1100.
 Now they under 10... one big query for lots of objects instead of
 thousands of small queries per-piece. And it is still near same model
 which was made to allow efficient usage - but noone used that.

I've also performed such optimizations; for me mapper options were
sufficient (mainly eager loading to use LEFT OUTER JOINs). I must
also be careful not to cache too aggressively, we have too much data to
load it all into RAM. In some places I use classes optimized for memory
usage for holding cache data, with __slots__ defined.
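Artur's "load the practically-constant tables once per application run" idea is easy to bolt on with memoization. A sketch where `fetch_rows` is a hypothetical query function (any real loader with the same shape works):

```python
import functools

CALLS = []

def fetch_rows(name):
    """Hypothetical stand-in for issuing a SELECT against an
    ENUM-like table; records each call so the demo can count them."""
    CALLS.append(name)
    return ["red", "green", "blue"]

@functools.lru_cache(maxsize=None)
def load_enum_table(name):
    """Practically-constant tables get queried once per process;
    subsequent lookups are served from the cache."""
    return tuple(fetch_rows(name))

load_enum_table("colors")
load_enum_table("colors")
print(len(CALLS))  # 1 -- the second call never hit the "database"
```

Returning an immutable tuple keeps callers from mutating the shared cached value by accident.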



[sqlalchemy] Re: Second DSN create_engine logic

2008-06-11 Thread fairwinds

Hi Michael. I appreciate your reply. Did not know if it was something
you'd want to consider for stock sqlalchemy. Incorporating it into the
app is easy enough. I like the transparency of your recommendation. Many
thanks.

Regards,
David

On Jun 11, 11:15 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 11, 2008, at 10:01 AM, fairwinds wrote:



  Hi, I have been thinking about logic for a second DSN for failover
  with some time interval to try the second source. If I am using
  asynchronous replication, time is required to allow slave to become
  master.

  For application logic, I don't want to tolerate a failure. Rather, I
  want to log the exception and set up a sleep-and-retry cycle until the
  next DSN is available.

  I am wondering if this logic ought to be in sqlalchemy's create_engine
  to allow a second DSN and a parameter for time between tries to second
  DSN. Overall, connections to original DSN would die and exception
  would trigger trying the second source. Perhaps there are alternatives
  also. Many thanks.

 its out of scope to be within create_engine itself.  create_engine  
 doesn't even make any connections to the database.

 an appropriate method here would be to create a function as follows:

 from sqlalchemy import create_engine as _create_engine

 def create_engine(url1, url2):
     e = _create_engine(url1)
     try:
         c = e.connect()
         c.close()
         return e
     except Exception:
         # log the error here
         return _create_engine(url2)



[sqlalchemy] text

2008-06-11 Thread jack2318

I tried a very simple test:

s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something'")
count = conn.execute(s).fetchone()

and this produced the error
Unexpected error: <type 'exceptions.TypeError'> not enough arguments
for format string

Not a big deal, there are a few ways around it, but just for completeness
I believe it should work (there are no parameters to expand)

regards
-- jacek
PS: I am using mysql







[sqlalchemy] Re: text

2008-06-11 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
On Behalf Of jack2318
 Sent: 11 June 2008 17:50
 To: sqlalchemy
 Subject: [sqlalchemy] text
 
 
 I tried a very simple test:
 
 s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something'")
 count = conn.execute(s).fetchone()
 
 and this produced the error
 Unexpected error: <type 'exceptions.TypeError'> not enough arguments
 for format string
 
 Not a big deal, there are a few ways around it, but just for completeness
 I believe it should work (there are no parameters to expand)
 
 regards
 -- jacek
 PS: I am using mysql


Are you sure you didn't try:

s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something%'") 

I.e. included a percent sign in the query?

I think SQL strings get passed through Python's % substitution
at some point in SQLAlchemy (or possibly the DBAPI driver), so
you need to use '%%' instead of '%'.

Hope that helps,

Simon
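The failure mode Simon describes can be reproduced with plain Python string formatting, since that is what a format-paramstyle driver effectively does with the statement (table and column names below are made up):

```python
# Format-paramstyle DBAPI drivers (MySQLdb among them) run the SQL
# through Python %-interpolation; a bare '%' then reads as the start
# of a format directive.
bad = "SELECT COUNT(*) FROM t WHERE field LIKE 'a%sb'"
try:
    bad % ()                    # what the driver effectively does with no params
except TypeError as exc:
    print(exc)                  # not enough arguments for format string

# Doubling the percent sign escapes it:
good = "SELECT COUNT(*) FROM t WHERE field LIKE 'something%%'"
print(good % ())                # ... LIKE 'something%'
```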




[sqlalchemy] Re: text

2008-06-11 Thread Andreas Jung



--On 11. Juni 2008 17:55:44 +0100 King Simon-NFHD78 
[EMAIL PROTECTED] wrote:





-Original Message-
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]

On Behalf Of jack2318

Sent: 11 June 2008 17:50
To: sqlalchemy
Subject: [sqlalchemy] text


I tried very simple test:

s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something'")
count = conn.execute(s).fetchone()

and this produced error
Unexpected error: type 'exceptions.TypeError' not enough arguments
for format string

not big deal there is few way around but just for completeness I
believe it should work (no parameters to expand)

regards
-- jacek
PS: I am using mysql



Are you sure you didn't try:

s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something%'")




'%' must be quoted as '%%' in Python.

-aj



[sqlalchemy] sqlalchemy sessions, Queue

2008-06-11 Thread bollwyvl

I've got a thread that creates new instances (connected to a
realtime interface), a main thread, and a pool of threads that *does
stuff* to my instances. After *the stuff* is done, they usually get
written to the database by the main thread.

What's the right way to remove these things from whatever session
they're in when they are created, live unattached in the Queue, and
then reattach them to the session where *stuff* might need doing?

Thanks.
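A common pattern (a hedged sketch, not a definitive recipe) is to detach each object from its creating session with `session.expunge(obj)` before queueing it, and reattach in whichever thread needs database work with `obj = session.merge(obj)`. The thread-and-queue skeleton, with plain Python values standing in for the mapped instances, looks like:

```python
import queue
import threading

work_q = queue.Queue()
results = []

def producer(n):
    """Create instances and hand them off.  In SQLAlchemy terms you
    would call session.expunge(obj) here so each object travels
    through the queue detached from any session."""
    for i in range(n):
        work_q.put(i + 1)     # stand-in for a freshly created instance
    work_q.put(None)          # sentinel: no more work

def consumer():
    """Do *stuff* to each instance.  In SQLAlchemy terms the thread
    that needs database access would first reattach the object with
    obj = session.merge(obj)."""
    while True:
        obj = work_q.get()
        if obj is None:
            break
        results.append(obj * 2)

t = threading.Thread(target=consumer)
t.start()
producer(3)
t.join()
print(results)  # [2, 4, 6]
```

Merging rather than directly adding avoids the "object is already attached to session X" errors when an instance has passed through several threads.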




[sqlalchemy] query arithmetics

2008-06-11 Thread az

hello.
back to that theme of query.filter_or() and the missing notion of 
logical parentheses
http://groups.google.com/group/sqlalchemy/browse_thread/thread/f6798eb5ef2c0bfe
i used the subject line once in some explanation, but just now realized 
what it can mean.

can we define arithmetics over query objects? with set-like meanings.
e.g. 
query(A).whatever_filter | query(A).another_filter
  - equivalent to OR-ing the two criteria
query(A).whatever_filter & query(A).another_filter
  - equivalent to AND-ing the two criteria
~query(A).whatever_filter
  - the opposite of the criteria; e.g. ~query(A) without filters 
should return an empty result

this leaves the parentheses to the user.
+ can mean the same as |, - can be like set.subtract(), etc.

if the queries are over different classes, then addEntity and return 
tuples;

not sure about query options, what happens to them... i guess they 
disappear (or error) if different, and stay if same

what do u think?
ciao
svilen
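At the criterion level, SQLAlchemy's `and_()`, `or_()` and `not_()` already compose this way; the set-like semantics proposed for whole Query objects could be prototyped with operator overloads. A toy illustration of the proposed `|`, `&`, `~` behavior over predicates (plain Python, not SQLAlchemy API):

```python
class Criteria:
    """Toy predicate combinator showing the proposed |, &, ~ semantics."""

    def __init__(self, pred):
        self.pred = pred

    def __or__(self, other):
        # OR-ing two criteria
        return Criteria(lambda x: self.pred(x) or other.pred(x))

    def __and__(self, other):
        # AND-ing two criteria
        return Criteria(lambda x: self.pred(x) and other.pred(x))

    def __invert__(self):
        # the opposite of the criteria
        return Criteria(lambda x: not self.pred(x))

    def run(self, items):
        return [x for x in items if self.pred(x)]

small = Criteria(lambda x: x < 3)
even = Criteria(lambda x: x % 2 == 0)
data = [0, 1, 2, 3, 4]
print((small | even).run(data))   # [0, 1, 2, 4]
print((small & even).run(data))   # [0, 2]
print((~small).run(data))         # [3, 4]
```

The open questions from the mail (mixed classes, conflicting query options) are exactly the parts this toy glosses over.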




[sqlalchemy] Re: text

2008-06-11 Thread jack2318

Yep. You are right: %% instead of %.
thanks
-- jacek

On Jun 11, 9:59 am, Andreas Jung [EMAIL PROTECTED] wrote:
 --On 11. Juni 2008 17:55:44 +0100 King Simon-NFHD78



 [EMAIL PROTECTED] wrote:

  -Original Message-
  From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
  On Behalf Of jack2318
  Sent: 11 June 2008 17:50
  To: sqlalchemy
  Subject: [sqlalchemy] text

  I tried very simple test:

  s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something'")
  count = conn.execute(s).fetchone()

  and this produced error
  Unexpected error: type 'exceptions.TypeError' not enough arguments
  for format string

  not big deal there is few way around but just for completeness I
  believe it should work (no parameters to expand)

  regards
  -- jacek
  PS: I am using mysql

  Are you sure you didn't try:

  s = text("SELECT COUNT(*) FROM table WHERE field LIKE 'something%'")

 '%' must be quoted as '%%' in Python.

 -aj

  application_pgp-signature_part
 1KDownload



[sqlalchemy] awful query

2008-06-11 Thread az

here's this trouble...

i have a forest of bitemporal/versioned objects that are all linked 
via m2m relations, and they all own parameters, via m2m again.

any idea how to map something like this then without going down to 
tables:
(pseudocode)
Doc.ver == ver
and (
Doc.dbid == Doc2Place.doc and  
Doc2Place.place == ParamOwnership._place
or
Doc.dbid == Doc2Position.doc and (
Doc2Position.poz == ParamOwnership._poz
or
Doc2Position.poz == Position2Dept.poz and (
Position2Dept.dept == ParamOwnership._dept
or
Position2Dept.dept == Dept2Dept.child and 
Dept2Dept.parent == ParamOwnership._dept
)
)
)
and ParamOwnership.value == ParamValue.db_id

where: 
 Doc, Place, Position, Dept all have ParamValues (via m2m 
ParamValueOwnership)
 Doc has Place and Position (via m2m Doc2Place, Doc2Position)
 Position has Dept (via m2m Pos2Dept)
 Dept has parent Dept (via m2m Dept2Dept)
the last one is recursive, but i'll handle it somehow, adjacent sets 
or unroll several levels.

the good news is that i'm generating the above at the level of tables/ 
plain column expressions (unrolling); the bad news is that i may need all 
the intermediate objects as well... will add_entity() on all of them (Doc, 
Place, Position, Dept, Ownership) do it?

another trouble is the ParamOwnership m2m: it has different links for 
diff. types of owners (mutually exclusive), because otherwise i hit 
multiple inheritance. 

any idea about multiple inheritance? i.e. it looks like creating the 
same object via different polymorphic hierarchies..

ta
svilen

On Tuesday 10 June 2008 16:39:57 you wrote:
 g'day.
 i stepped on strange behaviour (0.4 latest):
  class AB is an assoc proxy, .myb pointing to B;
 the clause
AB.myb == B.dbid
 fails with NotImplementedError:
 File "sqlalchemy/sql/expression.py", line 1191, in __eq__
     return self.operate(operators.eq, other)
 File "sqlalchemy/sql/expression.py", line 1315, in operate
     return o[0](self, op, other[0], *o[1:], **kwargs)
 File "sqlalchemy/sql/expression.py", line 1277, in __compare
     obj = self._check_literal(obj)
 File "sqlalchemy/sql/expression.py", line 1421, in _check_literal
     return other.expression_element()
 File "sqlalchemy/orm/attributes.py", line 53, in expression_element
     return self.comparator.expression_element()
 File "sqlalchemy/orm/interfaces.py", line 432, in expression_element
     return self.clause_element()
 File "sqlalchemy/sql/expression.py", line 1170, in clause_element
     raise NotImplementedError()
 NotImplementedError

 these work (note the .property):
  AB.myb.property == B.dbid
  AB.myb == 3

 AB.myb.property.direction is MANYTOONE
 seems i'm missing something??
 any idea or prepare a test case?
 svilen






[sqlalchemy] Re: reading from one database and writing to another

2008-06-11 Thread qhfgva

Thanks for the replies.  I'll have to take a while to digest the
comments and suggestions to see how I'm going to proceed.

I'm sort of in an initial trial mode working a few hours here and
there to get an idea if using sqlalchemy is feasible.  I'm expecting
it will be and I'll probably just forge ahead and do whatever it takes
to get something working.  Right now we are using mysqldb and hand
crafted sql strings and my own very lightweight (er, dumb) read only
ORM layer.   Unfortunately I'll need to slowly introduce sqlalchemy in
little bits since the system is too huge to even consider a
frankenstein switch from one method to another.

I'm most intrigued by the mysql_proxy idea.  This would be a win for
us even if for some reason I don't get buy-in on introducing
sqlalchemy to our system.  I'm assuming that sqlalchemy in this
scenario isn't even aware of the proxy level at all.

thanks

On Jun 9, 12:44 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 we're also assuming that you've exhausted these approaches (which  
 would be a lot more appropriate for this sort of thing):

 http://forge.mysql.com/wiki/MySQL_Proxy
 http://dev.mysql.com/doc/refman/5.0/en/replication.html

 On Jun 9, 2008, at 2:22 PM, qhfgva wrote:



  On Jun 6, 12:34 pm, Michael Bayer [EMAIL PROTECTED] wrote:
  On Jun 6, 2008, at 2:29 PM, qhfgva wrote:

  We have (what I think of as) a moderately complicated database
  configuration and I'm hoping there will be a way to configure
  sqlalchemy to deal with it.  The basic scenario is like this:

  There are N mysql servers in different geographical regions that are
  all replicating against one master.  In the interest of speed the  
  rule
  in each location is to do reads which are very frequent against the
  local copy of the database and if there is a write to do that  
  against
  the master.  As an added wrinkle the user has an option to write to
  the master with a master_pos_wait so that the current process will
  wait until replication has caught up with the update just executed.
  Hopefully that makes sense and gives enough of a flavor of what I've
  got in mind.

  I'm pretty new to sqlalchemy.  Is the above feasible?  If so are  
  there
  examples to compare with and learn from doing something similar?
  Where (api/code) would I start looking to accomplish the above?

  Any tips to get me going would be much appreciated.

  easiest approach is to use multiple sessions, multiple engines.    
  Your
  app would need to know which engine it wants to talk to, and binds a
  session to that engine.  Binding is described here:  
  http://www.sqlalchemy.org/docs/04/session.html#unitofwork_getting_bin
  ...

  Thanks I'll take a look.  I left out what I think is an important part
  of this scenario (or maybe it's trivial - I don't have a good
  perspective on this yet).  In any case,  I would like to use the ORM
  component of sqlalchemy and completely hide the fact that the read/
  write connections are possibly different.   (They might become the
  same connection if the local database becomes inaccessible and/or is
  too far behind the master).

  In other words I'd like to have a handle to, say, a user object, and
  do reads/updates to it with the programmer using this object not
  caring about how the work gets done.   So for instance I select a
  number of user objects that come from the local database.  Later I
  update a field on one of these and the update takes place on the
  master directly.

  Is that weird?  Doable?  Unfortunately this is the environment I need
  to get this working with.

  As a side note, we manage this difference by hand now, it's really
  annoying which is why I'd love to abstract it away.

  thanks.



[sqlalchemy] Is it possible to have a relation on string?

2008-06-11 Thread Bobby Impollonia

Does SA support the following scenario? :
I have a class (let's call it User). I have a many-to-many
relationship between Users and urls where a url is just a string. So I
want to have a secondary table where one column is a foreign key on
User and the other is a string.

If the second column were a foreign key on a mapped table, I could
use a relation so that for a given user I could append, remove, etc.
the related items via an instrumented list. With them being raw
strings instead of foreign keys on something else, is something like
this still possible? Is there a way to say that I want an instrumented
list of strings?




[sqlalchemy] Re: query arithmetics

2008-06-11 Thread Michael Bayer


On Jun 11, 2008, at 9:43 AM, [EMAIL PROTECTED] wrote:


 hello.
 back to that theme of query.filter_or() and the missing notion of
 logical parentheses
 http://groups.google.com/group/sqlalchemy/browse_thread/thread/f6798eb5ef2c0bfe
 i once used the subject in some explanation but just now realized
 what it can mean.

 can we define arithmetic over query objects? with set-like meanings.
 e.g.
 query(A).whatever_filter | query(A).another_filter
  - equivalent to or'ing the two criteria
 query(A).whatever_filter & query(A).another_filter
  - equivalent to and'ing the two criteria
 ~query(A).whatever_filter
  - the opposite of the criteria, e.g. ~query(A) without filters
 should return empty result

 this leaves the parenthesises to the user.
 + can mean same as |, - can be like set.subtract(), etc.

 if the queries are over different classes, then addEntity and return
 tuples;

 not sure about query options, what happens to them... i guess they
 disappear (or error) if different and stay if same

 what do you think?

Sure, instead of query(A).whatever_filter call query(A).whereclause.
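For the set-style combination being discussed, Query also exposes union() and intersect() methods. A minimal runnable sketch on a recent SQLAlchemy (1.4+), with an illustrative class `A` that is not from the thread:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([A(id=1, name='x'), A(id=2, name='y'), A(id=3, name='z')])
session.commit()

q1 = session.query(A).filter(A.name == 'x')
q2 = session.query(A).filter(A.name == 'y')

# set-style "or" of the two criteria: rows matching either query
union = q1.union(q2)

# set-style "and": rows matching both criteria (empty here)
inter = q1.intersect(q2)
```

This gives the or/and semantics from the proposal without operator overloading; the `~` (negation) case has no direct Query-level equivalent and would be expressed with not_() inside filter().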




[sqlalchemy] Re: awful query

2008-06-11 Thread Michael Bayer


On Jun 11, 2008, at 10:45 AM, [EMAIL PROTECTED] wrote:


 here this trouble...

 i have a forest of bitemporal/versioned objects that are all linked
 via m2m relations, and they all own parameters, again via m2m.

 any idea how to map something like this then without going down to
 tables:
 (pseudocode)
Doc.ver == ver
and (
Doc.dbid == Doc2Place.doc and
Doc2Place.place == ParamOwnership._place
or
Doc.dbid == Doc2Position.doc and (
Doc2Position.poz == ParamOwnership._poz
or
Doc2Position.poz == Position2Dept.poz and (
Position2Dept.dept == ParamOwnership._dept
or
Position2Dept.dept == Dept2Dept.child and
Dept2Dept.parent == ParamOwnership._dept
)
)
)
and ParamOwnership.value == ParamValue.db_id

overnormalized ?   (not really sure, just something to ask)


 the good news is that i'm generating the above on level of tables/
 plain column expressions (unrolling); the bad is that i may need all
 the intermediate objects as well... will add_entity( all them Doc
 Place Position Dept Ownership) do it?


don't use add_entity(), just send everything you need to
session.query().

 another trouble is the ParamOwnership m2m, it has different links for
 diff. types of owners (mutualy exclusive), because otherwise i hit
 multiple inheritance.

 any idea about multiple inheritance? i.e. it looks like creating the
 same object via different polymorphic hierarchies...

The ORM doesn't support multiple inheritance directly (I've never  
worked with multiple inheritance of tables before so I'm not an  
authority on what that even looks like).






[sqlalchemy] Re: Is it possible to have a relation on string?

2008-06-11 Thread Michael Bayer


On Jun 11, 2008, at 3:06 PM, Bobby Impollonia wrote:


 Does SA support the following scenario? :
 I have a class (let's call it User). I have a many-to-many
 relationship between Users and urls where a url is just a string. So I
 want to have a secondary table where one column is a foreign key on
 User and the other is a string.

unless there's a third table involved, that sounds more like a one-to-many
collection to me (a user table and a table of URLs).


 If the second column were a foreign key on a mapped table, I could
 use a relation so that for a given user I could append, remove, etc.
 the related items via an instrumented list. With them being raw
 strings instead of foreign keys on something else, is something like
 this still possible? Is there a way to say that I want an instrumented
 list of strings?

if you have a table of User id (FK) and URL string (FK), that's just a  
many-to-many table.  Set it as secondary in a relation() and you're  
good to go.   Strings can be foreign keys just like integers or most  
anything else.

If you want the collection to return strings directly and not mapped  
objects, the current approach for that is to use the associationproxy,  
detailed in the plugins chapter of the docs.
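The associationproxy approach mentioned above can be sketched as follows. This is a minimal sketch assuming SQLAlchemy 1.4+; the `User`/`UserUrl` names and column layout are illustrative, not from the thread:

```python
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class UserUrl(Base):
    # the secondary-style table: a User FK plus a raw URL string
    __tablename__ = 'user_urls'
    user_id = Column(Integer, ForeignKey('users.id'), primary_key=True)
    url = Column(String, primary_key=True)
    def __init__(self, url):
        self.url = url

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    _url_rows = relationship(UserUrl, cascade='all, delete-orphan')
    # expose the collection as a plain list of strings
    urls = association_proxy('_url_rows', 'url')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

u = User(name='bob')
u.urls.append('http://example.com')  # append a string, not a mapped object
session.add(u)
session.commit()
```

Appending a string transparently creates the intermediary `UserUrl` row, which is exactly the "instrumented list of strings" the question asks for.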





[sqlalchemy] Re: Insertion order not respecting FK relation

2008-06-11 Thread Malthe Borch
Michael Bayer wrote:
 you'd have to work this into a full self-contained script which I can
 run locally since it seems there's some specific usage pattern creating
 the issue.  (i.e. it's very difficult for me to piece together snippets
 and guess where the issue might be occurring).

This is reasonably self-contained; I've tried to make it as short as 
possible.

src/example/tables.py:  All tables and mappers
src/example/README.txt: Short demonstration which leads to error

You can run the example using:

$ python bootstrap.py
$ bin/buildout
$ bin/test

Note that the example requires a Python with a working psycopg2; the 
testrunner expects a database called "test" to be available on a running 
Postgres.

\malthe




example.tar.gz
Description: GNU Zip compressed data


[sqlalchemy] Re: Insertion order not respecting FK relation

2008-06-11 Thread Michael Bayer
thanks for this example.  There are several issues with this mapping.

The most crucial, although not the issue in this specific example, is  
that the relations table is used both as the secondary table in a  
relation(), and is also mapped directly to the Relation class.  SQLA  
does not track this fact and even in a working mapping will attempt to  
insert multiple, redundant rows into the table if you had, for  
example, appended to the records collection and also created a  
Relation object.  This is mentioned at the bottom of 
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_patterns_association
 
  but is also more strongly emphasized in the 0.5 docs, since its a  
very common mistake (its also not entirely a mistake if the mappings  
are used carefully or with the viewonly=True flag, hence we haven't  
built a check for this, although its probably something we should do).

The next issue which is the specific cause of the problem here is that  
SQLA's topological sort is based off of the relationships between  
classes and objects, and not directly the foreign key relationships  
between tables.   Specifically, there is no stated relationship  
between the Record class and the Soup/Collection classes - yet you  
append a Record object to the records collection which is only meant  
to store Soup objects.  SQLA sees no dependency between the  
Collection and Record mappers in this case, and the order of table  
insertion is undefined.  This collection append is only possible due  
to the enable_typechecks=False setting which essentially causes SQLA  
to operate in a slightly broken mode to allow very specific use  
cases to work (which are not this one- hence SQLA's behavior is still  
undefined).  enable_typechecks, as the initial error message  
implied when it mentioned polymorphic mapping, is meant to be used  
only with inheritance scenarios, and only with objects that are  
subclasses of the collected object.   It suggests that a certain  
degree of typechecking should remain even if enable_typechecks is  
set to False (something for me to consider in 0.5).

I've considered someday doing a rewrite of UOW that ultimately bases  
the topological sort off of ForeignKey and the actual rows to be  
inserted, and that's it.  It's nothing that will happen anytime soon as  
it's a huge job and our current UOW is extremely stable and has done a  
spectacular job for almost two years at this point.
approach might prevent this specific symptom with this specific  
mapping, it seems like a bad idea in any case to support placing  
arbitrary, unrelated types into collections that have been defined as  
storing a certain type.   I'm not sure at all if that approach to UOW  
wouldn't ultimately have all the same constraints as our current  
approach anyway.

Fortunately, the solution here is very simple as your table setup is a  
pure classic joined table inheritance configuration.   The attached  
script (just one script; sorry, all the buildout stuff seemed a little  
superfluous here) illustrates a straightforward mapping against these  
tables which only requires that Record and Collection subclass Soup  
(which is the nature of the joins on those tables).  The joins  
themselves are generated automatically by SQLA so there's no need to  
spell those out.  The enable_typechecks flag is still in use here in  
its stated use case; that you have a collection which can flush  
subtypes of Soup, but when queried later, will only return Soup  
objects.  You can improve upon that by using a polymorphic  
discriminator (see the docs for info on that).

The script illustrates using the secondary table in the records  
collection; this is what seems reasonable considering that there is no  
other meaningful data in the relations table (the surrogate PK in  
that table is also superfluous).  If there are meaningful columns in  
your actual application's version of the table, then you'd want to do  
away with secondary and use the association object pattern.
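The joined-table-inheritance mapping described above can be condensed into a sketch like the following. This uses the declarative style on a recent SQLAlchemy (1.4+) rather than the 0.4 mapper API of the attached script, and the exact column names are illustrative:

```python
from sqlalchemy import Column, Integer, String, ForeignKey, Table, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

# the m2m table, used purely as "secondary" and not mapped to a class
relations = Table(
    'relations', Base.metadata,
    Column('left_id', Integer, ForeignKey('soup.id'), primary_key=True),
    Column('right_id', Integer, ForeignKey('soup.id'), primary_key=True),
)

class Soup(Base):
    __tablename__ = 'soup'
    id = Column(Integer, primary_key=True)
    type = Column(String)  # polymorphic discriminator
    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'soup'}

class Collection(Soup):
    __tablename__ = 'collection'
    id = Column(Integer, ForeignKey('soup.id'), primary_key=True)
    # collection of Soup objects; subtypes like Record flush correctly
    # because they are real subclasses of Soup
    records = relationship(
        Soup, secondary=relations,
        primaryjoin=id == relations.c.left_id,
        secondaryjoin=Soup.id == relations.c.right_id,
    )
    __mapper_args__ = {'polymorphic_identity': 'collection'}

class Record(Soup):
    __tablename__ = 'record'
    id = Column(Integer, ForeignKey('soup.id'), primary_key=True)
    __mapper_args__ = {'polymorphic_identity': 'record'}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

c = Collection()
c.records.append(Record())  # a subclass of Soup, so the UOW sees the dependency
session.add(c)
session.commit()
loaded = session.query(Collection).one()
```

Because Record is a subclass of Soup, the unit of work knows the insertion order, and the discriminator column means queried members come back as their concrete subtypes, so no enable_typechecks=False is needed at all.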



from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('postgres://scott:[EMAIL PROTECTED]/test', echo=True)

# set up session
connection = engine.connect()
Session = sessionmaker(autoflush=True, transactional=True)
session = Session(bind=connection)

# set up metadata
metadata = MetaData(engine)

class Soup(object):
pass

class Collection(Soup):
pass

class Relation(object):
pass

class Record(Soup):
pass

soup = Table(
'soup',
metadata,
Column('id', Integer,