[sqlalchemy] Re: table name in foreign key in wrong case on Mac OS X

2009-10-02 Thread rootsmith

Just a quick update ... Mac OS X is case preserving but case
insensitive so a query directly within mysql for table USER using
user will result correctly.  So this just deepens the mystery more -
on UNIX either MySQLdb or SQLAlchemy is preserving the case of the
table name as USER but on Mac OS X it is not preserving it, which
should work, but whatever routine that underlies the search for the
foreign key in the example I have given is not working.

On Oct 1, 10:27 pm, rootsmith ke...@rootsmith.ca wrote:
 Previously I was doing development of a project on an Ubuntu 9.04
 machine with MySQL 5.0 and just recently switched to doing development
 on a MacBook Pro.  I installed identical versions of MySQL on the
 MacBook.  I used the exact same source files and installed the same
 versions of all libraries for the project, however, when I try to
 initialize the model on the MacBook I get the following stack trace:

 File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/__init__.py, line 751, in
 mapper
     return Mapper(class_, local_table, *args, **params)
   File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/mapper.py, line 194, in
 __init__
     self._configure_inheritance()
   File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/mapper.py, line 239, in
 _configure_inheritance
     self.inherit_condition = sqlutil.join_condition
 (self.inherits.local_table, self.local_table)
   File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/sql/util.py, line 135, in
 join_condition
     between '%s' and '%s' % (a.description, b.description))
 sqlalchemy.exc.ArgumentError: Can't find any foreign key relationships
 between 'USER' and 'MANAGER'

 This is despite the fact that there is a foreign relation between the
 two as follows on the MANAGER table:

 CONSTRAINT FK_MANAGER_USER FOREIGN KEY (user_id)
     REFERENCES `USER`(user_id)

 I have traced the problem so far to the fact that on Ubuntu, the
 ForeignKey in SQLAlchemy is built as ForeignKey(u'USER.user_id') and
 on the Mac it is ForeignKey(u'user.user_id').  This has been confirmed
 by tracing the code into sql.util.join_condition where the call
 fk.get_referant() is being made.

 Note again, this is the EXACT SAME code running on both machines.  I
 don't know if it is a MySQL setting (unlikely), MySQLdb, or SQLAlchemy
 that is causing the incorrect case issue on the table name.  Does
 anyone know off hand what might be going on here?

 Just to confirm, the table in the database is called USER and
 therefore any search for user will not yield any results.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Select from multiple databases

2009-10-02 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:sqlalch...@googlegroups.com] On Behalf Of kkapron
 Sent: 01 October 2009 21:27
 To: sqlalchemy
 Subject: [sqlalchemy] Select from multiple databases
 
 
 Hello, I'm a beginner in sqlalchemy and I've got a problem with select
 statement.
 
 example:
 
 tabela T1: id, username, date (in database B1)
 tabela T2: id, user_id, ip (in database B2)
 
 I've got two engines
 ...
 self.db['B1']['engine'] = create_engine('mysql://B1')
 self.db['B1']['conn'] = self.db['B1']['engine'].connect()
 self.db['B1']['metadata'] = MetaData()
 self.db['B1']['metadata'].bind = self.db['B1']['engine']
 self.db['B1']['metadata'].create_all()
 
 self.db['B2']['engine'] = create_engine('mysql://B2')
 self.db['B2']['conn'] = self.db['B1']['engine'].connect()
 self.db['B2']['metadata'] = MetaData()
 self.db['B2']['metadata'].bind = self.db['B2']['engine']
 self.db['B2']['metadata'].create_all()
 ...
 
 and tables
 ...
 self.tables['T1'] = Table('T1', self.db['B1']['metadata'],
 autoload=True)
 self.tables['T2'] = Table('T2', self.db['B2']['metadata'],
 autoload=True)
 ...
 
 and a test query:
 ...
 T1 = self.tables['T1']
 T2 = self.tables['T2']
 s = select( [T1.c.username, T2.c.ip], (T2.c.user_id == T1.c.id) )
 s.execute().fetchall()
 
 and error:
 
 (ProgrammingError) (1146, Table 'B2.T1' doesn't exist)...
 
 it's true that T1 doesn't exists in B2, because it exist in T2.
 
 Does anybody know how to help me? :)
 

I don't think you can do this - the 'select' function represents a
single SQL SELECT statement, which can't be sent to 2 different database
servers. You'd need to run two separate queries and join the results in
Python.

If your tables are actually in different schemas but the same MySQL
instance (ie. If you can connect to the MySQL server and write 'SELECT *
FROM B1.T1' and 'SELECT * FROM B2.T2'), then you can use a single engine
and metadata to access them both by specifying the schema in your Table
definitions. See:

 
http://www.sqlalchemy.org/docs/05/metadata.html#specifying-the-schema-na
me

for an example.

Hope that helps,

Simon

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Orm slow to update why?

2009-10-02 Thread Christian Démolis
*Hello M. Bayer

Ok i understand now what it is strange

My configuration
I m on windows xp pro sp3
Python is 2.5.2
SqlAlchemy is 0.5.6
Dell optiplex 755
Intel core 2 duo e8300
2go ram
**
I dont pass any paremeters in create_engine, is it wrong?*

chaine =
'mysql://'+ParametresDeConnection[Identifiant]+:+ParametresDeConnection[MotDePasse]+'@'+ParametresDeConnection['Ip']+'/'+ParametresDeConnection[BaseDeDonnees]
engine = create_engine(chaine)
*
This is the final part of my declaration*

class Query(Query):
def __init__(self, *arg, **kw):
   self._populate_existing = True
   super(Query, self).__init__(*arg, **kw)

Session = scoped_session(sessionmaker(autocommit=True, bind=engine,
query_cls=Query))
session = Session()

import MySQLdb
db = MySQLdb.connection(host=192.168.0.110, user=apm, passwd=apm,
db=azeane)
print AvecMySQlDB
xref = time.time()
for x in xrange(100):
db.query(UPDATE utilisateur SET Dispo=+str(x%2)+ WHERE
IdUtilisateur=1)
r = db.store_result()
print Update 1000 MySQLdb, time.time()-xref

xreftotal = time.time()
for x in xrange(100):

session.query(Utilisateur).filter(Utilisateur.IdUtilisateur==1).update({'Dispo':x%2},
False)
print Update 1000 SqlAlchemy, time.time()-xref

*As u can see, i m not in transactional mode.* *I add a new benchmark and i
execute it with local database and tunisia database
Local results
Update 1000 MySQLdb 0.016324249
Update 1000 SqlAlchemy 0.235000133514

Tunisia results
Update 1000 MySQLdb 0.016324249
Update 1000 SqlAlchemy 31.0309998989

It s crazy ^^

I don t know why my station is slow? Do u have an idea?*

2009/10/1 Michael Bayer mike...@zzzcomputing.com


 Christian Démolis wrote:
  Maybe because the database is in Tunisia and my Computer in France.
  I don t use sqlite, i use MySQL.
  I just did a test on internet in Tunisia, 39kbits/sec upload and
  417kbits/sec

 right but, you had these results:

 MySQLdb - .09 seconds
 ORM - .3 seconds

 so, network overhead of sending update  string to tunisia, is at most
 .09 seconds.  SQLAlchemy also pulls the cursor.rowcount back, so that
 perhaps adds network overhead as well (and you would see this in the
 profiling results), although that should be tiny.   so .21 seconds of
 overhead approximately would appear to be spent locally.whereas on my
 system within .21 seconds I can issue that same amount of work a few
 thousand times.   the simple ORM examples you have should not be issuing
 any other SQL statements.


 
  2009/10/1 Michael Bayer mike...@zzzcomputing.com
 
 
  Christian Démolis wrote:
   With debug mode it seems to take 0.15 second, my timer print 0.45 s
   i don t know why this difference?
  
   2009-10-01 17:00:38,586 INFO sqlalchemy.engine.base.Engine.0x...7f50
   UPDATE
   utilisateur SET `Dispo`=%s WHERE utilisateur.`IdU
   tilisateur` = %s
   2009-10-01 17:00:38,586 INFO sqlalchemy.engine.base.Engine.0x...7f50
  [0,
   1L]
   2009-10-01 17:00:38,743 INFO sqlalchemy.engine.base.Engine.0x...7f50
   COMMIT
   With ORM force update 0.45368665
 
  On my workstation which is an intel mac, I can run 1000
  Session.execute(update...) against a sqlite memory db statements in
  .15
  seconds, using the simple time.time() approach to measure.  The sqlite
  cursor directly, which is against an empty table so is insanely fast,
  can
  run 1000 in .01 seconds, so that is .14 seconds of overhead within SQLA.
 
  In your case, you're getting .2-.3 seconds of overhead just for *one*
  statement, subtracting what your database call takes natively.   Meaning
  that while I can execute around 8000 Session.execute() statements per
  second, you can execute less than five.  That seems very strange.
 
 
 
 
  
   2009/10/1 Michael Bayer mike...@zzzcomputing.com
  
  
   Michael Bayer wrote:
   
Christian Démolis wrote:
Hello,
   
I tried all the method to compare the different methods :
  
  
   Here's a decorator I would advise using:
  
  
  
 
 http://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677
  
   Also I would advise testing this as well.  Session.execute() creates
  a
   text() construct which does some regular expression matching that you
   don't need:
  
   engine.execute(update table set foo=bar)
  
  
  
  
  
  
  
  
  
  
  
   
*TEST CODE*
   
xref = time.time()
self.UtilisateurCourant.Dispo = 1
session.merge(self.UtilisateurCourant, dont_load=True)
session.flush()
print With ORM dont_load, time.time()-xref
   
xref = time.time()
   
   
  
 
 session.query(Utilisateur).filter(Utilisateur.IdUtilisateur==self.UtilisateurCourant.IdUtilisateur).update({'Dispo':0},
False)
print With ORM force update, time.time()-xref
   
xref = time.time()
if self.UtilisateurCourant.IdUtilisateur:
session.execute(UPDATE utilisateur SET Dispo=0
  

[sqlalchemy] Re: Orm slow to update why?

2009-10-02 Thread Christian Démolis
I does a mistake in my last, that is the results (error in parameters of
mysqldb connection)

Local database
Update 1000 MySQLdb 0.0319998264313
Update 1000 SqlAlchemy 0.265999794006

France-Tunisia
Update 1000 MySQLdb 10.391324
Update 1000 SqlAlchemy 42.157924

Sorry

2009/10/2 Christian Démolis christiandemo...@gmail.com

 *Hello M. Bayer

 Ok i understand now what it is strange

 My configuration
 I m on windows xp pro sp3
 Python is 2.5.2
 SqlAlchemy is 0.5.6
 Dell optiplex 755
 Intel core 2 duo e8300
 2go ram
 **
 I dont pass any paremeters in create_engine, is it wrong?*

 chaine =
 'mysql://'+ParametresDeConnection[Identifiant]+:+ParametresDeConnection[MotDePasse]+'@'+ParametresDeConnection['Ip']+'/'+ParametresDeConnection[BaseDeDonnees]
 engine = create_engine(chaine)
 *
 This is the final part of my declaration*

 class Query(Query):
 def __init__(self, *arg, **kw):
self._populate_existing = True
super(Query, self).__init__(*arg, **kw)

 Session = scoped_session(sessionmaker(autocommit=True, bind=engine,
 query_cls=Query))
 session = Session()

 import MySQLdb
 db = MySQLdb.connection(host=192.168.0.110, user=apm, passwd=apm,
 db=azeane)
 print AvecMySQlDB
 xref = time.time()
 for x in xrange(100):
 db.query(UPDATE utilisateur SET Dispo=+str(x%2)+ WHERE
 IdUtilisateur=1)
 r = db.store_result()
 print Update 1000 MySQLdb, time.time()-xref

 xreftotal = time.time()
 for x in xrange(100):

 session.query(Utilisateur).filter(Utilisateur.IdUtilisateur==1).update({'Dispo':x%2},
 False)
 print Update 1000 SqlAlchemy, time.time()-xref

 *As u can see, i m not in transactional mode.* *I add a new benchmark and
 i execute it with local database and tunisia database
 Local results
 Update 1000 MySQLdb 0.016324249
 Update 1000 SqlAlchemy 0.235000133514

 Tunisia results
 Update 1000 MySQLdb 0.016324249
 Update 1000 SqlAlchemy 31.0309998989

 It s crazy ^^

 I don t know why my station is slow? Do u have an idea?*


 2009/10/1 Michael Bayer mike...@zzzcomputing.com


 Christian Démolis wrote:
  Maybe because the database is in Tunisia and my Computer in France.
  I don t use sqlite, i use MySQL.
  I just did a test on internet in Tunisia, 39kbits/sec upload and
  417kbits/sec

 right but, you had these results:

 MySQLdb - .09 seconds
 ORM - .3 seconds

 so, network overhead of sending update  string to tunisia, is at most
 .09 seconds.  SQLAlchemy also pulls the cursor.rowcount back, so that
 perhaps adds network overhead as well (and you would see this in the
 profiling results), although that should be tiny.   so .21 seconds of
 overhead approximately would appear to be spent locally.whereas on my
 system within .21 seconds I can issue that same amount of work a few
 thousand times.   the simple ORM examples you have should not be issuing
 any other SQL statements.


 
  2009/10/1 Michael Bayer mike...@zzzcomputing.com
 
 
  Christian Démolis wrote:
   With debug mode it seems to take 0.15 second, my timer print 0.45 s
   i don t know why this difference?
  
   2009-10-01 17:00:38,586 INFO sqlalchemy.engine.base.Engine.0x...7f50
   UPDATE
   utilisateur SET `Dispo`=%s WHERE utilisateur.`IdU
   tilisateur` = %s
   2009-10-01 17:00:38,586 INFO sqlalchemy.engine.base.Engine.0x...7f50
  [0,
   1L]
   2009-10-01 17:00:38,743 INFO sqlalchemy.engine.base.Engine.0x...7f50
   COMMIT
   With ORM force update 0.45368665
 
  On my workstation which is an intel mac, I can run 1000
  Session.execute(update...) against a sqlite memory db statements in
  .15
  seconds, using the simple time.time() approach to measure.  The sqlite
  cursor directly, which is against an empty table so is insanely fast,
  can
  run 1000 in .01 seconds, so that is .14 seconds of overhead within
 SQLA.
 
  In your case, you're getting .2-.3 seconds of overhead just for *one*
  statement, subtracting what your database call takes natively.
 Meaning
  that while I can execute around 8000 Session.execute() statements per
  second, you can execute less than five.  That seems very strange.
 
 
 
 
  
   2009/10/1 Michael Bayer mike...@zzzcomputing.com
  
  
   Michael Bayer wrote:
   
Christian Démolis wrote:
Hello,
   
I tried all the method to compare the different methods :
  
  
   Here's a decorator I would advise using:
  
  
  
 
 http://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677
  
   Also I would advise testing this as well.  Session.execute() creates
  a
   text() construct which does some regular expression matching that
 you
   don't need:
  
   engine.execute(update table set foo=bar)
  
  
  
  
  
  
  
  
  
  
  
   
*TEST CODE*
   
xref = time.time()
self.UtilisateurCourant.Dispo = 1
session.merge(self.UtilisateurCourant,
 dont_load=True)
session.flush()
print With ORM dont_load, time.time()-xref
   
xref = 

[sqlalchemy] UniqueConstraint with a function

2009-10-02 Thread Joril

Hi everyone!
I have a class Document with attributes Number and Date, is
there a way to set a UniqueConstraint on Number + year(Date)?

(SQLAlchemy 0.5.5, PostgreSQL 8.3.4)

Thanks in advance!
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] support for set/replace on a collection in AttributeExtension

2009-10-02 Thread Tefnet Developers

Hi, 

I need to get a single event at an extension in case of such operation:

obj.colAttr = [x, y, z]

right now I will receive:
extension.remove(...) for each value currently in colAttr
extension.append(...) for x, y and z.

what I need is something like:

extension.replace(oldvalues, values) with two lists or something like
that.

Right now my approach (yes, I know this is sick) is:

def weComeFrom():
f=inspect.currentframe().f_back.f_back
while(inspect.getmodule(f).__name__.startswith('sqlalchemy.')):
f=f.f_back
return (f.f_lasti, f.f_code)


class ImmutableExtension(TefAttributeExtension):
active_history = True
triggerName = 'immutable'

def append(self, state, value, initiator):
f = weComeFrom()
try:
lf = initiator.__tefLastComeFrom
except AttributeError:
lf = None
if len(self._getOldValue(state, initiator)) == 0:
initiator.__tefLastComeFrom = f
return value
elif f == lf:
return value
else:
self.raiseError(state, value, None, initiator)

What is the proper way to achieve this?

regards,
Filip Zyzniewski


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Problem limiting the output of outer join

2009-10-02 Thread tavol

I want to list the all 'Groups' that 'Users' belong to as indicated by
the User2Group link table  - as well as the Groups the user does not
belong to. This returns the desired data set except it is not filtered
for a particular user,

# output not filtered for user
s.query(Group, User2Group).outerjoin(User2Group).all()
# psuedo output - not filtered:
# Group 1 Yes
# Group 2 Yes
# Group 2 Yes
# Group 3 Yes
# Group 2 Yes
# Group 4 Yes
# Group 1 None
# Group 2 None
# Group 3 None
# Group 4 None

 If I add a filter, then then the query seems to become more like
inner join as both Group and User2Group are filtered for user_id = 5

# groups not linked to id = 5 excluded
s.query(Group).outerjoin(User2Group).filter(User2Group.user_id==5).all
()

# Output too filtered:
# Group 2 Yes
# Group 3 Yes

Any suggestions how I can return a list of all groups and show where
there is a User2Group record for e.g. User ID 5

# Desired output:
# Group 1 None
# Group 2 Yes
# Group 3 Yes
# Group 4 None

thanks for any suggestions

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: table name in foreign key in wrong case on Mac OS X

2009-10-02 Thread Michael Bayer

rootsmith wrote:

 Just a quick update ... Mac OS X is case preserving but case
 insensitive so a query directly within mysql for table USER using
 user will result correctly.  So this just deepens the mystery more -
 on UNIX either MySQLdb or SQLAlchemy is preserving the case of the
 table name as USER but on Mac OS X it is not preserving it, which
 should work, but whatever routine that underlies the search for the
 foreign key in the example I have given is not working.

Not much mystery here; MySQL's casing behavior is platform dependent. 
Here's some background at:

http://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html

by far the easiest way to deal with this kind of issue is to use all case
insensitive identifiers throughout your database.   In SQLAlchemy,
specifying a name as all lower case (i.e. users) indicates case
insensitive treatment, where no quoting will be applied.





 On Oct 1, 10:27 pm, rootsmith ke...@rootsmith.ca wrote:
 Previously I was doing development of a project on an Ubuntu 9.04
 machine with MySQL 5.0 and just recently switched to doing development
 on a MacBook Pro.  I installed identical versions of MySQL on the
 MacBook.  I used the exact same source files and installed the same
 versions of all libraries for the project, however, when I try to
 initialize the model on the MacBook I get the following stack trace:

 File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/__init__.py, line 751, in
 mapper
     return Mapper(class_, local_table, *args, **params)
   File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/mapper.py, line 194, in
 __init__
     self._configure_inheritance()
   File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/mapper.py, line 239, in
 _configure_inheritance
     self.inherit_condition = sqlutil.join_condition
 (self.inherits.local_table, self.local_table)
   File /Users/kevin/mydevenv/lib/python2.6/site-packages/
 SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/sql/util.py, line 135, in
 join_condition
     between '%s' and '%s' % (a.description, b.description))
 sqlalchemy.exc.ArgumentError: Can't find any foreign key relationships
 between 'USER' and 'MANAGER'

 This is despite the fact that there is a foreign relation between the
 two as follows on the MANAGER table:

 CONSTRAINT FK_MANAGER_USER FOREIGN KEY (user_id)
     REFERENCES `USER`(user_id)

 I have traced the problem so far to the fact that on Ubuntu, the
 ForeignKey in SQLAlchemy is built as ForeignKey(u'USER.user_id') and
 on the Mac it is ForeignKey(u'user.user_id').  This has been confirmed
 by tracing the code into sql.util.join_condition where the call
 fk.get_referant() is being made.

 Note again, this is the EXACT SAME code running on both machines.  I
 don't know if it is a MySQL setting (unlikely), MySQLdb, or SQLAlchemy
 that is causing the incorrect case issue on the table name.  Does
 anyone know off hand what might be going on here?

 Just to confirm, the table in the database is called USER and
 therefore any search for user will not yield any results.
 



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: UniqueConstraint with a function

2009-10-02 Thread Joril

On 2 Ott, 10:42, Joril jor...@gmail.com wrote:
 I have a class Document with attributes Number and Date, is
 there a way to set a UniqueConstraint on Number + year(Date)?

Self-followup: should I use a unique Index instead? But how do you
create a functional index in SQLAlchemy?
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: padding of CHAR fields, inconsistent where clause; Oracle example

2009-10-02 Thread volx

My module is called sandbox.py
After importing it to ipython and letting it run, here's what I get
for the test you suggested:

In [47]: (sandbox.price_sources.c.desciption=='EJV').right.type
Out[47]: OracleChar(length=100, convert_unicode=False,
assert_unicode=None)

The trouble that for some reason the code goes into
sqlalchemy.types.String.get_dbapi_type instead of
sqlalchemy.databases.oracle.OracleChar

In [14]: pdb.run('sandbox.test()')
 string(1)module()
(Pdb) b /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-
py2.6.egg/sqlalchemy/engine/default.py:322
Breakpoint 7 at /usr/local/lib/python2.6/site-packages/
SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/engine/default.py:322
(Pdb) c
2009-10-02 17:58:18,020 INFO sqlalchemy.engine.base.Engine.0x...97ec
SELECT USER FROM DUAL
2009-10-02 17:58:18,020 INFO sqlalchemy.engine.base.Engine.0x...97ec
{}
2009-10-02 17:58:18,025 INFO sqlalchemy.engine.base.Engine.0x...97ec
select table_name from all_tables where table_name=:name and
owner=:schema_name
2009-10-02 17:58:18,026 INFO sqlalchemy.engine.base.Engine.0x...97ec
{'name': 'VICTOR_PRICE_SOURCES', 'schema_name': 'CDOCOLLAT'}
 /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/engine/default.py(322)set_input_sizes()
- dbtype = typeengine.dialect_impl(self.dialect).get_dbapi_type
(self.dialect.dbapi)
(Pdb) dbtype
*** NameError: name 'dbtype' is not defined
(Pdb) typeengine.dialect_impl(self.dialect)

OracleChar(length=100, convert_unicode=False, assert_unicode=None)

(Pdb) # we should be going into OracleChar.get_dbapi_type

*** SyntaxError: unexpected EOF while parsing (stdin, line 1)
(Pdb) s
--Call--
 /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/types.py(124)dialect_impl()
- def dialect_impl(self, dialect, **kwargs):
(Pdb) l
119   Column('data', MyType(16))
120   )
121
122 
123
124  - def dialect_impl(self, dialect, **kwargs):
125 try:
126 return self._impl_dict[dialect]
127 except AttributeError:
128 self._impl_dict = weakref.WeakKeyDictionary()   #
will be optimized in 0.6
129 return self._impl_dict.setdefault(dialect,
dialect.type_descriptor(self))
(Pdb) l
130 except KeyError:
131 return self._impl_dict.setdefault(dialect,
dialect.type_descriptor(self))
132
133 def __getstate__(self):
134 d = self.__dict__.copy()
135 d.pop('_impl_dict', None)
136 return d
137
138 def get_col_spec(self):
139 Return the DDL representation for this type.
140 raise NotImplementedError()
(Pdb) dialect
sqlalchemy.databases.oracle.OracleDialect object at 0xa1ae54c
(Pdb) n
 /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/types.py(125)dialect_impl()
- try:
(Pdb) n
 /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/types.py(126)dialect_impl()
- return self._impl_dict[dialect]
(Pdb) n
--Return--
 /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/types.py(126)dialect_impl()-OracleCh...ode=None)
- return self._impl_dict[dialect]
(Pdb) l
121
122 
123
124 def dialect_impl(self, dialect, **kwargs):
125 try:
126  - return self._impl_dict[dialect]
127 except AttributeError:
128 self._impl_dict = weakref.WeakKeyDictionary()   #
will be optimized in 0.6
129 return self._impl_dict.setdefault(dialect,
dialect.type_descriptor(self))
130 except KeyError:
131 return self._impl_dict.setdefault(dialect,
dialect.type_descriptor(self))
(Pdb) dialect
sqlalchemy.databases.oracle.OracleDialect object at 0xa1ae54c
(Pdb) n
--Call--
 /usr/local/lib/python2.6/site-packages/SQLAlchemy-0.5.5-py2.6.egg/sqlalchemy/types.py(477)get_dbapi_type()
- def get_dbapi_type(self, dbapi):
(Pdb) l
472 return value
473 return process
474 else:
475 return None
476
477  - def get_dbapi_type(self, dbapi):
478 return dbapi.STRING
479
480 class Text(String):
481 A variably sized string type.
482
(Pdb) l 1
  1 # types.py
  2 # Copyright (C) 2005, 2006, 2007, 2008, 2009 Michael Bayer
mike...@zzzcomputing.com
  3 #
  4 # This module is part of SQLAlchemy and is released under
  5 # the MIT License: http://www.opensource.org/licenses/mit-license.php
  6
  7 defines genericized SQL types, each represented by a
subclass of
  8 :class:`~sqlalchemy.types.AbstractType`.  Dialects define
further subclasses of these
  9 types.
 10
 11 For more information see the SQLAlchemy documentation on
types.




On Oct 1, 2:48 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 volx wrote:

  Here's my program, modified as you suggest. It also creates the table
  so you can try it on any instance of Oracle. No joy 

[sqlalchemy] joined table inheritance + graph pattern

2009-10-02 Thread Thomas Drake

Hi,

I'm struggling trying to merge two patterns which separately are no-
brainers in sqa: joined tabled inheritance and (an arbitrary number
of) graph relationships, here represented as connections and
dependencies. I've run into quite a few different errors from the
following code using version 0.5.3:

(most recently)

sqlalchemy.orm.exc.UnmappedColumnError: Can't execute sync rule for
source column 'dep_dependency.parent_id'; mapper 'Mapper|Component|
dep_group_components' does not map this column.  Try using an explicit
`foreign_keys` collection which does not include destination column
'dep_group_components.group_id' (or use a viewonly=True relation).

(and when the foreign_keys is blank)

sqlalchemy.exc.ArgumentError: Could not determine relation direction
for secondaryjoin condition 'dep_groups INNER JOIN
dep_group_components ON dep_groups.group_id =
dep_group_components.group_id INNER JOIN dep_dependency ON
dep_group_components.group_id = dep_dependency.child_id', on relation
Component.dependents. Specify the foreign_keys argument to indicate
which columns on the relation are foreign.

Any thoughts on what to do here? I'm fully stumped at this point. I
should note that I've tried using a polymorphic_union (for locating
the value of the polymorphic_on and for primary and secondaryjoin in
relations). That only yields sql code with multiple definitions of
'type'.

Many thanks,
-thomas

---

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base,
DeclarativeMeta
from sqlalchemy.orm import sessionmaker, scoped_session,
create_session
from sqlalchemy import  Table, Column, Integer, String, Date,
DateTime, Time, \
MetaData, ForeignKey, Boolean,
ForeignKeyConstraint
from sqlalchemy.schema import UniqueConstraint, CheckConstraint
from sqlalchemy.orm import backref, mapper, relation, reconstructor,
polymorphic_union
from sqlalchemy import and_, or_

ENGINE_URL = 'mysql://'

TYPE_GROUP = 'group'
TYPE_COMPONENT = 'comp'

Base = declarative_base()
metadata = MetaData()
Base.metadata = metadata

engine = create_engine( ENGINE_URL, echo=False)

class Group(object):
def __repr__(self):
return Group( %s ) % self.name

def __init__(self, name=None, expired=False):

Note that self.type is automatically populated when
polymorphic_on is
configued in Group's mapper.

self.name = name
self.expired = expired

class DependencyHolderGroup(Group):
def __init__(self, name=None, version=None, expired=False):
self.name = name
self.version = version
self.expired = expired

class Component(DependencyHolderGroup):
pass

dep_groups = Table('dep_groups', metadata,

Column('group_id', Integer, primary_key=True),
Column('type', String(8)),
Column('name', String(100), unique=True),
Column('expired', Boolean),

UniqueConstraint('name', 'type', name='name_type_x'),

mysql_engine = 'InnoDb'
)

connections_table = Table('dep_connections', metadata,
Column('parent_id', Integer, ForeignKey('dep_groups.group_id')),
Column('child_id', Integer, ForeignKey('dep_groups.group_id')),

mysql_engine='InnoDb'
)

dependencies_table = Table('dep_dependency', metadata,
Column('parent_id', Integer, ForeignKey('dep_groups.group_id')),
Column('child_id', Integer, ForeignKey('dep_groups.group_id')),

mysql_engine='InnoDb'
)

dep_group_components = Table('dep_group_components', metadata,

Column('group_id', Integer, ForeignKey('dep_groups.group_id'),
primary_key=True),
Column('group_name', String(100)),
Column('version', String(50)),

# ForeignKeyConstraint(['group_name'], ['dep_groups.name']),
# UniqueConstraint('group_name', 'version',
name='name_version_x'),
# CheckConstraint('type == %s' % TYPE_COMPONENT,
name='type_comp_x'),
CheckConstraint('dependents.type != set',
name='no_set_child_x'),

mysql_engine = 'InnoDb'
)

groups_mapper = mapper(Group,

dep_groups,
order_by=['name'],
polymorphic_on=dep_groups.c.type,
polymorphic_identity=TYPE_GROUP,
properties={

'children':relation(Group, lazy=False,
primaryjoin=dep_groups.c.group_id==connections_table.c.parent_id,
secondaryjoin=dep_groups.c.group_id==connections_table.c.child_id,
secondary=connections_table, backref='parents', cascade='all'),
})

primary = dep_groups.join(dep_group_components,
 
dep_groups.c.group_id==dep_group_components.c.group_id).join
(dependencies_table,
 
dep_group_components.c.group_id==dependencies_table.c.parent_id)
secondary = dep_groups.join(dep_group_components,
 
dep_groups.c.group_id==dep_group_components.c.group_id).join
(dependencies_table,
 
dep_group_components.c.group_id==dependencies_table.c.child_id)

components_mapper = mapper(Component,

dep_group_components,
inherits=Group,
order_by=['name','version'],
polymorphic_identity=TYPE_COMPONENT,
properties={

  

[sqlalchemy] Re: joined table inheritance + graph pattern

2009-10-02 Thread Michael Bayer

Thomas Drake wrote:

 primary = dep_groups.join(dep_group_components,

 dep_groups.c.group_id==dep_group_components.c.group_id).join
 (dependencies_table,

 dep_group_components.c.group_id==dependencies_table.c.parent_id)
 secondary = dep_groups.join(dep_group_components,

 dep_groups.c.group_id==dep_group_components.c.group_id).join
 (dependencies_table,

 dep_group_components.c.group_id==dependencies_table.c.child_id)

why is all the above join construction needed ?  I see three classes but
only two mappers.   So I get the impression you're trying to say

Component - DependencyHolderGroup - Group ?  in which case map all three
classes explicitly, and two relation()s to bridge them together.

Also I'd advise upgrading to 0.5.6 as there are many bug fixes since 0.5.3
including within the area of joined table inheritance mapping.

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: joined table inheritance + graph pattern

2009-10-02 Thread Thomas Drake

On Oct 2, 3:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Thomas Drake wrote:

  primary = dep_groups.join(dep_group_components,

  dep_groups.c.group_id==dep_group_components.c.group_id).join
  (dependencies_table,

  dep_group_components.c.group_id==dependencies_table.c.parent_id)
  secondary = dep_groups.join(dep_group_components,

  dep_groups.c.group_id==dep_group_components.c.group_id).join
  (dependencies_table,

  dep_group_components.c.group_id==dependencies_table.c.child_id)

 why is all the above join construction needed ?  I see three classes but
 only two mappers.   So I get the impression you're trying to say

 Component - DependencyHolderGroup - Group ?  in which case map all three
 classes explicitly, and two relation()s to bridge them together.

 Also I'd advise upgrading to 0.5.6 as there are many bug fixes since 0.5.3
 including within the area of joined table inheritance mapping.

DependencyHolderGroup is an abstract type for classes like Component,
so I'm was thinking I didn't need a mapper for it.

Otherwise, the join construction was my fix for the errors where the
relation can't relate the base classes group_id column with the
parent_id within the primaryjoin.

Thanks for getting back. I'll update and try your approach.

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Problem limiting the output of outer join

2009-10-02 Thread Conor Davis

tavol wrote:
 I want to list the all 'Groups' that 'Users' belong to as indicated by
 the User2Group link table  - as well as the Groups the user does not
 belong to. This returns the desired data set except it is not filtered
 for a particular user,

 # output not filtered for user
 s.query(Group, User2Group).outerjoin(User2Group).all()
 # psuedo output - not filtered:
 # Group 1 Yes
 # Group 2 Yes
 # Group 2 Yes
 # Group 3 Yes
 # Group 2 Yes
 # Group 4 Yes
 # Group 1 None
 # Group 2 None
 # Group 3 None
 # Group 4 None

  If I add a filter, then then the query seems to become more like
 inner join as both Group and User2Group are filtered for user_id = 5

 # groups not linked to id = 5 excluded
 s.query(Group).outerjoin(User2Group).filter(User2Group.user_id==5).all
 ()

 # Output too filtered:
 # Group 2 Yes
 # Group 3 Yes

 Any suggestions how I can return a list of all groups and show where
 there is a User2Group record for e.g. User ID 5

 # Desired output:
 # Group 1 None
 # Group 2 Yes
 # Group 3 Yes
 # Group 4 None

 thanks for any suggestions
   
When you want to apply a filter to the right-hand side of the outer join
but not the left-hand side, you have to include the filter as part of
the outer join's ON clause. Here is a query that should work (I'm
assuming you want True/False as your second tuple element in the
results; not tested):

s.query(Group, User2Group.anycolumn != None).outerjoin((
User2Group,
(User2Group.group_id == Group.id)  (User2Group.user_id == 5))).all()

which should roughly translate to this SQL:

SELECT [Group columns], User2Group.anycolumn IS NOT NULL
FROM Group
OUTER JOIN User2Group ON User2Group.group_id = Group.id AND User2Group.user_id 
= 5

Note the lack of a WHERE clause in the query and SQL.

Hope it helps,
-Conor


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---