[sqlalchemy] identifier is too long

2010-03-30 Thread jo

Hi all,

I have some trouble creating my db schema with Oracle. The problem is
with this column:


Column('cod_caratteristica_rischio', Unicode(10), index=True, 
nullable=False)


It works fine in PostgreSQL, but when I try it on Oracle SA tries to
create an index with a name that is too long (> 30 chars).



tbl['rischio_attivita'] = Table('rischio_attivita', database.metadata,
    Column('id', Integer, Sequence('rischio_attivita_seq'), primary_key=True),
    Column('cod_attivita', Unicode(10), nullable=False),
    Column('cod_caratteristica_rischio', Unicode(10), index=True, nullable=False),
    Column('tipo_calcolo', Integer),
    Column('algoritmo', Unicode(200)),
    Column('fattore_x', Float, PassiveDefault('1.0')),
    ForeignKeyConstraint(['cod_attivita'], ['attivita.codice']),
    ForeignKeyConstraint(['cod_caratteristica_rischio'], ['caratteristica_rischio.codice'])
)


File "/usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py", line 599, in _execute
    raise exceptions.SQLError(context.statement, context.parameters, e)
sqlalchemy.exceptions.SQLError: (DatabaseError) ORA-00972: identifier is too long
'CREATE INDEX ix_rischio_attivita_cod_caratteristica_rischio ON rischio_attivita (cod_caratteristica_rischio)' {}



--
Jose Soares
Sferacarta Net 
Via Bazzanese 69

40033 Casalecchio di Reno
Bologna - Italy
Ph  +39051591054
fax +390516131537
web:www.sferacarta.com




[sqlalchemy] @compiles ignores inheritance

2010-03-30 Thread Tobias
Hi,

I have a bunch of classes that inherit from Function, and all of
them should be compiled by a method annotated with @compiles.


class __base_function(Function):
    def __init__(self, clause, *clauses, **kw):
        self.clause = clause
        Function.__init__(self, self.__class__.__name__, *clauses, **kw)

class wkt(__base_function):
    pass

class wkb(__base_function):
    pass

[..]

So I thought I could write just one method that is annotated with
@compiles(__base_function), but this does not work. Instead, I have to write a
method for each class that inherits from __base_function:

@compiles(wkt)
def compile_wkt(element, compiler, **kw):
    return __call_function(element, compiler)

@compiles(wkb)
def compile_wkb(element, compiler, **kw):
    return __call_function(element, compiler)

[..]

Is there a more elegant way, so that I do not have to write a method for
each class?

Thank you,
Tobias




[sqlalchemy] add_column does not correlate with aliased table.

2010-03-30 Thread Kalium
Hi,
I've had a look through the docs and a quick look through the forum
here, and haven't been able to solve my problem. I'm using 0.4

The following works as expected.

q = System.query().join('activity').group_by(model.System.id).add_column(func.max(Activity.id))

The add_column() recognises that the activity table is already joined,
and thus does not add it to the tables in the 'FROM' clause.


However, the following does not work. The only difference is that now
the joined table (activity) is aliased.

q = System.query().join('activity', aliased=True).group_by(model.System.id).add_column(func.max(Activity.id))

add_column() does not recognise the aliased activity table and the
from clause now looks something like 'FROM system,activity', whereas
it should be joined.

What are my best solutions?

Cheers
Ray






Re: [sqlalchemy] identifier is too long

2010-03-30 Thread Mariano Mara
Excerpts from jo's message of Tue Mar 30 03:25:18 -0300 2010:
 Hi all,
 
 I have some troubles creating my db schema with Oracle. The problem is 
 on this column:
 
 Column('cod_caratteristica_rischio', Unicode(10), index=True, 
 nullable=False)
 
 It works fine in PostgreSQL, but when I try it on Oracle SA tries to
 create an index with a name that is too long (> 30 chars).
 
 
 tbl['rischio_attivita'] = Table('rischio_attivita', database.metadata,
 Column('id', Integer, Sequence('rischio_attivita_seq'), primary_key=True),
 Column('cod_attivita', Unicode(10),nullable=False),
 Column('cod_caratteristica_rischio', Unicode(10), index=True, 
 nullable=False),
 Column('tipo_calcolo', Integer),
 Column('algoritmo', Unicode(200)),
 Column('fattore_x', Float, PassiveDefault('1.0')),
 ForeignKeyConstraint(['cod_attivita'], ['attivita.codice']),
 ForeignKeyConstraint(['cod_caratteristica_rischio'], 
 ['caratteristica_rischio.codice'])
 )
 
 
 File /usr/lib/python2.4/site-packages/sqlalchemy/engine/base.py, line 
 599, in _execute
 raise exceptions.SQLError(context.statement, context.parameters, e)
 sqlalchemy.exceptions.SQLError: (DatabaseError) ORA-00972: identifier is 
 too long
 'CREATE INDEX ix_rischio_attivita_cod_caratteristica_rischio ON 
 rischio_attivita (cod_caratteristica_rischio)' {}
 

Hi there, instead of creating the index as part of the Column
expression, use the Index[1] statement and assign a name yourself.

[1]
http://www.sqlalchemy.org/docs/reference/sqlalchemy/schema.html#sqlalchemy.schema.Index
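A minimal sketch of that approach against the Table from the original post (the
index name here is made up; anything within Oracle's 30-character limit works):

from sqlalchemy import Index

# drop index=True from the Column and name the index explicitly
Index('ix_rischio_att_cod_car_risc',
      tbl['rischio_attivita'].c.cod_caratteristica_rischio)

Since the Index references a column of the table, metadata.create_all() will
then emit the CREATE INDEX under that name.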

 



Re: [sqlalchemy] add_column does not correlate with aliased table.

2010-03-30 Thread Michael Bayer

On Mar 30, 2010, at 2:47 AM, Kalium wrote:

 Hi,
 I've had a look through the docs and a quick look through the forum
 here, and haven't been able to solve my problem. I'm using 0.4
 
 The following works as expected.
 
 q =
 System.query().join('activity').group_by(model.System.id).add_column(func.max(Activity.id))
 
 The add_column() recognises that the activity table is already joined.
 and thus does not add it to the tables in the 'FROM' clause.
 
 
 However, the following does not work. The only difference is that now
 the joined table (activity) is aliased.
 
 q =
 System.query().join('activity',aliased=True).group_by(model.System.id).add_column(func.max(Activity.id))
 
 add_column() does not recognise the aliased activity table and the
 from clause now looks something like 'FROM system,activity', whereas
 it should be joined.
 
 What are my best solutions?

in 0.4 you'd need to use SQL-level Table and alias() objects to achieve the
desired effect.  in 0.5 and above, you'd use aliased(Activity) as your entity.
the aliased=True option only affects subsequent filter/order_by/group_by calls.
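A rough sketch of the 0.5-and-up form, assuming a plain Session named session
and that 'activity' corresponds to a System.activity relation as in the query
above (untested):

from sqlalchemy import func
from sqlalchemy.orm import aliased

activity_alias = aliased(Activity)

# join against an explicit alias so the aggregate column refers to the
# same (aliased) table that was joined
q = (session.query(System, func.max(activity_alias.id))
            .join((activity_alias, System.activity))
            .group_by(System.id))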





[sqlalchemy] Column type in select w/ if condition

2010-03-30 Thread Bryan
The underlying column returns a Decimal object when queried regularly,
and when summed as follows:

select([ mytable.c.hours ])
Decimal("1.0")
select([ func.sum(mytable.c.hours) ])
Decimal("1.0")

...but when I sum it w/ an if statement, it returns a float:

select([ func.sum(func.if_(True, mytable.c.hours, 0)) ])
1.0

How can I control the return type of that summed if column?




Re: [sqlalchemy] identifier is too long

2010-03-30 Thread jo

I see, thank you, Mariano.
j
Mariano Mara wrote:

Excerpts from jo's message of Tue Mar 30 03:25:18 -0300 2010:
[..]

Hi there, instead of creating the index as part of the Column
expression, use the Index[1] statement and assign a name yourself.

[1]
http://www.sqlalchemy.org/docs/reference/sqlalchemy/schema.html#sqlalchemy.schema.Index




Re: [sqlalchemy] Column type in select w/ if condition

2010-03-30 Thread Mariano Mara
Excerpts from Bryan's message of Tue Mar 30 11:27:57 -0300 2010:
 The underlying column returns a Decimal object when queried regularly,
 and when summed as follows:
 
 select([ mytable.c.hours ])
 Decimal(1.0)
 select([ func.sum(mytable.c.hours) ])
 Decimal(1.0)
 
 ...but when I sum it w/ an if statement, it returns a float:
 
 select([ func.sum(func.if_(True, mytable.c.hours, 0)) ])
 1.0
 
 How can I control the return type of that summed if column?
 

You could use cast [1] (example: casting to Float, untested):

from sqlalchemy.sql.expression import cast
from sqlalchemy import Float
...
select([ cast(func.sum(func.if_(True, mytable.c.hours, 0)), Float)])

[1] 
http://www.sqlalchemy.org/docs/reference/sqlalchemy/expressions.html#sqlalchemy.sql.expression.cast
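And if the goal is to keep getting Decimal objects back rather than floats,
casting to Numeric instead of Float should do it, with the same untested
caveat (the precision/scale are placeholders):

from sqlalchemy import Numeric

select([ cast(func.sum(func.if_(True, mytable.c.hours, 0)), Numeric(12, 2)) ])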




Re: [sqlalchemy] @compiles ignores inheritance

2010-03-30 Thread Michael Bayer
Tobias wrote:
 Hi,


 So I thought I could write just one method, that is annotated with
 @compiles(__base_function), but this does not work. I have to write a
 method for each class that inherits from __base_function:

 @compiles(wkt)
 def compile_wkt(element, compiler, **kw):
 return __call_function(element, compiler)

 @compiles(wkb)
 def compile_wkb(element, compiler, **kw):
 return __call_function(element, compiler)


none of that was really working (you couldn't even have @compiles on
the base and the subclass at the same time), so that all works now in
rea184f5ba747, latest tip.
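With that fix in, a single handler registered against the base class should
cover every subclass; a minimal sketch based on the snippet above
(__call_function is the helper from the original post):

from sqlalchemy.ext.compiler import compiles

@compiles(__base_function)
def compile_base_function(element, compiler, **kw):
    return __call_function(element, compiler)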





[sqlalchemy] Re: Column type in select w/ if condition

2010-03-30 Thread Bryan
That worked, thanks.

On Mar 30, 7:40 am, Mariano Mara mariano.m...@gmail.com wrote:
 Excerpts from Bryan's message of Tue Mar 30 11:27:57 -0300 2010:

  [..]

 You could use cast [1] (example: casting to Float, untested):

 from sqlalchemy.sql.expression import cast
 from sqlalchemy import Float
 ...
 select([ cast(func.sum(func.if_(True, mytable.c.hours, 0)), Float)])

 [1]http://www.sqlalchemy.org/docs/reference/sqlalchemy/expressions.html#...




Re: [sqlalchemy] PyODBCConnector dbapi question

2010-03-30 Thread Bo Shi
 pep 249 specifies list of tuples for fetchmany() and fetchall()

Hrm, pep-249 seems to only specify sequence and sequence of
sequences for the fetch*() functions, specifying list of tuples only
as one possible example.  Perhaps the C implementation of RowProxy is
being too strict here?  I'm surprised that pyodbc is the only dbapi
implementation that this problem has occurred in... do all the other
implementations subclass tuple for their rows?
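A quick way to check what a given driver actually hands back (the connection
string here is made up; any working pyodbc DSN will do):

import pyodbc

conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=secret")  # illustrative DSN
row = conn.cursor().execute("SELECT 1").fetchone()
# prints the driver's row class and whether it subclasses tuple
print type(row), isinstance(row, tuple)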

On Mon, Mar 29, 2010 at 8:03 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 how come the stack trace shows beta2 as the version number in the path?
 did you mean to say between beta1 and beta2?  it looks specific to the C
 rewrite of RowProxy.  basically the rows returned by fetchone(), fetchall()
 etc. are expected to be tuples.  pep 249 specifies list of tuples for
 fetchmany() and fetchall() though is less specific for fetchone(), though I'm
 pretty sure it intends tuples there as well.


 On Mar 29, 2010, at 7:43 PM, Bo Shi wrote:

 Also, dunno if it's helpful or not, but this is a regression in
 0.6beta3.  My dialect plugin works as is when using 0.6beta2.

 On Mon, Mar 29, 2010 at 7:41 PM, Bo Shi bs1...@gmail.com wrote:
 Thanks, explicitly assigning self.dbapi in my dialect constructor
 seems to get around the exception.

 I do, however, encounter a new exception:

  File "test_vertica.py", line 57, in testTransactionIsolation
    _, iso_level = e.execute('SHOW TRANSACTION_ISOLATION').fetchone()
  File "/home/vmc/ENV/lib/python2.6/site-packages/SQLAlchemy-0.6beta2-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 2204, in fetchone
    return self.process_rows([row])[0]
  File "/home/vmc/ENV/lib/python2.6/site-packages/SQLAlchemy-0.6beta2-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 2163, in process_rows
    for row in rows]
 TypeError: row must be a tuple


 Any idea what's going on?  The stack trace isn't very informative, I'm 
 afraid.

 On Mon, Mar 29, 2010 at 6:05 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 Bo Shi wrote:
 Hello,

 I had a custom dialect based on the PyODBC functionality that was
 working with SQLA SVN-6738.  Upgrading to beta 3, my tests no longer
 pass, so I've begun the process updating - on_connect() was easy, now
 I'm stumped on connect(...).  I've gotten to the point where, when
 using my dialect, connect() fails because it attempts to run
 self.dbapi.connect(...) but the PyODBC connector seems to implement it
 as a classmethod:


 Taking the following from the connector in revision control:

 9     class PyODBCConnector(Connector):

 27        @classmethod
 28        def dbapi(cls):
 29            return __import__('pyodbc')

 84        def initialize(self, connection):
 85            # determine FreeTDS first.   can't issue SQL easily
 86            # without getting unicode_statements/binds set up.
 87
 88            pyodbc = self.dbapi
 89
 90            dbapi_con = connection.connection
 91
 92            self.freetds = bool(re.match(r".*libtdsodbc.*\.so",
               dbapi_con.getinfo(pyodbc.SQL_DRIVER_NAME)))


 If dbapi is implemented as a class method, then wouldn't the call on
 line 92 fail?  Indeed, that's what I'm seeing.  So is self.dbapi
 getting assigned somewhere else?

 yeah there's a slight misfortune in that naming scheme - the @classmethod
 should have some different name, probably import_dbapi.  the
 reassignment takes place on line 102 of sqlalchemy/engine/default.py.
 this naming scheme is also present in 0.5 - it was just the
 PyODBCConnector that somehow didn't catch up until recently.






 Thanks,
 Bo









 --
 Bo Shi
 617-942-1744




 --
 Bo Shi
 617-942-1744




Re: [sqlalchemy] PyODBCConnector dbapi question

2010-03-30 Thread Michael Bayer
Bo Shi wrote:
 pep 249 specifies list of tuples for fetchmany() and fetchall()

 Hrm, pep-249 seems to only specify sequence and sequence of
 sequences for the fetch*() functions, specifying list of tuples only
 as one possible example.  Perhaps the C implementation of RowProxy is
 being too strict here?  I'm surprised that pyodbc is the only dbapi
 implementation that this problem has occurred in... do all the other
 implementations subclass tuple for their rows?

we run the tests all the time with pyodbc, so I wasn't aware this was a
pyodbc issue.  I'd run without the C extensions for now.  Having our C
extension coerce rows into tuples begins to add overhead and defeat the
purpose of using the extensions in the first place, though Gaetan would
have to answer this question.






 On Mon, Mar 29, 2010 at 8:03 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 [..]

[sqlalchemy] best-practices question

2010-03-30 Thread Jon Nelson
Let's say I have a database with hundreds or even thousands of tables.
The table structure for this set of tables is *exactly* the same.
Furthermore, let's say the name of each table is predictable.
For example, something like:

tablename_2010_03_05

What I'd like to know is how to best manage making use of the ORM
layer with these tables.
Once I've created the engine and bound it to the metadata, wrapped
that in session-making clothes, etc... how do I then go about using mapper
to associate object types with tables (since the table will change)?

Do I have to autoload/reflect/whatever every table? What alternatives are there?

-- 
Jon




Re: [sqlalchemy] best-practices question

2010-03-30 Thread Michael Bayer
Jon Nelson wrote:
 Let's say I have a database with hundreds or even thousands of tables.
 The table structure for this set of tables is *exactly* the same.
 Furthermore, let's say the name of each table is predictable.
 For example, something like:

 tablename_2010_03_05

 What I'd like to know is how to best manage making use of the ORM
 layer with these tables.
 Once I've created the engine and bound it to the metadata, wrapped
 that in a session-making-clothes, etc... how do I then go using mapper
 to associate object types with tables (since the table will change)?

 Do I have to autoload/reflect/whatever every table? What alternatives are
 there?

how about:

def map_a_table(tablename):
    table = Table(tablename, metadata,
        # ... put the standard set of columns here ...
    )

    class Foo(object):
        pass
    Foo.__name__ = tablename
    mapper(Foo, table)
    return Foo
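Usage would then look something like this (assuming a configured session and
that the standard columns have been filled in above):

DailyRows = map_a_table('tablename_2010_03_05')
print session.query(DailyRows).count()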

someone else has the thousands of the same tables setup and also does
this (though to be blunt their top priority is migrating off of that crazy
architecture).





 --
 Jon







[sqlalchemy] Re: Restricting a delete based on a many-to-many mapping.

2010-03-30 Thread Rich


On Mar 29, 6:15 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 you want the delete to fail if there *are* users associated or if there are
 *not*?  for the "raise an error if users exist" case, the most efficient and
 generic way is to ensure the foreign key on UserGroup is not nullable and
 delete cascade isn't used.  if you wanted to dig an assertion into the
 mapper you could use a before_delete() extension.

As you guessed, the case I was anticipating was "raise an error if
users exist".  I'll look into setting a non-nullable column. Thanks a
bunch for the help.
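For reference, the arrangement Michael describes would look roughly like this;
the table and column names are invented for the sketch:

from sqlalchemy import Table, Column, Integer, ForeignKey, MetaData

metadata = MetaData()

# association table between users and groups: group_id is NOT NULL, and with
# no 'delete' cascade on the relation(), deleting a Group that still has
# members makes the flush try to null out group_id, which the database
# rejects instead of silently detaching the rows.
user_groups = Table('user_groups', metadata,
    Column('id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey('users.id'), nullable=False),
    Column('group_id', Integer, ForeignKey('groups.id'), nullable=False),
)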




[sqlalchemy] Sharding with _BindParamClause

2010-03-30 Thread George V. Reilly
We're using SQLAlchemy sharding to partition accounts across a couple
of databases. We want to add more partitions, but first we need to
eliminate some unnecessary cross-partition queries.

class FindShardableId(sqlalchemy.sql.ClauseVisitor):
    def __init__(self, ids, key_fields, get_shard):
        self.ids = ids
        self.key_fields = key_fields
        self.get_shard = get_shard

    def _check_side(self, binary, side, other_side):
        has_constant = (isinstance(side, sqlalchemy.Column)
                        and side.name in self.key_fields
                        and binary.operator == sqlalchemy.sql.operators.eq
                        and getattr(other_side, "value", None))
        if has_constant:
            self.ids.append(self.get_shard(other_side.value))
        return has_constant

    def visit_binary(self, binary):
        if not self._check_side(binary, binary.left, binary.right):
            # Lazy load properties tend to be reversed, with the constant on the left
            self._check_side(binary, binary.right, binary.left)

def query_chooser(query):
    ids = []
    if query._criterion is not None:
        FindShardableId(
            ids,
            set(["account_id", "account_guid"]),
            lambda account_id: shard_manager.shard_id_from_guid(account_id)
        ).traverse(query._criterion)

    if len(ids) == 0:
        logging.warn("\n\nExecuting query against all shards; "
                     "this may not be optimal:\n"
                     "\t{0}".format(str(query)))
        return shards.keys()
    else:
        return ids

This works well most of the time, but we're finding that in some queries
the bind parameters do not have a value. These are all of the form

SELECT shopping_list_items.version AS shopping_list_items_version
FROM shopping_list_items
WHERE shopping_list_items.account_id = :param_1
AND shopping_list_items.shopping_list_item_id = :param_2

and :param_1 is of the form _BindParamClause(u'%(63636624 param)s',
None, type_=UUID())

Typically, I'm seeing this come out of the innards of SQLAlchemy,
as one of several queries triggered by, say, a session.merge().

How do we work around this?

Thanks!
/George V. Reilly, Seattle




[sqlalchemy] Re: add_column does not correlate with aliased table.

2010-03-30 Thread Kalium


On Mar 30, 10:54 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Mar 30, 2010, at 2:47 AM, Kalium wrote:



  [..]

 in 0.4 you'd need to use SQL-level Table and alias() objects to achieve the
 desired effect.  in 0.5 and above, you'd use aliased(Activity) as your
 entity.  the aliased=True option only affects subsequent
 filter/order_by/group_by calls.

Thanks Michael. Another reason to upgrade to 0.5




Re: [sqlalchemy] best-practices question

2010-03-30 Thread Taavi Burns
Jon Nelson wrote:
 Let's say I have a database with hundreds or even thousands of tables.

Sure you didn't mean hundreds OF thousands?  :)

On Tue, Mar 30, 2010 at 2:09 PM, Michael Bayer mike...@zzzcomputing.com wrote:
 def map_a_table(tablename):
     table = Table(tablename, metadata,
         # ... put the standard set of columns here ...
     )

     class Foo(object):
         pass
     Foo.__name__ = tablename
     mapper(Foo, table)
     return Foo

 someone else has the thousands of the same tables setup and also does
 this (though to be blunt their top priority is migrating off of that crazy
 architecture).

Yup, that's basically what we're doing, though we're using
declarative_base.  We've also got some relation()s between the
dynamically generated classes, and a cache of mapped classes.  Each
logical group of mapped classes inherits from one declarative_base and
share a metadata from it (this might have an impact on GCability, I
haven't personally tested any alternatives).

SQLAlchemy keeps only weakrefs to mappers, so once we expire the
mapped classes from our cache, they're eligible for Python to garbage
collect.  Obviously, we don't have any relation()s between groups of
mappers, or they wouldn't be collectable. :)  Letting the size of the
Python VM grow has horrible consequences for performance, so don't
make a cache any bigger than you absolutely have to.  On a Core2 Duo I
see GC times on the order of 1s/GB of VM memory, during which the
interpreter can't do anything else.  Mapping ~20 tables takes closer
to 1/5s, so it can be way better for latency to have a small cache and
cheap GC than the other way around.  One can also disable GC entirely
and restart the interpreter when it gets too big.  Or run GC when you
can, but be aware that the longer between GC runs, the longer they
will take.
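A bare-bones sketch of that kind of cache, reusing the map_a_table() helper
from earlier in the thread (the size limit and eviction policy are
placeholders to tune against your latency budget, and the Table objects still
accumulate in their MetaData as noted above):

_mapped_classes = {}
_MAX_CACHED = 20   # keep it small on purpose; a big VM means expensive GC runs

def get_mapped_class(tablename):
    cls = _mapped_classes.get(tablename)
    if cls is None:
        if len(_mapped_classes) >= _MAX_CACHED:
            # crude eviction; a real LRU would be better
            _mapped_classes.popitem()
        cls = map_a_table(tablename)
        _mapped_classes[tablename] = cls
    return cls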

For stability we've also had to backport some extra locking from 0.6
into 0.5, and grab the sqlalchemy.orm._COMPILE_MUTEX when dynamically
adding the relation()s.  There's also a fix for iterating over a
weakkeydict when GC hits (which can change the size of the dict,
throwing a RuntimeError) which I should probably submit upstream. ;)

And yes, one of our top priorities is migrating off of that schema as
quickly as we can, because maintaining this stuff is pure insanity!
Plus databases are designed for a few tables with millions of rows,
not millions of tables with a few rows.  In most cases you should just
let the database handle the load (with appropriate indexes).

-- 
taa
/*eof*/
