[sqlalchemy] Re: PrimaryKeyConstraint appears out of nowhere

2009-11-05 Thread Wolodja Wentland
On Thu, Nov 05, 2009 at 19:07 -0500, Michael Bayer wrote:
> it would be an empty PrimaryKeyConstraint.  It serves as the
> primary_key column collection on the table object.

Thanks for clarifying this. Are there any plans to reflect the names of
Constraints in 0.6?

That would ease the removal of these constraints at runtime with
DropConstraint().
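
Something along these lines is what I have in mind. A sketch only; it
assumes reflection fills in Constraint.name (which is exactly what does
not happen yet) and that metadata and engine are already set up:

--- snip ---
from sqlalchemy.schema import DropConstraint

metadata.reflect(bind=engine)
tbl = metadata.tables['some_table']
for con in list(tbl.constraints):
    if con.name is not None:            # DropConstraint needs a name
        engine.execute(DropConstraint(con))
--- snip ---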

> if it shows up in CREATE TABLE that's a bug though.

It is not, don't worry. I was just a bit surprised to find this empty
PrimaryKeyConstraint in the constraint list.

thanks

Wolodja




[sqlalchemy] Re: TypeError raised when executing INSERT statement

2009-11-03 Thread Wolodja Wentland
On Tue, Nov 03, 2009 at 16:29 -0500, Michael Bayer wrote:

> the error is raised directly by psycopg2.

I was afraid that the error is with psycopg2. Are you aware of any
bugtracker, mailing list or IRC channel that discusses psycopg2?

> I'd try taking out the ; at the end of the statement,

That did not help unfortunately. Can you think of anything else I might
try? 

The error does not occur with all INSERT statements I read from the
dump files, so I think that the problem is with the actual
string/statement. Why does psycopg2 not hand the statement to PostgreSQL
unaltered? Can I force that somehow?
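
The only way I could think of so far is to bypass SA's parameter handling
and use the raw DBAPI cursor without a parameters argument. A sketch,
based on my reading that psycopg2 only interpolates placeholders when
parameters are actually passed:

--- snip ---
raw_conn = conn.connection          # the underlying DBAPI connection
cur = raw_conn.cursor()
cur.execute(insert_stmt)            # no parameters -> '%' is left alone
raw_conn.commit()
cur.close()
--- snip ---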

I think I might just open a subprocess directly to psql to execute these
statements. At least then I will not have to bother with bugs that are
not caused by my own code.

> and also checking against a literal string to
> see if something weird is happening when you read the file.

I don't think that anything weird happens when I am reading the file,
but testing this is nearly impossible, as the statement is rather huge.

thanks for the help

Wolodja




[sqlalchemy] Re: TypeError raised when executing INSERT statement

2009-11-03 Thread Wolodja Wentland
On Tue, Nov 03, 2009 at 21:57 +0100, Wolodja Wentland wrote:
> psql_engine = create_engine(
> 'postgresql+psycopg2://babilen:ax8xwymferm...@localhost/typeerr_bug')
>  ^^ oops

Note to self: Double check for sensitive data :-). Guess it is time to
change the password.

You naturally will have to adapt the username and password to your own
database setup.

sorry about that

Wolodja




[sqlalchemy] Re: TypeError raised when executing INSERT statement

2009-11-03 Thread Wolodja Wentland
On Tue, Nov 03, 2009 at 22:49 +0100, Wolodja Wentland wrote:
> On Tue, Nov 03, 2009 at 16:29 -0500, Michael Bayer wrote:
>
> > the error is raised directly by psycopg2.
>
> I was afraid that the error is with psycopg2. Are you aware of any
> bugtracker, mailing list or IRC channel that discusses psycopg2?
>
> > I'd try taking out the ; at the end of the statement,
>
> That did not help unfortunately. Can you think of anything else I might
> try?

I found the error. The problem is '%' characters in the insert
statements: replacing '%' with '%%' solves it. I am still not convinced
that using SA/psycopg2 is a better approach than using a psql
subprocess. What do you think?
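
For the archives, a minimal sketch of the workaround (assumption: the
statement contains no bind parameters, so every '%' in it is literal):

--- snip ---
file_content = open('insert_stmt.sql', 'rb').read()
insert_stmt = file_content.decode('utf8')
# psycopg2 treats '%' as the start of a placeholder, so escape
# literal percent signs before handing the string to SA
insert_stmt = insert_stmt.replace('%', '%%')
conn.execute(insert_stmt)
--- snip ---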

have a nice day

Wolodja




[sqlalchemy] TypeError raised when executing INSERT statement

2009-11-03 Thread Wolodja Wentland
Hi all,

I have run into a problem with SQLAlchemy r6472 (svn trunk) which might
be a bug.

I will not bore you with the details of the importer tool I am writing,
but have instead enclosed a small script that exemplifies the problem.

--- typeerr.py ---
from sqlalchemy import *
from sqlalchemy.orm import *


psql_engine = create_engine(
    'postgresql+psycopg2://babilen:ax8xwymferm...@localhost/typeerr_bug')
metadata = MetaData(bind=psql_engine)

ll_t = Table(u'langlinks', metadata,
    Column(u'll_from', Integer,
           nullable=False,
           server_default='0',
           ),
    Column(u'll_lang', Unicode(20),
           nullable=False,
           server_default='',
           ),
    Column(u'll_title', Unicode(255),
           nullable=False,
           server_default='',
           )
)

print u'Create table'
ll_t.create()
print u'Table created'

conn = psql_engine.connect()

# here be dragons
file_content = open('insert_stmt.sql', 'rb').read()
insert_stmt = file_content.decode('utf8')
conn.execute(insert_stmt)
--- snip ---

The program fails with the following exception:

--- Exception with minor editing (path) ---
$ python typeerr.py 
Create table
Table created
Traceback (most recent call last):
  File "typeerr.py", line 33, in <module>
    conn.execute(insert_stmt)
  File "../lib/sqlalchemy/engine/base.py", line 975, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "../lib/sqlalchemy/engine/base.py", line 1051, in _execute_text
    return self.__execute_context(context)
  File "../lib/sqlalchemy/engine/base.py", line 1060, in __execute_context
    self._cursor_execute(context.cursor, context.statement,
        context.parameters[0], context=context)
  File "../lib/sqlalchemy/engine/base.py", line 1120, in _cursor_execute
    self.dialect.do_execute(cursor, statement, parameters, context=context)
  File "../lib/sqlalchemy/engine/default.py", line 181, in do_execute
    cursor.execute(statement, parameters)
TypeError: 'dict' object does not support indexing
--- snip ---

If you want to test the behaviour yourself you can download 'insert_stmt.sql' 
from:

http://www.cl.uni-heidelberg.de/~wentland/insert_stmt.sql

I would like to note that piping the insert statement directly into psql works 
fine:

--- snip ---
$ psql -d typeerr_bug < insert_stmt.sql
INSERT 0 32173
--- snip ---

What is causing this behaviour and even more importantly: What can I do
about it?

kind regards

Wolodja Wentland




[sqlalchemy] Create database and transactional blocks strangeness

2009-10-27 Thread Wolodja Wentland
Hi all,

I am having a problem getting database creation on PostgreSQL done
correctly in an API that I am writing. I am using an svn checkout of SA
trunk from yesterday, if that is important.

I use the following code to create the database:

--- snip ---
...
try:
    import psycopg2.extensions as e
    ISOLATION_LEVEL_AUTOCOMMIT = e.ISOLATION_LEVEL_AUTOCOMMIT
    ISOLATION_LEVEL_READ_COMMITTED = e.ISOLATION_LEVEL_READ_COMMITTED
    ISOLATION_LEVEL_SERIALIZABLE = e.ISOLATION_LEVEL_SERIALIZABLE
    del e
except ImportError, imp_err:
    ISOLATION_LEVEL_AUTOCOMMIT = 0
    ISOLATION_LEVEL_READ_COMMITTED = 1
    ISOLATION_LEVEL_SERIALIZABLE = 2
...
def __init__(self):
    ...
    self._admin_engine = create_engine(
        '%s+%s://%s:%s@%s/postgres' % (self.vendor, self.driver,
                                       self.user, self.password,
                                       self.host))
    self._AdminSession = sessionmaker(bind=self._admin_engine)
    ...

@property
def admin_session(self):
    if self._admin_session is None:
        self._admin_session = self._AdminSession()
    return self._admin_session
...

def create(self):
    """Create this database"""
    # set isolation level to AUTOCOMMIT
    # postgres can't CREATE databases within a transaction
    self._admin_engine.connect().connection.connection.set_isolation_level(
        ISOLATION_LEVEL_AUTOCOMMIT)

    self.admin_session.execute('CREATE DATABASE %s' % (self.name))

    self._admin_engine.connect().connection.connection.set_isolation_level(
        ISOLATION_LEVEL_READ_COMMITTED)
--- snip ---

I can create the database just fine within the interpreter:

--- snip ---
>>> import mwdb
>>> db = mwdb.orm.database.PostgreSQLDatabase('psycopg2', 'babilen',
...     'PASSWORD', 'localhost', 'test', 'zh')
>>> db.all_databases()
['template1', 'template0', 'postgres']
>>> db.create()
>>> db.all_databases()
['template1', 'template0', 'postgres', 'test']
>>> db.drop()
>>> db.all_databases()
['template1', 'template0', 'postgres']
--- snip ---

But this fails miserably when the API is used within a program:

--- snip ---
dump_db = mwdb.orm.database.PostgreSQLDatabase(
self.options.pg_driver,
self.options.pg_username,
self.options.password,
self.options.pg_host,
self._database_name(dump_info),
dump_info['language'])

if self._database_name(dump_info) not in dump_db.all_databases():
LOG.info('Create database: %s' % self._database_name(dump_info))
dump_db.create()
--- snip ---

Traceback:

--- snip ---
Traceback (most recent call last):
  File "/home/babilen/.virtualenvs/wp-import/bin/wp-import", line 185, in <module>
    pg_importer.import_from_directory(ARGS[0])
  File "/home/babilen/.virtualenvs/wp-import/lib/python2.6/site-packages/wp_import/importer.py", line 147, in import_from_directory
    self._import_dump(dump_info)
  File "/home/babilen/.virtualenvs/wp-import/lib/python2.6/site-packages/wp_import/importer.py", line 103, in _import_dump
    dump_db.create()
  File "/home/babilen/.virtualenvs/wp-import/lib/python2.6/site-packages/mwdb/orm/database.py", line 515, in create
    self.admin_session.execute('CREATE DATABASE %s'%(self.name))
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/orm/session.py", line 739, in execute
    clause, params or {})
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/engine/base.py", line 975, in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1037, in _execute_clauseelement
    return self.__execute_context(context)
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1060, in __execute_context
    self._cursor_execute(context.cursor, context.statement,
        context.parameters[0], context=context)
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1122, in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor, context)
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/engine/base.py", line 1120, in _cursor_execute
    self.dialect.do_execute(cursor, statement, parameters, context=context)
  File "/home/babilen/.virtualenvs/wp-import/src/sqlalchemy/lib/sqlalchemy/engine/default.py", line 181, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.InternalError: (InternalError) CREATE DATABASE cannot run inside
a transaction block
 'CREATE DATABASE wp_zh_20091023' {}
--- snip ---

Do you have any idea why this is happening?
Is the .connection.connection.set_isolation_level() the right way to do this?
Why do I have to write connection.connection? This used to (?) be different.






[sqlalchemy] Re: Create database and transactional blocks strangeness

2009-10-27 Thread Wolodja Wentland
On Tue, Oct 27, 2009 at 16:22 -0400, Michael Bayer wrote:
> Wolodja Wentland wrote:
>
> > def create(self):
> >     """Create this database"""
> >     # set isolation level to AUTOCOMMIT
> >     # postgres can't CREATE databases within a transaction
> >     self._admin_engine.connect().connection.connection.set_isolation_level(
> >         ISOLATION_LEVEL_AUTOCOMMIT)
> >
> >     self.admin_session.execute('CREATE DATABASE %s'%(self.name))
>
> there's nothing about the above code that guarantees the connection on
> which you called set_isolation_level() is the one used by your
> session.execute().   I think you mean to call execute("CREATE DATABASE")
> on the connection returned by self._admin_engine.connect().

You are right! I changed the code to this:

--- snip ---
def create(self):
    """Create this database"""
    # set isolation level to AUTOCOMMIT
    # postgres can't CREATE databases within a transaction
    conn = self._admin_engine.connect()
    conn.connection.connection.set_isolation_level(
        ISOLATION_LEVEL_AUTOCOMMIT)

    conn.execute('CREATE DATABASE %s' % (self.name))

    conn.connection.connection.set_isolation_level(
        ISOLATION_LEVEL_READ_COMMITTED)
--- snip ---

and it works like a charm.

But I still have a few small questions...

* Is there an even better way to do this? ;-)

* Is it necessary to restore the isolation level to the value it had
  before I set it to ISOLATION_LEVEL_AUTOCOMMIT, to make sure that no
  connection keeps running with ISOLATION_LEVEL_AUTOCOMMIT in the
  future without explicitly asking for it?

  (I will change the code so that it remembers the previous value of
  isolation_level and restores that, instead of setting
  ISOLATION_LEVEL_READ_COMMITTED explicitly; see the sketch after
  these questions.)

* Why the .connection.connection ? I remember that I had to write just
  one .connection in the past. 
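
A minimal sketch of what I mean by remembering the level (assumption:
psycopg2 connections expose the current setting as the read-only
.isolation_level attribute):

--- snip ---
def create(self):
    """Create this database"""
    conn = self._admin_engine.connect()
    dbapi_conn = conn.connection.connection
    old_level = dbapi_conn.isolation_level
    # postgres can't CREATE DATABASE within a transaction
    dbapi_conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
    try:
        conn.execute('CREATE DATABASE %s' % (self.name))
    finally:
        # restore whatever level was active before
        dbapi_conn.set_isolation_level(old_level)
        conn.close()
--- snip ---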

And one more word... This is the fastest mailing list I have ever used.
Thank you so much for responding so quickly on this ML, thank you very
much for SA and thanks for the solution to my problem!

have a great afternoon

Wolodja Wentland




[sqlalchemy] Re: Very odd utf8 problem

2009-10-07 Thread Wolodja Wentland
On Wed, Oct 07, 2009 at 07:55 -0700, Yo'av Moshe wrote:
> See what I mean here (it's me running the same query twice in
> IPython): http://paste2.org/p/457059
>
> What can cause this behavior?! I can't think of anything! I guess that
> one of my users has uploaded some article with some invalid utf8 code,
> but should that kill the query? and how come it doesn't kill the
> second one? and what can I do to avoid it?

In addition to the bug Mike pointed out to you I want to introduce you
to my favourite bug this year:

https://bugs.launchpad.net/ipython/+bug/339642

If you run into unicode issues within IPython it is wise to check the
behaviour in a plain 'python' shell first, before blaming your own
code, in light of this bug.

kind regards

Wolodja Wentland




[sqlalchemy] Re: Dynamic loader versus lazy=True

2009-09-15 Thread Wolodja Wentland
On Thu, Sep 10, 2009 at 23:27 +0200, Wolodja Wentland wrote:
> Hi all,
>
> I observed that if I define a relation (foo_query) as lazy='dynamic'
> and access all referenced entities with foo_query.all(), the query
> will be executed every time I access it. That is not a big surprise ;-)
>
> In a library I am writing I want to provide methods that allow
> pre-filtering of referenced entities and also one that provides access
> to all entities. I am wondering if it is better/faster/... to define
> *two* relations, for filtering and for accessing all entities
> respectively.
>
> I can't really decide between the following two approaches and would be
> happy if someone could provide some tips:
>
> Approach 1
> ----------
>
> class Bar(object):
>
>     def all_foo(self):
>         return self.foo_query.all()
>
>     def foo_startswith(self, search_string):
>         return self.foo_query.filter(
>             tbl.c.col.like('%s%%' % search_string))
>
> mapper(Bar,
>        ...
>        properties={
>            'foo_query': relation(Foo, lazy='dynamic')
>        })
>
> Approach 2
> ----------
>
> class Bar(object):
>
>     def foo_startswith(self, search_string):
>         return self.foo_query.filter(
>             tbl.c.col.like('%s%%' % search_string))
>
> mapper(Bar,
>        ...
>        properties={
>            'all_foo': relation(Foo),
>            'foo_query': relation(Foo, lazy='dynamic')
>        })
>
> Which one is faster? Does it make a difference, given the
> optimisation/cache in the database? Will it just mean more bloat in the
> mapper definition?

Can nobody help with this decision?

Wolodja Wentland




[sqlalchemy] Dynamic loader versus lazy=True

2009-09-10 Thread Wolodja Wentland
Hi all,

I observed that if I define a relation (foo_query) as lazy='dynamic' and
access all referenced entities with foo_query.all(), the query will be
executed every time I access it. That is not a big surprise ;-)

In a library I am writing I want to provide methods that allow
pre-filtering of referenced entities and also one that provides access
to all entities. I am wondering if it is better/faster/... to define
*two* relations, for filtering and for accessing all entities
respectively.

I can't really decide between the following two approaches and would be
happy if someone could provide some tips:

Approach 1
--

class Bar(object):

    def all_foo(self):
        return self.foo_query.all()

    def foo_startswith(self, search_string):
        return self.foo_query.filter(tbl.c.col.like('%s%%' % search_string))

mapper(Bar,
       ...
       properties={
           'foo_query': relation(Foo, lazy='dynamic')
       })

Approach 2
--

class Bar(object):

    def foo_startswith(self, search_string):
        return self.foo_query.filter(tbl.c.col.like('%s%%' % search_string))

mapper(Bar,
       ...
       properties={
           'all_foo': relation(Foo),
           'foo_query': relation(Foo, lazy='dynamic')
       })

Which one is faster? Does it make a difference, given the
optimisation/cache in the database? Will it just mean more bloat in the
mapper definition?
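
To illustrate what I mean by the query being executed every time, a
sketch (assuming a session and the Approach 2 mapping from above):

--- snip ---
bar = session.query(Bar).first()

bar.foo_query.all()     # emits a SELECT
bar.foo_query.all()     # emits a SELECT again; dynamic never caches

bar.all_foo             # plain relation: emits a SELECT once ...
bar.all_foo             # ... then served from the instance, no SQL
--- snip ---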

thanks

Wolodja Wentland




[sqlalchemy] Re: Automatic Polymorphic Identity Assignment

2009-09-09 Thread Wolodja Wentland
On Wed, Sep 09, 2009 at 10:14 -0700, gizli wrote:

> This list has been very helpful so far, thanks a lot. I was just
> wondering if there is a transparent way to assign polymorphic
> identities to ORM classes using single table inheritance. Let's say we
> have a base Task class:
>
> class Task(DeclarativeBase):
>     __tablename__ = 'Tasks'
>
>     id = Column(Integer)
>     name = Column(String)
>     type = Column(String) etc...
>
>     __mapper_args__ = {'polymorphic_on': type}
>
>
> class MyTask(Task):
>     __mapper_args__ = {'polymorphic_identity': 'MyTask'}
>
> class YourTask(Task):
>     __mapper_args__ = {'polymorphic_identity': 'YourTask'}
>
> This is the recommended way in the documentation for declarative
> style. I was wondering if I could get rid of the explicit
> polymorphic_identity setup in the subclasses by some clever
> programming trick. I want to assign the polymorphic identity to be the
> class __name__ automatically if that class extends the Task class. I am
> not very well versed in advanced python programming like decorators or
> function/class wrappers, so I wanted to seek your opinion.

Maybe something like:

class Task(DeclarativeBase):
    __tablename__ = 'Tasks'

    id = Column(Integer)
    name = Column(String)
    type = Column(String) etc...

    __mapper_args__ = {
        'polymorphic_on': type,
        'polymorphic_identity': self.__class__.__name__,
    }

class MyTask(Task):
    pass

I have no idea if that works, but you could try it.
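
If it does not work (class bodies have no 'self'), a sketch of a
variant that should, assuming a SQLAlchemy version whose declarative
extension provides declared_attr:

--- snip ---
from sqlalchemy.ext.declarative import declared_attr

class Task(DeclarativeBase):
    __tablename__ = 'Tasks'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(String)

    @declared_attr
    def __mapper_args__(cls):
        # the base class carries polymorphic_on; every subclass
        # automatically gets its class name as polymorphic identity
        if cls.__name__ == 'Task':
            return {'polymorphic_on': cls.type}
        return {'polymorphic_identity': cls.__name__}

class MyTask(Task):
    pass    # polymorphic_identity == 'MyTask'
--- snip ---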

with kind regards

Wolodja Wentland




[sqlalchemy] Drop and recreate primary key indexes

2009-09-08 Thread Wolodja Wentland
Hi all,

is it possible to drop primary key indexes from within SA? I already
found the table.indexes set, but it does not contain the primary key
index. I can therefore drop all indexes for a table except the primary
key one.

It seems to me as if SA relies on a strict naming scheme for primary key
indexes in the form of 'tablename_pkey'. Is this correct, and could I
rely on that if I want to drop the index manually? What happens if there
already is a table with that name prior to table creation?

As you might have guessed, I would also like to recreate this index
later on. Is there an automatic way to do so, or do I have to define the
index myself?

with kind regards

Wolodja Wentland




[sqlalchemy] Re: Drop and recreate primary key indexes

2009-09-08 Thread Wolodja Wentland
On Tue, Sep 08, 2009 at 13:06 +0200, Wolodja Wentland wrote:
> is it possible to drop primary key indexes from within SA? I already
> found the table.indexes set, but it does not contain the primary key
> index. I can therefore drop all indexes for a table except the primary
> key one.

I did some investigation and found that the primary key index is
generated automagically by PostgreSQL rather than SA. This is because I
declare some columns with 'primary_key=True', which causes PostgreSQL to
create a primary key constraint and an accompanying index.

If I connect to the db in question and reflect the tables it is
unfortunately not possible to drop any pkey constraint, because the
.drop() method is not present on sa.schema.PrimaryKeyConstraint, but
only on migrate.changeset.constraint.PrimaryKeyConstraint.

I create all PrimaryKeyConstraints explicitly now and use the
PrimaryKeyConstraint class from migrate. The problem I am facing now is
that I get instances of SA's PrimaryKeyConstraint rather than migrate's
if I work on the constraints as exemplified in the following code
snippet:

--- snip --- 
...

metadata = MetaData()
metadata.bind = engine
metadata.reflect()
tbl = metadata.tables[table_name]

pkc = [ con for con in tbl.constraints
        if isinstance(con, PrimaryKeyConstraint) ][0]
log.debug('Primary key constraint for table [%s] on: %s' % (
    table_name, pkc.columns.keys()))

log.debug('Dropping pkey constraint ...')
pkc.drop()
^^ this method is not present because I get instances of SA's
   classes, not migrate's
--- snip ---

How can I tackle this problem? Any advice is welcome!

with kind regards

Wolodja Wentland




[sqlalchemy] Re: Drop and recreate primary key indexes

2009-09-08 Thread Wolodja Wentland
On Tue, Sep 08, 2009 at 13:05 -0400, Michael Bayer wrote:
> Alternatively, just start using SQLAlchemy 0.6 (it's trunk so far):
>
> from sqlalchemy.schema import DropConstraint
> for con in table.constraints:
>     if isinstance(con, PrimaryKeyConstraint):
>         engine.execute(DropConstraint(con))

As I am using current trunk I tried to implement this approach.
Unfortunately it fails.

My goal is to be able to create and drop *Constraints whenever I want
to. I need this because I am writing a tool that does bulk imports of
data, and the constraint checks are a severe performance penalty.

Inspired by the newly discovered AddConstraint/DropConstraint classes
you pointed me at, I implemented constraint creation like this:

--- snip ---
metadata = MetaData()
tbl = Table('foo', metadata,
    Column('id', Integer),
    ...
)
...
pkey_constraints = [
    PrimaryKeyConstraint(tbl.c.id, inline_ddl=False),
    ...
]

engine = create_engine('postgresql+...')
tbl.create(bind=engine)
...
engine.execute(AddConstraint(pkey_constraint_for_this_table))
--- snip ---

The assumption that creating a PrimaryKeyConstraint with
inline_ddl=False will prevent SA from generating inline DDL for this
constraint seems to be false, as the generated SQL looks like this:

--- snip ---
CREATE TABLE foo (
id SERIAL NOT NULL, 
...
PRIMARY KEY (id)
)
--- snip ---

So the attempt to manually create the PrimaryKeyConstraint fails with
the error "multiple primary keys for table ..."!

Questions:

* Why is that?
* Is inline_ddl not honoured for PrimaryKeyConstraint, which *is* a
  subclass of Constraint?
* Is this caused by the table._set_primary_key(self) call
  in PrimaryKeyConstraint._set_parent()?
* How can I programmatically create primary key constraints?

I can't drop these constraints either. Even if I accept that my tables
are created with primary key definitions, the recipe you showed me does
not work. It fails with:

--- snip ---
...
/sqlalchemy/sql/compiler.pyc in _requires_quotes(self, value)
   1306     def _requires_quotes(self, value):
   1307         """Return True if the given identifier requires quoting."""
-> 1308         lc_value = value.lower()
   1309         return (lc_value in self.reserved_words
   1310                 or self.illegal_initial_characters.match(value[0])

AttributeError: 'NoneType' object has no attribute 'lower'
--- snip ---

Is sqlalchemy-migrate the only way to handle this right now? Before, I
created the primary key constraints by specifying the *columns* as
strings and used a dictionary to map each table to its primary key
column definitions. Is this advisable? Is there a better way to achieve
this?
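
In case it helps, the direction I am converging on: give the constraint
an explicit name up front and toggle it around the bulk load. A sketch
only; the table, columns and URL are placeholders:

--- snip ---
from sqlalchemy import MetaData, Table, Column, Integer, create_engine
from sqlalchemy.schema import (AddConstraint, DropConstraint,
                               PrimaryKeyConstraint)

metadata = MetaData()
tbl = Table('foo', metadata,
    Column('id', Integer, nullable=False),
    PrimaryKeyConstraint('id', name='foo_pkey'),    # explicit name
)

engine = create_engine('postgresql+psycopg2://...')
tbl.create(bind=engine)

# drop the named constraint before the bulk import ...
engine.execute(DropConstraint(tbl.primary_key))
# ... run the bulk INSERTs without the constraint checks ...
# ... and re-add the constraint afterwards
engine.execute(AddConstraint(tbl.primary_key))
--- snip ---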

with kind regards and thanks for this great tool!

Wolodja Wentland




[sqlalchemy] Re: Drop and recreate primary key indexes

2009-09-08 Thread Wolodja Wentland
On Tue, Sep 08, 2009 at 17:35 -0400, Michael Bayer wrote:
 
> Wolodja Wentland wrote:
> > I can't drop these constraints either. Even if I accept that my
> > tables are created with primary key definitions, the recipe you
> > showed me does not work. It fails with:
> >
> > --- snip ---
> > ...
> > /sqlalchemy/sql/compiler.pyc in _requires_quotes(self, value)
> >    1306     def _requires_quotes(self, value):
> >    1307         """Return True if the given identifier requires quoting."""
> > -> 1308         lc_value = value.lower()
> >    1309         return (lc_value in self.reserved_words
> >    1310                 or self.illegal_initial_characters.match(value[0])
> >
> > AttributeError: 'NoneType' object has no attribute 'lower'
>
> that's not a full stack trace but I would gather it's looking for the
> name of the constraint.  so if you can provide names for your
> constraints that would allow the DropConstraint to work.

Yes, you are right. That is not the full stack trace, and the name of
the pkey constraint was missing. This, however, seems to be a bug within
the database reflection framework, as these constraints have names
within the database and I am working on reflected tables here. It does
not matter whether I define names while creating the constraints; they
are lost as soon as I reflect the tables.

Any tips on how to dynamically create *Constraints? The inline_ddl idea
does not work, and doing something like:

--- snip ---
tbl = metadata.tables[table_name]
...
new_pkc = PrimaryKeyConstraint(
    *pkey_columns_for_table(table_name),
    **{'name': '%s_pkey' % (table_name)})
new_pkc._set_parent(tbl)
engine.execute(AddConstraint(new_pkc))
--- snip ---

... seems a bit hackish. It works, however, but I am unsure whether
calling _set_parent(tbl) really is safe and the correct way.

I am doing this because I have to create the same DB structure in a
bunch of databases and would like to keep the table definitions in one
place. The pkey_columns_for_table(..) function returns something like:
['foo_column', 'bar_column', ... ]
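
For the record, the same attachment seems to be possible without the
private call by going through Table.append_constraint(), which accepts
column names and resolves them against the table (a sketch under that
assumption):

--- snip ---
tbl = metadata.tables[table_name]

new_pkc = PrimaryKeyConstraint(
    *pkey_columns_for_table(table_name),
    **{'name': '%s_pkey' % (table_name)})
tbl.append_constraint(new_pkc)      # public API; attaches the constraint
engine.execute(AddConstraint(new_pkc))
--- snip ---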

thanks

Wolodja Wentland




[sqlalchemy] Re: Tutorial program 'selectdemo.py'

2009-09-08 Thread Wolodja Wentland
On Tue, Sep 08, 2009 at 15:09 -0700, drednot57 wrote:
 
> Traceback (most recent call last):
>   File "selectdemo.py", line 55, in <module>
>     s = users.select(users.c.name.in_('Mary', 'Susan'))
> TypeError: in_() takes exactly 2 arguments (3 given)
>
> # The in and between operations are also available
> s = users.select(users.c.age.between(30,39))
> run(s)
> # Extra underscore after in to avoid conflict with Python keyword
> s = users.select(users.c.name.in_('Mary', 'Susan'))
> run(s)
^^^ here be dragons!

You just have to change in_('Mary', 'Susan') to in_(['Mary', 'Susan']).
Note the added brackets: in_() takes a single *list*, not several
separate arguments.
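
The corrected lines (assuming the tutorial's users table and run()
helper):

--- snip ---
s = users.select(users.c.name.in_(['Mary', 'Susan']))
run(s)
--- snip ---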

with kind regards

Wolodja Wentland

