[sqlalchemy] PostgreSQL Multidimensional arrays in SQLAlchemy

2012-03-15 Thread Faheem Mitha


Hi,

I posted 
http://stackoverflow.com/questions/9729175/multidimensional-arrays-in-sqlalchemy 
to StackOverflow. Reproduced below. Please CC me on any reply. Thanks.


   Regards, Faheem



I'm using SQLAlchemy 0.6.3 with PostgreSQL 8.4 on Debian squeeze. I want a 
table where one column stores something in PostgreSQL that shows up in 
Python as a list of integer lists or tuples of integer tuples. E.g.


((1,2), (3,4), (5,6,7))

In the example below the column is `model`. I thought that a reasonable 
approach might be to store the data as a PG two-dimensional array, which in 
PG looks like `integer[][]`. I don't know in what form SQLA will return this 
to Python, but I'm hoping it is something like a tuple of tuples.


However, I can't figure out how to tell SQLA to give me a two dimensional 
Integer array. The 
[documentation](http://docs.sqlalchemy.org/en/rel_0_6/dialects/postgresql.html#postgresql-data-types) 
for `sqlalchemy.dialects.postgresql.ARRAY` says



item_type – The data type of items of this array. Note that
dimensionality is irrelevant here, so multi-dimensional arrays like
INTEGER[][], are constructed as ARRAY(Integer), not as
ARRAY(ARRAY(Integer)) or such. The type mapping figures this out on the fly.


Unfortunately, I have no idea what that means. How can the type mapping 
figure this out on the fly? It needs to create the correct DDL.
My first and only guess for how to do this would have been 
`ARRAY(ARRAY(Integer))`. Currently I have


  crossval_table = Table(
      name, meta,
      Column('id', Integer, primary_key=True),
      Column('created', TIMESTAMP(), default=now()),
      Column('sample', postgresql.ARRAY(Integer)),
      Column('model', postgresql.ARRAY(Integer)),
      Column('time', Float),
      schema=schema,
      )

This creates the following DDL

CREATE TABLE crossval (
    id integer NOT NULL,
    created timestamp without time zone,
    sample integer[],
    model integer[],
    time double precision
);

which isn't right, of course. What am I missing?
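
For the record, a hedged sketch of what that doc note appears to mean:
PostgreSQL itself does not enforce the declared dimensionality, so
`integer[]` and `integer[][]` are the same type, the DDL above should
actually be usable as-is, and `ARRAY(Integer)` converts nested
sequences on the fly. Something like (assuming an Engine `db` and the
table above):

conn = db.connect()
conn.execute(crossval_table.insert(),
             {'id': 1, 'sample': [1, 2, 3], 'model': [[1, 2], [3, 4]]})
row = conn.execute(crossval_table.select()).fetchone()
print row['model']   # psycopg2 returns nested Python lists: [[1, 2], [3, 4]]

Note the values come back as nested lists rather than tuples of
tuples, and that later SQLAlchemy versions (0.8+, I believe) added an
explicit `dimensions` argument to `postgresql.ARRAY`.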




[sqlalchemy] pickling SQLAlchemy exceptions

2012-01-10 Thread Faheem Mitha


Hi,

See 
http://stackoverflow.com/questions/8785899/hang-in-python-script-using-sqlalchemy-and-multiprocessing 
particularly sbt's answer (http://stackoverflow.com/a/8795763/350713), and 
also the Python bug reports http://bugs.python.org/issue13751 and 
http://bugs.python.org/issue1692335.


The upshot is that SQLA's exceptions don't play nicely with Python's pickle. 
While this may be a known issue, I could not find any discussion of it. If 
this is considered a bug, let me know, and I'll file a bug report.


So, if possible, consider changing things so that SQLA's exceptions work 
with pickle. Also, I'd welcome any suggestions on how to implement sbt's 
copy_reg suggestion for the SQLA exception classes. Can I implement one 
workaround, or do I need one for every class that has a non-trivial 
constructor?
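
To make the question concrete, here is a hedged sketch of how I
understand sbt's copy_reg suggestion (not an official SQLA API;
DBAPIError is just one example class):

import copy_reg
from sqlalchemy.exc import DBAPIError

def _rebuild_exc(cls):
    # Recreate the instance without calling its constructor.
    return cls.__new__(cls)

def _reduce_exc(exc):
    # A (callable, args, state) tuple: pickle restores state into
    # __dict__, so the non-trivial constructor is never invoked.
    return (_rebuild_exc, (exc.__class__,), exc.__dict__)

copy_reg.pickle(DBAPIError, _reduce_exc)

Since copy_reg registers by exact type, this would apparently need to
be repeated for each exception class, which is really what I am asking
about.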


   Regards, Faheem




[sqlalchemy] Re: creating tables and mappers multiple times in the same python script

2011-05-22 Thread Faheem Mitha
On Fri, 20 May 2011 09:44:36 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 OK so not just schema names, table structure as well.  So yeah so
 you're using the same classes among entirely different databases
 essentially, so yeah just like our tests do, just tear down before
 remapping for a new DB and you're done.

Hi Michael,

Ok, thanks for the suggestion. Can you point me to a place in your
unit tests where this tear down happens?

Also, can you reply regarding the 'tometadata' thing? Not super
important, but I'm just curious about what it does, exactly. Thanks.

Regards, Faheem




[sqlalchemy] Re: creating tables and mappers multiple times in the same python script

2011-05-20 Thread Faheem Mitha
On Fri, 20 May 2011 00:52:28 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On May 19, 2011, at 5:24 PM, Faheem Mitha wrote:

 Unfortunately, that is not true.  (So I guess just leaving the
 structure alone and switching dbs will not work.) There are 4
 possible different database layouts. Also, there can be multiple
 schemas in each database.

 So you have a model with a set of tables, the tables are split
  amongst multiple schemas, say schemas A, B, and C.  You then have
  four types of databases, each of which have the identical set of
  table designs, except the actual names of *just the schemas*,
  I.e. A, B, and C, randomly change.

Well, let me try to be clear here, for the record. I have a bunch of
schemas across databases. Let us assume that all schemas are in one
database and that there are k schemas. Then each of these k schemas
contains a dataset of 9 tables. Now, this collection of 9 tables varies
in structure - there are 4 different possible structures/layouts for
these 9 tables (what are confusingly also called schemas), but the
table names used in all the different layouts are the same, say A, B,
C, ..., I. So, a total of 9k tables. Most of the tables are the same;
just a couple of the tables differ. So, my application may switch from
one set of tables to another set of tables. I hope it is not quite as
ridiculous as it sounds. Unfortunately I haven't had the help of a
database professional in this project.

 That seems ridiculous.  I would absolutely name the schemas
 consistently.  Or name the tables distinctly in the schemas so that
 search_path could be used.  Very unfortunate that PG doesn't support
 synonyms.

Well, the code for handling these different layouts is essentially the
same, so I've mostly used the same routines across all of them, and
encapsulated the differences using object orientation. This would be
more difficult if I started using different names for the different
tables.

Not sure what you mean by "name the schemas consistently".

 Since I know you aren't going for that, there's some other Python
 tricks all of which are more complex, or interfere with how mapper()
 works in such a way that I can't guarantee ongoing compatibility,
 than just wiping out everything with clear mappers and re-mapping.
 I would keep the MetaData for each set of tables in a dictionary and
 pull the appropriate set of tables out for each use, send them into
 mapper().  I'd create the three copies of each Table from the
 original one using table.tometadata(schema='newschema').

 # runs only once
 metadatas = {
     'one': MetaData(),
     'two': MetaData(),
     'three': MetaData(),
     'four': MetaData(),
 }

 # runs only once, per table
 def table(name, *args, **kw):
     t = Table(name, metadatas['one'], *args, **kw)
     t.tometadata(metadatas['two'], schema='two')
     t.tometadata(metadatas['three'], schema='three')
     t.tometadata(metadatas['four'], schema='four')

Wow, never heard of tometadata before. Google thought I might be
searching for metadata. :-)

At http://readthedocs.org/docs/sqlalchemy/rel_0_6_6/core/schema.html I
found this defn of tometadata

*

tometadata(metadata, schema=symbol('retain_schema'))
Return a copy of this Table associated with a different MetaData.

E.g.:

# create two metadata
meta1 = MetaData('sqlite:///querytest.db')
meta2 = MetaData()

# load 'users' from the sqlite engine
users_table = Table('users', meta1, autoload=True)

# create the same Table object for the plain metadata
users_table_2 = users_table.tometadata(meta2)

***

Does tometadata just retrieve the new table by name? Not completely
clear from the context.
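
My best guess, as a hedged sketch that a small test script should be
able to confirm: the first call copies the Table into the target
MetaData, and later calls find the existing copy by its key and return
it.

from sqlalchemy import MetaData, Table, Column, Integer

m1, m2 = MetaData(), MetaData()
t = Table('t', m1, Column('id', Integer, primary_key=True))
c1 = t.tometadata(m2, schema='two')
c2 = t.tometadata(m2, schema='two')
assert c1 is c2        # the second call retrieves the existing copy
assert c1 is not t     # and it is a copy, not the original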

Not sure what is happening here. The idea is to get the equivalent of
t from a different metadata, yes? So wouldn't this be like

def table(name, number, *args, **kw):
    t = Table(name, metadatas['one'], *args, **kw)
    # Return the table corresponding to t from metadatas[number]
    return t.tometadata(metadatas[number], schema=number)

 then to map a class:

 mapper(cls, metadatas['two'].tables['some_table'])

This would need to loop over all tables, yes? Here `tables` is some
other dictionary? I think I'm missing some context. Off the top of my
head, something like

for cls, name in zip(classnames, tablenames):
    mapper(cls, metadatas['two'].table(name, 'two'))

perhaps?
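
Or, using the MetaData.tables dictionary directly (a hedged sketch;
classnames and tablenames are hypothetical, and the dictionary keys
include the schema prefix when a schema is set):

from sqlalchemy.orm import mapper

for cls, name in zip(classnames, tablenames):
    mapper(cls, metadatas['two'].tables['two.%s' % name])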
  Regards, Faheem




[sqlalchemy] Re: creating tables and mappers multiple times in the same python script

2011-05-19 Thread Faheem Mitha
Hi,

I'm belatedly following up on this earlier thread from October 2010. I
decided to go back to it and figure out what the problem was. So, I
created a working minimal example. I posted it on StackOverflow, but it
doesn't seem to have attracted much interest, so I'm copying it
here. The SO link is
http://stackoverflow.com/questions/6051674/class-already-has-a-primary-mapper-defined-error-with-sqlalchemy
I propose a possible solution at the end of the post, but I'd like to
know if it meets with the approval of the cognoscenti.  I'll copy the
post below, in case SO falls off the net or something. Please CC me on
any reply. Thanks.

*

Back in October 2010, I posted this question to the Sqlalchemy user
list. At the time, I just used the clear_mappers workaround mentioned
in the message, and didn't try to figure out what the problem
was. That was very naughty of me. Today I ran into this bug again, and
decided to construct a minimal example, which appears below. Michael
also addressed what is probably the same issue back in 2006. I decided
to follow up here, to give Michael a break from my dumb questions.

So, the upshot appears to be that for a given class definition, you
can't have more than one mapper defined. In my case I have the Pheno
class declared in module scope (I assume that is top-level scope here)
and each time make_tables runs, it tries to define another mapper.

Mike wrote: "Based on the description of the problem above, you need to
ensure your Python classes are declared in the same scope as your
mappers. The error message you're getting suggests that 'Pheno' is
declared at the module level." That would take care of the problem,
but how do I manage that without altering my current structure? What
other options do I have, if any? Apparently mapper doesn't have an
option like "if the mapper is already defined, exit without doing
anything", which would take care of it nicely. I guess I could define
a wrapper function, but that would be pretty ugly.

*

from sqlalchemy import *
from sqlalchemy.orm import *

def make_pheno_table(meta, schema, name='pheno'):
    pheno_table = Table(
        name, meta,
        Column('patientid', String(60), primary_key=True),
        schema=schema,
        )
    return pheno_table

class Pheno(object):
    def __init__(self, patientid):
        self.patientid = patientid

def make_tables(schema):
    from sqlalchemy import MetaData
    meta = MetaData()
    pheno_table = make_pheno_table(meta, schema)
    mapper(Pheno, pheno_table)
    table_dict = {'metadata': meta, 'pheno_table': pheno_table}
    return table_dict

table_dict = make_tables('foo')
table_dict = make_tables('bar')

*

Error message follows. Tested with SQLAlchemy 0.6.3-3 on Debian squeeze.

$ python test.py
Traceback (most recent call last):
  File "test.py", line 25, in <module>
    table_dict = make_tables('bar')
  File "test.py", line 20, in make_tables
    mapper(Pheno, pheno_table)
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/__init__.py", line 818, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py", line 209, in __init__
    self._configure_class_instrumentation()
  File "/usr/lib/python2.6/dist-packages/sqlalchemy/orm/mapper.py", line 381, in _configure_class_instrumentation
    self.class_)
sqlalchemy.exc.ArgumentError: Class "<class '__main__.Pheno'>" already has a
primary mapper defined. Use non_primary=True to create a non primary Mapper.

*

EDIT: Per the documentation in "SQLAlchemy: The mapper() API", I could
replace mapper(Pheno, pheno_table) above with

*

from sqlalchemy.orm.exc import UnmappedClassError

try:
    class_mapper(Pheno)
except UnmappedClassError:
    mapper(Pheno, pheno_table)

*

If a mapper is not defined for Pheno, class_mapper throws an
UnmappedClassError. This at least doesn't return an error in my test
script, but I haven't checked if it actually works. Comments?
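
For what it's worth, the wrapper function mentioned above need not be
that ugly; a hedged sketch (mapper_once is a hypothetical name):

from sqlalchemy.orm import class_mapper, mapper
from sqlalchemy.orm.exc import UnmappedClassError

def mapper_once(cls, table, **kw):
    # Map cls to table only if it has no primary mapper yet.
    try:
        return class_mapper(cls)
    except UnmappedClassError:
        return mapper(cls, table, **kw)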


[sqlalchemy] Re: creating tables and mappers multiple times in the same python script

2011-05-19 Thread Faheem Mitha
Hi Michael,

Thanks for the reply.

On Thu, 19 May 2011 10:59:18 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 I'm sure I mentioned earlier, one of the reasons you're finding this
 difficult is because of this type of pattern, where you're calling
 mapper() and Table at a different scope than where you make your
 classes i.e. within a callable versus module level.  At the very
 least a function like make_tables() would ensure that its internal
 implementation is called only once, such as:

 _table_dict = None
 def make_tables():
     global _table_dict
     if _table_dict is not None:
         return _table_dict
     _table_dict = { }

 otherwise, you're creating new MetaData objects each time,
 rebuilding a whole new set of Tables in it each time, i.e. its a
 total mess.  Engine, Session registry, MetaData, your set of classes
 and their mappers, these should all be global objects, created once
 per application. If you want to use functions to initiate their
 existence that is fine but you should be maintaining singleton
 behavior in any case.

You make good points. This certainly makes sense if I am only dealing
with one set of database tables at a time. However, I'm working with
multiple datasets, sometimes within a single Python script. So I guess
some extra "wipe the slate clean" stuff is needed here when switching
between different datasets.

A couple of comments/questions.

I see you are using _table_dict as an indicator. I'm not sure what

_table_dict = { }

is intended to indicate. Do you just mean initialize _table_dict with
some non-None value before proceeding with the function?

So if I am switching data sets, I guess I would need to recreate the
tables, presumably by creating a new MetaData.  Since the classes
don't change, they don't need to be removed. So, would it suffice to
have

*

do stuff with dataset 1

clear_mappers()
_table_dict = None   # so make_tables will be rerun and the MetaData recreated

do stuff with dataset 2

*

Hmm, maybe clear_mappers() should be inside make_tables?

Am I missing anything?
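
In other words, something like this hedged sketch (switch_dataset is a
hypothetical name; make_tables and _table_dict are as above):

from sqlalchemy.orm import clear_mappers

def switch_dataset(schema):
    global _table_dict
    clear_mappers()       # remove all class-to-table mappers
    _table_dict = None    # force make_tables() to rebuild its MetaData
    return make_tables(schema)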
   Faheem




[sqlalchemy] Re: creating tables and mappers multiple times in the same python script

2011-05-19 Thread Faheem Mitha
Hi Michael,

On Thu, 19 May 2011 16:13:49 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 Dont wipe anything clean - keep the state of each set of stuff
 separate.  A global dictionary perhaps, with a record inside for
 each configuration.

Could you elaborate on what you have in mind? The tables contain state
reflecting the underlying database tables, right? Are you saying I
should put all of classes + mappers + tables + metadata inside the
value side of a dictionary? That could be pretty awkward to implement.

 There's also not a use case for the same classes to be mapped to
 different Table objects each time - the classes are designed to be
 mapped to a particular table structure and Table+MetaData only
 represent that structure, nothing else.  If you're just switching
 between database backends, that switching occurs by using a
 different Engine and perhaps a different Session registry.

Well, I do of course create a new engine when I switch from connecting
with database1 to connecting with database2. When a new engine is
created and a connection made, do the table objects automatically
clear themselves of any old information? Maybe I'm just indulging in
wishful thinking here...

Regardless, if I don't recreate the tables, the schema will be wrong,
at least. I'm referring here to the "schema" field in tables for use
with PostgreSQL, though I don't really understand what it does.

 So if I am switching data sets, I guess I would need to recreate
 the tables, presumably by creating a new MetaData.  Since the
 classes don't change, they don't need to be removed. So, would it
 sufice to have

 I think you should consider the classes + mappers + tables +
 metadata to be a single fixed structure.  That's how it was designed
 to work - it's decoupled from an actual database connection.

Hmm. Meaning you should be able to transfer the whole thing between
database connections?

   Regards, Faheem




[sqlalchemy] Re: creating tables and mappers multiple times in the same python script

2011-05-19 Thread Faheem Mitha
On Thu, 19 May 2011 16:57:14 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On May 19, 2011, at 4:48 PM, Faheem Mitha wrote:

 Hi Michael,

 On Thu, 19 May 2011 16:13:49 -0400, Michael Bayer
 mike...@zzzcomputing.com wrote:

 Dont wipe anything clean - keep the state of each set of stuff
 separate.  A global dictionary perhaps, with an record inside for
 each configuration.

 Could you elaborate on what you have in mind? The tables contain state
 reflecting the underlying database tables, right?

 the state is only the names of the columns and their types.  not any
 data or database state.

Ok. I see. I thought it contained data, since you can put data into
it, and then sync with the database.

 Are you saying I should put all of classes + mappers + tables +
 metadata inside the value side of a dictionary? That could be
 pretty awkward to implement.

 probably not, only whatever state it is that you need to switch -
 like if your thing is running with five different database backends
 at different times, you might put each Engine in this dictionary.

 Well, I do of course create a new engine when I switch from connecting
 with database1 to connecting with database2. When a new engine is
 created and a connection made, do the table objects automatically
 clear themselves of any old information? Maybe I'm just indulging in
 wishful thinking here...

 Regardless, If I don't recreate the tables, the schema will be wrong,
 at least. I'm referring here to the schema field in tables for use
 with PostgreSQL, though I don't really understand what it does.

 so you have this:

 database 1:

 schema1.table1
 schema1.table2

 database 2:

 schema2.table1
 schema2.table2

 ?

 otherwise table1/table2 are identical in structure across database1,
 database2 ?

Unfortunately, that is not true.  (So I guess just leaving the
structure alone and switching dbs will not work.) There are 4 possible
different database layouts. Also, there can be multiple schemas in
each database.

database 1:

schema1.table1
schema1.table2

schema2.table1
schema2.table2

I still think nuking the whole thing when switching between datasets
is my best bet. I'm actually a little surprised the thing works,
considering I have such a poor idea what is going on.

So, as regards nuking classes + mappers + tables + metadata. If this
is possible, how would I accomplish it?

Or feel free to suggest if you have better ideas.

 I'd try to get them all to have the same schema argument,
 actually, or modify the database user so that schema1, schema2 is in
 the user's search path:

 http://www.postgresql.org/docs/current/static/ddl-schemas.html#DDL-SCHEMAS-PATH

 this can be applied to the login with ALTER USER, or within just a
 single database session using SET search_path, in which case you'd
 use a connection event to set it up on each new connection.

Yes, that's certainly an option. But that won't solve my problems by
itself. See above.
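
(For reference, a sketch of the connection-event approach; hedged,
since it uses the event API added in SQLAlchemy 0.7 - 0.6 would use a
PoolListener instead - and the DSN and schema names are hypothetical:)

from sqlalchemy import create_engine, event

engine = create_engine("postgres://user:password@localhost/db1")

@event.listens_for(engine, "connect")
def set_search_path(dbapi_conn, conn_record):
    # Runs once for each new DBAPI connection in the pool.
    cur = dbapi_conn.cursor()
    cur.execute("SET search_path TO schema1, schema2, public")
    cur.close()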

 then take schema out of the Table def.

 its awkward

That's true.

 Hmm. Meaning you should be able to transfer the whole thing between
 database connections?

 yeah its only a description of structure, not the data.

Ok.
  Regards, Faheem




[sqlalchemy] creating tables and mappers multiple times in the same python script

2010-10-18 Thread Faheem Mitha


Hi,

I should say in advance that I don't have a reproduction script for this, 
though I could probably manufacture one if necessary.


I have a python script which calls a function repeatedly. Each time the 
function is called, it creates a bunch of SQLA tables, classes and mappers 
between them. I'd be perfectly happy to remove these after each 
invocation, but I don't know how, and apparently they stick around. I'd 
hope that these objects all get overwritten. Since the classes are just 
regular Python classes, I know these do get overwritten. I'm not sure 
about the Table objects, and it definitely appears that the mappers don't 
get cleanly overwritten, judging by the error message I get (complete 
traceback at the bottom of the message).


sqlalchemy.exc.ArgumentError: Class "<class 'dbschema.Pheno'>" already has 
a primary mapper defined. Use non_primary=True to create a non primary 
Mapper.  clear_mappers() will remove *all* current mappers from all 
classes.


So, I want to

a) confirm that the Table objects get overwritten (they contain state 
data, correct?) and I don't want the Table objects from a previous 
invocation to get mixed up with one from a later invocation.


and ask

b) why aren't the mappers overwritten?

I'm using clear_mappers as a temporary measure, as suggested by the error 
message, but I don't know if this is the right approach. It does get rid of 
the error message. Clarifications appreciated. I apologise if this message 
comes across as computationally illiterate, but despite using SQLA for 
nearly two years (has it been so long?), I still have little understanding 
of how some of its magic is accomplished. Thanks!


   Regards, Faheem

***
traceback follows
***

Traceback (most recent call last):
  File "load_all_datasets.py", line 58, in <module>
    load_all_datasets(dbname, dbtype, options.test, options.shard,
                      options.recreate, options.jobs)
  File "/home/faheem/snppy/utils.py", line 6, in wrapper
    res = func(*args, **kwargs)
  File "load_all_datasets.py", line 39, in load_all_datasets
    load_dataset(schema, dbname, options.alleletype, options.jobs,
                 options.recreate, options.shard, options.test)
  File "/home/faheem/snppy/load_dataset.py", line 71, in load_dataset
    p = platform.Single_Illumina(phenofiles, annotfile, genofile,
                                 genotablefile, newschema, alleletype,
                                 dbtype, dbstring_dbname, jobs)
  File "/home/faheem/snppy/platform.py", line 130, in __init__
    table_dict = make_tables_illumina(schema)
  File "/home/faheem/snppy/dbschema.py", line 293, in make_tables
    'race':relation(Race, backref='patients'),
  File "/usr/lib/pymodules/python2.5/sqlalchemy/orm/__init__.py", line 751, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/orm/mapper.py", line 197, in __init__
    self._configure_class_instrumentation()
  File "/usr/lib/pymodules/python2.5/sqlalchemy/orm/mapper.py", line 349, in _configure_class_instrumentation
    self.class_)
sqlalchemy.exc.ArgumentError: Class "<class 'dbschema.Pheno'>" already has
a primary mapper defined. Use non_primary=True to create a non primary
Mapper.  clear_mappers() will remove *all* current mappers from all
classes.





[sqlalchemy] session never saved to db

2010-09-11 Thread Faheem Mitha


Hi,

The following session is never saved to the db (not even the schema is 
created), but a query returns correctly. I assume that there is local 
caching going on, but session.dirty etc. doesn't show anything. So, two 
questions:


First, how (if possible) can I force SQLA to hit the db? In this case, how 
can I force it to return an error for the line


print session.execute(q1).fetchall()

?

Second, how can I get SQLA to show there is data unsaved to the db? I 
tried dirty and new. Thanks in advance.


  Regards, Faheem



dbuser = ""
password = ""
dbname = ""
dbstring = "postgres://%s:%s@localhost:5432/%s" % (dbuser, password, dbname)
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
db = create_engine(dbstring)
from sqlalchemy import MetaData
from sqlalchemy.sql import text
meta = MetaData()
meta.bind = db
meta.create_all()
Session = sessionmaker()
session = Session(bind=db)
session.execute("DROP SCHEMA IF EXISTS foo CASCADE; CREATE SCHEMA foo;")
q = text("DROP TABLE IF EXISTS foo.activity; CREATE TABLE foo.activity AS SELECT * FROM pg_stat_activity;")
session.execute(q)
q1 = text("select * from foo.activity;")
print session.execute(q1).fetchall()
print session.dirty
print session.new
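
(A hedged note on what I think is happening: the statements do hit the
db, but inside a transaction that is never committed, so everything is
rolled back when the script exits; and session.dirty/session.new only
track ORM objects, not raw SQL. The fix appears to be one extra line:)

session.commit()   # make the DROP/CREATE statements permanent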




[sqlalchemy] using scoped_session in serial mode as well as parallel (multi-threaded) mode

2010-08-12 Thread Faheem Mitha


Hi,

I'm using scoped_session with PostgreSQL to run multiple threads, each 
thread creating a table. However, I'm getting some pretty weird and 
inconsistent errors (segfaults from Python with some very peculiar 
errors), and I was wondering if my usage was at fault.


The program has a serial bit, and a parallel bit, and currently I'm doing 
something like this. For the serial bit I do


db = create_engine(dbstring)
Session = sessionmaker()
session = Session(bind=db)

and then later I do

session.close()

db = create_engine(self.dbstring)
Session = scoped_session(sessionmaker())
Session.configure(bind=db)

Looking at this, it seems likely that it would be better to just use 
scoped_session everywhere, that is, just start with


db = create_engine(self.dbstring)
Session = scoped_session(sessionmaker())
Session.configure(bind=db)
[proceed with using Session in serial mode and eventually use it in 
parallel mode too]


I'm basically writing to confirm that it is Ok to use scoped_session in 
this way. The way I'm doing it looks a little dodgy. I don't know if this 
is really the cause of my problem - just clutching at straws here. Thanks 
in advance. Please CC me on any reply.


  Regards, Faheem




[sqlalchemy] Re: using scoped_session in serial mode as well as parallel (multi-threaded) mode

2010-08-12 Thread Faheem Mitha


Addendum: the types of error I'm seeing include SQLA trying to execute 
notices from the PG server, e.g. one of the tracebacks I'm seeing is:


#0  PyObject_Malloc (nbytes=86) at ../Objects/obmalloc.c:756
#1  0x00455eb5 in PyString_FromString (str=0x2de0ece0 "WARNING:
there is no transaction in progress\n")
    at ../Objects/stringobject.c:139

This smells like memory being overwritten. Any idea what might be causing 
this?


Another possibility is that my usage of scoped_session is wrong. I can't 
find any explicit examples of usage in the official documentation, so this 
was partly guesswork on my part. Here is a sketch of my usage. The model 
I'm using is a thread pool, which lines up n jobs in a queue, and has a 
pool of k threads executing them. The problems seem to occur when n is too 
large.


Session = scoped_session(sessionmaker())
Session.configure(bind=db)
pool = ThreadPool(Session, self.jobs)
ids = self.get_idlink_ids(Session)
for i in ids:
    pool.add_task(self.load_geno_table_from_file, Session, i)
pool.start()

where load_geno_table_from_file is

def load_geno_table_from_file(self, session, i):
    session.execute("ALTER TABLE %s.geno%s SET ( autovacuum_enabled = false );" % (self.schema, i))
    self.drop_geno_table_constraints(session, 'geno%s' % i)
    self.copy_data_to_geno_table(session, 'geno%s' % i, 'tmp/geno%s' % i)
    self.restore_geno_table_constraints(session, 'geno%s' % i)
    session.execute("ALTER TABLE %s.geno%s SET ( autovacuum_enabled = true );" % (self.schema, i))

and add_task is

def add_task(self, func, *args, **kargs):
    """Add a task to the queue"""
    self.queue.put((func, args, kargs))

So, Session is passed to load_geno_table_from_file, which executes inside 
each thread. Is that Ok? I'm adding the rest of the thread code below for 
reference.

  Regards, Faheem

***
import Queue, threading, urllib2, time

class Worker(threading.Thread):
    """Thread executing tasks from a given tasks queue"""
    def __init__(self, session, queue, num):
        threading.Thread.__init__(self)
        self.num = num
        self.queue = queue
        self.setDaemon(True)
        self.session = session

    def run(self):
        import traceback
        while True:
            func, args, kargs = self.queue.get()
            try:
                func(*args, **kargs)
            except:
                traceback.print_exc()
            self.queue.task_done()

class ThreadPool:
    """Pool of threads consuming tasks from a queue"""
    def __init__(self, session, num_threads):
        from geno import Geno_Shard
        self.queue = Queue.Queue()
        self.workerlist = []
        self.num = num_threads
        self.session = session
        for i in range(num_threads):
            self.workerlist.append(Worker(session, self.queue, i))

    def add_task(self, func, *args, **kargs):
        """Add a task to the queue"""
        self.queue.put((func, args, kargs))

    def start(self):
        for w in self.workerlist:
            w.start()

    def wait_completion(self):
        """Wait for completion of all the tasks in the queue"""
        self.queue.join()






[sqlalchemy] Re: using scoped_session in serial mode as well as parallel (multi-threaded) mode

2010-08-12 Thread Faheem Mitha


Another reply to my own message, sorry.

Another thing I might be doing wrong is my usage of MetaData.
The code I have looks like

db = create_engine(self.dbstring)
meta = self.table_dict['metadata']
meta.bind = db
meta.create_all()
Session = scoped_session(sessionmaker())
Session.configure(bind=db)
pool = ThreadPool(Session, self.jobs)
ids = self.get_idlink_ids(Session)
for i in ids:
    pool.add_task(self.load_geno_table_from_file, Session, i)
pool.start()

Perhaps creation of this metadata instance is not thread-safe? I found a 
reference to ThreadLocalMetaData. Would it be better to use that instead?


   Regards, Faheem.


[sqlalchemy] Re: using scoped_session in serial mode as well as parallel (multi-threaded) mode

2010-08-12 Thread Faheem Mitha
On Thu, 12 Aug 2010 08:47:33 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On Aug 12, 2010, at 5:10 AM, Faheem Mitha wrote:

 
 Hi,
 
 I'm using scoped_session with PostgreSQL to run multiple threads, each 
 thread creating a table.

 Its generally a poor application practice for an application to need
 new permanent tables on the fly.  I think reddit's application might
 do this, but its considered poor design.  Database schemas are
 supposed to be fixed.

I'm not sure what you mean by "on the fly". The app is creating the
tables for later use. It is parallelizing the table creation for
performance reasons - there are a lot of tables.

 However, I'm getting some pretty weird and inconsistent errors
 (segfaults from Python with some very peculiar errors), and I was
 wondering if my usage was at fault.

 The program has a serial bit, and a parallel bit, and currently I'm doing 
 something like this. For the serial bit I do
 
 db = create_engine(dbstring)
 Session = sessionmaker()
 session = Session(bind=db)
 
 and then later I do
 
 session.close()
 
 db = create_engine(self.dbstring)
 Session = scoped_session(sessionmaker())
 Session.configure(bind=db)

 Looking at this, it seems likely that it would be better to just use 
 scoped_session everywhere, that is, just start with

 db = create_engine(self.dbstring)
 Session = scoped_session(sessionmaker())
 Session.configure(bind=db)
 [proceed with using Session in serial mode and eventually use it in parallel 
 mode too]

 I'm basically writing to confirm that it is Ok to use
   scoped_session in this way. The way I'm doing it looks a little
   dodgy. I don't know if this is really the cause of my problem -
   just clutching at straws here. Thanks in advance. Please CC me on
   any reply

 you can make as many scoped_sessions, metadatas, etc. as you want,
  none of that would cause a segfault.  They are just Python
  objects. Its only if you share a psycopg2 connection between threads
  and have different threads hammer on it simultaneously that there
  would be issues like that.  If you have a single session, and share
  that among threads who access it concurrently, this will produce
  that result.  There should be no need to guess about it.  If
  multiple threads are hitting the session you have above, then that's
  a likely cause of your issue.

Hi Mike,

Thanks for the response, but I don't follow.

When you say "multiple threads are hitting the session you have
above", which session are you referring to?  There is more than one
object above that could be called a session. I.e.

Session1 in Session1 = sessionmaker()
session1 in session1 = Session1(bind=db)
Session2 in Session2 = scoped_session(sessionmaker())

Let me try to ask a precise question. If I do

Session = scoped_session(sessionmaker())

then is it ok for this Session object to be passed around
between multiple threads and used directly, as in

Session.commit()

Does this correspond to a single psycopg2 connection? If it does, and
this usage is wrong, should I be creating separate sessions within
each thread, like

session = Session()

and then doing

session.commit()

within each thread? Or something else? My usage is based on examples
online.
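
To make the question concrete, a hedged sketch of the per-thread usage
I am asking about (the DSN is hypothetical):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

db = create_engine("postgres://user:password@localhost:5432/dbname")
Session = scoped_session(sessionmaker(bind=db))

def worker():
    session = Session()   # a thread-local Session for this thread
    # ... do work with session ...
    session.commit()
    Session.remove()      # discard this thread's session when finished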

My later posts have more details, along with healthy amounts of
confusion. I apologise for my cluelessness, sorry.

Regards, Faheem






[sqlalchemy] Re: using scoped_session in serial mode as well as parallel (multi-threaded) mode

2010-08-12 Thread Faheem Mitha
On Thu, 12 Aug 2010 09:44:04 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On Aug 12, 2010, at 9:25 AM, Faheem Mitha wrote:

 
 Hi Mike,
 
 Thanks for the response, but I don't follow.
 
 When you say multiple threads are hitting the session you have
 above, which session are you referring to?  There is more than one
 object above that could be called a session. Ie.


 I don't actually know, it would require that I have a full install
  of your application for me to run and step through with a debugger
  to fully understand what it's doing.  All I can say from here is
  that the errors you have suggest concurrent access to a single
  psycopg2 connection resource, and that a single Session references a
  single connection when in use.  A MetaData object does not, nor does
  an Engine - only a Session.  If you remove all threading from your
  application and the errors go away, then you know you're accessing
  some resource illegally.

Yes, I see. Yes, the error does not show up unless I run multiple
threads, and I agree with your interpretation.

If MetaData is threadsafe, then using ThreadLocalMetaData is not
necessary?

 Session1 in Session1 = sessionmaker()
 session1 in session1 = Session1(bind=db)
 Session2 in Session2 = scoped_session(sessionmaker())
 
 Let me try to ask a precise question. If I do
 
 Session = scoped_session(sessionmaker())
 
 then is it ok for this Session object to be be passed around
 between multiple threads and used directly as in
 
 Session.commit()
 
 Does this correspond to a single psycopg2 connection? If it does, and
 this usage is wrong, should I be creating separate sessions within
 each thread like

 That's a scoped_session, which is threadsafe.  Everything you call
 upon it will acquire a Session object from a thread local context,
 and commit() is called on that (for information on thread locals,
 see http://docs.python.org/library/threading.html#threading.local.

 If you pass a scoped_session from one thread to another, and the
 second thread calls commit(), the second thread is not affecting the
 transaction begun by the first.  They are two separate transactions.

Ok. Thanks for the confirmation. So, if I was to use scoped sessions
systematically everywhere, this problem would likely disappear. Can
you confirm that there is no reason not to use scoped sessions
everywhere, even in serial execution? Of course, if that is the case,
then I wonder why non-scoped sessions are used at all.

 Regards, Faheem.







[sqlalchemy] Re: using scoped_session in serial mode as well as parallel (multi-threaded) mode

2010-08-12 Thread Faheem Mitha
On Thu, 12 Aug 2010 12:47:37 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On Aug 12, 2010, at 11:47 AM, Faheem Mitha wrote:

 On Thu, 12 Aug 2010 09:44:04 -0400, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 On Aug 12, 2010, at 9:25 AM, Faheem Mitha wrote:
 
 
 Hi Mike,

 Thanks for the response, but I don't follow.

 When you say multiple threads are hitting the session you have
 above, which session are you referring to?  There is more than one
 object above that could be called a session. Ie.

 I don't actually know, it would require that I have a full install
  of your application for me to run and step through with a debugger
  to fully understand what it's doing.  All I can say from here is
  that the errors you have suggest concurrent access to a single
  psycopg2 connection resource, and that a single Session references a
  single connection when in use.  A MetaData object does not, nor does
  an Engine - only a Session.  If you remove all threading from your
  application and the errors go away, then you know you're accessing
  some resource illegally.

 Yes, I see. Yes, the error does not show up unless I run multiple
 threads, and I agree with your interpretation.

 If MetaData is threadsafe, then using ThreadLocalMetaData is not
 necessary?

 ThreadLocalMetaData is not necessary and is not used for this purpose.

 Ok. Thanks for the confirmation. So, if I was to use scoped sessions
 systematically everywhere, this problem would likely disappear.

 that's not necessarily true - if you share individual persistent
  objects among threads, they reference their owning session in order
  to load additional state.  If you pass objects between threads you
  should merge() them into the current thread's session first, then
  use that result.

I see. That's very enlightening. Can one query such objects to
determine their owning session? Some attribute, perhaps?
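
(There does appear to be a helper for exactly this; a hedged sketch,
where obj is a hypothetical mapped instance:)

from sqlalchemy.orm import object_session

owning = object_session(obj)   # the Session that owns obj, or None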

 Can you confirm that there is no reason not to use scoped sessions
 everywhere, even in serial execution? Of course, if that is the
 case, then I wonder why non-scoped sessions are used at all.

 scoped_sessions are usually the default choice for web applications
  since they desire distinct transactions and object state for
  individual threads.  They are overkill and potentially confusing or
  inappropriate in other situations, however.

I'm not sure why they would be potentially confusing. What are some of
the downsides? I'd have thought that not having shared state was less
confusing.

  Regards, Faheem.




[sqlalchemy] Re: open session blocks metadata create_all method

2010-07-29 Thread Faheem Mitha
On Thu, 29 Jul 2010 11:36:43 +0100, King Simon-NFHD78 simon.k...@motorola.com 
wrote:

 You can tell meta.create_all() to use the same underlying DB connection
 as the session by using the session.connection() method with the 'bind'
 parameter to create_all().

 Ie.

   connection = session.connection()
   meta.create_all(bind=connection)

 See the docs at
 http://www.sqlalchemy.org/docs/session.html#using-sql-expressions-with-sessions
 and
 http://www.sqlalchemy.org/docs/reference/sqlalchemy/schema.html#sqlalchemy.schema.MetaData.create_all

 Hope that helps,

Hi Simon,

Thanks. Do you understand why this blocking takes place? I assume by
default create_all tries to make a different connection, and fails for
some reason?

 Regards, Faheem.




[sqlalchemy] open session blocks metadata create_all method

2010-07-28 Thread Faheem Mitha


Hi,

Calling create_all on a metadata instance after a session has already 
been opened causes the create_all to hang, I assume because the session is 
blocking the create_all. Is there some way to get create_all to use the 
existing session, or any other graceful way around this? Thanks.


I guess another option is to close and then reopen the session after the 
create_all has been called, but I'd prefer not to do that if possible.


  Regards, Faheem.

##

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import MetaData
meta = MetaData()

def make_foo(meta):
    foo = Table(
        'foo', meta,
        Column('id', Integer, nullable=False, primary_key=True),
    )
    return foo

def make_bar(meta):
    bar = Table(
        'bar', meta,
        Column('id', Integer, ForeignKey('foo.id', onupdate='CASCADE',
                                         ondelete='CASCADE'),
               nullable=False, primary_key=True),
    )
    return bar

dbuser = ""
password = ""
dbname = ""
dbstring = "postgres://%s:%s@localhost:5432/%s" % (dbuser, password, dbname)
from sqlalchemy import create_engine
db = create_engine(dbstring)
meta.bind = db
db.echo = 'debug'
make_foo(meta)
meta.create_all()
Session = sessionmaker()
session = Session(bind=db)
session.execute("select * from foo;")
make_bar(meta)
meta.create_all()
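
For reference, a hedged sketch of the workaround suggested elsewhere in
this thread: run the second create_all() on the session's own
connection, so the DDL joins the session's open transaction instead of
waiting behind it (the open SELECT likely holds a lock on foo that the
foreign-key-creating DDL must wait for).

connection = session.connection()
make_bar(meta)
meta.create_all(bind=connection)
session.commit()   # release the transaction's locks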




[sqlalchemy] Re: open session blocks metadata create_all method

2010-07-28 Thread Faheem Mitha
On Wed, 28 Jul 2010 15:17:09 +0530 (IST), Faheem Mitha fah...@email.unc.edu 
wrote:

 Hi,

 Calling create_all on a metadata instance after a session has
 already been opened causes the create_all to hang, I assume because
 the session is blocking the create_all. Is there some way to get
 create_all to use the existing session, or any other graceful way
 around this? Thanks.

 I guess another option is to close and then reopen the session after
 the create_all has been called, but I'd prefer not to do that if
 possible.

Putting a session.close() before the create_all fixes the problem. I
assume this means that create_all doesn't work in the middle of a
transaction, or something like that?

Regards, Faheem




[sqlalchemy] Re: open session blocks metadata create_all method

2010-07-28 Thread Faheem Mitha
Hi Lance,

On Wed, 28 Jul 2010 06:45:30 -0500, Lance Edgar lance.ed...@gmail.com wrote:

 On Wed, 2010-07-28 at 11:33 +0000, Faheem Mitha wrote:

 On Wed, 28 Jul 2010 15:17:09 +0530 (IST), Faheem Mitha 
 fah...@email.unc.edu wrote:
 
  Hi,

  Calling create_all on a metadata instance after a session has
  already been opened causes the create_all to hang, I assume because
  the session is blocking the create_all. Is there some way to get
  create_all to use the existing session, or any other graceful way
  around this? Thanks.

  I guess another option is to close and then reopen the session after
  the create_all has been called, but I'd prefer not to do that if
  possible.

 Putting a session.close() before the create_all fixes the problem. I
 assume this means that create_all doesn't work in the middle of a
 transaction, or something like that?


 I can't speak to the underlying mechanics of create_all(), but calling
 session.close() prior to create_all() would work, as you say.  Another
 option would be to simply not use a session, but instead just a
 *connection*.  Sessions are specific to the ORM which, according to the
 code you posted, you are not using.  So if you really just need to make
 a SELECT call to a table, then instead of creating a session and
 calling .execute() on it, you could instead do this:

 db = create_engine(dbstring)
 meta.bind = db
 db.echo = 'debug'
 make_foo(meta)
 meta.create_all()
 db.connect().execute("select * from foo;")
 make_bar(meta)
 meta.create_all()

The example was just an example. After going back and forth a bit,
I've finally standardized on the session as the thing to move around in
my application. The db.connect thing works, I think, because autocommit
is the default for connect.

I'd like to hear an explanation of why create_all is blocked here. I
periodically have my scripts hang for no apparent reason, almost
always because the db is blocking something, so I would like to become
more educated on this issue.

 Regards, Faheem




[sqlalchemy] session.execute with autocommit option turned on is a no-op

2010-07-28 Thread Faheem Mitha


Hi,

In the following script, the last line, namely

session.execute("select * from drop_constraint_if_exists('foo', 'foo_pkey',
'public');")

doesn't drop the constraint. It does if autocommit is turned off, and a 
session.commit() is issued after the statement.


The autocommit setting works with similar statements that are not wrapped 
up in a function, specifically


session.execute("ALTER TABLE foo DROP CONSTRAINT foo_pkey;")

I also notice that in debug mode, the db issues a COMMIT in the case of

session.execute("ALTER TABLE foo DROP CONSTRAINT foo_pkey;")

but not in the case of

session.execute("select * from drop_constraint_if_exists('foo', 'foo_pkey',
'public');")

so presumably the problem is that in this case SQLA is not, in fact, 
autocommitting for some reason. Clarifications appreciated.
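One thing I may try (a sketch, assuming the autocommit flag on text()
applies to this case): tag the statement itself so SQLA treats the
SELECT as one that mutates the database.

from sqlalchemy.sql import text
drop_stmt = text(
    "select * from drop_constraint_if_exists('foo', 'foo_pkey', 'public');",
    autocommit=True)  # hint that this SELECT modifies the database
session.execute(drop_stmt)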


Regards, 
Faheem

*

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import MetaData

create_drop_constraint_if_exists_function = text("""
CREATE OR REPLACE FUNCTION drop_constraint_if_exists (t text, k text, s text =
NULL) returns void as $$
BEGIN
IF s IS NOT NULL
THEN
EXECUTE 'alter table ' || quote_ident(s) || '.' || quote_ident(t) || '
drop constraint ' || quote_ident(k) || ' cascade ';
ELSE
EXECUTE 'alter table ' || quote_ident(t) || ' drop constraint ' ||
quote_ident(k) || ' cascade ';
END IF;
EXCEPTION WHEN undefined_object THEN
END;
$$ LANGUAGE plpgsql;
""")

meta = MetaData()

def make_foo(meta):
    foo = Table(
        'foo', meta,
        Column('id', Integer, nullable=False, primary_key=True),
    )
    return foo

dbuser =
password =
dbname =
dbstring = "postgres://%s:%s@localhost:5432/%s" % (dbuser, password, dbname)
from sqlalchemy import create_engine
db = create_engine(dbstring)
meta.bind = db
db.echo = 'debug'
foo = make_foo(meta)
meta.create_all()
Session = sessionmaker(autocommit=True)
session = Session(bind=db)
session.execute("select * from drop_constraint_if_exists('foo', 'foo_pkey',
'public');")




[sqlalchemy] Re: SQLAlchemy NotImplementedError when doing a metadata create_all

2010-07-27 Thread Faheem Mitha
Hi Mike,

On Tue, 27 Jul 2010 09:07:15 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On Jul 27, 2010, at 1:34 AM, Faheem Mitha wrote:

 My usage is slightly non-standard - a foreign key pointing to a
 foreign key.

 that never worked, most likely.  its very complicated to get the
 types to propagate up a chain like that, and all on a deferred
 execution, in a reasonable way.

Right.

 Ok. Well, it should be removed from the docs, I guess.

 it has.  It's mentioned in one place with a huge caveat.  Where are
 you reading this ?

When I started writing the application in question I was using 0.4,
and I was reading Essential Sqlalchemy at the time too. That was
November 2008. So probably either the 0.4 docs or Essential SQLA
mentioned the option of leaving the type blank.

I just changed the app to state the ForeignKey types explicitly. This
kind of inference is handy, especially when one is changing types, but
I suppose the db would catch inconsistencies anyway.
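For the record, a sketch of what the chained example looks like with the
types stated explicitly (assuming Integer throughout):

foo = Table('foo', meta,
    Column('id', Integer, nullable=False, primary_key=True))
bar = Table('bar', meta,
    Column('id', Integer, ForeignKey('foo.id', onupdate='CASCADE',
                                     ondelete='CASCADE'),
           nullable=False, primary_key=True))
baz = Table('baz', meta,
    Column('id', Integer, ForeignKey('bar.id', onupdate='CASCADE',
                                     ondelete='CASCADE'),
           nullable=False, primary_key=True))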

   Regards, Faheem.




[sqlalchemy] SQLAlchemy NotImplementedError when doing a metadata create_all

2010-07-26 Thread Faheem Mitha


Dear SQLAlchemists,

With the following script, I get the traceback below. This is not the 
actual example I ran into, but a small test case. To my surprise, I was 
able to reproduce the identical error almost immediately. In my actual 
code, I was able to work around this error by doing a table.create() on 
the first table followed by a create_all, but that doesn't seem to work 
with my small example.


This is SQLAlchemy 0.5.8-1 running on Debian lenny with python 2.5.2-3, 
and with PostgreSQL 8.4.2-1~bpo50+1. I'm considering moving to 0.6 but am 
concerned about breakage.


This seems pretty innocuous. Clarifications appreciated.

Regards, Faheem.

***
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy import MetaData
meta = MetaData()

foo = Table(
    'foo', meta,
    Column('id', None, nullable=False, primary_key=True),
)

bar = Table(
    'bar', meta,
    Column('id', None, ForeignKey('foo.id', onupdate='CASCADE',
                                  ondelete='CASCADE'),
           nullable=False, primary_key=True),
)

dbuser =
password =
dbname =
dbstring = "postgres://%s:%s@localhost:5432/%s" % (dbuser, password, dbname)
from sqlalchemy import create_engine
db = create_engine(dbstring)
meta.bind = db
meta.create_all()
Session = sessionmaker()
session = Session(bind=db)

**
Traceback (most recent call last):
  File "<stdin>", line 23, in <module>
  File "/usr/lib/pymodules/python2.5/sqlalchemy/schema.py", line 1811, in
create_all
    bind.create(self, checkfirst=checkfirst, tables=tables)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line
1129, in create
    self._run_visitor(self.dialect.schemagenerator, entity,
connection=connection, **kwargs)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line
1158, in _run_visitor
    visitorcallable(self.dialect, conn, **kwargs).traverse(element)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/sql/visitors.py", line 89,
in traverse
    return traverse(obj, self.__traverse_options__, self._visitor_dict)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/sql/visitors.py", line
200, in traverse
    return traverse_using(iterate(obj, opts), obj, visitors)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/sql/visitors.py", line
194, in traverse_using
    meth(target)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/sql/compiler.py", line
831, in visit_metadata
    self.traverse_single(table)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/sql/visitors.py", line 79,
in traverse_single
    return meth(obj)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/sql/compiler.py", line
856, in visit_table
    self.append("\t" + self.get_column_specification(column,
first_pk=column.primary_key and not first_pk))
  File "/usr/lib/pymodules/python2.5/sqlalchemy/databases/postgres.py",
line 849, in get_column_specification
    colspec += " " + column.type.dialect_impl(self.dialect).get_col_spec()
  File "/usr/lib/pymodules/python2.5/sqlalchemy/types.py", line 392, in
get_col_spec
    raise NotImplementedError()
NotImplementedError




[sqlalchemy] Re: SQLAlchemy NotImplementedError when doing a metadata create_all

2010-07-26 Thread Faheem Mitha


Hi,

It turns out my example was too hasty. I should have had something like

foobar = Table(
    'foo', meta,
    Column('id', Integer, nullable=False, primary_key=True),
)

bar = Table(
    'bar', meta,
    Column('id', None, ForeignKey('foo.id', onupdate='CASCADE',
                                  ondelete='CASCADE'),
           nullable=False, primary_key=True),
)

baz = Table(
    'baz', meta,
    Column('id', None, ForeignKey('bar.id', onupdate='CASCADE',
                                  ondelete='CASCADE'),
           nullable=False, primary_key=True),
)

which also gives the same error. I'm using the None type for the
ForeignKey columns here, per the docs. My previous example was using
None for a column that was not a ForeignKey.


Also, replacing the type of bar.id with Integer gives the error:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) there is no unique
constraint matching given keys for referenced table "bar"
 '\nCREATE TABLE baz (\n\tid INTEGER NOT NULL, \n\tPRIMARY KEY (id), \n\t
FOREIGN KEY(id) REFERENCES bar (id) ON DELETE CASCADE ON UPDATE
CASCADE\n)\n\n' {}


which is not very encouraging either. Currently trying to get sqla to emit 
the SQL for create_all so I can see what it is trying to do.


I added

db.echo = True
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)

but only the

db.echo = True

does something, and that doesn't emit the SQL for creating the tables.

   Regards, Faheem.


[sqlalchemy] Re: SQLAlchemy NotImplementedError when doing a metadata create_all

2010-07-26 Thread Faheem Mitha


Update:

On Tue, 27 Jul 2010, Faheem Mitha wrote:



Hi,

It turns out my example was too hasty. I should have had something like

foobar = Table(
   'foo', meta,
   Column('id',  Integer, nullable=False, primary_key=True),
   )

bar = Table(
   'bar', meta,
   Column('id',  None, ForeignKey('foo.id', onupdate='CASCADE', 
ondelete='CASCADE'), nullable=False,  primary_key=True),

   )

baz = Table(
   'baz', meta,
   Column('id',  None, ForeignKey('bar.id', onupdate='CASCADE', 
ondelete='CASCADE'), nullable=False, primary_key=True),

   )

which also gives the same error. Using None type for ForeignKeys here, per 
the docs. My previous example was using None for a col that was not a 
ForeignKey.


Also, replacing the type of bar.id with Integer gives the error:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) there is no unique
constraint matching given keys for referenced table "bar"
'\nCREATE TABLE baz (\n\tid INTEGER NOT NULL, \n\tPRIMARY KEY (id), \n\t
FOREIGN KEY(id) REFERENCES bar (id) ON DELETE CASCADE ON UPDATE
CASCADE\n)\n\n' {}


I can't reproduce this, after having turned on SQL generation. The former 
example, with None for the type of bar.id, is reproducible.


which is not very encouraging either. Currently trying to get sqla to emit 
the SQL for create_all so I can see what it is trying to do.


I added

db.echo = True
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
logging.getLogger('sqlalchemy.orm.unitofwork').setLevel(logging.DEBUG)

but only the

db.echo = True

does something, and that doesn't emit the SQL for creating the tables.


Should be 'debug', it seems.
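So, something like this (a sketch; as I understand it, echo = 'debug'
additionally logs result rows, which plain echo = True does not):

db.echo = 'debug'
# or, equivalently, via the logging module:
import logging
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)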


  Regards, Faheem.


[sqlalchemy] Re: SQLAlchemy NotImplementedError when doing a metadata create_all

2010-07-26 Thread Faheem Mitha
Hi Lance,

Thanks for the quick reply.

On Mon, 26 Jul 2010 17:09:06 -0500, Lance Edgar lance.ed...@gmail.com wrote:

 I had a similar question a little while back and here was the answer:

http://groups.google.com/group/sqlalchemy/browse_thread/thread/1e861e9d1a0fefde

(url on one page)

This is the basic usage, which apparently is broken in 0.6 as well. My
usage is slightly non-standard - a foreign key pointing to a foreign
key. I actually read this message in the last day or two, but
apparently it didn't register.

 Basically you shouldn't use None for a ForeignKey's type anymore.

Ok. Well, it should be removed from the docs, I guess.

As for the producing SQL thing, I'm not too concerned as long as it
works, but the docs aren't terribly clear on this. I tried 'debug' as
part of a sequence of trial and error.

BTW, my newsreader (slrn) doesn't cope well with html mail.

  Regards, Faheem.




[sqlalchemy] Re: about database connections, engines, dispose and recommended usage

2010-06-27 Thread Faheem Mitha
See followup comment below...

On Sat, 15 May 2010 07:44:04 -0700 (PDT), Michael Bayer 
mike...@zzzcomputing.com wrote:


 On May 15, 2:52 am, Faheem Mitha fah...@email.unc.edu wrote:
 Hi,

 I was trying to figure out a way to close all connections to a db.

 Apparently

 db = create_engine(dbstring)
 conn = db.connect()
 [...]
 conn.close()

 doesn't actually close the connection initiated by conn. I have to call

 db.dispose()

 which seems to do so, though I cannot find any clear documentation that
 says so. It would be nice if this was mentioned in the section that
 describes connections.

 it's mentioned in pool.dispose(), and engine.dispose() is likely missing
 a docstring (also the 0.5 docs are not nearly as up to date as those
 of 0.6).  the method is not guaranteed to close all open connections,
 only those which are not currently checked out.

Belated update - I just looked at this message again, and realised I
don't have a clear idea of what 'checked out' means in this
context. The term is used in
http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/pooling.html
several times for example, but I don't see an actual definition there.
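For what it's worth, my working picture of "checked out" (a sketch, not
SQLA internals):

conn = db.connect()        # connection is checked out of the pool
conn.execute("select 1")
conn.close()               # checked back in; db.dispose() can now close it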

  Regards, Faheem.

 In any case, I'm writing to ask what recommended usage for connecting to a
 db is. Let's say I am writing a script which has various functions
 connecting to the same db. Should I pass around a db string corresponding
 to the engine, or should I pass around an engine itself, or should I pass
 around a connection object? I suppose the third might cause problems,
 since presumably only one user can use a connection at a time.

 passing a connection between various methods and functions  implies a
 single thread of execution so there's no concurrency concern there, if
 that's what you mean by one user.  The issue comes down to how you'd
 like your application to deal with transactions, whether you'd like a
 group of operations to proceed on the same Connection (can be made to
 be on all one transaction more easily) or not.   A lot of related
 discussion is at http://www.sqlalchemy.org/docs/dbengine.html,
 including the discussion about transactions, threadlocal strategies,
 etc.   But none of that makes a choice for you.


 If so, with option 1, when passing dbstring to a function, should I close
 the connection and dispose of the created engine,

 option 1 is not an option here and is vastly inefficient.  your
 application should have one engine per database backend.





[sqlalchemy] Re: Error trying to use session.execute

2010-06-14 Thread Faheem Mitha
On Sun, 13 Jun 2010 11:20:58 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 When an exception occurs in a transaction with PostgreSQL, you in
 most cases must issue a rollback() (that is what (InternalError)
 current transaction is aborted, commands ignored until end of
 transaction block means, thats a PG message). Therefore you cannot
 attempt an operation with PG inside a transaction, have it fail,
 ignore the failure, then continue in the same transaction. You need
 to roll the transaction back and start a new one, or use an
 autocommit mode which accomplishes the same thing.

 As far as 2, I'm not sure what "transactionalized execution" is.

 It means a statement is executed while a transaction is in progress.
 Each subsequent statement occurs within the same transaction as the
 previous, until a rollback() or commit() is issued.  This is the
 opposite of autocommit, where each statement occurs in a distinct
 transaction.

Hi Mike,

Thanks for the clarifications.  I figured out that the failure was the
problem, but didn't understand exactly why. So pg doesn't
automatically roll back the transaction, apparently.

   Regards, Faheem.




[sqlalchemy] Re: Error trying to use session.execute

2010-06-13 Thread Faheem Mitha
On Sat, 12 Jun 2010 19:58:28 -0400, Michael Bayer
mike...@zzzcomputing.com wrote:

 this pattern:

 try:
foo()
 except:
pass

 will get you killed every time.

 the difference in result is between the session's default of
  transactionalized execution and the Connection's default of
  autocommit execution.

Hi Mike,

Thanks for the quick reply, as usual.

I'm going to assume comment 1 and comment 2 are unrelated. If they
aren't, please correct me.

As regards 1, I assume you mean try... pass... is a bad idea. I agree,
but there is no

CREATE LANGUAGE IF NOT EXISTS

in PostgreSQL, so CREATE LANGUAGE exits with an error if the language
already exists. So, what alternative do you suggest? I guess catching
the exception within pg itself is an option - I've done it in at least
one other case. The advantage with that approach is that one can then
finetune the catching of the exception more precisely. However, I
don't know if this is relevant to what you mean.

As far as 2, I'm not sure what "transactionalized execution" is. I
tried Googling for this, and came up, for example, with

http://www.sqlalchemy.org/docs/dbengine.html#using-transactions-with-connection

but this didn't help me understand what the problem was. If the
problem is that the transaction does not commit immediately, why
should that cause an error?

I remembered that text() has the autocommit=True option, and tried
using it with create_drop_constraint_if_exists_function, but it didn't
help.

   Regards, Faheem.


[sqlalchemy] Re: Error trying to use session.execute

2010-06-13 Thread Faheem Mitha
On Sun, 13 Jun 2010 13:33:39 +0530, Faheem Mitha fah...@email.unc.edu wrote:

 I'm going to assume comment 1 and comment 2 are unrelated. If they
 aren't, please correct me.

 As regards 1, I assume you mean try... pass... is a bad idea. I agree,
 but there is no

 CREATE LANGUAGE IF EXISTS

 in PostgreSQL, so CREATE LANGUAGE exits with an error if the language
 already exists. So, what alternative do you suggest? I guess catching
 the exception within pg itself is an option - I've done it in at least
 one other case. The advantage with that approach is that one can then
 finetune the catching of the exception more precisely. However, I
 don't know if this is relevant to what you mean.

 As far as 2, I'm not sure what transactionalized execution is. I
 tried Googling for this, and came up, for example, with

 http://www.sqlalchemy.org/docs/dbengine.html#using-transactions-with-connection

 but this didn't help me understand what the problem was. If the
 problem is that the transaction does not commit immediately, why
 should that cause an error?

 I remembered that text() has the autocommit=True option, and tried
 using it with create_drop_constraint_if_exists_function, but it didn't
 help.

[Following up to myself]

Adding

session.commit()

after

try:
session.execute(create_plpgsql)
except:
pass

makes this work. I'm still not sure what is happening here, but
possibly session.execute(create_plpgsql) needs a chance to clean up
after itself before another transaction? If that is right, then your
two comments were in fact related. :-) I note that setting
autocommit=True on create_plpgsql doesn't work, presumably because it
errors out, and therefore cannot be committed.

Regardless, an expert explanation would be appreciated.

Would using something like
http://wiki.postgresql.org/wiki/CREATE_OR_REPLACE_LANGUAGE be a
reasonable solution to this? Then the try except would not be
necessary.
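In the meantime, a sketch of the rollback variant, per the advice that
PostgreSQL requires explicitly ending the aborted transaction:

try:
    session.execute(create_plpgsql)
except Exception:
    session.rollback()   # end the aborted transaction before continuing
session.execute(create_drop_constraint_if_exists_function)
session.commit()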

 Thanks, Faheem.





[sqlalchemy] Re: Multiple databases or multiple schemas?

2010-06-13 Thread Faheem Mitha
On Sun, 13 Jun 2010 13:27:34 -0700, Henry Precheur he...@precheur.org wrote:
 I'm starting a new project which is roughly hosting a bunch of sites. I
 want the sites to be isolated, they'll all have the same schema (data
 definition), but will store their data on different 'name-spaces'.

 The front end will be a python web app. And I'd like to have this
 application talk to all the 'name-spaces' at the same time:

   With a request on example.com/site1, the web app will use the
   'name-space' site1, with example.com/site2 it will use site2.

 I am using Postgres. There are 2 options for the 'name-space': Database
 or Schema [1]:
   1. One database per site
   2. One database for all the sites and 1 schema per site


 Solution #1 would require to maintain 1 connection per site  per python
 process. That means: lots of connections,  lots of memory needed. One
 the other hand, this solution is supported by SQLAlchemy out-of-the-box.
 I'll have a dictionary like that:

 {'site1': Engine('postgres://.../site1',
  'site2': Engine('postgres://.../site2', ...}

 And whenever a request comes in I get the right engine via this
 dictionary.


 Solution #2 is not supported natively by SQLAlchemy. Each time a request
 comes-in I'll have to issue an additional query SET search_path TO
 MY_SITE where MY_SITE is the schema associated with the site.

SQLAlchemy's Table can take the schema as an argument, e.g.

pheno_table = Table(
    'pheno', metadata,
    Column('patientid', String(60), primary_key=True),
    Column('famid', String(60), nullable=True),
    Column('sex_id', None, ForeignKey(schemaname + '.sex.val',
           onupdate='CASCADE', ondelete='CASCADE'), index=True),
    Column('race_id', None, ForeignKey(schemaname + '.race.val',
           onupdate='CASCADE', ondelete='CASCADE'), index=True),
    Column('phenotype', SmallInteger),
    schema=schemaname,
)

So I don't think you do have to do that.

 Solution #2 seems much more lightweight to me. The only problem is the
 small overhead that might be created by the additional query.

I'm actually using multiple schemas in one db myself, and it seems to
me sqla supports this just fine. The only time I have to do

SET search_path TO MY_SITE

is when I access the db directly using psql. Of course, you might have
to worry whether the web end of things support schemas too.

Faheem.

 What do you guys think? Will I get into trouble with solution #2?

 If you have alternative suggestions I'd like to hear them :)


 Regards,

 [1]: http://www.postgresql.org/docs/8.4/interactive/ddl-schemas.html

 -- 
   Henry Prêcheur





[sqlalchemy] Error trying to use session.execute

2010-06-12 Thread Faheem Mitha


Hi,

While rearranging some Python code using SQLAlchemy, I managed to get this
puzzling error. The oddest bit is that using conn.execute works, while
session.execute doesn't.


Below, the code that doesn't work, the code that works, and last, the 
traceback for the code that doesn't work. If anyone could explain to me 
what I'm missing, I'd appreciate it.


  Regards, Faheem.

**

from sqlalchemy.sql import text

create_plpgsql = text("""
CREATE LANGUAGE plpgsql;
""")

create_drop_constraint_if_exists_function = text("""
CREATE OR REPLACE FUNCTION drop_constraint_if_exists (t text, k text, s
text = NULL) returns void as $$
BEGIN
IF s IS NOT NULL
THEN
EXECUTE 'alter table ' || quote_ident(s) || '.' || quote_ident(t) || '
drop constraint ' || quote_ident(k);
ELSE
EXECUTE 'alter table ' || quote_ident(t) || ' drop constraint ' ||
quote_ident(k);
END IF;
EXCEPTION WHEN undefined_object THEN
END;
$$ LANGUAGE plpgsql;
""")

*
this does not work
*
from sqlalchemy.orm import sessionmaker
dbstring = "postgres://snp:pqxxro...@localhost:5432/affy6_faheem"
from sqlalchemy import create_engine
db = create_engine(dbstring)
Session = sessionmaker(bind=db)
session = Session()
try:
    session.execute(create_plpgsql)
except:
    pass
session.execute(create_drop_constraint_if_exists_function)

*
this works
*
from sqlalchemy.orm import sessionmaker
dbstring = "postgres://snp:pqxxro...@localhost:5432/affy6_faheem"
from sqlalchemy import create_engine
db = create_engine(dbstring)
conn = db.connect()
try:
    conn.execute(create_plpgsql)
except:
    pass
conn.execute(create_drop_constraint_if_exists_function)



Traceback (most recent call last):
  File "<stdin>", line 34, in <module>
  File "/usr/lib/pymodules/python2.5/sqlalchemy/orm/session.py", line 753,
in execute
    clause, params or {})
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line 824,
in execute
    return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line 874,
in _execute_clauseelement
    return self.__execute_context(context)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line 896,
in __execute_context
    self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line 950,
in _cursor_execute
    self._handle_dbapi_exception(e, statement, parameters, cursor,
context)
  File "/usr/lib/pymodules/python2.5/sqlalchemy/engine/base.py", line 931,
in _handle_dbapi_exception
    raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.InternalError: (InternalError) current transaction is
aborted, commands ignored until end of transaction block
 "\nCREATE OR REPLACE FUNCTION drop_constraint_if_exists (t text, k text,
s text = NULL) returns void as $$\nBEGIN\nIF s IS NOT NULL\nTHEN\nEXECUTE
'alter table ' || quote_ident(s) || '.' || quote_ident(t) || ' drop
constraint ' || quote_ident(k);\nELSE\nEXECUTE 'alter table ' ||
quote_ident(t) || ' drop constraint ' || quote_ident(k);\nEND
IF;\nEXCEPTION WHEN undefined_object THEN\nEND;\n$$ LANGUAGE plpgsql;\n" {}




[sqlalchemy] returning a list of attributes

2010-05-20 Thread Faheem Mitha


Hi,

Is there a more elegant way of getting a list of attributes corresponding 
to a list of objects than the code below? Thanks.


Suppose Foo() is an object with attribute bar...
  Regards, Faheem.

dbstring = ...
from dbschema import Foo
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
db = create_engine(dbstring)
metadata.bind = db
metadata.create_all()
Session = sessionmaker()
session = Session()
print [f.bar for f in session.query(Foo).all()]
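One alternative I can think of (a sketch, assuming 0.5-era column
queries): ask the query for just the attribute instead of whole Foo
instances, so only that column is selected:

print [bar for (bar,) in session.query(Foo.bar).all()]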




[sqlalchemy] about database connections, engines, dispose and recommended usage

2010-05-15 Thread Faheem Mitha


Hi,

I was trying to figure out a way to close all connections to a db.

Apparently

db = create_engine(dbstring)
conn = db.connect()
[...]
conn.close()

doesn't actually close the connection initiated by conn. I have to call

db.dispose()

which seems to do so, though I cannot find any clear documentation that 
says so. It would be nice if this was mentioned in the section that 
describes connections.


In any case, I'm writing to ask what recommended usage for connecting to a 
db is. Let's say I am writing a script which has various functions 
connecting to the same db. Should I pass around a db string corresponding 
to the engine, or should I pass around an engine itself, or should I pass 
around a connection object? I suppose the third might cause problems, 
since presumably only one user can use a connection at a time.


If so, with option 1, when passing dbstring to a function, should I close
the connection and dispose of the created engine, or will the engine just
exit automatically (this appears to be the case)? In any case, is closing
the connection a good idea? If option 2, passing around an engine,
similar questions.


Currently I'm doing a mix of 1 and 2, and closing connections but not the 
engine. I'd like to standardize on something.


My reference has mostly been 
http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/index.html, 
particularly the first two sections, 'Connections' and 'Connection 
Pooling' I'm currently using 0.5.8.


Thanks.
  Regards, Faheem.




[sqlalchemy] question about urls when creating engine

2010-05-13 Thread Faheem Mitha


Hi,

In

http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#creating-engines

it describes how permitted urls are of the form

dialect://user:password@host/dbname[?key=value..],

I'm using postgresql. I believe sqlalchemy uses psycopg2 by default.

I've been connecting using psycopg2 via unix socket and ident sameuser.
This does not require either host or password. psycopg2 allows me to leave
both host and password empty. I tested, and sqlalchemy barfs if even the
password is not supplied. If I want to use sqlalchemy, what are my
options?


  Regards, Faheem.

Traceback (most recent call last):
  File "<stdin>", line 8, in <module>
  File "dbutils.py", line 41, in dbuser_exists
    conn = db.connect()
  File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line
1221, in connect
    return self.Connection(self, **kwargs)
  File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line
531, in __init__
    self.__connection = connection or engine.raw_connection()
  File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line
1276, in raw_connection
    return self.pool.unique_connection()
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 119,
in unique_connection
    return _ConnectionFairy(self).checkout()
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 304,
in __init__
    rec = self._connection_record = pool.get()
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 161,
in get
    return self.do_get()
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 642,
in do_get
    con = self.create_connection()
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 122,
in create_connection
    return _ConnectionRecord(self)
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 198,
in __init__
    self.connection = self.__connect()
  File "/var/lib/python-support/python2.5/sqlalchemy/pool.py", line 261,
in __connect
    connection = self.__pool._creator()
  File "/var/lib/python-support/python2.5/sqlalchemy/engine/strategies.py",
line 80, in connect
    raise exc.DBAPIError.instance(None, None, e)
sqlalchemy.exc.OperationalError: (OperationalError) fe_sendauth: no
password supplied

 None None




[sqlalchemy] Re: question about urls when creating engine

2010-05-13 Thread Faheem Mitha
On Thu, 13 May 2010 09:25:21 -0400, Michael Bayer mike...@zzzcomputing.com 
wrote:

 On May 13, 2010, at 7:33 AM, Faheem Mitha wrote:

 
 Hi,
 
 In
 
 http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#creating-engines
 
 it describes how permitted urls are of the form
 
 dialect://user:passw...@host/dbname[?key=value..],
 
 I'm using postgresql. I believe sqlalchemy uses psycopg2 by default.
 
 I've been connecting using psycopg2 via unix socket and ident sameuser, This 
 does not require either host or password. psycopg2 allows me to leave both 
 host and password empty. I tested and sqlalchemy barfs if even the password 
 is not supplied. If I want to use sqlalchemy, what are my options?


 Simply don't put a colon:

 from sqlalchemy import *

 e = create_engine('postgresql://sc...@localhost/test', echo=True)
 e.connect()

Not sure which colon you mean. I wasn't using a colon after the
username.

Ok, so this works - just leaving out the password along with the colon
before it, and leaving out localhost (but leaving in the colon after
it). Phew.

usr = "faheem"
db = "template1"
dbstring = "postgres://%s@:5432/%s" % (usr, db)
username = "foo"
print dbuser_exists(dbstring, username)

In case anyone is curious, the function is

def dbuser_exists(dbstring, username):
    from sqlalchemy import create_engine
    db = create_engine(dbstring)
    conn = db.connect()
    from sqlalchemy.sql import text
    q = text("select usename from pg_user where usename = '%s';" % username)
    result = conn.execute(q).fetchall()
    conn.close()
    if len(result) > 0 and result[0][0] == username:
        return True
    else:
        return False

   Regards, Faheem.




[sqlalchemy] Re: obtaining pid of forked process

2010-02-12 Thread Faheem Mitha
On Fri, 12 Feb 2010 13:33:01 +0100, Alex Brasetvik a...@brasetvik.com wrote:

 On Feb 11, 2010, at 18:58 , Faheem Mitha wrote:

 sqlalchemy forks a process when it calls the db

 No, it does not.

 PostgreSQL forks a new backend process when a connection is
  established, however. It sounds like that's what you want. Do
  SELECT pg_backend_pid() to get the PID of the backend process
  serving your connection.

 That and other stat functions are documented here:
  http://www.postgresql.org/docs/current/static/monitoring-stats.html

I see. Thanks, that's very helpful. Does the pid/process stay the same
across successive calls to text()? I'm guessing that successive calls
to text() would take place within the same session, and therefore
correspond to the same backend process.

If not, how can I get it to return the pid before I start the actual
query? In any case, I'll experiment with this.
 
Regards, Faheem.




[sqlalchemy] Re: obtaining pid of forked process

2010-02-12 Thread Faheem Mitha
On Thu, 11 Feb 2010 13:06:03 -0500, Michael Bayer
mike...@zzzcomputing.com wrote:

 Faheem Mitha wrote:

 Hi,

 sqlalchemy forks a process when it calls the db (in my case PostgreSQL,
 but I don't think it matters) using, for example

 from sqlalchemy.sql import text
 s = text(...)

 um, what ? there's no forking in SQLAlchemy.

Ok. Apologies for my cluelessness.

  Regards, Faheem.




[sqlalchemy] Re: obtaining pid of forked process

2010-02-12 Thread Faheem Mitha
On Fri, 12 Feb 2010 11:01:23 -0500, Michael Bayer
mike...@zzzcomputing.com wrote:

 you would connect:

 conn = engine.connect()

 check the PID:

 pid = conn.execute("SELECT pg_backend_pid()").scalar()


 then continue as needed:

 conn.execute(text(...))

Thanks, Michael. That's very clear and helpful.

   Regards, Faheem.




[sqlalchemy] obtaining pid of forked process

2010-02-11 Thread Faheem Mitha


Hi,

sqlalchemy forks a process when it calls the db (in my case PostgreSQL, 
but I don't think it matters) using, for example


from sqlalchemy.sql import text
s = text(...)

My question - is it possible to obtain the pid of this process at the 
python level in some fashion? The reason for this is that I want to plot a 
memory graph of the postgresql process, so it is handy to have the pid for 
this. I'm using Linux (Debian lenny) with pg 8.4.2 and sqla 0.5.7.


Please cc me on any reply. Thanks.
  Regards, Faheem.




[sqlalchemy] autocommit for execute command (was Re: postgresql CREATE SCHEMA statement does not create schema)

2010-01-24 Thread Faheem Mitha
I'm belatedly following up to this earlier posting.

The problem there was that I wasn't setting autocommit=True in
text(). However, I was wondering what I can do if I want to directly
write

conn.execute(somestuff)
conn.close()

and have it autocommitted, rather than using text(). The execute()
function doesn't appear to have an 'autocommit' option. What can I do
to have autocommit happen in this case?

   Regards, Faheem.





[sqlalchemy] Re: autocommit for execute command (was Re: postgresql CREATE SCHEMA statement does not create schema)

2010-01-24 Thread Faheem Mitha
On Sun, 24 Jan 2010 09:27:26 -0500, Michael Bayer
mike...@zzzcomputing.com wrote:

 On Jan 24, 2010, at 7:22 AM, Faheem Mitha wrote:

 I'm belatedly following up to this earlier posting.
 
 The problem there was that I wasn't setting autocommit=True in
 text(). However, I was wondering what I can do if I want to directly
 write
 
 conn.execute(somestuff)
 conn.close()
 
 and have it autocommitted, rather than using text(). The execute()
 function doesn't appear to have an 'autocommit' option. What can I do
 to have a autocommit happen in this case?

 we don't have a per-connection autocommit option, yet.  but I have
 ideas on how to introduce that.  but for now if you don't send the
 option along in the statement, you'd have to do it explicitly,
 ie. trans = conn.begin(); conn.execute(); trans.commit().

Thanks for the quick reply and the clarification. I can
do multiple executes before the final commit, yes?
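For concreteness, a sketch of what I mean:

trans = conn.begin()
conn.execute("DROP SCHEMA IF EXISTS foo CASCADE;")
conn.execute("CREATE SCHEMA foo;")
trans.commit()   # both statements commit together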

  Regards, Faheem.

 




[sqlalchemy] Re: postgresql CREATE SCHEMA statement does not create schema

2009-10-10 Thread Faheem Mitha

On Fri, 9 Oct 2009 22:34:11 -0400, Michael Bayer
mike...@zzzcomputing.com wrote:


 On Oct 9, 2009, at 7:35 PM, Faheem Mitha wrote:

 Can you explain why removing the SET search_path TO public; string
 makes a commit happen? You also say that string you have will not
 trip off SQLA's autocommit feature. How does this autocommit
 feature work, and are there certain strings that will trigger an
 autocommit?

 for textual statements, autocommit is applied when this regexp matches:

 AUTOCOMMIT_REGEXP = re.compile(
     r'\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER)',
     re.I | re.UNICODE)

Thanks Michael,

That's very helpful. I didn't realise that sqlalchemy did this kind of
magic.
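For illustration, the quoted pattern run against the two statements from
this thread (a quick sanity check, not SQLA internals beyond the regexp
above):

import re
AUTOCOMMIT_REGEXP = re.compile(
    r'\s*(?:UPDATE|INSERT|CREATE|DELETE|DROP|ALTER)', re.I | re.UNICODE)
print bool(AUTOCOMMIT_REGEXP.match("CREATE SCHEMA foo;"))      # True
print bool(AUTOCOMMIT_REGEXP.match(
    "SET search_path TO public; CREATE SCHEMA foo;"))          # False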

   Regards, Faheem.





[sqlalchemy] Re: postgresql CREATE SCHEMA statement does not create schema

2009-10-09 Thread Faheem Mitha

Confirmed by Alex Grönholm on #postgresql on freenode with pg 8.3 and sqla 
0.5.6. If this is not a bug, I'd like to know what is going on. Typing the
text in gq directly into psql (all on one line) produces the schema foo as 
expected.

   Regards, Faheem.

btsnp=# \dn
        List of schemas
        Name        |  Owner
--------------------+----------
 information_schema | postgres
 pg_catalog         | postgres
 pg_toast           | postgres
 pg_toast_temp_1    | postgres
 public             | postgres
(5 rows)

btsnp=# SET search_path TO public; DROP SCHEMA IF EXISTS foo CASCADE; CREATE 
SCHEMA foo;
SET
NOTICE:  schema "foo" does not exist, skipping
DROP SCHEMA
CREATE SCHEMA
btsnp=# \dn
        List of schemas
        Name        |  Owner
--------------------+----------
 foo                | faheem
 information_schema | postgres
 pg_catalog         | postgres
 pg_toast           | postgres
 pg_toast_temp_1    | postgres
 public             | postgres
(6 rows)





[sqlalchemy] Re: postgresql CREATE SCHEMA statement does not create schema

2009-10-09 Thread Faheem Mitha

On Fri, 9 Oct 2009 13:28:58 -0400, Michael Bayer
mike...@zzzcomputing.com wrote:

 did you commit your transaction, or set autocommit=True in your text()
 statement?  That string you have will not trip off SQLA's autocommit
 feature.

Hi,

Thanks for the explanation. No, the code I used is exactly as written,
modulo the db string, which was of the form
postgres://dbuser:pas...@localhost:5432/dbname

I see that the documentation for

sqlalchemy.sql.expression.text

has an option

autocommit=True
indicates this SELECT statement modifies the database, and should
be subject to autocommit behavior if no transaction has been
started.

I missed this, since I assumed that the cursor object would have such
an option, which does not appear to be the case. I've confirmed that adding
this option to the text string fixes the problem.

Can you explain why removing the "SET search_path TO public;" string
makes a commit happen? You also say "that string you have will not
trip off SQLA's autocommit feature". How does this autocommit
feature work, and are there certain strings that will trigger an
autocommit?

 Regards, Faheem.

 Faheem Mitha wrote:

 Confirmed by Alex Grönholm on #postgresql on freenode with pg 8.3 and sqla
 0.5.6. If this is not a bug, I'd like to know what is going on. Typing the
 text in gq directly into psql (all on one line) produces the schema foo as
 expected.

Regards,
 Faheem.

 btsnp=# \dn
         List of schemas
         Name        |  Owner
 --------------------+----------
  information_schema | postgres
  pg_catalog         | postgres
  pg_toast           | postgres
  pg_toast_temp_1    | postgres
  public             | postgres
 (5 rows)

 btsnp=# SET search_path TO public; DROP SCHEMA IF EXISTS foo CASCADE;
 CREATE SCHEMA foo;
 SET
 NOTICE:  schema "foo" does not exist, skipping
 DROP SCHEMA
 CREATE SCHEMA
 btsnp=# \dn
         List of schemas
         Name        |  Owner
 --------------------+----------
  foo                | faheem
  information_schema | postgres
  pg_catalog         | postgres
  pg_toast           | postgres
  pg_toast_temp_1    | postgres
  public             | postgres
 (6 rows)

 On Wed, 7 Oct 2009, Faheem Mitha wrote:


 Hi,

 When running this function with postgresql 8.4 and sqla 0.5.5,

  def test(dbstring):
      from sqlalchemy import create_engine
      db = create_engine(dbstring)
      conn = db.connect()
      from sqlalchemy.sql import text
      gq = text("""
      SET search_path TO public;
      DROP SCHEMA IF EXISTS foo CASCADE;
      CREATE SCHEMA foo;
      """)
      conn.execute(gq)
      conn.close()

 the schema foo is not created. However, removing the

 SET search_path TO public;

 line makes it work. This is not the case when issuing these commands
 directly
 via psql. Any idea what might be going on here?

 Regards, Faheem.




[sqlalchemy] postgresql CREATE SCHEMA statement does not create schema

2009-10-07 Thread Faheem Mitha


Hi,

When running this function with postgresql 8.4 and sqla 0.5.5,

def test(dbstring):
    from sqlalchemy import create_engine
    db = create_engine(dbstring)
    conn = db.connect()
    from sqlalchemy.sql import text
    gq = text("""
    SET search_path TO public;
    DROP SCHEMA IF EXISTS foo CASCADE;
    CREATE SCHEMA foo;
    """)
    conn.execute(gq)
    conn.close()

the schema foo is not created. However, removing the

SET search_path TO public;

line makes it work. This is not the case when issuing these commands 
directly via psql. Any idea what might be going on here?

  Regards, Faheem.




[sqlalchemy] Re: table creation oddity

2009-08-21 Thread Faheem Mitha



On Fri, 21 Aug 2009, King Simon-NFHD78 wrote:

 I've never used postgres, but I believe auto-incrementing counters are 
 implemented using database sequences. I think these are incremented 
 outside of a transaction - this ensures that two separate database 
 connections using the sequence at the same time will get distinct 
 values. So although you aren't committing your transaction, the sequence 
 still advances.

 I guess the sequence must be associated with the table, so when you drop 
 the table it destroys the sequence as well (I don't know if this is SA 
 behaviour or PG behaviour).

 session.dirty only contains objects that have been loaded from the DB 
 and subsequently modified. You don't ever actually modify your object, 
 so it shouldn't appear in session.dirty. (It should appear in 
 session.new though)

 Hope that helps,

 Simon

Hi Simon,

Thanks for the fast and helpful response. This looks like an artifact of 
how I am creating the table. I wonder if this would still show up if I 
explicitly specified the id. I could check this. Also, presumably if I had 
other cols in the table, they wouldn't show up in sqla's printout.

There is a discussion of this in

http://neilconway.org/docs/sequences/

Quote:

A sequence is a special kind of database object designed for generating 
unique numeric identifiers. It is typically used to generate artificial 
primary keys.

I'm still puzzled why sqlalchemy lists the entries in the table, when 
querying directly via psql doesn't show it.

  Regards, Faheem.




[sqlalchemy] Re: table creation oddity

2009-08-21 Thread Faheem Mitha


Hi Simon,

On Fri, 21 Aug 2009, King Simon-NFHD78 wrote:

 Faheem Mitha wrote:

 Thanks for the fast and helpful response. This looks like an artifact 
 of how I am creating the table. I wonder if this would still show up if 
 I explicitly specified the id. I could check this. Also, presumably if 
 I had other cols in the table, they wouldn't show up in sqla's 
 printout.

 The sequence isn't directly related to the table (as far as I'm aware). 
 Instead, when you insert a row into the table, you do something like 
 "SELECT NEXTVAL('sequencename')" to get the next ID value. SQLAlchemy 
 does this automatically for you.

Ah, so that is something I would have to do manually if working with pg 
more directly? Nice of sqla to do it for me.

 I'm still puzzled why sqlalchemy lists the entries in the table, when
 querying directly via psql doesn't show it.


 Your SQLAlchemy operations are happening within a transaction that never
 gets committed. If you turned on SQL echoing (use echo=True or
 echo='debug' in your call to create_engine), you would see that your
 object is actually being inserted into the 'mytest' table. When you
 SELECT from the table, you are still in the same transaction, so you see
 the rows that you have inserted.

 However, when the script exits, the transaction gets rolled back, so you
 never see the new rows in psql. The only evidence that anything ever
 happened is that the sequence has moved on (see the note at the bottom
 of http://www.postgresql.org/docs/8.1/static/functions-sequence.html)
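
A short sketch of this behaviour (the connection string and the serial
sequence name mytest_id_seq are assumptions based on the oddity.py script
in this thread):

from sqlalchemy import create_engine

db = create_engine("postgres://username:password@localhost:5432/oddity")
conn = db.connect()
trans = conn.begin()
conn.execute("INSERT INTO mytest DEFAULT VALUES")  # draws an id from the sequence
trans.rollback()                                   # the row disappears...
# ...but the sequence has still advanced:
print conn.execute("SELECT last_value FROM mytest_id_seq").fetchall()
conn.close()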

Thanks, that's a very clear and educational explanation. So mytest 
contains this sequence, but is not written to disk.

Regards, Faheem.




[sqlalchemy] table creation oddity

2009-08-20 Thread Faheem Mitha


Hi,

The following script is then followed by its output, and finally by the 
table output.

I don't get what is going on here. Yes, I should commit the session, and 
the table is empty as expected, but why does the id keep incrementing on 
successive runs, and where is this table living, if not in the db? I'd 
expect to see the id stay at 1. Also, I'd expect to see something in 
session.dirty. Deleting the table resets the counter back to 1, so it 
looks like it is using the table in some way, but as already stated, the 
table shows as empty via a select * command.

If anyone can clarify what is going on here and satisfy my curiosity, I'd 
appreciate it. Please CC me on any reply. Thanks.

   Regards, Faheem.

***
oddity.py
***

from sqlalchemy import *
from sqlalchemy.orm import mapper, relation, sessionmaker

def add_obj(session, obj):
    """Check if object primary key exists in db. If so, return, else
    add.
    """
    from sqlalchemy import and_
    from sqlalchemy.orm import object_mapper
    mapper = object_mapper(obj)
    pid = mapper.primary_key_from_instance(obj)
    criterion = and_(*(col == val for col, val in
                       zip(mapper.primary_key,
                           mapper.primary_key_from_instance(obj))))
    if session.query(obj.__class__).filter(criterion).count() > 0:
        print "%s object with id %s is already in db." % (type(obj).__name__, pid)
        return
    else:
        session.add(obj)

metadata = MetaData()

mytest_table = Table(
 'mytest', metadata,
 Column('id', Integer, primary_key=True),
 )

class MyTest(object):
    def __init__(self):
        pass

mapper(MyTest, mytest_table)

dbstring = "postgres://username:pas...@localhost:5432/oddity"
db = create_engine(dbstring)
metadata.bind = db
metadata.create_all()
conn = db.connect()

Session = sessionmaker()
session = Session()
t1 = MyTest()
add_obj(session, t1)
print session.query(MyTest).count()

stmt = mytest_table.select()
for row in stmt.execute():
    print row

stmt = select([mytest_table.c.id])
print "anno statement is %s\n" % stmt
for row in stmt.execute():
    print row

print "session.dirty is %s" % session.dirty

#session.commit()
#session.flush()
#conn.close()

*
script output
*
$ python oddity.py
1
(1,)
anno statement is SELECT mytest.id
FROM mytest

(1,)
session.dirty is IdentitySet([])
$ python oddity.py
1
(2,)
anno statement is SELECT mytest.id
FROM mytest

(2,)
session.dirty is IdentitySet([])


table output

oddity=# select * from mytest;
 id
----
(0 rows)





[sqlalchemy] the return type of conn.execute(text())

2009-01-27 Thread Faheem Mitha


Hi,

Today I attempted to serialize the return value of the form

result = conn.execute(text())

Till now I thought that the return type was a list of tuples, while in 
fact it is a list of objects of type <class 
'sqlalchemy.engine.base.RowProxy'>. Hence cPickle refused to serialize 
them till I did some conversion.

Just wondering what the reason for this is.

Regards, Faheem.





[sqlalchemy] Re: the return type of conn.execute(text())

2009-01-27 Thread Faheem Mitha

On Wed, 28 Jan 2009 01:28:31 +0200, a...@svilendobrev.com
a...@svilendobrev.com wrote:

 i have recently stumbled on similar - the rowproxy's __hash__ was
 missing. so i have to tuple() them before usage.  Then there was
 Mike's question, what should the RowProxy emulate?  the tuple of the
 row, or something else?

Er, what question was that? Did I miss something?

I'd like to add to my original question a request to make these
objects pickleable as tuples. My understanding is that it just
involves adding some method to the class.
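
In the meantime, a workaround sketch (connection string and query are
illustrative only, reusing the cell table from the btsnp schema discussed
elsewhere in this archive):

from sqlalchemy import create_engine
from sqlalchemy.sql import text
import cPickle as pickle

db = create_engine("postgres://user:password@localhost:5432/btsnp")
conn = db.connect()
rows = conn.execute(text("SELECT snpval_id FROM cell")).fetchall()
# RowProxy objects don't pickle as-is; convert each row to a plain tuple:
data = pickle.dumps([tuple(row) for row in rows])
conn.close()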

Regards, Faheem.

 Today I attempted to serialize the return value of the form

 result = conn.execute(text())

 Till now I thought that the return type was a list of tuples, while
 in fact it is a list of objects of type <class
 'sqlalchemy.engine.base.RowProxy'>. Hence cPickle refused to
 serialize them till I did some conversion.

 Just wondering what the reason for this is.

 Regards, Faheem.




[sqlalchemy] Re: passing tuple argument into sqlalchemy.sql.text string

2009-01-21 Thread Faheem Mitha

On Wed, 21 Jan 2009 10:55:14 -, King Simon-NFHD78
simon.k...@motorola.com wrote:

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:sqlalch...@googlegroups.com] On Behalf Of Faheem Mitha
 Sent: 20 January 2009 22:05
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] passing tuple argument into 
 sqlalchemy.sql.text string

 Hi,
 
 I've got a query as follows:
 
 from sqlalchemy.sql import text
 
  gq = text("""
  SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
  snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
  cell.snp_id WHERE cell.patient_chipid IN ('DUKE1_plateA_A10.CEL', 
  'DUKE1_plateA_A11.CEL')
  """)
 I want to pass in the tuple as an argument, and was wondering 
 how to do 
 it.
 
 So, I'm looking for something conceptually like
 
  gq = text("""
  SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
  snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
  cell.snp_id WHERE cell.patient_chipid IN :plist
  """)
  
  gq = conn.execute(gq, plist=('DUKE1_plateA_A10.CEL', 
  'DUKE1_plateA_A11.CEL'))
 
  Note, I want to pass in a tuple of arbitrary length, so 
  changing this to pass two string arguments would not do. 
  Perhaps I'm supposed to pass in some bindparams too, but 
  I don't know what type I should be using.
  Regards, Faheem.

 I'm not sure you can do that in the general case. I think bind
 parameters (in the DBAPI sense) are only really intended for
 substituting individual query parameters, not lists.

 If you are happy to regenerate your query each time you want to execute
 it, you could create a function which generates a string of the form
 (:p0, :p1, :p2, :p3) for the given tuple length, and appends that to
 the query.

 If you use the SQLAlchemy expression language to build that query, it'll
 do that for you automatically.

 Hope that helps,

 Simon
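
To make that concrete, a sketch of such a helper (function and parameter
names are made up):

from sqlalchemy.sql import text

def in_params(name, values):
    """Build '(:p0, :p1, ...)' placeholders plus the matching bind dict."""
    keys = ["%s%d" % (name, i) for i in range(len(values))]
    clause = "(" + ", ".join(":" + k for k in keys) + ")"
    return clause, dict(zip(keys, values))

plist = ('DUKE1_plateA_A10.CEL', 'DUKE1_plateA_A11.CEL')
clause, params = in_params("p", plist)
gq = text("SELECT decode_genotype(cell.snpval_id, snp.allelea_id, "
          "snp.alleleb_id) FROM cell JOIN snp ON snp.fid = cell.snp_id "
          "WHERE cell.patient_chipid IN " + clause)
# result = conn.execute(gq, **params).fetchall()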

Hi Simon,

Thanks for your reply.

I've already been using sql expressions to create this query, but it
was not obvious how to do this using COPY TO, so I switched back to
not using it.

gq = select([func.decode_genotype(cell_table.c.snpval_id,
snp_table.c.allelea_id, snp_table.c.alleleb_id)],
from_obj=[cell_table.join(snp_table)], order_by =
'sort_key(snp.chromosome), snp.location')
patient_sublist = ['DUKE1_plateA_A10.CEL', 'DUKE1_plateA_A11.CEL']
gq = gq.where(cell_table.c.patient_chipid.in_(patient_sublist))
print gq
#gq = conn.execute(gq).fetchall()

The result of this is 

SELECT decode_genotype(cell.snpval_id, snp.allelea_id, snp.alleleb_id)
AS decode_genotype_1 FROM cell JOIN snp ON snp.fid = cell.snp_id WHERE
cell.patient_chipid IN (:patient_chipid_1, :patient_chipid_2) ORDER BY
sort_key(snp.chromosome), snp.location

The question is, can I make this into a COPY using sql expressions,
i.e. can I do something like (the current version of my query)

copy (select array_to_string(array_agg(e.decode_genotype_1), E'\t')
from (SELECT decode_genotype(cell.snpval_id, snp.allelea_id,
snp.alleleb_id) AS decode_genotype_1 FROM cell JOIN snp ON snp.fid =
cell.snp_id WHERE cell.patient_chipid IN :plist ORDER BY
sort_key(snp.chromosome), snp.location) as e) to
'/tmp/btsnpSNP_6-chr.ped' with csv;

The differences between the version above and the version below, are
because I made additions to the query since I switched away from using
sql expressions.

Please CC me on any reply.
   Regards, Faheem Mitha.





[sqlalchemy] Re: passing tuple argument into sqlalchemy.sql.text string

2009-01-21 Thread Faheem Mitha

On Wed, 21 Jan 2009 08:26:13 -0800, jason kirtland
j...@discorporate.us wrote:

 Faheem Mitha wrote:
 
 Hi,
 
 I've got a query as follows:
 
 from sqlalchemy.sql import text
 
  gq = text("""
  SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
  snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
  cell.snp_id WHERE cell.patient_chipid IN ('DUKE1_plateA_A10.CEL', 
  'DUKE1_plateA_A11.CEL')
  """)
 I want to pass in the tuple as an argument, and was wondering how to do 
 it.
 
 So, I'm looking for something conceptually like
 
  gq = text("""
  SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
  snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
  cell.snp_id WHERE cell.patient_chipid IN :plist
  """)
  
  gq = conn.execute(gq, plist=('DUKE1_plateA_A10.CEL',
    'DUKE1_plateA_A11.CEL'))
 
  Note, I want to pass in a tuple of arbitrary length, so changing
  this to pass two string arguments would not do. Perhaps I'm
  supposed to pass in some bindparams too, but I don't know what type
  I should be using.

 IN takes a list of scalars, each of which requires its own :bind
 parameter.  On Postgresql you might find it more convenient to use
 ANY, which takes a single array argument:  WHERE
 cell.patient_chipid = ANY (:plist)
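
Spelled out, that suggestion looks something like the sketch below
(psycopg2 adapts a Python list to a Postgres array; connection setup
omitted):

from sqlalchemy.sql import text

gq = text("""
SELECT decode_genotype(cell.snpval_id, snp.allelea_id, snp.alleleb_id)
FROM cell JOIN snp ON snp.fid = cell.snp_id
WHERE cell.patient_chipid = ANY (:plist)
""")
result = conn.execute(gq, plist=['DUKE1_plateA_A10.CEL',
                                 'DUKE1_plateA_A11.CEL']).fetchall()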

Thanks for the suggestion. Can such an array argument be passed in
from Python?

   Regards, Faheem.





[sqlalchemy] passing tuple argument into sqlalchemy.sql.text string

2009-01-20 Thread Faheem Mitha


Hi,

I've got a query as follows:

from sqlalchemy.sql import text

gq = text("""
SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
cell.snp_id WHERE cell.patient_chipid IN ('DUKE1_plateA_A10.CEL', 
'DUKE1_plateA_A11.CEL')
""")
I want to pass in the tuple as an argument, and was wondering how to do 
it.

So, I'm looking for something conceptually like

gq = text("""
SELECT decode_genotype(cell.snpval_id, snp.allelea_id, 
snp.alleleb_id) FROM cell JOIN snp ON snp.fid = 
cell.snp_id WHERE cell.patient_chipid IN :plist
""")

gq = conn.execute(gq, plist=('DUKE1_plateA_A10.CEL', 
'DUKE1_plateA_A11.CEL'))

Note, I want to pass in a tuple of arbitrary length, so changing this to 
pass two string arguments would not do. Perhaps I'm supposed to pass in 
some bindparams too, but I don't know what type I should be using.

   Regards, Faheem.




[sqlalchemy] returning values as a list of values rather than as list of tuples

2009-01-15 Thread Faheem Mitha


Hi,

The following code returns a list of tuples to python from the db, 
corresponding to the values of the 'snpval_id' column in the table 'cell'. 
I was wondering if there was an easy way to have it return a list of 
values (in this case, integers) instead.

*
db = create_engine("postgres://btsnp:pqxxro...@localhost:5432/btsnp_full_genome")
conn = db.connect()
result = conn.execute("select snpval_id from cell where patient_chipid "
                      "IN ('Duke1_plateC_F11.CEL')").fetchall()
*

value of result is [(2,), (1,), (-1,), (1,), (1,), (-1,)...]

Please CC me on any reply. Thanks.

Regards, Faheem Mitha.




[sqlalchemy] Re: returning values as a list of values rather than as list of tuples

2009-01-15 Thread Faheem Mitha



On Thu, 15 Jan 2009, Matthew Zwier wrote:

 Hi Faheem,

 On Thu, Jan 15, 2009 at 11:05 AM, Faheem Mitha fah...@email.unc.edu wrote:


 Hi,

 The following code returns a list of tuples to python from the db,
 corresponding to the values of the 'snpval_id' column in the table 'cell'.
 I was wondering if there was an easy way to have it return a list of
 values (in this case, integers) instead.
  result = conn.execute("select snpval_id from cell where patient_chipid "
                        "IN ('Duke1_plateC_F11.CEL')").fetchall()
 *

 Easiest thing is probably just to use a list comprehension:
 result_ints = [row[0] for row in result]
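
Wrapped as a reusable helper, the suggestion might look like this (a
sketch; the helper name is made up):

def scalar_list(conn, stmt):
    """Run a single-column query and flatten the rows into a plain list."""
    return [row[0] for row in conn.execute(stmt)]

snpvals = scalar_list(conn, "select snpval_id from cell "
                            "where patient_chipid IN ('Duke1_plateC_F11.CEL')")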

Hi Matthew,

Yes, I'm doing that already. Just wondered if there was a way to return it 
in the right form directly.
   Regards, Faheem.




[sqlalchemy] error using sqlalchemy.sql text function

2008-12-30 Thread Faheem Mitha


Hi,

The following function (bugfn) gives errors when I try to execute it with 
postgresql using sqlalchemy.sql's text function. I thought the problem was 
with using double quotes ("") and/or single quotes (') inside the string, 
but no, it seems to be perfectly happy with sex_sub for example.

Executing bugfn directly in psql works fine, so it is not a postgres 
problem.

Traceback follows. If I need to submit an issue, let me know. Please cc me 
on any reply.

 Regards, Faheem Mitha.

***
Traceback (most recent call last):
   File "dbsession.py", line 260, in <module>
     make_tables(dbstring)
   File "dbsession.py", line 146, in make_tables
     conn.execute(create_bug_function)
   File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line 806, in execute
     return Connection.executors[c](self, object, multiparams, params)
   File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line 856, in execute_clauseelement
     return self.__execute_context(context)
   File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line 878, in __execute_context
     self._cursor_execute(context.cursor, context.statement, context.parameters[0], context=context)
   File "/var/lib/python-support/python2.5/sqlalchemy/engine/base.py", line 925, in _cursor_execute
     self.dialect.do_execute(cursor, statement, parameters, context=context)
   File "/var/lib/python-support/python2.5/sqlalchemy/engine/default.py", line 122, in do_execute
     cursor.execute(statement, parameters)
TypeError: 'dict' object is unindexable


from sqlalchemy.sql import text

create_plpython = text("""
CREATE LANGUAGE plpythonu;
""")

create_bug_function = text("""
CREATE OR REPLACE FUNCTION bugfn (sex text)
RETURNS integer
AS $$
 if sex == 'F':
     return 2
 else:
     raise RuntimeError, "%s is wrong" % (sex)
$$ LANGUAGE plpythonu;
""")

create_sex_sub_function = text("""
CREATE OR REPLACE FUNCTION sex_sub (sex text)
RETURNS integer
AS $$
 if sex == 'M':
     return 1
 if sex == 'F':
     return 2
 else:
     raise RuntimeError, "sex must be either 'M' or 'F' but is '" + sex + "'."
$$ LANGUAGE plpythonu;
""")

  [...]
  db = create_engine("postgres://btsnp:foo...@localhost:5432/btsnp_test")
  conn = db.connect()
  conn.execute(create_plpython)
  conn.execute(create_sex_sub_function)
  conn.execute(create_bug_function)
  conn.close()




[sqlalchemy] Re: error using sqlalchemy.sql text function

2008-12-30 Thread Faheem Mitha

On Tue, 30 Dec 2008 23:51:23 -0500, Michael Bayer
mike...@zzzcomputing.com wrote:

 try escaping your percent signs: %%.  otherwise they appear to
 psycopg2 like bind parameters.  The exception is raised by psycopg2.
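
Applied to the failing function above, the escaped version would look
like this (sketch):

create_bug_function = text("""
CREATE OR REPLACE FUNCTION bugfn (sex text)
RETURNS integer
AS $$
 if sex == 'F':
     return 2
 else:
     # Doubled %% reaches the database as a literal %, so psycopg2 no
     # longer mistakes it for a bind parameter.
     raise RuntimeError, "%%s is wrong" %% (sex)
$$ LANGUAGE plpythonu;
""")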

Oh, I see. Thanks for the information.

  Regards, Faheem.




[sqlalchemy] creating and dropping foreign key constraints using ORM

2008-12-19 Thread Faheem Mitha


Hi,

I'm writing code (see below) to drop and add back foreign key constraints 
to a db table. Incidentally, this code is not working (the function just 
hangs) so I may have made some kind of syntax error. Anyway, I was 
wondering if there was some way to accomplish this in a more high-level 
way using the sqla ORM. The tables in question were created using the ORM, 
so the ORM knows about them, and, at least in theory should be able to 
manipulate them.

Schema follows below.

However, currently, I'm not sure how do this. Suggestions appreciated. 
Please CC me on any reply.
   Regards, Faheem.

*

conn = db.connect()

conn.execute("ALTER TABLE cell DROP CONSTRAINT cell_patient_chipid_fkey; "
             "ALTER TABLE cell DROP CONSTRAINT cell_snp_id_fkey; "
             "ALTER TABLE cell DROP CONSTRAINT cell_snpval_id_fkey;")
#conn.execute("COPY cell FROM '" + csvfilename + "' USING DELIMITERS ','")

conn.execute("""ALTER TABLE ONLY cell ADD CONSTRAINT cell_patient_chipid_fkey
FOREIGN KEY (patient_chipid)
REFERENCES patient(chipid) ON UPDATE CASCADE ON DELETE CASCADE;""")

conn.execute("""ALTER TABLE ONLY cell ADD CONSTRAINT cell_snp_id_fkey
FOREIGN KEY (snp_id)
REFERENCES snp(fid) ON UPDATE CASCADE ON DELETE CASCADE;""")

conn.execute("""ALTER TABLE ONLY cell ADD CONSTRAINT cell_snpval_id_fkey
FOREIGN KEY (snpval_id) REFERENCES snpval(val) ON UPDATE CASCADE ON DELETE
CASCADE;""")

conn.close()

**

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm.collections import attribute_mapped_collection
from datetime import datetime

metadata = MetaData()

patient_table = Table(
 'patient', metadata,
 Column('chipid', String(30), primary_key=True, index=True),
 Column('studyid', String(20), nullable=False, index=True),
 Column('sex_id',  None, ForeignKey('sex.val', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False),
 Column('race_id',  None, ForeignKey('race.val', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False),
 Column('phenotype', Boolean),
 )

# Allow M (male), F (female), U (unknown).
sex_table = Table(
 'sex', metadata,
 Column('val', String(1), primary_key=True),
 )

race_table = Table(
 'race', metadata,
 Column('val', String(25), primary_key=True),
 )

cell_table = Table(
 'cell', metadata,
 Column('patient_chipid',  None, ForeignKey('patient.chipid', 
onupdate='CASCADE', ondelete='CASCADE'), index=True, nullable=False, 
primary_key=True),
 Column('snp_id',  None, ForeignKey('snp.fid', onupdate='CASCADE', 
ondelete='CASCADE'),  index=True, nullable=False, primary_key=True),
 Column('snpval_id',  None, ForeignKey('snpval.val', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False)
 )

snp_table = Table(
 'snp', metadata,
 Column('fid', String(20), nullable=False, primary_key=True),
 Column('rsid', String(20), nullable=False),
 Column('chromosome', String(2), nullable=False),
 Column('location', Integer, nullable=False),
 Column('alleleA_id',  None, ForeignKey('allele.val', onupdate='CASCADE', 
ondelete='CASCADE'),  index=True, nullable=False),
 Column('alleleB_id',  None, ForeignKey('allele.val', onupdate='CASCADE', 
ondelete='CASCADE'),  index=True, nullable=False),
 )

allele_table = Table(
 'allele', metadata,
 Column('val', String(1), primary_key=True),
 )

snpval_table = Table(
 'snpval', metadata,
 Column('val', Integer, primary_key=True),
 )

def create_cell(snp, snpval):
 return Cell(snp=snp, snpval=snpval)

class Patient(object):
    def __init__(self, chipid, studyid, sex, race, phenotype):
        self.chipid = chipid
        self.studyid = studyid
        self.sex = sex
        self.race = race
        self.phenotype = phenotype
    def __repr__(self):
        return 'Patient %s'%self.chipid
    snps = association_proxy('by_fid', 'snpval', creator=create_cell)

class Sex(object):
    def __init__(self, val):
        self.val = val
    def __repr__(self):
        return 'Sex %s'%self.val

class Race(object):
    def __init__(self, val):
        self.val = val
    def __repr__(self):
        return 'Race %s'%self.val

class Cell(object):
    def __init__(self, patient=None, snp=None, snpval=None):
        self.patient = patient
        self.snp = snp
        self.snpval = snpval
    def __repr__(self):
        return 'Cell %s'%self.snpval

class Snp(object):
    def __init__(self, fid, rsid, chromosome, location, alleleA, alleleB):
        self.fid = fid
        self.rsid = rsid
        self.chromosome = chromosome
        self.location = location
        self.alleleA = alleleA
        self.alleleB = alleleB
    def __repr__(self):
        return 'SNP %s'%self.fid

[sqlalchemy] Re: creating and dropping foreign key constraints using ORM

2008-12-19 Thread Faheem Mitha

On Fri, 19 Dec 2008 15:10:07 -0500, Michael Bayer
mike...@zzzcomputing.com wrote:

 On Dec 19, 2008, at 2:43 PM, Faheem Mitha wrote:

 I'm writing code (see below) to drop and add back foreign key
 constraints to a db table. Incidentally, this code is not working
 (the function just hangs) so I may have made some kind of syntax
 error. Anyway, I was wondering if there was some way to accomplish
 this in a more high- level way using the sqla ORM. The tables in
 question were created using the ORM, so the ORM knows about them,
 and, at least in theory should be able to manipulate them.

 The ORM operates at a higher level than that of the underlying
 details of the database and has no awareness of schema generation.
 You're probably referring to the SQL and schema expression language
 which is a separate component of the library.

 For comprehensive support of ALTER constructs, see the Migrate project  
 at http://code.google.com/p/sqlalchemy-migrate/ .

Hi Michael,

Thanks very much for the suggestion. The question is whether the migrate
project supports on-the-fly schema modification as I describe. I'll
take a look.

Regards, Faheem.





[sqlalchemy] Re: objects created using sqlalchemy

2008-12-06 Thread Faheem Mitha



On Fri, 5 Dec 2008, Faheem Mitha wrote:


 Hi,

 I'm using sqla with the following schema (see below). I'm creating a cell 
 object implicitly, using the function make_cell and the association proxy 
 pattern.

 def make_cell(patient_obj, snp_obj, snpval):
     patient_obj.snps[snp_obj] = snpval
     return patient_obj

 My question is, is there some way to get my hands on the Cell object that was 
 just created? If possible, I'd like make_cell to return the cell object. My 
 immediate reason is that this would make it easy to save the object using 
 session.save() (there might be some indirect way to do this, of course), but 
 it would be nice anyway.

A followup to my original post. I must be doing something wrong, because 
the Cell object is not being saved, and the proxy in the other direction 
is not being updated either. I'm reluctant to ask for debugging help, but 
I'm having difficulty tracking down the problem.

The files included in order below are

Schema file: dbschema.py 
Utility functions: dbutils.py
Session file: dbsession.py

The last file runs the actual code to populate the dbs, and is one big 
function, make_tables.

The most relevant lines here are:

print "p1.snps is %s" % p1.snps
print "s.patients is %s" % s.patients
print "cell table is %s" % list(cell_table.select().execute())
[...]
get_obj(session, Cell)

The output I'm getting is

p1.snps is {SNP rs10458597: Snpval 0}
s.patients is {}
cell table is []
[...]
*** list of Cell objects in class. ***
*** end list of Cell objects. ***

I wouldn't expect the last three, namely s.patients, cell table and
list of Cell objects to all be empty. Can someone tell me what I'm
doing wrong? For an experienced person, it may be obvious.

Note: My use of cascade in the Mappers may be redundant. I just put it
in there for good measure, and I'm not sure what it does.

 Regards, Faheem.


dbschema.py

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm.collections import attribute_mapped_collection
from datetime import datetime

metadata = MetaData('sqlite:///btsnp.sqlite')

patient_table = Table(
 'patient', metadata,
 Column('id', String(20), primary_key=True, index=True),
 Column('celfilename', String(30), nullable=False, index=True, unique=True),
 Column('sex', String(1)),
 )

cell_table = Table(
 'cell', metadata,
 Column('patient_id',  None, ForeignKey('patient.id', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False, primary_key=True),
 Column('snp_id',  None, ForeignKey('snp.rsid', onupdate='CASCADE', 
ondelete='CASCADE'),  index=True, nullable=False, primary_key=True),
 Column('snpval_id',  None, ForeignKey('snpval.val', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False)
 )

snp_table = Table(
 'snp', metadata,
 Column('rsid', String(20), nullable=False, primary_key=True),
 Column('chromosome', Integer, nullable=False),
 Column('location', Integer, nullable=False),
 Column('probe_set_id', String(20), nullable=False, unique=True),
 Column('allele', String(3), nullable=False),
 )

snpval_table = Table(
 'snpval', metadata,
 Column('val', Integer, primary_key=True),
 )

metadata.create_all()

def create_cell(snp, snpval):
 return Cell(snp=snp, snpval=snpval)

class Patient(object):
    def __init__(self, id, celfilename, sex):
        self.id = id
        self.celfilename = celfilename
        self.sex = sex
    def __repr__(self):
        return 'Patient %s'%self.id
    snps = association_proxy('by_rsid', 'snpval', creator=create_cell)

class Cell(object):
    def __init__(self, patient=None, snp=None, snpval=None):
        self.patient = patient
        self.snp = snp
        self.snpval = snpval
    def __repr__(self):
        return 'Cell %s'%self.snpval

class Snp(object):
    def __init__(self, rsid, chromosome, location, probe_set_id, allele):
        self.rsid = rsid
        self.chromosome = chromosome
        self.location = location
        self.probe_set_id = probe_set_id
        self.allele = allele
    def __repr__(self):
        return 'SNP %s'%self.rsid
    patients = association_proxy('by_patient', 'snpval', creator=create_cell)

class Snpval(object):
    def __init__(self, val):
        self.val = val
    def __repr__(self):
        return 'Snpval %s'%self.val

# 'cells' corresponds to a 1 to many relation.
mapper(Patient, patient_table,
       properties={'cells': relation(Cell, backref='patient'),
                   'by_rsid': relation(Cell, cascade="all, delete-orphan",
                       collection_class=attribute_mapped_collection('snp'))})
# 'patient_snpval' corresponds to a many to 1 relation.

[sqlalchemy] Re: returning primary key of object without know what it is called.

2008-12-05 Thread Faheem Mitha



On Fri, 5 Dec 2008, King Simon-NFHD78 wrote:

 You can get the mapper for a given instance using the
 sqlalchemy.orm.object_mapper function, and that mapper has a
 'primary_key_from_instance' method. A generic primary_key function might
 look like this (untested):

 import sqlalchemy.orm as orm

 def get_primary_key(instance):
   mapper = orm.object_mapper(instance)
   return mapper.primary_key_from_instance(instance)

 Hope that helps,

Hi Simon,

Thanks, that is very helpful. That's exactly what I need.

  Regards, Faheem.




[sqlalchemy] objects created using sqlalchemy

2008-12-05 Thread Faheem Mitha


Hi,

I'm using sqla with the following schema (see below). I'm creating a cell 
object implicitly, using the function make_cell and the association proxy 
pattern.

def make_cell(patient_obj, snp_obj, snpval):
    patient_obj.snps[snp_obj] = snpval
    return patient_obj

My question is, is there some way to get my hands on the Cell object that 
was just created? If possible, I'd like make_cell to return the cell 
object. My immediate reason is that this would make it easy to save the 
object using session.save() (there might be some indirect way to do this, 
of course), but it would be nice anyway.

Thanks in advance. Please CC me on any reply.
  Regards, Faheem.

**
dbschema.py
**
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm.collections import attribute_mapped_collection
from datetime import datetime

metadata = MetaData('sqlite:///btsnp.sqlite')

# *patients*
# patient_id (PK)
# (Can use actual patient id as unique/alternate identifier
# Create index).
# sex - list of choices allowed
# age - (0, 140)
# time of death

patient_table = Table(
 'patient', metadata,
 Column('id', String(20), primary_key=True, index=True),
 Column('celfilename', String(30), nullable=False, index=True, unique=True),
 Column('sex', String(1)),
 )

cell_table = Table(
 'cell', metadata,
 Column('patient_id',  None, ForeignKey('patient.id', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False, primary_key=True),
 Column('snp_id', None, ForeignKey('snp.rsid', onupdate='CASCADE', 
ondelete='CASCADE'),  index=True, nullable=False, primary_key=True),
 Column('snpval_id', None, ForeignKey('snpval.val', onupdate='CASCADE', 
ondelete='CASCADE'), index=True, nullable=False)
 )

# *snps*
# snp_id (PK)
# name (name of snp)

snp_table = Table(
 'snp', metadata,
 Column('rsid', String(20), nullable=False, primary_key=True),
 Column('chromosome', Integer, nullable=False),
 Column('location', Integer, nullable=False),
 Column('probe_set_id', String(20), nullable=False, unique=True),
 Column('allele', String(3), nullable=False),
 )

# *doublets*
# doublet_id (PK)
# seq (two letters AA, AG)

snpval_table = Table(
 'snpval', metadata,
 Column('val', Integer, primary_key=True),
 )

metadata.create_all()

def create_cell(snp, snpval):
 return Cell(snp=snp, snpval=snpval)

class Patient(object):
    def __init__(self, id, celfilename, sex):
        self.id = id
        self.celfilename = celfilename
        self.sex = sex
    def __repr__(self):
        return 'Patient %s'%self.id
    snps = association_proxy('by_rsid', 'snpval', creator=create_cell)

class Cell(object):
    def __init__(self, patient=None, snp=None, snpval=None):
        self.patient = patient
        self.snp = snp
        self.snpval = snpval
    def __repr__(self):
        return 'Cell %s'%self.snpval

class Snp(object):
    def __init__(self, rsid, chromosome, location, probe_set_id, allele):
        self.rsid = rsid
        self.chromosome = chromosome
        self.location = location
        self.probe_set_id = probe_set_id
        self.allele = allele
    def __repr__(self):
        return 'SNP %s'%self.rsid
    patients = association_proxy('by_patient', 'snpval', creator=create_cell)

class Snpval(object):
    def __init__(self, val):
        self.val = val
    def __repr__(self):
        return 'Snpval %s'%self.val

# mapper(Broker, brokers_table, properties={
# 'by_stock': relation(Holding,
# collection_class=attribute_mapped_collection('stock'))
# })

# 'cells' corresponds to a 1 to many relation.
mapper(Patient, patient_table,
       properties={'cells': relation(Cell, backref='patient'),
                   'by_rsid': relation(Cell,
                       collection_class=attribute_mapped_collection('snp'))})
# 'patient_snpval' corresponds to a many to 1 relation.
# 'patient_snpval' corresponds to a 1 to 1 relation.
mapper(Cell, cell_table,
       properties={'snp': relation(Snp, backref='cells'),
                   'snpval': cell_table.c.snpval_id,
                   'snpval_obj': relation(Snpval, uselist=False,
                                          backref='cell')})
mapper(Snp, snp_table,
       properties={'by_patient': relation(Cell,
           collection_class=attribute_mapped_collection('patient'))})
mapper(Snpval, snpval_table)

#print patient_mapper.identity_key_from_instance()

# 0) Create doublet (2 letters).
# 1) Enter row names (cols) and patient names( snp ids).
# 2) Look at text files and update linker tables.

**


[sqlalchemy] returning primary key of object without know what it is called.

2008-12-04 Thread Faheem Mitha


Hi,

I'm trying to figure out how to have an object return its primary key 
without knowing what it is called. The docs in 
http://www.sqlalchemy.org/docs/05/sqlalchemy_orm_mapper.html look 
relevant, for example the function identity_key_from_instance (see entry 
from docs below), but I'm not clear about usage. The functions on this 
page look like they are meant to be used as method functions of a mapper 
object, but how should I construct such a mapper object? In my schema 
file, I have lines like

Mapper(Foo, foo_table)

should I be returning an mapper object for use with functions? Ie should I 
be doing

foo_mapper = Mapper(Foo, foo_table)

or similar? The section "module sqlalchemy.orm.mapper" says

"This is a semi-private module; the main configurational API of the ORM is 
available in module sqlalchemy.orm."

Does this mean it is not meant to be used in this fashion?

Also, I don't understand what is meant by

This value is typically also found on the instance state under the
attribute name key.

in the docs for identity_key_from_instance below.

Please CC me on any reply.
   Thanks and regards, Faheem.




def identity_key_from_instance(self, instance)

Return the identity key for the given instance, based on its primary key 
attributes.

This value is typically also found on the instance state under the 
attribute name key.
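
For the record, the replies elsewhere in this archive point at
sqlalchemy.orm.object_mapper rather than constructing a Mapper by hand; a
sketch (the instance name is assumed):

from sqlalchemy.orm import object_mapper

m = object_mapper(patient_obj)              # no class name needed
print m.identity_key_from_instance(patient_obj)
# e.g. (<class 'Patient'>, ('p1',)) -- the class plus the primary key tuple
print m.primary_key_from_instance(patient_obj)
# e.g. ('p1',) -- composite keys give longer tuples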




[sqlalchemy] Re: inferring object class/table directly

2008-12-03 Thread Faheem Mitha

On Wed, 3 Dec 2008 08:58:42 -0500 (EST), [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 On Tue, 2 Dec 2008 at 23:21, Faheem Mitha wrote:
 Yes, I was looking for this, and printed out obj.__dict__ but didn't
 see it there. A dictionary of attributes is very useful in theory, but
 doesn't always seem to have all attributes. Is this documented
 anywhere?

 Try dir(obj).  You'll see it there.  The __dict__ is only for instance
 attributes.

Excellent. This is the first I've heard of this function, but
apparently it is a Python builtin. Well past time to go read the docs
for this, I guess.
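
A quick illustration of the difference (the class is hypothetical):

class Patient(object):
    species = 'human'            # class attribute
    def __init__(self, id):
        self.id = id             # instance attribute

p = Patient('p1')
print p.__dict__                 # {'id': 'p1'} -- instance attributes only
print 'species' in dir(p)        # True -- dir() sees class attributes too
print '__class__' in dir(p)      # True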

 Not too difficult.  You can also use type(obj) instead of
 obj.__class__.

 I thought of trying this, but didn't. It didn't seem likely to work,
 anyway. Is either of these preferred over the other in terms of API
 stability, and if so, why?

 obj.__class__ is a python thing, as is type(obj), and neither of
 those is changing in python 3.0, so I'd think both would be stable API
 wise :)  However, the documentation of __class__ makes it clear you
 get the class back, while the documentation of the 'type' built in
 function does not...so I'd lean toward using __class__, myself.  It
 also means you'll get an earlier error if you accidentally pass
 something that is not actually a class instance into your function.

Thanks. That's very helpful.
 Regards, Faheem.






[sqlalchemy] inferring object class/table directly

2008-12-02 Thread Faheem Mitha


Hi,

If I have an ORM object, it is sometimes convenient to be able to infer 
the class directly. Eg. consider this function.

def add_patient_obj(session, patient_obj):
    """Check if object primary key exists in db. If so, return, else
    add."""
    pid = patient_obj.id
    #print session.query(Patient).filter_by(id=pid).count()
    if session.query(Patient).filter_by(id=pid).count() > 0:
        print "Patient object with id %s is already in db." % pid
        return
    else:
        session.save(patient_obj)
        session.commit()

But I want a generic version. Since patient_obj knows what class it 
belongs to, it should be possible not to have to state the class directly, 
which here is Patient.

I have done the following, which works, but is a hideous, horrible, ugly, 
fragile hack. Can anyone suggest a better way of doing this?

Please CC me on any reply. Thanks in advance.

Regards, Faheem.

def add_obj(session, obj):
    """Check if object primary key exists in db. If so, return, else
    add.
    """
    c = str(type(obj)).split("'")[1].split(".")[1]
    s = "q = session.query(" + c + ")"
    exec(s)
    pid = obj.id
    if q.filter_by(id=pid).count() > 0:
        print "Patient object with id %s is already in db." % pid
        return
    else:
        session.save(obj)
        session.commit()




[sqlalchemy] Re: inferring object class/table directly

2008-12-02 Thread Faheem Mitha

Hi Eric,

Thanks very much for the improvement.

On Tue, 2 Dec 2008 15:04:34 -0800 (PST), Eric Ongerth
[EMAIL PROTECTED] wrote:

 def add_obj(session, obj):
     """Check if object primary key exists in db. If so, return, else
     add."""
     pid = obj.id
     if session.query(obj.__class__).filter_by(id=pid).count():
         print "Patient object with id %s is already in db." % pid
         return
     else:
         session.save(obj)
         session.commit()

Yes, I was looking for this, and printed out obj.__dict__ but didn't
see it there. A dictionary of attributes is very useful in theory, but
doesn't always seem to have all attributes. Is this documented
anywhere?

 Not too difficult.  You can also use type(obj) instead of
 obj.__class__.

I thought of trying this, but didn't. It didn't seem likely to work,
anyway. Is either of these preferred over the other in terms of API
stability, and if so, why?

 Furthermore, if you really need to determine the object's class's
 mapped table,
 obj_table = obj.__class__._sa_class_manager.mapper.mapped_table

 Of course, being an underscored thing, _sa_class_manager is not
 something you should count on from version to version of sqlalchemy,
 so keep that in consideration and don't use it anywhere you don't plan
 to maintain.

Not sure what the object class's mapped table is, but will look it up.

  Regards, Faheem.




[sqlalchemy] Re: inferring object class/table directly

2008-12-02 Thread Faheem Mitha

On Tue, 2 Dec 2008 18:25:19 -0500, Michael Bayer
[EMAIL PROTECTED] wrote:

 On Dec 2, 2008, at 6:04 PM, Eric Ongerth wrote:

[snip]
 Furthermore, if you really need to determine the object's class's
 mapped table,
 obj_table = obj.__class__._sa_class_manager.mapper.mapped_table

 here's the API way:

 object_mapper(obj).mapped_table

Hi Michael,

Thanks for the clarification.

 Regards, Faheem.




[sqlalchemy] Re: adding objects to table only once

2008-11-18 Thread Faheem Mitha



On Mon, 17 Nov 2008, Faheem Mitha wrote:

 Hi,

 I've written a session transcript to init db tables and add objects 
 (well, rows) to the tables. The issue I'm currently facing is how to 
 make the "creating and populating the tables" section of the script a 
 no-op when the objects exist. If the tables already exist sqlalchemy 
 does nothing, which is fine. However, this script currently does try to 
 add the objects that are already there, and so throws an exception. I 
 suppose the thing to do would be to check for each object whether it 
 already exists in the db, and do nothing if so. What would be the 
 simplest/cleanest way to do so? I've been fiddling with this for a while 
 without finding an obviously good solution. Is it possible to check 
 whether an object is already in a specific table?

[following up to my own message]

The following approach works, but is kinda kludgy. In particular, I'd like 
to genericise it. The main obstacle in doing so is finding a generic 
expression for the primary key. There is always a primary key, and by 
definition it is unique, right? So, I think it makes sense to use that for 
comparison, but the actual name of the primary key can differ and it can 
also be composite. So, is there a way to access it in a generic way? 
Alternatively, is there a better approach to this?

   Thanks, Faheem.

def add_patient_obj(session, patient_obj):
    """Check if object primary key exists in db. If so, return, else add."""
    pid = patient_obj.id
    if session.query(Patient).filter_by(id=pid).count() > 0:
        print "Patient object with id %s is already in db." % pid
        return
    else:
        session.save(patient_obj)
        session.commit()
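
A generic variant along the lines eventually adopted (see the add_obj
function elsewhere in this archive); object_mapper and
primary_key_from_instance are real sqlalchemy.orm APIs, the rest is a
sketch:

from sqlalchemy import and_
from sqlalchemy.orm import object_mapper

def add_obj(session, obj):
    """Add obj unless a row with the same primary key already exists."""
    mapper = object_mapper(obj)
    pk = mapper.primary_key_from_instance(obj)
    criterion = and_(*(col == val
                       for col, val in zip(mapper.primary_key, pk)))
    if session.query(obj.__class__).filter(criterion).count() > 0:
        print "%s object with id %s is already in db." % (
            type(obj).__name__, pk)
        return
    session.save(obj)    # session.add(obj) in SQLAlchemy 0.5+
    session.commit()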




[sqlalchemy] odd error message from query

2008-11-17 Thread Faheem Mitha


Hi,

[I was told on IRC this was not worth bothering with, but since I've 
already written it, I'm sending this out. Please ignore if useless.]

I mistakenly did

query = session.query(Patient).filter_by(id=="John").all()

and got the curious response:

TypeError: filter_by() takes exactly 1 argument (2 given)

The correct syntax is

query = session.query(Patient).filter_by(id="John").all()

Can anything be done to make this a clearer error message?

 Thanks, Faheem.




[sqlalchemy] adding objects to table only once

2008-11-17 Thread Faheem Mitha


Hi,

I've written a session transcript to init db tables and add objects (well, 
rows) to the tables. The issue I'm currently facing is how to make the 
"creating and populating the tables" section of the script a no-op when 
the objects exist. If the tables already exist sqlalchemy does nothing, 
which is fine. However, this script currently does try to add the objects 
that are already there, and so throws an exception. I suppose the thing to 
do would be to check for each object whether it already exists in the db, 
and do nothing if so. What would be the simplest/cleanest way to do so? 
I've been fiddling with this for a while without finding an obviously good 
solution. Is it possible to check whether an object is already in a 
specific table?

Please CC me on any reply.
   Thanks, Faheem.




[sqlalchemy] Re: creator function in association_proxy

2008-11-13 Thread Faheem Mitha



On Wed, 12 Nov 2008, Michael Bayer wrote:

 On Nov 12, 2008, at 3:31 PM, Faheem Mitha wrote:

 2) In section Building Complex Views
 
 print broker.holdings[stock].shares
   # 10
 
 Isn't holdings[stock] the share value? Ie. shouldn't this just be
 
 print broker.holdings[stock]
 
 ?

 I'm going to go out on a limb here since I did not write these docs

Really? Who did? Good job by the author, anyway.

 and suggest it should say:

 broker.by_stock[stock] = Holding(broker, stock, 10)
 print broker.by_stock[stock].shares

 to illustrate non-association proxy usage, before the next blurb that 
 describes the association proxy usage.

Sounds plausible.

 4) A more general question:
 
 I'm having difficulty parsing the last three lines in the itemized list 
 in Section associationproxy.
 
 The relation is inspected ... through A creator function can be 
 used...
 
 In particular, I'm having difficulty understanding the point of the 
 creator function in association_proxy. I see where creator functions 
 are defined (two places), but don't see them being used.

 the association proxy manages an association relation across three 
 objects, such as:

 A-B-C

 when you say something along the lines of:

 A.someelement = C

 the B is created implicitly by the association proxy.  The creator function 
 is used to customize the process of creating the B instance.

Hmm. Adding something like this would not be a bad idea, except with more 
elaboration.

Just to repeat, I didn't really follow the point of the last three bullet 
points. No doubt I'm being obtuse, but I think a little explicitness would 
not hurt here.

* The relation is inspected to determine the type of the related objects.

What relation are we talking about, and what types? What is the point of 
determining the types? I guess in this case there are three objects and 
two relations as described in your diagram above, namely

A-B-C

* To construct new instances, the type is called with the value being 
assigned, or key and value for dicts.

I have no idea what this means. Are the instances referred to instances of 
the association object B as above?

* A creator function can be used to create instances instead.

I suppose we are talking about instances of B again.

I'm looking at the simplest example that uses a creator function, namely 
the User/Keyword example starting with

def get_current_uid():

Here the creator function is

def _create_uk_by_keyword(keyword):
    """A creator function."""
    return UserKeyword(keyword=keyword)

Now, can you point me to a line in this example

where there is the equivalent of

A.someelement = C

? In this case it would be something like

user.keywords = Keyword('foo')

Hmm, I suppose

for kw in (Keyword('its_big'), Keyword('its_heavy'), Keyword('its_wood')):
    user.keywords.append(kw)

is also creating association objects, since there is one association 
object (here, UserKeyword) created for each new Keyword.

In any case, I think it would be helpful to emphasize this point, since it 
is not obvious, at least to me. Also, how does the creator know where to 
get its arguments in general? Do they come via the C object (here, 
Keyword)? What happens if no creator is specified? Does the proxy fall 
back to the constructor instead, or something else?
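
For concreteness, here is a toy model of the mechanism as I currently 
understand it. This is only a sketch of the idea, not SQLAlchemy's actual 
proxy internals, and the class names are illustrative:

class Keyword(object):
    def __init__(self, keyword):
        self.keyword = keyword

class UserKeyword(object):
    def __init__(self, keyword=None, user=None):
        self.keyword = keyword
        self.user = user

def _create_uk_by_keyword(keyword):
    """A creator function."""
    return UserKeyword(keyword=keyword)

class ToyListProxy(object):
    # Stands in for the proxied collection: appending a C object
    # (a Keyword) runs the creator to build the intermediate B
    # object (a UserKeyword) behind the scenes.
    def __init__(self, creator, target):
        self.creator = creator
        self.target = target      # the real collection of B objects
    def append(self, value):
        self.target.append(self.creator(value))

user_keywords = []                # stands in for user.user_keywords
proxy = ToyListProxy(_create_uk_by_keyword, user_keywords)
proxy.append(Keyword('its_big'))  # a UserKeyword is created implicitly

If that is roughly right, then user.keywords.append(kw) is exactly the 
A.someelement = C step, with UserKeyword playing the part of B.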

Incidentally, in the Broker/Stock example, the creator function, namely

def _create_holding(stock, shares):
    """A creator function, constructs Holdings from Stock and share
    quantity."""
    return Holding(stock=stock, shares=shares)

isn't used anywhere that I can see, even implicitly, since no stocks are 
added to holdings in the example.

Well, back to trying to get my association proxy example to work. 
Currently, I've got an infinite loop. :-)

 5) An even more general comment:
 
 There are two discussions on mapping association objects. One of them
 is in the mapper docs
 (http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association)
 and uses a one-many and many-one relation. The second is in the docs
 for association proxy
 (http://www.sqlalchemy.org/docs/05/plugins.html#plugins_associationproxy_building),
 two examples in "Simplifying Association Object Relations" and
 "Building Complex Views", and uses association_proxy.
 
 It looks like either of these approaches can be used to initialize and
 update the db. It might be helpful if these two could be connected
 somehow. Perhaps a "see also" in the respective sections? I do see a "To
 enhance the association object pattern such that direct access to the
 Association object is optional, SQLAlchemy provides the associationproxy."
 in
 http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association,
 but it seems like you could just use the association_proxy setup described
 by itself, and it would suffice. The version in
 http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association
 says explicitly that this is the way to go - it doesn't clearly point to
 alternatives.

[sqlalchemy] Re: creator function in association_proxy

2008-11-13 Thread Faheem Mitha



On Thu, 13 Nov 2008, Faheem Mitha wrote:

 [... earlier quoted discussion snipped; see the previous message ...]

 Incidentally, in the Broker/Stock example, the creator function, namely

 def _create_holding(stock, shares):
     """A creator function, constructs Holdings from Stock and share
     quantity."""
     return Holding(stock=stock, shares=shares)

 isn't used anywhere that I can see, even implicitly, since no stocks are 
 added to holdings in the example.

On further consideration,

for stock in (Stock('JEK'), Stock('STPZ')):
    broker.holdings[stock] = 123

*is* actually creating Holding objects. Also, it is using stock and shares 
arguments, so I suppose

def _create_holding(stock, shares):
    """A creator function, constructs Holdings from Stock and share
    quantity."""
    return Holding(stock=stock, shares=shares)

is coming into play somehow. However, I'm still unclear about the precise 
mechanism by which this works. Regardless, I think it would be helpful to 
flag these examples explicitly, as in "look, we're invoking the creator 
function and implicitly creating Holding objects here!" (or whatever).
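
To make the dict case concrete, here is a toy model of what I take 
broker.holdings[stock] = 123 to be doing. Again, this is a sketch of the 
idea only, not the real proxy internals, with illustrative names:

class Stock(object):
    def __init__(self, symbol):
        self.symbol = symbol

class Holding(object):
    def __init__(self, stock=None, shares=None):
        self.stock = stock
        self.shares = shares

def _create_holding(stock, shares):
    """A creator function, constructs Holdings from Stock and share
    quantity."""
    return Holding(stock=stock, shares=shares)

class ToyDictProxy(object):
    # On __setitem__ the creator is called with (key, value), which
    # is how "broker.holdings[stock] = 123" can build a Holding from
    # a Stock and a share count.
    def __init__(self, creator, target):
        self.creator = creator
        self.target = target      # keyed collection of Holding objects
    def __setitem__(self, key, value):
        self.target[key] = self.creator(key, value)

holdings = {}
proxy = ToyDictProxy(_create_holding, holdings)
proxy[Stock('JEK')] = 123  # implicitly Holding(stock=..., shares=123)

If the proxy really does call the creator with the key and the assigned 
value for dict-like collections, that would explain the bullet point about 
"key and value for dicts" quoted above.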

 Take care, Faheem.




[sqlalchemy] creator function in association_proxy

2008-11-12 Thread Faheem Mitha


Hello everyone,

A few minor issues with association_proxy, especially wrt the
association object pattern.

The following points reference
http://www.sqlalchemy.org/docs/05/plugins.html

1) First, a typo. In section "Simplifying Association Object
Relations": "Because the proxies are backed a regular relation
collection" should have the word "by" or similar following "backed".

2) In section "Building Complex Views"

print broker.holdings[stock].shares
# 10

Isn't holdings[stock] the share value? I.e., shouldn't this just be

print broker.holdings[stock]

?

3) The title "Simplifying Association Object Relations" appears twice
in
http://www.sqlalchemy.org/docs/05/plugins.html#plugins_associationproxy

4) A more general question:

I'm having difficulty parsing the last three lines in the itemized
list in Section associationproxy.

"The relation is inspected ..." through "A creator function can be used..."

In particular, I'm having difficulty understanding the point of the
creator function in association_proxy. I see where creator functions are
defined (two places), but don't see them being used.

5) An even more general comment:

There are two discussions on mapping association objects. One of them
is in the mapper docs
(http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association)
and uses a one-many and many-one relation. The second is in the docs
for association proxy
(http://www.sqlalchemy.org/docs/05/plugins.html#plugins_associationproxy_building),
two examples in "Simplifying Association Object Relations" and
"Building Complex Views", and uses association_proxy.

It looks like either of these approaches can be used to initialize and 
update the db. It might be helpful if these two could be connected 
somehow. Perhaps a "see also" in the respective sections? I do see a "To 
enhance the association object pattern such that direct access to the 
Association object is optional, SQLAlchemy provides the associationproxy." 
in 
http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association, 
but it seems like you could just use the association_proxy setup described 
by itself, and it would suffice. The version in 
http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association 
says explicitly that this is the way to go - it doesn't clearly point to 
alternatives.

Please cc me on any reply.
Thanks, Faheem.




[sqlalchemy] emulating spreadsheet in db

2008-11-08 Thread Faheem Mitha


Hi,

I've got some genome data, and I'm trying to move it into a db.

The data looks like

Patient   FOOSNP   BARSNP   ...
Tom       AA       AT
John      AT       AA
...

These columns correspond to SNPs 
(http://en.wikipedia.org/wiki/Single_nucleotide_polymorphism). Note that 
there are a lot of columns. The question is what the best approach is to 
emulate such a setup. I could not find much information about this 
scenario. The 
approach I went for was to have the row info and column info in separate 
tables, and then have a linker table containing foreign keys pointing to a 
row, a column, and a SNP value.

Then this will look something like

PATIENT   SNP      SNPVAL
John      Foosnp   AA
Tom       Barsnp   AT

This essentially maps the Cartesian product of patient and snp to snpval.

function: PATIENT x SNP -> SNPVAL

Is this a reasonable way to approach this? If so, is there some way to 
tell the mapper what kind of relationship this table is trying to define?

 Thanks in advance, Faheem.

*

from sqlalchemy import *
from sqlalchemy.orm import *
from datetime import datetime

metadata = MetaData('sqlite:///data.sqlite')

patient_table = Table(
 'patient', metadata,
 Column('id', String(100), primary_key=True, index=True),
 Column('sex', String(1)),
 )

snp_table = Table(
 'snp', metadata,
 Column('name', String(20), nullable=False, primary_key=True),
 )

snpval_table = Table(
 'snpval', metadata,
 Column('id', Integer, primary_key=True),
 Column('val', String(2), nullable=False),
 )

cell_table = Table(
 'patient_snpval', metadata,
 # (patient, snp) identifies a cell, and the ORM mapper needs a
 # primary key, so make the pair a composite primary key.
 Column('patient', None, ForeignKey('patient.id', onupdate='CASCADE',
                                    ondelete='CASCADE'),
        primary_key=True, index=True),
 Column('snp', None, ForeignKey('snp.name', onupdate='CASCADE',
                                ondelete='CASCADE'),
        primary_key=True, index=True),
 Column('snpval', None, ForeignKey('snpval.id', onupdate='CASCADE',
                                   ondelete='CASCADE'),
        index=True, nullable=False),
 )

metadata.create_all()

class Patient(object): pass
class Snp(object): pass
class Snpval(object): pass
class Cell(object): pass

mapper(Patient, patient_table)
mapper(Snp, snp_table)
mapper(Snpval, snpval_table)
mapper(Cell, cell_table)

# Bind the Session to the engine the bound MetaData already uses.
Session = sessionmaker(bind=metadata.bind)
session = Session()
session.commit()
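
As for telling the mapper about the relationships: one option I 
considered, sketched here against the 0.5 relation() API and untested, is 
to map Cell with relation() properties in place of the bare 
mapper(Cell, cell_table) call above. The '_obj' suffixes are just to 
avoid clashing with the foreign key columns of the same name:

from sqlalchemy.orm import relation

# Replace mapper(Cell, cell_table) above with:
mapper(Cell, cell_table, properties={
    'patient_obj': relation(Patient, backref='cells'),
    'snp_obj': relation(Snp, backref='cells'),
    'snpval_obj': relation(Snpval),
})

# Then, for a given Cell row:
#   cell.patient_obj  -> the Patient instance
#   cell.snp_obj      -> the Snp instance
#   cell.snpval_obj   -> the Snpval instance
# and patient.cells / snp.cells recover the rows and columns of the
# spreadsheet.

Would this be the right way to express it, or is there something more 
idiomatic for this Cartesian product pattern?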
