Re: [sqlalchemy] move objects between two sessions

2017-04-09 Thread A.M.
On 04/09/2017 10:52 AM, SG wrote:
> I have two databases with one session for each of them.
> I would like to move objects contained in one of them to the other.
> I tried to get an object from the first with something like:
> 
> obj = sessionA.query(MyClass).filter(MyClass.name == 'some name')
> 
> and then I tried to add this object to the second session with:
> 
> sessionB.add(obj)
> sessionB.commit()
> 
> but I get this exception:
> 
> Object '' is already attached to session '1' (this is '2')
> 
> How could I overcome this exception and be able to move this object?
> Is there a way to do this?

This use case is covered here:

http://stackoverflow.com/questions/11213665/unbind-object-from-session

Call expunge() first.
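A minimal sketch of that sequence, with SQLite standing in for the two databases and a hypothetical MyClass mapping; note that make_transient() is also needed so that sessionB treats the object as brand new and emits an INSERT:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, make_transient
try:  # SQLAlchemy 1.4+
    from sqlalchemy.orm import declarative_base
except ImportError:  # older releases
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class MyClass(Base):
    __tablename__ = "myclass"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# two independent databases, one session each
engine_a = create_engine("sqlite://")
engine_b = create_engine("sqlite://")
Base.metadata.create_all(engine_a)
Base.metadata.create_all(engine_b)
session_a = sessionmaker(bind=engine_a)()
session_b = sessionmaker(bind=engine_b)()

session_a.add(MyClass(name="some name"))
session_a.commit()

obj = session_a.query(MyClass).filter(MyClass.name == "some name").one()
session_a.expunge(obj)   # detach the instance from session A
make_transient(obj)      # erase its persisted state so session B will INSERT it
session_b.add(obj)
session_b.commit()
```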

Cheers,
M


-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.



Re: [sqlalchemy] Proper way to do processing across entire db?

2013-02-21 Thread A.M.
On Thu, 21 Feb 2013 12:52:42 -0800 (PST), Victor Ng vicng...@gmail.com
wrote:
 I do a lot of processing on large amount of data. 
 
 The common pattern we follow is: 
 
 1. Iterate through a large data set 
 2. Do some sort of processing (i.e. NLP processing like tokenization, 
 capitalization, regex parsing, ... ) 
 3. Insert the new result in another table. 
 
 Right now we are doing something like this: 
 
 for x in session.query(Foo).yield_per(1): 
   bar = Bar()
   bar.hello = x.world.lower()
   session.add(bar)
   session.flush()
 session.commit()

Do you really need to flush after making each new Bar? That implies a
database round-trip and state sync with SQLAlchemy.

In any case, you should gather a profile to see where/how time is getting
spent. SQLAlchemy is a complex framework, so whatever performance
assumptions are implied in the code may be wrong.
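As a hedged sketch (Foo and Bar below are stand-ins for the poster's real mappings), the same loop can defer flushing to commit(), or flush every N rows to bound memory, instead of paying a round-trip per row:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    world = Column(String)

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)
    hello = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Foo(world="A"), Foo(world="B"), Foo(world="C")])
session.commit()

# accumulate pending Bars; flush only every 1000 rows rather than per row,
# and let the final commit() take care of the remainder
for i, x in enumerate(session.query(Foo).yield_per(100), 1):
    session.add(Bar(hello=x.world.lower()))
    if i % 1000 == 0:
        session.flush()
session.commit()
```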

Cheers,
M





Re: [sqlalchemy] SELECT * FROM function() with declarative

2012-09-20 Thread A.M.

On Sep 20, 2012, at 11:49 AM, David McKeone wrote:

 I've googled around can't seem to find an answer to this, so hopefully 
 someone knows how to do it here.
 
 I'm using PostgreSQL and I have a PL/PGSQL function that filters and modifies 
 a particular table based on a number of conditions and then returns a set of 
 rows as the result.  This pattern has allowed the system to use the functions 
 as if they were tables so that joins can still be done on the resulting 
 values.
 
 So instead of:
 
 SELECT * FROM table1 INNER JOIN table2 ON table1.t2_id = table2.id
 
 I do:
 
 SELECT * FROM my_function( ...args... ) as table1 INNER JOIN table2 ON 
 table1.t2_id = table2.id
 
 That part works ok in plain SQL (and as well in the system I'm converting 
 from)
 
 So now with SQLAlchemy I have my declarative definitions for those tables:
 
 class Table1(Base):
__tablename__ = 'table1'
 
id = Column()
t2_id = Column(ForeignKey())
 
table2 = Relationship( ... )  # Join condition is specified explicitly
 
 class Table2(Base):
__tablename__ = 'table2'
  
id = Column()
 
 and I'm trying to figure out how I would execute a query that looks like this:
 
 result = 
 session.query(Table1).join(Table1.table2).options(contains_eager(Table1.table2))
 
 but using a function to 'fake' Table1 instead.  So basically I'm attempting 
 to get SQLAlchemy to treat the result of my function as if it was the normal 
 Table1 object.  I've tried using select_from() to inject my call to 
 func.my_function() but that doesn't seem to work and since what I'm doing 
 seems like it might be tricky (or not portable across SQL) I thought I'd ask 
 if it's even possible.
 
 Thanks for any help!
 

Perhaps the easiest way is to create a view:

CREATE VIEW table1 AS SELECT * FROM my_function(...);
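A sketch of the view route, with SQLite standing in for PostgreSQL and a plain SELECT standing in for my_function(); the view is then reflected like any other table, supplying the primary key column explicitly since views don't declare one:

```python
from sqlalchemy import create_engine, text, MetaData, Table, Column, Integer

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t2 (id INTEGER PRIMARY KEY)"))
    conn.execute(text("INSERT INTO t2 (id) VALUES (1)"))
    # stand-in for: CREATE VIEW table1 AS SELECT * FROM my_function(...)
    conn.execute(text("CREATE VIEW table1 AS SELECT id, id AS t2_id FROM t2"))

meta = MetaData()
# views reflect without a primary key, so declare one explicitly
table1 = Table("table1", meta,
               Column("id", Integer, primary_key=True),
               autoload_with=engine)

with engine.connect() as conn:
    rows = conn.execute(table1.select()).fetchall()
```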

Cheers,
M






Re: [sqlalchemy] [PATCH] MATCH clause implementation for foreign keys

2012-06-10 Thread A.M.

On Jun 9, 2012, at 10:41 PM, Michael Bayer wrote:

 it looks great.
 
 This is in the queue as http://www.sqlalchemy.org/trac/ticket/2502.


1765: text += " MATCH %s" % constraint.match

SQL injection? Shouldn't the argument be one of three constants?
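A hypothetical sketch of the whitelist check being suggested; FULL, PARTIAL, and SIMPLE are the MATCH variants named in the SQL standard:

```python
# the three MATCH variants allowed by the SQL standard
VALID_MATCH = frozenset({"FULL", "PARTIAL", "SIMPLE"})

def render_match(match):
    """Render a MATCH clause fragment, rejecting anything off the whitelist."""
    keyword = match.upper()
    if keyword not in VALID_MATCH:
        raise ValueError("unexpected MATCH argument: %r" % match)
    return " MATCH %s" % keyword

clause = render_match("full")   # " MATCH FULL"
```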

I suspect there needs to be some specific per-database-driver logic to handle 
unimplemented cases. PostgreSQL, for example, doesn't support MATCH PARTIAL ( 
http://www.postgresql.org/docs/9.1/static/sql-createtable.html ) and MySQL, 
naturally, completely ignores the syntax and triggers other clauses to be 
ignored:

For users familiar with the ANSI/ISO SQL Standard, please note that no storage 
engine, including InnoDB, recognizes or enforces the MATCH clause used in 
referential integrity constraint definitions. Use of an explicit MATCH clause 
will not have the specified effect, and also causes ON DELETE and ON UPDATE 
clauses to be ignored. For these reasons, specifying MATCH should be avoided.
http://dev.mysql.com/doc/refman/5.5/en/create-table.html

Cheers,
M






Re: [sqlalchemy] Lock table, do things to table, unlock table: Best way?

2012-05-27 Thread A.M.

On May 27, 2012, at 1:07 AM, Jeff wrote:

 I have multiple processes accessing  a table. All of these processes
 want to read a set of rows from the table, and if the rows are not
 present they will make a calculation and insert the rows themselves.
 The issue comes where process  A does a query to see if the target set
 of rows is present in the table, and they're not, and then another
 starts calculating. While it's calculating, process B inserts the
 rows. Then process A inserts the rows, and now we have two copies of
 these sets of rows. Bad.

You should look at SELECT FOR UPDATE.

http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=lockmode#sqlalchemy.orm.query.Query.with_lockmode
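As a side note, with_lockmode() was later superseded by with_for_update(); compiling against the PostgreSQL dialect shows the emitted locking clause without needing a live database (jobs here is a hypothetical stand-in table):

```python
from sqlalchemy.sql import table, column
from sqlalchemy.dialects import postgresql

# lightweight stand-in for the shared work table
jobs = table("jobs", column("id"), column("claimed"))

# SELECT ... FOR UPDATE: rows matched are locked until the transaction ends,
# so a concurrent process blocks instead of double-claiming them
stmt = jobs.select().where(jobs.c.claimed == False).with_for_update()
sql = str(stmt.compile(dialect=postgresql.dialect()))
```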

Cheers,
M




Re: [sqlalchemy] modifying enable_seqscan option

2012-03-15 Thread A.M.

On Mar 15, 2012, at 5:54 AM, Eduardo wrote:

 Hi,
 Is it possible to set this option to off by using sqlalchemy?
 Thanks
 Ed
 

connection.execute("SET enable_seqscan TO off;")

But if you use that in production, you're nuts.

Cheers,
M






Re: [sqlalchemy] postgres with .. select queries

2012-03-01 Thread A.M.

On Mar 1, 2012, at 9:41 AM, Michael Bayer wrote:

 Right.. So what should I do about oracle CONNECT BY, which is what they have 
 instead of WITH RECURSIVE... Consider that to be a different construct and 
 ignore it for now?   Or should there be some approach that approximates 
 between WITH RECURSIVE and CONNECT BY transparently?  The latter approach has 
 eluded me since structurally they are so different.   I guess a literal CTE 
 implementation to start with doesn't prevent a later agnostic construct from 
 being created. 

Well, the SQL standard specifies WITH RECURSIVE, which is more general anyway. 
W.R. is basically an inductive query loop construct (base case UNION induction 
step), whereas CONNECT BY only handles key-based tree retrieval, no?

Also, basic WITH support (without RECURSIVE) would be much appreciated- that 
could offer more flexibility than FROM-subqueries and could open the door for 
W.R.
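The base-case-plus-induction-step shape can be illustrated through the stdlib sqlite3 driver (SQLite later gained CTE support), counting 1 through 5:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1                                -- base case
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 5   -- induction step
    )
    SELECT n FROM counter
""").fetchall()
# rows -> [(1,), (2,), (3,), (4,), (5,)]
```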

Cheers,
M




Re: [sqlalchemy] Reducing instrumentation overhead for read-only entities

2012-02-26 Thread A.M.

On Feb 19, 2012, at 5:24 AM, Andrey Popp wrote:
 Regarding rationale let me describe where I can find this feature useful:
 
  * You have a part of you tables read-only, so you only query data from them.
 
SQLAlchemy doesn't have to track changes on objects of classes mapped to
these tables. According to this stackoverflow post[1] we can get a not so
negligible performance gain here.
 
  * We can map same table on same class using different mappers (one in
read-only mode and other in persistence mode).
 
That way we can use read-only mapper to query data in case we don't want 
 to
change it and we can use persistence mapper otherwise. I believe this
will also lead to better correctness of application itself.

Hello,

After reading your stackoverflow post (implying that a profile revealed much 
instrumentation that you don't need), it occurred to me that you could use 
SQLAlchemy to generate the SQL query and then pass it directly to an execute 
method on the underlying session bind, bypassing SQLAlchemy model construction. 
Ideally, you could further shrink the query by requesting only the columns you 
actually need in your calculations.

This seems like the least-intrusive approach to try before you jump to C.
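A sketch of that suggestion, with a stand-in readings table: build the statement from Core table metadata, execute it on the bind, and work with plain rows rather than instrumented instances:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer

meta = MetaData()
# hypothetical read-only table standing in for the poster's schema
readings = Table("readings", meta,
                 Column("id", Integer, primary_key=True),
                 Column("value", Integer))

engine = create_engine("sqlite://")
meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(readings.insert(), [{"value": 2}, {"value": 3}])
    # executing the selectable directly skips ORM identity-map and
    # change-tracking machinery entirely
    rows = conn.execute(readings.select()).fetchall()

total = sum(row[1] for row in rows)   # plain tuples, no instrumentation
```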

Cheers,
M




Re: [sqlalchemy] 0.7.4 and multiple schemas

2012-01-24 Thread A.M.

On Jan 24, 2012, at 11:08 AM, Julien Cigar wrote:

 Hello,
 
 I upgraded to 0.7.4 together with PostgreSQL 9.0.5.
 
 I have a database with several schemas, and it looks like the handle of 
 schemas changed in 0.7.4
 
 I altered the default database search_path with:
 
 {{{
 xxx=# alter DATABASE xxx SET search_path TO public,gis,cr2010;
 }}}
 
 and I used to do something like:
 
 {{{
 meta = MetaData()
 engine = engine_from_config(myconfig)
 meta.reflect()
 }}}
 
 but in this case nothing is reflected .. (meta.tables is empty), is it 
 normal? Also is it intended that SQLAlchemy doesn't care about the default 
 search_path ..?
 
 I wondered if I should use the new Inspector 
 (http://docs.sqlalchemy.org/en/latest/core/schema.html#fine-grained-reflection-with-inspector)
  for such case ?

The search_path works fine for me with reflection. If you login as the intended 
user using psql and enter \d, do you see any tables? Your results suggest no. 
Perhaps you need ALTER USER X SET search_path TO ….

Cheers,
M




Re: [sqlalchemy] Declaring compound primary key on reflective model

2011-06-10 Thread A.M.

On Jun 10, 2011, at 2:25 PM, Cody Django wrote:

 Hi all -- I'm new on pylons, coming from a django background.  I'm
 working on a mapfish project.
 
 I'd like to autoload a model based on a db table, but in doing so I
 get the error could not assemble any primary key columns for mapped
 table.
 
 The table itself is a postgres view with no declared primary key
 column, so it makes sense that no primary key columns are detected.
 
 I figure it is possible to define which columns to use as the primary
 key in the __table_args__ dict, but I have yet to figure out exactly
 how.
 
 Tips, please!

I do the same thing:

  newtable = Table(name,
                   meta,
                   *props,
                   autoload=True)

where props = ColumnCollection(Column('id', Integer, primary_key=True))

SQLAlchemy overwrites the autoloaded info with the passed-in column info.

Cheers,
M




Re: [sqlalchemy] Concurrent upsert problem with session.merge()

2011-06-07 Thread A.M.

On Jun 7, 2011, at 7:17 AM, Vlad K. wrote:

 
 Hi all!
 
 
 I have a daily stats table with the date as primary key. Currently the date 
 is string in the MMDD format, but I'll probably port it to a Date type 
 for easier statistical analysis. Other cols are various counters that get 
 incremented in various situations.
 
 My problem is that I need to write code that either updates if row with that 
 date exists, or inserts as new. I know SQLAlchemy does that by itself 
 (session.merge()), but the problem here is that you can't lock a row that 
 does not exist yet, and with concurrency in mind, what will happen if two 
 parallel processes happen to insert same date row at the same time.
 
 One solution would be to catch Integrity error and simply try again (with 
 second attempt finding the pkey and thus updating instead of insert). But 
 since I'm new to SQLAlchemy, I don't know if there is any better method.
 
 The DB backend is Postgres.

This is a common PostgreSQL question which can be addressed with rules or 
triggers, or by just retrying the transaction, as you suggest. However, the easiest 
solution is to lock the table for the duration of the transaction, which may be an 
option for you if the transactions are short-lived and not performance-critical.
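A single-process sketch of the retry idea, using the stdlib sqlite3 driver; under real concurrency the whole transaction would be retried, not just the single INSERT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_stats (day TEXT PRIMARY KEY, hits INTEGER NOT NULL)")

def bump(conn, day):
    """Insert a new day's row, or fall back to UPDATE if it already exists."""
    try:
        conn.execute("INSERT INTO daily_stats (day, hits) VALUES (?, 1)", (day,))
    except sqlite3.IntegrityError:
        # the row exists (or appeared concurrently): second attempt updates
        conn.execute("UPDATE daily_stats SET hits = hits + 1 WHERE day = ?", (day,))

bump(conn, "20110607")
bump(conn, "20110607")
hits = conn.execute("SELECT hits FROM daily_stats WHERE day = '20110607'").fetchone()[0]
# hits -> 2
```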

Cheers,
M




Re: [sqlalchemy] Exclusive SELECT?

2011-06-01 Thread A.M.

On Jun 1, 2011, at 12:56 AM, thatsanicehatyouh...@mac.com wrote:

 Hi,
 
 I'm working on a script using SQLAlchemy against a PostgreSQL database and 
 using Python's multiprocessing. The pattern is for each thread to:
 
 - start a transaction (session.begin())
 - retrieve the next row in table X that has not yet been processed
 - set a being_processed flag in the row so no other query will return it
 - close the transaction (session.commit())
 
 The rest of the thread then performs the work, saves the results back, and 
 cleans up. This is all working well except for one thing - multiple threads 
 are retrieving the same row. My first solution was to pull the work above 
 into the main thread, but this won't work as I'd like to run this same script 
 on *multiple* machines - the database needs to be the gatekeeper.
 
 Does anyone have any suggestions on how I can make a database-wide exclusive 
 transaction through SQLAlchemy (or otherwise!)? I'm using SQLAlchemy 0.6.7.

Hi Demitri,

The problem is that the flag is set in a transaction A, so transaction B is not 
able to see the change in state until A is committed (in read-committed mode, 
the default), thus creating a race condition. You need to lock the table or use 
another form of synchronization to make sure every connection sees a 
synchronized view of the table.

http://www.postgresql.org/docs/9.0/interactive/sql-lock.html
http://www.postgresql.org/docs/9.0/interactive/sql-select.html#SQL-FOR-UPDATE-SHARE

Cheers,
M




Re: [sqlalchemy] In-memory sqlite database to blob?

2011-03-04 Thread A.M.

On Mar 4, 2011, at 4:50 PM, thatsanicehatyouh...@mac.com wrote:

 Hi,
 
 I'd like to write a script that creates an in-memory SQLite database via 
 SQLAlchemy, but when I've finished with it I'd like to upload it as a file to 
 a server, preferably without ever creating a temporary file on the client 
 side. Is this possible?

It is possible but potentially difficult. You could accomplish this by using 
the backup facility (sqlite3_backup_*) to push the in-memory database to the 
remote location, perhaps over a network-mounted file share. 
(http://www.sqlite.org/backup.html)

It would likely be easier to use iterdump in pysqlite to generate a database 
dump and push the SQL dump text to a server.
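The iterdump route can be sketched with the stdlib driver: serialize the in-memory database to SQL text, ship that text over the network, and replay it on the receiving side:

```python
import sqlite3

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x INTEGER)")
mem.execute("INSERT INTO t VALUES (42)")
mem.commit()

dump_sql = "\n".join(mem.iterdump())   # complete SQL dump, no temp file

# on the receiving end, replaying the dump reconstructs the database
restored = sqlite3.connect(":memory:")
restored.executescript(dump_sql)
value = restored.execute("SELECT x FROM t").fetchone()[0]   # -> 42
```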

If you are trying to avoid hitting the disk, perhaps you can create the sqlite 
db on an in-memory filesystem. Then you could use normal filesystem operations 
to manipulate the resultant database.

In any case, SQLAlchemy won't be directly helpful here.

Cheers,
M





[sqlalchemy] autoloading with known columns

2011-02-02 Thread A.M.
Hello,

I am using autoload to load some of my application metadata for views. However, 
since some information cannot be extracted from the view (such as the primary 
key), and because one UserDefinedType I am using cannot be recognized using 
reflection, I am passing certain column information into the Table __init__ 
method:

Table(viewname,
      metadata,
      Column('id', Integer, primary_key=True),
      Column('acl', ACLItemArrayType, nullable=False),
      autoload=True)

Unfortunately, I still get "Did not recognize type 'aclitem' of column 'acl'" 
because the column information is still being reflected. Would it make sense for 
there to be an option that excludes column names from table reflection, skipping 
those columns which are specified explicitly as part of Table()?

Cheers,
M




[sqlalchemy] Re: autoloading with known columns

2011-02-02 Thread A.M.

On Feb 2, 2011, at 4:50 PM, A.M. wrote:

 Hello,
 
 I am using autoload to load some of my application metadata for views, 
 however, since some information cannot be extracted from the view (such as 
 the primary key) and  because one UserDefinedType I am using cannot be 
 recognized using reflection, I am passing certain column information into the 
 Table __init__ method:
 
 Table(viewname,
   metadata,
   Column('id',Integer,primary_key=True),
   Column('acl',ACLItemArrayType,nullable=False),
   autoload=True)
 
 Unfortunately, I still get Did not recognize type 'aclitem' of column 'acl' 
 because the column information is still trying to be reflected. Would it make 
 sense for there to exist an option to exclude column names as part of table 
 reflection and exclude those columns which are specified as part of Table()?

Also, it might be useful to be able to register UserDefinedTypes with an 
inspector so they can be autoloaded via an additional reverse get_col_spec method.

Cheers,
M




Re: [sqlalchemy] Sqlite3 auto

2011-01-28 Thread A.M.

On Jan 28, 2011, at 8:07 AM, slothy Rulez a lot wrote:

 Hi, []
 
 I'm trying to create tables in SQLITE, with a composite PK (id, eid), I want
 the id have the auto increment.
 I've read this,
 http://www.sqlalchemy.org/docs/dialects/sqlite.html#auto-incrementing-behavior,
 and following your instructions, added __table_args__ =
 {'sqlite_autoincrement':True}
 (I'm using declarative), but i'm not having any difference.
 
 This is the debug exit and the declarative code:
 
 2011-01-28 13:48:56,999 INFO sqlalchemy.engine.base.Engine.0x...b9ec
 CREATE TABLE elementos_filiales (
 id INTEGER,
 tipo TEXT,
 timestamp TEXT,
 eid VARCHAR(127) NOT NULL,
 nombre TEXT,
 eliminado INTEGER,
 padre INTEGER,
 PRIMARY KEY (id, eid),
 FOREIGN KEY(padre) REFERENCES elementos_multinacionales (id)
 )

What you are trying to do won't work in SQLite because INTEGER PRIMARY KEY is 
an alias for a special SQLite ROWID. So, when you create a two-column primary 
key, you lose the ROWID alias feature. AUTOINCREMENT can apparently only be 
used with a single-column primary key:

sqlite> create table gonk(a INTEGER AUTOINCREMENT, b varchar, primary key(a,b));
Error: near "AUTOINCREMENT": syntax error
sqlite> create table gonk(a INTEGER PRIMARY KEY AUTOINCREMENT, b varchar, primary key(a,b));
Error: table "gonk" has more than one primary key

However, you should be able to create a table having id as the primary key and 
a unique constraint on (id,eid) which should be effectively the same.
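That workaround, shown with the stdlib sqlite3 driver: keep id as the single-column INTEGER PRIMARY KEY (preserving the rowid alias and autoincrement behavior) and express the composite key as a UNIQUE constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE elementos_filiales (
        id INTEGER PRIMARY KEY AUTOINCREMENT,   -- keeps the rowid alias
        eid VARCHAR(127) NOT NULL,
        UNIQUE (id, eid)                        -- stands in for the composite PK
    )
""")
conn.execute("INSERT INTO elementos_filiales (eid) VALUES ('a')")
conn.execute("INSERT INTO elementos_filiales (eid) VALUES ('b')")
rows = conn.execute("SELECT id, eid FROM elementos_filiales ORDER BY id").fetchall()
# rows -> [(1, 'a'), (2, 'b')]
```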

Cheers,
M




Re: [sqlalchemy] autocommit on for DDL

2011-01-27 Thread A.M.

On Jan 26, 2011, at 7:47 PM, Michael Bayer wrote:

 
 On Jan 26, 2011, at 6:32 PM, A.M. wrote:
 
 Well, I spoke too soon :( What is the mistake in the following sample code 
 which causes the COMMITs to be emitted? Setting autocommit to either True or 
 False emits the same SQL. I think this is a case of staring at the same code 
 too long causing brain damage- thanks for your patience and help!
 
 from sqlalchemy.engine import create_engine
 from sqlalchemy.orm.session import sessionmaker
 from sqlalchemy.orm import scoped_session
 from sqlalchemy.schema import DDL,MetaData,Table
 
 engine = create_engine('postgresql://localhost/test',echo=True)
 session = scoped_session(sessionmaker(bind=engine))
 metadata = MetaData()
 metadata.bind = engine
 
 Table('test1',metadata)
 Table('test2',metadata)
 metadata.create_all()
 
 metadata.create_all() looks at the bind attribute, then uses it to execute 
 each DDL statement.   The bind here is an engine so it uses connectionless 
 execution.  connectionless execution is usually autocommit as documented 
 here:  
 http://www.sqlalchemy.org/docs/core/connections.html#connectionless-execution-implicit-execution
 
 to emit create_all() in a transaction:
 
 conn = engine.connect()
 with conn.begin():
   metadata.create_all(conn)

Ugh- thanks for being patient with a noob- I had erroneously assumed that the 
creation of a session would assume responsibility for transaction management, 
like the zope transaction handler. It makes sense now that the session is 
exclusively specific to ORM management- the section "Joining a Session into an 
External Transaction" helped to clear things up- there is indeed some 
interaction between connection and session transactions.

Cheers,
M





[sqlalchemy] autocommit on for DDL

2011-01-26 Thread A.M.
Hello,

While working on a database test with nose, I dug into sqlalchemy 0.6.6 until I 
found these lines:

class DDLElement(expression.Executable, expression.ClauseElement):
    """Base class for DDL expression constructs."""

    _execution_options = expression.Executable.\
        _execution_options.union({'autocommit': True})

In my nose test against postgresql, I emit a bunch of DDL, perform the test and 
then roll back the whole shebang which conveniently makes it look like I didn't 
touch the database. Obviously, the emitted commits were getting in my way, so I 
wrote this:

class NoCommitDDL(DDL):
    def __init__(self, *args, **kw):
        super(NoCommitDDL, self).__init__(*args, **kw)
        unfrozen = dict(self._execution_options)
        del unfrozen['autocommit']
        self._execution_options = frozendict(unfrozen)

DDL = NoCommitDDL

I still feel like I am missing something, though. I understand that PostgreSQL 
is perhaps one of the few databases to allow transaction-aware DDL, but why is a 
commit emitted for DDL on any database, when databases without transactional DDL 
commit it implicitly anyway?

Thanks.

Cheers,
M




Re: [sqlalchemy] autocommit on for DDL

2011-01-26 Thread A.M.

On Jan 26, 2011, at 5:45 PM, Michael Bayer wrote:
 
 
 From this it follows that if you'd like to emit several DDL statements in a 
 transaction, the usage is no different for DDL expressions than for any other 
 kind of DML statement (i.e insert/update/delete).   Use 
 connection.begin()/transaction.commit() as documented at:
 
   http://www.sqlalchemy.org/docs/core/connections.html#using-transactions 
 .
 
 autocommit is described right after that:
 
   
 http://www.sqlalchemy.org/docs/core/connections.html#understanding-autocommit
 
 Also the public API for _execution_options is the execution_options() 
 generative call: 
 
   
 http://www.sqlalchemy.org/docs/core/expression_api.html?highlight=execution_options#sqlalchemy.sql.expression.Executable.execution_options
  .

Thanks for the prodding- I figured out my bug. Here is sample code that 
demonstrates a little surprise. 

First, this code that emits:
BEGIN (implicit)
SELECT 1
ROLLBACK

=
from sqlalchemy.engine import create_engine
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.orm import scoped_session
from sqlalchemy.schema import DDL

engine = create_engine('postgresql://localhost/test',echo=True)
session = scoped_session(sessionmaker(bind=engine))

DDL("SELECT 1").execute(bind=session)
session.rollback()
=

and here is the buggy code which emits:
SELECT 1
COMMIT

from sqlalchemy.engine import create_engine
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.orm import scoped_session
from sqlalchemy.schema import DDL

engine = create_engine('postgresql://localhost/test',echo=True)
session = scoped_session(sessionmaker(bind=engine))

DDL("SELECT 1").execute(bind=engine)
session.rollback()
=

Can you spot the difference? The DDL in the first code is bound to the session 
and the latter code mistakenly binds to the engine for execution resulting in 
two different execution paths. In hindsight, I guess it makes sense, but it 
certainly was not easy to find...

Cheers,
M




Re: [sqlalchemy] autocommit on for DDL

2011-01-26 Thread A.M.
Well, I spoke too soon :( What is the mistake in the following sample code 
which causes the COMMITs to be emitted? Setting autocommit to either True or 
False emits the same SQL. I think this is a case of staring at the same code 
too long causing brain damage- thanks for your patience and help!

from sqlalchemy.engine import create_engine
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.orm import scoped_session
from sqlalchemy.schema import DDL,MetaData,Table

engine = create_engine('postgresql://localhost/test',echo=True)
session = scoped_session(sessionmaker(bind=engine))
metadata = MetaData()
metadata.bind = engine

Table('test1',metadata)
Table('test2',metadata)
metadata.create_all()

2011-01-26 18:27:48,284 INFO sqlalchemy.engine.base.Engine.0x...6cd0 select 
version()
2011-01-26 18:27:48,284 INFO sqlalchemy.engine.base.Engine.0x...6cd0 {}
2011-01-26 18:27:48,286 INFO sqlalchemy.engine.base.Engine.0x...6cd0 select 
current_schema()
2011-01-26 18:27:48,286 INFO sqlalchemy.engine.base.Engine.0x...6cd0 {}
2011-01-26 18:27:48,288 INFO sqlalchemy.engine.base.Engine.0x...6cd0 select 
relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where 
n.nspname=current_schema() and lower(relname)=%(name)s
2011-01-26 18:27:48,288 INFO sqlalchemy.engine.base.Engine.0x...6cd0 {'name': 
u'test1'}
2011-01-26 18:27:48,290 INFO sqlalchemy.engine.base.Engine.0x...6cd0 select 
relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where 
n.nspname=current_schema() and lower(relname)=%(name)s
2011-01-26 18:27:48,290 INFO sqlalchemy.engine.base.Engine.0x...6cd0 {'name': 
u'test2'}
2011-01-26 18:27:48,291 INFO sqlalchemy.engine.base.Engine.0x...6cd0 
CREATE TABLE test1 (
)


2011-01-26 18:27:48,291 INFO sqlalchemy.engine.base.Engine.0x...6cd0 {}
2011-01-26 18:27:48,292 INFO sqlalchemy.engine.base.Engine.0x...6cd0 COMMIT
2011-01-26 18:27:48,293 INFO sqlalchemy.engine.base.Engine.0x...6cd0 
CREATE TABLE test2 (
)


2011-01-26 18:27:48,293 INFO sqlalchemy.engine.base.Engine.0x...6cd0 {}
2011-01-26 18:27:48,294 INFO sqlalchemy.engine.base.Engine.0x...6cd0 COMMIT




Re: [sqlalchemy] Inserting in a container class that behaves as list (or set) from the contained class (not from the container)

2011-01-21 Thread A.M.

On Jan 21, 2011, at 12:29 PM, Hector Blanco wrote:

 Hello list!
 
 I have a couple of classes. One of the behaves as the container of the other:
 
 class ContainerOfSamples(declarativeBase):
   __tablename__ = "containers"
 
   _id = Column("id", Integer, primary_key=True)
   _samples = relationship("Sample", cascade="all, delete", 
 collection_class=set)
 
   def setSamples(self, samples):
   self._samples = samples
 
   def getSamples(self):
   return self._samples
 
   def addSample(self, sample):
   self._samples.add(sample)
 
   id = sqlalchemy.orm.synonym('_id', descriptor=property(getId, setId))
   samples = sqlalchemy.orm.synonym('_samples',
   descriptor=property(getSamples, 
 setSamples))
 
 
 class Sample(declarativeBase):
   __tablename__ = "samples"
 
   _id = Column("id", Integer, primary_key=True)
   _whatever = Column("whatever", String(20))
   _containerId = Column("container_id", Integer, 
 ForeignKey("containers.id"))
   _container = relationship("Container", uselist=False)
 
   def __hash__(self):
   return int(self.id)
 
   def setContainer(self, container):
   self._container = container
 
   def getContainer(self):
   return self._container
 
   id = sqlalchemy.orm.synonym('_id', descriptor=property(getId, setId))
   whatever = sqlalchemy.orm.synonym('_whatever',
   descriptor=property(getWhatever, 
 setWhatever))
   container = sqlalchemy.orm.synonym('_container',
   descriptor=property(getContainer, 
 setContainer))
 
 
 sample = Sample()
 container = ContainerOfSamples()
 sample.container(container)

I don't understand the need for the synonyms, but shouldn't this be as simple 
as sample.container = container? The relationship on sample is already 
defined... maybe you are confused because you think you need these getters and 
setters- in the above example, I don't see any need for them.
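For what it's worth, here is a minimal self-contained sketch of the plain-assignment approach, with no synonyms or getter/setter boilerplate. This is my own simplification (in-memory SQLite, `back_populates` instead of the two one-way relationships in the original post, and illustrative class names):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Container(Base):
    __tablename__ = "containers"
    id = Column(Integer, primary_key=True)
    samples = relationship("Sample", back_populates="container",
                           cascade="all, delete", collection_class=set)

class Sample(Base):
    __tablename__ = "samples"
    id = Column(Integer, primary_key=True)
    container_id = Column(Integer, ForeignKey("containers.id"))
    container = relationship(Container, back_populates="samples")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    sample = Sample()
    container = Container()
    sample.container = container        # plain assignment, no synonym needed
    session.add(container)
    session.commit()
    assert sample in container.samples  # back_populates keeps both sides in sync
```

The `back_populates` pair means assigning one side of the relationship updates the other side in memory immediately, before any flush.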

Cheers,
M




[sqlalchemy] On creating new SchemaItems

2011-01-21 Thread A.M.
Hello,

I would like SQLAlchemy to generate views much in the same way it can generate 
tables- perhaps like this:

View('bob',select([...]))

Is the SQLAlchemy code modular enough to support a user-defined SchemaItem or 
does that require changes to SQLAlchemy itself? 

The reason I would very much like this is because I currently use the Table 
objects, munge them through a processor to add common attributes, and generate 
a schema- I would like to be able to do the same with View objects.

I looked at subclassing sqlalchemy.schema.Table, but the __new__ override and 
the fact that the sql.compiler.DDLCompiler has hardcoded 
visit_create_schemaitem names gives me pause as to whether or not this can be 
accomplished without modifying SQLAlchemy itself. 

I realize that questions surrounding views pop up from time to time, so does it 
make sense to create or support a dialect-specific or user-defined SchemaItem?

Thanks!

Cheers,
M




Re: [sqlalchemy] On creating new SchemaItems

2011-01-21 Thread A.M.

On Jan 21, 2011, at 1:36 PM, Michael Bayer wrote:

 
 On Jan 21, 2011, at 12:56 PM, A.M. wrote:
 
 Hello,
 
 I would like SQLAlchemy to generate views much in the same way it can 
 generate tables- perhaps like this:
 
 View('bob',select([...]))
 
 Is the SQLAlchemy code modular enough to support a user-defined SchemaItem 
 or does that require changes to SQLAlchemy itself? 
 
 The reason I would very much like this is because I currently use the Table 
 objects, munge them through a processor to add common attributes, and 
 generate a schema- I would like to be able to do the same with View objects.
 
 I looked at subclassing sqlalchemy.schema.Table, but the __new__ override 
 and the fact that the sql.compiler.DDLCompiler has hardcoded 
 visit_create_schemaitem names gives me pause as to whether or not this can 
 be accomplished without modifying SQLAlchemy itself. 
 
  I realize that questions surrounding views pop up from time to time, so does 
 it make sense to create or support a dialect-specific or user-defined 
 SchemaItem?
 
 Thanks!
 
 You may not be aware that we have a full API for creation of custom SQL 
 expression subclasses as well as establishing compilation rules, which is 
 documented at http://www.sqlalchemy.org/docs/core/compiler.html .
 
 Regarding views specifically, we've got a usage recipe against this system, 
 though I don't know if its seen any real world usage, at 
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/Views .   Its using 
 lower-case table() objects as the core structure, which is your basic thing 
 with a bunch of columns object, the superclass of Table that doesn't have 
 the hard linkages with MetaData or constraints, so no SchemaItem subclass is 
 needed.  Subclassing TableClause (the result of table()) would be the likely 
 way to go if you wanted your view construct to have extra features.
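The compiler-extension approach described above can be sketched roughly like this, modeled on the Views usage recipe. This is a minimal illustration, not a full view implementation; the `CreateView` class and table names are my own:

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DDLElement

class CreateView(DDLElement):
    """Custom DDL element that renders CREATE VIEW <name> AS <select>."""
    def __init__(self, name, selectable):
        self.name = name
        self.selectable = selectable

@compiles(CreateView)
def _compile_create_view(element, compiler, **kw):
    # Render the SELECT using the dialect's statement compiler.
    return "CREATE VIEW %s AS %s" % (
        element.name,
        compiler.sql_compiler.process(element.selectable, literal_binds=True),
    )

metadata = MetaData()
stuff = Table("stuff", metadata, Column("id", Integer, primary_key=True))
ddl = str(CreateView("stuff_view", select(stuff.c.id)).compile())
print(ddl)
```

Executing the `CreateView` element against an engine (much like `DDL()`) would then emit the view in the database.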
 

I guess I am curious as to why there should be a built-in way to compile 
SchemaItems and then a user way to do the same thing. Is there a plan to 
unify these methods?

As a python programmer, it seems more natural to me to subclass the relevant 
class than to spam my class with decorators. Does it make sense to offer 
user-defined SchemaItems which would play well with metadata much like there 
are user-defined types?

Cheers,
M




Re: [sqlalchemy] scalar association_proxy

2011-01-19 Thread A.M.

On Jan 19, 2011, at 12:52 PM, AgentOrange wrote:

 Hi Folks,
 
 I have been scratching my head over this one all day, so any advice
 would be greatly appreciated!
 
 I have a pretty simple join table setup like this:
 
 foo foo_bar  bar
 ---
 id  id_foo (unique)  id
id_bar
more_stuff
 
 It looks like a many-to-many relationship, but it's not, since id_foo
 is unique in the join table.
 
 - There can be any number of 'foo' for each 'bar'
 - Each 'foo' has exactly one or zero 'bar'
 
 I have set up the required declarative stuff, and an association proxy
 in 'foo' to get to 'bar' in one nice step. Since there can only ever
 be one (or zero - and this is my problem) I have set it to be a
 scalar.
 
 It all works great, except that if I try and look at the value of the
 assoication proxy in a 'foo' row without a corresponding 'foo_bar',
 then I get AttributeError. It is clearly looking for a 'bar' in a
 'foo_bar' that is a None (since there is no entry), so is
 understandable; but in my case not desirable.
 
 What I would like to do is to get a 'bar' if one exists, else return a
 'None'. Is the only way to do this to write my own wrapper property
 that does a try/catch? And if I do this, will the property stop being
 nice and magical (you know, filterable and comparable etc..)
 
 I am quite a newcomer to SQLAlchemy so go easy on me!

Did you use uselist=False on foo.bar and the association proxy? It would be nice if 
you could show some code.
http://www.sqlalchemy.org/docs/orm/extensions/associationproxy.html?highlight=uselist

Cheers,
M




Re: [sqlalchemy] Re: scalar association_proxy

2011-01-19 Thread A.M.

On Jan 19, 2011, at 1:29 PM, AgentOrange wrote:

 Hi ho,
 
 Thanks for the swift reply!
 
 Did you use uselist=False on foo.bar and the association proxy? It would be nice 
 if you could show some 
 code.http://www.sqlalchemy.org/docs/orm/extensions/associationproxy.html?h...
 
 Yes I did. The code is more complicated as it has a bunch of other
 stuff in it, but here's a condensation:
 
 class Foo(Base):
id = Column(Integer, primary_key = True)
foo_bar_join = relationship(FooBar, primaryjoin = (id ==
 FooBar.id_foo), uselist = False)
bar = association_proxy('foo_bar_join', 'bar', creator = lambda x:
 FooBar(bar = x) )
 
 class Bar(Base):
id = Column(Integer, primary_key = True)
 
 class FooBar(Base):
id_foo = Column(Integer, ForeignKey('foo.id'), primary_key = True)
id_bar = Column(Integer, ForeignKey('bar.id'), primary_key = True)
 
bar = relationship(Bar, primaryjoin = (id_bar == Bar.id))
other_stuff = Column(Integer)
 
 The problem comes if you try to fetch 'bar' from an instance of 'Foo'
 for which there is no entry in the foo_bar join table.

Maybe you can squelch the exception by using a synonym with the descriptor 
argument to map to a fake property.
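As a sketch of that idea, a plain Python property in place of the association proxy can return None when the join row is absent. This is a simplified, self-contained version of the posted schema (in-memory SQLite; the property name `bar` and test data are illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Bar(Base):
    __tablename__ = "bar"
    id = Column(Integer, primary_key=True)

class FooBar(Base):
    __tablename__ = "foo_bar"
    id_foo = Column(Integer, ForeignKey("foo.id"), primary_key=True)
    id_bar = Column(Integer, ForeignKey("bar.id"), primary_key=True)
    bar = relationship(Bar)

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    foo_bar_join = relationship(FooBar, uselist=False)

    @property
    def bar(self):
        # None when there is no join row, instead of AttributeError
        join_row = self.foo_bar_join
        return join_row.bar if join_row is not None else None

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Foo(id=1))                 # a Foo with no FooBar row
    session.add(Foo(id=2))
    session.add(Bar(id=10))
    session.add(FooBar(id_foo=2, id_bar=10))
    session.commit()
    print(session.get(Foo, 1).bar)         # None
```

The trade-off is that a plain property is not usable in query filters the way the proxied attribute is.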

Cheers,
M




Re: [sqlalchemy] dirtying attributes of a user-defined type

2011-01-14 Thread A.M.

On Jan 14, 2011, at 3:26 PM, A.M. wrote:

 I suspect I have to implement special converter methods with psycopg2 
 register_adapter/new_type. That is what I am experimenting with now.


It looks like SQLAlchemy's array type support doesn't support anything beyond 
basic built-in types that accept numbers or text values. (Please do correct me 
if I am wrong.) I suspect this is a limitation of the default psycopg2 array 
support which does not allow passing a type along with the array during 
execution. It also seems to always bind arrays inline.

I worked around this by installing type converters directly in psycopg2. 
SQLAlchemy then ends up with the values as pre-cooked ACLItem and ACLItemArray 
objects. From an external API standpoint, nothing changed.

One stumbling block I had was that the code sample in the docs was too slim and 
there is no UserDefinedType example in the examples directory, so I spent a lot 
of time in the debugger. I hope this code may help others.

Here is the code that worked:
==
import sqlalchemy.types as types
import re
import sqlalchemy.exc
from sqlalchemy.dialects.postgresql import ARRAY
from project.lib.aclitem import ACLItem,ACLItemArray
from copy import deepcopy

#include/utils/acl.h
#define ACL_ALL_RIGHTS_STR "arwdDxtXUCTc"
#update for SQLAlchemy 0.7: http://www.sqlalchemy.org/docs/07/orm/extensions/mutable.html

class ACLItemArrayType(types.MutableType,types.UserDefinedType,types.Concatenable):
    def get_col_spec(self):
        return 'aclitem[]'

    def bind_processor(self,dialect):
        return None

    def result_processor(self,dialect,coltype):
        return None

    def copy_value(self,value):
        return deepcopy(value)

    def compare_values(self,a,b):
        return a == b

class ACLItemType(types.MutableType,types.UserDefinedType):
#class ACLItemType(types.UserDefinedType,types.MutableType): #FAIL

    def get_col_spec(self):
        return 'aclitem'

    def bind_processor(self,dialect):
        return None

    def copy_value(self,value):
        return deepcopy(value)

    def result_processor(self,dialect,column_type):
        return None


===
import re
from copy import deepcopy

class ACLItem(object):
    def __init__(self,grantee,permissions,grantor,grant_option=False):
        self.grantee = grantee
        self.permissions = []
        if permissions:
            for p in permissions:
                self.permissions.append(p)
        self.grantor = grantor
        self.grant_option = grant_option

    def _as_pgsql_string(self):
        #convert to string 'user grantee=perms/grantor'
        string_perms = ''
        for perm in self.permissions:
            string_perms += perm

        if self.grant_option:
            grant_option = '*'
        else:
            grant_option = ''
        return "user %s=%s%s/%s" % \
            (self.grantee,string_perms,grant_option,self.grantor)

    def __deepcopy__(self,memo):
        return ACLItem(self.grantee,self.permissions,self.grantor,self.grant_option)

    def __copy__(self):
        return deepcopy(self)

    @classmethod
    def _from_pgsql_string(klass,aclstring):
        """grantee=perms*/grantor"""
        matches = re.match(r'([^=]+)=([^/\*]+)(\*?)/(\w+)',aclstring)
        if matches is None:
            raise ValueError('string does not appear to represent a PostgreSQL ACL')
        grantee = matches.group(1)
        permissions = matches.group(2)
        grant_option = bool(len(matches.group(3)))
        grantor = matches.group(4)
        return ACLItem(grantee,permissions,grantor,grant_option)

    def __eq__(self,other_object):
        if not isinstance(other_object,self.__class__):
            return False
        return str(self) == str(other_object)

    def has_permission(self,permission_test):
        return permission_test in self.permissions

    def set_permission(self,permission,on=True):
        if not self.has_permission(permission) and on:
            self.permissions.append(permission)
        elif self.has_permission(permission) and not on:
            self.permissions.remove(permission)

    def clear_permissions(self):
        del self.permissions[:]

    def reset_with_acl(self,acl):
        """takes an acl and replaces its own settings with the argument settings.
        This is useful for cases where an acl in an array is replaced without
        creating a new aclitem, so that the array order is unmodified"""
        self.grantee = acl.grantee
        self.permissions = acl.permissions
        self.grantor = acl.grantor
        self.grant_option = acl.grant_option

    def __str__(self):
        return self._as_pgsql_string()

    def __repr__(self):
        return "ACLItem('%s',%s,'%s',%s)" % \
            (self.grantee,self.permissions,self.grantor,self.grant_option)

class ACLItemArray(list):
    #in an aclitem array, the
    def aclitem_for_grantee(self,role):
        for acl in self:
            if role

[sqlalchemy] dirtying attributes of a user-defined type

2011-01-13 Thread A.M.
Hello,

I have created an SQLAlchemy type which represents a postgresql aclitem (which 
represents postgresql access control lists). I am able to load and save 
newly-created ACLItems from the database, however, modifying the values of an 
instance of the type does not dirty it for flushing. Is there some decorator 
for dirtying accessors to the type instance convenience methods?

Specifically, modifying any of grantee, grantor, permissions, and 
grant_option, does not trigger a proper update.

Cheers,
M

import sqlalchemy.types as types
import re
import sqlalchemy.exc

#include/utils/acl.h
#define ACL_ALL_RIGHTS_STR "arwdDxtXUCTc"

class ACLItem(types.UserDefinedType):

    def __init__(self,grantee=None,permissions=None,grantor=None,grant_option=False):
        #note that sqlalchemy calls this with None arguments for processing
        self.grantee = grantee
        self.permissions = []
        if permissions:
            for p in permissions:
                self.permissions.append(p)
        self.grantor = grantor
        self.grant_option = grant_option

    def get_col_spec(self):
        return 'aclitem'

    def bind_processor(self,dialect):
        def acl2string(aclitem):
            return aclitem._as_pgsql_string()
        return acl2string

    def compare_values(self,a,b):
        return a._as_pgsql_string() == b._as_pgsql_string()

    def _as_pgsql_string(self):
        #convert to string 'user grantee=perms/grantor'
        string_perms = ''
        for perm in self.permissions:
            string_perms += perm

        if self.grant_option:
            grant_option = '*'
        else:
            grant_option = ''
        return "user %s=%s%s/%s" % \
            (self.grantee,string_perms,grant_option,self.grantor)

    @classmethod
    def _from_pgsql_string(klass,aclstring):
        """grantee=perms*/grantor"""
        matches = re.match(r'([^=]+)=([^/\*]+)(\*?)/(\w+)',aclstring)
        if matches is None:
            raise sqlalchemy.exc.DataError(aclstring,[],'')
        grantee = matches.group(1)
        permissions = matches.group(2)
        grant_option = len(matches.group(3))
        grantor = matches.group(4)
        return ACLItem(grantee,permissions,grantor,grant_option)

    def result_processor(self,dialect,column_type):
        def string2acl(aclstring):
            return ACLItem._from_pgsql_string(aclstring)
        return string2acl

    def has_permission(self,permission_test):
        return permission_test in self.permissions

    def set_permission(self,permission,on=True):
        if not self.has_permission(permission):
            if on:
                self.permissions.append(permission)
            else:
                self.permissions.remove(permission)

    def clear_permissions(self):
        del self.permissions[:]

    def __str__(self):
        return self._as_pgsql_string()




[sqlalchemy] Re: complex mapped collection

2010-12-13 Thread A.M.
On Sep 14, 2010, at 5:03 PM, A.M. wrote:

 Hello,
 
 I have a schema like this:
 
 Image
 *id
 *name
 
 Metadatakey
 *id
 *exifkey
 *name
 
 Metadata
 *id
 *image_id - Image
 *metadatakey_id - Metadatakey
 *value

Thanks to Michael Bayer, I was able to put together the image metadata model I 
posted about some months ago. 

I added an association_proxy to make the db schema more transparent which works 
well when reading (as per this example 
http://stackoverflow.com/questions/780774/sqlalchemy-dictionary-of-tags), but 
when setting an attribute through the association proxy, it seems like I would 
need access to the current session to make the determination of whether or not 
the metadatakey exists (and grab it).

Is it OK practice to use the session in the _metadataCreator method? Is there a 
better way to accomplish this? The last line of the source code below 
demonstrates what I wish to accomplish. Thanks.

-M

---

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import Integer,Unicode
from sqlalchemy.orm import relationship,scoped_session,backref
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.schema import Column,ForeignKey
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.engine import create_engine
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class Image(Base):
    __tablename__ = 'image'
    id = Column(Integer,primary_key=True)
    metadatadict = relationship('MetadataValue',
        collection_class=attribute_mapped_collection('key'),
        primaryjoin='Image.id==MetadataValue.image_id')

    def _metadataCreator(key,value):
        #--- get existing MetadataName for this key here, but how?
        return MetadataValue(namekey=key,value=value)

    metadatas = association_proxy('metadatadict','value',creator=_metadataCreator)

class MetadataName(Base):
    __tablename__ = 'metadataname'
    id = Column(Integer,primary_key=True)
    namekey = Column(Unicode,nullable=False,unique=True)

class MetadataValue(Base):
    __tablename__ = 'metadatavalue'
    id = Column(Integer,primary_key=True)
    image_id = Column(Integer,ForeignKey(Image.id),nullable=False)
    image = relationship(Image,
        primaryjoin='MetadataValue.image_id==Image.id')
    metadataname_id = Column(Integer,ForeignKey(MetadataName.id),nullable=False)
    metadataname = relationship(MetadataName,
        primaryjoin='MetadataValue.metadataname_id==MetadataName.id')
    value = Column(Unicode,nullable=False)

    @property
    def key(self):
        return self.metadataname.namekey

engine = create_engine('postgresql://localhost/test',echo=True)
session = scoped_session(sessionmaker(autocommit=False,bind=engine))

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

metadataname1 = MetadataName(namekey=u'MIMEType')
metadataname2 = MetadataName(namekey=u'PixelWidth')
img1 = Image()
metadatavalue1 = MetadataValue(image=img1,value=u'image/png',metadataname=metadataname1)

session.add_all([metadataname1,img1,metadatavalue1,metadataname2])

session.flush()

#BEGIN TEST AREA
print img1.metadatas['MIMEType']
img1.metadatas['PixelWidth'] = u'300'
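One option for the creator question above is to let the creator close over the session and do a get-or-create on MetadataName. The following is a hedged, self-contained sketch (SQLite, simplified columns, `_get_or_create_name` is an illustrative helper); whether reaching into the session from inside the creator is safe in all flush scenarios is exactly the open question:

```python
from sqlalchemy import Column, ForeignKey, Integer, Text, create_engine
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.orm import Session, declarative_base, relationship
from sqlalchemy.orm.collections import attribute_mapped_collection

Base = declarative_base()
session = None  # assigned below; the creator closes over it

class Image(Base):
    __tablename__ = "image"
    id = Column(Integer, primary_key=True)
    metadatadict = relationship(
        "MetadataValue",
        collection_class=attribute_mapped_collection("key"))

    # dict-based proxies call the creator with (key, value)
    metadatas = association_proxy(
        "metadatadict", "value",
        creator=lambda key, value: MetadataValue(
            metadataname=_get_or_create_name(key), value=value))

class MetadataName(Base):
    __tablename__ = "metadataname"
    id = Column(Integer, primary_key=True)
    namekey = Column(Text, nullable=False, unique=True)

class MetadataValue(Base):
    __tablename__ = "metadatavalue"
    id = Column(Integer, primary_key=True)
    image_id = Column(Integer, ForeignKey("image.id"), nullable=False)
    metadataname_id = Column(Integer, ForeignKey("metadataname.id"),
                             nullable=False)
    metadataname = relationship(MetadataName)
    value = Column(Text, nullable=False)

    @property
    def key(self):
        return self.metadataname.namekey

def _get_or_create_name(key):
    # reuse an existing MetadataName row, or make a new pending one
    name = session.query(MetadataName).filter_by(namekey=key).first()
    return name if name is not None else MetadataName(namekey=key)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

img = Image()
session.add(img)
img.metadatas["MIMEType"] = "image/png"
session.flush()
print(img.metadatas["MIMEType"])  # image/png
```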





Re: [sqlalchemy] Implementing fine-grained access control

2010-11-22 Thread A.M.

On Nov 22, 2010, at 9:30 AM, Dan Ellis wrote:

 I'd like to find some way to implement access controls on mapped
 objects, with the following features:
 
 * Example: given a BlogPost object, only the owner, or a superuser,
 would be allowed to set fields such as title and body.
 * Example: reading the body field would check the privacy field as
 well as the current user, and only let the owner read a private field.
 * The owner should be determined based on a configurable column name,
 or as the result of a method call.
 * The current user should be explicitly specified rather than coming
 from some global state.
 
 The intention is not to make unwanted operations impossible, but to
 offer the programmer a degree of confidence that, so long as he uses
 the object in a particular way, the security constraints he specifies
 won't be violated, regardless of logic errors elsewhere (in a web
 layer, typically).
 
 It seems that one possible way to do this would be to use proxy
 objects to access the real instances. Returning proxies doesn't seem
 difficult (a mapper extension could do this if the mapped class
 specifies it desires it). Interaction with the session might be
 problematic, though, if all you have is proxy objects.
 
 Does this seem to be the correct path to follow, or is there a better
 approach?

The approach I use is rather PostgreSQL-specific but works well for this 
scenario because security is controlled by the database instead of 
potentially-hairy python logic.

When a user first makes a request, I generate the SQLAlchemy models by 
reflecting all views and tables which the user (also mapped as a PostgreSQL 
user) can see (based on the search_path). Optionally-available base model 
classes include details for unreflectable attributes such as foreign keys for 
views. The generated model classes are then cached as modelname_role name.

By manipulating database ACLs and the search_path, it becomes trivial to push 
security and role changes.

This setup allows me to control all models via view manipulation- column- and 
row-level security as well as modifying data for specific roles while using 
bog-standard SQLAlchemy models.

Cheers,
M




Re: [sqlalchemy] association_proxy as property?

2010-11-16 Thread A.M.

On Nov 16, 2010, at 11:43 AM, Michael Bayer wrote:

 
 On Nov 16, 2010, at 11:14 AM, A.M. wrote:
 
 Hello,
 
 To generate json from our SQLAlchemy model objects, we are using 
 iterate_properties to determine how to dictify the object. One of our 
 objects uses association_proxy which we would like to represent in the JSON. 
 Unfortunately, because it is not a property, the dictification misses this 
 property. I feel like I am missing something simple here.
 
 How can I make an association_proxy appear to be a property which appears in 
 iterate_properties of the mapper?
 
 1. why not use dir(myobject) to get a view of your object as a whole ?

We swiped dictify from sprox.saormprovider:
http://bitbucket.org/percious/sprox/src/tip/sprox/saormprovider.py#cl-321

We like the behavior that a REST call to retrieve an SQLAlchemy object returns 
pks for relationships (instead of loading the entire result hierarchy), so 
subsequent REST calls are required to dig deeper into the tree of results.

Using dir() is definitely a good suggestion which I will try. You are probably 
right that we are using iterate_properties improperly.

 2. implicit conversion to JSON and such is a little sloppy.   You'd be better 
 off using a structured approach like Colander: 
 http://docs.repoze.org/colander/

It looks like I would have to either re-define all objects using the Colander 
syntax or implement a method which converts existing SQLAlchemy models to 
Colander schema objects. Even if the latter function already exists, I still 
have the problem of determining automatically which properties to encode, no?

Cheers,
M




Re: [sqlalchemy] Database Views

2010-11-08 Thread A.M.

On Nov 8, 2010, at 1:16 PM, Adrian wrote:

 Hi all,
 
 This is a topic that has been discussed before, but I haven't been
 able to successfully implement any of the proposed solutions in my own
 code. I've created a few Views in my postgres database, and I'm
 looking for a way to simply query them from sqlalchemy. I tried just
 treating them as tables, but this failed when the mapper wasn't able
 to find a primary key (makes sense).
 
 The code I tried was just a naive attempt:
 
 class SpectrumView(object):
   pass
 
 spectrum_view = Table('spectrum_view', metadata, autoload=True)
 mapper(SpectrumView, spectrum_view)
 
 So I read in some message that it might be possible to specify a
 primary key to the mapper, like
 mapper(SpectrumView, spectrum_view, primary_key=[..]), but the problem
 is I'm not defining my tables in python - they are already created, so
 I don't have any column objects to pass to primary_key.
 
 Anyway, I'm just curious to see if anyone has had success with an
 implementation of database views in sqlalchemy, and possibly examples
 of those cases.

Views are reflectable, so you can reflect the view, then add the primary key to 
the proper column in the metadata.
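A sketch of that, with SQLite standing in for the real database and `map_imperatively` as the modern spelling of the classical `mapper()` from the question (table and column names follow the thread but are otherwise illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, text
from sqlalchemy.orm import Session, registry

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE spectrum (id INTEGER PRIMARY KEY, flux REAL)"))
    conn.execute(text("CREATE VIEW spectrum_view AS SELECT id, flux FROM spectrum"))
    conn.execute(text("INSERT INTO spectrum (id, flux) VALUES (1, 2.5)"))

metadata = MetaData()
# Reflect the view, overriding "id" so the mapper sees a primary key.
spectrum_view = Table(
    "spectrum_view", metadata,
    Column("id", Integer, primary_key=True),
    autoload_with=engine,
)

class SpectrumView:
    pass

registry().map_imperatively(SpectrumView, spectrum_view)

with Session(engine) as session:
    row = session.get(SpectrumView, 1)
    print(row.flux)  # 2.5
```

The explicit Column overrides the reflected one, so the rest of the view's columns still come from reflection.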

Cheers,
M




Re: [sqlalchemy] Locking strategy

2010-09-30 Thread A.M.

On Sep 30, 2010, at 10:02 AM, Michael Bayer wrote:

 
 On Sep 30, 2010, at 4:43 AM, Warwick Prince wrote:
 
 Hi All
 
 I would like some advice / best practice on the following basic problem 
 please.  I'm new to SQL so am groping with some things that used to be 
 basic. :-(
 
 I've looked around in SA and have only found a few small notes on locking.  
 There is a for_update but when I try this it appears to be locked out for 
 me as well!  Perhaps the answer is here and I'm missing something?
 
 snip
 
 The usual approach to pessimistic locking on a simple scale is to use 
 SELECT ... FOR UPDATE NOWAIT, which locks the selected rows in the current 
 transaction.   Other transactions which attempt to get to the row are 
 blocked.   The NOWAIT means it will raise immediately instead of blocking.
 
 If you've tried this and you seem to be locked out, then you need to get 
 your transactions straight - you'd need to perform subsequent operations with 
 the same transaction as that which began the for_update.
 
 However, if the lock we're talking about isn't just a matter of ensuring 
 proper concurrency across transactions, and is more like a business-level 
 lock - i.e. User XYZ is editing this document, then yes this is not a DB 
 concurrency issue, its a datamodel one.You'd build the lock table as 
 described, and build a system within your GUI that acquires a row from this 
 table before allowing operations to proceed.
 
 How the application does that depends much on what kind of application it is 
 and over what span such a lock is created.You'd have to trap user actions 
 at the top level of activity which would be locked, sometimes Python 
 decorators are good at this, such as :
 
 @requires_lock
 def user_do_something(x, y, z, ...):
..
 
 where @requires_lock would look at the current user, maybe look at the 
 arguments of user_do_something(), determine if that user has acquired a 
 locking object. 

Another option is to use database advisory locks (where available).

http://www.postgresql.org/docs/8.2/interactive/explicit-locking.html Advisory 
Locks

Cheers,
M




[sqlalchemy] postgresql EXCLUDE constraint

2010-09-23 Thread A.M.
Hello,

I am using this PostgreSQL exclusion constraint:

CONSTRAINT only_one_valid EXCLUDE USING (synthetic_id WITH 
=,COALESCE(obsoleteby,'') WITH =),

How can I represent this using SQLAlchemy Table metadata so that I can create 
the table from the metadata?
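For later readers: newer SQLAlchemy releases added a PostgreSQL-specific ExcludeConstraint construct that covers the plain-column case. The COALESCE(obsoleteby,'') expression from the constraint above would need an SQL expression element rather than a bare column name; it is omitted in this sketch for simplicity, and the table/column names are taken from the constraint:

```python
from sqlalchemy import Column, MetaData, Table, Text
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ExcludeConstraint
from sqlalchemy.schema import CreateTable

metadata = MetaData()
things = Table(
    "things", metadata,
    Column("synthetic_id", Text),
    Column("obsoleteby", Text),
    # EXCLUDE USING gist (synthetic_id WITH =, obsoleteby WITH =)
    ExcludeConstraint(
        ("synthetic_id", "="),
        ("obsoleteby", "="),
        name="only_one_valid",
    ),
)
ddl = str(CreateTable(things).compile(dialect=postgresql.dialect()))
print(ddl)
```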

Cheers,
M




Re: [sqlalchemy] Setting the starting value of an autoincrement field?

2010-09-20 Thread A.M.

On Sep 20, 2010, at 11:00 PM, Russell Warren wrote:

 Is there a way in the standard SQLA dialect support to set the
 starting value of an autoincrement field in the SQLA table definition?
 
 I can't see anything in the docs or code, but something like this is
 what I'm thinking:
 
 empnum = Column(sqla.Integer,
primary_key = True,
autoincrement = True,
autoincrementstart = 1)

Look at class sqlalchemy.schema.Sequence(name, start=None, increment=None,...)
http://www.sqlalchemy.org/docs/core/schema.html?highlight=sequence#sqlalchemy.schema.Sequence


Table('sometable', metadata,
Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
)
http://www.sqlalchemy.org/docs/dialects/postgresql.html?highlight=sequence
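The start value can be checked by compiling the sequence DDL directly (a small sketch; the sequence name is illustrative):

```python
from sqlalchemy import Sequence
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateSequence

seq = Sequence("empnum_seq", start=1000)
ddl = str(CreateSequence(seq).compile(dialect=postgresql.dialect()))
print(ddl)
```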

Cheers,
M




Re: [sqlalchemy] Use regexp in like

2010-09-17 Thread A.M.

On Sep 17, 2010, at 11:12 AM, Michael Bayer wrote:

 
 On Sep 17, 2010, at 10:58 AM, Michael Bayer wrote:
 
 
 On Sep 17, 2010, at 9:14 AM, Michael Hipp wrote:
 
 On 9/14/2010 2:23 PM, Michael Hipp wrote:
 Is it possible to use a regexp in a like() clause? Or some other way to 
 achieve
 something similar?
 
 Can anyone suggest an approach to search a field with a regexp?
 
 if you were using Postgresql, you could use somecolumn.op("~")(someregexp)
 
 http://www.postgresql.org/docs/8.4/interactive/functions-matching.html#FUNCTIONS-POSIX-REGEXP
 
 cant speak for other platforms though you'd have to consult their 
 documentation.
 
 
 PG also supports MATCH since I see we have some unit tests for that, i.e. 
 column.match(other). I'm not seeing offhand in PG's docs what it 
 interprets the MATCH operator as, i.e. is it a ~, SIMILAR TO, not sure.
 
 I guess the reasons I've never had interest in regexp matching in databases 
 are:
 
 1. its always bad to search through tables without being able to use indexes
 2. if you're needing to dig into text, it suggests the atoms of that text 
 should be represented individually in their own column (i.e. normalize)
 3. no really, I'm doing flat out full text searching on documents and don't 
 want to reinvent.  Well then I'd use a full blown text extension 
 (http://www.postgresql.org/docs/8.3/static/textsearch.html)  or a separate 
 search engine.

The SQLAlchemy match operator compiles to PostgreSQL full-text search (@@ 
to_tsquery).
test/dialect/test_postgresql.py
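A small sketch of the .op() approach mentioned earlier in the thread, compiled against the PostgreSQL dialect (table name and regexp pattern are illustrative):

```python
from sqlalchemy import Column, MetaData, Table, Text, select
from sqlalchemy.dialects import postgresql

metadata = MetaData()
docs = Table("docs", metadata, Column("body", Text))

# column.op("~") produces PostgreSQL's POSIX regexp match operator
stmt = select(docs).where(docs.c.body.op("~")("^ab+c"))
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```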

Cheers,
M
