[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread JP

Yeah, I think I explained badly.

What I was trying to show with the sqlalchemy vs postgres query logs
is that extra BEGIN that psycopg2 is sending before the SELECT that
sqlalchemy executes. The BEGIN is implicitly added by psycopg2 because
it's in its default transaction isolation state of read committed.
Which I didn't know was the DBAPI default -- so strike what I said
about changing any default behavior.

But it would be nice to be able to have total control over the
transactional state of the connection, so that when I know that I'm
just doing a select or two I don't have to have the overhead of a
BEGIN that I know is useless, but as things stand I can't do that,
because do_begin isn't implemented for any of the dialects. I guess
that's because sqlalchemy is depending on the underlying DBAPI
implementation to handle sending BEGIN statements before the first
query that is outside of a transaction?

I noticed a TODO about implementing true autocommit behavior, which is
what I need -- a way to tell the dbapi module *not* to start those
implicit transactions. If the engine/connection could be put into true
autocommit mode, and issue connection.begin() from do_begin when in
that mode, then I could do everything I want to do and I don't think
anything would break, since the orm calls do_begin before each flush.
I know how to change the isolation level in the sqlite and postgres
dialects -- would you be interested in a patch or work on a branch
where I added an isolation_level property to the engine, implemented
it for those two, and for those two dialects had do_begin issue a
connection.begin() when the isolation_level was set to autocommit? I
could probably tackle mysql as well, but I have no access to mssql or
oracle, so I can't do those.
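
To make the kind of control I'm asking for concrete: the stdlib sqlite3 driver already exposes exactly this switch through its isolation_level parameter (None means true autocommit). This is only an illustration of the behavior I'm after, not proposed SA code:

```python
import sqlite3

# Default (non-autocommit) mode: the driver implicitly issues BEGIN
# before the first modifying statement, much as psycopg2 does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('jp')")
print(conn.in_transaction)  # True: an implicit BEGIN was issued
conn.commit()

# True autocommit: isolation_level=None tells the driver *not* to
# start implicit transactions; every statement takes effect at once.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
auto.execute("INSERT INTO users (name) VALUES ('jp')")
print(auto.in_transaction)  # False: no implicit BEGIN
```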

Hopefully I'm making some more sense this time...

JP


On Feb 12, 12:42 am, Michael Bayer [EMAIL PROTECTED] wrote:
 im confused.  the django thread seems to be saying that it is setting
 psycopg2 *into* autocommit mode, which causes the problem; the default
 setting of autocommit for DBAPI (and maintained in SA) is False.  When
 i wrote frameworks years ago I always thought autocommit mode was
 the way to go but i since learned that issues like this arise  so SA
 *never* uses autocommit mode on the connection; instead it implements
 its own autocommitting behavior in a manner similar to hibernate -
 it looks at the statement being executed, determines if its a CRUD/
 CREATE/DROP expression, and then explicitly calls COMMIT if no SA
 transaction is already in progress.

 the log you have above doesnt prove that anything unusual is going on,
 since you are illustrating a transactional operation, then a SELECT,
 then another transactional operation, then another SELECT. SA does an
 explicit COMMIT for the CREATE statements since they are required to
 complete the table creation operation.

 this test confirms that psycopg2 defaults to autocommit as false and
 doesnt do anything with transaction modes after the connection is
 opened:

 import psycopg2 as psycopg

 conn = psycopg.connect(user='scott', password='tiger',
 host='127.0.0.1', database='test')

 for x in range(0, 5):
     curs = conn.cursor()
     curs.execute("SELECT 1")
     curs.close()

 log output:

 LOG:  statement: SET DATESTYLE TO 'ISO'
 LOG:  statement: SHOW client_encoding
 LOG:  statement: SHOW default_transaction_isolation
 LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: SELECT 1
 LOG:  statement: ABORT

 one connection, five new cursors, only one set of setup
 corresponding to the connection.
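
The statement-sniffing autocommit described above amounts to something like this (a rough sketch; the regex and names are mine, not SA's actual implementation):

```python
import re

# Framework-level "autocommit": look at the statement being executed,
# and COMMIT only after ones that modify the database, and only when
# no explicit transaction is already in progress.
AUTOCOMMIT_RE = re.compile(
    r"\s*(INSERT|UPDATE|DELETE|CREATE|DROP|ALTER)\b",
    re.IGNORECASE,
)

def should_autocommit(statement):
    return AUTOCOMMIT_RE.match(statement) is not None

def execute(conn, statement, in_transaction=False):
    curs = conn.cursor()
    curs.execute(statement)
    if should_autocommit(statement) and not in_transaction:
        conn.commit()
    return curs
```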

 On Feb 9, 3:52 pm, JP [EMAIL PROTECTED] wrote:

  I noticed this thread on django-devs:

 http://groups.google.com/group/django-developers/browse_frm/thread/52...

  Which notes that psycopg2 by default starts transactions behind the
  scenes, even for select statements. If you happen to be running a web
  app where each hit starts a new cursor, and you only do a few selects,
  this adds a *lot* of overhead relative to autocommit.

  I wondered if sqlalchemy suffered from the same problem with psycopg2,
  and it does. Take a look at what sqlalchemy thinks it was doing for a
  short example, vs what the postgres query log contains.

  The code:

   import sqlalchemy as sa
   meta = sa.DynamicMetaData()
   users = sa.Table('users', meta, sa.Column('id', sa.Integer, 
   primary_key=True, autoincrement=True), sa.Column('name', sa.String(50)))
   meta.connect('postgres://snip', debug=True)
   meta.create_all()
   users.select().execute()

  sqlalchemy query log:
  select relname from pg_class where lower(relname) = %(name)s
  CREATE TABLE users (
  id SERIAL NOT NULL,
  name VARCHAR(50),
  PRIMARY KEY (id)
  )
  COMMIT
  SELECT users.id, users.name FROM users

  postgres query log:
  LOG:  statement: SET DATESTYLE TO 'ISO'
  LOG:  

[sqlalchemy] Re: Adding with (nolock) to all queries (mssql)

2007-02-12 Thread Michael Bayer



On Feb 12, 4:25 am, Arnar Birgisson [EMAIL PROTECTED] wrote:
 Hi again,

 Yes, we thought that might be the case. We tried changing the
 isolation level on this connection to snapshot. Now I'm getting
 different errors, and more frequent.

 One error I get is this:

 AttributeError: 'ColumnProperty' object has no attribute 'strategy'

that should have nothing to do with transaction isolation.  thats some
mixup of column/relation-based properties on a class (quite a strange
one).


 and another error is object not subscriptable when doing something
 like eststatus = result.fetchone()[0]

how do you know fetchone() is returning a row ?


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] cross DB development, columns lower/upper case letters

2007-02-12 Thread vkuznet

Hi,
I'm trying to develop a cross-DB application which works with ORACLE
and MySQL back-ends.
Both DBs have the same schema, but of course there is a caveat: ORACLE
has tables and columns in upper case and MySQL does not. That leads to
the following problem. When I construct
select(table.c.column)
I run into the problem that I need to specify the column in lower case
for MySQL and in upper case for ORACLE. With table names it's easy, I
can use table aliases. But how do I avoid the problem with column names?

Thanks,
Valentin.





[sqlalchemy] Re: Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)

2007-02-12 Thread Luke Stebbing

Right, delete-orphan is what adds the lifecycle relationship between
parent and child. It means that the child can't exist without a
parent. That lets SA know that it should eliminate the child rather
than trying to null out the relationship.

You probably want "all" so that all actions performed on the parent
will propagate to the child, which will remove the need to save
children directly. See here also:

http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_cascade


Cheers,

Luke

On Feb 11, 1:49 pm, Nebur [EMAIL PROTECTED] wrote:
  The minimum correction of my above code seems to be 2 lines:

  1. The cascade rule changed from "delete" into "all,delete-orphan"

 No, it turned out there is a still smaller change:
 changing the cascade rule from "delete" into "delete, delete-orphan"
 will work, too.
 The "delete-orphan" makes up the difference.
  Ruben





[sqlalchemy] Tracking changes on mapped entities

2007-02-12 Thread Allen Bierbaum

Is it possible to examine the session and get a list of all mapped
instances that have been changed?

More details:

I would like to implement an observer pattern in my application.  I
would like to let the code make changes to mapped objects as normal,
but immediately before (or after) a session flush() I need to notify
any interested observers that the state in the mapped objects has
changed.  I know I could do this by adding my own attribute state
tracking to the mapped classes, but I was hoping that since SA is
already doing this there may be a way to just make use of SA's change
tracking.
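
The DIY fallback I have in mind would look something like this, in plain Python with no SA (SA presumably keeps equivalent per-attribute state internally, which is what I'm hoping to tap into instead):

```python
class ChangeTracked:
    """Minimal hand-rolled dirty tracking: record which attributes
    changed so observers can be notified before a flush.  Purely
    illustrative of the fallback, not how SA does it."""

    def __init__(self):
        object.__setattr__(self, "_dirty", set())

    def __setattr__(self, name, value):
        if not name.startswith("_"):
            self._dirty.add(name)
        object.__setattr__(self, name, value)

    def changed_attributes(self):
        return set(self._dirty)

    def mark_clean(self):
        self._dirty.clear()


class User(ChangeTracked):
    def __init__(self, name):
        super().__init__()
        self.name = name


u = User("alice")
u.mark_clean()                 # pretend we just loaded/flushed
u.name = "bob"
print(u.changed_attributes())  # {'name'}
```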

Has anyone done anything like this?

Thanks,
Allen




[sqlalchemy] How to determine if an instance has been populated?

2007-02-12 Thread Matt Culbreth

Hello Friends,

I'm working with the latest version of SQLAlchemy now and I have a
question: how do I determine if a particular mapped object instance
has been populated by the database?

The question originates because I have defined a __repr__() method on
one of my mapped objects.  It works fine if the object has been
loaded, but throws a TypeError exception until that time because one
of the statements in the __repr__() method is using an 'int' type.

I can easily handle this by checking for None of course, but is there
a more standard way people use the tool?
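
For concreteness, the None-check I mean looks like this (Item is just a stand-in for one of my mapped classes; the "%d" formatting is where the TypeError comes from):

```python
class Item:
    """Stand-in for a mapped class; id stays None until the row
    has actually been populated by the database."""

    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name

    def __repr__(self):
        # "%d" % None raises TypeError, hence the guard for the
        # not-yet-populated case
        if self.id is None:
            return "<Item (unloaded)>"
        return "<Item %d: %s>" % (self.id, self.name)


print(repr(Item()))           # <Item (unloaded)>
print(repr(Item(7, "spam")))  # <Item 7: spam>
```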

Thanks,

Matt





[sqlalchemy] Announcing Elixir!

2007-02-12 Thread Jonathan LaCour

Today, we are pleased to announce the release of Elixir
(http://elixir.ematia.de), a declarative mapper for SQLAlchemy.  Elixir
is the successor to ActiveMapper and TurboEntity, and is a collaboration
between Daniel Haus, Jonathan LaCour and Gaëtan de Menten.  Elixir's
website provides installation instructions, a tutorial, extensive
documentation, and more.

The eventual goal of Elixir is to become an official SQLAlchemy
extension after some time soliciting feedback, bug reports, and testing
from users.

Daniel Haus
 http://www.danielhaus.de

Gaëtan de Menten
 http://openhex.com

Jonathan LaCour
 http://cleverdevil.org






[sqlalchemy] deletion behavior question

2007-02-12 Thread iain duncan

I would like some objects that are related through many to many tables
to delete the many to many entry on deletion, but NOT the endpoint. It
seems that cascade="all" deletes both, and no cascade arg leaves
left-over invalid entries in the many-to-many table. Is there a suggested way
to deal with this?

Should I add a destructor to the classes that wipes out the many to many
entry with a sql delete clause?

Thanks
Iain





[sqlalchemy] drop_all not working for me

2007-02-12 Thread percious

See test case:

from turbogears import config
from turbogears.database import metadata
from turbogears import database

from sqlalchemy import Table, Column, Integer, Unicode
import sqlalchemy.orm

config.update({"sqlalchemy.dburi": "sqlite:///:memory:"})
database.bind_meta_data()

Table('t_table', metadata,
  Column('id', Integer, primary_key=True),
  Column('data', Unicode(255)),
  ).create()

Table('t_table_history', metadata,
  Column('id', Integer, primary_key=True),
  Column('data', Unicode(255)),
  ).create()

assert  metadata.tables.keys() == ['t_table', 't_table_history']

metadata.drop_all(tables=['t_table_history',])

#fails
assert  metadata.tables.keys() == ['t_table']





[sqlalchemy] Re: cross DB development, columns lower/upper case letters

2007-02-12 Thread Michael Bayer

are these column names using MixedCase ?  otherwise you can probably
access them in a case-insensitive fashion (oracle col names are
usually case-insensitive)

On Feb 12, 11:35 am, vkuznet [EMAIL PROTECTED] wrote:
 Hi,
 I'm trying to develop a cross-DB application which works with ORACLE
 and MySQL back-ends.
 Both DBs have the same schema, but of course there is a caveat: ORACLE
 has tables and columns in upper case and MySQL does not. That leads to
 the following problem. When I construct
 select(table.c.column)
 I run into the problem that I need to specify the column in lower case
 for MySQL and in upper case for ORACLE. With table names it's easy, I
 can use table aliases. But how do I avoid the problem with column names?

 Thanks,
 Valentin.





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread Michael Bayer


On Feb 12, 10:30 am, JP [EMAIL PROTECTED] wrote:

 But it would be nice to be able to have total control over the
 transactional state of the connection, so that when I know that I'm
 just doing a select or two I don't have to have the overhead of a
 BEGIN that I know is useless, but as things stand I can't do that,
 because do_begin isn't implemented for any of the dialects. I guess
 that's because sqlalchemy is depending on the underlying DBAPI
 implementation to handle sending BEGIN statements before the first
 query that is outside of a transaction?

DBAPI, including psycopg2, has no begin() method; there is only
rollback() and commit().   like the example illustrates, there is no
BEGIN being issued for every SELECT statement when using psycopg2 in
non-autocommit mode, which applies to SQLAlchemy as well.   therefore
there is no performance bottleneck; this is a django issue only.


 I noticed a TODO about implementing true autocommit behavior, which is
 what I need -- a way to tell the dbapi module *not* to start those
 implicit transactions.

it doesnt.  run the example - no implicit transaction beyond the
initial connection (which is required, since it must implement
rollback() and commit()).  SA has been doing its own autocommit on top
of non-autocommitting connections since the 0.2 series at least.

running in non-auto-commit mode is standard practice for databases
like oracle.  theres always a transaction in place.  but its not
started for every select statement.  only at connection time and after
each rollback or commit.





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread Michael Bayer

drop_all() doesnt remove Table instances from the metadata.  the Table
object is a python object, it only represents your real database
table.  you may well want to call create_all() again using that same
Table.

On Feb 12, 3:20 pm, percious [EMAIL PROTECTED] wrote:
 See test case:

 from turbogears import config
 from turbogears.database import metadata
 from turbogears import database

 from sqlalchemy import Table, Column, Integer, Unicode
 import sqlalchemy.orm

 config.update({"sqlalchemy.dburi": "sqlite:///:memory:"})
 database.bind_meta_data()

 Table('t_table', metadata,
   Column('id', Integer, primary_key=True),
   Column('data', Unicode(255)),
   ).create()

 Table('t_table_history', metadata,
   Column('id', Integer, primary_key=True),
   Column('data', Unicode(255)),
   ).create()

 assert  metadata.tables.keys() == ['t_table', 't_table_history']

 metadata.drop_all(tables=['t_table_history',])

 #fails
 assert  metadata.tables.keys() == ['t_table']





[sqlalchemy] Re: deletion behavior question

2007-02-12 Thread Michael Bayer


On Feb 12, 3:00 pm, iain duncan [EMAIL PROTECTED] wrote:
 I would like some objects that are related through many to many tables
 to delete the many to many entry on deletion, but NOT the endpoint. It
 seems that cascade=all deletes both, and no arg to cascade leaves left
 over invalid entries in the manytomany table. Is there a suggested way
 to deal with this?

if the relationship between two objects is being broken, then the
corresponding rows in the many-to-many table should be removed as
well, provided SA is familiar with the relationship.  no cascade rules
should be needed. feel free to post an example thats not working.





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread percious



On Feb 12, 3:49 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 drop_all() doesnt remove Table instances from the metadata.  the Table
 object is a python object, it only represents your real database
 table.  you may well want to call create_all() again using that same
 Table.

 On Feb 12, 3:20 pm, percious [EMAIL PROTECTED] wrote:

  See test case:

  from turbogears import config
  from turbogears.database import metadata
  from turbogears import database

  from sqlalchemy import Table, Column, Integer, Unicode
  import sqlalchemy.orm

  config.update({"sqlalchemy.dburi": "sqlite:///:memory:"})
  database.bind_meta_data()

  Table('t_table', metadata,
Column('id', Integer, primary_key=True),
Column('data', Unicode(255)),
).create()

  Table('t_table_history', metadata,
Column('id', Integer, primary_key=True),
Column('data', Unicode(255)),
).create()

  assert  metadata.tables.keys() == ['t_table', 't_table_history']

  metadata.drop_all(tables=['t_table_history',])

  #fails
  assert  metadata.tables.keys() == ['t_table']

Would it make sense to add the following code to line 905 in your
schema.py???
if tables is None:
    self.tables.clear()
else:
    for table in tables:
        if type(table) is str:
            del self.tables[table]
        else:
            for k, t in self.tables.iteritems():
                if t is table:
                    del self.tables[k]





[sqlalchemy] Re: How to determine if an instance has been populated?

2007-02-12 Thread Michael Bayer

check for an _instance_key attribute.

On Feb 12, 1:52 pm, Matt Culbreth [EMAIL PROTECTED] wrote:
 Hello Friends,

 I'm working with the latest version of SQLAlchemy now and I have a
 question: how do I determine if a particular mapped object instance
 has been populated by the database?

 The question originates because I have defined a __repr__() method on
 one of my mapped objects.  It works fine if the object has been
 loaded, but throws a TypeError exception until that time because one
 of the statements in the __repr__() method is using an 'int' type.

 I can easily handle this by checking for None of course, but is there
 a more standard way people use the tool?

 Thanks,

 Matt





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread Michael Bayer

not if you understood my previous reply, no.

On Feb 12, 3:55 pm, percious [EMAIL PROTECTED] wrote:
 On Feb 12, 3:49 pm, Michael Bayer [EMAIL PROTECTED] wrote:



  drop_all() doesnt remove Table instances from the metadata.  the Table
  object is a python object, it only represents your real database
  table.  you may well want to call create_all() again using that same
  Table.

  On Feb 12, 3:20 pm, percious [EMAIL PROTECTED] wrote:

   See test case:

   from turbogears import config
   from turbogears.database import metadata
   from turbogears import database

   from sqlalchemy import Table, Column, Integer, Unicode
   import sqlalchemy.orm

   config.update({"sqlalchemy.dburi": "sqlite:///:memory:"})
   database.bind_meta_data()

   Table('t_table', metadata,
 Column('id', Integer, primary_key=True),
 Column('data', Unicode(255)),
 ).create()

   Table('t_table_history', metadata,
 Column('id', Integer, primary_key=True),
 Column('data', Unicode(255)),
 ).create()

   assert  metadata.tables.keys() == ['t_table', 't_table_history']

   metadata.drop_all(tables=['t_table_history',])

   #fails
   assert  metadata.tables.keys() == ['t_table']

 Would it make sense to add the following code to line 905 in your
 schema.py???
 if tables is None:
     self.tables.clear()
 else:
     for table in tables:
         if type(table) is str:
             del self.tables[table]
         else:
             for k, t in self.tables.iteritems():
                 if t is table:
                     del self.tables[k]





[sqlalchemy] Re: How to determine if an instance has been populated?

2007-02-12 Thread Matt Culbreth

That got it, thanks.

On Feb 12, 3:57 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 check for an _instance_key attribute.

 On Feb 12, 1:52 pm, Matt Culbreth [EMAIL PROTECTED] wrote:

  Hello Friends,

  I'm working with the latest version of SQLAlchemy now and I have a
  question: how do I determine if a particular mapped object instance
  has been populated by the database?





[sqlalchemy] Re: cross DB development, columns lower/upper case letters

2007-02-12 Thread Michael Bayer


On Feb 12, 3:45 pm, vkuznet [EMAIL PROTECTED] wrote:
 No, here is real problem:

 schema had the following:

 create table Foo (id int);

 in MySQL it creates table 'Foo' and column 'id'.
 in ORACLE it creates table 'FOO' and column 'ID'

 That's create a problem in sqlalchemy, when I access columns. I can
 make an alias for table names, e.g. tf,
 but then for MySQL I end up using tf.c.id and for ORACLE I need
 tf.c.ID.
 and I cannot use tf.c.ID in MySQL (or tf.c.id in ORACLE) since such
 columns doesn't exists in sqlalchemy
 table object.

look at the echoed SQL.  is SQLAlchemy putting quotes around the
identifier names ?  if not, then you are accessing in a case-
insensitive fashion - the id/ID case doesnt matter.  put the
case_sensitive=False flag on your MetaData and see if that works.
i'd also advise not using any MixedCase identifier names (which "Foo"
is).





[sqlalchemy] schema comments

2007-02-12 Thread Sean Davis
I looked over the docs but didn't find an answer (so the answer is probably
no).  Does SA allow descriptions (database comments) of tables and/or
columns?

Thanks,
Sean




[sqlalchemy] Re: Never-saved orphans are an error?

2007-02-12 Thread SteveTether



On Feb 10, 11:48 am, Michael Bayer [EMAIL PROTECTED] wrote:
  del p.children[3]

 basically cascade has nothing to say about this operation when applied to
 transient instances.  the history tracking will not see the item as ever
 being part of the children collection since you added it then removed it
 without flushing.

Cascade *does* have something to say about it because the instance was
not transient. Like a dummy I used the "all" cascade option on the
relation, which caused the child to get attached to the session, i.e.,
go from transient to pending, as soon as it was added to the
parent's list of children. The full error message, which I should have
included, was:

sqlalchemy.exceptions.FlushError: instance Child('Cheshire') is an
unsaved, pending instance and is an orphan (is not attached to any
parent 'Parent' instance via that classes' 'children' attribute)

I checked sess.new after the del and there it was, without my having
called sess.save() on it. In my application I was after fully explicit
control of sess.save() operations, applying them only to members of
the data structure and only (of course) if they were not already
attached to the session. I'm afraid my over-simplified example didn't
make my motives clear, sorry.

 also, the delete cascade operation does not remove the item from the
 session, it only marks the item to be deleted from the DB, which after a
 flush results in the item being removed from the session.  but this item
 cant be deleted since it was never persisted.  SA basically knows nothing
 about the instance's relationships to anything else and would rather not
 guess what you'd like to do with it.

The delete cascade does not apply since I never called
sess.delete(). It's delete-orphan that applies here, and I can avoid
the problem by removing "all" from the cascade options. I'm
structuring my application so that the high-level logic layer does not
know how persistence is implemented, so most of the cascade options
should not be in effect. Everything that should be saved is part of a
graph having a well-defined traversal which the persistence layer can
use to make sure that all the nodes get saved.
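
Sketched in plain Python (the session API here is only schematic, not SA's):

```python
class FakeSession:
    """Schematic stand-in for a session: supports membership tests
    and explicit save().  Illustrative only."""
    def __init__(self):
        self.saved = []
    def __contains__(self, obj):
        return obj in self.saved
    def save(self, obj):
        self.saved.append(obj)


class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)


def save_graph(root, session):
    """Walk the well-defined traversal from `root` and explicitly
    save every node the session doesn't already know about."""
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if id(node) in seen:
            continue
        seen.add(id(node))
        if node not in session:
            session.save(node)
        stack.extend(node.children)


leaf = Node("leaf")
root = Node("root", [Node("a", [leaf]), Node("b", [leaf])])
sess = FakeSession()
save_graph(root, sess)
print([n.name for n in sess.saved])  # root first; leaf saved exactly once
```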

By the way, the orphan objects that prompted the FlushError in my
application weren't even known to the high-level layer; they were
association objects created because I was managing a many-to-many
relation with AssociationProxy. The _AssociationList used with it
doesn't implement __delitem__ or slicing. It was only when I added
those for the sake of transparency that I ran into the flush error. I
want the high-level layer to be able to use the full set of list
operations even on lists managed by SA.





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread JP

 like the example illustrates, there is no
 BEGIN being issued for every SELECT statement when using psycopg2 in
 non-autocommit mode, which applies to SQLAlchemy as well.   therefore
 there is no performance bottleneck; this is a django issue only.

I guess we're reading the example differently. I see this BEGIN:

LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: SELECT users.id, users.name FROM users

in the postgres log, but as far as I can tell sqlalchemy is not
sending it; it's being implicitly inserted by psycopg2 because the
isolation level of the connection is read-committed and it sees a
statement outside of a transaction. If the connection were in the
autocommit isolation level, doing the same thing in sqlalchemy would
not result in that BEGIN, only the SELECT would be issued.

  I noticed a TODO about implementing true autocommit behavior, which is
  what I need -- a way to tell the dbapi module *not* to start those
  implicit transactions.

 it doesnt.  run the example - no implicit transaction beyond the
 initial connection (which is required, since it must implement
 rollback() and commit()).

Either I'm very confused or I'm having lots of trouble getting my
point across. In my experience, it is absolutely the case that the
dbapi drivers are starting implicit transactions when they aren't in
autocommit mode -- otherwise I don't see how any transactions would
work correctly, since do_begin is a no-op everywhere -- so if the
dbapi modules aren't inserting the BEGIN to start transactions, I
don't see where it could be coming from. What am I missing? If I'm
wrong about what's going on here, how does sqlalchemy start
transactions when it needs to, and where is that BEGIN before the
select in my example coming from?

I have an example that I think illustrates what I'm talking about
pretty well, so I'm going to file a ticket and attach the example and
the postgres query log I see when I run it. Hopefully that will clear
things up one way or another.

JP

