[sqlalchemy] Re: Calculated columns

2007-02-12 Thread Jonathan Ellis

Instead of mapping your table directly, map a select containing the coalesce:

incidents_with_activity = select(
    [incidents, func.coalesce(...).label('activity')]
).alias('incidents_with_activity')
assign_mapper(Incident, incidents_with_activity)

then you can use the activity label anywhere in the ORM queries.
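To see what the mapped select buys you, the query it generates boils down to SQL of roughly this shape. The sketch below uses the stdlib sqlite3 module and made-up data; the table and column names follow the thread, everything else is hypothetical:

```python
import sqlite3

# In-memory stand-in for the incidents table from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE incidents (
        id INTEGER PRIMARY KEY,
        title TEXT,
        last_entry_date TEXT,
        activity_date TEXT,
        create_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO incidents (title, last_entry_date, activity_date, create_date)"
    " VALUES (?, ?, ?, ?)",
    [
        ("a", None, None, "2007-01-01"),
        ("b", "2007-02-10", "2007-02-01", "2007-01-15"),
        ("c", None, "2007-02-05", "2007-01-20"),
    ],
)

# COALESCE picks the first non-NULL date; labeling it makes it usable
# in the select list and in ORDER BY alike, which is what mapping the
# select gives you at the ORM level.
rows = conn.execute("""
    SELECT title,
           COALESCE(last_entry_date, activity_date, create_date) AS activity
    FROM incidents
    ORDER BY activity DESC
    LIMIT 10
""").fetchall()
print(rows)  # → [('b', '2007-02-10'), ('c', '2007-02-05'), ('a', '2007-01-01')]
```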

On 2/12/07, Mike Orr <[EMAIL PROTECTED]> wrote:
>
> I have a MySQL table with three date fields.  I need to use the
> coalesced value (i.e., the first one that's not NULL)  in WHERE,
> HAVING, ORDER BY, and I'd like it to also appear as an attribute in my
> session-mapped class.  I can see how to do parts of this, but not how
> to put it all together.
>
> My current task is to get the ten most recent records and display
> their title and date in a Pylons application.
>
> 
> class Incident(object):
>     pass
>
> assign_mapper(ctx, Incident, incident_table)
>
> activity = sa.func.coalesce(
>     Incident.c.last_entry_date,
>     Incident.c.activity_date,
>     Incident.c.create_date,
> )
>
> recent = sa.select(
>     [Incident.c.id, Incident.c.title, activity.label("activity")],
>     Incident.c.is_public,
>     order_by=[sa.desc(activity)], limit=10).execute().fetchall()
> 
>
> This works but I'd rather get the result fields as attributes rather
> than keys.  I could wrap the results in a custom class but I may as
> well use the Incident class that's sitting there.
>
> 
> class Incident(object):
>     @property
>     def activity(self):
>         return (self.last_entry_date or self.activity_date or
>                 self.create_date)
> 
>
> I haven't tried this yet but I'd be specifying the coalescence twice
> in different ways, which is not ideal.  Is this the best way to go?
>
> I could just use the property and do the sorting/eliminating in
> Python, but then I'd be pulling in all these database records I'll
> never select.
>
> I guess what I'd really like is a calculated column in the table
> object, which I could use in all the clauses and also propagate to
> Incident.  But I'd have to append it to an autoloaded table object,
> because my table has some thirty columns and is maintained by another
> application, so I don't want to keep updating its schema as the
> underlying table changes.
>
> --Mike

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: bindparams don't work with in_?

2007-02-12 Thread dykang

ah ha! That's unfortunate as it prevents me from precompiling any
query with an 'in' in the where clause, but it makes sense after you
mention it. Anyway, thanks for pointing out my mistake,

David

On Feb 11, 9:56 pm, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> ive been alerted that the bindparam() function doesnt work inside of
> in_() and theres a new ticket that addresses this issue, but i havent
> had time to confirm.
>
> however, even if this is fixed, your usage would still be incorrect
> (except perhaps with postgres, which supports an ARRAY type
> implicitly); normally you cant pass an array to a bind parameter
> value.  you would have to have a bindparam() instance for each scalar
> value youd like to pass, i.e.:
>
> s = table.select(table.c.somecol.in_(bindparam('param1'),
>                                      bindparam('param2'),
>                                      bindparam('param3')))
> s.execute(param1=2, param2=3, param3=4)
>
> On Feb 10, 4:46 pm, "dykang" <[EMAIL PROTECTED]> wrote:
>
> > from sqlalchemy import *
> > I was having some trouble understanding how to use the bindparams, and
> > I haven't been able to get them to work with the in_ operator. The
> > following code is a simple demonstration of what I'm trying (with
> > mysql). It connects to  a db, creates a small table and then tries to
> > compile and execute a query that uses in_.  When I try to execute the
> > code, I get the following exception:
> > sqlalchemy.exceptions.SQLError: (TypeError) not all arguments
> > converted during string formatting 'SELECT testings.id,
> > testings.some_data, testings.some_int \nFROM testings \nWHERE
> > testings.id = %s' [[2, 3, 4]]
>
> > I'm not really clear on how to use the bindparams properly, but this
> > seems to be incorrect, any help would be appreciated,
>
> > David
> > #==begin source below
>
> > meta = DynamicMetaData ()
> > meta.connect ('some uri')
> > meta.engine.echo=True
> > dbsession = create_session (bind_to=(meta.engine))
>
> > TestTbl = Table ('testings', meta,
> >     Column ('id', Integer, Sequence ('testings_id_seq'),
> >             primary_key=True),
> >     Column ('some_data', Unicode (40)),
> >     Column ('some_int', Integer),
> >     mysql_engine='InnoDB')
>
> > class Testing (object):
> >     pass
>
> > TestTbl.create ()
>
> > s = select ([TestTbl], TestTbl.c.id.in_(bindparam('my_id'))).compile()
> > some_mapper = mapper (Testing, TestTbl)
> > results = dbsession.query(Testing).instances (s.execute(my_id=[2,3,4]))
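The point in the reply above — that a DBAPI driver binds one placeholder per scalar value and cannot take a Python list for a single parameter — can be demonstrated with the stdlib sqlite3 module. The table name mirrors the thread; the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testings (id INTEGER PRIMARY KEY, some_data TEXT)")
conn.executemany(
    "INSERT INTO testings (id, some_data) VALUES (?, ?)",
    [(i, "row%d" % i) for i in range(1, 6)],
)

ids = [2, 3, 4]
# One placeholder per scalar value, exactly as the reply describes;
# passing the list [2, 3, 4] to a single "?" would fail the same way
# the original error message shows.
placeholders = ", ".join("?" for _ in ids)
sql = "SELECT id FROM testings WHERE id IN (%s) ORDER BY id" % placeholders
result = [row[0] for row in conn.execute(sql, ids)]
print(result)  # → [2, 3, 4]
```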





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread JP

> like the example illustrates, there is no
> BEGIN being issued for every SELECT statement when using psycopg2 in
> non-autocommit mode, which applies to SQLAlchemy as well.   therefore
> there is no performance bottleneck; this is a django issue only.

I guess we're reading the example differently. I see this BEGIN:

LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
LOG:  statement: SELECT users.id, users.name FROM users

in the postgres log, but as far as I can tell sqlalchemy is not
sending it; it's being implicitly inserted by psycopg2 because the
isolation level of the connection is read-committed and it sees a
statement outside of a transaction. If the connection were in the
autocommit isolation level, doing the same thing in sqlalchemy would
not result in that BEGIN, only the SELECT would be issued.

> > I noticed a TODO about implementing true autocommit behavior, which is
> > what I need -- a way to tell the dbapi module *not* to start those
> > implicit transactions.
>
> it doesnt.  run the example - no implicit transaction beyond the
> initial connection (which is required, since it must implement
> rollback() and commit()).

Either I'm very confused or I'm having lots of trouble getting my
point across. In my experience, it is absolutely the case that the
dbapi drivers are starting implicit transactions when they aren't in
autocommit mode -- otherwise I don't see how any transactions would
work correctly, since do_begin is a no-op everywhere -- so if the
dbapi modules aren't inserting the BEGIN to start transactions, I
don't see where it could be coming from. What am I missing? If I'm
wrong about what's going on here, how does sqlalchemy start
transactions when it needs to, and where is that BEGIN before the
select in my example coming from?

I have an example that I think illustrates what I'm talking about
pretty well, so I'm going to file a ticket and attach the example and
the postgres query log I see when I run it. Hopefully that will clear
things up one way or another.

JP
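The driver behavior under debate here — an implicit BEGIN inserted by the DBAPI module when a statement runs outside a transaction in non-autocommit mode — can be observed in the stdlib sqlite3 module, which follows the same DBAPI convention. This is an analogy, not psycopg2 itself:

```python
import sqlite3

# Default (non-autocommit) mode: the driver opens a transaction
# implicitly before data-modifying statements.
implicit = sqlite3.connect(":memory:")
implicit.execute("CREATE TABLE t (x INTEGER)")
implicit.execute("INSERT INTO t VALUES (1)")
print(implicit.in_transaction)   # True: the driver issued BEGIN behind our back

# isolation_level=None puts the connection in true autocommit mode:
# no implicit BEGIN, every statement takes effect immediately.
autocommit = sqlite3.connect(":memory:", isolation_level=None)
autocommit.execute("CREATE TABLE t (x INTEGER)")
autocommit.execute("INSERT INTO t VALUES (1)")
print(autocommit.in_transaction)  # False
```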





[sqlalchemy] Re: Never-saved orphans are an error?

2007-02-12 Thread SteveTether



On Feb 10, 11:48 am, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> > del p.children[3]
>
> basically cascade has nothing to say about this operation when applied to
> transient instances.  the history tracking will not see the item as ever
> being part of the "children" collection since you added it then removed it
> without flushing.

Cascade *does* have something to say about it because the instance was
not transient. Like a dummy I used the "all" cascade option on the
relation, which caused the child to get attached to the session, i.e.,
go from transient to pending, as soon as it was added to the the
parent's list of children. The full error message, which I should have
included, was:

sqlalchemy.exceptions.FlushError: instance Child('Cheshire') is an
unsaved, pending instance and is an orphan (is not attached to any
parent 'Parent' instance via that classes' 'children' attribute)

I checked sess.new after the del and there it was, without my having
called sess.save() on it. In my application I was after fully explicit
control of sess.save() operations, applying them only to members of
the data structure and only (of course) if they were not already
attached to the session. I'm afraid my over-simplified example didn't
make my motives clear, sorry.

> also, the "delete" cascade operation does not remove the item from the
> session, it only marks the item to be deleted from the DB, which after a
> flush results in the item being removed from the session.  but this item
> cant be deleted since it was never persisted.  SA basically knows nothing
> about the instance's relationships to anything else and would rather not
> guess what you'd like to do with it.

The "delete" cascade does not apply since I never called
sess.delete(). It's "delete-orphan" that applies here and I can avoid
the problem with it by removing "all" from the cascade options. I'm
structuring my application so that the high-level logic layer does not
know how persistence is implemented, so most of the cascade options
should not be in effect. Everything that should be saved is part of a
graph having a well-defined traversal which the persistence layer can
use to make sure that all the nodes get saved.

By the way, the orphan objects that prompted the FlushError in my
application weren't even known to the high-level layer; they were
association objects created because I was managing a many-to-many
relation with AssociationProxy. The _AssociationList used with it
doesn't implement __delitem__ or slicing. It was only when I added
those for the sake of transparency that I ran into the flush error. I
want the high-level layer to be able to use the full set of list
operations even on lists managed by SA.





[sqlalchemy] Calculated columns

2007-02-12 Thread Mike Orr

I have a MySQL table with three date fields.  I need to use the
coalesced value (i.e., the first one that's not NULL)  in WHERE,
HAVING, ORDER BY, and I'd like it to also appear as an attribute in my
session-mapped class.  I can see how to do parts of this, but not how
to put it all together.

My current task is to get the ten most recent records and display
their title and date in a Pylons application.


class Incident(object):
    pass

assign_mapper(ctx, Incident, incident_table)

activity = sa.func.coalesce(
    Incident.c.last_entry_date,
    Incident.c.activity_date,
    Incident.c.create_date,
)

recent = sa.select(
    [Incident.c.id, Incident.c.title, activity.label("activity")],
    Incident.c.is_public,
    order_by=[sa.desc(activity)], limit=10).execute().fetchall()


This works but I'd rather get the result fields as attributes rather
than keys.  I could wrap the results in a custom class but I may as
well use the Incident class that's sitting there.


class Incident(object):
    @property
    def activity(self):
        return (self.last_entry_date or self.activity_date or
                self.create_date)


I haven't tried this yet but I'd be specifying the coalescence twice
in different ways, which is not ideal.  Is this the best way to go?
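One way to at least avoid writing the fallback chain twice on the Python side is to factor the "first non-None" logic into a single helper driven by one list of attribute names. This is a plain-Python sketch; the column names come from the post, while the helper and constructor are hypothetical:

```python
# Single source of truth for the fallback order.
ACTIVITY_COLUMNS = ("last_entry_date", "activity_date", "create_date")

def first_non_none(*values):
    """Python-side analogue of SQL COALESCE."""
    for value in values:
        if value is not None:
            return value
    return None

class Incident(object):
    def __init__(self, last_entry_date=None, activity_date=None,
                 create_date=None):
        self.last_entry_date = last_entry_date
        self.activity_date = activity_date
        self.create_date = create_date

    @property
    def activity(self):
        # Reads the attributes in the order declared once above.
        return first_non_none(
            *(getattr(self, name) for name in ACTIVITY_COLUMNS))

print(Incident(activity_date="2007-02-01",
               create_date="2007-01-01").activity)  # → 2007-02-01
```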

I could just use the property and do the sorting/eliminating in
Python, but then I'd be pulling in all these database records I'll
never select.

I guess what I'd really like is a calculated column in the table
object, which I could use in all the clauses and also propagate to
Incident.  But I'd have to append it to an autoloaded table object,
because my table has some thirty columns and is maintained by another
application, so I don't want to keep updating its schema as the
underlying table changes.

--Mike





[sqlalchemy] Re: schema comments

2007-02-12 Thread Michael Bayer

id accept decent patches.

On Feb 12, 5:17 pm, "Sean Davis" <[EMAIL PROTECTED]> wrote:
> I looked over the docs but didn't find an answer (so the answer is probably
> "no").  Does SA allow descriptions (database comments) of tables and/or
> columns?
>
> Thanks,
> Sean





[sqlalchemy] Re: Function Indexes

2007-02-12 Thread Michael Bayer

by "function index", we're talking expressions stuck into Index() ?
shouldnt be too hard to implement, if you submit a patch that does it
similarly to how column level defaults are done.  kind of a 5-liner id
think.

On Feb 12, 5:24 pm, "Troy" <[EMAIL PROTECTED]> wrote:
> Is there a time frame, or an intention, for supporting function
> indexes?  I would like to help out if there is no one on this.  I've
> spent a few hours poring over the sqlalchemy code and have a few
> ideas and questions if anyone is interested.
>
> Thanks,
>
> Troy





[sqlalchemy] Function Indexes

2007-02-12 Thread Troy

Is there a time frame, or an intention, for supporting function
indexes?  I would like to help out if there is no one on this.  I've
spent a few hours poring over the sqlalchemy code and have a few
ideas and questions if anyone is interested.

Thanks,

Troy





[sqlalchemy] schema comments

2007-02-12 Thread Sean Davis
I looked over the docs but didn't find an answer (so the answer is probably
"no").  Does SA allow descriptions (database comments) of tables and/or
columns?

Thanks,
Sean




[sqlalchemy] Re: cross DB development, columns lower/upper case letters

2007-02-12 Thread Michael Bayer


On Feb 12, 3:45 pm, "vkuznet" <[EMAIL PROTECTED]> wrote:
> No, here is real problem:
>
> schema had the following:
>
> create table Foo (id int);
>
> in MySQL it creates table 'Foo' and column 'id'.
> in ORACLE it creates table 'FOO' and column 'ID'
>
> That creates a problem in sqlalchemy when I access columns. I can
> make an alias for table names, e.g. tf, but then for MySQL I end up
> using tf.c.id and for ORACLE I need tf.c.ID, and I cannot use
> tf.c.ID in MySQL (or tf.c.id in ORACLE) since such a column doesn't
> exist in the sqlalchemy table object.

look at the echoed SQL.  is SQLAlchemy putting quotes around the
identifier names ?  if not, then you are accessing in a case-
insensitive fashion - the id/ID case doesnt matter.  put the
"case_sensitive=False" flag on your MetaData and see if that works.
id also advise not using any MixedCase identifier names (which Foo
is).
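The case_sensitive=False behavior suggested above can be imitated with a small mapping that normalizes its keys, which illustrates why tf.c.id and tf.c.ID can resolve to the same column when the database treats unquoted identifiers case-insensitively. The class and values here are hypothetical, for illustration only:

```python
class CaseInsensitiveColumns(dict):
    """Dict that lowercases keys on the way in and out."""

    def __setitem__(self, key, value):
        dict.__setitem__(self, key.lower(), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, key.lower())

cols = CaseInsensitiveColumns()
cols["ID"] = "integer column"   # as Oracle would report the name
print(cols["id"])               # → integer column
print(cols["ID"])               # → integer column
```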





[sqlalchemy] Re: How to determine if an instance has been populated?

2007-02-12 Thread Matt Culbreth

That got it, thanks.

On Feb 12, 3:57 pm, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> check for an _instance_key attribute.
>
> On Feb 12, 1:52 pm, "Matt Culbreth" <[EMAIL PROTECTED]> wrote:
>
> > Hello Friends,
>
> > I'm working with the latest version of SQLAlchemy now and I have a
> > question: how do I determine if a particular mapped object instance
> > has been populated by the database?





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread Michael Bayer

not if you understood my previous reply, no.

On Feb 12, 3:55 pm, "percious" <[EMAIL PROTECTED]> wrote:
> On Feb 12, 3:49 pm, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
>
>
>
> > drop_all() doesnt remove Table instances from the metadata.  the Table
> > object is a python object, it only represents your real database
> > table.  you may well want to call create_all() again using that same
> > Table.
>
> > On Feb 12, 3:20 pm, "percious" <[EMAIL PROTECTED]> wrote:
>
> > > See test case:
>
> > > from turbogears import config
> > > from turbogears.database import metadata
> > > from turbogears import database
>
> > > from sqlalchemy import Table, Column, Integer, Unicode
> > > import sqlalchemy.orm
>
> > > config.update({"sqlalchemy.dburi":"sqlite:///:memory:"})
> > > database.bind_meta_data()
>
> > > Table('t_table', metadata,
> > >   Column('id', Integer, primary_key=True),
> > >   Column('data', Unicode(255)),
> > >   ).create()
>
> > > Table('t_table_history', metadata,
> > >   Column('id', Integer, primary_key=True),
> > >   Column('data', Unicode(255)),
> > >   ).create()
>
> > > assert  metadata.tables.keys() == ['t_table', 't_table_history']
>
> > > metadata.drop_all(tables=['t_table_history',])
>
> > > #fails
> > > assert  metadata.tables.keys() == ['t_table']
>
> Would it make sense to add the following code at line 905 in your
> schema.py?
> if tables is None:
>     self.tables.clear()
> else:
>     for table in tables:
>         if type(table) is str:
>             del self.tables[table]
>         else:
>             for k, t in self.tables.iteritems():
>                 if t is table:
>                     del self.tables[k]
>                     break





[sqlalchemy] Re: How to determine if an instance has been populated?

2007-02-12 Thread Michael Bayer

check for an _instance_key attribute.

On Feb 12, 1:52 pm, "Matt Culbreth" <[EMAIL PROTECTED]> wrote:
> Hello Friends,
>
> I'm working with the latest version of SQLAlchemy now and I have a
> question: how do I determine if a particular mapped object instance
> has been populated by the database?
>
> The question originates because I have defined a __repr__() method on
> one of my mapped objects.  It works fine if the object has been
> loaded, but throws a "TypeError" exception until that time because one
> of the statements in the __repr__() method is using an 'int' type.
>
> I can easily handle this by checking for None of course, but is there
> a more standard way people use the tool?
>
> Thanks,
>
> Matt





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread percious



On Feb 12, 3:49 pm, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> drop_all() doesnt remove Table instances from the metadata.  the Table
> object is a python object, it only represents your real database
> table.  you may well want to call create_all() again using that same
> Table.
>
> On Feb 12, 3:20 pm, "percious" <[EMAIL PROTECTED]> wrote:
>
> > See test case:
>
> > from turbogears import config
> > from turbogears.database import metadata
> > from turbogears import database
>
> > from sqlalchemy import Table, Column, Integer, Unicode
> > import sqlalchemy.orm
>
> > config.update({"sqlalchemy.dburi":"sqlite:///:memory:"})
> > database.bind_meta_data()
>
> > Table('t_table', metadata,
> >   Column('id', Integer, primary_key=True),
> >   Column('data', Unicode(255)),
> >   ).create()
>
> > Table('t_table_history', metadata,
> >   Column('id', Integer, primary_key=True),
> >   Column('data', Unicode(255)),
> >   ).create()
>
> > assert  metadata.tables.keys() == ['t_table', 't_table_history']
>
> > metadata.drop_all(tables=['t_table_history',])
>
> > #fails
> > assert  metadata.tables.keys() == ['t_table']

Would it make sense to add the following code at line 905 in your
schema.py?
if tables is None:
    self.tables.clear()
else:
    for table in tables:
        if type(table) is str:
            del self.tables[table]
        else:
            for k, t in self.tables.iteritems():
                if t is table:
                    del self.tables[k]
                    break
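The patch proposed above can be sketched and exercised on a plain dict standing in for metadata.tables: accept either table names or table objects and remove the matching entries. The function name and data here are invented for illustration:

```python
def remove_tables(registry, tables=None):
    """Drop entries from a name->table dict, by name or by identity."""
    if tables is None:
        registry.clear()
        return
    for table in tables:
        if isinstance(table, str):
            del registry[table]
        else:
            # Look the object up by identity, as the patch does.
            for key, value in list(registry.items()):
                if value is table:
                    del registry[key]
                    break

tables = {"t_table": object(), "t_table_history": object()}
remove_tables(tables, ["t_table_history"])
print(sorted(tables))  # → ['t_table']
```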





[sqlalchemy] Re: deletion behavior question

2007-02-12 Thread Michael Bayer


On Feb 12, 3:00 pm, iain duncan <[EMAIL PROTECTED]> wrote:
> I would like some objects that are related through many to many tables
> to delete the many to many entry on deletion, but NOT the endpoint. It
> seems that cascade="all" deletes both, and no arg to cascade leaves left
> over invalid entries in the manytomany table. Is there a suggested way
> to deal with this?

if the relationships between two objects is being broken, then the
corresponding rows in the many-to-many table should be removed as
well, provided SA is familiar with the relationship.  no cascade rules
should be needed. feel free to post an example thats not working.





[sqlalchemy] Re: drop_all not working for me

2007-02-12 Thread Michael Bayer

drop_all() doesnt remove Table instances from the metadata.  the Table
object is a python object, it only represents your real database
table.  you may well want to call create_all() again using that same
Table.

On Feb 12, 3:20 pm, "percious" <[EMAIL PROTECTED]> wrote:
> See test case:
>
> from turbogears import config
> from turbogears.database import metadata
> from turbogears import database
>
> from sqlalchemy import Table, Column, Integer, Unicode
> import sqlalchemy.orm
>
> config.update({"sqlalchemy.dburi":"sqlite:///:memory:"})
> database.bind_meta_data()
>
> Table('t_table', metadata,
>   Column('id', Integer, primary_key=True),
>   Column('data', Unicode(255)),
>   ).create()
>
> Table('t_table_history', metadata,
>   Column('id', Integer, primary_key=True),
>   Column('data', Unicode(255)),
>   ).create()
>
> assert  metadata.tables.keys() == ['t_table', 't_table_history']
>
> metadata.drop_all(tables=['t_table_history',])
>
> #fails
> assert  metadata.tables.keys() == ['t_table']





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread Michael Bayer


On Feb 12, 10:30 am, "JP" <[EMAIL PROTECTED]> wrote:
>
> But It would be nice to be able to have total control over the
> transactional state of the connection, so that when I know that I'm
> just doing a select or two I don't have to have the overhead of a
> BEGIN that I know is useless, but as things stand I can't do that,
> because do_begin isn't implemented for any of the dialects. I guess
> that's because sqlalchemy is depending on the underlying DBAPI
> implementation to handle sending BEGIN statements before the first
> query that is outside of a transaction?

DBAPI, including psycopg2, has no begin() method; there is only
rollback() and commit().   like the example illustrates, there is no
BEGIN being issued for every SELECT statement when using psycopg2 in
non-autocommit mode, which applies to SQLAlchemy as well.   therefore
there is no performance bottleneck; this is a django issue only.

>
> I noticed a TODO about implementing true autocommit behavior, which is
> what I need -- a way to tell the dbapi module *not* to start those
> implicit transactions.

it doesnt.  run the example - no implicit transaction beyond the
initial connection (which is required, since it must implement
rollback() and commit()).  SA has been doing its own autocommit on top
of non-autocommitting connections since the 0.2 series at least.

running in "non-auto-commit" mode is standard practice for databases
like oracle.  theres always a transaction in place.  but its not
started for every select statement.  only at connection time and after
each rollback or commit.





[sqlalchemy] Re: cross DB development, columns lower/upper case letters

2007-02-12 Thread vkuznet

No, here is real problem:

schema had the following:

create table Foo (id int);

in MySQL it creates table 'Foo' and column 'id'.
in ORACLE it creates table 'FOO' and column 'ID'

That creates a problem in sqlalchemy when I access columns. I can
make an alias for table names, e.g. tf, but then for MySQL I end up
using tf.c.id and for ORACLE I need tf.c.ID, and I cannot use
tf.c.ID in MySQL (or tf.c.id in ORACLE) since such a column doesn't
exist in the sqlalchemy table object.

Valentin.


On Feb 12, 3:36 pm, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> are these column names using MixedCase ?  otherwise you can probably
> access them in a case-insensitive fashion (oracle col names are
> usually case-insensitive)
>
> On Feb 12, 11:35 am, "vkuznet" <[EMAIL PROTECTED]> wrote:
>
> > Hi,
> > I'm trying to develop a cross-DB application which works with ORACLE
> > and MySQL back-ends.
> > Both DBs have the same schema, but of course there is a caveat. ORACLE
> > has Tables and Columns in upper case and MySQL does not. That leads to
> > the following problem. When I construct
> > select(table.c.column)
> > I run into the problem that I need to specify lower case for columns
> > in MySQL and upper case for ORACLE. With table names it's easy:
> > I can use table aliases. But how do I avoid the problem with column names?
>
> > Thanks,
> > Valentin.





[sqlalchemy] Re: cross DB development, columns lower/upper case letters

2007-02-12 Thread Michael Bayer

are these column names using MixedCase ?  otherwise you can probably
access them in a case-insensitive fashion (oracle col names are
usually case-insensitive)

On Feb 12, 11:35 am, "vkuznet" <[EMAIL PROTECTED]> wrote:
> Hi,
> I'm trying to develop a cross-DB application which works with ORACLE
> and MySQL back-ends.
> Both DBs have the same schema, but of course there is a caveat. ORACLE
> has Tables and Columns in upper case and MySQL does not. That leads to
> the following problem. When I construct
> select(table.c.column)
> I run into the problem that I need to specify lower case for columns
> in MySQL and upper case for ORACLE. With table names it's easy:
> I can use table aliases. But how do I avoid the problem with column names?
>
> Thanks,
> Valentin.





[sqlalchemy] drop_all not working for me

2007-02-12 Thread percious

See test case:

from turbogears import config
from turbogears.database import metadata
from turbogears import database

from sqlalchemy import Table, Column, Integer, Unicode
import sqlalchemy.orm

config.update({"sqlalchemy.dburi":"sqlite:///:memory:"})
database.bind_meta_data()

Table('t_table', metadata,
  Column('id', Integer, primary_key=True),
  Column('data', Unicode(255)),
  ).create()

Table('t_table_history', metadata,
  Column('id', Integer, primary_key=True),
  Column('data', Unicode(255)),
  ).create()

assert  metadata.tables.keys() == ['t_table', 't_table_history']

metadata.drop_all(tables=['t_table_history',])

#fails
assert  metadata.tables.keys() == ['t_table']





[sqlalchemy] deletion behavior question

2007-02-12 Thread iain duncan

When objects are related through a many-to-many table, I would like
deleting one of them to delete the rows in the many-to-many table, but
NOT the object at the other endpoint. It seems that cascade="all"
deletes both, while passing no cascade argument leaves invalid
leftover rows in the many-to-many table. Is there a suggested way to
deal with this?

Should I add a destructor to the classes that wipes out the
many-to-many rows with a SQL DELETE statement?
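
At the SQL level, what is wanted is a DELETE against the association table only. A stdlib-sqlite3 sketch of that cleanup (table and column names are made up for illustration; for what it's worth, a relation mapped with `secondary=` may already handle the association rows itself, so this just shows the intent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents (id INTEGER PRIMARY KEY);
    CREATE TABLE children (id INTEGER PRIMARY KEY);
    CREATE TABLE parent_child (
        parent_id INTEGER REFERENCES parents(id),
        child_id INTEGER REFERENCES children(id)
    );
    INSERT INTO parents VALUES (1);
    INSERT INTO children VALUES (10), (11);
    INSERT INTO parent_child VALUES (1, 10), (1, 11);
""")

def delete_parent(conn, parent_id):
    # Remove the association rows first, then the parent itself.
    # The children (the endpoints) are deliberately left alone.
    conn.execute("DELETE FROM parent_child WHERE parent_id = ?", (parent_id,))
    conn.execute("DELETE FROM parents WHERE id = ?", (parent_id,))

delete_parent(conn, 1)
assert conn.execute("SELECT COUNT(*) FROM parent_child").fetchone()[0] == 0
assert conn.execute("SELECT COUNT(*) FROM children").fetchone()[0] == 2
```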

Thanks
Iain





[sqlalchemy] Announcing Elixir!

2007-02-12 Thread Jonathan LaCour

Today, we are pleased to announce the release of Elixir
(http://elixir.ematia.de), a declarative mapper for SQLAlchemy.  Elixir
is the successor to ActiveMapper and TurboEntity, and is a collaboration
between Daniel Haus, Jonathan LaCour and Gaëtan de Menten.  Elixir's
website provides installation instructions, a tutorial, extensive
documentation, and more.

The eventual goal of Elixir is to become an official SQLAlchemy
extension after some time soliciting feedback, bug reports, and testing
from users.

Daniel Haus
 http://www.danielhaus.de

Gaëtan de Menten
 http://openhex.com

Jonathan LaCour
 http://cleverdevil.org






[sqlalchemy] How to determine if an instance has been populated?

2007-02-12 Thread Matt Culbreth

Hello Friends,

I'm working with the latest version of SQLAlchemy now and I have a
question: how do I determine if a particular mapped object instance
has been populated by the database?

The question arises because I have defined a __repr__() method on
one of my mapped objects.  It works fine once the object has been
loaded, but throws a "TypeError" until then, because one of the
statements in the __repr__() method formats an attribute as an 'int'
while it is still None.

I can easily handle this by checking for None of course, but is there
a more standard way people use the tool?
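
The None-check can live inside __repr__ itself, so the method works whether or not the instance has been populated. A sketch with a hypothetical mapped class (the attribute names are made up):

```python
class Incident(object):
    id = None       # populated by the ORM after a load
    title = None

    def __repr__(self):
        # Format defensively: attributes may still be None before
        # the instance has been loaded from the database.
        if self.id is None:
            return "<Incident (unloaded)>"
        return "<Incident %d: %s>" % (self.id, self.title)

unloaded = Incident()
assert repr(unloaded) == "<Incident (unloaded)>"

loaded = Incident()
loaded.id, loaded.title = 7, "server down"
assert repr(loaded) == "<Incident 7: server down>"
```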

Thanks,

Matt





[sqlalchemy] Re: Adding "with (nolock)" to all queries (mssql)

2007-02-12 Thread Arnar Birgisson

Hi

On 2/12/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
> On Feb 12, 4:25 am, "Arnar Birgisson" <[EMAIL PROTECTED]> wrote:
> > Hi again,
> >
> > Yes, we thought that might be the case. We tried changing the
> > isolation level on this connection to "snapshot". Now I'm getting
> > different errors, and more frequent.
> >
> > One error I get is this:
> >
> > AttributeError: 'ColumnProperty' object has no attribute 'strategy'
>
> That should have nothing to do with transaction isolation.  That's some
> mixup of column/relation-based properties on a class (quite a strange
> one).

I'm not using any orm here, just building queries... is this a problem
on my end or in SA?

> >
> > and another error is "object not subscriptable" when doing something
> > like "eststatus = result.fetchone()[0]"
>
> how do you know fetchone() is returning a row ?

Uhm.. maybe it's not. But if I hit refresh (this is a webapp) I don't
get the error. The data is not changing with respect to this, so I
should be seeing a consistent result.

However, this might be a symptom of the same problem I was having
before - i.e. no rows being returned where there should be some.

Arnar




[sqlalchemy] Re: Tracking changes on mapped entities

2007-02-12 Thread King Simon-NFHD78

Allen Bierbaum wrote:
> 
> Is it possible to examine the session and get a list of all 
> mapped instances that have been changed?
> 

This information is available on the 'new', 'dirty' and 'deleted'
properties of the session:

http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_changed

Hope that helps,

Simon




[sqlalchemy] Tracking changes on mapped entities

2007-02-12 Thread Allen Bierbaum

Is it possible to examine the session and get a list of all mapped
instances that have been changed?

More details:

I would like to implement an observer pattern in my application.  I
would like to let the code make changes to mapped objects as normal,
but immediately before (or after) a session flush() I need to notify
any interested observers that the state in the mapped objects has
changed.  I know I could do this by adding my own attribute state
tracking to the mapped classes, but I was hoping that since SA is
already doing this there may be a way to just make use of SA's change
tracking.
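
The shape of the idea, with a stub in place of a real Session (the attribute names new/dirty/deleted match SA's unit-of-work docs; everything else here is hypothetical):

```python
class StubSession(object):
    """Stand-in for a Session exposing SA's change-tracking lists."""
    def __init__(self, new=(), dirty=(), deleted=()):
        self.new = list(new)
        self.dirty = list(dirty)
        self.deleted = list(deleted)

def notify_observers(session, observers):
    # Called immediately before session.flush(): hand each observer
    # the pending changes SA is already tracking, instead of
    # duplicating attribute-level bookkeeping in the mapped classes.
    for observer in observers:
        observer(session.new, session.dirty, session.deleted)

observed = []
session = StubSession(new=["a"], dirty=["b"], deleted=[])
notify_observers(session, [lambda n, d, x: observed.append((n, d, x))])

assert observed == [(["a"], ["b"], [])]
```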

Has anyone done anything like this?

Thanks,
Allen




[sqlalchemy] Re: Cascade-Delete causes AssertionError (Tries to blank-out primary key ...)

2007-02-12 Thread Luke Stebbing

Right, "delete-orphan" is what adds the lifecycle relationship between
parent and child. It means that the child can't exist without a
parent. That lets SA know that it should eliminate the child rather
than trying to null out the relationship.

You probably want "all" so that all actions performed on the parent
will propagate to the child, which will remove the need to save
children directly. See here also:

http://www.sqlalchemy.org/docs/unitofwork.myt#unitofwork_cascade


Cheers,

Luke

On Feb 11, 1:49 pm, "Nebur" <[EMAIL PROTECTED]> wrote:
> > The minimum correction of my above code seems to be 2 lines:
>
> > 1.The cascade rule changed from "delete" into "all,delete-orphan"
>
> No, it turns out there is an even smaller change:
> changing the cascade rule from "delete" into "delete, delete-orphan"
> will work, too.
> The delete-orphan makes the difference.
>  Ruben





[sqlalchemy] cross DB development, columns lower/upper case letters

2007-02-12 Thread vkuznet

Hi,
I'm trying to develop a cross-DB application which works with ORACLE
and MySQL back-ends.
Both DBs have the same schema, but of course there is a caveat: ORACLE
has tables and columns in upper case and MySQL does not. That leads to
the following problem. When I construct
select(table.c.column)
I run into the problem that I need to specify lower-case column names
for MySQL and upper-case ones for ORACLE. With table names it's easy:
I can use table aliases. But how can I avoid the problem with column names?

Thanks,
Valentin.





[sqlalchemy] Re: Adding "with (nolock)" to all queries (mssql)

2007-02-12 Thread Michael Bayer



On Feb 12, 4:25 am, "Arnar Birgisson" <[EMAIL PROTECTED]> wrote:
> Hi again,
>
> Yes, we thought that might be the case. We tried changing the
> isolation level on this connection to "snapshot". Now I'm getting
> different errors, and more frequent.
>
> One error I get is this:
>
> AttributeError: 'ColumnProperty' object has no attribute 'strategy'

That should have nothing to do with transaction isolation.  That's some
mixup of column/relation-based properties on a class (quite a strange
one).

>
> and another error is "object not subscriptable" when doing something
> like "eststatus = result.fetchone()[0]"

how do you know fetchone() is returning a row ?
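
The point being that fetchone() returns None when no row matched, and None[0] raises exactly a "not subscriptable" TypeError. A stdlib-sqlite3 illustration of the guard (table and variable names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, status TEXT)")

# No row with id = 1 exists, so fetchone() gives None, not a row:
row = conn.execute("SELECT status FROM jobs WHERE id = 1").fetchone()
assert row is None

# Guard before subscripting instead of assuming a row came back:
eststatus = row[0] if row is not None else None
assert eststatus is None
```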





[sqlalchemy] Re: psycopg2 default transaction inefficiency

2007-02-12 Thread JP

Yeah, I think I explained badly.

What I was trying to show with the sqlalchemy vs. postgres query logs
is the extra BEGIN that psycopg2 sends before the SELECT that
sqlalchemy executes. The BEGIN is added implicitly by psycopg2 because
it's in its default transaction isolation state of read committed --
which I didn't know was the DBAPI default, so strike what I said
about changing any default behavior.

But it would be nice to have total control over the transactional
state of the connection, so that when I know I'm just doing a select
or two I don't have to pay the overhead of a BEGIN that I know is
useless. As things stand I can't do that, because do_begin isn't
implemented for any of the dialects. I guess that's because sqlalchemy
depends on the underlying DBAPI implementation to send BEGIN before
the first query that is outside of a transaction?

I noticed a TODO about implementing true autocommit behavior, which is
what I need -- a way to tell the dbapi module *not* to start those
implicit transactions. If the engine/connection could be put into true
autocommit mode, and issue connection.begin() from do_begin when in
that mode, then I could do everything I want to do and I don't think
anything would break, since the orm calls do_begin before each flush.
I know how to change the isolation level in the sqlite and postgres
dialects -- would you be interested in a patch or work on a branch
where I added an isolation_level property to the engine, implemented
it for those two, and for those two dialects had do_begin issue a
connection.begin() when the isolation_level was set to autocommit? I
could probably tackle mysql as well, but I have no access to mssql or
oracle, so I can't do those.
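
For the sqlite side at least, the knob already exists at the DBAPI level: the stdlib sqlite3 module stops issuing the implicit BEGIN when the connection's isolation_level is None. A sketch (this is plain sqlite3 behavior, not SA's engine API; in_transaction is a sqlite3 connection attribute):

```python
import sqlite3

# Default mode: sqlite3 opens a transaction implicitly around DML.
managed = sqlite3.connect(":memory:")
managed.execute("CREATE TABLE t (x INTEGER)")
managed.execute("INSERT INTO t VALUES (1)")
assert managed.in_transaction          # an implicit BEGIN was issued

# Autocommit mode: no implicit BEGIN, each statement commits itself.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")
assert not auto.in_transaction
```

An engine-level isolation_level property would essentially be choosing between these two connection modes per dialect.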

Hopefully I'm making some more sense this time...

JP


On Feb 12, 12:42 am, "Michael Bayer" <[EMAIL PROTECTED]> wrote:
> I'm confused.  The django thread seems to be saying that it is setting
> psycopg2 *into* autocommit mode, which causes the problem; the default
> setting of autocommit for DBAPI (and maintained in SA) is False.  When
> I wrote frameworks years ago I always thought "autocommit" mode was
> the way to go, but I have since learned that issues like this arise, so SA
> *never* uses autocommit mode on the connection; instead it implements
> its own "autocommitting" behavior in a manner similar to Hibernate -
> it looks at the statement being executed, determines if it's a CRUD/
> CREATE/DROP expression, and then explicitly calls COMMIT if no SA
> transaction is already in progress.
>
> The log you have above doesn't prove that anything unusual is going on,
> since you are illustrating a transactional operation, then a SELECT,
> then another transactional operation, then another SELECT. SA does an
> explicit COMMIT for the CREATE statements since they are required to
> complete the table creation operation.
>
> This test confirms that psycopg2 defaults to "autocommit" as False and
> doesn't do anything with transaction modes after the connection is
> opened:
>
> import psycopg2 as psycopg
>
> conn = psycopg.connect(user='scott', password='tiger',
> host='127.0.0.1', database='test')
>
> for x in range(0, 5):
>     curs = conn.cursor()
>     curs.execute("SELECT 1")
>     curs.close()
>
> log output:
>
> LOG:  statement: SET DATESTYLE TO 'ISO'
> LOG:  statement: SHOW client_encoding
> LOG:  statement: SHOW default_transaction_isolation
> LOG:  statement: BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED
> LOG:  statement: SELECT 1
> LOG:  statement: SELECT 1
> LOG:  statement: SELECT 1
> LOG:  statement: SELECT 1
> LOG:  statement: SELECT 1
> LOG:  statement: ABORT
>
> one connection, five new cursors, only one set of "setup"
> corresponding to the connection.
>
> On Feb 9, 3:52 pm, "JP" <[EMAIL PROTECTED]> wrote:
>
> > I noticed this thread on django-devs:
>
> >http://groups.google.com/group/django-developers/browse_frm/thread/52...
>
> > Which notes that psycopg2 by default starts transactions behind the
> > scenes, even for select statements. If you happen to be running a web
> > app where each hit starts a new cursor, and you only do a few selects,
> > this adds a *lot* of overhead relative to autocommit.
>
> > I wondered if sqlalchemy suffered from the same problem with psycopg2,
> > and it does. Take a look at what sqlalchemy thinks it was doing for a
> > short example, vs what the postgres query log contains.
>
> > The code:
>
> > >>> import sqlalchemy as sa
> > >>> meta = sa.DynamicMetaData()
> > >>> users = sa.Table('users', meta, sa.Column('id', sa.Integer, 
> > >>> primary_key=True, autoincrement=True), sa.Column('name', sa.String(50)))
> > >>> meta.connect('postgres://', debug=True)
> > >>> meta.create_all()
> > >>> users.select().execute()
>
> > sqlalchemy query log:
> > select relname from pg_class where lower(relname) = %(name)s
> > CREATE TABLE users (
> > id SERIAL NOT NULL,
> > name VARCHAR(50),
> > PRIMARY KEY

[sqlalchemy] Re: Adding "with (nolock)" to all queries (mssql)

2007-02-12 Thread Arnar Birgisson

Hi again,

Yes, we thought that might be the case. We tried changing the
isolation level on this connection to "snapshot". Now I'm getting
different errors, and more frequent.

One error I get is this:

Mod_python error: "PythonHandler jobtracking.api.controller"
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line
299, in HandlerDispatch
result = object(req)
  File 
"/usr/lib/python2.4/site-packages/jobtracking-1.0_r20-py2.4.egg/jobtracking/api/controller.py"
, line 30, in handler
visit = s.query(VisitIdentity).get_by(visit_key=visit_key.value)
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 95, in get_by
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 255,
in select_whereclause
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/query.py", line 458,
in compile
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/interfaces.py", line
75, in setup
  File "build/bdist.linux-i686/egg/sqlalchemy/orm/interfaces.py", line
64, in _get_context_strategy
AttributeError: 'ColumnProperty' object has no attribute 'strategy'

and another error is "object not subscriptable" when doing something
like "eststatus = result.fetchone()[0]":

TypeError: unsubscriptable object

Any ideas?

Arnar

On 2/9/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
> before I read the next 10 replies in this thread, this seems like an issue
> of too high of a transaction isolation level going on (something that
> plagued my SQL server apps when i was writing them, though that was years
> ago).  cant the connection itself have a more lenient isolation level set
> ?
>
>
> Arnar Birgisson wrote:
> >
> > Hi all,
> >
> > I'm using SA to build queries (no orm) and handle connection pooling
> > on a linux machine, connecting to mssql via freetds.
> >
> > This was working very well with SQL Server 2000, but a couple of weeks
> > ago we upgraded to 2005. Since then the SA app has been having some
> > problems. I've checked that I'm using the latest (0.64) version of
> > freetds - which was released to provide support for sql server 2005.
> >
> > Now, I won't go into the specific problems (mainly b/c I'm not sure
> > what's going on) but my dba suggested that I add "with (nolock)" after
> > each table alias in every query (I'm only doing selects). The problem
> > is that the queries are built and executed with SA. Can I plug into
> > some hooks or something to do this?
> >
> > Arnar
