[sqlalchemy] Re: all

2012-07-11 Thread Gunnlaugur Briem
Hi burgiduroy,

On Wednesday, 11 July 2012 15:24:59 UTC, burgiduroy wrote:

 Are there any performance difference between the two?

 for row in query_object.all():
     do_something()

 AND

 for row in query_object:
     do_something()


The first fetches and processes all rows in one go before executing the 
loop body. The second streams them in.

How much that really matters depends on the DBAPI (it may fetch all rows in 
one go anyway), and maybe on how heavy the object mapping is ... and on 
whether you end up exiting your loop early (and so avoid processing the 
rest unnecessarily).
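A toy sketch of that last point (plain Python generators, not SQLAlchemy; 
fetch_log just counts simulated row fetches):

```python
# Eager fetch pays for every row even when the loop exits early;
# a streaming iterator stops fetching at the break.
def rows(n, fetch_log):
    for i in range(n):
        fetch_log.append(i)  # stands in for one DBAPI row fetch
        yield i

eager_log = []
for row in list(rows(100, eager_log)):  # like query_object.all()
    if row == 3:
        break
# all 100 rows were "fetched" before the loop body ever ran

stream_log = []
for row in rows(100, stream_log):       # like iterating query_object
    if row == 3:
        break
# only rows 0..3 were "fetched"
```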

Regards,

Gulli

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/rtdpsUfOa1QJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Transplanting _order_by_clause when wrapping into subquery

2012-05-31 Thread Gunnlaugur Briem
Hi,

When wrapping a query with an enclosing query to add columns computed from 
the original query's columns, I'm blatantly doing this:

order_clause_list = ClauseList(*fact_query._order_by_clause)
fact_query._order_by_clause = ClauseList()
subq = fact_query.alias('forperc')
return select(
    [subq.c[k] for k in subq.c.keys()] + [
        <extra columns based on subq's columns>
        for dim in percentage_dims
    ]
).order_by(*order_clause_list)

since I want the original query's ordering, and subquery ordering is not 
guaranteed to be maintained.

This works (in 0.7.5). But it messes with internals (_order_by_clause). So 
is there a more proper way to do this (for general queries), against the 
public sql.expression API?
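For what it's worth, the reason the ORDER BY has to be hoisted to the 
outermost query can be shown in plain SQL; a small sqlite3 (stdlib) 
illustration with a made-up table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fact (a INTEGER)")
con.executemany("INSERT INTO fact VALUES (?)", [(3,), (1,), (2,)])

# SQL only guarantees ordering applied on the outermost query; an
# ORDER BY buried inside the wrapped subquery may or may not survive,
# which is why it gets transplanted to the enclosing SELECT.
rows = con.execute(
    "SELECT a FROM (SELECT a FROM fact) forperc ORDER BY a"
).fetchall()
# → [(1,), (2,), (3,)]
```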

Regards,

Gulli




[sqlalchemy] Re: Understanding sqlalchemy memory usage and releasing unused memory

2012-02-07 Thread Gunnlaugur Briem
Hi Manav,

the final question did touch SQLAlchemy: the one about expiring and 
expunging.

1. "expiring an object creates a weak reference" - this is inaccurate. All 
object references held by the session (except for objects whose addition, 
deletion or attribute change is not yet flushed) are weak references in the 
first place, 
see http://docs.sqlalchemy.org/en/latest/orm/session.html#session-attributes. 
Expiring an object just marks its attributes as out of date; it does not 
change the session's reference to the object itself.

2. "Does expunging an object do the same?" - expunging an object means the 
session no longer holds a reference (weak or otherwise) to that object. But 
the reference was weak in the first place (unless the object was in new, 
dirty or deleted), so expunging neither helps nor hurts in getting the 
object collected.
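A quick stdlib illustration of why weak references make expunging irrelevant 
for garbage collection (toy class, no SQLAlchemy):

```python
import gc
import weakref

class Entity:
    pass

obj = Entity()
ref = weakref.ref(obj)   # like the session's default identity-map reference
assert ref() is obj      # the weak reference resolves while obj is alive

del obj                  # drop the only strong reference
gc.collect()
# nothing (session-like or otherwise) kept the object alive
assert ref() is None
```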

Your other questions do not involve SQLAlchemy, and that's why nobody here 
is answering them. You might find some other group where questions about 
python basics and process memory management are in scope ... but the One 
True Way to learn these things is to grit your teeth and google and read. 
That may be more work than you were hoping, but such is life. :) Also, this 
talk is 
good: 
http://blip.tv/pycon-us-videos-2009-2010-2011/pycon-2011-dude-where-s-my-ram-a-deep-dive-into-how-python-uses-memory-4896725

- Gulli




[sqlalchemy] Re: Building hierarchy tree in reverse

2011-08-01 Thread Gunnlaugur Briem
You could look for recursive CTE (Common Table Expressions), if your 
database engine supports such queries. See e.g. 
http://www.postgresql.org/docs/8.4/static/queries-with.html for PostgreSQL. 
That allows arbitrary-depth queries, as opposed to join chains that have to 
assume a fixed depth. You could probably apply two recursive queries, one 
downward and one upward from the given node, to avoid querying the whole 
tree.

SQLAlchemy has no support for CTEs directly, though of course you can 
construct the query manually and execute and fetch results through 
SQLAlchemy. You *can* get some support for recursive queries 
under SQLAlchemy in https://github.com/marplatense/sqla_hierarchy/ but be 
warned, that project is ... youthful :)
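A sketch of the raw-SQL route (sqlite3 from the stdlib, which also 
understands WITH RECURSIVE in modern versions; the node table and its 
columns are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER);
    INSERT INTO node VALUES (1, NULL), (2, 1), (3, 2), (4, 1);
""")
# upward walk from node 3 to the root; a second CTE joining the other
# way (n.parent_id = up.id) would walk downward instead
ancestors = con.execute("""
    WITH RECURSIVE up(id, parent_id) AS (
        SELECT id, parent_id FROM node WHERE id = 3
        UNION ALL
        SELECT n.id, n.parent_id
        FROM node n JOIN up ON n.id = up.parent_id
    )
    SELECT id FROM up ORDER BY id
""").fetchall()
# → [(1,), (2,), (3,)]
```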

Regards,

- Gulli




[sqlalchemy] Re: Unable to update Postgres because of natural primary key

2011-07-27 Thread Gunnlaugur Briem
Hi,

instead of db.session.add, what you want is:

imp = db.session.merge(imp)  # renamed here: 'import' is a reserved word in Python

See http://www.sqlalchemy.org/docs/orm/session.html#merging : "It examines 
the primary key of the instance. If it's present, it attempts to load an 
instance with that primary key (or pulls from the local identity map)."

Maybe you were confused by the heading "Adding New *or Existing* Items" 
in http://www.sqlalchemy.org/docs/orm/session.html#adding-new-or-existing-items 
... here the "existing" part only applies to *detached* instances (ones that 
were previously associated with a session but have been removed), not to 
*transient* ones (new instances that SQLAlchemy hasn't seen before). 
Transient instances are assumed new by session.add; it doesn't query the 
database to check whether the primary key already exists.

See Quickie Intro to Object States 
http://www.sqlalchemy.org/docs/orm/session.html#quickie-intro-to-object-states 
and then the rest of the Session tutorial; that should get you going.
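If it helps, merge()'s decision logic can be caricatured in a few lines of 
plain Python (a toy identity map keyed by primary key; this is not 
SQLAlchemy's implementation):

```python
# Toy model of merge() semantics: look up by primary key first;
# update the existing instance if found, insert otherwise.
identity_map = {}

def merge(row):
    existing = identity_map.get(row["id"])
    if existing is not None:
        existing.update(row)             # UPDATE path
        return existing
    identity_map[row["id"]] = dict(row)  # INSERT path
    return identity_map[row["id"]]

first = merge({"id": 1, "name": "a"})
second = merge({"id": 1, "name": "b"})
assert second is first        # same identity, updated in place
assert first["name"] == "b"
assert len(identity_map) == 1
```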

Regards,

- Gulli




Re: RE: [sqlalchemy] Updating records in table not working

2011-07-27 Thread Gunnlaugur Briem
On Wednesday, 27 July 2011 08:23:14 UTC, Simon King wrote:

  I've looked at the SA documentation and as far as I can see the 'add'
  does an insert or an update.

 I think this is incorrect - 'add' always corresponds to 'INSERT'

Only for brand-new instances, not yet associated with a session. For *detached* 
instances the identity is known and the instance will be in the session but not 
in session.new, so an UPDATE will be issued.

Regards,

- Gulli






[sqlalchemy] Re: order_by with property of related table

2011-07-27 Thread Gunnlaugur Briem
Hi,

you need to join explicitly on A.b:

SESSION.query(A).join(A.b).order_by(B.name)

Full example: http://pastebin.com/uMqEa6Cr

Regards,

- Gulli




[sqlalchemy] Re: Constructor and destructor of mapped objects

2011-04-26 Thread Gunnlaugur Briem
Hi Dmitry.

You don't need to call the Base constructor, it's just a convenience 
feature: 
http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#class-constructor

For doing stuff on delete, check out the ORM event API, specifically the 
before_delete and after_delete events:

http://www.sqlalchemy.org/docs/orm/interfaces.html#sqlalchemy.orm.interfaces.MapperExtension.before_delete
http://www.sqlalchemy.org/docs/orm/interfaces.html#sqlalchemy.orm.interfaces.MapperExtension.after_delete

Regards,

- Gulli




[sqlalchemy] Re: Best way to insert different string to Unicode columns?

2011-04-26 Thread Gunnlaugur Briem
Hi,

the proper way is to handle character encoding yourself, and pass unicode 
objects to methods that expect them.

When you get this SAWarning, you've just passed a byte sequence (str in 
Python 2.x) where a unicode object was expected. Your byte sequence *may* be 
an ASCII-encoded string, but may (perhaps later in your application's 
lifetime) be a differently-encoded string, or not a string at all. Your 
application should know which encoding applies (SQLAlchemy can't know), and 
apply it (e.g. with s.decode(encoding), where encoding might be 'utf8', 
'iso-8859-1' or 'utf16') before passing the result to SQLAlchemy.
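In Python 3 terms (the same advice, with the str/bytes split made explicit; 
the byte values here are just illustrative):

```python
# Decode at the boundary, with the encoding the application knows.
raw = b"Caf\xc3\xa9"            # bytes as they might arrive from a socket or file
text = raw.decode("utf-8")      # the application supplies the encoding
assert text == "Café"

# The same bytes under a wrongly guessed encoding give mojibake, not an error,
# which is exactly why SQLAlchemy refuses to guess for you.
assert raw.decode("iso-8859-1") == "CafÃ©"
```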

Regards,

- Gulli




[sqlalchemy] Re: unicode everywhere

2011-04-26 Thread Gunnlaugur Briem
Hi Chris,

Use Unicode/UnicodeText wherever you have text (and are able to know its 
character encoding at storage time, preferably always). Text and String are 
really just byte sequences, and should be used if it's arbitrary bytes you 
want to store.

If you don't have control of the DB schema, then you must follow it; if it 
only stores bytes, then it's bytes you put in (i.e. you have to encode your 
text in some particular character encoding). But your application and your 
collaborators and successors will thank you for enforcing consistency: 
always store your text in the same character encoding. Generally: your 
application is responsible for knowing/guessing the character encoding of 
incoming textual data. Your data model should be unconcerned with it if at 
all possible.

The "significant performance overhead" quoted in the docs applies when you 
force SQLAlchemy to do unicode conversion on a platform that already 
supports it natively. I wouldn't worry about it otherwise.

Under Python 3, Text and String still mean the same thing: they still refer 
to byte sequences stored in the DB, and Unicode and UnicodeText still refer 
to text. The difference under Python 3 is that str is a unicode string type 
(unlike str in Python 2.x), and the byte-sequence type is called bytes 
(which is what str is in Python 2.x).

So in Python 3 you pass an object of type bytes for Text/String columns 
where you would have passed an object of type str in Python 2.x, and you 
pass an object of type str for UnicodeText/Unicode columns where you would 
have passed an object of type unicode in Python 2.x.
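A small Python 3 illustration of that mapping (types only; no column classes 
imported):

```python
# str is text (what Unicode/UnicodeText columns expect);
# bytes is the byte-sequence type (what String/Text columns store).
text = "þorn"                  # would be a unicode object in Python 2.x
data = text.encode("utf-8")    # would be a str object in Python 2.x
assert isinstance(text, str) and isinstance(data, bytes)
assert data.decode("utf-8") == text
```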

Regards,

- Gulli




[sqlalchemy] Re: How to add alias for concatentated column ?

2011-02-11 Thread Gunnlaugur Briem
Hi,

you mean a label? Just use parentheses:

sql = Query([
    (Trademark.name + ' ' + Model.name).label('combined_name'),
    Model.id.label('id')
]).filter(Model.trademark_id == Trademark.id)

HTH,

- Gulli




[sqlalchemy] Re: Map Model to Dynamic Select Statement

2011-02-11 Thread Gunnlaugur Briem
Hi,

1. Probably doable by generating the mapped class on the fly, if you insist, 
but I don't see a reason to.

2. I would certainly rather do it with a function (which you can make a 
class method if you want to, sure). This is a normal app-level query, after 
all.

- Gulli




[sqlalchemy] Re: SELECT ARRAY(SELECT ...

2010-11-09 Thread Gunnlaugur Briem
This works:

select([
    func.ARRAY(
        select([t.c.value])
        .where(t.c.id > 3)
        .as_scalar()
    ).label('array_col')
])

- Gulli



On Nov 9, 3:43 pm, Michael Hipp mich...@hipp.com wrote:
 Can someone show me the gist of how to construct an SA query that
 produces SQL* of the form

 SELECT ARRAY(SELECT ... FROM ... WHERE ...) as array_col

 Thanks,
 Michael

 *http://www.postgresql.org/docs/8.4/interactive/sql-expressions.html#S...




[sqlalchemy] Re: Creating tables in correct order

2010-11-08 Thread Gunnlaugur Briem
Hi,

that example code works for me in PostgreSQL, after adding unique=True
on the name attribute of User, and reversing the order of the drop
calls. I don't have a MySQL to try against. Did you get the exact same
error from it when running against MySQL?

As for your actual app:

1. the log output suggests that some of your tables already exist: the
ROLLBACKs are presumably to clear the does-not-exist error condition.
There's no ROLLBACK after DESCRIBE `games` and DESCRIBE
`dependencies`, so those tables probably exist already, and maybe they
don't match the schema of your current declarative setup. create_all
does not modify existing tables, so creating a new table with a
foreign key against them may fail with a mismatch. That might be what
the errno 150 from MySQL means.

2. that decoupled configuration may be right, but not dead-obviously
so :) ... to confirm, you could check that all the tables you expect
the metadata to know about are really there, and have the proper
foreign keys, just before calling create_all. Something like:

import pprint
pprint.pprint(dict(
(tn, [
(c.name, c.foreign_keys)
for c in t.c
if c.foreign_keys
])
for tn, t in metadata.tables.items()
))

Regards,

- Gulli




[sqlalchemy] Re: Creating tables in correct order

2010-11-04 Thread Gunnlaugur Briem
Strictly that's not a query, it's a table definition. Do you mean that
you are creating a declarative model class corresponding to this table
definition? Posting your code would help more.

You must define both tables on the same metadata instance (in
declarative, that's typically done by having both model classes extend
the same declarative_base instance). If you call create_all on that
metadata instance, it does respect the dependency order. (Unless
you've found a bug, which is unlikely for code that's as central and
heavily used as this.)

See http://www.sqlalchemy.org/docs/orm/extensions/declarative.html ---
follow the beginning of that, and make sure both of your model classes
extend the same Base instance.

Regards,

- Gulli



On Nov 4, 10:51 pm, Richie Ward rich...@gmail.com wrote:
 I am trying to run this query:
 CREATE TABLE dependenciesbinary (
     id INTEGER NOT NULL AUTO_INCREMENT,
     dependency_mn VARCHAR(128),
     name VARCHAR(128),
     operatingsystem VARCHAR(128),
     architecture VARCHAR(128),
     PRIMARY KEY (id),
     FOREIGN KEY(dependency_mn) REFERENCES dependencies (modulename)
 )ENGINE=InnoDB CHARSET=utf8

 But create_all() is not creating the table dependencies before
 dependenciesbinary which causes MySQL to error due to the missing
 table.

 Is there some way I can change the order of the create statements to
 fix this?

 I am using Declarative if that helps.




[sqlalchemy] Re: hierarchical data storage and searching

2010-09-08 Thread Gunnlaugur Briem
Hi Chris,

this is more of a relational design question than SQLAlchemy-related,
but take a look at this for an at-a-glance summary of different
approaches and their pros and cons:

http://vadimtropashko.wordpress.com/2008/08/09/one-more-nested-intervals-vs-adjacency-list-comparison/

... here for some illustration and examples:

http://dev.mysql.com/tech-resources/articles/hierarchical-data.html
http://communities.bmc.com/communities/docs/DOC-9902

and here for links to more than you really want to know on the
subject:

http://troels.arvin.dk/db/rdbms/links/#hierarchical

SQLAlchemy supports any of these approaches well enough that I don't
think you need to factor SQLAlchemy into your choice of relational
design at all. Check out /examples/nested_sets/ and /examples/
adjacency_list/ (in the SQLAlchemy distribution) for what are probably
the two most common approaches.
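For the quoted can_access API specifically, a plain-Python prototype 
(hypothetical names; path-prefix grants standing in for whichever schema you 
end up choosing) is a cheap way to pin down the semantics before writing SQL:

```python
# A grant on a path implies access to everything beneath it.
grants = {"userX": {"/a", "/b/1"}}

def can_access(user_id, path):
    parts = path.rstrip("/").split("/")
    # every ancestor of /a/2 is "/", "/a" and "/a/2" itself
    ancestors = {"/".join(parts[:i]) or "/" for i in range(1, len(parts) + 1)}
    return bool(grants.get(user_id, set()) & ancestors)

assert can_access("userX", "/a/2")      # granted via /a
assert can_access("userX", "/b/1")      # granted directly
assert not can_access("userX", "/b/2")  # no grant on /b or /b/2
```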

Regards,

- Gulli



On Sep 8, 8:22 am, Chris Withers ch...@simplistix.co.uk wrote:
 Hi All,

 I'm trying to solve a hierarchical access control problem, both on the
 storage and querying side.

 So, say I have a tree of content:

 /
 /a/
 /a/1
 /a/2
 /b/
 /b/1
 /b/2

 I want to be able to express and search on the following types of
 requirements:

 User X should be able to access all content in /a and content in /b/1

 The storage side just needs to be usable, speed wise, but obviously the
 query side needs to be lighting fast as it'll be hit *hard* and often.

 What schema/indexes would people recommend for this?
 How would I query that schema fast in SQLAlchemy to be able to answer
 the above type of questions. I guess the API that needs to be fast would
 be along the lines of:

 def can_access(user_id,path):
    return True or False

 The grant python API would look like:

 def grant_access(user_id,*paths):
    ...

 Any help much appreciated!

 cheers,

 Chris

 --
 Simplistix - Content Management, Batch Processing  Python Consulting
              -http://www.simplistix.co.uk




[sqlalchemy] Couple of breaking changes in 0.6

2010-06-04 Thread Gunnlaugur Briem
Hi,

I'm finally adapting our codebase to SqlAlchemy 0.6, and I've found a
couple of breaking changes that are not documented in
http://www.sqlalchemy.org/trac/wiki/06Migration

First, ResultProxy.keys is now a method returning a list, not a list-
valued attribute, so expressions like "k in result.keys" or
"result.keys.index" no longer work; these references must be changed
to method invocations (result.keys()).

Second, DeclarativeMeta.__init__ now ignores columns defined in the
dict_ passed to it (by a subclass metaclass constructor), using
cls.__dict__ instead. This seems to be deliberate, the log message on
commit 6055 is “DeclarativeMeta exclusively uses cls.__dict__ (not
dict_)”. But it breaks my use case; we have subclassed the metaclass
to add two columns that are common to a set of entities of ours:

class DimensionValueMeta(DeclarativeMeta):
    def __init__(cls, classname, bases, dict_):
        if not dict_.has_key('__table__'):
            dict_['id'] = Column(Integer, primary_key=True)
            dict_['value'] = Column(UnicodeText, nullable=False,
                                    unique=True)
        DeclarativeMeta.__init__(cls, classname, bases, dict_)

I am told that just assigning cls.id = Column(...) and cls.value =
Column(...) will work, which is cleaner anyway.
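The reason assigning on cls works can be shown with a plain type-based 
metaclass (a toy stand-in for DeclarativeMeta; the attribute values are 
placeholders, not Column objects):

```python
# Assigning on cls attaches the attribute directly to the class,
# regardless of whether anyone consults dict_ afterwards.
class DefaultsMeta(type):
    def __init__(cls, name, bases, dict_):
        if 'id' not in dict_:
            cls.id = None  # stand-in for Column(Integer, primary_key=True)
        super().__init__(name, bases, dict_)

class Entity(metaclass=DefaultsMeta):
    pass

class WithId(metaclass=DefaultsMeta):
    id = 7  # explicitly defined, so the metaclass leaves it alone

assert Entity.id is None
assert WithId.id == 7
```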

I've added draft explanations of these two breaking changes to
http://www.sqlalchemy.org/trac/wiki/06Migration (last two changes by
“guest”) — please review and fix whatever I got wrong.

- Gulli




[sqlalchemy] Declarative with mix-in and __table__ fails [patch]

2010-06-04 Thread Gunnlaugur Briem
Hi,

if I declaratively map a class with a mix-in and with a __table__
definition that contains the columns in the mix-in, like so:

class MyMixin(object):
    id = Column(Integer, primary_key=True)
    def foo(self):
        return 'bar' + str(self.id)

class MyModel(Base, MyMixin):
    __table__ = Table('test', Base.metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(1000), nullable=False, index=True)
    )

… then I get a fairly cryptic error:

Traceback (most recent call last):
  File "mixin_trouble.py", line 12, in <module>
    class MyModel(Base, MyMixin):
  File "/Users/gthb/extsvn/sqlalchemy/lib/sqlalchemy/ext/declarative.py", line 830, in __init__
    _as_declarative(cls, classname, cls.__dict__)
  File "/Users/gthb/extsvn/sqlalchemy/lib/sqlalchemy/ext/declarative.py", line 681, in _as_declarative
    if tablename or k not in parent_columns:
TypeError: Error when calling the metaclass bases
    argument of type 'NoneType' is not iterable

The real reason is that the case of __table__ with a mix-in is not
handled. If MyModel were defined with __tablename__ and id =
Column(Integer, primary_key=True), then tablename would be truthy and
parent_columns would not be checked, so this would not come up.

We could just improve the error message by checking for this
unsupported case. But it seems fairly easy to support it. This simple
change makes the above case work, and breaks no unit tests:

diff -r dd463bfb847d lib/sqlalchemy/ext/declarative.py
--- a/lib/sqlalchemy/ext/declarative.py Mon May 31 15:22:55 2010 -0400
+++ b/lib/sqlalchemy/ext/declarative.py Fri Jun 04 19:05:16 2010 +
@@ -670,7 +670,9 @@
                         "Columns with foreign keys to other columns "
                         "are not allowed on declarative mixins at this time."
                     )
-                if name not in dict_:
+                if name not in dict_ and not (
+                    '__table__' in dict_ and name in dict_['__table__'].c
+                ):
                     potential_columns[name] = column_copies[obj] = obj.copy()
             elif isinstance(obj, RelationshipProperty):
                 raise exceptions.InvalidRequestError(

I may be missing something else that needs to be done in this case,
but in any case this gets my codebase (which I've just modified to use
a mix-in like this instead of a metaclass) running and passing its
tests without problems.

Also, since I've got your attention: shouldn't this place really
check (in both this new __table__ case and the already-supported
__tablename__ case) that the column definitions are identical, or at
least mostly equivalent? If they aren't, then the mix-in class's
expectations are not fulfilled, and that may cause all sorts of trouble,
so it's better to bump into it right away.

To that end, is there an existing way to check column definition
equality/equivalence, i.e. comparing at least type, constraints and
default and primary_key, and maybe also autoincrement, base_columns,
etc.?

Regards,

- Gulli




[sqlalchemy] Re: Declarative with mix-in and __table__ fails [patch]

2010-06-04 Thread Gunnlaugur Briem
On Jun 4, 9:23 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 as long as all tests pass it is fine.  If you could give me a patch that 
 includes a test for this in test_declarative, that would be supremely helpful 
 (if you want to make a trac ticket and target it at the 0.6.2 milestone).

Will do.

 do you mean, if the new class overrides what the mixin provides ?   i'm not 
 sure why we'd need to do anything if the class overrides the mixin.

Because the mix-in is itself an API, a public interface defining
properties on which other code may depend. It is a superclass, and a
subclass should extend or elaborate what the superclass' definition
promises — not replace it. That was my thinking. And it was wrong.
Because there are further “shouldn't”s: yeah, a subclass shouldn't
break its superclass' API, but also a superclass API shouldn't be too
specific, and a framework shouldn't be too restrictive.

The mix-in specifies the names of its columns, and *maybe* it
specifies more detail (e.g. id must be Integer, or name column must be
unique), but then that's application-specific. (Sure, that id column
has type Integer, but it had to have *some* type; the mix-in author
may well mean it as a default and not a restriction.) Allowing
whatever property (type, uniqueness) of the columns to be overridden
is a valid application design choice, so DeclarativeMeta should not
needlessly enforce anything beyond the names.

So yeah, you're right, no need to do any more.

- Gulli




[sqlalchemy] Literal table expression (VALUES clause) in select

2010-02-26 Thread Gunnlaugur Briem
Hi,

I want to perform a weighted sum over one column, specifying the
weights in a literal table (a VALUES clause):

select a, sum(b*weights.column1)
FROM (select a, b from whatever) foo
JOIN (values (1, 0.5), (2, -0.5) ) weights ON weights.column2=a
GROUP BY 1;

This is on PostgreSQL 8.4. To represent this in
sqlalchemy.sql.expression terms:

select([subquery.c.a,
        func.sum(subquery.c.b * weights_clause.c['column1'])],
       weights_clause.c['column2'] == subquery.c.a)

but how do I get the weights_clause? I want it to compile to VALUES
(1, 0.5), (2, -0.5) (and then wrap it with .alias('weights')) but it
seems like there is no expression for this in SQLAlchemy so far (just
the VALUES support in INSERT and UPDATE, which does not work for this
use).

Does this require a new subclass of FromClause, or is there an
existing way that I'm not seeing?

The column names 'column1', 'column2' are as specified here
http://www.postgresql.org/docs/8.4/static/queries-values.html — the
names may be specific to PostgreSQL, not sure, but I believe (haven't
checked) that SQL generally allows VALUES to be used wherever SELECT
can be used.

Thanks,

- Gulli




[sqlalchemy] Re: Literal table expression (VALUES clause) in select

2010-02-26 Thread Gunnlaugur Briem
This produces the equivalent SELECT ... UNION ALL ... UNION ALL ...
using select, literal_column and union_all (and leaving aside the
details of representing the literal values inside the VALUES clause)
--- works just fine and PostgreSQL produces the same query plan for
it:

num_columns = len(weights_and_valueids[0])  # assuming all are the same length
weights_clause = union_all(*(
    select([
        literal_column('%s' % weight_and_valueid[i]).label('column%d' % (i + 1))
        for i in xrange(0, num_columns)
    ])
    for weight_and_valueid in weights_and_valueids
)).alias('weights')

Is this more portable, and thus should be used in preference to VALUES
as a literal table clause?
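The UNION ALL form can be sanity-checked against a real engine; here is the 
same weighted sum in plain SQL on sqlite3 (stdlib), with the weights inlined 
as literal selects (table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b REAL)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10.0), (2, 4.0)])
# literal SELECTs glued with UNION ALL stand in for a VALUES table
# of (column1 = weight, column2 = id) pairs
rows = con.execute("""
    SELECT t.a, SUM(t.b * w.column1)
    FROM t JOIN (SELECT 0.5 AS column1, 1 AS column2
                 UNION ALL
                 SELECT -0.5, 2) w ON w.column2 = t.a
    GROUP BY t.a ORDER BY t.a
""").fetchall()
# → [(1, 5.0), (2, -2.0)]
```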

Regards,

- Gulli







[sqlalchemy] Re: telling merge() to insert certain objects instead of update

2010-02-08 Thread Gunnlaugur Briem
Trunk commits over the past year:

svn log -q --xml --with-revprop svn:author -r {2009-02-09}:HEAD \
    | grep '<author>' | sed 's#<author>\(.*\)</author>#\1#' \
    | sort | uniq -c | sort -n -k 1

   1 ellisj
   1 ram
   4 ants
   5 jek
  14 lele
  23 ged
  28 empty
  39 pjenvey
 413 zzzeek

So, mostly Mike Bayer. No conclusive data on how many developers he
is.
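The counting tail of that pipeline (sort | uniq -c | sort -n) can be tried on canned input, since the Subversion repository itself isn't assumed to be available here:

```shell
# Count occurrences per author name and sort ascending by count,
# mirroring the tail of the pipeline above on sample input.
printf 'zzzeek\npjenvey\nzzzeek\nged\nzzzeek\n' | sort | uniq -c | sort -n -k 1
```

The first sort groups identical names so uniq -c can count them; the second sorts numerically on the count field.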

- Gulli



On Feb 8, 5:00 pm, Kent k...@retailarchitects.com wrote:
 You guys have done some great work, thanks.  (How many developers are
 you?)




[sqlalchemy] Re: MPTT

2010-01-31 Thread Gunnlaugur Briem
The trouble with mapping against an arbitrary select is that you can't
insert/update/delete through the mapped class. Two ways around that:

1) if your database engine supports it, create a view from the join
query and define insert/update/delete rules on the view in your RDBMS,
and in SQLAlchemy map a class against the view, as if it were a table.
PostgreSQL documentation for this feature:

http://www.postgresql.org/docs/8.4/static/rules-update.html



2) instead of mapping against a query, map against the table normally
but add a column property with a correlated subquery:

http://www.sqlalchemy.org/docs/mappers.html#sql-expressions-as-mapped-attributes

http://www.sqlalchemy.org/docs/reference/orm/mapping.html#sqlalchemy.orm.column_property

You can filter on this column property, but such a query would
presumably be less efficient (correlated subquery vs. join). However,
queries that *don't* filter on this property will be faster (single-
table query vs. join) if you make the property deferred.
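The join-vs-correlated-subquery trade-off can be seen in plain SQL; here is a sketch with the stdlib sqlite3 module and a few rows of the nested_category data from this thread, roughly the SQL a column_property built from a correlated select would produce:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE nested_category '
             '(category_id INTEGER PRIMARY KEY, name TEXT, lft INTEGER, rgt INTEGER)')
conn.executemany('INSERT INTO nested_category VALUES (?, ?, ?, ?)', [
    (1, 'ELECTRONICS', 1, 20), (2, 'TELEVISIONS', 2, 9), (3, 'TUBE', 3, 4),
])

# level computed per row by a correlated subquery instead of a self-join:
# the inner SELECT runs once for each outer row.
levels = dict(conn.execute("""
    SELECT node.name,
           (SELECT COUNT(*) - 1
            FROM nested_category AS parent
            WHERE node.lft BETWEEN parent.lft AND parent.rgt) AS level
    FROM nested_category AS node
""").fetchall())
print(levels['TUBE'])  # 2
```

Queries that don't need the level never touch the subquery, which is the point of deferring the property.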



Regards,

- Gulli



On Jan 31, 12:13 am, Gunnlaugur Briem gunnlau...@gmail.com wrote:
 Yes, you can map a class against an arbitrary select, not just a
 table, see:

 http://www.sqlalchemy.org/docs/mappers.html#mapping-a-class-against-a...

 So above you have a query that you could map a class against (you
 might want to add the other nested_category columns to it). No need to
 declare the level property specially, it's just a column like any
 other, in the query you're mapping against.

     - Gulli

 On Jan 30, 3:31 pm, Juan Dela Cruz juandelacru...@gmail.com wrote:



  Wow great ;) It really works. Thanks.

  One more thing, is it possible to map this in a Model Class perhaps as a
  python @property name level and make it query-able? Like passing it to a
  filter clause

  e.g.
  nested_category.query.filter(nested_category.c.level=='3')

  On Sat, Jan 30, 2010 at 8:48 AM, Gunnlaugur Briem 
  gunnlau...@gmail.comwrote:

   Hi Juan,

    this will do it in version 0.5.8 (and probably earlier):

   nested_category = Table(
      'nested_category',
      MetaData(),
      Column('category_id', Integer, primary_key=True),
      Column('name', Text, nullable=False),
      Column('lft', Integer, nullable=False),
      Column('rgt', Integer, nullable=False)
      )
   node = nested_category.alias('node')
   parent = nested_category.alias('parent')
   query = select([node.c.name, (func.count(node.c.name) - text
   ('1')).label('level')],
      from_obj=join(node, parent,
                    node.c.lft.between(parent.c.lft, parent.c.rgt)
                    )
      ).group_by(node.c.name)

   str(query) will show that it is correct (it uses a JOIN expression
   instead of the WHERE condition, but that's equivalent and more
   explicit)

   The text('1') instead of just 1 is so that the literal constant 1 is
   not needlessly replaced by a bind param. It works either way though.

   Regards,

      - Gulli

   On Jan 29, 8:53 am, Juan Dela Cruz juandelacru...@gmail.com wrote:
Can someone please help me to figure out the equivalent of this sql 
query
   to
sqlalchemy

This my nested_category table:

+-+--+-+-+
| category_id | name                 | lft | rgt |
+-+--+-+-+
|           1 | ELECTRONICS          |   1 |  20 |
|           2 | TELEVISIONS          |   2 |   9 |
|           3 | TUBE                 |   3 |   4 |
|           4 | LCD                  |   5 |   6 |
|           5 | PLASMA               |   7 |   8 |
|           6 | PORTABLE ELECTRONICS |  10 |  19 |
|           7 | MP3 PLAYERS          |  11 |  14 |
|           8 | FLASH                |  12 |  13 |
|           9 | CD PLAYERS           |  15 |  16 |
|          10 | 2 WAY RADIOS         |  17 |  18 |
+-+--+-+-+

SELECT node.name, (COUNT(node.name)-1) AS level
    FROM nested_category AS node, nested_category AS parent
        WHERE node.lft BETWEEN parent.lft AND parent.rgt
            GROUP BY node.name;

The result will be:
+--+---+

| name                 | depth |
+--+---+
| ELECTRONICS          |     0 |
| TELEVISIONS          |     1 |
| TUBE                 |     2 |
| LCD                  |     2 |
| PLASMA               |     2 |
| PORTABLE ELECTRONICS |     1 |
| MP3 PLAYERS          |     2 |
| FLASH                |     3 |
| CD PLAYERS           |     2 |
| 2 WAY RADIOS         |     2 |
+--+---+


[sqlalchemy] Re: How to diagnose a transaction hang problem?

2010-01-30 Thread Gunnlaugur Briem
You probably don't have to add locks in your code. The database system
takes locks on behalf of your code (in operations initiated by your
code) as necessary to ensure transactional consistency. That's one of
the benefits of an RDBMS.

What you do have to do is to make sure that those database locks don't
stay around forever (blocking other transactions). That happens when
you forget to complete (commit or rollback/close) the transaction that
creates them. If it eventually completes (either way), then things
will work fine for the most part --- and at least not hang.

(There is a lot more to this; you should read up on transactions and
locks, it's fascinating stuff. But your immediate problem is very
likely just a session/connection that you forget to close.)

Regards,

- Gulli


On Jan 30, 4:53 am, 一首诗 newpt...@gmail.com wrote:
 Yeah, there might be another transaction modifying the same data
 (actually the same line of data in database).

 But I didn't expect that might cause problem before!

 Oh, if that's true, then I have to add some lock in my code to avoid
 that.  That's a big problem.

 On Jan 29, 10:13 pm, Alex Brasetvik a...@brasetvik.com wrote:



  On Jan 29, 2010, at 15:01 , 一首诗 wrote:

   What might cause this kind of problem?

  Possibly waiting on locks. Do you have any concurrent transactions 
  modifying the same data?

  When the problem appears, run `select * from pg_stat_activity` to see 
  whether there are locking issues.

  To see the locks involved, run `select * from pg_locks`.

  --
  Alex Brasetvik




[sqlalchemy] Re: MPTT

2010-01-30 Thread Gunnlaugur Briem
Yes, you can map a class against an arbitrary select, not just a
table, see:

http://www.sqlalchemy.org/docs/mappers.html#mapping-a-class-against-arbitrary-selects

So above you have a query that you could map a class against (you
might want to add the other nested_category columns to it). No need to
declare the level property specially, it's just a column like any
other, in the query you're mapping against.

- Gulli



On Jan 30, 3:31 pm, Juan Dela Cruz juandelacru...@gmail.com wrote:
 Wow great ;) It really works. Thanks.

 One more thing, is it possible to map this in a Model Class perhaps as a
 python @property name level and make it query-able? Like passing it to a
 filter clause

 e.g.
 nested_category.query.filter(nested_category.c.level=='3')

 On Sat, Jan 30, 2010 at 8:48 AM, Gunnlaugur Briem gunnlau...@gmail.comwrote:



  Hi Juan,

  this will do it in version 0.5.8 (and probably earlier):

  nested_category = Table(
     'nested_category',
     MetaData(),
     Column('category_id', Integer, primary_key=True),
     Column('name', Text, nullable=False),
     Column('lft', Integer, nullable=False),
     Column('rgt', Integer, nullable=False)
     )
  node = nested_category.alias('node')
  parent = nested_category.alias('parent')
  query = select([node.c.name, (func.count(node.c.name) - text
  ('1')).label('level')],
     from_obj=join(node, parent,
                   node.c.lft.between(parent.c.lft, parent.c.rgt)
                   )
     ).group_by(node.c.name)

  str(query) will show that it is correct (it uses a JOIN expression
  instead of the WHERE condition, but that's equivalent and more
  explicit)

  The text('1') instead of just 1 is so that the literal constant 1 is
  not needlessly replaced by a bind param. It works either way though.

  Regards,

     - Gulli

  On Jan 29, 8:53 am, Juan Dela Cruz juandelacru...@gmail.com wrote:
   Can someone please help me to figure out the equivalent of this sql query
  to
   sqlalchemy

   This my nested_category table:

   +-+--+-+-+
   | category_id | name                 | lft | rgt |
   +-+--+-+-+
   |           1 | ELECTRONICS          |   1 |  20 |
   |           2 | TELEVISIONS          |   2 |   9 |
   |           3 | TUBE                 |   3 |   4 |
   |           4 | LCD                  |   5 |   6 |
   |           5 | PLASMA               |   7 |   8 |
   |           6 | PORTABLE ELECTRONICS |  10 |  19 |
   |           7 | MP3 PLAYERS          |  11 |  14 |
   |           8 | FLASH                |  12 |  13 |
   |           9 | CD PLAYERS           |  15 |  16 |
   |          10 | 2 WAY RADIOS         |  17 |  18 |
   +-+--+-+-+

   SELECT node.name, (COUNT(node.name)-1) AS level
       FROM nested_category AS node, nested_category AS parent
           WHERE node.lft BETWEEN parent.lft AND parent.rgt
               GROUP BY node.name;

   The result will be:
   +--+---+

   | name                 | depth |
   +--+---+
   | ELECTRONICS          |     0 |
   | TELEVISIONS          |     1 |
   | TUBE                 |     2 |
   | LCD                  |     2 |
   | PLASMA               |     2 |
   | PORTABLE ELECTRONICS |     1 |
   | MP3 PLAYERS          |     2 |
   | FLASH                |     3 |
   | CD PLAYERS           |     2 |
   | 2 WAY RADIOS         |     2 |
   +--+---+





[sqlalchemy] Re: OperationalError: (OperationalError) no such table:

2010-01-29 Thread Gunnlaugur Briem
Hi James,

it would be helpful if you posted the call stack where that error
occurs, and the code leading up to the failed query execution, which
database and driver (the first word in the engine URL).

Without further details, the first place I would look is where the
session (or connection) gets created. Are you sure it is getting the
same engine URL that you used in the python shell code (which you
replaced with login-info before posting)?
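One classic cause with SQLite specifically (a guess; the thread never names the backend): every ':memory:' connection is its own private, empty database, so a table created through one connection simply doesn't exist for another.

```python
import sqlite3

c1 = sqlite3.connect(':memory:')
c1.execute('CREATE TABLE pwi_wildcard (wildcard_id INTEGER PRIMARY KEY)')

# A second :memory: connection is a completely separate database,
# so the table "exists" for c1 but not for c2 -- the same
# "no such table" symptom as in the traceback.
c2 = sqlite3.connect(':memory:')
try:
    c2.execute('SELECT * FROM pwi_wildcard')
    missing = False
except sqlite3.OperationalError:  # no such table: pwi_wildcard
    missing = True
print(missing)  # True
```

With SQLAlchemy the same thing happens when the app and the shell session end up with different engines or URLs.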

- Gulli



On Jan 29, 9:00 pm, James Sathre jamessat...@gmail.com wrote:
 I’ve been stuck trying to get the pylons application to connect to my
 database.  I was able to connect to the database through a python
 shell in the “virtualenv” as you can see below.  The app acts like it
 can connect to the database, but not to the table.

 I admit this is my first pylons project and I'm a little confused as
 to where to start looking for a problem.  There seems to be a lot of
 outdated doc's on the web and I don't know what to believe is the
 current way of doing things.

  >>> import sqlalchemy as sa
  >>> engine = sa.create_engine("login-info")
  >>> from pwi import model
  >>> model.init_model(engine)
  >>> engine.has_table("pwi_wildcard")
 True

 OperationalError: (OperationalError) no such table: pwi_wildcard
 u'SELECT pwi_wildcard.wildcard_id AS pwi_wildcard_wildcard_id,
 pwi_wildcard.priority AS pwi_wildcard_priority, pwi_wildcard.name AS
 pwi_wildcard_name, pwi_wildcard.shot AS pwi_wildcard_shot,
 pwi_wildcard.scene AS pwi_wildcard_scene, pwi_wildcard.created_by AS
 pwi_wildcard_created_by, pwi_wildcard.expires AS pwi_wildcard_expires
 \nFROM pwi_wildcard' []

 thanks in advance,

 James




[sqlalchemy] Re: MPTT

2010-01-29 Thread Gunnlaugur Briem
Hi Juan,

this will do it in version 0.5.8 (and probably earlier):

nested_category = Table(
    'nested_category',
    MetaData(),
    Column('category_id', Integer, primary_key=True),
    Column('name', Text, nullable=False),
    Column('lft', Integer, nullable=False),
    Column('rgt', Integer, nullable=False)
    )
node = nested_category.alias('node')
parent = nested_category.alias('parent')
query = select([node.c.name,
                (func.count(node.c.name) - text('1')).label('level')],
               from_obj=join(node, parent,
                             node.c.lft.between(parent.c.lft, parent.c.rgt)
                             )
               ).group_by(node.c.name)

str(query) will show that it is correct (it uses a JOIN expression
instead of the WHERE condition, but that's equivalent and more
explicit).

The text('1') instead of just 1 is so that the literal constant 1 is
not needlessly replaced by a bind param. It works either way though.

Regards,

- Gulli


On Jan 29, 8:53 am, Juan Dela Cruz juandelacru...@gmail.com wrote:
 Can someone please help me to figure out the equivalent of this sql query to
 sqlalchemy

 This my nested_category table:

 +-+--+-+-+
 | category_id | name                 | lft | rgt |
 +-+--+-+-+
 |           1 | ELECTRONICS          |   1 |  20 |
 |           2 | TELEVISIONS          |   2 |   9 |
 |           3 | TUBE                 |   3 |   4 |
 |           4 | LCD                  |   5 |   6 |
 |           5 | PLASMA               |   7 |   8 |
 |           6 | PORTABLE ELECTRONICS |  10 |  19 |
 |           7 | MP3 PLAYERS          |  11 |  14 |
 |           8 | FLASH                |  12 |  13 |
 |           9 | CD PLAYERS           |  15 |  16 |
 |          10 | 2 WAY RADIOS         |  17 |  18 |
 +-+--+-+-+

 SELECT node.name, (COUNT(node.name)-1) AS level
     FROM nested_category AS node, nested_category AS parent
         WHERE node.lft BETWEEN parent.lft AND parent.rgt
             GROUP BY node.name;

 The result will be:
 +--+---+

 | name                 | depth |
 +--+---+
 | ELECTRONICS          |     0 |
 | TELEVISIONS          |     1 |
 | TUBE                 |     2 |
 | LCD                  |     2 |
 | PLASMA               |     2 |
 | PORTABLE ELECTRONICS |     1 |
 | MP3 PLAYERS          |     2 |
 | FLASH                |     3 |
 | CD PLAYERS           |     2 |
 | 2 WAY RADIOS         |     2 |
 +--+---+
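The query and the expected depth column can be verified end to end in plain SQL with the stdlib sqlite3 module, using the sample data above (independent of SQLAlchemy):

```python
import sqlite3

DATA = [
    (1, 'ELECTRONICS', 1, 20), (2, 'TELEVISIONS', 2, 9), (3, 'TUBE', 3, 4),
    (4, 'LCD', 5, 6), (5, 'PLASMA', 7, 8), (6, 'PORTABLE ELECTRONICS', 10, 19),
    (7, 'MP3 PLAYERS', 11, 14), (8, 'FLASH', 12, 13), (9, 'CD PLAYERS', 15, 16),
    (10, '2 WAY RADIOS', 17, 18),
]
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE nested_category '
             '(category_id INTEGER PRIMARY KEY, name TEXT, lft INTEGER, rgt INTEGER)')
conn.executemany('INSERT INTO nested_category VALUES (?, ?, ?, ?)', DATA)

# Each node joins to every ancestor (itself included), so the group
# count minus one is the node's depth in the nested-set tree.
levels = dict(conn.execute("""
    SELECT node.name, COUNT(node.name) - 1 AS level
    FROM nested_category AS node, nested_category AS parent
    WHERE node.lft BETWEEN parent.lft AND parent.rgt
    GROUP BY node.name
""").fetchall())
print(levels['FLASH'])  # 3
```

The output matches the depth table in the original post.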




[sqlalchemy] Re: How to diagnose a transaction hang problem?

2010-01-29 Thread Gunnlaugur Briem
Another quick way of troubleshooting hangs is the tool pg_top, in
which you might see a process in the state “Idle in transaction”. This
state means that some database operations have been performed in a
transaction on that connection but the transaction has not yet been
committed.

Those database operations will have been granted locks, for which your
stalled session is waiting. Behind this idle-in-transaction connection
might be another SQLAlchemy session that you neglected to commit or
close. That's a common way for this situation to come up.

You can see the locks held by the connection using pg_top (hit L), or
you can find them with pg_locks as Alex mentioned. These locks may
give you a clue as to where in your code that other session was
created, helping you track down the bug to correct.

To avoid creating cases like this, I try to be careful about session
objects: I never store them (keep them on the stack, i.e. as local
variables and function arguments), and I always create and close them
using a construct like this:

from contextlib import closing
with closing(Session()) as session:
    do_stuff()
    session.commit()  # if I want to

Note that sessions are not the same as DB connections (which are
pooled further down in the layers of stuff going on), you gain nothing
by storing and reusing them, and you risk creating cases like this.
Per the docs, “Sessions are very inexpensive to make, and don’t use
any resources whatsoever until they are first used...so create
some!” (and close and discard them happily).
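The closing() discipline guarantees close() runs even when the block raises; here is a framework-free sketch (FakeSession is a stand-in invented for illustration, not a SQLAlchemy class — a real Session also has close()):

```python
from contextlib import closing


class FakeSession:
    """Stand-in for a SQLAlchemy Session; only close() matters here."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


# The session is closed on normal exit...
with closing(FakeSession()) as session:
    normal = session
assert normal.closed

# ...and also when the block raises, so no connection is left
# idle in transaction holding locks.
try:
    with closing(FakeSession()) as session:
        failed = session
        raise RuntimeError('boom')
except RuntimeError:
    pass
assert failed.closed
print(normal.closed and failed.closed)  # True
```

This is why the stack-only discipline above composes well: the session's lifetime is exactly the block's lifetime.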

- G.



On Jan 29, 2:13 pm, Alex Brasetvik a...@brasetvik.com wrote:
 On Jan 29, 2010, at 15:01 , 一首诗 wrote:

  What might cause this kind of problem?

 Possibly waiting on locks. Do you have any concurrent transactions modifying 
 the same data?

 When the problem appears, run `select * from pg_stat_activity` to see whether 
 there are locking issues.

 To see the locks involved, run `select * from pg_locks`.

 --
 Alex Brasetvik




[sqlalchemy] Re: How to query data from a relational table according to a list of matched conditions?

2009-12-23 Thread Gunnlaugur Briem
You can put the func.count() label in a variable and filter on it:

articlecount = func.count(Article.id).label('count')
for article in session.query(Article, articlecount) \
        .join(Article.keywords).group_by(Article.name) \
        .filter(Keyword.name.in_(['k2', 'k3', 'k6', 'k7'])) \
        .filter(articlecount >= 2) \
        .order_by(articlecount.desc()):

(Untested, but I believe that's about right)
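Filtering on the aggregate renders as SQL HAVING; the whole query can be sanity-checked in plain SQL with the stdlib sqlite3 module (schema and data invented here to match the thread's example):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE article (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE keyword (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE article_keyword (article_id INTEGER, keyword_id INTEGER);
""")
articles = {1: ['k1', 'k2', 'k3', 'k4'], 2: ['k2', 'k3', 'k5', 'k6'],
            3: ['k1', 'k3', 'k4', 'k5', 'k6'], 4: ['k1', 'k2', 'k4', 'k5']}
keywords = {name: i for i, name in
            enumerate(sorted({k for ks in articles.values() for k in ks}), 1)}
for name, i in keywords.items():
    conn.execute('INSERT INTO keyword VALUES (?, ?)', (i, name))
for aid, ks in articles.items():
    conn.execute('INSERT INTO article VALUES (?, ?)', (aid, 'article_%d' % aid))
    for k in ks:
        conn.execute('INSERT INTO article_keyword VALUES (?, ?)', (aid, keywords[k]))

# Match-count per article, keeping only those with at least two matches.
rows = conn.execute("""
    SELECT a.name, COUNT(a.id) AS cnt
    FROM article a
    JOIN article_keyword ak ON ak.article_id = a.id
    JOIN keyword k ON k.id = ak.keyword_id
    WHERE k.name IN ('k2', 'k3', 'k6', 'k7')
    GROUP BY a.name
    HAVING COUNT(a.id) >= 2
    ORDER BY cnt DESC
""").fetchall()
print(rows[0])  # article_2 first, with 3 matches
```

article_4 (one match) drops out of the result, as the thread wanted.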

- G.



On Dec 23, 7:03 am, Olli Wang olliw...@ollix.com wrote:
 Thanks for reply. I finally got the right result as following:

 for article in session.query(Article,
                              func.count(Article.id).label('count')) \
                .join(Article.keywords).group_by(Article.name) \
                .filter(Keyword.name.in_(['k2', 'k3', 'k6', 'k7'])) \
                .order_by('count DESC'):
     print article

 And the result looks like this:
 (article_2, 3)
 (article_1, 2)
 (article_3, 2)
 (article_4, 1)

 However, is it possible to further filter the labeled `count` to be >= 2?
 So the result would become:

 (article_2, 3)
 (article_1, 2)
 (article_3, 2)

 Thanks.

 On Dec 23, 6:09 am, Michael Bayer mike...@zzzcomputing.com wrote:



  On Dec 22, 2009, at 1:39 AM, Olli Wang wrote:
   Hi, is it possible to query data from a relational table according to
   a list of matched conditions? For example, assume I have two tables,
   Article and Keywords, in a Many-to-Many relationship, and have data
   like this:

   Article 1 has keywords (k1, k2, k3, k4)
   Article 2 has keywords (k2, k3, k5, k6)
   Article 3 has keywords (k1, k3, k4, k5, k6)
   Article 4 has keywords k1, k2, k4, k5)

   Now I want to find all articles in the order of the number of matched
   keywords (k2, k3, k6, k7), and the result should like this:

   1) Article 2, because it has 3 matched keywords: (k2, k3, k6)
   2) Article 1, because it has 2 matched keywords: (k2, k3)
   3) Article 3, because it has 2 matched keywords: (k3, k6)
   4) Article 4, because it has only 1 matched keyword: (k2)

   Note that because no article has the keyword k7, so it just not
   count. Please help. Thanks.

  you'd likely want to join from Article to Keywords along the many-to-many
  table, and filter for those Keywords which identify as k2, k3, k6, k7,
  likely using an in_(). This join would look along the lines of:

  session.query(Article).join(Article.keywords).filter(
      Keyword.name.in_(['k2', 'k3', 'k6', 'k7']))






[sqlalchemy] Warning when relation on mapper supercedes the same relation on inherited mapper

2009-12-16 Thread Gunnlaugur Briem
Hi,

I created a mapper inheriting from another mapper and overriding a
relation definition, and got this warning:

Warning: relation 'dimensions' on mapper 'Mapper|DataSetSlice|
dataset_slice' supercedes the same relation on inherited mapper
'Mapper|DataSet|dataset'; this can cause dependency issues during
flush

I'd like to understand those dependency issues better (read: at all),
to know whether they apply in my case.

The new class I am mapping is class DataSetSlice(DataSet), which
defines a slice out of another DataSet (identified by parent_id, a
self-join on the dataset table), but is also a DataSet in its own
right (fitting into our inventory/categorization system). So there is
an inheritance relationship and also (separately) a parent/child
relationship.

A DataSet has dimensions (many-to-many), and a DataSetSlice logically
has the same dimensions as its parent. So the DataSet mapper has this
relation:

dataset_mapper = mapper(DataSet, dataset_table,
    ...,
    properties={
        'dimensions': relation(Dimension,
            secondary=dataset_dimension_association,
            order_by=dataset_dimension_association.c.dimension_id,
            backref='referencing_datasets'
        ),
    })

And the DataSetSlice (subclass) mapper has this instead:

datasetslice_mapper = mapper(DataSetSlice, datasetslice_table,
    ...,
    properties={
        'dimensions': relation(Dimension,
            secondary=dataset_dimension_association,
            primaryjoin=dataset_dimension_association.c.dataset_id
                == datasetslice_table.c.parent_id,
            foreign_keys=[dataset_dimension_association.c.dataset_id,
                          dataset_dimension_association.c.dimension_id],
            order_by=dataset_dimension_association.c.dimension_id,
            backref='referencing_dataset_slices'
        ),
    })

The salient difference is that the primary join references parent_id
in the subclass table instead of id in the superclass table --- but
these are the same by a foreign key relationship. Thus I'm making a
slice have the same dimensions as its parent.

Could someone please explain the dependency issues that can arise from
this? (Or explain generally the kind of dependency issue this warning
refers to?)

Regards,

- Gulli





[sqlalchemy] Re: Warning when relation on mapper supercedes the same relation on inherited mapper

2009-12-16 Thread Gunnlaugur Briem
On Dec 16, 4:08 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 - there's two tables, dataset_table and datasetslice_table.
 - these two tables have *two* foreign key references to each other - an
 inheritance relationship (on unknown columns since they aren't displayed
 here) and another on parent_id to id, which is a one-to-many
 association.

Yes, sorry I was unclear. The inheritance is simple joined-table
inheritance, on columns named id in both tables, the subclass id being
a foreign key referencing the superclass id. There is a discriminator
column type in the dataset table with the value 'slice' for
DataSetSlice datasets, and 'base' for the regular ones.

 - for each row in dataset_table, a row in dataset_dimension_association
 may exist.
 - *however*, if a row in dataset_table also references datasetslice_table
 via the inheritance relation, then there may *not* be a row in
 dataset_dimension_association with that dataset_table's id - because the
 mapping says that dimensions now needs to reference
 dataset_dimension_association via *another* row in dataset_table which it
 references via parent_id.   This is a constraint that isn't expressed in
 the schema but is implied by the mapping I see.

Yes --- or rather if there *is* such a row, it is ignored (on the
dataset side, while the backref property on the dimension will still
contain the dataset) because the dataset_table row has type='slice'.
But you're right (of course), this is bad relational design because of
the join inconsistency and the asymmetry in the many-to-many relation.

 The hypothetical issue as far as flush() is that both dimension relations
 would have some interaction when working out dependencies and association
 row values, even for a DataSetSlice object, and would incur conflicting
 data.  I'm not sure if that is the current behavior and would have to
 verify, though the warning stays in place since with inheritance, you need
 to look at relations from table hierarchy A to table B in terms of the
 bottommost table in the A hierarchy - relational inheritance is not really
 polymorphic in that way, for the join reasons I mention above.

Very true. Thanks heaps for putting that warning in the code; I would
have made do with that flawed design if you hadn't.

This was driven by the desire to (a) let slices be datasets in their
own right (fitting them naturally into our system's inventory,
category hierarchy, browsing UI, etc.) and (b) avoid the duplication
of explicitly associating each of the slices with the same set of
dimensions as its parent dataset. So I wanted to override the
dimensions property in the DataSetSlice subclass, delegating to the
parent dataset (the OO way), but that property was injected by the
mapper, so I strayed down the path of overriding it there.

I could consider this dimensions property the “raw dimensions” (maybe
rename it as such) and define a separate (non-ORM) property for the
“dimensions to use”, delegating to self.dimensions in DataSet and to
self.parent.dimensions in DataSetSlice. But then still (a) dimension
associations for a type='slice' dataset do not make sense and should
be constrained not to exist, and (b) querying for datasets having a
given dimension will only find non-slice datasets (though an outer-
join would draw in the slices). So the relational design flaw is still
there.

Instead I will probably just add the duplicate dimension associations,
a lesser evil. Additions/removals of dimensions of existing datasets
will probably not be common anyway. And possibly there will be reason
to let slice dimensions differ from parent dataset dimensions later
on. (A dataset sliced to just one value of a given dimension could be
considered not to have that dimension, for instance.)

Thank you for your characteristically helpful response! (And for all
the work you put into SQLAlchemy in general, and into supporting its
users.)

Regards,

- Gulli





[sqlalchemy] Hyper-eager autoflush for debugging

2009-11-16 Thread Gunnlaugur Briem

Hi,

a common pattern for me is to encounter a ProgrammingError or other
exception on autoflush. Then the point in my code at which I made a
booboo has already passed and so is not indicated by the stack trace.
If the cause isn't obvious, I will go rerunning the failing test,
inserting session.flush() at strategic places before the offending
code path, to narrow down where the bad DB operation really happens.

Is it feasible to add a mode like autoflush='eager' on the session?
Meaning that any operation that will result in an autoflush does so
immediately. Terrible in general, but a useful mode to enable for
debugging cases like this.
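A framework-free sketch of the idea (Recorder and EagerFlushWrapper are invented names for illustration, not SQLAlchemy API): wrap an object so every attribute assignment flushes immediately, making a bad value blow up at the line that set it instead of at some later autoflush.

```python
class Recorder:
    """Toy 'session': flush() validates all pending values eagerly."""
    def __init__(self):
        self.pending = []

    def flush(self):
        for value in self.pending:
            if not isinstance(value, int):
                raise TypeError('bad value flushed: %r' % (value,))
        self.pending = []


class EagerFlushWrapper:
    """Debug aid: push every assignment through session.flush() at once."""
    def __init__(self, session):
        object.__setattr__(self, '_session', session)

    def __setattr__(self, name, value):
        self._session.pending.append(value)
        self._session.flush()  # fail *here*, at the offending assignment


obj = EagerFlushWrapper(Recorder())
obj.x = 1           # fine, flushed immediately
try:
    obj.y = 'oops'  # fails right away; the traceback points at this line
    failed = False
except TypeError:
    failed = True
print(failed)  # True
```

A real implementation would hook attribute instrumentation rather than __setattr__, but the debugging payoff is the same: the stack trace lands on the assignment that caused the problem.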

Downsides I imagine: (a) check constraints may be violated on
intermediate states being flushed (e.g. two properties need to be
updated together but in this mode wouldn't be), and (b) a check for
this state might need to be in a critical path, nibbling at runtime
performance. But Michael/others will know better than me whether these
are real hindrances.

Regards,

- Gulli




[sqlalchemy] Re: Hyper-eager autoflush for debugging

2009-11-16 Thread Gunnlaugur Briem

On Nov 16, 3:38 pm, Michael Bayer mike...@zzzcomputing.com wrote:

 id like to know what errors specifically you get which aren't fairly
 obvious even if they happen later on.   I'm guessing some of the
 dependency oriented messages like dependency rule tried to blank out
 primary key.

Nah, the particular one I stubbed my toe on today was:

TypeError: can't compare offset-naive and offset-aware datetimes

... caused by accidentally slipping a non-timezone-aware datetime into
a query/operation somewhere (with timezone-aware datetime columns, on
PostgreSQL).

But I have also found myself wanting this when debugging general
ProgrammingErrors --- I get all kinds, because I am mixing ORM and
sqlexpression calls with literal SQL through session.execute() ---
unavoidable because my model is fairly dynamic (a sort of generalized
warehousing application, thousands of tables, bunch of bulk operations
and DDL at runtime).

 It's always an easy option to add more contextual
 information to our exception messages if that helps.

The exception messages are generally fine --- rather, I'm after more
accurate contextual exception stack traces (not generally, just when I
particularly ask for it).

    This is the 2nd use case in a couple of
 weeks where such a proxy system is useful, leading to the notion of
 having a common extension that allows generic wrapping of attribute
 operations ...but like that other use case, both are cases that I'm not
 entirely convinced are even necessary.

My case is absolutely not necessary. Just an idea for debugging
convenience.

 well as far as performance, that would be entirely out the window.  and
 yeah constraints like composite primary keys and such wouldn't work.   But
 also, this level of meddling is definitely not something I'd want to see
 in the core, adding conditionals to everything.   Implementation wise it
 would be a SessionExtension that intercepts attach, plus the
 abovementioned proxies for InstrumentedAttribute and CollectionAdapter.

Yeah, performance would be abysmal, but one would only do this when
debugging anyway.

Thanks for the pointers --- I'll give the SessionExtension (Co.)
approach a try, if I get another troubleshooting case that itches
enough. If I come up with something useful, I'll post it here or put
it on UsageRecipes.
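For what it's worth, the wrapping idea can be illustrated without touching
any SQLAlchemy internals. A sketch (the class names here are made up for
illustration, not SQLAlchemy API; a real version would hook the
SessionExtension events instead of wrapping):

```python
class StubSession:
    """Stand-in for a SQLAlchemy session (illustration only)."""
    def __init__(self):
        self.flushes = 0
        self.pending = []
    def add(self, obj):
        self.pending.append(obj)
    def flush(self):
        self.flushes += 1

class EagerFlushSession:
    """Debug-only wrapper (hypothetical): flush right after every add()
    so database errors surface at the statement that caused them,
    not later at commit time."""
    def __init__(self, session):
        self._session = session
    def add(self, obj):
        self._session.add(obj)
        self._session.flush()
    def __getattr__(self, name):
        # Everything else passes straight through to the real session.
        return getattr(self._session, name)

s = StubSession()
eager = EagerFlushSession(s)
eager.add("row")
print(s.flushes)  # 1
```

Performance would of course be as abysmal as discussed above, which is why
this would only ever be switched on while debugging.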

- Gulli




[sqlalchemy] Re: Can I coerce strings into Unicode?

2009-06-12 Thread Gunnlaugur Briem

The engine's conversion to unicode doesn't happen when you assign the
property, it happens when the underlying database operation is
committed, and arrives in the python object's property only after
roundtripping through the database.

In [50]: m1.body = 'new - NOT unicode'

In [51]: m1.body
Out[51]: 'new - NOT unicode'

In [52]: session.add(m1)

In [53]: m1.body
Out[53]: 'new - NOT unicode'

In [54]: session.commit()
2009-06-12 09:46:18,430 INFO sqlalchemy.engine.base.Engine.0x...aa70
BEGIN
2009-06-12 09:46:18,431 INFO sqlalchemy.engine.base.Engine.0x...aa70
INSERT INTO message (body) VALUES (?)
2009-06-12 09:46:18,431 INFO sqlalchemy.engine.base.Engine.0x...aa70
['new - NOT unicode']
2009-06-12 09:46:18,432 INFO sqlalchemy.engine.base.Engine.0x...aa70
COMMIT

In [55]: m1.body
2009-06-12 09:46:22,803 INFO sqlalchemy.engine.base.Engine.0x...aa70
BEGIN
2009-06-12 09:46:22,804 INFO sqlalchemy.engine.base.Engine.0x...aa70
SELECT message.body AS message_body
FROM message
WHERE message.body = ?
2009-06-12 09:46:22,805 INFO sqlalchemy.engine.base.Engine.0x...aa70
['new - NOT unicode']
Out[55]: u'new - NOT unicode'

So if you want to rely on the engine's coercion to unicode, you have
to go through the engine.

But really, you ought to take care of unicode encoding issues yourself
before passing text data to SQLAlchemy. The engine cannot know where
your 8-bit strings come from, and just assumes that they are encoded
in UTF-8 (or whatever other encoding you set it to). You are better
equipped to know what encoding to expect your string input to be in
(and it may vary, depending on your input). If you don't know, find
out. (Or specify and assert.)
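To make "take care of it yourself" concrete, here is a sketch of decoding at
the input boundary (written in Python 3 syntax for brevity, which postdates
this thread; the helper name is made up):

```python
def to_text(value, encoding="utf-8"):
    # Decode byte strings exactly once, at the application boundary,
    # before they ever reach the ORM layer. The caller states the
    # expected encoding explicitly instead of hoping a default fits.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(to_text(b"caf\xc3\xa9"))      # café
print(to_text("already unicode"))   # already unicode
```

The point is that only your code knows (or can assert) what encoding its
inputs arrive in; the engine can only guess.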

Cheers,

- Gulli



On Jun 12, 6:20 am, allen.fowler allen.fow...@yahoo.com wrote:
 Anybody?

 On Jun 4, 1:13 am, AF allen.fow...@yahoo.com wrote:

  Hello,

  I'm using sqlite and convert_unicode = True on the engine.

  How can I force coerce string based object attributes in to unicode?
  (I had thought convert_unicode = True would do this)

  Here is what I am seeing...

  Setup code:
  engine = create_engine('sqlite:///:memory:', echo=True,
  convert_unicode=True)
  Session = sessionmaker(bind=engine)
  session = Session()
  metadata = MetaData()
  m1 = message(u'message body 1')

  Now, in ipython:

  In [1]: session.add(m1)

  In [2]: m1.body
  Out[2]: u'message body 1'

  In [3]: m1.body = u'new - unicode'

  In [4]: m1.body
  Out[4]: u'new - unicode'

  In [5]: m1.body = 'new - NOT unicode'

  In [6]: m1.body
  Out[6]: 'new - NOT unicode'

  In [7]: unicode(m1.body)
  Out[7]: u'new - NOT unicode'

  Output line 6 is the problem.

  Ideally, I'd like to see output lines 6 & 7 be the same.

  Am I doing something wrong?

  Thank you,
  Allen





[sqlalchemy] Late and ugly error when mixing timezone-savvy and timezone-naive datetimes

2009-03-11 Thread Gunnlaugur Briem

Hi,

I get away with stuffing datetime.datetime.now() into a DateTime
(timezone=True) column, despite the former being timezone-naive
(.utcoffset() is None, .tzinfo is None, etc.).

It is stored in the table with UTC offset +00, which is arguably
incorrect (states information that was not present in the input).

But even if you call it correct, you get in trouble when you read the
value back as an attribute of a mapped class in a session, set the
attribute again to datetime.datetime.now() (again timezone-naive), and
then try to query the session for the same object again. This retches
up a TypeError: “can't compare offset-naive and offset-aware
datetimes”.

Code to reproduce:

from sqlalchemy import Table, MetaData, Column, Integer, DateTime, create_engine
from sqlalchemy.orm import sessionmaker, mapper
from datetime import datetime, timedelta
from pytz import utc

t = Table('foo', MetaData(),
          Column('id', Integer, primary_key=True),
          Column('dt', DateTime(timezone=True)))

class T(object):
    pass

mapper(T, t)
e = create_engine('postgres://localhost/satest')
t.create(bind=e, checkfirst=True)
e.execute(t.delete())  # in case we are re-running this test
Session = sessionmaker(bind=e)
inst = T()
inst.dt = datetime.now()
assert inst.dt.utcoffset() is None
session = Session()
session.add(inst)
session.commit()
session.close()
session = Session()
inst = session.query(T).first()
assert inst.dt.utcoffset() == timedelta(0)
inst.dt = datetime.now()
assert inst.dt.utcoffset() is None
# next line raises TypeError: can't compare offset-naive and offset-aware datetimes
inst = session.query(T).first()

SQLAlchemy should either reject the timezone-naive datetime value
right away when it is bound to a timezone-savvy DateTime column, or
tolerate me setting a timezone-naive datetime value again. I root for
the former.
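The former option amounts to a guard like this at the point where the value
is bound (sketched here with Python 3's datetime.timezone for brevity, which
postdates this thread; the helper name is made up):

```python
from datetime import datetime, timezone

def ensure_aware(dt):
    # Reject timezone-naive datetimes before they reach a
    # DateTime(timezone=True) column. Per the datetime docs, a value is
    # aware iff tzinfo is set and utcoffset() returns a value.
    if dt.tzinfo is None or dt.tzinfo.utcoffset(dt) is None:
        raise TypeError("timezone-naive datetime not allowed: %r" % dt)
    return dt

ensure_aware(datetime.now(timezone.utc))   # aware: passes through
try:
    ensure_aware(datetime.now())           # naive: rejected right away
except TypeError as exc:
    print("rejected:", exc)
```

Rejecting at bind time turns the late, confusing TypeError into an error at
the assignment that actually caused it.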

Regards,

- Gulli




[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-04 Thread Gunnlaugur Briem

Hi 一首诗,

what database engine are you using? On PostgreSQL at least, table
creation and dropping (and some other operations) take an ACCESS
EXCLUSIVE lock on the database, and will wait for this lock
indefinitely if there are open transactions hanging around. My app
creates and drops tables willy-nilly, not just at start-up, and I find
that I have to be very careful about session lifetimes. So I gave up
scoped_session entirely.

Note that you don't need to avoid creating sessions again and again
(not to save on performance anyway); see
http://www.sqlalchemy.org/docs/05/session.html#frequently-asked-questions:
"Sessions are very inexpensive to make, and don't use any resources
whatsoever until they are first used...so create some!" In particular,
note that a session is distinct from a database connection -- database
connections are (typically) expensive to create, and SQLAlchemy
handles connection pooling to save on those costs. That's probably
what you were intending, and it happens even if you keep making new
sessions.
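The per-call session lifetime I mean looks like this (sketched with a
stand-in session class, since the point is the try/finally shape rather than
the SQLAlchemy calls):

```python
class StubSession:
    """Stand-in for a SQLAlchemy session (illustration only)."""
    instances = []
    def __init__(self):
        StubSession.instances.append(self)
        self.closed = False
    def get(self, id):
        return ("row", id)
    def close(self):
        self.closed = True

def get_object_by_id(session_factory, id):
    # One short-lived session per call, always closed in the finally
    # block, so no open transaction lingers in whichever worker thread
    # happened to run the call.
    session = session_factory()
    try:
        return session.get(id)
    finally:
        session.close()

print(get_object_by_id(StubSession, 42))  # ('row', 42)
print(StubSession.instances[-1].closed)   # True
```

With scoped_session, by contrast, each worker thread keeps its session (and
its transaction) alive until something explicitly removes it, which is
exactly what table create/drop then blocks on.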

Regards,

- Gulli



On Mar 4, 11:26 am, 一首诗 newpt...@gmail.com wrote:
 Hi, all

 I am using sqlalchemy in twisted in my project in the way below.
 Defer any database operation so the twisted's main thread won't be
 blocked.

 And I use scoped_session, so that sessions won't have to be created
 again and again.

 ==
 class Database(object):
 def __init__(self, conn_str):
 self.conn_str = conn_str
 self.engine = create_engine(self.conn_str, echo=False)
 self.Session = scoped_session(sessionmaker(bind = self.engine,
  expire_on_commit=False))

 def getObjectById(self, klass, id):
 return threads.deferToThread(self._getObjectById, klass, id)

 def _getObjectById(self, klass, id):
 sess = self.Session()
 return sess.query(klass).get(id)
 ==

 The code doesn't work.   When I limit the thread numbers to 1

 reactor.suggestThreadPoolSize(1)

 Everything goes fine.  Otherwise the server would be blocked and must
 be killed by kill -9 

 The result conflicts with my understanding of sqlalchemy.  Since I
 don't share any object between threads, there should be no problem!

 Ah  It always have risk to use something you haven't tried
 before 



[sqlalchemy] Infinite recursion in sqlalchemy/orm/attributes.py when running under debugger

2009-03-02 Thread Gunnlaugur Briem

Hi,

I have a table-mapped attribute that is dependent on two other
attributes:

from sqlalchemy import Table, MetaData, Column, Text, create_engine, Integer
from sqlalchemy.orm import mapper, synonym

class Foo(object):
    def _get_name(self):
        return self._name
    def _set_name(self, name):
        self._name = name
        self._update_table_name()
    name = property(_get_name, _set_name)

    def _get_provider(self):
        return self._provider
    def _set_provider(self, provider):
        self._provider = provider
        self._update_table_name()
    provider = property(_get_provider, _set_provider)

    def _update_table_name(self):
        table_name = "%s_%s" % (self.provider, self.name)
        if len(table_name) > 50:
            table_name = table_name[0:50]
        self.table_name = table_name

foo_table = Table('foo', MetaData(),
                  Column('id', Integer, primary_key=True),
                  Column('name', Text),
                  Column('provider', Text),
                  Column('table_name', Text))

mapper(Foo, foo_table, properties={
    'name': synonym('_name', map_column=True),
    'provider': synonym('_provider', map_column=True),
})

e = create_engine('sqlite:///:memory:')
foo_table.metadata.create_all(bind=e)

When I run this normally, nothing happens. When I run it under the
debugger (in PyDev), I get infinite recursion, looking like this:

Traceback (most recent call last):
  File "/Applications/eclipse/plugins/org.python.pydev.debug_1.4.4.2636/pysrc/pydevd.py", line 883, in <module>
    debugger.run(setup['file'], None, None)
  File "/Applications/eclipse/plugins/org.python.pydev.debug_1.4.4.2636/pysrc/pydevd.py", line 712, in run
    execfile(file, globals, locals) #execute the script
  File "/Users/gthb/Documents/workspace/test/src/sqlalchemytest7.py", line 33, in <module>
    'provider': synonym('_provider', map_column=True),
  File "/path/to/SQLAlchemy/sqlalchemy/orm/__init__.py", line 752, in mapper
    return Mapper(class_, local_table, *args, **params)
  File "/path/to/SQLAlchemy/sqlalchemy/orm/mapper.py", line 198, in __init__
    self._configure_properties()
  File "/path/to/SQLAlchemy/sqlalchemy/orm/mapper.py", line 481, in _configure_properties
    self._configure_property(key, prop, False)
  File "/path/to/SQLAlchemy/sqlalchemy/orm/mapper.py", line 616, in _configure_property
    prop.instrument_class(self)
  File "/path/to/SQLAlchemy/sqlalchemy/orm/properties.py", line 302, in instrument_class
    proxy_property=self.descriptor
  File "/path/to/SQLAlchemy/sqlalchemy/orm/attributes.py", line 1590, in register_descriptor
    descriptor = proxy_type(key, proxy_property, comparator, parententity)
  File "/path/to/SQLAlchemy/sqlalchemy/orm/attributes.py", line 181, in __init__
    self.descriptor = self.user_prop = descriptor
  File "/Users/gthb/Documents/workspace/test/src/sqlalchemytest7.py", line 14, in _set_name
    self._update_table_name()
  File "/path/to/SQLAlchemy/sqlalchemy/orm/attributes.py", line 214, in __getattr__
    return getattr(self._comparator, attribute)
  File "/path/to/SQLAlchemy/sqlalchemy/orm/attributes.py", line 214, in __getattr__
    return getattr(self._comparator, attribute)
  ...

The same can be reproduced outside of PyDev by doing:

python -m pdb sqlalchemytest7.py

and stepping until the above calamity strikes. (It seems it does not
happen on cont)

This is in python 2.5.2 on Mac OS X 10.5.6, with sqlalchemy 0.5.2.

So, two things:

1) what am I doing wrong?

2) SQLAlchemy should handle it more gracefully. :)
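Regarding (1): one workaround is to make the derived-attribute update
tolerant of partially initialized objects, so a setter fired during
instrumentation or debugger introspection can't recurse into a half-built
sibling attribute. A plain-Python sketch of the guard (no mapper involved,
just the property shape; the `_maybe_` helper name is made up):

```python
class Foo(object):
    def _set_name(self, name):
        self._name = name
        self._maybe_update_table_name()

    def _set_provider(self, provider):
        self._provider = provider
        self._maybe_update_table_name()

    name = property(lambda self: self._name, _set_name)
    provider = property(lambda self: self._provider, _set_provider)

    def _maybe_update_table_name(self):
        # Skip the update until both source attributes actually exist;
        # getattr with a default avoids touching half-instrumented state.
        name = getattr(self, "_name", None)
        provider = getattr(self, "_provider", None)
        if name is None or provider is None:
            return
        self.table_name = ("%s_%s" % (provider, name))[:50]

f = Foo()
f.name = "sales"        # no table_name yet: provider still missing
f.provider = "acme"
print(f.table_name)     # acme_sales
```

Whether that sidesteps the synonym-instrumentation recursion in all cases I
can't say, but it at least removes the assumption that both attributes are
always set.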

Regards,

- Gulli




[sqlalchemy] Ordering results of a WHERE x in y query by y

2009-02-25 Thread Gunnlaugur Briem

Hi all,

having an x IN y query, with y supplied as input to the query:

session.query(C).filter(C.someattr.in_(valuelist))

is there a way to tell SQLAlchemy to order the results according to
valuelist? I.e. not by the natural order of someattr, but by the
arbitrary order seen in valuelist? E.g.:

session.add(C(someattr='Abigail'))
session.add(C(someattr='Benjamin'))
session.add(C(someattr='Carl'))
valuelist = ['Benjamin', 'Abigail']
q = session.query(C).filter(C.someattr.in_(valuelist)).order_by(clever(valuelist))
q.all()
# returns [C('Benjamin'), C('Abigail')]

The solution I can think of is to create a temporary table with
sess.execute('create temp table ...'), insert the valuelist into that
temp table along with a sequence index, join to that temporary table
and order by its index. Is there a less kludgy way?
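If the result set is small enough to hold in memory, a less kludgy (if less
database-y) fallback is to re-sort in Python after the IN query comes back;
a sketch (the helper name is made up):

```python
def sort_like(rows, valuelist, key=lambda row: row):
    # Order rows by the position of key(row) in valuelist, i.e. the
    # arbitrary caller-supplied order rather than the natural order.
    position = {value: i for i, value in enumerate(valuelist)}
    return sorted(rows, key=lambda row: position[key(row)])

rows = ['Abigail', 'Benjamin']          # as returned by the IN query
print(sort_like(rows, ['Benjamin', 'Abigail']))
# ['Benjamin', 'Abigail']
```

For mapped objects you'd pass key=lambda c: c.someattr; the database-side
ordering (temp table or a big CASE expression) is only needed when the
result is too large to sort client-side.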

Regards,

- Gulli




[sqlalchemy] with_statement support in session objects

2009-02-24 Thread Gunnlaugur Briem

Hi,

trying to be careful to close all sessions when I'm done with them, I
find myself doing this all the time:

session = Session()
try:
do_stuff_with(session)
finally:
session.close()

This would be neater:

with Session() as session:
do_stuff_with(session)

but the sessionmaker-produced class does not implement the context
manager protocol (the __enter__ and __exit__ methods) used by the with
statement. Now, I can add on the context manager protocol using
contextlib:

from contextlib import closing
with closing(Session()) as session:
do_stuff_with(session)

but is there any reason for the session itself not to support the
context manager protocol and save me the extra closing()?
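In the meantime, a five-line shim gets the same effect without spelling out
contextlib.closing at every call site (sketched against a fake session,
since all it relies on is a close() method):

```python
from contextlib import contextmanager

@contextmanager
def closing_session(session):
    # Yield the session, and close it when the with-block exits,
    # whether it exits normally or via an exception.
    try:
        yield session
    finally:
        session.close()

class FakeSession(object):
    """Stand-in for a real session (illustration only)."""
    closed = False
    def close(self):
        self.closed = True

s = FakeSession()
with closing_session(s):
    pass
print(s.closed)  # True
```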

Regards,

- Gulli




[sqlalchemy] Typo in synonym documentation?

2009-02-03 Thread Gunnlaugur Briem

Hi,

in the docs, here:

http://www.sqlalchemy.org/docs/05/reference/ext/declarative.html#defining-synonyms

there is this code snippet:

class MyClass(Base):
    __tablename__ = 'sometable'

    _attr = Column('attr', String)

    def _get_attr(self):
        return self._some_attr
    def _set_attr(self, attr):
        self._some_attr = attr
    attr = synonym('_attr', descriptor=property(_get_attr, _set_attr))

The references to self._some_attr should be self._attr, right? And
likewise for the next code sample?

Thanks,

- Gulli
