[sqlalchemy] Re: Postgres - Backup - Restore

2008-11-10 Thread Chris Miles


On Nov 10, 4:57 am, Petr Kobalíček [EMAIL PROTECTED] wrote:
 I have a Postgres-related problem. I'm normally developing with SQLite,
 but we are using Postgres on the server. The problem is that
 SQLAlchemy apparently remembers primary keys, and after a database restore
 it starts from 1 again in all tables.

If you use the PostgreSQL tools pg_dump and pg_restore they should
maintain the sequences properly for you when copying databases between
servers.
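
If a restore does leave a sequence lagging behind the data (which is what
makes new rows start from 1 again), it can be resynchronised by hand.  A
rough sketch via SQLAlchemy -- the table, column and sequence names here
are made up for illustration:

from sqlalchemy import create_engine

engine = create_engine('postgres://user:password@localhost/mydb')
# point the sequence at the current maximum of the column it feeds
engine.execute(
    "SELECT setval('items_id_seq', (SELECT coalesce(max(id), 1) FROM items))")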

Cheers,
Chris Miles





[sqlalchemy] conn.execute('CREATE SCHEMA %(s)s', {'s':s}) escapes schema name, but it shouldn't, and raises ProgrammingError

2008-11-10 Thread sector119

Hi!

I use PostgreSQL, and when I try to create a schema I use the following
command: conn.execute('CREATE SCHEMA %(s)s', {'s': s})

I get raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) syntax error at or
near E'system'
LINE 1: CREATE SCHEMA E'system'

Why does it try to escape the schema name, and how can I make it not do that? :)

Thanks a lot!



[sqlalchemy] Re: Removing aggregate function from query results

2008-11-10 Thread Simon

Hi,

you could simply get the Bowler objects from the results by saying

results = [r[0] for r in results]

I'm not sure whether your query is correct, though. Usually, you cannot
select columns which are not in the GROUP BY clause or which are not
aggregated - after grouping, you have several bowler_id candidates for
each group, and there's nothing in your query telling the database
which ID you want. MySQL allows this and assumes that all ID values
within a group are the same; SQLite might do something similar. Try adding
some more bowlers for each city and see if it still returns the results
you expect.

You can rewrite the query using exists():

b1 = bowlers_table.alias()
b2 = bowlers_table.alias()
results = session.query(Bowler).filter(not_(
  exists([b2.c.bowler_id], and_(b2.c.bowler_id != b1.c.bowler_id,
b2.c.city_id == b1.c.city_id, b2.c.highscore >
b1.c.highscore)).correlate(b1)
)).all()

Cheers, Simon

On 10 Nov., 11:35, Ian Charnas [EMAIL PROTECTED] wrote:
 Hello Alchemy Land!

 If I have a simple test-case with Bowler objects and City objects, and
 I want to use func.max and group_by in order to find the highest
 scorers in each city... I might do something like this:

 max_score = func.max(Bowler.highscore).label('highest_score')
 results = session.query(Bowler,
 max_score).group_by(Bowler.city_id).all()

 So this works as I'd expect, and 'results' now contains
 (Bowler,max_score) tuples... but  what I really want is to have a
 query that just returns Bowler objects, and not these tuples.  Is
 there a way to get rid of that 'max_score' column from the result
 set?  I've been at this for hours, I bet it's really simple but I just
 can't find it.

 please help!!!

 Full example
 --

 # STANDARD BOILERPLATE
 from sqlalchemy import *
 from sqlalchemy.orm import *
 engine = create_engine('sqlite:///:memory:', echo=True,
 strategy='threadlocal')
 Session = scoped_session(sessionmaker(autoflush=False,
 autocommit=False))
 session = Session(bind=engine)
 metadata = ThreadLocalMetaData()
 metadata.bind = engine

 # DEFINE TABLES
 bowlers_table = Table('bowler', metadata,
     Column('bowler_id', Integer, primary_key=True),
     Column('name', String(50)),
     Column('highscore', Integer, default=0),
     Column('city_id', None, ForeignKey('city.city_id'))
 )
 cities_table = Table('city', metadata,
     Column('city_id', Integer, primary_key=True),
     Column('name', String(50))
 )
 metadata.create_all()

 # DEFINE CLASSES
 class Base(object):
     def __init__(self, **kwargs):
         for key, value in kwargs.iteritems():
             setattr(self, key, value)
 class Bowler(Base):
     pass
 class City(Base):
     pass

 # MAP CLASSES
 mapper(City, cities_table)
 mapper(Bowler, bowlers_table, properties={
     'city': relation(City)

 })

 # CREATE SAMPLE DATA
 nyc = City(name="New York City")
 michael_bayer = Bowler(name="Michael Bayer", highscore=299, city=nyc)
 big_lebowski = Bowler(name="Jeffrey Lebowsky", highscore=170,
 city=nyc)

 cle = City(name="Cleveland")
 ian_charnas = Bowler(name="Ian Charnas", highscore=220, city=cle)
 the_jesus = Bowler(name="Antonio DeJesus", highscore=130, city=cle)

 session.add_all([michael_bayer, big_lebowski, ian_charnas, the_jesus])
 session.flush()

 # GET HIGH SCORERS BY CITY
 max_score = func.max(Bowler.highscore).label('highest_score')
 results = session.query(Bowler,
 max_score).group_by(Bowler.city_id).all()

 # Results contain (Bowler, max_score) tuples
 [(<__main__.Bowler object at 0x139b590>, 170),
  (<__main__.Bowler object at 0x13b20d0>, 130)]

 # But I want results to just contain Bowler objects



[sqlalchemy] Session and postgres_returning

2008-11-10 Thread paftek

Hi,

I am using SQLAlchemy with PostgreSQL 8.3.
Today I just discovered the great postgres_returning functionality.

Dumb question:
Is it possible to take advantage of it in session mode? In other
words, is it possible for session.add() to issue only one INSERT
INTO ... RETURNING ... query instead of two (INSERT then SELECT),
as in the following example:


Code :

class Item(Base):
__tablename__ = 'items'

sku = Column('sku', String(50), primary_key=True)
title = Column('title', String(1000))
foo = Column('foo', Sequence('items_foo_seq'))

def __init__(self, sku, title):
self.sku = sku
self.title = title

item_c = Item('CCC', 'Title CCC')
session.add(item_c)
session.flush()
print item_c.foo


Queries issued :

INSERT INTO items (sku, title, foo) VALUES (E'CCC', E'Title CCC',
nextval('items_foo_seq'))
SELECT items.foo AS items_foo FROM items WHERE items.sku = E'CCC'



Thanks !



[sqlalchemy] Re: Memory leak - is session.close() sufficient?

2008-11-10 Thread joelanman

Thanks Simon - just checked and I'm running 2.5.2 on my machines.

From experimenting - I'm not so sure I have a memory leak, so much as
just using a lot of memory.

I didn't realise that when Python frees memory, it doesn't necessarily
become free in Linux. I think that possibly all that's happening is
that as more complex pages are hit, the app uses more memory -
which is never obviously freed up in Linux.

On that note then - for simpler pages, the Apache process seems to use
about 60MB, which goes up to around 150MB for more complex pages
(joined objects made up of around 3500 rows from MySQL).

I'm using WSGIDaemonProcess with 8 threads - so do these figures sound
like an alright ballpark or totally ridiculous? Is there anything I
can do to keep memory usage down?



[sqlalchemy] Removing aggregate function from query results

2008-11-10 Thread Ian Charnas

Hello Alchemy Land!

If I have a simple test-case with Bowler objects and City objects, and
I want to use func.max and group_by in order to find the highest
scorers in each city... I might do something like this:

max_score = func.max(Bowler.highscore).label('highest_score')
results = session.query(Bowler,
max_score).group_by(Bowler.city_id).all()

So this works as I'd expect, and 'results' now contains
(Bowler,max_score) tuples... but  what I really want is to have a
query that just returns Bowler objects, and not these tuples.  Is
there a way to get rid of that 'max_score' column from the result
set?  I've been at this for hours, I bet it's really simple but I just
can't find it.

please help!!!

Full example
--

# STANDARD BOILERPLATE
from sqlalchemy import *
from sqlalchemy.orm import *
engine = create_engine('sqlite:///:memory:', echo=True,
strategy='threadlocal')
Session = scoped_session(sessionmaker(autoflush=False,
autocommit=False))
session = Session(bind=engine)
metadata = ThreadLocalMetaData()
metadata.bind = engine

# DEFINE TABLES
bowlers_table = Table('bowler', metadata,
    Column('bowler_id', Integer, primary_key=True),
    Column('name', String(50)),
    Column('highscore', Integer, default=0),
    Column('city_id', None, ForeignKey('city.city_id'))
)
cities_table = Table('city', metadata,
    Column('city_id', Integer, primary_key=True),
    Column('name', String(50))
)
metadata.create_all()

# DEFINE CLASSES
class Base(object):
    def __init__(self, **kwargs):
        for key, value in kwargs.iteritems():
            setattr(self, key, value)
class Bowler(Base):
    pass
class City(Base):
    pass

# MAP CLASSES
mapper(City, cities_table)
mapper(Bowler, bowlers_table, properties={
    'city': relation(City)
})

# CREATE SAMPLE DATA
nyc = City(name="New York City")
michael_bayer = Bowler(name="Michael Bayer", highscore=299, city=nyc)
big_lebowski = Bowler(name="Jeffrey Lebowsky", highscore=170,
city=nyc)

cle = City(name="Cleveland")
ian_charnas = Bowler(name="Ian Charnas", highscore=220, city=cle)
the_jesus = Bowler(name="Antonio DeJesus", highscore=130, city=cle)

session.add_all([michael_bayer, big_lebowski, ian_charnas, the_jesus])
session.flush()

# GET HIGH SCORERS BY CITY
max_score = func.max(Bowler.highscore).label('highest_score')
results = session.query(Bowler,
max_score).group_by(Bowler.city_id).all()

# Results contain (Bowler, max_score) tuples
[(<__main__.Bowler object at 0x139b590>, 170),
 (<__main__.Bowler object at 0x13b20d0>, 130)]

# But I want results to just contain Bowler objects




[sqlalchemy] Re: Memory leak - is session.close() sufficient?

2008-11-10 Thread Julien Cigar

Note that a lot of database drivers cache *everything* in memory when
you call .fetchall(), .fetchone() or .fetchmany(x). So all of the following
consume about the same amount of memory:

result = cursor.execute(...)

for i in result:
   ...

data = result.fetchall()
for i in data:
   ...

data = result.fetchone()
...

With a lot of rows it can consume a lot of memory ...

One solution is to use server side cursors, but it only works with
PostgreSQL at the moment (and you can't use server side cursors with the
ORM).
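
With the psycopg2/Postgres dialect that roughly looks like the sketch
below; server_side_cursors is the dialect option, and big_select /
process are placeholders:

engine = create_engine('postgres://user:password@localhost/mydb',
                       server_side_cursors=True)
result = engine.execute(big_select)
for row in result:
    # rows are streamed from the server instead of buffered up front
    process(row)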

On Mon, 2008-11-10 at 05:47 -0800, joelanman wrote:
 Thanks Simon - just checked and I'm running 2.5.2 on my machines.
 
 From experimenting - I'm not so sure I have a memory leak, so much as
 just using a lot of memory.
 
 I didn't realise that when Python frees memory, it doesnt necessarily
 become free in Linux. I think that possibly all that's happening is
 that as more complex pages are hit, the app is using more memory -
 which is never obviously freed up in Linux.
 
 On that note then - for simpler pages, the Apache process seem to use
 about 60mb, which goes up to around 150mb for more complex pages
 (Joined objects made up of around 3500 rows from mysql).
 
 I'm using WSGIDaemonProcess with 8 threads - so do these figures sound
 like an alright ballpark or totally ridiculous? Is there anything I
 can do to keep memory usage down?
  
-- 
Julien Cigar
Belgian Biodiversity Platform
http://www.biodiversity.be
Université Libre de Bruxelles (ULB)
Campus de la Plaine CP 257
Bâtiment NO, Bureau 4 N4 115C (Niveau 4)
Boulevard du Triomphe, entrée ULB 2
B-1050 Bruxelles
Mail: [EMAIL PROTECTED]
@biobel: http://biobel.biodiversity.be/person/show/471
Tel : 02 650 57 52





[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread John Hunter

On Mon, Nov 10, 2008 at 4:33 AM, King Simon-NFHD78
[EMAIL PROTECTED] wrote:

 I'm no SQL expert, so please take this with a pinch of salt, but as far
 as I know, conditions in the 'WHERE' clause of an SQL statement are
 applied BEFORE any grouping, so you can't use grouping functions (such
 as SUM, MAX etc) in the WHERE clause. Instead, you add a 'HAVING'
 statement on the end to filter the rows AFTER the grouping.

Ahh, that helps a lot.

 BTW, I think the 'no grouping functions in WHERE clause' rule is also
 the reason why your MAX query didn't work. The  fix that Mike gave you
 turned that part of your query into a subquery that only produced that
 single value. This statement:

I see.  That is why the select worked in my first test case but not
the second.  In the max case, there was only a single value to return.
 In the sum case, there was a sum grouped by (strategy, symbol).
Indeed, the having clause does what I want::

  In [39]: q = session.query(Snapshot.strategy, Snapshot.symbol,
   sum_pnl).group_by(Snapshot.strategy, Snapshot.symbol).having(sum_pnl > -15000)

  In [40]: print q
  SELECT snapshot.strategy AS snapshot_strategy, snapshot.symbol AS
  snapshot_symbol, sum(snapshot.pnl) AS sum_1
  FROM snapshot GROUP BY snapshot.strategy, snapshot.symbol
  HAVING sum(snapshot.pnl) > %s

This produces a list of (strategy, symbol, sum(pnl)) as desired.

Now what I'm trying to figure out is how to get a count over each
strategy of the number of symbols where sum(pnl) > -15000.  So I
need to do one group_by over (strategy, symbol) to get the right sums,
and then one group_by over strategy alone to get the symbol counts
where the threshold criterion is met.

To be honest, I don't really know how to do this in pure SQL, so this
is part sqlalachemy, part SQL question.  In the past, I have done
naive sql queries and done the extra logic in python, so this time
around I am trying to be a little more persistent in figuring out the
sql way to do things.

Thanks for your explanation!

JDH




[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of John Hunter
 Sent: 10 November 2008 14:07
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: select where field=max(field)
 
 
 On Mon, Nov 10, 2008 at 4:33 AM, King Simon-NFHD78
 [EMAIL PROTECTED] wrote:
 
  I'm no SQL expert, so please take this with a pinch of 
 salt, but as far
  as I know, conditions in the 'WHERE' clause of an SQL statement are
  applied BEFORE any grouping, so you can't use grouping 
 functions (such
  as SUM, MAX etc) in the WHERE clause. Instead, you add a 'HAVING'
  statement on the end to filter the rows AFTER the grouping.
 
 Ahh, that helps a lot.
 
  BTW, I think the 'no grouping functions in WHERE clause' 
 rule is also
  the reason why your MAX query didn't work. The  fix that 
 Mike gave you
  turned that part of your query into a subquery that only 
 produced that
  single value. This statement:
 
 I see.  That is why the select worked in my first test case but not
 the second.  In the max case, there was only a single value to return.
  In the sum case, there was a sum grouped by (strategy, symbol).
 Indeed, the having clause does what I want::
 
   In [39]: q = session.query(Snapshot.strategy, Snapshot.symbol,
sum_pnl).group_by(Snapshot.strategy, 
  Snapshot.symbol).having(sum_pnl > -15000)
 
   In [40]: print q SELECT snapshot.strategy AS snapshot_strategy,
 snapshot.symbol AS
 snapshot_symbol, sum(snapshot.pnl) AS sum_1
 FROM snapshot GROUP BY snapshot.strategy, snapshot.symbol
  HAVING sum(snapshot.pnl) > %s
 
 This produces a list of (strategy, symbol, sum(pnl)) as desired.
 
 Now what I'm trying to figure out how to do is get a count over each
  strategy of the number of symbols where sum(pnl) > -15000.  So I
 need to do one group_by over (strategy, symbol) to get the right sums,
 and then one group_by over strategy alone to get the symbol counts
 where the threshold criterion is met.
 
 To be honest, I don't really know how to do this in pure SQL, so this
 is part sqlalachemy, part SQL question.  In the past, I have done
 naive sql queries and done the extra logic in python, so this time
 around I am trying to be a little more persistent in figuring out the
 sql way to do things.
 

I think the query should look something like this:

SELECT strategy, COUNT(*)
FROM (SELECT snapshot.strategy AS strategy,
 snapshot.symbol AS snapshot_symbol,
 sum(snapshot.pnl) AS sum_1
  FROM snapshot
  GROUP BY snapshot.strategy, snapshot.symbol
  HAVING sum(snapshot.pnl) > 1500) AS strategies
GROUP BY strategy

Run that by hand on your database and see if you get the results you
expect. The nested query gets the list of strategies that match the
original criteria, and the outer query uses that to produce the counts.
(Note that there are other ways to get the same result. For example, you
could JOIN your snapshot table to the subquery, which might be useful if
you wanted other columns from it in the outer query)

It should be fairly easy to build that query with SA's underlying
expression language. I'm not certain how to do it through session.query,
but I'm sure it's possible.
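
For example, something along these lines (untested; snapshot_table stands
for the Table underlying the Snapshot class):

from sqlalchemy import select, func

inner = select([snapshot_table.c.strategy,
                snapshot_table.c.symbol,
                func.sum(snapshot_table.c.pnl).label('sum_pnl')]).\
    group_by(snapshot_table.c.strategy, snapshot_table.c.symbol).\
    having(func.sum(snapshot_table.c.pnl) > -15000).alias('strategies')

outer = select([inner.c.strategy, func.count(inner.c.symbol)]).\
    group_by(inner.c.strategy)

print session.execute(outer).fetchall()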

Simon




[sqlalchemy] Newbie: Adding a column after database table has been created, declarative

2008-11-10 Thread Jules Stevenson

Hi,

Apologies for lowering the general IQ of the list, I'm very new to web apps
and databases.

I had a declarative table:

class ArkContact(Base):

    """table of all contacts"""

    __tablename__ = 'contacts'

    id = Column(Integer, primary_key=True)
    project_id = Column(Integer, ForeignKey('projects.id'))
    client_id = Column(Integer, ForeignKey('clients.id'))
    firstname = Column(String)
    lastname = Column(String)
    email1 = Column(String)
    email2 = Column(String)
    workphone = Column(Integer)
    mobile = Column(Integer)
    project = relation(ArkProject, backref=backref('contacts',
        order_by=func.lower(firstname)))
    client = relation(ArkClient, backref=backref('contacts',
        order_by=func.lower(firstname)))

All is good,

I added a new column in:

lastcontact = Column(DateTime)

Now I'm getting errors when I try to connect:

<class 'sqlalchemy.exc.OperationalError'>: (OperationalError) no such
column: contacts.lastcontact u'SELECT contacts.id AS contacts_id,
contacts.project_id AS contacts_project_id, contacts.client_id AS
contacts_client_id, contacts.firstname AS contacts_firstname,
contacts.lastname AS contacts_lastname, contacts.email1 AS contacts_email1,
contacts.email2 AS contacts_email2, contacts.workphone AS
contacts_workphone, contacts.mobile AS contacts_mobile, contacts.lastcontact
AS contacts_lastcontact \nFROM contacts ORDER BY lower(contacts.firstname)'
[]  

Am I being extremely naive in thinking there would be some way of 'updating'
the table to show the new column?
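
For what it's worth, SQLAlchemy itself won't alter an existing table.  One
manual option -- assuming your backend accepts ADD COLUMN and spells the
type DATETIME -- is a raw DDL statement; migration tools such as
sqlalchemy-migrate can also manage this kind of change:

engine.execute("ALTER TABLE contacts ADD COLUMN lastcontact DATETIME")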

Many thanks,

Jules





[sqlalchemy] how to use Sql functions in object-relational objects?

2008-11-10 Thread fanlix

Hi:
  In my app there is a user_table with an access_time column.
  Without SQLAlchemy I would just run "update user_table set access_time =
Now()".
  With SQLAlchemy, where user is a mapped (object-relational) object, do I
have to compute the time in the application and assign user.access_time = now()?

  Or is there a better way?
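
A sketch of that idea using SQLAlchemy's func, so the database's own Now()
is used rather than an application-side timestamp (names as above; a
Column onupdate=func.now() default is another option):

from sqlalchemy import func

user.access_time = func.now()   # renders NOW() inside the UPDATE at flush time
session.flush()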




[sqlalchemy] Re: Removing aggregate function from query results

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 5:35 AM, Ian Charnas wrote:


 Hello Alchemy Land!

 If I have a simple test-case with Bowler objects and City objects, and
 I want to use func.max and group_by in order to find the highest
 scorers in each city... I might do something like this:

 max_score = func.max(Bowler.highscore).label('highest_score')
 results = session.query(Bowler,
 max_score).group_by(Bowler.city_id).all()

 So this works as I'd expect, and 'results' now contains
 (Bowler,max_score) tuples... but  what I really want is to have a
 query that just returns Bowler objects, and not these tuples.  Is
 there a way to get rid of that 'max_score' column from the result
 set?  I've been at this for hours, I bet it's really simple but I just
 can't find it.


hey Ian -

any chance you can just use the func.max() at the end of the Query
using the values() method?  That way it's just an ad-hoc thing.
Otherwise there's no official way to remove an entity from an
existing Query's list of entities.
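
For reference, a rough, untested sketch of that values() idea, reusing the
names from the example above -- the Query itself stays a plain Bowler query
and the aggregate is only asked for ad hoc at the end:

q = session.query(Bowler).group_by(Bowler.city_id)
for city_id, highest in q.values(Bowler.city_id, func.max(Bowler.highscore)):
    print city_id, highest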






[sqlalchemy] default=0.0 on Float Column produces `col` float default NULL

2008-11-10 Thread Simon

Hi all,

I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a
table with a float column and would like to have a default value of 0:

Column('col', Float(), default=0.0)

However, executing metadata.create_all(engine) yields

CREATE TABLE `Table` (
  ...
  `col` float default NULL,
  ...
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Is that a bug, or am I erring somewhere?

Thanks, Simon



[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-10 Thread Cito

The new behavior is exactly what I expect, namely that query.count()
returns the same as len(query.all()). Are there cases in which this
does not make sense or where this would not work?

-- Christoph



[sqlalchemy] Re: Memory leak - is session.close() sufficient?

2008-11-10 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of joelanman
 Sent: 10 November 2008 00:21
 To: sqlalchemy
 Subject: [sqlalchemy] Re: Memory leak - is session.close() sufficient?
 
 
 Thanks for all the advice - I've changed my unicode settings and
 upgraded Beaker, but still have something to fix.. I'll report back if
 I find it.

I'm sure it's not relevant, but for a while I was developing an
application with SQLAlchemy (0.3.something) on Python 2.4.1, and had
problems with memory leaks. I upgraded to Python 2.4.4 and the leaks
went away.

Simon




[sqlalchemy] Re: schema inspection api

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 2:25 AM, Randall Smith wrote:


 Just to make sure we're considering the same plan, I don't plan to  
 make
 any API changes that would cause breakage.  All changes are additions
 including the public API and some new dialect methods (get_views,
 get_indexes, ...).  Most of the work as I see it is in refactoring and
 testing.


yeah its lookin great, thanks !





[sqlalchemy] Re: default=0.0 on Float Column produces `col` float default NULL

2008-11-10 Thread jason kirtland

Simon wrote:
 Hi all,
 
 I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a
 table with a float column and would like to have a default value of 0:
 
 Column('col', Float(), default=0.0)
 
 However, executing metadata.create_all(engine) yields
 
 CREATE TABLE `Table` (
   ...
   `col` float default NULL,
   ...
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1
 
 Is that a bug, or am I erring somewhere?

default= is purely a client-side default executed in Python.  For a
server-side (DDL) default, you want Column(..., server_default='0.0')
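
Side by side, roughly:

# client-side: SQLAlchemy supplies 0.0 in the INSERT when no value is given
Column('col', Float(), default=0.0)

# server-side: emitted as DDL, so it shows up in CREATE TABLE
Column('col', Float(), server_default='0.0')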






[sqlalchemy] Re: conn.execute('CREATE SCHEMA %(s)s', {'s':s}) escapes schema name, but it shouldn't, and raises ProgrammingError

2008-11-10 Thread Michael Bayer

Postgres doesn't allow bind parameters to be used with CREATE SCHEMA -
it expects an identifier, not a literal value.  When I try it on my
system I don't get the E behavior you're getting; I get:

sqlalchemy.exc.ProgrammingError: (ProgrammingError) syntax error at or  
near 'foo'
LINE 1: CREATE SCHEMA 'foo'
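
A workaround is to interpolate (and quote) the identifier yourself instead
of passing it as a bind parameter -- a sketch, assuming the schema name
comes from a trusted source:

schema = 'system'
conn.execute('CREATE SCHEMA "%s"' % schema)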




On Nov 10, 2008, at 6:44 AM, sector119 wrote:


 Hi!

 I use PostgreSQL and when I try to create schema I use following
 command conn.execute('CREATE SCHEMA %(s)s', {'s':s})

 I get raise exc.DBAPIError.instance(statement, parameters, e,
 connection_invalidated=is_disconnect)
 sqlalchemy.exc.ProgrammingError: (ProgrammingError) syntax error at or
 near E'system'
 LINE 1: CREATE SCHEMA E'system'

 Why it try to escape schema name and how to make it not to do that? :)

 Thanks a lot!
 





[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 12:08 PM, John Hunter wrote:


 On Mon, Nov 10, 2008 at 10:05 AM, King Simon-NFHD78
 [EMAIL PROTECTED] wrote:

 Actually, the section after that (Using Subqueries) probably does
 something very close to what you want. What's the result of these  
 lines:

 q1 = (session.query(Snapshot.strategy, Snapshot.symbol, sum_pnl)
 .group_by(Snapshot.strategy, Snapshot.symbol)
  .having(sum_pnl > -15000)).subquery()

 q2 = (session.query(Snapshot.strategy, func.count('*'))
 .join(q1, Snapshot.strategy = q1.c.strategy)
 .groupby(Snapshot.strategy))


 I assume you mean '=='? I get a foreign key error on the join::

sum_pnl = func.sum(Snapshot.pnl)

q1 = (session.query(Snapshot.strategy, Snapshot.symbol, sum_pnl)
 .group_by(Snapshot.strategy, Snapshot.symbol)
  .having(sum_pnl > -15000)).subquery()

q2 = (session.query(Snapshot.strategy, func.count('*'))
 .join(q1, Snapshot.strategy==q1.c.strategy)
 .groupby(Snapshot.strategy))

  File /home/titan/johnh/dev/lib/python2.4/site-packages/ 
 SQLAlchemy-0.5.0rc3dev_r5205-py2.4.egg/sqlalchemy/sql/util.py,
 line 109, in join_condition
raise exc.ArgumentError(
 ArgumentError: Can't find any foreign key relationships between
 'snapshot' and '{ANON 157186924 anon}'

 Still playing with the aliases in the link you referred me to above
 but haven't gotten there yet...

you need an extra tuple on the join, query.join((q1, s.s==q1.c.s))
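
i.e. roughly this (a sketch of the corrected q2, also spelling group_by
with the underscore):

q2 = (session.query(Snapshot.strategy, func.count(Snapshot.symbol))
      .join((q1, Snapshot.strategy == q1.c.strategy))
      .group_by(Snapshot.strategy))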

I think I might need to look into raising an error when the arguments
aren't sent properly; I'm not sure why it doesn't do that already.






[sqlalchemy] Re: 0.5.0rc3 - kinterbasdb.ProgrammingError: (0, 'Invalid cursor state. The cursor must be open to perform this operation.')

2008-11-10 Thread Werner F. Bruhin

Michael,

Michael Bayer wrote:
 I know what this is and it should be working in r5280.   I don't have  
 access to firebird here so we weren't able to run the tests on it  
 before rc3 was out.
   
Thanks for the quick reply.

Looking at the changes doc these will be included in rc4 - any idea when 
this will come out?

Werner

P.S.
What is involved to get SA r5280 onto my MS Vista machine to test?  Is 
this just a SVN checkout or are there some additional build/compile steps?





[sqlalchemy] Re: 0.5.0rc3 - kinterbasdb.ProgrammingError: (0, 'Invalid cursor state. The cursor must be open to perform this operation.')

2008-11-10 Thread Michael Bayer

I know what this is and it should be working in r5280.   I don't have  
access to firebird here so we weren't able to run the tests on it  
before rc3 was out.


On Nov 10, 2008, at 7:39 AM, Werner F. Bruhin wrote:


 I am getting sometimes the following exception with rc3 which I did  
 not
 see with rc2 when I do something like this:


 engine = sa.create_engine(dburl, encoding='utf8', echo=False)
 # connect to the database
 ##connection = engine.connect()
 Session = sao.sessionmaker()
 Session.configure(bind=engine)
 ##Session.configure(bind=connection)
 session = Session()
 query = session.query(db.Preferences).get(1)
 lang = session.query(db.Language).get(2)
 query.language = lang
 session.commit()

 Am I doing which I should not, which now causes this or ...?

 Best regards
 Werner

 Following the traceback and part of my model.

 Traceback (most recent call last):
  File saTest.py, line 56, in module
session.commit()
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\session.py,
 line 670, in commit
self.transaction.commit()
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\session.py,
 line 375, in commit
self._prepare_impl()
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\session.py,
 line 359, in _prepare_impl
self.session.flush()
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\session.py,
 line 1354, in flush
self._flush(objects)
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\session.py,
 line 1424, in _flush
flush_context.execute()
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\unitofwork.py,
 line 260, in execute
UOWExecutor().execute(self, tasks)
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\unitofwork.py,
 line 723, in execute
self.execute_save_steps(trans, task)
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\unitofwork.py,
 line 738, in execute_save_steps
self.save_objects(trans, task)
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\unitofwork.py,
 line 729, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\orm\mapper.py,
 line 1318, in _save_obj
rows += c.rowcount
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\engine\base.py,
 line 1397, in rowcount
return self.context.get_rowcount()
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\engine\default.py,
 line 279, in get_rowcount
return self.cursor.rowcount
  File
 c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg 
 \sqlalchemy\pool.py,
 line 466, in __getattr__
return getattr(self.cursor, key)
 kinterbasdb.ProgrammingError: (0, 'Invalid cursor state.  The cursor
 must be open to perform this operation.')

 The relevant part of my model are:
 class BaseExt(object):
def __repr__(self):
 return "%s(%s)" % (
  (self.__class__.__name__),
  ', '.join(["%s=%r" % (key, getattr(self, key))
 for key in sorted(self.__dict__.keys())
 if not key.startswith('_')]))

 Base = sad.declarative_base(cls=BaseExt)
 metadata = Base.metadata


 class Language(Base):
__table__ = sa.Table(u'language', metadata,
sa.Column(u'langid', sa.Integer(),
 sa.Sequence('gen_language_langid'), primary_key=True, nullable=False),
sa.Column(u'name', sa.String(length=50, convert_unicode=False),
 nullable=False),
sa.Column(u'locales', sa.String(length=2, convert_unicode=False)),
sa.Column(u'created', sa.Date()),
sa.Column(u'updated', sa.Date()),
)

 class Preferences(Base):
__table__ = sa.Table(u'preferences', metadata,
sa.Column(u'prefid', sa.Integer(),
 sa.Sequence('gen_preferences_prefid'), primary_key=True,  
 nullable=False),
 ...
sa.Column(u'fk_langid', sa.Integer(),
 sa.ForeignKey(u'language.langid'), nullable=False),
 ...
)

cellar = sao.relation(Cellar)
language = sao.relation(Language)
reason_ls = sao.relation(Reason_Ls)
displayformats = sao.relation(Displayformats)
measure_ls = sao.relation(Measure_Ls)
ingr_ls = sao.relation(Ingr_Ls)
tastingsys = sao.relation(Tastingsys)
imagetype_ls = sao.relation(Imagetype_Ls, primaryjoin=
('Preferences.fk_imagetypeid==Imagetype_Ls.imagetypeid'))
filters = sao.relation(Filters)
ratingtype_ls = sao.relation(Ratingtype_Ls)
container_ls = sao.relation(Container_Ls)
imagetype_ls_rec = sao.relation(Imagetype_Ls, primaryjoin=
 
 ('Preferences.rec_fk_imagetypeid==Imagetype_Ls.imagetypeid'))



 



[sqlalchemy] Re: 0.5.0rc3 - kinterbasdb.ProgrammingError: (0, 'Invalid cursor state. The cursor must be open to perform this operation.')

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 12:10 PM, Werner F. Bruhin wrote:


 Michael,

 Michael Bayer wrote:
 I know what this is and it should be working in r5280.   I don't have
 access to firebird here so we weren't able to run the tests on it
 before rc3 was out.

 Thanks for the quick reply.

 Looking at the changes doc these will be included in rc4 - any idea  
 when
 this will come out?

hoping we can get to 0.5.0 final, we generally release every 3-4 weeks


 P.S.
 What is involved to get SA r5280 onto my MS Vista machine to test?  Is
 this just a SVN checkout or are there some additional build/compile  
 steps?

you can do easy_install http://svn.sqlalchemy.org/sqlalchemy/trunk .






[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread John Hunter

On Mon, Nov 10, 2008 at 10:05 AM, King Simon-NFHD78
[EMAIL PROTECTED] wrote:

 Actually, the section after that (Using Subqueries) probably does
 something very close to what you want. What's the result of these lines:

 q1 = (session.query(Snapshot.strategy, Snapshot.symbol, sum_pnl)
  .group_by(Snapshot.strategy, Snapshot.symbol)
   .having(sum_pnl > -15000)).subquery()

 q2 = (session.query(Snapshot.strategy, func.count('*'))
  .join(q1, Snapshot.strategy = q1.c.strategy)
  .groupby(Snapshot.strategy))


I assume you mean '=='? I get a foreign key error on the join::

sum_pnl = func.sum(Snapshot.pnl)

q1 = (session.query(Snapshot.strategy, Snapshot.symbol, sum_pnl)
 .group_by(Snapshot.strategy, Snapshot.symbol)
  .having(sum_pnl > -15000)).subquery()

q2 = (session.query(Snapshot.strategy, func.count('*'))
 .join(q1, Snapshot.strategy==q1.c.strategy)
 .groupby(Snapshot.strategy))

  File 
/home/titan/johnh/dev/lib/python2.4/site-packages/SQLAlchemy-0.5.0rc3dev_r5205-py2.4.egg/sqlalchemy/sql/util.py,
line 109, in join_condition
raise exc.ArgumentError(
ArgumentError: Can't find any foreign key relationships between
'snapshot' and '{ANON 157186924 anon}'

Still playing with the aliases in the link you referred me to above
but haven't gotten there yet...




[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-10 Thread Michael Bayer

it should be fine.


On Nov 10, 2008, at 6:34 AM, Cito wrote:


 The new behavior is exactly what I expect, namely that query.count()
 returns the same as len(query.all()). Are there cases in which this
 does not make sense or where this would not work?

 -- Christoph
 





[sqlalchemy] Re: Session and postgres_returning

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 8:37 AM, paftek wrote:


 Hi,

 I am using SQLAlchemy with PostgreSQL 8.3.
 Today I just discovered the great postgres_returning functionnality.

 Dumb question :
 Is it possible to take advantage of it in session mode ? In other
 words, is it possible for session.add() to issue only one INSERT
 INTO ... RETURNING ... query instead of two (INSERT then SELECT),
 like in the following example :

this is an enhancement that we'll be pursuing in the near future.
Jason was having problems getting INSERT RETURNING to work at all with  
pg 8.3 the other day for some reason.

the exact functionality will be that the PG dialect uses INSERT
RETURNING by default in all cases where right now it executes a
sequence beforehand.  This means that if you issue an insert() using
PG, no explicitness will be needed in order to get new defaults or
primary key values back.  A dialect flag
use_returning_on_inserts=True will allow control over the feature at
the configurational level.  The ORM won't be aware of it explicitly.
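
In the meantime RETURNING can already be used with explicit SQL-expression
inserts, outside the ORM.  An untested sketch, assuming the 0.5-era
postgres_returning keyword and an items_table Table object behind the Item
class:

stmt = items_table.insert(postgres_returning=[items_table.c.foo]).values(
    sku='DDD', title='Title DDD')
new_foo = engine.execute(stmt).fetchone()[0]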






[sqlalchemy] 0.5.0rc3 - kinterbasdb.ProgrammingError: (0, 'Invalid cursor state. The cursor must be open to perform this operation.')

2008-11-10 Thread Werner F. Bruhin

I am sometimes getting the following exception with rc3 which I did not
see with rc2, when I do something like this:


engine = sa.create_engine(dburl, encoding='utf8', echo=False)
# connect to the database
##connection = engine.connect()
Session = sao.sessionmaker()
Session.configure(bind=engine)
##Session.configure(bind=connection)
session = Session()
query = session.query(db.Preferences).get(1)
lang = session.query(db.Language).get(2)
query.language = lang
session.commit()

Am I doing something I should not, which now causes this, or ...?

Best regards
Werner

Following the traceback and part of my model.

Traceback (most recent call last):
  File saTest.py, line 56, in module
session.commit()
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\session.py,
 
line 670, in commit
self.transaction.commit()
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\session.py,
 
line 375, in commit
self._prepare_impl()
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\session.py,
 
line 359, in _prepare_impl
self.session.flush()
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\session.py,
 
line 1354, in flush
self._flush(objects)
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\session.py,
 
line 1424, in _flush
flush_context.execute()
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\unitofwork.py,
 
line 260, in execute
UOWExecutor().execute(self, tasks)
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\unitofwork.py,
 
line 723, in execute
self.execute_save_steps(trans, task)
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\unitofwork.py,
 
line 738, in execute_save_steps
self.save_objects(trans, task)
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\unitofwork.py,
 
line 729, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\orm\mapper.py,
 
line 1318, in _save_obj
rows += c.rowcount
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\engine\base.py,
 
line 1397, in rowcount
return self.context.get_rowcount()
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\engine\default.py,
 
line 279, in get_rowcount
return self.cursor.rowcount
  File 
c:\python25\lib\site-packages\sqlalchemy-0.5.0rc3-py2.5.egg\sqlalchemy\pool.py,
 
line 466, in __getattr__
return getattr(self.cursor, key)
kinterbasdb.ProgrammingError: (0, 'Invalid cursor state.  The cursor 
must be open to perform this operation.')

The relevant parts of my model are:

class BaseExt(object):
    def __repr__(self):
        return "%s(%s)" % (
            (self.__class__.__name__),
            ', '.join(["%s=%r" % (key, getattr(self, key))
                       for key in sorted(self.__dict__.keys())
                       if not key.startswith('_')]))

Base = sad.declarative_base(cls=BaseExt)
metadata = Base.metadata


class Language(Base):
    __table__ = sa.Table(u'language', metadata,
        sa.Column(u'langid', sa.Integer(),
            sa.Sequence('gen_language_langid'), primary_key=True, nullable=False),
        sa.Column(u'name', sa.String(length=50, convert_unicode=False),
            nullable=False),
        sa.Column(u'locales', sa.String(length=2, convert_unicode=False)),
        sa.Column(u'created', sa.Date()),
        sa.Column(u'updated', sa.Date()),
    )

class Preferences(Base):
    __table__ = sa.Table(u'preferences', metadata,
        sa.Column(u'prefid', sa.Integer(),
            sa.Sequence('gen_preferences_prefid'), primary_key=True, nullable=False),
        ...
        sa.Column(u'fk_langid', sa.Integer(),
            sa.ForeignKey(u'language.langid'), nullable=False),
        ...
    )

    cellar = sao.relation(Cellar)
    language = sao.relation(Language)
    reason_ls = sao.relation(Reason_Ls)
    displayformats = sao.relation(Displayformats)
    measure_ls = sao.relation(Measure_Ls)
    ingr_ls = sao.relation(Ingr_Ls)
    tastingsys = sao.relation(Tastingsys)
    imagetype_ls = sao.relation(Imagetype_Ls, primaryjoin=
        ('Preferences.fk_imagetypeid==Imagetype_Ls.imagetypeid'))
    filters = sao.relation(Filters)
    ratingtype_ls = sao.relation(Ratingtype_Ls)
    container_ls = sao.relation(Container_Ls)
    imagetype_ls_rec = sao.relation(Imagetype_Ls, primaryjoin=
        ('Preferences.rec_fk_imagetypeid==Imagetype_Ls.imagetypeid'))




[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of John Hunter
 Sent: 08 November 2008 05:09
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Re: select where field=max(field)
 

[SNIP]

 Here is a query that lists the sum(pnl) for each symbol and strategy
 in my snapshots table
 
   session.query(Snapshot.strategy, Snapshot.symbol,
 func.sum(Snapshot.pnl)).group_by(Snapshot.strategy,
 Snapshot.symbol).all()
 
 That works fine.  But what if I only want to list the rows where the
  sum(pnl) > -15000?  I tried a few things:
 
   session.query(Snapshot.strategy, Snapshot.symbol,
 func.sum(Snapshot.pnl)).group_by(Snapshot.strategy,
  Snapshot.symbol).filter(func.sum(Snapshot.pnl) > -15000).all()
 
 but got the error below
 
 raise exc.DBAPIError.instance(statement, parameters, e,
 connection_invalidated=is_disconnect)
 ProgrammingError: (ProgrammingError) (, 'Invalid use of group
 function') u'SELECT snapshot.strategy AS snapshot_strategy,
 snapshot.symbol AS snapshot_symbol, sum(snapshot.pnl) AS sum_1 \nFROM
  snapshot \nWHERE sum(snapshot.pnl) > %s GROUP BY snapshot.strategy,
 snapshot.symbol' [-15000]
 

I'm no SQL expert, so please take this with a pinch of salt, but as far
as I know, conditions in the 'WHERE' clause of an SQL statement are
applied BEFORE any grouping, so you can't use grouping functions (such
as SUM, MAX etc) in the WHERE clause. Instead, you add a 'HAVING'
statement on the end to filter the rows AFTER the grouping.

ie. The SQL you want is something like:

SELECT snapshot.strategy AS snapshot_strategy,
   snapshot.symbol AS snapshot_symbol,
   sum(snapshot.pnl) AS sum_1
FROM snapshot
GROUP BY snapshot.strategy, snapshot.symbol
HAVING sum(snapshot.pnl) > 15000

In SA, I think you might be able to write that as (untested):

session.query(Snapshot.strategy,
  Snapshot.symbol,
  func.sum(Snapshot.pnl))
.group_by(Snapshot.strategy, Snapshot.symbol)
.having(func.sum(Snapshot.pnl) > -15000).all()


BTW, I think the 'no grouping functions in WHERE clause' rule is also
the reason why your MAX query didn't work. The  fix that Mike gave you
turned that part of your query into a subquery that only produced that
single value. This statement:

func.max(Snapshot.datetime).select()

...gets turned into something like 'SELECT max(datetime) FROM snapshot'.
This then gets embedded as a subquery into your larger query. It's
probably worth printing the SQL produced by each of the queries so that
you can see the difference.

Hope that helps,

Simon




[sqlalchemy] Re: Removing aggregate function from query results

2008-11-10 Thread Ian Charnas

Simon, Michael, thank you!

Simon, yes you were totally right, my query was totally wrong!  I was
up all night trying to get some code working, and at 5am I was getting
a little fuzzy.  I'd like to use that as my excuse ;-)

What I ended up doing this morning was doing a simple query with max
and group_by that finds the highest score in each city, and then
joined that with Bowler.  The resulting SQL looks great.  Thanks for
your help!

max_score = func.max(Bowler.highscore).label('city_highscore')
city_highscores = session.query(Bowler.city_id,
    max_score).group_by(Bowler.city_id).subquery()

best_bowlers = session.query(Bowler).join(
    (city_highscores,
     and_(
         Bowler.city_id==city_highscores.c.city_id,
         Bowler.highscore==city_highscores.c.city_highscore
     )
    )
)

Hope this helps someone searching on the list!
-Ian

On Nov 10, 10:27 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 10, 2008, at 5:35 AM, Ian Charnas wrote:





  Hello Alchemy Land!

  If I have a simple test-case with Bowler objects and City objects, and
  I want to use func.max and group_by in order to find the highest
  scorers in each city... I might do something like this:

  max_score = func.max(Bowler.highscore).label('highest_score')
  results = session.query(Bowler,
  max_score).group_by(Bowler.city_id).all()

  So this works as I'd expect, and 'results' now contains
  (Bowler,max_score) tuples... but  what I really want is to have a
  query that just returns Bowler objects, and not these tuples.  Is
  there a way to get rid of that 'max_score' column from the result
  set?  I've been at this for hours, I bet it's really simple but I just
  can't find it.

 hey Ian -

 any chance you can just use the func.max() at the end of the Query  
 using the values() method ?  that way its just an ad-hoc thing.    
 Otherwise there's no official way to remove an entity from an  
 existing Query's list of entities.



[sqlalchemy] Re: default=0.0 on Float Column produces `col` float default NULL

2008-11-10 Thread jason kirtland

With 0.4 it's a positional argument to Column: Column('col', Float(), 
PassiveDefault('0.0'))

Simon wrote:
 Thanks Jason! Is there any way of doing this in SA 0.4 as well?
 
 On 10 Nov., 16:42, jason kirtland [EMAIL PROTECTED] wrote:
 Simon wrote:
 Hi all,
 I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a
 table with a float column and would like to have a default value of 0:
 Column('col', Float(), default=0.0)
 However, executing metadata.create_all(engine) yields
 CREATE TABLE `Table` (
   ...
   `col` float default NULL,
   ...
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1
 Is that a bug, or am I erring somewhere?
 default= is purely a client-side default executed in Python.  For a
  server-side (DDL) default, you want Column(..., server_default='0.0')
  





[sqlalchemy] 0.5rc3 problem (works in 0.5rc2 and 0.4.8)

2008-11-10 Thread David Gardner

Had a problem this morning where SA 0.5rc3 was returning None, while 
0.5rc2 and 0.4.8 returned the expected object/row.

tables & mappers:
--
typehierarchy_table = Table('typehierarchy', metadata, autoload=True)
typehierarchy_names_table = Table('typehierarchy_names', metadata, 
autoload=True)
mapper(TypeNode, typehierarchy_table, properties={
  'AutoPopNames':relation(TypeAutoPop, backref='TypeNode'),
  'Children':relation(TypeNode, 
primaryjoin=(typehierarchy_table.c.id==typehierarchy_table.c.parent),
  
remote_side=[typehierarchy_table.c.parent],backref='Parent')}, 
save_on_init=False)

mapper(TypeAutoPop,typehierarchy_names_table, save_on_init=False)


0.5.0rc3:

>>> from assetdb import *
DEV BRANCH:assetdb.py
>>> print sqlalchemy.__version__
0.5.0rc3
>>> session=create_session()
>>> t='h2_prj'
>>> show='sid'
>>> parent=None
>>> db.echo=True
>>> tn = session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
2008-11-10 12:19:06,534 INFO sqlalchemy.engine.base.Engine.0x...af50
SELECT typehierarchy.project AS typehierarchy_project,
typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
typehierarchy_id, typehierarchy.type AS typehierarchy_type,
typehierarchy.static AS typehierarchy_static
FROM typehierarchy
WHERE NOT (EXISTS (SELECT 1
FROM typehierarchy AS typehierarchy_1
WHERE typehierarchy.id = typehierarchy_1.parent)) AND typehierarchy.type
= %(type_1)s AND typehierarchy.project = %(project_1)s
2008-11-10 12:19:06,535 INFO sqlalchemy.engine.base.Engine.0x...af50
{'type_1': 'h2_prj', 'project_1': 'sid'}
>>> tn
[]
 


0.5.0rc2:

>>> from assetdb import *
DEV BRANCH:assetdb.py
>>> print sqlalchemy.__version__
0.5.0rc2
>>> session=create_session()
>>> t='h2_prj'
>>> show='sid'
>>> parent=None
>>> db.echo=True
>>> tn = session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
SELECT typehierarchy.project AS typehierarchy_project,
typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
typehierarchy_id, typehierarchy.type AS typehierarchy_type,
typehierarchy.static AS typehierarchy_static
FROM typehierarchy
WHERE NOT (EXISTS (SELECT 1
FROM typehierarchy
WHERE typehierarchy.id = typehierarchy.parent)) AND typehierarchy.type =
%(type_1)s AND typehierarchy.project = %(project_1)s
2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
{'type_1': 'h2_prj', 'project_1': 'sid'}
>>> tn
[<assetdb.TypeNode object at 0x201f3d0>]



-- 

David Gardner
Pipeline Tools Programmer, Sid the Science Kid
Jim Henson Creature Shop
[EMAIL PROTECTED]






[sqlalchemy] extending pre-existing tables

2008-11-10 Thread rdmurray

I sent this last week but it seems like it may not have been
posted to the list...at least, I haven't seen any responses :)

-- Forwarded message --
Date: Tue, 4 Nov 2008 18:00:57 -0500 (EST)
From: [EMAIL PROTECTED]
To: sqlalchemy@googlegroups.com
Subject: extending pre-existing (read only) tables

I've got a somewhat oddball situation, and I'm hoping for some pointers
to the right sqlalchemy features to handle this.  Or, failing that,
hints as to the places I'm going to need to write code to compensate
for the weirdness I have to deal with.

The situation is that I get periodic copies of a set of database tables.
In my local database, I need to provide supplemental data that is
maintained only in my copy.

In my application library, I need to support two models: a model that
includes only the data of which I get periodic copies, and a second
model that includes my supplemental data.  So, object wise what I think
I want to have is a set of objects for the imported data, and then
a second set of objects that subclasses the first and extends them.
(ex: a Publication object, and then an ExtendedPublication object).
So I have two sets of tables, the base tables (whose schema I would
prefer not to touch), and a second set of parallel tables containing
my local data.  On import, I just drop and replace the base tables,
leaving my tables untouched.

Right now I'm playing with mapper inheritance.  The first stumbling
block I've come to is the case where the local table doesn't yet have
a row for the object from the base table.  In that case, a query on my
subclassed object returns nothing.  Do I need to write an import program
that creates rows in the local tables for any items in the imported data
that don't yet have them?  That strikes me as rather fragile.  Is there
a way to automate the creation of the local table row with default values?

Or am I approaching this all wrong?

--RDM




[sqlalchemy] Re: extending pre-existing tables

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 3:42 PM, [EMAIL PROTECTED] wrote:


 Right now I'm playing with mapper inheritance.  The first stumbling
 block I've come to is the case where the local table doesn't yet have
 a row for the object from the base table.  In that case, a query on my
 subclassed object returns nothing.  Do I need to write an import  
 program
 that creates rows in the local tables for any items in the imported  
 data
 that don't yet have them?  That strikes me as rather fragile.  Is  
 there
 a way to automate the creation of the local table row with default  
 values?

using table inheritance implies that for a subclass such as  
ExtendedFoo, both tables are populated at all times.   So yes you'd  
have to work out some way to ensure that the database is in the state  
required to use SQLA's table inheritance functionality.   The system
you're using to populate the base tables would need to be extended  
in this way.
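
As a rough sketch of the setup being described (the class and table names
below are invented, not from the original post), a joined-table inheritance
mapping along these lines only loads an ExtendedPublication when a matching
row exists in *both* tables:

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey
from sqlalchemy.orm import mapper

metadata = MetaData()

# base table: dropped and replaced wholesale on each import
publication_table = Table('publication', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', String(200)),
)

# local table: supplemental data, keyed to the base row
extended_table = Table('extended_publication', metadata,
    Column('id', Integer, ForeignKey('publication.id'), primary_key=True),
    Column('notes', String(500)),
)

class Publication(object):
    pass

class ExtendedPublication(Publication):
    pass

mapper(Publication, publication_table)
# joined-table inheritance: querying ExtendedPublication joins both tables,
# so a base row with no extended row never comes back as an ExtendedPublication
mapper(ExtendedPublication, extended_table, inherits=Publication)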

 Or am I approaching this all wrong?

Possibly.  The fragility here is that you are relying on a model  
that isn't actually implemented here, i.e. that your application is  
written around a table inheritance assumption when that is not  
actually the case - the extended tables may or may not have a row  
present for a corresponding base row.   It would likely be more  
straightforward to implement a non-inheriting system.





[sqlalchemy] Re: 0.5rc3 problem (works in 0.5rc2 and 0.4.8)

2008-11-10 Thread Michael Bayer

Hi David -

One thing I notice is that your remote_side on the self referential
relation from TypeNode->Children is not needed, whereas it *is* needed
on the TypeNode->Parent side, which is the many to one side, using
backref=backref('Parent', remote_side=[typehierarchy_table.c.id]).
Without it, SQLA assumes TypeNode->Parent is a one to many collection.

However, I cannot reproduce your behavior - if the remote_side is in  
fact missing on the backref side, the comparison of  
TypeNode.Parent==something correctly raises this error:
sqlalchemy.exc.InvalidRequestError: Can't compare a collection to an  
object or collection; use contains() to test for membership.

Here is a test case illustrating the current 0.5rc3 behavior - you can  
see that a many-to-one self referential comparison does not generate  
an EXISTS anymore:

from sqlalchemy import *
from sqlalchemy.orm import *

metadata = MetaData()

nodes = Table('nodes', metadata,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer, ForeignKey('nodes.id'))
)

class Node(object):
    def __init__(self, **kw):
        for k in kw:
            setattr(self, k, kw[k])

mapper(Node, nodes, properties={
    'Children':relation(Node, backref=backref('Parent', remote_side=nodes.c.id))
})

print create_session().query(Node).filter(Node.Parent==Node(id=2)).statement

output:

SELECT nodes.id, nodes.parent_id
FROM nodes
WHERE :param_1 = nodes.parent_id




On Nov 10, 2008, at 3:36 PM, David Gardner wrote:


 Had a problem this morning where SA 0.5rc3 was returning None, while
 0.5rc2 and 0.4.8 returned the expected object/row.

 tables  mappers:
 --
 typehierarchy_table = Table('typehierarchy', metadata, autoload=True)
 typehierarchy_names_table = Table('typehierarchy_names', metadata,
 autoload=True)
 mapper(TypeNode, typehierarchy_table, properties={
  'AutoPopNames':relation(TypeAutoPop, backref='TypeNode'),
  'Children':relation(TypeNode,
 primaryjoin=(typehierarchy_table.c.id==typehierarchy_table.c.parent),

 remote_side=[typehierarchy_table.c.parent],backref='Parent')},
 save_on_init=False)

 mapper(TypeAutoPop,typehierarchy_names_table, save_on_init=False)


 0.5.0rc3:
 
 from assetdb import *
 DEV BRANCH:assetdb.py
 print sqlalchemy.__version__
 0.5.0rc3
 session=create_session()
 t='h2_prj'
 show='sid'
 parent=None
 db.echo=True
 tn = session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
 2008-11-10 12:19:06,534 INFO sqlalchemy.engine.base.Engine.0x...af50
 SELECT typehierarchy.project AS typehierarchy_project,
 typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
 typehierarchy_id, typehierarchy.type AS typehierarchy_type,
 typehierarchy.static AS typehierarchy_static
 FROM typehierarchy
 WHERE NOT (EXISTS (SELECT 1
 FROM typehierarchy AS typehierarchy_1
 WHERE typehierarchy.id = typehierarchy_1.parent)) AND  
 typehierarchy.type
 = %(type_1)s AND typehierarchy.project = %(project_1)s
 2008-11-10 12:19:06,535 INFO sqlalchemy.engine.base.Engine.0x...af50
 {'type_1': 'h2_prj', 'project_1': 'sid'}
 tn
 []



 0.5.0rc2:
 
 from assetdb import *
 DEV BRANCH:assetdb.py
 print sqlalchemy.__version__
 0.5.0rc2
 session=create_session()
 t='h2_prj'
 show='sid'
 parent=None
 db.echo=True
 tn = session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
 2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
 SELECT typehierarchy.project AS typehierarchy_project,
 typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
 typehierarchy_id, typehierarchy.type AS typehierarchy_type,
 typehierarchy.static AS typehierarchy_static
 FROM typehierarchy
 WHERE NOT (EXISTS (SELECT 1
 FROM typehierarchy
 WHERE typehierarchy.id = typehierarchy.parent)) AND  
 typehierarchy.type =
 %(type_1)s AND typehierarchy.project = %(project_1)s
 2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
 {'type_1': 'h2_prj', 'project_1': 'sid'}
 tn
 [assetdb.TypeNode object at 0x201f3d0]



 -- 

 David Gardner
 Pipeline Tools Programmer, Sid the Science Kid
 Jim Henson Creature Shop
 [EMAIL PROTECTED]



 





[sqlalchemy] Re: extending pre-existing tables

2008-11-10 Thread rdmurray

On Mon, 10 Nov 2008 at 17:19, Michael Bayer wrote:
 Possibly.  The fragility here is that you are relying on a model
 that isn't actually implemented here, i.e. that your application is
 written around a table inheritance assumption when that is not
 actually the case - the extended tables may or may not have a row
 present for a corresponding base row.   It would likely be more
 straightforward to implement a non-inheriting system.

OK.  Is there a way to do that and still have the (for example)
ExtendedPublication object inherit from the Publication object, or do
I need to switch to an object composition scheme?  In which case
pre-populating the extended tables may be a lot easier :)

--RDM




[sqlalchemy] Re: 0.5rc3 problem (works in 0.5rc2 and 0.4.8)

2008-11-10 Thread David Gardner
Yeah, I kinda figured that out in a round-about way, because 0.5rc2 and 
0.4.8 would get me my object, but then node.Parent was the collection of 
children. Thanks for the response.
Michael Bayer wrote:
 Hi David -

 One thing I notice is that your remote_side on the self referential
 relation from TypeNode-Children is not needed, whereas it *is* needed
 on the TypeNode-Parent side, which is the many to one side, using
 backref=backref('Parent', remote_side=[typehierarchy_table.c.id]).
 Without it, SQLA assumes TypeNode-Parent is a one to many collection.

 However, I cannot reproduce your behavior - if the remote_side is in
 fact missing on the backref side, the comparison of
 TypeNode.Parent==something correctly raises this error:
 sqlalchemy.exc.InvalidRequestError: Can't compare a collection to an
 object or collection; use contains() to test for membership.

 Here is a test case illustrating the current 0.5rc3 behavior - you can
 see that a many-to-one self referential comparison does not generate
 an EXISTS anymore:

 from sqlalchemy import *
 from sqlalchemy.orm import *

 metadata = MetaData()

 nodes = Table('nodes', metadata,
  Column('id', Integer, primary_key=True),
  Column('parent_id', Integer, ForeignKey('nodes.id'))
 )

 class Node(object):
  def __init__(self, **kw):
  for k in kw:
  setattr(self, k, kw[k])

 mapper(Node, nodes, properties={
  'Children':relation(Node, backref=backref('Parent',
 remote_side=nodes.c.id))
 })

 print
 create_session().query(Node).filter(Node.Parent==Node(id=2)).statement

 output:

 SELECT nodes.id, nodes.parent_id
 FROM nodes
 WHERE :param_1 = nodes.parent_id




 On Nov 10, 2008, at 3:36 PM, David Gardner wrote:

   
 Had a problem this morning where SA 0.5rc3 was returning None, while
 0.5rc2 and 0.4.8 returned the expected object/row.

 tables  mappers:
 --
 typehierarchy_table = Table('typehierarchy', metadata, autoload=True)
 typehierarchy_names_table = Table('typehierarchy_names', metadata,
 autoload=True)
 mapper(TypeNode, typehierarchy_table, properties={
  'AutoPopNames':relation(TypeAutoPop, backref='TypeNode'),
  'Children':relation(TypeNode,
 primaryjoin=(typehierarchy_table.c.id==typehierarchy_table.c.parent),

 remote_side=[typehierarchy_table.c.parent],backref='Parent')},
 save_on_init=False)

 mapper(TypeAutoPop,typehierarchy_names_table, save_on_init=False)


 0.5.0rc3:
 
 
 from assetdb import *
   
 DEV BRANCH:assetdb.py
 
 print sqlalchemy.__version__
   
 0.5.0rc3
 
 session=create_session()
 t='h2_prj'
 show='sid'
 parent=None
 db.echo=True
 tn = session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
 2008-11-10 12:19:06,534 INFO sqlalchemy.engine.base.Engine.0x...af50
 SELECT typehierarchy.project AS typehierarchy_project,
 typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
 typehierarchy_id, typehierarchy.type AS typehierarchy_type,
 typehierarchy.static AS typehierarchy_static
 FROM typehierarchy
 WHERE NOT (EXISTS (SELECT 1
 FROM typehierarchy AS typehierarchy_1
 WHERE typehierarchy.id = typehierarchy_1.parent)) AND
 typehierarchy.type
 = %(type_1)s AND typehierarchy.project = %(project_1)s
 2008-11-10 12:19:06,535 INFO sqlalchemy.engine.base.Engine.0x...af50
 {'type_1': 'h2_prj', 'project_1': 'sid'}
 
 tn
   
 []
 
 0.5.0rc2:
 
 
 from assetdb import *
   
 DEV BRANCH:assetdb.py
 
 print sqlalchemy.__version__
   
 0.5.0rc2
 
 session=create_session()
 t='h2_prj'
 show='sid'
 parent=None
 db.echo=True
 tn = session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
 2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
 SELECT typehierarchy.project AS typehierarchy_project,
 typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
 typehierarchy_id, typehierarchy.type AS typehierarchy_type,
 typehierarchy.static AS typehierarchy_static
 FROM typehierarchy
 WHERE NOT (EXISTS (SELECT 1
 FROM typehierarchy
 WHERE typehierarchy.id = typehierarchy.parent)) AND
 typehierarchy.type =
 %(type_1)s AND typehierarchy.project = %(project_1)s
 2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
 {'type_1': 'h2_prj', 'project_1': 'sid'}
 
 tn
   
 [assetdb.TypeNode object at 0x201f3d0]



 --

 David Gardner
 Pipeline Tools Programmer, Sid the Science Kid
 Jim Henson Creature Shop
 [EMAIL PROTECTED]



 


 

   


-- 
David Gardner
Pipeline Tools Programmer, Sid the Science Kid
Jim Henson Creature Shop
(323) 802-1717 [EMAIL PROTECTED]



[sqlalchemy] Re: 0.5rc3 problem (works in 0.5rc2 and 0.4.8)

2008-11-10 Thread David Gardner

I ended up having problems with my backref (in any version of SA) so I 
re-worked my mappers and now 0.5rc3 is generating correct SQL. So this 
probably isn't a bug.

mapper(TypeNode, typehierarchy_table, properties={
  'AutoPopNames':relation(TypeAutoPop, backref='TypeNode'),
  'Children':relation(TypeNode, cascade="all", 
backref=backref("Parent", remote_side=[typehierarchy_table.c.id]))}, 
save_on_init=False)

tn = 
session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).first()

2008-11-10 14:25:04,488 INFO sqlalchemy.engine.base.Engine.0x...df50 
SELECT typehierarchy.project AS typehierarchy_project, 
typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS 
typehierarchy_id, typehierarchy.type AS typehierarchy_type, 
typehierarchy.static AS typehierarchy_static
FROM typehierarchy
WHERE typehierarchy.parent IS NULL AND typehierarchy.type = %(type_1)s 
AND typehierarchy.project = %(project_1)s
 LIMIT 1 OFFSET 0
2008-11-10 14:25:04,489 INFO sqlalchemy.engine.base.Engine.0x...df50 
{'type_1': 'h2_prj', 'project_1': 'sid'}


David Gardner wrote:
 Had a problem this morning where SA 0.5rc3 was returning None, while
 0.5rc2 and 0.4.8 returned the expected object/row.

 tables  mappers:
 --
 typehierarchy_table = Table('typehierarchy', metadata, autoload=True)
 typehierarchy_names_table = Table('typehierarchy_names', metadata,
 autoload=True)
 mapper(TypeNode, typehierarchy_table, properties={
   'AutoPopNames':relation(TypeAutoPop, backref='TypeNode'),
   'Children':relation(TypeNode,
 primaryjoin=(typehierarchy_table.c.id==typehierarchy_table.c.parent),

 remote_side=[typehierarchy_table.c.parent],backref='Parent')},
 save_on_init=False)

 mapper(TypeAutoPop,typehierarchy_names_table, save_on_init=False)


 0.5.0rc3:
 
   from assetdb import *
 DEV BRANCH:assetdb.py
   print sqlalchemy.__version__
 0.5.0rc3
   session=create_session()
   t='h2_prj'
   show='sid'
   parent=None
   db.echo=True
   tn =
 session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
 2008-11-10 12:19:06,534 INFO sqlalchemy.engine.base.Engine.0x...af50
 SELECT typehierarchy.project AS typehierarchy_project,
 typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
 typehierarchy_id, typehierarchy.type AS typehierarchy_type,
 typehierarchy.static AS typehierarchy_static
 FROM typehierarchy
 WHERE NOT (EXISTS (SELECT 1
 FROM typehierarchy AS typehierarchy_1
 WHERE typehierarchy.id = typehierarchy_1.parent)) AND typehierarchy.type
 = %(type_1)s AND typehierarchy.project = %(project_1)s
 2008-11-10 12:19:06,535 INFO sqlalchemy.engine.base.Engine.0x...af50
 {'type_1': 'h2_prj', 'project_1': 'sid'}
   tn
 []
  


 0.5.0rc2:
 
   from assetdb import *
 DEV BRANCH:assetdb.py
   print sqlalchemy.__version__
 0.5.0rc2
   session=create_session()
   t='h2_prj'
   show='sid'
   parent=None
   db.echo=True
   tn =
 session.query(TypeNode).filter(TypeNode.Parent==parent).filter(TypeNode.type==t).filter(TypeNode.project==show).all()
 2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
 SELECT typehierarchy.project AS typehierarchy_project,
 typehierarchy.parent AS typehierarchy_parent, typehierarchy.id AS
 typehierarchy_id, typehierarchy.type AS typehierarchy_type,
 typehierarchy.static AS typehierarchy_static
 FROM typehierarchy
 WHERE NOT (EXISTS (SELECT 1
 FROM typehierarchy
 WHERE typehierarchy.id = typehierarchy.parent)) AND typehierarchy.type =
 %(type_1)s AND typehierarchy.project = %(project_1)s
 2008-11-10 12:30:50,452 INFO sqlalchemy.engine.base.Engine.0x...9a10
 {'type_1': 'h2_prj', 'project_1': 'sid'}
   tn
 [assetdb.TypeNode object at 0x201f3d0]



 --

 David Gardner
 Pipeline Tools Programmer, Sid the Science Kid
 Jim Henson Creature Shop
 [EMAIL PROTECTED]



 

   


-- 
David Gardner
Pipeline Tools Programmer, Sid the Science Kid
Jim Henson Creature Shop
(323) 802-1717 [EMAIL PROTECTED]






[sqlalchemy] AttributeError: 'NoneType' object has no attribute 'pop'

2008-11-10 Thread arashf

I wasn't getting this on the betas of sqlalchemy, but am getting it on
rc3. Any ideas?

Traceback (most recent call last):
File /srv/server/metaserver/metaserver/lib/base.py, line 56, in
__call__
ret = WSGIController.__call__(self, environ, start_response)
File /usr/lib/python2.5/site-packages/Pylons-0.9.6.2-py2.5.egg/pylons/
controllers/core.py, line 195, in __call__
after = self._inspect_call(self.__after__)
File /usr/lib/python2.5/site-packages/Pylons-0.9.6.2-py2.5.egg/pylons/
controllers/core.py, line 79, in _inspect_call
result = func(**args)
File /srv/server/metaserver/metaserver/lib/base.py, line 96, in
__after__
metaserver.model.Session.commit()
File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
sqlalchemy/orm/scoping.py, line 121, in do
return getattr(self.registry(), name)(*args, **kwargs)
File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
sqlalchemy/orm/session.py, line 670, in commit
self.transaction.commit()
File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
sqlalchemy/orm/session.py, line 385, in commit
self._remove_snapshot()
File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
sqlalchemy/orm/session.py, line 306, in _remove_snapshot
_expire_state(s, None)
File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
sqlalchemy/orm/attributes.py, line 985, in expire_attributes
self.dict.pop(key, None)
AttributeError: 'NoneType' object has no attribute 'pop'



[sqlalchemy] Re: extending pre-existing tables

2008-11-10 Thread Michael Bayer


On Nov 10, 2008, at 5:34 PM, [EMAIL PROTECTED] wrote:


 On Mon, 10 Nov 2008 at 17:19, Michael Bayer wrote:
 Possibly.  The fragility here is that you are relying on a model
 that isn't actually implemented here, i.e. that your application is
 written around a table inheritance assumption when that is not
 actually the case - the extended tables may or may not have a row
 present for a corresponding base row.   It would likely be more
 straightforward to implement a non-inheriting system.

 OK.  Is there a way to do that and still have the (for example)
 ExtendedPublication object inherit from the Publication object, or do
 I need to switch to an object composition scheme?  In which case
 pre-populating the extended tables may be a lot easier :)


you should go with composition since it's simpler.  After that, if
there's something missing that inheritance provides we can take a look
at that.
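
A rough sketch of the composition approach (table, class and attribute names
are invented for illustration):

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey
from sqlalchemy.orm import mapper, relation

metadata = MetaData()

publication_table = Table('publication', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', String(200)),
)
local_publication_table = Table('local_publication', metadata,
    Column('publication_id', Integer, ForeignKey('publication.id'),
           primary_key=True),
    Column('notes', String(500)),
)

class Publication(object):
    pass

class LocalPublicationData(object):
    pass

mapper(Publication, publication_table, properties={
    # one-to-one link to the locally maintained row; this is simply None when
    # no supplemental row exists yet, instead of the whole query coming up empty
    'local': relation(LocalPublicationData, uselist=False,
                      backref='publication'),
})
mapper(LocalPublicationData, local_publication_table)

With that arrangement the import can keep dropping and replacing the base
table while the local rows survive, and pub.local stays None until
supplemental data is added.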




[sqlalchemy] Re: select where field=max(field)

2008-11-10 Thread John Hunter

On Mon, Nov 10, 2008 at 11:10 AM, Michael Bayer
[EMAIL PROTECTED] wrote:

 you need an extra tuple on the join, query.join((q1, s.s==q1.c.s))

This gets past the syntax error, but does not produce the right
results.  I had to take some time off today to work on other problems,
but am now returning to this query.  To better take advantage of all
of your generous time :-) I wrote a free-standing example that
populates a test database.  The initial query Simon suggested works
and produces the desired output -- the goal is to replicate this with
a sqlalchemy query.  I also include the join we were working on.  Now
that the syntax is correct, it runs, but gives the wrong output.

Since someone proposed a bowling example earlier I decided to run with
that since it fits my problem quite well: instead of finding the
number of symbols per strategy where the sum(pnl) > -15, we are
looking for the number of bowlers per league where the
sum(frames) > 200.  Example below.


Thanks for any additional input!

import datetime
import sqlalchemy as sa
import sqlalchemy.orm as orm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import func

Base = declarative_base()

class Frame(Base):
    __tablename__ = 'frame'

    league = sa.Column(sa.String(12), primary_key=True)
    bowler = sa.Column(sa.String(12), primary_key=True)
    frame = sa.Column(sa.Integer, primary_key=True)
    score = sa.Column(sa.Integer)

    def __init__(self, league, bowler, frame, score):
        self.league = league
        self.bowler = bowler
        self.frame = frame
        self.score = score

    def __repr__(self):
        return "Frame('%s', '%s', '%d', '%d')" % (self.league,
            self.bowler, self.frame, self.score)

def populate(session):
    'add some random bowling data to the dbase'
    import random
    for league in 'strikers', 'punters', 'plungers', 'scorers':
        for i in range(random.randint(3, 10)):
            bowler = chr(i + 65)
            for frame in range(1, 11):
                score = random.randint(0, 30)
                session.add(Frame(league, bowler, frame, score))

    session.commit()

if __name__ == '__main__':
    engine = sa.create_engine("sqlite:///test.db")
    Base.metadata.bind = engine

    Session = orm.sessionmaker()
    session = Session(bind=engine)
    Base.metadata.drop_all()
    Base.metadata.create_all()
    populate(session)

    # this is what we are trying to achieve
    query = """
    SELECT league, COUNT(*)
    FROM (SELECT frame.league AS league,
                 frame.bowler AS frame_bowler,
                 sum(frame.score) AS sum_1
          FROM frame
          GROUP BY frame.league, frame.bowler
          HAVING sum(frame.score) > 200) AS frames
    GROUP BY league
    """

    print 'desired', session.execute(query).fetchall()

    # this is what Simon suggested

    total_score = func.sum(Frame.score)
    q1 = (session.query(Frame.league, Frame.bowler,
                        total_score).group_by(Frame.league,
                        Frame.bowler).having(total_score > 200)).subquery()

    q2 = (session.query(Frame.league, func.count('*')).join((q1,
          Frame.league == q1.c.league)).group_by(Frame.league))

    print q2
    print q2.all()
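
One possible fix, offered here as a guess rather than something from the
thread's follow-ups: joining frame back onto the subquery on league alone
pairs every frame row with every qualifying bowler in that league, which
inflates the count.  Counting the subquery's own rows sidesteps that fan-out:

# continuing from q1 above: count the qualifying (league, bowler) rows of
# the subquery directly, grouped by league
q3 = session.query(q1.c.league, func.count('*')).group_by(q1.c.league)
print q3.all()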




[sqlalchemy] Re: AttributeError: 'NoneType' object has no attribute 'pop'

2008-11-10 Thread Michael Bayer

yeah this is an enhancement we made, whereby InstanceState removes  
circular references from itself when its host object is garbage  
collected, thereby taking the load off of gc (and it does).   So in  
this case, asynchronous gc is occurring right as InstanceState is  
doing expire_attributes on itself, and the dict is getting swiped.
We had this issue and did a None check, but that's obviously not  
enough since...well it's asynchronous.  So r5283 turns state.dict
into an attribute that will always spit out a dictionary if the real  
one was removed, and it's looking like rc4 will be sooner rather than
later as we are hitting a few glitches like this.
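
Purely as an illustrative sketch of that kind of defensive accessor (this is
not the actual r5283 change, just the general shape of the idea):

class StateSketch(object):
    def __init__(self):
        self._dict = {}

    def _teardown(self):
        # roughly what asynchronous gc cleanup does: the real dict goes away
        self._dict = None

    @property
    def dict(self):
        # always hand back something dict-like, even mid-teardown
        d = self._dict
        return d if d is not None else {}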


On Nov 10, 2008, at 8:28 PM, arashf wrote:


 Traceback (most recent call last):
 File /srv/server/metaserver/metaserver/lib/base.py, line 56, in
 __call__
 ret = WSGIController.__call__(self, environ, start_response)
 File /usr/lib/python2.5/site-packages/Pylons-0.9.6.2-py2.5.egg/ 
 pylons/
 controllers/core.py, line 195, in __call__
 after = self._inspect_call(self.__after__)
 wasn't getting this on the betas of sqlalchemy, but am getting it on
 rc3. any ideas?

 File /usr/lib/python2.5/site-packages/Pylons-0.9.6.2-py2.5.egg/ 
 pylons/
 controllers/core.py, line 79, in _inspect_call
 result = func(**args)
 File /srv/server/metaserver/metaserver/lib/base.py, line 96, in
 __after__
 metaserver.model.Session.commit()
 File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
 sqlalchemy/orm/scoping.py, line 121, in do
 return getattr(self.registry(), name)(*args, **kwargs)
 File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
 sqlalchemy/orm/session.py, line 670, in commit
 self.transaction.commit()
 File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
 sqlalchemy/orm/session.py, line 385, in commit
 self._remove_snapshot()
 File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
 sqlalchemy/orm/session.py, line 306, in _remove_snapshot
 _expire_state(s, None)
 File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
 sqlalchemy/orm/attributes.py, line 985, in expire_attributes
 self.dict.pop(key, None)
 AttributeError: 'NoneType' object has no attribute 'pop'
 





[sqlalchemy] Re: AttributeError: 'NoneType' object has no attribute 'pop'

2008-11-10 Thread arashf

gotcha, cool. was I first to run into this? :-)

On Nov 10, 5:57 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 yeah this is an enhancement we made, whereby InstanceState removes  
 circular references from itself when its host object is garbage  
 collected, thereby taking the load off of gc (and it does).   So in  
 this case, asynchronous gc is occurring right as InstanceState is  
 doing expire_attributes on itself, and the dict is getting swiped.    
 We had this issue and did a None check, but that's obviously not  
 enough since...well its asynchronous.  So r5283 turns state.dict  
 into an attribute that will always spit out a dictionary if the real  
 one was removed, and its looking like rc4 will be sooner rather than  
 later as we are hitting a few glitches like this.

 On Nov 10, 2008, at 8:28 PM, arashf wrote:



  Traceback (most recent call last):
  File /srv/server/metaserver/metaserver/lib/base.py, line 56, in
  __call__
  ret = WSGIController.__call__(self, environ, start_response)
  File /usr/lib/python2.5/site-packages/Pylons-0.9.6.2-py2.5.egg/
  pylons/
  controllers/core.py, line 195, in __call__
  after = self._inspect_call(self.__after__)
  wasn't getting this on the betas of sqlalchemy, but am getting it on
  rc3. any ideas?

  File /usr/lib/python2.5/site-packages/Pylons-0.9.6.2-py2.5.egg/
  pylons/
  controllers/core.py, line 79, in _inspect_call
  result = func(**args)
  File /srv/server/metaserver/metaserver/lib/base.py, line 96, in
  __after__
  metaserver.model.Session.commit()
  File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
  sqlalchemy/orm/scoping.py, line 121, in do
  return getattr(self.registry(), name)(*args, **kwargs)
  File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
  sqlalchemy/orm/session.py, line 670, in commit
  self.transaction.commit()
  File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
  sqlalchemy/orm/session.py, line 385, in commit
  self._remove_snapshot()
  File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
  sqlalchemy/orm/session.py, line 306, in _remove_snapshot
  _expire_state(s, None)
  File /usr/lib/python2.5/site-packages/SQLAlchemy-0.5.0rc3-py2.5.egg/
  sqlalchemy/orm/attributes.py, line 985, in expire_attributes
  self.dict.pop(key, None)
  AttributeError: 'NoneType' object has no attribute 'pop'



[sqlalchemy] Re: Postgres - Backup - Restore

2008-11-10 Thread Petr Kobalíček
Hi Chris,

yeah these tools works great, our problem was that if I did backup and
restoration from web interface then this problem happen.

I wasn't also familiar with postgres :)

Cheers
- Petr

2008/11/10 Chris Miles [EMAIL PROTECTED]:


 On Nov 10, 4:57 am, Petr Kobalíček [EMAIL PROTECTED] wrote:
 I have postgres related problem. I'm normally developing with sqlite,
 but we are using postgres on the server. The problem is that
 sqlalchemy probably remembers primary keys and after database restore
 it will start in all tables from 1.

 If you use the PostgreSQL tools pg_dump and pg_restore they should
 maintain the sequences properly for you when copying databases between
 servers.

 Cheers,
 Chris Miles


 

