[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-09 Thread Cito

On 8 Nov., 22:03, Michael Bayer [EMAIL PROTECTED] wrote:
> oh sorry, also count() is meant to count instances of a single kind of
> object.  So in fact you should be saying:
>
> session.query(UserRss).join(Rss, item).count()

This question actually comes from the TurboGears group. The
problem here is that our pagination mechanism takes an existing query
and checks its result size with count(). I.e. we have no influence on
the actual query; we just assume that if you can get all() or slices
from the query, you can also count() its results. This worked fine
up through 0.4.8, but with 0.5 it doesn't work any more.
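For reference, the contract our paginator relies on can be sketched
generically. This is an illustrative stand-in (the class and names are
hypothetical, not actual TurboGears code), assuming only that the query
object supports slicing for the page body and count() for the total size:

```python
class Page(object):
    """One page of results from an arbitrary query-like object.

    The paginator never rewrites the query: it only slices it for the
    page contents and calls count() for the total size.
    """
    def __init__(self, query, page, per_page):
        self.items = list(query[(page - 1) * per_page : page * per_page])
        self.total = query.count()
        # number of pages, rounding up
        self.pages = (self.total + per_page - 1) // per_page


class ListQuery(object):
    """Minimal stand-in implementing the same slice/count contract."""
    def __init__(self, data):
        self._data = data

    def __getitem__(self, s):
        return self._data[s]

    def count(self):
        return len(self._data)


page = Page(ListQuery(list(range(25))), page=2, per_page=10)
```

Any object satisfying that contract paginates, which is why a count()
that no longer works on an arbitrary query breaks the mechanism.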

In this example Greg probably wanted to display data from both UserRss
*and* Rss in a data grid, so it would not help him to alter the query
that way.

-- Christoph
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-09 Thread Michael Bayer


On Nov 9, 2008, at 4:46 AM, Cito wrote:


> On 8 Nov., 22:03, Michael Bayer [EMAIL PROTECTED] wrote:
>> oh sorry, also count() is meant to count instances of a single kind
>> of object.  So in fact you should be saying:
>>
>> session.query(UserRss).join(Rss, item).count()
>
> This question actually comes from the TurboGears group. The
> problem here is that our pagination mechanism takes an existing query
> and checks its result size with count(). I.e. we have no influence on
> the actual query; we just assume that if you can get all() or slices
> from the query, you can also count() its results. This worked fine
> up through 0.4.8, but with 0.5 it doesn't work any more.


Assuming your joins are many-to-ones, you can get a straight row count
using query(*anything).value(func.count('*')), optionally adding
distinct() before you call value().  This is a better strategy than
using count() in any case, since Query.count() still has a behavioral
contract I'm not totally happy with, and you get better control
over what it is you actually want to count.
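As a sketch of that suggestion (written against the 0.5-era Query API;
UserRss and the join path are taken from the example earlier in this
thread, and a live Session is assumed, so treat this as illustrative
rather than copy-paste code):

```python
from sqlalchemy import func

# Hypothetical sketch of the value(func.count('*')) pattern described
# above, using the query from this thread's example.
q = session.query(UserRss).join(Rss, item)

# Straight row count of the joined result:
total = q.value(func.count('*'))

# If the joins could multiply rows, count distinct rows instead by
# applying distinct() before calling value():
total = q.distinct().value(func.count('*'))
```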





[sqlalchemy] Re: problem with join, count on 0.5.0rc3

2008-11-09 Thread Michael Bayer


On Nov 9, 2008, at 9:39 AM, Michael Bayer wrote:



> On Nov 9, 2008, at 4:46 AM, Cito wrote:
>
>> On 8 Nov., 22:03, Michael Bayer [EMAIL PROTECTED] wrote:
>>> oh sorry, also count() is meant to count instances of a single kind
>>> of object.  So in fact you should be saying:
>>>
>>> session.query(UserRss).join(Rss, item).count()
>>
>> This question actually comes from the TurboGears group. The
>> problem here is that our pagination mechanism takes an existing query
>> and checks its result size with count(). I.e. we have no influence on
>> the actual query; we just assume that if you can get all() or slices
>> from the query, you can also count() its results. This worked fine
>> up through 0.4.8, but with 0.5 it doesn't work any more.


In the latest trunk (r5269) I have overhauled query.count() to properly
support multiple-entity and column-oriented ORM queries.  Your use
case will now pass, but be aware that it now takes into account the
full list of entities queried.  In 0.4, this behavior was broken, as it
would only use the first entity and silently ignore the rest.  This
means that a query such as query(A, B).count() with no JOIN or joining
criterion will return the count of the cartesian product of A*B.  A
query such as query(func.count(A.somecol)).count() will return a value
of one, since an aggregate function returns one row.  None of this was
supported at all in 0.4, and it was previously unsupported in 0.5 as well.
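To illustrate the new contract, here is a hedged sketch against the 0.5
trunk API (A, B, and somecol are the placeholder names from above, and a
configured Session is assumed):

```python
from sqlalchemy import func

# No join criterion: counts the cartesian product, count(A) * count(B).
session.query(A, B).count()

# With an explicit join, the count reflects the joined rows instead.
session.query(A, B).join(B).count()

# An aggregate function returns a single row, so counting it yields 1.
session.query(func.count(A.somecol)).count()
```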




[sqlalchemy] Re: Postgres - Backup - Restore

2008-11-09 Thread Petr Kobalíček
Thank you Michael.

2008/11/9 Michael Bayer [EMAIL PROTECTED]:

> you need to call ALTER SEQUENCE on your sequences such that they begin
> with an integer identifier greater than the highest one currently in
> the table they are being used with.
>
> On Nov 9, 2008, at 1:22 PM, Petr Kobalíček wrote:
>
>> I have found some material about this; it is called "sequences"
>> in Postgres terminology.
>>
>> So I know the problem, but I don't know how to synchronize sequences
>> using sqlalchemy.
>>
>> Cheers
>> - Petr
>>
>> 2008/11/9 Petr Kobalíček [EMAIL PROTECTED]:
>>> Hi devs,
>>>
>>> I have a Postgres-related problem. I'm normally developing with
>>> sqlite, but we are using Postgres on the server. The problem is that
>>> sqlalchemy apparently remembers primary keys, and after a database
>>> restore it will start from 1 in all tables.
>>>
>>> The error is "(IntegrityError) duplicate key violates unique
>>> constraint".
>>>
>>> I'm defining tables the way the documentation shows; we want to
>>> support more databases, so we are not using DB-specific extensions:
>>>
>>> # Example
>>> OrderTable = sql.Table(
>>>     "Order", meta.metadata,
>>>     # Relation
>>>     sql.Column("orderId", sql.Integer, primary_key=True),
>>>     # If userId == None, the order is anonymous
>>>     sql.Column("userId", sql.Integer,
>>>                sql.ForeignKey("User.userId"), nullable=True),
>>>
>>>     ...
>>>
>>> I don't know if this problem was discussed before, but I didn't find
>>> any information about it.
>>>
>>> Cheers and thanks for replies :)
>>> - Petr




[sqlalchemy] Re: schema inspection api

2008-11-09 Thread Michael Bayer

You're on the right track.  The reflection methods are always called  
with a Connection that is not shared among any other thread  
(connections aren't considered to be threadsafe in any case) so  
threadsafety is not a concern.

I think you should look at the mysql dialect sooner rather than later,
as it's going to present a challenge to the overall notion here.  It
has a much more elaborate design already, and you're going to want to
cache the results of SHOW CREATE TABLE somewhere; that would be
better *not* cached on the Connection itself, since it is something
which can change over the lifespan of a single connection.  I think
the PG _get_table_oid method needs similar treatment, i.e. it should
be called only once per reflection operation.

We have a nice decorator called @connection_memoize() that can be used
in some cases, but for per-table information I think caching info on
the Connection is too long-lived.  I had the notion that the
Inspector() object itself might be passed to the more granular
reflection methods, where each method could store whatever state it
needed into a dictionary like inspector.info_cache - or perhaps we
just pass a dictionary to each method that isn't necessarily tied to
anything.  This is because each dialect has a very different way of
piecing together the various elements of Table and Column objects,
we'd like to issue as little SQL as possible, and we are looking to
build a fine-grained interface.  The methods can therefore do whatever
queries they need, store any kind of structure per table in the cache,
and then retrieve the information when the new Dialect methods are
called.  This means the calls for foreign_keys, primary_keys, and
columns when using the mysql dialect would issue one SHOW CREATE
TABLE, cache all the data, and return what's requested for each call.

With the caching approach, multiple queries with the same  
reflection.Inspector object can be made to work nearly as efficiently  
as the coarse-grained reflecttable() methods do now.
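The per-reflection cache idea can be sketched in plain Python. Names
like info_cache and get_table_oid mirror the discussion above but are
illustrative, not the actual SQLAlchemy code:

```python
def cache_on_inspector(fn):
    """Memoize a granular reflection method in the Inspector's own
    info_cache dict, so cached state (a SHOW CREATE TABLE result, a PG
    table OID) lives only as long as one reflection operation, never
    as long as the Connection."""
    def wrapper(self, table_name):
        key = (fn.__name__, table_name)
        if key not in self.info_cache:
            self.info_cache[key] = fn(self, table_name)
        return self.info_cache[key]
    return wrapper


class Inspector(object):
    def __init__(self):
        self.info_cache = {}     # discarded with the Inspector itself
        self.queries_issued = 0  # stands in for real SQL round trips

    @cache_on_inspector
    def get_table_oid(self, table_name):
        self.queries_issued += 1  # a real dialect would query here
        return abs(hash(table_name)) % 100000


insp = Inspector()
oid1 = insp.get_table_oid("users")
oid2 = insp.get_table_oid("users")  # served from info_cache; no new query
```

With this shape, repeated calls for columns, primary keys, and foreign
keys of the same table can share one expensive query per reflection
operation, approaching the efficiency of the coarse-grained methods.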


On Nov 8, 2008, at 6:31 PM, Randall Smith wrote:

> Michael Bayer wrote:
>
>> The structure of the API would drive the current reflection API to
>> become more componentized.  What we see as a need on the public
>> reflection API side would drive the currently monolithic reflection
>> methods to be changed.  The current reflecttable() methods in turn
>> would ideally move *out* of the dialects, and the Table(autoload=True)
>> functionality would simply call up an Inspector and use that.  So the
>> ultimate outcome is that the individual reflecttable methods go
>> away, and dialects implement a larger set of simpler methods.


> I started by factoring out components for reflecttable.  So now there
> are methods get_columns, get_primary_keys, get_foreign_keys,
> get_indexes and get_views.  get_columns, get_primary_keys, and
> get_foreign_keys are the methods used by reflecttable (and the ones I
> actually implemented).
>
> There is one thing I did that will slightly degrade performance.  I
> created a method in postgres.py, __get_table_oid, that several other
> methods use.  This value could be cached at the beginning of
> reflecttable, but I didn't want to address threading issues.
>
> I made a local branch called reflection and have attached the files
> and diffs for engine/base.py and databases/postgres.py at revision
> 5262.  I did a few real-world tests, but not the automated tests.  I
> don't know how difficult they are to set up.  That'll be the next
> thing I do.  Mainly I just want to get feedback on the API and
> whether I'm going about the implementation right.
>
> --Randall
>
> [attachments: base.py.diff.gz, base.py.gz, postgres.py.diff.gz,
> postgres.py.gz]





[sqlalchemy] Postgres - Backup - Restore

2008-11-09 Thread Petr Kobalíček

Hi devs,

I have a Postgres-related problem. I'm normally developing with sqlite,
but we are using Postgres on the server. The problem is that
sqlalchemy apparently remembers primary keys, and after a database
restore it will start from 1 in all tables.

The error is "(IntegrityError) duplicate key violates unique constraint".

I'm defining tables the way the documentation shows; we want to support
more databases, so we are not using DB-specific extensions:

# Example
OrderTable = sql.Table(
    "Order", meta.metadata,
    # Relation
    sql.Column("orderId", sql.Integer, primary_key=True),
    # If userId == None, the order is anonymous
    sql.Column("userId", sql.Integer,
               sql.ForeignKey("User.userId"), nullable=True),

    ...

I don't know if this problem was discussed before, but I didn't find
any information about it.

Cheers and thanks for replies :)
- Petr




[sqlalchemy] Python 2.6 includes sqlite3 2.4.1

2008-11-09 Thread Eric Ongerth

To whom it may concern:

I noticed the following thread in this group (the only thing that came
up when I searched for "sqlite3 python 2.6"):

http://groups.google.com/group/sqlalchemy/browse_thread/thread/d6d691b53e93b5e5/78a57bae1aefd59d

And then I found the following on the page for What's New in Python
2.6 (released last month):

[quote]
The sqlite3 module, maintained by Gerhard Haering, has been updated
from version 2.3.2 in Python 2.5 to version 2.4.1.
[/quote]

... just FYI.



[sqlalchemy] Re: Postgres - Backup - Restore

2008-11-09 Thread Michael Bayer

you need to call ALTER SEQUENCE on your sequences such that they begin
with an integer identifier greater than the highest one currently in
the table they are being used with.
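As a hedged illustration: the sequence name below is a guess based on
PostgreSQL's default serial naming for the example schema in this thread
(check the real name with \d "Order" in psql), and the engine URL is a
placeholder. setval() has the same effect as ALTER SEQUENCE here and
lets the new start value be computed from the table itself:

```python
from sqlalchemy import create_engine

# placeholder connection URL -- substitute your own
engine = create_engine("postgres://user:password@localhost/mydb")

# Advance the serial sequence past the current maximum key after a
# restore; the quoted identifiers match the case-sensitive example
# schema ("Order", "orderId") from this thread.
engine.execute(
    'SELECT setval(\'"Order_orderId_seq"\','
    ' (SELECT max("orderId") FROM "Order"))'
)
```

Running one such statement per table after each restore keeps the
sequences in step with the restored data.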


On Nov 9, 2008, at 1:22 PM, Petr Kobalíček wrote:

> I have found some material about this; it is called "sequences"
> in Postgres terminology.
>
> So I know the problem, but I don't know how to synchronize sequences
> using sqlalchemy.
>
> Cheers
> - Petr
>
> 2008/11/9 Petr Kobalíček [EMAIL PROTECTED]:
>> Hi devs,
>>
>> I have a Postgres-related problem. I'm normally developing with
>> sqlite, but we are using Postgres on the server. The problem is that
>> sqlalchemy apparently remembers primary keys, and after a database
>> restore it will start from 1 in all tables.
>>
>> The error is "(IntegrityError) duplicate key violates unique
>> constraint".
>>
>> I'm defining tables the way the documentation shows; we want to
>> support more databases, so we are not using DB-specific extensions:
>>
>> # Example
>> OrderTable = sql.Table(
>>     "Order", meta.metadata,
>>     # Relation
>>     sql.Column("orderId", sql.Integer, primary_key=True),
>>     # If userId == None, the order is anonymous
>>     sql.Column("userId", sql.Integer,
>>                sql.ForeignKey("User.userId"), nullable=True),
>>
>>     ...
>>
>> I don't know if this problem was discussed before, but I didn't find
>> any information about it.
>>
>> Cheers and thanks for replies :)
>> - Petr




[sqlalchemy] Re: Postgres - Backup - Restore

2008-11-09 Thread Petr Kobalíček
I have found some material about this; it is called "sequences"
in Postgres terminology.

So I know the problem, but I don't know how to synchronize sequences
using sqlalchemy.

Cheers
- Petr

2008/11/9 Petr Kobalíček [EMAIL PROTECTED]:
> Hi devs,
>
> I have a Postgres-related problem. I'm normally developing with
> sqlite, but we are using Postgres on the server. The problem is that
> sqlalchemy apparently remembers primary keys, and after a database
> restore it will start from 1 in all tables.
>
> The error is "(IntegrityError) duplicate key violates unique
> constraint".
>
> I'm defining tables the way the documentation shows; we want to
> support more databases, so we are not using DB-specific extensions:
>
> # Example
> OrderTable = sql.Table(
>     "Order", meta.metadata,
>     # Relation
>     sql.Column("orderId", sql.Integer, primary_key=True),
>     # If userId == None, the order is anonymous
>     sql.Column("userId", sql.Integer,
>                sql.ForeignKey("User.userId"), nullable=True),
>
>     ...
>
> I don't know if this problem was discussed before, but I didn't find
> any information about it.
>
> Cheers and thanks for replies :)
> - Petr

