On Feb 18, 5:28 pm, Michael Bayer wrote:
> On Feb 18, 2009, at 2:23 AM, Frank Millman wrote:
>
> > Hi all
[...]
> > My application supports PostgreSQL and MS-SQL. Both of these databases
> > have the concept of a 'scrollable cursor'. AFAICT the DB-API does not
> > support this concept, so I man
Thank you, that's great.
I looked at is_modified and it stops at the first modification. But
thinking about it, you are right, it would be more efficient to just
do the individual checks on all the dirty attributes.
On 18 Feb, 15:23, Michael Bayer wrote:
> get_history() is a public function withi
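A sketch of the per-attribute approach being described, assuming SQLAlchemy 0.5
(the changed_attributes name is made up for illustration): walk session.dirty
and ask get_history() about each mapped attribute, rather than stopping at the
first is_modified() hit.

from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.attributes import get_history, instance_state

def changed_attributes(session):
    # map each dirty instance to {attribute: (old values, new values)}
    changes = {}
    for obj in session.dirty:
        state = instance_state(obj)
        for prop in class_mapper(type(obj)).iterate_properties:
            hist = get_history(state, prop.key)
            if hist.added or hist.deleted:
                changes.setdefault(obj, {})[prop.key] = (hist.deleted, hist.added)
    return changes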
well the thing with eagerloading is that its using outer joins, and if
you are dealing with a joined table inheritance structure its
potentially going to outerjoin to subqueries which themselves contain
joins. Or if you're selecting a LIMIT/OFFSET result, SQLAlchemy will
apply the LIMI
When I eagerload an object's 3 nested collections, the SQLAlchemy
debug output is about 30 lines long. When I don't eagerload, the
debug output is about 10,700 lines long. So the eagerload is
definitely issuing fewer queries.
However, the eagerload strategy takes many times longer than the
lazyloa
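For context, a hedged sketch of the kind of query under discussion, with
invented names (Order and the 'items.keywords.synonyms' path): eagerload_all()
follows a dotted path of relations so the nested collections come back in a
single statement, at the cost of the outer-join and subquery shapes described
in the reply above.

from sqlalchemy.orm import eagerload_all

orders = (session.query(Order)
                 .options(eagerload_all('items.keywords.synonyms'))
                 .all())
# With limit()/offset() in play, SQLAlchemy wraps the limited SELECT in a
# subquery and OUTER JOINs the eager loads against it, which is where the
# extra per-statement cost comes from.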
On Feb 18, 2009, at 2:23 AM, Frank Millman wrote:
>
> Hi all
>
> I have not used SQLAlchemy before. I am comfortable with SQL and enjoy
> the feeling of being in control. However, I can see that SA does bring
> some major benefits, particularly in hiding the differences between
> the dialects of
Gunnlaugur Thor Briem wrote:
> Works for me (once I add metadata.create_all(bind=engine)) ... possibly
> you have an old SQLite that doesn't do the auto-incrementing primary key
> thing?
I forgot to call the metadata.create_all. Silly problem. Thanks.
Regards,
mk
get_history() is a public function within the attributes package at
the top level, and I also added API documentation for it recently (not
on the site yet).
from sqlalchemy.orm.attributes import get_history, instance_state
get_history(instance_state(myobject), "someattribute")
in the latest
On Feb 18, 2009, at 3:05 AM, Sam Magister wrote:
>
> Hi,
>
> I have a table with millions of rows that I want to iterate through
> without running out of memory, and without waiting a long time for all
> rows to be loaded.
>
> Looking in the documentation, it seems that .yield_per(count) does
>
Works for me (once I add metadata.create_all(bind=engine)) ... possibly you
have an old SQLite that doesn't do the auto-incrementing primary key thing?
- G.
On Wed, Feb 18, 2009 at 2:26 PM, Marcin Krol wrote:
>
> Hello,
>
> I just started learning sqlalchemy, my version is 0.5.2, I'm read
On 18.02.2009 15:26 Uhr, Marcin Krol wrote:
>
> This is pretty obvious, 'id' integer column has not been filled. But I
> have no idea how to remedy this, since this seems to be dependent on
> smth in the guts of sqlalchemy. Help!
>
Column('id',In
Hello,
I just started learning sqlalchemy, my version is 0.5.2, I'm reading the
tutorial (Object Relational Tutorial) and produced this code:
from sqlalchemy import create_engine, Table, Column, Integer, String,
MetaData, ForeignKey
from sqlalchemy.orm import mapper, sessionmaker
engine = cr
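To tie this thread together, a hedged reconstruction of a working version of
the tutorial-style setup (the users table and its columns are assumptions),
including the Integer primary key discussed above and the metadata.create_all()
call that turned out to be missing:

from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData
from sqlalchemy.orm import mapper, sessionmaker

engine = create_engine('sqlite:///:memory:')
metadata = MetaData()

users_table = Table('users', metadata,
    Column('id', Integer, primary_key=True),   # SQLite autoincrements this
    Column('name', String(50)),
)

class User(object):
    def __init__(self, name):
        self.name = name

mapper(User, users_table)
metadata.create_all(bind=engine)   # the step that was missing

Session = sessionmaker(bind=engine)
session = Session()
session.add(User('ed'))
session.commit()
assert session.query(User).first().id is not None   # 'id' is now filled in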
Hi all
I have not used SQLAlchemy before. I am comfortable with SQL and enjoy
the feeling of being in control. However, I can see that SA does bring
some major benefits, particularly in hiding the differences between
the dialects of various databases.
Before making a decision about switching to
Hello
I intend to log any changes (not new rows) to a table.
The simplest way I can see to do this is to check every object that is
"add"ed for changes. I intend to, before every flush, look in "dirty"
then use is_modified and if there is a change use get_history to find
out the original and ne
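A rough sketch of that plan, assuming SQLAlchemy 0.5's SessionExtension hook;
record_change() is a hypothetical stand-in for whatever actually writes the
log entry.

from sqlalchemy.orm import sessionmaker
from sqlalchemy.orm.interfaces import SessionExtension

class AuditExtension(SessionExtension):
    def before_flush(self, session, flush_context, instances):
        # session.dirty holds persistent objects with pending changes;
        # newly "add"ed rows live in session.new and are ignored here
        for obj in session.dirty:
            if session.is_modified(obj, include_collections=False):
                # get_history() can then recover the original and new
                # values attribute by attribute
                record_change(session, obj)   # hypothetical audit writer

Session = sessionmaker(extension=AuditExtension())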
Michael Bayer wrote:
>
>> Which docs do I need to understand what all this means and where do I
>> find them?
>
> that would be here:
> http://www.sqlalchemy.org/docs/05/mappers.html?highlight=large%20collections#working-with-large-collections
Cool, so my model now looks like:
Base = declara
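For reference, a sketch of the "large collections" pattern that documentation
section describes, with invented Parent/Child names: a lazy='dynamic' relation
hands back a Query instead of loading the whole collection into memory.

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    # 'dynamic' collections are queried, filtered and sliced on demand
    children = relation("Child", lazy='dynamic')

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))

# some_parent.children is a Query, so e.g.
#     some_parent.children.filter(Child.id > 100).count()
# runs in the database instead of loading every child.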
Hi,
I have a table with millions of rows that I want to iterate through
without running out of memory, and without waiting a long time for all
rows to be loaded.
Looking in the documentation, it seems that .yield_per(count) does
what I want (I've read the warnings, and I'm not doing anything wit
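As a point of comparison, a minimal sketch of the yield_per() usage in
question; BigTable, process() and the batch size of 1000 are assumptions, not
anything from the original post.

for row in session.query(BigTable).yield_per(1000):
    # rows are fetched from the cursor in batches of 1000, so the full
    # result set is never materialized in memory at once
    process(row)   # process() is a hypothetical per-row handler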