Hi All,
Can anybody help me with how SQLAlchemy can be used to identify changes in a
database?
I mean, how can I tell that some rows were deleted or added after a
particular time?
Thanks,
Balaji
--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
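The thread doesn't include an answer here, but one common approach (a sketch, not the list's recommendation) is SQLAlchemy's mapper-level events, which fire when rows are inserted or deleted. The model and names below are invented:

```python
# Sketch: record inserts/deletes with "after_insert"/"after_delete" events.
from datetime import datetime, timezone

from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# in-memory change log: (operation, primary key, timestamp)
changes = []

@event.listens_for(Item, "after_insert")
def _log_insert(mapper, connection, target):
    changes.append(("insert", target.id, datetime.now(timezone.utc)))

@event.listens_for(Item, "after_delete")
def _log_delete(mapper, connection, target):
    changes.append(("delete", target.id, datetime.now(timezone.utc)))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    obj = Item(name="widget")
    session.add(obj)
    session.commit()
    session.delete(obj)
    session.commit()

print([op for op, pk, ts in changes])  # ['insert', 'delete']
```

With the timestamps kept in a real audit table instead of a list, "what changed after time T" becomes a plain query against that table. Note this only sees changes made through this Session, not ones made directly in the database.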
On 7/15/15 4:40 PM, Юрий Пайков wrote:
Ok, that is clear now.
Eager loading of tables that occur more than once in a query is a bit
confusing for me, as it is not well documented;
for example, *contains_eager()* needs *alias=* in order
to work properly for a second occurrence of a table.
Here are your two rows:
Row (None, 24769797950537732L, datetime.datetime(2015, 7, 15, 17, 49,
57, 410290, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-180,
name=None)), datetime.datetime(2015, 7, 15, 17, 49, 57, 410305,
tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-180, name=None)), 0L,
24
Ok, that is clear now.
Eager loading of tables that occur more than once in a query is a bit
confusing for me, as it is not well documented;
for example, *contains_eager()* needs *alias=* in order to
work properly for a second occurrence of a table. If I may, I would advise
you to shed some
the interesting thing is: if i take the sql produced by count() and put it
into psql ...
SELECT count(*) AS count_1
FROM system_unit, (
SELECT system_unit.fk_updated_by AS system_unit_fk_updated_by,
system_unit.fk_created_by AS system_unit_fk_created_by,
system_unit.dt_created_on AS system_
right! sorry, now here we go (again):
(Pdb) import logging
(Pdb) logging.basicConfig()
(Pdb) logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
(Pdb) session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).count()
2015-07-15 16:56:44,565 INFO sqlalch
oh, the pk "24769797950537768" is a postgres biginteger.
On 07/15/2015 04:46 PM, Richard Gerd Kuesters wrote:
thanks Mike!
here we go:
(Pdb) session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).count()
2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine SELECT
count(*) AS count_1
FROM system_unit, (SELECT system_unit.fk_updated_by AS
system_unit_fk_updated_by,
does this happen even with a filter for a PK?
the problem is:
>>> session.query(Entity).filter(Entity.id_ ==
24769797950537768).count() == 2
>>> len(session.query(Entity).filter(Entity.id_ ==
24769797950537768).all()) == 1
i don't see where i have 2 pks with the same value ... in psql:
mydb
oh, forgot to mention:
* this occurs even with a filter that's supposed to return only one record
(at the database level it works);
* at the database level, a count *without* the where clause gives the
result i mentioned earlier.
thanks,
richard.
On 07/15/2015 03:11 PM, Richard Gerd Kues
hello!
i'm encountering a weird behavior with session.count() when using a
custom mapper that applies a where condition to every query.
first, what is happening:
>>> len(session.query(Entity).all()) == 1
>>> session.query(Entity).count() == 2
"Entity" is a base polymorphic entity, inher
On 7/15/15 2:42 AM, Юрий Пайков wrote:
I have an example here:
https://gist.github.com/ojomio/aa5eca3bea03d21e00e8. This code issues
exactly one query and loads everything at once.
What I am asking about is
line https://gist.github.com/ojomio/aa5eca3bea03d21e00e8#file-gistfile1-py-L65
If I
I thought maybe there was a simpler way to do that, but the
hybrid_property works. Thanks.
On Wed, Jul 8, 2015 at 11:19 AM, Mike Bayer wrote:
>
>
> On 7/8/15 12:15 AM, Pedro Werneck wrote:
>
>
> Let's say I have a table 'user', and for backwards compatibility reasons I
> have a single-column tabl
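The hybrid_property approach mentioned above, sketched with invented column names (not Pedro's actual schema): the same attribute works in Python on an instance and renders as a SQL expression in filters.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    first = Column(String)
    last = Column(String)

    @hybrid_property
    def full_name(self):
        # on an instance: plain string concatenation;
        # on the class: a SQL concatenation expression
        return self.first + " " + self.last

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(User(first="Ada", last="Lovelace"))
    s.commit()
    found = s.query(User).filter(User.full_name == "Ada Lovelace").one()
    print(found.full_name)  # Ada Lovelace
```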
oh, yes, i was thinking about cascading polymorphic_on, like you mentioned.
but, no problem, i'll try to work around my problem with a simpler
approach (the old soft-delete dilemma) ...
thanks for your help, Mike!
best regards,
richard.
On 07/15/2015 10:46 AM, Mike Bayer wrote:
On 7/
On 7/15/15 9:09 AM, Richard Gerd Kuesters wrote:
hi all,
i was wondering if there's a way to create more than one level of
polymorphic entities in sa. quick example:
class Foo(Base):
...
__mapper_args__ = { ... }
class Bar(Foo):
...
__mapper_args__ =
Ok, that approach isn't cool. I got another one from flask_jsontools and
it does what I need!
json.dumps(q, cls=DynamicJSONEncoder)
Base = declarative_base(cls=JsonSerializableBase)
import decimal
from datetime import datetime, date
from json import JSONEncoder
from sqlalchemy import inspe
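The imports above suggest an encoder along these lines; a sketch (the class name is invented, and this is not the flask_jsontools implementation) of a JSONEncoder subclass that handles the types the stdlib encoder rejects:

```python
import decimal
from datetime import date, datetime
from json import JSONEncoder, dumps

class ApiJSONEncoder(JSONEncoder):
    """Serialize Decimal and datetime/date, which json.dumps rejects."""
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            return float(o)
        if isinstance(o, (datetime, date)):
            return o.isoformat()
        return super().default(o)  # raise TypeError for anything else

payload = dumps({"price": decimal.Decimal("9.95"), "day": date(2015, 7, 15)},
                cls=ApiJSONEncoder)
print(payload)  # {"price": 9.95, "day": "2015-07-15"}
```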
hi all,
i was wondering if there's a way to create more than one level of
polymorphic entities in sa. quick example:
class Foo(Base):
...
__mapper_args__ = { ... }
class Bar(Foo):
...
__mapper_args__ = { ??? } # <--- polymorphic_identity for ... two?
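For what it's worth, multi-level inheritance does work by giving each level its own polymorphic_identity; a minimal single-table sketch (names invented, not the original poster's models):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    kind = Column(String)
    __mapper_args__ = {"polymorphic_on": kind, "polymorphic_identity": "foo"}

class Bar(Foo):
    __mapper_args__ = {"polymorphic_identity": "bar"}

class Baz(Bar):
    # second level: just another identity, no extra configuration needed
    __mapper_args__ = {"polymorphic_identity": "baz"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all([Foo(), Bar(), Baz()])
    s.commit()
    # querying the base class loads every level with its correct subclass
    names = sorted(type(o).__name__ for o in s.query(Foo))
    print(names)  # ['Bar', 'Baz', 'Foo']
```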