[sqlalchemy] If the path in joinedload() includes a self-referential relationship, should I alias() every table after that relation?

2015-07-15 Thread Юрий Пайков
I have an example here https://gist.github.com/ojomio/aa5eca3bea03d21e00e8. 
This code issues exactly one query and loads everything at one time.
What I am asking about is this 
line https://gist.github.com/ojomio/aa5eca3bea03d21e00e8#file-gistfile1-py-L65

If I do not use the alias *second_B* and simply write 

).join(
AWithChildren.bs
).options(
contains_eager(B.as_).
contains_eager(A.children, alias=AWithChildren).
contains_eager(AWithChildren.bs).
joinedload(B.cs)
)


Then SQLAlchemy issues another query on the *C* table, apparently not matching 
the expressions AWithChildren.bs and B.
So my question is: if there are many other tables after *A.children*, 
should I use alias() for every one and mention them like
.joinedload(
PreviousTable.relation,
alias=SomeTableAlias
).
?



Re: [sqlalchemy] One-to-one relationship as a boolean flag

2015-07-15 Thread Pedro Werneck
I thought maybe there was a simpler way to do that, but the
hybrid_property works. Thanks.
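
For reference, a minimal sketch of that hybrid_property approach (the attribute names and the exists()-based expression are illustrative assumptions, not something stated in the thread; it builds on the User/UserActive models quoted below):

from sqlalchemy import exists
from sqlalchemy.ext.hybrid import hybrid_property

class User(db.Model):
    __tablename__ = 'user'
    user_id = db.Column(db.Integer, primary_key=True)

    # the real relationship, renamed so the hybrid can own the public name
    _active = db.relationship('UserActive', backref='user', uselist=False)

    @hybrid_property
    def active(self):
        # instance level: the presence of a UserActive row means "active"
        return self._active is not None

    @active.setter
    def active(self, value):
        # assigning True creates the flag row, assigning False drops it
        self._active = UserActive() if value else None

    @active.expression
    def active(cls):
        # query level: a correlated EXISTS against user_active; a join-based
        # variant would instead filter after an explicit .join(User._active)
        return exists().where(UserActive.user_id == cls.user_id)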

On Wed, Jul 8, 2015 at 11:19 AM, Mike Bayer mike...@zzzcomputing.com wrote:


 On 7/8/15 12:15 AM, Pedro Werneck wrote:


 Let's say I have a table 'user', and for backwards compatibility reasons I
 have a single-column table named 'user_active' which is basically just a
 foreign key used as a boolean flag. I need my User model to have the
 'active' field as a boolean mapped to that one-to-one relationship. So, I
 have something like this:


 class User(db.Model):
 __tablename__ = 'user'
 user_id = db.Column(db.Integer, primary_key=True)
 # ... other fields

 active = relationship('UserActive', backref='user', uselist=False)


 class UserActive(db.Model):
 __tablename__ = 'user_active'
 user_id = db.Column(db.Integer, db.ForeignKey('user.user_id'))


 Using the simple relationship like that returns either the UserActive
 instance, or None, which isn't ideal as True or False but works fine in any
 boolean context. My problem is assignment and querying. How can I get it
 to work in a way that setting User.active = True creates the new UserActive
 instance, and querying for User.active == True works as a join, not a
 subquery?

 why not make a @hybrid_property called active, call the relationship
 _active, and then customize access completely?








-- 
---
Pedro Werneck



Re: [sqlalchemy] If the path in joinedload() includes a self-referential relationship, should I alias() every table after that relation?

2015-07-15 Thread Mike Bayer



On 7/15/15 2:42 AM, Юрий Пайков wrote:
I have an example here 
https://gist.github.com/ojomio/aa5eca3bea03d21e00e8. This code issue 
exactly one query and load everything at one time
What I am asking about is 
line https://gist.github.com/ojomio/aa5eca3bea03d21e00e8#file-gistfile1-py-L65


If I don not use alias *second_B*and simply write
|
).join(
AWithChildren.bs
).options(
contains_eager(B.as_).
contains_eager(A.children,alias=AWithChildren).
contains_eager(AWithChildren.bs).
joinedload(B.cs)
)
|

Then  SQLAlchemy issue another query on *C* table, apparently not 
matching expression AWithChildren.bs and B
So my question is - if there are many other tables after *A.children 
-*should use alias() for every one and mention them like

|
.joinedload(PreviousTable.relation,alias=SomeTableAlias).
|
?


The use here of .joinedload() on the end of a series of contains_eager() 
calls is already very unusual, and I'm somewhat surprised it works 
correctly in the first case, as this is not a use case that's really 
tested.   As for the case of second_B not being present: this is not 
surprising, as AWithChildren.bs refers to B, which is already present 
in the query as the primary entity; for a separate collection of B to be 
present, it needs to be aliased.   The join(AWithChildren.bs) is the same 
as join(B, AWithChildren.bs), and you'd never want to say what is 
essentially session.query(B).join(Q).join(B) - you need an alias for 
each subsequent occurrence of B.
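
For concreteness, a hedged sketch of the aliased form being described (the entity and relationship names come from the poster's gist and may not match it exactly; per the note above, the trailing joinedload() is not a well-tested pattern):

from sqlalchemy.orm import aliased, contains_eager, joinedload

AWithChildren = aliased(A)   # second occurrence of A via the self-referential A.children
second_B = aliased(B)        # second occurrence of B, reached through AWithChildren

query = (
    session.query(B)
    .join(B.as_)
    .join(AWithChildren, A.children)
    .join(second_B, AWithChildren.bs)
    .options(
        contains_eager(B.as_).
        contains_eager(A.children, alias=AWithChildren).
        contains_eager(AWithChildren.bs, alias=second_B).
        joinedload(B.cs)
    )
)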










[sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters

hello!

i'm encountering a weird behavior with query.count() when using a 
custom mapper option that applies a where condition to every query.


first, what is happening:

>>> len(session.query(Entity).all()) == 1
>>> session.query(Entity).count() == 2

Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:
query._criterion = filter_crit
else:
query._criterion = query._criterion & filter_crit
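
As a hedged usage sketch (not part of the recipe itself): a MapperOption subclass like this is handed to Query.options(), which invokes its process_query() hook, e.g.

    q = session.query(Entity).options(DeletedOption())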


i remember getting this custom mapper option from a recipe ... i'm running 
sqla 1.0.6. if there's any information i can provide to help clear this 
up, please let me know.


best regards,
richard.


Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters

oh, forgot to mention:

* this occurs even with a filter that's supposed to return only one 
record (at the database level it works);
* at the database level, a count *without* the where clause returns the 
result i mentioned earlier.



thanks,
richard.


On 07/15/2015 03:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using a 
custom mapper that implements a where condition to every session.


first, what is happening:

>>> len(session.query(Entity).all()) == 1
>>> session.query(Entity).count() == 2

Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:
query._criterion = filter_crit
else:
query._criterion = query._criterion & filter_crit


i remember getting this custom mapper option in a recipe ... i'm 
running sqla 1.0.6. if there's any information i can provide to 
enlighten this out, please let me know.


best regards,
richard.



Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters

does this happen even with a filter for a PK?

the problem is:

>>> session.query(Entity).filter(Entity.id_ == 
24769797950537768).count() == 2
>>> len(session.query(Entity).filter(Entity.id_ == 
24769797950537768).all()) == 1


i don't see where i have 2 pks with the same value ... in psql:

mydb=# select count(*) from entity where id_ = 24769797950537768;
 count
---
 1
(1 row)

is this really right?


On 07/15/2015 04:02 PM, Mike Bayer wrote:



On 7/15/15 2:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using a 
custom mapper that implements a where condition to every session.


first, what is happening:

 len(session.query(Entity).all()) == 1
 session.query(Entity).count() == 2


your all() query is returning two rows that have the same primary key 
for Entity.The Query object de-duplicates redundant rows based on 
entity identity.  Turn on echo='debug' and you should see this in the 
rows coming back.


we really need an FAQ entry for this one.




Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:
query._criterion = filter_crit
else:
query._criterion = query._criterion & filter_crit


i remember getting this custom mapper option in a recipe ... i'm 
running sqla 1.0.6. if there's any information i can provide to 
enlighten this out, please let me know.


best regards,
richard.





Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters

thanks Mike!

here we go:

   (Pdb) session.query(MachineUnit).filter(MachineUnit.id_ ==
   24769797950537768).count()
   2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine SELECT
   count(*) AS count_1
   FROM system_unit, (SELECT system_unit.fk_updated_by AS
   system_unit_fk_updated_by, system_unit.fk_created_by AS
   system_unit_fk_created_by, system_unit.dt_created_on AS
   system_unit_dt_created_on, system_unit.dt_updated_on AS
   system_unit_updated_on, system_unit.bi_rm_timestamp AS
   system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   system_unit_fk_organization_id, system_unit.u_system_unit_name AS
   system_unit_u_system_unit_name, system_unit.in_ipv4 AS
   system_unit_in_ipv4, system_unit.p_system_unit_type AS
   system_unit_system_unit_type, system_unit.i_version AS
   system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   FROM system_unit JOIN machine_unit ON system_unit.pk_system_unit_id
   = machine_unit.pk_fk_system_unit_id
   WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
   system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
   WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
   2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine
   {'system_unit_id_1': 24769797950537768, 'rm_timestamp_1': 0,
   'rm_timestamp_2': 0}
   2L
   (Pdb) len(session.query(MachineUnit).filter(MachineUnit.id_ ==
   24769797950537768).all())
   2015-07-15 16:44:32,891 INFO sqlalchemy.engine.base.Engine SELECT
   system_unit.fk_updated_by AS system_unit_fk_updated_by,
   system_unit.fk_created_by AS system_unit_fk_created_by,
   system_unit.dt_created_on AS system_unit_dt_created_on,
   system_unit.dt_updated_on AS system_unit_updated_on,
   system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
   machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   system_unit_fk_organization_id, system_unit.u_system_unit_name AS
   system_unit_u_system_unit_name, system_unit.in_ipv4 AS
   system_unit_in_ipv4, system_unit.p_system_unit_type AS
   system_unit_system_unit_type, system_unit.i_version AS
   system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   FROM system_unit JOIN machine_unit ON system_unit.pk_system_unit_id
   = machine_unit.pk_fk_system_unit_id
   WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
   system_unit.bi_rm_timestamp = %(rm_timestamp_1)s
   2015-07-15 16:44:32,891 INFO sqlalchemy.engine.base.Engine
   {'system_unit_id_1': 24769797950537768, 'rm_timestamp_1': 0}
   1


# MachineUnit = Entity

does it fit the faq entry you mentioned earlier? :)


cheers,
richard.


On 07/15/2015 04:22 PM, Mike Bayer wrote:



On 7/15/15 3:13 PM, Richard Gerd Kuesters wrote:

does this happen even with a filter for a PK?

the problem is:

 session.query(Entity).filter(Entity.id_ == 
24769797950537768).count() == 2
 len(session.query(Entity).filter(Entity.id_ == 
24769797950537768).all()) == 1


i don't see where i have 2 pks with the same value ... in psql:

mydb=# select count(*) from entity where id_ = 24769797950537768;
 count
---
 1
(1 row)

is this really right?
what does echo='debug' say ?   both for the first query and the 
second.Also, that integer ID looks kind of like it itself could be 
hitting some overflow limit at some point.   I assume no issue for a 
simple integer in the thousands ?










On 07/15/2015 04:02 PM, Mike Bayer wrote:



On 7/15/15 2:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using a 
custom mapper that implements a where condition to every session.


first, what is happening:

 len(session.query(Entity).all()) == 1
 session.query(Entity).count() == 2


your all() query is returning two rows that have the same primary 
key for Entity.The Query object de-duplicates redundant rows 
based on entity identity.  Turn on echo='debug' and you should see 
this in the rows coming back.


we really need an FAQ entry for this one.




Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:
query._criterion = filter_crit
else:
query._criterion = query._criterion & filter_crit


i remember getting this custom mapper option in a recipe 

Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters

oh, the pk 24769797950537768 is a postgres biginteger.


On 07/15/2015 04:46 PM, Richard Gerd Kuesters wrote:

thanks Mike!

here we go:

(Pdb) session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).count()
2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine SELECT
count(*) AS count_1
FROM system_unit, (SELECT system_unit.fk_updated_by AS
system_unit_fk_updated_by, system_unit.fk_created_by AS
system_unit_fk_created_by, system_unit.dt_created_on AS
system_unit_dt_created_on, system_unit.dt_updated_on AS
system_unit_updated_on, system_unit.bi_rm_timestamp AS
system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id
AS system_unit_pk_system_unit_id, system_unit.fk_organization_id
AS system_unit_fk_organization_id, system_unit.u_system_unit_name
AS system_unit_u_system_unit_name, system_unit.in_ipv4 AS
system_unit_in_ipv4, system_unit.p_system_unit_type AS
system_unit_system_unit_type, system_unit.i_version AS
system_unit_i_version, machine_unit.e_machine_type AS
machine_unit_e_machine_type, machine_unit.e_printer_type AS
machine_unit_e_printer_type
FROM system_unit JOIN machine_unit ON
system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine
{'system_unit_id_1': 24769797950537768, 'rm_timestamp_1': 0,
'rm_timestamp_2': 0}
2L
(Pdb) len(session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).all())
2015-07-15 16:44:32,891 INFO sqlalchemy.engine.base.Engine SELECT
system_unit.fk_updated_by AS system_unit_fk_updated_by,
system_unit.fk_created_by AS system_unit_fk_created_by,
system_unit.dt_created_on AS system_unit_dt_created_on,
system_unit.dt_updated_on AS system_unit_updated_on,
system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
machine_unit.pk_fk_system_unit_id AS
machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id
AS system_unit_pk_system_unit_id, system_unit.fk_organization_id
AS system_unit_fk_organization_id, system_unit.u_system_unit_name
AS system_unit_u_system_unit_name, system_unit.in_ipv4 AS
system_unit_in_ipv4, system_unit.p_system_unit_type AS
system_unit_system_unit_type, system_unit.i_version AS
system_unit_i_version, machine_unit.e_machine_type AS
machine_unit_e_machine_type, machine_unit.e_printer_type AS
machine_unit_e_printer_type
FROM system_unit JOIN machine_unit ON
system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
system_unit.bi_rm_timestamp = %(rm_timestamp_1)s
2015-07-15 16:44:32,891 INFO sqlalchemy.engine.base.Engine
{'system_unit_id_1': 24769797950537768, 'rm_timestamp_1': 0}
1


# MachineUnit = Entity

it fits to the faq entry you mentioned earlier? :)


cheers,
richard.


On 07/15/2015 04:22 PM, Mike Bayer wrote:



On 7/15/15 3:13 PM, Richard Gerd Kuesters wrote:

does this happen even with a filter for a PK?

the problem is:

 session.query(Entity).filter(Entity.id_ == 
24769797950537768).count() == 2
 len(session.query(Entity).filter(Entity.id_ == 
24769797950537768).all()) == 1


i don't see where i have 2 pks with the same value ... in psql:

mydb=# select count(*) from entity where id_ = 24769797950537768;
 count
---
 1
(1 row)

is this really right?
what does echo='debug' say ?   both for the first query and the 
second.Also, that integer ID looks kind of like it itself could 
be hitting some overflow limit at some point.   I assume no issue for 
a simple integer in the thousands ?










On 07/15/2015 04:02 PM, Mike Bayer wrote:



On 7/15/15 2:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using 
a custom mapper that implements a where condition to every session.


first, what is happening:

 len(session.query(Entity).all()) == 1
 session.query(Entity).count() == 2


your all() query is returning two rows that have the same primary 
key for Entity.The Query object de-duplicates redundant rows 
based on entity identity.  Turn on echo='debug' and you should see 
this in the rows coming back.


we really need an FAQ entry for this one.




Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:

Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Mike Bayer



On 7/15/15 3:46 PM, Richard Gerd Kuesters wrote:

thanks Mike!

here we go:

(Pdb) session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).count()
2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine SELECT
count(*) AS count_1
FROM system_unit, (SELECT system_unit.fk_updated_by AS
system_unit_fk_updated_by, system_unit.fk_created_by AS
system_unit_fk_created_by, system_unit.dt_created_on AS
system_unit_dt_created_on, system_unit.dt_updated_on AS
system_unit_updated_on, system_unit.bi_rm_timestamp AS
system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id
AS system_unit_pk_system_unit_id, system_unit.fk_organization_id
AS system_unit_fk_organization_id, system_unit.u_system_unit_name
AS system_unit_u_system_unit_name, system_unit.in_ipv4 AS
system_unit_in_ipv4, system_unit.p_system_unit_type AS
system_unit_system_unit_type, system_unit.i_version AS
system_unit_i_version, machine_unit.e_machine_type AS
machine_unit_e_machine_type, machine_unit.e_printer_type AS
machine_unit_e_printer_type
FROM system_unit JOIN machine_unit ON
system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
2015-07-15 16:43:53,114 INFO sqlalchemy.engine.base.Engine
{'system_unit_id_1': 24769797950537768, 'rm_timestamp_1': 0,
'rm_timestamp_2': 0}
2L
(Pdb) len(session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).all())
2015-07-15 16:44:32,891 INFO sqlalchemy.engine.base.Engine SELECT
system_unit.fk_updated_by AS system_unit_fk_updated_by,
system_unit.fk_created_by AS system_unit_fk_created_by,
system_unit.dt_created_on AS system_unit_dt_created_on,
system_unit.dt_updated_on AS system_unit_updated_on,
system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
machine_unit.pk_fk_system_unit_id AS
machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id
AS system_unit_pk_system_unit_id, system_unit.fk_organization_id
AS system_unit_fk_organization_id, system_unit.u_system_unit_name
AS system_unit_u_system_unit_name, system_unit.in_ipv4 AS
system_unit_in_ipv4, system_unit.p_system_unit_type AS
system_unit_system_unit_type, system_unit.i_version AS
system_unit_i_version, machine_unit.e_machine_type AS
machine_unit_e_machine_type, machine_unit.e_printer_type AS
machine_unit_e_printer_type
FROM system_unit JOIN machine_unit ON
system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
system_unit.bi_rm_timestamp = %(rm_timestamp_1)s
2015-07-15 16:44:32,891 INFO sqlalchemy.engine.base.Engine
{'system_unit_id_1': 24769797950537768, 'rm_timestamp_1': 0}
1


# MachineUnit = Entity

it fits to the faq entry you mentioned earlier? :)


probably, but this is still only INFO output, not DEBUG. echo='debug' 
(not echo=True) will show you the result rows coming back as DEBUG entries.
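
For reference, a minimal sketch of turning that on at engine creation (the URL here is a placeholder):

from sqlalchemy import create_engine

# echo=True logs statements and parameters; echo='debug' additionally logs each result row
engine = create_engine('postgresql://user:pass@localhost/mydb', echo='debug')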


the JOIN you're seeing seems like it will return two rows for the single 
PK identity.








cheers,
richard.


On 07/15/2015 04:22 PM, Mike Bayer wrote:



On 7/15/15 3:13 PM, Richard Gerd Kuesters wrote:

does this happen even with a filter for a PK?

the problem is:

 session.query(Entity).filter(Entity.id_ == 
24769797950537768).count() == 2
 len(session.query(Entity).filter(Entity.id_ == 
24769797950537768).all()) == 1


i don't see where i have 2 pks with the same value ... in psql:

mydb=# select count(*) from entity where id_ = 24769797950537768;
 count
---
 1
(1 row)

is this really right?
what does echo='debug' say ?   both for the first query and the 
second.Also, that integer ID looks kind of like it itself could 
be hitting some overflow limit at some point.   I assume no issue for 
a simple integer in the thousands ?










On 07/15/2015 04:02 PM, Mike Bayer wrote:



On 7/15/15 2:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using 
a custom mapper that implements a where condition to every session.


first, what is happening:

 len(session.query(Entity).all()) == 1
 session.query(Entity).count() == 2


your all() query is returning two rows that have the same primary 
key for Entity.The Query object de-duplicates redundant rows 
based on entity identity.  Turn on echo='debug' and you should see 
this in the rows coming back.


we really need an FAQ entry for this one.




Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = 

Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Mike Bayer



On 7/15/15 2:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using a 
custom mapper that implements a where condition to every session.


first, what is happening:

>>> len(session.query(Entity).all()) == 1
>>> session.query(Entity).count() == 2


your all() query is returning two rows that have the same primary key 
for Entity.  The Query object de-duplicates redundant rows based on 
entity identity.  Turn on echo='debug' and you should see this in the 
rows coming back.


we really need an FAQ entry for this one.




Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:
query._criterion = filter_crit
else:
query._criterion = query._criterion & filter_crit


i remember getting this custom mapper option in a recipe ... i'm 
running sqla 1.0.6. if there's any information i can provide to 
enlighten this out, please let me know.


best regards,
richard.




Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Mike Bayer



On 7/15/15 3:13 PM, Richard Gerd Kuesters wrote:

does this happen even with a filter for a PK?

the problem is:

>>> session.query(Entity).filter(Entity.id_ == 
24769797950537768).count() == 2
>>> len(session.query(Entity).filter(Entity.id_ == 
24769797950537768).all()) == 1


i don't see where i have 2 pks with the same value ... in psql:

mydb=# select count(*) from entity where id_ = 24769797950537768;
 count
---
 1
(1 row)

is this really right?
what does echo='debug' say?   Both for the first query and the 
second.  Also, that integer ID looks kind of like it could itself be 
hitting some overflow limit at some point.   I assume there's no issue for a 
simple integer in the thousands?










On 07/15/2015 04:02 PM, Mike Bayer wrote:



On 7/15/15 2:11 PM, Richard Gerd Kuesters wrote:

hello!

i'm encountering a weird behaviur with session.count() when using a 
custom mapper that implements a where condition to every session.


first, what is happening:

 len(session.query(Entity).all()) == 1
 session.query(Entity).count() == 2


your all() query is returning two rows that have the same primary key 
for Entity.The Query object de-duplicates redundant rows based on 
entity identity.  Turn on echo='debug' and you should see this in the 
rows coming back.


we really need an FAQ entry for this one.




Entity is a base polymorphic entity, inherited by other classes.

my custom mapper:

class DeletedOption(MapperOption):
propagate_to_loaders = True

def process_query(self, query):
mzo = query._mapper_zero()
if hasattr(mzo, 'class_'):
parent_cls = mzo.class_
filter_crit = parent_cls.rm_timestamp == 0

if query._criterion is None:
query._criterion = filter_crit
else:
query._criterion = query._criterion & filter_crit


i remember getting this custom mapper option in a recipe ... i'm 
running sqla 1.0.6. if there's any information i can provide to 
enlighten this out, please let me know.


best regards,
richard.








Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters

right! sorry, now here we go (again):

   (Pdb) import logging
   (Pdb) logging.basicConfig()
   (Pdb) logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
   (Pdb)  session.query(MachineUnit).filter(MachineUnit.id_ ==
   24769797950537768).count()
   2015-07-15 16:56:44,565 INFO sqlalchemy.engine.base.Engine BEGIN
   (implicit)
   INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
   2015-07-15 16:56:44,566 INFO sqlalchemy.engine.base.Engine SELECT
   count(*) AS count_1
   FROM system_unit, (SELECT system_unit.fk_updated_by AS
   system_unit_fk_updated_by, system_unit.fk_created_by AS
   system_unit_fk_created_by, system_unit.dt_created_on AS
   system_unit_dt_created_on, system_unit.dt_updated_on AS
   system_unit_updated_on, system_unit.bi_rm_timestamp AS
   system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   system_unit_fk_organization_id, system_unit.u_system_unit_name AS
   system_unit_u_system_unit_name, system_unit.in_ipv4 AS
   system_unit_in_ipv4, system_unit.p_system_unit_type AS
   system_unit_system_unit_type, system_unit.i_version AS
   system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   FROM system_unit JOIN machine_unit ON system_unit.pk_system_unit_id
   = machine_unit.pk_fk_system_unit_id
   WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
   system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
   WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
   INFO:sqlalchemy.engine.base.Engine:SELECT count(*) AS count_1
   FROM system_unit, (SELECT system_unit.fk_updated_by AS
   system_unit_fk_updated_by, system_unit.fk_created_by AS
   system_unit_fk_created_by, system_unit.dt_created_on AS
   system_unit_dt_created_on, system_unit.dt_updated_on AS
   system_unit_updated_on, system_unit.bi_rm_timestamp AS
   system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   system_unit_fk_organization_id, system_unit.u_system_unit_name AS
   system_unit_u_system_unit_name, system_unit.in_ipv4 AS
   system_unit_in_ipv4, system_unit.p_system_unit_type AS
   system_unit_system_unit_type, system_unit.i_version AS
   system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   FROM system_unit JOIN machine_unit ON system_unit.pk_system_unit_id
   = machine_unit.pk_fk_system_unit_id
   WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
   system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
   WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
   2015-07-15 16:56:44,566 INFO sqlalchemy.engine.base.Engine
   {'rm_timestamp_1': 0, 'system_unit_id_1': 24769797950537768,
   'rm_timestamp_2': 0}
   INFO:sqlalchemy.engine.base.Engine:{'rm_timestamp_1': 0,
   'system_unit_id_1': 24769797950537768, 'rm_timestamp_2': 0}
   2015-07-15 16:56:44,567 DEBUG sqlalchemy.engine.base.Engine Col
   ('count_1',)
   DEBUG:sqlalchemy.engine.base.Engine:Col ('count_1',)
   2015-07-15 16:56:44,568 DEBUG sqlalchemy.engine.base.Engine Row (2L,)
   DEBUG:sqlalchemy.engine.base.Engine:Row (2L,)
   2L
   (Pdb) len(session.query(MachineUnit).filter(MachineUnit.id_ ==
   24769797950537768).all())
   2015-07-15 16:57:06,070 INFO sqlalchemy.engine.base.Engine SELECT
   system_unit.fk_updated_by AS system_unit_fk_updated_by,
   system_unit.fk_created_by AS system_unit_fk_created_by,
   system_unit.dt_created_on AS system_unit_dt_created_on,
   system_unit.dt_updated_on AS system_unit_updated_on,
   system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
   machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   system_unit_fk_organization_id, system_unit.u_system_unit_name AS
   system_unit_u_system_unit_name, system_unit.in_ipv4 AS
   system_unit_in_ipv4, system_unit.p_system_unit_type AS
   system_unit_system_unit_type, system_unit.i_version AS
   system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   FROM system_unit JOIN machine_unit ON system_unit.pk_system_unit_id
   = machine_unit.pk_fk_system_unit_id
   WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
   system_unit.bi_rm_timestamp = %(rm_timestamp_1)s
   INFO:sqlalchemy.engine.base.Engine:SELECT system_unit.fk_updated_by
   AS system_unit_fk_updated_by, system_unit.fk_created_by AS
   system_unit_fk_created_by, system_unit.dt_created_on AS
   system_unit_dt_created_on, system_unit.dt_updated_on 

Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Richard Gerd Kuesters
the interesting thing is, if i take the sql produced by count() and put it into 
psql ...


   SELECT count(*) AS count_1
   FROM system_unit, (
   SELECT system_unit.fk_updated_by AS system_unit_fk_updated_by,
   system_unit.fk_created_by AS system_unit_fk_created_by,
   system_unit.dt_created_on AS system_unit_dt_created_on,
   system_unit.dt_updated_on AS system_unit_updated_on,
   system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
   machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   system_unit_fk_organization_id, system_unit.u_system_unit_name AS
   system_unit_u_system_unit_name, system_unit.in_ipv4 AS
   system_unit_in_ipv4, system_unit.p_system_unit_type AS
   system_unit_system_unit_type, system_unit.i_version AS
   system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   FROM system_unit JOIN machine_unit ON system_unit.pk_system_unit_id
   = machine_unit.pk_fk_system_unit_id
   WHERE machine_unit.pk_fk_system_unit_id = 24769797950537768 AND
   system_unit.bi_rm_timestamp = 0) AS anon_1
   WHERE system_unit.bi_rm_timestamp = 0


first, the inner sql:

   pollux=# SELECT system_unit.fk_updated_by AS
   system_unit_fk_updated_by, system_unit.fk_created_by AS
   system_unit_fk_created_by, system_unit.dt_created_on AS
   system_unit_dt_created_on, system_unit.dt_updated_on AS
   system_unit_updated_on,
   pollux-# system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
   machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   pollux-# system_unit_fk_organization_id,
   system_unit.u_system_unit_name AS system_unit_u_system_unit_name,
   system_unit.in_ipv4 AS system_unit_in_ipv4,
   system_unit.p_system_unit_type AS system_unit_system_unit_type,
   system_unit.i_version AS
   pollux-# system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   pollux-# FROM system_unit JOIN machine_unit ON
   system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
   pollux-# WHERE machine_unit.pk_fk_system_unit_id = 24769797950537768
   AND system_unit.bi_rm_timestamp = 0;
 system_unit_fk_updated_by         |
 system_unit_fk_created_by         | 24769797950537732
 system_unit_dt_created_on         | 2015-07-15 17:49:57.41029-03
 system_unit_updated_on            | 2015-07-15 17:49:57.410305-03
 system_unit_bi_rm_timestamp       | 0
 machine_unit_pk_fk_system_unit_id | 24769797950537768
 system_unit_pk_system_unit_id     | 24769797950537768
 system_unit_fk_organization_id    | 24769797950537729
 system_unit_u_system_unit_name    | Workstation Super Legal do Richard
 system_unit_in_ipv4               | 10.0.0.2
 system_unit_system_unit_type      | 2
 system_unit_i_version             | 1
 machine_unit_e_machine_type       | 50
 machine_unit_e_printer_type       | 0
(1 row)


1 row. ok, so far it's good. now, the whole query ...

   pollux=# SELECT count(*) AS count_1
   pollux-# FROM system_unit, (
   pollux(# SELECT system_unit.fk_updated_by AS
   system_unit_fk_updated_by, system_unit.fk_created_by AS
   system_unit_fk_created_by, system_unit.dt_created_on AS
   system_unit_dt_created_on, system_unit.dt_updated_on AS
   system_unit_updated_on,
   pollux(# system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,
   machine_unit.pk_fk_system_unit_id AS
   machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id AS
   system_unit_pk_system_unit_id, system_unit.fk_organization_id AS
   pollux(# system_unit_fk_organization_id,
   system_unit.u_system_unit_name AS system_unit_u_system_unit_name,
   system_unit.in_ipv4 AS system_unit_in_ipv4,
   system_unit.p_system_unit_type AS system_unit_system_unit_type,
   system_unit.i_version AS
   pollux(# system_unit_i_version, machine_unit.e_machine_type AS
   machine_unit_e_machine_type, machine_unit.e_printer_type AS
   machine_unit_e_printer_type
   pollux(# FROM system_unit JOIN machine_unit ON
   system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
   pollux(# WHERE 

Re: [sqlalchemy] strange behavior in count with custom mapper option

2015-07-15 Thread Mike Bayer

there's your two rows:

 Row (None, 24769797950537732L, datetime.datetime(2015, 7, 15, 17, 49, 
57, 410290, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-180, 
name=None)), datetime.datetime(2015, 7, 15, 17, 49, 57, 410305, 
tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-180, name=None)), 0L, 
24769797950537768L, 24769797950537768L, 24769797950537729L, 
u'Workstation Super Legal do Richard', '10.0.0.2', 2, 1, 50, 0)



Row (None, 24769797950537732L, datetime.datetime(2015, 7, 15, 17, 49, 
57, 410290, tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-180, 
name=None)), datetime.datetime(2015, 7, 15, 17, 49, 57, 410305, 
tzinfo=psycopg2.tz.FixedOffsetTimezone(offset=-180, name=None)), 0L, 
24769797950537768L, 24769797950537768L, 24769797950537729L, 
u'Workstation Super Legal do Richard', '10.0.0.2', 2, 1, 50, 0)



and you can see they both are the same.  Therefore a count of 2, while .all() 
gives you one object.
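
A hedged aside, not something suggested in the thread: if the goal is a count that matches the de-duplicated .all() result, one option is to count distinct primary key values rather than raw rows.

from sqlalchemy import func, distinct

# count distinct identities instead of result rows
session.query(func.count(distinct(MachineUnit.id_))).filter(
    MachineUnit.id_ == 24769797950537768
).scalar()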




On 7/15/15 3:58 PM, Richard Gerd Kuesters wrote:

right! sorry, now here we go (again):

(Pdb) import logging
(Pdb) logging.basicConfig()
(Pdb) logging.getLogger('sqlalchemy.engine').setLevel(logging.DEBUG)
(Pdb)  session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).count()
2015-07-15 16:56:44,565 INFO sqlalchemy.engine.base.Engine BEGIN
(implicit)
INFO:sqlalchemy.engine.base.Engine:BEGIN (implicit)
2015-07-15 16:56:44,566 INFO sqlalchemy.engine.base.Engine SELECT
count(*) AS count_1
FROM system_unit, (SELECT system_unit.fk_updated_by AS
system_unit_fk_updated_by, system_unit.fk_created_by AS
system_unit_fk_created_by, system_unit.dt_created_on AS
system_unit_dt_created_on, system_unit.dt_updated_on AS
system_unit_updated_on, system_unit.bi_rm_timestamp AS
system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id
AS system_unit_pk_system_unit_id, system_unit.fk_organization_id
AS system_unit_fk_organization_id, system_unit.u_system_unit_name
AS system_unit_u_system_unit_name, system_unit.in_ipv4 AS
system_unit_in_ipv4, system_unit.p_system_unit_type AS
system_unit_system_unit_type, system_unit.i_version AS
system_unit_i_version, machine_unit.e_machine_type AS
machine_unit_e_machine_type, machine_unit.e_printer_type AS
machine_unit_e_printer_type
FROM system_unit JOIN machine_unit ON
system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
INFO:sqlalchemy.engine.base.Engine:SELECT count(*) AS count_1
FROM system_unit, (SELECT system_unit.fk_updated_by AS
system_unit_fk_updated_by, system_unit.fk_created_by AS
system_unit_fk_created_by, system_unit.dt_created_on AS
system_unit_dt_created_on, system_unit.dt_updated_on AS
system_unit_updated_on, system_unit.bi_rm_timestamp AS
system_unit_bi_rm_timestamp, machine_unit.pk_fk_system_unit_id AS
machine_unit_pk_fk_system_unit_id, system_unit.pk_system_unit_id
AS system_unit_pk_system_unit_id, system_unit.fk_organization_id
AS system_unit_fk_organization_id, system_unit.u_system_unit_name
AS system_unit_u_system_unit_name, system_unit.in_ipv4 AS
system_unit_in_ipv4, system_unit.p_system_unit_type AS
system_unit_system_unit_type, system_unit.i_version AS
system_unit_i_version, machine_unit.e_machine_type AS
machine_unit_e_machine_type, machine_unit.e_printer_type AS
machine_unit_e_printer_type
FROM system_unit JOIN machine_unit ON
system_unit.pk_system_unit_id = machine_unit.pk_fk_system_unit_id
WHERE machine_unit.pk_fk_system_unit_id = %(system_unit_id_1)s AND
system_unit.bi_rm_timestamp = %(rm_timestamp_1)s) AS anon_1
WHERE system_unit.bi_rm_timestamp = %(rm_timestamp_2)s
2015-07-15 16:56:44,566 INFO sqlalchemy.engine.base.Engine
{'rm_timestamp_1': 0, 'system_unit_id_1': 24769797950537768,
'rm_timestamp_2': 0}
INFO:sqlalchemy.engine.base.Engine:{'rm_timestamp_1': 0,
'system_unit_id_1': 24769797950537768, 'rm_timestamp_2': 0}
2015-07-15 16:56:44,567 DEBUG sqlalchemy.engine.base.Engine Col
('count_1',)
DEBUG:sqlalchemy.engine.base.Engine:Col ('count_1',)
2015-07-15 16:56:44,568 DEBUG sqlalchemy.engine.base.Engine Row (2L,)
DEBUG:sqlalchemy.engine.base.Engine:Row (2L,)
2L
(Pdb) len(session.query(MachineUnit).filter(MachineUnit.id_ ==
24769797950537768).all())
2015-07-15 16:57:06,070 INFO sqlalchemy.engine.base.Engine SELECT
system_unit.fk_updated_by AS system_unit_fk_updated_by,
system_unit.fk_created_by AS system_unit_fk_created_by,
system_unit.dt_created_on AS system_unit_dt_created_on,
system_unit.dt_updated_on AS system_unit_updated_on,
system_unit.bi_rm_timestamp AS system_unit_bi_rm_timestamp,

Re: [sqlalchemy] If the path in joinedload() includes a self-referential relationship, should I alias() every table after that relation?

2015-07-15 Thread Mike Bayer



On 7/15/15 4:40 PM, Юрий Пайков wrote:

Ok, that is clear now.

Eager loading of tables occurring more than once in a query is a bit 
confusing for me, as it is not well documented -
for example, that *contains_eager()* needs the *alias=* parameter in order 
to work properly for a second occurrence of a table. If I might, I 
would advise you to shed some light on this usage in the documentation.


it really comes down to an understanding of the basic concept.   The SQL 
itself must return the correct results.  You cannot refer to a table 
twice in the same SQL statement in two different FROM contexts; all but 
one of them must be aliased for them to be handled separately.  
contains_eager() merely refers the ORM to the particular columns in the 
result set that represent the data for this collection.


If you turn on echo='debug' in your create_engine() statement, you can 
watch the rows as they come in. Loading a series of B rows which 
also contain a series of additional B objects, represented via JOIN or 
LEFT OUTER JOIN, requires aliasing.
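
As a minimal sketch of that rule, using the self-referential A.children relationship named in this thread (not the poster's full query):

from sqlalchemy.orm import aliased, contains_eager

child_A = aliased(A)  # the second FROM occurrence of A must be aliased

q = (
    session.query(A)
    .join(child_A, A.children)                            # a JOIN a AS child_A
    .options(contains_eager(A.children, alias=child_A))   # point the ORM at child_A's columns
)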





OK, moving on.
When using
.join(relation, aliased=True)
it is stated that the next *.filter()* call will refer to the second 
occurrence (unless *.reset_joinpoint()* is called).


aliased=True is an old feature that I think is more confusing than it is 
worth for end-user use of query().   I'd not recommend using it unless 
you have some case where it is absolutely necessary.




Here https://gist.github.com/ojomio/08e5d91d8eed6cc76a2c I have an 
example of similar behavior. Consider my slightly modified example.
It seems *contains_eager()*, when given not the full path from the 
first occurrence (B->A->A->B->C) but rather only the portion of the path from 
the last mention of the table (B->C), populates the collection with the data 
from the _latest_ mentioned instance of that table.
That seems incorrect.  There is no join from B->cs stated here, there is 
only a join from second_B->cs.   Therefore contains_eager() should be 
given second_B.cs, not B.cs.  But also, yes, the full path of 
contains_eager() must be stated here, because otherwise the query will 
never populate B.as_, or A.children, or A.bs, and therefore never get to 
B.cs; those columns are thrown away.



Is it the desired outcome or should it be considered a bug? Could this 
pattern change in future?


That the eager loading system requires full explicitness in order to 
know what to do is exactly why it is so flexible.


And, as always, thank you for your patience and the attention brought to my 
problem.

my pleasure!




On Wednesday, July 15, 2015 at 22:32:42 UTC+5, Michael Bayer 
wrote:




On 7/15/15 2:42 AM, Юрий Пайков wrote:

I have an example here
https://gist.github.com/ojomio/aa5eca3bea03d21e00e8. This code
issue exactly one query and load everything at one time
What I am asking about is line
https://gist.github.com/ojomio/aa5eca3bea03d21e00e8#file-gistfile1-py-L65

If I don not use alias *second_B*and simply write
|
).join(
AWithChildren.bs
).options(
contains_eager(B.as_).
contains_eager(A.children,alias=AWithChildren).
contains_eager(AWithChildren.bs).
joinedload(B.cs)
)
|

Then  SQLAlchemy issue another query on *C* table, apparently not
matching expression AWithChildren.bs and B
So my question is - if there are many other tables after
*A.children -*should use alias() for every one and mention them like
|
.joinedload(PreviousTable.relation,alias=SomeTableAlias).
|
?


the use here of .joinedload() on the end of a series of
contains_eager() calls is already very unusual, and I'm somewhat
surprised it works correctly in the first case as this is not a
use case that's really tested.   As for the case of second_b not
being present, this is not surprising as AWithChildren.bs refers
to B, which is already present in the query as the primary
entity, for a separate collection of B to be present it needs to
be aliased. the join(AWithChildren.bs) is the same as join(B,
AWithChildren.bs), and you'd never want to say what is essentially
session.query(B).join(Q).join(B) - you need an alias for each
subsequent occurrence of B.







Re: [sqlalchemy] If the path in joinedload() includes a self-referential relationship, should I alias() every table after that relation?

2015-07-15 Thread Юрий Пайков
Ok, that is clear now. 

Eager loading of tables occurring more than once in a query is a bit 
confusing for me, as it is not well documented - 
for example, that *contains_eager()* needs the *alias=* parameter in order to 
work properly for a second occurrence of a table. If I might, I would advise 
you to shed some light on this usage in the documentation.

OK, moving on
When using
 .join(relation, aliased=True)
it is stated that the next *.filter()* call will refer to the second occurrence 
(unless *.reset_joinpoint()* is called). 

Here https://gist.github.com/ojomio/08e5d91d8eed6cc76a2c I have an 
example of similar behavior. Consider my slightly modified example.
It seems *contains_eager()*, when given not the full path from the first 
occurrence (B->A->A->B->C) but rather only the portion of the path from the last 
mention of the table (B->C), populates the collection with the data from the 
*latest* mentioned instance of that table.

Is it the desired outcome or should it be considered a bug? Could this 
pattern change in future?
And, as always, thank you for your patience and the attention brought to my 
problem.

On Wednesday, July 15, 2015 at 22:32:42 UTC+5, Michael Bayer wrote:

  

 On 7/15/15 2:42 AM, Юрий Пайков wrote:
  
 I have an example here https://gist.github.com/ojomio/aa5eca3bea03d21e00e8. 
 This code issue exactly one query and load everything at one time 
 What I am asking about is line 
 https://gist.github.com/ojomio/aa5eca3bea03d21e00e8#file-gistfile1-py-L65

  If I don not use alias *second_B* and simply write 
   
 ).join(
 AWithChildren.bs
 ).options(
 contains_eager(B.as_).
 contains_eager(A.children, alias=AWithChildren).
 contains_eager(AWithChildren.bs).
 joinedload(B.cs)
 )

  
 Then  SQLAlchemy issue another query on *C* table, apparently not 
 matching expression AWithChildren.bs and B
 So my question is - if there are many other tables after *A.children -* 
 should use alias() for every one and mention them like
 .joinedload( PreviousTable.relation, alias=SomeTableAlias ).
 ?
  

 the use here of .joinedload() on the end of a series of contains_eager() 
 calls is already very unusual, and I'm somewhat surprised it works 
 correctly in the first case as this is not a use case that's really 
 tested.   As for the case of second_b not being present, this is not 
 surprising as AWithChildren.bs refers to B, which is already present in 
 the query as the primary entity, for a separate collection of B to be 
 present it needs to be aliased.   the join(AWithChildren.bs) is the same as 
 join(B, AWithChildren.bs), and you'd never want to say what is essentially 
 session.query(B).join(Q).join(B) - you need an alias for each subsequent 
 occurrence of B.






 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


[sqlalchemy] using sqlalchemy to track changes in database

2015-07-15 Thread Balaji Pattewar
Hi All,

Can anybody help me with how SQLAlchemy can be used to identify changes in a 
database?
I mean, how do I identify that some rows got deleted or added after a 
particular time?

Thanks
Balaji



[sqlalchemy] is it two or more levels of polymorphism possible?

2015-07-15 Thread Richard Gerd Kuesters

hi all,

i was wondering if there's a way to create more than one level of 
polymorphic entities in sa. quick example:


   class Foo(Base):
       ...
       __mapper_args__ = { ... }

   class Bar(Foo):
       ...
       __mapper_args__ = { ??? }  # --- polymorphic_identity for ... two?

   class Baz(Bar):  # --- is this possible?
       ...



cheers,
Richard.


Re: [sqlalchemy] is it two or more levels of polymorphism possible?

2015-07-15 Thread Mike Bayer



On 7/15/15 9:09 AM, Richard Gerd Kuesters wrote:

hi all,

i was wondering if there's a way to create more than one level of 
polymorphic entities in sa. quick example:


class Foo(Base):
...
__mapper_args__ = { ... }

class Bar(Foo):
...
__mapper_args__ = { ??? }  # --- polymorphic_identity for ...
two?


class Baz(Bar):  # --- is this possible?
   ...




yes, you give a new polymorphic_identity to Baz separate from Bar.  
The only requirement is that they are all using the same column on Foo 
for the differentiation (cascading polymorphic_on columns isn't 
supported yet).
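
e.g., a rough sketch of that (single table inheritance; the table and column 
names here are just for illustration):

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Foo(Base):
        __tablename__ = 'foo'
        id = Column(Integer, primary_key=True)
        kind = Column(String(20))
        __mapper_args__ = {
            'polymorphic_on': kind,
            'polymorphic_identity': 'foo',
        }

    class Bar(Foo):
        __mapper_args__ = {'polymorphic_identity': 'bar'}

    class Baz(Bar):
        # a third level works, as long as it still discriminates on Foo.kind
        __mapper_args__ = {'polymorphic_identity': 'baz'}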





cheers,
Richard.


Re: [sqlalchemy] is it two or more levels of polymorphism possible?

2015-07-15 Thread Richard Gerd Kuesters


oh, yes, i was thinking about cascading polymorphic_on, like you mentioned.

but, no problem, i'll try to work around my problem with a simpler 
approach (the old soft-delete dilemma) ...


thanks for your help, Mike!

best regards,
richard.


On 07/15/2015 10:46 AM, Mike Bayer wrote:



On 7/15/15 9:09 AM, Richard Gerd Kuesters wrote:

hi all,

i was wondering if there's a way to create more than one level of 
polymorphic entities in sa. quick example:


class Foo(Base):
...
__mapper_args__ = { ... }

class Bar(Foo):
...
__mapper_args__ = { ??? }  # --- polymorphic_identity for
... two?


class Baz(Bar):  # --- is this possible?
   ...




yes, you give a new polymorphic_identity to Baz separate from 
Bar.  The only requirement is that they are all using the same 
column on Foo for the differentiation (cascading polymorphic_on 
columns isn't supported yet).





cheers,
Richard.

[sqlalchemy] Re: How to jsonify Query result?

2015-07-15 Thread sector119
OK, that approach isn't great. I took another one from flask_jsontools, and 
it does what I need!


json.dumps(q, cls=DynamicJSONEncoder)


Base = declarative_base(cls=JsonSerializableBase)


import decimal
from datetime import datetime, date
from json import JSONEncoder

from sqlalchemy import inspect
from sqlalchemy.orm.state import InstanceState


class DynamicJSONEncoder(JSONEncoder):
    def default(self, obj):
        if isinstance(obj, (datetime, date)):
            return obj.isoformat()
        elif isinstance(obj, decimal.Decimal):
            return float(obj)
        elif hasattr(obj, '__json__'):
            return obj.__json__()

        return super(DynamicJSONEncoder, self).default(obj)


def get_entity_propnames(entity):
    ins = entity if isinstance(entity, InstanceState) else inspect(entity)
    return set(ins.mapper.column_attrs.keys() + ins.mapper.relationships.keys())


def get_entity_loaded_propnames(entity):
    """Get entity property names that are loaded (e.g. won't produce new queries)"""
    ins = inspect(entity)
    keynames = get_entity_propnames(ins)

    # If the entity is not transient -- exclude unloaded keys
    # Transient entities won't load these anyway, so it's safe to include all columns and get defaults
    if not ins.transient:
        keynames -= ins.unloaded

    # If the entity is expired -- reload expired attributes as well
    # Expired attributes are usually unloaded as well!
    if ins.expired:
        keynames |= ins.expired_attributes

    return keynames


class JsonSerializableBase(object):
    """Declarative Base mixin to allow objects serialization"""

    __json_private__ = set()

    def __json__(self):
        return {name: getattr(self, name)
                for name in get_entity_loaded_propnames(self) - self.__json_private__}
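
Usage, roughly (the User model and session here are just for illustration, not 
part of the snippet above):

    import json
    from sqlalchemy import Column, Integer, String

    class User(Base):   # Base = declarative_base(cls=JsonSerializableBase)
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    users = session.query(User).all()
    payload = json.dumps(users, cls=DynamicJSONEncoder)  # each User is rendered via __json__()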


