But work.like_query.count() will be efficient if you have the right indexes 
in the database, no?
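To illustrate the point, here is a minimal sketch using plain stdlib sqlite3 (the index name is made up; the table follows the schema in the question): with an index on likes(work_id), a per-work count is a narrow index range scan rather than a full table scan, which is roughly the SQL a call like work.like_query.count() would emit.

```python
import sqlite3

# Hypothetical sketch: an index on likes(work_id) makes per-work counts cheap.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE likes (user_id INTEGER, work_id INTEGER);
CREATE INDEX ix_likes_work_id ON likes (work_id);
""")
conn.executemany("INSERT INTO likes VALUES (?, ?)", [(1, 7), (2, 7), (3, 8)])

# Roughly what work.like_query.count() translates to:
count = conn.execute(
    "SELECT COUNT(*) FROM likes WHERE work_id = ?", (7,)
).fetchone()[0]
print(count)  # prints 2

# The query plan confirms the index is used (a covering-index search):
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM likes WHERE work_id = 7"
).fetchone()
```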

If you want to denormalize that count all the way and still stay very 
efficient, it would probably be best to do it right on the db server, with 
a trigger (and a stored procedure, if your database requires one), and 
avoid the extra Python function calls entirely.  What database do you use?
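For example, the trigger approach could look roughly like this, sketched with SQLite via Python's stdlib sqlite3 purely for illustration (the trigger names are made up, and the exact DDL syntax will differ on PostgreSQL/MySQL):

```python
import sqlite3

# Sketch of a database-maintained like_count: table/column names follow the
# schema in the question, plus the proposed works.like_count column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE works (id INTEGER PRIMARY KEY, title TEXT,
                    like_count INTEGER NOT NULL DEFAULT 0);
CREATE TABLE likes (user_id INTEGER, work_id INTEGER REFERENCES works(id));

-- The database keeps like_count in sync; no Python code is involved.
CREATE TRIGGER likes_after_insert AFTER INSERT ON likes BEGIN
    UPDATE works SET like_count = like_count + 1 WHERE id = NEW.work_id;
END;
CREATE TRIGGER likes_after_delete AFTER DELETE ON likes BEGIN
    UPDATE works SET like_count = like_count - 1 WHERE id = OLD.work_id;
END;
""")

conn.execute("INSERT INTO works (id, title) VALUES (1, 'A Work')")
conn.execute("INSERT INTO likes VALUES (10, 1)")
conn.execute("INSERT INTO likes VALUES (11, 1)")
conn.execute("DELETE FROM likes WHERE user_id = 11")
remaining = conn.execute(
    "SELECT like_count FROM works WHERE id = 1"
).fetchone()[0]
print(remaining)  # prints 1
```

The nice property is that the count stays correct no matter which client or code path inserts or deletes a like.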


On Wednesday, December 5, 2012 3:06:03 PM UTC-8, Hong Minhee wrote:
>
> Hi,
>
> I am making a typical web application using SQLAlchemy, and it contains 
> “users”, “works” and “likes”:
>
>
>    - users ( id, login, … )
>    - works ( id, title, … )
>    - likes ( user_id, work_id )
>
>
> I want to print how many users liked each work, so the simplest (but 
> naive) approach is to query the count each time, e.g.:
>
> len(work.like_set)
> work.like_query.count()
>
>
> However it’s inefficient, so I want to maintain a like_count field (or 
> something similar) on works.  The problem is that I have to manually update 
> the field every time a new like is inserted or an existing like is deleted. 
> I thought it could be updated automatically, without inconsistency, if I 
> catch SQLAlchemy events, but I’m not sure whether that’s the correct way to 
> do it or not.
>
> How do you guys solve such situation with SQLAlchemy?
>
> Thanks,
> Hong Minhee
>
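For completeness, the event-based approach the question describes could be sketched roughly like this (a minimal sketch, not a recommendation; model names follow the question's schema, and the "append"/"remove" attribute events are one way to interpret "catch SQLAlchemy events"):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Work(Base):
    __tablename__ = "works"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    like_count = Column(Integer, nullable=False, default=0)
    likes = relationship("Like", backref="work")

class Like(Base):
    __tablename__ = "likes"
    user_id = Column(Integer, primary_key=True)
    work_id = Column(Integer, ForeignKey("works.id"), primary_key=True)

# Attribute events fire when a like is appended to / removed from the
# relationship collection, keeping the denormalized count in step.
@event.listens_for(Work.likes, "append")
def _like_added(work, like, initiator):
    work.like_count = (work.like_count or 0) + 1

@event.listens_for(Work.likes, "remove")
def _like_removed(work, like, initiator):
    work.like_count = (work.like_count or 0) - 1

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    work = Work(id=1, title="A Work")
    work.likes.append(Like(user_id=10, work_id=1))
    work.likes.append(Like(user_id=11, work_id=1))
    session.add(work)
    session.commit()
    final_count = work.like_count  # read back inside the session
print(final_count)  # prints 2
```

Note the consistency pitfall, though: a bulk DELETE that bypasses the ORM collection will not fire the "remove" event, which is exactly why a database-side trigger is more airtight.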

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/1bSJm_qJci8J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.
