Thanks for your answer, I really like the update-frequency versus read-frequency
way of thinking about it.

A related question: is it a good idea to denormalize the read-heavy parts of the
data while keeping the other, less frequently accessed data normalized?

Our app will have a limited number of system-managed boards that are viewed by
every user, so it makes sense to denormalize them and propagate pin updates to
these boards.
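
To make that concrete, I was picturing something like the CQL sketch below (the
table and column names are just placeholders I made up): a pin's attributes are
copied into every system board row that shows it, so rendering a board is a
single partition read, but editing a pin means rewriting it in every board that
contains it.

-- Denormalized system boards: pin attributes are duplicated per board row.
CREATE TABLE system_board_pins (
    board_id    uuid,      -- which system-managed board this row belongs to
    pin_id      timeuuid,
    image_url   text,      -- copied from the canonical pin
    description text,      -- copied from the canonical pin
    PRIMARY KEY (board_id, pin_id)
);

-- Updating a pin's description then means one write per board containing it:
-- UPDATE system_board_pins SET description = ? WHERE board_id = ? AND pin_id = ?;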

We will also have a like board for each user containing the pins they like;
these boards are more or less private and viewed only by their owner.

Since a pin can potentially be liked by thousands of users, if we also
denormalize the like boards, then every time that pin is liked by another user
we would have to update the like count in thousands of like boards.
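
In case it helps to see what I mean by normalizing here, a rough sketch (again,
the names are only illustrative): each user's like board stores just pin ids,
and the like count lives once per pin in a counter table, so a new like is two
small writes no matter how many users have already liked that pin.

-- Normalized like boards: only references to pins, no duplicated pin data.
CREATE TABLE user_likes (
    user_id  uuid,
    pin_id   timeuuid,
    liked_at timestamp,
    PRIMARY KEY (user_id, pin_id)
);

-- The like count is stored once per pin as a Cassandra counter.
CREATE TABLE pin_like_counts (
    pin_id     timeuuid PRIMARY KEY,
    like_count counter
);

-- Recording a like:
-- INSERT INTO user_likes (user_id, pin_id, liked_at) VALUES (?, ?, ?);
-- UPDATE pin_like_counts SET like_count = like_count + 1 WHERE pin_id = ?;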

Does normalization work better in this case, or can Cassandra handle this kind
of write load?


