Hi,

We are facing a problem where we need to store 270 fields per item. The fields are laboratory measurements of a patient: 40 measurement values for each of 7 timepoints. The fields need to be accessed per timepoint, per measurement, and occasionally all fields for one patient at once. There will be over 10,000 patients, distributed under different hospital items (tree-like, for permission reasons). Data is never accessed for two patients at once, so we don't need to scale the catalog. So I am curious how to make Plone scale well for this scenario.
My questions:

- What is the overhead of a field in an AT schema? Should we use the normal storage backend (one Python attribute per value), or can we pack our field values into a list/dict with a custom storage backend to make it faster? (First sketch below.)

- What is the wake-up overhead of an AT object? Should we distribute our fields over several ZODB objects, e.g. one per timepoint, or stick all values on one ZODB object? All fields of a patient are needed at once on some views. (Second sketch below.)

- One big Zope object vs. a few smaller Zope objects?
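Here is a minimal sketch of what I mean by a dict-backed storage, assuming the standard Archetypes Storage interface (get/set/unset); DictStorage and the _measurements attribute are made-up names:

    from Acquisition import aq_base
    from persistent.mapping import PersistentMapping
    from Products.Archetypes.Storage import Storage

    class DictStorage(Storage):
        """Pack all field values into a single PersistentMapping on the
        instance instead of one attribute per field (sketch)."""

        def _bucket(self, instance, create=False):
            base = aq_base(instance)
            bucket = getattr(base, '_measurements', None)
            if bucket is None and create:
                bucket = base._measurements = PersistentMapping()
            return bucket

        def get(self, name, instance, **kwargs):
            bucket = self._bucket(instance)
            if bucket is None or name not in bucket:
                # missing value: let the field fall back to its default
                raise AttributeError(name)
            return bucket[name]

        def set(self, name, instance, value, **kwargs):
            self._bucket(instance, create=True)[name] = value

        def unset(self, name, instance, **kwargs):
            bucket = self._bucket(instance)
            if bucket is not None and name in bucket:
                del bucket[name]

The idea would be to pass storage=DictStorage() to each measurement field in the schema.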
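And for the per-timepoint split, a sketch of what the sub-objects could look like (plain ZODB persistence; class and attribute names made up):

    from persistent import Persistent
    from persistent.mapping import PersistentMapping

    class Timepoint(Persistent):
        """One ZODB record per timepoint, so waking the patient object
        does not pull in the other six timepoints."""

        def __init__(self):
            # a plain dict is fine here: the ~40 values are always
            # loaded and stored together with their Timepoint record
            self.measurements = {}

    def make_timepoints(count=7):
        """Build the per-timepoint buckets for one patient (sketch)."""
        return PersistentMapping(
            dict((i, Timepoint()) for i in range(count)))

The tradeoff, as I understand it: with everything on one object the all-fields view costs a single ZODB load, while with seven sub-objects the per-timepoint views load only a seventh of the data but the all-fields view costs eight loads.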
Cheers,

Mikko Ohtamaa
Oulu, Finland