On Wednesday 28 May 2008 15:49:16 Koen Bok wrote:
>> Yep, exactly that. It would speed up my (UI) app immensely. Any
>> ideas how to approach something like that?
> thinking of it... the attributes have to be deferred/None, and
> set up externally by the wrapping smartie, e.g. a UI pager or
> whatever. But I have no idea how it can be done most nicely... some
> synonym() that returns some.cache[mycol] and otherwise falls back to
> self._mycol?
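
In plain Python, that cache-then-fallback shape might look roughly like
this (every name below is invented, and the actual SQLAlchemy wiring - the
synonym()/deferred mapping - is left out):

  # hypothetical external cache, filled in bulk by the wrapping UI pager
  _column_cache = {}   # (person id, attribute name) -> pre-fetched value

  class Person(object):
      def __init__(self, id, mycol=None):
          self.id = id
          self._mycol = mycol      # stands in for the real deferred/lazy attribute

      def _get_mycol(self):
          try:
              # an externally populated cache entry wins...
              return _column_cache[(self.id, 'mycol')]
          except KeyError:
              # ...otherwise fall back to the normal attribute
              # (which would trigger its deferred/lazy load)
              return self._mycol

      def _set_mycol(self, value):
          self._mycol = value

      mycol = property(_get_mycol, _set_mycol)

So the pager would fill _column_cache for the visible rows in one query,
and instance.mycol would never have to touch the database for those.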

Or maybe, for a more heavyweight approach, use some of the per-instance
MapperExtension hooks:
  def populate_instance(self, mapper, selectcontext, row, instance, **flags):
  def append_result(self, mapper, selectcontext, row, instance, result, **flags):
  def create_instance(self, mapper, selectcontext, row, class_):
  def translate_row(self, mapper, context, row):
I guess which one to use depends on what level the cache works at.
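
For instance, a rough sketch using append_result, where the instance is
already fully populated and its primary key is usable (the extension name,
the prefetched dict, the 'children' attribute and the id primary key are
all invented for illustration):

  from sqlalchemy.orm import MapperExtension, EXT_CONTINUE

  class RelationCacheExtension(MapperExtension):
      """Hand each loaded instance its related objects from a dict that was
      filled by one big up-front query, so no per-instance lazy load fires."""

      def __init__(self, prefetched, attr='children'):
          # prefetched: {parent primary key: [related objects]}, built externally
          self.prefetched = prefetched
          self.attr = attr

      def append_result(self, mapper, selectcontext, row, instance, result, **flags):
          related = self.prefetched.get(instance.id)
          if related is not None:
              # stuff the data straight into the instance's dict; the lazy
              # loader for that attribute then never needs to run
              instance.__dict__[self.attr] = related
          return EXT_CONTINUE   # let the mapper append the instance as usual

It would be hooked in as mapper(Parent, parent_table,
extension=RelationCacheExtension(prefetched)), with prefetched built
beforehand by one big IN-query against the child table.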

If you make something, I'd be interested to see it... a VerticalCache of
sorts.

ciao
svilen

> On May 28, 5:07 pm, [EMAIL PROTECTED] wrote:
> > Some time ago I posted a list of my ideas along these lines:
> > http://groups.google.com/group/sqlalchemy/browse_thread/thread/d88696...
> >
> > > Beware: it's all pure theory.
> > >  -1 (horizontal) (eager) loading ONLY of the needed row
> > > attributes, also hierarchically (a.b.c.d)
> > >  -2 (vertical) simultaneous loading of columns - e.g. the lazy
> > > attributes - wholly, or in portions/slices (depending on UI
> > > visibility or other slice size)
> > >  -3 skipping creation of objects - only using the data, if time
> > > of creation gets critical. For example, for a simple report on a
> > > person's name.alias and age, the creation of 100,000 Persons
> > > can be omitted. To be able to drill down, the person.db_id
> > > would need to be stored too.
> > >  -4 caching of some aggregations/calculations in special
> > > columns/tables, so they're not recomputed every time
> > >  -5 translating the whole report - calculations, aggregations,
> > > grouping etc. - into SQL and using the result as is (with the same
> > > caveat about db_ids)
> >
> > Except for #4 (aggregation), which is pretty automated now, I don't
> > yet have an implementation of the rest.
> > I think you're talking about #2?
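
For reference, the non-sliced half of #2 is more or less just deferred
columns plus undefer(); a minimal sketch against the 0.4-era API, with the
Person class, persons table and bio column all made up:

  from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Text
  from sqlalchemy.orm import mapper, deferred, undefer, create_session

  metadata = MetaData()
  persons_table = Table('persons', metadata,
      Column('id', Integer, primary_key=True),
      Column('name', String(50)),
      Column('bio', Text),
  )

  class Person(object):
      pass

  # map 'bio' as deferred: it is skipped in the main SELECT and only fetched
  # when first touched, or when explicitly un-deferred as below
  mapper(Person, persons_table, properties={
      'bio': deferred(persons_table.c.bio),
  })

  engine = create_engine('sqlite://')
  metadata.create_all(engine)
  session = create_session(bind=engine)

  # the "vertical" load: pull the deferred column eagerly for just this query,
  # e.g. for the slice of rows the UI is actually about to display
  visible = session.query(Person).options(undefer('bio'))[0:50]

The slicing/portioning of the deferred loads themselves is the part that
would still need custom work.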
> >
> > ciao
> > svilen
> >
> > > Hey All,
> > >
> > > I have a conceptual question.
> > >
> > > You have two ways to load relations: lazy and non-lazy. Non-lazy
> > > loading works great for saving queries but can get pretty slow with
> > > complicated joins. So I was wondering if there is a third way:
> > > pre-fetching all the data for the relations and letting the mapper get
> > > the relation data from a cache instead of doing another query.
> > >
> > > It's kinda hard to explain, so I wrote an example script at:
> > > http://paste.pocoo.org/show/55145/
> > >
> > > I guess this should be possible by writing some
> > > MapperExtension? Did anyone do anything like this, or maybe have
> > > some pointers?
> > >
> > > Thanks!
> > >
> > > Koen
>
> 


