Tapestry already does some caching of invariant properties, and when it doesn't, the overhead of reading the property again is negligible unless it results in a database query or similar. Also, I have plenty of pages where properties change while the page is rendering.
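For example (a minimal, hypothetical sketch; the class and property names are mine, not from the thread), a getter that depends on the current row of a t:Loop has to be re-evaluated on every iteration, so caching it for the whole request would be wrong:

// Hypothetical page class; "row" would be bound to a t:Loop in the template.
public class ReportPage {

    private int row;

    public int getRow() { return row; }

    public void setRow(int row) { this.row = row; }

    // Changes on every loop iteration; if this were cached for the whole
    // request, every row would render the same label.
    public String getRowLabel() {
        return "Row " + (row + 1);
    }
}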

So, no. It wouldn't be useful.

-Filip

On 2008-03-18 18:01, Tobias Marx wrote:
Wouldn't it be useful to make the @Cached annotation the default for all methods?

I cannot imagine a case where it would make sense for a method to return different results during the rendering of a single page...

-------- Original Message --------
Date: Tue, 18 Mar 2008 11:53:09 -0500
From: "Adam Zimowski" <[EMAIL PROTECTED]>
To: "Tapestry users" <users@tapestry.apache.org>
Subject: Re: @Cached and caching in general

@Cached works on a per-request basis: whatever you return from a method carrying the @Cached annotation actually gets "built" or "retrieved" only once, and only for that HTTP request. So if you're building an expensive HTML fragment:

// org.apache.tapestry5.annotations.Cached
@Cached
public String buildExpensiveHtmlFragment() {
  //.........
}

then within a single request (and thread) you can call buildExpensiveHtmlFragment() as many times as you want, fully confident that the work to build it is performed only once. When you refresh the page, the build is invoked once again (but only once).
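For instance (an illustrative sketch, not code from this thread), a page class along these lines lets the template read ${expensiveHtmlFragment} in several places while the method body runs only once per request:

import org.apache.tapestry5.annotations.Cached;

// Hypothetical page class; the "expensive work" is just a stand-in.
public class CatalogPage {

    @Cached
    public String getExpensiveHtmlFragment() {
        // Imagine slow queries and string building here. With @Cached,
        // this body executes once per request, no matter how often the
        // template reads ${expensiveHtmlFragment}.
        return "<ul><li>expensive fragment</li></ul>";
    }
}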

-adam

On Tue, Mar 18, 2008 at 11:44 AM, Tobias Marx <[EMAIL PROTECTED]> wrote:
I have not used T5 yet, but would @Cached use the file system for caching HTML fragments, similar to the caching mechanisms in some PHP frameworks? Or is it a pure memory-based cache?

I am thinking about migrating an old PHP application to T5. It gets a lot of traffic, and many users are logged in at the same time. It is quite a low-level application that is still quite fast thanks to background cron jobs that generate HTML fragments, which are then included to work around the database-query bottleneck (e.g. grouping, ordering and sorting of huge tables).
Somehow I don't trust Hibernate for high-performance queries on huge tables, as I think that if the tables are huge and many people access them, there will always be problems, no matter how good the queries are and how well you have split the data across several tables.
So I think the best solution is always to generate the slow HTML fragments in the background and simply "include" them. When the data is cached this is even quicker than parsing templates, because you save both the time needed to query the database and the time needed to process the templates involved.
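A rough sketch of how such a pre-generated fragment could be served from a T5 page (the class name and file path are assumptions, not from the thread): the getter just reads the file the cron job wrote, and the template would emit it unescaped, for example with Tapestry's OutputRaw component.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical page class; the fragment path is an assumption.
public class HugeTablePage {

    // Reads the fragment the background cron job generated; neither a
    // database query nor template processing is needed for the fragment.
    public String getTableFragment() {
        try {
            byte[] bytes = Files.readAllBytes(Paths.get("/var/cache/fragments/huge-table.html"));
            return new String(bytes, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException("Could not read pre-generated fragment", e);
        }
    }
}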
Currently this application uses one-way database replication; the cron jobs read the huge data table on the replicated database and generate those HTML fragments without disturbing the web application's performance. The main application then simply includes those HTML fragments within milliseconds.
 But maybe the T5 caching mechanism would make all of those low-level
tricks redundant?