On Feb 16, 3:55 pm, Alex Black <a...@alexblack.ca> wrote:
> > Calculating an MD5 of a file would incur additional computation time,
> > and we'd need to maintain hashes for each resource file. The prototype
> > that I have now is a function in LiftRules that by default returns a
> > random value generated on startup. Applications that need an MD5 per
> > file could calculate and maintain those hashes themselves.
>
> Hi Marius, what does the proposed token represent? It looks to me like
> it represents a given resource (CSS file) per running instance of
> Jetty.

In my prototype it is a random string generated once at startup, so it
is the same for all CSS/JS references. But that is just the default
implementation of the LiftRules.attachResourceId function; a different
implementation could generate a unique MD5 per individual file. I'm
not convinced that this belongs on the framework side, though.
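Just to make the abstraction concrete: the hook is essentially a
String => String function from a resource path to a decorated path,
and the default just appends a token computed once at startup. A rough
sketch (everything here except the LiftRules.attachResourceId name is
illustrative, not the actual prototype code):

// Rough sketch, not the actual prototype. The default decorates every
// resource path with one random token generated at startup; an
// application could swap in its own function (e.g. an MD5 per file).
object ResourceIdSketch {
  private val instanceToken =
    java.util.UUID.randomUUID.toString.replace("-", "")

  // default behaviour: the same token for every css/js reference
  val default: String => String = path => path + "?v=" + instanceToken
}

// hypothetically, in Boot.boot:
// LiftRules.attachResourceId = ResourceIdSketch.default
// ...or an application-supplied MD5-per-file function instead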

>
> By using MD5 it instead represents the file itself.
>
> Problems with using a token that represents a given resource per
> running instance of Jetty:
> - if the server restarts you use a new token, so all clients are
> forced to re-fetch the 'new' resource

Correct.

> - if you run more than one server, then each server has different
> tokens, so clients think there are different resources

Correct again.
>
> I also like the suggestion that a solution to the consolidation
> problem be kept separate from the problem of generating unique URLs
> for cacheable resources (such as CSS, JavaScript, etc.).

Generating an MD5 that reflects the file content would take some time.
It would indeed happen only once, when a resource is first seen, but
along with it there would be other logic:

1. Detect when files change ... for that we'd need a polling
mechanism, since we wouldn't want to calculate hashes on every page render.
2. Maintain the hash cache (see the sketch after this list).
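Here is a minimal sketch of what I mean by those two pieces. Instead
of a polling thread it remembers each file's lastModified timestamp
and only recomputes the MD5 when the timestamp changes, so a page
render costs at most a stat call. All the names are hypothetical:

import java.io.File
import java.nio.file.Files
import java.security.MessageDigest

// Hypothetical hash cache: path -> (lastModified, md5). The MD5 is
// recomputed only when the file's timestamp changes.
class ResourceHashCache(resolve: String => File) {
  private case class Entry(lastModified: Long, hash: String)
  private val cache = scala.collection.mutable.Map[String, Entry]()

  private def md5(f: File): String =
    MessageDigest.getInstance("MD5")
      .digest(Files.readAllBytes(f.toPath))
      .map("%02x".format(_)).mkString

  // returns the current hash for path, rehashing only if the file changed
  def hashFor(path: String): Option[String] = synchronized {
    val file = resolve(path)
    if (!file.exists) None
    else {
      val lm = file.lastModified
      val entry = cache.get(path) match {
        case Some(e) if e.lastModified == lm => e
        case _ =>
          val fresh = Entry(lm, md5(file))
          cache(path) = fresh
          fresh
      }
      Some(entry.hash)
    }
  }
}

The resolve function would map a request path to the actual file on
disk, however the application lays that out.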

Personally I do not think this is imperative for the framework. I
think it is more important for Lift to allow the flexibility to apply
this kind of logic, and that is what I'm aiming for. I agree with you
that the MD5 approach is more consistent, but the random token per
server instance is not that bad, since its main purpose is not to
optimize resource loading but to provide a minimal mechanism for
forcing browsers to refresh resources when we change CSS/JS on the
server side (which was the original issue). Other people may not
want the MD5 approach at all and may rely more on expiration headers
and so on.

The two cases you describe would be solved by the MD5 approach, but I
don't think it is a disaster if, after a server restart, clients fetch
the CSS again because they think the resource changed.

Furthermore, if one of the committers wants to add this MD5 logic once
this support is in, they can certainly do so. To me, having the proper
abstraction to allow that is more important right now.

>
> > > 2. Consolidation of CSS files on a given page, firstly for performance
> > > and secondly for caching.
>
> > > Would there be times when people would not want the behaviour of 2? I'm
> > > generally not a huge fan of things that mess with user code or could
> > > cause unexpected behaviour; I'm thinking specifically of when people
> > > build their templates so that CSS values are overridden by values loaded
> > > after the initial value, and unless it's munged together right, it might
> > > damage the expected behaviour (think Blueprint)...? Can I suggest we
> > > solve the caching problem using the known hack of random strings, then
> > > deal with this proposal of resource consolidation?
>
> > What I'm playing with is:
>
> > <lift:css.combine>
> >   <res:css name="abc.css"/>
> >   <res:css name="def.css"/>
> > </lift:css.combine>
>
> > Under the hood this would be a function that returns a streaming response
> > that "concatenates" the streams of the files in question, serving them
> > sequentially in the corresponding order. So from Lift's perspective
> > there is no additional computation involved compared with the current
> > situation, except that we serve the desired resources in one response.
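By "concatenates" I mean roughly the helper below; how it gets wrapped
into Lift's streaming response type is left out, and the names are
made up:

import java.io.{File, FileInputStream, InputStream, SequenceInputStream}
import java.util.{ArrayList => JArrayList, Collections}

// Sketch only: one InputStream that yields the contents of the given
// files back to back, in order, so they can be served as one response.
object CssCombineSketch {
  def combinedStream(files: List[File]): InputStream = {
    val streams = new JArrayList[InputStream]()
    files.foreach(f => streams.add(new FileInputStream(f)))
    new SequenceInputStream(Collections.enumeration(streams))
  }
}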
>
> > To sum up, the random string is what I think we should start with. IMO
> > it is a fairly good solution that can evolve over time towards something
> > else.
>
> > > Cheers
>
> > > Tim
>
> > > On 13 Feb 2010, at 08:45, Marius wrote:
>
> > > > On 12 feb., 23:04, Alex Black <a...@alexblack.ca> wrote:
> > > >>> Yes, that's how it should work if everything was configured correctly
> > > >>> (which I think it wasn't for the OP)
>
> > > >> Heh, I'm the OP.
>
> > > >> I'll have to dig into why it's not working as expected, I guess.
>
> > > >>> But what we were discussing (at least I was :-) was more that Lift
> > > >>> should serve resources with an "Expires" date in the far future. That
> > > >>> way the browser will only make a single request for a resource (as
> > > >>> long as the file is cached). This works well for returning visitors.
> > > >>> But of course an updated resource should be fetched, hence the unique
> > > >>> filenames.
>
> > > >> There are some things I like about that solution, but the unique
> > > >> filenames just seem wrong.
>
> > > >> So I see that a far-future Expires header works, but the reason you
> > > >> need the unique filenames is that it doesn't really work on its own.
> > > >> The far-future Expires says "you can cache this for a long time
> > > >> because it won't change".
>
> > > >> The other option is to say "you can cache this for, say, the next hour,
> > > >> but every time you fetch it, tell me when you last got it
> > > >> (conditional GET), and I won't send it to you if it hasn't changed
> > > >> (304 Not Modified)". This results in more requests, but there's no need
> > > >> for unique filenames or anything; instead, if the file changes, the
> > > >> server will serve it up to whoever needs it.
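If I understand correctly, that amounts to roughly the following
(plain servlet API, just a sketch, not Lift code; the one-hour
max-age is only an example):

import java.io.File
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

// Sketch of a conditional GET: compare the client's If-Modified-Since
// header with the file's timestamp and answer 304 when nothing changed.
object ConditionalGetSketch {
  def serve(req: HttpServletRequest, resp: HttpServletResponse, file: File): Unit = {
    // HTTP date headers have one-second resolution
    val lastModified = (file.lastModified / 1000) * 1000
    val ifModifiedSince = req.getDateHeader("If-Modified-Since") // -1 if absent
    if (lastModified > 0 && ifModifiedSince >= lastModified) {
      resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED) // 304, no body sent
    } else {
      resp.setDateHeader("Last-Modified", lastModified)
      resp.setHeader("Cache-Control", "max-age=3600") // "cache for the next hour"
      // ...stream the file contents here...
    }
  }
}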
>
> > > > It doesn't sound like this solution behaves consistently today across
> > > > all "major" browsers. Can you confirm that it does?
> > > > I used the query string solution in the past (like many others) and
> > > > it works reasonably well. It is not a perfect solution,
> > > > but it is better than what we have today. Besides, if we want to adopt
> > > > a different solution later, that would be pretty easy, because this
> > > > knowledge will be built into the snippet and the user code won't
> > > > really change.
>
> > > >>> Combining individual files will improve load times for first time
> > > >>> visitors by reducing the number of requests.
>
> > > >> That sounds like a great idea... I'd like the same thing for JS.
> > > >> Does the YUI Compressor tool that Lift uses with Maven have this type
> > > >> of feature? I thought I read that it did.
>
> > > >>> /Jeppe
