On Wed, 09 Sep 2009 10:06:20 +0200, Anselm R Garbe <garb...@gmail.com> wrote:

2009/9/9 Pinocchio <cchino...@gmail.com>:
I am saying this because even after a lot of marketing muscle and
commercial force, it has been hard for Adobe, Sun and Microsoft to push
their rendering stacks over HTML + Javascript. Flash is the only thing
which gained major adoption... and the picture might change once HTML 5
comes out.


The Flash strategy is definitely what I have in mind.


I guess the problem would be convincing hundreds of millions of people to
install our plugin. Much worse than converting web app developers to our
stack. [I have a feeling I didn't quite get your point here...]


If you can attract the developers, the users will probably follow. The perfect scenario is when a programmer develops a killer application using your technology: users install whatever is required in order to run the app. It seems to me that convincing a developer to use your platform is the extremely difficult part. This is where the technology has to be a lot better in a lot of areas.

Well, before taking the penetration aspect too far -- it is more
important to discuss the actual new web stack first. Key to it is that
it provides benefits wrt the existing web stack in many aspects (like
flash *yuck* or silverlight -- not too sure about silverlight adoption
though); that in itself will drive adoption. (Packaging the new
browser as a plugin for legacy browsers would make a lot of sense,
though, to drive adoption.)

But what I'm more interested in is this:

- knowing the limitations of HTTP and complexity of HTTP/1.1 compliant
web servers, shouldn't the new web stack rely on a new protocol
instead?

I'm not a specialist, but it seems to me that the only real limitation of HTTP is its statelessness, which forces state management at an upper level at the cost of extra complexity. AFAIK caching mechanisms and security/encryption are there, but could easily be simpler.
So it looks like a secondary issue.
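
To make the "state at an upper level" point concrete, here is a minimal sketch (my own illustration, not a proposal from this thread): since HTTP carries no state between requests, the server has to layer a session mechanism on top -- here a plain in-memory dict keyed by a cookie. The names (SESSIONS, Handler) are invented for the example.

    import uuid
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from http.cookies import SimpleCookie

    SESSIONS = {}  # session id -> per-user state; this layer lives above HTTP

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            cookie = SimpleCookie(self.headers.get("Cookie", ""))
            sid = cookie["sid"].value if "sid" in cookie else None
            if sid not in SESSIONS:            # new visitor or unknown cookie
                sid = uuid.uuid4().hex
                SESSIONS[sid] = {"hits": 0}
            SESSIONS[sid]["hits"] += 1         # the state HTTP won't keep for us
            body = f"hits this session: {SESSIONS[sid]['hits']}\n".encode()
            self.send_response(200)
            self.send_header("Set-Cookie", f"sid={sid}")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Handler).serve_forever()

Every request starts from zero at the protocol level; the cookie and the dict are exactly the extra complexity that statelessness pushes onto the application.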

- knowing the limitations of today's web applications, how should the
content be organized? Should there be a strict separation of
data/content and views? What would a scripting interface look like?

The Web has evolved from servers of simple, static, linked-together documents to full-blown applications and two-way communication (FB, Twitter, etc.). All these use-cases coexist nowadays. "Separation of data and views" is clearly a variation on the "code/data" duality. A priori, one should be neutral on this, in order to "perform" in an average way in all use-cases. IOW, it should suck averagely in all cases.

As I see it, a simple, static document should be a program that consists essentially of a few "print" statements for the text, plus some code for link-buttons, font selection, etc. Of course, the scripting language must be chosen so that it doesn't get too much in the way in this case. A full-blown app would obviously be 90% code with a few bits of static text. However, in this approach the content is mixed with the way it is displayed; I think the idea must be refined so that a client may extract the content rather than just displaying it.
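
To make the "document as program" idea tangible, here is a toy sketch -- the Page API is invented for illustration, and since no scripting language has been chosen in this thread, Python stands in. Because the calls build a structured list rather than writing output directly, a client could extract the content instead of just displaying it:

    # Invented, illustrative API: a static "document" as a tiny program.
    class Page:
        def __init__(self):
            self.parts = []              # structured content a client could extract
        def print(self, text, font="serif"):
            self.parts.append((font, text))
        def link(self, text, target):
            self.parts.append(("link", f"{text} -> {target}"))
        def render(self):                # one possible view of the same parts
            for kind, text in self.parts:
                print(f"[{kind}] {text}")

    doc = Page()
    doc.print("Welcome to the page.")            # essentially a "print" statement
    doc.print("A second paragraph.", font="mono")
    doc.link("next chapter", "chapter2")         # a link-button
    doc.render()                                 # display it...
    content = [text for _, text in doc.parts]    # ...or just extract the content

The point is only that one program can serve both a renderer and a content extractor, which is the refinement mentioned above.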

What would extension points look like?

I'm not sure what you're referring to, but one would use the extension mechanism of the scripting language's interpreter.

What about security to begin with?

This is actually two questions:
- security of the connection,
- safety of the interpreter. As someone else pointed out, the whole thing must run in a sandbox.
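
As a toy illustration of interpreter-level sandboxing (my own sketch, not a real design): run the untrusted script text against a whitelist of names instead of the full builtins. Note that in CPython this kind of restriction is famously escapable and is not a real security boundary; a serious sandbox needs OS-level isolation (seccomp, jails, VMs).

    # Toy sandbox: expose a whitelist of names to untrusted code.
    # NOT a real security boundary in CPython; illustration only.
    untrusted = 'print("from the sandbox:", 2 + 2)'

    safe_globals = {"__builtins__": {"print": print}}  # whitelist, not blacklist
    exec(untrusted, safe_globals)

    try:
        exec('open("/etc/passwd")', safe_globals)      # open() was not exposed
    except NameError as e:
        print("blocked:", e)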

- what content should be representable?


The more, the better :) Although one may select only one or two formats for each category of content (image, sound, video, etc.).

When seeking real benefits in a new web stack, the benefits can't
be of a plain "less sucking implementation/standard" nature, because end
users won't care if the underlying technology sucks less or sucks a
lot; they can't decide and they have no strong opinions about it (just as
typical car customers don't really care whether it's an Otto motor, a
Wankel motor or a Boxer).


At first I agreed. But a better implementation or standard is meaningful for the programmer, who can do more and be more productive in a more friendly context; this is a major point if our primary target is the programmer. The user certainly can't tell the mechanical differences between two motors, but he can certainly tell by driving the car if the motor has been changed.

I think the benefits could be in the following areas:

[snipped; agreed]

- consistency: consistent display among all platforms (requires a
clear and explicit standard spec)


The conundrum is that it sets limits to what one can do: a few years back, not all platforms had transparency support, for instance. One might have defined it as an extension, though. The same may happen tomorrow with 3D video, for instance (3D TV is announced for next year).

- performance: better performance (this depends on the content
standard and potentially the protocol)


Yes. Because it lets one do more with the same hardware, and sometimes makes things easier to develop by using brute force.

- security: better security (this might be not a big adoption driver though)


It's not impossible that people will get fed up with the security issues of the current technologies (in particular if the next Facebook worm swallows their accounts). I think there is something to do in the area of transparency: tell people what the program is about to do, in a way they can understand.
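
One way that transparency could look (a hedged sketch; the manifest format and field names are invented here): the page declares the capabilities it wants, and the client translates them into plain words before running anything.

    # Invented manifest format, for illustration only.
    MANIFEST = {
        "network": ["api.example.com"],   # hosts the script may contact
        "storage": "session-only",        # no persistent data kept
        "camera": False,
    }

    def describe(manifest):
        # Turn the declared capabilities into plain-language statements.
        yield "This page may contact: " + ", ".join(manifest["network"])
        yield "It keeps data: " + manifest["storage"]
        if not manifest.get("camera"):
            yield "It cannot use your camera."

    for line in describe(MANIFEST):
        print(line)
    if input("Run this page? [y/N] ").strip().lower() == "y":
        print("running...")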
