‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, September 5, 2020 1:09 PM, Wols Lists <antli...@youngman.org.uk> 
wrote:

> Isn't that how the web originally WAS designed? That the web-site sent
> content and the browser determined how it was displayed?

sort of.  it was not very clear and they could've
gone either direction.  so they had to answer the
question: where to go?  they thought a bit and
concluded:

    "let's go turing-complete with built-in drm
    and enough fluff to make viewing a 2D page
    (e.g. cnn.com) take almost twice as much RAM
    as that of a 3D game (e.g.  quake-iii) [1].
    but remove marquee!"

even though i dislike how the web ended up being,
there is one side effect that i like:

    - making the web turing-complete served as an
      experiment to explore what humans want.  if
      web devs didn't have the power to freely do
      things, we wouldn't have known what they
      want, or which ideas are good/bad.

of course, the web also morphed into other messy
things that had no good side effects, such as drm
and the many information leaks that are so
ridiculous they effectively render
"authentication" redundant; google may identify
us by our browsers' fingerprints and call it a
day.  as if that weren't enough, goog also
graciously gives us x-client-data for free [2].

that said, i think the decades-old experiment is
over, and we've seen enough to conclude a few
things from it.  i suggest that we deprecate
http/js/css/etc, and split the web into two
components:

 (1) page content definition format (PCDF): an
     efficient binary format that only defines
     content, with no presentation information.

     imo this is very doable because, while
     content on the web varies drastically, its
     _types_ are pretty finite (e.g. nav bar,
     copyright notice, related topics, body,
     etc).  i think if we survey websites, it is
     easy to see that there is only a small
     number of content types.

     the client obtains PCDF documents via https,
     then presents them based on the user's
     viewing preferences, which are defined
     purely locally on his computer (the server
     has no business knowing any of it).  this
     way navigation bars, copyright notices, etc
     are placed in a standardized manner for
     every user based on what he cares most
     about.

     this way, we won't need to mess with
     per-website user style sheet hacks.  plus,
     page size will become extremely small and,
     thanks to the binary format, ridiculously
     efficient to render and much more
     responsive.  it would be so fast you'd feel
     the page had loaded even before you clicked
     the link.
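to make the idea concrete, here is a minimal python
sketch of what a PCDF-style encoding could look
like.  the ContentType values, the 1-byte type tag,
and the length-prefixed record layout are all
placeholders i made up for illustration, not a
proposed spec:

```python
import struct
from enum import IntEnum

# hypothetical content types; a real format would need a
# standardized registry derived from surveying websites
class ContentType(IntEnum):
    NAV_BAR = 0
    BODY = 1
    COPYRIGHT = 2
    RELATED = 3

def encode(blocks):
    """encode (type, text) pairs as length-prefixed binary records."""
    out = bytearray()
    for ctype, text in blocks:
        data = text.encode("utf-8")
        # 1-byte type tag + 4-byte big-endian payload length + payload
        out += struct.pack(">BI", ctype, len(data)) + data
    return bytes(out)

def decode(buf):
    """decode the records back into (ContentType, text) pairs."""
    blocks, i = [], 0
    while i < len(buf):
        ctype, length = struct.unpack_from(">BI", buf, i)
        i += 5
        blocks.append((ContentType(ctype), buf[i:i + length].decode("utf-8")))
        i += length
    return blocks

page = [(ContentType.NAV_BAR, "home | about"),
        (ContentType.BODY, "hello, world"),
        (ContentType.COPYRIGHT, "(c) 2020")]
assert decode(encode(page)) == page
```

note there is zero presentation information on the
wire: how a NAV_BAR or COPYRIGHT block is placed and
styled is entirely the client's decision, per the
user's local preferences.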

 (2) application containers:  this is the reason
     the web has javascript support, and it is
     still not clear to me whether we actually
     need it.

     i think this is also very redundant, with
     many alternatives doing basically the same
     thing, such as docker.

     maybe this is just "package manager in a
     glorified chroot"?

     this side is still unclear to me, and i don't
     know where it is going.

---
[1] https://www.networkworld.com/article/3175605
[2] https://www.theregister.com/2020/03/11/google_personally_identifiable_info/


