On Fri, 29 Apr 2016, Christoph Lohmann <2...@r-36.net> wrote:
> Greetings.
>
> On Fri, 29 Apr 2016 17:58:08 +0200 Jochen Sprickerhof 
> <d...@jochen.sprickerhof.de> wrote:
>> Hi,
>> 
>> just saw this commit:
>> 
>> http://git.suckless.org/sites/commit/?id=6e3450a047c5f7eda300f68814f7b1dfd499119e
>> 
>> Can someone (@Christoph) please specify which version of Webkit and which
>> packaging is meant and what are the symptoms of hell?
>
> The sheer, insane size of the webkit source and build system makes it
> unappealing for an average user to compile webkit on their own. This
> stalls any attempt to patch, extend or strip down webkit.
>
> This basic fact forces users into binary packaging, which is bad,
> especially for webkit1, and tends to pull in dependencies on every
> Open Source project out there.
>
> The symptoms of hell (they apply to other projects too):
>
> * Crashing without an easy way to debug it
>       * You need to download hundreds of megabytes of webkit source and,
>         of course, compile it just to debug it.
>       * This kills any motivation to fix anything.
>       * The bigger the project, the more »magic thinking« happens.
>               * Magic only leads to Arch Linux help forum content.
> * Dependency subhell
>       * Here's where the catholic church banned all unborn children.
>       * Downloading and debugging all the APIs involved is not possible,
>         except for someone who is paid to work on webkit.
>
> Conclusion: If you reach the stage of too many dependencies, or code
> that can only be changed by the one who wrote it, remove your project
> and leave the software industry for something productive. Your future
> hobby enthusiasm will keep the project size small, just by practical
> means.
>
>
> Sincerely,
>
> Christoph Lohmann

So Webkit is problematic.

What would it take to build a basic, sucks-less-than-webkit web engine?

I don't mean something that runs GMail, just basic web browsing: render
text, follow links, show images, in a terminal or under X.

- HTTP client lib (is libcurl ok? oops no! 70k sloc!!!)
- Basic XML / HTML parser lib (simple, hand-rolled, recursive descent?
  see the toy scanner after this list)
- Basic layout engine, rendering frontends (text/md dump, terminal, X)
- Sandboxing (pledge, cgroups, privsep, etc? What can each OS give us?)
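
For the parser, I'm imagining something like the toy tag scanner below.
It only splits input into text and tag tokens (no attributes, entities
or error recovery), and a small recursive-descent parser would sit on
top of it to build the tree. All the names here are made up, this is
just to show how little code the first layer needs:

/* toy tag scanner: splits input into TEXT and TAG tokens only;
 * attributes, entities and error recovery are deliberately left out */
#include <stdio.h>
#include <string.h>

enum toktype { TOK_TEXT, TOK_TAG };

struct token {
        enum toktype type;
        const char *start;      /* points into the input buffer */
        size_t len;
};

/* fill *tok with the next token and advance *p past it;
 * returns 0 at end of input (or on an unterminated tag) */
static int
nexttok(const char **p, struct token *tok)
{
        const char *s = *p, *e;

        if (*s == '\0')
                return 0;
        if (*s == '<') {                /* tag: everything up to '>' */
                e = strchr(s, '>');
                if (e == NULL)
                        return 0;       /* unterminated tag, give up */
                tok->type = TOK_TAG;
                tok->start = s + 1;
                tok->len = (size_t)(e - s - 1);
                *p = e + 1;
        } else {                        /* text: everything up to '<' */
                e = strchr(s, '<');
                if (e == NULL)
                        e = s + strlen(s);
                tok->type = TOK_TEXT;
                tok->start = s;
                tok->len = (size_t)(e - s);
                *p = e;
        }
        return 1;
}

int
main(void)
{
        const char *html = "<p>hello <b>world</b></p>";
        struct token t;

        while (nexttok(&html, &t))
                printf("%4s: %.*s\n", t.type == TOK_TAG ? "tag" : "text",
                       (int)t.len, t.start);
        return 0;
}

A real parser obviously needs entity decoding and the usual tag-soup
recovery rules, but I don't see why that layer has to explode in size.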

Things NOT to do:

- Silly security / privacy holes
- JS, JIT, other insanity
- History, bookmarks, password manager, ...: leave these to external scripts if needed
- Caches (use a local caching proxy)
- Going over N thousand lines of ANSI C (find N)

Things to consider:

- Scriptability, "being UNIX-y"
- HTTPS - is it possible to sandbox the crypto code?
- Cookies - necessary evil? Reject 3rd party cookies by default?
- Basic CSS? How much CSS support is too much?
- Threads? Processes? Async I/O? Best way to fetch / process / render w/o
  freezing the UI? fork + pipes sounds least silly (rough sketch after
  this list)...
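
To make the fork + pipes idea concrete, here's roughly what I have in
mind. The child only fakes a download (no real HTTP in it), and the
pledge() call is OpenBSD-only with promise strings that are only a
guess, a real fetcher would need to pick them carefully:

/* parent = UI, child = fetcher; the parent poll()s the pipe so its
 * event loop never blocks on the network */
#include <sys/types.h>
#include <sys/wait.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        int fds[2];
        pid_t pid;

        if (pipe(fds) == -1 || (pid = fork()) == -1) {
                perror("pipe/fork");
                return 1;
        }

        if (pid == 0) {                 /* fetcher child */
                const char *fake = "<p>pretend this came over HTTP</p>\n";

                close(fds[0]);
#ifdef __OpenBSD__
                /* restrict the child before it touches the network;
                 * the promise list here is only a rough guess */
                pledge("stdio inet dns", NULL);
#endif
                write(fds[1], fake, strlen(fake));
                close(fds[1]);
                _exit(0);
        }

        close(fds[1]);                  /* parent: the "UI" side */
        for (;;) {
                struct pollfd pfd;
                char buf[512];
                long n;

                pfd.fd = fds[0];
                pfd.events = POLLIN;
                /* 10 ms timeout: a real UI would redraw and handle
                 * input here instead of just polling again */
                if (poll(&pfd, 1, 10) <= 0)
                        continue;
                n = (long)read(fds[0], buf, sizeof(buf) - 1);
                if (n <= 0)
                        break;          /* EOF: fetcher is done */
                buf[n] = '\0';
                printf("got %ld bytes: %s", n, buf);
        }
        close(fds[0]);
        wait(NULL);
        return 0;
}

The same split also gives a natural place for privsep: the process that
talks to the network (and the TLS library) never needs to see the
user's files or the X connection.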

Would such a project make sense? Surf is the practical approach: we get
95% of the modern web working OK, at the cost of dealing with a huge
pile of suck.

What about a slightly more radical approach, where we have 95% of only
what we care about working OK, and reduce SLOC by 95%?

What about w3m or lynx? They're 50k and 84k sloc respectively. Maybe a
good place to start? I don't know yet.

Also, I'd like to apologise for the empty talk without any real code; I
don't have time right now to do a prototype, and everyone would hate it
anyway, as it would be in Python :)

K.
