On Tue, May 31, 2022 at 5:40 PM 'Philip White' via Metamath <
[email protected]> wrote:

> I also have been working on a verifier, which is at
> https://git.sr.ht/~philipwhite/mmweb, if anyone is interested. The
> implementation includes a CLI for verifying files, but the main point
> of the parser/verifier is to power a web UI, which I am currently
> hosting at https://trailingwhite.space/proofs/ if you want to take a
> look (it downloads set.mm on page load, which takes several seconds on
> my connection, but it gets cached by the browser so that future page
> loads are faster). All the design decisions are aimed at enabling the
> web UI to be as good as possible.
>

FYI: I tried loading the page and got "Loaded N", where N rapidly counts up
to 42869, and then nothing happens. This happens even after a reload;
set.txt is cached, so that doesn't seem to be the issue.

One reason I started working on my own verifier instead of
> contributing to a different one is that I thought it would be a fun
> project with a limited enough scope that I could finish it pretty
> quickly. I picked OCaml because that is the language I like best; the
> goal wasn't to make the version that everyone should use.
>
> After I got to the point where the verifier worked well enough, I
> started thinking about interesting things that I could do with such a
> library, and the idea of an interesting web UI came up. At this point,
> I have to answer the question, "why not switch to using a more hardcore
> and battle-tested library like metamath-knife?" I can think of a lot
> of answers to this:
>
> - Using a thing that I wrote and that nobody depends on means that I
>   can change it to fit my needs whenever I want, and without asking
>   anyone else.
>

This kind of approach is fine for toy projects, but for 'real' projects that
you want the community to use, this is not-invented-here syndrome.
Even if you don't want to write rust code, one thing you can do is document
your implementation strategy and feature set so that the code can be ported
by others.


> - It wouldn't at all surprise me if a general-purpose API is difficult
>   to make suitable for web user interfaces, and in particular the type
>   of user interface code that I write. Bonsai, the library I am using,
>   combines the virtual-dom approach popularized by React with a
>   library called Incremental, which preserves work from previous
>   frames whenever it can. To keep Incremental happy, it's best to use
>   persistent data structures and to make updates in a way that shares
>   as much memory structure as possible, since that allows Incremental
>   to short-circuit computations it has already done. However, this
>   approach seems awful if your goal is to make a verifier that goes as
>   fast as possible over the entire file.
>

I think this opinion is based on an incorrect guess about how things are
done in mm-knife, if that's what you're referring to. Verifiers are generally
written as you say, but proof assistants have to have a more incremental
view of things, and mm-knife is the only proof assistant that actually has
incrementality built in (even mmj2 and MM-PA don't really support
incremental updates).
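
As a rough illustration of the structural-sharing idea being discussed (a
minimal sketch in Rust rather than OCaml, and not the actual Bonsai or
Incremental API): when an update shares the unchanged part of a persistent
structure, an incremental framework can detect the shared part by pointer
identity and skip recomputing anything that depends only on it.

```rust
use std::rc::Rc;

// A persistent list: pushing a new head shares the entire tail with the
// old list instead of copying it.
#[derive(Debug)]
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn push(head: i32, tail: &Rc<List>) -> Rc<List> {
    Rc::new(List::Cons(head, Rc::clone(tail)))
}

// A computation an incremental framework might cache per subtree.
fn sum(l: &Rc<List>) -> i32 {
    match l.as_ref() {
        List::Nil => 0,
        List::Cons(h, t) => *h + sum(t),
    }
}

fn main() {
    let base = Rc::new(List::Nil);
    let old = push(2, &push(1, &base));
    let new = push(3, &old); // old's nodes are shared, not copied

    // An incremental framework can compare subtrees by pointer identity:
    // if the tail is the same allocation as before, its cached result
    // (here, its sum) can be reused instead of recomputed.
    if let List::Cons(_, t) = new.as_ref() {
        assert!(Rc::ptr_eq(t, &old)); // tail unchanged => skip recomputation
    }
    println!("{} {}", sum(&old), sum(&new)); // prints "3 6"
}
```

This is also why the batch-verification and UI use cases pull in different
directions: the sharing discipline that makes caching cheap adds allocation
and indirection that a straight-line whole-file verifier doesn't want.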

- My build can be all in one language, and I don't have to worry about
>   how to link Rust and OCaml together. Maybe it's not that hard to "get
>   it to work", but polyglot systems are always at least a little less
>   convenient; for instance, you only get to support the intersection of
>   the systems that all the languages support.


If you are compiling to WebAssembly, I don't think the system support
really matters; you are ultimately constrained by the browser in the end.
Besides, I don't think target support is an issue these days no matter what
language you choose.


> In my case, I think
>   binding OCaml to Rust and making it work in the browser seems like a
>   not-fun situation.
>

Interestingly, if you are using WebAssembly (EDIT: or compiling to
JavaScript, if the *.bc.js files are any indication), you might not even
need to link them together directly, since they can both just separately
talk to JavaScript.


> I admit it would be a shame if lots of people did a bunch of redundant
> work, and then we end up with a bunch of different verifiers with the
> same feature-set.


Yes, that is exactly the trend I'm worried about. An mm-knife web UI is
definitely coming; I think sorear had a proof of concept of this way back
when smm3 was first introduced.


> However, if what's happening is that a bunch of
> people are writing verifiers so that they can then explore some
> interesting ideas on top of them, that seems like a great situation to
> me.
>

There is a big difference between proof of concept and production tool. I
*strongly* recommend centralizing efforts for the latter, and while it's
fine to do the former in any language / project, I think that it is easy to
start dreaming bigger and planning for more users, adding more features,
and sinking a ton of time into something that ultimately leads to
fragmentation.


> Haskell was the result of lots of people focusing their efforts
> together to do research on lazy functional programming (I think). This
> seems like a big win because decent programming languages take a while
> to build, so it helps a lot if researchers have an already-built
> language as a starting point.
>
> In comparison, metamath verifiers are stupid easy to write. (I don't
> mean you're stupid if you can't make one, but that it's much, much
> easier to make one than to make a programming language compiler.)


Metamath *verifiers* are easy to write, but metamath *proof assistants* not
so much. The reason we need to centralize our fairly small pool of
manpower is that the proof-assistant-style features are not nearly
powerful enough in any system except, to some extent, mmj2 and MM-PA, both
of which are severely hampered by having lost their principal architect /
maintainer.
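
For a sense of why the verifier half is the easy half: a metamath proof is
just a sequence of stack operations, where each step pops previously proved
statements, checks them against an assertion's hypotheses under a
substitution, and pushes the substituted conclusion. Here is a toy sketch of
that core step (in Rust; it omits typecodes, $f/$e hypothesis ordering,
compressed proofs, and the disjoint-variable conditions a real verifier
must also check):

```rust
use std::collections::HashMap;

// Expressions are token sequences, e.g. ["|-", "(", "p", "->", "q", ")"].
type Expr = Vec<String>;

fn expr(s: &str) -> Expr {
    s.split_whitespace().map(String::from).collect()
}

// A previously proved assertion: hypotheses and a conclusion, both of
// which may contain free variables.
struct Assertion {
    hyps: Vec<Expr>,
    concl: Expr,
}

// Apply a variable -> expression substitution to an expression.
fn substitute(e: &Expr, sub: &HashMap<String, Expr>) -> Expr {
    e.iter()
        .flat_map(|tok| match sub.get(tok) {
            Some(repl) => repl.clone(),
            None => vec![tok.clone()],
        })
        .collect()
}

// One proof step: pop one proved statement per hypothesis, check that
// each matches the hypothesis under the substitution, then push the
// substituted conclusion. A whole proof is a sequence of such steps
// (plus pushes of the theorem's own hypotheses).
fn apply_step(
    stack: &mut Vec<Expr>,
    a: &Assertion,
    sub: &HashMap<String, Expr>,
) -> Result<(), String> {
    for hyp in a.hyps.iter().rev() {
        let proved = stack.pop().ok_or("stack underflow")?;
        if proved != substitute(hyp, sub) {
            return Err(format!("hypothesis mismatch: {:?}", proved));
        }
    }
    stack.push(substitute(&a.concl, sub));
    Ok(())
}

fn main() {
    // Modus ponens as an assertion: from |- p and |- ( p -> q ), infer |- q.
    let mp = Assertion {
        hyps: vec![expr("|- p"), expr("|- ( p -> q )")],
        concl: expr("|- q"),
    };
    let mut sub = HashMap::new();
    sub.insert("p".to_string(), expr("ph"));
    sub.insert("q".to_string(), expr("ps"));

    let mut stack = vec![expr("|- ph"), expr("|- ( ph -> ps )")];
    apply_step(&mut stack, &mp, &sub).unwrap();
    println!("proved: {}", stack[0].join(" ")); // prints "proved: |- ps"
}
```

The hard part of a proof *assistant* is everything this sketch takes as
given: finding the substitution by unification, suggesting steps, and
keeping all of it responsive as the user edits.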


> In my
> estimation, at this point, if everybody exploring interesting ideas
> were working on the same codebase, it could be rather inefficient.
> Eventually, once ideas mature, it makes sense to me to think about how
> to combine them all into one system, but now feels too early.
>
> To clarify, what I really mean is that if you like Rust and
> metamath-knife matches closely enough with a thing you're working on,
> then that's a good project to work with. However, if, like me, you
> don't really like Rust (it's fine, I just don't really want to do
> low-level programming),


I shouldn't get into this too much, but Rust is not really that low level.
It is actually pretty comparable to OCaml in its feature set (it was
literally derived from OCaml, among other things, and the original Rust
compiler was written in OCaml).

But language wars are bound to happen no matter what choice you make. There
aren't many language choices that are fast enough for set.mm-scale work and
also high-level enough that "user code" can be written on top for things
like proof automation and UI work.

Mario
