On Tue, Feb 28, 2023 at 6:40 PM B9 <hacke...@gmail.com> wrote:

> By the way, a tokenizer should be able to reduce the file size dramatically
> by simply omitting the string after REM statements. Having it remove
> vestigial lines completely would be slightly trickier and probably require
> a second pass as it'd have to make sure the line was not a target of GOTO
> (or any of the other varied ways of referring to line numbers).
>
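The second pass described above could be sketched roughly as follows. This is a hypothetical Python sketch, not the actual tokenizer: it collects every line number that appears as a jump target (GOTO, GOSUB, THEN, ELSE, RUN, RESTORE) and only then drops comment-only lines that nothing refers to.

```python
import re

# Hedged sketch of the "second pass": find all line numbers that some
# statement jumps to, then remove vestigial (comment-only) lines only
# when they are not among those targets.
TARGET_RE = re.compile(
    r'\b(?:GOTO|GOSUB|THEN|ELSE|RUN|RESTORE)\s+(\d+(?:\s*,\s*\d+)*)',
    re.IGNORECASE)

def referenced_lines(program):
    """Return the set of line numbers some statement jumps to."""
    targets = set()
    for line in program:
        for group in TARGET_RE.findall(line):
            targets.update(int(n) for n in re.split(r'\s*,\s*', group))
    return targets

def drop_vestigial(program):
    """Remove lines whose whole body is a REM or ' comment,
    unless they are the target of a jump."""
    keep_targets = referenced_lines(program)
    out = []
    for line in program:
        m = re.match(r'\s*(\d+)\s*(.*)', line)
        if m:
            num, body = int(m.group(1)), m.group(2)
            if re.match(r"(?:REM\b|')", body, re.IGNORECASE) \
                    and num not in keep_targets:
                continue  # comment-only line, nobody jumps here
        out.append(line)
    return out
```

The comma-separated case covers `ON X GOTO 10,20,30` style dispatch; a real implementation would also need to handle renumbered targets and computed jumps, which is why doing this safely takes a dedicated pass.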
Okay, I made a quick-and-dirty tokenizer that removes comments
<https://github.com/hackerb9/tokenize/tree/decomment> and extra
whitespace. It does not do the second pass to remove unneeded lines, nor
anything fancy like converting ' to REM (which takes up less space). It's
not as good as the renumberer used on TSWEEP, but it's close, creating a
binary that is only 500 bytes larger.
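For the curious, the comment- and whitespace-stripping itself is the easy part. Here is a minimal sketch (plain Python over ASCII source, not the actual hackerb9/tokenize code) that drops everything from a REM or ' onward and squeezes runs of spaces, while leaving string literals alone:

```python
import re

def strip_comments(line):
    """Drop everything from a REM or ' onward, but keep a ' that
    appears inside a "quoted string"."""
    out = []
    in_string = False
    i = 0
    while i < len(line):
        ch = line[i]
        if ch == '"':
            in_string = not in_string
        if not in_string:
            if ch == "'":
                break
            # Match REM as a whole word, not part of e.g. TREMOR.
            if line[i:i+3].upper() == 'REM' \
                    and (i == 0 or not line[i-1].isalnum()) \
                    and (i + 3 == len(line) or not line[i+3].isalnum()):
                break
        out.append(ch)
        i += 1
    return ''.join(out).rstrip()

def squeeze_spaces(line):
    """Collapse runs of whitespace outside string literals to one space."""
    parts = re.split(r'("[^"]*")', line)   # keep quoted chunks intact
    for k in range(0, len(parts), 2):      # even indices are outside strings
        parts[k] = re.sub(r'\s+', ' ', parts[k])
    return ''.join(parts).strip()
```

Note this sketch leaves a dangling colon before a stripped `: REM ...`; trimming that, and the trickier decisions about when whitespace is significant to the tokenizer, are where the "dirty" part comes in.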

—b9
