On 6/18/2023 8:37 PM, Carlos via ntg-context wrote:
On Sat, Jun 17, 2023 at 06:53:06PM +0200, Hans Hagen via ntg-context wrote:

On 6/17/2023 2:06 AM, linguafalsa--- via ntg-context wrote:
On Fri, Jun 16, 2023 at 06:35:49PM +0200, Gerben Wierda via ntg-context wrote:
I know this is off topic, but I suspect this community is actually one of the 
best places to find an answer.


It is the best community. And I tell you what.

What happened is that all TeX engines have neglected fonts from the beginning.

Really? When tex showed up, digital font technology was pretty much in flux.
And, with metafont being part of the tex ecosystem, one can argue that tex
was quite innovative too.

Ecosystem. I would be very careful about invoking an ecosystem there.
Yes, yes, the TeX ecosystem is obviously part of TeX, but it is not part
of the ecosystem of fonts either. And what is done within ecosystems
can either benefit or harm them greatly. And it is a known trait that
humans tend toward a flock mentality for no apparent rationally-based
reason, making these decisions or following instincts on their own,
and not because of a particular ecosystem, or for the benefit of the latter.

With ecosystem I mean: tex, metafont, cmr fonts, all kinds of tools ... evolving into more engines, more fonts, macro packages, distributions, user groups and user group journals, meetings etc

And the above does not imply, bear with me here, that metafont was
not innovative, but it can be argued that without TeX there would be no
metafont, so no room is left for error either. So, yes, it must be
innovative. It has to be.

There had to be metafont because there was not much else that could provide what tex needed (at that time).

PostScript and its fonts came around at the same time and were rather closed
technologies. But as soon as possible, backend drivers (also part of the tex
ecosystem) kicked in.

Then we got virtual fonts which enhanced tex's capabilities.

I really like Optima, and what I really like about it is the 'flared style'.

But I would like to move to a flared-sans font that gives me more licensing 
freedom. I haven't been able to find one after extensive searching. The only 
ones that were reasonably priced (not free) were the URW Classico ones in Adobe 
Creative Cloud, but those can only be used in Adobe programs like InDesign (and 
not TeX).


Licensing freedom is an oxymoron. There's no freedom in licensing.
Only greed.

The only extension engine that at one point had a plan in mind,
or most of the bases covered in this regard, was Omega.

One needs more than plans. Afaik omega was more about input processing and
the font part was mostly about going beyond 8-bit fonts, but i might have missed
something (omega was never production ready).

Notwithstanding the intricacies/details of what may have actually
happened during its short lifespan, I think the lack of support behind it
is more than clear. I'm not going to delve into what exactly caused
its demise or whether it was simply the after-effect of other projects
that contributed to it. It's irrelevant.

Hm, its time span was not that short ... I first heard of omega at the eurotex meeting in Arnhem where etex was also discussed (and you don't want me to cite things said there) ... in successive years there have been announcements etc.

However, for an engine to be used it must work reliably, and Giuseppe's 'aleph' was basically a variant of omega that also had etex on board. In fact, that was supported in context mkii (and some used it because of the input processor, which i think was the more innovative thing in omega, but i never dived into it; other users did).

It makes no sense to go into all this, as all teams involved in engines have published in user group journals or presented plans at meetings.

Also keep in mind that we're talking frontend here; omega is dvi based, so, like regular tex and etex, whatever it does with fonts is not really related to the engine but up to the backend: the engine only needs metrics (omega extended tfm into ofm for that).

pdftex brought a pdf backend, xetex pipes into a dvi backend, luatex has a pdf backend built in; (nts, being related to etex, never took off, also because it was not that useable and in the meantime pdftex had taken over); there are afaik some very useable japanese tex engines; the fact that dvi survived was due to dvipdfmx development

But stand by for a second. I look forward to your quick-witted answers. But hear me out.

Suppose that in my prior message I was indeed referring to 'mkii' and
not to 'omega'.

And also suppose for a second that the term 'omega' is to be replaced
with 'mkii' in your reply accordingly.

After careful observation the resemblance is quite possibly identical,
isn't it? And it could inarguably apply to the circumstances as
well. Don't you think?

no it isn't, it's building upon what is there:

mkii     -> mkiv   -> mkxl
(pdf)tex -> luatex -> luametatex

I mean, it's like comparing oranges with apples, and mkii with mkiv and
mkvi and so forth

mkii is there, it works, it is still used, quite stable etc ... it's not some half-way experiment, and mkiv is just a follow-up on it, as is mkxl on mkiv

If you were to tell me then that mkii, for instance, was not aimed
at input processing, I can almost assure you its falsifiability is written
all over it, even before the sentence is processed and thought out loud
by you.

I don't understand (omega input processing is not reading files as tex does, but juggling sequences of characters (tokens) into other sequences suitable for rendering from fonts ... a bit like contextual substitution does in open type). That might eventually, in the engine, lead to mapping glyphs or snippets, but the resulting node list is what the engine then carries on with.
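
For what it's worth, that opentype analogy is easy to play with in mkiv; here is a minimal sketch (the font file name is just an assumption, not something from this thread) where sequences of characters get rewritten into other glyphs by the font machinery instead of by an input processor:

% enable contextual substitution and ligatures on top of the defaults
\definefontfeature[withcontext][default][calt=yes,clig=yes,liga=yes]

\starttext
  \definedfont[file:texgyrepagella-regular*withcontext at 11pt]
  difficult official definition % sequences like ffi and fi come out as ligature glyphs
\stoptext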

Bottom line is that the production-ready part is an obvious byproduct
of its short lifespan, but one cannot make the claim (false, as
would be seen later, because omega's carbon footprint lasted longer
in books than on shelves really, not for selling out fast but rather
for being discontinued quickly) that its goal was solely within this input
processing spectrum. Because it wasn't.

hm, there are a few things that could push omega: a special input processor (for scripts that need it, i think that was the evolving bit), large register ranges (done slightly differently than etex), and directionality (in retrospect only a few made sense, which was admitted by one of the authors; that bit, combined with local boxes, became part of luatex). The mathml stuff in omega never really went operational i think.

But, i think there were more ambitious plans, especially as i remember talks about media and multiple dimensions and languages and so on, but there we also have a chicken-and-egg situation: one reason why luatex works is that it evolved parallel to context, so it got used and functionality got tested; there were also (i think) more plans with etex, but as that got stalled it also stayed with one version and a few things added; pdftex had more impact because it was really used (e.g. in mkii) immediately.

Or heck

So there is a lot involved in engines etc getting accepted, a dedicated user base being one of the factors (could be a few users). A small community like the context one cannot simply jump on any engine. It also depends on needs.

In the end I think that etex and omega both being around as possible extensions at least kept the users open-minded about extensions.

or heck. Let's go even further. By making the dubious assertion
that we've been built with noses to hold our eyeglasses lest these
eyeglasses fall off while reading, or that we've been built with ears
to hold pencils and pens in the ears while thinking and writing.

For crying out loud.

It is xetex that hooked into opentype, although pdftex can actually deal with
truetype fonts to some extent. Before there was something 'opentype' we had
two competing but similar technologies. And it took a while before it was
even clear how to interpret the specification (also think about reverse
engineering fonts and heuristics and ... bugs or features ...). TeX was
always pretty fast in picking up new stuff (maybe users less so).

When it came to commercial fonts, the plan of action was to include
PFC data in these very same commercial fonts, which would
benefit primarily their opentype versions in the long run.

What is PFC data?

The glyph containers in a table-based SFNT format

So just shapes or pieces of shapes?

What do you have right now? Opentype fonts only. Sure. Quality can be
even the same as its type1 counterpart, and at times not so much,
according to some folks who have bothered to go the extra length in
making the most accurate comparison available between the two.

For most fonts it's just 'more shapes', which then also leads to more
ligatures, kerns etc, but that is already nice. And when fonts lack something
we can always tweak them (runtime).
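
A minimal sketch of such a runtime tweak (the feature values and the font are just illustrative, not a recipe): on top of the default features one can, for instance, add a synthetic slant and a bit of extension when a font lacks a proper oblique:

% define a feature set that tweaks the font at load time
\definefontfeature[tweaked][default][slant=.15,extend=1.05,kern=yes,liga=yes]

\starttext
  \definedfont[file:texgyreheros-regular*tweaked at 11pt]
  a slightly slanted, slightly widened heros, just as a test line
\stoptext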

But it's not about kerns nor ligatures but hinting, Hans, hinting, and
that alone right there underlies the whole reason why experts
have been infatuated with digital fonts for ages.

hinting ... well, rendering is not part of the frontend ... tex is shape agnostic, and whether or not hints get passed along is up to a backend, and using them is up to a viewer or printer

in fact, one can argue that metafonts have some kind of hinting as they deal with snapping and rounding

but nowadays hinting ... displays are pretty good so ...

But I guess what you wrote earlier aligns in more ways than one with
what omega's end goal was, whether unwanted or not: routing along to
an opentype tunnel-vision perspective, while shrugging off other
simplistic formats such as afm/tfm by breaking down the glyphs into
smaller pieces. Although I would see it differently, and perhaps
I'm wrong. And perhaps it wasn't practical either, but for some
folks it was worth the extra complexities to undertake at the time,
while for many it wasn't. Regardless, converters to take you over to
the containers' formats and include the tables later on were likely
needed either way. So why bother, right?

I suppose that tex can adapt to anything out there, because most of the time it only constructs stuff and then needs dimensions of what it made.
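
A minimal sketch of that 'only needs dimensions' point (nothing here is specific to this discussion, it is just an illustration): the engine boxes some material and afterwards all it cares about is the width, height and depth:

\starttext
  \setbox\scratchbox\hbox{Optima or whatever flared sans}
  width: \the\wd\scratchbox, height: \the\ht\scratchbox, depth: \the\dp\scratchbox
\stoptext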

(There have been plenty of proposals or experiments i think, but in the end it is usage that makes things work out. We have some font stuff in context that can be used for advanced improvements, but no one uses them because they depend on fonts that never showed up.)

But pfc was the correct method and not others :) and for the right
reasons.

Still unclear to me in what sense. You have to prove it with examples or some reference that explains how it could work out (from user input to reliable rendering).
 Hans

-----------------------------------------------------------------
                                          Hans Hagen | PRAGMA ADE
              Ridderstraat 27 | 8061 GH Hasselt | The Netherlands
       tel: 038 477 53 69 | www.pragma-ade.nl | www.pragma-pod.nl
-----------------------------------------------------------------
