On 29/04/2004 19:15, Kenneth Whistler wrote:

> Having duly read through this entire discussion about
> Michael Everson's Phoenician encoding proposal, and having
> tried to understand all the points made in the arguments here,
> I was particularly struck by one point that Michael made:
>
>> This Phoenician proposal is not a new proposal. Phoenician
>> proposals have been on the table for more than a decade.
>
> I went back to the earliest encoding proposal, which Rick
> McGowan published in UTR #3 in *1992*. Guess what? It had
> exactly the same 22 Phoenician letters, with almost exactly
> the same names. Michael reordered the alphabet to a more
> acceptable order, made prettier glyphs, and added the four
> attested Phoenician numerals and a word separator. That's it.
>
> Nothing, to my mind, illustrates the utter aridity of the
> discussion that has been going on today better than the fact
> that the essential core of the encoding proposal for Phoenician
> has lain dormant for 12 years with *NO* controversy about
> the identity of the characters. And not a *SINGLE* comment
> has been made, through the 17 yards of discussion on the
> list today, about any technical detail of Michael's
> encoding proposal -- not even about the one and only possibly
> controversial aspect I can see in it, the proposal to encode a
> PHOENICIAN WORD SEPARATOR character.



Ken, I think you have entirely missed the point here. No one is questioning the technical details. The point at issue is the principle of whether this should be encoded as a separate script.

By the same argument, if someone resubmits a proposal for Klingon and no one questions its technical details, that proposal should automatically be accepted. No, it should not, because Klingon does not meet the requirement of being a script in actual use. For the same reason, Phoenician should not be accepted unless it can be demonstrated that it meets the requirements for a script to be encoded.

> Yesterday I posed the question:
>
>    A. Does Phoenician constitute a "landmark" in the Canaanite
>       script continuum? Yes/No
>
> to which I got the mincingly coy, but perhaps predictable,
> answer "Maybe".
>
> Upon reading all the followup discussion, it seems clear that
> the answer really is "Yes, but we don't want to concede
> the point because it would cut through the long argument
> we are having."
>
> The only potentially actionable new thing today that I
> have heard is that *some* people (exact identity unclear)
> *might* be mollified if the *name* of the script proposed
> for encoding were designated "Old Canaanite" instead of
> "Phoenician" -- even though the intended coverage of
> particular historical attestations and styles would be
> identical in either case -- on the chance that it would
> appear more Semito-centric and less Greek-centric in its
> orientation.



For the record, I agree that Old Canaanite would be a better name. The reason is not primarily to be more Semito-centric, but rather to better represent the range of languages covered. By the same reasoning, the Latin script should not be called the English script, because English is only one of many languages that use it.


> My reaction to that is that such a name change would be
> a sop to those who know enough to understand the issues
> of historical range, dates, and style differences anyway,
> namely the Semiticists to whom the terms "Proto-Canaanite",
> "Neo-Punic", "Ammonite", "Moabite", etc., actually signify.
> And it would fail to communicate to the rest of the potential
> users of the script, who may only know the term "Phoenician"
> and not the actual historical complexities of the script.
> Does that really buy anything for the encoding proposal
> that cannot simply be handled by annotation in the eventual
> introductory material describing the encoding?



It buys historical accuracy. Some users of English may not recognise "Latin" as the name of its script, but that is not a good reason to name it "English script".


> Everything else discussed today boils down to a long
> argument about whether *anything* at all should be encoded,
> or whether the entire proposal is superfluous.
>
> Now while I grant that the higher-order question as to
> *whether* a script should be encoded is logically prior
> to worrying about the details of the characters of that
> script once it is determined that it should be encoded
> at all, I'm detecting a great deal of speciousness in
> the argumentation that has been presented so far.
>
> I don't believe that anyone has any realistic technical
> objection to Michael's proposal in any detail, and
> since it is clear that, failing any technical flaw, the
> proposal will proceed to be approved by the character
> encoding committees, the alternative is to attack on the
> basis of a failure of consensus for the *need* for encoding
> the script -- and in particular, to call into question the
> identity of Phoenician *as* a script.
>
> Well, we can then proceed to resolve those questions.
>
> But keep in mind the following observation: a consensus
> among *some* people that they do not have a need for
> an encoding does not constitute a consensus (for the
> encoding committees) that there is no need for an
> encoding.



Understood. But, on the other hand, the lack of a consensus among *any* people that they have a need for an encoding does seem to imply that there is no need for an encoding. I have yet to see ANY EVIDENCE AT ALL that ANYONE AT ALL has a need for this encoding. So I am asking simply that the proposer demonstrate that there is SOME community of users who actually need this encoding in plain text, rather than as graphics. I have been asking for this for several months. The new proposal not only fails to demonstrate this; it indicates that the proposer has not even attempted to find such a community, since he admits to not having contacted any user community.
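
To make the plain text point concrete, here is a minimal sketch in Python. It is purely illustrative: the Phoenician code points are assumed from the proposal's tentative allocation starting at U+10900, and are not approved assignments.

    # The word "bt" under the two approaches being debated.

    # Unification view: reuse the existing Hebrew code points and rely
    # on a Phoenician-style font for the glyphs; in plain text this is
    # indistinguishable from Hebrew.
    as_hebrew = "\u05D1\u05EA"              # HEBREW LETTER BET + TAV

    # Separate-script view: dedicated code points (values assumed from
    # the proposal's tentative allocation, not approved assignments),
    # so the text itself records that it is Phoenician.
    as_phoenician = "\U00010901\U00010915"  # proposed PHOENICIAN BET + TAU

    print(as_hebrew, as_phoenician)

Whether anyone actually needs the second form in plain text, rather than achieving the first with a suitable font, is precisely the question on which I am asking for evidence.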


> Back to the desert...



Which is where this proposal belongs, unless any evidence can be produced that anyone needs this script.


> --Ken




--
Peter Kirk
[EMAIL PROTECTED] (personal)
[EMAIL PROTECTED] (work)
http://www.qaya.org/




