After a day away from e-mail because I was travelling home from Azerbaijan, I found about 100 postings on this subject. I want to reply to several of them, but I will put most of my replies together into this one posting.

On 20/05/2004 16:51, Kenneth Whistler wrote:

...

John Hudson asked, again:



My question, again, is whether there is a need for the plain text distinction in the first place?



And I claim that there is no final answer for this question. We simply have irresolvable differences of opinion, with some asserting that it is self-evident that there is such a need, and others asserting that it is ridiculous to even consider encoding Phoenician as a distinct script, and that there is no such need.

My own take on this seemingly irreconcilable clash of opinion is
that if *some* people assert a need (and if they seem to be
reasonable people instead of crackpots with no demonstrable
knowledge of the standard and of plain text) then there *is*
a need. And that people who assert that there is *no* need
are really asserting that *they* have no need and are making
the reasonable (but fallacious) assumption that since they
are rational and knowledgeable, the fact that *they* have no
need demonstrates that there *is* no need.



Thank you, Ken, for the clear exposition. But I dispute that the need has been demonstrated. So far, as far as I can remember, two people (one Semitic scholar and one Indo-Europeanist) have stated that they have a requirement for a separately encoded Phoenician script, and a much larger number, including some of the top scholars of Semitic languages, have stated that they do not have such a need. Now I agree with what you wrote elsewhere, Ken, that the absolute minimum for standardisation is two users, and we do have this absolute minimum. And I agree that if some people do have a real need, and an understanding of the *script* as well as of the Unicode standard, then in principle the lack of need of others should not stand in the way. But...


If such is the case, then there *is* a need -- the question
then just devolves to whether the need is significant enough
for the UTC and WG2 to bother with it, and whether even if
the need is met by encoding of characters, anyone will actually
implement any relevant behavior in software or design fonts
for it.



... I cannot agree that a need expressed by just two people, working in very different fields (and so unlikely to use this script to communicate with one another!), is significant enough for the UTC and WG2 to bother with.


And this is before coming back to Dean's repeated argument that encoding a new script, even if not many people want it or use it, messes things up for people who don't need and don't use the new script.

In my opinion, Phoenician as a script has passed a
reasonable need test, and has also passed a significant-enough-
to-bother test.



Well, I disagree. The need is not unreasonable, but no one has demonstrated that it is significant enough. It is the sort of thing which should be in the PUA (if the PUA supported RTL scripts, but I won't go back to that issue!) because it is the private need of a couple of individuals which clashes with the need of the majority of scholars in the field.


On 20/05/2004 20:16, James Kass wrote:

A kind list member has advised privately that CAL does now use
Unicode for Syriac text.

...

It's nice to see a transition to Unicode and it's comforting to
see that Unicode Syriac is used rather than Unicode Hebrew
to store and display Syriac text.




Well, we are now being assured that people who want to encode Phoenician, palaeo-Hebrew, etc. as Unicode Hebrew will be quite free to do so indefinitely even if Phoenician is encoded. Does that imply an assurance from James and everyone else in this field that Semitic scholars will never have to endure barbed comments like:

"... it's comforting to see that Unicode Phoenician is used rather than Unicode Hebrew  to 
store and display Phoenician text."


Or are we to expect that as soon as Phoenician is encoded separately, the majority of Semitic scholars who have always opposed this will come under all kinds of pressure to use the encoded script which was added just to meet the requirements of a couple of people, one of whom is not even a Semitic scholar?


On 20/05/2004 19:19, James Kass wrote:

...

But, the original question didn't concern Prof. Kaufman's credentials,
rather it was asked if Prof. Kaufman spoke for himself or if he claimed
to speak for all professionals in the field. (Not that Prof. Kaufman
appeared to make such a claim, rather this claim might be inferred from
something written by Peter Kirk.)



I made no such claim, nor has anyone. My claim is that Kaufman is a top scholar of Aramaic, not that he formally represents other scholars. He is not a top expert in Unicode, and so, given the confusion even among the top experts in this area, it is not surprising that the encodings on his website are currently muddled. I am sure he would appreciate help in tidying them up. Instead, what he is getting is ridicule, and potentially a new script which will make his task much harder, especially if he comes under pressure to use the separate Phoenician script which he considers so ridiculous.


On 20/05/2004 19:45, John Hudson wrote:

... There is no reason at all why Semiticists cannot simply totally ignore the proposed Phoenician block. The important question then, it seems to me, is not whether to encode Phoenician or not, but how to better communicate that the encoding of a particular set of characters does not mean that they have to be used to encode particular texts or languages.

Would that the first sentence were true! As for the second sentence, if the UTC and others really agree with this position, this first needs to be communicated to James Kass. Perhaps what is needed is a sentence somewhere in the Unicode standard making it clear that every encoded script in the standard is only a suggestion for how that script should be represented, rather than any kind of requirement. But then, is that in fact true?

On 20/05/2004 21:06, James Kass wrote:

Dean Snyder wrote,



Your seven-repeated "reasonable" analysis of this engineering issue does
not even mention once, much less address, the PROBLEMS that will be
caused by encoding this diascript.



There seems to be a fear among those opposed to the Phoenician proposal that many people will welcome a separate encoding for the script and begin to use it. These people will create new data from old material and convert existing data to the Phoenician encoding.

Doesn't the idea that so many people will embrace a new Phoenician range
imply that it's the right thing to do?



The fear is rather that a few people, who are not true Semitic scholars, will embrace the new range, and by doing so will make things much harder for the majority who don't need and don't want the new encoding. One of the original purposes of Unicode was to move away from the old situation in which many different incompatible encodings were used for the same language and script. We don't want to get back into that situation.


On 21/05/2004 01:11, Trond Trosterud wrote:


On 21/05/2004 07:30, James Kass wrote:

As a member of the Latin script user community, I'd not be threatened by
a separate encoding for Fraktur. I have Fraktur books in my library.
Whether I've got their titles stored in my database using Latin characters
or abusing math variables is best left to speculation.


Well, actually, it is left to the search engine that searches through your impressive library, looking for books. This means that we need either good search engines, consistent librarians, or a very conservative policy for encoding (the last opportunity was missed already, as we all know).

In this case, if we miss the opportunity of unifying the Semitic scripts, we will forever need such really good search engines to unify the encodings so that Hebrew and Phoenician/Palaeo-Hebrew are found by the same search.
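
Just to make concrete what that unification would demand of software, here is a minimal sketch, in Python, of the sort of folding every search engine would have to apply before Hebrew-encoded and Phoenician-encoded data could match. I am assuming, purely for illustration, the code points suggested in the proposal (U+10900 onwards, in the same alphabetic order as Hebrew U+05D0..U+05EA minus the final forms); nothing here is part of any standard.

# Sketch only: the Phoenician code points below follow the current proposal
# (U+10900 alf .. U+10915 tau) and could change; the targets are the 22
# non-final Hebrew letters U+05D0 ALEF .. U+05EA TAV.
PHOENICIAN_TO_HEBREW = {
    0x10900 + i: h
    for i, h in enumerate([
        0x05D0, 0x05D1, 0x05D2, 0x05D3, 0x05D4, 0x05D5, 0x05D6, 0x05D7,  # alef..het
        0x05D8, 0x05D9, 0x05DB, 0x05DC, 0x05DE, 0x05E0, 0x05E1, 0x05E2,  # tet..ayin (non-final kaf, mem, nun)
        0x05E4, 0x05E6, 0x05E7, 0x05E8, 0x05E9, 0x05EA,                  # pe..tav (non-final pe, tsadi)
    ])
}

# Hebrew final forms must also fold to their base letters, or a search for
# a word ending in mem would still miss the same word spelled with final mem.
HEBREW_FINAL_TO_BASE = {
    0x05DA: 0x05DB,  # final kaf -> kaf
    0x05DD: 0x05DE,  # final mem -> mem
    0x05DF: 0x05E0,  # final nun -> nun
    0x05E3: 0x05E4,  # final pe -> pe
    0x05E5: 0x05E6,  # final tsadi -> tsadi
}

def fold_for_search(text: str) -> str:
    """Map Phoenician letters and Hebrew final forms to base Hebrew letters."""
    out = []
    for ch in text:
        cp = ord(ch)
        cp = PHOENICIAN_TO_HEBREW.get(cp, cp)
        cp = HEBREW_FINAL_TO_BASE.get(cp, cp)
        out.append(chr(cp))
    return "".join(out)

And that folding would have to be repeated in every search engine, database and library that is ever expected to treat the two encodings as one.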

On 21/05/2004 06:22, saqqara wrote:

... Apparently, the majority view here and elsewhere seems to be that Phoenician is a distinctive script family. If so, then the only issues are those factual elements of Michael's proposal and there is no need to continue the discussion here of whether it is needed at all.


Actually, this is not the majority view, at least here. It is the repeatedly expressed view of one script expert, and a few others have supported him (although many of these know little about the script), but the number of those who have disagreed seems to be larger, and that includes most of the experts on Semitic scripts who have expressed an opinion.

On 21/05/2004 08:14, Peter Constable wrote:

...

I think Doug is right. The point is, the situations are *not* analogous:
in the Fraktur case, there is nobody that wants a distinction; in the
Phoenician case, there appear to be people who do.



Yes, but very few of them. I'm sure we could find more than two or three supporters of separate Fraktur encoding if we looked, and without even going to the mathematicians.


Doug's point is, if there are a *lot* of people that will use a separate
Phoenician block, then that will validate that it was a useful thing to
do; but if there are *not*, then the unification-camp has little cause
for concern about existence of distinctly-encoded data.



And my answer to Doug's point is that it only takes a *few* people using a separate Phoenician block, not enough to validate its usefulness, to cause severe compatibility problems for the "unification-camp". Plus, as before, the existence of the block implies that there will be pressure to use it.

On 21/05/2004 17:11, Asmus Freytag wrote:

...

I've never said there was a demand for it; I've only said that lots of
people would USE it if it were encoded. That is my opinion. Do you
disagree that lots of people would use a Fraktur encoding?


For ordinary text, few people will need the separately encoded Fraktur.
It's much easier to enter it as Latin and apply a font shift.

And for ordinary text, few people will need the separately encoded Phoenician (especially because it will be in a higher plane and so not so well supported). It's much easier to enter it as Hebrew and apply a font shift. Therefore, the proposed Phoenician encoding is not useful, or useful only for "few people".

... And if separate Fraktur
and Roman German encodings WERE used you would face the same kinds of
problems we would face with separately encoded Phoenician and Jewish Hebrew.


Precisely. So, if separate Fraktur makes no sense, neither does separate Phoenician.


--
Peter Kirk
[EMAIL PROTECTED] (personal)
[EMAIL PROTECTED] (work)
http://www.qaya.org/



