From: <[EMAIL PROTECTED]>
> Unifying Phoenician and Hebrew would be akin to unifying
> Katakana and Hiragana.  *That* would be silly.

One good argument in favor of not unifying Phoenician and Hebrew is that they
are in a situation comparable to Hiragana and Katakana, with one set having a
one-to-one match to the other. Unifying Hiragana and Katakana would have lost
textual semantics in Japanese, because a single merged set could not represent
the distinction between a "native" Japanese word and a transliterated foreign
word. The script distinction itself carries semantic distinctions.
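
To illustrate what that one-to-one match means in practice, here is a rough
sketch in Python (my own illustration, not part of the encoding argument):
the Hiragana and Katakana blocks are laid out in parallel, so converting one
to the other is a constant code-point offset, yet the two sets remain
separately encoded.

    # Hiragana U+3041..U+3096 and Katakana U+30A1..U+30F6 are laid out in
    # parallel, so the one-to-one mapping is a constant offset of 0x60.
    KANA_OFFSET = 0x60

    def hiragana_to_katakana(text: str) -> str:
        """Map each Hiragana letter to its Katakana counterpart, leaving
        everything else (kanji, punctuation, Latin) untouched."""
        return "".join(
            chr(ord(ch) + KANA_OFFSET) if 0x3041 <= ord(ch) <= 0x3096 else ch
            for ch in text
        )

    # "terebi" (television) is a loanword, normally written in Katakana:
    print(hiragana_to_katakana("てれび"))  # -> テレビ

Precisely because the mapping is that trivial, it is the surrounding semantics
(native word versus loanword), not the letter shapes, that justifies keeping
the two sets separately encoded.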

Suppose that a modern Hebrew text discusses Phoenician words: the script
distinction is not only a matter of style but carries semantic distinctions as
well, since they are distinct languages. A modern Hebrew reader will obviously
not be able to decipher a Phoenician word, and may not even understand it once
it is transliterated into the Hebrew script.

Even though there is a continuum here, having the choice between a historic
script and the modern Hebrew script will be useful for writing mixed-script
texts (notably for didactic purposes and popularization books). Without the
distinction in the encoding, it will be difficult to read a mixed-script text
in which both scripts are unified onto the same Unicode code points.

Modern Hebrew, with its pointed extension for historic religious texts, is
already complex enough without adding new historic script styles to that
complexity. Phoenician may appear simple today, but it is likely to be
extended to cover a broad and complex range of historic texts which have
nothing in common with Hebrew. Some branches may even have to be disunified
later, to cover left-to-right variants, early ancestors of Greek, or the early
Brahmi and Arabic scripts.

Such a disunification would be made obvious by the choice of representative
glyphs.

One day the Hebrew script will need to be stabilized to work correctly with
both modern and Biblical usages.

Writers and scholars will then be able to choose the script that best
represents the printed text. I am quite sure that each branch will have its
own distinctive orthographic system, its own set of properties, and so on,
even if there is a superficial one-to-one mapping from one to the other. That
is what was done for Greek and Cyrillic, which are encoded separately from
Latin: it really helps language identification and simplifies text processing.
An overly broad unification of characters that are not already immediately
identifiable by their apparent glyph identity will just create a nightmare.

Now suppose that an author wants to use a Hebrew transliteration: he can do so
quite easily, but he will also make sure that the text is correctly rendered
and interpreted with the Hebrew script. A Phoenician author would have
difficulty creating a text that renders correctly in all the many variants of
the scripts. He will concentrate his efforts on only one of these script
"variants", and this will help improve the study of these old scripts. At any
time, in each script branch, there will be refinements that add a few
script-specific diacritics and marks, or even plain vowel letters, which will
have no correspondence in the Hebrew script.

If we unify Phoenician with Hebrew too early, it will become nearly impossible
to introduce new vowels or newer left-to-right layouts, because the Hebrew
script will become too complex to handle correctly with these additions.

Let's keep Hebrew clean, with only modern Hebrew and traditional pointed
Hebrew... The religious traditions around Hebrew are too strong to allow
importing into it variants and marks coming from separate Phoenician-derived
branches used by non-Hebrew languages. The simple one-to-one mapping will
still be possible for the most direct ancestors of Hebrew, but it will not
work for the many Phoenician-derived branches of which Hebrew is neither an
ancestor nor a descendant.
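
As a rough sketch of what that one-to-one mapping would look like (my own
illustration, assuming Phoenician gets its own block; the code points below
follow the proposed layout of the 22 letters ALF through TAU at
U+10900..U+10915, so treat them as provisional):

    # One-to-one Phoenician -> Hebrew mapping, letter by letter.  Hebrew
    # final forms (kaf/mem/nun/pe/tsadi sofit) and the vowel points have no
    # Phoenician counterpart, so only the 22 base letters appear here.
    PHOENICIAN_TO_HEBREW = {
        chr(0x10900 + i): heb
        for i, heb in enumerate(
            "\u05D0\u05D1\u05D2\u05D3\u05D4\u05D5\u05D6\u05D7\u05D8\u05D9"  # alef..yod
            "\u05DB\u05DC\u05DE\u05E0\u05E1\u05E2\u05E4\u05E6\u05E7\u05E8"  # kaf..resh
            "\u05E9\u05EA"                                                  # shin, tav
        )
    }

    def transliterate_to_hebrew(text: str) -> str:
        """Replace each Phoenician letter with its Hebrew counterpart,
        leaving any other character (separators, numerals, existing Hebrew
        text) untouched."""
        return "".join(PHOENICIAN_TO_HEBREW.get(ch, ch) for ch in text)

The reverse direction is where it breaks down: Hebrew text with final forms,
points or cantillation has no faithful image in Phoenician, which is exactly
the asymmetry described above.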

