There are 14 messages in this issue. Topics in this digest:
1.1. Re: Possibly the simplest possible self-segregating morphology
     From: R A Brown
1.2. Re: Possibly the simplest possible self-segregating morphology
     From: Jörg Rhiemeier
1.3. Re: Possibly the simplest possible self-segregating morphology
     From: Logan Kearsley
1.4. Re: Possibly the simplest possible self-segregating morphology
     From: Jim Henry
1.5. Re: Possibly the simplest possible self-segregating morphology
     From: R A Brown
1.6. Oligosynthesis (was:: Possibly the simplest possible self-segregatin
     From: R A Brown
1.7. Re: Possibly the simplest possible self-segregating morphology
     From: Jim Henry
1.8. Re: Oligosynthesis (was:: Possibly the simplest possible self-segreg
     From: Jörg Rhiemeier
2a. Elomi and Ilomi
     From: Anthony Miles
2b. Re: Elomi and Ilomi
     From: Larry Sulky
3a. Re: NATLANG: Isolating Languages Question
     From: Anthony Miles
4a. Re: Reduncancy
     From: David Peterson
4b. Reduncancy
     From: Peter Bleackley
4c. Re: Reduncancy
     From: Jim Henry

Messages
________________________________________________________________________

1.1. Re: Possibly the simplest possible self-segregating morphology
Posted by: "R A Brown" r...@carolandray.plus.com
Date: Sun Sep 12, 2010 1:03 pm ((PDT))

Darn it - this thread would come in a week when I was particularly busy! ;)

Now, hopefully, I can catch up. IME describing something as "possibly the simplest possible" is simply asking to be proved wrong. 'Simplicity' strikes me as one of those terms like 'ease of learning' that get bandied about in certain conlang contexts (usually IME among auxlangers) and about which no one agrees ;)

On 09/09/2010 16:49, Gary Shannon wrote:
> Words are made up of any number of CV syllables where C is a glottal stop, a single consonant, or any one of a number of permitted consonant clusters (as yet unspecified). The first syllable may have a null consonant, i.e. V only.
>
> The first vowel of a word is any vowel other than 'a'. All of the remaining vowels of the word are the vowel 'a'.
> For example:
>
> diva, ropa, upasana, purampada, toskala, osa'atanda ...
>
> The accent falls on the non-a syllable.

I assume, then, that all these words are monomorphemic. Self-segregating morphology means that _morphemes_ are self segregating (I noticed several emails talked about 'words' and I found some confusing in this respect - sure, in a self-segregating morphology set-up we will probably also want to know where compound words end; but that is another matter).

Conlangs with CV patterns that use vowels to mark morpheme boundaries have been around for some time. Way back in 2001 John Cowan described an interesting system for his _xuxi_, see:
http://archives.conlang.info/cae/qeiljhin/dhueqeindhein.html

Admittedly his is a bit less simple than the one Gary outlines above as it depends upon rules of vowel harmony and disharmony. But I find it intriguing.

Another scheme that has CV syllables only and uses vowels to mark morpheme boundaries is my "Scheme C" on:
http://www.carolandray.plus.com/Exp/Appendix2.html

This, arguably, is even simpler than Gary's scheme in that the language has only two vowels to worry about - not five: one front vowel and one back vowel :)

But, as I see it, the one possible drawback of these schemes that depend on a change of vowel is that monosyllabic morphemes are not possible.

--------------------------------------------------------

On 09/09/2010 22:50, Maxime Papillon wrote:
> I can think of a number of self-segregating morphology that I find simpler, but then how can we tell except with "it feels simpler to me"?

Exactly!!

> We could ask who can write the shortest segregating computer program for his morphology, but then we're talking about computers, not about human speakers.

Indeed we would be. If the language is for human communication then IMO it's humans we should be concerned with.
If we achieve a system whereby self-segregation is evident to humans then any computer programmer worth his/her salt should be able to write a program so the machine can do the same.

> The word "simple" doesn't seem adapted to the field of linguistic.

Try telling that to auxlangers ;)

------------------------------

On 10/09/2010 16:53, Jörg Rhiemeier wrote:
[snip]
>> I can't imagine a simpler system than that.
>
> How about what I use in X-3? All morphemes are exactly one phoneme long. Of course, the language is oligosynthetic ...

..and in my 'Experimental Conlang' each morpheme is exactly one syllable long and all the syllables are CV :)

That surely has to be simpler! Of course, the language also has to be oligosynthetic as it has only 64 morphemes. (I'm rapidly going off the idea of an experimental oligosynthetic language and may revert to my original idea of an experimental loglang).

-----------------------------------

On 10/09/2010 17:11, Gary Shannon wrote:
[snip]
> Huffman coding achieves self-segregation by using "prefix free" coding, which is just another way of saying that certain sequences of characters are classified as prefixes only (i.e., word-initial) and imply that more is to follow. In fact, they are classified by exactly how

Huffman coding was used by Jeff Prothero in his 'Plan B' to denote the length of morphemes; he was concerned with bit-patterns (Plan B is IMO computer-centric not anthropocentric). Jacques Guy in his 'Plan C' parody pointed out that for a human it in effect means the initial sound determines the length of the morpheme. I outline such a system in "Scheme A" on:
http://www.carolandray.plus.com/Exp/Appendix2.html

--
Ray
==================================
http://www.carolandray.plus.com
==================================
"Ein Kopf, der auf seine eigene Kosten denkt, wird immer Eingriffe in die Sprache thun." [J.G. Hamann, 1760]
"A mind that thinks at its own expense will always interfere with language".
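[Gary's vowel-marking scheme quoted above is mechanical enough that a complete segmenter fits in a few lines. The sketch below is the editor's toy illustration, not any poster's code; the `segment` name and the five-vowel inventory are assumptions, and the apostrophe glottal stop is simply treated like any other consonant.]

```python
def segment(stream):
    """Split an unbroken stream of Gary-style words into morphemes.

    Rule: a morpheme's first vowel is any vowel other than 'a';
    every later vowel is 'a'.  So each non-'a' vowel starts a new
    morpheme, together with any onset consonant(s) immediately
    before it.
    """
    vowels = set("aeiou")
    starts = []
    for i, ch in enumerate(stream):
        if ch in vowels and ch != "a":
            j = i
            # back up over the onset consonant cluster (or glottal stop)
            while j > 0 and stream[j - 1] not in vowels:
                j -= 1
            starts.append(j)
    starts.append(len(stream))
    return [stream[a:b] for a, b in zip(starts, starts[1:])]

print(segment("divaropaupasana"))  # -> ['diva', 'ropa', 'upasana']
```

[Because every non-initial vowel is 'a', the non-'a' vowel and its onset unambiguously open a new morpheme, which is why the stream needs no separators at all.]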
Messages in this topic (41)
________________________________________________________________________

1.2. Re: Possibly the simplest possible self-segregating morphology
Posted by: "Jörg Rhiemeier" joerg_rhieme...@web.de
Date: Sun Sep 12, 2010 1:48 pm ((PDT))

Hallo!

On Sun, 12 Sep 2010 21:06:01 +0100, R A Brown wrote:
> Darn it - this thread would come in a week when I was particularly busy! ;)
>
> Now, hopefully, I can catch up. IME describing something as "possibly the simplest possible" is simply asking to be proved wrong. 'Simplicity' strikes me as one of those terms like 'ease of learning' that get bandied about in certain conlang contexts (usually IME among auxlangers) and about which no one agrees ;)

Indeed. One should always be careful with such superlatives - it is more like advertising than a useful description.

> [...]
>
> But, as I see it, the one possible drawback of these schemes that depend on a change of vowel is that monosyllabic morphemes are not possible.

Indeed. And that makes them awkward. While I can live with a language where each *lexical* morpheme is at least 2 syllables long, *grammatical* morphemes ought to be as short as possible, and anything above one syllable is too much. In Old Albic, I have lots of grammatical morphemes (and even a few verb roots!) that are just one *phoneme* long (and most lexical roots are one syllable).

> On 10/09/2010 16:53, Jörg Rhiemeier wrote:
> [snip]
> >> I can't imagine a simpler system than that.
> >
> > How about what I use in X-3? All morphemes are exactly one phoneme long. Of course, the language is oligosynthetic ...
>
> ..and in my 'Experimental Conlang' each morpheme is exactly one syllable long and all the syllables are CV :)
>
> That surely has to be simpler!

Indeed.

> Of course, the language also has to be oligosynthetic as it has only 64 morphemes.
> (I'm rapidly going off the idea of an experimental oligosynthetic language and may revert to my original idea of an experimental loglang).

The I Ging was a too restrictive set of meanings, I guess?

I still feel that oligosynthetic languages do not really work, and that is part of the reason why my work with X-3/Quetech is utterly stuck (another part of the reason is that I have enough other projects with higher priority: Old Albic, a web magazine dealing with sustainable living, writing songs for a band I am going to try to found next year, and yet others).

--
... brought to you by the Weeping Elf
http://www.joerg-rhiemeier.de/Conlang/index.html

Messages in this topic (41)
________________________________________________________________________

1.3. Re: Possibly the simplest possible self-segregating morphology
Posted by: "Logan Kearsley" chronosur...@gmail.com
Date: Sun Sep 12, 2010 2:25 pm ((PDT))

>> We could ask who can write the shortest segregating computer program for his morphology, but then we're talking about computers, not about human speakers.
>
> Indeed we would be. If the language is for human communication then IMO it's humans we should be concerned with. If we achieve a system whereby self-segregation is evident to humans then any computer programmer worth his/her salt should be able to write a program so the machine can do the same.

Er... I wouldn't be quite that bold. Human brains are *really* good at pattern matching (hence we can get along without self-segregating natural languages). There's lots of stuff that's blatantly obvious to people but incredibly difficult to program computers to recognize. I don't know what it would be like, but I can definitely imagine a self-segregation scheme that would be highly compatible with human brains, but for that very reason extremely difficult to formalize for a computer.

-l.

Messages in this topic (41)
________________________________________________________________________

1.4. Re: Possibly the simplest possible self-segregating morphology
Posted by: "Jim Henry" jimhenry1...@gmail.com
Date: Sun Sep 12, 2010 3:55 pm ((PDT))

On Sun, Sep 12, 2010 at 5:23 PM, Logan Kearsley <chronosur...@gmail.com> wrote:
>>> We could ask who can write the shortest segregating computer program for his morphology, but then we're talking about computers, not about human speakers.
>> Indeed we would be. If the language is for human communication then IMO it's humans we should be concerned with. If we achieve a system whereby self-segregation is evident to humans then any computer programmer worth
> Er... I wouldn't be quite that bold. Human brains are *really* good at pattern matching (hence we can get along without self-segregating natural languages). There's lots of stuff that's blatantly obvious to people but incredibly difficult to program computers to recognize. I don't know what it would be like, but I can definitely imagine a self-segregation scheme that would be highly compatible with human brains, but for that very reason extremely difficult to formalize for a computer.

And the reverse is probably true as well.

There are probably a very large but finite number of possible self-segregating morphology schemes at any given level of algorithmic complexity. I think someone upthread probably mentioned one of the class at the simplest or second-simplest level[1] -- all morphemes begin with exactly one "k" and continue with zero or more instances of various other graphemes. Schemes of this class are probably *too* simple to be useful -- in them, other important qualities are sacrificed to simplicity. The next-simplest class, I reckon, would include those where there is a set of multiple graphemes of which one instance marks the beginning of a morpheme, which then contains zero or more instances of graphemes not in that set.
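[The two simplest classes Jim describes can each be parsed with a single greedy regular expression. The sketch below is the editor's illustration only; the 'k'-initial scheme and the plosive initial set {p, t, k} are toy examples, not any poster's actual conlang.]

```python
import re

# Simplest class: every morpheme begins with exactly one 'k',
# followed by zero or more non-'k' graphemes.
def parse_k(stream):
    return re.findall(r"k[^k]*", stream)

# Next-simplest class: every morpheme begins with one grapheme from
# an initial set (here the plosives p/t/k), followed by zero or more
# graphemes outside that set.
def parse_set(stream, initials="ptk"):
    return re.findall(rf"[{initials}][^{initials}]*", stream)

print(parse_k("kamakolinkes"))     # -> ['kama', 'kolin', 'kes']
print(parse_set("pamatolinkesa"))  # -> ['pama', 'tolin', 'kesa']
```

[Since a grapheme outside the initial set can only continue the current morpheme, a left-to-right greedy match can never mis-segment a well-formed stream.]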
But within that set of SSM schemes, all of which are equally easy to code a parser for,[2] I suspect some of them are far easier for humans to learn to parse in realtime than others -- namely, those where the set of initial (or terminal) graphemes form a natural set w.r.t. their phonological associations; those would probably be easier to parse than those where the initial or terminal set are a more or less random subset of the language's graphemes.

1. I suspect the set of schemes where all morphemes are exactly the same length are even simpler than those where there is a single terminal or a single nonterminal character.

2. All the above pertains to writing parsers for a stream of ASCII or Unicode text. If we're talking about OCR recognition of an actual original writing system, or speech recognition of a novel conlang phonology, then indeed some graphemes or phonemes would probably be significantly easier for the parser to treat as morpheme boundaries than others, and in these cases I would expect somewhat more (but far from perfect) congruity with which schemes are easier for humans.

http://wiki.frath.net/List_of_self-segregating_morphology_methods

--
Jim Henry
http://www.pobox.com/~jimhenry/

Messages in this topic (41)
________________________________________________________________________

1.5. Re: Possibly the simplest possible self-segregating morphology
Posted by: "R A Brown" r...@carolandray.plus.com
Date: Sun Sep 12, 2010 11:42 pm ((PDT))

On 12/09/2010 22:23, Logan Kearsley wrote:
[snip]
>>
>> Indeed we would be. If the language is for human communication then IMO it's humans we should be concerned with. If we achieve a system whereby self-segregation is evident to humans then any computer programmer worth his/her salt should be able to write a program so the machine can do the same.
>
> Er... I wouldn't be quite that bold.
> Human brains are *really* good at pattern matching (hence we can get along without self-segregating natural languages). There's lots of stuff that's blatantly obvious to people but incredibly difficult to program computers to recognize.

Oh yes, I'm well aware of that. I did do some work in AI at one time in my past life.

> I don't know what it would be like, but I can definitely imagine a self-segregation scheme that would be highly compatible with human brains, but for that very reason extremely difficult to formalize for a computer.

Yep - I probably overstated the case. I guess what I meant is that all the examples of self-segregating morphemes I've come across so far will not be exactly difficult to program - at least as far as written language is concerned. Even those languages that restrict themselves to CV syllables are making things easier for computer voice recognition. But maybe a "self-segregation scheme that would be highly compatible with human brains, but for that very reason extremely difficult to formalize for a computer" will come along - but I haven't met such a beast yet ;)

----------------------------------------

On 12/09/2010 23:53, Jim Henry wrote:
[snip]
>
> And the reverse is probably true as well.

IMO the reverse is definitely true. IMO the Huffman coding scheme of 'Plan B' is not human-friendly - but it's simple to program. I'm sure I could come up with other 'computer-friendly' schemes that would not be exactly easy for human communication, especially spoken communication. Such computer-centric languages do not interest me.

--
Ray
==================================
http://www.carolandray.plus.com
==================================
"Ein Kopf, der auf seine eigene Kosten denkt, wird immer Eingriffe in die Sprache thun." [J.G. Hamann, 1760]
"A mind that thinks at its own expense will always interfere with language".

Messages in this topic (41)
________________________________________________________________________

1.6. Oligosynthesis (was:: Possibly the simplest possible self-segregatin
Posted by: "R A Brown" r...@carolandray.plus.com
Date: Mon Sep 13, 2010 12:01 am ((PDT))

On 12/09/2010 21:46, Jörg Rhiemeier wrote:
> Hallo!
>
> On Sun, 12 Sep 2010 21:06:01 +0100, R A Brown wrote:
[snip]
>
>> Of course, the language also has to be oligosynthetic as it has only 64 morphemes. (I'm rapidly going off the idea of an experimental oligosynthetic language and may revert to my original idea of an experimental loglang).
>
> The I Ging was a too restrictive set of meanings, I guess?

64 morphemes is somewhat restrictive, whether based on the Yì Jīng (I Ching, I Ging, etc) or not. But as the myriad of online offers of Yì Jīng readings show, Yì Jīng has cultic/mystic significance for many. J.R.R. Tolkien was not exactly happy to find New Agers using his Middle Earth; similarly I would not be happy to find my language, if I had developed it, being used for cultic purposes.

> I still feel that oligosynthetic languages do not really work, and that is part of the reason why my work with X-3/Quetech is utterly stuck

Exactly so. I just cannot see how they would work in practice, tho experimenting with them might possibly throw up interesting results.

--
Ray
==================================
http://www.carolandray.plus.com
==================================
"Ein Kopf, der auf seine eigene Kosten denkt, wird immer Eingriffe in die Sprache thun." [J.G. Hamann, 1760]
"A mind that thinks at its own expense will always interfere with language".

Messages in this topic (41)
________________________________________________________________________

1.7. Re: Possibly the simplest possible self-segregating morphology
Posted by: "Jim Henry" jimhenry1...@gmail.com
Date: Mon Sep 13, 2010 5:05 am ((PDT))

On Mon, Sep 13, 2010 at 2:28 AM, R A Brown <r...@carolandray.plus.com> wrote:
> On 12/09/2010 23:53, Jim Henry wrote:
>> And the reverse is probably true as well.
> IMO the reverse is definitely true.
> IMO the Huffman coding scheme of 'Plan B' is not human-friendly - but it's simple to program.

Even within the Huffman coding solution space, it would be trivial to come up with a solution that's equally easy to code as Plan B, but significantly easier for humans to parse (though still probably nowhere near as easy to parse as those that involve fixed-length morphemes, or subsets of initial and terminal phonemes or syllables). For instance, plosives occur at the beginning of one-syllable morphemes, fricatives at the beginning of two-syllable morphemes, etc.

Something that we haven't discussed here recently, though, is *how and when* a self-segregating morphology is or would be helpful. Of course it would make text in an engelang easier to parse with software than that of any natlang, or even fairly regular conlangs without SSM such as Esperanto. But, as Logan pointed out upthread, humans are really good at pattern-recognition, and usually identify the morpheme boundaries in languages they're familiar with without problems, even in languages that present a lot of ambiguities to a typical software parser. When and how and how much would SSM make things easier for humans?

From my experience, I'd suggest that it's most likely to be helpful for intermediate learners, who've already learned the syntax and morphology of the conlang fairly well, but whose vocabulary is still small. People talk about the ambiguous parses of Esperanto compound words, but in my experience such words are almost never ambiguous *in context*, for a fluent speaker. And a fluent speaker, encountering an unfamiliar morpheme in context, is pretty likely to be able to identify it as such (if not, in fact, guess its meaning from context as well) without confounding it with whatever affixes or more familiar root morphemes it's compounded with.
It's for a less fluent learner, whose vocabulary is still small, that being able to unambiguously parse unfamiliar words into their component morphemes before looking up all the unfamiliar morphemes would save time and reduce uncertainty. A few years ago, when I was learning Volapük, I said this in the course of a conversation with a friend who was also learning it:

On Mon, Nov 28, 2005 at 7:01 PM, Jim Henry <jimhenry1...@gmail.com> wrote:
> Yes; "datuval" uses two different "um" affixes, and could be analyzed (before one knows many Vp morphemes) as "da-tuval", "da-tuv-al", "dat-uv-al" "datuv-al", or "datuval" (or maybe even "dat-u-val"?).

And in a situation like that -- reading a text in a conlang and pausing to look up unfamiliar words from time to time -- I doubt whether Huffman encoding would be significantly worse than fixed-length morphemes or initial/terminal phoneme sets. Even the most complex SSM scheme that anyone has seriously proposed would save dictionary lookup time compared to the ambiguity of Volapük compounds for an intermediate student.

On the other hand, for a real-time spoken-language situation -- a conversation or lecture, for instance -- I'm not sure any SSM scheme more complex than fixed-length morphemes would give a significant edge over general human pattern-recognition abilities. One would have to experiment to find out.

--
Jim Henry
http://www.pobox.com/~jimhenry/

Messages in this topic (41)
________________________________________________________________________

1.8. Re: Oligosynthesis (was:: Possibly the simplest possible self-segreg
Posted by: "Jörg Rhiemeier" joerg_rhieme...@web.de
Date: Mon Sep 13, 2010 7:54 am ((PDT))

Hallo!

On Mon, 13 Sep 2010 07:49:16 +0100, R A Brown wrote:
> On 12/09/2010 21:46, Jörg Rhiemeier wrote:
>
> [...]
>
>> The I Ging was a too restrictive set of meanings, I guess?
>
> 64 morphemes is somewhat restrictive, whether based on the Yì Jīng (I Ching, I Ging, etc) or not.

Indeed.
Even Toki Pona has twice as many morphemes!

> But as the myriad of online offers of Yì Jīng readings show, Yì Jīng has cultic/mystic significance for many. J.R.R. Tolkien was not exactly happy to find New Agers using his Middle Earth; similarly I would not be happy to find my language, if I had developed it, being used for cultic purposes.

I understand that very well. Indeed, I have a mild fear that once I have uploaded the complete documentation of Old Albic to the Web, some "Elvish" Otherkin and New Age cranks will abuse it. Probably they'll use only the vocabulary and won't give a damn about its grammar, and come up with a "pidgin Elvish" in which the Old Albic words are used English-wise without any proper inflections.

>> I still feel that oligosynthetic languages do not really work, and that is part of the reason why my work with X-3/Quetech is utterly stuck
>
> Exactly so. I just cannot see how they would work in practice, tho experimenting with them might possibly throw up interesting results.

My plan with X-3 is to start with the vocabulary of Toki Pona, assign a phoneme to each item, with lexical morphemes (including pronouns) being consonants and grammatical morphemes such as conjunctions and semantic relation markers being vowels. Then I shall translate a small text corpus into it and count the phonemes and the syllables. The question is, how much is actually saved in comparison with "normal" languages such as English or Old Albic? On one hand, the morphemes are as short as they can possibly be; on the other, X-3 will have to take recourse to compounds and circumlocutions in order to express many concepts for which root morphemes exist in other languages.

--
... brought to you by the Weeping Elf
http://www.joerg-rhiemeier.de/Conlang/index.html

Messages in this topic (41)
________________________________________________________________________
________________________________________________________________________

2a. Elomi and Ilomi
Posted by: "Anthony Miles" mamercu...@gmail.com
Date: Sun Sep 12, 2010 6:22 pm ((PDT))

Well, between learning Esperanto, composing in Toki Pona, and finishing the Na'gifi Fasu'xa Babel Text, I don't think I'm going to get around to learning Ilomi anytime soon. I do have one question: how different are Elomi and Ilomi? I presume it's not that different, if the lesson plan still works.

Messages in this topic (2)
________________________________________________________________________

2b. Re: Elomi and Ilomi
Posted by: "Larry Sulky" larrysu...@gmail.com
Date: Mon Sep 13, 2010 6:14 am ((PDT))

Elomi marked verbs with an initial "i" and names with an initial "e". Ilomi reversed the two.

On Sun, Sep 12, 2010 at 9:20 PM, Anthony Miles <mamercu...@gmail.com> wrote:
> Well, between learning Esperanto, composing in Toki Pona, and finishing the Na'gifi Fasu'xa Babel Text, I don't think I'm going to get around to learning Ilomi anytime soon. I do have one question: how different are Elomi and Ilomi? I presume it's not that different, if the lesson plan still works.

Messages in this topic (2)
________________________________________________________________________
________________________________________________________________________

3a. Re: NATLANG: Isolating Languages Question
Posted by: "Anthony Miles" mamercu...@gmail.com
Date: Sun Sep 12, 2010 7:03 pm ((PDT))

I doubt Toki Pona, were it to acquire a community in one place, would remain isolating. It's already developed arbitrary meanings for many compounds, so my guess is it would become agglutinative and acquire voiced (pre-nasalized) consonants. If the CBB is anything to go by, that is the most common fate of isolating languages. The newbie starts with an isolating language, and stress patterns change it into agglutinative languages. But for the sake of argument, TP e and en are already easy to mistype, and some persons do reduce all post-tonic syllables to a schwa.
Without a speaking (rather than typing) community, it's hard to tell how various individuals treat secondary stress. Going with Mandarin tones (which I've been using over in Romlang to derive a Mandarinesque Romlang), jan pi tokipona > jap1 to4po1 or ja1 pi2to41. e2 and e1 would come from e and en, assuming e doesn't drop out altogether. Would jan unpa > ja1u1pa2 or ja2nu1pa1? Maybe I'll cook something up in the jan nasa section of the toki pona forum, but of course, I can't claim it under a Creative Commons license.

Messages in this topic (8)
________________________________________________________________________
________________________________________________________________________

4a. Re: Reduncancy
Posted by: "David Peterson" deda...@gmail.com
Date: Mon Sep 13, 2010 4:06 am ((PDT))

On Sep 13, 2010, at 3:34 AM, Peter Bleackley wrote:
> A lot of engelangers try to reduce redundancy in languages, but in real life redundancy is quite useful, because it gives you more chances to work out what somebody has said if you didn't quite catch it. Has anyone ever tried to create a conlang that increases redundancy?

I'd hope that's what most naturalistic conlangs do naturally. For example, anytime you have a conjugation paradigm that differs in person and number, for example, and have these forms cooccur with pronouns, that's redundant. In English, for example, "He hits" is redundant: Either "he hit" or "hits" would be the most one-to-one way to do it.

Another example is pluralization. Any conlang that has a more or less regular plural and has that plural cooccur with numerals is employing redundancy. So, for example, in Zhyler:

demven "water buffalo"
vaj demvenej "three water buffalos"
*vaj demven "three water buffalo"

Though that last example doesn't work quite well in English, as "buffalo" serves just fine as a plural...

There are a few natural languages that work the opposite way.
In Arabic, for example, I've been given to understand that you never say "1 x", you just say "x", hence the famous title /alf layla wa layla/, "thousand night and night", which is "1,001 nights". You also never say "two X", you just use "X-dual". I won't go so far as to say that Arabic nouns have no number marking when they're preceded by a number (that is in something other than the singular or the dual), but it's not a straightforward plural that marks the noun (or not always. I was really confused by this corner of Arabic's grammar...).

My guess is that all conlangs attempting to be naturalistic display some sort of redundancy, either on accident or by design. A conlang that maximizes redundancy might be interesting. There's an example of a language where each nominal is case-marked, each argument of the verb is marked on the verb, and all nouns agree with the verb, so that in a ditransitive sentence, each noun is marked with three cases, and the verb has marking for the subject, object and indirect object. That's probably the most redundant example I've ever seen--nat or con.

-David

*******************************************************************
"A male love inevivi i'ala'i oku i ue pokulu'ume o heki a."
"No eternal reward will forgive us now for wasting the dawn."
-Jim Morrison

http://dedalvs.com/
LCS Member Since 2007
http://conlang.org/

Messages in this topic (3)
________________________________________________________________________

4b. Reduncancy
Posted by: "Peter Bleackley" peter.bleack...@rd.bbc.co.uk
Date: Mon Sep 13, 2010 4:09 am ((PDT))

A lot of engelangers try to reduce redundancy in languages, but in real life redundancy is quite useful, because it gives you more chances to work out what somebody has said if you didn't quite catch it. Has anyone ever tried to create a conlang that increases redundancy?

Pete

Messages in this topic (3)
________________________________________________________________________

4c. Re: Reduncancy
Posted by: "Jim Henry" jimhenry1...@gmail.com
Date: Mon Sep 13, 2010 5:15 am ((PDT))

On Mon, Sep 13, 2010 at 6:34 AM, Peter Bleackley <peter.bleack...@rd.bbc.co.uk> wrote:
> A lot of engelangers try to reduce redundancy in languages, but in real life redundancy is quite useful, because it gives you more chances to work out what somebody has said if you didn't quite catch it. Has anyone ever tried to create a conlang that increases redundancy?

My säb zjed'a has a higher degree of phonological redundancy than many languages -- no two morphemes differ by fewer than two phonemes.
http://www.pobox.com/~jimhenry/conlang/conlang13/intro.htm

But its syntax and morphology (such as it is; it's mostly isolating) are not, I think, unusually redundant.

--
Jim Henry
http://www.pobox.com/~jimhenry/

Messages in this topic (3)
------------------------------------------------------------------------

Yahoo! Groups Links

<*> To visit your group on the web, go to:
    http://groups.yahoo.com/group/conlang/

<*> Your email settings:
    Digest Email | Traditional

<*> To change settings online go to:
    http://groups.yahoo.com/group/conlang/join
    (Yahoo! ID required)

<*> To change settings via email:
    conlang-nor...@yahoogroups.com
    conlang-fullfeatu...@yahoogroups.com

<*> To unsubscribe from this group, send an email to:
    conlang-unsubscr...@yahoogroups.com

<*> Your use of Yahoo! Groups is subject to:
    http://docs.yahoo.com/info/terms/

------------------------------------------------------------------------