There are 10 messages in this issue.

Topics in this digest:

1a. Re: NATLANG: Isolating Languages Question    
    From: Alex Fink
1b. Re: NATLANG: Isolating Languages Question    
    From: Alex Fink
1c. Re: NATLANG: Isolating Languages Question    
    From: John Vertical

2.1. Re: Possibly the simplest possible self-segregating morphology    
    From: R A Brown
2.2. Re: Possibly the simplest possible self-segregating morphology    
    From: Jörg Rhiemeier

3a. Re: How did English <u> get /U/ and /V/?    
    From: John Vertical

4. Complete Annotated Na'gifi Fasu'xa Babel Text    
    From: Anthony Miles

5a. Re: Redundancy    
    From: Anthony Miles
5b. Re: Redundancy    
    From: Maxime Papillon
5c. Re: Redundancy    
    From: And Rosta


Messages
________________________________________________________________________
1a. Re: NATLANG: Isolating Languages Question
    Posted by: "Alex Fink" 000...@gmail.com 
    Date: Mon Sep 13, 2010 8:30 am ((PDT))

I don't understand a lot of this:

On Sun, 12 Sep 2010 21:52:01 -0400, Anthony Miles <mamercu...@gmail.com> wrote:

>I doubt Toki Pona, were it to acquire a community in one place, would remain
>isolating. It's already developed arbitrary meanings for many compounds, so
>my guess is it would become agglutinative and acquire voiced (pre-nasalized)
>consonants.

Okay, so maybe compounding might hasten the progression towards
non-isolation generally.  (Although you needn't get univerbated compounds
right off; idioms that aren't syntactically single words are an entirely
ordinary thing.)  But what do prenasalised consonants have to do with
anything?  TP's only clusters are N+C, okay, but I'm not seeing it.

>If the CBB is anything to go by, that is the most common fate of isolating
>languages. The newbie starts with an isolating language, and stress patterns
>change it into agglutinative languages.

Stress patterns?  Like, the stress of the supposedly-isolated word comes to
be lost on one word in compounds?
And are you talking about simulated or external history?  Are these
conlangers making con-diachrony, or revisions to the synchronic conception
of their project?  (Or are other people learning them and inducing these
changes?  I'd be surprised if that was a regular happening on the CBB.)

>But for the sake of argument, TP e and en are already easy to mistype, and
>some persons do reduce all post-tonic syllables to a schwa. Without a
>speaking (rather than typing) community, it's hard to tell how various
>individuals treat secondary stress. Going with Mandarin tones (which I've been
>using over in Romlang to derive a Mandarinesque Romlang), jan pi tokipona >
>jap1 to4po1 or ja1 pi2to41. e2 and e1 would come from e and en, assuming e
>doesn't drop out altogether. Would jan unpa > ja1u1pa2 or ja2nu1pa1? 

I gather you just mean to use the tone numbers in the Mandarin style here
(but then what's 41?).  But if you're talking about the actual Mandarin
tonogenesis, then you should know that final resonants never caused tonal
changes in Mandarin.  Most reconstructions have the original tones coming
from something like a *-∅ : -ʔ : -s contrast which was lost in favour of
phonation; there was a fourth tone in syllables checked by a stop; these
were later redistributed according to the voicing of the initial.  

Alex





Messages in this topic (11)
________________________________________________________________________
1b. Re: NATLANG: Isolating Languages Question
    Posted by: "Alex Fink" 000...@gmail.com 
    Date: Mon Sep 13, 2010 9:03 am ((PDT))

On Sat, 11 Sep 2010 15:15:15 -0700, David Peterson <deda...@gmail.com> wrote:

>Are there any exclusively (or nearly exclusively) isolating languages
>that are NOT tone languages? (I'm thinking of isolating on par with
>Chinese--not mostly isolating with inflection, like English.)

This WALS query 
  http://wals.info/feature/combined?id1=22&id2=13
suggests that Maybrat might be one.  Depends if there really is one category
marked on the verb or zero, I suppose, and if there's lots of derivational
morphology. 

But genuinely isolating languages aren't especially common to begin with,
leaving creoles aside, I thought.  Could this correlation be accidental,
helped out by sprachbund effects?  

I secretly hope it is accidental.  I guess Matthew Martin's interpretation
of McWhorter has something to it:

>I think I know where McWhorter was going with his question-- analytic
>languages are subject to phonetic erosion just like any other language. Since
>nothing can glom onto a word to compensate for the ever shortening words,
>many pairs of recently eroded words will become identical except for the slight
>tonal rise or fall that the recently disappeared consonant caused.  So all old
>analytic languages are expected to be tonal.  

but I'm not sure I see why retention of tones should be favoured as a way to
preserve distinctness over, say, retention of those consonants that are
posited as being lost in this explanation.  And anyway, my impression from the
modern Chinese languages is that they haven't taken any such special efforts
to preserve distinctness, but rather they've made up for phonetic attrition
with pervasive compounding, and this strategy seems available in general.

Alex





Messages in this topic (11)
________________________________________________________________________
1c. Re: NATLANG: Isolating Languages Question
    Posted by: "John Vertical" johnverti...@hotmail.com 
    Date: Mon Sep 13, 2010 2:29 pm ((PDT))

>Are there any exclusively (or nearly exclusively) isolating languages
>that are NOT tone languages? (I'm thinking of isolating on par with
>Chinese--not mostly isolating with inflection, like English.)
>
>-David

Khmer, supposedly.

John Vertical





Messages in this topic (11)
________________________________________________________________________
________________________________________________________________________
2.1. Re: Possibly the simplest possible self-segregating morphology
    Posted by: "R A Brown" r...@carolandray.plus.com 
    Date: Mon Sep 13, 2010 12:52 pm ((PDT))

On 13/09/2010 13:02, Jim Henry wrote:
> On Mon, Sep 13, 2010 at 2:28 AM, R A
> Brown<r...@carolandray.plus.com>  wrote:
>> On 12/09/2010 23:53, Jim Henry wrote:
>>> And the reverse is probably true as well.
>
>> IMO the reverse is definitely true. IMO the Huffman
>> coding scheme of 'Plan B' is not human-friendly - but
>> it's simple to program.
>
> Even within the Huffman coding solution space, it would
> be trivial to come up with a solution that's equally easy
> to code as Plan B, but significantly easier for humans to
> parse

Yes, indeed - I've done it   ;)

> (though still probably nowhere near as easy to
> parse as those that involve fixed-length morphemes, or
> subsets of initial and terminal phonemes or syllables).
> For instance, plosives occur at the beginning of
> one-syllable morphemes, fricatives at the beginning of
> two-syllable morphemes, etc.

Agreed - I don't think a Huffman coding solution (i.e. initial 
sound/syllable determines morpheme length) is ever going to 
feel 'natural' to humans.
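For concreteness, here's a toy sketch (in Python) of the scheme Jim describes, where the class of a morpheme's initial consonant announces its length. The particular phoneme classes and the strict CV syllable shape are my own inventions for illustration, not taken from Plan B or anyone's actual conlang:

```python
# Toy sketch of an "initial sound determines morpheme length" SSM.
# Phoneme classes and the CV-only syllable shape are invented examples.

PLOSIVES = set("ptk")     # begin one-syllable morphemes
FRICATIVES = set("fsx")   # begin two-syllable morphemes

def split_morphemes(word):
    """Split a CV-syllable word into morphemes by each onset's class."""
    # First break the word into CV syllables.
    syllables = [word[i:i + 2] for i in range(0, len(word), 2)]
    morphemes = []
    i = 0
    while i < len(syllables):
        onset = syllables[i][0]
        if onset in PLOSIVES:
            length = 1
        elif onset in FRICATIVES:
            length = 2
        else:
            raise ValueError(f"unparseable onset {onset!r}")
        morphemes.append("".join(syllables[i:i + length]))
        i += length
    return morphemes

print(split_morphemes("tafisaka"))  # ['ta', 'fisa', 'ka']
```

The parser never backtracks: each onset tells it exactly how far to jump, which is the whole point of the scheme.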

> Something that we haven't discussed here recently,
> though, is *how and when* a self-segregating morphology
> is or would be helpful.  Of course it would make text in
> an engelang easier to parse with software than that of
> any natlang, or even fairly regular conlangs without SSM
> such as Esperanto.

I'm told, in fact, that Esperanto is not a particularly easy 
language to write a parser for.  But I admit I've not tried it.

As far as self-segregating morphemes are concerned, it 
depends upon the language. I know when I was learning 
Speedwords in the late 1950s I found it quite confusing not 
being able to spot morpheme boundaries. I think in 
briefscripts and certain types of engelang it is helpful for 
_humans_ to know unambiguously where morpheme boundaries occur.

> But, as Logan pointed out upthread,
> humans are really good at pattern-recognition, and
> usually identify the morpheme boundaries in languages
> they're familiar with without problems, even in languages
> that present a lot of ambiguities to a typical software
> parser.

Yes, agreed. Certainly if one is aiming to produce a 
naturalistic looking artlang self-segregating morphemes are 
not wanted. As I said above, it depends on the language.

> When and how and how much would SSM make things
> easier for humans?

When learning Speedwords    ;)

> From my experience, I'd suggest that it's most likely
> to be helpful
> for intermediate learners, who've already learned the
> syntax and morphology of the conlang fairly well, but
> whose vocabulary is still small.  People talk about the
> ambiguous parses of Esperanto compound words, but in my
> experience such words are almost never ambiguous *in
> context*, for a fluent speaker.

Yep - in context _kataro_ (group of cats) is hardly likely 
to be confused with _kataro_ (catarrh).

[snip]
>
> It's for a less fluent learner, whose vocabulary is still
> small, that being able to ambiguously parse unfamiliar
> words into their component morphemes before looking up
> all the unfamiliar morphemes would save time and reduce
> uncertainty.  A few years ago, when I was learning
> Volapük, I said this in the course of a conversation with
> a friend who was also learning it:

I can well imagine that in Volapük, where words can grow 
quite long, it would be useful to know unambiguously where 
morpheme boundaries occur.

[snip]
>
> On the other hand, for a real-time spoken-language
> situation -- a conversation or lecture, for instance --
> I'm not sure any SSM scheme more complex than
> fixed-length morphemes would give a significant edge over
> general human pattern-recognition abilities.  One would
> have to experiment to find out.

You may well be correct. Piashi (which I've now abandoned) 
was going to have fixed-length morphemes.

-- 
Ray
==================================
http://www.carolandray.plus.com
==================================
"Ein Kopf, der auf seine eigene Kosten denkt,
wird immer Eingriffe in die Sprache thun."
[J.G. Hamann, 1760]
"A mind that thinks at its own expense
will always interfere with language".





Messages in this topic (43)
________________________________________________________________________
2.2. Re: Possibly the simplest possible self-segregating morphology
    Posted by: "Jörg Rhiemeier" joerg_rhieme...@web.de 
    Date: Mon Sep 13, 2010 1:27 pm ((PDT))

Hallo!

On Mon, 13 Sep 2010 20:51:43 +0100, R A Brown wrote:

>  On 13/09/2010 13:02, Jim Henry wrote:
>
>  [...]
>  >  Even within the Huffman coding solution space, it would
>  >  be trivial to come up with a solution that's equally easy
>  >  to code as Plan B, but significantly easier for humans to
>  >  parse
>
>  Yes, indeed - I've done it   ;)
>
>  >  (though still probably nowhere near as easy to
>  >  parse as those that involve fixed-length morphemes, or
>  >  subsets of initial and terminal phonemes or syllables).
>  >  For instance, plosives occur at the beginning of
>  >  one-syllable morphemes, fricatives at the beginning of
>  >  two-syllable morphemes, etc.
>
>  Agreed - I don't think a Huffman coding solution (i.e. initial
>  sound/syllable determines morpheme length) is ever going to
>  feel 'natural' to humans.

Indeed not.  But most self-segregation schemes won't feel
natural to humans (to what else?  Aliens are not yet known;
animals don't use full-fledged languages; computers do not
have feelings).  Some may be so subtle that they don't feel
unnatural - but in those cases, people will be likely to
entirely miss the fact that the language is self-segregating
at all!

>  >  Something that we haven't discussed here recently,
>  >  though, is *how and when* a self-segregating morphology
>  >  is or would be helpful.  Of course it would make text in
>  >  an engelang easier to parse with software than that of
>  >  any natlang, or even fairly regular conlangs without SSM
>  >  such as Esperanto.
>
>  I'm told, in fact, that Esperanto is not a particularly easy
>  language to write a parser for.  But I admit I've not tried it.

Certainly, SSM makes it much easier to parse the language.
At least, you can clearly tell where morphemes begin and end.

>  As far as self-segregating morphemes are concerned, it
>  depends upon the language. I know when I was learning
>  Speedwords in the late 1950s I found it quite confusing not
>  being able to spot morpheme boundaries. I think in
>  briefscripts and certain types of engelang it is helpful for
>  _humans_ to know unambiguously where morpheme boundaries occur.

Yes.  In natural languages, the large amount of redundancy makes
it easier to understand the language even without self-segregation.
Whenever you miss a morpheme boundary, you'll soon notice that
the way you try to parse the utterance becomes ungrammatical,
and retry.  We all do that subconsciously, and therefore need
no self-segregation in our native languages.  But in a dense
language such as Speedwords, which squeezes out just about any
redundancy it could, I can easily understand that self-segregation
is a good thing.

>  >  But, as Logan pointed out upthread,
>  >  humans are really good at pattern-recognition, and
>  >  usually identify the morpheme boundaries in languages
>  >  they're familiar with without problems, even in languages
>  >  that present a lot of ambiguities to a typical software
>  >  parser.
>
>  Yes, agreed. Certainly if one is aiming to produce a
>  naturalistic looking artlang self-segregating morphemes are
>  not wanted. As I said above, it depends on the language.

Sure.  In a naturalistic artlang (such as the ethnic language
of a fictional human nation), self-segregation is not called for.
That is, unless one finds a way to arrive at self-segregating
morphology by a set of natural-looking phonological and grammatical
changes.  My conlang Arne at least gets close to self-segregation
(basically the Konya way), as I have observed further upthread.
I think I could nudge it to true self-segregation without breaking
its naturalness; now that's a challenge.

But the "natural" habitat of self-segregating morphology is of
course engelangs of all stripes.

>  >  When and how and how much would SSM make things
>  >  easier for humans?
>
>  When learning Speedwords    ;)

Yes, when learning a foreign language, you search for morpheme
boundaries a lot, and a self-segregation rule makes that easier.
Hence, it would not be a bad idea to have a simple self-segregation
rule in an IAL.

>  [...]
>
>  I can well imagine that in Volapük, where words can grow
>  quite long, it would be useful to know unambiguously where
>  morpheme boundaries occur.

Yes.  Volapük is an inflection-heavy monster.  Perfectly regular,
yes, but baroque in its morphological richness.  And *ugly*.
(Not that I dislike richly inflected languages; but Volapük
does it badly.)

>  [snip]
>  >
>  >  On the other hand, for a real-time spoken-language
>  >  situation -- a conversation or lecture, for instance --
>  >  I'm not sure any SSM scheme more complex than
>  >  fixed-length morphemes would give a significant edge over
>  >  general human pattern-recognition abilities.  One would
>  >  have to experiment to find out.
>
>  You may well be correct. Piashi (which I've now abandoned)
>  was going to have fixed-length morphemes.

So does X-3 - one phoneme, one morpheme.  The only exception
being proper names.

--
... brought to you by the Weeping Elf
http://www.joerg-rhiemeier.de/Conlang/index.html





Messages in this topic (43)
________________________________________________________________________
________________________________________________________________________
3a. Re: How did English <u> get /U/ and /V/?
    Posted by: "John Vertical" johnverti...@hotmail.com 
    Date: Mon Sep 13, 2010 2:19 pm ((PDT))

On Fri, 10 Sep 2010 17:22:18 +0100, David McCann wrote:
>The point I was trying to make about the neogrammarians is that at least
>some confused two types of change. If a change is unconditional, then it
>necessarily will be universal. If /ü/ > /i/ or whatever, then /ü/ is
>lost and forgotten. But if the change is "X > Y under condition C", then
>the fact that some instances of X remain in the language enables some
>people to ignore the change in some words where you would expect them to
>apply it, and there is ample evidence that they do. The historical
>linguists (but not dialectologists) generally ignored this by pulling
>the rabbit of "dialect mixture" out of their hats.

You do realize dialects don't have to be topolects, right? Someone who picks
up /U/ > /V/ for a certain register but does not do so for another is doing
precisely dialect mixture (of an acrolect and a basilect).

I'm not claiming that there's no option for exceptionlessness tho; you
seemed to be claiming that "under condition C" is in itself, even if the
change is wholly regular, somehow a failing point for the Neogrammarian theory.

In other words, we need to note three layers—
1) U > V
2) U > U after a labial
3) U > V even after a labial
It's 3) which is the problem. Simultaneously, the lack of a "U > U when not
after a labial" layer reveals that 2) *isn't* just a bunch of exceptions
that have banded together to confuse everyone...
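As a toy illustration (my own, not John's), layers 1) and 2) together behave like one ordered, conditioned rewrite rule; layer 3) would be the lexically sporadic residue that no regular rule captures. The labial set and the example words here are invented:

```python
# Layers 1-2 as a single conditioned rule: U > V except after a labial.
# The labial inventory and the test words are invented for illustration.

LABIALS = set("pbfm")

def shift(word):
    out, prev = [], ""
    for ch in word:
        if ch == "U" and prev not in LABIALS:
            out.append("V")   # layer 1: U > V
        else:
            out.append(ch)    # layer 2: U survives after a labial
        prev = ch
    return "".join(out)

print(shift("kUt"))  # kVt
print(shift("pUt"))  # pUt -- blocked; a layer-3 word would shift anyway
```

Layer 3 is exactly what this function cannot express without a word-by-word exception list, which is why it's the problem case.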

John Vertical





Messages in this topic (10)
________________________________________________________________________
________________________________________________________________________
4. Complete Annotated Na'gifi Fasu'xa Babel Text
    Posted by: "Anthony Miles" mamercu...@gmail.com 
    Date: Mon Sep 13, 2010 9:19 pm ((PDT))

The Na'gifi Fasu'xa Babel Text is up at FrathWiki!

http://wiki.frath.net/N%C3%A1%C5%8Bifi_Fas%C3%BAxa#Annotated_Babel_Text





Messages in this topic (1)
________________________________________________________________________
________________________________________________________________________
5a. Re: Redundancy
    Posted by: "Anthony Miles" mamercu...@gmail.com 
    Date: Mon Sep 13, 2010 9:38 pm ((PDT))

Na'gifi Fasu'xa marks _everything_ for number and gender - and still almost 
always requires overt actors.





Messages in this topic (6)
________________________________________________________________________
5b. Re: Redundancy
    Posted by: "Maxime Papillon" salut_vous_au...@hotmail.com 
    Date: Mon Sep 13, 2010 10:23 pm ((PDT))

> Date: Mon, 13 Sep 2010 11:34:50 +0100
> From: peter.bleack...@rd.bbc.co.uk
> Subject: Redundancy
> To: conl...@listserv.brown.edu
> 
> A lot of engelangers try to reduce redundancy in languages, but in real 
> life redundancy is quite useful, because it gives you more chances to 
> work out what somebody has said if you didn't quite catch it. Has anyone 
> ever tried to create a conlang that increases redundancy?
> 
> Pete


I remember sketching a language in which negative sentences were marked away 
from their positive counterparts in at least four points, to somewhat extend 
the normal pattern of double negatives in Romance languages where negativity is 
marked with both a prepositioned marker, and either a negative pronoun as an 
object or a negative determiner.

In my sketch, there were both those things, as well as a morpheme on the verb 
instead of the tense/aspect representing some kind of "never tense" or "doesn't 
happen tense" (because if it's negative, then it doesn't happen, right?), and a 
marker on the subject to replace the nominative case marker by a "not involved 
in the action" one (because if it's negative, it's not really the agent of the 
verb, right?).

So a normal sentence like:

JOHN-nominative EAT-present BANANA-object-plural.

Would be made negative as:

JOHN-unrelated DOESNT EAT-never BANANA-object-zeroth_number.

Or at least that's what I remember. Maybe I was able to slip in one or two 
other negative markers.

I think optative also had a similar overarching influence. Something like:

JOHN-optative WOULD_LOVE_TO EAT-irrealis BANANA-object-hypothetical.

This project didn't go far.
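Just to make the scheme concrete, here's how the four markers could be applied mechanically (a sketch of my own; the tag names follow Maxime's glosses, everything else is invented):

```python
# Turn the positive gloss into its quadruply-marked negative.
# Tag names follow the example glosses above; the function itself
# is an invented illustration of the scheme.

def negate(subject, verb, obj):
    return (
        subject.replace("-nominative", "-unrelated"),    # "not involved" case
        "DOESNT " + verb.replace("-present", "-never"),  # neg particle + never-tense
        obj.replace("-plural", "-zeroth_number"),        # zeroth number on the object
    )

print(negate("JOHN-nominative", "EAT-present", "BANANA-object-plural"))
```

Each of the four replacements is an independent signal of negation, so a listener who misses any one of them can still recover the polarity from the others.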

 

Maxime
                                          




Messages in this topic (6)
________________________________________________________________________
5c. Re: Redundancy
    Posted by: "And Rosta" and.ro...@gmail.com 
    Date: Tue Sep 14, 2010 4:18 am ((PDT))

Peter Bleackley, On 13/09/2010 11:34:
> A lot of engelangers try to reduce redundancy in languages, but in real 
> life redundancy is quite useful, because it gives you more chances to 
> work out what somebody has said if you didn't quite catch it. Has anyone 
> ever tried to create a conlang that increases redundancy?

Lojban, in the gismu (fairly pointlessly) and the digits (usefully) but not in 
the rafsi and rest of the cmavo.

For a general purpose engelang, i.e. one with a natlang's range of uses, I 
think the cons of built-in redundancy in the phonological composition of 
morphemes (mainly extra signal length) greatly outweigh the pros (mainly extra 
signal robustness). Better just to build redundancy into key paradigms such as 
digits, and have available some alpha-bravo-charlie scheme for noisy 
environments.

However, the size of the phoneme inventory (or, more precisely, the size of the 
set of phonetic realizational contrasts) is also determined by a trade-off 
between signal length and robustness, and it's here that the case for 
redundancy -- in this case, phonetic redundancy -- inevitably operates. (More 
phonemes, less phonetic redundancy, shorter words; fewer phonemes, more 
phonetic redundancy, longer words.)
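The arithmetic behind that trade-off is easy to sketch (figures purely illustrative): with N contrasting phonemes and free combination, keeping V morphemes distinct needs words at least ceil(log_N V) phonemes long, so inventory size and minimum word length pull directly against each other.

```python
# Minimum word length needed to keep vocab_size morphemes distinct,
# given `phonemes` contrasting segments and free combination.
# (Illustrative only -- real phonotactics cut the usable space way down.)

def min_word_length(vocab_size, phonemes):
    length, capacity = 0, 1
    while capacity < vocab_size:
        capacity *= phonemes
        length += 1
    return length

for n in (12, 24, 48):
    print(n, min_word_length(10_000, n))  # 12 -> 4, 24 -> 3, 48 -> 3
```

Note the diminishing returns: doubling the inventory from 24 to 48 buys nothing at this vocabulary size, which is part of why the extra phonetic redundancy of a smaller inventory can be worth its cost in length.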

--And.





Messages in this topic (6)




