There are 20 messages in this issue.

Topics in this digest:

1.1. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: Jim Henry
1.2. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: And Rosta
1.3. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: And Rosta
1.4. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: And Rosta
1.5. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: And Rosta
1.6. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: Logan Kearsley
1.7. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious    
    From: And Rosta

2a. Re: Topic: preverbal/postverbal goodness    
    From: A. Mendes

3a. Re: How to....    
    From: David McCann
3b. Re: How to....    
    From: Arthaey Angosii
3c. Re: How to....    
    From: Michael Everson

4a. Re: The Empire Strikes Back, told in icons    
    From: Adam Walker

5a. Origin on Quantifiers    
    From: Logan Kearsley
5b. Re: Origin on Quantifiers    
    From: neo gu
5c. Re: Origin on Quantifiers    
    From: Marlon Ribeiro

6a. Re: Degree of nouns (or other parts of speech)    
    From: Marlon Ribeiro

7a. Re: Grammaticization of fault/credit    
    From: Marlon Ribeiro
7b. Re: Grammaticization of fault/credit    
    From: Marlon Ribeiro

8a. Re: Learning colors    
    From: Marlon Ribeiro

9.1. Re: number classes    
    From: Marlon Ribeiro


Messages
________________________________________________________________________
1.1. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "Jim Henry" jimhenry1...@gmail.com 
    Date: Mon Apr 25, 2011 6:23 am ((PDT))

On Mon, Apr 25, 2011 at 2:13 AM, Logan Kearsley <chronosur...@gmail.com> wrote:

> Incidentally, Rick Harrison's Plan B addresses this problem explicitly:
> http://www.rickharrison.com/language/plan_b.html

Minor nit: Plan B was created by Jeff Prothero.  Rick Harrison just
hosts it on his website.

Also, you may want to look in the archives of the CONLANG list for
discussions of Plan B;  Jörg Rhiemeier said (in a message on "Plan B
variations" on 2010/3/5):

"It is neither a loglan nor a loglang, only a relex of English
with a phonology that is both naive and bizarre, and a self-segregation
strategy that is original but unwieldy..."

There were discussions in late September 2005 as well.

-- 
Jim Henry
http://www.pobox.com/~jimhenry/





Messages in this topic (68)
________________________________________________________________________
1.2. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "And Rosta" and.ro...@gmail.com 
    Date: Mon Apr 25, 2011 8:27 am ((PDT))

Jim Henry, On 25/04/2011 01:47:
> On Sun, Apr 24, 2011 at 7:14 PM, And Rosta<and.ro...@gmail.com>  wrote:
>
>> I guess that oligosynthesis fails only to the extent that idiomaticity (i.e.
>> noncompositionality) is excluded, and in that case the failure boils down to
>> a matter of minimum vocabulary size (more exactly, the minimum possible
>> number of morphemes or listemes). But is there really a minimum?
>
> I think there must be a minimum; a language with only one morpheme
> would be completely unusable,

It would be like a grunt, an act of pure contentless communication. Or it 
might mean 'banana', in which case it wouldn't be contentless, but it wouldn't 
be very useful either.

> and in one with only two, all compounds
> longer than two morphemes long (and there would have to be many) would
> probably be completely opaque.

Two morphemes affords you two types of grunt. If you're imagining them used to 
make compounds, you're probably imagining compounds that are less than fully 
compositional, in which case each compound adds an extra listeme to the lexicon.

>Thus there must be a minimum,
> somewhere under Toki Pona's (original) 118 and above 2.

I have only a nodding acquaintance with Toki Pona, but that nodding 
acquaintance has given me the impression that TP has many idioms -- i.e. 
conventionalized ways of expressing notions using noncompositional complexes of 
words. IOW it has many more than 118 listemes, and it tells us nothing about 
minimum vocab size.

OTOH, perhaps the TP project, and oligosynthetic projects, should instead be 
seen as a kind of exercise in compositional translucency, in which while the 
idiomatic whole is not fully predictable from its parts, its parts nevertheless 
have some nonarbitrary relation to the whole. Maybe even understood in these 
terms, oligosynthesis fails (as Brett had said it does). It seems to me that if 
oligosynthesis simply aims to minimize the arbitrariness of relations among 
the parts, then there's no problem, while if the aim is to exclude any 
arbitrariness of relations among the parts, then it's doomed to failure.
  
--And.
  





Messages in this topic (68)
________________________________________________________________________
1.3. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "And Rosta" and.ro...@gmail.com 
    Date: Mon Apr 25, 2011 9:02 am ((PDT))

Alex Fink, On 25/04/2011 02:55:
> On Mon, 25 Apr 2011 02:13:58 +0100, And Rosta<and.ro...@gmail.com>  wrote:
>
>> Regarding Fith, I read the online description, interpreted the stack not as
>> a processing method but rather as a proceduralized way of describing
>> syntactic structures, and saw nothing that struck me as conspicuously
>> difficult. However, both you and Joerg had said that the stack manipulations
>> were mindboggling, and I'd been meaning to ask you both if you could work
>> through one or two of the mindbogglingest examples in order to get me to see
>> where the problems lie. With regard to lingering, it's very possible that I
>> misunderstood it, but as I understood it, it is not different from what
>> natural languages' grammars allow, and in both Fith and natlangs it would
>> place a strain on short-term memory. In short, I haven't seen in Fith
>> anything impossible or unnaturally difficult, but that may be because I've
>> failed to recognize the problems it presents.
>
> I'd rather give a few circumscribed examples.

Thanks!

> These involve the stack
> conjunctions, which are if I haven't overlooked anything the only unnatural
> part.  They're described in the grammar as stack operations, but you can
> read right to left and look at them as argument shuffling operations.  I do
> this to draw an analogy to the SE behaviour in Lojban, which you do agree is
> unnaturally difficult.  But this analysis then leads to very silly
> conclusions when you start trying to apply it to lower-valence verbs --
> which I think goes to show exactly that Fith is strictly crazier than Lojban
> on this score.
>
> Let V be a 3-valent verb.  Then A B C V binds A to the third place of V, B
> to the second, and C to the first.  But you can prepose "shen" to V, after
> C, to in effect switch places 1 and 2 of V; or "ronh", to make the new place
> 1 the old place 3, the new place 2 the old place 1, and the new place 3 the
> old place 2; or "lonh", to do the inverse, make the new places 1 2 3 the old
> places 2 3 1.
> And you can string any number of these together, playing the shell game
> switching the places as you move outward, right to left.  So that's just as
> unnatural as SE.

SE is worse. The SE series is like Fith's 'counterrotation', but applicable to 
lists of unlimited length rather than Fith's three. IMO the real killer with SE is that the 
demoted argument is moved not to the front or end of the list of arguments 
following the promoted argument, but rather to the position vacated by the 
promoted argument, which can be in the middle of the list.
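
To fix ideas, here is a minimal sketch of the three conjunctions acting on a 
3-valent verb's arguments (Python; the list representation and names are my 
own illustration, not machinery from the Fith grammar itself):

    # Sketch only: the argument stack as a Python list, with the top of
    # the stack (verb place 1) at the end. Purely illustrative.
    def shen(s):
        """Swap verb places 1 and 2 (the top two stack items)."""
        s[-1], s[-2] = s[-2], s[-1]

    def ronh(s):
        """New place 1 = old place 3, new 2 = old 1, new 3 = old 2."""
        p1, p2, p3 = s[-1], s[-2], s[-3]
        s[-1], s[-2], s[-3] = p3, p1, p2

    def lonh(s):
        """Inverse of ronh: new places 1, 2, 3 = old places 2, 3, 1."""
        p1, p2, p3 = s[-1], s[-2], s[-3]
        s[-1], s[-2], s[-3] = p2, p3, p1

    stack = ['A', 'B', 'C']   # "A B C V": C on top, bound to place 1
    ronh(stack)
    print(stack)              # ['B', 'C', 'A']: old place 3 now fills place 1
    lonh(stack)
    print(stack)              # ['A', 'B', 'C']: lonh undoes ronh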

> Okay, granting that view of things, what does, say, "shen" do on a
> one-argument verb V (er, "modifier", but that's probably a distinction
> without a difference)?  Well, it switches the unique argument place of V
> with an argument place of a _different verb entirely_!  Namely, you do a
> (preorder) traversal of the part of the syntactic tree you've constructed so
> far (as you read right to left), find the first verb which has an argument
> place you haven't started filling yet, and switch that argument with the
> argument of V.

I still don't get it. At the level of phonological form, you have "A B C shen 
V". After processing "shen", the logical-syntactic form is "A C B" (by some 
kind of movement transformation). Then, after processing the monadic V, the l-s 
form is "A C [BV]". There is nothing mindboggling in this, it seems to me.

I must be missing something, though. Please do have the patience to keep on 
trying to explain to me!

> Ditto "lonh", "ronh" on verbs of two or fewer arguments.  And of course the
> shell game can be played on these as well.
>
> Similar things can be done with the duplicating conjunctions.  Maybe these
> aren't so offensive, since they come off looking a lot like coreference
> strategies (even so, "kuu" e.g. is pretty farfetched).

"kuu" seems just an abbreviation for "voi voi".

> "du", for instance,
> preceding a>=2-valent verb, just derives a reflexive with one fewer place
> from it, forcing the first two arguments to corefer.  But on a 1-valent
> verb, it amounts to saying that the argument of this verb is equal to the
> next unfilled argument of a different verb, again found by a tree traversal.

Yes, but why is this weird? It's a kind of anaphora. One can envisage, say, 
"John du rocks thinks" = "John_1 thinks he_1 rocks".

I probably don't know what a tree traversal is.

--And.





Messages in this topic (68)
________________________________________________________________________
1.4. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "And Rosta" and.ro...@gmail.com 
    Date: Mon Apr 25, 2011 9:40 am ((PDT))

Logan Kearsley, On 25/04/2011 06:11:
> On Sun, Apr 24, 2011 at 6:48 PM, And Rosta<and.ro...@gmail.com>  wrote:
>> On the reasonable assumption that the 'syntax of semantics' -- i.e. the
>> structure of the meanings that language has to express -- is very simple
>> (since predicate logic has sufficient expressive power), any arguments that
>> syntax needs to be more complicated than this would need considerable
>> support.
>
> Well, you did just also say:
>
>> (Examples of things claimed impossible (and I'm sure they are):
>> grammatical operations that involve counting (beyond 2); grammatical rules
>> that consist of reordering phonological words.)
>
> and pure predicate logic requires counting, unless you restrict it so
> predicates are only allowed to have a small maximum number of
> arguments and clauses are only allowed to nest a certain maximum
> number of levels.

In what way does predicate logic require counting?

I meant rules referring to the nth element in some ordered structure (where n > 
2).

Are you thinking of the notational device of distinguishing arguments by linear 
order? This is purely notational, extrinsic to the structure of logical forms. 
Moreover, it's only the rules specifying the semantic interpretation of the 
predicate that need to be able to distinguish among the arguments.

> You can get around some of that by adding in arbitrary variable
> assignment (basically, anaphors) and/or relational operators, but
> neither of those seems particularly practical itself. Using a
> relational model to tack predicates together, rather than nesting them
> as arguments, results in excessive reliance on implication, and
> variable assignment would turn practically every conversation into one
> of those introductory textbooks where practically every sentence is
> the definition of a word that you need to understand the next
> sentence.

I'm afraid this para has gone over my head, tho I'd welcome a slower gentler 
version of it.

> You can eliminate counting in argument binding by using named
> arguments, but that's pretty much identical to case marking.

You seem to me to be thinking of methods of linearizing notation of logical 
form, rather than the structure of logical form itself. (But I may be 
misunderstanding you.)
  
> On Sun, Apr 24, 2011 at 7:13 PM, And Rosta<and.ro...@gmail.com>  wrote:
> [...]
>> Regarding Fith, I read the online description, interpreted the stack not as
>> a processing method but rather as a proceduralized way of describing
>> syntactic structures, and saw nothing that struck me as conspicuously
>> difficult. However, both you and Joerg had said that the stack manipulations
>> were mindboggling, and I'd been meaning to ask you both if you could work
>> through one or two of the mindbogglingest examples in order to get me to see
>> where the problems lie.
>
> Alex did a pretty good job, I think, but I'll point out one more
> thing: there's a good reason why Fith has the "stack synchronization"
> operator. It's because Fith requires you to keep in memory an ordered
> list of arbitrary length, and count through it automatically. If you
> forget any item in the list, or mess up their order, despite the fact
> that they haven't been mentioned for the last three complete
> sentences, you will be completely incapable of parsing anything
> correctly, to the extent that you can get parts of different sentences
> hopelessly confused (as Alex described). Thus, every once in a while,
> even an alien with a stack-based brain has to stop and remind
> everybody else how long their list should be. Since human short term
> memory is limited to about 7 items, and that really is *short* term,
> it's not possible for a human to parse an arbitrary Fith discourse,
> which may have multiple grammatically unrelated nested sentences, not
> just nested clauses, in real-time.

Natlangs present these same problems -- natlangs generate sentences that tax or 
overtax short-term memory. So this feature of Fith doesn't strike me as 
intrinsically unnatural or impossible.

Now that you mention it, though, the synchronization operator is unnatural and 
impossible, because it involves counting the number of items on the stack. So 
this is one example of Fith being unnatural and impossible.
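
Concretely, here is a hedged sketch of what such a synchronization check 
amounts to (Python; the representation and names are mine, not anything from 
the Fith description):

    # The speaker's synchronization marker asserts how many items the
    # shared stack should currently hold; a hearer who has dropped or
    # duplicated an item can then detect the mismatch -- but only by
    # counting.
    def check_sync(hearer_stack, asserted_depth):
        if len(hearer_stack) != asserted_depth:
            raise RuntimeError("out of sync: hearer holds %d items, "
                               "speaker asserts %d"
                               % (len(hearer_stack), asserted_depth))

    hearer = ['oil', 'politics']   # the hearer forgot one lingering item
    try:
        check_sync(hearer, 3)      # the speaker's marker says 3
    except RuntimeError as e:
        print(e)                   # out of sync: hearer holds 2 items, ...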

>> With regard to lingering, it's very possible that I
>> misunderstood it, but as I understood it, it is not different from what
>> natural languages' grammars allow, and in both Fith and natlangs it would
>> place a strain on short-term memory.
>
> Lingering is superficially similar to saying something like "Now,
> remember *whatever*; we'll get back to that later", and then going on
> with a long discourse and eventually coming back to it later.
> "Remember *whatever*? It's important because...." But in Fith, the
> listener is expected to just know exactly when you get back to it,
> without restatement. And it's not limited to words or full ideas; it
> can be arbitrary fragments of sentences. For a simple example, it'd be
> like if you could do this in English:
>
> "Oil. Now, here's a whole bunch of stuff, potentially lasting several
> minutes, about initially unrelated-seeming international politics. Is
> important because of what I just said."

You can do it in English:

"Oil, numerous plutocrats from all nations of the world -- and this is 
something I am certain is true, having read it in today's newspaper -- have 
spent many millions in hiring lobbyists to assure governments that we are not 
going to run out of".

So this aspect of Fith is neither unnatural nor impossible. But native speakers 
of Fith might also cope better with English than native speakers of English do.
  
> For a more complex example:
>
> "Did Bob. What. Here's a lot of stuff, informing you about how my day
> has been and what I did yesterday. Did you do last night? Get in
> contact with you?"
>
> And it can get a lot worse than that, especially with duplication
> operators; bits of completely unrelated sentences can get swapped
> around, altering what's lingering mid-discourse.

Both Fith and natlangs generate structures too difficult for humans to cope 
with because of overtaxing short-term memory. Native Fith speakers could cope with 
much more complicated natlang utterances than humans can. And humans could probably 
speak perfectly good Fith (give or take some of the stack conjunctions), and 
understand it perfectly well, so long as the utterances were within short-term 
memory capabilities.

--And.





Messages in this topic (68)
________________________________________________________________________
1.5. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "And Rosta" and.ro...@gmail.com 
    Date: Mon Apr 25, 2011 10:09 am ((PDT))

Logan Kearsley, On 17/04/2011 07:22:
> I've come to the conclusion that mathematical
> simplicity =/= linguistic simplicity; coming up with a minimal set of
> rules that's actually usable is quite a feat.

Can you explain more? If I were asked, I'd say that mathematical simplicity 
does match linguistic simplicity, and that coming up with a minimal set of 
rules that's usable is trivially easy.

But maybe by "linguistic simplicity" you mean not "grammatical simplicity" but 
"ease of use".

As for what is usable, I'd be interested to hear how you gauge usability and 
what you think affects it. From my study of natlangs (or rather, just English), 
I'd say that (1) semanticosyntactic structures are very deep and nodeful, but 
are organized by simple combinatorics, (2) morphophonological structures are 
vastly simpler, in that nonterminal and many terminal nodes in syntax have no 
independent morphophonological exponent, and (3) the correspondences between 
semanticosyntactic structures and phonological structures are quite messy and 
complicated. I wonder if processing cost is driven not by the depth and 
nodefulness of syntactic structures so much as by the number of 
morphophonological exponents of elements in the syntax. In that case, the 
challenge (for the engelanger) is to design the correspondences between 
sentence syntax and sentence phonology in such a way that the latter is as 
simple as possible (yet still, one would want to insist, unambiguously encoding 
the syntax).

--And.





Messages in this topic (68)
________________________________________________________________________
1.6. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "Logan Kearsley" chronosur...@gmail.com 
    Date: Mon Apr 25, 2011 11:02 am ((PDT))

On Mon, Apr 25, 2011 at 7:20 AM, Jim Henry <jimhenry1...@gmail.com> wrote:
> On Mon, Apr 25, 2011 at 2:13 AM, Logan Kearsley <chronosur...@gmail.com> 
> wrote:
>
>> Incidentally, Rick Harrison's Plan B addresses this problem explicitly:
>> http://www.rickharrison.com/language/plan_b.html
>
> Minor nit: Plan B was created by Jeff Prothero.  Rick Harrison just
> hosts it on his website.

Whoops! Thanks for the correction. It even says so right at the top of
the page.....

On Mon, Apr 25, 2011 at 9:58 AM, And Rosta <and.ro...@gmail.com> wrote:
> Alex Fink, On 25/04/2011 02:55:
[...]
>> Okay, granting that view of things, what does, say, "shen" do on a
>> one-argument verb V (er, "modifier", but that's probably a distinction
>> without a difference)?  Well, it switches the unique argument place of V
>> with an argument place of a _different verb entirely_!  Namely, you do a
>> (preorder) traversal of the part of the syntactic tree you've constructed
>> so
>> far (as you read right to left), find the first verb which has an argument
>> place you haven't started filling yet, and switch that argument with the
>> argument of V.
>
> I still don't get it. At the level of phonological form, you have "A B C
> shen V". After processing "shen", the logical-syntactic form is "A C B" (by
> some kind of movement transformation). Then, after processing the monadic V,
> the l-s form is "A C [BV]". There is nothing mindboggling in this, it seems
> to me.
>
> I must be missing something, though. Please do have the patience to keep on
> trying to explain to me!

The mindbogglingness, in my mind, derives from the fact that you
aren't just limited to rearranging the order or position of elements
in a clause, or even in a sentence. You can actually extract any node
of the syntax tree and move it to a completely different tree.
Elements can be moved around anywhere within the entire discourse.
This lets you interleave elements that will, in the end, become parts
of completely different sentences; lingering is the simplest form of
that.

Imagine if, in English, you had a single word that meant "find the
last relative clause that I used, take it out of its original noun
phrase, and wait for me to tell you a new noun phrase to stick it on".
That's the sort of thing you have to deal with in Fith. Except that
you can do it with any syntax node, because Fith doesn't have distinct
types of nodes like human languages do, and you're required to count
(although only up to 3, since Fith lacks a 'select' operator)
backwards through the stack to find the right node. If natlang garden
path sentences are disorienting, well, gosh, Fith just makes it worse.

The process itself is not mindboggling. It is logically quite simple.
But the results - what it can be used to do to the syntax tree - are
incredibly complex and context-dependent. It's kind of like Conway's
Game of Life. The rules are incredibly simple, anybody can understand
them, but no one can predict the results for anything non-trivial.
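
For comparison, the whole of Life fits in a few lines (a standard Python 
sketch, included only to underline how little rule there is):

    from itertools import product

    def life_step(alive):
        """One Game of Life generation; `alive` is a set of (x, y) cells."""
        counts = {}
        for (x, y) in alive:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
        # A cell is alive next step iff it has exactly 3 live neighbours,
        # or exactly 2 and is already alive.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in alive)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(life_step(glider))   # simple rules, hard-to-predict trajectories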

>> "du", for instance,
>> preceding a>=2-valent verb, just derives a reflexive with one fewer place
>> from it, forcing the first two arguments to corefer.  But on a 1-valent
>> verb, it amounts to saying that the argument of this verb is equal to the
>> next unfilled argument of a different verb, again found by a tree
>> traversal.
>
> Yes, but why is this weird? It's a kind of anaphora. One can envisage, say,
> "John du rocks thinks" = "John_1 thinks he_1 rocks".

Which is how it's described - as a replacement for anaphora. But it
doesn't actually function the same way; properly employing it requires
being able to anticipate ahead of time the next time you would use
that anaphora and count off the elements in between the duplication
and the usage.

> I probably don't know what a tree traversal is.

Sequentially moving between nodes in the tree along their connecting edges.
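
A minimal illustration (Python; the node type is an invented stand-in for 
whatever structure a parser actually builds):

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    def preorder(node):
        """Visit a node, then each of its subtrees, left to right."""
        yield node
        for child in node.children:
            yield from preorder(child)

    tree = Node('S', [Node('NP'), Node('VP', [Node('V'), Node('NP')])])
    print([n.label for n in preorder(tree)])   # ['S', 'NP', 'VP', 'V', 'NP']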

On Mon, Apr 25, 2011 at 10:36 AM, And Rosta <and.ro...@gmail.com> wrote:
> Logan Kearsley, On 25/04/2011 06:11:
[...]
>> Well, you did just also say:
>>
>>> (Examples of things claimed impossible (and I'm sure they are):
>>> grammatical operations that involve counting (beyond 2); grammatical
>>> rules
>>> that consist of reordering phonological words.)
>>
>> and pure predicate logic requires counting, unless you restrict it so
>> predicates are only allowed to have a small maximum number of
>> arguments and clauses are only allowed to nest a certain maximum
>> number of levels.
>
> In what way does predicate logic require counting?
>
> I meant rules referring to the nth element in some ordered structure (where
> n > 2).
>
> Are you thinking of the notational device of distinguishing arguments by
> linear order? This is purely notational, extrinsic to the structure of
> logical forms. Moreover, it's only the rules specifying the semantic
> interpretation of the predicate that need to be able to distinguish among
> the arguments.

That's precisely what I'm thinking of. The notational device is of
prime importance, since that's what language *is*. It's extrinsic to
raw logical forms, but you can't vocalize raw logical forms; you have
to have some serialized notation for them. Predicates with short
argument lists can be dealt with easily, as demonstrated by the
existence of configurational languages, as long as they are isolated.
When you start nesting them, though, using one predicate as the
argument to another, you either need parenthesization, which humans
are bad at, or counting, which humans are also bad at.
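
A sketch of the counting burden (Python; the toy arity table and the postfix, 
verb-last stream are my own invention): rebuilding a nested predication from a 
flat, parenthesis-free stream forces the hearer to track every predicate's 
arity:

    ARITY = {'Bob': 0, 'store': 0, 'I': 0, 'go': 2, 'approve': 2}

    def parse_postfix(tokens):
        """Rebuild the predicate tree; misremember one arity and every
        later binding comes out wrong."""
        stack = []
        for tok in tokens:
            args = [stack.pop() for _ in range(ARITY[tok])][::-1]
            stack.append((tok, *args) if args else tok)
        return stack.pop()

    print(parse_postfix(['I', 'Bob', 'store', 'go', 'approve']))
    # ('approve', 'I', ('go', 'Bob', 'store'))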

>> You can get around some of that by adding in arbitrary variable
>> assignment (basically, anaphors) and/or relational operators, but
>> neither of those seems particularly practical itself. Using a
>> relational model to tack predicates together, rather than nesting them
>> as arguments, results in excessive reliance on implication, and
>> variable assignment would turn practically every conversation into one
>> of those introductory textbooks where practically every sentence is
>> the definition of a word that you need to understand the next
>> sentence.
>
> I'm afraid this para has gone over my head, tho I'd welcome a slower gentler
> version of it.

The relational model looks at predicates as statements that establish
a relation between all of the arguments to that predicate. After
you've stated a bunch of relations, you can then use relational
operators to implicitly generate new relations without having to
explicitly state any more predicates. Although relational operators
are still subject to nesting, this solves some of the problems of
composing multiple predicates, but at the expense of requiring perfect
contextual awareness. You see this sort of thing happen in natural
languages with discourses like "Bob and Fred went to the store, and
Bob went fishing. The guy who went to the store and went fishing is
tall, the guy who only went to the store and didn't go fishing is
short." It's way more verbose in English than it would be with
explicit relational operators.
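
That Bob-and-Fred discourse with the relations and operators made explicit (a 
sketch; Python sets standing in for relations):

    went_to_store = {'Bob', 'Fred'}
    went_fishing = {'Bob'}

    # Relational operators derive new relations from stated ones,
    # with no further predication:
    both = went_to_store & went_fishing        # intersection
    store_only = went_to_store - went_fishing  # difference

    print(both, "is tall;", store_only, "is short")
    # {'Bob'} is tall; {'Fred'} is short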

Variable binding is a more natural way of composing things while
extracting nodes to flatten out the tree. Rather than saying "I
approve of the fact that Bob went to the store", I could flatten the
parse tree by saying "X means Bob went to the store. I approve of X.",
or "I approve of X, where X means Bob went to the store." In pure
mathematical form you just use another arbitrarily named variable for
every statement that you want to un-nest; clearly, that quickly
becomes unwieldy in linguistic usage; you have to define a new word
just about every other sentence, and expect the listener to remember
all of your definitions. Natlangs can do it using anaphors, which are
basically variable names that get continually re-used and which are
assigned values implicitly (which often leads to ambiguity), so that
if I actually wanted to flatten the tree in English I would say "Bob
went to the store, and I approve of *that*."
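
The flattening trick in miniature (a sketch; the tuples-for-predications 
representation is mine):

    # Nested form: one predication as the argument of another.
    nested = ('approve', 'I', ('go', 'Bob', 'store'))

    # Flattened with an explicit variable -- "X means Bob went to the
    # store; I approve of X":
    bindings = {'X': ('go', 'Bob', 'store')}
    flat = ('approve', 'I', 'X')

    def resolve(form, bindings):
        """Substitute bound variables to recover the nested form."""
        if isinstance(form, tuple):
            return tuple(resolve(f, bindings) for f in form)
        return resolve(bindings[form], bindings) if form in bindings else form

    assert resolve(flat, bindings) == nested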

>> You can eliminate counting in argument binding by using named
>> arguments, but that's pretty much identical to case marking.
>
> You seem to me to be thinking of methods of linearizing notation of logical
> form, rather than the structure of logical form itself. (But I may be
> misunderstanding you.)

Of course I am. If we're not talking about methods of serialization,
then we're not talking about language, we're just talking about logic.

>> "Oil. Now, here's a whole bunch of stuff, potentially lasting several
>> minutes, about initially unrelated-seeming international politics. Is
>> important because of what I just said."
>
> You can do it in English:
>
> "Oil, numerous plutocrats from all nations of the world -- and this is
> something I am certain is true, having read it in today's newspaper -- have
> spent many millions in hiring lobbyists to assure governments that we are
> not going to run out of".
>
> So this aspect of Fith is neither unnatural nor impossible. But native
> speakers of Fith might also cope better with English than native speakers of
> English do.

Even taking out the parenthetical, that's not a grammatical utterance
in my idiolect of English. I would have to say "Oil is something
that...."

> Both Fith and natlangs generate structures too difficult for humans to cope
> with because of overtaxing short-term memory. Native Fith speakers could cope with
> much more complicated natlang utterances than humans can. And humans could
> probably speak perfectly good Fith (give or take some of the stack
> conjunctions), and understand it perfectly well, so long as the utterances
> were within short-term memory capabilities.

Well, yes. That's what Shallow Fith is. I have no problem with the
idea that humans can *produce* a perfectly correct subset of Fith, and
understand the same subset. But just as you argue that nobody actually
speaks formalized Lojban, they would not actually be speaking formal
Fith. And in comprehension they would be crippled by the fact that
stylistically normal Fith necessarily taxes your brain to an extent
that is avoided in stylistically normal natlangs.

On Mon, Apr 25, 2011 at 11:06 AM, And Rosta <and.ro...@gmail.com> wrote:
> Logan Kearsley, On 17/04/2011 07:22:
>>
>> I've come to the conclusion that mathematical
>> simplicity =/= linguistic simplicity; coming up with a minimal set of
>> rules that's actually usable is quite a feat.
>
> Can you explain more. If I were asked, I'd say that mathematical simplicity
> does match linguistic simplicity, and that coming up with a minimal set of
> rules that's usable is trivially easy.
>
> But maybe by "linguistic simplicity" you mean not "grammatical simplicity"
> but "ease of use".

That is what I mean. The simplest possible sets of grammatical rules
produce parse trees that are too complex to be useful for
understanding all but trivially simple sentences. You need additional
rules (increasing mathematical complexity) to provide ways of
simplifying the parsing structure to make it easier for humans to use
(linguistic simplicity).

> As for what is usable, I'd be interested to hear how you gauge usability and
> what you think affects it. From my study of natlangs (or rather, just
> English), I'd say that (1) semanticosyntactic structures are very deep and
> nodeful, but are organized by simple combinatorics,

They are deep if you write out *all* of it at once. But when I'm
parsing speech, I don't build up a huge X-bar diagram in my head. I
make use of those combinatoric rules to collapse subtrees into single
nodes as quickly as possible. Having multiple kinds of nodes (Noun
phrases vs. prepositional phrases vs. verb phrases, for example,
rather than just generic predicates) increases mathematical
complexity, but makes it a lot easier to figure out how best to
collapse subtrees, which subtrees I need to keep in mind to reference
or to add stuff to, etc. I doubt my brain ever keeps track of any
predicate-equivalent more than three levels deep, and rarely even that
far, no matter how complex the complete logical structure might be.

> (2) morphophonological
> structures are vastly simpler, in that nonterminal and many terminal nodes
> in syntax have no independent morphophonological exponent,
...
> the challenge (for the engelanger) is to design the
> correspondences between sentence syntax and sentence phonology in such a way
> that the latter is as simple as possible (yet still, one would want to
> insist, unambiguously encoding the syntax).

I would argue that those two features are opposed to each other.
Morphophonological structures are simpler than underlying semantic
structures *because* they do not encode all of the necessary syntactic
information explicitly. "As simple as possible while completely
unambiguous" *may* not be particularly simple.
I think it is an achievable goal to make it usably simple, though,
especially seeing as how lots of natlangs tend to be way more complex
than they need to be (as demonstrated by the existence of simpler
natlangs). In fact, Rick Morneau did a pretty good job of it, I think.
But Rick's ideal perfect VSO grammar is far from being the
mathematically simplest representation of underlying logical forms,
because it makes concessions to the strengths and weaknesses of the
human facility for language.

-l.





Messages in this topic (68)
________________________________________________________________________
1.7. Re: Conlangs as Academic Evidence in Linguistic Studies: How Serious
    Posted by: "And Rosta" and.ro...@gmail.com 
    Date: Mon Apr 25, 2011 12:54 pm ((PDT))

Logan Kearsley, On 25/04/2011 07:13:

> We can understand a nesting structure like "I own a dog
> which eats food which comes in a bag I buy from the store that's on
> the corner of the street that they were working on on the day when I
> was late to my office which has a nice air conditioner.... etc., of
> arbitrary depth because of tail optimization. If I ever try to return
> and add another node higher up the parse tree, then it breaks and
> becomes incomprehensible, because I broke the assumptions of tail
> optimization and require you to remember counting information that you
> weren't even keeping track of to bind things properly. A first-pass
> attempt at making predicate logic human-usable is adding some means of
> re-arranging arguments to allow for more tail optimization, which I
> did with Palno.

("Making predicate logic human-usable" is a misdescription, but I'll address 
that in a reply to another msg of yours.)

This tail optimization is a specific instance of two more general principles. 
If phonological words are organized into a dependency stemma (i.e. a tree in 
which all nodes are phonological words (with their linear order intact)), then 
processing difficulty increases with (A) the number of branches crossing a node 
(because each branch is being held in short-term memory), and (B) the number of 
nodes a branch crosses (because this affects the length of time the branch has 
to be held in memory). (The same principles can doubtless be stated in headed 
PSG terms, but I can't be arsed to work out how to formulate it.) Tail 
optimization caters to (B), reducing the number of nodes a branch crosses.
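
Principle (A) is easy to operationalize; a rough sketch (Python; numbering 
words left to right and writing each dependency as a (head, dependent) pair is 
my convention here, not a claim about anyone's parser):

    def memory_load(n_words, deps):
        """For each gap between adjacent words, count the branches
        spanning it -- the items being held in short-term memory."""
        return [sum(1 for h, d in deps if min(h, d) <= gap < max(h, d))
                for gap in range(n_words - 1)]

    # "I own a dog which eats food": 0=I 1=own 2=a 3=dog 4=which 5=eats 6=food
    deps = [(1, 0), (1, 3), (3, 2), (3, 5), (5, 4), (5, 6)]
    print(memory_load(7, deps))   # [1, 1, 2, 1, 2, 1]: tail-nesting stays flat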

--And.





Messages in this topic (68)
________________________________________________________________________
________________________________________________________________________
2a. Re: Topic: preverbal/postverbal goodness
    Posted by: "A. Mendes" andrewtmen...@gmail.com 
    Date: Mon Apr 25, 2011 7:16 am ((PDT))

Actually, I think it makes more sense to use pre/post-positions to
mark the definiteness of the PP... and the PP's position relative to
the verb to mark the static vs. dynamic spatial distinction.

On Sun, Apr 24, 2011 at 12:40 AM, A. Mendes <andrewtmen...@gmail.com> wrote:
> WOW or I could use Pre/post-positions not for definiteness but for the
> static vs. dynamic spatial distinction
>
> jump on.table
> 'jump (up and down) on THE table
>
> jump table.on
> 'jump onto THE table'
>
> on.table jump
> 'jump (up and down) on a table'
>
> table.on jump
> 'jump onto A table'
>
> but then I'm fucked when it comes to 'verbless' sentences (comment
> topic statives). Maybe here it will have to mark definiteness
>
> on table
> '(it is) on the table'
>
> table on
> '(it is) on a table'
>
>
> On Sun, Apr 24, 2011 at 12:16 AM, A. Mendes <andrewtmen...@gmail.com> wrote:
>>> If we analogise newness in discourse to newness as
>>> an actual state of affairs, then if "on.table" is topical, it should be an
>>> old state, i.e. he was on the table already, jumping up and down; if
>>> "on.table" is in the comment, it's free to be a new state, i.e. have the
>>> force of "onto".
>>
>> I'm so glad for your description of this cause this is the part of the
>> equation I couldn't figure out how I'd deal with. This is definitely
>> how I'm going to handle this beast.
>>
>>> Or, wait.  Aren't the above even backwards compared to what you said 
>>> initially?
>>
>> Yup... Examples 1 and 2 are backwards... even when compared to 3 and
>> 4. It was probably a mini-stroke.
>>
>>> the cluster of properties of prototypical natlang subjects include
>>> agentivity = perpetratorhood and topicality.
>>
>> Where I'm getting in trouble is that even though TOPIC and SUBJECT are
>> two grammatically different creatures... often the subject is also the
>> topic as well. My goal is total grammatical de-emphasis of the
>> subject.
>>
>> Actually... I think I might come at it from this way... maybe verbs
>> will only ever have one argument (patient/victim by default and
>> agent/perpetrator when the verb is reduplicated). That way... topics
>> could function as either the subject/agent or object/patient, since
>> it's the backdrop of the utterance:
>>
>> na   ti   me'u
>> eat girl // shark
>> 'the shark, the girl is eaten'
>>
>> na   na   ti   me'u
>> eat.eat  girl // shark
>> the shark, the girl eats
>>
>> It doesn't really change what I'm doing... just how I think about what
>> I'm doing. But there'd still be instances where the Topic is also the
>> patient or agent or verb:
>>
>> pi   ha
>> see // man
>> 'the man is sighted'
>>
>> ha   pi
>> man // see
>> 'a man is sighted'
>>
>> pi   pi   ha
>> see.see man
>> 'the man sees (something)'
>>
>> ha   pi   pi
>> man see.see
>> 'a man sees (something)'
>>
>> maybe I should view these as topic-less sentences.
>>
>>> If this is a question of avoiding ambiguity, might there be phonological
>>> devices that you can use?  Prosody, perhaps, or your vowel alternations,
>>> however they work.
>>
>> I'm not worried about ambiguity (though perhaps I should be). My issue
>> it that I'm using the preverbal/postverbal distinction to signal
>> indefiniteness/definiteness. If I treat adpositions like verbs—in that
>> indefiniteness/definiteness is assigned by whether the noun leads or
>> follows it—then the PP's position to the verb has to do something
>> else... or produce a different meaning.
>>
>> on.table
>> 'on the table'
>>
>> jump on.table
>> 'jump the on the table'
>>
>> on.table jump
>> 'jump an on the table'
>>
>> — — — —
>>
>> table.on
>> 'on a table'
>>
>> jump table.on
>> 'jump the on a table'
>>
>> table.on jump
>> 'jump an on a table'
>>
>> Can you understand? There's potential for something to happen here...
>> but atm I can't say what. I like this idea but have to develop it
>> further
>>
>>> I have stretched ropes from steeple to steeple; garlands from window to
>>> window; golden chains from star to star, and I dance.  --Arthur Rimbaud
>>
>> I memorized a Rimbaud poem in high school for a French competition...
>> 'The Drunk Boat'. This quote is magical.
>>
>> On Sat, Apr 23, 2011 at 9:31 AM, Roger Mills <romi...@yahoo.com> wrote:
>>> --- On Thu, 4/21/11, A. Mendes <andrewtmen...@gmail.com> wrote:
>>>> So Korean and Japanese with case
>>>> markings, and Hebrew and Russian with
>>>> prepositions/(and/or case?). Mean!
>>>
>>> Malay/Indonesian have a couple strategies:
>>>
>>> Topic/comment, with a possessive suffix--
>>> perempuan itu, rambut/nya hitam
>>> woman that, hair/her black
>>>
>>> And I _think_ you can say
>>> (ia) ada uang/nya
>>> (he) there.is money/his = he has money (with him at the moment, not = he's 
>>> rich AFAIK)
>>>
>>> whether you can say
>>> perempuan itu ada rambutnya hitam -- I'm not sure
>>>
>>> In a few cases (intrinsic parts of wholes?) you can use the ber- prefix, 
>>> which implies 'having'--
>>>
>>> meja itu berkaki tiga
>>> table that ber-leg 3 == that table has three legs
>>>
>>> or, meja itu, tiga kakinya // or meja itu, kakinya tiga
>>>  .....   three its/legs           ....   its/legs three
>>>>
>>>> Here's something I've not sorted yet:
>>>>
>>>> panín.po'i   mara
>>>> black.hair  // woman
>>>> 'the woman has black hair'
>>>>
>>>> panín.po'i
>>>>
>>>> On Fri, Apr 22, 2011 at 1:51 PM, Daniel Bowman <danny.c.bow...@gmail.com>
>>>> wrote:
>>>> > That makes sense; Korean and Japanese have similar grammar.
>>>> >
>>>> > On Thu, Apr 21, 2011 at 7:36 PM, Garth Wallace <gwa...@gmail.com>
>>>> wrote:
>>>> >
>>>> >> On Thu, Apr 21, 2011 at 5:08 PM, A. Mendes <andrewtmen...@gmail.com>
>>>> >> wrote:
>>>> >> >
>>>> >> > Q2. Do any natlangs conflate existence with possession?
>>>> >>
>>>> >> Japanese does. The intransitive verb "aru" means "to exist
>>>> >> (inanimate)". To show possession, you make the owner the topic and
>>>> >> the possessed item the subject. So "Tanaka has an apple" would be
>>>> >> "Tanaka-san wa ringo ga arimasu", which could also be translated
>>>> >> "As for Tanaka, an apple exists".
>>>> >>
>>>> >
>>>>
>>>
>>
>





Messages in this topic (16)
________________________________________________________________________
________________________________________________________________________
3a. Re: How to....
    Posted by: "David McCann" da...@polymathy.plus.com 
    Date: Mon Apr 25, 2011 8:31 am ((PDT))

On Sun, 24 Apr 2011 13:28:06 -0700
Roger Mills <romi...@yahoo.com> wrote:

> How does one get IPA and other special characters into an email? (I
> usually receive IPA, Greek, Russian OK)

1. You can use the third and fourth levels on your keyboard with the
AltGr and AltGr+Sh keys. For Windows, Microsoft have a keyboard editor
that you can download for free and use to set the keys. Thus I have
the keys "e o u i a" giving "ɛ ɔ ʊ ɪ ə".

2. There's a program available that emulates the Unix compose key
system. Thus I use Comp+n+y for "ɲ" and Comp+l+h for "ɬ".

But doesn't Windows come with a program to select characters from a
table?

Americanist transcription is less trouble (as it was in the days of
typewriters): with Comp+c I can get č, ǯ, š, ž rather than tʃ, dʒ, ʃ, ʒ.
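
And if all else fails, anyone comfortable with a script can produce the 
characters by code point (Python here; the code points are standard Unicode):

    print("\u025B \u0254 \u028A \u026A \u0259")   # ɛ ɔ ʊ ɪ ə
    print("\u0272 \u026C")                        # ɲ ɬ
    print("\u010D \u01EF \u0161 \u017E")          # č ǯ š ž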





Messages in this topic (7)
________________________________________________________________________
3b. Re: How to....
    Posted by: "Arthaey Angosii" arth...@gmail.com 
    Date: Mon Apr 25, 2011 2:51 pm ((PDT))

On Sun, Apr 24, 2011 at 1:28 PM, Roger Mills <romi...@yahoo.com> wrote:
> How does one get IPA and other special characters into an email? (I usually 
> receive IPA, Greek, Russian OK)

Unlike others who suggest setting up an IPA keyboard layout, I type
IPA so rarely that I always just go to
http://weston.ruter.net/projects/ipa-chart/view/keyboard/ , click the
IPA symbols I want, and paste them into my email.

This suggestion won't be that great if you're intending to type a lot
of IPA frequently, of course. :)


-- 
AA

http://conlang.arthaey.com





Messages in this topic (7)
________________________________________________________________________
3c. Re: How to....
    Posted by: "Michael Everson" ever...@evertype.com 
    Date: Mon Apr 25, 2011 3:05 pm ((PDT))

On 24 Apr 2011, at 21:28, Roger Mills wrote:

> Probably an old question that has been answered, but please refresh this old 
> dog's memory--
> 
> How does one get IPA and other special characters into an email? (I usually 
> receive IPA, Greek, Russian OK)

I use the Mac OS. The Irish Extended keyboard is particularly good for many of 
the IPA characters I use. I use Apple Mail, and set my e-mail font to Everson 
Mono, which has complete UCS Latin support.

PopChar is also essential, and much better than Apple's Character Viewer.

Michael Everson * http://www.evertype.com/





Messages in this topic (7)
________________________________________________________________________
________________________________________________________________________
4a. Re: The Empire Strikes Back, told in icons
    Posted by: "Adam Walker" carra...@gmail.com 
    Date: Mon Apr 25, 2011 8:59 am ((PDT))

FANTASTIC!

Wow I nearly busted a gut over a few of these trying to keep quiet at work!

Adam

On Sat, Apr 23, 2011 at 4:41 PM, Eric Christopherson <ra...@charter.net>wrote:

>
> http://waynedorrington.blogspot.com/2011/04/star-wars-episode-v-retold-in.html
>





Messages in this topic (6)
________________________________________________________________________
________________________________________________________________________
5a. Origin on Quantifiers
    Posted by: "Logan Kearsley" chronosur...@gmail.com 
    Date: Mon Apr 25, 2011 11:45 am ((PDT))

I want to check if my understanding of this is correct.

You start out with a quantifier-less language. Some things are
countable (1 ball, 2 balls, etc.), and some things aren't (*One air.
Some air.).
In order to count uncountable stuff, you have to quantify it by
attaching it to some other countable noun (One molecule of air.)

Over time, more and more stuff gets interpreted as uncountable,
requiring quantification, until darn near everything is uncountable
and you're left with a small closed class of quantifiers, which may
seem weird and arbitrary because they're a subset of an original open
class of countable nouns.

How close am I?

-logan.





Messages in this topic (3)
________________________________________________________________________
5b. Re: Origin on Quantifiers
    Posted by: "neo gu" qiihos...@gmail.com 
    Date: Mon Apr 25, 2011 3:06 pm ((PDT))

On Mon, 25 Apr 2011 12:40:23 -0600, Logan Kearsley 
<chronosur...@gmail.com> wrote:

>I want to check if my understanding of this is correct.

I think the term is "counter" or "classifier". A quantifier is a word such 
as "all" or "some", although in a sense, cardinal numbers are 
quantifiers as well. I can't comment on the rest.

>You start out with a quantifier-less language. Some things are
>countable (1 ball, 2 balls, etc.), and some things aren't (*One air.
>Some air.).
>In order to count uncountable stuff, you have to quantify it by
>attaching it to some other countable noun (One molecule of air.)
>
>Over time, more and more stuff gets interpreted as uncountable,
>requiring quantification, until darn near everything is uncountable
>and you're left with a small closed class of quantifiers, which may
>seem weird and arbitrary because they're a subset of an original open
>class of countable nouns.
>
>How close am I?
>
>-logan.





Messages in this topic (3)
________________________________________________________________________
5c. Re: Origin on Quantifiers
    Posted by: "Marlon Ribeiro" marlonc...@hotmail.com 
    Date: Mon Apr 25, 2011 8:29 pm ((PDT))

This happens in almost every language in the world. But in some natlangs, 
counters (or "measure words") are a challenging issue. For example, nearly all 
Asian languages resort to counters. If you have a look at the Japanese 
language, for instance, you'll come across a rough list of more than 500 
counters. Of course, fewer than two thirds of them are used in everyday 
conversation, since using them all would make daily life even more 
complicated. However, any learner must know at least a hundred or so by heart 
in order to be able to count anything without great problems.

I speak Japanese and I know some native and highly proficient speakers. As far 
as they've told me, not even Japanese people fully master counters, and they 
sometimes differ in how they use them. Some things may receive two, three, 
four or more counters, not just one specific counter as normally happens, 
depending on the speaker, the area, the context, or the actual situation of 
the object, thing, animal or place being counted. For example, a "katana" (a 
Japanese kind of sword) may be counted with the counter "-tou" (for Japanese 
swords), "-ken" (for any sword), "-furikaeri" (for katanas), or "-hon" (for 
long and thin things). "Sakana" (fish) is counted with "-hiki" (small animals) 
when it's alive, but when it's dead and ready to be eaten it becomes food and 
is counted with "-hai" (a liquid or solid portion of food, drink or spices). A 
florist counts flowers with "-rin" (which uses numbers from the Chinese system 
introduced into Japanese) or "-hira" (which uses numbers from the native 
Japanese counting system), but in everyday life ordinary people may count them 
with "-hon".

I don't know if many conlangs make use of counters, but my conlang does. In 
Yuelami there are only seven categories: 1) people, 2) animals, 3) small 
objects, 4) large objects, 5) places, 6) time, and 7) abstract things / not 
visibly physical things / audiovisual / feelings / ideas / concepts.





Messages in this topic (3)
________________________________________________________________________
________________________________________________________________________
6a. Re: Degree of nouns (or other parts of speech)
    Posted by: "Marlon Ribeiro" marlonc...@hotmail.com 
    Date: Mon Apr 25, 2011 10:26 pm ((PDT))

Yu anolalei maosie beiyoa zuemeta zera, Jimmie Herry.
Thank you very much for your response, Jim Henry. 

I've read what you recommended me to, and I found it really interesting. I'll 
think over my conlang's system, because it still may be enriched in view of 
such eye-opening insights about the role of noun degree.

I would like to hear your conlang and to know more about it. The stacks you 
employ are quite useful, straightforward and groundbreaking. Congratulations 
on what you've come up with, because you've successfully devised a unique 
language. If you want to take a look at the script I've designed for Yuelami, 
you can go to marlonkodaka.blogspot.com and tell me what impression it makes.

Eranoylao kamanue.
Best regards.





Messages in this topic (3)
________________________________________________________________________
________________________________________________________________________
7a. Re: Grammaticization of fault/credit
    Posted by: "Marlon Ribeiro" marlonc...@hotmail.com 
    Date: Mon Apr 25, 2011 10:32 pm ((PDT))

Thank you very much for your help, Danny.





Messages in this topic (22)
________________________________________________________________________
7b. Re: Grammaticization of fault/credit
    Posted by: "Marlon Ribeiro" marlonc...@hotmail.com 
    Date: Mon Apr 25, 2011 10:33 pm ((PDT))

Thank you very much for your help.





Messages in this topic (22)
________________________________________________________________________
________________________________________________________________________
8a. Re: Learning colors
    Posted by: "Marlon Ribeiro" marlonc...@hotmail.com 
    Date: Mon Apr 25, 2011 11:41 pm ((PDT))

I agree that learning colours in Arabic is quite difficult. I tried when I 
studied it for a year at the university, but I've sadly forgotten it 
altogether. 

I think that colours in Japanese are quite simple, because if you don't 
remember the native word, you can use a word derived from English (but I don't 
agree with that, because English is English and Japanese is Japanese! How can 
they mix up languages as if they were only one!). 

For example, "midori" means green, but if you don't know it, or suppose that 
you just can't recall it, you can use "guriin" instead. The same goes for 
"aoi" - "buruu", "akai" - "reddo", "kiiroi" - "yeroo", "chairoi" - "buraun", 
"kuroi" - "burakku", "shiroi" - "howaito", "haiiroi" - "guree", "momoiroi" - 
"pinku", "murasaki" - "paapuru", and so on.

In Portuguese (my natlang), some colours have different forms for masculine 
and feminine nouns, whereas others use simply one common form:

vermelho (m.), vermelha (f.) - red
branco (m.), branca (f.) - white
amarelo (m.), amarela (f.) - yellow
preto (m.), preta (f.) - black
roxo (m.), roxa (f.) - purple
dourado (m.), dourada (f.) - golden
prateado (m.), prateada (f.) - silvered
alaranjado (m.), alaranjada (f.) - orange

but:
cinza (m. and f.) - gray/grey
rosa (m. and f.) - pink
verde (m. and f.) - green
azul (m. and f.) - blue
lilás (m. and f.) - violet
marrom (m. and f.) - brown
laranja (m. and f.) - orange
prata (m. and f.) - silver
magenta (m. and f.) - fuchsia
bege (m and f.) - beige

In my conlang (Yuelami) the adjectives for colours are unlike the colour names 
(nouns):

beizei (m.), beize (f.), beizao (n.) - black, noun: beizivue (n.)
aluvei (m.), alueve (f.), aluvao (n.) - white, noun: aluevo (f.)
vabei (m.), vabe (f.), vabao (n.) - red, noun: vabue (n.)
eyei (m.), eye (f.), eyao (n.) - blue, noun: eyo (f.)
jalei (m.), jale (f.), jalao (n.) - brown, noun: jalivue (n.)
kisei (m.), kiese (f.), kisao (n.) - gray, noun: kisue (n.)
rasakei (m.), rasake (f.), rasakao (n.) - purple, noun: rasakue (n.)

laravi (m., f., and n.) - orange, noun: laravo (f.)
masi (m., f., and n.) - green, noun: maso (f.)
ki (m., f. and n.) - yellow, noun: kavue (n.)
momi (m., f., and n.) - pink, noun: momavue (n.)

For me it was (and still is) really difficult to learn the colours in Polish, 
especially because their forms vary according to case, gender and number 
(there are seven cases and three genders, with different forms for singular 
and plural).

SINGULAR
nominative - biały pies (m.), biała książka (f.), białe dziecko (n.)
                  white dog        white book         white child
accusative - białego psa (m.), białą książkę (f.), białe dziecko (n.)
       I have a white dog         white book        white child
genitive - białego psa (m.), białej książki (f.), białego dziecka (n.)
                white dog's        white book's    white child's
dative - białemu psowi (m.), białej książce (f.), białemu dziecku (n.)
        to a white dog         to a white book     to a white child
locative - białym psie (m.), białej książce (f.), białym dziecku (n.)
         in a white dog      in a white book     in a white child
instrumental - białym psem (m.), białą książką (f.), białym dzieckiem (n.)
                with a white dog    with a white book  with a white child
vocative - biały pies! (m.), biała książko! (f.), białe dziecko! (n.)

PLURAL
nominative - białe psy (m.), białe książki (f.), białe dzieci (n.)
                  white dogs       white books       white children
accusative - białe psy (m.), białe książki (f.), białe dzieci (n.)
       I have white dogs         white books        white children
genitive - białych psów (m.), białych książek (f.), białych dzieci (n.)
                white dogs'        white books'    white children's
dative - białym psom (m.), białym książkom (f.), białym dzieciom (n.)
        to white dogs         to white books     to white children
locative - białych psach (m.), białych książkach (f.), białych dzieciach (n.)
         in white dogs      in white books           in white children
instrumental - białymi psami (m.), białymi książkami (f.), białymi dziećmi (n.)
                with white dogs    with white books  with white children
vocative - białe psy! (m.), białe książki! (f.), białe dzieci! (n.)






Messages in this topic (16)
________________________________________________________________________
________________________________________________________________________
9.1. Re: number classes
    Posted by: "Marlon Ribeiro" marlonc...@hotmail.com 
    Date: Tue Apr 26, 2011 12:02 am ((PDT))

In Yuelami, I use separate words for the numbers themselves when counting in 
sequence. When the numbers are used to count or measure things, these 
independent words change to prefixes added to the appropriate measure word 
(counter) for the noun being counted (which in turn receives a suffix for the 
partitive case).

1 - jivue (prefix: vo-)
2 - asei (prefix: za-)
3 - latao (prefix: la-)
4 - yaboa (prefix: ya-)
5 - reyue (prefix: re-)
6 - zonie (prefix: zo-)
7 - mixao (prefix: mie-)

8 - miejivue (prefix: mievo-)
9 - miesaxei (prefix: mieza-)
10 - mielatao (prefix: miela-)
11 - meiyaboa (prefix: meiya-)
12 - miereyue (prefix: miere-)
13 - miezonie (prefix: miezo-)
14 - zamixao (prefix: zamie-)

15 - zamiejivue (prefix: zamievo-)
16 - zamiesaxei (prefix: zamieza-)
17 - zamielatao (prefix: zamiela-)
18 - zameiyaboa (prefix: zameiya-)
19 - zamiereyue (prefix: zamiere-)
20 - zamiezonie (prefix: zamiezo-)
21 - lamixao (prefix: lamie-)

28 - yamixao (prefix: yamie)
35 - remixao (prefix: remie-)
42 - zomixao (prefix: zomie)
49 - memixao (prefix: memie-)
56 - mievomixao (prefix: mievomie-)
63 - miezamixao (prefix: miezamie-)
70 - mielamixao (prefix: mielamie-)
77 - meiyamixao (prefix: meiyamie-)
84 - mieremixao (prefix: mieremie-)
91 - miezomixao (prefix: miezomie-)
98 - zamemixao (prefix: zamemie-)
99 - zamemiejivue (prefix: zamemievo-) 
(Base-7 is employed up to 99, as you may have noticed; see the arithmetic 
sketch after the examples below.)

100 - joayue (prefix: joa-)
121 - joalamixao (prefix: joalamie-)
123 - joalamiesaxei (prefix: joalamieza-)
199 - joazamemiejivue (prefix: joazamemievo-)
200 - zajoayue (prefix: zajoa-)
300 - lajoayue (prefix: lajoa-)
...
...
...

volanei luezaxue = ONE YEAR
(vo/lanei lueza/xue) "one period/while of a year"
/vC'lane lU'zaSU/

vo-: jivue (one) 
-lanei: time counter
luzao /'luzA/ -> lueza- /lU'za/: year
-xue: partitive case suffix

mielanei luezaxue: SEVEN YEARS
/mI'lane lU'zaSU/
mie-: mixao (seven)

joalanei luezaxue: ONE HUNDRED YEARS
/Jo'lane lU'zaSU/
joa-: joayue (a hundred)
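
The arithmetic skeleton of the system, as a rough sketch (Python; this models 
only the decomposition into forty-nines, sevens and units, not the actual 
morpheme contractions such as mie- vs. me-):

    def base7_skeleton(n):
        """Decompose 1 <= n <= 99 into (forty-nines, sevens, units)."""
        q49, rest = divmod(n, 49)
        q7, units = divmod(rest, 7)
        return q49, q7, units

    print(base7_skeleton(63))   # (1, 2, 0): 49 + 2*7, cf. miezamixao
    print(base7_skeleton(99))   # (2, 0, 1): 2*49 + 1, cf. zamemiejivue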





Messages in this topic (39)





------------------------------------------------------------------------
Yahoo! Groups Links

<*> To visit your group on the web, go to:
    http://groups.yahoo.com/group/conlang/

<*> Your email settings:
    Digest Email  | Traditional

<*> To change settings online go to:
    http://groups.yahoo.com/group/conlang/join
    (Yahoo! ID required)

<*> To change settings via email:
    conlang-nor...@yahoogroups.com 
    conlang-fullfeatu...@yahoogroups.com

<*> To unsubscribe from this group, send an email to:
    conlang-unsubscr...@yahoogroups.com

<*> Your use of Yahoo! Groups is subject to:
    http://docs.yahoo.com/info/terms/
 
------------------------------------------------------------------------
