Glen -

Yes, looking to compare answers.

    Q. Can Science be done without language?
    A(smith).  Some, almost for sure.
A(gepr): No, probably not.  Language is a denser/compressed replacement
for other behaviors (e.g. grooming) and serves to bring about behavioral
coherence in a group.  Behavioral coherence is necessary for science.
(Thought coherence is irrelevant to science except when/where it
facilitates behavioral coherence.)
So you are saying that it is *practically* not possible because of a lack of group coherence, though in principle (which I think was Kennison's original question?) language is not *necessary* for doing science; it is merely valuable/efficient for gaining coherence? Is it not possible, then, that a tribe of primates who obtain behavioral coherence through lots of nit-picking (like we are doing here?) could have enough behavioral coherence to start doing science?
    Q. Can Science be done more easily/effectively with language?
    A(smith).  It seems as if this is the case.
A(gepr): Yes, which seems like a natural consequence of my answer to the
first question.
Ok... so I think your answer to the first question was really an answer to the second.
    Q. Is Science a "collective thing"
    A(smith).  Some uses of the term Science are specifically a
    collective thing.  To wit, the collection of all artifacts of a
    specific methodology including the hypotheses (tested or not), the
    methods and apparatus for testing them, the resulting data gathered
    during the testing, the logic and mathematics used to analyze the
    data, and most familiarly, the conclusions drawn (scientific theories).
A(gepr): By definition, science consists of testable conjecture.  In
order to be tested, the conjecture has to be reified, embedded into a
context collectively constructed by a population (even if that
population is 1 human and 100,000 rodents).
But what is a "conjecture" without language? And if there is such a thing, it is already reified (or more to the point needs no reification/cannot-be-reified)?
   Hence, testing requires the
artifact be part of, fit in with, a collectively co-evolved context.
yes, I think this is an important aspect we are teasing out of this conversation, such as it is.
There is no instantaneous or infinitesimal science, i.e. all science has
spatial and temporal extent.  If the conjecture cannot be reified,
instantiated into the external world, then it is not science.
And back to (my corollary to) Kennison's original question... can (any/all?) conjectures be created/encoded without language?
    Q. Is Science created *by a collective*
    A(smith).   Individual elements in the collective thing we call
    Science can be created by very small collectives.  When an
    individual generates hypotheses, contrives experiments, executes
    them, gathers data and draws conclusions, this is an important
    *part* of science and will be included in the collective artifact.
    Without independent verification (and nobody seems to agree on just
    how much independence and how much verification is sufficient), the
    artifacts are not yet fully vetted and I suppose not "quite"
    science.   In this sense, Science requires a collective.
A(gepr): Yes. By definition, science consists of testable conjecture.
Testability implies multiple individuals willing to share enough
similarity to engage in the testing.  So, that's a slam dunk.
agreed.
Shared
conjecture requires sharing in one (or both) of two forms: 1) shared
anatomical or physiological structure
still nit-picking... I'm not clear on what a conjecture based in anatomical/physiological structure is. The crow whose beak doesn't get deep enough into a crevice to get the grub, and who therefore uses a twig, and the other crow whose shared structure presents the same problem/challenge? Or, to spin off a bit: the woodpecker who does it without the twig because its beak is narrow, vs. the crow who needs a twig (which resembles the woodpecker's beak)? I'm fishing here.
  or 2) shared mental constructs.
Hence, complete orthogonality (or autonomy) would prevent the production
of science.  However, the existence of collectives does not necessarily
imply their produce is science.  Collectives are necessary but not
sufficient.
Ok... we've agreed that it is built into the definition that more than one (more than a few, more than several?) is required to "do Science" (reproducibility) and that the act of "doing Science" requires both spatial and temporal extent. For (more than?) several to "do Science" together, we must have some coherence (you say behavioral is enough and I think all there is; I still hold onto the thought that thoughts are real and may also be required to be coherent).
And I think you will agree that anthropomorphism is a form of figurative
thinking as much as the use of metaphor.  In fact it seems like a
special kind of metaphor (where the metaphorical source domain is
humanity which is ultimately sourced from one's sense of one's self)?
I just can't get beyond this.  I try, but I can't.  Anthropomorphing
(-izing?) is not figurative or metaphor.  I may be mincing words, here.
  But I believe these physiological processes are NOT representative.
They aren't symbolic.  When I refer to a robot (or a tree sapling or my
cat) as _he_ or _him_, I'm not thinking of the robot as a _symbol_ for
anything.  I'm imputing that robot (or sapling or cat) with its own
first class presence.  It's an "end in itself" a "person", as it were,
with as high an ontological reality as my self.

To think of them figuratively or metaphorically would be an entirely
different thing.  In fact, if I were to think of, say, my cat as merely
a _symbol_, I'd be more psychopathic than I already am. ;-)

Now, I admit that when I use the tagline "putting oneself in the other's
shoes", that has an element of the figurative or metaphorical, in the
sense that it requires an abstracted "replacement".  The idea requires a
sense of being able to pluck one's self out of its context, pluck the
other out of its context, and do a switcheroo.

I admit that.  But it's a failing of language and not indicative of real
figurative thinking.  When I empathize with my robot, I don't really
replace the robot with my self.  Instead, I promote the robot to
personhood status.  And that's not using the robot as a symbol at all.
It's a completely different way of thinking about the world.

So, anthropomorphism is _not_ figurative or metaphor, it's an
ontological commitment (or delusion).
I'm stuck on (the other side of) this myself. What you describe, I would call "identification" and/or "empathy". It only becomes "anthropomorphism" (for me) when we add the abstractions of "this is human-like" and "human-like is me-like". But I accept that you don't grant "thinking" its own reality, so I'm not sure you accept these abstractions?
I'm not sure that I can say that my "thoughts are not real".   I can
agree for the sake of argument that they are *different* than my
immediate sensations, but then my immediate sensations (go experience
one of many perceptual illusions) are not *real* either.  We fit our
*raw* perceptions (whatever that means) onto some series of layers of
models.   I would contend that at some point those models are entirely
linguistic/abstract/symbolic (for humans) and that wherever that divide
lies might be an important one.
OK.  Maybe I'm just not saying this correctly.  I disagree and
counter-contend that at NO point are our models entirely
linguistic/abstract/symbolic.  There is no divide.  Everything in our
heads _is_ biochemical.
Referring (and deferring) back to Lakoff and Nunez (Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being), I accept that our models are *only* as abstract/symbolic as the rigorous mathematics we describe them in.

I think you are making the materialists' argument that mind does not exist, only brain, and that mind (even to the extent that it is an illusion) could not exist independently of brain. I think this one is a bit above my pay grade, in the sense that I don't expect to demonstrate a mind outside of a brain anytime soon. But I'm also not ready to say it is impossible, any more than I think that multiple instances of the same code running on multiple instances of the same machine, or better yet, on a combination of various virtual machines running on a combination of various physical machines/designs, is impossible. There is a question of complexity and of the initial boundary conditions provided by the human body/perceptions.

What I mean by "thoughts are not real" is that our word "thought" is
short-hand for the wildly complex and feedback rich biochemical
processes inside us.  It's fantasy to think that thoughts are somehow
separate or separable from the wet stuff inside us.
And yet, when I write down a complex thought (or better yet, someone more capable than I) and someone else reads it (maybe you, maybe someone more capable than you ;) ) then a "thought" or at least an "idea" has been serialized, disembodied, and reembodied?
But I would claim that what I am doing (whilst manipulating said
objects) is manipulating abstractions... in particular, I am using the
(relatively accurate) physical conservation of length in these
objects/materials to "add" and then using the *abstraction* of
exponential notations and arithmetic to then *multiply* and/or to simply
*look up* other functions (e.g. trigonometric) using the device of marks
on a movable pair of objects with an (also) moveable reticule.

When I do "simple" arithmetic in my head, I use a combination of
conventional symbols (0,1,2,3,4,5,6,7,8,9) and rules (decimal positional
numbers) and more rules (addition, multiplication, division, etc) to
achieve these answers.  I happen *also* to have a strong intuition about
much arithmetic/mathematics which is not as obviously symbolic.  But I
would claim this intuitive calculation is more like a sloppy version of
the slide rule described above.   I may do long division in my head
using some short-cuts, but it is entirely symbolic, and I may check my
answer using various intuitive tricks (including visualizing the number
as a rectangular area and the divisor and result as the length of the
sides).
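The slide-rule and mental-arithmetic moves described above can be sketched in code. This is a minimal illustration with hypothetical function names: multiplication done by *adding* lengths on logarithmic scales (which is all a slide rule's scales embody), and the rectangular-area intuition used as a check on division.

```python
import math

def slide_rule_multiply(a, b):
    """Multiply by adding lengths on logarithmic scales, as a slide
    rule does: log(a) + log(b) = log(a*b)."""
    return math.exp(math.log(a) + math.log(b))

def area_check(dividend, divisor, quotient, tol=1e-9):
    """The intuitive check described above: picture the dividend as a
    rectangular area whose sides are the divisor and the quotient."""
    return abs(divisor * quotient - dividend) < tol

product = slide_rule_multiply(3.0, 7.0)   # ~21.0, up to floating-point error
ok = area_check(84.0, 7.0, 12.0)          # a 7-by-12 rectangle has area 84
```

The point of the sketch is that the "abstraction" in play is just the conservation of length plus the logarithm's property of turning multiplication into addition.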
Sorry about the willy-nilly snipping.  But I get irritated when others
quote too much.  So I may quote too little (though I'll never be as
zealous as Marcus at snipping down the quote ;-).
I admire those who can trim down to the essentials ("as simple as possible, but no simpler" A.E.) and myself prefer to wade through extra rather than try to guess what the response is referring to without a specific quote at hand.
I recall an accusation leveled at list participants awhile back when we
were talking about the definition of math and what mathematicians do.
It went something like: "those who talk a lot about math don't tend to
be very good at math" or something like that.

The point, I think, is that _doing_ math is what makes one comfortable
with it, whether one is an engineer, an artisan, or a pure
mathematician, the only thing that can make one good at it is to do it.
So the correlation is that people who are good at things, got good by doing it, don't *need* to discuss it, and in fact recognize that discussing is futile in the face of doing?
  Now, that says nothing about what those symbols mean while you're doing
it.  But the consensus seems to be that most people who are good at math
tend toward a Platonic understanding of math.  The "symbols" are more
than just symbols, whose meaning can be applied, unapplied, and
re-applied willy nilly.  To people who are good at math, the "symbols"
are less symbolic than they might be to those of us who are only
adequate at math.  Good mathematicians aren't just manipulating symbols,
they're discovering reality.
Hmmm... they are discovering the reality of certain relations between symbolic statements?
This means, I think, that we animals are less capable of abstraction
than I think you assert.  When you do that "higher" more abstract math,
you're doing the _exact_ same thing as manipulating the slide rule, or
matching the length of your string to marks on a board ... exactly, not
nearly, not figuratively.
I am beginning to understand the level of your commitment to this position, but am not necessarily becoming more committed to it myself. I'm seeking a toehold in this realm, or maybe more to the point, trying to obtain an intuitive understanding of what it would mean to dispense with "language", "abstraction", "figuration", "metaphor", "analogy", etc.
Ok... I think I agree that Science (as opposed to mathematics) requires
an embedding in the (real, messy, wet, etc.) world. What I'm not clear
on is whether the abstractions we have developed (linguistic in general
and mathematical in particular) are not necessary (or at least very
useful?).

[...] If a person (or culture) had
the stamina/capacity to store all such examples and index them
effectively, I suppose the abstractions of algebra would be irrelevant
or unnecessary and maybe even considered a "cheap trick" by those who
had the capacity to hold these problems in their heads?
If you don't regard the conceptual/linguistic objects as abstractions,
but instead regard them as compressions, then we can agree that they are
necessary.
I think you are in territory that I have encountered elsewhere and been stymied (well, temporarily stuck actually). I do think they can be regarded as compressions, but I think even *as* compressions, they also serve as abstractions? I'm left wondering whether you believe in abstractions at all. Do you hold that they do not exist, that they are meaningless?
   Whether the compressions are lossy or lossless depends, I
think on the biochemical structures involved.  For example, the
autonomic wiggling of our eyes or fingers when we look at or manipulate
an object filters out some concrete detail so that the compressed
version of it in our heads has less detail than the uncompressed version
impinging on our outer senses.  Similarly, we can be tricked (by a
prestidigitator) into faulty compressions.  (I.e. when we decompress it,
it looks nothing like the original.)
I do believe there is a *lot* of compression going on (and much being quite lossy) in perception and in communication (as evidenced by our difficulty in converging on a shared lexicon here?). I do think that language used for communication often suffers from a faulty compression/decompression pairing... supporting your notion that communication is (often?) an illusion.
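The lossy-compression point above can be made concrete with a tiny roundtrip sketch (hypothetical names, quantization standing in for whatever filtering our senses actually do): compression keeps only which "bin" each value falls into, and decompression cannot get the discarded detail back.

```python
STEP = 0.5  # bin width: how much detail the "senses" filter out

def compress_lossy(samples, step=STEP):
    """Quantize: keep only which bin each value falls in."""
    return [round(s / step) for s in samples]

def decompress(quantized, step=STEP):
    """Reconstruct from bins; the fine detail is gone for good."""
    return [q * step for q in quantized]

original = [0.12, 0.49, 0.51, 0.88]
roundtrip = decompress(compress_lossy(original))
# roundtrip == [0.0, 0.5, 0.5, 1.0] -- not the original: the compression was lossy
```

A "faulty" compression in the prestidigitator sense would be one where the decompressed version diverges even further from what impinged on the senses.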
But the skill being developed by compressing and decompressing a LOT is
not an abstracted thinking-in-isolation skill.  It's a filtering skill,
determining signal from noise, what to include in the compression and
what to leave out.  That's the key skill, not manipulating the
abstractions/compressions inside our heads.  The key to being a good
scientist, doing science, lies in the embedding into or out of the
environment, not the thinking/manipulating abstractions in one's head.
Ignoring how "good" the thinking or science is, I contend that this IS what thinking is...

We "compress", as you say. We fit data to models. Then we manipulate these instances of the models (informed by the data) until we find a supposedly useful or interesting instanced-model-state (some might say output-from), which we then "decompress" (in this case I think I mean re-apply semantics to...).

We measure the position of something which we perceive to be "a thing". We impute thingness (rigid-body-ness + ???) to this imagined "thing", and we use some model which we received or discovered (by conjecture, testing, etc.), perhaps in the form of a set of differential equations. With values attached to the differential equations, we manipulate said equations according to the rules of calculus and algebra (independent of the compressed-out qualities of the "thing") until we, for example, derive a simpler form, such as the "thing"'s position and velocity at some time (t). When we decompress, we apply the semantics of the "thing" (red billiard ball bouncing and rolling down an inclined plane?).

We definitely "filtered" when we decided that the "compression" (if I'm using your term correctly) of the features of the "thing" we measured was useful... We took its position, mass, velocity, etc. at time t0, fit it to a model of "rigid bodies in motion in a gravitational field", and *ignored* its redness, its human-ascribed utility as a "billiard ball", etc.
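The compress/manipulate/decompress cycle just described can be sketched minimally (all names hypothetical, and constant acceleration standing in for the full differential equations): compression keeps only the dynamical state, the model is manipulated by its own rules, and decompression re-attaches the filtered-out semantics.

```python
def compress(thing):
    """Fit the measurement to a rigid-body model: keep mass, position,
    velocity; filter out redness, billiard-ball-ness, etc."""
    return {k: thing[k] for k in ("mass", "position", "velocity")}

def manipulate(state, accel, t):
    """Advance the model by its own rules (constant acceleration here):
    x(t) = x0 + v0*t + a*t^2/2, v(t) = v0 + a*t."""
    return {
        "mass": state["mass"],
        "position": state["position"] + state["velocity"] * t + 0.5 * accel * t * t,
        "velocity": state["velocity"] + accel * t,
    }

def decompress(state, semantics):
    """Re-apply the semantics that were filtered out at compression time."""
    return {**state, **semantics}

thing = {"mass": 0.17, "position": 0.0, "velocity": 2.0,
         "color": "red", "utility": "billiard ball"}
later = decompress(manipulate(compress(thing), accel=-1.5, t=2.0),
                   semantics={"color": "red", "utility": "billiard ball"})
# position: 0 + 2*2 + 0.5*(-1.5)*4 = 1.0 ; velocity: 2 - 3 = -1.0
```

Nothing in `manipulate` knows or cares that the "thing" is red or a billiard ball; that is exactly the filtering step.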
Preserving the applicability or embeddability of what's in your head is
the most important part, no matter how you manipulate things in your head.
Ok... I think that is what I just said above? Making sure that the lossiness is really just separability... holding onto the "redness" and the "billiardballness" to re-apply at decompression?

- Steve

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
