Hi Ben, List,

Thanks for your helpful remarks.  First off, I agree with the worry you've 
expressed--to the effect that the way the question is phrased may involve some 
kind of "wrong turn."  I stated it that way because I was trying to express the 
question in a manner that was neutral between a more descriptive and psychological 
explanation of validity and a more normative and logical explanation.  That is 
the kind of debate Peirce starts with when he considers the psychological 
explanation of déjà vu and then works his way to a logical account of the 
comparison of qualities of feelings.

Having said that, I would like to point out that many of the passages you've 
cited are meant to explain the validity of an argument that is written on a 
chalk board.  From this point of view, we are trying to account for the 
validity of the argument itself--and that is largely a matter of the truth of 
the underlying principle that is governing the inference (however it is 
embodied).

On the top of page 320 in EP, however, he is considering questions about how 
we--as human cognizers--are able to *recognize* that two things are similar or 
dissimilar.  My hunch is that he is focusing on these points about what is 
needed to recognize the similarity of two feelings because he is interested in 
the question of what is necessary to recognize that a comparison of similarity 
is apt, or recognize that an abductive inference to a hypothesis is valid, or what 
have you.  My sense is that these are related questions.  

On the same page, he makes the following claim:  "it must be remarked that the 
only effect of a quality of feeling is to produce a memory, itself a quality of 
feeling; and that to say that two of those are similar is, after all only to 
say that the feeling which is the symbol of similarity will attach to them. 
Thus the feeling of recognition of a present idea as having been experienced 
has for its signification the applicability of a part of itself. The general 
occurrences of the feeling of similarity are recognized as themselves similar, 
by the application to them of the same symbol of similarity." 

My hunch is that this remark is part of the larger explanation he wants to 
offer of how we can recognize that an abductive inference is valid.

He goes on to say:  "It is Kant's "I think," which he considers to be an act of 
thought, that is, to be of the nature of a symbol. But his introduction of the 
ego into it was due to his confusion of this with another element."  I'd like 
to figure out what Peirce thinks the confusion amounts to.  On the Kantian 
account, the recognition of the validity of an act is a key idea. 

--Jeff


Jeff Downard
Associate Professor
Department of Philosophy
NAU
(o) 523-8354
________________________________________
From: Benjamin Udell [[email protected]]
Sent: Sunday, August 24, 2014 4:21 PM
To: [email protected]
Subject: Re: [PEIRCE-L] Phaneroscopy, iconoscopy, and trichotomic category 
theory

Jeffrey, list,

My turn to write a long one. I think you take a bit of a wrong turn regarding 
Peirce's views when you ask

[Quote]
What is the standard that we can use when comparing the feeling that an 
argument is a good inference to the feeling that an argument is an invalid 
inference?
[End quote]

Peirce insisted that an argument's validity has nothing to do with a feeling of 
its being a good inference, a feeling of logicality. See for example "What 
Makes a Reasoning Sound?" in EP 2. In "The Doctrine of Chances" 
http://en.wikisource.org/wiki/Popular_Science_Monthly/Volume_12/March_1878/Illustrations_of_the_Logic_of_Science_III
, Section III, he writes,

[Quote]
According to this, that real and sensible difference between one degree of 
probability and another, in which the meaning of the distinction lies, is that 
in the frequent employment of two different modes of inference, one will carry 
truth with it oftener than the other. It is evident that this is the only 
difference there is in the existing fact.
[End quote]
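
As a toy illustration of that frequency reading (my own construction, not 
Peirce's example, though it echoes his urn cases): take two "modes of 
inference" about which colour predominates in an urn, one concluding from nine 
draws and one from a single draw, and read each mode's "probability" simply as 
the long-run frequency with which it carries truth.

    # Hypothetical sketch: a mode of inference's 'probability' read as its
    # long-run truth-frequency.  Mode A infers the urn's majority colour from
    # nine draws, mode B from a single draw; the urn is in fact 60% red.
    import random

    def truth_frequency(sample_size, p_red=0.6, trials=100_000):
        hits = 0
        for _ in range(trials):
            reds = sum(random.random() < p_red for _ in range(sample_size))
            if reds * 2 > sample_size:   # sample majority red -> infer 'majority is red'
                hits += 1                # ...which is true here, since p_red > 0.5
        return hits / trials

    print("mode A (nine draws):", truth_frequency(9))  # roughly 0.73
    print("mode B (one draw):  ", truth_frequency(1))  # roughly 0.60

The only "real and sensible difference" the simulation exhibits between the two 
modes is that one carries truth with it oftener than the other.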

In "The Probability of Induction," he sharply criticizes Bayesian or subjective 
probabilities, and discusses confidence intervals (without calling them that) 
in statistics. Statisticians have labored long to come up with measures of 
goodness of an induction. But the confidence can be quite deceiving, because it 
can't take systematic error (sample bias) into account, much less other kinds 
of error (the botch in the equipment that made it seem that neutrinos sometimes 
travel faster than light - note that the statistical confidence level of the 
result was very high).
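
To make that concrete, here is a minimal sketch (my own example, not Peirce's 
and not the neutrino analysis): a perfectly textbook 95% confidence interval 
computed from data carrying a systematic error. The nominal confidence is high, 
yet the interval confidently misses the true value, because nothing in the 
interval arithmetic can see the bias.

    # Hypothetical sketch: a tight 95% confidence interval computed from data
    # carrying a systematic error of +0.5 (say, a botch in the equipment).
    import math, random, statistics

    random.seed(0)
    true_mean = 0.0
    bias = 0.5
    sample = [random.gauss(true_mean + bias, 1.0) for _ in range(1000)]

    m = statistics.mean(sample)
    half = 1.96 * statistics.stdev(sample) / math.sqrt(len(sample))
    print(f"95% CI: [{m - half:.3f}, {m + half:.3f}]   true mean: {true_mean}")
    # The interval is only about 0.12 wide and sits near 0.5: high nominal
    # confidence, and flatly wrong about the true mean.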

At the same time, there are characters, namely verisimilitude and plausibility 
(natural simplicity), that he associates with good inductions and good 
abductions, respectively, characters that one might think of as feelings. 
Verisimilitude (sometimes he calls it 'likelihood') in Peirce's sense consists 
in this: if pertinent further data were to continue, until complete, to have 
the same character as the data supporting the conclusion, the conclusion would 
be proven true.

[From CP 8.224, draft letter to Paul Carus, circa 1910. Quote]
By verisimilitude I mean that kind of recommendation of a proposition which 
consists in evidence which is insufficient because there is not enough of it, 
but which will amount to proof if that evidence which is not yet examined 
continues to be of the same virtue as that already examined, or if the evidence 
not at hand and that never will be complete, should be like that which is at 
hand.
[End quote]

[From CP 2.663, "Notes on the Doctrine of Chances," 1910. Quote]
I will now give an idea of what I mean by _likely_ or _verisimilar_. It is to 
be understood that I am only endeavouring so far to explain the meanings I 
attach to "plausible" and to "likely," as this may be an assistance to the 
reader in understanding the meaning I attach to _probable_. I call that theory 
_likely_ which is not yet proved but is supported by such evidence that if the 
rest of the conceivably possible evidence should turn out upon examination to 
be of a _similar_ character, the theory would be conclusively proved.
[End quote]

It is a likeness that the inductive conclusion bears to the data in the sample. 
This really doesn't sound like a confidence interval. It sounds like that in 
virtue of which one calls an induction an inductive 'generalization'. In his 
"Notes on The Doctrine of Chances," (1910) CP 2.664, he wrote:

[Quote]
this history [...] shows only too grievously how great a boon would be any way 
[of] determining and expressing by numbers the degree of likelihood that a 
theory had attained—any general recognition, even among leading men of science, 
of the true degree of significance of a given fact, and of the proper method of 
determining it. I hope my writings may, at any rate, awaken a few to the 
enormous waste of effort it would save. But any numerical determination of 
likelihood is more than I can expect.
[End quote]

But this verisimilitude, even if it is a feeling, is a starting point, until 
one can expand and improve one's sampling and analysis to the point where more 
than sheer verisimilitude is involved. Once that happens, we don't regard an 
inductive conclusion as merely 'likely'. In the case of abduction, plausibility 
may vary, but any inference that explains the phenomenon is justified at the 
level of critique of arguments. But as a result of further research, a 
hypothesis may be so strongly supported that we no longer regard it as merely 
'plausible,' merely 'appealing to instinct', etc. The validity of abduction and 
the validity of induction both depend ultimately on the idea of an indefinite 
community that, by followup, self-correction, etc., can bring about definite 
increase of knowledge.

I've argued that, since deduction can get tricky and complex, even the validity 
of deduction, in our actual use of it, depends on the idea of that indefinite 
community. The definition of deductive validity is such that any deduction is 
valid on inconsistent premisses (a compact illustration of this point follows 
below), but we care about deductions from consistent premisses, deductions 
whose prospects of soundness are not doomed from the start by the formal 
character of the premiss set. Many systems of math are proven 
consistent-if-arithmetic-is-consistent. But it is not a feeling, or more 
precisely a quality of feeling, but rather the experience of not collapsing in 
contradictions, that leads mathematicians to regard those systems as flat-out 
consistent for their purposes, and the experience that contradictions can be 
cordoned off when, for example, division by zero in the real number system is 
treated as a source of inconsistency. The probability of a deductive conclusion 
can be quantified in Peirce's sense, but there's little feeling in that.

There are other characters that deductive conclusions can have, which make them 
valuable, but which incline the reasoner more, or less, to doubt rather than to 
acceptance - novelty (an opposite to verisimilitude) and nontriviality (an 
opposite to natural simplicity), even when we distinguish the nontriviality of 
a conclusion (such as the Pythagorean theorem) from the complexity (or lack 
thereof) of its proof. Peirce references deductive novelty just once that I 
know of (he says deduction "merely gives a new aspect to the premisses"), but 
it's a topic with some history; Peirce's student Gilman published a paper on 
deductive novelty, "The Paradox of the Syllogism Solved by Spatial 
Construction" (1923), that I hope to read at some point.

Anyway, verisimilitude seems not usefully quantifiable, least of all 
quantifiable like probability; the novelty or new aspect of a deductive 
conclusion seems not usefully quantifiable like information in the 
information-theoretic sense; and the history of complexity theory shows the 
difficulty of trying to quantify or otherwise mathematicize usefully the 
nontriviality or 'depth' of a deductive conclusion - it's certainly not merely 
mathematical arity, adicity, valence. I'm not aware of attempts to quantify or 
graph or mathematicize naturalness or simplicity in terms of optimization, but 
again the challenge seems to be to do so in a useful way. And, again, the 
problem is that even if it is shown that people with sufficient experience and 
discipline in the given subject matter tend to agree about degrees of 
verisimilitude, plausibility, nontriviality, etc., still in the build-up of 
knowledge, the logic must come to rest on facts, not on feelings; it should 
rest on some sort of externality, some sort of compulsion by the facts, 
as he discussed back in "The Fixation of Belief," even if, as in mathematics, 
one's being compelled to truth happens internally in some sense, that is, in 
one's imagination. In one of his last words on plausibility, in the letter to 
Carus, Peirce gave plausibility an explicitly normative turn with the word 
"ought": "By plausibility, I mean the degree to which a theory ought to 
recommend itself to our belief independently of any kind of evidence other than 
our instinct urging us to regard it favorably." (CP 8.223).

If Peirce was interested, as you suggest, in phaneroscopy in part because of 
issues of evaluating our reasonings, then it would be in terms of how such 
'feelings', or whatever they are, as plausibility and verisimilitude facilitate 
and expedite investigation - I guess I'd call that the 'right turn' - not 
because of how they ultimately justify our reasonings and investigative methods 
(what I meant by the 'wrong turn').

Best, Ben

On 8/23/2014 9:26 PM, Jeffrey Brian Downard wrote:

1)      What is the standard that we can use when comparing the feeling that an 
argument is a good inference to the feeling that an argument is an invalid 
inference?  Isn’t this similar in some respects to comparing the intensity of 
one experience of a feeling of blue to another feeling of blue?  Isn’t it 
different in other respects?

2)      Once we have formed a class of sample arguments that we take to be good 
and a class that we take to be bad, what kind of measurements can be made when 
comparing these classes?  At the very least, we can apply a nominal scale in 
saying that they are labeled as different classes.  For the sake of the logical 
theory, however, we need a stronger standard of measurement, don’t we?

3)      What is the standard for making the comparison of the goodness or 
badness of an argument?  Should we take it to be a prototypical argument that 
appears to be beyond criticism?  Perhaps we should take an argument, such as a 
cogito argument, or an ontological argument for God’s reality, or an argument 
for the indubitability of the axioms of logic as a prototype, and then place 
one or another of these arguments in a glass case in Westminster.  I suspect 
that this would fail to serve the purpose we have in removing possible errors 
from our measurements of the goodness or badness of any given argument.

How can the examples of measuring silk against a yardstick, comparing 
biological specimens to a “type-specimen”, and comparing the weight of carbon 
and gold to hydrogen help us think more clearly about the grounds we have for 
comparing arguments and saying that one class contains a sample of good 
inferences and that another class contains a sample of bad inferences?  In 
making such comparisons, we need something more than just a nominal assignment 
of the term ‘good’ to one class and ‘bad’ to another.  Having said that, don’t 
we need more than an ordinal scale that enables us to make relative comparisons 
of goodness and badness?  How might we arrive in our theory of logic at a 
standard of measuring the validity of inferences that is richer than a nominal 
or ordinal scale?  After all, we are relying on our standards for comparing 
arguments for the sake of arriving at conclusions about what, really, is true 
and false.




