September 12, 2007, 6-7:30 PM in 234 Moses Hall
Kevin Scharp (Ohio State University, Philosophy)
Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are “traditional” in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have presented an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.
October 03, 2007, 6-7:30 PM in 234 Moses Hall
Marco Panza (CNRS, Paris)
The double role of diagrams in Euclid’s Plane Geometry
Proposition I.1 of Euclid’s Elements requires one to ‘construct’ an equilateral triangle on a ‘given finite straight line’, or segment, in modern parlance. To achieve this, Euclid takes this segment to be AB, then describes two circles with centres at the two extremities A and B of this segment, respectively, and takes for granted that these circles intersect each other in a point C. This is not licensed by his postulates. Hence, either his argument is flawed, or it is warranted on other grounds. A possible solution to the difficulty is to admit that Euclid’s argument is diagram-based and that continuity provides a ground for it insofar as it is understood as a property of diagrams.
Proposition I.1 is, by far, the most popular example used to justify the thesis that many of Euclid’s geometric arguments are diagram-based. Many scholars have recently articulated this thesis in different ways and argued for it. My purpose is to reformulate it in a quite general way, by describing what I take to be the twofold role that diagrams play in Euclid’s plane geometry (EPG).
Euclid’s arguments are object-dependent. They are about geometric objects. Hence, they cannot be diagram-based unless diagrams are supposed to have an appropriate relation with these objects. I shall take this relation to be a quite peculiar sort of representation. Its peculiarity depends on the following two claims, which I shall argue for:
C.i) To provide the conditions of identity of the objects of EPG is the same as to provide the identity conditions for the diagrams that represent them or, in the case of angles, for appropriate equivalence classes of diagrams that represent them;
C.ii) The objects of EPG inherit some properties and relations from the diagrams.
For short, I shall say that diagrams play a global and a local role in EPG to mean, respectively, that they are such that claims (C.i) and (C.ii) hold.
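As a purely illustrative aside (not part of the abstract), the construction in Proposition I.1 can be carried out in coordinates: the point C that Euclid reads off the diagram, where the two circles of radius |AB| centred at A and B meet, is computable directly. The function name and coordinate setup below are my own.

```python
import math

def equilateral_apex(ax, ay, bx, by):
    """Prop. I.1: apex C of the equilateral triangle on segment AB,
    i.e. one intersection of the circles centred at A and at B,
    each with radius |AB| (the point Euclid takes for granted)."""
    mx, my = (ax + bx) / 2, (ay + by) / 2  # midpoint of AB
    dx, dy = bx - ax, by - ay              # direction of AB
    h = math.sqrt(3) / 2                   # height of a unit equilateral triangle
    # move from the midpoint perpendicular to AB by h * |AB|
    return mx - h * dy, my + h * dx

cx, cy = equilateral_apex(0.0, 0.0, 1.0, 0.0)
# C = (0.5, sqrt(3)/2): |AC| = |BC| = |AB| = 1
```

The coordinate proof that the circles meet relies on the completeness of the reals, which is exactly the continuity assumption the abstract says Euclid's postulates do not supply.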
November 07, 2007, 6-7:30 PM in 234 Moses Hall
Kevin Kelly (Carnegie Mellon University)
Truth-conduciveness Without Reliability: A Non-theological Explanation of Ockham’s Razor
Science could not get far without a systematic bias in favor of simpler theories, often referred to as “Ockham’s razor”. Indeed, no longer is such a bias a matter of the private “bon sens” of the individual theorist—it is now very publicly implemented in the most advanced statistical and computational techniques for inferring theories and causes from empirical data. But none of the recent technical literature resolves the most obvious puzzle raised by such practice, which is, to echo the Meno paradox: if we already know that the truth is simple, we don’t need Ockham’s razor; and if we don’t, then why should we suppose that Ockham’s razor is truth-conducive? In his Monadology, Leibniz embraced the first horn: knowledge that God exists implies that he would make the best (most elegant) world. In an amusing twist, Robert C. Koons has reversed the implication, arguing that if Ockham’s razor produces scientific knowledge, it must have at least some supernatural assistance. I agree in spirit with Koons’ argument, but I view it as a reductio against the thesis that truth-conduciveness must imply naturalistic reliability or truth-tracking, rather than as a proof that God exists. Instead, I propose that Ockham’s razor is optimally certifiably truth-conducive even if it is unreliable or fails to track the truth. The idea (theorem) is that Ockham’s razor converges to the truth under the least certifiable bound on reversals of opinion prior to convergence, no matter how complex the truth happens to be. That raises an interesting question about whether scientific knowledge must be produced by a reliable or truth-tracking method. If so, an awkward incoherence arises within science: everything we see and do could be the same, but knowledge implies the existence of a hidden, chance correlation between simplicity and truth. Thus, attributions of scientific knowledge produced by Ockham’s razor violate Ockham’s razor!
November 28, 2007, 6-7:30 PM in 234 Moses Hall
Terrence Fine (Cornell University, Electrical & Computer Engineering and Statistical Science)
Must Probability be Numerical? Part I: Introducing and Expanding Upon the Question
The first colloquium, “Introducing and Expanding Upon the Question,” will elaborate on the meaning of the title question and present arguments as to why additive numerical probability is unlikely to be universally applicable. Perspectives will be developed from the historical origins of the concept, the maturation of probabilistic reasoning, and a consideration of the wide variety of important meanings (interpretations) that this one mathematical concept of probability is expected to support. The first colloquium will close with a quick sketch of several alternative mathematical representations of probability.
[Note: On Thursday, November 29, 4-5 PM, Professor Fine will give a talk in the EECS department. The title is “Computationally Based Agnostic Inference,” and the talk will take place in the Wang Room, 531 Cory Hall.]
March 12, 2008, 6-7:30 PM in 234 Moses Hall
Ian Proops (University of Michigan)
Puzzling out Russell: The Curiosity of George IV
This paper examines the philosophical consequences of Russell’s solution to his famous puzzle about George IV in “On Denoting”. It is argued that by solving the puzzle in the way he does, Russell already commits himself to many aspects of his later logical atomist metaphysics, including an ontology of “uni-faceted” sense data and the view that sentences are incomplete symbols.
April 09, 2008, 6-7:30 PM in 234 Moses Hall
Mario Gomez-Torrente (UNAM)
Rereading Tarski on Logical Consequence
I will argue against some recent defenses of the view that, in 1936, Tarski always required all interpretations of a language to share one and the same domain of quantification. I hope to offer a more detailed examination of some of the relevant textual evidence on the issue than in my earlier work. If time permits, I will also offer some new considerations on remaining issues of interpretation concerning Tarski’s views on the logical correctness of certain omega arguments, on the Tarskian proof that Etchemendy took to be modal and fallacious, and on Tarski’s appeals to the “common concept of consequence”.
April 23, 2008, 6-7:30 PM in 234 Moses Hall
Stephan Hartmann (Tilburg Center for Logic and Philosophy of Science)
Consensus, Compromise and Judgment Aggregation
Judgment aggregation studies the aggregation of yes-no judgments of the members of a jury on logically interconnected propositions into a consistent collective judgment set. As the discursive dilemma shows, proposition-wise majority voting will not in general lead to a consistent collective judgment set. To arrive at a consistent collective judgment set, three procedures have been discussed in the literature: the premise-based procedure (PBP), the conclusion-based procedure (CBP) and the distance-based procedure (DBP). According to these procedures, the jury can accept a judgment set that only a few (or even none) of the members of the jury voted for. This raises the question of whether such a decision is really acceptable. Clearly, a decision based on PBP, CBP or DBP amounts to a compromise, and not everybody will be happy with the decision. The jury members agree to go along with the will of the others. The preferred solution, however, is to arrive at a consensus, whereby every jury member is in agreement with the final decision. The goal of this paper is to develop a model for the emergence of consensus in a judgment aggregation setting and to ask how this new aggregation method compares with suitably generalized versions of PBP, CBP and DBP. The paper is based on joint work with Jan Sprenger (Bonn, Tilburg).
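As an illustrative aside (not part of the abstract), the discursive dilemma can be reproduced with a standard three-juror profile on premises p, q and the conclusion r, constrained by r ↔ (p ∧ q); the particular profile below is a textbook example, not drawn from the paper.

```python
# Discursive dilemma: each juror holds a consistent judgment set
# on p, q, and r, where consistency requires r <-> (p and q).
profiles = [
    {"p": True,  "q": True,  "r": True},   # juror 1
    {"p": True,  "q": False, "r": False},  # juror 2
    {"p": False, "q": True,  "r": False},  # juror 3
]

def majority(prop):
    """Proposition-wise majority vote over all jurors."""
    votes = [judgment[prop] for judgment in profiles]
    return sum(votes) > len(votes) / 2

collective = {prop: majority(prop) for prop in ("p", "q", "r")}
# -> {'p': True, 'q': True, 'r': False}: each vote is 2-to-1,
# yet the collective set violates r <-> (p and q)
consistent = collective["r"] == (collective["p"] and collective["q"])

# Premise-based procedure (PBP): vote only on the premises,
# then derive the conclusion logically -- consistency is restored.
pbp = {"p": majority("p"), "q": majority("q")}
pbp["r"] = pbp["p"] and pbp["q"]
```

Here proposition-wise majority yields an inconsistent collective set even though every individual juror is consistent, which is exactly the dilemma the abstract's three escape procedures (PBP, CBP, DBP) are designed to resolve.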