# 2005-6

October 12, 2005, 6-7:30 PM in 234 Moses Hall

*Branden Fitelson (UC Berkeley)*

Old Evidence, Logical Omniscience, and Bayesianism

In this talk, I will aim to do five things (in order): (1) explain what the problem of old evidence is, (2) explain Garber’s resolution of the problem of old evidence, which involves a novel and subtle Bayesian approach to logical learning (i.e., the learning of logical relations), (3) compare and contrast Garber’s approach to logical learning with a previous approach sketched by I.J. Good, (4) offer an alternative approach to logical learning (with an application to the problem of old evidence), and (5) briefly discuss and critique Jeffrey’s approach to logical learning (and old evidence). Along the way, various important foundational observations will be made about Bayesian models of rational epistemic agents.

October 26, 2005, 6-7:30 PM in 234 Moses Hall

*Brian Skyrms (UC Irvine)*

Learning to Signal

The topic is how signaling systems can get started, whether by evolution or by naive learning dynamics.

November 02, 2005, 6-7:30 PM in 234 Moses Hall

*Bob Meyer (Australian National University)*

Classical relevantism

This paper examines theories based on relevant logics in which a classical Boolean negation is added to the original relevant De Morgan negation. Special attention will be paid to the systems R# and R## of relevant arithmetic.

November 16, 2005, 6-7:30 PM in 234 Moses Hall

*Marc Pauly (Stanford)*

Logic, Mathematics, and the Theory of Social Choice

*Joint Meeting with Stanford’s Workshop on Logical Methods in the Humanities* Axiomatic characterization results in social choice theory are usually compared either with regard to the normative plausibility or with regard to the logical strength of the axioms involved. Here, instead, we propose to compare axiomatizations according to the language used for expressing the axioms. In order to carry out such a comparison, we suggest a formalist approach to axiomatization results which uses a restricted formal logical language to express the axioms involved. Axiomatic characterization results in social choice theory then turn into definability results of formal logic. The advantages of this approach include the possibility of non-axiomatizability results, a distinction between absolute and relative axiomatizations, and the possibility of asking how rich a language needs to be in order to express certain axioms.

November 30, 2005, 6-7:30 PM in 234 Moses Hall

*Peter Koellner (Harvard University)*

The Inextricable Tangle: On the Emergence of the Theory of Descriptions

“On Denoting” is one of the landmark papers in twentieth-century philosophy. This is the paper in which Russell contextually defines away descriptive phrases, thereby eliminating all denoting concepts from among his intensional primitives. This technique has been called the “paradigm of philosophical analysis” and is now quite familiar. But “On Denoting” has remained enigmatic in other respects and commentators have debated such questions as: Is Frege the true target of Russell’s criticisms? How is one to make sense of the famous Gray’s Elegy passage in which Russell purports to show that the distinction between meaning and denotation leads to an “inextricable tangle”? What is the true reason for the new theory? What is the alleged connection with the resolution of the antinomies?

Fortunately, with the appearance of Volume 4 of the Collected Papers (*Foundations of Logic, 1903-1905*), we now have much more information about Russell’s views at the time. Here one finds well over 100 pages of material, much of it first-rate, in which Russell develops the theory of denoting up to and including the point where he discovers the new theory. In this talk I will trace the development of Russell’s work on denoting and discuss the ways in which the unpublished material illuminates the above questions.

December 14, 2005, 6-7:30 PM in 234 Moses Hall

*Daniel Isaacson (Oxford)*

What Is a Mathematical Structure?

Structures in mathematics are of two sorts: particular structures (e.g. the natural numbers) and general structures (e.g. groups). The difference between particular and general is marked by the definite and indefinite articles: we speak of *the* natural numbers but of *a* group. As articulated by Bourbaki, modern mathematics is more concerned with general than with particular structures. At the same time, the boundary between particular and general structures is not absolute: additional structure can be identified in a particular structure in virtue of which it exemplifies various general structures. For example, the real numbers form a metric space with respect to the function |x-y|, and a topological space whose basic open sets are the open intervals determined by the < relation; the reals also carry many other topologies. Nonetheless, from the point of view of attempting to answer the (philosophical) question “In what does the reality of mathematics consist?”, particular structures are fundamental. The various examples of a general structure are ultimately particular structures. So we need to be able to say what particular structures are.

Stewart Shapiro has stressed the contrast between considering a (particular) structure as existing in virtue of objects out of which it is composed (e.g. individual natural numbers) or as existing independently of objects. Insofar as we wish to avoid the (in my view) hopeless idea that we can make sense of mathematical objects existing individually, i.e. independently of a structure in which they occur, the first of these possibilities is ruled out. General structures are given by their axioms, which are stipulative (a group is any set in which the axioms for a group hold). Particular structures are also given by their axioms, but in a very different way, namely by the categoricity of those axioms, i.e. the (mathematical) fact that any two models of those axioms are isomorphic. For a set of axioms to characterize an infinite structure categorically, second-order quantification is required. The status of second-order quantification is controversial. I shall discuss what can be achieved in this way.

March 15, 2006, 6-7:30 PM in 234 Moses Hall

*Oswaldo Chateaubriand (Pontifícia Universidade Católica do Rio de Janeiro and Conselho Nacional de Pesquisa)*

What is propositional logic a theory of, if anything?

I will discuss the question of what propositional logic is about and will present it as a first-order theory. In connection with this presentation I will discuss several traditional philosophical questions, among which are the following:

- Must the objects of propositional logic (propositions, sentences, thoughts, judgments, etc.) have structure?
- What is the nature of quantification in propositional logic?
- What is the connection between material implication and the material conditional?
- What is the role of the material conditional in propositional logic?
- What is the nature of truth-values?

April 19, 2006, 6-7:30 PM in 234 Moses Hall

*Gerhard Heinzmann (University of Nancy 2)*

Hypotheses and Conventions: on the philosophical and scientific motivations of Poincaré’s pragmatic occasionalism

It is widely held that Poincaré’s philosophical approach is not only exciting, since he anticipated so many of the problems relevant to contemporary philosophy of science, but also frustrating, because his writings abound in puzzling passages, some of which do not seem to fit together. In this paper I defend the claim that there is a coherent philosophical position underlying the whole of his work: pragmatic occasionalism. By examining the emergence of the pragmatic interpretation of occasionalism and the way in which it is instantiated in arithmetic, geometry, mechanics, and physics, I propose to give a clearer idea of Poincaré’s philosophy of science. Before attending to any of these sciences, however, I will have to say something about how I conceive of pragmatism and of Poincaré’s occasionalism in general.

May 03, 2006, 6-7:30 PM in 234 Moses Hall

*Henk Bos (University of Utrecht)*

Descartes’ Attempt to Base the Certainty of Algebra on Mental Vision

In his unfinished (and at the time unpublished) *Regulae ad Directionem Ingenii* (‘Rules for the Direction of the Mind’, composed during the 1620s), Descartes used examples from mathematics to illustrate how the human mind can attain certain knowledge. The crucial example was the certainty of the arithmetical operations. For adding, subtracting, multiplying, and dividing he produced arguments which convinced him. However, the text we have strongly suggests that he was not able to prove the certainty of root extraction (both square and higher-order roots) in a manner he found satisfactory. He later abandoned this line of approaching the problem of certainty in philosophy.

The criteria for certainty which Descartes used in the *Regulae* were strongly visual; attaining certainty involved a process of ‘imagination’ in which figures were created by the mind on a kind of mental screen, observable by some inner eye.

In Descartes’ time, one of the novelties in mathematics was the use of algebra in geometry. This necessitated a geometrical interpretation of the arithmetical operations, a theme in which Descartes took a keen interest (also after he gave up the *Regulae*).

I shall argue that the then common geometrical interpretations of the algebraic operations help us to identify the reasons why Descartes could accept adding, subtracting, multiplying and dividing as certain, but not the extracting of roots, and why this constituted an obstacle severe enough for him to give up the project of the *Regulae.*

I have dealt with this topic in my book *Redefining Geometrical Exactness: Descartes’ Transformation of the Early Modern Concept of Construction* (New York: Springer, 2001), pp. 261-269; in the lecture, however, I shall be more explicit in reconstructing the visual imagining involved in his argument.