We are a Working Group devoted to the discussion of historical and philosophical issues in symbolic logic, mathematics, and science. We meet on occasional Wednesday evenings for a talk and a lively discussion. The group is funded by the Doreen B. Townsend Center for the Humanities and the Department of Philosophy.

All members of the academic community are welcome to attend. We have regular participants in many different fields, including philosophy, mathematics, history of science, and psychology.

The group organizers are Lara Buchak (Philosophy), Wesley Holliday (Philosophy), John MacFarlane (Philosophy), Paolo Mancosu (Philosophy), and Seth Yalcin (Philosophy).

### Our next event

April 11, 2018, 6–7:30 PM in 234 Moses Hall

*Hanti Lin (UC Davis)*

Modes of Convergence to the Truth – Steps toward a Better Epistemology of Induction

Those who engage in normative or evaluative studies of induction, such as formal epistemologists, statisticians, and computer scientists, have provided many positive results for justifying (to a certain extent) various kinds of inductive inferences. But they have all said little about a very familiar kind of induction. I call it *full* enumerative induction, an instance of which is this: “We’ve seen this many ravens and they are all black, so all ravens are black”—without a stronger premise such as IID or a weaker conclusion such as “all the ravens observed in the future will be black”. I explain why those theorists of induction say so little about full enumerative induction. To remedy this, I propose that Bayesians be *learning-theoretic* Bayesians and learning theorists be *truly* learning-theoretic—in three steps. (i) Understand certain modes of convergence to the truth as *epistemic ideals* for an inquirer to achieve where possible. (ii) Adopt the norm that an inquirer ought to achieve the highest achievable epistemic ideal. (iii) See whether full enumerative induction can be justified as—that is, proved to be—a necessary means for achieving the highest epistemic ideal achievable for tackling the problem of whether all ravens are black. The answer is positive, thanks to a new theorem, whose Bayesian version is proved as well. The technical breakthrough consists in introducing a mode of convergence slightly weaker than Gold’s (1965) and Putnam’s (1965) identification in the limit; I call it *almost everywhere* convergence to the truth, where the conception of “almost everywhere” is borrowed from geometry and topology. The talk will not presuppose knowledge of topology or learning theory.