- Forthcoming. Know how to transmit knowledge? [Abstract]
- 2009. Know how to be Gettiered? [Abstract]
- Forthcoming. The Evidential Impact of Explanatory Considerations (with Kevin McCain) [Abstract]
- Forthcoming. Coherence and conservation: A response to Gardiner [Abstract]
- Forthcoming. Belief, evidence, and knowledge: A response to Cling [Abstract]
- Forthcoming. Making conservatism great again: A reply to Schnee [Abstract]
- Forthcoming. Rival explanatory paradigms and justification: A response to Dabay [Abstract]
- 2014. Why explanatoriness is evidentially relevant (with Kevin McCain) [Abstract]
- 2014. Finite Reasons without Foundations [Abstract]
- 2013. Is foundational a priori justification indispensable? [Abstract]
- 2012. Is there an 'I' in epistemology? [Abstract]
- 2012. Introduction: Epistemic Coherentism.
- 2012. Basic Reasons and First Philosophy. [Abstract]
- 2011. Explanationist Plasticity & The Problem of the Criterion. [Abstract]
- 2016. Acquaintance and Skepticism about the Past. [Abstract]
- 2014. Direct Phenomenal Beliefs, Cognitive Significance, and the Specious Present [Abstract]
- 2013. BonJour and the myth of the given. [Abstract]
- 2010. Similarity & Acquaintance: A dilemma. [Abstract]
- 2010. Skeptics without borders (with Kevin Meeker) [Abstract]
- 2007. Acquaintance and the Problem of the Speckled Hen. [Abstract]
- Forthcoming. Religious Conscience and the Private Market [Abstract]
- Forthcoming. Will there be skeptics in heaven? [Abstract]
- Forthcoming. The Argument from so many arguments. [Abstract]
- 2014. Social Evil [Abstract]
- 2014. Skeptical Theism within reason. [Abstract]
- 2008. Hell, Vagueness, and Justice: A Reply to Sider. (with Trent Dougherty) [Abstract]
- 2008. A User’s Guide to Design Arguments. (with Trent Dougherty) [Abstract]
- 2007. Divine Hiddenness and the Nature of Belief. (with Trent Dougherty) [Abstract]
- 2012. Functionalism about Truth and the Metaphysics of Reduction (with Michael Horton). [Abstract]
- 2007. Foundational Evidentialism and the Problem of Scatter. [Abstract]
- 2008. Foundationalism.
- 2007. Internalism and Externalism in Epistemology.
Papers on knowledge how
Intellectualism about knowledge-how is the view that practical knowledge is a species of propositional knowledge. I argue that this view is undermined by a difference in properties between knowledge-how and both knowledge-that and knowledge-wh. More specifically, I argue that both knowledge-that and knowledge-wh are easily transmitted via testimony while knowledge-how is not. This points to a crucial difference between these states of knowledge. I also consider Jason Stanley's attempt to subsume knowledge-how under an account of de se knowledge. I argue that there are crucial differences between de se knowledge and knowledge-how. Thus, this paper advances both the discussion of intellectualism and the literature on the nature of de se knowledge. (Noûs (forthcoming))
Jason Stanley and Timothy Williamson’s influential article "Knowing How" argues that knowledge-how is a species of knowledge-that. One objection to their view is that knowledge-how is significantly different from knowledge-that because Gettier cases afflict the latter but not the former. Stanley and Williamson argue that this objection fails. Their response, however, is not adequate. Moreover, I sketch a plausible argument that knowledge-how is not susceptible to Gettier cases. This suggests a significant distinction between knowledge-that and knowledge-how. (Philosophy and Phenomenological Research (2009) LXXIX(3): 743-747)
Papers on Explanationism
Explanationism is an attractive family of theories of epistemic justification. In broad terms, explanationism is the idea that what a person is justified in believing depends on their explanatory position. At its core, explanationists hold that the fact that p would explain q if p were true is itself evidence that p is true. In slogan form: explanatoriness is evidentially relevant. Despite the plausibility of explanationism, not all of the recent interest in it has been complimentary. Recently, William Roche and Elliott Sober (2013 & 2014) have argued that "explanatoriness is evidentially irrelevant" (2013: 659). R&S's argument against the evidential relevance of explanatory considerations begins with what they call the "Screening-Off Thesis" (SOT): Let H be some hypothesis, O be some observation, and E be the proposition that H would explain O if H and O were true. Then O screens off E from H: Pr(H|O & E) = Pr(H|O) (2014: 193). R&S contend that SOT is true if and only if "explanatoriness is evidentially irrelevant". We refer to this conditional claim as (IRRELEVANCE). In a recent article (2014), we argued that R&S overlook an important dimension of evidential support when making their case that IRRELEVANCE is true, viz., the resilience of a probability function. Resilience is essentially how volatile a probability function is with respect to new evidence; a probability function with low volatility is more resilient than a function with high volatility. We maintained that IRRELEVANCE is false because there are clear cases where explanatory considerations increase the resilience of a probability function. Additionally, we argued that there are numerous cases where SOT fails to hold. R&S (2014) argue that we were mistaken on both counts. The arguments R&S offer are not persuasive, but they do significantly clarify the disagreement. We pick up on this improved dialectical situation to further defend our position that explanatoriness is evidentially relevant.
The upshot of our discussion is that both SOT and IRRELEVANCE are false because explanatory considerations may be captured in logical and mathematical relations encoded in a Pr-function. Thus, both inference to the best explanation and explanationism are safe from attack. (Best Explanations: New Essays on Inference to the Best Explanation. Oxford University Press. Eds. McCain and Poston)
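The screening-off equality at the heart of SOT can be illustrated numerically. The sketch below is purely illustrative and not drawn from Roche and Sober's papers: it builds a toy joint distribution in which E depends only on O, so that O screens off E from H and Pr(H|O & E) = Pr(H|O) holds by construction.

```python
from itertools import product

# Toy model of the Screening-Off Thesis (SOT). All probabilities are
# invented for illustration. H is a hypothesis, O an observation, and E
# the proposition that H would explain O if H and O were true.
pr_H = 0.3
pr_O_given_H = {True: 0.8, False: 0.2}   # Pr(O | H) and Pr(O | ~H)
pr_E_given_O = {True: 0.9, False: 0.1}   # Pr(E | O) and Pr(E | ~O)

def joint(h, o, e):
    """Joint probability Pr(H=h, O=o, E=e); E depends only on O here."""
    p = pr_H if h else 1 - pr_H
    p *= pr_O_given_H[h] if o else 1 - pr_O_given_H[h]
    p *= pr_E_given_O[o] if e else 1 - pr_E_given_O[o]
    return p

def pr(event):
    """Probability of an event, summing the joint over all eight worlds."""
    return sum(joint(h, o, e)
               for h, o, e in product([True, False], repeat=3)
               if event(h, o, e))

pr_H_given_O = pr(lambda h, o, e: h and o) / pr(lambda h, o, e: o)
pr_H_given_OE = pr(lambda h, o, e: h and o and e) / pr(lambda h, o, e: o and e)

# Because E was built to depend only on O, O screens off E from H:
# Pr(H | O & E) = Pr(H | O), exactly as SOT states.
assert abs(pr_H_given_OE - pr_H_given_O) < 1e-12
```

In such a model conditioning on E adds nothing once O is given; the dispute in the abstract above concerns whether this equality exhausts the evidential contribution of explanatory considerations (e.g., their effect on resilience), not whether it can hold.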
I am grateful for Gardiner’s excellent and challenging comments on my book. I cannot hope to adequately address all of the objections she raises. Instead I will discuss the reasons I think epistemic conservatism is required for a plausible coherentist view, and then I will discuss the core idea behind conservatism. My hope is that within the context of a properly formulated and motivated conservatism, some of the most pressing concerns Gardiner raises will appear less troubling than they initially do. (Syndicate Philosophy (forthcoming))
I thank Andy Cling for these careful and insightful comments. Cling effectively summarizes many of the motivations and arguments I give for my explanationist view. He argues that on several important dimensions my view does not live up to its promises. In particular, he charges that the version of explanatory coherentism I defend is not a form of evidentialism and, moreover, it is a kind of foundationalist skepticism. In the following I aim to answer these important claims. (Syndicate Philosophy (forthcoming))
I appreciate Ian Schnee’s forceful criticisms of my attempt to explicate a plausible version of epistemic conservatism. As each commentator has pointed out, epistemic conservatism plays a pivotal role for my coherentist theory and so deserves careful attention. I argue that a belief’s justification is a matter of its fit in an explanatorily coherent system that beats relevant competitors. Moreover, I argue that a belief’s justification is always relative to a set of background beliefs. I contend that unless background beliefs have some level of justification simply in virtue of being held, skepticism follows. The key is to formulate a plausible version of conservatism that does not do violence to our firm judgments about the role of evidence in justification. I’ve argued that the core conservative claim is a coherence condition on a subject’s mental life. Unless a subject has a special reason to change her views, she has a right to continue to maintain those views. Or, as I put it, if a subject believes p in the state of empty evidence she has a right to continue to believe p. This epistemic right is not indefeasible. As I mentioned in my responses to other commentators, belief is teleologically ordered to knowledge. Consequently, such a coherence condition on a subject’s mental states is not the be-all and end-all of epistemology. I honestly think epistemic conservatism, properly understood, makes good epistemology. Schnee disagrees. Let us therefore reason about our differences. (Syndicate Philosophy (forthcoming))
Thomas Dabay provides a thoughtful and interesting perspective on my explanationist view. He focuses on the alternative systems objection to coherentism and argues that this is particularly problematic given my views about epistemic conservatism. Traditionally, the alternative systems objection targets coherentist views of justification because typical coherentist views hold that the justification of any belief is entirely a matter of its internal relations to other beliefs. The objection continues by observing that lots of different sets of beliefs, like a good work of fiction, bear virtuous internal relations among their members. But, presumably, being epistemically justified in a belief requires more than its being embedded in a coherent work of fiction. Epistemic justification requires more than internal relations among beliefs in one’s doxastic system. (Syndicate Philosophy (forthcoming))
William Roche and Elliott Sober argue that explanatoriness is evidentially irrelevant. This conclusion is surprising since it conflicts with a plausible assumption---the fact that a hypothesis best explains a given set of data is evidence that the hypothesis is true. We argue that Roche and Sober’s screening-off argument fails to account for a key aspect of evidential strength: the weight of a body of evidence. The weight of a body of evidence affects the resiliency of probabilities in the light of new evidence. Thus, Roche and Sober are mistaken. Explanatoriness is evidentially relevant. (Thought)
In this paper I develop a theory of reasons that has strong similarities to Peter Klein's infinitism. The view I develop, Framework Reasons, upholds Klein's principles of avoiding arbitrariness (PAA) and avoiding circularity (PAC) without requiring an infinite regress of reasons. A view of reasons that holds that the ‘reason for’ relation is constrained by PAA and PAC can avoid an infinite regress if the ‘reason for’ relation is contextual. Moreover, such a view of reasons can maintain that skepticism is false by maintaining that there is more to epistemic justification than what can be expressed in any reasoning session. One crucial argument for Framework Reasons is that justification depends on a background of plausibility considerations. In the final section, I apply this view of reasons to Michael Bergmann's argument that any non-skeptical epistemology must embrace epistemic circularity. (Metaphilosophy Special Issue: On the Regress Problem)
Laurence BonJour's (1985) coherence theory of empirical knowledge relies heavily on a traditional foundationalist theory of a priori knowledge. He argues that a foundationalist, rationalist theory of a priori justification is indispensable for a coherence theory. BonJour (1998) continues this theme, arguing that a traditional account of a priori justification is indispensable for the justification of putative a priori truths, the justification of any non-observational belief, and the justification of reasoning itself. While BonJour's indispensability arguments have received some critical discussion (Gendler 2001; Harman 2001; Beebe 2008), no one has investigated the indispensability arguments from a coherentist perspective. This perspective offers a fruitful take on BonJour's arguments because he does not appreciate the depth of the coherentist alternative to the traditional empiricist-rationalist debate. This is surprising on account of BonJour's previous defense of coherentism. Two significant conclusions emerge: first, BonJour's indispensability arguments beg central questions against an explanationist form of coherentism; second, BonJour's original defense of coherentism took on board certain assumptions that inevitably led to the demise of his form of coherentism. The positive conclusion of this paper is that explanatory coherentism is more coherent than BonJour's indispensability arguments assume and more coherent than BonJour's earlier coherentist epistemology. (Episteme (2013) 10:3, 317-331)
Epistemic conservatism is the thesis that the mere holding of a belief confers some positive epistemic status on its content. Conservatism is widely criticized on the grounds that it conflicts with the main goal in epistemology to believe truths and disbelieve falsehoods. In this paper I argue for conservatism and defend it from objections. First, I argue that the objection to conservatism from the truth goal in epistemology fails. Second, I develop and defend an argument for conservatism from the perspectival character of the truth goal. Finally, I examine several forceful challenges to conservatism and argue that these challenges are unsuccessful. The first challenge is that conservatism implies the propriety of assertions like ‘I believe p and this is part of my justification for it’. The second challenge argues that conservatism wrongly implies that the identity of an epistemic agent is relevant to the main goal of believing truths and disbelieving falsehoods. The last two challenges I consider are the ‘extra boost’ objection and the conversion objection. Each of these objections helps to clarify the nature of the conservative thesis. The upshot of the paper is that conservatism is an important and viable epistemological thesis. (Dialectica (2012) 66:4, 517-541).
This paper develops and defends a coherentist account of reasons. I develop three core ideas for this defense: a distinction between basic reasons and noninferential justification, the plausibility of the neglected argument against first philosophy, and an emergent account of reasons. These three ideas form the backbone for a credible coherentist view of reasons. I work toward this account by formulating and explaining the basic reasons dilemma. This dilemma reveals a wavering attitude that coherentists have had toward basic reasons. More importantly, the basic reasons dilemma focuses our attention on the central problems that afflict coherentist views of basic beliefs. By reflecting on the basic reasons dilemma, I formulate three desiderata that any viable coherentist account of basic beliefs must satisfy. I argue that the account on offer satisfies these desiderata. (The Southern Journal of Philosophy (2012) 50(1): 75-93).
This paper develops an explanationist treatment of the problem of the criterion. Explanationism is the view that all justified reasoning is justified in virtue of the explanatory virtues: simplicity, fruitfulness, testability, scope, and conservativeness. A crucial part of the explanationist framework is achieving wide reflective equilibrium. I argue that explanationism offers a plausible solution to the problem of the criterion. Furthermore, I argue that a key feature of explanationism is the plasticity of epistemic judgments and epistemic methods. The explanationist does not offer any fixed judgments or methods to guide epistemic conduct; even the explanatory virtues themselves are subject to change. This feature of explanationism gives it an advantage over non-explanationist views that offer fixed epistemic judgments and epistemic methods. The final section of this paper responds to objections to explanationism. (Philosophical Papers (2011) 40(3): 395-419).
Papers on Acquaintance, The Given, and Luminosity
I consider the problem of skepticism about the past within Richard Fumerton's acquaintance theory of non-inferential justification. Acts of acquaintance occur only within the specious present, that temporal duration in which (intuitively) memory plays no role. But if our data for justification is limited to the specious present then the options for avoiding a far-reaching skepticism are quite limited. I consider Fumerton's responses to skepticism about the past and argue that his acquaintance theory is not able to stave off skepticism about the past. Furthermore, I argue that the bounds of skepticism about the past overflow to the specious present by limiting the kind of content that is available within the all too short present moment. Finally, I defend an epistemic conservative response to this stark skeptical problem by arguing that it is a theoretically economical account of our justification for beliefs about the past. (Traditional Epistemic Internalism. Oxford University Press. Eds. Michael Bergmann and Brett Coppenger)
David Chalmers (2010) argues for an acquaintance theory of the justification of direct phenomenal beliefs. A central part of this defense is the claim that direct phenomenal beliefs are cognitively significant. I argue against this. Direct phenomenal beliefs are justified within the specious present, and yet the resources available within the present 'now' are so impoverished that they barely constrain the content of a direct phenomenal belief. I argue that Chalmers's account does not have the resources for explaining how direct phenomenal beliefs support the inference from 'thisE is R' to 'that was R'. (Philosophical Studies (forthcoming))
The Sellarsian dilemma is a powerful argument against internalistic foundationalist views that aim to end the regress of reasons in experiential states. Laurence BonJour once defended the soundness of this dilemma as part of a larger argument for epistemic coherentism. BonJour has now renounced his earlier conclusions about the dilemma and has offered an account of internalistic foundationalism aimed, in part, at showing the errors of his former ways. I contend that BonJour's early concerns about the Sellarsian dilemma are correct, and that his latest position does not adequately handle the dilemma. I focus my attention on BonJour's claim that a nonconceptual experiential state can provide a subject with a reason to believe some proposition. It is crucial for the viability of internalistic foundationalism to evaluate whether this claim is true. I argue it is false. The requirement that the states that provide justification give reasons to a subject conflicts with the idea that these states are nonconceptual. In the final section I consider David Chalmers's attempt to defend a view closely similar to BonJour's. Chalmers's useful theory of phenomenal concepts provides a helpful framework for identifying a crucial problem with attempts to end the regress of reasons in pure experiential states. (Res Philosophica (2013) 90:2, 185-201)
There is an interesting and instructive problem with Richard Fumerton’s acquaintance theory of noninferential justification. Fumerton’s explicit account requires acquaintance with the truth-maker of one’s belief and yet he admits that one can have noninferential justification when one is not acquainted with the truth-maker of one’s belief but instead acquainted with a very similar truth-maker. On the face of it this problem calls for clarification. However, there are skeptical issues lurking in the background. This paper explores these issues by developing a dilemma for an acquaintance theory. (Philosophical Studies (2010) 147: 369-378).
Timothy Williamson’s anti-luminosity argument has received considerable attention. Escaping unnoticed, though, is a strikingly similar argument from David Hume. This paper highlights some of the arresting parallels between Williamson’s reasoning and Hume’s that will allow us to appreciate more deeply the plausibility of Williamson’s reasoning and to understand how, following Hume, we can extend this reasoning to undermine the “luminosity” of simple necessary truths. More broadly the parallels help us to identify a common skeptical predicament underlying both arguments, which we shall call “the quarantine problem”. The quarantine problem expresses a deep skepticism about achieving any exalted epistemic state. Further, the perspective gained by the quarantine problem allows us to easily categorize existing responses to Williamson’s anti-luminosity argument and to observe the deficiencies of those responses. In sum, the quarantine problem reveals the deeply fallibilistic nature of whatever knowledge we may possess. (American Philosophical Quarterly (2010) 47(3): 223-237)
This paper responds to Ernest Sosa’s recent criticism of Richard Fumerton’s acquaintance theory. Sosa argues that Fumerton’s account of non-inferential justification falls prey to the problem of the speckled hen. I argue that Sosa’s criticisms are both illuminating and interesting but that Fumerton’s theory can escape the problem of the speckled hen. More generally, the paper shows that an internalist account of non-inferential justification can survive the powerful objections of the Sellarsian dilemma and the problem of the speckled hen. (Philosophical Studies (2007) 132: 331-346).
Papers in the Philosophy of Religion
I argue for an account of when an appeal to religious conscience exempts a business owner from providing a private market service. Recent cases in the state and federal courts evince the need for a philosophical treatment of the success conditions for religious conscience exemptions to private market services. I shall assume that the conscience objection is focused on a law to perform an act in the private market where the law arises from a state's anti-discrimination clauses. I begin with some conceptual clarifications and then proceed to examine cases and argue for a general account. On the account I propose, a religious conscience exemption ought to be granted if and only if the act concerns a sacred matter according to an epistemically live religious tradition. I understand a religious tradition to be 'epistemically live' if and only if it is believed by a group of persons and it is not common knowledge that it is false. On my view the appeal to common knowledge is crucial for fairly balancing the competing rights of religious autonomy and anti-discrimination. (Religious Exemptions. Oxford University Press. Eds. Vallier and Weber)
I begin with a puzzle that arises from reflection on two things that are not normally put together: the nature of Christian hope, particularly the vision of a renewed creation, and global skepticism. The puzzle relates to the fact that if arguments for global skepticism work now then they work equally well in heaven. My goal is to present the puzzle and then propose a resolution. I begin by discussing the nature of the Christian conception of heaven and then I develop an argument for global skepticism. I continue to fill out the puzzle before finally turning to examine a resolution of the puzzle. (Paradise Understood: New Philosophical Essays about Heaven. Oxford: Oxford University Press. Eds. Ryan Byerly and Eric Silverman)
My goal in this paper is to offer a Bayesian model of strength of evidence in cases in which there are multiple items of independent evidence. I will use this Bayesian model to evaluate the strength of evidence for theism if, as Plantinga claims, there are two dozen or so arguments for theism. Formal models are justified by their clarity, precision, and usefulness, even though they involve abstractions that do not perfectly fit the phenomena. Many of Plantinga’s arguments are metaphysical arguments, involving premises which are necessarily true, if true at all. Applying a Bayesian account of strength of evidence in this case involves reformulating some of the arguments, but, even if a Bayesian shoe doesn’t fit perfectly into a Leibnizian foot, Bayesian footwear is much more suitable to certain types of terrain, especially when the landscape requires encompassing the overall effect of multiple vistas. I believe that the Bayesian model I offer has significant utility in assessing strength of evidence in cases of multiple items of evidence. The model turns questions of the overall strength of multiple arguments into a simple summation problem and it provides a clear framework for raising more philosophical questions about the argument. I hope that this paper provides a model for many fruitful conversations about how to aggregate multiple items of evidence. (Two dozen (or so) arguments for God. Oxford University Press. Eds. Trent Dougherty and Jerry Walls)
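The "simple summation" idea in the abstract above can be sketched in a few lines. On the standard Bayesian assumption that the items of evidence are probabilistically independent given T and given ~T, the overall Bayes factor is the product of the individual factors, which becomes a sum on the log scale. The likelihood ratios below are hypothetical placeholders for illustration, not values from the paper or from Plantinga's arguments.

```python
import math

def log_bayes_factor(pr_e_given_T, pr_e_given_not_T):
    """Log of the Bayes factor Pr(E|T) / Pr(E|~T) for one item of evidence."""
    return math.log(pr_e_given_T / pr_e_given_not_T)

# Three independent arguments, each summarized by a pair of likelihoods
# (Pr(E|T), Pr(E|~T)). These numbers are invented for the sketch.
likelihoods = [(0.6, 0.3), (0.5, 0.4), (0.7, 0.35)]

# Independence turns aggregation into a sum of log Bayes factors.
total_log_bf = sum(log_bayes_factor(p, q) for p, q in likelihoods)

# Posterior odds = prior odds times the combined Bayes factor.
prior_odds = 1.0                                  # even prior odds on T
posterior_odds = prior_odds * math.exp(total_log_bf)
```

On this toy model the three factors (2, 1.25, and 2) combine to posterior odds of 5 to 1; the philosophical work, as the abstract notes, lies in reformulating metaphysical arguments so that they can be summarized by likelihoods at all.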
Social evil is any pain or suffering brought about by game-theoretic interactions of many individuals. This paper introduces and discusses the problem of social evil. I begin by focusing on social evil brought about by game-theoretic interactions of rational moral individuals. The problem social evil poses for theism is distinct from problems posed by natural and moral evils. Social evil is not a natural evil because it is brought about by the choices of individuals. But social evil is not a form of moral evil because each individual actor does not misuse his free will. Traditional defenses for natural and moral evil fall short in addressing the problem of social evil. The final section of this paper discusses social evil and virtue. I begin by arguing that social evil can arise even when individual virtue is lacking. Next, I explore the possibility of an Edwardsian defense of social evil that stresses the high demands of true virtue. In this context, I argue that social evil may arise even when all the participants are truly virtuous. The conclusion of this paper is that social evil is problematic and provides new ground for exploring the conceptual resources of theism. (Oxford Studies in the Philosophy of Religion (forthcoming)).
The evidential argument from evil moves from inscrutable evils to gratuitous evils, from evils we cannot scrutinize a God-justifying reason for permitting to there being no such reason. Skeptical theism challenges this move by claiming that our inability to scrutinize a God-justifying reason does not provide good evidence that there is no reason. The core motivation for skeptical theism is that the cognitive and moral distance between a perfect being and creatures like us is so great that we shouldn’t expect to grasp all the relevant considerations pertaining to a God-justifying reason. My goal in this paper is to defend skeptical theism within a context that allows for an inverse probability argument for theism. These arguments are crucial for an evidentialist approach to the justification of theism. I aim to show that there is a natural way of motivating a skeptical theist position that does not undermine our knowledge of some values. (Skeptical Theism: New Essays. Oxford University Press. Eds. Trent Dougherty and Justin McBrayer (forthcoming))
Ted Sider’s paper “Hell and Vagueness” challenges a certain conception of Hell by arguing that it is inconsistent with God’s justice. Sider’s inconsistency argument works only when supplemented by additional premises. Key to Sider’s case is a premise that the properties upon which eternal destinies supervene are “a smear,” i.e., they are distributed continuously among individuals in the world. We question this premise and provide reasons to doubt it. The doubts come from two sources. The first is based on evidential considerations borrowed from skeptical theism. A related but separate consideration is that, supposing it would be an insurmountable problem for God to make just (and therefore non-arbitrary) distinctions in a morally smeared world, God thereby has sufficient motivation not to actualize such worlds. Yet God also clearly has motivation only to actualize some member of the subset of non-smeared worlds which don’t appear non-smeared. For if it were obvious who was morally fit for Heaven and who wasn’t, a new arena of great injustice would be opened up. The result is that if there is a God, then he has the motivation and the ability to actualize from just that set of worlds which are not smeared but which are indiscernible from smeared worlds. (Faith & Philosophy (2008) 25(3): 322-328)
We argue that there is a tension between two types of design arguments: the fine-tuning argument (FTA) and the biological design argument (BDA). The tension arises because the strength of each argument is inversely proportional to the value of a certain currently unknown probability. Since the value of that probability is currently unknown, we investigate the properties of the FTA and BDA on different hypothetical values of this probability. If our central claim is correct this suggests three results: (1) It is not very plausible for a cumulative case for theism to include both the FTA and the BDA (with one possible qualification); (2) Self-organization scenarios do not threaten theism but in fact provide the materials for a good FTA; (3) A plausible design argument of one sort or another (either FTA or BDA) will be available for a wide variety of values of the key probability. (Religious Studies (2008) 44: 99-110)
In this paper we argue that attention to the intricacies relating to belief illustrates crucial difficulties with Schellenberg’s hiddenness argument. This issue has been only tangentially discussed in the literature to date. Yet we judge this aspect of Schellenberg’s argument deeply significant. We claim that focus on the nature of belief manifests a central flaw in the hiddenness argument. Additionally, attention to doxastic subtleties provides important lessons about the nature of faith. (Religious Studies (2007) 44: 183-198)
Functionalism about truth is the view that truth is an explanatorily significant but multiply-realizable property. According to this view the properties that realize truth vary from domain to domain, but the property of truth is a single, higher-order, domain-insensitive property. We argue that this view faces a challenge similar to the one that Jaegwon Kim laid out for the multiple realization thesis. The challenge is that the higher-order property of truth is equivalent to an explanatorily idle disjunction of its realization bases. This consequence undermines the alethic functionalists’ non-deflationary ambitions. A plausible response to Kim’s argument fails to carry over to alethic functionalism on account of significant differences between alethic functionalism and psychological functionalism. Lynch’s revised view in his book Truth as One and Many (2009) fails to answer our challenge. The upshot is that, while mental functionalism may survive Kim’s argument, it mortally wounds functionalism about truth. (Acta Analytica (2012) 27: 13-27)
This paper addresses the scatter problem for foundational evidentialism. Reflection on the scatter problem uncovers significant epistemological lessons. The scatter problem is evaluated in connection with Ernest Sosa’s use of the problem as an argument against foundational evidentialism. Sosa’s strategy is to consider a strong intuition in favor of internalism—the new evil demon problem, and then illustrate how a foundational evidentialist account of the new evil demon problem succumbs to the scatter problem. The goal in this paper is to evaluate the force of the scatter problem. The main argument of the paper is that the scatter problem has mixed success. On the one hand, scatter undermines objectual evidentialism, an evidentialist theory that formulates principles of basic perceptual justification in terms of the objects (or properties) of perceptual states. On the other hand, the problem of scatter does not undermine content evidentialism, an evidentialist view that formulates its epistemic principles in terms of the assertive content of perceptual states. The significance of the scatter problem, especially in concert with the new evil demon problem, is that it provides an argument for content evidentialism. (Abstracta (2007) 3(2): 89-106).
Works in progress

Book Reviews
- Mind, Brain, and Free Will Richard Swinburne
- Evidence and Religious Belief Kelly James Clark and Raymond VanArragon (eds). [Details]
- Liberal Faith: Essays in honor of Philip Quinn (ed) Paul J. Weithman
- Believing by Faith John Bishop. [Details]
- Justification without Awareness Michael Bergmann. [Details]
- Common-Sense Nicholas Rescher. [Details]
European Journal for Philosophy of Religion (forthcoming)
Mind 117 (2009), 151-155
Philosophy and Phenomenological Research 77:2 (2008), 570-573
Faith & Philosophy 24:3 (2007), 361-363
Recent & Upcoming Presentations
- Will there be skeptics in heaven? [Venue]
- The argument from so many arguments. [Venue]
- Infinitism and Inference. [Venue]
- Locating Bayesianism within an explanationist framework. [Venues]
Courses Taught
- F16: Ancient Epistemology
- F16: Medical Reasoning
- S16: Introduction to Philosophy
- S16: Introduction to Logic
- F15: Religious Pluralism
- F15: Critical Thinking
- S15: Philosophy of Science
- S15: Introduction to Philosophy
- F14: Epistemology
- F14: Symbolic Logic
- S14: Introduction to Philosophy
- S14: Philosophy of Religion
- S13: Philosophy of Math
- S13: Mathematical Logic
- S13: Philosophy of Science
- F12: Epistemology
- F12: Critical Thinking
- S12: Game Theory
Christopher Newport University, Paradise Workshop; April 24-25, 2015 (invited).
Baylor University, Plantingafest; November 6-8, 2014 (invited).
Vanderbilt University; October 17-20, 2013 (invited).
Southeastern Epistemology Conference; University of North Florida; October 26-27, 2012 (invited).
I am Professor of Philosophy at the University of South Alabama. My professional work aims to present a coherent and reasonable view on matters of enduring human interest. My primary research focuses on the foundations of reason, coherence, the epistemology of explanation, and the nature of practical knowledge. I’ve argued that the vast majority of knowledge depends upon the explanatory considerations of simplicity, plausibility, fit with background belief, and power. I’ve also argued that practical knowledge is a unique grasp of non-propositional structure. When I find the time, I write about a wide range of other fascinating philosophical problems.
In 2009, I began the yearly Orange Beach Epistemology Workshops. These thematic workshops bring leading epistemologists to discuss current research trends on the beautiful white sand beaches of southern Alabama.