Austin and I have decided to start a new series covering the historically significant arguments for the existence of God. We recently realized that we've been writing this blog for a while but haven't really covered the more famous arguments for God's existence. This is our attempt to rectify that situation.

Cosmological arguments are those that seek to establish the existence of a First Cause of the universe - the cosmos - based on some general feature of the cosmos. The Kalam Cosmological Argument is one such argument, and it has had a rather interesting history. It has also seen something of a resurgence in philosophy of religion and apologetics, largely due to the discoveries in cosmology of the last century or so. This is my attempt to briefly explain the argument. Of course, a fully fleshed-out version of the Kalam Cosmological Argument would take pages upon pages to elucidate. The hope of this post is to give you a general overview of the argument itself and why it remains a compelling argument for God's existence after so many centuries.

The What Cosmological Argument?

Kalam! It is a term Muslim thinkers used to mark out a theological statement; it gradually broadened into a term for Islamic scholastic theology at large. This brings up the rather obvious question of why an argument for the existence of God used by Christians has an Islamic term attached to it. Let's rewind the clock a bit to find our answer.

The kalam cosmological argument was first advanced by theologians in the early Christian church as a response to the Aristotelian doctrine that the universe has always existed. They used the argument to defend creatio ex nihilo - creation out of nothing. The Bible teaches that the universe was created, not that it has always existed, and these early theologians used the argument to defend that position.

As Muslims barreled their way through much of the largely Christian Mediterranean starting in the 7th century, they ran across this argument. Over time, both Muslim and Jewish scholars developed the argument further. Then the argument made its way back into Christian circles. (This is all horribly oversimplified; the idea that Christian, Muslim, and Jewish scholars might exchange ideas during a supposedly barbaric and anti-intellectual period of history may come as something of a shock. As usual, history itself is far more intricate and subtle than our caricatured understanding of it. The early Middle Ages are a fascinating period of history, but I have neither the knowledge nor the room to write about them at length.)

So why is the argument popular now? Two reasons: 1) modern cosmology and 2) William Lane Craig. Craig's influence is largely due to his well-respected books on the subject, the first two published in 1979 and 1980. The atheist philosopher and professor Peter Millican described The Cosmological Argument: From Plato to Leibniz as a "landmark in the discussion of the cosmological argument."[1] It was Craig who named it the kalam cosmological argument, to recognize the influence that Islamic scholarship has had on its development. As for modern cosmology: if you've been paying attention, the scientific evidence of the last century strongly suggests the universe has a beginning.

Up until 1929, the scientific consensus had been that the universe did not have a beginning. However, during the 1920s the Russian mathematician Alexander Friedmann and the Belgian astronomer Georges Lemaître independently formulated solutions to Einstein's theory of general relativity (yes, there are multiple solutions to GR) in which the universe had a beginning. Then, in 1929, Edwin Hubble discovered the redshift of distant galaxies (interpreted as a Doppler effect), which meant that those galaxies are receding from us - and therefore were once closer together. Extrapolate far enough back and all the matter and energy in the universe was once packed into an infinitesimally small point, which then expanded rapidly to form the universe - the Big Bang. Put another way: if you plot the distances between the galaxies against time, working backwards from today's redshift observations, the line ends at a point where the distances between everything are essentially zero.
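To make that backwards extrapolation concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not anything from the original sources: it assumes a constant expansion rate and a round, assumed value for the Hubble constant, whereas real cosmology must account for the expansion rate changing over time.

```python
# Back-of-the-envelope sketch of "running the expansion backwards".
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
# If the rate were constant, every separation shrinks to zero when you go
# back t = d / v = 1 / H0 in time, regardless of which galaxy pair you pick.

KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

def hubble_time_gyr(h0_km_s_per_mpc: float) -> float:
    """Time since all separations were ~zero, assuming constant expansion."""
    h0_per_second = h0_km_s_per_mpc / KM_PER_MPC  # H0 in units of 1/s
    return (1.0 / h0_per_second) / SECONDS_PER_GYR

# 70 km/s/Mpc is an assumed round value for the Hubble constant.
print(f"Naive age of the universe: {hubble_time_gyr(70.0):.1f} billion years")
# -> about 14 billion years, in the same ballpark as the measured ~13.8
```

The point of the toy calculation is simply that the observed expansion, run backwards, terminates at a finite time in the past - a beginning.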

So What's the Argument Itself?

If you've been tracking with me so far, you already know one of the premises of the argument: the universe had a beginning. But here's the whole argument in its simplest form:
  1. Everything that begins to exist has a cause
  2. The universe began to exist
  3. Therefore, the universe has a cause
So is the argument any good? Well, clearly, it is a valid argument: instantiate the first premise to the universe, and the conclusion follows by modus ponens (see the formal sketch below). The real debate is over the two premises. Do things that begin to exist have a cause? Did the universe really begin to exist?
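(Before we dig into the premises: for readers who like to see validity checked mechanically, here is a minimal sketch of the argument's form in Lean. The names are my own illustrative choices, not anything from Craig; the point is only that the conclusion follows from the premises.)

```lean
-- Sketch of the kalam's logical form; all names here are illustrative.
--   p1: everything that begins to exist has a cause   (premise 1)
--   p2: the universe began to exist                   (premise 2)
-- Instantiating p1 at `theUniverse` and applying modus ponens with p2
-- yields the conclusion: the universe has a cause.
example {Thing : Type} (beginsToExist hasCause : Thing → Prop)
    (theUniverse : Thing)
    (p1 : ∀ x : Thing, beginsToExist x → hasCause x)
    (p2 : beginsToExist theUniverse) :
    hasCause theUniverse :=
  p1 theUniverse p2
```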

This is where things get more complex. As William Lane Craig explains, "The supporting arguments and responses to defeaters of the argument's two basic premises can proliferate in an almost fractal-like fashion."[2] As with any philosophical argument worth its salt, the rubber really meets the road at the premises (unless the argument is so complex that its very form is a matter of debate; in this case, though, the form is very simple).

In the next post, we'll take a look at the premises in reverse order to see if they hold up. We'll then discuss the attributes of the cause of the universe that follow from the argument itself, and finish up with some of the common objections to the argument, and the responses to those objections.


Notes:
1. From Millican's comments as moderator for one of Craig's presentations.
2. The Blackwell Companion to Natural Theology, p. 102.
In a previous post, I introduced the so-called "Is/Ought Problem," first made famous by David Hume. Hume argued that you cannot get from an "is" to an "ought"-- that is, you cannot argue from the way things currently are to the way things ought to be. Or more generally, you cannot get from factual statements to evaluative statements. But this is exactly how many people argue in a moral context-- "punching someone causes them pain, therefore you should not punch someone," for example. Or "James is fast and strong. He should play football." But is this true? Is it always illegitimate to argue from premises about the way things are to conclusions about the way things ought to be?


In After Virtue, Alasdair MacIntyre gives several counter-examples to show that this is not the case. Consider first:
There are several types of valid argument in which some element may appear in a conclusion which is not present in the premises. A.N. Prior's counter-example to this alleged principle illustrates its breakdown adequately; from the premise 'He is a sea-captain', the conclusion may be validly inferred that 'He ought to do whatever a sea-captain ought to do'. This counter-example not only shows that there is no general principle of the type alleged; but it itself shows what is at least a grammatical truth - an 'is' premise can on occasion entail an 'ought' conclusion. [1]
You can see here that the reason this example succeeds is that there is a concept of "sea-captain" which entails certain responsibilities. Those responsibilities are built into what it means to be a sea-captain. Thus the counter-example adequately shows why Hume's contention is wrong, at least in some cases. Consider further:
From such factual premises as 'This watch is grossly inaccurate and irregular in time-keeping' and 'This watch is too heavy to carry about comfortably', the evaluative conclusion validly follows that 'This is a bad watch'. From such factual premises as 'He gets a better yield for this crop per acre than any farmer in the district', 'He has the most effective programme of soil renewal yet known' and 'His dairy herd wins all the first prizes at the agricultural shows', the evaluative conclusion validly follows that 'He is a good farmer'.  
Both of these arguments are valid because of the special character of the concepts of a watch and of a farmer. Such concepts are functional concepts; that is to say, we define both 'watch' and 'farmer' in terms of the purpose or function which a watch or a farmer are characteristically expected to serve. It follows that the concept of a watch cannot be defined independently of the concept of a good watch nor the concept of a farmer independently of that of a good farmer; and that the criterion of something's being a watch and the criterion of something's being a good watch-- and so also for 'farmer' and for all other functional concepts-- are not independent of each other. [...]  
Now clearly both sets of criteria-- as is evidenced by the examples given in the last paragraph-- are factual. Hence any argument which moves from premises which assert that the appropriate criteria are satisfied to a conclusion which asserts that 'That is a good such-and-such', where 'such-and-such' picks out an item specified by a functional concept, will be a valid argument which moves from factual premises to an evaluative conclusion. Thus we may safely assert that, if some amended version of the 'No "ought" conclusion from "is" premises' principle is to hold good, it must exclude arguments involving functional concepts from its scope. But this suggests strongly that those who have insisted that all moral arguments fall within the scope of such a principle may have been doing so, because they took it for granted that no moral arguments involve functional concepts. [2]
So we have two examples of arguments which move from premises stating only the way things are to a conclusion that entails some evaluative judgement. MacIntyre rightly points out that at this point Hume's defenders can only proceed by modifying their principle. Perhaps evaluative conclusions can, in some cases, follow from factual premises; perhaps the principle holds only for those arguments which do not involve functional concepts. But this shift identifies and magnifies the breakdown of the Enlightenment attempt to justify morality rationally, a major thesis of MacIntyre's book:

Yet moral arguments within the classical, Aristotelian tradition-- whether in its Greek or its medieval versions-- involve at least one central functional concept, the concept of man understood as having an essential nature and an essential purpose or function; and it is when and only when the classical tradition in its integrity has been substantially rejected that moral arguments change their character so that they fall within the scope of some version of the 'No "ought" conclusion from "is" premises' principle. That is to say, 'man' stands to 'good man' as 'watch' stands to 'good watch' or 'farmer' to 'good farmer' within the classical tradition. Aristotle takes it as a starting-point for ethical enquiry that the relationship of 'man' to 'living well' is analogous to that of 'harpist' to 'playing the harp well' (Nicomachean Ethics, 1095a 16). But the use of 'man' as a functional concept is far older than Aristotle and it does not initially derive from Aristotle's metaphysical biology. It is rooted in the forms of social life to which the theorists of the classical tradition give expression. For according to that tradition to be a man is to fill a set of roles each of which has its own point and purpose: member of a family , citizen, soldier, philosopher, servant of God. It is only when man is thought of as an individual prior to and apart from all roles that 'man' ceases to be a functional concept. [3]
Thus the failure involves not a simple disagreement with classical thinkers (the predecessor culture, as MacIntyre calls it), but a wholesale rejection of the classical worldview, whereby "man" was understood to have an essential nature. And this essential nature involves functional concepts. On the classical view, to say a man is a "good man" is no different from saying a watch is a "good watch," a farmer is a "good farmer," or a sea-captain is a "good sea-captain." Each of these is true in virtue of the role that it plays and the function it essentially exhibits.

So given Hume's presuppositions (a complete rejection of the classical worldview, especially the idea of man's essential nature and telos), he may have been right that morally evaluative conclusions could not be derived from factual premises.  However, these presuppositions (shared by many Enlightenment and post-Enlightenment thinkers) are not obviously true.


Notes:
1. MacIntyre, After Virtue, 1981, p. 57.
2. Ibid., pp. 57-58.
3. Ibid., pp. 58-59.
We haven't really argued for it in the blog - rather, assumed it - but I think most readers would agree that one's beliefs should be rational. There should be some reason for believing them. To put it another way, there seems to be unspoken agreement that there is an imperative to form beliefs at least in part on the basis of rationality. And that if one's beliefs are shown to be irrational, they should be discarded.[1]

If you have read any of Descartes' work, you have to recognize that he was a genius, and largely responsible for kick-starting modern philosophy. He had some great insights. But there is one matter in which he was completely wrong. He thought our rational faculties never failed us - it is always something else that fails us. And of course this simply isn't the case. Our rational faculties do fail us. On the whole, they are fairly reliable, but they are not infallible.

One of the chief dangers to our beliefs is that our cognitive faculties are subject to error. In theology, sin's negative effects on the mind and intellect are known as the noetic effects of sin. It is a fascinating subject, if you'd like to look into it.

At any rate, I'd like to focus on one way in which our intellect can fail us: cognitive bias. This is a danger to anyone currently breathing. It isn't that only atheists are in danger of it, or only Christians, or only agnostics, or Buddhists, or humanists, butchers, bakers, candlestick makers. You get the point.

Cognitive bias is a tendency or predisposition towards certain lines of thinking that do not align with rationality. Where rationality draws a straight line, cognitive biases fly off to the four corners of the earth. The real difficulty is that one might not be aware of a cognitive bias. It seems like one is thinking rationally, when in fact one is in error.

I mentioned one cognitive bias in passing in a previous post on Alvin Plantinga's work. The basic idea is that if one has a strongly held belief, and someone provides a powerful argument against that belief, rather than accepting that the belief is false, one may in fact further entrench the belief in the face of the evidence (apparently this is known as the backfire effect). In fact, one may abandon some other belief in order to hold on to the one that has come under attack.

I don't really have any specific way of guarding against cognitive bias myself, other than trying to think carefully and spend time in honest introspection. But it may at least be useful to know what some common cognitive biases are. That may make it a bit easier to guard against them. Here are just a few that are relevant to discussions of philosophy, apologetics, and argumentation.

The one bandied about the most these days is confirmation bias. One falls prey to confirmation bias by selecting evidence that confirms one's preconceived notions and rejecting all other evidence. For example, have you ever heard someone complain that it is always they who are stopped at the red light while everyone else makes it through the intersection in the nick of time? Obviously, over the course of one's driving career that is almost certainly not true. That person is simply ignoring all the times they made it through the light (probably because one's mind isn't on the light when one is able to cruise through the intersection). In the case of a belief system, one may seek out evidence for one's own beliefs while subconsciously rejecting any opposing evidence.

Here's a biggie: the belief bias. With belief bias, one is predisposed against an argument not on the basis of its logic but on the basis of the believability of its conclusion. Think of the Pevensie siblings in The Lion, the Witch and the Wardrobe - especially Peter and Susan. Lucy is incredibly excited to tell them that she has found another world in the wardrobe. This announcement is met with utter skepticism. However, the professor calls them on their skepticism: Lucy's excitement seems genuine, and she is the most honest of the siblings. So, logically, she is probably telling the truth. But what she is claiming is plainly ridiculous! And yet it was true.[2]

Another is the focusing effect. Ever seen a sports pundit chalk up the success or failure of a recently completed season to just one factor? Happens all the time. And they are wrong all the time. The success of a team over the course of a season - or even one game - depends on a myriad of factors. But these pundits are so certain it was all due to the one thing they identified.

One more. In general, we have a tendency to reject information or an argument from an adversary simply because it comes from our adversary. It's sort of a psychological ad hominem - that argument is bad because it came from my adversary. Someone who falls for this is suffering from the scourge of reactive devaluation.

In closing, it might go without saying, but I am no psychologist. Nor am I a philosopher (many biases were actually first identified by philosophers). Yet I think it is important that we think carefully and honestly about important issues, and having at least some idea of how we may be biased may help guard against these tendencies. That and wisdom.


Notes:
1. This of course can be taken too far. There are some beliefs that are not based on rationality but nonetheless seem completely reasonable. If you see a tree, you believe a tree is there not by some deductive argument but because your perception of the tree's existence is basic. The belief that the tree is there is what Plantinga would call a properly basic belief.
2. Granted, this is an example from a fictional work. A more pressing example might be the arguments for the resurrection. Many are rejected outright, simply on the grounds that the idea of a man rising from the dead is ridiculous.