Inductive Logic

A Thematic Compilation by Avi Sion

28. Thoughts on Induction

 

1.    Evidence

The Obvious

Every experience (concrete appearance – physical or mental percept, or intuition) is ‘evident’, in the sense that it is manifest before consciousness and that such appearance automatically gives it a minimum of credibility.

Concepts or theses (products of abstraction) are not themselves evident in this sense (though they too ‘appear’ in a sense), but rely for their credibility on their relation to certain experiences. An experience is ‘evidence for’ some concept or thesis, when it serves to confirm it adductively. A concept or thesis is ‘evidently true’ to the degree that such evidence for it is to be found.

A concept or thesis is said to be ‘immediately evident’, when very little effort is required to establish its truth, i.e. when the evidence that suffices to do so is readily available to everyone.

A concept or thesis is ‘self-evident’ (or evident by itself), if it is provable without reference to further experiential evidence (other than the minimum experience underlying its very conception or formulation). Such proof is achieved by noticing or showing the negation of the concept or thesis to involve an inconsistency or a self-contradiction of some sort.

We label ‘obvious’, then, all experiences (as such, i.e. in and for themselves), as well as ‘immediately evident’ and ‘self-evident’ concepts or theses.

Seems and Is

The following are some of the inductive arguments that help clarify the logical relations between the copulae ‘seems’ and ‘is’ (a small sketch mechanizing the three moods follows them below):

 

Uncertain mood:

P seems true and NotP seems equally true;

therefore (for this observer, at this time):

P ‘may be’ true, and equally NotP ‘may be’ true.

 

Probabilistic mood:

P seems true more than NotP seems true;

therefore (for this observer, at this time):

P ‘is probably’ true, and NotP ‘is probably not’ true.

 

Decisive mood:

P seems true and NotP does not seem true;

therefore (for this observer, at this time):

P ‘is’ true, and NotP ‘is not’ true.
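These three moods can be mechanized in a small sketch. The following Python fragment is my own illustration, not the author’s formalism: the numeric ‘seeming’ scale and the function name are invented for the purpose, and the decisive mood fires only when the rival appearance is wholly absent, as the text requires.

```python
def judge(seems_p: float, seems_not_p: float) -> str:
    """Map comparative degrees of 'seeming' to a tentative 'is' copula,
    valid only for this observer, at this time."""
    if seems_p > 0 and seems_not_p == 0:
        return "P is true"                 # decisive mood
    if seems_not_p > 0 and seems_p == 0:
        return "P is not true"             # decisive mood, negative side
    if seems_p > seems_not_p:
        return "P is probably true"        # probabilistic mood
    if seems_not_p > seems_p:
        return "P is probably not true"    # probabilistic mood, reversed
    return "P may be true, and equally NotP may be true"  # uncertain mood

print(judge(1.0, 0.0))  # decisive: P is true
print(judge(0.7, 0.3))  # probabilistic: P is probably true
print(judge(0.5, 0.5))  # uncertain: both remain possible
```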

 

Adductive Inference

Adductive inference often takes the form of a deductively invalid syllogism, such as:

 

All Z are Y, and

these X are Y;

therefore, these X are probably Z.

 

Of course, strictly speaking the conclusion does not follow from the premises; however, the premises do suggest some likelihood for the conclusion.

For example, “all beans in your bag are white, and the beans in your hand are white; therefore, the beans in your hand are probably from your bag.”
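One hedged way to make the suggested likelihood concrete is a Bayesian reading of the bean example. The numbers below (the prior, and the base rate of white beans from elsewhere) are invented purely for illustration; the author’s text does not quantify them:

```python
# H: the beans in hand came from the bag; E: the beans are white.
p_h = 0.5              # assumed prior credibility of H (illustrative)
p_e_given_h = 1.0      # all beans in the bag are white (given premise)
p_e_given_not_h = 0.3  # assumed rate of white beans from elsewhere

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(from bag | white) = {p_h_given_e:.2f}")  # ~0.77, up from 0.50
```

The observation raises the hypothesis above its prior without ever proving it, which is exactly the ‘some likelihood’ the premises are said to suggest.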

 

Trial and Error

With regard to the trial and error involved in adduction: “trial” means trying an idea out in practice, testing a theory by observation; and “error” means that some of the ideas we test will fail the test and thus be eliminated from further consideration or at least adjusted.

This is a rather broad notion. There are no doubt numerous distinguishable types of ‘trial and error’ – in different fields of study, in different situations – which we ought to identify and list; I do not attempt that here.

It should in any case be stressed that this simple method is pervasive in our pursuit of knowledge. Already at the level of sensation, we are using it all the time. For instance, when we smell food to check out if it is fresh, we are using this method. At the level of concept formation, we again repeatedly appeal to it. E.g. when we try out different definitions for a group of things that seem similar, we are using this method. Similarly, when we formulate individual propositions or compounds of many propositions, we use trial and error.

Trial and error is not just a ‘scientific method’ for high-level theoreticians and experimenters – it is the basic way to knowledge for mankind, and indeed for all sentient beings. It is ‘adaptation’ to the environment in the domain of knowledge, a subset of biological adaptation applicable to conscious organisms.
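The generate-test-eliminate cycle described above can be sketched as a loop. The candidate ‘rules’ and the toy sequence below are invented purely for illustration:

```python
# Trial and error over candidate rules for the observed sequence 2, 4, 8, 16.
observed = [2, 4, 8, 16]
candidates = {
    "add 2":  lambda i: 2 + 2 * i,     # predicts 2, 4, 6, 8  -> fails
    "double": lambda i: 2 ** (i + 1),  # predicts 2, 4, 8, 16 -> survives
    "square": lambda i: (i + 1) ** 2,  # predicts 1, 4, 9, 16 -> fails
}

def fits(rule) -> bool:
    """'Trial': test the rule's predictions against observation."""
    return all(rule(i) == x for i, x in enumerate(observed))

# 'Error': candidates failing the test are eliminated from consideration.
surviving = [name for name, rule in candidates.items() if fits(rule)]
print(surviving)  # ['double']
```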

 

Approaching Reality

What do we mean by a thesis “approaching reality”? We refer to the disjunction of all conceivable (now or ever, i.e. to date or in the future) solutions to a problem. At every elimination of one of these alternative solutions, all other alternatives are brought closer to being “the” solution. It is a bit like a game of musical chairs, where the last, leftover contestant will be declared the winner. As the list of possibilities is shortened, the status of each possible solution is increased. Thus, it is not only through confirmation (of a given thesis), but also through rejection (of alternative theses), that the given thesis advances in our esteem, or in its “degree of truth.” In this way, we do not have to declare every thesis true or false without nuance, and can view the quantitative aspect of induction as having formal justification.
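A minimal numerical sketch of this ‘musical chairs’ effect, assuming (purely for illustration) five alternatives starting with equal credibility: each elimination raises the standing of every survivor, even before any direct confirmation of it.

```python
# Five conceivable solutions, initially equally credible.
solutions = {s: 1 / 5 for s in ["A", "B", "C", "D", "E"]}

def eliminate(solutions, rejected):
    """Reject one alternative and renormalize credibility
    over the surviving alternatives."""
    survivors = {s: w for s, w in solutions.items() if s != rejected}
    total = sum(survivors.values())
    return {s: w / total for s, w in survivors.items()}

for rejected in ["E", "D", "C"]:
    solutions = eliminate(solutions, rejected)
    print(round(solutions["A"], 2))
# A's credibility climbs: 0.25, 0.33, 0.5, by rejection of its
# rivals alone, without any direct confirmation of A itself.
```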

 

Appearance, Reality and Illusion

Phenomenology results from a realization that the building blocks of knowledge are appearances. This realization is obtained through a dialectic, comprising thesis, antithesis and synthesis, as follows.

At first, one naturally regards everything one comes across in experience or thought as ‘real’ (this is the ‘naïve realist’ stance).

Then, faced with evident contradictions and gaps in one’s knowledge, one logically realizes that some things that seemed real at first must or at least may eventually be considered unreal – i.e. ‘illusory’ (this constitutes a cognitive crisis).

Finally, one realizes that, whether something is real or illusory (and ultimately remains so or turns out to be the opposite), at least it can immediately (unconditionally and absolutely) be acknowledged as ‘apparent’ (this is the ‘phenomenological’ stance, which resolves the crisis).

Knowledge of reality can then be inductively built up from knowledge of appearances, thanks to the following principle: one may credibly assume that something which appears to be real is indeed real, until and unless it is proved illusory or at least put in doubt for some specific reason. This may be characterized as ‘subtle realism’; it proceeds from the realization that the mere fact of appearance is the source of all credibility.

Thus, phenomenology follows the natural flow of knowledge, which is to initially accept individual appearances as real, while remaining ready to reclassify them as illusory if they give rise to specific logical problems that can only be solved in that specific way. The concept of ‘appearance’ is therefore not strictly primary, but a transitional term for use in problematic cases. Since it refers to the common ground between ‘reality’ and ‘illusion’, it is deductively primary. But since the latter are in practice attained before it, it is inductively secondary.

The concepts appearance, reality and illusion are to begin with concerned with experiences; and only thereafter, by analogy, they are applied to abstractions, i.e. conceptual products of experience arrived at through rational considerations, such as comparison and contrast (i.e. affirmation or negation, and measurement).

The term ‘fact’ is usually intended to refer to purely experiential data, i.e. the raw material of knowledge, in which case the opposite term ‘fiction’ refers to other items of knowledge, i.e. those tainted by interpretative hypotheses. (But note that in practice of course we do not always abide by such strict definitions, and may use the terms more broadly or narrowly.)

The concepts of truth, falsehood and uncertainty correspond in scope to those of reality, illusion and appearance. The latter triad is applied to the contents of propositions, while the former concerns the propositions as such. For example, considering “dogs bark,” the fact of dogs barking is ‘a reality’, while the proposition that dogs bark is ‘true’; similarly in other cases.

Once we understand all such concepts as signifying different epistemological and ontological statuses, it becomes clear why they need to be distinguished from each other. They are all used as logical instruments – to clarify and order discourse, and avoid confusions and antinomies.

Note well that phenomenology is not a skeptical philosophy that denies reality to all appearances and claims them all to be illusions. Such a posture (which too many philosophers have stupidly fallen into) is logically self-contradictory, since it claims itself true while rejecting all possibility of truth. The concept of illusion has no meaning if that of reality is denied; some credulity is needed for incredulity. Doubt is always based on some apparent contradiction or gap in knowledge; i.e. it is itself also an item within knowledge.

 

Existence and Non-existence

What is the relation between the concepts of existence and non-existence (or being and non-being), and those just elucidated of appearance, reality and illusion, one might ask?

At first, the term existence may be compared to that of reality, or more broadly to that of appearance (to admit the fact that illusions occur, even if their status is not equal to that of realities). However, upon reflection, an important divergence occurs when factors like time and place are taken into consideration.

We need to be able to verbally express changes in experience over time, space and other circumstances. An appearance, be it real or illusory, ‘exists’ at the time and place of its appearance – but may ‘not exist’ at some earlier or later time, or in another place. The ‘existence’ of appearances is transient, local, conditional and relative.

What appears today may cease to appear tomorrow, although it might (or might not) continue to appear less manifestly, through someone’s memory of it or through the appearance of exclusive effects of it. Something may appear here within my field of vision, but be absent elsewhere. You may see this in some circumstances, and then notice its absence in others.

We thus need to distinguish different ways of appearance. With reference to time: in actuality, or through memory or anticipation; or with reference to spatial positioning. Or again, with regard to modality: in actuality, only through potentiality (i.e. in some circumstances other than those currently operative), or through necessity (i.e. in all circumstances).

Time and place also incite a distinction between ‘existence’ and ‘reality’ (or ‘truth’), in that when something ceases to exist at a given time and place, the reality of its having existed at the previous time and place is not affected.

Furthermore, appearances are apparent to someone, somewhere – they are contents of consciousness, objects of cognition. The concept of existence is differentiated also with reference to this, by conceiving that what may be apparent to one Subject, may not be so to another. Moreover, we wish to eventually acknowledge that something may conceivably exist even without being experienced by anyone (though of course, in defining such a category, we must admit for consistency’s sake that we are thereby at least vaguely and indirectly conceptually cognizing the object concerned).

We thus come to the realization that the concept of appearance is a relatively subjective one, involving two distinct factors: an object of some kind with specific manifestations, on the one hand, and an awareness by someone of that object at a given time and place, on the other. The concept of existence is intended to separate out the objective factor from the factor of consciousness implicit in the concept of appearance.

‘Existence’ is thus needed to objectify ‘appearance’, and allow us to conceive of the object apart from any subject’s consciousness of it. We need to be able to conceive of the objects appearing to us as sometimes ‘continuing on’ even when we cease to be aware of them. Furthermore, we need to be able to consider objects that we have not yet personally experienced, and even may never experience. In this manner, we can project our minds beyond mere appearance, and through conception and adduction hope to grasp existence in a larger sense.

The concept of existence and its negation are thus additional instruments of logic, facilitating rational discourse, without which we would not be able to mentally express many distinctions. Consequently, saying ‘existence exists’ and ‘non-existence does not exist’ is not mere tautology, but an acknowledgement that the words we use have certain useful intentions. These statements constitute one more way for us to express the laws of thought. Existence cannot be denied, and non-existence cannot be affirmed.

We do not make the distinction between ‘existents’ and ‘non-existents’ by mentally lining up two kinds of things, like apples and things other than apples. The epistemological scenario applicable to most of our concepts is not applicable to such basic ones, which are of a more broadly pragmatic nature. Discernment rather than distinction is involved.

Whereas the concept ‘existence’ has some ultimate experiential content, ‘non-existence’ has none – because factual denial is not based on the same mental process as affirmation. We never experience non-existence – we only (in certain cases) fail to experience existence. The concept of existence is not built up by contrast to that of non-existence, since (by definition) the former relates to ‘all things’ and the latter to ‘nothing’, and nothing is not some kind of something. There is no time, place or circumstance containing nothingness. The word ‘non-existence’ is just a dumping place for all the words and sentences that have been identified as meaningless or false.

Terms like ‘existence’ and ‘non-existence’ are not ordinary subjects, copulae or predicates; they are too broad and basic to be treated like any other terms. Those who construct a theory of knowledge, or an ontology, which concludes that ‘existence does not exist’ or that ‘non-existence exists’ have not understood the logic of adduction. When there is a conflict between theory and observed facts, it is the theory (or the ‘reasoning’ that led up to it) that is put in doubt and is to be dismissed, not the facts.

 

2.    Theorizing

Critical Thought

Critical thought, or criticism, is considering the truth or falsehood of an idea – not only its truth, and not only its falsehood, either. It is not an essentially negative penchant, any more than a positive one, but an attitude of rigorous review in judgment, of keeping our standards high.

What makes a theory “scientific,” in the strict sense, is not whether it emanates from some prestigious personage or institution or corporation, but whether a maximum of care has been taken to formulate it and test it in accord with all known criteria of inductive and deductive logic. Science does not primarily mean, as some imagine, lab technicians with white aprons or university professors, or the exact sciences or mathematical equations. The term “science” initially refers to serious study, or to pursuit of knowledge as against mere opinion. It signifies a sustained effort of sound methodology, as currently possible and appropriate to the field of study concerned.

 

Degree of Detail

An important criterion for the credibility of theories is the degree of detail they propose. For instance, the immediate Creation theory is vague, whereas the gradual Evolution theory offers detailed descriptions of entities and processes. But of course, even the most detailed theory may turn out to be false. The existence of elaborate fictions in the form of novels (or scientific hoaxes presented as fact) shows that detail is not by itself proof.

One should also distinguish between explaining (e.g. fossils are leftovers of creatures that lived on earth in times past) and explaining-away (e.g. fossils are mere artifacts placed on earth by God to test people’s faith). The former is generally preferable to the latter. Though here again, the criterion is not determining.

 

“Somehow”

Theorizing is of course not a one-time, static thing, but an ongoing, changing process.

An old theory may be replaced by a new one, either because the facts currently faced are not covered by the old theory or because some logical or conceptual imperfection or inadequacy has been found in it. The new theory may not be much different from the old, a mere adjustment of it, but it must in any case bring something extra to bear, either a wider capacity to explain facts or some sort of logical improvement or conceptual clarification.

In setting standards for theorizing, we must highlight the fallacy of relying on “somehows” as a way to leap over holes in one’s theories. This may be viewed as one of the ways people “jump to conclusions.”

For example, to defend the idea of theodicy (Divine justice or karma), we posit a thesis of reincarnation (in this world or another). That is, seeing the injustice evident in everyday life, we first think there must be some hidden guilt in the life of the victim, and that unpunished criminals will be dealt with before their life is through. We assume that, in the long run, over the course of a whole life, apparent discrepancies are canceled out and equilibrium is restored. But then, realizing that this too is evidently not empirically true, we assume reincarnation as an explanation. For instance, children are sometimes raped or murdered; and since these are clearly innocent victims within their current life, granting that children are not punished for their parents’ sins, the assumption of justice makes us suppose that they committed some commensurate crime in a past life. Similarly, for an evidently unpunished criminal, it is assumed that Divine justice will punish him in an afterworld, or that karma will do so in a future life.[1]

In cases like this, the big fallacy is to be satisfied with a “somehow” to fill the gaps in our hypothesis. In the case of reincarnation, for instance, the theory should not be accepted unless an exact description of events in the transition from body to body were proposed, combined with a set of testable predictions that would make possible at least some empirical confirmation of the thesis (besides the events it is designed to explain). The apparent support that a vague reincarnation thesis gives to the foregone conclusion that “there is always justice” is not sufficient.

There are almost always hidden obscurities in our theories: the vagueness of some term, the lack of clarity of some proposition, the jumping to conclusions in some argument. Indeed, the sciences cannot claim success in their enterprise, as long as philosophy cannot claim its own. So long as consciousness, knowledge, universals, and similar concepts and problems of philosophy are not fully understood and solved, anything the special sciences say ignores such underlying obscurities and uncertainties. This means that the apparent success of science is temporary and delimited. Success can only be claimed at infinity, when all branches of knowledge reach their respective goals.

 

Pertinence

Pertinence might be explicated as the construction of an appropriate major premise, so that a given minor premise is enabled to yield the proposed conclusion. (I am thinking here of my findings in a-fortiori logic, generalizing the way we comprehend certain Biblical statements as inferences by interposing a presumed tacit major premise.[2])

How is the missing major premise discovered? It is not found by some direct, infallible insight – but as in all our knowledge (although we may not be consciously aware of these mental processes), it is arrived at inductively, by means of trial and error.

There may in fact be several alternative major premises, equally able to fulfill the required task of making the inference possible – equally pertinent. We may be aware of only some of these available possibilities.

We start by proposing a likely candidate for the post of major premise. This may at first glance seem like the most likely hypothesis. Later, we may change our minds, considering that the candidate does not fit in our overall context of knowledge in some respect(s). For instance, the proposed major premise might be more general than necessary, so that although it allows us to draw the desired conclusion in the present narrow context, it causes some havoc in a wider perspective. In such case, we propose a less general major premise or a considerably different one; and so on, till we are satisfied.

A hypothesis proposed is ‘pertinent’ if it can do the job at hand, which is to infer the desired conclusion from the given (minor) premise, even if it turns out to be rejected because it does not fit into the broader context. A proposed major premise incapable of fulfilling this role is ‘impertinent’.
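The search procedure just described can be sketched abstractly. Everything in the fragment below (the raven example, the candidate premises, the ‘fits the wider context’ flags) is my own invented illustration, not the author’s a-fortiori formalism:

```python
# Minor premise: "x is a raven"; desired conclusion: "x is black".
# Candidate major premises at different levels of generality:
candidates = {
    "All birds are black":  {"yields": True,  "fits_context": False},  # too general: clashes with wider knowledge
    "All ravens are black": {"yields": True,  "fits_context": True},
    "All ravens are birds": {"yields": False, "fits_context": True},   # impertinent: cannot yield the conclusion
}

pertinent = [p for p, v in candidates.items() if v["yields"]]
accepted = [p for p in pertinent if candidates[p]["fits_context"]]
print(pertinent)  # both premises able to do the job at hand
print(accepted)   # ['All ravens are black']
```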

 

Field Specific

Each field of study has methods and parameters peculiar to it, as well as many that are found in common with other fields. We may thus refer to specialized principles of logic.

For example, the logic of historical research (historiology) would demand that the various forms of evidence – physical remnants (artifacts, drawings, writings, etc.), behavioral indices (traditions handed down), as well as verbal sources (witnesses, second-hand contemporary testimony, historians’ later claims, etc.) – be clearly categorized and distinguished from each other, and their relative weight as evidence be assessed as objectively as possible.

 

Misappropriation

The most common logical fallacy is perhaps the misappropriation of logical expressions – using the language of logic, without having in fact resorted to logical processes. This often suffices to convince some people.

For example, one might say “it is a reasonable assumption that…” when one has made no attempt to logically check the issue, or “it may be inferred that…” when no deductive or even inductive logical process allows such inference. One gives the impression of logic, but without factual basis. Words like “it must be that,” “a fortiori,” “in conclusion,” “because of,” etc., are freely used as alibis, in lieu of logic, by way of mimicry, when logic was in fact ignored or opposed.

Of course, such behavior in discourse is not always intentional dishonesty. It is often due to ignorance of logic or lack of logical skill, or even just to inattentive, vague and imprecise thinking. In particular, many people are not aware of the difference between strictly deductive inference and merely inductive inference – these two logical modes being all the same to them. Sometimes, even though their reasoning was sound and its results plausible, they are just not aware exactly how they did it.

An example of intentional dishonesty is the discourse of Nagarjuna, which as I show in Buddhist Illogic is replete with pretended logic.

Another notable example of pseudo-logical discourse is Sigmund Freud’s “Moses and Monotheism.” His method there can be characterized as false advertising and creeping annexation. He says he won’t engage in some form of argument (which would be too obviously logically illicit or unscientific); and then, in the very next breath or gradually thereafter, he goes ahead and inserts that very argument into his discourse (to justify his prejudices). He loudly acknowledges the argument to be invalid (so as to give the impression that his approach is virtuously objective and scientific); then, coolly ignoring the very methodological imperatives he has just admitted, he hammers home his (foregone) ‘conclusions’. It is psychological manipulation. He relies on the prestige acquired in his field to pass off lies concerning another field.[3]

 

3.    Additional Remarks

Experiment

Experiment is a category of observation. It is observation in the midst of active interventions, in contrast to totally passive observation. Even when an observer moves around an object to see it from other angles, without interfering with the object, that is experiment of sorts. Asking people questions on some topic is also experiment of sorts.

Of course, when we think of experiment, we especially think of manipulations of some object – i.e. changing some conditions in or around it, and observing how its properties or behaviors are affected. Scientific experiment may be viewed as a way to speed up observation – making the object go through different phases of its nature, rather than waiting for it to vary by happenstance. Experiment improves on mere observation simply because it expands its scope. Experiment is not some new discovery by modern science[4] but has always existed – since the first man prodded some beast with his finger to see how it would react!

To conclude, the distinction of experimentation is not manipulation of the object, but action by the observer. The essence of experimental research is still observation. It is active, instead of passive, observation. Experiment is not some epistemological category apart from and superior to observation.

Indeed, one might well ask whether any observation is truly passive. But the answer to that must be yes: at the end of any experimental activity, there has to be a moment of passive observation. One might better say, then, that the essence of observation is passive – patient looking and seeing, receptivity and attention.

Experiment can of course go wrong for a variety of reasons; its results are not always credible. It may be designed on the basis of wrong theoretical or practical assumptions; the physical equipment intended to control or measure the phenomena studied may be badly constructed or set up; the researchers may be insufficiently careful and accurate in their handlings and readings, whether inadvertently or ‘accidentally / on purpose’; the researchers may erroneously record their correct findings; and the results may be misinterpreted, due to weak logic or lack of intelligence or narrow knowledge base, or simply due to conscious or unconscious bias.

Often, experimenters are simply unable to see things differently from the schemas they are used to, and have foregone conclusions in their minds no matter what the experiments they make imply. Sometimes, however, experimental results seem contrary to all expectation and the incredulity of researchers is eventually legitimated by review of all procedures and further experiment. If an experiment gives inexplicable results in the light of all current knowledge and theory, one should indeed review and redo it very carefully.

Thus, theory and experiment have a dynamic, two-way relation. Experiments are meant to confirm or refute theories, by testing their predictions. But also, theories are used to design and evaluate experiments, as well as to explain their results. The two must repeatedly be adapted to each other.

 

The Human Factor

Induction depends greatly on the human factor – on our intelligence (in some cases, genius), on our open-mindedness, on the clarity and rigor of our thinking, and on the detachment and carefulness of our reasoning and experimentation.

When theorizing and setting up tests to confirm or reject our theories, it is important to make a big effort to foresee all conceivable explanations and all their possible implications. If the theories considered are not all the theories conceivable in the present context, or if we do not correctly work out their respective experimental predictions, our inductive conclusions are bound to be faulty and misleading.

The danger could be illustrated with the following example from the history of science[5]. At one time, people thought that tiny living organisms could be ‘spontaneously generated’ – e.g. maggots could appear out of nowhere in rotting meat. This seemed contrary to the thesis that all life was created in the first week, for instance. To resolve the issue, a scientist called Francesco Redi (Italy, 1626-97) devised an experiment in 1668, enclosing meat in a container flies could not penetrate and observing whether flies emerged in it. As it turned out, no flies emerged from within the meat, leading Redi to the conclusion that flies lay eggs, and in this case were prevented from doing so.

All well and good. However, had Redi found flies in the meat, would he have drawn the conclusion that flies are spontaneously generated? He would have been tempted to do so, since (as far as I was told) he did not foresee alternative theses, such as that flies’ eggs might be carried to the meat like pollen, or be always present in it like bacteria. If that had been the case, Redi’s inference from the appearance of flies in the meat would have been erroneous. We see from this example the importance of conceiving all possible alternative explanations for a phenomenon, before testing one’s theories.

Note in passing that this is an example of what J. S. Mill much later called ‘the method of residues’[6]. The alternative explanations are listed, then tried out and eliminated one by one, leaving one theory we can still rely on. Of course, the reliability of the residual theory depends on the exhaustiveness of the original list of theories. If all theories are eliminated, we know (from the law of the excluded middle) we need to somehow conceive one more. Sometimes we lack the necessary intelligence or information for that.
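A sketch of the eliminative pattern, using the fly hypotheses from the Redi example above (the test predicate is a stand-in for actual experimentation; if every listed alternative fails, the excluded-middle check tells us the list was not exhaustive):

```python
def method_of_residues(hypotheses, refuted):
    """Eliminate listed explanations one by one; the residue,
    if any, is what we may still (tentatively) rely on."""
    residue = [h for h in hypotheses if not refuted(h)]
    if not residue:
        # By the law of the excluded middle, some explanation must
        # hold: the original list was therefore not exhaustive.
        return "conceive a further hypothesis"
    return residue

print(method_of_residues(
    ["spontaneous generation", "eggs laid by flies", "eggs carried or pre-existing"],
    lambda h: h == "spontaneous generation",  # eliminated by the screened container
))  # ['eggs laid by flies', 'eggs carried or pre-existing']
```

As the text notes, the reliability of the residue depends entirely on the exhaustiveness of the original list.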

A current example of this is the debate in the USA between Creationists and Darwinists. The latter support Darwin’s theory of evolution, and point to the plentiful and varied empirical evidence over billions of years for it (though the issue of origin remains unresolved); while the former support the Biblical idea of sudden emergence of life just a few thousand years ago and suggest “intelligent design” as an alternative outlook. Each group considers that the other’s ideas should not be taught in the classroom.

But, it seems to me, the idea of Divine creation (apart from other specifics of the Biblical narrative) is strictly speaking compatible with Darwinism, if we grant that God chose to institute ‘chance’ evolution (i.e. spontaneous genetic mutations and environmental selection) as the way the life He created in nature would proceed thenceforth. A third alternative is thus conceivable, which reconciles the conflicting theses and allows biology to be peacefully taught in the classroom.

 

Epistemic Ethics

Logic is not only about forms of reasoning, but also about intellectual style. It is first and foremost a teaching of epistemic ethics: the attitudes the intellect must adopt to arrive at truth. These include suppression of one’s ego, open-mindedness and truth-orientation, among many others.

Genuine philosophers earnestly search for truth. They have sincere questions and try to answer them honestly. They admit areas of doubt or ignorance. They are open to change, and evolve over time.

Fake philosophers play the role of being philosophers, but are really not philosophers. They have little interest in the substance of issues, but seek to dazzle an audience with their superficial erudition and their style. They sow famous names around in the hope of reaping reflected glory. They follow intellectual fashions in pursuit of wide approval ratings, being pious or subversive as befits the current market of ideas. To gain attention and fame, they may be scrupulously conventional or say shocking things.

They say things they do not personally fully understand; they claim to have knowledge they in fact lack. They are apologists for received doctrines, rather than researchers; and when they seem to propose some new doctrine, it is only by arbitrary opposition to established ideas so as to appear original.

For many people, philosophy is an instrument of social climbing or power over others, rather than a search for truth. Such people may convince many others of this or that absurd or silly doctrine, using the prestige of their position in the education system or in the media, or in some other social role. But in fact, they have only muddled their victims’ minds and incapacitated them.

When philosophizing, it is wise to remain low-key and matter-of-fact, avoiding grandstanding and personal emotional outbursts as much as possible. This is an issue of style, not substance. But if one does not exercise sufficient restraint in such discourse, it is very easy to get lost in misleading hyperboles. The wrong choice of language can end up determining our doctrines, causing us to approximate and exaggerate.

Here, I have in mind the likes of Nietzsche or Kierkegaard (and many others), who pervasively intertwine their emotional responses with their philosophical realizations. They make a big thing of their personal reactions – writing in a narcissistic manner. Thus, in the face of his insight that man is alone in the universe, without apparent supports – Nietzsche indulges in theatrical outbursts, dramatizing his utter shock, role-playing a heroic response. This is all bombast, designed to give his ego a sense of self-importance; it is a kind of mental equivalent of masturbation. Kierkegaard – “same-same, but different”: an equally emotional approach, though a self-pitying one and one with more sincerity.

Such personal reactions were, of course, characteristic of the times and places those philosophers lived in. Their styles seem so “un-modern” – few would indulge in such tonalities today. We are perhaps less flamboyant – but also more careful to avoid confusion between judgments of fact (true–false) and judgments of value (good–bad). Philosophers are human, and may of course be passionate to some extent, and express their personal valuations; but this should not be the centerpiece of their discourse.

 

The Uncertainty Principle

The Uncertainty Principle of quantum physics, according to which we cannot precisely measure both the position and the momentum of a particle at a given time, may be interpreted either epistemologically (i.e. as an insurmountable practical difficulty of observation and calculation) or ontologically (i.e. as something out there, a truth about the particle itself, such that it does not have precise position and momentum). Taken in this neutral manner, it is presumably generally accepted as scientific fact; it is the interpretations of it that are debated.
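For reference, the quantitative form of the principle (Heisenberg’s relation, standard in quantum mechanics) bounds the product of the two uncertainties:

$$\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}$$

The debate sketched below concerns what this bound means, not its mathematical form.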

Classical physics would opt for the epistemological view. This would say that at the phenomenal levels under consideration, any measuring instrument or technique physically affects the objects to be measured, and therefore cannot provide an accurate result – but we can still hypothesize that there is an underlying reality, i.e. that the particle does indeed have both position and momentum. Note well that this posture is logically compatible with the notion that the assumed “underlying reality” will never be specifically known, i.e. there is no intent to evade the discovery that it is technically unknowable.

Modern positivism would prefer the ontological interpretation. It would say: no, the immeasurability is not an illusion underlain by definite facts – we can hypothesize that the indeterminacy is itself the ultimate reality, the truth of the matter. Note well that this posture is just as hypothetical as the preceding; it cannot claim to know what the “ultimate reality” is any more than the other view, since the common premise is precisely that the reality is technically inaccessible to humans. It is thus just as much a doctrinal stance, however prestigious those who take it are.

Granting the said impossibility of full measurement, it follows that – in this instance at least – each of the two interpretative theses is neither verifiable nor falsifiable. In this context, at least, their logical status is the same – they are equally speculative.

Both postures are admittedly hypothetical, but the former is clearly simpler, the latter philosophically more problematic. One of the principles of scientific method, in any context, is to prefer the simpler thesis unless we have good reasons to seek out a more complex one. That is, the simpler view is considered inductively more likely, because it is less prone to affect previously established knowledge.

We are not forced to rest content with the classical view; but we must have sufficient motive to abandon it in favor of the more complicated positivist view. The latter involves some very revolutionary suppositions about the nature of matter (namely, the possibility of natural spontaneity), which we cannot favor just for the hell of it, merely for the pleasure of challenging the existing order of things. We must first show up some distinctive weakness in the older view or some novel strength in the newer view, to justify such a radical overhaul of all past acquisitions and explanations.

The positivists argue that since we cannot determine these facts precisely, we might as well – for all practical purposes – regard them as non-existent. But the result is not quite the same, because we should consider not only the consequences of such a posture on their particular field of study, but with regard to knowledge as a whole. That is, it is not an innocuous stance – it has wide-ranging ontological and epistemological significance, seemingly putting some important fundamental assumptions of reason (viz. that all natural events are caused) in doubt.

Furthermore, there is no justification for forbidding further discussion of the issue henceforth. The positivists make an argument by intimidation, saying effectively “those who disagree with us are not worthy of intellectual consideration”[7]. But surely, the positivists must still remain open-minded – for they may indeed one day be proved wrong, if it should happen that we are able to dig deeper into matter, and eventually find some way to experimentally measure what the uncertainty principle says we cannot.

We cannot empirically prove a “cannot” – a “cannot” is a generalization from experience (though, in some cases, it is a logical insight, as in the preceding sentence). The uncertainty principle is not a purely empirical fact, plucked out directly from experience; it emerges within a certain theoretical context, which shapes our interpretation of events. This context, like many others throughout the history of science, may yet change, as our knowledge grows. There is no final and incontrovertible scientific theory.

Note well that I am not personally defending one or the other posture here[8], but comparing them from a neutral perspective, giving both fair consideration. That is, I am evaluating their discourse as a logician, using a discourse that is pure logic.

 

Drawn from Ruminations (2005), Chapter 2:1-15,17-18.

 
 

[1] As I have pointed out elsewhere, such doctrines are unfair to innocent victims, accusing them without justification of past crimes; and they whitewash criminals, making it seem like they merely implement justice!

[2] See Judaic Logic, chapter 4.2.

[3] It is my wish to analyze that whole book in detail someday, so as to show up the cunning and variety of his tricks.

[4] Although, of course, modern science has been using experiment more consciously, systematically and successfully than ever before.

[5] I noted this example in the course of a lecture long ago, so I cannot guarantee my present rendition is entirely accurate. But no matter; I only include it here for purposes of illustration.

[6] In his System of Logic (1843).

[7] This is also an argument by authority. To which one can answer: one may be a great physicist and a not-so-great philosopher; merit in one field does not guarantee success in all others. Such attitudes are reminiscent of religious authoritarianism.

[8] My neutrality should be evident from the open-minded position I have taken with respect to the idea of natural spontaneity in The Logic of Causation (see for example chapter 10.1 there).
