In common usage, people often use the words rational and logical interchangeably. The purpose of this post is to distinguish between them.


Logic is the set of rules that allows me to evaluate an argument independent of its content, purely from its structure. Just as I use grammar to parse a sentence and determine the relationships between the words in it, I use logic to parse an argument and determine the relationships between the statements in it. Just as a grammatical sentence may be meaningless (Colorless green ideas sleep furiously), a logical argument may be meaningless or irrelevant. However, the analogy with grammar only goes so far. There are many different grammars, and all of them are equally valid within the context of their application – a given language. Any consistently applied way of meaningfully combining words in a sentence forms a grammar. Grammar is a matter of convention. The same is not true of logic. The word itself has no plural. This is a striking fact. Think about it. It indicates that man cannot even conceive of a plural for logic. There can be no such thing as my logic vs your logic. Logic is the structure of coherent thought. It is a part of the mental apparatus that man is born with. It is implicit in the capacity to think. By implicit, I mean that I cannot choose to think illogically (though I may make mistakes). To identify mistakes in thinking, the implicit rules of logic need to be made explicit. This is a science. Like all sciences, the science of logic presupposes several things. In particular, it presupposes man’s ability to use logic (implicitly). Whether the word logic refers to the implicit set of rules or to the science of identifying them depends on context. In this post, I am going to use the word logic to refer to the implicit set of rules.


Reason is the faculty of understanding and integrating sensory material into knowledge. Reason does not work automatically. To reason, man has to consciously choose to think and to direct his thoughts to achieve understanding. By directing thoughts, I mean preventing thoughts from wandering by staying focussed. Reasoning involves the use of logic. It also involves several other techniques. “Reason employs methods. Reason can use sense-perception, integration, differentiation, reduction, induction, deduction, philosophical detection, and so forth in any combination as a chosen method in solving a particular problem.” [Burgess Laughlin in a comment on an old post] Deduction obviously uses logic. I believe induction does too, but the science of inductive logic is nowhere near as well developed as that of deductive logic. Sense-perception, integration and differentiation don’t use logic (note: integration and differentiation here refer to grasping the similarities and differences between various things). Reason, then, is not simply the faculty of using logic.


“Rationality is man’s basic virtue, the source of all his other virtues… The virtue of rationality means the recognition and acceptance of reason as one’s only source of knowledge, one’s only judge of values and one’s only guide to action.” [Ayn Rand, in The Virtue of Selfishness]

In the discussion that motivated this post, a colleague argued that if the use of reason does not guarantee correct decisions, it cannot be one’s only guide to action. A gut feeling or intuition might sometimes be a better guide to action. There are two separate issues here – the fact that the use of reason cannot guarantee correct decisions and the claim that intuition can be an alternative guide to action.

Consider intuition first. Intuition is an involuntary, automatic evaluation of the available choices. There is no conscious awareness of the reasons for the evaluation. Intuition is a learnt response from previous experience. As such, intuition is extremely helpful in any decision-making process. However, the fact remains that in every voluntary decision – the sort of decision where there is enough time to reason – intuition is only one of the inputs to the use of reason. As long as I make a decision consciously and deliberately, reason remains the only guide to action. The only alternative is to evade the responsibility of choice. Relying upon intuition is not irrational in itself. I might decide that I do not have sufficient knowledge to reach a decision and choose to rely upon intuition instead. As long as I identify the lack of knowledge, my decision is fully rational. Identifying the lack of knowledge (and hopefully doing something about it) will allow me to learn from the new experience and improve my intuitions for future use. Blindly relying on intuition – by default instead of by choice – will weaken my intuition in the long run. Intuition is one of the most valuable tools for decision making, but it needs to be carefully cultivated by the use of reason for it to be good or useful.

It is important to stress that rationality (in the context of making a decision) involves the use of all my knowledge to the best of my ability. In particular, this includes knowledge of the time available, the relevance of prior experience and any known gaps in knowledge. It is this last aspect of rationality – the use of known gaps in knowledge – that is the motivation for the field of probability. Probability is about quantifying uncertainty by making use of all known information and postulating equal likelihood where no information is available. The consistent use of the equal-likelihood postulate is at the heart of probability theory, and it is what gives probability its precise mathematical characteristics. In modeling an outcome for an uncertain event, I start with a uniform distribution (every outcome is equally likely) and use available information to transform it into a more appropriate distribution. The parameters of the transformation represent a quantitative use of known information. The shape of the final distribution represents a qualitative use of the known information.
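The process described above – start from a uniform distribution and let available information reshape it – can be sketched in code. The example below is my own illustration (none of its numbers come from the post): the unknown is a coin's bias, the starting point is the equal-likelihood postulate, and observed tosses transform the distribution.

```python
# Illustrative sketch: reshape a uniform distribution using new information.
# The unknown quantity is a coin's bias (probability of heads).

def update(prior, likelihood):
    """Bayes' rule on a discrete grid: posterior is proportional to prior * likelihood."""
    post = [p * l for p, l in zip(prior, likelihood)]
    total = sum(post)
    return [p / total for p in post]

# Grid of candidate biases 0.00 .. 1.00
grid = [i / 100 for i in range(101)]

# No information: every bias equally likely (the equal-likelihood postulate)
prior = [1 / len(grid)] * len(grid)

# Information arrives: 8 heads in 10 tosses reshapes the distribution
heads, tosses = 8, 10
likelihood = [b ** heads * (1 - b) ** (tosses - heads) for b in grid]
posterior = update(prior, likelihood)

# The peak of the posterior moves toward the observed frequency
best = grid[posterior.index(max(posterior))]
print(best)  # 0.8
```

The grid, the toss counts and the Bayesian updating scheme are all choices made for illustration; the point is only that the uniform starting distribution is transformed by known information, exactly as the paragraph describes.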

With this brief treatment of probability, I can now address the obvious fact that the use of reason cannot guarantee correct decisions. Consider an example. I have historical data for the exchange rate between a pair of currencies. I also have market-quoted prices for various financial instruments involving the currency pair. To model the exchange rate at some future time with a probability distribution, I can use the historical data to establish the shape of the distribution and the market-quoted prices to obtain the parameters of the distribution. If I had more information (say, a model for other factors that affect the exchange rate), I could incorporate that too. A decision based on such a model would be a rational decision. On the other hand, I could say that since the model does not guarantee success, I will simply use a uniform distribution (Ouch!! That is not even possible since the range for the exchange rate is unbounded. Let me simply restrict the range to an intuitive upper bound) with the argument that the uniform distribution might actually turn out to be better. Yes, it might turn out to be better, but the argument that it should be used is still invalid (Consequentialism is invalid and I am not going to argue this here). Not all decisions can be formulated with precise mathematics like this, but the principle is the same. It is always better to use all my knowledge to the best of my ability.
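The exchange-rate recipe above – shape from history, parameters from the market – can be sketched roughly as follows. Everything here is a made-up illustration under common simplifying assumptions (log-returns treated as normal, no drift, a hypothetical implied-volatility quote); it is not a statement about how any real desk models currencies.

```python
import math
import statistics

# Hypothetical sketch: historical data suggests the *shape* of the
# distribution (log-returns look roughly normal, i.e. rates look lognormal),
# while a market quote supplies the *parameter* (an implied volatility).
# All numbers are invented for illustration.

hist_rates = [74.1, 74.5, 74.2, 74.9, 75.3, 75.0, 75.8, 76.2]
log_returns = [math.log(b / a) for a, b in zip(hist_rates, hist_rates[1:])]

# Shape from history: check that log-returns are plausibly normal
hist_vol = statistics.stdev(log_returns)

# Parameter from the market: a (hypothetical) implied volatility, annualized
implied_vol = 0.12
horizon_years = 0.5

spot = hist_rates[-1]
sigma = implied_vol * math.sqrt(horizon_years)

def prob_above(level):
    """P(future rate > level) under a driftless lognormal model around spot."""
    z = math.log(level / spot) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

print(round(prob_above(80.0), 3))
```

With no drift, the model puts probability 0.5 on the rate ending above today's spot; levels above spot get less, levels below get more. Incorporating further information (a drift estimate, a fatter-tailed shape) would refine the distribution in exactly the way the paragraph suggests.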

Another aspect of the original discussion remains unaddressed – the claim that rationality is subjective. Since this post has already got long enough, I will just stress here that there is a difference between context-dependent and subjective.

Object Oriented Programming and Objectivist Epistemology

I have just started reading Ayn Rand’s Introduction to Objectivist Epistemology and chanced upon this very interesting paper by Adam Reed. Prof. Reed writes about models of causation and the exact architectural parallel between Rand’s epistemology and object oriented programming. I am very busy with work right now and ITOE is a difficult read but this is definitely something to explore once I am done with it.


I have struggled with the concept of probability for a long time. Not with the maths but with the meaning. Does a probability number really mean anything at all? And if so, what? Recently, I have reached a definite position on this. Here it is.

In a metaphysical sense, it seems clear that the probability number is meaningless. Or, more accurately, that it is not a property of the event in question at all (I am using the word ‘event’ loosely to refer to anything for which a probability may be calculated). An event either occurs or does not occur. No fractions are possible. Probability therefore must be a measure of a person’s state of knowledge of the factors that determine the event in question. That is, probability is an epistemological concept rather than a metaphysical one. It originates because of the need to make choices in the face of incomplete knowledge. This is clear since probability is used not just for future events but also for past ones. A classic example of this is the use of medical tests in conjunction with statistical analyses to arrive at a probability of a patient’s having a particular disease. In reality, either the person has the disease or not. The probability assigned to the possibility of disease is merely a tool used to decide whether further investigation is warranted.
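The medical-test example above can be made concrete with Bayes' theorem. The numbers below are invented for illustration; the point is that the probability measures a state of knowledge, and it changes when knowledge (a test result) is added, even though the underlying fact – disease or no disease – does not.

```python
# Illustrative numbers only: how a probability tracks a state of knowledge.

def posterior(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_pos = (prevalence * p_pos_given_disease
             + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_disease / p_pos

# Before the test, knowledge consists of the base rate alone
prior = 0.01

# After a positive result from a 95%-sensitive, 95%-specific test
p = posterior(prior, 0.95, 0.95)
print(round(p, 3))  # about 0.161: still far from certain
```

Nothing about the patient changed between the two numbers; only the state of knowledge did – which is exactly why a probability of roughly 0.16 is read as "further investigation is warranted" rather than as a fraction of a disease.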

This seems to suggest that probability is subjective rather than objective. But the precise math used to calculate probabilities suggests otherwise. Is probability subjective or objective? To answer this, it would be useful to look at what the words subjective and objective mean. In a comment on an old post, Burgess Laughlin wrote (and I agree):

“Objective,” in my philosophy (Objectivism), has two meanings. First, in metaphysics, it means existing independent of consciousness. The redundant phrase “objective reality” captures this meaning. Second, in epistemology, “objective” refers to knowledge that is drawn (inferred) logically from facts of reality. (See “Objectivity,” The Ayn Rand Lexicon.)

Subjective, as I use the word, refers to judgements or responses that cannot be traced back to facts of reality or to the thought processes of the subject (a typical example is emotions).

Probability is not objective in the metaphysical sense. In fact, without consciousness, it would not exist at all. It is also not objective in the epistemological sense, since it arises only in cases where the subject does not have complete knowledge of the facts of reality. And the fact that there are precise mathematical rules to calculate probabilities means that probability is not subjective either. If probability is neither objective nor subjective, what is it? Consider the case of a coin being tossed. Lacking any knowledge of the composition and weight distribution of the coin, the velocity with which it was tossed, the composition of the air, the nature of the ground, etc., the probability of a particular side showing up is taken to be 0.5. Where did this number come from? It is quite clear that this choice is purely arbitrary. The entire math of probability is based on a simple principle applied consistently: given multiple possibilities and a complete lack of quantitative knowledge of the relevant causes, each possibility has an equal probability. Clearly this is arbitrary, but it is the best one can do. And applied consistently, it provides a very precise framework for quantifying a lack of knowledge. It allows quantification of that which we do not even know!
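The "simple principle applied consistently" can be shown at work in a few lines. This is my own toy example, not from the post: assign equal probability to each elementary outcome of two dice (the equal-likelihood principle), and the probabilities of compound events follow by mere counting – precise mathematics built on an arbitrary but consistent starting point.

```python
from fractions import Fraction
from itertools import product

# Equal likelihood applied consistently: with no knowledge favoring any
# face, each of the 36 ordered outcomes of two dice gets probability 1/36.
outcomes = list(product(range(1, 7), repeat=2))
p_each = Fraction(1, len(outcomes))

# Probabilities of compound events follow by counting
p_sum_7 = sum(p_each for a, b in outcomes if a + b == 7)
print(p_sum_7)  # 1/6
```

The 1/36 assigned to each outcome is exactly as arbitrary as the coin's 0.5, yet once it is adopted, every derived probability is fixed with complete precision.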

Anyone who is familiar with Rand’s philosophy should note that my use of ‘arbitrary’ is different from (though related to and inspired from) Rand’s use of the word in the classification of the epistemological status of statements as true, false or arbitrary. To apply the principle of equal likelihood, one already needs to have identified all the possibilities. This means that probability cannot be applied to arbitrary (in Rand’s sense) assertions. What about the truth status of a statement involving probability? Such a statement can be demonstrated to be true (or false) subject to the equal likelihood principle. Without the principle, it is arbitrary (in Rand’s sense).

I think this classification – objective, subjective and arbitrary – might be useful in several other areas of math as well. For example (I need to think more about this though), it can be applied to Euclid’s axioms (in geometry). These axioms could be described as arbitrary and theorems could then be considered as true subject to the axioms.

In my next post, I will try to relate my position to statistics and randomness.

Good observation on scepticism

Via Rational Jenn, I came upon this short post that makes a good observation on scepticism.1

These humanists have gotten rid of God, but they are still left with altruism, a religious morality. I’ve decided their epistemology is also a leftover. Skepticism is basically the desire for knowledge to exist without context. They want to know something without sensing it and without forming any concepts about it. These folks will only admit they know something if they know it in the way God knows stuff, by no means at all. They have a religious epistemology, without the religion. Weird.

Very well put.

1) I spell that scepticism, not skepticism.

Nonsense masquerading as profundity

I do not read the Economic Times and so did not know that it had a column named Cosmic Uplink (What does that mean?). It recently featured an article by Mukul Sharma titled “There’s nothing less real than reality” that ended with

Zhuangzi, said one night he dreamt he was a carefree butterfly flying happily. After he woke, he wondered how he could determine whether he was Zhuangzi who had just finished dreaming he was a butterfly, or a butterfly who had just started dreaming it was Zhuangzi.

The Times of India has a similar column named The Speaking Tree usually featuring similar articles. All that is needed to refute such nonsense is to take it literally. Aristotle The Geek does that very well with

Let me chop off the index finger of your right hand. If you are dreaming, the finger will still be there when you wake up. If you are awake, the fact that you are awake will be confirmed and a finger is a small price to pay for such profound knowledge.

Such articles are inherently dishonest. What is Mukul Sharma relying on when he writes such nonsense? He is relying on the fact that his readers are capable of reading it and understanding it. And yet it is the roots of that understanding which his article intends to destroy. Some time back I wrote a little about a book called “Practising The Power Of Now”. It contained this gem

The essence of what I am saying here cannot be understood by the mind.

But Mr Sharma and Mr Tolle, I do understand what it is that you are trying to do. And I refuse to fall for it.

A lesson in epistemology

I first learnt programming (in FORTRAN) in an introductory course on Computer Science in my first year of engineering. About a year after that, I took on a project that required some ‘C’ programming, with no knowledge of the language. I did that project moderately well. Some time after that, I learnt C++ from Bruce Eckel’s excellent book “Thinking in C++”, while simultaneously working on another project. Around the time I finished my graduation, I learnt C# mostly by reading an informal specification of the language.

Now, I want to learn F# and after looking at a small number of examples on technical blogs (which got me interested), I decided to download and study the specification of the F# language. I miscalculated. Reading a technical specification is not a good way to learn a language. My previous successful attempt at learning C# from its specification worked mainly because I already had experience in C and C++ (C# belongs to the same family of languages). F# is a functional language (as opposed to the imperative C family of languages) and my experience and concepts in C-like languages do not translate to it. I now realize that, given my background, I could learn Java (if I wanted to) by reading a technical specification, but not F#. The new concepts I need in order to grasp F# cannot be easily learnt from a technical specification. They will have to be learnt by looking at and trying out numerous examples first, by induction.
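The conceptual gap between the two families can be hinted at with a toy example of my own (written in Python for convenience, not F#): the same computation expressed imperatively, by mutating state in a loop, and functionally, by composing expressions with no assignment – the latter being closer to how F# code reads.

```python
from functools import reduce

# Imperative style: accumulate by repeatedly mutating a variable
def sum_squares_imperative(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

# Functional style: a fold over an expression; no mutation in the body
def sum_squares_functional(xs):
    return reduce(lambda acc, x: acc + x * x, xs, 0)

print(sum_squares_imperative([1, 2, 3]))  # 14
print(sum_squares_functional([1, 2, 3]))  # 14
```

A reader fluent only in the first style can verify that the second produces the same answer, yet still lack the concepts (folds, expressions as values) needed to read a functional specification fluently – which is the point of the paragraph above.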

What does this have to do with epistemology? The same principle (of induction) applies to all concepts, not just to concepts in specialized sciences. The final “finished form” of a concept is not particularly useful for learning. It is useful only when an approximate form of the concept has already been reached by induction from concrete examples.

As far as this blog is concerned, I have realized from this experience that most of my posts have been attempts at presenting ideas in “finished form”. Without the concrete examples that are necessary to reach these ideas through induction, it is unlikely that anyone who does not already agree with those ideas broadly will take them seriously. The “finished form” is useful for refining and clarifying existing ideas, not for reaching radically new ones.
