Rationality

In common usage, people tend to use the words rational and logical somewhat interchangeably. The purpose of this post is to distinguish between them.

Logic:

Logic is the set of rules that allows me to evaluate an argument independent of its content, purely from its structure. Just as I use grammar to parse a sentence and determine the relationships between the words in the sentence, I use logic to parse an argument and determine the relationships between the statements in the argument. Just as a grammatical sentence may be meaningless (Colorless green ideas sleep furiously), a logical argument may be meaningless or irrelevant. However, the analogy with grammar only goes so far. There are many different grammars and all of them are equally valid within the context of their application – a given language. Any consistently applied way of meaningfully combining words in a sentence forms a grammar. Grammar is a matter of convention. The same is not true of logic. The word itself has no plural. This is a striking fact. Think about it. It indicates that man cannot even conceive of a plural for logic. There can be no such thing as my logic vs your logic. Logic is the structure of coherent thought. It is a part of the mental apparatus that man is born with. It is implicit in the capacity to think. By implicit, I mean that I cannot choose to think illogically (though I may make mistakes). To identify mistakes in thinking, the implicit rules of logic need to be made explicit. This is a science. Like all sciences, the science of logic presupposes several things. In particular, it presupposes man’s ability to use logic (implicitly). Whether the word logic refers to the implicit set of rules or to the science that deals with identifying them depends on context. In this post, I am going to use the word logic to refer to the implicit set of rules.

Reason:

Reason is the faculty of understanding and integrating sensory material into knowledge. Reason does not work automatically. To reason, man has to consciously choose to think and to direct his thoughts to achieve understanding. By directing thoughts, I mean preventing thoughts from wandering by staying focussed. Reasoning involves the use of logic. It also involves several other techniques. “Reason employs methods. Reason can use sense-perception, integration, differentiation, reduction, induction, deduction, philosophical detection, and so forth in any combination as a chosen method in solving a particular problem.” [Burgess Laughlin in a comment on an old post] Deduction obviously uses logic. I believe induction does too, but the science of inductive logic is nowhere near as well developed as that of deductive logic. Sense-perception, integration and differentiation don’t use logic (Note: integration and differentiation refer to grasping the similarities and differences between various things). Reason, then, is not simply the faculty of using logic.

Rationality:

“Rationality is man’s basic virtue, the source of all his other virtues… The virtue of rationality means the recognition and acceptance of reason as one’s only source of knowledge, one’s only judge of values and one’s only guide to action.” [Ayn Rand, in The Virtue of Selfishness]

In the discussion that motivated this post, a colleague argued that if the use of reason does not guarantee correct decisions, it cannot be one’s only guide to action. A gut feeling or intuition might sometimes be a better guide to action. There are two separate issues here – the fact that the use of reason cannot guarantee correct decisions and the claim that intuition can be an alternative guide to action.

Consider intuition first. Intuition is an involuntary, automatic evaluation of the available choices. There is no conscious awareness of the reasons for the evaluation. Intuition is a learnt response from previous experience. As such, intuition is extremely helpful in any decision-making process. However, the fact remains that in every voluntary decision – the sort of decision where there is enough time to reason – intuition is only one of the inputs to the use of reason. As long as I make a decision consciously and deliberately, reason remains the only guide to action. The only alternative is to evade the responsibility of a choice. Relying upon intuition is not irrational in itself. I might decide that I do not have sufficient knowledge to reach a decision and choose to rely upon intuition instead. As long as I identify the lack of knowledge, my decision is fully rational. Identifying the lack of knowledge (and hopefully doing something about it) will actually allow me to learn from the new experience and improve my intuitions for future use. Blindly relying on intuition – by default instead of by choice – will actually weaken my intuition in the long run. Intuition is one of the most valuable tools for decision making, but it needs to be carefully cultivated by the use of reason for it to be good or useful.

It is important to stress that rationality (in the context of making a decision) involves the use of all my knowledge to the best of my ability. In particular, this includes knowledge of the time available, the relevance of prior experience and any known gaps in knowledge. It is this last aspect of rationality – the use of known gaps in knowledge – that is the motivation for the field of probability. Probability is about quantifying uncertainty by making use of all known information and postulating equal likelihood where no information is available. The consistent use of the equal likelihood postulate is at the heart of probability theory and it is what gives probability its precise mathematical characteristics. In modeling an outcome for an uncertain event, I start with a uniform distribution (every outcome is equally likely) and use available information to transform it into a more appropriate distribution. The parameters of the transformation represent a quantitative use of known information. The shape of the final distribution represents a qualitative use of the known information.
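To make this concrete, here is a minimal sketch in Python (my own illustration, not part of the original argument): a uniform Beta(1, 1) distribution over an unknown probability is reshaped by some made-up observations into a peaked posterior.

```python
import numpy as np
from scipy import stats

# Start with a uniform distribution over an unknown probability p: Beta(1, 1).
alpha, beta = 1.0, 1.0

# Hypothetical information: 7 successes observed in 10 trials.
successes, trials = 7, 10

# Conjugate update: the posterior is Beta(alpha + successes, beta + failures).
posterior = stats.beta(alpha + successes, beta + (trials - successes))

# The flat starting distribution has been transformed into a peaked one;
# the observed information determines both its shape and its parameters.
for p in np.linspace(0.1, 0.9, 5):
    print(f"p = {p:.1f}: prior density = 1.00, posterior density = {posterior.pdf(p):.2f}")
```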

With this brief treatment of probability, I can now address the obvious fact that the use of reason cannot guarantee correct decisions. Consider an example. I have historical data for the exchange rate between a pair of currencies. I also have market quoted prices for various financial instruments involving the currency pair. To model the exchange rate at some future time with a probability distribution, I can use the historical data to establish the shape of the distribution and the market quoted prices to obtain the parameters of the distribution. If I had more information (say a model for other parameters that affect the exchange rate), I could incorporate that too. A decision based on such a model would be a rational decision. On the other hand, I could say that since the model does not guarantee success, I will simply use a uniform distribution (Ouch!! That is not even possible since the range for the exchange rate is unbounded. Let me simply restrict the range to an intuitive upper bound) with the argument that the uniform distribution might actually turn out to be better. Yes, it might turn out to be better, but the argument that it should be used is still invalid (Consequentialism is invalid and I am not going to argue this here). Not all decisions can be formulated with precise mathematics like this, but the principle is the same. It is always better to use all my knowledge to the best of my ability.
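A rough sketch of that workflow, with synthetic stand-ins for the historical series and the market quote (the rates, the implied-volatility figure and the horizon below are all made up for illustration): the shape comes from fitting the historical log-returns, and the spread is then rescaled to a market-implied volatility.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for historical exchange rates (a synthetic random walk).
hist_rates = 1.30 * np.exp(np.cumsum(rng.normal(0.0, 0.005, size=1000)))
log_returns = np.diff(np.log(hist_rates))

# Shape from history: fit a Student-t to the daily log-returns.
df, loc, scale = stats.t.fit(log_returns)

# Parameters from the market: rescale the fitted distribution so its spread
# matches a hypothetical market-implied volatility over the chosen horizon.
implied_vol = 0.10                      # hypothetical annualized implied vol
horizon = 0.25                          # years
target_sd = implied_vol * np.sqrt(horizon)
scale_adj = scale * target_sd / stats.t.std(df, loc=loc, scale=scale)

# Distribution for the log-return over the horizon, centred on today's rate.
future_return = stats.t(df, loc=0.0, scale=scale_adj)
low, high = hist_rates[-1] * np.exp(future_return.ppf([0.05, 0.95]))
print(f"5%-95% range for the future rate: {low:.4f} to {high:.4f}")
```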

Another aspect of the original discussion remains unaddressed – the claim that rationality is subjective. Since this post has already got long enough, I will just stress here that there is a difference between context-dependent and subjective.


9 Responses

  1. Really informative post. Thanks. You gave one of the best descriptions of intuition that I have seen.

  2. There are other points in this post with which I would not agree, too. But I think I would rather focus on the science part of it here—maturation of philosophic ideas takes time. (God knows how tentative I remain about so many philosophic issues despite years, nay, decades of thought.) But science is different. … So, allow me to correct you.

    Probability is about quantifying uncertainty by making use of all known information

    True.

    and postulating equal likelihood where no information is available.

    Nope. Here you go wrong.

    In the (theoretically pathological) case in which literally no information whatsoever is in principle available, only one rational conclusion is possible: namely, that nothing can be said. In particular, not even a probabilistic theory can be applied. (Probability theory does have identity.)

    In such a case, even merely postulating anything would still be arbitrary.

    Postulates—scientific postulates (including the mathematical ones)—are not spun out of thin air. They have identity. Even if only postulates, they still have a specific model assumed behind them. Even in the case where nothing particular about a phenomenon is known, we still *know* its context—no matter how rudimentary. This is true for both kinds of postulates—those from which a tested theory has been generated, and those from which a theory has been generated but still remains untested. In the second case, one just attaches a mental note, namely, that it is a hypothetical model, not yet verified.

    The consistent use of the equal likelihood postulate is at the heart of probability theory and it is what gives probability its precise mathematical characteristics.

    Non sequitur. The equal likelihood postulate is nothing but another name for the uniform distribution. However, note that any other distribution would still require, and therefore impart to its use in probability theory, a mathematical characteristic of exactly the same degree of precision.

    An aside:

    For probability theory, it’s always best to toss aside coin flipping, and instead begin with some continuous distribution situation. As an example, consider this one (or these two):

    Suppose some idiots (*probably* Americans, with p > 0.5) are busy having a rock party inside a room. You, not being an idiot, are standing outside, enjoying the fresh cool breeze outdoors. (You have no problem partying, so long as it isn’t with stupids, idiots, morons, etc.) The room has one small window covered with a translucent screen. You can see the intensity of the light brighten now and then, as the disco-lights inside turn chaotically. The problem is to predict the intensity of the light at the window, as a function of time, without any access to the knowledge of the actual placements and the rotational etc. dynamics of the disco lights.

    Obviously, since you don’t know anything about the actual lights, are you justified in positing equal likelihood—i.e. a continuous uniform distribution from 0.0 to 1.0 (in normalized terms)? Not at all. The fact of the matter is, you simply have to wait and gather some empirical evidence of the actual intensities, before you can even construct a probability distribution function (PDF).

    In general, this reliance of a PDF on some prior empirically observed data (directly or indirectly, with or without first having been put into the form of a mathematical function) is at the heart of probability theory.
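    To put the same point in code, here is a minimal sketch with simulated stand-in data (the “intensity” readings below are made up): the distribution is estimated directly from the observations via a normalized histogram, rather than being posited as uniform up front.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated stand-in for observed intensities at the window:
    # clearly non-uniform (clustered around two brightness levels).
    intensity = np.concatenate([rng.normal(0.3, 0.05, 700),
                                rng.normal(0.8, 0.05, 300)]).clip(0.0, 1.0)

    # Empirical density estimate built from the data alone.
    density, edges = np.histogram(intensity, bins=10, range=(0.0, 1.0), density=True)

    for lo, hi, d in zip(edges[:-1], edges[1:], density):
        print(f"[{lo:.1f}, {hi:.1f}): {d:5.2f} " + "#" * int(round(d * 5)))
    ```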

    Notice, the empirical observation may indicate that the intensity is more likely to be brighter than darker—perhaps because someone was deliberately shining some light in the direction of the window for some reason.

    The case of the light is a bit hard to imagine, but consider the next revision in the scenario. Suppose you are to predict the intensity of the sound coming out of the window.

    Can you say that it will be a uniform PDF? Nope.

    For simplicity, subtract all the noise. Suppose we are doing this experiment when no one is present in the room, and only the music and lights are on.

    Now, can you say that the sound intensity will have a uniform PDF? Not at all. The actually governing distribution here would be determined by which song is being played—the intensity vs time graph of that song.

    You do have this context available to you—namely, that the PDF is nothing but the I vs t graph.

    Now, you are, in a way, operating at the level of instrumentation (i.e. the concrete, moment-to-moment registration of sound intensities), and so, let’s assume you cannot make out what song it is. (And, if you could make out what song it was, why would you want to use probability theory anyway?)

    The situation then is: there are ups and downs in the intensities; you have only a probabilistic means to predict them; and: you do know that a song is being played.

    Now, here is the important part. Since you also know that intensities of songs are typically not uniformly distributed, your natural conclusion should be: *not* to use the uniform PDF.

    Notice, if the sound is not of a song but of some source that you cannot make out anything about, the situation is even better: you know nothing, not even whether it will even out over the course of time. It need not. (As an example: Suppose the sounds are being emitted by a snoring guy who wakes up and therefore stops snoring. Uniform distribution? Not at all! It would be all 0 after he wakes up.)

    Even postulating that the phenomenon under observation remains steady, as in the case of the song, you still cannot assume a uniform PDF.

    In all such cases, your best bet is to go empirical: Collect data, hopefully covering a full stanza, and whatever function you *actually* observe becomes a basis—even if only a postulated one—for you to begin to make *probabilistic* predictions.

    Probability quantifies uncertainty, sure. But it does not do so outside a cognitive context—i.e. an empirical (read: sense-perceptual) context. That’s the bottom line.

    Indeed, in raw nature, the flipping of coins etc. (i.e. the uniform distribution) is a very rare occurrence. It is a very, very theoretical case. And the basic reason we study it is that we are interested in knowing how such simple cases are connected together in more complex systems—whether they give rise to a normal distribution or a Poisson one, etc. … Indeed, if you have to be a bit simple-minded about your choice of distribution, you are usually far better off assuming a Weibull, and if not that, a normal distribution, rather than the uniform distribution. Epistemologically, they all presume the same level of “postulated-ness” in use in probability theory proper. Algebraically or analytically, one function may be more complex than the other, but for the reasons noted here, it’s precisely a more complex distribution that is more likely in empirical reality.
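    As a small numerical check of that claim (with simulated data standing in for an empirical sample, so the numbers are illustrative only), one can fit a Weibull, a normal and a uniform distribution to the same observations and compare their log-likelihoods:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Simulated positive, skewed data standing in for an empirical sample.
    sample = 2.0 * rng.weibull(1.5, size=500)

    candidates = {
        "weibull": stats.weibull_min,
        "normal":  stats.norm,
        "uniform": stats.uniform,
    }
    for name, dist in candidates.items():
        # Fix the Weibull's location at 0 so its support starts at the origin;
        # let the other fits run unconstrained.
        params = dist.fit(sample, floc=0) if name == "weibull" else dist.fit(sample)
        loglik = np.sum(dist.logpdf(sample, *params))   # goodness of the fit
        print(f"{name:8s} log-likelihood = {loglik:8.1f}")
    ```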

    Long reply, again. (No, I no longer even try. I am by now fully convinced that I will never be able to write short and pithy replies. Except, maybe, as a certain kind of “comments” on Twitter!!)

    –Ajit
    [E&OE]

  3. OK. One more point.

    In the above reply, I, somewhat vaguely, indicated the following:

    Namely, that probability quantifies uncertainty but only within the cognitive context of the rest of your knowledge, which, ultimately, is derived from the sense-perceptual data. The rest of the cognitive context also supplies at least a rudimentary basis to select one PDF over another. Such a PDF need not always be the uniform distribution. Indeed, in the absence of such a basis, the uniform PDF epistemologically does not do any better than the other PDFs. What is important is the rest of the cognitive context, which, in many empirical situations, in fact implies using a more complex PDF.

    I, thus, completely left aside the question: What is the basic nature of a PDF, and why its choice should be important. I will try to briefly indicate the same now. (I will sure try to be brief.)

    Basically, a PDF is nothing but the values assumed by a random variable (RV). There are two parts to the name: random variable. The second part first: an RV is a misnomer; it is not a *variable*, it’s a *function*. The first part: It’s a *random* variable. So, to understand probability, we have to understand what we mean by “random”. Let’s do that with an example.

    Suppose you have a function f which is defined over integers in the range [1,10] (both included), thus having ten values in its domain and range. The function is well-defined, say via an algebraic relation f. As such it is “deterministic.” Input a domain value and get the range value as a determined output. There is this 1:1 relationship between input and output (that’s what functions are).

    Now, consider the inverse problem. Since you know the algebraic formula f, and since you know there is a 1:1 relationship, if you are given the output value, you can use the inverse of f to uniquely determine the corresponding input value.

    However, suppose you don’t know the generating forward algebraic formula in the first place. Then, what? Here, for simplicity, let’s assume that you still know that the unknown f is an algebraic polynomial, of degree m, m <= 10. If you are then given the output values in the increasing sequence of x, enough of them (m+1 input-output pairs fix a degree-m polynomial) let you determine the polynomial.

    For that matter, even if you are not given the values in the increasing sequence, as long as the input (x) values are given along with the output (f) values for all those enumerations, you can still determine the polynomial.

    The trouble arises if you don't even know which x value is associated with which value of f in the stream of f values that you see. In this case, since you cannot determine the polynomial, your knowledge is not sufficient to uniquely determine the (m+1)-th, (m+2)-th, … values.
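    A quick numerical illustration of the contrast (my own sketch; the cubic below is an arbitrary example, not anything from the earlier discussion):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    true_coeffs = [2.0, -1.0, 0.5, 3.0]      # an arbitrary cubic: 2x^3 - x^2 + 0.5x + 3
    x = np.arange(1.0, 11.0)                 # domain values 1..10
    f = np.polyval(true_coeffs, x)

    # Case 1: the (x, f) pairs are known -- the polynomial is uniquely recoverable.
    print("recovered coefficients:", np.round(np.polyfit(x, f, deg=3), 3))

    # Case 2: only the f values are seen, in an unknown order, with no x attached.
    # Fitting against an assumed ordering gives a wrong polynomial: the pairing
    # between inputs and outputs is what carried the information.
    f_shuffled = rng.permutation(f)
    print("fit from unordered outputs:", np.round(np.polyfit(x, f_shuffled, deg=3), 3))
    ```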

    It is this last situation—rather, this sequence of situations—that captures the essence of what we call randomness.

    Randomness means inability to determine the governing function, in essence. This can happen provided two conditions are met: you don't know the nature of the governing function (e.g. that it's an algebraic polynomial, or a simple sinusoidal curve, etc.), and you don't have enough data points, including the knowledge of the values of the x (i.e. domain) variable, to be able to uniquely determine that function.

    Now, assume that your knowledge context is such that you know the form of the governing equation. Still, the situation would be random to you if you don't know the specific x (input) values which 1:1 correspond to each of the incoming stream for the f (output) values. This, in essence, then shows the nature of randomness.

    Randomness means: absence of a knowledge of the order.

    For instance, suppose you see the output sequence: 1, 2, 1, 2, 1, 2, …. Under the assumption that enough has been seen, you can easily say that the next number would be 1. If you relax that assumption, then, under the separate assumptions that the output values are going to be only 1 or 2, and that the relative frequencies seen in the sample so far will persist, we can say that equal probability exists for either 1 or 2. However, if the sequence seen were to be: 1, 1, 2, 1, 1, 2, 1, 1, 2 … or even 1, 2, 1, 1, 1, 2, 2, 1, 1, then you would say that having 1 was 2/3 probable and having 2 was 1/3 probable. Note two points: that the PDFs are different in the two cases, and that even the equal probability postulate requires the initial empirical evidence in its favor.
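    Spelling out the frequency count in the second example as code (a trivial sketch, only to make the arithmetic explicit):

    ```python
    from collections import Counter

    observed = [1, 1, 2, 1, 1, 2, 1, 1, 2]   # the sequence from the example above
    counts = Counter(observed)
    n = len(observed)

    # Empirical probabilities, assuming the observed proportions persist.
    for value, count in sorted(counts.items()):
        print(f"P({value}) ~ {count}/{n} = {count / n:.3f}")   # 6/9 = 0.667, 3/9 = 0.333
    ```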

    When we don't know the form of the governing equation, it's easy to see that we can't predict the next number in a sequence. But equally important is to grasp that when the order of enumeration (the corresponding x values) is not available for an observed sequence of f values, we are still unable to predict. Order matters.

    Now, a bit of philosophy. Functions tell us something about the real world. Objects have identity, and they act as per their identity. An aspect of their actions is what mathematics measures (with physics specifying/formulating what aspect or characteristic it is which is being measured). As such, measurement and functions carry identity. Therefore, randomness cannot exist in reality. If so, what is the metaphysical locus of randomness—the kind that greets us when we see something whose governing equation we do not know?

    The answer is: Just like infinity, randomness is a mathematical concept, an abstraction, a concept of method. It has order at its root.

    If you have a table of 10 values of a function as noted above, and if only the output (f) values are given, in some sequence other than a definite one (say the simply increasing x sequence), you are completely at a loss. Ditto if the governing algebraic equation is not assured to be of an order equal to or less than the number of input-output pairs available to you. This second observation is a clue to defining randomness.

    Let's begin with sequences whose identity is known. We can arrange sequences as per their complexity. For instance, for sequences generated by algebraic polynomials, we can say that the higher the order of the polynomial, the more complex it is.

    So long as the complexity of a sequence is determined by the input-output pairs available to us, it is not random. However, as the complexity goes beyond the specific (finite number of) input-output pairs available to us, we begin to think of at least partial randomness—we cannot decipher all aspects of its order. This provides the clue.

    The completely random sequence, therefore, must be completely disordered. It must be seen as the result of an infinitely complex governing equation.

    Now, infinity does not exist as an actuality. Therefore, what we really mean here is that the completely random sequence is a projection, a limiting case, of a sequence *of* sequences of increasing complexity.

    Randomness serves the same theoretical point in the complexity and disorder theory (and I take these two terms in a general and loose sense, not in the specific sense of the chaos theory or the phase-transition theory), as infinity does, e.g. in the potential theory.

    The gravitational potential never does really drop to zero. The idea of the potential going to zero at infinity simply serves as a convenient method to compare the convergence behavior of the real (finite) sequences in their real (finite) intervals. Similarly, the idea of randomness simply serves as a convenient way (that is what method means) to compare real (finite) sequences of different but real (finite) complexities/disorders.

    When we say PDF is random, we mean: in the given context, we wish to ascribe to it the limiting nature of infinite complexity, even while knowing full well that eventually, this will only be a means to undertake comparisons between the actual (definite) distribution functions.

    Enough is enough. Thanks for letting me think out aloud.

    –Ajit
    [E&OE]

  4. Lots to think about. Will write back. Meanwhile a couple of quick questions:
    Can one conceive of a continuous distribution before conceiving of a discrete distribution?
    Can one conceive of a non-uniform distribution before conceiving of a uniform distribution?

    I think the answer to both is no. Hence my preference for the coin toss and the uniform distribution.

  5. Let me take your second comment first, since I think that is more significant.

    Basically, a PDF is nothing but the values assumed by a random variable (RV).

    How so? The value of a PDF at a certain value x is the normalized probability that the RV takes a value in an infinitesimal range around x.
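    In symbols (the standard textbook definition, written out here only for precision; F_X is the cumulative distribution function of the RV X):

    ```latex
    f_X(x) \;=\; \frac{d}{dx} F_X(x) \;=\; \lim_{h \to 0^+} \frac{P(x < X \le x + h)}{h}
    ```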

    There are two parts to the name: random variable. The second part first: an RV is a misnomer; it is not a *variable*, it’s a *function*

    Is it necessarily a function? Function implies determinism. What about the random variable that represents the number of slices of bread person X eats for breakfast? Or do you hold that probability theory cannot be applied to such a situation? (I have not yet made up my mind about this, but if so, it would mean that probability theory cannot properly be applied to any field where the variables under consideration are even partly determined by human action.) Anyway, let me go along and suppose that the variable is in fact a function. A function of what? The causal factors presumably (or perhaps you mean a function of the “experiment number”?). Typically there will be more than one factor. In what follows, you write about sequences, which are necessarily functions of one variable. I don’t see how your definition of randomness (limiting case of a sequence of sequences of increasing complexity) can be extended to functions of more than one variable.

    The completely random sequence, therefore, must be completely disordered. It must be seen as the result of an infinitely complex governing equation.

    What about the case where the governing equation itself (not just its form) is known but the values of the causal factors are not known? There is no infinite complexity here and I do not understand what it means for the causal factors to be disordered.

  6. More fundamentally, your description of randomness uses concepts such as sequences, limits, complexity and infinity. Are these concepts really necessary to conceive of randomness? Perhaps the concept of a sequence is necessary but I am pretty sure the others are not.

  7. Answers to the two quick ones: (Long enough!!)

    Can one conceive of a continuous distribution before conceiving of a discrete distribution?

    No. Just as the natural numbers hierarchically precede the real numbers, the discrete distribution does precede the continuous one.

    When I said it’s better to begin with the continuous distribution, I didn’t mean to say in the hierarchical sense. I meant to say something like: “to break away from the routine treatment so that a fresh perspective may be obtained.” … Yes, I do think that I was not clear enough and that this clarification was in order.

    Can one conceive of a non-uniform distribution before conceiving of a uniform distribution?

    Yes. You already have the idea of the number line and all at least implicitly—in the sense of continuous graphs even if not in the full sense of an analytical function—before you can think of a continuous distribution. And I think historically that’s the way it progressed. First, they already had some notions of chance and all but only in the discrete form (Cardano); then Rene Descartes introduced the mapping between real numbers and geometry in the usual maths; then probability theory got further developed in the discrete form; and then finally the continuous distribution arrived on the scene.

    In forming the concept of a continuous distribution, a man might perhaps begin with the uniform distribution, but it’s an accidental point, irrelevant to establishing hierarchical precedence (see more near the end of this reply). The crucial point is that the concept can’t be said to have been fully formed unless it is general enough to handle the non-uniform distribution as well. Indeed, the uniform distribution is only a special case of the nonuniform distribution. (The word “non” here isn’t epistemologically significant; it doesn’t indicate a definition through negation here. Instead, it’s just a convention or convenience. It actually means: “distribution in the general sense.”)

    The idea goes back to the concept of graphs. Ask people to draw a graph. People invariably draw a nonuniform one. For instance, even if they draw a line, they do so at 45 degrees—not a horizontal line (uniform distribution) or a vertical line. The concept wouldn’t be fully formed unless it is general enough.

    Hence my preference for the coin toss and the uniform distribution.

    But by doing so, you end up addressing only one part of the whole picture. The starting mathematical objects are one part; how they are put together, processed, the more complex quantitative relationships which emerge as a result of such processing, too, is an essential part. The coin toss (by which you mean the *fair* coin toss) and the uniform distribution are, really speaking, special, even degenerate cases. It’s a case of: no matter what the event, the value of its probability remains the same—it doesn’t change. Now, why shouldn’t it?

    Again, an analogy from CS: the simplest and therefore the most efficient sorting procedure (by definition O(1)) is when the data are already presented in the sorted order, and you know that they are such. This case certainly is simple, actually the simplest case of sorting—meaning, requiring no sorting at all. But is this special case really necessary when it comes to defining the meaning of the sorting procedure?

    More generally: Hierarchical precedence sure does guarantee simplicity, but the converse is not true.

    While building a concept, a simple case does not have to *precede* a more complex case—*both* simple and complex cases may have to come first before a more abstract concept can be had.

    Simplicity is not the same as fundamentality. Before fundamentality can be asserted, an adequate generality of the scope has to be ensured. In ensuring adequate generality, simple as well as complex cases do get subsumed. The concept “animal” includes squirrels and dogs, but it does not stop there—the concept formation is not yet complete until man too is included in it.

  8. How so? The value of a PDF at a certain value x is the normalized probability that the RV takes a value in an infinitesimal range around x.

    Well, I was not writing very precisely. Take it as a general, hand-waving kind of discussion, whose purpose is to open up the points for discussions.

    Is it necessarily a function? Function implies determinism. … A function of what?

    You caught me there, again! No, it’s not a function in the usual, i.e. deterministic, sense. It is a function but only in an extended sense.

    Actually, if you are looking for mathematically precise definitions and all, in my limited knowledge, the best formulation of what probability theory is, was put forth by Kolmogorov. For an accessible (UG/PG level) treatment, I would single out these two: (i) Elementary Probability by Stirzaker and (ii) All of Statistics by Wasserman. You will have to supply quite a bit of philosophy (epistemology). However, these two books do offer the best currently available mainstream treatments.

    What about the case where the governing equation itself (not just its form) is known but the values of the causal factors are not known?

    I didn’t quite get your point here. But, unless you wish to continue, let’s leave it at that. I mean, I think rather than establishing the context and all for what your specific thought was here, perhaps via a long recounting, it seems to me that the same points would also get covered via our other discussions. Also see a separate reply to your separate point: “More fundamentally…” next.

  9. More fundamentally, your description of randomness uses concepts such as sequences, limits, complexity and infinity. Are these concepts really necessary to conceive of randomness? Perhaps the concept of a sequence is necessary but I am pretty sure the others are not

    You have a point there, and it is this: Suppose the 13 cards of a single suit are shuffled, put on a desk, and read in sequence from top to bottom. Can we then say that the cards now appear in a random order?

    Here, I think there *is* a sense in which one would answer: “Yes.”

    However, such a usage is not very precise, and philosophically, the issue is not well-examined at this level of analysis. To make it more precise (and also to address the part of “human action” you raise), consider another example.

    Consider a typical scenario of that on-TV lottery, wherein they have numerous balls inside a glass container. They have these balls vigorously shaken up via chaotic air flow. Then, they allow the balls to come out at some intervals (either mechanically or by the intervention of a human action, when the invited guest presses a button). Suppose the balls are numbered 0 through 9. Can the sequence of the released balls be said to be random when the release happens (i) via a purely mechanical action (a timer-based opening of the valve) or (ii) via human intervention?

    Let’s analyze the first, mechanical, situation. The entire system here is deterministic. Fluid chaos is deterministic. If you know all the fluid parameters such as pressure, velocity, temperature, etc. (including the boundary, initial, and environment conditions) to sufficient accuracy, in principle, you can predict the sequence. More important: Even regardless of whether you have the ability to predict it or not (e.g. whether you are trained in the special science of mechanics or not), you know on philosophic grounds alone that the air, glass, blower, balls, etc. all are existents. So, each has its own identity. Each can act only in accordance with its identity and in no other way. Hence, the only end-result possible is that which is governed by their identities—including the identity of their interactions. In particular, existence allows for no metaphysical flux or fluke. As such, the system is deterministic—its end result is *completely determined* by the nature of the entities that act. Whether *we* are able to tell the result or not (whether through a lack of knowledge, or of instrumentation of sufficient accuracy, or due to any fundamental limitations arising from the finite nature of our consciousness), it is completely determined by the nature of the objects—the identities—involved in it.

    If so, can we call the sequence so obtained, random? How will you answer that question?

    The only answer possible is this: If you conduct such an experiment in reality, then the result *in principle* cannot be random.

    Yet, in a loose sense, we can say that the sequence is random—in the same sense in which we are tempted to say that the shuffled deck’s sequence is random.

    If so, we have to ask: what is that characteristic of these situations, which prods us towards describing them as possessing randomness?

    Now, in answering that question, we find that we have to say that randomness is a matter of degrees. Some situations are more random than others. Shuffling a deck only once or twice produces less randomness than shuffling it for a longer time. The difference between the well- and ill-shuffled decks is characterized in reference to the *lack* of order i.e. the complexity level of the lacking order.

    Notice the negative kind of reference here. Since randomness does not exist in the metaphysical reality, it can only be defined via a negation of some aspect of reality. The aspect in question is orderliness.

    If so, the next question becomes: What should we pick up as the standard of orderliness? It is in reference to this question that infinity and all come into picture.

    You see, what metaphysically exists is orderliness—not its lack. Therefore, if you do pick out any standard of orderliness without making reference to advanced mathematics, in particular to infinity, then that standard will be found lacking in some way or the other.

    Suppose you say that a polynomial of the n-th degree is going to be your standard, where n is a finite number. For instance, you will describe the lack of orderliness in reference to a cubic polynomial. If the sequence does not fit a cubic polynomial, then it is random. That’s the general idea. But then, with this standard, you are unable to distinguish between a complexity of order 4, 5, and higher. In essence, you are in the situation of a crow that counts: 1, 2, 3, many. Orderly, More Complex Orderly, Much More Complex but Orderly, Random. Yet, a sequence that appears orderly as a 4th degree polynomial and another as a 5th degree polynomial do actually differ in their complexity measure.

    The only way you can get out of the situation is to say that n can have no bounds. But, then, you run into the usual problem of infinity—the meaninglessness of the debates over millennia regarding its nature.

    The only way you can cut out that trash is to immediately go ahead and state and begin using ideas like: limiting process, limiting sense of the value, convergence behaviour, comparison of sequences as the cognitive purpose, etc.

    You cannot avoid basic calculus for concepts which do require it. Calculus provides objective standards of comparison for indefinitely extensible sequences. In particular, it puts forth the idea of convergence behaviour as the basis of comparisons. Convergence behaviour begins with a finite range, and remains always within the finite range, but allows us to speak confidently, with objective rules, about whether one indefinitely extended (i.e. infinite) sequence (series, function, etc.) would blow up faster than the other. That last question is all that matters concerning *all* things to do with infinity.

    It is precisely because randomness does not exist metaphysically that we have to have a *conceptually* *projected* standard for measuring it. Such projection requires reference to be made to infinite processes.

    Otherwise, in a crude sense, you can always stay happy with a sufficiently high degree n polynomial (or some other sufficiently complex function) as your standard. You can do that, so long as you know your limits, and stay within them.

    Now, a word about human action. Human beings can interfere in the physical world. (Isn’t this a revelation?)

    *When* (i.e. in every such occurrence as) human beings act, the law of identity applies also to their actions.

    If a man’s action is such that it actually has a physical consequence, then, from the narrower perspective of the physical theory, he has to be accounted for as nothing but one more physical kind of causal agent. From the physical theory, his status then is just at par with that of air, balls, valves, boundary conditions, etc.—nothing more, nothing less.

    Human beings have free-will. This does not mean that their actions are to be taken as random. Inasmuch as their actions belong to this world, they cannot be taken as random. If so, what is the source of the orderliness in a freely-chosen choice? The source is: the man, himself. When a man chooses a particular card (or chooses a particular moment of pressing the button in that lottery situation), the cause (i.e. the causal agent) is: he himself, including his mind and body. Qua a living being, every organism naturally has greater physically causative powers than the inanimate objects do. Qua a living being having a free-will, that power is further qualitatively enhanced. Assuming the proper context (namely that there is a man who is exercising his free-will or even defaulting on doing so), free-will *is* physically causal. The effects—the physical consequences—are completely determined by the nature of the acting entity—here, the man. The exercise of the free will in the same direction (e.g. that I will now type the letters “Ajit”) inexorably has the same consequence: “Ajit”. Just one last point (which I will not care to explain here; look it up in David Harriman’s book): Existence has primacy. The only way we can characterize any aspect of consciousness is in reference to existence. The only way I can determine that my choice be freely exerted towards the goal of typing “Ajit” is in reference to certain mental actions that ultimately refer to my typing out “Ajit”. Once the mental action is well-identified, it is easy to see that the same action leads to the same effect: “Ajit”.

    “Ajit”

    “Ajit”

    Ok I will not bore you any more.

    I am tired, need a cup of tea, and am not going to reply on this thread for a week or so. Feel free to leave your comments and criticism in the meanwhile, though.

    Best, and thanks,

    –Ajit
    “Ajit”
    “Ajit” 🙂
    [E&OE]
