Validating theories

After more than a month of architectural redesign on the software product I am working on, I reluctantly took a couple of days off to attend a cousin’s wedding (which was interesting enough to merit a post of its own… coming soon…). Only the last pieces of the redesign are left, and this morning I intended to resume and finish them off. Instead, I just ended up sitting in front of my computer for several hours, staring at the screen blankly. Surprised at this, I constructed a theory: being the introvert that I am, the two days of social interaction had taken so much out of me that my mind had hibernated and needed to reload all the stuff required to think about work from the hard disk. (Perhaps, as a programmer, I can’t help thinking about everything in computing-related analogies.) The theory tied in nicely with what programmers call being “in the zone” (look for Paul Graham’s essay on the topic, if I am not mistaken).

A few hours later I discovered that I had an upset stomach and a slight fever. So much for my theory! I need to be more careful.


Interesting Excerpts from PDC Languages Panel

…the problem is that you know… the pace of innovation is limited by what people can digest, and you can’t force things on them too fast, and so you see… some things happen in generations at some level, and it just cannot be otherwise, because you know… you basically need a certain generation of programmers to retire or die off or whatever before you can introduce radically new ideas, because the stage in people’s lives when they really do something completely different passes at some fairly early stage… at least from my perspective it is a fairly early stage.

…so, keeping it simple is easier when you don’t have to deal with a committee, because the common assumption is that you know… two heads are better than one, but what you actually get when you have multiple heads is not their union but the intersection, and so it’s what they can all agree on and kind of negotiate around, and it doesn’t work well unless there is somebody in charge…
So you don’t want that, you want to strictly adhere to a very uniform… whether it is functional… all the really beautiful languages basically take something to their logical conclusion… whether it is logic or functions or objects, and they don’t… mongrelize – hybridize doesn’t have quite the edge that I was looking for – but it is very hard, necessarily, to do that in the real world under real time constraints… because in a lot of ways mongrels are very resilient, and getting the pure solutions not to be very brittle and to address all the burning immediate needs of people takes time, and the time usually does not exist

The interesting thing is that the actor model is a perfect fit with the object capability model, and again, if you take that seriously you find that you can introduce a particular model of concurrency that has much to recommend it with relatively little conceptual overhead, again because you are still… you are reusing these concepts of isolated things that communicate via message passing, because… well, the common thread to all these things, to the modularity, to the security, to the concurrency is… there is no global anything, there is no top-level thing that is all knowing, that can synchronize everything, and knows about everything, and has a global namespace and so forth, because this is what actually scales… whether you are doing modularity or concurrency or worried about anything else, because you know there isn’t actually something up there in the universe… the laws of physics work very well because they are distributed, because there is no shared convenient thing that you can appeal to that will sort it all out, and if you program that way it tends to unify a lot of things

— By Gilad Bracha

The full session is available here.
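Bracha’s remark about isolated things that communicate via message passing is easier to appreciate in code. Here is a minimal sketch of my own (not from the panel) using F#’s standard MailboxProcessor type: the actor owns its state, there is no shared or global anything, and the only way to affect it is to send it a message.

```fsharp
// A minimal actor sketch (mine, not from the panel) using F#'s
// built-in MailboxProcessor. The counter state is private to the
// actor; the outside world can only post messages to its mailbox.
let counter =
    MailboxProcessor<int>.Start(fun inbox ->
        // The loop parameter is the actor's private state; there is
        // no global anything to lock or synchronize on.
        let rec loop total = async {
            let! n = inbox.Receive()            // wait for the next message
            printfn "received %d, running total %d" n (total + n)
            return! loop (total + n)            // continue with the new state
        }
        loop 0)

counter.Post 1    // fire-and-forget message sends
counter.Post 41
```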

Brilliant analogy

This is perhaps the best introduction to a subject that I have seen.

A real-world example of asynchrony

“A waiter’s job is to wait on a table until the patrons have finished their meal.
If you want to serve two tables concurrently, you must hire two waiters.”

From The Visual Basic Team Blog.
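The analogy describes the synchronous model: a waiter (a thread) blocked on one table. To see the asynchronous alternative in code, here is a small sketch of my own (not from the VB team’s post) using F# asynchronous workflows, where “waiting on a table” no longer ties up a dedicated waiter.

```fsharp
// A sketch (mine, not the VB team's) of serving two tables without
// hiring two waiters: each wait is a non-blocking async workflow.
let serveTable name minutes = async {
    printfn "Start waiting on %s" name
    do! Async.Sleep (minutes * 1000)   // simulate the meal, 1 s per "minute"
    printfn "%s has finished" name
}

[ serveTable "table 1" 45; serveTable "table 2" 60 ]
|> Async.Parallel                      // both waits overlap
|> Async.RunSynchronously
|> ignore
```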

A lesson in epistemology

I first learnt programming (in FORTRAN) in an introductory Computer Science course in my first year of engineering. About a year after that, I took on a project that required some C programming, despite having no knowledge of the language, and did moderately well. Some time later, I learnt C++ from Bruce Eckel’s excellent book “Thinking in C++” while simultaneously working on another project. Around the time I graduated, I learnt C# mostly by reading an informal specification of the language.

Now I want to learn F#, and after looking at a small number of examples on technical blogs (which got me interested), I decided to download and study the specification of the F# language. I miscalculated. Reading a technical specification is not a good way to learn a language. My earlier success at learning C# from its specification worked mainly because I already had experience in C and C++, and C# belongs to the same family of languages. F# is a functional language (as opposed to the imperative C family), and my experience and concepts from C-like languages do not translate to it. I now realize that, given my background, I could learn Java by reading a technical specification (if I wanted to), but not F#. The new concepts I need in order to grasp F# cannot easily be learnt from a technical specification. They will have to be learnt by looking at and trying out numerous examples first, by induction.
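To make the point concrete, here is the kind of trivial example (my own) that a specification cannot substitute for: the natural F# way to sum a list uses recursion and pattern matching, where a C-family programmer would reach for a loop and a mutable accumulator. The grammar of match expressions tells you nothing about why this shape is idiomatic; only many such examples do.

```fsharp
// Recursion and pattern matching instead of a loop with mutation,
// an idiom that does not translate from the C family of languages.
let rec sumList xs =
    match xs with
    | [] -> 0                               // an empty list sums to zero
    | head :: tail -> head + sumList tail   // peel off the head, recurse

printfn "%d" (sumList [1; 2; 3; 4])         // prints 10
```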

What does this have to do with epistemology? The same principle (of induction) applies to all concepts, not just to concepts in specialized sciences. The final “finished form” of a concept is not particularly useful for learning. It is useful only when an approximate form of the concept has already been reached by induction from concrete examples.

As far as this blog is concerned, I have realized from this experience that most of my posts have been attempts at presenting ideas in “finished form”. Without the concrete examples that are necessary to reach these ideas through induction, it is unlikely that anyone who does not already broadly agree with them will take them seriously. The “finished form” is useful for refining and clarifying existing ideas, not for reaching radically new ones.
