lesscode.org


'Theory' Archives

Freedom vs. Safety

Cat.: Theory
25. August 2005

Kevin Barnes’ Freedom Languages is one of the best high-level takes on the language divide I’ve read in a long time. He nails the major differences in the debate previously framed as “dynamically vs. statically typed languages”. But he goes much further, placing languages into either “Freedom” or “Safety” categories and then listing the relative merits of each. I for one will be adopting this update in terminology.

As an illustration of how dead-on Kevin is in his analysis, here’s a link-annotated paragraph on what you can expect from the “Freedom Language” advocate:

The advocates of freedom languages tend to talk first about the speed and efficiency of the individual programmer. They discuss the expressive power of different constructs and focus on all the powerful features that the safety languages lack. They point out complex patterns and show off twenty-line systems that do the same thing. They talk more about the ease or purity of things than the safety of things. They are dismissive of static-type-safety and compile-time validation in general.
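To make the contrast concrete, here’s the sort of thing a freedom-language advocate might show off: a minimal, hypothetical Python sketch (the function and the data are my own invention, not Kevin’s) that groups and counts records in a dozen lines, where a safety language would typically demand type declarations, interfaces, and a fair amount of ceremony before getting to the point.

    # The kind of brevity freedom-language advocates like to show off:
    # group a list of records by a field and count them, with no type
    # declarations or ceremony. Names and data here are hypothetical.
    from collections import defaultdict

    def count_by(records, key):
        """Count records grouped by the value of `key` in each dict."""
        counts = defaultdict(int)
        for record in records:
            counts[record[key]] += 1
        return dict(counts)

    bugs = [
        {"severity": "high", "component": "parser"},
        {"severity": "low", "component": "ui"},
        {"severity": "high", "component": "ui"},
    ]
    print(count_by(bugs, "severity"))  # {'high': 2, 'low': 1}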

Also, be sure to check out James Robertson’s commentary on Kevin’s post as well.

Disposable Software

Cat.: Theory
16. August 2005

I was recently hired to deliver a presentation on Goal-Oriented Software Design. The added twist was that the presentation had to be narrowly tailored to the client’s business domain, with some concrete examples. That meant I couldn’t get away with a generic presentation; I had to invest serious time in research before I could even think cogently about the specific problem.

The paying client was clear about that requirement, and was happy to oblige me. Not before asking for an estimate, though, which ran into thousands of dollars. Having no particular objections to my estimate, the client signed the SOW, and I went off to merrily prepare the presentation.

But then that episode got me thinking. Here we have a budget-conscious company signing a hefty invoice for a presentation that was delivered to them within one hour (that’s all it took for me to run through my slides and field the Q&A period). That presentation was a one-time event, a throwaway, if you will.

Now, imagine if, instead of negotiating a presentation, I had been negotiating delivery of a software application with the same company. Would I be able to get away with a multi-thousand-dollar invoice for a throwaway application that would address only a very narrow and specific problem domain, and then only for a very short period of time?

The answer is, of course, a resounding ‘no way!’ No one in their right mind would ever approve building an application that would be useful only to a handful of people, and only for a couple of hours or days.

But really, what’s the difference between such an application and the presentation I delivered? My presentation, while doubtless quite useful to several expert users within that company, was valid for only an hour or so. The fact that it took me more than two weeks to prepare something with a useful lifespan of only an hour apparently didn’t bother anyone. A software application, on the other hand, is invariably perceived as something that must not only possess phenomenal longevity (at least 3 to 5 years, by the current rule of thumb), but must also offer a large measure of salvageability, that is, reusability.

A typical business presentation, such as one delivered using Microsoft PowerPoint, is not expected to be reusable. In truth, however, there is usually quite a wealth of useful information and knowledge buried inside such presentations. But no one seems particularly bent on salvaging that content, or on reusing it, or on evolving it.

But as soon as we step into the world of software application development, the delivered source code, and all the embedded knowledge it carries, gets treated as pure gold. All of a sudden, everybody starts hyperventilating about the ability to reuse it, about not reinventing the wheel, yadda, yadda, yadda.

I must admit at this point that I’m a bit mystified as to why one kind of knowledge (the knowledge embedded within software source code) is deemed so extremely precious, while another kind (the knowledge embedded within other business documents) is viewed as easily reproducible and thus not worthy of extreme ritualization. Especially when, often, the two apparently different kinds of knowledge turn out to be quite similar in scope and in intent.

Something has been mystified beyond recognition in the world of software application development. If someone is prepared to spend several thousand dollars, without batting an eyelash, on articulating and communicating a valuable, albeit disposable, message to a narrow target audience, yet balks at a similarly useful articulation of equally important knowledge delivered via application code, that tells me something went haywire along the way. A reality check is in order.

On the flip side, however, I had to ask myself how it is that I feel perfectly comfortable delivering all those disposable PowerPoint presentations without ever looking back, yet at the same time find myself striving to build software that is as general-purpose as humanly possible. Am I being reasonable in behaving that way?

If someone asked me to deliver a PowerPoint presentation outlining the optimal strategy for building a teacher’s grade book that would collect assignment and quiz marks and summarize them into final grades, I would gladly jump at the challenge and quite comfortably ask for 5 to 7 days of research time. I would be quite certain I could deliver a high-quality product in that time frame, starting from a completely clean slate.

However, if they asked me to deliver application code that would allow teachers to collect all the grades and summarize them for the current grading period, according to the particular rules the school in question operates under, I would be reluctant to jump at it and commit to the same 5-to-7-day time frame.
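For a sense of scale, the core of that task is tiny in a lightweight language. Here is a minimal, hypothetical Python sketch; the weighting scheme and the sample data are invented placeholders standing in for one particular school’s rules, not anything from a real project.

    # A disposable grade-book sketch: collect assignment and quiz marks,
    # apply a weighting scheme, and print a final grade per student.
    # The weights and the sample data are invented placeholders standing
    # in for one particular school's rules.
    WEIGHTS = {"assignments": 0.6, "quizzes": 0.4}

    def final_grade(marks):
        """Weighted average of a student's per-category mark averages."""
        total = 0.0
        for category, weight in WEIGHTS.items():
            scores = marks[category]
            total += weight * (sum(scores) / len(scores))
        return round(total, 1)

    students = {
        "Ana": {"assignments": [82, 90, 75], "quizzes": [88, 79]},
        "Ben": {"assignments": [70, 65, 80], "quizzes": [60, 72]},
    }

    for name, marks in students.items():
        print(f"{name}: {final_grade(marks)}")

The weights and rules would of course differ from school to school; the point is only that the core arithmetic is trivial, and nothing in it needs to outlive the grading period.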

Why? What is different about software code compared to PowerPoint content? Basically, the fact that both technologies currently in vogue (the Coke and Pepsi of the industry, that is, J2EE and .NET) are so complicated prevents me from feeling comfortable delivering a highly customized solution.

But, realistically speaking, I think it is becoming clearer and clearer that delivering huge, bulky, all-things-to-all-people software applications is simply not the way to go. We are entering an era of disposable software. Build it quickly, with a particular narrow problem in mind, and for a particular, possibly short, time frame of valid use. Then be prepared to throw it away and move on, without looking back.

Until recently, such a thing would have sounded more like science fiction, but today, thanks to a new generation of tools and a new outlook on the philosophy of software development, such an approach is becoming increasingly feasible. I’ll give a brief illustration here:

Last year I worked on a project that aimed to deliver administrative functionality to school districts throughout North America. The project was conceived and driven by a high-tech company, itself driven by various high-tech concerns and agendas. Thus, we ended up with an ungodly mixture of .NET, J2EE, and a bit of Oracle technology thrown in for good measure.

But the real problem was the lack of true focus. The product was being built with a vague, generic individual user in mind, so most of the features it offered missed the mark completely. And because end-user acceptance was so poor, the all-knowing higher-up powers pushed even harder toward generalization and future flexibility. Translated into dollar value, that means the application was exorbitantly expensive to design, develop, and maintain.

Because of that, it was put up on a pedestal, and was treated like a sacred cow in all ways imaginable.

Anyone who dared suggest that, instead of attempting to build one giant, general-purpose app, we focus on building several more specific apps addressing the specific needs of individual districts (or even schools) would be shunned, ridiculed, and eventually thrown out of the boardroom.

Thus we ended up with an extremely precious product sporting a myriad of features that end users mostly didn’t care about, and a final product that is exorbitantly expensive. Would anyone dare, at that point, tell the powers that be that such a product should be just a throwaway? Not a chance in hell! And so the dance of deception continues.

Meanwhile, all that the targeted segment of end-users wants is a simple app to allow them to do their job easily. And they are not thinking in terms of many decades of blissful use. Just let them do the specific job right now, for this specific event (such as submission of final grades).

In reality, a good software developer should be able to do just that: deliver a disposable, short-lived product that addresses only the pressing needs of the moment, and then simply disappears. No maintenance, no enhancements, no song and dance, nothing. Much like my one-time business presentation, which required nothing more than to be consumed and then thrown away. No fuss, no muss.

It’s a very powerful tool.

Cat.: Theory
09. August 2005

I’m a sucker for a gushy Tim Berners-Lee interview:

ML: And you’ve never had a sleepless night over that?

TBL: No I haven’t. I haven’t had a sleepless night over it because I suppose I’m so much more surrounded by the good things that people are doing with it. There are lots of positive stories of people doing great things, putting educational information out there for people in developing countries and things, for example. There’s a huge spirit of goodness. Most of the people I meet who are developing the web are focused on all those things.

Don’t take your memes to town

Cat.: Then they laugh at you..., Theory
07. August 2005

One of the most interesting characteristics of the Web is that it doesn’t version. The Web is a bona fide computing ecosystem and, along with email, one of the killer applications of another system, the Internet, itself designed for high survivability and for coping with massive physical infrastructure damage (it’s literally nuke-proof). The Web evolves, as does the Internet it runs on. Versioning for the Web makes no sense whatsoever.

So what’s this Web 2.0 thing then? Tim O’Reilly feels that it’s a valuable meme:

The reason that the term ‘Web 2.0’ has been bandied about so much since Dale Dougherty came up with it a year and a half ago in a conference planning session (leading to our Web 2.0 Conference) is because it does capture the widespread sense that there’s something qualitatively different about today’s web.

Web 2.0 is the Web for Mr. Safe and his management team. It helps bring the shrinkwrap software, advertising, broadcast, and publishing media people out of their comfort zone and onto the Web as it actually is, carrying enough of their old concepts along with them. It does, however, choose to frame the Web in terms of the old models. Given the history of what happens when old ideas knock up against the Web, it’s arguably risky for Mr. Safe and his team to believe it fully. Sometimes you have to take a phenomenon on its own terms rather than in terms of your existing paradigm. For the most recent example of what happens when the old ways don’t carry over, take a look at the IT industry’s Web Services efforts.

Anyone who thinks the Web is something that evolves according to a release cycle fundamentally doesn’t understand it. If you could version the Web, it wouldn’t work properly; support for versioning would be a bug. I think this is why Tim Bray doesn’t like Web 2.0, which is where this recent conversation began:

It’s not only vacuous marketing hype, it can’t possibly be right.

It’s a broken meme of sorts, perhaps one only a dinosaur could enjoy. Other developers, like Dare Obasanjo, are working through their transition from the Web as a place to run websites to the Web as a place to run platforms. Here’s Obasanjo responding to Tim O’Reilly’s take on Web 2.0:

Reading stuff like Tim O’Reilly’s just leaves me scratching my head. I completely grok the simple concept that folks like me at MSN are no longer just in the business of building web sites, we are building web platforms. Our users are no longer just people interacting with our web sites via Firefox or IE.

But even that idea, web platforms, seems dysfunctional. In the software industry, a platform is a chunk of software infrastructure other people innovate on. It’s the closest thing the industry has ever had to a toll booth: you build it and they will come, and pay to drive over it. How cool is that? Unfortunately, ‘web platforms’ are based on the collective hallucination that the Web offers a viable means to support such API-based architectural franchises. That this is something of a cargo cult suggests it’s unlikely to work. Future franchises will be built around access to data, not access to software. As with versioning, if the Web were an operating system or a platform, we’d need to file a bug report against it. Heck, it’s turtles all the way down: being able to file a bug report would be a bug.

Ian Davis, meanwhile, gets closer by describing Web 2.0 as a state of mind:

Here’s my take on it: Web 2.0 is an attitude not a technology. It’s about enabling and encouraging participation through open applications and services. By open I mean technically open with appropriate APIs but also, more importantly, socially open, with rights granted to use the content in new and exciting contexts.

So it seems that while the term Web 2.0 might miss the mark for technical types, it has broader value. Here’s Stephen O’Grady’s take on it (from the comments):

Web 2.0 is similar to that of Ajax. in and of itself it’s probably inaccurate, non-descriptive and misses the point. And yet, I’m favor of it. why? For two reasons: 1.) it’s propagated widely enough to have some widespread recognition, and 2.) it neatly packages everything up for the non-digerati who, after all, is the majority of the population.

What’s interesting in all this is that while the toolsets and technologies are surely getting better, core Web architecture isn’t changing that much. What’s happening is that people’s ability to innovate is improving, and that seems to come as much from accepting Web thinking as from any recent improvement in the toolchain. In short: instead of trying to make the Web a good place for your business or technology to function, adapt your business or technology to function well on the Web.

The Yin and the Yang

Cat.: Theory
17. July 2005

My time has been split between Rails/Django research and reading Andy Smith’s excellent Why Frameworks Suck essay/rant over and over. I can’t really explain that.