toxi.in.process

Friday, January 27, 2006

Importance of design

When I recently criticized Processing's lack of object orientation I seemed to have hit a sore spot. The majority of responses emotionally defended the current syntax as sufficient and most suitable for beginners, while at the same time dismissing my insistence on OO principles as strictly unnecessary and the proposed methods as mere "niceties" which can easily be lived without. I think that's missing my point completely!

Object orientation is not to be confused with "nice" looking, cleanly formatted or easy-to-read source code. Even though all those things are more often than not features of software developed that way, practising an object oriented approach involves far more fundamental differences, technically as well as conceptually. What I consider those differences to be is explained in more detail below.

Another related and still consistently misunderstood point of my critique was my claim that Java is more expressive than Processing. Expressiveness is not just the result of a statement, but also how it is communicated. I was purely arguing that Java's language constructs are better suited to express complex abstract concepts by means of OOP. Of course Processing wins hands down when it comes to quickly producing sketchy prototypes, yet I wish Processing's syntax would not focus so much on this "instant gratification" part. As a teaching tool it should be able to grow with the user and "gently", yet consistently, introduce core concepts from the beginning, since it's often easier to learn something than to unlearn it.

A program is a step-by-step description of a process. However, I am arguing we should not separate the act of encoding the step-by-step instructions from the actual act of decomposing the structure of the process in question. OO is not primarily concerned with syntax. It is all about reaching an optimal design solution as the result of decomposition and abstraction of an idea. It is itself an iterative process and a more than useful "tool" for engaging with bigger and more complex ideas in order to encode them as software. Aiming for literacy in this field will have (and has had for most) a direct impact on the conceptual quality and complexity of the work produced. If this is not the aim of teaching people programming then I don't know what is. I sincerely wish Processing had embraced this further, and I still can't see any reason why it shouldn't be possible to marry an easy-to-use syntax with those more conceptual aspects of structure and means of expression.

So I still don't really buy the arguments presented so far that OO thinking is too hard to grok for novices without any background in computing. Object thinking is all based on a certain perception of, and assumptions about, the world. It does require certain skills to abstract and translate observations, but these are skills needed just as much in other disciplines like design or philosophy (or art), and so have little to do with requiring a Maths degree as a prerequisite.

The world around us


A helpful, yet pretty simplistic view for the sake of the argument below could be: We all live in the same world and to some extent share and agree on some aspects of this reality. One of them is that this reality is filled with a multitude of "objects": your notebook, pen, mobile, toothbrush, dog, boy/girlfriend, iPod, power plugs... In the same way we can consider more abstract things as objects too: the Internet, an mp3 file, a ski trip, a club night, a video game, a story, softness, a theatre play... All those objects, physical or not, have something in common: each one of them can also just be considered an "idea" or "concept", and as such they can all be mentally isolated and treated on their own, yet they are somehow related to each other and anchored in reality (well, that's the common assumption at least). The question is: what is it that ties each of these objects into reality?

We humans use our various languages to refer to and communicate ideas (concepts) to each other. To simplify that process of communication we have slowly (over millennia) developed symbols for frequently used concepts. Those symbols are called "words" and we've been conditioned to associate each one of them with a more or less stable meaning/concept since the day we were born (behaviourist theory). But remember, each of our words is only a way to refer to an idea or concept. Each concept itself "exists" on its own, regardless of whether there's a word for it. Words form the basis for only that part of our description of the world which we all agree on, with an infinity of ideas (and their corresponding symbols) outside of it. By means of our own acquired private symbol collections and thought processes we can mentally construct any number of new ideas or concepts (with the restriction that we can't think of anything using symbols unknown [to us]).

In order to avoid ambiguity, or to communicate concepts (incl. entirely new ones) to others, we have to use a set of known, established symbols (words) and put them into the "right" sequence (syntax) to communicate and explain the new concept to each other (I've put "right" in quotes here since in theory we've got quite a choice of languages to express ourselves in). That necessary expansion of symbols into statements means our languages are self-referential. We make use of this feature to create semantic layers: once a new idea becomes popular enough, a new symbol will be introduced by someone familiar with the concept to refer to it more directly. A side effect of the sheer existence of this new symbol/phrase is that it now hides the level of detail of the original (longer) idea from its new users (AJAX being a recent example). In many modes of conversation this is desired and sufficient. The new symbol provides a higher level of abstraction of the idea and at the same time encapsulates its deeper knowledge.

As a general principle it can be said that in order to (re)gain this "hidden" knowledge we'll have to inspect and engage with the underlying ideas, which again means nothing more than undergoing a potentially recursive learning process to recover the full extent of the bigger idea - or, in Newspeak, reverse engineering. This is true for your own ideas as much as for external ones. As a sidetrack: this is also a strong argument for Open Source, when considering code as an extension of language.

Object anatomy


By all means I'm not proposing this to be a correct theory of language, but taking this simplified theory as a starting point we can see that object orientation as an idea is not as alien to most of us as it may seem. If we treat objects as ideas we can come to the following conclusions:
  • objects can be analyzed in isolation and are based on a number of properties (sub-ideas) which are "hidden" by default. This is called encapsulation.

  • objects are nested (a novel is a specialized idea of a story). This is called inheritance.

  • nesting causes objects to have manifold appearances based on the current context (my wife is my friend, but also a female, a mother, a daughter, a human etc.). This is called polymorphism.

  • some objects (remember: as ideas) are used as prototypes/interfaces: for example "female" is a specialization of gender, which ultimately can be reduced to the idea of "binary" or opposition. Those special types of objects are also called interfaces and are employed on many levels across languages/cultures/contexts: yin-yang, on-off, black & white, male-female, plugs and sockets (to reference an example from the initial list of objects above) etc.
    Okay, some of those might be a bit contrived, but I hope you get my point...
Since this is no tutorial about OOP I won't go into much technical detail here, but for some reason discussion about OOP more often than not quickly locks in on the topic of inheritance. It's true it's one of the pillars of OOP, yet IMHO a more suitable way of introducing object thinking to newcomers is by focussing on encapsulation, polymorphism and the interface metaphor, since their basics can all be explained without a deep understanding of the inner workings of inheritance.
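For the curious, the four ideas from the list above can be sketched in a few lines of Java. This is a loose illustration only; all class and interface names (Binary, LightSwitch, Story, Novel) are made up for this example:

```java
// Interfaces: the pure idea of "binary"/opposition, shared across contexts.
interface Binary {
    boolean isOn();
}

// Encapsulation: a LightSwitch hides its state; outsiders only see the contract.
class LightSwitch implements Binary {
    private boolean on = false;          // hidden by default

    public boolean isOn() { return on; }
    public void flip()    { on = !on; }
}

// Inheritance: a Novel is a specialized idea of a Story.
class Story {
    String describe() { return "a story"; }
}

class Novel extends Story {
    @Override
    String describe() { return "a novel, which is still " + super.describe(); }
}

public class ObjectAnatomy {
    public static void main(String[] args) {
        // Polymorphism: the same object appears differently based on context.
        Story s = new Novel();               // a Novel can stand in for a Story
        System.out.println(s.describe());    // dispatched to Novel at runtime

        LightSwitch sw = new LightSwitch();
        sw.flip();
        System.out.println(sw.isOn());
    }
}
```

Note that nothing here needs a Maths degree: it's the notebook/novel/plug-and-socket observations from above, written down in a formal syntax.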

To be continued...

Friday, January 20, 2006

Dissecting the discourse

As a follow-up to yesterday's stormy discussions on the Processing discourse, I have to say I've been quite perplexed by the impact of my last post and the intensity of the discussion it started. It's true I got things mushed up and got carried away with some of my subjective views, yet I think Processing can't be talked about merely as a tool. We all know that, and I take the resulting intensity as a sign that we all feel Processing has become a very important part of our (at least) creative lives...

The two main points of criticism against the issues I raised were:

"I also think it encouraged a slightly superficial view of computational design by quickly gaining cult status amongst people who had never been exposed to programming before. I think it's dangerous and a sign of crisis if every recycled L-System, Neural Network, Wolfram automata or webcam tracking experiment is automatically considered art (by their authors), simply because it's been "(Re)Built with Processing"..."


For some reason (sorry, I can't quite isolate it) this has been interpreted as me preaching elitism or belittling beginners, which is absolutely not the case. As I mentioned previously, my entire knowledge of programming is based on years of playful (often seemingly meaningless) experimentation. The above statement was more concerned with some of the emerging ego issues amongst the userbase. I think I also finally realised (not for the first time) that my concept of an artist is strangely incompatible with that of most others, and my remarks were caused by this.

FWIW I consider an artist to be a seeker of knowledge and understanding, a master of craft, and of the resulting insights into the very nature of what we call the universe. This is a process which doesn't require, nor has a place for, bullshit. Life's too short for such things and I can't grasp why people do it instead of trying harder and raising their own standards a little. My mummy told me: be quiet if you don't have something useful to say.

Maybe I should have taken this partial advice yesterday, yet I think the gist of my argument is somewhat valid, but should have been expressed differently. Don't be hurt!

"In terms of pure expressiveness of ideas, concepts and thought processes as code, Processing is inferior to straight Java or dynamically typed languages like JavaScript or Ruby."


Now this really was a tricky one and definitely should have been expounded a bit more deeply by me. What constitutes Java is not just its overblown and almost incomprehensible standard library; it is also just a language with very nice features. Try to consider them on their own; the statement above was referring purely to that. The same goes for the other two languages mentioned. I know exactly where most of you were coming from with your criticism of this statement: getting simple things done in straight Java can be cumbersome. On the other hand, the language has a handful of simple mechanisms which allow us to construct increasingly complex systems of code without having to constantly reinvent the wheel. My question was: why does "Processing" not make use of those handy features? And here it's not really the tool I am talking about, but the way those things are totally excluded from the Processing reference. If people (beginners) are never made aware that these things exist, how can they learn about them (Google doesn't count here!)? I also think the teachers amongst us should maybe give students a little more credit in grasping OOP concepts?
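To make the "handy features" claim a bit more concrete, here's one such mechanism: programming against an interface, so that existing code keeps working for every new type added later. A hypothetical sketch only; Drawable, Circle, Box and Renderer are invented names, not part of any Processing or Java API:

```java
import java.util.ArrayList;
import java.util.List;

// An interface lets old code handle new types without modification.
interface Drawable {
    String draw();
}

class Circle implements Drawable {
    public String draw() { return "circle"; }
}

class Box implements Drawable {
    public String draw() { return "box"; }
}

public class Renderer {
    // This loop never changes, no matter how many Drawable types are added.
    static List<String> render(List<Drawable> shapes) {
        List<String> out = new ArrayList<>();
        for (Drawable d : shapes) {
            out.add(d.draw());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Drawable> shapes = new ArrayList<>();
        shapes.add(new Circle());
        shapes.add(new Box());
        System.out.println(render(shapes));
    }
}
```

Adding a Triangle tomorrow means writing one tiny new class; Renderer is reused untouched. That is the kind of scalability I meant.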

Another weird question, but it would be ++interesting to know: could it maybe have to do with the fact that most of you (teachers) have never been taught OOP/design patterns yourselves?

A while ago I decided for myself that everything (code related) I do should be specialized and yet generic enough to be reusable in future projects, but alas I cannot do that fully with Processing in its current form.

Re: JavaScript and other dynamic languages. I think Ben's argument that he can't produce his work with JS is a slight paradox. If he had chosen JS as the base language for his project (with an underlying layer in C, e.g. by using SpiderMonkey), he might by now be able to do so. IMHO it's a little bit misconstrued, but I won't harp on it any further since, as he rightfully pointed out, there are much more important things to be solved at the moment.

The reason I mentioned JS (and above all Ruby) as alternatives was that those dynamically typed languages allow for very powerful coding idioms which one can only dream of in Java. But of course these again are very advanced topics and as such are only interesting to a part of our community. This was one of the mushed points of my last post.

Keeping on the topic of object orientation just a bit longer, I find it very interesting that Casey mentioned MAX, PD and VVVV as tools almost purely targeted at the artist community. None of those tools deal with text-based programming, yet all are deeply rooted in an object oriented approach. In fact their entire essence is based on a huge number of tiny encapsulated objects, each fulfilling a very specialized task and loosely coupled. In order to create a "sketch" people choose them like building blocks and connect them via "wires" to create a directed process graph/execution flow.
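That wiring metaphor can be mimicked in a few lines of Java. A toy version only, and Node, Add, Scale and Patch are made-up names, nothing to do with the actual MAX/PD/VVVV internals:

```java
import java.util.ArrayList;
import java.util.List;

// Tiny encapsulated objects, each doing one specialized task.
interface Node {
    double process(double in);
}

class Add implements Node {
    private final double amount;
    Add(double amount) { this.amount = amount; }
    public double process(double in) { return in + amount; }
}

class Scale implements Node {
    private final double factor;
    Scale(double factor) { this.factor = factor; }
    public double process(double in) { return in * factor; }
}

// The "wires": a Patch holds loosely coupled nodes and runs a
// signal through them in connection order.
public class Patch {
    private final List<Node> nodes = new ArrayList<>();

    Patch connect(Node n) {
        nodes.add(n);
        return this;            // allows chained wiring
    }

    double run(double signal) {
        for (Node n : nodes) {
            signal = n.process(signal);
        }
        return signal;
    }
}
```

Wiring up `new Patch().connect(new Add(1)).connect(new Scale(10))` and feeding it a signal of 2 yields (2 + 1) * 10 = 30. Each node stays reusable in any other patch, which is exactly the property those visual tools get for free.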

So I think some take-away points from that are:
  • Objects are not alien concepts to programming newbies - I think it's all down to teaching styles. The visual programming metaphor obviously helps beginners on those platforms more, since with the textual approach of Processing the connecting wires are initially invisible and have to be mapped out and "stored" in the student's mind.

  • Secondly, the highly modular architecture of those systems has contributed to their incredible success (IIRC, MAX has been on the market for 15+ years). Also, because of their fine-grained modules (which have to be written in C++, lo and behold...), beginners can start out writing just tiny atoms of code instead of thinking on a library scale. Because those tools by their nature enforce all objects/patches to be encapsulated, every effort spent on writing those extra bits of code is well spent, in the knowledge that it can be easily reused, potentially in infinite ways. This is not the case with Processing.


To back up some of the above, here are some search results for the word "object" on the various community websites:

PureData: ~35,200 results
VVVV: ~2,400 results
Processing: 442 results

I couldn't find a decent start site for MAX, but there's maxobjects.com, which hosts roughly 2,900 MAX/MSP objects of various complexity.

Do you get my drift?!

Thursday, January 19, 2006

</Procrastination>

Note: This article uses "computational art/design" as a compound term that also includes "generative" approaches.

Code lies at the very heart of computational design, a discipline becoming increasingly popular, as shown by the mushrooming number of blogs, books, conferences and workshops - all using "code" as their core concept and pivotal sales hook. Yet there's apparently little intelligent discussion taking place about Code (with a capital C) which goes beyond the art theoretical/cultural mindset and touches more on its raw (dare I say "technical") side: discussion about its manifold structures, its expressiveness, its metaphors in generative systems...

Code is language. It can be used and articulated in infinite ways. Poets use language in different, often far more subtle and sophisticated ways compared to our average modes of conversation. Their mastery and/or unique approach to language is what makes them artists, even before theorists can utter the words "political" or "historical" as their main point of interest. I'd much rather subscribe to something like Tolstoy's naturalistic description of art.

So personally and especially in regards to computational art, I find myself repeatedly standing in direct conflict with the often voiced opinion that literacy in the digital medium is unobtainable or even undesirable.

Processing...

...is a real phenomenon. Heralded as the new "it" tool for computational artists, it actually doesn't directly embrace or promote any state-of-the-art software designs (i.e. code structures). It's true, Processing has been primarily developed as a teaching tool and has always had a beginner target audience in mind, yet I've been thinking for quite some time that it merely delays the learning curve and lures in an increasing number of users (or shall I say "aspiring generative artists"?) with its easy-to-learn (and teach!) syntax, to get quick (mainly visual) bang-for-the-buck. There's no arguing about its potential as a digital sketching tool and its suitability for short workshops.

Being focused on small code sketches/experiments and used by various respected artists, the tool created a huge amount of interest fairly quickly. In retrospect (well, for me after almost 3 years) I also think it encouraged a slightly superficial view of computational design by quickly gaining cult status amongst people who had never been exposed to programming before. I think it's dangerous and a sign of crisis if every recycled L-System, Neural Network, Wolfram automata or webcam tracking experiment is automatically considered art (by their authors), simply because it's been "(Re)Built with Processing"... Of course this is in no way an attack on the tool or its intent, but it is my growing issue with the surrounding community ethos. We have blogs writing about how data equals nature and math is the language of nature, yet there doesn't seem to be any deep understanding of the importance of clean code designs and intelligent data structures, or even any community interest in further researching and experimenting with those artistically.

boolean isWrong = (isExperimental != hasGoodDesign);

In fact, from conversations with various fellow Processing users and lecturers, I gather most are not aware of the total absence of decent software design in the majority of the work produced with the tool so far. Due to the simplicity of its syntax, authoring environment and reference examples, the implicitly encouraged coding style sits somewhere between procedural C programming (minus the pointer mess) and barely scratching the surface of object oriented design.

In terms of pure expressiveness of ideas, concepts and thought processes as code, Processing is inferior to straight Java or dynamically typed languages like JavaScript or Ruby. Its ease of use has been gained by sacrificing scalability. Processing is built on convenience methods, and it shows. The Processing community at large has started to grow into one of consumers of previously written code.

On the other hand I believe artists (aspiring or not) working with "computational strategies" (can we please quit the marketing speak?) must, or at least should, be aware of and work on intelligent software designs in order to advance the(ir) discipline. Form follows function.

In response to that, I also believe it might hurt Processing as a platform in future if experienced users find themselves forced to break out and leave the tool behind. To pre-empt this, I think the community at large should pay more attention and spend time on extending the current library base. Above all, library authors should respect the tremendous amount of work put in by Ben+Casey so far and embrace the open source mentality of their core tool too. The licenses are many (as well as much misunderstood - choose wisely!). That way existing library functionality can be further extended without having to reinvent the wheel (yet again!)...

An extensive library base for Processing will help the tool's longevity, even if users slowly outgrow the initial proposal of the tool and only continue to use the library itself.

Open source is for doers. Happy, belated 2006! Glad I'm still alive...