Tuesday, July 31, 2007

Using JavaDB and db4o in Processing

Kind of as a reply to Tom's mini-howto for using SQLite with Processing, but also since I've been dabbling with it myself recently, here's an alternative take on using an embedded database from within Processing (or more generally, Java)...

Mainly due to Java's strong focus on server-side development, over the past few years there have been several large-scale community efforts to create Java-native database engines which don't rely on underlying C code, and are highly performant and portable: the essence of the Java way. The other benefit is that such DB engines can be embedded and distributed with your application without requiring any further installation. One such development effort is Apache Derby, a project which started in 1996, swapped owners several times (amongst them IBM) and then became an incubator project at Apache in 2004. Sun also joined the project and has been bundling it as a library (under the name JavaDB) as part of the JDK (v6+) since December 2006. In other words, if you have Java 6 installed you should also have Derby. But even if you don't (for example Mac users), you can download Derby from here and unzip it to any folder on your hard drive.

The following deals with setting up Processing to work with Derby:
  1. Create a new folder structure /derby/library within Processing's /libraries folder
  2. Copy the file derby.jar from Derby's /lib folder into the newly created folder

Before we can start querying we first need to create a new database. Derby comes with its own commandline client "ij" which is located in the /bin directory of the main Derby install dir (if you're going to use this tool more often it makes sense to add this /bin directory to your system path).

Launch ij from the commandline and then create a new database with this command:
ij> connect 'jdbc:derby:/path/to/database;create=true';

Databases are just folders and can be stored anywhere. For example on Windows the path /dev/derby/mydb would refer to C:\dev\derby\mydb...

Next create a simple table in the new database and add some data (note the closing parenthesis and semicolon terminating the create table statement):
ij> create table cities (
      cityID integer not null primary key,
      name varchar(32) not null
    );
ij> insert into cities values(1,'london');
ij> insert into cities values(2,'berlin');
ij> insert into cities values(3,'san francisco');

Given that all worked fine so far, we can finally move on to a small Processing demo to query our exciting dataset:
import java.sql.*;

String driver = "org.apache.derby.jdbc.EmbeddedDriver";
String connectID = "jdbc:derby:/derby/testdb";
Connection conn;

void setup() {
  try {
    // load & register the embedded Derby JDBC driver
    Class.forName(driver).newInstance();
    query();
  }
  catch(java.lang.ClassNotFoundException e) {
    println("Derby driver not found, check your classpath: "+e.getMessage());
  }
  catch(Exception e) {
    println(e.getMessage());
  }
}

void query() throws SQLException {
  try {
    conn = DriverManager.getConnection(connectID);
    Statement st = conn.createStatement();
    ResultSet results = st.executeQuery("SELECT * FROM cities");
    while (results.next()) {
      println("City: "+results.getString("name"));
    }
  }
  finally {
    // always make sure we close the connection
    if (conn != null) conn.close();
  }
}

Now, this is obviously an absolute bare-bones demo, however Florian Jenett wrote a little database library last year to hide all these excessive try/catch clauses. Unfortunately he's also hardcoded the database connection string to only work with MySQL, and there's no source supplied with the library, so one can't modify it to easily add support for other JDBC drivers... (nudge! :)

Speaking of databases, here's some more food for thought. SQLite, MySQL, Apache Derby et al are all based on the relational database model. Java, on the other hand, is object oriented, and as you can see it takes a relatively large effort to exchange data between both worlds without any further help. To ease these tasks there're various powerful object-relational mappers available, acting as translators between the world of objects and that of SQL. Last but not least, native object-oriented databases are yet another alternative approach here, one which is far more aligned with the language. db4o is such an object database and allows you to store and query complex object hierarchies in a most natural (in a Java context) way. For example, with db4o you can store and restore the entire state of an application - with just a single line of code. This in turn not just saves you a lot of time and headaches, but also enables building more complex, data-intensive applications. If you're interested, the db4o site has a very easy-to-follow tutorial...

Sunday, July 29, 2007

String based designs

As we delve deeper into the realms of applied generative design and deal with a whole population of possible design outcomes, we often find ourselves preferring certain outcomes over others and wanting to narrow down our explorations. So the identity of each such design plays an important role. Identity in this context can be defined by the set of input parameters used, but we also need to ensure the processing of these parameters is deterministic, meaning that even though we often use (pseudo)randomness as part of the algorithm, the outcome should be replicable for each set of parameters.

Most (if not all) pseudo-random generators use the concept of a random seed which subsequently produces a unique (and deterministic) sequence of "random" numbers. In Processing you can use both randomSeed() and noiseSeed() to achieve this. Now while using numbers is all fine, and technically speaking, all digital media is just numbers - there're use cases where it'd be nicer to use e.g. text as seed directly. For example, the 20,000 designs of the Lovebytes fluffies are all based on their generated character name only. There're about 10 other parameters, but these too are chosen based on the random sequence seeded by the name.
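To illustrate the mechanics in plain Java (Processing's randomSeed() does the equivalent internally), here's a minimal sketch showing that two generators created with the same seed emit the identical "random" sequence - which is exactly what makes a design replicable from its parameters (the seed value 12345 is just an arbitrary example):

```java
import java.util.Random;

public class SeedDemo {
  public static void main(String[] args) {
    // same seed -> same sequence, hence replicable design outcomes
    Random a = new Random(12345L);
    Random b = new Random(12345L);
    boolean identical = true;
    for (int i = 0; i < 10; i++) {
      identical &= (a.nextInt(100) == b.nextInt(100));
    }
    System.out.println("sequences identical: " + identical); // prints "true"
  }
}
```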

One way of turning a string into a number is by using message digests, like the popular MD5 or SHA1 algorithms. A message digest takes any number of bytes as input and calculates a fixed-length hash: MD5 results in a 128 bit number, SHA1 in a 160 bit one. This is more data than we can cope with, since most common random number generators only accept up to 64 bits as seed. In Java/Processing this is equivalent to the long type.
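A quick way to verify those digest sizes yourself, using only java.security from the standard library (the input string "hello" is just an arbitrary example):

```java
import java.security.MessageDigest;

public class DigestSizes {
  public static void main(String[] args) throws Exception {
    byte[] md5 = MessageDigest.getInstance("MD5").digest("hello".getBytes("UTF-8"));
    byte[] sha1 = MessageDigest.getInstance("SHA-1").digest("hello".getBytes("UTF-8"));
    // 16 bytes = 128 bits, 20 bytes = 160 bits
    System.out.println("MD5: " + md5.length + " bytes, SHA1: " + sha1.length + " bytes");
  }
}
```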

The following function takes a string as input, computes the hash and then returns the first 8 bytes as a long integer to be used as random seed. Because it doesn't use the full hash, it's theoretically possible to end up with the same result for different inputs. However, I've not yet come across a collision with the relatively short strings (names, sentences, phrases) used in my work.


import java.security.MessageDigest;

/**
 * Calculates the message digest of the given string and
 * returns the first 8 bytes packed into a long
 * @param msg string to form hash from
 * @param digest message digest ID (e.g. "MD5" or "SHA1")
 * @return zero if failed, else partial digest as type long
 */
long getLongHash(String msg, String digest) {
  long result = 0;
  try {
    MessageDigest md = MessageDigest.getInstance(digest);
    byte[] buffer = md.digest(msg.getBytes());
    for(int i = 0, bits = 56; i < 8; i++, bits -= 8) {
      // treat each byte as unsigned and shift it into place,
      // most significant byte first
      long val = (buffer[i] < 0 ? 0x100 + buffer[i] : buffer[i]);
      result += val << bits;
    }
  }
  catch(Exception e) {
    println("couldn't compute hash: "+e.getMessage());
  }
  return result;
}

Use it like this:

long seed=getLongHash("Hello world!","MD5"); // "SHA1" as alternative

Btw. the default Java random generator is not guaranteed to produce a deterministic sequence across all platforms. This means as long as you're using Processing's default random() or noise() functions you're only guaranteed the same sequence as long as you stay on either Windows or OSX or Linux. Last year Marius wrote a Processing wrapper for the famous Mersenne Twister generator, however this one can only be used as an alternative and in isolation: Processing's noise() function is hardcoded to use the default Java generator...

Wednesday, July 25, 2007

Specifying PDF page size in Processing

I've always wondered if there's an easier way than trial & error to figure out the correct dimensions to specify in the size() command to match a certain paper size for the generated PDF. Today I finally had a quick look at the source of the underlying iText library, on which Processing's PDF wrapper is built. As expected, it actually includes a convenience class with presets for all common page formats. This is great, but to make it even slightly more user-friendly you can take this snippet and keep it:
/**
 * Convenience method to be used instead of the normal size() command.
 * Creates a window matched to a given paper size
 * @param r a predefined constant of the iText PageSize class
 * @param isLandscape true, if you want landscape orientation
 * @param renderer class name of the Processing renderer to use
 */
void pageSize(com.lowagie.text.Rectangle r, boolean isLandscape, String renderer) {
  // note: newer iText versions renamed width()/height()
  // to getWidth()/getHeight()
  if (isLandscape) {
    size((int)r.height(), (int)r.width(), renderer);
  } else {
    size((int)r.width(), (int)r.height(), renderer);
  }
}

And here're a couple of examples of how this snippet would be used (make sure you import the PDF library):

void setup() {
  // create window @ A3 landscape using OPENGL
  pageSize(com.lowagie.text.PageSize.A3, true, OPENGL);
}

void setup() {
  // create window @ US Letter size, portrait using default renderer
  pageSize(com.lowagie.text.PageSize.LETTER, false, JAVA2D);
}

Tuesday, July 24, 2007

Digital portfolio library for London College of Fashion

Better late than never, here's finally some brief documentation of Moving Brands' latest interactive installation, after Muon our second one this year...

The brief and objectives set by the London College of Fashion were similar to last year's, only with the added challenge of having to show work by twice as many students (18 courses), almost 500 in total with 10+ images each. Our solution was a digital approach, housed within a 'library' setting where visitors could browse students' work in two stages: first, scanning and selecting students of interest, followed by detailed browsing of their work. To reflect this, the library also consisted of two complementary parts, one "analog" and one "digital". Each student is represented by a postcard-sized tag located on the installation walls. On the back of each card is a unique pair of identifier tags, so when the card is placed on one of the four interactive tables the student's work is revealed. The cards also acted as student "business cards" (with their contact details) and could be taken away by visitors, who were often recruiters.

After lots of deliberation the software side of the installation was developed using the fabulous reacTIVision & Processing. The former is used to analyse and identify the printed markers; however, the major problem we had to overcome was that the tool only comes with a set of 90 markers, whereas we needed close to 500. After initially considering defining custom markers, I then opted for combining 2 markers into pairs (a hierarchical index) and so ended up with a maximum of 45*45 = 2025 possible identities (due to the special cases caused by the fixed usage scenario of the tables we could theoretically reach a higher number). The next challenge then was to identify pairs from a given set of visible markers (easy) and robustly track these as a single entity over time (not as easy), especially since the software was built to support a multi-user scenario. Delaying and aggregating the events broadcast by reacTIVision to Processing made all the difference and helped tackle the temporary breaking up of groups due to sub-optimal light conditions and/or user actions.

So as soon as a group of markers is identified, a background thread is started to load in the various portfolio images of the related student. This multi-threaded approach is important so that any ongoing animation isn't suddenly interrupted. However since the images loaded are quite large the loading process can sometimes take several seconds and the preloader becomes quite obvious when browsing several students in a row. Caching to the rescue! After some googling I found Whirlycache, a very easy to use Java object cache with multiple purging policies. It only took me 3 lines to integrate into my application and since the installation machines had 2GB of RAM, many images could be kept in memory and the preloader disappeared for most users. Yeah!
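In case you'd rather avoid another library dependency: the core idea behind such an in-memory object cache can be sketched in a few lines of plain Java using LinkedHashMap's access-order mode. This is just a minimal illustration of the caching principle (a size-bounded LRU cache), not Whirlycache's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal size-bounded LRU cache: LinkedHashMap in access-order
// mode evicts the least-recently-used entry once full.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  public LruCache(int maxEntries) {
    super(16, 0.75f, true); // true = iterate in access order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries;
  }

  public static void main(String[] args) {
    LruCache<String, String> cache = new LruCache<String, String>(2);
    cache.put("a", "1");
    cache.put("b", "2");
    cache.get("a");      // touch "a", so "b" becomes the eldest entry
    cache.put("c", "3"); // exceeds capacity, "b" gets evicted
    System.out.println(cache.containsKey("b")); // prints "false"
  }
}
```

For image caching you'd store PImage objects under the student ID, letting rarely viewed portfolios drop out of memory automatically.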

Finally, as with last year's table we tried to increase the tactile feeling of the interaction and where we've used a visual response (soft surface simulation) in the past, this year we've opted for sound to make the experience seem more physical. We developed a sound palette for moments when cards are placed on the table, cards are rotated and removed - all with a noticeable positive difference.

Some more images of the installation are on Flickr...

Sunday, July 22, 2007

Sustainability and generative design

Mitchell asked in a comment to the Technology is knowledge is power post:
How does this all reflect on your own practice as an artist/designer? What does *sustainable* generative design look like? Open source is perhaps one answer...
It's hard to answer how this reflects on me personally, since I'm still only coming to grips with it slowly... It's not that most of this was truly new to me, but I've only recently been doing lots of reading & thinking about these things, and so haven't managed to form a full opinion yet. I think all the real answers to the sustainability question are hidden below the usual layers of conversation and discussion, which is why I quoted Illich in the post. Like a good visualization piece, he manages to give you a view of the data from a totally different angle. Generally I agree with many of his points about the formal western school system ultimately fulfilling a more profound function in our society: encouraging (and persisting) a class system based on certification, (over)production, consumption and compliance. This is especially true when "exporting" this system to the Third World. (Btw. seeing school's role more in terms of teaching compliance to established norms vs. current reality also moves ADD into a different light. Stowe Boyd has more about it.)

As long as consumption remains the main engine of our society it's hard to seriously address sustainability. The problem is further complicated by the fact that much of the current sustainability discussion is about the purely environmental aspects of the concept, whereas these issues are just a part of the bigger picture. This is why we need to acquire a better global understanding of the complex, interlinked nature of the systems we live in: the systems we built ourselves only recently, and the ones which pre-date us but which we only now realize to what extent we have changed and shaped...

Every tool requires a certain level of literacy to be used creatively, and I think it's wrong to assume that we as societies at large have acquired these skills and mindsets to a level where they can be used creatively by as many members of society as possible. Governments and mainstream media, with their tendency to either ignore or create spin around these complex issues, are totally throwing spanners in the works. The term "simplicity" has already started being subverted by marketing, which is also why I'm very wary of it being preached as the main answer when the questions needing to be tackled are some of the biggest and most complex we've ever faced. Simplicity is relative - reaching for better levels of literacy in systems might be more fruitful than risking important concepts being unwittingly dumbed down and diluted.

Not using this as an excuse, but being partly a product of my environment, I can't say that in the past I actively cared that much about sustainability myself (above and beyond recycling, public transport, energy saving etc.). Yet I've always believed in learning by doing, and I quit college for the same reason. Most of the valuable things I've learned, and the ones I'm most proud of, are the result of self-initiated projects and sleep deprivation fed by a genuine hunger for trying to uncover hidden layers & systems in nature. Probably like many others in this field (generative design) I've always had more interest in the synthesis and simulation of (new?) concepts vs. sampling "cultural" symbols and trends. Yet I'm often also wondering whether this discipline hasn't positioned itself in a vacuum, if the knowledge we acquire by doing this isolated artistic research will never filter into something more important. ?!?!

Given that computers and software are our current state-of-the-art tools for problem solving, all in all I'd like to believe that a continued cultural rise and awareness of open source, hacking, informal learning, workshops, blogging, tool making, digital fabrication, generative design can be and already is all part of the bigger solution:
  • Code literacy requires good skills in the abstraction and decomposition of ideas and acknowledges the process nature and connectivity of systems
  • A designer's appreciation and sensitivity of form and aesthetics informs adaptable software architectures required for building modular and agile tools.
  • Open Source tools act as platform builders (technically and socially), distribute development costs and reduce the entry threshold by enabling anyone with an interest and access to hardware to become part of ongoing projects and communities.
  • Hardware initiatives (e.g. OLPC, Arduino) and communal digital fabrication centres allow for grassroots education, experimentation and production of tools for fulfilling local/individual needs not catered for by corporations.

Wednesday, July 18, 2007

Adobe says: The picture is now complete.

This is how Adobe's newsletter from this morning starts. And indeed, the picture is complete now, just see for yourself:


That is the price US designers have to splash out to obtain the new CS3 Master Collection. Since Adobe announced the first CS3 products earlier this year it became clear that creatives outside the homeland would be facing an increasingly unfair pricing policy. Even though there were various petitions, with over 10,000 signatories, to reject the proposed pricing structure or at least provide a sound explanation for the drastic increases, Adobe hasn't changed much and has only provided mushy arguments as reasons. The prices below are correct as of today (July 18, 2007)...

country                    local price    price in USD    increase
Germany / France / Spain   2,999 EUR      $4,142          66%
UK                         1,969 UKP      $4,043          62%
Australia                  4,455 AUD      $3,913          57%
Switzerland                4,227 CHF      $3,529          41%

Some explanations offered by Adobe include:
  • It costs Adobe 5 times more to manufacture and manage inventory in Europe because:
    • We must maintain different sku's for each language version to support different labeling requirements, support information, and sales requirements.
    • We maintain smaller quantities per language, in keeping with market sizes, which increases costs for printing, inventory management, and inventory disposal.
  • The costs associated with our value-added reseller channels are 25% higher.
  • We maintain 2.5 times as many field marketing employees in Europe as in North America to support our creative business at a certain level of quality across local markets. However, the revenue per employee is smaller, so the overall costs per unit of revenue is 4:1 in Europe compared to North America
  • Variable marketing expenses are 46% higher
  • Development costs are approximately $2.5-$3 million per language for each of the 14 languages Adobe Creative Suite supports.
However shocking these figures are, I guess some of them are somewhat valid points. But they also lead to further questions. For example, how does a 57% higher price in Australia fit in? Why is the Swiss version (German & French) 25% cheaper than in Germany or France? And using the reasoning given, why does it seem like non-American but English-speaking customers are subsidizing the higher cost of other localized versions, whereas US customers don't (after all, the US version is also offered in French and Spanish - without any price increase!)...?

This all wouldn't be as bad or as important if it weren't for the fact that Adobe pretty much "owns" the creative industry of the world. The industrialized world, that is. There're major territories where the pricing of such tools is prohibitive and does nothing but encourage piracy. Also, by superimposing the conceptual framework, metaphors and features of their products onto the creative process, Adobe effectively shapes the ideas and defines the benchmark & quasi-status-quo of what is (supposedly) possible and can be realized by a mass market of designers. Of course every tool has this effect, yet no other player in this market has as much impact on the resulting outcomes as Adobe.

There's nothing wrong with Adobe's tools per se: they're very powerful and generally well thought out. For me the problem is that for years, a lot of designers have been conditioned (by using mainly the same tool(s) on a daily basis) to unconsciously restrict their thinking and creative output to the style choices invisibly encouraged by the metaphors and features of Flash, Illustrator & friends. New features added to these packages are quickly turned into the latest design fad by the starved minds of designer-consumers whose attention span shortens and desire for ever new features leads to ever faster release cycles.

I dare to say, along with Mac PowerBooks, Adobe products are the chosen drugs for the vast majority of creative professionals today.

Proprietary software has (at least) 2 negative aspects:
  • pricing policies and (group) licensing costs
  • lock-in effect (people building livelihoods on top of particular tools)
All this really makes me think how important an investment in a better open source tool chain is for the creative minds of the world (and especially for those Europeans). How much money could be saved and re-channelled into the actual creative process and design research if there were a set of free tools able to compete qualitatively with current proprietary software? Community owned and focused, the development costs of new features (localization and documentation too) can be (and are) distributed to a much wider group of people, and as a result products wouldn't become as bloated & suffer the level of feature creep Photoshop (for example) has suffered in recent years... Various successful open source projects for the creative worker have been in existence for quite some time, however in order to realistically compete they need more global interest from both users and developers willing to give their time to the improvement of these tools. To most individuals, the cost of these tasks (ranging from advertising, user feedback, education, bug reports and UI design to development and testing) will be negligible compared to the prices quoted above.

There're numerous successful examples showing the Open Source concept works well in practice in the creative market too, most notably Blender, InkScape, Gimp - and Processing - all of which have very active communities involved at various levels of support & development. Of course, proprietary products have very large and active communities too, however these are restricted to feedback & discussion and otherwise have the privilege of being passive consumers. Very creative indeed! I still vividly remember the regular furore caused by new product improvement announcements (or rather the lack of such) on various Macromedia Director mailing lists... This mentality simply doesn't exist in the Open Source world.

One of my colleagues found this great quote by Jeff Bezos, CEO of Amazon:
"Most people, unleashed, are innovators. We're this great species of tool-using animal who likes to make our world better. The companies that can unleash that particular animal instinct are the ones that will thrive."

I fully agree with this, and I find it plain weird that the so-called "creative class" is full of consumers who like to swallow those bitter pills and at the same time like to call those people who actually are truly innovative and creative, geeks.

If you want to see for yourself what's possible with current Open Source solutions today, check out Ubuntu Studio, the multi-media edition of the most user-friendly Linux distribution to date.

Sunday, July 15, 2007

Technology is knowledge is power

Regine has posted a report about Usman Haque's I hate technology talk at We love technology up in Huddersfield last week.

Usman argues that the meaning of the word "technology" has changed; he refers to an older era (presumably pre-industrial) when the word used to imply "knowledge" and the study of making things. These days, in our consumerist society, we on the other hand tend to think of (and mainly also encounter) technology only as products, which I think is really not surprising since "technology" itself has become a commodity. But it's not only that: we equally deal with "knowledge" as a product. This too comes complete with a hefty price tag, namely the cost of the education required, and this is where, for me at least, the story slowly unfolds...

In the past year I've read 2 books which have pretty much completely transformed my views and understanding of the role of being a designer in a technocracy like ours:

First there was Bruce Sterling's pamphlet Shaping Things, about a future class of products and, more generally, of objects (Spimes), and the societal changes their advent will equally require and trigger. While it's an absolutely fascinating and mindblowing 100 pages, it is about scenarios still 10-30 years ahead of us.

John Thackara's "In the Bubble: Designing in a Complex World", on the other hand, deals in the Big Here and Long Now, quite literally. This is the first book about the design discipline in general which sent shivers down my spine every couple of pages. The chapters deal with these topics: Lightness, Speed, Mobility, Locality, Situation, Conviviality (more about this below), Learning, Literacy, Smartness and Flow. A potent combination of topics.
Even though I've re-read the book twice since the beginning of the year, I'm still having a hard time summarizing the immense amount of insight, the examples given, the quotes, statistics, gems of wisdom and the important questions asked, for example:
"...addressing the question "Where do we want to be?" brings us up against an innovation dilemma. We've built a technology-focused society that is remarkable on means, but hazy about ends. It's no longer clear to which question all this stuff - tech -is an answer, or what value it adds to our lives. Too many people I meet assume that being innovative means "adding technology to it". Technology has become a powerful, self-replicating system that is accustomed to respect and receives the lion's share of research funding. In NASDAQ, tech even has its own stock exchange." (p.2)

Some of the material presented early on in the book can induce a sense of global doom and depression, however the sheer number of truly innovative examples is making the book a celebration and wake-up call to focus more on human centred approaches to design. And "human centred" not only from a perspective as Maeda is approaching it (Yes, more simplicity is needed in a complex world, but I'm equally dreading over-simplification of things which are inherently complex). Human centred design also means designing for sustainability, both social and environmental versions.

With regards to this post, he too has very interesting things to say about the state of education in our society. Throughout the book he's citing several powerful quotes by Ivan Illich, described by The Guardian as:
" of the world's great thinkers, a polymath whose output covered vast terrains. He worked in 10 languages; [...] Best known for his polemical writings against western institutions from the 1970s, which were easily caricatured by the right and were, equally, disdained by the left for their attacks on the welfare state, in the last 20 years of his life he became an officially forgotten, troublesome figure (like Noam Chomsky today in mainstream America)."

I found several of his texts online and they indeed are rocking the boat of many of our societal institutions. In the foreword to his book "Celebration of Awareness" he writes:
"Each chapter of this volume records an effort of mine to question the nature of some certainty. Each therefore deals with deception - the deception embodied in one of our institutions. Institutions create certainties, and taken seriously, certainties deaden the heart and shackle the imagination. It is always my hope that my statements, angry or passionate, artful or innocent, will also provide a smile, and thus a new freedom - even though the freedom come at a cost."

Here're a few more of my favourite quotes of these texts which might be helpful in explaining why the meaning of the word "technology" has changed, more or less directly caused by how we've been approaching education through institutionalizing "schooling" for the past 150 years.
"The modern university confers the privilege of dissent on those who have been tested and classified as potential money-makers or power-holders. No one is given tax funds for the leisure in which to educate himself or the right to educate others unless at the same time he can also be certified for achievement. Schools select for each successive level those who have, at earlier stages in the game, proved themselves good risks for the established order. Having a monopoly on both the resources for learning and the investiture of social roles, the university coopts the discoverer and the potential dissenter. A degree always leaves its indelible price tag on the curriculum of its consumer. Certified college graduates fit only into a world which puts a price tag on their heads, thereby giving them the power to define the level of expectations in their society. In each country the amount of consumption by the college graduate sets the standard for all others; if they would be civilized people on or off the job, they will aspire to the style of life of college graduates." — Ivan Illich - Deschooling Society
"In a consumer society there are inevitably two kinds of slaves: the prisoners of addiction and the prisoners of envy. [...] Man must choose whether to be rich in things or in the freedom to use them." — Ivan Illich, Tools for Conviviality

Reading the above about the lack of interest in funding education outside institutions, I immediately had to think of Neil Gershenfeld's TED talk and his problem of sourcing major funding for his global "Fab labs" - simply because the currently existing institutions are too rigid, specialized and mutually exclusive to deal with such new educational and social development efforts. So even though Illich had his heyday in the 70s, not much seems to have changed since...

Finally, to close the arc of this rather long post, another great quote by another great thinker, Guy Debord, in The Society of the Spectacle, thesis n°6:
"Understood in its totality, the spectacle is both the result and the goal of the dominant mode of production. It is not a mere decoration added to the real world. It is the very heart of this real society's unreality. In all of its particular manifestations - news, propaganda, advertising, entertainment - the spectacle represents the dominant model of life. It is the omnipresent affirmation of the choices that have already been made in the sphere of production and in the consumption implied by that production. In both form and content the spectacle serves as a total justification of the conditions and goals of the existing system. The spectacle also represents the constant presence of this justification since it monopolizes the majority of the time spent outside the production process."