toxi.in.process

Wednesday, April 26, 2006

JMF based video capture (teaser post)

I've got an imminent deadline for a prototype of my 1st serious camera tracking project and I just can't seem to get QT4Java to play nicely (make that "consistently") on my dev machine. Sometimes it works. More often than not it does not...

Yesterday I wasted almost a full day with (un/re)installing various versions of QT and WinVDIG, reading and hacking - then decided to throw in the towel and try my luck with the Java Media Framework to get a camera feed into my app...

Another 12 hours later, after wading through Sun's JMF docs and various failed experiments, writing and merging little demos scavenged from across the web, I've got it working: Me waving back at myself in B&W on screen as JMF captured (and processable) video with Processing.

Disclaimers (and hence the "teaser" subtitle):

  • So far only working under Eclipse

  • JMF only exists for Windows and Linux (no OSX, sorry Robert!)

I've done some preliminary speed tests with a 640x480 window size and my Philips ToUCam Pro capturing at 320x240 @ 15fps:

  • P2D (default renderer): 44fps average

  • P3D: 52fps average

(both fps counts averaged over 1 minute) - I'm not sure if/how much this is faster than the default QT4Java solution...

Next steps are obviously to wrap this up in a library, but this probably won't happen until the end of May since I want to do it properly. I might publish the existing code earlier, but it's currently using a Java interface mechanism for callbacks and so will only work outside the P5 IDE (e.g. in Eclipse)... watch this space!

Tuesday, April 25, 2006

Colour code snippets

I needed to sort a given colour palette by certain criteria, for example by luminance, saturation or by proximity to another colour. This, for instance, comes in quite handy when trying to bias a random colour choice using a fixed palette (e.g. favour darker over brighter colours in the palette or pick more yellow shades than blue). I'm sure such a readymade util exists in some form, but sometimes writing stuff yourself is quicker and more worthwhile than googling for it. So DIY won yet again with the 3 results below:

/**
 * sorts a given colour palette by saturation
 * @param cols array of integers in standard packed (A)RGB format
 * @return sorted version of array with element at last index
 * containing the most saturated item of the palette
 */
int[] sortBySaturation(int[] cols) {
  int[] sorted = new int[cols.length];
  Hashtable ht = new Hashtable();
  for (int i = 0; i < cols.length; i++) {
    int r = (cols[i] >> 16) & 0xff;
    int g = (cols[i] >> 8) & 0xff;
    int b = cols[i] & 0xff;
    int maxComp = max(r, g, b);
    if (maxComp > 0) {
      // saturation = (max-min)/max, scaled up to the positive int range
      sorted[i] = (int) ((maxComp - min(r, g, b)) / (float) maxComp * 0x7fffffff);
    }
    else {
      sorted[i] = 0;
    }
    // remember which colour belongs to which sort key
    ht.put(new Integer(sorted[i]), new Integer(cols[i]));
  }
  sorted = sort(sorted);
  // map the sorted keys back to their original colours
  for (int i = 0; i < sorted.length; i++) {
    sorted[i] = ((Integer) ht.get(new Integer(sorted[i]))).intValue();
  }
  return sorted;
}

/**
 * sorts a given colour palette by luminance
 * @param cols array of integers in standard packed (A)RGB format
 * @return sorted version of array with element at last index
 * containing the "brightest" item of the palette
 */
int[] sortByLuminance(int[] cols) {
  int[] sorted = new int[cols.length];
  Hashtable ht = new Hashtable();
  for (int i = 0; i < cols.length; i++) {
    // luminance = 0.3*red + 0.59*green + 0.11*blue
    // same equation in fixed point math (77+151+28 = 256)...
    sorted[i] = 77 * (cols[i] >> 16 & 0xff) + 151 * (cols[i] >> 8 & 0xff) + 28 * (cols[i] & 0xff);
    ht.put(new Integer(sorted[i]), new Integer(cols[i]));
  }
  sorted = sort(sorted);
  // map the sorted keys back to their original colours
  for (int i = 0; i < sorted.length; i++) {
    sorted[i] = ((Integer) ht.get(new Integer(sorted[i]))).intValue();
  }
  return sorted;
}

/**
 * sorts a given colour palette by proximity to a colour
 * @param cols array of integers in standard packed (A)RGB format
 * @param basecol colour to which proximity of all palette items is calculated
 * @return sorted version of array with element at first index
 * containing the "closest" item of the palette
 */
int[] sortByProximity(int[] cols, int basecol) {
  int[] sorted = new int[cols.length];
  Hashtable ht = new Hashtable();
  int br = (basecol >> 16) & 0xff;
  int bg = (basecol >> 8) & 0xff;
  int bb = basecol & 0xff;
  for (int i = 0; i < cols.length; i++) {
    int r = (cols[i] >> 16) & 0xff;
    int g = (cols[i] >> 8) & 0xff;
    int b = cols[i] & 0xff;
    // squared distance in RGB space (no need for the actual square root)
    sorted[i] = (br - r) * (br - r) + (bg - g) * (bg - g) + (bb - b) * (bb - b);
    ht.put(new Integer(sorted[i]), new Integer(cols[i]));
  }
  sorted = sort(sorted);
  // map the sorted keys back to their original colours
  for (int i = 0; i < sorted.length; i++) {
    sorted[i] = ((Integer) ht.get(new Integer(sorted[i]))).intValue();
  }
  return sorted;
}
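To illustrate the biased random choice mentioned at the top, here's a minimal standalone sketch in plain Java (so java.util.Arrays.sort with a comparator stands in for Processing's sort() and the Hashtable trick, and the class name, palette values and pickDark() helper are all made up for this example). Sorting the palette by the same fixed point luminance as above and then squaring the random value clusters picks towards index 0, i.e. the darker entries:

```java
import java.util.Arrays;
import java.util.Random;

public class PaletteBias {

  // same fixed point luminance as sortByLuminance above:
  // 0.3*red + 0.59*green + 0.11*blue, scaled by 256
  static int luminance(int rgb) {
    return 77 * (rgb >> 16 & 0xff) + 151 * (rgb >> 8 & 0xff) + 28 * (rgb & 0xff);
  }

  // pick a palette entry, biased towards the dark end of a
  // luminance-sorted palette by squaring the random value
  static int pickDark(int[] palette, Random rnd) {
    Integer[] sorted = new Integer[palette.length];
    for (int i = 0; i < palette.length; i++) {
      sorted[i] = palette[i];
    }
    // darkest first, brightest last
    Arrays.sort(sorted, (a, b) -> luminance(a) - luminance(b));
    float r = rnd.nextFloat();
    // r*r stays in [0,1) but clusters towards 0, favouring dark entries
    return sorted[(int) (r * r * palette.length)];
  }

  public static void main(String[] args) {
    int[] palette = { 0x112233, 0xffeecc, 0x884422, 0x4488cc };
    Random rnd = new Random();
    for (int i = 0; i < 5; i++) {
      System.out.printf("#%06x%n", pickDark(palette, rnd));
    }
  }
}
```

Other bias curves work the same way: take the cube of r for an even stronger pull towards dark, or use 1-r*r to favour the bright end instead.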


Added bonus: Using a webcam and applying the 2nd function (sortByLuminance) to the contents of the current pixel buffer, you can instantly and possibly unintentionally create a close copy of this "amazing" piece of "infoviz" concept art... Sorted! :)

Also, I wasn't sure whether I should continue posting code snippets like this to this blog. A year ago I set up an account with Code Snippets which used to be more Ruby, JS and generally webdev oriented, but meanwhile has quite a big range of languages and subjects covered. Then of course there's also Processinghacks, but it didn't seem fitting for this either...

Friday, April 21, 2006

Quality data for visualizationists

Quite a few Processing users (incl. myself) are working professionally and/or experimentally with (data) visualizations. Interesting and good works in this field are not just down to the ingenuity of their authors but also largely dependent on quality data sources. Often those can be quite hard to come by, especially if you're reliant on free data sources. The creation and preparation of your own data can turn into a big stumbling block since you are suddenly confronted with major technical issues around retrieval, parsing, storage, transformation, putting bits of data into relationships etc. No wonder a lot of amateur experiments are based around readily available data such as that provided by Flickr, Technorati or del.icio.us.

As the blogging and Open Source movement has shown, innovation in any domain can and does happen bottom-up and amateurs play a major role in that. As Paul Graham writes:
There's a name for people who work for the love of it: amateurs. The word now has such bad connotations that we forget its etymology, though it's staring us in the face. "Amateur" was originally rather a complimentary word. But the thing to be in the twentieth century was professional, which amateurs, by definition, are not.

That's why the business world was so surprised by one lesson from open source: that people working for love often surpass those working for money.
Related anecdote about amateur hardship: When working on base26 two years ago, I spent about 5 long nights going page by page through the Oxford Dictionary, manually filtering four-letter words and noting down their usage types, all for lack of an electronic version with this information.

On a large scale, good quality Open data is still generally rare, but steadily growing across various domains. The success of XML based standard data formats like RSS has been playing another important role on the road to liberated and readily usable data, yet the inherent problem with these formats is their lack of (direct) support for multi-dimensional and multi-directional data relationships.

So with these (amongst many other things) in mind, researchers at the Austrian company System One have today announced the release of a snapshot of the entire English version of Wikipedia, converted into common flavours of RDF (RDF/XML, NTriples and Turtle) and licensed under the GFDL. Wikipedia3, a monthly updated dataset, currently weighs in at approx. 47 million triples (metadata statements about Wikipedia articles) and so far only includes the combined structural information of each article, like internal link and category relationships. A separate dataset containing the actual annotated articles is planned, as is support for inter-wiki relationships and a SPARQL interface for processing remote queries.

This is a pretty amazing endeavour and will hopefully provide enough incentive for people to pick up and learn to use RDF technologies as a flexible and powerful tool for infoviz purposes too.