Processes for metadata creation
I’m very impressed by the Time Graphs photo set on flickr, where “sunset”-tagged photos are graphed along annual and daily axes, revealing the sinusoidal shift of daylight hours in the northern hemisphere, and “breakfast”, “lunch”, and “dinner” cover the standard social feeding times. These visualizations would enable a race of beings living inside the core of planet earth to make some generalizations about life at the surface, astronomical phenomena, etc. There are also 12-hour shadows in both graphs, from cameras with AM/PM incorrectly set. Reminds me of GeoURL’s Ghost Blogs of Tibet, the Central Asia mirror world of the US caused by an incorrect longitude sign.
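The graphing itself is simple once the timestamps are in hand. A minimal sketch (my own, not the Time Graphs code) of binning photo timestamps along those two axes, day of year and hour of day:

```python
from collections import Counter
from datetime import datetime

def time_graph(timestamps, day_bins=52, hour_bins=24):
    """Bin timestamps into a (week-of-year, hour-of-day) grid."""
    grid = Counter()
    for ts in timestamps:
        week = ts.timetuple().tm_yday * day_bins // 367
        hour = ts.hour * hour_bins // 24
        grid[(week, hour)] += 1
    return grid

# Photos tagged "sunset" would cluster in a sinusoidal band across the
# weeks; a camera with AM/PM set wrong shows up as a faint copy of that
# band shifted 12 hours.
```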
One immediately practical result could be to help flickr users set the correct time on their cameras. But the real story is how metadata can be utilized in analysis, and repurposed. Computer vision and image processing offer tons of opportunity for extracting useful information, combined with the folksonomy, beyond the initial forays with time and color. There are several techniques for automatic extraction of faces from images, to explore things like whether “street” or “freeway” is more human, or the average number of faces in a “crowd”. Line extraction can answer other obvious questions, like whether “nature” or “city” is more angular. Heck, the results of feature extraction and matching against known characteristics of tag sets could lead to automatic suggestion of tags. So, like .. “the spacing of concentric circles in this image suggests you might want the tag ‘donut’” .. which is a totally useless example, but illustrates how automatic and human processes can feed off each other to create metadata (no, actually a ‘bagel’).
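The tag-suggestion idea could be sketched as nearest-profile matching: compare features extracted from an image against the average feature profile of each existing tag set, and suggest the close ones. Everything below (the feature names, the numbers, the distance threshold) is invented for illustration:

```python
def suggest_tags(features, tag_profiles, max_distance=1.0):
    """Suggest tags whose learned feature profile is near this image's features."""
    suggestions = []
    for tag, profile in tag_profiles.items():
        # Euclidean distance between the image and the tag's average profile
        dist = sum((features[k] - profile[k]) ** 2 for k in profile) ** 0.5
        if dist <= max_distance:
            suggestions.append((dist, tag))
    return [tag for _, tag in sorted(suggestions)]

# Hypothetical profiles averaged over existing tag sets
profiles = {
    "crowd":  {"faces": 9.0, "angularity": 0.4},
    "nature": {"faces": 0.2, "angularity": 0.2},
    "city":   {"faces": 1.5, "angularity": 0.8},
}
print(suggest_tags({"faces": 8.5, "angularity": 0.5}, profiles))  # → ['crowd']
```

The interesting part is the feedback loop: human tagging trains the profiles, and machine suggestions make the human tagging faster.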
Here is a rough breakdown of the sources of metadata creation, as I’m thinking of it. There is mechanically created, nearly objective metadata. In photos, this is the EXIF header data listing things like aperture, the time the photo was taken, and flash mode. There is user created, free flow metadata. This is tags. There is mechanically created metadata derived from mechanical and user metadata, and maybe network effects, such as Google PageRank. And there is process created metadata, where you are taken through steps of metadata creation that may utilize any of the other three.
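That first, mechanically created layer is sitting in every photo already. A quick sketch of pulling it out, assuming the Pillow imaging library as one possible EXIF reader:

```python
from PIL import ExifTags, Image  # Pillow -- an assumed choice of EXIF library

def exif_summary(path):
    """Return the photo's EXIF header fields keyed by readable names."""
    exif = Image.open(path).getexif()
    # Translate numeric EXIF tag ids into their standard names,
    # e.g. 306 -> "DateTime"
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}
```

Whatever the camera wrote (aperture, timestamp, flash mode) comes back as a plain dict, ready to be graphed or cross-referenced against tags.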
A great example of process created metadata is Geotagging Flickr with Google Maps and Greasemonkey. Two Greasemonkey scripts integrate flickr, Google Maps, and geobloggers.com to enable straightforward geotagging of photos. It adds a form to flickr, where a location is specified. That’s directed to Google Maps, where the map can be explored until the precise location is found. Then one click links back to flickr, adds the geotags, and submits to geobloggers. The entire process is well illustrated here.
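The last step of that process writes machine-readable location back into flickr as plain tags. A minimal sketch, assuming the geo:lat / geo:lon triplet convention used by geobloggers:

```python
def geotags(lat, lon, precision=6):
    """Build the three tags that geotag a photo: a marker tag plus lat/lon."""
    return ["geotagged",
            f"geo:lat={round(lat, precision)}",
            f"geo:lon={round(lon, precision)}"]

print(geotags(51.5074, -0.1278))
# → ['geotagged', 'geo:lat=51.5074', 'geo:lon=-0.1278']
```

The nice property is that the output is still ordinary folksonomy: any tag search or aggregator can consume it without knowing about the process that created it.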
Site integration is the true power of Greasemonkey, and it hasn’t been much explored. Scripting happens right on the network, linking together systems, using APIs, modifying behaviors, as if the web were some sort of cohesive OS.
Processes will be essential to enable more complex, useful folksonomy, or more formal folksonomies. Processes are being considered to geolocate events and OpenGuides. This thread on metadata creation on the EGIP list compares the stick and carrot approaches, and the challenges in making it all easy. Geotags are just the tip of more formal folksonomy systems.