What did I pick up from the XML Melbourne Lab feedback?
The VD system was described as a “taxonomy of display” and a “recombinant video player.” There was also some confusion: “Is it a content engine or a tag engine? Could we provide clearer context?”
Another called it “Anti-TV…the opposite of YouTube…not a lot in the house” — an example that counters the noise of the Internet through its slowness and stripped-back, minimal design.
The system gives a number of people in different locations the ability to see varying news perspectives at the same time. The multi-window composition allows multiple perspectives on a news item to be viewed concurrently, so the viewer can make their own judgments by engaging across several perspectives at once.
There is also the potential to respond to the concept of multi-tasking. Why aren’t there more systems that allow users to view multiple clips at the same time as a way of searching and deciding what they want to watch? It makes sense: it speeds up consumption. I think people are ready for this type of viewing, but we are still locked into the security of single-window viewing by established cinematic and TV paradigms.
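To make the multi-window idea concrete, here is a minimal sketch of one way the composition step could work. This is a hypothetical data model, not the actual VD implementation: clips carry tags, and a composition function fills the available windows with clips that share a story tag, so several perspectives on the same item play side by side.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: str        # e.g. broadcaster or contributor
    url: str
    tags: frozenset    # classification tags attached to the clip

def compose_windows(clips, story_tag, max_windows=4):
    """Pick up to max_windows clips covering the same story."""
    matches = [c for c in clips if story_tag in c.tags]
    return matches[:max_windows]

clips = [
    Clip("ABC", "abc.mov", frozenset({"election", "au"})),
    Clip("BBC", "bbc.mov", frozenset({"election", "uk"})),
    Clip("blogger", "vlog.mov", frozenset({"election"})),
    Clip("sport", "cricket.mov", frozenset({"cricket"})),
]

windows = compose_windows(clips, "election")
print([c.source for c in windows])  # three concurrent perspectives on one story
```

The point of the sketch is that the tag relationships, not an editor’s cut, determine what shares the screen.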
In a multi-window format with increased data rates, a combination of stills and video could be used, where photojournalistic images come to life for short periods with audio overlays. An example: www.mediastorm.org.
Overall, the multi-window aspect is what made the VD system unique. In comparison, tagging is something that is growing fast around online video content.
Of course sport also came up: the slow cricket match playing in the left-hand window while other sports stream through the remaining windows. Activities like sport can easily be watched simultaneously while viewers wait for the highlights, the goal to be scored, etc. The Olympics would be to die for in this system. Delayed edited broadcast is another option, along with multi-camera curation where you see all the cameras at once. This ties in with live VJ gigs and music concerts.
Granularity – Semantic Video
Following up the idea of fragmenting existing TV programs for web publication: the Four Corners TV documentary program (www.abc.net.au/fourcorners) provides excerpts, with durations, for viewers to access independently. But as far as I can tell there are no extras like outtakes, extended interviews and other background material. Also, the material seems to rely on previous program viewing, with little focus on taxonomy or classification under themes and categories.
A viewing platform with thumbnail similarities is www.piclens.com, a type of fly-through image/video wall, but the clips remain separate as discrete, independent pieces of content.
In a fast-moving environment where time is of the essence, one person argued that time should not be invested in classification (a taxonomy) of online video content. The approach should instead be UGC, where users make their own folksonomy-type choices. An example is the vmark system: the idea is to leave long-duration recordings intact and let users break the material up into whatever fragments they choose, with the option to embed and share those portions with themselves and others. (I need to try it out to confirm this perception.) A Korean example of vmark: http://zzim.kbs.co.kr/section/ . A key objective is to use metadata to drive return traffic back to the original source material.
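A hedged sketch of the vmark-style idea as I currently understand it (the field names and link format are my assumptions, not vmark’s actual scheme): a user-cut fragment is just a start/end pair plus folksonomy tags, and its embed link carries metadata that points traffic back to the long source recording.

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    source_url: str            # original long-duration recording
    start: float               # seconds into the source
    end: float
    user_tags: list = field(default_factory=list)  # folksonomy: whatever the user chooses

    def embed_link(self):
        """Shareable link whose metadata drives return traffic to the source."""
        return f"{self.source_url}?t={self.start:.0f}-{self.end:.0f}"

frag = Fragment("http://example.org/full-match.mov", 754.0, 791.0, ["wicket"])
print(frag.embed_link())  # http://example.org/full-match.mov?t=754-791
```

Nothing is re-encoded or edited; the “fragment” is purely an addressing layer on top of the untouched recording, which is what lets users do the classification work instead of the producer.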
But the concept of the content producer avoiding manual classification of video content misses the point in relation to VD, because the idea is to construct specific relationships between text and moving imagery as a way to provide certain types of context for the viewer/user.
Another UGC idea: individuals capture material around Australia on the premise of classification rather than editing. These are single shots (no edits), though there could be jump cuts in camera, and they are categorised and tagged. This approach ties in easily with amateur online production techniques of shooting and publishing directly online (via a computer or direct from mobile, e.g. qik). Many amateur producers struggle with more advanced editing but are becoming familiar with tagging and folksonomy practices. There could be themes, where uploaded content is syndicated into one central VD system for display under specific categories (themes that have been worked out in advance).
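The syndication step above could be sketched like this. The theme names and the keyword-overlap matching rule are illustrative assumptions on my part: predefined themes are mapped to tag vocabularies, and each uploaded single-shot clip is routed into whichever theme channels its folksonomy tags intersect.

```python
# Themes worked out in advance, each with a tag vocabulary (assumed examples).
THEMES = {
    "coastline": {"beach", "surf", "ocean"},
    "city-life": {"traffic", "street", "market"},
}

def syndicate(upload_tags):
    """Return the predefined theme channels this clip is displayed under."""
    tags = set(upload_tags)
    return sorted(theme for theme, keywords in THEMES.items()
                  if tags & keywords)

print(syndicate(["surf", "sunset"]))    # ['coastline']
print(syndicate(["street", "beach"]))   # ['city-life', 'coastline']
print(syndicate(["kangaroo"]))          # [] -> no theme match
```

The producer never edits anything; the amateur’s tags do the routing, which is the “classification rather than editing” premise in miniature.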
Re-mix is another consideration, particularly across multiple windows. Not only could users remix their own version (a standard single-window video), they could also remix across multiple windows. It becomes more like a DJ turntable, combining audio and vision from multiple sources at the same time.
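One way to imagine the DJ analogy is a cross-window remix “score” (an entirely hypothetical format): each cue assigns a video source to a window and says whose audio is live at that moment, so vision runs in parallel while audio is faded between sources.

```python
remix_score = [
    # (time_sec, window, video_source, audio_live)
    (0,  1, "concert-cam.mov", True),
    (0,  2, "crowd-cam.mov",   False),
    (30, 2, "crowd-cam.mov",   True),   # hand audio over to window 2
    (30, 1, "concert-cam.mov", False),
]

def audio_source_at(score, t):
    """Which source's audio is live at time t (latest audio cue wins)."""
    live = None
    for time_sec, _window, source, audio_live in score:
        if time_sec <= t and audio_live:
            live = source
    return live

print(audio_source_at(remix_score, 10))  # concert-cam.mov
print(audio_source_at(remix_score, 45))  # crowd-cam.mov
```

The vision stays multi-window throughout; only the audio focus moves, much as a DJ crossfades between two decks that are both still spinning.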
Added social networking functions
Netvibes lets individuals assemble their favorite widgets, websites, blogs, email accounts, social networks, search engines, instant messengers, photos, videos, podcasts, and everything else they enjoy on the web – all in one place.
Some people loved the simplicity of the player (as it contradicted the visual overload of most web pages and players). Others were dying to get features back in, so that the video component is supported by functions that develop and maintain community. I discussed this earlier when comparing the motives behind Showinabox and how we stripped out most of the web 2.0 blog functionality. View2gether is an example of a “social viewing platform”; see also freebase.
Returning to single window output
A number of people wanted the option to take away a traditional single edited video clip from the system. This got me thinking about the divide we have created between the VD system and standard viewing practices. Like Creative Commons, which accommodates the established status quo of traditional copyright while things move toward a more open approach, maybe there is something in providing an additional single-window option. But a part of me also says NO, make the leap.
Other reference: Limelight Networks.