Video: "g-speak overview" from John Underkoffler on Vimeo.
Oblong Industries is the developer of the g-speak spatial operating environment.
"Every graphical and input object in a g-speak environment has real-world spatial identity and position. Anything on-screen can be manipulated directly. For a g-speak user, 'pointing' is literal."

"The g-speak implementation of spatial semantics provides application programmers with a single, ready-made solution to the interlocking problems of supporting multiple screens and multiple users. It also makes control of real-world objects (vehicles, robotic devices) trivial and allows tangible interfaces and customized physical tools to be used for input."
"The g-speak platform is display agnostic. Wall-sized projection screens co-exist with desktop monitors, table-top screens and hand-held devices. Every display can be used simultaneously and data moves selectively to the displays that are most appropriate. Three-dimensional displays can be used, too, without modification to application code."
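The quoted passages describe the core idea: every screen and pointer has a known real-world position, so "pointing" resolves geometrically rather than through a per-display cursor. As a rough illustration of that idea (this is a hypothetical sketch, not the actual g-speak API; the `Screen` class and `resolve_pointing` helper are invented for this example), a pointing gesture can be modeled as a ray intersected with each spatially registered display:

```python
import numpy as np

# Hypothetical illustration of spatial pointing semantics (NOT the g-speak API):
# each display is registered in room coordinates, and a pointing gesture is a
# ray that we intersect with every screen plane to find what the user targets.

class Screen:
    def __init__(self, name, origin, right, up, width_m, height_m, px_w, px_h):
        self.name = name
        self.origin = np.asarray(origin, float)   # lower-left corner, meters
        self.right = np.asarray(right, float)     # unit vector along the width
        self.up = np.asarray(up, float)           # unit vector along the height
        self.width_m, self.height_m = width_m, height_m
        self.px_w, self.px_h = px_w, px_h
        self.normal = np.cross(self.right, self.up)

    def intersect(self, ray_origin, ray_dir):
        """Return (pixel_x, pixel_y) where the ray hits this screen, or None."""
        ray_origin = np.asarray(ray_origin, float)
        ray_dir = np.asarray(ray_dir, float)
        denom = ray_dir @ self.normal
        if abs(denom) < 1e-9:            # ray parallel to the screen plane
            return None
        t = ((self.origin - ray_origin) @ self.normal) / denom
        if t <= 0:                       # screen is behind the pointer
            return None
        hit = ray_origin + t * ray_dir - self.origin
        u, v = hit @ self.right, hit @ self.up     # meters along the screen
        if not (0 <= u <= self.width_m and 0 <= v <= self.height_m):
            return None
        return (u / self.width_m * self.px_w, v / self.height_m * self.px_h)

def resolve_pointing(screens, ray_origin, ray_dir):
    """Find which of several registered screens (if any) the ray hits."""
    for s in screens:
        px = s.intersect(ray_origin, ray_dir)
        if px is not None:
            return s.name, px
    return None

# A 4 m x 2 m wall display standing 2 m in front of the user.
wall = Screen("wall", origin=(-2.0, 0.0, -2.0), right=(1, 0, 0), up=(0, 1, 0),
              width_m=4.0, height_m=2.0, px_w=3840, px_h=1920)

# Pointing straight ahead from eye height hits the center of the wall.
print(resolve_pointing([wall], ray_origin=(0, 1, 0), ray_dir=(0, 0, -1)))
```

Because every display lives in the same room coordinate system, adding a second monitor or a table-top screen is just another `Screen` entry; no per-display pointing logic is needed, which is the kind of uniformity the quoted text attributes to spatial semantics.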
g-speak was born at the MIT Media Lab, and Oblong was started in 2006; the work behind g-speak's gestural I/O began over 15 years ago. For more information, read "g-speak in slices".
Oblong developed Tamper, a prototype for film production, on top of the g-speak system. The demo is below; at 0:08, the video shows sketches of the gestures used in g-speak.