I’ve been playing with the Flash zooming interface demo that is part of Jef Raskin’s Humane Interface concept. As the demo itself notes, it is limited and does not reflect the full capabilities of today’s rendering engines. Even so, the potential of such a design is easy to see.
The space is easily navigable, and items can be grouped in particular locations. This is a more genuinely “spatial” UI than the one advocated by John Siracusa. A “spatial” metaphor doesn’t really make sense when items can disappear. Sure, a window or desktop can have items arranged spatially, but the value of that spatial representation drops sharply as soon as the window is closed or the desktop is covered.
My questions about the implementation of the zooming interface:
How is a given object’s zoom level determined?
Does the user have control over an object’s zoom level? Can it be set relative to nearby objects, or only against a global scale?
How would things such as playing media be handled in such a UI? How would I create a playlist of audio files?
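Purely as a thought experiment on the second question, here is one way relative zoom could work: each object stores a scale relative to its parent, and its on-screen (global) zoom is the product of the scales along its ancestor chain. Nothing here is from Raskin’s actual implementation; the class and field names are hypothetical.

```python
# Speculative sketch: relative vs. global zoom in a zooming UI.
# Each object keeps a scale relative to its parent; the global
# (on-screen) scale is the product of scales up the chain.
# All names are hypothetical, not from Raskin's demo.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ZoomObject:
    name: str
    local_scale: float = 1.0            # scale relative to parent
    parent: Optional["ZoomObject"] = None

    def global_scale(self) -> float:
        """Walk up the ancestor chain, multiplying relative scales."""
        scale = self.local_scale
        node = self.parent
        while node is not None:
            scale *= node.local_scale
            node = node.parent
        return scale


world = ZoomObject("world")                                   # global frame
folder = ZoomObject("folder", local_scale=0.5, parent=world)  # half size
note = ZoomObject("note", local_scale=0.25, parent=folder)    # quarter of folder

print(note.global_scale())  # 0.125
```

Under this model, “zoom relative to nearby objects” just means editing `local_scale`, while “zoom on a global scale” means adjusting the object so its `global_scale()` hits a target value regardless of its parent.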
I know – just buy the book. Since I don’t see anything more detailed on the website, I imagine the book delves into the specifics.