Wednesday, August 21, 2013

Voice Magic: Touch & Say

Writing Semantic JSON by hand is very tedious. In general, every pronoun requires you to create a reference to a previous sentence. Making references is the core idea of this input format, so you can't simply drop them. Bret Victor's videos inspired me to look for a more practical approach.
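To make the tedium concrete, here is a minimal sketch of what such pronoun references might look like. The statement/entity structure and the `"sid/eid"` reference syntax are my own assumptions for illustration, not a spec of Semantic JSON:

```python
# Hypothetical Semantic JSON: each pronoun in a sentence is replaced by
# an explicit reference ("statement id / entity id") pointing back to an
# entity introduced in an earlier statement.
doc = [
    {"id": "s1", "text": "Alice built a parser.",
     "entities": {"e1": "Alice", "e2": "parser"}},
    {"id": "s2", "text": "{she} extended {it} last week.",
     "refs": {"she": "s1/e1", "it": "s1/e2"}},
]

def resolve(doc, ref):
    """Follow a 'sid/eid' reference back to its antecedent entity."""
    sid, eid = ref.split("/")
    stmt = next(s for s in doc if s["id"] == sid)
    return stmt["entities"][eid]

print(resolve(doc, doc[1]["refs"]["it"]))  # → parser
```

Every "she" or "it" has to be annotated like this by hand, which is exactly the tedium the touch-and-say interface is meant to remove.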

You just touch the screen and say a word. A figure appears under your fingertip, bound to the recorded sound. Now you can reuse the figure by dragging it from recent statements into new ones. Here is a picture to illustrate this.




When you touch a circle from a previous statement inside the timeline area, you hear your own recorded voice saying the corresponding word. When you touch a circle inside the context area, you hear a short story about that object.

If someone feeds their speech into your device, you can reference circles from that feed as well.

There are only a few real-world objects that we describe in speech when we experience a circumstance. If one took the trouble to type letters next to the figures one uses most often, two multi-layered graphs, say mine and another person's, could snap together, producing a conversation.
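The "snapping together" could be sketched as merging two labeled graphs on their shared labels. The flat node/edge representation below is my own assumption; the post does not define one:

```python
# Two people's concept graphs: nodes are labeled figures, edges connect
# figures that appeared together in a statement. Graphs "snap together"
# wherever both people typed the same label next to a figure.
mine   = {"nodes": {"rain", "umbrella"}, "edges": {("rain", "umbrella")}}
theirs = {"nodes": {"rain", "flood"},    "edges": {("rain", "flood")}}

def snap(a, b):
    """Merge two labeled graphs; shared labels become shared nodes."""
    return {"nodes": a["nodes"] | b["nodes"],
            "edges": a["edges"] | b["edges"]}

merged = snap(mine, theirs)
shared = mine["nodes"] & theirs["nodes"]
print(sorted(shared))  # → ['rain']
```

The shared labels are the points of contact: in the merged graph, my "rain" and the other person's "rain" are the same node, so statements from both sides now connect through it.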

You may ask: why not just talk to each other? Why do we need computers?

Imagine someone living a thousand years from now. How can you communicate with that person? Will you write a book? The imaginary person of the future won't find your book inside the mountain of books that will have accumulated by then.

Text lines are no longer sufficient. We need graphs.

semanticweb.com
