Interfaces are a necessary evil. They are a means of communication, connecting users to instruments and devices. Some interfaces are one-way (like a keyboard), while others are duplex (like a touch screen). But all are necessary to extract value from a device. Any device.
The main problem with interfaces is that they are literally in the way. They take time. They can be complicated, and sometimes it takes a while to learn how to use them.
I remember seeing the movie Firefox with Clint Eastwood, about a special stealth jet whose pilot controls the weapon systems through a neurofeedback brain-computer interface, and thinking to myself, “why stop there?”
Engagement can be defined in several ways. My favorite is probably “how likely am I to use a device/system for a certain need” (and not how many times I use it). If every time I want to know the time I look at my watch (and not at the clock hanging on the wall next to me), I would say my watch has 100% engagement, even if I only check it twice a day. If it were hard to tell time using my watch, I would be less engaged with it. In my mind, increasing the number of times I check my watch does not measure engagement with my watch, but rather the depth of my need for time awareness.
Natural language interfaces, like Apple’s Siri, are the intuitive and natural next step in interface evolution, right after touch screen technology and just a couple of steps shy of a full direct neural interface. The ability to use my own words to interact with a device shortens the time-to-value and makes it more likely that I’ll reach for my iPhone when I want to perform a task.
If it’s easier to extract value – ultimately, it will be more engaging.