Gadget makers have finally reached a tipping point. Having moved past the era of forcing their devices to become bad Android tablets, companies are ready to step beyond screens.
The boom in connected devices, driven by the IoT trend, has been fueled by the arrival of inexpensive components. Anyone can cobble together something 'smart' from a processor, an accelerometer, and a gyroscope, and enclose it all in a 3D-printed case at little cost. But enthusiasts and professionals alike face one major challenge: how to give users control over their new toy. Some companies opt for a smartphone app, while others embed a touchscreen into the device. A touchscreen display is simple, accessible, and does not demand a steep learning curve from the user.
However, the touchscreen has problems of its own. The software must be constantly updated, which requires trained personnel, the ability to distribute updates over the air, and users willing to install them. The apps that run on these screens need updating too, so a development team is essential. And perhaps the biggest challenge: everyone already carries the perfect touchscreen in their pocket. Our phones are updated constantly and have performance to spare, so why do we need another device?
Another screen?
With all this in mind, device manufacturers are developing new user-interface paradigms in an attempt to kickstart the evolution beyond screens. They are investing in gesture and voice control to court users tired of staring at screens, but their efforts are also a response to the failed attempts to integrate and support similar input methods on Android. There is much more at stake now. Apple and Google have already won the battle for touchscreen software, but whoever wins the next era of engagement will capture the profits and the investment.
The prerequisites for a new confrontation were already in place, but the success of the Amazon Echo and the rapid advance of connected things have given gadget makers fresh incentive to take another chance on alternative ways of interacting with devices. The question is whether they can find something people will like enough to change their behavior, rather than kill the technology off. The Echo could be read as a one-off success rather than a sign of a changing business, yet manufacturers seem to see the device as a harbinger of new trends. Frankly, they have little other choice.
Gadi Amit, designer of the Fitbit Force and lead designer at NewDealDesign, says: “The future of interaction is mostly subliminal, not surface. It involves elaborate interactions where the computing environment is primarily in the background rather than the foreground.” Almost every company is already thinking along similar lines. AirPods, for example, connect directly to recent Apple products the moment they are removed from their charging case and require no user intervention; Snapchat's glasses are activated by a button, with no screen needed. But perhaps no company has been as successful at building hardware around screenless interaction as Amazon, with its Echo home device and Dash buttons.
Dash buttons are placed around the house so the user can place an online order for a product the moment they notice they are running low. The buttons blend harmoniously into the home (as far as a large plastic button can be inconspicuous) and demand no complex thought from the user: everyone knows how to push a button. Dash connects to Wi-Fi, runs on cloud infrastructure, and is easy to set up. Amazon speaks of good button sales, though unfortunately we cannot yet back those claims with figures. We can assume that the Echo line (including the Dot) has been quite successful. The Echo series has given thousands of consumers the ability to control their lives with their voice: users can switch on devices, stream music, ask the voice assistant questions, or set a timer or alarm, while a second channel of interaction, a ring of light, indicates that the device is listening.
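The press-once, order-once workflow described above can be sketched in a few lines. This is a toy model, not Amazon's actual firmware: the class and callback names are invented, and the "cloud" is just a callback, with a cooldown standing in for the real device's protection against accidental double-presses.

```python
import time

class DashStyleButton:
    """Hypothetical sketch of a Dash-like button: one press, one order.

    A real device would debounce in hardware and call a cloud API; here
    the 'cloud' is a plain callback, and the clock is injectable so the
    behavior can be tested without waiting.
    """

    def __init__(self, product_id, place_order, cooldown_s=5.0, clock=time.monotonic):
        self.product_id = product_id
        self.place_order = place_order   # stands in for the cloud order endpoint
        self.cooldown_s = cooldown_s     # ignore repeat presses within this window
        self.clock = clock
        self._last_press = None

    def press(self):
        """Handle a physical press; return True if an order was fired."""
        now = self.clock()
        if self._last_press is not None and now - self._last_press < self.cooldown_s:
            return False                 # still inside the cooldown window
        self._last_press = now
        self.place_order(self.product_id)
        return True

orders = []
fake_time = [0.0]
button = DashStyleButton("paper-towels", orders.append, cooldown_s=5.0,
                         clock=lambda: fake_time[0])
button.press()       # fires an order
button.press()       # same instant: suppressed by the cooldown
fake_time[0] = 6.0
button.press()       # past the cooldown: fires again
print(orders)        # ['paper-towels', 'paper-towels']
```

The point of the sketch is how little interface there is: all the state a user ever touches is one press, and everything else lives behind the callback.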
Major players including Google, Apple, Microsoft, and Samsung also see voice as an integral part of future gadget interaction. Google released Home as a direct competitor to the Echo, and over the past year the other companies have doubled down on their own voice assistants. 2017 could be the year of Siri, or Cortana, or even Samsung's mysterious voice assistant. Such products have not generated much interest in the past, but smarter and better-designed software may change that. Smaller companies are experimenting too, though with novel user interfaces rather than voice control. They will most likely never win the kind of market share Amazon has, but they hope to cash in on the novelty and simplicity of their interaction technologies. In some cases they believe such ways of communicating with a device will lead users into a more natural relationship with their gadgets. These efforts are rooted in technologies of the past that failed to gain a foothold in the market, which, however, stops no one.
Jake Boshernitsan, co-founder of the Knocki project, sees smart buttons like Dash as a key component of future interactions: buttons are accessible to everyone and easy to use. Knocki turns ordinary surfaces into interactive devices, much as The Clapper once used sound to receive commands. “We decided to use knocking; with it you can activate the entire environment a person lives in,” says Boshernitsan. Knocki attaches to a wall or tabletop and recognizes knocks as commands. The device works with IFTTT and can therefore recognize different knock combinations to control specific devices. In essence, functionality that once required a touchscreen can be embedded into the room itself and then forgotten about: these interactions become part of the normal daily routine.
Next year I expect the biggest growth to come in gesture technology. Take, for example, two smart mirrors from CES, the HiMirror Plus and the Ekko. Both are gesture controlled for one simple reason: no one wants to leave fingerprints on expensive glass. At the show we also saw gesture control integrated into windshield head-up displays such as Navdy's.
Sang Won Lee, CEO of gesture control technology company Qeexo, believes in this type of interaction, arguing that it involves the user more fully in the process. In his view, the method delivers a more immediate, visual experience, which makes for a better product. In some cases gestures are used alongside a screen where no generally accepted touch vocabulary exists; in those cases hardware makers are not trying to get rid of the screen or simplify it.
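A simple way to see why gesture recognition is harder to get right than a button or a knock is the threshold problem: a recognizer has to decide how much movement counts as a deliberate swipe. The toy classifier below illustrates this trade-off; it is not how Qeexo or any shipping product works (real systems use trained models over richer sensor data), and every name and number in it is invented.

```python
def classify_swipe(xs, min_travel=0.2):
    """Classify a horizontal hand track as a swipe.

    xs: normalized x positions (0..1) sampled over the gesture.
    If net travel is below min_travel, the gesture is rejected -
    which is why half-hearted swipes have to be repeated.
    """
    if len(xs) < 2:
        return "none"
    travel = xs[-1] - xs[0]
    if travel > min_travel:
        return "swipe_right"
    if travel < -min_travel:
        return "swipe_left"
    return "none"   # too little net movement to count as deliberate

print(classify_swipe([0.8, 0.6, 0.4, 0.2]))   # swipe_left
print(classify_swipe([0.50, 0.55, 0.52]))     # none: rejected, must repeat
```

Set `min_travel` too low and the mirror reacts to every stray hand movement; set it too high and users wave twice for every command, which is precisely the frustration described in the next paragraph's first-hand experience.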
Manufacturers are struggling to get users to think beyond touchscreens, but so far their efforts feel like gimmicks rather than anything life-changing. In my opinion, gesture controls are not yet fully mature: I often have to repeat a movement to complete a command, and every time I wave my hand to change a track, it feels unnatural. Over time my body may come to treat such actions as familiar, but at this stage the sensation is strange. To get a sense of how far alternative user interfaces have developed where they actually took root, look at gaming devices such as the Microsoft Kinect and the Nintendo Wii.
Few will return from the battlefield
While the Echo proves that voice can sell, it may not work for every device. The Echo's success stems from its integration into the home, a place where silence is the default until people decide otherwise. Alexa plays the obedient helper, controlling the music and the smart lights. That use case is harder to justify for other gadgets, such as watches. I feel no need to talk to my watch the way I do to the Echo. I also wear a watch outside the home, so even though dictating text to it would be easier than tapping, conditions outdoors don't always guarantee successful input.
With these trends in mind, why shouldn't other companies try their luck outside the home? Voice or gesture control can be built into cars and other highly specific scenarios, at least until the vast majority of us switch to self-driving cars. To topple the phone and its touchscreen, companies will need to prove that their interactions are simpler and their use cases more obvious.
Original material by Ashley Karman
The further voice and gesture control develop, the more incredible the concepts and prototypes of devices aspiring to become our future companions seem. Over time many of these projects vanish without ever clearing the fundraising stage, and only Black Mirror remains to remind us of them. The first steps toward a new concept of interaction have been taken, but will voice assistants really become a mass-market innovation? Are the underlying technologies, and users themselves, ready for it? There is no clear answer yet.