Intelligence in smartphones, part 1

Can our devices really be called smart?

Modern tech companies are investing heavily in artificial intelligence, and for good reason: AI can potentially evolve faster than human intelligence and therefore solve problems more efficiently. That time has not yet come. It may be several decades before we get artificial intelligence comparable to the general intelligence of humans (ahem, of some humans, at least). For now, all we have at our disposal are basic models that perform narrow tasks.

If-then rules

For virtually the entire 'life' of computers, computation has been carried out by rules programmed into the machine. We started with simple math and moved on from there. Even the modern smartphone user interface is still manually programmed with rules: 'if the user taps this button, then do this.' That works, but it isn't smart.
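
To make this concrete, here is a minimal sketch of such hard-coded if-then behaviour, written in Kotlin. All the names are illustrative, not taken from any real framework:

```kotlin
// A toy sketch of rule-based UI logic: every behaviour is an explicit
// "if this event, then that action" mapping written by a programmer in
// advance. All names are illustrative; no real framework is assumed.
fun main() {
    val rules = mapOf<String, () -> Unit>(
        "tap:call_button"   to { println("Dialing the entered number...") },
        "tap:delete_button" to { println("Deleting the selected item...") },
    )
    // Simulated user input: the device can only do what a rule prescribes.
    for (event in listOf("tap:call_button", "long_press:call_button")) {
        rules[event]?.invoke() ?: println("No rule for '$event' - nothing happens.")
    }
}
```

The device never invents a response; anything outside the rule table simply does nothing, which is the whole point being made here.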

Think about it: smartphones in their current form are dumb, especially in terms of user interface design. The reason we call them 'smart' is that we can install programs on them. Apps extend their functionality beyond what they shipped with from the factory. In essence, we just poke at numbered buttons in a certain sequence to call another number. Of course, there are also touchscreen gestures, such as a keyboard with continuous swipe input, but again, these are just a couple more examples of rule-based functions.

Apps and mobile operating systems are carefully programmed in ways that don't end up being very 'smart' or 'intelligent'. All smartphone operating systems use edge-to-edge touch gestures as their primary mode of interaction, which makes little sense on screens larger than 3.8 inches. User interface designers often struggle to create smart interactions for phones. In many cases they rely on approaches that worked in the past but are no longer relevant, and when they are, only for a very narrow segment of the population.

Be that as it may, we can already see our devices starting to exhibit 'narrow intelligence', or at least something that resembles it. Narrow intelligence looks at one specific aspect of a problem and tries to solve it through a series of algorithms and the available information. The more data a narrow AI collects, the more accurate its answers become. Siri, Cortana, Alexa, Google Assistant, Bixby and the like are examples of narrow intelligence. They have a specific 'skill' set that includes speech recognition and short responses. None of these assistants is actually capable of evolving or truly learning from you as a user yet, despite the fact that they are collecting data. There are some minor elements of learning, such as memorizing your route to work to warn you about traffic jams early, learning your news preferences, or reading your email to surface delivery notifications.

Voice interfaces

Voice-based intelligent user interfaces are the most interesting direction for future interaction methods, because they require neither hands nor eyes. At least if they are designed correctly. At the moment, working with most voice assistants still requires looking at the screen, and this breaks the experience. These voice interfaces still deliver relevant notifications to the user accompanied only by an indistinct sound effect. I can ask Siri to read my mail, but I can't ask her to read new messages from a particular sender aloud as soon as they arrive, which would be far more helpful. Siri (like the other assistants) doesn't even offer a way to program such a rule yourself. There used to be a way to set up SMS notifications for emails from individual senders through the T-Mobile push service, and today you can do something similar with Microsoft Flow and Office 365. But the latter is not 'smart', and SMS messages are read aloud only on Windows Phone. And anyway, why should I need a second message just to notify me of the first?
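
The rule the author is asking for is trivial to express in code, which makes its absence all the more frustrating. A hedged sketch, with entirely hypothetical names, since no current assistant exposes anything like this to end users:

```kotlin
// Hypothetical rule: "read messages from this sender aloud as soon as
// they arrive." No assistant currently lets users define this.
data class Message(val from: String, val body: String)

fun onMessageArrived(msg: Message, readAloudFrom: Set<String>, speak: (String) -> Unit) {
    if (msg.from in readAloudFrom) {
        speak("New message from ${msg.from}: ${msg.body}")
    }
    // Otherwise: today's behaviour, an indistinct chime and nothing more.
}

fun main() {
    val important = setOf("boss@example.com")
    onMessageArrived(Message("boss@example.com", "The contract is signed."), important) {
        println("TTS: $it")  // stand-in for a text-to-speech engine
    }
}
```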

Still, interservice scripting tools like Microsoft Flow and IFTTT are another form of narrow artificial intelligence, in which you can teach software to perform specific tasks based on certain criteria. The problem is that these rule-making services are not well integrated into the phones' system software or into the interface of any virtual assistant. I can't say 'Hey Cortana, tell me when the client signs the document I sent them', although I could set up that simple applet in Microsoft Flow. On Android I can define IFTTT rules that go beyond the capabilities of the 'Ok Google' assistant, which is a step in the right direction. But I still can't say something like 'Ok Google, answer all text messages saying that I'm busy until tomorrow morning.'
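
Conceptually, such applets are just trigger/action pairs. Here is a toy rule engine in that spirit; it is purely illustrative and is not the real IFTTT or Microsoft Flow API:

```kotlin
// A toy rule engine in the spirit of trigger/action "applets".
// Illustrative only - not the real IFTTT or Microsoft Flow API.
data class Event(val type: String, val detail: String)

data class Applet(
    val trigger: (Event) -> Boolean,   // "when the client signs the document..."
    val action: (Event) -> Unit,       // "...tell me about it"
)

fun main() {
    val applets = listOf(
        Applet({ it.type == "document_signed" }, { println("Notify me: ${it.detail}") }),
        Applet({ it.type == "sms_received" }, { println("Auto-reply: busy until tomorrow morning") }),
    )
    val incoming = Event("document_signed", "Client signed contract.pdf")
    applets.filter { it.trigger(incoming) }.forEach { it.action(incoming) }
}
```

The complaint in the text is not that this logic is hard to build, but that no assistant lets you create such a rule by simply saying it out loud.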

Text interfaces

Chatbots are another form of narrow intelligence, living inside various messengers or specialized software. They are similar to voice assistants except that they respond to basic commands within a text chat interface. Sometimes they generate buttons, and tapping one answers the assistant's question; that's better than typing the answer, but the functionality is still very limited. Worse, each of these chatbots has a feature set the others lack, so you have to pick the right one to talk to for each particular problem or answer. Finding the right chatbot agent takes a lot of cognitive energy. What I actually need is one chatbot that understands everything and has access to all my installed applications and services.
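
A minimal sketch of this narrow chatbot pattern, with hypothetical names: the bot matches a fixed command set and offers canned buttons instead of genuine understanding.

```kotlin
// Illustrative sketch of a narrow chatbot: fixed commands, canned buttons.
data class BotReply(val text: String, val buttons: List<String> = emptyList())

fun handleMessage(input: String): BotReply = when {
    "weather" in input.lowercase() ->
        BotReply("Which city?", buttons = listOf("Current location", "Home", "Work"))
    "order" in input.lowercase() ->
        BotReply("Your last order is out for delivery.")
    else ->
        // Anything outside the hard-coded command set fails - exactly the
        // "very limited" functionality described above.
        BotReply("Sorry, I didn't understand that.")
}

fun main() {
    println(handleMessage("What's the weather like?"))
    println(handleMessage("Play some jazz"))
}
```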

There are also several narrow artificial intelligence systems aimed at working with email. [email protected] and [email protected] are AI systems that, when they receive an email from you with a list of people, will contact each addressee in turn and individually negotiate a meeting time convenient for everyone. This kind of chatbot is much smarter, because it doesn't require installing a separate application. I believe all of Cortana's system functions should be reachable through that kind of email address. For now, calendar.help can only handle meeting requests. Cortana was recently added to Skype as a chatbot, so all it still needs are phone, SMS, and email interfaces.

All of these narrow forms of artificial intelligence still require humans to adapt and learn the specific commands and phrases that these systems will 'respond' to. Let's be honest, the same is true for interactions between people.

Intelligent graphical interfaces

Many companies started using chatbots after research showed that users spend most of their time in instant messengers. That may be partly true, but the most commonly used method of human-computer interaction is still the graphical user interface. To launch a messenger, you have to tap its icon every time. Chatbots and voice interfaces are fine if you like typing or talking to a computer in a special way, but a graphical interface keeps the button you need constantly visible, and tapping it is faster and more efficient. What are the chances that an app's designer has tailored it perfectly to your needs? Low, especially if you are an advanced user. Smartphone apps are designed so that you have to adapt to them, instead of them adapting to you. That is the opposite of an intelligent system.

In the early days of computing, we addressed this with manual customization. Many professional-grade PC programs offer a fully customizable user interface. I can create toolbars and keyboard shortcuts that make my software work more efficiently for the tasks at hand. I can write scripts that add new menu items and functionality to some programs. Even professional hardware such as the Wacom MobileStudio Pro offers a myriad of customization options through programmable tactile controls. What's more, each user may have different needs, and a little customization plus a little human intelligence can address them far more efficiently.
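
The essence of that customization is letting the user, not the developer, bind inputs to actions at run time. A toy sketch under that assumption; no real application exposes exactly this API:

```kotlin
// A toy sketch of user-driven customization: the user, not the
// developer, binds shortcuts to actions. Purely illustrative.
class CustomizableApp {
    private val shortcuts = mutableMapOf<String, () -> Unit>()

    // User-defined binding, created at run time rather than compile time.
    fun bindShortcut(keys: String, action: () -> Unit) {
        shortcuts[keys] = action
    }

    fun press(keys: String) {
        shortcuts[keys]?.invoke() ?: println("'$keys' is unbound.")
    }
}

fun main() {
    val app = CustomizableApp()
    // Each user can wire the same tool to their own workflow.
    app.bindShortcut("Ctrl+Shift+E") { println("Export with my presets") }
    app.press("Ctrl+Shift+E")
    app.press("Ctrl+Q")
}
```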

Today, smartphones and their applications offer very few UI customization options. In iOS you can arrange icons into folders and change the launcher wallpaper, and optionally add widgets to a dedicated screen or icons to the notification center, and that's about it. In Android you can install an entirely different app launcher, scatter widgets across the home screen, and change icon designs, but you can't fix the terrible interface in Snapchat or change the awful colors in the Gmail app. You can't hide the in-app buttons you never use or replace them with useful function keys. You can't create your own gestures to quickly trigger specific functions in specific applications. It's unfriendly to advanced users and not nearly as personalized as we would like.

By Adam Z. Lein

If you think about it, smart devices really do lack an intelligence of their own that could help them independently solve a wide range of tasks. Another question is whether such a breakthrough will be widely adopted by the masses once phones become smarter than 'some people.'

Nevertheless, the scenarios the author describes seem, on the one hand, like something out of the future, and on the other, fit quite well into the AI/ML concept. I, for one, would not like to wait several decades. In the next issue: the continuation of the material, after which we will discuss its theses.
