On possible ways of developing truly 'smart' devices …
In the days of Windows Mobile at the turn of the century, many smartphones had hardware keys that could be programmed to perform certain actions in specific applications. I could set up my device to control music playback and GPS navigation while driving, without having to look at the screen. That approach is more efficient, and safer, than the modern concept of touch-screen control. In 2009, Vinnie Brown and Probex created a completely redesigned interface for the HTC Touch Pro 2. On the HTC Touch Diamond, I could even program which on-screen keyboard was displayed depending on whether the stylus was in its slot (a touch-oriented T9 keyboard) or not (Letter Recognizer or FITALY). A customizable interface essentially lets you 'train' the device to work the way you want and reshape it so that your tasks are performed more efficiently. That was the whole of 'intelligence' at the time.
Of course, I needed to use my own intellect to figure out which software and hardware changes would actually increase efficiency. This is where there is potential for improvement: software could anticipate the user's needs by studying them, and notify them about changes that would simplify their work.
Microsoft actually had its own 'smart' user interface built into Office around the same time. 'Personalized Menus' kept track of the most frequently used commands and made them more accessible over time, hiding less relevant menu items. This greatly simplified a complex interface in a way that directly met the user's needs, and it also increased efficiency by reducing the number of mouse movements required to reach popular features. Sometimes, though, it is useful to expose the full complexity of an application, which is why the 'intelligence' responsible for auto-personalization should always be optional. This is also why we need AI that learns user preferences.
A personalized user interface is especially appropriate for devices with relatively small displays, such as smartphones, where the limited screen area makes it impossible to show every function at once. The 'Bold', 'Italic', 'Underline', and 'Font Color' buttons in the Microsoft Word mobile app get in the way of the action I actually need: choosing a style name. No one should have to use those buttons in Word again; custom anchored styles are a smarter and more efficient tool for word processing. In Outlook Mobile, the Delete, Back, and Forward buttons take up valuable screen space for functions I never use. What I really need are the Flag with Notification, Reply, and Reply All buttons. Maybe other people use the Back and Forward buttons; I don't know, and I don't care. For me, the interface is clearly ineffective.
Many mobile app developers build user interfaces around their own preferences. The end result is often buttons, icons, and an organizational structure that only the developer understands. If you are lucky, you will come across a few applications designed for the average user and based on the results of user testing, which is much better than designing without such data, but still much worse than smart design personalized for each user or usage scenario. Designing for the average user confuses novices and becomes a stumbling block for experienced users looking for a simpler or more efficient workflow.
I am knowledgeable enough to customize the desktop version of Outlook to reduce my cognitive load by automatically color-coding emails and appointments based on specific keywords and importance. I can program the hardware keys on my Wacom tablet to reduce muscle strain and build muscle memory for specific tasks. I can also remap keyboard shortcuts so that frequent commands can be entered with a few fingers rather than whole-hand gestures. Unfortunately, that knowledge simply does not apply to many programs, especially on mobile platforms, and I am not educated enough to program my own software or a mobile OS.
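The kind of keyword-and-importance color coding described above is, at its core, just an ordered list of rules evaluated against each message. A minimal sketch (the rule fields, colors, and keywords here are illustrative assumptions, not Outlook's actual rule format):

```python
# Hypothetical conditional-formatting rules, checked in priority order:
# the first rule that matches a message decides its color.
RULES = [
    {"keyword": "urgent", "importance": None, "color": "red"},
    {"keyword": None, "importance": "high", "color": "orange"},
    {"keyword": "newsletter", "importance": None, "color": "gray"},
]

def color_for(subject, importance="normal", default="black"):
    """Return the display color for a message, based on its subject
    keywords and importance flag."""
    subject = subject.lower()
    for rule in RULES:
        if rule["keyword"] and rule["keyword"] in subject:
            return rule["color"]
        if rule["importance"] and rule["importance"] == importance:
            return rule["color"]
    return default
```

For example, `color_for("URGENT: server down")` returns `"red"`, while a message with no matching rule falls back to the default color. The point is how little machinery this takes: the hard part is not the logic but the fact that most mobile apps expose no such hooks at all.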
All applications should share a single system-wide customization interface, and it should include usage-data collection. When you first launch a new application, the default interface for the average user would load, with text labels on the buttons that clearly explain their purpose. After a certain number of launches, once enough data has been collected, the GUI would display something like: 'I noticed that you use certain functions more often than others. Would you like me to make those popular elements more accessible? You can always restore the default settings, or manually adjust the features that matter to you, via Menu > Customize.' A smart personalization feature could also shrink specific icons to fit more functions on screen if desired.
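The mechanism described above, counting invocations, waiting for a launch threshold, offering to reorder, and always keeping a way back to the defaults, can be sketched in a few lines. This is a minimal illustration under assumed names and an arbitrary threshold, not a real platform API:

```python
from collections import Counter

class AdaptiveMenu:
    """Sketch of a usage-aware menu: counts how often each command is
    invoked and, after enough app launches, offers to surface the
    most-used commands, with a one-click return to the defaults."""

    PROMPT_AFTER_LAUNCHES = 10  # hypothetical threshold

    def __init__(self, default_order):
        self.default_order = list(default_order)
        self.usage = Counter()
        self.launches = 0
        self.personalized = False

    def launch(self):
        self.launches += 1

    def invoke(self, command):
        self.usage[command] += 1

    def should_offer_personalization(self):
        # Only prompt once enough launches have accumulated.
        return (not self.personalized
                and self.launches >= self.PROMPT_AFTER_LAUNCHES)

    def personalized_order(self):
        # Most-used commands first; ties keep their default position.
        return sorted(self.default_order,
                      key=lambda c: (-self.usage[c],
                                     self.default_order.index(c)))

    def reset_to_default(self):
        # The 'Switch to default user interface' escape hatch.
        self.personalized = False
        return self.default_order
```

With the Outlook Mobile example from earlier, a user who mostly taps Reply would see `personalized_order()` move Reply and Reply All ahead of Delete and Forward, while `reset_to_default()` restores the factory layout for tech support.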
The problem with custom user interfaces in the past was that tech support had no idea how a given GUI was configured, but that is easily fixed with a 'Switch to default user interface' button.
Of course, a 'smart' graphical user interface must fully integrate with a 'smart' voice interface and a 'smart' chat interface. If all applications were developed with an eye toward integrating with the phone's artificial intelligence, I could use the voice interface to change the theme of the entire system. Currently, no application on iOS or Android conforms to the system guidelines for graphical interface design. They are all different, with different colors and icon styles. The result is a patchwork of inconsistent user experiences. Windows on the PC has system-wide interface themes. On mobile Windows, apps 'pick up' accent colors, background colors, and so on, but it is not as polished as it could be, and even Microsoft itself does not follow its own structure.
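The system-wide theming being asked for here is essentially an observer pattern: apps register with one theme service, and a single change, whether from a settings panel or a voice command, restyles everything at once. A toy sketch (class and method names are invented for illustration, not any real OS API):

```python
class SystemTheme:
    """A single source of truth for theming: every registered app is
    notified whenever the accent color or dark mode changes."""

    def __init__(self, accent="#0078D7", dark=False):
        self.accent = accent
        self.dark = dark
        self._subscribers = []

    def subscribe(self, app):
        self._subscribers.append(app)
        app.apply_theme(self)  # pick up the current theme immediately

    def change(self, accent=None, dark=None):
        # e.g. triggered by a voice command: "switch everything to dark"
        if accent is not None:
            self.accent = accent
        if dark is not None:
            self.dark = dark
        for app in self._subscribers:
            app.apply_theme(self)

class App:
    """A registered application that restyles itself on notification."""

    def __init__(self, name):
        self.name = name
        self.accent = None
        self.dark = None

    def apply_theme(self, theme):
        self.accent = theme.accent
        self.dark = theme.dark
```

One `change()` call updates every subscribed app, which is exactly the consistency that is missing when each app hard-codes its own colors and icon style.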
General and super-AI
On the other hand, if we could create a sufficiently fast artificial intelligence capable of reaching a general level of intellectual development, perhaps we would no longer need to interact with computers at all. The term 'general level of intellectual development' describes an AI 'thinking' model comparable to human intelligence. I am not quite sure which human to compare it to, since IQ levels vary widely, but in theory AI will one day be just as capable as a person.
If progress continues at the same pace, which will not be true of all computing systems, then that moment is 20 to 30 years away. So I am more than sure that we will still need to interact with, and refine, narrow artificial intelligence for some time yet. Hence the need for smarter graphical, text, and voice user interfaces.
Evolution and natural intelligence were the primary way life developed on Earth. That is how human intelligence appeared, and it could evolve much faster. We were able to create things that random evolution alone would never have produced, and we did it very quickly thanks to our ability to share knowledge. But sometimes we forget things; that is human nature. We have forgotten how the pyramids were built. We have forgotten how to send people to the Moon. And we have forgotten how to design an effective human-computer interface that does not waste cognitive and muscle energy.
Super-AI is the next step. It should increase the efficiency of this evolution, provided it can store knowledge more reliably, but that will not happen unless we lay the right foundation.
In any case, isn't it time for smartphones to get smarter and learn to make our lives easier? Or will we keep poking at randomly scattered icons on the screen, as we have all this time?
By Adam Z. Lein