At first, users had to learn a set of commands typed into a console to communicate with the computer. If you were knowledgeable and dedicated, you could make your computer do some pretty neat stuff. It required a lot of time and patience to get a result, but it was the beginning of time...
Then came the graphical user interface, the GUI. Instead of entering commands, the user could simply click on an icon to execute a task. It was easier to handle, since visual cues were there to help the user achieve their goal. With the help of menus and forms, it was now possible to enter all the required parameters, with or without a keyboard. Icons and menus provided a context that the user could understand.
Lately, with mobile devices, we are seeing an evolution of these icons. Tiles and cards provide a richer context to the user, requiring less input and making tasks easier to handle. Tiles, cards and widgets also let users create their own visual context on the display. Still, we need to provide some parameters and pre-defined configurations to make those machines useful.
Computers, in all their forms, are just machines waiting for user input. They got a bit smarter by using GPS information, contact lists, calendar events and other personal information. A smartphone can take some "initiatives" by itself and notify you about traffic, your next day's schedule or what movie to see. It's a step toward smart assistance, but not quite there...
First, we need to ask ourselves if we really want a smart device acting as our assistant, like Smithers to Mr. Burns. It could be really useful and invasive at the same time. Imagine your own butler in your pocket...
This device would need to know us, listen to our conversations, monitor our activities and check our daily routine to figure out what we need next. This would be pretty intrusive and could lead to personal data leaks, with the risk of identity theft. Setting aside the inherent dangers of such technology, it would change the context decoding process in a drastic way: the device would be the one managing the contexts instead of the user. This is the next big step in computer evolution, reversing the roles...
Such a device would be able to manage our bills, plan our schedules, entertain us when required and help us find the best route before we get jammed in traffic. We would not have to ask it; it would propose the best solution. Of course, that would mean losing our capacity to evaluate a situation by ourselves, always relying on the machine instead of our brains... A bit scary, isn't it?
Let's hope that if some company figures out how to do that, they won't call it Skynet...
Patrick Balleux