That mouse you use every day will be completely gone in five years. It will be entirely replaced by touch screen displays, facial recognition, and Wiimote-like devices that you wave around in the air. This is according to the predictive genius of some guy who works at Gartner and probably makes ten times as much as I do each year. Oh, and his full-time job is making predictions about the future of technology.
For the record, the guy’s name is Steven Prentice. If he comes knocking at your door asking for hundreds of thousands of dollars for his predictive expertise, you might want to have some second thoughts. And maybe some third or fourth thoughts as well. Perhaps his quote was taken out of context: possibly he wasn’t saying mice and keyboards would be displaced on existing devices, but rather that we wouldn’t use mice and keyboards with tiny or specialized devices like phones and PDAs. If that’s what he meant, well, I’m sorry for the misunderstanding; be clearer next time, Mr. Prentice.
But I’ll be perfectly clear and as concise as possible: if he honestly believes that the mouse will be completely gone as an input device within five years on desktop/workspace computers, and particularly if he thinks it will be replaced by touch screens and motion-sensitive devices that we wave around in the air, he is going to be proven both completely wrong and astoundingly ignorant.
Here is my prediction. In five years, 80-90% of the people using desktop or workspace computers will still be using mice and keyboards. A percentage of the user population will switch to various touch-sensitive, surface-based, and motion technologies for specialized purposes or with highly space-constrained devices like smart phones. But outside of those limited venues, people will switch back to a keyboard and mouse for most of what they do.
I also predict that people using tiny portable devices will put up with inefficient and uncomfortable input approaches because of the limitations of those devices. Said people will generally not be under the illusion that those uncomfortable and inefficient input approaches should be migrated to situations where they have the luxury of space.
Has Mr. Prentice ever tried to do anything more in-depth than type a 50-character message on a touch device? Has he tried to draw, navigate, and select text and icons for hours on end using a touch device, be it a surface or a touch screen? Reaching and leaning into a device is tiring and, unless you are working with four or five others in some sort of collaborative session or poking at a tiny device like an iPhone, completely uncomfortable and inefficient. For specialized users like professional artists, digital input tablets provide precision input for drawing, but even then most of them switch to a mouse for general navigation.
Likewise with a motion-detecting device like a Wiimote. Waving a controller around in the air is not comfortable for more than a few minutes at a time, is extraordinarily inaccurate, and provides no additional precision or cognitive/spatial benefit with the type of interfaces we use today. It is great for playing a game, but sucks for most other types of input.
Years ago, numerous so-called experts predicted the demise of the keyboard and mouse, too. They said we’d all be using speech recognition to control our machines. As with touch and motion technologies, the basic mechanics of speech recognition do not improve on what we have today with the keyboard and mouse outside of very specific environments. In an office environment, dozens of people in close proximity chattering at their computers will never work effectively, even assuming voice recognition could be made 100% “natural speech” accurate. Only senior executives in private offices could use voice recognition, and then with no more efficiency than moderately fast typing. Since senior executives rarely if ever type anyway, and since the people who review their work for them can generally type much faster and with greater accuracy than the executives talk, speech recognition is of little use for normal purposes.
On the other hand, speech recognition is perfect for people who are already using their hands: folks flying an aircraft, perhaps, a surgeon in an operating theatre, or someone performing quality assurance on a manufacturing line. But such specialized use accounts for only a fraction of the overall usage. Just like with touch and motion input.
I’m sorry, Mr. Steven Prentice, but for a new input technology to completely or even substantially replace the traditional mouse and keyboard, it would have to do something remarkably better. It would have to be faster, less prone to error, take up less space, and probably all of these things and more. Something that is even slightly less efficient on these counts will not displace the incumbent technology, certainly not quickly. Neither touch nor motion input technology provides an appreciable improvement in any of these areas; in fact, both are markedly slower, more prone to error, and take up more space.
The next major change in input devices will either be a very gradual change over decades, or it will involve some kind of direct neural interaction. A device that can interact with the human nervous system is the only improvement that I could see rapidly displacing the existing keyboard and mouse duo. Such an interface could come, and some interesting progress has been made in the area, but at the moment I’m doubtful that it will come in my lifetime. I fully expect to retire in fifteen years or so with keyboards and mice still being the predominant input devices of choice.
If nothing else, his timeline is wrong. I fully expect that most of the computers I am using now (all but one running XP) will still be exactly where they are in 5 years. There is too much established infrastructure for there to be complete shifts in computing in 5-year spans like there used to be.
It’s like cars: even if tomorrow someone came out with an electric car for $30,000 that could go 80 mph, had a range of 500 km on a single 4-hour charge, and had batteries that lasted 10 years, most cars on the road would still be internal combustion 5 years from now. People can’t afford to just change what they drive overnight, and neither people nor corporations can afford to just ditch their computers even if something better comes along.
Handwriting recognition… if it worked, it might replace a keyboard and mouse, but it doesn’t, and despite what the pundits said, we are not all using tablet PCs now.
Heck, we are still using QWERTY keyboards! Remember when we were all going to be using Dvorak? Or the weird knobby one-handed things?
Not going to happen.
You are right on the timeline consideration, Chris. I fully expect that, at some point in the future, we will use a different “primary input method” for interaction with computers.
I suppose one way to look at Mr. Prentice’s statement is to broaden the scope of what we call “computers” to include things like game consoles and smart phones. If you do that, then based purely on the number of devices in use, it might even be possible to say that we already have switched from using mice and keyboards. Perhaps the dominant input method based on this broader definition of computers is the telephone keypad for SMS.
But personally I reject that kind of broadening of what defines a personal computer. Of course you don’t use a mouse and keyboard with your cell phone: not because they wouldn’t be a better interface for navigating websites and writing long emails, but because you bloody well can’t carry around a full-sized keyboard and mouse. So instead you are stuck with a painful and inefficient input method: stabbing at a tiny keyboard with your index finger and poking/swiping your greasy digits around on a very small and shiny display. These motion and pointing interfaces are all fantastic alternatives for those kinds of space-constrained situations.
For devices that sit at a desk and have the luxury of space, a keyboard and mouse is the best interface today for entering large volumes of text and navigating rich user interfaces. As for handwriting recognition, again it is primarily of value to space-constrained devices. I can easily type at four or five times the speed that I can hand-write a note, so even if the accuracy was 100%, I’d only really use it where a keyboard isn’t convenient.
A mouse is superior to “eye motion detection” or whatever they call it because there is no mistaking that I mean to select something when I use a mouse. I don’t know about you, but I use my eyes for a lot of things other than selecting icons on screen. A mouse is also superior to touch screens for the vast majority of uses because most of the time I don’t want to have to reach my whole arm across my desk to select or point at something. The repetitive stress injuries from such continuous reaching and stretching would be horrendous. And as for motion-sensing controllers like the Wiimote… they are inaccurate for all but the vaguest actions, and once again I don’t see that they offer any improvement over the humble mouse.
Mostly, I think guys like Mr. Prentice make crazy statements like this to sound visionary. To the average executive (who has staff to do anything so menial as touch a computer) fondling his iPhone or watching his kid play on the Wii, this kind of claptrap sounds cool. And five years from now, Mr. Prentice can just make some sort of vague statement about how unimaginative all the real workers are for not casting off their mice and adopting his touch sensitive, motion detecting future.
Ah well, you are a touch typist. Me, I can now type faster than I write, but it took many years… and even then, how much faster I type depends on how you count and correct typos 😉
If a computer could reliably read my chicken scratch when I write fast, it would probably be a close thing. For most people who hunt and peck, or some variation thereof… writing would be as fast or faster.
Besides … we could always use shrt hnd. 😀
As to Mr. Prentice making excuses 5 years from now… well, the great thing about futurism is that if you just make enough outlandish predictions, some will come true. Then you mention how you predicted ‘A’ 5-10 years ahead of everyone else, and just don’t mention how wrong you were about ‘B’, ‘C’, ‘D’, ‘E’, ‘F’, and ‘G’.
One would like to think that anyone who hired Gartner as a consulting firm would get to see every prediction made by its experts and decide for themselves whether those experts are clueless or visionary, but that isn’t going to happen either 😉