I have always liked science fiction. Some of the first books I read after “Clifford the Dog” were old Tom Swift books…then on to Isaac Asimov, Robert Heinlein, Arthur C. Clarke, Poul Anderson, Clifford Simak…all the usual suspects.
Since my late teens, the bulk of my reading has been on the fantasy side of the bookstore. I have probably read more fantasy than pure science fiction, and I’d say that some of that is a result of the “acid trip” science fiction of the ’70s. I think I basically got turned off: it seemed like I had to be an uber-intellectual drug addict to understand half the books that were around at the time. I read a lot of short science fiction, but things like Neuromancer and the rest of the cyberpunk generation never really caught my attention. It all seemed so strained to me: I was a computer geek, and reading about some guy downloading his “wetware” seemed basically like the technobabble of the ignorant.
I am about halfway through reading Accelerando by Charles Stross, and my opinion about “cyberpunk”, if that’s the genre you want to stick this book in, has changed. For probably ten years now, I’ve been asking my friends on a regular basis what they would do if faced with the situation described in the following bullets:
- People today rely more and more on the Internet and online data sources to improve their functional intelligence. Example: if someone mentions a person or technology, you can Google it and know what they are talking about in a fraction of a second. Not long ago, being able to have such instant awareness of vast arrays of knowledge would have made you a genius. Today, it’s old hat.
- Computing technology is becoming smaller and more “connected” at a dizzying pace. You can carry in the palm of your hand a 1 GB computer that is wirelessly connected to the Internet. That tiny 500 gram computer likely has ten times the computing power of a 10 kilogram desktop machine only a few years old.
- Direct human/machine interfaces are not science fiction; such interfaces exist today, albeit crudely, in the form of devices allowing people with missing or paralyzed limbs to control artificial manipulators purely with their minds.
- Within our lifetime, we will likely have the option of a direct “personal” interface to the Internet. We’ll be able to be directly connected to all the massed knowledge, good and bad, of that resource in real time. Initially this will come via wearable computers with special glasses and sub-vocalization mikes, but at least some of the function will be via interfaces to the human nervous system.
So…would you do it? Would you have that implant installed, assuming it was demonstrated to be safe (as safe as, say, Lasik eye surgery is today)? No? How about if some of your co-workers went for the procedure, and were now effectively far more capable than you in knowledge-based work? Would you blame your employer for advancing that co-worker over you, given that they can answer questions, solve problems, and get the job done faster than you? What if you were applying for that job, and the other applicants were “enhanced”?
I’ve heard all the “computers make you lazy/stupid” statements in the past: the “people who have computers can’t add a row of numbers” or “a computer weakens your memory.” But what I have always called “intelligence” isn’t memory or arithmetic skills: it’s the ability to form new ideas and see relationships between data. And that data is becoming more and more intimately available. We’ll increasingly be faced with the question of how deeply we want to be connected.
These are not science-fictional questions. It’s almost a certainty that we’ll be facing exactly these situations within our lifetime. Accelerando takes my question and supercharges it. What kind of humanity will exist when the net global capability of artificial processing outstrips that of humanity? What will a human be when their intelligence is as much outside their head as inside it? When their autonomous software proxy agents can effectively think and act as independent yet symbiotic entities with their “host” human?
It’s speculative fiction at its best…along with a bit of humour. The idea of infinitely recursing, virtually intelligent shell companies runs smack up against the denial-of-service recursive lawsuit, and the future of the free software movement is taken to a logical yet weird extreme. And the scene wherein one of the main characters has his exointellect stolen, and struggles to manage with his massively diminished “self”, is wonderfully drawn. I can see trivial parallels to my own loss and frustration when I’m unable to access the Internet from work. When the thief activates the exointellect and its AI agents try to cope with the vastly diminished “wetware” host they find themselves stuck with…it’s not slapstick, but it is educationally funny.
An interesting thought, and one that brings to mind some of the risks in blurring the boundary between “tool user” and “tool”: Not too long ago I read a little news article about how a bunch of Democrat geeks managed to tweak enough pages so that when you typed “bogotefd idiot” (or some similar pejorative) into Google, it returned George Bush. Then some Republican geeks did the same thing with one of the Democratic candidates as the target/victim.
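As far as I understand it, the trick there was what people call “Google bombing”: early link-based ranking gave a lot of weight to the anchor text of inbound links, so enough pages all linking one URL with the same phrase could tie that phrase to the target. Here’s a minimal toy sketch of the idea, with the phrase, URL, and page count all made up for illustration; it just generates the sort of static pages involved, nothing more.

```python
# Toy sketch of the "Google bombing" mechanism: many pages all using the
# same anchor text to link to one URL, so a link-weighting ranker comes to
# associate that phrase with the target page. Phrase, URL, and page count
# are invented for illustration.
import os

TARGET_URL = "http://example.com/some-politician"   # hypothetical target page
ANCHOR_TEXT = "some pejorative phrase"              # the phrase being "bombed"
NUM_PAGES = 50                                      # pretend these are scattered blog posts

os.makedirs("bomb_pages", exist_ok=True)
for i in range(NUM_PAGES):
    html = (
        "<html><body>\n"
        f"<p>Otherwise ordinary blog post number {i}.</p>\n"
        f'<p><a href="{TARGET_URL}">{ANCHOR_TEXT}</a></p>\n'
        "</body></html>\n"
    )
    with open(os.path.join("bomb_pages", f"page_{i:03d}.html"), "w") as f:
        f.write(html)

print(f"Wrote {NUM_PAGES} pages, each linking '{ANCHOR_TEXT}' to {TARGET_URL}")
```

The point isn’t the HTML, of course: no single page looks malicious on its own, which is why this kind of thing is hard to police page-by-page and had to be fixed on the ranking side instead.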
All harmless fun and silliness, and corrected by Google with a change to their search algorithms as soon as they found out about it. But…and here’s the kicker…what happens when a significant portion of society uses Google and other online sources as supplemental memory and intelligence? Not as a handy reference, but, through habit and seamless interfaces, as part of what they KNOW? Then someone with the ability to spoof those online sources, even for a little while, can in effect alter the memories and knowledge of society.
You think urban myths are hard to kill now? Imagine when people have “proof”. 😉
People have always used propaganda, advertising, and control of schools and libraries to try to rewrite history and to mold the way people think for their own ends. And it works. But it is a slow and cumbersome process and doesn’t allow fine, detailed control. We have (or most of us should have) a certain skepticism and distrust of media and other outside sources. We take some convincing before we accept as fact what they tell us…or at least all of what they tell us. But we don’t apply the same rigorous filter to our inside thoughts. If people become so integrated into the online world that they start to rely on it as if it were their own memory and knowledge, then anyone who can change it has in effect mastered thought control.
I don’t really buy the whole concept of smart tools that are uniquely adapted to individuals. I prefer the Unix philosophy of constellations of tools with uniform methods of usage that I can pick up anywhere and run with; the exceptions are annoying, like having to copy my .vimrc to every new computer I start using. Tools remain tools, and not part of the user, because the interface is the same for everyone. People may show individuality by choosing different subsets of available tools and using them in unique ways, but the idea of convergence/strong coupling strikes me as inflexible: deeply customized and self-embedded tools are going to be hard to change and upgrade.
Having instant access to Google all the time doesn’t change all that much, except to depress the value of carrying around lots of trivia and devalue mindless regurgitation of news and information sources. Originality will be at a premium when anything you say could be put into Google to show that someone already said it, and someone else has already discredited it (obviously it’s already like that in the online world; conversations in the real world are next). The best things to have in your head are good searching methods, for one, and complex processes/methods/models relevant to your work and life, for two (search methods also falling into that category, of course). The things that are hardest to understand and learn, and most useful to you, are your most prized mental possessions.
Good point, Chris. There are continuing debates about Wikipedia, for example. Is it a valid reference when the editors are the unwashed masses? When an anonymous person can alter what is in effect an encyclopedia entry and “change history”?
I can see both sides of the argument. On the one hand, I see the value of experts reviewing and confirming data before it becomes “published”. On the other hand, I can see where the “experts” have an unduly inflated opinion of their infallibility due to the perception they have of the value of their education or the “dues” they have paid.
Wikipedia is incredibly responsive to new developments, and I’ve often found better and more thought-provoking entries there than I’ve ever seen in a traditional encyclopedia. But can it be trusted? I don’t think I’d want to rely on any single source of information as authoritative. And I guess that’s where the Internet, properly used, can be valuable.
To put it another way…a “smart”, open-minded person with access to the Internet can become better informed and leverage its data to gain new understanding. A “fool” with a closed mind can use the Internet to support their biased and ignorant viewpoint. The trick is knowing, at a given point in time, which one you are 😉
Greetings, Patternjuggler!
I agree with the premise that having a singular “custom” mental toolset wouldn’t be the way to go. I would personally prefer a general purpose/generic neural interface, more or less like having a sandboxed wireless network link. An individual user would customize their interface and the set of tools that they deploy within that space, just like they do with a computer desktop today.
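To make that picture a bit more concrete, here is a loose sketch of what I mean, with every name in it invented purely for illustration: the interface itself stays generic and uniform, and individuality comes only from which tools each person chooses to plug into their own sandbox.

```python
# Loose illustration of a generic, sandboxed interface that users
# personalize only through the tools they choose to install in it.
# All names here are invented for the sake of the example.
from typing import Callable, Dict

class SandboxedInterface:
    """A uniform link that exposes only the tools its owner has installed."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[str], str]] = {}

    def install(self, name: str, tool: Callable[[str], str]) -> None:
        """Add a tool to this user's personal sandbox."""
        self._tools[name] = tool

    def query(self, name: str, request: str) -> str:
        """Run a request through an installed tool, if the owner has it."""
        if name not in self._tools:
            return f"no tool named '{name}' installed"
        return self._tools[name](request)

# Two people with the same generic interface, customized differently.
alice = SandboxedInterface()
alice.install("lookup", lambda q: f"encyclopedia entry for {q!r}")

bob = SandboxedInterface()
bob.install("translate", lambda q: f"translation of {q!r}")

print(alice.query("lookup", "Accelerando"))   # Alice's toolset handles this
print(bob.query("lookup", "Accelerando"))     # Bob never installed a lookup tool
```

The interface is identical for everyone; only the contents of the sandbox differ, which is the same individuality-through-tool-choice that you described.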
I also think we are on the same page regarding instant access to information not making one “smarter”, at least not by my definition of intelligence. But I do believe that a person lacking such instant access would be less valuable. Sort of like wanting to be a white-collar worker today without having any familiarity with computers.
As I noted in my original posting, I’ve largely skipped the last 20 years of science fiction, so I’m somewhat out of the loop on concepts like the computing singularity. Personally, I’m doubtful that mere processing power or MIPS in any way equals intelligence, or even the potential for intelligence. I also think that it’s somewhat naive to take a trend, such as the steady doubling of processing density that we’ve come to expect, and project it infinitely into the future.
That said, I *do* believe that direct, personal, transparent, and near-instantaneous access to a data resource like the Internet is around the corner. Processing capacity is not the issue: the whole keyboard, mouse, and monitor interface is where the next great bottleneck lies. Voice recognition is a bogus solution in the real world: far too slow, and impossible to work with in a cube-farm office. A higher-bandwidth, more effective, and less intrusive interface is what is needed.
I think the neural/implant “UI” revolution will cause a minor societal inflection point. Today we can keep the disparity between the technology “haves” and “have-nots” at a distance. When the difference becomes personalized, down to actually altering your “self” to become more hooked in, I think it will challenge a lot of people. Will having such an interface immediately make you “smarter”? No. But not having it will radically curtail your abilities and reduce both your capacity to “produce” and, effectively, your value.