Although the ideas presented in this section are somewhat speculative, they would be classed as conservative by such writers as Fjermedal (The Tomorrow Makers) and Drexler (Engines of Creation). This section is intended to strike a cautionary note or alternative to the much more extravagant forecasts of others.
For instance, Fjermedal quotes Drexler on the subject of building, into the volume of a cubic micron (the size of one synapse), a computer with a million times the speed and greater capacity than the entire human brain: "Something like that may seem very ambitious to us now, like 'Oh, it will take a thousand years,' which is the kind of thing people will say; well, that is a thousandth of a year once we get the AI technology in place," said Drexler.
A more immediately fruitful path for intelligence technology to follow might be related to the ongoing development of substantial improvements to the human/machine interface and to the portability of computing devices. It is not difficult to envision a small but powerful computational device that can be carried about in much the same way as a 1990s engineer totes a trusty multifunction calculator. Indeed, there are already pocket computers, but the device envisioned here is so much more that it has been called a pocket brain, though it might more properly be termed a Personal Intelligence Enhancement Appliance (PIEA). This device, which might or might not have a simulated intelligence, could develop along a natural enhancement path from hardware and software that is readily available today.
Initially, a PIEA would be a powerful computer, possibly utilizing voice input, together with some means of communicating on demand with larger machines for the purpose of information query and data transfer. Such devices could before long be sold with memory in terabyte quantities--each would become personal secretary, diary, dictation machine, and calculator/computer in one.
A second phase could be initiated by the development of some means to make a direct communications link with the human brain. This would not necessarily involve electrodes that tap into the grey matter, but could use attachments somewhere else on the body, so long as the nervous system could transmit some kind of code that the brain could interpret.
Indeed, electrodes to the body's nerves need not be used at all. Eyeglasses could incorporate an earphone and a small video screen at the periphery of vision. These would allow rapid transmission of audio and graphical information to the wearer of the appliance, though at first some manipulation by voice and electronic signals would be required for the reverse transmission. Perhaps voice control could be achieved with a throat patch that picked up subvocalization. A wearer could control the device without anyone else knowing it was there, and would eventually become unconscious of its presence and use.
As time went by and people became more skilled in their use, such devices could be further reduced in size to the point where their parts could be surgically implanted in the eye and ear. Eventually some neurological connections might be possible. The processing and memory portion might by then have shrunk enough that it too could be inconspicuously implanted. A skilled user might eventually be able to do complex mathematical computations, store sequences of logical arguments, or perform word processing or spreadsheet tasks mentally, perhaps without being consciously aware of whether the processing was taking place in their organic brain, or in their electronic auxiliary. Such an appliance would become the ultimate in personal computers--an extension of self that is a logical development of the systems available today. As a side-effect, research into such possibilities should also yield devices that restore sight to the blind and hearing to the deaf.
Nor would communications with the rest of the world be left out, for at the same time, cellular telephone transmission technology would have matured to the point where the PIEA could remain in continuous contact with the Metalibrary facilities. Individuals so equipped would make the bulk of the contributions to the Metalibrary, for they could, in effect, have their thoughts and actions monitored throughout the day and night, organized and analysed by the Metalibrary facilities, and attached to existing hypertexts automatically as appropriate.
Once the memory store of the PIEA was made sufficiently large, it would be able to store the pictures it showed to the user in a digitized form and in very large quantities. Communication through the Metalibrary or directly with another individual could be achieved by triggering the PIEA to send an appropriate picture or text to be shown to the intended receiver. Gradually, communications would take on the form of a stream of images, rather than of text. Individuals would send personal messages to each other as they happened to think of them. The PIEA would from time to time interrupt its wearers' activity to inform them of waiting mail.
It might eventually be possible to simulate mind-to-mind transfers of information and to hold thought conferences with other individuals so equipped. Various partnerships could form among people of similar or complementary interests, and the research or problem analysis done by these teams would be of a very different calibre and nature from anything any of them alone, or even in normal collaboration, could achieve. Such partnerships could become known as Metapersons. The services of the more successful would be highly sought after, as there would tend to be a multiplicative effect on the ability of the Metaperson participants, rather than an additive one. The individuals involved would retain their identities fully, except when their Metaperson was in session, and they could participate in other conferences or different Metapersons according to their interests and needs. For most practical purposes, the size of such partnerships would be limited. Two or three would link at first, then larger groups would attempt to form, but some practical maximum working size, perhaps about eight, would soon become apparent.
For legal purposes, a Metaperson along with the percentage shares held by the constituent individuals would have to be registered. This would be done so that contracts could be drawn up, payments made, and lawsuits brought in the same way as they are now with corporations. To reduce individual liability and raise advertising capital, promising Metapersons might indeed incorporate and sell shares to others as well. In such cases, the participants would more likely be engaged in general purpose occupations such as engineering, architecture, accounting, law, and so on. More creative Metapersons might be formed to tackle a single, very specific problem, and then disband once the collaborative task was finished. Some who did this would not want to tie themselves up corporately in such a fashion, and would want to be free to dissolve their participation in one Metaperson and form or join another at short notice. Temporary partnerships would also have to be registered, together with their effective duration.
This concept, if it were ever realized, would have a number of advantages over other proposed future data/communications systems. A substantial amount of what an individual did, thought, and contributed could be stored in the PIEA. Material could be sent to the Metalibrary only if the individual so directed, and thus some privacy would be retained. The Metalibrary itself could be fitted with the best available expert system and logical/inference software and become a "smart" omnipedia, capable to some extent of doing its own research.
In addition, since direct brain hook-up would be minimal, there would be no possibility of raiding someone else's thoughts or "reading their mind," though if security on the PIEA devices were lax, their stored memory might be stolen. However, the PIEA/Metalibrary combination does retain both individuality and privacy. Implant versions, on the other hand, would be secure from physical theft but could be monitored to determine the whereabouts of the wearer. There is some likelihood that people would become permanently and irrevocably "wired-in" to the Metalibrary. An interesting theme from science fiction is that such groups would grow until all of humanity became a hive constituting a single Metaperson, but there are no obvious technical advantages or efficiencies to this.
Even if it were possible and did offer significant benefits, not everyone would want to form such links with other people. They might have little need to do so for their work, and little desire to do so for personal interests. Others would be too poor--system use would cost money--and there would be others who found themselves unable or unwilling to make use of such a system. Perhaps many would object to its development on the grounds of the potential for abuse and privacy violations, but unless the system were compulsory, it is hard to imagine such objections carrying much weight.
Those who see immortality in cyborg form as the ultimate AI goal might be disappointed if the PIEA/Metalibrary scenario does turn out to be the end result of research into intelligence and the workings of the human brain. However, this may be a more achievable goal from a technical point of view, for elements of it exist now in primitive form in hardware, services, and software. The downloading scenario, on the other hand, does not appear to have any immediately realizable aspects.
In practice, such a Metalibrary would in fact seem to have an intelligence--not one of its own, but one borrowed from the substantial number of people connected to it at any given time. In addition, cultures could tend to become somewhat homogenized by the existence of the Metalibrary. Since it would also be a 24-hour-a-day facility, the "intelligent" response of the Library might tend to be relatively uniform at all times. The conduct of special events--such as a particle physics forum, popular entertainment, or a political meeting--would create alterations in the apparent "intelligence" of the system at particular times, but these would on the whole tend to be unnoticeable.
Perhaps philosophers would then argue over whether the Metalibrary was truly and independently intelligent, or whether it had only the borrowed appearance of such, but it is unlikely that anyone would want to put the issue to a test by disconnecting all its users, even for a brief period of time.
The end result of this vision of machine computability differs somewhat from that of simulating or duplicating human intelligence--the Metalibrary would still be a machine, its "intelligence" neither taught nor transferred, but only borrowed. Still, the Metalibrary as envisioned here would be a powerful knowledge and problem-solving tool. In some form, it is not only a more probable extension of current technology, but a more potent one than simulating human brains mechanically, at whatever intelligence level.
In the short term, those who use the PIEA and Metalibrary would be able to achieve a different version of immortality through the writings, world view filters, and stored images each one leaves behind. The Metalibrary would be able to present orally and visually the views of the long-dead using these stored images. An appearance of immortality would be created, but there would be no transfer of personality, and hence no real continuance within the machine. Whether personality transfer to a machine can ever be achieved is unknown, and may be unknowable. On the other hand, the pseudo-intelligent Metalibrary facility may be a straightforward extrapolation of what is already available.
Profile On ... Issues
A Case Against Strong A. I.
Advocates of strong artificial intelligence believe that machines will eventually be made that can duplicate the functionality of the human mind, and that such machines can be termed intelligent in essentially the same way as a human being. Here are some brief outlines of common arguments that this is impossible -- some of them may easily be refuted, but they may as easily be strengthened (see the exercises).
Godel's theorem
In the 1930s Godel proved that even arithmetic truths cannot all be reduced to an encoding in some syntactical notation, that is, that arithmetic meaning is something bigger than notation. Therefore, human thinking as a whole also cannot be fully represented by a machine encoding.
Logic and Chemistry
Even if thought is only chemistry and electricity, we can never know that this is true, for such a knowing would be predetermined by physical processes, and not be a result of logic. It would not be a rational knowing.
Creation and evolution
If one is a believer in creation, it took an infinite God to create the human mind. If one believes only in evolution, it required billions of years of the operation of some (as yet unknown) process to achieve it. It is at least premature, if not presumptuous, to suppose that the same feat can be duplicated by humankind at any time in the near future, if at all.
Brains and computers
The purpose of the brain is to exercise control over the entire biological entity of which it is an integral part. The purpose of a computer is to calculate. The two activities are so fundamentally different that no elaboration of the latter will ever be equivalent to the former.
A religious viewpoint
The purpose of the mind is to give intelligent, voluntary and wilful service to God. No corresponding activity on the part of a machine is conceivable.
Beliefs, circuits, and chemistry
If mental activities such as the holding of beliefs are equated with electrical circuits or chemical reactions, then such common devices as, say, thermostats and firecrackers have beliefs (or something closely related to beliefs). If this is the case, then artificial intelligence is everywhere, and it is not particularly a science of the mind, nor even of the brain.
Adding machines and understanding
A mechanical adding machine can manipulate symbols and produce results to which a human can attribute meaning. This does not mean the adding machine understands addition. Making the parts smaller and faster, or even adding robotic mobility and human-like features changes nothing. There is no sense in which human intelligence can ever be ascribed to an assemblage of such parts.
Machines and intentionality
A computing machine that assembles data according to some pattern and produces output to which humans attach meaning does not do so deliberately or intentionally, but because its program determines that it shall. An important human element is missing.
The Chinese argument
A man is placed in a box with a collection of Chinese symbols and a set of rules for input and output based on those symbols, and is then passed questions, written in those symbols, by outside operators. His ability to produce correct answers based on those rules does not imply that he (or the system as a whole) understands Chinese--even if he can produce results indistinguishable from those of a native Chinese writer. [Due to John R. Searle, writing in Artificial Intelligence - the Case Against]
Vision, recognition, and knowledge
A machine that is capable of determining, say, the sex of an individual from a photograph cannot be said to understand what sex is. The storage and analysis of patterns for comparison with television images is not all there is to vision, and does not approximate what humans see, or how they reason from what they see.
God in a box?
Suppose thinking can be automated. Are machine analyses of data to be considered unassailably correct? Is the machine's "world view," whether explicitly or implicitly programmed, the only possible one? If so, merely human ideas and views would then be invalid, and only the machine's conclusions would have validity. Is this what anyone wants to produce? Or, can unassisted machine analysis even yield meaningful interpretations of data, much less a single correct interpretation?
What goes along with intelligence?
Can artificial mistakes, artificial pain, artificial emotions, artificial beliefs, artificial understanding, and artificial free will be separated from artificial intelligence? If they cannot, is there any point to building constructs that may choose not to benefit humanity, but attend to their own agenda?
The microscopic and the macroscopic
There is no reason, other than the confidence of some theoreticians, to suppose that processes describable in the microscopic and bounded world of chemical reactions and electrical circuits can ever be extrapolated even to a small subset of the macroscopic world of human experience. That is, the building of a circuit or a program like a neuron provides no reason to believe that human intelligence, understanding, experience, or emotions can ever be approximated.
Abstraction
As with understanding, there seems every reason to believe that abstraction (the process of extracting the important from the mass of detail) is a uniquely human ability. If this is so, then even expert systems will never be as proficient as human experts; they will only be sophisticated database searchers. [Hubert and Stuart Dreyfus (Mind Over Machine) claim to have shown that the latter is true.]
Ideas and meaning
The ideas that give meaning to data are not inherent within data, but must be imposed on it by an intelligent mind. Thus, the ability to generate ideas to give meaning to the world cannot be encoded. Whatever data a machine collates and whatever output a computing device generates, it has meaning only as it is assigned by human beings, either before processing (inherent in the program) or afterwards.
Ideas and their consistency with data and action
As such toxic ideas as racism illustrate, the ideas by which humans live and govern their actions sometimes operate independently of any data -- they are anti-informative. Likewise, love and altruism are ideas that can act contrary to their owners' self-interest. Such ideas are not inherent within any data and cannot be encoded as algorithms.
There is no possibility that computers will ever equal or replace the mind except in those limited functional applications that do involve data processing and procedural thinking. The possibility is ruled out in principle, because the metaphysical assumptions that underlie the effort are false. -- Theodore Roszak (The Cult of Information)