So Landon is writing about the idea of a 'technological singularity' in one of his classes, and wanted my opinion on the matter, given my computer science background.
As a quick working definition, "a technological singularity is a hypothetical event occurring when technological progress becomes so rapid that it makes the future after the singularity qualitatively different and harder to predict."[wiki] There are four ways authors have suggested such a singularity might take place, but the most obvious example is a super-intelligent computer (AI): a machine that can intellectually outperform a human in every way, including design, so it could build better versions of itself at an accelerating rate, and human understanding would get left in the dust.
Overall, I think this theory is pretty far off, particularly the super-AI approach to it. It holds a small amount of water; for example, Vinge (the original pioneer of the term) states that "the arrival of self-aware machines will not happen until after the development of hardware that is substantially more powerful than humans' natural equipment." In my opinion, the equipment side is plausible. Processing speed and hard drive space continue to increase. While our human memories are limitless as far as we know, I believe it's possible hard drive space could get large enough to be essentially limitless for any practical purpose.
The flaw I see is the software complexity of such a self-aware system. True, our hardware capacity essentially doubles every 18 months or so, but our software intricacy does not evolve at the same accelerated rate; just look at Windows. Its first release was in 1985, which would mean that by now it should have doubled in complexity 16 times, making it 2^16 = 65,536 times more sophisticated. I wouldn't say it has done that. Yes, hardware has increased at approximately that rate.
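Just to show the arithmetic: a quick back-of-the-envelope check, where the 18-month doubling period and the count of 16 periods are the rough figures used above, not precise measurements.

```python
# Back-of-the-envelope check of the doubling arithmetic above.
periods = 16             # roughly 1985 to now, counted in ~18-month steps
growth_factor = 2 ** periods
print(growth_factor)     # 65536 -- the factor hardware has grown by,
                         # and the factor software complexity clearly has not
```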
A huge problem is decision making. Current AI textbooks define the field as "the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success" [source]. The problem with this is that a machine is bound by strict logic; everything in computers comes down to 1's and 0's in the end, some quantitative value. In practice, the way this works is by heavy reliance on probability.
For example, when you decide to go to the store, how many factors go into your decision? What are your primary motivations? Get some eggs? Save the most money? Be home by 5? Don't die? If "don't die" is your number one driving motivation, there's a chance that if you leave the house you'll get in a car accident, so you never leave the house. Probability to the rescue! Bayesian probability lets the designer factor in everything they consider relevant and compute a numerical probability for each outcome. The system can then be given an acceptable threshold to follow, like "anything greater than a 5% chance of death and I won't go." The problem is that for every new factor you add to the math (Is it rush hour? How fast is my car? How good was the safety check on my car? How fast can I react?), the complexity grows exponentially. Plus, how much of a difference does each of these factors make? These are all decisions made by the creator; a rough sketch of this kind of threshold rule is below. Suddenly, writing a system to make a simple decision like whether to go to the store becomes an immense task, strongly affected by the author's bias and perception.
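Here is a minimal sketch of the kind of threshold rule I mean. Nothing in it comes from a real system; the factor list, the per-factor risk numbers, and the 5% cutoff are all made-up stand-ins, which is exactly the point: every one of them is a choice the programmer makes.

```python
# Hypothetical designer-chosen risk contributions for a trip to the store.
# Each value is the assumed probability of a serious accident given that factor.
risk_factors = {
    "base_driving_risk": 0.0001,
    "rush_hour": 0.0004,
    "car_failed_safety_check": 0.0010,
    "tired_driver": 0.0006,
}

ACCEPTABLE_RISK = 0.05  # the "anything greater than 5% and I won't go" cutoff

def probability_of_accident(active_factors):
    """Combine per-factor risks as if independent: P = 1 - prod(1 - p_i).
    Treating them as independent is itself a designer simplification."""
    p_safe = 1.0
    for name in active_factors:
        p_safe *= 1.0 - risk_factors[name]
    return 1.0 - p_safe

def should_go_to_store(active_factors):
    return probability_of_accident(active_factors) <= ACCEPTABLE_RISK

print(should_go_to_store(["base_driving_risk", "rush_hour"]))  # True, under these made-up numbers
```

Note that this toy version assumes the factors are independent. A model that actually captures how n yes/no factors interact needs on the order of 2^n probabilities, which is where the exponential blow-up comes from.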
The way AI still moves forward today is that, when faced with such complexity, designers focus on a very small set of the factors they consider most important and simply ignore the rest. However, a super-intelligent machine capable of causing a singularity would not have this luxury. It would have to be able to consider all angles; otherwise a human could outperform it, or think of things the computer never can, because it wasn't programmed to.
Even if some way were found to tie multiple small decision-making processes together into a giant system, each subsystem will have been created by different developers, who each programmed in their own bias about what is important and what isn't, or what an acceptable threshold of risk is. Across the combination of thousands of specialized decision-making processes, conflicts of interest are bound to occur (a toy illustration follows below), and the AI would be practically helpless.
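Here is that toy illustration, purely hypothetical: two independently written decision modules disagreeing simply because their authors baked in different risk thresholds.

```python
def safety_module(p_accident):
    # Written by a cautious team: refuse anything above a 1% risk.
    return p_accident <= 0.01

def errand_module(p_accident):
    # Written by a team optimizing for getting tasks done: accept up to 10%.
    return p_accident <= 0.10

p = 0.05  # the system's estimated accident probability for this trip
print({"safety says go": safety_module(p), "errands say go": errand_module(p)})
# {'safety says go': False, 'errands say go': True} -- the combined system is deadlocked
```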
I further submit that software can only do what humans tell it to do. It can't "learn" in any mysterious, unpredictable manner. Machine learning is still deterministic and thus observable by humans. In fact, the bias used by the machine to "learn" things is also programmed in by humans. In order for a computer to have a "new thought," it has to follow a logic path set for it by a human; a small sketch of what that looks like in practice follows below.
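To make that concrete, here is a minimal, hypothetical sketch of a classic "learning" algorithm (a perceptron-style update). The data encoding, the learning rate, the number of passes, and the update rule are all written down by a human, and running it twice on the same inputs gives exactly the same result.

```python
# Hypothetical training data: (features, label) pairs encoded by a human.
data = [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([1.0, 1.0], 1), ([0.0, 0.0], -1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1   # a human-chosen knob

for _ in range(10):   # the number of passes is also a human choice
    for features, label in data:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if score >= 0 else -1
        if prediction != label:
            # The update rule itself is the "logic path set by a human".
            weights = [w + learning_rate * label * x for w, x in zip(weights, features)]
            bias += learning_rate * label

print(weights, bias)  # same inputs always give the same result: fully deterministic
```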
I've probably ranted about this enough. I find the idea of a technological singularity, at least one brought about by creating a super-intelligent AI, very implausible. I have different gripes with the other ways a technological singularity is theorized to happen, but each would require an equally long rant, so for now I'll leave it at this.