I predict that in the coming decades, humanity will likely create a number of powerful institutes that mislead politicians into funneling funding into esoteric, yet useless, areas of science. This will result in a new form of government, called Idiocracy.
In this post I will criticize some aspects of the foundations of the Singularity Institute for Artificial Intelligence. On the other hand, I found some of the philosophical and scientific questions put on the table by scientists of the institute very intriguing, but in general I think the institute's advocacy is fragile in several respects.
I'll discuss some concepts that I think are inaccurate in the overview and mission of the Institute ("What is the Singularity?").
"Human intelligence is the foundation of human technology"

Human science, rather than intelligence alone, is the foundation of human technology; in general, intelligent people not devoted to science do not develop new technologies.
"If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect."

The loop is already closed: humans, with or without the help of technology (like writing), can improve themselves through self-reflection, thinking, and decision-making processes. Perhaps the central issue is speed; they comment more on that aspect further ahead.
"but it would also arise, albeit initially on a slower timescale, from humans with direct brain-computer interfaces creating the next generation of brain-computer interfaces"

I can't see why new interfaces would bring new intelligence; classic speech or keyboard input has more to do with intelligence and symbolic processing than images and analog movements do.
"The current estimate is that the typical human brain contains something like a hundred billion neurons and a hundred trillion synapses."

I believe the key ingredient here is linkage: although there are a hundred trillion direct connections in the brain (synapses), the number of paths between neurons is far larger than that. Regarding the importance of links in intelligence, I see much more potential in AI involving the Internet than in AI involving human-machine interfaces or virtual environments. As for the raw numbers, atomic operations per second are no guarantee; intellectual processes involve several entangled systems. By analogy, a third-world country with the same population as a first-world country is not thereby equally wealthy.
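As a toy illustration of how path counts dwarf direct connections, here is a small Python sketch (my own example, not from the Institute's text) counting simple paths in a complete graph on five nodes:

```python
def count_simple_paths(n):
    """Count ordered simple paths (length >= 1) in the complete graph K_n.

    In K_n every ordered sequence of k distinct vertices (k >= 2) is a
    simple path, so we sum the falling factorials n * (n-1) * ... over k.
    """
    total = 0
    for k in range(2, n + 1):
        seqs = 1
        for i in range(k):
            seqs *= (n - i)
        total += seqs
    return total

n = 5
edges = n * (n - 1) // 2            # 10 direct connections in K_5
paths = count_simple_paths(n)       # 320 ordered simple paths
print(edges, paths)
```

Even with only five nodes, 10 edges already support 320 distinct simple paths, and the ratio explodes as the graph grows.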
"However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one to two years. The original Moore's Law says that the number of transistors in a given area of silicon doubles every eighteen months; today there is Moore's Law for chip speeds, Moore's Law for computer memory, Moore's Law for disk storage per dollar, Moore's Law for Internet connectivity, and a dozen other variants."

These "laws" cannot continue forever: thermodynamics limits any kind of computation. Wires are getting thinner, more heat is dissipated, and even quantum computers suffer from these limits when they interact with non-quantum computers. For a more detailed description, read for example the theory of the thermodynamics of computation. On the economic side, this growth is also unsustainable: production of goods cannot grow forever at an exponential rate, even if this benefits the governing elites. Power consumption, too, has been growing exponentially.
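One concrete thermodynamic limit is Landauer's principle: erasing a single bit of information dissipates at least k_B·T·ln 2 of energy. A quick back-of-the-envelope sketch of that floor at room temperature (my own illustration, not from the Institute's text):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact, by SI definition)
T = 300.0            # room temperature, K

# Landauer limit: minimum energy dissipated per bit erased
landauer_joules_per_bit = K_B * T * math.log(2)
print(landauer_joules_per_bit)   # roughly 2.87e-21 J per bit
```

Real chips today dissipate orders of magnitude more than this per logical operation, but the floor is nonzero, so exponentially growing computation eventually means exponentially growing heat.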
"But leave aside for the moment the question of how to build smarter minds, and ask what 'smarter-than-human' really means. And as the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don't know because we're not that smart. We're trying to guess what it is to be a better-than-human guesser."

With this kind of sophistic argument you can protect concepts like God or the like: if a supreme being exists, then you cannot comprehend its decisions or thinking. Anyone can speculate about things that don't exist: chimeras, flying hippos, et cetera. The last phrase resembles the Liar's Paradox ("This sentence is false") formulated in an artificial-intelligence context, or Berry's Paradox ("the smallest positive integer not definable in under eleven words"); in this case it is something like "the smallest better-than-human intelligence describable by humans".
"Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite its own source code can potentially make itself faster as well."

From the psychological point of view, I think that free will can be used for self-improvement. On the other hand, transformations of computer code have been shown to lead to unpredictable results: if the results were predictable, basic theorems of Computability Theory would be false (computers cannot decide non-trivial properties of computer programs in general; see Rice's Theorem). Merging both points of view, it is not difficult to argue that self-improvement is not predictable; complex decisions like this can often lead to opposite effects.
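The undecidability result behind Rice's Theorem rests on the classical halting-problem diagonalization, which can be sketched in a few lines of Python (the function names are my own; this is the standard textbook construction):

```python
def make_counterexample(halts):
    """Given a claimed halting decider, build a program it must misjudge."""
    def g():
        if halts(g):          # the decider says g halts...
            while True:       # ...so g loops forever, contradicting it
                pass
        # otherwise the decider says g loops forever, yet g returns here
    return g

# Any candidate decider is wrong on its own counterexample. For instance,
# a decider that answers "never halts" for every program is refuted by g:
g = make_counterexample(lambda f: False)
g()   # returns immediately, contradicting the "never halts" verdict
```

Since no decider survives this construction, no general procedure can predict what rewritten code will do, which is the formal core of the unpredictability claim above.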
"Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence"

Well, faster intelligence has been refuted with the "big numbers don't matter" argument; smarter intelligence is a sophism (smarter than what? is there a limit to human intelligence?); and the recursive self-improvement concept, I think, is not applicable to practical computers. It is only a theoretical artifact, for example Optimal Universal Search, due to the exponential-time algorithms and huge constants involved.
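To make the "huge constants" point concrete: Levin-style universal search gives each candidate program p a time share proportional to 2^(-len(p)), so its asymptotically optimal bound hides a multiplicative factor of 2^len(p). A sketch, assuming a modest 500-bit optimal program (the numbers are my own illustration):

```python
# Universal (Levin) search is asymptotically optimal, but the time bound
# carries a hidden multiplicative constant of 2^len(p) for the best program p.
program_length_bits = 500                 # assumed length of a modest program
hidden_constant = 2 ** program_length_bits

atoms_in_observable_universe = 10 ** 80   # common order-of-magnitude estimate
print(hidden_constant > atoms_in_observable_universe)   # True
```

An optimality guarantee whose constant exceeds the number of atoms in the observable universe is a theoretical artifact, not a practical engine of self-improvement.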
Maybe in a following post I will attack the problem of "Why Work Toward the Singularity?". Is their funding money headed toward a Black Hole Singularity?