I've just finished Turing's Cathedral, a wonderful new book by George Dyson about John von Neumann's team at Princeton, which built one of the first computers. The title chapter contains a few excellent quotes:
"I asked [Turing] under what circumstances he would say that a machine is conscious," Jack Good recalled in 1956. "He said that if the machine was liable to punish him for saying otherwise then he would say that it was conscious...."
Jack Good would later explain that "the ultraintelligent machine ... is a machine that believes people cannot think."
The book also nicely details one of Turing's key observations:
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? Bit by bit one would be able to allow the machine to make more and more 'choices' or 'decisions.' One would eventually find it possible to program it so as to make its behavior the result of a comparatively small number of general principles. When these became sufficiently general, interference would no longer be necessary and the machine would have 'grown up.'
And finally, Dyson sums it up quite well:
The paradox of artificial intelligence is that any system simple enough to be understandable is not complicated enough to behave intelligently, and any system complicated enough to behave intelligently is not simple enough to understand.
(NYT articles notwithstanding, of course.)