Logicomix: A Short Review

I just read Logicomix. Very interesting. It should have taken the Tractatus more seriously, but that's OK; even a lot of professional philosophers don't understand it.

The impact of World War One on modernity is beautifully captured by a two-page layout of Wittgenstein standing in the middle of no man's land and a caption by Russell saying, "Put a man on the edge of the abyss, and in the unlikely event that he doesn't fall in, he will become either a mystic or a madman."

The themes betray a computer scientist's fondness for Turing, algorithms, and computation, a fondness that, even if not wholly misplaced, is not the answer to everything that many computer geeks think it is.

Yet again I find myself wishing that more people would read Hubert Dreyfus.

But Dreyfus himself doesn't understand the point the Philosophical Investigations makes about psychology, and in his commitment to a Heideggerian phenomenology founded in metaphysics, as opposed to a Wittgensteinian one founded in language, he concedes too much to the model makers.

As does Logicomix, ultimately. The story of David Hilbert's program, which is what the book tells through the experience of Bertrand Russell, is not in fact the Greek tragedy the authors make it out to be, in which redemption is born out of loss. Nor is it the Shakespearean tragedy that the repeated allusions to Hamlet would imply. It is, rather, a black comedy, one whose absurdity lies in a program that emerged as a resistance to the scholasticism of the previous age yet failed to reject that age's rigid rule-making, the very source of the problems it created.

In the end, the correct solution is only hinted at by Gödel's work on incompleteness, as indicated by the incredibly lame idea that has followed in its wake: that our brains are really just Turing machines. It is this idea that has led large swaths of our intellectual culture off into yet another messed-up cul-de-sac, the notion of an algorithmic universe as the meaning of existence. And it is this idea that Logicomix tries to attack by recasting the conclusion of the Oresteia as the solution to the story of modernity. It ultimately fails, though, because it gives too short shrift to absurdity and the mystical, whose only champions in the book are the ideas of the later Wittgenstein, which aren't addressed except in the biographical appendix, and a brief encounter between Russell and a group of Dadaists, who are dismissed with the usual conservative hand-waving cliché of putting a toilet in an art gallery.

All in all, it's interesting, and for people who don't already know this story, it's a good introduction. However, I think it fails to shed much new light on the topic. I'd recommend as companion texts "Wittgenstein's Poker" and David Berlinski's "The Advent of the Algorithm" for a fuller picture of the ideas that are only touched on here but not fully explored. And for some countervailing views, of course, Hubert Dreyfus's "What Computers Still Can't Do" and Paul Feyerabend's "Against Method" are two books that not enough intelligent non-scientists have read. Intellectual life in America would be so much better if more of them did.

Comments

I think I stopped even

I think I stopped even entertaining computationalism last week, but arguing against it is extremely difficult. Part of the problem is that people mean different things by it. Some people just want AI to be possible. I think a constructed intelligent, even conscious, entity is possible, but I don't think it will happen by "implementing" a symbolic representation of intelligence. I don't think that's what the human mind is. But, say the computationalists, can't any relevant biological process be "emulated" by an algorithm with a precision that's "good enough"?

That's an empirical question. I can maybe come up with an a priori reason why consciousness can't. Just intelligence, though... I can't come up with one. The three-body problem and other chaotic systems lead me to believe that it's quite possible that even quite simple uncomputable systems could exist in nature; but then you have to argue that one of those systems is relevant to intelligence. Why should it be?

The thing that troubles me most is that if the computationalists are right, then consciousness is irrelevant. Either everything of sufficient (and not very high) complexity is conscious, or some things are and some aren't, and either way it doesn't matter. And by some weird metaphor, liberalism and humanism are doomed. It's bizarre to summarily dismiss the phenomenon we are most certain about, but I don't see a choice.

Anyway, I was somehow unaware of the Dreyfus, and I will put it on my list. It had better not be an Emperor's New Mind, though. I would be interested in hearing more about how he concedes to the model makers.

well, i think it ultimately

Well, I think it ultimately depends on what you want out of your AI. Do you want a machine that, when presented with natural language problems, is capable of making reasonable, rational decisions based on facts? That's certainly possible, because it's something that can be done algorithmically. But it's worth noting that for all the development that's gone on in AI research, natural language processing is still one of the things computers are worst at among everything they are able to do, and, notably, it's the thing that human minds, out of all the objects in the universe, are the very best at. It's also important to note that for a machine to be able to do that, it doesn't need to be conscious.

What requires consciousness are creativity, opinion formation, and forming the framework for making value judgments. Where I think Dreyfus comes too close to making a concession is that he's willing to admit that machines that are forced to be in the world and cope with it, and are capable of doing so, might be able to be conscious.

I sometimes get very close to thinking that consciousness might be something that can only arise out of human development, and I at times have real doubts that every human being is even conscious. Which is not solipsism. I recognize that there are other minds, but sometimes I wonder if maybe we give people too much credit when we say that absolutely every human being is a sentient entity.

Of course, these are dark thoughts that I have in my more Schopenhauerian moods...

Rest assured, though, What Computers Can't Do is not Emperor's New Mind material. It's straight-up Merleau-Ponty-style phenomenology, examining what it is that humans do and then asking whether an algorithm is capable of that sort of thing.

What, in your mind, does or

What, in your mind, does or does not qualify someone for sentience?

a conscious mind that is

A conscious mind that is aware that it exists in a wider universe with other similar beings and is capable of reflecting on that fact.