1. A Canadian comments on one form of Yankee stupidity. As a Yank, I'm used to hearing all manner of critique from non-Yanks, but in this case, I'm inclined to agree with Skippy.
2. My buddy Charles makes the acquaintance of Stephen King. I was pretty sure, while reading the first part of the essay, that Charles would say that King is an excellent writer; Charles and I have similar tastes in many respects, and I find King exceedingly readable, so it was no surprise when I came across this part of Charles's essay:
I went on to the next story, called “The Man in the Black Suit.” As soon as I began reading it, I got excited—King was writing a folk tale of the “meeting the devil in the woods” variety. I devoured it whole, and as I read, I could no longer deny the realization that had been creeping up on me: I had grossly underestimated Stephen King as a writer.
How could I underestimate a writer I had never read? Well, that’s easy. People do it all the time. For me, it was a combination of my dislike of “horror” and my impressions of King, which were primarily that he a) wrote the type of horror I disliked and b) pandered to the lowest common denominator. I figured that anyone who sold so many books and had so many of those books made into films had to be a lowly panderer, right? Or, um, I guess he could also be a really good writer.
King often gets a bad rap among the literati because his writing strikes the elitist, Don DeLillo-loving crowd -- the people partial to unreadable prose -- as too common. But the truth is that King's easy command of narrative structure and his ability to make a story compelling, all while preserving clarity -- a quality often lacking in postmodern works that promote style over content -- point to the fact that the man's a good writer. Sure, he may need an editor when it comes to his longer novels, but as Charles writes, King is self-disciplined enough to write great short stories, which are indeed a better measure of writerly mettle than novels are.
3. Over at Conscious Entities, Peter explores the question "Is Intentionality Non-computable?" The post discusses the framing problem along with the halting problem and the tiling problem, and bears directly on the discussion I had with a few commenters over at this post of mine. Affirming my own sentiments, Peter writes:
Let’s consider the original frame problem. This was a problem for AI dealing with dynamic environments, where the position of objects, for example, might change. The program needed to keep track of things, so it needed to note when some factor had changed. It turned out, however, that it also needed to note all the things that hadn’t changed, and the list of things to be noted at every moment could rapidly become unmanageable. Daniel Dennett, perhaps unintentionally, generalised this into a broader problem where a robot was paralysed by the combinatorial explosion of things to consider or to rule out at every step.
Aren’t these problems in essence a matter of knowing when to stop, of being able to dismiss whole regions of possibility as irrelevant? Could we perhaps say the same of another notorious problem of cognitive science - Quine’s famous problem of the indeterminacy of radical translation. We can never be sure what the word ‘Gavagai’ means, because the list of possible interpretations goes on forever. Yes, some of the interpretations are obviously absurd – but how do we know that? Isn’t this, again, a question of somehow knowing when to stop, of being able to see that the process of considering whether ‘Gavagai’ means ‘rabbit or more than two mice’, ‘rabbit or more than three mice’ and so on isn’t suddenly going to become interesting?
Quine’s problem bears fairly directly on the problem of meaning, since the ability to see the meaning of a foreign word is not fundamentally different from the ability to see the meaning of words per se. And it seems to me a general property of intentionality, that to deal with it we have to know when to stop. When I point, the approximate line from my finger sweeps out an indefinitely large volume of space, and in principle anything in there could be what I mean; but we immediately pick out the salient object, beyond which we can tell the exploration isn’t going anywhere worth visiting.
The suggestion I wanted to clarify, then, is that the same sort of ability to see where things are going underlies both our creative capacity to spot instances of programs that don’t halt, or sets of tiles that cover the plane, and our ability to divine meanings and deal with intentionality. This would explain why computers have never been able to surmount their problems in this area and remain in essence as stolidly indifferent to real meaning as machines that never manipulated symbols.
None of which is to say that humanlike AI is totally out of reach. My point in the original post was that it's simply a long way from fruition. Personally, I'm sympathetic to Kurzweil's "Strong AI" functionalist camp.
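The combinatorial bookkeeping at the heart of the frame problem can be made concrete with a toy sketch. This is my own construction, not anything from Peter's post; the world model and all the names in it are invented for illustration:

```python
# Toy illustration of the frame problem: after each action, a naive
# symbolic agent must assert not only what changed but every fact that
# did NOT change (the so-called frame axioms).

from itertools import product

def frame_axioms(objects, properties, changed):
    """Return the 'nothing else changed' assertions a naive agent must
    record after one action that altered only the facts in `changed`."""
    all_facts = set(product(objects, properties))
    return {fact for fact in all_facts if fact not in changed}

objects = [f"obj{i}" for i in range(10)]
properties = ["position", "color", "weight", "owner"]

# One action moves a single object: exactly one of the 40 facts changes...
changed = {("obj0", "position")}
unchanged = frame_axioms(objects, properties, changed)

# ...yet the other 39 facts must all be explicitly re-asserted.
print(len(unchanged))  # 39
```

With ten objects and four properties, a single trivial action already forces 39 "nothing happened here" assertions, and the count scales with objects × properties × actions. The human trick Peter points to is precisely that we never enumerate this list at all.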
4. Malcolm discusses ideas, emotions, respect, and tolerance.