Posts by BenWilson
-
Searle's ultimate point is one of scientific parsimony - that comprehension is not necessary for the room to operate.
I think it's the opposite of parsimony. He's inventing entities, like 'comprehension', which are indistinguishable from the operation of the room, then simply asserting that the human has it and the room doesn't. Until you are the room, you couldn't know for sure.
He talks of 'purely formal play' as if inventing yet another phrase creates an important, if completely unobservable, distinction. How can he be sure that the parsing of language in our minds isn't 'purely formal play' at a subconscious level? Certainly when you are learning a language it's purely formal play. I mean, has he actually tried to do what his thought experiment suggests would be a piece of piss? Thousands of engineers spent decades managing to actually do what he rubbishes as purely formal play, and millions of humans now use it for a lot more than play. When you do use computers for translation you become aware of the pitfalls and all the problems with his analogy. Doing good translation via rules actually doesn't work that well, and it's easy to spot computer translations because of their lack of comprehension of the subject matter. For that reason I say his Chinese room is extremely contrived. If it genuinely could do translation good enough that people couldn't tell it wasn't a native, then it would have to comprehend.
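To make that concrete, here's the word-for-word rule lookup that is trivial to write and just as trivially wrong - a toy sketch (the dictionary and example are invented for illustration):

# Toy rule-based 'translator': pure symbol substitution, no model of meaning.
# This is Searle's 'purely formal play' - and exactly why rule-only
# translation is so easy to spot.
RULES = {"kick": "patear", "the": "el", "bucket": "balde"}

def translate(sentence):
    # Swap each word via the lookup table; unknown words pass through.
    return " ".join(RULES.get(word, word) for word in sentence.lower().split())

print(translate("Kick the bucket"))
# -> 'patear el balde': formally fine word by word, but the idiom
# (to die) is lost completely, because nothing here comprehends it.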
Are you sure of that? That's exactly what they do.
I am totally sure. It may seem that way to a human but it sure doesn't to an engineer. Redbaiter may be repetitive, and we don't know anything about him/her apart from that (well actually I'm pretty sure he/she's an Australian), but I can assure you that he/she is at least one human. It's not the high level understanding that he/she fails to display, it's the low level stuff that he/she does display, like picking up which object in a sentence you are talking about from context.
Of course there have been AI programs around since the very early days that can fool the unwary user - the famous anecdote of the author of the Rogerian psychotherapy program ELIZA discovering that his secretary was having quite a meaningful relationship with the program springs to mind. And my 2-year-old is clearly rather taken with the strange ability of his favourite stuffed elephant to talk to him when I'm in the room. Humans anthropomorphize. Other animals seem to do the same thing if you swap out the anthro for some other prefix. I'm sure my cat reconciles living with a house full of large dangerous animals by convincing herself that we're just big cats really.
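For anyone who hasn't seen how thin ELIZA's trick actually is: it's nothing but pattern matching plus pronoun swapping. A stripped-down sketch of the technique (a couple of toy rules of my own, not Weizenbaum's actual script):

import re

# Swap first and second person so replies mirror the user back at themselves.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, response template) pairs - the real DOCTOR script was just a
# much longer list of rules like these.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        m = re.match(pattern, text.lower())
        if m:
            return template.format(*[reflect(g) for g in m.groups()])

print(respond("I feel nobody understands my work"))
# -> 'Why do you feel nobody understands your work?'

That's the entire magic. The secretary supplied all the meaning herself.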
I think you're ultimately right about language. Translation is a very difficult task. Whether it's a language you don't know, or a conversation with an old friend on topics you both understand well, you can never really be sure that you get what the other person is talking about, that they're referring to the same kinds of things as you. The idea in the mind is perhaps not exactly the same thing as any word or set of them, even if words are by far the best invention in all of time for communicating them. That's why I never bother arguing about what a word 'really means'. It doesn't really mean anything; it's just a way of communicating an idea, and if it did the job, then you were using it right. Dictionary Nazis really are a pain in the arse because they are trying to limit what can be communicated rather than understand it.
-
Oh, and thanks for giving me the opportunity to go into engineer mode. I haven't had the chance recently, 'cause my current system doesn't raise the slightest question - people are just happy that spam gets blocked, and their eyes glaze over when I tell them how.
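(For the rare reader whose eyes don't glaze over: the classic textbook technique is a naive Bayes filter. I'm not saying that's what my system is, but it gives the flavour of how unglamorous the 'how' tends to be. Toy corpora, obviously:)

from collections import Counter
import math

# Toy training data - real systems train on many thousands of messages.
spam = ["buy cheap pills now", "cheap pills cheap"]
ham = ["meeting moved to monday", "can we discuss the report monday"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = len(set(spam_counts) | set(ham_counts))

def log_prob(words, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words don't zero everything out.
    return sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)

def is_spam(message):
    words = message.lower().split()
    # Class priors omitted for brevity; the toy classes are near-balanced.
    return log_prob(words, spam_counts) > log_prob(words, ham_counts)

print(is_spam("cheap pills"))        # True
print(is_spam("report for monday"))  # False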
-
Kracklite, I was never really convinced by Searle on the Chinese room. It seemed like the kind of sleight of hand which was all through his work. OK, the guy manipulating the symbols might not understand Chinese, but the 'system' of him doing that with his book of rules (which, incidentally, has never been built and strikes me as far less feasible than a computer doing the same thing) does understand it. A more modern approach would probably be to use Altavista's Babelfish or some other translation service. I find it not particularly feasible to say that a person in conjunction with Babelfish conducting a workable conversation is not functionally equivalent to that symbiote actually speaking (probably quite poor) Chinese. Certainly my wife has conducted quite a number of communications that way with her non-English-speaking German relatives, and to all intents and purposes she can read and write German, albeit very slowly. By the time she'd mastered it to the point Searle hypothesizes, doing it rapidly and with few errors, she-plus-machine would basically be a German reader/writer. There is no disputing that she 'comprehends' what goes each way, with the aid of that tool. Quite extensive arrangements have been successfully negotiated like that.
I'm sure someone could write a programme that simulates the posts of Redbaiter or DFJ - and I don't mean that as a joke.
I doubt it. An amusing parody could be written, but everyone would know the difference the moment they flamed it and it failed to grasp anything about what they were saying, or to display any understanding of anything beyond simple abuse.
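The standard cheap trick for that kind of parody is a Markov chain over the target's old posts - it gets the vocabulary and the rhythm right, and falls over the moment anyone engages with it, which is rather my point. A sketch, with an invented stand-in corpus:

import random
from collections import defaultdict

# Invented stand-in text; a real parody would train on the actual posts.
corpus = ("the liberal elite never listens the elite never learns "
          "the country never listens to real people")

# First-order Markov chain: map each word to the words that follow it.
chain = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    chain[a].append(b)

def babble(start, length=8):
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble("the"))
# Plausible-sounding word salad - but flame it and it has no reply,
# because there's nothing in there that could read the flame.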
Personally I'm coming at this from the point of view of the engineer trying to make those kinds of systems work, so the question of consciousness is of less concern. My experience from various systems I've bedded down over the years is that people only think an 'intelligent' system is intelligent if it does something like what they do. They find it really hard to accept if the system does something better and they can't comprehend why. Then it's just a 'dumb machine'. But when it does something just the way they would have, they think it's a 'smart machine'. I've spent hundreds of hours arguing with experts about the outputs of various systems, and it always seems to come down to an 'us vs the machine' mentality, which is not helpful. They don't seem to get that they are part of the system - that it's a bigger system when it's machine+human.
I have to say my degree in Philosophy has helped me out a lot in these discussions, if only to head off stupid philosophical arguments before they get any traction. But it's never actually helped me with anything 'technical' whatsoever. As in, solving the 'technical' problem. What philosophy gave me was an ability to solve the human problem of people trying to argue that there was more to it than just a technical problem.
-
There aren't many other contexts where getting thrown down a flight of stairs wouldn't constitute attempting to injure with intent. I've yet to meet the guy who threw himself down a flight of stairs.
-
What about children on planes and ships?
One reason might be that people don't often complain to the police :-)
I know a few who have, after receiving some grievous injuries, but the matter was not even investigated. It seems that if you punch someone out the back of the pub hard enough that they become unconscious, then you get a memory defence.
-
Graeme, I take it that the reasonable force in removing people 'therefrom' has a similar policy of police good sense? The number of people I've seen getting hurt (receiving 'bodily harm') whilst being expelled from bars is quite large, but I've seldom heard of any prosecutions. Is that because defence of land quickly turns into defence of the person when the person being removed fights the 'reasonable force', and the bar for reasonableness is lowered?
-
Kracklite, I think Turing thought sentience or consciousness was an offshoot of intelligence. The test can hardly discover something that can actually only be experienced by the individual who has it. But it can give a very strong argument that if such a thing is in other humans (which each of us can only guess at) then it very likely also would be in a machine that could not be distinguished from a human via conversation.
As you say, it's 'in theory'. I always thought Turing's point was for it to be a thought experiment to refute philosophers saying a machine could never, never, never ever have consciousness. He provided one test that most of them (but of course not all) would accept.
To my thinking it's way too high a standard, which is why I said it's a 'sufficient but not necessary' test. I don't think something has to be so skilled a mimic that it can pretend to be something it is not to highly discerning observers, to be considered intelligent or sentient. For an illustration, I doubt most ESL folk would pass a Turing test where they pretended to be a native English speaker - natives would spot them pretty quickly. That does not mean they are not equally intelligent, it just means it's a pretty unfair test.
Turing himself doubted that a disembodied intelligence could be written that would come near to passing his test. I agree. But there's still a lot of disembodied software which does incredibly smart stuff. Google, for instance, is better than most human librarians at finding you the info you need.
What I'm saying is that computers will have been intelligent and sentient way before they ever pass a Turing test. I don't think they're there yet, and I think we still have quite a long way to go before there's any kind of general intelligence in machines that even vaguely resembles human intelligence. But who knows, it could just be a small change in paradigm and architecture. Certainly the hardware is powerful enough now, something we couldn't really say when I was first studying AI.
-
I actually enjoyed Tom Cruise in Magnolia. I'm not entirely sure if he realized he was parodying himself, but no-one else could have done a better job of it. Playing a cheesy cult-leader arsehole... Hollywood does seem to cast method actors a lot.
-
I'm presuming people are not taking Chuck too seriously, but I'll keep an eye on it.
Looks like the commentators were already doing that for you. Chuck's made his point now, that he's against paedophilia. The extension, that adjusting the law could only be conceived of by childless paedophiles, is unlikely to impress anyone. Personally, I remember being 15, and the laws 'protecting' me from the cute 16-year-old girls in my class seemed all fucked up then; the impression has lasted. I have a son, and I bet when he's 15 he'll feel just the same.
-
You mean, see whether a person conversing with Key via a keyboard will decide (be fooled into thinking?) he's human based on the answers he gives?
I believe the Turing test is intended to be a sufficient but not necessary proof of "intelligence", not humanity. That's why it's always computers trying to pass it. Key could still be a very cunning android. He certainly is intelligent.
I can't hold his poll watching against him too much, since it's one of the things I most like about Clark - she actually does what's popular quite a lot of the time. The only real concern is whether he will continue to do it after getting elected, or whether there really is an agenda we just don't know about. And there really is no way of telling, unless he gets caught red-handed saying "gone by lunchtime" kind of stupidities. I'm sure Labour will do their damnedest to make people think that though. Their spin will be "it's better the devil you know".