But, contrary to Functionalism, this something else is not - or at least, not just - a matter of the underlying procedures (or programming) by which the intelligent-seeming behavior is brought about: Searle-in-the-room, according to the thought experiment, may be implementing whatever program you please, yet still lack understanding.

Searle's "Derivation from Axioms"

Besides the Chinese room thought experiment, Searle's more recent presentations of the Chinese room argument feature - with minor variations of wording and in the ordering of the premises - a formal "derivation from axioms" (1989):

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

The Churchlands criticize the crucial third "axiom" of Searle's "derivation" by attacking his would-be supporting thought-experimental result. The nub of the experiment, according to Searle's attempted clarification, then, is this: "instantiating a program could not be constitutive of intentionality, because it would be possible for an agent" (e.g., Searle-in-the-room) "to instantiate the program and still not have the right kind of intentionality". Perhaps he protests too much.
The Connectionist Reply

The Connectionist Reply (as it might be called) is set forth - along with a recapitulation of the Chinese room argument and a rejoinder by Searle - by Paul and Patricia Churchland in a 1990 Scientific American piece. Though it would be "rational and indeed irresistible," he concedes, "to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it," the acceptance would be based simply on the assumption that "if the robot looks and behaves sufficiently" like us, it must have mental states like ours. Since nothing depends on the details of Schank's programs, the same "would apply to any computer simulation" of any "human mental phenomenon" (1980a). (1) Though Searle himself has consistently (since 1984) fronted the formal "derivation from axioms," general discussion continues to focus mainly on Searle's striking thought experiment. Not Strong AI (by the Chinese room argument). He acknowledges, moreover, the possibility that some "specific biochemistry" different from ours might suffice to produce conscious experiences, and consequently intentionality (in Martians, say), and he speaks unabashedly of "ontological subjectivity" (see, e.g., Searle 1992).