The Meaning of Meaning

What is meaning? This simple question is at the heart of philosophy of mind. Mentality and meaning have always gone hand in hand, and philosophers have tried to give an account of meaning for thousands of years. Despite many spirited attempts, a concrete understanding of meaning has remained elusive, and a broad consensus is nowhere to be seen. Asking what the meaning of meaning is turns out to be devilishly complicated; it boggles the mind how to even begin answering the question. Although it is difficult to give a straightforward definition of meaning, I do believe, contra Socrates, that giving examples is helpful in the art of producing a rigorous definition for a concept.

Philosophers have often focused on meaning at the linguistic level, wondering how the sentence "the cat is on the mat" comes to mean that the cat is on the mat. Moreover, what is the practical import of the statement? What does it mean to tell someone the cat is on the mat? If the cat belongs to no one, the import is probably zilch. But if the owner of the cat has been looking for it for days, then stating where the cat is will be highly meaningful. From an evolutionary perspective, the practical import of a linguistic statement seems more developmentally basic, in both a phylogenetic and an ontogenetic sense. In other words, meaning comes first, then language. But this pushes the question back a step: what is nonlinguistic meaning?

The question of nonlinguistic meaning is tied to the question of nonverbal mental content. Linguistic meaning is usually talked about in terms of propositional content, e.g., the content of the statement "the cat is on the mat" is the state of affairs of a cat being on a mat. Verbal content is thus relatively easy to make sense of, because we can understand the conceptual content in terms of the implied propositional content, which can be spelled out in terms of beliefs and desires. If I don't know where the cat is and I am looking for it, then someone telling me that the cat is on the mat will update my belief system such that I will, ceteris paribus, be motivated to go look on the mat, and will actually look. This is a fairly orthodox way of accounting for linguistic content. But what about nonverbal mental content? How can we make sense of that?

The question is philosophically vexing in that it is difficult to use language (the medium of philosophy) to talk about mental content that exists independently of language. One way to get a better sense of nonverbal mental content, and thus nonverbal meaning, is to ask which creatures "have" nonverbal mental content. Let's start with unicellular organisms like bacteria. Does a bacterium have a "mental life"? Not in the traditional sense of the term, since it seems strained to say that a bacterium believes anything, and having beliefs has long been a traditional criterion for distinguishing creatures with mentality from those without. We could, if we wanted to, adopt an intentional stance and say that when the bacterium senses a sucrose gradient it forms the belief that what it is encountering is indeed sucrose. But we know deep down that the "sensing" of the sucrose is entirely constituted by the physical-chemical nature of the bacterium. The sensing and digestion of the sucrose is entirely reactive and mechanistic. The bacterium's "decision" to devour the sucrose based on its "belief" is entirely mechanical. The belief-forming talk is just that: talk. We do not really think that the intracellular machinery's job is to form beliefs; its job is to perform biochemical functions that aid in the continuation of the bacterium's metabolic existence.

But although the bacterium does not have beliefs, and thus does not "have" propositional attitudes except those we ascribe to it, it still makes sense to say that the bacterium has a mental life, however dim compared to more complex creatures. For what is a mental life? I claim a creature has a mental life just insofar as there is something it is like to be that creature. And, following Heidegger, I claim there is something it is like to be a creature just insofar as that creature "lives in" a phenomenal world. "Living in" a phenomenal world is not "in" in the spatial sense, as when a pencil is "in" a box. Living "in" a phenomenal world is more like being-in-the-world, where being-in-the-world is a matter of (1) having concerns and (2) living in an environmental niche. A bacterium has concerns insofar as it is "concerned" about its own survival. Its whole existence is constituted by a desire to stay alive, to maintain its autonomous living. It "does" this in virtue of its complete biochemical nature. But its biochemical nature is organized in such a way as to constitute a machine with a homeostatic equilibrium and the means to maintain that equilibrium despite perturbations from a changing environment and breakdowns in the stability of its internal mechanisms. Because the bacterium is "concerned" about itself in virtue of its physical structure, it lives in a phenomenal world insofar as it lives in an environment. The bacterium's world is such that what is meaningful to it is that which enables it to keep on living. Thus, sucrose is meaningful to the bacterium because it affords the possibility of digestion in the service of maintaining its homeostatic equilibrium.

We have then a foundation of meaning upon which to build more complex types of meaning. Basic nonverbal mental content, and thus basic nonverbal meaning, is based on autonomy. The bacterium is an autonomous machine because it gives itself its own principles for behavior based on its nature. These principles are properties of its organization as a physical object. One of these principles is concern-oriented insofar as the maintenance of a dynamic nonlinear homeostatic equilibrium is the fundamental concern. And as we said, if you are concerned about something, then you live in a phenomenal world. If you live in a phenomenal world, you "have" phenomenal experience (where "having" is understood as a metaphor, not a literal "having" of an object like having a hammer in your hand). And if you have phenomenal experience, there is something it is like to be you. Thus, there is something it is like to be a bacterium.

But notice that the bacterium has no nervous system. If my argument goes through, then we can conclude that looking for the neural correlates of phenomenal experience is a completely misguided enterprise, bound to fail. However, since I have been arguing that phenomenal experience and consciousness are not the same thing, we can still coherently look for the neural correlates of consciousness. But the search for the neural correlates of phenomenal experience is misguided because, as I have tried to establish, there is something it is like to be a bacterium, and bacteria do not have nervous systems. If I am right, then neurophilosophers trying to pinpoint the NCs of phenomenal experience have been barking up the wrong tree. For the fundamental principle of mental life is not consciousness but living in a phenomenal world, i.e., a world of real value and meaning, where entities are encountered as significant. Rocks do not live in a phenomenal world. There is nothing a rock is concerned about. It does not care if you break it in two. There is nothing it is like to be a rock. A rock has no mental life. But what a world of difference in the bacterium! The bacterium is alive. It has concerns. It lives in an ecological (i.e., phenomenal) niche. Whereas the rock does not strive to stay together in a particular organizational pattern, the bacterium does. Sucrose means nothing to a rock, for nothing means anything to a rock, but things matter to bacteria. Sucrose is meaningful to bacteria.

And that is the meaning of meaning in its most basic form. Of course, I am glossing over the complexity of both primordial meaning and linguistic meaning. Linguistic meaning, though grounded in primordial meaning, takes on a life of its own once established in a population. This is why Heidegger took pains to distinguish between being-in-the-environment and being-in-a-linguistic-world, with the latter reserved for humans who have learned a language and grown up in a social-linguistic community.

Filed under Consciousness, Philosophy

11 responses to “The Meaning of Meaning”

  1. Alex

    Computer algorithms are 'concerned' about things too. Does that mean that there is 'something it is like' to be a computer algorithm, and that they have phenomenal experience?

  2. Gary Williams

    “Computer algorithms are ‘concerned’ about things too”

    What are they concerned about?

    I don't rule out the possibility of a computer ever having phenomenal experience. I don't think the technology is there yet, but I can see in the future the development of a truly autonomous robot that might be truly concerned about something (its continuation as an entity). Until there is autonomy, I would be skeptical of any claims of a robot or computer algorithm being "concerned" about anything; in that case, I would say we are abusing the metaphor. It makes sense to say that biological organisms are concerned because they have an affective response to the world in terms of metabolic needs, whereby stimuli have a perceptual "valence" of good or bad. A computer, so far as I am aware, does not have that kind of biological valence because it lacks a solution to the frame problem. I don't think it's impossible for computers to be concerned with things, but I think that technology is not available yet.

    • Alex

      Take this robot for example:

      It is "concerned" about developing a functional gait. By developing its own virtual self-model in order to walk, and then testing it in a real environment, it applies "valence" to its own behaviour.
      So, can we say that it has phenomenal experience? If not, what is the difference between the robot and the bacterium that makes the bacterium superior in that respect?

      • Gary Williams

        The difference between the robot and the bacterium is that the bacterium has a metabolism and the robot doesn't. The bacterium has needs and wants. It has a will to survive because it has the goal of survival based on metabolic needs. The robot only has the goal which has been programmed into it by its programmers. I do not think it is autonomous enough for us to warrant the terms "concern" or "need", because it has no internal drive for survival. Without that inner drive for autonomous existence, I doubt the robot experiences the world in terms of a valence of "good" and "bad". Now, we can, if we want to, apply the "intentional stance" to the robot and say it "wants" to develop a good gait. There is nothing incoherent about this, but I think we must acknowledge that it is a metaphorical application to the robot, whereas in the bacterium's case it is not metaphorical.

      • Alex

        1. Metabolism. What difference is there between the biological metabolism of a bacterium and the "machine", external metabolism of a robot (i.e., charging of a battery) that makes only the bacterium an experiencing entity? If it's about autonomy -> see point 3.
        2. How come the bacterium has needs and wants, and the algorithm does not? The drives of a bacterium are just simple chemistry; one could argue that certain algorithms are far more complex, adaptable and nonlinear. If it's about autonomy -> see point 3.
        3. I see that you claim that the difference is mostly about "autonomy". Why exactly? Why is autonomy as you define it ("having a will to survive based on one's metabolic needs") the key to "nonmetaphorical" experience?

      • Gary Williams

        My thoughts regarding autonomy and cognition are largely inspired by Maturana and Varela's work on autopoiesis. The link between phenomenal experience and autonomy (autopoiesis) is based on the fact that both autopoiesis and phenomenal experience have similar qualitative profiles. Phenomenal experience is characterized by existence or identity over time and a sense of "continuity" over time. This identity over time is possibly what gives rise to the various qualitative features invoked in describing phenomenal experience. There is thus a sense of "temporality" for the bacterium in virtue of how it maintains its existence over time in an autonomous fashion. The sense of temporality is what grounds the what-it-is-like-ness. At least, this is how the story is supposed to go. I don't think it's a complete or entirely unproblematic theory of phenomenality, but I think Varela and co. make a good case for linking autopoiesis with cognition (i.e., phenomenality, mental life). The robot, presumably, doesn't have a sense of temporality, of "living in time", and thus does not experience the world in the way an organism does. Now, I am somewhat sympathetic to a kind of panpsychism whereby there is something it is like to be a nonorganic entity, but I think it must be of an entirely different kind. So maybe the robot does experience the world. But if panpsychism is true, then we must say that even a pebble experiences the world, which seems counterintuitive.

      • Alex

        I am not acquainted with Maturana and Varela's work, but attributing a certain quality (phenomenal experience) to an object (a bacterium) just because the object and the quality have similar features (continuity, temporality) reminds me of magical thinking…
        I also cannot see why a bacterium would have a sense of temporality while the robot wouldn't. The robot from the example I gave you creates a complex representation of itself and its environment (in time!), while the bacterium reacts on a purely mechanistic, biochemical level.

  3. Can we disambiguate "Meaning"? Perhaps "Meaning" as "Definition" plays by the rules of language. But if "meaning" could purely measure importance, that would be emotive experience and nonverbal.

  4. Gary Williams

    Alex, do you really think the robot/computer is doing anything besides acting on a purely mechanistic level? Humans might attribute representations to the robot, but all that's really happening is that voltages are changing and other purely physical happenings are taking place. The fact that the robot represents itself or the environment is something we attribute to the robot, but in reality it works on a purely mechanical, causal basis, just like the bacterium. Computers and robots are just machines. The difference between the bacterium and the robot is in how the mechanisms are organized. The bacterium's mechanisms are organized in such a way that its physical system is balanced on the edge of chaos in terms of its homeostatic equilibrium. This sets up a continuity between its past states and its current states. It has a temporal "history" that connects its past to its current needs and drives it to perceive the world in terms of a valence. I don't think it's magical thinking to think of this "purely mechanical level" in terms of phenomenal experience, because what is phenomenal experience if not "what it feels like" to run cognitive operations? Cognitive operations are things like perceiving and reacting to perceptions based on emotional valences of "good for me" or "bad for me". If you place some toxic substance in a petri dish, the bacterium will perceive it "as bad" and move away from it. The robot has nothing comparable to this because nothing is intrinsically good or bad for it except insofar as its human programmers tell it something is "good" or "bad". But that is not autonomy. Good and bad only make sense as perceptual categories if you have a stake in your own survival, and I just don't think any robot is at that level of sophistication right now.

    The difference between the computer and the bacterium can be illustrated by the following example. Imagine a computer hooked up to a biochemical sensor. The sensor is tuned to send an electronic signal to the computer whenever it detects a toxic substance, like the kind that would be toxic to a bacterium. Whenever the computer gets the signal, it prints out the message "that is bad". We thus have an input/output system set up such that certain inputs trigger certain outputs. If we wanted, we could draw an analogy and say the bacterium is doing the same thing as the computer, in that it detects the input of the toxic substance and then runs an output ("escape"). The point of this example is to illustrate what I think are huge differences between how the computer and the bacterium "perceive" the toxic substance. In the case of the computer, I do not think it is accurate to say that it really perceived the toxic substance, because there was no valence. For the computer, it was purely behavioral. There was no conceptual valence of "badness", because the computer was running not by an organizational scheme which institutes norms but in terms of a meaningless input/output function. While we can ascribe an input/output function to the bacterium as well, that function is fundamentally different, because the bacterium categorizes and places a meaning/value on the input as "bad" in terms of the norms of its metabolic equilibrium.
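
    To make the contrast concrete, here is a minimal sketch in Python of the computer side of this thought experiment. The names and the simulated sensor signal are hypothetical, invented purely for illustration; the point is that the machine's entire "response" to the toxin is a bare input/output rule, with no norm internal to the machine that makes the toxin bad "for it":

        # Sketch of the hypothetical sensor-plus-computer described above.
        # The sensor readings are simulated; the mapping from input to
        # output is just a lookup, with no valence for the machine itself.
        def respond(toxin_detected):
            """Map the sensor signal to a printed message, nothing more."""
            return "that is bad" if toxin_detected else ""

        # Simulated readings standing in for the biochemical sensor.
        for signal in [False, False, True, False]:
            message = respond(signal)
            if message:
                print(message)  # prints "that is bad" once, when the signal arrives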

    • Alex

      Gary,
      I used the example of the starfish robot only to get a better grip on your understanding of phenomenal experience. I do not claim that this robot actually has phenomenal experience; that doesn't seem much more likely than a bacterium having one.

      Of course the robot acts on a purely mechanistic level, but so does the bacterium. However, the starfish robot is superior in the respect that it does in fact create representations: that's the whole concept behind that particular robot. It tries to build a 3D model of itself using its sensors (as you can actually see in the video), and then uses that model to predict which gait would work best for it on a certain type of terrain.
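
      Schematically, the loop behind it is something like the following toy sketch (my own reconstruction in Python, not the robot's actual code; every name and number here is made up): refine a self-model from sensor data, score candidate gaits against that model, then try the best prediction on the real body.

        # Toy sketch of a self-modeling control loop; not the starfish
        # robot's real code, just the general shape of the idea.
        import random

        def read_sensors():
            # Hypothetical stand-in for the robot's joint/tilt sensors.
            return [random.random() for _ in range(4)]

        def update_self_model(model, readings):
            # Toy "self-model": a running average of sensor readings per joint.
            return [(m + r) / 2 for m, r in zip(model, readings)]

        def predicted_score(model, gait):
            # Toy prediction: how well a gait's joint amplitudes fit the model.
            return -sum(abs(m - g) for m, g in zip(model, gait))

        model = [0.5] * 4
        candidate_gaits = [[0.2, 0.4, 0.6, 0.8], [0.5] * 4, [0.9, 0.1, 0.9, 0.1]]

        for step in range(3):
            model = update_self_model(model, read_sensors())
            best_gait = max(candidate_gaits, key=lambda g: predicted_score(model, g))
            # The real robot would now execute best_gait on its physical body
            # and feed the resulting sensor data back into the self-model.
            print(step, best_gait)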

      I admit that I am heavily influenced by Thomas Metzinger's self-model theory. It relies on a representational theory of mind: experience is made of those representations which are conscious (phenomenal vs. non-phenomenal representations). Of course, in order to create representations, you probably need some kind of nervous system. A bacterium has no nervous system, so it probably makes no representations; it works on a much simpler level. So on this approach, a bacterium simply cannot have any experience. That is why I was curious how you imagine it possible.

      I think I understand your perspective now. Still, intuition tells me that experience needs some processing power; I can't imagine it arising "out of the blue" just because a certain being is autonomous.

      However, let's follow this line of thinking: you don't accept the idea of a computer algorithm having experience, but you accept a bacterium having it. Then how about a virus? Viruses don't have their own metabolism and are almost like a computer algorithm; in fact, a computer virus doesn't differ that much from a biological virus, since both are just self-replicating programs using the "metabolism" of their hosts… Does a virus have phenomenal experience? If yes, does a computer virus have one too?

      • Gary Williams

        Alex,

        I now see where you are coming from in regards to the representational theory of mind. I am highly sympathetic to Metzinger's phenomenal self-model, but I differ on what exactly the phenomenal self-model theory explains. Metzinger claims that he is providing a model of experience itself. I think this is wrong. What I think Metzinger is explaining is actually higher-order consciousness, which I think is different from experience itself.

        So let me try to construct a mental taxonomy that could explain Metzinger's theory in a charitable fashion. Let's call whatever "experience" rocks have Level 0 experience. Let's call whatever "experience" bacteria have Level 1 experience. Let's call whatever "experience" a creature with the capacity to create representational self-models has Level 2 experience. And let's call whatever "experience" creatures with human kinds of self-modeling have Level 3 experience.

        On your view, only Levels 2 and 3 are really "experiential" or "phenomenal". So we could distinguish the lower levels by saying they are "non-phenomenal experience", with "phenomenal" understood in terms of the qualitative effects that arise from self-modeling. For most philosophers, though, nonphenomenal experience is an oxymoron. This is why I think Metzinger's "phenomenality" is really about "consciousness", and I would rather say that the lower levels are forms of "nonconscious experience". I admit that self-modeling with representations gives rise to unique forms of experience; I just deny that self-modeling is the origin of experience itself. I understand why someone would restrict "phenomenality" to Levels 2 and 3; it's just that I don't think "experience" needs much processing power, since it seems intuitive to me to think of creatures with no nervous system, or a very simple one, as having experience. Otherwise we have to go up the phylogenetic tree and arbitrarily pick a dividing line where experience "arises". For me, it's easier to think about how "real experience" or "phenomenality" arises in the transition from Level 0 to Level 1. I take your point to be that this transition from 0 to 1 is mysterious, and that the transition to real experience happens from Level 1 to 2.

        What I think happens is that "phenomenality" arises at the transition from Level 0 to 1, protoconsciousness arises at the transition from 1 to 2, and consciousness proper arises at the transition from 2 to 3. For you and Metzinger, "phenomenality" is something more complicated and only arises at Level 2, when self-modeling happens. But take a Level 2 creature that was just born, before its self-modeling system is fully online. Do we want to say that before the self-modeling system turned on, there was nothing it was like to be that creature? This violates my intuitions, though I understand it is close to yours. What I think is going on is that the creature starts experiencing the world as soon as it is conceived, and that self-modeling takes that basic experiential level as its "matter" to work with. So on my view, self-modeling takes experience as its input and then, through its workings, generates new feelings of subjectivity. Self-modeling doesn't create subjectivity, since subjectivity isn't a "thing" which can be generated the way a factory generates a car. Subjectivity is the "what it is like" of an entity. So phenomenal self-modeling generates new forms of subjectivity that are based on representational modeling. I'm willing to say that once self-modeling happens, the nature of experience is radically different, so as to constitute a new "level" of experience, but I do not think that level is the "first" level where experience appears. So I'm willing to talk about experience at Levels 0 and 1, but it must be understood that this experience is very simple and "dim".
