Concepts Without Language or Understanding

March 28, 2026; most recent update: March 30, 2026

 

Table of Contents

 

1. Introduction

2. Examples of Things with Reasoning but No Language

3. How This is Possible: Purely Subconscious Concepts

    a. How They Work in Animal Consciousness

    b. Why Nature Would Do Such a Thing

    c. The Relation Between Some AI and Animals with Reason

    d. A Familiar Phenomenon: Our Subconscious Concepts

4. Extra Explanatory Value

 

Introduction

 

There is a debate over whether or not thought can exist without language. If it cannot, that means one of three things: language is necessary for thought, thought is sufficient for language, or both. It is the first possibility that has mainly interested philosophers and others, and that option will be the focus here.

 

The position that thought requires language is lingualism; the view denying that, mentalism.

 

But, before going further, what is thought? What exactly is it that distinguishes things capable of thought from things without that ability? Beings that can think can form inferences -- judgements from a given piece of information, whether that be a percept, concept, or another thought. To put it another way, they can derive new information from information that is already given. For example, if the ground and patio are wet, you might infer that it rained even if you didn't actually see, hear, or feel it rain.

 

And, in fact, forming concepts involves just that. When you witness two or more individual things that are similar in some way, at some point you realize they're connected by something that transcends each of those individuals. Thus you establish a new concept (dog, beagle, fish, tree, etc.) to represent individuals with those attributes:

 

[Percept]

"Individual A has three sides."

+

[Percept]

"Individual B has three sides."

=

[Inference/Concept]

"There is a group of things with three sides: triangle."

 

 

Thereafter, when you see new individuals with that same similarity, you deduce that they too are members of that concept:

 

[Concept]

"Triangles have three sides."

+

[Percept]

"Individual C has three sides."

=

[Inference]

"Individual C is a triangle."

 

Clearly, there are also inferences within each of the premises above, not just from premises to conclusion. In the first example, the premise "Individual A has three sides" is itself an inference from the premises "That object is Individual A" and "Those are three sides." It entails having a concept of Individual A and a concept of side -- and of the number three, and so on. We could get carried away here with all that this implies. But essentially it points to a few key things. First, inferences are propositional in nature and can occur only from other propositions: inferences assert that something is true or false, and they occur from something else asserted to be true or false. Second, concepts are necessary for inferences.

 

Concepts -- at least most of them -- are inferences. And they are necessary for all inferences, whether conceptual inferences or not.

 

Still, someone might ask whether it is really true that all thoughts are propositional in nature and have a conceptual foundation. What about questions, or sarcastic thoughts, or thoughts specifically about certain individuals -- for example, thoughts about what makes a certain individual unique rather than what they share with other individuals? Such thoughts may seem different on the surface from thoughts that are clearly assertions, but they're essentially the same.

 

When you have a question, you're basically inferring that at least two possibilities on a given issue might be true -- hence the uncertainty about each possibility. The more certain you become of which possibility is true, the less you question it: the "might be true" of the other possibilities becomes more and more hypothetical.

 

Sarcasm involves (usually as humor) cloaking an assertion with an opposite assertion. The sarcastic assertion can be implied or explicit.

 

And, of course, we form concepts of individuals too, not just of groups with individual members. We do this by identifying the individual with the same unique and essential properties it possesses across the different instances and places of its existence: those different instances and places are essentially its members, sharing the individual's essential properties.

 

So, the debate really shifts from whether or not thoughts are possible without language to whether or not concepts are possible without language. Could a being have concepts but no power or need to form symbols to represent or communicate them?

 

For those convinced that such a scenario is impossible, the existence of language becomes an easy way to verify whether or not something has concepts and therefore the faculty of reason. No other test is required.

 

This would have clear implications regarding whether or not animals have thought. No known animals have anything that really exhibits a language (a system of symbols representing concepts) even though they do communicate in various ways. Moreover, the "languages" that some seem to have are very puny compared to the much larger number of concepts they would have to possess from experience -- if they are truly capable of thought. Therefore, if we take the lingualist view, we can reasonably assume that these "puny languages" aren't really languages at all and thus that animals have no thought.

 

 

Examples of Things with Reasoning but No Language

 

However, there are animals that, despite not having language, demonstrate some degree of reasoning. Here are just two examples:

 

1. The ancient Greek philosopher Chrysippus gave a case that strongly suggests deductive reasoning in a dog. Three converging roads were before the dog, which was trying to find which way its prey had gone. When the dog sniffed, there was no scent at two of the roads, so it went down the third road without sniffing.

 

2. I once saw footage of a gazelle stranded on a tiny island in the middle of a small lake. A group of lionesses sat on the shore, waiting for the gazelle to swim back to land. Eventually the gazelle took a few steps off of the island and stood, revealing that the lake wasn't a lake at all but merely a giant puddle just a few inches deep. Almost immediately, the lionesses sprinted towards the gazelle, making splashes along the way. By all appearances it still looked like a small lake, but the gazelle "standing on water" was enough to tell the lionesses otherwise. (They also seemed to understand the need to attack the gazelle while it was in the water, before it could fully utilize its speed on land.)

 

We don't have to assume that these animals' underlying capacity of reason is anywhere near the same breadth or depth as our own. But it seems we have to attribute some capacity of reason to them. It's harder to explain their behavior here as coming from non-rational factors, such as the psychological power of association, than from reason. For example, the gazelle was standing the whole time, so why did the lionesses run into the water only once it stepped off the island? Why were they reluctant to enter the water before that? A gazelle standing on land rather than on water would seem more likely to give them a "green light" to attack if the power of association were the primary factor.

 

But it doesn't stop with animals. Some AI also have concepts and no language. Chess bots are a good example. More on that below.

 

How This is Possible: Purely Subconscious Concepts

So why is it that some things can think but can't communicate their thoughts? Why doesn't their ability to think result in language too, as it does with humans?

 

One simple answer is that they don't think in the first place; the question is wrongheaded, and so there's nothing to explain. You don't need to be a lingualist to reach that conclusion. The main appeal of explaining "animal rationality" by things other than reason is that it seems to align much better with the logical principle that we should seek the simplest explanation -- the one requiring the fewest assumptions given current evidence. If animals do reason, like us, then it seems we ought to see some implications of that in their lives, such as (of course) language, morality, laws, technology, and so on. Since we don't see any of those things, it's easier to say that their rationality simply doesn't exist and that the apparent reasoning some of them show at times is an illusion explainable by non-rational, mental causes.

 

However, from another angle, such explanations are less simple. On its face, the most conservative explanation for Chrysippus's dog and similar examples among animals is to grant that they are thinking: if it looks like reasoning is involved, then most likely it is.

 

But there is a simple explanation that gives us the best of both worlds: acknowledging that some animals do what they seem to do -- reason, at least to a minimal degree -- while also accounting for why they don't have language and the other things you would expect from a being with that ability, as well as why their level of reasoning is much less advanced than ours.

 

It is that some animals have concepts subconsciously but never at a conscious level. That is, they have a given concept in that they are aware that its members are similar to each other and distinct from other things, but they are never aware of the concept itself.

 

For example, a lion might see a lone leopard cub from a distance and charge at it, even if it had never seen a leopard cub before. This is because it recognizes the cub's similarity to the adult leopards it has seen. But this doesn't mean it has awareness of the concept LEOPARD. It wouldn't, for instance, recognize that leopards are conceptually closer to cheetahs than to hyenas, even though it intuitively distinguishes leopards, cheetahs, and hyenas from each other. Recognizing the closer conceptual relation would be possible only if the lion could derive the genus CAT from the less abstract concepts LEOPARD and CHEETAH. Since it has no consciousness of those lesser concepts, it cannot. Put differently, the lion's perceptual awareness -- which isn't in dispute -- gives it the percept of each individual leopard it observes, providing the mental objects necessary for its rational faculty to form a universal from those particulars, but at a subconscious level. Thus the lion immediately recognizes the similarity among individual leopards. But it never has a conceptual object in its consciousness, so it can never -- consciously or otherwise -- relate different groups to each other as groups.

 

To be clear, the assumed psychological process here is that usually consciousness must first provide an object ("the material") for the subconscious to retain an object or form a new object of its own, but the subconscious does not necessarily give its created objects back to the conscious. For animals with concepts, the faculty of reason works at a purely subconscious level. But nevertheless in most cases it needs conscious percepts to form its hidden concepts: you can't create something from nothing. So, in this respect, we can think of the subconscious as having a tendency to be a little bit "greedy" or "ungrateful": it usually takes resources from the conscious, but doesn't necessarily give anything back.

 

The exception to this general rule is innate concepts -- truly innate concepts, not ones similar to instincts that, while innate at the individual level, ultimately result from a species' past experience and thus over time become embedded in the subconscious of each individual. Innate concepts are necessary for inferences to get started in the first place.

 

    How They Work in Animal Consciousness

This idea can be a little confusing. After all, what is it in the consciousness of a lion and similar animals that allows them to integrate and distinguish things in real time, if concepts don't reside there? It's that when the lion consciously views individual leopards as similar, and distinguishes them from non-leopards, it is guided purely by an automatic sense, an intuition, rather than by understanding; the subconscious concepts manifest themselves as intuitions in its consciousness. Thus it has an intuitive, working knowledge of concepts related to things it can perceive, but that's all. Again, it has no consciousness of distinguishing like things as a group from other groups.

 

    Why Nature Would Do Such a Thing

But why would some animals have concepts but avoid transporting them to consciousness? Is that really plausible? Yes, because it would give them the advantage of increased awareness of their environment while also avoiding the energy burden of having concepts at a conscious, more active level. It would be a nice middle ground for survival. (One disadvantage, however, is that it would make it harder for them to form new concepts and thus put a strain on increasing their awareness even further.)

 

Certain examples show clearly the disadvantage of lacking some sort of conceptual awareness, and thus why nature would give conceptual capacity to some animals. I once saw video of a death adder wiggling the tip of its tail, trying to lure a small lizard by imitating the thin tail of a mouse or other small prey of the lizard. Sure enough, the lizard crept forward in small spurts, instinctively not trying to be too abrupt, so that it would avoid being noticed by its "prey." However, when it got close enough, it instantly became the death adder's victim.

 

A few important things were completely absent from the lizard's mind, showing a lack of conceptual awareness. First, a "wiggling tail" doesn't necessarily mean the tail of something harmless. And yet, even if the lizard hadn't known that beforehand, it would have been immediately evident had it possessed some sense that a tail is part of something. For, as the death adder wiggled its tail, the rest of its body and its menacing face were in plain view. So, why did the lizard fail to notice something so obvious right next to the tail and clearly connected to it? Because, lacking a concept of tail, it didn't think to look next to the tail and therefore couldn't put together the whole picture. It was completely blind to every consideration except instinctively going towards the wiggling tail.

 

But why did nature leave the lizard with such a lousy detection system? The clearest explanation is that "wiggling tail" usually means a meal, and therefore the instinct to go towards it without a second thought is beneficial -- to hesitate could mean going hungry, especially with quick prey that tend to jump around and don't stay put. Sometimes "wiggling tail" means something bad, but because it is usually a sign of something good, the strong instinct helps the lizard species survive even if it occasionally fails a few of its members. So, nature keeps it in place. Why upgrade the lizards' minds, and thereby increase their energy consumption and needs, for so little benefit?

 

That might work for lizards, whose prey (insects, amphibians, smaller reptiles, small rodents, etc.) are relatively plentiful and generally not too clever or dangerous. But for predators facing scarcer, smarter, and more formidable prey, having a rational faculty in the background would be a good bargain and perhaps a necessity. It wouldn't sap too much of the energy needed for physical hunting, yet it would also let them conserve resources by giving them the intuition to hunt smarter and by making them better able to detect and avoid danger.

 

But what if nature did upgrade the lizards' minds without increasing their energy costs too much, by allowing concepts in the subconscious? Not only would they be more likely to avoid the death adder's deception and similar traps; there's also a good chance that their smarter hunting would more than compensate for the extra energy their improved minds consumed. However, nature doesn't seem to have much foresight and is more likely to react only when a threat is clear and imminent. So, for now, it seems content to leave the lizard with what it has. Don't fix what isn't broken.

 

Still, we can see, with both the lizard and more advanced predators, the advantage of subconscious concepts and why it's plausible that nature would reach this solution in some animals.

 

    The Relation Between Some AI and Animals with Reason

Certain AI, such as chess bots, are in a similar but not identical situation regarding concepts. Like some animals, they have concepts but cannot consciously change the definitions they have of those concepts. However, they're different in two important ways. First, they can't independently acquire new concepts. Second, they can't even subconsciously change their definitions. The cause of these two things leads to a third inability, which they share with those animals: a lack of self-consciousness.

 

 

[Cannot Independently Acquire New Concepts]

---------------------------------------------

 

The bigger difference is that the AI mentioned are limited to the set of concepts given to them by their code writers. They have no ability to form new concepts and can acquire new ones only with external help.

 

The reason for this (and for why they can't change their definitions) is the same. New concepts can be formed only if the subject possesses innate concepts of the most general type: place-holding concepts that allow each and every percept or concept the subject ever has to be an object of thought, including those most-general concepts. For instance, Ayn Rand mentioned three "implicit" concepts (by which she meant innate and prior to any conscious concept) that are always involved in concept formation: entity, identity, and unit. Take the process by which you form the concept of tree. You first notice a tree exists (entity): a something registers in your awareness. And by noticing it you also distinguish it from other objects, such as flowers, other trees, etc. (identity). But then you notice it has something in common with and only with other trees, and thus form the concept TREE (unit). Of course, this same mental process would be happening regarding the other trees too. So, to form the concept TREE, you have to notice the existence of certain objects, and then single each of them out from their environment to recognize that those objects have something in common; all three of these "implicit" concepts are necessary. Other innate concepts are required too for concept formation, but the three mentioned are the main ones involved.

 

The process could be purely conceptual as well: a concept comes to mind; you isolate the concept from others, perhaps noticing what distinguishes it; and finally you see what it uniquely shares with at least one other concept -- thus forming a new, wider concept.

 

These general, place-holding concepts are absent in chess bots. This largely explains their inability to think about anything outside of simply playing chess in real time.

 

 

[Cannot Independently Acquire New Definitions]

----------------------------------------------

 

While animals can't consciously change their definitions, this is possible at the subconscious level, where their reasoning exists. But chess bots, and certainly many other AI, cannot change their definitions even subconsciously: the definitions are inherently static.

 

There's a lot to clarify from the previous paragraph.

 

It can sound absurd to say that (some) animals, especially subconsciously, form definitions. Definition formation seems to be something that can be done only consciously and with a human, intellectual attitude: we take a closer look at a concept we have and try to get a better understanding of it. But concept formation itself requires establishing a definition, at least subconsciously. To form a concept is to mentally form a group -- or "universal," as it's sometimes called -- under which like members are placed. But at least one shared property among members has to be selected in order for that process to happen. Whichever property or properties are selected constitute the definition under which one understands the concept.

 

The faculty of reason groups members via a shared set of properties. That's simply what it does, and it cannot do otherwise (group them together by unshared properties). That's why, in cases when you become aware that your conscious, effort-based definition groups unlike things together -- or resists grouping like things together -- you can no longer seriously accept the definition and thus reject it (at least privately). In such cases, your rational faculty tells you clearly that the definition is wrong. The grouping could still be useful, but no longer as a definition. If you're stubborn, you might seek ways to cling to your definition. Deep down, however, reason will still consistently whisper that you're wrong.

 

But what follows from this, and is clear from experience, is that two different beings could place the same set of members in a group but via different properties. Nothing guarantees that one being's rational faculty will group like members together the same way another being's does. Lions might subconsciously distinguish leopards from cheetahs by facial characteristics, whereas hyenas might do so by body shape. Individual lions might differ in which facial characteristics they select, just as individual hyenas could "disagree" over which aspects of body shape are emphasized. Reason doesn't necessarily care about the essence, the true fundamental nature of like things: that's a human concern. In its basic state, it just looks for one or more shared properties to conveniently group like things together. And what that property set is can vary among different beings with reason, or even within one being at different times.
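
 

As a toy illustration of that variability (the animals and features here are invented for the example), consider a sketch in which two classifiers pick out exactly the same members via different defining properties:

# A toy sketch: two "rational faculties" form the same group via
# different defining properties. All features here are invented.

animals = [
    {"name": "leopard 1", "face": "rosette-spotted", "body": "stocky"},
    {"name": "leopard 2", "face": "rosette-spotted", "body": "stocky"},
    {"name": "cheetah 1", "face": "tear-striped", "body": "slender"},
]

def lion_groups_it(a):   # a lion's definition: facial characteristics
    return a["face"] == "rosette-spotted"

def hyena_groups_it(a):  # a hyena's definition: body shape
    return a["body"] == "stocky"

# Same extension (the same members grouped), different definitions.
assert [lion_groups_it(a) for a in animals] == [hyena_groups_it(a) for a in animals]
print("Same grouping, different defining properties")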

 

So, because the faculty of reason in humans and some animals has the ability to define concepts on its own, it can also redefine them. That's missing with at least certain types of AI, such as chess bots. Their rational faculty is bound by stagnant definitions that cannot be altered. A human might define a chess rook as "a piece that can move laterally or vertically any number of spaces on a single turn." Then they realize this definition would mean, for example, that a rook going two spaces forward and three spaces to the right on the same turn would be a valid move, which it isn't. Thus the human redefines ROOK as "a piece that can move laterally or vertically any number of spaces in a straight line on the same turn." But a chess bot can only go with the definition given to it by the code writer -- whether or not that definition follows chess rules -- and can't redefine it.
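
 

Here is a minimal sketch of that correction in Python (the move offsets and function names are mine; no actual bot's code is being quoted):

# The two rook definitions from the paragraph above, as predicates on a
# move offset (dx, dy). Illustrative only.

def rook_move_v1(dx: int, dy: int) -> bool:
    # First definition: "moves laterally or vertically any number of
    # spaces." Read permissively, it checks each axis on its own, so it
    # wrongly accepts a move combining both axes on one turn.
    return dx != 0 or dy != 0

def rook_move_v2(dx: int, dy: int) -> bool:
    # Redefinition: any number of spaces in a straight line on the same
    # turn -- exactly one axis may change.
    return (dx == 0) != (dy == 0)

print(rook_move_v1(3, 2))  # True: the faulty definition allows the combined move
print(rook_move_v2(3, 2))  # False: the corrected definition forbids it
print(rook_move_v2(0, 5))  # True: a straight-line move remains valid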

 

As with its inability to expand its set of concepts, this limitation stems from its lacking the most-general, place-holding concepts that are necessary for concept formation. It has no place-holding concept into which it can fit its definitions so as to go through the process of seeing new relations between them and other concepts, or to do the same with the members of those definitions.

 

Because the bot is a slave to the definitions given to it by the code writer and can't correct them if necessary, it has no awareness of the concepts ROOK, KNIGHT, or that of any other piece type. If it did, it could make inferences connecting those concepts. For instance, the bot could eventually understand that the concept QUEEN is essentially ROOK and BISHOP combined: queens share certain properties exclusively with rooks and their other properties exclusively with bishops. It will never come to that realization. And even though its definitions of piece types are expressed through mathematical formulas, not linguistic statements, the point still stands: it would never perform any kind of calculation that focused, simply, on QUEEN being the sum of ROOK and BISHOP. Neither its definitions nor the concepts behind them can be objects of its consciousness.
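
 

Here is a hedged sketch of the inference the bot never makes (the predicates are my own stand-ins for whatever formulas a real bot uses). The queen's movement, defined independently, turns out to equal rook-movement plus bishop-movement over every board offset:

# QUEEN = ROOK + BISHOP, checked mechanically. The definitions are
# illustrative stand-ins, not any real engine's code.

def rook_move(dx, dy):
    return (dx == 0) != (dy == 0)            # straight line along one axis

def bishop_move(dx, dy):
    return dx != 0 and abs(dx) == abs(dy)    # diagonal line

def queen_move(dx, dy):                      # defined independently
    return (dx, dy) != (0, 0) and (dx == 0 or dy == 0 or abs(dx) == abs(dy))

offsets = [(dx, dy) for dx in range(-7, 8) for dy in range(-7, 8)]
assert all(queen_move(dx, dy) == (rook_move(dx, dy) or bishop_move(dx, dy))
           for dx, dy in offsets)
print("QUEEN's moves are exactly ROOK's plus BISHOP's")

The point of the sketch: a being aware of the concepts could run (or simply see) this comparison; the bot, on this account, only applies each fixed definition separately and never relates them.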

 

This doesn't mean it doesn't have even an inkling of conceptual consciousness. The elements of consciousness are representations and are of four basic kinds: percepts, concepts, propositions, and symbols. A chess bot has all of these with regard to playing chess: input of the current piece configuration on the chess board; definitions outlining the crux and powers of the different piece types; calculations (propositions); and different numerical representations symbolizing each piece type, allowing the bot to identify what kind of piece is at a given location on the board. Also crucial is that the chess bot has a goal for itself -- winning the game -- showing that these representations are its own and thus for it, unlike, say, a car dashboard warning, which can be read only by a human inside, not by the non-sentient car. The dashboard representation is thus for the human and merely produced by the car, not possessed by it.
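
 

To make the mapping concrete, here is a speculative sketch pairing those four kinds of representations with chess-bot components (all field names and values are mine; real engines are organized differently):

# A speculative pairing of the four representation kinds with bot
# components. Illustrative assumptions throughout.

from dataclasses import dataclass

@dataclass
class BotRepresentations:
    board_input: list        # "percepts": the current piece configuration
    piece_symbols: dict      # "symbols": numeric codes standing for piece types
    piece_definitions: dict  # "concepts": each type's powers, as formulas
    evaluation: float        # "propositions": the bot's calculations

state = BotRepresentations(
    board_input=[[0] * 8 for _ in range(8)],           # empty 8x8 board
    piece_symbols={"rook": 4, "knight": 2},            # e.g., numeric codes
    piece_definitions={"rook": "straight-line mover"},
    evaluation=0.0,                                    # its goal: maximize this
)
print(state.piece_symbols["rook"])  # the symbol standing for ROOK -> 4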

 

So, we're forced to admit that the chess bot has some conceptual consciousness. We can objectively see that it has the definitional representations (i.e., the mental structures) for that consciousness, and those must therefore manifest themselves to it in some way in order to help drive its actions. Moreover, its behavior itself shows it has a conceptual consciousness of the proper bounds of each piece type; otherwise it couldn't move the pieces correctly on the chess board.

 

Therefore, like animals with reason, it at least has a working awareness of concepts, but not an understanding of concepts in their own right. There's no reason to believe that the chess bot has intuition, unlike the animals described, and thus its awareness of these boundaries and distinctions is driven by the code definitions alone. But it is not, for example, able to unify the code parts comprising rooks' allowed movements into a whole and thus produce the concept of rook.

 

A human looking at the mathematical formula comprising the rook definition in the bot's code would be able to translate it into an equivalent linguistic definition of rook. In other words, they could tell whether the actual code definition was valid and complied with chess rules. This shows that the concept ROOK is actually in the bot's code; it simply never reaches the bot's consciousness as a concept. Thus, like some animals, the bot has concepts, but they are restricted to its subconscious. (We can say that the bot has a subconscious because there is an area where its concepts reside even though it is never truly conscious of them.)

 

 

[No Self-Consciousness]

-----------------------

 

Its lack of place-holding concepts also means it can't see its thoughts through a third-person lens and evaluate or criticize them. Without a god's-eye view of its calculations, it has no self-consciousness -- namely, the ability to think about its own thinking. It is conscious and thinks, but isn't conscious of what it thinks. This too it shares with animals that possess reason: they're aware of their environment and of their intuitions in response to it, but not of the causation behind those intuitions.

 

Some people would not recognize such consciousness as reasoning at all: to truly reason, this viewpoint holds, is at least to be aware of your reasoning. After all, when we do something and are not fully aware of why we did it, we start to look for non-rational factors to help explain our action: intuition, instinct, emotion, desire, and so on.

 

But is such a definition of REASONING too narrow? If the chess bot isn't reasoning in some way when it generates output from input, then what exactly is it doing? Is its chess play really just a physical, mechanical process, like a car engine running? The car engine does things essentially the same way every time, whereas the chess bot -- like many other AI -- varies its output depending on the input. However, regardless of one's opinions on chess bots and AI more generally, an even bigger implication of the definition above is that it would rule out subconscious reasoning altogether.

 

The next section will challenge that idea.

 

    A Familiar Phenomenon: Our Subconscious Concepts

One advantage of explaining intelligent animal behavior through subconscious concepts is that the idea is nothing new or mysterious but something familiar that hits close to home.

 

We already saw that we need innate, place-holding concepts to even begin forming concepts on our own. These concepts reside in our subconscious before we ever become aware of them. In fact, it's highly unlikely that anyone would become aware of these very general, abstract concepts before they became aware of less abstract ones.

 

But even many of those less abstract concepts stay hidden in our subconsciousness for a while. And if the concept itself doesn't stay hidden, at least its definition does. Our faculty of reason naturally forms concepts and definitions for us without any effort on our part, just as our perceptual faculty gives us percepts without us having to try.

 

So, it's not a stretch to say that the same natural, effortless process happens in some animals. Their rational faculty makes concepts available for them just as automatically as their digestive system breaks down food. Because this process happens without effort, we can abandon the common idea that free will and reason are mutually necessary: the former requires the latter but not vice versa. There's no barrier to entry preventing animals from having concepts.

 

Unlike those animals, none of our concepts are permanently restricted to the subconscious. Nor do all of our concepts originate in the subconscious; that is especially clear when someone consciously constructs a novel concept or is taught one. But that we have concepts working at the subconscious level becomes apparent when we realize what makes our ability to understand and generate speech possible. No one would ever have time to consciously define, or read the definition of, every concept they're aware of. Even if they did have the time, they would forget most of them. So, in order to automatically create and understand written or spoken sentences -- especially the latter -- we rely on subconscious definitions. We simply have an intuitive feel for what words mean and thus for what certain concepts mean. Yes, occasionally we misuse a word or try to express a concept we don't really understand. But in general, we're able to have instant and functional, two-way communication.

 

(This doesn't mean conscious definitions are ultimately useless. Having first gone through the labor of consciously forming, or receiving, a definition strengthens our subconscious definition. You will likely have a better feel for what a concept means if you have done this, even if you can no longer consciously define it as accurately.)

 

Also, when we see what makes many of our attempts at a conscious definition possible, it's apparent that in many cases reason has already provided a subconscious, effortless definition for us beforehand: the concept is already there. Otherwise, what would there be to consciously define?

 

In the cases just described, we are by that point aware of the concept, just not of the definition. But we also have concepts we're not aware of at all -- ones that originate in the subconscious and that, like the chess bot's definitions of each piece type, we take for granted in the background. We aren't aware of them until we have reason to think of them, at which point we might even try to define them. This is more likely to occur with concepts having no tangible, or at least no clearly tangible, instances. For example, an infant might have a subconscious understanding of PERPENDICULAR mainly from observing trees and buildings that are upright. The infant would have no real understanding of what a 90-degree angle is but would intuitively sense that something is radically different about a tree or building that is leaning over. Maybe at that point the infant would first have a conscious understanding of PERPENDICULAR, since he would now have a contrasting relation to set against it. This would, of course, be before he knew the word "perpendicular."

 

Extra Explanatory Value

The idea that concepts are purely subconscious in some animals and some AI allows us to assume they have reasoning while also explaining why they are missing key things we expect from beings with a rational faculty: language, a level of intelligence similar to our own, technology, morality, and so on. In some AI, many of these missing features are explainable by the fact that they cannot independently acquire new concepts or definitions, unlike humans and some animals. So, the theory has more value when applied to the case of animals with reason.

 

 

[No Language]

--------------

Without consciousness of the concepts you have, you can't begin to form symbols representing those concepts. Language creation would be impossible.

 

 

[Significantly Lower Intelligence]

-------------------------------

One reason it seems implausible that animals can reason is that their basic intelligence is so primitive compared to ours. Why do we see that if they truly have reason, like us? Several things can explain how they too can have conceptual capacity and yet much lower intelligence than humans:

 

1. No sharing of ideas -- If they have no language with which to share ideas, their learning is significantly reduced compared to ours. That difference in activity would almost certainly have an evolutionary impact on innate intelligence.

 

2. No capacity for higher-level concepts -- Without conscious concepts, they would be unable to form what Ayn Rand called an "abstraction from abstractions." Borrowing her example, they couldn't derive the higher concept of FURNITURE from the less abstract concepts TABLE, CHAIR, etc (or CAT from LEOPARD, CHEETAH, and so on). All of their concepts would be formed only from percepts (individual leopards, individual cheetahs, etc) and none from other concepts.

 

3. Less mental work -- Following intuition involves much less mental effort than deliberate, analytic thought. And as already mentioned, the subconscious thinking underneath that drives the intuition also requires less effort. The increased effort of conscious, analytic thought itself could develop a more intelligent mind, both individually and over time within a species.

 

4. Conscious thought is more likely to be correct -- Deliberate, conscious thinking is more likely to get it right than subconscious thinking, since it is more careful and thorough and there is more control over the final result. Subconscious thinking, by contrast, is more prone to error, given that the process would seem necessarily to be simpler, hastier, and without a critical component. Of course, in many simpler situations humans have a tendency to overthink things, and thus intuition often has the upper hand. But intuition is at a disadvantage when the situation involves many steps or many aspects.

 

5. More correct intuition on average -- Our intuition, too, would likely get it right more often than that of animals. As mentioned earlier, consciously defining a concept, or getting its definition from other sources, is likely to leave a stronger intuitive understanding of the concept even when the definition is forgotten. But our ability to think about things consciously also hones our intuition on other matters, not just definitions. This is especially useful when time is scarce and we must react to a situation immediately, without the chance to contemplate beforehand.

 

 

[No Technology, Laws, and Other Areas of Knowledge]

----------------------------------------------------

From the paragraphs above, it's clear why technology and social laws are absent among them.

 

 

[No Capacity for Morality]

-------------------------

Without self-consciousness, morality is impossible, since free will would be impossible. All of this centers on conscious thought. To have conscious thought is really to have self-consciousness: to have conscious thought is to know what your thoughts are. This means you're able to see the results of your rational faculty's work, even if that work was done without your effort. When reason makes distinctions, it shows alternative possibilities to follow that are nevertheless consistent with your beliefs and desires and, at a more general level, possibilities that are inconsistent with them. Without self-consciousness, you couldn't evaluate those options and judge them on their own rational merit, since they wouldn't be presented to you. So, although reason would express itself through intuition and hint that this or that path was a good fit (or off limits), "deciding" which path was best would be driven by your desires or instincts. If one path seemed more desirable even though it was worse for you long-term, you would always "choose" that path. Intuition would still guide you away from paths contradicting the one you took. But you could never be aware of why you took the path you did, or even that there were alternative paths, making deliberate action impossible.

 

Ultimately you would be a slave to those simple desires and instincts. Not only does this mean you couldn't really do something out of a sense of following your best, long-term interests. It also means it would be impossible to do, or not do, something out of respect for someone else. Therefore morality would be impossible.

 

 
