Do Chess Bots Have a Mind?

November 19, 2025; most recent update: February 15, 2026

 

Table of Contents

 

1.  Introduction

2.  Aristotle's Theory of Mind in Brief

3.  Important Considerations

4.  Do Chess Bots Have a Mind at All?

5.  Adaptability

6.  A Conscious Mind or Plant-Like?

7.  Chess Bots Have Space & Time Awareness

8.  Rationality & What Else Their Behavior Shows

      a. The Ability to Enter Another's Viewpoint

      b. Unexpected Behavior

      c. Their Iron Will

9.  Lack of Self-Consciousness

10. Are They Really Playing Chess?

 

Introduction

Does AI exist or is it just a misleading term? That is, does AI have conscious thought or is its "intelligence" an illusion explainable by lesser processes?

 

What should we look at to begin answering the question?

 

In 1965, Soviet mathematician and computer scientist Alexander Kronrod said that chess is the Drosophila of artificial intelligence. In other words, digital chess games provide a legitimate test of whether an AI subject has real intelligence, and yet they are also simple to implement and analyze. To quickly gauge the actual intelligence of an AI subject, just look at how it plays chess -- or whether it can play chess at all.

 

For the same reasons, AI chess players are useful and convenient for philosophical analysis of artificial intelligence.

 

Of course, what's true of chess AI isn't necessarily true of all or even most AI. But it's likely that at least some of its same conditions and characteristics also apply to many other AI.

 

Regardless, examining chess AI bears its own fruit.

 

We'll look at the example of a computer chess opponent -- specifically, one designed to play chess, rather than a general-purpose AI player such as an LLM. The common terms "chess AI" and "chess engine" aren't accurate when referring to individual AI chess opponents and can cause confusion. So, "chess bot" will be the usual term used, even though that term has its own problems.

 

Also, the usual definition/standard of AI has narrowed over time. It used to include computer programs with a static knowledge base, but not so much today. Still, there is a core attribute the newer examples share with the old: the ability to independently reason, most notably the ability to solve mathematical problems. This puts AI in a different category than, say, an abacus, which merely helps with the mental representation of numbers while the human does the actual calculating. For that reason, the old AI definition will be used.

 

So, do chess bots really have intelligence?

 

To answer that, we need to explore what it means for something to have a mind. The concepts of intelligence and mind are inevitably linked. Intelligence can't exist without a mind, and all forms of mind, whether with reason or without, have some kind of intelligence. If we can establish that something has a mind, the case that it has true reasoning -- rather than, say, mere mechanical processes outwardly resembling reasoning -- becomes much easier to argue.

 

So, let's frame the main question a third time: do chess bots have a mind?

 

Aristotle's Theory of Mind in Brief

What does it mean when something has a mind? In his work De Anima (On the Soul), Aristotle gives us great insight.

 

First, a thing has a mind (i.e. soul, psyche) when it originates its essential activity.

 

For example, a knife's essential activity is cutting. But it can't do that without someone wielding it; its essential activity doesn't originate from it. Therefore knives lack minds. Likewise, an active volcano's spewing of lava comes from forces underneath it, not from itself. It is analogous to a mere spigot.

 

However, origination of essential activity isn't enough. Once it has an energy source, a clock tells time on its own. But time-telling is not a goal the clock has for itself (i.e. for the benefit of the clock); it is simply for the humans using it. Its essential activity thus springs from a goal for another party. So, clocks don't have minds either but are simply human tools.

 

Therefore a second key trait is that all minds' goals are for the sake of the entity possessing those minds; when an entity has goals for its own sake, it has a mind.

 

Aristotle identified three kinds of minds, each helping the thing that possesses them: plant/nutritive minds, animal minds, and rational minds. Plant minds have capacities that benefit the physical body and create new bodies of the species: physical growth, repair, reproduction, etc. Animal minds involve capacities that help the entity navigate the world. They include things such as sense perception, desire, passion, and wish. Rational minds include the capacity to form universals, helping the entity understand the world by grasping concepts and other patterns -- and, by extension, to have planned goals.

 

He also recognized a hierarchy among these souls: all things with a rational mind also have an animal mind, and all things with an animal mind have a plant mind. For instance, human reason depends on the senses to provide the perceptual particulars from which concepts can be formed; and all living things -- with or without an animal mind -- require a plant soul in order to absorb and regulate the nutrition required for existence.

 

Important Considerations

The previous section reminds us of a few important points.

 

First, there are different types of intelligence. If it turns out that chess bots, or even AI generally, don't have the right stuff for true rational ability, at minimum they still might have something like a plant mind. Maybe they produce rational output while being unconscious in the process, much as plant intelligence does amazing things unconsciously. With plant intelligence, we see this perhaps most clearly in how living species develop incredible evolutionary solutions to their problems largely through completely unconscious processes at the micro level.

 

Secondly, two different types of things can have the same kind of mind even though those minds may not share the exact same set of capacities, may have enormously different degrees of the same capacity, and may express the same capacities in radically different ways. We don't process our nutrients through roots, but there's no denying that we have a plant soul too. Flies have visual perception, but it's radically different from our own. Likewise, chess bots may have consciousness, but not necessarily anything like what humans experience. They may have true rationality even though they lack some of the key rational capacities we have, such as the ability to communicate through language, and may show chess behaviors and reasoning far different from any human chess player's.

 

Do Chess Bots Have a Mind at All?

Do chess bots even meet the basic requirements of having a mind?

 

At first, the answer might seem to be no. The electrical circuitry of the computer hardware gives a chess bot the necessary "nutrient," electricity, for it to play chess and is at least analogous to a plant soul. However, this is provided to it only if a human turns the computer on. It also needs an external source to open the program itself.

 

But these deterministic factors don't put it on par with examples like volcanoes and knives. The chess play itself, which is a chess bot's essential activity -- its essence -- originates from it.

 

It's true that a human or some other source must press "start" to begin the game. But a human playing another human also depends on agreement with the other player and any organization sponsoring the game, in order for their chess play to begin. For all intents and purposes, "start" is really just an agreement.

 

What about its chess play being a goal for itself? Isn't it just for human entertainment, practice or learning? That may be true of the program as a whole. But the chess bot within the program isn't playing chess to entertain, improve or teach humans. Its goal is always to win the game -- at least, that is true of most chess bots.

 

This doesn't mean it's thinking in sentences about the goal or even conscious of it. But it at least automatically has the goal, and we can see this because it tries to prevent us from winning.

 

Adaptability

Chess bots meet another criterion by which we normally distinguish minds from non-minds. It's not just that their actions originate within themselves and for themselves; it's also their adaptability with respect to those actions. This is true even of older AI with static knowledge.

 

A rational soul can form any possible proposition (or similar thought, such as a math equation) or concept. A "rational mind" that could access only a limited number of concepts or propositions would be suspect, like a machine with a pre-determined number of actions it could perform.

 

An animal soul can perceive an endless variety of images, sounds, etc., as long as they're within the range of its senses, and can have one desire or aversion after another.

 

A plant can adapt to infinite gradients within its survivable range of light, precipitation, temperature, soil conditions, and so on; it can adapt to an infinite number of such circumstances.

 

Likewise, even a simplistic chess bot can respond to any scenario on the chessboard. It might react the same way in similar or identical circumstances, but it doesn't break when it faces a board situation it has never encountered before. It can respond to any position or move made; it can perform any chess-related calculation.

 

While not necessary to show that chess bots have a mind, their adaptability is one more reason for believing that they do.

 

A Conscious Mind or Plant-Like?

So, chess bots qualify for having a mind, but is it a conscious mind or simply plant-like?

 

What is consciousness? To have consciousness is to have representations. A representation is a subjective indicator of something real or imagined. To have representations is to have a "map of the world," whether it be the objective world or a purely imaginary one.

 

The patio outside someone's window still exists even when the person turns their head away from the window and no longer has a perception of it. The perception they had of the patio was therefore a representation. A group of like things (e.g. peaches) exists even before a person has a concept of those things. The concept the person has of peaches is therefore a representation of that kind of fruit.

 

We can't properly understand this definition of consciousness unless we fall back on Aristotle's theory of mind. For example, when the oil in your car is low, you will usually get an indicator on your dashboard: a red symbol resembling something like a genie lamp with a droplet coming out. But does that mean the car has a representation and is therefore conscious? Of course not. The representation is for you: the goal of the car's warning system is to notify the driver of the car, not the car itself. Thus something has a representation, and is therefore conscious, when the representation is for it.

 

Do chess bots have representations? Here, it helps to ask how they tell the different kinds of pieces apart -- something they must be able to do in order to move each piece correctly. So how do they do it? Is it the same as with humans: "This looks like a horse, so it's a knight; and this looks like a castle, so it's a rook"? As you might have guessed, it's nothing like that. For chess bots, each kind of piece is given a certain number, and white's and black's pieces are distinguished by number as well.
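As an illustration, here is a minimal sketch of how a chess program might assign numbers to piece kinds and colors. The encoding is hypothetical, not any particular program's actual scheme:

```python
# Hypothetical numeric encoding of chess pieces (an illustrative sketch):
# one small integer per piece kind, another for color, packed together.
EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING = 0, 1, 2, 3, 4, 5, 6
WHITE, BLACK = 0, 1

def make_piece(color, kind):
    """Pack color and kind into a single integer (color in the high bits)."""
    return color * 8 + kind

white_rook = make_piece(WHITE, ROOK)  # 4
black_rook = make_piece(BLACK, ROOK)  # 12
```

To the bot, "rook-ness" is just these integers; nothing castle-shaped ever enters the picture.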

 

These numerical representations function the same way the visual symbols for chess piece types do for humans, and are separate from the concept behind those symbols. For example, what is a rook? Essentially, it is a chess piece that on a single move can move laterally or vertically any number of spaces in a straight line. Its castle-like appearance is a non-essential attribute. It could just as easily look like a frog, but current human convention makes it otherwise. Still, its distinctive castle-like appearance is necessary for humans to get an image that distinguishes it from other pieces and therefore move it the correct way under chess rules. Similarly, chess bots' representation of rooks as a certain number is not an essential attribute of rooks but the necessary representation they have for distinguishing them from other pieces.

 

And just as our castle-like visual symbol of a rook is different from our concept of it, the number chess bots use to represent rooks is different from the AI code's general definition of rooks' allowed movement. The general definition functions like our concept.
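To make this concrete, here is a hedged sketch of what such a "concept in code" might look like: a general rule for rook movement, ignoring other pieces for simplicity. The function name and the (file, rank) square convention are assumptions for this sketch:

```python
# An illustrative "concept in code": the general rule for a rook's movement
# on an empty board. Squares are (file, rank) pairs, each in 0-7.
def rook_moves(file, rank):
    """All squares a rook could reach: same rank (lateral) or same file (vertical)."""
    lateral = [(f, rank) for f in range(8) if f != file]
    vertical = [(file, r) for r in range(8) if r != rank]
    return lateral + vertical
```

Notice that nothing here mentions a castle shape or a numeric label; the rule captures only what a rook essentially is.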

 

Most importantly, these representations are different from actual rooks on a chessboard.

 

Here are the respective rook representations for humans and chess bots:

 

[Concept]

Humans.............any piece that can move any number of spaces laterally or vertically in a straight line on a single turn

Chess Bots........computer code's general definition for rooks' movement

 

[Symbol]

Humans.............castle

Chess Bots........unique number type for rooks

 

[Percept]

Humans.....................visual image of actual, castle-like piece on the board

Chess Bots (most).......the actual "1"s on the rook bitboard at any given time (more on that, in the next section)

 

So, both humans and chess bots have representations of the pieces: things that are necessary to identify those pieces so that they're moved the right way or even moved at all.

 

Therefore we can say that there is some kind of consciousness going on with chess bots -- just not anything like human consciousness.

 

Let's leave this section with this question in mind: is it even possible for a mind not to be conscious? Normally, we make a distinction between plant-like intelligence and conscious intelligence. But maybe plant-type minds must have representations too. For instance, viruses come out of dormancy when they detect stress signals in the host cell, and immune system cells detect invaders through non-self antigens and other means. 

 

So, from that view, plant minds are conscious but perhaps just not as conscious as animal life and higher.

 

If true, this would make it even more plausible that at least some examples of AI have consciousness. For many people would argue that AI processes are at least as "intelligent" as plant-like processes, but would withhold saying they're conscious. But if plant minds must have representations too, then there's one more reason to think that AI really is conscious.

 

However, taking that perspective would then make it harder for us to distinguish plant intelligence from animal intelligence. Where would the distinction be? In what we'll discuss in the next section: spatio-temporal awareness.

 

Chess Bots Have Space & Time Awareness

Consciousness, as many people understand it, starts from a spatio-temporal framework, and therefore involves sense perception -- or at least something like sense perception that, at minimum, detects some 1D objects in time.

 

Amazingly, chess bots do not have true 2D awareness of the chessboard. The digital chessboard on the screen is just for human use. They see the chessboard primarily through a numerical relationship rather than a spatial one. The most common way they do this is through bitboards. A bitboard is a 64-bit integer, each bit representing one of the 64 squares on the chessboard. A bit is "1" if its square is occupied by the relevant piece and "0" if it is not. There can be anywhere from 6 to 15 bitboards for the bot to look at. Chess has 12 types of pieces (6 kinds in 2 colors), so oftentimes there are at least 12 bitboards. There can be three additional bitboards -- one for all white pieces, another for the black pieces, and a third for black and white combined. A minimalist approach can also be taken, separating bitboards only by piece kind. Having multiple bitboards to view sounds complex and cumbersome, but it's actually an efficient way for the AI to see what's happening.
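The following sketch shows how a bitboard works in code, assuming the common convention of bit 0 = A1 through bit 63 = H8. The helper names are illustrative, not taken from any particular engine:

```python
# Sketch of a bitboard: one 64-bit integer, one bit per square.
# Convention assumed here: bit 0 = A1, bit 1 = B1, ..., bit 63 = H8.
def square_index(file, rank):
    """Map a (file, rank) pair, each 0-7, to a bit position 0-63."""
    return rank * 8 + file

def set_square(bitboard, file, rank):
    """Return the bitboard with the given square's bit turned on."""
    return bitboard | (1 << square_index(file, rank))

def occupied(bitboard, file, rank):
    """True if the given square's bit is on."""
    return (bitboard >> square_index(file, rank)) & 1 == 1

# White rooks on their starting squares A1 and H1:
white_rooks = set_square(set_square(0, 0, 0), 7, 0)
```

The whole "rook bitboard" is just the integer `white_rooks`; the "1"s mentioned earlier are its set bits, a 1D row of 64 digits rather than a 2D board.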

 

But while bitboards just mimic 2D awareness, they do involve 1D awareness. The bitboard itself is a 1D data structure of 64 bits. So, chess bots do have spatial consciousness but not to the level humans do.

 

In addition, chess bots have temporal awareness. Without it, they simply couldn't play chess. 

 

If a chess bot had no consciousness of time, how could it see several moves ahead or even one move ahead?

 

This would also make it unable to judge which moves are good, much less which is the best move. Take this sequence. The bot's king is on H8 in an open H file. The human's rook is on G4 and can move to the H file on its next move. So, the bot moves a knight in front of the king to proactively guard against the rook. How could the bot see the knight's move as correct for the current situation if it couldn't foresee the rook's possible next move?
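The kind of lookahead described above can be sketched abstractly as minimax search over a game tree, a standard technique in chess programming. The `children` and `evaluate` parameters here are hypothetical stand-ins for a real engine's move generator and evaluation function:

```python
# Bare minimax over an abstract game tree (an illustrative sketch, not any
# engine's actual search). `children(pos)` yields the positions reachable in
# one move; `evaluate(pos)` scores a position from the bot's point of view.
def minimax(pos, depth, maximizing, children, evaluate):
    kids = children(pos)
    if depth == 0 or not kids:
        return evaluate(pos)
    if maximizing:
        # The bot's own turn: pick the best outcome for itself.
        return max(minimax(k, depth - 1, False, children, evaluate) for k in kids)
    # The opponent's turn: assume they pick the worst outcome for the bot.
    return min(minimax(k, depth - 1, True, children, evaluate) for k in kids)
```

With depth 0 the bot is "blind" to the future; each extra level of depth is one more move ahead that it can "see," which is exactly the temporal ordering the argument above requires.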

 

And without a sense of orderly chronology, it would violate the chess rules. For instance, what would prevent the bot from moving a rook diagonally, or somewhat diagonally, in one move (i.e. combining a vertical and a lateral move, which constitutes two moves in sequence) rather than keeping to a single vertical or lateral move?

 

Finally, why wouldn't it mix up the correct sequences? If it needs to move to G4, then H4 afterwards, what would stop it from going to H4 first?

 

It might be argued that the essence or "brains" of the chess bot is the analytic part of the program (chess engine, evaluation function, or whatever) and that part has no sense of time, therefore the bot doesn't either. But that is like saying that humans aren't aware of time because the faculty of reason isn't aware of it. Our concept of time is born out of reason, but our basic and constant awareness of time comes from our perceptual faculty. Like a symphony, human consciousness has multiple players working together to form its totality. Similarly, even if the chess bot's analytic part has no chronological awareness (apart from time-clock awareness), something else in the bot does and it coordinates with the analytic part.

 

There's an even deeper explanation for why chess bots necessarily have time consciousness. Let's take it as uncontroversial that they do reason, they calculate, and that the entire process isn't merely mechanical. Is reasoning itself possible without time consciousness? For example, the argument "A is true, therefore B is true" is psychologically possible only if A is placed chronologically before B. A sense of time is necessary for A to be mentally established before B, so that the inference can happen. The AI equivalent would be that you need some kind of input before you can generate the relevant output.

 

We don't have to assume that its time and space consciousness are constant, unlike in humans. Maybe they occur in fragments, present only when the bot analyzes and makes its move.

 

Rationality & What Else Their Behavior Shows

If anywhere AI has surpassed the human mind, it's in chess. It's been a while since the best human chess players could defeat or even match the best AI chess players. This suggests that there is real, not just theoretical, potential for AI to exceed humans in many other areas of intellectual activity. 

 

Below are several figures showing chess games between a human (pink pieces) and an AI opponent (red pieces).

 

Figure 1 is characteristic of the generally superior chess acumen many bots have over humans these days. Even though both players have identical material, red's better positioning means that pink will lose at least the pawn on D2, with no equal compensation in return. Red's pieces have near flawless coordination and are collectively active, which is nothing unusual among chess bots that aren't intentionally dumbed down; whereas pink's less coordinated material, with many inactive pieces, is common among humans.

 

Red's king and G8 rook are automatically defended from pink's A4 rook despite the king's left side being completely open, another result of red's excellent positional play.

 

        [Figure 1]

 

There certainly are human players who could match or even exceed red's positional play above. But innately, most chess bots' positional intelligence -- not to mention their calculative powers -- is superior to humans on average. Even lowly bots typically have a way of coordinating the pieces in a much more complete fashion than most humans naturally do.

 

An even clearer example of bots' greater ability to synchronize the pieces is in Figure 2. Material on both sides is equal in value, but it's far from an equal picture.

 

Maybe it's an understatement to say that pink had some problems in piece development. Still, red was able to take advantage of that in a way that most humans probably could not.

 

There simply aren't many safe moves available for pink. Pink's king can go to D2 and then further expose itself by continuing to go left. The only other piece that it can move without immediately putting itself in harm's way is the F6 rook. But it's limited to the worthless squares D6 and F8. Other than that, any move by other pieces results in a potential sacrifice and not necessarily with a return of equal value.

 

Red's king is isolated but completely safe and positioned so that pink's F6 rook can't escape the top three ranks.

 

It's debatable whether or not red's positional play is grandmaster quality, but it is certainly complex and almost perfect if containing the opponent is the goal.

 

        [Figure 2]

 

    The Ability to Enter Another's Viewpoint

Figures 3 and 4 show something remarkable, even if not that unusual among chess AI opponents.

 

In Figure 3, the computer leaves the knight at G5 free to take, in what appears to be an inexplicably bad blunder. But after pink takes the piece, red's knight at F8 moves to E6, forking both rooks (Figure 4). So, the computer sacrificed the G5 knight so that it could obtain a higher valued piece in return.

 

        [Figure 3]

        [Figure 4]

 

The point isn't that it is a brilliant move. Despite being clever, it is flawed. Pink simply needs to attack the king with a rook on the next move, and after the king goes to a safe square, move the other rook away. The sacrifice was in vain if pink plays smart.

 

What's remarkable is that the computer seems to have something like the concept of temptation: it's tempting the human player. If true, that would be amazing. How would the computer have an understanding of a kind of thing that -- we assume -- is completely absent from its mind? This wouldn't be the same as, say, an investigator trying to get into the mind of a terrorist in order to better understand them. The investigator has goals, desires and political views, like the terrorist, just not the same ones. But the computer seems to understand something it has no mental relation to.

 

When we examine more closely, it's clear that red is trying to lure the G4 rook to G5. The knight at G5 likely had been moved from H7; the hanging knight at F8 and the isolated king would best be explained that way. But whatever the prior move was, the bottom line is that the G5 knight was either put in harm's way or left in harm's way. There was no absolute necessity for the knight being there: there are several safe squares where red could have put it without putting other pieces in jeopardy. Red planned for the knight to be at G5.

 

This just leaves the question: why would it try to lure if it didn't have some sense that pink likely would be lured? Put differently, why would red think it was the best move if it didn't think there was a good chance that pink would take the G5 knight? And from that, why would it think pink would likely be lured?

 

This isn't to suggest that red is thinking deeply about these things or is even thinking in propositions. But red must have some sense, some instinct or other similar cause, automatically telling it that pink has something in itself making it likely to take the bait. If not, then, again, why would red take the position it did in Figure 3?

 

Still, the notion that a computer has an inkling of what temptation is remains implausible, regardless of the evidence in Figures 3 and 4 that red tried to lure its opponent. Temptation implies a desire and a conflict between that desire and the intellect. A desire is different from a goal: a desire can be in sync or out of sync with one's goal. Again, we're left wondering how a computer could have, and therefore really understand, what a desire is.

 

So, we can look at this from another angle. Maybe the more conservative explanation is that red tried to fool its opponent -- make the opponent err. That a chess bot has some sense of what an error is isn't just plausible; it's implausible that it wouldn't. Unless there's clear evidence to the contrary, we should assume that the computer is in it to win. Therefore, when it offers a free piece to its opponent, it's obviously not doing so to lose or to be nice. Instead, it's because it has some sense -- "belief," if you will -- that taking the free piece would not be in the other player's interest: taking the piece gives the bot a better chance at victory than another move would. The bot must therefore have some sense of why the opponent would do something against their interest -- why it would appear to them to be the right thing to do.

 

This implies at least a minimal ability to enter the viewpoint of the other player, or at least what it thinks is the other's viewpoint. We have to assume that chess bots have this ability, and we assume this even in cases when they are not offering bait. Otherwise, it is very difficult to explain, for example, why a computer moves a piece to safety or supports it with another piece when the opponent can capture or even just threaten the vulnerable piece on the next move. Neither of those things are guaranteed to happen on the next move; the computer has no way of knowing whether they will happen. But it predicts that there is a good chance they will occur, which is why it sees the current best move as either moving the vulnerable piece to safety or supporting it. It predicts there is a good chance they will occur because it has a sense of the opponent's goals.

 

Figures 3 and 4 weren't just a fluke or a misreading of the situation. Red tried to lure again, in Figure 5. This time, I kept track of the exact sequence of events and don't need to speculate on the previous position.

 

In Figure 5, red slides its remaining rook from G7 to G8, exposing its knight on H7. There was a plan behind that. Red was trying to get pink's H2 rook to take the knight, so that its own rook would be free to threaten pink's king from the G2 square. The proof is in Figure 6.

 

        [Figure 5]

      [Figure 6]

 

    Unexpected Behavior

But the plan seems to make little sense. The plan in Figures 3 and 4 was clever but shortsighted. The plan here, however, almost defies explanation. What good did it do for red to sacrifice one of its few remaining pieces simply to threaten the king for just one move?

 

This gets into something else I and others have observed. Some chess bots, like the one above, begin making seemingly irrational moves when their situation is dire -- moves so obviously flawed that they're hard to explain as just a mistake in reasoning. It's as if fear overrides their ability to think clearly. Also, the behavior of the bot above is inconsistent with how it plays chess in circumstances more favorable to it. In its "Advanced" mode at least, it's usually very stubborn in not accepting exchanges or offering sacrifices.

 

It's not a very deep calculation for red to see that moving its rook to G2 is fruitless and can be countered several ways by pink, and that the knight sacrifice would therefore be a waste. Is red simply so blind that it can't see this? Looking at its chess play in Figures 1 and 2, the answer would seem to be a definite no.

 

So, how can red's chess play be so good in Figures 1 and 2 and so horrible in Figures 5 and 6?

 

Is the game designed so that in certain cases the computer hands its opponent a victory? Again, we should first assume that the computer is committed to winning, or at least that it's not trying to help its opponent. Moreover, because this behavior is observed in other AI chess opponents -- such as Stockfish, at least at some of its lower levels -- it becomes more plausible to think that this is a natural and common behavior of chess bots.

 

Sometimes odd behaviors can be accounted for when we better understand the situation. An herbivore turning carnivore, or vice versa, is not mysterious if we see that the animal's normal food sources are scarce in its environment. The same is true here, where the mystery vanishes once we have a clearer view of the complex or rare circumstances.

 

In both scenarios, Figures 3-4 and 5-6, red appears to make a mistake by taking a questionable risk. What we also see in both cases is that red's material was both very minimal and vastly outweighed. The most rational thing to do would have been to resign, but most AI opponents don't seem to be given that exit route.

 

The chances of victory in either scenario are extremely low. Perhaps the computer calculates that defeat is inevitable or almost certain. So, the most rational thing to do according to its algorithm is to lose "less badly": leave the opponent with fewer points and less material rather than more, even if that means sacrificing important pieces in the process. If it can't win, it can at least inflict the maximum damage possible. The computer would simply be choosing the mathematically best option against its opponent.
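As a purely speculative sketch (not any engine's actual code), that "lose less badly" logic might look like an evaluation function that, once defeat looks certain, switches to minimizing the opponent's remaining material:

```python
# Speculative sketch of a "lose less badly" evaluation. Piece letters and
# values follow standard chess point counts (kings excluded, since they are
# never captured). The switch on `defeat_certain` is the assumption at issue.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def material(pieces):
    """Total point value of a list of piece letters."""
    return sum(PIECE_VALUES[p] for p in pieces)

def evaluate(own_pieces, opp_pieces, defeat_certain):
    if defeat_certain:
        # Can't win: prefer lines that strip the opponent of material.
        return -material(opp_pieces)
    # Normal play: maximize the material balance.
    return material(own_pieces) - material(opp_pieces)
```

Under this rule, a doomed bot would happily trade its queen for a rook, exactly the kind of "damage maximizing" sacrifice described above.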

 

That logic could explain Figures 3-4, where the computer does get itself in position to take a rook -- but either miscalculates or simply rolls the dice. It would also explain cases where the computer simply crashes a piece into whatever piece of the opponent's it can as the game nears its end.

 

But in Figures 5-6, red simply never puts itself in any realistic position to capture. So, why sacrifice a knight to get there?

 

What might explain this is that red's chance of victory or inflicting damage appears much lower even than in Figures 3-4. So, the reasoning might be this: if you can't win, inflict damage; and if you can't damage, at least attack (it's better to attack than be attacked).

 

However, there's another way to understand the move while still maintaining that red is genuinely trying to win. It's a horrible situation -- one that most humans would have walked away from long before -- but red must stay in it. Pink's A-file pawn is three moves from promotion, and red must do something now or fall into a totally defensive posture. Red must attack, but any of its attacks has a very low probability of being beneficial. Still, there's a chance that something good might happen if it checks the king at G2: maybe the king will move to a vulnerable square and more attacks can follow. Red is simply taking what it sees as the least bad of incredibly bad options toward victory.

 

The sacrifice in Figures 3-4 can be similarly explained. The resulting fork is flawed, but it's a risk red views it must take to give it the best chance of victory.

    Their Iron Will

This refusal to resign, found in most chess bots, shows that they are totally committed to the goal of winning, with nothing to take them off course or temporarily affect their drive to victory. It is another key difference between them and humans.

 

They have only goals, not desires. Pleasure and pain don't factor in. They have only one ultimate goal, not any other goals conflicting with that.

 

Without biological or psychological needs to fulfill, things like mental fatigue or brain fog don't influence their chess play either. Either they have enough electricity to play or they don't; given power, the consistency of their play is virtually guaranteed. Any errors they make won't come from arbitrary, momentary factors but from inherent flaws or limitations in their software or learning experience.

 

Reinforcement learning can seem to be an exception to chess bots being strictly goal-oriented, giving the appearance that they can have desires. Reinforcement learning is a type of machine learning where an AI program is given a numerical reward/increase when it does a given task the "right way" -- as defined by its designers -- and a numerical punishment/decrease when it does the task the "wrong way." By trial and error, the program learns which actions are right and which are wrong. In a chess context, designers might reward the bot when it wins, punish it when it loses, and give neither reward nor punishment when it draws. Over time, the bot gets a better idea of which moves it should make and which it should avoid, thus improving at chess.
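The trial-and-error loop described above can be sketched in a few lines. This is a deliberately simplified toy, not how any production chess engine learns; the reward values, learning rate, and function names are all assumptions made for illustration:

```python
import random

# Designer-defined outcomes: win rewarded, loss punished, draw neutral.
REWARDS = {"win": 1.0, "loss": -1.0, "draw": 0.0}

move_scores = {}  # (position, move) -> learned numerical score

def update_scores(game_moves, outcome, learning_rate=0.1):
    """After a game, nudge the score of every move played
    toward the game's final reward."""
    reward = REWARDS[outcome]
    for position, move in game_moves:
        old = move_scores.get((position, move), 0.0)
        move_scores[(position, move)] = old + learning_rate * (reward - old)

def choose_move(position, legal_moves, explore=0.1):
    """Mostly pick the best-scoring known move; occasionally
    explore a random one (the trial in trial-and-error)."""
    if random.random() < explore:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: move_scores.get((position, m), 0.0))
```

Note what the bot is actually optimizing here: the numbers in `move_scores`, not chess itself. That detail matters for the point that follows.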

 

But this isn't like giving the bot an extra incentive to play good chess, the way a human chess player might feel pleasure after victory and pain after defeat. Rather, earning the numerical reward becomes the ultimate goal, and chess victory is just a means to it. The chess bot becomes primarily a reward seeker and only secondarily a chess player.

 

Lack of Self-Consciousness

The play of many chess bots shows clear evidence of rationality. They exhibit rational processes that humans can express in statements. But that doesn't mean chess bots can express their own rational processes in statements (even if they had the right physical hardware to talk). In other words, there's no reason to think that they are aware of their reasoning. This is because, even though they are conscious, they are not self-conscious.

 

Self-consciousness involves the ability to have representations of one's own representations. For example, it's one thing to have an image of the tree outside that you see; it's another to be aware of that image as an image. Likewise, you can know that you believe something but not know why until you reflect on your reasons for believing it.

 

Reasoning requires concepts. Therefore chess bots have concepts (or, rather, their version of concepts) without self-consciousness. This means that, like some animals that have reasoning capabilities, chess bots have concepts but are not aware of the concepts themselves. They have conceptual consciousness, aware of the distinctions and similarities that only concepts make, but have no view of any of those concepts in their own right. For instance, the general definitions of the different piece kinds in a chess bot's code allow it to distinguish their proper movements, but it can never focus on those definitions; they could never be an input that it processes.
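A small sketch makes the asymmetry concrete. Below, hypothetical piece definitions (the data layout and names are assumptions, not drawn from any real engine) are rules the program applies when generating moves, yet they are never themselves among the inputs it processes -- the program looks *through* them at the board, never *at* them:

```python
# Hypothetical piece definitions: the bot's version of concepts.
# Each entry outlines the freedoms and limits of one piece kind.
PIECE_RULES = {
    "knight": {"offsets": [(1, 2), (2, 1), (2, -1), (1, -2),
                           (-1, -2), (-2, -1), (-2, 1), (-1, 2)],
               "sliding": False},
    "rook":   {"offsets": [(1, 0), (-1, 0), (0, 1), (0, -1)],
               "sliding": True},
    "king":   {"offsets": [(dx, dy) for dx in (-1, 0, 1)
                           for dy in (-1, 0, 1) if (dx, dy) != (0, 0)],
               "sliding": False},
}

def moves_from(piece, file, rank):
    """Destination squares for a piece on an otherwise empty board.
    The inputs are a piece and a square -- never PIECE_RULES itself."""
    rule = PIECE_RULES[piece]
    result = []
    for dx, dy in rule["offsets"]:
        f, r = file + dx, rank + dy
        while 0 <= f < 8 and 0 <= r < 8:
            result.append((f, r))
            if not rule["sliding"]:
                break
            f, r = f + dx, r + dy
    return result
```

The definitions do all the conceptual work of distinguishing knight-movement from rook-movement, but no function here ever takes a definition as an argument to examine or revise.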

 

One sign that chess bots with static knowledge lack self-consciousness is that they don't criticize and change their chess play even after the same moves, tactics, and strategies fail time and time again.

 

What about chess bots with machine learning, which can alter their knowledge base? It's essentially the same situation.

 

With many such bots, their outlook is set once the training phase ends. Their knowledge base changes, but only for that limited period.

 

As for chess bots with continual learning, their core tactics and strategies can't be altered during a given game, only afterwards. Post-game analysis can lead to a core change, but that perspective is then locked in and closed to review until after the next game.

 

You, however, could be in the middle of a chess game and suddenly have a clear insight persuading you to do something fundamentally different from how you usually play chess. Perhaps it appears that you've long been doing something wrong, and you seem to know why. Yes, taking such inner advice during a game could be foolish and undisciplined, especially if there's a lot at stake if you lose. But the fact that you and other humans can do it is one thing that makes us fundamentally different from chess bots. There's never a time when you can't criticize and re-evaluate your own fundamentals.

 

But during their essential activity, chess bots lack that inward-looking eye, showing that they don't really have self-consciousness.

 

Are They Really Playing Chess?

Chess is by definition a 2D game -- though some would say 3D, if you consider the knights to be "jumping over" other pieces rather than moving through them.

 

So if chess bots lack both 2D and 3D consciousness and see the different piece types as number types rather than objects with distinct physical features, are they playing chess or some other game?

 

They are playing chess, just through a different kind of language than the one humans use. The way they distinguish the various piece kinds differs from our method, but it is still a classification of symbols that allows them to correctly identify the actual pieces on the actual board. At a deeper level, bots' general definitions of pieces are nothing like our worded definitions, but they still outline the freedoms and limits of each piece kind and form a classification, just as our definitions do.
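The claim that a numeric classification and a worded one pick out the same pieces can be checked directly. In this sketch the integer codes and the name table are invented for illustration, but the test at the bottom is the substantive point: two pieces count as the same kind under one classification exactly when they do under the other.

```python
# A hypothetical numeric encoding (the bot's "language") alongside
# our worded one; the specific codes are assumptions.
HUMAN_NAMES = {1: "pawn", 2: "knight", 3: "bishop",
               4: "rook", 5: "queen", 6: "king"}

def classify_numeric(code):
    """The bot's classification: a bare number type."""
    return code

def classify_worded(code):
    """Our classification: a worded piece kind."""
    return HUMAN_NAMES[code]

def classifications_agree(codes):
    """True if the two classifications group pieces identically:
    same-kind under one exactly when same-kind under the other."""
    return all(
        (classify_numeric(a) == classify_numeric(b))
        == (classify_worded(a) == classify_worded(b))
        for a in codes for b in codes
    )

back_rank = [4, 2, 3, 5, 6, 3, 2, 4]  # a standard back rank, encoded
print(classifications_agree(back_rank))
```

The two "languages" differ radically in form, yet they induce the same groupings over the same pieces -- which is what the next paragraph means by the classifications functioning identically.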

 

So, logically, these classifications and definitions are the same as ours. They function identically and this is reflected in the legal chess play that occurs on the chessboard. Moreover, their mock-2D awareness of the board doesn't prevent them from utilizing all of the powers each piece has to offer while still working within the rules. On the digital chessboard, human understanding and bot understanding converge.

 

This is no different from how humans and other life forms -- or man-made systems other than AI -- can know the same thing in radically different ways. When we become aware of insects on the ground, it's usually through sight. Centipedes, which are blind or nearly so, mainly use their antennae: detecting movement through vibrations, picking up scents, or finding things by direct touch. But in no way does that mean our perception is real and theirs is a mirage. Likewise, a pilot might spot another aircraft with their own eyes or by radar. Both are legitimate paths to the same thing.

 
