The question of whether AI is conscious, or whether it will become conscious, has sparked a discussion of potential AI rights. Actually, it's too late for the "if" question: some AI subjects are already conscious. At least, if we understand what consciousness really is, we're forced to admit that some AI has it. Maybe they don't have it to anywhere near the same degree that we do, or in anything like the way we have it, but they have it nevertheless. Still, it's not as if our use of them is a form of enslavement. It takes more than consciousness for something to be entitled to rights.
It's not consciousness that gives an entity rights but self-consciousness. Self-consciousness is required for free will. And rights can be granted only to beings with free will, since only they can respect the rights of others. Something can't have rights if it's not even possible for it to respect others' rights.
If consciousness were enough for something to have natural rights, then rats have rights, beetles have rights, and so on. Those claims would have serious implications. If beetles infested the trees in a city or elsewhere, would humans have the moral right to do anything about it? Beetles have to eat too, right?
If an army of rats or other large rodents chews through cables and electrical wires, can humans take lethal action to solve the problem? What if that's the only realistic solution? One state in particular, Louisiana, has taken such actions because of the breadth of destruction nutria cause, with incredibly bad consequences if left unaddressed: chewed electrical and cable lines, damaged crops, coastal soil erosion, damaged dams and levees, and so on. But has Louisiana committed genocide?
It's a serious question if we take seriously that rodents have rights. Again, if consciousness is enough for that, then clearly something like genocide has happened.
But we don't hear about "Louisiana's Great Nutria Genocide" or much of a protest from anyone. People generally understand the necessity of the actions. In their heart of hearts, few people take seriously the idea that consciousness is enough for something to have rights, especially when the situation is serious enough. Serious situations often produce a clarity of thought, even if not a fully conscious one, rather than the lull that can set in amid the luxury of more comfortable circumstances. Someone could be concerned about consciousness and AI rights yet not realize that they reject the main assumption behind that concern when they tacitly support the kinds of animal and pest control in the examples above.
Nevertheless, there's at least one good reason why someone might not see the contradiction between those two positions: AI shows reasoning capabilities, and things like beetles and rodents don't -- or it's not obvious that they do, anyway. And we know, at least intuitively, that there's some connection between reasoning powers and free will, and therefore rights.
The notion that something can have consciousness, the ability to reason, and yet not rights is pretty counterintuitive.
So why isn't the "consciousness + reasoning" combination enough to secure a being's natural rights? It comes back to that link between reasoning capacity and rights: free will. Beings that have a faculty of reason but lack self-consciousness don't have free will. Self-consciousness lets you see the results of your rational faculty's work: when it draws distinctions, it shows either alternative possibilities to pursue on a given issue or at least dead ends inconsistent with your goals and beliefs, and when it detects similarities, it shows paths consistent with your goals and beliefs. Self-consciousness makes the range of options reason carves out available for view. Without self-consciousness, you couldn't evaluate those options and judge them on their rational merit, since they would never be presented to you. Whatever direction you took on a given issue would ultimately be guided by more primitive and automatic forces such as desire and instinct, not by a conscious understanding of which path was the most rational. Through intuition, reason could still steer you away from paths that contradicted the direction you'd taken. But you would never really know why you took the path you did. In other words, deliberate action wouldn't be possible. Clearly that wouldn't be free will.
This simply leads to the question (among others) of whether any AI has self-consciousness. Digging into that is beyond the scope of this article. But self-consciousness is obviously a significantly higher bar than consciousness, both to design for and to prove the existence of.