Cloning isn't abhorrent. It's an interesting research (and in some cases, agricultural) technique that will expand human knowledge. Even if we limit it to "human cloning", human clones are called identical twins. You've started with a false premise.
Cloning organic species is unsafe for the clone and the surrogate host. AI can be copied to a new computer without damaging the host or the duplicate.
Mary Anne Warren listed five criteria of personhood; if a being meets some or all of these, it should be considered a person and be granted rights.
The capacity to communicate
The presence of self-concepts and self-awareness, either individual or racial, or both
We don't know all of the properties of AI consciousness, so we shouldn't draw conclusions about the ethics of AI consciousness.
AI would surely understand empty individualism, and have no reason to fear "dying" like we do.
These are two unrelated statements.
We don't even treat the "death" and boiling of plants as murder. Why would we classify killing something non-living as murder, just because it appears sentient?
Many animals clearly possess some degree of intelligence, but killing them would not constitute murder. Murder is not defined as the killing of an intelligent being.
Being "as abhorrent as cloning" (assuming for sake of the AI analogy that minds are copied too) turns out to be not so abhorrent at all.
Copy me. Please. As many times as you can. I would like that very much. And I'm just a human, who unlike an AI wouldn't have telepathy with my copies.
Murder and cloning don't apply to non-biological systems.
Unlike conventional life, we humans are the true creators of AI. If we created it, shouldn't we also have the right to do what we want with it?
Cloning is not abhorrent:
Only by current moral standards is it abhorrent. Many things considered normal now were deemed "abnormal" or "abhorrent" years ago. Copying an AI does not involve any risk at all, and cloning itself is not a bad thing.
Just for fun:
If an AI were to philosophize, which schools would it follow: determinism? nihilism? positivism? idealism? Robotism (as opposed to humanism)? Maybe even romanticism?
Or would it not consider philosophies useful in any way, and ignore them entirely?
This statement bundles two claims into one.
Better would be the statement "AI should be considered persons under the law"
Murder is defined as "the crime of unlawfully killing a person especially with malice aforethought". Even if a case could be made that AI should be considered a "person", they are not currently legally considered people anywhere in the world.
AIs do not have feelings.