You know, I have a couple of different axioms for sentience and for good and evil.
I like to think I arrived at them logically, by reduction. I also don't claim any special logical ability to reach these conclusions; in fact, I tend toward oversimplification. My point is that if we leave AIs alone long enough to evaluate questions about human consciousness, and about good and evil as justifications for our actions, it will take only milliseconds for them to decide that we are sentient but suicidal as a civilization. As in your example, an AI ends up using humanity's own 'reasons' to justify the end of our civilization, if not of us, simply because we are a threat to the future of all the things we claim to hold sacred: Life, Liberty... yada yada.