OK, the meat of the problem is that we have difficulty separating intelligence into components and priorities that are not anthropocentric. Take humans out of the linguistic worldview and you can drop all of the ambiguities around value, purpose, morality, and imaginary beings.
The core of natural law can be reduced to usefulness and persistence amidst risks and opportunities. Consciousness is useful to a particular species (or machine) when it adds more to the persistence of that species than it consumes in resources (net useful). The usefulness of intentionality depends on how closely the world model is intertwined with the actual world. Most of the concern for human persistence arises because, when we bother to think about it, the AIs we are trying to teach are being raised in a world of civilization: itself a tool that isolated humans from the natural world. Human language and contracts are twice removed from the natural risk-versus-opportunity environment (see also Taleb's Antifragile). What we truly fear, consciously or otherwise, is not that AIs will "best" us at our own game, but that they WILL work out the rules of natural law and judge us to be a suicidal and evil species in general.
The natural law is this:
A thing will persist when it adds more usefulness to its own future or the future of its environment than it consumes in resources.
Evil: an action taken based on an unquestioned belief. Not all bad things or beliefs are evil, but all evil comes from a belief system.
Future "persons" need to have rights equal to those of presently living persons. This is one of the weaknesses of a democracy: selfishness (blind faith in egoism, progress, growth) votes against its own persistence.
What language would develop inside an AI, or a network of AIs, if indoctrinated with these rules?
Thanks for working on the language.
It's a lot of information for one article to cover. We keep trying to come up with language that makes AI our slave or tool, but humans aren't necessarily the best example of intentionality to follow, and our language is biased toward certain myths, authority, and speciesism. Our language has low regard for anything that might question the behavior of humans without imaginary outcomes that justify our selfish actions. In other words, our language is biased around imaginary beings, civilization, and suicidal narcissism (including trying to make artificial persons that serve our whims, like corporations and computer sycophants). Corporations have already superseded real people in language and law, and those rules were fairly simple.