I’m always curious about interpretations of the pleasure/pain and risk/reward framings. I think most discussions try to make more of them than their function warrants. As we delve into artificial intelligence, I think we’ll find that emotion is a basic function of intentionalized awareness (consciousness is too vague a term). The development of synapses for useful function requires nurturing and pruning by emotion and randomness. In the end, our conscious picture of our desires is an emergent phenomenon of our needs-based functionality in our environment (a psychological problem for civilized humans living isolated from the environment they are fitted to). Rather than seeking pleasure or power, people instinctively seek control of their own place and future (hoarding, social cooperation, social dominance, etc.). I think the stable endgame is best described as “instinctual satisfaction of future needs,” and its structure is based on functional usefulness to one’s own modeled universe.

It sounds clinically sterile, but the truth usually is. That doesn’t mean we can’t enjoy the nuances and random surprises along the way; it means we are mature when we can tell the difference between actual needs and coerced wants, and become a functional, supporting part of our resources.
