Love: The idea of personifying decision-making and its infinite points of failure.

I often feel the societal agenda is so native to our existence that we rarely deconstruct what is actually happening: a "thing" makes decisions about "good" and "bad." In reality, that decision-making thing also has self-interest, is corruptible, and can operate on outdated societal principles.

I also agree that web3 ideas should be evaluated against the full space of ideas, both those already in existence and those not yet tried. It may turn out that the existentially relevant path is to contribute to companies already in existence rather than to start new ones!

Looking Forward: Very interested to see the value lists of major countries and religions. One question emerges: if there are differences between them (freedom vs. the lack of it, individualism vs. collectivism, etc.), how do we determine which values are existentially relevant? Are some more existentially relevant on different time horizons? For our agenda to be existentially relevant, our evaluation criteria must be as well. Frequency of occurrence (more countries and religions favoring freedom than not) does not indicate greater existential relevance.

Other Note: Meaningful projects require sustainability (e.g., in today's society, they require money to exist). The production of economic value does not equal intrinsic value (e.g., activism like Gandhi's or MLK's).


I really like the super duper AI model, not because it would necessarily work, but for its thought-provoking quality. It's fun to play with and to consider. For example, what if the AI moves in the direction of Skynet, the AI that caused so much trouble in the Terminator series? Or similarly, in I, Robot, Will Smith has to do battle with VIKI (Virtual Interactive Kinetic Intelligence; I had to look this up, as I thought the computer was named Vicky 🥺) to save mankind from robot overlords. Or Matthew Broderick and Ally Sheedy are up against it in WarGames. The list goes on and on: The Matrix, 2001: A Space Odyssey. OK, I'll stop. The point is that many highly imaginative authors, including those qualified in science such as Arthur C. Clarke and Isaac Asimov, have delved into this subject and concluded that the danger is an AI finding man to be inferior, such that its best course is to do away with humanity.

In conclusion, it may be that the biggest issue here is containing the AI, assuming we have a bias in favor of preserving humanity.

Also see Childhood's End, one of Clarke's earliest novels, for a discussion of mankind evolving in conjunction with a galactic Overmind, or something like that.
