I’ve finished reading “The Alignment Problem” (ISBN: 9780393635829), by Brian Christian. As the subtitle states, it’s an attempt to discuss fuzzier aspects of human value with the growing relevance of ...
Altman then refers to the “model spec,” the set of instructions an AI model is given that will govern its behavior. For ChatGPT, he says, that means training it on the “collective experience, ...
Moral Labyrinth, created by artist and researcher Sarah Newman in 2018, is an art installation, workshop, and website inspired by the Value Alignment Problem in AI. Newman and BKC Fellow Mindy Seu, ...