businessua.com.ua reports:
OpenAI is funding scientific research into algorithms capable of predicting people’s moral judgments, TechCrunch writes, citing a filing submitted to the IRS.
The startup awarded a grant to researchers at Duke University for a project called “Researching AI Morality.”
Few details about the work are available, and principal investigator Walter Sinnott-Armstrong declined to comment on its progress. The grant runs through 2025.
Previously, Sinnott-Armstrong and another participant in the project, Jana Borg, wrote a book about the potential of AI as a “moral GPS” that helps people make more informed decisions.
Together with other teams, they created a “morally oriented” algorithm that helps decide which patients should receive donated kidneys. They also studied situations in which people would prefer to delegate decision-making to AI.
The goal of the OpenAI-funded work is to train algorithms to “predict human moral judgments” in situations that involve conflicts in the medical, legal, and business fields.
As a reminder, Sam Altman’s startup is also preparing to launch an AI agent codenamed “Operator.”