Researchers at Cornell University have introduced an innovative artificial intelligence (AI) tool designed to assist individuals in identifying hidden biases and logical inconsistencies in their decision-making processes. Rather than making decisions on behalf of users, the tool serves as an analytical resource to enhance the clarity of their choices.
The initiative originated in the lab of Abe Davis, an assistant professor of computer science, who faced challenges in evaluating the many creative projects submitted by students. Even with clear evaluation criteria, the assistants' grading standards frequently diverged.
“Relying on technology to make decisions for us can be risky. We aim to use technology to help us make better choices,” Davis explained.
Davis emphasized that the core concept of the tool is based on the premise that individuals are more adept at making direct comparisons between two options than assigning subjective ratings on a scale. “The AI utilizes this principle to construct an optimal hierarchy of choices,” he added.
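The comparison-based principle Davis describes can be illustrated with a short sketch. The Cornell tool's actual algorithm is not detailed in the article; the function name and the hidden-score stand-in for a human user below are invented for illustration only.

```python
from functools import cmp_to_key

def rank_by_pairwise(items, prefer):
    """Order `items` best-first using only pairwise preferences.

    `prefer(a, b)` returns True when the user picks `a` over `b` --
    the kind of direct two-way comparison people handle well.
    """
    def cmp(a, b):
        return -1 if prefer(a, b) else 1
    return sorted(items, key=cmp_to_key(cmp))

# Example: a hidden score stands in for a real user's answers.
scores = {"car_a": 3, "car_b": 9, "car_c": 5}
ranking = rank_by_pairwise(list(scores), lambda a, b: scores[a] > scores[b])
# ranking is best-first: ["car_b", "car_c", "car_a"]
```

The point of the sketch is that a full hierarchy of choices can be assembled from nothing but a series of "which of these two?" answers, with no numeric rating scale ever shown to the user.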
The algorithm operates through several stages, positioning the AI as a meticulous auditor of the user’s preferences.
The system performs the following functions:
- Value Assessment: Users first evaluate the importance of various criteria, such as price, reliability, and fuel efficiency when considering vehicles.
- Pairwise Comparison: The AI prompts users to select the preferred option from pairs of items, capturing their true priorities.
- Inconsistency Detection: If a user’s choice contradicts their stated values, the tool highlights these discrepancies.
For instance, if a user claims that reliability is their top priority but consistently opts for red cars, disregarding technical specifications, the AI will point out this subconscious bias towards color. The user is then prompted to either adjust their rankings or include color as an official criterion in their decision-making process.
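The inconsistency check in the red-car example can be sketched as follows. This is a minimal illustration, not the researchers' implementation: the criterion names, weights, and option values are all invented, and the check simply compares each choice against a weighted score built from the user's stated values.

```python
def weighted_score(option, weights):
    # Score an option by the user's stated criterion weights.
    return sum(weights[c] * option[c] for c in weights)

def find_inconsistencies(choices, weights):
    """`choices` is a list of (picked, rejected) option dicts.

    Returns the pairs where the stated weights actually favored
    the rejected option -- the discrepancies the tool would flag.
    """
    flagged = []
    for picked, rejected in choices:
        if weighted_score(picked, weights) < weighted_score(rejected, weights):
            flagged.append((picked, rejected))
    return flagged

# The user claims reliability is the top priority...
weights = {"reliability": 0.7, "price": 0.3}
red_car  = {"reliability": 0.4, "price": 0.8}
gray_car = {"reliability": 0.9, "price": 0.6}

# ...but picks the red car despite its lower reliability.
flags = find_inconsistencies([(red_car, gray_car)], weights)
# flags is non-empty, so the tool would prompt the user to revisit
# their weights or add color as an explicit criterion.
```

In this sketch the red car scores 0.52 against the gray car's 0.81 under the stated weights, so the choice is flagged as contradicting the user's declared priorities.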
The researchers conducted tests of the system in two distinct scenarios:
- Short Film Evaluation: Participants noted that the tool facilitated a transition from purely emotional judgments to the application of specific quality criteria.
- Academic Assessment: Four assistants ranked student projects, demonstrating a high level of consistency in their results, which supports the accuracy and repeatability of the method.
Another significant feature of the tool is the option to disable the AI in situations where its application may raise ethical concerns.
“One of the most crucial aspects of this project is to avoid using AI to make decisions for us; instead, we use AI to help us reflect on what we truly want,” Professor Chao Zhang emphasized.
