
MIT study reveals AI model that can predict future actions of humans

A new study from researchers at MIT and the University of Washington reveals an AI model that can accurately predict a person's or a machine's future actions.

The AI is known as the latent inference budget model (L-IBM). The study authors claim that L-IBM outperforms previously proposed frameworks for modeling human decision-making.

It works by examining an agent's past behavior, actions, and the limitations of its thinking process (the agent can be either a human or another AI). The quantity inferred from this assessment, an estimate of how much computation the agent spends before acting, is called the inference budget.
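
To make the idea concrete, here is a minimal sketch (our own illustration, not the study's code) of how a latent budget could be inferred: assume a family of policies whose decisions improve as the budget grows, then score each candidate budget by how well it explains an agent's observed actions. The toy `action_distribution` and the candidate budgets are invented for this example.

```python
# A minimal sketch, NOT the study's code: infer a latent "inference
# budget" by scoring candidate budgets against observed actions.
# `action_distribution` is a toy stand-in we invented: agents with a
# larger budget put more probability on the better action.

def action_distribution(state, budget):
    p_good = min(0.5 + 0.1 * budget, 0.95)
    return {"a_good": p_good, "a_bad": 1.0 - p_good}

def budget_posterior(observations, budgets):
    """observations: (state, action) pairs from one agent.
    Returns a posterior over candidate budgets (uniform prior)."""
    scores = {}
    for b in budgets:
        likelihood = 1.0
        for state, action in observations:
            likelihood *= action_distribution(state, b).get(action, 1e-9)
        scores[b] = likelihood
    total = sum(scores.values())
    return {b: s / total for b, s in scores.items()}

# An agent that mostly acts well but slips once looks like a mid-sized budget.
obs = [("s0", "a_good"), ("s1", "a_good"), ("s2", "a_bad")]
print(budget_posterior(obs, budgets=[1, 2, 3, 4]))
```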


The researchers used L-IBM to predict the moves of human players in chess games. “Our results show that suboptimal human decision-making can be efficiently modeled with computationally constrained versions of standard search algorithms,” the study authors note.

“By doing so, we obtain both accurate models of humans’ decision-making and informative measures of their inferential capacity,” they added.
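
A “computationally constrained version of a standard search algorithm” can be pictured as ordinary minimax cut off at a depth limit, with the depth limit playing the role of the inference budget. The toy counting game below is our own assumption for illustration; the study worked with real chess positions.

```python
# Sketch of a "computationally constrained" standard search: plain
# minimax truncated at a depth limit, with depth as the budget.
# The toy counting game below is our own invention.

def minimax(state, depth, maximizing, moves, result, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:   # budget exhausted or game over
        return evaluate(state)
    values = [minimax(result(state, m), depth - 1, not maximizing,
                      moves, result, evaluate) for m in legal]
    return max(values) if maximizing else min(values)

# Toy game: players alternately add 1-3 to a counter, ending at 10+;
# the maximizing player wants the final total to be even.
def moves(s):     return [] if s >= 10 else [1, 2, 3]
def result(s, m): return s + m
def evaluate(s):  return 1 if s % 2 == 0 else -1

# A shallow (small-budget) search can misjudge lines a deep one gets right.
print(minimax(0, 2, True, moves, result, evaluate))
print(minimax(0, 6, True, moves, result, evaluate))
```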

How does the AI model predict human behavior? 

To model the decision-making process of an agent, L-IBM first analyzes an individual’s behavior and the different variables that affect it.  “In other words, we seek to model both what agents wish to do and what agents will actually do in any given state,” the researchers said.

This step involved observing agents placed at random positions in a maze. The L-IBM model was then used to infer their computational limitations and predict their behavior.

This analysis revealed each agent's goals and its ability to navigate and make complex decisions.
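
The paper's maze setup is more involved, but one plausible way to picture "limited planning" in a gridworld is value iteration stopped after a fixed number of sweeps: with a small budget, the goal's value never propagates back to a distant starting cell, producing myopic behavior. The grid, rewards, and parameters below are assumptions made for illustration.

```python
# Illustration under our own assumptions, not the paper's code: a maze
# agent that plans by value iteration but stops after `budget` sweeps.

import numpy as np

def truncated_value_iteration(rewards, budget, gamma=0.95):
    """rewards: 2D grid of per-cell rewards. Returns state values after
    `budget` sweeps, using deterministic 4-neighbor moves."""
    H, W = rewards.shape
    V = np.zeros((H, W))
    for _ in range(budget):
        V_new = np.full((H, W), -np.inf)
        for i in range(H):
            for j in range(W):
                for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        V_new[i, j] = max(V_new[i, j],
                                          rewards[ni, nj] + gamma * V[ni, nj])
        V = V_new
    return V

grid = np.zeros((5, 5))
grid[4, 4] = 10.0                                 # goal in the far corner
print(truncated_value_iteration(grid, budget=2))  # values barely propagate
print(truncated_value_iteration(grid, budget=8))  # values reach the start
```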

In the next step, the model examined language and communication-related cues. “Humans readily produce and understand language in ways that deviate from its ‘literal’ meaning,” the researchers note.

The researchers had the subjects play a reference game. The game involves a speaker and a listener. The speaker is given a set of different colors and picks one, but cannot tell the listener the name of the chosen color directly.

Instead, the speaker describes the color to the listener through natural-language utterances (essentially offering words as hints). If the listener selects the same color the speaker picked from the set, they both win.

“By fitting an L-IBM to utterances and choices in human reference games, we investigate whether we can infer whether humans are engaged in pragmatic reasoning from behavior alone, whether there are differences between players in their ability to reason about their interlocutors, and whether these differences actually predict communicative success,” the study authors explain.
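
The “pragmatic reasoning” in the quote is commonly formalized with the Rational Speech Acts (RSA) recursion, in which a listener reasons about a speaker, who in turn reasons about a simpler listener. The sketch below uses RSA recursion depth as a stand-in for the inference budget; it is an illustration under that assumption, not the study's exact model, and the color vocabulary is invented.

```python
# Standard Rational Speech Acts (RSA) recursion, with recursion depth
# standing in for the inference budget. Assumed here for illustration.

MEANINGS = {           # literal semantics: which words fit which colors
    "blue":  {"navy", "teal"},
    "green": {"teal", "lime"},
}
COLORS = ["navy", "teal", "lime"]

def listener(utterance, depth):
    """Distribution over colors given an utterance."""
    if depth == 0:     # literal listener: uniform over consistent colors
        ok = MEANINGS[utterance]
        return {c: (1 / len(ok) if c in ok else 0.0) for c in COLORS}
    # pragmatic listener: Bayesian reasoning about the speaker one level down
    scores = {c: speaker(c, depth)[utterance] for c in COLORS}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def speaker(color, depth):
    """Distribution over utterances for a target color."""
    scores = {u: listener(u, depth - 1)[color] for u in MEANINGS}
    total = sum(scores.values()) or 1.0
    return {u: s / total for u, s in scores.items()}

print(listener("blue", depth=0))  # navy and teal equally likely
print(listener("blue", depth=1))  # deeper reasoning favors navy: a speaker
                                  # meaning teal could also have said "green"
```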

Final step: Modeling human chess play

The model focused on the time different human players took to make their moves during chess games. The researchers also noted that weaker and stronger chess players spent different amounts of time thinking about their moves.

“At the end of the day, we saw that the depth of the planning, or how long someone thinks about the problem, is a really good proxy of how humans behave,” said Athul Paul Jacob, one of the study authors and a Ph.D. student at MIT.

The aim was to find out whether they could feed this data to L-IBM and model variability in players’ decisions across game states.

The inference budget (the result of the analysis) accurately captured the difference between weaker and stronger chess players.

“For me, the most striking thing was the fact that this inference budget is very interpretable. It is saying tougher problems require more planning or being a strong player means planning for longer. When we first set out to do this, we didn’t think that our algorithm would be able to pick up on those behaviors naturally,” Jacob said.

If the AI model knows which player is stronger, it can more accurately predict which player is going to win the game.
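
Skill inference of this kind can be pictured as model fitting: for each player, find the search depth whose recommended moves best match the moves the player actually made. The function names and the toy `recommend` below are placeholders we invented for illustration, not the study's API.

```python
# Hedged sketch of the skill-inference step: pick, for each player, the
# search depth whose recommendations best match their actual moves.

def fit_budget(move_log, depths, recommend):
    """move_log: (position, move_played) pairs from one player.
    Returns the depth that best explains the player's moves."""
    return max(depths,
               key=lambda d: sum(recommend(pos, d) == mv
                                 for pos, mv in move_log))

# Toy positions are labeled by difficulty: only a search at least that
# deep finds the best move; shallower searches blunder.
def recommend(difficulty, depth):
    return "best" if depth >= difficulty else "blunder"

strong = [(1, "best"), (3, "best"), (5, "best")]
weak   = [(1, "best"), (3, "blunder"), (5, "blunder")]
print(fit_budget(strong, range(1, 6), recommend))  # -> 5 (deep planner)
print(fit_budget(weak,   range(1, 6), recommend))  # -> 1 (shallow planner)
```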

This AI model can help us make better decisions

Together, the three steps show that the L-IBM framework has the potential to model almost all aspects of human decision-making, including routines, behavior, communication, and strategy.

“We demonstrated that it can outperform classical models of bounded rationality while imputing meaningful measures of human skill and task difficulty,” the researchers note.

What makes L-IBM different from previous models is that, instead of treating deviations from optimal behavior as random noise, it takes an agent's past behavior and computational limitations into account to produce its results.

“If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behavior is an important step toward building an AI agent that can actually help that human,” Jacob said.

The current study will also allow scientists to teach the details of human behavior to AI programs more effectively. However, L-IBM is not a perfect framework. 

Jacob and his team now plan to do further research to come up with better models.

You can read the study here.  
