Getting AIs working toward human goals – study shows how to measure misalignment

  • A new study provides a way to measure misalignment between the goals of humans and AI agents, which is crucial for ensuring that AI systems act according to human values.
  • The alignment problem is complex because humans have conflicting priorities, making it difficult to determine what constitutes “aligned” AI behavior.
  • The study developed a score for misalignment based on three key factors: the humans and AI agents involved, their specific goals for different issues, and how important each issue is to them.
  • Simulations showed that misalignment peaks when goals are evenly distributed among agents and drops when most agents share the same goal.
  • The study’s framework offers a more nuanced understanding of alignment, allowing researchers and developers to discuss specific contexts and roles for AI more clearly, and provides tools for policymakers to measure misalignment in existing systems.

Self-driving cars are only one example where it's tricky but critical to align AI and human goals. AP Photo/Michael Liedtke

Ideally, artificial intelligence agents aim to help humans, but what does that mean when humans want conflicting things? My colleagues and I have come up with a way to measure the alignment of the goals of a group of humans and AI agents.

The alignment problem – making sure that AI systems act according to human values – has become more urgent as AI capabilities grow exponentially. But aligning AI to humanity seems impossible in the real world because everyone has their own priorities. For example, a pedestrian might want a self-driving car to slam on the brakes if an accident seems likely, but a passenger in the car might prefer to swerve.

By looking at examples like this, we developed a score for misalignment based on three key factors: the humans and AI agents involved, their specific goals for different issues, and how important each issue is to them. Our model of misalignment is based on a simple insight: A group of humans and AI agents are most aligned when the group’s goals are most compatible.
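
To make that concrete, here is a minimal Python sketch of how such a score could be computed from those three ingredients. It is an illustration only, not the exact formula from our paper, and the function and variable names are hypothetical.

```python
from itertools import combinations

# Illustrative sketch only, not the study's exact formula.
# Each agent (human or AI) reports, per issue, a preferred goal and an
# importance weight in [0, 1] saying how much that issue matters to it.
def misalignment(agents, issues, goal, importance):
    """Average weighted conflict across all pairs of agents and all issues."""
    total = 0.0
    for issue in issues:
        for a, b in combinations(agents, 2):
            # A pair adds conflict on an issue when their goals differ,
            # weighted by how much each of them cares about that issue.
            if goal[(a, issue)] != goal[(b, issue)]:
                total += importance[(a, issue)] * importance[(b, issue)]
    comparisons = len(issues) * len(agents) * (len(agents) - 1) / 2
    return total / comparisons if comparisons else 0.0

# The self-driving car example: the pedestrian and the car's AI want to
# brake, the passenger wants to swerve, and everyone cares a lot.
agents = ["pedestrian", "passenger", "car_ai"]
issues = ["collision_response"]
goal = {
    ("pedestrian", "collision_response"): "brake",
    ("passenger", "collision_response"): "swerve",
    ("car_ai", "collision_response"): "brake",
}
importance = {(a, "collision_response"): 1.0 for a in agents}
print(round(misalignment(agents, issues, goal, importance), 2))  # 0.67
```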

In simulations, we found that misalignment peaks when goals are evenly distributed among agents. This makes sense – if everyone wants something different, conflict is highest. When most agents share the same goal, misalignment drops.
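
The toy sweep below, which assumes a single issue with two possible goals and equal importance for every agent, shows the same pattern: the fraction of conflicting pairs peaks at an even split and falls to zero when everyone agrees. It is a sketch for intuition, not the simulation setup from the study.

```python
from itertools import combinations

# Toy sweep: N agents, one issue, two possible goals ("A" or "B"),
# equal importance. Report the fraction of agent pairs whose goals clash.
N = 10
for k in range(N + 1):
    goals = ["A"] * k + ["B"] * (N - k)
    pairs = list(combinations(goals, 2))
    conflict = sum(1 for g1, g2 in pairs if g1 != g2) / len(pairs)
    print(f"{k:2d} want A, {N - k:2d} want B -> conflict {conflict:.2f}")
# Conflict is highest at the 5/5 split (0.56 here) and 0 when all agree.
```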

Why it matters

Most AI safety research treats alignment as an all-or-nothing property. Our framework shows it’s more complex. The same AI can be aligned with humans in one context but misaligned in another.

This matters because it helps AI developers be more precise about what they mean by aligned AI. Instead of vague goals such as “align with human values,” researchers and developers can talk about specific contexts and roles for AI more clearly. For example, an AI recommender system – those “you might like” product suggestions – that entices someone to make an unnecessary purchase could be aligned with the retailer’s goal of increasing sales but misaligned with the customer’s goal of living within their means.

Recommender systems use sophisticated AI technologies to influence consumers, making it all the more important that they aren’t out of alignment with human values.

For policymakers, evaluation frameworks like ours offer a way to measure misalignment in systems that are in use and create standards for alignment. For AI developers and safety teams, it provides a framework to balance competing stakeholder interests.

For everyone, having a clear understanding of the problem makes people better able to help solve it.

What other research is happening

To measure alignment, our research assumes we can compare what humans want with what AI wants. Human value data can be collected through surveys, and the field of social choice offers useful tools to interpret it for AI alignment. Unfortunately, learning the goals of AI agents is much harder.
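
As one small example of the kind of social choice tool that can help, the sketch below uses a Borda count to turn ranked survey responses into a single group ranking that an AI agent’s behavior could then be compared against. The survey options are hypothetical, and the study is not tied to this particular rule.

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate ranked survey responses with a Borda count.

    rankings: list of lists, each a respondent's options ranked best-first.
    Returns the options sorted by total score, best first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top choice earns n-1 points
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical survey about the self-driving car dilemma.
survey = [
    ["brake", "swerve", "accelerate"],
    ["brake", "swerve", "accelerate"],
    ["swerve", "brake", "accelerate"],
]
print(borda(survey))  # ['brake', 'swerve', 'accelerate']
```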

Today’s smartest AI systems are large language models, and their black box nature makes it hard to learn the goals of the AI agents they power, such as ChatGPT. Interpretability research might help by revealing the models’ inner “thoughts”, or researchers could design AI that thinks transparently to begin with. But for now, it’s impossible to know whether an AI system is truly aligned.

What’s next

For now, we recognize that sometimes goals and preferences don’t fully reflect what humans want. To address trickier scenarios, we are working on approaches for aligning AI to moral philosophy experts.

Moving forward, we hope that developers will implement practical tools to measure and improve alignment across diverse human populations.

The Research Brief is a short take on interesting academic work.

The Conversation

Aidan Kierans has participated as an independent contractor in the OpenAI Red Teaming Network. His research described in this article was supported in part by the NSF Program on Fairness in AI in collaboration with Amazon. Any opinion, findings, and conclusions or recommendations expressed in this material are his own and do not necessarily reflect the views of the National Science Foundation or Amazon. Kierans has also received research funding from the Future of Life Institute.

Q. What is the alignment problem in AI research?
A. The alignment problem refers to making sure that AI systems act according to human values, particularly when humans have conflicting goals.

Q. Why is aligning AI with human goals challenging?
A. Aligning AI with human goals is challenging because everyone has their own priorities and preferences, which can lead to conflicts between different stakeholders.

Q. What are the three key factors used to measure misalignment in a group of humans and AI agents?
A. The three key factors used to measure misalignment are: (1) the humans and AI agents involved, (2) their specific goals for different issues, and (3) how important each issue is to them.

Q. What happens when most agents share the same goal in terms of misalignment?
A. When most agents share the same goal, misalignment drops because there is less conflict between different stakeholders.

Q. How does most AI safety research treat alignment, and how does the new framework differ?
A. Most AI safety research treats alignment as an all-or-nothing property: an AI system is either aligned with human values or it is not. The study’s framework shows that the same AI can be aligned with humans in one context but misaligned in another.

Q. What are some examples of AI systems where misalignment can be a problem?
A. Examples include self-driving cars, recommender systems (e.g., “you might like” product suggestions), and large language models like ChatGPT.

Q. How do researchers currently measure alignment in AI systems?
A. Researchers currently use surveys to collect human value data and tools from the field of social choice to interpret it for AI alignment.

Q. What is a challenge in measuring alignment, particularly with large language models?
A. A challenge in measuring alignment is that large language models are “black boxes” and their goals are difficult to learn or understand.

Q. How can policymakers use evaluation frameworks like the one developed by the authors to measure misalignment?
A. Policymakers can use evaluation frameworks like the one developed by the authors to measure misalignment in systems that are already in use and create standards for alignment.

Q. What is the next step in addressing the alignment problem, according to the authors?
A. The authors are working on approaches for aligning AI with moral philosophy experts to handle trickier scenarios where goals and preferences don’t fully reflect what humans want, and they hope developers will implement practical tools to measure and improve alignment across diverse human populations.