Researchers say AI models like GPT4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
  • The AIs were large language models (LLMs) like GPT-4, GPT 3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
  • The researchers invented fake countries with different military levels, concerns, and histories and asked the AIs to act as their leaders.
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
  • laurelraven@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    4
    ·
    9 months ago

    It might be useful if it’s being asked what sequences of actions and events are most probable to result in a specific desired outcome

    • TurtleJoe@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      9 months ago

      It’s just as likely to make some shit up as it is to be any kind of helpful.

    • SpaceCowboy@lemmy.ca
      link
      fedilink
      English
      arrow-up
      4
      ·
      9 months ago

      Yeah but people are insane. Like why did the Wagner group start moving on Moscow only to stop when they were 2/3 of the way there? How could something like that be predicted?

      Why did that even happen? Loads of conspiracy theories around but the only thing that makes sense to me is Wagner’s boss got blackout drunk, started ranting and raving (something he did often), his officers took it to be an order and started moving out. When he sobers up a bit and realizes what’s happening, he calls the whole thing off.

      We don’t really know that’s what happened, but seems plausible. If we assume that’s what happened, how does a LLM predict that sequence of events? Even when the events are unfolding how does it predict the outcome? Is there a cue you make to it and ask “but consider that the guy might be drunk” to give other explanations? Can an AI predict stupid shit a drunk person will do?

      Sure an AI could potentially give possibilities based on historical trends, but it will always be an incomplete list, and something not on the list could completely change how things unfold.

      People are crazy and can’t be predicted at all.