Michael Schumacher’s family won €200,000 (£170,212) in compensation after a magazine published a fake interview with the F1 legend, created using artificial intelligence (AI).

  • FaceDeer@fedia.io
    5 months ago

    The problem appears to be that the magazine deceptively portrayed it as an interview with the actual Michael Schumacher, rather than explaining that it was fictional. The lawsuit would probably be the same if the magazine had had a human writer come up with it all instead.

    • tal@lemmy.today
      5 months ago

      Also, while you can generate an interview that sounds like something the individual might have said, insofar as an interview exists to obtain new information, an LLM trained on their past speech isn’t going to be very useful. The LLM can’t conjure new information about the guy out of thin air.

      Maybe it’d be possible to combine information from one person’s speech and another’s to obtain an insight that other humans didn’t pick up on. Like, okay, say you had interviews with Mormons, and you also had the speech of someone who never publicly announced they were Mormon but was, and whose speech shared characteristics with other Mormons’. Maybe the LLM could, based on those commonalities, generate a response in which it says that the individual in question is Mormon. That might have some value, in that other humans might not be able to pick up on the commonalities.

      But I’m really stretching to find some way in which generating interviews is a reasonable application.