A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125

  • barsoap@lemm.ee (OP) · 6 months ago

    There are lots of ways to improve LLMs that aren’t just increasing the parameter size.

    The paper isn’t about parameter size but about the need for exponentially more training data to get a merely linear increase in downstream performance.
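
    To make the shape of that claim concrete: if performance grows roughly with the logarithm of the number of relevant training examples, then every fixed bump in accuracy costs a multiplicative increase in data. Here is a minimal sketch of that log-linear trend; the coefficients are made up for illustration and are not the paper’s fitted values.

    ```python
    import math

    # Toy log-linear scaling model. The slope and intercept below are
    # illustrative assumptions, NOT fits from arXiv:2404.04125.
    a, b = 0.08, 0.10

    def accuracy(n_examples: float) -> float:
        """Hypothetical model: accuracy grows with log10 of example count."""
        return min(1.0, a * math.log10(n_examples) + b)

    for n in [1e3, 1e4, 1e5, 1e6, 1e7]:
        print(f"{n:>12,.0f} examples -> accuracy {accuracy(n):.2f}")

    # Each additional +0.08 accuracy requires 10x more data:
    # linear gains, exponential cost.
    ```

    Under this toy model, going from 0.34 to 0.42 accuracy takes ten times the data, which is the sense in which returns diminish regardless of parameter count.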