- cross-posted to:
- technology@beehaw.org
A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.
The Paper (No “Zero-Shot” Without Exponential Data): https://arxiv.org/abs/2404.04125
The paper isn’t about parameter count but about the need for exponentially more training data to achieve a merely linear increase in downstream (“zero-shot”) performance.
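To see what “exponential data for linear gains” looks like, here is a purely illustrative sketch (not from the paper) that stylizes the relationship as performance growing with the logarithm of dataset size; the constants `a` and `b` are arbitrary and chosen only to show the shape:

```python
import math

# Illustrative only: stylize the claimed trend as performance ~ log10(data).
# The constants a and b are arbitrary; none of these numbers come from the paper.
def performance(n_examples, a=10.0, b=0.0):
    """Stylized score as a function of training-set size."""
    return a * math.log10(n_examples) + b

if __name__ == "__main__":
    for n in (10**3, 10**4, 10**5, 10**6):
        print(f"{n:>9,d} examples -> score {performance(n):.1f}")
```

Under this toy model, every 10× increase in data buys the same constant bump in score, i.e. each additional unit of performance costs an order of magnitude more data.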