That’s how GANs are trained, and I haven’t seen anything about GPT-4 (or DALL-E) being trained this way. Current generative AI research seems to be moving away from GANs.
I know it’s intrinsic to GANs, but I think I read that this was a flaw in the entire “detector” approach to LLM output as well. Unfortunately, I can’t remember the source.
Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you only have inference access to a detector of some kind, but not the model weights and architecture themselves, you can’t perform backpropagation and therefore can’t compute gradients to update your generator’s weights.
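To make that concrete, here’s a minimal sketch of a generator update step. PyTorch and the tiny MLP shapes are my own illustrative choices, not from any particular paper; the point is just where the gradients flow:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32  # illustrative sizes only

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

z = torch.randn(8, latent_dim)   # batch of latent noise
fake = generator(z)              # generated samples
logits = discriminator(fake)     # needs the discriminator's full graph

# Generator tries to make the discriminator say "real" (label 1).
g_loss = criterion(logits, torch.ones_like(logits))

g_opt.zero_grad()
g_loss.backward()  # gradients flow *through* the discriminator into the generator
g_opt.step()
```

If the “discriminator” were instead a black-box detector API returning only a score, `logits` would be a detached number with no computation graph attached: `backward()` would have nothing to propagate, and the generator would get no gradient signal at all.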
That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.