To the best of my knowledge, back-propagation IS learning, whether it's happening in a neural net on a chip or whether we're doing it through feedback and altering our understanding (so both hard logic and our wetware use the method for learning, though we use a rather sloppy implementation of it).
And altering the relative significances of concepts IS learning.
(I'm not commenting on whether the new relation between those concepts is right or wrong, only on the mechanism.)
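To make the mechanism concrete, here's a minimal sketch of that feedback loop on a single linear neuron (a toy illustration, not anyone's actual implementation): the prediction error flows back to adjust the weight, i.e. the "relative significance" of the input.

```python
# Minimal sketch: backpropagation as learning-via-feedback on one
# linear neuron, y = w * x. The error signal flows back to adjust w.
def train(samples, lr=0.1, epochs=50):
    w = 0.0  # initial weight: the neuron "knows" nothing yet
    for _ in range(epochs):
        for x, target in samples:
            y = w * x           # forward pass: prediction
            error = y - target  # feedback: how wrong were we?
            grad = error * x    # backward pass: d((error**2)/2)/dw
            w -= lr * grad      # update: alter the weight's significance
    return w

# Learn the rule target = 3 * x from two examples.
w = train([(1.0, 3.0), (2.0, 6.0)])
print(round(w, 2))  # converges to 3.0
```

Whether the substrate is silicon or wetware, the loop is the same shape: predict, compare against feedback, nudge the weights.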
So I can't understand your position.
Please don't feel obliged to answer: I'm only putting this here for the record, is all.
Everybody can downvote my comment into oblivion, and everything in the world will still be fine.
Back-propagation happens during the training of the model, not after it's deployed.
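That split can be sketched in a few lines (a toy model, assuming the common train-then-freeze workflow): gradient updates happen only inside `fit()`; once deployed, `predict()` is a forward pass over frozen weights.

```python
# Minimal sketch of the training/deployment split: back-propagation
# only runs inside fit(); predict() never touches the weight.
class TinyModel:
    def __init__(self):
        self.w = 0.0

    def fit(self, samples, lr=0.1, epochs=50):
        # Training phase: the error gradient adjusts self.w.
        for _ in range(epochs):
            for x, target in samples:
                grad = (self.w * x - target) * x
                self.w -= lr * grad

    def predict(self, x):
        # Deployed phase: forward pass only, weights frozen.
        return self.w * x

model = TinyModel()
model.fit([(1.0, 3.0), (2.0, 6.0)])  # learns w ~= 3
print(round(model.predict(4.0), 2))  # ~12.0; no further weight updates
```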