Reading up on the Perceptrons controversy. It seems just totally petty and anti-intellectual – one group of researchers think another group’s approach is unpromising and getting too much hype and funding, so they write a book actively trying to “take down” that approach, even though all they really have are proofs of well-known limitations the other group is already moving beyond
– and they spin it in the community so that it does get seen as a definitive takedown, which was the point of writing the book, to take them down
There is a lot of this kind of thing in academia, I think, and it isn’t unrelated to the production of knowledge, but it’s related only in a weird indirect way
Oh and then later everything surrounding the word “backpropagation” – now the neural net people want to say that in practice gradient descent works fine despite the lack of theoretical guarantees, but instead of just saying that, they give gradient descent a fancy new name, so it looks superficially as if they have defeated the other guys with a new specific innovation
science! inevitable progress in hindsight, but complete lunacy at any given time.
The second part of this old post is embarrassing because no, backprop is not just another name for doing gradient descent, it’s a fast algorithm for computing the gradients that gradient descent needs, smh @ old me
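(To make the distinction concrete: backprop gets you every partial derivative in one backward sweep, whereas the naive alternative, finite differences, needs extra forward passes per parameter. Here’s a toy sketch with an invented two-parameter “network” — the specific model and numbers are made up for illustration:)

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, w2, x):
    # forward pass, keeping the intermediate activation for the backward pass
    a = sigmoid(w1 * x)
    yhat = sigmoid(w2 * a)
    return a, yhat

def backprop(w1, w2, x, y):
    # one reverse sweep through the chain rule yields ALL the partials
    a, yhat = forward(w1, w2, x)
    loss = (yhat - y) ** 2
    dyhat = 2.0 * (yhat - y)            # dL/dyhat
    dz2 = dyhat * yhat * (1.0 - yhat)   # sigmoid'(z) = s(z) * (1 - s(z))
    dw2 = dz2 * a                       # dL/dw2
    da = dz2 * w2                       # dL/da
    dz1 = da * a * (1.0 - a)
    dw1 = dz1 * x                       # dL/dw1
    return loss, dw1, dw2

def numeric_grad(w1, w2, x, y, eps=1e-6):
    # finite differences: two extra forward passes PER parameter --
    # this is the cost backprop avoids, and it's what makes backprop "fast"
    def loss_at(u1, u2):
        _, yhat = forward(u1, u2, x)
        return (yhat - y) ** 2
    g1 = (loss_at(w1 + eps, w2) - loss_at(w1 - eps, w2)) / (2 * eps)
    g2 = (loss_at(w1, w2 + eps) - loss_at(w1, w2 - eps)) / (2 * eps)
    return g1, g2

loss, dw1, dw2 = backprop(0.5, -0.3, x=1.2, y=1.0)
g1, g2 = numeric_grad(0.5, -0.3, x=1.2, y=1.0)
# the two methods agree; backprop just gets there in one pass
```

(Either way you then do gradient descent with the resulting gradients — the innovation is in how cheaply they’re computed, not in the descent itself.)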
(via argumate)

