Back when we were arguing over logical induction, I posted this in an Agent Foundations comment, but I don’t think I ever posted it over here. I really like it, and I think it helps clarify how weak mere convergence can be:
Finally, about abstract asymptotic results leading to efficient practical algorithms – yes, this happens, but it’s important to think about what information beyond mere convergence is necessary for it to happen.
Consider root-finding for a differentiable function F from R→R. Here's one method that converges (given some conditions): Newton's method. Here's another: enumerate the rational numbers in an arbitrary order, evaluate F at one rational per timestep, and write down that number iff F there is closer to zero than at the last number you wrote down. (You can approximate the root arbitrarily well with rationals, the function is continuous, blah blah.)
Even though these are both convergent, there’s obviously a big difference; the former is actually converging to the result in the intuitive sense of that phrase, while the latter is just trolling you by satisfying your technical criteria but not the intuitions behind them. (Cf. the enumeration-based trader constructions.)
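To make the contrast concrete, here's a quick Python sketch of both methods on f(x) = x² − 2 (the particular function, the enumeration order, and the step counts are just illustrative choices of mine, not anything canonical):

```python
import itertools
from fractions import Fraction

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)
    return x

def rationals():
    """One arbitrary enumeration of the rationals (with repeats, which is fine here):
    for each n = 1, 2, ..., list every p/q with q <= n and |p| <= n."""
    for n in itertools.count(1):
        for q in range(1, n + 1):
            for p in range(-n, n + 1):
                yield Fraction(p, q)

def troll_search(f, steps):
    """Evaluate f at one rational per timestep; write down the point iff |f| there
    is closer to zero than at the last point written down. The recorded |f| values
    do head toward zero, just painfully slowly."""
    best, best_val = None, float("inf")
    for q in itertools.islice(rationals(), steps):
        v = abs(f(float(q)))
        if v < best_val:
            best, best_val = q, v
    return best, best_val

# Find the root of f(x) = x^2 - 2, i.e. sqrt(2).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

print(newton(f, df, 1.0))       # ~1.4142135623730951 after a handful of iterations
print(troll_search(f, 10_000))  # best point is still off in the third decimal place
```

Newton's method hits machine precision in half a dozen iterations; the enumeration method, after ten thousand evaluations, has written down a point that's still only good to roughly two decimal places, even though both technically converge.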