My current thoughts on MIRI's "highly reliable agent design" work - Effective Altruism Forum
This is from 2017, but I only just read it, and it did a lot to clarify for me why MIRI thinks work like Logical Induction is relevant to AI. It also does a good job crystallizing the reasons I (and the author) find this unpersuasive.
