Reblog with your most controversial coding opinion.
Basically all discussions of software design and methodology make a fundamental mistake by treating a piece of software as “a black box that implements some desired behaviors,” noting that the list of behaviors never uniquely pins down the design of the black box, and then trying to fill the gap with generic principles of “good design” and “good practices” that are supposed to apply generically across all black boxes.
IMO it should be considered actively desirable for your internal representations to match the reality they are trying to model, even when nothing outside of your code will be able to tell (…for now). It enables you to respond quickly and gracefully to future demands (because these demands come from the real world and inherit structure from it), it makes failures more legible to the user (because the component that failed will look like a part of their existing mental model of the situation), and it will make the code easier to pick up and to maintain, because external domain knowledge will automatically double as implementation knowledge (“where is $THING in the code? oh, there it is” vs. “where is $THING in the code? well, actually it’s distributed over these five things plus half of this other one”).
Any decent list of good practices ends up implicitly encouraging this, but in an awkward and over-generalized way, where “imagine you had to change something that can vary independently in the real world, wouldn’t it be nice if that were really easy” motivates conclusions like “everything should always be done in one place by a tiny encapsulated unit with only that job that knows nothing about anything else” instead of “the code should decouple things that are decoupled in reality.”
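To make the contrast concrete, here’s a minimal sketch. The domain (shipping quotes), all names, and the types are invented for illustration; the point is only that when a real-world concept gets its own representation, questions about the domain read directly off the code instead of being smeared across several unrelated pieces:

```python
from dataclasses import dataclass

# Hypothetical domain: in reality, a "quote" is one concept --
# a carrier's price for moving a parcel -- so it gets one type,
# rather than a price field on Parcel plus a carrier string on
# an order plus a lookup table somewhere else.

@dataclass(frozen=True)
class Parcel:
    weight_kg: float

@dataclass(frozen=True)
class Quote:
    carrier: str
    parcel: Parcel
    price_eur: float

def cheapest(quotes: list[Quote]) -> Quote:
    # "Which quote is cheapest?" is a question about quotes,
    # so the answer lives on the Quote type: $THING is right there.
    return min(quotes, key=lambda q: q.price_eur)

quotes = [
    Quote("SlowShip", Parcel(2.0), 4.50),
    Quote("FastShip", Parcel(2.0), 9.00),
]
print(cheapest(quotes).carrier)  # -> SlowShip
```

Nothing outside the code forces this shape yet, but when a future demand arrives (say, quotes gaining an expiry date because real quotes expire), it lands in one obvious place.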
If you want something more down-to-earth: pipenv is pointless. It’s basically a version of pip freeze with an extremely slow and useless (see fn2 here) reproducibility feature, combined with a strictly worse version of virtualenvwrapper.

