Goodhart’s Curse says that given a target function f(x) we want to optimize and an approximation g(x) of f(x), the argmax of g(x) is, in expectation, some x where g(x) - f(x) is large: picking the point that scores best on the proxy also selects for the proxy’s upward errors. This can be made fairly rigorous.
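One way to make it precise (a sketch, assuming g(x) = f(x) + ε(x) over a finite candidate set, with the errors ε(x) zero-mean and independent of the f values):

$$
x^{\ast} = \arg\max_x g(x) \quad\Longrightarrow\quad \mathbb{E}\!\left[g(x^{\ast}) - f(x^{\ast})\right] = \mathbb{E}\!\left[\varepsilon(x^{\ast})\right] \ge 0,
$$

since $\mathbb{E}[\max_x (f(x) + \varepsilon(x))] \ge \max_x f(x) \ge \mathbb{E}[f(x^{\ast})]$ by Jensen's inequality. Choosing the candidate with the best proxy score selects for positive error as well as for genuine value.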
This is often used to argue that when creating an AI, any difference between the utility function it actually optimizes and the utility function we intended is likely to blow up under optimization pressure.
It turns out that if we make some assumptions about the error distribution (for instance, independent Gaussian errors), the expected error at the point selected by optimizing g(x) grows very slowly, O((log n)^{1/2}), in the number n of candidate solutions searched.
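A minimal simulation sketch of that scaling (assuming standard-normal errors and, for simplicity, a flat true f(x), so the bias at the selected point is just the maximum of n errors):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_overestimate(n, trials=1000):
    # Monte Carlo estimate of E[g(x*) - f(x*)] when g = f + N(0, 1) noise.
    # With f constant (zero here), the bias at the argmax of g is simply
    # the maximum of n independent standard-normal errors.
    maxima = [rng.standard_normal(n).max() for _ in range(trials)]
    return float(np.mean(maxima))

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(f"n={n:>6}  bias={expected_overestimate(n):.3f}  "
          f"sqrt(2 ln n)={np.sqrt(2 * np.log(n)):.3f}")
```

Even as n grows by four orders of magnitude, the expected overestimate grows only modestly, roughly like sqrt(2 ln n) (the asymptotic for the maximum of n standard normals), which is the O((log n)^{1/2}) behaviour referred to above.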
So maybe this curse won’t bite so hard after all.