Code coverage doesn’t matter much

Don’t get me wrong, tests are good. Especially unit tests. I never understood how you could be confident about your code without them. Tests also prevent regressions (by either yourself or others). Measuring code coverage¹, on the other hand, doesn’t tell us much².
While tests are themselves good, not all are created equal (like all code). For example, tests can be brittle and fail when unimportant or unrelated details change. They can be overly complicated and require more maintenance than the value they provide. Tests can even accidentally test nothing at all (ever seen a test that iterates over an array that is accidentally empty?).
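To make that last failure mode concrete, here’s a minimal sketch in Python (the function and data are hypothetical). The loop is supposed to check every admin user, but the fixture produces no admins, so the assertion never runs and the test passes while verifying nothing:

```python
def get_admin_users(users):
    """Hypothetical function under test: return the users flagged as admins."""
    return [u for u in users if u.get("admin")]


def test_admins_have_names():
    # Bug in the test: the fixture contains no admins, so `admins` is empty,
    # the loop body never executes, and the assertion is never evaluated.
    # The test passes while testing nothing at all.
    admins = get_admin_users([{"name": "sam", "admin": False}])
    for user in admins:
        assert user["name"]
```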
Code coverage doesn’t tell us whether the tests we’ve written are useful. Sure, we can see whether a change increases or decreases how much of our code is tested. But we could be increasing coverage while decreasing the quality of (and therefore confidence in) our tests. Or we could decrease coverage while increasing quality³.
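As a sketch of that trade-off (again with a hypothetical function), this test executes every line of `parse_price`, so a coverage tool reports the function as fully covered, yet it asserts nothing about the result:

```python
def parse_price(text):
    """Hypothetical function under test: turn a string like "$1,299.99" into 1299.99."""
    cleaned = text.replace("$", "").replace(",", "")
    return float(cleaned)


def test_parse_price():
    # Every line of parse_price executes, so coverage reads 100% for it,
    # but with no assertion this test can't catch a wrong result.
    parse_price("$1,299.99")
```

Adding `assert parse_price("$1,299.99") == 1299.99` would raise the test’s value without changing the coverage number at all, while deleting the test entirely would lower coverage without losing any real protection.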
The point of testing isn’t to make a number go up. It’s to have confidence in our code and to prevent regressions. Code coverage doesn’t do that⁴.
How do we write good tests and get that confidence without measuring coverage? Well, that deserves its own article. But for now I’ll suggest thinking about what things are important to test, and how they should be tested.
¹ Code coverage measures how much of your code is executed when your tests run.
² If you work on a test coverage product, please don’t take this personally. If companies find value in what you’re working on, that’s great!
³ And of course the other permutations as well: higher coverage and better quality, or lower coverage and worse quality.
⁴ I can see an argument for coverage being a forcing function for writing tests in the first place. That’s fine if it’s helpful for your situation. I just worry about doing this without teaching people how to write good tests, and adding a lot of low (or even negative) value tests as a result.