Determining the best bit rate is more of an art form than a science. As soon as you go from a lossless to a lossy format, you’re accepting compromise. If the scene has a lot of natural blur, you may be able to get away with a surprisingly low bit rate before the effect is noticeable. And consumer playback systems may not be able to reproduce everything in your high-bit-rate stream anyway. It’s a balance of audience, playback expectations, and how much you can get away with.
With video, you’ll need a higher bit rate whenever there’s rapid change you want to preserve or intricate detail that doesn’t compress well. With audio, there’s a point where you can hear the artifacts, but different people pick up the playback errors at different loss rates for different types of music. Classical guitar may not tolerate much compression because of its high-attack plucking. Rock guitar may compress well because the accuracy of the notes isn’t something you can pick out easily and, as Billy Joel would say... it’s still rock and roll to me. Animation with sharp lines may require a higher bit rate than expected because pixelation is immediately obvious.
Anyhow... my solution is to start with the cleanest, highest-resolution uncompressed source, then drop the bit rate until I can tell there are defects, then back off and be a touch more generous.
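If it helps, here’s a minimal sketch of that sweep, assuming ffmpeg with libx264 is installed and on your PATH; "master.mov" and the ladder values are hypothetical, so adjust them to your codec and content:

```python
import subprocess

# A rough sweep: encode the same source at a descending ladder of bit
# rates, then compare the results by eye/ear. Start watching from the
# lowest rung; the first rate where defects disappear, plus one rung of
# headroom, is a reasonable pick.
SOURCE = "master.mov"                       # hypothetical lossless source
LADDER_KBPS = [8000, 6000, 4000, 3000, 2000]

for kbps in LADDER_KBPS:
    out = f"test_{kbps}k.mp4"
    subprocess.run([
        "ffmpeg", "-y",
        "-i", SOURCE,
        "-c:v", "libx264",
        "-b:v", f"{kbps}k",   # target video bit rate for this rung
        "-c:a", "copy",       # leave audio untouched while judging video
        out,
    ], check=True)
```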
If you’re doing video, you may be able to use a comparison mask that makes the discrepancies more obvious. With audio, if I can visually compare waveforms, I’ll reduce the bit rate in the less important passages, but boost it if there’s a quiet passage or one with lots of complexity. Almost any compression on a silky voice like Nat King Cole’s is quickly audible.
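One way to build that mask is ffmpeg’s difference blend, which I’ve sketched here under the same assumptions as above (the file names are hypothetical, and both inputs must share resolution and frame rate):

```python
import subprocess

# Comparison mask: per-pixel difference between the source and an
# encode, with a gamma boost so faint artifacts are easier to spot.
# Compression damage shows up as bright noise against a near-black frame.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "master.mov",
    "-i", "test_4000k.mp4",
    "-filter_complex", "[0:v][1:v]blend=all_mode=difference,eq=gamma=3",
    "-an",                # the mask is video-only
    "diff.mp4",
], check=True)
```

The same idea carries over to audio: align the decoded waveforms with the original and subtract, and whatever remains is what the codec threw away.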
I guess I’m saying that you need to do manual comparisons until you’re sick of watching/listening to the material.
And be careful with VBR. It’s not a cure-all unless you have control over where the bits are spent.
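For what it’s worth, one common way to get that control with x264-style encoders is constrained VBR: pick the quality with CRF, then cap the spikes. A sketch under the same assumptions, with purely illustrative values:

```python
import subprocess

# Constrained VBR: CRF sets the quality target, while maxrate/bufsize
# cap how far the encoder can spike on busy scenes, which is where
# uncontrolled VBR tends to surprise playback hardware.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "master.mov",        # hypothetical lossless source
    "-c:v", "libx264",
    "-crf", "20",              # quality target instead of a fixed rate
    "-maxrate", "4M",          # ceiling on instantaneous bit rate
    "-bufsize", "8M",          # VBV buffer that enforces the ceiling
    "-c:a", "copy",
    "constrained_vbr.mp4",
], check=True)
```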