Years ago one of the stereophile magazines did tests with known/renowned golden-eared people - e.g., famous professional musicians, recording engineers, etc. What they found was that these people could hear distortion switched in/out at low levels (say 0.05-0.1%) on a single-frequency pure sine wave. But with more complex music - multiple instruments playing, popular as well as classical - these same folks could frequently not detect as much as 10% distortion being switched in/out. And for ordinary folks, the distortion levels that could be detected were much higher.
In addition, psychoacoustic experiments have repeatedly found that very slight changes in overall volume are much more readily detected by the ear than small differences in distortion or even frequency response. Note that such small volume differences are frequently not interpreted by the listener's brain as louder vs softer, but more typically as just different somehow, or maybe slightly better vs worse. Hence any acoustical comparison experiment (e.g., for mojo vs non-mojo, etc) has to be set up very carefully to match the two volumes very closely.
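Just to put a number on how tight that matching needs to be, here's a quick back-of-the-envelope sketch in Python. The 0.1-0.2 dB range used below is my own assumption about roughly where careful listeners start noticing level differences in direct A/B switching, not something taken from the magazine tests:

```python
import math

def db_to_amplitude_ratio(db):
    """Convert a level difference in dB to a voltage/amplitude ratio."""
    return 10 ** (db / 20.0)

# Assumed mismatches, purely for illustration: even a 0.1-0.2 dB error
# corresponds to only a ~1-2% difference in signal amplitude.
for db in (1.0, 0.5, 0.2, 0.1):
    ratio = db_to_amplitude_ratio(db)
    print(f"{db:3.1f} dB mismatch -> amplitude ratio {ratio:.4f} "
          f"({(ratio - 1) * 100:.1f}% level difference)")
```

In other words, if the comparison rig's levels differ by even a percent or so in voltage, the "different somehow" impression can come entirely from loudness rather than from the thing being tested.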
Finally, for a test to unambiguously demonstrate a difference, the listener has to be unaware of which source they are listening to (e.g., mojo vs non-mojo, or whatever) across multiple randomized trials - and afterwards be able to identify which was which at a rate well above chance.
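To make "well above chance" concrete, the usual approach is a forced-choice blind test scored against pure guessing. A minimal sketch of that arithmetic (the 16 trials and 12 correct below are hypothetical numbers, just to show how it works):

```python
from math import comb

def chance_of_at_least(correct, trials, p_guess=0.5):
    """Probability of getting at least `correct` answers right by guessing alone."""
    return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical test: 16 randomized A/B trials, listener names the source each time.
trials, correct = 16, 12
p = chance_of_at_least(correct, trials)
print(f"Scoring {correct}+ out of {trials} by pure guessing: p = {p:.3f}")
```

So 12 of 16 would happen by luck only about 4% of the time, which is roughly the level of evidence needed before claiming a difference is actually audible.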
In light of these known issues, I still find it very difficult to believe that someone could hear the ~0.1% distortion of a carbon resistor driven way beyond its rated voltage when that same signal is also passing through multiple tubes which are generating ~10% distortion.
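Putting those two distortion figures on the same scale makes the point (a quick sketch; the 0.1% and 10% are the numbers from the paragraph above, treated as simple ratios to the fundamental):

```python
import math

def thd_percent_to_db(thd_percent):
    """Express a THD percentage as a level in dB relative to the fundamental."""
    return 20 * math.log10(thd_percent / 100.0)

resistor_db = thd_percent_to_db(0.1)   # ~0.1% from the over-stressed carbon resistor
tube_db = thd_percent_to_db(10.0)      # ~10% from the tube stages
print(f"Resistor distortion: {resistor_db:.0f} dB relative to the fundamental")
print(f"Tube distortion:     {tube_db:.0f} dB relative to the fundamental")
print(f"The resistor's products sit {tube_db - resistor_db:.0f} dB below the tube distortion")
```

That 40 dB gap is a factor of 100 in amplitude, so the resistor's contribution would have to be heard underneath distortion roughly a hundred times larger.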
And by the way, I also agree that we're all sharing views here, and I don't feel that anybody is being aggressive - i.e., it's a good and fair discussion.