Discussion in 'Audio Science' started by makmeksam, Dec 5, 2019.
Here is how to read harmonic distortion on an FFT in terms of theoretical human perception.
Also be careful when reading distortion plots.
For example, -115 dBFS (noise dominated) on a product whose full-scale output is 300 Vrms is equal to about -63 dBu, which is somewhat shitty if you are using such a product to drive headphones that put out 90 dB SPL with just 0.230 Vrms (i.e. -10 dBu).
Measurements can be deceiving.
300 Vrms at full scale would light headphones on fire.
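For anyone who wants to sanity-check that arithmetic, here's a quick Python sketch. The 300 Vrms full scale, -115 dBFS noise floor, and 0.230 Vrms figures are just the hypothetical numbers from above:

```python
import math

def dbu(v_rms: float) -> float:
    """Convert an RMS voltage to dBu (0 dBu = 0.7746 Vrms)."""
    return 20 * math.log10(v_rms / 0.7746)

full_scale = 300.0                 # Vrms, hypothetical high-voltage output
noise_floor_dbfs = -115.0          # noise-dominated residual, relative to full scale
noise_v = full_scale * 10 ** (noise_floor_dbfs / 20)

print(f"absolute noise: {dbu(noise_v):.1f} dBu")   # -63.2 dBu
print(f"90 dB SPL drive level: {dbu(0.230):.1f} dBu")  # -10.5 dBu
```

The point being that dBFS is relative to full scale, so the same -115 dBFS residual is a very different absolute noise voltage on a 300 Vrms output than on a 2 Vrms one.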
People do shit like that with measurements. Remember this one:
Oh LOL. Hahahahaha. John Atkinson cheated on measurements on behalf of his fellow British wanker. I forgot about that.
Does this mean measuring misses a lot? Or that even if the measurements and metrics are perfect, the system may still not be to the liking of a person?
Thanks for the link. I had read your article some time back, but this was a good revision. I agree that THD or SINAD is not everything due to various limitations.
One thing is not yet clear to me. Let's consider a good quality recording. Is good quality sound already there in the recording? Or is it completely subjective? In other words, if there were a hypothetical system that could reproduce the exact sound that is in the recording, would it sound good according to the consensus?
That's a good, no that's a GREAT question: What about being true to what's in the recording?
Give me a minute to answer. I've wanted to discuss this for a long time.
The essential problem to being "true to the recording" is that the recording at some point has to be approved by human beings. This approval process happens in a mastering stage. Stakeholders, and this could be studio executives, members of the band, the conductor, producers, Freddie Mercury's fuckmate, random idiots, etc. go to a mastering stage to give the thumbs up!
It comes down to that these people are giving their thumbs up to what they are hearing in the studio during playback. And guess what's in the studio? Tape machines, EQs, compressors, sound processors, DACs, computers, amplifiers, crossovers, and speakers. Some mastering engineers will go beyond this and get approval from how the recording sounds from boomboxes (an earlier time), inside cars, or from iPhone iBuds. What we can be assured of is that there is no direct metaphysical link from the recording directly to the consciousness of the approvers.
So really, at the end of the day, if we want to hear what's "true to the recording", it would be best to get the sound as close as possible to what the people in the mastering stage are hearing. Naturally, there are a lot of things that we wouldn't be able to know, and while we can be fairly certain that the playback speakers are going to be neutral-ish and set up correctly, there is no guarantee of this. For example, the SACD version of Michael Jackson's Thriller album sounds thin. The supposed reason is that the SACD was mastered by guys who were sitting way too close to the woofers of big woofer + horn monitors. As a result of being mastered on this bassy setup, the recording ended up very bass-light.
When it comes down to distortion, which has a direct effect on overtones and harmonic richness, the speaker amplifiers in mastering stages are likely to be high-gain, high-power designs using discrete components, which by design means orders of magnitude higher distortion than the latest-craze chip-based low-gain Chi-Fi headamps. We are talking about 0.01% THD with these kinds of amps. That's about 30x higher than these 0.0003% THD "ASR very highly recommended" chip amps.
So we can certainly make a strong argument that perhaps 0.0003% THD amps are not the way to realize "what the artists (or producers, or executives) intended (approved)", and that we should be using 0.01% THD amps, which equates to roughly 80 dB SINAD (SINAD is essentially the reciprocal of THD+N).
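The THD-to-SINAD conversion is easy to check. A quick sketch, assuming distortion dominates noise so that SINAD is just the magnitude of the THD figure in dB:

```python
import math

def thd_pct_to_db(thd_pct: float) -> float:
    """THD as a percentage -> distortion level in dB relative to the fundamental."""
    return 20 * math.log10(thd_pct / 100)

# 0.01% THD -> -80 dB, i.e. ~80 dB SINAD when distortion dominates
print(f"{-thd_pct_to_db(0.01):.0f} dB SINAD")      # 80
# 0.0003% THD -> ~110 dB SINAD
print(f"{-thd_pct_to_db(0.0003):.0f} dB SINAD")    # 110
# The "about 30x" ratio quoted above:
print(f"{0.01 / 0.0003:.0f}x")                     # 33
```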
How do you affect damping behavior with software? The main problems I'm thinking of involve resonance or transient response. We're talking more than an EQ curve here.
Does the Atom have a damping issue? And can an amp fix transient response and resonance issues of headphones?
I think this gives very good insight. Yes, if we want to hear what the people at the mastering stage heard, we won't be able to do it with high-precision equipment. We will need systems similar to what the people at the mastering stage had. But how practical is it to set up a system that resembles the majority of mastering studios? Also, in the end, is this what people are looking for, knowingly or unknowingly?
If you ever have the chance to hear single-driver loudspeakers off a variety of amps, I think you'll get it. Most of them depend on imprecise electronics to complement their physical/mechanical limitations. The amp isn't 'fixing' the speaker but interacting with it; an amp like the Atom is designed not to interact. It's going to sound bad if your transducers were designed with this interaction in mind.
This is a good topic of research for you, as you will find discussions online about the limitations and problems of wideband speakers vs multi-driver speakers. This topic doesn't come up much with headphones because, with a few exceptions, they're one driver per channel.
What this tells me is that a specific amp can be a good pairing for a specific transducer. I agree with this. But I do not think amp/transducer pairings exist such that the amp can do anything to improve problems like transient response or resonance in transducers. Please correct me if I am wrong.
I do not understand why people dislike me even before I pick a side.
A cool-sounding amp may pair well with warm-sounding speakers, or a warm amp can pair well with lean-sounding speakers, but amps can't fix serious problems such as peaks or high distortion. To some extent, getting the right transient response is possible, but again, synergies can only go so far and really do not fix serious problems. A tight, articulate metal driver with a big magnet won't fix a distorted warmpoo amp.
Hazing. You're doing good.
This is why I asked why people don't like to use software fixes to do a good pairing or to do personalization, assuming there are no such serious issues in a given pairing.
Software, namely EQ, can fix linear distortion, i.e. frequency response irregularities. However, it cannot fix nonlinear distortion, aka THD, HD, IMD, etc. It's called nonlinear because it isn't easy to predict or model. Heck, there's probably a reason why I got a C- in my nonlinear differential equations class. The linear one was easy.
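To make the linear/nonlinear distinction concrete, here's a toy NumPy sketch. The tanh saturation stage is made up, standing in for any clipping amp: it creates a 3rd harmonic that wasn't in the input at all, and superposition (the defining property of linear systems, and the thing EQ relies on) fails for it.

```python
import numpy as np

fs, f0 = 48_000, 1_000
t = np.arange(fs) / fs                  # one second, so FFT bin k is exactly k Hz
x = np.sin(2 * np.pi * f0 * t)          # clean 1 kHz tone
y = np.tanh(1.5 * x)                    # toy saturating (nonlinear) stage

spec = np.abs(np.fft.rfft(y)) / (fs / 2)
h1, h3 = spec[f0], spec[3 * f0]
# The stage created energy at 3 kHz out of nothing. EQ is linear: it can only
# scale what is already in each bin, so no EQ placed before this stage can
# prevent the 3 kHz product from appearing.
print(f"3rd harmonic: {20 * np.log10(h3 / h1):.1f} dB re: fundamental")

# Superposition fails for the nonlinear stage, which is what makes it
# hard to model or invert with any linear (EQ-like) tool:
b = np.sin(2 * np.pi * 1_500 * t)
print(np.allclose(np.tanh(x + b), np.tanh(x) + np.tanh(b)))   # False
```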
Time domain correction in the form of impulse response convolution is promising. It's not a panacea, however, because the measured impulse responses are extremely specific to the headphone measurement rig (not a person), or to the exact orientation and position of the listening spot when it comes to speakers. There is also a penalty of processing time or latency. Personally, I'm still old school and prefer to use simpler solutions: physical mods to headphones, or better designs (namely wider dispersion, larger surface area for lower distortion, multiple subs, etc.) for speakers. I don't want the wrong impulse response applied if I happened to move to the end of the couch and lay my head down.
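For anyone curious what IR convolution correction looks like mechanically, here's a minimal NumPy sketch. The 512-tap correction filter is a made-up no-op (a unit impulse), standing in for one exported from a real measurement tool:

```python
import numpy as np

fs = 48_000
rng = np.random.default_rng(0)
audio = rng.standard_normal(fs)      # stand-in for one second of program material

# Hypothetical 512-tap correction impulse response; here just a unit impulse
# (a pass-through filter) so the mechanics can be verified.
corr_ir = np.zeros(512)
corr_ir[0] = 1.0

corrected = np.convolve(audio, corr_ir)[: len(audio)]   # FIR convolution
print(np.allclose(corrected, audio))                    # True: a delta passes audio through

# The latency penalty mentioned above: a real-time block convolver has to
# buffer on the order of the IR length before it can emit output.
print(f"~{len(corr_ir) / fs * 1000:.1f} ms of buffering for this IR length")
```

A real correction IR would be thousands of taps for room correction, which is where the latency and position-sensitivity trade-offs start to bite.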
Using software to predictively do true feed-forward error correction from a systems standpoint would be very interesting, although I doubt this will happen because people like to take shortcuts in 2020. The lack of choice in modern amp and DAC designs, and ASR's one-world view condemning designs outside of monolithic parts (chips) that already measure far more than adequately well, is an indication of this lack of innovation.
This is why I love the Schiit guys: one old guy who avoids canned sigma-delta designs, and the youngest guy who still knows how to design with discrete transistors. Which is sad, because Jason is a bit older than me, and neither of us are young anymore.
There are many different paths to personalization, and people will choose the roads they like the most.
There is also the equipment kool factor and the feeling of trying things out, especially when among friends.
This is what I meant by software. I was thinking about a hypothetical software system that continuously learns a given audio pipeline as a whole and comes up with an approximation function to error-correct it at the very input. This could work even with nonlinear distortion. I think this is not that hard from a CS standpoint given modern ML advancements.
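A toy version of that idea in NumPy: the pipeline here is a made-up polynomial nonlinearity (in reality it would have to be learned from measurements), and a least-squares fit of its inverse is applied as feed-forward pre-correction at the input:

```python
import numpy as np

# Hypothetical "pipeline" to linearize: a weak cubic nonlinearity standing in
# for a measured amp/transducer chain.
f = lambda x: x + 0.2 * x ** 3

# Learn an approximate inverse: sample the pipeline, then fit a polynomial
# mapping its *output* back to its *input*.
x = np.linspace(-1, 1, 2001)
y = f(x)
g = np.poly1d(np.polyfit(y, x, deg=9))   # predistorter g ~= f^-1

test = np.sin(2 * np.pi * np.linspace(0, 1, 4800))   # a test tone
err_raw = np.max(np.abs(f(test) - test))             # distortion, uncorrected
err_ff = np.max(np.abs(f(g(test)) - test))           # with feed-forward pre-correction
print(err_raw, err_ff)                               # pre-correction shrinks the error a lot
```

Real systems are harder because the nonlinearity drifts with level, temperature, and time, which is why the "continuously learns" part matters.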
I think these types of software systems are already there in products like the Apple HomePod, and are part of why they sound as good as they do.
LOL, all these posts pretty much tell everyone what SBAF thinks of the Magni Heresy.