Audio Science Review Review

Discussion in 'Audio Science' started by purr1n, Aug 30, 2020.

Thread Status:
Not open for further replies.
  1. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,031
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    • Write AutoHotkey macros/scripts to switch outputs randomly (see the sketch after this list). Put an attenuator between the DAC and amp to level match.
    • Have someone else in the house switch cables. Use the same cable source and occlude sight of it to make the test triple-blind.
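    A minimal sketch of the randomized-switching idea above, in Python rather than AutoHotkey: it only generates and scores the hidden A/B sequence, and assumes the actual output switching is done externally (a macro, or a helper swapping cables). The device labels are placeholders, not anything from a real setup.

        # Generate a hidden, randomized A/B sequence and score the listener's guesses.
        # The actual switching to each source is assumed to happen outside this script.
        import random

        DEVICES = ["DAC_A", "DAC_B"]  # hypothetical labels for the two sources

        def run_session(n_trials=10, seed=None):
            rng = random.Random(seed)
            sequence = [rng.choice(DEVICES) for _ in range(n_trials)]
            hits = 0
            for i, actual in enumerate(sequence, 1):
                # At this point the macro/helper switches to `actual` without
                # telling the listener which source it is.
                guess = input(f"Trial {i}: which source is playing? (A/B) ").strip().upper()
                hits += ("DAC_" + guess) == actual
            print(f"{hits}/{n_trials} correct")
            return sequence

        if __name__ == "__main__":
            run_session()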
     
  2. atomicbob

    atomicbob dScope Yoda

    Pyrate BWC MZR
    Joined:
    Sep 27, 2015
    Likes Received:
    18,912
    Trophy Points:
    113
    Location:
    On planet
    @bboris77 beat me to it for describing JRiver 2-zone method for USB.
    I use Dante RedNet D16 to supply two digital feeds when testing AES or spdif.
    Then I employ a setup similar to this for level matched ABX:
    [attached diagram: ABX testing.jpg]
    It is time consuming to set up and execute. The most important lesson I learned is that rapid switching usually resulted in failure to detect differences. Listening for longer periods before switching and making a choice works much better. Fortunately, the Van Alstine ABX switcher has no time limits on a test run.
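    A small companion sketch (not from the post itself) for scoring an ABX run after the fact: given the number of trials and the number of correct identifications of X, it computes the odds of doing at least that well by pure guessing. The 14-of-16 example numbers are invented for illustration.

        # Probability of scoring at least `correct` out of `trials` in an ABX run
        # by guessing alone (one-sided binomial, p = 0.5 per trial).
        from math import comb

        def abx_p_value(correct: int, trials: int) -> float:
            return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

        # Example: 14 correct out of 16 unhurried trials
        print(f"p = {abx_p_value(14, 16):.4f}")  # ~0.0021 -- unlikely to be luck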
     
  3. Hands

    Hands Overzealous Auto Flusher - Measurbator

    Staff Member Pyrate MZR
    Joined:
    Sep 27, 2015
    Likes Received:
    12,287
    Trophy Points:
    113
    Location:
    Colorado
    I know @purr1n is pretty vocal about their issues with RTINGS, and rightfully so, but I do appreciate the following:

    1. They will listen to user feedback and try to incorporate that into their test methodologies.

    2. They'll respond to Q/As and other comments in reviews, often going back and testing certain aspects or quirks in specific models.

    3. They seem to try to test in a somewhat holistic nature. Lots and lots of little tests, less of a slant where one test is primarily used to rank/chart a product's performance. (Yes, even contrast ratio does not have quite the sway that SINAD does on ASR.)

    4. They seem to have measurements, for headphones at least, that no one else does. More seemingly experimental sort of stuff, regardless of how official they suggest it to be in their methodology descriptions and write ups. Whether these are of any value is questionable (and thus possibly "dangerous"), but I'd rather see experimentation than stubborn simplicity and complacency.

    5. While the way in which they numerically rate individual tests and overall categories is problematic, as I see it, I do appreciate that they have prominent ratings for specific use cases. Flawed, yes, but at least there's a semblance of recommendations based on use case rather than "one rting to rule them all."

    6. They seem more consistent in the precision of their testing and how they subjectively rate or comment on things, compared to the relatively sloppy ASR approach. (There are some exceptions with audio measurement accuracy, I think, especially with sound bars.)

    Again, this is not to say RTINGS doesn't have flaws. There's plenty to complain about. But there are a lot of core, foundational elements I appreciate, the kind of behaviors that can lead to something really good as things progress. Not guaranteed, but a possibility. It's some of the same behaviors I've seen in the better computer hardware review sites, the ones that raised flags on frame times back in the day.

    Whether or not I agree with everything, even some of the larger aspects, it's a hell of a lot better approach than ASR. More potential for good. Or it may just be that, in contrast to ASR, I'm viewing RTINGS too positively because of the juxtaposition.
     
  4. Hands

    Hands Overzealous Auto Flusher - Measurbator

    Staff Member Pyrate MZR
    Joined:
    Sep 27, 2015
    Likes Received:
    12,287
    Trophy Points:
    113
    Location:
    Colorado
    On blind testing setups...

    I'm not usually one to stress things like cables, jacks, and so on. I tend to err on the side of caution and go with properly constructed and shielded cables, with good materials, at a reasonable price. I tend to go for things like BNC jacks with SPDIF for engineering peace of mind, even if I don't think I've ever really heard a difference between it and an RCA SPDIF jack.

    That said, I've used alligator clips to swap between potentiometers, and I was surprised at the differences I thought I heard. I might argue they can be just as or even more influential than caps and tubes.

    As such, I think there's some amount of uncertainty around whether A/B switchers can impact sound quality at all. Could the switch itself affect the sound? Does the brief relay interruption f**k with our perception? What about the grounding scheme and how that might interact with the multiple pieces of gear in your setup? It's kind of hard to subjectively test those to ensure transparency, you know?

    Now, throw a pot/attenuator into the mix to level match, and that might be yet another variable. Same for adjusting the volume on DACs themselves, if they have that capability (of which there are many approaches).
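    On the level-matching point, a minimal sketch of how a match is usually verified, assuming you can read each chain's output with a multimeter. The voltages are invented example numbers, and the roughly 0.1 dB tolerance often suggested for blind comparisons is a common rule of thumb, not something from this thread.

        # Level difference in dB between two measured RMS voltages
        # (e.g. a 1 kHz test tone read at each amp's output).
        from math import log10

        def db_difference(v_a: float, v_b: float) -> float:
            return 20 * log10(v_a / v_b)

        mismatch = db_difference(2.000, 1.985)   # hypothetical meter readings
        print(f"mismatch = {mismatch:+.3f} dB")  # ~ +0.065 dB, well matched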

    Even if we stripped all that out and had perfectly identical, dual setups, with only one component differing, and had to plug the headphone into each rig manually for testing, well, there's still that brief interruption that can mess with you.

    I'd even go so far as to factor in digital sources. A subpar digital source, e.g. USB straight out of a PC, might muck up the sound too much to differentiate DACs. (But then you might have to ask, how much of this is the USB input on the DAC as it relates to my source?)

    Similarly, are your amps and headphones good enough to show differences? Some amps, like the Magnius, are going to impart their will on anything and everything.

    Do you have enough experience and training to pick differences out? (Or maybe, do you have enough of an imagination? Depends on who you ask.)

    As @atomicbob mentioned, sometimes you have to listen for long periods of time before switching. I've found sometimes I do best with short bursts. It varies.


    OK, so, that's an awful lot of nervosa, right? Do you really need to worry about all that? Not totally sure! I like to rule out variables where I can out of precaution, even if I mostly doubt it matters enough.

    If you don't account for these variables and hear differences, well, then you hear differences. But if you don't hear differences, are the variables enough to cast doubt on there being no differences?

    I think it's worth keeping in mind and poking around with, but nothing to stress over too much.

    Part of the problem is people assume blind testing has to be stressful. Really, who cares? Your approach will never be totally perfect. You will never be totally perfect. That's not to say you should be sloppy, nor should you lose hair over minutiae, but find a balance between consistency and precision, accuracy, experimentation, and having some fun with it.
     
  5. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,031
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    Their measurement methodology seems solid and their huge variety of different measurements is to be applauded. With RTings, there's always something there that those who know what they are looking for will find useful.

    However, their numeric ratings, say 7.5 for Movies and 8.4 for HDR, are butt.

    There are also no measurements (at least seemingly) explaining why "@Hands (and now Merv) prefer IPS panels because their colors pop more than VA panels", or how a weakness of IPS panels is realistically not a weakness and only serves to kill the IPS panels' numeric ratings. The fact that their numeric ratings are translative across categories (Movies, HDR, Gaming, etc.) only makes things more hokey.

    Bottom line is that I think these guys are deaf (look at their suggestions to correct the HD650) and also possibly blind. Numbers are fine. But they need to mean something.

    As for their receptiveness, you have to remember that one RTings guy came here to ask us about the Olive curve. We told him the Olive target was butt - in quite strong terms. He never came back.

    I plan on writing a doodle on this.
     
    Last edited: Feb 9, 2021
  6. Thad E Ginathom

    Thad E Ginathom Friend

    Pyrate
    Joined:
    Sep 27, 2015
    Likes Received:
    14,263
    Trophy Points:
    113
    Location:
    India
    I think there are software solutions for blind testing between two sources. ISTR trying and failing to get a Linux offering to work for me. If I remember correctly, having left the Windows world long since, foobar2000 has an ABX add-on.
     
  7. Pancakes

    Pancakes Friend

    Pyrate Contributor
    Joined:
    Aug 13, 2020
    Likes Received:
    1,428
    Trophy Points:
    93
    Location:
    Atl
    Thinking about Harman and all the other curves out there. Regarding HP FR measurements, instead of making curves based on a study of people's preferences, wouldn't the following make more sense as a baseline curve:

    In an anechoic chamber, set up a speaker and EQ it so a reference mic shows a flat line. Then, take your favorite can measurement rig with pinnae and measure the FR of that speaker. What you've done is measure the ear gain of the HP measurement rig. A neutral headphone FR should be the inverse of that curve.
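    A minimal sketch of that procedure with invented numbers, treating the rig's measurement of the flat-EQ'd speaker as the derived target. (Note the correction later in this thread: a neutral can measured on the same rig should trace that exact curve, while the inverse would be the compensation applied to raw plots.)

        # Invented example data: the rig's response to a speaker EQ'd flat at a
        # reference mic becomes the derived "ear gain" target; a headphone's
        # deviation from it on the same rig is the error you would EQ out.
        freqs_hz = [100, 1000, 3000, 6000, 10000]   # hypothetical points
        rig_meas = [0.0, 0.5, 9.0, 4.0, 2.0]        # flat speaker seen by the rig, dB
        hp_meas  = [2.0, 0.0, 12.0, 1.0, 6.0]       # some headphone on the same rig, dB

        ear_gain  = rig_meas                         # the derived target curve
        deviation = [hp - tgt for hp, tgt in zip(hp_meas, ear_gain)]
        for f, d in zip(freqs_hz, deviation):
            print(f"{f:>5} Hz: {d:+.1f} dB vs. derived target")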
     
  8. loadexfa

    loadexfa MOT: rhythmdevils audio

    Pyrate Contributor
    Joined:
    Dec 26, 2017
    Likes Received:
    2,585
    Trophy Points:
    93
    Location:
    SF Bay Area Peninsula
    Agree with the first. The second one needs to take placebo (or whatever it's called in audio) into account, so "proven" is probably too strong a word.
     
  9. ultrabike

    ultrabike Measurbator - Admin

    Staff Member Pyrate MZR
    Joined:
    Sep 25, 2015
    Likes Received:
    8,960
    Trophy Points:
    113
    Location:
    Irvine CA
    As far as RTINGS:

    https://www.rtings.com/headphones/reviews/beyerdynamic/dt-990-pro

    "The Beyerdynamic DT 990 Pro are excellent neutral listening headphones"
    "They may sound a bit sharp at times, as the treble range is slightly too emphasized, but bass, instruments, and vocals are well-balanced and reproduced with high-fidelity"

    Are you shitting me?! These cans have the freaking Matterhorn peak for treble.

    Comfort: 7.5 vs HD600 with a 7.0.

    This is bullshit.

    I think it was because of RTINGS, or whatever its predecessor was, that I bought these POS cans. The DT 990 Pros have a peak monstrosity in the treble region that actually hurts. Their measurements make it seem as if it was all positional error or averaging issues. It is not. They are also way more uncomfortable than the classic Sennheiser headphones.

    I'm not picking on RTINGS just cuz they also measure cans. Again, if I remember correctly, I bought my old DT 990 Pros because I got bamboozled by their measurements, and found them horridly bright after the fact. This was before I did any measurements or gave them to Merv to test.
     
    Last edited: Feb 9, 2021
  10. Pancakes

    Pancakes Friend

    Pyrate Contributor
    Joined:
    Aug 13, 2020
    Likes Received:
    1,428
    Trophy Points:
    93
    Location:
    Atl
    Or conversely, a neutral can, measured on that rig should have that exact curve.
     
  11. purr1n

    purr1n Desire for betterer is endless.

    Staff Member Pyrate BWC
    Joined:
    Sep 24, 2015
    Likes Received:
    90,031
    Trophy Points:
    113
    Location:
    Padre Island CC TX
    LOL, well it's only a "slight" variance from the Olive target:

    [attached graph: upload_2021-2-9_16-56-36.png]

    The problem with RTings is a lack of weighting, poor weighting, or poor algorithms. The horrible effect of that high-amplitude 8kHz peak is likely minimized by RTings' automagic numeric scorer because the peak is narrow, and thus the "error" is minimal. The problem is that any errors or peaks around this region, no matter how narrow, fricking hurt - a lot.

    Anything with a peak like this, on top of the Olive target, which is already hot in the treble, should be an automatic fail - a showstopper.
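    A toy illustration of the weighting complaint above, with invented numbers: a narrow but tall 8 kHz spike barely moves a plain averaged-error score, even though it dominates what you actually hear.

        # Invented deviation-from-target values across a few bands.
        freqs = [125, 250, 500, 1000, 2000, 4000, 8000, 16000]   # Hz
        error = [0.5, 0.5, 0.5, 0.5, 1.0, 1.0, 10.0, 2.0]        # dB

        mean_abs_error = sum(abs(e) for e in error) / len(error)
        worst_case = max(abs(e) for e in error)

        print(f"mean |error| = {mean_abs_error:.2f} dB  # looks tame")
        print(f"max  |error| = {worst_case:.2f} dB  # the 8 kHz spike that hurts")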

    This is what happens when people decide to rely too much on graphs and Sean Olive's fake science, and not enough on actual experience.
     
  12. ushanka

    ushanka Facebook Friend

    Contributor
    Joined:
    Nov 30, 2020
    Likes Received:
    121
    Trophy Points:
    43
    Location:
    PNW
    Just like the only way to hear transparency properly is to review the distortion graphs before every listening session with the old trusty Topping stack ...
     
  13. Thad E Ginathom

    Thad E Ginathom Friend

    Pyrate
    Joined:
    Sep 27, 2015
    Likes Received:
    14,263
    Trophy Points:
    113
    Location:
    India
    In the first instance, a one-person test extended to a hundred-person test, or more, might suggest that there actually is no difference.

    In the second, with an isolated individual being able to detect a difference, doesn't the combination of double-blind testing with reliable and repeatable detection rule out the placebo factor?
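    A small sketch of why the "reliably and repeatably" part matters, with assumed numbers: in a large group, a few listeners will "pass" a short ABX purely by luck, so a single good run counts for much less than a result that repeats.

        from math import comb

        # One listener's chance of scoring 9/10 or better by pure guessing:
        p_lucky = sum(comb(10, k) for k in range(9, 11)) / 2 ** 10   # ~0.011
        # Chance that at least one of 100 guessing listeners does that well:
        p_someone = 1 - (1 - p_lucky) ** 100                         # ~0.66
        print(f"{p_lucky:.3f} per listener, {p_someone:.2f} across 100 listeners")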
     
  14. bboris77

    bboris77 Friend

    Pyrate
    Joined:
    Dec 12, 2015
    Likes Received:
    778
    Trophy Points:
    93
    Amir just posted a promo vehicle for the D30 Pro and a video that I have not bothered watching in its entirety. What really struck me is the fact that he literally said nothing about how it sounds. Not a word in passing. It’s like talking about a car without actually test-driving it and describing the experience. Why even bother saying anything - just give us the measurements. He also includes a disclaimer for early adopters that he has not tested the products for any bugs and deficiencies, so buy at your own peril LOL.

    There is a degree of desperation at this point that is emerging as a theme in terms of Topping’s product lineup - this thing looks exactly like the D70 minus a few digital inputs that (EDIT: almost) nobody uses. It is as if they are throwing anything they can think of at the wall and seeing what sticks. There is absolutely no rhyme or reason for most of their products, as they all presumably sound exactly the same and have almost identical functionality.

    In terms of Amir's writeups and videos, there is no longer even an attempt to appear objective and critical when it comes to Topping products - this is an advertising brochure for a product, pure and simple. Any perceived deficiency that his measurements discover is mansplained to the audience as great engineering and turned into a feature and a triumph. I think that for an ever-increasing number of intermediate audiophiles, his schtick is actually doing a disservice to the Topping brand in the long term. It reeks of used-car sales tactics at this point, pushing the products too hard and too obviously, to the point that anyone intelligent will start questioning it.

    EDIT #2: I have incurred the wrath of John Yang himself. Newsflash, he reads these forums. Funny that he spoke up and not Amir.

    https://www.audiosciencereview.com/...-review-balanced-dac.20259/page-6#post-667896

    EDIT: For a very short snippet of Amir's patronizing attitude at its best, go to 9:40 within this video:

     
    Last edited: Feb 10, 2021
  15. Baten

    Baten Friend

    Pyrate
    Joined:
    Mar 18, 2018
    Likes Received:
    1,131
    Trophy Points:
    93
    Location:
    EU
    Did you just call AES/EBU an input nobody uses? :rolleyes:

    PS on the D30 Pro: its inputs/outputs are not labelled. Who knows if it's an S/PDIF OUT or IN? Lol.
     
  16. bboris77

    bboris77 Friend

    Pyrate
    Joined:
    Dec 12, 2015
    Likes Received:
    778
    Trophy Points:
    93
    I apologize for offending any AES/EBU users :) I was thinking more of IIS-LVDS.
     
  17. Yethal

    Yethal Facebook Friend

    Joined:
    Sep 23, 2018
    Likes Received:
    238
    Trophy Points:
    43
    Location:
    Poland
    "The noise is so low you're not gonna hear it, it's just a measurement artifact"

    If only he had the same energy when reviewing Schiit DACs.
     
  18. loadexfa

    loadexfa MOT: rhythmdevils audio

    Pyrate Contributor
    Joined:
    Dec 26, 2017
    Likes Received:
    2,585
    Trophy Points:
    93
    Location:
    SF Bay Area Peninsula
    I’m not sure; I don’t know how much placebo applies to audio, nor how to correct for it. Just knowing “now you’re trying the second option” could be enough, because you know there’s something different now. Humans are weird and complex. :)
     
  19. k4rstar

    k4rstar Britney fan club president

    Pyrate BWC
    Joined:
    Jun 11, 2016
    Likes Received:
    6,951
    Trophy Points:
    113
    Location:
    Toronto, Canada


    from the video description:

    "What is a white monkey job? A white monkey job is a job in China where you get paid to represent something that you don't necessarily know anything about."

    https://www.superbestaudiofriends.o...-or-topping-part-deux.9583/page-3#post-309659
     
    Last edited: Feb 10, 2021
  20. bboris77

    bboris77 Friend

    Pyrate
    Joined:
    Dec 12, 2015
    Likes Received:
    778
    Trophy Points:
    93
    Wow, that video blew my mind. I had no idea that this was going on. It really puts things in perspective. If it truly applies to Amir and Topping, one has to applaud them for transcending the Chinese market and making the same marketing trick work in Europe and North America. At least until now.
     