A Study on Just Noticeable Difference in Reverberation Time and Creating Realistic Convolutions

Discussion in 'Audio Science' started by spwath, Apr 5, 2021.

  1. spwath

    spwath Hijinks master cum laudle

    Pyrate BWC
    Joined:
    Dec 13, 2015
    Likes Received:
    7,894
    Trophy Points:
    113
    Location:
    Madison, WI
    So, as I am working on my senior capstone project, I thought I would share it, as it seems an interesting and relevant topic to discuss.

    So, what am I doing?
    I am working in part on a study to determine the just noticeable difference (JND) in reverberation time. There have been very few studies on this in the past; only one notable one, which set the accepted number at a 5% change in reverberation time. Initial impressions suggest it might be greater than that.
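    To give a sense of scale, here is a quick numeric sketch of what a given JND percentage means in seconds. The baseline RT and the percentages below are just example values for illustration, not results from the study:

    # What a candidate JND percentage means in absolute terms.
    # The baseline RT and percentages are example numbers only.
    baseline_rt = 2.0  # seconds

    for jnd_percent in (5, 10, 20):
        step = baseline_rt * jnd_percent / 100
        print(f"{jnd_percent}% of {baseline_rt:.2f} s is {step:.3f} s "
              f"-> compare {baseline_rt:.2f} s vs {baseline_rt + step:.2f} s")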

    How are we doing this?
    Well, the best way would be to take a recording in a space, change the space to change the reverberation time, take another recording, and compare. That would be very impractical to do in real life, though. But luckily, we have the power of computers now. Odeon is room acoustics software that can accurately model a lot of things.
    The room used is the Philadelphia Academy of Music, since it sounded the most natural in Odeon. The space had already been modeled previously, with all materials assigned, so I did not have to worry about that.
    Odeon has anechoic recordings available, which are essential for modeling and convolving this correctly. First, an entire anechoic recording was placed on the stage as a single source, but it didn't sound natural for a few reasons, including no stereo imaging and the directivity of each instrument not being taken into account. So I took each individual instrument's anechoic recording and placed it in its respective position on stage. I aimed all instruments at the conductor (let me know if you think this was the wrong thing to do), and put the listener in a central position in the audience, facing the stage.
    upload_2021-4-5_16-4-59.png

    If you want to know which instrument is which:
    (the speaker on stage is the conductor)
    upload_2021-4-5_16-5-42.png

    And a first-person view of the stage from the listener's position:
    upload_2021-4-5_16-12-59.png

    Then I ran the convolutions in Odeon to get an output file. This is what it sounds like:
    https://hartford0-my.sharepoint.com...1AkvPC8gty_n4Bh2yIENi7tSfKzyvDumpl-Q?e=F7AuCm

    Here is the original anechoic recording
    https://hartford0-my.sharepoint.com...JGhGsMevMk0asBetYCo5gPDzpeDtQQB_vmRA?e=9fDhHC
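    If it helps to picture what the convolution step is doing, here is a rough Python sketch of the same idea done by hand: convolve each anechoic stem with the stereo impulse response computed for its stage position, then sum the results. Odeon handles all of this internally; the file names below are hypothetical stand-ins, not actual Odeon exports.

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    # Anechoic stem -> stereo impulse response for that source position.
    # All file names are hypothetical placeholders.
    stems = {
        "violin1_anechoic.wav": "ir_violin1_to_listener.wav",
        "cello_anechoic.wav":   "ir_cello_to_listener.wav",
        "flute_anechoic.wav":   "ir_flute_to_listener.wav",
    }

    mix = None
    for stem_path, ir_path in stems.items():
        dry, fs = sf.read(stem_path)        # mono anechoic recording
        ir, fs_ir = sf.read(ir_path)        # stereo IR, shape (N, 2)
        assert fs == fs_ir
        wet = np.stack([fftconvolve(dry, ir[:, ch]) for ch in range(2)], axis=1)
        if mix is None:
            mix = wet
        else:
            n = max(len(mix), len(wet))
            mix = (np.pad(mix, ((0, n - len(mix)), (0, 0)))
                   + np.pad(wet, ((0, n - len(wet)), (0, 0))))

    mix /= np.abs(mix).max()                # normalize so the sum doesn't clip
    sf.write("convolved_orchestra.wav", mix, fs)

    The per-instrument approach is what gives the stereo imaging, since each source position gets its own left/right impulse response instead of every instrument sharing one.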

    What next?
    Right now there is a reverberation time of 1.48 seconds. More convolutions need to be made to compare against. First, I want to get the base recording as realistic as possible.
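    One way to double-check that 1.48 second figure (and to measure whatever new versions get made) is Schroeder backward integration on an exported impulse response. A minimal sketch, assuming the room impulse response at the listener position is exported as a mono WAV (the file name is hypothetical):

    # Minimal Schroeder backward-integration RT estimate (T20 extrapolated to RT60).
    # Assumes a mono impulse response saved as "ir_listener.wav" (hypothetical name).
    import numpy as np
    import soundfile as sf

    ir, fs = sf.read("ir_listener.wav")
    energy = ir ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])

    t = np.arange(len(ir)) / fs
    # Fit a line between -5 dB and -25 dB (T20), then extrapolate to 60 dB of decay.
    mask = (edc_db <= -5) & (edc_db >= -25)
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second
    rt60 = -60.0 / slope
    print(f"Estimated RT60: {rt60:.2f} s")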

    Thoughts on how realistic the convolved recording is, and what may be unrealistic about it?
     
    • Like x 6
    • Epic x 3
  2. ergopower

    ergopower Friend

    Pyrate
    Joined:
    Mar 21, 2018
    Likes Received:
    815
    Trophy Points:
    93
    Location:
    South Central PA
    This is interesting stuff; sorry it took a while to sit down and give a listen on my 2-channel system.
    First, that system: I cast from my laptop to a Chromecast Ultra connected to a Yamaha AVR set to 2.1.

    To me, there is a believable sense of space: there is clearly reflected sound that suggests a size consistent with a concert hall. There is something about it that I can't put my finger on that is not quite natural, though.

    I wonder if the reflectivity off a side wall and the ceiling should be the same for all instruments? I mean level, not delay. There is surely some variation in what is absorbed vs. reflected with frequency, no?
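    To make that concrete, absorption coefficients are usually quoted per octave band, so the reverberation time itself varies with frequency. A toy Sabine calculation with made-up areas and coefficients, just to show the shape of the effect:

    # Toy Sabine RT60 per octave band. Volume, areas, and absorption coefficients
    # are made-up numbers, only to illustrate frequency-dependent absorption.
    V = 15000.0  # room volume in m^3 (made-up)

    # surface: (area in m^2, {band_Hz: absorption coefficient})
    surfaces = {
        "plaster walls": (3000.0, {125: 0.02, 1000: 0.04, 4000: 0.05}),
        "wood stage":    (300.0,  {125: 0.15, 1000: 0.10, 4000: 0.07}),
        "audience":      (900.0,  {125: 0.50, 1000: 0.90, 4000: 0.85}),
    }

    for band in (125, 1000, 4000):
        absorption = sum(area * coeffs[band] for area, coeffs in surfaces.values())
        rt60 = 0.161 * V / absorption        # Sabine equation
        print(f"{band:>5} Hz: RT60 = {rt60:.2f} s")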

    I also hear the violins as being behind the rest of the orchestra, which sounds very wrong. I don't know if this ties in to the above or not. Since each instrument is recorded separately, this is something I think you can manipulate.
     
