Shy (Admin) · 04 July 2017, 08:01 am · post #8

I'm still at a very early stage, without final decisions even on the most basic processing upon which everything else will be built. It's crucial to get the fundamental approach right from the start, so that the end result isn't yet another failure. Many software solutions available to developers and audio producers today combine so-called HRTF processing (which is extremely low quality and ineffective in every software implementation of it), the Haas effect, the Doppler effect, reverberation, echo, and simple phase and filtering techniques to provide a virtual "mixing stage". So in a game, you can pinpoint the direction of some sounds accurately, and you get a pleasant, fairly realistic overall sound and feel as if you're in a real environment. It's quite good even with the existing limited tools, but it could be much better, and much more "real", if the underlying effect algorithms simulated real-world acoustics more accurately.
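
To give a rough idea of the kind of positional cues those tools rely on, here's a minimal sketch in Python/NumPy. All parameter values are my own illustrative assumptions, not taken from any real HRTF data set; it fakes azimuth placement using only an interaural time difference (the Haas-style inter-ear delay) and an interaural level difference, which is exactly the kind of crude shortcut real HRTF filtering is supposed to improve on:

```python
import numpy as np

SAMPLE_RATE = 48000      # Hz; assumed
HEAD_RADIUS = 0.0875     # metres; rough average head radius (assumption)
SPEED_OF_SOUND = 343.0   # m/s at about 20 degrees C

def crude_binaural_pan(mono, azimuth_deg):
    """Place a mono signal at an azimuth using only ITD + ILD.

    Deliberately ignores real HRTF filtering (pinna/torso colouration),
    so it conveys left/right direction only -- no elevation, no
    front/back, which is the limitation discussed above.
    """
    az = np.radians(azimuth_deg)
    # Woodworth-style ITD approximation: r/c * (az + sin(az))
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    delay = int(round(abs(itd) * SAMPLE_RATE))            # in samples
    # Simple broadband ILD: attenuate the far ear, up to -6 dB at 90 deg
    far_gain = 10 ** (-abs(azimuth_deg) / 90 * 6 / 20)    # illustrative
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)] * far_gain
    if azimuth_deg >= 0:   # source on the right
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=1)

# Usage: a 1 kHz tone placed 45 degrees to the right.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
stereo = crude_binaural_pan(np.sin(2 * np.pi * 1000 * t), 45.0)
```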

Binaural recording won't become common, for many reasons:

- Even for simple nature or urban field recordings, it's a big hassle. It's not comfortable having those mics stuck in your ears for long (and a bad idea anyway, just like earphones: sticking things in your ears is never good and can cause infections). The mics are also very sensitive to wind, so you need big foam windscreens covering them in anything but the rare near-zero-wind weather, in some cases additional foam over your ears as well, and really windy conditions would require a setup ridiculous enough to make the whole thing unfeasible. Of course, carrying a heavy, ridiculously expensive, high-quality Neumann KU100 around with you is probably not an option either.

- An ideal recording location is required. In the uncommon case where there's no intention to add effects such as reverberation, phaser, chorus, or flanger later, all that's needed is a good location with just the right reverberation and resonance, and it will likely need to be quite a large space as well (especially problematic if you want a large space with little reverb), because in a small one you won't have enough room to spread the instruments and sound sources far enough apart. You might think, "OK, I do multiple takes anyway, so I can use the same position for several instruments," but because it's a binaural recording, that just sounds weird: you end up with multiple completely different sounds coming from the same position, which is unrealistic, and even more conspicuous in a binaural recording than in a typical half-baked stereo mix. If you're familiar with traditional recording methods, you'd think, "OK, I can deal with that, I'll just place mics closer to or further from the sound sources as needed and use a favored method for each type of instrument," but all of that falls apart in binaural recording. If you don't use the exact same, SINGLE position for the microphones/head when recording each instrument or person, the result is bad and runs completely against the very idea of a binaural recording: a single listener's perspective around which everything happens. If you position the mics/head differently in each take (separate takes meant to be mixed later), you get a completely messed up "space" and an overall weird-sounding performance, because the reverberation, frequency response, and positional cues differ drastically between takes. Combined, they're no longer a binaural recording/mix; they're a mix of many binaural recordings, which is completely useless.

- If the intention is to apply separate effects to each sound, then the binaural recording has to be made in a space with little or no reverberation, because applying additional reverberation on top of already-recorded reverberation doesn't sound good, and neither does applying a phaser, flanger, or other delay-based effect: the entire "space" in which the sound occurs gets "painted" by the effect, when it was intended only for the "dry" sound, not the space it sits in (see the first sketch after this list). A large enclosed space with very little reverberation is extremely hard and/or expensive to get, and recording outdoors is usually not an option.

- The method most people prefer in most musical genres nowadays, even for completely acoustic music with no effects, is to record each instrument separately, using multiple types of microphones, in mono or one of several 2-channel methods, and to mix them in a way that doesn't necessarily correlate with how a real performance sounds in a real environment. Some microphones and placements capture an instrument's sound better, or at least more desirably, than others. Even many people who prefer a true-to-life stereo image would rather record in the comfort of their own place and make the adjustments to each recording (equalization, peak limiting, panning, etc.) later, trying to create a realistic environment different from the one the instruments were actually recorded in. For this reason, software that enables easy mixing in a virtual environment, with realistic results, is what most people prefer.

- Today, "virtual instruments" are often the norm, rather than a secondary addition to real performances. Virtual instruments are either synthesizers, sample libraries (like pre-recorded single-note samples, and sometimes very sophisticated scripting), or a combination of both, aiming to enable musicians to create believable, real-sounding acoustic instrument performances, or a synthetic sound similar to analog synthesizers' or any other kind of synthetic sound, from plain subtractive to "physical modeling". Since those are either synthetic sounds or pre-recorded samples, those who make use of such instruments have only one option in regards to creating a result that would be similar to recording in a real space, and that is to mix those sounds in a virtual environment made up of either a sequencer's plain interface and included effects, additional plugin effects, external effect processors, or a combination of them. This virtual environment we have to operate within may or (usually) may not make it easy to process those sounds in a way that eventually results in a believable, immersive stereo mix. So most people nowadays who aim to create a believable, good-sounding musical performance, really need tools that enable them to get a "real" stereo mix, which when played on headphones, makes the listener feel they're in a real environment, and the same with speakers, to the largest extent possible without or with phase-cancelation post-processing.

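And since the last item mentions subtractive synthesis, here's a bare-bones sketch of the idea (Python again, with SciPy; the oscillator and filter settings are arbitrary illustrative choices): start from a harmonically rich waveform and subtract from it with a filter.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 48000  # sample rate, Hz; assumed

def saw(freq_hz, seconds):
    """Naive (aliasing) sawtooth oscillator -- harmonically rich raw
    material, the starting point of subtractive synthesis."""
    t = np.arange(int(SR * seconds)) / SR
    return 2 * (t * freq_hz % 1) - 1

def subtractive_voice(freq_hz, seconds, cutoff_hz=1200):
    """Oscillator -> low-pass filter -> amplitude envelope."""
    x = saw(freq_hz, seconds)
    b, a = butter(2, cutoff_hz / (SR / 2))         # 2nd-order low-pass
    x = lfilter(b, a, x)
    env = np.exp(-3 * np.arange(len(x)) / len(x))  # simple decay envelope
    return x * env

note = subtractive_voice(220.0, 1.0)  # an A3 "pluck"
```

A sound like this has no real space attached to it at all, which is exactly why its user depends entirely on the virtual environment, and its effect algorithms, to place it somewhere believable.
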
Due to the changes in how things are made and how they're experienced or "consumed", the future of audio production, as well as playback, depends on effect algorithms more than anything else, so it's crucial that software and hardware improve to meet the demands of our ever-progressing, ever-expanding "virtualization". Ever since people started using telephones and phonograph cylinders, we've had a "virtual reality" of sound, where people can essentially exist in places they're not, and where an artificial device can emit any kind of sound. This is just the natural, desired progression of "virtual reality", in an era whose focus is now on blurring the boundaries between what is real and what is artificial.

Heh, and yeah, that drifted way beyond anything related to Musepack, file compression or anything like that. At least maybe somewhat interesting.