I get the impression that Atmos is more oriented to film/video multimedia production. What matters to Atmos is that the position of each channel is independent of the actual playout channels of the venue. The venue's processing decides which source channels get sent to which speakers based on the layout of the speaker constellation in the venue, and applies the delay, equalization, and dynamics processing needed to reproduce the spatial placement and level of the original mix. It's like a multitrack recording that doesn't get mixed down until you play it in your room, and the room determines, through software, how the final mix comes together.

seby wrote: Sat Jul 17, 2021 4:34 pm
So I am seeing a live music performance and I am standing at the front/back side. What happens to my perception of the Atmos mix? Is there some kind of calculation at work so that it sounds immersive no matter where one is positioned in its field, or would it sound even more unbalanced than standing to the side of a stereo field?
Live sound tends to use various proprietary, non-Dolby immersive processing systems. L-Acoustics has L-ISA processing, Meyer has Constellation, DiGiCo has KLANG, etc.
The loudspeaker system design for a particular venue is tailored to that venue and its seating areas to provide highly localized sound from specific loudspeakers, in terms of panoramic coverage, depth of coverage, and edge coverage. The idea is that rather than a mono or stereo L/R or L/C/R array set, you are using a constellation of loudspeakers (arrays, clusters, and fills) to provide wide horizontal and depth coverage for select channels that are processed independently of any fixed output channel assignment. Input channels are "placed" in a spatial environment at a position in x, y, z space, and the processor determines the levels, as well as the temporal and spectral processing, required within the loudspeaker constellation to orient that channel in its respective place for the audience.

What makes this different from Atmos is that the Atmos "mix" is baked into the track metadata as part of postproduction, whereas in live audio the "mix" is whatever the FOH engineer decides gets placed where during pre-production planning, soundcheck, and/or the live show.
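To make the "placed in x, y, z space" idea a bit more concrete, here's a deliberately simplified Python sketch of what an object-to-speaker renderer does at its most basic level: for each loudspeaker in the constellation, it derives a gain and a delay from the distance between the placed source and that speaker. This is purely illustrative; it is not how L-ISA, KLANG, Constellation, or Atmos actually render internally (real processors layer panning laws, EQ, dynamics, and array/room modelling on top), and the speaker positions and function names are made up for the example.

    # Toy illustration only -- not any vendor's actual algorithm.
    import math

    SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees C

    # Hypothetical loudspeaker constellation: name -> (x, y, z) position in metres
    speakers = {
        "L":  (-6.0,  0.0, 4.0),
        "C":  ( 0.0,  0.0, 4.5),
        "R":  ( 6.0,  0.0, 4.0),
        "LS": (-8.0, 10.0, 4.0),
        "RS": ( 8.0, 10.0, 4.0),
    }

    def render_object(source_pos, speakers, ref_distance=1.0):
        """Return {speaker: (gain, delay_ms)} for one placed source.

        Gain falls off with distance (inverse-distance law, clamped),
        and delay is the propagation time from the source position to
        each speaker, so nearer speakers fire earlier and louder.
        """
        out = {}
        for name, spk_pos in speakers.items():
            dist = math.dist(source_pos, spk_pos)
            gain = min(1.0, ref_distance / max(dist, ref_distance))
            delay_ms = (dist / SPEED_OF_SOUND) * 1000.0
            out[name] = (round(gain, 3), round(delay_ms, 2))
        return out

    # Place the "guitar" object slightly left of centre, a few metres upstage
    print(render_object((-2.0, 3.0, 1.5), speakers))

The point of the sketch is just the separation of concerns: the operator works in terms of where the source sits in the space, and the processor turns that placement into per-speaker level and timing for whatever constellation the venue happens to have.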
Part of the end goal is that, for example, the guitar appears to come from the same location in the audience's field no matter where you're seated in the venue. You end up with a much better-sounding mix: rather than cramming everything into a mono mix and blasting it out of every speaker with the attendant intermodulation 'mush', you get a proportional mix where sources go to very specific loudspeakers or loudspeaker groups and are delivered with greater clarity and separation, 'from' the mix rather than 'in' a mix.
I don't know what Apple's role in all this is; that does sound like some consumer/prosumer-level nonsense. But in the touring and show world it's definitely not nonsense; it's the shape of things to come.