The Sansar audio engine is designed to take full advantage of spatial cues, and an important part of this is enabling you to accurately locate sounds playing around you, even with your eyes closed.
Compute Scene Reverb, also commonly referred to as bake reverb, is a feature in the Scene Settings that calculates all the audio reflections in your experience — how sound bounces between object surfaces. It simulates the reverberation that happens in the physical world; think of how "wet" and echoey your voice sounds in a long brick tunnel, as opposed to how "dry" it sounds inside of a car. Each space has different physical characteristics and, as a result, you can tell a lot about where you're located, even with your eyes closed.
Sounds in Sansar are realistically spatialized using a natural reverb that automatically makes the audio mix more cohesive. Even if you don't know much about the technical aspects of acoustic treatment and just want to place a few audio emitters around your scene, the result should still sound like a cohesive space.
Distinct footstep sounds for most audio materials can always be heard, but baking reverb is required in order to hear materials' absorption, transmission, and scattering properties. In other words, if baked reverb is off, there is no reverb.
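To see why material absorption matters once reverb is baked, consider that each bounce off a surface keeps only a fraction of the sound's energy. The sketch below is purely illustrative generic acoustics; the coefficients are made up and are not Sansar material values:

```python
# Illustrative only: each reflection keeps (1 - absorption) of the
# sound energy, so reflective materials ("wet" spaces) retain energy
# far longer than absorbent ones ("dry" spaces).

def energy_after_bounces(initial, absorption, bounces):
    """Remaining sound energy after repeated reflections off one material."""
    return initial * (1.0 - absorption) ** bounces

brick = 0.05   # hard, reflective: long, echoey tail (hypothetical value)
foam = 0.80    # soft, absorbent: sound dies quickly (hypothetical value)
print(energy_after_bounces(1.0, brick, 10))  # still mostly audible
print(energy_after_bounces(1.0, foam, 10))   # effectively silent
```

This is why a brick tunnel sounds "wet" and a car interior sounds "dry": the same source loses energy at very different rates per bounce.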
To bake reverb:
- Click Scene Settings.
- In the Selection window, scroll to the bottom.
- Click Compute Scene Reverb to toggle it to On.
The Compute Scene Reverb control is in the Scene Settings Properties panel.
- In the global controls toolbar in the upper left, click Build. A progress indicator appears as reverb is calculated.
Occlusion, in audio terms, is the tendency of objects in a scene to dampen or muffle sound. Obstruction is a partial blockage, such as trying to talk to someone with some boxes between you, a common occurrence in open environments.
Occlusion can be useful in creating clean and controllable soundscapes. For example, in a party setting you might walk to another room to escape the loud noise and have a quiet conversation. Occlusion can also help separate the sounds of the outdoors from indoor sounds, such as a fireplace.
Occlusion is automatically included and enabled when you turn on Compute Scene Reverb as described above.
To use audio occlusion effectively, it's helpful to see where the physical collision "walls" are in your scene, which may include boundaries that aren't normally visible. For example, you might have a skinny-looking tree that has a much broader physics shape, or a window you can see through that has a physics shape that's a solid wall, which blocks sound from propagating through it. By enabling Physics Shapes together with Audio Volumes, you'll have a better idea of how sound can travel around your scene.
To do this:
- Toggle Visibility menu > Physics Shapes. Applicable objects are shown in gray. Shapes with other colors are either dynamic or not collidable, and do not affect occlusion.
- Also toggle Visibility menu > Audio Volumes, which are shown in green.
|Visibility > Physics Shapes and Visibility > Audio Volumes.|
Now you can see where your audio emitters (green) are placed relative to your physics shapes (gray).
|Physics shapes toggled Off (left) and On (right) in the same scene. Notice that the sound from the audio emitter is blocked by the physics shape, even though the visual model appears to have a window.|
This makes it a lot easier to tell if an audio emitter (especially a point emitter, which emanates from a precise coordinate) is positioned inside a physics shape that blocks it. If you have an issue with overly large physics shapes blocking sound unintentionally, you need to edit your content accordingly outside of Sansar, then re-upload it with a more precise collision mesh.
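The blocking behavior described above can be pictured as a line-of-sight test: if the straight path from the emitter to the listener passes through a static collision shape, the direct sound is occluded. Here is a minimal sketch using a standard segment-versus-box (slab) test; the scene coordinates are invented for illustration, and this is not Sansar's actual solver:

```python
# Illustrative only: occlusion modeled as a line-of-sight test against
# a static physics shape, represented as an axis-aligned bounding box.

def segment_hits_aabb(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 pass through the box?"""
    tmin, tmax = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-12:
            # Segment is parallel to this slab; it must start inside it.
            if not (box_min[axis] <= p0[axis] <= box_max[axis]):
                return False
        else:
            t1 = (box_min[axis] - p0[axis]) / d
            t2 = (box_max[axis] - p0[axis]) / d
            lo, hi = min(t1, t2), max(t1, t2)
            tmin, tmax = max(tmin, lo), min(tmax, hi)
            if tmin > tmax:
                return False
    return True

# Hypothetical scene: a "window" whose physics shape is a solid wall
# spanning x = 4..6, sitting between the emitter and the listener.
emitter = (0.0, 0.0, 0.0)
listener = (10.0, 0.0, 0.0)
wall_min, wall_max = (4.0, -1.0, -1.0), (6.0, 1.0, 1.0)
print(segment_hits_aabb(emitter, listener, wall_min, wall_max))  # True: blocked
```

This is also why an emitter placed inside an oversized physics shape can sound muffled everywhere: every path out of the shape starts behind a collision surface.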
|Known issue: An audio emitter attached to an object (inside its container) is intended to play unobstructed, but doesn't work correctly yet, due to a known issue as of July 27, 2017.|
Tips and best practices
- For your final published experience, unless you have a specific intent in mind, we highly recommend turning Compute Scene Reverb on to enrich your experience's audio quality for other users.
- However, reverb bake time can add several minutes when building your experience, so if time is crucial, you may want to leave it off when iterating on the visual aspects of your scene.
- The reverb bake time depends on several factors, mainly your computer's speed and the density of your experience's geometry. Experiences with many complex objects take longer to compute because of all the additional surface bounce calculations that are needed. Optimizing your content helps: fewer triangles can mean faster reverb raytracing, while scenes with very dense meshes (lots of small triangles) or a large number of meshes drive the raytracing cost, and therefore the bake time, up.
- Physical objects that can move are ignored.
Remember that static objects, shown in gray when you toggle physics shapes as described above, are what's used to calculate reverb and occlusion. These calculations define which baked reverb impulse response you hear at your avatar's (or camera's) position. If you've set walls and floors to be collidable and avatars can navigate sensibly through the space, it should sound great.
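Conceptually, applying a baked impulse response works like convolving the "dry" source signal with the impulse response for the listener's position: a long, slowly decaying response yields a "wet" tunnel-like sound, while a short one stays "dry". This is a generic DSP sketch of that idea, not Sansar's actual engine pipeline, and the impulse responses below are invented:

```python
# Illustrative only: reverb as convolution of a dry signal with an
# impulse response (IR) associated with the listener's position.

def convolve(dry, ir):
    """Discrete convolution of a dry signal with an impulse response."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for n in range(len(out)):
        for k in range(len(dry)):
            j = n - k
            if 0 <= j < len(ir):
                out[n] += dry[k] * ir[j]
    return out

# A single click (impulse) gains a reverb "tail" after the direct sound.
click = [1.0, 0.0, 0.0]
tunnel_ir = [1.0, 0.6, 0.36, 0.2]   # long decaying tail ("wet"), made up
car_ir = [1.0, 0.1]                 # short tail ("dry"), made up
print(convolve(click, tunnel_ir))
print(convolve(click, car_ir))
```

The "wet" result rings on for several samples after the click, while the "dry" result stops almost immediately, mirroring the tunnel-versus-car comparison above.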
If you're using audio materials and audio emitters, compare your scene with Compute Scene Reverb On and Off, and listen to the difference it can make for you.