

Get the purest sounds from any source, free from reverberation, unwanted distortions, and irrelevant elements. Achieve perfect sound source localization and reverb integration for full immersion.

Anechoic, Ready for Any Reverb

One of the major benefits of the PureSource™ process is that the final sounds are anechoic (they carry no extra reverb), making them the perfect asset for use with any algorithmic or convolution-based reverb, boosting realism to the limit.

Instant Integration

Boost your productivity by using the cleanest possible sounds for your game, film, or VR experience. There are no irrelevant elements or unwanted distortions to correct; just drag and drop the sounds into your project for a perfect fit.

VR/XR Ready

Using the purest sound from a source makes it perfect for applications that need precise localization of sound sources around a listener. This applies more than ever to virtual reality and every other type of extended reality application.

For our PureSource™ production process we utilize forensic audio, data collection, and reverse-engineering techniques to retrieve the purest sound energy of a specific source. The steps of the PureSource™ process are:

  1. Capturing the subject under the best possible conditions.
  2. Compensating for distortions added to the signal by the recording equipment.
  3. Spatial cleaning, which includes dereverberation and high-frequency damping compensation.
  4. Source separation from the unwanted content.
  5. Separation refinement, which includes edge smoothing and removal of unwanted elements.
  6. Content-aware gap restoration.
  7. Organization and exporting.
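
The seven steps can be pictured as a chain of processing stages. The sketch below is purely illustrative Python: the stage names are ours and every body is a placeholder, since the actual PureSource™ processing is not public.

```python
# Illustrative sketch of a PureSource-style pipeline as composable stages.
# Every stage is a placeholder that returns its input unchanged; real
# implementations would be DSP algorithms.

def compensate_distortion(signal):
    # Step 2: compensate for equipment-added distortions.
    return signal

def spatial_clean(signal):
    # Step 3: dereverberation and high-frequency damping compensation.
    return signal

def separate_source(signal):
    # Step 4: isolate the wanted source from unwanted content.
    return signal

def refine_separation(signal):
    # Step 5: edge smoothing and removal of remaining artifacts.
    return signal

def restore_gaps(signal):
    # Step 6: content-aware gap restoration.
    return signal

def puresource_pipeline(signal):
    # Steps 1 (capture) and 7 (organization/export) happen outside this chain.
    for stage in (compensate_distortion, spatial_clean, separate_source,
                  refine_separation, restore_gaps):
        signal = stage(signal)
    return signal
```

The point of the sketch is only the ordering: each stage assumes the previous one has already removed its class of problems.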

Below you can hear how PureSource™ allows us to produce significantly superior sound samples. To help you understand the results, next to each step we show its computer-graphics equivalent.

Step 1: Original capture

The original capture of a sound source contains everything from the environment of the capture. Most interesting sounds are found in noisy real-life environments, whether natural or technological, so many elements are captured together with the source and can end up in the final product.

Even in highly controlled audio production environments, the choice of room and its materials, and the recording equipment used, can add enough artifacts to alter the original identity of the source.

It’s important to understand that our brain compares the timbre of any sound to timbres stored in its memory, both to categorize and make sense of the environment and to optimize the localization of sound sources around the listener. Compromising the integrity of the source’s sound therefore undermines the immersive capability of any experience we create with those recordings: a very important aspect of any production, and paramount if you specialize in filmmaking, game development, or virtual and augmented reality.

Example - Decals photo cleaning 01

A good example of the original content’s problems can be seen in this photograph, taken by a 3D texturing artist who wanted to use the damage on the tiled wall as decals in a game environment, to introduce variety on top of a seamless texture. The photograph is distorted by the camera’s lens, it has an uneven distribution of light and a yellowish tint due to the surrounding lights, and it also includes the rest of the wall, which for the artist’s purposes is useless.

Similar to the photograph, here we have a recording of a pneumatic robot arm captured on a factory’s production floor, which we need to use in isolation in a randomized playlist to introduce variety into an industrial game level’s soundscape. The recording is distorted by the microphone’s response and the recorder’s noise floor, it carries the acoustic reverberation of the room and the high-frequency damping of the air, and mixed into the robotic arm’s movement there is content from nearby machinery.

Step 2: Distortion compensation

The first process we apply is to compensate for any distortions added to the signal by the recording equipment. That includes reversing the microphone’s frequency response, removing the recorder’s self-produced noise, and correcting any other similar changes to the source’s sound.
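
As an illustration of the idea (not the actual PureSource™ implementation), compensating a known equipment response can be sketched as regularized inverse filtering in the frequency domain, followed by a crude gate at the recorder's noise floor. The function name and parameters are our own:

```python
import numpy as np

def compensate_equipment(recorded, mic_response, noise_floor_db=-90.0, eps=1e-8):
    """Divide out a measured microphone frequency response (inverse filtering)
    and zero content below the recorder's measured noise floor.

    mic_response: complex gain per rfft bin, measured in a controlled setup."""
    spectrum = np.fft.rfft(recorded)
    # Regularized inverse filter: eps keeps the division stable where the
    # measured response is close to zero.
    corrected = spectrum * np.conj(mic_response) / (np.abs(mic_response) ** 2 + eps)
    signal = np.fft.irfft(corrected, n=len(recorded))
    # Very crude stand-in for noise reduction: gate samples below the
    # recorder's self-noise level.
    threshold = 10 ** (noise_floor_db / 20)
    signal[np.abs(signal) < threshold] = 0.0
    return signal
```

For example, a microphone that attenuated everything by 6 dB would be modeled as a flat response of 0.5, and the filter would restore the original level.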

Example - Decals photo cleaning 02

Here is the same photograph with the lens distortion corrected; this is easily done when we know the lens’s characteristics. Removing the noise introduced by higher ISO settings is also done in this step.

Here you can hear the same recording corrected for the microphone’s frequency response (equivalent to lens distortion) and the recorder’s noise. We achieved this by measuring the equipment’s characteristics in a controlled environment and then reversing them in the recorded signal.

Step 3: Spatial cleaning

The process of spatial cleaning includes:

  • Dereverberation (removing the room’s reverb) to produce an anechoic sound. This is the equivalent of delighting (removing light/shadow areas of a picture) used in photogrammetry, computer graphics, and game/movie 3D texturing.
  • Compensation for the high-frequency damping introduced by the air’s absorption of sound energy. This effect is strong in sound captured from distant sources. Sometimes it is impossible to close the distance between the recording equipment and the source, or to record up close without overdrive/clipping from very loud sound levels, e.g. explosions, huge engines, etc.
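
The second bullet, air-absorption compensation, can be sketched as a frequency-dependent gain applied in the spectrum. This is a toy model with a made-up attenuation constant, not PureSource™ itself or a real model such as ISO 9613-1 (which accounts for temperature and humidity); dereverberation is far more involved and is not shown.

```python
import numpy as np

def compensate_air_absorption(signal, sample_rate, distance_m, k=2e-9):
    """Boost high frequencies to undo distance-dependent air absorption.
    Toy model: attenuation_dB ~= k * f^2 * distance, with k a placeholder."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    atten_db = k * freqs ** 2 * distance_m
    # Cap the boost at +24 dB so we never amplify noise without limit.
    gain = 10 ** (np.clip(atten_db, 0.0, 24.0) / 20)
    return np.fft.irfft(spectrum * gain, n=len(signal))
```

Because the gain rises with frequency squared, distant captures get most of their correction in the treble, where air absorption is strongest.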

Example - Decals photo cleaning 03

Here is the same photograph as before. This version is after processing with the delighting tools used in computer graphics. Any light gradients, artificial coloring, and shadows caused by external factors are removed, revealing the source’s real colors and texture.

Just like with the photograph, here you can listen to the recording with the reverberation of the factory’s production floor removed and the high-frequency damping restored. The true character of the robotic arm is now revealed.

Step 4: Source separation

In any media authoring scenario, creators use assets to compose the final result.

Having the assets clean and separated from the unwanted content is very important to:

  • Boost productivity by helping the creator focus on the vision.
  • Allow for a consistent result from all the elements of the composite, that serves the intended aesthetics.

Isolated sources are very important in many aspects of media production, just like the decals example we use here, which is used in advanced 3D texture mapping for games.
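
The masking idea behind spectral source separation can be shown in a few lines. The function below is a generic, textbook-style soft mask over spectrum magnitudes, not the actual PureSource™ separation method; real systems work on STFT frames and use learned models or multi-channel techniques.

```python
import numpy as np

def soft_mask_separate(mix_spectrum, interference_estimate):
    """Attenuate each frequency bin in proportion to how much of it is
    estimated to belong to the interference (time-frequency soft masking)."""
    mix_mag = np.abs(mix_spectrum)
    interference_mag = np.abs(interference_estimate)
    # Magnitude believed to belong to the wanted source, floored at zero.
    target_mag = np.maximum(mix_mag - interference_mag, 0.0)
    mask = target_mag / np.maximum(mix_mag, 1e-12)  # per-bin gain in [0, 1]
    return mix_spectrum * mask
```

Bins dominated by the wanted source pass through almost unchanged, while bins dominated by interference are pushed toward zero.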

Example - Decals photo cleaning 04

Here we separated the rusty damage decals from the rest of the content, to make them fit for production purposes. Still, the content suffers from rough edges containing some of the background, and from irrelevant content that now shows up as artifacts inside the content we want.

The same goes for the recordings of the robotic arm. There are still sounds present from the factory’s production floor on the higher and lower frequency edges of the isolated sounds, and you can hear other machinery still present within the wanted sounds (listen closely for that on the second sample).

Step 5: Separation refinement

To create usable samples of the source we need to further refine our isolation from any artifacts that remained from the separation process. This step includes:

  • Edge smoothing, to optimize the separated sources’ edges.
  • Unwanted elements removal, to remove any artifacts from the source interacting with the rest of the content.
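
Edge smoothing has a direct audio analogue to feathering a selection: short fades at the cut points of an isolated sample, so the edit does not click. A minimal sketch (our own illustrative code, not the PureSource™ implementation):

```python
import numpy as np

def smooth_edges(samples, fade_len):
    """Apply raised-cosine fade-in and fade-out to the edges of a separated
    sample -- the audio equivalent of feathering a selection's border."""
    out = np.asarray(samples, dtype=float).copy()
    fade_len = min(fade_len, len(out) // 2)
    # Half-cosine ramp rising smoothly from 0 to 1.
    ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, fade_len)))
    out[:fade_len] *= ramp          # fade in
    out[-fade_len:] *= ramp[::-1]   # fade out
    return out
```

A raised-cosine ramp starts and ends with zero slope, which keeps the fade inaudible even on sustained material.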

This is often a semi-destructive process, and graphic artists use precise selection tools to completely remove anything irrelevant. As you can see, this leaves some gaps in our data, but it is a necessary step to make our assets usable.

Example - Decals photo cleaning 05

Here you can clearly see that we managed to remove all unwanted content from the background of the decals, producing very smooth edges with selection refinement and feathering. We also had to delete the tile gaps of the wall: they were not part of the decals, were completely useless, and would make the decals impossible to use on any other wall texture.

Using the same philosophy we refined and smoothed the edges of our robotic arm samples, but you can also notice the same kind of gaps, introduced by completely removing useless areas of irrelevant content that interfered with the signal from our sources.

Step 6: Content-aware gap restoration

To finalize our samples we introduce content-aware gap restoration, using either sound re-synthesis, noise generation based on captured noise profiles from the source, pattern-based stamping, or manual editing.
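
Of the techniques listed, noise generation from a captured profile is the simplest to sketch. The toy function below fills a gap with noise matched only to the RMS level of the surrounding audio; real content-aware restoration also matches the spectral shape and temporal patterns. Names and parameters are our own:

```python
import numpy as np

def fill_gap_with_noise_profile(signal, gap_start, gap_end, seed=0):
    """Fill a removed region with noise matched to the RMS level of the
    surrounding audio -- a toy stand-in for content-aware restoration."""
    out = np.asarray(signal, dtype=float).copy()
    # Estimate the level from everything outside the gap.
    context = np.concatenate([out[:gap_start], out[gap_end:]])
    rms = np.sqrt(np.mean(context ** 2))
    rng = np.random.default_rng(seed)
    out[gap_start:gap_end] = rng.normal(0.0, rms, gap_end - gap_start)
    return out
```

Matching the surrounding level is what keeps the patch from sticking out as a sudden dip or bump in loudness.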

This produces a perfect representation of the true source, isolated and refined for use in any media authoring scenario, free from any artifact that would undermine its fidelity.

Example - Decals photo cleaning 06

In graphics, by using content-aware image replacement and generation techniques we can create the final product. Now we finally have the decals of a rusty damaged wall to use in any composition we like.

The same goes for sound. By using content-aware audio replacement and generation techniques we can create the final samples. Now we finally have the sounds of a pneumatic robot arm to use in our compositions, link with visuals and animations, or play back randomly in a scene’s soundscape.

Step 7: Organization and exporting

The final step is organization and exporting, based on modular grouping.

Example - Decals photo cleaning 07

Finally, as with any good and useful content library, the way the library is grouped should make it easy for the user to find, choose, and import any asset into any authoring application. In our graphics-based example, that means exporting each decal as an isolated graphics file with a transparent background, using a format that supports further manipulation and design by the user/creator.

We organize our audio assets using battle-tested philosophies that make it easier for the user of the library to quickly find sounds and stay both creative and productive at the same time.

  • Primary, secondary, and tertiary elements of design are ordered in the filename. That means that each file has a usable filename depending on the library it belongs to.
  • Different types of similar sound events are denoted using capital letters. For example, “Motor_A” and “Motor_B” are both motors but of different types.
  • Different variations of the same sound event are denoted using numbers. For example, “Water_Splash_01” and “Water_Splash_02” belong to the same type of event but contain a slightly different variation. You can use those to create natural-sounding repetitions for your project.
  • Special editions of the same sound file are denoted using a suffix at the end, like “Lite”, “Var”, or our own FocusBlur™.
  • When needed, the content is separated into folders named using the same philosophy we use to name our sound files, allowing you to find and choose assets even faster in collections that contain many files.
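
As a small illustration, a naming convention like the one above is regular enough to parse programmatically, for example to pick a random variation of an event at playback time. The pattern and helper names below are our own assumptions for illustration, not part of any SoundFellas tooling:

```python
import random
import re

# Matches stems like "Water_Splash_01" (event + variation) and
# "Motor_A_02" (event + type letter + variation). Illustrative only;
# adapt the pattern to the actual library's file names.
NAME_RE = re.compile(r"^(?P<event>.+?)(?:_(?P<type>[A-Z]))?_(?P<var>\d+)$")

def parse_name(stem):
    """Return (event, type_letter_or_None, variation_number), or None."""
    m = NAME_RE.match(stem)
    if not m:
        return None
    return m.group("event"), m.group("type"), int(m.group("var"))

def pick_variation(stems, event, seed=None):
    """Pick a random variation of an event, e.g. for natural repetition."""
    rng = random.Random(seed)
    candidates = [s for s in stems if (p := parse_name(s)) and p[0] == event]
    return rng.choice(candidates) if candidates else None
```

Randomly cycling through parsed variations is one way to get the natural-sounding repetition the numbered files are designed for.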

Using a consistent organization philosophy, tested for more than two decades on many types of media projects, means that working with our audio assets will be fast and efficient and will never become an obstacle between you and your creativity.

Hard facts

Small details in sound are easy for the untrained ear to miss, and both nature and equipment commonly introduce them during audio production. But the brain uses every bit of detail to make sense of everything and distill meaning. Allowing unwanted elements into your project’s audio greatly diminishes your audience’s ability to localize sounds and quickly understand their character, and it undermines immersion in the experience.

Not all audio assets are made the same. At SoundFellas we developed our PureSource™ audio production process to create the best possible pure sound of any target source. By using sounds from our libraries you will be able to create awesome soundscapes and sound effects from isolated elements of the purest form, and you can rest assured that your project’s audio will offer your audience total immersion and an experience of the highest fidelity.