55 Cards in this Set

  • Front
  • Back
  • 3rd side (hint)

Recorded Sound

Digital Audio Files. Linear. Time-dependent. Distinction between the data and the hardware/application which turns the data into sound.

Procedural Audio

Sequenced Sound

MIDI: sequences trigger real recorded sounds or synthesized sounds. Used in video games, e.g. 16-bit consoles.

Procedural Audio

Synthetic Sound

Produced by electronic hardware or software simulations. Uses software/hardware/equations to generate sound. Can reproduce real instrument sounds or SFX, e.g. FM, AM, GS, waveforms, envelopes.

Procedural Audio

Generative Sound

Generated by some process. Interactive and non-linear. Uses further input to alter existing sound, or start a new sound. Has the potential to be adaptive.


Time-independent.


Data reduction/minimal resource usage.

Procedural Audio

Algorithmic Approach

Process or system that evolves based on a set of rules. Rules usually display some sort of order. Apply these rules and functions to the creation of, e.g., melodies.

Generative Sound
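
To make the contrast with the stochastic approach concrete, here is a minimal Python sketch of an ordered, deterministic rule generating a melody; the scale and step rule are illustrative assumptions, not from the lecture.

```python
# Minimal sketch: an ordered rule applied to a scale to generate a melody.
# The scale and the step rule are illustrative assumptions.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def rule_based_melody(length: int) -> list[int]:
    """Walk the scale with a simple deterministic rule: +2 steps, then -1 step."""
    melody, index = [], 0
    for i in range(length):
        melody.append(C_MAJOR[index % len(C_MAJOR)])
        index += 2 if i % 2 == 0 else -1   # the "ordered rule"
    return melody

print(rule_based_melody(8))   # [60, 64, 62, 65, 64, 67, 65, 69]
```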

Stochastic Approach

Based on probabilistic/statistical rules. Often use random or chaotic data, which is subsequently filtered dynamically.

Generative Sound
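
For contrast, a minimal sketch of the stochastic approach: random source data that is dynamically filtered (here, wide melodic leaps are rejected). The scale and filter rule are illustrative assumptions.

```python
import random

# Minimal sketch: random data, subsequently filtered dynamically.
# Scale and max_leap are illustrative assumptions.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def stochastic_melody(length: int, max_leap: int = 2) -> list[int]:
    """Draw random scale degrees; the filter rejects leaps wider than max_leap."""
    degrees = [random.randrange(len(C_MAJOR))]
    while len(degrees) < length:
        candidate = random.randrange(len(C_MAJOR))
        if abs(candidate - degrees[-1]) <= max_leap:   # the dynamic filter
            degrees.append(candidate)
    return [C_MAJOR[d] for d in degrees]

print(stochastic_melody(8))   # different on every run
```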

Artificial Intelligence Approach

Systems which gain intelligence by ‘learning’ from new inputs, e.g. neural networks.


May start with some form of knowledge and the synthesis process is modified as new experiences occur.

Generative Sound

Compression

Expressing complex ideas in reduced form. Describing a complicated thing in simple terms. E.g. a willow tree genome described in 800 KB of data.

Procedural Audio

Decompression

Benefit of high compression ratios: create or produce complex things using only minimal data. Ratios for compression/decompression are extremely high, e.g. complex behaviour of many characters, or complex textures, controlled by a small set of instructions. Mandelbrot set: simple mathematical terms describe complex structures.
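
As a toy illustration of this decompression idea (Python, not from the lecture material): the one-line rule z → z² + c below "decompresses" into the arbitrarily complex structure of the Mandelbrot set.

```python
# Minimal sketch of "decompression": a one-line rule generating endless detail.

MAX_ITER = 100

def escapes(c: complex) -> bool:
    """True if z -> z*z + c escapes the radius-2 disc within MAX_ITER steps."""
    z = 0j
    for _ in range(MAX_ITER):
        z = z * z + c        # the entire "compressed" description
        if abs(z) > 2.0:
            return True
    return False

# Coarse ASCII render: complex output from a tiny set of instructions.
for row in range(12):
    line = ""
    for col in range(40):
        c = complex(-2.0 + col * 0.075, -1.2 + row * 0.2)
        line += "." if escapes(c) else "#"
    print(line)
```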

How do we achieve Compression/Decompression in games?

Input > rules > output

Procedural Audio

Non-linear, time-independent. May involve generative systems. Based on rules.


Takes input and maps it to some output. Real-time (sound is produced as input changes).

Procedural Audio Rules

Control structure/synthesis structure: some set of control data which plays the synthesis structure.

Input: physical quantities such as velocity, or proximity to some object or actor. Output: audio signals.
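
A minimal sketch of this split, with hypothetical names and mapping constants: a control structure maps physical input quantities (velocity, proximity) to parameters that "play" a trivial synthesis structure.

```python
import math

# Minimal sketch of the control-structure / synthesis-structure split.
# All mapping constants are illustrative assumptions.

def control_structure(velocity: float, proximity: float) -> dict:
    """Map physical input quantities to synthesis parameters."""
    return {
        "amplitude": min(1.0, velocity / 10.0) / max(1.0, proximity),
        "frequency": 200.0 + 40.0 * velocity,   # faster impact -> brighter sound
    }

def synthesis_structure(params: dict, n: int = 5, rate: int = 44100) -> list[float]:
    """Trivial sine 'synth' driven by the control parameters."""
    return [params["amplitude"] * math.sin(2 * math.pi * params["frequency"] * t / rate)
            for t in range(n)]

# As the input changes, sound is produced in real time from the new parameters.
print(synthesis_structure(control_structure(velocity=6.0, proximity=2.0)))
```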

Difference between Procedural Audio and Sound Synthesis

Procedural Audio aims to create sounds using the most compact set of rules. This means the control and synthesis structures. The goal is a very high compression ratio.

Procedural approaches in games

Used to be employed primarily for landscape generation. Now entire game worlds can be created procedurally as the game is being played.

Constraints of procedural audio

Can be too random, requiring the use of constraints to make routes through landscapes etc. Music and sound require more structure than landscapes (patterns produced would otherwise not make meaningful sound or music).

Procedural Audio Essential steps

1. Critically analyse the required sound.


2. Determine the most important sound attributes or traits.


3. Devise a strategy to create these sounds algorithmically.

Procedural Audio - benefits

Small memory footprint. Asset management (no libraries, no editing). Architecture/ bit rate independent. Uniqueness. Object re-usability.

Procedural Audio disadvantages

Realism. Cost. Lack of standardised development platform. Shortage of skilled individuals.

Adaptive Audio

Sound that reacts to transformations in the gameplay environment. Not directly triggered by the player.

Audio in Games

Control

When creating adaptive sound we need to control which sound elements are active in different parts of the game world, both so that the sound environment is convincing and to make efficient use of resources. Goes beyond simple control of ambient sound attenuation. Requires closer consideration of game geometry and layout, and of sound propagation phenomena.

Adaptation

Zones

Use triggers and volumes together to account for changes in the game world sound environment. Allows us to control how and when sound assets are active/heard by the player.

Adaptation

Area Transitions

Audio volumes control internal and external sound levels, filtering, etc. when inside or outside that area. Triggers switch sound elements on or off, fade them, or alter them in other ways. More control, but requires some visual scripting.

Adaptation
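
A minimal sketch of the zone/volume idea from the two cards above, with hypothetical names and values (not the Unreal API): an axis-aligned audio volume that changes the ambience treatment when the player is inside it.

```python
from dataclasses import dataclass

# Minimal sketch of zone-based adaptation: an audio volume that switches
# interior/exterior treatment as the player moves. Names are hypothetical.

@dataclass
class AudioVolume:
    x_min: float; x_max: float
    y_min: float; y_max: float
    interior_gain: float = 0.4      # duck exterior ambience when inside
    interior_lpf_hz: float = 2000   # muffle outside sounds through walls

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

cave = AudioVolume(0, 50, 0, 30)

def update_ambience(px: float, py: float) -> str:
    if cave.contains(px, py):
        return f"exterior ambience: gain={cave.interior_gain}, lpf={cave.interior_lpf_hz} Hz"
    return "exterior ambience: full level, no filter"

print(update_ambience(10, 10))   # inside the volume
print(update_ambience(80, 10))   # outside the volume
```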

Sound Propagation Phenomena

Simple: reverberation. Persistence/delay of sound due to repeated reflections in a given space. Easily achieved through use of audio volumes/reverb assets. Trickier issues include obstruction, diffraction and occlusion. Game world structures and geometry affect sound propagation.

Adaptation

Obstruction

Object in game world stops the propagation of a sound.

Diffraction

Sound can travel around the obstruction (takes indirect path). The effect is a reduction of loudness and loss of high frequencies. We can model this by reducing amplitude and filtering.

Adaptation

Occlusion

Sound is not totally obstructed by the object, but much of its energy is absorbed by the obstructing object. Some sound still travels through the object, but cannot get around it. The effect is similar to diffraction: reduction of amplitude, loss of high frequencies, and also loss of definition at high frequencies. Effects are likely more severe than in diffraction.

Adaptation
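
A minimal sketch of modelling diffraction and occlusion as the cards above describe: reduce amplitude and filter out highs, with occlusion treated more severely. The one-pole filter and the gain/cutoff values are illustrative assumptions, not an engine's implementation.

```python
import math

# Minimal sketch: model diffraction/occlusion by reducing amplitude
# and low-pass filtering. Gain/cutoff values are illustrative.

def one_pole_lowpass(samples: list[float], cutoff_hz: float, rate: int = 44100) -> list[float]:
    """Simple one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def propagate(samples: list[float], mode: str) -> list[float]:
    # Occlusion is more severe than diffraction: lower gain, lower cutoff.
    gain, cutoff = {"diffraction": (0.6, 4000.0), "occlusion": (0.3, 1000.0)}[mode]
    return one_pole_lowpass([gain * s for s in samples], cutoff)

dry = [math.sin(2 * math.pi * 8000 * t / 44100) for t in range(64)]
print(max(abs(s) for s in propagate(dry, "occlusion")))  # much quieter, duller
```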

Mixing/ Levels

A problem in game sound due to the non-linear nature of the game and player freedom. We can’t be 100% sure when, what, and how many sounds will be playing at a given point in time. We should create a sound classification structure for all sounds in our game, and apply sound mixes (which control our classes) as the game is running.

Adaptation

Class Structure/ Hierarchy

Creating and assigning classes in a hierarchy allows control over groups of sounds all at once. Set global values for each class. Ensures sound levels are correct.

Adaptation
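
A minimal sketch of such a hierarchy, with hypothetical names (not a specific engine's API): a child class's effective level is the product of its own volume and its ancestors', so one global value controls a whole group at once.

```python
# Minimal sketch of a sound-class hierarchy. Names/values are illustrative.

class SoundClass:
    def __init__(self, name: str, volume: float = 1.0, parent: "SoundClass | None" = None):
        self.name, self.volume, self.parent = name, volume, parent

    def effective_volume(self) -> float:
        """Own level multiplied up through the hierarchy."""
        v = self.volume
        if self.parent:
            v *= self.parent.effective_volume()
        return v

master  = SoundClass("Master", 0.8)
sfx     = SoundClass("SFX", 1.0, parent=master)
weapons = SoundClass("Weapons", 0.5, parent=sfx)

# Setting one global value controls the whole group at once:
print(weapons.effective_volume())   # 0.8 * 1.0 * 0.5 = 0.4
```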

Mixes

Generic, global settings might not always be enough to ensure we get the balance we want. Create a mix for game scenarios. Add settings for sound classes, level, EQ. When activated during gameplay, the mix overrides the global settings, ensuring our desired mix is set for that point in the game.

Interactive Audio

Sound events that react to the player’s actions. The player directly triggers the sound.

Audio in games

Passive Mix

Create a mix which boosts the dialogue sound class level and cuts the level of music, weapons, ambience etc. Link the mix to our dialogue sound class; the mix listens for dialogue sounds during gameplay. When dialogue sounds play, the mix is triggered. Aka ‘ducking’.

Adaptation
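
A minimal sketch of passive-mix ducking as described above, with illustrative class names and levels: while any dialogue sound is active, the linked mix overrides the global class settings.

```python
# Minimal sketch of a passive mix ("ducking"). Values are illustrative.

class_levels = {"Dialogue": 1.0, "Music": 1.0, "Weapons": 1.0, "Ambience": 1.0}
duck_mix     = {"Dialogue": 1.2, "Music": 0.4, "Weapons": 0.5, "Ambience": 0.3}

def current_levels(active_dialogue_count: int) -> dict:
    """The mix 'listens' for dialogue; when dialogue plays, it overrides globals."""
    if active_dialogue_count > 0:
        return {cls: duck_mix.get(cls, lvl) for cls, lvl in class_levels.items()}
    return dict(class_levels)

print(current_levels(0))   # global settings
print(current_levels(2))   # ducked: dialogue boosted, everything else cut
```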

Active Mix

We push the sound mix when a certain event occurs. Requires Blueprint script.

Adaptation

Adaptation

The world’s sounds must alter dynamically to reflect the player’s location and context. Real-world sound is not necessarily our goal; we could recreate it if we wanted, but that can be problematic, e.g. overlapping attenuation radii could affect sound clarity and incur a resource hit. Rather, a believable, functional world sound is more important, whilst considering asset use/efficiency and clarity of the resulting soundscape.

Summary

Interaction

Requires some programming. Either in the game code (by an audio programmer): sounds triggered/modified using the game code. Or using middleware tools (by the sound designer): linking game world objects, events, actions and behaviour to sound. No coding required; rather, use of visual scripting.

Interaction Design/ Scripting

An important step forward from creating sounds or embedding ambient sound: designing a key part of the game interaction. Control over when and how sounds are triggered or manipulated. Can be simple or complex.

Interaction

Simple interactions

Player interacts with a game object. Events are fired. Sound elements are faded in or out.

Complex interaction

Get player location. Line trace around player. Get surface type on each hit. Play associated sound cue. Sound cue handles variation.
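
A minimal sketch of that flow, with the engine line trace faked by a lookup and hypothetical cue names.

```python
import random

# Minimal sketch of the footstep flow: get player location, trace to the
# ground, read the surface type, play the matching cue. Names are hypothetical.

SURFACE_CUES = {
    "grass": ["grass_step_01", "grass_step_02", "grass_step_03"],
    "metal": ["metal_step_01", "metal_step_02"],
}

def line_trace_surface(x: float, y: float) -> str:
    """Stand-in for an engine line trace returning the hit surface type."""
    return "metal" if x > 50 else "grass"

def play_footstep(x: float, y: float) -> str:
    surface = line_trace_surface(x, y)
    cue = random.choice(SURFACE_CUES[surface])  # the cue handles variation
    return f"playing {cue} on {surface}"

print(play_footstep(10.0, 3.0))
print(play_footstep(80.0, 3.0))
```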

Physical interactions

Physics and sound systems together produce sounds caused by material interactions in the game world.

Variation

The player encounters sounds again and again, and repetition in sound is easy to notice. Causes boredom and fatigue. Reduces immersion in the game experience. Many sounds in real life are not generally repetitive, especially natural sounds. Real sounds can vary subtly.

Sound cues

Randomly vary explosion sound files, add delay. Modulate pitch and level. Add randomly selected sounds (delayed). Modulate the result.

Variation
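
A minimal sketch of such a cue, with illustrative file names and modulation ranges: each trigger picks a random file and randomises delay, pitch and level, so no two playbacks are identical.

```python
import random

# Minimal sketch of a variation 'sound cue': random file, random delay,
# pitch and level modulation. Ranges are illustrative assumptions.

EXPLOSION_FILES = ["explosion_01.wav", "explosion_02.wav", "explosion_03.wav"]

def build_explosion_event() -> dict:
    return {
        "file":     random.choice(EXPLOSION_FILES),
        "delay_s":  random.uniform(0.0, 0.05),    # small random offset
        "pitch":    random.uniform(0.9, 1.1),     # +/- 10% pitch modulation
        "level_db": random.uniform(-3.0, 0.0),    # level modulation
    }

for _ in range(3):
    print(build_explosion_event())   # no two explosions identical
```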

Games are a longer experience than films. Players encounter the same sounds again and again. Humans are sensitive to repetition in sound. Breaks or reduces immersion. Causes fatigue.

Audio in games

Variation Summary

Real-world sounds vary in a subtle way. Balance is required. Some sounds should stay the same each time. Players want sound to be impressive/exciting, so it’s not always about realism.

Limitations of variation

Memory. Processing time (per frame). Your time.

Triangle of compromise

Memory. Variation. Quality.

Compromise

High Fs (sample rate), 16-bit, lots of variation (sound files, modification): uses more memory/processing. Lots of variation plus low memory usage: quality of files must be reduced. High quality plus low memory usage: less variation in sounds.
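
Some back-of-envelope arithmetic behind the triangle: uncompressed PCM memory is sample rate × bytes per sample × channels × duration, so high quality and many variations multiply quickly. The values below are illustrative.

```python
# Memory cost of uncompressed PCM = Fs * (bits/8) * channels * seconds.

def pcm_bytes(fs: int, bits: int, channels: int, seconds: float) -> float:
    return fs * (bits // 8) * channels * seconds

one_variation = pcm_bytes(44100, 16, 2, 2.0)        # one 2 s stereo sound
print(one_variation / 1024, "KiB per variation")     # ~344.5 KiB

# Ten variations of that one sound:
print(10 * one_variation / (1024 * 1024), "MiB")     # ~3.4 MiB
```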

RAM budget problems

Developing a sound system that will work for all the different platforms, each with varying constraints: assume the minimum spec and work with that.

Streaming

Only load assets when they are needed: more efficient use of the RAM budget. There is a streaming delay, so it is not suited to some game sounds.

Streaming Levels

A large level with many sounds, lots of variation, several pieces of music and dialogue exceeds the RAM budget very quickly.

Adaptation

Sound changes according to the player’s context. As the player moves around the game world, the world audio content changes/transforms; the game’s sound adapts to reflect the player’s context, and the sounds perceived by the player change accordingly. Essentially a form of indirect interaction: changes occur due to the player’s context in the game world. Sound reacts to transformations in the gameplay environment. Accounts for elements important to our perception of the world.

Interaction

Player directly triggers sound

Spatialisation

Player’s perception of sound emitter location

Adaptation in Unreal

Attenuation

Reduction of perceived amplitude over distance.

Adaptation in Unreal

Filtering

Reduction of higher frequency elements over distance.

Adaptation in Unreal

Challenges: Attenuation

Game worlds are not real. Internal spaces tend to be large and spacious (allows room for movement). External spaces are not as spread out as in the real world: it would take too long to navigate through them. Using ‘real’ attenuation curves causes problems: a world filled with many low-level, distant sounds confuses the mix/soundscape, uses up resources, and may be unimportant to gameplay or narrative.

Adaptation

Control

We can make use of alternative attenuation curves. Log attenuation curve: no sound until at a relatively short distance from the source; good for small areas with many sound sources. Log reverse: sudden drop-off; good for sudden sound attenuation caused by a wall/obstacle.
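
A minimal sketch of these curve shapes, with illustrative formulas standing in for the engine's actual curves: gain as a function of normalised distance d within the falloff radius.

```python
import math

# Minimal sketch of alternative attenuation curves; d is distance as a
# fraction of the falloff radius (0..1). Formulas are illustrative.

def linear(d: float) -> float:
    return max(0.0, 1.0 - d)

def log_curve(d: float) -> float:
    """Falls steeply near the source: little sound until relatively close."""
    return max(0.0, 1.0 - math.log10(1.0 + 9.0 * d))   # 1 at d=0, 0 at d=1

def log_reverse(d: float) -> float:
    """Stays high, then sudden drop-off, e.g. sound cut by a wall."""
    return max(0.0, math.log10(10.0 - 9.0 * d))        # 1 at d=0, 0 at d=1

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"d={d:.2f}  linear={linear(d):.2f}  "
          f"log={log_curve(d):.2f}  log_rev={log_reverse(d):.2f}")
```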