In the past, game audio could only react to events happening in the game, but this is no longer the case!
Learn how to enable actors in the game to 'hear' and respond to in-game audio, discover how to drive parameters by analyzing the audio's amplitude and frequency, and use your player's microphone to create immersive experiences.
Imagine driving leaf animations based on the sound of wind, instigating a camera shake based on the frequencies of an explosion, or having in-game actors react to the music track.
Apply this to the player's microphone to create even more exciting opportunities. Have your player start a fire by blowing on their microphone, conjure a magic spell using their voice, or even attract the attention of nearby NPCs by whistling.
The ability to use audio analysis to drive in-game events and parameters based on the audio itself has endless creative applications. Let's get started!
You will learn how to:
- Implement Event Tracks within a Level Sequence to synchronize in-game events to music.
- Use the PawnNoiseEmitter and PawnSensing nodes to enable NPCs to 'hear' player-instigated events.
- Use cooked envelope and frequency data to control parameters and behaviors within a game.
- Conduct real-time spectral analysis on audio-capture data to drive events and parameters within a game.
- Route audio data to the Niagara visual effects system.
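To give a flavor of the envelope-data objective above, here is a minimal, engine-independent sketch of the underlying idea: computing a per-window RMS amplitude envelope from raw samples and normalizing it so it can drive a 0-to-1 game parameter. This is not Unreal Engine's API (in the engine you would enable baked envelope analysis on a sound asset instead); the function name and window size here are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical helper (not an Unreal Engine API): computes a per-window
// RMS amplitude envelope from raw samples in the range [-1, 1]. Each
// envelope value is the kind of number you might map onto a material,
// animation, or camera-shake parameter.
std::vector<float> AmplitudeEnvelope(const std::vector<float>& Samples,
                                     std::size_t WindowSize)
{
    std::vector<float> Envelope;
    for (std::size_t Start = 0; Start < Samples.size(); Start += WindowSize)
    {
        const std::size_t End = std::min(Start + WindowSize, Samples.size());
        float SumSquares = 0.0f;
        for (std::size_t i = Start; i < End; ++i)
        {
            SumSquares += Samples[i] * Samples[i];
        }
        // RMS stays within [0, 1] for samples in [-1, 1], so the result
        // can be used directly as a normalized parameter value.
        Envelope.push_back(std::sqrt(SumSquares /
                                     static_cast<float>(End - Start)));
    }
    return Envelope;
}
```

A spike in these envelope values could, for example, trigger the camera shake or leaf animation described earlier; the course covers how to wire the engine's own analysis output to such parameters.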