Anti-Detection

Audio Fingerprinting: How AudioContext Tracks You

The Web Audio API produces device-specific output that identifies your browser. Learn how audio fingerprinting works and how cloud browsers neutralize it.

Introduction

Audio fingerprinting uses the Web Audio API to generate device-specific identifiers. By creating an OfflineAudioContext, processing a signal through nodes like OscillatorNode and DynamicsCompressorNode, and reading back the rendered samples, websites can compute a hash that is highly distinctive for your hardware and software configuration.

Unlike canvas fingerprinting, audio fingerprinting does not require any visible element on the page. It runs entirely in the background and produces a stable identifier across sessions.

How Audio Fingerprinting Works

The technique follows a consistent pattern:

  1. Create an OfflineAudioContext with a specific sample rate (typically 44100 Hz)
  2. Create an OscillatorNode with a specific frequency and waveform
  3. Connect it through a DynamicsCompressorNode with specific parameters
  4. Render the audio buffer offline
  5. Read specific sample values from the output buffer
  6. Hash the sample values to create a fingerprint
A minimal implementation of these steps:

const context = new OfflineAudioContext(1, 44100, 44100);
const oscillator = context.createOscillator();
oscillator.type = 'triangle';
oscillator.frequency.value = 10000;

const compressor = context.createDynamicsCompressor();
compressor.threshold.value = -50;
compressor.knee.value = 40;
compressor.ratio.value = 12;

oscillator.connect(compressor);
compressor.connect(context.destination);
oscillator.start(0);

const buffer = await context.startRendering();
const samples = buffer.getChannelData(0);
// Hash samples[4500] through samples[5000] for the fingerprint
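The final hashing step (step 6) is often nothing more than a sum of absolute sample values over a fixed slice, in the style popularized by fingerprintjs-derived scripts. A minimal sketch, assuming that approach (the `sumSamples` name and the default 4500–5000 range are illustrative, not from any specific library):

```javascript
// Reduce a slice of rendered samples to a single fingerprint string.
// Summing absolute values is one common approach; real scripts may
// instead feed the raw bytes into a cryptographic hash.
function sumSamples(samples, start = 4500, end = 5000) {
  let sum = 0;
  for (let i = start; i < end; i++) {
    sum += Math.abs(samples[i]);
  }
  return sum.toString();
}

// Synthetic data standing in for buffer.getChannelData(0):
const fake = new Float32Array(44100).fill(0.25);
console.log(sumSamples(fake)); // 500 samples of 0.25 -> "125"
```

Because the compressor's floating-point output differs slightly per device, even this simple sum diverges across hardware while staying stable on any one machine.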

Why Output Varies

Audio processing differences arise from:

  • Audio hardware - Different DACs and audio chipsets use different internal precision
  • Operating system audio stack - Windows (WASAPI), macOS (CoreAudio), and Linux (ALSA/PulseAudio) process audio differently
  • Browser implementation - Even the same browser version on different platforms produces different output
  • Sample rate handling - Resampling algorithms vary across implementations

Detection Systems Using Audio

Major detection platforms that incorporate audio fingerprinting:

  • FingerprintJS Pro - Combines audio with canvas, WebGL, and other signals
  • CreepJS - Open-source fingerprinting that includes AudioContext testing
  • DataDome - Uses audio as one signal in its bot detection pipeline
  • PerimeterX/HUMAN - Incorporates audio processing characteristics

How BotCloud Handles Audio Fingerprinting

BotCloud applies controlled noise to audio processing at the engine level:

  • Consistent output - Same profile produces identical audio fingerprints across sessions
  • Realistic values - Audio samples fall within normal ranges for the claimed platform
  • Sample rate alignment - The audio context sample rate matches the profile's platform
  • Cross-API consistency - AnalyserNode, AudioWorklet, and OfflineAudioContext all produce consistent results

Each profile generates a unique but stable audio fingerprint, preventing both cross-session correlation and detection of synthetic audio output.
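BotCloud's engine-level implementation is not public, but the idea of per-profile deterministic noise can be sketched as seeding a PRNG from a profile identifier and offsetting each sample by an amount far below audible precision. All names here are illustrative; the PRNG is the public-domain mulberry32 algorithm:

```javascript
// Conceptual sketch: a tiny deterministic PRNG seeded per profile.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Perturb samples by a stable, inaudible offset derived from the seed.
function perturbSamples(samples, profileSeed, magnitude = 1e-6) {
  const rand = mulberry32(profileSeed);
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    out[i] = samples[i] + (rand() * 2 - 1) * magnitude;
  }
  return out;
}

// Same seed -> bit-identical output across runs; a different seed
// yields a different, but equally stable, fingerprint.
const base = new Float32Array([0.1, -0.2, 0.3]);
const a = perturbSamples(base, 42);
const b = perturbSamples(base, 42);
console.log(a.every((v, i) => v === b[i])); // true
const c = perturbSamples(base, 7); // distinct stream for another profile
```

The key property is determinism: the noise is a pure function of the profile seed, so repeat visits hash to the same value while two profiles never share one.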

Verification

Test audio fingerprint consistency:

// Assumes `browser` is a Puppeteer-compatible handle connected to a profile
const page = await browser.newPage();
await page.goto('https://browserleaks.com/audio');
// The audio fingerprint hash shown should be identical across sessions
// that use the same profile
#audio #fingerprinting #audiocontext #privacy