qualabs/dash-parser-benchmark

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

10 Commits
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

DASH Parser Benchmark

Headless benchmark tool to measure dash.js parser performance across different implementations (XML vs JSON).

🎯 Purpose

This tool measures pure parsing time of the DashParser.parse() function using headless browser automation. It collects:

  • ✅ Average parsing time
  • ✅ Standard deviation
  • ✅ Median, P95, P99 percentiles
  • ✅ Min/Max values
  • ✅ Coefficient of variation for consistency analysis

📊 What It Measures

The benchmark measures ONLY the execution time of:

const parsedManifest = parserInstance.parse(manifestData);

This includes:

  • XML/JSON string parsing
  • Node processing (for XML)
  • Type conversions (matchers)
  • ObjectIron inheritance
  • Array organization

It excludes:

  • Network requests
  • Manifest loading
  • Player initialization
  • Event handling
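The measurement boundary described above can be sketched as a simple timing loop. This is a minimal illustration, not the tool's actual harness code; the function name `measureParsing` and the use of `performance.now()` are assumptions, while `parserInstance.parse(manifestData)` is the call quoted above.

```javascript
// Minimal sketch of a pure-parsing timing loop (assumed harness logic).
// Only parserInstance.parse() sits inside the timed region -- no network,
// loading, or player work is measured.
function measureParsing(parserInstance, manifestData, iterations) {
  const times = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    parserInstance.parse(manifestData); // the only measured work
    const end = performance.now();
    times.push(end - start); // one duration (ms) per iteration
  }
  return times;
}
```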

🚀 Quick Start

1. Install Dependencies

npm install

2. Add Your Builds

Copy your dash.js builds to the builds/ directory:

cp ../dash.js/dist/dash.all.debug.js builds/dash-xml.js
cp ../dash.js/dist/dash.all.debug-json.js builds/dash-json.js
cp ../dash.js/dist/dash.all.debug-json-optimized.js builds/dash-json-optimized.js

3. Run Benchmarks

Compare XML vs JSON (standard):

npm run benchmark -- xml json

Compare XML vs JSON (optimized):

npm run benchmark -- xml json-optimized

Test a single build:

npm run benchmark -- xml

πŸ“ Project Structure

dash-parser-benchmark/
├── package.json        # Dependencies and scripts
├── config.json         # Build aliases and configuration
├── index.html          # Browser test page
├── benchmark.js        # Main benchmark entry point
├── runner.js           # Benchmark runner with server
├── README.md           # This file
├── src/                # Benchmark modules
├── builds/             # Place your dash.js builds here
│   ├── dash-xml.js
│   ├── dash-json.js
│   └── dash-json-optimized.js
├── manifests/          # Test manifests
│   ├── test_pic.mpd
│   └── test_pic.json
└── results/            # Generated reports
    └── pure-parsing-report-*.json

πŸ“ Usage

This tool has two simple modes:

Mode 1: Benchmark (Headless Local Testing)

Run automated benchmarks locally using Puppeteer in headless mode. Results are saved automatically.

npm run benchmark -- <build1> [build2] [build3]

Examples:

# Compare two builds
npm run benchmark -- xml json

# Compare three builds
npm run benchmark -- xml json json-optimized

# Test single build
npm run benchmark -- xml

Use this mode for:

  • Quick automated testing
  • CI/CD pipelines
  • Consistent reproducible results

Mode 2: Server (Visual Local & Remote Testing)

Start an HTTP server that allows you to run benchmarks from any browser (local or remote). Results are saved automatically to the server.

npm run server

Then simply navigate to the root URL:

  • Local testing: http://localhost:8080
  • Remote testing (via ngrok): https://your-ngrok-url.app

You'll see an interactive page with buttons for each build configuration. Just click or tap a button to start the benchmark; there is no need to type query parameters manually.

Advanced usage: if needed, you can still open a direct URL with query parameters:

  • http://localhost:8080/index.html?build=/builds/dash-xml.js&manifest=/manifests/test_pic.mpd&iterations=100

Use this mode for:

  • Testing on real devices (phones, tablets, TVs)
  • Visual debugging
  • Testing in different browsers (Safari, Chrome, Firefox)
  • Remote testing via ngrok
  • Easy one-click benchmark execution
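On the page side, the query parameters from the direct-URL form above can be read with the standard URLSearchParams API. This is a sketch, not the actual index.html code; the parameter names match the example URL, and the default of 100 iterations is an assumption.

```javascript
// Sketch: read benchmark options from the page URL.
// Parameter names mirror ?build=...&manifest=...&iterations=...;
// the fallback iteration count is an assumed default.
function readBenchmarkOptions(search) {
  const params = new URLSearchParams(search);
  return {
    build: params.get('build'),
    manifest: params.get('manifest'),
    iterations: parseInt(params.get('iterations') ?? '100', 10),
  };
}

// In the browser: readBenchmarkOptions(window.location.search)
```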

Remote Testing Setup:

  1. Start the server:

    npm run server
  2. In another terminal, start ngrok:

    ngrok http 8080
  3. Use the ngrok URL from any device to run benchmarks

All results will be automatically saved to the results/ folder on the server.


Available Build Aliases

The build aliases are defined in config.json:

  • xml - XML Parser
  • json - JSON Parser (Standard)
  • json-optimized - JSON Parser (Optimized)

βš™οΈ Configuration

Edit config.json to customize your benchmarks:

{
  "iterations": 100,
  "buildAliases": {
    "xml": {
      "name": "XML Parser",
      "path": "/builds/dash-xml.js",
      "manifest": "/manifests/test_pic.mpd"
    },
    "json": {
      "name": "JSON Parser (Standard)",
      "path": "/builds/dash-json.js",
      "manifest": "/manifests/test_pic.json"
    },
    "json-optimized": {
      "name": "JSON Parser (Optimized)",
      "path": "/builds/dash-json-optimized.js",
      "manifest": "/manifests/test_pic.json"
    }
  }
}

Configuration Parameters

  • iterations: Number of parsing iterations per test
  • buildAliases: Object mapping alias names to build configurations
    • Each alias contains:
      • name: Display name for the build
      • path: Path to the build file (relative to project root)
      • manifest: Path to the manifest file to use with this build
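The alias lookup described above can be sketched as a small helper that turns a config entry into the page URL the runner loads. The helper name `resolveAliasUrl` and the exact URL shape are illustrative assumptions; the fields (`path`, `manifest`, `iterations`) come from config.json as shown.

```javascript
// Sketch: resolve a build alias from config.json into a benchmark page URL.
// Helper name and URL shape are assumptions; the config fields are the
// ones documented above.
function resolveAliasUrl(config, alias, baseUrl = 'http://localhost:8080') {
  const build = config.buildAliases[alias];
  if (!build) throw new Error(`Unknown build alias: ${alias}`);
  const params = new URLSearchParams({
    build: build.path,
    manifest: build.manifest,
    iterations: String(config.iterations),
  });
  return `${baseUrl}/index.html?${params}`;
}
```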

Adding New Build Aliases

To add a new build to test:

  1. Copy your build file to builds/ directory
  2. Add a new entry to buildAliases in config.json:
"my-custom-build": {
  "name": "My Custom Build",
  "path": "/builds/dash-custom.js",
  "manifest": "/manifests/test_pic.json"
}
  3. Run it:
npm run benchmark -- xml my-custom-build

📈 Output

JSON Report

A detailed JSON report is saved in results/ after each benchmark run:

{
  "timestamp": "2025-12-22T20:12:18.992Z",
  "measurementType": "pure-parsing",
  "description": "Measures only the DashParser.parse() function execution time",
  "source": "remote-browser",
  "userAgent": "Mozilla/5.0...",
  "results": [
    {
      "build": "XML Parser",
      "manifest": "test_pic.mpd",
      "stats": {
        "avg": 1.234,
        "stdDev": 0.123,
        "median": 1.200,
        "p95": 1.456,
        ...
      }
    }
  ]
}

📊 Statistical Metrics

  • Average: Mean parsing time across all iterations
  • Standard Deviation (Std Dev): Measure of variation in parsing times
  • Median (P50): Middle value when sorted
  • P75/P90/P95/P99: Percentile values (e.g., P95 means 95% of runs were faster)
  • Coefficient of Variation (CV): Std Dev / Average × 100%
    • CV < 5%: Excellent consistency
    • CV < 10%: Good consistency
    • CV < 20%: Moderate variance
    • CV ≥ 20%: High variance (may need more iterations)
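These metrics follow standard formulas and can be sketched as below. The function name `computeStats` is illustrative, not the tool's actual module API; the percentile uses the nearest-rank method, which is an assumption about how the tool computes P50/P95/P99.

```javascript
// Sketch of the reported statistics (standard formulas; nearest-rank
// percentiles are an assumption about the tool's implementation).
function computeStats(samples) {
  const n = samples.length;
  const sorted = [...samples].sort((a, b) => a - b);
  const avg = samples.reduce((s, x) => s + x, 0) / n;
  const variance = samples.reduce((s, x) => s + (x - avg) ** 2, 0) / n;
  const stdDev = Math.sqrt(variance);
  // Nearest-rank percentile on the sorted samples.
  const pct = p => sorted[Math.min(n - 1, Math.ceil((p / 100) * n) - 1)];
  return {
    avg,
    stdDev,
    median: pct(50),
    p95: pct(95),
    p99: pct(99),
    min: sorted[0],
    max: sorted[n - 1],
    cv: (stdDev / avg) * 100, // coefficient of variation, in %
  };
}
```

A CV above 20% on this definition usually means the run-to-run spread is a fifth of the mean or more, which is why the tool suggests raising the iteration count.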

🔧 Troubleshooting

"Unknown build alias"

Make sure the alias is defined in config.json under buildAliases.

"Failed to load build"

  • Ensure build files exist in builds/ directory
  • Check that paths in config.json start with /builds/
  • Verify the dash.js build is complete and valid

"Parser instance not obtained"

  • Make sure your dash.js build includes DashParser and FactoryMaker
  • Try using the debug build instead of the minified version
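A quick way to check this from the browser console is to probe FactoryMaker yourself. The sketch below is a loosely hedged guess at dash.js internals: `getClassFactoryByName` and the `create()` arguments are assumptions that may differ between dash.js versions, and minified builds may not expose these names at all.

```javascript
// Sketch: probe for a DashParser instance in a debug build.
// FactoryMaker method names and create() arguments are assumptions
// about dash.js internals and may vary by version.
function getParserInstance(dashjs) {
  if (!dashjs || !dashjs.FactoryMaker) {
    throw new Error('Parser instance not obtained: FactoryMaker missing');
  }
  const factory = dashjs.FactoryMaker.getClassFactoryByName('DashParser');
  if (!factory) {
    throw new Error('Parser instance not obtained: DashParser not registered');
  }
  const context = {};
  return factory(context).create({});
}
```

If this throws, the build most likely stripped or renamed the internals the benchmark relies on, and switching to dash.all.debug.js is the first thing to try.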

High CV (> 20%)

  • Increase iterations in config.json
  • Close other applications to reduce system load
  • Run benchmark multiple times and average results

Remote Testing Not Working

  • Make sure the server is running with npm run server
  • Check that ngrok is pointing to port 8080
  • Verify the firewall is not blocking the port
  • The server listens on 0.0.0.0 to accept remote connections

πŸ“ Requirements

  • Node.js 16+
  • Chrome/Chromium (installed automatically by Puppeteer)
  • ngrok (optional, for remote testing)

📄 License

MIT
