Nextcloud Whiteboard

The official whiteboard app for Nextcloud. Create and share whiteboards with real-time collaboration.

Features

  • 🎨 Drawing shapes, writing text, connecting elements
  • 📝 Real-time collaboration with semi-offline support
  • 💾 Client-first architecture with local storage
  • 🔄 Automatic sync between local and server storage
  • 🌐 Works semi-offline: changes are saved locally and synced once back online (requires a successfully configured websocket server)
  • 💪 Built on Excalidraw

Architecture

Nextcloud Whiteboard uses a client-first architecture that prioritizes browser-based functionality:

  • Browser-First: All whiteboard functionality works directly in the browser
  • Local Storage: Changes are immediately saved to browser storage (IndexedDB)
  • Real-time Collaboration: WebSocket server handles live collaboration sessions
  • Simplified Connectivity: Only browsers need to connect to the websocket server
  • Reduced Dependencies: Websocket server is only needed for real-time collaboration, not basic functionality

Installation & Setup

WebSocket Server for Real-time Collaboration

The websocket server handles real-time collaboration sessions between users. Important: The websocket server is only needed for live collaboration - basic whiteboard functionality works without it.

Connectivity Requirements

Essential (for real-time collaboration):

  • User browsers need HTTP(S) access to the websocket server
  • Nextcloud and websocket server share a JWT secret for authentication
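
A quick way to confirm reachability from a user's network is to request the Socket.IO polling endpoint directly. This is a rough check that assumes the default Socket.IO v4 path and the example URL used below; a reachable server should answer with an HTTP 200 and an Engine.IO handshake payload:

curl -i "https://nextcloud.local:3002/socket.io/?EIO=4&transport=polling"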

Configuration

Configure Nextcloud with the websocket server details (these settings can also be changed in the Nextcloud admin settings):

occ config:app:set whiteboard collabBackendUrl --value="https://nextcloud.local:3002"
occ config:app:set whiteboard jwt_secret_key --value="some-random-secret"
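
To double-check the stored values, you can read them back with occ:

occ config:app:get whiteboard collabBackendUrl
occ config:app:get whiteboard jwt_secret_key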

Running the WebSocket Server

Node.js

npm ci
JWT_SECRET_KEY="some-random-secret" NEXTCLOUD_URL=https://nextcloud.local npm run server:start

Docker

docker run -e JWT_SECRET_KEY=some-random-secret -e NEXTCLOUD_URL=https://nextcloud.local -p 3002:3002 --rm ghcr.io/nextcloud-releases/whiteboard:stable

Or using Docker Compose:

services:
  nextcloud-whiteboard-server:
    image: ghcr.io/nextcloud-releases/whiteboard:stable
    ports:
      - "3002:3002"
    environment:
      NEXTCLOUD_URL: https://nextcloud.local
      JWT_SECRET_KEY: some-random-secret
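
Then start the service (assuming the snippet above is saved as docker-compose.yml):

docker compose up -d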

Environment Variables:

  • JWT_SECRET_KEY: Must match the secret configured in Nextcloud
  • NEXTCLOUD_URL: Used for JWT token validation (not for server-to-server communication)
  • RECORDINGS_DIR: Optional writable directory for temporary recording files (defaults to /tmp/whiteboard-recordings in the Docker image and automatically falls back to the OS temp directory if unavailable)
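
For example, to run the Node.js server with a custom recording directory (the path is illustrative):

JWT_SECRET_KEY="some-random-secret" NEXTCLOUD_URL=https://nextcloud.local \
RECORDINGS_DIR=/var/lib/whiteboard-recordings npm run server:start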

Recording prerequisites

Board recordings require a headless Chromium browser on the collaboration server. The system automatically detects Chrome installations:

  • Self-hosted: Auto-detects Chrome/Chromium in standard locations
  • Docker: Uses bundled Alpine Chromium package
  • Custom paths: Set CHROME_EXECUTABLE_PATH environment variable

Quick Setup

Docker (Recommended)

# No setup needed - Chromium is pre-installed
docker run -e JWT_SECRET_KEY=some-random-secret -p 3002:3002 --rm ghcr.io/nextcloud-releases/whiteboard:stable

Self-hosted Systems

# Debian/Ubuntu
sudo apt-get update
sudo apt-get install -y chromium chromium-common

# Alpine Linux  
apk add --no-cache chromium nss freetype harfbuzz ttf-freefont

# macOS (install Chrome via Homebrew or download from google.com/chrome)
brew install --cask google-chrome

# Verify installation
chromium-browser --version  # Linux
google-chrome --version     # macOS/Chrome

Advanced Configuration

# Override auto-detection (for custom Chrome locations)
export CHROME_EXECUTABLE_PATH="/path/to/your/chrome"
npm run server:start

The server performs automated Chrome detection on startup and before each recording. If Chrome isn't found, users receive clear error messages with installation guidance. Temporary recording data is written to the directory specified by RECORDINGS_DIR (or /tmp/whiteboard-recordings in the official Docker image). If that location cannot be created or written, the server falls back to the operating system temp directory automatically and logs a warning. After installing Chrome, restart the websocket server to apply changes.

Reverse Proxy Configuration

If running the websocket server manually, configure your reverse proxy to expose it:

Apache Configuration

Apache >= 2.4.47:

ProxyPass /whiteboard/ http://localhost:3002/ upgrade=websocket

Apache < 2.4.47:

ProxyPass /whiteboard/ http://localhost:3002/
RewriteEngine on
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteRule ^/?whiteboard/(.*) "ws://localhost:3002/$1" [P,L]
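
Both variants assume the relevant proxy modules are enabled. On Debian-style systems that would look roughly like this (the newer ProxyPass form needs mod_proxy and mod_proxy_http; the older form also needs mod_proxy_wstunnel and mod_rewrite):

sudo a2enmod proxy proxy_http proxy_wstunnel rewrite
sudo systemctl restart apache2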

Nginx Configuration

location /whiteboard/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://localhost:3002/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
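
After adding the block, validate the configuration and reload (assuming a systemd-managed nginx):

sudo nginx -t && sudo systemctl reload nginx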

Other Reverse Proxies

Caddy v2:

handle_path /whiteboard/* {
    reverse_proxy http://127.0.0.1:3002
}

Traefik v3:

- traefik.http.services.whiteboard.loadbalancer.server.port=3002
- traefik.http.middlewares.strip-whiteboard.stripprefix.prefixes=/whiteboard
- traefik.http.routers.whiteboard.rule=Host(`nextcloud.example.com`) && PathPrefix(`/whiteboard`)
- traefik.http.routers.whiteboard.middlewares=strip-whiteboard

WebSocket Server Configuration

The websocket server handles real-time collaboration sessions (not critical whiteboard data):

Collaboration Data Storage

LRU Cache (Default)

  • In-memory session storage, simple setup
  • Suitable for most deployments
  • Session data cleared on restart (whiteboard data remains safe in Nextcloud/local storage)

STORAGE_STRATEGY=lru

Redis

  • For multi-server setups or session persistence
  • Enables horizontal scaling with Redis Streams

STORAGE_STRATEGY=redis
REDIS_URL=redis://[username:password@]host[:port][/database_number]
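
For instance, a concrete value for a password-protected Redis 6+ instance might look like this (host and credentials are placeholders):

STORAGE_STRATEGY=redis
REDIS_URL=redis://default:change-me@redis.internal:6379/0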

Scaling (Optional)

For high-traffic environments with multiple websocket servers:

  1. Use Redis for shared session state
  2. Configure load balancer with session stickiness
  3. Redis Streams handles WebSocket scaling automatically
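
As a minimal sketch, two workers sharing one Redis instance could be started like this (the Redis hostname and host ports are placeholders); the load balancer, not shown here, must route a given client to the same worker for the duration of its session:

docker run -d -e STORAGE_STRATEGY=redis -e REDIS_URL=redis://redis.internal:6379 \
  -e JWT_SECRET_KEY=some-random-secret -e NEXTCLOUD_URL=https://nextcloud.local \
  -p 3002:3002 ghcr.io/nextcloud-releases/whiteboard:stable
docker run -d -e STORAGE_STRATEGY=redis -e REDIS_URL=redis://redis.internal:6379 \
  -e JWT_SECRET_KEY=some-random-secret -e NEXTCLOUD_URL=https://nextcloud.local \
  -p 3003:3002 ghcr.io/nextcloud-releases/whiteboard:stable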

Benchmarking & Capacity Planning

To size dedicated collaboration servers, we profiled the websocket backend with the synthetic load harness in tools/benchmarks/. The runBenchmarks.mjs script boots the production server (TLS disabled, LRU cache) and spawns JWT-authenticated Socket.IO clients from loadTest.mjs. Each run holds a room open for 60 seconds, with 10% of participants sending cursor and viewport updates at 2 Hz to mimic active sketching while the rest stay idle.

Test environment

  • Apple M4 (10 logical cores, 16 GB RAM), Node v24.8.0
  • Single websocket process, NODE_OPTIONS=--max-old-space-size=8192, STORAGE_STRATEGY=lru
  • Aggregate ingress/egress recorded from client telemetry (nettop requires root on macOS)
  • Full JSON results are stored in tools/benchmarks/results.json

Observed highlights

  • Per-user CPU hovered around 0.2% for small teams and climbed to ~0.37% at 300 concurrent users.
  • Memory footprint stayed near 5 MB/user at 50 users and ~10 MB/user at 300 users.
  • Server egress reached ~3 Mbps (50 users), ~13 Mbps (100 users) and ~366 Mbps (300 users). Pushing to 500 synthetic users drove ~1.2 Gbps and the single process began dropping ~30% of sockets.

Key takeaways

  • Expect roughly 0.2% CPU per connected collaborator; reserve additional headroom for presenters or rapid drawing.
  • Budget ~5-10 MB of process RSS per user when running in a single-node, in-memory configuration.
  • Throughput scales quickly with the number of active senders; over-provision outbound bandwidth (≥15 Mbps per 100 users) when presentations or screen following are frequent.

| Concurrent users | Avg CPU (10-core test rig) | Avg RSS | Server egress (60 s run) | Recommended spec |
|---|---|---|---|---|
| 50 | ~10% (~0.21% per user) | ~0.24 GB | ~23.5 MB total (≈3.1 Mbps) | 2 vCPU / 1 GB RAM |
| 100 | ~20% (~0.20% per user) | ~0.36 GB | ~96.6 MB total (≈12.9 Mbps) | 4 vCPU / 2 GB RAM |
| 500* | ~203% (≈2 cores) | ~3.6-4.5 GB | ~9.2 GB total (≈1.2 Gbps) | ≥8 vCPU / ≥8 GB RAM per node, ≥2 nodes + Redis |

*500-user test saturated a single instance and dropped ~30% of simulated clients. Treat this as an upper bound and plan to run multiple websocket workers behind a sticky load balancer with STORAGE_STRATEGY=redis.

Run the benchmark locally

  1. Install dependencies: npm ci (and composer install if you have not bootstrapped the PHP side yet).
  2. Ensure the websocket server can start without TLS (set TLS=false or export TLS=false before running the script) and that JWT_SECRET_KEY/NEXTCLOUD_URL are configured for your environment.
  3. Execute node tools/benchmarks/runBenchmarks.mjs to run the default scenarios (50, 100, 300 concurrent users).
  4. Adjust load with environment variables as needed:
    • LOAD_TEST_CONCURRENCY=50,150,300 to pick specific cohorts (comma separated).
    • LOAD_TEST_ACTIVE_RATIO=0.15 to vary the percentage of active broadcasters.
    • LOAD_TEST_RATE=3 to control per-sender update frequency (messages/sec).
    • LOAD_TEST_DURATION=90 to lengthen each run.
  5. After each execution the summarized telemetry is printed to stdout and saved to tools/benchmarks/results.json; keep copies per hardware profile for future comparisons.
  6. When testing in prod-like environments, monitor OS-level CPU/RAM/network metrics in parallel (e.g., top, sar, cloud dashboards) to validate the Node-level sampling.
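
Putting those options together, a complete invocation might look like this (the secret, URL, and load parameters are all illustrative):

TLS=false JWT_SECRET_KEY=some-random-secret NEXTCLOUD_URL=https://nextcloud.local \
LOAD_TEST_CONCURRENCY=50,150 LOAD_TEST_ACTIVE_RATIO=0.15 LOAD_TEST_DURATION=90 \
node tools/benchmarks/runBenchmarks.mjs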

Recommendations

  • Keep NODE_OPTIONS=--max-old-space-size=8192 (or higher) when targeting 300+ concurrent users to avoid heap exhaustion.
  • For 300+ users, switch to Redis-backed storage and deploy at least two websocket instances to spread load.
  • Budget at least 15 Mbps of outbound bandwidth for every 100 concurrently connected users; interaction-heavy sessions (live presenting, rapid drawing) can double that figure.
  • Re-run node tools/benchmarks/runBenchmarks.mjs after feature changes or on target hardware to validate sizing before production rollout.

Troubleshooting

Connection Issues

Real-time Collaboration Not Working

  • Verify JWT secrets match between Nextcloud and websocket server
  • Check that user browsers can access the websocket server URL
  • Ensure reverse proxy correctly handles WebSocket upgrades
  • Check browser console for connection errors
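
One rough way to test upgrade handling from the command line is a manual WebSocket handshake with curl; the hostname and /whiteboard/ prefix are examples matching the proxy configurations above, and a working setup should answer with HTTP/1.1 101 Switching Protocols:

curl -i -N --http1.1 \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  "https://nextcloud.example.com/whiteboard/socket.io/?EIO=4&transport=websocket"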

Known Issues

Legacy Integration App Conflict

If you previously had integration_whiteboard installed, remove any whiteboard entries from config/mimetypealiases.json and run:

occ maintenance:mimetype:update-db
occ maintenance:mimetype:update-js

Misleading Admin Errors

Admin connectivity checks may show false negatives in Docker/proxy environments. These errors don't affect actual functionality since the architecture is client-first. Focus on browser-based connectivity tests instead.

Development

To build the project locally:

npm ci
npm run build

For development with hot reload:

npm run watch

To run the websocket server in development:

npm run server:watch