Lowkey Media Server

A companion server for batch processing, AI tagging, transcription, and media ingestion.

Works on Windows and Linux

Alpha Release


Version 1.0.0

Overview

Lowkey Media Server is a companion application for the Lowkey Media Viewer that manages long-running offline tasks. Built in Go, it runs as an HTTP server on port 8090.

  • Job queue with persistence and real-time status updates
  • Media ingestion from local directories, YouTube, and gallery sites
  • AI-powered auto-tagging using ONNX models
  • LLM vision integration for image descriptions
  • Video transcription with Faster Whisper
  • Browser extensions for Chrome and Firefox
  • Web-based media gallery and search

Getting Started

Installation

Download and run the Media Server executable. On Windows, it runs in the system tray for easy access. On Linux, run the binary from the command line.

Initial Setup

On first launch, you'll be prompted to create an initial user account. This account is used to authenticate with the web interface and API.

Initial Setup Demo Video

Installing Dependencies

The Media Server uses external tools for media processing. On the setup page, you can download and install the required dependencies with a single click. Dependencies are extracted on demand when a task requires them.

Dependencies are downloaded to:

Windows: %ProgramData%\Lowkey Media Server\tmp\
Linux: /var/tmp/lowkeymediaserver/

Available dependencies include:

  • ffmpeg / ffprobe - Media processing and conversion
  • yt-dlp - Video downloading from YouTube and other sites
  • gallery-dl - Image gallery downloading
  • faster-whisper - Video transcription
  • ONNX Runtime - AI model inference for auto-tagging

Dependency Setup Demo Video

Web Interface

Access the web interface at http://localhost:8090. The home page shows the job queue with all running and completed jobs.

Web UI Demo Video

Browser Extension

Install the browser extension for Chrome or Firefox to quickly create jobs from any webpage.

  • Chrome: Load unpacked from chrome-extension/ folder
  • Firefox: Load from firefox-extension/ folder

Features:

  • One-click task creation with current page URL
  • Command selection dropdown
  • Custom arguments support
  • Real-time job status updates

Browser Extension Demo Video

Docker Quick Start

The fastest way to run Lowkey Media Server is with Docker. A single command gets you a working server with ffmpeg and all core dependencies pre-installed.

Run the Container

Pull the image and start the server:

docker run -d --name lowkey-media-server \
  -p 8090:8090 \
  -v lowkey-data:/data \
  ghcr.io/stevecastle/lowkey-media-server:latest

Open http://localhost:8090 in your browser. The server is ready to use.

Local Storage

To browse media from your local filesystem, bind-mount your directories into the container and register them as storage roots with environment variables.

docker run -d --name lowkey-media-server \
  -p 8090:8090 \
  -v lowkey-data:/data \
  -v /path/to/photos:/mnt/photos:ro \
  -v /path/to/videos:/mnt/videos:ro \
  -e LOWKEY_ROOT_1=/mnt/photos:Photos \
  -e LOWKEY_ROOT_2=/mnt/videos:Videos \
  ghcr.io/stevecastle/lowkey-media-server:latest

Each LOWKEY_ROOT_<N> variable registers a storage root. The format is path or path:label, where label is the display name shown in the UI. If the label is omitted, the path is used as the label.

Mount directories as read-only (:ro) if you only want to browse and process media without the server modifying your files.
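The same local-storage setup can also be expressed as a Compose file, which is easier to keep under version control. This is a sketch assuming the image, ports, and variables shown above:

```yaml
services:
  lowkey-media-server:
    image: ghcr.io/stevecastle/lowkey-media-server:latest
    ports:
      - "8090:8090"
    volumes:
      - lowkey-data:/data
      - /path/to/photos:/mnt/photos:ro
      - /path/to/videos:/mnt/videos:ro
    environment:
      LOWKEY_ROOT_1: /mnt/photos:Photos
      LOWKEY_ROOT_2: /mnt/videos:Videos

volumes:
  lowkey-data:
```

Start it with docker compose up -d and stop it with docker compose down; the named volume survives both.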

S3-Compatible Storage

Lowkey Media Server can browse and process media stored in any S3-compatible object storage service — AWS S3, MinIO, Backblaze B2, Cloudflare R2, DigitalOcean Spaces, and others.

Use the LOWKEY_ROOTS environment variable with a JSON array to configure S3 backends. You can mix local and S3 roots in the same configuration. Mark one root with "default": true to designate it as the upload and download destination — uploaded files and ingested media will be stored there. If no root is marked default, the first root is used.

S3 Only

docker run -d --name lowkey-media-server \
  -p 8090:8090 \
  -v lowkey-data:/data \
  -e 'LOWKEY_ROOTS=[{
    "type": "s3",
    "label": "My Media Bucket",
    "endpoint": "https://s3.us-east-1.amazonaws.com",
    "region": "us-east-1",
    "bucket": "my-media",
    "prefix": "photos/",
    "accessKey": "AKIAIOSFODNN7EXAMPLE",
    "secretKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "thumbnailPrefix": "thumbs/",
    "default": true
  }]' \
  ghcr.io/stevecastle/lowkey-media-server:latest

Mixed Local + S3

docker run -d --name lowkey-media-server \
  -p 8090:8090 \
  -v lowkey-data:/data \
  -v ~/photos:/mnt/photos:ro \
  -e 'LOWKEY_ROOTS=[
    {"type": "local", "path": "/mnt/photos", "label": "Local Photos"},
    {
      "type": "s3",
      "label": "Cloud Archive",
      "endpoint": "https://s3.us-west-2.amazonaws.com",
      "region": "us-west-2",
      "bucket": "media-archive",
      "accessKey": "AKIAIOSFODNN7EXAMPLE",
      "secretKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
      "default": true
    }
  ]' \
  ghcr.io/stevecastle/lowkey-media-server:latest

S3 Storage Root Fields

Field            Required  Description
type             Yes       Must be "s3"
label            Yes       Display name in the UI
endpoint         Yes       S3 API endpoint URL
region           Yes       AWS region or equivalent
bucket           Yes       Bucket name
accessKey        Yes       Access key ID
secretKey        Yes       Secret access key
prefix           No        Key prefix to scope browsing (e.g. "photos/")
thumbnailPrefix  No        Separate prefix or bucket path for generated thumbnails
default          No        Set to true to make this root the upload/download destination

Tested S3-Compatible Services

Service              Endpoint Format
AWS S3               https://s3.<region>.amazonaws.com
MinIO                http://<host>:9000
Backblaze B2         https://s3.<region>.backblazeb2.com
Cloudflare R2        https://<account-id>.r2.cloudflarestorage.com
DigitalOcean Spaces  https://<region>.digitaloceanspaces.com
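For a self-hosted MinIO instance, the same LOWKEY_ROOTS shape applies; the host, bucket, and credential values below are placeholders, and MinIO accepts any region string (us-east-1 is its conventional default):

```json
[{
  "type": "s3",
  "label": "MinIO",
  "endpoint": "http://minio.local:9000",
  "region": "us-east-1",
  "bucket": "media",
  "accessKey": "minioadmin",
  "secretKey": "minioadmin",
  "default": true
}]
```

Pass the array as the value of -e 'LOWKEY_ROOTS=...' exactly as in the docker run examples above.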

Environment Variables

All server settings can be configured with environment variables; no config file is needed.

Variable                    Default                            Description
LOWKEY_DB_PATH              /data/db/media.db                  SQLite database path
LOWKEY_DOWNLOAD_PATH        /data/media                        Download directory (deprecated; use a default storage root instead)
LOWKEY_OLLAMA_BASE_URL      http://host.docker.internal:11434  Ollama API endpoint for LLM features
LOWKEY_OLLAMA_MODEL         llama3.2-vision                    Vision model for image descriptions
LOWKEY_JWT_SECRET           (auto-generated)                   JWT signing secret for authentication
LOWKEY_DISCORD_TOKEN        (unset)                            Discord token for media export
LOWKEY_FASTER_WHISPER_PATH  (unset)                            Path to faster-whisper binary
LOWKEY_ROOT_1, _2, ...      (unset)                            Local storage roots (path or path:label)
LOWKEY_DEFAULT_ROOT         1                                  Which root receives uploads/downloads (1-based index or label)
LOWKEY_ROOTS                (unset)                            JSON array of storage roots (set "default":true on one)
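LOWKEY_JWT_SECRET is auto-generated when unset; if a rebuilt container ends up with a fresh secret, existing logins are invalidated, so pinning your own value is the safer choice. One way to generate a suitable secret, assuming openssl is available:

```shell
# Generate a random 64-character hex string to use as a stable JWT secret.
secret=$(openssl rand -hex 32)
echo "LOWKEY_JWT_SECRET=$secret"
```

Pass it to the container with -e LOWKEY_JWT_SECRET="$secret", or keep it in an env file referenced by --env-file.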

If you're running Ollama on the host machine for AI descriptions, the default LOWKEY_OLLAMA_BASE_URL will find it automatically on macOS and Windows. On Linux, add --add-host=host.docker.internal:host-gateway to the docker run command.

Data Persistence

All server state is stored in the /data volume inside the container. Using a named volume (lowkey-data) ensures your data survives container restarts, upgrades, and image rebuilds.

/data/
  db/media.db   SQLite database
  media/        Downloaded and ingested media
  config/       Auto-generated configuration
  packages/     Auto-downloaded tools (yt-dlp, gallery-dl, etc.)

To back up your data, use docker cp or mount a host directory instead of a named volume:

docker run -d --name lowkey-media-server \
  -p 8090:8090 \
  -v /path/to/lowkey-data:/data \
  ghcr.io/stevecastle/lowkey-media-server:latest

Job Queue

The job queue manages all processing tasks with persistence and real-time updates via Server-Sent Events (SSE).
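The same SSE stream the web UI uses can be followed from a terminal. A minimal sketch, assuming each event carries one JSON payload in its data: field (the payload format itself is not documented here):

```shell
# Tail the server's SSE stream and print each event's data field.
# --max-time bounds the demo; drop it to follow the stream indefinitely.
curl -sN --max-time 5 http://localhost:8090/stream | while IFS= read -r line; do
  case "$line" in
    data:*) echo "update: ${line#data: }" ;;  # strip the SSE "data: " prefix
  esac
done
```

Each job state change arrives as its own event, so this loop prints one line per update.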

Creating Jobs

Create jobs through the web interface, browser extension, or API. Each job specifies:

  • Task type (ingest, autotag, metadata, etc.)
  • Input source (URL, file path, or directory)
  • Optional follow-up tasks

Job Creation Demo Video

Monitoring Progress

Jobs display real-time progress with live output streaming. Job states include:

  • Pending - Waiting to start
  • In Progress - Currently running
  • Completed - Finished successfully
  • Cancelled - Stopped by user
  • Error - Failed with error

Job Management

Manage jobs with these actions:

  • Cancel - Stop a running job
  • Copy - Duplicate a job configuration
  • Remove - Delete a job from the queue
  • Clear - Remove all non-running jobs

Media Ingestion

The ingest task scans and adds media to the database from multiple sources with optional follow-up processing.

Local Directories

Scan local directories to add media files to the database. Supports recursive scanning for nested folders.

Local Ingest Demo Video

YouTube Downloads

Download videos from YouTube and other sites using the bundled yt-dlp tool. Videos are automatically added to the database after download.

YouTube Download Demo Video

Gallery Downloads

Download media from gallery sites using gallery-dl. Supports hundreds of sites including:

  • Twitter/X, Instagram, Reddit
  • DeviantArt, ArtStation, Pixiv
  • Tumblr, Flickr, and many more

Gallery Download Demo Video

AI & ML Features

Auto-Tagging (ONNX)

Automatically tag images using ML models (WD Tagger). Configure thresholds for general and character tags.

  • Batch processing of entire directories
  • Configurable confidence thresholds
  • Tags organized by category (Suggested, Character, etc.)

Auto-Tagging Demo Video

LLM Vision Descriptions

Generate image descriptions using Ollama with vision models (llama3.2-vision, etc.). Customize prompts for different use cases.

LLM Description Demo Video

Transcription

Generate transcripts for videos using Faster Whisper. Transcripts are saved as VTT files and can be viewed in the Media Viewer.

Transcription Demo Video

Media Browser

Browsing & Search

Browse your media collection through the web interface at /media. Full-text search across filenames, descriptions, and tags.

Media Browser Demo Video

File Serving

Stream media files directly from the server with caching headers for performance. Access files at /media/file?path=...
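Paths containing spaces or non-ASCII characters must be URL-encoded before they go into the path query parameter. A sketch that builds a valid file URL, using python3 for the encoding (the example path is illustrative):

```shell
# Build a /media/file URL for a path with a space in it.
path='/mnt/photos/summer 2024/beach.jpg'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$path")
echo "http://localhost:8090/media/file?path=$encoded"
```

urllib.parse.quote leaves slashes intact by default, so only the unsafe characters (here the space, as %20) are escaped.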

Swipe App

The Swipe App is a progressive web app (PWA) that provides a TikTok-like experience for browsing your media database. Install it on your smartphone to swipe through your collection with a mobile-optimized interface.

  • Installable PWA - Add to your home screen for a native app experience
  • Swipe Navigation - Swipe up/down to browse through media
  • Mobile Optimized - Designed for touch and vertical video viewing
  • Tag & Filter - Browse by tags and filter your collection

Access the Swipe App at http://[server-ip]:8090/swipe on your mobile device.

Swipe App Demo Video

Available Tasks

Task ID       Name               Description
ingest        Ingest Media       Scan directories and add files to database
metadata      Generate Metadata  Generate descriptions, transcripts, hashes
autotag       Auto Tag           ML-based automatic image tagging
yt-dlp        yt-dlp             Download videos from YouTube, etc.
gallery-dl    gallery-dl         Download media from gallery sites
ffmpeg        ffmpeg             Process and convert media files
move          Move Media         Move files and update database references
remove        Remove Media       Remove entries from database
cleanup       Cleanup            Remove orphaned database entries
lora-dataset  LoRA Dataset       Generate datasets for ML training

Configuration

Configure the server through the web interface at /config or edit the config file directly:

Windows: %APPDATA%\Lowkey Media Viewer\config.json
Linux: ~/.config/lowkeymediaviewer/config.json

Configuration Options

  • dbPath - Path to the SQLite database
  • ollamaBaseUrl - Ollama server URL (default: http://localhost:11434)
  • ollamaModel - Vision model for descriptions
  • fasterWhisperPath - Path to Faster Whisper executable
  • onnxTagger - ONNX model configuration for auto-tagging
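Putting those options together, a config.json might look like the following. The paths and model names are illustrative, and the inner shape of onnxTagger (model path plus the two confidence thresholds mentioned under Auto-Tagging) is an assumption, not a documented schema:

```json
{
  "dbPath": "/data/db/media.db",
  "ollamaBaseUrl": "http://localhost:11434",
  "ollamaModel": "llama3.2-vision",
  "fasterWhisperPath": "/usr/local/bin/faster-whisper",
  "onnxTagger": {
    "modelPath": "/data/models/wd-tagger.onnx",
    "generalThreshold": 0.35,
    "characterThreshold": 0.85
  }
}
```

Editing through the /config page is the safer route, since it validates and persists the same fields for you.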

API Reference

Job Management

Method  Endpoint          Description
POST    /create           Create a new job
GET     /job/{id}         View job details
POST    /job/{id}/cancel  Cancel a running job
POST    /job/{id}/copy    Copy job configuration
POST    /job/{id}/remove  Remove a job
POST    /jobs/clear       Clear non-running jobs
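Creating a job from a script might look like the following. The request body field names (task, input) are assumptions based on the job fields described above, not a documented schema, and authentication is omitted for brevity:

```shell
# Hypothetical job-creation request; field names are assumed, not documented.
payload='{"task": "yt-dlp", "input": "https://www.youtube.com/watch?v=VIDEO_ID"}'
curl -s -X POST http://localhost:8090/create \
  -H "Content-Type: application/json" \
  -d "$payload"
```

Progress for the created job then shows up on the /stream SSE endpoint and in the web UI's job queue.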

Media Operations

Method  Endpoint     Description
GET     /media       Media gallery page
GET     /media/api   Media JSON API with search
GET     /media/file  Serve media files

System

Method  Endpoint  Description
GET     /health   Server health and stats
GET     /stream   SSE stream for real-time updates
GET     /config   Configuration page
POST    /config   Update configuration

Contact & Support

The Media Server is in alpha. Report issues and request features on GitHub or via email.