# presenton

**Repository Path**: ldyS/presenton

## Basic Information

- **Project Name**: presenton
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2026-03-23
- **Last Updated**: 2026-03-23

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README
Quickstart · Docs · Youtube · Discord
# Open-Source AI Presentation Generator and API (Gamma, Beautiful AI, Decktopus Alternative)

### Why Presenton

No SaaS lock-in · No forced subscriptions · Full control over models and data

What makes Presenton different?

- Fully **self-hosted**
- Works with OpenAI, Gemini, Anthropic, Ollama, or custom models
- API deployable
- Fully open-source (Apache 2.0)
- Use your **existing PPTX files as templates** _(coming soon)_
| Platform | Architecture | Package | Download |
|---|---|---|---|
| macOS | Apple Silicon / Intel | .dmg | Download |
| Windows | x64 | .exe | Download |
| Linux | x64 | .deb | Download |
You can run Presenton in two ways: Docker for a one-command setup without installing a local dev stack, or the Electron desktop app for a native app experience (ideal for development or offline use).
**Option 1: Electron (Desktop App)**

Run Presenton as a native desktop application. LLM and image providers (API keys, etc.) can be configured in the app. The same environment variables used for Docker apply when running the bundled backend.
Prerequisites: Node.js (LTS), npm, Python 3.11, and `uv` (for the Electron FastAPI backend in `electron/servers/fastapi`).

- Set Up the Environment

```bash
cd electron
npm run setup:env
```

This installs Node dependencies, runs `uv sync` in the FastAPI server, and installs Next.js dependencies.
- Run in Development

```bash
npm run dev
```

This compiles TypeScript and starts Electron. The backend and UI run locally inside the desktop window.
- Build Distributable (Optional)

To create installers for Windows, macOS, or Linux:

```bash
npm run build:all
npm run dist
```

Output files are written to `electron/dist` (or as configured in your electron-builder settings).
**Option 2: Docker**

Linux/macOS:

```bash
docker run -it --name presenton -p 5000:80 -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```

Windows (PowerShell):

```bash
docker run -it --name presenton -p 5000:80 -v "${PWD}\app_data:/app_data" ghcr.io/presenton/presenton:latest
```
- Open Presenton
Open http://localhost:5000 in the browser of your choice to use Presenton.
### Deployment Configurations

These settings apply to both Docker and the Electron app's backend. If you want to provide your API keys directly as environment variables and keep them hidden, set the following:

- `CAN_CHANGE_KEYS=[true/false]`: Set this to **false** to keep API keys hidden and unmodifiable.
- `LLM=[openai/google/anthropic/ollama/custom]`: Select the **LLM** of your choice.
- `OPENAI_API_KEY=[Your OpenAI API Key]`: Provide this if **LLM** is set to **openai**.
- `OPENAI_MODEL=[OpenAI Model ID]`: Provide this if **LLM** is set to **openai** (default: `gpt-4.1`).
- `GOOGLE_API_KEY=[Your Google API Key]`: Provide this if **LLM** is set to **google**.
- `GOOGLE_MODEL=[Google Model ID]`: Provide this if **LLM** is set to **google** (default: `models/gemini-2.0-flash`).
- `ANTHROPIC_API_KEY=[Your Anthropic API Key]`: Provide this if **LLM** is set to **anthropic**.
- `ANTHROPIC_MODEL=[Anthropic Model ID]`: Provide this if **LLM** is set to **anthropic** (default: `claude-3-5-sonnet-20241022`).
- `OLLAMA_URL=[Custom Ollama URL]`: Provide this if you want to use a custom Ollama URL and **LLM** is set to **ollama**.
- `OLLAMA_MODEL=[Ollama Model ID]`: Provide this if **LLM** is set to **ollama**.
- `CUSTOM_LLM_URL=[Custom OpenAI-Compatible URL]`: Provide this if **LLM** is set to **custom**.
- `CUSTOM_LLM_API_KEY=[Custom OpenAI-Compatible API Key]`: Provide this if **LLM** is set to **custom**.
- `CUSTOM_MODEL=[Custom Model ID]`: Provide this if **LLM** is set to **custom**.
- `TOOL_CALLS=[true/false]`: If **true**, the custom LLM uses tool calls instead of JSON Schema for structured output.
- `DISABLE_THINKING=[true/false]`: If **true**, thinking is disabled on the custom LLM.
- `WEB_GROUNDING=[true/false]`: If **true**, the LLM can search the web for better results (OpenAI, Google, and Anthropic).
You can also set the following environment variables to customize the image generation provider and API keys:

- `DISABLE_IMAGE_GENERATION`: If **true**, image generation is disabled for slides.
- `IMAGE_PROVIDER=[dall-e-3/gpt-image-1.5/gemini_flash/nanobanana_pro/pexels/pixabay/comfyui]`: Select the image provider of your choice. Required if **DISABLE_IMAGE_GENERATION** is not set to **true**.
- `OPENAI_API_KEY=[Your OpenAI API Key]`: Required if using **dall-e-3** or **gpt-image-1.5** as the image provider.
- `DALL_E_3_QUALITY=[standard/hd]`: Optional quality setting for **dall-e-3** (default: `standard`).
- `GPT_IMAGE_1_5_QUALITY=[low/medium/high]`: Optional quality setting for **gpt-image-1.5** (default: `medium`).
- `GOOGLE_API_KEY=[Your Google API Key]`: Required if using **gemini_flash** or **nanobanana_pro** as the image provider.
- `PEXELS_API_KEY=[Your Pexels API Key]`: Required if using **pexels** as the image provider.
- `PIXABAY_API_KEY=[Your Pixabay API Key]`: Required if using **pixabay** as the image provider.
- `COMFYUI_URL=[Your ComfyUI server URL]` and `COMFYUI_WORKFLOW=[Workflow JSON]`: Required if using **comfyui** to route prompts to a self-hosted ComfyUI workflow.

You can disable anonymous telemetry using the following environment variable:

- `DISABLE_ANONYMOUS_TELEMETRY=[true/false]`: Set this to **true** to disable anonymous telemetry.

> Note: You can freely choose both the LLM (text generation) and the image provider. Supported image providers: **dall-e-3**, **gpt-image-1.5** (OpenAI), **gemini_flash**, **nanobanana_pro** (Google), **pexels**, **pixabay**, and **comfyui** (self-hosted).

Note: You can replace `5000` with any other port number of your choice to run Presenton on a different port.
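The image-provider requirements above follow the same pattern, except that **comfyui** needs two variables. A minimal sketch (again a hypothetical helper, not Presenton's actual validation code):

```python
# Hypothetical helper: environment variables each image provider requires,
# per the list above. Empty result means the configuration is complete.
IMAGE_PROVIDER_KEYS = {
    "dall-e-3": ["OPENAI_API_KEY"],
    "gpt-image-1.5": ["OPENAI_API_KEY"],
    "gemini_flash": ["GOOGLE_API_KEY"],
    "nanobanana_pro": ["GOOGLE_API_KEY"],
    "pexels": ["PEXELS_API_KEY"],
    "pixabay": ["PIXABAY_API_KEY"],
    "comfyui": ["COMFYUI_URL", "COMFYUI_WORKFLOW"],
}

def missing_image_keys(env: dict) -> list[str]:
    """List the variables still unset for the chosen image provider."""
    if env.get("DISABLE_IMAGE_GENERATION") == "true":
        return []  # image generation off: no provider keys needed
    provider = env.get("IMAGE_PROVIDER", "")
    return [k for k in IMAGE_PROVIDER_KEYS.get(provider, []) if not env.get(k)]
```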
- Using OpenAI

```bash
docker run -it --name presenton -p 5000:80 -e LLM="openai" -e OPENAI_API_KEY="******" -e IMAGE_PROVIDER="dall-e-3" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```
- Using Google

```bash
docker run -it --name presenton -p 5000:80 -e LLM="google" -e GOOGLE_API_KEY="******" -e IMAGE_PROVIDER="gemini_flash" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```
- Using Ollama

```bash
docker run -it --name presenton -p 5000:80 -e LLM="ollama" -e OLLAMA_MODEL="llama3.2:3b" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="*******" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```
- Using Anthropic

```bash
docker run -it --name presenton -p 5000:80 -e LLM="anthropic" -e ANTHROPIC_API_KEY="******" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="******" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```
- Using OpenAI-Compatible API

```bash
docker run -it -p 5000:80 -e CAN_CHANGE_KEYS="false" -e LLM="custom" -e CUSTOM_LLM_URL="http://*****" -e CUSTOM_LLM_API_KEY="*****" -e CUSTOM_MODEL="llama3.2:3b" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="********" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```
- Running Presenton with GPU Support
To use GPU acceleration with Ollama models, you need to install and configure the NVIDIA Container Toolkit. This allows Docker containers to access your NVIDIA GPU.
Once the NVIDIA Container Toolkit is installed and configured, you can run Presenton with GPU support by adding the `--gpus=all` flag:
```bash
docker run -it --name presenton --gpus=all -p 5000:80 -e LLM="ollama" -e OLLAMA_MODEL="llama3.2:3b" -e IMAGE_PROVIDER="pexels" -e PEXELS_API_KEY="*******" -e CAN_CHANGE_KEYS="false" -v "./app_data:/app_data" ghcr.io/presenton/presenton:latest
```
### Generate Presentation via API
**Generate Presentation**
- **Endpoint:** `/api/v1/ppt/presentation/generate`
- **Method:** `POST`
- **Content-Type:** `application/json`
| Parameter | Type | Required | Description |
|---|---|---|---|
| `content` | string | Yes | Main content used to generate the presentation. |
| `slides_markdown` | string[] \| null | No | Provide custom slide markdown instead of auto-generation. |
| `instructions` | string \| null | No | Additional generation instructions. |
| `tone` | string | No | Text tone (default: `default`). Options: `default`, `casual`, `professional`, `funny`, `educational`, `sales_pitch` |
| `verbosity` | string | No | Content density (default: `standard`). Options: `concise`, `standard`, `text-heavy` |
| `web_search` | boolean | No | Enable web search grounding (default: `false`). |
| `n_slides` | integer | No | Number of slides to generate (default: 8). |
| `language` | string | No | Presentation language (default: `English`). |
| `template` | string | No | Template name (default: `general`). |
| `include_table_of_contents` | boolean | No | Include a table of contents slide (default: `false`). |
| `include_title_slide` | boolean | No | Include a title slide (default: `true`). |
| `files` | string[] \| null | No | Files to use in generation. Upload first via `/api/v1/ppt/files/upload`. |
| `export_as` | string | No | Export format (default: `pptx`). Options: `pptx`, `pdf` |
**Response**

```json
{
  "presentation_id": "string",
  "path": "string",
  "edit_path": "string"
}
```
**Example Request**

```bash
curl -X POST http://localhost:5000/api/v1/ppt/presentation/generate \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Introduction to Machine Learning",
    "n_slides": 5,
    "language": "English",
    "template": "general",
    "export_as": "pptx"
  }'
```
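The same request can be made from Python with only the standard library. This is a sketch, not an official client: the `build_payload` helper simply fills in the documented defaults (`n_slides=8`, `language="English"`, `template="general"`, `export_as="pptx"`), and `generate` assumes a Presenton instance is running at `base_url`.

```python
import json
import urllib.request

def build_payload(content: str, n_slides: int = 8, language: str = "English",
                  template: str = "general", export_as: str = "pptx") -> dict:
    """Assemble a request body for /api/v1/ppt/presentation/generate
    using the documented defaults."""
    return {
        "content": content,
        "n_slides": n_slides,
        "language": language,
        "template": template,
        "export_as": export_as,
    }

def generate(base_url: str, payload: dict) -> dict:
    """POST the payload and return the parsed JSON response.
    Requires a running Presenton instance at base_url."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/ppt/presentation/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Usage: `generate("http://localhost:5000", build_payload("Introduction to Machine Learning", n_slides=5))`.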
**Example Response**

```json
{
  "presentation_id": "d3000f96-096c-4768-b67b-e99aed029b57",
  "path": "/app_data/d3000f96-096c-4768-b67b-e99aed029b57/Introduction_to_Machine_Learning.pptx",
  "edit_path": "/presentation?id=d3000f96-096c-4768-b67b-e99aed029b57"
}
```
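Since `path` and `edit_path` in the response are server-relative, they need the instance's root URL prepended before use. A short sketch with the standard library (the base URL here assumes the default port from the Docker examples):

```python
from urllib.parse import urljoin

# Example response fields from the generate endpoint (see above).
response = {
    "path": "/app_data/d3000f96-096c-4768-b67b-e99aed029b57/Introduction_to_Machine_Learning.pptx",
    "edit_path": "/presentation?id=d3000f96-096c-4768-b67b-e99aed029b57",
}

base = "http://localhost:5000"  # your Presenton root URL
download_url = urljoin(base, response["path"])
edit_url = urljoin(base, response["edit_path"])
```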
Note: Prepend your server's root URL to `path` and `edit_path` to construct valid links.

**Documentation & Tutorials**

### Roadmap

- [x] Support for custom HTML templates by developers
- [x] Support for accessing custom templates over API
- [x] Implement MCP server
- [ ] Ability for users to change system prompt
- [x] Support external SQL database

Track the public roadmap on GitHub Projects: [https://github.com/orgs/presenton/projects/2](https://github.com/orgs/presenton/projects/2)