# riscv-llm

**Repository Path**: openkylin/riscv-llm

## Basic Information

- **Project Name**: riscv-llm
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 1
- **Forks**: 0
- **Created**: 2024-05-08
- **Last Updated**: 2025-12-31

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# `Running LLM in RISC-V`

## Directory Structure

```shell
.
├── dist      # built PyTorch wheels
├── docs      # documentation
├── gemma.app # CLI chat app with Gemma
├── patches   # patches for PyTorch and InferLLM
├── model     # model weight files; the Gemma model is not included because it exceeds the large-file storage limit
├── tinyllm   # CLI storyteller app with TinyLLM
└── web.app   # web app with Gemma and TinyLLM
```

## Introduction

See each application's own `README.md` for usage instructions.

## TODO

- [ ] Performance optimization patches for PyTorch
- [ ] web.app user interface optimization
- [ ] Training and fine-tuning a smaller LLM for RISC-V

## References and Acknowledgements

Thanks to the following open-source projects and their authors: Google, the PyTorch team, the MegEngine team, the openKylin team, and the open-source authors tiangolo and hmsgit, among others.

- https://gitee.com/openkylin
- https://www.kaggle.com/models/google/gemma/frameworks/pyTorch/
- https://github.com/google/gemma.cpp
- https://github.com/pytorch/pytorch
- https://github.com/openkylin
- https://github.com/cindysridykhan/instruct_storyteller_tinyllama2
- https://github.com/MegEngine/InferLLM/
- https://github.com/tiangolo/fastapi
- https://github.com/hmsgit/fastapi-streaming-response
- https://www.openmp.org/