Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature set.
A vLLM out-of-tree platform plugin that enables running vLLM on Ascend NPUs (via torch_npu).
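As a rough sketch of what this looks like in practice, assuming the platform plugin is installed alongside vLLM and torch_npu (the plugin is discovered automatically by vLLM, and the model name below is only a placeholder), standard vLLM code runs unchanged:

```python
# Minimal sketch: using vLLM once an out-of-tree NPU platform plugin is installed.
# Assumption: the plugin and torch_npu are installed; vLLM picks up platform
# plugins at startup, so no NPU-specific code is needed here.
from vllm import LLM, SamplingParams

prompts = ["Explain what an NPU is in one sentence."]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# The placeholder model below runs on the NPU backend supplied by the plugin;
# the vLLM Python API itself stays the same as on GPU.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```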
My own code notes from working through all of instructor 刘宇波 (Liu Yubo)'s algorithm video courses on 慕课网 (imooc), plus material from studying other algorithm books and courses.