
GitHub - intel/ipex-llm: Accelerate local LLM inference and …
70+ models have been optimized/verified on ipex-llm (e.g., Llama, Phi, Mistral, Mixtral, DeepSeek, Qwen, ChatGLM, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-the-art LLM …
ipex-llm/docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md at …
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with …
Run llama.cpp with IPEX-LLM on Intel GPU - GitHub
ipex-llm/README.zh-CN.md at main · intel/ipex-llm - GitHub
ipex-llm/docs/mddocs/Quickstart/ollama_quickstart.md at main
Install IPEX-LLM on Windows with Intel GPU - GitHub
ipex-llm-tutorial/Chinese_Version/README.md at main - GitHub
ipex-llm is a lightweight large language model acceleration library built for Intel XPU (including both CPU and GPU). This repository contains a number of tutorials on ipex-llm to help you understand what ipex-llm is and how to use ipex-llm to develop applications based on large language …
GitHub - marcin-kruszynski/ipex-ollama-intel-igpu: Accelerate …
70+ models have been optimized/verified on ipex-llm (e.g., Llama, Phi, Mistral, Mixtral, Whisper, Qwen, MiniCPM, Qwen-VL, MiniCPM-V and more), with state-of-the-art LLM optimizations, XPU …
GitHub - intel/ipex-llm-tutorial: Accelerate LLM with low-bit (FP4 ...
IPEX-LLM is a low-bit LLM library on Intel XPU (Xeon/Core/Flex/Arc/PVC). This repository contains tutorials to help you understand what IPEX-LLM is and how to use IPEX-LLM to build …
ipex-llm/docs/mddocs/Overview/install_gpu.md at main - GitHub
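Taken together, these results describe one workflow: install ipex-llm, load a supported model with low-bit (e.g., FP4/INT4) optimization, and run inference on an Intel XPU. Below is a minimal sketch of that workflow, assuming the transformers-style Python API shown in the ipex-llm quickstarts; the model path and prompt are placeholder assumptions, and details may differ across ipex-llm versions.

```python
# Minimal sketch: low-bit LLM inference with ipex-llm on an Intel GPU.
# Assumes an ipex-llm XPU install and working GPU drivers; the model path
# and prompt are placeholders, not taken from the pages listed above.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM  # drop-in replacement

model_path = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical example model

# load_in_4bit=True applies ipex-llm's low-bit (INT4) weight quantization.
model = AutoModelForCausalLM.from_pretrained(
    model_path, load_in_4bit=True, trust_remote_code=True
)
model = model.to("xpu")  # run on the Intel GPU ("xpu" device)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    input_ids = tokenizer.encode("What is IPEX-LLM?", return_tensors="pt").to("xpu")
    output = model.generate(input_ids, max_new_tokens=64)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same API is documented for CPU-only use as well; in that case the `.to("xpu")` calls are simply dropped.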