Pinned Repositories
- Pruning-LLMs (Public): A framework to prune LLMs to any size and any configuration.
- dwzq-com-cn/DongwuLLM (Public): The codebase for pre-training, compressing, extending, and distilling LLMs with Megatron-LM.
- OpenNLG/OpenBA-v2 (Public): OpenBA-V2, a 3B large language model (LLM) with a T5 architecture, built by pruning and continued pretraining from OpenBA-15B.