DarkAssistant


Quick Experience

App

Features

Let's ask the assistant first🤔:


This project provides the following features:

1. Rapid Conversion of LLM to Agent🤖

Effortlessly transform your Language Model (LLM) into an Agent.

2. LLM Proficiency Testing Tool🛠️

Explore and evaluate the capabilities of your Language Model through the integrated testing tool.

3. Dark Assistant WebUI

Experience the convenience of the Open Assistant Web User Interface (WebUI).

Watch the demo (Vicuna v1.5)

  • YouTube: Watch the video
  • BiliBili: Watch the video

Everyone has their own AI assistant


Supported models

Models on Hugging Face:

  • vicuna
  • airoboros
  • koala
  • alpaca
  • chatglm
  • chatglm2
  • dolly_v2
  • oasst_pythia
  • oasst_llama
  • tulu
  • stablelm
  • baize
  • rwkv
  • openbuddy
  • phoenix
  • claude
  • mpt-7b-chat
  • mpt-30b-chat
  • mpt-30b-instruct
  • bard
  • billa
  • redpajama-incite
  • h2ogpt
  • Robin
  • snoozy
  • manticore
  • falcon
  • polyglot_changgpt
  • tigerbot
  • xgen
  • internlm-chat
  • starchat
  • baichuan-chat
  • llama-2
  • cutegpt
  • open-orca
  • qwen-7b-chat
  • aquila-chat
  • ...

How to use

1. Installation

git clone https://github.com/Qiyuan-Ge/DarkAssistant.git
cd DarkAssistant
pip install -r requirements.txt

2. Starting the server

First, launch the controller:

python3 -m fastchat.serve.controller

Then, launch the model worker(s):

python3 -m fastchat.serve.multi_model_worker \
    --model-path model_math \
    --model-names "gpt-3.5-turbo,text-davinci-003" \
    --model-path embedding_model_math \
    --model-names "text-embedding-ada-002"

Finally, launch the RESTful API server:

python3 -m fastchat.serve.openai_api_server --host 0.0.0.0 --port 6006

You should see terminal output like:

INFO:     Started server process [1301]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:6006 (Press CTRL+C to quit)

See more details in https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md
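Once the API server is up, any OpenAI-compatible client can talk to it. Below is a minimal sketch using only the Python standard library, assuming the server from the steps above is listening on http://0.0.0.0:6006/v1 and a worker is serving a model registered under the alias gpt-3.5-turbo (as in the multi_model_worker command above); the helper names build_payload and chat are illustrative, not part of this project.

```python
import json
import urllib.request

# Hypothetical endpoint, matching the server started above.
API_BASE = "http://0.0.0.0:6006/v1"

def build_payload(prompt, model="gpt-3.5-turbo"):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, model="gpt-3.5-turbo"):
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the server speaks the OpenAI protocol, the official `openai` client library can be pointed at the same base URL instead, if you prefer.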

3. Starting the web UI

streamlit run main.py

Then replace the API Base with your own API base (in this case, http://0.0.0.0:6006/v1).
