hugging-llm

HuggingLLM, Hugging Future.


HuggingLLM

ChatGPT has set off a new wave of AI enthusiasm: AI now touches nearly every field, and many people want to understand it and put it to work. This project, HuggingLLM, is an introductory tutorial written for that audience.

**Goal**: introduce how ChatGPT works and how to use it, lowering the barrier so that people without an NLP or algorithms background can use LLMs to create value.

**Audience**: anyone interested in ChatGPT; no NLP background is required, only basic programming skills and access to the ChatGPT API.


    • How ChatGPT works, covered at an introductory level
    • Related reinforcement-learning algorithms such as PPO, NLPO, and ILQL, mentioned only in passing
    • How to use ChatGPT and its API in practice

No prior NLP knowledge is assumed.


  • Using the ChatGPT API requires an API key, which you will need to apply for yourself.
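All of the chat-style APIs used in this project (ChatGPT, and the GLM and Qwen alternatives below) accept essentially the same request shape: a model name plus a list of role-tagged messages. A minimal sketch of that payload, using illustrative values rather than any one vendor's exact schema:

```python
import json

# A chat-completion request: a model name plus role-tagged messages.
# The "system" message sets overall behaviour; "user" messages carry questions.
payload = {
    "model": "glm-4",   # illustrative model name
    "stream": True,     # ask the server to stream the reply back in chunks
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Please introduce Datawhale."},
    ],
}

# Serialize to the JSON body that would be sent over HTTP.
body = json.dumps(payload, ensure_ascii=False, indent=2)
print(body)
```

Each vendor SDK below builds this structure for you; only the client object and parameter names differ.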

Alternative Model APIs

GLM

  1. Install the GLM SDK:
pip install zhipuai
  2. Call the GLM API:
# GLM chat-completion example
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="")  # fill in your own API key

messages = [{"role": "system", "content": ""},
            {"role": "user", "content": "Please introduce Datawhale."},]

response = client.chat.completions.create(
    model="glm-4",  # model to call
    messages=messages,  # conversation messages
    stream=True,  # stream the reply back chunk by chunk
)

full_content = ''  # accumulates the streamed reply
for chunk in response:
    full_content += chunk.choices[0].delta.content
print('Response:\n' + full_content)
Sample output:

(A multi-paragraph reply introducing Datawhale, an open-source community focused on data science and AI.)
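The streaming loop above works because each chunk carries only the newly generated text (the `delta`), which the client concatenates. A stdlib-only sketch of that accumulation pattern, with simple stand-in classes instead of the real zhipuai chunk objects:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Delta:
    content: str  # the newly generated piece of text in this chunk


@dataclass
class Choice:
    delta: Delta


@dataclass
class Chunk:
    choices: List[Choice]


def fake_stream():
    # Stand-in for the SDK's streamed response: each chunk holds only a delta.
    for piece in ["Datawhale ", "is an open-source ", "community."]:
        yield Chunk(choices=[Choice(delta=Delta(content=piece))])


full_content = ""
for chunk in fake_stream():
    full_content += chunk.choices[0].delta.content

print(full_content)
```

The real SDK loop is identical in shape; only `fake_stream()` is replaced by the server's response iterator.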

Qwen

  1. Install the Qwen SDK:
pip install dashscope
  2. Call the Qwen API:
# Qwen chat-completion example
from http import HTTPStatus
import dashscope

DASHSCOPE_API_KEY = ""  # fill in your own API key

messages = [{'role': 'system', 'content': 'You are a helpful assistant.'},
            {'role': 'user', 'content': 'Please introduce Datawhale.'}]

responses = dashscope.Generation.call(
    dashscope.Generation.Models.qwen_max,  # model to call
    api_key=DASHSCOPE_API_KEY,
    messages=messages,
    result_format='message',  # return the result in message format
    stream=True,  # stream the reply back chunk by chunk
    incremental_output=True  # each chunk contains only the newly generated text
)

full_content = ''  # accumulates the streamed reply
for response in responses:
    if response.status_code == HTTPStatus.OK:
        full_content += response.output.choices[0]['message']['content']
        # print(response)
    else:
        print('Request id: %s, Status code: %s, error code: %s, error message: %s' % (
            response.request_id, response.status_code,
            response.code, response.message
        ))
print('Response:\n' + full_content)

Sample output:

(A multi-paragraph reply introducing Datawhale, an open-source community focused on data science and AI, and its open-source projects on GitHub.)
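Note the `incremental_output=True` flag in the call above: each streamed response then contains only the newly generated text, so the loop concatenates the pieces. Without it, each response carries the full text generated so far, and only the latest response is needed. A stdlib sketch contrasting the two accumulation strategies, using simulated chunk lists rather than real DashScope responses:

```python
# Simulated streamed outputs for one and the same reply.
incremental_chunks = ["Datawhale ", "is an open-source ", "community."]
cumulative_chunks = [
    "Datawhale ",
    "Datawhale is an open-source ",
    "Datawhale is an open-source community.",
]

# incremental_output=True: every chunk is a delta, so concatenate them all.
full_incremental = ""
for piece in incremental_chunks:
    full_incremental += piece

# incremental_output=False: every chunk is the full text so far; keep the latest.
full_cumulative = ""
for piece in cumulative_chunks:
    full_cumulative = piece

# Both strategies reconstruct the same final reply.
assert full_incremental == full_cumulative
print(full_incremental)
```

Mixing the two up is a common bug: concatenating cumulative chunks duplicates text, while keeping only the last delta drops most of the reply.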

The material comes in two forms: documents under the docs directory and runnable Jupyter Notebook files under the content directory; it is recommended to read docs while running the notebooks in content.

  • DataWhale


Bilibili: https://b23.tv/hdnXn1L

Online course: https://aiplusx.momodel.cn/classroom/class/658d3ecd891ad518e0274bce?activeKey=intro



Acknowledgements

  1. Thanks to @Sm1les and @LSGOMYP for their help and support of this project.

LICENSE

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.