【LLM: a minimal Llama usage example】

Hugging Face: https://huggingface.co/meta-llama

from transformers import AutoTokenizer, LlamaForCausalLM

PATH_TO_CONVERTED_WEIGHTS = ''
PATH_TO_CONVERTED_TOKENIZER = ''  # usually the same path as the model

model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)

prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate (greedy decoding by default, capped at 30 tokens total)
generate_ids = model.generate(inputs.input_ids, max_length=30)

# Decode the generated ids back to text and print the first sequence
output = tokenizer.batch_decode(
    generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(output)

> Hey, are you conscious? Can you talk to me?
> I'm not conscious, but I can talk to you.
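With no sampling options set, `model.generate` performs greedy decoding: at each step it takes the model's next-token scores, appends the argmax token, and stops at `max_length` or an end-of-sequence token. The toy sketch below illustrates that loop; the scoring function is a hypothetical stand-in for the real model, not part of the transformers API.

```python
# Toy greedy-decoding loop illustrating what model.generate(...) does
# by default. `toy_logits` is a made-up deterministic "model".

EOS_ID = 0  # hypothetical end-of-sequence token id


def toy_logits(token_ids):
    """Stand-in for a language model: score the next token given context."""
    vocab_size = 5
    scores = [0.0] * vocab_size
    last = token_ids[-1]
    # Deterministic rule: prefer last+1, then EOS after token 4.
    nxt = EOS_ID if last == 4 else last + 1
    scores[nxt] = 1.0
    return scores


def greedy_generate(input_ids, max_length=10):
    ids = list(input_ids)
    while len(ids) < max_length:
        scores = toy_logits(ids)
        next_id = max(range(len(scores)), key=scores.__getitem__)  # argmax
        ids.append(next_id)
        if next_id == EOS_ID:  # stop early on end-of-sequence
            break
    return ids


print(greedy_generate([1, 2]))  # → [1, 2, 3, 4, 0]
```

Real generation works the same way, except the scores come from a forward pass of the network; passing `do_sample=True` (with e.g. `temperature`) to `generate` replaces the argmax with sampling.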
