ollama-benchmark

LLM Benchmark for Throughput via Ollama (Local LLMs)

MIT License

Downloads: 1.4K
Stars: 24
Committers: 4


ollama-benchmark - v0.3.16 (Latest Release)

Published by chuangtc 6 months ago

Add a default value ("unknown") when grabbing the CPU, GPU, and os_version on Linux
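A minimal sketch of the fallback behavior described here, assuming hypothetical helper names (the project's own functions may differ): when detection fails, the value "unknown" is reported instead of raising an error.

```python
import platform

UNKNOWN = "unknown"  # fallback value reported when detection fails

def get_os_version() -> str:
    """Return a Linux OS description, or "unknown" if detection fails."""
    try:
        # Available on Python 3.10+; raises OSError if os-release is missing.
        return platform.freedesktop_os_release().get("PRETTY_NAME", UNKNOWN)
    except (OSError, AttributeError):
        return UNKNOWN

def get_cpu_model() -> str:
    """Read the CPU model from /proc/cpuinfo, or return "unknown"."""
    try:
        with open("/proc/cpuinfo", encoding="utf-8") as f:
            for line in f:
                if line.lower().startswith("model name"):
                    return line.split(":", 1)[1].strip()
    except OSError:
        pass
    return UNKNOWN
```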

ollama-benchmark - v0.3.15

Published by chuangtc 6 months ago

Fix Debian Linux not using the lshw command to grab CPU info
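A hedged sketch of grabbing CPU info through lshw; the exact flags and JSON handling shown are assumptions, not necessarily what ollama-benchmark uses.

```python
import json
import shutil
import subprocess

def cpu_info_via_lshw() -> str:
    """Query the CPU product name with `lshw -class cpu -json`."""
    if shutil.which("lshw") is None:  # lshw is not always installed on Debian
        return "unknown"
    out = subprocess.run(
        ["lshw", "-class", "cpu", "-json"],
        capture_output=True, text=True, check=False,
    )
    try:
        data = json.loads(out.stdout)
        entry = data[0] if isinstance(data, list) else data  # list or single object
        return entry.get("product", "unknown")
    except (json.JSONDecodeError, IndexError):
        return "unknown"
```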

ollama-benchmark - v0.3.14

Published by chuangtc 7 months ago

Fix stray line breaks when grabbing GPU info on Windows machines
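The underlying problem is typically that Windows hardware queries return output ending in "\r\n" (sometimes with embedded blank lines), which then breaks single-line reports. A hedged sketch of the cleanup, using a PowerShell CIM query as an assumed detection method:

```python
import subprocess

def gpu_name_windows() -> str:
    """Query the GPU name via PowerShell and collapse stray line breaks."""
    out = subprocess.run(
        ["powershell", "-Command", "(Get-CimInstance Win32_VideoController).Name"],
        capture_output=True, text=True, check=False,
    )
    # Raw output usually ends with "\r\n"; normalize all whitespace so the
    # value stays on one line in downstream reports.
    return " ".join(out.stdout.split()) or "unknown"
```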

ollama-benchmark - v0.3.13

Published by chuangtc 7 months ago

Fix grabbing Nvidia GPU info on Linux
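A hedged sketch of one common way to read Nvidia GPU info on Linux; whether ollama-benchmark uses nvidia-smi or another source is an assumption here.

```python
import shutil
import subprocess

def nvidia_gpu_name() -> str:
    """Return the GPU name reported by nvidia-smi, or "unknown"."""
    if shutil.which("nvidia-smi") is None:
        return "unknown"
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True, check=False,
    )
    return out.stdout.strip() or "unknown"
```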

ollama-benchmark - v0.3.12

Published by chuangtc 7 months ago

Add the --ollamabin flag to explicitly give the path to a developer build of ollama when building your own ollama
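A minimal sketch of how such a flag can be wired up with argparse; the default value and the rest of the CLI here are assumptions.

```python
import argparse
import subprocess

parser = argparse.ArgumentParser(description="ollama-benchmark (sketch)")
parser.add_argument(
    "--ollamabin",
    default="ollama",
    help="Path to the ollama binary, e.g. a locally built developer version",
)
args = parser.parse_args()

# All ollama invocations then go through the chosen binary.
subprocess.run([args.ollamabin, "--version"], check=False)
```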

ollama-benchmark - v0.3.11

Published by chuangtc 7 months ago

  • Fix the Windows version not detecting cmd vs. PowerShell mode (see the sketch below)
  • Fix grabbing the ollama version on Windows
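A hedged sketch of both fixes: guessing the launching shell by walking up the parent processes (via the third-party psutil package, an assumption) and reading the ollama version without crashing when the binary is missing.

```python
import subprocess
import psutil  # third-party: pip install psutil

def windows_shell() -> str:
    """Walk up the parent processes to guess cmd vs. PowerShell (heuristic)."""
    proc = psutil.Process().parent()
    while proc is not None:
        name = proc.name().lower()
        if "powershell" in name or "pwsh" in name:
            return "powershell"
        if name == "cmd.exe":
            return "cmd"
        proc = proc.parent()
    return "unknown"

def ollama_version() -> str:
    """Capture `ollama --version`, tolerating a missing binary."""
    try:
        out = subprocess.run(
            ["ollama", "--version"], capture_output=True, text=True, check=False
        )
        return out.stdout.strip() or "unknown"
    except FileNotFoundError:
        return "unknown"
```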
ollama-benchmark - v0.3.10

Published by chuangtc 7 months ago

Bump up to v0.3.10

ollama-benchmark - v0.3.9

Published by chuangtc 7 months ago

Fix an issue with sending data

ollama-benchmark - v0.3.8

Published by chuangtc 7 months ago

Fix the data file path issue inside the package

ollama-benchmark - v0.3.7

Published by chuangtc 7 months ago

Attempt to resolve the package data file path for reading
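The usual remedy for this class of bug is to read packaged data through importlib.resources instead of paths relative to the current working directory. A sketch with assumed package and file names:

```python
from importlib import resources

def read_models_config() -> str:
    """Read a YAML file shipped inside the installed package.

    "llm_benchmark" and "data/benchmark_models.yml" are assumed names for
    illustration; the real package layout may differ.
    """
    ref = resources.files("llm_benchmark").joinpath("data/benchmark_models.yml")
    return ref.read_text(encoding="utf-8")
```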

ollama-benchmark - v0.3.6

Published by chuangtc 7 months ago

Fix the PyPI auto-publish issue

ollama-benchmark - v0.3.5

Published by chuangtc 7 months ago

  • Publish the code to PyPI
  • Auto-adjust which models are benchmarked based on memory size (see the sketch after this list)
  • Users can now install with: pip install llm-benchmark
  • Then run with: llm_benchmark run
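A hedged sketch of choosing models by installed RAM with psutil; the thresholds and model tags below are illustrative, not the package's actual selection table.

```python
import psutil  # third-party: pip install psutil

def models_for_memory() -> list[str]:
    """Pick benchmark models based on total system RAM (illustrative tiers)."""
    gib = psutil.virtual_memory().total / (1024 ** 3)
    if gib >= 32:
        return ["llama2:13b", "mistral:7b", "llava:13b"]
    if gib >= 16:
        return ["llama2:7b", "mistral:7b", "llava:7b"]
    return ["phi:2.7b"]

print(models_for_memory())
```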
ollama-benchmark - v0.2.4

Published by chuangtc 7 months ago

Rename the test folder to tests to follow the PyPI package folder convention

ollama-benchmark - v0.2.3

Published by chuangtc 8 months ago

The Ollama Windows version needs encoding="utf-8" when writing files and running subprocess.run()
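The fix comes down to passing encoding="utf-8" explicitly; otherwise Windows falls back to the legacy ANSI code page and non-ASCII model output breaks. A short sketch:

```python
import subprocess

# Without an explicit encoding, text-mode output on Windows is decoded with
# the legacy ANSI code page, so non-ASCII model output can raise errors.
result = subprocess.run(
    ["ollama", "--version"],
    capture_output=True, text=True, encoding="utf-8", check=False,
)

with open("benchmark_output.txt", "w", encoding="utf-8") as f:
    f.write(result.stdout)
```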

ollama-benchmark - v0.2.2

Published by chuangtc 9 months ago

Fix a command typo in the README

ollama-benchmark -

Published by chuangtc 9 months ago

Fix the llava benchmark to use 5 sample images

ollama-benchmark - v0.2.0

Published by chuangtc 9 months ago

Finish run_benchmark.py; example invocations:

  • (for mistral model) $ python3 ollama-benchmark/run_benchmark.py -m data/benchmark_models.yml --b data/benchmark1.yml -t instruct
  • (for llama2 model) $ python3 ollama-benchmark/run_benchmark.py -m data/benchmark_models.yml --b data/benchmark1.yml -t question-answer
  • (for llava model) $ python3 ollama-benchmark/run_benchmark.py -m data/benchmark_models.yml --b data/benchmark1.yml -t vision-image
ollama-benchmark - v0.1.0

Published by chuangtc 9 months ago

Basic functionality to test the ollama API with the llama2 and mistral 7B models.

Basic log parsing to compute performance metrics (tokens/s) from the output.
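For context, the tokens/s figure can be derived from the eval_count and eval_duration fields returned by Ollama's /api/generate endpoint (eval_duration is in nanoseconds); how run_benchmark.py itself parses the logs may differ. A sketch:

```python
import json
import urllib.request

def tokens_per_second(model: str, prompt: str,
                      host: str = "http://localhost:11434") -> float:
    """Call Ollama's /api/generate and compute eval throughput in tokens/s."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # eval_duration is reported in nanoseconds by the Ollama API.
    return body["eval_count"] / (body["eval_duration"] / 1e9)

# Example: print(tokens_per_second("mistral:7b", "Why is the sky blue?"))
```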