LLM Benchmark for Throughput via Ollama (Local LLMs)
MIT License
Add a default value ("unknown") when grabbing cpu, gpu, and os_version on Linux
Published by chuangtc 6 months ago
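For context, a minimal sketch of the fallback pattern this fix introduces, assuming a hypothetical get_cpu_info() helper (the real function names in the package may differ):

```python
def get_cpu_info() -> str:
    """Return the CPU model string on Linux, or "unknown" on failure."""
    try:
        with open("/proc/cpuinfo", encoding="utf-8") as f:
            for line in f:
                # /proc/cpuinfo lists one "model name" entry per core.
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
    except OSError:
        pass
    # Fall back to a default instead of crashing or returning None.
    return "unknown"
```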
Fix Debian Linux not using the lshw command to grab CPU info
Published by chuangtc 7 months ago
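A sketch of grabbing CPU info via lshw, as the fix implies; the helper name is hypothetical, and lshw must be installed (on Debian: apt install lshw):

```python
import subprocess

def get_cpu_info_lshw() -> str:
    """Parse the CPU product name from `lshw -class cpu` output."""
    try:
        result = subprocess.run(
            ["lshw", "-class", "cpu"],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return "unknown"
    for line in result.stdout.splitlines():
        line = line.strip()
        if line.startswith("product:"):
            return line.split(":", 1)[1].strip()
    return "unknown"
```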
Fix line-break handling when grabbing GPU info on Windows machines
Published by chuangtc 7 months ago
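The likely shape of the fix, sketched with wmic (one common way to query the GPU on Windows; the helper name is hypothetical). wmic pads its output with extra carriage returns, so each line must be stripped:

```python
import subprocess

def get_gpu_info_windows() -> str:
    """Return the GPU name on Windows with stray line breaks removed."""
    result = subprocess.run(
        ["wmic", "path", "win32_VideoController", "get", "name"],
        capture_output=True, text=True, encoding="utf-8",
    )
    # wmic emits \r\r\n line endings; strip each line and drop blanks.
    lines = [ln.strip() for ln in result.stdout.splitlines() if ln.strip()]
    # lines[0] is the "Name" column header; the GPU name follows it.
    return lines[1] if len(lines) > 1 else "unknown"
```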
Fix grabbing Nvidia GPU info on Linux
Published by chuangtc 7 months ago
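A sketch of the usual way to query an Nvidia GPU on Linux, assuming nvidia-smi is on PATH (the helper name is hypothetical):

```python
import subprocess

def get_nvidia_gpu_info() -> str:
    """Return the Nvidia GPU name via nvidia-smi, or "unknown"."""
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return "unknown"
    return result.stdout.strip() or "unknown"
```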
Add a --ollamabin flag to explicitly specify the path to a developer build of ollama when building your own ollama
Published by chuangtc 7 months ago
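How such a flag is typically wired up with argparse; the --ollamabin name comes from the release note, while the default value and parser setup here are assumptions:

```python
import argparse

parser = argparse.ArgumentParser(description="Ollama throughput benchmark")
parser.add_argument(
    "--ollamabin",
    default="ollama",  # assumed default: the system-installed binary
    help="explicit path to an ollama executable, e.g. a developer build",
)
args = parser.parse_args()
# Downstream calls then use args.ollamabin instead of a hard-coded
# "ollama", e.g. subprocess.run([args.ollamabin, "serve"]).
```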
Fix a data-sending issue
Published by chuangtc 7 months ago
Fix the data file path issue inside the package
Published by chuangtc 7 months ago
Attempt to resolve the package data file path for reading
Published by chuangtc 7 months ago
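The standard-library way to read data files bundled inside a package is importlib.resources; a sketch assuming a hypothetical package name llm_benchmark and data file data/models.yml (the real names in the project may differ):

```python
from importlib import resources

def read_bundled_data() -> str:
    """Read a data file shipped inside the package (Python 3.9+)."""
    # resources.files() resolves the path whether the package is
    # installed as a plain directory or inside a zip/wheel.
    ref = resources.files("llm_benchmark") / "data" / "models.yml"
    return ref.read_text(encoding="utf-8")
```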
Fix the PyPI auto-publish issue
Published by chuangtc 7 months ago
Rename the test folder to tests to follow the PyPI package folder convention
Published by chuangtc 8 months ago
The Ollama Windows version needs encoding="utf-8" when writing files and running subprocess.run()
Published by chuangtc 9 months ago
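Why this matters: on Windows the default locale encoding (e.g. cp1252) can choke on model output, so both file writes and subprocess calls should pin UTF-8. A minimal sketch; the file name and command here are illustrative:

```python
import subprocess

# Pin UTF-8 when writing files...
with open("benchmark.log", "w", encoding="utf-8") as f:
    f.write("benchmark results\n")

# ...and when capturing subprocess output.
result = subprocess.run(
    ["ollama", "--version"],
    capture_output=True, encoding="utf-8",
)
print(result.stdout)
```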
Fix a command typo in the README
Published by chuangtc 9 months ago
Finish run_benchmark.py
Published by chuangtc 9 months ago
Adds a basic function to test the ollama API with the llama2 and mistral 7B models, plus basic log parsing to compute performance metrics (tokens/s) from the output.
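A sketch of both pieces together, assuming the default local ollama endpoint (http://localhost:11434); eval_count and eval_duration (in nanoseconds) come from the /api/generate response, so tokens/s = eval_count / eval_duration × 1e9:

```python
import json
import urllib.request

def benchmark_model(model: str, prompt: str = "Why is the sky blue?") -> float:
    """Send one prompt to the local ollama API and return tokens/s."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # eval_duration is in nanoseconds, so scale to tokens per second.
    return data["eval_count"] / data["eval_duration"] * 1e9

if __name__ == "__main__":
    for model in ("llama2:7b", "mistral:7b"):
        print(f"{model}: {benchmark_model(model):.2f} tokens/s")
```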