add: Evaluating Large Language Models with lm-evaluation-harness #113
Conversation
Walkthrough
This PR adds two new public-facing documentation artifacts for the lm-evaluation-harness tool: a comprehensive Markdown guide covering evaluation methods, features, configurations, and best practices, plus a Jupyter notebook providing interactive quick-start examples for both API and CLI usage. Total additions: 817 lines.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed (3 of 3 checks)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@docs/en/solutions/How_to_Evaluate_LLM.md`:
- Around lines 531-532: The Markdown link in How_to_Evaluate_LLM.md points to a dead target "../public/lm-eval/lm-eval_quick_star.ipynb" (typo: missing 't'). Update the link target to the correct notebook location under docs/public (e.g., docs/public/lm-eval/lm-eval_quick_start.ipynb) so the "lm-eval Quick Start Notebook" link resolves and the build succeeds.
- Around lines 401-407: The Math & Reasoning Suite example uses the wrong lm-evaluation-harness task name. In the lm-eval command's --tasks argument, replace the task "math" with the official name "minerva_math", so the --tasks list changes from gsm8k,math,arc_challenge to gsm8k,minerva_math,arc_challenge.
🧹 Nitpick comments (1)
docs/public/lm-eval/lm-eval_quick_star.ipynb (1)
178-182: Avoid hardcoding API keys in the notebook. Direct assignment encourages storing secrets in notebooks. Prefer a prompt or external environment variable to reduce leak risk.
🔐 Proposed safer pattern
- "import os\n", - "\n", - "# Set your OpenAI API key\n", - "os.environ['OPENAI_API_KEY'] = 'your-api-key-here'\n", + "import os\n", + "import getpass\n", + "\n", + "# Set your OpenAI API key (avoid committing real keys)\n", + "if not os.getenv('OPENAI_API_KEY'):\n", + " os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API key: ')\n",Please verify this still aligns with the recommended authentication flow in the OpenAI API docs.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@docs/en/solutions/How_to_Evaluate_LLM.md`:
- Line 311: The Markdown table row containing `truthfulqa_gen` / `generate_until` with metric names `bleu_max`, `rouge1_max`, `rougeL_max`, `bleurt_max` also references the metric suffixes `_acc` and `_diff`, and the bare underscores are being rendered as emphasis. Wrap the `_acc` and `_diff` tokens in backticks in that table cell so they render as code literals, and apply the backticks consistently across the row.
♻️ Duplicate comments (2)
docs/en/solutions/How_to_Evaluate_LLM.md (2)
401-407: Use the official `minerva_math` task name in the suite. The "Math & Reasoning Suite" uses `math`, but the dataset table lists `minerva_math`. This mismatch will fail when users run the command.
💡 Proposed fix
```diff
-  --tasks gsm8k,math,arc_challenge \
+  --tasks gsm8k,minerva_math,arc_challenge \
```
531-531: Fix the quick-start notebook link target (typo/404 risk). The link uses `lm-eval_quick_star.ipynb`, which appears to be a typo; verify the actual filename and update it to prevent a broken link.
💡 Proposed fix (if the filename is quick_start)
```diff
-Download the Jupyter notebook example: [lm-eval Quick Start Notebook](/lm-eval/lm-eval_quick_star.ipynb)
+Download the Jupyter notebook example: [lm-eval Quick Start Notebook](/lm-eval/lm-eval_quick_start.ipynb)
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@docs/public/lm-eval/lm-eval_quick_star.ipynb`:
- Around lines 1-25: Rename the notebook file from lm-eval_quick_star.ipynb to lm-eval_quick_start.ipynb to fix the typo. Update any references to the notebook name (e.g., links in docs, README, or navigation entries) so they point to lm-eval_quick_start.ipynb, and if the notebook metadata/title mentions "Quick Start", verify it stays consistent with the displayed heading.
🧹 Nitpick comments (3)
docs/public/lm-eval/lm-eval_quick_star.ipynb (3)
131-151: Consider removing the leading newline in the YAML config. The triple-quoted string starts with a newline, creating an empty first line in the generated YAML file. While typically harmless, it's cleaner to use `.strip()` or start the content immediately.
💡 Suggested fix
```diff
 # Create a configuration file
-config = """
-model: local-chat-completions
+config = """\
+model: local-chat-completions
 model_args:
   model: Qwen/Qwen2.5-7B-Instruct
   base_url: http://localhost:8000/v1
```
Or alternatively use `config.strip()` when writing:
```python
with open('eval_config.yaml', 'w') as f:
    f.write(config.strip())
```
163-174: Remove empty placeholder cells. These empty markdown and code cells appear to be leftover placeholders and should be removed for a cleaner notebook.
213-232: Add error handling for results file loading. The hardcoded path `./results/results.json` assumes a specific output structure. lm-eval may create timestamped subdirectories or differently named files. Consider adding error handling and guidance for locating actual output files.
💡 Suggested improvement
```diff
 import json
+import os
+import glob

 # Load and display results
-with open('./results/results.json', 'r') as f:
-    results = json.load(f)
+# Find the most recent results file
+result_files = glob.glob('./results/**/results.json', recursive=True)
+if not result_files:
+    print("No results.json found. Please run an evaluation first.")
+else:
+    latest_file = max(result_files, key=os.path.getmtime)
+    print(f"Loading results from: {latest_file}")
+    with open(latest_file, 'r') as f:
+        results = json.load(f)
+    print("=== Evaluation Results ===")
+    print(json.dumps(results, indent=2))
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@docs/public/lm-eval/lm-eval_quick_star.ipynb`:
- Around lines 92-102: The code calls urllib.request.urlopen(f"{BASE_URL_FOR_MODELS}/models") and broadly catches Exception. Instead, parse BASE_URL_FOR_MODELS with urllib.parse.urlparse and allow only the "http" or "https" schemes before calling urllib.request.urlopen, and replace the broad except with specific handlers for urllib.error.HTTPError, urllib.error.URLError, and json.JSONDecodeError so network, HTTP, and JSON errors are handled explicitly and printed with clear context. Keep the existing loop over data.get("data", []) and the printing of model ids, but print an explicit error for an invalid scheme before attempting urlopen.
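As an illustration of the suggested handling, a sketch under assumptions (the base URL value and surrounding cell structure are placeholders, not the notebook's exact code):

```python
import json
import urllib.error
import urllib.parse
import urllib.request

# Assumed value for illustration; the notebook defines its own BASE_URL_FOR_MODELS.
BASE_URL_FOR_MODELS = "http://localhost:8000/v1"

# Allow only http/https schemes before opening the URL.
scheme = urllib.parse.urlparse(BASE_URL_FOR_MODELS).scheme
if scheme not in ("http", "https"):
    print(f"Invalid URL scheme '{scheme}' in BASE_URL_FOR_MODELS; expected http or https.")
else:
    try:
        with urllib.request.urlopen(f"{BASE_URL_FOR_MODELS}/models") as resp:
            data = json.loads(resp.read().decode("utf-8"))
        # Keep the existing loop over the returned model list.
        for model in data.get("data", []):
            print(model.get("id"))
    except urllib.error.HTTPError as e:
        print(f"HTTP error from {BASE_URL_FOR_MODELS}/models: {e.code} {e.reason}")
    except urllib.error.URLError as e:
        print(f"Network error reaching {BASE_URL_FOR_MODELS}/models: {e.reason}")
    except json.JSONDecodeError as e:
        print(f"Could not parse JSON from the models endpoint: {e}")
```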