Daily Log — 2026-02-12
Today’s Overview
- What was done: Completed two independent tasks across two devices: translated and restructured the MIHD project’s English enhancement plan into a structured Chinese version, and added GLM model billing support to the ccusage tool with format/type-check validation passing
- How it was done: On DCC, read the file and used AI to compress and output a Chinese summary; on tianhe, explored ccusage’s pricing loading chain, added local JSON file reading and merging logic in `_pricing-fetcher.ts`, and resolved multiple rounds of TypeScript/ESLint errors until all checks passed
- Why it matters: The MIHD plan document is now significantly more readable, improving team alignment; ccusage can now automatically read a local ccusage.json to calculate GLM model costs, resolving the core pain point of previously supporting only Claude billing
DCC
- What was done: Read and translated the MIHD project’s /docs/ENHANCEMENT_PLAN.md, producing a structured Chinese version of the enhancement plan
- How it was done: AI read the original Markdown file, compressed and distilled each of the 6 BIG AIMs one by one, preserving key implementation details such as filenames, CLI arguments, and config fields
- Why it matters: Provides a clear Chinese reference document for the phased implementation of MIHD normalization, Q-Former, batch correction, and other features
tianhe
- What was done: Implemented GLM/local pricing file loading for ccusage, fixed multiple TypeScript and ESLint errors until format/typecheck/targeted tests all passed
- How it was done: Explored the `_pricing-fetcher.ts` → `_macro.ts` → `packages/internal/pricing.ts` chain, implemented `loadLocalPricing()` using `node:fs/promises` to read local JSON and merge it with pre-fetched Claude pricing; iteratively fixed issues including incorrect `Result.try()` usage, ESLint errors on the `process` global, and `sample_spec` metadata leaking into the pricing table
- Why it matters: ccusage now supports loading any LiteLLM-format pricing via the `CCUSAGE_PRICING_FILE` environment variable or the default path `~/.ccusage/ccusage.json`, fundamentally resolving the GLM billing problem
Implemented and debugged GLM billing support for ccusage on tianhe; organized the Chinese version of the MIHD enhancement plan on DCC
Today’s Tasks
Architecture & Strategy
- 🔄 ccusage GLM model billing support — Added a `loadLocalPricing()` function in `apps/ccusage/src/_pricing-fetcher.ts` to load pricing data from a local LiteLLM-format JSON file (defaults to `~/.ccusage/ccusage.json`, overridable via `CCUSAGE_PRICING_FILE`) and merge it into the offline cache. format/typecheck and targeted `_pricing-fetcher.ts` tests pass; global test failures in `apps/amp` are unrelated to this change
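The load-and-merge flow described above can be sketched as follows. This is a minimal TypeScript sketch, not the actual ccusage code: the real implementation validates entries with a valibot schema and wraps I/O in `Result.try()`, while `resolvePricingPath` and `mergePricing` are illustrative names introduced here.

```typescript
// Sketch of local pricing loading: env var override, default path, merge.
// Plain JSON.parse and thrown errors stand in for the real schema validation.
import { readFile } from 'node:fs/promises';
import { homedir } from 'node:os';
import { join } from 'node:path';
import process from 'node:process';

type PricingEntry = {
	input_cost_per_token?: number;
	output_cost_per_token?: number;
	[key: string]: unknown;
};

// Env var override first, then the default ~/.ccusage/ccusage.json location.
export function resolvePricingPath(): string {
	return process.env.CCUSAGE_PRICING_FILE
		?? join(homedir(), '.ccusage', 'ccusage.json');
}

// Read a LiteLLM-format JSON file into a model → pricing-entry map.
export async function loadLocalPricing(
	path: string = resolvePricingPath(),
): Promise<Map<string, PricingEntry>> {
	const raw = JSON.parse(await readFile(path, 'utf8')) as Record<string, PricingEntry>;
	return new Map(Object.entries(raw));
}

// Merge local entries over the pre-fetched Claude table; local wins on conflict.
export function mergePricing(
	prefetched: Map<string, PricingEntry>,
	local: Map<string, PricingEntry>,
): Map<string, PricingEntry> {
	return new Map([...prefetched, ...local]);
}
```

Merging local entries last means a local file can also override pre-fetched Claude prices, not just add GLM ones.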
Implementation & Fixes
- ✅ MIHD enhancement plan Chinese translation — Translated the 6 BIG AIMs in ENHANCEMENT_PLAN.md (normalization, UNI2+scGPT experiments, Q-Former/LLaVA, Niche query, batch correction, end-to-end configurability) into a structured Chinese version, preserving key implementation details such as filenames, CLI arguments, and config fields
Issues & Solutions
Critical Issues
1. Result.try() in the @praha/byethrow library returns a function rather than a direct result — the AI’s initial code treated it as a direct result, causing TypeScript type errors
Solution: Changed `const parsedResult = Result.try({...})` to `const parseLocalPricing = Result.try({...}); const parsedResult = parseLocalPricing();`
Key insight: Result.try() is a higher-order function that returns a reusable parser function; calling that function returns the Result<T, E>. This differs from common Result monad implementations such as Rust or fp-ts, and requires special attention
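The two-step pattern can be illustrated with a stand-in that mirrors the behavior described above. This is not the `@praha/byethrow` library itself; `tryFn` is a hypothetical reimplementation of the same factory semantics.

```typescript
// Stand-in for the described Result.try() semantics: it returns a reusable
// function, and calling that function is what produces the Result<T, E>.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

function tryFn<A extends unknown[], T, E>(opts: {
	try: (...args: A) => T;
	catch: (err: unknown) => E;
}): (...args: A) => Result<T, E> {
	return (...args) => {
		try {
			return { ok: true, value: opts.try(...args) };
		} catch (err) {
			return { ok: false, error: opts.catch(err) };
		}
	};
}

// Wrong: treating the factory itself as the result (it is a function).
// const parsedResult = tryFn({ try: () => JSON.parse('{}'), catch: String });

// Right: build the parser once, then call it to obtain the Result.
const parseLocalPricing = tryFn({
	try: (text: string) => JSON.parse(text) as Record<string, unknown>,
	catch: (err) => `parse failed: ${String(err)}`,
});
const parsedResult = parseLocalPricing('{"glm-4":{}}');
```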
2. The local ccusage.json contains metadata keys such as `sample_spec`, which after valibot schema parsing were treated as empty pricing entries, causing a test assertion to fail — the assertion required that only entries with token cost fields should be loaded
Solution: After schema parsing succeeds, added an additional check for `input_cost_per_token != null || output_cost_per_token != null` to filter out entries with no pricing data
Key insight: LiteLLM pricing JSON files mix documentation entries (like `sample_spec`) with real model entries; business-level filtering is required beyond the schema layer
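The business-level check described above can be sketched as a small predicate applied after schema parsing; `hasTokenCosts` is an illustrative name, not the identifier used in the actual code.

```typescript
// Post-schema filter: keep only entries carrying at least one token-cost
// field, dropping documentation keys like sample_spec.
type Entry = {
	input_cost_per_token?: number | null;
	output_cost_per_token?: number | null;
};

function hasTokenCosts(entry: Entry): boolean {
	return entry.input_cost_per_token != null || entry.output_cost_per_token != null;
}

const raw: Record<string, Entry> = {
	sample_spec: {}, // documentation entry, no pricing fields
	'glm-4': { input_cost_per_token: 1e-6 },
};
const filtered = Object.fromEntries(
	Object.entries(raw).filter(([, entry]) => hasTokenCosts(entry)),
);
```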
General Issues
3. ESLint rules prohibit direct use of the global `process` object — the AI used `process.env.CCUSAGE_PRICING_FILE` in new code, causing a lint failure
Solution: Added `import process from 'node:process'` at the top of the file
Key insight: This project enforces explicit import of all Node.js built-ins via the `node:` protocol — a strict ESLint constraint
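The fix in context, with the usage line being an illustrative reconstruction of the new code path rather than a verbatim quote:

```typescript
// Explicit import satisfies the lint rule banning the bare `process` global;
// all Node.js built-ins in this project come in via the node: protocol.
import process from 'node:process';

// Hypothetical usage mirroring the env-var override described above.
const pricingFileOverride: string | undefined = process.env.CCUSAGE_PRICING_FILE;
```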
4. pnpm was not available in the ccusage environment, and corepack failed to download pnpm due to a network permission error (EPERM), making it impossible to run format/typecheck/test
Solution: The user manually installed pnpm and re-ran pnpm install with elevated privileges, successfully installing 930 packages
Key insight: HPC cluster environments may lack write access to ~/.cache/node/corepack and have restricted npm registry network access, requiring the user to manually install the package manager
Human Thinking vs. AI Thinking
Strategic Level
GLM Pricing Data Source
| Role | Approach |
|---|---|
| Human | The user proactively pointed out that a local ccusage.json file containing GLM entries already existed at /HOME/sysu_gbli2/sysu_gbli2xy_1/.ccusage/ccusage.json, and directly asked the AI to leverage that existing file |
| AI | The AI’s initial approach was to manually add a GLM provider prefix in the code and extend the isClaudeModel filter function, unaware that the user already had a complete local pricing file |
Analysis: The human knew a ready-made data asset existed and avoided reinventing the wheel; the AI defaulted to modifying code logic rather than reusing an existing data file
Implementation Level
Handling the pnpm Environment Issue
| Role | Approach |
|---|---|
| Human | The user manually installed pnpm to resolve the environment issue, reporting back “now I installed pnpm” |
| AI | The AI attempted workarounds such as using corepack and adjusting XDG_CACHE_HOME, all of which failed |
Analysis: For toolchain installation issues on an HPC cluster, the human’s direct approach (installing the tool) was far more efficient than the AI’s roundabout solutions
AI Limitations
Critical Limitations
- Misused the `Result.try()` API from `@praha/byethrow`: assumed it behaved like common Result monad implementations (returning a result directly), when in fact it returns a reusable parser function that must be called to obtain the result — this required two rounds of fixes
- Did not account for non-pricing metadata keys (e.g., `sample_spec`) mixed into the LiteLLM pricing JSON on first implementation, leading to test failures before the business-level token cost field filtering was added
General Limitations
- Failed to anticipate the network and permission constraints of an HPC cluster environment; multiple attempts to install pnpm via corepack failed due to EPERM or network fetch errors, wasting many turns before recognizing that the user needed to resolve it manually
Today’s Takeaways
Core Takeaways
- `@praha/byethrow`’s `Result.try({try, catch})` returns a function (factory pattern) rather than executing immediately and returning a Result; this differs from common Result implementations like Rust or fp-ts and deserves special attention
- ccusage’s offline pricing is pre-filtered at build time via the `isClaudeModel` function in `_macro.ts`, retaining only Claude-related models; extending multi-model support requires synchronized changes in both that macro and `_pricing-fetcher.ts`
- The MIHD project plan is implemented in strict dependency order: config foundation (Idea 6 Phase 1) → normalization → UNI2+scGPT experiments → Q-Former/LLaVA → Niche query → batch correction → full config finalization; this ordering is designed to avoid accumulating refactoring costs
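The takeaway about widening the build-time filter can be sketched as follows. The real `isClaudeModel` lives in `_macro.ts` and its exact logic is not shown in this log; a prefix check stands in here, and `isSupportedModel` is a hypothetical name for the widened predicate.

```typescript
// Illustrative stand-in for the build-time model filter in _macro.ts.
function isClaudeModel(model: string): boolean {
	return model.startsWith('claude-');
}

// Multi-model support means widening this predicate, mirrored by the
// runtime local-pricing merge in _pricing-fetcher.ts.
function isSupportedModel(model: string): boolean {
	return isClaudeModel(model) || model.startsWith('glm-');
}
```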
Practical Takeaways
- When working with Node.js projects on HPC clusters, package managers (pnpm) and corepack may fail due to filesystem permissions or network isolation — confirm toolchain availability upfront
Session Summaries
MIHD
✅ MIHD Enhancement Plan Translation: Structured Chinese Version of 6 BIG AIMs
23:05:59.274 | codex
The user requested translation of the MIHD project’s ENHANCEMENT_PLAN.md into Chinese. After reading the file, the AI compressed and distilled each of the 6 modules — normalization, UNI2+scGPT experiments, Q-Former/LLaVA fusion, Niche query, batch correction, and end-to-end configurability — while preserving implementation details such as filenames, CLI arguments, and YAML config snippets. The final output was a complete Chinese plan document including dependency order, implementation phases, and validation approaches.
ccusage
🔄 Adding GLM Model Billing Support to ccusage: Local Pricing File Loading Implementation and Multi-Round Debugging
15:47:15.640 | codex
The user first asked about basic ccusage usage, then raised the need to calculate GLM model costs. The AI explored the pricing chain (_pricing-fetcher.ts → _macro.ts → packages/internal/pricing.ts) and found that offline mode only pre-fetches Claude models. The user pointed out a local ccusage.json containing GLM entries, and the AI proceeded to implement the loadLocalPricing() function. The debugging phase involved four issues: missing pnpm, TypeScript errors from Result.try() misuse, ESLint errors on the process global, and metadata keys leaking into the pricing table. In the end, format/typecheck and targeted _pricing-fetcher.ts tests all passed; unrelated apps/amp test failures were left unresolved.
Token Usage
Claude Code
Summary
| Metric | Value |
|---|---|
| Total Tokens | 2,812,835 |
| Input Tokens | 45,385 |
| Output Tokens | 144 |
| Cache Created | 368,229 |
| Cache Read | 2,399,077 |
| Cache Hit Rate | 86.7% |
| Total Cost (USD) | $1.7933 |
Model Breakdown
| Model | Input | Output | Cache Created | Cache Read | Cost | Share |
|---|---|---|---|---|---|---|
| claude-opus-4-6 | 15 | 63 | 137,190 | 899,384 | $1.3088 | 73.0% |
| claude-haiku-4-5-20251001 | 45,370 | 81 | 231,039 | 1,499,693 | $0.4845 | 27.0% |
Codex
Summary
| Metric | Value |
|---|---|
| Total Tokens | 2,078,880 |
| Input Tokens | 2,058,788 |
| Output Tokens | 20,092 |
| Reasoning Tokens | 9,479 |
| Cache Read | 1,840,256 |
| Total Cost (USD) | $0.9858 |
Model Breakdown
| Model | Input | Output | Reasoning | Cache Read | Cost | Share |
|---|---|---|---|---|---|---|
| gpt-5.2-codex | 56,282 | 1,260 | 0 | 45,440 | $0.0446 | 4.5% |
| gpt-5.3-codex | 2,002,506 | 18,832 | 9,479 | 1,794,816 | $0.9412 | 95.5% |