fix(docs): SEO audit fixes, meta descriptions, and CI improvements #119
Conversation
- Replace O(n) full-map scan in prune() with FIFO expiration queue for O(k) pruning of expired entries
- Move tombstone compaction to a background microtask to keep prune() non-blocking
- Add compactionScheduled flag to prevent duplicate microtasks
- Reset queue and flag in clear() for proper cleanup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
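As a rough illustration of the FIFO-queue idea in this commit, here is a standalone sketch, not the library's actual implementation; `SketchCache`, `entries`, and `expirationQueue` are made-up names:

```typescript
interface Entry<T> { value: T; timestamp: number }
interface QueueItem { key: string; timestamp: number }

class SketchCache<T> {
    private entries = new Map<string, Entry<T>>()
    private expirationQueue: QueueItem[] = []

    constructor(private ttl: number) {}

    set(key: string, value: T): void {
        const timestamp = Date.now()
        this.entries.set(key, { value, timestamp })
        // Each write appends a queue item; stale items left behind by
        // overwrites become "tombstones" that compaction must clean up.
        this.expirationQueue.push({ key, timestamp })
    }

    // O(k) where k = number of expired items at the queue head,
    // instead of scanning the whole map on every prune.
    prune(now = Date.now()): number {
        let removed = 0
        while (this.expirationQueue.length > 0) {
            const head = this.expirationQueue[0]
            if (now - head.timestamp < this.ttl) break
            this.expirationQueue.shift()
            const entry = this.entries.get(head.key)
            // Only delete when the queue item still matches the live entry;
            // a mismatch means this item is a tombstone from an overwrite.
            if (entry && entry.timestamp === head.timestamp) {
                this.entries.delete(head.key)
                removed++
            }
        }
        return removed
    }

    get size(): number { return this.entries.size }
}
```

Because the queue is append-only and timestamps are monotonic per insertion, expired items always sit at the head, which is what makes the early `break` safe.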
- Fix critical error: @cached docs falsely claimed per-instance caching; all instances actually share one cache per decorated method
- Fix async example generic from Promise&lt;Data&gt; to Data for consistency
- Add missing hooks option to Getting Started config table
- Add getOrSet() subsection to Getting Started quick start

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
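The getOrSet() pattern mentioned above can be sketched roughly like this (assumed semantics only: return a cached hit, or compute, store, and return; this is not the library's exact signature):

```typescript
const store = new Map<string, number>()

// Hypothetical getOrSet-style helper: cache miss runs the compute function
// once, subsequent calls for the same key are served from the store.
function getOrSet(key: string, compute: () => number): number {
    const hit = store.get(key)
    if (hit !== undefined) return hit
    const value = compute()
    store.set(key, value)
    return value
}

let calls = 0
const expensive = () => { calls++; return 42 }

getOrSet('x', expensive)
getOrSet('x', expensive) // second call is served from the cache
```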
Add --filter . to pnpm install commands in npm-publish and run-tests workflows to skip docs workspace dependencies that may require a different Node version. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
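Based on that description, the install step presumably takes a shape like the following fragment (the step name and surrounding structure are assumptions; only the flags are confirmed by the commits and diagram below):

```yaml
# Hypothetical shape of the CI install step in npm-publish.yml / run-tests.yml
- name: Install root dependencies
  run: pnpm install --filter . --prefer-offline --frozen-lockfile
```

`--filter .` restricts the install to the workspace root, so the docs package's dependencies (which may need a different Node version) are never resolved.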
Ahrefs site audit flagged 16 pages with meta descriptions too short. Updated descriptions in both .svx seo.description overrides and +page.ts fallbacks to 120-170 characters for better search snippets. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add --config.engine-strict=false to pnpm install in CI workflows
- Replace softprops/action-gh-release with gh CLI for release creation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Update docs-kit, svelte, vite, vitest, wrangler, and other deps
- Refresh sitemap lastmod dates for updated example pages
- Move cf-typegen script to docs package

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Aligns with updated cf-typegen script that outputs to docs/src/. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Updates CI install flags and replaces the GH release action with an inline `gh` CLI invocation.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Actions as GitHub Actions Runner
    participant PNPM as pnpm
    participant GH as gh CLI
    participant GitHub as GitHub Releases API
    Actions->>PNPM: run pnpm install --filter . --prefer-offline --frozen-lockfile ...
    Note right of PNPM: installs workspace deps (offline/prefer frozen lockfile)
    Actions->>GH: prepare release metadata (NEW_VERSION, PR_TITLE, PR_URL, CUSTOM_MESSAGE)
    Actions->>GH: run gh release create --latest ... with composed body
    GH->>GitHub: create release via GitHub API
    GitHub-->>Actions: release created (response)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
package.json (1)
85-107: ⚠️ Potential issue | 🔴 Critical

Update `engines.node` to `>=20.19.0` or higher.

The current `engines.node: ">=18.0.0"` is incompatible with `vite@8.0.0` (requires `^20.19.0 || >=22.12.0`) and `vitest@4.1.0` (requires `^20.0.0 || ^22.0.0 || >=24.0.0`). Node 18 installations will fail. Set the minimum to `>=20.19.0` to satisfy both packages.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` around lines 85 - 107, Update the package.json engines.node constraint from ">=18.0.0" to ">=20.19.0" so the project meets the peer requirements of vite@8.0.0 and vitest@4.1.0; locate the "engines" object in package.json and change the node field to ">=20.19.0" (adjust any related documentation or CI configs that enforce the previous Node version if present).
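In isolation, the suggested engines fragment would read (fragment only; the rest of package.json is unchanged):

```json
{
    "engines": {
        "node": ">=20.19.0"
    }
}
```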
🧹 Nitpick comments (1)
src/cache.ts (1)
887-897: Consider edge case: rapid overwrites without prune() calls.

The 2x threshold is reasonable, but if a single key is overwritten rapidly without any prune() calls, the queue grows unbounded until something triggers prune(). This is acceptable given size()/keys()/values()/entries() all call prune(), but worth noting for high-churn scenarios.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.ts` around lines 887 - 897, The expirationQueue can grow unbounded if a single key is rapidly overwritten without prune() being called; to fix, coalesce duplicate pending tombstones when enqueuing by checking expirationQueue's tail and replacing the last item if it has the same key (instead of always pushing), and also ensure the existing compaction trigger (compactionScheduled + queueMicrotask) is invoked when the queue exceeds the threshold; reference symbols: expirationQueue, compactionScheduled, queueMicrotask, prune(), and cache — update the code path that appends to expirationQueue to dedupe/replace same-key entries and keep the microtask compaction logic as a safety net when length > 2 * cache.size.
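The tail-coalescing idea from this prompt can be sketched in isolation (illustrative names only, not the library's internals):

```typescript
interface QueueItem { key: string; timestamp: number }
const queue: QueueItem[] = []

function enqueue(key: string, timestamp: number): void {
    const tail = queue[queue.length - 1]
    if (tail && tail.key === key) {
        // Rapid overwrite of the same key: update the tail entry in place
        // instead of growing the queue with another tombstone.
        tail.timestamp = timestamp
    } else {
        queue.push({ key, timestamp })
    }
}

enqueue('a', 1)
enqueue('a', 2) // coalesced into the existing tail entry
enqueue('b', 3)
```

Note this only helps consecutive same-key writes; interleaved keys still append, which is why the microtask compaction remains the safety net.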
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/npm-publish.yml:
- Around line 454-465: The release BODY sometimes ends up empty when EVENT_NAME
is "workflow_dispatch" and both CUSTOM_MESSAGE and PR_TITLE are empty; update
the logic that builds BODY (variables BODY, CUSTOM_MESSAGE, PR_TITLE,
EVENT_NAME) to provide a sensible fallback for manual dispatches—e.g., if
CUSTOM_MESSAGE is empty and EVENT_NAME == "workflow_dispatch", set BODY to
include the NEW_VERSION and recent commit message or a default string like "No
release notes provided" before calling gh release create, or alternatively
require the custom_message input for workflow_dispatch by validating it and
failing early with a clear message.
In `@docs/src/routes/docs/examples/async-fetching/`+page.ts:
- Around line 3-4: The page description string named "description" in +page.ts
claims "stale-while-revalidate" but the associated .svx content doesn’t
demonstrate it; either remove that phrase from the description or add a short
stale-while-revalidate example to the .svx file. To fix, update the
"description" constant to only list features actually shown (thundering herd,
error handling, sync fetchers) OR extend the .svx content to include a concise
Memory Cache stale-while-revalidate snippet and explanation that shows serving
stale data while a background revalidation fetch occurs; reference the
"description" constant and the existing .svx examples when making the change.
In `@docs/src/routes/docs/examples/computed-values/`+page.svx:
- Line 11: Update the seo.description string to remove or rephrase the "TTL
refresh" mention so it accurately reflects that the examples use ttl: 0
(no-expiration) for deterministic computations; locate the seo.description
assignment (the line assigning seo.description) and change the text to state
that the page demonstrates no-expiration caching with ttl: 0 or otherwise remove
the TTL refresh reference to avoid the misleading claim.
In `@docs/src/routes/docs/examples/monitoring/`+page.ts:
- Around line 3-4: The page metadata "description" string (the 'Monitor cache
health with hit rates, memory usage, eviction counts, and performance metrics.
Integrate Memory Cache statistics with DataDog, Prometheus, or custom
dashboards.' value) exceeds the PR target length; shorten it to be between
120–170 characters (≤170) by trimming redundant phrases and tightening
wording—e.g., focus on core items like cache hit rates, memory, evictions, and
integrations—update the description property accordingly.
In `@docs/src/routes/docs/examples/multi-tenant/`+page.ts:
- Around line 3-4: Update the description to accurately reflect the example:
state that the sample uses a single MemoryCache with logical per-tenant
isolation via namespaced key prefixes (not independent MemoryCache instances
with separate TTL/eviction/stats), or alternatively add a second example that
creates independent MemoryCache instances per tenant (e.g., separate MemoryCache
constructors) if you want to demonstrate truly independent
TTL/eviction/statistics; refer to the MemoryCache and namespace/key-prefix
approach used in the example when making the change.
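A minimal sketch of the key-prefix isolation this prompt describes, using a plain Map as a stand-in for MemoryCache (`tenantKey` is a hypothetical helper name):

```typescript
const cache = new Map<string, string>()

// Logical per-tenant isolation: one shared cache, namespaced keys.
const tenantKey = (tenant: string, key: string): string => `${tenant}:${key}`

cache.set(tenantKey('acme', 'user'), 'alice')
cache.set(tenantKey('globex', 'user'), 'bob')
```

This keeps keys from colliding across tenants, but TTL, eviction, and statistics are still shared, which is exactly the distinction the review comment asks the description to make.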
---
Outside diff comments:
In `@package.json`:
- Around line 85-107: Update the package.json engines.node constraint from
">=18.0.0" to ">=20.19.0" so the project meets the peer requirements of
vite@8.0.0 and vitest@4.1.0; locate the "engines" object in package.json and
change the node field to ">=20.19.0" (adjust any related documentation or CI
configs that enforce the previous Node version if present).
---
Nitpick comments:
In `@src/cache.ts`:
- Around line 887-897: The expirationQueue can grow unbounded if a single key is
rapidly overwritten without prune() being called; to fix, coalesce duplicate
pending tombstones when enqueuing by checking expirationQueue's tail and
replacing the last item if it has the same key (instead of always pushing), and
also ensure the existing compaction trigger (compactionScheduled +
queueMicrotask) is invoked when the queue exceeds the threshold; reference
symbols: expirationQueue, compactionScheduled, queueMicrotask, prune(), and
cache — update the code path that appends to expirationQueue to dedupe/replace
same-key entries and keep the microtask compaction logic as a safety net when
length > 2 * cache.size.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c19a8c35-7325-441a-9968-bf3483bb132c
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`, `!pnpm-lock.yaml`
📒 Files selected for processing (39)
- .github/workflows/npm-publish.yml
- .github/workflows/run-tests.yml
- .trunk/trunk.yaml
- docs/package.json
- docs/src/lib/sitemap-manifest.json
- docs/src/routes/docs/api/cached-decorator/+page.svx
- docs/src/routes/docs/api/cached-decorator/+page.ts
- docs/src/routes/docs/api/memory-cache/+page.ts
- docs/src/routes/docs/examples/+page.ts
- docs/src/routes/docs/examples/api-caching/+page.svx
- docs/src/routes/docs/examples/api-caching/+page.ts
- docs/src/routes/docs/examples/async-fetching/+page.svx
- docs/src/routes/docs/examples/async-fetching/+page.ts
- docs/src/routes/docs/examples/computed-values/+page.svx
- docs/src/routes/docs/examples/computed-values/+page.ts
- docs/src/routes/docs/examples/configuration/+page.svx
- docs/src/routes/docs/examples/configuration/+page.ts
- docs/src/routes/docs/examples/database-caching/+page.svx
- docs/src/routes/docs/examples/database-caching/+page.ts
- docs/src/routes/docs/examples/monitoring/+page.svx
- docs/src/routes/docs/examples/monitoring/+page.ts
- docs/src/routes/docs/examples/multi-tenant/+page.svx
- docs/src/routes/docs/examples/multi-tenant/+page.ts
- docs/src/routes/docs/examples/rate-limiting/+page.svx
- docs/src/routes/docs/examples/rate-limiting/+page.ts
- docs/src/routes/docs/examples/service-class/+page.svx
- docs/src/routes/docs/examples/service-class/+page.ts
- docs/src/routes/docs/examples/sessions/+page.svx
- docs/src/routes/docs/examples/sessions/+page.ts
- docs/src/routes/docs/getting-started/+page.svx
- docs/src/routes/docs/getting-started/+page.ts
- docs/src/routes/examples/basic-cache/+page.ts
- docs/src/routes/examples/cache-statistics/+page.ts
- docs/src/routes/examples/lru-eviction/+page.ts
- docs/src/routes/examples/ttl-expiration/+page.ts
- docs/src/worker-configuration.d.ts
- package.json
- src/cache.statistics.test.ts
- src/cache.ts
```yaml
run: |
    BODY="Changes in this Release
    ${CUSTOM_MESSAGE:-$PR_TITLE}"
    if [[ "$EVENT_NAME" == "pull_request" ]]; then
        BODY="$BODY

    For more details, see the [Pull Request]($PR_URL)"
    fi
    gh release create "$NEW_VERSION" \
        --title "Release $NEW_VERSION" \
        --notes "$BODY" \
        --latest
```
Edge case: empty release body on workflow_dispatch without custom message.
When triggered via workflow_dispatch without a custom_message input, PR_TITLE is empty, resulting in release notes that just say "Changes in this Release" with no actual content.
Consider adding a fallback or making custom_message required for manual dispatches:
Proposed fix
```diff
 run: |
-    BODY="Changes in this Release
-    ${CUSTOM_MESSAGE:-$PR_TITLE}"
+    DESCRIPTION="${CUSTOM_MESSAGE:-$PR_TITLE}"
+    if [[ -z "$DESCRIPTION" ]]; then
+        DESCRIPTION="Manual release"
+    fi
+    BODY="Changes in this Release
+    $DESCRIPTION"
     if [[ "$EVENT_NAME" == "pull_request" ]]; then
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/npm-publish.yml around lines 454 - 465, The release BODY
sometimes ends up empty when EVENT_NAME is "workflow_dispatch" and both
CUSTOM_MESSAGE and PR_TITLE are empty; update the logic that builds BODY
(variables BODY, CUSTOM_MESSAGE, PR_TITLE, EVENT_NAME) to provide a sensible
fallback for manual dispatches—e.g., if CUSTOM_MESSAGE is empty and EVENT_NAME
== "workflow_dispatch", set BODY to include the NEW_VERSION and recent commit
message or a default string like "No release notes provided" before calling gh
release create, or alternatively require the custom_message input for
workflow_dispatch by validating it and failing early with a clear message.
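The fallback logic suggested in the review can be exercised in isolation (variable values are placeholders simulating a manual workflow_dispatch with no inputs):

```shell
# Standalone sketch of the suggested fallback; CUSTOM_MESSAGE and PR_TITLE are
# emptied here to simulate workflow_dispatch without a custom_message input.
CUSTOM_MESSAGE=""
PR_TITLE=""
DESCRIPTION="${CUSTOM_MESSAGE:-$PR_TITLE}"
if [[ -z "$DESCRIPTION" ]]; then
    DESCRIPTION="Manual release"
fi
echo "$DESCRIPTION"
```

The `${VAR:-fallback}` expansion alone is not enough here because both variables can be empty, which is why the explicit `-z` check is needed.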
Intentionally not addressing — this is an edge case for manual workflow_dispatch without a custom message. Low priority and the release still creates fine with just the version tag.
@jaysin586, understood — acknowledged and moving on.
- Bump engines.node from >=18.0.0 to >=20.19.0 (vite@8 + vitest@4 require it)
- Fix async-fetching description: replace "stale-while-revalidate" with "thundering herd prevention" to match actual page content
- Fix computed-values description: remove "TTL refresh" since examples use ttl: 0
- Trim monitoring fallback description to <=170 chars
- Fix multi-tenant description to accurately reflect single-cache key-prefix approach rather than implying separate cache instances
- Coalesce expirationQueue tail entries on rapid same-key overwrites to prevent unbounded queue growth between prune() calls

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addressing the remaining CodeRabbit feedback from the review body (ca120d6):
Actionable comments posted: 3
🧹 Nitpick comments (2)
src/cache.ts (1)
288-289: Use a named interface for queue entries.

The new inline `{ key; timestamp }` shape works, but a small `interface` would make the queue logic easier to scan and matches the repo's TypeScript rule. As per coding guidelines, "Prefer interface for defining object shapes in TypeScript".

✏️ Suggested edit

```diff
+interface ExpirationQueueEntry {
+    key: string
+    timestamp: number
+}
+
 ...
-    private expirationQueue: Array<{ key: string; timestamp: number }> = []
+    private expirationQueue: ExpirationQueueEntry[] = []
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.ts` around lines 288 - 289, Introduce a named interface for the expiration queue entries and use it instead of the inline type: define an interface (e.g. ExpirationEntry { key: string; timestamp: number }) and replace the inline annotation on the expirationQueue field with Array&lt;ExpirationEntry&gt; (or ExpirationEntry[]), updating any references to the inline shape accordingly (look for expirationQueue and any push/pop or destructuring that expects { key, timestamp }). Ensure the interface is exported if used outside the file and follows existing naming conventions.

package.json (1)
70-70: Use path-based workspace selector for clarity.
`pnpm --filter docs` works currently (the docs package is named `docs`), but `--filter ./docs` is more explicit and resilient to future name changes.

```diff
-    "cf-typegen": "pnpm --filter docs cf-typegen",
+    "cf-typegen": "pnpm --filter ./docs cf-typegen",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 70, Update the "cf-typegen" npm script value in package.json to use a path-based workspace selector instead of the package name; replace the command string "pnpm --filter docs cf-typegen" with "pnpm --filter ./docs cf-typegen" so it targets the docs workspace by path (modify the "cf-typegen" entry accordingly).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/src/routes/docs/examples/async-fetching/`+page.ts:
- Around line 3-4: The fallback description string in the description property
(in +page.ts) is too long; shorten it to 170 characters or fewer by trimming
redundant words and keeping the core message (cache async fetch results, request
deduplication, thundering herd prevention, Memory Cache, prevent redundant
network calls) — update the description value to a concise single-sentence
version under 170 chars (e.g., remove "automatic" or "TypeScript backend" if
needed) and verify the final character count.
In `@docs/src/routes/docs/examples/computed-values/`+page.svx:
- Line 11: The meta description assigned to seo.description is 176 characters
and exceeds the 170-char SEO target; shorten the string in seo.description to
≤170 characters by removing or condensing nonessential words (e.g., drop
"Includes" or "with `@humanspeak/memory-cache`" or rephrase to "with
`@humanspeak/memory-cache`" as shorter alternate) so the resulting description
stays under the 170-char limit while preserving the core message about caching
expensive computations, performance tracking, and hooks monitoring.
In `@src/cache.ts`:
- Around line 549-559: The expiration-queue append in the write path (the block
that updates this.expirationQueue with tail coalescing) must also trigger the
tombstone compaction scheduling that prune() currently enqueues; update the
write path to call the same compaction-scheduler used by prune() whenever you
push or update a tombstone entry (e.g., after queue.push(...) or after updating
tail.timestamp) so hot traffic that never calls prune()/size()/keys() still
schedules compaction; make the identical change for the similar block around the
symbols referenced at lines 894-904.
---
Nitpick comments:
In `@package.json`:
- Line 70: Update the "cf-typegen" npm script value in package.json to use a
path-based workspace selector instead of the package name; replace the command
string "pnpm --filter docs cf-typegen" with "pnpm --filter ./docs cf-typegen" so
it targets the docs workspace by path (modify the "cf-typegen" entry
accordingly).
In `@src/cache.ts`:
- Around line 288-289: Introduce a named interface for the expiration queue
entries and use it instead of the inline type: define an interface (e.g.
ExpirationEntry { key: string; timestamp: number }) and replace the inline
annotation on the expirationQueue field with Array<ExpirationEntry> (or
ExpirationEntry[]), updating any references to the inline shape accordingly
(look for expirationQueue and any push/pop or destructuring that expects { key,
timestamp }). Ensure the interface is exported if used outside the file and
follows existing naming conventions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 218443dd-94e8-4922-a35c-4d7af5bde1a5
📒 Files selected for processing (7)
- docs/src/routes/docs/examples/async-fetching/+page.ts
- docs/src/routes/docs/examples/computed-values/+page.svx
- docs/src/routes/docs/examples/computed-values/+page.ts
- docs/src/routes/docs/examples/monitoring/+page.ts
- docs/src/routes/docs/examples/multi-tenant/+page.ts
- package.json
- src/cache.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/src/routes/docs/examples/multi-tenant/+page.ts
- docs/src/routes/docs/examples/computed-values/+page.ts
```diff
 description:
-    'Cache async fetch results with automatic deduplication and stale-while-revalidate patterns.'
+    'Cache async fetch results with automatic request deduplication and thundering herd prevention using Memory Cache. Prevent redundant network calls in your TypeScript backend.'
```
Trim this fallback description to ≤170 chars.
Current copy is about 174 characters, so it still misses the PR’s SEO range.
✏️ Suggested edit
```diff
-    'Cache async fetch results with automatic request deduplication and thundering herd prevention using Memory Cache. Prevent redundant network calls in your TypeScript backend.'
+    'Cache async fetch results with request deduplication and thundering herd prevention using Memory Cache. Reduce redundant network calls in your TypeScript backend.'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
description:
    'Cache async fetch results with request deduplication and thundering herd prevention using Memory Cache. Reduce redundant network calls in your TypeScript backend.'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/src/routes/docs/examples/async-fetching/`+page.ts around lines 3 - 4,
The fallback description string in the description property (in +page.ts) is too
long; shorten it to 170 characters or fewer by trimming redundant words and
keeping the core message (cache async fetch results, request deduplication,
thundering herd prevention, Memory Cache, prevent redundant network calls) —
update the description value to a concise single-sentence version under 170
chars (e.g., remove "automatic" or "TypeScript backend" if needed) and verify
the final character count.
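A quick way to sanity-check the two candidate strings against the 120-170 character target (both strings are copied from the suggestion above):

```typescript
// Lengths of the current and suggested descriptions; the SEO target cited in
// this PR is 120-170 characters.
const current =
    'Cache async fetch results with automatic request deduplication and thundering herd prevention using Memory Cache. Prevent redundant network calls in your TypeScript backend.'
const suggested =
    'Cache async fetch results with request deduplication and thundering herd prevention using Memory Cache. Reduce redundant network calls in your TypeScript backend.'
console.log(current.length, suggested.length)
```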
```diff
 if (seo) {
     seo.title = 'Computed Value Caching - Memory Cache'
-    seo.description = 'Cache expensive computations to avoid redundant processing with @humanspeak/memory-cache'
+    seo.description = 'Cache expensive computations to avoid redundant processing and speed up repeated calculations. Includes performance tracking and hooks monitoring with @humanspeak/memory-cache.'
```
Trim this meta description back under the 170-char target.
This copy is about 176 characters, so it still misses the SEO range called out in the PR.
✏️ Suggested edit
```diff
-    seo.description = 'Cache expensive computations to avoid redundant processing and speed up repeated calculations. Includes performance tracking and hooks monitoring with `@humanspeak/memory-cache`.'
+    seo.description = 'Cache expensive computations to avoid redundant work and speed up repeated calculations. Covers performance tracking and cache hooks with `@humanspeak/memory-cache`.'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
seo.description = 'Cache expensive computations to avoid redundant work and speed up repeated calculations. Covers performance tracking and cache hooks with `@humanspeak/memory-cache`.'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/src/routes/docs/examples/computed-values/`+page.svx at line 11, The meta
description assigned to seo.description is 176 characters and exceeds the
170-char SEO target; shorten the string in seo.description to ≤170 characters by
removing or condensing nonessential words (e.g., drop "Includes" or "with
`@humanspeak/memory-cache`" or rephrase to "with `@humanspeak/memory-cache`" as
shorter alternate) so the resulting description stays under the 170-char limit
while preserving the core message about caching expensive computations,
performance tracking, and hooks monitoring.
```ts
// Append to expiration queue (skip if TTL is disabled)
// Coalesce with tail entry if same key to prevent unbounded growth on rapid overwrites
if (this.ttl > 0) {
    const queue = this.expirationQueue
    const tail = queue.length > 0 ? queue[queue.length - 1] : undefined
    if (tail && tail.key === key) {
        tail.timestamp = timestamp
    } else {
        queue.push({ key, timestamp })
    }
}
```
Schedule tombstone compaction from the write path too.
Compaction is only queued from prune(), but tombstones are created on normal cache traffic. A hot cache that never calls prune()/size()/keys() can still grow expirationQueue without bound under interleaved overwrites, so the memory issue remains. As per coding guidelines, "This is a TypeScript memory cache library. Focus on cache correctness, memory management, and TypeScript types".
♻️ One way to centralize the trigger
```diff
+    private scheduleCompactionIfNeeded(): void {
+        if (this.compactionScheduled || this.expirationQueue.length <= 2 * this.cache.size) {
+            return
+        }
+        this.compactionScheduled = true
+        queueMicrotask(() => {
+            this.compactionScheduled = false
+            this.expirationQueue = this.expirationQueue.filter((item) => {
+                const entry = this.cache.get(item.key)
+                return entry !== undefined && entry.timestamp === item.timestamp
+            })
+        })
+    }
+
     set(key: string, value: T): void {
         const isNewKey = !this.cache.has(key)
         ...
         if (this.ttl > 0) {
             const queue = this.expirationQueue
             const tail = queue.length > 0 ? queue[queue.length - 1] : undefined
             if (tail && tail.key === key) {
                 tail.timestamp = timestamp
             } else {
                 queue.push({ key, timestamp })
             }
+            this.scheduleCompactionIfNeeded()
         }
         ...
     }

     prune(): number {
         ...
-        if (!this.compactionScheduled && this.expirationQueue.length > 2 * this.cache.size) {
-            this.compactionScheduled = true
-            queueMicrotask(() => {
-                this.compactionScheduled = false
-                this.expirationQueue = this.expirationQueue.filter((item) => {
-                    const entry = this.cache.get(item.key)
-                    return entry !== undefined && entry.timestamp === item.timestamp
-                })
-            })
-        }
+        this.scheduleCompactionIfNeeded()
         return count
     }
```

Also applies to: 894-904
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cache.ts` around lines 549 - 559, The expiration-queue append in the
write path (the block that updates this.expirationQueue with tail coalescing)
must also trigger the tombstone compaction scheduling that prune() currently
enqueues; update the write path to call the same compaction-scheduler used by
prune() whenever you push or update a tombstone entry (e.g., after
queue.push(...) or after updating tail.timestamp) so hot traffic that never
calls prune()/size()/keys() still schedules compaction; make the identical
change for the similar block around the symbols referenced at lines 894-904.
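The write-path-scheduling idea from this comment can be shown as a standalone sketch. All names (`cache`, `expirationQueue`, `compactionScheduled`) follow the review comment; this is not the library's real code:

```typescript
interface QueueItem { key: string; timestamp: number }

class Sketch {
    cache = new Map<string, { timestamp: number }>()
    expirationQueue: QueueItem[] = []
    compactionScheduled = false

    scheduleCompactionIfNeeded(): void {
        if (this.compactionScheduled || this.expirationQueue.length <= 2 * this.cache.size) return
        this.compactionScheduled = true
        queueMicrotask(() => {
            this.compactionScheduled = false
            // Drop tombstones: queue items that no longer match a live entry.
            this.expirationQueue = this.expirationQueue.filter((item) => {
                const entry = this.cache.get(item.key)
                return entry !== undefined && entry.timestamp === item.timestamp
            })
        })
    }

    set(key: string, timestamp: number): void {
        this.cache.set(key, { timestamp })
        this.expirationQueue.push({ key, timestamp })
        // The write path triggers compaction too, so hot traffic that never
        // calls prune() still bounds the queue to roughly 2x the live size.
        this.scheduleCompactionIfNeeded()
    }
}
```

Because the guard flag makes the call a cheap no-op when compaction is already pending, invoking it on every write costs almost nothing.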
- Use optional chaining instead of redundant length guard
- Clarify comment: coalescing applies to consecutive same-key overwrites

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
♻️ Duplicate comments (1)
src/cache.ts (1)
549-559: ⚠️ Potential issue | 🟠 Major

Compaction is still not scheduled from the write path.

Line 549 updates `expirationQueue`, but compaction scheduling still only occurs in `prune()`. Under hot traffic that doesn't call `prune()`/`size()`/`keys()`, tombstones can still grow unbounded (tail coalescing only helps consecutive same-key overwrites). Trigger the same scheduler from `set()` after the queue update.

🛠️ Suggested fix pattern
```diff
+    private scheduleCompactionIfNeeded(): void {
+        if (this.compactionScheduled || this.expirationQueue.length <= 2 * this.cache.size) {
+            return
+        }
+        this.compactionScheduled = true
+        queueMicrotask(() => {
+            this.compactionScheduled = false
+            this.expirationQueue = this.expirationQueue.filter((item) => {
+                const entry = this.cache.get(item.key)
+                return entry !== undefined && entry.timestamp === item.timestamp
+            })
+        })
+    }
@@
     if (this.ttl > 0) {
         const queue = this.expirationQueue
         const tail = queue[queue.length - 1]
         if (tail?.key === key) {
             tail.timestamp = timestamp
         } else {
             queue.push({ key, timestamp })
         }
+        this.scheduleCompactionIfNeeded()
     }
@@
-    if (!this.compactionScheduled && this.expirationQueue.length > 2 * this.cache.size) {
-        this.compactionScheduled = true
-        queueMicrotask(() => {
-            this.compactionScheduled = false
-            this.expirationQueue = this.expirationQueue.filter((item) => {
-                const entry = this.cache.get(item.key)
-                return entry !== undefined && entry.timestamp === item.timestamp
-            })
-        })
-    }
+    this.scheduleCompactionIfNeeded()
```

As per coding guidelines, "This is a TypeScript memory cache library. Focus on cache correctness, memory management, and TypeScript types."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.ts` around lines 549 - 559, The write path updates expirationQueue in set() but never schedules compaction so tombstones can grow; after the TTL block that updates this.expirationQueue (respecting this.ttl > 0 and tail coalescing) call the same compaction scheduler used by prune() (e.g., ensureCompactionScheduled / scheduleCompaction or whatever helper prune() uses) to enqueue a compaction run; make the call idempotent/cheap (no-op if already scheduled) and only invoke it when TTL is enabled so behavior matches prune(), referencing expirationQueue, ttl, set(), and prune() as the locations to change.
🧹 Nitpick comments (1)
src/cache.ts (1)
288-289: Use an interface for the expiration queue item shape.

Line 289 uses an inline object type; please extract it to an interface for consistency with the TypeScript guideline.
♻️ Suggested refactor
```diff
+interface ExpirationQueueItem {
+    key: string
+    timestamp: number
+}
+
 export class MemoryCache<T> {
@@
-    private expirationQueue: Array<{ key: string; timestamp: number }> = []
+    private expirationQueue: ExpirationQueueItem[] = []
```

As per coding guidelines, "Prefer interface for defining object shapes in TypeScript".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.ts` around lines 288 - 289, Extract the inline object type used for expirationQueue into a named interface (e.g., ExpirationItem) and update the declaration of the private expirationQueue property to use Array<ExpirationItem> (or ExpirationItem[]); ensure the new interface matches the existing shape ({ key: string; timestamp: number }) and adjust any other usages of that inline type in this file (references to expirationQueue, enqueue/dequeue/prune helpers) to use the new ExpirationItem interface for consistency with TypeScript guidelines.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/cache.ts`:
- Around line 549-559: The write path updates expirationQueue in set() but never
schedules compaction so tombstones can grow; after the TTL block that updates
this.expirationQueue (respecting this.ttl > 0 and tail coalescing) call the same
compaction scheduler used by prune() (e.g., ensureCompactionScheduled /
scheduleCompaction or whatever helper prune() uses) to enqueue a compaction run;
make the call idempotent/cheap (no-op if already scheduled) and only invoke it
when TTL is enabled so behavior matches prune(), referencing expirationQueue,
ttl, set(), and prune() as the locations to change.
---
Nitpick comments:
In `@src/cache.ts`:
- Around line 288-289: Extract the inline object type used for expirationQueue
into a named interface (e.g., ExpirationItem) and update the declaration of the
private expirationQueue property to use Array<ExpirationItem> (or
ExpirationItem[]); ensure the new interface matches the existing shape ({ key:
string; timestamp: number }) and adjust any other usages of that inline type in
this file (references to expirationQueue, enqueue/dequeue/prune helpers) to use
the new ExpirationItem interface for consistency with TypeScript guidelines.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 98b1e567-8abe-4e6d-9c83-ee73153f796a
📒 Files selected for processing (1)
src/cache.ts
⏭️ NPM publishing was skipped due to the |
Summary
Fixes Ahrefs site audit issues for memory.svelte.page: expands short meta descriptions to 120+ characters, improves SEO page titles, corrects `@cached` decorator documentation, and defers tombstone compaction for better cache performance.

Changes
⚡ Performance
- Defer tombstone compaction via `queueMicrotask` to avoid blocking cache operations

🐛 Bug fixes
- Fix `@cached` decorator docs — correct instance scope and fill documentation gaps
- Expand meta descriptions (both `.svx` seo overrides and `+page.ts` fallbacks) to 120-170 characters

📦 Dependencies
🔄 CI/CD
- Scope `pnpm install` to root package in CI workflows
- Add `--config.engine-strict=false` to pnpm install
- Replace `softprops/action-gh-release` with `gh` CLI for release creation

🔧 Chore
- Move `worker-configuration.d.ts` to `docs/src/` to align with cf-typegen output

Commits
- 5ab145d perf: defer tombstone compaction via queueMicrotask
- d09a79b chore(docs): bump dependencies and update lint config
- 7233055 fix(docs): correct @cached instance scope and fill documentation gaps
- 02eca09 ci: scope pnpm install to root package in CI workflows
- 54598bd fix(docs): expand meta descriptions to meet 120+ character SEO threshold
- 5720942 ci: add engine-strict=false flag and replace gh-release action with CLI
- 66e00c7 chore: bump dependencies and update sitemap manifest dates
- 6840e15 chore(docs): move worker-configuration.d.ts to src directory