Test/#110: Load test for performance-bottleneck APIs #111
Conversation
- add-indexes.sql: create 20 custom indexes
- drop-indexes.sql: roll the indexes back (inverse operation)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- VideoRedisRepository, RedisUtil: implement home/video cache reads and writes
- HomeService, VideoService: add a `@Value`-based cacheEnabled toggle
- application-nocache.yml: Spring profile that disables caching

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Auto-generate HTML reports via the benc-uk/k6-reporter import
- handleSummary export: HTML + JSON + stdout output
- Control result paths with the RESULT_DIR and RESULT_PREFIX environment variables

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Scenario 1 (indexes): fully automated before/after comparison with rollback
- Scenario 2 (cache): semi-automated nocache/local profile switching
- Scenario 3 (connection pool): semi-automated comparison of pool sizes 10/50/100
- Common: health check, optional Redis cache flush, colored output

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

- Rework the guide around run-scenario.sh-based execution
- Spell out per-scenario prerequisites, execution flow, and verification steps
- Add instructions for viewing HTML reports and extracting JSON metrics
- Organize an environment-variable reference table

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Walkthrough
k6-based load test scripts, execution/orchestration tooling, and large-scale test-data and index scripts were added. Redis caching logic was introduced into HomeService/VideoService along with the VideoRedisRepository and RedisUtil key utilities. A cache-disabling profile and a related .gitignore update are also included.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant VU as k6 VU
    participant API as API server
    participant Cache as Redis cache
    participant DB as PostgreSQL
    rect rgba(0, 200, 100, 0.5)
        Note over VU,DB: HomeService.readHome (cache enabled)
        VU->>API: GET /api/home
        API->>Cache: getCachedHomeVideos(orgId, filter)
        alt cache hit
            Cache-->>API: JSON video list
            API-->>VU: 200 OK (cached response)
        else cache miss
            API->>DB: SELECT videos ... WHERE org_id=?
            DB-->>API: video rows
            API->>Cache: cacheHomeVideos(orgId, filter, videoData)
            Cache-->>API: OK
            API-->>VU: 200 OK (loaded from DB, then cached)
        end
    end
```
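The cache-hit/miss flow in the diagram above can be sketched in plain JavaScript. This is a minimal cache-aside illustration, not the project's actual implementation: the in-memory `Map` stands in for Redis, and `cacheKey`, `readHome`, and `loadFromDb` are hypothetical names.

```javascript
// Minimal cache-aside sketch mirroring the HomeService flow above.
// A Map stands in for Redis; loadFromDb simulates the SQL query.
const cache = new Map();

function cacheKey(orgId, filter) {
  return `home:videos:${orgId}:${filter}`;
}

function readHome(orgId, filter, loadFromDb) {
  const key = cacheKey(orgId, filter);
  if (cache.has(key)) {
    return { source: 'cache', videos: cache.get(key) }; // cache hit
  }
  const videos = loadFromDb(orgId, filter); // cache miss: query the DB
  cache.set(key, videos);                   // write back for later readers
  return { source: 'db', videos };
}

// The first call goes to the DB; the second is served from the cache.
const fakeDb = (orgId) => [{ id: 1, orgId, title: 'demo' }];
const first = readHome(1, 'recent', fakeDb);
const second = readHome(1, 'recent', fakeDb);
console.log(first.source, second.source); // db cache
```

Disabling the cache (the `nocache` profile) is equivalent to skipping both the `cache.has` check and the `cache.set` write-back, which is exactly what the `cacheEnabled` toggle guards in the services.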
```mermaid
sequenceDiagram
    participant User as Operator
    participant RunScript as run-scenario.sh
    participant K6 as k6 engine
    participant API as API server
    participant DB as PostgreSQL
    participant Cache as Redis
    rect rgba(100, 200, 150, 0.5)
        Note over User,DB: Scenario 1: indexing test
        User->>RunScript: ./run-scenario.sh 1
        RunScript->>DB: DROP custom indexes
        DB-->>RunScript: OK
        RunScript->>K6: k6 run history-api-test.js (before)
        K6->>API: load requests
        API->>DB: queries (no indexes)
        DB-->>API: responses (slow)
        K6-->>RunScript: collect "before" results
    end
    rect rgba(150, 150, 200, 0.5)
        RunScript->>DB: run scripts/add-indexes.sql
        DB-->>RunScript: indexes created
    end
    rect rgba(150, 200, 100, 0.5)
        RunScript->>K6: k6 run history-api-test.js (after)
        K6->>API: load requests
        API->>DB: queries (using indexes)
        DB-->>API: fast responses
        K6-->>RunScript: collect "after" results
    end
    rect rgba(200, 150, 100, 0.5)
        RunScript->>DB: scripts/drop-indexes.sql (rollback)
        DB-->>RunScript: OK
        RunScript-->>User: print results directory
    end
```
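Once both runs finish, the before/after summaries can be compared with a small script. This is a sketch: the field path follows the shape of k6's `handleSummary` JSON export (`metrics.http_req_duration["p(95)"]`), and `p95Improvement` is an illustrative helper, not part of the repository.

```javascript
// Compute the p95 latency improvement between two k6 summary JSON objects.
function p95Improvement(beforeSummary, afterSummary) {
  const before = beforeSummary.metrics.http_req_duration['p(95)'];
  const after = afterSummary.metrics.http_req_duration['p(95)'];
  return {
    beforeMs: before,
    afterMs: after,
    improvementPct: ((before - after) / before) * 100,
  };
}

// Example with made-up numbers (not real measurements):
const result = p95Improvement(
  { metrics: { http_req_duration: { 'p(95)': 800 } } },
  { metrics: { http_req_duration: { 'p(95)': 200 } } },
);
console.log(result.improvementPct); // 75
```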
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Force-pushed from b051441 to 63b4273
Actionable comments posted: 7
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (18)
scripts/add-indexes.sql-53-58 (1)
53-58: ⚠️ Potential issue | 🟠 Major
The video title search index does not work effectively.

Problems with the `idx_video_org_title_status` index:
- The `findSearchVideos()` method uses `video.title.containsIgnoreCase(keyword)`, which translates to SQL `LIKE '%keyword%'` (substring matching).
- A B-tree index is useless for patterns with a leading wildcard (`%`), so this index will never actually be used.
- The current index column order (organization_id, upload_status, title) does not help title search.

Fix:
- Use PostgreSQL's `pg_trgm` extension with a GIN/GiST index:

```sql
CREATE INDEX idx_video_title_trgm ON video USING gin(title gin_trgm_ops);
```

- Or consider Full-Text Search.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/add-indexes.sql` around lines 53 - 58, The current B-tree index idx_video_org_title_status won't help VideoRepositoryImpl.findSearchVideos() because video.title.containsIgnoreCase(...) becomes LIKE '%keyword%' which can't use a leading-wildcard B-tree; replace or supplement it by enabling the pg_trgm extension and creating a trigram GIN/GiST index on title (suggested name idx_video_title_trgm) or implement Full-Text Search; update migration to CREATE EXTENSION IF NOT EXISTS pg_trgm and add a GIN index on video.title (gin_trgm_ops) and remove or keep idx_video_org_title_status only for equality filters (organization_id/upload_status) as appropriate.
src/main/resources/application-nocache.yml-1-3 (1)
1-3: ⚠️ Potential issue | 🟠 Major
Guard the cache-invalidation calls in `updateVideo()` and `deleteVideo()` with the `cacheEnabled` flag.

In VideoService.java, `updateVideo()` (lines 685-686) and `deleteVideo()` (lines 710-711) call `invalidateVideoCache()` and `invalidateHomeCache()` unconditionally, without checking the `cacheEnabled` flag.

By contrast, cache reads (line 422; HomeService line 71) and writes (lines 555 and 609; HomeService line 105) are all guarded by `cacheEnabled`. As a result, Redis cache-invalidation work still happens even with the `nocache` profile active, violating the cache-fully-disabled contract.

Wrap the cache-invalidation calls in both methods in an `if (cacheEnabled)` block.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/resources/application-nocache.yml` around lines 1 - 3, In VideoService.java, the cache invalidation calls in updateVideo() and deleteVideo() must be guarded by the cacheEnabled flag: wrap the calls to invalidateVideoCache(...) and invalidateHomeCache(...) inside an if (cacheEnabled) { ... } block so that when cacheEnabled is false (nocache profile) no Redis invalidation is performed; locate the existing calls in the updateVideo and deleteVideo methods and surround them with the cacheEnabled check consistent with other read/write usages.
k6-tests/shared/auth.js-22-24 (1)
22-24: ⚠️ Potential issue | 🟠 Major
Do not log the entire body of failed responses.

The current error log prints `response.body` verbatim, which risks exposing sensitive data and bloating logs. Log primarily the status code, and include the body only in a limited form when necessary.
🛠️ Proposed fix

```diff
  if (loginRes.status !== 200) {
-   console.error(`로그인 실패: ${loginRes.status} - ${loginRes.body}`);
+   console.error(`로그인 실패: status=${loginRes.status}`);
    return null;
  }
@@
  if (selectRes.status !== 200) {
-   console.error(`조직 선택 실패: ${selectRes.status} - ${selectRes.body}`);
+   console.error(`조직 선택 실패: status=${selectRes.status}`);
    return null;
  }
```

Also applies to: 43-45
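The status-first logging suggested here can be factored into a reusable helper and verified outside k6. The helper name `safeFailureLog` and its message format are illustrative, not part of auth.js:

```javascript
// Build a log message that leads with the status code and includes,
// at most, a short excerpt of the body instead of the full payload.
function safeFailureLog(label, res, maxExcerpt = 80) {
  const excerpt = res.body
    ? `${String(res.body).substring(0, maxExcerpt)}...`
    : '<no body>';
  return `${label} failed: status=${res.status} body=${excerpt}`;
}

const msg = safeFailureLog('login', { status: 401, body: 'x'.repeat(1000) }, 10);
console.log(msg); // login failed: status=401 body=xxxxxxxxxx...
```

Even with a 1000-character body, the resulting log line stays bounded, which addresses both the log-bloat and the accidental-exposure concern.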
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/shared/auth.js` around lines 22 - 24, The error log currently prints the entire response body (console.error using loginRes.body), which risks leaking sensitive data; change the logging in the login failure branch to record only the status and a minimal, truncated or redacted summary of the body (e.g., “body truncated” or first N characters) instead of the full loginRes.body, update the same pattern at the other occurrence around lines 43-45, and keep the message descriptive (e.g., include loginRes.status and a short safe excerpt) so debugging remains possible without exposing full response contents.
k6-tests/shared/auth.js-35-35 (1)
35-35: ⚠️ Potential issue | 🟠 Major
Do not force `orgId` to default to 1; validate it as a required value.

`orgId || 1` can make the test silently target the wrong organization when configuration is missing. Failing fast on a missing value is safer.
🛠️ Proposed fix

```diff
 export function login(baseUrl, email, password, orgId) {
+  if (orgId == null) {
+    console.error('orgId가 필요합니다.');
+    return null;
+  }
@@
-  const selectOrgUrl = `${baseUrl}/orgs/${orgId || 1}`;
+  const selectOrgUrl = `${baseUrl}/orgs/${orgId}`;
```
export function login(baseUrl, email, password, orgId) { + if (orgId == null) { + console.error('orgId가 필요합니다.'); + return null; + } @@ - const selectOrgUrl = `${baseUrl}/orgs/${orgId || 1}`; + const selectOrgUrl = `${baseUrl}/orgs/${orgId}`;🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/shared/auth.js` at line 35, The code currently forces a default orgId with the expression orgId || 1 when building selectOrgUrl (const selectOrgUrl = `${baseUrl}/orgs/${orgId || 1}`); remove that fallback and instead validate that orgId is provided and non-empty before constructing selectOrgUrl — e.g., check the orgId parameter or variable and throw or assert (fail the test) with a clear message if missing, then build selectOrgUrl using the validated orgId; update any callers to supply orgId as required.
src/main/java/app/allstackproject/privideo/domain/home/service/HomeService.java-82-82 (1)
82-82: ⚠️ Potential issue | 🟠 Major
On cache hits, thumbnail URLs may be double-converted to CDN URLs.

The DB-read path already converts URLs to CDN form before caching, but the cache-read path calls `generateImgUrl()` again, which can break the URLs.
🔧 Proposed fix

```diff
- cdnUrlProvider.generateImgUrl(data.get("thumbnailUrl").toString()),
+ data.get("thumbnailUrl").toString(),
```

Also applies to: 131-131, 150-150
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/app/allstackproject/privideo/domain/home/service/HomeService.java` at line 82, the videoInfos.forEach(... info.setThumbnailUrl(cdnUrlProvider.generateImgUrl(...))) call currently runs on both the DB-read path and the cache-read path, so URLs that are already CDN-converted can be converted again and break; to fix, check before calling generateImgUrl whether the thumbnailUrl is already in CDN form (or an absolute URL) and, if so, leave it as-is, otherwise call cdnUrlProvider.generateImgUrl(...) — apply the same guard consistently at the videoInfos.forEach(...) call sites and the generateImgUrl call points near lines 82, 131, and 150 of this file.
scripts/insert-test-data.sql-89-90 (1)
89-90: ⚠️ Potential issue | 🟠 Major
Test-member sampling can mix in existing user data.

On a database that already holds data, the current approach can pull real users into the test organizations as members, breaking reproducibility and data isolation. Explicitly restricting the sample to the generated test accounts is safer.
🔧 Proposed fix

```diff
-FROM users u
-WHERE u.id IN (SELECT id FROM users ORDER BY id LIMIT 50)
+FROM users u
+WHERE u.email LIKE 'testuser%@example.com'
+  AND CAST(SUBSTRING(u.email FROM 'testuser(\d+)@') AS INTEGER) BETWEEN 1 AND 50
```

The same pattern is recommended for the organization 2 / organization 3 blocks, scoped to their test-account ranges.

Also applies to: 106-107, 123-124
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/insert-test-data.sql` around lines 89 - 90, The current sampling WHERE clause (FROM users u WHERE u.id IN (SELECT id FROM users ORDER BY id LIMIT 50)) can pick real user rows; change the subquery to explicitly restrict to test accounts (e.g., select only users with a test flag or a test-email/username pattern) so only generated test accounts are included; update the subquery used in this block (users u / SELECT id FROM users ...) and apply the same change to the equivalent blocks for organization2/organization3 mentioned in the comment so all test-member sampling uses the test-account filter.
k6-tests/results/scenario1-indexing/scenario1-index-result.md-67-80 (1)
67-80: ⚠️ Potential issue | 🟠 Major
The Video Join scenario barely measures success-path performance.

When most requests are repeats for the same user/video and stay on the 409 path, it is hard to identify bottlenecks in the actual "session start succeeded" handling. Split out a separate scenario with a sufficient proportion of success cases and report metrics from it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/scenario1-index-result.md` around lines 67 - 80, The Video Join API load test is dominated by repeated 409 Conflict responses (이미 시청 중) so the success-path (200 session start) performance isn't measured; update the k6 scenario for POST /{orgId}/video/{videoId}/join to generate enough successful joins by either parameterizing requests with unique videoId or userId per virtual user, or adding a separate scenario that performs a single-first-join flow (ensure the request returns 200) before any repeat joins, and ensure the scenario filters/records metrics only for the 200 path so p50/p95 reflect the session-start performance rather than the 409 conflict path.
k6-tests/shared/config.js-4-4 (1)
4-4: ⚠️ Potential issue | 🟠 Major
The default `BASE_URL` scheme disagrees with the run script's default.

If the same test ends up on `http` or `https` depending on how it is launched, you can get connection failures or skewed metrics. Align on a single default.
🔧 Proposed fix

```diff
- baseUrl: __ENV.BASE_URL || 'https://localhost:8080',
+ baseUrl: __ENV.BASE_URL || 'http://localhost:8080',
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/shared/config.js` at line 4, The default baseUrl scheme in k6-tests/shared/config.js is inconsistent with the test run script; update the fallback value for baseUrl (the expression using __ENV.BASE_URL) to use the same scheme as the execution script (e.g., change the literal to 'http://localhost:8080' if the runner uses http) so __ENV.BASE_URL || baseUrl always defaults to the correct scheme and avoids connection/metrics issues.
scripts/insert-test-data.sql-203-238 (1)
203-238: ⚠️ Potential issue | 🟠 Major
Video inserts are non-idempotent, so data accumulates on re-runs.

This block keeps inserting rows with no duplicate protection, so the performance-comparison baseline shifts every time a scenario is repeated. Add an idempotence condition or force a cleanup step before each run.
🔧 Proposed fix

```diff
 FROM organization o
 CROSS JOIN generate_series(1, 500) AS v(seq);
+-- To make re-runs idempotent, apply one of the following:
+-- 1) a UNIQUE key (e.g., video_url) + ON CONFLICT DO NOTHING
+-- 2) a WHERE NOT EXISTS (...) condition to block duplicate inserts
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/insert-test-data.sql` around lines 203 - 238, The INSERT into the video table (INSERT INTO video ... SELECT ... FROM organization o CROSS JOIN generate_series(...) AS v(seq)) is non-idempotent and accumulates duplicates on re-run; make it idempotent by either adding a pre-clean step (DELETE FROM video WHERE title LIKE '테스트 비디오 %' AND organization_id IN (SELECT id FROM organization)) or converting the insert to an upsert/conditional insert: ensure a unique constraint (e.g., on (organization_id, title) or a test-data marker column) and use ON CONFLICT DO NOTHING or wrap the SELECT with a WHERE NOT EXISTS subquery that checks for existing rows for the same organization_id/title; reference the video_seq sequence, INSERT INTO video statement, organization alias o and generate_series v(seq) when updating the script.
k6-tests/shared/config.js-9-12 (1)
9-12: ⚠️ Potential issue | 🟠 Major
Hard-coded default IDs of `1` invite scenario failures.

Test-data IDs can change between runs, so fixed defaults produce high failure rates and pollute performance comparisons. Requiring the values as environment variables is safer.
🔧 Proposed fix

```diff
+const required = ['USER_ID', 'MEMBER_ID', 'ORG_ID', 'VIDEO_ID', 'EMAIL', 'PASSWORD'];
+required.forEach((k) => {
+  if (!__ENV[k]) throw new Error(`Missing required env: ${k}`);
+});
+
 export const config = {
   // 테스트 데이터
   testData: {
-    userId: __ENV.USER_ID || 1,
-    memberId: __ENV.MEMBER_ID || 1,
-    orgId: __ENV.ORG_ID || 1,
-    videoId: __ENV.VIDEO_ID || 1,
+    userId: Number(__ENV.USER_ID),
+    memberId: Number(__ENV.MEMBER_ID),
+    orgId: Number(__ENV.ORG_ID),
+    videoId: Number(__ENV.VIDEO_ID),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/shared/config.js` around lines 9 - 12, The config currently defaults userId, memberId, orgId, and videoId to 1 which causes flaky tests; update k6-tests/shared/config.js to remove these hardcoded defaults and instead assert that __ENV.USER_ID, __ENV.MEMBER_ID, __ENV.ORG_ID, and __ENV.VIDEO_ID are present, throwing a clear error (or exiting) if any are missing so the test run fails fast and requires explicit environment variables for userId, memberId, orgId, and videoId.
k6-tests/video-join-api-test.js-94-95 (1)
94-95: ⚠️ Potential issue | 🟠 Major
Error-rate aggregation and failure logging can miss real failures.

Line 94 can miss transport errors (`status === 0`, `response.error`), and line 117 can throw when the body is empty.
🔧 Proposed fix

```diff
- errorRate.add(response.status >= 400);
+ const isRequestError = response.status === 0 || response.status >= 400 || !!response.error;
+ errorRate.add(isRequestError);
@@
  if (!success && response.status !== 503 && response.status !== 504) {
-   console.error(`영상 세션 시작 API 실패: ${response.status} - ${response.body.substring(0, 200)}`);
+   const bodyPreview = response.body ? String(response.body).substring(0, 200) : '<empty>';
+   console.error(`영상 세션 시작 API 실패: ${response.status} - ${bodyPreview}`);
  }
```

Also applies to: 116-118
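Extracted as a pure function, the suggested classification can be exercised outside k6; the `response` objects below are plain stand-ins for k6's `http.Response`, and `isRequestError` is an illustrative name:

```javascript
// Classify a k6-style response as a request error, covering transport
// failures (status 0, error field set) in addition to HTTP 4xx/5xx.
function isRequestError(response) {
  return response.status === 0 || response.status >= 400 || !!response.error;
}

console.log(isRequestError({ status: 200, error: '' }));             // false
console.log(isRequestError({ status: 409, error: '' }));             // true
console.log(isRequestError({ status: 0, error: 'dial: timeout' }));  // true
```

The `status === 0` case is what the original `response.status >= 400` check misses: k6 reports connection-level failures with status 0 and a populated `error` field, so they would otherwise never reach the custom error rate.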
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/video-join-api-test.js` around lines 94 - 95, The current failure counting and log handling misses transport errors and can crash on empty bodies; update the k6 test around errorRate.add and response handling so failures include transport errors (check response.status === 0 or response.error and call errorRate.add(true) and log response.error) and only parse response.body after confirming it's non-empty (check response.body or typeof response.body === 'string' and length > 0 before JSON.parse or accessing fields), adjusting the code around errorRate.add, response.status, response.error and the body parsing/inspection at the block referencing response.body to avoid exceptions on empty responses.
k6-tests/home-api-test.js-77-77 (1)
77-77: ⚠️ Potential issue | 🟠 Major
Error-rate aggregation and failure logging miss some failure cases.

Line 77 does not account for `status === 0`/`response.error`, and line 95 can throw at runtime on an empty body.
🔧 Proposed fix

```diff
- errorRate.add(response.status >= 400);
+ const isRequestError = response.status === 0 || response.status >= 400 || !!response.error;
+ errorRate.add(isRequestError);
@@
  if (!success) {
-   console.error(`홈 조회 API 실패: ${response.status} - ${response.body.substring(0, 200)}`);
+   const bodyPreview = response.body ? String(response.body).substring(0, 200) : '<empty>';
+   console.error(`홈 조회 API 실패: ${response.status} - ${bodyPreview}`);
  }
```

Also applies to: 94-96
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/home-api-test.js` at line 77, The current error aggregation call only checks response.status >= 400 and misses network/errors (response.status === 0 or response.error) and later code accesses response.body without guarding for empty responses; update the errorRate.add call (errorRate.add(...)) to treat a response with response.status === 0 or response.error truthy as a failure (e.g., errorRate.add(response.error || response.status === 0 || response.status >= 400)), and add a defensive check before using response.body in the block around the code at lines ~94-96 (e.g., ensure response.body is non-empty or truthy before parsing/inspecting it) to avoid runtime errors.
k6-tests/history-api-test.js-76-76 (1)
76-76: ⚠️ Potential issue | 🟠 Major
The custom error rate and failure logging under-count failures and are fragile.

Line 76 can miss transport errors, and line 102 can throw when the body is null.
🔧 Proposed fix

```diff
- errorRate.add(response.status >= 400);
+ const isRequestError = response.status === 0 || response.status >= 400 || !!response.error;
+ errorRate.add(isRequestError);
@@
  if (!success) {
-   console.error(`시청 기록 조회 API 실패: ${response.status} - ${response.body.substring(0, 200)}`);
+   const bodyPreview = response.body ? String(response.body).substring(0, 200) : '<empty>';
+   console.error(`시청 기록 조회 API 실패: ${response.status} - ${bodyPreview}`);
  }
```

Also applies to: 101-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/history-api-test.js` at line 76, The custom error-rate and failure logging miss transport errors and can throw when response.body is null; update the errorRate.add call (variable errorRate) to treat missing/errored responses as failures (e.g., consider !response || response.error || response.status >= 400) and guard any parsing or access to response.body (in the block around where response.body is used, e.g., lines referenced near the parsing logic) with a null check so you don't call JSON.parse or access properties when response.body is null—log a clear failure when response is missing/errored and only parse body when response && response.body exist.
k6-tests/run-test.sh-21-24 (1)
21-24: ⚠️ Potential issue | 🟠 Major
The way environment variables are loaded and passed corrupts values and misbehaves.

`export $(...)` on line 23 and the unquoted string expansion on line 74 can break values that contain spaces or special characters.
🔧 Proposed fix

```diff
 if [ -f .env ]; then
   echo "📁 .env 파일 로드 중..."
-  export $(grep -v '^#' .env | xargs)
+  while IFS='=' read -r key value; do
+    [[ -z "${key}" || "${key}" =~ ^[[:space:]]*# ]] && continue
+    export "${key}=${value}"
+  done < .env
 fi
@@
-K6_ENV_OPTS="--env BASE_URL=$BASE_URL \
-  --env EMAIL=$EMAIL \
-  --env PASSWORD=$PASSWORD \
-  --env USER_ID=$USER_ID \
-  --env MEMBER_ID=$MEMBER_ID \
-  --env ORG_ID=$ORG_ID \
-  --env VIDEO_ID=$VIDEO_ID \
-  --env VUS=$VUS \
-  --env DURATION=$DURATION"
+K6_ENV_OPTS=(
+  --env "BASE_URL=$BASE_URL"
+  --env "EMAIL=$EMAIL"
+  --env "PASSWORD=$PASSWORD"
+  --env "USER_ID=$USER_ID"
+  --env "MEMBER_ID=$MEMBER_ID"
+  --env "ORG_ID=$ORG_ID"
+  --env "VIDEO_ID=$VIDEO_ID"
+  --env "VUS=$VUS"
+  --env "DURATION=$DURATION"
+)
@@
-  k6 run $K6_ENV_OPTS \
+  k6 run "${K6_ENV_OPTS[@]}" \
     --out json="$output_file" \
     "$test_file"
```

Also applies to: 53-61, 74-76
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/run-test.sh` around lines 21 - 24, The current .env loading uses `export $(grep -v '^#' .env | xargs)` which breaks values with spaces/special chars and there are unquoted expansions later (around the block referenced at lines ~74) risking word-splitting; replace the export-xargs approach with a safe loader (e.g., enable automatic export with `set -a` then `source .env` and `set +a`, or iterate lines with `while IFS= read -r line; do` skipping comments and empty lines and exporting each raw line) and audit the later usages to ensure all variable expansions are quoted (change any unquoted $VAR to "$VAR") so values with spaces/special characters are preserved; look for the .env loading block and the expansions referenced around line 74 to apply these changes.
k6-tests/run-scenario.sh-180-187 (1)
180-187: ⚠️ Potential issue | 🟠 Major
Swallowing k6 failures with `|| true` distorts scenario results.

Line 185 ignores failures and line 186 then reports success, so threshold violations go unnoticed.
🔧 Proposed fix

```diff
 run_k6_tests() {
   local result_dir="$1"
   local result_prefix="$2"
   local label="$3"
+  local failed=0
@@
-    k6 run \
+    if k6 run \
       --insecure-skip-tls-verify \
       -e RESULT_DIR="$result_dir" \
       -e RESULT_PREFIX="$prefix" \
       -e BASE_URL="$BASE_URL" \
-      "$SCRIPT_DIR/$script" || true
-    print_success "[$label] $test_name 완료"
+      "$SCRIPT_DIR/$script"; then
+      print_success "[$label] $test_name 완료"
+    else
+      print_error "[$label] $test_name 실패"
+      failed=1
+    fi
     print_separator
   done
+  return $failed
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/run-scenario.sh` around lines 180 - 187, The script currently swallows k6 failures by appending "|| true" to the k6 run command, then always prints success; remove "|| true" from the k6 invocation and instead capture the k6 exit code after running "$SCRIPT_DIR/$script" (e.g., rc=$?); use that rc to decide whether to call print_success or a failure logging function (or print_failure) and then call print_separator; ensure non-zero rc is propagated (exit $rc) so CI can detect failures.
src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java-684-687 (1)
684-687: ⚠️ Potential issue | 🟠 Major
The admin delete path skips cache invalidation, risking stale cache.

`VideoAdminService.deleteVideo()` deletes the database rows and S3 files but does not invalidate the cache, whereas the update/delete paths in `VideoService` do. After an admin deletes a video, stale data left in the cache can remain visible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java` around lines 684 - 687, In VideoAdminService.deleteVideo() add the same cache invalidation performed in VideoService (call videoRedisRepository.invalidateVideoCache(videoId) and videoRedisRepository.invalidateHomeCache(orgId)) after the database and S3 deletions complete; ensure you reference the existing videoId and orgId variables and invoke the invalidation on the same videoRedisRepository instance so admin deletes do not leave stale cache entries.
src/main/java/app/allstackproject/privideo/domain/video/repository/VideoRedisRepository.java-222-224 (1)
222-224: ⚠️ Potential issue | 🟠 Major
The `KEYS` command risks blocking Redis.

`redisTemplate.keys(pattern)` on lines 222 and 241 is a blocking O(N) operation, so latency grows as the Redis keyspace does. Switching to incremental `SCAN`-based lookup removes the blocking.
🔧 Proposed implementation

```diff
- Set<String> keys = redisTemplate.keys(pattern);
+ Set<String> keys = new HashSet<>();
+ Cursor<String> cursor = redisTemplate.scan(ScanOptions.scanOptions().match(pattern).build());
+ while (cursor.hasNext()) {
+     keys.add(cursor.next());
+ }
+ cursor.close();
```

Alternatively, extracting this into a `scanKeys(pattern)` helper keeps the logic in one maintainable place.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/java/app/allstackproject/privideo/domain/video/repository/VideoRedisRepository.java` around lines 222 - 224, The current use of redisTemplate.keys(pattern) in VideoRedisRepository causes blocking O(N) scans; replace both calls to redisTemplate.keys(pattern) with a non-blocking SCAN-based approach: add a helper method scanKeys(String pattern) in VideoRedisRepository that uses RedisTemplate.execute(RedisCallback) or RedisConnection.scan(ScanOptions.match(pattern).count(...)) to iterate a Cursor<byte[]> and collect matching keys into a Set<String>, then delete keys in batches via redisTemplate.delete(Collection<String>); update the original code to call scanKeys(pattern) instead of redisTemplate.keys(pattern).
k6-tests/video-join-api-test.js-4-4 (1)
4-4: ⚠️ Potential issue | 🟠 Major
Importing a remote module directly from the `main` branch breaks test-result reproducibility.

Pin the `htmlReport` import on line 4 to a commit SHA or release tag. Specifying a version, as the `textSummary` import on line 5 already does, guarantees tests that are stable against upstream changes.
Recommended form:

```javascript
// pin to a commit SHA (most stable)
import { htmlReport } from 'https://raw.githubusercontent.com/benc-uk/k6-reporter/<commit-sha>/dist/bundle.js';

// or pin to a release tag
import { htmlReport } from 'https://raw.githubusercontent.com/benc-uk/k6-reporter/v1.0.0/dist/bundle.js';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/video-join-api-test.js` at line 4, The import for htmlReport is pinned to the main branch which breaks reproducibility; update the import of htmlReport in video-join-api-test.js to use a fixed commit SHA or a release tag (same style as the pinned textSummary import on line 5) so the module is immutable—replace the current URL with one using either a specific commit SHA (recommended) or a version tag (e.g. vX.Y.Z) while keeping the same path and export name htmlReport.
🟡 Minor comments (3)
k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-30-47-summary.json-244-246 (1)
244-246: ⚠️ Potential issue | 🟡 Minor
Security caution: a JWT token is committed to version control.

`setup_data.token` contains a JWT. Even a test token exposes sensitive claims such as the user ID, organization ID, and permissions.

Add the test result files to `.gitignore`, or mask the token in `handleSummary`:

```javascript
// Example of masking the token in handleSummary
if (data.setup_data && data.setup_data.token) {
  data.setup_data.token = '[REDACTED]';
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-30-47-summary.json` around lines 244 - 246, The JSON contains a JWT in setup_data.token which must be removed or redacted; update the summary handling to scrub/mask that field (e.g., modify the handleSummary routine to detect data.setup_data.token and replace it with a redacted placeholder) and/or ensure the generated results file is excluded from version control by adding the results directory or this file pattern to .gitignore so tokens are never committed.
k6-tests/shared/auth.js-4-10 (1)
4-10: ⚠️ Potential issue | 🟡 Minor
The comments and the implementation disagree.

The JSDoc implies environment-variable-based token lookup and an `orgId` argument, but the implementation/signature does not match. Align the comments with the actual behavior.
🛠️ Proposed fix

```diff
 /**
  * 로그인하여 JWT 토큰을 발급받습니다.
  * @param {string} baseUrl - API 서버 기본 URL
  * @param {string} email - 사용자 이메일
  * @param {string} password - 사용자 비밀번호
+ * @param {number|string} orgId - 조직 ID
  * @returns {string|null} JWT 토큰 또는 null
  */
@@
 /**
- * 토큰을 환경 변수나 공유 데이터에서 가져옵니다.
+ * 토큰을 공유 데이터에서 가져옵니다.
  * @param {object} sharedData - k6 공유 데이터 객체
  * @returns {string|null} JWT 토큰 또는 null
  */
```

Also applies to: 72-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/shared/auth.js` around lines 4 - 10, Update the JSDoc to match the actual implementation of the login function: document that login(baseUrl, email, password, orgId) accepts an optional orgId, that it first checks for an environment/token override and returns that if present, and otherwise performs the network/login flow to obtain a JWT; make the `@param` tags reflect the actual parameters (including orgId and whether it's optional) and adjust the `@returns` description to match the real return value (JWT string or null). Apply the same correction to the other comment block around lines 72-74 so all JSDoc comments accurately reflect the implemented behavior.
k6-tests/run-test.sh-18-18 (1)
18-18: ⚠️ Potential issue | 🟡 Minor
Handle `cd` failure explicitly.

Line 18 can continue executing even when the directory change fails.
🔧 Proposed fix

```diff
-cd "$SCRIPT_DIR"
+cd "$SCRIPT_DIR" || exit 1
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/run-test.sh` at line 18, the script can currently keep running even if cd "$SCRIPT_DIR" fails; check the return value immediately after the cd "$SCRIPT_DIR" call, print a clear error message to stderr on failure, and exit non-zero (1). (Reference points: the SCRIPT_DIR variable and the cd "$SCRIPT_DIR" command.)
🧹 Nitpick comments (12)
.gitignore (1)
47-49: Scope the certificate ignore patterns to a path instead of global globs.

Line 47 already ignores the directory, so the global patterns on lines 48-49 are redundant and can also block tracking of `.pem`/`.p12` files anywhere in the repository.
Proposed diff

```diff
 src/main/resources/cert/
-*.pem
-*.p12
+/src/main/resources/cert/*.pem
+/src/main/resources/cert/*.p12
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 47 - 49, the global glob patterns (*.pem, *.p12) can block certificate-file tracking across the whole repository; remove the global patterns and keep only the directory-scoped ones: keep the existing src/main/resources/cert/ directory ignore and replace/remove the global "*.pem" and "*.p12" entries so that only src/main/resources/cert/*.pem and src/main/resources/cert/*.p12 (or just the directory ignore) are used; ensure no duplicate entries remain and that only files under src/main/resources/cert/ are ignored.
docs/test/load_test_guide.md (2)
33-39: Prefer environment variables in the example SQL commands.

The examples hard-code a username (`seohyun`) and database name (`fisa`). For consistency with the environment-variable reference section (chapter 8), use `$DB_USER` and `$DB_NAME`:
Proposed fix

```diff
 # 기존 데이터 초기화 (선택)
-psql -h localhost -U seohyun -d fisa -f scripts/reset-test-data.sql
+psql -h localhost -U ${DB_USER:-postgres} -d ${DB_NAME:-privideo} -f scripts/reset-test-data.sql

 # 대용량 테스트 데이터 삽입
-psql -h localhost -U seohyun -d fisa -f scripts/insert-test-data.sql
+psql -h localhost -U ${DB_USER:-postgres} -d ${DB_NAME:-privideo} -f scripts/insert-test-data.sql
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/test/load_test_guide.md` around lines 33 - 39, Replace hardcoded DB user and name in the example psql commands with environment variables by changing the two psql invocations that reference the literal "seohyun" and "fisa" to use $DB_USER and $DB_NAME instead; update the lines that run reset-test-data.sql and insert-test-data.sql so they read psql -h localhost -U $DB_USER -d $DB_NAME -f scripts/reset-test-data.sql and psql -h localhost -U $DB_USER -d $DB_NAME -f scripts/insert-test-data.sql, and add a brief note above the commands reminding readers to export DB_USER and DB_NAME (or cite the env var section) before running them.
48-48: Docs/lint mismatch: fenced code blocks without a language.

As the static-analysis hints flag, several fenced code blocks have no language specified (lines 62, 126, 138, 180, 212, 254, 267). Specify `text` or an appropriate language to resolve the markdown-lint warnings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/test/load_test_guide.md` at line 48, Several fenced code blocks in load_test_guide.md are missing a language tag causing markdown-linter warnings; open the markdown and add an explicit language specifier (e.g., use ```text or the appropriate language like ```json/```bash) to each fenced block (the examples and sample output/code blocks referenced by the static analysis) so every ``` fenced block has a language label.
k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-10-27-summary.json (1)
1-92: Recommend excluding test-result artifacts from version control.

This file appears to have been generated without a real test run:

- `testRunDurationMs: 2.006` (about 2 ms, extremely short)
- All metric values are 0 (`count: 0`, `rate: 0`)

Test-result JSON/HTML files vary across execution environments, so it is better to add them to `.gitignore` and manage them as CI/CD artifacts.

```diff
+# Add to .gitignore
+k6-tests/results/**/*.json
+k6-tests/results/**/*.html
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-10-27-summary.json` around lines 1 - 92, The file contains generated test artifact values (e.g., "testRunDurationMs" and zeroed metrics like "data_sent", "data_received", "http_req_duration") and should not be committed; remove this JSON from the repository, add an entry to .gitignore to exclude these test result files (match by filename pattern used for k6 summaries or the directory "k6-tests/results/"), and update CI to store these artifacts instead of versioning them so future runs produce artifacts only in CI storage.

k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-27-41.html (1)
1-925: Recommend excluding generated test-result artifacts from version control.

This HTML report is an artifact auto-generated by k6-reporter. The results show a 99.94% failure rate and 10,068 failed checks, which may indicate a test-environment configuration problem.

Generated report files should be managed as CI/CD artifacts and added to `.gitignore`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-27-41.html` around lines 1 - 925, Generated k6 HTML test-report artifact is checked into version control; exclude these auto-generated reports and store them as CI artifacts instead. Add a rule to your repository's ignore file to exclude patterns matching k6-reporter outputs (e.g., the HTML report names or directory where reports are written), remove the tracked report from the index (git rm --cached) so it stops being committed, and update CI to persist the report directory as build artifacts; look for outputs named like the shown HTML report produced by k6-reporter (the report filename pattern and the reporter output directory in your test pipeline) to decide the exact ignore pattern.

k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-10-27.html (1)
486-510: Empty test-result report.

This report shows an empty Total Requests value, and most metrics are 0 or blank. The test appears not to have run properly. Please review whether this incomplete result file needs to be kept in the repository.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-10-27.html` around lines 486 - 510, The HTML report contains empty/zero metrics (elements with class "metric-card" and inner "metric-value", e.g., the "Total Requests" h4 and other metric h4s showing 0), indicating an incomplete test run; remove this partial report from the commit and either regenerate a valid k6 report from a successful run or replace it with a placeholder (or CI artifact) that clearly marks it as transient. Additionally, add a check in the test/reporting pipeline to fail or skip persisting reports when "Total Requests" or key "metric-value" elements are zero/empty so future incomplete reports (the shown metric-card/metric-value entries) are not committed.

k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-12-55-summary.json (1)
83-90: Abnormal test results.

In this result file, `iterations` is 3,973,345 while `http_reqs` is only 1. In addition, `setup_data` is null, so the auth setup may have failed. This indicates the test did not run correctly; please review whether these incomplete result files belong in the repository.

Also applies to: 203-203

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-12-55-summary.json` around lines 83 - 90, The summary JSON shows inconsistent metrics (iterations: 3973345 vs http_reqs: 1) and setup_data:null indicating the test likely failed and incomplete result files should not be committed; add a post-test validation step (e.g., in the k6 run wrapper or summary processing function) that parses the produced JSON and checks that "setup_data" is non-null and that "iterations.values.count" and "http_reqs.values.count" are sane and consistent (reject when http_reqs.count is zero or orders-of-magnitude mismatch with iterations.count), and if validation fails, abort publishing/saving the artifact and surface a clear error; update the validation logic used by whatever generates/handles the summary (referencing "iterations", "http_reqs", and "setup_data") so incomplete or failed test outputs are rejected by CI or excluded from the repo.

k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-12-55.html (1)
1-839: Auto-generated test report file.

This is a static HTML report auto-generated by k6-reporter. It contains only a single request, which appears to have failed, so it looks like an incomplete test run. Please review whether incomplete test results should be kept in the results directory.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-12-55.html` around lines 1 - 839, Auto-generated HTML report for a partial/failed k6 run was committed; we should avoid emitting incomplete reports. In the report generation path (e.g., functions generateReport() / writeReportFile() or the CI step that runs k6-reporter), add a validation check that inspects the run summary (totalRequests, failedRequests, or an "incomplete" flag) and skip writing or uploading the HTML when it indicates an incomplete run (for example totalRequests below a configurable threshold or failedRequests > 0); instead log/emit a compact failure artifact and ensure the CI/collector only archives reports when generateReport() passes this validation.

k6-tests/results/scenario1-indexing/after-index-home-api-2026-03-05T01-40-34.html (1)
1-918: Avoid committing generated (timestamped HTML) reports.

Files of this type change on every run, creating heavy PR-diff noise and repository bloat. A more stable setup versions only the sources (scripts and summary templates) and uploads the HTML/JSON results as CI artifacts.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/results/scenario1-indexing/after-index-home-api-2026-03-05T01-40-34.html` around lines 1 - 918, This PR includes a generated timestamped HTML report (the timestamped K6 HTML report in the diff) which should not be committed; remove the generated report from the repo, run git rm --cached on the file so it’s no longer tracked, add a rule to .gitignore to ignore these timestamped k6/html/json outputs, and update the CI workflow that runs the k6 test (or the report generator) to upload the HTML/JSON as pipeline artifacts instead of committing them; also validate the repository build scripts (the k6 reporter invocation) to only write outputs to a CI artifact directory.

src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java (1)
556-563: The cache payload construction is duplicated.

The same Map-building code is repeated in two places, so a field change risks being missed in one of them. Consolidating it into a helper method is recommended.

♻️ Suggested fix

```diff
-        Map<String, Object> videoInfoForCache = new HashMap<>();
-        videoInfoForCache.put("title", video.getTitle());
-        videoInfoForCache.put("description", video.getDescription());
-        videoInfoForCache.put("thumbnailUrl", thumbnailUrl);
-        videoInfoForCache.put("watchCnt", video.getWatchCnt());
-        videoInfoForCache.put("expiredAt", video.getExpiredAt());
-        videoInfoForCache.put("isComment", video.getIsComment());
-        videoRedisRepository.cacheVideoInfo(videoId, videoInfoForCache);
+        videoRedisRepository.cacheVideoInfo(videoId, toVideoInfoCachePayload(video, thumbnailUrl));
@@
-        Map<String, Object> videoInfoForCache = new HashMap<>();
-        videoInfoForCache.put("title", video.getTitle());
-        videoInfoForCache.put("description", video.getDescription());
-        videoInfoForCache.put("thumbnailUrl", thumbnailUrl);
-        videoInfoForCache.put("watchCnt", video.getWatchCnt());
-        videoInfoForCache.put("expiredAt", video.getExpiredAt());
-        videoInfoForCache.put("isComment", video.getIsComment());
-        videoRedisRepository.cacheVideoInfo(videoId, videoInfoForCache);
+        videoRedisRepository.cacheVideoInfo(videoId, toVideoInfoCachePayload(video, thumbnailUrl));
```

```java
private Map<String, Object> toVideoInfoCachePayload(Video video, String thumbnailUrl) {
    Map<String, Object> payload = new HashMap<>();
    payload.put("title", video.getTitle());
    payload.put("description", video.getDescription());
    payload.put("thumbnailUrl", thumbnailUrl);
    payload.put("watchCnt", video.getWatchCnt());
    payload.put("expiredAt", video.getExpiredAt());
    payload.put("isComment", video.getIsComment());
    return payload;
}
```

Also applies to: 610-617

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java` around lines 556 - 563, The Map building for the video cache payload is duplicated in VideoService (used around the cacheVideoInfo calls); extract this into a single helper method (e.g., toVideoInfoCachePayload(Video video, String thumbnailUrl)) that returns the Map<String,Object> and replace both inline constructions (the block that creates videoInfoForCache and the similar block at the other location around lines 610-617) with calls to that helper, then pass its result to videoRedisRepository.cacheVideoInfo(videoId, ...).

k6-tests/run-scenario.sh (1)
152-153: Use `--scan` instead of `KEYS` when flushing the cache.

The `KEYS` calls on lines 152-153 can stall Redis when there is a lot of data, adding noise to the load-test results.

♻️ Suggested fix

```diff
- home_keys=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" KEYS "home:*" 2>/dev/null || true)
- video_keys=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" KEYS "video:*:info" 2>/dev/null || true)
+ home_keys=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" --scan --pattern "home:*" 2>/dev/null || true)
+ video_keys=$(redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" --scan --pattern "video:*:info" 2>/dev/null || true)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@k6-tests/run-scenario.sh` around lines 152 - 153, The script uses redis-cli KEYS which can block Redis for large datasets; replace those two calls (the home_keys and video_keys assignments) with non-blocking SCAN via redis-cli --scan --pattern "home:*" and --scan --pattern "video:*:info" and iterate/collect results (e.g., using a while read loop or mapfile) rather than relying on KEYS, so the code still finds matching keys without blocking the server; update references to home_keys and video_keys to use the new collection method.

k6-tests/video-join-api-test.js (1)
55-55: Trim whitespace and drop empty values when parsing `videoIds`.

Line 55 can include invalid IDs for input like `1, 2,`.

♻️ Suggested fix

```diff
- const videoIds = testData.videoIds ? testData.videoIds.split(',') : [testData.videoId];
+ const parsedVideoIds = testData.videoIds
+   ? testData.videoIds.split(',').map((v) => v.trim()).filter(Boolean)
+   : [];
+ const videoIds = parsedVideoIds.length > 0 ? parsedVideoIds : [String(testData.videoId)];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@k6-tests/video-join-api-test.js` at line 55, videoIds parsing can produce empty or whitespace-only IDs (e.g., "1, 2,") — update the logic that builds videoIds from testData.videoIds to split, trim each entry and filter out empty strings before falling back to [testData.videoId]; modify the code that assigns videoIds (referencing the videoIds variable and testData.videoIds/testData.videoId) to use a split → map(trim) → filter(Boolean) pipeline so only valid non-empty IDs remain.
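The same split → trim → filter pipeline can be sketched in Java for comparison (class and method names here are hypothetical, and the input strings are made-up examples, not values from the k6 script):

```java
import java.util.Arrays;
import java.util.List;

public class VideoIdParsing {
    // Split a comma-separated ID list, dropping blanks left behind by
    // trailing commas or stray whitespace (e.g., "1, 2," -> ["1", "2"]).
    static List<String> parseVideoIds(String raw) {
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(parseVideoIds("1, 2,")); // [1, 2]
        System.out.println(parseVideoIds(" ,, ").isEmpty()); // true
    }
}
```

Filtering before falling back to a default keeps a lone trailing comma from silently producing an empty-string ID.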
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 42ad5efb-0452-463e-b74a-cfba1925a377
📒 Files selected for processing (40)
- .gitignore
- docs/test/load_test_guide.md
- docs/test/load_test_scenario.md
- k6-tests/history-api-test.js
- k6-tests/home-api-test.js
- k6-tests/results/scenario1-indexing/after-index-history-api-2026-03-05T01-42-39-summary.json
- k6-tests/results/scenario1-indexing/after-index-history-api-2026-03-05T01-42-39.html
- k6-tests/results/scenario1-indexing/after-index-home-api-2026-03-05T01-40-34-summary.json
- k6-tests/results/scenario1-indexing/after-index-home-api-2026-03-05T01-40-34.html
- k6-tests/results/scenario1-indexing/after-index-video-join-api-2026-03-05T01-46-11-summary.json
- k6-tests/results/scenario1-indexing/after-index-video-join-api-2026-03-05T01-46-11.html
- k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-27-41-summary.json
- k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-27-41.html
- k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-30-47-summary.json
- k6-tests/results/scenario1-indexing/before-index-history-api-2026-03-05T01-30-47.html
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-10-27-summary.json
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-10-27.html
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-12-55-summary.json
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-12-55.html
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-20-13-summary.json
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-20-13.html
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-25-41-summary.json
- k6-tests/results/scenario1-indexing/before-index-home-api-2026-03-05T01-25-41.html
- k6-tests/results/scenario1-indexing/before-index-video-join-api-2026-03-05T01-34-21-summary.json
- k6-tests/results/scenario1-indexing/before-index-video-join-api-2026-03-05T01-34-21.html
- k6-tests/results/scenario1-indexing/scenario1-index-result.md
- k6-tests/run-scenario.sh
- k6-tests/run-test.sh
- k6-tests/shared/auth.js
- k6-tests/shared/config.js
- k6-tests/video-join-api-test.js
- scripts/add-indexes.sql
- scripts/drop-indexes.sql
- scripts/insert-test-data.sql
- scripts/reset-test-data.sql
- src/main/java/app/allstackproject/privideo/domain/home/service/HomeService.java
- src/main/java/app/allstackproject/privideo/domain/video/repository/VideoRedisRepository.java
- src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java
- src/main/java/app/allstackproject/privideo/global/util/RedisUtil.java
- src/main/resources/application-nocache.yml
```java
(String) cachedInfo.get("title"),
(String) cachedInfo.get("description"),
(String) cachedInfo.get("thumbnailUrl"),
Long.valueOf(cachedInfo.get("watchCnt").toString()),
(java.time.LocalDate) cachedInfo.get("expiredAt"),
Boolean.valueOf(cachedInfo.get("isComment").toString()),
```
🧩 Analysis chain
🏁 Scripts executed (repository: AllStackProject/Backend; per-script output lengths omitted):

```shell
# Locate VideoService.java
fd -t f "VideoService.java" --exact

# Read the relevant sections of VideoService.java to understand the cache mechanism
head -n 700 src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java | tail -n +450 | head -n 80

# Look for where the cache is being populated/created to understand what type expiredAt is stored as
grep -n "expiredAt" src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java | head -20

# Check for Jackson configuration and version in the project
fd -t f "pom.xml|build.gradle" --exec grep -l "jackson" {} \;

# Look for the cache creation logic to see how the Map is built
grep -n "HashMap\|put.*expiredAt\|cachedInfo\|redisTemplate" src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java | head -30

# Find the Video entity to see what getExpiredAt() returns
fd -t f "Video.java" | grep -E "domain|entity"

# Check the cache configuration
fd -t f -name "*Cache*" -o -name "*Redis*"

# Look at build.gradle for dependencies (especially Jackson and Redis)
cat build.gradle | grep -A 50 "dependencies"

# Search for cache serialization/deserialization configuration
rg -t java "ObjectMapper|Jackson|RedisTemplate|SerializationContext" --max-count=20

# Check Video entity's expiredAt field type
grep -n "expiredAt\|getExpiredAt" src/main/java/app/allstackproject/privideo/domain/video/entity/Video.java

# Look at VideoRedisRepository to understand cache serialization
cat src/main/java/app/allstackproject/privideo/domain/video/repository/VideoRedisRepository.java

# Check JacksonConfig for date serialization customization
cat src/main/java/app/allstackproject/privideo/global/config/JacksonConfig.java

# Check if there are any existing tests or examples of LocalDate serialization in the codebase
rg -t java "LocalDate.*serialize|readValue.*LocalDate" --max-count=10

# Look at how other parts of the code handle LocalDate deserialization
rg -t java "LocalDate\.parse|LocalDate\.from" --max-count=15
```
Directly casting expiredAt from cachedInfo to LocalDate throws a ClassCastException at runtime.

With Jackson's default configuration, LocalDate is serialized as an ISO-8601 string (e.g., "2025-03-01"), but when deserializing through TypeReference<Map<String, Object>> there is no type information, so the value is restored as a plain String. The direct casts on lines 465 and 505 therefore always throw ClassCastException on a cache hit.

🔧 Suggested fix

```java
// Utility method inside the class (new)
private java.time.LocalDate parseLocalDate(Object value) {
    if (value == null) {
        return null;
    }
    return java.time.LocalDate.parse(value.toString());
}
```

Then replace the cast:

```diff
- (java.time.LocalDate) cachedInfo.get("expiredAt"),
+ parseLocalDate(cachedInfo.get("expiredAt")),
```

Also applies to: 500-506
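The failure mode and the proposed helper can be shown with a minimal, self-contained sketch. The map below is hand-built to stand in for what a Jackson Map<String, Object> round-trip produces (a plain ISO-8601 String), and the class name is made up for the demo:

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

public class CachedDateDemo {
    // Mirrors the suggested helper: tolerate the String form that a
    // type-erased Jackson deserialization leaves in the map.
    static LocalDate parseLocalDate(Object value) {
        if (value == null) {
            return null;
        }
        return LocalDate.parse(value.toString());
    }

    public static void main(String[] args) {
        // After the cache round-trip, the date is a String, not a LocalDate.
        Map<String, Object> cachedInfo = new HashMap<>();
        cachedInfo.put("expiredAt", "2025-03-01");

        // (LocalDate) cachedInfo.get("expiredAt") would throw ClassCastException.
        LocalDate expiredAt = parseLocalDate(cachedInfo.get("expiredAt"));
        System.out.println(expiredAt); // 2025-03-01
        System.out.println(parseLocalDate(null)); // null
    }
}
```

Parsing via `toString()` also keeps the helper working if a real LocalDate ever reaches it, since `LocalDate.toString()` is the same ISO-8601 form.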
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@src/main/java/app/allstackproject/privideo/domain/video/service/VideoService.java`
around lines 461 - 466, In VideoService, avoid direct casting of
cachedInfo.get("expiredAt") to java.time.LocalDate (which causes
ClassCastException); add a private helper like parseLocalDate(Object value) that
returns null for null and otherwise returns
java.time.LocalDate.parse(value.toString()), then replace occurrences that cast
cachedInfo.get("expiredAt") (e.g., in constructors/builders around the
cachedInfo usage at the expiredAt positions) with
parseLocalDate(cachedInfo.get("expiredAt")); apply the same replacement for the
other similar occurrences (the block around lines 500-506) that currently cast
expiredAt directly.
📝 Summary

🔖 Changes

Load-test infrastructure

DB indexing
- 20 custom indexes created (add-indexes.sql)
- Index rollback, the inverse operation (drop-indexes.sql)

Redis cache optimization
- VideoRedisRepository, RedisUtil: home/video cache read/write implementation
- HomeService, VideoService: @Value-based cache toggle added
- application-nocache.yml: Spring profile that disables caching

Test-result visualization
- Automatic HTML report generation via benc-uk/k6-reporter

Scenario orchestrator
- run-scenario.sh: per-scenario environment isolation plus automated execution script

Documentation
- Load-test guide rewritten around run-scenario.sh

✅ Review requirements

📸 How to verify (optional)

📌 Please keep the following points in mind while reviewing this PR
Summary by CodeRabbit
Release Notes

New Features

Documentation
Chores