2026-04-20T01-56Z
4/20/2026, 1:56:50 AM · 1 flow · 315,790ms total
clean · All settings match production defaults (app_defaults.yaml, as of 2026-04-19).
Build provenance
App
1.4 (3)
com.flashcopy.app.dev
Git
36652cef76
feature/processing-animation-variants · dirty
Sim
AAC26DF1…
com.flashcopy.app.dev
Built at
4/19/2026, 1:55:28 AM
video pipeline · ok
Input media
IMG_4558.mov · 37.21 MB · sha256 96059437ee…
Input id
BBBB0003
Total
315.79s
Output
694 words
7360 chars
Cost est
$0.00184
gemini-2.5-flash · basis: chars
in 9,855 · out 3,680 tok
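The $0.00184 figure is reproducible from the token counts if one assumes the legacy Gemini Flash rates of $0.075 per million input tokens and $0.30 per million output tokens; those rates are an assumption, not stated in this report (note also that out 3,680 tok is exactly 7,360 chars / 2, consistent with the "basis: chars" label):

```python
# Sketch of the cost estimate. The per-token rates below are assumptions
# (legacy Gemini Flash pricing), not values taken from this report.
INPUT_RATE = 0.075 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.30 / 1_000_000   # $ per output token

def estimate_cost(in_tokens: int, out_tokens: int) -> float:
    """Return the estimated dollar cost for one run."""
    return in_tokens * INPUT_RATE + out_tokens * OUTPUT_RATE

cost = estimate_cost(9_855, 3_680)
print(f"${cost:.5f}")  # → $0.00184
```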
Stage timings
frame-extraction
0ms
s3-upload
4.39s
cloud-processing
311.40s
Stage details
| Stage | Details |
| --- | --- |
| frame-extraction | frames 28 |
| s3-upload | bucket qr-video-ocr-frames |
| cloud-processing | lambda backgroundProcessor · persisted true · chars 7360 |
Extracted text
694 words · 7,360 chars
```json
{
  "stitched_response": "import * as vscode from \"vscode\";\nimport * as aws from \"aws-sdk\";\n\nexport function activate(context: vscode.ExtensionContext) {\n  const disposable = vscode.co…
```

Prompts used
frame_ocr · gemini-2.5-flash · sha 66326cc5be… · 81 chars
Perform OCR on this image and return plain text only. Do not describe the image.
qr_reader_v1/frame_ocr_lambda.py:31
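The 10-character value next to each prompt looks like a sha256 prefix of the prompt text. A minimal sketch of such a fingerprint (the exact hashing input, e.g. encoding or a trailing newline, is an assumption; `prompt_fingerprint` is a hypothetical helper, not a function from the codebase):

```python
import hashlib

def prompt_fingerprint(prompt: str) -> str:
    """Short sha256 prefix, like the per-prompt sha shown in this report."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:10]

ocr_prompt = ("Perform OCR on this image and return plain text only. "
              "Do not describe the image.")
print(prompt_fingerprint(ocr_prompt))
```

This makes prompt drift auditable: if the logged sha changes between runs, the prompt text changed.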
video_polish gemini-2.5-flash structured JSON sha n/a · 898 chars
Please stitch together the OCRs that are taken from individual screenshots/frames from a video. There should be overlapping lines which can be used as a marker for when one frame ends and another begins. Please do not alter the content in any way besides stitching the content together and please add correct indentation. Do not alter or add any content.
Your final output must be a single, valid JSON object and nothing else. The JSON object must conform to the following structure:
{
"stitched_response": "(string) This key must hold the final, fully reconstructed and cleaned document text.",
"additional_notes": "(string) Use this field to briefly describe your process. Mention any significant noise you filtered out (e.g., 'Removed text from a Save As dialog box') or any ambiguities you encountered during the reconstruction."
}
Here is the raw OCR text from video frames:
{raw_text}
qr_reader_v1/video_ocr_polishing_lambda_rest.py:102
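The `{raw_text}` placeholder suggests a simple template fill followed by strict JSON validation of the model's reply. A sketch under that assumption (the helper names and the abbreviated template are hypothetical, not from the lambda source):

```python
import json

# Abbreviated stand-in for the stitching prompt; {raw_text} is the only
# placeholder, matching the template shown above.
STITCH_PROMPT_TEMPLATE = (
    "Please stitch together the OCRs taken from individual video frames.\n"
    "Here is the raw OCR text from video frames:\n{raw_text}"
)

def build_stitch_prompt(raw_text: str) -> str:
    """Fill the template with the concatenated per-frame OCR output."""
    return STITCH_PROMPT_TEMPLATE.format(raw_text=raw_text)

def parse_stitch_response(model_output: str) -> dict:
    """Validate the model's reply against the required two-key JSON shape."""
    obj = json.loads(model_output)
    missing = {"stitched_response", "additional_notes"} - obj.keys()
    if missing:
        raise ValueError(f"model reply missing keys: {missing}")
    return obj
```

Validating the shape up front means a malformed model reply fails loudly at parse time instead of persisting bad output.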
Output diff
no baseline · No prior run for this input (sha256 96059437ee…) and no ground-truth file at data/ground_truth/<sha>.md. After another ingest, the diff will render here.
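The baseline lookup keys ground-truth files by the input's full sha256, per the data/ground_truth/<sha>.md path above. A minimal sketch of that lookup (the helper names are hypothetical):

```python
import hashlib
from pathlib import Path

def baseline_path(media_bytes: bytes) -> Path:
    """Ground-truth location keyed by the input's sha256,
    mirroring the data/ground_truth/<sha>.md convention."""
    sha = hashlib.sha256(media_bytes).hexdigest()
    return Path("data/ground_truth") / f"{sha}.md"

def has_baseline(media_bytes: bytes) -> bool:
    """True once a ground-truth file exists for this exact input."""
    return baseline_path(media_bytes).exists()
```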
Video credit reconciliation
1 credit / frame
Frames
28
Charged
n/a
Expected (1/frame)
28
The app currently bills by duration in seconds; at fps = 1.0, frames == seconds, so the totals match only by coincidence. Any fps other than 1.0 would surface a mismatch.
Run settings
All 20 values:
{
"videoFramesPerSecond": 1,
"videoStitchingMethod": "gemini_only",
"videoPipelineMode": "s3_parallel",
"useBackgroundVideoProcessing": true,
"rekognitionThreshold": 80,
"geminiModel": "gemini-2.5-flash",
"photoOcrPromptSha": "48496a3017a2708a92d142281c5ab19f64f8132555514a00cbc35ca9d39daeba",
"frameOcrPromptSha": "66326cc5be6bdd434dbbdd330b519e26bd8bbcab4a6037a64c2148b66cd2aceb",
"imageRetentionHours": 24,
"bypassImageSaveConfirmation": true,
"bypassProcessingResultWindow": true,
"enableAnalytics": true,
"confirmCollectionReset": true,
"enableNotifications": false,
"autoProcessImages": true,
"includeBrandingInSharedText": true,
"autoSavePhotos": true,
"multiPhotoSeparator": "double_line",
"showDebugInfo": false,
"headerFooterStyle": "equals"
}