chore(sparkline-basic): populate altair quality_score + review (90/100) #5654
Merged
MarkusNeusinger merged 1 commit into main on May 2, 2026
Conversation
PR #5653 (regen) merged with `quality_score: null` and stale review data referencing the old `#306998` palette and pyplots.ai title. Cloud AI review is not dispatched on regen PRs (`ai-approved` is set directly), so the local regen flow must populate this data itself — which it didn't.

Locally re-evaluated both light and dark renders against `prompts/quality-criteria.md`. Current score: 90/100, verdict APPROVED:

VQ: 30/30 | DE: 13/20 | SC: 15/15 | DQ: 14/15 | CQ: 10/10 | LM: 8/10

DE/LM ceilings reflect that the spec is `basic` (annotations forbidden) and that layered Altair composition isn't deeply distinctive. Strengths and weaknesses, image_description, and the full per-criterion checklist are populated so the next regen has accurate context.

Also adds `language: python` and theme-adaptive / okabe-ito tags to `impl_tags` now that the implementation is theme-aware.

Going forward this is automatic — `agentic/commands/regen.md` (PR #5650) now requires the regen flow to view both renders and write `quality_score` + `review` on every iteration.
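As a quick sanity check, the per-category subtotals in the verdict line above do add up to the 90/100 headline score:

```python
# (got, ceiling) per category, as reported in the review verdict above.
scores = {
    "VQ": (30, 30),
    "DE": (13, 20),  # capped: `basic` spec forbids annotations
    "SC": (15, 15),
    "DQ": (14, 15),
    "CQ": (10, 10),
    "LM": (8, 10),   # layered composition is idiomatic, not distinctive
}

total = sum(got for got, _ in scores.values())
ceiling = sum(cap for _, cap in scores.values())
print(f"{total}/{ceiling}")  # → 90/100
```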
Contributor
Pull request overview
This PR backfills review metadata for the regenerated Altair implementation of sparkline-basic, bringing its YAML metadata in line with the repo’s current quality-review schema and the theme-aware implementation already in the codebase.
Changes:
- Populates `quality_score` from `null` to `90`.
- Rewrites the `review` block to use the current 6-category scoring schema and updated plot description.
- Adds new implementation tags for theme-aware styling and adaptive rendering.
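The theme-aware styling those new tags describe comes down to swapping chrome colors (text, grid, background) per theme while keeping a fixed colorblind-safe Okabe-Ito data color. A minimal sketch of that idea (the helper name and hex values here are illustrative, not the repo's actual implementation; only the Okabe-Ito blue is a real palette entry):

```python
# Sketch of theme-adaptive chrome: the Okabe-Ito data color stays fixed;
# only axis/label/background colors change between light and dark renders.
OKABE_ITO_BLUE = "#0072B2"  # Okabe-Ito palette blue, colorblind-safe

def chrome_colors(theme: str) -> dict:
    """Return chrome colors for a 'light' or 'dark' render (illustrative values)."""
    if theme == "dark":
        return {"text": "#e0e0e0", "grid": "#3a3a3a", "background": "#1b1b1b"}
    return {"text": "#333333", "grid": "#dddddd", "background": "#ffffff"}

for theme in ("light", "dark"):
    c = chrome_colors(theme)
    print(theme, c["background"], "line:", OKABE_ITO_BLUE)
```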
```diff
  preview_html_light: https://storage.googleapis.com/anyplot-images/plots/sparkline-basic/python/altair/plot-light.html
  preview_html_dark: https://storage.googleapis.com/anyplot-images/plots/sparkline-basic/python/altair/plot-dark.html
- quality_score: null
+ quality_score: 90
```
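For context, these fields sit inside a larger metadata block; an illustrative sketch of its shape (field names are taken from this PR's description, values are abbreviated placeholders):

```yaml
# Illustrative shape only: field names from this PR, values are placeholders.
quality_score: 90
review:
  verdict: APPROVED
  image_description: "..."
  strengths: ["..."]
  weaknesses: ["..."]
  criteria_checklist:
    VQ-01: pass
    # ... per-criterion entries through LM-02, across all 6 categories
impl_tags:
  styling: [okabe-ito, theme-aware-chrome]
  techniques: [theme-adaptive]
```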
MarkusNeusinger added a commit that referenced this pull request on May 2, 2026
## Summary

Follow-up to #5653 / #5654. The `/regen` flow blanked the `Quality:` line in the docstring header — it now reads `Quality: /100`, but should match the metadata's `quality_score: 90` populated in #5654.

```diff
-Quality: /100 | Updated: 2026-05-02
+Quality: 90/100 | Updated: 2026-05-02
```

The `regen.md` template (PR #5650) is being updated in parallel to require `{SCORE}/100`, not blank, so future regens won't drop the score.

## Test plan

- [ ] CI green
- [ ] header matches metadata `quality_score: 90`
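The template fix described above amounts to filling the blanked header from the metadata score; a minimal sketch of that substitution (the helper name is hypothetical, not the actual regen code):

```python
import re

def fill_quality_header(docstring: str, score: int) -> str:
    """Fill a blanked 'Quality: /100' docstring header with the metadata score."""
    # Only matches a blank score, so an already-filled header is left untouched.
    return re.sub(r"Quality:\s*/100", f"Quality: {score}/100", docstring)

header = "Quality: /100 | Updated: 2026-05-02"
print(fill_quality_header(header, 90))  # → Quality: 90/100 | Updated: 2026-05-02
```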
Summary
PR #5653 (regen of altair sparkline-basic) merged with `quality_score: null` and a stale `review` block still referencing the old `#306998` Python Blue palette and pyplots.ai title. The Cloud AI review pipeline (impl-review.yml) is not dispatched on regen PRs (the regen flow sets `ai-approved` directly), so this metadata never got populated automatically.

This PR fills it in by locally evaluating both rendered themes against `prompts/quality-criteria.md`.

Score: 90/100 — APPROVED
DE/LM ceilings reflect structural limits: spec is `basic` (annotations forbidden, capping DE-03) and Altair's layered composition is idiomatic but not deeply distinctive (LM-02).
What's populated

- `quality_score: 90` (was `null`)
- `review.image_description` — fresh description of the regenerated plot (both themes)
- `review.strengths` / `review.weaknesses` — current state, no longer references retired colors
- `review.criteria_checklist` — full breakdown across all 6 categories using the current quality-criteria schema (VQ-01…VQ-07, DE-01…DE-03, SC-01…SC-04, DQ-01…DQ-03, CQ-01…CQ-05, LM-01…LM-02); the old file used a stale schema without the Design Excellence category
- `review.verdict: APPROVED` (review-1 threshold ≥ 90)
- `impl_tags.styling` += `okabe-ito`, `theme-aware-chrome`
- `impl_tags.techniques` += `theme-adaptive`

Going forward
`/regen` (PR #5650) now requires the local flow to view both light and dark renders and write `quality_score` + `review` on every iteration — so this kind of follow-up shouldn't be needed again.

Test plan
Generated by `/regen` follow-up (no Cloud AI review dispatched).