Upgrade llama.cpp from b8763 to b8788 #86

Merged

bernardladenthin merged 1 commit into master from claude/update-b8788-compatibility-K3edA on Apr 13, 2026
Conversation

@bernardladenthin (Owner)

  • Multimodal enhancements: Gemma-4 audio (conformer), Qwen3-Omni audio support
  • Backend optimizations: CUDA stream capture for argsort, SYCL improvements, SSM conv kernel size 5
  • Download cancellation: New virtual is_cancelled() method (backwards-compatible)
  • New GGUF tensor types: A_ENC_CONV2D, A_ENC_CONV_OUT for audio encoders
  • Server improvements: build_info in /props endpoint, router settings exposure

No breaking changes detected. Multimodal audio models work automatically via MTMD.
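The download-cancellation hook above is described as a new virtual method with a backwards-compatible default. A minimal sketch of how such a design stays backwards-compatible: the base class returns `false` by default, so existing subclasses that never override the method behave exactly as before, while new code can override it to stop a transfer mid-way. All names here (`download_handler`, `run_download`, etc.) are illustrative, not the actual llama.cpp API.

```cpp
#include <atomic>
#include <cassert>

// Base handler: the virtual hook defaults to "not cancelled", so older
// subclasses that predate the method keep their old behavior unchanged.
struct download_handler {
    virtual bool is_cancelled() const { return false; }
    virtual ~download_handler() = default;
};

// Newer code opts in by overriding the hook; an atomic flag lets another
// thread (e.g. a UI) request cancellation safely.
struct cancellable_handler : download_handler {
    std::atomic<bool> cancelled{false};
    bool is_cancelled() const override { return cancelled.load(); }
};

// A download loop polls the hook between chunks and stops early if asked.
// Returns the number of chunks actually transferred.
int run_download(download_handler & h, int total_chunks) {
    int done = 0;
    for (int i = 0; i < total_chunks; ++i) {
        if (h.is_cancelled()) {
            break;
        }
        ++done; // stand-in for transferring one chunk
    }
    return done;
}
```

Because the default implementation is a no-op check, adding the virtual method does not force any existing handler to change, which is what makes the upgrade non-breaking.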

https://claude.ai/code/session_018HegvJdW2521WoFk7GmPvm

bernardladenthin force-pushed the claude/update-b8788-compatibility-K3edA branch from f639520 to dd1b779 on April 13, 2026 at 14:42
bernardladenthin force-pushed the claude/update-b8788-compatibility-K3edA branch from dd1b779 to dbb83ba on April 13, 2026 at 14:42
bernardladenthin merged commit 2f70531 into master on Apr 13, 2026
14 checks passed
bernardladenthin deleted the claude/update-b8788-compatibility-K3edA branch on April 13, 2026 at 15:15