refactor: extract ChatCompletionRequest mapping out of handleChatCompletion #94
Conversation
Pull request overview
Refactors the chat-completions request parsing in Server.swift by extracting the OpenAI message → internal Chat.Message mapping into a dedicated helper, simplifying handleChatCompletion’s message loop.
Changes:
- Extracted the `ChatCompletionRequest.Message` → `Chat.Message` conversion into a `toChatMessage()` helper.
- Simplified `handleChatCompletion` to call `toChatMessage()` and only accumulate `systemPromptText` for `.system` messages.
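To illustrate the shape of this refactor, here is a minimal, self-contained sketch. The types `RequestMessage`, `ChatMessage`, and the function `parseMessages` are hypothetical stand-ins for the production `ChatCompletionRequest.Message`, `Chat.Message`, and the `handleChatCompletion` loop; only the structure (convert via `toChatMessage()`, accumulate `systemPromptText` for `.system` messages) mirrors the PR.

```swift
import Foundation

// Hypothetical stand-ins for the production types in Server.swift.
enum Role { case system, user, assistant }

struct ChatMessage {           // stands in for Chat.Message
    let role: Role
    let text: String
}

struct RequestMessage {        // stands in for ChatCompletionRequest.Message
    let role: Role
    let content: String

    /// Maps an OpenAI-style request message to the internal chat type.
    func toChatMessage() -> ChatMessage {
        ChatMessage(role: role, text: content)
    }
}

/// Sketch of the simplified handler loop: every message goes through
/// toChatMessage(), and only .system messages feed systemPromptText.
func parseMessages(_ messages: [RequestMessage]) -> ([ChatMessage], String) {
    var chat: [ChatMessage] = []
    var systemPromptText = ""
    for msg in messages {
        let converted = msg.toChatMessage()
        if msg.role == .system {
            systemPromptText += converted.text
        }
        chat.append(converted)
    }
    return (chat, systemPromptText)
}

let (chat, systemPromptText) = parseMessages([
    RequestMessage(role: .system, content: "You are terse. "),
    RequestMessage(role: .user, content: "Hello"),
])
```

The design point of the refactor is that the per-message mapping now lives on the message type itself, so the handler loop is reduced to conversion plus one piece of role-specific bookkeeping.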
```swift
/// Convert OpenAI JSON spec message to internal Chat.Message
func toChatMessage() -> Chat.Message {
    let text = textContent
    let imgs = extractImages()
    let aud = extractAudio()
```
PR description says ChatRequestParsingTests now use the production toChatMessage() helper instead of a local copy, but the test suite in tests/SwiftLMTests/ChatRequestParsingTests.swift still contains its own mapAssistantToolCalls mapping logic and does not call toChatMessage(). This leaves duplicated logic that can drift and means the new helper isn’t actually being locked down by tests. Either update the tests to assert against msg.toChatMessage() (and its toolCalls) or adjust the PR description to match what’s included.
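A sketch of what the suggested test change could look like. The types below (`ToolCall`, `ChatMessage`, `RequestMessage`) are hypothetical stand-ins for the production `Chat.Message` and `ChatCompletionRequest.Message`; the point is only that the assertion runs against `toChatMessage()` itself rather than a duplicated local mapping such as `mapAssistantToolCalls`.

```swift
import Foundation

// Hypothetical stand-ins for the production types.
struct ToolCall: Equatable {
    let name: String
    let arguments: String
}

struct ChatMessage {           // stands in for Chat.Message
    let role: String
    let text: String
    let toolCalls: [ToolCall]
}

struct RequestMessage {        // stands in for ChatCompletionRequest.Message
    let role: String
    let content: String
    let toolCalls: [ToolCall]

    /// Sketch of the production helper the tests should exercise directly.
    func toChatMessage() -> ChatMessage {
        ChatMessage(role: role, text: content, toolCalls: toolCalls)
    }
}

// Assert against the production helper instead of a local copy of the mapping.
let msg = RequestMessage(
    role: "assistant",
    content: "",
    toolCalls: [ToolCall(name: "get_weather", arguments: "{\"city\":\"Oslo\"}")]
)
let converted = msg.toChatMessage()
```

Pointing the test at the helper itself means any future change to the tool-call mapping is caught by the suite rather than silently diverging from a copied version.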
Following the test lockdown in PR #93, this refactors the monolithic `handleChatCompletion` loop.

Changes
- Extracted the `ChatCompletionRequest.Message` → `Chat.Message` mapping loop into a dedicated internal `toChatMessage()` helper method.
- `handleChatCompletion` now only accumulates `systemPromptText` for `.system` messages.
- Updated `ChatRequestParsingTests` to use the actual production `toChatMessage()` instead of a mock local copy, addressing Copilot's previous review suggestion.