chore: Defensively discard in-progress id on unexpected None from platform #875
Conversation
`ApifyRequestQueueSingleClient.fetch_next_request` committed the id to `_requests_in_progress` before awaiting `_get_request_by_id`. When the platform's `get_request` returns `None` (e.g. propagation delay after a fresh `_list_head`), the method returned `None` to the caller but the id stayed in `_requests_in_progress` forever — permanently poisoning `is_empty()` and excluding the id from future `_list_head` reconciliation, with no public recovery path (the caller had no `Request` to reclaim). Discard the id on the `None` path and continue to the next head entry.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
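The shape of the fix can be sketched with a toy model; the method and attribute names (`fetch_next_request`, `_requests_in_progress`, `_get_request_by_id`, `_head_requests`) mirror the PR, but the class structure and platform responses here are simplified stand-ins, not the real client:

```python
# Minimal sketch of the fixed fetch loop (illustrative, not the real client).
import asyncio


class QueueClientSketch:
    def __init__(self, platform_responses: dict):
        self._platform = platform_responses  # id -> request dict, or absent
        self._head_requests: list[str] = ['stale-id', 'good-id']
        self._requests_in_progress: set[str] = set()

    async def _get_request_by_id(self, request_id: str):
        # Stand-in for the platform call that may unexpectedly return None.
        return self._platform.get(request_id)

    async def fetch_next_request(self):
        while self._head_requests:
            request_id = self._head_requests.pop(0)
            self._requests_in_progress.add(request_id)
            request = await self._get_request_by_id(request_id)
            if request is None:
                # The fix: without this discard, the id would stay in
                # _requests_in_progress forever and poison is_empty().
                self._requests_in_progress.discard(request_id)
                continue
            return request
        return None

    def is_empty(self) -> bool:
        return not self._head_requests and not self._requests_in_progress


client = QueueClientSketch({'good-id': {'id': 'good-id'}})
request = asyncio.run(client.fetch_next_request())
print(request['id'])  # prints 'good-id'; 'stale-id' was discarded, not leaked
```

With the guard, the unexpected `None` for `stale-id` is skipped cleanly and the next head entry is served.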
Codecov Report

```
@@            Coverage Diff             @@
##           master     #875      +/-   ##
==========================================
+ Coverage   86.57%   86.94%   +0.36%
==========================================
  Files          48       48
  Lines        2920     2926       +6
==========================================
+ Hits         2528     2544      +16
+ Misses        392      382      -10
```
```python
client, api_client = _make_single_client()
api_client.list_head = AsyncMock(return_value={'items': [], 'hadMultipleClients': False})
api_client.get_request = AsyncMock(return_value=None)
client._head_requests.append('missing-id')
```
The test is modifying the internals to establish a state that might not even be possible. Or can we achieve this state without touching the internals?

Normally, something can get into `_head_requests` from two sources:

- Request added by this client: it will then be in the local cache, so it will not return `None`.
- It was already present on the platform: it will also not return `None`, unless someone deleted it in the meantime (which is out of scope of this single client).

Is it some edge case that can happen around the cache limit? But if it was recently added, it will still be in the cache; it will be removed from the cache only if it is too old, and in that case the data propagation delay is long over.
> The test is modifying the internals to establish a state that might not even be possible. Or can we achieve this state without touching the internals?

Probably with some patches, I'll give it a try.
Ok, I wasn't able to reproduce this. Claude was probably hallucinating around the single/shared clients. I've kept just the guard as a defensive approach against unexpected platform responses, and updated the code and PR title accordingly. Check it out.
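The guarded path the merged patch keeps can be exercised with mocks roughly as in the reviewed test. The stand-in class below is an assumption made so the snippet is self-contained; the real suite's `_make_single_client` fixture wires up an actual `ApifyRequestQueueSingleClient` instead:

```python
# Self-contained sketch of the reviewed test; SingleClientStandIn is a
# hypothetical simplification, not the real ApifyRequestQueueSingleClient.
import asyncio
from unittest.mock import AsyncMock


class SingleClientStandIn:
    def __init__(self, api_client):
        self._api_client = api_client
        self._head_requests: list[str] = []
        self._requests_in_progress: set[str] = set()

    async def fetch_next_request(self):
        if not self._head_requests:
            head = await self._api_client.list_head()
            self._head_requests.extend(item['id'] for item in head['items'])
        while self._head_requests:
            request_id = self._head_requests.pop(0)
            self._requests_in_progress.add(request_id)
            request = await self._api_client.get_request(request_id)
            if request is None:
                # The defensive guard under test.
                self._requests_in_progress.discard(request_id)
                continue
            return request
        return None


api_client = AsyncMock()
api_client.list_head = AsyncMock(return_value={'items': [], 'hadMultipleClients': False})
api_client.get_request = AsyncMock(return_value=None)
client = SingleClientStandIn(api_client)
client._head_requests.append('missing-id')  # force the questionable state, as in the review

result = asyncio.run(client.fetch_next_request())
print(result)  # None, and 'missing-id' is no longer held in progress
```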
Summary
Defensive guard in `ApifyRequestQueueSingleClient.fetch_next_request`: if `_get_request_by_id` unexpectedly returns `None` for an id just surfaced by `_list_head`, discard the id from `_requests_in_progress` before continuing to the next head entry. Without this guard, the id would be permanently stuck in `_requests_in_progress` — poisoning `is_empty()` forever and filtering the id out of every subsequent head reconciliation, with no public recovery path (the caller receives `None`, so has no `Request` to reclaim).

Found by code reading — no known real-world trigger under single-producer/single-consumer operation. Kept as a boundary check against unexpected platform responses.
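The "poisoned `is_empty()`" failure mode can be shown with a pre-fix toy model; names mirror the PR but the class is a deliberately simplified, hypothetical reconstruction:

```python
# Toy model of the PRE-fix behavior: the in-progress id is never discarded.
import asyncio


class BuggyQueueClient:
    def __init__(self):
        self._head_requests = ['missing-id']
        self._requests_in_progress: set[str] = set()

    async def _get_request_by_id(self, request_id):
        return None  # simulate the unexpected platform response

    async def fetch_next_request(self):
        if not self._head_requests:
            return None
        request_id = self._head_requests.pop(0)
        self._requests_in_progress.add(request_id)
        # Pre-fix: returns None to the caller without discarding the id.
        return await self._get_request_by_id(request_id)

    def is_empty(self):
        return not self._head_requests and not self._requests_in_progress


client = BuggyQueueClient()
assert asyncio.run(client.fetch_next_request()) is None
# The queue is logically drained, yet is_empty() never becomes True:
print(client.is_empty())  # prints False; 'missing-id' is stuck in progress
```

The caller got `None`, so it has nothing to reclaim; only the one-line discard in the `None` branch restores the invariant.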
Test plan
🤖 Generated with Claude Code