Merge requests stuck due to exhausted Sidekiq threads
Description
- Newly created merge requests become stuck in a perpetual loading state
- Existing merge requests fail to update when new changes are pushed
- CI/CD jobs remain queued and are not picked up for processing
- Sidekiq busy queue continuously grows
- Sidekiq threads become exhausted processing NewMergeRequestWorker or UpdateMergeRequestsWorker jobs (see the spot check after this list)
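A quick way to confirm the growing queues and busy threads from the command line is a Sidekiq stats check. This is a minimal sketch, assuming an Omnibus (Linux package) installation where gitlab-rails runner is available; the printed labels are illustrative:
  # Print overall Sidekiq stats and per-queue sizes from a Rails node
  sudo gitlab-rails runner '
    stats = Sidekiq::Stats.new
    puts "enqueued jobs: #{stats.enqueued}"
    puts "busy jobs:     #{stats.workers_size}"
    stats.queues.each { |queue, size| puts "#{queue}: #{size}" }
  '
A steadily growing enqueued count together with busy jobs pinned at the Sidekiq concurrency limit matches the symptoms above.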
Environment
- GitLab installations using a sharded Gitaly configuration
- Multiple Gitaly servers configured
- Impacted offerings:
  - GitLab Self-Managed
Solution
- Verify Gitaly server-to-server communication (see the loop sketch after this list):
  # Test TCP connectivity between Gitaly servers
  nc -zv gitaly1.internal 8075
  nc -zv gitaly2.internal 8075
- Check firewall and security group configurations:
  - Ensure ports used by Gitaly (default 8075) are open between all Gitaly servers
  - Verify no firewall rules are blocking cross-shard communication
- If using a proxy, validate the proxy configuration (see the proxy check after this list):
  - Confirm the proxy settings allow internal communication between Gitaly servers
  - Check the proxy logs for blocked connections
- Monitor Sidekiq queue metrics in the Admin Area:
  - Go to Admin > Monitor > Background jobs
  - Review the queue sizes and processing statistics
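The per-host connectivity test from the first step can be repeated for every shard from every Gitaly server. A minimal sketch, assuming the example hostnames gitaly1.internal and gitaly2.internal and the default port 8075; substitute your own storage hosts and port:
  # Run from each Gitaly server: every shard should be reachable from every other
  for host in gitaly1.internal gitaly2.internal; do
    nc -zv "$host" 8075
  done

  # On each Gitaly server, confirm Gitaly is actually listening on the expected port
  sudo ss -tlnp | grep 8075
If nc reports a refused or timed-out connection while Gitaly is listening locally, the problem is usually a firewall or security group rule between the shards.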
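For the proxy step, it can also help to check which proxy settings the GitLab services actually see. The commands below are a sketch; the file path is the standard Omnibus location, and the search term is just an example of what to look for:
  # Proxy-related settings baked into the GitLab configuration
  sudo grep -in "proxy" /etc/gitlab/gitlab.rb

  # Proxy variables in the current environment
  env | grep -i proxy
If an HTTP(S) proxy is configured, the internal Gitaly hostnames generally need to be covered by a no_proxy entry so that server-to-server traffic is not routed through the proxy.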
Cause
This issue occurs when Gitaly servers in a sharded setup cannot communicate with each other. Cross-shard communication is essential for merge request operations that involve repositories on different Gitaly shards, for example a merge request from a fork whose repository is stored on a different shard than the target project.
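To see which storages and Gitaly addresses a node is configured with, and whether the Rails node can reach them, the following sketch may help. Here git_data_dirs is the classic Omnibus setting for sharded storages and gitlab:gitaly:check is the Gitaly connectivity check task; verify both against the documentation for your GitLab version:
  # Show the configured Gitaly storages (sharded setups list one entry per shard)
  sudo grep -A 10 "git_data_dirs" /etc/gitlab/gitlab.rb

  # Check connectivity from the Rails node to every configured Gitaly storage
  sudo gitlab-rake gitlab:gitaly:check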
Additional Information
You can monitor these issues by:
- Checking Gitaly logs for connection errors (example filters below):
  sudo gitlab-ctl tail gitaly
- Examining Sidekiq logs for stuck workers:
  sudo gitlab-ctl tail sidekiq
- Monitoring Sidekiq metrics in Admin > Monitor > Background jobs
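When tailing the logs, filtering for connection failures and for the workers named above makes stuck jobs easier to spot. A sketch, with example match strings rather than an exhaustive list:
  # Gitaly: surface likely cross-shard connection failures
  sudo gitlab-ctl tail gitaly | grep -iE "connection refused|deadline exceeded|unavailable"

  # Sidekiq: watch the merge request workers mentioned in the description
  sudo gitlab-ctl tail sidekiq | grep -E "NewMergeRequestWorker|UpdateMergeRequestsWorker"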