While code generation output has increased significantly, the review process has become a severe bottleneck for me.
The agents are producing a massive volume of PRs. Reviewing this much AI-generated code demands deep cognitive effort: ensuring architectural alignment, verifying logic, and so on.
Does anyone have insights on scaling the human review process to sustainably keep pace with agent throughput?