Article Processing Charges (APCs) have long been understood as a funding mechanism for open access. Within editorial operations, they have also played a quieter role by shaping submission behavior before manuscripts reach editorial systems. That moderating effect is now weakening, not because APCs are disappearing, but because the conditions that supported it have changed.
Across Scientific, Technical, and Medical (STM) portfolios, the effort required to prepare and submit a manuscript has declined measurably. AI tools are accelerating drafting, improving language quality, and standardizing structure. Manuscripts are arriving faster, more frequently, and in a condition that passes technical checks with minimal intervention. The shift is not only in volume, but in the baseline readiness of what enters editorial workflows. A growing proportion of submissions now arrives in a state that appears ready for review.
Under these conditions, APCs are no longer shaping submission inflow in a way that meaningfully reduces editorial burden. They still influence who publishes, but they are less effective in moderating what gets submitted. A larger share of manuscripts now meets the threshold of being technically sound and within scope, even when their contribution is uncertain. The result is not a surge in poor submissions, but a steady increase in submissions that require full editorial consideration.
Editors are experiencing a clear shift in how their time is used. Less effort is spent correcting incomplete work, while more effort is spent evaluating borderline manuscripts that are methodologically sound, well written, and difficult to dismiss quickly. Triage is no longer primarily about filtering out what clearly fails. It has become an exercise in prioritization, where decisions must be made among submissions that all appear viable on first reading. This is a more demanding and less easily standardized task.
Peer review capacity has not expanded alongside submission growth. Reviewer pools remain limited, widely shared across journals and publishers, and under growing pressure. Invitation acceptance rates continue to decline, and securing reviewers takes longer. Each decision to send a manuscript for review now carries a tangible cost, both in time and in the use of limited reviewer goodwill. In this context, triage is not a preliminary step in the workflow. It is the mechanism through which editorial systems protect reviewer capacity and maintain standards under pressure.
This is where AI is becoming central to how editorial systems operate. At a basic level, it improves efficiency by supporting compliance checks, highlighting gaps in reporting, and identifying appropriate reviewers through expertise mapping. More importantly, it is beginning to support how decisions are made. By providing context around a submission, including how similar work has been handled and where it fits within a topic area, AI allows editors to make decisions with greater clarity and less iteration.
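To make the expertise-mapping idea concrete, here is a minimal sketch of how a system might rank candidate reviewers by the similarity between a manuscript abstract and each reviewer's published-work profile. This is an illustration only: the reviewer names, profile texts, and the simple bag-of-words cosine measure are assumptions for demonstration, not a description of any publisher's actual system, which would typically use richer semantic embeddings.

```python
# Sketch: rank candidate reviewers by cosine similarity between a
# manuscript abstract and each reviewer's expertise profile.
# Names and texts below are illustrative placeholders, not real data.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts, lowercased, punctuation stripped."""
    words = [w.strip(".,;:()").lower() for w in text.split()]
    return Counter(w for w in words if w)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_reviewers(abstract: str, profiles: dict) -> list:
    """Return (reviewer, score) pairs, best expertise match first."""
    query = vectorize(abstract)
    scores = {name: cosine(query, vectorize(text))
              for name, text in profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

abstract = "deep learning models for protein structure prediction"
profiles = {
    "Reviewer A": "protein folding and structure prediction with neural networks",
    "Reviewer B": "survey methods in social science research",
}
ranking = rank_reviewers(abstract, profiles)
print(ranking[0][0])  # Reviewer A is the closer expertise match
```

In practice the same ranking step would sit behind the compliance checks and reporting-gap detection described above, narrowing a large reviewer pool before an editor makes the final invitation decision.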
This does not replace editorial judgment. It allows that judgment to be applied more effectively and more consistently.
The central issue is scale. As submission volumes rise and more manuscripts require careful evaluation, the limiting factor becomes how many decisions can be made with confidence. AI helps distribute that effort. It supports prioritization, reduces unnecessary cycles, and improves consistency across decisions that would otherwise depend heavily on individual bandwidth and experience.
This has direct implications for how APC-based models function in practice. APCs align revenue with accepted articles, while editorial effort is driven by total submissions. As more manuscripts require evaluation, the cost of editorial work rises regardless of how many are ultimately published. Over time, this creates tension between revenue and effort that cannot be resolved through incremental process improvements alone.
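The tension between revenue and effort can be shown with back-of-the-envelope arithmetic: revenue tracks accepted articles while editorial cost tracks total submissions, so when submissions grow faster than acceptances, the editorial cost absorbed by each accepted article rises. All figures below are illustrative assumptions, not real publisher data.

```python
# Illustrative model of the APC revenue-vs-effort tension:
# revenue scales with acceptances, editorial cost with submissions.
# Both constants are hypothetical round numbers for demonstration.
APC = 2000.0                  # assumed charge per accepted article (USD)
COST_PER_SUBMISSION = 150.0   # assumed editorial handling cost per submission

def margin_per_accepted(submissions: int, acceptance_rate: float) -> float:
    """Net revenue per accepted article after editorial handling costs."""
    accepted = submissions * acceptance_rate
    revenue = accepted * APC
    cost = submissions * COST_PER_SUBMISSION
    return (revenue - cost) / accepted

# Same 300 accepted articles, but submissions double:
before = margin_per_accepted(1000, 0.30)  # 1000 submissions, 300 accepted
after = margin_per_accepted(2000, 0.15)   # 2000 submissions, 300 accepted
print(before, after)  # 1500.0 1000.0
```

Under these assumed numbers, doubling submissions while holding acceptances flat erodes a third of the per-article margin, even though published output and pricing are unchanged, which is the imbalance the following paragraphs describe AI helping to absorb.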
AI helps address this imbalance. By reducing time spent on routine assessments and improving how manuscripts are prioritized, it allows editorial systems to absorb higher volumes without lowering standards. It also introduces greater consistency, ensuring that similar submissions are evaluated in similar ways across editors and journals, which becomes increasingly important as scale increases.
The debate around APCs is therefore shifting. It is no longer limited to questions of access or pricing. It now reflects a deeper concern about how editorial systems function under sustained inflow. Funders are questioning cost structures, libraries are evaluating value, and publishers are expanding institutional agreements. Beneath these developments is a shared recognition that the primary constraint in publishing is no longer production, but evaluation.
Publishers are already adapting. Triage is becoming more structured, with earlier decisions and clearer expectations around contribution. Reviewer selection is handled more carefully, and cascade workflows are used to avoid repeating review effort. There is also greater attention to portfolio-level visibility, where patterns in submission, review, and decision-making can be understood across journals rather than in isolation.
These changes are necessary, but they do not alter the underlying dynamic. Creation is scaling faster than evaluation. The moderating role APCs once played is diminishing, and the responsibility for maintaining selectivity now sits squarely within editorial systems. This shift calls for workflows that not only move manuscripts efficiently, but also support how decisions are made, how attention is allocated, and how standards are applied consistently across increasing volume.
The question facing publishers is no longer whether they can process more manuscripts or fund open access. It is whether they can continue to make sound editorial decisions as submission volumes rise and uncertainty increases. APCs were built for a system where submission involved friction. That friction has been reduced. Creation is scaling. Evaluation must keep pace.
AI is making that possible by allowing editorial judgment to operate at a scale that was not previously achievable.
Knowledgespeak Editorial Team