Most publishers today are not asking how to attract submissions. That question has largely resolved itself. The more immediate concern is how to keep pace with what is already arriving.
Across STM portfolios, submission growth has shifted from cyclical to structural. Global research output continues to expand, and AI-assisted tools are accelerating how quickly that output is translated into manuscripts. The change is not limited to volume; it is visible in the baseline quality of submissions entering editorial systems. More manuscripts now arrive complete, well-structured, and linguistically polished, moving efficiently through submission platforms and meeting formal requirements with minimal friction. Yet this apparent readiness does not translate into easier editorial decisions.
Editors are spending less time filtering out incomplete work and more time determining whether a manuscript meaningfully contributes to the literature. That distinction cannot be resolved through formatting checks or automated signals. It requires judgment, context, and familiarity with the field. That effort is becoming the limiting factor in editorial workflows.
Peer review, which remains central to scholarly validation, has not scaled alongside submission growth. Reviewer pools are finite and increasingly shared across journals and portfolios. The same experts are invited repeatedly, often within overlapping timeframes. Declining acceptance rates and extended review timelines reflect a system operating close to its practical limits. This is not a matter of reviewer willingness; it reflects the structural capacity of the model itself.
At the same time, the nature of evaluation is evolving. A well-presented manuscript does not necessarily reduce the effort required to assess it. In many cases, it demands closer attention. Familiar structure and polished language can obscure marginal contribution, making it harder to distinguish work that is technically sound from work that genuinely advances the literature. Editorial effort is shifting from correction to discernment.
Tools continue to support parts of this process. Similarity screening, image verification, and disclosure frameworks provide useful signals and improve efficiency at specific checkpoints. Increasingly, AI-driven tools are also being applied to support editorial triage—surfacing patterns in submissions, highlighting potential gaps in methodology reporting, and assisting in reviewer identification. These capabilities can help prioritize attention and reduce manual effort at scale. However, they do not replace the central task of evaluating novelty, rigor, and relevance. That responsibility remains with editors and reviewers, and it is becoming more demanding as submission volumes grow.
Over the past decade, publishers have invested heavily in throughput—submission systems, production automation, and workflow optimization. These investments have improved efficiency, but they address ingestion and processing, not evaluation. The constraint now lies in how consistently and reliably manuscripts can be assessed as volumes continue to increase.
This has direct economic implications. Higher submission volumes do not translate linearly into higher publication output when evaluation capacity is limited. Editorial triage becomes more demanding, reviewer allocation more complex, and the time required to reach confident decisions increases. At scale, additional volume can increase operational effort without proportionate gains in publishable content.
Publishers are already responding. Triage processes are becoming more structured, desk rejection decisions are made earlier, and reviewer selection is managed with closer attention to workload distribution. Cascade workflows are used to reduce redundant review effort, though coordination across portfolios remains uneven. There is also growing recognition that many of these pressures cannot be addressed at the level of individual journals alone. Reviewer load, submission patterns, and citation dynamics become clearer when viewed across portfolios, prompting a gradual shift toward portfolio-level visibility.
More fundamentally, the location of value within publishing is changing. As the cost of creating manuscripts declines, the act of evaluating them—selecting, validating, and positioning research—becomes more central. Evaluation is no longer a supporting step within the workflow; it is the point at which quality is established and trust is secured.
The question facing publishers is not whether they can process more manuscripts, but whether they can continue to evaluate them with the same level of rigor, consistency, and confidence as submission volumes increase. That question now sits at the core of publishing strategy. Creation will continue to scale, but evaluation will not scale in the same way. It is within that gap that the next phase of scholarly publishing is being defined.
Knowledgespeak Editorial Team