What follows is a set of goals and principles for how to go about fixing and improving USENET. These are not specifications, but the reasoning behind them and a description of how they should be carried out.
Some of these goals may seem to conflict. Most commonly, the desire for support of the vast installed base will conflict with the desire for change or simply "doing it right." This is not to imply a contradiction, but rather a tradeoff between two positive but incompatible goals.
The standard should not make policy decisions, unless there is near unanimity that the proper operation of the network requires a certain policy. Otherwise, members of the group should seek out things that people want to do on the network, and things that sites would like to be able to tune for policy reasons, and enable those features and that tuning.
USENET has stagnated because it's almost impossible for all sites to agree on a format change. A truly new feature that requires new code everywhere can only be installed after waiting a very long time in internet years, or through draconian measures by major sites -- a declaration that old-format articles will simply not be propagated.
We do have to fix our bugs. There aren't many -- Supersedes and the lack of extensibility in certain headers.
No header should be added that can't be extended cleanly later without breaking old software. Ideally, systems of extension should allow old software to make guesses about what to do when encountering an unfamiliar extension -- ignore, abort, give warning, map to known extension. Most systems only support "ignore all unfamiliar," but this is not enough.
Where possible, old headers that did not support extension should be made to do so, or set to do so in the future.
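To make that concrete, here is a minimal sketch, in Python, of how a reader or relay might honour a per-extension fallback hint instead of silently ignoring everything unfamiliar. The "fallback=" parameter, the extension names and the handler below are hypothetical illustrations, not part of any existing USENET specification.

    KNOWN_EXTENSIONS = {"archive", "followup-window"}

    def handle_extension(name, params):
        # params: parameters attached to the extension, e.g. parsed from a
        # hypothetical "fallback=" hint carried in the header itself.
        fallback = params.get("fallback", "ignore")
        if name in KNOWN_EXTENSIONS:
            return "process"
        if fallback == "abort":
            raise ValueError("article requires unknown extension %r" % name)
        if fallback == "warn":
            print("warning: skipping unknown extension %r" % name)
        elif fallback.startswith("map:"):
            # sender says: if you don't know this one, treat it as another
            mapped = fallback[len("map:"):]
            if mapped in KNOWN_EXTENSIONS:
                return handle_extension(mapped, {})
        return "ignore"   # default: the old behaviour, skip quietly

The point of the sketch is only that "ignore all unfamiliar" becomes one of several behaviours the poster can ask for, rather than the only one old software is capable of.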
If there's a MIME way to do something, we need a good reason not to do it that way. All other things being equal, if we can interact cleanly with mail it's a win, both for gateways and because common software is available.
However, this does not mean we bend over backwards to accommodate standards developed with clear disregard for the needs of a broadcast, post-once, read-many medium like USENET.
While most articles do go over NNTP, it is NNTP's job to feed USENET articles, not the USENET spec's job to make sure articles can go over NNTP.
Being liberal in what you accept doesn't mean propagating errors out to the rest of the net. While there are many virtues in not altering articles that travel through you, continuing to propagate spec-violating articles can have greater negative consequences. The tradeoffs should be examined.
USENET has failed to be robust by allowing systems to routinely drop articles on the floor with no diagnostic. This is terrible software design. Systems should be built so that errors are detected and can be reported, once and only once, to a person who should know about them.
Errors should be discovered and fixed fast. E-mail systems which dropped your mail into the bit bucket because you had a format error would never be tolerated.
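As a rough illustration of "once and only once", here is a minimal Python sketch of a relay that records what it has already reported, so the responsible person gets exactly one diagnostic rather than none or thousands. The notify() helper and its arguments are hypothetical; a real system would persist the record across runs and pick the right contact (poster, injecting site, or local admin).

    reported = set()   # in real software this would persist on disk

    def notify(contact, text):
        # placeholder: might mail the poster, mail the injecting site's
        # admin, or post to a local diagnostics group
        print("to %s: %s" % (contact, text))

    def reject_article(message_id, error, contact):
        key = (message_id, error)
        if key in reported:
            return              # someone has already been told; stay quiet
        reported.add(key)
        notify(contact, "article %s rejected: %s" % (message_id, error))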
If there is some function that people have already added to USENET in an ad-hoc way, the standard should work out a means to perform it, consistent with other aspects of the spec and these goals.
Good ideas come from outside USENET. If a feature is popular on other conferencing systems, it's a good bet that people would find it useful here, and would find USENET wanting without it.
If a feature (not an implementation) has never been tried anywhere, we should implement it if we have good reasons to like it and believe it will be popular, but otherwise consider it for a later revision. However, if the idea is generally liked, the spec should be reviewed to ensure it is extensible enough to accommodate new ideas like the proposal.
The old trust-everyone philosophy has sadly failed. Now the question is reversed. If you want to leave something unsecured, you have to explain why.
Sadly, "nobody is exploiting that hole today" no longer qualifies as a reason -- it was the reason everything was left unsecured in the first place. We have to assume a network full of spammers, religious nuts and even malicious foreign governments and saboteurs today. We should do the best security job we can do today unless the cost is extraordinarily high.
If something requires 300,000 site admins to change a config file, it won't happen. Most site admins would rather delegate most aspects of configuration of their site to somebody they trust.
A good principle of software design is that the "master" copy of some piece of data is maintained in exactly one place. It's OK to distribute the results -- indeed that's inherent in USENET -- but disastrously poor to routinely distribute the maintenance.
While final control over site files always resides with the site, the norm should be remote control unless specified otherwise. Policy decisions for subsets (groups, subnets and hierarchies) should be made in one place, with the main local decision being whether to subscribe to the subset or not.
Having 300,000 independently maintained files listing who the moderator for a group is, or whether the group accepts binaries or MIME just won't work.
USENET's distributed nature is its strength and curse. Things should be kept distributed for efficiency, and final local control always should be left to local sites, but if a policy issue or configuration fact is to be associated with a newsgroup, hierarchy or subnet, then associate it there by default, not at the site. No one party should control all of USENET, nor can they. But the mechanisms for centralized control where needed are no more evil than the centralized control moderated newsgroups have represented, within their space, for a dozen years.
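A minimal sketch, in Python, of the "maintained in one place, distributed everywhere" model this implies: a hierarchy coordinator publishes one policy record, every site fetches it, and the routine local decision is reduced to whether to carry the group and whether to override a specific field. The record format and field names are hypothetical.

    remote_policy = {            # published once by the hierarchy coordinator
        "group": "rec.example.moderated",
        "moderator": "mod@example.org",
        "binaries_allowed": False,
    }

    local_overrides = {          # the site's own file, normally tiny
        "carry": True,           # the main local decision: subscribe or not
    }

    def effective_policy(remote, local):
        policy = dict(remote)
        policy.update(local)     # final say always rests with the local site
        return policy

The design choice being illustrated is that the 300,000 copies are derived data; only the coordinator's record and the site's short override file are ever edited by hand.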
Many of USENET's assumptions are old -- ancient in internet time. Some remain valid, but all can be subject to question, and none should be taken as a given.
We wish to combine maximum new functionality with minimal upheaval. We know upgrades will be slow, so plans for new features must expect that, and provide a transition plan that, where possible, does not remove function from the users of old software, or that gives them an alternate way to get things done.
To make the transition to new features, temporary features may be put in place. But they should be explicitly temporary, with an explicit expiry date that takes effect by default (but which can be extended or removed if need be). Transitional systems should ideally be implemented only in a small subset of the net, or with watch-daemon servers, so that all tools do not need to code for them.
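A minimal sketch, in Python, of a transitional feature that turns itself off by default: unless someone actively extends the date, the expiry simply happens. The feature name and date are hypothetical.

    from datetime import date

    TRANSITIONS = {
        # hypothetical transitional feature : date it stops being honoured
        "accept-old-style-headers": date(1999, 1, 1),
    }

    def transition_active(feature, today=None):
        today = today or date.today()
        expiry = TRANSITIONS.get(feature)
        # expires by default; extending it means someone must edit the date
        return expiry is not None and today <= expiry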
We should not design anything into the network that is known to be already obsolete, without a very good reason.
Some features are going to be important enough that we must accept the upheaval, or accept that some new newsgroups or new features will not be available conveniently to users of old software. If it's this or abandon a useful and desired new feature, we go with the new feature. Any other philosophy leads to stagnation.
We must also remember that transmission, disk space and CPU are vastly cheaper today than when USENET was designed (and USENET is itself correspondingly much larger.)
The core of USENET is that it's distributed. The articles are distributed and the ownership of sites is distributed. Almost all the other technical features are found in other "competitive" systems, though sometimes they inherited them from USENET.
Some lesser features that have a strong association with USENET are:
USENET also has ubiquity, but it has this only through its long history and present critical mass, not through superior features.
Because of USENET's broadcast nature, all articles must be interpretable by everyone. It does little good to have two ways to do the same thing; it just forces every tool to understand both. If you need two ways to do the same thing, you need a very good reason.
Thus we only want one way to sign an article, for example.
All articles are transmitted and stored 300,000 times to sites, and similar numbers of times to users, so any efficiencies multiply. However, we should not go nuts -- ease of use and the hand-editability of articles and components should be preserved.
Unless there is a strong technological reason, limits are a matter of policy, and the specification should require all implementations to handle at least very large, and ideally arbitrarily sized, objects.