USENET at first was built with effectively no security. Anybody, anywhere could introduce any article which could do anything. There was limited auditing even to detect abuse, let alone prevent it.
Over time abusers arrived, and in many cases "privileged" functions had to be either shut down or "put on manual" at great administrative cost to admins. In some cases actual security, using digital signatures, was applied: to newgroup messages (pgpverify), moderated groups (pgpmoose) and third-party cancel notices (NoCem). PGP was commonly used because it is a widely distributed standalone program capable of producing digital signatures.
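For instance, a newgroup message protected by pgpverify carries its signature in an X-PGP-Sig header, which names the headers covered by the signature; a site honors the message only if it verifies against the hierarchy maintainer's key the site has on file. The headers look roughly like this (names and signature data illustrative):

    Control: newgroup comp.example.moderated moderated
    X-PGP-Sig: 1.1 Subject,Control,Message-ID,Date,From,Sender
        iQBVAwUBN... (base64 signature data, continued on folded lines)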
USENET has no government. It is an anarchy -- the absence of government -- but this does not mean total chaos. It has rules, guidelines, traditions, movements and principles of governance, if not government.
USENET is, in spite of its public nature, a privately owned network. It is a cooperative, owned by the owners of the sites that are on it. Nobody gets on the network or uses it without the permission, direct or indirect, of these owners. In these site owners lies all the authority on USENET.
This makes sense, as anything on USENET involves storing or changing data on the site owners' machines. Those files are theirs. Of course, having each individual site owner privately administer every aspect of the net on their own machine would never work. There are hundreds of thousands of machines on the net, serving millions of users. So means to delegate administration have been found.
As noted, the first way to delegate it was to simply let anybody do anything. In fact, at first anybody could create a new newsgroup just by typing a new name. In the past there were not many malicious users, so the system worked.
Today we have malicious users: spammers and the like, who abuse the net for imagined gains, and plain sociopaths such as trolls and crackers, who abuse the net or the people on it for the sake of abusing it.
Barring malice, in the past we still had politics -- different groups wanting different things. To solve this various anarchic and pseudo-democratic systems evolved to develop group consensus or a measurement of group will, and everybody agreed, without force, to go along with the group will where it was important. One example was the newsgroup voting system.
This works because in fact to get anything done in a co-op like USENET, you need the almost unanimous consent of the site owners. Any site owner is free to not participate in any group, hierarchy or other activity. So you must keep them all happy if you want to do something netwide. While total unanimity is hard, near-unanimity, won through compromise, has actually worked better than might be expected. This is true in part because almost all of us are drilled from childhood to accept the democratic principle and accept things the majority wants so that we can get our way later when we agree with the majority.
Security on USENET amounts to the question, "Should anybody and everybody be able to perform this action?" If the answer is yes, you need no security. If no, you need some security to divide those who you do wish to perform the action from those who you don't.
Security of course has a cost, so sometimes you're willing to accept letting anybody perform some action if the risk of that is less than the bother of security. When the net was smaller, and there were few malicious people about, security wasn't necessary simply because even though anybody could do certain things, they tended not to.
Now on USENET, the only "action" is the posting of an article. However, that action breaks down further based on what the headers of the article do, and in particular on the Control header of control messages. So while "post an article" is not the unit you secure, you are interested in actions like "post a cancel" or "post an article in a moderated newsgroup."
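For example, a cancel is just an article whose Control header names the target message, and a posting to a moderated group is just an article bearing an Approved header (addresses here are illustrative):

    Control: cancel <original-id@site.example>

    Newsgroups: comp.example.moderated
    Approved: moderator@site.example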
Here is a list of the actions on USENET I believe most people would prefer not be available to anybody and everybody. As such, we must address how to secure them.
Arbitrary users should not be able to:

 - create new newsgroups via newgroup control messages
 - cancel articles they did not post
 - approve articles for moderated groups they do not moderate
 - issue NoCem notices that other sites act upon
While there is room for debate about some of these, and the rules may vary in some hierarchies (alt, for example, might allow any party to create a group), I think that for the mainstream of USENET most people would ideally prefer these functions not be entirely open.
If not all parties can be trusted to perform these actions, who can or should be trusted? Well, that varies from action to action. In some cases, like the cancel message, everybody agrees the original poster of a message should be trusted, and most agree the administrators of the equipment used to insert the posting into the net should be trusted as well. Many others wish to pick specific third parties and trust them to deal with abuse.
For other functions it's more political. The actions themselves require subjective judgement and must be performed by individuals or groups who win the trust of the machine owners who in turn grant it.
It turns out that the vast majority of people on USENET can be trusted, or at least given the benefit of the doubt, with their trust revoked only after it is abused. That's how the net used to work, but there was no way to revoke the trust when people started abusing it.
The answers as to whom people want to trust to perform these actions are varied and many. The underlying security system has to allow people to create the various structures of trust and enablement that they desire.
In particular, one function worth supporting is the technological vote. In this case, an action is enabled if N or more (presumably a majority) of a set of trusted parties approve it. There is no vote with ballot counting or a returning officer. Instead, the actual privileged command comes with proof of the approval of at least N of the trusted parties. This way no one party can have control or represent a single point of failure or takeover. However, this is complex, so it is typically limited to the highest-level functions of a system, like the two keys needed to launch a nuclear missile.
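A minimal sketch of such a check, using HMAC as a stand-in for real public-key signatures so it runs with only the Python standard library; the names (TRUSTED_KEYS, QUORUM) are illustrative, not from any deployed news software:

    import hashlib
    import hmac

    # Hypothetical trusted parties; shared secrets stand in for the
    # public keys a real system would hold.
    TRUSTED_KEYS = {
        "alice": b"alice-secret",
        "bob": b"bob-secret",
        "carol": b"carol-secret",
    }
    QUORUM = 2  # N: distinct approvals required before acting

    def sign(key: bytes, message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    def action_approved(message: bytes, signatures) -> bool:
        """signatures: (party, sig) pairs attached to the command."""
        approvers = set()
        for party, sig in signatures:
            key = TRUSTED_KEYS.get(party)
            if key and hmac.compare_digest(sign(key, message), sig):
                approvers.add(party)  # count each trusted party once
        return len(approvers) >= QUORUM

    msg = b"newgroup comp.example.moderated"
    sigs = [(p, sign(TRUSTED_KEYS[p], msg)) for p in ("alice", "carol")]
    assert action_approved(msg, sigs)          # 2 of 3 approve: honored
    assert not action_approved(msg, sigs[:1])  # 1 approval: refused

Counting distinct approvers, rather than raw signatures, keeps one party from replaying its own approval N times.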
My investigation shows that, other than for the specialty problem of verifying that a cancel message comes from the original generator of a message, there is no solution to the problems stated above other than public-key digital signature combined with certificates -- ideally attribute certificates, which certify authorized actions rather than individuals.
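That cancel problem admits a cheaper, signature-free fix, sketched here in the spirit of the Cancel-Lock scheme later standardized in RFC 8315 (encoding details simplified): the original article publishes a hash of a secret, and only someone who knows the secret -- normally the original poster or their server -- can issue a cancel that matches it.

    import base64
    import hashlib

    def make_lock(secret: bytes) -> str:
        """Published in the original article, e.g. Cancel-Lock: <lock>."""
        return base64.b64encode(hashlib.sha256(secret).digest()).decode()

    def cancel_is_authentic(key_b64: str, lock: str) -> bool:
        """The cancel presents the secret (e.g. Cancel-Key: <key>);
        anyone can check it against the lock, but nobody can forge
        it without inverting the hash."""
        return make_lock(base64.b64decode(key_b64)) == lock

    secret = b"per-article secret known only to the poster"
    lock = make_lock(secret)                 # goes in the original post
    key = base64.b64encode(secret).decode()  # goes in the later cancel
    assert cancel_is_authentic(key, lock)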
Such a system consists of:

 - key pairs held by the parties who will perform privileged actions
 - digital signatures on the privileged messages themselves
 - certificates, signed by trusted parties, granting a key the authority to perform specific actions
 - verification at each site that checks the signature and certificates before honoring a message
I view these as the minimum requirements. Researchers working on certificate systems have found other useful attributes that are also good ideas. They add some complexity but have been shown to be worth it. These include auditing information that tracks how certificates came about, expiration dates on certificates, and certificate and signature collapse for efficiency, among others.
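As a sketch, here are the fields such an attribute certificate might carry -- the field names are illustrative assumptions, not a deployed format:

    from dataclasses import dataclass

    @dataclass
    class AttributeCert:
        holder_key: str   # fingerprint of the key being empowered
        action: str       # the authorized act, e.g. "newgroup comp.*"
        issuer_key: str   # fingerprint of the certifying party
        audit_trail: str  # how the certificate came about (auditing)
        expires: str      # expiration date, bounding damage from leaks
        signature: bytes  # issuer's signature over all fields above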
Some have claimed all this is too complex. I believe in the end it is the simplest solution that meets the goals. If you wish to contend otherwise, you must demonstrate how another simpler system meets the goals, or why one of the goals is not necessary or good.