....to present workable ideas to diminish the power and reach of token farmers and spammers, and those who would use minds.com as a source of revenue at the expense of its social media experience. Also, to engage in a risk-reward analysis of those measures, with an eye towards preserving Minds' paradigm of freedom and transparency.
It is no secret that gamification of reach and revenue is at the core of the Minds Token economy, nor that gamification of digital social experiences in general is both the present and the future of social media. This presentation will take as read that such gamification, when it is fair and balanced, is a fundamentally healthy endeavor, and ought not to be eliminated or curtailed.
Of late, token- and reach-manipulators have significantly diluted the rewards pool by posting low-quality content and low-quality replies, inflating their engagement scores at the expense of lower-tempo original content and thoughtful engagement from authentic users. By and large, authentic users can be distinguished from manipulators by their tendencies to....
Certainly, there are many instances of authentic users who temporarily engage outside of these tendencies; manipulators, however, almost always operate outside that envelope, and, in addition....
Using such behaviors, these manipulators are able to dominate both the "Trending" and "Trending for You" sections, and in so doing have crippled the earnings and reach of authentic users.
Manipulators make good use of Minds' permissiveness in allowing hidden hashtags. This lets them follow the trending interests of authentic users and then falsely mark their own content as belonging to those same categories. The result is that the top-discussed tags on Minds, at any given time, are polluted with irrelevant material posted by spammers attempting to leech engagement from those subjects. Rather than using public, visible hashtags, they spend their allowance of five tags as hidden hashtags, thereby significantly reducing the probability that they will be reported for manipulation. This tactic does not require a particularly high intelligence quotient, and it is quite pervasive.
Removing this tool would force spammers either to use no hashtags (crippling their reach) or to post their disingenuous tags publicly, for all to see. This would benefit authentic users who wish to report hashtag manipulation, which, in turn, would benefit the authentic gamification of the network.
RISKS/CONCERNS: As far as I am aware, there are no significant reasons, relevant to freedom or to privacy, why Minds users should need to use hidden tags. One might argue that the Minds Team is content to allow deliberate cross-pollination of ideas via false tagging, as a way to break into the trending feeds of others in the hope of de-radicalizing or engaging them. (e.g. a hardcore racist might follow tags like [censorship; jews; trump; biden; truth], and a de-radicalizer, hoping to make a post that could reach that person's field of vision, would falsely tag a heartwarming race-related post with some of those same tags, even though the tags are irrelevant to the post.)
Yet, if this were so, why would the Minds Team have included "Incorrect Use of Hashtags" as a reporting option?
Aesthetics are the only reason I can think of: allowing users to hide their tags does, to some eyes, make certain posts look cleaner and less 'social-media-ey'. However, given the absolute leeching going on, I do not think aesthetics ought to be a priority in this case. Requiring a higher standard of transparency from users is neither censorious nor tyrannical, in my opinion.
In a social-media landscape where the proliferation of one's content and expressions has traditionally been its own reward, it is somewhat confusing that, on Minds, a "Remind" carries not only a higher engagement score than a like or a comment, but the same score as the acquisition of a new subscriber. Hitting the Remind button under a post takes nearly the same effort as hitting "Like", and yet its benefit to the remindee is significantly greater.
As is typical of manipulators and gamers, this dynamic is not lost on spammers. Much of the low-quality content invading the trending sections has gotten there through heavy Reminding, often in a circular pattern among spammers. It costs a spammer nothing to Remind an insipid post about a wedding cake or a vacation destination, and yet the effect of even just ten Reminds is enough to push such content far above that of an artist making original work, or a journalist working in dangerous conditions.
Authentic users will Remind content based on its importance to them and to their environment. Spammers will Remind anything at all (except contentious material), knowing that what matters is not the content or its reach, but the engagement score. This inflates the "# of channels discussing" count under Trending; grants a high engagement score (whether or not the Reminds come from Plus users, as far as I know); and edges original content out of the running for Trending. In turn, trending content is among the first content that Plus users will see and interact with, and this is how spammers get away with earning significant token amounts while doing almost nothing of value.
The engagement score of a Remind should be reduced to 1 point, the same as a "Like". Even if its score were removed entirely, Reminding would still lead to point gains, because it inherently increases the number of people who see the content.
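To make the dynamic concrete, here is a minimal sketch in Python. The only values established above are that a Like is worth 1 point and that a Remind currently scores the same as a new subscriber, so the remaining weights (comment = 2, Remind = subscriber = 5) are illustrative assumptions, not Minds' actual formula.

```python
# Illustrative only: these weights are assumptions chosen to mirror the
# relationships described above (a Like is worth 1 point; a Remind is assumed
# to be worth the same as a new subscriber, set here to 5 for the example).
CURRENT_WEIGHTS = {"like": 1, "comment": 2, "remind": 5, "subscriber": 5}   # assumed status quo
PROPOSED_WEIGHTS = {"like": 1, "comment": 2, "remind": 1, "subscriber": 5}  # Remind cut to 1 point

def engagement_score(activity: dict, weights: dict) -> int:
    """Sum of (count of each interaction type) x (its assumed point weight)."""
    return sum(weights[kind] * count for kind, count in activity.items())

# Ten circular Reminds from a spam ring vs. a modest amount of genuine engagement.
spam_post = {"like": 2, "comment": 0, "remind": 10, "subscriber": 0}
original_post = {"like": 15, "comment": 6, "remind": 2, "subscriber": 1}

for label, weights in (("assumed current", CURRENT_WEIGHTS), ("proposed", PROPOSED_WEIGHTS)):
    print(f"{label}: spam={engagement_score(spam_post, weights)}, "
          f"original={engagement_score(original_post, weights)}")
```

Under these assumed weights, the spam post with ten circular Reminds scores 52 against the original post's 42; cutting the Remind weight to 1 point reverses the ordering (12 against 34), without touching any other part of the scoring.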
RISKS/CONCERNS: Reducing the engagement incentive to Remind a post will likely have little effect on users who Remind content based on its merits. I could be wrong on that, but I cannot foresee any effect other than the diminishment of content that is Reminded as part of an engagement scheme. Authentic content might suffer a measurable decrease in engagement score if the users Reminding it were doing so merely to pad out their own feeds; and users who partook of engagement schemes may or may not have been Reminding meaningful quantities of authentic content. However, as stated, all Reminding retains its (systemically?) ignored value of presenting the content to new eyes, many of which are authentic.
This one is simple. When a bot or spammer is identified and banned, they can return to their misdeeds almost immediately, because Minds does not ban people; it bans accounts.
A tithe of time might act as a buffer against users who have just been identified as having run spam accounts. An important factor would be the question, 'How long is long enough?' Is a month too little or too much? Perhaps a week? Or two weeks? It's anyone's guess.
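For illustration only, here is a hypothetical sketch of what such a probation gate might look like. The window length, the notion of a "linked identity," and every name in it are my own assumptions, not anything Minds has implemented.

```python
# Hypothetical sketch of the "tithe of time": a new account (or one tied to a
# recently banned identity) earns no token rewards until a probation window has
# elapsed. The window length, the "linked identity" check, and all names here
# are illustrative assumptions, not Minds' actual implementation.
from datetime import datetime, timedelta
from typing import Optional

PROBATION_WINDOW = timedelta(days=14)  # a week? two weeks? a month? - the open question above

def rewards_eligible(account_created: datetime,
                     linked_ban_at: Optional[datetime] = None,
                     now: Optional[datetime] = None) -> bool:
    """True once the account has aged past the probation window, measured from
    account creation or from the most recent ban of a linked identity,
    whichever is later."""
    now = now or datetime.utcnow()
    probation_start = max(account_created, linked_ban_at) if linked_ban_at else account_created
    return now - probation_start >= PROBATION_WINDOW

# An account re-registered the day after its previous identity was banned:
print(rewards_eligible(datetime(2021, 6, 2), datetime(2021, 6, 1), now=datetime(2021, 6, 10)))  # False
print(rewards_eligible(datetime(2021, 6, 2), datetime(2021, 6, 1), now=datetime(2021, 6, 20)))  # True
```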
RISKS/CONCERNS: This measure, while it would grant the network some reprieve from determined spammers, is fairly tyrannical as solutions go. It operates on a presumption of guilt toward newly created accounts in order to add a quantum of security to the engagement economy.
However, the Minds Team has already instituted a measure with a similar presumption of guilt, when it limited incoming reward score to that provided by Plus users.
I will fill this segment as more ideas present themselves.