
Reward-Pool Distribution: A Solution? For Minds.com

B-S · Aug 24, 2018, 3:10:07 PM

I am writing this article because of the following blog on Minds.com:

https://www.minds.com/authorpendragon/blog/minds-requires-no-personal-information-pays-for-content-real-877983396465778688

Here is a longer quotation that introduces the topic:

"Minds.com has launched. Minds was in Beta in 2016-2017, and is now out of Testnet, having moved to the Ethereum token system on August 15th, 2018. It has clearly meant every one of its promises, because for years it kept them all. In addition to open source, crypto and freedom of speech, it promised payment for content creation, optional anonymity and no Minds requests for personal information. In Beta, it delivered. The problem is spam. Spammers are a threat to Minds' "payment for work" because spammers by definition cheat, game that system. Since Minds moved to the Ethereum system, the stakes for enforcing honesty could not be higher."


Reward Pool Distribution

First, a brief outline of how the rewards are distributed. Minds.com users receive MINDS tokens based on their daily activity on the platform, or on the incoming activity on their channel.

Votes, comments, subscriptions, referrals, reminds, on-chain transactions, check-ins and development are counted.

Only the activity of users participating in the reward distribution is counted, and participation is limited to users who have verified themselves with a telephone number. This limitation protects the system and keeps attackers from exploiting it.

This system was not conceived from the outset as it is today. Rather, it evolved gradually through a learning process. The current solution still has vulnerabilities and in places still gives spammers the opportunity to exploit the system.

Before I go into a possible solution to the problem, I would like to briefly cover what I like about the existing system and what I have to say in general about the distribution of rewards.


General

First of all, the principle at Minds.com is that every organic user's interaction with the network has the same impact on the reward pool. I would describe this as an account-based system. Other platforms, such as Steemit.com - which runs on the Steem blockchain - use a stake-based system, meaning that the weighting of a user's interaction depends on the stake they hold in the platform. In the case of Steemit, these are the so-called Vests.

In general, account-based systems seem to me more suitable at large scale, because they provide a better user experience for newcomers. Regardless of how much time others have already spent on the platform, and thus how many shares in it they may already have earned, a new user's activity is rewarded on equal footing with everyone else's.

In principle, however, these systems are easier to trick. Bots and spammers can claim large pieces of the reward pool for themselves in no time at all. This abuse can only be contained case by case, and each time one problem is solved, a new one emerges. Because wherever pieces of cake are handed out, there will always be people helping themselves who had no part in baking the cake and only want to profit at the expense of the others.

Measures are in place to counteract these people - for example, as already mentioned above, the required linking of a telephone number in order to participate in the reward distribution. However, people will still find ways to circumvent these measures, for example with disposable telephone numbers that can be found and used free of charge on the Internet.

Example: Concrete problem

Although the idea that every user receives the same weighting when interacting with the network is a plus in terms of user experience, this idea is also the biggest attack surface.

Let me give you an example:

Suppose a malicious user creates 25 accounts, each of which publishes one spam post each day. Each account then votes on every post, reminds every post, and writes a comment below every post. Except for reminds, each action is also performed on the account's own post. The simple math:

Voting: 25 x 25 x 1 = 625 points

Comments: 25 x 25 x 2 = 1250 points

Reminds: 25 x 24 x 4 = 2400 points

Total: 4275 points

With a network score of 420,000 points, this would correspond to a share of roughly 1%!

And remember: this calculation involves only 25 accounts! Theoretically, someone with sufficient patience could create 2,500 accounts and, if they all interacted with each other, end up pocketing about 50% of the reward pool. About 50%, because the activity of the many bot accounts would also increase the network score, by 427,500 points.

Of course, this calculation is not 100% accurate, but it illustrates the problem quite well. The existing system can be exploited with little effort, so that an individual could claim half of the reward pool for himself.
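The arithmetic above can be sketched in a few lines. The point values (vote = 1, comment = 2, remind = 4) and the 420,000 network score are the figures from the example:

```python
# Sketch of the spam-farm arithmetic from the example above.
accounts = 25
posts = accounts                         # one spam post per account per day

votes    = accounts * posts * 1          # every account votes on every post
comments = accounts * posts * 2          # every account comments on every post
reminds  = accounts * (posts - 1) * 4    # an account cannot remind its own post
farm_score = votes + comments + reminds

network_score = 420_000
share = farm_score / network_score

print(farm_score)                        # 4275
print(round(share * 100, 2))             # 1.02 (percent)
```

The quadratic term (`accounts * posts`) is what makes the attack cheap: doubling the number of accounts roughly quadruples the farmed points within the group.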




One approach: Variable Reputation

Perhaps the approach of weighting every user activity exactly the same as all the others is wrong. Precisely because of the potential for abuse, a solution is needed that assigns value to the quality of interactions.

My suggestion: Variable Reputation

Currently, each 'organic' user's vote gives one point, each comment two points, and so on. But a comment that answers a user's question in detail is worth more than a spam comment that just says "Nice".

My proposal is to measure this value with a variable reputation that reflects the quality of a user's interactions - for example, a reputation ranging from -100 (bot, inferior interaction, spam) to +100 (good content, quality interaction). The reputation then acts as a multiplier on the points achieved: Rep 0 = points x 1, Rep 100 = points x 5, Rep -100 = points x 0. Say users A and B have each achieved 1000 points; user A has a reputation of 100, user B a reputation of 0. Although both scored the same number of points, they would now receive different rewards, because their points are weighted differently: user A ends up with 5000 points at factor 5, while user B keeps only the original 1000 at factor 1.

More detailed breakdown of reputation factors:

Rep 0: x1

Rep 10: x1.4 | Rep -10: x0.43

Rep 20: x1.8 | Rep -20: x0.18

Rep 30: x2.2 | Rep -30: x0.081

Rep 40: x2.6 | Rep -40: x0.035

Rep 50: x3 | Rep -50: x0.015

Rep 60: x3.4 | Rep -60: x0.0067

Rep 70: x3.8 | Rep -70: x0.0029

Rep 80: x4.2 | Rep -80: x0.0012

Rep 90: x4.6 | Rep -90: x0.00055

Rep 100: x5 | Rep -100: x0

The numbers chosen are of course only an example and probably not optimal. However, they should suffice to illustrate the idea.
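One curve that reproduces this table fairly closely is linear on the positive side and roughly geometric on the negative side. A minimal sketch - the decay constant 0.43 per 10 reputation points is my own fit to the numbers above, not part of the proposal:

```python
def reward_multiplier(rep: float) -> float:
    """Map a reputation in [-100, 100] to a point multiplier.

    Positive side: linear from x1 at rep 0 to x5 at rep 100.
    Negative side: geometric decay of about 0.43 per 10 points,
    approximating the table above, with a hard cutoff of x0 at rep -100.
    Constants are illustrative, matching the article's example table.
    """
    if rep >= 0:
        return 1 + 0.04 * rep
    if rep <= -100:
        return 0.0
    return 0.43 ** (-rep / 10)

print(round(reward_multiplier(45), 2))    # 2.8
print(round(reward_multiplier(-10), 2))   # 0.43
```

A smooth function like this avoids step effects at the table's 10-point boundaries, but any monotonic curve through the same anchor points would serve the same purpose.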


An example of distribution:

User A writes high-quality blog posts and regularly receives a lot of feedback on them. The feedback comes from different people, and from time to time well-known commentators or users are among them. This benefits his variable reputation, which increases. How much it increases depends on various variables, such as the reputation of the interacting users. The quality of his own interactions, as well as the incoming interaction, gives him a variable reputation of 45 (which may vary from day to day). His scores are thus multiplied by a factor of 2.8.


A negative example:

User B runs a spam account with which he writes hundreds of posts a day. He also spams comments to provoke activity from other users. Now user A notices this abuse, decides to do something about it, and downvotes five of the many posts. Because of user A's positive reputation, this has a strong effect on the reputation of user B's account, which drops to -10, so that B now receives less than half of the original rewards. In addition, user B's interactions now have a negative reputation effect on other users: if, for example, user C interacts exclusively with user B and is voted on by user B, user C's reputation will go down too.
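How such a downvote should propagate is exactly the open design question. Purely as a hypothetical sketch, here is one rule in which a downvote's penalty scales with the voter's own reputation; the function, formula, and constants are my own illustration, and the resulting -14.5 does not match the example's -10 exactly - only the mechanism matters:

```python
def apply_downvote(target_rep: float, voter_rep: float, base: float = 2.0) -> float:
    """Lower the target's reputation after a downvote.

    The voter's own reputation scales the effect, so established users
    carry more weight than fresh or negative accounts. The base penalty
    of 2.0 and the linear scaling are illustrative assumptions only.
    """
    penalty = base * (1 + max(voter_rep, -100) / 100)
    return max(-100.0, target_rep - penalty)

# User A (rep 45) downvotes five of user B's spam posts:
rep_b = 0.0
for _ in range(5):
    rep_b = apply_downvote(rep_b, voter_rep=45)
print(round(rep_b, 1))   # -14.5 with these illustrative constants
```

Note that with this rule a downvote from a negative-reputation account has little to no effect, which blunts the obvious counterattack of spam accounts mass-downvoting honest users.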

This mechanism of flexible reputation - suitably refined beyond this sketch, of course - could lead to more organic activity across the platform. Which inputs feed into the calculation is therefore probably the key question.

What do you think of this approach? A plausible solution, or not usable at all? Opinions are welcome in the comments!



#minds #reward-pool #crypto #bots #abuse #reputation #minds.com #earning