What’s stopping TruStory users from being bribed?


#1

We can’t stop people from behaving the way they want to. But we can bake preventative measures into the platform that minimize this behavior. For example:

  1. The token’s value is dependent on the quality of curation (skin in the game). We explain this more eloquently here, in the “Can’t TruStory be gamed?” section.

  2. Because of (1), users are economically aligned to maintain honest stories in a category. If some portion of the users is bribed, the majority of rational users who care about the economic value of the category token can always refute the claims of the dishonest/bribed users. Even if the bribed users get a false story confirmed, that story can still be challenged by rational economic actors at any time. So you can try to organize a mob attack to push a narrative by bribing TruStory users, but if the rational actors who care about the category token’s value notice it, they can challenge it and win (see the sketch below).
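To make that challenge dynamic concrete, here is a minimal sketch of a stake-and-challenge flow. The class name, the stake amounts, and the winner-takes-it resolution rule are all invented for illustration and are not TruStory’s actual mechanics; the only point is that a later challenge by rational stakers can overturn a story a bribed minority confirmed earlier.

```python
# Minimal sketch of stake-and-challenge, assuming invented names, stake
# amounts, and a toy resolution rule -- not TruStory's actual design.

from dataclasses import dataclass, field

@dataclass
class Story:
    claim: str
    backers: dict = field(default_factory=dict)      # user -> tokens staked in support
    challengers: dict = field(default_factory=dict)  # user -> tokens staked against

    def back(self, user: str, tokens: float) -> None:
        self.backers[user] = self.backers.get(user, 0) + tokens

    def challenge(self, user: str, tokens: float) -> None:
        # A story stays open to challenge at any time, even after being "confirmed".
        self.challengers[user] = self.challengers.get(user, 0) + tokens

    def resolve(self) -> str:
        # Toy rule: the side with more total stake wins.
        backed = sum(self.backers.values())
        opposed = sum(self.challengers.values())
        return "confirmed" if backed > opposed else "refuted"

# A bribed minority confirms a false story...
story = Story("Claim X is true")
story.back("bribed_1", 50)
story.back("bribed_2", 50)

# ...but rational token holders, whose category token loses value if the
# category fills with junk, can still challenge it later and outweigh them.
story.challenge("rational_1", 80)
story.challenge("rational_2", 80)
print(story.resolve())  # -> "refuted"
```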


#2

Doesn’t (2) lead to a hypothetical danger like:

When the majority of the rational actors decide to meet up at a TruStory conference, someone jams the internet/cell phone access at that venue so the rational actors are offline (and hey, if TruStory gets successful, that event could be held at some island resort haha, but the other side of the coin would be even worse in that case, as they could be offline for a while).

Meanwhile the bribed actors (who deliberately did not attend the event) act quickly and both publish and confirm lots of fakes, while at the same time successfully challenging every real story they can override.

Sounds like something out of a Hollywood movie, but in this day and age it doesn’t sound too impossible, does it? :slight_smile:

Besides, the villains don’t have to be some “evil masterminds of the world”; the adversary could be as mundane as another platform competing for the funds of the same VCs (and not even the whole platform, just some of its co-founders, for example). Or, you know, someone doing it for the lulz.

Are these kinds of unlikely (yet not totally impossible) scenarios taken into consideration?


#3

I don’t want to debate the likelihood of this event, because that would be a weak argument. This situation is possible, and TruStory is designed to handle such scenarios because stories can always be challenged. In an event where the rational people were offline and the bribed, irrational people were online, yes, false stuff could get deemed true and true stuff could be deemed false. But these situations, where content is inaccurately identified, are bound to happen. The key is having the opportunity to correct it.

Stories are always able to be challenged. In this scenario, the rational actors would eventually come back online and restore the equilibrium of rational:irrational actors.


#4

Yes, the story itself is just a random example of course. The real case might be less cinematic, say a massive hack sending the “rational” actors offline.

Okay, I see, so the goal is long-term stability, and that’s fine I think.

The problem here might be that the negative case in question could happen (on purpose) right before or during some important event, so all the “simple” users (readers) who rely on TruStory by that moment would be completely disoriented (and maybe wouldn’t even know they’re being played if the attack is more subtle), and/or the attention of the wider media would be directed towards the fact that TruStory is sooo un-true at the moment, painting it as an unreliable source.

Again, the issue here is probably that while you’ll keep telling the users and the media that in the long term the equilibrium will restore itself and the content will become reliable again, they’ll be afraid that at any given moment in the future they’ll have no idea whether they can trust the content published on TruStory or not (especially if they need it NOW to form an opinion, and not at some point in the future).

A tricky issue, I think, but it sounds like a crucial one, and it probably needs to be addressed before mainstream adoption.

(At this point I am not aware whether there are safeguards/policies ready for these cases, I’m just a prospective user - so if I have these doubts, many others might have them too; hence please don’t read this as criticism, on the contrary I’m voicing a random user’s POV so you can take it into consideration when working on a solution for this issue… if there is one)


#5

My view is that the core assumption here is that the token’s value will be crucial in disincentivising poor behaviour.

If the token value were to drop, the disincentives would be lower, making it conceivably cheaper for a bribe to take place. More so if token holders are seeking a means of cashing out. It’s not impossible.

In my mind, what would help is a backup mechanism to maintain the value of the token, e.g. buybacks, or the addition of a reputational mechanism where community members curate each other. Sounds a little Gestapo-ish, but I believe it’s a reasonable answer to “Who shall watch the watchers?”.

The second option is the superior one, as it de-risks the token’s value as the point of failure by introducing another variable that is harder to game, albeit in a transactional fashion.
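One way to picture that second option: blend a curator’s economic stake with a peer-assigned reputation score when weighting their vote, so a falling token price alone no longer collapses the disincentive. The function name, the 0-to-1 reputation scale, and the blending formula below are all assumptions made up for illustration, not anything TruStory has specified.

```python
# Illustrative only: blend stake with a peer-assigned reputation score (0..1)
# so a curator's influence does not hinge on token price alone. The weighting
# formula and the 0.5 blend factor are assumptions, not a TruStory spec.

def vote_weight(stake: float, reputation: float, rep_factor: float = 0.5) -> float:
    """Weight = part raw stake, part stake scaled by peer reputation."""
    return stake * (1.0 - rep_factor) + stake * reputation * rep_factor

# Two curators with equal stake: the one the community rates poorly
# carries noticeably less weight, even if the token's market price has fallen.
print(vote_weight(stake=100, reputation=0.9))  # 95.0
print(vote_weight(stake=100, reputation=0.2))  # 60.0
```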


#6

Hi Shawn. Great response! We actually do have plans in the future to allow community members to curate community members. This is why we’re taking careful measures to ensure our initial community members are of the highest standards.


#7

H/t to you @Shawn for seriously thought-provoking points.

You’re quick to realize the token isn’t just this thing with a price but critical to how the whole platform functions. It’s a measure of how easy it is to curate a category, a measure of one’s reputation, a measure of how strong the community is, etc.


#8

Sorry it took me so long to reply to this - I’ve been mulling over your answers.

So in your mind, would this be closer to Wikipedia or Quora? Personally, I love what Quora has done in moderating answers to be civil and informative, but it takes an intensive effort from the moderators.


#9

Great points Shawn!

We should also remember that this core assumption is not really guaranteed even at high and growing token value levels, including the cases with buybacks, etc.:

Just like in the “Scorpion and the Frog” fable, there could be people who have motives (sometimes totally unclear to us) to act in a way that may seem, or even be, illogical and harmful to their own financial interests.

So relying on a purely financial mechanism of keeping a token’s value in the green might not be a very effective way of keeping actors within certain boundaries.

And at the same time, as you’ve pointed out, a permanently falling incentive value could indeed put strong stress on many token holders, opening the gate to cheaper and simpler bribery.

So overall, relying mostly on the financial incentive sounds a bit dangerous in the long run.

Hence, mutual watching of the watchers could be more robust - but this part should be carefully considered as well.

Because if a group of people are “watching” each other for months, they could slide down into a small muddy puddle of “come on, are you my pal or not? Just close your eyes to this little thing (or else I’ll blame you in reverse)”.

It would probably be better to have some circular scheme for watching the watchers (A watches B, B -> C, C -> D, D -> A) to decrease mutual influence, or use some dynamic approach.
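Roughly what that circular plan could look like as a rotation rule; the names, the epoch-based reshuffle, and the seeding are assumptions made up for this sketch, not a proposal TruStory has made.

```python
# Illustrative only: a circular watcher assignment that is reshuffled each
# epoch, so no pair watches each other long enough to get cosy. The epoch
# length and the seeded shuffle are assumptions for the sake of the sketch.

import random

def assign_watchers(curators: list[str], epoch: int) -> dict[str, str]:
    """Return a watcher -> watched mapping forming one ring (A->B, B->C, ..., last->first)."""
    ring = curators[:]
    random.Random(epoch).shuffle(ring)  # a different ring every epoch
    return {ring[i]: ring[(i + 1) % len(ring)] for i in range(len(ring))}

curators = ["A", "B", "C", "D"]
print(assign_watchers(curators, epoch=1))  # ring for epoch 1
print(assign_watchers(curators, epoch=2))  # epoch 2: different ring, less mutual influence
```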

At this point these things probably sound like overkill (“hey, it won’t happen in our community!”), but if/when the project takes off nicely and starts having certain trust and impact in some niches, people will start attempting to influence/infiltrate it in various ways to pursue their interests.

And some of these attempts can be quite vigorous and/or well-funded, so it could be a good idea to start preparing against them in advance.