Monday, April 11, 2016

Productive Discussions, Part 2: The Bad Actor Problem



In the first part of this topic, I discussed what I call the good faith problem, where I argued that we should generally start off by having good faith in the intentions of the other party, rather than assuming the worst. But if we do that, then what happens when the other party actually does have bad intentions? How do we protect ourselves from getting tricked and manipulated by them?

I call this the bad actor problem. How do we have good faith in others while not becoming unreasonably exposed to bad actors?

First, let's look at a great example of where this plays out in practice: the scientific peer-review process. Peer review is a fundamental part of the system that allows us to have confidence in the output of scientists. We know that individual scientists can make mistakes, so we use peer review to let scientists check each other's work and look for problems. The peer-reviewed work then gets published in journals, and other scientists place confidence in it as a result.

The problem is, this system isn't very robust to bad actors. Scientists train long and hard to look for mistakes, but not so much for fraud and intentional deception. They can certainly spot it sometimes, but their primary focus is on uncovering the secrets of an objective universe that is trustworthy, not one that is constantly trying to trick them!

It's often been said that when you're trying to catch out a scientific charlatan, you don't send a scientist, you send a magician. The magician is the one who trains for years in human deception, and is far more likely to spot the tricks the charlatan uses. In fact, there's a long, sorry history of eminent scientists being fooled for precisely this reason.

Even when scientists are on the ball, there are plenty of ways for a bad actor to game the process. They can publish in less credible journals. They can shelve studies that turn out unfavorably and publish only the ones that turn out well, taking advantage of the fact that, given enough attempts, chance alone will eventually hand them a positive result. There are even far more subtle and insidious tactics, like the one discussed in this article.
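To make the "shelve the bad studies" point concrete, here's a rough Python sketch (mine, not from any real study, with the study count and threshold picked purely for illustration). It simulates a lab quietly running a batch of studies of a treatment that does nothing, and "publishing" only if at least one of them looks significant at p < 0.05; under the standard assumption that p-values are uniform when there's no real effect, chance alone delivers a publishable result about half the time.

    # Rough illustration of gaming the process by shelving unfavorable studies.
    # Assumption: with no real effect, a well-calibrated p-value is uniform on
    # [0, 1], so each study has a 5% chance of clearing p < 0.05 purely by luck.
    import random

    ALPHA = 0.05            # significance threshold
    STUDIES_PER_BATCH = 14  # null studies quietly run, mostly shelved
    TRIALS = 100_000        # Monte Carlo repetitions

    def batch_yields_false_positive() -> bool:
        """Run a batch of null studies; 'publish' if any looks significant."""
        return any(random.random() < ALPHA for _ in range(STUDIES_PER_BATCH))

    hits = sum(batch_yields_false_positive() for _ in range(TRIALS))
    print(f"Chance of at least one publishable 'positive': {hits / TRIALS:.2f}")
    # Analytically: 1 - (1 - 0.05)**14 is about 0.51, better than a coin flip.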

The scientific peer-review process is actually in need of a thorough security review, the kind that is often done in other domains to spot flaws that can be exploited by bad actors. It's well known, for example, that giving people security passes can be fairly pointless if they're going to hold the door open for others out of politeness! Bad actors can exploit our good faith and manners, which makes it tough to get security right without resorting to everyone assuming the worst of each other.

Sometimes there is a clever solution, such as gyms I've been to where you step into an individual-sized "airlock" tube to get in. It's physically impossible to let someone else in with you, so politeness can't be exploited. Security experts are trained to find these kinds of solutions.

But for the rest of us, in everyday life, the best solutions are probably along the lines of the principle "trust but verify". The idea is that you give the benefit of the doubt whenever possible, but if something seems suspect you double-check it. This might mean checking a claim before replying, if that's possible. It might mean saying something like "Let's assume that's true. Then...". Or it could simply be saying "I'm not sure about this, it doesn't line up with other things I've heard. Can you elaborate?"

This ties back in with the good faith problem. If you enter a discussion feeling like the other party has good faith in your intentions, then you're less likely to feel threatened if they question any of the particulars. And when discussions are based on an assumption of trust, it becomes easier to spot the bad actors, because they can't get past the verification step, and their attempts to dodge it will often make them obvious. But when discussions are antagonistic from the start, everyone looks like a bad actor to everyone else. And that makes productive discussion virtually impossible.

