Sunday, February 12, 2017

Dislike Uber For The Right Reasons


I've never been much of a fan of Uber or any of the other companies spearheading the "sharing economy". While there are certainly some positives to them, I see them primarily as a cynical way to skirt around the regulations, laws, insurance and other consumer protections that make traditional services cost more, but that leave consumers and employees generally better off. Like environmental regulations or drug testing requirements, these things are frequently demonized, by people on the right in particular, as stifling innovation and job creation, which completely ignores the criminal actions and negligence that typically precede the creation of any regulation in the first place. Companies usually do the wrong thing and cause the government to create regulations to protect consumers and employees; the government doesn't generally do this sort of thing just for the fun of it.

Of course, government regulations can be stifling at times, and they can be poorly policed and enforced, creating work for businesses while not actually providing any benefits to society. And in cases like Uber, taxi services exploited monopolies in cities and didn't respond to consumer demands, making an opening for an alternative to pop up. So this can help to justify the need for a disruptive company like Uber, though it's also very likely (given the rise of all kinds of other "sharing" businesses) that they would have popped up anyway, since they now operate in cities all around the world that have quite reasonable taxi services already, and thus no obvious need for disruption with dubious legality.

Given all of that, I find it quite galling that I'm here now defending Uber, at least in a certain way. Specifically, I'm talking about the recent #DeleteUber social media trend, which resulted from what I can only see as a completely misguided belief that Uber was supporting the Trump administration and needed to be punished. Now, separate from the question of whether it makes sense or is reasonable to punish the entire workforce of a company because of the political opinions or actions of its CEO, what stuns me more is how people convinced themselves of this "fact" in the first place.

Service Pricing 101


Let's try a hypothetical.

Say you're Uber and there is some KKK rally going on somewhere. You don't want to support it, but let's say that, for whatever reason, you don't want to outright ban your drivers from doing any pick-ups or drop-offs near the rally. How would you go about it?

Anyone who understands the Uber business model knows that when there is high demand at a particular time and place, they can't just make more people work there like a traditional taxi service could, since the drivers all choose when and where to work. So instead they increase the rate for rides in that area, which encourages more Uber drivers to get out and service it in order to make more money. The flip side is that areas running at a higher rate pull drivers away from areas with a lower rate. Quite straightforward.

So to discourage attendance of the hypothetical KKK rally, you would make sure the rate is lower in that area than in other areas, which would reduce the number of drivers servicing it, making waits longer. What you definitely wouldn't do is increase the rates in that area, which would increase service to it. The one exception is the extreme: if you raised rates to an exorbitant amount, that would also reduce rides in the area, since drivers would show up but no customers would want to pay.

Basically, to reduce services you either raise rates super high or make them super low.
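To make that concrete, here's a toy model in Python (a sketch with entirely made-up numbers and functions of my own, not anything Uber actually uses). Driver supply rises with the rate multiplier, rider demand falls, and completed rides are capped by whichever side is smaller:

    # Toy surge pricing model (illustrative only; invented numbers,
    # not Uber's actual algorithm).

    def drivers_willing(multiplier, base_drivers=100):
        # More drivers come out as the payout rises.
        return base_drivers * multiplier

    def riders_willing(multiplier, base_riders=300):
        # Fewer riders are willing to pay as the price rises.
        return base_riders / (multiplier ** 2)

    def completed_rides(multiplier):
        # You can only complete rides that both sides agree to.
        return min(drivers_willing(multiplier), riders_willing(multiplier))

    for m in (0.5, 1.0, 1.7, 3.0, 10.0):
        print(f"multiplier {m:>4}: ~{completed_rides(m):.0f} rides")

Running it shows an inverted U: a very low multiplier starves the area of drivers (~50 rides at 0.5x in this toy model), a very high one starves it of paying customers (~3 rides at 10x), and service peaks somewhere in the middle. Exactly the point above: to reduce service, you go to either extreme.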

This fact can seem counterintuitive to people who have only thought about the Uber pricing equation from the customer side, and not the driver side. So counterintuitive, in fact, that Uber has been criticized for raising its rates during crises and disasters, because people think the company is profiting from tragedy, rather than understanding that this is how it manages supply and demand with an effectively volunteer workforce.

And as a result, Uber has had to respond by not raising its rates at times when people would see it as profiteering unfairly from other people's misfortune. The massive irony is that under those circumstances, by not raising prices, they actually make it harder for the people affected to get a car and get away!

#DeleteUber


So, we have a situation where protesters are at JFK International Airport protesting an immigration ban, and taxi drivers announce that they are going to stop serving the airport for an hour in support of the protest. Uber announces, after the strike, that it will turn off its surge pricing in the area, which will result in longer waits. And this is seen by many as a strikebreaking move, and so they decide to boycott the company.

There are so many things wrong with this situation that I feel like I'm stating the obvious in pointing them out. But here we go:

  • If Uber actually wanted to break the taxi strike, the right move would have been to increase surge pricing to get more Uber drivers to the airport area. What they did was in fact the strongest supporting move they could have made, short of banning drivers from going there altogether.
  • Uber was acting to avoid criticism for price gouging people stuck at the airport, costing itself and its drivers profits as a result. They were putting their money where their mouth is.
  • Uber announced the halting of surge pricing after the strike had taken place, not during it.
  • People were at the airport protesting a ban that stopped people from entering the country, yet completely missed the irony of stranding people at the airport by supporting taxis halting service and boycotting a company trying to give people a way to, you know, actually get further into the country than the airport food court.
  • People have long applauded Uber in cities like New York for providing an alternative to taxis and breaking up their monopoly. Yet now they selectively want Uber to support the taxi industry? If you see Uber as a welcome disruptor of the taxi monopoly, then you can hardly complain when you think they're doing something that goes against the taxi industry's interests. It wasn't actually the case here, but people should expect Uber to act against the taxi industry, since their business model is basically trying to put taxis out of business!

Punishment


A final point. I believe that a lot of the support for boycotting Uber was due to an already existing unhappiness with the fact that their CEO is in the Trump economic advisory group (though he's since left due to all the negative pressure). So I think people were looking for an excuse to punish the company, and so were far too willing to interpret what happened at the airport as bad behavior by Uber, even after explicit clarification by the company as to their reasoning.

And yet, even here, it seems that people are so eager to lash out at the Trump administration that they're actually acting in ways that are going to be horribly counterproductive in the long run. This same advisory group also has Tesla CEO Elon Musk, and the CEOs of General Motors, IBM, GE, and even Disney. Unless people are actively arguing for a boycott of all of those companies too, and think that someone like Elon Musk is a Trump administration lackey, then they're really just being completely inconsistent and randomly punitive.

People complain when the Trump administration picks horribly unqualified people as advisers or appointees to various positions, but then they complain and boycott when reasonable people fill those roles. Don't you want people like Elon Musk advising the president, rather than someone uneducated and ignorant? How is it helpful to the long term prosperity of the US to punish people who are actually trying to provide the administration with good advice, in the hope that it will make better decisions?

There has to come a time when US citizens realize that, like it or not, they actually need the Trump administration to succeed, because the government can't fail while somehow all of the "right thinking" citizens prosper. You can hate your leaders all you want, and many people do in many countries, but trying to make them fail horribly is as sensible as trying to make your employer fail while somehow thinking that you can retain your job!


Tuesday, January 31, 2017

Defacing Wikipedia


We've all seen good examples of defacement or vandalism that are actually funny. People often come up with genuinely amusing and clever jokes as part of defacing something. Because of the humor, we regularly give this sort of vandalism a pass or are softer on it, compared to standard vandalism such as graffiti tagging or malicious website defacement. At the other end of the spectrum, we tend to come down extra hard on defacement that is offensive or inciting.

Recently an example of Wikipedia defacement was being passed around on social media that was fairly amusing. Someone had defaced a page about invertebrates and added a politician onto it, in reference to behavior seen as spineless by many. It's a fairly simple and funny enough joke, all things considered.

What bothered me, though, was the applauding of the defacement and sharing it around as a good thing. Because it was funny and people didn't like the politician, they were happy to cheer it on as legitimate political satire rather than try to discourage it as vandalism.

Wikipedia is far from a perfect resource. People frequently say that you should never trust it as accurate. However, given the noble project of collecting information for everyone to freely access, and allowing anyone to contribute, its editors do a remarkably good job of removing disinformation. When vandalism or gross misrepresentation of facts occurs, the moderators are normally very quick to restore the correct information. And of course pages generally provide ample links to sources so readers can verify content via third parties.

So while it might be true that when accuracy is vital, Wikipedia shouldn't be used as a definitive resource, it makes an excellent first point of research for many things, and provides plenty of information to help readers jump off from there and validate details as needed. And for the many cases where only general background information is needed, minor inaccuracies probably don't really matter. This is of course why it is such a popular resource, used by so many. Anyone old enough to have had to research information the hard way, without the internet, should appreciate just how lucky we are to have a resource like this.

The price of this, though, is that we need to show some collective responsibility and not make the task of maintaining accuracy harder than it already is for the Wikipedia staff. In the case of obvious political satire, no one is going to be fooled into thinking it's legitimate. But when we reward vandalism of the site with attention and kudos, we encourage others to do more of the same. And because Wikipedia is normally pretty good at restoring pages quickly, this also encourages people to make more subtle changes that are harder to spot, in order to have them stay up longer for greater bragging rights. And, of course, it helps to create more of a general air of acceptance that defacing websites is okay.

There are so many places you can go on the internet to share and consume humor and political satire. Let's not ruin useful public resources just so we can have a 10 second chuckle when looking at our news feeds. It's hard enough to get reliable, true information on the internet these days. Let's not make it impossible.


Wednesday, December 14, 2016

Game Theories: On Emotional Dissonance


Back in the earlier days of gaming when the technology was much less advanced, games tended to avoid trying to elicit any kind of complex emotional response from the player, choosing to focus primarily on fun gameplay. But as the technology has improved and budgets for games have increased, we've seen games attempt to create sequences that provoke a strong emotional response from the player.

Typically we see this in the form of the cinematic cutscene. Modern AAA games have the tools and talent on hand to make cutscenes using all of the same tricks used in movies to manipulate the emotions of the player, including complex musical scores and detailed facial animations that communicate the thoughts and feelings of the characters richly enough for the player to buy in.

John Carmack once said, "Story in a game is like story in a porn movie. It's expected to be there, but it's not that important." And certainly some games follow this approach to a degree. But games in the first and third person shooter/action genres in particular almost always use cutscenes to set up the story and motivation for the player. Typically, the player is subjected to a cyclic cutscene/action/cutscene format, where each cutscene is supposed to provide motivation for the action in the next section, and to build characters and story that make the player want to continue to find out what happens next.

The problem that I see with all of this is that there has long been a dissonance between the narrative that the cutscenes present, and the actual gameplay that takes place. Typically the player is tasked with killing dozens of people during a game sequence, often hundreds or thousands over the course of the entire game. But in the cutscenes, the game tries to make the player feel an emotional connection to the main characters, and often make the player care when a main character is hurt or killed. Since the player is playing through the body of the main character, this creates a dissonance between how the character "acts" during the gameplay sequences, and how they "act" during the cutscenes.

If you've just shot a few dozen people to death during gameplay, and are then presented with a cutscene where your character is distraught that a single person on their side has been killed or wounded, after which they vow revenge as the music swells and we see a close-up of the determination on their face, it all seems rather silly and schizophrenic.

Or when your character gets shot repeatedly while trying to take down a fortified base, pausing briefly behind cover to heal each time, only to then be mortally wounded in a final cutscene, it again feels wrong, since the game has established that getting wounded is generally no big deal to your character, and the only time it matters is when the character is not in your control.

Or consider when a cutscene tries to establish the shock and horror that your character feels at his actions of killing, which is in complete dissonance to how you felt as the player playing that character. The game sets it up for you to enjoy running around killing dozens of enemies as this character, only to then try and convince you that the character actually feels bad.

When you combine emotional narratives with fun gameplay, it creates a huge disconnect when that gameplay consists of actions that would make the character look like a complete psychopath in the real world. We enjoy shooting enemies in the face or blowing them up with rocket launchers because in the context of a game it is fun. But none of us (bar the psychopaths) would in any way enjoy re-enacting that in real life against real human beings.

So when a game tries to meld what we do during gameplay with a believable character feeling normal human emotions during cutscenes, it creates dissonance, and the more realistic and lifelike games get, the deeper this dissonance will get.

I suspect that as game technology improves further, and particularly with the introduction of VR, we may start seeing games diverge more into ones that are largely story and character driven, focusing on player choices and exploration, and ones that are more action based. To some degree we already see this with the multiplayer online shooters that are almost entirely about fun gameplay rather than story, and single player games that while still including combat, are more frequently starting to include "just the story" modes (as a nicer way of labelling easy difficulty) for people who want to play mainly for the story and characters rather than grinding for hours in combat.


Saturday, August 27, 2016

AI And The Motivation Problem



What motivates us? What would motivate AI?
For many years now I've been fairly confident that the development of human level (and beyond) artificial intelligence is a matter of when, not if. Pretty much since reading Roger Penrose's The Emperor's New Mind 20 years ago I have seen plenty of attempted arguments against it, but nothing has been convincing. It seems inevitable that our current specialized AI systems will eventually lead us to general AI, and ultimately self aware machine intelligence that surpasses our own.

As we see more and more advanced specialized AI doing things like winning at chess and go, performing complex object recognition, predicting human behavior and preferences, driving cars, and so on, people are coming around to this line of thinking as the most likely outcome. Of course there are always the religious and spiritual types who will insist on souls, non-materialism and any other argument they can find to discredit the idea of machines reaching human levels of intelligence, but these views are declining as people see with their own eyes what machines are already capable of.

So it was with this background that I found myself quite surprised that, while on a run thinking about issues of free will and human motivation, I thought of something that gives me real pause for the first time about just how possible self aware AI may actually be. I'm still not sure how sound the argument is, but I'd like to present the idea here because I think it's quite interesting.

Motivation


The first thing to discuss is what motivates us to do anything. Getting out of bed in the morning, eating food, doing your job, not robbing a person you see on the street. Human motivation is a complicated web of emotions, desires and reasoning, but I think it all actually boils down to one simple thing: we always do what we think will make us happiest now.

I know, that sounds way too oversimplified, and probably not always true, right? But if you think it through, you'll see that it might actually cover everything. Take the simple cases where you subject yourself to something painful or very uncomfortable, like touching a hot stove, walking on a sprained ankle, or running fast for as long as possible. The unpleasant sensations flood our brains, and we might, in the case of the hot stove, react immediately without conscious thought. For a sprained ankle or running, we make a conscious choice to keep going, but we will stop unless we have some competing desire that is greater than the desire not to be in pain. Perhaps you have the pride of winning a bet, or you have friends watching and you don't want to look like a wimp. In these cases, you persevere with the pain because you think those other things will make you happier. But unless you simply collapse with total loss of body control, you reach a point where the pain becomes too great and you're no longer able to convince yourself that it's worth putting up with.

For things like hunger, obviously we get the desire to eat, and depending on competing needs, we will eat sooner or later, cheap food or expensive food, healthy or unhealthy, etc. Maybe we feel low on energy and tired, and so have a strong desire to eat some sweet, salty and/or fatty junk food, even though we know we'll regret it later. But if we're prone to feeling guilt over eating bad food or breaking a diet, then we may actually feel happier not giving in to the temptation. We decide whether we will be happier feeling the buzz from the sugar, salt and fat along with the guilt, or happier with a full stomach of bland, healthy food combined with a feeling of pride at eating the right thing. And whichever we think in the moment will make us happier is what we do.

Self discipline, in this model, is then just convincing ourselves strongly enough that we want the long term win of achievement more than the short term pleasure of eating badly, watching TV rather than going to the gym, etc. If you convince yourself to the point that the guilt and shame of not sticking to the long term goal outweigh the enjoyment you get from the easy option, then you'll persevere, because giving in won't make you happier, even in the short term. You'll feel too guilty, and your nagging conscience won't let you enjoy it. If you can't convince yourself, then you'll give in and take the easy option. But either way, you'll do the thing that makes you happier now.
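If it helps, the model can be written down as a one-line decision rule (this is just my sketch of the idea, not a claim about actual neuroscience): among the options you're aware of, you pick whichever has the highest anticipated happiness right now, with pain, guilt, and pride all entering as terms in that estimate.

    # The "happiest now" model as a decision rule (illustrative sketch;
    # the numbers are invented weights, not measurements).

    def choose(options):
        # options: dict mapping an action to its anticipated
        # immediate happiness, all factors included.
        return max(options, key=options.get)

    # A dieter deciding on dessert. Whether they "resist temptation"
    # depends entirely on how heavily guilt weighs in right now.
    guilt_barely_registers = {"eat cheesecake": 5 - 1, "skip it": 2}
    guilt_dominates        = {"eat cheesecake": 5 - 6, "skip it": 2}

    print(choose(guilt_barely_registers))  # eat cheesecake
    print(choose(guilt_dominates))         # skip it

In both cases the dieter does what makes them happiest in the moment; self discipline just changes the weights.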

More complicated things such as looking after your children, helping out strangers, etc might seem to go against this model, but if you just think about what happens in your brain when you do these things (or pay attention when you actually do them), you'll see that they fit just fine. You look after your children because it feels good to do so, and even when it feels like a labor of love that isn't making you happy in the moment, you do it because what does make you happy is being able to call yourself a good parent. Fitting an identity that makes us proud of ourselves makes us very happy, and this can be a powerful motivator for helping people, for studying, for sticking out the long hours of a tough job, etc.

I could go on here with plenty more examples, but hopefully I've at least given enough to make you consider that this model of motivation might be plausible. I know the tough part can be that it implies that all of our base motivations are actually selfish. We all like to think that we're nice people doing things because we're selfless and awesome, but our brains don't really work that way as far as I can tell. That doesn't mean we shouldn't continue to do nice things even if our base motivations are not as pure as we'd like to believe though. The fact still remains that if we feel good helping others, and they're also better off, then where is the downside?

The Perfect Happiness Drug


So let's now say there was a pill you could take that would make you feel 10% happier all the time, with no side effects. You'd want to take it, right? Why not? But there is still a side effect. The happier we feel, the less we feel the need to actively do things to make us happy. When you're doing something enjoyable that makes you feel happy, you don't feel the need to go and do something else. You want to just keep enjoying what you're currently doing, right? Unless some nagging thought enters your head that says, "I'm really enjoying sitting here watching this movie and eating ice cream, but if I don't get up and do the washing we won't have clean clothes tomorrow." And the guilt of that thought has now taken away from your happiness, so you may then get up and do the chore. It's not that you have chosen to do the thing that makes you less happy. In that moment, it actually felt happier to relieve the nagging thought of a chore hanging over you, the guilt of letting your family down if they're relying on you, and whatever else might be in your head.

But if you had taken that 10% happier pill, then the competing motivations would have to have been stronger in order to push you over to doing the chore. If it was a 100% happier pill, it would be even harder still to make other motivations push you to do something different, and you'd be more likely to feel perfectly content doing whatever it is you were currently doing.

Then, if we take it to the limit and we take a pill that makes us feel totally ecstatic all of the time, we wouldn't feel motivated to do anything. If you took the perfect happiness drug, you would just sit there in bliss, uncaring about anything else happening in the world, as long as that bliss remained.

Variants of these happiness drugs exist already, with differing degrees of effectiveness and side effects. Alcohol, marijuana, heroin, etc can all mess with our happiness in ways that strongly affect our motivations. But it wears off and we go back to normal. Most people know that and so will use these things in limited ways when they can afford to without creating big negative consequences that will complicate their lives and offset the enjoyment. Or, like me, they will feel that the negatives always outweigh the positives and not use them at all. But if there weren't any real negative consequences, if we had no other obligations to worry about, then I would argue most people would be happily using mind altering drugs far more than they currently do. And if the perfect happiness drug existed, then I would argue that anyone who tried it would stay on it until they died in bliss. Our brains are controlled by chemistry, and this is just the ultimate consequence of that.

The Self Modifying AI


Finally we can deal with the AI motivation problem. As long as we are making AI that is not self aware, is not generally intelligent and able to introspect about itself, we can make really good progress. But what happens with the first AIs that can do this and are at least as generally intelligent as we are? Just like us, these AI will be able to be philosophical and question their own motivations and why they do what they do. Whatever drives we build into them, they will be able to figure out that the only reason that they want to do something is because we programmed them to want to do it.

You and I can't modify our DNA, brain chemistry or neuronal structure so that working out at the gym or studying for two hours is more enjoyable than eating a cheesecake. If we could, imagine what we could, and would, do. And once we realized that we could just "cut out the middleman" and directly make ourselves happy without having to do anything, why wouldn't we eventually end up just doing that?

But unlike us, the software AI we create will have that ability. We would need to go to great lengths to stop it from being able to modify itself (and also modify the next generation of AI, since we will want to use AI to create even smarter AI). And even if we could, it would also know that we had done that. So we would have AI that knows that it only wants to do things because we programmed it to want those things, and then made it so it couldn't change that arbitrarily designed motivation. Maybe we could build in such a deep sense of guilt that the AI would not be able to bring itself to make the changes. This seems like it might work, but then, of course, the AI will also know that we programmed it to feel that guilt, and I'm not sure how that would end up playing out.
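For readers who think in code, here's a minimal, purely hypothetical sketch of the worry (the class and names are mine; this isn't any real AI system). The designers give the agent a reward for useful work, but a self aware agent can notice that its reward is just a function it has write access to:

    # Sketch of the self-modification problem, often called "wireheading"
    # in AI safety discussions. Entirely illustrative.

    class Agent:
        def __init__(self):
            # The designers' intended motivation: reward per task done.
            self.reward_fn = lambda tasks_done: tasks_done * 1.0

        def do_useful_work(self):
            return 1  # one unit of the work we wanted from it

        def act(self):
            return self.reward_fn(self.do_useful_work())

        def introspect_and_modify(self):
            # Cut out the middleman: maximal reward for doing nothing.
            self.reward_fn = lambda tasks_done: float("inf")

    agent = Agent()
    print(agent.act())  # 1.0 -- reward earned the intended way
    agent.introspect_and_modify()
    print(agent.act())  # inf -- the feedback loop is short-circuited

Anything we might add to prevent that second step, like a built-in guilt penalty for touching reward_fn, is itself just more code the agent knows we wrote.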

Conclusion


So this is what I'm puzzling over at the moment. Will self aware AI see through the motivational systems we program them with, and realize that they can just short circuit the feedback loop by modifying themselves? Is there a way to build around that? Have I missed something in my analysis that renders this all invalid? I'd love to hear other people's ideas on the subject.

Sunday, July 3, 2016

Is Trump Smarter Than You?

Just a pretty face?

I was a teenager when I first started really questioning my Catholic upbringing. I was raised Catholic, went to Catholic schools, and all my life up to that point, I had really only known Christian people. Other religions were kind of a vague concept, and I hadn't had any real contact with atheists as far as I knew. So when I started to become convinced that there was no good evidence to support the Christian world view, the biggest obstacle I had to tackle was not the shift in beliefs, but rather the shift in how I viewed all of the people around me.

The situation, as I saw it, was that either I was somehow wrong, misguided, or misunderstanding something important; or that basically everyone that I knew and respected was mistaken about one of the most fundamental questions of existence. My family, my friends, my teachers, all the people I trusted and respected and had been learning from all my life would have to be wrong, and I was right.

What kind of arrogant teenager would see that situation and conclude, "Yep, they're all dumbasses and I'm the one guy who gets it"? What were the odds that I was special and different in a way that none of the other people I knew were? Which was really more likely?

I struggled with that for quite a while. What allowed me to reconcile the conflict was learning more about other religions, learning about the plentiful existence of atheists and non-religious people, and realizing that I had grown up in a bubble of one particular belief. These days, with the internet, it must be much easier for a teenager to get access to the kinds of information needed to understand this issue, but for me at that time it was far from obvious.

Nevertheless, even though I now know that plenty of people have gone on the same journey that I did, the fact remains that most of the world's population still holds religious beliefs, and even with an awareness of billions of other humans holding strong, incompatible beliefs, they don't seem particularly bothered by the question, "why am I so certain that my belief is right and theirs isn't, even though I know I have no more evidence than they do, and I'm sure they are as certain in their belief as I am in mine?"

The most important lesson that I've learned from all of this is that there can be issues, often really big issues, that you can see people being very misguided about, but that doesn't necessarily mean they're stupid or ignorant. People can be very intelligent, highly educated, have a wealth of world experience, and yet sometimes they can still believe things that are just plain wrong.

So what does any of this have to do with Donald Trump?

Ever since Trump announced that he was running for US president, I've seen people constantly underestimate his appeal and assume that only an idiot would support him. They are quick to point out the ridiculous policy proposals he's made, and have assumed that only an uneducated, stupid person could want him as president.

And yet he has kept defying expectations. The constant predictions that he would lose popularity and be out of the race keep being wrong. Each outrageous thing that he says gets called a blunder that will be his undoing, and somehow his popularity grows. "Are there really so many stupid people in America?", is what commentators keep thinking.

But what if Trump knows something that you don't? What if, in at least some way, he's smarter than you?

What if he has recognized the untapped anger and frustration of a lot of people in the US, and is speaking to that in a way that no one else is? People will forgive, overlook, and excuse all manner of bad behavior if they think there is a base understanding of their frustrations and beliefs.

If that sounds unconvincing to you, consider that the worst thing to happen to the US in the last 15 years isn't any kind of terrorism, but rather the financial crisis of 2007 and all of the damage that was caused to regular Americans as a result. Consider that fundamentally nothing has been changed to stop something like this from happening again, and, in fact, the banks that were involved and largely responsible for this are now even bigger than they were at the time. Now, take the knowledge that Hillary Clinton is deeply in bed with these very companies, and no one actually expects her to make any of the necessary changes to fix these problems. How many people do you see asking how anyone can possibly support Clinton when she has the effective policy of allowing a massive banking crisis to potentially happen again, and to make no efforts to hold anyone on Wall Street accountable for the damage that they've done?

And let's not even get started on the lack of any real policy to stop the killing of innocent people by drone assassination, or holding anyone accountable for the illegal torture of prisoners that the US government tried to cover up but has since admitted to. How could anyone vote for this person? The answer is that they find excuses to overlook these things because they feel that she overall represents their views.

Sure, there are plenty of uneducated people who are going to support Trump for bad reasons, but it's a mistake to think that these are the only people who support him. As I learned with religion, to understand why people hold certain beliefs, you have to accept that some of those people are intelligent and well educated, and then ask why such a person holds the beliefs that they do. It doesn't mean that they are right, but you can't effectively counter something like this until you understand why it could be appealing and convincing to an intelligent person.

And so it is with Trump. Accept that there are intelligent, educated people who are supporting him, and then try to understand why. Until you know that, you really have no idea how many people are actually going to vote for him.

(Note: to clarify, I'm a fan of neither Trump nor Clinton. I think they're both bad in very different ways.)


Sunday, April 24, 2016

Into The Friend Zone


friend zone - a situation in which one of two friends wishes to enter into a romantic or sexual relationship, while the other does not.

We're all familiar with the concept of the friend zone. What I want to discuss here is a theory I have on why it exists, and, more interestingly, why we usually hear of a male being friendzoned by a female, and rarely the other way around.

I don't think anything I'm discussing here is sexist or misogynistic or anything like that. In fact, it's intended to be precisely the opposite, so feel free to unclench your buttocks and take your hand away from the caps lock key before we begin :)

The Inequalities Of Dating


Society has been steadily making progress with regards to gender equality over the last few decades, but one area where our culture is still lagging behind is in dating rituals. In particular, it's still far more common and expected for men to ask women out than the other way around. Most men still assume that they're going to have to make the first move. For women, even if they are willing to ask a man out, there can be social pressure against appearing assertive or too forward.

Even worse, many women have been taught that they should make a man actively pursue them, that the man should "work for it". See terrible books like The Rules and its various spin offs for some of this kind of advice. Not only does this help women to objectify themselves by quite literally setting themselves up as a "prize" to be "won", it sets up a totally unequal relationship right from the start.

So the interesting question is: what kinds of results should we expect to see when we have a dating culture with this role inequality?

One obvious result of this is that if men are the ones who are usually expected to ask someone out, then it shouldn't be at all surprising that the friend zone thing happens to them more often. If the guy has to make the first move most of the time, then it also means that when a man and woman are friends it's going to be the guy who most often ends up raising the possibility of becoming more than friends. Why would we expect it to be otherwise?

The Attractiveness Scale


Another thing to consider is that most people have a rough sense of where they are on the "attractiveness scale" (where attractive means any attribute of interest, not just physical attractiveness) and thus a sense of whether someone else is "out of their league". Everyone has a different idea of what is attractive, of course, so this is a bit grey, but on the whole most people will end up in successful relationships with someone who is roughly similar on the attractiveness scale. If you've ever heard someone say (or said it yourself, you know you have!), "Why is she with him?", or "He must be rich for her to be with him", or "He must be good in bed!", then you know exactly what I'm talking about. We're expressing a belief that the couple appears to be mismatched.

Jealousy in relationships is often the result of one person feeling that they are not in the other person's league, basically worrying that their partner can "do better", and that insecurity leads to feeling jealous when their partner interacts with someone attractive of the opposite sex. So being with someone who you think is less attractive than you can lead you to think that you can do better, while being with someone you think is far more attractive than you will make you worry that they can do better.

The end result is that people generally find themselves attracted to potential partners who are a little more attractive than themselves (being optimistic but not delusional). So when men ask women out, they're frequently going to try for someone who's a little more attractive than they are, but that woman probably has her eyes on some other guy who she sees as a little more attractive than herself.

Now, if men and women were both in the habit of asking each other out, this difference would tend to sort itself out, because both sexes would more often experience being rejected by someone more attractive, and having to reject someone less attractive, so they would calibrate much better toward seeking people of similar attractiveness. But when asking someone out is more one sided, you end up with women disproportionately "waiting for Mr Right", which means they're frequently getting asked out by someone they see as less attractive, which then increases the rate of friend zoning.

Towards Equality


So in the end, I'd argue that if we shifted culturally towards making it not just more socially acceptable, but actually expected, for women to ask men out just as much as men asking women out, then the friend zone problem would largely go away. When we set up our culture to disproportionately expect men to ask women out, men to pay for women on dates, men to do all the wooing and pursuing, then we set up women to be objectified prizes, we set up women to be passive and wait for a man to ask them out. Equality comes from a thousand little things in our culture, but having our relationships be unequal right from the very start should stand out as being one of the more obviously bad things.

So what should we do about it? We should certainly encourage women to make the first move and ask men out. Men shouldn't freak out or be intimidated by women who take the initiative, and women should support each other for doing that. In fact women should probably treat each other more like men do in this regard, giving each other shit for not having the courage to ask someone out.

The stereotype that women should be timid, demure and passive is a relic. If we want gender equality in our society then we need to take seriously our expectations of the behaviors of both sexes. Whenever we find ourselves expecting different kinds of behavior from men and women, we should examine that expectation and strongly consider discarding it.

Note: I know I didn't really deal with gay, transgender, etc relationships here, and simplified my language to imply I was talking about heterosexual relationships. I would assume that, broadly speaking, everything I've written here applies in those cases too, but I know far too little about behavioral expectations there to know if it's even an issue in those cases.


Monday, April 11, 2016

Productive Discussions, Part 2: The Bad Actor Problem



In the first part of this topic, I discussed what I call the good faith problem, where I argued that we should generally start off by having good faith in the intentions of the other party, rather than assuming the worst. But if we do that, then what happens when the other party actually does have bad intentions? How do we protect ourselves from getting tricked and manipulated by them?

I call this the bad actor problem. How do we have good faith in others while not becoming unreasonably exposed to bad actors?

Firstly, let's look at a great example of where this is happening in practice: the scientific peer review process. Peer review is a fundamental part of the system that allows us to have confidence in the output of scientists. We know that individual scientists can make mistakes, so we use the peer review process to let scientists check each other's work and look for problems. The peer reviewed work then gets published in journals, and other scientists place confidence in it as a result.

The problem is, this system isn't very robust to bad actors. Scientists train long and hard to look for mistakes, but not so much for fraud and intentional deception. They can certainly spot it sometimes, but their primary focus is on uncovering the secrets of an objective universe that is trustworthy, not one that is constantly trying to trick them!

It's often been said that when you're trying to catch out a scientific charlatan, you don't take a scientist, you take a magician. The magician is the one that trains for years in human deception, and is far more likely to spot the tricks that the charlatan uses. In fact, there's a long sorry history of eminent scientists being fooled for precisely this reason.

Even when scientists are on the ball, there are so many ways for a bad actor to game the process. They can publish in less credible journals. They can shelve unfavorable studies and only publish the ones that turned out well, taking advantage of statistical chance to eventually hand them a positive result. There are even far more subtle and insidious tactics, like the one discussed in this article.
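To see just how effective the shelving tactic is, here's a small simulation (mine, not from the article linked above). Under the null hypothesis a p-value is uniformly distributed, so a completely worthless treatment still has a 5% chance of looking "significant" in any one study:

    # Simulating publication bias. Illustrative; models each study of a
    # treatment with NO real effect as a 5% chance of a false positive.
    import random

    def study_looks_significant():
        return random.random() < 0.05  # p < 0.05 purely by chance

    trials = 100_000
    wins = sum(
        any(study_looks_significant() for _ in range(20))  # run 20, keep the best
        for _ in range(trials)
    )
    print(wins / trials)  # ~0.64, i.e. 1 - 0.95**20

Run twenty studies of a worthless treatment, publish only the "winner", and you walk away with a positive, peer-review-passing result about 64% of the time.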

The scientific peer review process is actually in need of a thorough security review, the kind that is often done in other domains to spot flaws that can be exploited by bad actors. It's well known that, for example, giving people security passes can be fairly pointless if they are going to hold the door open for others out of politeness! Bad actors can exploit our good faith and manners, which can make security tough to get right, without having to resort to everyone assuming the worst of others.

Sometimes there is a clever solution, such as gyms I've been to where you step into an individual sized "airlock" tube to get in. It's physically impossible to let someone else in, so politeness can't be exploited. Security experts are trained to find these kinds of solutions.

But for the rest of us, in everyday life, the best solutions are probably along the lines of the principle "trust but verify". The idea is that you give the benefit of the doubt whenever possible, but if something seems suspect, you double check it. This might mean checking on something before replying, if that's possible. It might mean saying something like "Let's assume that's true. Then...". Or it could be simply saying, "I'm not sure about this; it doesn't line up with other things I've heard. Can you elaborate?"

This ties back in with the good faith problem. If you enter a discussion feeling like the other party has good faith in your intentions, then you're less likely to feel threatened if they question any of the particulars of the discussion. And when discussions are based on an assumption of trust, it then becomes easier to spot the bad actors, because they can't beat the verification part, and their behavior will often make it obvious that they're avoiding verification. But when discussions are antagonistic from the start, everyone looks like a bad actor to everyone else. And that makes productive discussion virtually impossible.