Sunday, February 10, 2013

Polishing Software and the Death of a Thousand Cuts

Any large software project will typically use some form of bug/issue tracking software to keep tabs on all of the bugs and opportunities for improvement that get identified during the course of the project. This database may contain thousands of issues, prioritized by importance. Inevitably, there will never be enough time to address all of them, so as release time approaches, the triaging process typically gets stricter, separating issues into those there is still time to fix and those that won't be fixed in the release.

The question that this blog post will try to address is: what should we do with all of those issues?

Issue Triaging


There are two main types of software releases: products that are released once, and products that are maintained with subsequent versions. This distinction is very important when it comes to bug tracking, because issues found in a single-release project can be treated very differently from those found in a product with multiple versions.

With a single release, such as a typical game, an issue that doesn't get dealt with for the release will possibly get handled in a patch, but otherwise will never be resolved and can basically be forgotten about. Since patches tend to focus on serious issues or issues that are found after release, any issue found before release but deemed too low priority to resolve is probably never going to get fixed.

With products that have multiple versions, an issue can't be forgotten so easily. If you defer an issue from the current release, it will still be open for fixing in the next one. Just because something is low priority now (compared to other issues) does not mean it will still be low priority when the next software version rolls around. Particularly for software that needs to remain backwards compatible with previous versions, issues never truly go away unless they are fixed.

So, this all sounds fairly obvious and straightforward. The problem arises with a pattern that tends to emerge with a lot of low priority issues, which is that they stay low priority, and get shifted from release to release without ever getting fixed. Your issue database gradually fills up with hundreds or thousands of these issues that are never important enough to spend time fixing, but still exist in your software. What can you do about them?

  • Keep them in the system - a bug is a bug is a bug.
  • Remove after some number of releases without being fixed. Kind of like a 'three strikes' system or similar, where you say that if it didn't become a high enough priority after two, three, whatever releases, it never will, so close it as a "won't fix".
  • Raise the priority of the issue after each release. This would mean that it eventually becomes important enough to fix, but in practice this artificial gaming of the triage system doesn't really work, since people will recognize that they're missing out on fixing more important issues in favour of ones that are marked as high priority, but really aren't.
  • Mark as "won't fix" immediately and forget about it unless it gets raised independently again.

It's this final option that bothers me. The idea is that if an issue is not worth fixing now, it's not worth fixing at all, so just mark it as "won't fix" if it doesn't make the cut in your triaging. This makes for a much cleaner issue database, but is it actually a good idea?
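As an aside, the 'three strikes' option above is easy enough to automate. Here's a minimal sketch of what that rule might look like; the issue fields (`status`, `deferrals`) and the threshold are illustrative assumptions, not taken from any real tracker:

```python
# Hypothetical "three strikes" triage rule: any issue still open after
# being deferred across MAX_DEFERRALS releases is closed as "won't fix".
MAX_DEFERRALS = 3

def end_of_release_triage(issues):
    """issues: list of dicts with 'status' and 'deferrals' keys."""
    for issue in issues:
        if issue["status"] != "open":
            continue  # fixed or already-closed issues are left alone
        issue["deferrals"] += 1
        if issue["deferrals"] >= MAX_DEFERRALS:
            issue["status"] = "wont_fix"  # strike three: close it out
    return issues
```

You would run something like this once at the end of each release cycle. Whether closing those issues is actually wise is, of course, exactly what's in question here.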

Software Polish


Many pieces of software do what they're designed to do, but may have clunky interfaces, various minor behavioural quirks, and so on. We would tend to think of these as good software, but not great software (bad software is a program that can't even perform the job it was designed for!). The difference between good and great software is typically what we think of as polish. Getting rid of all of those little annoyances, cleaning up the UI, streamlining the user experience, these are things that rarely involve big issues, but are rather a collection of lots of tiny issues. Typically, none of those issues will be a big deal on their own, but when you accumulate a lot of them, you end up with software that feels unpolished. With games, you'll say that they needed another 3 or 6 months to finish it. With versioned software, you call it Vista (snap!).

So software being unpolished can be thought of as being like a death of a thousand cuts. None of those cuts is a big problem on its own, but they add up, and you eventually reach a point where you realize that you're in trouble. How can you avoid this downward slide into unpolished software, or probably more realistically, how can you make your project take the uphill march that ends in a polished product?

Issue Tracking is Polish Tracking


This is where your issue tracking is your friend. If you're accumulating lots of low priority issues, that is a warning sign that your product is unpolished. By just marking them all as "won't fix", you lose this important information and get a false sense of the quality of the software. Not only does it appear that you have fewer open issues to resolve than you actually do, but you also add confusion about which issues you chose not to fix because they weren't genuine bugs, and which went unfixed purely due to triaging.

Perhaps having all of these issues in the database is telling you that you're not spending enough time fixing issues, and that's why they're accumulating.
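You could even watch for this trend in the data. As a rough illustration (the counts and the check itself are made up for the example), flag trouble when the backlog of open low priority issues grows release after release:

```python
def polish_debt_growing(low_priority_open_counts):
    """Return True if open low-priority issues rose at every release.

    low_priority_open_counts: counts at each release, oldest first.
    """
    pairs = zip(low_priority_open_counts, low_priority_open_counts[1:])
    # strictly increasing backlog means polish is steadily slipping
    return all(later > earlier for earlier, later in pairs)
```

A real tracker would let you chart this over time, but even a crude check like this makes the trend a visible fact rather than a vague feeling.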

And this brings us to one more option for dealing with a large number of low priority issues, one that I left off the list above: spend more time polishing, and less time adding features on your next release!

Every developer prefers adding awesome new features over fixing bugs, and customers certainly like new toys, but it's also true that developers like to feel pride in their work, and customers like to use software that doesn't annoy them. It can be a hard balance to meet, but when your low priority issues start to accumulate, rather than ignoring them, it might be time to recognize that your software is becoming less polished, and you need to put more effort into fixing those issues.

Of course, there are always trade-offs based on the number of developers you have, the time until the next release, and how keen your customers are to get new feature X. Maybe you need to shift your priorities, maybe you need to hire more developers, maybe you need to extend your release times, or maybe you need to accept that your software will become less polished. The key point is that this should be a conscious decision, well thought out and based on the available evidence. Marking issues as "won't fix" when they aren't actually fixed distorts your data and makes it harder to have a true picture of your current situation. And if you don't properly understand your current situation, you're less likely to make a good decision for the future.


Friday, February 8, 2013

The Problems of Religious Morality

One of the major objections religious people have towards atheism is the belief that morality requires a higher power. That is, without a god to declare what is right and wrong, there could be no basis for morality. A practical refutation of this would be the millions of atheists who aren't going around murdering people every day, but sticking with the religious basis of morality, there are some serious issues with the idea of morality coming from a higher power that I think are very interesting.

Now, let me be clear up front that I'm not attempting any kind of argument of the form "their ideas have more problems than my ideas, therefore I'm right and they're wrong". Understanding reality is not a popularity contest. Religious people seem to think that belief in morals from a higher power is a solid and robust idea, and that once you make that leap of belief your moral philosophy is on stable ground. This claim is itself often used as an argument for why the leap should be made; i.e. that morality without a higher power is baseless and inconsistent, but once you inject a higher power into the mix you solve all of those problems. What I hope to demonstrate is that making the leap of faith doesn't afford moral theory the desired robustness, and so it can't be used as an argument in its favour. That pretty much leaves it with just the "because I want it to be true" argument, which is how I think it should be.

All cultures have moral systems

The key fact to recognize is that all known human cultures have moral systems of some sort. They all have differences, and an act can be considered moral in one culture and highly immoral in another, but no human culture is amoral. Now, it's worth pointing out that these differences are not totally relative and arbitrary, such that any act will be moral in some society. There is generally a system that is relatively consistent and makes sense when the particulars of the culture are understood. For example, infanticide is considered moral in some traditional cultures (though it is much less common today as these cultures have more contact and interaction with outside cultures). However, you can't just go around killing any child in these cultures. It applies in a very specific case, when a mother gives birth to a child while the previous one is still too young, or when she gives birth to twins. In these cases, it is done as a practical matter because the mother will not be able to support both (see The World Until Yesterday by Jared Diamond).

We have to ask the question of how all of these cultures got their moral systems, if morality comes from a higher power. Here are the options I can see:

God spoke to all of them


If we require a higher power to know what is right and wrong, then it must be the case that all human cultures have been spoken to by a higher power in order to know this. This would require that god chose to appear in a different form to every culture, and to give each of them a different moral system. If this were true, then there is no such thing as a single objective morality, unless you claim that he told the truth once and lied every other time.

Or you could possibly argue that he gave all cultures the same moral system, but those systems became corrupted over time. If this were the case, then how would you tell what the correct, original system was?

God spoke to one/some of them


If god only spoke to one group (or possibly a small number of groups), you may be able to get around the problem of god giving different moral systems to different groups of people. But this creates a much bigger problem, which is that all of those other groups must have developed their moral systems without a higher power. And this, of course, is the very thing that religious people are saying cannot be done. Unless they want to argue that everyone else is just fooling themselves and has a baseless moral system, this would mean that all other religions with their own moral systems are a massive lie, with only one group having moral truth based on an actual higher power. Many religious people seem to believe precisely this (though they are reluctant to state it explicitly, given the massive hubris of such a belief). That creates the problem of people from different religions all thinking that they are right and the others are wrong, without any good reason why that should be so, leaving the much more likely theory that they are all wrong.

God speaks to everyone in some ill-defined way


Another option would be to say that god hasn't spoken to everyone in a direct "Moses on the mountain" kind of way, but rather in some more subtle way, such as somehow encoding morality into our souls, or something along those lines. I'm not sure if any religious people actually try to argue such a thing, but it seems the most obvious alternative for avoiding the problems of the previous two options.

If such a thing were the case, then it would open the question of what need there is for religion to explain morals. If they are already part of us in some way, we don't need to be taught or told them by an external source. You wouldn't need to practice or believe in any particular religion, since you already 'know' the important parts. It would also raise the question of how you could prove such a thing. There would be no practical difference between morality being innate for evolutionary reasons and being innate in a soul, since souls are supernatural concepts not detectable by any scientific method.

Conclusion


So, let me reiterate that if morals come from a higher power, given that different cultures have different moral systems, it must be the case that either zero or one of those cultures actually practices a moral system from a higher power, or that god intentionally gives different moral systems to different cultures. In any of these cases, it is unclear how you can determine which is the 'true' moral system, making the 'higher power' explanation have little practical value.

Deriving new morals

If it is necessary to have a higher power to give us rules for right and wrong, then this implies that our moral system is to some degree arbitrary. That is, god could just as easily have chosen to make any rule different. If this were not true, e.g. if god could not have chosen to make stealing or murder moral, then there is something outside of god that defines morality, which would mean that a higher power is not necessary.

At a bare minimum, a moral system would need to have a basic set of axioms, all arbitrarily chosen by god, from which all other morals could be deduced. Do any religious moral systems actually have such a thing? I would bet that some may claim to have it, but I've never seen such a thing. There always seem to be moral questions that require some degree of judgement, usually provided by the wise elders of the given tradition. But, just like a scientific theory, unless they can show their working, the clear set of indisputable steps that led them to their moral conclusion, they are not working with a consistent system that is reducible to arbitrary, god-chosen axioms.

So, the question is, how does a god-based moral system deduce new morals? How do you determine the morality of a choice that was never covered explicitly by the moral code that the higher power gave? If god could choose any arbitrary answer for any moral question, then you can't know what he would have chosen in this new situation. And if a moral system is consistent and axiomatic, then how much choice did god actually have in creating it, and is he then actually necessary to explain it? Or, if a moral system is not consistent, then how can you justify making new moral deductions?

Final thoughts

I hope that this post has given you some interesting food for thought, as these questions certainly have for me. Of course, I freely admit that I don't think a higher power is necessary to explain human morals, but by working through the implications of such a belief, it is possible to see that it is not sufficient to solve the problem either, which is an important warning flag not to be dismissed lightly.

I look forward to feedback from others on this topic, since I know it's quite probable that I've made mistakes in my reasoning here, and may have overlooked other options.

Monday, January 14, 2013

Compulsory Voting is Less Democratic

In recent US election cycles, when the low voter turnout is noted, comparisons are inevitably made to countries like Australia, and the suggestion is made that it would be better to make voting compulsory like it is here. I will argue that compulsory voting is bad and ends up producing a poorer representation of whom voters actually want.

Get Involved!

The typical argument you hear in the US is that most people are lazy, and so won't bother to vote unless forced to. If you make it compulsory, they will get involved in the process and the election will be a better representation of the will of the people.

As an aside, I would argue that the first thing the US should be doing is making elections on the weekend, or even better, make election day a public holiday. People might be lazy, but adding the extra hurdle of having to get away from work to vote certainly doesn't help, and probably disproportionately affects lower income people who have jobs where it is more difficult to get away for a while, even if the employer would like to allow it.

If you make voting compulsory, there is no good reason to expect that most people who are currently disinterested will suddenly care passionately about the electoral process. Some might, but most will probably remain just as disinterested and go and vote with the minimum effort and involvement possible.

Signal to Noise Ratio

The biggest problem with compulsory voting is that rather than getting a true representation of the will of the people, you are now muddying the results with a whole heap of votes from people who don't care or are poorly informed. Possibly all these extra votes would be equally distributed across all voting options (if you treat them as random chance), but studies show that this is not the case. For example, the choice at the top of the ballot will be selected more frequently than the choices below.

So adding in a bunch of forced votes will skew the results in some way, giving a less correct result.

Low Information Voters

The other major problem is that of low information voters. This is a problem in any voting system: voters who have poor or limited knowledge of the candidates, issues, and other relevant information for making an informed decision. Typically these are voters who get most or all of their relevant knowledge from sensationalist media, such as biased television news and news opinion shows, lowbrow newspapers, talkback radio, etc. Because these media sources rely on exaggerating, omitting important facts, and often outright lying to get ratings, and because most people will not make the effort to fact check, you end up with horribly misinformed and misguided voters.

Do you really want these people voting? How can they possibly improve the quality of the result?

Indifference

Then there is the problem of indifference. This covers both people who don't care which candidate is elected, and people who see all options as equally bad, all candidates as equally corrupt and untrustworthy. Forcing such people to vote guarantees invalid information since, from the voting perspective, their vote has the same effect as that of a person who is 100% in favour of a particular candidate, yet this is clearly not a fair representation of how they feel about the candidates.

Amplifying Small Differences

Following from the previous point is the general problem that voting turns a fuzzy preference into a black and white one. Many voters are not going to be 100% in favour of one option and 0% in favour of all of the others. In reality they will see pluses and minuses in the different options. At the extreme they will be 50/50 like the indifferent case above, or maybe 51/49, just very slightly preferring one to another.

But the voting process turns all of these votes into a 100% choice for one option. If you instead had people voting on, say, a scale from 1 to 10 for all options, and then normalized so all of their preferences summed to 1, I think the election outcomes may well end up looking very different.
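As a sketch of that idea (the ballot format here is just an assumption for illustration), each voter scores every option on a 1 to 10 scale, each ballot is normalized so it sums to 1, and the normalized ballots are tallied:

```python
def tally_normalized(ballots):
    """ballots: list of {option: score} dicts, scores on a 1-10 scale."""
    totals = {}
    for scores in ballots:
        weight = sum(scores.values())
        for option, score in scores.items():
            # each voter contributes exactly 1.0 in total, split by preference
            totals[option] = totals.get(option, 0.0) + score / weight
    return totals
```

Under this scheme a 51/49 voter contributes roughly 0.51 and 0.49 rather than a full 1.0 to one side, so narrow preferences are no longer amplified into absolute ones.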

Optimal Representation

In the end, I don't think there is any perfect system that will capture the will of the people (at least in part because I think the 'will of the people' is probably a poorly defined concept). However, it seems to me that the best results will occur when you maximise the number of well informed voters who have a strong opinion on the choices, and minimise the poorly informed voters or the ones that don't feel very strongly about any of the choices. Compulsory voting pretty much gets you as far from this ideal as you can go.

If voting is voluntary, but ideally has low barriers (like being held on a weekend as is done here in Australia), then you will get lower turnouts, but the quality of those responses will rise. The purpose of representative government is to have a smaller number of people whose job is to study the important issues and make good choices on behalf of their constituents, who don't have the time to do this. It doesn't seem like much of a stretch to extend this same principle to voting, with low information voters trusting that more informed and passionate people will have a better idea of the best choices, and keeping out of it themselves. Of course we need to look out for people gaming the system, but that's a separate matter.

Tweaking Compulsory Voting

If you must have compulsory voting, then it seems important to me that there is a valid way to vote for people who are mostly indifferent to the available choices, or even more importantly, well informed voters who feel that the available choices are all equally terrible. I think it would be interesting to have some sort of 'no confidence' option on the ballot. If more than a certain percentage of votes are no confidence votes, then you would do something like a re-election, where all current candidates are forced to stand down. There are probably all sorts of issues with a system like this, but I think it would at least force candidates to listen to their constituents and try to be genuinely good choices, rather than the current situation, where they simply try to be the least bad choice.

Actually, this option would probably also be valid for non-compulsory voting systems too. As long as candidates know that one of them must be chosen, the bar stays low and they only have to suck less than the others. But if they know that they can all be thrown out, the game suddenly changes, in our favour.

Saturday, January 5, 2013

Best and Worst of 2012

At the end of each year, I like to look back at the movies, games, and books that I watched/played/read over the year and try to figure out which were the standouts, either by being particularly good/memorable, or by being particularly bad. Since I don't generally go out of my way to consume bad media, my 'worst' picks are usually far from the worst possible in that category for the year. They're still interesting, though, since something you expected to be good that turned out not to be is probably more significant than something you had low hopes for from the very start.

I generally only include movies and games released during the year, but books are often older since the desire to consume them immediately is not as strong as with movies and games. I should also point out that I won't include anything that I wasn't seeing for the first time. I don't usually replay games or reread books, since the time commitment is too high, but I'll often rewatch movies I've seen before when I'm feeling lazy!

Movies

I watched 72 movies in 2012, with maybe two thirds being movies I hadn't seen before. Looking over my list, it's hard to find real standout 'must see' titles. Based on reviews, I expect that Django Unchained and Cloud Atlas would have been on my favourites list, but thanks to shitty cinema release schedules in Australia, neither one has come out over here yet.

Best

  1. Cabin in the Woods
    This movie is the ultimate deconstruction of the teenage horror movie genre. Funny and clever, I love the fact that the initial twist is just the tip of the iceberg for how far this movie goes.
  2. Sherlock Holmes: A Game of Shadows
    Not quite as good as the first one, with a bit too much emphasis on action (and on making Sherlock Holmes wacky because audiences liked that in the first one) and not enough mystery, but all the main actors are excellent and it's still a great ride.
  3. Tintin
    I've never read the comics so I can't judge it on that basis, but I found this movie to be highly entertaining and the CG was spectacular.

Worst

  1. Tinker Tailor Soldier Spy
    I've never seen the previous version or read the book, but even knowing that this is meant to be a slow movie didn't prepare me for just how boring I would find it. I have no problems with a thoughtful spy movie that isn't all Jason Bourne or Mission Impossible, but it still needs to be interesting, and despite some great actors, this movie just couldn't make me care about any of them.
  2. Underworld: Awakening
    I don't even remember what this one was about (beyond Kate Beckinsale in tight outfits shooting guns).
  3. God Bless America
    This was a very disappointing movie. The premise, of a man diagnosed with terminal cancer who decides to go on a rampage against all the things that piss him off in modern society, was great, but the execution was terrible. Too many times we were subjected to blunt, unsubtle preaching by the main character, which completely wrecked the flow of the movie, even though I agreed with what he was saying.

Games

I played 24 games in 2012, though two of those were large DLCs for Skyrim. If a DLC gives me at least about 5 hours of gameplay I tend to consider it to be a game in its own right, since the experience of sitting down and playing it will be much the same as playing a new game.

Best

  1. Sleeping Dogs
    An unexpectedly excellent open world third person action title, with the best melee system I've come across (borrowed heavily from the Arkham games, but improved). I thought that the focus on melee over shooting would bother me, since I'm not a fan of beat-em-ups, but it actually made the shooting that becomes more predominant later in the game feel much more satisfying and earned, and it kept the action feeling fresh, compared to games like GTA IV, where you're shooting guns from the start so there is nowhere to go but bigger guns.
  2. Hitman: Absolution
    Gorgeous graphics, quirky humour, and multiple ways to complete objectives that are actually worth doing. I will never be a hardcore stealth gamer, and I love the Hitman games because they let you play how you want, even if, for me, that means degenerating far too often into a killing spree to clear a level of enemies. I rarely replay games, but this game had me replaying missions to find all of the 'signature kills'.
  3. Risen 2
    I've been a fan of Piranha Bytes' games since Gothic 2, and Risen 2 had all of the elements that make their RPGs so enjoyable, placed in the unusual setting of pirate adventure. As usual there were plenty of annoying flaws, but like Bethesda's games, I'm a sucker for an open world game that rewards exploration with interesting characters and quests rather than just giving you loot or collectibles.

Worst

  1. Dance Central 2
    After buying a Kinect we grabbed copies of Just Dance 3 and Dance Central 2, since it's the closest Diana will ever get to taking me out dancing! The former game was fun, with lots of good songs and enjoyable choreography. Dance Central 2, however, was just wall to wall R&B and hip-hop songs, and stupid looking characters that you want to punch in the face. I'm sure some people love it, but for me it is everything I don't want in a dance title.
  2. Medal of Honor: Warfighter
    Spec Ops: The Line tried to make the player feel guilty through a clever narrative that makes you make bad decisions that get innocent people killed, and forces you to face this truth. Medal of Honor: Warfighter does a far better job of making you feel like a monster by throwing you, with far superior weapons and equipment, against desperate enemies and then patting you on the back and reassuring you what an awesome, heroic dude you are. On one mission you attack the enemy with a remote robotic vehicle equipped with a grenade launcher and a machine gun, and your enemy fights back with... a rock. That pretty much sums up the game.

Books

I read 56 books in 2012, which was the most I've read in a single year for quite a while. Part of the secret this time was getting into audiobooks in a big way. About 35 of the books I got through were audiobooks, which I listened to mostly while exercising or driving to work. I'm definitely a big fan of audiobooks now, though I have noticed that I don't take them in as well since I'm usually listening to them without full focus.

Best

  1. Thinking, Fast and Slow by Daniel Kahneman
    Destined to become a classic, this book is an excellent coverage of the various cognitive biases that lead us to faulty conclusions and bad decisions. Everyone should read at least one book about cognitive biases, just to gain a little humility about the limits of human thinking, and this is the single best book I've read on the topic.
  2. Bad Pharma by Ben Goldacre
    This book is a big eye opener to just how little we can rely on published medical research to tell us how safe medications actually are. The author is a doctor who desperately wants the system improved so that drug companies are made more accountable and forced to publish all of their studies, not just the ones that make their drugs look good.
  3. Total Recall by Arnold Schwarzenegger
    Arnold is clearly a bit narcissistic and an unfaithful husband, and this book barely touches on the less favourable parts of his past, but I can't deny that it was a fascinating read. He has done a lot of things in his life and has lots of good stories to tell. The book was energetically paced and despite being about 600 pages in length, it was hard to put down.

Worst

  1. Greenwash by Guy Pearse (no, not that one)
    I picked this up on a flight to Sydney as a light read about the ways various companies pretend to be taking environmentally friendly initiatives, reducing their greenhouse gas emissions and so forth, but was disappointed to find a dull catalog of examples. The author makes very little effort to present his data in an interesting way, simply breaking things up by product category (Cars, Fashion, Oil, etc.), describing what each company's advertising says, and then giving some facts about the reality of the company's behaviour. Over and over again. Good intentions, but poor execution.

Conclusion

Well, that's my list. Feel free to post comments, particularly if you disagree with any of my picks, since I always like to hear different points of view, and possibly even learn something!

Monday, November 26, 2012

Consumers Are Getting Smarter?

I've heard it said a lot that consumers are getting smarter and that traditional advertising doesn't work any more. It needs to be more sophisticated to match its audience. I call bullshit on this. In some ways advertising absolutely has gotten more sophisticated, but only in the ways it manipulates consumers, not in any real way that respects their intelligence.

A lot of research has been done over the last 50 years or so on human psychology in general, and advertising in particular. A lot more is known today about how our brains work, and what their shortcomings and blindspots are. Advertisers have learned how to tap into this to some degree, and use our weaknesses against us.

Since this same information is available to us, the consumers (books like Buying In and the book/TV show The Gruen Transfer are a good place to start), the logic is that there is some kind of Red Queen Effect occurring that makes advertisers have to keep one step ahead of the consumers so that their tricks will keep working.

The problem with this, though, is that it assumes that if consumers are aware of a psychological trick, they will be immune to it. Unfortunately, this is not at all true. Sometimes it works, but quite often we are fooled by a trick even when we're aware of it because it works on a subconscious level in our brains, so unless we engage our conscious brain to recognize the trick, it will slip past our defenses.

Daniel Kahneman talks about this in his excellent book Thinking, Fast and Slow, where he separates thinking into two levels, which he calls System 1 and System 2. System 1 is a fast, pattern recognizing, snap judgement level of thinking, and is generally in control when we're not paying conscious attention to things, or when we're getting an initial impression of something. System 2 is deeper, conscious thought, which takes time and effort, and so we tend to only engage it if we think we need to (and often not even then!).

Just because you know an optical illusion is an illusion, it does not mean you can see through it, and just because you know that a picture of a juicy hamburger on a billboard is trying to manipulate your base urges, it does not mean that it won't make you hungry.

Consider the television ad for the iPad Mini. What does it actually show us? Someone playing a tune on a virtual piano on a regular iPad, then switching to an iPad Mini. That's it. The information content is basically, "you can now play a virtual piano in a slightly smaller format!" It's really just showing a cool gadget and suggesting, "Hey! Isn't this gadget AWESOME?!", and linking that to a familiar tune. Does that really seem like an ad responding to consumers getting smarter?

Or take the latest iPod ads (yes, I know I'm picking on Apple here, but it's a brand where the people who buy it tend to think they're smarter and more sophisticated than 'the masses', so I think it's the perfect brand to examine). Once again, a catchy tune, and this time just iPod Shuffles and Nanos bouncing around on the screen (it's slightly more complex than that, but that's the essence of it). No information content other than, "Hey, don't these gadgets look cool?".

Yet these ads work. Damn, do they work. Apple's sales of iPads and iPods are testimony to that. But this isn't a response to consumers getting smarter. This isn't in any way respecting the intelligence of the people buying the products. If anything it's downright condescending, the advertising equivalent of dangling car keys in front of a baby and cooing, "Oooh! Look at the shiny!"

So why do we keep hearing that consumers are getting smarter? I think there are two main reasons, both stemming from the fact that the advertising space is getting more crowded: people are bombarded with more ads, each ad has less impact as a result, and so advertising has to work harder to be effective:
  1. Some companies/advertising firms have decided to carve out a niche where they treat their customers as smart and savvy, and market to that. Whether they actually treat their customers that way, or just say they do in order to manipulate them, is a separate matter; I'd say both occur in practice, depending on the case in question.
  2. Advertising companies promote the idea of the smart consumer to their customers, the companies that buy their services, as a way to justify their costs. If a company thinks that the consumer is getting smarter, then they're going to feel more justified in spending large amounts commissioning advertising and marketing companies to create sophisticated ads for them. It's hard to justify massive consulting fees to create an advertising campaign if you think the target audience is dumb!
So there you have it. I could be totally wrong about all of this, of course. But next time you see an ad that appears to be treating you as a sophisticated, discerning consumer, try to remember to stop and ask yourself, "Do they really think I'm a smart consumer, or do they just think that I see myself that way, and so they're trying to tap into that conceit to lower my defenses?"

Tuesday, November 13, 2012

Assassin's Creed III

Assassin's Creed III is a great game that expands on the series in new and fresh ways while keeping enough of the trademark gameplay to make the game still feel familiar. However, it could have been an excellent game if a few aspects of the gameplay were improved, and more importantly, if various bugs had been fixed. It feels to me as though they needed about 3 more months of bug fixing before shipping. This is a real shame, since these issues are not showstoppers, but some are definitely frustrating and make you want to punch your character through the screen!

Setting

After three games in the old setting (Renaissance Italy), it was time for something new. At first I didn't think the American Revolutionary War would work, but I was quite pleasantly surprised. They found a good origin story for your character, Connor, and giving him both an assassin background and a Native American background added some depth to the character, while also helping to explain some of the additional skills in this game, such as hunting and tree parkour.

Connor's background is tied in to the larger events happening during the period, although it does sometimes feel like he is shoehorned into certain events for no good reason except to tie them into the game. We do get to see the conflict of the time between the British, Americans, and Native Americans handled fairly tastefully, though the British tended to be a bit more faceless and generically evil to suit the plot's purposes.

Characters

Overall I wasn't a huge fan of Connor. After having the older, wiser Ezio of Revelations, it felt like a step back to play another young, impatient character who makes poor decisions. I did like playing Connor at several different ages, and the fact that it takes quite a bit of game time before you finally get to adult-aged Connor didn't bother me; it was actually a nice change.

You start off the game playing as Connor's father, Haytham, and I quite enjoyed this character. Although his skills were more limited, you get to play through an interesting arc with him, probably made more interesting by not knowing where it was leading (at least for me, since I didn't know the connection between him and Connor when I first played).

The rest of the characters are serviceable enough, with some needlessly over-the-top bad guys who never shy away from reminding you of how evil they are. The inclusion, once again, of famous historical figures is fun, but as a non-American it didn't really do much for me.

World Navigation

These games are all about navigating the world in a cool way, and in this aspect I felt the game regressed compared to previous titles. I found myself doing the wrong thing more often than I remember in past games, and getting frustrated by it. Because running and freerunning share the same button, I would often try to run from place to place only to end up running up a doorway at my destination, or jumping onto something I passed too closely on the way. Maybe this was more noticeable because fast travel points were very scarce on some maps and tedious to unlock on others, so I ended up travelling on foot a lot more.

I found navigating the Frontier map particularly annoying due to the ridiculously small number of fast travel points, combined with the fact that it's not obvious where they are unless you find them by luck or consult an online reference. There were also bugs where you would have to run around the general stores that two of the points were located at, trying to make them unlock. In one case, I unlocked a point, travelled away, and found it had disappeared; I had to travel back manually, unlock it again, and watch it disappear again!

The horse also seemed fairly useless unless you travelled on paths. Going overland, it would keep slowing down at obstacles, and the irregular nature of the terrain meant you would frequently find a cliff or river where you'd have to ditch the horse and go on foot anyway.

Tree navigation worked fairly well, but I rarely used it to get from place to place since it was usually hard to tell where a particular tree path would lead. Unlike rooftop navigation, where you can usually head in any desired direction, the trees tended to offer more or less a single path to follow, and it usually ended up being quicker and less frustrating to just run. And sometimes you will go ahead and jump off a tree into empty space and kill yourself. Good times.

Combat

Melee combat was definitely improved in this game, with timing reactions to enemy attacks and doing counters being much more important. It frustrated me a bit at first until I got to understand it and gave up my preconceptions from the previous games, but eventually I got into it.

Other weapons were much less useful, specifically the bow and arrow, and guns. There are a lot of glitches related to guns that make them next to useless: the reload button frequently doesn't work, and the second gun (once you go through all the effort of getting a second holster) seems to be ignored. But the biggest problem with ranged weapons is that you can only aim and fire them when an enemy is close enough, and due to the slow charge-up time before you can fire, I found them rarely useful in open combat. The same was true when fighting animals such as wolves: you would see them coming, but by the time the game allowed you to start aiming, you would already be in the close-range quick time events that typically happen with wild animals, making the ranged weapon useless. And in one mission with scripted wolf attacks, the quick time events wouldn't trigger for some reason, so I was left running up a tree and trying to pick them off from there.

I also found it very hard to pick up weapons on the ground in combat if I had enemies close by. I could never get the 'pick up weapon' option to appear for long enough before getting struck. And since there was another bug that seemed to make my sword sometimes vanish when I travelled, this left me falling back to the tomahawk to deal with a mob of enemies far too often.

Side Missions

There is a whole set of naval side missions that were all very fun. The handling of the ship was easy and intuitive, and naval battles were exciting and presented the right level of challenge. You can upgrade your ship, but this tends to be very expensive, and since money can generally only be gained by doing other side quests, it's not really worth doing unless you've got plenty of time to kill.

The assassin recruit feature is back, though you're limited to a maximum of six, and the whole minigame of sending recruits off on quests is not as much fun as before. It used to involve a fun strategy of picking missions with different levels of risk and reward, but there is very little difference between mission rewards this time, and the trick of sending a rookie along with a veteran to level up faster no longer works. This makes levelling up your assassins much more of a grind, and I rarely had them available to help in my own missions because they were always off on theirs.

All the minigames were fun, being recreations of real-world games. Diana and I both found ourselves playing checkers or Nine Men's Morris, something we would probably never bother to do normally, so it felt good to actually practice a real-world game.

The whole crafting and trading system was interesting but felt a bit pointless. You go to a lot of effort to unlock different craftsmen, which gives you new recipes for items, but other than crafting specific items for your own character, like bigger ammo pouches, I couldn't see much value in it. I still found myself generally just purchasing animal skins and selling those. Maybe more valuable items can be crafted later on? Trading also tended to be tedious: you repeat the same action numerous times to fill a caravan to send off to trade. You can easily make a lot of money if you invest the time, but it feels like such a pointless grind.

Environment

Like all Assassin's Creed games, this one is very pretty. The amount of detail in the world is great, with your view always feeling full of objects. City streets are appropriately cluttered, while the countryside is full of vegetation, tree stumps and so on. I never noticed any repeating terrain textures, a common flaw in open world games.

Water is gorgeous, and the naval missions really allow you to appreciate it. One mission in particular has you navigating giant rogue waves which were excellently implemented. Other effects like rain and snow are well done, and the fact that you get to visit each location in both summer and winter (as well as with a dynamic day/night cycle) is very cool.


Final Thoughts

I would definitely recommend Assassin's Creed III despite the bugs, though I would suggest waiting until a major bugfix patch is released. This will almost certainly happen, since many of the bugs are quite irritating but look like they would not take much effort to fix.

If this were the first or second game in a series, you would cut them a lot more slack over these bugs, but as the fifth game in the series there is simply no excuse. It's good that they tried to innovate with this game, but that's not a good enough excuse for the quality level to regress. If we're going to be forced to have so many sequels these days rather than original content, we should at least demand that quality improves each time.

Monday, November 5, 2012

Reality Check

Many atheists say that religion is harmful, dangerous, and the cause of a lot of suffering in the world. Religious people tend to reject this classification, pointing out all of the other things that cause problems in the world, and will often claim that if religion didn't exist we would still find plenty of ways to make our fellow man suffer. While I absolutely agree with religious people on this point, I want to discuss in this post a particular missing attribute of religious belief that makes it uniquely dangerous and deserving of being singled out: the reality check.

Beliefs

We all have countless beliefs; they're a necessary part of making sense of the world, whether it's belief in gravity, in the equality of men and women, or that unicorns exist. Beliefs can be founded on two things: evidence or faith. In practice, since no belief can be 100% proven based on evidence, we tailor the strength of our beliefs to the strength of the evidence. There are many things we believe without having explicitly gone out and searched for evidence, and it might be tempting to call this faith, but it really isn't; such beliefs still have a basis in facts and evidence.

For example, you might say that I have faith that Italy exists, since I have never been there and am trusting the word of others who say it exists. Is this faith? Not at all. There is a wealth of data of different types (books, movies, documentaries, conversations with people) that is on the whole consistent in claiming Italy's existence. More importantly, there are plenty of ways I could go out and gather further evidence of whether Italy exists, if I so chose. No one is forcing me to just take their word on the issue.

Faith

The key factor that makes religion dangerous is faith. Faith, far from being a virtue, is what makes people cling to bad ideas despite lack of evidence, or even worse, despite direct evidence that the belief is false. You can't reason with faith. You can't present evidence to shake it. People with strong beliefs based on faith are often proud of the fact that they do not require evidence for their belief, and religions sometimes even tap into this and promote it as virtuous.

Faith isn't unique to religions, though. It is a typical part of many types of ideology. When you get a set of beliefs that must be accepted without proof, you start getting into dangerous territory. And when you suppress debate, discussion and contrary viewpoints to that ideology, that's when evil things tend to happen. This is the cause of a lot of the large scale suffering that does not stem from religion, e.g. Soviet Communism, pure unregulated capitalism.

The Reality Check

Evidence-based beliefs naturally are subjected to reality checks on a regular basis. Every piece of evidence is a test of whether a belief is consistent with reality. It can sometimes take a long time for the truth to be determined, but at least a mechanism exists for this to happen, and so we can be confident that bad beliefs will eventually be revealed as more evidence is gathered. (Note that this confidence is not faith, since it is based on a strong provable history of this mechanism working, and without a solid reason to expect this to change, the most defensible position is to expect the future to follow the same pattern as the past).

Faith-based beliefs don't typically rely on evidence and so don't have reality checks built in, but as long as a belief makes claims about the real world, evidence can be used to strengthen or weaken it. So, for example, if a person believes on faith that the world is 6000 years old, there is plenty of evidence that can be raised against this belief. The believer may choose to ignore the evidence and keep believing anyway, but the burden of cognitive dissonance will grow stronger as the evidence piles up, and there is at least a chance that eventually the believer will be forced to deal with it.

But what about beliefs that don't make claims about the real world? What if you believe that your suffering in this world will be rewarded after you die? What if you believe that killing a bunch of innocent people is what God wants you to do, and he will give you virgins in the afterlife if you do it? What if you believe that a person must not end their suffering from a horrible terminal illness by taking their own life, or having another person assist with this, because there will be worse punishment after death for it? What if you believe that people reincarnate and disabled/deformed people did something wrong in their previous life, and so deserve their suffering in this life rather than our support and kindness?

The danger with these sorts of beliefs is that they can never be disproved. There is no possible evidence, even in theory, that can prove the beliefs to be false. There can never be a reality check. When religions make claims about the physical world, such as the age of the universe, or that a piece of wafer gets converted into the flesh of a dead man, these claims can be made to smash hard against the rocky shore of reality. But when claims are made that fall beyond the realms of the physical universe, such as the nature of god and his decree of what is right and wrong, these beliefs float, untouchable, above the messy battlefields of reality where beliefs survive or die based on evidence.

Final Thoughts

Religion deserves to be singled out as a uniquely dangerous faith-based belief system because it makes claims that can never be checked against reality, not even in principle. This sets it apart from other faith-based ideologies, such as political, social, and economic systems, and from straight ignorant beliefs that simply ignore evidence, such as sexism and racism. When these claims are used by believers to justify behaviours in the real world that negatively affect other living beings, it is very hard to bring about voluntary change, because there is no way to prove to believers that their core beliefs are wrong. Change only comes from demanding intellectual honesty: getting believers to admit that their beliefs have no proof, and that believing something simply because you want it to be true is not good enough. That is far from a trivial task.



Acknowledgements: This post was inspired by chapter 3 of the book Why Are You Atheists So Angry? by Greta Christina, a very good book based on the atheist writings in her blog. I highly recommend the audio version of the book, enthusiastically read by the author herself.