Monday, March 11, 2013

What would you do if money were no object?

Money!
Make bad life choices!

I saw this video posted on Facebook recently, which contains a recording of British philosopher and writer Alan Watts discussing how to figure out what to do in life in order to be happy. You may have seen it too:

http://www.brainpickings.org/index.php/2012/10/10/if-money-were-no-object-alan-watts/


The basic premise of the discussion is simple enough: Think about what you would most like to do if money were no object, if money didn't matter, and then do that thing. Why spend your life doing things you don't want to do in order to make money, just so you can keep on doing what you don't want to do?

Sounds obvious, maybe even profound, right? Well, I thought so at first, but the more I thought about it, the more I realized that it's actually really bad advice. Let me explain.

Bad Hypotheticals


The problem with this hypothetical is quite simple: finding happiness in life is a complex multivariable problem, a balancing of many competing wants and needs. Ignoring any one of these variables may have minor value as a thought experiment, but it's dangerous to actually make life decisions on that basis.

Think about the different aspects of life that you could hypothetically ignore and how it would change what you would choose to do:
  • What if money didn't matter?
  • What if health and fitness didn't matter?
  • What if morality didn't matter?
  • What if feeling fulfilled in the long term didn't matter?
  • What if friends and family didn't matter?
The problem is, all of these things, and many more, do matter. You can't just ignore one of them and think the results will be sensible. I mean, if health and fitness didn't matter, I could save about 15-20 hours a week on exercise and eat all manner of awesome shitty food. But would I actually go and do this? Of course not; it would be a terrible idea, or at least one that came with many undesirable consequences. My high blood pressure would get much worse, and I'd probably take about 20 years off my life. Or maybe I'd have a stroke and get to live the remainder of my life with that.

Actions and Consequences


Say you took the 'money is no object' hypothetical and decided to do something that almost certainly paid poorly, but you did it anyway because it made you happy. Now, say you also want to have children. Should those children have to live with your decision and suffer an impoverished upbringing? We make lots of decisions in life, and many come with responsibilities and additional consequences, whether we like it or not. Ignoring these consequences and the impact they have on others, all so you can be happy, would be a rather selfish way to live your life.

Now, you might say that I'm reading into this too much, and that the hypothetical is useful as a meditative exercise, something to help gain focus. I agree completely with this, but making it work is a two-step process. First, think about what you would do if money were no object. Second, ask whether there is a way to realize that dream given that money does matter, without creating unwanted consequences. If the answer is no, then don't do it. The same could be done for all the other hypotheticals listed above. Use them as a meditative device, but if you can't find an answer that includes all of them, and any other important factors in your life, the option is a bad one and you should discard it.

Unknown Timespans


Think about what you would do if you only had one day to live. What wouldn't you do? Would you bother with eating well, would you go to work? Would you care about a long term savings plan?

What if you had a month to live? There are probably now some things that you would care about that wouldn't factor in with a single day. Meeting up with old friends, travelling to places you always wanted to go. But you probably still wouldn't care about work or health and fitness.

What if you had a year? Things like diet and exercise might start to be important again in this timespan, as would some form of income. But you probably still wouldn't be worried about your superannuation or keeping your salt and cholesterol intake low. You wouldn't care about high blood pressure or diabetes, but you'd care about getting hit by a passing car.

But in the end, we don't know how long we're going to live. Any of the above may actually be true for any of us, but most of us will have decades ahead of us. So as interesting and focusing as the above hypotheticals might be, we generally have to plan as though we will live for several decades.

However, on the flip side, we also need to balance this against any major regrets we would have if our lives were suddenly cut short. You don't want to regret shortsighted decisions if you actually live for a long time, but you also don't want to live entirely on the assumption that you've got many decades ahead, and then feel massive regret and disappointment if your life is unfortunately cut short and you miss out on all the things you planned to do 'one day'. This is all about balance, and there are no easy answers.

The same is true for money, health, morals, friends and family, and every other aspect that makes life so complicated and interesting. You can't make good decisions if you pretend that any of these things is unimportant, but you also can't make good decisions by focusing on one of them and pretending it's the most important thing.

Hypotheticals are fun and can be mind expanding, but life is complicated, and so we shouldn't be tempted to fall for simple-looking answers that try to hide all of this complexity. Sometimes I feel envy for people who are so passionate about just one thing, since life seems so simple for them. But at the same time, I feel sorry for them, as I think of all of the other amazing things they miss out on due to their intense focus and obsession. I'm not sure what the right answer is, and it's clearly going to be different for everyone, but chances are, it's going to be complicated. And that's okay.

Monday, March 4, 2013

Tomb Raider reboot

Diana and I just finished playing the new Tomb Raider reboot, and we enjoyed it far more than we expected. I think it's worth posting a bit about it for this reason.

Expectations


We went into Tomb Raider without high expectations. I've never thought particularly highly of any of the previous games, and I tend to think that the Uncharted series mostly took the crown for this genre. However, having met some of the developers at Crystal Dynamics a few years ago and hearing about their plans for the game (it was in early preproduction at that time), I was certainly interested to see how well they were able to realize their vision.

I'd avoided reading anything substantial on the game, and I hadn't really planned to play it, expecting to just read some reviews and be satisfied with that. However, after coming down with a cold just before the weekend and seeing that the game was out already, we decided that there were worse ways to spend a couple of days, and so we gave it a go.

The concept of a gritty reboot of the rather tonally questionable series sounded like a good idea to me. It seems to be working for every other reboot these days, and I certainly wasn't going to go near a Tomb Raider game otherwise! I could see that there was actually some potential if they did it right.

Good borrowing


Overall we both enjoyed the game a lot. I find it very interesting because it's hard to pin down exactly what was good about it. There was no real single idea that was new or innovative in its own right, yet somehow the game as a whole worked really well. I think it is an example of a game that takes lots of good ideas from other games and combines them in just the right way. I would compare this to Sleeping Dogs, another game that does nothing new, but combines lots of good things well. 

I don't see this as a negative statement on the game or its developers. Genuinely new gameplay mechanics and level design ideas are pretty rare, and most of the time developers are taking existing ideas and refining them. Sequels are quite often just a polishing of ideas from the previous game, so a game that takes lots of promising ideas from various other games and refines them all into a single experience is a good thing.

Structure


If I had to compare Tomb Raider to other games, I think Uncharted is clearly the obvious one, but the level design is strongly reminiscent of Arkham Asylum. This is due to a mechanic that both games share: areas of levels are initially inaccessible, but as new equipment is unlocked you can go back and reach them. This has been done by several other games too, and I think it's a clever way to make a mostly linear game feel much less so.

The narrative of Tomb Raider is definitely linear. You move through a fairly interesting and well paced story. The new equipment mechanic allows it to feel less linear, though there are only a couple of major places in the game that you will revisit if you strictly follow the story. However, you have the freedom to go back to any previous section at any time by finding the nearest fast travel campsite, which will then let you instantly travel to any other discovered campsites on the island. This allows you to search for the various collectibles in previous sections or open up areas you couldn't access before (generally to gain access to collectibles).


Gameplay


One of the big strengths of the game is that it knows not to make any sections drag on for too long. Just like Uncharted, you will find yourself in climbing sections where you're working your way towards a destination; fighting sections; puzzle areas; and cinematic sections where you're running/jumping/sliding either away from or towards the camera as things around you crumble/explode. Unlike Uncharted, these sections are usually kept quite short, so you never get bored or get forced to replay an overly large amount.

The optional tombs are a good example. You find several tombs across the island, and if you choose to complete them you get a nice loot/XP reward. The tombs themselves are quite short, typically centered around a single physics puzzle, usually with a moderately simple but still satisfying solution. Most games would probably be unable to resist stringing half a dozen of these puzzles together in each tomb, but I think the designers knew that puzzles can be frustrating if you can't solve them, so limiting each tomb to a single puzzle stops the experience from getting derailed for struggling players.

As the game progresses, items are unlocked such as a pick that allows you to climb up certain rock faces, rope arrows that allow you to pull distant objects or set up a zip line between points, and a simple shotgun that allows you to blow open certain blocked passages. As you master each of these, the game will start mixing them together more frequently, which helps increase the difficulty without changing the mechanics.

One of the best things I can say about this game is that it doesn't screw up the gameplay near the end. Too many games introduce boss fights that require gameplay mostly unrelated to the rest of the game, or just spam you with lots of overly hard enemies, for example by taking some of the minibosses from earlier in the game and hitting you with two or three at a time. This has always struck me as lazy design. A good game will build up to a finale by testing the skills you've been developing throughout the game, and the final fight will bring together all of those skills. Tomb Raider does this very well, with the lead up to the finale requiring a rapid mixture of all of the navigation and fighting skills you've been developing.

Upgrades


Tomb Raider has multiple upgrade systems that help keep the game feeling fresh. Unlike games such as Call of Duty where variety is gained by having lots of different weapons appear throughout the game, but limiting the number the player can carry, Tomb Raider has a small array of weapons, all of which are carried by the player once unlocked. To keep it interesting, there is a system of upgrades available for each weapon. Generic salvage can be picked up throughout the game, and this can be spent upgrading weapons as the player sees fit. 

Separate to this is a character levelling system fuelled by experience points that are gained by killing enemies or finding collectibles. As the player levels up, they are given skill points which can be applied based on the player's gameplay style. You will end up unlocking most of these by the end of the game, but it's still nice to have some control over the order of the unlocks, and it also fits with the overall character arc.

AI


Enemy behaviour is interesting and well executed. The standard patrolling/searching/fighting states are here, and variety is mostly achieved through different classes of enemy. Enemies have substantially different behaviours based on weapons, so you get the melee charging types, the slowly advancing shotgun guys, ones with assault rifles that tend to favour moving from cover to cover, long distance archers, and so on. Enemies will use fire bombs or grenades to flush you out of cover, and cover can often be destroyed by both you and AI using fire or grenades.

I like the fact that enemies do not take an unreasonable number of hits to go down, though this is affected by whether they have body armour or helmets. You can also temporarily incapacitate enemies by shooting them in the legs, which gives you the chance to close in for a finishing move. There is a simple but effective dodge/counter system that stops close-range combat from being frustrating without implementing a full melee system, which is clearly not the focus of the combat here.

Once again, there is nothing particularly new here, but it's all well done, with the combat being nicely balanced so that it never gets frustrating but also still leaves you feeling satisfied when you clear out an area. Some people will probably find it too easy, but I dislike games where I have to play action bubbles repeatedly to beat them, and in Tomb Raider I found that I usually beat a section with one or two tries. Note that this was on normal difficulty.

Conclusion


I've only touched on certain aspects of the game here, not even dealing with things such as graphics and sound (both of which are very good). I hope I've managed to capture some of the reasons why this game stood out for me as a particularly good example of how to make a game right in a well established genre.

Carbon Dioxide as a pollutant?

This is just a short post about something that bothers me about word usage in relation to carbon dioxide. I've noticed more and more sources referring to it as a pollutant. For example, the US Environmental Protection Agency considers it a pollutant, and plenty of other credible sources such as National Geographic use this term too. So why do I think it's a problem?

Labels can be very powerful. They can stifle discussion and make it very hard for issues to be debated clearly. This is because many words are not neutral in tone, but rather come with pejorative or ameliorative connotations that sway thinking about the things they are attached to. A great example of this is the pro-life vs. pro-choice debate. By labelling something as pro-life, you make it very hard to argue against it without constantly having to deal with the implication that if you disagree, you are against life itself. We can see the same trick when referring to someone as, say, a climate change denier or a holocaust denier. The use of the term denier seems to be accurate and probably justified in these cases, but there is no doubt that it's a pejorative and that it sets the tone of discussion, making it much easier to dismiss the denier even if he actually has some valid arguments.

While labels can affect the tone of discourse, they can also become dangerous when the meanings behind that label are used to justify policies and actions. For example, the term piracy for copyright infringement has been used to justify draconian measures and confuse people as to the legality of these measures. It is much easier to trick people by saying piracy is stealing than the more obviously untrue copyright infringement is stealing. It is like labelling speeding as murder and then advocating harsher punishments to stop murderers!

We've also seen this trick used recently in the US with the war on drugs and the war on terror. By incorrectly labelling something as a war, there is the real danger that laws regarding wartime policies can get invoked and abused. A rhetorical device becomes a legal justification.

So back to carbon dioxide and the term pollutant. I think this is a bad term because, as far as I'm aware, a pollutant is always a substance that is undesirable. A substance where, if you could, you would reduce it to zero in a given situation. Fecal matter would be a pollutant in drinking water because the desired amount is zero. CFCs in the atmosphere are a pollutant for this same reason. But carbon dioxide is of course not like this. There is currently an excess of it in the atmosphere above the desirable levels, but the desirable level is not zero. We would never want zero carbon dioxide in our atmosphere, the way we would desire zero levels of actual pollutants.

As a comparison, when a town becomes flooded from storms, no one ever, ever, complains about water pollution (as in the water itself being pollution). There is absolutely a problem of excess water at that point in time that needs to be reduced, but to call it a pollutant would just be to confuse, not clarify.

So let's work on dealing with the real problem of excess carbon dioxide in our atmosphere without ruining the meaning of another word through overly broad and incorrect application. Otherwise my head will literally explode!

Sunday, February 17, 2013

States' Rights

There is this concept that government should be split into three levels: federal, state, and local. For a long time now I've questioned the value of state government having anything more than just an administrative function. We have it here in Australia, but nowhere do you hear about states' rights more than in the US, and so that will be the primary example I address.

The US has such a strong notion of states' rights primarily due to the way the union was originally founded. I get the impression that it was much more like a loose union of independent states, sort of like the European Union, rather than like Australia. Of course the comparison is not exact, since the EU is a union of countries, but it seems that the US states very much wanted to keep existing and operating as separate entities as much as possible.

This certainly explains why there is a historical tradition of strong states' rights, but that is not a justification for refusing to change when there are good reasons. Though, of course, useful change can still be very hard to enact, even if most people agree that it would be a good thing (*cough* metric system *cough*).

The independent experiments argument


One of the arguments for states' rights that is very popular in the US is the idea that each state operates as its own independent experiment, trying different ideas for what makes a well run state. The laws can be a little different (except for federal laws of course), services provided by the government can be different, and then we can compare results and hopefully learn more about what the best policies are.

This idea has a couple of problems. The most obvious one is that experimentation has a real cost. When a state implements a bad policy, it's real human beings that suffer the consequences of that policy. This should not be trivialized. While you can certainly create bad federal policy that will then affect everyone rather than just one state, maybe it's better to not have states thinking that they are their own little policy sandbox, and instead get together all the best thinkers from every state on various issues and come up with good policy for everyone.

The more subtle, but possibly greater, problem is that states are far from independent. When one state experiments with something like lower taxes or better health care, this doesn't happen in isolation, but can heavily disrupt all of the other states. For example, states with favourable corporate taxes (such as Nevada) attract a lot of businesses that take advantage of this. But this then takes tax revenue, employment opportunities, and so on away from whatever state those businesses would otherwise have been started in. Or a state with better health care will attract more people in need of health care, which may then place extra burden on that program and even make it appear that it was a bad policy, because it ends up costing a lot more than it would have if all states had the same policy.

So without independence, these state experiments cannot give good information about whether policies are actually good or not, since it becomes very hard to tell how much of what you observe is due to your state's policies and how much is due to the effects of other states' policies. And if one state can create policies that negatively affect another state, but there is no legal accountability for those effects, then the incentive becomes much lower for states to consider the national consequences of their policies.

Universal law


State level laws open up the problem of behaviour in one state being legal, but the same behaviour being illegal in a different state. To me it seems a rather perverse idea that the legality of a citizen's behaviour is dependent on the state that they are in. I think that there are certain things that should bind all citizens within a country for the concept of 'country' to have any real value, and one of those things should be that the same laws apply to every citizen. We see the separate problem of laws seemingly being different if you happen to be rich or a big corporation, but this tends to be more due to uneven enforcement of laws. To actually say that certain laws don't apply at all is different, and I really don't see any benefits for having different laws. 

I would argue that any law that can't be applied to every citizen of your country is a bad law. Many bad laws actually get created because of this state based legal apparatus, with big industries specific to a particular state finding it much easier to get favourable laws made for them than they would if they had to petition on the national level. Corruption goes to the highest levels of course, but given that state legislators tend to be focused on the wellbeing of their state, it's always going to be harder to get national laws passed that only benefit a single state, or that benefit one state at the detriment of another.

There is also a problem with trying to change laws that exist on the state level for national purposes. For example, the company Cars Direct was founded with the intention of providing internet sales of new cars direct from manufacturers to consumers all over the US. It turned out that such a thing was impossible, because there are laws in every state giving car dealerships their own territories that can't be infringed on by other sellers of the same make. This meant that Cars Direct would in effect be infringing on the territory of every dealership in the country. It also meant that if they wanted to go to court and fight to have this law changed, they would have needed to go to court in every state and fight the same battle, rather than being able to fight it once on the federal level, so it never happened.

Bureaucracy


State laws also create a lot of extra bureaucracy that makes government both more expensive and less effective. Each state needs to spend time and effort on the same types of laws, making the same research and policy work have to be done multiple times. Take a topic like energy policy. It's far less efficient if every state needs to hire its own experts and decide on the pros and cons of different energy sources and come up with policies. The more this was done at a federal level, the less duplication of effort there would be, and you also increase the opportunities to bring together experts from different states, giving you overall better information and hopefully better policies as a result.

State laws also end up reducing the government's ability to distribute funds in an efficient way, with each state having its own set of red tape and loopholes that can cause inefficiencies and need to be overcome.

Part of the pointless bureaucracy is the totally unnecessary duplication of things such as driver's licences and car registrations. How is it helpful to anyone to have to get a new licence and change your car registration whenever you move interstate, with different compliance laws and so on? I can see no good reason why systems such as these should not be unified across the entire country.

Conclusion


I'm sure that state level government is very useful from an administrative point of view, just as local government is, but I hope I've put forward some compelling reasons why the scope of state government should be reconsidered and ideally reduced, particularly in the case of state level laws. Times change, and what may have once been a useful framework may no longer be, and we should accept this without holding unreasonably to outdated ideas.


Sunday, February 10, 2013

The Daily Show, News, and Satire

This is just a short post about something that bothers me about comedy satire shows such as The Daily Show and The Colbert Report. I think these shows are quite enjoyable and actually perform a valuable service to society. Political satire is very useful for getting people engaged in important current events that they might otherwise not take interest in, and given the increasing trend of actual news media to focus on sensational stories to get ratings rather than on news stories that actually matter, these comedy satire shows are often airing stories that otherwise will be missed by many people.

My problem is that whenever a news program such as Fox News sends a criticism back towards The Daily Show for bad reporting, Jon Stewart typically uses the defence that his show is a comedy program, not a news show, so he should not be held to the same journalistic standards as news programs. I think he is mistaken in this for two important reasons, and that he needs to stop hiding behind this excuse.

Creating news content


If The Daily Show only ever reported on stories reported elsewhere, then it would have a fair claim that it is purely a satire show, taking existing news and mocking/commenting on it. However, this is not the case. The Daily Show creates news content of its own in two different ways:
  • Sending out 'correspondents' to create news stories on some topic
  • Having politicians, authors, actors, etc. on the show as guests
Now, you could argue that the news stories they create are humorous and not intended to be taken seriously, but I don't think this is actually true, certainly not all the time at least. They go out and talk to real people about real events. Just because they inject jokes and often mock the people they are interviewing does not stop it from being actual news. And quite often there is a clear message they are trying to promote. Injecting jokes into a news report doesn't suddenly make it no longer news. It just makes it news that is more enjoyable to watch.

News shows such as Australia's The Project are examples of news reporting coming from the other side of the spectrum, that is, news programs that add humour, as opposed to comedy shows that add news. Watching shows like these, it is clear that they all exist in a spectrum from serious to funny, but they all report their own news, rather than simply providing satire on other people's reporting.

Having guests on your show also invalidates any claim that you are only a comedy show. Let's face it, when The Daily Show has the current US President as a guest, does Jon Stewart really get to claim that he's not taking part in news creation? He doesn't have the President doing comedy sketches as on Saturday Night Live; he has him answering real-world questions. Sure there are jokes, but this is news creation, not satire in any way.

Knowing how others view you


Let's say that Jon Stewart genuinely only wants The Daily Show to be considered a comedy show. Then he learns that many people watch his show for actual news content. He can say, "well, we're only a comedy show. If they want news content, they should watch something else". That's all well and good, but the unfortunate truth is that once you are aware that people are seeing you in a different way than you intend, you can no longer go on as you did before, pretending that you're oblivious to this knowledge.

As an analogy, if you are naturally a quiet person, but one day you find out that all of the people around you mistake your quietness for snobbishness, you now have a dilemma. You can say, "but I'm not being snobbish, I'm just quiet" all you want, but this won't change the fact that people don't see it that way, and now you know it. You can insist that it's their misinterpretation and not your fault, but the fact will still remain that the next time you are around these people, you will be aware of the effect your behaviour has, and it is now a conscious decision to keep on acting that way. Fair or not, your choice to keep acting the way you always have is now a deliberate choice to keep people misunderstanding you.

In the same way, even if Jon Stewart doesn't want his show to be treated as a news show, once he's aware that people do, in fact, watch his show to catch up on news, he has no choice but to make that knowledge part of his future decisions. If he chooses to report on some stupid celebrity story rather than on an important political issue, he now does it knowing that he's wasted an opportunity to make the public better informed. That's his right, of course, but he has to make that choice, and he can't deny the effects of that choice, whether he likes it or not.

Conclusion


Of the various comedy satire shows I watch at times, The Daily Show, The Colbert Report, and Real Time With Bill Maher, I think Real Time understands its place in the spectrum of news programs best, and embraces that position well. Maher's interviews will often have humour in them, but never let the humour take over and drown out the actual serious issues being discussed. This is something that I think The Daily Show and The Colbert Report do not do well, and I think they would do better if they accepted their actual position in the world, rather than trying to have their cake and eat it too, so to speak. If they genuinely want to be treated just as comedy shows, and are not just using that as an excuse so they don't have to stand behind their reporting, then they should probably both drop the interviews with serious people and stick to actors and musicians, and stop doing field reports on real issues. But if they do that, I think we'll all be worse off.

Polishing Software and the Death of a Thousand Cuts

Any large software project will typically use some form of bug/issue tracking software to keep tabs on all of the bugs and opportunities for improvements that get identified during the course of the project. This database may contain thousands of issues, prioritized by importance. Inevitably, there will never be enough time to address all of them, so as release time approaches, the triaging process typically gets stricter, with the classification of issues into things that there is still time to fix, and ones that won't be fixed in the release.

The question that this blog post will try to address is: what should we do with all of those issues?

Issue Triaging


There are two main types of software releases: products that are released once, and products that are maintained with subsequent versions. This distinction is very important when it comes to bug tracking, because issues found in a single-release project can be treated very differently from those in a product with multiple versions.

With a single release, such as a typical game, an issue that doesn't get dealt with for the release will possibly get handled in a patch, but otherwise will never be resolved and can basically be forgotten about. Since patches tend to focus on serious issues or issues that are found after release, any issue found before release but deemed too low priority to resolve is probably never going to get fixed.

With products that have multiple versions, an issue can't be forgotten so easily. If you defer an issue from the current release, that still leaves it open for fixing in the next release. Just because something is low priority now (compared to other issues) does not mean it will still be low priority when the next software version rolls around. Particularly for software that needs to remain backwards compatible with previous versions, issues can never truly go away unless they are fixed.

So, this all sounds fairly obvious and straightforward. The problem arises with a pattern that tends to emerge with a lot of low priority issues, which is that they stay low priority, and get shifted from release to release without ever getting fixed. Your issue database gradually fills up with hundreds or thousands of these issues that are never important enough to spend time fixing, but still exist in your software. What can you do about them?

  • Keep them in the system - a bug is a bug is a bug.
  • Remove after some number of releases without being fixed. Kind of like a 'three strikes' system or similar, where you say that if it didn't become a high enough priority after two, three, whatever releases, it never will, so close it as a "won't fix".
  • Raise the priority of the issue after each release. This would mean that it eventually becomes important enough to fix, but in practice this artificial gaming of the triage system doesn't really work, since people will recognize that they're missing out on fixing more important issues in favour of ones that are marked as high priority, but really aren't.
  • Mark as "won't fix" immediately and forget about it unless it gets raised independently again.

It's this final option that bothers me. The idea is that if an issue is not worth fixing now, it's not worth fixing at all, so just mark it as "won't fix" if it doesn't make the cut in your triaging. This makes for a much cleaner issue database, but is it actually a good idea?
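
To make these policies concrete, here's a minimal sketch in Python of the 'three strikes' and priority-escalation options from the list above. The Issue record and its field names are hypothetical, invented purely for illustration, and not tied to any real tracker's API.

    from dataclasses import dataclass

    @dataclass
    class Issue:
        # Hypothetical issue record; the fields are illustrative only.
        title: str
        priority: int           # 1 = highest priority, 5 = lowest
        releases_deferred: int  # releases this issue has survived unfixed
        status: str = "open"

    def three_strikes(issue: Issue, max_deferrals: int = 3) -> Issue:
        # 'Three strikes' policy: an issue deferred through max_deferrals
        # releases is assumed never to make the cut, so close it.
        if issue.status == "open" and issue.releases_deferred >= max_deferrals:
            issue.status = "wont_fix"
        return issue

    def escalate(issue: Issue) -> Issue:
        # Escalation policy: bump the priority one step each release so
        # long-lived issues eventually rise to the top of the triage queue.
        if issue.status == "open":
            issue.priority = max(1, issue.priority - 1)
            issue.releases_deferred += 1
        return issue

You would run one of these over every open issue at each release boundary. Neither is 'the' right policy; the point is just that each option above is a mechanical rule whose consequences you can reason about before adopting it.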

Software Polish


Many pieces of software do what they're designed to do, but may have clunky interfaces, various minor behavioural quirks, and so on. We would tend to think of these as good software, but not great software (bad software is a program that can't even perform the job it was designed for!). The difference between good and great software is typically what we think of as polish. Getting rid of all of those little annoyances, cleaning up the UI, streamlining the user experience, these are things that rarely involve big issues, but are rather a collection of lots of tiny issues. Typically, none of those issues will be a big deal on their own, but when you accumulate a lot of them, you end up with software that feels unpolished. With games, you'll say that they needed another 3 or 6 months to finish it. With versioned software, you call it Vista (snap!).

So software being unpolished can be thought of as being like a death of a thousand cuts. None of those cuts is a big problem on its own, but they add up, and you eventually reach a point where you realize that you're in trouble. How can you avoid this downward slide into unpolished software, or probably more realistically, how can you make your project take the uphill march that ends in a polished product?

Issue Tracking is Polish Tracking


This is where your issue tracking is your friend. If you're accumulating lots of low priority issues, this is a warning sign that your product is unpolished. By just marking them all as "won't fix", you lose this important information and get a false sense of the quality of the software. Not only does it appear that there are fewer open issues needing resolution than there actually are, but you also add confusion as to which issues you chose not to fix because they weren't genuine bugs, and which weren't fixed purely due to triaging.

Perhaps having all of these issues in the database is telling you that you're not spending enough time fixing issues, and that's why they're accumulating.

And this brings us to one more option for dealing with a large number of low priority issues, one that I left off the list above: spend more time polishing, and less time adding features on your next release!

Every developer prefers adding awesome new features over fixing bugs, and customers certainly like new toys, but it's also true that developers like to feel pride in their work, and customers like to use software that doesn't annoy them. It can be a hard balance to meet, but when your low priority issues start to accumulate, rather than ignoring them, it might be time to recognize that your software is becoming less polished, and you need to put more effort into fixing those issues.
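
As a rough sketch of what recognizing this could look like in practice: track the size of your open low-priority backlog at each release and watch the trend. The release names and counts below are invented for illustration; a real version would pull these numbers from your issue tracker.

    def polish_trend(snapshots):
        # snapshots: ordered mapping of release name -> count of open
        # low-priority issues at that release. Sustained growth is the
        # 'death of a thousand cuts' warning sign discussed above.
        releases = list(snapshots)
        for prev, curr in zip(releases, releases[1:]):
            growth = snapshots[curr] - snapshots[prev]
            pct = 100.0 * growth / snapshots[prev]
            note = "  <-- backlog growing, polish slipping" if growth > 0 else ""
            print(f"{prev} -> {curr}: {growth:+d} open issues ({pct:+.1f}%){note}")

    # Invented example data, not real measurements.
    polish_trend({"v1.0": 120, "v2.0": 210, "v3.0": 340})

A crude measure like this won't tell you which issues to fix, but it turns the vague feeling that polish is slipping into a number you can put in front of whoever decides the release schedule.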

Of course, there are always trade-offs based on the number of developers you have, the time until the next release, and how keen your customers are to get new feature X. Maybe you need to shift your priorities, maybe you need to hire more developers, maybe you need to extend your release times, or maybe you need to accept that your software will become less polished. The key point is that this should be a conscious decision, well thought out and based on the available evidence. Marking issues as "won't fix" when they aren't actually fixed distorts your data and makes it harder to have a true picture of your current situation. And if you don't properly understand your current situation, you're less likely to make a good decision for the future.


Friday, February 8, 2013

The Problems of Religious Morality

One of the major objections religious people have towards atheism is the belief that morality requires a higher power. That is, without a god to declare what is right and wrong, there could be no basis for morality. A practical refutation of this would be the millions of atheists who aren't going around murdering people every day, but sticking with the religious basis of morality, there are some serious issues with the idea of morality coming from a higher power that I think are very interesting.

Now, let me be clear up front that I'm not attempting any kind of argument of the form "their ideas have more problems than my ideas, therefore I'm right and they're wrong". Understanding reality is not a popularity contest. Religious people seem to think that belief in morals from a higher power is a solid and robust idea, and that once you make that leap of belief your moral philosophy is on stable ground. This in itself is often used as an argument as to why that leap should be made; i.e. that morality without a higher power is baseless and inconsistent, but once you inject a higher power into the mix you solve all of those problems. What I hope to demonstrate is that making the leap of faith doesn't afford moral theory the desired robustness, and so it can't be used as an argument in its favour. That pretty much leaves it with just the "because I want it to be true" argument, which is how I think it should be.

All cultures have moral systems

The key fact to recognize is that all known human cultures have moral systems of some sort. They all have differences, and an act can be considered moral in one culture and highly immoral in another, but no human culture is amoral. Now, it's worth pointing out that these differences are not totally relative and arbitrary, such that any act will be moral in some society. There is generally a system that is relatively consistent and makes sense when the particulars of the culture are understood. For example, infanticide is considered moral in some traditional cultures (though it is much less common today as these cultures have more contact and interaction with outside cultures). However, you can't just go around killing any child in these cultures. It applies only in a very specific case: when a mother gives birth to a child while the previous one is still too young, or when she gives birth to twins. In these cases, it is done as a practical matter because the mother will not be able to support both (see The World Until Yesterday by Jared Diamond).

We have to ask the question of how all of these cultures got their moral systems, if morality comes from a higher power. Here are the options I can see:

God spoke to all of them


If we require a higher power to know what is right and wrong, then it must be the case that all human cultures have been spoken to by a higher power in order to know this. This would require that god chose to appear in a different form to every culture, and to give each of them a different moral system. If this were true, then there is no such thing as a single objective morality, unless you claim that he told the truth once and lied every other time.

Or you could possibly argue that he gave all cultures the same moral system, but those systems became corrupted over time. If this were the case, then how would you tell what the correct, original system was?

God spoke to one/some of them


If god only spoke to one group (or possibly a small number of groups), you may be able to get around the problem of god giving different moral systems to different groups of people. But this creates a much bigger problem, which is that all of those other groups must have developed their moral systems without a higher power. And this, of course, is the very thing that religious people are saying cannot be done. Unless they want to argue that everyone else is just fooling themselves and has a baseless moral system. This would mean that all other religions with their own moral systems are a massive lie, with only one group having moral truth based on an actual higher power. Many religious people seem to believe precisely this (though they are reluctant to state it explicitly given the massive hubris of such a belief), creating the problem of people from different religions all thinking that they are right and the others are wrong, but without having any good reason why that should be so, leaving the much more likely theory that they are all wrong.

God speaks to everyone in some ill defined way


Another option would be to say that god hasn't spoken to everyone in a direct "Moses on the mountain" kind of way, but rather in some more subtle way, such as somehow encoding morality into our souls, or something along those lines. I'm not sure if any religious people actually try to argue such a thing, but it seems the most obvious alternative for avoiding the problems of the previous two options.

If such a thing were the case, then it would open the question of what need there is for religion to explain morals. If they are already part of us in some way, we don't need to be taught or told them by an external source. You wouldn't need to practice or believe in any particular religion since you already 'know' the important parts. It would also raise the question of how you could prove such a thing. There would be no practical difference between morality being innate for evolutionary reasons and being innate in a soul, since souls are supernatural concepts not detectable by any scientific method.

Conclusion


So, let me reiterate that if morals come from a higher power, given that different cultures have different moral systems, it must be the case that either zero or one of those cultures actually practices a moral system from a higher power, or that god intentionally gives different moral systems to different cultures. In any of these cases, it is unclear how you can determine which is the 'true' moral system, making the 'higher power' explanation have little practical value.

Deriving new morals

If it is necessary to have a higher power to give us rules for right and wrong, then this implies that our moral system is to some degree arbitrary. That is, god could just as easily have chosen to make any rule different. If this were not true, e.g. if god could not have chosen to make stealing or murder moral, then there is something outside of god that defines morality, which would mean that a higher power is not necessary.

At a bare minimum, a moral system would need to have a basic set of axioms, all arbitrarily chosen by god, from which all other morals could be deduced. Do any religious moral systems actually have such a thing? I would bet that some may claim to have it, but I've never seen such a thing. There always seem to be moral questions that require some degree of judgement, usually provided by the wise elders of the given tradition. But, just like a scientific theory, unless they can show their working, the clear set of indisputable steps that led them to their moral conclusion, they are not working with a consistent system that is reducible to arbitrary, god-chosen axioms.

So, the question is, how does a god-based moral system deduce new morals? How do you determine the morality of a choice that was never covered explicitly by the moral code that the higher power gave? If god could choose any arbitrary answer for any moral question, then you can't know what he would have chosen in this new situation. And if a moral system is consistent and axiomatic, then how much choice did god actually have in creating it, and is he then actually necessary to explain it? Or, if a moral system is not consistent, then how can you justify making new moral deductions?

Final thoughts

I hope that this post has given you some interesting food for thought, as these questions certainly have for me. Of course, I freely admit that I don't think a higher power is necessary to explain human morals, but by working through the implications of such a belief, it is possible to see that it is not sufficient to explain the problem either, which is an important warning flag not to be dismissed lightly.

I look forward to feedback from others on this topic, since I know it's quite probable that I've made mistakes in my reasoning here, and maybe overlooked other options.