Pies are for eating

Bar charts beat pie charts almost every time – I say almost because I am open to being convinced … but I won’t be holding my breath.

Disadvantages of pie charts:
– ranking of slices is difficult to see, even when it is attempted
– they rely on color, which is an extra level of mental effort for the viewer to process
– the colors are useless if printed in black and white
– it is difficult to visually assess differences between slices
– 3-D is worse because it literally makes the nearest slice look bigger than it should be

Advantages of bar charts:
– easier to read
– no need for a legend
– we can rank the bars easily and clearly
– easy to visually assess differences

If you google “pie charts” you’ll find a bunch of people ranting far worse than me. Here is a good collation of some of the best arguments.

All that being said, data visualization is a matter of taste and personal preference does come into it. At the end of the day it’s about how best we can communicate our message. I wouldn’t dare say we should never use pie charts but personally I tend to avoid them.

The Monty Hall problem and 3 ways to solve it

The Monty Hall problem is a classic probability conundrum which on the surface seems trivially simple but, alas, our intuition can lead us to the wrong answer. Full disclosure: I got it wrong when I first saw it! Here is the short Wikipedia description of the problem:

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

On the surface the Monty Hall problem seems trivially simple: 3 doors, 1 car, 2 goats, pick 1, host opens 1, then choose to stick or switch

If you haven’t seen the problem before, have a guess now before reading on – what would you do, stick or switch? My instinctive first intuition was that it does not matter if I stick or switch. Two doors unopened, one car, that’s a 50:50 chance right there. Was I right?

Method 1: Bayes’ Theorem

Let’s tease it out using Bayes’ Theorem:

P(A|B) = P(B|A) * P(A) / P(B)

That’s the generic form of Bayes’ Theorem. For our specific Monty Hall problem let’s define the discrete events that are in play:

P(A) = P(B) = P(C) = 1/3 = the unconditional probability that the car is behind a particular door.

A note on notation: I use upper case letters for the location of the car, e.g. A means the car is behind door A, and, as you will see below, lower case letters for the door that Monty chooses to open, e.g. b means Monty opens door b.

Once we have selected a door, Monty has a choice of only two doors because he is obviously not going to open the door we have selected, and unconditionally he is equally likely to open either of them. So if, say, we select door A, then P(b) = P(c) = 1/2, the unconditional probability that Monty opens that particular door.

So let’s say we choose door A initially. Remember we do not know what is behind any of the doors – but Monty knows. Monty will now open door b or c. Let’s say he opens door b. We now have to decide if we want to stick with door A or switch our choice to door C. Let’s use Bayes’ Theorem to work out the probability that the car is behind door A.

P(A|b) is the probability that the car is behind door A given Monty opens door b – this is what we want to compute, i.e. the probability of winning if we stick with door A

P(b|A) is the probability Monty will open door b given the car is behind door A. This probability is 1/2. Think about it, if Monty knows the car is behind door A, and we have selected door A, then he can choose to open door b or door c with equal probability of 1/2

P(A), the unconditional probability that the car is behind door A, is equal to 1/3

P(b), the unconditional probability that Monty opens door b, is equal to 1/2

Now we can write out the full equation:

P(A|b) = P(b|A) * P(A) / P(b) = (1/2) * (1/3) / (1/2) = 1/3

Hmmm, my intuition said 50:50 but the math says I only have a 1/3 chance of winning if I stick with door A. But that means I have a 2/3 chance of winning if I switch to door C. Let’s work it out and see.

P(C|b) is the probability that the car is behind door C given Monty opens door b – this is what we want to compute, i.e. the probability of winning if we switch to door C

P(b|C) is the probability Monty will open door b given the car is behind door C. This probability is 1. Think about it, if Monty knows the car is behind door C, and we have selected door A, then he has no choice but to open door b

P(C), the unconditional probability that the car is behind door C, is equal to 1/3

P(b), the unconditional probability that Monty opens door b, is equal to 1/2

Now we can write out the full equation:

P(C|b) = P(b|C) * P(C) / P(b) = 1 * (1/3) / (1/2) = 2/3

There it is, we have a 2/3 chance of winning if we switch to door C and only a 1/3 chance if we stick with door A.
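Before we move on to simulation, here is a tiny R snippet (variable names are my own) that simply plugs the same numbers into Bayes' Theorem, in case you want to check the arithmetic:

# We picked door A and Monty opened door b
p_A <- 1/3          # prior probability the car is behind door A
p_C <- 1/3          # prior probability the car is behind door C
p_b <- 1/2          # unconditional probability Monty opens door b
p_b_given_A <- 1/2  # car behind A: Monty picks b or c at random
p_b_given_C <- 1    # car behind C: Monty is forced to open b

p_b_given_A * p_A / p_b  # P(A|b) = 1/3, chance of winning if we stick
p_b_given_C * p_C / p_b  # P(C|b) = 2/3, chance of winning if we switch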

Method 2: Write code to randomly simulate the problem many times

Bayes’ Rule is itself not the most intuitive formula, so maybe we are still not satisfied with the answer. We can simulate the problem in R – grab my R code here to reproduce this graphic. By simulate I mean replay the game randomly many times and compare the sticking strategy with the switching strategy. Look at the results in the animation below and notice how, as the number of iterations increases, the probability of success converges on 1/3 if we stick with our first choice every time and on 2/3 if we switch every time.
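If you would rather see the idea than download the linked code, here is a minimal sketch of that kind of simulation (my own stripped-down version, not the code behind the animation):

# Simulate one game of Monty Hall and return TRUE if we win the car
play <- function(switch_choice) {
  doors <- 1:3
  car   <- sample(doors, 1)                 # car is hidden at random
  pick  <- sample(doors, 1)                 # our first pick
  openable <- setdiff(doors, c(pick, car))  # doors Monty is allowed to open
  opened <- if (length(openable) == 1) openable else sample(openable, 1)
  if (switch_choice) pick <- setdiff(doors, c(pick, opened))
  pick == car
}

set.seed(42)
mean(replicate(10000, play(switch_choice = FALSE)))  # always stick: ~1/3
mean(replicate(10000, play(switch_choice = TRUE)))   # always switch: ~2/3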

 

When we simulate the problem many times, we see the two strategies (always stick vs always switch) converge on 1/3 and 2/3 respectively, just as we had calculated using Bayes’ Theorem

Simulating a problem like this is a great way of verifying your math. Or sometimes, if you’re stuck in a rut and struggling with the math, you can simulate the problem first and then work backwards towards an understanding of the math. It’s important to have both tools, math/statistics and the ability to code, in your data science arsenal.

Method 3: Stop and think before Monty distracts you

Ok, let’s say we’re still not happy. We’re shaking our head, it does not fit with our System 1 thinking and we need a little extra juice to help our System 2 thinking over the line. Forget the math, forget the code, think of it like this:

You have selected one of three doors. You know that Monty is about to open one of the two remaining doors to show you a goat. Before Monty does this, ask yourself: which would you rather have, the one door you have selected, or both of the two remaining doors? Yes, both, because effectively that is your choice: stick with your first pick or take both of the other doors.

The Monty Hall problem can be reduced to this if we pause and think about the situation immediately before Monty opens a door to reveal a goat

Two doors or one, I know what I’d pick!

Parting thoughts

Coming at a problem from different angles (math, code, visualizations, etc.) can help us out of a mental rut and/or reassure us by verifying our solutions. On the flip side, even when we ourselves fully understand a solution, we often have to explain it to a client, a manager, a decision maker or a young colleague we are trying to teach. It is therefore always a valuable exercise to tackle a problem in various ways and to be comfortable explaining it from different angles. Don’t stop here: google the Monty Hall problem and you will find many other varied and interesting explanations.

Horrible statistical nomenclature and the aptly titled ‘Confusion Matrix’

Type 1 errors, type 2 errors, sensitivity, specificity, etc. As any undergrad knows, statistical nomenclature can give you a headache. I still see experts in the field get confused over the difference between a false positive rate and a false discovery rate. I’m here to tell you: don’t worry, it’s not your fault, we all struggle to remember these horribly named statistics.


Thankfully some heroes put together this comprehensive confusion matrix on Wikipedia for us all to use. I have simply mimicked their layout in a spreadsheet with all the formulas which you can grab and use as your own.

Hover over the cells to see the cell description in the comment box and, if you’re like me, reference this every time you need to compute these statistics just to be sure!
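If you prefer code to spreadsheets, here is a small R example that computes a few of the most commonly confused statistics from the four basic counts, following the standard Wikipedia definitions (the counts below are made up purely for illustration):

TP <- 90; FN <- 10; FP <- 30; TN <- 870   # true/false positives and negatives

sensitivity <- TP / (TP + FN)   # true positive rate, aka recall
specificity <- TN / (TN + FP)   # true negative rate
fpr         <- FP / (FP + TN)   # false positive rate = 1 - specificity
precision   <- TP / (TP + FP)   # positive predictive value
fdr         <- FP / (TP + FP)   # false discovery rate = 1 - precision

c(sensitivity = sensitivity, specificity = specificity,
  FPR = fpr, precision = precision, FDR = fdr)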

Localized real estate cost comparison

Comparing real estate costs across regions is a challenge because location has such a large impact on rent and operation & maintenance (O&M) costs. This large variance makes it difficult for organizations to compare costs fairly across regions.

“There are three things that matter in property: Location, location, location!” British property tycoon, Lord Harold Samuel

For example, imagine two federal agencies, each with 100 buildings spread across the US. Due to their respective missions, agency A has many offices in rural areas, while agency B has many downtown office locations in major US cities.

Agency B has higher rent costs than agency A. This cost difference is largely explained by location – agency B offices are typically in downtown locations whereas agency A offices are often in rural areas. To truly compare costs we need to control for location.

However, we cannot conclude from this picture that agency B is overspending on rent. We can only claim agency B is overspending if we can somehow control for the explanatory variable that is location.

Naïve solution: Filter to a particular location, e.g. county, city, zipcode, etc, and compare costs between federal agencies in that location only. For example we could compare rents between office buildings in downtown Raleigh, NC. This gives us a good comparison at a micro level but we lose the macro nationwide picture. Filtering through every region one by one to view the results is not a serious option when there are thousands of different locations.

I once worked with a client that had exactly this problem. Whenever an effort was made to compare costs between agencies, it was always possible (inevitable even) for agencies to claim geography as a legitimate excuse for apparent high costs. I came up with a novel approach for comparing costs at an overall national level while controlling for geographic variation in costs. Here is a snippet of some dummy data to demonstrate this example (full dummy data set available here):

Agency   Zip     Sqft_per_zip   Annual_Rent_per_zip ($/yr)
G        79101          8,192                       33,401
D        94101         24,351                       99,909
A        70801         17,076                       70,436
A        87701         25,294                      106,205
D        87701         16,505                       70,275
A        24000          3,465                       14,986

As usual I make the full dummy data set available here and you can access my R code here. The algorithm is described below in plain English, with a small code sketch after the list:

  1. For agency X, compute the summary statistic at the local level, i.e. cost per sqft in each zip code.
  2. Omit agency X from the data and compute the summary statistic again, i.e. cost per sqft for all other agencies except X in each zip code.
  3. Using the results from steps 1 and 2, compute the difference in cost in each zip code. This tells us agency X’s net spend vs other agencies in each zip code.
  4. Repeat steps 1 to 3 for all other agencies.
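Here is a minimal base R sketch of those four steps, assuming the dummy data has been read into a data frame called df with the columns Agency, Zip, Sqft_per_zip and Annual_Rent_per_zip as in the snippet above (this is my own stripped-down illustration, not the linked R code):

# Steps 1 to 3 for a single agency
net_spend_by_zip <- function(df, agency) {
  x      <- df[df$Agency == agency, ]   # step 1: agency X only
  others <- df[df$Agency != agency, ]   # step 2: everyone except agency X

  cost_x      <- tapply(x$Annual_Rent_per_zip, x$Zip, sum) /
                 tapply(x$Sqft_per_zip,        x$Zip, sum)
  cost_others <- tapply(others$Annual_Rent_per_zip, others$Zip, sum) /
                 tapply(others$Sqft_per_zip,        others$Zip, sum)

  zips <- intersect(names(cost_x), names(cost_others))  # zips where a comparison exists
  data.frame(Agency = agency, Zip = zips,
             Net_spend_per_sqft = cost_x[zips] - cost_others[zips])  # step 3
}

# Step 4: repeat for every agency
result <- do.call(rbind, lapply(unique(df$Agency), net_spend_by_zip, df = df))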

The visualization is key to the power of this method of cost comparison.

Screenshot from Tableau workbook. At a glance we can see Agency B is generally paying more than its neighbors in rent. And we can see which zip codes could be targeted for cost savings.

This plot could have been generated in R but my client liked the interactive dashboards available in Tableau so that is what we used. You can download Tableau Reader for free from here and then you can download my Tableau workbook from here. There is a lot of useful information in this graphic and here is a brief summary of what you are looking at:

The height of each bar represents the cost difference between what the agency pays and what neighboring agencies pay in the same zip code. If a bar height is greater than zero, the agency pays more than neighboring agencies for rent. If a bar height is less than zero, the agency pays less than neighboring agencies. If a bar has zero height, the agency is paying the same average price as its neighbors in that zip code.

There is useful summary information in the chart title. The first line indicates the total net cost difference paid by the agency across all zip codes. In the second title line, the net spend is put into context as a percentage of total agency rent costs. The third title line indicates the percentage of zip codes in which the agency is paying more than its neighbors – this reflects the crossover point on the chart, where the bars go from positive to negative.

There is a filter to select the agency of your choice, and a cost threshold filter can be applied to highlight (in orange) zip codes where the agency’s net spend is especially high. For example, a $1/sqft net spend in a zip code where the agency has 1 million sqft costs more overall than a $5/sqft net spend in a zip code where the agency has only 20,000 sqft.

The tool tip gives you additional detailed information on each zip code as you hover over each bar. In this screenshot zip code 16611 is highlighted for agency B.

At a glance we get a macro and micro picture of how an agency’s costs compare to its peers while controlling for location! This approach to localized cost comparison provided stakeholders with a powerful tool to identify which agencies are overspending and, moreover, in precisely which zip codes they are overspending the most.

Once again, the R code is available here, the data (note this is only simulated data) is here and the Tableau workbook is here. To view the Tableau workbook you’ll need Tableau Reader which is available for free download here.

 

The birthday problem

How many people would you need in a group before you could be confident that at least one pair in the group share the same birthday?

One day, back in Smurfit Business School, our statistics lecturer challenged us to a bet. He predicted, confidently (smugly even), that at least two of us shared a birthday. He bet us each the princely sum of €1. I glanced around me and I counted close to 40 students in the room. Being the savant that I am, I also know there are approximately 365 days in a year, and so I thought, you’re on! I mean, even allowing for some probability magic: 40 people, 365 days, this is free money!

I soon learned this was the famous birthday problem and, although I was beginning to feel cocky as we got halfway through my classmates’ birthdays, our teacher ultimately prevailed. It turns out that in a group of just 23 people the probability of a matching pair of birthdays is over 50%!

I hope this spreadsheet and the explanation below will help you understand why this is so.

  • We need at least 2 people to have any chance of having a matching pair. This is trivial. Person A has a birthday on any day. The probability of Person B matching is 1/365.
  • With 3 people, there are three possible matches: A matches B, A matches C or B matches C.
  • With 4 people there are 6 possible combinations (count the edges in the little diagram shown here). You might spot a pattern by now. In mathematics these are known as combinations. After a while counting manually becomes tedious but, thankfully, for any given number of people we can use the combination formula to see how many possible combinations exist – jump to column B in the spreadsheet for a closer look.
  • The probability of any one of these combinations being a matching pair is 1/365. Think of that like a bet: each individual combination is a bet with a 1/365 chance of winning. How many of these bets would we have to place to get at least one win?
    • Here’s a neat little probability trick for answering an “at least” type question. Compute the probability of not winning at all, i.e. precisely zero wins, and subtract that value from 1.*
  • Column C in the spreadsheet uses the binomial distribution formula to compute the probability of a specific number of wins from a given number of bets where each bet is independent and has an equal probability of success.
  • In our case we want to compute the probability of precisely zero wins and subtract this value from 1. This gives us the probability of at least one win – see the short R snippet after this list.
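Here is the same calculation in a few lines of R, mirroring the spreadsheet’s “independent bets” approach:

n_people <- 1:60
n_pairs  <- choose(n_people, 2)                           # possible pairs (column B)
p_match  <- 1 - dbinom(0, size = n_pairs, prob = 1/365)   # 1 - P(zero wins) (column C)

min(n_people[p_match > 0.5])       # 23: smallest group where the probability tops 50%
round(p_match[n_people == 40], 2)  # roughly 0.88 for a class of about 40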

Chart: the probability of at least one birthday match by group size

In the results, we can see that 23 is the magic number where the probability of at least one match exceeds 0.5. Remember there were close to 40 in my class so my teacher knew at a glance that his probability of finding at least one pair was close to 0.9 … and there were enough suckers in the room to cover his lunch!

 

* This little problem inversion trick can be generalized further to any occasion when we are faced with a difficult question. If you’re struggling, try inverting the question. Having difficulty predicting fraud? Maybe try predicting “not fraud”! It sounds trivial, silly even, but inverting a problem can get you out of a mental rut. For a famous example, see how statistician Abraham Wald used this technique to help the Allies win WW2.

Meetings: Making the most of a bad situation

I don’t like meetings. Too often they are a waste of my time. I think Dilbert agrees! Meetings are like pretend work: the full calendar and the flurry of activity give the illusion of productivity even though the output from many meetingmongers is low. But I must begrudgingly admit that meetings are a necessary evil in my workplace. At the very least we need to communicate progress (or lack thereof) and status to stakeholders. So if you must call or attend a meeting, and sometimes you must, here are some tips for a smoother ride:

  1. Focus on other people in the group, in particular the key stakeholders like your client or boss. Ask yourself what do they need to get out of this meeting rather than what do you need. Listen to them and if you communicate everything clearly the meeting might be cut short and you could save yourself the dreaded “follow-up meeting”.
  2. Agendas are important but we often don’t have time to create one. As a minimum state the purpose and outcomes for the meeting. This could be as short as one sentence each and if nothing else it will help you to focus. If someone else set the meeting without an agenda and you have no idea what the purpose and desired outcome are – be ballsy and politely ask them.
  3. Don’t assume people read attachments you send them before meetings. Do you read every attachment sent to you? Of course not! Be respectful and highlight the top three issues – interested parties can read further if they like.
  4. If attendees are new to the location give clear and concise instructions regarding parking, traffic, building layout, etc. This saves time for everyone. You don’t want people turning up late and flustered, disrupting proceedings and requiring a repeat of issues already discussed.
  5. A quick roll call for key attendees can be helpful and, if the group is new, a rapid icebreaker can help people to connect. But if it’s a recurring meeting, introductions can quickly become banal.
  6. Stay focused on the meeting outcome. It might even help to start from there and work backwards.
  7. A short meeting is a good meeting. Everyone is happy when a meeting finishes earlier than the stated end time. The reverse is also true. Give yourself a little buffer time – like airlines do!
  8. Try, really try hard, to not call a meeting unless necessary.

Extreme analytics: anomalies and outliers

Ask ten people how they define outliers (aka anomalies) and you’ll get ten different answers. It’s not that they are all wrong; it’s just that the term outlier can mean different things to different people in different contexts.

“A data point on a graph or in a set of results that is very much bigger or smaller than the next nearest data point.” from the Oxford English Dictionary.

Sometimes we want to detect outliers so we can remove them from our models and graphics. This does not mean we completely disregard the outliers. It means we set them aside for further investigation.

This simple regression model fits tighter when the outlying data point is removed. Outliers should be investigated and not just removed because they don’t fit the trend.

On other occasions we want to detect the outliers and nothing else, e.g. in fraud detection. Regardless of what analytics project we are engaged in, outliers are very important so we have to come up with some techniques for handling them. The surest way to identify an outlier is with subject matter expertise, e.g. if I am studying children under the age of five and one of them is 6 foot tall, I don’t need statistics to tell me that is an outlier!

So what? Data practitioners don’t always have the luxury of subject matter expertise, so we use heuristics instead. I will outline three simple univariate outlier detection methods and explain why I think the boxplot method outlined by NIST is the most robust of the three, even though it involves a little more work.

The three methods are listed below, followed by a small R sketch of each rule:

  1. Percentiles, e.g. flag values greater than 99th percentile.
  2. Standard deviations (SD), e.g. flag values more than 2*sd from the mean.
  3. Boxplot outer fence, e.g. flag values greater than the third quartile plus 3 times the interquartile range.
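As a rough illustration, the three rules can be written as small R helpers (flagging only the high side here, to match the salary examples below; this is a sketch rather than the exact code from the linked script):

flag_percentile <- function(x) x > quantile(x, 0.99)     # method 1
flag_sd         <- function(x) x > mean(x) + 2 * sd(x)   # method 2
flag_boxplot    <- function(x) {                         # method 3: boxplot outer fence
  q <- quantile(x, c(0.25, 0.75))
  x > q[2] + 3 * (q[2] - q[1])
}

# On normally distributed data the first two rules flag points from the main
# distribution, while the outer fence typically flags nothing
set.seed(1)
salaries <- rnorm(1000, mean = 50000, sd = 10000)
sapply(list(percentile = flag_percentile(salaries),
            sd         = flag_sd(salaries),
            boxplot    = flag_boxplot(salaries)), sum)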

I generated dummy graduate salary data with some select tweaks to see how well each of these methods performs under different data distribution scenarios.

Scenario 1: Normally distributed data

n = 1,000, mean = $50,000, sd = $10,000. Below is a summary of the data including the distribution of the data points and a density curve.

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   16040   43270   49600   49730   56160   81960
Scenario 1: Normally distributed data

Notice that the SD and percentile methods are too sensitive, i.e. they are flagging values that may be high but are nonetheless clearly part of the main distribution. This is an example of false positive outlier detection. The boxplot outer fence detects no outliers and this is accurate – we know there are no outliers because we generated this data as a normal distribution.

Scenario 2: Skewed data

Now let’s stick an outlier in there. Let’s imagine one graduate in the group struck it lucky and landed a big pay packet of $100k (maybe it’s his uncle’s company or maybe he’s really talented, who knows)!

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   16040   43300   49630   49800   56180  100000
Scenario 2: Skewed data

Once again we see the SD and percentile methods are too sensitive. The boxplot method works just right: it catches the one outlier we added and nothing from the main distribution.

Scenario 3: Even more skew

A handful of graduates came up with some awesome machine learning algorithm in their dissertation and they have been snapped up by Silicon Valley for close to $500k each!

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   16040   43330   49650   51960   56310  554200
Scenario 3: Even more skew

Now, with a handful more outliers, the SD threshold has moved so far to the right that it has surpassed our $100k friend from scenario 2. He is now a false negative for the SD method. The percentile method is still too sensitive, but the boxplot method is coming up goldilocks again.

Scenario 4: Percentile threshold on the move

In the first three scenarios the 99th percentile threshold hardly budged. Simply put, with n = 1,000 the 99th percentile sits at roughly the 10th highest value, and since we have only added 6 outliers, the 10th highest value is still in the main distribution. So let’s add 10 more outliers (for a total of 16) and see what happens to the percentile threshold.
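To see why, here is a quick illustration on data similar to scenario 1 (not the exact simulated data behind the charts):

set.seed(1)
salaries <- rnorm(1000, mean = 50000, sd = 10000)
quantile(salaries, 0.99)                       # baseline 99th percentile threshold
quantile(c(salaries, rep(500000, 6)), 0.99)    # 6 big outliers: the threshold hardly moves
quantile(c(salaries, rep(500000, 16)), 0.99)   # 16 outliers: it jumps right up to the outliers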

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   16040   43430   49830   54200   56630  554200
Scenario 4: Percentile threshold on the move

Boom. The 99th percentile threshold has jumped from being too sensitive (too many false positives) to a point where it is not sensitive enough and it is missing some outliers (false negatives). Notice once again how robust the boxplot method is to skewed data.

Closing comments

  • The boxplot method is no silver bullet. There are scenarios where it can miss, e.g. bimodal data can be troublesome no matter which method you choose. But in my experience the boxplot outer fence is a more robust method of univariate outlier detection than the other two conventional methods.
  • These methods are only good for catching univariate outliers. Scroll back up to the regression chart at the start of this section and note that the “outlier” is not really an outlier if we look at the x or y values alone. Detecting outliers in multidimensional space is trickier and will probably require more advanced analytical techniques.
  • The methods discussed here are useful heuristics for data practitioners but we must remind ourselves that the most powerful outlier detection method is often plain old human subject matter expertise and experience.

The R markdown script used to produce these examples and graphics is available for download from Google Drive here.