Have You Ever Tried Cocaine?


For this article, you will need a coin. If you can’t find one, Google has an online coin flipper.

Before you read the article, complete the following survey question. As you will see, the method we use keeps your response completely private: regardless of what you click, we cannot tell whether or not you have done the thing in question.

Please follow the instructions exactly. This is the only way we can ensure the survey is accurate and your privacy is protected.

Flip your coin once and only once. If it lands heads, click the first option regardless of the truth. If it lands tails, answer truthfully. That way, we have no idea whether you clicked the first option because you flipped heads or because you flipped tails and have tried cocaine.

We will come back and have a look at this later.


What percentage of people have tried cocaine? How widespread is doping in sport? What proportion of adults have cheated on their partner? How prevalent is cheating on university exams?

If you're like us, you probably find questions like these fascinating; they draw out a childish curiosity deep within all of us. The more outrageous the question, the more enticing the answer.

But more importantly, these questions hold immense value for businesses, governments and charities. The list of unethical and disruptive behaviours by employees and executives is endless. Think employee shirking, sexual harassment in the workplace or even insider trading; all have serious ramifications for organisations and people. Understanding how widespread these behaviours are is key to developing efficient and safe workplaces. And if you're a government, the answers can deeply inform policy decisions.

The problem

The thing is, the actions and beliefs underlying these questions are highly personal, making the answers incredibly hard to tease out.

Say you have cheated on your partner. If a surveyor stopped you on the street and asked, what would you say? You would probably say "no", even though you don't know the person asking the question. The embarrassment of admission and the worry that your answer could be publicised are both reason enough to lie.

So, if we want to know the answers to sensitive questions, what can we do?

The trick

We use something called a Randomised Response Technique (RRT). It is a neat way to, in theory, incentivise people to tell the truth when asked sensitive and intrusive questions. An RRT makes each answer completely private: it can't be deduced from your answer whether or not you are guilty of the act in question. Yet the organisers of the survey can still work out the proportion of respondents who have done the taboo act.

Cool, huh?

RRTs take many forms. However, they all ask respondents to conduct a random experiment with known probabilities to manipulate either the question they answer or the answer they give.

Say we want to ask the question "have you ever cheated on your partner?". An RRT survey could run as follows. The interviewer will ask a respondent to flip a coin, out of sight. If it's heads, they answer "yes" regardless of whether or not that is true. If it's tails, they answer truthfully. That way, those who have cheated on their partner won't be the only ones saying "yes", meaning they are more likely to tell the truth.

In this example, we expect 50% of respondents to flip heads and say "yes" regardless of the truth. The remaining 50%, who flipped tails, are the people we care about. So, if 60% of respondents said "yes", the extra 10% above the expected 50% must be truthful cheaters from the tails group, meaning 10%/50% = 20% of respondents have cheated on their partner.
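To make the mechanics concrete, here is a minimal Python sketch (our own illustration, not from any paper) that simulates this coin-flip RRT with perfectly honest respondents and inverts the observed "yes" rate to recover the prevalence:

```python
import random

def forced_response_estimate(true_prevalence, n=100_000, seed=0):
    """Simulate the coin-flip RRT: heads forces a "yes", tails means truth."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        guilty = rng.random() < true_prevalence  # has this respondent cheated?
        heads = rng.random() < 0.5               # private, fair coin flip
        if heads or guilty:                      # forced "yes" or truthful "yes"
            yes += 1
    yes_rate = yes / n
    # For honest respondents, P(yes) = 0.5 + 0.5 * p; invert to recover p.
    return (yes_rate - 0.5) / 0.5

print(forced_response_estimate(0.20))  # prints roughly 0.20
```

With a true prevalence of 20%, the observed "yes" rate hovers around 60%, exactly the worked example above.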

The paper

'When and why randomized response techniques (fail to) elicit the truth', written by academics from Harvard, Carnegie Mellon and Bocconi University, delves deeply into the field of RRTs. It focuses on the conditions under which RRTs perform well and the types of RRTs that perform the best.

By conducting nine different studies (surveys), they compare RRTs against direct questioning (DQ: asking the question straight), using the 'more is better' principle to evaluate effectiveness. No one wants to admit to these behaviours, so the higher the prevalence estimate a technique yields, the better it must be at getting respondents to tell the truth (hence 'more is better').

Why do RRTs often fail?

The paper finds RRTs like the one we described often perform worse than DQ. The problem comes down to fear of misinterpretation. If you aren't guilty of the action in question but flip heads and are therefore forced to say "yes", you might fear your answer will be interpreted as an admission. So, you are incentivised to pretend you flipped tails and answer "no" truthfully. At the extreme, this can lead to nonsensical negative prevalence estimates.

Think about the cheating example we gave earlier. If only 40% of people answered "yes", then our estimate for the proportion of respondents who have cheated on their partner is (40% - 50%)/50% = -20%. This phenomenon is reflected in the paper's results for the question "have you ever had sexual desires for a minor?". The DQ estimate is 20% — shocking, we know — whilst the RRT estimate is -21%. The obvious explanation is that fear of misinterpretation caused many innocent people to cheat the survey.

They test this hypothesis by comparing RRTs with DQ for questions about both socially undesirable and desirable behaviours. If misinterpretation is the reason for lower RRT prevalence estimates, the discrepancy should disappear when asking about socially desirable behaviours.

As expected, although RRT estimates are lower than DQ estimates for undesirable behaviours, they are about the same for desirable behaviours. On top of this, when a neutral behaviour is framed as undesirable, DQ performs far better than RRTs. However, when that same behaviour is framed as desirable, RRTs actually perform better.

In fact, they found that those who were forced to say "yes" were far more worried about misinterpretation than those who had to answer truthfully.

It is clear from these results that fear of misinterpretation is RRTs' biggest problem.

How can we fix this?

The authors of the paper conduct an experiment in which they ask respondents the question "have you ever cheated on a relationship partner?". All respondents are asked to flip a coin, just like the RRT we described initially. However, while half the respondents choose between the usual "yes" and "no", the other half are given the options "yes/flipped heads" and "no".

The idea is that innocent respondents should feel less incriminated by a forced response, since the option itself makes clear that a "yes" could simply mean the respondent flipped heads. As you have no doubt already realised, this is the format we used in the survey at the start of this article.

In this case, DQ gave a prevalence estimate of 25.4%. The normal "yes" RRT gave a nonsensical estimate of -21%. Amazingly, the revised "yes/flipped heads" RRT gave a prevalence estimate of 30%, higher than both the normal RRT and DQ.

What are some other methods?

The paper only talks about one type of RRT — forced response RRTs. In fact, there is another, lesser-used type — forced question RRTs. These get respondents to conduct a private experiment with non-50:50 outcomes, and then give them different questions based on the outcome. For example, they may ask you to flip a coin twice. If it comes up heads both times, you answer the question "have you ever cheated on your partner?". If the result is anything else, you answer the question "have you never cheated on your partner?". This way, the misinterpretation effect is far less obvious (a keen eye will notice it could still have an impact, because one question is more likely than the other).
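Assuming honest respondents, the prevalence can be recovered from this forced-question variant in the same spirit as before. The sketch below is purely our own illustration; the parameter p_direct (25% for two heads) is the probability of being assigned the direct question:

```python
def forced_question_estimate(yes_rate, p_direct=0.25):
    """Invert the forced-question RRT from the double-coin-flip example.

    With probability p_direct (two heads), the respondent answers the
    direct question; otherwise they answer its negation. For honest
    respondents, P(yes) = p_direct * p + (1 - p_direct) * (1 - p),
    where p is the true prevalence; solve for p.
    """
    return (yes_rate - (1 - p_direct)) / (2 * p_direct - 1)

# If the true prevalence were 20%, we would observe a 65% "yes" rate:
print(forced_question_estimate(0.65))  # prints roughly 0.20
```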

There is another technique called the Item Count Technique (ICT). It gives respondents a list of benign behaviours (taking iron supplements, Panadol, fish-oil tablets etc.) plus one undesirable behaviour (taking marijuana) and asks them to report only the number of behaviours they have engaged in, not which ones. By comparing the average count with that of a control group given only the benign list, you can back out a prevalence estimate for the undesirable behaviour. Analyses show that ICTs perform similarly to, if not better than, DQ.
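Under the same honest-respondent assumption, the ICT estimate is just the difference in mean counts between the two groups. A minimal sketch with made-up data:

```python
def ict_estimate(treatment_counts, control_counts):
    """Item Count Technique: the treatment list contains one extra,
    sensitive item, so the gap in mean counts estimates its prevalence."""
    mean_t = sum(treatment_counts) / len(treatment_counts)
    mean_c = sum(control_counts) / len(control_counts)
    return mean_t - mean_c

# Hypothetical counts of listed behaviours reported by each respondent:
print(ict_estimate([2, 3, 1, 2, 3], [2, 2, 1, 2, 2]))  # roughly 0.4, i.e. 40%
```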

Finally, we have the cross-wise technique. It asks respondents both a benign and a sensitive question, and they report only whether their answers to the two questions are the same or different. There is some evidence that these methods yield higher prevalence estimates than DQ. The problem is that many respondents do not even realise a prevalence estimate can be derived from their answers, which poses an ethical issue for the technique.
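Recovering a prevalence from the cross-wise design requires the benign question to have a known prevalence that is not 50% (a classic choice is a birthday falling in a particular quarter, roughly 25%). Assuming that, a sketch of the inversion:

```python
def crosswise_estimate(same_rate, benign_prevalence):
    """Cross-wise technique: respondents report only whether their answers
    to a benign and a sensitive question match. With a known benign
    prevalence q, P(same) = p*q + (1-p)*(1-q); invert to recover p.
    Needs q != 0.5, otherwise the report carries no information."""
    q = benign_prevalence
    return (same_rate + q - 1) / (2 * q - 1)

# Benign question with q = 25%; a 65% "same" rate implies 20% prevalence.
print(crosswise_estimate(0.65, 0.25))  # prints roughly 0.20
```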

The survey results

Let’s take a look at the results of the survey you completed earlier. Subtract 50 from the percentage of people who clicked the first option. Then, divide that by 50. That is the percentage of people who have tried cocaine. Is it more or less than you thought?

Now, we think we can do even better. To do so, we need you to answer one more RRT question below.

Flip a coin just like you did before. If it's heads, select the first option. If it's tails, answer the question truthfully. Keep in mind, your answer to this question is still completely private; it also gives us no information about your real answer to the cocaine question.

So, how does this impact our estimate?

First, use the same logic as before to figure out the percentage of people who lied in the first survey. Then, add that figure to the percentage of people who clicked the first option in the first survey. This gives the percentage of people who should have clicked the first option. Use that figure to compute an even better estimate of the proportion who have tried cocaine.

For example, if 60% of people clicked the first option in the cocaine question, and 55% clicked the first option in this 'survey cheating' question, then (55%-50%)/50% = 10% of people lied, so 60% + 10% = 70% of people should have clicked the first option in the cocaine question. A better estimate is therefore that (70%-50%)/50% = 40% of people have tried cocaine, not 10%/50% = 20%.
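For the code-inclined, here is a small sketch (ours, not from the paper) that applies this two-step correction to the hypothetical rates in the example above:

```python
def corrected_prevalence(first_yes_rate, cheat_yes_rate):
    """Apply the two-step correction from the worked example.

    First recover the share of respondents who lied in the first survey,
    then add them back to the first option's click rate before inverting
    the usual P(yes) = 0.5 + 0.5 * p relationship."""
    liar_rate = (cheat_yes_rate - 0.5) / 0.5      # e.g. (55% - 50%) / 50%
    adjusted_yes = first_yes_rate + liar_rate     # who should have clicked it
    return (adjusted_yes - 0.5) / 0.5

print(corrected_prevalence(0.60, 0.55))  # prints roughly 0.40
```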

Let’s find out some more about ourselves

Here are some more interesting questions. We will leave the maths up to you.


Henry Munns

Co-Founder, Editor-in-Chief, Director of Content
