Blind Spots: Compartmentalizing

This is my contribution to the December blogging carnival on “blind spots”.

Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications each belief has for the others. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on moral issues are dramatically undermined or otherwise affected by EA considerations, especially moral uncertainty.

A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble.

Similarly, I think people selectively apply EA principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person’s identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.

Consider your personal views. I’ve certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I’ve learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180  – and I think this is true of many people:

  • Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?
  • Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?
  • Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?

Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead people to change their minds on issues where they had previously been very certain, and indeed emotionally involved. That they have not suggests we have been compartmentalizing.

Obviously we don’t need to apply EA principles to everything – we can probably continue to brush our teeth without much reflection. But we probably should apply them to issues which are seen as being very important: given the importance of the issues, any implications of EA ideas would probably be important implications.

Moral Uncertainty

In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea ‘maximise expected choice-worthiness’, and if you’re into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.

This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there’s only a 10% chance that animal welfare is morally significant – you’re pretty sure they’re tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as probability of its being correct means paying more respect to ‘minority’ theories.
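If you like to see this spelled out, here is a minimal sketch of the calculation in Python. The 10%/90% credences come from the example above; the choice-worthiness numbers themselves are purely illustrative placeholders, not figures from Will’s thesis.

```python
# A minimal sketch of 'maximise expected choice-worthiness' for the
# vegetarianism example. The credences (10% / 90%) are from the text;
# the choice-worthiness values are purely illustrative placeholders.

def expected_choiceworthiness(credences, values):
    """Weight each moral theory's verdict by our credence in that theory."""
    return sum(p * v for p, v in zip(credences, values))

credences = [0.10, 0.90]  # P(animal welfare matters), P(it doesn't)

options = {
    # verdicts: [if animal welfare matters, if it doesn't]
    "eat meat":       [-100.0, 1.0],  # very bad vs. a small culinary gain
    "don't eat meat": [0.0, 0.0],     # neutral under either theory
}

for option, values in options.items():
    print(option, expected_choiceworthiness(credences, values))
# Even at only 10% credence in animal welfare mattering, the large
# downside makes eating meat come out worse in expectation: -9.1 vs 0.0.
```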

And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premises, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.

One issue that Will touches on in his thesis is whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.

Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child1. The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it’s morally permissible, it’s merely permissible – it’s not obligatory. She follows the example from Normative Uncertainty and constructs the following table:

abortion table 1

In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.

However, Sarah might not consider this representation adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.2 She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take these preferences into account.

abortion table 2

Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credences: she finds the pro-choice arguments slightly more persuasive than the pro-life ones, so she assigns a 70% credence to abortion being morally permissible and a 30% credence to its being morally impermissible.

Looking at the table with these numbers in mind, intuitively it seems that again it’s not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah is unsatisfied with this unscientific comparison: it doesn’t seem to have much of a theoretical basis, and she distrusts appeals to intuition in cases like this. What is more, Sarah is something of a utilitarian; she doesn’t really believe in something being impermissible.

Fortunately, there’s a standard tool for making inter-personal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she’s at the end of her first trimester, if she doesn’t abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 in the US, for 77.126 expected QALYs. This calculation assigns no QALYs to the fetus’s 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.

We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which costs 0.494 QALYs per year, so let’s conservatively use that figure. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women’s Health Magazine gives the odds of maternal death during childbirth at 0.03% for 2013; we’ll round up to 0.05% to take into account the risk of non-fatal injury. Women at 25 have a remaining life expectancy of around 58 years, so that’s 0.05%*58 = 0.029 QALYs. In total that gives us an estimate of 0.276 QALYs. If the baby doesn’t survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.

Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they’re plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn’t.

abortion table 3

We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:

  • If she aborts the fetus, our expected QALYs are 70%*0 + 30%*(-77.126) = -23.138
  • If she carries the baby to term and puts it up for adoption, our expected QALYs are 70%*(-0.276) + 30%*(-0.276) = -0.276

Which again suggests that the moral thing to do is to not abort the baby. Indeed, the life expectancy is so long at birth that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
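For anyone who wants to check the arithmetic, here is a minimal sketch in Python that reproduces the numbers above. All the inputs are the figures quoted in the text; nothing here is a new empirical claim.

```python
# A sketch reproducing the QALY arithmetic above. All inputs are the
# figures quoted in the text; nothing here is a new empirical claim.

p_survive_to_birth = 0.98        # chance the fetus survives from end of first trimester
life_expectancy_at_birth = 78.7  # US life expectancy at birth, in years (~QALYs here)
fetus_qalys = p_survive_to_birth * life_expectancy_at_birth  # ~77.126

pregnancy_cost_per_year = 0.494  # upper-bound QALY loss per year of pregnancy
remaining_pregnancy = 0.5        # six months left
maternal_risk = 0.0005           # 0.05%, rounded up to cover non-fatal injury
remaining_life_expectancy = 58   # years, for a 25-year-old woman
cost_to_sarah = (pregnancy_cost_per_year * remaining_pregnancy
                 + maternal_risk * remaining_life_expectancy)  # ~0.276

credence_permissible = 0.70  # credence that the fetus is not morally significant
credence_significant = 0.30  # credence that the fetus is morally significant

# Expected QALYs of each option, relative to a zero baseline:
ev_abortion = credence_permissible * 0.0 + credence_significant * -fetus_qalys  # ~ -23.1
ev_adoption = -cost_to_sarah                                                    # ~ -0.276

print(f"abortion: {ev_abortion:.3f} QALYs, adoption: {ev_adoption:.3f} QALYs")
```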

Indeed, we can show just how confident in the lack of moral significance of the fetuses one would have to be to justify aborting one. Here is a sensitivity table, showing credence in moral significance of fetuses on the y axis, and the direct QALY cost of pregnancy on the x axis for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.

abortion table 4
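If you want to reproduce a table like this yourself, here is a rough sketch. The particular grid values are arbitrary illustrative choices, not necessarily the ones used for the chart above.

```python
# A sketch of the sensitivity table: net expected QALYs of aborting rather
# than carrying to term, varying credence in the fetus's moral significance
# (rows) and the direct QALY cost of pregnancy (columns). The grid values
# here are illustrative choices, not necessarily those used for the chart.

fetus_qalys = 77.126  # expected QALYs of the fetus, from the earlier calculation

credences = [0.01, 0.05, 0.10, 0.30, 0.50]   # P(fetus morally significant)
pregnancy_costs = [0.1, 0.25, 0.5, 0.75]     # QALYs the mother loses by carrying to term

print("credence \\ cost" + "".join(f"{c:>9}" for c in pregnancy_costs))
for p in credences:
    # Net benefit of aborting = pregnancy cost avoided - expected fetal QALYs lost.
    row = [cost - p * fetus_qalys for cost in pregnancy_costs]
    print(f"{p:>15}" + "".join(f"{v:9.2f}" for v in row))
# Even at 1% credence the expected fetal loss (~0.77 QALYs) exceeds most
# plausible pregnancy costs, so almost every cell comes out negative.
```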

Other EA concepts and their applications to this issue

Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we’re overlooking, it would be remiss not to give at least a broad overview of some of the others. Here, I don’t intend to judge how persuasive any given argument is – as we discussed above, this is a debate that has been going on without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not about their overall magnitudes.

Not really people

One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense ‘not really people’. In many ways this argument resembles the anti-animal-rights argument that animals are also ‘not really people’. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it’s also noteworthy that in general the two views seem to be mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an ‘expanding circle’ of moral concern. I’m skeptical of such an argument, but it seems clear that the larger your circle, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a “Defend the Weak; They’re morally valuable too” party faced off against an “Exploit the Weak; They just don’t count” party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.

Not people yet

A slightly different common argument is that while fetuses will eventually be people, they’re not people yet. Since they’re not people right now, we don’t have to pay any attention to their rights or welfare right now. Indeed, many people make short-sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don’t assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.

Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.

Replaceability

Another important EA idea is that of replaceability. Typically this arises in the context of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are correspondingly much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn’t make much difference, because their parents adjust their subsequent fertility.

The plausibility of this comes from the observation that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.

If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.

Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it as part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself was the only alternative, but given that adoption services are available, it does not seem to go through.

Autonomy

Sometimes people argue for the permissibility of abortion through autonomy arguments. “It is my body”, such an argument would go, “therefore I may do whatever I want with it.” To a certain extent this argument is addressed by pointing out that one’s bodily rights presumably do not extend to killing others, so if the anti-abortion side are correct, or even have a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely praiseworthy but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.

Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.

Deontology

An argument often used on the opposite side – that is, to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I’m not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.

I didn’t ask for this

Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If you did not intend to become pregnant – perhaps even took precautions to avoid becoming so – but nonetheless end up pregnant, you’re in some way not responsible for becoming pregnant. And since you’re not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.

However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would agree we have the obligation too. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.

Infanticide is okay too

A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you were one of those people, this particular argument would have little sway over you.

Moral Universalism

A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby’s QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.

This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn’t saving many lives a year.

I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.

May we discuss this?

Now that we’ve considered these arguments, it appears that applying general EA principles to the issue tends to make abortion look less morally permissible, though there were one or two exceptions. But there is also a second-order issue that we should perhaps address – is it permissible to discuss this issue at all?

Nothing to do with you

A frequently seen argument on this issue is to claim that the speaker has no right to opine on the issue. If it doesn’t personally affect you, you cannot discuss it – especially if you’re privileged. As many (a majority?) of EAs are male, and of the women many are not pregnant, this would curtail dramatically the ability of EAs to discuss abortion. This is not so much an argument on one side or other of the issue as an argument for silence.

Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many, many opinions on topics that don’t directly affect them:

  • EAs have opinions on disease in Africa, yet most have never been to Africa, and never will
  • EAs have opinions on (non-human) animal suffering, yet most are not non-human animals
  • EAs have opinions on the far future, yet live in the present

Indeed, EAs seem more qualified than most to comment on abortion – we all were once fetuses, and many of us will carry or father fetuses of our own. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.

Too controversial

We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I’m somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.

Note that the controversial nature of the issue is itself evidence against abortion’s moral permissibility, due to moral uncertainty.

However, the EA movement is no stranger to controversy.

  • There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.
  • There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.

Not worthy of discussion

Finally, another objection to discussing this is that it simply isn’t an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss it.

However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011 over 1 million babies were aborted in the US. I’ve seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it takes to save a fetus – it seems like an excellent example of low-hanging fruit for research.

Conclusion

People frequently compartmentalize their beliefs, and avoid addressing the implications each belief has for the others. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we considered the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.


  1. There doesn’t seem to be any neutral language one can use here, so I’m just going to switch back and forth between ‘fetus’ and ‘child’ or ‘baby’ in a vain attempt at terminological neutrality. 
  2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute. 

Let he who is without Science Denial cast the first stone

The Washington Post recently ran an article on how political affiliation and level of religious belief affect support for, or suspicion of, the scientific consensus on various subjects. In it they refer to research by Dan Kahan to argue/imply that opposition to science is primarily driven by conservative ideology.

For example, they have these three very attractive charts, showing that the difference between people of high and low religiosity is small compared to the difference between conservatives and liberals when it comes to global warming,

GlobalWarming

evolution,

Evolution

and Stem Cell research:
StemCell

However, as so often happens, their article on causes of political bias ends up displaying some pretty impressive political bias. Unsurprisingly, this bias tends to be flattering towards those who share their political beliefs, and damning of those who don’t.

Firstly, look at those charts again. When looking along the left-right axis, your eye is naturally drawn to compare the two extremes – to compare the most right-wing to the most left-wing (especially as the line is monotonic). You note the large difference in height between the leftmost data points and the rightmost, and compare it to the relatively small difference between the high and low religiosity lines. The former difference is bigger than the latter difference, so political opinions must be more important than religious ones.

… or so the chart leads us to believe. However, this is hugely deceptive. As you can see, there are 5 tick marks on the horizontal axis, the measure was created from questions using 5 and 7 options, and there are a very large number of little vertical lines. This means they’re using a relatively fine measure of political ideology: they differentiate moderate conservatives from ordinary conservatives from highly conservative people. By doing this, they increase how extreme the extremes are, which increases the vertical difference our eye is naturally drawn to. With religion, however, they only admit of two categories, high and low. Perhaps if they had disaggregated more, so the categories ranged from “More religious than the Hasidim” to “More atheist than Dawkins”, we would have seen more spread between those two lines. As it is, the charts suppress these differences, reducing the apparent effect of religiosity.
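To see how much the binning choice matters, here is a toy simulation: an attitude driven equally by ideology and religiosity, with ideology split into seven bins but religiosity into only two. All the numbers are made up; the point is only that the extreme-to-extreme ideology gap comes out roughly twice the size of the binary religiosity gap, even though the two drivers are equally strong.

```python
# A toy simulation of the binning point. The attitude is driven *equally*
# by ideology and religiosity, but ideology gets a 7-point scale while
# religiosity only gets a high/low split. All numbers are made up.
import random

random.seed(0)
n = 100_000
ideology = [random.gauss(0, 1) for _ in range(n)]
religiosity = [random.gauss(0, 1) for _ in range(n)]
attitude = [i + r + random.gauss(0, 1) for i, r in zip(ideology, religiosity)]

def mean(xs):
    return sum(xs) / len(xs)

# Seven ideology bins: compare the two extreme bins, as the eye does.
by_ideology = sorted(zip(ideology, attitude))
septile = n // 7
gap_ideology = (mean([a for _, a in by_ideology[-septile:]])
                - mean([a for _, a in by_ideology[:septile]]))

# Binary religiosity: compare the above-median and below-median halves.
by_religiosity = sorted(zip(religiosity, attitude))
gap_religiosity = (mean([a for _, a in by_religiosity[n // 2:]])
                   - mean([a for _, a in by_religiosity[:n // 2]]))

print(f"extreme-ideology gap:   {gap_ideology:.2f}")    # ~3.2
print(f"binary-religiosity gap: {gap_religiosity:.2f}") # ~1.6
```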

That’s not the only problem with the article. The climate change and evolution questions seem pretty good, but the stem cell question does not show what they think it does.

“All in all, do you favor or oppose federal funding for embryonic stem cell research”

Now, in general, opposing scientific research does seem like prima facie evidence that you’re in some sense anti-science. But not here! There are two other factors at play which confound the issue.

The first is that this is as much a moral issue as a scientific one. Thinking that stem cell research is immoral doesn’t necessarily mean you disagree with any of the scientific findings, due to the is-ought gap. In the same way that opposing Nazi research on cancer (which used a variety of immoral techniques) doesn’t mean you think their conclusions were factually wrong, you can think stem cell research is morally wrong but its conclusions factually correct. Or, to use a clarifying contemporary example, suppose the question instead asked,

“All in all, do you favor or oppose federal funding for methods of treating homosexuality”

My intuition, which I suspect you share, is that the line would slope in the opposite direction – lefties would be more opposed than righties. This wouldn’t necessarily be because they are anti-science – maybe they simply think we are better off not knowing how to treat homosexuality, or better off not even thinking about the possibility. This moral belief doesn’t, however, mean they disagree with conservatives and scientists on any factual issue.

But there is another, even bigger, problem with this question. It doesn’t just ask about the morality of stem cell research – it asks about federal funding for that research. Conservatives are well known for opposing federal funding of things in general. Yet by this question’s measure, consistently applying the conservative rule “oppose federal funding of things in general” suddenly counts as evidence of being anti-science. You would be branded anti-science by this question even if your thought process was

“I think the federal government is very bad at research – it will be inefficiently run, overly politicized, and poorly directed – so I don’t want it to mess up stem cell research. Stem cell research is far too useful and exciting to trust to the government.”

Yet surely such a person should be considered pro-science, not anti-science!

Indeed, it seems that overlooking this issue, and conflating opposition to the state with opposition to science, is a clear sign of political bias on the part of the author. They chose a question which almost by design proved conservatives were anti-science, not by actually measuring the truth, but by simply re-defining opposition to science to include the political opinions they oppose. David Friedman once wrote about something similar – a study which, while claiming to prove that right-wing people were authoritarians, really just defined authoritarianism as ‘respects right-wing authorities’.

Ok, so their choice of data visualization technique was perhaps misleading, and the stem cell funding question was awful. But the other two questions look pretty solid, right?

Perhaps not. It’s well known – or at least widely believed – that conservatives disproportionately disbelieve in evolution and global warming. So if you wanted to prove that conservatives were anti-science, you’d pick those two questions, confident that your prejudices would be confirmed.

Yet there is much more to science than evolution and global warming. There are many issues where there’s a scientific consensus at least as strong as that on global warming, yet some people still disagree. For example,

  1. Astrology is nonsense
  2. Lasers are **not** condensed sound waves
  3. The earth orbits the sun

In fact, I would say that science is far more unequivocal on these issues than on global warming – probably around as certain as that evolution is true.

Yet on all these issues, Republicans are more likely to hold the scientific view than Democrats. And there are many more similar examples. If I wanted to make the same charts, but make Democrats look bad, I could easily “prove” that Democrats are morons who believe the sun orbits the earth.

The Washington Post article does contain a homage to data:

But why opine on all this in an un-grounded way — we need data.

Unfortunately we need more than data – we also need rigorous statistical techniques.

It would be unfair to blame the original researcher. In his article, he also includes a chart on nuclear power, where conservatives have the more scientific view. Mysteriously, the chart that was flattering to conservatives doesn’t make it into the Washington Post article. Ironically, it turns out the Washington Post article was right – politics really is the mindkiller. It’s just hard to spot when you’re the one getting killed.

How to Pitch a Growth Stock – Cognitive Bias Edition

Growth investors focus on trying to pick the companies that will grow rapidly for many years to come, hoping to be rewarded by a commensurately increasing share price. This can be anyone from venture capitalists investing in tiny startups to enormous mutual funds betting on whether Twitter will continue to put up such strong growth. These companies tend to have high share prices compared to their current level of profitability, but have a good story and have grown rapidly in the past.

There are certain strategies that people use when trying to persuade someone to invest in a growth stock. This could be the startup team pitching to a VC fund or an analyst in a hedge fund pitching to his portfolio manager. I have more experience with the latter, so this will focus on companies with market caps above $100m.

Many of these strategies are intellectually illegitimate. As such, I intend this as a sort of ‘Defense Against the Dark Arts’ – how to spot people using these rhetorical strategies in the wild. Perhaps this will also help people employ these strategies – if so, I apologize to the world. I take the virtue of silence seriously – but in this specific instance I think sunlight is the best disinfectant – and hopefully it will make for an interesting blog post.

Choosing a growth stock to pitch

The first step is to choose the right stock. Pick something which has seen strong increases in share price over the last few years. A relatively smooth glide path up is best – don’t pick something that rose 20% in one day and has done nothing else. The goal here is to use the halo effect1 to make people confuse the historical share price movements with the company itself – to make it seem like the company itself actually has the property of steadily growing, rather than this just being a property of the history of the market’s valuation of the stock.

However, it can be a good idea to pick something which has very recently seen a sharp fall in the share price. This way, your PM won’t feel like they’ve “missed it” – they’ve got another chance to get in. Regret Avoidance is a powerful effect, and you save them from this. Plus, the recent sharp fall means they’re safe from being the guy who bought in at the peak. That guy will look very stupid, so they’re happy to be safe from his fate.

Of course, you need an explanation for why these price moves have happened. The steady rise is easy – the company is also steadily growing. The sharp fall is harder – people don’t want to invest in things that fall! – but there are some easy explanations on hand. ‘Hedge fund de-levering’ is always available as an excuse, and with any luck will act as a semantic stop-sign.

So far this has actually been pretty intellectually respectable. Or at least epistemically lucky – the first requirement, for steady share price growth, probably means the stock has strong momentum, which has historically been a strong predictor of returns (albeit with high kurtosis and negative skew, so beware!). The recent share price fall means the stocks will do well on short-term reversal, which has also historically been a good predictor. The next steps, however, are more dubious.

Total Addressable Market and the Conservation of Conservativeness

Having selected your company, the first step is to work out what the Total Addressable Market (TAM) for your stock is. Is it a household product? Take the number of households in America and multiply by the frequency of purchase. Is it a car? Look at the total number of cars. A better type of steel? Look at total US steel consumption. The key is to get a really really big number. If the number is insufficiently big, just look at a larger category of which the true market is a subset.

Next, multiply that number by their expected market share. As the company has been growing rapidly, it’s probably been expanding its market share of its current niche. So say you assume they’ll keep their current market share even as that niche grows into a major market. This assumption is conservative, you’ll say, because actually they have been growing their market share. This number should be reassuringly small – say 1%. Its small size will help reassure people that you’re being conservative. If you want to increase the total market they’ll eventually control, scope insensitivity means it’s easier to increase the TAM size than their market share. It’s obvious that 20% market share is a much more aggressive assumption than 2% (especially if their current market share is 2%), but not nearly so clear that $100000000000 is a more aggressive TAM estimate than $10000000000 – especially if you only present one of the numbers.

Next, assume a profit margin. If their current profit margin is high, just say you’ll conservatively assume no economies of scale. If their current margin is low, or they make no profits, just compare to vaguely similar companies and go for a slightly lower number. If ‘comparable’ mature companies have a 30% margin, say 20%. This sounds very conservative, but actually only reduces their profits by 33%.

Finally, assume a valuation multiple. The company is currently trading on a very high multiple, because the market is expecting rapid growth – maybe 30x earnings, or maybe 500x if you’re Amazon. So simply say you assume they’ll get a market multiple. Going from a 30x multiple to a market 15x multiple will cost you 50% of the valuation – but gain you a lot of apparent conservativeness.

The key principle here is the conservation of conservativeness. You want an estimate for them that is both very large and sounds conservative. To do this, you take advantage of scope insensitivity and arbitrage between the TAM stage and the company-specific stage. By making the company-specific stages (market share, profit margin, valuation) sufficiently conservative sounding, you can get away with an aggressive TAM estimate while keeping the whole thing sounding conservative. Scope-insensitivity means you can increase the TAM estimate at a lower cost of conservativeness than you can the company-specific elements, so there are gains from trade.

So once you’ve multiplied your TAM, market share, profit margin and valuation multiple, you come up with an estimate for what this company could be worth in the future. However, you now deny that this is an estimate. Instead, it’s just an idea of the size of the market – you don’t actually expect they’ll reach it. This explicit denial protects you against any accusations of over-optimism, but you’ve successfully primed your audience with a really high number. If market sentiment is a battle between greed and fear, you’ve helped the greed side.

And a crucial subtlety – that valuation that you didn’t make is what the stock might be worth in the future. Because of the time value of money, you would need to discount that back to get to a current valuation. Since it credibly might take 10 years for the market to mature, even with a moderate 10% discount rate your valuation should really take a 61% hit. But by denying it was a valuation, you’ve avoided this step.
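Here is a sketch of the arithmetic behind such a pitch. Every input is invented for illustration; the point is just how large a number survives a chain of ‘conservative’-sounding assumptions, and how much the quietly skipped discounting step matters.

```python
# A sketch of the pitch arithmetic. Every number here is invented for
# illustration; none of it describes a real company.

tam = 100e9             # an enormous 'total addressable market', in dollars
market_share = 0.02     # 'only' 2% -- sounds conservative
profit_margin = 0.20    # 'conservative' vs. 30% for comparable mature companies
earnings_multiple = 15  # 'just' a market multiple, down from 30x today

future_value = tam * market_share * profit_margin * earnings_multiple
print(f"implied future value: ${future_value / 1e9:.1f}bn")  # $6.0bn

# The step the pitch quietly skips: that value is perhaps ten years away,
# so it should be discounted back to today.
discount_rate = 0.10
years = 10
present_value = future_value / (1 + discount_rate) ** years
print(f"discounted to today:  ${present_value / 1e9:.1f}bn")  # ~$2.3bn, a ~61% haircut
```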

Downside Protection

The next step is to argue that the stock has “limited downside” or “downside protection”. This will reassure your audience that even if everything goes wrong, they won’t be fired. You’re trying to quieten their tendency towards fear, so that greed may reign.

Your goal here is to come up with a plausible story for why the worst-case scenario is 10% downside. 10% is just high enough that it sounds vaguely plausible, but low enough that it sounds reassuring.

There are a variety of ways of doing this. One is to name some assumptions you’re making, reduce them slightly, and claim that is the worst case. If the worst case looks pretty bad, just increase your original assumptions, so the haircut versions are higher.

Another is to look at some recent M&A in the sector, pick the most expensive deals, and argue that if the share price fell at all they’d be acquired on that valuation. There is always some expensive M&A going on, so this excuse is always available.

Finally, don’t forget to add that if the shares fell by this much, you’d think this represented a buying opportunity. This is totally misleading – the fact that the shares might be lower in the future is an argument for buying them then, not now, but it sounds level-headed and responsible. It also helps re-direct attention back to the optimistic forecasts of TAMs.

The actual valuation

Since you denied the TAM was actually a valuation, and the downside estimate was definitely the worst-case scenario, you haven’t technically done any valuation just yet. Since that is ostensibly what you’re doing, you should have a go at estimating fair value. Doing all this has probably taken a long time so far, so you won’t have much time left for this stage.

There is an easy way to do it though. Project a high rate of revenue and earnings growth for the next 2-3 years. Place a multiple on the 3-year-out numbers that is lower than the current multiple. Discount that value back, using a high discount rate, to arrive at a share price 30% above the current level.
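A sketch of that recipe, again with invented numbers:

```python
# A sketch of the 'actual valuation' recipe, with invented numbers.

current_price = 100.0
current_eps = 2.0      # earnings per share today (so the stock trades on 50x)
growth_rate = 0.45     # 'project a high rate of revenue and earnings growth'
years = 3
exit_multiple = 30     # lower than today's 50x, so it sounds prudent
discount_rate = 0.12   # a 'high' discount rate, for apparent conservativeness

eps_in_3y = current_eps * (1 + growth_rate) ** years      # ~6.10
value_in_3y = eps_in_3y * exit_multiple                   # ~183
fair_value = value_in_3y / (1 + discount_rate) ** years   # ~130

upside = fair_value / current_price - 1
print(f"'fair value' {fair_value:.0f} vs price {current_price:.0f}: {upside:.0%} upside")
```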

Fending off criticisms

Since you know more about the stock than your audience does, you’ll probably be able to come up with a counterargument to any specific concern they mention, and they won’t be able to judge the issue. Any major concerns you have, you can simply omit to mention.

If people ask you about execution risk, agree that it’s a risk, but then explain that’s why you used such a high discount rate. This suffices to rebut their attack. This is totally illegitimate. The discount rate (and implicit Equity Risk Premium) compensates investors for the variance of expected future earnings/dividends. It does not compensate you for lower expected earnings/dividends. Their concern is for the latter – your model implicitly assumes everything goes perfectly, whereas a true calculation of expected profits would include probabilities of lower performance. Your attacker will almost certainly not appreciate this fact. If by some misfortune they do, explain in an exasperated manner that you’ve already covered the downside case, and then change the subject.

As a last resort, simply utter the sacred words: “high risk, high reward.” People treat this utterance like magic – we want high rewards! Of course, it’s also nonsense. The phrase should be “high risk, high expected reward.” There is no guarantee you will get the reward! That is why it is ‘high risk’. But as people’s intuitive understanding of risk is terrible, you can safely abuse the phrase, just like everyone else.

Conclusion

So there you have it – how to pitch a growth stock. Or hopefully, how to spot the intellectually dishonest manoeuvres that are frequently used in growth investing.


  1. At least I think Halo effect is the right one. It also seems to have something to do with the Fundamental Attribution Error 

How Politics Makes Vox Stupid

Vox had an excellent article a while ago on how politics makes us stupid. It describes a number of ways in which people behave systematically irrationally about politics.

For example, there is research suggesting that showing people more evidence makes them hold their existing beliefs more firmly – regardless of whether the evidence supported or contradicted their beliefs. It talks about how people avoid evidence that threatens their self-identity.

But Vox made one big mistake with the article. When writing apolitical pieces, designed to reach across party lines and improve the state of political rationality, there is one rule you must always obey. Failing to observe this rule will lead to one side rejecting you and the other side failing to learn the lesson at hand. Failure to observe this rule leads to mindkilling, and Moloch.

The rule is:

If you use a political example from one side, you must use an equal and opposite example from the other side.

If you’re writing an article on irrationality in politics, and you have an example of Republicans being irrational, you need to have an equally important example of Democrats being irrational, with the same emotional salience, and the same amount of pagespace dedicated to it.

Vox totally violates this rule. And it does so in the predictable direction. It’s a left-wing site in general, and its specific examples of irrational behaviour (apart from those lifted from papers) are:

  • Climate change ‘denialists’
  • Sean Hannity (a conservative commentator)
  • Fox News
  • Antonin Scalia

Every single example of a person or a group they used was right-wing. Did they notice this? If not, then they need to do some serious work on their own bias. If they did, they have done their left-wing readers a great disservice.

The point of learning about biases isn’t to gain a new weapon with which to attack others. The point is to turn the knife upon yourself and cut the cancer from your own mind.