Michelle Layser (Illinois) reviews a new work by Joshua D. Blank (UC-Irvine) and Leigh Osofsky (North Carolina), Legal Calculators and the Tax System, 15 Ohio St. Tech. L.J. ___ (2019).
The IRS has long attempted to aid wary taxpayers by publishing informal guidance that translates tax laws into more understandable statements. In previous work, Professors Joshua Blank and Leigh Osofsky have argued that such plain language guidance often oversimplifies complicated tax laws, opening the door to errors. They have called this characteristic “simplexity.” In their newest article on the subject, Blank and Osofsky identify another—potentially more serious—example of tax guidance that reflects simplexity: automated legal calculators like the IRS’s Interactive Tax Assistant.
In the context of tax compliance, legal calculators are essentially automated tax advisors: algorithms programmed to perform mathematical calculations and to determine taxpayers’ legal consequences. It sounds technical, but anyone who has ever used TurboTax is familiar with the basic concept. The legal calculator “asks” the user questions about their profile and economic activities, and then generates advice about what income might be taxable, what deductions or credits may be available, whether it makes sense for a taxpayer to itemize, and so forth.
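To make the mechanics concrete, here is a minimal sketch of that interactive pattern, written in Python purely for illustration; the questions and the single rule are hypothetical simplifications of my own, not the actual logic of the Interactive Tax Assistant or any commercial product:

```python
# A minimal, purely illustrative sketch of an interactive "legal calculator."
# The questions and the rule below are hypothetical simplifications of my
# own, not the actual decision logic of any IRS or commercial tool.

def ask(question: str) -> bool:
    """Pose a yes/no question and return the user's answer as a boolean."""
    return input(f"{question} (y/n): ").strip().lower().startswith("y")

def run_calculator() -> None:
    # Step 1: collect a simplified profile through interactive Q&A.
    paid_dental_expense = ask("Did you pay a dental expense this year?")
    plans_to_itemize = ask("Do you plan to itemize deductions?")

    # Step 2: map the simplified inputs to a simplified output.
    if paid_dental_expense and plans_to_itemize:
        advice = "Your expense may be a qualified deductible expense."
    else:
        advice = "Your expense does not appear to be deductible."

    # Step 3: state the conclusion in the second person.
    print(advice)

if __name__ == "__main__":
    run_calculator()
```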
While TurboTax online has been around since 1999, the IRS is a relative newcomer to the automated tax assistance scene, introducing its Interactive Tax Assistant in 2011. If there is a risk that taxpayers may place too much trust in the advice generated by private market legal calculators, then it stands to reason that the risk is even higher when the legal calculator is provided by the IRS itself. For me, this basic intuition is what makes the authors’ findings so immediately concerning.
Blank and Osofsky have identified concrete examples in which the Interactive Tax Assistant produces guidance plagued with simplexity. For example, they describe a hypothetical actor who has surgery to replace his own teeth (which are apparently ugly by Hollywood standards, but are otherwise fine) with more beautiful artificial teeth. They show that, on these facts, the Interactive Tax Assistant merely “asked the actor for a simplified input (artificial teeth?) and provided the actor with a simplified output (artificial teeth are deductible).”
In an attempt to keep things simple, the legal calculator neglected to ask whether the surgery was cosmetic (it was), whether it was necessary to replace teeth deformed due to a congenital abnormality (no), whether it was necessary to meaningfully promote the proper function of the body (no again), or any other question relevant to determining deductibility. In fact, the actor’s expenses probably were not deductible on these facts. This is simplexity in a nutshell, and it can lead the IRS’s own tools to generate the wrong legal conclusion.
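To see the gap in code form (again, a hypothetical sketch of my own; the extra parameters simply paraphrase the questions just listed), compare the one-question rule to a fuller decision tree applied to the actor’s facts:

```python
# Hypothetical contrast between a "simplexity" rule and a fuller one.
# Neither function is the IRS's actual code; the extra parameters simply
# paraphrase the questions the Interactive Tax Assistant never asked.

def simplified_rule(has_artificial_teeth: bool) -> bool:
    # Simplified input -> simplified output: "artificial teeth are deductible."
    return has_artificial_teeth

def fuller_rule(has_artificial_teeth: bool,
                is_cosmetic: bool,
                corrects_congenital_abnormality: bool,
                promotes_proper_body_function: bool) -> bool:
    if not has_artificial_teeth:
        return False
    # A cosmetic procedure is generally not deductible unless it corrects a
    # deformity or meaningfully promotes the proper function of the body.
    if is_cosmetic:
        return corrects_congenital_abnormality or promotes_proper_body_function
    return True

# The actor's facts from the article's hypothetical:
print(simplified_rule(True))   # True: the tool says "deductible"
print(fuller_rule(True,
                  is_cosmetic=True,
                  corrects_congenital_abnormality=False,
                  promotes_proper_body_function=False))   # False: not deductible
```

On the same facts, the simplified rule says “deductible” while the fuller rule says “not deductible,” which is exactly the divergence the authors describe.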
Though informal written guidance often suffers from the same simplexity problems, the authors argue that the potential harm is even greater in the context of the Interactive Tax Assistant due to its personalized nature. Unlike guidance written to the masses, which may leave a taxpayer wondering whether it really applies to her, the automated tool states clearly: “Your artificial teeth expenses are a qualified deductible expense.” The authors argue that the personalized approach—which begins with interactive Q&A and ends with conclusions stated in the second person (“your expenses”)—not only increases the likelihood that taxpayers will rely on the advice, but also reduces their incentive to seek advice from human advisors.
This article is merely a preview of the authors’ research on simplexity in the context of legal calculators, and they identify four questions that are ripe for investigation. The first is whether the government should use legal calculators at all, or whether the IRS should instead increase the availability of human tax advisors. Here, it would be interesting to learn how the human error rate compares to that of machines, and whom those errors tend to benefit (the government or taxpayers?).
It would also be helpful to know to what extent human customer service representatives rely on computer software to aid in their advising. If they rely heavily on computer-generated scripts when assisting taxpayers, then the simplexity problem would not disappear; it would merely be less observable. A second and related question, raised by the authors, is whether there is sufficient oversight, accountability, and transparency associated with the IRS’s use of legal calculators. Here, too, it would be useful to know how legal calculators compare to human customer service teams with respect to these metrics.
Third, to what extent should taxpayers be entitled to rely on the advice generated by legal calculators? (The IRS, for its part, has indicated that taxpayers cannot rely on such guidance, but I suspect this would surprise a lot of users.) The answer to this question may be especially important for low-income taxpayers, and particularly those who hope to claim the earned income tax credit. Such taxpayers lack the means to hire paid human advisors, who could provide more sophisticated advice, yet they face a particularly complex set of legal rules. They also tend to be the targets of audits and serious penalties in instances of noncompliance. Legal calculators could benefit such taxpayers, but they may also expose them to greater risk than is associated with in-person tax clinics.
Finally, what should be the default position of algorithms in cases of unsettled law? It seems clear that in at least some cases, the simplexity described in this article is inexcusable. In the illustration above, there is little reason why the IRS could not design an algorithm that asks all the relevant questions. Assuming the IRS has sufficient human resources behind the scenes (admittedly, a large assumption), the IRS could simply program the algorithms to incorporate the full range of exceptions and nuance provided for in formal guidance.
But even if the algorithms were so robust, the problem of simplexity would not go away. Indeed, the most complex aspect of tax law is not wading through the technical details; it is interpreting tax law that is fundamentally unclear or unsettled. Normally, this is where human judgment comes into play. In the case of legal calculators, the algorithms alone must generate a conclusion—and they must be told whether to favor pro-government or pro-taxpayer interpretations.
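One way to picture that design choice (a hypothetical sketch of my own, not anything the authors or the IRS propose) is as an explicit default posture that the algorithm falls back on whenever the law does not settle the question:

```python
# A hypothetical sketch of a default posture for unsettled legal questions.
# Nothing here reflects actual IRS design; it only illustrates that an
# algorithm must be told in advance how to resolve genuine legal uncertainty.
from enum import Enum
from typing import Optional

class Posture(Enum):
    PRO_GOVERNMENT = "pro_government"
    PRO_TAXPAYER = "pro_taxpayer"

def resolve(settled_answer: Optional[bool], posture: Posture) -> bool:
    """Return True if the item is treated as deductible."""
    if settled_answer is not None:
        return settled_answer      # the law answers the question directly
    # Unsettled law: the programmed default, not human judgment, decides.
    return posture is Posture.PRO_TAXPAYER

print(resolve(None, Posture.PRO_GOVERNMENT))  # False: resolved against the taxpayer
print(resolve(None, Posture.PRO_TAXPAYER))    # True: resolved in the taxpayer's favor
```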
I look forward to reading the authors’ future work on this important subject, which will only become more significant as machines and algorithms become increasingly prevalent. This research should interest any tax scholar working on tax compliance, tax procedure, or artificial intelligence and taxation.