‘Absolutely filthy’: Chalmers blasts PwC
PwC Australia scandal: what actually happened and will it be fatal for the advisory firm?
Lazy leaders and heroic managers
Strategic transformation failures are often blamed on middle managers, but the real culprit is executives not delivering on three key issues.
Monitoring software has become more common since the pandemic – but are activity scores the best way to measure productivity?
‘Unearthing how government operates’: how the public uses freedom of information to seek truth
Rules can be thick or thin, says Lorraine Daston. “Behind every thin rule is a thick rule, cleaning up after it”
The Prediction Society: Algorithms and the Problems of Forecasting the Future
Matsumi, Hideyuki, and Solove, Daniel J., The Prediction Society: Algorithms and the Problems of Forecasting the Future (May 19, 2023). Available at SSRN: https://ssrn.com/abstract=
“Predictions about the future have been made since the earliest days of humankind, but today, we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Increasingly, important decisions about people are being made based on these predictions. Algorithmic predictions are a type of inference.
Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited to address.
First, algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, leaving individuals unable to challenge them as false.
Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem where they actively shape the future they aim to forecast.
More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future.
Privacy law fails to address algorithmic predictions. Privacy law lacks a temporal dimension and does not distinguish between predictions about the future and inferences about the past or present. Predictions about the future involve considerations that are not implicated by other types of inferences. Privacy law is entrenched with dichotomies that do not work with predictions. For example, privacy law is framed around a truth-falsity dichotomy. The law provides correction rights and duties of accuracy that are insufficient to address problems arising from predictions, which exist in the twilight between truth and falsehood. Individual rights and anti-discrimination law are also unable to address the unique problems with algorithmic predictions.
We argue that the law must recognize the use of algorithmic predictions as a distinct issue warranting different treatment than other privacy issues and other types of inference. We then examine the issues the law must consider when addressing the problems of algorithmic predictions.”