Humans have always tried to see the future, but contemporary life is saturated with predictions generated by algorithms that quietly shape decisions about ads, jobs, credit, policing, and even survival. A growing layer of mostly invisible, corporate-controlled forecasting now mediates many aspects of daily experience, raising unease about how much power has shifted to opaque systems. Three recent books examine how this situation emerged, what ideological and technical foundations sustain it, and how prediction functions less as neutral foresight than as a mechanism of control.
In The Means of Prediction: How Artificial Intelligence Really Works (and Who Benefits), economist Maximilian Kasy focuses on supervised learning, in which statistical analysis of large, labeled data sets is used to forecast outcomes: whether someone will violate parole, repay a mortgage, succeed at work or school, or even be at home when a building is bombed. He argues that a world governed by such predictive systems is becoming crueler and more constrained, embedding existing prejudices and narrowing life chances. Kasy rejects the idea that fairer algorithms can fix the problem, since they still depend on biased historical data and operate under profit-maximizing incentives. Instead, he calls for broad democratic control over what he terms “the means of prediction”: data, computational infrastructure, technical expertise, and energy. He proposes tools such as data trusts and taxes calibrated to the social harms of artificial intelligence, while acknowledging the political and institutional obstacles and the urgent question of whether there is enough time to enact such changes.
Benjamin Recht’s The Irrational Decision: How We Gave Computers the Power to Choose for Us traces the roots of today’s automated decision making to “mathematical rationality,” a narrow conception of rational choice that took hold around the end of World War II. Wartime models for managing risk and uncertainty inspired the design of computers as ideal rational agents devoted to optimization, game theory, and statistical prediction. Yet, Recht observes, humanity fared well before this formalism: “advances in clean water, antibiotics, and public health brought life expectancy from under 40 in the 1850s to 70 by 1950,” and “from the late 1800s to the early 1900s, we had world-changing scientific breakthroughs in physics, including new theories of thermodynamics, quantum mechanics, and relativity,” all without formalized decision theory. He criticizes contemporary champions of data-driven rationality, such as Nate Silver, Steven Pinker, and various Silicon Valley figures, for treating every choice as a calculable bet while sidelining intuition, morality, and judgment, and he challenges the premise that people should make decisions like computers at all.
Philosopher Carissa Véliz, in Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to Artificial Intelligence, contends that predictions function like magnets, pulling reality toward themselves and often becoming self-fulfilling. She frames Gordon Moore’s famous 1965 forecast about transistor density, which helped crystallize “Moore’s Law,” as an example of an industry collectively working to realize a prediction that aligned with its financial interests: companies spent billions of dollars to keep the forecast true, and profited even more. Véliz warns that grand forecasts, such as claims that artificial general intelligence will be humanity’s final problem, redirect attention away from the harms that artificial intelligence is already causing. Predictive claims, she argues, are “veiled prescriptive assertions” and “speech acts” that tell people how to behave, reinforcing power hierarchies and, in heavily prediction-driven societies, edging toward oppression and authoritarianism. Across these works a common thread emerges: technology is not destiny, and the most human response to pervasive, uninvited predictions may be to question, resist, and at times simply defy them.
