A glaring example of the substantial mismatch between science and technology studies (STS) and the technical and scientific disciplines, this book misses its chance to ignite, or even fuel, a more than pertinent discussion of the subtle, opaque, yet profound impact algorithms exert on our ability to decide in a free and sovereign way. The author merely provides a selective, jargon-laden, tedious, and repetitive retelling of existing STS literature and only rarely displays original thinking of his own.
It is increasingly evident that the set of choices presented and available to us, and not just in the digital world, is heavily, and sometimes completely, at the mercy of algorithms supplied by private enterprises and governmental institutions alike. The author’s proposed solution to this problem, sadly, is a romantically adorned post-humanism based on universal love, that is, our obligation to love other humans, living and non-living nature (such as stones and rocks), and, taken to its extreme in the digital age, even (computer) code itself. Transcending anthropocentric viewpoints, the book finally articulates a universal “fundamental [...] right to be imperfect” for all: humans, nature, and, of course, code.
Despite such abysmal, almost comical digressions, readers willing to wade through a text dripping with sociological-philosophical verbiage, consisting predominantly of summaries or direct quotations of reference works, will, with much patience, be able to (re)discover the fundamental issues at play. I highlight these below instead of providing chapter-by-chapter summaries.
What is an algorithm? The author almost gets it right when citing “an abstract formalized description of a computational procedure” as one potential definition, only to erroneously overload the term later with another reference claiming that an algorithm “automatically makes decisions based on statistical models or decision rules without explicit human intervention.”
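The definitional gap the two citations open up can be made concrete in a few lines of code. The following sketch is my own illustration, not from the book: Euclid's algorithm matches the first definition (an abstract, fully specified procedure), while the toy loan rule, with its hypothetical debt-to-income cutoff, matches the second (an automated decision system).

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an abstract, formalized computational procedure.
    Every step is explicit; no data, training, or judgment is involved."""
    while b:
        a, b = b, a % b
    return a

def approve_loan(income: float, debt: float, threshold: float = 0.35) -> bool:
    """A toy decision rule of the kind the second definition describes:
    it 'automatically makes decisions' from a statistical criterion
    (here a made-up debt-to-income cutoff) without human intervention."""
    return (debt / income) < threshold

print(gcd(48, 18))                    # -> 6
print(approve_loan(50_000, 12_000))   # -> True (0.24 < 0.35)
```

Both are "algorithms" in the first sense, but only the second exercises the governance the book is concerned with, which is exactly why conflating the two definitions muddies the argument.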
What is algorithmic governance? The propensity to focus exclusively on learning, self-adapting artificial intelligence/machine learning algorithms and their interaction with humans surfaces throughout the book, culminating in the (referenced) definition of algorithmic governance as “the automated collection, aggregation, and analysis of big data, using algorithms to model, anticipate, and pre-emptively affect and govern possible behaviors.” Even though “behaviors” may, in principle, include algorithms controlling machines, robots, self-driving vehicles, smart home contraptions, or even power plants, the book completely disregards any Internet of Things (IoT)-related aspects.
Even though the author uses the phrase “code is law,” he unbelievably fails to mention, let alone discuss, its modern incarnation: so-called smart contracts, typically based on blockchains or distributed ledgers, a concept originally described by Nick Szabo in 1997.
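To see why this omission matters, consider what “code is law” means in the smart contract setting: contractual terms are enforced mechanically by the code path itself rather than by courts. The following toy escrow is my own simplified sketch of that principle; real smart contracts execute on a distributed ledger, which this single-process example only mimics.

```python
class Escrow:
    """A toy 'code is law' escrow: funds release if and only if the
    contract is funded and delivery is confirmed; no party can override."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.delivered = False
        self.paid_to = None

    def fund(self):
        self.funded = True
        self._settle()

    def confirm_delivery(self):
        self.delivered = True
        self._settle()

    def _settle(self):
        # The 'law' is this conditional: payment is released exactly when
        # both preconditions hold, with no discretionary escape hatch.
        if self.funded and self.delivered and self.paid_to is None:
            self.paid_to = self.seller

deal = Escrow("alice", "bob", 100)
deal.fund()
deal.confirm_delivery()
print(deal.paid_to)  # -> bob
```

Precisely this mechanical, discretion-free enforcement is what makes smart contracts the obvious test case for any discussion of algorithmic governance, and its absence from the book is striking.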
How does algorithmic governance differ from other, traditional forms of governance? The author rightfully points out the opacity of how (a subset of) algorithms potentially shape our everyday decision-making in the digital sphere. What started as personalization and support for navigating an unmanageable plethora of choices and options currently extends toward a continuous wave of microtargeted nudging through subliminal conditioning of the precognitive layer of our behavior. Eventually, individuals simply choose the option the algorithm has already predetermined for them.
While I agree that our past decisions and preferences decidedly shape the digital bubble (interestingly, the term is never mentioned) of today’s large platforms, I certainly do not subscribe to the dystopian view that humans cannot break out of this vicious circle at all. Not only are there ways (such as anonymous browsing or reading printed material) to leave this cycle, but humans also should not be reduced to monocausal automata reacting blindly to whatever stimulus an algorithm displays on whatever type of screen they are looking at.
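The feedback loop behind this “digital bubble” is easy to make explicit. The following toy recommender is my own illustration, not the book's: items are ranked by past clicks, the top item is clicked again, and the presented set of options narrows, under exactly the monocausal-automaton assumption I reject above.

```python
from collections import Counter

def recommend(history: Counter, catalog: list, k: int = 2) -> list:
    """Rank catalog items by how often they were chosen before."""
    return sorted(catalog, key=lambda item: -history[item])[:k]

history = Counter({"politics": 5, "sports": 1})
catalog = ["politics", "sports", "science", "culture"]

# Each round, the simulated user clicks the top recommendation,
# reinforcing it; "science" and "culture" are never surfaced.
for _ in range(3):
    shown = recommend(history, catalog)
    history[shown[0]] += 1

print(recommend(history, catalog))  # -> ['politics', 'sports']
```

The loop only converges because the simulated user is a pure automaton; a single out-of-band choice (the printed newspaper, the anonymous browser) perturbs the history and breaks the closure, which is the empirical point the dystopian reading ignores.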
How do algorithms impact our decision-making processes? How can we describe and actively shape this state of affairs? Undoubtedly, some algorithms exert tremendous power over us (for example, in profiling or predictive policing), but does that mean that software code has become an agent “co-constitutive [...] of shaping our actions”? I very much liked the author’s introduction of the elegant notions of “agency” and “assemblages” to discuss the distributed and fragmented environment of human decision-making. Nonetheless, agency is often conflated with the capacity to actively change elements of the environment (yes, software can do that as well) instead of correctly referring to an individual’s ability to exert their will or intention, including the socially reproductive quality of the respective actions. Assemblages of “human actors, code, software, and algorithms” equipping algorithms with “productive capacities” may sound elitist to nontechnical people, but it simply boils down to the fact that software is very often a large, sometimes decisive, sometimes almost inescapable factor in our decision-making processes. Does that mean that the “digitized world […] possesses agency of its own”? No. European civil law knows the concept of diluted free will, which would have been an appropriate conceptual framework for further discussing the ramifications and effects of these assemblages, rather than any sort of distributed agency.
How does algorithmic governance affect the legitimacy of our governments? Finally, going off on a tangent of “code as law,” the book wants us to believe that we are entering a state of “hybridization of [state] governance” in which it is unclear on whom the “right to rule” has really been bestowed: the legitimate government or algorithms. Obviously, this viewpoint completely neglects that complex federated governance processes have always had “displaced loci of power,” not just since the advent of “algorithmic law.” These were called nations, federal states, circuits, counties, municipalities, and so on, collectively employing thousands of individual “agents” (in the sense of principal/agent theory) who rendered decisions based on local and federal law.
Moreover, the discussion is again framed, without justification, as inevitable, universally deployed, immutable, and unchangeable. This may hold for China, Russia, and other oppressive regimes, but it provides an ineffective frame for discussing and arguing this relevant matter within democratic states and societies.
Let me lastly note that the book’s short index conspicuously lacks entries for “algorithm,” “governance,” and “algorithmic governance”: a telltale sign of the kind of technical and scientific governance the book itself so desperately needed.