UNSW Law Society Court of Conscience
SOCIOLOGICAL JURISPRUDENCE AND DIGITAL TECHNOLOGY: THE NEED FOR CROSS-DISCIPLINARY ANALYSIS
Anna Beckers* and Gunther Teubner#
Abstract
Dealing with the relation between information technology and private law, the article discusses the potential of sociological jurisprudence and examines its effects in concrete cases. Current judicial decisions, selected from different European courts, are scrutinised to determine whether alternative and socially adequate results can be reached when these decisions are confronted with sociological insights, in particular from systems theory. Specifically, the article deals with the problem of how the law reacts to the personification of algorithms as actants, the emergence of human-machine associations, the definition of damage for the use of the internet, and the network and swarm properties of labour relations in digital platforms. More generally, the article discusses collisions between various social theories, knowledge transfer from the social sciences to law, and the normative potential of social theory.
Keywords: sociological jurisprudence, law of digitality, interdisciplinarity, transversality, theory collisions, knowledge transfer.
I Against The Technology-Determinist Short-Circuit
A Machine Behaviour and Socio-digital Institutions
One should avoid the interdisciplinary short-circuit in the legal analysis of information technology. One cannot draw far-reaching conclusions for the law from only examining the technical properties of new technologies. ‘Technology determines legal regulation’ – with such an argument, one remains caught in simple causal models and equally simple normative conclusions. In contrast, a more complex multidimensional model will be presented here. The model places different social contexts of technology use at the centre of legal arguments and emphasises the role of the social sciences in developing an appropriate legal regime for digital risks.[1]
The starting point is a typology developed in IT studies that distinguishes three types of machine behaviour: individual, collective, and hybrid.[2] Individual machine behaviour refers to the intrinsic properties of a single algorithm, whose risks are driven by its source code or design in its interaction with the environment. Hybrid human-machine behaviour results from close interactions between machines and humans, which give rise to sophisticated emergent entities with properties whose risks cannot be identified if one isolates the humans and algorithms involved. Collective machine behaviour refers to the system-wide behaviour resulting from the interconnectivity of machine agents. Here, looking at individual machine behaviour makes little sense, while analysis at the collective level reveals higher-order interconnectivity structures responsible for the emerging risks.
This typology is relevant for the law dealing with digital risks. However, to avoid the technology-determinist short-circuit, it is necessary to introduce ‘socio-digital institutions’ as intervening variables between technology and law. Socio-digital institutions are stabilised complexes of social expectations, in particular expectations regarding the behaviour of algorithms in social contexts. Such institutions are not identical with social systems, formal organisations, or social relations. Instead, social systems, including formal organisations and interpersonal relations, produce expectations via their communications, which – to use a classical formulation – condense into institutions under an ‘idée directrice’. Such expectations are institutionalised when consensus can be assumed to support them.[3] Moreover, because institutions consist of society-wide expectations, they have the ability to build bridges between different social systems and their expectations.[4] In such bridging institutions, legal, economic, political and technological expectations meet, and it is often difficult to distinguish between the expectations of the systems involved.[5]
Thus, there is a need for an ‘institutional turn’ in the law of digitality.[6] Idiosyncratic socio-digital institutions structure the appropriate models of responsibility for the actions of autonomous algorithms.[7] To understand these contexts appropriately, the social sciences are needed. They serve as intermediaries between the IT sciences and jurisprudence.[8] Their methods are able to analyse specific socio-digital institutions and their risks in depth and to interpret them with sufficient density.
Socio-digital institutions correlate with the types of machine behaviour mentioned above: individual machine behaviour is realised as an agent’s action in the institution of digital assistance. Hybrid machine behaviour gives rise to the institution of a human-machine association that emerges in the dense interaction between humans and machines. Collective machine behaviour occurs when interconnected algorithms remain only indirectly connected to the social sphere; here, society is exposed to invisible machines and their interconnected operations. Each type of machine behaviour thus gives rise to a specific socio-digital institution.
Each socio-digital institution has its own novel risks of harm to which liability law must respond: The risk of digital assistance arises when tasks are delegated to autonomous algorithms instead of humans, and their decisions can no longer be controlled. Human-machine associations create the risk of emergent collective decisions that cannot be traced back to individual decisions of the algorithms or humans involved. The risk of digital interconnectivity is related to society’s exposure to an opaque network of uncontrollable interconnected algorithms.[9]
The law dealing with digital risks will gain relevant insights from the social sciences, as demonstrated in the following analysis of particular cases. Reference is made to theories on (1) the social personification of algorithms, (2) the emergent properties of human-algorithm associations, (3) the definition of damage for the use of the internet, and (4) the network and swarm properties of labour relations in digital platforms.
II Sociological Jurisprudence In Digital Conflicts: Selected Cases
A Google’s ‘Autocomplete’ Function: Algorithms as Actants
Several autocomplete cases have been brought against Google for violation of personality rights.[10] The autocomplete function, which completes a search entry with related terms, proposed compromising search terms for the names of well-known personalities. In the cases that became known as the autocomplete cases, Google searches combined, for instance, the name of a company and its founder with the cult organisation Scientology, or the name of a prominent figure with the terms ‘escort lady’ and ‘prostitute’. Google argued that the algorithm’s suggestions were unforeseeable and uncontrollable, or that the unpredictability stemmed from the user input, with Google merely collecting data and publishing the results.[11] The courts, however, ruled the opposite: search results violating personality rights are attributable to the company, which has violated a duty to control. Consequently, Google is liable in principle once it obtains knowledge of personality rights violations by its autocomplete function.[12]
The requirement that Google obtain knowledge before the duty to control is triggered creates a massive and growing liability gap because it does not cover autocomplete operations that remain undetected by the company. The following arguments will suggest a different result: Google is liable per se for violations of personality rights by its search algorithm – from the very first instance in which they occur.
The case illustrates very well the socio-digital institutions discussed above. The Google autocomplete function occupies a somewhat strange intermediate place between an autonomous algorithmic decision and a hybrid human-machine interaction, in which user input, the programming of the search algorithm by Google’s management, and the mathematical operations are all present. The reason for this ambivalence is that the action is not generated from training data but from real-time user input and personalised criteria of the users. In addition, the autocomplete function is only activated by user input in the search engine, and its rules are constantly revised within the organisation. Google autocomplete is, as one observer has it, ‘...a social process that at many points could be informed by social values’.[13] Is this, then, according to our classification, a human-machine association or a case of digital assistance? The Google autocomplete results are based on the interaction between human user input and machine calculations. At the same time, the relation between human input and the algorithmic operations is not a clear-cut cooperative relation but rather one of delegation.
Sociological analysis clarifies the conditions under which one of the two socio-digital institutions emerges: digital assistance or human-machine association. The delegation of tasks from the Google organisation to the autocomplete algorithm creates two autonomous but interdependent chains of action, between which a principal-agent relationship emerges.[14] Such relationships necessarily presuppose social agency for both the principal and the agent. Therefore, a (partial) attribution of personhood to algorithms becomes possible within digital assistance.
Personification of algorithms – several social theories provide the appropriate rationale for this complex process.[15] Economists contribute relatively little to this topic. When they observe the use of algorithms in markets, they implicitly perceive algorithms as rational actors.[16] In contrast to narrow rational choice assumptions, sociological theory analyses personification as a performative act that institutes the social reality of an actor, which cannot be identified with a specific rationality, economic or otherwise. Actor-Network Theory defines the interactive qualities that transform an algorithm into an ‘actant’.[17] Information philosophy defines the conditions under which algorithmic actions can be considered autonomous.[18] Systems theory describes in detail how, in a situation of double contingency, emergent human-machine communication constitutes the social identity of the algorithm and its (limited) action capacities.[19] Each social context creates its own criteria for algorithmic personification; the economy is no different from politics, science, morality, or law. Political philosophy describes in detail how the transfer of a ‘potestas vicaria’ constitutes the personhood of algorithms in principal-agent relations, which opens up new productive potentials but simultaneously ‘implies clear risks and dangers for modernity’.[20]
As a result of social personification, technological risks are transformed into social risks. Causal risks stemming from the movement of objects are now conceived as action risks arising from the disappointment of Ego’s expectations about Alter’s actions.[21] In digital assistance, what appears is no longer an instrumental subject-object relationship but a subject-subject relation, more precisely, a principal-agent relation with its typical communicative risks. The more the institution of digital assistance covers online transactions, the more the law is challenged to decide, according to its own criteria, what type of legal personhood it grants to digital actors. In this constellation, special liability rules are necessary that react to the risks of digital actors’ decision-making, which differ from the causal risks of dangerous objects. This is why legal policy proposals that introduce a new strict liability regime or simply modify tort liability rules are inadequate.[22] Such proposals would wrongly treat algorithms as objects, dangerous installations, or defective products and ignore what is new about algorithms – their autonomous decision-making ability. Instead, the rules of agency law and vicarious liability are to be applied to faulty decisions of algorithms in the context of digital assistance. The principal is bound when the algorithm enters into contracts as an agent, and the principal is liable when the algorithm decides incorrectly and causes damage.[23]
The Google autocomplete algorithm demonstrates very well the development from automation to digital autonomy. The algorithm’s operations are determined by mathematical calculations, but the varying user input, the algorithm’s ability to learn from such input, and unforeseeable individual user properties result in autonomous decision-making under uncertainty. ‘Thus, Google provides for some decision premises by certain conditions via “input”, but what becomes visible as “output” at the end cannot be predicted with any certainty’.[24] It is already the first autonomous algorithmic decision violating personality rights that triggers vicarious liability.
It is not a wrongful decision by Google but the output of the autocomplete function that counts as a violation of personality rights. It is not necessary to show that Google had knowledge of the infringement. Instead, it is the communicative act of auto-completion itself that the law treats as a violation. Hence, one does not need to find the conditions for contractual or tortious liability (violation of a contractual obligation or a duty of care) in the human principal’s behaviour, which becomes increasingly difficult with autonomous systems.[25] Instead, it is exclusively the auto-complete output that needs to be qualified as illegal or not. This solution, which is based on the general principles of contractual and tortious liability, would also solve the related legal problems regarding the applicability of liability rules for news providers. Google’s liability does not depend on whether the company is a news provider or an intermediary because this distinction does not affect its role as principal.
B Digital Journalism: Human-Algorithm Associations as Collective Actors
A second significant responsibility gap looks quite different, as the example of investigative journalism illustrates. It is common practice in journalism today to use algorithms for investigating and for producing content and news. This also happened in the Panama Papers, a real-life case partly amended here. An international consortium of journalists used software to analyse numerous documents in a complex investigation to uncover illegal tax practices.[26] Algorithms were used to mark, categorise, and select the relevant texts. Humans were involved in the algorithms’ work in close interaction.[27]
Although such human-algorithm cooperation can be highly beneficial in uncovering complex news stories – some journalistic investigations would have been impossible without the help of the technology – there is also considerable potential for damage. Who is liable if, during such an investigation, persons or companies that were, in fact, not involved are accused of misconduct? In this situation, a responsibility gap emerges when it cannot be clearly determined whether the algorithm was at fault or the humans erred. Current law does not provide for liability of such a human-machine association.
Unlike the operator's exclusive liability in Google autocomplete, digital journalism is a case of ‘digital hybridity’, another socio-digital institution in its own right.[28] Hybrid human-machine behaviour results from closely intertwined interactions between algorithms and humans. It would be wrong to use the individualistic approach of principal-agent relations and to separate single human and algorithmic actions since one would fail to notice that collective actors have been established. They develop properties whose risks differ qualitatively from the risks of individual action within digital assistance. While digital assistance has to cope with the risks of algorithmic autonomy, digital hybridity has to deal with the transformation of single human-algorithm interactions into collective actorship.
What is the contribution of the social sciences here? Due to their adherence to methodological individualism, economic analyses are sceptical of the reality status of collective actors. They conceive them as a mere ‘nexus of contracts’ and judge their personification as an abbreviation at best and as dangerous ‘errors’, ‘traps’ or ‘fictions’ at worst.[29] In contrast, sociology focuses closely on the differences between human-algorithm interactions.[30] These interactions range from short-term loose contacts to full-fledged human-algorithm ‘organisations’ with an internal division of labour and distribution of competencies. Each of these hybrids creates its own risks. In loose contacts, the acts of humans and algorithms can easily be identified and qualified as the principal-agent relations discussed above as our first socio-digital institution. Most conspicuous, however, are constellations of dense interaction, in which responsibility for actions can be established only for the hybrid entity as a whole, not for the individual algorithm or human involved.[31]
The wrongful action is attributed to the emerging human-algorithm association, and liability is channelled to the multitude of actors who stand ‘behind’ the digital hybrid. A whole network of different actors initiates the dense human-algorithm interaction and profits from its results. Since control is dispersed among the network nodes, responsibility follows this specific risk structure. For human-machine associations, a fully developed corporate liability of the association as such cannot be established, at least for the time being. Instead, the principles of enterprise liability are well suited to shape the responsibility of digital hybrids.[32] Enterprise liability works in two steps. In the first step, the wrongful action is attributed to the hybrid as a collective, without disentangling the contributions of humans and algorithms. In the second step, liability for the collective action is channelled to all the network nodes, which have set up the hybrid and benefit from its activities. As a result, all the network nodes are liable according to benefit and control. If a hub enterprise contractually coordinates the network, primary liability should fall on this hub, usually the producer, who can then have recourse against the other network nodes.
If, in digital journalism, the algorithm that analysed the multitude of documents in collaboration with the journalists operated according to its programming and the human journalists fulfilled their monitoring duties, no one can be held liable.[33] This is a ‘collective moral responsibility’ situation in which a group commits an unlawful act even though the individuals involved behaved correctly.[34] Identifying a single wrongful act is impossible, although the collective work of algorithms and journalists led to the wrongful allegations. Accordingly, enterprise liability, as outlined above, is appropriate. The wrongful conduct is attributable to the human-machine association as a collective of journalists and algorithms. This allows channelling financial liability to the network members. The injured party can successfully sue the central node of the network. In the case of hybrid journalism, this can be either the controlling news organisation or the producer of the algorithm. Within the network, the internal proportional distribution of liability would be according to the economic benefit and control in the collaborative network.
C Blockage of Internet Access: Determination of Damage
The law cannot rely exclusively on economic analyses when choosing relevant social science theorems, as many authors do.[35] While economic perspectives are certainly relevant for identifying incentives for appropriate standards of care and activity levels,[36] they are relatively indifferent to broader societal problems, especially adequate compensation of victims, the encroachment of public institutions, or ecological damage. Any claim of a monopoly on social theory made by one of the social science disciplines must be firmly rejected. This applies not only to legal economics, which in recent years has explicitly claimed to become law’s leading science, but also to ‘sociology at the gates of law’, critical theories of law as politics, or moral philosophy’s claims on legal normativity. It is impossible today to base law on only one theory of society, from whatever discipline it is presented. Transversality – this is arguably the appropriate response to the rivalry between social theories. Transversality was developed in response to a similar situation in philosophy, which struggled with the contemporary plurality of discourses that followed the collapse of the grands récits.[37] In the legal cases discussed here, it becomes apparent that the law cannot be one-sidedly economised, politicised, sociologised, or moralised. Law must reject the totality claims of any theory but, simultaneously, accept the inherent right of diverse coexisting social theories. Legal arguments should exploit the plurality of language games in formulating concepts, principles, norms, arguments, and decisions. In a transversal passage, the law chooses the interdisciplinary points of contact – on its own responsibility.
The transversal approach becomes relevant in a recent decision of the German Federal Court of Justice, which had to determine the legal qualification of Internet use when compensation is demanded for a permanent blockage of Internet access, and to define the concrete criteria for the decision.[38] A telecommunications company had failed to provide Internet access when the customer’s connection was interrupted for a prolonged period after a change of tariffs.
In this case, the usual qualification as compensation for the loss of use of an economic good is too narrow.[39] The prevailing economic perspective, which focuses solely on market value, needs to be revised. Since the Internet has now become an ‘existential living space’, the legal determination of damage compensation must be extended to include social, cultural and aesthetic aspects. The ‘value’ of Internet access for users needs to be determined in a transversal passage through various social theories, which deal with the category of value in different social contexts.[40] An economic perspective is not wrong, but it is far too one-sided. It reduces the Internet to a mere economic good and does not do justice to the multidimensionality of the values involved. Instead of evaluating the Internet blockage solely on economic criteria, a sociologically informed analysis will demonstrate that the Internet is more than an economic good: it is a social institution in which meaningful and technical communications are closely linked. Accordingly, different value criteria from various social contexts will apply: the distinctions directrices are neither commercialisation versus non-commercialisation, nor market-based predictability versus non-calculability, nor luxury goods versus basic economic goods. Instead, the Internet needs to be legally qualified as a polycontextural institution and a space for personal development.[41] While economics contributes only a shallow understanding of the Internet’s value, sociological-cultural analyses, which today are turning their attention to digital media, provide a deeper analysis. If one takes private law seriously as society’s constitution, one has to consider the broader social aspects of the information technology ‘Internet’. First and foremost, the significance of the Internet as a living space, that is, as the living and experiential world of the person, must be considered. This requires special protection of information technology that is of central importance for the development of the human personality. What is at stake is nothing less than the psychophysical integrity of persons.
Legal guarantees are required to ensure people’s free access to the Internet. With such guarantees of a certain socio-digital minimum subsistence level, the law can act as a technology for the humanisation of technology. This changes the concrete criteria for legal decision-making and the legal qualification of damage compensation. Adequate compensation for the immaterial damage of the infringement of the general personality right replaces its dubious disguise as abstract compensation for the loss of an economic good.
D Digital Crowdsourcing: Labour Relations in Platforms
Novel network or swarm forms of organisation on the Internet put traditional categories of labour law under pressure. This is where legal arguments about the normativity of socio-digital institutions, based on social science analyses, come in. The Internet represents a new context of legal conflicts for which an institutional analysis can provide a deeper understanding. In the ‘Crowdtree’ case, the plaintiffs sought payment of the minimum wage for work performed for the defendant, who operates a digital crowdsourcing platform in Germany.[42] Crowdtree invited tenders for operational HITs (Human Intelligence Tasks) in the quality control of large volumes of data. The average wage amounted to no more than €2–3 per hour. The plaintiffs wanted to enforce the applicability of the German minimum wage of €8.50 per hour, arguing that the crowd workers were employees of the defendant. The defendant objected that it merely maintained relationships with the crowd workers as independent contractors. The court found in favour of the plaintiffs: the crowd workers had to be paid €8.50 per hour until an effective system of collective power had been established.
The decision is sound, but the reasoning needs to be improved. According to social science analyses, crowd workers are neither employees nor self-employed in the traditional sense. Their hybrid form of employment requires new normative points of reference, found not in the law itself but in the crowd’s social, network-like, or swarm-like processes of self-regulation.[43] The law must address these conditions of access and interaction. The development of the collective dimension of labour law is a prerequisite for the social design of contract law.
Transversality will again be pertinent to understanding digital crowdsourcing platforms because labour law has to deal with novel forms of network or swarm organisation. Both economic and sociological analyses initially identify, in parallel fashion, a fundamental organisational transformation. Economics has been inspired to investigate new forms of transaction, and sociology to analyse new modes of cooperation in non-hierarchical forms of organisation.[44] Their effects on private law have already been studied in relation to network arrangements.[45] A network consists of several organisations that enter into interrelated contracts, which are coordinated as closely as in vertical integration without, in fact, ever creating a single integrated business entity such as a corporation or a partnership. The problem confronting the law is that neither corporate nor contract law fits the economic phenomenon of network organisation. As a result, the law has yet to address adequately some of the conflicts generated by networks.[46]
Both the individualistic perspective of contract and the aggregate corporate perspective provide unsuitable legal concepts for digital networks. In this situation, economic transaction cost theory proves fruitful when it shows that rational actors choose network forms of business organisation if they offer transaction cost advantages over contractual or corporate structures. However, economic theory goes beyond this analysis and insists that the new networks aim exclusively at minimising transaction costs; furthermore, it wants private governance to monopolise conflict regulation in the network and rejects interventions by state law as inefficient. At this point, the law must reject the economic interpretation. It is only in the transversal passage through other social theories that the complexities of networks become visible to the law, requiring a normative perspective that goes beyond transaction cost minimisation. For the hybrid organisation of digital networks, a double attribution is needed, that is, a simultaneous commitment to the various individual goals and to the overarching project.[47]
Moreover, with a transversal orientation, the law takes up the impulses of economic, political, sociological, ethical, and other theories side by side. It institutionalises the conflicts between different social rationalities that emerge in digital platforms. This implies legal obligations for network members to adjust their behaviour to different, contradictory logics of action. The law requires participants to consider several contradictory imperatives simultaneously, albeit with different emphases: the conflicting requirements of technology, economic profitability, scientific knowledge, productive standards, and political orientation towards the common good.[48] After such a transversal passage through different disciplines, the law can more adequately constitutionalise the novel organisational forms of digital platforms. Moreover, the legally relevant network purpose, the legal definition of the user status, and the duties of users and organisational leadership can be determined in detail. This has consequences for the qualification of labour relations in digital platforms.
This is where reflexive labour law comes in. On this account, law must ‘primarily process — and redevelop — social knowledge about self-regulatory processes in different social contexts’.[49] Legal doctrine needs to examine different legal institutions carefully in order to determine whether, in terms of their internal normative logic – their ‘inner basis’, as jurists like to frame it – they are capable of responding sensitively to the structures and problems of the social phenomena as perceived by the law. These subtle search operations, which are performed with the aid of the sensory concepts of doctrine, are referred to here as ‘responsiveness’.[50] The responsiveness of the law is to be judged neither before the forum of the social sciences, which ensure the authentic use of the term, nor before the forum of a superordinate third instance acting as an intermediary between law and social theory, but only before the ‘forum internum’ of the law itself. In a complex examination, the law is challenged by the external problem analyses of social theories, but only insofar as they are usable according to law’s own selection criteria; it reconstructs them internally in its own language, in which it can then match problems and solutions together.
Social science analyses observe that the coordination of crowd activities is characterised by affiliation and specific communication media.[51] Crowdsourcing differs from networks; crowd workers are coordinated as swarms, which cannot rely on stable structures of connectivity but must constantly establish their connections through spontaneous interaction. Collectivity and knowledge are generated via feedback loops. Swarm intelligence thus allows flexible, spontaneous, unorganised problem-solving. Because of this reactivity, swarms have no form or order but represent open process, movement, pure happening, and collectivity in actu. They exist only in relationality and in the sudden act of flowing together. Swarm denotes spontaneous collectivity.
In the crowdsourcing case, the defendant is neither the plaintiffs’ employer nor the operator of a mere labour marketplace. Instead, it is an intermediary with a specific mediating role that entails certain social obligations. The courts are establishing increased duties on the part of such intermediaries to protect the personal rights of their users, taking into account the special conditions of digital communication.[52] In addition to the regional minimum wage regulations and the minimum remuneration entitlement under art 4 of the European Social Charter, the principles of good faith are used to determine the amount of remuneration. This results in a total remuneration claim of €8.50 gross per hour.[53]
III Conclusion
We have attempted to show here, in a general way and using several selected cases, that at the point where social theory meets law, added value can be generated in terms of legal doctrine if the precarious relationship between autonomy and interconnectedness is respected in three different dimensions.
Transversality draws conclusions from the autonomy of different incommensurable social theories and their mutual interconnectedness. The law denies any monopoly claim and selects points of contact in a transversal exploration.
Responsiveness insists on the autonomy of legal doctrine vis-à-vis social theories and takes account of its interconnectedness with them through the law’s self-exposure to the challenges posed by social theories, drawing inspiration from this for normative innovation and observing the effects thereof on the social world.
Self-normativity: the law derives its normative orientation not from social theory, but solely from internal processes of the law and, at the same time, from the self-normativity developed by the reflection dogmatics of other social systems.
* Dr Anna Beckers is Professor of Private Law and Social Theory at Maastricht University, Faculty of Law, The Netherlands.
# Dr Gunther Teubner is Emeritus Professor of Private Law and Legal Sociology, Goethe-University, Frankfurt am Main, Germany.
[1] For more details, see the monographic treatment of the whole model summarised here: Anna Beckers and Gunther Teubner, Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds (Hart Publishing, 1st ed, 2021) (‘Liability Regimes for Artificial Intelligence’). Here, we generalise the findings of the book and discuss how the abstract concepts of transversality and reflexive law bear on case analyses.
[2] Iyad Rahwan et al, ‘Machine Behaviour’ (2019) 568 Nature 477, 482, fig. 4.
[3] Niklas Luhmann, A Sociological Theory of Law (Routledge, 1985) ch II.4.
[4] See in more detail Gunther Teubner, ‘Legal Irritants: Good Faith in British Law or How Unifying Law Ends Up in New Divergences’ (1998) 61 Modern Law Review 11, 21–4.
[5] See generally Roberto Esposito, Institution, tr Zakiya Hanafi (John Wiley & Sons, 1st ed, 2022).
[6] This follows the call for an institutional turn in contract interpretation, which becomes particularly relevant for emerging institutions in the digital sphere, see Dan Wielsch, ‘Contract Interpretation Regimes’ (2018) 81(6) Modern Law Review 958, 959.
[7] Jack Balkin, ‘The Path of Robotics Law’ (2015) 6 California Law Review Circuit 45, 49.
[8] Carla Reyes ‘Autonomous Corporate Personhood’ (2021) 96(4) Washington Law Review 1453, 1475.
[9] See Liability Regimes for Artificial Intelligence (n 1) 20–2, 45–8, 90–7, 111–20.
[10] For Italy, Ordinario di Milano, case number 10847/2011, 24 March 2011; for the US, Guy Hingston v Google Inc (US District Court, Central District of California, SACV12-02202-JST, case settled 7 March 2013); for Germany, BGH GRUR 2013, 751 (‘Scientology’). There are also reports of cases in Japan (2013, decided in favour of the plaintiffs), France (2012, settled), Australia (2012, decided in favour of the plaintiffs) and Belgium (decided in favour of Google).
[11] This was a central argument by Google in the Japanese case on autocomplete, see ‘Google ordered to change autocomplete in Japan’, BBC (online, 26 March 2012) <www.bbc.com/news/technology-17510651>.
[12] BGH GRUR 2013, 751 recurs to the duties to control. Supportive Dan Wielsch, ‘Die Haftung des Mediums: BGH 14.05.2013 (‘Google Autocomplete’)’ in Bertram Lomfeld (ed), Die Fälle der Gesellschaft: Eine neue Praxis soziologischer Jurisprudenz (Mohr Siebeck, 2017) 125.
[13] Frank A Pasquale, ‘Reforming the Law of Reputation’ (2015) 47(2) Loyola University of Chicago Law Journal 515, 522.
[14] See generally Tobias D Krafft, Katharina A Zweig and Pascal D König, ‘How to Regulate Algorithmic Decision-Making: A Framework of Regulatory Requirements for Different Applications’ (2022) 16(1) Regulation & Governance 119, 123.
[15] For details, see again Liability Regimes for Artificial Intelligence (n 1) 8–10, 23–30.
[16] Xavier Gabaix and David I Laibson, ‘A Boundedly Rational Decision Algorithm’ (2000) 90(2) American Economic Review 433, 433.
[17] Bruno Latour, Politics of Nature: How To Bring the Sciences Into Democracy (Harvard University Press, 2004) ch 2.
[18] Luciano Floridi and Jerry W Sanders, ‘On the Morality of Artificial Agents’ in Michael Anderson and Susan L Anderson (eds), Machine Ethics (Cambridge University Press, 2011) 184, 192–205.
[19] Elena Esposito, ‘Artificial Communication? The Production of Contingency by Algorithms’ (2017) 46(4) Zeitschrift für Soziologie 249, 255.
[20] Katrin Trüstedt, ‘Representing Agency’ (2020) 32(2) Law & Literature 195, 196–7.
[21] Gunther Teubner, ‘Rights of Non-Humans? Electronic Agents and Animals As New Actors in Politics and Law’ (2006) 33 Journal of Law and Society 497, 503.
[22] See Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’, European Parliament (Study Commissioned by the Juri Committee on Legal Affairs, 2020) 97–103.
[23] See Mihailis E Diamantis, ‘Vicarious Liability for AI’ (2024) 99(1) Indiana Law Journal 317, 320–8; Dalton Powell, ‘Autonomous Systems As Legal Agents: Directly By The Recognition Of Personhood Or Indirectly By The Alchemy of Algorithmic Entities’ (2020) 18(1) Duke Law & Technology Review 306, 329.
[24] Graziana Kastl, ‘Eine Analyse der Autocomplete-Funktion der Google-Suchmaschine’ (2015) 117 Gewerblicher Rechtsschutz und Urheberrecht 136, 140 (our translation).
[25] See Martin Ebers, ‘Liability for Artificial Intelligence and EU Consumer Law’ (2021) 12 Journal of Intellectual Property, Information Technology and Electronic Commerce Law 204, 211–12.
[26] This is a fictitious case, but one that uses publicly available information on the Panama Papers research to illustrate the emergent properties of a human-algorithm association. For details: Panama Papers: Ivonne Wagner, Wolfgang Jaschensky and Laura Terberl, ‘The Journalists Behind the Leak’, Sueddeutsche Zeitung (online, 25 April 2016), <www.sueddeutsche.de/politik/panama-papers-the-journalists-behind-the-leak-1.2966929>.
[27] On hybrid journalism in general, see Nicholas Diakopoulos, Automating the News: How Algorithms are Rewriting the Media (Harvard University Press, 2019) 13–40.
[28] For details see Anna Beckers and Gunther Teubner, ‘Human-Algorithm Hybrids as (Quasi-)Organisations? On the Accountability of Digital Collective Actors’ (2023) 50(1) Journal of Law and Society 100.
[29] See Michael C Jensen and William H Meckling, ‘Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure’ (1976) 3(4) Journal of Financial Economics 306, 311; Frank H Easterbrook and Daniel R Fischel, ‘The Corporate Contract’ (1989) 89(7) Columbia Law Review 1416, 1426.
[30] See, eg, Andreas Hepp, Deep Mediatization: Key Ideas in Media & Cultural Studies (Routledge, 2020).
[31] See, eg, Philip Pettit, ‘Responsibility Incorporated’ (2007) 117(2) Ethics 171.
[32] Similarly, David C Vladeck, ‘Machines without Principals: Liability Rules and Artificial Intelligence’ (2014) 89(1) Washington Law Review 117, 149; Jessica S Allain, ‘From Jeopardy! to Jaundice: The Medical Liability Implications of Dr. Watson and Other Artificial Intelligence Systems’ (2013) 73(4) Louisiana Law Review 1049, 1074.
[33] Seth C Lewis, Amy Kristin Sanders and Casey Carmody, ‘Libel by Algorithm? Automated Journalism and the Threat of Legal Liability’ (2019) 96(1) Journalism & Mass Communication Quarterly 60, 69.
[34] See generally on the concept, David Copp, ‘The Collective Moral Autonomy Thesis’ (2007) 38(3) Journal of Social Philosophy 369.
[35] See Georgios I Zekos, Economics and Law of Artificial Intelligence: Finance, Economic Impacts, Risk Management and Governance (Springer, 2021) 361–400.
[36] See Gerhard Wagner, ‘Robot, Inc.: Personhood for Autonomous Systems?’ (2019) 88(2) Fordham Law Review 591, 597–9.
[37] Wolfgang Welsch, Vernunft: Die zeitgenössische Vernunftkritik und das Konzept der transversalen Vernunft (Suhrkamp, 1996); see also Félix Guattari, ‘Transdisciplinarity Must Become Transversality’ (2015) 32(5) Theory, Culture & Society 131.
[38] Bundesgerichtshof, III ZR 98/12, 24 January 2013 reported in (2013) BGHZ 196, 101.
[39] Malte-C Gruber, ‘Digitaler Lebensraum’ in Bertram Lomfeld (ed), Die Fälle der Gesellschaft: Eine neue Praxis soziologischer Jurisprudenz (Mohr Siebeck, 2017) 115, 119–24 (‘Digitaler Lebensraum’).
[40] For a recent discussion of value in different social contexts see Isabel Feichtner and Geoff Gordon (eds), Constitutions of Value: Law, Governance, and Political Ecology (Routledge, 2023).
[41] Digitaler Lebensraum (n 39) 122–3.
[42] This is a fictitious case inspired by the factual scenario in United States District Court for the Northern District of California, San Francisco, Case No 12-cv-05524-JST – Class Action Otey/Greth v Crowdflower.
[43] Isabell Hensel, ‘Hire Me! Arbeiten in der Crowd’ in Bertram Lomfeld (ed), Die Fälle der Gesellschaft: Eine neue Praxis soziologischer Jurisprudenz (Mohr Siebeck, 2017) 183, 188–94 (‘Hire Me!’).
[44] See generally Stefan Grundmann, Fabrizio Cafaggi and Giuseppe Vettori (eds), The Organizational Contract: From Exchange to Long-Term Cooperation in European Contract Law (Ashgate, 2013).
[45] Isabell Hensel, ‘When Gorillas Strike: Constitutional Protection of Non-Value Institutions in Labor Law’ (2024) 44(1) Zeitschrift für Rechtssoziologie 141; see generally Gunther Teubner, Networks as Connected Contracts (Hart Publishing, 2011).
[46] See generally Hugh Collins, ‘Introduction to Networks as Connected Contracts’ in Teubner (n 45).
[47] Pablo M Baquero, Networks of Collaborative Contracts for Innovation (Hart Publishing, 2020) 41–2.
[48] Gunther Teubner, ‘Coincidentia Oppositorum: Hybrid Networks Beyond Contract and Organization’ in Marc Amstutz and Gunther Teubner (eds), Contractual Networks: Legal Issues of Multilateral Cooperation (Hart Publishing, 2009).
[49] Ralf Rogowski, Reflexive Labour Law in the World Society (Edward Elgar, 2014).
[50] On responsiveness, see fundamentally Philippe Nonet, Philip Selznick and Robert A Kagan, Law and Society in Transition: Toward Responsive Law (Routledge, 2001).
[51] Niklas Luhmann, ‘Interaction, Organization, Society’ in Niklas Luhmann, The Differentiation of Society (Columbia University Press, 1982) 69.
[52] Google Spain SL and Google Inc v Agencia Española de Protección de Datos and Mario Costeja González (AEPD) (Court of Justice of the European Union, C-131/12, ECLI:EU:C:2014:317, 13 May 2014); Delfi AS v Estonia [2015] ECHR 586; Bundesgerichtshof, ‘Anonymität der Nutzer von Bewertungssystemen’, BGHZ 201, 380.
[53] Hire Me! (n 43) 194.