Is work what makes us human?
Alex Beattie

How To Be Human In The Digital Economy
Nicholas Agar
MIT Press, $27.00
ISBN 9780262038744

Take a minute to consider the speed with which technology is changing our everyday lives. Local banks have cut their opening hours because most of our neighbours now do their banking online. The pop-phrase “just Google it” has retired the Oxford English Dictionary to the bookshelf to collect dust. Even self-service kiosks – once a visual oddity in New World supermarkets or McDonald's restaurants – now seem part of the commercial furniture. Such breathless technology-led change raises the existential question: what value will humans have in the future?

Vocation, vocation, vocation, claims Nicholas Agar, the author of How To Be Human In The Digital Economy. Agar, a professor of ethics at Victoria University of Wellington, believes work is an essential ingredient of being human. It’s a bold claim: most books about the looming threat of artificial intelligence and automation conclude that no profession is safe, so humanity’s best defence against increasingly smart technology is a Universal Basic Income (UBI). In How To Be Human, Agar takes a different approach, calling for safeguards for a particular type of work he considers best performed by people. Agar wants to protect jobs based on our social natures: baristas, nurses, any profession where the personal touch adds meaning to our everyday lives. He calls this a “social economy”.

Conceptualising an economy to protect human-based work is a noble pursuit. Agar is right to point out that working has been immensely beneficial to society, forcing together groups of people who would never have interacted otherwise. Furthermore, Agar interrogates whether a UBI really offers a silver bullet for an egalitarian future society. He suggests, instead, that a flat benefit would exacerbate the divide between those who own the latest machines and those whose jobs have been automated by them. How To Be Human is also an impressively accessible book, with Agar’s key arguments clearly signposted and peppered with wit. When pondering whether machines are suited to deeply personal interactions, Agar muses: “if a robot performs my prostate exam I have no grounds for awkward feelings about the state of my bum and worries about my decision to order the extra-spicy vindaloo for lunch.” Agar has a knack for making the often-dry topic of technology and automation amusing.

But a few jokes could not convince me of the central message of How To Be Human. Does work really constitute what it means to be human? While the merits of a hard day’s labour resonated with my Protestant heritage, Agar yokes a very narrow conception of humanity to work without sufficient justification. In fact, there is hardly any interrogation of what constitutes “humanness” at all; Agar simply argues that he finds human-based work more appealing than a UBI or alternative future work economies, such as Jeremy Rifkin’s collaborative commons. This lack of enquiry is perhaps due to Agar limiting his research on technology and humans to the economics literature. Economics – the science of analysing market behaviours – tends to reduce the human condition to rudimentary psychology, offering a gloomy outlook on society (hence its nickname, “the dismal science”). That is not to say that there are no merits to working per se, but rather that an economics-inspired view of the good life rests on a fairly conservative and transactional understanding of humanity. What is more, I cannot imagine telling anyone in the precarious “gig economy” that their work is humanising. A nine-to-five job may be cushy, but scores of Uber drivers and hospitality contractors are forced to work multiple jobs, long hours and late shifts to make ends meet: working conditions that sound more dehumanising than anything else. Perhaps a more suitable title for How To Be Human would have been How To Create Work In The Digital Economy.

Relying predominantly on the economics literature also means Agar paints technology in broad strokes. His unit of analysis for technology as a driver of change is the “digital package”, a catch-all term that conflates every element of digital technology – good and bad – into a nebulous category that makes technology impossible to criticise. Other, technology-specific disciplines offer more precise toolsets for grappling with the limits of technology and the threat of automation. Media and technology theorists such as N Katherine Hayles and Langdon Winner, for example, have long considered the role of technology in creating change across society. They point out that, on the one hand, we extend our “humanness” by distributing our cognition through technologies (from wearing glasses to using smartphones as mnemonic devices to offload our memory), and, on the other, we reject technologies we consider inappropriate (such as the infamous Google Glass augmented-reality glasses). What is more, worrying about far-distant future events, such as end-to-end job automation, tends to blind us to the subtle but insidious forms of techno-engineering being driven right now by the likes of Facebook and Google. The degree to which many of us are hopelessly glued to our smartphones suggests that it is not only job automation we should worry about, but the automation of human behaviour itself. The ease with which incentives and other “nudges” are embedded into our everyday routines has the numbing effect of turning humans into simple stimulus-response machines. Only by zeroing in on such pernicious components of the digital economy can meaningful reform be enacted.

I am also wary of the limited scope of work offered by Agar’s social economy. Humans may be social creatures, but we each have varying degrees of sociability. Much of the work I enjoy – writing or designing teaching materials – requires the opposite of social behaviour. Such introspective work is best done in solitude, away from distractions and noise, and it is part of what I consider makes me “human”. But, of course, my conception of my humanity differs vastly from that of others. This is why technology scholarship has moved away from trying to define what “human” is, shifting instead to the question of who decides. By defining highly social work as “human”, Agar risks excluding a significant part of the population that thrives on working alone. His social economy therefore risks reinforcing the subtle workplace inequalities that Susan Cain articulates in Quiet, whereby extroverted staff are elevated into positions of power while their introverted colleagues are left by the wayside. Any claim about what constitutes “humanity” should come with the warning that such definitions are often used to dehumanise and control those who fall outside them.

Recent news suggests that examinations of our humanity are heading in a more inclusive direction. In April 2019, Google announced it was disbanding its AI ethics board just one week after forming it, largely in response to public outcry over the board’s inclusion of Heritage Foundation president Kay Coles James, a well-known conservative figure with a record of anti-LGBTQI+ and anti-transgender views. The concern was that, with James on it, the board could not credibly grapple with key ethical issues of fairness, rights and inclusion in AI. Such decisions offer a better hint of a future framework for debating humanity than anything in How To Be Human: a debate in which key ethical issues are settled not by declaring what is (and, therefore, what is not) “human”, but by scrutinising the more pertinent question of who decides.

Alex Beattie is a PhD student in the media studies programme at Victoria University of Wellington, researching new ways to disconnect from the internet. 
