In Defence of Lazy Programmers
Does the YouTube recommendation algorithm radicalise people? From my review of the literature, the answer is: I don’t know. Popular opinion says it does, and some evidence seems to support this, though, as with many things in the social sciences, there are contrasting voices.
It might be easier, then, to take this conversation out of the social sciences. For instance, wouldn’t we be able to answer this question quite quickly if we just looked at the recommendation algorithm? Setting aside the difficulties of doing that, and assuming we could, we might quickly identify something like a Markov chain, with a weighting function indicating the likelihood of the recommended video being a far-right video, and weighting functions for alternative videos of different political persuasions. Then, all we would have to do is compare the weightings: is the YouTube algorithm more likely to recommend a far-right video to you?
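As a purely hypothetical sketch of this toy model, here is a minimal Python version. Every category name and weight is invented for illustration, and the real algorithm is certainly far more complex than a single set of values:

```python
import random

# A purely illustrative toy model: a Markov-style step where the next
# recommended video's political category is sampled according to a
# hypothetical weighting function. All categories and weights here are
# invented; the real algorithm is certainly far more complex.
RECOMMENDATION_WEIGHTS = {
    "far_right": 0.2,  # hypothetical likelihood of a far-right recommendation
    "centrist": 0.5,
    "far_left": 0.3,
}

def recommend_next_category(weights=RECOMMENDATION_WEIGHTS):
    """Sample the political category of the next recommended video."""
    categories = list(weights)
    return random.choices(categories, weights=[weights[c] for c in categories])[0]

# Answering the probability question is then simple inspection:
# is weights["far_right"] larger than the alternatives?
```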
This information is definitely valuable and would certainly be an important catalyst for the evolution of discussion in this area. It is important to recognise, though, that a question about probabilities is different to our original question.
Consider targeted advertising, for example. Just because we can technically put the perfect advert in front of someone at the most opportune moment doesn’t mean we do put the perfect advert in front of someone. Targeting is only effective if the advert induces a person to click on it, and then buy something. Targeted advertising makes this probabilistically more likely, of course, but probability alone cannot firmly tell us anything about the behavioural response. Similarly, the YouTube algorithm may (and I am talking wholly hypothetically) recommend more right-wing content than left-wing; but this alone tells us nothing of how viewers are responding to this content. Some will immediately click off. Others will enjoy the content and want more. Most, probably, will let it pass them by. Knowledge of the technical aspects of the algorithm, therefore, doesn’t really answer our question.
On top of this, knowledge of the ‘right-wing probability weighting’ raises its own question: why is it the value that it is (assuming, of course, such a function exists; the algorithm is almost certainly more complex than a single value)? Perhaps this was a conscious design decision, which is tangential to our original question but is interesting for a range of groups (academics, regulators, viewers etc.). Or perhaps this value is the ‘learned’ result of data, such as user traffic? The algorithm may have acquired this probability weighting from user demand for right-wing content. This, in turn, challenges the original question from a new angle: the algorithm isn’t radicalising anyone, but radical people are using the platform. We then might be interested to investigate why that is.
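To make the design-versus-data distinction concrete, here is a minimal, hypothetical sketch of how such a weighting might be ‘learned’ as an empirical frequency of user clicks rather than set consciously by a designer. The click log and categories are, again, invented for illustration:

```python
from collections import Counter

def learned_weights(click_log):
    """Estimate category weightings as empirical click frequencies.

    click_log is a hypothetical list of category labels, one per video
    a user clicked on. No designer chooses the weights; they simply
    mirror user demand.
    """
    counts = Counter(click_log)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

# If users mostly click right-wing content, the learned weighting skews
# right-wing without any conscious design decision:
print(learned_weights(["far_right", "far_right", "centrist", "far_left"]))
# {'far_right': 0.5, 'centrist': 0.25, 'far_left': 0.25}
```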
The point I’m making with this discussion is that complete technical knowledge of something like the YouTube algorithm cannot wholly or adequately answer a social science question like, “does the YouTube algorithm radicalise people?” Equally, as noted above, this is not to say social scientists can do the job without technical expertise. In the absence of firm details regarding the YouTube algorithm (again, as an example), various social scientists have sought to replicate the algorithm quantitatively, with varying results; some have engaged in qualitative research, which is important but may not be representative of the experiences of the platform’s millions of users; and others have sought to document the material fallout of the site’s political side, and generally ignored the algorithm.
These methods can be criticised, as Princeton Professor Arvind Narayanan recently did regarding some recent quantitative research on the algorithm. But criticism of these methods often comes in another form: namely, criticism due to a lack of technical expertise on behalf of the authors. I don’t think this is necessarily an invalid argument; social scientists and philosophers should have a working grasp of the technical aspects of the topic they are writing about, lest they fail to caveat their wilder speculation. But I do not think a lack of technical knowledge, which often takes the form of a lack of first-hand coding knowledge, should disqualify social scientists from commenting on and researching technologies such as machine learning or AI. Making this argument is the crux of this article.
In short, technical knowledge of how to build technological tools is neither a sufficient mandate nor a sufficient qualification to comment exclusively on the social and material consequences of how those tools are used. This is not to say that the developers of Facebook or Google or YouTube do not and cannot hold opinions on these things; rather, it is to say that their being developers does not and should not give them the exclusive right to comment on and critique the purpose of their platforms. As above, social scientists and philosophers should probably know something about the technical aspects of the technology they are critiquing, if for no other reason than that a working knowledge makes for wiser criticism. But to disqualify them for knowing less than a developer is unreasonable and, in addition, hypocritical.
If, due to a (comparative) lack of technical knowledge, a social scientist or philosopher is supposedly unqualified to comment on technologies, can’t the same argument be made that developers and technical experts are unqualified to comment on the social issues that their technologies bring about? By imposing arbitrary knowledge barriers which must be reached to justify criticism, this necessary conversation breaks down. Developers are consigned to developing technologies without ever questioning the impact of those technologies on society (and, in turn, without questioning the value of those technologies), while social scientists and philosophers are forced to comment on the fallout of these technologies through an inhibitive lens.
This, to some, may not actually sound like a bad thing. Most successful technology entrepreneurs, given their rhetoric, seem to believe they are good people with only noble ambitions. In this world, because those in charge of the technology are already benevolent, we the population should simply trust them, and in doing so, any need for philosophical criticism or social scientific analysis becomes moot. And I’m sure, for the most part, these technologists are good people who do have noble intentions. However, either through a lack of trust (which I think is fair), or simply because the noble intentions of one person rarely align with the noble desires of all, it is unreasonable for society to just let the technologists get on with things.
For instance, most people can’t explain why they don’t like the idea of a CCTV camera in their bedroom. Most people, also, have nothing to hide, and most people, probably, would benefit from the various convenient and automated services which data collected from this surveillance could provide. Yet, even with no articulate reason, nothing to hide, and lots to gain, most people still do not have CCTV installed in their bedroom. This may be an extreme example, but the growth in what Professor Shoshana Zuboff calls ‘surveillance capitalism’ means this analogy is not too far-fetched. Allowing technology a free hand rubs up against social ideas which may, from some points of view, appear irrational, and to restrict commentary on this friction on the basis of a lack of technical knowledge is to unilaterally dismiss and diminish the value of these social ideas. As a maxim on this point, I would say this: the moment something encroaches on another’s life, that other has a right to comment and criticise, even if the circumstances of the encroachment are not deemed morally, socially or legally unacceptable.
A slight twist on this whole affair may come with another analogy. An author who writes a novel has a certain degree of authority over some commentary regarding the novel, and little authority over other aspects. For instance, a fan theory that one character is in love with another character can be definitively rebuked by the author, because the author is essentially God over those characters. Of course, the author cannot stop fans believing in the theory, but it is the author who has the authority to craft the original narrative how they see fit. However, the author does not have the authority to proclaim their novel definitively good. The quality of the novel is a product of the author’s efforts, but the criteria by which that product is evaluated are not in the author’s gift.
Relating this to commentary on technology, it would be perfectly acceptable for Mark Zuckerberg, for instance, to refute the claim that the Facebook website makes toast, because nothing in the Facebook code (to my knowledge) allows the site to produce toast. But it is not in Zuckerberg’s gift to proclaim Facebook to be good for society. Developers at Facebook do not have authority over the criteria through which the social benefit of Facebook will be evaluated. The social good of Facebook is determined by the population of users, and discovering what opinion is held requires some level of investigation: it requires social scientists and philosophers.
I’d also like to say, and this might be contentious, that developers aren’t that special. For instance, let’s return to the YouTube algorithm question. The general narrative amongst those who believe the algorithm does radicalise people is that the algorithm nudges people towards right-wing content. By nudging, they generally mean the concept from the subset of behavioural science called nudge theory, which argues that small changes in how a prospect is framed can have a significant and predictable influence on what decision is made. Academically, my work is in nudge theory, and I have some problems with how those who write about the algorithm use the term ‘nudge.’
To be sure, the algorithm is nudging people. But nudges by definition allow people to go their own way; to make an alternative decision. Nudges, at least on a surface level, respect human autonomy and, further, almost have to fail sometimes so as not to appear coercive. This means that nudging is the right term when discussing the algorithm, but in the absence of a clear clarification that nudges aren’t irresistibly compelling tools, the use of the term can become skewed. Now, here’s my point: only someone with an expertise in nudge theory would probably notice the absence of these clarifications and feel it necessary to comment on it. I would argue the same is true for those with technical knowledge reading social science work, or philosophy of technology work.
Of course, sometimes the misuse of specialist knowledge is egregious, as nudge theorists will attest. But often technical knowledge is used within the bounds of reasonable interpretation and understanding, and what’s more, almost all social science or philosophy will draw from a multitude of specialities and inevitably be somewhat cross-disciplinary. All of this is to say, in research that draws on different disciplines, what authority does one discipline have over others in evaluating the validity of the research? Nudge theory, for instance, draws on psychology but isn’t psychology, so can the psychologists unilaterally dismiss the subject? And this is to say nothing of sociologists, or natural scientists, or philosophers, all of whom I’m sure have skin in almost every game if we look hard enough.
At this point, I should say, there is a risk of drawing an unwelcome dichotomy. It is not that research on technology and society must be either technical or social. Ideally, the study of technology’s impact on society would be more cross-disciplinary than it is. Ideally, social scientists and philosophers writing about machine learning or AI would have some experience of those things, or at least would educate themselves. Ideally, developers and technologists would be more transparent with their work and share data with researchers. This shouldn’t be a destructively antagonistic conversation, but one of productive antagonism. My great fear in this article is that I have painted this as a battle between disciplines, rather than a collaboration.
But I think the collaborative message does come through in places. Transparency over the YouTube algorithm, for instance, would be a huge boon for the social scientists now stuck in a pickle debating what to do next. Equally, rising concerns about privacy and invasive technology could probably be tackled by a more open discussion about the merits of technological progress, bringing together technologists, social scientists and philosophers. And, just to emphasise the lack of a dichotomy, most disciplines borrow and adapt ideas from other disciplines, which may raise interpretive problems but also functions as an opportunity for collaboration. What I’m basically saying is, let’s be a bit pluralist about these things.
My central argument, however, remains a defence of social science researching technologies in society: technical knowledge of how to build technological tools is neither a sufficient mandate nor a sufficient qualification to comment exclusively on the social and material consequences of how those tools are used.