Conferencing against online harassment

by Nicole Shephard

 

The feminist implications of big data and privacy

The European Women’s Lobby recently hosted an online conference and a Twitter chat under the banner of their #HerNetHerRights project. I had the great pleasure of giving a short talk about a few feminist implications of big data and privacy.

The #HerNetHerRights project maps the current state of online abuse against women and girls in Europe. Read more about the project and the conference, and watch the conference video here. Update: the full report and a condensed online resource pack have since been launched as outputs of the wider project.

What follows here are my original notes for the occasion (i.e. before trimming them to fit my time slot).

Introduction

We’ve come quite a long way since the early days of the big data hype. After initial hyperbolic claims that data would solve all our problems, answer all the questions we never had, and replace theory and the social sciences, a lot of critical work has appeared and toned the narrative down by a few notches. A couple of things are quite clear by now:

  • Even the biggest dataset is only ever a sample and never a whole population;
  • Who is included in that sample isn’t only geographically specific but also gendered, raced, and classed;
  • Data as such is never neutral, objectivity is a very persistent myth (in data as elsewhere), and algorithms don’t emerge from some kind of egalitarian vacuum;
  • Technology is embedded in the same kinds of unequal power relations as everything else;
  • And (almost) anyone can become a data point in one way or another, but only a few can collect data on a large scale, and an even smaller elite can process and analyse that data.

My worry around contemporary data practices is that the feminist tools we have at our disposal to think about data, about knowledge production and research methods are underused. Here I have in mind things like intersectionality or reflexivity, but also concepts like situated knowledges, strong objectivity, epistemic violence, or all the feminist work around agency and consent.

On that note, I turn to a couple of somewhat disjointed tensions around the feminist politics of data and privacy in relation to online harassment.

Feminism & privacy

Feminism has a fairly complicated relationship with privacy. A great deal of exploitation, violence and coercion takes place in the privacy of the home, which has made it difficult to politicise the personal. That history and decades worth of feminist debate around the public/private divide make many a feminist cautious, if not reluctant, around the protection of privacy online.

We might productively think of this as a kind of double-bind:

Privacy, on the one hand, is a privilege that isn’t afforded equally to all. Women, queers, trans people, people of colour, disabled people, poor people, recipients of state benefits, refugees and others – particularly those inhabiting multiple intersections – are groups whose privacy was never considered on an equal level with that of privileged groups.

But turning this argument on its head, we can also think of giving up privacy, of consenting to transparency, as a privilege that isn’t distributed equally either. We all have something to hide, of course, but who is in the best position to reveal just about everything without having to fear the consequences? Here, again, we’d have to list the most privileged groups, perhaps headed by the figurative heterosexual white cis man.

Instead of engaging in good privacy vs. bad privacy debates – is privacy good for women or is privacy bad for women – we may choose to focus on highlighting these inequalities in access to privacy and on making its protection online more widely accessible.

There is a good feminist argument to be made for the protection of online privacy. Such an argument would, at a bare minimum, reference how marginalised groups are exposed to even more harassment and persecution without privacy. Or that the freedom to experiment with gender, sexuality, and coming to one’s identity more broadly relies on privacy and anonymity. And how, for feminist activists, researchers or journalists, a lack of privacy can lead to exposure and harassment – not only of themselves but also of the often marginalised people they work with.

A feminist politics of data

The politics of data affect a wide range of people, but not all in the same ways. That includes big data and surveillance, questions around privacy and anonymity as well as online harassment and other forms of technology-related violence.

The response to online harassment as a social problem is often to call for more research, bigger data = better data. I too recall complaining that, particularly at the European level, we don’t have enough data on online harassment and that more research is necessary. While in some ways certainly a reasonable objective, the desire/need for more data also seems at odds with feminist concerns about the complicity of big data in surveillance and control, the epistemic violence that comes with the counting, sorting, and managing of populations, and the struggle against non-consensual and disempowering uses of data.

Is there such a thing as feminist data? I don’t know. But some data practices are definitely more in line with a feminist ethics than others. That includes, for example, thinking carefully about agency and consent in relation to data. That includes questioning the power relations behind who gets to collect data and who becomes a data point. That includes questioning and critiquing not only the data practices of governments and large corporations but also our own research and activism with data.

  • It’s important to always question what is being left out of the frame.
  • Which questions do we ask of our data, and which do we leave unasked?
  • What silences might the missing data reveal?
  • Which outliers in the data might reveal discrimination and exclusions?
  • But also: who gets to do the definitional work?
  • Who decides what counts as harassment in the data and what doesn’t?
  • Is our approach intersectional enough to capture the ways in which harassment affects different people differently?

These and similar questions apply to human and algorithmic efforts to stop online harassment by means of data and technology. But they also apply to our own understanding of online harassment and to our research and efforts to get bigger and better data about it.

A fairly recent large-scale European study, for example, interviewed 42 000 women across all EU countries and published a report of almost 200 pages, yet failed to mention race even a single time. That still feels symptomatic of the European context. The small section dealing specifically with online harassment was also quite narrow in what counted as harassment, reducing it to two or three items. As a result, it reported much lower numbers of women affected by online harassment than other recent studies that operationalised harassment in more nuanced ways – but that’s a different story.

It makes sense, of course, to talk about the online harassment of women and girls. Research shows that they are indeed disproportionately targeted by online abuse. But it is also worth keeping in mind that not all online violence is gender-based. And that, in turn, we can only have a very partial understanding of online violence against women if we neglect the intersections between gender, race, religion or sexuality. Producing data that further conflates online harassment with gender and women is too narrow.

A feminist politics of data has to go further than simply adding women to data or generating more data about women. At a minimum, it should be intersectional enough to capture how race, sexuality, or religion factor into the online harassment of women. But, ideally, it would capture the issue more broadly and work towards data that includes women, men, trans and non-binary people to examine how gendered and sexualised harassment, racist harassment, Islamophobic and anti-Semitic harassment etc., intersect and affect different groups of people differently.

Feminisms vs. online harassment

I conclude by highlighting one more tension we encounter in figuring out what to do against online harassment. That is, we need to be a little bit careful what we wish for in two related ways.

First, some methods to eliminate abusive online content and users can be counterproductive. That’s the case when such methods are easily turned against those they are built to protect, or when they almost by definition exclude the more vulnerable groups among our own ranks. Algorithmic filtering and blocking of abusive language to get rid of harassing content, for example, also silences resistance and counter-speech that uses similar language, as well as simple profanity, slang, and regional variation in which terms are used in which context. Machines are still terrible at parsing nuance, context, meaning, irony, and reclaimed or repurposed language.
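To make the filtering problem concrete, here is a minimal, purely hypothetical sketch of keyword-based filtering in Python – the blocklist, the placeholder terms, and the example messages are all invented for illustration and don’t reflect any real platform’s moderation code:

```python
# A deliberately naive blocklist filter (hypothetical, for illustration only).
# "slur" and "harassingword" are stand-ins for actual abusive terms.
BLOCKLIST = {"slur", "harassingword"}

def is_abusive(message: str) -> bool:
    """Flag a message if it contains any blocklisted term, ignoring context."""
    words = {word.strip(".,!?").lower() for word in message.split()}
    return not BLOCKLIST.isdisjoint(words)

messages = [
    "you are a slur",                           # abuse: flagged (true positive)
    "don't call her a slur, that's harassment", # counter-speech: flagged (false positive)
    "we reclaim the word slur for ourselves",   # reclaimed usage: flagged (false positive)
    "you people don't belong here",             # context-dependent abuse: missed (false negative)
]

for msg in messages:
    print(is_abusive(msg), "-", msg)
```

Real moderation systems use more sophisticated classifiers than this, but the underlying problem remains the same: surface features of language say very little about intent, context, or who is speaking to whom.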

Or, forcing real names may curb some forms of harassment, i.e. get rid of some harassers. But it also further silences and excludes those who rely on anonymity for their feminist work or for their personal safety. Similarly, mechanisms designed to identify and/or report abusers risk being turned against those abused – "troll" is seldom a self-designation, after all. Body shaming and transphobia under the guise of Facebook’s community guidelines on nudity is just one example that comes to mind.

Long story short: no measure, even if it promises an interim solution to some aspect of the problem, is a feminist measure if it also holds the potential to further silence and put at risk women, communities of colour, queers, and other groups who get marginalised online.

And second, when we wish for such “quick fixes”, we delegate a complex social problem that crosses online/offline boundaries, geopolitical contexts, as well as platforms, to companies that are often confined to particular geographical and legal contexts, that are often not particularly diverse or inclusive in their workforce, and that have a somewhat abysmal track record in doing justice to such issues in the past.

Holding platforms accountable for the harassment they enable is reasonable – not least because that’s where the potential lies for measures to reduce harm by design and for giving users some agency over what content and other users they engage with, and in what ways. But we also need to be very aware of the power relations these solutions are embedded in. The tech sector is ill-equipped to apply a feminist politics of data. Many technologists working on new features for these platforms simply lack the background in feminist theory and practice to adequately frame and address online harassment.

By delegating the problem, we give already powerful private sector corporations even more power, and discursive control over online harassment and what should be done against it. If I were to pointedly exaggerate here, I might ask: are we comfortable delegating the “fixing” of the online harassment of women, queers, people of colour and trans people to a majority of white Silicon Valley bros? Speaking of “the master’s tools…”.

My tentative final words here might be that working towards solving a complex social problem like online harassment requires us to work together, to build coalitions across disciplines, across industries, and across platforms rather than point fingers, attribute blame and demand others fix “it” (be that tech companies, legislators, …).

Conference flyer (Source: European Women’s Lobby)

Hire me for research, writing or consultancy around diversity, inclusion and the intersectional politics of data and technology. Get in touch!