An Interview with Sarah Jeong, Author of The Internet of Garbage

In her new book, The Internet of Garbage, Sarah Jeong states that “the Internet is, and always has been, mostly garbage.” She talked with The Toast about topics discussed in her book, including online harassment, doxing, spam, free speech, and the challenges of moderating content platforms and social media networks. 

You can buy Sarah’s book on iTunes or Amazon.

The Toast: If the Internet has always been mostly garbage, why did you write this book now? Do you think we’re better positioned in terms of either will or technology to take more of the garbage out?

Sarah Jeong: The book positions online harassment as part of a larger category of long-extant problems, but when it comes down to it, it’s still a book about online harassment. One of the things I wanted to do with the book was to hammer in how online harassment has been around forever — but I don’t think there would have been an audience for the book until fairly recently. There’s a lot more mainstream awareness of harassment and online misogyny in particular.

Why do you think that is? More media coverage, more survivors of online harassment speaking out?

100% media coverage. Part of that has to do with journalists being aggressively harassed — the journalists then turn around and use their platforms to show the world what is happening to them.

But that’s not the whole story. The Internet now includes a much broader swath of the entire population, which means that the old trite victim-blaming along the lines of “it’s just the Internet” doesn’t work so well. We now recognize the Internet as just another arena for our day-to-day lives, a place that’s no less real than the offline world. The Internet’s ubiquity also means that large-scale incidents of harassment become very large-scale, sucking in celebrities, journalists, even entire media organizations.

In the book, you mention some of the issues with media coverage of harassment — from reports not being clear about the definition of “doxing” to the focus on white cis women who’ve been harassed to the tendency to make harassment seem smaller or less threatening, less real-world, than it really is. Now that we are talking about it more, how can we ensure better, more accurate coverage of this issue? (Apart from sending a copy of your book to every single member of the media!)

The most important thing to address is how people of color — particularly black women — are either erased or villainized when we talk about online harassment. I would love to see a book about online harassment that centers on people of color. I wish my book could have done that, but unfortunately, there just aren’t a lot of studies on, for example, how race exacerbates harassment. There aren’t a lot of media accounts, either. When black women get harassed, either their stories never appear in the media, or their stories get retold, blaming the black woman for the ensuing harassment. See, for example, Jon Ronson’s shameful treatment of Adria Richards.

This isn’t just an issue of equitable treatment in the media. It actually has serious policy ramifications. Some of the most prominent funded anti-harassment activism centers on carceral remedies — that is, resorting to police, prisons, and the criminal justice system. If you’re a person of color, trans, and/or a sex worker, you may be less willing to go to the police.

A related problem is how harassment gets cast as “a torrent of mean words.” And yes, a torrent of mean words really sucks to experience, and user interfaces should be designed to mitigate that, but that’s just froth on top of things like having your address published, your social security number published, your children threatened, your accounts hacked, strange packages arriving at your door, strangers following you around your city. One reason why the media focuses on unruly speech over, say, doxing or stalking or swatting, is that mean tweets are out there in the open for everyone to see. No need to do any actual reporting. But this tendency is very harmful. It treats targets like they are fearful and upset because of “mere words.” Targets of sustained harassment aren’t thin-skinned; they’re often being subjected to campaigns aimed at making them afraid.

You make the point in your book that legal systems should address “the more extreme forms” of online harassment. What are some of the existing gaps in current law, and what would be a good start in terms of addressing them?

A great place to start is making it harder to dox people — nonconsensual disclosure of a person’s physical address or social security number or other information protected by law. For a lot of people, if you type their first and last name into a search engine, you’re going to get their home address. Most of the time, that address is available through a data broker that sells information about individuals. There’s no reason why it should be that easy to physically locate and harm a private individual you got mad at on the Internet.

One of the things I worked on in the last month was a letter to ICANN (icann.wtf) asking them not to strip domain privacy from site owners — in short, the organization that ultimately controls domain names was considering a proposal that would have made it harder to conceal your residential address if you owned a domain. Why were they considering this at all? Because the record industry wants to fight piracy. I know it doesn’t really make any sense, but that’s the state of Internet regulation today. Everything is primarily driven by corporate interests. There is little thought for the safety and privacy of ordinary individuals. There are many changes that can be made at both the national and international level that would make all potential targets of harassment safer.

You wrote, “Anti-harassment is about giving the harassed space on the Internet, and keeping the electronic frontier open for them.” You point out that simply deleting offensive or abusive content directed at another user is insufficient; online platforms need to be planned and built to discourage harassment and also provide better ways of reporting it. Can you think of a platform or community that has done/is doing this well?

Metafilter is an unusually good online community, for many, many reasons, including robust moderation and a $5 sign-up fee. For obvious reasons, most platforms can’t just become Metafilter. Social media platforms are really struggling with these problems right now, and are resorting to some tactics that I criticize in the book as either unethical or doomed to fail. Strangely enough, online games like League of Legends and EVE Online are trying out some really innovative approaches around building and sustaining good communities. I don’t think the specific approaches can be applied to social media platforms, but the games are at least thinking outside the box instead of just adding more man-hours to the labor of sorting and deleting content.

You explain that the First Amendment doesn’t actually apply to online platforms like Facebook and Twitter, even though they often co-opt it to defend their policies or actions. How should platforms balance the general goal of free speech against the need for some kind of community moderation?

Well, the following is going to sound like a bit of a non-answer because when it comes down to it, each big platform is very different. They’re successful simultaneously because they occupy different niches in the ecosystem. I think, first of all, platforms have to decide what they are. Not all platforms need to embrace free speech. Obviously, a support group for people with epilepsy is going to have very different content-based rules than an online club for .gif-makers. One of the things that I go into a little bit in the book — but really not enough — is how Facebook and Twitter and Reddit and so on are different from smaller platforms like that. For one thing, Twitter and Reddit at least avow a dedication to free speech. And Facebook is seeking to become so ubiquitous — and succeeding, to some extent! — that some people would argue it’s a utility or common carrier. I’m not sure I’d go that far, but Facebook is definitely in a position where I’m really uncomfortable with strict content-based rules on their end.

For a platform that does or should tend towards free speech as a principle, the goal is to make a robust marketplace of ideas, one that’s open to many different kinds of voices. Moderation paradoxically increases the number of voices heard, because some kinds of speech chill other speech. The need for moderation is sometimes oppositional to free speech, but sometimes moderation aids and delivers more free speech.

Bearing in mind that every platform is different, what are some general guidelines for good moderation of a space or community — ones that discourage harassment as much as possible without creating other problems for the site?

Rules that are both clear and easy to apply, from the perspective of both the users and the enforcers. Consistent enforcement of the rules. Both positive and negative reinforcement of behavior — something that reminds would-be bad actors that there are certain norms on the platform, and that other users expect them to adhere to those norms. And a compassionate user interface. Compassionate might mean that it’s easy to report abuse, or that it gives the users options to mute, hide, or even delete the abuse when it gets to be too much. But it can also just mean being reassuring. When you report abuse on Facebook, for example, it gives you this silly little message, “We’re sorry you had this experience.” More hardened Internet users might laugh it off, but think about what that means to someone who has honestly never encountered abuse before and isn’t entirely sure what just happened to them.

Can you talk about the problems with criminalizing revenge porn, which I know you have written about before?

Well, first off I think it’s important to define revenge porn. Revenge porn is nonconsensually distributed nude or sexualized photographs. (The exact definition does depend on the statute.) I start with that, because when people think “revenge porn” their minds actually go to the usual narrative of revenge porn, rather than any strict definition — girl meets guy, girl and guy have a relationship, girl takes naked pictures for the guy, girl breaks up with guy, guy turns out to be a total scumbag who then posts her pics on the Internet. This is a narrative that does play out. But it’s actually the minority of revenge porn cases. From what we know, most revenge porn (that appears on sites dedicated to revenge porn) is hacked from victims, and the hackers have no relationship to the victim. And a good chunk of revenge porn is actually photoshopped — it’s not a real nude photo of the victim.

Efforts to criminalize revenge porn focus on two different kinds of criminals. One is the initial discloser, or what most people think of as the “scummy boyfriend.” Depending on how broad the statute is, it can actually net tabloid papers or people who are reporting abuse. So the first example would be the overused “What about Anthony Weiner’s dick pic?” hypothetical. The second example would be something like — say, you get an unsolicited dick pic. You then show it to your friend, or post it online. Depending on how the statute is written, you may have broken a revenge porn law! So these statutes — the ones aimed at “scummy boyfriends” — have to be carefully crafted. And so far, at the state level — a lot of them just aren’t. The one in Arizona is currently not being enforced because a court has ruled that it’s probably unconstitutional.

The other kind of criminal is a revenge porn site operator. So these are people who own and control centralized repositories of so-called revenge porn. They are the reason why revenge porn is so harmful — they are purposefully trying to destroy women’s reputations and lives by bombing their Google search results. The only way to create a law that could criminalize an operator specifically for revenge porn is to create a federal law that punches a hole through CDA 230, which is a law that I describe in my book (you can actually read the relevant chapter here). CDA 230 is an important law without which we would probably not have the Internet as it exists today. Everything would have been sued out of existence by the Church of Scientology or Westboro Baptist or people like Chuck C. Johnson.

Punching a hole through CDA 230 is a really drastic measure, particularly since most revenge porn site operators are totally prosecutable under existing law. They’re usually running an extortion racket connected to the website, or they’re hacking people for the photos, and so forth. If state attorneys general actually wanted to go after any revenge porn site operator, they could. (And Kamala Harris, here in California, has successfully gone after Hunter Moore.) And if the U.S. Attorney General wanted to go after revenge porn sites, she totally could as well, by using the authority given to her by § 2257. So there are ways to address these problems without new statutes — and the creation of new statutes is fraught with many issues.

You and I were recently talking on Twitter about how, despite all the garbage, the Internet can be a powerful force for good. Can you talk more about how that happened for you, and also if you do have hope that we can someday make it a less garbage-y place so it’s safer and more useful for everyone?

I grew up in a very religious, very conservative household, and went to a very religious, very conservative high school. And I am presently not at all religious or conservative. I would have never become the person I am today if I hadn’t been an avid user of the Internet. As a teenager, the Internet introduced me to science that I wasn’t learning in school. It introduced me to political beliefs and facts and stories and narratives that I would otherwise have never known. I realized that many of my personal beliefs were either obviously wrong, or obviously bigoted. And there are other ways in which the Internet influenced my path — I had this realization the other day that if I hadn’t read loads of fanfiction as a teenager, I might not have gone to law school and subsequently focused on copyright and Internet law and policy. I had this conversation recently with some other women who left very religious and/or conservative bubbles they had been raised in — we all had the same experience in which we were voracious readers of science fiction/fantasy and also active Internet users. The science fiction/fantasy gave us a way to imagine a completely different world, and the Internet gave us a way to seek it out.

When I was a teenager, I mostly read or posted on forums. Facebook was just getting started — I wasn’t on it until I got to college. Twitter and Tumblr didn’t really become quite as prevalent until I was a few years into college. The Internet today is a really different place from the one I “grew up in,” so to speak. I worry that it’s more insular, in the sense that people mostly talk to people they already know in real life. And I worry that it’s more corporatized, in the sense that communities come together on centralized platforms controlled by for-profit corporations. But I still see that radical promise for young people who are unhappily trapped in bubbles. Young people are coming together in supportive communities that could not exist without technology. As a consequence, we are seeing an explosion of new kinds of conversations, new thinking, new art, new activism. It’s a beautiful thing. My concern is — how can we encourage this cultural explosion? One way is to make it so that the people who prominently participate in this cultural flowering don’t get harassed or stalked or harmed because of their participation.
