A picture is worth a thousand problems

Jessica Maddox
7 min read · Nov 21, 2019


Narwhal the unicorn puppy (source: @macthepitbull on Instagram)

Last week, a unicorn puppy named Narwhal went viral on the internet. This week, the rescue organization caring for him issued a statement on its Instagram: it was the real rescue organization, and fake Instagram accounts posing as the rescue had cropped up, using its photos to gain followers and/or donations. Also this week, the Pete Buttigieg presidential campaign used a stock photo of a Kenyan woman to promote the presidential hopeful’s plan for black Americans, all while passing the woman off as an American citizen. These are just two examples, but they are the latest newsworthy iterations of online image misuse. Digital images have a hard time online. They are extremely vulnerable to misuse that stems from being decontextualized and then re-contextualized.

Jennifer Lopez’s Versace dress for the 2000 Grammys (source: Harper’s Bazaar)

The history of the internet and images is a complicated one. We should never forget, in our contemporary social media era, that Google image search was created so individuals could look up the Versace dress Jennifer Lopez wore to the 2000 Grammys. Additionally, we should not let Mark Zuckerberg rewrite the history of Facebook into “keeping people connected during the Iraq War.” “The facebook” rode in on the coattails of Facemash, where users could rate other people’s attractiveness. The modern roots of visual social media are grounded in people not having control over their own images.

Of course, these aren’t new problems. For as long as it has been possible to replicate an image, it has been possible for that image to be used without consent. When we put our images out into the world, we have very little control over what happens to them. This has perplexed some of the world’s most famous image theorists, from Susan Sontag to Roland Barthes to W.J.T. Mitchell, for decades. In the “analog” era, images were just as susceptible to misuse, be it through graffiti, vandalism, or unauthorized copies. As things went digital, these vulnerabilities followed. But combined with the very qualities that make the internet fun, great, and useful, they now present us with new types of misuse, as well as more frequent cases of it.

Regarding what makes the internet social, internet scholar danah boyd has written extensively on what she calls “the affordances of networked publics.” By this, she refers to how the technological functions of our social media platforms shape how we use them. For example, on Twitter, you can write messages and put them out into the world, but given how Twitter is technologically designed, those messages cannot exceed 280 characters. The affordances of networked publics are a way of thinking about how technological functions shape our online and social interactions.

boyd writes that there are four affordances underlying most of our online actions and interactions. These are persistence, replicability, scalability, and searchability. In other words:

  • Persistence: What we do online is automatically recorded and archived. This allows Facebook or TimeHop to remind you what you posted X years ago today. It is also how individuals’ past tweets can be dug up and brought back into the present moment. Just because our content is out of sight and out of mind doesn’t mean it’s forgotten.
  • Replicability: Our online content can easily be duplicated. This can be done through simple actions we take for granted, such as right-clicking and saving, taking screenshots, copying and pasting, and so on (see the short sketch after this list).
  • Scalability: It is possible for online content to ultimately have a huge audience. Here, I refer back to Narwhal the unicorn puppy: his picture was posted on the rescue’s Instagram and then picked up by the extremely popular We Rate Dogs Twitter and Instagram accounts. Within 24 hours, Narwhal’s rescue organization gained over 15,000 followers. As an internet researcher, I find scalability is the affordance my research participants most often ask me about: how do I make my content go viral? Do you know the secret? (I don’t, because there is no secret.)
  • Searchability: Our online content is almost always accessible through search. Whether it’s Google image search or a platform’s in-site search bar, we can easily find the information we want online.
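
To make replicability concrete, here is a minimal sketch of my own (not something from boyd’s work): a few lines of Python that duplicate any publicly served image, byte for byte. The URL and filenames are placeholders, and the requests library is assumed to be installed. The same content hash that proves the copy is exact is also one way exact re-uploads can later be traced.

```python
# A minimal sketch illustrating replicability: any image served on the web
# can be copied with a few lines of code. The URL below is a placeholder.
import hashlib
import requests

IMAGE_URL = "https://example.com/some-public-photo.jpg"  # hypothetical address

response = requests.get(IMAGE_URL, timeout=10)
response.raise_for_status()

# Save a byte-for-byte duplicate of the original file.
with open("copied_photo.jpg", "wb") as f:
    f.write(response.content)

# A hash of the bytes shows the copy is indistinguishable from the source;
# the same fingerprint is one way exact re-uploads can later be detected.
print(hashlib.sha256(response.content).hexdigest())
```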

You can read more on danah boyd’s research here. In my own work as a visual social media scholar, I have looked at how the pre-existing problems and vulnerabilities associated with images are exacerbated by these four affordances. I maintain that understanding these affordances and how they work is the first step towards helping individuals gain a sense of visual-digital literacy and become more adept consumers and makers of online images.

Persistence, replicability, scalability, and searchability are what make the web fun and easy to use. They are what allow you to recall a photo through the “Memories” feature in order to have a great #TBT or #TransformationTuesday post. They are what enable you to find that friend from high school on social media. They are what let you find out any information you want through a quick Google search. But they are also what put us and our images at risk.

In terms of online image misuse, revenge porn and catfishing have (rightfully) captured the bulk of our attention. But image misuse happens every day online, and it often happens in less salacious ways; it is the prevalence of these everyday cases that ultimately enables the salacious ones to continue, unchecked. Even the popular “woman yelling at a cat” meme is the product of online image misuse. The meme splices a screen grab from The Real Housewives of Beverly Hills together with the image of an angry-looking cat. The cat’s real name is Smudge, and his owner shared his image on her Tumblr back in 2018. It went viral, eventually being repurposed into the meme we know and love. While this is a more harmless instance, it is still a case of an image being used without the original taker’s consent. In my research, I have further examined this practice in cases of digital kidnapping and freebooting (YouTube video theft) to show how the affordances of networked publics allow bad actors to misuse images in far from benevolent ways.

The “woman yelling at cat” meme template (source: Reddit.com/r/multiwall)

The problem with the very things that make the web great is that they can also be used by individuals with bad, malicious, or simply careless intentions. And while platforms should step in to help stop this, we know they often do not. Platform governance and content moderation form an important and thorny intersection with this very issue, and one I could easily dedicate another thousand words to.

In recent weeks, Facebook has announced an initiative to attempt to curtail revenge porn. Adobe and Twitter have proposed a system for permanently attaching artists’ names to their images (digital artwork theft has become a huge problem in recent years). While these are good steps in the right direction, they are not perfect, nor do they address the underlying conditions that allow these problems to persist in the first place. Additionally, “don’t post online” isn’t great advice: it ignores the fact that some of social media’s most vulnerable users are people who are typically pushed to the sidelines of public participation in general.

Digital image misuse is an immense problem with no easy answer. While individuals advocate for better and safer ways to participate online, we do have to work within the system we are in while simultaneously demanding better. This is one of the main reasons I advocate for visual-digital literacy.

Images are a popular tool for misuse, as well as misinformation, and visual-digital literacy involves a high level of healthy skepticism. My goal is not to turn everyone into a pessimist, but when it comes to looking at images online, we must always consider the source, the context, and the ways in which the image is presented. These three things are always intertwined with each other. So many of our online images are screenshots shared across social media platforms (for instance, an Instagram post screenshotted to Twitter, or an image snipped from a news article on one’s phone and shared anywhere). Who is posting? What have they left out when taking the screenshot? In image theory, a popular saying is “to frame is to exclude.” When we take a screenshot, we make deliberate choices about what to include in the shot and what to leave beyond the borders.

For example, the other day, I was looking at a screenshot of a politician’s Instagram post that had been shared to Twitter. However, the date of the Instagram post was cropped out. Because of how easily things can be replicated and altered, this made me question the validity of the screenshot. A quick look at the politician’s Instagram showed me that the post was in fact real.
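
For readers who want to poke at a saved image themselves, here is one small illustration of my own (not a workflow from the example above), assuming Python with the Pillow library installed: it prints whatever EXIF metadata survives in a file, such as the capture date or the software that last touched it. Screenshots and platform re-uploads usually strip this data, so an empty result is itself a clue that you are looking at a copy rather than an original photograph. The filename is hypothetical.

```python
# A small provenance check: read whatever EXIF metadata survives in a file.
from PIL import Image
from PIL.ExifTags import TAGS

# "suspect_image.jpg" is a hypothetical filename standing in for a saved image.
with Image.open("suspect_image.jpg") as img:
    exif = img.getexif()

if not exif:
    # Screenshots and platform re-uploads usually strip metadata entirely,
    # which is itself a sign you are looking at a copy, not an original photo.
    print("No EXIF metadata found.")
else:
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, tag_id)
        if tag_name in ("DateTime", "Make", "Model", "Software"):
            print(f"{tag_name}: {value}")
```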

But misinformation and misuse can go beyond screenshots. Photos can easily be altered or edited. They can be taken from one context, put into another, and given an entirely new story. In my research on digital kidnapping, for example, digital kidnappers scroll through parenting Facebook groups or Instagram posts, take the photo of someone else’s child, and pass the kid off as their own. These are cases in which scrutinizing the source, the context, and the ways in which an image is shared remains key to understanding what is being conveyed.

Digital image misuse is pervasive, and it’s a complicated problem without an easy solution. Until platforms catch up — and they may never catch up — the burden lies with us to be smart, savvy social media users and question the images we see online.

Written by Jessica Maddox

Professor of digital media studies and technology. Into all things internet and dogs.
