Shrinking horizons: the Internet and invisible bias

By Stephanie Murphy

This article is part of Right Now’s February issue, focusing on Technology and Human Rights

As the internet enmeshes itself ever more deeply in our lives, technology may be doing more than helping us form our own opinions. It may be fostering our own private realities.

Confirmation bias

The internet is supposed to be an unbiased, unmediated repository of information. Some of that information may be outrageously one-sided, of course – but the internet itself is meant to be a neutral storage place: impersonal, somewhere people can seek information free from the influence of government, corporate or other interests. But even in Australia, there’s reason to wonder just how independent we really are.

Social research shows that people already tend to favour information that confirms their beliefs rather than challenges them. This is called confirmation bias. We remember things selectively, choose to discount or dismiss contrary views, and interpret information in a biased way. Often, we don’t even know we’re doing it. Unsurprisingly, research shows the effect is stronger for emotionally charged issues.

This presents real problems in a democracy, where the strength of the system is meant to lie in the testing of ideas. If we’re incapable of treating ideas neutrally, or seriously entertaining those that don’t confirm our already-held beliefs, what does this mean for the political system, for free speech, for our values?

Even more counter-intuitive is the fact that people’s beliefs tend to strengthen and polarise even when both sides are faced with the same evidence – a phenomenon known as attitude polarisation.

As social beings, we also pay a cost for being perceived to be wrong, which might reinforce our tendency to see things in a one-sided way.

So we select what we read and believe. But we’re not the only ones filtering the information we receive. Even before we read a word, technology companies are silently shaping what we see in the first place.

The Filter Bubble

The internet has become increasingly personalised.

We’re familiar with online profiles and targeted advertising. What fewer people know is that technology companies have also been quietly personalising our search results.

The Filter Bubble, a term popularised by Eli Pariser’s book of the same name, refers to the phenomenon in which internet users are isolated in their own cultural or ideological bubbles by algorithms that show them only what they are likely to agree with.

Since 2009, Google’s algorithms have used data such as location, past clicks and previous search history to determine how and what to display in search results. A person who spends a lot of time on the New York Times website can expect to see more results featuring that site – which would be a welcome intervention for many.

But Pariser gives a more sinister example. Two of his friends, one conservative and one progressive, each searched for “BP”. The first got investment news about British Petroleum; the other got information about the Deepwater Horizon oil spill.

Facebook does the same, prioritising what you see based on your previous interactions. If you regularly interact only with people who share your views, you may never see updates from those who differ, leaving everyone in a gratifying but ultimately narrowing echo chamber.
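To make the mechanism concrete, here is a minimal sketch in Python of how ranking a feed by past interactions can quietly hide dissenting voices. It is purely illustrative – not Facebook’s or Google’s actual ranking code – and all names and data in it are hypothetical.

```python
from collections import Counter

def rank_feed(posts, interaction_history):
    """Toy illustration (not any real platform's algorithm): score each post
    by how often the reader has previously interacted with its author,
    then show the highest-scoring posts first."""
    # Count past interactions per author (likes, comments, clicks).
    affinity = Counter(interaction_history)
    # Posts from frequently-engaged authors float to the top;
    # authors the reader never engages with sink out of sight.
    return sorted(posts, key=lambda post: affinity[post["author"]], reverse=True)

# A hypothetical reader who only ever engages with like-minded friends.
history = ["alice", "alice", "bob", "alice"]
feed = [
    {"author": "alice", "text": "An article you will probably agree with"},
    {"author": "carol", "text": "A view you have never engaged with"},
    {"author": "bob", "text": "Another familiar perspective"},
]
for post in rank_feed(feed, history):
    print(post["author"], "-", post["text"])
# carol's post is ranked last every time, so over many refreshes
# the reader may simply never see the differing view.
```

Nothing in this sketch sets out to censor anyone; the narrowing happens as a side effect of optimising for what the reader already engages with.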

Consequences

Commentators like Evgeny Morozov argue that this insidious effect has exposed democracy’s Achilles’ heel. The effect of the filter bubble is to stifle and crush debate, and allow for ever-deepening rifts – despite polls showing that we agree on far more than we think.

Confirmation bias can also polarise beliefs, creating deeper and more entrenched divisions in society. In one study, progressives and conservatives were given deliberately incorrect information about controversial issues. Even when they were then presented with corrections – hard, incontrovertible evidence – they stuck with the incorrect information if it agreed with their point of view.

It’s difficult not to see some of the current controversies in Australian politics in light of these dynamics. On issues like asylum seekers (“illegal arrivals”, for some), the two sides of the debate have become more and more hostile to one another.

It’s an example of the confirmation bias effect described above: even when presented with hard evidence that, say, seeking asylum isn’t illegal, large numbers of people persist in believing incorrect information. This has implications for anyone trying to spread a particular message: facts alone don’t always work, and advocates will need to find other ways of convincing people of their arguments.

In a free society like Australia, the biggest worries may stop there. But as Morozov points out, in authoritarian Big Brother states, the same algorithms that work out whose Facebook updates you’re most likely to be interested in could also work out what news not to show you.

Bursting the filter bubble?

Pariser and others have called for companies like Google to be more transparent about the way they organise and manipulate search results. But maybe part of the criticism stems from a misguided belief that it is the responsibility of companies like Google and Facebook to safeguard our intellectual and cultural literacy.

Evgeny Morozov makes this point in his article about Eli Pariser’s book, suggesting that underneath Pariser’s concern about the filter bubble is a pre-determined idea about what good citizens of the world should be exposed to.

What’s more, some research suggests that people use these algorithms to expand their tastes rather than restrict them. On services like Amazon or Spotify, users look at information about what similar users are reading or listening to in order to broaden their horizons and discover new content.

Whatever the view on the merits of personalisation, it looks set to continue. And as increasing numbers of people turn away from television to the internet as their sole source of news and information, the effect on civic discourse will be profound.
