Have you ever noticed an advertisement on social media that matched your interests exactly? Well, this is not because special agents at Google, Facebook, Twitter, and co. are spying on you, but because of algorithms. These algorithms collect, store, and forward our data with the help of artificial intelligence. In this way, computers learn from the collected data and can categorize us in order to suggest products, new friends, and websites.
But how exactly does it work? As mentioned above, these “intelligent” algorithms record every click and activity we perform online. From this, the computer learns and stores what kind of content, articles, and products a user prefers, what kind of information they like, and at what time they consume it. Within a short time, algorithms have a very detailed picture of the user’s gender, age, relationship status, education, interests, financial situation, social status, and political and religious orientation. They can even detect ethnicity, use of addictive substances, personality traits, happiness, and sexual orientation, and they are capable of noticing an illness before clinical symptoms appear.
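To make the idea concrete, here is a minimal sketch of how raw clicks could be aggregated into an interest profile. Everything in it (the event fields, the topics, the `build_profile` helper) is invented for illustration; real platforms use far richer signals and models.

```python
from collections import Counter

# Hypothetical click log: each event records the topic of the content
# a user interacted with and the hour of day it happened.
click_log = [
    {"topic": "sneakers", "hour": 21},
    {"topic": "sneakers", "hour": 22},
    {"topic": "fitness", "hour": 7},
    {"topic": "sneakers", "hour": 21},
    {"topic": "politics", "hour": 8},
]

def build_profile(events):
    """Aggregate raw clicks into a simple interest profile."""
    topics = Counter(e["topic"] for e in events)
    hours = Counter(e["hour"] for e in events)
    return {
        "top_interests": [t for t, _ in topics.most_common(3)],
        "peak_hour": hours.most_common(1)[0][0],
    }

profile = build_profile(click_log)
print(profile)
# → {'top_interests': ['sneakers', 'fitness', 'politics'], 'peak_hour': 21}
```

Even this toy version already “knows” the user’s favorite topic and when they are usually online, which hints at how quickly a real system with millions of events can build a detailed picture.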
Thanks to all this knowledge, search engines, newsfeeds, and social networks can filter the content they show based on the user’s interests. Since most platforms are financed through advertising, they are keen on showing exactly the content a user might like, in order to keep them on the site as long as possible.
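Interest-based filtering of this kind can be sketched as a simple ranking step: score every feed item by how well it overlaps with the profile, then sort. The interests, tags, and `rank_feed` function below are all hypothetical placeholders.

```python
# Toy newsfeed ranking: items that match the user's known interests
# float to the top; everything else sinks. Names are illustrative only.
user_interests = {"sneakers", "fitness"}

feed = [
    {"title": "Election results", "tags": {"politics"}},
    {"title": "New running shoes reviewed", "tags": {"sneakers", "fitness"}},
    {"title": "Stretching basics", "tags": {"fitness"}},
]

def rank_feed(items, interests):
    """Sort items by the number of tags they share with the profile."""
    return sorted(items, key=lambda i: len(i["tags"] & interests), reverse=True)

for item in rank_feed(feed, user_interests):
    print(item["title"])
# → New running shoes reviewed
# → Stretching basics
# → Election results
```

Note what never changes here: the politics article is not removed, it is simply pushed down, which is exactly why users rarely notice the filtering happening.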
These algorithms seem very useful for simplifying internet usage through personalized content, but they can also influence and restrict our lives without us being aware of it. A simple example: online shopping. When we look for new clothes online, we are confronted with active algorithms that register every movement on the website and thus quickly learn our taste, our size, and above all our presumed financial liquidity. They can estimate our purchasing power based, for example, on our location (i.e., the residential area), our search queries, and the device we use. As a result, we are shown articles that fit our profile, but these are not necessarily the best or cheapest options.
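The purchasing-power estimate described above can be imagined as a crude scoring rule over those signals. The weights, categories, and functions below are entirely made up to show the mechanism, not how any real shop actually works.

```python
# Hypothetical signals a shop might use to guess purchasing power.
# All weights and category names here are invented for illustration.
def affluence_score(device, area_tier, queries):
    score = 0
    # Device as a proxy for budget.
    score += {"iphone": 2, "desktop": 1, "budget_android": 0}.get(device, 1)
    # Residential area, e.g. 0 = low-income area, 2 = high-income area.
    score += area_tier
    # Search queries that hint at a preference for expensive items.
    score += sum(1 for q in queries if "designer" in q or "premium" in q)
    return score

def rank_products(products, score):
    """Show pricier items first to users presumed to be affluent."""
    return sorted(products, key=lambda p: p["price"], reverse=score >= 3)

products = [{"name": "basic tee", "price": 9},
            {"name": "designer tee", "price": 89}]

score = affluence_score("iphone", 2, ["designer tee"])
print(rank_products(products, score)[0]["name"])  # → designer tee
```

Two users searching for the same shirt can thus see differently ordered (and differently priced) results, which is the restriction the article warns about.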
Given that we mainly receive information tailored to our interests, the question arises whether this also influences how we form our opinions.
Some people fear that personalized algorithms limit free opinion-forming because they produce so-called “filter bubbles,” in which everyone only gets the news they are interested in rather than the full range of information. We could observe this filter-bubble phenomenon frequently during the Covid-19 pandemic: someone who wants to believe in conspiracy theories will find enough “proof” on the Internet to confirm them and will continually be confronted with new content pointing in the same direction. In this way, algorithms can reinforce extremist thinking as well: because they cannot differentiate between “good” and “bad” content, they will always propose more input that fits the interest.
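The self-reinforcing dynamic behind a filter bubble can be shown with a toy simulation: the recommender always serves the topic with the most clicks, and the user usually engages with whatever is served. The topics, probabilities, and starting counts are arbitrary assumptions made for this sketch.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# A user with only a slight initial lean toward one topic.
topics = ["sports", "music", "conspiracy"]
clicks = {"sports": 1, "music": 1, "conspiracy": 2}

for step in range(50):
    served = max(clicks, key=clicks.get)   # recommend the current top topic
    if random.random() < 0.9:              # user usually clicks what is served
        clicks[served] += 1
    else:                                  # and rarely explores something else
        clicks[random.choice(topics)] += 1

print(clicks)  # the small initial bias now dominates the whole profile
```

Because the top topic gets served and clicked again and again, the tiny head start snowballs: after 50 rounds the profile is almost entirely one topic, even though the user started nearly balanced. This is the feedback loop that makes the bubble hard to escape.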
Avoiding algorithms is nearly impossible, but as a user you can protect yourself from trackers by installing a browser extension that blocks cookies or by switching to a browser with built-in privacy protection tools, such as DuckDuckGo.