The cult of algorithms

We're all members, like it or not, mostly not

Three phones, with ads for watches on Instagram displayed on their screens

Instagram is convinced I want to buy a watch. I get ads for men’s wrist watches all the time.

When you consider that across the span of my life I have almost never worn a watch, that I think those dinner-plate-sized brand-name monstrosities that are the hallmark of status-seeking males look really dumb, and that I already have a Fitbit which serves the needs I actually care about, it should be pretty obvious that I’m not in the market for a fashion-statement watch.

We all know the stereotype that if you buy one blender off of Amazon then every online ad you see on every website for two months following will be for blenders. This is because the processes, the algorithms, that decide what ads to show us are too simplistic to understand that people buy blenders not only to blend things, but also to not have to think about buying blenders anymore. What passes for “artificial intelligence” is pretty much a sequence of assumptions with no more than two points of reference. You bought a thing, you must want those things. Not very intelligent by any stretch.

That’s not the problem with Instagram insisting on showing me watches, though. Not only have I never bought a watch online or otherwise, whenever an ad for a watch pops up in my feed, I dutifully click on the three dots in the upper right hand corner, select “Hide Ad” and then select “It’s not relevant”. So on top of never having created any first point of reference indicating I like watches through my buying habits, I proactively do what I can to inform Instagram that I am absolutely not interested.

You would think that they’d be happy to use that information. Everyone involved in this situation is motivated to stop showing me ads for watches. I have no interest in them, Instagram wants me to stay addicted to using their app by being constantly exposed to images I like, and the people selling watches don’t want to spend advertising money on anyone who is completely outside their target demographic.

The reason I’m still getting these ads has to do with a fundamental philosophy of computer scientists and engineers, the coders and designers of this increasingly high-tech world that we live in. They’re not really a cohesive group with one particular ideology that they’re trying to push. But they do tend to cluster around certain ideals, one of them being that technology can make the world a better place. Part of that belief is that algorithms are an objective route to the truth. It’s a belief system pretending to be a science, and it’s the belief system we all live by these days. And it’s fucked.

In the case of Instagram, here’s what’s going on with the watch ads. I might be off on the most specific details, but the broad strokes definitely colour within these lines.

Among the various people I follow, some of them are fitness models who have the bodies of comic book superheroes. I follow them because they’re hot and, it’s weird, but I like looking at hot women. On the side I sometimes use them as references for drawing. Whatever. What I definitely don’t follow them for is their lifestyle, which often includes posting when they buy brand-name goods, or acting as spokespeople for those goods, or whatever.

I suspect it’s this group of people that I follow, this type of Instagram account, that is largely the source of the watch ads being delivered to me. There’s a high likelihood that people who like high fashion brands also like wrist watches like Rolex or Tag Huweveryouspellit, and that enough of their followers do too.

The algorithm that decides what ads to show me looks at who I follow, looks at their interests, and then is programmed to assume that people who group together might have similar interests. I like them, they like watches, maybe I like watches. It’s a reasonable enough starting point.

But then I go in and say I don’t like that ad. Shouldn’t the algorithm now know I don’t like watches and build a simple exception? “Dave might like the same things as these people who follow these accounts, except watches.” Something like that. It’s pretty simple programming to build an exclusion list. I can see maybe showing me a second or maybe a third watch just to be sure, but certainly after three or more times, the algorithm should be pretty certain that I’m not into watches and all watch ads should cease.
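The exception I’m describing really is simple to program. Here’s a minimal sketch in Python of what such an exclusion list could look like; the three-strikes threshold and the category names are my own invention, not anything Instagram has published:

```python
# Hypothetical sketch: a per-user exclusion list for ad categories.
# The three-rejections threshold and the category names are invented
# for illustration; this is not how Instagram actually works.

REJECTIONS_BEFORE_EXCLUSION = 3

class AdPreferences:
    def __init__(self):
        self.rejections = {}   # category -> times the user hid an ad
        self.excluded = set()  # categories we should never show again

    def record_rejection(self, category):
        self.rejections[category] = self.rejections.get(category, 0) + 1
        if self.rejections[category] >= REJECTIONS_BEFORE_EXCLUSION:
            self.excluded.add(category)

    def should_show(self, category):
        return category not in self.excluded

prefs = AdPreferences()
for _ in range(3):
    prefs.record_rejection("watches")  # three "It's not relevant" clicks

print(prefs.should_show("watches"))  # False: three strikes and out
```

That’s the whole idea: a dictionary, a set, and a threshold. Nothing about it is beyond the reach of a company that can recognize my face in other people’s photos.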

Imagine someone told you that they hated books but loved going to the library. They hate swimming but love the beach. They are completely atheist but go with their friends to church. As a human, you could easily imagine how to resolve those concepts that at first seem contradictory. Maybe the first person just likes that libraries are quiet, the second likes the sound of the ocean, the third enjoys the company. But to resolve those scenarios, you utilized a huge amount of human thinking capacity that computers don’t have. Things like imagination, experience with humans, and the ability to compare situations with different contexts.

Algorithms, though, can’t do any of that. The reason I keep getting ads for watches is that even though I keep rejecting watch ads, I keep going back to the groups of people that initially made it think I might like a watch. I keep rejecting books, but keep going back to the library, and, in this metaphor, algorithms don’t know anything about libraries other than that they hold books. Each time I reject a watch ad, it lowers the odds of showing me a watch ad, but then every time I like a picture with a model, the odds go back up again.

Without the ability to consider the problem from a context of the whole life that humans live, the algorithm tries to work within the data set it does have. Each ad for a watch contains data subsets to do with information about its price or what features it has. It might separate out sports watches from fashion watches, and things like that. Only knowing that I said no to this watch, but that I group with people who like watches, the algorithm’s only recourse is to wonder if I didn’t like this particular watch. Better show me different variants, in hopes of zeroing in on the watch that finally makes sense of the fact that I like people with watches and keep hating watches.

This could be handled differently, but this is where belief enters the equation. The beliefs of the technophiles I mentioned earlier. I think most technophiles would say that this sort of blindness algorithms have to the things humans perceive is a good thing.

Go the other way with it, and imagine that instead of a computer deciding what ads to show, it’s a human. You could easily have the opposite problem.

Imagine a person who was really into watches, so much so that he was very particular about all their details. You show that person three watch ads and he rejects all of them, and, as a human, it would be understandable to think, “man, this guy hates watches.” But the reason Mr Hypothetical turned down those ads was because he is only interested in artisanal steam-powered watches that are hand-crafted by Tibetan monks. The guy might be trying to tell the algorithm that he likes a specific subset of watches, but he can’t get that message through, because he is limited to only speaking in terms of “like” and “dislike” with no place to say why. From a human point of view, we start with the broadest categorizations and work down. The main thing all three ads had in common was that they were about watches, so it seems that’s where the issue is.

A technophile could argue that it’s much better to go with the less biased computer algorithm that keeps seeing certain associations and keeps offering watches. Given enough time, with enough rolls of the dice, Mr Hypothetical could eventually see a watch he likes. The price is me getting watch ads I will never like, but, in a grand sense, if the number of times I’m shown watches I don’t want costs less than the one time Mr Hypothetical buys one, the system is winning and I’m a mild form of collateral damage.

Isn’t it possible that maybe, just maybe, I would actually someday see that one-in-a-million watch that could interest me? After all, I say I don’t like watches, but, if I’m so interested in the people who are, isn’t it possible that within my likes and interests there are contradictions that could be resolved to my benefit? This, in my opinion, is where we can start to see the belief system going from being kind of reasonable, to sort of fucked.

If you’ve ever known someone who keeps complaining about the kind of person they date, even though they keep repeatedly dating the same kind of person, then you will be keenly aware that we don’t always make choices that represent our own best interests. A technophile might say, not unreasonably, that by taking human perception out of the process of evaluating our behaviours, we allow for a more honest depiction of ourselves to emerge. One that we might be happier with.

To explain, let me turn away from Instagram, and look to Spotify, where the faith in algorithms is just as wrongheaded. Let’s say I want to believe that I am a sophisticated connoisseur of obscure jazz music. But really, when I don’t think anyone is observing me, I tend to listen to Eurotrash disco. All the algorithms in Spotify are geared to figuring out what I “really” like, not what I seek out. So, for example, although many of their algorithms are proprietary, one of the known ones, so far as I understand it, is that if you switch away from a song within something like the first 30 seconds of listening to it, Spotify’s algorithms will guess that you didn’t like that song. Not an unreasonable thing to guess. Seems logical that you would listen to a song you like the whole way through, and not drop it after hearing the first few bars. Or measures? Verses? However songs work.

Spotify will then do that over time as well. Keep skipping the same song in your collection, and Spotify will guess you really don’t like that song. And songs like it. You may have actively searched for pretentious jazz and clicked the “like”, and that means something, but since you keep skipping them and actually listening to club hits from the gayest parts of eastern Europe, guess what Spotify is more likely to suggest to you next?
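A rough sketch of the kind of inference I’m describing might look like this. To be clear, the 30-second cutoff is the commonly repeated folklore figure, not documented fact, and the weighting (skips gradually drowning out an explicit like) is my own assumption for illustration:

```python
# Rough sketch of skip-based preference inference. The 30-second
# cutoff is the commonly repeated figure, not a documented Spotify
# rule, and the weights are invented for illustration.

SKIP_CUTOFF_SECONDS = 30

def update_affinity(affinity, seconds_played, liked_explicitly=False):
    if liked_explicitly:
        affinity += 1.0   # you clicked the heart: strong positive signal
    if seconds_played < SKIP_CUTOFF_SECONDS:
        affinity -= 0.5   # an early skip reads as dislike
    else:
        affinity += 0.5   # a full listen reads as enjoyment
    return affinity

# I searched out the pretentious jazz and explicitly liked it once...
jazz = update_affinity(0.0, seconds_played=200, liked_explicitly=True)
disco = 0.0

# ...but then keep skipping it, while finishing every Eurotrash track.
for _ in range(10):
    jazz = update_affinity(jazz, seconds_played=12)
    disco = update_affinity(disco, seconds_played=240)

print(jazz < disco)  # True: the skips drown out the explicit like
```

Under any model shaped like this, the behaviour wins and the stated preference loses. Which is exactly the design intent.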

And is this bad? Doesn’t this mean Spotify is earnestly trying to provide you with an enjoyable music experience that will resonate on a fundamental level beneath your self-delusion?

You could say that.

You could also say it’s not Spotify’s job to try and solve me and my delusions.

There’s nothing wrong with aspiring to be something you aren’t already. Maybe you’ve listened to Eurotrash most of your life and you fall into it easily, but now you want to expand your musical horizons, and you’re finding it difficult to find examples of songs in a new genre because it’s a bit challenging for you… but that should be a path you can try to go down, shouldn’t it? Why should an algorithm come in and keep telling you where you “really” want to go?

I would go further and say, you know what? Maybe my desire to pursue obnoxious atonal jazz is driven by pure pretension and a desire to be an annoying prick by looking down on other people’s music choices. Maybe I only listen to it so that I can keep up with the references people at snobby wine parties make, and when I’m at home listening to this music by myself I’m miserable on some subconscious level because I’d rather listen to some generic beats churned out in a Casio synthesizer factory. So what if I want to prioritize my insecure need for social standing over my musical listening experience? That’s my choice to make; let me live in the hell of my own creation. Why is any computer algorithm stepping in with opinions about what I’d really really really rather do?

On a less cynical note, there are definitely more common reasons why this faith that blind algorithms reveal truths about ourselves fails. For example, I have a number of songs in my personal collection that I love but almost never listen to. I don’t listen to them for any one of a few reasons. One song reminds me of a very specific time in my life, and there may be times when I want to delve into that emotional time machine, but not just as an everyday experience when I have my songs on random shuffle. Or another song might be one that I think is great, and because it’s great, I’ve listened to it over and over and over, and I’ll need a break of a few years before it can get fresh again. I want that song in my library, I just need to skip over it for a while.

There are no behaviours you can do in Spotify that will express to the algorithm a complex concept like, “I love this song, I really do, it’s just the wrong time in my life right now for this specific example, so I want you to suggest songs like it, but without me having to actually play it.”

And I think anyone with eclectic tastes in music can tell you that Spotify completely fails to find any way of grasping that your musical tastes include more than one genre.

I’m not the first to decry algorithms as being part of a more mechanistic world that is trying to categorize us and simplify us at the expense of our potential for individuality and complexity. What I feel is less addressed is that I believe the technophiles largely responsible for building this information age we live in are motivated by what seems like a worthwhile dream that we will all get our desires met. Our real desires, the ones that we might not even speak openly about, but that we actually want under the surface. To a technophile creating an algorithm, it’s us who are the problem, because we tend to manipulate systems to get at what we perceive we “should” like, based on social or psychological needs. And that is not an issue to ignore. Facebook sucks for many reasons, but one of those reasons is us, posting market-friendly versions of ourselves, trying to make our lives sound better than mundane, often at the expense of honesty and the healthy upside of sharing our struggles.

What I find problematic is that anyone would take it upon themselves to solve anyone’s personal dysfunction. It’s one thing to create social networks with algorithms to protect us from each other, from harassment, political manipulation, false news, and that kind of thing. That’s a whole different issue from trying to save me from myself.

The problem is a lack of dialogue. The cult of algorithms assumes that watching me as an unbiased observer will reveal truths about me that would get buried if you came up and asked me what I believed to be true. I’ll tell you that I want pretentious jazz, I’ll act like I want dumbed-down disco. Which is really “me”, and whose business is it to get involved in figuring that out?

And this isn’t a situation where technology isn’t sophisticated enough to go in the right directions yet. It is entirely possible to create a system that asks me for some reasons why I hated the watch ad. Answers that could be grouped and searched for by keyword in such a way that an algorithm would know for damn sure that I am 100% sure I never want to see an ad for a watch. It’s just a matter of being willing to listen to what I want you to know, not just what you think you can objectively infer by not letting my self-perceptions get involved.
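The dialogue-capable system I’m imagining could be as simple as this sketch, where a rejection carries a reason and certain reasons trigger a permanent opt-out. The reason codes here are invented; the point is that some answers should end the conversation for good:

```python
# Sketch of feedback that carries a reason, not just a thumbs-down.
# The reason codes are invented for illustration; the point is that
# some answers should end the conversation permanently.

PERMANENT_OPTOUT_REASONS = {"never_interested", "offensive"}

class FeedbackSystem:
    def __init__(self):
        self.opted_out = set()   # (user, category) pairs, permanent

    def reject_ad(self, user, category, reason):
        if reason in PERMANENT_OPTOUT_REASONS:
            # "I am 100% sure I never want to see this" means exactly that.
            self.opted_out.add((user, category))

    def should_show(self, user, category):
        return (user, category) not in self.opted_out

fb = FeedbackSystem()
fb.reject_ad("dave", "watches", reason="never_interested")
print(fb.should_show("dave", "watches"))   # False, forever
print(fb.should_show("dave", "sneakers"))  # other categories unaffected
```

One extra field on the rejection form, and the algorithm stops guessing and starts listening.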

The problem isn’t the technology, it’s the philosophy behind it. I’m willing to live in a world where I make choices that don’t always make sense, or “reveal” anything truthful about me, or are based on a decision process that is flawed, and suffer the consequences. The people who make algorithms aren’t.

As our world evolves into one where systems become more advanced, no amount of improved technology will help us get better results if the problem to be solved isn’t understood. I don’t want to be solved. I want to be listened to, dialogued with, and I’m willing to live with my own mistakes. Build that algorithm.