Smelling your own fumes will eventually destroy you

Machine learning figures out what you like and gives it to you. Your subordinates tend to do the same. What’s not to like about that? 

Curated information can save you time, provide mental comfort, and lower your anxiety. The problem with likely-to-like information is that it narrows your point of view. Pretty soon, all you smell is the aroma of your own fumes. 

I’ve spent the past week testing some of the limits of Amazon Music’s machine learning. I love ’80s rock and am a huge fan of Taylor Swift’s tunes. Amazon has a cool feature called autoplay: when you reach the end of your playlist, it keeps going with songs it believes you will enjoy.

I was in the mood for ’80s rock, so I “liked” tracks by Guns N’ Roses, AC/DC, and Tina Turner. I kept autoplay engaged for a couple of days to see what would happen.

After two days, the tracks were all headbangers and no T-Swizzle, even though my Faves playlist is full of her music. By day three, the autoplay selections had grown repetitive.

Amazon Music wants to please me by playing songs it thinks I’ll like, based on my history and how I’ve responded to its suggestions. Your top lieutenants will do the same. They want to give useful advice that pleases you. After all, they have to spend many of their waking hours with you.

The trouble is that the mental algorithms they use to gauge what you’ll find useful work much like Amazon Music’s. If you are not very careful, you will wind up hearing the same themes over and over again, and you’ll struggle to find new ways to win with the same old thinking.
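To make the parallel concrete, here is a minimal Python sketch of that kind of feedback loop. It is not Amazon’s actual algorithm; the catalog, genres, and scoring rule are invented for illustration. It simply shows how a recommender that only reinforces past likes stops surfacing anything new.

```python
# Minimal sketch of a like-only feedback loop (illustrative, not Amazon's algorithm).
from collections import Counter

# Hypothetical catalog: track title -> genre
catalog = {
    "Sweet Child O' Mine": "80s rock",
    "Back in Black": "80s rock",
    "Livin' on a Prayer": "80s rock",
    "You Shook Me All Night Long": "80s rock",
    "Pour Some Sugar on Me": "80s rock",
    "Shake It Off": "pop",
    "Blank Space": "pop",
}

# Tracks the listener has "liked" so far
likes = ["Sweet Child O' Mine", "Back in Black", "Livin' on a Prayer"]

def recommend(likes, catalog, n=3):
    # Find the genre that dominates the likes, then suggest only from that genre.
    top_genre = Counter(catalog[t] for t in likes).most_common(1)[0][0]
    return [t for t, g in catalog.items() if g == top_genre and t not in likes][:n]

print(recommend(likes, catalog))
# ['You Shook Me All Night Long', 'Pour Some Sugar on Me']
# Pop never surfaces: every new like reinforces the same genre.
```

Liking any of the recommended tracks only strengthens the dominant genre’s count, so the loop tightens with each pass, just as an advisor who is rewarded for agreeable advice keeps serving it up.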

To avoid endless repeats of Bryan Adams, Bon Jovi, and Aerosmith, I had to deliberately seek out other voices. Which trusted advisors do you rely on to make sure you are not savoring the smell of your own fumes?
