Correcting Error Across Infinity
Consider the Spotify algorithm, the one that powers the Discover Weekly playlist. It's supremely catered to you. It keeps delivering, week after week... until it doesn't. It turns uncanny; the songs don't cut it. On paper, they look like songs you would like, fitting every parameter. To your ears, they feel like forced picks, as if the algorithm couldn't know any better.
When you disengage and stop listening, recommender algorithms start error-correcting. It's something even a human curator would do. But consider this: does a machine correct error the same way a person would? People correct error against an external perspective, the perspective of those whose approval is wanted or needed.
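To make the machine side concrete, here is a minimal sketch of how a recommender might error-correct from a disengagement signal. This is a hypothetical illustration, not Spotify's actual system: it assumes tracks are scored by a weighted sum over a handful of named features, and that a skip nudges the taste weights away from the skipped track's profile.

```python
def score(weights, features):
    """Predicted affinity: dot product of taste weights and track features."""
    return sum(weights[k] * v for k, v in features.items())

def correct_on_skip(weights, features, lr=0.1):
    """Down-weight the features of a track the listener skipped."""
    for k, v in features.items():
        weights[k] -= lr * v
    return weights

# Hypothetical feature dimensions the model happens to track.
taste = {"acoustic": 0.8, "tempo": 0.3}
skipped_track = {"acoustic": 1.0, "tempo": 0.2}

before = score(taste, skipped_track)
correct_on_skip(taste, skipped_track)
after = score(taste, skipped_track)
assert after < before  # similar tracks now score lower
```

The point of the sketch is its limitation: the machine can only correct along the dimensions it was given ("acoustic", "tempo"), which is exactly where it diverges from a human curator.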
If human perspective produces judgment across infinite dimensions, then error-correcting to please a human perspective necessarily means correcting across infinity. Machines and software can't handle that. People correct error across the infinite aspects contained in the perception of those we consider fit to judge our actions: the people whose judgment we deem worthy, out of fear or inspiration.
You may ask those same individuals to distill their judgment into a set of aspects or parameters, but any articulation of their judgment is bound to be incomplete, stripped of depth, no matter how exhaustive. A description of their judgment is bound to be incomplete because whatever is being judged has to feel "good", it has to feel "right", and there are endless ways for that to happen.
✎ Connection to bis / Epistemology of Computation