Artificial Self-Driving Systems Vulnerable to Creative Intention
Artificial Self-Driving Systems can react to Happening but cannot know, much less answer, why they react. The System cannot ask itself why, even as it evaluates the situation in the terms it was programmed to use. Even if those terms can be made flexible and generative, they still do not constitute sensitivity to intentionality. There is an element of recursivity in intentionality that lies beyond numerical apprehension: numerical evaluation is not a process of self-reflection. The Artificial System will always be vulnerable to creative intention, intention that goes beyond the system's learned and recognized patterns, a vulnerability that bad-faith actors can exploit.
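A minimal sketch of the point, not of any real self-driving stack (the patterns, labels, and scoring rule below are all hypothetical): a purely numerical evaluator will assign a verdict to any input, including one crafted to lie outside everything it has learned, and it has no channel for asking why it judged as it did.

```python
import math

# Hypothetical "learned" patterns: hand-coded feature centroids standing in
# for whatever the system was trained to recognize.
PATTERNS = {
    "pedestrian": [0.9, 0.1, 0.2],
    "vehicle":    [0.2, 0.8, 0.7],
    "empty_road": [0.1, 0.1, 0.1],
}

def evaluate(features):
    """Return (label, confidence). Purely numerical: distance to centroids."""
    distances = {
        label: math.dist(features, centroid)
        for label, centroid in PATTERNS.items()
    }
    label = min(distances, key=distances.get)
    # Confidence is just an inverse-distance heuristic; the system never asks
    # whether its categories apply to this input at all.
    confidence = 1.0 / (1.0 + distances[label])
    return label, confidence

# An ordinary input, and a "creative" one lying outside every learned pattern.
print(evaluate([0.85, 0.15, 0.25]))   # near "pedestrian": a sensible verdict
print(evaluate([9.0, -3.0, 7.0]))     # far from everything: still gets a label
```

The second call is the vulnerability in miniature: the evaluation always terminates in a number and a label, never in the question of why.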
Computers and algorithms are artificial systems with natural agency in matters computational. The development of self-driving cars is an effort to create artificial systems with agency through space, i.e., agency in matters spatial. Imagine giving artificial systems agency in matters non-computational, as if they were capable of feeling! It is hard enough for us to exercise such agency ourselves.
– Riverside Park, Manhattan
✎ Connection to bis / Epistemology of Computation