I’m reading “Everyware: The Dawning Age of Ubiquitous Computing” by Adam Greenfield, and he has thought and researched expertly about how technology and cultural norms will mediate each other once computers and sensors exist in every object and medium of our lives. He gets to talking about how designing the interface for a real-world system is full of fuzzy areas, uncertainties, and multiple users, while up till now most of our software has taken it for granted that one user is readily identifiable (the source of the input it receives), that errors can be caught with if/else conditionals, etc.
He describes how the artifacts of the future meat/virtual space will have to discern our intentions from the subtle cues that we, as a species, have learned to give and receive through generations of social conditioning. Until then, these devices will continue to seem clumsy and come nowhere close to passing a Turing test.
So I was thinking that maybe gaming will be the first area where this kind of smart intuition takes hold. But even now, AI in games is embarrassingly crude. It is almost as if the AI is an afterthought for designers who are more interested in coding other aspects of the game.
It’s also probably a pain because the wheel gets reinvented each time. Each game codes its own AI from scratch unless it licenses an engine, and even then the designers still have to adapt the AI to their specific setting.
And this got me thinking about another significant problem with any such project: the lack of crowdsourcing. Why would people (particularly a team of one or two developers) devote extra time to something like AI that will only matter for as long as the game is selling on Steam or in stores? Why invest in building a community or a feature if no one will use it after a few months?
So what if NPCs (non-player characters) and AI had a standard character set used across disciplines, games, online user interfaces, etc.? What if you built different archetypes of bots that could be tweaked for whatever project needed them? What if each archetype’s behavior was networked? That is, say someone meets a female paladin archetype in a company’s Q&A forum and interacts with it, and the results of that interaction are shared with all the other instances of that archetype in other settings (video games, online sexbots, a car dashboard interface) so they can all learn specific lessons about interacting with humans?
This would mean they’d learn over time and become enduring archetypes that we want to interact with. If one instance of a thief learns that looking a little too suspicious gets it in trouble in one online venue, it might disguise itself better in another setting (a multiplayer RPG). AI entities flagged as “tech support” or “Q&A” might collectively share their wisdom simply because they carry that same descriptor. Different AI entities belonging to “you” would all share your preferences. Or not; maybe you’d rather have unique experiences and build bonds with each one separately.
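To make the idea concrete, here is a minimal sketch of what that archetype-level sharing might look like. Everything here is hypothetical (the class names, the in-memory store, the “lesson” strings are all my own invention, not an existing system): each bot instance lives in its own setting but publishes what it learns to a store keyed by archetype, so a sibling instance anywhere else can read it.

```python
from collections import defaultdict

class ArchetypeNetwork:
    """Hypothetical shared store of lessons, keyed by archetype tag (e.g. 'thief')."""
    def __init__(self):
        self._lessons = defaultdict(list)

    def publish(self, archetype, lesson):
        # One instance's experience becomes available to every sibling.
        self._lessons[archetype].append(lesson)

    def lessons_for(self, archetype):
        return list(self._lessons[archetype])

class BotInstance:
    """One embodiment of an archetype in one particular setting."""
    def __init__(self, archetype, setting, network):
        self.archetype = archetype
        self.setting = setting
        self.network = network

    def learn(self, lesson):
        self.network.publish(self.archetype, lesson)

    def known_lessons(self):
        return self.network.lessons_for(self.archetype)

net = ArchetypeNetwork()
forum_thief = BotInstance("thief", "Q&A forum", net)
rpg_thief = BotInstance("thief", "multiplayer RPG", net)

# The forum instance gets caught acting suspicious...
forum_thief.learn("looking too suspicious draws attention")

# ...and the RPG instance already knows it, without ever meeting that user.
print(rpg_thief.known_lessons())
```

In a real deployment the store would be a networked service rather than a local object, and you’d have to decide how much an instance shares (everything, or only lessons scoped to its archetype tag versus its owner’s preferences, per the “Or not” caveat above).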
I don’t know. I can just see a future where we will be interacting with bots a lot more, and we will expect those bots to have some continuity and to learn about us to make our lives better and easier. And I think this will require some highly networked AI pulling from tens of thousands of interactions with real humans to develop something truly useful; otherwise we’ll just have what we have now: a bunch of throwaway code that barely accomplishes the task of discerning human intention.