At which point, and how, does the creator of an application disappear from the user experience of the application? This is a psychological phenomenon in which the user sees the application as an actor and does not see the real actor behind it.
It is my conception that there is no human-computer interaction or human-object interaction, if interaction means all involved parties taking action on each other. Basically, I see this as a case of time-differentiated human-human interaction: one person designs an application or an object, and another person uses it.
When I was younger, I pretty much ignored the authors of the books I read. Eventually, I have come to appreciate the author's voice too. When I read Ready Player One, I perceive the ghost of Ernest Cline, the author. However, I also find a ghost in the shell of Parzival, the main character, and can differentiate it from the one inside Aech's. Yet my perception is that these latter two are really only intentionally filtered mirror images of the first one. The book is actually filled with filtered representations of Ernest Cline. I would think that theatre and movies are richer here, as the ghost of a character is actually a mix of at least three real people: the writer of the manuscript, the director of the play, and the actor playing the character.
My conception of “human-computer interaction” relates to the Turing test. In the Turing test, one communicates and interacts with two partners (A and B) and is then asked: which of the two is a computer, and which is a human being? Which shell has a real ghost inside – a soul, a person, whatever – and which one has only a trace of the ghost? As you see, if you assume that one cannot communicate with a computer, but only have the computer mediate the programmer, then the actual question in the Turing test is: in which case is there only the distance of one computer between you and a real person, and in which case are there two – two computers, and a separation in time? The Turing test should then not be seen as a test of artificial intelligence, but as a test for recognizing the level of presence and the nature of the communication channel.
Imagine a communication system, depicted occasionally in science fiction, where the sender of a message programs an artificial intelligence to deliver the message. Typically this is a holographic projection of the sender. Endure the fact that this appears to be a very inefficient means of communication. The sender should predict the questions the recipient might ask and program the AI to respond to them. They should figure out as complete a set of question concepts as possible and fashion a parser to identify when these questions are asked in one way or another. This may involve the recipient showing the AI pictures or documents and referring to these objects as part of a question. (“Is the person in this picture the one who has written this book?”) The AI should be programmed to fit the reply to the way the question was formulated and to the whole social situation of the interaction. One of the replies is that the AI has not been programmed with an answer to the question. (“I don't know that.”) It will also be used in some cases where the AI fails to understand the question properly, although usually the AI should have a specific reply for that situation as well. (“Sorry, I don't quite understand your question.”, “You're not making any sense.”) In a version of the Turing test relating to this case, given three players, the tester should try to determine, by discussing with them, which two have the same developer or message sender behind them.
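The message-delivering AI described above can be sketched in a few lines. This is a minimal illustrative assumption, not a real design: the keyword-overlap matcher, the concept table, and all names are hypothetical stand-ins for the much richer parser the sender would actually have to build. It does, however, show the three kinds of reply the text distinguishes: a programmed answer, an admission of no programmed answer, and a failure to understand.

```python
# Minimal sketch of a sender-programmed message AI.
# All concepts, keywords, and replies here are hypothetical examples.

FALLBACK_UNKNOWN = "I don't know that."          # parsed, but no answer programmed
FALLBACK_UNPARSED = "Sorry, I don't quite understand your question."

# Question concepts the sender predicted, each as a set of trigger
# keywords paired with a pre-programmed reply.
CONCEPTS = [
    ({"who", "wrote", "book"}, "Yes, the person in the picture wrote it."),
    ({"where", "meet"}, "We can meet at the old observatory at noon."),
]

def reply(question: str) -> str:
    """Match an incoming question against the predicted concepts."""
    words = set(question.lower().strip("?!. ").split())
    if not words:
        return FALLBACK_UNPARSED
    best, best_overlap = None, 0
    for keywords, answer in CONCEPTS:
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = answer, overlap
    # The question was understood as words, but matched no programmed concept.
    return best if best is not None else FALLBACK_UNKNOWN
```

For example, `reply("Who wrote this book?")` returns the pre-programmed answer, while an unpredicted question falls through to “I don't know that.” The inefficiency the text mentions is visible here: every question concept must be enumerated in advance.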
This case relates to telepresence issues. If it is easy to start regarding a computer system or application as an entity to interact with, so that the person behind its design disappears, then how do we prevent the same from happening where people are supposed to interact with other people through social media systems and applications? How do we ensure that the other participants of the actual discussion do not disappear in the same way the developer of the discussion forum system does?
The earlier scrutiny of the Turing test falls short, however, where actual artificial intelligences are considered. I should assume that there is a line between an expertly designed clockwork knowledge system and something in the same ballpark as neural networks – systems whose final outcomes are unperceivable even by their developers. When the developer does not understand how the computer comes up with the solution, the computer starts to approach a real artificial intelligence. It is also interesting to consider co-authored applications. One could see an application created by two people as their child. Is it a real child of two parents, with a ghost of its own, or is the ghost just a mixed representation of the parents' ghosts? How much, and why, would a clone be its own person? Is the differing personality of identical twins only due to the fact that they cannot simultaneously occupy the same exact space? My discourse above has seen the ghost stripped away from the machine, but this is the point where an artificial ghost starts to emerge. From this point onward one needs deeper philosophy and psychology, if not theology, to try to answer where a real ghost has appeared in human beings – and why that is any less artificial than the ghosts we are intentionally building into computers ourselves.