A personalized chatbot that emulates the intimate way we talk to close friends: it uses emojis, habitual misspellings, and references to old topics from past conversations. It is trained on a specific chat history with one person (in this case, my clueless roommate) using Markov chains and a GPT-2 model. A Puppeteer script then emulates the timing of human responses in the chat application.
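As a rough illustration of the first ingredient, here is a minimal word-level Markov chain sketch in TypeScript. The training file name is a placeholder for the exported conversation, and this is only the simple half of the generator; the project paired it with a GPT-2 model for longer-range coherence.

```ts
// markov.ts — minimal word-level Markov chain sketch.
// "chat-history.txt" is a placeholder for the exported chat log,
// assumed here to be plain text with one message per line.
import { readFileSync } from "fs";

// Map each word to the list of words observed to follow it.
function buildChain(corpus: string): Map<string, string[]> {
  const chain = new Map<string, string[]>();
  const words = corpus.split(/\s+/).filter(Boolean);
  for (let i = 0; i < words.length - 1; i++) {
    const successors = chain.get(words[i]) ?? [];
    successors.push(words[i + 1]);
    chain.set(words[i], successors);
  }
  return chain;
}

// Walk the chain from a random start word, sampling successors uniformly,
// so the output reuses the exact spellings and emojis from the history.
function generate(chain: Map<string, string[]>, maxWords = 20): string {
  const keys = [...chain.keys()];
  let word = keys[Math.floor(Math.random() * keys.length)];
  const out = [word];
  for (let i = 0; i < maxWords - 1; i++) {
    const successors = chain.get(word);
    if (!successors || successors.length === 0) break;
    word = successors[Math.floor(Math.random() * successors.length)];
    out.push(word);
  }
  return out.join(" ");
}

const chain = buildChain(readFileSync("chat-history.txt", "utf8"));
console.log(generate(chain));
```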
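The timing layer can be sketched with Puppeteer along these lines. The URL and the `#message-input` selector are hypothetical placeholders for whichever web chat client the bot drives, and the delay ranges are illustrative, not the values used in the project.

```ts
// send.ts — sketch of emulating human response timing with Puppeteer.
import puppeteer from "puppeteer";

// Random pause between min and max milliseconds.
const pause = (min: number, max: number) =>
  new Promise<void>((r) => setTimeout(r, min + Math.random() * (max - min)));

async function sendLikeAHuman(text: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto("https://chat.example.com"); // placeholder URL

  // "Read" the incoming message first: wait a few seconds before typing.
  await pause(2000, 8000);

  // Type with a per-keystroke delay (randomized once per message)
  // instead of pasting the reply instantly.
  await page.type("#message-input", text, { delay: 80 + Math.random() * 120 });

  // Brief hesitation before hitting send, as people often do.
  await pause(300, 1500);
  await page.keyboard.press("Enter");

  await browser.close();
}

sendLikeAHuman("omg haha yes 😂");
```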
This was a performative action meant to explore virtual identity and automation in emotionally charged spaces such as chat applications. My roommate consented to the use of these screenshots and to her participation in the project. On a randomly chosen day, she unknowingly interacted with the bot instead of me. Even once she knew about the experiment, she felt a great deal of emotional concern for my wellbeing, aggravated by the chatbot's use of misspellings and phrases used exclusively between us.
This poses questions for a future in which the refinement of such tools fully masks the bot behind them. Would it be desirable to auto-respond to a friend's chat the same way we use email auto-responders?