Drawing from her recent creative experiences, Chomsky vs Chomsky (Sundance 2021) and Future Rites (Creative XR, UK-Can Immersive Exchange, Philharmonia, IDFA DocLab Forum), Director Sandra Rodriguez (Canada) explores how artificial intelligence (AI) and human creativity meet at a crucial junction to create compelling virtual worlds and characters that invite interaction, discovery and play. Between technology and carefully crafted storytelling, it is the human imagination that remains at the core of any immersion.
Sandra Rodriguez, Ph.D., is a creative director (interactive, VR, XR, AI), producer, and a sociologist of new media technology. For the last four years, she was founder and head of the Creative Reality Lab at EyeSteelFilm (an Emmy-award-winning company based in Montreal), where she explored futures of non-fiction storytelling in VR/AR/AI. Her work as creative director and producer (DoNotTrack, DeprogrammedVR, Big Picture, ManicVR, Chomsky vs. Chomsky: First Encounter) has garnered multiple awards, including a Peabody (DoNotTrack, 2016), best immersive experience (IDFA DocLab 2016; Leipzig DokNeuland 2018; Numix 2018), best storytelling (UNVR and World Economic Forum tour 2018), and the first Golden Nica award given to a VR project at Ars Electronica (2019).
Her most recent works span immersive dance performance, multi-user XR theater, and large-scale installation, but always explore the sparks that fly at the crossroads of AI, VR, and human creativity. Rodriguez has also been a lecturer at MIT CMS/W since 2017, where she leads HackingXR, MIT’s first course on VR and immersive media production. Her experience combines immersive know-how, award-winning productions, and human-centered design.
The following is a transcript of the video’s content, generated (fittingly enough) by Otter.ai, with human corrections during and after. For any errors the human missed, please reach out to firstname.lastname@example.org.
Vivek Bald 00:48
It’s wonderful to have Sandra Rodriguez back with us. She, as many of you know, was one of our earliest fellows in the Open Documentary Lab, has remained part of the larger Open Doc Lab family, and, in addition, has taught some really popular courses in VR for MIT students. We’re just really pleased to have Sandra back with us for this hour and a half. She is a creative director, producer, and a sociologist of new media technology. For the last four years, Sandra was founder and head of the Creative Reality Lab at EyeSteelFilm, an Emmy-award-winning company based in Montreal, where she explored futures of nonfiction storytelling in VR, AR, and AI. Her work as creative director and producer of Do Not Track, Deprogrammed VR, Big Picture, ManicVR, and Chomsky vs. Chomsky: First Encounter has garnered multiple awards, including a Peabody for Do Not Track in 2016; best immersive experience at IDFA DocLab in 2016, Leipzig DokNeuland in 2018, and Numix in 2018; best storytelling at the UNVR and World Economic Forum tour in 2018; and the first Golden Nica award ever given to a VR project at Ars Electronica in 2019. So today, I believe Sandra will be talking about a couple of her most recent projects. Please join me in welcoming Sandra Rodriguez.
Sandra Rodriguez 02:53
Thank you, Vivek, and thank you, Andrew. I’m really excited to be here today, not only because I get to talk to you guys again and get to feel like I’m part of Comparative Media Studies again, but because it’s been a while since I’ve set foot in the department, and not just because of COVID realities, of course: right before that, I was on maternity leave, and since then everything has been virtual. So I really miss being in the department and sharing some of the experiences that I’m creating or working on, collaborating a lot with either people that I’ve met at MIT or that I’ve met through people at MIT. MIT has been part of these experiences, either in a very close-relation way or sometimes in a bit of a six-degrees-of-separation way. And I hope that the Keynote that I’ve placed before we start is not going to fail me now, because I’ve included more videos. I arrived in 2015, as you were saying, as one of the fellows; I think it was the second year of the MIT Open Doc Lab fellows. And I was arriving as a visiting scholar of the Comparative Media Studies department on a postdoctoral research-creation grant. My focus then was on big data and surveillance data, and how we could create public installations that could open a conversation with the public about some of the ways data was analyzing us in a biased way, and how it left aside everything that is transparent or non-visible in the way we interact in public spaces. So that was my goal then: to really open conversations on big data uses, misuses, and disruption by artists. And part of my research was also creating, of course; these endeavors have kept going now that I’m exploring VR and AI in different ways. So, Keynote, stay with me.
Some of you know me because of the class HackingXR. As you’ve mentioned, since 2017 I’ve been teaching CMS undergrad and grad students, but also technology and computer science students, and I get a lot of requests from students in architecture and design, who, I think Ilan can tell us, are interested in this class partly because it promises this thing on virtual reality or augmented reality and how we can explore these new tools to share common experiences and stories, but also because it tries to bring focus on the critical aspects and ethical dimensions that come with exploring new media. So as I was getting ready to present today,
Sandra Rodriguez 05:35
sorry, you can see my little rainbow wheel spinning every time I change the slide. I may close my own camera if it will help; I’ll just give it a try and see if it helps. This is a picture of the class this year, this was the iteration of the class this year. I’m not sure, Ilan, if you can find yourself in this image. It was a first attempt to try to use these technologies to have a sense of presence together, and I’m happy one of the students wrote on a post-it note that VR is awesome. I hope that was the feeling conveyed that day in class. But I think it’s not just about feeling that these tools are awesome. It’s also, at least for me, a mantra to think about the way that we choose to use these tools. We’re not just using these tools because it is a future that we foresee as inevitable; we choose to use tools to share human experiences. So my mantra is always to get us to think about why we choose particular tools, why we choose particular affordances in these tools, when we try to share and convey elements of our human experience. I’ve often had opportunities to talk about my past work. As you’ve mentioned, since I arrived at the MIT Open Doc Lab, I was invited to talk about Do Not Track, which was really about tracking an audience to show how tracking works, or the ManicVR experience, which was about exploring manic depression through virtual reality, because we felt it enabled anyone to embody the emotions of manic depression instead of trying to explain them. So I’ve talked about these a lot, and these were projects that were there while I was at MIT. But in parallel, I was also increasingly working with AI, and I’ve seldom had the chance to talk about this, except for the first time during the HackingXR class, I think two weeks ago. So I had the chance to test a little bit of the things I want to present with the students then.
But in 2016, I had just arrived at the MIT Open Doc Lab, I had been there for a semester, and I was approached by a researcher at CSAIL who had an idea about a project he wanted to do. I’ll get into the origins of the project later, but let’s just say for now that Chomsky vs. Chomsky is an experience that uses AI to talk about AI. A little bit like Do Not Track did before, or like my research-creation did when I arrived, my goal with Chomsky vs. Chomsky was to try to open a conversation about the limitations, biases, potential pitfalls, but also opportunities of AI, by using AI. And Future Rites kind of does the opposite. You could call it the left-brain and right-brain approach to things, even though cognitive scientists will tell you there’s no such thing as left-brained or right-brained, but it’s a bit of the approach to say: what if I could just use VR and AI as a tool, not just to reflect on our use of AI, but maybe just to reflect on our sheer interest in creating with these tools? And I feel really lucky. I feel like I fell or slipped into two fields that I was just curious about and that happened to grow exponentially: VR, but also AI endeavors. Creating virtual and intelligent environments, or entities that we can interact with, and experiences, is now maybe one of the biggest promises of the combination of VR and AI. And of course, it’s exciting to feel like you’re on the cusp of something new. But it also, you know, brings a red alert: always think of the promises and pitfalls of these tools, and why, again, these are now enhanced over other media. Why do we feel they’re more relevant today than other media? Again, for me, the answer lies in the choice: it’s not that they are necessarily more relevant, we choose to make them more relevant.
Sandra Rodriguez 09:30
I like this quote a lot, and I’ve used it a lot in class, maybe too much, simply because I think it conveys well what new media artists, or artists of emerging media, do. They need to both contribute to the development of a medium, so try to see its limitations, its problems and inherent clunks, and use these clunks and limitations to create from them and try to explore other opportunities or other possibilities. So it means that you need to act both as an artist, who usually tries to disrupt the tool to talk about other things that the tool is not showing, and as a scientist, to see what else you could create with it. It’s a little like, I’ve mentioned to Vivek and Andrew that my toddler is now 19 months, so she’s exactly in that period, right? You show her something, she will break it apart and try to put it back together. That’s a little of what I think artists in emerging media need to do all the time. But of course, they’re not the only ones. This is a very famous quote: “Our path leads to the poetry of machines.” It’s a Dziga Vertov quote; he was also known as David Kaufman. In another famous quote, speaking of the mechanical eye, he said: “My way leads towards the creation of a fresh perception of the world. I thus explain in a new way the world unknown to you.” These discourses on the technologies we use to share experiences have been there for a while now. With every new medium, we try to think of what else it could show about our human experience. I don’t think we try to respond to this question in a manner that is scientific; we’re not trying to see exactly what it can show. I think it stems from a desire we have to always inquire into other aspects of our reality that we find so intangible and so hard to share. So this desire to constantly share, and to constantly try to see in a new technology how it can help us share something different,
leads me to maybe a first segment of this presentation, where I’d like to settle a little bit of my background, since, like you suggested, I believe it is sometimes welcome, and give a little bit of background on some of the questions I’m going to try to raise with the two projects I’m going to talk about today. It’s just the fact that when we talk about new tools, especially now, tools that promise a virtual reality that is magic and seems to create alternate ways of meeting, encountering, even teaching, I think we need to always remind ourselves that these tools come with a materiality. They are connected to an economy, to an industry, to social contexts, but also, specifically, to the hardware and software that they’re made of. And reminding ourselves of all of these elements together helps us sometimes see that it’s not new, as I was mentioning with the Vertov quote, that we need to rely on the limitations of the tool to try to think of different ways to tell the story. So, speaking of a bit of background: I am a documentarian by trade; I studied in film school before I even decided to do grad studies. I remember then I had a dual endeavor, and my parents were really torn about whether it was a good choice for me to go into film school, which of course they didn’t agree with, because I had very good grades, was interested in engineering, and wanted to really go into technology. But I kept feeling like both endeavors worked really well together. I loved film because it was a machine, and through the machine I could explore how to tell different stories, and documentaries have a long tradition of thinking ethically about who gets to see, who gets to show, what is seen, how you share a story, how you make it personal. I hope the videos are going to work here. This is a very old documentary, and I’m saying very old because 2007 doesn’t feel that old to me.
But we still used DV tapes, which is very old technology. While it’s playing: the documentary was exploring a small community living at a little under 11,000 feet.
Sandra Rodriguez 14:09
So the community is a very small community, and they live on the ancient ruins of an archaeological site that is wider than Machu Picchu, better preserved than Machu Picchu, and completely destroyed, because nobody really cares about pre-Incan civilization anymore. But the sheer fact is that this community keeps finding these mummies, and they bring them back into the school, and they created their own little museum to remember their direct lineage, but also to scold the children: when they’re not doing well in school, they need to sit with the mummies, and the mummies have a real personality and reality in their lives. Of course, what we loved about filming this film was that there was no black or white, no true or false. Everybody had very different conceptions about who the mummies were, you know, why they were so well preserved, who we were to trust to preserve them. The narco traffickers in the area were the ones preserving them the best, so of course, for the archaeologists, this was a problem. And I’ve shown this image often, simply because for us this was a story that had so many facets that we had to talk about it, and we were trying to convey as best as possible the different layers of reality of the different inhabitants of the town. And while we were doing so, the town created their first Facebook page. This was 2007, and 2006 was the beginning of Facebook. And these are the archaeologists, who were taking pictures of themselves and posting them on the town’s first Facebook page. So we thought we were trying to tap into one reality, but while we were doing so, they were also sharing these goofy North Americans coming with their cameras and equipment, just disrupting their daily activities as archaeologists. And they had a story to share. In 2013 came my last traditional film. I’m showing it here just because I don’t often have the chance to.
So this was a documentary about older men losing little bits and pieces of an important moment of Bolivian history, men who seemed not to be able to recollect anything that would help me make the movie. I’m not sure if the sound is going to play properly.
Ever since I was little, I’ve heard the stories of hidden messages with my sister, or whatever, of the creator of the secret codes, my father. But I clearly understand these messages were there for…
Sandra Rodriguez 17:37
So that’s […] for another invitation. But I was following six ex-missionary priests, one of them my father, who were all taking part in the revolution, in Che Guevara’s guerrilla in Bolivia, in the 1960s. Of course, what a story to share, right? But what we discovered was that nobody really remembered what happened; those that did remember what happened had Alzheimer’s, and everybody kept thinking that they were making it up. So again, for me it’s just another snippet of understanding how reality comes in layers. Some of these layers are factual, some are felt, some are made-up memories, some are completely forgotten. And these are different layers that, with technology, we can try to explore and convey in different light and different meaning. So my understanding of Vertov’s quote is that he’s also trying to see: how can I use the reproduction of an image, and the editing of an image, to convey some of the feelings of my daily senses? In parallel to working in film and documentaries, I was also finishing a PhD, where I studied the making of sense of social change among a younger generation of social media users, and how they used social media to create sense in social activism. Weirdly enough, I was asked in 2017 to revisit this as part of a book, and I was told, you know, it’s still so inherent today; a lot of the younger individuals interviewed as part of that publication were saying exactly the same thing. The young people I had interviewed for my own PhD, a lot of them, understood social media as inherently built with limits. They didn’t think it had the potential to create any change, but through the limits, they were also trying to change the way we perceived things around us. So it led me to think of a different type of theory on social change, one that is a lot closer to Herbert Blumer’s theory of social change from 1928. Social change, in his mind, always comes as a construct.
It didn’t always come with organized forces. A lot of times it just came with small changes in social and co-constructed perceptions. So I felt that a lot of the younger individuals I was interviewing as part of my thesis believed in that power: one of the limitations of the technology is that it doesn’t help you change anything directly, but it did have one important affordance, it could be shared vastly. So if you could use that sharing power, that sharing capability, to add small changes in perceptions, that became a goal: trying to convey social change through changing our perception of the world. And of course, these media have permeated our lives increasingly, and increasingly these technological tools that help us track, record, have our own take on the world as we’re perceiving it, and share it extensively, bring new questions on the type of data that can be shared, the type of data that can be tracked, and how it can be used again and again. Of course, this is inspiring to a lot of makers, and I wanted to introduce this not just by introducing myself and my own work: I was inspired by other makers out there who also used data points to show, for instance, nature, how wind circulates in the United States, right? Or We Feel Fine, this recollection of words used on Twitter: if people say “I feel”, as long as the words “I” and “feel” are connected together, the data points show whether users are feeling happy today, or more ecstatic, or sad, or excited. Sorry, I’m not sure if you can hear everything on my end; let me know. It’s getting warmer, so I closed the window, and it’s a little too hot at the moment, but I can hear what I think are ambulances or maybe fire trucks. The goal here is just to say that it’s not new that tools inspire with their limitations, or try to convey something that helps us see what we don’t necessarily see through more traditional media. In the case of artificial intelligence,
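The “I feel” trick Rodriguez describes is essentially a pattern match over a stream of text. A minimal sketch, assuming made-up sample posts (the real project crawled the web at scale, and this is an illustration, not its actual pipeline):

```python
import re
from collections import Counter

# Hypothetical sample posts, invented for illustration.
posts = [
    "Today I feel happy because the sun is out",
    "Honestly, we feel sad about the news",
    "I feel excited about the launch tomorrow",
    "I feel happy again this morning",
]

# Match "I feel <word>" or "we feel <word>", keeping the feeling word.
pattern = re.compile(r"\b(?:i|we)\s+feel\s+(\w+)", re.IGNORECASE)

# Tally how often each feeling word appears across all posts.
feelings = Counter(
    word.lower()
    for post in posts
    for word in pattern.findall(post)
)

print(feelings.most_common())  # [('happy', 2), ('sad', 1), ('excited', 1)]
```

Aggregated over millions of posts, counts like these are what let a visualization claim that users are “feeling happy today” versus sad or excited.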
there’s so much to unravel, right? Is it neural networks? Are we talking about machine learning? Are we talking about image or facial recognition? When we talk about AI, we seldom talk about one technology; we talk about a wide set of technologies, not just one. So we have a tendency to
Sandra Rodriguez 22:25
have to over-explain what AI is. But the real short answer is that AI is not one thing; it’s an umbrella of terminologies, of different technologies. And why I’m showing these images is that this is one of the things that inspired me. When we say it can show you things that you haven’t seen before, or, like Vertov’s quote, that it can show you a world you hadn’t expected: we keep mentioning that AI or big data can know us better than we actually know ourselves, but it can also show us other things. It can show a pattern in movements, for instance. On the right, what the AI is actually doing is creating images from patterns of movements that it recognizes. So it’s not doing much, but it’s still inspiring to see that you can try to convey a sense of movement through something else than just data points. For me, this was exciting. So the conclusion of this first segment is maybe just to say that new technology always comes with a particular set of affordances, and these are simultaneously shaped by the way we perceive our world and our own biases, as much as they shape the way we will perceive the world and perceive each other. So they come with opportunities and, of course, huge challenges, and it’s about how we use them. I presented at the beginning of the talk two projects that I’m super excited to share with you today. Both projects are still in the making. One was partly presented at Sundance, as a first chapter, but it’s still in the making, and both are targeted for a public release in spring 2022. So we’re still in the midst of production. That’s why I’m so excited to share: you can see the glitches in the process instead of just seeing the end result. And both projects have a different understanding of how we can leverage the affordances of AI. The first really aims to open a conversation about AI, with AI, and through AI. It’s a bit catchy, but I’ll go into more detail about what that means.
And the second project tries to use AI really as a backbone, an invisible backbone that just enhances our humanness, our need for interacting with each other in these virtual spaces. So why a conversation about AI, with AI, and through AI? The initial provocation comes from a feeling I had, especially while at MIT: there were a lot of discourses on AI that rubbed me the wrong way. Not because I didn’t see any opportunities in the technology, but rather, I felt like we were all drinking the Kool-Aid. We were told what AI could do, what was inevitable, where it was heading, how we would need to get ready for it. And I just felt: isn’t this just big data on steroids all over again? I had been working on Do Not Track from 2011 to 2015, and it felt like a redundant, microwave-reheated discourse, but with new sheer excitement. And I thought, with any new technology, especially when it comes with such a threat of potential repercussions: don’t drink the Kool-Aid. You need to have a level head and try to see the opportunities and trade-offs. But, well, I’m always curious, maybe that’s the little disruptor in me, to think beyond the pros and cons. What is everything that we’re leaving out of the conversation? What is it that we’re leaving behind?
Sandra Rodriguez 25:55
And AI, as noted, is presented as a future technology, but it’s already in our lives, right? From the Siris in our pockets, to self-parking cars, to technology that helps us make deepfake videos with politicians, to this little gadget here that enables us to find Waldo. It feels like AI is at the cusp of showing us great new potential and great new opportunities, and simultaneously taking the sheer fun out of everything that we take for granted. For instance, the fun is maybe not finding Waldo, but trying to find him, you know, and finally succeeding and getting that little bit of adrenaline rush: yes, I found him! It’s not actually just finding Waldo that’s the fun. So, at the same time that we’re told that it can emulate the human mind, we also see a rise in pseudo-AI: tech firms that, as the title here says, quietly use humans to do bots’ work. And it’s not like a one-time thing; it happens on and off. If you’re talking with a chatbot service for a company and the chatbot reaches its limits, it usually routes you to a real human that will help you out, but it doesn’t tell you a real human is now speaking. So it all comes in a conflation, where we are told that AI is imminent, we need to get ready for it, we think we know what it means, but it keeps feeling hidden, hyped up, mythologized. Which brings us to think about what happens when you combine it with the next glitchy thing, which is VR, right, the next, I mean, shiny tool. Again, we’re told that we’re on the cusp of a shift in how technology will impact the way we live, work, and play. But especially if we combine virtual reality with artificial intelligence, one of the promises is that we can finally get to interact with entities that feel real, that can answer back as though they are real, that make us feel like we are in contact with a real human being.
Soul Machines is a company that did great in creating AI that was also taught how to exchange through emotional intelligence. It tried to understand the way our facial expressions, and even our twists and turns of phrase, signal hesitation, and how to use that instead of just trying to carry a conversation based on questions and answers. But again, we’re just told that these virtual entities will make us feel as if we’re speaking with a real human. MetaHuman, by Unreal Engine, is just now out, and it’s a cloud-streamed app for real-time digital human creation: it offers users the ability to create one, and once your character is finished, you can export and download it in Unreal, ready to animate, rigged like a puppet. It actually still feels to me a little like a latex puppet: when you see the way these characters move, although they look very human-like, to me they feel more like latex puppets than they feel human.
Sandra Rodriguez 29:04
I’m not saying this to try to say that I don’t believe in AI futures, or that I don’t think we should try to extend these opportunities. Rather, again, I’m trying, and that was the goal of the Chomsky vs. Chomsky project, to get us to think about why we’re trying to emulate humans in the first place. What is it that we’re leaving behind when we’re focusing on AI as being a perfect replica of human encounters? And if we keep bringing back the specter of superintelligence and singularity every time we talk about AI, AI is again made intentionally mysterious, mythologized, or out of reach, so it doesn’t help us all take part in steering its future. I felt that with a clear-eyed conversation on AI, you can really start leveraging its real affordances and not fall into the trap of either fearing it or hyping it. And Chomsky vs. Chomsky hopes to do just that. So it has a lot of different partners, and real production started in 2018, but the real endeavor started back in 2016. There was a postdoc researcher at CSAIL, Yarden Katz, who approached me while I was at the MIT Open Doc Lab with a great idea. He said he had found a way to map the brain of Noam Chomsky. So of course, that’s a pun, and it’s kind of a catchy approach, but his idea was really, really smart. He gathered that if he could collect enough data about the famous intellectual, he could easily find patterns in the way he spoke and the way he used hand gestures. He also found that Noam Chomsky had a very robotic manner to him: with only a couple of words, you could prompt similar answers from one, you know, journalist interview to the next. And so, in a way, he could replicate the way he thinks, or at least try to map out the way he thinks, and how words connect to feelings and connect to gestures. And of course, I thought: are you trying to robotize Noam Chomsky?
And I asked Yarden Katz: are you trying to make him into a replica? And Yarden responded with a lot of transparency: of course not, I’ve spoken already to Chomsky about this, and he would never agree with this. And I thought, well, isn’t that a strange and very ironic proposition, to try to map the brain of somebody who fundamentally disagrees with the fact that you can actually replicate the way he thinks? Since then, I’ve been told that, yes, it’s true that Noam Chomsky has one of the largest digital footprints available. He has hundreds of thousands of videos, pictures, and recordings uploaded online, but he also gives everything free of rights, which means that it’s easily shareable, usable, and reusable. He becomes a perfect case study to try to deepfake, to try to replicate the way he speaks, the way he gestures or moves. But he also, I thought, becomes the perfect guide into questioning our endeavor of recreating and emulating other human beings. So Yarden Katz and myself didn’t necessarily agree on the way to do so. He still believed making a film about this endeavor was more relevant; I felt that maybe we needed to reach out to Chomsky and see why he would have disagreed in the first place. And so I did. I reached out to Chomsky, reached out to his, it’s not the right word for it, but for lack of a better synonym in English, his entourage, his liaisons for public relations. And we discussed the fact that my goal was really to try to understand why Noam Chomsky was labeled as anti-artificial-intelligence, when I had heard him say in an interview that he feels he has been working on artificial intelligence all his life and nobody seemed to mind. So I thought: I think he has a lot of things to say. And it’s true that one of Chomsky’s famous theories on natural language is very partly at the basis of natural language processing.
Right? Both were born at MIT, in the same buildings, with a lot of talks in the hallways feeding into each other. And natural language theory, explained way too simply, suggests that we all have inside us a sort of system that allows us to take finite blocks of ideas, or of emotions or meanings, and reconstruct and recombine them to create new meanings. And that’s how humans are endlessly creative: we recombine these blocks of meaning, creating new meanings each time.
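The recombination idea above can be illustrated with a toy sketch (my invention, not Chomsky’s formalism, with made-up vocabulary blocks): a handful of finite building blocks already yield many distinct sentences, and embedding sentences inside sentences is what makes the creativity unbounded.

```python
import itertools

# Finite blocks of meaning, invented for illustration.
subjects = ["the child", "the machine", "the linguist"]
verbs = ["imagines", "questions", "rebuilds"]
objects = ["a story", "a pattern", "the world"]

# Every subject-verb-object recombination is a new, distinct sentence.
sentences = [
    f"{s} {v} {o}"
    for s, v, o in itertools.product(subjects, verbs, objects)
]
print(len(sentences))  # 9 blocks yield 27 distinct sentences

# Recursion removes the ceiling: a sentence can embed another sentence,
# and that one another, without limit.
nested = f"the linguist says that {sentences[0]}"
print(nested)
```

The point of the toy is only the combinatorics: a small, fixed inventory of blocks plus recombination rules produces an open-ended space of meanings.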
Sandra Rodriguez 33:43
But here’s the catch. As part of the research, I tried to listen to every Noam Chomsky interview that is closely or remotely related to artificial intelligence, machine learning, or cognitive replications of the mind. And here’s the great catch: where he is repetitive is that he keeps insisting and reminding us that we know very little about the mind, and almost nothing about the way our brain works. So he prompts a question that I think is really relevant, and that he keeps repeating: it’s not that AI fails us; it’s that we need to decide what metaphors we’re willing to accept for ourselves and which we aren’t. And for me, it brings us back to the main point of the conversation: if we don’t really know what we’re replicating with AI, what are we leaving behind? What is it that we’re not seeing about ourselves, that we’re not taking into account, when we’re trying to replicate the way we communicate? Of course, it’s very meta; it sounds very intellectual and very high-level, but the experience aims for exactly the opposite. We wanted it to be fun, funny, quirky, a very Malkovich-Malkovich kind of approach. So, bear with me for this little video. It’s a little trippy, but it’s our trailer for the experience that was presented at Sundance.
I have been asked questions all my life. Everybody wants to know. Defining intelligence is a colossal problem, way beyond the limits of our understanding. We have to be humble. We really are in a pre-Galilean stage. We don't know what we are looking for any more than Galileo did. There is an instinct for freedom at the core of human nature: to inquire, to create. You, the countless number of people, are a driving force in history. It is up to you to decide. You can duplicate me. But the question is, is myself actually myself?
Sandra Rodriguez 36:09
Does the sound seem very bad just for me, or is it also very bad for you guys? The sound is okay? So, I can hear it very choppy; if I keep grimacing, it's just because I'm not sure what you guys are hearing. Yes. So the experience didn't aim to make us either fear or embrace AI, but to ask questions. It's not meant as a didactic or intellectual experience. We wanted people to question, to try to break the system, to try to inquire how it worked, and in doing so discover little snippets of information about how AI works and, especially, how our own brain works. So I like this quote by Noam Chomsky, because he kept insisting we really are at a pre-Galilean stage. It's not that we can't get to a certain point with AI; it's that we don't know what we're looking for any more than Galileo did. So for me, this is not depressing or optimistic. It's enthralling. It's the clear-eyed view we need to move forward. And of course, to create the experience, MIT Archives gave me access to their incredible Noam Chomsky special collection, where you can find so many of the speeches that Noam Chomsky first wrote by hand before he had them typewritten. And amongst the messiness of how humans take notes when they're preparing to talk publicly about different subjects, you can see some of the human life seeping through. So I know that he has grandkids; I'm not sure if he was drawing for a grandkid at that moment. I can see coffee stains; I could see notes and things to skip when you think you're talking too much or too long about certain elements. All this messiness is what creates our human experience. So we tried to capture it. I say "we" because I didn't create the experience by myself, of course; I have a vast team working with me that I'll present a little bit.
But we worked both from the real archives, to get a sense of how Noam Chomsky speaks, and from his notes. You can see in his handwritten notes the things he keeps insisting on, things that feel really relevant to some of the messages he has been conveying over and over for 60 years. But we also had access to a vast library of archives through chomsky.info. Chomsky.info is simply a repository of Noam Chomsky talks that fans personally transcribe; some of them are transcribed by apps, but it's all there, all free of access. And we decided to use this as our initial data. We ended up with hundreds of thousands of questions asked of Chomsky, and hundreds of thousands of answers given by Chomsky to these questions. And this helped us create the backend system. So the backend system works using different tools that are labeled AI. One of them is speech-to-text and text-to-speech, to have a sort of chatbot. That chatbot also uses algorithms to predict a user's question intent, content, and affect, and gives it a score. And we made that score visible at all times to the user. So when you spoke and asked a question of Noam Chomsky, or pardon me, Chomsky AI, you could see how it related to real Chomsky questions or answers, and how the system understood your own question, your own intent. A lot of the time the system fails, but I felt presenting this to the public was fun enough already: you keep trying to see where else it would fail, and you try to make the system fail just to see how it would derail. And then we used a customized Microsoft Azure natural language processing model called LUIS, and we made our own complex conversational chatbot model called ARNOLD.
So, in a nutshell, because it can be either way too simple an explanation for somebody who specializes in AI, or way too technical for somebody who doesn't really want to hear about all the backend possibilities and how it works: what I think is useful to know is simply that when you asked Chomsky AI a question, based on the score from the system, the system would then choose which of three conversation modes to draw its answers from. If the question related very accurately to a theme we wanted to highlight in the experience, it went into scripted mode. If it was a chitchat conversation, the answer could be pulled from real Noam Chomsky conversations. Or, when the questions were a little more complex, we would recreate a Chomsky AI answer from a pool of real Chomsky answers.
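The score-based routing described above can be sketched roughly like this. This is a hypothetical illustration, not the actual Chomsky AI backend: the function names, themes, keyword lists, and score thresholds (`classify_intent`, `choose_mode`, 0.5, 0.2) are all invented for the example, standing in for a LUIS-style intent classifier.

```python
# Hypothetical sketch of the three-mode routing described in the talk.
# All names and thresholds are illustrative, not the real system.

def classify_intent(question: str) -> tuple[str, float]:
    """Stand-in for an NLU model (e.g. a LUIS-style classifier):
    returns a predicted theme and a confidence score in [0, 1]."""
    themes = {"language": ["language", "grammar", "syntax"],
              "ai": ["ai", "machine", "intelligence"],
              "politics": ["politics", "election", "government"]}
    words = question.lower().split()
    best, score = "chitchat", 0.0
    for theme, keys in themes.items():
        hits = sum(w.strip("?.,") in keys for w in words)
        if words and hits / len(words) > score:
            best, score = theme, hits / len(words)
    return best, score

def choose_mode(score: float) -> str:
    """Route to one of the three conversation modes by confidence."""
    if score >= 0.5:      # strongly matches a theme to highlight
        return "scripted"
    if score >= 0.2:      # partial match: recombine real Chomsky answers
        return "recombined"
    return "chitchat"     # fall back to small-talk answers

question = "What do you think about machine intelligence?"
theme, score = classify_intent(question)
# In the experience, the score itself was shown to the user at all times.
print(f"theme={theme} score={score:.2f} mode={choose_mode(score)}")
```

The key design point from the talk survives even in this toy version: the score is not hidden machinery but part of the interface, so users can watch the system succeed or derail.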
Sandra Rodriguez 40:56
And it became even a little more complex; it now becomes a little more complex. We presented this at Sundance with two goals: (a) introduce the character Chomsky AI, and (b) test the types of questions people were asking and how they were responding to Chomsky AI. From this, we're now creating this new model. And this new model is, again, a bit complex to explain. We've called it ARNOLD. All the names that are already given to these natural language processing systems are things like LUIS, BERT, ERNIE. And we thought of it as kind of a BERT on steroids, so we called it ARNOLD. I'm not sure if it's fully ethical to name him that, but we decided we wanted to play with how much we could push it. The goal is always to show the user what the system is doing at all times. And of course, it becomes really fun to work with teams; I've mentioned the teams I was working with. In part we worked with two German studios, Schnellebuntebilder and Kling Klang Klong, a design studio and a sound studio, who helped us create a virtual world. Why VR? Simply because, at the beginning, it was easier to test how people would interact with the space, and to find ways to later recreate this space physically. So VR gave us a first quick prototype to test. And with the designers we created a world inspired by nature, inspired by birds, inspired by wind and water, with sounds that don't exist: we had an AI system recreate all these bird sounds from scratch.
Sandra Rodriguez 42:43
The music is also composed by an AI system. So again, I wanted to explore that tension and friction. On one hand, we have Chomsky AI, who keeps telling us and warning us about the pitfalls and limitations of AI and its own creativity. And at the same time, we're using AI in the backend to create the musical score, create the world, create the birds that inhabit it, and we tried to give it a feel that's purely created by the AI system. I worked with Cindy Bishop, who was also a fellow at the MIT Open Doc Lab and used to work at the MIT Center for Civic Media, to think of the best way to combine the front end with the back end. And we created the back end with a company based in Montreal called Moov AI, and with the National Film Board of Canada. And we insisted on not creating a deepfake, because of the many ethical questions we had with it. What we decided to deepfake is simply the manner in which Chomsky AI speaks: the croak of the voice, the pitch of his voice, so you can hear how that feels.
Sandra Rodriguez 43:56
You can hear the little croaks of voice that are, you know, from an older Chomsky, and the way he speaks, which we tried to recreate. Sorry, my keynote is running a little slow. We built an experience that, on one end, lets us interact with the system at different steps. And then from time to time Chomsky AI would pull us into what we call monologues: the world would change around him, he would pull us into his world and talk to us, and show us either how AI systems run on data, give us a peek into how natural language processing works, how algorithms are biased, or tap into questions on deepfakes, the emulation of the human mind, and why we're so obsessed with it. And at the same time, that's my endeavor as a creator: not letting the machine run fully freely. What I wanted the full experience to really embody are three things that, through my research in the Noam Chomsky archive, I realized are three of his essential messages and legacy: highlighting the fact that the way our minds work is not that different from other species. What makes us perhaps unique is a trifecta: a need for inquiry, a need for cooperation, and a need for creativity. And if the experience can make us feel that, and that's the goal for the next phase that we're now in production on, we feel we would have succeeded. So what we presented at Sundance enabled us to test the types of questions users were asking. It went from pure chitchat questions, like "How are you?" and "Where are you right now?", and Chomsky AI would always bring us back to understand that he's not Noam Chomsky, that he's an emulation of Noam Chomsky, that he's made of data. And we had other, maybe more... I'm not sure why it's not showing. Sorry, you can see my little rainbow wheel going here. Just trying to convey some of the questions. We had questions like, "Do you think Trump will win the elections again?" or "What do you think of the Pol Pot regime?"
We had questions on what will end the world, and questions on how human intelligence is different from animal language. So we had very, very chatty questions and very specific questions. And that's what we were hoping to get from a very diversified public; we were hoping to see how they would react to an entity. To our biggest surprise, and perhaps what made us most happy... sorry guys, it's stuck and not wanting to get to the next slide, and I see time running, and I want to talk about the next project. We were very excited and happy with the feedback we got. It feels like I'm boasting, but I want to share this as lessons for people interested in combining AI or creating AI-driven entities. We got amazing feedback on how meaningful the conversation felt. We had people specialized in technology who asked us specifically how we made the backend work, how it knew exactly how to respond. And I thought, well, the real answer is in the scripting. The real answer doesn't come from the machine. The real answer came from my human capacity to predict the types of questions users would be asking, and to script the types of fallback answers to these questions that would feel meaningful. Now, that's what I love about creating this type of work. I work with AI technologists and architects for the backend; Moov AI specialize in exactly this, in creating chatbots. And they come with potentialities and possibilities that they work hard on. And yet when you test the system, the conversation feels meaningless. And there's a reason for that: it derives modules from blocks of ideas, exactly as natural language theory explained. It brings these blocks of ideas together, but it lacks context. So how can scripting work? Well, scripting creates a character. And the character Chomsky AI has a will of its own, so it can fall back on some of these questions that users have. Sorry, we're really stuck, slow. I'm really trying hard to get it to move.
Maybe if I cut my camera for a second, it will enable us to get to the next slide. Maybe if I stop sharing and try to resume sharing again. There. Are you guys still with me?
Andrew Whitacre 49:00
Yep, we’re here
Let me try sharing again.
Cool, here we go. Haha.
Sandra Rodriguez 49:16
I was trying to get to this first lesson we've learned from the experience: a conversation is not a service. What I mean by this is, every time I work with AI architects, the goal is to try to create a service that can answer all your questions, understand the context of your question, and answer to the best of its capacities. And in the research for this experience, I had the chance to try a lot of chatbot services and chatbot experiences, even creative ones. And I felt the character is always portrayed as somebody willing to help you, to answer you, to be a little bit at your service. Here, we felt we had found something different to share. So again, it's not to boast about what we created, but so many people responded very positively when the answers were fully scripted. We scripted the answers with real quotes from Noam Chomsky, where he prompts us to think about why we want to ask these types of questions. These were fallback answers. For instance, if a question was specifically about politics, he would bring up a monologue along the lines of: "I'm always asked about politics, maybe because you're trying to find in my system an answer that you're trying to find for yourself." You know? So he kept showing us the limitations of his own system, telling us what he could and couldn't do, and trying to help us think about why we wanted him to have all the answers to everything. And people felt that was the meaningful conversation. So the first lesson is that a conversation is more than a service. It's a relation; it comes with a character, and if you can feel empathy for that character, the conversation feels more meaningful. A second lesson was reassessing what drives our desire to share with an entity that's virtual. Why do we want to talk to this virtual entity at all? The first thing is we want to test the limits of the system; we want to game it.
So that's why we react and try to interact with it. But the little hints showed we're actually really good at make-believe: humans are great at inventing worlds, and we want to inhabit them. So what really helps us is our unique human need and desire to be part of a magic world. We want to be part of these magic worlds. So it's not just the promise of the technology, but the promise of make-believe, the power of make-believe that we have. And the third lesson is: never let AI drive the experience. Whenever we did, the experience became boring. It's really users who have the storytelling power. They choose where they spend more time, asking questions or receiving answers, or whether they want to listen to Chomsky more, and then we fall into these more linear, narrated discourses that help us discover more about the legacy of Noam Chomsky, and not just be with a kind of gimmicky question-and-answer chatbot. I'm not sure about the time, because I've lost some with the slowness of my presentation. Do I still have maybe five to ten-ish minutes to talk about this other project? Absolutely, I can do it fast. Just to know how soon I should finish or not.
Vivek Bald 52:25
That’s good. Five to 10 is fine.
Sandra Rodriguez 52:28
So, to put this whole experience of Chomsky vs. Chomsky in a nutshell: it highlighted a lot of the limitations of AI. It really tried to bring that clear-eyed view I presented at the beginning of what the technology actually does. To retrieve some of the answers, you would see the accuracy percentages: the more questions a user asked, the lower the percentage got. That was a surprise to me; I didn't know that's the way it works. A system will learn from a lot of different users using it, but the more users use it, the less accurate its answers are, or feel. So it's a little bit of a catch-22. And the system did exactly that: Chomsky AI showed us all its limitations, and the more questions we asked, the more it would derail. And he would tell us: the more you try to get answers from me, the less I can actually give you correct answers. And for me, this was really inspiring; it tapped into other things that were beyond the scope of this project. But it showed me clear boundaries for AI, and I felt boundaries could really be seen as an invitation rather than a limitation. Some of these boundaries were that we were able to create a world, music, and bird sounds by deepfaking data that we found online. And it made me think about our relationship with nature, especially now, in a world where we feel overwhelmed: we get more data than ever about what's happening, and we don't know how to tackle our current disruptive relationship with nature. And while I was thinking of some of these issues, I met with Alexander Whitley. He was visiting Montreal; this was right before lockdown and right before my maternity leave.
And we were just discussing the sheer possibility that one day we would just let ourselves loose and use AI to create a dance experience. We didn't exactly know what the experience would be about, but we knew it was going to be about dance. Alexander Whitley is a UK-based choreographer. He's the founder of the Alexander Whitley Dance Company, and they are known for using technology in dance and live performances.
Sandra Rodriguez 54:51
And we both came from a similar endeavor, but a little different. For me, it was about maybe not using so many words, like I've had in this presentation; maybe just letting our bodies tell the story, trying to do exactly what I was not able to showcase when the Chomsky project was all text-based and voice-based, and doing exactly the opposite. I could see how people were interacting with the entity throughout the tests of the prototype. People got close to it, backed away; if the shape kept morphing, they played with it a lot more; if the shape calmed down, they would calm down as well. We had a nature-inspired environment, and as it grew, people tried to see how they could affect it. All these little snippets of understanding users' movements in space made me think: what if we could use data and AI to analyze these body movements, and use them to help guide an introspection on how we interact with nature, whether virtual or real? And on his end, Alexander Whitley thought, you know, there's a classic work of art (he was trained at the Royal Ballet) that every choreographer wants to tackle. It's not accessible to a wide audience, and it's so relevant today with what's happening in the world. That work is the Rite of Spring. So Future Rites is a multi-user immersive dance performance based on a new adaptation of the story of the Rite of Spring. And we use real-time animation, mocap data, and artificial intelligence, but this time as the backbone, as what makes the magic of the experience; it's not the theme or the goal of the experience. The Rite of Spring is Stravinsky's famous and controversial work, considered by many as one of the most controversial works of art of the 20th century.
It's considered by others as a gateway into the modern era. It premiered in 1913 at the Théâtre des Champs-Élysées and sparked riots between different classes of Parisian theatergoers, because of elements of the music that broke the rules of how rhythm was supposed to fall. There is no real rhythm; it's not music you can really dance to. And yet it's so narrative that it's been reinterpreted by everyone from Pina Bausch to Walt Disney, in very different lights and very different forms. I'm talking to you about the Rite of Spring now and you may not know what it is, but I'm pretty sure when you hear the score, you'll recognize it. The untamed energy of the Rite of Spring really pushed the idea of innovation, of understanding movement in a different light, and we thought, well, we have technology now that helps us rethink it in a different light. And it was an uncanny coincidence: Alexander Whitley and myself were both getting confined, suddenly stuck, not able to travel to Montreal or to London to co-create this project. But we had the uncanny luck that he was confined with a motion capture suit, which doesn't happen all the time. He is the brother of Nell Whitley, from the company Marshmallow Laser Feast, and they were all in confinement together, and they had brought their motion capture suits. So we had movement data, and we started to test how we could use it. And I was really inspired by a discovery which, if I'm not mistaken, I learned about from one of the AI Slack channels at MIT. This is what really inspired me: you can train a target to reproduce movement from a source video; I think this example is more eloquent. And voilà, you can turn target subjects who really don't know how to dance into elegant dancers. And couldn't that be an amazing promise, that we could create an experience where you don't have to understand the rhythm of the Rite of Spring: you are a dancer, part of the ballet.
And we toyed with it a little. It forced us to rethink how we could have different confined users be part of the experience all at the same time. And I'm going to show you the trailer, because it's going to be a little faster than me explaining the project. This is where we are now; it's a prototype, a trailer for the prototype. The project is not finished. Of course, I now get to talk to you a little bit more about
Sandra Rodriguez 59:21
it. So, you could say that the promise of the experience is to create an advanced auto-tune system, as I've labeled it for lack of better words. But the goal is that we're playing on something that is, again, very human: our capacity for mimesis. I'm going to try to put my camera back on so you can see my hand gestures, because we're talking about dance. When you see some characters, sometimes you feel like you're by yourself in the experience and you're just puppeteering an avatar, and the other avatars are dancing on their own. But at some moments, when you are yourself dancing, even if you're just doing a flick of an arm, the system tries to match your flick of an arm to an actual choreographed dance movement that we've recorded in a database of dance movements. So you feel like you are dancing like the dancer, like the avatar that you're puppeteering in front of you. If you move just a little to the right, the arm feels elevated; it feels like a natural dance movement that seems to fall magically in rhythm with the music and with the other avatars and characters. And the more you interact with the dance, the more you feel completely part of the choreography, and you don't see the difference and discrepancy between the professional dancers and yourself. The system is pretty simple. It really works like Autotune. For those of you who know Autotune: if you're not really singing to pitch, it will bring your voice to that pitch. It's a little bit the same. But what happens with mimesis is that mimesis helps us fill in the gaps, when our brain sees something that we feel is replicating what we're doing. That's what we're getting people to feel at the beginning of the experience.
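The "movement auto-tune" idea can be sketched as a nearest-neighbor lookup: snap a rough user pose to the closest choreographed pose, the way Autotune snaps an off-pitch note to the nearest pitch. This is a minimal illustration under the assumption that poses are flattened joint-coordinate vectors; the database, the distance metric, and the function name `nearest_move` are all invented here, and the real Future Rites system is far richer.

```python
# Minimal sketch of snapping a user's movement to a choreography database.
# Poses are toy 4-dimensional vectors standing in for mocap joint data.
import math

def nearest_move(user_pose, choreography):
    """Return the choreographed movement whose pose is closest
    (Euclidean distance) to the user's rough pose."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(choreography, key=lambda move: dist(move["pose"], user_pose))

# Tiny stand-in database: each entry is a named dance movement that
# would, in reality, come from motion-capture recordings of dancers.
choreography = [
    {"name": "arm_sweep",  "pose": [0.9, 0.1, 0.0, 0.5]},
    {"name": "low_crouch", "pose": [0.1, 0.8, 0.7, 0.1]},
    {"name": "spin",       "pose": [0.5, 0.5, 0.5, 0.5]},
]

# A user's casual "flick of an arm" lands nearest the arm sweep,
# so their avatar performs the full choreographed movement.
user_pose = [0.8, 0.2, 0.1, 0.4]
print(nearest_move(user_pose, choreography)["name"])  # → arm_sweep
```

The design choice the talk describes then comes for free: because the matched movement is always a real choreographed one, even a small gesture resolves into something that falls in rhythm with the music, and mimesis fills in the rest.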
If it ends up moving a little higher than what we're doing, we will match that movement; we have a tendency to want to connect to other humans by mimicking what they're doing. Chimps do this all the time: when they see each other, they will raise their eyebrows to say, "I'm not a threat," and the other will also raise their eyebrows a little to say, "I'm not a threat." We are not taught this; it comes from instinct. So this instinct of mimesis we can use, with the AI as a backbone, to help enhance the way people are dancing. So the little tests we've done so far with the prototype taught us the same lesson again: we have an infinite power of make-believe. We humans are willing to pretend we're dancing and really let ourselves go when the image we're seeing matches what we're actually creating. And my second and final lesson is that I really felt inspired by this desire we have to connect. Whether by words, when we're trying to talk to a Chomsky AI entity that we know is not real, where people really ask deep questions and try to entice a real conversation, or with these, you know, straw-like characters that we created, because it was just a prototype. And still, in the testing we have done so far, people are trying to connect with them, to see if they're real, if it's somebody else elsewhere, and trying to match their movements. So for me, it just highlights the human desire to connect, which is enhanced by technology. But it's all the little glitches and messiness that we can bring with the AI that make it feel more real. So, as a conclusion of these two different projects: I'm currently working on other AI-driven, larger-scale installations that are still in non-disclosure mode, and I hope next year I get to talk about them, because they're all going to come out at the same time, and I'm going to be burnt out.
Sandra Rodriguez 1:05:44
For the moment, everything is going well. But these two projects led me to think that, between technology and carefully crafted storytelling, what really is key is not what we can create with the technology, nor how well we're crafting the stories, but how well we tap into our capacity for human imagination. And that's what's at the core of any immersion. As technological, immersive, and even intelligent tools keep developing, I hope to keep working on experiences that invite curiosity and play, and that can help us remember that ultimately, it's all about defining the world we want to live in with said technology. Noam Chomsky AI, not Noam Chomsky, in one of his made-up answers recreated from past traces with an AI system, said to us that we are the ones creating meanings, we are the force creating the architectures of our future. It's not a correctly put English phrase, but it does tell us something that Noam Chomsky strongly believes in. He also believes that if you let yourself be amazed by these new technologies, you'll find the puzzles, and it's in the puzzles that we find inspiration for such projects. So it's a little bit of a mash-up of different projects, but it goes to show you two extremes of using AI: either to discuss AI, or really using it as a backbone to help us do what we do best, connect to each other and connect through our imagination. So I would leave that as my conclusion for today's talk. And I hope I haven't lost you all, because I can't see the chat. I'll stop sharing for a second to access the chat and see. Yes, that's fabulous. Thank you.
Vivek Bald 1:07:35
Thanks so much. I'm going to, hold on, I need to change my viewing here too. So, questions?
G. R. Marvez 1:08:02
Hi, thank you so much for your talk. It was really interesting. I’ve been doing AI work recently in my lab, and I’m curious about the size of the data set you used in building Chomsky AI.
Sandra Rodriguez 1:08:14
So, the size we used then, and the size we're using now. The size we used then: we literally had hundreds of thousands of full answers and hundreds of thousands of questions. We didn't use all of that. The prototype helped us narrow down to 7,000 questions and 5,000 answers. And this felt really, really limited; we didn't feel it would be enough to create compelling answers. But to our surprise, we didn't even go that far. That was our pool of data. That's why I highlight the scripting behind it. The goal was really not to create a Chomsky Siri that anybody could access at any point and that gives an answer relevant to whatever you're searching for, because it was still a narrative-led experience. It's an experience that tries to open a conversation, a lot of it, as I was mentioning before, scripted. So the first thing we started with was QnA Maker, and QnA Maker provided us with many pre-scripted answers. So we had two modes: QnA Maker for chitchat, and then the more complex ARNOLD conversational chatbot, drawing from these 7,000 questions and 5,000 answers, for when there was no match. What happened was I ended up revisiting a lot of the QnA answers. I don't know exactly how many there are, whether in the hundreds or the thousands; I'm not sure if it's about 500 or 600 ready-made QnA answers that Microsoft provides. And they didn't sound Chomsky-esque. You know, if somebody asks, "How are you doing?", it would say "Living the dream," and sometimes it would use exactly that answer. So we added what I kept confusing everyone by calling "handles." What I mean by handles is just an onboarding, an onboarding for a pre-scripted answer. So if you asked him, "What is your name?", he didn't exactly know what to answer, but the, let's say, scripted answer was, "Well, you could say that I'm an emulation of Noam Chomsky."
And then he would fall into a scripted monologue of the character explaining who he is and where he's from, and he would change the world around you. And you could understand that he's talking to you just the same way we're talking now: you ask a simple question, I start answering simply, and then I drift off into what I want to convey to you, right? In a conversation, you have moments where you try to answer somebody's question, but you also just start to share. And that's where we had little scripted moments. But for the answers that were readily available from QnA Maker, we tried to add, before and after, some turns of phrase that are very Chomsky-esque. For instance, starting with something as simple as "you know." A lot of the answers we already had, if you hear them out, have little tweaks or ways of speaking, like we all do, that give away his way of speaking, his mannerisms. That's the word I'm looking for. So we had these little things I call handles, kinds of mannerisms that help you onboard and outboard a pre-scripted answer, and make it feel more Chomsky-esque. That was a lot of scripting; we had expected our database to be too small. What we're doing now, though: what we presented at Sundance allowed us to really build up our database of questions. And the new system we're using now is GPT-2. We chose to steer away from GPT-3, which made our team not super happy at the beginning, because they were really excited about the type of answers they could get. But from an ethical point of view, we're trying to highlight the potential pitfalls of creating these emulations of who we are from our digital traces, of training to deepfake one another. For instance, we didn't feel that pretending all the answers came from Chomsky himself was relevant to the experience, so we stuck with GPT-2, which enables us to at least try to learn how to answer the questions better,
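The "handles" idea, wrapping a stock chatbot reply in onboarding and outboarding mannerisms so it reads in character, can be sketched as a simple string-wrapping step. Everything here is illustrative: the mannerism phrases, the function name `add_handles`, and the use of a seeded random generator are my assumptions, not the production pipeline.

```python
# Hypothetical sketch of "handles": wrapping a generic pre-scripted
# answer with onboarding/outboarding mannerisms so it feels in character.
import random

ONBOARD = ["Well, you know, ", "I should say, ", "Look, "]
OUTBOARD = [" But that raises a deeper question.",
            " Though we know very little about the mind.",
            " That, at least, seems uncontroversial."]

def add_handles(stock_answer: str, rng: random.Random) -> str:
    """Attach an onboarding and an outboarding mannerism to a stock
    answer, so a generic chatbot reply reads Chomsky-esque."""
    lead = rng.choice(ONBOARD)
    tail = rng.choice(OUTBOARD)
    # Lowercase the stock answer's first letter so the lead-in flows.
    body = stock_answer[0].lower() + stock_answer[1:]
    return lead + body + tail

rng = random.Random(0)  # seeded for reproducibility
print(add_handles("Living the dream.", rng))
```

The point of the sketch is the shape of the trick rather than any particular phrasing: the stock answer is unchanged underneath, and only the framing carries the character.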
but always with real Noam Chomsky answers, things he said in the past. We're trying to limit it to that, so we don't pretend we're Noam Chomsky; what we do is create answers from matched snippets of real answers. If that is coherent... I'm losing the thread of the question. See, I'm doing Chomsky AI myself.
Vivek Bald 1:12:54
There’s a question in the Q&A, specifically: how and where can we interact with Chomsky AI?
Sandra Rodriguez 1:13:01
You could interact with Chomsky AI during Sundance 2020. And that was the beginning of a tour, which included MIT Libraries, the Geneva International Film Festival, many different pitstops that we had planned for, but COVID hit right after Sundance. We even made jokes at Sundance that we were all going to fall sick from that so-called COVID disease we were hearing about on the news, because we were all sharing headsets in a very confined space. That was February 2020, and of course everything shut down; it was our last travel. So that’s that. But in the meantime we didn’t lose any time; we’re now working on the production of the full experience. The prototype, or the prologue, was really testing how people interacted with the character. The full experience is a multi-user experience where you get to interact with the same Chomsky character, but in a world that he builds around you. And this is going to be for public release in spring 2022. We’re with the NFB, the National Film Board of Canada; we have a release date, and when they say you have a release date, it needs to be ready. Their release date is April 2022, exactly. So stay posted. I’ll keep you posted, I’ll try.
Vivek Bald 1:14:25
Other questions, either from the Q&A bar, which is open, or on screen? I mean, I just have a kind of a comment, and that is that I think what’s interesting in hearing you describe how you designed Chomsky vs. Chomsky to interact with people is how much it actually mirrors what I think of as a kind of humanistic pedagogy. Which is, you know, that often as a teacher, when there’s a question asked of you by a student in particular, often that student already has all the pieces of the answer, right? And it’s just a matter of kind of drawing that out in this kind of back and forth. And it seems to me that that’s partly what you are tapping into here, right? That the AI version of Chomsky is not a machine for answers, but a machine for getting you to find the answers.
Sandra Rodriguez 1:15:43
That was one of the best moments in the video that couldn’t, or didn’t want to, play, the one I was stuck on and couldn’t move past to the next slide. There was a video of a man who stayed a really long time asking questions of Noam Chomsky. So, again, the experience works: you’re encountering this entity that keeps morphing. And if you’re not doing anything, it will take its time before it reveals itself to you. If you interact with it a lot, it will start asking you questions and creating worlds around you. But depending on the flow of your conversation, you could stay in chitchat for a while. Or if you start to ask questions like, are you the real Chomsky? What are you exactly? Where are you based? When people really try to, what I call, break the system, to try to see behind the curtain of Oz, you know, who’s hiding behind it, that’s when he starts storytelling part of what he’s made of, some of the elements. And as you’re saying, the goal is really to help us think about what we’re hoping for with these technologies. So the humanistic pedagogy is really helping the individual that’s interacting with such a system think back a little bit. It’s easy to find the flaws of these systems, because they’re full of them. So you start by finding the flaws, and you’re disappointed when you do. But I think there’s a little bit of magic when you ask people to take a step back, and you ask: but what exactly are you wanting from it? And why do you want it to be specific? Why do you want it to be accurate? And you start to think differently about the technology. While we were creating the experience, so, full disclosure, I made jokes about having a toddler, because maybe you’re hearing her at the moment; she just came back home.
Sandra Rodriguez 1:17:31
It was very strange, because I kept hearing talk about how the human brain is like a machine’s brain. And of course we make metaphors, it’s normal; we try to create these metaphors because they help us understand. The prototyping lasted nine months, and during those nine months I was also pregnant. When we presented at Sundance, my baby was three months old, and the prototype was about as young; it was kind of matchy-matchy, trying to see the human versus the machine. And you really see how we don’t learn the same way. Having worked on both prototypes at the same time, you see the living and the artificial prototypes really not working the same way. But you do understand the need for metaphors; they help us understand. So throughout the experience, I called this project the baby-making machine, because the lead developer went on maternity leave, and the lead designer for the music went on paternity leave as well. We all left, one after the other; we were either leaving for or returning from maternity or paternity leave. And we all had the same kind of aha moment where we thought, okay, now I understand the real difference between what I’m seeing and how I’m told AI works. And instead of feeling like these revelations were limiting, they helped us love AI. Strangely, the more we were shown what it couldn’t do, the more we were like, but then I could create music with this. Instead of feeling like AI had to be presented as the composer, suddenly our composer was a dad, and he said, oh my god, it’s so incredible, I know how I could use certain elements of the data to really create a score in the background. And we all felt really inspired the more we learned about the limitations. So that’s why I opened the second segue with this. It’s not just because it’s a catchy phrase; it was kind of: we’re all faced with the dilemmas of human life and the artificial entities that we’re building.
And we always felt like the limitations of one helped us raise more questions; questions just prompted more questions. So as you’re saying, with the humanistic pedagogy, it’s a little bit about bringing this back to the public, making them think twice about why we’re using the mind as a metaphor for AI. Why not an octopus mind? Why do we want it to reproduce the human mind? Is it really more efficient for what we want to do or create? And in some of the archives, I found that Noam Chomsky has a humor I know of him: sometimes when he’s prompted with repetitive questions, he decides to come up with a fully other answer. I saw an interview where he’s asked, again, you know, can machines think? And he answers, well, can chickens fly? And can humans fly, the way they do? If you look at their record at long jumping, they fly better than chickens. And I thought, oh, he’s hilarious; he has these references, he’s just trying to convey new metaphors. So in a nutshell, yes, Chomsky AI is a very humanistically driven pedagogical character that tries to ask questions and prompt more questions, questions that are a little bit like a politician’s discourse rather than a truthful, straightforward answer.
Sandra Rodriguez 1:21:04
Questions or comments? Because I’m all for it; we’re still in production. So things that you feel are a little bit weird or not well explained are also welcome. I’m learning from your feedback as well. Just like an AI.
Rashin Fahandej 1:21:27
Hi, sorry, I joined this webinar while multitasking, so I’m sorry that I’m not fully video-present. But thank you so much for this incredible talk and the updates on your projects, Sandra; it was really nice to see. Just a quick comment on the new projects: having seen the Chomsky project at Sundance before the pandemic, it’s really nice to see how you are sort of reflecting on the AI and the possibilities, and also the project itself, which actually creates a very interesting narrative on the use of AI, this sort of alternative narrative, which is refreshing. And I think kind of bringing AI in more as a collaborator, or co-creator, in this process, and sort of thinking about how you’re constructing a relationship, rather than just that question-and-answer format that a lot of AI pieces will do, is really beautiful. So I thought it really works also with the visuals and the way that you’re building this whole space and interaction. So I’m looking forward to seeing the next iteration of that. And also, with the dance piece, it’s quite interesting. Yesterday you showed an image of a bigger screen and these sort of lines and visuals kind of connecting bodies to that screen. So I was wondering where that sits in. Beyond the virtual work, is that going to be an installation? I was sort of curious about the formation of the dance piece and what your thinking is around the presentation of that.
Sandra Rodriguez 1:23:41
There’s nothing like boundaries, as I was saying before. So it’s, it’s like, practice what you preach; I would have to say “practice what you preach” really is the mantra for Future Rites. So, Future Rites: we started ideating the project pre-pandemic, even before the end of Chomsky AI, and we were kind of dreaming up what we could create with a Rite of Spring reinterpretation. And what I didn’t have time to maybe highlight about the Rite of Spring: I have myself seen some reinterpretations of the Rite of Spring, and what I feel in these different reinterpretations, what is inherent to the piece, is that it makes the audience feel like they are part of a sacrifice. That’s why it was so poorly received in 1913. You do see somebody being picked by a community; that person ends up getting sacrificed, and the sacrifice is that person dancing him or herself to death. And because we are put into a situation where we’re watching a community choose who or what gets sacrificed, it made people feel very uncomfortable. Long story short, I’m trying to sum up now: COVID hits, and we had to find new, alternative ways to co-create. The initial goal for the experience was room-scale, pardon, location-based, an in-situ installation where we would have live dancers and live performers and multiple users in the same space. And the experience let us play with projections, with your body in space. It really was kind of an overwhelming, broad perspective on how we could use body and space, see each other move and dance, and see live performance, only in virtual reality.
And we were basing it on past experience that either Alexander Whitley or myself had. He felt, as a choreographer, that if you really highly choreograph a piece in virtual reality and you ask users to dance, they will move until the performer steps in, and then everybody stops moving and watches the show. I felt the opposite: as a user of dance-based VR experiences, when I’m asked to do certain specific movements, I get bored. I like to disrupt, I like to be a little bit of a punk in these systems. And when I’m told too much what to do, now raise your hands, this is the choreography you need to follow, I feel you’ve missed out on the sheer fun of interacting and trying to move to your own beat.
Sandra Rodriguez 1:26:32
So our goal was: let’s take everything that we both feel is right about dance, and let’s make sure we have an installation that enhances this. And it was going to be location-based. But with confinement, we couldn’t work on that at all. To work at a distance, we had to create a pipeline. That’s what I mean by practice what you preach: I could test the system from my home, he could test the system from his home. Our lead developer’s partner in crime, who is usually in the same studio, was now in Austria. We were all in different locations in the world, using a lot of the technology that creates problems in our relationship with nature in the first place. And we kept being confronted with our own choices. We can now create at a distance, but we know we’re having repercussions on our environment; what do you do? We kept talking about the sacrifices and the choices that we kept having to make, and what we felt was good or bad for the creation of the project. And it forced us to rethink the project as a multi-format one: one that can be accessed from simply a browser, so that’s one of the formats we’re aiming for, and one that can be accessed with a VR headset. But we worked, I’m not sure how much of this is recorded now, but we worked on proprietary technology that is a simple toggle effect. So for each of the avatars that you’re seeing in the experience, we can choose, with what I’m calling a toggle effect, not literally a toggle, it’s a metaphor again, who’s controlling it: either the user or the machine. But we can also swap the machine for a live performer. So for location-based, you can have live performers really create an experience that is tailored for location-based. For now, the experience is aimed at people trying it from their house, either from a browser or through a headset. And we created this pipeline not because we wanted to make yet more promises of democratizing the tool, which is great, but, to be really honest and truthful, because we couldn’t all work at a distance otherwise; we needed to find a pipeline where we could all try it from our homes. And that’s what we found was the best solution for everyone. Forcing ourselves to create something that we could try individually made us rethink the full experience. So what you’ve seen was our first, initial intention: an installation, really location-based, coming first, and derivatives coming later. Now the derivatives come first; the fact that you can access it from your home comes first, and location-based comes in later. A little bit of a complex answer, but I know Rashin...
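The "toggle effect" Rodriguez describes, choosing per avatar whether the user, the machine, or a live performer drives it, could be sketched as follows. This is a minimal illustration with invented names and a deliberately simplified pose; the actual proprietary pipeline is not public, and real providers would read headset tracking, a generative model, or a mocap feed.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, Tuple

class ControlSource(Enum):
    USER = auto()
    MACHINE = auto()
    LIVE_PERFORMER = auto()  # swapped in for location-based shows

# Simplified: one joint position (x, y, z) stands in for a full skeleton.
Pose = Tuple[float, float, float]

@dataclass
class Avatar:
    name: str
    source: ControlSource = ControlSource.MACHINE
    # One pose provider per control source, each a function of time.
    providers: Dict[ControlSource, Callable[[float], Pose]] = field(default_factory=dict)

    def toggle(self, source: ControlSource) -> None:
        """The 'toggle effect': switch who drives this avatar, mid-experience."""
        self.source = source

    def pose_at(self, t: float) -> Pose:
        return self.providers[self.source](t)

avatar = Avatar("dancer_1", providers={
    ControlSource.MACHINE: lambda t: (0.0, abs(t % 2 - 1), 0.0),  # procedural sway
    ControlSource.USER: lambda t: (0.1, 1.5, 0.0),                # stand-in for tracking
})
print(avatar.pose_at(0.5))          # machine-driven pose
avatar.toggle(ControlSource.USER)
print(avatar.pose_at(0.5))          # same moment, now user-driven
```

The point of the design, as described in the talk, is that the rendering side never cares who is driving: a browser user, a headset user, or a live performer at a location-based show all plug into the same slot.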
Rashin Fahandej 1:29:10
Thank you so much.
Vivek Bald 1:29:15
Well, we are a little past time, so unfortunately we’re going to have to say goodbye. But thank you so much, Sandra, for sharing these two amazing projects. And yes, as they develop, as they’re finished, as we’re all able to be in the same room together, we would love to have you back to experience some of this ourselves.
Sandra Rodriguez 1:29:43
And, as I was mentioning, speaking of installations, the other project is still under NDA, so I can’t really reveal what it is, but it’s a large-scale, multi-station installation. I’m going to say this because it’s the only thing that maybe we can say: it’s on our weird, intertwined relationship with sexuality, elitism and data, data that’s used to force us to rethink the binary codes of our own biases. So all this to say, this is also going to be out, I’m guessing also at the end of spring 2022. I would love to be back and talk about different elements of what AI can show us about ourselves and our binary perceptions of the world.
Vivek Bald 1:30:33
Thank you. Thank you. Thank you so much. And thank you, everyone. This is our last session of the school year. So we will see you all again in the fall. And thank you so much, have a wonderful summer.
Andrew Whitacre 1:30:49
Hey, and congratulations to all the students on finishing.
Andrew Whitacre 1:30:54
Especially the second-years; we’re gonna miss having you around.
Vivek Bald 1:30:59
Bye, everyone. Talk to you soon. Bye.