Video game engines have promoted a new cultural economy for software production and have provided a common architecture for digital content creation across what were once distinct media verticals—film, television, video games and other immersive and interactive media forms that can leverage real-time 3D visualization. Game engines are the building blocks for efficient real-time visualization, and they signal quite forcefully the colonizing influence of programming. Video game engines are powering our visual futures, and engine developers that include Unity Technologies and Epic Games are rapidly iterating their products to tackle new markets, where data and visuality continue to converge. This analysis, which draws from software studies and studies of visual culture, examines a tool that is fairly new to the Epic Games arsenal—the in-development MetaHuman Creator that is part of Epic’s proprietary Unreal Engine. The MetaHuman Creator is a cloud-streamed application that draws from a library of real scans of people and allows 3D content developers to quickly create unique, photorealistic, fully rigged digital humans. MetaHuman creation is a fluid process, and the speedy transformation of character rigs and other non-binary attributes highlights the potential queerness or openness of data. Yet the ongoing push toward (hyper)realism in commercial media has birthed a visual economy which is supported by an industrial apparatus that privileges mastery over the tools of production, and where bodies and politics are often cleaved in the design process. Epic’s multiethnic, multiracial, transgender MetaHuman Creator is a design tool and not a narrative engine. Its transitions are simple and seamless, and the traces of non-binary and non-white identities are simply part of a larger color palette. These tools represent a way of seeing and knowing the world, and the representations they produce are part of hermetically-sealed and privately-held encoding processes that include a company’s original data, its application programming, its proprietary build environment and its interface. This analysis poses two interrelated questions. Are the MetaHuman Creator and similar simplified building tools democratizing the field of digital content creation? Are they fostering more diverse representations and narratives, and supporting the free play of identity in playable media?
Eric Freedman is Professor and Dean of the School of Media Arts at Columbia College Chicago. He is the author most recently of The Persistence of Code in Game Engine Culture (2020), as well as Transient Images: Personal Media in Public Frameworks (2011). He serves on the editorial board of the International Journal of Creative Media Research and the advisory board of the Communication and Media Studies Research Network.
The following is a transcript of the video’s content, with human corrections. For any errors the human missed, please reach out to cms@mit.edu.
Heather Hendershot 00:48
Eric Freedman is a professor and dean of the School of Media Arts at Columbia College Chicago. He’s the author, most recently, of The Persistence of Code in Game Engine Culture, which, um, I think is 2020, so it is out, as well as Transient Images: Personal Media in Public Frameworks, from 2011. He serves on the editorial board of the International Journal of Creative Media Research and the advisory board of the Communication and Media Studies Research Network. So I’m going to hand it off to Professor Freedman. Thank you for joining us today.
Eric Freedman 01:26
Thanks. Thanks. Hi there, it’s great to be here. I appreciate being hosted, and to talk about some new work. This is part, but not part, of my next book, the book that I’m currently on deadline for, which is on artificial intelligence and playable media. But it sort of fits between these spaces, between the work that I was doing on video game engine architectures and artificial intelligence. I must say that some of this was written, or conceived of in my head, with a particular type of audience in mind, so I don’t know the context of the graduate student body, how much you know and don’t know. So some of this may be too rudimentary, some of it may be too advanced, but let me know when we get to that point in the conversation if I need to clarify anything that I’ve spoken about. I’m going to share my screen because I’ve got some things for us to look at along the way, and I will share my sound as well. I hope you can all see that; there’s nothing in here yet, but you can all see that. So I’m going to be talking about the influence that video game technologies have had on both game and non-game media. Their design and development environments and runtime components are software systems that are known as game engines, and they’ve promoted a wide range of uses for real-time 3D visualization that extend far beyond the field of video game development. The software systems that drive video games have provided a common framework for digital content creation across what were once distinct media verticals: film, television, even manufacturing and industrial fabrication, architecture and urban planning, and other interactive media forms, including digital twins, any form that can leverage photorealistic visual effects and programmable, software-generated assets. So what we see is that game engines are influencing just about every aspect of, for example, the film and television production pipeline, including previsualization and advanced in-camera visual effects, all of these things complementing what was once the solitary purview of post-production. What we see in film and television production is more and more directors and cinematographers working in tandem with digital imaging technicians. Popular TV programs like The Mandalorian, which this crowd probably knows, or Westworld, have leveraged game engine technologies throughout their respective production processes to seamlessly integrate physical and virtual assets, and both series have used elaborate filmmaking techniques to map and capture virtual settings rendered in real time across custom-built LED walls while recording live performers. And while game engines are being used across media forms, they were built as development tools for interactive digital content creation, and really as code frameworks to formally define the field of play, meaning the game environment and its assets, as well as the executable functions, meaning what happens during gameplay, throughout video games. They’re the building blocks for efficient real-time visualization, and their programming is continuing to influence a broader and broader swath of visual culture, which is why I think it’s important to talk about these structures; video game engines will have a lot to do with powering our visual futures. Now, as data and visuality continue to converge, software development companies that include Unity Technologies and Epic Games,
the latter of which I’ll focus on today, are rapidly iterating their products to tackle new marketplaces, and they have plans, in fact, to colonize the metaverse. Many of the images that we consume and interact with are really artfully crafted, engineered media, borne by these ecosystems of hardware, software and programming. So in this reading, what I want to do is examine a tool that’s fairly new to the arsenal of video game developer Epic Games, and that’s the MetaHuman Creator.
Eric Freedman 06:05
The MetaHuman Creator, which is part of Epic’s proprietary Unreal Engine, was revealed in February 2021 and subsequently released in April of the same year through the developer’s early access program. The application, which is free, features an accessible user interface, and it allows game developers and 3D content creators to quickly build high-fidelity digital humans. I’ll just play their promo video to give a sense of how this tool works and what they’re pitching. [Promo video audio: “I could be one of many… choosing not to please their classes… you create the narrative… MetaHuman.”]
Eric Freedman 07:56
So the questions that I’m posing in my analysis, which I’m not necessarily looking to answer today: are the MetaHuman Creator and similar simplified building tools democratizing the field of digital content creation, which is sort of what Epic is pitching, you know, with this easy-access, familiar user interface? Are they also fostering more diverse representations and narratives? And are they supporting the free play of identity in playable media? So these tools, which include the entire Unreal Engine suite, shape visual culture by translating it into a data set that can be manipulated by an algorithm. What we have is digital content developers able to transform visual artifacts, whether environments or objects or characters, in real time with simple user interface controls, such as draggable assets and variable or numeric sliders. And these tools represent a way of seeing and knowing the world, and the representations they produce are part of these hermetically sealed and privately held encoding processes that include a company’s original data, its application programming, its proprietary build environment, and its interface. So for me, software and platform studies, as critical approaches to the theories and practices of computing, might help us unpack both the promise and the perils of Epic’s sophisticated character generator. The MetaHuman Creator is, after all, a data-driven software system. It’s an analytical engine. It’s an integrated system of calculation and design, and it can sculpt and transform its source data and streamline this production process of digital humans.
Eric Freedman 09:38
For those who are not familiar with how this system works, the MetaHuman Creator is a cloud-streamed, web-based application that lets content developers create these high-fidelity digital characters without really being steeped in the technical processes of things like character generation, rigging, animation and other in-engine real-time functionality. It provides a truly rapid method for developing 3D character models that can be animated with a variety of other programs like Autodesk Maya, and it draws from, as its foundation, a library of scans of real people, and then allows 3D content developers to quickly create these unique, photorealistic, fully rigged digital humans by mixing together different parts of real people, while changing each character’s facial textures and geometries and simultaneously updating the underlying rig, so it holds on to the actual mechanics of the underlying rig. And the library, as it stands now, includes a number of prefabricated MetaHuman presets that represent the generalized and rather soft contours of race, ethnicity, gender diversity, and a broad spectrum of individualized skin tones, textures, physiognomies and style types. And the tool allows visual artists to really rapidly and seamlessly, as you saw in that clip, manipulate a character’s facial features, adjust things like skin complexion, and select from a range of preset body types and styles. Each finished character can then be exported, again fully rigged, meaning fully functional, movable, and ready to animate within Epic’s Unreal Engine, and that’s part of their licensing agreement, to utilize these characters within the Unreal Engine. So from a technical standpoint, the process collapses more traditional scanning-to-rig development pipelines and wraps the creative process in a very familiar user interface to really facilitate the design process for those who are developing inside the Unreal Engine. And for that reason, it really promises to advance the field of virtual production for games and other immersive experiences and fields that draw from these high-fidelity and responsively animated virtual assets and digital doubles. I have this slide on the screen because I think the tool will also animate some of the cultural politics of the face-generating algorithm that was vividly illustrated on this cover of Time magazine back in 1993, which was a composite of the new face of America created from a computer-generated mix of individuals from several racial and ethnic groups. And this particularly seamless synthesis, designed to illustrate how immigrants are shaping the United States in a multicultural society, really belies the work involved; it conceals the politics, the policy decisions, the socio-economic divides, and the myriad forces of disenfranchisement that impede such a seamless vision of progress. So all of those obstacles, barriers, boundaries and margins are obscured by the fascination with what we can do on the surface, with surface algorithms.
I’d say MetaHuman creation also shares an awkward legacy, in its dependence on deeply multi-layered algorithmic manipulation, with visual trickery and disinformation, which is an unfortunate byproduct of GANs, generative adversarial networks. GANs are a type of machine learning that uses a pair of artificial intelligence algorithms and a large volume of data to try to accurately replicate real-world image patterns until the fakes are indistinguishable from the originals. One of the results of advanced GAN research is the ability to develop unique human faces that can pass for real people. And collectively, the systems are rendering visual culture from a perspective that, as it advances, is actually less and less determined by human intervention, although it was originally conceived and developed from within an anthropocentric design perspective and is inherently prone to humanistic bias. These representations, again, are not simply happening within their own platforms; they are impacting the visual grammar, the organizational labor, the technical structures, and the industrial models of film, television, gaming and related media forms.
Eric Freedman 14:17
For those of you who aren’t familiar with how GANs work: it requires training, as I said, two algorithms, a generator and a discriminator. So in this process you actually have AI competing against itself. The generator learns to create plausible data, while the latter, the discriminator, learns to discriminate between the generator’s fake data and real data, until the process becomes much more seamless, much more competent and capable of generating realistic images, voices, and videos, as anybody who has seen the fake Tom Cruise videos knows. And it suggests the goal of intelligence research may be twofold. How do we simulate human intelligence to such a degree that machines might replace human beings as part of the process and occupy their locations within an organization, so we no longer need the real actor? And it’s also about a representational framework or development pipeline that displaces human labor and allows machines to carry out these design tasks. For those of you unfamiliar, I’ll give you a little background. In 2018, a team of NVIDIA technology researchers proposed StyleGAN, and StyleGAN is a style-based generative architecture that is tasked with analyzing and synthesizing existing facial image data to produce these novel photorealistic images. I’ll give you a brief clip here so you understand how these StyleGANs actually work with human faces.
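A minimal sketch of the two-network training loop described above, assuming a PyTorch setup and toy one-dimensional data rather than faces (the model sizes, data, and hyperparameters here are purely illustrative, not NVIDIA's or Epic's):

```python
# Minimal GAN sketch: a generator learns to mimic a toy 1-D "real data"
# distribution while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))        # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" samples drawn from N(2.0, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fakes real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

The same adversarial logic, scaled up to convolutional networks and trained on large face datasets, is what produces the photorealistic synthetic portraits discussed below.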
Eric Freedman 17:08
I don’t want to get too technical, so I’ll stop it there. But for those of you who may not actually have looked at NVIDIA’s research, this StyleGAN, this technology that was being developed by NVIDIA, became the sort of
Eric Freedman 17:25
architecture that software engineer Philip Wang used in 2019. Because StyleGAN is an open-source generator, the same technology developed by NVIDIA was used as an open-source model by Philip Wang to create This Person Does Not Exist, and this is just a sampling of images from This Person Does Not Exist. For those who haven’t checked out the website, This Person Does Not Exist is a website that conjures its fake portraits from existing image data and produces new human faces every time that you refresh the page. The StyleGAN is operating in the background; every time you click refresh, a new face of a fake person will be generated. And this data-driven image modeling, and this is just seven selected faces from This Person Does Not Exist, is always based on a degree of normalization. That’s how the algorithm works: how do we train the algorithm on existing data sets and output the information in ways that match stylistic and social conventions and expectations? So the goals of the system are always normalizing, to create an unsupervised image-to-image translation and the production of a novel image that conforms to common facial shapes and common facial features. So you never get anything unbelievable or radical as a result of this process; that’s the modeling at work. So the StyleGAN in every case operates within and produces what we would conceive of as acceptable structures. And while it’s a fascinating visual tool, it carries with it, as I’m sure you sense, much starker undercurrents. AI tools are being deployed within institutions that are historically marked by systemic discrimination: in housing, in the workplace, in the criminal justice system, in financial institutions, and biases are baked into the outcomes of what AI is constantly asked to do and predict. The data used to train machine intelligence is often under-representative of people of color, of women, and of other marginalized groups. And those fault lines impact the design, development, implementation and outcomes of AI across a broad range of tech-independent and tech-dependent industries, where it really becomes more difficult to extract what’s happening, meaning to extract the signaling processes embedded in the automated systems that are designed to create more efficient workflows and more realistic images. And while what I’ve shown you, including the MetaHuman Creator, doesn’t really rely on AI per se, and is instead focused on the physical aspects of character creation, these assets are primed for intelligent uses as non-player characters driven by AI subsystems, or as intelligent agents in a wider array of video game and game-adjacent environments. So since its launch, the MetaHuman Creator has been successfully coupled with an AI voice actor platform developed by Replica Studios that enables developers to create AI characters that look and sound human, where we have voice actors training the AI how to perform. So we have this whole hermetic system about how characters look and what they sound like that all has a human component at its base. But again, those systems are then concealed in the actual ongoing output, where AI slowly but surely takes over the performative dimension of character.
And while it’s a unique enterprise, I believe the MetaHuman Creator carries the cultural weight of all of these other simulation systems, intelligent or not, that approximate human subjects. Again, as I said, MetaHuman creation is a fluid process that really shortens the timeline to produce its human character assets, and it streamlines the overall development pipelines, so there’s a labor implication here as well. The people who use or work with MetaHuman Creator are never given direct access to the primary scan library and its data. So there is an existing library of original human scans, but instead what you’re asked to do is work with and select from a cast of MetaHuman preset characters that are artifacts of that library and composite representations of its raw data. So, reading the tool from a critical vantage point rather than a purely industrial vantage point: in this speedy transformation of character rigs and other non-binary attributes, this ability to blend between characters highlights what I believe is the potential queerness or openness of data. Yet most commercial demonstrations of this tool speed through all of these mutable subjects, ultimately to land on a predictable end result, a fixed result. So a dynamic creative process ultimately has to yield to some sort of stasis, to the construction of a functional and fixed character build of a certain physiognomic type. So the image library, which is a really complex data set, has to constantly behave in plausible ways, and according to a number of marked constraints, to produce these anatomically believable outputs. And while code may be fluid and not have any anatomical fixity, these things always wind up becoming much more constricted if our goal, if our intent, is to produce believable, high-quality digital characters. All of the MetaHumans use an underlying skeleton asset. So while their facial features may be unique, the features that define body, face, hair and clothes are all parented to a governing skeletal mesh, and those structures allow the external features of these characters to be purposefully animated and readily swapped. So these distinct bodies that are pulled from the library and blended together to make custom assets always remain functionally locked to an underlying mesh, and even the external skins of these characters can be readily swapped out. So this female character and another male character may share the same underlying skeletal rig. A process of animation retargeting enables animations to be reused between characters who have the same skeleton asset, even if they have different proportions or additional bones, as long as they share the same bone hierarchy, and to pass animation data from one skeleton to another. So even the specificity of character type, as it gets locked into a model, becomes swappable based on the skeletal asset that drives the motion of all of these characters in the same way.
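A toy sketch of the retargeting idea described above, assuming a shared bone hierarchy and translations rescaled to the target's proportions (the bone names, lengths, and keyframe values are invented for illustration; this is not Unreal's actual retargeting code):

```python
# Toy illustration of animation retargeting: a keyframe authored on one
# skeleton is replayed on another that shares the same bone hierarchy,
# with translations rescaled to the target's proportions.
from dataclasses import dataclass

@dataclass
class Bone:
    name: str
    length: float

# Two characters with identical hierarchies but different proportions.
source_skeleton = [Bone("pelvis", 10.0), Bone("spine", 20.0), Bone("head", 8.0)]
target_skeleton = [Bone("pelvis", 12.0), Bone("spine", 24.0), Bone("head", 8.5)]

# One keyframe of animation data: per-bone rotation (degrees) and translation.
source_keyframe = {
    "pelvis": {"rotation": 5.0, "translation": 2.0},
    "spine":  {"rotation": 12.0, "translation": 0.0},
    "head":   {"rotation": -3.0, "translation": 0.0},
}

def retarget(keyframe, source, target):
    """Copy rotations directly; scale translations by relative bone length."""
    out = {}
    for src_bone, tgt_bone in zip(source, target):
        assert src_bone.name == tgt_bone.name, "bone hierarchies must match"
        pose = keyframe[src_bone.name]
        scale = tgt_bone.length / src_bone.length
        out[tgt_bone.name] = {
            "rotation": pose["rotation"],                  # orientation reused as-is
            "translation": pose["translation"] * scale,    # offsets follow proportions
        }
    return out

print(retarget(source_keyframe, source_skeleton, target_skeleton))
```

The point of the sketch is simply that, once two characters share a hierarchy, the same motion data can drive either body, which is what makes character type "swappable" in the sense described above.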
Eric Freedman 24:35
Some of the questions that I raise throughout my research: you know, these MetaHumans are designed intelligently, but they don’t possess intelligence. They look real. I don’t really know if they are uncanny or not. I don’t know if they truly evoke an emotional response. Their facial features, their geometries, their expressions, their movements are all plausible, since they’re tied to that underlying skeletal rig, but they would need another programming layer, another AI subsystem, to really give them agency, to really make them semi-autonomous, contextually driven, and motivated characters. But I think that’s the prompt that’s really posed by Epic Games in that commercial reveal video, which asks us, or commands us: you create the narrative. So these are primed for narrative appropriation. And while these assets are still in need of a narrative, I think we can still read them as meaningful traces of a particular industrial process, which is all about an ongoing push toward hyperrealism in commercial media that has created a visual economy that is now supported by an industrial apparatus that privileges how we master these tools of production, and where the bodies and the politics are often cleaved in the design process. And there’s an interesting conversation among Unreal developers where they ask each other, who is your favorite MetaHuman character, and they choose based on this very whimsical, visual reading of these characters, which is really sharply depoliticized, just in terms of, let’s choose, as they do in this selection video, this character or that character, and let’s mix them up together. So what I see is Epic’s multiethnic, multiracial, even transgender MetaHuman Creator is, from an Unreal or Epic Games perspective, really just a design tool, not a narrative engine. The transitions are simple and seamless, and the traces of non-binary and non-white identities are simply part of a larger color palette, a color palette of various ethnicities and genders in the database. So I think the MetaHuman Creator has really mastered cosmetic diversity and the rather quick fabrication of multiculturalism, and it seems to suggest that technology can be colorblind and embrace a certain in-betweenness. The rapid transitions that we see between subject positions are really a masterful visual spectacle that seems incompatible with any genuine concern for the difficulties of lived experience, or lived experience with difficult body boundaries, although Epic’s developers suggest that the quality and fidelity of its MetaHumans can create player empathy. So I think what we see in MetaHuman creation is really a feat or spectacle of bodily governance, of regulating flow, of regulating discrete bodies, and the power and privilege associated with this particular tool is not simply about how humans are represented but who produces these representations and who owns these bodies at the end of the day. Because MetaHuman creation requires a certain degree of technical know-how, as well as the requisite processing power, the requisite network access, and the necessary Unreal Engine supports to run these real-time renders.
And at the same time, if we step back a bit, we know that the MetaHuman Creator and its Unreal Engine parent, meaning Epic Games, sit within existing socio-economic relations that guarantee the inequitable distribution of, and access to, technology. This application is based on and requires Epic’s proprietary pixel streaming technology that can pull data from its central server, and this is in part the product of the company’s acquisition of a Serbian-based company, 3Lateral, which pulled the technology for crafting these virtual humans closer to the Unreal Engine by adding 3Lateral’s research on volumetric facial capture and facial rigging to Epic’s portfolio. So what we see in Epic is a constant acquisition cycle of, what do we need? We need 3Lateral, we need to capitalize on volumetric facial capture, let’s pull this company into our orbit; we need stronger networking power or server power for data draws, so let’s work with our pixel streaming technology. So it does indeed become this sort of real,
Eric Freedman 29:30
hermetic, data-driven ecosystem. And as I suggested, the MetaHuman presets, the series of characters that are based on 3D scans of real people, were built from a dataset that required a significant number of cameras and a significant amount of processing power to create complete 3D models of real human faces, and in this case 3Lateral provided the resources for Epic Games to accomplish this task. But we don’t know who provided the raw materials that make up this racial and ethnic and gender mix of the Unreal database. We don’t know how precisely these individuals were arranged and transformed into these preset MetaHuman characters, because the library that you are choosing from is one step, or multiple steps, removed from that initial library of scans of real human faces. These are synthesized to begin with, and you are basing a model around pre-synthesized character sets.
Eric Freedman 30:33
So what you see represented here in this particular slide: users choose one or more MetaHuman presets from the database of pre-made characters. This is simply a sampling of some of these pre-made characters from the Unreal Engine database, and end users can choose one or more of these MetaHuman presets, or character assets, from this database. At this point, there is a library of more than 50 named individuals, organized alphabetically from Ada to Zen; they all are uniquely named. You can then create your MetaHuman by enabling the application’s blend mode. So what you do is choose additional character presets and drag and drop these 3D portrait models into a concentric circle around the primary preset. What you see represented here is the application viewport; in this case there’s one primary preset in the center and three additional character presets orbiting in the concentric circle. And then, using the viewport, you can choose your blend mode and start to move elements of all of these concentric characters into the primary character preset, and you can then map their facial features onto one another. And this is always designed to produce plausible results from the database as it outputs what are custom blends of digital individuals. One other interesting element, once you are done constructing your fabricated model, is that Unreal Engine’s Live Link Face app allows developers to capture live facial performance with an iPhone or iPad and map that performance in real time to the MetaHuman characters. So in this case, you see, from Lucas Ridley, one of his tutorials: he is in the lower right-hand corner, and the Live Link Face app is represented on his phone next to him. What he’s doing is capturing his facial performance to one of the preset MetaHuman characters and animating it with his facial performance. So this becomes a sort of ecosystem: the preset characters, Unreal’s Live Link Face app, your ability to map your performance to your MetaHuman character. You can also link your MetaHuman character to other full-body motion capture workflows, so you can actually develop pose animations based on real-time motion capture. And then the software ecosystems are compatible with other development tools like Maya and MotionBuilder, too, so you can do some replotting or fine-tuning, and that’s the nature of this MetaHuman creation: it has to play well with other data ecosystems. At the same time, it has to be rendered and published inside Unreal Engine in order to comply with the company’s licensing agreement. Moreover, again as I mentioned, Epic continues to foreground the value of all of its other services that allow you to run this application through a circuit of relations that includes not just what 3Lateral has given it, but also what Cubic Motion, another acquisition, gives in terms of facial animation technologies. It has acquired a number of other complementary companies, including Twinmotion for its real-time architectural visualization software, and Epic recently acquired Quixel for its Megascans library of 2D and 3D photogrammetry assets.
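Conceptually, the blend mode described above can be thought of as a weighted mix of preset facial parameters, as in this toy sketch (the preset names, parameter values, and weights are invented for illustration; Epic's actual blending operates on scan-derived meshes and rigs, not simple scalars):

```python
# Toy sketch of blending facial parameters from several presets into a
# primary character, loosely analogous to dragging presets around the
# MetaHuman Creator viewport.  Not Epic's actual math or data.
presets = {
    "primary": {"jaw_width": 0.50, "nose_length": 0.40, "cheek_depth": 0.55},
    "ada":     {"jaw_width": 0.62, "nose_length": 0.35, "cheek_depth": 0.48},
    "zen":     {"jaw_width": 0.45, "nose_length": 0.52, "cheek_depth": 0.60},
}

# Blend weights sum to 1.0; the primary preset keeps most of the influence.
weights = {"primary": 0.6, "ada": 0.25, "zen": 0.15}

blended = {
    feature: sum(weights[name] * params[feature] for name, params in presets.items())
    for feature in presets["primary"]
}
print(blended)  # a new parameter set that necessarily stays inside the presets' bounds
```

Because every output is a convex combination of the presets, the result can never stray outside the range the presets already span, which is one simple way to picture why the blends always look "plausible."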
So what we have in the Epic arsenal is a consolidated, robust suite of 3D content creation tools, created inside a hermetic data system, covering people, motion, environments, and an object asset library. All of the elements are blended together, and again, as I pointed out, we can start to see Epic Games as one of many parties looking to colonize the metaverse.
Eric Freedman 34:56
Before I wrap this up, I want to suggest that Epic, of course, is not alone in its pursuit of building the metaverse and populating it with human assets. PeopleSansPeople is Unity’s open-source, human-centric synthetic data generator; this is a sampling from Unity Technologies, and it was released to the public just earlier this year. Unity, for those of you who work with it, is also an engine-based game developer whose engine-based tools are used in other build environments. What Unity provides with PeopleSansPeople is simulation-ready 3D human assets that are coupled to a parameterized lighting and camera system, meaning you can take an asset, drop it inside any environment, and it’s coupled to lighting and camera, so you’re not going to lose the fidelity of the relationship between your character and the environment. The 3D human models that we see here in PeopleSansPeople are all drawn from a subset of what’s called the Renderpeople library. The reason it’s called PeopleSansPeople is because it’s anonymizing the data: these may be based on real-world photogrammetry scans of real people, but by anonymizing the data, they become neutral objects that cannot be associated or reattached to any one individual. So the 3D human models that Unity is using are drawn from a subset of Renderpeople. And for those of you who have never shopped for pre-rendered people, this is the Renderpeople library. Renderpeople has its own 3D people library of products that are built from 3D photogrammetry scans, which are high-resolution scans of, in this case, more than 4,000 live models that can be inserted into a range of 3D visualization projects. So again, in this case, you can purchase a model in the Renderpeople shop and drop it into any sales environment, any sort of interactive environment that you want. Again, these are based on real models. So in the Renderpeople environment, you’ve got a real actor who can act as a salesperson, or a host at your university campus on a virtual tour; you don’t have to build that model, and it can be synchronized with your environment. PeopleSansPeople, what it does, is simply think about the ethics of data privacy by anonymizing the data from the Renderpeople models, modulating the appearances of its virtual people to create more customizable data sets. And it also allows you to train the models to adapt to a specified environment, so we can then generalize the model to that domain, so it can fit almost any domain.
Eric Freedman 37:50
As I close, I just want to say that, beyond the focus on people, game engines and their data infrastructures are really the building blocks for the metaverse, and developers like Epic and Unity are using their engines, their assets, and their data holdings to claim territory, and, as I pointed out, to buy up competing or complementary technology companies to really secure these hermetic world-building systems. As I mentioned, in 2019 Epic acquired Quixel, so it added this world atlas of high-resolution 3D scans of environmental specimens: surfaces, materials, objects and vegetation. Unity in 2021 announced its plan to acquire Weta Digital and the company’s proprietary visual effects suite, stating its intent to unlock the potential of the metaverse. So what we find Unity and Epic both doing is, again, building these people models and then acquiring these robust asset libraries of urban and natural environments, of flora and fauna, of manmade objects, of materials, textures and more, to really have this robust, fully fledged architecture. From my own perspective, I turn to critical media studies beyond studies of production, because I think at the end of the day we need to interrogate the formal and structural properties of these texts, and at some level we need to understand the production process, and I hope I’ve given you some insight into what happens here. This is the animation retargeting that I’ve talked about: this particular MetaHuman character can be mapped to almost any other MetaHuman character, can be retargeted to the same animation rig, the universality of type. But I think it’s important to ask whether a tool that can produce multicultural subjects is really empowered to produce substantive social change and create counter-narratives that challenge existing racial and ethnic inequities. I think these questions are pressing in this closed circuit of relations, because human assets are being fabricated from the same identical algorithmic architecture as their virtual worlds; they’re being constructed from, and operating within, the same governing data structures. And as I said, the promotional materials that surround the release of Epic’s MetaHuman Creator focus in on what characters look like and how plausible they are, rather than what they’re doing or what they’re being asked to do, and these representations always lack context. But as the engine-driven traces of the natural world continue to move toward greater fidelity, and toward a greater alignment between physiological, physiognomic and mechanical systems, the power of Epic’s tool is not simply what it does but how it does it. The user interface is designed as a direct manipulation tool, where end users can monitor their design choices in real time and visualize the various character deformations as they move around these facial markers and the sliders, but they don’t really have access to what’s happening beneath the surface. So the question is, is the MetaHuman Creator an engine for diversity, or is it simply a spectacle of control, where video game technologies have extended their algorithmic influence into the machinery of a much broader media and information economy?
And they’ve succeeded by holding on to the humanistic attachments of these algorithms that make their software look so welcome and familiar to many developers and designers who are struggling with these problems and to get their products running. So by examining the software systems and the technologies that undergird the representation, I think we can begin to understand the cultural power of code as it gets written into these data architectures. And in the case of MetaHuman Creator, we have to understand that software is both a matter of engineering, what’s happening in the background, but also language: it has to be legible, it has to follow certain scripted rules, it has to organize, it has to communicate, it has to represent information in ways that make sense. It has to shape all of its image data into a fixed set of material relations that follow the rules of successful game production or interactive media production. These creations have to make sense, the bodies have to work, they have to perform without failure, and they have to create really responsive bodies. So there’s a broader lesson here for me in shifting our attention to both the deeper architectures of game design and development, and also thinking about what’s happening at the image level as well. We have to unpack the privilege that comes with the ability to really engage in this machine-based identity surfing; we have to always continue to unpack, to understand, that these hygienic or functional, understandable distillations of the material world are based on an underlying information architecture. And there are other scholars and critics that you might be familiar with who have approached this. Jennifer Malkowski and TreaAndrea Russworm, in their study of game representation, argue that we don’t want to simply pay attention to code analysis, because if we do that, we are thinking that programming is purely technical and not particularly ideological; and at the same time, game representation is always tethered to software and hardware. So we need to develop and expand our approach to media studies, which you may already be doing, and try to situate computation and representation side by side, and understand their interrelated disciplinary histories and practices; understand that digital media have an outer layer and an inner layer, if you want to think of it that way, that representations are governed by code, and that they are the provenance of both media history and the history of computing. That’s what I see going on here. The question that we have to always remind ourselves of is, what is the level of narrative possibility? Are these systems simply amplifying existing hierarchies of knowledge and power? Our work with MetaHumans, from a design perspective, is commonly limited to surface-level interventions with the user interface. But as a countermeasure, I believe we have to think about how these proprietary software environments are built; we need to strive for deeper computational literacy if we want to figure out the design biases of these imaging systems and consider how they are operating in, and regulating, the world at large. With that, I will pause and hope you have some responses or some inputs and questions, or some parallel investigations that you’re all involved with. So thank you.
Heather Hendershot 44:59
Thank you so much, Eric, that was really, really interesting. I want to ask a quick question. That last simulation we were seeing over and over again, of the sort of robot human and the human human, and they’re the same, right? And their hands are empty, but it’s simulating, like, holding weapons, right? Just without the weapon being there. Um, so I just want to make sure that was what was really happening; it struck me as very strange, right? And it points to, like, well, what are we going to do with these responsive bodies that don’t malfunction, or that always function correctly, right? And it’s going to be playing games with guns, and I’m not a gaming person, and also I don’t know, like, the whole range of Epic Games. But are there different, more unusual things that these bodies are designed to do? When you talked about responsive bodies that don’t fail, I immediately thought of, like, pornographic applications of simulated bodies. And at that point, it occurred to me, like, oh yeah, these really limited kinds of body creation would really have a negative impact if they were used in a pornographic context, where, like, the range of body types would possibly be really kind of all over the place, right, as opposed to really tightly constricted in what you showed. So this is just kind of a little prompt that might get you going, and a few additional thoughts.
Eric Freedman 46:29
Yeah, because the rigs are there, there is a tension between the marketing and the design orientation of the product. So what you’re rightly noting is, what I showed you, that last animated loop, is pulled straight from the Unreal documentation. There’s this tension between the Unreal documentation, which shows you the traditional, there are a couple poses blended together, where the assumption is a particular type of action-based gameplay, whether it’s a third-person shooter; there’s a certain type of action orientation written into the documentation that accompanies it, that’s peppered throughout what Unreal is proposing. But you are correct in noting the open narrative prompt is: as long as there’s a rig, these characters can be made to perform in any way. The limitation there is back to the Unreal Engine architecture; if they have to be published within Unreal, what are the limiting contours of the Unreal Engine that allow or disallow certain bodily performances?
Heather Hendershot 47:46
Thank you. I see Srushti has her hand up.
Srushti Kamat 47:51
Hi, Eric, thank you so much for being here. My name is Srushti, I’m a grad student here, and I’ve actually been writing my thesis on virtual production for the last two years, so I’ve been tracking a lot of what you’re talking about, and I’m hoping that my question might be able to, like, mitigate some of the blocks that I’m not able to understand in your argument. You’ve sort of noted a couple of different aspects, right: narrative possibility, embeddedness in code, proprietary information, literacy, marketing. These are all very different studies and angles on the same issues of power. So what exactly are you arguing? Because there are multiple layers to the arguments of power within that. So maybe that question will help mitigate my hesitation to respond. Thank you.
Eric Freedman 48:37
For me, it flows from, I think, the overarching... I think all of these are connected together. For me, it’s about a data ecosystem, so the big picture is who owns and controls data writ large. And it is a fascinating field, because, as you are, being immersed in studying virtual production, it’s constantly evolving. My initial argument, around game engine technologies, is that the space for radical possibility closes as you move through the development pipeline, and that’s typically the logic of game development: you have this rich and open engine-based architecture that is just a software framework, that then gets connected to a designer, that gets connected to a story. So the radical possibility shuts down at every stage of the development pipeline. Software frameworks are open, they get formalized as an engine, they get attached to a particular intellectual property, and that then pushes them within a particular genre, that then makes them tell a particular story. So it’s a closing down of queer or expressive possibility, in ways where, if you were just to take the Unity engine and use it as an independent developer, that possibility is still there. Although the question becomes, as Unity and Epic are now no longer simply in the business of producing an engine (Unity produces an engine, Epic produces an engine, Rockstar Games produces an engine), as they start to acquire, in this case with Epic Games, Quixel and 3Lateral, you as an independent developer or artist start to look outward for other types of influences or assets or libraries or data flows that may make your object act differently, and what you find is that those things are now shared within the same ecosystem. So it’s not simply, oh, we’ll have to buy everything from Unreal; it is as well that Unreal starts to shape what those products can do. So while 3Lateral used to be an independent, Serbian-based company that could say, who knows what we might do with 3D scanning, now it is not just acquired by Unreal but asked to do and not do certain things as part of the Epic Games mission statement. So you’re right, it’s a complexity, because there’s that, but then there’s also, well, what is the space for expressive possibility in documentation? I study some documentation that tells users, that explains, okay, as an independent developer, what should I do with this tool? Well, you start by reading through documentation, which then pushes you down a particular pathway that says, click here, then click there. So the element of exploration starts to shut down, and documentation and marketing, all of these things... it’s hard, you’re right, it’s hard to stitch these together, because you run up against different critical assumptions, driven by different material histories, different bodies of work. But I think we have an obligation to try to start to stitch some of these things together. And it’s interesting with virtual production, like you know, because The Mandalorian and Westworld are working with Unreal, so you may have a vision or a storyteller or a filmmaker who is working with intellectual property, and now has the Unreal technician on set,
who can then say, we can do this, but not this. So you start to determine possibility in virtual production based, in some instances, not on what Lucas wants to do, but in other instances on what the pairing with Unreal and Epic Games suggests the technology can or should or will do, based on a history of conventional use. I don’t know if that cuts any closer, but it is an extremely complicated landscape as more and more of the underlying data sets get conformed in ways that we simply don’t see.
Srushti Kamat 53:09
Okay, so I just wonder, though, how this is different from pre-existing, you know, Maya workflows, where there were also artifacts that said do A, B, C, and D. I mean, any deployment of technology involves an authority over the workflow. That’s true even of Zoom technologies, right? Like, click the mute button when you don’t want to be heard. So how are you distinguishing this as any different, I guess, is my question.
Eric Freedman 53:35
They’re similar in terms of their underlying architecture, as you point out, yes. My first work, my first book, Transient Images, was about, like, online memorial archives: how does the architecture of a 9/11 memorial, or a virtual remembrance online, where all of your personal images get conformed to a particular template, work? That’s sort of the first iteration of this process of normalizing data according to these privileged, pre-built architectures. I’d say the difference here: Zoom, and Facebook, and Match.com, like online dating sites, all conform information, and we can see that ideological imperative at work, meaning there’s a pre-built architecture that we’re all asked to work within, but we understand that negotiation at work. I’d say, for me, it’s this object of authenticity or verisimilitude that starts to conceal what’s happening, because it’s one thing to have it in an interface culture where we understand the distinction, or see the radical distinction, between ourselves and the representation on screen; as those divides start to break down, in the case of photogrammetry, in the case of MetaHumans, it becomes harder and harder to see the coding layer than it might be in terms of, like, what are my choices? These become much more comfortable, pre-built assets. And we start to, maybe not as scholars, but I think more and more, as the architecture becomes so complex, move to default mechanisms that are handed to us. And I think, especially within the realm of story, that becomes a much more dangerous arc of simplification.
Heather Hendershot 55:38
I think... there we go, Tomás. Yes.
Tomás Guarna 55:43
Thank you. And thank you for a fantastic talk. I’m interested in your postulation that these algorithmic bodies are anatomically coded, and that this restricts, you know, some fluidity or queerness of some traces of code. But I wonder, you know, humans are coded, right? We have DNA; our anatomic possibilities are not infinite. So I wonder if the problem here is that the reference is humans, and if that is the real, like, challenge here.
Eric Freedman 56:15
What do you sort of, uh, what are you postulating there?
Tomás Guarna 56:18
Well, my question is if there is a possibility of designing a replication of a human that is not anatomically, coding-restricted, right? Given that humans are restricted by a code, which is DNA, right? We don’t have infinite possibilities; our physical bodies cannot be, you know... there’s a limit to the queerness of our physical bodies. And I wonder if the problem that we’re talking about could be summed up as, you know, the problem is that it’s humanity being replicated, and the physical body has limitations.
Eric Freedman 56:53
Right, mm hmm. Yeah, and it is this level of, I think, what you’re suggesting too: it’s the level of remediation that we’re talking about. We have this complication within the body itself, and then the photogrammetry library, the live scanning of the body, suggests that you can capture it, but we’re capturing the materiality of the body, not its essence. So we start, with the scanning library, to work from the surface of the body. So it’s in the multiple levels of remediation: the library is built on surface scans, which face that essential problem you’re talking about, and then the MetaHuman Creator and all these 3D building tools start to work from that secondary level. The question then becomes, does narrative, where that possibility gets reinterpreted or re-encoded... does the agency always get lost in terms of how these bodies are constructed, but get returned somehow in terms of what they do and say, in a way that might complicate how they’re built?
Tomás Guarna 58:09
But is there... do you see a conflict there, then? I mean, I don’t want to address the conversation too much, but do you think that there’s a possibility to transcend that material aspect of humanity that this technology is somehow restricting? Would that be a correct way to put it?
Eric Freedman 58:31
Um, do I think the technology can actually... is the technology inherently a closed circuit of relations that cannot get us to something more transcendent? Is that what you’re suggesting?
Tomás Guarna 58:44
Yeah, yeah. I wonder.
Eric Freedman 58:47
I mean, ultimately, at the end of the day... now you’re asking in sort of an alternative language: is there an expressive possibility that is simply a potentiality out there and not realized? And I think that’s a good prompt. I think it’s antithetical to the work that engine-based design ultimately does. I think this is the role of artificial intelligence too: the simulation of intelligent behavior means it has to be logical behavior, so there is no space for illogic or incorrectness or failure. I’d say it’s the nature of these elements that they may not ever be able to push to that end. Thank you.
Heather Hendershot 59:41
We have a question from Paul. And then we have one from Laurel Kearney. That’s in the chat box, and maybe we could start with Paul.
Paul Roquet 59:50
Sure. Thank you for the talk. There are so many interesting directions this can go in. I wanted to ask more about the push for hyperrealism, and sort of the familiar and legible form of hyperrealism that really came out strongly in the way these interfaces are designed. I work on 3D media production in Japan, where the fixation on realism often seems to be a very sort of American thing. It’s not like we couldn’t be the anime characters instead; why do we need to do this? So I’m curious what you see as driving that. Is it more sort of the history of uncanny valley discourse, and trying to stay well away from that by not letting you actually get there with the interface? Or is it more that that’s clearly where the big money is, to be able to make completely plausible, realistic characters, as opposed to something that’s maybe a little more unfamiliar to people? What is driving the need to keep it sort of safely hyperrealistic?
Eric Freedman 1:00:47
Um, that’s an interesting question. And, you know, at some level, from my perspective, I think it’s the origin story of these technologies, in the larger quest of AI development in general, and in its relationship with either game forms or game technologies; you know, games are a testing ground for AI, so they ground it in terms of particular questions. I think the fact that Epic and Unreal are grounded in game space is a large part of it, and I say this with the understanding that even in Japan, if we talk about Capcom, Capcom has always, at least in its video game space, pushed in its engine development towards greater and greater hyperrealism. So I think it’s the legacy of where these tools come from, of what industries they’re coming out of, that is orienting them in a certain direction; I think that’s where the push is coming from. So I think there is a particular bias within these particular game industries that is shaping, that is pushing the technology, or attaching that technology to that sort of visual bias, which is simply getting replicated in, and aligning them with, other media, other entertainment industries. So you’re right, there is a possibility in all of these engines of working in a 2D way, but that seems to be obfuscated now by the attachment, the level of commerce, of Epic Games and Unity to other parts of the entertainment complex that are prioritizing certain types of project models. It will be an interesting test case in terms of the metaverse, in terms of what is satisfying and not satisfying: what’s the visual logic, what’s the visual vocabulary of the metaverse? I think it will be interesting to see if that becomes more and more aligned with what we’re seeing in the photogrammetry of the MetaHuman Creator, or whether it still remains this sort of other space. And without getting too far off, I think the fact that Fortnite and Epic Games have investments in that material direction means there’s going to be a strong push in terms of Epic controlling the visual vocabulary of what happens in those other spaces. But I think that’s where it stems from: the origin stories of these tech companies.
Heather Hendershot 1:03:36
Thank you, Eric. From Laurel, we have this question, which I'm going to read aloud: Sorry for the technologically rudimentary question, but I'm curious about the specific mechanisms of an algorithm determining a, quote, plausible face. Do you have any information about how this plausibility is specifically codified, and at what point in the process? Is it based on humans manually reviewing edge cases? Is there a possibility for users to push against or override that plausibility within the MetaHuman Creator as released, or is it too late by the time the data set reaches the user's hands?
Eric Freedman 1:04:10
That's an excellent question, and I can radically simplify it by first saying that it starts with the initial photogrammetry standard. It's based on existing human physiognomy that gets captured, so there's the original scan library, which has some element of correction in the sense of humans reading through and reviewing those edge cases in the creation of the initial library. Once those edge cases are corrected, the machine learning model takes over, and it can then identify and eradicate those edge cases over and over again. So there's an initial human intervention that sets up the machine learning model, which then identifies the edge cases and slowly but surely makes sure they get filtered out. If you look at StyleGAN, which I think is a good place to look at this, all of the documentation is on GitHub; if you go to This Person Does Not Exist and click through, you can ultimately find the code on the NVIDIA StyleGAN repository, and it's all open source. The possibility to push against or override that plausibility becomes nonexistent in the MetaHuman Creator, and they talk openly about this: you cannot create an odd result. They have set the sliders, they have set the parameters, in ways that do not allow you to push too far, to push the limits of plausibility. So functionally, yes, it is in fact too late by the time the data set reaches the user's hands, in the case of the MetaHuman Creator. Now, what you do see is some people exporting these assets into Maya, where you can do a bit more subtle manipulation. So there's a way of regaining some agency over your character by pushing it into Maya.
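To give a concrete sense of the general pattern Freedman is describing, the following is a minimal sketch of plausibility by construction: a character is expressed as a blend of a curated scan library, and the output is constrained so it cannot leave the range of values observed in that library. The data, names, and clamping strategy are illustrative assumptions for this sketch, not Epic's actual MetaHuman pipeline.

```python
# A minimal sketch of "plausibility by construction" (hypothetical values and
# names; this is not Epic's implementation). A character is a weighted blend
# of curated reference scans, so the result always resembles the library.
import numpy as np

# Toy "scan library": each row is a feature vector captured from a real scan
# (say, nose width, jaw width, eye spacing) that humans have already reviewed.
SCAN_LIBRARY = np.array([
    [0.42, 0.55, 0.31],
    [0.47, 0.60, 0.29],
    [0.39, 0.52, 0.33],
])

# The plausible range is derived from the library itself, not set by the user.
FEATURE_MIN = SCAN_LIBRARY.min(axis=0)
FEATURE_MAX = SCAN_LIBRARY.max(axis=0)

def blend_character(weights):
    """Blend library scans with non-negative weights that sum to one."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    w = w / w.sum()                      # force a convex combination
    face = w @ SCAN_LIBRARY              # stays within the scanned population
    return np.clip(face, FEATURE_MIN, FEATURE_MAX)  # final per-feature clamp

# Ordinary slider settings produce an in-range face...
print(blend_character([0.2, 0.7, 0.1]))
# ...and even extreme inputs are normalized and clamped, so an "odd result"
# outside the library's range is unreachable by design.
print(blend_character([100.0, 0.0, 0.0]))
```

On this reading, the sliders are not direct controls over geometry but controls over how existing, human-reviewed data is mixed, which is one way an interface can make implausible outputs unreachable rather than merely discouraged.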
Heather Hendershot 1:06:29
And in terms of plausibility, what is defined as plausible? Is it plausible that someone might be fat? Is it plausible that someone might have a harelip, or any range of disability? Because their idea of what's plausible seems like a very normative idea, right, a certain kind of perfect body with a certain body mass index and so on. I'm just curious: with the sliders, is there any possibility for someone who has a limp, for example?
Eric Freedman 1:07:08
Yeah, well, they have a range of body types, body sizes and shapes, in their asset library, so you can in fact mix and match; there are certain limits, again, to how far you can push those. And then what the body does, and how it performs, is simply a matter of the rigging, so you can perform deformations to the body in terms of how it moves, because that's not controlled by MetaHuman Creator. But in terms of when an ear or a nose no longer looks like a nose, or a lip no longer looks like a lip, there is a sort of normalizing function across all of the basic facial features. Again, this gets back to the AI training model: if it can no longer recognize something as a particular facial attribute, it becomes a marginal case; it's no longer plausible; it becomes an edge case that is not part of the data set. So there are limits to how far you can deform, and I use deform simply in the technical sense. But again, moving out of MetaHuman Creator into Maya, you can then start to change surface texture, you can add scars to a face, you can deform a face, you can do all of these other bodily deformations that you might not be able to do in MetaHuman Creator. The issue for me is that that becomes a second level of competence, of literacy: knowing how to take the asset from Unreal and move it into Maya, do the deformations and the additional rigging there, and then export it back. So those who have the utmost literacy and competence are those who can push the bodily transformation the furthest, whereas most of us are stuck using some of these default parameters.
Heather Hendershot 1:09:15
Thank you; Laurel says thank you. And we have a question from one of the attendees who's not a panelist: At the beginning of your presentation, you presented the possibility of replacing human actors with AI MetaHumans. What would a MetaHuman actor be without the unpredictable narrative layer inherent in human personality?
Eric Freedman 1:09:41
That's an interesting question. At the moment, Unreal actually does not allow you to use its MetaHumans as intelligent agents, which is interesting. They've drawn a line in the sand: you cannot use a MetaHuman as your sales force or as your AI agent on your university tour. So there seems to be some sensitivity to faking people, to tricking people into thinking a MetaHuman is a real person. It has to be used within a particular type of, I would say, fictional context, where you understand the MetaHuman as a character and not an agent, not an intelligent agent. I'm losing the thread of the question a bit, but Replica Studios, with its automated AI voice control that uses a human actor, is subject to the same conditions: you're working with the MetaHuman still as a character within a story, as opposed to an agent outside of a narrative construct. I think the question of what an AI agent might do in an erratic or unpredictable way is attached to the context in which it's inserted. There are multiple use cases, but if these characters are traditionally being used, and again, we're talking about a game company, as non-playable characters or companion characters within a game narrative, then for a character to act too erratic, to be too unpredictable, is going to lead to less than satisfactory gameplay in most traditional AAA games, which is where you'd see these characters. The character has to make sense; it has to be beatable; it has to be understandable; it can't act erratically, because that can result in a certain amount of displeasure in a traditional AAA game title. So I think the narrative context might determine to what extent a certain level of erratic behavior is allowable, and also acceptable, depending on the context of its use. I'd say the same about a university tour: with an intelligent agent, someone wants some continuity at work there. What would be an acceptable context in which the character might act more erratically? There are likely use cases where that could happen and would be acceptable. Thank you.
Heather Hendershot 1:12:29
Um, Srushti, another question?
Srushti Kamat 1:12:33
Yeah, it just occurred to me, and I'm wondering what your thinking was behind using the word colonize in association with these companies. Are you building off of your existing theory? Or why not say control? Colonize is a heavy word.
Eric Freedman 1:12:48
I use it in the sense that, to me, colonizing suggests the way that they are acquiring assets. And I use the example of the metaverse because they may own the environment, or the virtual stage, and they can populate that stage with characters, and they can populate that stage with environmental assets and objects. For me, the notion of colonization suggests a certain pleasurable engagement, or willful engagement, with an Epic Games metaverse. There's a certain complicit nature to the agreement: if we want to engage in these spaces, if we want to have an avatar in the metaverse, there's a certain willful adoption of the platform. It's a very seductive adoption of the platform, and for me, that's the power of the colonizing influence. There is an element of control, but there is also this suggestion of a complicit agreement among those who want to participate in the tools. And then, if it becomes a branded metaverse construction, the limits of engagement are based on acquiring an Epic Games avatar, or a Fortnite avatar, which is owned by Epic, in order to participate. So it seems more about your pathway to entry. Who owns and controls doesn't feel like the appropriate framing; there is an element of control written in there, but the way possibility gets written out in terms of entry points feels to me more like a colonizing force.
Heather Hendershot 1:14:58
I think we have a question from Tomás.
Tomás Guarna 1:15:02
Yes, thank you. I'm personally very interested in questions of governance around the metaverse, and I think the questions around which 3D assets are and are not approved are really interesting. But I think there is something to acknowledge here: the labor behind the 3D assets has to be done by someone. So this probably has to be a company like Unreal, right? There probably have to be engines. I was recently playing around with Decentraland, which is a metaverse-inspired application that works on blockchains. It's very interesting, but the engine that runs it is Unreal. There are no MetaHumans involved, given the kind of 3D space it is, but still. So I wonder if the question here is about aesthetics, about the potential aesthetics of what MetaHumans can do, or if it's a question more about governance of cyberspace. We can narrow it down to the metaverse and say it's a question about the politics of which assets get blocked and which get validated, right?
Eric Freedman 1:16:16
I would answer that by saying yes, I agree; I think it's both. It is about data governance, but unfortunately the data governance conditions often get radically simplified: they typically get centralized around issues of data security and data ethics, and they typically skirt the questions of representation, at least in the policy debates about ownership of data, because those are easier questions. Easier might be the wrong word, but those are the traditional policy questions that we ask about information space. We ask less about what our assets are, what assets are allowed, and what they can do there; those questions typically get reduced, first and foremost, to questions of ownership and privacy.
Tomás Guarna 1:17:13
That's super interesting. And it makes me think there's an analogy with content moderation, right? What you can say on the internet, and what it can look like in the metaverse: these seem to be similar questions.
Eric Freedman 1:17:31
Yeah. Didn't Facebook just introduce boundaries, personal boundaries? That's an interesting question, and I haven't really thought it through, so I won't say too much, because it will sound like an unformed thought. But it's an interesting moment where data governance, the governance of behavior, meets the question of the representation of self, of the avatar; where data governance, questions of ethics, and questions of bodies and representation start to come together. At least in that instance, we need to build policy around behaviors, which now takes a visual form: it becomes embodied in that space.
Heather Hendershot 1:18:31
Great, thank you. Do we have any more questions or thoughts to share? I don't see anyone else from the off-screen space. Okay, well, thank you so much, Eric. That was very provocative, very interesting; thanks for sharing, it was really great. And thank you to everyone for coming today. Andrew, can you remind us what our talk is next week, so we can do a little plug for it? Is it the podcaster?
Andrew Whitacre 1:19:08
Yeah, just give me a second.
Eric Freedman 1:19:12
Before you do that, just let me say: I know we've got a lot of students and grad students in your program. These are questions that I'm actively working through, and I know you're actively working through similar, parallel, divergent questions. So I'm happy to hear what you're working on, to respond to what you're working on, and to hear how you respond to what I'm doing. These ideas are constantly in motion, so I'm happy to share and talk more, and Heather can tell you how to reach me. I'm happy to continue conversations about the work, your work, my work, however it helps. That is great; thank you, Eric.
Andrew Whitacre 1:19:52
So yep, we do have another event, same time next Thursday, 5 pm. It will also be just on Zoom, so not in person. And that's with Jorge Caraballo, who was formerly the growth editor at Radio Ambulante, Latin America's most popular documentary podcast. His talk is entitled How to Use Audio Storytelling to Cultivate a Community and Keep It Engaged.
Heather Hendershot 1:20:15
Great, thank you. All right, well, I'll see the grad students there next week. Hopefully some of our guests will be there as well. And thank you again to Eric, and, obviously, virtual applause.
Eric Freedman 1:20:27
Thanks. Bye.