Blending imagination with the physical world through advancing the art and engineering of XR technologies.
The MIT Center for Advanced Virtuality — Virtuality for short — pioneers innovation with technologies of virtuality including XR (VR, AR, MR, etc.), videogames, social media, and new forms unanticipated by these platforms.
Learn more
These technologies are important, as nearly everyone uses virtual identities through platforms such as social media profiles, e-commerce accounts, or avatars in video games. More importantly, these platforms are increasingly interconnected. MIT Virtuality takes a thoughtful approach to such technologies that enable us to communicate, express, play, and work virtually. We believe that virtuality is more than just a particular type of head-mounted interface. It refers to experiences that are real, but not primarily physical. By advancing the state of computer-based virtuality systems (virtual reality, augmented reality, and beyond), MIT Virtuality hopes to better serve human needs through artful innovation. Our focus is the array of computationally enabled experiences in which we explore aspects of our selves, environments, and societies imaginatively constructed atop our physical world.
The MIT Center for Advanced Virtuality supports designing, developing, and researching technologies of virtuality.
In particular, we focus on production (development of new genres, aesthetics, and conventions), research (simulation and understanding of social and ethical impacts), and innovation (new models and collaborations to deploy impactful outcomes).
The MIT Center for Advanced Virtuality pioneers innovative experiences using technologies of virtuality. Such technologies, ranging from Virtual Reality (VR) to Cinematic Reality (CR) and beyond, all use computing to construct imaginative experiences atop our physical world. It is crucially important to create and deploy such technologies effectively since they are now used every day to communicate, express, learn, play, and work.
The Center for Advanced Virtuality has several functions supporting its mission. Its production function serves as both a studio and a laboratory to support both creative projects and research endeavors. The studio brings faculty together with professionals to innovate new genres, aesthetics, and conventions for using technologies of virtuality. Its laboratory investigates the impacts of these technologies, focused on learning, simulation, and cognition. Its enabling function brings together students, experts, and resources to further the intellectual and creative capacity of work involving technologies of virtuality across MIT. Taken together, these functions advance the state of the art for virtuality research and development with a cutting-edge humanistic ethos that considers the social and ethical impacts of technologies as we invent them.
Creation of new genres, aesthetics, and conventions in virtuality.
Research to understand effective new models of virtuality and to understand its critical learning, social and ethical effects.
Innovation to foster culturally and economically impactful outcomes.
Through its Studio, Lab, Salon, and Hub (see diagram below), the Center for Advanced Virtuality advances the making, researching, sharing, and enablement of computer-based virtuality systems and aims to better serve human needs through artful virtual innovation.
There are four central functions of the center:
Content Creation [Virtuality Innovation Studio]
The studio supports XR Arts Production focused on expression, innovation, and professional-quality XR development.
Social Impact and Learning Research [Critical XR Pedagogy Lab]
The lab supports research to assess the impacts of XR systems, focused on critical pedagogy, experiential learning, learning via simulation, and the cognition of XR experiences.
Capacity Building Events [XR Salon]
The salon supports visiting speakers, panels, and other events to further the intellectual and creative capacity of XR work across MIT.
Sharing Resources and Abilities [XR Hub]
The hub offers coordination of student involvement in XR projects, physical space, and equipment for XR production and research.
Faculty & Researchers
DIRECTOR
Professor D. Fox Harrell, Ph.D., Director
Fox Harrell is a researcher exploring the relationship between imaginative cognition and computation. His research involves developing new forms of computational narrative, gaming, social media, and related digital media based in computer science, cognitive science, and digital media arts. The National Science Foundation has recognized Harrell with an NSF CAREER Award for his project “Computing for Advanced Identity Representation.” Harrell holds a Ph.D. in Computer Science and Cognitive Science from the University of California, San Diego. His other degrees include a Master’s degree in Interactive Telecommunications from New York University, and a B.F.A. in Art and a B.S. in Logic and Computation (each with highest honors), with a minor in Computer Science, from Carnegie Mellon University. He has worked as an interactive television producer and as a game designer. His recent book is Phantasmal Media: An Approach to Imagination, Computation, and Expression (MIT Press, 2013).
Staff
Francesca Panetta, XR Creative Director
Francesca Panetta is a Creative Director in the MIT Center for Advanced Virtuality. As an immersive artist and journalist, she uses emerging technologies to innovate new forms of storytelling that have social impact. Before MIT, she worked at the Guardian for over a decade, where she pioneered new forms of journalism including interactive features, location-based augmented reality, and most recently virtual reality, where she led the Guardian’s in-house VR studio. Her work ranges in subject matter; examples include solitary confinement in U.S. prisons in a work called 6×9 and a child development-based work about seeing the world through a baby’s eyes called First Impressions. Such works merge journalistic reporting, scholarly sources, and artistic expression. Her works have won critical acclaim, receiving awards around the world and touring the White House, Tribeca, Cannes, Sundance, and more. She was a 2019 Nieman Fellow at Harvard University.
Rita Sahu, Virtuality Senior Project Manager
Rita Sahu received her Bachelor’s and Master’s degrees in Engineering from MIT. She was elected to Tau Beta Pi and Pi Tau Sigma on the basis of distinguished scholarship and leadership. Over the last 10 years, she has managed a variety of large educational projects and initiatives at MIT that have involved complex global collaborations. Rita founded CogniHive LLC in 2017 to provide an opportunity for the elderly to stay mentally active and socially connected through learning content.
Megan Parnell, Administrative Assistant
Researchers
RESEARCH ASSISTANTS
Pablo Ortiz, Graduate Student, Computer Science and Artificial Intelligence Laboratory (CSAIL)
Pablo Jose Ortiz-Lampier is a Ph.D. student in the Electrical Engineering & Computer Science program at MIT and a Research Assistant in the Imagination, Computation, and Expression Laboratory (part of the Computer Science and Artificial Intelligence Laboratory). Pablo studies how online learning systems can support critical thinking at scale.
Danielle Olson, Graduate Student, Computer Science and Artificial Intelligence Laboratory (CSAIL)
Danielle Olson is a Ph.D. Student in Electrical Engineering & Computer Science at MIT and works as a Research Assistant in the Imagination, Computation, and Expressions (ICE) Lab within the MIT Computer Science and Artificial Intelligence Laboratory. Danielle graduated with a B.S. in Computer Science & Engineering from MIT in 2014. While at MIT, Danielle founded Gique Corporation, an educational nonprofit 501(c)(3) that exists to inspire & educate youth in STEAM. She also served as an MIT Media Lab undergraduate researcher, MIT Cheerleading Squad Captain, & first Student Ambassador to the MIT Office of Minority Education. Following her graduation from MIT, Danielle worked as a Program Manager at the Microsoft New England Research & Development Center. Danielle has also previously worked as Summer Program Coordinator for the MIT Online Science, Technology, and Engineering Community (MOSTEC).
Megan Prakash, Graduate Student, Computer Science and Artificial Intelligence Laboratory (CSAIL)
Megan Prakash is an M.Eng. student in Electrical Engineering & Computer Science at MIT and works as a Research Assistant in the Imagination, Computation, and Expressions (ICE) Lab within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Megan graduated with a B.S. in Computer Science & Engineering from MIT in 2018. She previously worked at Akamai Technologies as a security platform engineer developing planetary-scale distributed systems. Currently, her interests include researching virtuality for museum and informal learning environments. She intends to create novel virtual experiences that can represent culture, stimulate curiosity, and enable global empathy.
JJ Otto, Graduate Student, Comparative Media Studies Program
JJ Otto is a writer, researcher, and gamer pursuing their Master’s in Comparative Media Studies at MIT. JJ’s interests center on novel forms of storytelling, identity exploration in role-playing games, and how marginalized communities create worlds that are meaningful for them.
Affiliates & Collaborators
Cecil Brown, Researcher and Lecturer, Stanford
Cecil Brown’s undergraduate education was at Columbia University in Comparative Literature (German and French). He has an M.A. in English Literature from the University of Chicago and a Ph.D. in Folklore, African American Literature, and Narrative Theory from UC Berkeley. He is a novelist and folklorist. He directed the first hip-hop conference in 2002 at UC Berkeley. He co-produced the first conference on the cell phone (“Cell Phone Justice”) and “Swinging and Flowing the Digital Divide,” both sponsored by CITRIS (the Center for Information Technology Research in the Interest of Society).
Zephyr Frank, Professor of Latin American History, Stanford
Zephyr Frank is Professor of History and the Director of the Program on Urban Studies. He was also the founding Director of the Center for Spatial and Textual Analysis (CESTA) from 2011 to 2016. His research interests focus on Brazilian social and cultural history, the study of wealth and inequality, and the digital humanities.
Eric Klopfer, Professor, MIT
Eric Klopfer is a Professor and Director of the Scheller Teacher Education Program and The Education Arcade at MIT. He is also a co-faculty director for MIT’s J-WEL World Education Lab. His work uses a Design-Based Research methodology to span the educational technology ecosystem, from design and development of new technologies to professional development and implementation. Much of Klopfer’s research has focused on computer games and simulations for building understanding of science, technology, engineering, and mathematics. His research also centers on the affordances of new technologies, including AR, VR, and mobile, and how those can be applied today. Klopfer is also the Co-founder and past President of the non-profit Learning Games Network.
Meredith Thompson, Researcher and Lecturer, MIT
Meredith Thompson has a bachelor’s degree in chemistry from Cornell, a master’s in science and engineering education from Tufts, and a doctorate in science education from Boston University. Thompson draws upon her background in science education and outreach as a research scientist and lecturer for the Scheller Teacher Education Program. Her research interests are in collaborative learning, STEM educational games, and using virtual and simulated environments for learning STEM topics.
Pakinam Amer, Research Affiliate
Pakinam Amer is an award-winning journalist, science writer, and a virtual reality and emerging technology researcher. She is also a graduate of the Knight Science Journalism program at MIT. Previously, she was the chief editor of Nature Middle East, the regional edition of Nature. Pakinam has an M.A. in Investigative Journalism from City, University of London. She has a strong passion for storytelling, game design, comics, and long-form narratives.
Vik Parthiban, Research Affiliate
Vik Parthiban is passionate about teams and technologies that shape how people interact with digital information and the physical world. He is currently the Director of Product and Innovation at Arrow Electronics. He graduated from MIT, where he worked on holographic interfaces and hardware to push AR/VR technology forward at the Media Lab. He also worked at Magic Leap in the Advanced Photonics division to develop the first consumer pair of Mixed Reality wearable computers. An inventor, engineer, and entrepreneur at heart, Vik aims to positively change how we interact with and use technology.
Advisory Groups
ADVISORY COUNCIL
Mark Loughridge is the Director of the CASE Center for Entrepreneurship at Punahou School. He was also a co-founder of Foundation 9 Entertainment (F9E), which grew to be the largest independent game developer in the world. During his 11+ years in growing this business, Mr. Loughridge oversaw the creative direction of the company, spearheaded new product development, and provided leadership on strategic planning. Mr. Loughridge sold his stake in 2006 and now invests in a variety of projects, with a particular interest in educational innovations and technologies that foster creativity.
Errol Arkilic is a Founder of M34 Capital. M34 is an investment company that focuses on seed and early-stage projects being spun out of academic and corporate research labs. M34 focuses on turning science projects into companies and does so across a broad spectrum of technologies and geographies. He is also a founder of USRCA.org, a non-profit with a focus on entrepreneur education for science and engineering graduates. Previously, Errol was the founding and lead program director for the National Science Foundation Innovation Corps program. He led the I-Corps effort from its inception until July 2013. Prior to this, he was the lead software and services Program Director for the NSF SBIR program. Before his government service, Errol was founder and CEO at StrataGent Lifesciences (acquired by Corium International: CORI) and Manager of Product Engineering at Redwood Microsystems. He received his B.S. in Mechanical Engineering from The George Washington University and his Master’s and Ph.D. degrees in Aero/Astro Engineering from MIT.
ADVISORY BOARD
Arvel Chappell, III, Vice President, Emerging Technology, Warner Bros
John Jennings, Professor of Media and Cultural Studies, U.C. Riverside
Kamal Sinclair, Director, Sundance New Frontiers Lab
Karim Ben Khelifa, VR Director
Lisa Caruso, Head of U.S. Content, Population Media Center
Mahyad Tousi, CEO, Boomgen Studios
MIT STEERING COMMITTEE
Vladimir Bulović, Founding Director of MIT.nano, Professor of Electrical Engineering, Fariborz Maseeh Chair in Emerging Technology
Joe Checkelsky, Assistant Professor of Physics
Ekene Ijeoma, Assistant Professor and Director of Poetic Justice at MIT Media Lab, Director of Studio Ijeoma
Leila W. Kinney, Executive Director of Arts Initiatives, MIT Center for Art, Science & Technology (CAST)
Nick Montfort, Professor of Digital Media, Comparative Media Studies/Writing
Danielle Wood, Assistant Professor, Program in Media Arts and Sciences
Our Work
The studio supports XR Arts Production focused on topics including creative expression, social impact, and cultural innovation.
“The Enemy”
By Visiting Artist Karim Ben Khelifa
“The Enemy” is a groundbreaking interactive Virtual Reality (VR) exhibition and immersive experience by acclaimed photojournalist Karim Ben Khelifa. The MIT Center for Advanced Virtuality director, Professor Fox Harrell, was the Human-Computer Interaction Producer on the project. The project immerses participants in discussions about war and humanity by using pioneering VR technology to present interviews with soldiers on opposite sides of conflicts in Israel and Palestine, The Congo, and El Salvador. Using virtual-reality headsets, participants encounter real, 360-degree imaging and recordings of combatants on opposite sides of international conflicts who were interviewed by Ben Khelifa for the project. In their own words, the combatants offer personal perspectives on war, their motivations, suffering, freedom, and the future.
Ben Khelifa was a fellow at the MIT Open Documentary Laboratory during the project’s formative stage. During that time, he and Harrell realized their shared aim of using virtuality-based art for social change. Harrell wrote and received a grant from MIT’s Center for Art, Science & Technology (CAST) to support further collaboration. This resulted in the project incorporating innovative technology designed by Harrell that shapes and adapts the VR experience for each participant, creating a unique and personalized experience that draws on answers given in a preliminary survey.
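To make the idea of survey-driven adaptation concrete, here is a minimal, purely illustrative sketch of how answers to a preliminary questionnaire could steer the ordering and framing of a VR experience. This is an assumption-laden example, not the actual system built for “The Enemy”; the names SurveyResponse and choose_sequence, and the thresholds used, are hypothetical.

```python
# Hypothetical sketch: adapting a VR experience from a preliminary survey.
# Not the actual implementation used in "The Enemy"; names and rules are illustrative.

from dataclasses import dataclass

@dataclass
class SurveyResponse:
    """A participant's answers to a pre-experience questionnaire (assumed fields)."""
    familiarity_with_conflict: int   # 1 (low) to 5 (high)
    empathy_baseline: int            # 1 (low) to 5 (high)

def choose_sequence(survey: SurveyResponse) -> list[str]:
    """Order the conflict encounters based on the participant's profile.

    Participants less familiar with a conflict might begin with more contextual
    framing; those with a lower empathy baseline might start with the most
    personal interview segments.
    """
    conflicts = ["El Salvador", "Democratic Republic of the Congo", "Israel and Palestine"]
    if survey.familiarity_with_conflict <= 2:
        conflicts.insert(0, "Introductory framing")
    if survey.empathy_baseline <= 2:
        conflicts.reverse()  # lead with the encounters judged most personal
    return conflicts

# Example usage
print(choose_sequence(SurveyResponse(familiarity_with_conflict=1, empathy_baseline=4)))
```

In practice, personalization of this kind could also adjust pacing, narration, and which interview segments a given participant encounters.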
The Noir Initiative
The Narrative, Orality, and Improvisation Research (NOIR) Initiative
The MIT Center for Advanced Virtuality’s NOIR Initiative will focus on the research and practice of digital improvisation, phantasmal media, and cross-cultural forms of expression. An important aim is designing and engineering experiences that convey the characteristics of oral forms of expression through information and communication technologies.
Key commitments include:
- Using vernacular language for free expression and against oppression
- Decolonization of media practices (grounding them in diverse cultural origins)
- Recognizing value in ephemeral, transitory media (such as everyday spoken language)
- The importance of live performance
- Integrative cultural systems, e.g., remediating vernacular cultural forms in computational environments
- Critical connoisseurship (high quality content that is also accessible)
- Speculative design (fiction plays a key role in innovation and meaning-making)
- Transdisciplinary scholarship and production
The lab supports XR Research, focused on topics including learning, cognition, and technical innovation.
XR Learning
Enriching Virtual Reality (VR) Narratives with Embodied and Gestural Interaction
There is an increasing need for embodied and gestural interfaces for VR narratives. This research aims to develop theories and technologies to advance an understanding of embodied identity expression in virtual reality (VR) narratives by designing interfaces that use speech, gestural, physiological, and other inputs to reflect the nuance of real-world human interaction.
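As a rough illustration of the kind of interface this research envisions, the sketch below combines speech, gesture, and a physiological signal into updates of a simple narrative state. It is a hedged, hypothetical example; the data structures, labels, and thresholds are assumptions and do not describe any deployed system.

```python
# Illustrative sketch only: one way speech, gesture, and physiological signals
# might be combined to update a VR narrative state. Names are assumptions.

from dataclasses import dataclass, field

@dataclass
class MultimodalInput:
    speech_text: str       # transcript from a speech recognizer
    gesture_label: str     # e.g., "nod", "shrug", "point"
    heart_rate_bpm: float  # from a physiological sensor

@dataclass
class NarrativeState:
    tension: float = 0.0                     # accumulated affective tension, 0..1
    beats: list = field(default_factory=list)

def update_narrative(state: NarrativeState, signal: MultimodalInput) -> NarrativeState:
    """Nudge the narrative based on embodied input rather than menu clicks."""
    if signal.heart_rate_bpm > 100:
        state.tension = min(1.0, state.tension + 0.1)
    if signal.gesture_label == "nod":
        state.beats.append("character responds warmly")
    elif signal.gesture_label == "shrug":
        state.beats.append("character presses the question")
    if "why" in signal.speech_text.lower():
        state.beats.append("character offers backstory")
    return state

# Example usage
state = update_narrative(NarrativeState(), MultimodalInput("Why did you leave?", "nod", 112.0))
print(state.tension, state.beats)
```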
Workplace Learning
(Supported by J-WEL)
Provoking Critical Reflection on Gender Discrimination with Chimeria: Grayscale
In late 2017, media attention to the perpetual ill of workplace gender discrimination and the harrowing #MeToo stories sparked us to share our own computational tale of fiction that we humbly hope can contribute to this dialogue.
The computer as a medium offers a unique expressive palette for storytellers. With it, we can build models of crucial and moving issues in our world. Grayscale, an interactive narrative experience, provokes players to reflect critically on sexism in the workplace, both overt and hostile and more subtle.
Imagination, Computation, and Expression Laboratory
Passage Home
By Danielle Olson, Project Director; Dr. D. Fox Harrell, Thesis Advisor; Riana Elyse Anderson, Committee Member and Collaborator
Aspects of identity such as race and ethnicity, which have significant impacts on how people experience the physical world, seldom impact interactions implemented in virtual reality (VR) narratives. Even when race and ethnicity are taken into account, their impacts are often minimal.
This research investigates: (1) how racial socialization (RS) mediates how users experience emotion and perceive characters, themes, and events in VR narratives, (2) how social phenomena related to RS can be computationally modeled to facilitate meaningful storytelling on an individual and cultural level, and (3) how culturally situated gestural interaction can be used to drive forward VR narratives.
Toward these goals, the current research will investigate how players make decisions using body language as they come face to face with characters in a three-to-five-minute virtual reality (VR) experience called Passage Home VR, which employs a computational model of RS and is themed around coping with racial discrimination as a student of color in a classroom setting.
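For readers unfamiliar with how body-language decisions might branch a narrative, here is a minimal, purely illustrative sketch. It is not the computational model of racial socialization used in Passage Home VR; the cue labels, appraisal categories, and story beats below are invented for illustration.

```python
# A purely illustrative sketch of branching a short VR narrative from
# body-language cues. Not the Passage Home VR model; all labels are assumptions.

RESPONSES = {
    "direct_eye_contact": "confront",  # player holds the character's gaze
    "look_away": "deflect",            # player avoids the confrontation
    "lean_in": "engage",               # player leans toward the character
}

STORY_BRANCHES = {
    "confront": "The teacher doubles down; classmates take notice.",
    "deflect": "The moment passes, but the accusation lingers.",
    "engage": "A tense conversation about what just happened begins.",
}

def next_scene(body_language: str) -> str:
    """Map an observed body-language cue to the next narrative beat."""
    appraisal = RESPONSES.get(body_language, "deflect")
    return STORY_BRANCHES[appraisal]

# Example usage
print(next_scene("direct_eye_contact"))
```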
The Education Arcade at MIT
Taleblazer – Location-based Augmented Reality Game Play and Creation
by Eric Klopfer, Professor and Director of the Scheller Teacher Education Program and The Education Arcade at MIT
TaleBlazer (http://taleblazer.org) is a platform for making and playing location-based mobile augmented reality (AR) games. Game locations can include venues such as zoos, nature centers, or historic sites, or can be the communities in which game designers live. As players move around the physical space, their devices sense their current location (typically using GPS outdoors or iBeacons indoors), allowing players to interact with virtual characters, artifacts, and data within the context of real landscapes. These hybrid experiences in TaleBlazer create new opportunities for dynamic storytelling, informal learning experiences, and community engagement. Players can take on in-game roles (for example, an environmental scientist, an investigative journalist, or a farm family in the 1830s) and work individually or collaboratively to tackle in-game challenges. TaleBlazer players download a free mobile app to their device (Android or iOS). The web-based editor uses a visual, blocks-based programming environment that allows both novices and experts to create their own location-based AR games. Informal learning venues (including museums, zoos, nature centers, botanical gardens, and others) have created TaleBlazer games to bring location-based AR games to their visitors. Organizations including Global Kids and the Seattle Public Library have facilitated out-of-school TaleBlazer game-making workshops in which youth design, author, and implement their own TaleBlazer games, empowering them to create meaning around themes within their local communities.
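As an illustration of the general mechanism behind location-based AR play, and not TaleBlazer’s actual code or API, the sketch below checks whether a player’s GPS position falls within range of a virtual agent. The agent data, coordinates, and distance threshold are assumptions chosen for the example.

```python
# Illustrative sketch of a location trigger for location-based AR games:
# when the player's GPS position enters a region, a virtual agent becomes
# available. Not TaleBlazer's implementation; all data here are assumptions.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical game content: a virtual "field scientist" stationed at a pond.
AGENTS = [
    {"name": "Field Scientist", "lat": 42.3601, "lon": -71.0942, "radius_m": 25.0},
]

def agents_in_range(player_lat, player_lon):
    """Return the virtual agents the player can currently interact with."""
    return [a for a in AGENTS
            if haversine_m(player_lat, player_lon, a["lat"], a["lon"]) <= a["radius_m"]]

# Example usage: a player standing a few meters from the scientist's location.
print(agents_in_range(42.36012, -71.09421))
```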
The Education Arcade at MIT
Collaborative Learning Environments in VR (CLEVR)
by Eric Klopfer, Professor and Director of the Scheller Teacher Education Program and The Education Arcade at MIT
The Collaborative Learning Environments in Virtual Reality (CLEVR) team aims to develop conceptual understanding and scientific habits of mind through educational games. The Education Arcade is developing Cellverse, a collaborative, cross-platform game where players explore virtual cells, diagnose diseases, and select therapies.
Cellverse creates learning experiences that integrate knowledge, skills, and identity. Players learn about biology by asking “how should cells work, what’s wrong with this cell, and which therapy would address this disease?” They develop spatial skills by navigating the VR cell, and collaboration skills through working with a partner. Working together across different technologies and sources of information, players formulate hypotheses, collect evidence, evaluate and refine, thereby developing identities as scientists.
Key commitments include:
- Representing cells and their components authentically in commonly misrepresented aspects, such as size, shape, number, orientation, position.
- Making learning about science more like doing science and less like memorizing discrete facts – investigative, situated, experimental, and innovative
Get Involved
Hub
News & Events
Salon
Jobs
Submissions
Virtuality Studio & Laboratory:
How We Can Help You
The MIT Center for Advanced Virtuality supports producing innovative work involving technologies of virtuality. Makers and researchers on campus may submit proposals to calls for projects. These works may be studio (expressive) projects or laboratory (research) projects.
Studio projects: primarily focus on innovating new genres, aesthetics, and conventions for using technologies of virtuality.
Lab projects: may focus on researching the social impacts, learning, simulation, and cognition involving new technologies, and may also involve inventing new technologies.
We also welcome projects that span both expressive and research aims.
Who can submit?
Anyone on campus may submit a project involving technologies of virtuality. If the project is aligned with the Center’s aims, we will feature the work on the site and support publicity and media outreach for it.
The Production Process
Accepted proposals are produced in collaboration with the Center for Advanced Virtuality and paired with a production team at the Virtuality Studio or Lab. Typically, the Studio helps to cultivate the project’s expressive vision, professional production, and impact. Typically, the Lab helps to foster productive interdisciplinary research collaborations to further the research.
The Center also produces in-house projects within the Studio and Lab, supporting ongoing initiatives in innovation and research regarding technologies of virtuality, and solicits some projects that align with those initiatives.