BSU HOSTS XR CONFERENCE

IDIA Lab was invited by the Campus Alliance for Advanced Visualization (CAAV) to host the group's annual conference in Fall 2023 at the College of Architecture and Planning. The CAAV is a nonprofit organization that promotes the use of extended reality technologies, such as CAVE-style immersive VR systems, video walls, headsets, and advanced visualization software, in research, learning, and public engagement.

CAAV’s purpose is to act as an information resource for higher education worldwide, focusing on immersive visualization platforms, advanced video screen formats, visualization software, and issues of sustainability and operation for high-tech visualization facilities.

AI Art Installation Beijing

Wishing Well (许愿池), by media artist John Fillwalk and IDIA Lab, is an artificial intelligence (AI) installation invited to the 5th Arts and Science International Exhibition and Symposium (TASIES 2019) at the National Museum of China in Beijing, on Tiananmen Square. The exhibition curated the work of international artists pioneering the use of AI in art-making. The Wishing Well installation creates an interactive environment in which visitors’ spoken wishes shape the substance, meaning, and character of the artwork, mediated through artificial intelligence.


Wishing Well (许愿池) engages both private and public spatial interactions to create a simultaneously intimate and collective social experience. The tone of the wishes aggregated by the system is processed anonymously and delivered into this social sculpture via synthesized whispered voices, spoken in Mandarin. The pool of water in the bowl, or wishing well, is vibrated by a speaker placed underneath, creating a three-dimensional dynamic sound-wave representation shaped by the sound of the actual wish. Virtual colored shapes are emitted based on the tonal analysis of each wish and are aggregated into a historical representation of previous wishes, launching skyward to become part of the larger collection of dreams.
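
The analysis pipeline behind the installation is not published; as a minimal sketch of the mapping it describes, assuming a tone score in [-1, 1] plus pitch and loudness estimates produced by some AI analysis of the recorded wish, each wish could become a colored shape like this:

```python
# Sketch: map the tonal analysis of one spoken wish to a colored shape.
# The tone/pitch/loudness inputs stand in for outputs of a real AI model.
import colorsys
from dataclasses import dataclass

@dataclass
class WishShape:
    rgb: tuple           # display color of the emitted shape
    size: float          # louder wishes yield larger shapes
    launch_speed: float  # higher-pitched wishes rise skyward faster

def shape_for_wish(tone: float, pitch_hz: float, loudness: float) -> WishShape:
    # tone in [-1, 1]: cool hues for negative wishes, warm hues for positive.
    hue = 0.6 - 0.5 * (tone + 1.0) / 2.0   # 0.6 (blue) down to 0.1 (orange)
    rgb = colorsys.hsv_to_rgb(hue, 0.9, 1.0)
    return WishShape(rgb=rgb, size=0.5 + loudness, launch_speed=pitch_hz / 440.0)

print(shape_for_wish(tone=0.8, pitch_hz=220.0, loudness=0.6))
```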

Indiana University VR Exhibit

Indiana University Virtual Museum Exhibition Opens

BSU IDIA Lab recently launched its virtual reality exhibition interpreting the Angel Mounds heritage site at the Indiana University Museum of Archaeology and Anthropology (IUMAA). The gala ribbon-cutting event opened a portion of the collections of the former Glenn Black Laboratory of Archaeology and the Mathers Museum of World Cultures. The diverse collections at IUMAA relate to dynamic societies from around the world and contain over five million objects of cultural heritage and archaeological significance.

The virtual exhibit, entitled City on the River, is a large-scale immersive 360-degree VR CAVE experience that places viewers in the community on the Ohio River over 900 years ago. The simulation provides learning experiences regarding the agriculture, construction techniques, hunting, fishing, gathering, and astronomical alignments of the Mississippian peoples. The Indigenous Peoples built sturdy walls to protect their relatives, and great mounds of earth. These structures were designed to align with the movements of the sky: the sun, stars, and moon. Then, after 300 years of continuous inhabitation, the Indigenous Peoples of this great town decided to leave their fields and mounds behind.

The larger exhibition will tell the story of the People who lived there through material culture from the site, the IDIA immersive virtual reality experience, connections to the stories of contemporary culture-bearers, and art from Indigenous descendant communities.

APOLLO 11 50th ANNIVERSARY

3D simulation of the Apollo 11 landing created for the John F. Kennedy Library’s 50th anniversary of the Apollo 11 mission. Produced for the International MoonBase Alliance by the Institute for Digital Intermedia Arts at Ball State University. https://idialab.org/

About IMA: The International MoonBase Alliance (IMA) is an association composed of leading scientists, educators, and entrepreneurs from space agencies and industries worldwide, formed to advance the development and implementation of an international base on the Moon.

Our priority is to create an action plan that will culminate with the building of MoonBase prototypes on Earth, followed by a sustainable settlement on the Moon. Building on discussions and recommendations from the Lunar Exploration and Analysis Group (LEAG), the European Lunar Symposium, the International Space Development Conference, the NewSpace Symposium, the International Astronautical Congress (IAC), and other worldwide space forums, we intend to formulate an integrated strategy for establishing a proposed multinational lunar base.

Towards this goal, the IMA hosted a summit in October 2017 on Hawaiʻi’s Big Island, inviting leading aerospace, engineering, and space industry professionals to a series of conferences focused on identifying the essential resources and technological capabilities required to enable and sustain lunar-based operations. Through the promotion of international collaborations and public-private partnerships, we can reduce costs, enhance benefits, and accelerate timetables for lunar settlements. https://moonbasealliance.com/

Shackleton Crater Animation

Shackleton Crater is an impact crater at the South Pole of the Moon. The ridges along the crater’s rim are exposed to almost continual sunlight, while the interior is perpetually cast in shadow. The crater’s permanently cold interior traps frozen water shed during impacts on the Moon. Some sites along Shackleton’s rim receive almost constant illumination, where sunlight is nearly always available for conversion into electricity using solar panels, potentially making them attractive sites for future Moon landings. The temperature at these sites is also more favorable than at more equatorial latitudes, as they do not experience daily temperature extremes of 100 °C. The crater is named after Antarctic explorer Ernest Shackleton.

The 3D model of the Moon was created by IDIA Lab using height and color data provided by NASA’s Lunar Reconnaissance Orbiter project. This data was imported into a digital terrain generator to create an accurate, scalable model of the entire lunar surface. This project was created for the International MoonBase Alliance and ImmersiveWorlds, LLC. Produced by IDIA Lab.
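
The lab's terrain pipeline itself is not described in detail; as a minimal sketch of the general technique, assuming an LRO elevation map exported as a grayscale image (the file name and height scale below are illustrative placeholders), a heightfield can be triangulated into a mesh:

```python
# Sketch: convert a grayscale DEM image into a triangulated heightfield.
# "lola_dem.png" and HEIGHT_SCALE_M are illustrative placeholders.
import numpy as np
from PIL import Image

HEIGHT_SCALE_M = 19_900  # assumed elevation range encoded in the image

dem = np.asarray(Image.open("lola_dem.png").convert("L"), dtype=np.float32)
heights = dem / 255.0 * HEIGHT_SCALE_M

rows, cols = heights.shape
xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
vertices = np.stack([xs.ravel(), ys.ravel(), heights.ravel()], axis=1)

# Two triangles per grid cell, indexed into the flattened vertex array.
faces = []
for r in range(rows - 1):
    for c in range(cols - 1):
        i = r * cols + c
        faces.append((i, i + 1, i + cols))
        faces.append((i + 1, i + cols + 1, i + cols))
```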


The Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University explores the intersection of the arts, science, and technology. Scholarly, creative, and pedagogical projects investigate virtual reality, human-computer interfaces (HCI), augmented reality, mobile apps, visualization, and 3D simulation to heighten understanding across disciplines. The Lab was funded through the Eli Lilly Foundation’s Digital Media Initiative as part of a $40 million grant. Our faculty, staff, and students develop projects in partnership with international clients in this innovative studio initiative investigating the forefront of emergent design and learning.

Acknowledgements: Special thanks to the National Aeronautics and Space Administration and the NASA Lunar Reconnaissance Orbiter mission.

Interactive Virtual ChatBots

IDIA Lab’s chatbots provide interactions with virtual 3D animated non-player characters (NPCs) that can be created with customized persona profiles. Users can converse with NPCs via voice or text input; the virtual characters use artificial intelligence to assess the tone of the conversation and reflect simulated responses through facial animation, gesture, and vocal intonation. The lab combines ChatGPT and other AI software into an integrated system within the Unity game engine, creating robust real-time user engagement with an unlimited variety of customized NPC profiles, from historical characters to highly specific technical interactions.
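
The lab's Unity integration is proprietary; as a minimal sketch of the pattern it describes (the persona text and the query_llm stub below are hypothetical stand-ins for a real ChatGPT API call), one turn of an NPC conversation might look like:

```python
# Sketch: one persona-driven NPC turn; the LLM call is a placeholder.
from dataclasses import dataclass

@dataclass
class NpcReply:
    text: str        # line the character speaks
    animation: str   # facial/gesture animation cue for the game engine

PERSONA = (
    "You are Emperor Hadrian in the year 130 AD. "
    "Answer visitors' questions in character."
)

def query_llm(system_prompt: str, user_text: str) -> str:
    """Placeholder for a real chat-model call (e.g., the ChatGPT API)."""
    return "Welcome, traveler. Ask me of Rome and of my villa."

def npc_turn(user_text: str) -> NpcReply:
    reply = query_llm(PERSONA, user_text)
    # Crude tone assessment; a production system would use a proper model.
    warm = any(w in reply.lower() for w in ("welcome", "glad", "honored"))
    return NpcReply(text=reply, animation="smile" if warm else "neutral")

print(npc_turn("Tell me about the Maritime Theatre."))
```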

Virtual Elementary School

The Maryknoll Virtual School, created by IDIA Lab, is a digital twin simulation of a parochial elementary school in Honolulu, HI. The virtual school provides an environment for students and teachers to engage remotely in collaborative real-time lectures and conversations in a multiplayer, avatar-based teaching environment. The application also gives administrators the ability to manage admission, registration, class assignments, and grading through an extensive management portal, and allows for communication via chat, voice, live whiteboard, screen share, in-world video conferencing, and file sharing.


Deep Mapping / VR / GIS

Deep Mapping Middletown: Designing Immersive Experiences for Spatialized Historical Data

James J. Connolly and John Fillwalk

Deep Mapping Middletown seeks to represent in spatial terms the substantial archive produced by research on Muncie, Indiana, USA, the site of Robert and Helen Lynds’ seminal community studies, Middletown (1929) and Middletown in Transition (1937). The success of the Lynds’ work, which is considered to be among the most influential interpretations of twentieth-century American life, inaugurated a tradition of using this small midwestern city as a barometer for assessing broader social and cultural trends in the United States. Researchers, journalists, and filmmakers have repeatedly returned to the city over the past century to document social and cultural change, generating an extraordinarily rich multimedia archive documenting local experience. Most, though not all, of this material is accessible in digital form.

We have begun to build a multi-tiered platform that mobilizes this archive for “deep mapping” the city. By deep mapping, we mean the process of generating user-driven, multimedia depictions of a place. Drawing on postmodern theory, scholars engaged in deep mapping have employed digital technologies to create complex representations of spaces and empower users to explore them from a variety of perspectives. Deep mapping aims to destabilize depictions of place, conveying the multiple meanings that different groups of people have assigned to specific settings and their evolution over time. Our deep mapping platform integrates GIS and immersive 3D simulation technology to provide access to this material and facilitate investigations of spatial-historical experience, including the evolution of racial geographies and the civic and social consequences of deindustrialization.

Part of our aim in this project is to reframe Middletown Studies for scholars, students, and public audiences. While there is an extraordinarily rich collection of Middletown research materials, including extensive published scholarship, hundreds of recorded interviews, thousands of photographs, hundreds of hours of film, survey results, and unpublished research reports, much of the work that produced this archive rests on a problematic premise. The Lynds’ initial investigations neglected Black and other minority experiences, an oversight that many follow-up studies failed to remedy. Only since the 1970s has Middletown research become more inclusive, incorporating the experiences of racial and ethnic minorities that the Lynds and their immediate successors ignored. Recent work has also jettisoned the anthropological gaze in favor of more collaborative approaches that share authority between researchers and community members. A key goal of Deep Mapping Middletown is to elevate this later body of work, using the multivocality inherent in deep mapping to repurpose the Middletown archive as a resource for investigating and empowering the marginalized, not just the mainstream.

In its current, prototyping stage, our project aims to overcome several technical and design challenges. These include:

1. The development and refinement of a Historical Spatial Data Infrastructure (HSDI) that includes geolocated historical data from various sources and in various formats (text, image, audio, and video) ingested into a GIS, as well as tools, features, and procedures to manage and facilitate use of the data. A key part of this work is establishing lat-long coordinates for photographs and audio-visual material as well as for passages extracted from textual sources such as oral history transcripts or ethnographic writing (a minimal data sketch follows this list).

2. Application of manual and computational techniques developed by various scholars for capturing and representing vague or subjective spatial information in both 2D and 3D. The Middletown archive includes a substantial body of purposely obscured evidence in ethnographic writing, as well as spatial data contained in oral histories, and anonymized survey data. While researchers have employed a range of visualization techniques that extend beyond traditional coordinate-based cartographic methods to represent these kinds of data, we are especially interested in approaches that link vague and subjective experiential evidence to coordinate locations.

3. Development of the interface between a Unity-based virtual environment and a GIS-based HSDI that enables users to engage with the spatial data we are assembling.

4. Development of a virtual environment that includes in-world visual cues modeled on game analytics, such as heat maps and dwell times, that visualize spatial data, including affective and sensory experiences, documented in Middletown research.
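
A minimal sketch of what a geolocated record and a simple spatial-temporal query might look like (the field names and query are illustrative assumptions, not the project's actual schema):

```python
# Illustrative record structure for geolocated multimedia archive items.
from dataclasses import dataclass

@dataclass
class HsdiRecord:
    item_id: str
    media_type: str   # "text" | "image" | "audio" | "video"
    lat: float        # WGS84 latitude
    lon: float        # WGS84 longitude
    year: int         # year the item documents
    source: str       # e.g., "oral history transcript"
    excerpt: str      # passage or caption tied to the coordinates

def items_near(records, lat, lon, year_range, radius_deg=0.005):
    """Naive bounding-box query; a real HSDI would use GIS spatial indexing."""
    y0, y1 = year_range
    return [
        r for r in records
        if abs(r.lat - lat) <= radius_deg
        and abs(r.lon - lon) <= radius_deg
        and y0 <= r.year <= y1
    ]
```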

We propose to present a paper documenting our progress to date in meeting these challenges and explaining the potential of 3D immersion for deep mapping. Working with a team of scholarly advisors, librarians, designers, and developers, we have produced an initial GIS that includes geolocated sample data for a single neighborhood drawn from collections of photographs, oral histories, and ethnographies. We have also developed a 3D immersive space using the Unity game engine, employing the ArcGIS SDK for Unity to integrate our GIS and 3D model, giving users access to spatial data within our immersive environment. We are currently creating role-playing experiences that limit access to spaces and information depending on the role adopted by the user and the period selected. These experiences are derived from spatial data in the Middletown archive. We will also follow best practices for heritage visualization as described in the London Charter by making paradata that documents our interpretive choices available to users.

Our presentation will also include a demonstration of our prototyping work to date, including a sample walk-through of our immersive test environment and a review of the HSDI.

Smithsonian VR Exhibition

BSU IDIA Lab developed Virtual Milestones of Flight, an immersive head-mounted display prototype designed for the permanent exhibition at the Smithsonian Air and Space Museum in Washington, DC.

Both the exhibition and the simulation celebrate a selection of the most significant airplanes, rockets, and spacecraft in history. Craft incorporated into the simulation include Charles Lindbergh’s Spirit of St. Louis; the X-15, a hypersonic rocket-powered USAF aircraft; the Wright brothers’ Flyer; Sputnik, the first artificial satellite; and NASA’s Apollo 11 Lunar Lander.

Pumapunku Reconstruction

There’s something fascinating about looking at ancient ruins and imagining what the lives and culture of the people who lived there might have been like. With ever-changing advancements in technology, researchers can get closer than they ever have to solving these mysteries.

Ball State University faculty John Fillwalk, senior director of the Institute for Digital Intermedia Arts (IDIA Lab) and associate professor of electronic art, is merging cutting-edge digital technology with archaeological research. His latest project involves bringing Pumapunku, an ancient temple in the pre-Incan city of Tiwanaku, to life through digital modeling, offering new perspectives on one of the world’s most mysterious archaeological sites. His work was highlighted in a recent episode of Ancient Aliens, produced by the History Channel.

“Ball State’s IDIA Lab was featured on a couple of episodes with History Channel for the Universe series—one on Ancient Rome and another on Stonehenge,” Prof. Fillwalk said. “In both of those projects, we employed 3D technology in various ways to simulate archeological sites that are mostly in ruin and bring them back to life virtually.

“Over the years, we have developed some niche expertise in working with archeologists to examine these ancient monuments. We deploy NASA’s JPL data to create an accurate virtual Sun and Moon based on a specific location and time period. And because we had worked with History Channel previously using those techniques, they approached us with this new project.”

Pumapunku: A Historical Marvel

IDIA Lab digital 3D model of Pumapunku

Experts disagree on the age of Pumapunku. The site is part of the larger Tiwanaku complex and is believed to date back over 1,500 years to A.D. 500. It has puzzled historians and archaeologists because of its advanced engineering and intricate stonework.

Pumapunku, often called the “Gate of the Puma,” is located in the Bolivian highlands near Lake Titicaca and was part of the Tiwanaku civilization, which predates the Inca Empire. The city’s stone blocks, which weigh several tons each—the largest clocks in at 144 tons—feature precise cuts, perfectly straight edges, and intricate geometric patterns that defy the technological capabilities typically attributed to ancient civilizations. Some scholars have speculated that Pumapunku’s architects possessed advanced knowledge of astronomy, mathematics, and engineering.

The mystery of how the ancient Tiwanaku people managed to transport and carve such large stones has captivated researchers for decades. Traditional theories suggest the use of rudimentary tools and methods like wooden rollers or ropes. However, no definitive explanation has been found, leaving plenty of room for modern-day investigation.

IDIA Lab’s Digital Approach

Prof. Fillwalk and his team at IDIA Lab worked for several months to digitally reconstruct Pumapunku using advanced 3D modeling and simulation techniques. This project is not merely a visual recreation but an attempt to understand the cultural, architectural, and environmental context of the ancient city. By doing so, his team hopes to provide deeper insights into the construction methods used by the Tiwanaku civilization and the site’s possible significance in pre-Columbian history.

To create the digital model, Mr. Fillwalk and his team used photogrammetry (a technique that builds models from overlapping photos), laser scanning, measured hand modeling, and other 3D imaging techniques to capture the exact dimensions of the stone structures at Pumapunku. These data sets were then used to build a highly detailed, accurate, three-dimensional representation of the site. But since the location is in ruin and was never even finished before being abandoned, Mr. Fillwalk and other researchers have had to do their best to fill in gaps and imagine what the final product might have been like.

“The city really is a mess,” Mr. Fillwalk said. “It’s probably one of the most challenging sites we’ve ever tried to reconstruct because, first of all, the architecture itself was incomplete—abandoned before totally constructed. We’ve tried to imagine the original architects’ vision. The other problem is that various parties have looted it over the years, looking for gold and such. In one instance, they used dynamite to excavate and just blew up a large section of the foundation. Many stones were taken away or relocated, ending up in local churches and houses.”

Despite these challenges, Mr. Fillwalk is pleased with the results and confident in the final product. Since finishing the episode with the History Channel team, he has collaborated on and completed even more work with Dr. Alexei Vranich, archeologist, leading expert on Tiwanaku, and professor at the University of Warsaw, Poland.

“Sometimes when I work on projects like this, I get a bug about it, so I really wanted to go further with the project,” Fillwalk said. “What we did for the show was a cursory interpretation, but we’ve taken it further since, using Dr. Vranich’s data from his excavations. We are pretty confident in our interpretation. I think it’s as good as we can do at the moment, given the data we have access to.”

Director Fillwalk’s Pumapunku project demonstrates how technology can revolutionize the study of cultural heritage and ancient civilizations. By integrating archaeological expertise with state-of-the-art digital tools, Mr. Fillwalk and the IDIA Lab, centered within the Estopinal College of Architecture and Planning (ECAP), are unraveling the mysteries of Pumapunku while making the ancient site accessible to a broader audience than ever before. Their work bridges the gap between the past and the future, providing new perspectives on one of the most enduring architectural mysteries.

“Resurrecting Puma Punku” (Ancient Aliens S20, E20) aired on Sept. 14, is available on some streaming services, and will be rebroadcast on the History Channel.

September 27, 2024 by Jennifer Criss

Marius Igitangaza Ngabo

Marius Igitangaza Ngabo is a dedicated Master’s student in Landscape Architecture at Ball State University, with a Bachelor’s degree in Architecture from the University of Rwanda (2021). His diverse project experience showcases his creativity, resilience, and commitment to learning. Captivated by landscape architecture during his undergraduate studies, Marius envisions incorporating design and planning expertise to transform natural environments. He is driven by a passion for enhancing people’s experiences in outdoor spaces, focusing on how design shapes feelings, perceptions, and mental health. By connecting natural and human systems, he strives to develop solutions that benefit people, wildlife, and the environment as a whole.

Displaced Resonance v2

IDIA Lab was invited to exhibit their virtual and hybrid artworks at the Third Art and Science International Exhibition at the China Science and Technology Museum in Beijing, China.

http://www.idialabprojects.org/displacedresonance/

Displaced Resonance v2

John Fillwalk

Michael Pounds

IDIA Lab, Ball State University

Interactive installation, mixed media

2012

Displaced Resonance v2 is an interactive installation consisting of sixteen reactive forms networked in a grid of light and sound. Interaction within the sculptural field is based on a participant’s presence and proximity to each sculpture. The Displaced Resonance installation is connected to a mirrored instance of the field in a virtual environment, bridging physical and virtual visitors within a shared hybrid space. Visitors to the virtual space are represented by avatars and, through their proximity, affect the light and sound of each sculpture. Each participant is aware of the other in each space, uniting both instances within a singular hybrid environment.

A computer system using a thermal camera tracks the movement of visitors and responds by controlling the distribution of sound and dynamic RGB data to the LED lights within the sculptural forms. The installation utilizes custom live processing software to transform these sources through the participants’ interaction, displacing the interaction from both its physical and virtual contexts to the processed framework of physical and virtual resonances. The two environments are linked to send and receive active responses from both sides of the installation via messaging, sensors, hardware, and scripting.
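
The installation's control software is custom; a minimal sketch of the proximity-to-light-and-sound mapping, assuming an OSC link to the light/sound host (the addresses, host IP, and scaling below are illustrative), might look like:

```python
# Sketch: map a visitor's distance from a sculpture to LED color and sound
# level, sent over OSC. Addresses and host are illustrative assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)   # assumed light/sound host

def update_sculpture(index: int, distance_m: float) -> None:
    """Closer visitors push the hue toward red and raise the sound level."""
    proximity = max(0.0, 1.0 - min(distance_m / 3.0, 1.0))  # 1.0 when touching
    r = int(255 * proximity)
    b = int(255 * (1.0 - proximity))
    client.send_message(f"/sculpture/{index}/rgb", [r, 0, b])
    client.send_message(f"/sculpture/{index}/gain", proximity)

update_sculpture(index=3, distance_m=1.2)
```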

In prototyping this work, the process included both physical and virtual models to design and build the form and the interaction. The physical prototypes were interpreted in a virtual simulation environment to investigate the spatial interaction of the structure. The interactive functionality was tested through scripting before the form was brought into the camera-tracked physical version. After several virtual iterations, the form was re-interpreted and fabricated.

John Fillwalk

with Michael Pounds, David Rodriguez, Neil Zehr, Chris Harrison, Blake Boucher, Matthew Wolak, and Jesse Allison.

Third Art and Science International Exhibition

China Science and Technology Museum in Beijing

http://www.tasie.org.cn/index.asp

TASIE Press release

http://www.tasie.org.cn/content_e.asp?id=84

Ball State artists create “forest” of light on display in China and the Internet

Muncie, Ind. — Ball State University electronic artists have created a “forest” of light and sound that will be on exhibit in Beijing, China through November, yet also accessible to visitors from Indiana or anywhere else in the world.

That’s possible because “Displaced Resonance,” as the interactive art exhibit is known, has both real-life and virtual components.

The physical portion has been installed in a gallery of the China Science and Technology Museum in Beijing. There, in-person visitors can negotiate a thicket of 16 interactive sculptures spaced 1.5 meters apart that will change colors and emit music as they approach.

A digital replica of the layout, meanwhile, resides on the Internet, accessible through the museum’s website. Online visitors can wander the virtual exhibit using an avatar, and the digital pillars will change colors and produce sounds, just like their physical counterparts.

But that’s not all — the two pieces interact with each other, says John Fillwalk, director of Ball State’s Institute for Digital Intermedia Arts (IDIA) and Hybrid Design Technologies (HDT), which created the work in collaboration with IDIA staff, students, and BSU composer Michael Pounds.

When an online avatar approaches a virtual pillar, the corresponding real-life column also will change colors, and vice versa. In-person and virtual visitors will produce different colors, however, allowing them to track each other through the exhibit.

“It’s what we call hybrid art,” says Fillwalk. “It’s negotiating between the physical world and the virtual. So it’s both sets of realities, and there’s a connection between the two.”

The physical pillars are two meters (more than 6 feet, 6 inches) tall. Each consists of a wooden base containing a sound system, a translucent pillar made of white corrugated plastic, and computer-controlled lighting.

A thermal camera mounted on the museum’s ceiling keeps track of visitors and feeds its data to a computer program that directs the columns to change color and broadcast sounds when someone draws near.

“It’s a sensory forest that you can navigate,” Fillwalk says.

A video screen mounted on a wall overlooking the exhibit allows museum visitors to watch avatars move around the virtual version, while Internet patrons can affect the physical counterpart.

“Displaced Resonance” is the centerpiece of Ball State’s contributions to the Beijing museum’s 3rd Art and Science International Exhibition and Symposium, a month-long celebration of technology and the arts. Ball State was invited to participate because museum curators discovered some of IDIA’s work and liked what they saw, Fillwalk said.

In addition to “Displaced Resonance,” IDIA contributed four other pieces of digital art that museum visitors can view at a kiosk.

Those pieces are:

· “Proxy”, in which visitors create, color and sculpt with floating 3D pixels.

· “Flickr Gettr,” in which visitors can surround themselves with photos from the Flickr web service that correspond to search terms they submit.

· “Confluence,” in which users create virtual sculptures by moving around the screen and leaving a path in their wake.

· “Survey for Beijing,” in which real time weather data from Beijing is dynamically visualized in a virtual environment.

(Note to editors: For more information, contact John Fillwalk, director of the Institute for Digital Intermedia Arts, at 765-285-1045 or jfillwalk@bsu.edu; or Vic Caleca, media relations manager, at 765-285-5948 or vjcaleca@bsu.edu. For more stories, visit the Ball State University News Center at www.bsu.edu/news.)

VBAM: VIRTUAL BROAD ART MUSEUM COMMISSION

Press Release

View the VBAM project

VBAM Introductory Lecture

John Fillwalk and the Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University were commissioned to design and build a virtual museum and artworks for the Eli and Edythe Broad Art Museum in East Lansing, MI. The physical museum was designed by architect Zaha Hadid and opened to the public in Fall 2012. Museum director Michael Rush commissioned Fillwalk and IDIA Lab to envision and create a dynamic program of artworks for the Virtual Broad Art Museum project (VBAM). Fillwalk and his collaborators created an immersive multi-user environment and four commissioned artworks using Unity 3D, experienced entirely within a web browser.

“Engaging visitors with innovators at the leading edge of art and technology, both here at MSU and around the globe, is key to the Broad Art Museum’s mission,” said founding director Michael Rush. “With the Virtual Broad Art Museum, we have an opportunity to embrace the tremendous creative and connective possibilities that exist in the digital world.”

The mixed-world reception took place on Thursday, March 15, 2012, from 6:00 to 8:00 pm EST, simultaneously on the campus of Michigan State University and in the VBAM virtual worlds.

Please visit the VBAM project links below.

http://idialabprojects.org/vbam/proxy.html

http://idialabprojects.org/vbam/flickrgettr.html

http://idialabprojects.org/vbam/survey.html

http://idialabprojects.org/vbam/confluence.html

VBAM Exhibitions by John Fillwalk

Participants in con|FLUENCE can create pathways based on reactions to both social and spatial relationships. There is a virtual response and persistence to interactions, as the routes are drawn and sculpted in the three-dimensional environment – forming nodes that are created via visitors’ proximity.

con|FLUENCE. John Fillwalk. 2012.

Survey for VBAM is an immersive landscape simulation using real time weather data from the location of the Broad Art Museum in East Lansing, Michigan. Representations of surveyor’s tape, flags, light, sky color, time of day, wind speed and even cloud density are driven by the actual physical weather forces – informing the virtual landscape.

Survey. John Fillwalk. 2012.
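
The data feed behind Survey is not documented here; as a minimal sketch of the technique, assuming a generic weather service (the URL and JSON fields below are illustrative, not a real API), live conditions can be mapped to scene parameters:

```python
# Sketch: poll current weather and map it to parameters a virtual
# landscape could consume. The endpoint and JSON fields are assumed.
import requests

def fetch_weather() -> dict:
    resp = requests.get("https://example.com/weather?city=East+Lansing")
    return resp.json()   # assumed: {"wind_mps": 4.2, "cloud_pct": 75, ...}

def scene_params(wx: dict) -> dict:
    return {
        "tape_sway_amp": min(wx["wind_mps"] / 10.0, 1.0),   # tape/flag motion
        "cloud_density": wx["cloud_pct"] / 100.0,            # sky coverage
        "sky_brightness": 1.0 - 0.6 * wx["cloud_pct"] / 100.0,
    }
```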

dis|PLACE is an interactive, gesture-based performance exhibited at the VBAM reception on the 15th of March. This installation links virtual space to physical reality in an exploration of agency and gesture. Employing a participant’s movement, the work can both navigate 3D space and directly control interactions with the digital performers via a Microsoft Kinect™ sensor.

dis|PLACE. John Fillwalk. 2012.

Proxy is a performative, interactive, and site-specific virtual installation in which participants shape the evolution of a sculptural and sonic composition in response to the Museum. Interactions are painted, sculpted, and sonified within this collaborative and evolving environment. The work progressively constructs itself in relation to the Museum, eventually transforming into an integrated structure, reflecting and becoming its environment.

Proxy. John Fillwalk. 2012.

Flickr™ Gettr for VBAM connects the Flickr™ social image service to the virtual Museum environment, allowing visitors to create a dynamic cloud of spatial imagery by entering a search term of their choice, which pulls tagged images from Flickr™ into the Museum environment. The search term is spoken by the software, and tags from the image cloud are revealed, allowing subsequent user transformations of the spatial imagery.

Flickr Gettr. John Fillwalk. 2012.
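
As a sketch of the kind of service call underlying Flickr Gettr (assuming a Flickr API key; the artwork's actual integration is not documented here), Flickr's public REST API returns tagged photos for a search term:

```python
# Sketch: fetch photos matching a search term via Flickr's REST API.
# Requires a real API key; error handling omitted for brevity.
import requests

API_KEY = "YOUR_FLICKR_API_KEY"   # placeholder

def search_photos(term: str, per_page: int = 20) -> list:
    resp = requests.get(
        "https://api.flickr.com/services/rest/",
        params={
            "method": "flickr.photos.search",
            "api_key": API_KEY,
            "text": term,
            "per_page": per_page,
            "format": "json",
            "nojsoncallback": 1,
        },
    )
    photos = resp.json()["photos"]["photo"]
    # Build direct image URLs from the documented Flickr URL pattern.
    return [
        f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg"
        for p in photos
    ]
```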

Credits

Proxy.
John Fillwalk with
Neil Zehr, Michael James Olson
2012

Survey for VBAM
John Fillwalk with
Neil Zehr, Keith Kothman, Charles Estell
2012

Flickr Gettr for VBAM
John Fillwalk with
Jesse Allison, Neil Zehr, Charles Estell
2012

con|FLUENCE
John Fillwalk with
Neil Zehr
2012

dis|PLACE (exhibited at the reception on the 15th of March as a live performance only)
John Fillwalk with
Neil Zehr, Jonathan Strong
2012

Michael Rush, Director of the Broad Museum of Art, MSU
Adam Brown, Professor Intermedia and Electronic Art, MSU
John Fillwalk, Director of the Institute for Digital Intermedia Arts [IDIA Lab], BSU
Neil Zehr, IDIA Lab, BSU
Charles Estell, IDIA Lab, BSU
Michael James Olson, IDIA Lab, BSU
Ina-Marie Henning, IDIA Lab, BSU
Jonathan Strong, IDIA Lab, BSU
Jesse Allison, LSU
Keith Kothman, BSU
Jonathan Ross, BSU
Blake Boucher, BSU
Michael Davidson, MSU
Jayne Goedekke, MSU
Kyle Gomboy, Reaction Grid
Chris Hart, Reaction Grid

Special thank you to Neil Zehr, IDIA Lab, BSU

Mellon Foundation Humanities Virtual World Consortium

BSU’s IDIA Lab was contracted by the Mellon Foundation Humanities Virtual World Consortium to design and develop a major open-source virtual world initiative for digital humanities and cultural heritage projects. The consortium, comprising King’s College London, UCLA, the University of Virginia, and Trinity College Dublin, turned to the expertise of BSU’s IDIA Lab to create this innovative two-year hybrid simulation platform that leverages new modes of teaching and learning in immersive environments.

http://virtualworlds.etc.ucla.edu

Concurrent with the development of the prototype infrastructure, members of the Consortium developed working relationships with Ball State University’s IDIA Lab, an internationally recognized academic leader in the development of virtual worlds, human-computer interaction, visualization, and 3D simulation. Most important for the Consortium, the IDIA Lab is engaged in the development of scholarly, creative, and pedagogical projects that explore the intersection of the arts, science, and technology. The IDIA Lab is not merely a technical development team but an interdisciplinary design studio that integrates art and emergent technologies into every phase of development. After inviting John Fillwalk, the lab’s director, to attend a series of conference calls with the Consortium, a proposal for interface and in-world design was solicited. John Fillwalk is an internationally recognized artist and developer of virtual and hybrid environments. He serves as senior director of the Hybrid Design Technologies initiative [HDT], professor of Art, and director of the Institute for Digital Intermedia Arts [IDIA Lab] at Ball State University. Over the past 25 years, his interactive and virtual artworks have been exhibited internationally in numerous festivals, galleries, and museums including SIGGRAPH, CYNETart, Synthèse, 404 Festival, Dutch Design Week, Boston Cyberarts, Virtual Broad Art Museum, ISEA, ASCI, Vidéoformes, the Indian Institute of Technology, and the Beijing Science and Technology Museum.

The Consortium elected to partner with the IDIA Lab, since it offers design as well as technological expertise and a common interest in the theoretical implications of Virtual World technologies on research and pedagogy.

Development will be split between the two independent teams, with the IDIA Lab, in general, centering its work on the development of the Unity-based platform, including avatar selection, navigation, network controller, user interface system, and back-end network hosting, while Tipodean develops the HTML and KML system and works with members of the Consortium to integrate the four individual projects into the shared platform. The development will not occur in isolation from the rest of the Consortium. The external development teams will offer scheduled monthly training sessions to the internal technical teams of the Consortium. We are employing a development model similar to the one used successfully during the Planning Phase of the HVWC, in which, through a collaborative effort of local staff and third-party developers, we implemented a prototype template and virtual world environment with a subset of features enabled. In addition, we plan to hire a graphic design independent contractor and a game design independent contractor to work with the PIs and our development teams on the look and feel of the Consortium’s web presence as well as the conceptualization of the interface design.

Our Mission

I. Project Summary

The 1990s saw the development of digital technologies supporting the 3D (three dimensional) modeling of cultural heritage objects and environments. For the first time, humanists could digitally model and reconstruct the damaged or vanished monuments of the past. The results were typically 2D renderings or videos (“animations”). The decade of the 2000s saw the enhancement of 3D environments with avatars making it possible for scholars to enter into the 3D world and to use the Internet to interact with the simulated environment while communicating with fellow humanists located anywhere on the planet. Such software platforms are called networked virtual worlds (NVWs). The Humanities Virtual World Consortium (HVWC) will explore how the unique characteristics of  networked virtual worlds can enable and advance humanistic research while working towards creating a sustainable base for mainstreaming the technology in humanities scholarship. Our initial work is based upon a series of related scholarly initiatives that draw upon virtual world technology and which are meant to: a) advance the current state of research on the phenomenology of space and place, b) design visual and aural conventions to evoke the sensorial experience lost to us due to technological and evidentiary constraints, c) test the current capabilities of virtual worlds to explore chronotopic problems, previously inaccessible due to the limitations of prior technology, d) guide future development of humanities-driven virtual worlds, and e) produce works of exemplary digital scholarship, disseminated in formal, peer-reviewed publications, that solve specific research problems in particular disciplines and area studies. Our overall intent is to demonstrate how networked virtual worlds can uniquely enable important kinds of research inquiry, and thus contribute to the transformation of scholarly communication in the relevant subject fields and to the field of digital humanities. With this in mind, our projects have been chosen so that they span multiple disciplines— including Archaeology, Art History, Architectural History, Buddhist Studies, Classics, History, Irish Studies, Literary Studies, Tibetan Studies—and periods from the ancient past to contemporary times. While the projects explore discipline-specific research questions, they share common goals concerning humanities research and scholarly communication in a networked virtual world environment.

II. Expected Outcomes and Benefits of the Project

Project Deliverables: At the conclusion of the project we will release a networked virtual world platform template to be used in the Unity game development engine, the Consortium web site with documentation, four virtual world projects released as Unity3D builds, four draft in-world “articles” embedded within the virtual world project, and four articles submitted for peer review in journals devoted to specific scholarly domains.

The main outcomes of the proposed project will be (i) the consolidation of the Humanities Virtual World Consortium as a robust, fully functional academic organization that can persist and expand in the future; (ii) the development of a shared virtual world platform that is adapted for maximum utility for scholarly projects; (iii) the publication of four significant scholarly projects in that platform that exemplify the way in which virtual worlds offer unique affordances for scholarly research inquiries; and (iv) individual articles based on the virtual world content submitted to journals in our traditionally defined, domain-specific fields. At the conclusion of the grant, a) the organization will be published on the web with bylaws and a means for others to join, b) the new collaboratively maintained virtual world research and publication environment will host four scholarly publication products, and c) the shared development template and accompanying documentation will be made available online to aid others in the development of virtual world content. The primary benefit of this project is that it constitutes a valuable and necessary step towards establishing the organizational and technical basis for a sustainable scholarly exploration of the unique ways in which networked virtual worlds can enable and advance humanistic research. While valuable and compelling individual projects have emerged over the past years, interoperability between them has not been supported. The barriers remain extremely high for individual scholars who wish to use this medium for their own research without major funding. For these reasons, to date these projects have failed to attain a wide scholarly audience. We propose to establish an organization, technology, and specific scholarly publication apparatus that would address these deficiencies in the subsequent years and beyond.

The broader, long-term, and more far-reaching benefits are that the initiative will establish a basis for the scholarly community to engage in a serious attempt at creating a sustainable, shared environment for scholarly research and communication to be done in a networked virtual world environment. These publications will show the viability and importance of such research, raise its profile in the scholarly community, and demonstrate in a compelling, virtually tangible, and accessible way the benefits of using a shared platform. Such publications will, it is envisioned, lead to an expansion of the Consortium and finally the development of a robust platform in which scholars can easily participate without the need for major grants to support further development of one-off technologies. Scholarly projects from disparate disciplines and geographical regions would be interoperable, and scholarly use would be widespread due to the consistency of interface and technology.

Mesa Verde AR Visitor Interpretation App

Ball State University’s IDIA Lab has been contracted by the US Department of the Interior to develop virtual visitor experiences for Mesa Verde National Park. The park preserves and interprets the archeological heritage of the Ancestral Pueblo people who made it their home for over 700 years, from 600 to 1300 CE. Today, the park protects nearly 5,000 known archeological sites, including 600 cliff dwellings.

The application will bring to life well-known museum dioramas and offer locative walking tours of the park. Augmented reality and interactive 3D experiences will help tell the stories of the UNESCO World Heritage site, including the transformation of static elements of the historic dioramas with animated figures and interactive elements. The application will be available on both Google Play and the App Store in 2021.

Khan Academy

The Khan Academy featured HDT / IDIA Lab and the Virtual World Heritage Laboratory’s Digital Hadrian’s Villa Project. Hosted by Beth Harris, Dean of Art and History at Khan Academy, a walkthrough with archeologist and HDT research fellow Dr. Bernard Frischer showcases the project. A virtual tour of Hadrian’s Villa using a 3D digital model of the villa was created under the direction of Dr. Bernard Frischer in conjunction with the IDIA Lab. The ruins of Hadrian’s Villa, in the town of Tivoli near Rome, are spread over an area of approximately 250 acres. Many of the structures were designed by the Emperor Hadrian, who ruled from 117 until his death in 138 C.E. This virtual rendering is based on current archeological research and has been created in consultation with art historians, archaeologists, and museum curators with expertise in this area. Please note, a few features are necessarily assumptions based on the best available evidence. Dr. Bernard Frischer and Dr. Beth Harris are the speakers in this walk-through, which was created by Beth Harris and Steven Zucker of Smarthistory.

Meet3D

Meet3D is a virtual collaboration space that brings people together to share, learn, and work – wherever they are. Private or public auditoriums, classrooms, conference halls or offices can be joined by any group or audience on demand. Meet3D was developed by the Institute for Digital Intermedia Arts at Ball State University to bring remote participants together in a common space and facilitate a sense of physical presence. The platform is available in both browser-based and executable versions.

Interactive Art at the College of Architecture and Planning

The IDIA Lab is designing a series of extended reality artworks that will be installed in the atrium of the College of Architecture and Planning. The installation allows passers-by to interact using various modes including motion, voice, and gesture to shape the compositions. The works employ sensors, sound art, artificial intelligence, and other programming to create dynamic experiences.

NSF Virtual Teaching Simulator

The College of Architecture and Planning’s IDIA Lab creates technological innovation in the arts, sciences, and humanities. The Lab engages students, staff, scholars, and clients in collaborative projects including 3D, virtual reality, and mobile applications. This newsletter includes updates on current IDIA Lab projects, including an augmented reality visitor experience application for Mesa Verde National Park, a publication on our Roman archeology celestial alignment research, a multiuser simulation for understanding aspects of wellness in Amazon’s Lumberyard game engine, and a $3 million NSF grant in which IDIA Lab was selected by the Massachusetts Institute of Technology, Educational Testing Service, and Indiana University to create simulations for elementary math and science classrooms.

While the COVID-19 pandemic has had a major effect on PK-12 classrooms, it has also affected opportunities for preservice teachers to gain practical teaching experience. Responding to this problem, School of Education faculty Meredith Park Rogers and Adam Maltese, along with Dionne Cross Francis of the University of North Carolina at Chapel Hill, an affiliate faculty member of IU, have begun work on a project with collaborators from ETS, Towson University and Massachusetts Institute of Technology, where they will design and study an online suite of practice-based teaching activities intended to support mathematics and science preservice teachers in honing their skills for facilitating argumentation-focused discussions.

This project, funded by a grant for over $3 million from the National Science Foundation (Grant 2037983), will run from 2020-2023. ETS serves as the lead organization for the project, titled Online Practice Suite (OPS): Practice Spaces, Simulations and Virtual Reality Environments for Preservice Teachers to Learn to Facilitate Argumentation Discussions in Mathematics and Science.

The OPS consists of a coordinated set of online practice-based teaching activities that include game-based practice spaces, small-group avatar-based simulations, and a virtual reality classroom. The activities are designed to be complemented with targeted feedback and support from teacher educators.

With the COVID-19 pandemic increasingly pushing K-12 schools and universities across the nation to adopt online approaches, there is an immediate need for finding effective methods for preservice teachers to practice teaching even if traditional school-based teaching experiences are less available.

Even without pandemic-related restrictions, learning how to teach effectively demands that preservice teachers have robust, authentic and consistent opportunities to engage in the work of teaching—ideally across different contexts with diverse student populations and for varied purposes.

Over the next three years, the OPS research team will work with mathematics and science teacher educators to design, test, and refine the full suite of online practice-based teaching activities. Findings will be used to understand the mechanisms that support preservice teachers’ learning within and across the OPS activities and to document an emergent set of best practices for supporting preservice teachers’ improvement in this ambitious teaching practice over time. The OPS will be accompanied by support materials to help teacher educators use and integrate the activities into teacher education courses, including online and face-to-face learning contexts. Both elementary and middle/secondary school mathematics and science scenarios will be developed for the OPS.

The IU faculty will work with the Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University to develop an immersive virtual reality environment that will situate teachers in learning to navigate a whole-class discussion with students. This aspect of the OPS is referred to as the Virtual Teacher Simulator (VTS). In these simulations, preservice teachers will move beyond the one-on-one and small-group interactions of the other two OPS activities and begin to put their skills together to teach a full classroom of virtual student avatars. Each preservice teacher will have the opportunity to move around the classroom and engage with the virtual student avatars just as though they were in a real classroom. Additionally, they will receive feedback from a mathematics or science teacher educator (i.e., coach) in order to continue to improve their practice.

As Park Rogers explains, “The VTS context especially affords preservice teachers multiple opportunities to rehearse their teaching within a short time span, without disrupting learning in a real classroom, all while receiving immediate and expert feedback from specialized mathematics and science teacher educators. Furthermore, incorporating real-time feedback and opportunities for multiple rehearsals, all within an immersive whole class context, will allow preservice teachers to take risks in their practice and try out novel pedagogical moves that they may not feel secure enough, or have opportunities, to try in an actual classroom.”

“We must find a way to continue to prepare high-quality math and science teachers through this pandemic and beyond,” says Park Rogers. “Through this collaboration, we hope to offer teacher educators and their preservice student teachers a solution to the current dilemma the global pandemic of COVID-19 has created, and also alternative or supplementary methods for targeting core teaching practices even when we are able to safely return to classrooms to work with real children.”

This material is based upon work supported by the National Science Foundation under Grant 2037983. The opinions expressed are those of the author and do not necessarily represent views of the National Science Foundation.

By Catherine Winkler, Indiana University.

NSF FUNDED VIRTUAL SIMULATION OF HADRIAN’S VILLA

IDIA Lab has designed a virtual simulation of the villa of the Roman Emperor Hadrian, a UNESCO World Heritage site located outside Rome in Tivoli, Italy. This project was produced in collaboration with the Virtual World Heritage Laboratory (VWHL) at the University of Virginia (UVA), directed by Dr. Bernard Frischer, and funded by the National Science Foundation. This large-scale recreation virtually interprets the entire villa complex in consultation with the world’s foremost villa scholars and educators. The project has been authored in the Unity game engine as a live 3D multi-user online learning environment that allows students and visitors to immerse themselves in all aspects of the simulated villa.

The project will not only accurately recreate the villa buildings but also include a complete Roman avatar system, non-player characters with artificial intelligence, furniture, indigenous vegetation, a dynamic atmospheric system, and a sophisticated user interface. The interface will provide learning, navigation, reporting, and assessment opportunities, and will also allow users to change the position of the sun to any date in 130 AD using data from the Horizons system at NASA JPL, testing theses of astronomical alignments of architectural features during solstices and equinoxes. UVA students will be briefed on the culture and history of the villa and will learn the virtual environment for five weeks prior to immersing themselves within it. The avatar system will allow them to enter the world choosing class and gender, already aware of the customs and behavior of the Roman aristocrat, soldier, slave, or politician. This project will be delivered to VWHL at UVA in early March.
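
The production pipeline drives the Unity sun from JPL Horizons data; as a minimal sketch of the kind of query involved (parameter values follow the public Horizons API documentation, but the site coordinates approximating Tivoli and the chosen date are assumptions), one can request apparent solar azimuth and elevation for a date in 130 AD:

```python
# Sketch: query NASA JPL's Horizons API for the Sun's apparent azimuth and
# elevation as seen from a site near Hadrian's Villa on a date in 130 AD.
import requests

params = {
    "format": "text",
    "COMMAND": "'10'",                    # target body: the Sun
    "EPHEM_TYPE": "'OBSERVER'",
    "CENTER": "'coord@399'",              # topocentric site on Earth
    "SITE_COORD": "'12.77,41.94,0.09'",   # E lon (deg), lat (deg), alt (km)
    "START_TIME": "'0130-06-21'",
    "STOP_TIME": "'0130-06-22'",
    "STEP_SIZE": "'10m'",
    "QUANTITIES": "'4'",                  # apparent azimuth and elevation
}
resp = requests.get("https://ssd.jpl.nasa.gov/api/horizons.api", params=params)
print(resp.text)                          # ephemeris table between $$SOE/$$EOE
```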

The Digital Hadrian’s Villa Project: Virtual World Technology as an Aid to Finding Alignments between Built and Celestial Features

Bernard Frischer, Director, Virtual World Heritage Laboratory, University of Virginia

John Fillwalk, Director, IDIA Lab, Ball State University

Hadrian’s Villa is the best known and best preserved of the imperial villas built in the hinterland of Rome by emperors such as Nero, Domitian, and Trajan during the first and second centuries CE. A World Heritage site, Hadrian’s Villa covers at least 120 hectares and consists of ca. 30 major building complexes. Hadrian built this government retreat about 20 miles east of Rome between 117, when he became emperor, and 138 CE, the year he died. The site has been explored since the 15th century and in recent decades has been the object of intense study, excavation, and conservation (for a survey of recent work, see Mari 2010).

From 2006 to 2011, with the generous support of the National Science Foundation[1] and a private sponsor, the Virtual World Heritage Laboratory created a 3D restoration model of the entire site authored in 3DS Max. From January to April 2012, Ball State University’s Institute for Digital Intermedia Arts (IDIA Lab) converted the 3D model to Unity 3D, a virtual world (VW) platform, so that it could be explored interactively, populated by avatars of members of the imperial court, and published on the Internet along with a related 2D website that presents the documentation undergirding the 3D model.

The 3D restoration model and related VW were made in close collaboration with many of the scholars who have written the most recent studies on the villa.[2] Our goal was to ensure that all the main elements—from terrain, gardens, and buildings to furnishings and avatars—were evidence-based. Once finished, the model was used in two research projects.

The first project was an NSF-sponsored study of the usefulness of VW technology in archaeological education and research. We used the virtual villa in undergraduate classes at Xavier University and the University of Virginia to investigate the theses of two recent studies by project advisors Michael Ytterberg and Federica Chiappetta about how this enormous built space was used by six different groups of ancient Romans, ranging from the Emperor and Empress to ordinary citizens and slaves (Ytterberg 2005; Chiappetta 2008). Avatars representing these groups have been created and are being operated by undergraduate students as a Problem-Based Learning (PBL) experience. They are observed by subject experts, who are using the data generated to test and, if necessary, refine the initial theses about how circulation through the villa was handled. The results are still being evaluated. Preliminary indications are that the data show that the combination of VW technology used in a PBL educational context is very effective in taking advantage of the known connection between the hippocampus and long-term learning, especially when the information to be mastered is spatial (Kandel 2007).

The second project involved use of the VW for some new archaeoastronomical studies. Most of our advisors’ publications, like the older work by archaeologists that preceded them, have concentrated on archaeological documentation, restoration, and formal and functional analysis. The latest research by advisor De Franceschini and her collaborator Veneziano (2011) combined formal and functional analysis: it considered the alignment of certain important parts of the villa in relation to the sun’s apparent path through the sky on significant dates such as the solstices. In their recent book they showed how two features of the villa are aligned with the solar solstices: the Temple of Apollo in the Accademia and the Roccabruna. We used the VW to extend their research to other areas of the villa, taking advantage of 3D technology to restore the sun to the right place in the sky and to restore the damaged architecture of the villa, as De Franceschini and Veneziano had independently suggested be done before they learned about our digital model of the villa.

The work of De Franceschini and Veneziano is innovative. Archaeoastronomy has become an accepted field of study in recent decades, and a considerable amount of work has been done in Old and New World archaeology. In Roman archaeology, however, this approach is still rarely encountered. Significantly, one of the few compelling studies concerns the most famous Hadrianic building: the Pantheon in Rome. Hannah and Magli 2009 and Hannah 2011 have shown a number of solar alignments in the building, of which the most notable are the sun’s illumination of the entrance doorway at noon on April 21, and the view from the middle of the Pantheon’s plaza of the setting sun silhouetting the statue of Hadrian as Sun god on a four-horse chariot atop the Mausoleum of Hadrian on the summer solstice. Like the summer solstice, April 21 is a significant date: on it occurred the annual festival in Rome known as the Parilia (renamed the Romaia by Hadrian),[3] which celebrated the founding of Rome.

De Franceschini and Veneziano pursued an observation of Mangurian and Ray (2008) to document an impressive example of solar alignment at Hadrian’s Villa involving the tower known as Roccabruna at the western end of the villa. Originally a tower-like structure topped by a round temple, what remains today is the well-preserved, massive lower floor. The main entrance, located to the right on the northwestern side, gives access to a large circular hall covered by a dome. The dome is punctuated by an odd feature: five conduits that are wider on the outside than on the inside (figure 1).

What is the function of these unusual conduits? They have no known parallel in Roman architecture. After asking themselves this same question, on June 21st, 1988, the day of the summer solstice, the American architects Robert Mangurian and Mary Ann Ray went to Roccabruna at sunset and discovered the extraordinary light phenomena that occur there. At sunset the Sun enters through the main door, illuminating the niche on the opposite side – something that happens during most of the summer days. But only on the days of the summer solstice does the Sun also penetrate the conduit located above that door: its rays emerge from the slot inside the dome, projecting a rectangular blade of light on the opposite side of the dome. In June 2009, De Franceschini verified the findings of Mangurian and Ray. However, the apparent path of the Sun through the sky changes slightly each year, so that in the nearly 1,880 years separating us from Hadrian, the precise effect of the alignment has been lost. As they noted, only a computer simulation can recreate the original experience of being in the lower sanctuary at Roccabruna at sunset on the summer solstice during the reign of Hadrian.

Once we had our 3D model of the site, we were able to obtain from NASA’s Horizons system[4] the correct azimuthal data for the year AD 130 and put the sun into the sky at sunset on the summer solstice. Following the lead of De Franceschini, who in the meantime had become a consultant to our project, we put into the niche one of the four statues of the Egyptian sky goddess Isis that were found at the villa. De Franceschini chose Isis because, first, there is no question that a statue once stood in this niche, so something had to be put there; second, the two flanking niches held candelabra whose surviving bases are decorated with Isiac iconography; and third, Isis’ festival in Rome fell on the summer solstice. So we scanned and digitally restored one of the several statues of Isis from the villa and put it into the central niche. Finally, for the dome, which we know from surviving paint was blue and therefore had the famous “dome of heaven” motif (Lehmann 1945), we followed De Franceschini in restoring a zodiac set up in such a way that the sign of Gemini is over the statue niche, since the last day of Gemini is the summer solstice. Our zodiac is adapted from the great Sun God mosaic in the Rheinisches Landesmuseum in Bonn, which kindly gave us permission to use it.

As can be seen in figure 2, when we restored the sun in the right position in the sky dome for sunset on the summer solstice (June 21) of 130 CE in our 3DS Max model of Roccabruna, the sunlight coming through the main doorway illuminated the statue of Isis in the statue niche, and the light entering through the conduit lit up the sign of Gemini painted on the cupola. So we were able to confirm the Mangurian-Ray thesis.

The approach we have taken in our Roccabruna project is deductive: Mangurian and Ray noted the strange feature of the conduits punctuating the cupola of Roccabruna; they hypothesized a solar alignment. De Franceschini and Veneziano agreed and for various reasons we don’t need to go into today, they put a statue of Isis into the statue niche. We set up the conditions in which these hypotheses could be tested and were able to verify them.

But surely, if there is one such alignment at the villa of the same emperor who was responsible for the Pantheon, there may be others. But the villa is very big—covering over 100 hectares—and has 30 major building complexes, most larger than Roccabruna. Moreover, such alignments could just as easily involve astrological features such as the Moon and the planets. Faced with this level of complexity, the best methodological way forward in searching for new alignments is clearly inductive and empirical. This is one reason why we asked the Institute for Digital Intermedia Arts (IDIA Lab) of Ball State University to create a multi-user virtual world based in Unity 3D from our 3DS Max model.

The project of virtually interpreting a simulation on the scope and scale of Hadrian’s Villa was a daunting one – engaging layers of scholarly, technical and pedagogical challenges. The technical challenges were many – foremost, leveraging the Unity 3D game engine to become an effective multi-user, avatar-based virtual world. An important requirement was an environment that was straightforward and accessible via standard web browsers on both Mac and Windows, and we selected Unity 3D as the starting point for developing the platform. We required specific back-end administration tools to handle the accounts and server-side aspects of the project; for this we relied on SmartFoxServer, as it integrates with Unity 3D quite well. Our team took an approach that bridged and integrated disparate technologies, creating a robust virtual world platform to immersively augment both instructional and PBL processes. VW features available to the learning community included text-based communication, a live map showing current visitor positions, map-based teleportation, a managed voice channel, user-selected avatar gestures, a list of online users, paradata, photographs of the extant site, plan views, and integrated web links.
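As a rough sketch of the client side of such a platform (not the project’s actual code), a Unity script connecting to a SmartFoxServer 2X zone typically looks like the following; the host and zone names here are placeholders.

```csharp
// Minimal Unity client for a SmartFoxServer 2X zone (illustrative only).
using Sfs2X;
using Sfs2X.Core;
using Sfs2X.Requests;
using Sfs2X.Util;
using UnityEngine;

public class VillaConnector : MonoBehaviour
{
    private SmartFox sfs;

    void Start()
    {
        sfs = new SmartFox();
        sfs.AddEventListener(SFSEvent.CONNECTION, OnConnection);
        sfs.AddEventListener(SFSEvent.LOGIN, OnLogin);

        ConfigData cfg = new ConfigData();
        cfg.Host = "virtualworld.example.edu"; // placeholder host
        cfg.Port = 9933;                       // SFS2X default TCP port
        cfg.Zone = "HadriansVilla";            // placeholder zone name
        sfs.Connect(cfg);
    }

    // SFS2X queues network events; pump them from Unity's main thread.
    void Update()
    {
        if (sfs != null) sfs.ProcessEvents();
    }

    void OnConnection(BaseEvent evt)
    {
        if ((bool)evt.Params["success"])
            sfs.Send(new LoginRequest("guest")); // guest login for the demo
    }

    void OnLogin(BaseEvent evt)
    {
        Debug.Log("Logged in; ready to join a room and spawn an avatar.");
    }
}
```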

Key to the project was a varied system of avatars representing the imperial court, freemen, senators, scholars, soldiers, and slaves to the emperor. The avatar system served several important functions, testing recent scholarly interpretations of circulation throughout the villa and of the use of various spaces for typical court activities – meals, imperial audiences, bathing, worship, etc. Upon entering the simulation, the choice of avatar determined one’s social standing within the role-play of the world.

A gesture system was created via motion capture, providing each user with a unique set of actions and gestural responses for engaging in social interactions – including greetings, bowing and gestures specific to rank and class. Communication was also a critical element in the modes of problem-based learning engaged by the participants in the simulation. Specific technologies provided varied abilities such as public chat, private instant messaging and live multi-user voice channels.

A companion website was co-developed and integrated into the VW environment, providing learners with visual assets such as photographs and panoramas of the current site, site plans, elevations, and video interviews with villa scholars. We also developed three-dimensional turntables of the interpreted and reconstructed models, overview information on each of the major villa features, a bibliography and an expansive database of art attributed to the villa site. This information can be accessed by learners directly from within the virtual world. The development team integrated the notion of paradata, introduced by the London Charter – making instantly transparent the scholarship behind all underlying elements of the 3D model (from terrain to buildings, furnishings, costumes, and human behavior).

In support of the new research theme on celestial alignments by consultants De Franceschini and Veneziano, a major goal for the project was to develop an accurate simulation of the position of the sun. The solar tracking system, or virtual heliodon, that we created in response to this research was envisioned as a bridge between the virtual environment and coordinates from an external database calculating solar positions. After investigating existing tools, we decided to employ the Horizons database, created by NASA’s Jet Propulsion Laboratory as an online solar system data computation service tracking celestial bodies in ephemerides from 9999 BCE to 9999 CE. In implementing solar tracking for the villa project, in instances where we wanted to investigate potentially significant solar alignments, we entered the latitude, longitude and altitude of specific buildings at the Tivoli site to poll the Horizons data for the year 130 CE. The user was able to change the date and time of day, and to quickly play the sun forward from specific moments, via the user interface. The system was correlated to both the Julian and Gregorian calendars and contained presets for the vernal and autumnal equinoxes as well as the summer and winter solstices.
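As an illustration of how such a heliodon can be wired up (a sketch under assumptions, not the project’s actual implementation), the script below polls the Horizons service for the Sun’s apparent azimuth and elevation and orients a Unity directional light accordingly. It uses Horizons’ present-day public REST endpoint, which postdates the project itself; the site coordinates, dates, and parsed values are placeholders.

```csharp
// Illustrative virtual-heliodon sketch: query JPL Horizons for apparent
// solar azimuth/elevation, then orient Unity's directional "sun" light.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class VirtualHeliodon : MonoBehaviour
{
    public Light sun; // the scene's directional light

    IEnumerator Start()
    {
        // COMMAND='10' selects the Sun; QUANTITIES='4' requests apparent
        // azimuth & elevation. SITE_COORD is E-longitude, latitude,
        // altitude (km) near Tivoli (approximate placeholder values).
        string url = "https://ssd.jpl.nasa.gov/api/horizons.api?format=text" +
            "&COMMAND='10'&CENTER='coord@399'&COORD_TYPE='GEODETIC'" +
            "&SITE_COORD='12.775,41.941,0.09'" +
            "&START_TIME='0130-06-21 18:00'&STOP_TIME='0130-06-21 20:00'" +
            "&STEP_SIZE='5m'&QUANTITIES='4'";

        using (UnityWebRequest req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success) yield break;

            // Parsing the returned ephemeris table is omitted here; assume
            // it yields one azimuth/elevation pair per time step.
            float azimuthDeg = 297.5f, elevationDeg = 2.0f; // example values only
            SetSun(azimuthDeg, elevationDeg);
        }
    }

    // Azimuth is measured clockwise from north (+Z). The light must shine
    // *from* the sun toward the scene, hence the 180-degree yaw offset.
    void SetSun(float azimuthDeg, float elevationDeg)
    {
        sun.transform.rotation =
            Quaternion.Euler(elevationDeg, azimuthDeg + 180f, 0f);
    }
}
```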

These tools allowed for the rapid discovery of potential alignments that might bear further investigation. The solar feature allows one to proceed empirically, in effect turning the clock back to 130 CE and running experiments in which the days and hours of the year are sped up by orders of magnitude, so that one can in a very short time find candidate alignments not yet hypothesized by scholars working in the traditional way of Mangurian and Ray.

As developers, our goal was to create the solar tool and let students and scholars use it to undertake their own empirical research. Our team did not intend to engage in this research ourselves, yet in the process of working within the environment daily we quickly began to notice curious solar phenomena. In a bit of empirical study of the very first component of the site we installed in the simulation – the Antinoeion, or newly discovered Temple of the Divine Antinous – we noticed an alignment of potential interest. The most likely alignment seemed at first glance to be along the main axis running from the entrance, through the obelisk in the central plaza, to the statue niche at the end of the axis. We ran the days and hours of the year and found that the sun and the shadow of the obelisk align at sunrise on July 20. We consulted our expert on the Egyptian calendar in the Roman period, Professor Christian Leitz of the University of Tuebingen, and he confirmed that this date has religious significance. It is, in fact, the date of the Egyptian New Year, as the Romans of Hadrian’s age clearly knew (cf. the Roman writer Censorinus, who states that the Egyptian New Year’s Day fell on July 20 in the Julian calendar in 139 CE, coinciding with a heliacal rising of Sirius in Egypt).
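The sweep itself is conceptually simple. A hedged sketch of the kind of search described above might look like this, where GetSunriseAzimuth is a hypothetical helper standing in for a table of Horizons ephemeris data for 130 CE:

```csharp
// Empirical alignment sweep (illustrative): step through each day of the
// year, compare the sunrise azimuth to a monument axis, and flag candidates.
using System;
using System.Collections.Generic;

public static class AlignmentSweep
{
    public static List<int> FindCandidates(
        float axisAzimuthDeg,                 // surveyed azimuth of the feature's axis
        Func<int, float> GetSunriseAzimuth,   // hypothetical: day-of-year -> azimuth
        float toleranceDeg = 0.5f)
    {
        var hits = new List<int>();
        for (int day = 1; day <= 365; day++)
        {
            float delta = Math.Abs(GetSunriseAzimuth(day) - axisAzimuthDeg);
            if (delta <= toleranceDeg)
                hits.Add(day); // candidate alignment worth closer study
        }
        return hits;
    }
}
```

A hit on a culturally significant date – such as July 20 for the Antinoeion – is what elevates a candidate into a hypothesis worth expert review.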

In the process of developing and subsequently utilizing the simulation tools we created for archaeoastronomical research, we have concluded that virtual world technologies can indeed take the inquiry into significant built-celestial alignments to a new level of insight.

Bibliography

Chiappetta, F. 2008. I percorsi antichi di Villa Adriana (Rome).

De Franceschini, M. and G. Veneziano. 2011. Villa Adriana. Architettura celeste. I segreti dei solstizi (Rome).

Hannah, R. 2008. Time in Antiquity (London).

Hannah, R. 2011. “The Role of the Sun in the Pantheon’s Design and Meaning,” Numen 58: 486-513.

Kandel, E. 2007. In Search of Memory: The Emergence of a New Science of Mind (W. W. Norton, New York). Kindle edition.

Lehmann, K. 1945. “The Dome of Heaven,” Art Bulletin 27: 1-27.

Lugli, G. 1940. “La Roccabruna di Villa Adriana,” Palladio 4: 257-274.

Mangurian, R. and M.A. Ray. 2008. “Re-drawing Hadrian’s Villa,” Yale Architectural Journal, 113-116.

Mari, Z. 2010. “Villa Adriana. Recenti scoperte e stato della ricerca,” Ephemeris Napocensis 20: 7-37.

Ytterberg, M. 2005. “The Perambulations of Hadrian. A Walk through Hadrian’s Villa,” Ph.D. dissertation, University of Pennsylvania.


[1] NSF grant # IIS-1018512.

[2] For a list of scholarly consultants, see http://idialabprojects.org/hvtest/mission.php.

[3] Athenaeus 8.361e-f.

[4] http://ssd.jpl.nasa.gov/?horizons.

Final Wisdom I: Interactive Haptic Poetry Installation at SIGGRAPH 2010

Final Wisdom I is an interactive installation engaging viewers in a sensory exploration of temporal and spatialized poetry. Participants manipulate imagery, sound and language through gesture, touch and proximity. The work is engaged through a framework of cameras and sensors that react to heat, position, and capacitance – presenting a shifting environment of reactive media and haptics. Final Wisdom I is the work of artists Hans Breder and John Fillwalk, with poetry by critic Donald Kuspit and music by composers Carlos Cuellar Brown and Jesse Allison. The project was produced through the Institute for Digital Intermedia Arts at Ball State University in collaboration with the Institute for Digital Fabrication. Final Wisdom I was exhibited in the art gallery of SIGGRAPH 2010 in Los Angeles, CA. Special thanks to IDF/CAP students Matthew Wolak, Christopher Baile and Claire Matucheski, and Assistant Professor of Architecture Joshua Vermillion. http://www.i-m-a-d-e.org/

As an intermedia artist, John Fillwalk actively investigates emerging technologies that inform his work in a variety of media, including video installation, virtual art, and interactive forms. His perspective is rooted in the traditions of painting, cinematography, and sculpture, with a particular interest in spatialized works that can immerse and engage a viewer within an experience. Fillwalk positions his work to act as both a threshold and a mediator between tangible and implied space, creating a conduit for the transformative extension of experience, and pursues the realization of forms, sounds and images that afford interaction at its most fundamental level. In working with technology, he values the synergy of collaboration and regularly works with other artists and scientists on projects that could not be realized otherwise. Electronic media extend the range of traditional processes by establishing a palette of time, motion, interactivity, and extensions of presence. The ephemeral qualities of electronic and intermedia works are, by their very nature, inherently transformative, and the significance of the tangible becomes fleeting, shifting emphasis away from the object and toward the experience.

John Fillwalk is Director of the Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University, an interdisciplinary and collaborative hybrid studio. An intermedia artist and Associate Professor of Electronic Art, Fillwalk investigates media in video installation, hybrid reality and interactive forms. He received his MFA from the University of Iowa in Intermedia and Video Art, and has since received numerous grants, awards, commissions and fellowships.

Donald Kuspit is an art critic, author and professor of art history and philosophy at the State University of New York at Stony Brook and lends his editorial expertise to several journals, including Art Criticism, Artforum, New Art Examiner, Sculpture and Centennial Review. Hans Breder was born in Herford, Germany, and trained as a painter in Hamburg. Attracted to the University of Iowa’s School of Art and Art History in 1966, Breder established its Intermedia Program. Carlos Cuellar Brown, a.k.a. ccbrown, is a composer, instrumentalist and music producer. Formally trained as a classical pianist, Cuellar specialized in experimental music and intermedia with the late American maverick composer Kenneth Gaburo. Jesse Allison is the Virtual Worlds Research Specialist at IDIA and Assistant Professor of Music Technology at Ball State University. He is also President of Hardware Engineering at Electrotap, LLC, an innovative human-computer interface firm.

Article published in Leonardo, the journal of the International Society for the Arts, Sciences and Technology, by The MIT Press. http://muse.jhu.edu/journals/leonardo/summary/v043/43.4.fillwalk.html

Engaging Technology II: Art + Science

The interplay of art and science—a concept at least as old as Leonardo da Vinci—was the focus of “Engaging Technology II,” an exhibition at Ball State University’s David Owsley Museum of Art that ran September 28 through December 22, 2017. Museum patrons experienced a selection of work from internationally renowned artists whose explorations of the topic include installations, code art (artwork generated by computer programming), art/science, augmented reality and human-computer interaction.

The exhibition was curated by Ball State’s John Fillwalk, director of the Institute for Digital Intermedia Arts (IDIA Lab) in the College of Architecture and Planning, and is both a sequel and a new concept, according to Robert La France, the museum’s director. It builds on the success of a prior exhibition curated by Fillwalk that examined the emergence of the genre of intermedia art, with a particular focus on the intersection of technology and artmaking. The new show shifts the curatorial emphasis to STEAM – science, technology, engineering, arts and mathematics.

This exhibition continues to position Ball State as an international center of STEAM innovation. A goal of Engaging Technology II was to invite artists exploring the edges of art and science into discourse within our community—advancing our University’s conversation regarding interdisciplinary collaboration and innovation. This collection of artists examines various aspects of the physical and life sciences, from code and biology to chemistry and physics.

Among those individuals whose work is featured in the exhibition is Adam Brown, an associate professor at Michigan State University. His live biochemical installation, The Great Work of the Metal Lover is a work that sits at the intersection of art, science and alchemy. “It uses microbiology as a technique to solve the mystery of the philosopher’s stone,” said Brown, who described the centuries-old “stone” as a legendary substance sought by alchemists who believed it capable of turning metals like mercury into gold and silver. 

His installation uses custom lab equipment to introduce a “highly specialized” bacterium into an engineered atmosphere, turning toxic gold chloride into usable 24K gold over the course of the show. “By the end of the exhibition, the process will produce enough gold to put in the palm of your hand,” Brown said.

Other innovative contributors to “Engaging Technology II” include code artist Casey Reas; composer and visual artist Tristan Perich; and Evelina Domnitch and Dmitry Gelfand, European-Russian artists who create sensory immersion environments that merge physics, chemistry and computer science with uncanny philosophical practices.

“10000 Peacock Feathers in Foaming Acid” is Evelina Domnitch and Dmitry Gelfand’s live audiovisual performance, which was presented on the fulldome projection screen of Ball State’s Charles W. Brown Planetarium. The artwork uses a penetrating laser beam to scan the surface of soap bubbles, whose behavior models the unique properties of cell membranes.

Throughout the run of Engaging Technology II, a series of invited performances, lectures and workshops was scheduled on campus and in the Muncie community to extend the exhibition’s reach. Classes, workshops, lectures and family events were offered for local and regional audiences.

By Gail Werner, BSU Media Strategist

Apollo 11 Simulation for International MoonBase Alliance

A 3D simulation of the Apollo 11 landing was produced for the International MoonBase Alliance and ImmersiveWorlds, LLC by IDIA Lab.




About IMA

The International MoonBase Alliance (IMA) is an association of leading scientists, educators, and entrepreneurs from space agencies and industries worldwide, working to advance the development and implementation of an international base on the Moon.

Our priority is to create an action plan that will culminate with the building of MoonBase prototypes on Earth, followed by a sustainable settlement on the Moon. Building on discussions and recommendations from the Lunar Exploration and Analysis Group (LEAG), the European Lunar Symposium, the International Space Development Conference, the NewSpace Symposium, the International Astronautical Congress (IAC), and other worldwide space forums, we intend to formulate an integrated strategy for establishing a proposed multinational lunar base.

Toward this goal, the IMA hosted a summit in October 2017 on Hawaiʻi’s Big Island, inviting leading aerospace, engineering and space industry professionals to a series of conferences focused on identifying the essential resources and technological capabilities required to enable and sustain lunar-based operations. Through the promotion of international collaborations and public-private partnerships, we can reduce costs, enhance benefits, and accelerate timetables for lunar settlements.

moonbasealliance.com/

NEWARK MOUNDS SIMULATION

Ball State’s Institute for Digital Intermedia Arts (IDIA Lab) and the Applied Anthropology Laboratories (AAL) have created a web-based virtual world that interprets the earthworks in their era of construction. The sky is simulated with accurate celestial bodies using data from NASA’s Jet Propulsion Laboratory to allow users to view the moon and sun as they appeared almost 2000 years ago. The project is funded by the National Endowment for the Humanities’ Office of Digital Humanities.

The project is a collaboration between Ball State, The Works and the Ohio History Connection, with support and partnership from several federally recognized American Indian tribes, including the Eastern Shawnee Tribe of Oklahoma and the Shawnee Tribe.

CREDITS
IDIA Lab: John Fillwalk, Neil Zehr, Trevor Danehy, Adam Kobitz, Ina-Marie Johnston
AAL: Kevin Nolan, Tusher Mohanta, Jade Moore, Ashra Wickramathi
Ohio History Connection: Jennifer Aultman, Bradley Lepper
The Works: Meghan Federer

Advisory Board: Brett Barnes (Eastern Shawnee Tribe of Oklahoma), Marti L. Chaatsmith (Newark Earthworks Center, Ohio State University), James J. Connolly (Ball State), Christine Ballengee Morris (Ohio State University)

Ball State University’s IDIA Lab is granted rights to Blue Mars virtual world technology from Avatar Reality

http://bit.ly/BSU_BlueMars

Ball State granted rights to develop $10 million Blue Mars virtual world technology (3/5/2012)

A digital laser scan of a centuries-old Buddha (above) and a simulation of the 1915 World’s Fair in San Francisco (below) are two examples of how Ball State has employed Blue Mars for cutting edge technology projects.

The future of virtual worlds has a new home at Ball State University, thanks to the generosity of Avatar Reality Inc., which granted the university rights to the multimillion-dollar 3-D virtual world platform, Blue Mars.

Blue Mars’ creator, Hawaii-based Avatar Reality, has provided expanded rights to the technology, valued at $10 million in research and development, to Ball State for 3-D simulation and research projects beyond the world of gaming.

“We are honored to have our institution selected to build upon the tremendous innovations within Avatar Reality’s Blue Mars platform,” said John Fillwalk, director of the Institute for Digital Intermedia Arts (IDIA) and senior director of Ball State’s Hybrid Design Technologies initiative. “This partnership between Avatar Reality and Ball State is an unprecedented gesture and a great distinction.”

Since 2009, Ball State’s IDIA has been a leading developer for Blue Mars, employing the virtual platform on projects such as digital laser scans of a centuries-old Buddha and the creation of the Virtual Middletown Project. The project, which ushers visitors via their computers into the world of the early 20th-century Ball Brothers Glass Manufacturing Co., is one example of the cultural heritage work possible with Blue Mars. Another is IDIA’s simulation of the 1915 World’s Fair in San Francisco.

“Avatar Reality designed in Blue Mars a next-generation 3-D platform that greatly advanced the graphic fidelity, functionality and scalability of virtual worlds — well beyond current standards,” Fillwalk said. “Its strategy connected social spaces across devices, from high-performance desktops to mobile phones. Our goal is to build upon both its technology and vision for the future of virtual worlds.”

Recognition speaks to reputation

Phil Repp, Ball State’s vice president for information technology, said the new agreement with Avatar Reality can be traced to Fillwalk’s commitment to advancing IDIA to the national level.

“The fact that other universities — and many of them of very high caliber — are working hard to gain expertise in hybrid worlds and that Ball State was identified as the university most likely to further develop this technology speaks volumes about our reputation, experience and abilities,” Repp said. “It’s an honor for Ball State and John to be singled out with this kind of recognition.”

Under Ball State’s terms of agreement with Avatar Reality, the university will begin to fully operate Blue Mars for noncommercial purposes, expand upon the source code, increase its research and academic initiatives, and enhance the community of Blue Mars. In addition, Ball State will deliver original content on Blue Mars as it has done in the past. Existing commercial activity will continue to be operated and supported by Avatar Reality, Inc.

“I am really excited about the future,” Repp said. “Through our division of Hybrid Design Technologies, Ball State will further our position as a national leader in hybrid worlds and their applications to mediated learning technologies. Our reputation in this discipline is such a perfect fit to our institutional history of innovation in teaching and learning.”

About Ball State: Located in Muncie, Ind., Ball State University is redefining education by providing relevant, immersive learning experiences that engage high-caliber students in intense interdisciplinary projects both in and out of the classroom. Its vibrant campus is home to about 22,000 undergraduate and graduate students from across the country and abroad. Learn more at www.bsu.edu.

About Blue Mars: Developed by Avatar Reality Inc., Blue Mars is a premium 3-D virtual world platform featuring unparalleled interactivity, fidelity, scalability and security, and it enables artists and developers to create and distribute interactive 3-D experiences to a global audience. Blue Mars launched in open beta in October 2009 and began selling virtual land to third party developers in January 2010. Blue Mars Mobile, an iOS app based on the content and technology from the virtual world, was introduced in February 2011.

NATIONAL AWARD: VIRTUAL ESCAPE ROOM


The BSU College of Health and CAP’s IDIA Lab were honored with a national award for their collaborative virtual escape room project. The serious game, “The Case of the Kidnapped Healthcare Professionals: Virtual Escape Room,” was selected to receive the 2023 Creativity in Practice and Education Award from the National Academies of Practice (NAP). BSU’s Lori Porter accepted the award on behalf of the project team – Karrie Osborne, PhD, RN; Blair Mattern; John Fillwalk; IDIA staff; and Katherine Berryhill – for their work in developing the project.

Escape rooms employ innovative learning strategies such as problem-based immersive learning, role-play, and game principles to improve teamwork, communication and leadership. Nationally, health profession curricula are beginning to explore the use of escape room experiences as innovative pedagogy. Escape rooms challenge individuals to collaboratively explore new content, work as a team to discover clues, solve puzzles, and accomplish interactive tasks in a limited amount of time.

Flickr Gettr: IDIA performs at the International Annual Science and Technology Festival in India

John Fillwalk and Jesse Allison travelled to Mumbai, India for the International Annual Science and Technology Festival, January 22-24, to exhibit the Institute for Digital Intermedia Arts’ (IDIA) “Flickr Gettr” installation. The festival, the largest of its kind in Asia, hosted more than 65,000 attendees. In addition to exhibitions such as “Flickr Gettr”, the festival featured keynotes by Lars Rasmussen, co-founder of Google Wave and Google Maps; Vic Hayes, father of Wi-Fi; Ajay Bhatt, co-inventor of USB; Jonathan Gardner, senior NASA scientist; and R. Chidambaram, Principal Scientific Adviser to the Government of India.

IDIA’s Flickr Gettr is an interactive, sonic and visual experience, immersing the viewer in a three-dimensional cloud of user-searched Flickr images. The installation, completed in collaboration with the New Media Consortium, Linden Lab and the Institute for Digital Fabrication, was also highlighted in the festival’s promotional video.

For more information, please visit http://www.techfest.org/scintillations/flickrgettr/

Smithsonian and National Park Service Presentations

Vice President for Information Technology Phil Repp and IDIA Lab Director John Fillwalk traveled to Washington, D.C. to present Ball State projects in electronic field trips, virtual worlds and hybrid design to several federal organizations. Meetings included a presentation at the Smithsonian Institution Building on the National Mall to representatives from all the Smithsonian museums and institutes, as well as a separate presentation to the National Park Service. Conversations are ongoing regarding partnership and collaboration opportunities.
 

VIRTUAL COURTROOM SIMULATOR

IDIA is collaborating with BSU’s Brandy and Greg Rocheleau from the Department of Criminal Justice and Criminology on the production of a Virtual Courtroom. The project, funded by BSU’s Women of Beneficence, is designed to allow students in designated criminal justice classrooms to participate in courtroom simulations through the use of avatars – building knowledge and empathy as well as offering practice in decision-making skills in a courtroom setting.

One of the main purposes of this project is to give criminal justice students an opportunity to practice ethical decision-making skills in the field of criminal justice. It also helps students develop empathy for vulnerable populations so that they become socially responsible professionals. IDIA is designing a multiuser virtual simulation where students can engage via avatars and communicate by voice, chat, video or whiteboard. The facilities will include a courtroom, deposition room, classroom and even an accurate digital twin of the Supreme Court!

RUTGERS HEALTHCARE SIMULATION

IDIA is designing an interactive virtual training simulator for Rutgers University’s School of Health Professions. The project immerses students within TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety). Developed by AHRQ, the Agency for Healthcare Research and Quality, TeamSTEPPS is an interprofessional communication framework providing interdisciplinary collaboration and teamwork tools aimed at optimizing patient outcomes by improving communication among healthcare professionals. It is AHRQ’s signature curriculum for improving patient safety by training healthcare teams to communicate and practice effective teamwork, preparing staff to lead medical teamwork improvement initiatives in an organization, from initial concept development through sustainment of positive changes.

Students will learn best-practice interprofessional communication strategies and apply them in lab-based scenarios in a single-player simulator created in the Unity 3D game engine. The experience is designed around branched dialogue trees that provide decision-making opportunities to assess learned concepts as they are applied to various healthcare scenarios. The project was recently presented at the National Academies of Practice (NAP) annual conference.
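As a hypothetical sketch of the underlying data structure (illustrative names, not the project’s actual API), a branched dialogue tree can be as simple as nodes holding a prompt, a set of scored choices, and the branch each choice leads to:

```csharp
// Minimal branched-dialogue data structure (illustrative only).
using System.Collections.Generic;

public class DialogueNode
{
    public string Prompt; // what the virtual patient or teammate says
    public List<DialogueChoice> Choices = new List<DialogueChoice>();
}

public class DialogueChoice
{
    public string Text;       // the learner's possible reply
    public int Score;         // assessment weight for this decision
    public DialogueNode Next; // branch taken if this reply is chosen
}
```

Traversal is then a loop: present the current node’s prompt, record the learner’s choice and its score for assessment, and follow the chosen branch until a terminal node ends the scenario.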

ACADEMY OF DIETETICS AND NUTRITION

The IDIA Lab was contracted by the Academy of Nutrition and Dietetics – home of the Food Pyramid – to develop a dietetics occupational simulator. The simulator employs branched dialogue scenarios to train students in the process of an intake experience within a hospital clinic. This particular simulation walks the student through an interaction with a family whose child has diabetes, and through the different units of the hospital involved in the intake. The project was developed in two phases for the client and was created using the Unity game engine.

GIS PAPER: DEEP MAPPING

BSU’s James Connolly, director of the Center for Middletown Studies, and John Fillwalk, director of IDIA Lab, presented their joint paper at the 30th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems in Seattle, WA. The paper explored innovations in visualizing data in immersive 3D environments associated with the newly launched Deep Mapping Middletown project. Led by Connolly and Fillwalk, the project consists of a team of scholars and library scientists from several institutions producing deep maps based on the extensive archive created by a century of social research focused on Muncie, IN.

https://dl.acm.org/doi/abs/10.1145/3557919.3565815

VR Furniture System


A virtual, on-demand, multi-user furniture system for hybrid reality collaborations. The demonstration shows real-time deployment of virtual furniture to support simulation of various small-group and large-group spontaneous interactions, along with a demo of a hybrid reality telepresence system.

The Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University explores the intersection between the arts, sciences, and humanities using a broad range of technologies. Founded in 2005, IDIA Lab has been designing scholarly, creative, teaching and learning projects that investigate virtual reality, human-computer interface (HCI), augmented reality, mobile apps, interactive artworks, immersive environments, head-mounted displays, visualization and 3D simulation. The IDIA Lab studios were initially incubated through the Eli Lilly Endowment’s Digital Media Initiative as part of two generous grants to the University. IDIA Lab’s research facilities include five distinct labs – IDIA Lab, CAVE Lab, DOME Lab, HCI Lab, and SIM Lab – all working in concert with their partner, the Digital Scholarship Lab (DSL). The labs’ faculty, staff and students develop projects in collaboration with a cohort of international clients in this innovative initiative investigating the forefront of emergent design, expression and learning.



Virtual Escape Room

IDIA Lab is working on a Virtual Escape Room project for the Ball State University College of Health, launching in the fall semester. In this collaboration we are developing a virtual multi-user escape room. Escape rooms employ innovative learning strategies such as problem-based immersive learning, role-play, and game principles to improve teamwork, communication and leadership. Nationally, health profession curricula are beginning to explore the use of escape room experiences as innovative pedagogy. Escape rooms challenge individuals to collaboratively explore new content, work as a team to discover clues, solve puzzles, and accomplish interactive tasks in a limited amount of time. The multi-user virtual escape room will support teaching teamwork and communications curricula for health profession students and contribute to improved health outcomes.

Engaging Technology II: Art + Science – 10000 Peacock Feathers in Foaming Acid

Engaging Technology II: Art + Science premiered at the David Owsley Museum of Art on September 28th, 2017. The exhibition was curated by John Fillwalk, Director of the Institute for Digital Intermedia Arts at Ball State University. The opening reception was followed by a special performance by two of the featured artists – Evelina Domnitch and Dmitry Gelfand – at the Charles W. Brown Planetarium.

About the performance, from the artists: “In 10000 Peacock Feathers in Foaming Acid we use laser light to scan the surfaces of nucleating and dissipating soap bubble clusters. Unlike ordinary light, the laser’s focused beam is capable of crawling through the micro and nano structures within a bubble’s skin. When aimed at specific angles, this penetrating light generates a large-scale projection of molecular interactions as well as the phenomena of non-linear optics. Bubble behaviors viewed in such proximity evoke the dynamics of living cells (the lipid membranes of which are direct chemical descendants of soap films).”

http://www.portablepalace.com

Natalie Yates

Natalie Yates is an Assistant Professor of Landscape Architecture at Ball State University. Her scholarly work is situated at the intersection of representation and analysis of landscape systems. Her research focuses on dynamic landscape illustration and simulation, sensing methods in design process (including DIY sensing technologies, UAVs [drones], and GIS), post-industrial landscape remediation, and urban agriculture. Natalie teaches design studios, design communication, time-based media/technology methodologies, and UAS technologies.


Natalie is Ball State University’s 2020-2021 University Design Research Fellow for Exhibit Columbus. Her design research proposal and installation entitled “Calibrate” will open during Exhibit Columbus in Fall 2021.
Additionally, Natalie is president of the Board of Directors of Farmished, a local non-profit promoting a thriving local sustainable food system. Farmished is currently transforming a former machine foundry site in south Muncie into an Urban Agriculture Training Farm.

Natalie has published research at the Association for Computer Aided Design in Architecture (ACADIA) national conference and co-authored the book Modeling the Environment: Techniques and Tools for the 3D Illustration of Dynamic Landscapes (John Wiley & Sons, Inc., 2012).

Kristin Barry

Kristin Barry – Architecture and Planning

Kristin Barry is an assistant professor of Architecture at Ball State, where she teaches courses in architecture history and theory. She was previously an instructor at Penn State University, where she taught graduate courses in architectural history/theory and survey courses in ancient art history and architecture. After receiving her Bachelor of Science in Architecture from the University of Cincinnati, Kristin began working as an archaeological architect, and has worked in Greece, France, Israel, Egypt, and Turkey to document and interpret historical sites for a modern audience. Following her Master of Architecture from the University of Cincinnati, she was on the 2008 masterplanning team at the archaeological site of ancient Troy in Turkey, working to redesign the tourism site to accommodate modern needs. Her research and publications explore how architecture and design describe and interpret ancient remains, as well as change or affect historical understanding at some of the most popular archaeological sites in the world. Kristin is currently the site architect at the Hierakonpolis excavation in Egypt and volunteers with the PUP Global Heritage Consortium.

Virtual Companion: Indianapolis

Virtual Monument Circle
1910s-1930s

Virtual Monument Circle is a mobile application that uses historic photographs and maps of downtown Indianapolis, IN to create an immersive interpretation of various historic phases of the city center. This project is a prototype for a larger potential city-wide endeavor bringing to life significant neighborhoods and sites within the city. It is developed as a possible collaboration between the Institute for Digital Intermedia Arts at Ball State University, the Polis Center at Indiana University Purdue University Indianapolis and the Indiana Historical Society.

There are two experiences, with content dependent on the user’s proximity to the site – onsite and offsite. It will be available soon for both iOS and Android.

Virtual Monument Circle was designed and produced by the Institute for Digital Intermedia Arts at Ball State University. Special thanks to the Polis Center at IUPUI and the Indiana Historical Society.

For more information contact:  idialab@bsu.edu

Virtual Companion: Columbus

IDIA Lab – in collaboration with BSU Assistant Professor of Architecture Kristin Barry – has designed the cultural heritage mobile application Virtual Columbus Experience, allowing users to virtually tour the rich architectural history of Columbus, Indiana. This locative mobile app was produced with fellowship support from Ball State University’s Digital Scholarship Lab.

Though a relatively small city, Columbus has made itself a cradle of modern architecture, commissioning many buildings and public works since the middle of the 20th century. The number of landmark buildings and notable architects to have worked in the city has earned it the nickname “Athens on the Prairie.”

With data and artifacts gathered by Kristin Barry’s immersive class, users of the app can explore 3D models of key buildings commissioned over the years, survey timelines showing when architects were active or buildings were constructed, and meet Columbus’ famous architects – such as Eero Saarinen, I.M. Pei, and Harry Weese.

After its launch, Virtual Columbus Experience will seek further funding to expand the scope and depth of the project across the community.

Buffalo Bill Center of the West AR Diorama

AR Diorama

Buffalo Bill Center of the West has contracted with IDIA Lab to create an augmented reality (AR) diorama of Buffalo Bill’s Wild West show. Guests to the museum will be able to view the physical white model through an augmented reality overlay. While IDIA Lab’s previous Buffalo Bill project focused on his Muncie show, this iteration explores the Ambrose Park show in Brooklyn, NY.

Adam Kobitz

“Constructing the physical portion of the Buffalo Bill diorama through 3D printing and laser cutting processes presented us with several challenges, the first of which was recreating our existing models within CAD software (Autodesk Fusion 360) to ensure that they were designed with these techniques in mind. This modelling process is quite different from those employed on the AR side of the piece, and entails ensuring that the sizes, thicknesses, etc. of each element correlates with the parameters of our machines. Additionally, given the overall scale of the final product, many elements throughout the diorama had to be broken down into smaller components and reassembled by hand. After the models had been 3D printed or laser cut, each of the elements needed to be assembled with adhesives, their seams hidden with a filler substance, filed and sanded, and painted to achieve the final aesthetic.”

Trevor Danehy

“I identified several 19th century photos of William Cody, Iron Tail, and Nate Salsbury within a timeframe of a few years. Using these photos I constructed digital 3D models of them within programs like ZBrush and Maya. I also did extensive research on the 1894 Buffalo Bill Wild West show in Ambrose Park, Brooklyn – a long-forgotten location that has been all but wiped from history. There are very few details of Ambrose Park aside from a few photographs from the late nineteenth century. The location was demolished in the early 20th century and the Brooklyn Navy Terminal was paved over it. I was able to confirm structures and locations of the Wild West show by cross-referencing photographs that overlap the same locations. An interesting note is that we have an electrical layout map of the show in Ambrose Park. When comparing photographs to the electrical map, while it does confirm the exact locations of electrical poles throughout the campus, the positions of major tents have changed, possibly due to weather or other circumstance. Many informative photographs came from Robert Bracklow, a professional photographer who was heavily active in the New York area at the time. Many other photographers can be seen in the Wild West show, but his archive survived over more than a century. It was his photographs that helped establish the location of the tents belonging to William Cody, Annie Oakley, and even John Burke’s cabin. They also helped piece together what parts of the exterior grandstand might have looked like.”

Virtual World Heritage Ohio

Aerial view of Newark Earthworks render

Ball State University and Ohio History Connection

Sponsored by the Office of Digital Humanities, National Endowment for the Humanities

The public will be able to explore the prehistoric Newark Earthworks in Ohio the way they appeared 2,000 years ago. Funded by the National Endowment for the Humanities’ Office of Digital Humanities, this interactive 3D simulation is under development at Ball State University.

Ball State’s Applied Anthropology Laboratories (AAL) and the Institute for Digital Intermedia Arts (IDIA Lab) are creating a web-based virtual world that interprets the earthworks in their era of construction. The project is led by Kevin Nolan, director and senior archaeologist at AAL, and project co-director John Fillwalk, senior director of IDIA Lab. Interactive features will include accurate celestial alignments. The sky will be simulated with accurate celestial bodies using data from NASA’s Jet Propulsion Laboratory, allowing users to view the stars, planets, moon, and sun as they appeared 2,000 years ago.

First person view of Newark Earthworks render

Already a National Historic Landmark, the Newark Earthworks were designated “the official prehistoric monument of the state” by Ohio in 2006. Spread across four miles in present-day Newark, Ohio, the mounds and walls were constructed to record significant celestial alignments on the landscape, including the 18.6-year lunar cycle. The earthworks created community for the Hopewell People and provided sacred spaces for religious rituals and ceremonies related to their society. The Newark Earthworks comprise the largest set of geometric earthen enclosures in the world, built by the Hopewell People between A.D. 1 and A.D. 400 to serve a variety of cultural and spiritual purposes.

The project is a collaboration between Ball State and the Ohio History Connection, with support and partnership from several federally recognized American Indian tribes, including the Eastern Shawnee Tribe of Oklahoma and the Shawnee Tribe.

Dr. Michael Rhoades


Dr. Michael Rhoades is a multimedia artist and researcher. He joined the IDIA Lab team in August of 2019 where his role is that of a digital audio researcher creating generative algorithmic systems and configuring venues intended for multi-channel diffusion and stereoscopic projection.

In the spring of 2018 Michael received a BFA in Creative Technologies from the School of Visual Arts at Virginia Tech. Continuing there, in the summer of 2018 he began pursuing an interdisciplinary PhD in Computer Science, Musical Art, and Visual Art, which he completed in the fall of 2020.

The foci of his practice-based research reside in the areas of holograms, holophons, and supercomputing, each contributing to the production of novel visual music compositions and the development of leading-edge theoretical perspectives. After completing his graduate coursework at Virginia Tech, Michael returned to Indiana to be closer to family, where he continues his research and creative practice remotely.

As a digital animator, painter, filmmaker, composer, computer scientist, theorist, and researcher, Michael’s multi-faceted artistic and academic endeavors continue to expand in an ever-broadening scope as the creative impulse inspires and requires. Examples of his music, visual art, visual music, and of his philosophical and technical writing are located at http://www.perceptionfactory.com.

Apollo 11 VR on Steam

Apollo 11 VR is a first-person experience of the interior of the 1969 Command Module. Explore the cockpit firsthand in an authentic model of the interior based on scans from the Smithsonian. This release marks the 50th anniversary of the historic landing, when Buzz Aldrin and Neil Armstrong walked on the surface of the moon at a site named Tranquility Base.

The Spacecraft

The Apollo spacecraft had three parts: a command module (CM) with a cabin for the three astronauts, and the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages – a descent stage for landing on the Moon, and an ascent stage to place the astronauts back into lunar orbit. The astronauts used Eagle’s ascent stage to lift off from the lunar surface and rejoin Collins in the command module. They jettisoned Eagle before they performed the maneuvers that propelled the ship out of the last of its 30 lunar orbits on a trajectory back to Earth. They returned to Earth and splashed down in the Pacific Ocean on July 24 after more than eight days in space.

Armstrong’s first step onto the lunar surface was broadcast on live TV to a worldwide audience. He described the event as “one small step for man, one giant leap for mankind.”

Wishing Well

Wishing Well creates an environment where visitors’ interaction shapes the substance, meaning and character of the artwork – mediated through artificial intelligence. Upon engaging the installation, a participant speaks a wish, which only they can hear, into a pool of water. The spoken wish is collected by an artificially intelligent set of software and hardware that begins to process the viewer’s hope. A technical document attached to the proposal details the specific flow and technologies, but in essence, once each wish is recorded it is spoken back, transformed in a poetic manner, by text-to-speech synthesis in a whispering tone. The individual viewer privately hears this once; their wish is then processed to become part of a larger collection of all wishes, which can be heard as synthesized voices through directional speakers. The tone of the collection of all wishes is analyzed and affects the processing of the spoken wishes, the ambient sound, and the color and brightness of the light produced above and below the pool.

Wishing Well engages both private and public spaces in the creation of a simultaneously intimate and collective social experience. The tone of the wishes aggregated by the system is processed anonymously and delivered into this social sculpture via synthesized whispered voice spoken in English. The water pool, or wishing well, is vibrated by a speaker underneath, creating a three-dimensional dynamic sound wave image of the actual wish. Virtual lanterns will appear projected with each wish and will launch skyward to become part of the larger collection of dreams.

AI

Speech To Text

IBM Watson’s Speech to Text service accepts an audio file or stream as input and returns a transcription. This service will allow Wishing Well to analyze user speech for its tone and repeat it back using text-to-speech.
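A minimal sketch of such a call over Watson’s documented HTTP interface might look like the following; the instance URL and API key are placeholders, and JSON parsing is omitted.

```csharp
// Hedged sketch: post a WAV recording to Watson Speech to Text via HTTP.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class WishTranscriber
{
    public static async Task<string> TranscribeAsync(byte[] wavBytes)
    {
        using var http = new HttpClient();

        // Watson instances authenticate with HTTP Basic, user "apikey".
        string creds = Convert.ToBase64String(
            Encoding.UTF8.GetBytes("apikey:YOUR_API_KEY")); // placeholder key
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", creds);

        var content = new ByteArrayContent(wavBytes);
        content.Headers.ContentType = new MediaTypeHeaderValue("audio/wav");

        // Each service instance has its own URL; this one is a placeholder.
        HttpResponseMessage resp = await http.PostAsync(
            "https://api.us-south.speech-to-text.watson.cloud.ibm.com/v1/recognize",
            content);

        // The JSON reply carries results[0].alternatives[0].transcript.
        return await resp.Content.ReadAsStringAsync();
    }
}
```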

Tone Analyzer

IBM Watson’s Tone Analyzer service accepts text as input and returns a tone analysis that detects analytical attitude, inhibition, confidence, hope, sadness and joy across a spectrum. This data will be used to drive the color and intensity of the light of Wishing Well based on the aggregate tone of the wishes.
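However the scores arrive, the interesting step is mapping them onto the installation’s light. One plausible mapping (illustrative values, not the installation’s actual palette) blends a warm color against a cool one by the relative weight of joy versus sadness, and lets confidence drive intensity:

```csharp
// Illustrative tone-to-light mapping for the pool illumination.
using UnityEngine;

public class ToneLight : MonoBehaviour
{
    public Light poolLight;

    // Scores are assumed normalized 0..1, as in the Tone Analyzer's JSON.
    public void ApplyTone(float joy, float sadness, float confidence)
    {
        Color warm = new Color(1.0f, 0.8f, 0.3f); // joyful wishes
        Color cool = new Color(0.2f, 0.4f, 1.0f); // sorrowful wishes

        float total = Mathf.Max(joy + sadness, 0.0001f); // avoid divide-by-zero
        poolLight.color = Color.Lerp(cool, warm, joy / total);
        poolLight.intensity = Mathf.Lerp(0.5f, 2.0f, confidence);
    }
}
```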

Polly

Amazon Polly’s output can be modified with SSML tags, which can affect the timbre, loudness, and pace of the audio. Wishing Well will whisper wishes back to the wisher.
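Using the AWS SDK for .NET, the whispering effect can be requested with Polly’s SSML “whispered” effect, roughly as sketched below; the voice choice is illustrative.

```csharp
// Hedged sketch: synthesize a wish as a whisper with Amazon Polly.
using System.Threading.Tasks;
using Amazon.Polly;
using Amazon.Polly.Model;

public static class WishWhisperer
{
    public static async Task<SynthesizeSpeechResponse> WhisperAsync(string wish)
    {
        var polly = new AmazonPollyClient(); // credentials from the environment

        var request = new SynthesizeSpeechRequest
        {
            TextType = TextType.Ssml,
            Text = "<speak><amazon:effect name=\"whispered\">" +
                   wish + "</amazon:effect></speak>",
            VoiceId = VoiceId.Salli,        // illustrative voice choice
            OutputFormat = OutputFormat.Mp3
        };

        // The response's AudioStream holds the whispered audio.
        return await polly.SynthesizeSpeechAsync(request);
    }
}
```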

Cymatics and Spouting Bowls

Cymatics, from the Ancient Greek κῦμα, meaning “wave”, is a subset of modal vibrational phenomena. Ordinarily the surface of a plate or vessel is vibrated, and regions of maximum and minimum disturbance are made visible in an animating fluid. Various patterns emerge depending on the amplitude and frequency of the vibration and the shape of the vessel.

The excitation can be direct, as in the Chinese spouting bowl, in which copper handles are rubbed, causing the copper base to vibrate and thus stimulate the water.

Standing vessels originated in China. An early form called the nao took the shape of a stemmed goblet, mounted with its rim upright, and was struck on the outside with a mallet.

Wishing Well User Experience

  1. The visitor steps into a zone and triggers a proximity sensor that begins recording.
  2. The user speaks their wish into the microphone in the console.
  3. The recording stops.
  4. The audio is converted to text via the speech-to-text AI service.
  5. The text is sent to the IBM Language Translator and then to Amazon Polly.
  6. The Language Translator returns the text.
  7. The text is sent to the Tone Analyzer.
  8. The Tone Analyzer returns a tone analysis.
  9. The tone analysis is sent to Unity and Max to affect light color and sound.
  10. The tone analysis drives visual and synthesized voice effects based on the mood of the wish.
  11. The synthesized voice audio is played through a hyper-directional speaker using ultrasound directed toward the visitor so only they can hear privately. http://hypersound.com/pro/products/
  12. Previous wishes are played at the same time through a randomly selected hyper-directional speaker surrounding the installation. Each visitor’s wish becomes part of the cloud of wishes and randomly plays anonymously through an AI voice.
  13. The installation goes into an idle / attract state in which visual and auditory effects are influenced by the combined tone of the currently saved wishes.

ATK Procedural Audio Library

ATK-Sharp

ATK (Audio Tool Kit) is a demo of ATK-Sharp, a procedural audio generation library. ATK contains four example sound scripts that developers can use as a reference when scripting their own custom sounds.

Download now from the Unity Asset Store!

Why use procedural audio?

Using procedurally-generated audio instead of pre-made audio files has two main benefits:

1. Build file size can be reduced. Audio files can take up megabytes of disk space, while procedural audio is an order of magnitude smaller, typically only requiring kilobytes to store its script’s algorithm.

2. Procedural audio is flexible. Audio files are essentially baked: you may be able to change the pitch or speed of a file, but that could never match the sample-by-sample control you have with procedural audio. With physics and logic as inputs, a given sound script may never produce the same output more than once.

Features

– WindAudio.cs demonstrates the very basics of what a developer will need to do to start scripting procedural audio using the library.
– FireAudio.cs showcases how the developer can combine noise generators and a modifier to create the hiss, crackle, and pop of a campfire.
– ChimeAudio.cs exhibits how the developer can use physics as an input to the audio script.
– CricketAudio.cs models the chirp pattern of an actual species of cricket, known as the fall field cricket, to demonstrate how far the concept of procedural audio can be taken.

Dependencies

ATK-Sharp

Support

For support with ATK, please email us at idialab@bsu.edu.

Photogrammetry and 3D Printing in “The Power of Place”

Long on the cutting edge of emerging technologies, IDIA Lab has gained a reputation for unique skill sets. Providing its photogrammetry and 3D printing services, IDIA Lab joined several departments in producing a new exhibition at the David Owsley Museum of Art.

“The Power of Place: 100 Years of Architecture at Ball State University”

“The Power of Place: 100 Years of Architecture at Ball State University” showcases the ways Ball State’s campus architecture has changed and how the past still shines through.

“The exhibition embodies the idea that even though Ball State’s built environment has greatly changed over time by way of adding new buildings constructed with modern materials, our past is forever present,” says Rebecca Torsell, archivist for architectural records at Ball State University, who organized the exhibition.

Running from September 27 through December 21, 2018, the exhibition is part of a campus-wide celebration of Ball State’s Centennial.

“The museum is an ideal place to celebrate the University’s Centennial,” says Robert G. La France, director of the David Owsley Museum of Art. “For more than 80 years the museum’s building — the Fine Arts Building — has stood as Ball State’s palace to the arts. This show displays the University’s long-standing commitment to distinctive architecture and introduces visitors to its vast architectural archives and special collections.”

Photogrammetry

Part of the history of Ball State architecture includes unique tiles used in the Fine Arts Building itself. The museum wanted a way to display these ornate, relief tiles without removing them from the wall. Using their photogrammetry techniques, IDIA Lab first recreated the tiles as a virtual 3D model.

IDIA Lab Modeler, Trevor Danehy, explains the process, “We take at least one hundred photos of each tile, everything from straight on shots of the tile to sharp angles in an effort to expose the levels of depth in the crevices of the tile. The photogrammetry software algorithm requires that each photo has at least 50% overlap compared to the last photo taken in order to stitch together a mesh.”

“It can be a very trial-and-error process, so the more images taken, the better. That is why we prefer a DSLR camera that can take several images rapidly. Quantity over quality works best in photogrammetry. Once the software stitches together a crude mesh, we do a small amount of digital cleanup and export it for 3D printing.”

3D Printing

After the virtual mesh has been created, the next step is to recreate the tile in a physical medium. At IDIA Lab, this is accomplished using 3D printing. IDIA Lab’s Adam Kobitz, who manages 3D printing in the lab, describes the printing technique. “We printed the tiles using a Fused Deposition Modeling (FDM) printing process, in which the physical objects are built up by extruding layer after layer of melted plastic. This process allows us to produce accurate recreations of the tiles relatively quickly, permitting viewers of the exhibit to observe and interact with them in a manner they’re unable to with the original works.”

After the tiles are printed, they are ready to be displayed as a portable replica of part of Ball State’s architectural history.

BALL STATE UNIVERSITY AND TU DORTMUND EXCHANGE PROGRAM IN INTERMEDIA ART

Ball State University and the Technische Universität Dortmund have partnered to create an international exchange program for students studying the design and technology of Intermedia artforms. The program will provide opportunities for students and faculty engaged in experiential, project-based approaches to the collaborative creation of new media art. The Hans Breder Foundation, a non-profit international Intermedia art organization, has contributed a significant body of historic artwork to the Museum Ostwall in Dortmund, one of Germany’s major art museums. The archive will serve as a catalyst for research and production of Intermedia-based projects initiated by the BSU and TU Dortmund exchange. John Fillwalk, also president of the Hans Breder Foundation, initiated the exchange program with the assistance of the Rinker Center for International Programs. He will be present at the formal signing ceremony on November 19th in Dortmund, Germany.

CAVE

CAVE is an immersive 270° panorama screen primarily used for presentations to large groups of students. OmniFocus projection maximizes the shadow-free zone in our installation of the Elumenati‘s 8m GeoDome Panorama, so up to 40 people can share the space. Without headsets, the shared space sets the stage for spontaneous interactions with the content and one another. It’s also logistically smoother than managing 40 headsets at once, especially post-pandemic.

Another use case for the CAVE is as part of our virtual production pipeline, using the Elumenati’s WorldViewer and Omnity plugin for the Unity game engine. This allows us to collaborate inside the 3D environment and demo content for clients.

A vendor and partner since 2018, the Elumenati is a design and engineering firm creating innovative solutions for immersive visualization. With nearly twenty years as leaders in the field, their mission is to develop elegant, effective systems and novel applications of immersion and interactivity, building VR domes and social, collaborative spaces for XR experiences.

Blue Mars Promo

Blue Mars is a 3D massively multiplayer virtual world platform developed by Hawaii-based Avatar Reality – allowing third party developers to create virtual worlds, MMOG games, simulations, businesses, entertainment venues, clothing, custom avatars, furniture, virtual homes, and other items. It consists of four main parts: the client software, the Sandbox Editor SDK suite, the website, and the host servers.

Free Account and Download here: http://www.bluemars.com/BetaClientDow…

Blue Mars Client

The free Blue Mars Client is your window into our free to play massively multiplayer virtual world made up of an expanding set of independently operated Cities that feature unique themes, activities, and attractions. Through the Blue Mars Client, users can participate in a wide variety of entertainment, educational, and productivity experiences.

Blue Mars Sandbox Editors

The free Blue Mars Sandbox Editors are the bridge between your offline content and the online Blue Mars world. Our Editors work in concert with digital content creation tools like 3DSMax, Maya, Blender, and SketchUp to create immersive 3D spaces and upload them into the virtual world. Using the Blue Mars Sandbox Editors, developers can arrange geometry, script interactive features, define items for sale, manage shop and residence interiors, preview lighting conditions, adjust time of day, create shops, manage land, develop terrain and vegetation, choose textures, adjust shaders, and much more. Our Editors are “what you see is what you play”: the view of your offline content in the Editor is exactly how it will appear in the online environment.

BlueMars.com

BlueMars.com is the central hub for discovery, community, and user and developer account management. Currency balances, friend lists, messaging, business intelligence, inventory, virtual land, search, shopping, and account information will all be managed through the website.

Blue Mars Servers

The Blue Mars Servers distribute the content uploaded into the virtual world by developers, manage the global item and inventory database, and coordinate the movement, actions, and communication of the users exploring Blue Mars. Our advanced server architecture can support many thousands of users simultaneously in the same location.