BSU HOSTS XR CONFERENCE

IDIA Lab was invited by the Campus Alliance for Advanced Visualization (CAAV) to host its annual conference in Fall 2023 at the College of Architecture and Planning. The CAAV is a nonprofit organization that promotes the use of extended reality technologies, such as CAVE-style immersive VR systems, video walls, headsets, and advanced visualization software employed in research, learning, and public engagement.

CAAV’s purpose is to act as an information resource for higher education worldwide – focusing on immersive visualization platforms, advanced video screen formats, visualization software, and issues of sustainability and operation for high-tech visualization facilities.

AI Art Installation Beijing

Wishing Well (许愿池) by media artist John Fillwalk and IDIA Lab is an Artificial Intelligence (AI) installation that was invited to the 5th Arts and Science International Exhibition and Symposium (TASIES 2019) at the National Museum of China in Beijing, located on Tiananmen Square. The exhibition curated the work of international artists pioneering the use of AI in art-making. The Wishing Well installation creates an interactive environment where visitors’ spoken wishes shape the substance, meaning, and character of the artwork – mediated through artificial intelligence.


Wishing Well (许愿池) engages both private and public spatial interactions in the creation of a simultaneously intimate and collective social experience. The tone of the wishes aggregated by the system is processed anonymously and delivered into this social sculpture via synthesized whispered voices, spoken in Mandarin. The pool of water in the bowl, or wishing well, is vibrated by a speaker placed underneath – creating a three-dimensional dynamic sound wave representation shaped by the sound of the actual wish. Virtual colored shapes are emitted based on the tonal analysis of each wish and are aggregated into a historical representation of previous wishes – launching skyward to become part of the larger collection of dreams.

Indiana University VR Exhibit

BSU IDIA Lab recently launched its virtual reality exhibition interpreting the Angel Mounds heritage site at the Indiana University Museum of Archaeology and Anthropology (IUMAA). The gala ribbon-cutting event opened a portion of the collections of the former Glenn Black Laboratory of Archaeology and the Mathers Museum of World Cultures. The diverse collections at IUMAA relate to dynamic societies from around the world and contain over five million objects of cultural heritage and archaeological significance.

The virtual exhibit, entitled City on the River, is a large-scale immersive 360-degree VR CAVE experience which places viewers in the community on the Ohio River over 900 years ago. The simulation provides learning experiences regarding the agriculture, construction techniques, hunting, fishing, gathering, and astronomical alignments of the Mississippian peoples. The Indigenous Peoples built sturdy walls to protect their relatives and great mounds of earth. These structures were designed to align with the movements of the sky – the sun, stars, and moon. Then, after 300 years of continuous inhabitation, the Indigenous Peoples of this great town decided to leave their fields and mounds behind.

The larger exhibition will tell the story of the People who lived there through material culture from the site, the IDIA immersive virtual reality experience, connections to the stories of contemporary culture-bearers, and art from Indigenous descendant communities.

APOLLO 11 50th ANNIVERSARY

A 3D simulation of the Apollo 11 landing created for the John F. Kennedy Library’s 50th Anniversary of the Apollo 11 Mission. Produced for the International MoonBase Alliance by the Institute for Digital Intermedia Arts at Ball State University. https://idialab.org/

About IMA: The International MoonBase Alliance (IMA) is an association composed of leading scientists, educators, and entrepreneurs from space agencies and industries worldwide, working to advance the development and implementation of an international base on the Moon.

Our priority is to create an action plan that will culminate with the building of MoonBase prototypes on Earth, followed by a sustainable settlement on the Moon. Building on discussions and recommendations from the Lunar Exploration and Analysis Group (LEAG), the European Lunar Symposium, the International Space Development Conference, the NewSpace Symposium, the International Astronautical Congress (IAC), and other worldwide space forums, we intend to formulate an integrated strategy for establishing a proposed multinational lunar base.

Towards this goal, the IMA hosted a summit in October 2017 on Hawaiʻi’s Big Island and invited leading aerospace, engineering, and space industry professionals for a series of conferences focusing on identifying essential resources and technological capabilities required to enable and sustain lunar-based operations. Through the promotion of international collaborations and public-private partnerships, we can reduce costs, enhance benefits, and accelerate timetables for lunar settlements. https://moonbasealliance.com/

Shackleton Crater Animation

Shackleton Crater is an impact crater lying at the South Pole of the Moon. The ridges along the crater’s rim are exposed to almost continual sunlight, while the interior is perpetually cast in shadow. The crater’s low-temperature interior captures frozen water shed during impacts on the Moon. Some sites along Shackleton’s rim receive almost constant illumination, where sunlight is nearly always available for conversion into electricity using solar panels – potentially making them attractive sites for future Moon landings. The temperature at these sites is also more favorable than at more equatorial latitudes, as they do not experience daily temperature extremes of 100 °C. The crater is named after Antarctic explorer Ernest Shackleton.

The 3D model of the Moon was created by IDIA Lab using height and color data provided by NASA’s Lunar Reconnaissance Orbiter project. This data was then imported into a digital terrain generator to create an accurate, scalable model of the entire lunar surface. This project was created for the International MoonBase Alliance and ImmersiveWorlds, LLC. Produced by IDIA Lab.
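The core step a terrain generator performs with elevation data like this is displacing a regular grid of vertices by the heightmap. The sketch below is illustrative only (toy values, hypothetical scale factors), not the actual IDIA Lab pipeline:

```python
# Hypothetical sketch: turning a gridded elevation map (a DEM, such as
# LRO-derived data) into a 3D vertex grid. cell_size and z_scale are
# illustrative parameters, not values from the project.

def heightmap_to_vertices(heights, cell_size=1.0, z_scale=1.0):
    """Convert a 2D grid of elevations into a flat list of (x, y, z) vertices."""
    vertices = []
    for row, line in enumerate(heights):
        for col, h in enumerate(line):
            vertices.append((col * cell_size, row * cell_size, h * z_scale))
    return vertices

dem = [[0.0, 1.0], [2.0, 3.0]]  # toy 2x2 elevation grid
verts = heightmap_to_vertices(dem, cell_size=10.0, z_scale=0.5)
print(verts[3])  # (10.0, 10.0, 1.5)
```

A real pipeline would additionally tile the surface and build triangle indices between neighboring vertices, but the displacement idea is the same.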


The Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University explores the intersection of the arts, science, and technology. Scholarly, creative, and pedagogical projects investigate virtual reality, human-computer interface (HCI), augmented reality, mobile apps, visualization, and 3D simulation to heighten understanding across disciplines. The Lab was funded through the Eli Lilly Foundation’s Digital Media Initiative as part of a $40 million grant. Our faculty, staff, and students develop projects in partnership with international clients in this innovative studio initiative investigating the forefront of emergent design and learning.

Acknowledgements: Special thanks to the National Aeronautics and Space Administration and the NASA Lunar Reconnaissance Orbiter mission.

Interactive Virtual ChatBots

IDIA Lab’s chatbots provide interactions with virtual 3D animated non-player characters (NPCs) that can be created with customized persona profiles. Users can converse with NPCs via voice or text input, and the virtual characters use artificial intelligence to assess the tone of the conversation and reflect simulated responses with facial animation, gesture, and vocal intonation. The lab engages ChatGPT and other AI software to assemble an integrated system within the Unity game engine, creating robust real-time user engagement with an infinite variety of customized NPC profiles – from historical characters to highly specific technical interactions.
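One piece of such a system is mapping the assessed tone of an utterance to an NPC’s animation and intonation cues. The sketch below is not IDIA Lab’s actual pipeline – the word lists and cue names are invented for illustration – but it shows the kind of decision a Unity-side controller could make from a tone score:

```python
# Illustrative tone-to-cue mapping (hypothetical word lists and cue names).
POSITIVE = {"great", "thanks", "wonderful", "love"}
NEGATIVE = {"bad", "angry", "hate", "terrible"}

def tone_score(utterance):
    """Crude lexicon-based tone score: positive words minus negative words."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def npc_cue(utterance):
    """Pick a facial-animation gesture and vocal intonation from the tone."""
    score = tone_score(utterance)
    if score > 0:
        return {"gesture": "smile", "intonation": "warm"}
    if score < 0:
        return {"gesture": "frown", "intonation": "soothing"}
    return {"gesture": "neutral", "intonation": "even"}

print(npc_cue("thanks, that was wonderful"))  # smile / warm
```

In practice the tone assessment would come from an AI service rather than a word list, but the downstream mapping to gesture and intonation follows the same shape.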

Virtual Elementary School

The Maryknoll Virtual School, created by the IDIA Lab, is a digital twin simulation of a parochial elementary school in Honolulu, HI. The virtual school provides an environment for students and teachers to remotely engage in collaborative real-time lectures and conversations in a multiplayer, avatar-based teaching environment. The application also provides administrators the ability to monitor admission, registration, class assignments, and grading through an extensive management portal. The application allows for communication via chat, voice, live whiteboard, screen share, in-world video conferencing, and file sharing.

Deep Mapping / VR / GIS

Deep Mapping Middletown: Designing Immersive Experiences for Spatialized Historical Data

James J. Connolly and John Fillwalk

Deep Mapping Middletown seeks to represent in spatial terms the substantial archive produced by research on Muncie, Indiana, USA, the site of Robert and Helen Lynds’ seminal community studies, Middletown (1929) and Middletown in Transition (1937). The success of the Lynds’ work, which is considered to be among the most influential interpretations of twentieth-century American life, inaugurated a tradition of using this small midwestern city as a barometer for assessing broader social and cultural trends in the United States. Researchers, journalists, and filmmakers have repeatedly returned to the city over the past century to document social and cultural change, generating an extraordinarily rich multimedia archive documenting local experience. Most, though not all, of this material is accessible in digital form.

We have begun to build a multi-tiered platform that mobilizes this archive for “deep mapping” the city. By deep mapping, we mean the process of generating user-driven, multimedia depictions of a place. Drawing on postmodern theory, scholars engaged in deep mapping have employed digital technologies to create complex representations of spaces and empower users to explore them from a variety of perspectives. Deep mapping aims to destabilize depictions of place, conveying the multiple meanings that different groups of people have assigned to specific settings and their evolution over time. Our deep mapping platform integrates GIS and immersive 3D simulation technology to provide access to this material and facilitate investigations of spatial-historical experience, including the evolution of racial geographies and the civic and social consequences of deindustrialization.

Part of our aim in this project is to reframe Middletown Studies for scholars, students, and public audiences. While there is an extraordinarily rich collection of Middletown research materials, including extensive published scholarship, hundreds of recorded interviews, thousands of photographs, hundreds of hours of films, survey results, and unpublished research reports, much of the work that produced this archive rests on a problematic premise. The Lynds’ initial investigations neglected Black and other minority experiences, an oversight that many follow-up studies failed to remedy. Only since the 1970s has Middletown research become more inclusive, incorporating the experiences of racial and ethnic minorities that the Lynds and their immediate successors ignored. Recent work has also jettisoned the anthropological gaze in favor of more collaborative approaches that share authority between researchers and community members. A key goal of Deep Mapping Middletown is to elevate this later body of work, using the multivocality inherent in deep mapping to repurpose the Middletown archive as a resource for investigating and empowering the marginalized, not just the mainstream.

In its current, prototyping stage, our project aims to overcome several technical and design challenges. These include:

1. The development and refinement of a Historical Spatial Data Infrastructure (HSDI) that includes geolocated historical data from various sources and in various formats (text, image, audio, and video) ingested into a GIS, as well as tools, features, and procedures to manage and facilitate use of the data. A key part of this work is establishing lat-long coordinates for photographs and audio-visual material as well as for passages extracted from textual sources such as oral history transcripts or ethnographic writing.

2. Application of manual and computational techniques developed by various scholars for capturing and representing vague or subjective spatial information in both 2D and 3D. The Middletown archive includes a substantial body of purposely obscured evidence in ethnographic writing, as well as spatial data contained in oral histories and anonymized survey data. While researchers have employed a range of visualization techniques that extend beyond traditional coordinate-based cartographic methods to represent these kinds of data, we are especially interested in approaches that link vague and subjective experiential evidence to coordinate locations.

3. Development of the interface between a Unity-based virtual environment and a GIS-based HSDI that enables users to engage with the spatial data we are assembling.

4. Development of a virtual environment that includes in-world visual cues modeled on game analytics, such as heat maps and dwell times, that visualize spatial data, including affective and sensory experiences, documented in Middletown research.
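Challenge 1 describes geolocated archive records of mixed media types ingested into a GIS. As a minimal sketch of what one such record might carry (the field names and sample values here are assumptions, not the project’s actual schema):

```python
# Hypothetical HSDI record: one geolocated archive item. Field names and
# the sample coordinates (roughly Muncie, Indiana) are illustrative only.
from dataclasses import dataclass

@dataclass
class HSDIRecord:
    item_id: str
    media_type: str        # "text", "image", "audio", or "video"
    latitude: float
    longitude: float
    description: str

record = HSDIRecord(
    item_id="photo-0001",
    media_type="image",
    latitude=40.1934,
    longitude=-85.3864,
    description="Storefront photograph, sample neighborhood",
)
print(record.media_type, record.latitude, record.longitude)
```

A GIS layer is then, in essence, a collection of such records filtered by media type, period, or location.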

We propose to present a paper documenting our progress to date in meeting these challenges and explain the potential of 3D immersion for deep mapping. Working with a team of scholarly advisors, librarians, designers and developers, we have produced an initial GIS that includes geolocated sample data for a single neighborhood drawn from collections of photographs, oral histories, and ethnographies. We have also developed a 3D immersive space using the Unity game engine, employing the ArcGIS SDK for Unity to integrate our GIS and 3D model, giving users access to spatial data within our immersive environment. We are also currently creating role-playing experiences that limit access to spaces and information depending on the role adopted by the user and the period selected. These experiences are derived from spatial data in the Middletown archive. We will also follow best practices for heritage visualization as described in the London Charter by making paradata that documents our interpretive choices available to users.

Our presentation will also include a demonstration of our prototyping work to date, including a sample walk-through of our immersive test environment and a review of the HSDI.

Smithsonian VR Exhibition

BSU IDIA Lab developed Virtual Milestones of Flight, an immersive head mounted display prototype designed for the permanent exhibition at the Smithsonian Air and Space Museum in Washington, DC.

Both the exhibition and simulation celebrate selections of the most significant airplanes, rockets, and spacecraft in history. Craft incorporated into the simulation include Charles Lindbergh’s Spirit of St. Louis; the X-15, a hypersonic rocket-powered USAF aircraft; the Wright brothers’ Flyer; Sputnik, the first artificial satellite; and NASA’s Apollo 11 Lunar Lander.

Pumapunku Reconstruction

There’s something fascinating about looking at ancient ruins and imagining what the lives and culture of the people who lived there might have been like. With ever-changing advancements in technology, researchers can get closer than ever to solving these mysteries.

Ball State University faculty John Fillwalk, senior director of the Institute for Digital Intermedia Arts (IDIA Lab) and associate professor of electronic art, is merging cutting-edge digital technology with archaeological research. His latest project involves bringing Pumapunku, an ancient temple in the pre-Incan city of Tiwanaku, to life through digital modeling, offering new perspectives on one of the world’s most mysterious archaeological sites. His work was highlighted in a recent episode of Ancient Aliens, produced by the History Channel.

“Ball State’s IDIA Lab was featured on a couple of episodes with History Channel for the Universe series—one on Ancient Rome and another on Stonehenge,” Prof. Fillwalk said. “In both of those projects, we employed 3D technology in various ways to simulate archeological sites that are mostly in ruin and bring them back to life virtually.

“Over the years, we have developed some niche expertise in working with archeologists to examine these ancient monuments. We deploy NASA’s JPL data to create an accurate virtual Sun and Moon based on a specific location and time period. And because we had worked with History Channel previously using those techniques, they approached us with this new project.”

Pumapunku: A Historical Marvel

IDIA Lab digital 3D model of Pumapunku

Experts disagree on the age of Pumapunku. The site is part of the larger Tiwanaku complex and is believed to date back over 1,500 years to A.D. 500. It has puzzled historians and archaeologists because of its advanced engineering and intricate stonework.

Pumapunku, often called the “Gate of the Puma,” is located in the Bolivian highlands near Lake Titicaca and was part of the Tiwanaku civilization, which predates the Inca Empire. The city’s stone blocks, which weigh several tons each—the largest clocks in at 144 tons—feature precise cuts, perfectly straight edges, and intricate geometric patterns that defy the technological capabilities typically attributed to ancient civilizations. Some scholars have speculated that Pumapunku’s architects possessed advanced knowledge of astronomy, mathematics, and engineering.

The mystery of how the ancient Tiwanaku people managed to transport and carve such large stones has captivated researchers for decades. Traditional theories suggest the use of rudimentary tools and methods like wooden rollers or ropes. However, no definitive explanation has been found, leaving plenty of room for modern-day investigation.

IDIA Lab’s Digital Approach

Prof. Fillwalk and his team at IDIA Lab worked for several months to digitally reconstruct Pumapunku using advanced 3D modeling and simulation techniques. This project is not merely a visual recreation but an attempt to understand the cultural, architectural, and environmental context of the ancient city. By doing so, his team hopes to provide deeper insights into the construction methods used by the Tiwanaku civilization and the site’s possible significance in pre-Columbian history.

To create the digital model, Mr. Fillwalk and his team used photogrammetry (a technique that builds 3D models from overlapping photographs), laser scanning, measured hand modeling, and other 3D imaging techniques to capture the exact dimensions of the stone structures at Pumapunku. These data sets were then used to build a highly detailed, accurate, three-dimensional representation of the site. But since the location is in ruin and was never even finished before being abandoned, Mr. Fillwalk and other researchers have had to do their best to fill in gaps and imagine what the final product might have been like.

“The city really is a mess,” Mr. Fillwalk said. “It’s probably one of the most challenging sites we’ve ever tried to reconstruct because, first of all, the architecture itself was incomplete – abandoned before it was totally constructed. We’ve tried to imagine the original architects’ vision. The other problem is that various parties have looted it over the years, looking for gold and such. In one instance, they used dynamite to excavate and just blew up a large section of the foundation. Many stones were taken away or relocated, ending up in local churches and houses.”

Despite these challenges, Mr. Fillwalk is pleased with the results and confident in the final product. Since finishing the episode with the History Channel team, he has collaborated and completed even more work with Dr. Alexei Vranich, an archaeologist, the leading expert on Tiwanaku, and a professor at the University of Warsaw in Poland.

“Sometimes when I work on projects like this, I get a bug about it, so I really wanted to go further with the project,” Fillwalk said. “What we did for the show was a cursory interpretation, but we’ve taken it further since, using Dr. Vranich’s data from his excavations. We are pretty confident in our interpretation. I think it’s as good as we can do at the moment, given the data we have access to.”

Director Fillwalk’s Pumapunku project demonstrates how technology can revolutionize the study of cultural heritage and ancient civilizations. By integrating archaeological expertise with state-of-the-art digital tools, Mr. Fillwalk and the IDIA Lab, centered within the Estopinal College of Architecture and Planning (ECAP), are unraveling the mysteries of Pumapunku while making the ancient site accessible to a broader audience than ever before. Their work bridges the gap between the past and the future, providing new perspectives on one of the most enduring architectural mysteries.

“Resurrecting Puma Punku” (Ancient Aliens S20, E20) aired on Sept. 14; it is available on some streaming services and will be rebroadcast on the History Channel.

September 27, 2024 by Jennifer Criss

River Data Visualization

oneWAPIHANNE is an interactive artwork visualizing live streaming data from water stations along the White River in Indiana. The stations are installed and monitored by the United States Geological Survey (USGS) – reporting the current gauge height and water speed over time. The visualization graphically represents each station’s data in its entirety or individually – with overlays for station locations and record river levels. Wapihanne is the name for the White River in the Delaware language.

The chromatic tiles in the upper left represent the gauge stations along the White River. The monochromatic tiles in the upper right represent supplemental data overlays – displaying a line graph of the river height over time, a compressed map of the White River with the approximate location of the gauge stations, and the record levels for each station. In the bottom right of the screen is a gray tile that refreshes the data and restarts the program.

Each flow of station data represents river height and velocity over the course of the last thirty days. The top of the flow is the height, and the size of the flow is related to velocity. Touching and dragging a specific flow reveals detailed information about that particular reading – creating a sound with volume tied to river velocity.
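The mapping described above – gauge height driving the top of a flow, velocity driving its size and sound volume – can be sketched as a normalization from sensor units to visual parameters. The ranges and sample reading below are made up for illustration; real readings come from USGS water services:

```python
# Hedged sketch of a gauge-reading-to-visual mapping. The height and
# velocity ranges are assumed, not the artwork's actual calibration.

def normalize(value, lo, hi):
    """Clamp value into [lo, hi] and scale to 0..1."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def flow_params(gauge_height_ft, velocity_fps,
                height_range=(0.0, 20.0), velocity_range=(0.0, 5.0)):
    """Map one station reading to normalized visual/audio parameters."""
    return {
        "flow_top": normalize(gauge_height_ft, *height_range),
        "flow_size": normalize(velocity_fps, *velocity_range),
        "sound_volume": normalize(velocity_fps, *velocity_range),
    }

reading = {"height": 4.0, "velocity": 2.5}  # hypothetical station reading
print(flow_params(reading["height"], reading["velocity"]))
# flow_top 0.2; flow_size and sound_volume 0.5
```

Tying sound volume to the same normalized velocity as flow size keeps the audible and visible behavior of each flow consistent, as the piece describes.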


Displaced Resonance v2

IDIA Lab was invited to exhibit their virtual and hybrid artworks at the Third Art and Science International Exhibition at the China Science and Technology Museum in Beijing, China.

http://www.idialabprojects.org/displacedresonance/

Displaced Resonance v2

John Fillwalk

Michael Pounds

IDIA Lab, Ball State University

Interactive installation, mixed media

2012

Displaced Resonance v2 is an interactive installation consisting of sixteen reactive forms that are networked in a grid of light and sound. Interaction within the sculptural field is based on a participant’s presence and proximity to each sculpture. The Displaced Resonance installation is connected to a mirrored instance of the field in a virtual environment – bridging both physical and virtual visitors within a shared hybrid space. Visitors to the virtual space are represented by avatars and through their proximity affect the light and sound of each sculpture. Each participant is aware of the other in each space, uniting both instances within a singular hybrid environment.

A computer system using a thermal camera tracks the movement of visitors and responds by controlling the distribution of sound and dynamic RGB data to the LED lights within the sculptural forms. The installation utilizes custom live processing software to transform these sources through the participants’ interaction – displacing the interaction from both their physical and virtual contexts to the processed framework of physical and virtual resonances. The two environments are linked to send and receive active responses from both sides of the installation via messaging, sensors, hardware, and scripting.
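The proximity-to-light behavior can be sketched as a simple distance falloff: the closer a tracked visitor is to a sculpture, the brighter its LED color. The base color and falloff radius below are assumptions for illustration, not the installation’s actual values:

```python
# Hypothetical proximity-to-LED mapping with linear falloff.
import math

def proximity_rgb(visitor_xy, sculpture_xy, base_rgb=(255, 80, 0), radius=3.0):
    """Scale base_rgb by how close the visitor is to the sculpture.

    Intensity is 1.0 at zero distance and falls linearly to 0.0 at `radius`.
    """
    dx = visitor_xy[0] - sculpture_xy[0]
    dy = visitor_xy[1] - sculpture_xy[1]
    distance = math.hypot(dx, dy)
    intensity = max(0.0, 1.0 - distance / radius)
    return tuple(round(c * intensity) for c in base_rgb)

print(proximity_rgb((1.0, 1.0), (1.0, 1.0)))  # full brightness at zero distance
print(proximity_rgb((4.0, 1.0), (1.0, 1.0)))  # dark at the falloff radius
```

The same function works for avatar positions in the mirrored virtual field, which is what lets a single mapping drive both halves of the hybrid installation.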

In prototyping this work, the process included both physical and virtual models to design and build the form and the interaction. The physical prototypes were interpreted in a virtual simulation environment, investigating the spatial interaction of the structure. The interactive functionality was tested through scripting before the form was brought into the camera-sensed physical version. After several virtual iterations, the form was re-interpreted and fabricated.

John Fillwalk

with Michael Pounds, David Rodriguez, Neil Zehr, Chris Harrison, Blake Boucher, Matthew Wolak, and Jesse Allison.

Third Art and Science International Exhibition

China Science and Technology Museum in Beijing

http://www.tasie.org.cn/index.asp

TASIE Press release

http://www.tasie.org.cn/content_e.asp?id=84

Ball State artists create “forest” of light on display in China and Internet

Muncie, Ind. — Ball State University electronic artists have created a “forest” of light and sound that will be on exhibit in Beijing, China through November, yet also accessible to visitors from Indiana or anywhere else in the world.

That’s possible because “Displaced Resonance,” as the interactive art exhibit is known, has both real-life and virtual components.

The physical portion has been installed in a gallery of the China Science and Technology Museum in Beijing. There, in-person visitors can negotiate a thicket of 16 interactive sculptures spaced 1.5 meters apart that will change colors and emit music as they approach.

A digital replica of the layout, meanwhile, resides on the Internet, accessible through the museum’s website. Online visitors can wander the virtual exhibit using an avatar, and the digital pillars will change colors and produce sounds, just like their physical counterparts.

But that’s not all – the two pieces interact with each other, says John Fillwalk, director of Ball State’s Institute for Digital Intermedia Arts (IDIA) and Hybrid Design Technologies (HDT), which created the work in collaboration with IDIA staff, students, and BSU composer Michael Pounds.

When an online avatar approaches a virtual pillar, the corresponding real-life column also will change colors, and vice versa. In-person and virtual visitors will produce different colors, however, allowing them to track each other through the exhibit.

“It’s what we call hybrid art,” says Fillwalk. “It’s negotiating between the physical world and the virtual. So it’s both sets of realities, and there’s a connection between the two.”

The physical pillars are two meters (more than 6 feet, 6 inches) tall. Each consists of a wooden base containing a sound system, a translucent pillar made of white corrugated plastic, and computer-controlled lighting.

A thermal camera mounted on the museum’s ceiling keeps track of visitors and feeds its data to a computer program that directs the columns to change color and broadcast sounds when someone draws near.

“It’s a sensory forest that you can navigate,” Fillwalk says.

A video screen mounted on a wall overlooking the exhibit allows museum visitors to watch avatars move around the virtual version, while Internet patrons can affect the physical counterpart.

“Displaced Resonance” is the centerpiece of Ball State’s contributions to the Beijing museum’s 3rd Art and Science International Exhibition and Symposium, a month-long celebration of technology and the arts. Ball State was invited to participate because museum curators discovered some of IDIA’s work and liked what they saw, Fillwalk said.

In addition to “Displaced Resonance,” IDIA contributed four other pieces of digital art that museum visitors can view at a kiosk.

Those pieces are:

· “Proxy”, in which visitors create, color and sculpt with floating 3D pixels.

· “Flickr Gettr,” in which visitors can surround themselves with photos from the Flickr web service that correspond to search terms they submit.

· “Confluence,” in which users create virtual sculptures by moving around the screen and leaving a path in their wake.

· “Survey for Beijing,” in which real time weather data from Beijing is dynamically visualized in a virtual environment.

(Note to editors: For more information, contact John Fillwalk, director of the Institute for Digital Intermedia Arts, at 765-285-1045 or jfillwalk@bsu.edu; or Vic Caleca, media relations manager, at 765-285-5948 or vjcaleca@bsu.edu. For more stories, visit the Ball State University News Center at www.bsu.edu/news).

Virtual Hadrian’s Villa Launch at Harvard Center

IDIA Lab has designed a virtual simulation of the villa of the Roman Emperor Hadrian, a UNESCO World Heritage site located outside of Rome in Tivoli, Italy. This project was produced in collaboration with the Virtual World Heritage Laboratory (VWHL) at Indiana University (IU), directed by Dr. Bernard Frischer and funded by the National Science Foundation. This large-scale recreation virtually interprets the entire villa complex in consultation with the world’s foremost Villa scholars. The project was authored in the Unity game engine as a live virtual multi-user online learning environment that allows students and visitors to immerse themselves in all aspects of the simulated villa. The project launched at the Harvard Center for Hellenic Studies in Washington, DC on November 22, 2013. The webplayer versions of the Hadrian’s Villa project are funded through a grant from the Mellon Foundation.

The Launch of the Digital Hadrian’s Villa Project

The Center for Hellenic Studies, Ball State University, and Indiana University

Friday, November 22, 2013

Harvard Center for Hellenic Studies

Washington, DC

Speakers:
John Fillwalk, IDIA Lab, BSU
Bernard Frischer, VWHL, IU
Marina Sapelli Ragni

The presentations included previews of:
The Virtual World of Hadrian’s Villa
The Digital Hadrian’s Villa website

The project not only recreates the villa buildings but also includes a complete Roman avatar system, non-player characters with artificial intelligence, furniture, appropriate vegetation, a dynamic atmospheric system, and a sophisticated user interface. The interface provides learning, navigation, reporting, and assessment opportunities and also allows users to change the position of the sun to any date in 130 AD using data from the Horizons database at NASA JPL – testing theses about astronomical alignments of architectural features during solstices and equinoxes. Learning communities are briefed on the culture and history of the villa and learn the virtual environment prior to immersing themselves within it. The avatar system allows visitors to enter the world selecting class and gender – already aware of the customs and behavior of the Roman aristocrat, soldier, slave, or politician.
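The project itself queries JPL Horizons for ephemeris-accurate sun positions. As a rough illustration of the underlying idea – computing the Sun’s altitude for a given day and hour angle at the villa’s latitude (about 41.9° N) – a common textbook approximation can be sketched; it is modern-era and only good to about a degree, so it stands in for, rather than reproduces, the Horizons data:

```python
# Simplified solar-position sketch (NOT the JPL Horizons ephemeris the
# project uses). Good to roughly a degree for the modern era.
import math

def solar_declination_deg(day_of_year):
    """Classic cosine approximation of the Sun's declination."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def solar_altitude_deg(latitude_deg, day_of_year, hour_angle_deg=0.0):
    """Sun altitude from latitude, day of year, and hour angle (0 = solar noon)."""
    lat = math.radians(latitude_deg)
    dec = math.radians(solar_declination_deg(day_of_year))
    h = math.radians(hour_angle_deg)
    alt = math.asin(math.sin(lat) * math.sin(dec) +
                    math.cos(lat) * math.cos(dec) * math.cos(h))
    return math.degrees(alt)

# Solar noon near Tivoli's latitude on the June solstice (~day 172):
print(round(solar_altitude_deg(41.9, 172), 1))  # roughly 71.5 degrees
```

Comparing such altitudes (and the corresponding azimuths) against the sightlines of architectural features is the essence of testing solstice and equinox alignment theses in the virtual villa.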

Khan Academy Walkthrough of Virtual Hadrian’s Villa: http://youtu.be/Nu_6X04EGHk

Link to Virtual Hadrian’s Villa Walkthrough: http://youtu.be/tk7B012q7Eg

The Digital Hadrian’s Villa Project:

Virtual World Technology as an Aid to Finding Alignments between

Built and Celestial Features

Bernard Frischer1

John Fillwalk2

1Director, Virtual World Heritage Laboratory, University of Virginia

2Director, IDIA Lab, Ball State University

Hadrian’s Villa is the best known and best preserved of the imperial villas built in the hinterland of Rome by emperors such as Nero, Domitian, and Trajan during the first and second centuries CE. A World Heritage site, Hadrian’s Villa covers at least 120 hectares and consists of ca. 30 major building complexes. Hadrian built this government retreat about 20 miles east of Rome between 117, when he became emperor, and 138 CE, the year he died. The site has been explored since the 15th century and in recent decades has been the object of intense study, excavation, and conservation (for a survey of recent work, see Mari 2010).

From 2006 to 2011, with the generous support of the National Science Foundation[1] and a private sponsor, the Virtual World Heritage Laboratory created a 3D restoration model of the entire site authored in 3DS Max. From January to April 2012, Ball State University’s Institute for Digital Intermedia Arts (IDIA Lab) converted the 3D model to Unity 3D, a virtual world (VW) platform, so that it could be explored interactively, populated by avatars of members of the imperial court, and published on the Internet along with a related 2D website that presents the documentation undergirding the 3D model.

The 3D restoration model and related VW were made in close collaboration with many of the scholars who have written the most recent studies on the villa.[2] Our goal was to ensure that all the main elements—from terrain, gardens, and buildings to furnishings and avatars—were evidence-based. Once finished, the model was used in two research projects.

The first project was an NSF-sponsored study of the usefulness of VW technology in archaeological education and research. We used the virtual villa in undergraduate classes at Xavier University and the University of Virginia to investigate the thesis of two recent studies by project advisors Michael Ytterberg and Federica Chiappetta about how this enormous built space was used by six different groups of ancient Romans, ranging from the Emperor and Empress to ordinary citizens and slaves (Ytterberg 2005; Chiappetta 2008). Avatars representing these groups were created and are operated by undergraduate students as a Problem-Based Learning (PBL) experience. They are observed by subject experts, who use the data generated to test and, if necessary, refine the initial theses about how circulation through the villa was handled. The results are still being evaluated. Preliminary indications are that the combination of VW technology and a PBL educational context is very effective in taking advantage of the known connection between the hippocampus and long-term learning, especially when the information to be mastered is spatial (Kandel 2007).

The second project involved use of the VW for some new archaeoastronomical studies. Most of our advisors’ publications, like the older work by archaeologists that preceded them, have concentrated on archaeological documentation, restoration, formal, and functional analysis. The latest research by advisor De Franceschini and her collaborator Veneziano (2011) combined formal and functional analysis: it considered the alignment of certain important parts of the villa in relation to the sun’s apparent path through the sky on significant dates such as the solstices. In their recent book they showed how two features of the villa are aligned with the solar solstices: the Temple of Apollo in the Accademia; and the Roccabruna. We used the VW to extend their research to other areas of the villa, taking advantage of 3D technology to restore the sun to the right place in the sky and also to restore the damage to the architecture of the villa, as De Franceschini and Veneziano had independently suggested be done before they learned about our digital model of the villa.

The work of De Franceschini and Veneziano is innovative. Archaeoastronomy has become an accepted field of study in recent decades, and a considerable amount of work has been done in Old and New World archaeology. In Roman archaeology, however, this approach is still rarely encountered. Significantly, one of the few compelling studies concerns the most famous Hadrianic building: the Pantheon in Rome. Hannah and Magli 2009 and Hannah 2011 have shown a number of solar alignments in the building, of which the most notable are the sun’s illumination of the entrance doorway at noon on April 21; and the view from the middle of the Pantheon’s plaza at sunset on the summer solstice, when the sun silhouettes the statue of Hadrian as Sun god on a four-horse chariot atop the Mausoleum of Hadrian. Like the summer solstice, April 21 is also a significant date: on it occurred the annual festival in Rome known as the Parilia (renamed the Romaia by Hadrian),[3] which celebrated the founding of Rome.

De Franceschini and Veneziano pursued an observation of Mangurian and Ray (2008) to document an impressive example of solar alignment at Hadrian’s Villa involving the tower known as Roccabruna at the western end of the villa. Originally a tower-like structure topped by a round temple, it survives today as a well-preserved, massive lower floor. The main entrance is located on the northwestern side to the right and gives access to a large circular hall covered by a dome. The dome is punctuated by an odd feature: five conduits that are wider on the outside than on the inside (figure 1).

What is the function of these unusual conduits? They have no known parallel in Roman architecture. After asking themselves this same question, the American architects Robert Mangurian and Mary Ann Ray went to Roccabruna at sunset on June 21st, 1988, the day of the summer solstice, and discovered the extraordinary light phenomena that occur there. At sunset the Sun enters through the main door and illuminates the niche on the opposite side, something that happens on most summer days. But only on the days of the summer solstice does the Sun also penetrate the conduit located above that door: its rays emerge from the slot inside the dome, projecting a rectangular blade of light on the opposite side of the dome. In June 2009, De Franceschini verified the findings of Mangurian and Ray. However, the apparent path of the Sun through the sky changes slightly over time, so that in the nearly 1,880 years separating us from Hadrian the precise effect of the alignment has been lost. As they noted, only a computer simulation can recreate the original experience of being in the lower sanctuary at Roccabruna at sunset on the summer solstice during the reign of Hadrian.

Once we had our 3D model of the site, we were able to obtain from NASA’s Horizons system[4] the correct azimuthal data for the year AD 130 and put the sun into the sky at sunset on the summer solstice. Following the lead of De Franceschini, who in the meantime had become a consultant to our project, we put into the niche one of the four statues of the Egyptian sky goddess Isis that were found at the Villa. De Franceschini chose Isis because, first, there is no question that a statue stood in this niche, so something had to be placed there; second, the two flanking niches held candelabra whose surviving bases are decorated with Isiac iconography; and moreover, Isis’ festival in Rome fell on the summer solstice. So we scanned and digitally restored one of the several statues of Isis from the villa and put it into the central niche. Finally, for the dome, which we know from surviving paint was blue and therefore carried the famous “dome of heaven” motif (Lehmann 1945), we followed De Franceschini in restoring a zodiac set up in such a way that the sign of Gemini is over the statue niche, since the last day of Gemini is the summer solstice. Our zodiac is adapted from the great Sun God mosaic in the Rheinisches Landesmuseum in Bonn, which kindly gave us permission to use it.

As can be seen in figure 2, when we restored the sun in the right position in the sky dome for sunset on the summer solstice (June 21) of 130 CE in our 3DS Max model of Roccabruna, the sunlight coming through the main doorway illuminated the statue of Isis in the statue niche, and the light entering through the conduit lit up the sign of Gemini painted on the cupola. So we were able to confirm the Mangurian-Ray thesis.

The approach we have taken in our Roccabruna project is deductive: Mangurian and Ray noted the strange feature of the conduits punctuating the cupola of Roccabruna; they hypothesized a solar alignment. De Franceschini and Veneziano agreed and for various reasons we don’t need to go into today, they put a statue of Isis into the statue niche. We set up the conditions in which these hypotheses could be tested and were able to verify them.

But surely, if there is one such alignment at the villa of the same emperor who was responsible for the Pantheon, there may be others. But the villa is very big—covering over 100 hectares—and has 30 major building complexes, most larger than Roccabruna. Moreover, such alignments could just as easily involve astrological features such as the Moon and the planets. Faced with this level of complexity, the best methodological way forward in searching for new alignments is clearly inductive and empirical. This is one reason why we asked the Institute for Digital Intermedia Arts (IDIA Lab) of Ball State University to create a multi-user virtual world based in Unity 3D from our 3DS Max model.

The project of virtually interpreting a simulation on the scope and scale of Hadrian’s Villa was a daunting one – engaging layers of scholarly, technical and pedagogical challenges. The technical challenges were many – foremost, to leverage the Unity 3D game engine as an effective multi-user, avatar-based virtual world. An important factor was to create an environment that was straightforward and accessible via standard web browsers on both Mac and Windows, so we selected Unity 3D as the starting point for developing the platform. We required specific back-end administration tools to handle the accounts and server-side aspects of the project; for this we relied on SmartFoxServer, as it integrates well with Unity 3D. Our team took an approach that bridged and integrated disparate technologies, creating a robust virtual world platform to immersively augment both instructional and PBL processes. VW features available to the learning community included text-based communication, a live map showing current visitor positions, map-based teleportation, a managed voice channel, user-selected avatar gestures, a list of online users, paradata, photographs of the extant site, plan views, and integrated web links.

Key to the project was a varied system of avatars representing the imperial court, freemen, senators, scholars, soldiers, and slaves of the emperor. The avatar system served several important functions, testing recent scholarly interpretations of circulation throughout the villa and the use of various spaces for typical court activities – meals, imperial audiences, bathing, worship, etc. Upon entering the simulation, the choice of avatar determined one’s social standing within the role-play of the world.

A gesture system was created via motion capture, providing each user with a unique set of actions and gestural responses for social interaction – including greetings, bowing and gestures specific to rank and class. Communication was also a critical element in the modes of problem-based learning engaged by the participants in the simulation. Specific technologies provided varied abilities such as public chat, private instant messaging and live multi-user voice channels.

A companion website was co-developed and integrated into the VW environment, providing learners with visual assets such as photographs and panoramas of the current site, site plans, elevations, and video interviews with Villa scholars. We also developed three-dimensional turntables of the interpreted and reconstructed models, overview information on each of the major Villa features, a bibliography and an expansive database of art attributed to the Villa site. This information can be accessed by learners directly from within the virtual world. The development team integrated the notion of paradata, introduced by the London Charter – making instantly transparent the scholarship and all underlying elements of the 3D model (from terrain to buildings, furnishings, costumes, and human behavior).

In support of a new research theme on celestial alignments by consultants De Franceschini and Veneziano, a major goal for the project was to develop an accurate simulation of the position of the sun. The solar tracking system, or virtual heliodon, that we created in response to this research was envisioned as a bridge between the virtual environment and coordinates from an external database calculating solar positions. After investigating existing tools, we decided to employ the Horizons database, created by NASA’s Jet Propulsion Laboratory as an online solar system data computation service tracking celestial bodies in ephemerides from 9999 BCE to 9999 CE. In implementing solar tracking for the Villa project, in instances where we wanted to investigate potentially significant solar alignments, we entered the latitude, longitude and altitude of specific buildings from the Tivoli site to poll the Horizons data for the year 130 CE. The user was able to change the date and time of day, and quickly play the sun from specific moments via the user interface. The system was correlated to both the Julian and Gregorian calendars and contained presets for the vernal and autumnal equinoxes as well as the summer and winter solstices.
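The core calculation behind such a heliodon can be sketched without Horizons. The following is a minimal stand-in, not the system the lab built: it computes the sun’s apparent azimuth and elevation for a given UTC time and site coordinates using standard low-precision formulas. Its accuracy degrades for dates as remote as 130 CE, which is precisely why the project polled the Horizons ephemerides instead.

```python
import math
from datetime import datetime, timezone

def julian_day(dt):
    """Julian Day for a timezone-aware UTC datetime (Meeus algorithm)."""
    y, m = dt.year, dt.month
    d = dt.day + (dt.hour + dt.minute / 60 + dt.second / 3600) / 24
    if m <= 2:
        y -= 1
        m += 12
    a = y // 100
    b = 2 - a + a // 4
    return int(365.25 * (y + 4716)) + int(30.6001 * (m + 1)) + d + b - 1524.5

def solar_position(dt, lat_deg, lon_deg):
    """Approximate solar (azimuth, elevation) in degrees; azimuth east of north."""
    n = julian_day(dt) - 2451545.0                      # days since J2000
    L = (280.460 + 0.9856474 * n) % 360                 # mean longitude
    g = math.radians((357.528 + 0.9856003 * n) % 360)   # mean anomaly
    lam = math.radians(L + 1.915 * math.sin(g) + 0.020 * math.sin(2 * g))
    eps = math.radians(23.439 - 0.0000004 * n)          # obliquity of ecliptic
    ra = math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam))
    dec = math.asin(math.sin(eps) * math.sin(lam))
    # Sidereal time gives the sun's hour angle at this longitude
    gmst = (18.697374558 + 24.06570982441908 * n) % 24
    ha = math.radians(((gmst + lon_deg / 15) * 15 - math.degrees(ra)) % 360)
    lat = math.radians(lat_deg)
    elev = math.asin(math.sin(lat) * math.sin(dec)
                     + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.atan2(-math.sin(ha),
                    math.cos(lat) * math.tan(dec) - math.sin(lat) * math.cos(ha))
    return math.degrees(az) % 360, math.degrees(elev)
```

Driving a Unity directional light from azimuth/elevation pairs like these (or from Horizons output, as the project did) reproduces the heliodon behavior described above.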

These tools allowed for the rapid discovery of potential alignments that might bear further investigation. The solar feature allows one to proceed empirically, in effect turning the clock back to 130 CE and running experiments in which the days and hours of the year are sped up by orders of magnitude, so that one can in a very short time find candidate alignments not yet hypothesized by scholars working in the traditional way of Mangurian and Ray.
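That empirical mode of search can be illustrated with a simple sweep. The sketch below scans a year of sunrises for matches against a building-axis bearing; the latitude is Tivoli’s, but the axis bearing and tolerance are hypothetical placeholders, and the geometric sunrise formula ignores refraction, horizon altitude, and the slow drift of the obliquity since antiquity, limitations the in-world tool avoided by using Horizons data.

```python
import math

LAT_TIVOLI = 41.94   # approximate latitude of Hadrian's Villa
AXIS_AZ = 118.0      # hypothetical building-axis bearing, degrees east of north
TOL = 0.5            # matching tolerance, degrees

def declination(day_of_year):
    """Simple seasonal approximation of solar declination, in degrees."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def sunrise_azimuth(day_of_year, lat_deg):
    """Azimuth of geometric sunrise (degrees east of north), flat horizon."""
    dec = math.radians(declination(day_of_year))
    lat = math.radians(lat_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

# Sweep every day of the year, flagging sunrises that line up with the axis
matches = [d for d in range(1, 366)
           if abs(sunrise_azimuth(d, LAT_TIVOLI) - AXIS_AZ) < TOL]
```

At Tivoli’s latitude the sunrise azimuth sweeps only from roughly 58° to 122° over the year, so a candidate bearing outside that band can be ruled out immediately, while bearings inside it yield a short list of dates worth checking in the full simulation.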

As developers, our goal was to create the solar tool and let students and scholars use it to undertake their own empirical research. Our team did not intend to engage in this research ourselves, yet in the process of working within the environment daily we quickly began to notice curious solar phenomena. In a bit of empirical study of the very first component of the site we installed in the simulation, the Antinoeion – the newly discovered Temple of the Divine Antinous – we noticed an alignment of potential interest. The most likely alignment seemed at first glance to be along the main axis running from the entrance, through the obelisk in the central plaza, to the statue niche at the end of the axis. We ran the days and hours of the year and found that the sun and the shadow of the obelisk align at sunrise on July 20. We consulted our expert on the Egyptian calendar in the Roman period, Professor Christian Leitz of the University of Tuebingen, and he confirmed that this date has religious significance. It is, in fact, the date of the Egyptian New Year, as the Romans of Hadrian’s age clearly knew (cf. the Roman writer Censorinus, who states that the Egyptian New Year’s Day fell on July 20 in the Julian Calendar in 139 CE, coinciding with a heliacal rising of Sirius in Egypt).

In the process of developing and subsequently utilizing the simulation tools we created for astro-archaeological research, our conclusion has been that virtual world technologies can indeed take the inquiry into significant built-celestial alignments to a new level of insight.

Bibliography

Chiappetta, F. 2008. I percorsi antichi di Villa Adriana (Rome).

De Franceschini, M. and G. Veneziano. 2011. Villa Adriana. Architettura celeste. I segreti dei solstizi (Rome).

Hannah, R. 2008. Time in Antiquity (London).

Hannah, R. 2011. “The Role of the Sun in the Pantheon’s Design and Meaning,” Numen 58: 486-513.

Kandel, E. 2007. In Search of Memory: The Emergence of a New Science of Mind (W. W. Norton, New York). Kindle edition.

Lehmann, K. 1945. “The Dome of Heaven,” Art Bulletin 27: 1-27.

Lugli, G. 1940. “La Roccabruna di Villa Adriana,” Palladio 4: 257-274.

Mangurian, R. and M.A. Ray. 2008. “Re-drawing Hadrian’s Villa,” Yale Architectural Journal, 113-116.

Mari, Z. 2010. “Villa Adriana. Recenti scoperte e stato della ricerca,” Ephemeris Napocensis 20: 7-37.

Ytterberg, M. 2005. “The Perambulations of Hadrian. A Walk through Hadrian’s Villa,” Ph.D. dissertation, University of Pennsylvania.

VBAM: VIRTUAL BROAD ART MUSEUM COMMISSION

Press Release

View the VBAM project

VBAM Introductory Lecture

John Fillwalk and the Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University were commissioned to design and build a virtual museum and artworks for the Eli and Edythe Broad Art Museum in East Lansing, MI. The physical museum was designed by architect Zaha Hadid and opened to the public in Fall 2012. Museum Director Michael Rush commissioned Fillwalk and IDIA Lab to envision and create a dynamic program of artworks for the Virtual Broad Art Museum project (VBAM). Fillwalk and his collaborators created an immersive multi-user environment and four commissioned artworks using Unity 3D – experienced entirely within a web browser.

“Engaging visitors with innovators at the leading edge of art and technology, both here at MSU and around the globe, is key to the Broad Art Museum’s mission,” said founding director Michael Rush. “With the Virtual Broad Art Museum, we have an opportunity to embrace the tremendous creative and connective possibilities that exist in the digital world.”

The mixed world reception took place on Thursday, March 15th 2012 from 6:00 – 8:00 pm EST simultaneously on the campus of Michigan State University and in the VBAM virtual worlds.

Please visit the VBAM project links below.

http://idialabprojects.org/vbam/proxy.html

http://idialabprojects.org/vbam/flickrgettr.html

http://idialabprojects.org/vbam/survey.html

http://idialabprojects.org/vbam/confluence.html

VBAM Exhibitions by John Fillwalk

Participants in con|FLUENCE can create pathways based on reactions to both social and spatial relationships. There is a virtual response and persistence to interactions, as the routes are drawn and sculpted in the three-dimensional environment – forming nodes that are created via visitors’ proximity.

con|FLUENCE. John Fillwalk. 2012.

Survey for VBAM is an immersive landscape simulation using real time weather data from the location of the Broad Art Museum in East Lansing, Michigan. Representations of surveyor’s tape, flags, light, sky color, time of day, wind speed and even cloud density are driven by the actual physical weather forces – informing the virtual landscape.

Survey. John Fillwalk. 2012.

dis|PLACE is an interactive gesture-based performance that will be exhibited at the VBAM reception on the 15th of March. This installation links virtual space to physical reality in an exploration of agency and gesture. Employing a participant’s movement, the work can both navigate 3D space – as well as control direct interactions with the digital performers via a Microsoft Kinect™ sensor.

dis|PLACE. John Fillwalk. 2012.

Proxy is a performative, interactive and site-specific virtual installation in which participants shape the evolution of a sculptural and sonic composition in response to the Museum. Interactions are painted, sculpted and sonified within this collaborative and evolving environment. The work progressively constructs itself in relation to the Museum – eventually transforming into an integrated structure – reflecting and becoming its environment.

Proxy. John Fillwalk. 2012.

Flickr™ Gettr for VBAM connects the social image web service Flickr™ to the virtual Museum environment, allowing visitors to create a dynamic cloud of spatial imagery by entering a search term of their choice – bringing tagged images from Flickr™ into the Museum environment. The search term is spoken by the software and tags from the image cloud are revealed, allowing for subsequent user transformations of the spatial imagery.

Flickr Gettr. John Fillwalk. 2012.

Credits

Proxy.
John Fillwalk with
Neil Zehr, Michael James Olson
2012

Survey for VBAM
John Fillwalk with
Neil Zehr, Keith Kothman, Charles Estell
2012

Flickr Gettr for VBAM
John Fillwalk with
Jesse Allison, Neil Zehr, Charles Estell
2012

con|FLUENCE
John Fillwalk with
Neil Zehr
2012

dis|PLACE (exhibited at the reception on the 15th of March as a live performance only)
John Fillwalk with
Neil Zehr, Jonathan Strong
2012

Michael Rush, Director of the Broad Museum of Art, MSU
Adam Brown, Professor Intermedia and Electronic Art, MSU
John Fillwalk, Director of the Institute for Digital Intermedia Arts [IDIA Lab], BSU
Neil Zehr, IDIA Lab, BSU
Charles Estell, IDIA Lab, BSU
Michael James Olson, IDIA Lab, BSU
Ina-Marie Henning, IDIA Lab, BSU
Jonathan Strong, IDIA Lab, BSU
Jesse Allison, LSU
Keith Kothman, BSU
Jonathan Ross, BSU
Blake Boucher, BSU
Michael Davidson, MSU
Jayne Goedekke, MSU
Kyle Gomboy, Reaction Grid
Chris Hart, Reaction Grid

Special thank you to Neil Zehr, IDIA Lab, BSU

Mellon Foundation Humanities Virtual World Consortium


BSU’s IDIA Lab was contracted by the Mellon Foundation Humanities Virtual World Consortium to design and develop a major open source virtual world initiative for Digital Humanities and Cultural Heritage projects. The consortium – comprised of King’s College London; UCLA; the University of Virginia and Trinity College, Dublin – turned to the expertise of BSU’s IDIA Lab to create this innovative two-year hybrid simulation platform that leverages new modes of teaching and learning in immersive environments.

http://virtualworlds.etc.ucla.edu

Concurrent with the development of the prototype infrastructure, members of the Consortium developed working relationships with Ball State University’s IDIA Lab, an internationally recognized academic leader in the development of virtual worlds, human computer interaction, visualization and 3D simulation. Most important for the Consortium, the IDIA Lab is engaged in the development of scholarly, creative and pedagogical projects that explore the intersection between the arts, science and technology. The IDIA Lab is not merely a technical development team, but is also an interdisciplinary design studio that integrates art and emergent technologies into every phase of development. After inviting John Fillwalk, the lab’s director, to attend a series of conference calls with the consortium, a proposal for interface and in-world design was solicited. John Fillwalk is an internationally recognized artist and developer of virtual and hybrid environments. He serves as the senior director of the Hybrid Design Technologies initiative [HDT], professor of Art and director of the Institute for Digital Intermedia Arts [IDIA Lab] at Ball State University. Over the past 25 years, his interactive and virtual artworks have been exhibited internationally in numerous festivals, galleries and museums including SIGGRAPH, CYNETart, Synthèse, 404 Festival, Dutch Design Week, Boston Cyberarts, Virtual Broad Art Museum, ISEA, ASCI, Vidéoformes, the Indian Institute of Technology and the Beijing Science and Technology Museum.

The Consortium elected to partner with the IDIA Lab, since it offers design as well as technological expertise and a common interest in the theoretical implications of Virtual World technologies on research and pedagogy.

Development will be split between the two independent teams, with the IDIA Lab, in general, centering its work on the development of the Unity-based platform including avatar selection, navigation, the network controller, the user interface system, and back-end network hosting, while Tipodean develops the HTML and KML system and works with members of the Consortium to integrate the four individual projects into the shared platform. The development will not occur in isolation from the rest of the Consortium. The external development teams will offer scheduled monthly training sessions to the internal technical teams of the Consortium. We are employing a development model similar to the one successfully used during the Planning Phase of the HVWC, in which, through a collaborative effort of local staff and third-party developers, we implemented a prototype template and virtual world environment with a subset of the features enabled. In addition, we plan to hire a graphic design independent contractor and a game design independent contractor to work with the PIs and our development teams on the look and feel of the Consortium’s web presence as well as the conceptualization of the interface design.

Our Mission

I. Project Summary

The 1990s saw the development of digital technologies supporting the 3D (three dimensional) modeling of cultural heritage objects and environments. For the first time, humanists could digitally model and reconstruct the damaged or vanished monuments of the past. The results were typically 2D renderings or videos (“animations”). The decade of the 2000s saw the enhancement of 3D environments with avatars making it possible for scholars to enter into the 3D world and to use the Internet to interact with the simulated environment while communicating with fellow humanists located anywhere on the planet. Such software platforms are called networked virtual worlds (NVWs). The Humanities Virtual World Consortium (HVWC) will explore how the unique characteristics of  networked virtual worlds can enable and advance humanistic research while working towards creating a sustainable base for mainstreaming the technology in humanities scholarship. Our initial work is based upon a series of related scholarly initiatives that draw upon virtual world technology and which are meant to: a) advance the current state of research on the phenomenology of space and place, b) design visual and aural conventions to evoke the sensorial experience lost to us due to technological and evidentiary constraints, c) test the current capabilities of virtual worlds to explore chronotopic problems, previously inaccessible due to the limitations of prior technology, d) guide future development of humanities-driven virtual worlds, and e) produce works of exemplary digital scholarship, disseminated in formal, peer-reviewed publications, that solve specific research problems in particular disciplines and area studies. Our overall intent is to demonstrate how networked virtual worlds can uniquely enable important kinds of research inquiry, and thus contribute to the transformation of scholarly communication in the relevant subject fields and to the field of digital humanities. 
With this in mind, our projects have been chosen so that they span multiple disciplines— including Archaeology, Art History, Architectural History, Buddhist Studies, Classics, History, Irish Studies, Literary Studies, Tibetan Studies—and periods from the ancient past to contemporary times. While the projects explore discipline-specific research questions, they share common goals concerning humanities research and scholarly communication in a networked virtual world environment.

II. Expected Outcomes and Benefits of the Project

Project Deliverables: At the conclusion of the project we will release a networked virtual world platform template to be used in the Unity game development engine, the Consortium web site with documentation, four virtual world projects released as Unity3D builds, four draft in-world “articles” embedded within the virtual world project, and four articles submitted for peer review in journals devoted to specific scholarly domains.

The main outcomes of the proposed project will be (i) the consolidation of the Humanities Virtual World Consortium as a robust, fully functional academic organization that can persist and expand in the future; (ii) the development of a shared virtual world platform that is adapted for maximum utility for scholarly projects; (iii) the publication of four significant scholarly projects in that platform that exemplify the way in which virtual worlds offer unique affordances for scholarly research inquiries; (iv) individual articles based on the virtual world content submitted to journals in our traditionally defined, domain-specific fields. At the conclusion of the grant, a) the organization will be published on the web with bylaws and a means for others to join, b) the new collaboratively maintained virtual world research and publication environment will host four scholarly publication products, and c) the shared development template and accompanying documentation will be made available online to aid others in the development of Virtual World content. The primary benefit of this project is that it constitutes a valuable and necessary step towards establishing the organizational and technical basis for a sustainable scholarly exploration of the unique ways in which networked virtual worlds can enable and advance humanistic research. While valuable and compelling individual projects have emerged over the past years, interoperability between them has not been supported, and the barriers remain extremely high for individual scholars who wish to use this medium for their own research without major funding. For these reasons, to date these projects have failed to attain a wide scholarly audience. We propose to establish an organization, technology, and a specific scholarly publication apparatus that will address these deficiencies in the subsequent years and beyond.

The broader, long-term, and more far-reaching benefits are that the initiative will establish a basis for the scholarly community to engage in a serious attempt at creating a sustainable, shared environment for scholarly research and communication to be done in a networked virtual world environment. These publications will show the viability and importance of such research, raise its profile in the scholarly community, and demonstrate in a compelling, virtually tangible, and accessible way the benefits of using a shared platform. Such publications will, it is envisioned, lead to an expansion of the Consortium and finally the development of a robust platform in which scholars can easily participate without the need for major grants to support further development of one-off technologies. Scholarly projects from disparate disciplines and geographical regions would be interoperable, and scholarly use would be widespread due to the consistency of interface and technology.

Mesa Verde AR Visitor Interpretation App


Ball State University’s IDIA Lab has been contracted by the US Department of the Interior to develop virtual visitor experiences for Mesa Verde National Park. The park preserves and interprets the archeological heritage of the Ancestral Pueblo people who made it their home for over 700 years, from 600 to 1300 CE. Today, the park protects nearly 5,000 known archeological sites, including 600 cliff dwellings.

The application will bring to life well-known museum dioramas and locative walking tours of the park. Augmented reality and interactive 3D experiences will help tell the stories of the UNESCO World Heritage site – including the transformation of static elements of the historic dioramas with animated figures and interactive elements. The application will be available on both Google Play and AppStore in 2021.

Virtual Meridian of Augustus: Presentation at the Vatican’s Pontifical Academy of Archaeology

IDIA Lab’s virtual celestial simulator and 3D interpretation of the Meridian of Augustus in ancient Rome. The project was commissioned by the Virtual World Heritage Laboratory at Indiana University, directed by Bernard Frischer.

Findings presented at the Vatican’s Pontifical Academy of Archaeology in Rome

Thursday December 19th, 2013

A Digital Simulation of the Northern Campus Martius in the Age of Augustus. Preliminary Results of New Studies of the Relationship of the Obelisk, Meridian, and Ara Pacis of Augustus

 by

Bernard Frischer, Department of Informatics, Indiana University

John Fillwalk, Director, Institute for Digital Intermedia Arts, Ball State University

http://www.pont-ara.org/index.php?module=crpCalendar&func=display&eventid=54

Pontifical Academy of Archaeology

http://en.wikipedia.org/wiki/Pontifical_Academy_of_Archaeology

Consultants:

Horology consultant: Paolo Albèri Auber, Italy
Archeoastronomy consultant: Prof. Robert Hannah, University of Waikato, New Zealand
Archeoastronomy consultant: David Dearborn, Lawrence Livermore National Laboratory, USA

Data courtesy of NASA Jet Propulsion Laboratory Solar System Dynamics Group: Horizons System

A Digital Simulation of the Northern Campus Martius in the Age of Augustus. Preliminary Results of New Studies of the Relationship of the Obelisk, Meridian, and Ara Pacis of Augustus

Bernard Frischer, Department of Informatics, Indiana University

John Fillwalk, Director, Institute for Digital Intermedia Arts, Ball State University

  1. With generous support from the National Science Foundation (grant # IIS-1014956), we have recently been developing a digital simulation of the northern Campus Martius in the period 9 BCE to 40 CE. [1] Our motivation is to create a tool that makes it possible instantly to see the correct positions of the sun and its shadow at any time of day in this period of time so that the various controversies associated with the work of Edmund Buchner on the so-called “Horologium Augusti” can be approached in a new way. We have two main goals in creating the simulation. First, we want to see if Buchner’s and other scholars’ claims and interpretations about the relationship of the Augustan obelisk, the (hypothesized) horologium (which we now call the Meridian of Augustus, following the lead of Albèri Auber 2011-12), and the Ara Pacis can be verified or refuted. Secondly, we want to use the simulation as a support for an  empirical survey all over the area of interest to see if it might even be possible, by broadening the field of inquiry in terms of time and space, to arrive at any new insights and discoveries. We are grateful for the opportunity to report on our findings here, and we begin by noting that these are preliminary and can be expected to be revised as our simulation is subjected to further testing and refinement. We concentrate here on the first goal of verification and refutation, reserving a report on the second goal to future publications.
  2. We begin by observing that the use of digital simulations such as ours may still be novel in the field of Roman topography, but they have been used since the early 1950s in physics (Galison 1997:759) and then, increasingly, in other branches of physical and life science to model systems behavior, to speed up difficult computations, and reduce the opportunity for human error. The grounds for and limits of their validity have been usefully treated by Humphreys 2004; and their potential utility in archaeology was mooted by Frischer 2008. As Humphreys stated, “the enormous flexibility and precision of simulation methods provide an opportunity to implement Gedankenexperimente in contexts providing much greater precision than is possible with traditional mental implementations, and they are free from the psychological biases that can effect even simple thought experiments” (Humphreys 2004:115-116).
  3. Of course, precision and valid results always depend on the reliability of the data represented in a simulation. For the all-important apparent size[2] and position of the sun in the sky dome of the simulation, we have relied on NASA’s Horizons System (http://science1.nasa.gov/planetary-science/planetary-science-data/horizons-system/). Among other things, this database takes into account the changes in the sun’s apparent course through the sky that arise from the earth’s wobble as it rotates, providing correct azimuthal information for any point on earth in any historical period, including the Augustan age.
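To make concrete the kind of quantity at stake, here is a deliberately simplified, first-order model of solar declination and noon altitude (our own illustrative sketch, using the modern obliquity of 23.44 degrees and an approximate modern latitude for Rome; such a model is adequate only for the present era, which is precisely why the simulation draws on the Horizons ephemeris instead):

```python
import math

def solar_declination(day_of_year):
    """First-order sinusoidal model of the sun's declination (degrees).
    Uses the modern obliquity (23.44 deg) and ignores precession and
    other long-term changes, so it is NOT valid for the Augustan age."""
    return 23.44 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))

def noon_solar_elevation(latitude_deg, day_of_year):
    """Sun's altitude above the horizon at local solar noon (degrees)."""
    return 90.0 - latitude_deg + solar_declination(day_of_year)

# Rome at an approximate modern latitude of 41.9 N (an assumption)
print(round(noon_solar_elevation(41.9, 81), 1))   # spring equinox -> 48.1
print(round(noon_solar_elevation(41.9, 172), 1))  # near June solstice -> 71.5
```

The point of the comparison is negative: a closed-form model of this sort drifts badly over two millennia, whereas Horizons supplies epoch-correct azimuths and altitudes for the Augustan age.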

We take as our point of departure the archaeological data and interpretation of the site given by Albèri Auber in this volume and in other publications, especially Albèri Auber 2011-12. We agree with him in the following essential points.

(a) The obelisk was used as the gnomon only for a meridian, not an horologium inscribed on a large pavement, for which no evidence has ever been found. Its purpose was scientific: as Albèri Auber 2012: 484-489 shows, it helped ensure that tracking of leap years was correctly done through the observatio umbrarum (Pliny NH 2.35).

(b) There is only one phase for the obelisk-meridian: the Augustan phase. Buchner’s Flavian phase is a phantom based first on his reliance on Guarducci’s dating of the letters of the inscriptions found at 48 via Campo di Marzio to the first century CE (Buchner 1980: 362), a dating later withdrawn without comment (Buchner 1983: 505); and then on the brief report in Rakob 1987: 693-94 that La Torre dated the ceramic fragments found in Buchner’s excavation to the Flavian period. However, La Torre has never published the pottery, nor did Rakob ever publish the stratigraphy, quota levels, and a plan showing the exact find spots. Until this evidence is produced and interpreted, we must suspend judgment. Our consultant on Greek epigraphy, Mika Kajava, will be writing a report on the dating of the lettering. Here we may quote his personal communication of August 1, 2013: “Considering the meridian inscriptions, in my view, it would be difficult to suggest a precise dating on the basis of paleography: an Augustan monumental text set up in a public place could look very similar to a Claudian or even a Flavian one. This is also because monumental writing tends to be conservative, and occasionally it is even archaizing. In the present case, one may also wonder if the fact that the texts were presumably modelled upon Greek precedents had some (extra) influence on the letter style.”

(c) The sub-phases of the project are: (i) the idea to bring an Egyptian obelisk to Rome as a manubial donation: presumably in 30 BCE after Augustus’ victory at Alexandria and his annexation of Egypt; (ii) the vow of the Ara Pacis on July 4, 13 BCE; (iii) the dedication of the obelisk in 10 BCE,[3] presumably in conjunction with the twentieth anniversary of the victory at Alexandria;[4] (iv) the dedication of the Ara Pacis on January 30, 9 BCE; (v) the addition of the meridian at an indeterminate date during the reign of Augustus.[5]

(d) The height of the obelisk (including plinth, the bar between the pyramidion and the sphere, and the sphere itself) was 100 Roman feet. Haselberger (personal communication, October, 2013) stresses that this dimension is based on evidence that permits a range of possible heights varying by 3 to 6 feet depending on such factors as how we convert the Roman foot to meters, the unknown dimensions of the distanziatori (in the unlikely event such actually existed[6]), the height of the pole attaching the sphere to the pyramidion, and the diameter of the sphere itself. Albèri Auber takes a different tack: as a practicing gnomonologist himself, he stresses the practical advantages to his ancient Roman counterpart of working with the round number of 100 Roman feet and the useless complications that result if the height differed fractionally from it. Alberi Auber thinks that if 100 Roman feet is within Haselberger’s possible range of heights (and it is), and if the alternatives force us to work with dimensions such as (to make up some random examples) 101.33 or 103.75 Roman feet which would greatly complicate the ancient gnomonologist’s calculations of the length and subdivision into 360 degrees of the meridian, then, faute de mieux, 100 Roman feet is the obvious solution.[7] We agree.
In a forthcoming publication, we give the GPS coordinates, dimensions, and bibliographical sources for our 3D models of the meridian, obelisk, and Ara Pacis (Frischer and Fillwalk 2013). In brief, we claim +/- 2 meter accuracy for the placement of the existing fragment of the meridian at via di Campo Marzio 48 and of the Ara Pacis. The position of the obelisk at piazza del Parlamento 3 was derived from its height and the shadow it cast in relation to the position of the meridian fragment, as previous scholars have noted can be done (e.g., Heslin 2007:13). In the same publication, we also describe the technical specifications of the simulation. In brief, we authored the model in Maya and converted the 3D model of the northern Campus to the game engine Unity. We developed a plug-in which, as noted, utilizes azimuthal data from NASA’s Horizons System. In calculating where to position the sun and create a lighting solution for observations concerning the obelisk and its shadow, we used the geocoordinates of the obelisk; for those concerning the Ara Pacis, we used the geocoordinates of the Ara Pacis. Unity makes it possible to roam around the simulated landscape and to see the monuments from both freely chosen and preset points of view. For example, when the “Ara Pacis View” is selected, movement is restricted to the hypothetical axial line from the center of the base of the obelisk to the Ara Pacis and beyond to the via Flaminia. The height of the camera is fixed at 1.58 m, the average height of human eyes. In this way, in Ara Pacis View, it is not possible to deviate from a true axial position, and the sun (or its shadow) appears in a way that would have been visible to the ancient adult Roman.
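The “Ara Pacis View” constraint amounts to projecting any requested camera position onto the axial line at a fixed eye height. A minimal sketch of that geometry (our own Python illustration; the actual implementation is a Unity plug-in, and the coordinates below are invented for the example):

```python
def constrain_to_axis(pos, axis_a, axis_b, eye_height=1.58):
    """Project a requested ground-plane position (x, z) onto the line
    segment from axis_a to axis_b, clamping to its endpoints, and hold
    the camera at the fixed 1.58 m eye height used in the simulation."""
    ax, az = axis_a
    bx, bz = axis_b
    dx, dz = bx - ax, bz - az
    seg_len_sq = dx * dx + dz * dz
    # parameter of the orthogonal projection, clamped to [0, 1]
    t = ((pos[0] - ax) * dx + (pos[1] - az) * dz) / seg_len_sq
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, eye_height, az + t * dz)

# Invented example coordinates: obelisk base at the origin, a point on
# the axis 87 m away along +x (the real geometry lives in the simulation).
cam = constrain_to_axis((40.0, 25.0), (0.0, 0.0), (87.0, 0.0))
print([round(c, 3) for c in cam])  # -> [40.0, 1.58, 0.0]
```

A request 25 m off-axis snaps back onto the axial line, which is exactly the behavior that guarantees the observer sees the sun and shadow from a true axial position.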

  4. Everything stated in section 3 explains the default settings for our simulation. But we have made an effort to build a certain flexibility into the simulation so that it can support different interpretations of the archaeological situation. Thus, even though the default setting does not display Buchner’s hypothesized pavement and horologium, we have included a software switch that can be thrown to illustrate where Buchner thinks this would have been positioned. Similarly, we have a slider that can lower the height of the obelisk in units of 1 mm to a depth of 2 meters. Pending additional funding, we plan to make all major components of the simulation equally flexible, so that, for example, one can also raise the obelisk up to 2 meters higher in increments of 1 mm; and one can move the center of the Ara Pacis in any direction by 2 meters in the same increments. In this way, we hope to create a flexible tool that supports assumption-free scientific research, allows adjustments to be made to improve accuracy, and is not limited to one particular reading of the archaeological record. In our view, the simulation ought ideally to serve the needs of archaeologists without itself becoming a new topic of debate.
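The slider behavior described here reduces to quantizing an offset to whole millimetres and clamping it to the supported range. A sketch of that logic (our own illustration; the function name, and the conversion of 100 Roman feet to roughly 29.6 m, are assumptions made for the example):

```python
def adjusted_obelisk_height(default_height_m, offset_mm, min_mm=-2000, max_mm=0):
    """Apply a slider offset to the obelisk's default height.
    Offsets are whole millimetres, clamped to the supported range:
    currently down to 2 m below the default (min_mm=-2000); raising the
    obelisk (a positive max_mm) is the planned future extension."""
    offset_mm = max(min_mm, min(max_mm, int(round(offset_mm))))
    return default_height_m + offset_mm / 1000.0

# 100 Roman feet taken as roughly 29.6 m (an assumption for the example)
print(adjusted_obelisk_height(29.6, -1500))  # lowered by 1.5 m -> 28.1
```

Out-of-range requests clamp rather than fail, mirroring a slider control: dragging past the end of its track simply leaves it at the extreme value.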
  5. On the basis of the default values, we have thus far addressed the following issues that bulked large in the articles published in JRA 2011: (1) Did the shadow of the obelisk travel all the way down the equinoctial line (whether real and inscribed, or purely hypothetical) to the center of the western façade of the Ara Pacis on September 23, Augustus’ birthday? (2) If the shadow hits the façade of the Ara Pacis, does it have salience? (3) Did the shadow of the obelisk point toward the Ara Pacis at some point every day of the year? (4) Is the Ara Pacis oriented, not toward the obelisk, but toward the rising sun on April 21 (Parilia)?
  6. Before answering the first question, we note that our formulation of it reflects the “strong” reading of Buchner’s thesis about the relationship between the obelisk and Ara Pacis.[8] Like several scholars before him, Haselberger 2011:64 interpreted Buchner to mean only that the shadow progressed along the line, implicitly pointing toward the center of the western façade of the Ara Pacis, but did not necessarily reach the façade.[9] We call this the “weak” interpretation of Buchner. Our simulation can address both interpretations. Here we note that we dispute the weak interpretation and think Buchner did indicate that the shadow hit the façade. Even if Buchner never states this expressis verbis—just as he never says expressis verbis what scholars holding to the weak interpretation think that he says—what Buchner did write, taken in relation to his illustrations, leaves little doubt about what he meant. Buchner 1976: 347 does state, as Haselberger relates, “am Geburtstag des Kaisers…wandert der Schatten von Morgen bis Abend etwa 150 m weit die schnur-gerade Aequinoktienline entlang genau zur Mitte der Ara Pacis….” The weak interpretation of these words is that Buchner knew that the shadow moved along the equinoctial line but did not reach the facade of the Ara Pacis; and therefore that Buchner’s phrase, “etwa 150 m,” referred simply to the inscribed equinoctial line on the (hypothesized) pavement of his horologium. Our view is that the scholars who think Buchner meant that the shadow hit the façade of the Ara Pacis are right because of the phrase “etwa 150 m.” What does this refer to? If one measures the equinoctial line of the horologium on Buchner’s figure 7 (p. 337), one finds[10] that 1 cm=20 m. One also finds that the length of the equinoctial line on the horologium (i.e., the solid line) is 7 cm=140 m.
But Buchner said “etwa 150 m.” If one then measures his dotted line extending the solid equinoctial line to the middle of the facade of the Ara Pacis, one discovers the missing 10 m.  For the weak interpretation to be right, Buchner would have to have written “etwa 140 m.”  Moreover, advocates of the weak interpretation must also explain why Buchner accounts at p. 346 for a supposed architectural anomaly of the two entrances of the Ara Pacis by claiming that “die Aequinoktienlinie des Solarium geht durch die Ara hindurch, durchschneidet wie di Vorder- so auch die Rückfront….” And this, too, is clearly seen in his fig. 7 on p. 337. This latter point is, it seems to us, decisive evidence in favor of the strong interpretation. Buchner cannot simply mean that an imaginary equinoctial line can be extended from the end of the actual inscribed line through the west to the east entrance of the Ara Pacis. Such an imagined extension of a line would not require a physical entrance and exit point. Buchner must, rather, be loosely describing an actual physical event, namely the progress of the shadow along the (imagined) extension of the equinoctial line through the altar. Clearly it is the shadow that requires the two entrances, not the hypothetically extended line. At any rate, since the weak interpretation is often encountered in the scholarly literature,[11] we address it here as well as the strong interpretation.
  7. To answer the first question as understood by the strong interpretation, the simulation suggests that it is true, as Buchner always claimed, that on September 23 the shadow of the obelisk progresses more or less down the (in our view hypothetical) equinoctial line in the zone that would be paved and inscribed with the horologium Buchner imagines. Since September 23 is not the actual date of the fall equinox in the Augustan age (which fell on the 25th of the month on the Julian calendar), the shadow actually fails to hit the line at the beginning of the second hour of the day; but for most of the second hour and all the other hours indicated on Buchner’s diagram of the horologium, the shadow does move along the line. However, the simulation also shows that, just at the crucial moment, when the shadow leaves that zone and approaches to within ca. five meters of the center of the façade of the Ara Pacis, it veers off course (see figure 1). These are the facts, at least if the simulation is reliable. In our view the simulation refutes the strong interpretation of Buchner and also casts doubt on the validity of the weak interpretation. If, as Buchner thought, the whole point of the ensemble Obelisk-Ara Pacis is an alignment of the latter with the former precisely on Augustus’ birthday, then the fact that the shadow misses the mark just as it approaches closest to its alleged target (and in an area where there was probably pavement on which the shadow could clearly be seen)[12] is an indication that Buchner’s thesis is wrong. The simulation shows that the shadow clips the lower south end of the façade of the Ara Pacis just before sunset, when it disappears. We also note here that both the strong and weak interpretations are also thrown into doubt if, as we assume from the absence of any archaeological support, Buchner’s hypothesized pavement with an inscribed equinoctial line never existed.
Without such a line, the average observer unversed in the subtleties of astronomy and gnomonology would have had little reason to process the significance of what he was seeing. Buchner’s thesis requires that the observer who does realize what is happening as the shadow advances across the zone to the Ara Pacis have the patience to stand and watch for some, or, ideally, all of the nearly twelve hours of daylight on that date. This seems impractical and implausible. Another decisive point against Buchner’s thesis is the fact that the shadow from the obelisk does hit the center of the western façade of the Ara Pacis at sunset on several other dates of the year. At most, then, one can say that Buchner had the right idea but concentrated on the wrong date. After they have been independently verified, we will report in a separate publication on the dates when the shadow does fall on the center of the western façade at sunset. In this regard, we note Suetonius’ comment (Augustus 31) that Augustus named Sextilis and not his birth month September after himself when he became pontifex maximus “because in the former he had won his first consulship and his most brilliant victories.” Here we have a nice parallel for Augustus’ decision-making in conferring honors on himself: he was not unduly influenced by his birthday but took other factors into consideration. Apparently, the same thing was true of the design he commissioned for the alignment of the obelisk with the Ara Pacis. Finally, and this is the most decisive point against the Buchner thesis, the phasing outlined in section 3(c) makes purely coincidental any shadow effect involving the hypothesized horologium and the Ara Pacis since the Ara Pacis was designed and sited first, the obelisk second, and the horologium (or, as we would instead assert, following Albéri Auber, the meridian) was added as an afterthought.
  8. Pollini 2012:210-216 reports on and illustrates (see, especially, p. 215, figure V.7e) a computer simulation that appears to confirm the strong version of Buchner’s thesis. We were naturally concerned about this result, which contradicts our own. Pollini and his modeler, N. Cipolla, kindly answered questions about their methods and software. We learned that Cipolla used formZ as his modeling package and for the lighting solution showing the shadow of the obelisk hitting the center of the western façade of the altar at 4:31 pm on September 23. As noted above in section 3, correct azimuthal, temporal and geospatial data are needed if the results of a simulation are to have validity. Cipolla (personal communication, April 16, 2013) wrote that in creating a lighting solution for the shadow of the obelisk cast at 4:31 pm toward the western façade of the Ara Pacis, he used the built-in geocoordinates for Rome furnished by formZ. We determined that these coordinates are: 41 degrees, 54 minutes N, 12 degrees 30 minutes E. If we put these coordinates in Google Maps, we find that they yield an address near Stazione at via Giovanni Amendola 14-40, 00185 Rome. This location is ca. 2 km from where the obelisk was erected in antiquity. The spatial error is compounded by a temporal error. Cipolla also used formZ’s built-in time setting in his lighting solution. However, as a personal communication (dated April 30, 2013) from Paul Helm of the formZ Technical Support team states, “the formZ Sun Position is designed for current years, and not intended for historical use.” The position of the sun in the sky is quite different today than it was in the period under discussion. Here, too, formZ’s built-in data have contributed to the different—and, we would claim, erroneous—results seen in the Pollini-Cipolla simulation.
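The size of the spatial error is easy to check with a standard haversine computation. In the sketch below (our own illustration), the formZ coordinates are those reported above, while the coordinates for the ancient site of the obelisk near piazza del Parlamento are approximate modern values and are our assumption:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# formZ's built-in "Rome": 41 deg 54 min N, 12 deg 30 min E
formz_lat, formz_lon = 41.0 + 54.0 / 60.0, 12.0 + 30.0 / 60.0
# approximate modern coordinates near piazza del Parlamento (our assumption)
obelisk_lat, obelisk_lon = 41.902, 12.478
print(haversine_km(formz_lat, formz_lon, obelisk_lat, obelisk_lon))  # about 1.8 km
```

The separation comes out on the order of 2 km, consistent with the discrepancy noted in the text; at this scale the error is dominated by the longitude difference.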
  9. Heslin 2007:14 writes that “the shadow of the obelisk would have pointed at the Ara Pacis every single afternoon of the year…” Hannah 2011:94 notes that this is mistaken and that “in mid-winter, for example, it is not possible for the afternoon sun to cast a shadow that will fall from the obelisk towards anywhere near the direction of the altar.” The simulation, with its ability to instantly make complex calculations, allows us to confirm Hannah’s point and make it more precise. On the following dates, the shadow from the obelisk does not point at all toward the Ara Pacis: October 30 to February 11.
  10. Schütz 1990:450-453 questions whether the shadow of the sphere of the obelisk on the Ara Pacis would have had salience. Hannah 2011:91-93 disputes this on the basis of autopsy of the shadow cast by the cenotaph in Dunedin. The simulation also shows that Schütz’s concern was misplaced.
  11. Schütz 2011:85 claims that the Ara Pacis is not aligned with the shadow of the setting sun from the obelisk, but is oriented precisely away from the obelisk toward the rising sun to the east on April 21, the Parilia festival. The simulation shows that there is no precise alignment at sunrise on April 21 (figure 2), but there is such an alignment on May 6, a date with no festival on the religious calendar or other connection to Augustus, his family, or Roman history. It is therefore doubtless accidental—a coincidental result of another date (or dates) determining the design of the obelisk-Ara Pacis ensemble.
  12. Schütz’s attempt to orient the Ara Pacis toward the sun may, however, turn out to be another good idea which, like Buchner’s regarding the shadow, was simply misapplied. Let us recall the phasing of the project (section 3[c]): the design and construction of the Ara Pacis preceded the installation of the obelisk. When it was erected in Rome, the obelisk was rotated to be nearly parallel to the orientation of the Ara Pacis. This rotation has no impact on the obelisk’s functionality as a gnomon for the (still later) meridian: the obelisk’s shadow falls at the correct cross-hatchings on the meridian line whether or not the obelisk, like the meridian, is oriented N-S. But the obelisk’s rotation does have an important visual relevance for the relationship of the obelisk to the Ara Pacis: the obelisk, added after construction of the altar was already underway, was sited and disposed so as to be aesthetically compatible with the altar for someone viewing both on axis from the east (i.e., from the via Flaminia). Here, a different rotation of the obelisk would have been dysfunctional, i.e., aesthetically unpleasing. By looking eastward toward the sunrise from the eastern entrance of the Ara Pacis, Schütz ignores the (to us) indisputable visual and positional relationship between the obelisk and the altar.

The idea of seeing a relationship between the Ara Pacis and the sun—and not, as Buchner thought, the sun’s shadow—is quite interesting, and Schütz deserves credit for introducing it into the debate. All over the world, archaeoastronomers have found evidence of such built “solar markers” (cf., e.g., s.v. solstice markers in the Index of Kelly and Milone 2011: 606). Finding one at Rome would thus not be unprecedented. In this connection, we may note that the obelisk is expressly dedicated to the Sun god (see above n.2).  We are currently using the simulation to study this possibility and have identified three candidate pairs of dates. We note that given the nature of the analemma pattern traced by the sun in the sky each year, we will always find at least two dates (equidistant from the solstices) when any such alignment will occur. And, given the fact that when the observer moves along the axial line of the Ara Pacis imaginarily extended across the via Flaminia, the date of the alignment changes, we are likely to find more than one pair of candidate dates.
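That such alignments must recur on at least two dates follows from the symmetry of the solar declination about the solstices. A first-order sinusoidal declination model (our own illustrative simplification; the simulation itself relies on Horizons data) makes the symmetry explicit:

```python
import math

def solar_declination(day):
    """First-order model of solar declination in degrees (day may be
    fractional); an illustrative simplification only."""
    return 23.44 * math.sin(math.radians(360.0 / 365.0 * (day - 81)))

# In this model the June solstice falls at day 81 + 365/4 = 172.25,
# where the sine peaks. Dates equidistant from the solstice share a
# declination, so any alignment fixed by the sun's declination (and
# hence by its rising or setting azimuth) recurs in symmetric pairs.
solstice = 81 + 365.0 / 4.0
for offset in (20, 45, 70):
    d1 = solar_declination(solstice - offset)
    d2 = solar_declination(solstice + offset)
    assert abs(d1 - d2) < 1e-9
print("symmetric pairs confirmed")
```

Because the sun's setting azimuth at a given latitude is a function of its declination, each candidate alignment date therefore has a mirror date on the other side of the solstice, which is why the survey yields pairs of dates rather than single ones.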

If arbitrary results are to be avoided in deciding which date (or, if this is another case of Augustan polysemy, dates; cf. Galinsky 1992) determined the positioning of the obelisk with respect to the altar, one needs rules of inquiry, and we state the rules we think reasonable to apply in the hope that they will draw comment before our final report is published. First, there are the visual rules. What we are looking for must fulfill these visual criteria: the observer must be standing on the via Flaminia on axis with the eastern entrance to the Ara Pacis and with his gaze directed so that he can see the top of the obelisk in the distance; the disk of the sun must be more or less tangent to the top of the obelisk; and the disk must be (at least approximately) centered on the obelisk and, of course, on the axis of the Ara Pacis (figure 3). We think that the via Flaminia is the right place to use for observation because from here one had a good view of the entire façade of the Ara Pacis and the top part of the obelisk; and it was much more heavily trafficked than the adjacent area between the road and the eastern entrance to the altar. We assume that the effect we are looking for was exoteric, not esoteric: it was intended to be noticed by the mass of Romans passing through the Campus Martius and not simply by a select few. Then there is the cultural rule that a date is significant if and only if it corresponds to a well-attested religious festival, a personal event in the life of Augustus and his family, or an event of historic importance to the Roman state. We are, of course, aware of the fact that when the requirements for a significant alignment are met for the observer stationed on the via Flaminia looking on axis from the east, then at the same time an observer positioned to the west on the imaginary axial line between the obelisk and the Ara Pacis would see the obelisk’s shadow projected onto the axis of the western entrance to the altar (figure 4).
Our analysis, then, is in a sense an ironic (and, let us hope, irenic) compromise between the approaches of Buchner (=obelisk’s shadow centered on the western façade of the Ara Pacis) and of Schütz (=sun seen aligned to the main axis of the altar). However, to effect the compromise, we had to find better dates, and we had to turn Schütz’s ancient observer around so that she could see both the Ara Pacis and the obelisk, something possible only at sunset, not sunrise. We hasten to add that our results did not come from consciously working out such a compromise in advance and applying it to the simulation, but from using the simulation with no preconceptions as simply a device for virtual empirical survey and observation.[13]

We conclude by expressing the hope that our simulation does not itself become a new topic of the debate (which already includes more than enough controversies!). To reduce the chances that this will happen, before we release any of our new findings along the lines of the compromise just described, we will have the underlying calculations independently checked. And before we freely post the simulation itself on the Internet (as is our intention, as a support for teaching and research), we want to modify it as suggested above in section 4 so that it is flexible enough to accommodate the complete range of expert views about the archaeological record.

Bibliography

Albèri Auber, Paolo, 2011-12. “L’obelisco di Augusto in Campo Marzio e la sua Linea Meridiana. Aggiornamenti e proposte,” Rendiconti della Pontificia Accademia Romana di Archeologia 84:447-580.

Albèri Auber, Paolo, 2013a. “La Linea meridiana di Augusto,” Orologi Solari, n. 2, CGI Coordinamento Gnomonico Italiano,  August, 2013.

Albèri Auber, Paolo, 2013b. “The Obelisk of Augustus and its Meridian Line. Part 1,” The Compendium. Journal of the North American Sundial Society, September, 2013.

Buchner, E., 1976. “Solarium Augusti und Ara Pacis,” RömMitt 83, 319-65.

Buchner, E., 1980. “Horologium Solarium Augusti. Bericht über die Ausgrabungen 1979/80,” RömMitt 87, 355-73.

Buchner, E., 1982a. Die Sonnenuhr des Augustus (Mainz).

Buchner, E., 1982b. “L’orologio solare di Augusto,” RendPontAcc 53-54 (1980-82) 331-45.

Buchner, E., 1983. “Horologium Augusti. Neue Ausgrabungen in Rom,” Gymnasium 90, 494-508.

Buchner, E., 1984. “Sonnenuhr des Augustus und römischer Fuß,” in Bauplanung und Bautheorie der Antike (DiskAB 4) 215-19.

Buchner, E., 1988. “Horologium solarium Augusti,” in Kaiser Augustus und die verlorene Republik (exh. cat., Berlin) 240-45.

Buchner, E., 1994. “Neues zur Sonnenuhr des Augustus,” Nürnberger Blätter zur Archäologie 10 (1993-94) 77-84.

Buchner, E., 1996a. “Horologium Augusti,” in LTUR 3 (Rome) 35-37.

Buchner, E., 1996b. “Ein Kanal für Obelisken. Neues vom Mausoleum des Augustus in Rom,” AntW 27, 161-68.

Buchner, E., 1999. “Horologium Augusti,” in Via del Corso. Una strada lunga 2000 anni (exh. cat., Rome) 159-63.

Favro, D., 1996. The urban image of Augustan Rome (Cambridge).

Frischer, B., 2008. “From digital illustration to digital heuristics,” in Beyond Illustration: 2D and 3D Digital Technologies as Tools for Discovery in Archaeology, edited by Bernard Frischer and Anastasia Dakouri-Hild, BAR International Series 1805 (Oxford) v-xxiv.

Frischer, B. and J. Fillwalk, 2013. “A Computer Simulation to Test the Buchner Thesis. The Relationship of the Ara Pacis and the Meridian in the Campus Martius, Rome,” Proceedings of Digital Heritage 2013, forthcoming.

Galinsky, K., 1992. “Venus, polysemy, and the Ara Pacis Augustae,” AJA 96, 457-75.

Galison, P., 1997. Image & logic. A material culture of microphysics (Chicago and London).

Grenier, J.-C., 1996. LTUR 3 s.v. Obeliscus Augusti: Circus Maximus (Rome) 355-356.

Hannah, R., 2009. Time in antiquity (London and New York).

Hannah, R., 2011. “The Horologium of Augustus as a sundial,” JRA 24, 87-95.

Haselberger, L., 2011. “A debate on the Horologium of Augustus: Controversy and clarifications,” JRA 24, 47-73.

Heslin, P. J., 2007. “Augustus, Domitian and the so-called Horologium Augusti,” JRS 97, 1-20.

Heslin, P. J., 2011. “The Augustus code: A response to L. Haselberger,” JRA 24,74-77.

Humphreys, P., 2004. Extending ourselves. Computational science, empiricism, and scientific method (Oxford and New York).

La Rocca, E., 1983. Ara Pacis Augustae in occasione del restauro della fronte orientale (Rome).

Liverani, P., 2006-7. “Templa duo nova Spei et Fortunae in Campo Marzio,” RendPontAcc 79, 291-314.

Rakob, F. 1987. “Die Urbanisierung des nördlichen Marsfeldes. Neue Forschungen im Areal  des Horologium Augusti,” in L’Urbs. Espace urbain et histoire (CollEFR 98)  687-712.

Rehak, P., 2006. Imperium and cosmos. Augustus and the Northern Campus Martius, edited by John G. Younger (The University of Wisconsin Press, Madison).

Rodríguez-Almeida, E., 1980. “Il Campo Marzio settentrionale: solarium e pomerium,” RendPontAcc 51-52, 195-212.

Rossini, O., 2006. Ara Pacis (Milan).

Schütz, M., 1990. “Zur Sonnenuhr des Augustus auf dem Marsfeld,” Gymnasium 97, 432-457.

Schütz, M., 2011. “The Horologium on the Campus Martius Reconsidered,” JRA 24, 78-86.

Torelli, M., 1992. “Topografia e iconologia. Arco di Portogallo, Ara Pacis, Ara Providentiae, Templum Solis,” Ostraka 1: 105-131.

Torelli, M., 1999. “Pax Augusta, Ara,” in LTUR 4 (Rome) 70-74.

Zanker, P., 1988. The Power of images in the age of Augustus (Ann Arbor).

Figures and Captions

Figure 1. At sunset on September 23, 1 CE, the shadow of the obelisk does not hit the middle of the western façade of the Ara Pacis as required by Buchner’s thesis but, as seen in this illustration, it only grazes the lower right side of the façade before continuing to the right (south) beyond the altar and soon disappearing after sunset. Source: Frischer-Fillwalk simulation.

Figure 2. Sunrise on April 21 (Parilia) of 1 CE seen from the eastern doorway of the Ara Pacis. According to Schütz 2011, 85 the Ara Pacis is oriented in such a way as to be on axis with the rising sun on this date. The dotted red line gives the vertical axis and shows that Schütz’s theory is not confirmed by the simulation. Source: Frischer-Fillwalk simulation.

Figure 3.  The Ara Pacis and upper part of the obelisk seen in “Ara Pacis View” from the via Flaminia in the Frischer-Fillwalk simulation. This striking effect appears to have occurred on several days of the year. Before releasing the dates and discussing their possible significance, we are having them independently verified. Source: Frischer-Fillwalk simulation.

Figure 4. The shadow of the obelisk projected onto the vertical axis of the western façade of the Ara Pacis seen in “Ara Pacis View” from a position along the axial line from the base of the obelisk to the center of the Ara Pacis, as seen in the Frischer-Fillwalk simulation. The dotted red line indicates the vertical axis, on which the shadow appears to be precisely centered. The date and time are the same as in figure 3. Source: Frischer-Fillwalk simulation.


[1] We are honored that Lothar Haselberger has invited us to contribute to this volume. We thank the National Science Foundation for the funding that made the Digital Meridian Project possible. We also thank Paolo Albèri Auber and Robert Hannah for their constant responsiveness to requests for information, help and collaboration. We are grateful to Nicholas Cipolla, David Dearborn, Karl Galinsky, Mika Kajava, Ann-Marie Lewis, Paolo Liverani, John Miller, John Pollini, and Michael Schütz for answering questions and providing the information we requested. Franco Sgariglia kindly arranged our several visits to study the remains of the meridian found by E. Buchner under via di Campo Marzio 48. Needless to say, we are solely responsible for the data and interpretations presented in this report. Bernard Frischer wrote this report, was the principal investigator of the NSF grant, and is responsible for the archaeology behind the simulation; John Fillwalk edited the text of this report and is responsible for the creation of the digital simulation and related solar tracker plug-in.

[2] Ca. 1.0 degree at sunrise and sunset and 0.5 degrees at other times of the day.

[3] The date is given by mention in the dedicatory inscription of Augustus’ holding of tribunician power for the fourteenth time (CIL VI.701 and 702; ILS 91).

[4] So J.-C. Grenier, LTUR 3 s.v. Obeliscus Augusti: Circus Maximus (Rome 1996) 355-356 at p. 356.

[5] In his contribution to the present volume, Albèri Auber rightly stresses that in NH 36.72 Pliny uses the word “addidit” twice in connection with the meridian and related sphere atop the obelisk: Ei, qui est in campo, divus Augustus addidit mirabilem usum ad deprendendas solis umbras dierumque ac noctium ita magnitudines, strato lapide ad longitudinem obelisci, cui par fieret umbra brumae confectae die sexta hora paulatimque per regulas, quae sunt ex aere inclusae, singulis diebus decresceret ac rursus augeresceret, digna cognitu res, ingenio Facundi Novi mathematici. Is apici auratam pilam addidit, cuius vertice umbra colligeretur in se ipsam, alias enormiter iaculante apice, ratione, ut ferunt, a capite hominis intellecta. The meridian was an ingenious afterthought to a project already complete when the obelisk had been installed in the Campus Martius. On the basis of the fact that the obelisk is rotated by 15 degrees from N, the same point was made as early as 1750 by J. Stuart apud Bandini 1750: letter XIII, p. LXXIV.

[6] Schütz 1990:438 cogently notes in this connection that distanziatori are not seen on the relief of Antoninus and Faustina illustrating the obelisk.

[7] Albèri Auber 2011-12:467. Schütz 1990:442 may be right that for Buchner the choice of 100 Roman feet stemmed solely from “a fascination from round numbers,” but for Albèri Auber the decision was made for purely hard-headed, practical reasons. Moreover, Schütz does not reckon with the fact that there was a single, Augustan phase and that, according to Albèri Auber, the meridian was elevated by 60 cm on an embankment off the virgin soil (Buchner’s Flavian phase). So, in a sense, both Schütz and Albèri Auber can be right: the obelisk, including its base, was higher off the virgin soil than 100 Roman feet; but since the meridian was also raised off the virgin soil—in Albèri Auber’s view, to offer some protection against the flooding of the Tiber (but, we would note, possibly because the level of the virgin soil was lower at the obelisk end of the meridian than at its northern limit; cf. Rodriguez-Almeida 1982:208)—then the effective height of the sphere of the obelisk above the pavement of the meridian was still 100 Roman feet.

[8] See, for example: Schütz 1990: 451; Favro 1996:130; Rehak 2006:83; Hannah and Magli 2011:506. In a rare lapsus memoriae, Torelli 1999:70, writes that the “shadow of the obelisk-gnomon touched the figure of Augustus represented on the frieze around the altar” (cf. also Torelli 1992:107).  Torelli has rotated the Ara Pacis by 90 degrees. Correcting that error, we can add Torelli to the list of scholars holding to the strong interpretation of Buchner.

[9] Haselberger writes: “Buchner never speaks of the actual shadow reaching the Ara Pacis, but of the connection between Ara and Horologium established through the equinoctial line.”

[10] At least on my photocopy, which may not be a perfect 1:1 reproduction; but the scale issue is irrelevant since it affects Buchner’s scale as well as his plan.

[11] E.g., La Rocca 1983:57, Zanker 1988:144, Rossini 2006:12.

[12] Pavement survives only on the south side of the Ara Pacis. See Haselberger 2011, 55.

[13] To confess the truth, the first author must admit to starting this project with the working assumption that Buchner’s thesis was more or less correct, as several generations of the students who have attended his lecture courses on Roman Topography at UCLA and the University of Virginia can attest.

Khan Academy

The Khan Academy featured the HDT / IDIA Lab and Virtual World Heritage Laboratory’s Digital Hadrian’s Villa Project. Hosted by Beth Harris, Dean of Art and History at Khan Academy, a walkthrough with archaeologist and HDT research fellow Dr. Bernard Frischer showcases the project. A virtual tour of Hadrian’s Villa using a 3D digital model of the villa was created under the direction of Dr. Frischer in conjunction with the IDIA Lab. The ruins of Hadrian’s Villa, in the town of Tivoli near Rome, are spread over an area of approximately 250 acres. Many of the structures were designed by the Emperor Hadrian, who ruled from 117 until his death in 138 C.E. This virtual rendering is based on current archaeological research and was created in consultation with art historians, archaeologists, and museum curators with expertise in this area. Please note that a few features are necessarily assumptions based on the best available evidence. Dr. Bernard Frischer and Dr. Beth Harris are the speakers in this walkthrough, which was created by Beth Harris and Steven Zucker of Smarthistory.

Meet3D

Meet3D is a virtual collaboration space that brings people together to share, learn, and work – wherever they are. Private or public auditoriums, classrooms, conference halls or offices can be joined by any group or audience on demand. Meet3D was developed by the Institute for Digital Intermedia Arts at Ball State University to bring remote participants together in a common space and facilitate a sense of physical presence. The platform is available in both browser-based and executable versions.

Interactive Art at the College of Architecture and Planning

The IDIA Lab is designing a series of extended reality artworks that will be installed in the atrium of the College of Architecture and Planning. The installation allows passers-by to interact using various modes including motion, voice, and gesture to shape the compositions. The works employ sensors, sound art, artificial intelligence, and other programming to create dynamic experiences.

NSF Virtual Teaching Simulator

The College of Architecture and Planning’s IDIA Lab creates technological innovation in the arts, sciences, and humanities. The Lab engages students, staff, scholars and clients in collaborative projects including 3D, virtual reality and mobile applications. This newsletter includes updates on current IDIA Lab projects, including an augmented reality visitor experience application for Mesa Verde National Park, a publication on our Roman archaeology celestial alignment research, a multiuser simulation for understanding aspects of wellness in Amazon’s Lumberyard game engine, and a $3 million NSF grant in which IDIA Lab was selected by the Massachusetts Institute of Technology, Educational Testing Service and Indiana University to create simulations for elementary math and science classrooms.

While the COVID-19 pandemic has had a major effect on PK-12 classrooms, it has also affected opportunities for preservice teachers to gain practical teaching experience. Responding to this problem, School of Education faculty Meredith Park Rogers and Adam Maltese, along with Dionne Cross Francis of the University of North Carolina at Chapel Hill, an affiliate faculty member of IU, have begun work on a project with collaborators from ETS, Towson University and Massachusetts Institute of Technology, where they will design and study an online suite of practice-based teaching activities intended to support mathematics and science preservice teachers in honing their skills for facilitating argumentation-focused discussions.

This project, funded by a grant of over $3 million from the National Science Foundation (Grant 2037983), will run from 2020 to 2023. ETS serves as the lead organization for the project, titled Online Practice Suite (OPS): Practice Spaces, Simulations and Virtual Reality Environments for Preservice Teachers to Learn to Facilitate Argumentation Discussions in Mathematics and Science.

The OPS consists of a coordinated set of online practice-based teaching activities that include game-based practice spaces, small-group avatar-based simulations, and a virtual reality classroom. The activities are designed to be complemented with targeted feedback and support from teacher educators.

With the COVID-19 pandemic increasingly pushing K-12 schools and universities across the nation to adopt online approaches, there is an immediate need for finding effective methods for preservice teachers to practice teaching even if traditional school-based teaching experiences are less available.

Even without pandemic-related restrictions, learning how to teach effectively demands that preservice teachers have robust, authentic and consistent opportunities to engage in the work of teaching—ideally across different contexts with diverse student populations and for varied purposes.

Over the next three years, the OPS research team will work with mathematics and science teacher educators to design, test, and refine the full suite of online practice-based teaching activities. Findings will be used to understand the mechanisms that support preservice teachers’ learning within and across the OPS activities and to document an emergent set of best practices for supporting preservice teachers’ improvement in this ambitious teaching practice over time. The OPS will be accompanied by support materials to help teacher educators use and integrate the activities into teacher education courses, including online and face-to-face learning contexts. Both elementary and middle/secondary school mathematics and science scenarios will be developed for the OPS.

The IU faculty will work with the Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University to develop an immersive virtual reality environment that will situate teachers in learning to navigate a whole-class discussion with students. This aspect of the OPS is referred to as the Virtual Teacher Simulator (VTS). In these simulations, preservice teachers will move beyond the one-on-one and small-group interactions of the other two OPS activities and begin to put their skills together to teach a full classroom of virtual student avatars. Each preservice teacher will have the opportunity to move around the classroom and engage with the virtual student avatars just as though they were in a real classroom. Additionally, they will receive feedback from a mathematics or science teacher educator (i.e., coach) in order to continue to improve their practice.

As Park Rogers explains, “The VTS context especially affords preservice teachers multiple opportunities to rehearse their teaching within a short time span, without disrupting learning in a real classroom, all while receiving immediate and expert feedback from specialized mathematics and science teacher educators. Furthermore, incorporating real-time feedback and opportunities for multiple rehearsals, all within an immersive whole class context, will allow preservice teachers to take risks in their practice and try out novel pedagogical moves that they may not feel secure enough, or have opportunities, to try in an actual classroom.”

“We must find a way to continue to prepare high-quality math and science teachers through this pandemic and beyond,” says Park Rogers. “Through this collaboration, we hope to offer teacher educators and their preservice student teachers a solution to the current dilemma the global pandemic of COVID-19 has created, and also alternative or supplementary methods for targeting core teaching practices even when we are able to safely return to classrooms to work with real children.”

This material is based upon work supported by the National Science Foundation under Grant 2037983. The opinions expressed are those of the author and do not necessarily represent views of the National Science Foundation.

By Catherine Winkler, Indiana University.

The Firefly Effect

A virtual social installation by John Fillwalk, IDIA Lab and Jody Naderi, CAP

This work, designed specifically for Nuit Blanche Toronto, activates a locative field in which participants engage in a dynamic social experience. Individual users of the app interact with a dynamic feedback system allowing them to find the cadence of a contemplative walk. Participants can view other app users on a live map – locating them spatially to join in a unique group or virtual room – forming a dynamic choir of collaborative synchronous light and music.

Nuit Blanche

https://nbto.com

Firefly at Nuit Blanche

https://nbto.com/project.html?project_id=114

IDIA Lab

https://www.idialab.org/firefly/

History Channel’s The Universe Features Virtual Stonehenge

IDIA Lab has developed a simulation of Stonehenge in Unity 3D which illustrates the various stages of construction and celestial alignments in an interactive virtual simulator. The project incorporates IDIA Lab’s CelestialEngine, which uses NASA JPL data to accurately position the sun, moon and visible planets – correcting for changes over time in the Earth’s rotation and other forces – allowing for accurate observations of the night sky as it would have appeared thousands of years ago at Stonehenge.
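The core of driving a virtual sun from ephemeris data is a coordinate conversion: an ephemeris such as NASA JPL Horizons reports the sun's topocentric azimuth and altitude for a given observer and time, and the engine turns that pair into a direction for its light source. A minimal sketch of that conversion in Python (the function name and the east-north-up frame are illustrative assumptions, not IDIA Lab's actual code):

```python
import math

def sun_direction(azimuth_deg: float, altitude_deg: float) -> tuple:
    """Convert a topocentric azimuth/altitude pair (degrees) into a unit
    vector pointing toward the sun in a local east-north-up frame.
    Azimuth is measured clockwise from true north, the convention used
    in JPL Horizons observer tables."""
    az = math.radians(azimuth_deg)
    alt = math.radians(altitude_deg)
    east = math.cos(alt) * math.sin(az)
    north = math.cos(alt) * math.cos(az)
    up = math.sin(alt)
    return (east, north, up)

# Sanity check: the sun due east on the horizon (az 90, alt 0)
# should yield a vector pointing almost exactly along +east.
print(sun_direction(90.0, 0.0))
```

In a game engine the directional light would be aimed along the negated vector; the remaining work is mapping this east-north-up frame onto the engine's own axis convention and, as the paragraph notes, correcting the requested time for the slow change in the Earth's rotation (ΔT) before querying the ephemeris.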

The History Channel’s television series The Universe recently shot segments of an upcoming Stonehenge episode at Ball State University’s IDIA Lab and will feature our simulator and rendered animations throughout the hour-long episode. http://www.history.com/shows/the-universe/episodes

Press release: https://www.insideindianabusiness.com/newsitem.asp?ID=63067

Simulator video feature walkthrough:  http://www.youtube.com/watch?v=fNK-cCIog1E&list=UUtQrLF1JPprEvP4AO-iSNvw&feature=share&index=2

The History Channel’s The Universe, Ancient Mysteries Solved: Stonehenge

Perhaps the most mysterious structure on Earth, Stonehenge has stood on a plain in Southern England for 5000 years. Why is it there? In this episode we explore the possibility that this was a prehistoric astronomical observatory. Here ancient astronomer priests may have divined the complex movements of the Sun and Moon, recognizing patterns that would not be discovered elsewhere for thousands of years. The primitive Shamans may have also been the first astronomers to predict eclipses.

Stonehenge

Introduction

Stonehenge is one of the most famous prehistoric sites in the world – consisting of a ring of standing stones set within large earthworks. It is in the middle of the most dense complex of Neolithic and Bronze Age monuments in England, including several hundred burial mounds.

Archaeologists believe the earliest phases were built around 3000 BC, with radiocarbon dating in 2008 suggesting that the first sarsen stones were raised between 2400 and 2200 BC and that the bluestones may have been raised at the site as early as 3000 BC.

The site and its surroundings were added to UNESCO’s list of World Heritage Sites in 1986, in a co-listing with Avebury Henge. It is a legally protected Scheduled Ancient Monument. Stonehenge is owned by the Crown and managed by English Heritage, while the surrounding land is owned by the National Trust.

Archaeological evidence found by the Stonehenge Riverside Project in 2008 indicates that Stonehenge could have been a burial ground from its earliest beginnings. The dating of cremated remains found on the site indicates that deposits contain human bone from as early as 3000 BC, when the ditch and bank were first constructed.

PHASES

Phase One 3000-2920 BC

The first monument was essentially an earthwork, consisting of a circular bank and ditch enclosure measuring about 110 meters (360 ft.) in diameter, with a large entrance to the northeast and a smaller one to the south. It stood in open grassland on a slightly sloping spot. Within the outer edge of the enclosed bank is a circle of 56 holes, each about a meter in diameter, known as the Aubrey holes after John Aubrey, the 17th-century antiquarian thought to have first identified them. Current thinking is that the Aubrey holes held 56 bluestones during this phase of construction. Three heel stones are suspected to have stood during this era.

Phase Two 2620-2480 BC

Archaeological excavation has indicated that around 2600 BC the builders reimagined the monument entirely and began a massive phase of construction. During this period the sarsen ring with horizontal lintels was erected, along with the “U”-shaped cluster of five central trilithons. These huge stones, ten uprights and five lintels, weigh up to 50 tons each. They were linked using complex jointing adapted from woodworking. Also during this phase an inner ring of bluestones was constructed – most likely from the removal and relocation of the 56 Aubrey hole bluestones.

The earthwork was altered to create two barrows containing two of the station stones, with the remaining two outside the barrows, forming a rectangle. These station stones have both solar and lunar alignments. The heel stones were reduced to one – which stands somewhat angled today.

Phase Three 2480-2280 BC

In phase three the Avenue was constructed, a long roadway leading to the river Avon and on to other settlements and monuments. A bluestone circle was constructed inside the ring of trilithons. This phase also saw the appearance of three slaughter stones.

Phase Four 2280-2020 BC

The main alteration of the monument during this period was the reconstruction of the bluestone configuration within the sarsen ring. The bluestones were reworked into two distinct patterns: a central inner oval of 23 stones inside the trilithon ring, and a circle of 59 stones between the trilithons and the sarsen ring. The remnants of both patterns are visible today. Also, the slaughter stones were reduced to one – which remains in a fallen state.

Phase Five  1680-1520 BC

The site was essentially unchanged except for the construction of the Y and Z holes. There are 30 Y holes and 29 Z holes – these are suspected to have had significance in tracking the lunar month. The Y and Z holes are the last known construction at Stonehenge, built about 1600 BC.

Present Day (1600 BC on)

Roman coins and medieval artifacts have all been found in or around the monument but it is unknown if the monument was in continuous use throughout British prehistory and beyond, or exactly how it would have been used. The Romans are believed to have removed 4 of the 23 inner bluestones from the oval, leaving the remaining 19 stones and holes forming the horseshoe we see today. The site was known to scholars during the Middle Ages and since then it has been studied and adopted by numerous groups.

Throughout the 20th century, Stonehenge began to be revived as a place of religious significance, by adherents of Neopagan and New Age beliefs, particularly the Neo-druids.

In the late 1920s a nation-wide appeal was launched to save Stonehenge from the encroachment of the modern buildings that had begun to rise around it. By 1928 the land around the monument had been purchased with the appeal donations, and given to the National Trust to preserve. The buildings were removed (although the roads were not), and the land returned to agriculture. More recently the land has been part of a grassland reversion scheme, successfully returning the surrounding fields to native chalk grassland.

Celestial Alignments

Many astronomical alignments have been attributed to Stonehenge, a complex of megaliths and earthworks in the Salisbury Plain of England. The most famous of these is the midsummer alignment, where the Sun rises over the Heel Stone.

As well as solar alignments, there are proposed lunar alignments. The four station stones mark out a rectangle: the short sides point towards the midsummer sunrise and midwinter sunset, while the long sides, viewed towards the southeast, face the most southerly rising of the moon.

Gerald Hawkins, a professor and chair of the astronomy department at Boston University in the United States, published an analysis of Stonehenge in 1965 in which he proposed its purpose as an ancient astronomical observatory predicting movements of sun and stars. Archaeologists and other scholars have since demonstrated such sophisticated, complex planning and construction at other prehistoric earthwork sites across the globe.

Function and Construction

Stonehenge was produced by a culture that left no written records. Many aspects of Stonehenge remain subject to debate. There is little or no direct evidence for the construction techniques used by the Stonehenge builders. Proposed functions for the site include usage as an astronomical observatory or as a religious site.

Professor Michael Parker Pearson of Sheffield University has suggested that Stonehenge was part of a ritual landscape and was joined to Durrington Walls by their corresponding avenues and the River Avon. He suggests that the area around Durrington Walls Henge was a place of the living, whilst Stonehenge was a domain of the dead. A journey along the Avon to reach Stonehenge was part of a ritual passage from life to death, to celebrate past ancestors and the recently deceased. Whatever religious, mystical or spiritual elements were central to Stonehenge, its design includes a celestial observatory function, which might have allowed prediction of eclipse, solstice, equinox and other celestial events important to a contemporary religion.

IDIA Lab Virtual Stonehenge Simulator

IDIA Lab has developed a simulation of Stonehenge that illustrates the various stages of construction and celestial alignments in an interactive virtual simulator. The project incorporates IDIA Lab’s Celestial SimEngine, which uses NASA JPL data to accurately position the sun, moon and visible planets – correcting for changes over time in the Earth’s rotation and other forces – allowing for accurate observations of the night sky as it would have appeared thousands of years ago at Stonehenge. https://idialab.org/virtual-stonehenge/

________________________________________

BBC The Sky at Night Episode

This past summer, BBC’s program The Sky at Night also included IDIA Lab’s Stonehenge simulation and animation in an episode about Stonehenge and the summer solstice. The Sky at Night is an astronomy program that has run for over 50 years, airing on BBC One, Two, Three and Four.

BBC Sky at Night Programme website:  http://www.bbc.co.uk/programmes/b036r5nj

Stonehenge pan preview: Stonehenge Sunset Pan

NSF FUNDED VIRTUAL SIMULATION OF HADRIAN’S VILLA


IDIA Lab has designed a virtual simulation of the villa of the Roman Emperor Hadrian, a UNESCO World Heritage site located outside Rome in Tivoli, Italy. This project is being produced in collaboration with the Virtual World Heritage Laboratory (VWHL) at the University of Virginia (UVA), directed by Dr. Bernard Frischer and funded by the National Science Foundation. This large-scale recreation virtually interprets the entire villa complex in consultation with the world’s foremost villa scholars and educators. The project has been authored in the Unity game engine as a live 3D multi-user online learning environment that allows students and visitors to immerse themselves in all aspects of the simulated villa.

The project will not only accurately recreate the villa buildings but also include a complete Roman avatar system, non-player characters with artificial intelligence, furniture, indigenous vegetation, a dynamic atmospheric system and a sophisticated user interface. The interface will not only provide learning, navigation, reporting and assessment opportunities but will also allow users to change the position of the sun to any date in 130 AD using data from the Horizons database at NASA JPL – testing theses of astronomical alignments of architectural features during solstices and equinoxes. UVA students will be briefed on the culture and history of the villa and will learn the virtual environment for five weeks before immersing themselves within it. The avatar system will allow them to enter the world choosing class and gender – already aware of the customs and behavior of the Roman aristocrat, soldier, slave or politician. This project will be delivered to the VWHL at UVA in early March.

The Digital Hadrian’s Villa Project:

Virtual World Technology as an Aid to Finding Alignments between

Built and Celestial Features

Bernard Frischer, Director, Virtual World Heritage Laboratory, University of Virginia

John Fillwalk, Director, IDIA Lab, Ball State University

Hadrian’s Villa is the best known and best preserved of the imperial villas built in the hinterland of Rome by emperors such as Nero, Domitian, and Trajan during the first and second centuries CE. A World Heritage site, Hadrian’s Villa covers at least 120 hectares and consists of ca. 30 major building complexes. Hadrian built this government retreat about 20 miles east of Rome between 117, when he became emperor, and 138 CE, the year he died. The site has been explored since the 15th century and in recent decades has been the object of intense study, excavation, and conservation (for a survey of recent work, see Mari 2010).

From 2006 to 2011, with the generous support of the National Science Foundation[1] and a private sponsor, the Virtual World Heritage Laboratory created a 3D restoration model of the entire site authored in 3DS Max. From January to April 2012, Ball State University’s Institute for Digital Intermedia Arts (IDIA Lab) converted the 3D model to Unity 3D, a virtual world (VW) platform, so that it could be explored interactively, populated by avatars of members of the imperial court, and published on the Internet along with a related 2D website presenting the documentation undergirding the 3D model.

The 3D restoration model and related VW were made in close collaboration with many of the scholars who have written the most recent studies on the villa.[2] Our goal was to ensure that all the main elements—from terrain, gardens, and buildings to furnishings and avatars—were evidence-based. Once finished, the model was used in two research projects.

The first project was an NSF-sponsored study of the usefulness of VW technology in archaeological education and research. We used the virtual villa in undergraduate classes at Xavier University and the University of Virginia to investigate the thesis of two recent studies by project advisors Michael Ytterberg and Federica Chiappetta about how this enormous built space was used by six different groups of ancient Romans, ranging from the Emperor and Empress to ordinary citizens and slaves (Ytterberg 2005; Chiappetta 2008). Avatars representing these groups have been created and are being operated by undergraduate students as a Problem‐Based Learning (PBL) experience. They are observed by subject experts, who are using the data generated to test and, if necessary, refine the initial theses about how circulation through the villa was handled. The results are still being evaluated. Preliminary indications are that the combination of VW technology and a PBL educational context is very effective in taking advantage of the known connection between the hippocampus and long-term learning, especially when the information to be mastered is spatial (Kandel 2007).

The second project involved use of the VW for new archaeoastronomical studies. Most of our advisors’ publications, like the older work by archaeologists that preceded them, have concentrated on archaeological documentation, restoration, and formal and functional analysis. The latest research by advisor De Franceschini and her collaborator Veneziano (2011) combined formal and functional analysis: it considered the alignment of certain important parts of the villa in relation to the sun’s apparent path through the sky on significant dates such as the solstices. In their recent book they showed how two features of the villa are aligned with the solar solstices: the Temple of Apollo in the Accademia, and Roccabruna. We used the VW to extend their research to other areas of the villa, taking advantage of 3D technology to restore the sun to its correct place in the sky and to restore the villa’s damaged architecture, as De Franceschini and Veneziano had independently suggested be done before they learned about our digital model of the villa.

The work of De Franceschini and Veneziano is innovative. Archaeoastronomy has become an accepted field of study in recent decades, and a considerable amount of work has been done in Old and New World archaeology. In Roman archaeology, however, this approach is still rarely encountered. Significantly, one of the few compelling studies concerns the most famous Hadrianic building: the Pantheon in Rome. Hannah and Magli 2009 and Hannah 2011 have shown a number of solar alignments in the building, the most notable being the sun’s illumination of the entrance doorway at noon on April 21, and the view from the middle of the Pantheon’s plaza of the sunset on the summer solstice silhouetting the statue of Hadrian as Sun god on a four-horse chariot atop the Mausoleum of Hadrian. Like the summer solstice, April 21 is also a significant date: on it occurred the annual festival in Rome known as the Parilia (renamed the Romaia by Hadrian),[3] which celebrated the founding of Rome.

De Franceschini and Veneziano pursued an observation of Mangurian and Ray (2008) to document an impressive example of solar alignment at Hadrian’s Villa involving the tower known as Roccabruna at the western end of the villa. Originally a tower-like structure topped by a round temple, Roccabruna survives today as a well-preserved, massive lower floor. The main entrance, located on the northwestern side to the right, gives access to a large circular hall covered by a dome. The dome is punctuated by an odd feature: five conduits that are wider on the outside than on the inside (figure 1).

What is the function of these unusual conduits? They have no known parallel in Roman architecture. After asking themselves this same question, the American architects Robert Mangurian and Mary Ann Ray went to Roccabruna at sunset on June 21, 1988, the day of the summer solstice, and discovered the extraordinary light phenomena that occur there. At sunset the Sun enters through the main door and illuminates the niche on the opposite side, something that happens on most summer days. But only on the days around the summer solstice does the Sun also penetrate the conduit located above that door: its rays emerge from the slot inside the dome, projecting a rectangular blade of light on the opposite side of the dome. In June 2009, De Franceschini verified the findings of Mangurian and Ray. However, the apparent path of the Sun through the sky changes slightly each year, so that in the nearly 1,880 years separating us from Hadrian the precise effect of the alignment has been lost. As they noted, only a computer simulation can recreate the original experience of being in the lower sanctuary at Roccabruna at sunset on the summer solstice during the reign of Hadrian.

Once we had our 3D model of the site, we were able to obtain from NASA’s Horizons system[4] the correct azimuthal data for the year AD 130 and place the sun in the sky at sunset on the summer solstice. Following the lead of De Franceschini, who in the meantime had become a consultant to our project, we put into the niche one of the four statues of the Egyptian sky goddess Isis that were found at the villa. De Franceschini chose Isis for three reasons: there is no question that a statue stood in this niche, so something had to go there; the two flanking niches held candelabra, whose surviving bases are decorated with Isiac iconography; and Isis’ festival in Rome fell on the summer solstice. So we scanned and digitally restored one of the several statues of Isis from the villa and put it into the central niche. Finally, for the dome, which we know from surviving paint was blue and therefore carried the famous “dome of heaven” motif (Lehmann 1945), we followed De Franceschini in restoring a zodiac arranged so that the sign of Gemini sits over the statue niche, since the last day of Gemini is the summer solstice. Our zodiac is adapted from the great Sun God mosaic in the Rheinisches Landesmuseum in Bonn, which kindly gave us permission to use it.
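Horizons returns the sun’s apparent position as azimuth/altitude pairs for the requested site and date. As a minimal sketch of the geometry involved (not our actual production pipeline, and assuming a hypothetical east/north/up, Z-up scene convention like 3ds Max’s), converting one such pair into the direction vector for a scene’s sun light looks like this:

```python
import math

def sun_direction(azimuth_deg: float, altitude_deg: float):
    """Convert a solar azimuth/altitude pair (degrees) into a unit vector
    pointing from the scene origin toward the sun.

    Assumed conventions (hypothetical, for illustration):
      - azimuth measured clockwise from true north,
      - altitude measured up from the horizon,
      - scene axes: +X = east, +Y = north, +Z = up.
    """
    az = math.radians(azimuth_deg)
    alt = math.radians(altitude_deg)
    x = math.cos(alt) * math.sin(az)   # east component
    y = math.cos(alt) * math.cos(az)   # north component
    z = math.sin(alt)                  # vertical component
    return (x, y, z)

# Example: a sun low on the western horizon (azimuth 270, altitude 5)
west_sun = sun_direction(270.0, 5.0)
```

A directional light aimed along the negative of this vector then casts shadows consistent with the historical sky.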

As can be seen in figure 2, when we restored the sun to the correct position in the sky dome for sunset on the summer solstice (June 21) of 130 CE in our 3DS Max model of Roccabruna, the sunlight coming through the main doorway illuminated the statue of Isis in the statue niche, and the light entering through the conduit lit up the sign of Gemini painted on the cupola. We were thus able to confirm the Mangurian-Ray thesis.

The approach we have taken in our Roccabruna project is deductive: Mangurian and Ray noted the strange feature of the conduits punctuating the cupola of Roccabruna; they hypothesized a solar alignment. De Franceschini and Veneziano agreed and for various reasons we don’t need to go into today, they put a statue of Isis into the statue niche. We set up the conditions in which these hypotheses could be tested and were able to verify them.

But surely, if there is one such alignment at the villa of the same emperor who was responsible for the Pantheon, there may be others. Yet the villa is very big—covering over 100 hectares—and has 30 major building complexes, most larger than Roccabruna. Moreover, such alignments could just as easily involve other celestial bodies such as the Moon and the planets. Faced with this level of complexity, the best methodological way forward in searching for new alignments is clearly inductive and empirical. This is one reason why we asked the Institute for Digital Intermedia Arts (IDIA Lab) of Ball State University to create a multi-user virtual world based in Unity 3D from our 3DS Max model.

The project of virtually interpreting a simulation on the scope and scale of Hadrian’s Villa was a daunting one – engaging layers of scholarly, technical and pedagogical challenges. The technical challenges were many – foremost, leveraging the Unity 3D game engine to serve as an effective multi-user, avatar-based virtual world. An important factor was creating an environment that was straightforward and accessible via standard web browsers on both Mac and Windows; we selected Unity 3D as the starting point for developing the platform. We required specific back-end administration tools to handle accounts and the server-side aspects of the project; for this we relied on SmartFoxServer, which integrates well with Unity 3D. Our team took an approach that bridged and integrated disparate technologies, creating a robust virtual world platform to immersively augment both instructional and PBL processes. VW features available to the learning community included text-based communication, a live map showing current visitor positions, map-based teleportation, a managed voice channel, user-selected avatar gestures, a list of online users, paradata, photographs of the extant site, plan views, and integrated web links.

Key to the project was a varied system of avatars representing the imperial court, freedmen, senators, scholars, soldiers, and slaves of the emperor. The avatar system served several important functions, testing recent scholarly interpretations of circulation throughout the villa and of the use of various spaces for typical court activities – meals, imperial audiences, bathing, worship, etc. Upon entering the simulation, the choice of avatar determined one’s social standing within the role-play of the world.

A gesture system was created via motion capture, providing each user with a unique set of actions and gestural responses to support social interaction – including greetings, bowing and gestures specific to rank and class. Communication was also a critical element in the modes of problem-based learning engaged in by participants in the simulation. Specific technologies provided varied abilities such as public chat, private instant messaging and live multi-user voice channels.

A companion website was co-developed and integrated into the VW environment, providing learners with visual assets such as photographs and panoramas of the current site, site plans, elevations, and video interviews with Villa scholars. We also developed three-dimensional turntables of the interpreted and reconstructed models, overview information on each of the major Villa features, a bibliography, and an expansive database of art attributed to the Villa site. This information could be accessed by learners directly from within the virtual world. The development team integrated the notion of paradata introduced by the London Charter – making the scholarship behind all underlying elements of the 3D model (from terrain to buildings, furnishings, costumes, and human behavior) instantly transparent.

In support of a new research theme on celestial alignments by consultants De Franceschini and Veneziano, a major goal for the project was to develop an accurate simulation of the position of the sun. The solar tracking system, or virtual heliodon, that we created in response to this research was envisioned as a bridge between the virtual environment and coordinates from an external database calculating solar positions. After investigating existing tools, we decided to employ the Horizons database created by NASA’s Jet Propulsion Laboratory, an online solar system data computation service tracking celestial bodies in ephemerides from 9999 BCE to 9999 CE. In implementing solar tracking for the Villa project, in instances where we wanted to investigate potentially significant solar alignments, we entered the latitude, longitude and altitude of specific buildings at the Tivoli site to poll the Horizons data for the year 130 CE. The user was able to change the date and time of day, and to quickly play the sun from specific moments via the user interface. The system was correlated to both the Julian and Gregorian calendars and contained presets for the vernal and autumnal equinoxes as well as the summer and winter solstices.
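For illustration only, a request of this kind can be assembled against JPL’s Horizons web API. The endpoint and parameter names below follow the publicly documented API; the site coordinates, altitude, and time window are hypothetical stand-ins for an actual villa building, not values from our project:

```python
from urllib.parse import urlencode

# Public JPL Horizons API endpoint (documented at ssd-api.jpl.nasa.gov)
HORIZONS_API = "https://ssd.jpl.nasa.gov/api/horizons.api"

def horizons_sun_query(lat: float, lon_e: float, alt_km: float,
                       start: str, stop: str, step: str = "10m") -> str:
    """Build a Horizons query URL for the Sun's apparent azimuth and
    elevation (QUANTITIES=4) as seen from a topocentric site on Earth.
    Date strings follow Horizons conventions; ancient dates are supported
    (an explicit AD/BC era prefix may be required for early years)."""
    params = {
        "format": "text",
        "COMMAND": "'10'",                 # target body 10 = the Sun
        "EPHEM_TYPE": "OBSERVER",
        "CENTER": "'coord@399'",           # user-defined site on Earth (399)
        "COORD_TYPE": "GEODETIC",
        "SITE_COORD": f"'{lon_e},{lat},{alt_km}'",  # E-lon, lat, alt (km)
        "START_TIME": f"'{start}'",
        "STOP_TIME": f"'{stop}'",
        "STEP_SIZE": f"'{step}'",
        "QUANTITIES": "'4'",               # 4 = apparent azimuth & elevation
    }
    return HORIZONS_API + "?" + urlencode(params)

# Hypothetical example: a sunset window near Tivoli on the summer
# solstice of AD 130 (illustrative coordinates and dates)
url = horizons_sun_query(41.94, 12.77, 0.13,
                         "AD 130-06-21 17:00", "AD 130-06-21 20:00")
```

The returned table of azimuth/elevation values can then drive the virtual heliodon’s sun placement for the chosen date and time.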

These tools allowed for the rapid discovery of potential alignments that might bear further investigation. The solar feature allows one to proceed empirically, in effect turning the clock back to 130 CE and running experiments in which the days and hours of the year are sped up by orders of magnitude, so that one can in a very short time find candidate alignments not yet hypothesized by scholars working in the traditional way of Mangurian and Ray.
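The sped-up clock can be thought of as a brute-force scan: step through every day and fraction of an hour of the year, compute the sun’s apparent position, and flag moments when it falls within a small angular tolerance of a building’s axis. A rough, self-contained sketch of that idea (using a simplified solar-position model in local solar time rather than Horizons ephemerides, and ignoring refraction and the slow precessional drift that makes the real tool necessary) might look like:

```python
import math

def solar_position(day_of_year: int, hour: float, lat_deg: float):
    """Approximate solar azimuth (degrees clockwise from north) and
    altitude (degrees) from a simple declination model; adequate only
    for coarse, modern-era alignment screening."""
    decl = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365)
    h = math.radians(15 * (hour - 12))            # hour angle (local solar time)
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(h))
    east = -math.cos(decl) * math.sin(h)          # sun vector, east component
    north = (math.cos(lat) * math.sin(decl)
             - math.sin(lat) * math.cos(decl) * math.cos(h))
    az = math.degrees(math.atan2(east, north)) % 360
    return az, math.degrees(math.asin(sin_alt))

def candidate_alignments(axis_az: float, lat_deg: float = 41.94,
                         tol_deg: float = 1.0):
    """Scan the whole year in 0.1-hour steps; report (day, hour) moments
    when the sun is above the horizon and within tol_deg of the axis."""
    hits = []
    for day in range(1, 366):
        for tenth in range(240):                  # 24 h in 0.1 h steps
            hour = tenth / 10
            az, alt = solar_position(day, hour, lat_deg)
            if alt > 0 and abs((az - axis_az + 180) % 360 - 180) < tol_deg:
                hits.append((day, hour))
    return hits
```

Each flagged moment is only a candidate; in practice it would be re-checked against the historically correct Horizons ephemeris for 130 CE.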

As developers, our goal was to create the solar tool and let students and scholars use it to undertake their own empirical research. Our team did not intend to engage in this research ourselves, yet in the process of working within the environment daily we quickly began to notice curious solar phenomena. In a bit of empirical study of the very first component of the site we installed in the simulation, the Antinoeion – the newly discovered Temple of the Divine Antinous – we noticed an alignment of potential interest. The most likely alignment seemed at first glance to run along the main axis from the entrance, through the obelisk in the central plaza, to the statue niche at the end of the axis. We ran the days and hours of the year and found that the sun and the shadow of the obelisk align at sunrise on July 20. We consulted our expert on the Egyptian calendar in the Roman period, Professor Christian Leitz of the University of Tuebingen, and he confirmed that this date has religious significance. It is, in fact, the date of the Egyptian New Year, as the Romans of Hadrian’s age clearly knew (cf. the Roman writer Censorinus, who states that the Egyptian New Year’s Day fell on July 20 in the Julian calendar in 139 CE, the date of a heliacal rising of Sirius in Egypt).

In the process of developing and subsequently using the simulation tools we created for archaeoastronomical research, we have concluded that virtual world technologies can indeed take the inquiry into significant alignments between architecture and celestial events to a new level of insight.

Bibliography

Chiappetta, F. 2008. I percorsi antichi di Villa Adriana (Rome).

De Franceschini, M. and G. Veneziano. 2011. Villa Adriana. Architettura celeste. I segreti dei solstizi (Rome).

Hannah, R. 2008. Time in Antiquity (London).

Hannah, R. 2011. “The Role of the Sun in the Pantheon’s Design and Meaning,” Numen 58: 486-513.

Kandel, E. 2007. In Search of Memory: The Emergence of a New Science of Mind (W. W. Norton, New York). Kindle edition.

Lehmann, K. 1945. “The Dome of Heaven,” Art Bulletin 27: 1-27.

Lugli, G. 1940. “La Roccabruna di Villa Adriana,” Palladio 4: 257-274.

Mangurian, R. and M.A. Ray. 2008. “Re-drawing Hadrian’s Villa,” Yale Architectural Journal, 113-116.

Mari, Z. 2010. “Villa Adriana. Recenti scoperte e stato della ricerca,” Ephemeris Napocensis 20: 7-37.

Ytterberg, M. 2005. “The Perambulations of Hadrian. A Walk through Hadrian’s Villa,” Ph.D. dissertation, University of Pennsylvania.


[1] NSF grant # IIS-1018512.

[2] For a list of scholarly consultants, see http://idialabprojects.org/hvtest/mission.php.

[3] Athenaeus 8.361e-f.

[4] http://ssd.jpl.nasa.gov/?horizons.

Final Wisdom I: Interactive Haptic Poetry Installation at SIGGRAPH 2010

Final Wisdom I is an interactive installation engaging viewers in a sensory exploration of temporal and spatialized poetry. Participants manipulate imagery, sound and language through their gesture, touch and proximity. The work is engaged through a framework of cameras and sensors that react to heat, position, and capacitance – presenting a shifting environment of reactive media and haptics. Final Wisdom I is the work of artists Hans Breder and John Fillwalk, poetry by critic Donald Kuspit, music by composers Carlos Cuellar Brown and Jesse Allison. This project is produced through the Institute for Digital Intermedia Arts at Ball State University in collaboration with the Institute for Digital Fabrication. Final Wisdom I was exhibited at the art gallery of SIGGRAPH 2010 in Los Angeles, CA. Special thanks to IDF/CAP students Matthew Wolak, Christopher Baile and Claire Matucheski, and Assistant Professor of Architecture Joshua Vermillion. http://www.i-m-a-d-e.org/

As an intermedia artist, John Fillwalk actively investigates emerging technologies that inform his work in a variety of media, including video installation, virtual art, and interactive forms. His perspective is rooted in the traditions of painting, cinematography, and sculpture, with a particular interest in spatialized works that can immerse and engage a viewer within an experience. Fillwalk positions his work to act as both a threshold and mediator between tangible and implied space, creating a conduit for the transformative extension of experience, and to pursue the realization of forms, sounds and images that afford interaction at its most fundamental level. In working with technology, he values the synergy of collaboration and regularly works with other artists and scientists on projects that could not be realized otherwise. Electronic media extend the range of traditional processes by establishing a palette of time, motion, interactivity, and extensions of presence. The ephemeral qualities of electronic and intermedia works, by their very nature, are inherently transformative, and the significance of the tangible becomes fleeting, shifting emphasis away from the object and toward the experience.

John Fillwalk is Director of the Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University, an interdisciplinary and collaborative hybrid studio. An intermedia artist and Associate Professor of Electronic Art, Fillwalk investigates media in video installation, hybrid reality and interactive forms. He received his MFA from the University of Iowa in Intermedia and Video Art, and has since received numerous grants, awards, commissions and fellowships.

Donald Kuspit is an art critic, author and professor of art history and philosophy at State University of New York at Stony Brook and lends his editorial expertise to several journals, including Art Criticism, Artforum, New Art Examiner, Sculpture and Centennial Review. Hans Breder was born in Herford, Germany, and trained as a painter in Hamburg, Germany. Attracted to the University of Iowa’s School of Art and Art History in 1966, Breder established the Intermedia Program. Carlos Cuellar Brown, a.k.a. ccbrown, is a composer, instrumentalist and music producer. Formally trained as a classical pianist, Cuellar specialized in experimental music and intermedia with the late American maverick composer Kenneth Gaburo. Jesse Allison is the Virtual Worlds Research Specialist, IDIA, and Assistant Professor of Music Technology, Ball State University. He is also President of Hardware Engineering with Electrotap, LLC, an innovative human-computer interface firm.

Leonardo: The International Society for the Arts, Science and Technology article published by The MIT Press. http://muse.jhu.edu/journals/leonardo/summary/v043/43.4.fillwalk.html

The History Channel’s The Universe, Ancient Mysteries Solved: Roman Engineering

History Channel at IDIA Lab

John Fillwalk, director of the Institute for Digital Intermedia Arts and senior director of the university’s Hybrid Design Technologies initiative, was interviewed for the History Channel’s “The Universe.” The episode, which aired 9 p.m. May 23, featured the IDIA Lab’s digital recreation of ancient Roman structures.

For the season finale of its popular program “The Universe,” the History Channel is shining a spotlight on the work of virtual artists from Ball State University.

The season finale episode of the documentary series, which aired May 23, explores how Roman emperors built ancient structures to align with movements of the sun. To confirm experts’ theories about the religious, political, and cultural significance behind these phenomena, the cable network enlisted the help of Ball State’s Institute for Digital Intermedia Arts (IDIA).

Through the use of 3-D computer animation, artists and designers from the IDIA Lab virtually recreated several monuments featured in the episode, along with accurately simulating the alignment of the sun. These structures include: the Pantheon, one of the best preserved buildings of ancient Rome; the Temple of Antinous, a complex that today lies in ruins within Hadrian’s Villa outside of Rome; and the Meridian of Augustus, a site containing an Egyptian obelisk brought to Rome for use as a giant sundial and calendar.

‘Getting things right’

IDIA Lab's Roccabruna reconstruction

 

The IDIA Lab digitally rebuilt ancient Roman structures to give viewers of the History Channel’s “The Universe” a vivid idea of what ruins looked like when they were inhabited.

James Grant Goldin, a writer and director for The History Channel, says IDIA’s animations were an essential part of the program’s second season finale. For an earlier episode of “The Universe,” which aired in 2014, the network contracted with IDIA for an animation of Stonehenge in a segment demonstrating how the prehistoric monument may have been used to track celestial movements.

This time around, the work of Ball State digital artists is featured throughout the Roman engineering episode.

“It’s an honor for us,” says John Fillwalk, director of IDIA and senior director of the university’s Hybrid Design Technologies initiative. “Ideally our relationship with the History Channel will continue long into the future.”

Goldin said the cable network is keen on employing the IDIA for future projects because Fillwalk and his team create beautifully rendered graphics backed by data and research.

“John was devoted to getting things right,” he said. “Many of the theories we discuss in the show remain controversial, and in nonfiction TV it’s always a good idea to present such things with a qualifying phrase in the narration—an ‘if’ or a ‘maybe.’ But the IDIA Lab combined their own research with that of experts, and I’m very happy with the results.”

Gaming software transforms history

IDIA Lab's Pantheon reconstruction

 

The Pantheon is one of the ancient structures the IDIA Lab developed 3-D renderings of for the History Channel’s “The Universe.”

Fillwalk has worked closely over the years with many prominent scholars of the ancient world, including Bernard Frischer, a Roman archaeologist at Indiana University, and Robert Hannah, dean of arts and social sciences at New Zealand’s University of Waikato, who advised on the archaeoastronomy of the project.

Hannah says he’s been astounded to see the way up-to-the-minute gaming software can bring to life the work of today’s historians and virtual archaeologists. “I’ve seen my son play games like ‘Halo,’ so I knew what was possible,” he said, “but I’d never seen it adapted to ancient world buildings.”

Phil Repp, Ball State’s vice president for information technology, says the relationship between the university and the cable network is a key example of how Ball State is distinguishing itself as a leading world provider of emerging media content.

“For the History Channel to want the continued help of John and his staff speaks to the quality of what our lab can produce,” he said.

Goldin’s praise for the IDIA supports Repp’s theory. “Bringing the past to life is a very challenging task,” he said. “The renderings Ball State artists have created represent the most accurate possible picture of something that happened almost 2,000 years ago.”

Engaging Technology II: Art + Science

The interplay of art and science—a concept at least as old as Leonardo da Vinci—is the focus of “Engaging Technology II,” an exhibition at Ball State University’s David Owsley Museum of Art which ran September 28, 2017 through December 22, 2017. Museum patrons experienced a selection of work from internationally renowned artists whose explorations on the topic include installations, code art (artwork generated by computer programming), art/science, augmented reality and human-computer interaction. 

The exhibition was curated by Ball State’s John Fillwalk, director of the Institute for Digital Intermedia Arts (IDIA Lab) in the College of Architecture and Planning, and is both a sequel and a new concept, according to Robert La France, the museum’s director. The exhibit builds on the success of a prior exhibition curated by Fillwalk that examined the emergence of the genre of intermedia art, with a particular focus on the intersection of technology and artmaking. The new show shifts the curatorial emphasis to STEAM – science, technology, engineering, arts and mathematics.

This exhibition continues to position Ball State as an international center of STEAM innovation. A goal of Engaging Technology II was to invite artists exploring the edges of art and science into discourse within our community—advancing our University’s conversation regarding interdisciplinary collaboration and innovation. This collection of artists examines various aspects of the physical and life sciences, from code and biology to chemistry and physics.

Among those individuals whose work is featured in the exhibition is Adam Brown, an associate professor at Michigan State University. His live biochemical installation, The Great Work of the Metal Lover, sits at the intersection of art, science and alchemy. “It uses microbiology as a technique to solve the mystery of the philosopher’s stone,” said Brown, who described the centuries-old “stone” as a legendary substance sought by alchemists who believed it capable of turning metals like mercury into gold and silver.

His installation uses custom lab equipment to introduce a “highly specialized” bacterium into an engineered atmosphere, turning toxic gold chloride into usable 24K gold over the course of the show. “By the end of the exhibition, the process will produce enough gold to put in the palm of your hand,” Brown said.

Other innovative contributors to “Engaging Technology II” include code artist Casey Reas; composer and visual artist Tristan Perich; and Evelina Domnitch and Dmitry Gelfand, European-Russian artists who create sensory immersion environments that merge physics, chemistry and computer science with uncanny philosophical practices.

“10000 Peacock Feathers in Foaming Acid” is Evelina Domnitch and Dmitry Gelfand’s live, audiovisual performance, which was presented on the fulldome projection screen of Ball State’s Charles W. Brown Planetarium. The artwork uses a penetrating laser beam to scan the surface of soap bubbles, the behavior of which models the unique properties of cell membranes.

Throughout the run of Engaging Technology II, a series of invited performances, lectures and workshops was scheduled on campus and in the Muncie community to enhance the exhibition’s reach. Classes, workshops, lectures and family events were offered for local and regional audiences.

By Gail Werner, BSU Media Strategist

Apollo 11 Simulation for International MoonBase Alliance

A 3D simulation of an Apollo 11 landing was produced for International MoonBase Alliance and ImmersiveWorlds, LLC. by IDIA Lab.




About IMA

The International MoonBase Alliance (IMA) is an association of leading scientists, educators, and entrepreneurs from space agencies and industries worldwide, working to advance the development and implementation of an international base on the Moon.

Our priority is to create an action plan that will culminate with the building of MoonBase prototypes on Earth, followed by a sustainable settlement on the Moon. Building on discussions and recommendations from the Lunar Exploration and Analysis Group (LEAG), the European Lunar Symposium, the International Space Development Conference, the NewSpace Symposium, the International Astronautical Congress (IAC), and other worldwide space forums, we intend to formulate an integrated strategy for establishing a proposed multinational lunar base.

Towards this goal, the IMA hosted a summit in October 2017 on Hawaiʻi’s Big Island, inviting leading aerospace, engineering and space industry professionals to a series of conferences focused on identifying the essential resources and technological capabilities required to enable and sustain lunar-based operations. Through the promotion of international collaborations and public-private partnerships, we can reduce costs, enhance benefits, and accelerate timetables for lunar settlements.

moonbasealliance.com/

NEWARK MOUNDS SIMULATION

Ball State’s Institute for Digital Intermedia Arts (IDIA Lab) and the Applied Anthropology Laboratories (AAL) have created a web-based virtual world that interprets the earthworks in their era of construction. The sky is simulated with accurate celestial bodies using data from NASA’s Jet Propulsion Laboratory to allow users to view the moon and sun as they appeared almost 2000 years ago. The project is funded by the National Endowment for the Humanities’ Office of Digital Humanities.

The project is a collaboration between Ball State, The Works and the Ohio History Connection, with support and partnership from several federally recognized American Indian tribes, including the Eastern Shawnee Tribe of Oklahoma and the Shawnee Tribe.

CREDITS
IDIA Lab: John Fillwalk, Neil Zehr, Trevor Danehy, Adam Kobitz, Ina-Marie Johnston
AAL: Kevin Nolan, Tusher Mohanta, Jade Moore, Ashra Wickramathi
Ohio History Connection: Jennifer Aultman, Bradley Lepper
The Works: Meghan Federer

Advisory Board: Brett Barnes (Eastern Shawnee Tribe of Oklahoma), Marti L. Chaatsmith (Newark Earthworks Center, Ohio State University), James J. Connolly (Ball State), Christine Ballengee Morris (Ohio State University)

Ball State University’s IDIA Lab is granted rights to Blue Mars virtual world technology from Avatar Reality

http://bit.ly/BSU_BlueMars

Ball State granted rights to develop $10 million Blue Mars virtual world technology (3/5/2012)

A digital laser scan of a centuries-old Buddha (above) and a simulation of the 1915 World’s Fair in San Francisco (below) are two examples of how Ball State has employed Blue Mars for cutting edge technology projects.

The future of virtual worlds has a new home at Ball State University, thanks to the generosity of Avatar Reality Inc., which granted the university rights to the multimillion-dollar 3-D virtual world platform, Blue Mars.

Blue Mars’ creator, Hawaii-based Avatar Reality, has provided expanded rights to the technology, valued at $10 million in research and development, to Ball State for 3-D simulation and research projects beyond the world of gaming.

“We are honored to have our institution selected to build upon the tremendous innovations within Avatar Reality’s Blue Mars platform,” said John Fillwalk, director of the Institute for Digital Intermedia Arts (IDIA) and senior director of Ball State’s Hybrid Design Technologies initiative. “This partnership between Avatar Reality and Ball State is an unprecedented gesture and a great distinction.”

Since 2009, Ball State’s IDIA has been a leading developer for Blue Mars, employing the virtual platform on projects such as digital laser scans of a centuries-old Buddha and the creation of the Virtual Middletown Project. The project, which ushers visitors via their computers into the world of the early 20th-century Ball Brothers Glass Manufacturing Co., is one example of the cultural heritage work possible with Blue Mars. Another is IDIA’s simulation of the 1915 World’s Fair in San Francisco.

“Avatar Reality designed in Blue Mars a next-generation 3-D platform that greatly advanced the graphic fidelity, functionality and scalability of virtual worlds — well beyond current standards,” Fillwalk said. “Its strategy connected social spaces across devices, from high-performance desktops to mobile phones. Our goal is to build upon both its technology and vision for the future of virtual worlds.”

Recognition speaks to reputation

Phil Repp, Ball State’s vice president for information technology, said the new agreement with Avatar Reality can be traced to Fillwalk’s commitment to advancing IDIA to the national level.

“The fact that other universities — and many of them of very high caliber — are working hard to gain expertise in hybrid worlds and that Ball State was identified as the university most likely to further develop this technology speaks volumes about our reputation, experience and abilities,” Repp said. “It’s an honor for Ball State and John to be singled out with this kind of recognition.”

Under Ball State’s terms of agreement with Avatar Reality, the university will begin to fully operate Blue Mars for noncommercial purposes, expand upon the source code, increase its research and academic initiatives, and enhance the community of Blue Mars. In addition, Ball State will deliver original content on Blue Mars as it has done in the past. Existing commercial activity will continue to be operated and supported by Avatar Reality, Inc.

“I am really excited about the future,” Repp said. “Through our division of Hybrid Design Technologies, Ball State will further our position as a national leader in hybrid worlds and their applications to mediated learning technologies. Our reputation in this discipline is such a perfect fit to our institutional history of innovation in teaching and learning.”

About Ball State: Located in Muncie, Ind., Ball State University is redefining education by providing relevant, immersive learning experiences that engage high-caliber students in intense interdisciplinary projects both in and out of the classroom. Its vibrant campus is home to about 22,000 undergraduate and graduate students from across the country and abroad. Learn more at www.bsu.edu.

About Blue Mars: Developed by Avatar Reality Inc., Blue Mars is a premium 3-D virtual world platform featuring unparalleled interactivity, fidelity, scalability and security, and it enables artists and developers to create and distribute interactive 3-D experiences to a global audience. Blue Mars launched in open beta in October 2009 and began selling virtual land to third party developers in January 2010. Blue Mars Mobile, an iOS app based on the content and technology from the virtual world, was introduced in February 2011.

Old Planetarium + Cutting Edge Studio

planetarium_sub

OLD PLANETARIUM REVAMPED INTO CUTTING-EDGE STUDIO

Months after the opening of the Charles W. Brown Planetarium, Ball State has found a new use for its almost 50-year-old predecessor. The dome-shaped room in the basement of the Cooper Science Complex has become a production facility dedicated to developing new shows for planetariums nationwide.

“This is an exciting opportunity for us,” said planetarium director Ron Kaitchuck, “because the last thing we wanted was for the old planetarium to become a storeroom.”

To make use of the space, Kaitchuck and his staff first needed to outfit it with a new digital projector. Helping him secure funding for the device was John Fillwalk, director of Ball State’s Hybrid Design Technologies, a division of the Office of Information Technology that seeks out partnerships injecting the arts into STEM education. More commonly known as STEAM, the acronym stands for science, technology, engineering, arts and mathematics.

“We saw the dome as a valuable space where we could create new content — the kind that’s hard to test on flat-screen computer monitors,” said Fillwalk, who’s also director of the Institute for Digital Intermedia Arts (IDIA), a lab specializing in creating virtual reality and 3-D simulations for a broad range of clients including UNESCO, other universities and the History Channel.

Kaitchuck said Fillwalk’s expertise — as well as that of his student and professional designers — will let planetarium staff and students in the Department of Physics and Astronomy create shows they couldn’t otherwise. “For instance, we don’t know how to craft 3-D models, but John does.”

“And I’m not a scientist, but Ron is,” Fillwalk said. “So there’s a nice synergy at play where he advises us on the science side of these productions and we create the graphics.”

AN ENTREPRENEURIAL VENTURE IN THE MAKING

While the partnership is new this year, Dayna Thompson, assistant planetarium director, looks forward to seeing how the IDIA Lab helps improve shows Ball State is already creating.

One goal is to generate more revenue from the productions. Take, for example, “Saturn & Beyond,” which recently sold to a distribution company for $18,000. “Had we been able to work with IDIA on that, we could have included more graphics to help explain the science that the program talks about,” she said.

Helping assist with the creation of “Saturn & Beyond” was physics and astronomy student Monique Gabb, a Florida native completing her master’s degree who wants to work for a government-backed science and engineering research lab like Argonne. “Astronomy is a great gateway to getting people interested in science, so I enjoyed helping create a show like this.”

Gabb said when it came time to test “Saturn” for audiences, it was a challenge to get large blocks of screen time inside the 52-foot domed Brown Planetarium. “There are so many shows going on over there, it was hard to work us in.” With the conversion of the old planetarium into a new production dome, “we no longer have to worry about that issue.”

Kaitchuck said the size of the old planetarium is also a major boon for its new use. “As far as I know, no other planetarium in the country has at its disposal a 30-foot dome for production purposes only,” he said, noting New York City’s Hayden Planetarium, inside the American Museum of Natural History, uses a production dome a fraction of that size.

IDIA TO CREATE NEW, INTERACTIVE CONTENT

Fillwalk has his own aspirations for the old planetarium, now that he’s helped transform it into a studio — one equipped with game-engine computers and a digital projector encased inside a protective fabricated display created by IDIA artist and Ball State alumnus Chris Harrison, ’10, MS ’12.

Most of Fillwalk’s plans involve using the space to help IDIA expand upon — and reshape — content the lab has created for past projects, including virtual simulations of Stonehenge and other ancient ruins in various celestial alignments. “It’s a great opportunity for us to test what we already do in another immersive environment. A dome gives you the same kind of view you’d see with a head-mounted, immersive display, but now we don’t have to wear goggles to get it.”

IDIA is also exploring development of dome-based content for new clients including museums and aerospace companies. “My biggest interest is finding ways to allow audiences to be more interactive with these kinds of shows,” Fillwalk explained. “If you could hold your phone up as you’re watching, and have a planet or star jump out at you — I think kids would love that.”

OPPORTUNITIES FOR EDUCATION

Closer to home, Fillwalk and his team are working with Thompson on projects for the Brown Planetarium, which in its first year had about 20,000 visitors — more than double the number of people who visited the old planetarium in its last full year of operation.

For the planetarium’s show, “Black Holes, Worm Holes, and the Movies,” IDIA artists visualized travel through a worm hole, which scientists consider to be a shortcut connecting two distant points in space-time. And for an upcoming Halloween show, Fillwalk and his staff will create graphics illustrating the history of the holiday and its astronomical origins.

Kaitchuck wants to use the old planetarium for future entrepreneurial learning opportunities that will let students outside his department help create new planetarium shows. “We could use music media production students for the audio, creative writing majors to help with scripts — it could be a very diverse group,” he said. “And now, with John’s help, the kinds of work we’ll be able to do … well, the sky really is the limit.”

By Gail Werner

DOME Lab photographs courtesy of Ball State University, Division of Strategic Communication

http://bit.ly/2anJRzr

Dr. Bernard Frischer Lecture: Rome Reborn: 3D Reconstructions as Tools for Discovery

Bernard Frischer is a digital archaeologist who writes about virtual heritage, Classics, and the survival of the classical world. He received his Ph.D. in Classics from the University of Heidelberg. He followed his doctoral work in Germany with a two-year postdoc at the American Academy in Rome, where he studied Roman topography and archaeology. Dr. Frischer currently is a professor of Informatics at Indiana University.

The Rome Reborn project is an international initiative, launched in 1996, to create a 3D reconstruction of ancient Rome in AD 320. The model, which took 22 years to complete, makes it possible for newcomers to the subject of Roman topography and urban history to obtain a quick visualization of the monuments in their context in the city. The purpose of this talk is not to explore these instructional applications of the model but to draw out the uses of virtual reality as a tool of discovery.

• Sponsored by the Digital Scholarship Lab (DSL), the Institute for Digital Intermedia Arts (IDIA) and the Ancient Studies Program at Ball State University.

• Introduction by Provost Dr. Susana Rivera-Mills.

For more information, please email us at DSL@bsu.edu

NATIONAL AWARD: VIRTUAL ESCAPE ROOM


The BSU College of Health and CAP’s IDIA Lab were honored with a national award for their collaborative virtual escape room project. The serious game, “The Case of the Kidnapped Healthcare Professionals: Virtual Escape Room,” was selected to receive the 2023 Creativity in Practice and Education Award from the National Academies of Practice (NAP). BSU’s Lori Porter accepted the award on behalf of the project team of Karrie Osborne, PhD, RN; Blair Mattern; John Fillwalk; IDIA staff; and Katherine Berryhill for their work in developing the project.

Escape rooms employ innovative learning strategies such as problem-based immersive learning, role-play, and game principles to improve teamwork, communication and leadership. Nationally, health profession curricula are beginning to explore the use of escape room experiences as innovative pedagogy. Escape rooms challenge individuals to collaboratively explore new content, work as a team to discover clues, solve puzzles, and accomplish interactive tasks in a limited amount of time.

Flickr Gettr: IDIA performs at the International Annual Science and Technology Festival in India

John Fillwalk and Jesse Allison traveled to Mumbai, India for the International Annual Science and Technology Festival, January 22-24, to exhibit the Institute for Digital Intermedia Arts’ (IDIA) “Flickr Gettr” installation. The festival, the largest of its kind in Asia, hosted more than 65,000 attendees. In addition to exhibitions such as “Flickr Gettr”, the festival featured keynote speakers such as Lars Rasmussen, Co-Founder of Google Wave and Google Maps; Vic Hayes, father of Wi-Fi; Ajay Bhatt, Co-Inventor of USB; Jonathan Gardner, Senior NASA Scientist; and R. Chidambaram, Principal Scientific Adviser of the Government of India.

IDIA’s Flickr Gettr is an interactive sonic and visual experience, immersing the viewer in a three-dimensional cloud of user-searched Flickr images. The installation, completed in collaboration with the New Media Consortium, Linden Lab and the Institute for Digital Fabrication, was also highlighted in the festival’s promotional video.

For more information, please visit http://www.techfest.org/scintillations/flickrgettr/

Smithsonian and National Park Service Presentations

Vice-President of Information Technology, Phil Repp and IDIA Lab Director, John Fillwalk traveled to Washington, D.C. to present Ball State projects in electronic field trips, virtual worlds and hybrid design to several federal organizations. Meetings included a presentation at the Smithsonian Institution Building on the National Mall to representatives from all the Smithsonian Museums and Institutes as well as a separate presentation to the National Park Service. Conversations are ongoing regarding partnership and collaboration opportunities.
 

Newton’s Cenotaph in VR

Appeared in Engadget, Variety and Motherboard

Linden Lab lifted the veil on Sansar, the long-awaited follow-up to Second Life, allowing the general public to explore VR environments that beta-testers have been creating for the platform. Among the usual suspects of ritzy nightclubs and elven cities is something unexpected.

Over the summer, a group of digital artists from Ball State University used Sansar to construct a digital replica of Newton’s Cenotaph—one of the most awe-inspiring neoclassical structures to never have been built.

“The building is really at this preposterous and fantastical scale,” director at Ball State’s IDIA Lab John Fillwalk told me in a phone interview. IDIA Lab is a division within Ball State’s College of Architecture and Planning that explores the intersection of digital and physical design.

“With this technology, you can build the impossible, or at least the impractical,” said Fillwalk. It had been a long-time aspiration of his to digitally assemble Newton’s Cenotaph in some shape or form, and Sansar provided a convenient way to bring the unrealized work of architecture to life.

The Cenotaph is a great, big dome of a building, originally imagined by the French architect Étienne-Louis Boullée in the 18th century. But Boullée’s more grandiose designs tended to skirt the limits of feasibility, and thus rarely saw the light of day. The Cenotaph’s design, for instance, eclipses the height of the Great Pyramids at 455 feet.

“It would take an enormous amount of labor to do something like that in reality,” Fillwalk said. “And the engineering to pull it off would be an outstanding undertaking.”

Sansar made it easier. To begin with, Fillwalk got ahold of high resolution scans of Boullée’s architectural prints. Following them as closely as possible, the group recreated them in the 3D modeling software Maya.

While the exterior of the unbuilt building is expansive, the interior is mechanically intricate. Boullée envisioned the building as a monument to Isaac Newton, who, among other things, worked out mathematical proofs for heliocentrism, the idea that the planets orbit the sun.

In tribute, a great brass armillary sphere, representing the motion of the planets, was intended to rotate within the equally great dome.

One of VR’s greatest assets is giving users a sense of scale, so the medium was a natural fit for resurrecting impossible works of overambitious architecture, Fillwalk said. In fact, the Cenotaph may be too big.

“Because it takes so long to walk through it normally, we put in a teleport feature as a speedy way to get through it,” he said.

https://www.vice.com/en/article/zm4pe4/newtons-cenotaph-has-finally-been-built-but-in-vr

Visit Newton’s Cenotaph

VIRTUAL COURTROOM SIMULATOR

IDIA is collaborating with BSU’s Brandy and Greg Rocheleau from the Department of Criminal Justice and Criminology on the production of a Virtual Courtroom. The project, funded by BSU’s Women of Beneficence, is a program designed to allow students in designated criminal justice classrooms to participate in courtroom simulations through the use of avatars to facilitate knowledge and empathy as well as practice decision-making skills in a courtroom setting. 

One of the main purposes of this project is to give criminal justice students an opportunity to practice ethical decision-making skills in the field of criminal justice. It also helps students develop empathy for vulnerable populations so students become socially-responsible professionals. IDIA is designing a multiuser virtual simulation where students can engage via avatars and communicate in voice, chat, video or whiteboard. The facilities will include a courtroom, deposition room, classroom and even an accurate digital twin of the Supreme Court!

RUTGERS HEALTHCARE SIMULATION

IDIA is designing an interactive virtual training simulator for Rutgers University’s School of Health Professions. The project immerses students within TeamSTEPPS, Team Strategies and Tools to Enhance Performance and Patient Safety. Developed by AHRQ, the Agency for Healthcare Research and Quality, TeamSTEPPS is an interprofessional communication framework providing interdisciplinary collaboration and teamwork tools aimed at optimizing patient outcomes by improving communication among healthcare professionals. It is AHRQ’s signature curriculum to improve patient safety by training health care teams to communicate and practice effective teamwork. TeamSTEPPS trains staff in teamwork skills to lead medical teamwork improvement initiatives in an organization, from initial concept development through sustainment of positive changes.

Students will learn best-practice interprofessional communication strategies and apply them in lab-based scenarios in a single-player simulator created in the Unity 3D game engine. The experience is designed with branched dialogue trees to provide decision-making opportunities that assess learned concepts as they are applied to various healthcare scenarios. The project was recently presented at the National Academies of Practice (NAP) annual conference.
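At their simplest, branched dialogue trees of this kind are node graphs in which each learner choice points to a next node. The Python sketch below illustrates the general structure; the node design, names and scenario text are hypothetical and are not taken from the actual simulator:

```python
# Minimal branched-dialogue-tree sketch. Node names and scenario text
# are invented for illustration, not actual TeamSTEPPS content.

class DialogueNode:
    def __init__(self, prompt, choices=None):
        self.prompt = prompt          # text shown to the learner
        self.choices = choices or {}  # choice label -> next DialogueNode

    def is_terminal(self):
        return not self.choices

# A tiny two-level tree: one decision point with two outcomes.
good = DialogueNode("You used closed-loop communication. Scenario complete.")
bad = DialogueNode("The order was misheard. Review the technique and try again.")
root = DialogueNode(
    "The physician gives a verbal medication order. What do you do?",
    {"Repeat the order back to confirm": good,
     "Administer immediately": bad},
)

def traverse(node, picks):
    """Follow a list of choice labels from a node to the resulting node."""
    for pick in picks:
        node = node.choices[pick]
    return node

end = traverse(root, ["Repeat the order back to confirm"])
```

Because each node only references its children, scenarios can be authored as data and assessed by recording which path a learner takes through the tree.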

ACADEMY OF NUTRITION AND DIETETICS

The IDIA Lab was contracted by the Academy of Nutrition and Dietetics – home of the Food Pyramid – to develop a dietetics occupational simulator. The simulator employs branched dialogue scenarios to train students in the process of an intake experience within a hospital clinic. This particular simulation walks through an interaction with a family whose child has diabetes and the different units of the hospital that are involved in the intake. The project was developed in two phases for the client and was created using the Unity Game Engine.

VIRTUAL INSTRUMENT 3D UI

ChromaChord utilizes a combination of the Oculus Rift headset and an attached Leap Motion controller to create an immersive 3D VMI (virtual musical instrument) environment. The Oculus Rift provides a three-paneled visual interface, through Unity, that allows users to access different component windows of the instrument simply by turning their head, while the Leap Motion controller allows the user to interact with the visual interface to musical effect (processed and generated in Max/MSP).

ChromaChord uses the Oculus Rift DK2, a virtual reality headset, to immerse the performer within the VMI. Utilizing two cameras in the virtual environment, the device relays a stereoscopic image to the wearer, allowing for highly dimensional experiences. A gyroscope, accelerometer and magnetometer detect real-world head rotation and translate it into its virtual equivalent, while a peripheral infrared camera captures limited-range positional tracking. A Leap Motion controller affixed to the Oculus Rift provides infrared tracking of the hands. In our application, the Leap Motion tracks the user’s real-world hands and simulates them, mimicking hand positions and finger movements; the user’s own hands are therefore the primary input mechanism.

Custom Software
  • Key note triggering and glow by Leap Motion hand touches
  • Particle generation triggered by Leap Motion hand touches
  • Button click and glow by Leap Motion hand touches
  • Particle destruction and orbit physics
  • OSC Server and Client configurator / Message Handler / Data Handler
Our graphical user interface is composed of three sections: left, center and right panels. On the left, the user can see the modulation sphere. By touching this sphere, the user generates orbital particles; the number of these particles affects a live process on the notes, and the user can remove them by simply swiping them away. Turning to the right reveals the root note and scale type controls. To select a root note and scale type, the user presses the desired button, and the interface responds by highlighting the selected options. The center module contains the keys, a chromatic scale in two colored columns based on the Solfège; this is the core performance area of the UI. In addition, there are arrows facing up and down that the user can touch to raise or lower the octave. All buttons and keys respond to touches by glowing white.

3D Musical Instrument

The default window is the center palette, where the instrument itself is located. Within this window, there are two columns of colored keys (four per column) to the left and the right of the screen. To create sound, the user has only to put their hands vertically in front of them and ‘touch’ the keys, which are placed both for ease of use and for the ability to contact multiple notes simultaneously to create chords. The notes themselves are stacked in intervals of a third from each other; consequently, neighboring notes on the same hand will create conventional harmonies. At the top- and bottom-most extremities of the columns are two arrows that, if touched, move the current note set up or down an octave. When a key is touched, Unity sends a UDP message to Max/MSP indicating which note has been touched; the note remains active until contact ceases. Similarly, touching the octave buttons sends UDP messages to Max to reassign the MIDI note values.
Within Max, MIDI values are then converted to their equivalent frequencies and assigned to their respective oscillators before being processed and output through Max’s audio out. Looking to the right, the user accesses the settings window, where note assignment takes place. First, the user selects the root note of the scale they wish to be within. The second parameter allows the user to select the mode of scale they would like, the options being major, natural minor and harmonic minor. By default, the instrument opens with a root note of C in the major mode. The UDP messages sent from the settings window control a series of gates within Max that then send adjusted MIDI values to the instrument’s oscillators.

The performer accesses the modulation window to the left, which controls the frequency and Q of a filter, as well as the level of effect. By touching the button in the center, Unity spawns floating particles that surround the user’s hands. The longer the user touches the button, the more particles accumulate; they can be removed by making shaking or throwing motions. Without particles, the filter’s frequency setting is at its lowest audible setting, creating a much darker, mellow timbre; as more particles accumulate, the filter opens up, allowing higher frequencies to pass through and creating a brighter, richer timbre. The particles also move the dry/wet ratio for effects toward the ‘wet’ end of the spectrum, deepening the audibility of the effects. The UDP message sent from the modulation window contains a single numeric value for the total number of active particles; this value is then scaled in Max/MSP to suit each parameter being altered.
Name         Type             Range  Default  Notes
Do           Key              0-1    0        Bottom of octave
Re           Key              0-1    0
Mi           Key              0-1    0
Fa           Key              0-1    0
Sol          Key              0-1    0
La           Key              0-1    0
Ti           Key              0-1    0
Do           Key              0-1    0        Top of octave
Up Octave    Octave Switcher  0-1    0        1 when clicked
Down Octave  Octave Switcher  0-1    0        1 when clicked
Modulation   Modulation       0-100  0        # of particles
Root Note    Root Note        0-11   0
Scale Type   Scale Type       0-2    0
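Two of the conversions described above are easy to make concrete: the standard equal-tempered MIDI-note-to-frequency formula, and the linear scaling of the particle count onto a synthesis parameter range. The Python sketch below is for illustration only; the formula is standard, but the scaling ranges are assumptions rather than the actual Max/MSP patch values:

```python
# Sketch of the two conversions described above. The equal-tempered
# MIDI-to-frequency formula is standard; the particle-scaling ranges
# are illustrative assumptions, not the actual Max/MSP patch values.

def midi_to_freq(note):
    """Equal-tempered frequency for a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def scale_particles(count, out_min, out_max, in_max=100):
    """Linearly map a 0..in_max particle count onto a parameter range,
    e.g. a filter cutoff frequency or a dry/wet ratio."""
    count = max(0, min(count, in_max))
    return out_min + (out_max - out_min) * count / in_max

print(midi_to_freq(69))               # 440.0 (A4)
print(midi_to_freq(60))               # ~261.63 (middle C)
print(scale_particles(50, 0.0, 1.0))  # 0.5 dry/wet
```

In the actual instrument these conversions happen inside Max/MSP after the UDP messages arrive; the sketch only shows the arithmetic involved.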

GIS PAPER: DEEP MAPPING

BSU’s James Connolly, Director of the Center for Middletown Studies, and John Fillwalk, Director of IDIA Lab, presented their joint paper at the 30th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems in Seattle, WA. The paper explored innovations in visualizing data in immersive 3D environments associated with the newly launched Deep Mapping Middletown project. Led by Connolly and Fillwalk, the project consists of a team of scholars and library scientists from several institutions producing deep maps based on the extensive archive created by a century of social research focused on Muncie, IN.

https://dl.acm.org/doi/abs/10.1145/3557919.3565815

VR Furniture System


Virtual and on-demand multi-user furniture system for hybrid reality collaborations. Demonstration of real-time deployment of virtual furniture to support simulation of various small-group and large-group spontaneous interactions, as well as a demonstration of a hybrid reality telepresence system.

The Institute for Digital Intermedia Arts (IDIA Lab) at Ball State University explores the intersection between the arts, sciences, and humanities using a broad range of technologies. Founded in 2005, IDIA Lab has been designing scholarly, creative, teaching and learning projects that investigate virtual reality, human computer interface (HCI), augmented reality, mobile apps, interactive artworks, immersive environments, head mounted display, visualization and 3D simulation. The IDIA Lab studios were initially incubated through the Eli Lilly Endowment’s Digital Media Initiative as part of two generous grants to the University. IDIA Lab’s research facilities include five distinct labs – IDIA Lab, CAVE Lab, DOME Lab, HCI Lab, and SIM Lab – all working in concert with their partner, the Digital Scholarship Lab (DSL). The labs’ faculty, staff and students develop projects in collaboration with a cohort of international clients in this innovative initiative that investigates the forefront of emergent design, expression and learning.



Virtual Companion: Indianapolis

Virtual Monument Circle
1910s-1930s

Virtual Monument Circle is a mobile application that uses historic photographs and maps of downtown Indianapolis, IN to create an immersive interpretation of various historic phases of the city center. This project is a prototype for a larger potential city-wide endeavor bringing to life significant neighborhoods and sites within the city. It is developed as a possible collaboration between the Institute for Digital Intermedia Arts at Ball State University, the Polis Center at Indiana University Purdue University Indianapolis and the Indiana Historical Society.

There are two experiences, with content dependent on the user’s proximity to the site: onsite and offsite. It will be available soon for both iOS and Android.

Virtual Monument Circle was designed and produced by the Institute for Digital Intermedia Arts at Ball State University. Special thanks to the Polis Center at IUPUI and the Indiana Historical Society.

For more information contact:  idialab@bsu.edu

Virtual Companion: Columbus

IDIA Lab – in collaboration with BSU Assistant Professor of Architecture, Kristin Barry – has designed the cultural heritage mobile application Virtual Columbus Experience, allowing users to virtually tour the rich architectural history of Columbus, Indiana. This locative mobile app was produced with fellowship support of Ball State University’s Digital Scholarship Lab.

Though a relatively small city, Columbus has made itself a cradle of modern architecture, commissioning many buildings and public works since the middle of the 20th century. The number of landmark buildings and notable architects to have worked in the city has earned it the nickname “Athens on the Prairie.”

With data and artifacts gathered by Kristin Barry’s immersive class, users of the app can explore 3D models of key buildings commissioned over the years, survey timelines showing when architects were active or buildings were constructed, and meet Columbus’ famous architects – such as Eero Saarinen, I.M. Pei, and Harry Weese.

After its launch, Virtual Columbus Experience will seek further funding to expand the scope and depth of the project across the community.

Sinespace Gallery Designer

IDIA Lab has created a virtual gallery design system in partnership with virtual world creator Sinespace. Sinespace, a multi-user virtual world platform based in the UK, contacted IDIA to design this system for their new virtual environment. The Gallery Designer is an easy-to-use system for dynamically creating gallery and museum exhibitions on the fly with any web-hosted images. Users can easily pick gallery styles (classical/modern), exterior settings, frame style, lighting, floors, wall color and ceiling treatments – customizing an experience that is complementary to the exhibition theme. The gallery can then be easily published, and users can host a virtual reception with visitors from around the globe.

Wright Brothers Workshop

IDIA Lab has created a virtual simulation of the Wright Brothers’ bicycle shop in Dayton, OH for the HTC Vive headset and SteamVR. The environment is a recreation of the Wright Cycle Company shop on South Williams Street in Dayton, which was built in 1886 and is a US National Historic Landmark listed on the National Register of Historic Places. The Wrights occupied this location from 1895 to 1897. IDIA Lab designed the experience as a virtual field trip where a visitor can pick up and examine highly detailed objects from the shop, including letters, tools and inventions created by the Wrights. This project will be published on SteamVR in the coming months.

EVAC+ Virtual Disaster Simulator

EVAC+ Virtual Disaster Simulator

IDIA Lab partnered with physicians at the Seattle Children’s Hospital to develop a virtual simulator to train staff in the case of various disasters.

The EVAC+ Virtual Disaster Simulator leverages virtual reality to train health care providers to react quickly and effectively to unexpected events that threaten patient safety. It was developed after pilot live simulations, user focus groups, surveys, and a successful pilot of the initial EVAC system. After the pilot identified the need for more communication opportunities and variations in disaster responses, EVAC+ was created. EVAC+ retains the same ability to prepare patient equipment for an evacuation but adds significantly more features, allowing interaction with the hospital room, other staff, and family members. Upon entering the EVAC+ environment, providers are oriented to the virtual space, including navigation, interacting with objects, and how to interact verbally with non-player characters.

Once they are comfortable with the space they can choose from a menu of disasters to experience, including earthquakes, fire, and an active shooter event. Each simulation has unique objectives around patient protection, equipment packing, and communication with other staff and families. Learners practice how they would provide a status update using the SBAR technique to their charge nurse, use closed loop communication, and respond to anxious family members.  

Pre-determined safety behaviors, such as stopping and resolving a mission critical issue, are prompted and performance recorded for feedback. After each scenario the user is guided through a self-debriefing on how well they performed on key disaster and communication behaviors. Information on how they managed and packed medical equipment is presented along with evidence-based information on how experts would recommend managing equipment. Users can choose to repeat any scenario to practice or refine their responses and can explore the scenarios in any order.  

The Unity3D environment is accessible on web-browsers and head-mounted VR systems. Amazon Web Services (AWS) supports voice communication and real-time analytics. The EVAC+ system fills a unique need for accessible, interactive, sustainable disaster training for healthcare providers.

Virtual World Heritage Ohio

Aerial view of Newark Earthworks render

Ball State University and Ohio History Council

Sponsored by the Office of Digital Humanities, National Endowment for the Humanities

The public will be able to explore the prehistoric Newark Earthworks in Ohio the way they appeared 2,000 years ago. Funded by the National Endowment for the Humanities’ Office of Digital Humanities, this interactive 3D simulation is under development at Ball State University.

Ball State’s Applied Anthropology Laboratories (AAL) and the Institute for Digital Intermedia Arts (IDIA Lab) are creating a web-based virtual world that interprets the earthworks in their era of construction. The project is led by Kevin Nolan, director and senior archaeologist at AAL, and project co-director John Fillwalk, senior director of IDIA Lab. Interactive features will include accurate celestial alignments: the sky will be simulated with accurate celestial bodies using data from NASA’s Jet Propulsion Laboratory, allowing users to view the stars, planets, moon, and sun as they appeared 2,000 years ago.

First person view of Newark Earthworks render

Already a National Historic Landmark, the Newark Earthworks were designated “the official prehistoric monument of the state” by Ohio in 2006. Spread across four miles in present-day Newark, Ohio, the mounds and walls were constructed to record significant celestial alignments on the landscape, including the 18.6-year lunar cycle. The earthworks created community for the Hopewell People and provided sacred spaces for religious rituals and ceremonies related to their society. The Newark Earthworks comprise the largest set of geometric earthen enclosures in the world, built by the Hopewell People between A.D. 1 and A.D. 400 to serve a variety of cultural and spiritual purposes.

The project is a collaboration between Ball State and the Ohio History Connection, with support and partnership from several federally recognized American Indian tribes, including the Eastern Shawnee Tribe of Oklahoma and the Shawnee Tribe.

Apollo 11 VR on Steam

Apollo 11 VR is a first-person experience of the interior of the 1969 Command Module. Explore the cockpit firsthand in an authentic model of the interior based on scans from the Smithsonian. This year marks the 50th anniversary of the historic landing, when Buzz Aldrin and Neil Armstrong walked on the surface of the moon at a site named Tranquility Base.

The Spacecraft

The Apollo spacecraft had three parts: a command module (CM) with a cabin for the three astronauts, and the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages – a descent stage for landing on the Moon, and an ascent stage to place the astronauts back into lunar orbit. The astronauts used Eagle’s ascent stage to lift off from the lunar surface and rejoin Collins in the command module. They jettisoned Eagle before they performed the maneuvers that propelled the ship out of the last of its 30 lunar orbits on a trajectory back to Earth. They returned to Earth and splashed down in the Pacific Ocean on July 24 after more than eight days in space.

Armstrong’s first step onto the lunar surface was broadcast on live TV to a worldwide audience. He described the event as “one small step for man, one giant leap for mankind.”

ATK Procedural Audio Library

ATK-Sharp

ATK (Audio Tool Kit) is a demo of ATK-Sharp, a procedural audio generation library. ATK contains four example sound scripts that developers can use as a reference to script their own custom sounds.

Download now from the Unity Asset Store!

Why use procedural audio?

Using procedurally-generated audio instead of pre-made audio files has two main benefits:

1. Build file size can be reduced. Audio files can take up megabytes of disk space, while procedural audio is orders of magnitude smaller, typically requiring only kilobytes to store its script’s algorithm.

2. Procedural audio is flexible. Audio files are essentially baked. You may be able to change the pitch or speed of a file, but that could never match the sample-by-sample control you’ll have using procedural audio. With physics and logic as inputs, a given sound script may never produce the same output twice.
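As a concrete illustration of the idea, a wind-like sound can be generated procedurally as white noise smoothed by a one-pole low-pass filter. The Python sketch below shows the general technique only; the filter design and parameter values are illustrative assumptions, not ATK-Sharp’s actual implementation:

```python
# Minimal procedural-audio sketch: a wind-like tone built from white
# noise run through a one-pole low-pass filter. Parameter values are
# illustrative assumptions, not ATK-Sharp's actual code.
import random

def wind_samples(n, cutoff=0.05, seed=0):
    """Generate n samples in [-1, 1]: white noise smoothed by a
    one-pole low-pass filter; lower `cutoff` gives a duller, windier tone."""
    rng = random.Random(seed)
    out, y = [], 0.0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)  # white noise source
        y += cutoff * (x - y)       # one-pole low-pass: y = (1-c)y + c*x
        out.append(y)
    return out

buf = wind_samples(44100)  # one second of samples at 44.1 kHz
```

Because the generator is just an algorithm plus a seed, the entire “sound file” is a few lines of code, and parameters like `cutoff` can be driven by game state every frame.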

Features

– WindAudio.cs demonstrates the very basics of what a developer will need to do to start scripting procedural audio using the library.
– FireAudio.cs showcases how the developer can combine noise generators and a modifier to create the hiss, crackle, and pop of a campfire.
– ChimeAudio.cs exhibits how the developer can use physics as an input to the audio script.
– CricketAudio.cs models the chirp pattern of an actual species of cricket, known as the fall field cricket, to demonstrate how far the concept of procedural audio can be taken.
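To give a sense of how a chirp pattern might be modeled, the sketch below schedules short pulse groups separated by silence, broadly in the spirit of the CricketAudio example; all timing values are illustrative assumptions rather than ATK’s actual parameters:

```python
# Sketch of a chirp-pattern scheduler: a fall field cricket produces
# short groups of pulses separated by silence. The pulse counts and
# durations below are illustrative assumptions, not ATK-Sharp's code.

def chirp_schedule(duration, pulses_per_chirp=3, pulse_len=0.03,
                   pulse_gap=0.02, chirp_gap=0.4):
    """Return (start, end) times in seconds for each sound pulse
    within `duration` seconds of simulated chirping."""
    events, t = [], 0.0
    while t < duration:
        for _ in range(pulses_per_chirp):
            if t >= duration:
                break
            events.append((t, t + pulse_len))
            t += pulse_len + pulse_gap
        t += chirp_gap  # silence between chirps
    return events

events = chirp_schedule(2.0)  # two seconds of chirp timing
```

A synthesizer would then render a tone burst for each scheduled pulse, and the schedule itself could be modulated by in-game variables such as temperature or time of day.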

Dependencies

ATK-Sharp

Support

For support with ATK, please email us at idialab@bsu.edu.