Interactive Art at the College of Architecture and Planning
The IDIA Lab has designed a series of extended reality artworks installed in the atrium of the College of Architecture and Planning. The installation allows passers-by to shape the compositions through motion, voice, and gesture. The works employ sensors, sound art, artificial intelligence, and custom software to create dynamic experiences.
BOXELS
Have fun with this one! BOXELS is an interactive installation that uses your presence and gesture to create dynamic forms in real time. The project uses a depth sensor that first creates a point cloud from within its view. The points are then translated into what is called an isosurface, specifically metaballs, a fluid-like form of 3D geometry. We then translate these fluid shapes into dynamic cube forms (volumetric 3D pixels, or voxels), resulting in the visualization you are experiencing.
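For readers curious how a point cloud becomes cubes, the sketch below illustrates the middle of that pipeline under stated assumptions: a metaball field is evaluated on a coarse grid from the captured points, and any grid cell above an iso threshold becomes a cube to draw. The grid resolution, falloff radius, and threshold are stand-in values, not the installation's actual parameters.

```python
# A minimal sketch of the point cloud -> metaballs -> voxel cubes idea.
# Parameters are illustrative, not the values used in BOXELS.
import numpy as np

def metaball_field(points, grid_res=32, radius=0.15):
    """Evaluate a simple metaball (isosurface) field on a regular 3D grid."""
    # Normalize the captured point cloud into the unit cube.
    pts = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-9)
    axes = np.linspace(0.0, 1.0, grid_res)
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1)            # (res, res, res, 3)

    field = np.zeros((grid_res,) * 3)
    for p in pts:
        d2 = np.sum((grid - p) ** 2, axis=-1)          # squared distance to point p
        field += radius ** 2 / (d2 + 1e-6)             # classic metaball falloff
    return field

def field_to_voxels(field, iso=1.0):
    """Return indices of cells inside the isosurface: the cubes to render."""
    return np.argwhere(field > iso)

if __name__ == "__main__":
    cloud = np.random.rand(500, 3)                     # stand-in for depth-sensor data
    voxels = field_to_voxels(metaball_field(cloud))
    print(f"{len(voxels)} cube positions to render this frame")
```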
VOXELS
Have even more fun with this one! VOXELS is an interactive installation that uses your presence and gesture to create dynamic forms in real time. The piece rotates through varying aesthetics by scripting the lighting and virtual camera settings. The project uses a depth sensor that first creates a point cloud from within its view. The points are then translated into what is called an isosurface, specifically metaballs, a fluid-like form of 3D geometry rendered as volumetric 3D pixels (voxels), resulting in the visualization you are experiencing.
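The rotating aesthetics can be pictured as a scripted cycle of lighting and camera presets. The sketch below blends between hypothetical presets on a timer; the preset values and cycle period are illustrative assumptions, not the piece's real settings.

```python
# A minimal sketch of cycling through lighting and virtual-camera presets.
import math
import time

PRESETS = [
    {"light_intensity": 1.0, "light_color": (1.0, 0.9, 0.8), "cam_fov": 60, "cam_dist": 4.0},
    {"light_intensity": 0.4, "light_color": (0.3, 0.5, 1.0), "cam_fov": 35, "cam_dist": 2.5},
    {"light_intensity": 0.8, "light_color": (1.0, 0.3, 0.4), "cam_fov": 90, "cam_dist": 6.0},
]

def current_settings(t, period=30.0):
    """Blend smoothly between consecutive presets, cycling every `period` seconds."""
    phase = (t / period) % len(PRESETS)
    i = int(phase)
    a, b = PRESETS[i], PRESETS[(i + 1) % len(PRESETS)]
    w = 0.5 - 0.5 * math.cos(math.pi * (phase - i))    # ease in/out between presets

    def blend(x, y):
        return x + (y - x) * w

    return {
        "light_intensity": blend(a["light_intensity"], b["light_intensity"]),
        "light_color": tuple(blend(x, y) for x, y in zip(a["light_color"], b["light_color"])),
        "cam_fov": blend(a["cam_fov"], b["cam_fov"]),
        "cam_dist": blend(a["cam_dist"], b["cam_dist"]),
    }

if __name__ == "__main__":
    # Each render frame would apply something like this to the scene.
    print(current_settings(time.time()))
```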
WISHING WELL
Wishing Well AI is an artificial intelligence artwork that takes a user's voiced wish as input, processed through Amazon's AI platform. Speech recognition, voice synthesis, and tonal analysis are handled by AI, and a color and sound are associated with each wish based on its character. A 3D object representing the wish floats from the touch screen to the larger canvas, where it affects a cloud of previous wishes.
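As a rough illustration of the tonal-analysis step, the sketch below assumes Amazon Comprehend (one service of Amazon's AI platform) to detect the sentiment of a transcribed wish and map it to a color. The specific service, sentiment categories, and color palette are assumptions and may not match what the piece actually uses.

```python
# A minimal sketch of mapping a wish's tone to a color, assuming Amazon
# Comprehend for sentiment analysis. The color mapping is hypothetical.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Hypothetical mapping from detected sentiment to an RGB color for the wish object.
SENTIMENT_COLORS = {
    "POSITIVE": (255, 200, 80),   # warm gold
    "NEGATIVE": (70, 90, 160),    # deep blue
    "NEUTRAL":  (200, 200, 200),  # soft grey
    "MIXED":    (180, 120, 200),  # violet
}

def wish_to_color(wish_text: str) -> tuple:
    """Return an RGB color based on the detected sentiment of the wish."""
    result = comprehend.detect_sentiment(Text=wish_text, LanguageCode="en")
    return SENTIMENT_COLORS[result["Sentiment"]]

# Example: wish_to_color("I wish for a bright and peaceful year")
```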
BLENDR
Blendr invites participants to co-create image-based compositions initiated through spoken search terms. A voiced entry is captured by a microphone and translated to text with a speech-to-text tool; the text is then sent to an open-source web service (API) that searches more than 800 million openly licensed and public domain images. Up to 200 results are downloaded and displayed, allowing the user to view each image individually as it is processed by several scripts. The images are progressively composited into an evolving montage, combined through dynamic, procedural, rule-based processes.
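A simplified version of this pipeline might look like the sketch below. It assumes the SpeechRecognition library for the speech-to-text step and the Openverse API as the image service (the description above only specifies an open-source web service indexing over 800 million images), and a simple progressive alpha blend stands in for the piece's rule-based compositing.

```python
# A minimal sketch of the BLENDR pipeline under stated assumptions:
# SpeechRecognition for transcription, Openverse as the image API, and a
# running alpha blend as a stand-in for the procedural compositing rules.
from io import BytesIO

import requests
import speech_recognition as sr
from PIL import Image

def hear_search_term() -> str:
    """Capture a spoken phrase from the microphone and transcribe it."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def build_montage(term: str, size=(1280, 720), max_images=20) -> Image.Image:
    """Search openly licensed images for `term` and composite them progressively."""
    resp = requests.get("https://api.openverse.org/v1/images/", params={"q": term})
    results = resp.json().get("results", [])[:max_images]

    montage = Image.new("RGB", size, "black")
    for i, item in enumerate(results, start=1):
        img = Image.open(BytesIO(requests.get(item["url"]).content))
        img = img.convert("RGB").resize(size)
        montage = Image.blend(montage, img, alpha=1.0 / (i + 1))  # newer images fade in
    return montage

if __name__ == "__main__":
    build_montage(hear_search_term()).show()
```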
ELEMENTS [IV]
This interactive artwork is driven by the proximity and movement of our bodies in space. A depth sensor detects changes as we pass by or engage, generating a three-dimensional point cloud visualized by a particle system within a live virtual environment. Music is triggered in tandem with the visual response as the particles are generated and animate over time. This work is one of a series of four representing the classical elements of nature.
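One way to picture the coupling between sensor, particles, and music is the sketch below, which assumes successive depth frames arrive as 2D arrays and maps the amount of frame-to-frame change to a particle emission count and a music cue. The thresholds, scaling, and callbacks are hypothetical stand-ins for the installation's own engine hooks.

```python
# A minimal sketch of mapping depth-sensor motion to particle emission and
# music triggering. Thresholds and callbacks are illustrative assumptions.
import numpy as np

CHANGE_THRESHOLD = 0.02    # fraction of pixels that must move to trigger music
PARTICLES_PER_UNIT = 5000  # how strongly motion drives particle emission

def process_frame(prev_depth, depth, spawn_particles, trigger_music):
    """Map frame-to-frame depth change to particle emission and a music cue."""
    moved = np.abs(depth - prev_depth) > 0.05          # pixels whose depth changed (meters)
    activity = moved.mean()                            # 0.0 = still scene, 1.0 = everything moved
    spawn_particles(int(activity * PARTICLES_PER_UNIT))
    if activity > CHANGE_THRESHOLD:
        trigger_music(intensity=float(activity))

# In the render loop, process_frame would be called with each new sensor frame
# and with the engine's own particle and audio callbacks.
```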