Civic Resonance

Category:
Art & Technology

Service:
Research & Design

Year:
2025

We inhabit a world saturated with sensors. Every day, autonomous vehicle fleets generate vast streams of data for navigation and obstacle avoidance. This project proposes a reconfiguration: transforming this machine vision from a mere tool into a transnational “digital landscape” for public engagement and expression.

Civic Resonance envisions a cross-city telematic installation. It establishes a millisecond-level audiovisual feedback loop that intertwines machine perception with human intervention. In City A, LiDAR sensors on autonomous vehicles scan the urban infrastructure in real time; this de-identified data is streamed to City B to construct a spectral digital stage. In City B, a dancer improvises in response to this alien landscape; their gestures are captured as point clouds by a Kinect V2 and instantly relayed back to a club space in City A, distorting and reshaping its audiovisual experience.
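
To make the relay concrete, the sketch below shows how a single de-identified LiDAR frame might be serialized and streamed from the City A node toward the City B stage. The host address, port, framing, and per-datagram point budget are placeholder assumptions for illustration, not the installation's actual transport.

```python
# Minimal sketch of the City A -> City B relay. Each LiDAR frame is an
# (N, 3) float32 array assumed to be de-identified upstream. Host, port,
# and the per-datagram point budget are placeholders for illustration.
import socket
import struct
import numpy as np

CITY_B_HOST = "127.0.0.1"   # stand-in for the City B receiver
PORT = 9000
MAX_POINTS = 512            # keeps one frame under common UDP datagram limits

def downsample(points: np.ndarray, max_points: int) -> np.ndarray:
    """Randomly subsample a frame so it fits in a single datagram."""
    if len(points) <= max_points:
        return points
    idx = np.random.choice(len(points), max_points, replace=False)
    return points[idx]

def send_frame(sock: socket.socket, points: np.ndarray) -> None:
    """Serialize one frame as [uint32 count][float32 xyz ...] and send it.

    A real deployment would chunk or compress full-resolution frames;
    this sketch simply trims them to one datagram.
    """
    points = downsample(points.astype(np.float32), MAX_POINTS)
    payload = struct.pack("<I", len(points)) + points.tobytes()
    sock.sendto(payload, (CITY_B_HOST, PORT))

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Stand-in for a live LiDAR frame.
    frame = np.random.uniform(-50, 50, size=(50_000, 3)).astype(np.float32)
    send_frame(sock, frame)
```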

To prototype the ‘Human Interpreter’ node (simulating the performer in City B), I used the Microsoft Kinect V2 as a volumetric sensor. The aim is not mere video capture but to distill the essence of human motion. Within a custom TouchDesigner environment, raw skeletal tracking data and depth point clouds are ingested in real time.
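
The sketch below illustrates the kind of kinetic-energy extraction this node performs, written as standalone Python rather than the actual TouchDesigner patch; the joint list, frame rate, and smoothing constant are illustrative assumptions.

```python
# Standalone sketch of the kinetic-energy extraction that drives the visuals.
# In the piece itself this logic sits inside a TouchDesigner network; the
# joint names and smoothing constant here are illustrative assumptions.
import numpy as np

JOINTS = ["head", "hand_l", "hand_r", "hip", "foot_l", "foot_r"]

def kinetic_energy(prev: np.ndarray, curr: np.ndarray, dt: float) -> float:
    """Sum of squared joint speeds between two skeleton frames.

    prev, curr: (num_joints, 3) arrays of joint positions in meters.
    dt: time between frames in seconds (the Kinect V2 runs at ~30 fps).
    """
    velocity = (curr - prev) / dt           # per-joint velocity, m/s
    speeds_sq = np.sum(velocity ** 2, axis=1)
    return float(np.sum(speeds_sq))         # unit-mass energy proxy

class EnergyEnvelope:
    """Exponential smoothing so the visuals breathe instead of flicker."""
    def __init__(self, smoothing: float = 0.9):
        self.smoothing = smoothing
        self.value = 0.0

    def update(self, energy: float) -> float:
        self.value = self.smoothing * self.value + (1 - self.smoothing) * energy
        return self.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    env = EnergyEnvelope()
    prev = rng.normal(size=(len(JOINTS), 3))
    for _ in range(5):
        curr = prev + rng.normal(scale=0.02, size=prev.shape)  # fake motion
        print(env.update(kinetic_energy(prev, curr, dt=1 / 30)))
        prev = curr
```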


Algorithms then translate the performer’s kinetic energy into abstract, generative visual forms rather than a literal rendering of the body. These generative visuals act as a dynamic data source: the core mechanism transmits the vibrant colors produced by the performer’s movements and re-maps them onto the autonomous vehicle’s monochrome LiDAR point cloud. This effectively ‘paints’ human affect onto rigid machine geometry, enriching the final visual synthesis.
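
As a rough illustration of this re-mapping, the sketch below ‘paints’ a performer-driven color field onto a monochrome point cloud via a simple planar projection; the projection scheme and the synthetic data are assumptions made for the example, not the installation’s actual pipeline.

```python
# Sketch of the re-mapping step: assigning colors from a performer-driven
# color field to monochrome LiDAR points. The planar XY -> UV projection
# and the synthetic inputs are illustrative assumptions.
import numpy as np

def paint_point_cloud(points: np.ndarray, color_field: np.ndarray) -> np.ndarray:
    """Assign an RGB color to every LiDAR point.

    points:      (N, 3) float array of LiDAR positions.
    color_field: (H, W, 3) float array, the generative visual driven by
                 the dancer's kinetic energy.
    returns:     (N, 6) array of x, y, z, r, g, b.
    """
    h, w, _ = color_field.shape
    # Normalize the ground-plane coordinates of each point into [0, 1).
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    uv = (xy - lo) / np.maximum(hi - lo, 1e-6)
    # Sample the color field at each point's UV position.
    cols = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    rgb = color_field[rows, cols]
    return np.hstack([points, rgb])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.uniform(-20, 20, size=(10_000, 3))   # stand-in LiDAR frame
    field = rng.uniform(0, 1, size=(64, 64, 3))      # stand-in generative visual
    painted = paint_point_cloud(cloud, field)
    print(painted.shape)  # (10000, 6)
```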

There are no passive observers in this project. The physical reality of City A becomes a canvas for the performer in City B, while the distant dancer acts as a “ghost agent” intervening in machine perception. Through this “Human-Machine Co-agency,” the project explores novel forms of interaction across folded space and time. At the same time, it maintains a critical stance: confronting the double-edged nature of ubiquitous sensor networks, it examines urgent questions of identity, surveillance, and privacy amid an inevitable digital expansion.
