Canyons
Water cuts a path through land over millions of years.
What if humans could leave our path imprinted on our built environments, on our own time scale, just like a water droplet?
USER JOURNEY
You enter a gallery.
You move around looking at artworks that interest you.
When you enter the next room, you find yourself back where you started – but the path you took through the room has been carved into the floor.
As you continue to move through the gallery tracing your own footsteps again and again, you dig a canyon into the earth.
In mere minutes, you impact your environment the way water impacts the world over millennia.
Angelo’s Pizza: the floor next to the counter, worn down by decades of people standing to order lunch.
REAL LIFE REFERENCE
Junichiro Tanizaki called it patina – the marks we leave on our world over time.
How might we model and amplify patina parametrically?
GRASSHOPPER
The first iteration created a grid of dots and sank them to the depth of a curve drawn beneath them.
But in the real world, we create a canyon by moving along a surface, and depressing it – not drawing a squiggle through the ground and carving down to that.
Grasshopper definition
Script 1 takes in a number of columns and rows, and a distance between each point, and creates a grid of points.
Script 2 takes those points and a Rhino-drawn curve, and sinks the Z value of any point within width distance of the curve down to depth.
The final component turns these points into a lofted surface rows wide in the u direction.
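The two scripts can be sketched in standalone Python. This is an illustration of the logic only, not the actual Grasshopper components; the function names and the polyline stand-in for the Rhino-drawn curve are ours.

```python
import math

def make_grid(cols, rows, dist):
    """Script 1's logic: a cols x rows grid of (x, y, z) points spaced dist apart."""
    return [(c * dist, r * dist, 0.0) for r in range(rows) for c in range(cols)]

def dist_to_polyline(p, poly):
    """Shortest x-y distance from point p to a polyline given as (x, y) vertices."""
    px, py = p
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(poly, poly[1:]):
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            t = 0.0
        else:
            t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
        best = min(best, math.hypot(px - (x1 + t * dx), py - (y1 + t * dy)))
    return best

def sink_points(points, curve, width, depth):
    """Script 2's logic: sink any point within `width` of the curve down to -depth."""
    return [(x, y, -depth) if dist_to_polyline((x, y), curve) <= width else (x, y, z)
            for x, y, z in points]

grid = make_grid(cols=10, rows=10, dist=1.0)
path = [(0.0, 0.0), (5.0, 5.0), (9.0, 9.0)]   # stand-in for the Rhino-drawn curve
canyon = sink_points(grid, path, width=1.0, depth=2.0)
```

In the real definition the loft then runs across the sunken grid; here the flat-bottomed trench is the simplest version of that canyon.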
Result
The final iteration of the definition is entirely parametric: everything, including the curve’s path through the space, can be adjusted in Grasshopper.
First we create a set of points for your walk through the gallery.
You’ll enter the gallery through the same door each time so the final point must have the same x value as the first point to ensure the curves are contiguous.
The y value of the final point is the length of the entire curve, which needs to be easily modifiable to fit the curve to our gallery model.
Then we create a nurbs curve out of those points.
We need to offset the first curve so that it only starts in the second “room” of the gallery.
And we need to repeat this curve reps times as the participant walks through the gallery again and again.
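The path-generation steps above can be sketched in plain Python. The names (walk_points, repeat_walk) and the random-wander model are illustrative assumptions, not the actual Grasshopper components.

```python
import random

def walk_points(n, length, wander, seed=0):
    """Control points for one pass through the gallery. The final point
    shares the first point's x (you enter through the same door each time,
    so repeated passes join contiguously) and its y equals `length`."""
    rng = random.Random(seed)
    pts = [(rng.uniform(-wander, wander), i * length / (n - 1)) for i in range(n)]
    pts[-1] = (pts[0][0], length)   # enforce the contiguity constraint
    return pts

def repeat_walk(pts, reps, length):
    """Stack `reps` copies of the pass end-to-end along y, one per walk."""
    out = []
    for r in range(reps):
        out.extend((x, y + r * length) for x, y in pts)
    return out

one_pass = walk_points(n=6, length=20.0, wander=3.0)
full_walk = repeat_walk(one_pass, reps=3, length=20.0)
```

A NURBS curve through `full_walk` would then play the role of the participant's repeated route.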
Then we create the canyon.
We generate a grid of points that has cols columns and rows rows each of which are dist distance from each other.
We feed this grid of points into the next component.
We take in the points grid, as well as the curve we made, and we sink the points that are within width x-y distance of the curve down to depth.
Because the curve is offset to start only in the second gallery, the floor of the first gallery must stay at depth 0, so we modify each point’s Z value according to its Y position.
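The Y-based depth adjustment can be sketched as a simple attenuation pass. The linear ramp-in and the parameter names (y_start, y_fade) are our assumptions; the real definition may shape the transition differently.

```python
def fade_depth(points, y_start, y_fade):
    """Keep the first gallery flat: points with y <= y_start stay at z = 0,
    then depth ramps in linearly over y_fade so the canyon begins smoothly."""
    out = []
    for x, y, z in points:
        if y <= y_start:
            factor = 0.0
        elif y >= y_start + y_fade:
            factor = 1.0
        else:
            factor = (y - y_start) / y_fade
        out.append((x, y, z * factor))
    return out

faded = fade_depth([(0, 0, -2.0), (0, 5, -2.0), (0, 10, -2.0)],
                   y_start=4.0, y_fade=4.0)
```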
We want the mesh to sample the lofted surface more densely so that it’s smoother: for every point in the source canyon, we create four vertices in the mesh.
Then we create a mesh out of that surface and immediately deconstruct it so that we can change the vertex colors.
The final script looks at each vertex and exports its height into a list, so that we can color it based on its depth, achieving the same striations that you see in a real canyon.
The gradient applies color based on a range between 0 depth and the maximum depth, computed in our initial canyon script.
Finally, we output the color list and recreate the mesh with it.
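The depth-to-color mapping can be sketched as a linear gradient. The two endpoint colors and the function name are our illustrative choices; the real definition uses Grasshopper's gradient component over the range described above.

```python
def depth_colors(vertices, max_depth):
    """Map each vertex height to an RGB color between a sandstone tone
    (surface, z = 0) and a dark red-brown (z = -max_depth), mimicking
    the striations of a real canyon."""
    shallow, deep = (230, 200, 160), (90, 40, 30)
    colors = []
    for _, _, z in vertices:
        t = min(max(-z / max_depth, 0.0), 1.0)   # 0 at surface, 1 at max depth
        colors.append(tuple(round(s + (d - s) * t) for s, d in zip(shallow, deep)))
    return colors

striations = depth_colors([(0, 0, 0.0), (0, 0, -1.0), (0, 0, -2.0)], max_depth=2.0)
```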
RESULT
Rendered gif
Walking through the space
INTERACTIVITY
Back to the early question of:
‘What if humans could leave our path imprinted on our built environments, on our own time scale, just like a water droplet?’
When these imprints are made in real time, walking through and interacting with the artwork becomes more immersive: you feel and see the indentations happen as you move.
To do this, we use Fologram, a plug-in for the Rhino/Grasshopper environment that connects the virtual models we create to motion tracked through a mobile device. We begin with a script given to us by our instructors that records real-time motion planes and extracts real-time motion points as a curve.
We project the points from the motion curve onto the ground plane to create attractor points, using simple Grasshopper commands.
Before the motion data can interact with the floor, we also divide the floor into a point grid that can deform according to those attractor points.
Then, using those attractor points, we write a C# script that takes as inputs:
The attractor points (the way people move),
The surface grid points,
A numeric radius around each attractor point within which surface grid points are affected,
A numeric depth by which affected surface points are pushed down.
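The C# script's logic can be sketched in Python as follows. The linear falloff toward the edge of the radius is our assumption; the actual script may use a different falloff curve.

```python
import math

def deform(grid_points, attractors, radius, depth):
    """Push every surface grid point within `radius` of any attractor point
    down by up to `depth`, fading linearly with distance from the attractor."""
    out = []
    for x, y, z in grid_points:
        push = 0.0
        for ax, ay in attractors:
            d = math.hypot(x - ax, y - ay)
            if d < radius:
                push = max(push, depth * (1.0 - d / radius))  # linear falloff
        out.append((x, y, z - push))
    return out

floor = deform([(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)],
               attractors=[(0.0, 0.0)], radius=2.0, depth=1.0)
```

Re-lofting or re-meshing the deformed grid each frame is what makes the indentation appear in real time as the phone moves.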
Top-down view of a real person moving through the digital space with their real phone, deforming the digital floor.
VIRTUAL REALITY
We cannot actually build this space in real life.
But we want to know what it would feel like to move through it.
So we put it in VR using Unity and Quest.
FINAL THOUGHTS
Canyons was a series of theoretical prototypes for expressing space deformation at super-speed.
Our next step is to find a gallery space willing to exhibit the real-world version of this concept.
CREATIVE TEAM
Wyatt Roy, Chiun Lee, Yongrui Jin