Playing with Texture Projection in Three.js
Published: January 7, 2020
In this tutorial, you'll learn how to project a texture onto an object in Three.js with some interesting examples.
Texture projection is a way of mapping a texture onto a 3D object and making it look like it was projected from a single point. Think of it as the Batman symbol projected onto the clouds, with the clouds being our object and the Batman symbol being our texture. It’s used in games, visual effects, and many other parts of the creative world. Here is a talk by Yi-Wen Lin which contains some other cool examples.
Looks neat, huh? Let’s achieve this in Three.js!
Minimum viable example
First, let’s set up our scene. The setup code is the same in every Three.js project, so I won’t go into details here. You can go to the official guide and get familiar with it if you haven’t done that before. I personally use some utils from threejs-modern-app, so I don’t need to worry about the boilerplate code.
So, first we need a camera from which to project the texture.
const camera = new THREE.PerspectiveCamera(45, 1, 0.01, 3)
camera.position.set(-1, 1.2, 1.5)
camera.lookAt(0, 0, 0)
Then, we need our object onto which we will project the texture. To do projection mapping, we will write some custom shader code, so let’s create a new ShaderMaterial:
// create the mesh with the projected material
const geometry = new THREE.BoxGeometry(1, 1, 1)
const material = new THREE.ShaderMaterial({
  uniforms: {
    texture: { value: assets.get(textureKey) },
  },
  vertexShader: '',
  fragmentShader: '',
})
const box = new THREE.Mesh(geometry, material)
However, since we may need to use our projected material multiple times, we can put it in a component by itself and use it like this:
class ProjectedMaterial extends THREE.ShaderMaterial {
  constructor({ camera, texture }) {
    // ...
  }
}
const material = new ProjectedMaterial({
  camera,
  texture: assets.get(textureKey),
})
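To give a rough idea of what such a component might set up internally, here is a sketch of a possible constructor. The uniform names (viewMatrixCamera, projectionMatrixCamera, savedModelMatrix, projPosition) match the ones used in the shader snippets below, but this is an illustration, not the final implementation:

class ProjectedMaterial extends THREE.ShaderMaterial {
  constructor({ camera, texture }) {
    // make sure the camera matrices are up to date
    camera.updateProjectionMatrix()
    camera.updateMatrixWorld()

    super({
      uniforms: {
        texture: { value: texture },
        viewMatrixCamera: { value: camera.matrixWorldInverse.clone() },
        projectionMatrixCamera: { value: camera.projectionMatrix.clone() },
        savedModelMatrix: { value: new THREE.Matrix4() },
        projPosition: { value: camera.position.clone() },
        color: { value: new THREE.Color(0xcccccc) },
      },
      vertexShader, // the shader strings we write in the next section
      fragmentShader,
    })
  }
}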
Let’s write some shaders!
In the shader code we’ll basically sample the texture as if it was projected from the camera. Unfortunately, this involves some matrix multiplication. But don’t be scared! I’ll explain it in a simple, easy to understand way. If you want to dive deeper into the subject, here is a really good article about it.
In the vertex shader, we have to treat each vertex as if it’s being viewed from the projection camera, so we just use the projection camera’s projectionMatrix and viewMatrix instead of the ones from the scene camera. We pass this transformed position into the fragment shader using a varying variable.
vTexCoords = projectionMatrixCamera * viewMatrixCamera * modelMatrix * vec4(position, 1.0);
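Putting this together, a minimal vertex shader could look like the sketch below. Here projectionMatrixCamera and viewMatrixCamera are uniforms holding the projection camera’s matrices, while modelMatrix, modelViewMatrix, projectionMatrix and position are provided by Three.js:

const vertexShader = `
  uniform mat4 viewMatrixCamera;
  uniform mat4 projectionMatrixCamera;

  varying vec4 vTexCoords;

  void main() {
    // position of the vertex as seen from the projection camera
    vTexCoords = projectionMatrixCamera * viewMatrixCamera * modelMatrix * vec4(position, 1.0);

    // regular projection from the scene camera
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`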
In the fragment shader, we have to transform the position from clip space into normalized device coordinates. We do this by dividing the vector by its .w component (the perspective divide). The GLSL built-in function texture2DProj (or the newer textureProj) also does this internally.
In the same line, we also transform from the normalized device coordinate range, which is [-1, 1], to the uv lookup range, which is [0, 1]. We later use this variable to sample from the texture.
vec2 uv = (vTexCoords.xy / vTexCoords.w) * 0.5 + 0.5;
And here’s the result:
Notice that we wrote some code to project the texture only onto the faces of the cube that are facing the camera. By default, every face gets the texture projected onto it, so we check whether the face is actually facing the projection camera by looking at the dot product of its normal and the projector direction. This technique is really common in lighting; here is an article if you want to read more about this topic.
// this makes sure we don't render the texture also on the back of the object
vec3 projectorDirection = normalize(projPosition - vWorldPosition.xyz);
float dotProduct = dot(vNormal, projectorDirection);
if (dotProduct < 0.0) {
  outColor = vec4(color, 1.0);
}
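For reference, here is a sketch of how these fragment shader pieces could fit together. The varyings vNormal and vWorldPosition would also have to be computed in the vertex shader, and projPosition is assumed to be the projection camera’s world position passed in as a uniform:

const fragmentShader = `
  uniform vec3 color;
  uniform sampler2D texture;
  uniform vec3 projPosition;

  varying vec3 vNormal;
  varying vec4 vWorldPosition;
  varying vec4 vTexCoords;

  void main() {
    // perspective divide and remap from [-1, 1] to the [0, 1] uv range
    vec2 uv = (vTexCoords.xy / vTexCoords.w) * 0.5 + 0.5;
    vec4 outColor = texture2D(texture, uv);

    // don't render the texture on faces pointing away from the projector
    vec3 projectorDirection = normalize(projPosition - vWorldPosition.xyz);
    float dotProduct = dot(vNormal, projectorDirection);
    if (dotProduct < 0.0) {
      outColor = vec4(color, 1.0);
    }

    gl_FragColor = outColor;
  }
`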
First part down, we now want to make it look like the texture is actually stuck onto the object.
We do this by saving the object’s position at the beginning and using that, instead of the updated object position, in the projection calculations, so that if the object moves afterwards, the projection doesn’t change.
We can store the object’s initial model matrix in the uniform savedModelMatrix, so our calculations become:
vTexCoords = projectionMatrixCamera * viewMatrixCamera * savedModelMatrix * vec4(position, 1.0);
We can expose a project() function which sets the savedModelMatrix with the object’s current modelMatrix:
export function project(mesh) {
  // make sure the matrix is updated
  mesh.updateMatrixWorld()

  // we save the object model matrix so it's projected relative
  // to that position, like a snapshot
  mesh.material.uniforms.savedModelMatrix.value.copy(mesh.matrixWorld)
}
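As a quick usage sketch (assuming ProjectedMaterial sets up the savedModelMatrix uniform internally, as outlined earlier), projecting onto the box could look like this:

const material = new ProjectedMaterial({
  camera,
  texture: assets.get(textureKey),
})
const box = new THREE.Mesh(geometry, material)
scene.add(box)

// position the box, then take the projection "snapshot"
box.position.set(0.4, 0, 0)
project(box)

// moving it afterwards doesn't change the projection anymore
box.position.x -= 0.8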
And here is our final result:
That’s it! Now the cube looks like it has a texture slapped onto it! This can scale up and work with any kind of 3D model, so let’s make a more interesting example.
More appealing example
For the previous example we created a new camera to project from, but what if we used the same camera that renders the scene to do the projection? That way we would see exactly the original 2D image, because the point of projection coincides with the view point.
Also, let’s try projecting onto multiple objects:
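A rough sketch of that setup could look like the following, where webgl.camera stands for the camera that renders the scene (the variable name is just for illustration):

const elements = []
for (let i = 0; i < 100; i++) {
  const material = new ProjectedMaterial({
    camera: webgl.camera, // the scene camera itself
    texture: assets.get(textureKey),
  })
  const element = new THREE.Mesh(geometry, material)

  // scatter the elements a bit in front of the camera
  element.position.set(
    (Math.random() - 0.5) * 2,
    (Math.random() - 0.5) * 2,
    (Math.random() - 0.5) * 2
  )
  scene.add(element)

  project(element)
  elements.push(element)
}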
That looks interesting! However, as you can see from the example, the image looks kind of warped; this is because the texture is stretched to fill the camera frustum. But what if we wanted to retain the image’s original proportions and dimensions?
Also, we didn’t take lighting into consideration at all. There needs to be some code in the fragment shader which determines how the surface is lit by the lights we put in the scene.
Furthermore, what if we would like to project onto a much bigger number of objects? The performance would quickly drop. That’s where GPU instancing comes to our aid! Instancing moves the heavy work onto the GPU, and Three.js recently implemented an easy-to-use API for it. The only requirement is that all of the instanced objects must have the same geometry and material. Luckily, this is our case! All of the objects have the same geometry and material; the only difference is the savedModelMatrix, since each object had a different position when it was projected on. But we can pass that as a uniform to every instance, like in this Three.js example.
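To give an idea of the instancing API, here is a simplified sketch using THREE.InstancedMesh. Note that a custom vertex shader also has to take the per-instance instanceMatrix attribute into account, and passing a different savedModelMatrix per instance requires an instanced attribute; both are omitted here:

const count = 1000
const instancedMesh = new THREE.InstancedMesh(geometry, material, count)

// position each instance through its instance matrix
const dummy = new THREE.Object3D()
for (let i = 0; i < count; i++) {
  dummy.position.set(
    (Math.random() - 0.5) * 4,
    (Math.random() - 0.5) * 4,
    (Math.random() - 0.5) * 4
  )
  dummy.updateMatrix()
  instancedMesh.setMatrixAt(i, dummy.matrix)
}
instancedMesh.instanceMatrix.needsUpdate = true
scene.add(instancedMesh)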
Things start to get complicated, but don’t worry! I already coded this stuff and put it in a library, so it’s easier to use and you don’t have to rewrite the same things each time! It’s called three-projected-material, go check it out if you’re interested in how I overcame the remaining challenges.
We’re gonna use the library from this point on.
Useful example
Now that we can project onto and animate a lot of objects, let’s try making something actually useful out of it.
For example, let’s try integrating this into a slideshow, where the images are projected onto a ton of 3D objects, and then the objects are animated in an interesting way.
For the first example, the inspiration comes from Refik Anadol. He does some pretty rad stuff. However, we can’t do full-fledged simulations with velocities and forces like he does; we need control over the objects’ movement, and we need each object to arrive at the right place at the right time.
We achieve this by putting the object on some trails: we define a path the object has to follow, and we animate the object on that path. Here is a Stack Overflow answer that explains how to do it.
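A minimal sketch of the idea: build a smooth curve through the path points and sample it with a normalized time value that goes from 0 to 1 over the animation (the helper name moveAlongPath is just for illustration):

// build a smooth curve through the points that define the trail
const curve = new THREE.CatmullRomCurve3(points)

function moveAlongPath(mesh, t) {
  // t goes from 0 to 1 over the duration of the animation
  mesh.position.copy(curve.getPointAt(t))
}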
Tip: you can access this debug mode by putting ?debug at the end of the URL of each demo.
To do the projection, we:
- Move the elements to the middle point
- Do the texture projection by calling project()
- Put the elements back to the start
This happens synchronously, so the user won’t see anything.
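In code, a rough sketch of those three steps could look like this (the middlePoints and startPoints arrays are placeholder names for illustration):

elements.forEach((element, i) => {
  // 1. move the element to the middle point of its path
  element.position.copy(middlePoints[i])
  element.updateMatrixWorld()

  // 2. take the projection snapshot
  project(element)

  // 3. put the element back at the start of its path
  element.position.copy(startPoints[i])
})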
Now we have the freedom to model these paths any way we want!
But first, we have to make sure that at the middle point, the elements will cover the image’s area properly. To do this I used the poisson-disk sampling algorithm, which distributes the points more evenly on a surface rather than placing them randomly.
this.points = poissonSampling([this.width, this.height], 7.73, 9.66) // innerradius and outerradius
// here is what this.points looks like,
// the z component is 0 for every one of them
// [
// [
// 2.4135735314978937, --> x
// 0.18438944023363374 --> y
// ],
// [
// 2.4783704056100464,
// 0.24572635574719284
// ],
// ...
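As a small illustration (not the exact code from the demo), the sampled 2D points could then be centered and turned into 3D positions for the elements:

// center the sampled points around the origin and make them 3D
const positions = this.points.map(([x, y]) => {
  return new THREE.Vector3(x - this.width / 2, y - this.height / 2, 0)
})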
Now let’s take a look at how the paths are generated in the first demo. In this demo, there is heavy use of perlin noise (or rather its open source counterpart, open simplex noise). Notice also the mapRange() function (map() in Processing), which basically maps a number from one interval to another. Another library that does this is d3-scale, with its d3.scaleLinear(). Some easing functions are also used.
const segments = 51 // must be odd so we have the middle frame
const halfIndex = (segments - 1) / 2
for (let i = 0; i < segments; i++) {
  const offsetX = mapRange(i, 0, segments - 1, startX, endX)
  const noiseAmount = mapRangeTriple(i, 0, halfIndex, segments - 1, 1, 0, 1)
  const frequency = 0.25
  const noiseAmplitude = 0.6
  const noiseY = noise(offsetX * frequency) * noiseAmplitude * eases.quartOut(noiseAmount)
  const scaleY = mapRange(eases.quartIn(1 - noiseAmount), 0, 1, 0.2, 1)
  const offsetZ = mapRangeTriple(i, 0, halfIndex, segments - 1, startZ, 0, endZ)

  // offsetX goes from left to right
  // scaleY shrinks the y before and after the center
  // noiseY is some perlin noise on the y axis
  // offsetZ makes them enter from behind a little bit
  points.push(new THREE.Vector3(x + offsetX, y * scaleY + noiseY, z + offsetZ))
}
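For reference, mapRange() roughly does the following (a sketch, not the actual implementation from canvas-sketch-util):

// map a value from the range [inMin, inMax] to the range [outMin, outMax]
function mapRange(value, inMin, inMax, outMin, outMax) {
  return ((value - inMin) / (inMax - inMin)) * (outMax - outMin) + outMin
}

mapRange(5, 0, 10, 0, 100) // -> 50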
Another thing we can work on is the delay with which each element arrives. We also use Perlin noise here, which makes it look like they arrive in “clusters”.
const frequency = 0.5
const delay = (noise(x * frequency, y * frequency) * 0.5 + 0.5) * delayFactor
We also use perlin noise in the waving effect, which displaces each point of the curve to give it a “flag waving” motion.
const { frequency, speed, amplitude } = this.webgl.controls.turbulence
const z = noise(x * frequency - time * speed, y * frequency) * amplitude
point.z = targetPoint.z + z
For the mouse interaction, instead, we check if a point of the path is closer to the mouse than a certain radius; if so, we calculate a vector which goes from the mouse point to the path point. We then move the path point a little bit along that vector’s direction. We use the lerp() function for this, which returns the interpolated value in the specified range at a specific percentage; for example, 0.2 means at 20%.
// displace the curve points
if (point.distanceTo(this.mousePoint) < displacement) {
  const direction = point.clone().sub(this.mousePoint)
  const displacementAmount = displacement - direction.length()
  direction.setLength(displacementAmount)
  direction.add(point)
  point.lerp(direction, 0.2) // magic number
}

// and move them back to their original position
if (point.distanceTo(targetPoint) > 0.01) {
  point.lerp(targetPoint, 0.27) // magic number
}
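As a quick illustration of how Vector3’s lerp() behaves:

// a.lerp(b, 0.2) moves a 20% of the way towards b, modifying a in place
const a = new THREE.Vector3(0, 0, 0)
const b = new THREE.Vector3(10, 0, 0)
a.lerp(b, 0.2) // a is now (2, 0, 0)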
The remaining code handles the slideshow-style animation; go check out the source code if you’re interested!
In the other two demos I used some different functions to shape the paths the elements move on, but overall the code is pretty similar.
Final words
I hope this article was easy to understand and simple enough to give you some insight into texture projection techniques. Make sure to check out the code on GitHub and download it! I made sure to write the code in an easy to understand manner with plenty of comments.
Let me know if something is still unclear and feel free to reach out to me on Twitter @marco_fugaro!
Hope this was fun to read and that you learned something along the way! Cheers!
References
- Images from Unsplash
- Leaf model from Poly
- Three.js
- poisson-disk-sampling algorithm
- canvas-sketch-util for the util functions
- threejs-modern-app for the boilerplate
- Icons made by Smashicons from www.flaticon.com
Playing with Texture Projection in Three.js was written by Marco Fugaro and published on Codrops.