Creating a Typography Motion Trail Effect with Three.js
Published: July 21, 2021
Learn how to use WebGL framebuffers via Three.js to create an interactive motion trail effect for text.
Framebuffers are a key feature in WebGL when it comes to creating advanced graphical effects such as depth-of-field, bloom, film grain or various types of anti-aliasing and have already been covered in-depth here on Codrops. They allow us to “post-process” our scenes, applying different effects on them once rendered. But how exactly do they work?
By default, WebGL (and also Three.js and all other libraries built on top of it) render to the default framebuffer, which is the device screen. If you have used Three.js or any other WebGL framework before, you know that you create your mesh with the correct geometry and material, render it, and voilà, it’s visible on your screen.
However, we as developers can create new framebuffers besides the default one and explicitly instruct WebGL to render to them. By doing so, we render our scenes to image buffers in the video card’s memory instead of the device screen. Afterwards, we can treat these image buffers like regular textures and apply filters and effects before eventually rendering them to the device screen.
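As a rough sketch of that flow (using the same Three.js calls we will rely on later in this article, with placeholder names such as renderTarget, postScene and postMaterial that are assumptions for illustration only), it could look like this:
// Render the scene into an offscreen framebuffer instead of the screen
const renderTarget = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
renderer.setRenderTarget(renderTarget)
renderer.render(scene, camera)
// Switch back to the default framebuffer (the device screen)
renderer.setRenderTarget(null)
// The offscreen result is now a regular texture we can feed to a material
// on a fullscreen plane and post-process before drawing it to the screen
postMaterial.map = renderTarget.texture
renderer.render(postScene, camera)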
Here is a video breaking down the post-processing and effects in Metal Gear Solid 5: Phantom Pain that really brings home the idea. Notice how it starts by footage from the actual game rendered to the default framebuffer (device screen) and then breaks down how each framebuffer looks like. All of these framebuffers are composited together on each frame and the result is the final picture you see when playing the game:
So with the theory out of the way, let’s create a cool typography motion trail effect by rendering to a framebuffer!
Our skeleton app
Let’s render some 2D text to the default framebuffer, i.e. the device screen, using Three.js. Here is our boilerplate:
const LABEL_TEXT = 'ABC'
const clock = new THREE.Clock()
const scene = new THREE.Scene()
// Create a threejs renderer:
// 1. Size it correctly
// 2. Set default background color
// 3. Append it to the page
const renderer = new THREE.WebGLRenderer()
renderer.setClearColor(0x222222)
renderer.setClearAlpha(0)
renderer.setSize(innerWidth, innerHeight)
renderer.setPixelRatio(devicePixelRatio || 1)
document.body.appendChild(renderer.domElement)
// Create an orthographic camera that covers the entire screen
// 1. Position it correctly in the positive Z dimension
// 2. Orient it towards the scene center
const orthoCamera = new THREE.OrthographicCamera(
  -innerWidth / 2,
  innerWidth / 2,
  innerHeight / 2,
  -innerHeight / 2,
  0.1,
  10,
)
orthoCamera.position.set(0, 0, 1)
orthoCamera.lookAt(new THREE.Vector3(0, 0, 0))
// Create a plane geometry that spawns either the entire
// viewport height or width depending on which one is bigger
const labelMeshSize = innerWidth > innerHeight ? innerHeight : innerWidth
const labelGeometry = new THREE.PlaneBufferGeometry(
  labelMeshSize,
  labelMeshSize
)
// Programmatically create a texture that will hold the text
let labelTextureCanvas
{
  // Canvas and corresponding context2d to be used for
  // drawing the text
  labelTextureCanvas = document.createElement('canvas')
  const labelTextureCtx = labelTextureCanvas.getContext('2d')
  // Dynamic texture size based on the device capabilities
  const textureSize = Math.min(renderer.capabilities.maxTextureSize, 2048)
  const relativeFontSize = 20
  // Size our text canvas
  labelTextureCanvas.width = textureSize
  labelTextureCanvas.height = textureSize
  labelTextureCtx.textAlign = 'center'
  labelTextureCtx.textBaseline = 'middle'
  // Dynamic font size based on the texture size
  // (based on the device capabilities)
  labelTextureCtx.font = `${relativeFontSize}px Helvetica`
  const textWidth = labelTextureCtx.measureText(LABEL_TEXT).width
  const widthDelta = labelTextureCanvas.width / textWidth
  const fontSize = relativeFontSize * widthDelta
  labelTextureCtx.font = `${fontSize}px Helvetica`
  labelTextureCtx.fillStyle = 'white'
  labelTextureCtx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
}
// Create a material with our programmatically created text
// texture as input
const labelMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.CanvasTexture(labelTextureCanvas),
  transparent: true,
})
// Create a plane mesh, add it to the scene
const labelMesh = new THREE.Mesh(labelGeometry, labelMaterial)
scene.add(labelMesh)
// Start our animation render loop
renderer.setAnimationLoop(onAnimLoop)
function onAnimLoop() {
  // On each new frame, render the scene to the default framebuffer
  // (device screen)
  renderer.render(scene, orthoCamera)
}
This code simply initialises a Three.js scene, adds a 2D plane with a text texture to it and renders it to the default framebuffer (device screen). If we execute it with Three.js included in our project, we will get this:
See the Pen Step 1: Render to default framebuffer by Georgi Nikoloff (@gbnikolov) on CodePen.
Again, since we don’t explicitly specify otherwise, we are rendering to the default framebuffer (device screen).
Now that we have managed to render our scene to the device screen, let’s add a framebuffer (THREE.WebGLRenderTarget) and render our scene to a texture in the video card’s memory.
Rendering to a framebuffer
Let’s start by creating a new framebuffer when we initialise our app:
const clock = new THREE.Clock()
const scene = new THREE.Scene()
// Create a new framebuffer we will use to render to
// the video card memory
const renderBufferA = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)
// ... rest of application
Now that we have created it, we must explicitly instruct Three.js to render to it instead of the default framebuffer, i.e. the device screen. We will do this in our program’s animation loop:
function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
}
And here is our result:
See the Pen Step 2: Render to a framebuffer by Georgi Nikoloff (@gbnikolov) on CodePen.
As you can see, we are getting an empty screen, yet our program contains no errors – so what happened? Well, we are no longer rendering to the device screen, but to another framebuffer! Our scene is being rendered to a texture in the video card’s memory, which is why we see the empty screen.
In order to display this generated texture containing our scene back to the default framebuffer (device screen), we need to create another 2D plane that will cover the entire screen of our app and pass the texture as material input to it.
First we will create a fullscreen 2D plane that will span the entire device screen:
// ... rest of initialisation step
// Create a second scene that will hold our fullscreen plane
const postFXScene = new THREE.Scene()
// Create a plane geometry that covers the entire screen
const postFXGeometry = new THREE.PlaneBufferGeometry(innerWidth, innerHeight)
// Create a plane material that expects a sampler texture input
// We will pass our generated framebuffer texture to it
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
  },
  // vertex shader will be in charge of positioning our plane correctly
  vertexShader: `
    varying vec2 v_uv;
    void main () {
      // Set the correct position of each plane vertex
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      // Pass in the correct UVs to the fragment shader
      v_uv = uv;
    }
  `,
  fragmentShader: `
    // Declare our texture input as a "sampler" variable
    uniform sampler2D sampler;
    // Consume the correct UVs from the vertex shader to use
    // when displaying the generated texture
    varying vec2 v_uv;
    void main () {
      // Sample the correct color from the generated texture
      vec4 inputColor = texture2D(sampler, v_uv);
      // Set the correct color of each pixel that makes up the plane
      gl_FragColor = inputColor;
    }
  `
})
const postFXMesh = new THREE.Mesh(postFXGeometry, postFXMaterial)
postFXScene.add(postFXMesh)
// ... animation loop code here, same as before
As you can see, we are creating a new scene that will hold our fullscreen plane. After creating it, we need to augment our animation loop to render the generated texture from the previous step to the fullscreen plane on our screen:
function onAnimLoop() {
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // On each new frame, render the scene to renderBufferA
  renderer.render(scene, orthoCamera)
  // 👇
  // Set the device screen as the framebuffer to render to
  // In WebGL, framebuffer "null" corresponds to the default
  // framebuffer!
  renderer.setRenderTarget(null)
  // 👇
  // Assign the generated texture to the sampler variable used
  // in the postFXMesh that covers the device screen
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture
  // 👇
  // Render the postFX mesh to the default framebuffer
  renderer.render(postFXScene, orthoCamera)
}
After including these snippets, we can see our scene once again rendered on the screen:
See the Pen Step 3: Display the generated framebuffer on the device screen by Georgi Nikoloff (@gbnikolov) on CodePen.
Let’s recap the necessary steps needed to produce this image on our screen on each render loop:
- Create a renderBufferA framebuffer that will allow us to render to a separate texture in the user’s device video memory
- Create our “ABC” plane mesh
- Render the “ABC” plane mesh to renderBufferA instead of the device screen
- Create a separate fullscreen plane mesh that expects a texture as an input to its material
- Render the fullscreen plane mesh back to the default framebuffer (device screen) using the generated texture created by rendering the “ABC” mesh to renderBufferA
Achieving the persistence effect by using two framebuffers
We don’t have much use for framebuffers if we simply display them as they are on the device screen, as we do right now. Now that we have our setup ready, let’s actually do some cool post-processing.
First, we want to create yet another framebuffer – renderBufferB – and make sure both it and renderBufferA are let variables rather than consts. That’s because we will swap them at the end of each render so we can achieve framebuffer ping-ponging.
“Ping-ponging” in WebGL is a technique that alternates the use of a framebuffer as either input or output. It is a neat trick that allows for general-purpose GPU computations and is used in effects such as Gaussian blur, where in order to blur our scene we need to (see the sketch after this list):
- Render it to framebuffer A using a 2D plane and apply horizontal blur via the fragment shader
- Render the horizontally blurred image from step 1 to framebuffer B and apply vertical blur via the fragment shader
- Swap framebuffer A and framebuffer B
- Keep repeating steps 1 to 3, incrementally applying blur, until the desired Gaussian blur radius is achieved
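In code, such a ping-pong blur loop could look roughly like this (a sketch only; blurMaterial, blurScene, blurCamera and blurIterations are hypothetical placeholders, not part of this article’s demo):
let readBuffer = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
let writeBuffer = new THREE.WebGLRenderTarget(innerWidth, innerHeight)
for (let i = 0; i < blurIterations; i++) {
  // Use the previous pass result as input to the blur shader
  blurMaterial.uniforms.sampler.value = readBuffer.texture
  // Render the next blur pass into the other framebuffer
  renderer.setRenderTarget(writeBuffer)
  renderer.render(blurScene, blurCamera)
  // Swap the two framebuffers so this pass's output
  // becomes the next pass's input
  const temp = readBuffer
  readBuffer = writeBuffer
  writeBuffer = temp
}
// Finally, switch back to the default framebuffer to display the result
renderer.setRenderTarget(null)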
Here is a small chart illustrating the steps needed to achieve ping-pong:
So with that in mind, we will render the contents of renderBufferA into renderBufferB using the postFXMesh we created and apply some special effect via the fragment shader.
Let’s kick things off by creating our renderBufferB:
let renderBufferA = new THREE.WebGLRenderTarget(
// ...
)
// Create a second framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
  innerWidth * devicePixelRatio,
  innerHeight * devicePixelRatio
)
Next up, let’s augment our animation loop to actually do the ping-pong technique:
function onAnimLoop() {
  // 👇
  // Do not clear the contents of the canvas on each render
  // In order to achieve our ping-pong effect, we must draw
  // the new frame on top of the previous one!
  renderer.autoClearColor = false
  // 👇
  // Explicitly set renderBufferA as the framebuffer to render to
  renderer.setRenderTarget(renderBufferA)
  // 👇
  // Render the postFXScene to renderBufferA.
  // This will contain our ping-pong accumulated texture
  renderer.render(postFXScene, orthoCamera)
  // 👇
  // Render the original scene containing ABC again on top
  renderer.render(scene, orthoCamera)
  // Same as before
  // ...
  // ...
  // 👇
  // Ping-pong our framebuffers by swapping them
  // at the end of each frame render
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}
If we are to render our scene again with these updated snippets, we will see no visual difference, even though we do in fact alternate between the two framebuffers to render it. That’s because, as it is right now, we do not apply any special effects in the fragment shader of our postFXMesh.
Let’s change our fragment shader like so:
// Sample the correct color from the generated texture
// 👇
// Notice how we now apply a slight 0.005 offset to our UVs when
// looking up the correct texture color
vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
// Set the correct color of each pixel that makes up the plane
// 👇
// We fade out the color from the previous step to 97.5% of
// whatever it was before
gl_FragColor = vec4(inputColor * 0.975);
With these changes in place, here is our updated program:
See the Pen Step 4: Create a second framebuffer and ping-pong between them by Georgi Nikoloff (@gbnikolov) on CodePen.
Let’s break down one frame render of our updated example:
- We render the renderBufferB result to renderBufferA
- We render our “ABC” text to renderBufferA, compositing it on top of the renderBufferB result from step 1 (we do not clear the contents of the canvas between renders, because we set renderer.autoClearColor = false)
- We pass the generated renderBufferA texture to postFXMesh, apply a small vec2(0.005) offset to its UVs when looking up the texture color and fade it out a bit by multiplying the result by 0.975
- We render postFXMesh to the device screen
- We swap renderBufferA with renderBufferB (ping-ponging)
For each new frame render, we will repeat steps 1 to 5. This way, the previous target framebuffer we rendered to is used as an input to the current render, and so on. You can clearly see this effect visually in the last demo – notice how, as the ping-ponging progresses, more and more offset is applied to the UVs and the opacity fades out more and more.
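For reference, here is how the complete animation loop might look with all of the earlier snippets combined in one place (nothing new here, just the pieces we already wrote, stitched together):
function onAnimLoop() {
  // Do not clear the canvas between renders so previous frames persist
  renderer.autoClearColor = false
  // Steps 1 and 2: render the accumulated postFX scene and then the
  // "ABC" scene on top of it, both into renderBufferA
  renderer.setRenderTarget(renderBufferA)
  renderer.render(postFXScene, orthoCamera)
  renderer.render(scene, orthoCamera)
  // Steps 3 and 4: feed the freshly generated texture to the fullscreen
  // plane and draw it to the default framebuffer (device screen)
  renderer.setRenderTarget(null)
  postFXMesh.material.uniforms.sampler.value = renderBufferA.texture
  renderer.render(postFXScene, orthoCamera)
  // Step 5: swap the two framebuffers (ping-pong)
  const temp = renderBufferA
  renderBufferA = renderBufferB
  renderBufferB = temp
}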
Applying simplex noise and mouse interaction
Now that we have implemented and can see the ping-pong technique working correctly, we can get creative and expand on it.
Instead of simply adding an offset in our fragment shader as before:
vec4 inputColor = texture2D(sampler, v_uv + vec2(0.005));
Let’s actually use simplex noise for a more interesting visual result. We will also control the direction using our mouse position.
Here is our updated fragment shader:
// Pass in elapsed time since start of our program
uniform float time;
// Pass in normalised mouse position
// (-1 to 1 horizontally and vertically)
uniform vec2 mousePos;
// <Insert snoise function definition from the link above here>
// Calculate different offsets for x and y by using the UVs
// and different time offsets to the snoise method
float a = snoise(vec3(v_uv * 1.0, time * 0.1)) * 0.0032;
float b = snoise(vec3(v_uv * 1.0, time * 0.1 + 100.0)) * 0.0032;
// Add the snoise offset multiplied by the normalised mouse position
// to the UVs
vec4 inputColor = texture2D(sampler, v_uv + vec2(a, b) + mousePos * 0.005);
We also need to specify mousePos and time as inputs to our postFXMesh material shader:
const postFXMaterial = new THREE.ShaderMaterial({
  uniforms: {
    sampler: { value: null },
    time: { value: 0 },
    mousePos: { value: new THREE.Vector2(0, 0) }
  },
  // ...
})
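We also need to advance the time uniform on every frame. One way to do this (a sketch, using the THREE.Clock we created during initialisation) is to update it at the top of the animation loop:
function onAnimLoop() {
  // Feed the elapsed seconds since the program started to the shader,
  // so the simplex noise keeps evolving over time
  postFXMesh.material.uniforms.time.value = clock.getElapsedTime()
  // ... rest of the render loop, same as before
}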
Finally, let’s make sure we attach a mousemove event listener to our page and pass the updated normalised mouse coordinates from JavaScript to our GLSL fragment shader:
// ... initialisation step
// Attach mousemove event listener
document.addEventListener('mousemove', onMouseMove)
function onMouseMove (e) {
  // Normalise horizontal mouse pos from -1 to 1
  const x = (e.pageX / innerWidth) * 2 - 1
  // Normalise vertical mouse pos from -1 to 1
  const y = (1 - e.pageY / innerHeight) * 2 - 1
  // Pass normalised mouse coordinates to fragment shader
  postFXMesh.material.uniforms.mousePos.value.set(x, y)
}
// ... animation loop
With these changes in place, here is our final result. Make sure to hover around it (you might have to wait a moment for everything to load):
See the Pen Step 5: Perlin Noise and mouse interaction by Georgi Nikoloff (@gbnikolov) on CodePen.
Conclusion
Framebuffers are a powerful tool in WebGL that allows us to greatly enhance our scenes via post-processing and achieve all kinds of cool effects. Some techniques require more than one framebuffer, as we saw, and it is up to us as developers to mix and match them however we need to achieve our desired visuals.
I encourage you to experiment with the provided examples: try to render more elements, alternate the “ABC” text color between each renderBufferA and renderBufferB swap to achieve different color mixing, etc. One possible starting point for that last idea is sketched below.
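Here is a rough sketch of how that could be done, redrawing the label canvas in a new color and flagging the CanvasTexture for re-upload (the pickRandomColor helper is a hypothetical placeholder you would implement yourself):
function recolorLabel (color) {
  // The 2D context keeps its font, textAlign and textBaseline settings
  // from the initialisation step, so we only need to redraw the text
  const ctx = labelTextureCanvas.getContext('2d')
  ctx.clearRect(0, 0, labelTextureCanvas.width, labelTextureCanvas.height)
  ctx.fillStyle = color
  ctx.fillText(LABEL_TEXT, labelTextureCanvas.width / 2, labelTextureCanvas.height / 2)
  // Tell three.js to re-upload the updated canvas to the GPU
  labelMesh.material.map.needsUpdate = true
}
// e.g. call recolorLabel(pickRandomColor()) right after swapping
// renderBufferA and renderBufferB at the end of each frame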
In the first demo, you can see a specific example of how this typography effect could be used and the second demo is a playground for you to try some different settings (just open the controls in the top right corner).
Further reading: