Low-poly Smoke Particles in Three.js

Franky Hung · Published in Geek Culture · Nov 22, 2022
This is a short tutorial on how to make simple low-poly particles without using a full particle system in Three.js!

I like them colorful

Key Takeaways

  • learn to use the built-in Vector3.lerp function to ease object motions
  • learn how to transform screen coordinates into world space coordinates

Live Demo & Code

Most of the code and the idea originally come from this cool codepen by Victor Vergara 🙇🏻‍♂️: https://codepen.io/vcomics/pen/KBMyjE?editors=0010, which I found and bookmarked a while back. I improved it and added stats and GUI controls. My code is hosted at https://github.com/franky-adl/smoke-blobs.

How does this work?

The idea is pretty simple. Create a bunch of spheres of various sizes, then use easing and trigonometric functions to animate a Brownian-like motion. Fog is also added to create an illusion of depth in the scene. No lighting is needed as we’re only using MeshBasicMaterial.

Creating the spheres is the easy part, but making the pseudo-random animation is a bit harder.
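
To give a concrete picture of the setup, here’s a minimal sketch of the sphere creation, using my own placeholder names, counts and colors rather than the exact repo code:

import * as THREE from "three"

// a pool of low-poly spheres of random sizes, all sharing one MeshBasicMaterial
// (so no lights are needed); fog fakes a sense of depth
const scene = new THREE.Scene()
scene.fog = new THREE.Fog(0x111122, 5, 20)

const blobs = []
const material = new THREE.MeshBasicMaterial({ color: 0x88aaff })
for (let i = 0; i < 50; i++) {
  const radius = 0.3 + Math.random() * 0.9
  const geometry = new THREE.SphereGeometry(radius, 6, 5) // few segments = low-poly look
  const blob = new THREE.Mesh(geometry, material)
  blob.position.set(
    (Math.random() - 0.5) * 4,
    (Math.random() - 0.5) * 4,
    (Math.random() - 0.5) * 4
  )
  scene.add(blob)
  blobs.push(blob)
}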

Firstly, we have a “seed” sphere, the first sphere in the array of spheres, which acts as a motion guide for all subsequent spheres. It moves along a regular circular path over time (ignore offset for now):

// the first blob has a regular circular path (x, y positions are calculated using the parametric function of a circle)
first_obj.position.set(
  offset.x + Math.cos(elapsed * 2.0),
  offset.y + Math.sin(elapsed * 2.0),
  offset.z + Math.sin(elapsed * 2.0)
)
The parametric function of a circle
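
For reference, the parametric function of a unit circle is x = cos(t), y = sin(t); here t is elapsed * 2.0, so the seed sphere completes one loop roughly every π (about 3.14) seconds.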

Then we make each subsequent sphere’s movement a function of the sphere before it. So sphere 2 follows sphere 1, sphere 3 follows sphere 2, and so on. There are many ways to do that, but at the very least we have to prevent the spheres from following their peers too regularly, which would result in an overall regular motion. A simple and cost-effective way to make things look more random is to use trigonometric functions to calculate each sphere’s coordinates from its preceding peer (ignore offset for now):

object.position.lerp(
  new THREE.Vector3(
    offset.x + Math.cos(object_left.position.x * 3),
    offset.y + Math.sin(object_left.position.y * 3),
    offset.z + Math.cos(object_left.position.z * 3)
  ), params.lerpFactor
)

First, let me explain what this lerp function is about.

Using the lerp function

I replaced the original code’s GSAP tween with this built-in lerp function, which I think makes the code cleaner while achieving pretty much the same result. The lerp factor (in the closed interval [0, 1]) determines the fraction of the remaining distance a vector moves towards the target vector on each call. Applied at regular time intervals, this gives you what is basically an “ease-out” animation. I use a very small interpolation factor so that the objects move only a small distance each frame; the higher the factor, the faster the spheres move. To know more about lerp, read the documentation here: https://threejs.org/docs/#api/en/math/Vector3.lerp.
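
As a tiny illustration of that ease-out behavior (the numbers here are just an example):

const v = new THREE.Vector3(0, 0, 0)
const target = new THREE.Vector3(10, 0, 0)
v.lerp(target, 0.1) // v is now (1, 0, 0): 10% of the distance covered
v.lerp(target, 0.1) // v is now (1.9, 0, 0): 10% of the *remaining* 9 units
// every call covers a fixed fraction of what is left, so the motion keeps decelerating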

Imagine there were just two spheres: the second sphere would still move in a somewhat circular, and thus regular, way even though we applied the trigonometric functions. That’s because the sin and cos functions produce regular wave patterns.

But if you increase the number of spheres to tens or hundreds, this regularity quickly vanishes: it becomes very hard for the human eye to follow each sphere’s movement, and the larger spheres in front block the view of the spheres behind them.
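
Putting the two snippets together, the per-frame update looks roughly like this (a sketch using my own variable names such as blobs and renderer, not the exact repo code):

const clock = new THREE.Clock()

function animate() {
  requestAnimationFrame(animate)
  const elapsed = clock.getElapsedTime()

  // seed sphere: regular circular path
  blobs[0].position.set(
    offset.x + Math.cos(elapsed * 2.0),
    offset.y + Math.sin(elapsed * 2.0),
    offset.z + Math.sin(elapsed * 2.0)
  )

  // every other sphere chases a target derived from the sphere just before it
  for (let i = 1; i < blobs.length; i++) {
    const object = blobs[i]
    const object_left = blobs[i - 1]
    object.position.lerp(
      new THREE.Vector3(
        offset.x + Math.cos(object_left.position.x * 3),
        offset.y + Math.sin(object_left.position.y * 3),
        offset.z + Math.cos(object_left.position.z * 3)
      ), params.lerpFactor // small, e.g. around 0.02
    )
  }

  renderer.render(scene, camera)
}
animate()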

Transforming screen coordinates into world space coordinates

followMouse turned on

I also added a “followMouse” checkbox such that the smoke follows your mouse when it is turned on.

To make this work, you need a way to transform your mouse/screen coordinates into world space coordinates. This is not a guessing game, as you want the object to sit exactly at your mouse location; it needs a precise calculation.

Luckily, there’s this code snippet that helps us do just that. The solution is provided by the Three.js guru WestLangley 🙇🏻‍♂️ at https://stackoverflow.com/questions/13055214/mouse-canvas-x-y-to-three-js-world-x-y-z.

So how does this magic snippet work?

To understand this, we have to make a little detour back to some WebGL basics. Here is the pipeline of how the underlying WebGL transforms every object from its local space to the final screen space:

Transformation of coordinate systems, image from https://learnopengl.com/Getting-started/Coordinate-Systems

As you can see, there are 3 matrices applied, but let’s just focus on the Projection Matrix, which transforms View Space into Clip Space. View Space is the space as seen from the camera’s point of view, which is what I mean by world space in this article. (Careful not to mistake my world space for the WORLD SPACE in the above diagram. Sorry, but I don’t have another name for it.) WebGL ultimately expects all coordinates within the camera’s visible frustum to be mapped into the NDC (normalized device coordinates), a box spanning [-1.0, 1.0] in each dimension; everything outside the NDC is clipped and not visible. That’s why it’s called Clip Space. Note that the z-axis of the NDC in WebGL is the reverse of the Three.js z-axis: the NDC z-axis points into the screen as the positive direction.

Left: the camera’s visible frustum (img source); Right: the NDC (normalized device coordinates) box (img source)
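
In Three.js terms, the forward direction of this pipeline is what Vector3.project does; a quick illustrative example (the point is arbitrary):

// world space → NDC: project runs the point through the camera's view and
// projection matrices; if the point is visible, every component lands in [-1, 1]
const p = new THREE.Vector3(1, 2, -5)
const ndc = p.clone().project(camera)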

So if we want to transform screen coordinates back into world space (as seen from the camera), we need a function that reverses the space transformation pipeline from step 5 back to step 3, so that the transformed coordinates end up back in View Space.

The function is written in the following gist. This is extracted from my repo as is. I also wrote comments to explain what is going on in each step.
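
In case you’d like it inline, here’s a sketch along the lines of that approach (based on WestLangley’s answer linked above; the function and variable names below are my own and may differ from the repo):

// screen/mouse coordinates → a world space point lying on the plane z = targetZ
function screenToWorldSpace(clientX, clientY, camera, targetZ = 0) {
  const vec = new THREE.Vector3()

  // step 1: map the mouse position to NDC; x and y go to [-1, 1] (note the y flip),
  // and z is just some depth inside the clip space box, here 0.5
  vec.set(
    (clientX / window.innerWidth) * 2 - 1,
    -(clientY / window.innerHeight) * 2 + 1,
    0.5
  )

  // step 2: unproject reverses the projection, giving a world space point
  vec.unproject(camera)

  // step 3: turn that point into a direction ray shooting out from the camera
  vec.sub(camera.position).normalize()

  // step 4: how far along the ray do we travel to reach the plane z = targetZ?
  const distance = (targetZ - camera.position.z) / vec.z

  // step 5: walk from the camera along the ray by that distance
  return camera.position.clone().addScaledVector(vec, distance)
}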

A quick takeaway from the code is that you could take step 2, vec.unproject(camera), as your result, because that already transforms the screen coordinates into world space coordinates, IF you don’t care about its z value. In step 1, the closer you set the clip space z to 1.0 (closer to the camera’s far plane), the more negative the mapped world space z you get; the closer you set the clip space z to -1.0 (closer to the camera’s near plane), the more positive the mapped world space z you get.

If you want a designated z plane for the returned point, just pass targetZ into the function. Easy!

Finally, the calculated world space coordinates are assigned to mouseWorldSpace, which is then assigned to the offset variable I previously told you to ignore. The offset is what makes the smoke follow your mouse position.
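
A minimal sketch of that wiring might look like this (the handler and names here are my own, reusing the screenToWorldSpace sketch from earlier):

window.addEventListener("mousemove", (event) => {
  if (!params.followMouse) return
  // convert the mouse position to a world space point on the z = 0 plane,
  // then use it as the offset that every sphere's target is built from
  mouseWorldSpace.copy(screenToWorldSpace(event.clientX, event.clientY, camera, 0))
  offset.copy(mouseWorldSpace)
})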

This is it. Kudos to you for reading all the way to the bottom; I can tell you are a curious and persevering learner! I hope you learned something from this tutorial. Until next time 👋🏼.
