Case study of I-Remember.fr

Because I always forget the magic tricks I used in previous projects and waste countless hours hunting for the same solutions over and over, this time, after working on I-Remember.fr, I decided to write down some thoughts and tricks before I forget everything.

Just to be clear, I am very new to WebGL and GLSL. If you can learn anything from my experience, that's great! But if you find something wrong in this post, please leave a comment so I can learn from you as well.


Visual Effects

In this case study, the main focus will be on the front-end, even though my work on this project included all of the backend as well. As is fairly obvious, particles are the key visual in I-Remember, so I would like to list the key particle effects and then break each of them down.

Key Effects

  1. Memory sea - The endless space full of shiny particles. The user can zoom in and see other users' memories.
  2. Posting step particles - The same particles are reused from the preloader all the way to the end of the memory posting flow.
  3. Memory posting effect - The memory photo becomes a particle and drops to the ground after you successfully post a memory.
  4. Fake particles after searching by keywords - 60,000 particles fly up from the ground to enhance the visual effect.

Other Effects

  1. Click and zoom to the searched particle - The camera flies to the 3D particle.
  2. Map - The clickable map on the UI panel.
  3. Particles rollover - Image distortion in the rollover state.
  4. Post Processing - RGB shift, tilt shifting, gradient head light, static noise, vignette, and color adjustments like color dodge, lighten and color correction.

Unlike most other Three.js projects, I-Remember didn't use any 3D models, and most of the effects above (even the map UI) were accomplished by writing custom GLSL shaders. If you are a web developer and not sure how to write one, check out Paul Lewis's An Introduction to Shaders Part 1 and Part 2.


Before I talk about those effects individually, I think it is worth mentioning noise, because noise is at the core of the dynamic effects in GLSL.

Noise

Most GLSL programs involve randomness, because random variation makes things look real. When we want a random number in Javascript, we usually call Math.random(), which gives us a pseudo-random number that is random enough for our purposes. In GLSL, there is no handy built-in function for getting a random number, so we have to implement one ourselves.

If you type "cheap GLSL noise" into Google, the first entry is this link to Stack Overflow, where you will find the following GLSL function:

float rand(vec2 co){
    return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}

If we pass a two-dimensional vector to this function, we get a "sort of" random number in [0, 1). So, if we want to render some 2D noise, all we have to do is pass the coordinates of the current position to the function, and we get something like this:

Simple!
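To play with this outside of a shader, the one-liner can be ported directly to Javascript (a sketch; note that JS doubles and GPU floats differ in precision, so the values won't match a GPU exactly):

```javascript
// A direct JavaScript port of the GLSL rand() one-liner above, handy for
// inspecting its distribution outside of a shader.
function fract(x) {
  return x - Math.floor(x);
}

function rand(x, y) {
  // dot(co.xy, vec2(12.9898, 78.233)) expanded by hand
  return fract(Math.sin(x * 12.9898 + y * 78.233) * 43758.5453);
}
```

Like the GLSL version, it is deterministic: the same input coordinates always yield the same "random" value, which is exactly what we want for per-pixel noise.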

However, in most cases we want something random but not complete chaos. We want control over the parameters. We want something like this:

Yes, like Perlin noise.

After some digging online, I found this Simplex Noise GLSL function here by Ashima Arts.

Beware! This noise function's return value is not limited to the 0 to 1 domain - it is roughly in [-1, 1].

This library was used in I-Remember a couple of times, and it brought dynamism and randomness to the whole website.


Memory Sea

In I-Remember, I used 360,000 particles for the memory sea. Not all of them are visible, but that is the buffer size I used for this effect.

In order to place the particles randomly, I needed to apply some noise to the x, y, z values of the particle position vertices. There are basically two ways to do so:

  1. Generate the noise in Javascript - With Javascript I have full control over the particles and can add mouse interaction. However, it is super slow to update the particle positions on the fly, and it is not possible to do an AABB mouse-hit test for every single particle on every single frame.
  2. Generate the noise in GLSL - I won't be able to read the displacement values back from WebGL into Javascript, but the performance is ideal.

So, in I-Remember, I used a hybrid solution - do both, but less of each.

For this hybrid solution, in order to get the same noise results in Javascript as in GLSL, I had to port the Simplex noise algorithm from GLSL to Javascript. But first I had to decide which dimension of noise to use. Normally, I would use 3D or even 4D noise, both for flexibility and to prevent the memory sea from looking like it is drifting in one direction. However, for some reason I failed to port the 3D/4D Simplex noise to Javascript; my best guess is a floating-point precision issue. Anyway, 2D Simplex noise looks fine to me, and it is a bit cheaper to run, especially in Javascript. You can grab my copy of the Simplex 2D noise port here

After sorting out the noise library port, I needed to do the real work - create some particles. As I had no prior experience with particles in Three.js, I did a quick search and, guess what... I found another tutorial by Paul Lewis - Creating Particles with Three.js. After studying the code, I decided not to follow it completely, because I wanted finer control over the particle rendering. Instead of using a static PNG file or a pre-drawn canvas as the texture, I drew the particle inside the fragment shader:

vec2 toCenter = (gl_PointCoord.xy - 0.5) * 2.0;
float len = length(toCenter);
float a = 1.0 - len;
gl_FragColor = vec4(1.0, 1.0, 1.0, a);

The snippet above is just the rough idea of how to draw inside the fragment shader. It normalizes the distance to the center of the point sprite and uses that value to draw the particle gradient.
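The same per-pixel computation can be mirrored in plain Javascript to see what the shader does (a sketch; `u` and `v` stand in for gl_PointCoord):

```javascript
// JS mirror of the fragment-shader gradient above: alpha is 1 at the center
// of the point sprite and falls to 0 at the edge of the inscribed circle
// (corners go negative, which renders as fully transparent).
function particleAlpha(u, v) { // u, v in [0, 1], like gl_PointCoord
  const dx = (u - 0.5) * 2.0;
  const dy = (v - 0.5) * 2.0;
  return 1.0 - Math.sqrt(dx * dx + dy * dy);
}
```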

I could then adjust the hue, saturation, brightness and intensity of the particle freely, based on the particle's noise value, making each individual particle look like this live demo:

[Live WebGL demo - requires a WebGL-capable browser.]


After experimenting with drawing an individual particle, I needed a solution for dealing with all 360,000 of them.

Since I-Remember lets users zoom in and navigate to the particles with memory images, I needed a systematic way to identify each particle.

Putting the random displacement aside for a moment, I created a 3 x 3 arrangement of particle systems in Three.js, each containing 200 x 200 particles. Each particle system has a position offset vector uniform that references its global offset.

The particles are laid out at even distances from each other, like this:

With this kind of grid setup, I know the rough position of each particle even after applying the displacement offset. Also, I only need to update the position of a whole particle system and swap it to the other side once the camera reaches the edge of the viewing grid. Here is a visual to explain what I did:
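The tile-swapping idea above can be sketched in Javascript (TILE_SIZE and the function name are my own illustrative choices, not the site's actual code):

```javascript
// Sketch of the endless-grid trick: each of the 3x3 particle systems snaps
// its offset to the tile containing the camera, so tiles behind the camera
// wrap around to the front as it moves. TILE_SIZE is illustrative.
const TILE_SIZE = 200;

function tileOffset(tileIndex, cameraCoord) {
  // tileIndex is -1, 0 or 1 along one axis
  const base = Math.round(cameraCoord / TILE_SIZE) * TILE_SIZE;
  return base + tileIndex * TILE_SIZE;
}
```

Applying this per axis keeps the nine systems centered around the camera, so the sea never runs out no matter how far the user navigates.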


Adding noise to the particles

Like the Perlin noise result I showed before:

It wasn't exactly what I wanted. So, in I-Remember, I transformed, multiplied, added and subtracted the noise into something like this:

float noiseRatio = (snoise(refPos.xy * 0.002) * 0.7 + snoise(refPos.xy * 50.0 + 14.2)) * 0.3;

A bit confusing at first, as it doesn't seem to tell you anything, right? In this code, I use multiple Simplex noise calls with different coordinate scales and offsets. Imagine you are in Photoshop with several noise pattern images, and you need to blend them into one noise pattern.

Here is a live example showing roughly what it is like to blend two noises:

[Live WebGL demo - requires a WebGL-capable browser.]

Simply put, with 2D Simplex noise, if you multiply the coordinates by a smaller number before passing them to the noise function, you get a bigger (lower-frequency) noise pattern; multiply by a bigger number and you get a smaller (higher-frequency) one. The xy offset simply prevents people from noticing that the two noises are scaled from the same origin.

To be honest, it is purely a trial-and-error procedure. Be patient and spend time with your designer on it :-)
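To see the blending structure in isolation, here is the shape of the noiseRatio expression with a stand-in noise function (the real site used Ashima's Simplex noise; the Math.sin here only mimics a smooth signal in [-1, 1]):

```javascript
// Stand-in for snoise(): any smooth function returning values in [-1, 1].
function snoise(x, y) {
  return Math.sin(x * 1.7 + y * 2.3);
}

// Same structure as the shader expression: a low-frequency octave (scale
// 0.002) weighted 0.7, plus a high-frequency octave (scale 50.0, offset
// 14.2), with the sum scaled by 0.3.
function noiseRatio(x, y) {
  return (snoise(x * 0.002, y * 0.002) * 0.7 +
          snoise(x * 50.0 + 14.2, y * 50.0 + 14.2)) * 0.3;
}
```

The weights bound the result to roughly [-0.51, 0.51], which is why a fixed threshold on this ratio works for picking out clickable particles.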

Once I had the ideal noise ratio, I could use it to define a threshold that determines whether a particle is clickable. I could also use this ratio to drive additional effects like x, y, z offsets, light intensity, opacity, etc. In I-Remember.fr, some areas appear to have no particles, but actually the particles are there - they just have very low opacity or are completely transparent.

[Live WebGL demo - requires a WebGL-capable browser.]

For the clickable particles, I used plane meshes instead of point particles, so that I could easily apply Three.js raycasting for the mouse interaction. As long as the clickable particles are rendered on top of the memory sea, it looks fine.

Obviously, unlike the other particles in the memory sea, I could not create 360,000 planes for the clickable particles; otherwise the fps would become unbearable due to the raycasting calculation in Javascript. So I only created 200 clickable particles and placed them on top of the memory sea.

One good thing about using the grid system for those 360,000 particles, instead of Math.random(), is that we know the rough position of each particle. So, instead of doing a super long loop to find the clickable particles (the particles with a noiseRatio higher than the threshold), I could do a "spiral search" outward from the grid point closest to the current lookAt center, like this:

[Live WebGL demo - requires a WebGL-capable browser.]

In this live demo, if you drag the steps bar slowly, you will see the "spiral search" I mentioned. It works just the same even after you apply the offset and animate it.

Once a match is found whose noiseRatio is higher than the threshold, its gridXY index is pushed into an array; the search stops once it has found 200 particles or reached the maximum search radius. Then I compare that array with the visible clickable particles from the last frame. If a gridXY index did not exist in the previous frame, it gets associated with a new memory data record. This way, I can make sure each clickable particle stays associated with the same memory data as in the previous frame. After that, I apply the same displacement to the clickable particles as in the GLSL, using my Simplex 2D Javascript port.
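The spiral search boils down to walking outward over ring after ring of grid cells. Here is a minimal sketch (isClickable stands in for the noiseRatio-vs-threshold test; all names are illustrative, not the site's actual code):

```javascript
// Walks grid cells outward from (cx, cy), one square "ring" at a time,
// collecting clickable cells until maxResults are found or maxRadius is hit.
function spiralSearch(cx, cy, maxRadius, maxResults, isClickable) {
  const found = [];
  if (isClickable(cx, cy)) found.push([cx, cy]);
  for (let r = 1; r <= maxRadius && found.length < maxResults; r++) {
    for (let x = cx - r; x <= cx + r; x++) {
      for (let y = cy - r; y <= cy + r; y++) {
        // keep only the ring at Chebyshev distance exactly r
        if (Math.max(Math.abs(x - cx), Math.abs(y - cy)) !== r) continue;
        if (found.length < maxResults && isClickable(x, y)) found.push([x, y]);
      }
    }
  }
  return found;
}
```

Because the walk expands from the lookAt center, the 200 planes naturally end up on the particles nearest the camera, which is exactly where clicks can happen.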

BAAM! It is done!


Post step particles

In the preloader animation, the particles spin around the center, and each individual particle has its own additional movement. As we don't need any mouse interaction with these particles, I did the animation completely in GLSL.

It is very easy to create a ring-shaped geometry. You can do it in Javascript or in GLSL. I did it in GLSL: since I needed to manipulate the angle and radius in the shader anyway, there was no point computing it in Javascript and then doing the work twice.

JS:

for(var i = 0; i < 36000; i++) {
    vertices.push(new THREE.Vector3(i, 0, 0));
}

GLSL:

const float PI = 3.14159265358979323846264;

void main() {

    float i = position.x;
    float angle = i / 180.0 * PI;
    float radius = 1.0 + floor(i / amount * amountPerDegree) / amountPerDegree;

    vec3 pos = vec3(sin(angle) * radius, cos(angle) * radius, 0.0);

    //... rest

It probably looks a bit too linear in the picture above, but after applying displacement noise to the particles, it looks way better.
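For reference, the vertex-shader layout can be reproduced in Javascript to sanity-check the math (the amount and amountPerDegree values here are illustrative stand-ins for the real uniforms):

```javascript
// JS equivalent of the ring layout in the vertex shader: the particle index i
// is stored in position.x; the angle winds around the circle while the radius
// steps outward from 1.0 toward 2.0 as i grows.
const amount = 36000;
const amountPerDegree = 100;

function ringPosition(i) {
  const angle = i / 180 * Math.PI;
  const radius = 1.0 + Math.floor(i / amount * amountPerDegree) / amountPerDegree;
  return [Math.sin(angle) * radius, Math.cos(angle) * radius, 0];
}
```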


Playing with particles

In I-Remember, I have 36,000 particles for this effect to play with. But in the preloader, for example, I didn't really need that many particles, so I separated them into 9 groups:

// 9 groups
float group = mod(floor(baseRadius * amountPerDegree), 9.0);

Then I can use step(), or step() + mod(), with the group value, and multiply it into the alpha channel to filter some particles out of the current animation.

Here is an example of applying some noise offset based on the rotation angle and the distance to the center:

[Live WebGL demo - requires a WebGL-capable browser.]

In order to animate the particles from the preloader state through the other steps, I used a uniform, "animation", to represent the current state:

  • 0 to 1 - it represents the preloading 0 percent to 100 percent state
  • 1 to 2 - it represents the state from preloader to the post/nav step
  • 2 to 3 - it represents the state from the image upload to the image adjustment step
  • 3 to 4 - it represents the finishing animation

As the same particles are used for the whole animation sequence, all of the displacements have to be stacked; otherwise the particles would jump between steps. To simplify the linear interpolation, I wanted to map the "animation" value into the 0 to 1 domain for the current state's animation. For example, for the displacement animation of the state from the preloader to the post/nav step, I mapped it like this:

So I created this simple function - clampNorm().

It is a simple function that maps a value between min and max into the 0 to 1 domain and clamps it.

JS:

function clampNorm(val, min, max) {
    val = (val - min) / (max - min);
    return val > 1 ? 1 : val < 0 ? 0 : val;
}

GLSL:

float clampNorm(float val, float min, float max) {
    return clamp((val - min) / (max - min), 0.0, 1.0);
}

So for the whole animation sequence, the code can look like this:

GLSL:

float preloaderAnimation = clampNorm(animation, 0.0, 1.0);
pos += preloaderAnimation * (some_modification_factors_of_preloader_state);
float otherAnimation = clampNorm(animation, 1.0, 2.0);
pos += otherAnimation * (some_modification_factors_of_other_state);

Yes, I basically stacked the whole animation sequence up. I was a little worried about the performance, but in practice, as long as the GLSL program is not as complex as one of those crazy Shadertoy shaders, doing a few lines of calculation for a couple of thousand particles is still a piece of cake for your graphics card.

Also, you might want to outsmart it with some "if statements" in the GLSL. However, it seems that branching can be really bad for your GPU (I still used it sometimes in this project, though).

As I dug around online, I found that instead of an "if statement", I should use step(). What it does, expressed in JS, is basically:

function step(edge, x) {
    return x < edge ? 0.0 : 1.0;
}

Consider the following GLSL code:

if( animation < 1.0) {
    pos.x = 1.5;
} else {
    pos.x = 3.0;
}

Instead of using an if statement, we can do this in GLSL:

pos.x = (1.0 - step(1.0, animation)) * 1.5 + step(1.0, animation) * 3.0;
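Translated to Javascript, you can verify that the branchless form and the if/else agree:

```javascript
// step() as defined earlier
function step(edge, x) {
  return x < edge ? 0.0 : 1.0;
}

// Branchless selection: picks 1.5 when animation < 1.0, otherwise 3.0 -
// the same result as the if/else version, but without a GPU branch.
function posX(animation) {
  const s = step(1.0, animation);
  return (1.0 - s) * 1.5 + s * 3.0;
}
```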

GLSL coding is like a whole new world to me. Lesson learned.


Memory posting effect

After users upload their memory image, they see a tutorial screen teaching them how to adjust the image to fit inside the circular clipping area. This whole tutorial animation and the image adjustment step were rendered in Canvas, as that is the best and easiest way to handle it, rather than WebGL. But after the user clicks the next button, it swaps over to WebGL. In order to replace the canvas DOM element with the Three.js plane, I needed to move the plane to a position in front of the camera so that it appears at 100% scale. For this solution, check out the Three.js tip at the bottom of this post.

Back to the memory image itself: I added a sepia effect to the adjusted image to make it look more like a "faded memory", and then it becomes a particle with a light field effect. The initial version of the light field was something like this:

float lightField(float angle, float angleScale, float t1, float t2) {
    return clamp(snoise(vec2(angle * angleScale + t1, t2)), 0., 1.);
}

But you can see there is a cropped seam on the left that I needed to resolve, so I added a patch size parameter. All it does is mirror the other side so the seam looks smooth. Like this:

float lightField(float angle, float angleScale, float t1, float t2, float patchSize) {
    return mix(
        clamp(snoise(vec2(angle * angleScale + t1, t2)), 0., 1.),
        clamp(snoise(vec2(-angle * angleScale + t1, t2)), 0., 1.),
        clampNorm(angle, PI - patchSize, PI)
    );
}

[Live WebGL demo - requires a WebGL-capable browser.]

As you can imagine, I needed to move the particle plane to a certain position in front of the camera to keep it at 100% scale. There was essentially no way for me to simply animate the x, y, z translation of the plane to make it look like it is dropping into the memory sea. So I had to fake it: I scaled and translated the particle with a certain easing function to make it look that way. Sometimes, when we build websites like this, we need to find alternatives and break through the technical boundaries in our mindset.


Fake particles after searching by keywords

It looks really awesome, but it is actually very simple to make. First of all, I needed to define the cone-shaped area that the particles should be placed within. Like this:

Then I needed a way to put fewer particles right in front of the camera, so they don't block the screen, and more particles at the back of the cone, so it looks more natural. There are many ways to define this distribution; what I used in I-Remember.fr is 2^x, like this:

Then I can simply use floor(log2(i)) to get the distance to the camera. After adding some sin() and cos() functions and fighting with the designer, I got something like this:

[Live WebGL demo - requires a WebGL-capable browser.]
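The 2^x idea boils down to this sketch: particle i lands in depth band floor(log2(i + 1)), so each band further back holds twice as many particles as the one before it (the +1 here is my own tweak to avoid log2(0); the site used floor(log2(i))):

```javascript
// Maps particle index i to a depth band: band b contains 2^b particles,
// so few particles sit near the camera and many fill the back of the cone.
function depthBand(i) {
  return Math.floor(Math.log2(i + 1));
}
```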


Tricks I used in Three.js

Optimization

For the particles, I used additive blending in Three.js, which means that when two particles overlap, their rgb values are summed together. If you are not familiar with this from Photoshop, see the following demo:

[Live WebGL demo - requires a WebGL-capable browser.]

As you can see, with additive blending, the render order of the particles doesn't matter anymore. So I could take advantage of that and optimize the website by disabling the depth testing of the particles in Three.js:

var mat = new THREE.ShaderMaterial({
    ...
    blending: THREE.AdditiveBlending,
    depthTest: false,
    ...
})

Fixed Scaling

If you resize the window on I-Remember.fr, you will probably notice that, unlike most other fullscreen WebGL websites, the 3D viewport doesn't scale. With this method, we can maintain the scale of the clickable particles so that they don't go chaotic when the user resizes the window. The downside is that objects at the edges of the screen might be slightly distorted on a big screen.

This is the implementation of fixed scaling:

JS

camera.aspect = winWidth / winHeight;
var idealWidth = screen.width;
var idealHeight = screen.height;
camera.setViewOffset(
    idealWidth,
    idealHeight,
    (idealWidth - winWidth) >> 1,
    (idealHeight - winHeight) >> 1,
    winWidth,
    winHeight
);

In the example above, I used window.screen as the reference, as it is more reliable than the window size.

Display a plane with its original scaling

In I-Remember.fr, after you click a searched post, the camera flies toward the post particle and matches the position of the 2D post image. To make the plane appear at 100% scale, there are two ways to do it:

1. Scale the plane.
2. Move the plane to the camera, or move the camera to within a certain distance of the plane.

The mathematical concept is something like this:

fov = (2 * Math.atan(filmWidth / (2 * focalLength))) * 180 / Math.PI;

So, to scale the plane to make it appear at 100% scale, we need something like:

var planeDistanceToCamera = camera.position.distanceTo(plane.position);
plane.scale.x = plane.scale.y = plane.scale.z = planeDistanceToCamera * 2 * Math.tan(camera.fov / 360 * Math.PI) / camera.fullHeight;

To optimize it, we can precalculate part of the formula above and save it as a variable, fixScaleFactor, in window.onresize():

var fixScaleFactor;
function onResize(){
    fixScaleFactor = 2 * Math.tan(camera.fov / 360 * Math.PI) / camera.fullHeight;
}

plane.scale.x = plane.scale.y = plane.scale.z = camera.position.distanceTo(plane.position) * fixScaleFactor;

With the correct angle, the plane will appear at 100% scale.

But normally I would suggest the other way - moving the plane to the camera, or the camera to the plane - because otherwise the plane looks weird relative to other 3D elements in the same scene. We also gain the flexibility to easily do extra 2D translation.

So the code for moving the plane to the camera looks something like:

plane.position.copy(camera.position);
plane.rotation.copy(camera.rotation);
plane.translateZ(-1 / fixScaleFactor );
//plane.translateX(100) //move it to right 100px from the center

I used the same technique to fly the camera to the searched post image, whose offset the user can adjust before posting the memory. Also, all of the UI particles are rendered on the same canvas using this method.

Beware! If the item you want to draw at 100% scale ends up too close to the camera after the transformation, it might fail, because its distance to the camera may be smaller than the camera's near value. In that case you may want to use scaling instead, or temporarily lower the camera's near value.

Frameworks/Tools I used in this project:

PHP

I believe that good front-end developers should know how to do some back-end work (as well as design). In this project, I learned a lot of backend PHP, made tons of mistakes, and found solutions just like I do on the front-end.

  • Laravel 4 - A very easy-to-use PHP framework. It uses Symfony components and is super friendly to backend newbies like me.
  • Mobile Detect - I used this library in several projects before. It simply does what it claims.

JavaScript

  • Bower - My first time trying this tool. I have mixed feelings about it: it is very helpful most of the time, but in some scenarios it leaves you cursing "how can I do this with it?".
  • Grunt - I only used it as a Sass watcher and for livereload. It is a very useful tool with big community support. However, I switched to Gulp after this project.
  • Three.js - Of course I used it. It is one of those libraries that keeps us ex-Flash developers in the game. But if I do a project like this again, I will probably just take its core/math functions instead of using the whole of Three.js.
  • RequireJS - AMD module loader. I have been using it for almost 2 years. For this project, I wrote a tool to convert Three.js into AMD modules; you can grab it here. A little bit hacky, but it works fine :P