How we built the Google Cloud Infrastructure WebGL experience

24th March 2018

Google Cloud wanted to explain the vastness of their worldwide infrastructure: how their data centers and regions work, and how close to end users their edge nodes and points of presence sit, enabling fast, low-latency access to the Google Cloud backend infrastructure. Complex systems are often hard to describe with words alone, so we decided to map out the entire system and make it tangible by building an interactive world in 3D.


These are our most important learnings.


Overall tech stack

The site is built as a single page app, utilising the History API for navigation and deep linking.
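As a rough sketch of that pattern (the paths and the showSection() helper here are hypothetical, not the production code), navigation pushes a new history entry and deep links are restored by listening for popstate:

function showSection(path: string): void {
  // Hypothetical: map a path like "/infrastructure/regions" to a scene transition.
}

function navigateTo(path: string): void {
  history.pushState({ path }, '', path); // update the URL without a page reload
  showSection(path);                     // swap the visible section/scene
}

// Back/forward buttons land here.
window.addEventListener('popstate', (event: PopStateEvent) => {
  const path = (event.state && event.state.path) || window.location.pathname;
  showSection(path);
});

// On initial load, resolve the deep link straight away.
showSection(window.location.pathname);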


Most of the site’s JavaScript is authored in TypeScript which, in addition to weeding out common bugs and typos, provided critical refactoring support during the development process.


Various libraries are in use, and without these it would not have been possible to build the site in a reasonable timeframe, so a big THANK YOU to the authors and contributors of these projects:



  • GSAP for animations and timeline managed animation flows.

  • Three.js for WebGL rendering and scene graph management.

  • THREE.BAS for instanced rendering and Vertex shader animations.

  • Webpack — build tool enabling tight asset management and small deployment sizes through tree-shaking etc.

  • glTF — the 3D asset format we chose to use for packaging our models.

The site is hosted on Google Cloud Platform’s own App Engine (obviously).


All 3D models were created in Cinema4D with Blender used for importing/exporting to glTF.
The entire website clocks in at 2.9MB gzipped, 2.22MB without audio.


Setting the scene - Look and feel

We knew we wanted a fairly minimal 3D look with emphasis on subtle greyscale variations as well as a few main colors from the Google Cloud color scheme. The first thing that came to mind was to try and go for a classic ray-traced clay style with beautiful soft shadows, a look well known from architectural renderings. So even before we knew what the final models were gonna be, we started exploring if this was possible.


Enter the Troubled Developer and the 60 FPS…

Given that render times in Cinema 4D (for this look) far exceeded anything remotely close to real-time rendering, even on a machine equipped with 4 x NVIDIA GTX 1080 Ti graphics cards, it was clear from the start that real Ray Tracing was not a viable approach for something that would eventually need to run on a regular consumer laptop, newer smartphones etc.


From previous projects we’ve learned that this is often the case when moving from 3D software like Cinema 4D into the browser. What we often do is to try and explore different visual styles through renderings from Cinema 4D and then figure out how to recreate or mimic that look in WebGL. It’s a back-and-forth process, and while it’s time-consuming it’s also what makes it fun to develop these kinds of projects.


It’s not always clear just where we’re gonna end up.


Shaders and Anti Aliasing (AA)

The clay look through Ray Tracing is mostly defined by the soft shadows and by light bouncing around multiple times to pick up the colors of the materials it bounces off. Light bouncing multiple times is computationally very costly, so we quickly focused on getting the shadows right, foregoing true Ray Tracing.


We explored adding Ambient Occlusion using a shader pass known as Screen Space Ambient Occlusion (SSAO) — however, once again it led to another trade-off. Post-processing shaders in Three.js render into a new offscreen render target, which makes the browser’s built-in anti aliasing unavailable.


Having lost anti aliasing we looked at ways to bring back the sharp straight lines. So we explored adding custom anti aliasing shaders, like the FXAA shader suggested in this issue: https://github.com/mrdoob/three.js/issues/568.


While this let us disable the browser’s built-in anti aliasing — giving quite a performance boost — the custom anti aliasing shaders weren’t quite up to par quality-wise with the browser’s built-in anti aliasing.
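For reference, here’s a hedged sketch of the kind of post-processing chain described above, using the pass classes that ship in the Three.js examples directory (import paths and constructor signatures here match recent releases and vary between versions; at the time these lived under examples/js as THREE.* globals):

import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { SSAOPass } from 'three/examples/jsm/postprocessing/SSAOPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { FXAAShader } from 'three/examples/jsm/shaders/FXAAShader.js';

// Rendering through an EffectComposer goes via offscreen render targets,
// so the browser's built-in anti aliasing on the default framebuffer no longer applies.
function setupComposer(
  renderer: THREE.WebGLRenderer,
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera,
  width: number,
  height: number
): EffectComposer {
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera));

  // Screen Space Ambient Occlusion as a post-processing pass.
  composer.addPass(new SSAOPass(scene, camera, width, height));

  // Approximate the lost anti aliasing with FXAA as a final full-screen pass.
  const fxaa = new ShaderPass(FXAAShader);
  const pixelRatio = renderer.getPixelRatio();
  fxaa.material.uniforms['resolution'].value.set(
    1 / (width * pixelRatio),
    1 / (height * pixelRatio)
  );
  composer.addPass(fxaa);

  return composer; // call composer.render() each frame instead of renderer.render()
}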


Given that the site and models are very minimalistic — almost to the point of looking like vector graphics — any low-res shadows or aliased lines stood out quite glaringly as opposed to a more complex/colorful 3D scene where the eye has a tendency not to focus so much on these little imperfections in render quality.


With Anti Aliasing / without Anti Aliasing

Load time

The first thing that was modeled (and re-modelled multiple times) was a Region from the outside and the inside of a Data Center. A Region consists of multiple Data Centers. Through many iterations and Hangouts with Google engineers we landed on a representation that fitted close enough to how an actual Google Cloud Region is laid out.


Early sketch and iteration of Region and Data Center.

The initial export from Cinema4D was a 100.4 MB OBJ file. No way was that gonna fly for a site, even though it did actually load (slowly). For quick model testing, Three.js’ editor is highly recommended: https://threejs.org/editor/


When looking at 3D model formats it can be quite a jungle to narrow down what the best option is. It’s not one-size-fits-all, but the Khronos Group has put a lot of effort into the glTF format, striking a fine balance between low file sizes, good export options, fast browser parsing time and high consistency between 3D authoring tools and the model loaded with Three.js’ GLTFLoader. All our models were exported as glTF binaries (.glb), giving a boost in browser parse times. With other formats it’s often not just the actual network transfer that extends load times, but also the amount of time the browser uses to parse the downloaded models.


Switching to the glTF format gave us a significant reduction in file size and browser parse time, but the file was still too big, not only in size but also in complexity. We simply had too much detail in our model. So by working in a tight iteration cycle of reducing model complexity by removing vertices and testing how it looked in the browser, we finally got the model down to the size it is now: ~716 KB (gzipped). We also stripped away the normals data in the exported model, opting to calculate them at load using Three.js’ Geometry.computeVertexNormals().
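As a sketch, that load step looks roughly like this (import path and file name are assumptions; at the time GLTFLoader lived under examples/js):

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();

// Load the binary glTF and recompute the normals that were stripped on export.
new GLTFLoader().load('models/region.glb', (gltf) => { // hypothetical file name
  gltf.scene.traverse((child) => {
    if ((child as THREE.Mesh).isMesh) {
      const geometry = (child as THREE.Mesh).geometry as THREE.BufferGeometry;
      geometry.computeVertexNormals();
    }
  });
  scene.add(gltf.scene);
});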


The real trick though was not just to optimize the model, but to identify identical objects.


Instanced Rendering & GPU Animation

Our model consisted of many identical objects like fans, server racks, trees, cars, trucks, windmills, cooling towers etc. The only difference between them was each instance’s individual position, scale and rotation (PSR); the actual geometry of each instance was exactly the same, so there was no need to load that geometry more than once, and most importantly for render times, no need to upload it to the GPU more than once. The Three.js repo has many examples of this technique, which renders multiple objects with individual PSR in one draw call.


In addition to PSR instancing, we needed to maintain the ability to animate each object individually, so that for example all the fans wouldn’t be rotating in unison. Accomplishing this on the GPU involves writing a custom vertex shader and putting the individual animation information into custom BufferGeometry attributes.


All in all, this is not the easiest thing to do solely with Three.js’ standard APIs, so we opted to add the THREE.BAS library to make it a bit easier to work with these GPU based animations while maintaining the ability to use Three.js materials and their support for lights.


Offloading the animation calculations from the CPU to the GPU comes with speed improvements at the cost of a little flexibility, but overall this was okay for the simple animations needed on the fans, windmills etc. However, we still needed some control over the animations, so we exposed a single uniform in the vertex shader that acts as an overall progress variable for the whole set of animations.
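THREE.BAS wraps this pattern up for us, but as an illustration of the underlying technique in plain Three.js (not the production shader; values and names here are made up, and the snippet assumes a recent Three.js release where setAttribute replaced the older addAttribute), per-instance attributes plus a single progress uniform look something like this:

import * as THREE from 'three';

// One source geometry (standing in for e.g. a fan blade from the model),
// instanced many times with a per-instance offset and rotation speed.
const source = new THREE.BoxGeometry(2, 0.1, 0.2);

const geometry = new THREE.InstancedBufferGeometry();
geometry.index = source.index;
geometry.attributes.position = source.attributes.position;
geometry.attributes.normal = source.attributes.normal;

const COUNT = 500;
const offsets = new Float32Array(COUNT * 3);
const speeds = new Float32Array(COUNT);
for (let i = 0; i < COUNT; i++) {
  offsets[i * 3 + 0] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 1] = 0;
  offsets[i * 3 + 2] = (Math.random() - 0.5) * 100;
  speeds[i] = 0.5 + Math.random(); // every fan spins at its own rate
}
geometry.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));
geometry.setAttribute('instanceSpeed', new THREE.InstancedBufferAttribute(speeds, 1));

const material = new THREE.ShaderMaterial({
  uniforms: {
    uProgress: { value: 0 }, // the single overall progress value, driven from GSAP
    uColor: { value: new THREE.Color(0x4285f4) },
  },
  vertexShader: `
    attribute vec3 instanceOffset;
    attribute float instanceSpeed;
    uniform float uProgress;

    void main() {
      float angle = uProgress * instanceSpeed * 6.2831; // full turns per progress unit
      mat2 rot = mat2(cos(angle), -sin(angle), sin(angle), cos(angle));
      vec3 p = position;
      p.xy = rot * p.xy;   // spin the blade around its local z axis
      p += instanceOffset; // place this instance in the scene
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: `
    uniform vec3 uColor;
    void main() { gl_FragColor = vec4(uColor, 1.0); }
  `,
});

const fans = new THREE.Mesh(geometry, material);
// scene.add(fans); then, from the render loop or a GSAP timeline, advance the uniform:
// material.uniforms.uProgress.value = timeline.progress();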


Model Structure (Easy Instancing Workflow)

Our model contains only one source geometry per object that’s instanced. For a tree, for instance, we just defined one tree as the source and used Instance Objects in Cinema 4D to place copies of the source tree all around the scene with variations in PSR. We wanted to be able to place these objects visually, preview them within Cinema 4D, and have an easy way to export the instanced copies’ PSR information embedded within the model. This way, to preview new changes, we only needed to do one export of the model from Cinema 4D to glTF (more on that in the next section).


Turns out it’s quite easy: we simply landed on a naming convention where source geometry objects are named source_object_name and all the instanced objects are named replace_object_name_X. Here’s a screenshot of how that looks for the trees:
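On the Three.js side, resolving that convention after loading the glTF can be sketched like this (the helper and its return shape are ours, not the production code):

import * as THREE from 'three';

interface InstancePSR {
  position: THREE.Vector3;
  quaternion: THREE.Quaternion;
  scale: THREE.Vector3;
}

// Walk the loaded scene graph and group source meshes with the PSR of their
// "replace_" placeholders (assumes placeholders sit directly under the root,
// so their local PSR matches their world PSR).
function collectInstances(root: THREE.Object3D) {
  const sources = new Map<string, THREE.Mesh>();
  const placements = new Map<string, InstancePSR[]>();

  root.traverse((node) => {
    if (node.name.startsWith('source_')) {
      sources.set(node.name.slice('source_'.length), node as THREE.Mesh);
    } else if (node.name.startsWith('replace_')) {
      // "replace_object_name_X" -> "object_name"
      const key = node.name.slice('replace_'.length).replace(/_\d+$/, '');
      const list = placements.get(key) || [];
      list.push({
        position: node.position.clone(),
        quaternion: node.quaternion.clone(),
        scale: node.scale.clone(),
      });
      placements.set(key, list);
    }
  });

  return { sources, placements }; // feed each pair into an instanced geometry
}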


Now there is one extra step, sadly. Because Cinema 4D can’t export directly to glTF we needed to roundtrip through Blender using FBX as an intermediary format. When exporting Instanced Objects from Cinema 4D to FBX all the Instanced Objects are replaced with actual geometry — not what we wanted. The way we solved it was to convert all the Instanced Objects to Cinema 4D Null objects, thus maintaining the Instanced Objects’ PSR info on export, but without replicated geometry:


Cinema4D to Three.js and glTF with Blender

Sadly there is (as of yet) no direct way to export/import glTF files in Cinema4D so we needed to go through Blender with the glTF import/export extension installed.


The flow ended up looking something like this:
Cinema4D → FBX Export → Blender → Import FBX → Export glTF Binary


Here’s a short screen recording of the flow which also shows some of the import/export settings we ended up using to compensate for the differences in world-scale and world-orientation between Cinema4D and Three.js:


In-browser Exploration

The above video ends up in the Three.js Editor, which is a great tool for previewing models and playing around with materials, lights, geometry etc. Sadly, by default there’s no way to have this more visual approach to tweaking the various properties of geometries, lights, materials etc. in your own project. That said, it’s very much needed, and it’s also what gives applications like Unity3D their great edge. In some cases you just need to play around and make tiny tweaks to lights and materials to get everything right, or to compensate for an annoying z-fighting bug. And while it won’t bring the power of Unity3D into the browser, the Three.js Inspector Chrome Extension does add some much needed features for just this kind of in-browser tweaking — no need to constantly change tiny code parameters and wait for a full reload of your site:


Just remember to add your Three.Scene to the window object:
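Something along these lines, assuming your scene instance is called scene:

// Expose the scene globally so the Three.js Inspector extension can find it.
(window as any).scene = scene;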


Baking Shadows

At this point we had pretty much been forced to cut all realtime shadows in favor of a sharp look, but we also wanted to bring back some of the original look. A low cost way of adding shadows and other lighting effects is to bake them into a texture directly from Cinema4D and load it onto a flat plane in Three.js. We baked two textures, one for the outside of the Region and one for the inside of the Data Center; using separate textures gave us higher resolution shadows inside the Data Center.
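Loading such a baked texture onto a plane is straightforward; a minimal sketch (file name and plane size are made up):

import * as THREE from 'three';

declare const scene: THREE.Scene; // provided elsewhere in the app

// Lay the baked shadow/lighting texture flat under the model.
const texture = new THREE.TextureLoader().load('textures/region_shadows.jpg'); // hypothetical path
const shadowPlane = new THREE.Mesh(
  new THREE.PlaneGeometry(200, 200), // sized to match the baked area
  new THREE.MeshBasicMaterial({
    map: texture,
    transparent: true,  // let the baked shadows blend with the ground
    depthWrite: false,  // avoid z-fighting with geometry resting on the plane
  })
);
shadowPlane.rotation.x = -Math.PI / 2; // rotate the plane to lie flat on the ground
scene.add(shadowPlane);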


Outside / Inside shadow textures

Smooth Transitions

Another goal was to be able to make smooth transitions from one scene to another. The most important thing to remember when dealing with GPUs and 60FPS is that any new information that needs to get uploaded to the GPU takes time and can cause frames to drop. Textures especially can be heavy to upload. There are various formats (PVRTC, DDS, ETC…) optimized for quick decompression and low memory usage on the GPU, however native support on different platforms varies wildly, and in our testing their compressed file sizes couldn’t match regular JPGs. So we decided to use JPGs because of their small network load, and sacrifice some runtime performance.


This is one of the areas we’d like to explore more, but for this project we opted to try and mitigate some of the runtime overhead by pre-uploading all textures and precompiling all materials (shaders) needed for the entire site on initial site-load. At a high level this basically means that we tried to make the cameras see all the different scenes at load by making everything visible, forcing a render once and then resetting everything to their original visibility:
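Sketched out (simplified to a single scene and camera, where the real site juggles several), the warm-up looks roughly like this:

import * as THREE from 'three';

declare const renderer: THREE.WebGLRenderer; // provided elsewhere in the app
declare const scene: THREE.Scene;
declare const camera: THREE.PerspectiveCamera;

// Force shader compilation and texture uploads up front so later scene
// transitions don't stall the frame. Three.js also offers
// renderer.compile(scene, camera) for precompiling materials without rendering.
function warmUp(): void {
  const previousVisibility = new Map<THREE.Object3D, boolean>();

  // Make everything visible so nothing is skipped by the render.
  scene.traverse((object) => {
    previousVisibility.set(object, object.visible);
    object.visible = true;
  });

  // One throwaway render uploads the textures and compiles the shader programs.
  renderer.render(scene, camera);

  // Reset everything to its original visibility.
  previousVisibility.forEach((visible, object) => {
    object.visible = visible;
  });
}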


The Quality Selector & Various Performance Tips

Building a WebGL site that needs to run at 60FPS is no small feat. It’s a constant compromise between look, feel, and responsiveness. Many factors come into play, such as not knowing the capabilities of the device it’s gonna run on, what resolution it will run at and so on. A visitor might have a MacBook hooked up to a 4K display — that’s just never gonna fly for anything but a fairly static site. We tried to set sensible defaults, but also explored adding a quality switcher to allow visitors to choose themselves. Admittedly, this is probably not the most used feature of the site, but building it gave some insights into which parameters matter in terms of performance, and how to change these at runtime without reloading the full site. We did not want to have the user choose a quality setting when they entered the site.


To keep it simple and to avoid adding a ton of conditional code based on the settings, we chose to tweak the rendering size (pixel ratio) and toggle anti aliasing on/off.


This resulted in three quality settings:


  • Low Definition (LD): anti aliasing turned off and max pixel ratio capped at 1.

  • Standard Definition (SD): anti aliasing turned on. Slightly adaptive in that it doesn’t just set one fixed max pixel ratio but tries to be a bit smart by adapting the max pixel ratio based on window size and screen pixel density. For example we increase the max pixel ratio slightly on 5K displays for increased sharpness, assuming that a device connected to this type of display is slightly beefier than your average laptop.

  • High Definition (HD): anti aliasing turned on and pixel ratio limited to a max of 2.5.
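Sketched as a lookup (the ratios and the screen-size threshold here are illustrative, not the production values):

interface QualitySettings {
  antialias: boolean;
  maxPixelRatio: number;
}

// Illustrative mapping from quality tier to renderer settings.
function getQualitySettings(quality: 'LD' | 'SD' | 'HD'): QualitySettings {
  switch (quality) {
    case 'LD':
      return { antialias: false, maxPixelRatio: 1 };
    case 'SD': {
      // Slightly adaptive: allow a bit more resolution on large, dense displays.
      const largeDenseDisplay = window.screen.width >= 2560 && window.devicePixelRatio >= 2;
      return { antialias: true, maxPixelRatio: largeDenseDisplay ? 1.5 : 1.25 };
    }
    case 'HD':
      return { antialias: true, maxPixelRatio: 2.5 };
  }
}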


The tricky part was toggling anti aliasing on/off, as this currently requires setting up a new THREE.WebGLRenderer() and cleaning up the old one.
Pseudo code:
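A rough sketch of the approach, reusing the QualitySettings shape from above (the container element and variable names are assumptions):

import * as THREE from 'three';

declare let renderer: THREE.WebGLRenderer; // the currently active renderer
declare const container: HTMLElement;      // the element hosting the canvas

// Toggling antialias means creating a fresh renderer, since the flag can only
// be set when the WebGL context is created.
function recreateRenderer(settings: QualitySettings): void {
  // Tear down the old renderer and remove its canvas.
  renderer.forceContextLoss();
  renderer.dispose();
  container.removeChild(renderer.domElement);

  // Create a new renderer with the new antialias flag and pixel ratio cap.
  renderer = new THREE.WebGLRenderer({ antialias: settings.antialias });
  renderer.setPixelRatio(Math.min(window.devicePixelRatio, settings.maxPixelRatio));
  renderer.setSize(container.clientWidth, container.clientHeight);
  container.appendChild(renderer.domElement);

  // Render targets, composers etc. tied to the old context must be re-created too.
}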


For further tips on how to optimize performance, the responses in this Twitter thread by @mrdoob give some good quick hints.


Detecting device capabilities

Since we now had a way to tweak the device requirements, we wanted to explore if there was a way to automatically detect and set the quality setting based on how powerful a visitor’s device is. Long story short — kinda, but it’s a bit of a hack and there are too many false positives.


EPIC Agency describes the approach in this article under “Adaptive quality”.


The approach relies on sniffing out a specific GFX card name, using the WebGL extension WEBGL_debug_renderer_info and then correlating it to a performance score.
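Querying the extension looks roughly like this:

// Read the unmasked GPU name via the WEBGL_debug_renderer_info extension.
// Some browsers restrict or mask this information.
function getGPUName(): string | null {
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return null;

  const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
  if (!debugInfo) return null;

  // e.g. "ANGLE (NVIDIA GeForce GTX 1080 Ti Direct3D11 ...)"; the string then
  // has to be matched against a list of known cards, typically with regular expressions.
  return gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL) as string;
}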


However their project was limited to mobile devices where it may be more reasonable to link certain GFX cards to a specific performance value, since mobile devices’ GPU capabilities are usually better matched to the devices’ screen resolutions, whereas on desktop devices this may not necessarily be the case.


It is however possible to use this information if there’s a known GFX card that is underpowered for your project even at very low resolutions — think of it like minimum requirements for a game. It would have to be a GFX card that we know is used on a lot of devices to make it worth the effort of testing all these GFX cards’ performance, not to mention the regular expressions needed to detect them all.


On top of that, iOS 11 introduced throttling of requestAnimationFrame() to 30FPS when in Low Power Mode, probably a sensible feature to save battery, but it left out a way for a site to differentiate between intentional system-wide throttling and simply running on a slow device.


We decided that for the time being the best option was to not try and be too smart about auto-changing quality settings, not even based on just a simple average frame rate tracking technique.


Going Beyond 60FPS by Disabling VSync

It can be hard to track how small changes affect the performance of a WebGL project if you’re well within the limits of your GPU and browser. So let’s say you wanted to figure out how adding a new light to a scene impacts performance, but all you’re seeing is a smooth 60FPS — with and without the new light. We found it helpful to disable Chrome’s frame rate limit of 60 FPS and just let it run as fast as it can. You can do this by opening Chrome from the terminal:


open -a "Google Chrome" --args --disable-gpu-vsync


Be mindful that other tasks running on your laptop or in other tabs may affect the FPS …looking at you, Dropbox!


Chrome vs. Safari WebGL Performance

This is more of an observation than a learning, but Safari’s WebGL performance simply blows away Chrome’s on 5K displays (and probably also on lower res displays). We found that we could easily increase the pixel ratio to almost native display resolution in Safari and still hit 60 FPS; sadly, this is not the case with Chrome as of today.


Improvements to Explore for the Future

The biggest pain point for us right now when developing a WebGL heavy site is all the intermediary steps between having an idea in Cinema 4D and actually previewing how that idea carries over into Three.js and the browser. A lot of good work has gone into solving some of these pain points, but we’d still like to find easier workflows that enable higher parity between how materials, lights and cameras look and behave in Cinema 4D vs. their Three.js counterparts. PBR materials and the work put into glTF go a long way toward smoothing these things out, but there’s still room for improvement. Our biggest wish for now would be direct export to glTF from Cinema 4D.

